
/tech/ - Technology


File: 0fea9f6236bbe8c⋯.png (981.13 KB, 1136x549, Winblows vs fakeRAID.png)


No.1039253 >>1039258 >>1039603 >>1039688 >>1039922

To the left is 4 drives set to "two-way mirror" in Windows Storage Spaces, to the left is motherboard RAID10 on the same drives. FakeRAID is somehow faster.

WHY??

To be clear: I'm trying to do a RAID10 via Windows Storage Spaces, which people tell me "it's like that zfs thing man".

The FakeRAID in this case is AMD-RAID on a Crosshair VII Hero / 2700X. Mind you, just installing the damn drivers was a pain: the installer tells you it "can't be installed if your OS is installed on an NVMe" and to install your OS on a different drive (as if that's an acceptable thing for a driver installer to ask). Drives are 4x 4TB IronWolf NAS drives. My intention is to RAID10 them for performance and a bit of fault-tolerance.

Is there a proper way to RAID10 in Windows? I tried Disk Management but it just lets me stripe or mirror, not stripe and mirror. Even the RAID5 is grayed out, so maybe they gimped it in Win10.

Don't get me wrong, Win10 is awful in general, but I need this machine to run Windows for more than one year, and Win7 dies next January.

 No.1039254>>1039922

File: 08a18ea28c3b373⋯.png (1.69 MB, 2001x1725, individual drives.png)

For reference, here are the individual drives' performance. I expected better sequential reads out of "two-way mirror", which by all measures seems to be RAID10 internally.


 No.1039258>>1039339 >>1039546

>>1039253 (OP)

>and Win7 dies next january

Which doesn't mean it will stop working, just that it won't get more updates. And if you care about security... well, you shouldn't be using windows at all.


 No.1039259>>1039339 >>1039558

Hardware raid is just an excuse for either an extremely outdated MINIX or linux install running software raid on a dedicated SOC on your board. Use software raid unless it's a FOSS dedicated SOC.


 No.1039339

>>1039258

>And if you care about security... well, you shouldn't be using windows at all.

For a work machine it's fine. There's security against common malware and security against glow-in-the-darkies. The former is more pragmatic and just requires not leaving the OS unpatched.

>>1039259

>Hardware raid is just an excuse for either a extremely outdated MINIX or linux install running software raid on a dedicated SOC on your board. Use software raid unless its a FOSS dedicated SOC.

There isn't a FOSS dedicated SOC. I wish there was, but it's all botnet. That said, I just want something that doesn't cost an arm and a leg. I wouldn't even mind running some ARM board with a FreeNAS server inside the machine, would be cool and all, but I don't think any of them have 4x SATA ports + GbE.

Is all hardware RAID shit? I hope not.


 No.1039546

>>1039258

>just that it won't get more updates.

if you've been installing any windows updates at all over the past 5 years you've been doing it wrong; the updates are nothing but spyware

fresh windows 7 iso -> disable windows update -> done


 No.1039555>>1039599

raid 10 can be configured in different ways

remember there's no standard for raid modes: any combination of stripe+mirror is a "raid 10"; that doesn't mean it's required to actually read from or write to multiple drives.

raid 1 in linux, for example, will not read from multiple drives for a single sequential read (in raid 10 far mode it will).
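
to make the near/far distinction concrete, here's a minimal python sketch of the chunk placement. simplified model, not the actual md code; the disk numbering is made up for illustration:

    # simplified model of linux md raid10 chunk placement on 4 disks, 2 copies.
    # not the real md code, just enough to show why one sequential read can
    # hit all 4 spindles in "far" layout but only 2 in "near" layout.
    NDISKS = 4

    def near2(chunk):
        # near=2: chunk and its copy sit side by side in the same stripe,
        # e.g. stripe 0 is [c0 c0 c1 c1], stripe 1 is [c2 c2 c3 c3]
        first = (chunk * 2) % NDISKS
        return (first, (first + 1) % NDISKS)

    def far2(chunk):
        # far=2: the first half of every disk is a plain raid0 stripe;
        # the second half repeats it shifted one disk over
        return (chunk % NDISKS, (chunk + 1) % NDISKS)

    for name, layout in (("near2", near2), ("far2", far2)):
        # disks touched when reading one copy each of chunks 0..3
        touched = sorted({layout(c)[0] for c in range(4)})
        print(f"{name}: sequential read of chunks 0-3 touches disks {touched}")

so a single big read in near mode only ever streams from half the spindles, while far mode streams from all of them like raid 0 (the tradeoff being that far-mode writes seek more, since the two copies live far apart on the platter).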

>The FakeRAID in this case is AMD-RAID on a CrossHair VII Hero / 2700X

in short who knows what the fuck it's doing

>WHY??

there is very little actual raid "standard". If it's striping and mirroring they can call it raid 10 no matter what is going on behind the scenes. If there's N copies of the data spread across N drives they can call it raid 1 no matter what it's doing behind the scenes.

compare the windows performance to linux raid performance on the same drives to judge how shitty or not shitty windows storage performance is.
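
if you want a number that's comparable across both OSes without trusting any vendor benchmark tool, even a throwaway script works. this one just streams a file and reports MB/s; the path is whatever big test file you put on the array, and it has to be much larger than RAM or you'll just benchmark the page cache:

    # crude sequential-read benchmark: stream one big file, report MB/s.
    # use a file much larger than RAM (or drop caches first), otherwise
    # you're measuring the page cache and not the array.
    import sys, time

    path = sys.argv[1]            # some multi-GB file on the array
    chunk = 4 * 1024 * 1024       # 4 MiB reads

    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    secs = time.perf_counter() - start
    print(f"{total / 1e6:.0f} MB in {secs:.1f} s = {total / 1e6 / secs:.0f} MB/s")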

>Mind you, just installing the damn drivers was a pain, because the installer tells you "it can't be installed if your OS is installed on an NVMe

the fact that it needs drivers is retarded. most motherboards that support a fake raid feature will do it at the motherboard level, even though it still isn't hardware raid. it must be using the cpu somehow, i don't know the internals, but it'll be transparent to windows or anything else, it'll just show up as a single drive instead of 4.

>FakeRAID is somehow faster.

>WHY??

they are both fake raid. the fact that you had to install a driver for the motherboard's fake raid (which is especially shitty) should make it clear. you're just using the motherboard's drivers to do it instead of windows 10's, and to the surprise of nobody, some random chinkshit software is faster.


 No.1039558>>1039570

>>1039259

there is a reason not to use software raid: even if the SOC is nothing but an ancient linux, the motherboard caps out on speed on its sata bus.

the PCI bus is faster.


 No.1039570

>>1039558

That's only true and good if the dedicated SOC actually uses the full bandwidth of the PCI bus, which it probably won't if it's cheap chinkshit running ancient linux. If it doesn't, then not only do you risk your data going into a black hole via bugs in the implementation, you're also wasting power on the SOC blackbox.


 No.1039599>>1039607

>>1039555

>any combination of stripe+mirror is a "raid 10", that doesn't mean it's required to actually read or write to multiple drives.

That's pretty confusing to my intuition of stripe and mirror. Any striped read/write should involve 2+ drives, right? That's the whole point, is it not? And mirroring duplicates that, I'd assume?

>the fact that it needs drivers is retarded. most motherboards that support a fake raid feature will to it at the motherboard level

Seemingly all motherboard RAID offloads the "heavy duty" to the CPU, which baffles me because an ARM Cortex should have been enough on the motherboard. Intel Rapid Storage also implements RAID via the driver; someone correct me if I'm wrong.

>which to the surprise of nobody, some random chinkshit software is faster.

But that's the surprising part!

How come software RAID running on a decent processor is slower than a chink driver?


 No.1039603>>1039621

>>1039253 (OP)

>To the left is 4 drives set to "two-way mirror" in Windows Storage Spaces, to the left is motherboard RAID10 on the same drives.

>To the left is 4 drives set to "two-way mirror" in Windows Storage Spaces, to the left is motherboard RAID10 on the same drives.

>left left

>when niggers post

Reminds me of that faggot that did 6 drives with RAID0 and was on here asking what to do when 1 died. Oh how we laughed. Good times.


 No.1039607>>1039621

>>1039599

>How come software RAID running on a decent processor is slower than a chink driver?

A true soft-RAID solution like Storage Spaces or whatever it's called has to account for variance in the way different disk controllers and other things work, whereas chinkdriver has to account for one controller on one motherboard. The Windows soft-RAID also does checksumming, if I recall, which will always be slower than non-checksummed RAID.


 No.1039621

File: 768d954ecf38523⋯.png (1.4 MB, 1002x864, WSS WDM RAID10-like.png)

>>1039603

>left left

Sorry for the nigger moment. The right one in the OP, which is faster, is fakeRAID. This is what I cannot understand.

>Reminds me of that faggot that did 6 drives with RAID0 and was on here asking what to do when 1 died.

That one had to be a troll though.

>>1039607

>A true soft-RAID solution like Storage Spaces or whatever it's called has to account for variance in the way different disk controllers and other things work whereas chinkdriver has to account for one controller on one motherboard. The Windows soft-RAID also does checksumming, if I recall, which will always be slower than non-checksummed RAID.

None of that makes sense. Any variance between disk controllers is irrelevant if they're all using the same protocols (e.g., SATA). Even if it were checksumming the contents of files, which I don't think it is (isn't that ReFS you're talking about? which is seemingly deprecated), that still wouldn't explain a slowdown, since it's a CPU operation and this barely consumes 1% of the CPU, if that (I checked while benchmarking the storage).

Pic related: two pairs of disks as two-way mirrors done in Storage Spaces, striped via Disk Management. The performance is almost as good as motherboard RAID10, except for unqueued random IO and random writes. This is not CPU-bound, so none of this makes sense.


 No.1039688>>1039690 >>1039691

>>1039253 (OP)

Software RAID generally surpassed hardware RAID some years ago.


 No.1039690>>1039691

>>1039688

(that is, unless you fork out for a high end RAID controller)


 No.1039691>>1039700 >>1039711 >>1039714 >>1040138

>>1039688

>Software RAID generally surpassed hardware RAID some years ago.

Sure, in Linux. Windows Storage Spaces is slow as shit and I have the benchmarks to prove it. You have the same fucking data on two separate drives, and it will read off just one; it's a glorified RAID 10 that performs worse than a single fucking drive. This doesn't have resilience or anything, it's just NTFS, which is about as complex as ext3 or something.

After some testing I'm going to just use the shitty motherboard RAID 10 because it's like twice as fast and just as resilient. In fact, did I mention the stupid Windows "Two-way mirror" across four drives will die if any two drives die? So it's even worse than the "maybe two drives" of resilience in actual RAID 10.
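
If you don't believe the resilience part, it's easy to brute-force. This sketch assumes "actual RAID10" means two fixed mirrored pairs, and models Storage Spaces' rotated slab placement pessimistically: any two dead drives can hold both copies of some slab, which matches the single-failure guarantee Microsoft documents for two-way mirror.

    # brute-force the two-drive-failure cases on 4 drives.
    # classic RAID10 = two fixed mirrored pairs, striped together:
    #   pair A = drives {0,1}, pair B = drives {2,3}
    # Storage Spaces two-way mirror rotates slabs across all drives,
    # so assume any two failures can hit both copies of some slab.
    from itertools import combinations

    pairs = [{0, 1}, {2, 3}]
    combos = list(combinations(range(4), 2))  # 6 possible double failures

    survived = sum(1 for dead in combos if set(dead) not in pairs)
    print(f"classic RAID10 survives {survived}/{len(combos)} double failures")
    print(f"two-way mirror space survives 0/{len(combos)}")

4/6 versus 0/6, which is exactly the "maybe two drives" I meant.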

>>1039690

>(that is, unless you fork out for a high end RAID controller)

Just tell me what that is. What is a good 4-port SATA RAID controller?


 No.1039700>>1039925

>>1039691

>Just tell me what that is. What is a good 4-port SATA RAID controller?

mdadm

ZFS

Don't buy raid hardware.


 No.1039711>>1039714

>>1039691

You can get fast as fuck hardware RAID by buying an LSI 92xx series card with a BBU (battery backup unit). The BBU will allow you to use card-level write caching (it is disabled otherwise). The best way to purchase one is through ebay; there are a lot of Chinese vendors selling these as OEM. You'll probably spend 350-400 USD.


 No.1039714>>1039909


 No.1039715>>1039909 >>1039925 >>1040138

File: 44a58893b0518ea⋯.jpg (238.39 KB, 1353x1001, 1324304542523.jpg)

Also

>Windows Storage Spaces, which people tell me "it's like that zfs thing man".

Stop listening to those people. Windows storage spaces is a garbage fire and you are asking for data loss, not because a drive dies, but because windows decides it doesn't like your array anymore and fuck you.

If you need RAID10, it's really about time to learn how to put together a NAS and share those files with samba. If you're willing to burn money on fucking hardware raid, then the following will end up saving you money.

-Don't fall for the tiny NAS meme. You want as many PCI-E slots as possible, and never get a chassis with fewer than 8 spaces for HDDs. Old computers, undervolted if possible, are great for this; otherwise bargain bin Ryzen hardware is the way to go.

-FreeNAS or OpenMediaVault (debian) for the OS

-10G SFP+ hardware from ebay is amazingly cheap. Mellanox ConnectX-3 single or dual port cards are preferable; ConnectX-2 is too old. If you want a switch, look into what MikroTik offers.

-If you need more SATA ports, use cheap used LSI 9211-8i HBAs, also from ebay. A tiny fan on the heatsink helps: take the heatsink off, drill two holes in the corners, use twist ties, done.

-Take the time to learn whatever file system you use, and learn what it does and does not protect against. Outside of hardware failure/windows storage spaces/btrfs, the reason for data loss is realistically just you being a dumbass.

-6TB, 8TB and 10TB Easystores and WD Essentials go on sale regularly, dropping down to about $110, $140 and $170 respectively.

-3-2-1 backups, faggot: 3 copies, on 2 different media, 1 of them offsite. Come here crying you lost everything and we'll laugh at you.

-Set up the system so if a drive dies or starts throwing errors, you get an email. A throwaway health-check sketch follows this list.
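
Something like this as a daily cron job is enough to start with. Hypothetical sketch: it assumes smartmontools is installed and an SMTP server is listening on localhost, and the drive names and addresses are placeholders for your box.

    # hypothetical daily cron job: mail yourself when smartctl stops
    # saying PASSED. assumes smartmontools and an SMTP server on localhost;
    # DRIVES and the addresses are placeholders.
    import subprocess, smtplib
    from email.message import EmailMessage

    DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]
    MAIL_FROM, MAIL_TO = "nas@example.com", "you@example.com"

    bad = []
    for dev in DRIVES:
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True)
        if "PASSED" not in out.stdout:
            bad.append(f"{dev}:\n{out.stdout}")

    if bad:
        msg = EmailMessage()
        msg["Subject"] = f"SMART trouble on {len(bad)} drive(s)"
        msg["From"], msg["To"] = MAIL_FROM, MAIL_TO
        msg.set_content("\n\n".join(bad))
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)

smartd can do the same thing natively with the -m directive in smartd.conf; the script is just the same idea by hand.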


 No.1039909

>>1039714

>(only fucking $120, wow)

Now that's an enticing price range, but it seems to only have SAS connectors.

>>1039715

>If you are needing Raid10, it's really about time to learn how to put together a NAS and share those files with samba

I've never dealt with SFP+, thanks for your post. (Neat pic too)

Man, I don't even know what to tell you. All the info I need is in your post. I'll save it and give some hard thought to a NAS. Still, just using a NAS for backup and living with an unreliable software RAID seems like a viable solution as well.

>3-2-1 backups

I wonder how anons deal with the 1 part. The usual solution is trusting your data to some big botnet like Amazon, and I find that so unsavory.


 No.1039922

>>1039254

>>1039253 (OP)

>crystaldiskinfo

>that eyecandy botnet disk assessment program

Lol

I bet you browse reddit.


 No.1039925

>>1039715

>windows decides it doesn't like your array anymore and fuck you.

Same shit with NTFS. If there is a mismatch in the NTFS partition data itself, it will also decide to stop functioning altogether. You could recover everything, though, if the tooling weren't exclusively winfag-only.

>>1039700

>Don't buy raid hardware.

Literally this.

https://boards.4channel.org/g/thread/69988697
>Be in California
>earthquake
>Destroys RAID1 array on NAS due to shake


 No.1040138>>1040248

>>1039691

>Windows Storage Spaces is slow as shit and I have the benchmarks to prove it. You have the same fucking data in two separate drives, and it will read off of just one

linux raid 1 will only read off one drive for a sequential read no matter how many drives you have mirrored. you will only see a performance boost if you have multiple simultaneous reads, which is why you always use raid 10 far 2 in every single use case.

>>1039715

>NAS

that's great for backup but the performance is shit unless you have a 10 gigabit network. max throughput on 1 gigabit is only 125 MB/s.

you're going to need a hardware raid card no matter what you do if you want to raid SSDs and get the performance you should be getting. 2 SSDs in raid 10 will be gimped by the sata bus.
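
the caps are just link rate divided by eight. quick sketch of the arithmetic; the ~190 MB/s hdd figure is a rough assumption for 7200rpm nas drives, not a measurement:

    # back-of-envelope link caps; real payload is a bit lower after
    # protocol overhead.
    for name, gbps in [("1GbE", 1.0),
                       ("10GbE SFP+", 10.0),
                       ("SATA III per port (after 8b/10b)", 6.0 * 8 / 10)]:
        print(f"{name}: {gbps / 8 * 1000:.0f} MB/s")

    # a rough 7200rpm NAS drive does ~190 MB/s sequential, so even 4 of
    # them in raid 0 blow past a single 1GbE link
    print(f"4x HDD raid 0, ~190 MB/s each: ~{4 * 190} MB/s")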


 No.1040248>>1040251

File: 491d9f758846285⋯.jpg (70.41 KB, 610x409, chinese ssd.jpg)

>>1040138

>linux raid 1 will only read off one drive for a sequential read no matter how many drives you have mirrored

I don't get it. Why doesn't RAID 10 read off of 4 disks at once? If you're only reading a single large file, it should do this. This makes no sense.


 No.1040251

>>1040248

(For the record, this isn't a deficiency of the SATA bus, since I can get 750 MB/s just fine on 4 hard drives RAID0'd.)



