
/tech/ - Technology


File: zol.png (168.52 KB, 760x563)


 No.1030875>>1030876 >>1030881 >>1030986 >>1031112

What does /tech/ think about this? How does this impact the current 'filesystem war' of ZFSonLinux, Btrfs, and Stratis?

 No.1030876

>>1030875 (OP)

Except that Red Hat's Stratis is much younger


 No.1030879>>1030890 >>1030900 >>1031155 >>1031159

Who gives a fuck? If you're hoarding data, you should move to object storage.

>single root

>no need to keep track of which file is in which "volume" (something the fucking computer should do)

>use whatever file system you want for the drives

>infinitely expandable

>buy drives of any size, make, or model

>just add drives to your computers and watch the system immediately rebalance itself

>ran out of ports? get a new server and hook it up to the network

>granular replication options

Learning and installing CEPH is the best thing I've ever done.


 No.1030881

>>1030875 (OP)

Good stuff. The less reason we have to keep FreeBSD around, the better.


 No.1030890>>1030900

>>1030879

>all write()s hang on a server after a single OSD in the network bugs out

>third-party software that doesn't do much buffering of its disk I/O will take a huge hit from the expensive syscalls (rough sketch of why after this list)

>troubleshoot performance and with one brief configuration toggle, manage to damage file systems across your entire network

>you'll never "just use ceph", the same mentality of "let's replace well-understood small-scale issues with poorly-understood large-scale issues" will infect your whole stack
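To make the syscall point concrete, here's a rough Python sketch (paths and sizes are made up) that writes the same 1 MiB payload once in big chunks and once as tiny unbuffered writes; on a networked filesystem like CephFS each of those write() calls costs far more than it does on a local disk:

import time

PAYLOAD = b"x" * (1024 * 1024)  # 1 MiB of dummy data

def timed_write(path, chunk_size):
    """Write PAYLOAD in chunk_size pieces, unbuffered, and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:  # buffering=0: every f.write() is a real write() syscall
        for i in range(0, len(PAYLOAD), chunk_size):
            f.write(PAYLOAD[i:i + chunk_size])
    return time.perf_counter() - start

few = timed_write("/tmp/demo_big_chunks", 128 * 1024)  # ~8 write() syscalls
many = timed_write("/tmp/demo_tiny_chunks", 64)        # ~16384 write() syscalls
print(f"128 KiB chunks: {few:.4f}s, 64 B chunks: {many:.4f}s")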


 No.1030900>>1030901

>>1030890

>>1030879

wtf even IS ceph?


 No.1030901>>1031054

>>1030900

https://ceph.com/

*inhales marijuana*

*gains 23 points of corruption*

it's the cloud, maaaan.

it's files but they're on clouds instead of disks, man.

you'll never have to deal with a single server urgently needing a disk swap in its single dedicated RAID, maaaan.

a few disks fail on the 20th of December? you can just wait until after Christmas to swap 'em out.

this well-known, easy to deal with, easy to understand problem is gone :)


 No.1030986>>1030991 >>1030998 >>1031159

>>1030875 (OP)

ZFS sucks. You can't have more than a few terabytes of storage because it requires enterprise hardware past that point, and you can't fit 100+ GB of RAM into consumer hardware, which often comes with 4 or fewer RAM slots.


 No.1030991

>>1030986

That's just not true. And you can turn off the features that need more RAM and still get functionality similar to what other filesystems offer.

t. 4GB zfs setup.


 No.1030998>>1031008

>>1030986

If you can afford many TBs of storage, the RAM is not a problem.

No, stacking hard disks with no RAID and no backups does not count.


 No.1031008>>1031011

>>1030998

HDDs are cheap. You can get over 10 TB in a single drive, and even the worst motherboards have at least 4 SATA ports. Most consumer hardware supports maybe 32 GB of RAM, and some meme mATX/ITX boards even less. The server parts would cost at least as much as the drives.


 No.1031011

>>1031008

The more platters a disk has, the higher the risk of failure: getting 10 TB disks is irresponsible without some highly redundant scheme such as RAID.

Also, double the cost to account for the space that backups and their own redundancy need.

Average consumer hardware has 4 RAM slots, and 16 GB sticks are not rare, so you can easily get to 64 GB total.


 No.1031054>>1031069


 No.1031057


>ZFS

>shilling for that bloated shit

It's almost as if you WANT to lose your files.


 No.1031067>>1031269

I wish people would stop being so scared of btrfs. The RAID 5 and 6 modes are still flagged as "experimental", but I've been running btrfs in RAID 5 mode for about 6 years now without issue. It's been through an HDD failure, several expansions/replacements, and one drive removal, and has run on kernels from 3.2 up to 4.14, and I've yet to lose any data: not one corrupted file in 5.5 TB of both frequently read/written and archival data over 6 years. The only case I'd worry about is frequent small writes (like databases), or living somewhere with very inconsistent power; there you should either disable CoW or get a UPS.
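For the "disable CoW" part, the usual per-directory route on btrfs is the No_COW attribute (chattr +C), which only affects files created after it's set. A quick sketch, with a hypothetical path:

import subprocess

def disable_cow(directory):
    """Set the No_COW attribute; files created in this directory afterwards skip CoW."""
    subprocess.run(["chattr", "+C", directory], check=True)

# e.g. disable_cow("/srv/postgres")  # hypothetical path; create the database files after this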


 No.1031069

>>1031054

if you don't know anything at all, you'll need to ask better questions than that.


 No.1031112

>>1030875 (OP) ext4 is enough for me so I don't need ZFS.


 No.1031125>>1031155

I don't really expect a file system to do much other than these:

>retaining my data after subsequent power losses, even if the storage drive is encrypted

>support for symbolic links so I can copy my custom Tor Browser instance across storage devices without having to compress it first

>support for super-long filenames; I should be able to rename a file to its SHA512 digest (see the sketch at the end of this post)

>easy to set up

Does ZFS satisfy all these use cases? I'm on ext4 right now though mostly because it came by default with my distribution of choice.
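For what it's worth, the long-filename case is easy to check: a hex SHA512 digest is 128 characters, and both ext4 and ZFS allow filenames up to 255 bytes. A minimal sketch of the rename (the path is just an example):

import hashlib
from pathlib import Path

def rename_to_sha512(path):
    """Rename a file to <sha512-hex-digest><original suffix> in the same directory."""
    p = Path(path)
    digest = hashlib.sha512(p.read_bytes()).hexdigest()  # 128 hex characters
    target = p.with_name(digest + p.suffix)
    p.rename(target)
    return target

# e.g. rename_to_sha512("/srv/archive/some_download.webm")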


 No.1031155>>1031156 >>1031178

>>1031125

>long term storage

ZFS

>short term storage

ext4

There you go.

ZFS is incredibly fast for reads if you give it enough RAM, because you're essentially running everything off a ramdisk.
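If you want to see that effect yourself, a rough sketch: read the same big file twice and compare throughput; the second pass is served from the ARC/page cache instead of the disks (the path is just an example):

import time

def read_throughput(path):
    """Read a file in 1 MiB chunks and return throughput in MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(1024 * 1024):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6

cold = read_throughput("/tank/media/big_file.mkv")  # hypothetical path; first read hits the disks
warm = read_throughput("/tank/media/big_file.mkv")  # same file again, now served from the ARC
print(f"cold: {cold:.0f} MB/s  warm: {warm:.0f} MB/s")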

>>1030879

>hoarding data

Is what everyone should be doing. It's cheap, and commies are purging all wrongthink from the internet.

>ceph for anything under 100T

how retarded


 No.1031156>>1031159 >>1031213 >>1031270

>>1031155

It's the other way around: ext4 for long-term storage so you don't need crazy expensive hardware, and ZFS for data you need to access often.


 No.1031159>>1031178

>>1030986

>because it requires enterprise hardware after that

Wrong. The overhead it requires is so small it's irrelevant on anything Core2Duo or newer.

>>1031156

>need some crazy expensive hardware

You don't. What makes you say that? Any computer in the last 13 years will run it fine.

>>1030879

I'm considering moving my 100TB ZFS server over to Ceph because I now have a second server with 30TB and I don't want to use an overlay system on top to combine both directory structures.


 No.1031178>>1031213

>>1031155

>>1031159

>100T

Yeah I'll get there eventually...

>how retarded

How else would you expand storage easily? All the file-system-level stuff has inane limitations like "you can only add another drive of the same size", "you can't just add more drives to a volume, you have to gradually replace existing drives with bigger ones", and "since you can't easily expand volumes you end up creating new ones and manually keeping track of which volume each file lives in". It's a pain in the ass to deal with.

What I want is:

>exactly one root for all my data

>system distributes data across all available drives

>configurable redundancy

>ability to just buy and install any random drives

If it's not CEPH then what is it?

>I'm considering moving my 100TB ZFS server over to Ceph

If you do, please share your experience. I'd like more discussion on this subject.


 No.1031204>>1031264

I've already switched to btrfs, and with the write hole getting fixed there's no reason not to do the same, except that ZFS has name recognition among people who've been in the industry for a long time.

Unless it gets relicensed I couldn't care less. The only place I might use ZFS is a dedicated NAS rig.


 No.1031213

>>1031178

>How else would you expand storage easily?

ZFS

>>1031156

ZoL runs on 512 MB of RAM and a P4-equivalent CPU.

Most NAS appliances from the past few years could actually run it, but for whatever reason they don't.


 No.1031219

>zfs requires expensive hardware

Why do lincucks always resort to lying?


 No.1031264

File: writehole.png (116.95 KB, 867x591)

>>1031204

>and with the write hole getting fixed

you sure that has happened?

>Some fixes went to 4.12, namely scrub and auto-repair fixes. Feature marked as mostly OK for now.

>Further fixes to raid56 related code are applied each release. The write hole is the last missing part, preliminary patches have been posted but needed to be reworked.

seems like they're definitely working on it, but it's not actually fully fixed yet.


 No.1031269

>>1031067

IIRC the problems with RAID 5/6 are also problems with an actual hardware RAID 5/6.


 No.1031270>>1031278 >>1031584 >>1031623

>>1031156

>crazy expensive hardware

I host a 20 TB mirror-parity ZFS Samba share for my LAN inside a VMware machine with 512 MB of memory, running on my desktop. It saturates the disks and the network under demand from multiple programs/users.

People need to stop parroting this garbage. I put this off for years because of remarks like this.

>you need AT LEAST 13TB of RAM per gigabyte backed by SSD ARC caches if you want reasonable performance (1MB/s)

Stop this.


 No.1031278

>>1031270

>SSD ARC caches

>insane RAM

And you know what's hilarious? L2ARC has its own memory demands and is rarely even used, yet they parrot the idea that you need it.

SLOG devices are only used for sync writes.

dedup is only good in a few edge cases, none of which you'll ever experience.

ZFS is the best around if the data is sitting on disk for more than a few days.


 No.1031313>>1031382

botnet


 No.1031382

>>1031313

In what way? ZFS has been Free Software for many years now.


 No.1031584

>>1031270

ZFS eats up RAM with deduplication enabled


 No.1031623>>1031687

File: ASRock QC5000ITX.jpg (78.9 KB, 404x500)

>>1031270

I personally don't go below 8 GB, since that's what it's fully tested against, and I've seen rare reports of people having issues below 4 GB (though those issues were due to their OS not playing nice and could likely be resolved with some tuning). It's nice to know it can go that low, though.

To reiterate for those who don't know: the only time ZFS NEEDS RAM is when you use deduplication.

Don't fucking use deduplication

It's off by default for a reason: if you aren't running an enterprise system you don't need it, and it will only waste your resources and cause problems. When running ZFS normally it will simply make use of free RAM for caching, but it doesn't NEED that RAM, and you can reduce how much it tries to cache with some tuning variables.
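On ZFS on Linux the main knob here is the zfs_arc_max module parameter (in bytes; 0 means use the default). A small sketch of reading and capping it at runtime; it needs root, the 4 GiB figure is just an example, and to persist across reboots you'd set it in /etc/modprobe.d/zfs.conf instead:

from pathlib import Path

ARC_MAX = Path("/sys/module/zfs/parameters/zfs_arc_max")

def get_arc_max():
    """Current ARC cap in bytes (0 = let ZFS pick its default)."""
    return int(ARC_MAX.read_text())

def set_arc_max(num_bytes):
    """Cap the ARC for the running system; not persistent across reboots."""
    ARC_MAX.write_text(str(num_bytes))

if __name__ == "__main__":
    print("current cap:", get_arc_max())
    # set_arc_max(4 * 1024**3)  # e.g. cap the ARC at 4 GiB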

Likewise (as others have said) SSD L2ARC caches do NOT do what people think they do and only benefit specific use cases.

Hopefully the idea that it needs ECC has also been thoroughly debunked by now. ECC is nice, but ZFS doesn't need it any more than any other filesystem does.

The only things that are expensive about ZFS are stupidity and hard disks. And as it happens, here are some $170 10 TB easystores: https://slickdeals.net/f/12838102-my-best-buy-members-10tb-wd-easystore-external-usb-3-0-hard-drive-32gb-usb-flash-drive-170-free-shipping?src=frontpage

My first ZFS/samba NAS was literally pic related for $50 and a bunch of spare parts/thrift store junk. The most expensive parts were the disks and a decent PSU. Which reminds me, you can cheap out on a lot, but don't cheap out on the PSU.

t. paranoid datahoarder with 24TB NAS + backups.


 No.1031633>>1031688

ZFS > XFS > ext4 > ReiserFS >= Reiser4 > btrfs > JFS > UFS


 No.1031687

>>1031623

>Hopeful the idea it needs ECC is also already thoroughly debunked. ECC is nice, but not needed any more than any other filesystem

t. garage tech enthusiast


 No.1031688

>>1031633

I don't see any FAT32 in there.



