/tech/ - Technology

File: ZFS.png (6.64 KB, 438x350)


 No.873546>>873547 >>873548 >>873615 >>873646 >>873690 >>873794 >>874043 >>874677

What the fuck is this thing? I keep hearing people talk about it like it's the best filesystem in the universe. Explain to a brainlet what the hype is about?

 No.873547>>873615 >>873704 >>885081

>>873546 (OP)

It's more than a file system: it's a volume manager and a file system combined, which makes it very convenient to build software RAIDs. The other great strength of ZFS is that it is a copy-on-write (CoW) file system, which makes it amenable to efficient snapshotting.

If you need to ask though, you probably don't need it.
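
For illustration, a minimal sketch of both halves in action; pool and device names are placeholders:

    # volume manager half: a two-disk software mirror, no mdadm/LVM needed
    zpool create tank mirror /dev/sda /dev/sdb
    # file system half: datasets are created and mounted in one step
    zfs create tank/data
    # CoW makes snapshots instant and initially zero-size
    zfs snapshot tank/data@before-upgrade
    zfs rollback tank/data@before-upgrade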


 No.873548>>873550

>>873546 (OP)

it's a file system that's got some neat features

don't use mysql with it though


 No.873550

>>873548

>don't use mysql with it though

That's just due to misconfiguration though. You can disable compression (and tune whatever the other problem setting was) on the dataset you store your MySQL database on.

sage because this could have been in the sticky.
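
For reference, a sketch of the kind of per-dataset tuning meant here; the exact values (16k to match the InnoDB page size, compression off) are common suggestions, not gospel:

    zfs create tank/mysql
    zfs set recordsize=16k tank/mysql      # align with InnoDB's 16 KB pages
    zfs set compression=off tank/mysql
    zfs set logbias=throughput tank/mysql  # favor throughput over latency for sync writes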


 No.873565>>873571

Literally the only reason people used FreeBSD over Linux for a while.


 No.873566>>873645

Object storage systems are better.


 No.873570>>873572

a meme


 No.873571>>873573

>>873565

ZoL has now overtaken FreeBSD in features, which is quite funny.


 No.873572


>>873570

t. (You)


 No.873573>>873583

>>873571

Hell, even XFS is being extended to have some of the goodies you get with ZFS: https://lwn.net/SubscriberLink/747633/ad7f94ed75c8779e/


 No.873583>>873662

>>873573

I formatted all of my storage drives to XFS in Linux.


 No.873587>>873588 >>873590

Dead wife.


 No.873588

>>873587

That's ReiserFS you dumb fuck.


 No.873590

>>873587

They haven't implemented that yet


 No.873591>>873594

An almost related question: is CoW enabled by default on btrfs, and how do I take advantage of it?


 No.873594>>873596 >>873606

>>873591

By not storing your files on an FS that has been known to lose data. You actually have to be retarded to use it over ZFS. If you need the features ZFS provides, use ZFS; it won't lose your data.


 No.873596

>>873594

I used butterfs for like 2 years without any troubles on openSUSE. Didn't utilize any RAID functionality or similar, though.


 No.873606>>873610

>>873594

I don't think fucking faceberg, of all companies, would use a failing FS, do you?

If you can't answer a question stay silent, don't spout retarded memes.


 No.873610>>873615 >>873622 >>873637 >>873646

>>873606

>retarded memes

Number of bugs causing data loss/corruption in ZFS: 0

Number of bugs causing data loss/corruption in BTRFS: more than ZFS [1][2][3]

[1]https://www.phoronix.com/scan.php?page=news_item&px=Btrfs-Data-Bug-Hole-Read-Comp

[2]https://btrfs.wiki.kernel.org/index.php/RAID56

[3]https://www.theregister.co.uk/2017/08/16/red_hat_banishes_btrfs_from_rhel/


 No.873615>>873621 >>873795 >>873956 >>874675

>>873546 (OP)

It's Oracle, so I won't touch it. Use Btrfs if you don't want to get sued.

>>873547

>you probably don't need it.

This

Only recommended for systems with thousands of terabytes, i.e. datacenters.

>>873610

True, but btrfs isn't funded by CIA niggers.


 No.873621

>>873615

>Developer(s) Facebook, Fujitsu, Fusion-IO, Intel, Linux Foundation, Netgear, Oracle Corporation, Red Hat, STRATO AG, and SUSE[1]

What did they mean by this?


 No.873622


>>873610

>likely patched bug

>raid

>not a bug


 No.873637

>>873610

There's some background on the decision by Red Hat to discontinue BTRFS. It's as simple as not having enough employees familiar with it.

https://news.ycombinator.com/item?id=14907771


 No.873645>>873647

Does Linux have decent ZFS drivers yet? If not, is anyone seriously working on them? Now that Larry is bringing the guillotine down on Sun, what is the future of the Sun tech like ZFS, and various Solaris derivatives like Illumos & OpenIndiana?

>>873566

whynotboth.spic

t. eternally butthurt resource fork nostalgic


 No.873646>>873692 >>877069

>>873546 (OP)

ZFS is paranoid: it assumes every hard drive will lie, do bad things, and need to be kept in check. The ZFS devs have actually dealt with hardware vendors whose hard drives did shit like telling the OS a write had hit the disk when it was only in the drive's cache; ZFS finds those kinds of bugs.

bitrot stopped being a meme after zfs detected it happening.

>>873610

What's even worse about some of those btrfs dataloss bugs is that they were discovered by a guy going through the FreeBSD handbook's ZFS section using the btrfs equivalents.


 No.873647>>873649

>>873645

Yes. To the point where it is now laughable that people claim FreeBSD is the ZFS platform.

http://zfsonlinux.org

http://open-zfs.org/wiki/Main_Page


 No.873649>>873653 >>873670 >>873762 >>873785

>>873647

>now laughable that people claim FreeBSD is the ZFS platform.

Not really, since ZFS on Linux will always involve out-of-tree modules because of "muh incompatible license" cancer. I build them in on Gentoo, but distros can never ship binary Linux kernels with built-in ZFS drivers. On top of that, ZFS on Linux is still behind on features; for example, they only just got the built-in encryption feature this month, if I'm not mistaken.


 No.873653>>873664 >>877070

>>873649

> they just got the built in encryption feature

That landed late last year; it was even in the latest macOS release of OpenZFS. FreeBSD, to my knowledge, has yet to port encryption.

Also, Ubuntu ships ZFS binaries: https://insights.ubuntu.com/2016/02/18/zfs-licensing-and-linux/
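
Assuming a build that includes the encryption feature (ZoL master at the time of writing), creating an encrypted dataset looks roughly like this; pool and dataset names are placeholders:

    zfs create -o encryption=aes-256-gcm \
               -o keyformat=passphrase \
               tank/private
    # after a reboot/import, the key must be loaded before mounting
    zfs load-key tank/private
    zfs mount tank/private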


 No.873662>>877071

>>873583

Didn't FreeBSD nix XFS support recently?


 No.873664

>>873653

>That landed late last year, it was even in the latest MacOS release of OpenZFS. FreeBSD to my knowledge has yet to port encryption.

I think I was incorrect; I got it from the following:

https://blog.mthode.org/posts/2018/Feb/native-zfs-encryption-for-your-rootfs/

>notes

>Make sure you install from git master, there was a disk format change for encrypted datasets that just went in a week or so ago.

He appears to be referencing one of these two commits:

https://github.com/zfsonlinux/zfs/commit/ae76f45cda0e0857f99e53959cf71c7a5d66bd8b

https://github.com/zfsonlinux/zfs/commit/7da8f8d81bf1fadc2d9dff10f0435fe601e919fa

This looks like it only applies to zroot disks, which still looks to suffer from the Linux/FreeBSD problem of not encrypting the kernel. Unrelated, but someone really should write a proper bootloader for Linux that can handle this like the OpenBSD one can.

>Also Ubuntu ships ZFS binaries

Interesting, though I assume it has to remain a module for that to apply. I still don't think SPL and the actual ZFS code will ever be shipped in the kernel, even as a normally disabled feature, which in turn will continue to hurt the adoption of ZFS on Linux.


 No.873670

>>873649

That literally changes nothing. The irony is that Illumos is the ZFS platform, not FreeBSD.


 No.873690>>873698 >>873701 >>873948 >>877074

>>873546 (OP)

ZFS is the best RAID solution out there, but it's got steep hardware requirements: a large amount of ECC RAM for L1 read caching, a large solid-state drive for L2 caching, and a high-throughput SSD for write caching. This is in addition to your hard drive array, which should have (at least!) two parity drives for redundancy. It's quite expensive to build a decent ZFS array, but it performs very well and has an impressive feature set.

On the software side, it's pretty easy to set up on Linux if you're comfortable on a command line. I wouldn't recommend FreeNAS (a customized ZFS-centric FreeBSD distro) because the GUI configs are not bidirectional with those on the command line: if you set everything up in the GUI and need to drop down to the terminal for fine tuning, it's not gonna work so well.
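
To make the pieces concrete, a hedged sketch of a pool with those cache/log devices attached; all device names are placeholders:

    # six-disk raidz2 (two parity drives), mirrored SLOG for sync writes,
    # and a single SSD partition as L2ARC read cache
    zpool create tank raidz2 sda sdb sdc sdd sde sdf \
        log mirror nvme0n1p1 nvme1n1p1 \
        cache nvme2n1p1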


 No.873692


>>873646

>zfs is paranoid, it thinks every hard drive will lie, do bad things and needs to be kept in check

It's reasonable: I've lost data to CRC failures more than once.

Are ZFS and BTRFS the only filesystems that offer stronger hash checks?


 No.873698>>873700

>>873690

I started messing around with ZFS for fun on a 3 TB drive, and was considering just keeping it and expanding it to a mirror vdev, then adding another mirror vdev later for raid10 (I can't imagine using any more space than that; I don't store too much stuff).

But my desktop only has 16 GB of non-ECC RAM, and I don't have separate drives for caching/logs.

So what might be a better setup for a home user? mdadm with XFS? How does XFS handle power loss? (No UPS.)
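
FWIW, the upgrade path described here maps onto two zpool commands; a sketch with placeholder device names:

    # turn the existing single-disk vdev into a mirror
    zpool attach tank /dev/sda /dev/sdb
    # later, stripe across a second mirror vdev (raid10-style)
    zpool add tank mirror /dev/sdc /dev/sdd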


 No.873700

>>873698

ZFS would definitely not be worth it; way too much overhead. I would grab a cheap, used LSI SAS/SATA card from ebay for hardware RAID. I've no knowledge on how XFS handles power loss, sorry.


 No.873701>>873703

>>873690

Those aren't hardware requirements, friendo.


 No.873703

>>873701

If you want to reap the actual benefits of using ZFS, they are. But strictly speaking, you are right.


 No.873704>>873734

>>873547

>If you need to ask though, you probably don't need it.

Not him, but you only learn by asking, you know.

You might not need it now, but you will after learning about it.


 No.873734

>>873704

That expression isn't to be taken literally, just as "If you have to ask, you can't afford it" only means the price is high.


 No.873762

>>873649

>Not really since zfs on linux will always involve out of tree modules

This sounds like the use case for dkms
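
Roughly what that looks like, assuming the distro packages register zfs with dkms (the version number is just an example):

    # see which module/kernel combinations dkms is tracking
    dkms status
    # rebuild the zfs module for the running kernel after an upgrade
    dkms install zfs/0.7.6 -k "$(uname -r)"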


 No.873785

>>873649

>built in encryption

Sounds like a fun way to lock yourself out of your own files.


 No.873794>>873822

>>873546 (OP)

It's a meme (just like zsh, void and other things).


 No.873795

>>873615

>thousands of terabytes

aka, you know, petabytes and such


 No.873822

>>873794

It's a GOOD meme.


 No.873948>>873957 >>873958

>>873690

Yeah I've heard about those hardware requirements. Is this an issue for btrfs as well?


 No.873956


>>873615

>Only recommend for systems thousands of terabytes aka datacenters.

No, ZFS makes sense on desktop & SOHO file storage too, even if just for the snapshotting functionality and quota management. Raw device-mapper snapshots (used by LVM) are lame, and using volumes for space management is clumsy and obtuse.


 No.873957

>>873948

Those hardware requirements are a meme. They come from the fact that, since ZFS was designed to be used on very large storage arrays, it uses something like 90% of your RAM as ARC by default. You can tune that down to one or two GB, or even disable it, and still reap the other benefits of ZFS. The same requirements would apply equally to btrfs, if anyone were dumb enough to actually trust large amounts of data to it.
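
For example, capping the ARC on ZoL is a one-line module option (the 2 GiB value is just an illustration):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=2147483648

    # or at runtime, without reloading the module
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max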


 No.873958

>>873948

Yes. That is an issue for ANY STORAGE CLAIMING TO BE RELIABLE.

ZFS on a no-ECC, low-memory, single-parity system is shit, because anything will be shit under those conditions.

Also, the ECC requirement is overrated; the scrub-of-death is a mythical situation. You have a larger chance of your power supply failing and burning your drives with a surge than of memory failing in the specific way that could cause it.


 No.874043>>874049 >>874081 >>874168

>>873546 (OP)

ZFS is the SystemD of file systems: it's a dumpster fire. Sure, ZFS has some nice features, but it's unstable.


 No.874049

>>874043

>ZFS is the SystemD of file systems.

What?

>it's a dumpster fire.

What are you talking about?

>sure zfs has some nice features,

Implying you even understand those.

>but it's unstable

You have absolutely no idea what you're talking about. You deserve to be mocked.


 No.874081

>>874043

Hi bot-kun!


 No.874168

>>874043

That would be btrfs you moron.


 No.874194>>874230


I've used ZFS for years now, starting on FreeBSD with (3) 2 TB drives, then built a big boy (for you) filer with (12) 4 TB drives. On the new filer I use Oracle Solaris as the OS, because they created and maintain the ZFS file system, so I went straight to the source. The downside is you have to know how to use Solaris, which is a bit different from Linux. You can do a search for ZFS versions and read about the different features that are available.

First off, I would recommend ZFS if you are concerned about your data AND you can afford the additional hardware. By additional hardware, I mean at least 2 hard drives of the same make and model. The reason is that ZFS offers the ability to do RAID in many different forms, but also, on top of that, the added benefit of checksumming your data when written, with the ability to rebuild your data if it's found to have been corrupted. You can read more about what causes corruption; it's somewhat rare and probably wouldn't affect you, but it does happen and is real.

With at least 2 drives, you can create a software mirror, where a copy of the data resides on each hard drive. If a file becomes corrupt, or if an entire hard drive fails, you don't lose your important data. If a file is corrupt, it can be rebuilt by running a scan against the drive and repaired from the second copy on the second hard drive. If a drive fails, again, you still have a full single copy on the working drive. This is just an example; there are other very cool drive configurations you can run: RAID 5 (single parity), RAID 6 (double parity), hot spares, or two RAID 5s or RAID 6s with data striped across them. You can even create RAIDs out of USB drives to use as a more reliable backup (you still have the data if one USB drive fails).

And you get the super cool snapshot ability. I configure mine to create a snapshot weekly, and if I lose a file, screw up a file, or just want to go back in time on a file, I can do that easily.

There are a lot of cool things you can do with ZFS. Like I mentioned, I like running ZFS on Solaris, but I don't use that as my main computer; it's a dedicated file server. FreeBSD would be my recommendation if you want to use it on your workstation, and after that, any Linux distro of your choice that makes it available. I know Debian and Arch both have support for it. Oracle maintains a great information library online for using the file system. If you just want to try it, you can install the file system on most Linux distros, then try it using a couple of USB drives, or even using several image files created with dd or fallocate.

One last thing: what's nice about ZFS is that it's not hard drive controller dependent. If you have a hardware RAID array and the drive controller fails, you had better be able to find another of the exact model, or you can't read the data on your hard drives. With ZFS, if your computer's mobo goes tits up, just unplug your hard drives, throw them in another system, rediscover them with ZFS, and your data is back online.
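
The scan and the controller-independence bits correspond to ordinary commands; a sketch with a placeholder pool name:

    # walk every block and repair anything whose checksum fails
    zpool scrub tank
    zpool status -v tank      # shows scrub progress and any repaired errors
    # moving the disks to another machine
    zpool export tank         # on the old box
    zpool import tank         # on the new one; ZFS rediscovers the disks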


 No.874230>>874268 >>874282

>>874194

Sounds like some interesting features, mostly related to doing software RAID. My question is: what is the advantage of doing this over using some other filesystem plus Linux's Logical Volume Management, which can handle software RAID as well? Keep in mind that LVM can do snapshots and whatnot too.

https://wiki.archlinux.org/index.php/LVM#RAID
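
For comparison, a minimal LVM sketch of the same two features (volume group and LV names are placeholders):

    # mirrored logical volume across two PVs
    lvcreate --type raid1 -m 1 -L 100G -n data vg0
    # classic device-mapper snapshot, with a fixed-size CoW area
    lvcreate -s -n data_snap -L 10G vg0/data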


 No.874268

>>874230

I don't know if I can fully explain the pros and cons of each, as I'm not as experienced with LVM. I would say, after reading the link, that ZFS would be more robust, although like I said, I haven't used LVM much.

One main benefit, at least to me, is the checksumming of your data when written, to validate its integrity. There are additional attributes you can assign or tweak on the volumes you create, such as encryption, compression, and checksumming as mentioned above. You can probably do the same thing with LVM (I know encryption is an option), but with ZFS it's all rolled in.

While LVM says it handles snapshots, how many can it handle? I have mine take a snapshot every week, and I've had them going back 8+ months before I go and clean them up. With some filesystems, having that many snapshots would essentially grind the system to a halt; VMware's built-in snapshots, for example.

Also, with RAID on ZFS it's fairly easy to replace a failed drive and rebuild the array. I assume you can do that with LVM, but I'd have to search it. Does LVM support hot spares? That's another nice insurance policy: if one of your drives fails, it can start rebuilding right away.

I hope that provides some additional thoughts on the subject; all just my two cents. If you're worried about the protection of your data, give ZFS a look, as I believe it offers a more robust solution in that area than LVM does.
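
The hot-spare part, at least, is a one-liner in ZFS; a sketch with placeholder device names:

    # add a spare; it can be pulled in when a member drive faults
    zpool add tank spare /dev/sde
    # or replace a failed drive by hand
    zpool replace tank /dev/sdb /dev/sde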


 No.874282

>>874230

With ZFS being a multilayered solution, you've got many opportunities for tighter integration. What happens in LVM if an error is detected in RAID? Does it just flag a failure, or does it try to automatically correct it?


 No.874284>>874312 >>874342 >>874436 >>874632


 No.874312>>874619

>>874284

>tl;dr: Hard drives already do this, the risks of loss are astronomically low

Dropped; everything after that isn't worth reading.

The meatgrinder of hardware firmware is known to be horrible, made by barely functional autists on cocaine.

"Just trust the hard drives" is one of the dumbest things I will hear this week.


 No.874342>>874436 >>874950

>>874284

>Hard drives already do it better

Later:

>This is the only scenario where the hard drive would lose data silently, therefore it’s also the only bit rot scenario that ZFS CRCs can help with.

>rgbrenner offered indirect anecdotal evidence

Later:

>I disagree. In my experience the vast majority of hardware works as expected.

The cognitive dissonance is strong with this one.

This faggot also wrote the equivalent of WinZip, which makes him think he knows what's best for a filesystem designed to run on mixed and untrustworthy media.


 No.874436


>>874284

>>874342

Watch out! Looks like we found ourselves another liberalist!


 No.874610

It is the best filesystem. It is also neither perfect nor fully automagic. You need to understand the options and choose them appropriately for your use case, or you could get bad performance. It could use some updating in the areas where it is starting to trail behind.

And no, it doesn't fucking require ECC any more than any other filesystem does. Just giving you a heads up: the myth that ZFS must use ECC is entirely because of a very autistic forum user who browbeats everyone he talks to into agreeing with him. Hell, ZFS even has an unsupported function that error-checks ordinary memory anyway to better catch problems [see the ZFS_DEBUG_MODIFY flag (zfs_flags=0x10)]. ZFS does not need expensive hardware; quite the opposite, it was built so you could survive using shitty hardware.

The only major issue with ZFS is the fanboys who very loudly broadcast their opinions while overestimating what ZFS is.
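
If you want to try that flag, it's a regular module parameter on ZoL; a sketch only, and note this is debug machinery, not something to leave on in production:

    # enable ZFS_DEBUG_MODIFY: checksum ARC buffers to catch stray memory flips
    echo 0x10 > /sys/module/zfs/parameters/zfs_flags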


 No.874619>>874630

File: sata power cable pins.jpg

>>874312

No fucking kidding. Hell, the SATA standard was even changed because firmware/hardware is so shit. Over the next few years, the 3rd SATA power pin will start being used to force the drive completely off, so it can be reset without having to pull it (important for servers). Basically, servers with a shitton of drives were RMA'ing drives as dead when all they really needed was to lose power and be reset. Some Western Digital and HGST drives already have this.

If you have a new drive that won't spin up (but might work somewhere else, like a USB enclosure or a different set of power cables), then you may need to tape the power pins. The first three pins, which are 3.3 V, can all be taped off, as none of them are normally used.


 No.874630>>874678

>>874619

>the 3rd SATA power pin

As in pin number three or what?

>If you have a new drive that won't spin up, then you may need to tape the power pins.

So what you're saying is that the 3.3V rail might stop a disk from booting up if connected?


 No.874632

>>874284

This guy is a perfect example of how a little knowledge can be a dangerous thing.


 No.874675

>>873615

>It's oracle so I won't touch it

The CDDL version of ZFS is maintained by the Illumos project. It's copyleft free software. Oracle can't go back and un-CDDL the old versions of OpenSolaris' software, so you're fine; otherwise the project would have been raped already.


 No.874677>>874682 >>874684


>>873546 (OP)

I heard somebody say that it can access files on HDDs without spinning the disks. I don't know how the fuck that would work though, and when I installed OpenIndiana I definitely heard my hard drive spinning, so that guy might have just been retarded.


 No.874678>>874872

>>874630

Yes, power pin 3. You can just tape all three for simplicity.

>So what you're saying is that the 3.3V rail might stop a disk from booting up if connected?

Yes, it's basically a hard OFF switch, the same as physically disconnecting the drive. Expect to see this popping up in newer drives that use the standard, especially drives meant for "enterprise". Some SATA power cables don't cause any problems, as they may not even bother supplying 3.3 V power, but ones that always supply 3.3 V will keep the drive from powering on.

HGST already has some drives that use the new standard, and it supplies a short Molex-to-SATA power adaptor along with them. Obviously that won't help if you are using a backplane, and you'll need to tape. https://www.hgst.com/sites/default/files/resources/HGST-Power-Disable-Pin-TB.pdf

It's not a big deal unless you aren't aware of it; then you'll get a nice new drive and spend the next hour(s) trying to figure out why it won't fucking turn on in some situations. HGST really should have used a jumper or something for this transition period, though.


 No.874682

>>874677

You're probably thinking of a read cache.


 No.874684

>>874677

Sounds like ZFS serving files from its cache. I don't know enough about its behavior to know whether it will serve things from the cache without spinning up disks, but if it already has the data in a cache, it doesn't technically need to.


 No.874872>>874917

>>874678

How does one take advantage of the new standard? Supply the 3.3V rail with power and put a switch in line?


 No.874917

>>874872

Exactly. There are no available consumer options as far as I am aware. I know it's being used in enterprise, but I'm not sure how exactly. So DIY is the only way to go right now.


 No.874950

>>874342

> In my experience the vast majority of hardware works as expected.

That bit is extra hilarious: he THINKS the hardware is working as expected because he has no other mechanism to tell him otherwise.

He trusts hardware isn't lying, but it does, all the time.

Vid related:

https://www.youtube.com/watch?v=fE2KDzZaxvE


 No.875139>>875155 >>876186

Tell me about XFS. I heard it's good with large files and ideal for HD cartoons.


 No.875155>>876193 >>876200

>>875139

>can't shrink offline

>good

Just stick with EXT4.


 No.876186>>876188

>>875139

ext4 loses a lot if the power goes out


 No.876188

>>876186

That's why you get a UPS.


 No.876193>>876195 >>876208

>>875155

XFS will soon get extent CoW, though. Makes it pretty attractive while waiting 100 years for btrfs to become stable.
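
Once that lands, usage should look like the btrfs equivalent; a sketch, noting that reflink support was still marked experimental around this time:

    # reflink has to be enabled when the filesystem is created
    mkfs.xfs -m reflink=1 /dev/sdb1
    # CoW copy: shares extents until one side is modified
    cp --reflink=always cartoons.mkv cartoons-copy.mkv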


 No.876195

>>876193

If you want CoW, just use zfs.


 No.876200>>876204 >>876207

>>875155

>Shrinking your FS

wtf would you do that for? You need to add a windows partition? go back to reddit


 No.876204

>>876200

>not wanting to do whatever you want to your computer

>>>/apple/


 No.876207

>>876200

The fact that you've never needed to do this for reasons not associated with Windows suggests you are new.


 No.876208

>>876193

>while waiting 100 years for btrfs to become stable

You'll have better luck waiting for HAMMER2 to be ported to Linux, even though its latest release is still only "experimental".


 No.877069

>>873646

And things like writing the correct data to the incorrect location, which is why ZFS always tries to store the checksum away from the data.


 No.877070>>877073

>>873653

FreeBSD doesn't need it because you have been able to put your zpool on GELI devices since day one.


 No.877071

>>873662

FreeBSD never had serious / working XFS or ext[234]fs support.


 No.877073>>877075

>>877070

Not cross platform though.


 No.877074>>877091

>>873690

Those are absolutely not requirements. You don't need to have an L2ARC or a separate zlog device.

It's recommended to have a 64-bit system because 32 bits of virtual address space isn't quite enough for the way its cache works, but even that's not a hard requirement.


 No.877075

>>877073

True. If you're constantly moving your disks between machines with different OSes then yeah it's going to be a problem for you.


 No.877091

>>877074

>It's recommended to have a 64-bit system because 32 bits of virtual address space isn't quite enough for the way its cache works, but even that's not a hard requirement.

It can run with 512 MB RAM on a 32-bit OS, at least ZoL can. I was an early adopter who found some very strange bugs caused by doing some very odd things.

You might have to flush the caches every few minutes, but it's possible.

A few early ZoL updates destroyed pools.

I remember having panic attacks when shit fucked up during pool updates.

I managed to get it working on a dumpster P3 with 1G RAM and garbage disks.

Would I do it again? Not unless I absolutely have to.


 No.877097

BSDfag here. Set vfs.zfs.arc_max appropriately in /boot/loader.conf and ZFS will be rock solid with whatever amount of RAM you have.
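
E.g., to pin the ARC to 2 GB on FreeBSD (the value is just an example):

    # /boot/loader.conf
    vfs.zfs.arc_max="2G"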


 No.885081

>>873547

How does snapshotting work anyways? I never understood the magic behind it.



