
/tech/ - Technology


File: broken-hard-drive.jpg (18.88 KB, 425x282)


No.939278 >>939353 >>939378 >>939715

OK /tech/nicians, it's time to flex those big brains of yours and help out a Windows pleb like me with some good advice.

So I'm backing up my disks using CloneZilla and I end up with many GBs' worth of backup images [which will become worthless a few months from now, but that's beside the point]. Anyway, for extra data safety I want to use PAR2 to add some recovery records, and here's where you come in!

I'm a moron and I have no clue what block size to set, what redundancy percentage to set, what file-sizing scheme to use, and so on. So you tell me: what defaults would YOU use if you were in my place? Consider, say, 50 GB of data. I mean, you do periodically back up your disks, don't you anon? And you do protect your backups with QuickPar, GPar2 or ekpar2, don't you anon?
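
For concreteness, here's roughly the par2cmdline invocation I'm staring at; the filenames are placeholders, and the -s and -r values are exactly the numbers I'm asking you to pick for me:

  # create recovery records: -s = block size in bytes, -r = redundancy percent
  par2 create -s640000 -r10 sda1.img.par2 sda1.img

  # later, check the image against the recovery records
  par2 verify sda1.img.par2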

 No.939351

This sounds like a question for the Clonezilla forum or Stack Exchange.

Assholes in /tech/ will likely only mock you for your choice of tools.

Good luck.


 No.939353


 No.939378

>>939278 (OP)

I don't see the point of erasure coding to protect against drive failures. A drive failure will blow away large sections of a file, or the whole thing, leaving it unrecoverable. Redundant drives or other redundant storage like S3 are the better option. Since you need to store backups offsite anyway (you do store your backups offsite, don't you anon?), just pick a service that handles redundancy.
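
If you go that route, the offsite copy is a one-liner with the AWS CLI (the bucket name here is made up):

  # push the image to S3; the service handles replication for you
  aws s3 cp sda1.img s3://my-backups/sda1.img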


 No.939381

You need parity because you're making a monolithic disk image every time you back up. If one of my files is damaged, the damaged file and the pristine copy are both saved. If your disk image is damaged, you potentially lose everything. See the sketch below.
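
A minimal sketch of what that protection looks like with par2cmdline (filename is a placeholder, 10% redundancy is just an example):

  # generate 10% recovery data alongside the monolithic image
  par2 create -r10 sda1.img.par2 sda1.img

  # after bit rot or a bad sector, attempt reconstruction
  par2 repair sda1.img.par2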


 No.939580

File: ZFS-.jpg (23.89 KB, 680x250)

Here, take this. You may need it later if it's ever TBs instead of GBs.
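
With ZFS the whole problem moves down into the storage layer; a sketch, with made-up pool and device names:

  # mirrored pool: redundancy at the storage layer instead of per-file parity
  zpool create backups mirror /dev/sdb /dev/sdc

  # periodic scrub detects silent corruption and self-heals from the mirror
  zpool scrub backups
  zpool status backups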


 No.939715

>>939278 (OP)

>OK /tech/nicians, it's time to flex those big brains of yours and help out a Windows pleb like me

I disagree.


 No.939731

About 15 years ago I was managing some backups at work and ended up using par2cmdline with the block size set to 900k, since my backup program created .tar.bz2 files and bzip2 uses a 900k block size. Apparently (according to the PAR2 docs) you want the block sizes to match, to maximize your chances of recovery. But maybe that was naive, and I should have instead considered the block size of the filesystem those backups were stored on (probably either 16k or 32k, since it was an FFS partition on OpenBSD).

I don't remember what redundancy level I set, but if you have plenty of space then it makes sense to bump it up.
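
So matching the bzip2 blocks looked something like this (the -r value is a made-up example, since I don't remember what we actually used):

  # -s900000 matches the 900k bzip2 block size; redundancy percent is illustrative
  par2 create -s900000 -r15 backup.tar.bz2.par2 backup.tar.bz2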



