
/tech/ - Technology


File: not even all of it.png (42.11 KB, 800x283)


 No.1008423>>1008427 >>1008471 >>1008537 >>1010439 >>1010494 >>1032943 >>1034096

I have a fuckhuge imageboard folder, and the other day I had an idea: set up a system that exposes my collection to the internet so other anons can download stuff and help organize it by submitting tickets to suggest changes (add, remove, move, rename).

So my question for you is this: are there any existing solutions I could set up that would accomplish this, or would I need to cobble something together?

 No.1008427>>1008430 >>1032945

>>1008423 (OP)

A Git repository and an issue tracker - for example, GitHub/GitLab.


 No.1008430>>1008432

>>1008427

GitHub will shut him down for wrongthink and racism.


 No.1008432>>1008435 >>1008440

>>1008430

That's why I said GitLab. Otherwise you can always self-host a GitLab instance.


 No.1008435>>1008437 >>1008440

>>1008432

GitLab is also a Silicon Valley company.


 No.1008437>>1008439 >>1008715

>>1008435

Self-hosted GitLab.


 No.1008439>>1008446

>>1008437

Maybe you can base it on some issue tracking software, but you'd want some kind of gallery frontend so people can just browse it without needing to clone the whole repo.

Also, git works really poorly with binary files; your .git folder will be huge once you start making changes.


 No.1008440

>>1008435

>>1008432

Right now, hosting a GitLab instance looks appealing to me (I'm investigating it as we speak). I would have to get a server and set it up, but that isn't beyond my abilities.


 No.1008446

>>1008439

Yeah, but I'm sure you'd sometimes like to upload something, remove something, etc., and eventually revert to older versions. If you know any versioning software suitable for such a case, please gib a link.


 No.1008471

>>1008423 (OP)

Try the Internet Archive. ISIS used it and got away with it; you should too.


 No.1008520>>1008523 >>1034096

Use IPFS, faggot.


 No.1008523>>1008837 >>1014417

>>1008520

botnet


 No.1008530

If you're going the self-hosted route, Fossil has a built-in web interface. But I'm not sure it's flexible enough to accommodate your collection of smugs.


 No.1008537>>1008540

>>1008423 (OP)

>50GB imageboard folder

>Fuckhuge


 No.1008540>>1008541 >>1034096

>>1008537

In over 12 years I've collected less than 1k files and I've deleted three quarters of them (though sometimes I wish I hadn't). Do you just save everything you see?


 No.1008541>>1008606

>>1008540

I use Hydrus to make sure I don't collect duplicates, I prune regularly, and I have a soft spot for webm threads, so I've got around three to four hundred GB spread out over 200,000 files.


 No.1008606>>1008807 >>1032656 >>1034096

>>1008541

>Hydrus

That looks incredible, but I'm skeptical that security is properly implemented. I wonder if there is a non-sharing fork available.


 No.1008715

>>1008437

>using 8GB of RAM for a git server


 No.1008722>>1008729 >>1034096

>not just dumping all your files in one folder with SHA-256 filenames and then using softlinks to category directories

>plebs using bloated software like hydrus and even using VCS instead of Doing It The UNIX Way(tm)
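
For reference, a minimal Python sketch of that scheme (the paths are hypothetical, and it assumes a POSIX filesystem for the softlinks):

import hashlib, os, shutil

STORE = "/data/store"        # flat folder of content-addressed files (hypothetical path)
CATEGORIES = "/data/tags"    # category dirs full of softlinks (hypothetical path)

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def ingest(path, category):
    """Move a file into the flat store under its SHA-256, then softlink it into a category dir."""
    ext = os.path.splitext(path)[1].lower()
    stored = os.path.join(STORE, sha256_of(path) + ext)
    if os.path.exists(stored):
        os.remove(path)                  # same content already stored: drop the duplicate
    else:
        shutil.move(path, stored)
    catdir = os.path.join(CATEGORIES, category)
    os.makedirs(catdir, exist_ok=True)
    link = os.path.join(catdir, os.path.basename(stored))
    if not os.path.lexists(link):
        os.symlink(stored, link)

ingest("smug.png", "reaction")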


 No.1008729

>>1008722

>using directories instead of tags

You have to have 10K+ memes collected in order to post in this thread, newfag.


 No.1008807

>>1008606

Then fork it, rip out all the stuff that communicates anywhere, or go harass the dev. Wasn't he doing a strawpoll recently about what to implement next?

>>>/hydrus/


 No.1008837


 No.1008856>>1008913 >>1011145 >>1032593

make a booru


 No.1008913>>1010421

You should go through your own trash. You will also be a lot happier when you trim off the fat, stop hoarding shit you don't/won't use, and only keep what you actually will. Personally, I've been organizing and adding to my document collection.

For now, you can find any information you want on the internet... but it won't be that way forever. Would be a shame if you wasted this time collecting reaction images for a dying medium.

>>1008856

Pretty much what OP wants.


 No.1010421>>1010440 >>1010451

>>1008913

Not that anon, but how would you decide what's worth keeping? The reason you/I archive in the first place is that you don't know whether you'll want to keep a file around.


 No.1010439

>>1008423 (OP)

Train an OpenCV image classifier using the Haskell bindings.


 No.1010440>>1010599 >>1034137

>>1010421

I archive because I know I want to use it, or have used it. I have no use for a million imageboard pics, so I don't save any. However, I have a 10GB folder of just documents (mostly pdfs). I suppose I value information more than data.


 No.1010451

>>1010421

>The reason you/I archive in the first place is that you don't know whether you'll want to keep a file around.

If you have hundreds of thousands of hoarded and untagged files, you'll never find what you're looking for should you ever need a specific file anyway.


 No.1010491

File: crystal_skull.jpg (63.47 KB, 500x706)

What about a DB of image hashes mapped to a name and/or folder structure? The DB is some sort of collective with voting, etc. I assume it would get abused in like 3 seconds, though. Then just run a program that renames images according to the DB. Also run it in reverse and upload the names of your image hashes. Then run an AI bot that merges the names into one.
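
The client-side renamer could be a few lines of Python. A sketch, assuming the collective DB is exported as a local names.json mapping {"<sha256>": "folder/name.png", ...} (the voting part is hand-waved):

import hashlib, json, os, sys

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

with open("names.json") as f:    # hypothetical export of the collective db
    names = json.load(f)

root = sys.argv[1]
for dirpath, _, files in os.walk(root):
    for name in files:
        src = os.path.join(dirpath, name)
        canonical = names.get(sha256_of(src))
        if canonical:
            dst = os.path.join(root, canonical)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            os.replace(src, dst)          # rename/move according to the db
        else:
            print("unknown hash:", src)   # these are what the reverse run would upload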


 No.1010494

>>1008423 (OP)

>fuckhuge

Anon I...


 No.1010499

Maybe some booru software could import them all.


 No.1010599>>1010739 >>1011139

>>1010440

I'm also an archivist (mainly pdfs). We should build a network in the future to make everything redundant.


 No.1010641

literally a booru


 No.1010739

>>1010599

Someday maybe. The big problems I've had with the existing archives I've been picking clean so far are:

>most of the books aren't vetted terribly well, and only exist in the archive because somebody got them from another archive. No guarantees that anybody used them, or that they're even good references

>half the good shit shilled on the gentoomen/osdev/etc. wikis isn't in them

>it's rare that anything referenced in bigger projects is in an existing archive (see the GCC source for examples)

>research papers and patents are rarely archived, you have to follow the trail of citations manually

The only solution I've come up with is doing it myself.


 No.1011139>>1033247

>>1010599

>We should build a network in the future, to make everything redundant.

We should. Now the question is... how? Maybe we shouldn't be relying on the internet as much as we do.

Just use the Internet for coordinating a sneakernet of multiple terabytes per delivery, because for most people the Internet is much too slow, and it is certainly not anonymous or private.

What kind of PDFs, by the way?




 No.1011145


 No.1011160>>1034138

I've been looking to start the same kind of thing and am currently considering Nextcloud. I haven't verified that it will work for this purpose, but it seems like it might.

Failing that, perhaps I can build a web front-end for a NAS that lets people just directly download files. Adding requests, etc., would be lovely. My main concerns are security and bandwidth consumption.


 No.1011554>>1014406


 No.1011555

If you could upload these to archive.org for now and work out the details of a distributed system later, that might help kickstart interest once we know what's available. Pack them into zips of 1 GB each, or by subfolder, and upload; they accept any size, anytime.
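
A rough Python sketch of the packing step (the cap is measured pre-compression, so archives land near 1 GiB rather than exactly on it; the upload itself is left to archive.org's web UI or their ia CLI):

import os, sys, zipfile

CAP = 1 << 30   # ~1 GiB of input per archive

root = sys.argv[1]
part, used, zf = 0, 0, None
for dirpath, _, files in os.walk(root):
    for name in sorted(files):
        path = os.path.join(dirpath, name)
        size = os.path.getsize(path)
        if zf is None or used + size > CAP:
            if zf:
                zf.close()
            part += 1
            # ZIP_STORED: images/webms are already compressed, don't recompress them
            zf = zipfile.ZipFile("pack_%03d.zip" % part, "w", zipfile.ZIP_STORED)
            used = 0
        zf.write(path, arcname=os.path.relpath(path, root))
        used += size
if zf:
    zf.close()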


 No.1014406>>1032656

>>1011554

Isn't there anything better than Hydrus?

A program that doesn't lock you into it and doesn't change your folder/file structure?


 No.1014417


 No.1021945

Nginx + Tor: just make a bare-bones webserver with a HSv3 onion.
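
Something like this, for example (paths and the local port are assumptions, adjust to taste):

# /etc/tor/torrc - v3 onion forwarding port 80 to a local nginx
HiddenServiceDir /var/lib/tor/files_onion/
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:8080

# nginx: plain directory listing, bound to loopback only
server {
    listen 127.0.0.1:8080;
    root /srv/files;
    autoindex on;
}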


 No.1032590

Torrent. I did one of around 20 gigs. I think people still seed it to this day.


 No.1032593

I use tags in filenames so they're easier to find, because some images fit in multiple directories, and 100 GB is already too much to handle duplicates in.

>>1008856

and this


 No.1032656

>>1008606

>but I'm skeptical that security is properly implemented.

It runs offline if needed; what are you worried about?

Also you can just not use a public tag repo.

>>1014406

You can mimic files/folders with tags, if for some reason you need those.


 No.1032897

Hold on, I've seen this exact thread before. OP, check the later pages and see if the thread's still up.


 No.1032943

>>1008423 (OP)

You're running Windows, anon. I'm not sure you're up to this.


 No.1032945

>>1008427

Most git* services would take him down in like 6 days. It's a better idea to set up a VPS with a GitLab/Gitea instance.


 No.1033247>>1033530

>>1011139

Different anon, but I have 20,000 copies of the Anarchist Cookbook.


 No.1033530

File: asdsad.png (49.12 KB, 449x494)

File: afssd.png (115.35 KB, 1028x1298)

>>1033247

I have only this one, and it's too big for this site.


 No.1034096>>1034097

File: Cisco 2620MX & 2950-24.jpg (71.56 KB, 1024x683)

>>1008423 (OP)

PROTIP 1: do not use Windows for file hoarding.

PROTIP 2: nobody will sort your shit for you. ESPECIALLY NOT WITH FUCKING TICKETS. NO, REALLY, YOU EXPECT PEOPLE TO FILE TICKETS ABOUT WHERE TO PUT AN IMAGE? But someone might produce a better-sorted pack that you can incorporate into yours later, so putting your collection online is God's work.

PROTIP 3: forget about git. It will only bring pain in your case.

>>1008520

This. I'm planning to publish my shit too, once I bother fixing my net.

>>1008606

'ip netns exec' is your friend. It can't share anything without network interfaces.

>>1008722

Duplication is great.

>softlinks

Hardlinks give you reference-counting & garbage collection for free.

Also, don't bother with an index. Scan the directory, group files with the same size, hash them, and relink duplicates to a single inode - https://0x0.st/zihz.py

It does re-hash some groups of same-sized-but-different files on every run, but that's not a problem for a weekly batch job.
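
For anyone who doesn't want to fetch the link, a minimal Python sketch of that same size-group → hash → relink pass (this is not the linked script, just the idea; it assumes everything sits on one filesystem so hardlinks work):

import hashlib, os, sys
from collections import defaultdict

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# scan: group every regular file under the root by size
by_size = defaultdict(list)
for dirpath, _, files in os.walk(sys.argv[1]):
    for name in files:
        path = os.path.join(dirpath, name)
        if not os.path.islink(path):
            by_size[os.path.getsize(path)].append(path)

# hash only the groups that can contain duplicates, then relink dupes to one inode
for paths in by_size.values():
    if len(paths) < 2:
        continue
    first_seen = {}
    for path in paths:
        keeper = first_seen.setdefault(sha256_of(path), path)
        if keeper != path and os.stat(keeper).st_ino != os.stat(path).st_ino:
            os.remove(path)           # drop the duplicate...
            os.link(keeper, path)     # ...and point its name at the keeper's inode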

>>1008540

>Do you just save everything you see?

No, I save even shit I don't see.

66G sorted

97G unsorted

11.5T scraped


 No.1034097

>>1034096

>https://0x0.st/zihz.py

Don't forget to replace md5 with sha.


 No.1034106>>1034110 >>1034111

At some point I realized that I don't actually revisit my fuckhuge collection (about as large as yours), so I made it a habit to reduce the crap on my computer every day. So far I've halved my data collection within a couple of months, and hopefully I'll pick anything important out of the pile before lighting everything else on fire really soon.

Hoarding is bloat.


 No.1034110

>>1034106

That's why I don't download these massive book collections. There are good books, but a TB or more of some random math shit is not something I care about, and HDDs aren't free. Archivists are a different thing, but they probably have an infinite supply of money and hardware, and they really like doing it, so they do it.


 No.1034111>>1034117

>>1034106

Censorship is ramping up. Hoarding is the only way we can protect ourselves from that shit. Of course, I mean hoarding useful, relevant shit. Throw away your seasons of Friends and most animu and mango.


 No.1034117>>1034126

>>1034111

They aren't going to censor the shit these stupid book torrents are full of. Most of the collections are probably full of outdated information anyway.


 No.1034126

File: b5bb7e9d1019c780aa65271c6f….jpg (1.59 MB, 1257x4361)

>>1034117

Yes they will; just give it time.


 No.1034137

>>1010440

>However I have a 10GB folder of just documents (mostly pdfs).

You could... set up a torrent. I'd seed.


 No.1034138>>1035296

>>1011160

Nextcloud is very bloated; I wouldn't recommend it.


 No.1035296

>>1034138

What would you recommend instead?

I use it every day and it has a tonne of fantastic features.


 No.1035300

Any of the common version control systems? Host it on GitLab... jk.


 No.1035324

File: matrixback2.jpg (1.22 MB, 1152x864)

Write a script to thumbnail each image. Use something like ImageMagick and a bash script; easy money. Save all thumbs to a second folder.

Get the file list from $(ls /pathtopicsfolder/).

Create a list of hyperlinks by appending HTML tags to each path.

Use the split command to break the list into chunks of 9 or 16.

Loop through the chunked files and emit each one as an HTML document.

ls the HTML documents, repeat a similar process, and create a site map.

Takes basically no time once you get the code knocked out. Less than 50 lines.
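
The same pipeline sketched in Python with Pillow instead of bash + ImageMagick (folder names are hypothetical; pics/, thumbs/ and pages/ are assumed siblings):

import os
from PIL import Image   # Pillow

PICS, THUMBS, OUT, PER_PAGE = "pics", "thumbs", "pages", 16
os.makedirs(THUMBS, exist_ok=True)
os.makedirs(OUT, exist_ok=True)

# thumbnail every image into a second folder
names = sorted(n for n in os.listdir(PICS)
               if n.lower().endswith((".png", ".jpg", ".jpeg", ".gif")))
for name in names:
    im = Image.open(os.path.join(PICS, name))
    im.thumbnail((200, 200))        # in place, keeps aspect ratio
    im.convert("RGB").save(os.path.join(THUMBS, name + ".jpg"))

# chunk into pages of 16 thumbs, each thumb linking to the full image
pages = [names[i:i + PER_PAGE] for i in range(0, len(names), PER_PAGE)]
for i, chunk in enumerate(pages):
    links = "\n".join('<a href="../%s/%s"><img src="../%s/%s.jpg"></a>'
                      % (PICS, n, THUMBS, n) for n in chunk)
    with open(os.path.join(OUT, "page%04d.html" % i), "w") as f:
        f.write("<!doctype html><body>%s</body>" % links)

# site map pointing at every page
with open(os.path.join(OUT, "index.html"), "w") as f:
    f.write(" ".join('<a href="page%04d.html">%d</a>' % (i, i)
                     for i in range(len(pages))))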


 No.1035338

File: Amanda's Pictures 054.JPG (3.28 MB, 4320x3240)

Ok, 20k files.

I'm autistic and can do it. I have a lot of free time.

I'm interested in adding everything to my own collection as well.

Tell me how you want it sorted and provide a link to everything.



