/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.

New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Experienced user with a bit of cash who wants to help out? ---> Patreon

Current to-do list has: 2,017 items

Current big job: Catching up on Qt, MPV, tag work, and small jobs. New poll once things have calmed down.


3e5bc6  No.8536

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v301.tar.gz

I had a difficult week due to a bunch of IRL stress, but I got some good hydrus work done. The page of images downloader is now on the new parsing system, and I have prototyped a new way to 'gather' certain pages together.

simple downloader

The 'page of images downloader' is now the 'simple downloader'. It uses the new parsing system–which for very advanced users means that it uses parsing formulae–and so can find files from pages in much more flexible ways. At the moment, this means a dropdown with different parsers–you select the parser you want, paste some URLs in, and it should queue them up and fetch files all ok.

To get us started, I have written some basic parsers for it that can handle 4chan threads, 8chan threads (including 3-year-old threads that have some broken links on the new thread watcher), gfycat mp4s and webms, imgur still images and mp4s, and twitter images. I expect to write more parsers here myself, and I expect some other users will write some as well. It supports JSON as well as HTML parsing. I also want to write some more ui to make it easier to import and export new parsers.
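
For a feel of what one of these parsers reduces to, here is a rough Python sketch of the 4chan case using their public read-only JSON API (an illustration of the idea only, not hydrus's actual formula syntax):

```python
def fourchan_file_urls(board, thread_json):
    """Given a parsed thread from the 4chan JSON API, return the
    full-size file URLs it contains."""
    urls = []
    for post in thread_json.get('posts', []):
        # posts with a file attached carry 'tim' (the renamed upload
        # timestamp) and 'ext' (the file extension, e.g. '.webm')
        if 'tim' in post and 'ext' in post:
            urls.append('https://i.4cdn.org/{}/{}{}'.format(
                board, post['tim'], post['ext']))
    return urls
```

A real parser also has to fetch the thread JSON in the first place and deal with network errors, which is the part the new parsing system handles for you.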

Note: The new simple downloader cannot yet do the old 'get the destination of image links' parse rule the old downloader could. If this is important to you, please hold off updating for a week–I hope to have it in for v302.

Please give this a go and let me know how it works for you and if any of my new presets fail in any situations. I am really pleased with how simple yet powerful this can be, and I look forward to deploying more of this new parsing stuff as I move on to overhauling galleries.

gathering pages

Right-clicking on a page of pages now gives you a new 'gather' option. This is intended to 'gather' all the pages of a certain state across your whole session and then line them up inside that page of pages. To begin with, this only allows gathering of dead/404 thread watchers, but it seems to work well.

There is obviously more that I can do here, so again please give it a go and let me know what you think. Gathering 'finished' downloader pages sounds like a sensible next step.

sankaku complex bandwidth

Sankaku Complex contacted me this week to report that they have recently been running into bandwidth problems, particularly with scrapers and other downloaders like hydrus. They were respectful in reaching out to me and I am sympathetic to their problem. After some discussion, rather than removing hydrus support for Sankaku entirely, I am in this version adding a new restrictive default bandwidth rule for the sankakucomplex.com domain of 64MB/day.
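
For anyone curious what a rule like that amounts to mechanically, here is a minimal sketch of a per-domain daily cap (a hypothetical class, not hydrus's actual bandwidth manager, which tracks more than this):

```python
import time

class DailyBandwidthRule:
    """Toy per-domain daily byte cap, in the spirit of the new
    64MB/day sankakucomplex.com default rule."""

    def __init__(self, max_bytes_per_day):
        self.max_bytes = max_bytes_per_day
        self.used = 0
        self.day = None

    def can_start(self, now=None):
        """True if another download may begin right now."""
        now = time.time() if now is None else now
        day = int(now // 86400)
        if day != self.day:  # a new day has started: reset the counter
            self.day = day
            self.used = 0
        return self.used < self.max_bytes

    def report(self, num_bytes):
        """Record bytes actually downloaded."""
        self.used += num_bytes
```

The real client applies rules like this per domain and queues work until bandwidth frees up, rather than failing outright.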

If you are a heavy Sankaku user, please bear with this limit until we can figure out some better solutions. If there is an easy way to move a subscription to another source or slow down some larger queues you have piled up, I am sure they would appreciate it a lot. I am told they plan to update their API to allow more intelligent program access in future, and while they have no way to donate right now to help with bandwidth costs, they also hope to roll out a subscription service in the coming months.

On the hydrus end, I have decided to fold some kind of donation-link ui into the ongoing downloader overhaul, something like a "Here is how to support this source: (LINK)" to highlight donation pages or "Hey, please keep it to <XMB a day, thank you" wiki pages for those users who wish to help the sites (and are also able to!). I also hope to get some better 'veto' options working in the new gallery downloaders so we can avoid downloading large gifs and other garbage that fits tag censorship lists and so on in the first place. Also, as Known URLs are handled in more intelligent ways in the client, it will soon make sense to create a Public URL Repo, at which point we'll be able to cut out a huge number of duplicate downloads and spread the bandwidth burden about just by sharing hash-URL mappings with each other. Not to mention the eventual nirvana when we can just have clients peer-to-peering each other directly.
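
The hash-URL mapping idea is simple at its core: if a client already knows which file hash a URL maps to, and it already has that file, it can skip the download entirely. A toy sketch with made-up names:

```python
class UrlHashCache:
    """Sketch of the 'Public URL Repo' idea: share URL->file-hash
    mappings so a client can skip URLs whose files it already has.
    Hypothetical names, not hydrus's real schema."""

    def __init__(self):
        self.url_to_hash = {}
        self.files_on_disk = set()

    def learn(self, url, file_hash):
        """Record a shared URL->hash mapping."""
        self.url_to_hash[url] = file_hash

    def should_download(self, url):
        file_hash = self.url_to_hash.get(url)
        # download if the URL is unknown, or known but the file is missing
        return file_hash is None or file_hash not in self.files_on_disk
```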

What we are doing with hydrus is all new stuff, and I am often ignorant myself until I hear new perspectives on workflow or whatever, so please let me know what you think about this stuff. I am keen to find ways that we can continue accessing sites for files and tags and other metadata without it becoming a nuisance for others, and to figure out what practical and reasonable ongoing bandwidth rules actually are for different situations.

misc

I fixed tag parents! I apologise for the inconvenience–when I optimised their load speed last week, I fucked it up and ended up loading them in the wrong way so they wouldn't display right.

The new system:known_url should load much faster in almost all situations.

There is a new 'subscription report mode' under help->debug->report modes. If you have subs that inexplicably aren't running, please give this a go and send me a clip from all the stuff it will print to your log.

full list

- after discussions with Sankaku Complex about their recent bandwidth problems, added a new 64MB/day default bandwidth rule for sankakucomplex.com–please check the release post for more information

- the 'page of images downloader' is now called the 'simple downloader' that uses the new parsing system (particularly, a single formula to parse urls)

- the simple downloader supports multiple named parsers–currently defaulting to: html 4chan and 8chan threads, all images, gfycat mp4, gfycat webm, imgur image, imgur video, and twitter images (which fetches the :orig and also works on galleries!)

- there is some basic editing of these parsing formulae, but it isn't pretty or easy to import/export yet

- the new parsing test panel now has a 'link' button that lets you fetch test data straight from a URL

- added a 'gather to this page of pages->dead thread watchers' menu to the page of pages right-click menu–it searches for all 404/DEAD thread watchers in the current page structure and puts them in the clicked page of pages!

- cleaned up some page tab right-click menu layout and order

- fixed tag parents, which I previously broke while optimising their load time fugg

- the new favourites list now presents parents in 'write' tag contexts, like manage tags–see if you like it (maybe this is better if hidden?)

- sped up known_url searches for most situations

- fixed an unusual error when drag-and-dropping a focused collection thumbnail to a new page

- fixed a problem that was marking collected thumbnails' media as not eligible for the archive/delete filter

- wrote a 'subscription report mode' that will say some things about subscriptions and their internal test states as they try (and potentially fail) to run

- if a subscription query fails to find any files on its first sync, it will give a better text popup notification

- if a subscription query finds files in its initial sync but does not have bandwidth to download them, an FYI text popup notification will explain what happened and how to review the estimated wait time

- delete key now deletes from file import status lists

- default downloader tag import options will now inherit the fetch_tags_even_if_url_known_and_file_already_in_db value more reliably from 'parent' default options objects (like 'general boorus'->'specific booru')

- the db maintenance routine 'clear file orphans' will now move files to a chosen location as it finds them (previously, it waited until the end of the search to do the move). if the user chooses to delete, this will still be put off until the end of the search (so a mid-search cancel event in this case remains harmless)

- the migrate database panel should now launch ok even if a location does not exist (it will also notify you about this)

- brushed up some help (and updated a screenshot) about tag import options

- fixed a problem that stopped some old manage parsing scripts ui (to content links) from opening correctly

- improved some parsing test code so it can't hang the client on certain network problems

- misc ui code updates

- misc refactoring

next week

I am spinning a lot of plates right now, but I also have a bit of spare time next week. I hope to catch up on my ongoing misc todo and also polish some of the new stuff that has come out recently. I also want to put some time into the gallery overhaul–maybe prepping for the ability to drag and drop arbitrary URLs onto the client.

____________________________
Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

b80fc3  No.8538

File: e3bd52370036c09⋯.png (878.42 KB, 1920x1042, 960:521, for the low price of 91GB.png)

File: 3e5a7a877f1d08d⋯.png (22.95 KB, 300x300, 1:1, 8f19d1645816cf98b0442d9bdb….png)

File: e8a82a5d1995c30⋯.jpg (272.53 KB, 1280x960, 4:3, 1201674492773.jpg)

This isn't related to the latest version, but I don't want to bump a thread off for something temporary; hope you do not mind.

I posted this in the /v/ share thread and figured someone here may also want it. This is an old collection from a few years ago, I'm going to be deleting it but wanted to make it available before I do so. I will be hosting it for a few days but am going to get rid of it eventually.

/ipfs/zDMZof1m3JUHZvvjwf9WJu7Ey1PFZKKnmqgH9gCMv5CJauSa3BoL

https://ipfs.io/ipfs/zDMZof1m3JUHZvvjwf9WJu7Ey1PFZKKnmqgH9gCMv5CJauSa3BoL/

Thanks for your efforts as always, hydrus.


3288d4  No.8539

File: 2be45b49dd4309c⋯.jpg (42.17 KB, 500x282, 250:141, YES YES YES YES.jpg)

>twitter downloader

Does it do by account or hashtag, or is it only for single images? All of the above? I'll play with it tomorrow.


3e5bc6  No.8540

>>8539

This is just a simple 'paste page url, get file urls' for now. Broader two-layer searching will come with the gallery overhaul, which will start with a search query like username, hashtag, or 'samus_aran' and then walk through gallery pages to get file urls.

So for twitter, try pasting this into the new simple downloader with 'twitter image' selected:

https://twitter.com/fioquattro/status/981271897592737797

It should fetch the ':orig' version of the image(s) as well.
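
The planned two-layer search is basically a loop over gallery pages that collects post-page URLs and follows the 'next page' link until it runs out. As a sketch (the three callables stand in for per-site parsers; all names here are made up):

```python
def walk_gallery(query_url, fetch_gallery_page, parse_page_urls, parse_next_gallery):
    """Walk a site's gallery pages starting from a search/query URL,
    collecting post-page URLs along the way."""
    page_urls = []
    gallery_url = query_url
    while gallery_url is not None:
        page = fetch_gallery_page(gallery_url)
        page_urls.extend(parse_page_urls(page))     # layer 1: post pages
        gallery_url = parse_next_gallery(page)      # follow 'next', or None
    return page_urls
```

Each post-page URL would then go through a second parse (like the simple downloader does now) to get the actual file URLs.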


3e5bc6  No.8541

>>8540

Sorry, 'walk through gallery pages to get page urls'.


4027ef  No.8544

>>8533

ok, I don't think hydrus will ever do comics or video in a way that is good, it's just not something the program can do.

It works for images because unless someone is heavily autistic, they don't have a sorting method for images that works. so dumping the images into hydrus and letting the program do its thing, even if it scrubs every aspect of the image from its file name to location, works because you sure as fuck didn't sort it any better yourself.

however with comics, I have a download folder, I have a collection I keep/kept fairly up to date in sorting. I'm not about to dump the comics into hydrus because even if hydrus dies and goes away completely, I still have the comics in archives, and I still have them sorted.

Large videos, and I mean actual anime, hydrus could do, but sorting them by name is not the way to go; hashing them out and filename/sorting based on a public anime list would be the way to go. but let's get back to comics.

you really have 2 ways of doing this, both of which would at the very least be a spin-off version of hydrus.

1) you leave the files where they are, have the program watch a download directory for new entries, and you sort out comics and shit from there

2) you have comics moved into your system, however you are not going to hash them out; they have to be kept in some form of order and hierarchy.

personally i'm fucking stuck using acdsee ultimate 10 and a version of acdsee from nearly 15 years ago, one for speed, one for the ability to see unicode.

If you can make the comic version of hydrus see inside rars and zips, and read the files, I could switch to it in a heartbeat.

I also believe a lot of the basis of hydrus is already at the point that you could make this and be done with it for long periods of time, as comics/manga are far easier to deal with than images: it's not a daunting task to tag one 200 page book or a 36 volume series the same way it is to individually tag 200 pages or 3700-ish pages.

If you ever do this, I look forward to it, as my hentai comics/comic folders are an absolute nightmare to deal with these days.


4027ef  No.8545

so, went over to imgur with this

https://imgur.com/a/1JOBl

really it's the only massive job I can think of when it comes to imgur; however, it also seems like there are no wait times on downloading galleries anymore, so the usefulness at this point just comes from not having to import.

so I dump that url into it and it downloads 19 images, 3 blank images and stops.

so I go huh… that's a thing… ok, let's try imgur images

it downloads the first image alone in a higher resolution.

went to twitter with the link in the thread, got 15 images. moved the selector to twitter in a new tab, got 1 image, but it was a crap res so I can't tell if it did anything.

so… this prompted me to test it with 4chan and see what it did…

it kept the last option I set it to… so had I done what I normally do and not noticed, I would have lost a fuck ton of links because it was trying to apply the twitter filter.

I go to adult requests and find a page with a single image, and I move the setting to all images… I only get thumbnails. I move it to 4chan and I get the full image. now, how will this handle archives? I go back to the thread that imgur album comes from, and go to its archive,

https://desuarchive.org/trash/thread/14922493/#14922493

now, let's go all images, ok only get thumbnails

fuck it let's go 4chan

get nothing

I don't find this simple

I don't find this useful

I'm sure in the future this will turn into either a go-to solution or will actually be simple, but with the prior page of images downloader I could dump nearly any chan site into it and get the results I wanted; I could dump every 4chan archive into it and get the images from them. this new one is at the very least a pain in the ass, and at worst the reason I'm considering reverting.

is it possible to add the page of images downloader back in as a selectable option? This seems like it's going to require a fuck load of work to get even close to the simplicity that was page of images download.


c6e102  No.8547

>>8544

The best way I can think of for hydrus to work with large series is to create an intermediate file with a thumbnail chosen by the user and a hash calculated from all the pages, with the program placing the pages in some intermediate folder that is human readable too, and then importing that other file as if it contained the whole manga.

Then when you open the file in hydrus, it should see that it's a link to the other files and ask you how to open the folder with them.
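
The 'one hash for the whole series' part of this is straightforward: hash the per-page hashes together. A sketch (sorting first so the combined id doesn't depend on scan order; this is just the idea, not an existing hydrus feature):

```python
import hashlib

def series_hash(page_hashes):
    """Combine per-page hash digests into one stable series id.
    Sorting makes the result independent of the order the pages
    were read in."""
    combined = hashlib.sha256()
    for page_hash in sorted(page_hashes):
        combined.update(page_hash)
    return combined.hexdigest()
```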


019d6c  No.8548

File: 37972e88168ff94⋯.png (601.87 KB, 720x717, 240:239, a6f22007668c9901be9f624b53….png)

>>8536

>Note: The new simple downloader cannot yet do the old 'get the destination of image links' parse rule the old downloader could. If this is important to you, please hold off updating for a week–I hope to have it in for v302.

That'd work as a general image board downloader, right?

I've been wanting to scrape a couple camwhore threads from mewch for a while, but I'm too lazy to learn how html works to make a thread scraper.


4027ef  No.8551

>>8547

honestly, if hydrus kept files where they were and just kept track of what a file is by hash, this would work either for images or for full chapters (zip, rar, cbr, cbz). let's say you moved one file to another folder outside of the program: the program still knows those files, and because it's not a… let's go with mine, 1.5 million image big archive, it would rediscover/import/assign the old hashes to the same images and just know where they are in the new location.
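
The rediscovery step described here would just be 'walk the folder, hash everything, match against the hashes you already know'. A toy sketch of that idea:

```python
import hashlib
import os

def rediscover(root, known_hashes):
    """Walk a folder tree, hash every file, and map any hash we
    already know about to its new on-disk location."""
    found = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in known_hashes:
                found[digest] = path
    return found
```

For 1.5 million images this would obviously need incremental hashing and caching rather than a full re-read every time, but the principle is the same.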

if it made its own folder structure, it would need to be human readable, and create txt files for original names. god knows some groups are obnoxious and create several-hundred-character names, bury the pages 3 folders deep, and make it fucking impossible to read on windows.

but I honestly don't think we would need thumbnails, at least as far as manga goes, hentai maybe. i'm looking at acdsee, my go-to for this, and I have everything in detail view; while thumbnails would be useful for looking at porn, at least cover to cover to more easily see what's what, it's largely not needed. that said, the dup detector could weed out duplicates, as that is an issue for me where I will have several versions.


4027ef  No.8552

>>8548

there is an extension for chrome that will get every link on a webpage, I used this on an image ftp thing a while back when I first started downloading with hydrus, and imported the list to raw url download. it managed to hit everything.


2536c4  No.8553

>>8552

link? That sounds super useful.


71a2ef  No.8554

>>8544

There's already a small handful of comic programs that do just that, even some for doujins, like HappyPanda (can grab tags, can view any file type, etc.). But I do kind of agree, a hydrus version for comics and doujins would be god tier. Hydrus' UI looks like it would fit more as a comic cover viewer, where instead of single separate images, you view the covers or the first image of a folder or zip. Managing comics/doujins is a hell of a lot easier than single images, so it doesn't sound like much work would be needed.

But yeah, just use alternatives. I've tried acdsee like once, but it doesn't seem like it was meant for comics (or images, for that matter). There should be a bunch of better programs actually meant for comics out there by now.


4027ef  No.8555

>>8553

https://chrome.google.com/webstore/detail/link-klipper-extract-all/fahollcgofmpnehocdgofnhkkchiekoo?hl=en

it is useful; keep it disabled when you aren't using it. it may be a conflict with my extensions, but the thing would interfere and trigger on button presses it shouldn't.

>>8554

sure, there are a few programs that can kind of work for comics. you got the standard one whose name I forget every time; that one stopped being developed a long time ago and has little to no workable file browser.

if you are willing to extract everything, faststone and a few others work, but i'm not willing to extract and bloat the damn archive more than need be, plus when it comes to moving files, archives are far cleaner than folders.

as for acdsee, that one is probably the one to rule them all

you got the file browser that is effortless

you got the image viewer that, at least in versions 3, 6, 7, 8, 9, pro, pro 2 and pro 3, is able to handle the images; there are various versions where zoom lock was broken. also, middle click removes the image editing portion of the image viewer.

as for 'something has to be better'?

yeah, if you want to extract everything, some programs are better,

but if you don't, you are now using at least 2 programs in tandem and it's no longer a lean-back 'oh, I'm done with this chapter, enter, backspace till I can select the next chapter, down arrow, enter till I'm full screen' workflow.

god knows I'm willing to move to any program that isn't shit.

as an image viewer, I would say hydrus is lacking compared to acdsee; however, I also have a 55 inch 4k monitor, and I have yet to run into an image that, when scaled to 'fit screen', demands I zoom in to comfortably read.

as for happypanda/x, I was never really clear on what it did.


e825a8  No.8556

>>8536

>8chan threads (including 3-year-old threads that have some broken links on the new thread watcher)

THANK YOU!!

>gfycat mp4s and webms, imgur still images and mp4s

EVEN MORE THANK YOU!


bbe2a8  No.8558

>>8544

I've been using https://github.com/Difegue/LANraragi to manage a collection of archive files; it sounds like it almost fits your bill, except it has no support for chapters whatsoever.

It's barely shilled outside of /h/'s collection management thread, so I'm pretty sure nobody knows about it.


4027ef  No.8559

>>8558

honestly, the web interface part of that is what kills my interest.

I'm just responding to

>hydrus isn't great at handling large videos or comics/manga yet

from the dev.

I personally don't want to use hydrus for comics, manga, or large videos; however, I can easily see a separate program that largely uses hydrus as a base being something I would love to use for comics, manga, and potentially video.

my temp folder as it stands right now is around 7200 rars/zips big, and this is the dumping ground everything is placed in before I get around to sorting. if I wanted them to work in hydrus, I would need to extract, then have everything labeled for chapter/volume/page order, and then I would need to search for it to find it again because it's not stored in a human parseable way.

the whole thought is

>yet

as in, hdev is looking to make it work in this way. personally, I don't want hydrus for my comics even if it worked perfectly, just because it would be more steps to get to an end goal; the program would have to parse through 1.5 million images to find the anal yuri doujin I'm thinking of.

at that point I think it would be better to have a separate version of hydrus handle it, so I'm tossing out ideas for it.

personally, I'm not opposed to letting hydrus/a hydrus-like program handle everything with its own sorting methods, my method is honestly fucking shit, and so long as I can import the very basic tags to label everything with when I dump a folder of shit in, I'm good. but it needs to be a human parseable database, because as shit as my sorting method is, I am still able to find shit, unlike with images.


352590  No.8561

>>8536

>clients peer-to-peering each other directly

Personally, i don't think this is a good idea. There is a huge amount of potential security and/or anonymity problems. At least make this feature optional with full control over it.


3e5bc6  No.8564

File: 03ddc62bbc9a71a⋯.jpg (1.85 MB, 4001x2692, 4001:2692, 03ddc62bbc9a71a2a08e160fc9….jpg)

>>8538

Thanks m8, this looks interesting.

>>8544

Thanks. I agree that cbz/cbr support is the way to go here. Dealing with pages separately is too much of a hassle.

>>8545

And thanks for this as well. When I prototyped that imgur parser, I was only testing it on single pages, not albums. I have a couple of new test links now and will give albums another go this week. Depending on how complicated fetching the 'second page' of results is, it might have to wait for the gallery update. I am confident we can have some good support here ultimately. There's also apparently a decent json API.

I will try to get 'fetch the destination of image links' working again. I was knee-deep in the rewrite by the time I realised the current html parser could not quite do the step I needed, so I will add that and try to get it in for v302, at which point this new downloader will be able to do everything the old page of images downloader could.

I also do not like the dropdown menu for selecting the parser–can you think of a better way to say 'please parse this using "fetch all image links"' or whatever? The current choice is not saved through a session save/load, so saving it is one step I can do.

I hope to have some 'drop twitter link on the client, it fetches images and videos and tags' functionality in within the next few weeks, but it will need more complicated code than this simple downloader, which can only do simple, 'one-phrase' parsing.

>>8548

Mewch should be possible in this current system–I can add one for next week. If you feel brave, you can try adding a new parsing 'formula' to the simple downloader with this one simple rule:

get every <a> tag with attributes class=originalNameLink
and then get the href attribute of those tags

But you might want to hold off if you want to learn it–this is all still a bit ugly for now.
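
For reference, that two-step rule is equivalent to something like this in plain Python (the class name is taken from the rule above; the formula syntax inside the client looks different):

```python
from html.parser import HTMLParser

class OriginalLinkParser(HTMLParser):
    """Collect the href of every <a class="originalNameLink"> tag,
    mirroring the two-step formula described above."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # step 1: every <a> tag with class=originalNameLink
        # step 2: take its href attribute
        if tag == 'a' and attrs.get('class') == 'originalNameLink':
            if 'href' in attrs:
                self.hrefs.append(attrs['href'])
```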

>>8561

Yeah, I would do this more as a between-friends kind of system. You could DM your friend a link that would give him access to your 'hub' or something that runs off a computer you control, and then you could share groups of files with the hub or him specifically. I'll always make sharing a turn-on system that you have to manually click to do anything–the default will always be off.


4027ef  No.8565

>>8564

Ok, i'll just say this, I know fuck all about what is going on behind the scenes.

For what to do about the drop down, for anything the parser knows, it should just default to that parser, no user input required there.

then there is the mp4/webm thing on gfycat. I have no fucking clue what that site is, but shouldn't the difference between webm and mp4 be a user preference kind of thing, rather than 'I think I prefer this one today'?

for imgur, instead of images/video, just have a single 'get everything' option that users can change in a preference.

4chan and 8chan: this should be easy, just have the parser choose based on domain.

split the thing into two options for download.

the simple one should be just that, what the old parser was: paste the link, I got your fucking back.

then have a separate one for advanced; this would keep the dropdown and all the options at its disposal. i'm assuming the idea is for 3rd parties to be able to add parsers, so this could be where you load a parser to test before adding it to the pool.

basically, the simple parser should be brain-dead simple, which it currently isn't. keep that as the bar when designing the simple one and it should treat you well.

also, as long as these sites are able to get images

https://desuarchive.org/

https://archive.4plebs.org/

https://archive.nyafuu.org/

https://thebarchive.com/

https://archiveofsins.com/

i'll be more or less set.

with rar/zip/cbr/cbz, i'm completely ok if you take some time to get something working as a separate program. Honestly, you have more work to do with hydrus, but you don't have a whole lot more you would have to do to get a manga/doujin reader going; the hardest part would be a user parseable file storage, and that could be solved by sorting into manga or doujin, with manga sorted by series names and doujins by creator names.

hell, if there is a known file, you could have it pull thumbnails and things like that from the file itself, as it knows what to create and use; you could have unknown ones tested against cover images in a dup-filter-like thing, and merge the known and unknown.

Honestly, it doesn't seem like there would be a lot of work to get that working, sans the whole reading-archives part, and once that's solved there would likely be very little maintenance work.

a simple file browser for manga, possibly just a known 'you have it' series list, and an unknown file sorter.

now I have to start a new project: fix my fucking mouse's double click issue. oh, the joy.


4027ef  No.8566

>>8561

>>8564

the problem with even that is I have used kazaa, edonkey, and that one japanese-specific one… you don't want this, as you will get attention for all the wrong reasons.


354aaa  No.8567

Thank you based dev.

71a2ef  No.8568

File: 733621c55455359⋯.jpg (89.1 KB, 1181x544, 1181:544, 43634737.jpg)

>>8536

What does "CONFLICT: Will be petitioned/deleted on add." mean here? I just updated from 299 to 301, so I'm still trying to understand the new parent/sibling update. Also, the old tags still seem to stay in the tag tray on the left.

54ff61  No.8569

>>8536


2018/04/08 13:23:48: hydrus client started
2018/04/08 13:23:53: booting controller...
2018/04/08 13:23:55: booting db...
2018/04/08 13:23:56: updating db to v301
2018/04/08 13:23:56: updated db to v301
2018/04/08 13:23:57: preparing disk cache
2018/04/08 13:24:02: preparing db caches
2018/04/08 13:24:02: booting db...
2018/04/08 13:24:02: initialising managers
Gdk-Message: 13:24:09.565: client.pyw: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.

I've been getting this error for a few versions now. I'm using Manjaro Linux with the Hydrus package from the AUR. If I run Hydrus from source, it works until I switch to my old database; then I get the same error as above.

Maybe this will help: https://stackoverflow.com/questions/25790890/xio-fatal-io-error-11

4f317d  No.8573

>>8569

Seems like I'm getting the same error on Gentoo.

And also:


[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
client: ../../src/xcb_io.c:274: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.


[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
client: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
client: ../../src/xcb_io.c:179: dequeue_pending_request: Assertion `!xcb_xlib_unknown_req_in_deq' failed.

… on repeated startups. But I also get the exact same variant >>8569 reported; it varies when I repeatedly run hydrus.

If I run a fresh install of hydrus with new *.db files, everything is fine.
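
The xcb messages above point at Xlib being used from multiple threads before XInitThreads() has been called. As a hedged illustration (not hydrus's actual code), a Python client can make that call via ctypes before any GUI toolkit talks to the X server; XInitThreads is a real libX11 function, but the wrapper here is hypothetical:

```python
import ctypes
import ctypes.util

def init_x11_threads():
    """Try to call XInitThreads() before any GUI toolkit touches Xlib.

    Returns True if the call was made, False if libX11 is unavailable
    (e.g. on Windows/macOS or a headless box).
    """
    libname = ctypes.util.find_library('X11')
    if libname is None:
        return False
    try:
        libx11 = ctypes.CDLL(libname)
        libx11.XInitThreads()
        return True
    except (OSError, AttributeError):
        return False

if __name__ == '__main__':
    print('XInitThreads called:', init_x11_threads())
```

Calling it after the toolkit has already opened a display has no effect, which is why it would have to happen first thing in boot.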

4027ef  No.8575

got an idea

it seems every time I download a thread, or threads, I tend to end up with a fuckload of 'already in database' images.

Is there currently a way to hide anything that isn't new?

If there isn't, would it be possible to put a toggle next to [sort by options] [ascend/descend] for [all/old only/new only]?

I think this would greatly help in going through threads and downloads, letting you skip sifting through the shit you already have.

c6e102  No.8577

File: ab013e6cbf5568f⋯.png (416.64 KB, 891x928, 891:928, dllhost_2018-04-11_12-14-1….png)

>>8575

See pic related. I'm pretty sure you can change the default options in the settings menu so that all future pages also open this way.

4f317d  No.8578

>>8573

I've tried to at least identify which DB files are affected. SQLite seems to verify them all as okay, so I resorted to just trying the old *.db files one by one.

Which isn't going too well. Using the old client.db still seems to be fine for launching Hydrus, but then actually using Hydrus results in


no such table: external_caches.specific_current_mappings_cache_8_3

And other errors. It seems fairly obvious to me that client.db isn't self-contained, and the failure isn't too graceful; both search and download features just don't work now.

Not sure how I'd regenerate these caches; none of the maintain/verify/debug options I tried worked. And adding more db files back so far led to hydrus's UI not starting.
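
As a side note, the per-file verification mentioned above can be run with SQLite's own integrity check; this sketch assumes the usual hydrus client file names and is not an official tool:

```python
import os
import sqlite3

# The hydrus client splits its database across several files; these names
# are the usual ones, but adjust the paths for your install.
DB_FILES = ['client.db', 'client.caches.db', 'client.mappings.db', 'client.master.db']

def integrity_check(path):
    """Run SQLite's built-in integrity check on one database file.

    Returns the string 'ok' if the file's internal structure passes,
    otherwise a description of the corruption found.
    """
    conn = sqlite3.connect(path)
    try:
        (result,) = conn.execute('PRAGMA integrity_check;').fetchone()
        return result
    finally:
        conn.close()

if __name__ == '__main__':
    for name in DB_FILES:
        if os.path.exists(name):
            print(name, '->', integrity_check(name))
        else:
            print(name, '-> missing')
```

Note that PRAGMA integrity_check only validates each file's internal structure; it cannot tell you whether the separate files are consistent with each other, which matches the "files pass individually but the set is broken" situation described above.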

3e5bc6  No.8579

>>8568

Since siblings substitute one tag with another, it can only ever be an n->1 relationship. Multiple bad tags can go to one good tag, but one bad tag cannot go to multiple good tags. If a user intends to add a->b when a->c already exists, the a->c would have to be removed in the same step for the new a->b to be valid. Otherwise, the client wouldn't know what to replace any instance of 'a' with.

Formerly, I had some convoluted workflow here that discovered the conflict when adding a->b and interrupted the user with a secondary layer of yes/no dialogs that was a big nested pain, so now I just assume that you want to petition any conflicts and assign an automatic petition 'reason' that makes it easier for any repository janitor to see what is going on.
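
As a rough sketch of that rule (hypothetical data structure, not hydrus's actual code), storing siblings as a bad->good mapping makes the conflict and its automatic petition fall out naturally:

```python
def add_sibling(siblings, bad, good):
    """Add a sibling pair bad -> good to a dict mapping bad tags to good tags.

    Since each bad tag can map to only one good tag (n -> 1), an existing
    conflicting pair is returned as the pair to petition/delete on add.
    """
    petition = None
    existing = siblings.get(bad)
    if existing is not None and existing != good:
        # CONFLICT: the old pair will be petitioned/deleted on add.
        petition = (bad, existing)
    siblings[bad] = good
    return petition

siblings = {}
add_sibling(siblings, 'a', 'c')             # no conflict
conflict = add_sibling(siblings, 'a', 'b')  # petitions the old ('a', 'c') pair
```

Many bad tags pointing at the same good tag is fine under this scheme; only a second good tag for the same bad tag triggers the petition.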

Everything here remains pretty patchwork, since it works on combinations of pairs. I will be prototyping some new 'group'-based relationship stuff here and in the file duplicate system in future, which I hope will allow for better UI and workflow.

>>8569

Thank you for this report. I have rarely had something like this on my Linux (Ubuntu) test machine, but I cannot reproduce it reliably–I just get it randomly on boot sometimes. Does this crash happen every time you try to boot, and does it never happen on the other database? Does it always happen during boot, or does it sometimes happen 10-30s after boot, or can it happen hours after boot?

That link happens to talk about 32- vs 64-bit stuff: are you running a 32-bit version of Manjaro, or 64-bit? My current suspicion for the 'resource temp unavailable' is that my wx is doing something early in boot in an order that the X server isn't happy with, maybe something like asking for the size/position of a window before it has been shown on screen. Different Linux window managers have different levels of spergout over this sort of lazy coding, but it is often difficult to track down exactly what I have done, particularly on a full crash like this. Do you get any obvious graphical errors when you run the client, like the hover windows not appearing or positioning correctly? Or maybe unexpected focus jumps?

3e5bc6  No.8580

>>8573

Thank you–please check my questions in >>8579.

>>8578

This is stranger. I don't think any db problems would cause this X server crash, but maybe there is a deep error trying to report somehow on boot that isn't getting through correctly.

The database->regenerate->autocomplete cache option is supposed to regenerate those mappings cache tables. Are all your missing tables something of that ilk, like external_caches.some_cache_2_3? Have you had any hard drive faults recently?

Edit: Oh, I may have misunderstood what you did here: did you boot the client after separating the .db files? That makes for an invalid db, so please put the originals back together; your client will be completely unusable without them. It will probably have generated some 'stub' client.caches.db etc… files, which you should carefully discard.

>>8575

>>8577

Yeah, this is it. Only presenting new files is patrician-tier.

4027ef  No.8582

>>8577

>>8580

I'm not sure that works retroactively.

For things like thread watchers or smaller thread imports, or even smaller imports, I would rather see everything up front so I know the program has seen it. But after the thread 404s and I have seen it has gotten everything, being able to toggle from everything to only-new would be great, especially now that you can tag/rate from thumbnails.

Even for the fuck-off massive thread imports I have, where a single page of imports is 12,000 new and 17,000 already-in, I would like to be able to see the import in its entirety and then cull what I already have; at least in my case, largely due to putting off tasks like this for quite a while, it's nice to see what was acquired.

This also reminds me of another idea I had a while back. It's a bit easy to switch the sort order by accident, and this will completely ruin the import order. There is a 'by time imported' option now, but I believe that one is what 'age' used to be, and it sorts by when an image was imported; in the case of downloaded pages, that completely ruins the ability to look through what you got and see where one thread stopped and another started. This may not be the most useful thing for most people, but it's nice to have a built-in waypoint for dealing with threads/bundles of images.

4f317d  No.8594

Thanks for your help, hydrus_dev!

I solved the database issues by just reverting to a .db file backup from around 3 hydrus versions ago. That one upgraded to the current version and runs fine.

>>8579

>Does this crash happen every time you try to boot, and does it never happen on the other database?

Happened every time on older databases. Never on newly generated databases.

> Does it always happen during boot

Always during boot; I never got to the main window.

>>8580

> did you boot the client after separating the .db files? This makes for an invalid db

Yes, that was the approach I tried. Figured that this might work somehow since the files are separate, but it obviously never actually worked.


