
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.

New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Current to-do list has: 1,147 items

Current big job: finishing login and domain managers


0905db No.7091

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v279/Hydrus.Network.279.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v279/Hydrus.Network.279.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v279/Hydrus.Network.279.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v279/Hydrus.Network.279.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v279/Hydrus.Network.279.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v279.tar.gz

I had a great week. The big downloader overhaul took a big step forward, and I somehow fit in a ton of other stuff as well.

hydrus network upgrade

The client now uses the new network engine to talk to hydrus services. This was the last holdout of the old network engine, and I am happy to have finally moved everything over.

Ideally, you will not notice any differences. The same basic protocol is happening, but now it will queue up in the new system and follow its bandwidth rules and all that. You will see 'hydrus service' network contexts appear in the bandwidth window (and wherever else) and be able to set service-specific rules there.
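
To give a rough idea of the shape of this–a hedged sketch of per-context bandwidth rules with invented names, not the actual client code:

    import time
    from collections import defaultdict

    class BandwidthTracker:
        def __init__(self):
            self._usage = defaultdict(list)  # context -> [(timestamp, num_bytes), ...]

        def record(self, context, num_bytes):
            self._usage[context].append((time.time(), num_bytes))

        def used_in_window(self, context, window_seconds):
            cutoff = time.time() - window_seconds
            return sum(b for (t, b) in self._usage[context] if t >= cutoff)

    class BandwidthRule:
        def __init__(self, window_seconds, max_bytes):
            self.window_seconds = window_seconds
            self.max_bytes = max_bytes

        def allows(self, tracker, context):
            return tracker.used_in_window(context, self.window_seconds) < self.max_bytes

A request tagged with both the 'global' context and a specific 'hydrus service' context would have to satisfy the rules for every context it belongs to before it may run.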

Also, the 'logging in' that hydrus clients need to do now works on the new login engine. I am very pleased with this, but it is a first version. It took a lot of work to figure out a good pipeline for error states and the less common operations (like access key testing), and I suspect there are still some bugs in other unusual situations. If you run into any trouble, a good first step is to go to the repository in services->review services and click 'refresh account'–and please let me know the details in any case. The error handling works but is not yet polished–I already have a job set up to improve specifics in the coming weeks.

Please note that if you run a server, you probably want to update it this week. Although <279 clients can talk to >=279 servers, the reverse is not true. New clients trying to talk to old servers will get safe-but-frustrating 'cannot login' errors whenever they try to do anything (this was due to a long-time bug in the hydrus session code that happened by chance not to affect old clients).

Now that everything runs on the new network engine, I can tackle a number of catch-up jobs that were on hold while I was bridging the two systems. In the near future I expect several engine-wide improvements–better error handling/logging, clearer presentation of network activity, fixes for the old broken test code, and maybe even proxy support.

other highlights

Import folders can now be forced to 'check now' from the file menu! If you like, you can set up 'manual' import folders by pausing them in the manage dialog and then only running them when needed in this new submenu. I expect to add a subscription-like progress popup for import folders to make these workflows neater.

Queries that only include a positive ratings predicate ('rated' or a specific rating, not 'not rated') and perhaps only a couple of other system search terms–usually something like ('system:rating favs=like', 'system:limit=60')–now have an optimisation to run much faster. This matters for clients with large collections, where these searches could take up to five seconds the first time. Now they should only take a handful of milliseconds!
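
For the curious, the gist of an optimisation like this is to drive the search from the small ratings table instead of enumerating every file first. A rough sqlite illustration–the schema here is invented for the example, not the real database:

    import sqlite3

    # purely illustrative schema
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE local_ratings (service_id INTEGER, hash_id INTEGER, rating REAL)')
    conn.executemany(
        'INSERT INTO local_ratings VALUES (?, ?, ?)',
        [(3, 101, 1.0), (3, 102, 1.0), (4, 103, 0.5)],
    )

    # starting from the ratings table means the candidate set only ever
    # contains rated files, so predicates like system:limit apply to a
    # handful of rows instead of the whole collection
    rows = conn.execute(
        'SELECT hash_id FROM local_ratings WHERE service_id = ? LIMIT ?', (3, 60)
    ).fetchall()
    print(rows)  # [(101,), (102,)]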

I have updated a number of the default boorus to now have https urls. I believe all the sites that now support valid https _should_ have it. Normally these changes would only affect new users, but I have also added a button to services->manage boorus to restore any of the default boorus! If you haven't messed with the boorus at all, you may just like to restore all your boorus through this button–you might get some new entries (danbooru or sankaku, for instance) that were broken when you originally installed.

Pages that you give a custom name should no longer have it overwritten by the thread watcher. There is more to do here (I'd like custom names that can still show [DEAD], and to deal with tab icons), but let me know if you discover any combinations where the overwrite still happens.

Pages should now draw their thumbnails much more efficiently, using far less CPU to animate!
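
The general technique here–and this sketch is an assumption about the approach, not the client's actual drawing code–is to composite the fade frames once up front and then blit cached copies on each paint, rather than re-blending every time:

    from PIL import Image  # PIL stands in for the client's real drawing toolkit

    NUM_FADE_STEPS = 9

    def precompute_fade_frames(thumbnail, background):
        # each later paint is then a cheap copy of a ready-made frame
        return [
            Image.blend(background, thumbnail, step / NUM_FADE_STEPS)
            for step in range(1, NUM_FADE_STEPS + 1)
        ]

    background = Image.new('RGB', (150, 150), 'white')
    thumbnail = Image.new('RGB', (150, 150), 'blue')
    frames = precompute_fade_frames(thumbnail, background)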

If you are in advanced mode, the number of currently open pages will be written in the 'pages' menu!

full list

- moved the hydrus service object over to the new networking engine

- hydrus services now appear in the bandwidth review panel

- hydrus requests now obey larger network bandwidth rules (this mostly means 'global')

- service bandwidth usage and rules are no longer managed from manage services–it is now under review bandwidth usage, like all other network contexts

- updated some network engine stuff for misc hydrus states like 'server busy'

- fixed a bug in the server code where the session key cookie had an invalid expiration timestamp–servers should update this week to make sure new clients can log in properly

- hydrus network requests now force-set the User-Agent to 'hydrus client/(network_version)', as the network version is used in the protocol to determine compatibility (see the sketch after this list)

- fleshed out the hydrus specific network job, giving it the various bandwidth tracking and version checking responsibilities the service object used to have

- moved session cookie decay (only matters for Hentai Foundry atm) to the new session manager (was previously hacked into the login manager)

- moved some hydrus response parsing stuff around, added content type awareness to the new network job

- updated several unusual hydrus 'static/test' requests used to test credentials and fetch access key and so on to the new system

- added a special 'test service' service to better accommodate these requests

- wrote a static login script for hydrus services

- polished up login management system overall

- import folders now support a 'check now' state, like subscriptions, that will cause them to check immediately

- import folders can be 'check now'ed from the file menu under a new submenu! if you would like to have a 'manual' import folder, try pausing it and just running it from this menu!

- added an optimisation to the file search algorithm to search ratings queries super fast when they lack tag or file system preds to otherwise speed them up

- updated the booru presets that now support https to be https

- added a 'restore defaults' button to the manage boorus dialog–you can restore specifics or all of them

- optimised how fading thumbnails are blitted to screen, which may provide a huge performance boost for high-res/small-thumb clients

- pages that have been renamed by the user will no longer be rename-overwritten by any auto-renaming system (currently just thread watchers, I think, but this will expand in future). unfortunately, this is not retroactive–only pages renamed from now on will be aware that they were user-renamed

- in advanced mode, the pages menu now states how many pages are currently open

- the 'page of images' downloader will now say '(x already in queue)' when it reports how many urls were found, if any were already in the queue. (this should clear up some confusion where it would previously say '0 new urls' even when it found some stuff)

- the gallery downloader will do essentially the same, but on a per-page basis. the text is a little crushed, so I may revisit this

- fixed an issue with the manage services dialog not being able to rename dupe-named services on edit subdialog ok

- the manage services dialog now uses the new listctrl! the listed services are no longer a horrific unsorted mess!

- fixed the file import status button not showing on raw url downloader or import folder edit dialog

- improved how some directory tree merging code deals with read_only files in the source

- maybe fixed some unusual selection behaviour with the booru selection popup dialog. it now has a real ok button, rather than a mickey-mouse hidden one

- completely deleted the old networking engine!

- cleaned out some unused imports from the networking code and related entries from running_from_source.html

- changed up and fixed some odd bugs in how repositories test some error/isfunctional stuff vs regular paused status

- hydrus network contexts now have a prettier name

- ip address-based network contexts will no longer spam the bandwidth tracker with their useless subdomains

- network jobs that are unable to attempt validation or login will now error immediately rather than waiting indefinitely

- fixed up a bunch of test code that was broken due to the mismatch of network engines

- broke some other test code due to the network engine transfer!

- the client is more resilient about broken 'twisted' installs, and _should_ be able to boot without it. this may or may not apply to the built release–more work can be done here

- some network job refactoring

- clientside service code cleanup

- misc fix that I can't remember

- misc cleanup

- misc refactoring
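
To picture the User-Agent item above: every request identifies the client's protocol generation in its headers. A hedged sketch–the version number and URL are placeholders, only the header format comes from the list:

    import requests

    NETWORK_VERSION = 18  # hypothetical value; the real constant lives in the hydrus source

    session = requests.Session()
    session.headers['User-Agent'] = 'hydrus client/{}'.format(NETWORK_VERSION)

    # a hydrus service can parse the version out of this header and refuse
    # requests from an incompatible generation
    response = session.get('https://my.hydrus.server:45871/some_endpoint')  # placeholder URL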

next week

I couldn't get to EXIF image rotation this week, so that's top of my list. If I can get that done in reasonable time, I'll start on the user-editable login engine stuff, which will probably involve planning and Halloween-appropriate skeleton-writing.

8ae71b No.7093

File: a8f3d50a218ece7⋯.gif (165.6 KB, 594x224, 297:112, 2017-10-25_23-15-00.gif)

One more thing with adding links to a page of images.

I don't know if there is an option for this, or if it's just me–possibly the program just goes too fast, I don't know.

I observed this a while ago, but now that the program mentions things already in the queue, I figured it's now worth reporting.

A long list of links dumped in at once works perfectly fine; however, dump one link in and nothing comes up at all–it just transitions to a blank area.

The attached image highlights it. The computer hung a bit on the first click or two, but it shows one link that is already in the queue, then 3 links already in the queue, then one link that wasn't imported yet, just to highlight the behaviour in each case.

Is there a way, on single imports, to show the number imported and the number already in the queue?

Also–something that can't be shown when there are low thread counts–larger batches show this dialogue for longer than smaller ones, and the time between imports also seems decreased, so it's harder to tell what happened unless you are paying full attention to the import as it's happening.

Other than this, a very helpful all-round update.

Thanks


93512d No.7094

I made an alternate Linux build for people having trouble with the official build. A few things are still weird, but overall it works better for me than the official. Run from source works slightly better, but pip is a headache. Built on Linux Mint 18, so it should also work on Ubuntu 16.04.

If you want open externally to work, you will have to manually set the open program for each mimetype in options->files and trash. For some reason, when Hydrus invokes xdg-open it opens a program in WINE.
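
For reference, when an app wants the desktop's default opener on Linux, it usually just shells out like this (a plain sketch, not hydrus's actual code). If your xdg-open mimetype association resolves to a WINE program, this is where it gets picked up:

    import subprocess

    # hand the file to the desktop environment's configured opener;
    # xdg-open consults the mimetype associations to pick the program
    subprocess.Popen(['xdg-open', '/path/to/file.jpg'])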

https://www.mediafire.com/file/aa05xcq7674d747/hydrus-279.tar.gz

I'll post one of these every week until the official Linux build works on my machine. If anything is broken, tell hydrus_dev (and thank him while you're at it). He was kind enough to give me his build command, but I don't know Python. If something is broken, I'm probably just as clueless as you are.


d4d282 No.7095

Call me a dummy, but how do I log into sankaku through hydrus?


98eac1 No.7097

Help, tumblr needs a login now because of "safe mode", and all my Hydrus tumblr subscriptions are borked. wat do?


d4d282 No.7099

File: 8672a368305923f⋯.png (173.94 KB, 1920x1080, 16:9, Untitled.png)

Getting this error whenever it hits the page limit


8ae71b No.7104

A few versions ago, I suggested an exhentai downloader, now that the engine should be able to handle it.

Then a few versions later, I asked if an 'all the usual suspects' downloader would be doable–basically a downloader that would look for a username/tag across every gallery hydrus can handle.

So, here I am, looking at exhentai, and I find an artist I like.

I can't download from the site without spending a significant amount of points.

So they link to his accounts, and I hit them up one by one… If you ever make a 'usual suspects' search a thing, I highly suggest a gallery list with checkboxes–that way you only search where you know the artist is, with site-specific usernames, as the names could differ across sites. This person was the same on the 3 he uses.

So here I am with 3 different windows, each one having about ~100 images the others don't have, and kind of wanting a way to combine them into one window that only shows each image once…

If there isn't a way to do this in the program yet, would it be possible to add a function to 'page of pages' to collapse everything into one window–effectively combining everything into a more manageable set, without the same image sitting in 3 different tabs?


0905db No.7107

File: 2db2765f157e270⋯.jpg (506.12 KB, 1416x1666, 708:833, 2db2765f157e270b0f68e3a3c6….jpg)

One problem with moving hydrus to the new engine was that bandwidth is now tracked for all requests–previously, only downloads counted. This means your tag uploads may suddenly stall. I will have a fix for this in v280, and better ui feedback on what is going on in the coming weeks. For this week, if you are a prolific tag uploader, please edit the 'default' hydrus bandwidth rules and remove the '50 requests per day' rule. This should free you up.

>>7093

The new network engine will split different items in 'queue' downloaders like this into their own permanent entries that you can review to see how they did.

There's an antagonism here in that I want to show what happened, but I also want to speed things along as fast as possible when nothing is found. I expect the one with the blank result had an unusual combination of zero results and so the code just moved on without a user-polite delay. I will have another look at the code, thank you. This will be easier when I move this to two loops, so I can have the page downloader on a 5s period but let the file downloader run wild.

Now I think of it, it would be nice to have a kind of log for the download control, as things often fly past too quick to see. Maybe a window that pops up with a big list of what it tried, and when, and what happened.
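
To sketch what I mean–invented names, just one shape such a log could take:

    import collections
    import time

    DownloadLogEntry = collections.namedtuple('DownloadLogEntry', ['timestamp', 'url', 'status'])

    class DownloadLog:
        def __init__(self, max_entries=1000):
            # a bounded log so old entries fall off the back automatically
            self._entries = collections.deque(maxlen=max_entries)

        def record(self, url, status):
            self._entries.append(DownloadLogEntry(time.time(), url, status))

        def review(self):
            return list(self._entries)

    log = DownloadLog()
    log.record('https://example.com/thread/123', 'already in queue')
    log.record('https://example.com/thread/456', '12 new urls found')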

>>7095

>>7097

Hydrus does not yet support login to these sites, but it will soon. The login manager I am currently building is designed to support any typical site. Any hydrus user will be able to create and share a login script without me having to do anything. I expect login scripts to appear in the wiki as this happens (I will link to the right page then), and I'll probably import them to the client as defaults as well. Please let me know how the features work for you as they roll out over the coming weeks.

For your tumblr subs, >>7097 , please pause them for now. When tumblr login is available, you'll be able to add in a working user/pass to that and they'll log in automatically and hopefully just start working again.

>>7099

Thank you for this report. This should be fixed for v280–please let me know if it gives you any more trouble. I apologise for the inconvenience.

>>7104

I think searching multiple sites at once may be in the future of hydrus, but I will not be doing it in this pass. My main workflow aim this iteration is to allow better support for multiple queries per site–making it easier to discern different query results in downloader pages, allowing multiple threads per thread watcher, and having subs with the same tag and check period settings but hitting multiple queries on the same site.

Doing one query on multiple sites is unfortunately a more complicated problem. There are the issues of artists and tags sometimes having different names on different sites, and also all the ui to support it would be a lot of work.

I like the idea of saying 'collapse all this stuff into one page' though. I would like to add more thumbnail migration options in general in the nearish future, including drag-and-drop from tab to tab (so you can dump fresh downloads on a processing megapage, for instance).


8ae71b No.7112

>>7107

On the window with a log, that would honestly be very helpful

On why this happens: with smaller ones, sure, it's to speed things along, but my thinking is that if this is the first thing you add, unless the link had under 5 files, it's going to add the second one long before the files are downloaded. As for the end… that's just what I see with every single link that is added on its own or as the last of the set. I saw this every single time I added a bulk 10-20 links to the download page (going through the ~1000 threads I had saved for processing–with the 8tb hdd and the archive being fully online, I was finally able to go through it); the last link added would never show anything. When I used to add one at a time, I had to watch the number of images to download, because that was the only indication I had that something was added, and when I eventually came across the same link, I usually made a new window and added the thread there so I knew for a fact it was added.

One of the fun things about going through that overwhelming bulk is this: if you had more than 1 of the same link in the queue before it was gone through, the program would scrap the duplicate. On larger imports–before I found extensions, or realised I could copy-paste to Notepad++ and then paste everything in one click–this was frustrating, as I had no indication the thread was already in… I mean, I knew why it wasn't adding, but paranoia doesn't let logic work when I don't want to lose the images. If you make the log, also add in entries for when it scraps links.

On the topic of logs, I also have to ask: is it possible for the program to remember what task it was given when acquiring a link? I came across an image recently where I had the link to the image itself, but trying to find the page the link came from was basically impossible for me. I was ultimately able to find what I was looking for by asking other people, but had I been able to get the page link, I may have been able to get it on my own.

here

I add

Holyfuckitsanimage.bla/hereisthepage

but the program only remembers

thisisthelinktotheimage.nah/blipbloop/sdfa437y89q394.image

Does the program remember the former, and if it doesn't, would it be possible to add/retain it?

As for the multiple sites thing, the problem doesn't seem too difficult, though you may need a few options:

1) booru sites

2) the blog site's username

3) the blog site's tags

That would separate all the massive issues that would come from different tags/usernames.

I remember when I thought of this: I searched one person's username on every booru site and just redid the name on each one, and only two had a slightly different name… With the boorus, one of the issues I have is even finding the fucking sites to do a test search on them. If anything could be done to make that a bit better–is it possible to add a link to the site, in the style of thread watchers having a link to the thread? It's probably less of an issue when you know the sites, but personally I had only ever used maybe 2 of the boorus before the program let me know some others exist.

Can't wait till thread watchers can do more than 1 thread at a time. And if you do anything with combining and consolidating tabs anytime soon: as I am finding out, large tabs are a motherfucker to work with, but I can see uses in cutting down redundant images in small batches.


8ae71b No.7120

Just got this error twice:

UnboundLocalError

local variable 'num_already_in_seed_cache' referenced before assignment

Traceback (most recent call last):

File "include\ClientImporting.py", line 875, in _THREADWorkOnGallery

did_work = self._WorkOnGallery( page_key )

File "include\ClientImporting.py", line 817, in _WorkOnGallery

if num_already_in_seed_cache > 0:

UnboundLocalError: local variable 'num_already_in_seed_cache' referenced before assignment

Not really sure what it was–I got 2 errors that were basically exactly the same.


bafeea No.7121

>>7120

Got the same error when downloading from sankaku; not sure if anything actually broke, though.


40083e No.7123

Does Hydrus already have the capability to migrate files and tags between databases? In other words, am I safe to download and tag a bunch of pictures onto my laptop with a local Hydrus, with confidence that I can correctly import them into my external drive's Hydrus repo once I get back to it–or will it soon be able to do this?


0905db No.7124

File: 54062e00f743bf6⋯.jpg (162.28 KB, 1535x1114, 1535:1114, 54062e00f743bf62474bc379b1….jpg)

>>7112

I hope to roll an improvement into URL storage in this 'domain' manager I am currently working on. I expect to move URLs into three categories:

- Raw URL (the file, like cdn.website.com/2f/1246254949549545619.jpg)

- Page URL (booru.net/post/123456)

- Decorative 'Source' URL (8ch.net/vp)

I am not sure about the third though. I will think about it more and maybe just not do it.

Direct file URLs are very useful for hydrus to remember but are ugly to display in the ui. Page URLs can help you find stuff you downloaded already, and a decorative URL could let you see 'oh, I got this from /a/ once' or whatever. At the moment, the downloader engine can only support one URL per file, and the parser isn't flexible enough to figure out post URLs for tumblr yet.

So as the downloaders improve and become more flexible, they'll be able to report any of these URLs and the client will be better able to choose what to display. There will also be a system to 'idealise' URLs, so dupes that just have arguments in a different order or have some random tag-gumpf on the end can be merged.
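
To sketch the 'idealise' idea–the parameter names and the decorative list here are invented:

    from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

    DECORATIVE_PARAMS = {'utm_source', 'utm_medium'}  # illustrative list

    def idealise_url(url):
        # sort the query and drop known decorative arguments, so trivially
        # different dupes collapse to the same url
        parts = urlparse(url)
        params = sorted(
            (k, v) for (k, v) in parse_qsl(parts.query) if k not in DECORATIVE_PARAMS
        )
        return urlunparse(parts._replace(query=urlencode(params)))

    # these two now count as the same page:
    print(idealise_url('https://booru.net/post/123456?b=2&a=1&utm_source=gumpf'))
    print(idealise_url('https://booru.net/post/123456?a=1&b=2'))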

The page limit goes up to 200 tomorrow, btw.

>>7120

Thank you, this is fixed for tomorrow. I think it was happening on some occasions when the file limit was hit. I think it was mostly harmless. Please let me know if you run into any more trouble in v280.
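
For the curious, this class of bug is a variable that is only assigned on some code paths. A minimal illustration of the same shape as the traceback above–not the actual code:

    def work_on_gallery(hit_file_limit):
        if not hit_file_limit:
            num_already_in_seed_cache = 0  # only assigned on this branch

        # if the file limit was hit, the assignment above is skipped and this
        # line raises UnboundLocalError, just like in the traceback
        if num_already_in_seed_cache > 0:
            print('some urls were already known')

    work_on_gallery(hit_file_limit=True)  # raises UnboundLocalError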


0905db No.7125

>>7123

You can do this manually with file/tag exports, but there is no direct automatic client-to-client sync yet.

You can do it simply from the thumbnail right-click->share->export menu. You can export and import tags in neighbouring .txt files for simple jobs, and create Hydrus Tag Archive files for more complicated or comprehensive jobs. HTAs are more advanced–there is a bit about them in the help, but let me know if you would like any assistance with them, or with the more basic import/export. There are a bunch of people on the discord familiar with this stuff as well, if you are ok getting on the discord server.
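
To give an idea of the .txt sidecar layout–the path here is an assumption, but the shape is one tag per line in a text file next to each exported file:

    import os

    export_dir = '/path/to/export'  # wherever you pointed the export

    for name in os.listdir(export_dir):
        if name.endswith('.txt'):
            continue
        sidecar = os.path.join(export_dir, name + '.txt')
        if os.path.exists(sidecar):
            with open(sidecar, encoding='utf-8') as f:
                tags = [line.strip() for line in f if line.strip()]
            print(name, tags)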

Very advanced users can create their own tag repositories on their home networks, of course, but you probably do not want to try this until you are already comfortable with the other methods.


8ae71b No.7130

>>7124

The decorative source url… it's not exactly useful, at least at the outset. If it takes little to no space, go for it, but if it takes up something that can become tangible, I'd discard that one–just knowing the page I imported the raw url from would be enough to source just about anything.

If it's only able to store one url… would it be possible to jerry-rig it to save 2? Like the image you have:

https://boards.4chan.org/vg/thread/194173643[{}]https://media.8ch.net/file_store/54062e00f743bf62474bc379b199c0232d7a8b1d325e7b8b590e69ad0bb280e6.jpg

No website would use [{}] in a url, but it could be a clear indication to the program or user of where the line separates. That way, in a future version, when it can handle more than one url, it could retroactively undo the [{}] and separate them?

On the error:

I never really saw harm in the error–nothing failed that I can tell, it was just happening, so I came here to paste it.

On the page limit going up to 200–what happened?



