windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v305/Hydrus.Network.305.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v305/Hydrus.Network.305.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v305/Hydrus.Network.305.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v305/Hydrus.Network.305.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v305/Hydrus.Network.305.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v305.tar.gz
I had a great week and got a bunch more downloader overhaul work done. The new parsing system kicks in at several more places now, and some sites support url drag-and-drop by default.
new gallery parsers
I have tightened up last week's improvements on the new gallery page downloader and integrated it into regular downloader pages and subscriptions. Now, if the client recognises a url in any downloader or sub and knows how to parse it in the new system, it will use that new parser seamlessly.
I have also written new parsers for pixiv, danbooru, safebooru, and e621 for exactly this. If you use any of those sites, you may notice they now populate the 'source time' column of the file import status window (which is useful for subscription check timing calculations), and that these parsers pull and associate additional 'source urls' from the files' pages (so although you may download from danbooru, you might also get a new known pixiv url along the way).
A neat thing about these parsers is that if one of these additional source urls has already been seen by the client, the client can use that to pre-determine if the file is 'already in db' or 'previously deleted' before the file is downloaded, just like it would the main post url, saving time and bandwidth. The danbooru and e621 ones even pull md5/sha1 hashes and check those, so if everything works right, you should never have to redownload anything from danbooru or e621 again!
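To make that concrete, here is a minimal sketch of that pre-download check. This is not hydrus's actual code: the client_db object and its lookup helpers are hypothetical stand-ins.

def predict_import_status(client_db, post_url, source_urls, md5=None):
    # the main post url and any parsed source urls are all checked
    # against the client's known url mappings
    for url in [post_url] + source_urls:
        status = client_db.get_status_for_url(url)  # hypothetical helper
        if status in ('already in db', 'previously deleted'):
            return status  # no need to download the file at all
    
    # the danbooru and e621 parsers also provide file hashes, which
    # give an even stronger match than a url
    if md5 is not None:
        status = client_db.get_status_for_md5(md5)  # hypothetical helper
        if status in ('already in db', 'previously deleted'):
            return status
    
    return 'unknown'  # have to download the file and check it properly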
I also fixed the pixiv downloader more generally, which I had broken in last week's url normalisation update (and due to some other pixiv-specific stuff). I apologise for the inconvenience. Everything should be working again, although you may have some useless bad urls from v304 that are missing the 'mode=medium' component, which you may wish to skip/delete rather than let error out. The new pixiv parser fetches the romaji versions of tags now as well. Manga pages aren't supported yet, and tag searching is still down, but as I roll out some more gallery stuff here, I think I'll be able to figure something out.
Another upshot of the new parsers is that the client can now receive these sites' post urls as drag-and-drop events. Try dragging and dropping a danbooru file post url (like this: https://danbooru.donmai.us/posts/2689241 ) onto the client. It should all get imported in a new 'urls downloader' page automatically, with all the new url association and everything! (You might want to check the new 'manage default tag import options' under the 'network' menu before you try this. The whole download system has a foot in two worlds at the moment: some parts still pull tag import options from the old system, but this new url-based automatic import looks to those new defaults.)
And lastly, with the help of @cuddlebear on the discord, there is a comprehensive yiff.party API parser in place, also with drag-and-drop support. Due to the shape of the data that yiff.party presents, this creates a thread watcher. You can even set these watchers to check like every 30 days or so, _and they should work_ and keep up with new files as they come in, but I recommend you just leave them as [DEAD] one-time imports for now: I expect to integrate 'watchable' import sources into the proper subscription system by the time this overhaul is done, which I think is probably the better place for more permanent and longer-period watchables to go.
I am pleased with these changes and with how the entire new downloader system is coming together. There is more work to do (gallery parsing and some kind of search object are the next main things), but we are getting there. Over the next weeks, I will add new parsers for all the rest of the default downloaders in the client (and then I can start deleting the old downloader code!).
other stuff
Import pages now report their total file progress after their name! They now show "(x, y/z)", where x is the number of files in the page, y the number of queue items processed, and z the total number of queue items. If y=z, only "(x)" is reported. Furthermore, this y/z progress adds up through layers of page of pages!
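As a toy illustration of that label rule (the function and numbers here are made up, not the client's real code):

def page_label(name, num_files, num_done, num_total):
    # a page of pages would sum its children's y and z before formatting
    if num_done == num_total:
        return '{} ({})'.format(name, num_files)  # queue done: just the file count
    return '{} ({}, {}/{})'.format(name, num_files, num_done, num_total)

print(page_label('my imports', 12, 5, 20))   # my imports (12, 5/20)
print(page_label('my imports', 20, 20, 20))  # my imports (20)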
If you try to close a page of pages (or the whole application) and multiple import pages want to protest that they are not yet finished importing, the client now bundles all their 'are you sure you want to close?' protests into a single yes/no dialog!
If manage subs takes more than a second to load, it'll now make a little popup telling you how it is doing.
full list
- fixed the pixiv url class, which was unintentionally removing a parameter
- wrote a pixiv parser in the new system, fixing a whole bunch of tag parsing along the way; it also parses 'source time'! by default, pixiv now fetches the translated/romaji versions of tags
- finished a safebooru parser that also handles source time and source urls
- finished an e621 parser that also handles source time and source urls and hash!
- wrote a danbooru parser that also handles source time and source urls and hash!
- as a result, danbooru, safebooru, e621, and pixiv post urls are now drag-and-droppable onto the client!
- finished up a full yiff.party watcher from another contribution by @cuddlebear on the discord, including url classes and a full parser, meaning yiff.party artist urls are now droppable onto the client and will spawn thread watchers (I expect to add some kind of subscription support for watchers in the future). inline links are supported, and there is source time and limited filename: and hash parsing
- fixed some thread watcher tag association problems in the new system
- when pages put an (x) number after their name for number of files, they will now also put an (x/y) import total (if appropriate and not complete) as well. this also sums up through page of pages!
- if a call to close a page of pages or the application would present more than one page's 'I am still importing' complaint, all the complaints are now summarised in a single yes/no dialog
- url downloader pages now ask 'are you sure you want to close this page?' when their import queues are unfinished and unpaused
- if the subscriptions for 'manage subscriptions' take more than a second to load, a popup will come up with load progress. the popup is cancellable
- added a prototype 'open in web browser' entry to the thumbnail right-click share menu. it will only appear on windows if you are in advanced mode, as atm it mostly just launches the file in the default program, not the browser. I will keep working on this
- harmonised more old download code into a single location in the new system
- created a neater network job factory system for generalised network requests at the import job level
- created a neater presentation context factory system for generalised and reliable set/clear network job ui presentation at the import job level
- moved the new downloader simple-file-download-and-import to the new file object and harmonised all downloader code to call this single location where possible
- did the same thing with the download-post-and-then-fetch-tags-and-file job and added hooks for it in the subscription and gallery downloader loops (where a parser match for the url is found)
- the simple downloader and urls downloader now use 'downloader instance' network jobs, so they obey a couple more bandwidth rules
- harmonised how imported media is then presented to pages as thumbnails through the new main import object
- the new post downloader sets up referral urls for the file download (which are needed for pixiv and anything else picky) automatically (there is a small illustration of this after the list)
- improved file download/import error reporting a little
- entering an invalid regex phrase in the stringmatch panel (as happens all the time as you type it) will now present the error in the status area rather than spamming popups
- fixed a bug in the new parsing gui that was prohibiting editing a date decode string transformation
- fixed enabling of additional date decode controls in the string transformations edit panel
- added a hyperlink to date decoding controls that links to python date decoding explainer
- if a source time in the new parsing system suggests a time in the future, it will now clip to 30s ago (the date decode sketch after the list shows both rules)
- misc downloader refactoring and cleanup
- fixed an issue where new file lookup scripts were initialising with bad string transformation rows and breaking the whole dialog in subsequent calls, fugg
- hid the 'find similar files' menu entry for images that have duration (gifs and apngs), which are not yet supported
- added 'flip_debug_force_idle_mode_do_not_set_this' to main_gui shortcut set. only set it if you are an advanced user and prepared for the potential consequences
- silenced a problem with the newgrounds gallery parser; I will fix it properly next week
- fixed some old busted unit test code
- rejiggered some thumb dupe menu entry layout
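On the referral url item above: pixiv's file server refuses a bare request for an image but accepts it when the post url rides along as the http referer. A minimal illustration using the requests library (both urls here are made up):

import requests

post_url = 'https://www.pixiv.net/member_illust.php?mode=medium&illust_id=12345'
file_url = 'https://i.pximg.net/img-original/img/2018/01/01/00/00/00/12345_p0.png'

# the post url goes in as the referer, which the new post downloader
# now sets up automatically; without it, the file server rejects you
response = requests.get(file_url, headers={'Referer': post_url})
response.raise_for_status()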
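And on the date decode and future-time items: the date decode transformation boils down to python's standard strptime parsing, now followed by a clamp. A hedged sketch, assuming the parsed time is utc (the real string transformation system is more involved):

import calendar
import time

def parse_source_time(date_string, format_string):
    # e.g. parse_source_time('2018-05-12 09:30', '%Y-%m-%d %H:%M')
    struct = time.strptime(date_string, format_string)
    source_time = calendar.timegm(struct)  # interpret the struct as utc
    
    # a source time in the future is almost certainly clock drift or a
    # bad timezone, so clip it to 30 seconds ago
    now = time.time()
    if source_time > now:
        source_time = now - 30
    
    return source_time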
next week
I will try to fit in some more parsers, and I might take a stab at a 'multiple thread watcher' page for advanced users. There's also an experimental new 'open file in web browser' that I had mixed luck with this week and would like to pin down a good multiplat solution for.