windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v256/Hydrus.Network.256.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v256/Hydrus.Network.256.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v256/Hydrus.Network.256.-.OS.X.-.App.dmg
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v256/Hydrus.Network.256.-.OS.X.-.Extract.only.tar.gz
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v256/Hydrus.Network.256.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v256.tar.gz
I had a mixed week, but I am overall happy with my work. I put some more time into the gelbooru problem and polished the duplicate filter.
gelbooru back to normal
The 'redirect' urls that the gelbooru parser was generating last week are now gone. The client will figure out the correct urls and put them in the url cache just as it did before. Also, any old redirect urls will be removed from all your downloader pages and subscriptions when you update. EDIT: one user has just reported that the old urls were not removed correctly. If this happened to you, please let me know any details, and if it isn't a privacy issue, send me a couple of the urls you still have.
Hopefully this patch will last us until the downloader overhaul, but let me know if it breaks again!
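As the changelog notes, the quicker fix turned out to be that the apparent garbage in the original url is just the real location, base64-encoded. A minimal sketch of that idea in Python, purely for illustration; the query parameter name 's' and the use of url-safe base64 here are assumptions, not gelbooru's actual scheme:

```python
import base64
from urllib.parse import parse_qs, urlparse

def decode_redirect(url):
    # Pull the encoded blob out of the redirect-style url's query string.
    # Assumption: the target lives in a parameter named 's' (hypothetical).
    params = parse_qs(urlparse(url).query)
    encoded = params["s"][0]
    # Decode it back into the real target url.
    return base64.urlsafe_b64decode(encoded).decode("utf-8")
```

So a `redirect.php?s=aHR0cHM6...` style url would decode straight to the page url the client actually wants to associate with the file, with no extra network round-trip needed.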
duplicate filter almost done
I've cleaned up and completely finished a lot of the duplicate filter code. Some unusual bugs and laggy moments are fixed, the way pairs are selected and presented is improved, and the db and gui do some more trickery to save you time.
The client now makes some simple automatic guesses about which file is 'better'. It tells you on the top hover window what it thinks–whether one file has larger resolution, or more tags, for instance–and then puts the better file as the A in the A-B pair you judge. A is often a nice high-res image, and B is often a no-tag, newer, scaled down version.
I think there will be two more weeks of work, and then I will be done with duplicates.
misc
You can now drag and drop text onto the program, and it'll automatically put it into a url download page! It works great for the text links on imageboards, for instance–just drag the link onto hydrus and hit enter. Try it out!
You can now 'append' gui sessions, which will load the contents of the session without deleting whatever was open before. This seems to work very well for some jobs (like opening a few 'favourite' pages without breaking your current workflow, or opening a bunch of empty thread watcher pages all set up and ready to go), and I think it may be cause to rename the 'sessions' system to something else, something like 'bookmarks'.
full list
- the duplicate filter now loads new pairs off the gui thread. it will display 'loading pairs…' during this time
- media viewers of all kinds are now more comfortable displaying no media (when this occurs, it is usually a frame or two during startup/shutdown)
- the duplicate filter now responds to any media_viewer_browser navigation commands (like view_next) with a media switch action
- you can now alter the duplicate filter's background lighten/darken switch intensity from its top hover window's cog icon
- fixed a bug in the new dupe pair selection algorithm that was preventing pairs from being presented as groups
- the duplicate filter will now speed up workflow by automatically skipping pairs when you have previously chosen to delete one of the files in the current batch
- auto-skipped pairs _should_ be auto-reverse-skipped on a 'go back' action
- added a |< 'go back' index navigation button to the duplicate filter top hover window
- the duplicate filter now displays several 'this file has larger resolution'-type statements about the currently viewed file. it lists them on the top hover window and in the background details text underneath
- the duplicate filter _roughly_ attempts to put the better file of the two first. this will always be indexed 'A'
- the duplicate filter now shows done/total batch progress in its index string–not sure how clear/helpful this ultimately is, so may need to revisit
- an unusual bug where Linux would spam the 'All pairs have been filtered!' duplicate filter message over and over and then crash _should_ be fixed–the filter no longer waits for that message to be OKed before closing itself
- drag-and-dropping text onto the client will now a) open a url import page if none already exists and b) put the dropped text into the input box of the first open url import page (and focus it, so you can quickly hit enter)! this works when dragging text links from browsers, as well
- you can now 'append' gui sessions, which will just append that session's tabs to whatever is already open–say, if you have several 'favourites' pages you want to be able to quickly load up without having to break your existing workflow
- ipfs services now have a 'check daemon' button on their review services panel which will test that the daemon is running and accessible and report its version
- fixed the 'test address' button for ipfs services on their manage services panel
- the client can now automatically download files it wants and knows are on an ipfs service
- middle-click on an 'all known files' domain thumbnail will now correctly start a download (as long as a specific remote file service is known)
- the multihash prefix option is reinstated on ipfs manage services panels
- the gelbooru parser now discovers the correct page url to associate with its files
- wrote some redirect fetching code to fix the gelbooru bad urls issue
- discovered a quicker fix for the gelbooru issue–the apparent garbage in the original url is the redirect location, base64-encoded
- all downloader/subscription url caches will purge any old gelbooru 'redirect.php' urls on update
- fixed an issue where 'previously deleted' gallery/thread imports were returning 'fail'
- fixed a problem that was causing some redundant laggy work in adminside petition processing
- thread watchers will now remember their file and tag import options through a session save even when no thread url has yet been entered
- fixed an issue where media 'removed' from a media viewer view of a collection resulted in the entire collection being removed at the thumbnail level
- fixed an issue where media deleted from a media viewer view of a collection resulted in the media not being correctly removed from selection tags
- tag, namespace, and wildcard searches on a specific file domain (i.e. other than 'all known files') now take advantage of an optimisation in the autocomplete cache and so run significantly faster
- fixed a hover window coordinate calculation issue after minimising the media viewer on some platforms
- removed some 'all files failed to download' spam that could sometimes occur
- misc fixes
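The ipfs 'check daemon' button above presumably just pings the daemon's HTTP API. A rough sketch of such a check, assuming the standard go-ipfs /api/v0/version endpoint on the default port 5001 (newer daemons expect POST for API calls); this is not hydrus's actual implementation:

```python
import json
import urllib.request

def parse_daemon_version(raw):
    # The version endpoint returns JSON like {"Version": "0.4.13", ...}.
    return json.loads(raw).get("Version")

def check_ipfs_daemon(api_base="http://127.0.0.1:5001"):
    # Hit the daemon's HTTP API to confirm it is running and accessible.
    req = urllib.request.Request(api_base + "/api/v0/version", method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return parse_daemon_version(resp.read().decode("utf-8"))
```

If the request times out or is refused, the daemon is down or the address is wrong; otherwise you get a version string back to show the user.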
next week
I can see the end of the tunnel on the dupe filter. Main things remaining are a system predicate to search files with dupe relationships and a workflow from thumbnail right-click to assign dupe relationships manually en masse.
I also have jury duty, which is likely to only be one day.