windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v351/Hydrus.Network.351.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v351/Hydrus.Network.351.-.Windows.-.Installer.exe
os x
app: https://github.com/hydrusnetwork/hydrus/releases/download/v351/Hydrus.Network.351.-.OS.X.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v351/Hydrus.Network.351.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v351.tar.gz
I had an ok week. I sped up several systems and added a new processing panel to the duplicate filter.
The 'next big job' poll is finished! I will next be focusing on overhauling the duplicate filter's db structure (including sketching out support for file 'alternates') and further improving the ui-side workflow.
duplicate filter
Seeing that the duplicate filter work was popular in the poll, I was happy to put a bit more time into it this week.
Most importantly, the duplicate filter now has a new always-on-top panel to make reviewing differences and making decisions easier. Essentially, I have pulled the 'this has higher resolution' statements and the action buttons out of the top hover window and put them in their own box to the middle-right. It stays on top, so you can always see it, and I have expanded the different statements to explicitly state each file's relevant values, such as '550KB >> 141KB', and to colour their text green/blue/red based on that difference. You can now make at-a-glance decisions for easy pairs. Let me know how it works for you!
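If it helps to picture what those statements look like under the hood, here is a minimal sketch of how a filesize line and its score might be built. The function names and thresholds here are my own inventions for illustration, not hydrus's actual code:

```python
# Illustrative only: hypothetical helpers showing how a filesize comparison
# statement and its score/colour might be derived. Not hydrus's real code.

def nice_size(num_bytes):
    # simple human-readable size, e.g. 563200 -> '550KB'
    size = float(num_bytes)
    for unit in ('B', 'KB', 'MB', 'GB'):
        if size < 1024:
            return '{:.0f}{}'.format(size, unit)
        size /= 1024
    return '{:.0f}TB'.format(size)

def filesize_statement(this_bytes, other_bytes):
    """Return a statement like '550KB >> 141KB' and a score whose sign
    could drive green (positive), blue (zero) or red (negative) text."""
    if this_bytes == other_bytes:
        return ('{} = {}'.format(nice_size(this_bytes), nice_size(other_bytes)), 0)

    ratio = max(this_bytes, other_bytes) / min(this_bytes, other_bytes)
    operator = '>>' if ratio >= 2.0 else '>'  # big differences get the stronger operator
    score = 2 if ratio >= 2.0 else 1

    if this_bytes < other_bytes:
        # the other file is bigger, so flip the statement and negate the score
        return ('{} {} {}'.format(nice_size(other_bytes), operator, nice_size(this_bytes)), -score)

    return ('{} {} {}'.format(nice_size(this_bytes), operator, nice_size(other_bytes)), score)

print(filesize_statement(563200, 144384))  # -> ('550KB >> 141KB', 2)
```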
I have also tweaked the 'show some random dupes' and 'duplicate filter' database routines to sample results more cautiously, meaning that duplicate filters with very large search domains (like system:inbox or system:num_tags>0) will work significantly faster. The initial search step still has to run every time, but the second 'sampling' stage now takes barely any time at all. If your dupe work still takes a long time to count up or load pairs to filter, I again recommend narrowing the search with something like a 'creator:' tag.
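For the curious, the 'more cautious sampling' idea boils down to not walking the whole result set. Here is a rough sketch under assumed names (get_pairs_for_files is a hypothetical db call, not the real routine):

```python
# A minimal sketch of 'sample more cautiously', assuming a hypothetical
# get_pairs_for_files() db call; this is not the actual hydrus routine.

def collect_filter_pairs(searched_hash_ids, get_pairs_for_files,
                         wanted_pairs=250, batch_size=1024):
    """Rather than fetching duplicate pairs across the whole (possibly huge)
    search result in one go, work through it in batches and bail out as soon
    as we have enough pairs for one filtering session."""
    pairs = []

    for i in range(0, len(searched_hash_ids), batch_size):
        batch = searched_hash_ids[i:i + batch_size]
        pairs.extend(get_pairs_for_files(batch))

        if len(pairs) >= wanted_pairs:
            break  # bail out early; a decent result is enough

    return pairs[:wanted_pairs]
```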
The next 'big job' work here is to overhaul the duplicate database tables to work in more intelligent 'groups' rather than my initial simple 'pairs' system. This will compact the duplicates data, speed up many operations, massively simplify transitive duplicate logic, and lead to alternate file structure support. I'll also probably copy the basic structure to tag siblings and parents, which are a similar data structure also currently stored in pairs. This job will take some work, but I know the general thrust of what I want to do.
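As a rough illustration of why groups beat raw pairs for transitive logic (this is a toy, not the planned schema), a union-find style structure makes 'A duplicates B' and 'B duplicates C' imply 'A duplicates C' without storing a row for every combination:

```python
# A toy illustration of group-based duplicate storage: a union-find over
# file ids. Not the planned hydrus schema, just the general idea.

class DuplicateGroups:
    def __init__(self):
        self._parents = {}

    def _find(self, file_id):
        self._parents.setdefault(file_id, file_id)
        while self._parents[file_id] != file_id:
            # path-halving keeps lookups near-constant time
            self._parents[file_id] = self._parents[self._parents[file_id]]
            file_id = self._parents[file_id]
        return file_id

    def set_duplicates(self, file_id_a, file_id_b):
        self._parents[self._find(file_id_a)] = self._find(file_id_b)

    def are_duplicates(self, file_id_a, file_id_b):
        return self._find(file_id_a) == self._find(file_id_b)

groups = DuplicateGroups()
groups.set_duplicates(1, 2)
groups.set_duplicates(2, 3)
print(groups.are_duplicates(1, 3))  # True; transitivity comes for free
```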
faster code and some misc work
I cleaned and improved a bunch of old code this week. First off, image loading now uniformly uses a faster library (OpenCV), so image imports and some thumbnail creation should be a little faster, and some rare image rotations should be handled more reliably. Secondly, the way the tag sibling and parent managers construct themselves on client boot is significantly faster. Finally, a new 'local' tag cache, which will take a minute to construct when you update the client, will speed up many tag-related operations, particularly building file results, and especially right after the client boots. There are many changes here, so please report any bugs you see.
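The image loading change follows a 'fast library first, forgiving library as fallback' pattern. Here is a minimal sketch of that idea, with my own function name rather than hydrus's internals:

```python
# A minimal sketch of the 'fast library first, forgiving fallback' pattern;
# the function name here is my own, not hydrus's internals.

import numpy
import cv2              # OpenCV: fast decoding
from PIL import Image   # Pillow: slower but handles more edge cases

def load_image_rgb(path):
    """Return the image at path as an RGB numpy array, preferring OpenCV
    and falling back to PIL/Pillow if OpenCV cannot decode the file."""
    image = cv2.imread(path, cv2.IMREAD_COLOR)

    if image is not None:
        # OpenCV loads in BGR channel order, so convert to RGB
        return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # OpenCV returned None, so let Pillow have a go
    with Image.open(path) as pil_image:
        return numpy.asarray(pil_image.convert('RGB'))
```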
The review services panel now uses nested notebooks instead of my old 'listbook' control. I don't really like how it looks now, but the code behind the scenes is a lot saner. The last instance of this bad listbook is the options dialog, which is even worse behind the scenes but a bit too big to fit into a single notebook, so I am still thinking about what to do with it.
It is just a little thing, but subscription popups now list file download progress x/y in terms of the current job rather than total subscription history! So, if your sub has 400 files already downloaded and finds 10 more in a sync, the popup will now say (and display a progress gauge for) a much more helpful 3/10 rather than 403/410.
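To make the arithmetic concrete, here is a trivial sketch of the old and new progress strings side by side (the parameter names are mine, not the actual subscription code):

```python
# A trivial sketch of the change, with made-up parameter names.

def subscription_progress(files_in_history, done_this_sync, new_this_sync):
    old_style = '{}/{}'.format(files_in_history + done_this_sync,
                               files_in_history + new_this_sync)   # e.g. '403/410'
    new_style = '{}/{}'.format(done_this_sync, new_this_sync)      # e.g. '3/10'
    return (old_style, new_style)

print(subscription_progress(400, 3, 10))  # ('403/410', '3/10')
```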
full list
- wrote a new (always on top!) hover window for the duplicate filter that sits on the middle-right. the duplicate cog button and action buttons are moved to this new window, as are the file comparison statements
- the duplicate file comparison statements now state the relevant actual metadata along with better '>>'-style operators to highlight differences and green/red/blue colouring based on given score. it is now much easier to see and action clearly better files at-a-glance
- improved some hover window focus display calculations to play with the new always-on-top tech
- both the 'show some random dupes' button and finding dupe pairs for the filter should be a bit faster for very large search domains. the basic file search and indexing still has to run, but the second sampling step in both cases will bail out earlier once it has a decent result
- core image handling functions now uniformly use OpenCV (faster, more accurate) by default, falling back to PIL/Pillow on errors. image importing in the client and server should be a bit faster, and some unusual image rotations should now be read correctly
- the server now supports OpenCV for image operations; it _should_ also still work with only PIL/Pillow if you are running from source and cannot get OpenCV
- unified all thumbnail generation code and insulated it from surprises due to unexpectedly-sized source files, fixing a potential client-level thumbnail generation looping bug
- gave all image processing a refactor and general cleanup pass, deleted a bunch of old code
- wrote a new 'local tag cache' for the db that will speed up tag definition lookups for all local files. this should speed up a variety of tag and file result fetching, particularly right after client boot. it will take a minute or two on update to generate
- sped up how fast the tag parent structure builds itself
- the review services panel now uses nested notebooks, rather than the old badly coded listbook control. I don't really like how it looks, but the code is now saner
- similar-files metadata generation now discards blank frames more reliably
- subscription popups now report x/y progress in terms of the current job, discarding historical work previously done. 1001/1003 is gone, 1/3 is in
- made the disk cache more conservative on non-pre-processing calls
- cleaned up some file import code, moving responsibility from the file locations manager to the file import object
- updated the ipfs service listctrl to use the new listctrl object. also cleaned up its action code to be more async and stable
- I believe I fixed a rare vector for the 'tryendmodal' dialog bug
- fixed a bug in presenting the available importable downloader objects in the easy drag-and-drop downloader import when multiple dropped downloaders included objects of the same type and name; duplicate-named objects in this case will now be discarded
- unified url_match/url_class code differences to url class everywhere
- updated some common db list selection code to use new python string formatting
- plenty of misc code cleanup
next week
I'll now tackle this dupe db work, starting with some planning and experimentation. Otherwise, next week is a small jobs week. I'd like to add .ico support and have pages' 'collect by' dropdown value saved with sessions.