>>14895
>My test application of 6 tags to 10,000 files went from 52 seconds to 4.8!
VERY big optimization, thank you dev!
I did a lot of my preliminary mass tag-correcting last year, and I had to do it in batches to keep everything from crashing and burning at the time; even then it hung a lot. Now that my duplicates are largely done with, I've been making my first serious attempt at comprehensively trimming down junk files. So far I've done a cursory run through tagless files for things to throw away, purged my lowest-quality source of files of all the immediately obvious deletes, and just finished processing my 1280 smallest image files in batches of 64, deleting most of them. I've gone from keeping around 2 per batch to around 12, so I'm getting to the end of that being practical.
My next task after sorting my less-strict duplicates might be finally tackling the hellscape that is my tumblr imports. Besides being the worst leg of the pre-full-tagging-and-archival sweep of my db, they will have a lot of tags that need fixing, because I'm dumb and switched my methodology halfway through (and so did Tumblr, but at a different time, and so did Hydrus in a way).
So this speedup will definitely still be extremely useful to me, because some of those tumblr image collections were giant.
Also, my backlog of downloading videos and importing them into Hydrus has moved on from deleting the extra junk links in jDown to actually having no drive space left until I watch/delete/keep/edit video clips.
The ride never ends, but I have finally settled into a sustainable cruising velocity for now.