
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.



cda3a0 No.14699

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v410/Hydrus.Network.410.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v410/Hydrus.Network.410.-.Windows.-.Installer.exe

macOS

app: https://github.com/hydrusnetwork/hydrus/releases/download/v410/Hydrus.Network.410.-.macOS.-.App.dmg

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v410/Hydrus.Network.410.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v410.tar.gz

I had an ok week. I wasn't as productive as I hoped, but I am happy with the work, which is mostly optimisation.

optimisations

After some more profiling in IRL situations, and with more helpful info from users, I have done another round of optimisation for the new sibling cache, and more besides. A database technique I use for many purposes is now more reliable (fewer lag spikes) and has less CPU overhead. If you found some systems (like the 'related tags' suggestions in the manage tags dialog) sometimes took a few seconds to work in the past couple of weeks, they should now be fast again. And many types of file search, particularly those with multiple search predicates, should be faster than before, as should general tag processing.

My dev machine went from about 3-8k rows/s processing speed in a test environment up to 8-20k rows/s, which is faster than it was before the siblings cache was added.
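
To give a rough idea of the kind of checking involved (a minimal sketch with made-up table names, not the actual hydrus schema or queries), sqlite will report the join order it picked for any query, which is how you catch the plans responsible for lag spikes:

import sqlite3

con = sqlite3.connect( ':memory:' )

con.execute( 'CREATE TABLE mappings ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) );' )
con.execute( 'CREATE TABLE tag_siblings ( bad_tag_id INTEGER, good_tag_id INTEGER );' )

# EXPLAIN QUERY PLAN shows which table sqlite scans first and which indexes it uses
plan = con.execute( 'EXPLAIN QUERY PLAN SELECT hash_id FROM mappings, tag_siblings WHERE mappings.tag_id = tag_siblings.good_tag_id;' ).fetchall()

for row in plan:
    print( row )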

full list

- general work:

- fixed a bug in the new file service filtering code that was stopping file upload commands to file repositories or ipfs services from sticking

- fixed an issue with the export files dialog auto-close-when-done function

- I think I fixed a possible bug where the boot file location repair/recovery dialog would sometimes not save corrected paths on unusual file systems

- file migration cancel button and shut off timer should work a bit more reliably, more to come here

- copying subscription quality csv info to clipboard no longer does nice human numbers (you now get 1234, not csv-breaking 1,234 - see the sketch after this section)!

- may have fixed a very rare 'or predicate' error when opening a dialog with a 'read' autocomplete input, like export folder or file maintenance jobs dialogs

- all pages are better about dealing with missing (i.e. recently deleted) services on load, and autocompletes also

- error handling from servers with strange character encodings should be better about dealing with null characters

- cleaned up the combined display regen chain code

- deleted some obsolete db code

- .
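
Just to show what 'csv-breaking' means in that clipboard item above (a tiny sketch, with a made-up row rather than the real subscription columns): a thousands separator puts a comma inside the value, so a plain comma-joined line grows an extra field, which is why the raw integer is the safe thing to copy.

pretty = 'my sub,{},2020-07-15'.format( '{:,}'.format( 1234 ) )
plain = 'my sub,{},2020-07-15'.format( 1234 )

# the human-friendly number splits into two fields, the raw integer does not
print( pretty.split( ',' ) ) # ['my sub', '1', '234', '2020-07-15']
print( plain.split( ',' ) ) # ['my sub', '1234', '2020-07-15']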

- optimisation review:

- after more profiling, and thanks to additional input from users, I have done another round of optimisation for the new caches. using a new technique, more than just mappings are sped up this week - a number of queries that were prone to lag spikes should now have much more reliable speed and also be faster when hammered often

- .

- join and analyse db optimisations:

- these are mostly forcing table join orders, which reduces lag spikes, and reducing some related pre-query analysis overhead, which speeds things up more the faster your drive is (up to double processing speed on an ssd) - there is a sketch of the join-forcing idea after this list. they will affect different clients to different extents, but if your 'related tags' were taking more than a second to load, it should be sorted this week. systems affected:

- archiving files

- fetching 'related' suggested tags

- tag siblings regen/update in about ten places

- all mappings processing

- additional mappings processing for add/delete, pend/rescind_pend

- importing or deleting files that have tags

- loading medias' tags for the first time or on regen

- loading any media for the first time

- num notes searches

- similar files search tree maintenance

- many general file hash lookups

- many general tag lookups

- .
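
For anyone curious what 'forcing table join orders' looks like in practice (a minimal sketch with made-up table names, not the real hydrus queries): in sqlite, writing CROSS JOIN instead of a plain comma join is a documented hint that stops the query planner from reordering the tables, so you always get the same predictable plan instead of occasionally a slow one.

import sqlite3

con = sqlite3.connect( ':memory:' )

con.execute( 'CREATE TABLE temp_hash_ids ( hash_id INTEGER PRIMARY KEY );' )
con.execute( 'CREATE TABLE mappings ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) );' )
con.execute( 'CREATE INDEX mappings_hash_id_index ON mappings ( hash_id );' )

# plain comma join: sqlite may scan either table first, which is usually fine but sometimes produces a slow plan
flexible = 'SELECT tag_id FROM temp_hash_ids, mappings WHERE mappings.hash_id = temp_hash_ids.hash_id;'

# CROSS JOIN disables reordering: the small temp table is always the outer loop, mappings the inner
forced = 'SELECT tag_id FROM temp_hash_ids CROSS JOIN mappings WHERE mappings.hash_id = temp_hash_ids.hash_id;'

for query in ( flexible, forced ):
    print( con.execute( 'EXPLAIN QUERY PLAN ' + query ).fetchall() )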

- other optimisations:

- mappings processing

- sibling processing

- wildcard tag searches, with and without namespaces, particularly when mixed with other search terms

- 'tag as number' searches, with and without namespaces, particularly when mixed with other search terms

- searching for tags when mixed with other search terms

- has notes/no notes

- searching files on 'all known files' with general file metadata system predicates (like size, filetype)

- url class, url domain, and url regex file searches, particularly when mixed with other search terms

- num tag file searches when mixed with other search terms

- has/not has tags file searches when mixed with other search terms

- sped up specific display chain regen significantly, with similar separate current/pending optimisations as last week's for combined

- converted specific display cache overall regen to use a copy followed by the new chain regen rather than additive file import

- sped up combined display chain regen a little bit

- the splash window now updates itself with less UI overhead, so spammy updates (like the new tag regen code) use a little less CPU and fewer UI context switches
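
As a general illustration of that last splash window item (a sketch of the usual throttling pattern, not hydrus's actual splash code), the idea is to drop status messages that arrive too quickly so the widget only repaints every so often, rather than once per message:

import time

class ThrottledSplashStatus( object ):
    
    def __init__( self, set_label_text, min_interval = 0.1 ):
        
        self._set_label_text = set_label_text # e.g. a Qt call that updates the splash window's label
        self._min_interval = min_interval
        self._last_update_time = 0.0
        
    
    def SetText( self, text ):
        
        now = time.monotonic()
        
        # if the last repaint was recent, skip this message; the next one that
        # arrives after the interval will be shown instead
        if now - self._last_update_time < self._min_interval:
            
            return
            
        
        self._set_label_text( text )
        self._last_update_time = now

A regen job can then set the status thousands of times while the splash only actually touches the UI ten times a second.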

next week

Some IRL stuff fell on my head this week, and there is a bit more coming next week. I still have many small jobs and github to catch up on, and also some final tag siblings lookups to migrate to the new system to eliminate the 'loading tag siblings' step on boot. I'll just keep pushing. I'm scheduled to start the parents cache the week after, so it would be great to have all the siblings changes squared away.


0e34a8 No.14706

>>14699

Got an error when trying to search by time imported

DBException
OperationalError: no such column: timestamp
Traceback (most recent call last):
  File "hydrus\core\HydrusThreading.py", line 382, in run
    callable( *args, **kwargs )
  File "hydrus\client\gui\ClientGUIManagement.py", line 4932, in THREADDoQuery
    query_hash_ids = controller.Read( 'file_query_ids', search_context, job_key = query_job_key, limit_sort_by = sort_by )
  File "hydrus\core\HydrusController.py", line 615, in Read
    return self._Read( action, *args, **kwargs )
  File "hydrus\core\HydrusController.py", line 194, in _Read
    result = self.db.Read( action, *args, **kwargs )
  File "hydrus\core\HydrusDB.py", line 1028, in Read
    return job.GetResult()
  File "hydrus\core\HydrusData.py", line 1755, in GetResult
    raise e
hydrus.core.HydrusExceptions.DBException: OperationalError: no such column: timestamp

Database Traceback (most recent call last):
  File "hydrus\core\HydrusDB.py", line 629, in _ProcessJob
    result = self._Read( action, *args, **kwargs )
  File "hydrus\client\ClientDB.py", line 12868, in _Read
    elif action == 'file_query_ids': result = self._GetHashIdsFromQuery( *args, **kwargs )
  File "hydrus\client\ClientDB.py", line 7215, in _GetHashIdsFromQuery
    files_info_hash_ids = self._STI( self._c.execute( select ) )
sqlite3.OperationalError: no such column: timestamp




6b5b94 No.14707

Hey dev, I'm having some trouble with the Twitter subscriptions. I just tried to set one up, and it says to put in the username, but whether I put the handle in with or without an @, it always says the subscription appears to be dead when it tries to update. I've tried this with multiple different accounts. Any idea what could be causing that?


ef2013 No.14708

I had a good week. I fixed some recent search bugs and I capped off the new siblings work. Everything siblings is now running off the new cache, so the slow 'loading tag siblings' step of boot no longer occurs!

The release should be as normal tomorrow.

>>14706

Thank you for this report, I am sorry for the trouble. I believe this happens when the 'time imported' search is mixed with certain other search predicates, either tags or some more unusual system preds. It should be fixed tomorrow, but let me know if you have any more issues.
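
For anyone curious, a minimal sketch of this class of error (made-up tables, not the real schema): sqlite raises 'no such column' when the generated SELECT mentions a column belonging to a table that that particular query never joined in.

import sqlite3

con = sqlite3.connect( ':memory:' )

con.execute( 'CREATE TABLE current_files ( hash_id INTEGER, timestamp INTEGER );' )
con.execute( 'CREATE TABLE files_info ( hash_id INTEGER, size INTEGER );' )

# fine: timestamp lives in the table this query actually reads
con.execute( 'SELECT hash_id FROM current_files WHERE timestamp > 0;' )

try:
    
    # broken: the query was built against files_info only, but still mentions timestamp
    con.execute( 'SELECT hash_id FROM files_info WHERE timestamp > 0;' )
    
except sqlite3.OperationalError as e:
    
    print( e ) # no such column: timestamp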

>>14707

I am not sure. This is on the new nitter downloader? Can you give an example username, so I can try my end?


0e34a8 No.14711

>>14707

are you using "nitter media lookup"? I have my subs for media and retweets split up so the "media and retweets" one might not have worked.



