
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.


New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Experienced user with a bit of cash who wants to help out? ---> Patreon

Current to-do list has: 2,017 items

Current big job: Catching up on Qt, MPV, tag work, and small jobs. New poll once things have calmed down.


[YouTube embed]

69d547  No.8101

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v295/Hydrus.Network.295.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v295/Hydrus.Network.295.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v295/Hydrus.Network.295.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v295/Hydrus.Network.295.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v295/Hydrus.Network.295.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v295.tar.gz

I had a great week. I mostly fixed and dejanked stuff. Also, the thread watcher now supports 420chan!

fixes and cleanup

The RuntimeError popups some users were seeing after restore-from-minimise events are completely fixed. Some related search page and tag autocomplete draw and timer code is also cleaned up, and a memory leak was fixed along the way.

I also moved some more stuff to the new job scheduler. The client should use fewer threads and less idle CPU. I am still very pleased with this new system. There is also more to do here, so I will continue moving jobs to it.
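
To illustrate the idea (just a sketch of the general pattern, not hydrus's actual scheduler; all the names here are made up): instead of one timer thread per recurring job, a single worker thread keeps a time-ordered queue and runs whatever is due next.

import heapq
import itertools
import threading
import time

class JobScheduler(threading.Thread):
    """One shared worker thread running many recurring jobs, instead of one thread per job."""

    def __init__(self):
        super().__init__(daemon=True)
        self._jobs = []                    # heap of (due_time, tiebreak, period, work)
        self._counter = itertools.count()  # tiebreaker so the heap never compares callables
        self._lock = threading.Lock()
        self._wake = threading.Event()

    def add_job(self, period_seconds, work):
        with self._lock:
            heapq.heappush(self._jobs, (time.monotonic() + period_seconds, next(self._counter), period_seconds, work))
        self._wake.set()  # wake the worker in case this job is due sooner than its current wait

    def run(self):
        while True:
            with self._lock:
                due_time = self._jobs[0][0] if self._jobs else None
            now = time.monotonic()
            if due_time is not None and due_time <= now:
                with self._lock:
                    _, _, period, work = heapq.heappop(self._jobs)
                try:
                    work()            # run the due job on this one shared thread
                except Exception:
                    pass              # a real scheduler would log this
                finally:
                    self.add_job(period, work)  # reschedule it for its next run
                continue
            # sleep until the next job is due, or until a new job is added
            self._wake.wait(timeout=None if due_time is None else due_time - now)
            self._wake.clear()

Hypothetical usage would be something like scheduler = JobScheduler(); scheduler.start(); scheduler.add_job(300, do_memory_maintenance), with do_memory_maintenance standing in for whatever background work needs to run every five minutes.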

cancelling file queries

If you start a big file query for, say, just system:everything and then add a new search predicate, the original search will be cancelled early. I have added more cancel 'checkpoints' to the search code, so large laggy searches will cancel much earlier than before.

Furthermore, if a search takes more than three seconds, a little 'stop' button will now appear beside the autocomplete input. If you accidentally start a very large search, please hit this button to stop it while you sort out what it was you actually wanted. The search might not free up the database immediately, but it typically only takes a few seconds.
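
As a rough sketch of what a cancel 'checkpoint' means here (illustrative code, not the actual hydrus search internals; fetch_matches stands in for one expensive per-predicate database query):

import threading

def search_files(predicates, fetch_matches, cancel_event: threading.Event):
    results = None
    for predicate in predicates:
        # checkpoint: bail out early if the user stopped or replaced this search
        if cancel_event.is_set():
            return None
        matches = fetch_matches(predicate)
        results = matches if results is None else results & matches
    return results

The 'stop' button (or typing a new predicate) just calls cancel_event.set(), and the running search gives up at its next checkpoint rather than grinding through the whole query.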

420chan and parsing ui

I wrote a 420chan thread parser. It should add and link itself when you update, so feel free to drop 420chan thread URLs on the client just like you would a 4chan or 8chan URL, and let me know if anything doesn't work.

For advanced users: I've also significantly reworked the layout of the new edit parsers ui. The 'test' panel is now beside the main 'edit' panel, and the 'info' panels are dropped for help buttons that link to the html help. There are numerous bug fixes and little workflow improvements as well. I expect to continue working on the help over the coming weeks, but all this stuff is getting there.

thoughts on underscores

There are many tags like 'blue_eyes' that come from sources that can't deal with spaces. We have fixed a ton of these with siblings ('blue_eyes'->'blue eyes'), but since this is such a common problem, I am also considering just adding a rule to convert all underscores in tags to spaces. I intend to still allow underscores using an escape character: probably double underscores would render as a single underscore, while single underscores would be converted to spaces. That covers tags where the underscore is genuinely part of the tag (like 'series:watch_dogs', although I see most sources just use "Watch Dogs" anyway), but it will take a bit of human intervention (likely a handful of siblings) to fix on our end. I figure the effort to carve out space for the exceptions will be much less than ultimately having to add a sibling for almost every simple tag that has a space.
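
To make that concrete, the conversion would be something like this (a sketch of the idea, not finalised code):

def render_tag(tag: str) -> str:
    placeholder = '\0'                  # stash escaped (double) underscores first
    tag = tag.replace('__', placeholder)
    tag = tag.replace('_', ' ')         # lone underscores become spaces
    return tag.replace(placeholder, '_')

assert render_tag('blue_eyes') == 'blue eyes'
assert render_tag('series:watch__dogs') == 'series:watch_dogs'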

Making this sort of shift would be a big step affecting multiple systems, and I want to think about it for a while. I also want to know what you think, so if you like, please check out this poll:

http://pollmaker.vote/p/JKTWR07U

And if you have longer thoughts, please let me know. Again, I won't do this next week, so there is a good bit of time to think and talk about it.

full list

- fixed the runtimeerror popups that would come up on restore from minimise or main gui move after the complete destruction of a general search page

- cleaned up some main gui move code generally, and removed a memory leak on the way

- file queries can now cancel at multiple checkpoints during the first phase, saving a bunch of CPU time on certain large queries that are replaced mid-search

- after a file query has been going three seconds, a little 'stop' button will appear beside the regular autocomplete input. clicking this will cancel the current query! it will stop when it next hits one of the checkpoints above

- the floating autocomplete dropdown should be less flickery in some circumstances

- dejanked some more file query code

- added a 'clear orphan file records' entry to the database->maintain menu. this looks for and purges orphan file rows as you may have seen a notification about recently. this mostly affects the duplicate filter system

- fixed up the delete file code to be a bit more robust–it should lead to fewer orphans in future

- all the parsing edit panels have new layout: they no longer have info panels but instead a help button that points to the html help, and the edit and test panels are now beside each other rather than in notebook pages

- harmonised a bunch of the parser ui test panel code, refactored how the results are stored

- the test panel now presents a better 'preview' of what it contains (the actual text control has like a 64KB text limit on some OSes and has unreliable text encoding rules, so using it as the raw container for the example data has led to problems), and we now read and write the example data with a couple of new copy to/paste from clipboard buttons

- wrote another new test panel for subsidiary page parsers that does the separation formula stuff a bit better. the test results now come back for all posts as well, rather than just the first

- added a new 'deeply_nested_dialog' frame key to options->gui for the parsing ui to better lay out five or six nested dialogs in a nice 'topleft' way

- the 'topleft' frame padding is reduced from 50 to 24 pixels to better fit in deeply nested dialogs

- misc parsing ui improvements and little fixes

- the manage url classes and manage parsers dialogs now have a better 'add defaults' button that allows you to just select the defaults you want (by name) from a checklistbox

- wrote a parser for 420chan and added it to the defaults. it should automatically add and link up when you update

- if the install_dir/db directory is not writable-to (e.g. you have installed the program to a protected location like "C:\Program Files"), the client and server will default to ~/Hydrus as the db directory

- wrote a new 'TagSummaryGenerator' class that will do 'bunch of tags'->'nice summary string' conversions for the thumbnail banners and export filenames (there is a rough sketch of the idea just after this list)

- substituted some static tagsummarygenerators to do thumbnail banners

- did the same for the media viewer top-center namespace summary

- started some edit ui for tagsummarygenerators–I'll have some proper customisable stuff in the near future

- moved the background memory maintenance and misc daemons to the new job scheduler, reducing thread count and idle CPU some more

- added a debug 'show scheduled jobs' entry for the new job scheduler system

- decompression bomb failures will no longer count towards a subscription's fail count, so having a bunch of them won't abandon a sync

- fixed and otherwise improved a potential crash condition when a thumbnail panel closes while a menu is popup'd on it

- to forestall this program instability, the thumbnail window will no longer replace while its menu is open. the behaviour after this delayed window delivery is slightly borked, but it isn't a crash so I'm ok with it for now

- removed some other jank from the thumbnail media panel swap code

- non-cancellable modal popups will no longer have the 'close' button. trying to close them with the dialog's X button will still give the 'sorry lad, can't cancel' error

- rating and file service system predicates for services that no longer exist will now render a neat 'unknown x system predicate' presentation string rather than throwing an error

- searches in 'all known tags'/'specific tag domain' no longer provide system:untagged, wew

- some delayed events are now posted in a more thread-safe way

- misc refactoring
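
As a rough sketch of the TagSummaryGenerator idea mentioned above (the real class's interface is not shown here, so these names and defaults are illustrative only): pick some namespaces in display order, pull the matching subtags, and join them into a banner- or filename-friendly string.

class TagSummaryGenerator:

    def __init__(self, namespaces, separator=' - '):
        self._namespaces = namespaces   # e.g. ['creator', 'series', 'page'], in display order
        self._separator = separator

    def generate(self, tags):
        parts = []
        for namespace in self._namespaces:
            prefix = namespace + ':'
            subtags = sorted(t[len(prefix):] for t in tags if t.startswith(prefix))
            if subtags:
                parts.append(', '.join(subtags))
        return self._separator.join(parts)

summary = TagSummaryGenerator(['creator', 'series', 'page']).generate({'creator:someone', 'series:metroid', 'page:4', 'blue eyes'})
# summary is 'someone - metroid - 4'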

next week

I am getting over several humps now. The new parser code seems to be working out, and the wx update and its crashes seem 99% dealt with, so I want to catch up on some things that have been put aside. I've done some prep work for user-editing the 'banner text' on thumbnails, so I'd like to finish that, and there are some longer term 'nice to fix' bugs that I'd like to look at. I'd also like to fit in some more imageboard parsers.

BTW: The old computer my server runs on is having some OS update trouble right now and keeps shutting off and failing to come back up properly. Several mornings this week I have tried to check my PTR petitions and discovered it was down again. I went nuclear on it today, killing a whole bunch of 'it is ok to reboot for update' settings, but I have also moved my 'move the server to newer hardware' job way up. In any case, if you try to pend some tags and get a ConnectionError, please let me know and/or try again in a few hours. I hope to have this situation fixed within the next few days.


00e3c7  No.8103

File: a3e75f0210370e1⋯.jpg (113.8 KB, 1280x720, 16:9, [HorribleSubs] Kemono Frie….jpg)

I'm impressed with that changelog. Thanks!

Maybe the PTR server will eventually get bothersome enough to be put in a rkt container or VM, so you can ignore the host OS updates for longer, or employ comparatively quick fixes if anything goes wrong? Not that I was bothered by the downtime.


9ef6aa  No.8105

File: 27661e816406217⋯.png (31.06 KB, 557x273, 557:273, pagination communication a….png)

There's some new weirdness with thumbnail page numbers in collections

>automatically replacing underscores in tags with spaces

Should be optional, because that would bork my tags from pixiv filenames.


fb19c7  No.8107

Can we get parsing for reddit and downloading from gfycat plis?


943017  No.8108

>I am also considering just adding a rule to convert all underscores in tags to spaces.

be careful with this, many artists have underscores in their professional name and that would fuck those tags up.

Some artists with underscores in their names have characters named something super common like "joe", so people use the tag character:joe (artist_name) to make clear which joe is in the picture.

here's my suggestion: apply this rule only to non-namespaced tags.


943017  No.8109

also, do you have a list of sites/imageboards to make parsers for?

if you do, here's a few to add

furaffinity.net

inkbunny.net

weasyl.com

lulz.net

u18chan.com

yeah yeah i know, im a huge degenerate please rape my face


943017  No.8110

File: 328dd53c94f3688⋯.jpg (19.65 KB, 409x150, 409:150, error.JPG)

sorry for a third post, but i'm having problems uploading my tags to the PTR.

i'm getting the following error. it started happening yesterday morning while i was still on 294; i updated to 295 and it's still happening, so it doesn't seem to be related to the version.


943017  No.8111

File: 959e3f1e7712480⋯.jpg (38.33 KB, 405x314, 405:314, error 2.JPG)

File: a39dd099115759c⋯.png (23.5 KB, 512x512, 1:1, thonk.png)

and double sorry for the quadruple post, but i should have posted the traceback to help you more in the previous post

what did it mean by this?


55bd0e  No.8115

>>8101

On the underscore

Is it hard to do or time consuming?

Does the work need to be done to advance anything major or is it cleanup work?

If it's something that needs to be done to implement something else, by all means do it. I honestly could not give one flying fuck about the removal, as there is little use in keeping them.

However, if it takes real time you could devote elsewhere and is not important for a new feature, then fuck it, put it off till you have an off week where you aren't feeling up to writing code and can't really do anything major but still want to accomplish something.


91c0d0  No.8116

>>8105

how do you create sorting schemes by custom tags?


206041  No.8117

update on this shit

>>8111

>>8110

it seems to have fixed itself, after i went into services and refreshed the accounts.


1ed9d0  No.8121

File: a6889c54135cf12⋯.png (93.98 KB, 1358x647, 1358:647, client_2018-02-23_18-01-19.png)

So I posted in the thread from 2 versions ago about the wx board giving out 404 errors, and you suggested that I use the page downloader for the older files in the set, but sadly that didn't do the trick.

I noticed that the files being downloaded weren't coming in the right order, which is kinda important for identifying related files since they usually come one after the other in threads, and also that the page downloader was getting uninteresting mimes because of what seems to be the "&loop=1" appended to the end of the URL.

Is there any way you could get this issue sorted out soon? I'd prefer to use the thread watcher to get all the files, but at the least I'd like the page downloader working.

One other request is to add yuki.la to the parsers too, to get archived threads with filenames and such, which are useful for fishing for sources. Thanks in advance!


cbcf37  No.8124

Will the new parser be able to find hashes from URLs and check if the file is already in database before (re)downloading it?

For example: blah.blah/file_store/830914cdf228e2dd713a2dc3e8df195ce17cd649.jpg &

blah.blah/_images/830914cdf228e2dd713a2dc3e8df195ce17cd649/somefile.jpg

I haven't had a chance to mess around with it yet.


9ef6aa  No.8125

>>8116

Go to "file->options" then go to "sort/collect" then type the namespaces you want to sort or collect by, separated by - characters

namespace1-namespace2-namespace3

Because - is the separator, you can't sort or collect by any namespace containing a - character, which is a weird limitation but applies nonetheless.
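
A tiny illustration of that limitation (assumed behaviour based on the description above, not hydrus's actual parsing code): the option string is split on '-', so a namespace that itself contains a hyphen falls apart into bogus pieces.

def parse_sort_namespaces(option_string: str):
    return option_string.split('-')

print(parse_sort_namespaces('creator-series-page'))
# ['creator', 'series', 'page']
print(parse_sort_namespaces('dc-comics-page'))
# ['dc', 'comics', 'page']  <- the 'dc-comics' namespace is broken in two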


91c0d0  No.8126

>>8125

Thanks fam. Seems like a simple workaround would be to change the separator to a double hyphen, though I'm not sure just how many namespaces already have hyphens in them.


68f33d  No.8128

Hey fam retard here, if I want to blacklist a tag like say 'black hair' how do? I don't understand how this works


206041  No.8129

>>8128

black list? what do you mean by this? you mean use it as a search parameter?

if that's the case, just type it with a - at the beginning, e.g.:

anime

-black hair

it will bring up all pictures tagged "anime" and rule out any that are tagged "black hair".


f77c76  No.8130

If you force Hydrus to process PTR updates with the "process now" button, and then stop it after a while, the client will freeze for a long time after the "processing updates finished" popup. On a smaller db it took about 15 minutes and on my main larger db I left it for 2 hours before I ended the process.

Any way you can make it not freeze like that? Thanks


f77c76  No.8131

>>8128

Use the censor function. Right-click a tag.


d4d77f  No.8132

There is no good answer for spaces vs underscores. The internet has chosen not to conform to a single standard.

Copied from a stackoverflow answer, here's what you potentially have to deal with:

text with spaces

textwithoutspaces

encoded%20spaces%20in%20URL

underscore_means_space

dash-means-space

plus+means+space

camelCase

PascalCase

" quoted text with spaces" (and single quote vs. double quote)

slash/means/space

dot.means.space


00e3c7  No.8133

>>8130

I also observed this. I'm going to guess it's sqlite doing its thing…?

>>8132

Is right.

That said, I feel regardless that just getting rid of underscores by default is probably a good idea.

The problem should lessen anyhow if better downloaders eventually pre-sanitize tags from more sources.


290501  No.8136

File: 5855716d92addcb⋯.png (971.19 KB, 1920x1080, 16:9, Screenshot from 2018-02-24….png)

I have a problem with the media viewer, the hover-over windows are stuck in these positions and cannot be moved whatsoever, making the viewer really annoying to use.

They also flicker a bunch if the always-display bugfix setting isn't enabled.


94fac6  No.8142

>>8136

>all the windows fucked up and shit

>that scared "w-what happened?" crying face

I say you leave it for the novelty.


55bd0e  No.8143

>>8139

Haven't seen the massive gifs, but that would be funny to see. Pretty much everything massive of mine is massive for a reason. A good reason? No, but a reason.

When you mention decompression bombs and treating them as such, there is an issue: what if I want the image/video? I have to go turn the setting off and then back on.

Honestly, with both of these... well, in a download page you get a log, and it will tell you the failures and let you retry them.

Is there a way to implement a right-click menu option to 'override filter'?

That way it can force-import a file that it either sees as a decompression bomb or as just a massive fuck-off file size.

I think when it's implemented I want to set it so everything over 20MB is a 'check with me' moment. I'm looking at my files and seeing where the threshold is for file size: 20MB returns ~600 files, 15MB returns ~1200.

I think 8MB may be good. I would have to test it a bit, but 8MB returns 5000 files, and given how much I have downloaded, that's still next to dickall and I would be able to manually ok them. However, if I go that low, an 'ignore filter' right-click menu option would be needed.

On that note again, would it be possible to add another type to what the program tells us?

We've got successful, already in db, deleted, and failed.

Would it be possible to change 'failed' for filter reasons over to 'filtered'? This would cover decompression bombs and this maximum file size, just so it's brought to our attention that something needs checking, better than 'failed'.

That also said, can we get an 'ignore filters' option for manual imports? This would make it easier to import things like decompression bombs and files over the max file size without disabling the filter in options each time.


69d547  No.8144

Thank you for voting in the poll! There are many more 'please make it optional' results than I expected, so I am glad I ran it. I will make this change more slowly and have it as a more flexible and optional system.

>>8103

I'm from a low-tech background of running ftp servers under desks, so that's how I run the PTR. It has always just been an old computer I have sitting in a corner, much like how server.exe is a thing you can just throw on a USB stick if you want.

I don't know much at all about VM stuff or more advanced hosting solutions. I wonder if hydrus will ever be a big enough network to go in that direction, or if I will eventually move the server's jobs down into the client and have more and more of a P2P system.

If you know about that stuff and we ever run into a serious problem here, I'd be interested to know what you think would be an appropriate solution.

>>8105

Thanks. I will write an exception or something into my new thumbnail banner generation code to detect integer namespaces and deal with them like "min-max", as before.

>>8107

I am currently working on a big overhaul of my downloader engine. Once I am done with the current parsing help, I will move toward converting the gallery downloaders to the new system. At this point, it will be much easier for me or any other user to create parsers for lots of new sites. Let me know how it works for you as I roll it out!

>>8108

Thanks, this is a great idea.

My general thinking has been that the work of fixing underscores where they are actually desired (by substituting the double-underscore escape character) would be an easier job than the current mess of fixing every instance of 'blue_eyes', but yeah, perhaps a more granular system is the key. Most of this shit is in unnamespaced tags.

In any case, since people do care about this after all, I'll have a more complicated system than just 'replace them all at the db level wew' anyway.

>>8109

Thanks. I will add these to the list. I don't want to get super-bogged down doing this myself (I made the new system user-editable and -sharable precisely so other users could take this work off me), but I will absolutely write these down. If you are HTML or JSON fluent, you might like to get involved in writing some parsers yourself as these systems roll out. It looks like some of the work is going to happen on discord and github–I'll link it all everywhere as this becomes real.

>>8110

>>8111

>>8117

Thank you for this report. Sometimes the client has trouble getting a new session key from a repository. I think there is a timing issue. It is now all working on the new login manager, which at the moment is basically two sticks and a short bit of string. The best solution at the moment is to give it a couple of days to figure itself out or yeah just hit refresh in review services. I'll deal with this properly and improve error messages when I get the login manager proper going.

>>8115

Thank you for your input. I agree personally, but in making hydrus, I have found the entire concept of 'tag siblings' and generally 'A should look like B, imo' is the most complicated and divisive part of the whole deal. People's opinions on whether one style is better or worse vary wildly and for valid (albeit subjective) reasons. I tend to not care all that much about how tags look, but others spend hours on it.

I expect to put more time into several new systems for tag presentation in future, and iterations on existing tag siblings/parents to allow more user-side choice. It does take a lot of time and work, unfortunately.

>>8121

Thank you for this follow-up. I think that 'loop=1' business is the page of images downloader getting the 8chan 'webm viewer' page, like so:

https://media.8ch.net/file_store/e95d2a111b5d260016b9128bb3d1dc6ee178328f3d5d8589b3a403bb21dccaa0.webm

The 'uninteresting mime' part is due to that link actually returning an html page. It looks like it is doing the images ok, since the thumbnail link goes straight to the raw image. What a mess.

Future versions of 'page of images' (when I move it to the new parsing system) will be more flexible and allow fixes to this in the downloader page ui itself, but it will be a while until that is out.

I do not know a clever solution to this at the moment. This is a legacy issue with 8chan that affects a finite number of years-old files, so I don't want to invest a bunch of time fixing it in the thread watcher proper. I will make a fix, though; let me think about it a bit. Maybe I can bump the new 'page of images' up.

>>8124

Yes! And in a generalised way: If you can figure out a way to find a hash in the HTML or JSON of the target page, you can now decode it from hex or base64 (to raw bytes) and then inform the new parsing system whether it is md5, sha1, sha256, or sha512. The client will check and decide whether to download (or just set 'already in db'/'previously deleted') appropriately.
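
Roughly speaking (illustrative code only, not the actual parsing system's api), the decode-and-identify step looks like this: pull the hex or base64 string out of the page, decode it to raw bytes, and work out the digest type from its length before deciding whether to download.

import base64

DIGEST_SIZES = {16: 'md5', 20: 'sha1', 32: 'sha256', 64: 'sha512'}

def decode_hash(text: str) -> bytes:
    try:
        return bytes.fromhex(text)      # hex-encoded hashes, like the examples above
    except ValueError:
        return base64.b64decode(text)   # otherwise assume base64

raw = decode_hash('830914cdf228e2dd713a2dc3e8df195ce17cd649')
print(DIGEST_SIZES.get(len(raw)))       # 'sha1' -> check the db for this hash before downloading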

>>8125

>>8126

This ui is years old and sucks. My apologies–it'll one day be neater.

>>8130

>>8133

Yeah, unfortunately sqlite takes a while to clean up big transactions. We are doing big server-level stuff on desktops, so it can be tricky to schedule this stuff in a way that fits nicely with a ui. If the db writes a million new tags, that's a lot of 4KB blocks to update from the db-journal to the main db file. During that time, the ui won't work well. I usually hide this big delay behind idle time or shutdown maintenance time where you never notice, but if you hit process now, you have to deal with it on its face.

I have updated the warning dialog on 'process now' to say it may take a while to give your ui back even after a cancel. But I don't want to put time into making this too neat, as it has always been more of a debug thing. I recommend that all users let the client do update processing according to the regular maintenance rules, configured however works for them under options->maintenance and processing.

Having said that, 2 hours sounds way too long–is that drive especially full or fragged? If you are running on a HDD that is also a bit slow for other reasons, I recommend only doing update processing during shutdown time. Limit it to ten minute bursts or so and you'll never get massively boshed. Or pause/slow PTR syncing until you next update your hardware.

Although, it could also be that in the long time it took to clean up the old transaction, let's say 20 minutes, idle mode kicked in again and the whole thing restarted! I feel like I have seen this before–again, I recommend not using the 'process now' button except for debug or other testing purposes. I didn't design the system to generally be used pleasantly in that way and didn't hide it behind help->advanced mode for no reason.

>>8132

>>8133

Yeah, my general feeling is that this is an endless battle. There is no ultimate perfect solution, so all I can really work on is a system that lets people head towards what they would prefer.

My fingers-crossed hope is that neural nets will make this all a lot easier to manage in the coming years.

>>8136

Thank you. Please try hitting help->debug->report modes->hover window report mode and then play with these broken hover windows in a variety of ways. This will spawn a fucking ton of popup messages.

Then shut down the client, go into your log, find all the popup stuff, bundle it into a new text file or something, and pastebin/email/discord DM it to me! Any observations you have about when the popups came in (e.g. 'only came up on initial load' or 'got lots during flickering while mousing over the area they should have been in') would also be very helpful.

>>8143

Thanks. I've decided to add this new 'kill big gifs from THE SFM COMMUNITY' rule to 'file import options'. Now that I am thinking of it, it would be reasonable to move 'don't import decomp bombs' to file import options as well–then the retry action would only be another three clicks–'open file import options', 'decomp bombs are ok', 'apply'–rather than the pain of going into the full options.

After taking more feedback from the discord yesterday, I am zeroing in on 32MB as the default for big gifs. It'll be editable if people want different, but the main thing I want to cut out is the stealthy 168MB gif that serves no honest purpose for anyone.

I will keep the ideas of adding a new file result status and having some sort of 'override filter' pipeline in mind, but these would probably be quite complicated to actually do well, so I'll try it simply through file import options first and we'll see how that goes.

If you like some decomp bombs, I recommend you just turn that option off. I've never encountered a malicious file in the wild, and the bad pngs we are talking about don't usually do any real harm–they just blat your CPU and memory for a handful of seconds.
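
For what it is worth, the kind of check being discussed is roughly this (illustrative only; the thresholds and names here are assumptions, not the real file import options):

import os
from PIL import Image  # pillow

MAX_GIF_BYTES = 32 * 1024 * 1024   # the 32MB default being discussed
MAX_PIXELS = 178956970             # pillow's own default decompression bomb threshold

def veto_reason(path):
    size = os.path.getsize(path)
    with Image.open(path) as image:    # note: pillow itself warns/raises on extreme bombs
        width, height = image.size
        is_gif = (image.format == 'GIF')
    if is_gif and size > MAX_GIF_BYTES:
        return 'gif is larger than the configured maximum'
    if width * height > MAX_PIXELS:
        return 'image looks like a decompression bomb'
    return None  # no veto, import as normal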


290501  No.8145

>>8144

When I hover over where the window should be in the viewer, it will appear and display correctly without flickering, just in the wrong position.

But if where the window appears and where it should be overlap and you mouse over it, it will flicker in and out repeatedly.

During the flickering a bunch of message pop-ups will appear; they happen each time it fades in or fades out.

Also if I move the media viewer window between different workspaces, the hover-over windows will (sometimes) be in another random position.

Here is the log: https://ghostbin.com/paste/xqhmc

Also separately I've gotten these exceptions sometimes when using the search bar:


2018/02/25 21:43:52: Uncaught exception:
2018/02/25 21:43:52: AttributeError
'AutoCompleteDropdownTagsRead' object has no attribute '_dropdown_hidden'
File "/opt/hydrus/include/ClientGUI.py", line 3442, in TIMEREventPageUpdate
page.TIMERPageUpdate()
File "/opt/hydrus/include/ClientGUIPages.py", line 771, in TIMERPageUpdate
self._management_panel.TIMERPageUpdate()
File "/opt/hydrus/include/ClientGUIManagement.py", line 3345, in TIMERPageUpdate
self._UpdateCancelButton()
File "/opt/hydrus/include/ClientGUIManagement.py", line 3202, in _UpdateCancelButton
self._searchbox.ForceSizeCalcNow()
File "/opt/hydrus/include/ClientGUIACDropdown.py", line 583, in ForceSizeCalcNow
self.DropdownHideShow()
File "/opt/hydrus/include/ClientGUIACDropdown.py", line 383, in DropdownHideShow
self._ShowDropdown()
File "/opt/hydrus/include/ClientGUIACDropdown.py", line 322, in _ShowDropdown
if self._dropdown_hidden:

2018/02/25 21:43:56: Uncaught exception:
2018/02/25 21:43:56: AttributeError
'Panel' object has no attribute 'IsActive'
File "/opt/hydrus/include/ClientGUI.py", line 3442, in TIMEREventPageUpdate
page.TIMERPageUpdate()
File "/opt/hydrus/include/ClientGUIPages.py", line 771, in TIMERPageUpdate
self._management_panel.TIMERPageUpdate()
File "/opt/hydrus/include/ClientGUIManagement.py", line 3345, in TIMERPageUpdate
self._UpdateCancelButton()
File "/opt/hydrus/include/ClientGUIManagement.py", line 3202, in _UpdateCancelButton
self._searchbox.ForceSizeCalcNow()
File "/opt/hydrus/include/ClientGUIACDropdown.py", line 583, in ForceSizeCalcNow
self.DropdownHideShow()
File "/opt/hydrus/include/ClientGUIACDropdown.py", line 381, in DropdownHideShow
if self._ShouldShow():
File "/opt/hydrus/include/ClientGUIACDropdown.py", line 266, in _ShouldShow
tlp_active = self.GetTopLevelParent().IsActive() or self._dropdown_window.IsActive()


a4fcf4  No.8146

>>8144

> If you are HTML or JSON fluent, you might like to get involved in writing some parsers yourself as these systems roll out.

i'd love to but im a fucking brainlet who cannot learn how to do things unless he's taught by a person. i've tried many times and i just lose focus and get frustrated when shit doesn't compile or math gives an incorrect result.

i'll just wait and give you support by keeping tabs on errors i get.

in other news, i've tagged at least a couple thousand pictures already out of those sites i mentioned earlier, the battle never ends.


f77c76  No.8148

>>8144

>Having said that, 2 hours sounds way too long–is that drive especially full or fragged? If you are running on a HDD that is also a bit slow for other reasons, I recommend only doing update processing during shutdown time. Limit it to ten minute bursts or so and you'll never get massively boshed. Or pause/slow PTR syncing until you next update your hardware.

It's a db with 70k files. client.mappings.db is 5.55GB, client.master.db is 1.66GB. It's on an AES encrypted 1TB 5400rpm WD green HDD. Not full or very fragged. So yeah, not exactly super fast. Works fast enough for anything else in Hydrus though.


32401d  No.8149

>>8144

Since this morning the client doesn't perform any subscription checks or downloads.

I can do duplicates, maintenance, and file searches, but it doesn't check or download.

Bandwidth rules are ramped up, so there should be no problem there.

Multiple subscriptions don't want to check/download, and newly added subscriptions don't check/download either.

It simply stopped in the middle of downloading in the morning and after a restart never continued.

Note: I could only kill -9 it, because it didn't close even after an hour.


91c0d0  No.8150

File: f45939dc1247835⋯.webm (583.39 KB, 1280x720, 16:9, [Coalgirls]_Toradora_15_(….webm)

>>8146

>im a fucking brainlet who cannot learn how to do things unless he's taught by a person. i've tried many times and i just lose focus and get frustrated when shit doesn't compile or math gives an incorrect result.

i feel you


32401d  No.8151

>>8149

Welp, false alarm. I somehow paused the subscription synchronisation.

Some hotkey or something.


f2cdb5  No.8152

>>8149

Same thing happened to me a while ago and I haven't been able to get it to work again so far.

They are all stuck in a state that says they will check again after I press OK to exit the subscriptions window, but they never start to check.


00e3c7  No.8155

>>8144

>Yeah, unfortunately sqlite takes a while to clean up big transactions

I also think it has pretty bad HDD access patterns when it can't run fully in memory. At least I always had big performance issues with bigger sqlite databases, while things like postgres, h2, rocksdb and quite a few others still seemed to handle the combination of a lot of data manipulation with only a bit of RAM and the rest on HDD just fine.

But I don't suppose switching away from SQLite would be easy, right?

> so all I can really work on is a system that lets people head towards what they would prefer

Yea. Maybe just do what seems easy and right to you on the PTR, like dropping the underscores for spaces and doing double underscores or whatever, and then write the parsers to respect that so most submissions are already good.

If it turns out people really want different tag formats, maybe there should be a special "sibling"/"parent"-like category (but separate from the manual one) to take script-generated mappings. Something that cannot mix with the manually entered data and can thus be dropped and re-generated with new scripts at will.

> My fingers-crossed hope is that neural nets will make this all a lot easier to manage in the coming years.

Might happen, sure. Maybe your PTR will help train these.


e1db8a  No.8157

I occasionally get these popping up back to back whenever I search for a tag. Other than that I don't notice any problems.


AttributeError
'AutoCompleteDropdownTagsRead' object has no attribute '_dropdown_hidden'
File "include\ClientGUI.py", line 3442, in TIMEREventPageUpdate
page.TIMERPageUpdate()
File "include\ClientGUIPages.py", line 771, in TIMERPageUpdate
self._management_panel.TIMERPageUpdate()
File "include\ClientGUIManagement.py", line 3345, in TIMERPageUpdate
self._UpdateCancelButton()
File "include\ClientGUIManagement.py", line 3202, in _UpdateCancelButton
self._searchbox.ForceSizeCalcNow()
File "include\ClientGUIACDropdown.py", line 583, in ForceSizeCalcNow
self.DropdownHideShow()
File "include\ClientGUIACDropdown.py", line 383, in DropdownHideShow
self._ShowDropdown()
File "include\ClientGUIACDropdown.py", line 322, in _ShowDropdown
if self._dropdown_hidden:


AttributeError
'AutoCompleteDropdownTagsRead' object has no attribute '_dropdown_hidden'
File "include\ClientGUI.py", line 3442, in TIMEREventPageUpdate
page.TIMERPageUpdate()
File "include\ClientGUIPages.py", line 771, in TIMERPageUpdate
self._management_panel.TIMERPageUpdate()
File "include\ClientGUIManagement.py", line 3345, in TIMERPageUpdate
self._UpdateCancelButton()
File "include\ClientGUIManagement.py", line 3202, in _UpdateCancelButton
self._searchbox.ForceSizeCalcNow()
File "include\ClientGUIACDropdown.py", line 583, in ForceSizeCalcNow
self.DropdownHideShow()
File "include\ClientGUIACDropdown.py", line 383, in DropdownHideShow
self._ShowDropdown()
File "include\ClientGUIACDropdown.py", line 322, in _ShowDropdown
if self._dropdown_hidden:


94fac6  No.8159

File: 69e4fade7433769⋯.jpg (157.77 KB, 768x1024, 3:4, 69e4fade743376958ef9e0bce8….jpg)

Just updated from 286 to 295 (I stopped browsing 8ch for a while and forgot about the weekly updates), and tag downloading is fucked for existing files.

I tried downloading random files from gelbooru; it worked and got the tags I asked for (unnamespaced on one test, creator and character on the second, and none on the third).

After that I tried downloading tags for 4 files from both gelbooru and danbooru: one existing file and the three recently imported ones from the previous test. Not a single one of the files registered the new tags, not even the one I had downloaded no tags for. I tried both the 'character namespace only' and 'all tags' options, but neither worked.

Did you put some "don't import tags for existing files" option in one of the latest releases, or is shit actually fucked?

I accidentally doubled my pics to 20k after importing from a folder, but I don't think it's affecting anything, since there are no tag operations occurring in the background.


b701bd  No.8161

Dunno if it will be fixed in the new release that will come up in a few hours, but after a "Corrupt EXIF data" warning it stops downloading altogether.

This causes the "Waiting for subs to finish" hang.

2018/02/28 19:03:50: /media/hydrusnetwork/PIL/TiffImagePlugin.py:742: UserWarning: Corrupt EXIF data. Expecting to read 2 bytes but only got 0.


69d547  No.8162

File: 1887dc6aeb7b111⋯.jpg (102.13 KB, 800x1150, 16:23, 1887dc6aeb7b11145438720e9b….jpg)

>>8145

Thank you. I ran out of time this week, but I have recorded this all and will look at it closer hopefully next week.

The tracebacks you saw should be completely fixed in today's release.

>>8146

>>8150

np lads, thank you for your ongoing feedback.

>>8149

>>8151

>>8152

Thank you for these reports. I think maybe some unusual error states are pausing the whole system or otherwise setting some strange 'oh fugg, don't do any subs for a bit, I think the network is down' flag somewhere and then not recovering from it correctly. I will look at this soon. Please let me know if the situation improves or otherwise changes in the coming weeks.

>>8155

Thanks. You are absolutely right on the sqlite front. I could be convinced that a different db engine worked better for some of our needs and also had better python support, but the work to change it atm would be far more than just improving the current system. And since everyone is moving to SSDs for their system drive, I think making it easier to have a hydrus client with db on the fast drive and files on the slow drive is probably a good plan that'll ameliorate this anyway. My decent laptop (on an SSD) processes so fast I don't even think about it any more, it is like 30k rows/s or something. About 10s a day to keep up with the PTR.

I am still thinking about siblings and parents and other tag replacement stuff. Most of my options right now are band-aids on duct tape on rotten wood, and I am running out of 'easy' work as well. I'm considering doing some stuff at the db-end, which will be a decent whack of work but make a lot of these front-end changes easier and result in less CPU overall as the system grows. This system is just starting to teeter into 'fucking laggy waste of CPU m8' territory, so it might be worth attacking the whole thing, top to bottom. I'll just find a spare fifty hours to do that. :^)

As for neural network stuff, yeah, I'm really hoping the breddy huge tag 'corpus' we are building with the PTR will help as a training system. I am very keen (over the coming years) on creating a space for Anons to burn ML CPU cycles on their own machines. I'm still pretty ignorant on the actual nuts and bolts of it though, so I can't speak with any honest expertise.

>>8157

Thank you. These should be completely fixed in today's release. Let me know if you still have any problems!

>>8159

If you click the cog icon on the download page and say 'get tags for files where url known and already in db (this downloader)' and try again, does it work then? The logic of when tags are applied in these cases is a little fine-grained, something like:

If the URL is new:
    get the file and tags
    add the file to db
    if the file is new:
        add the tags to db, set status of 'successful'
    elif the file is already in the db:
        add the tags to db, set status of 'already in db'
else: (the URL is known)
    if the URL belongs to a file already in the db:
        if the cog icon says to fetch the tags from the server:
            get the tags, add them to db
        set status of 'already in db'

I default to 'off' on the cog icon to save bandwidth and time on files the client has seen (and usually parsed tags for) before.

The cog icon is a stupid icon and is in a bad place. It was always a bit of a debug thing, and I haven't yet done anything with it during this current downloader overhaul. Maybe it should be in 'tag import options'? If turning this on does solve your problem, what ui improvements would you recommend I make so you would have spotted this the first time around?


94fac6  No.8164

File: 63ad6d5a880e02a⋯.png (42.61 KB, 1152x628, 288:157, Untitled.png)

File: 796af784cfb1cbf⋯.jpg (74.66 KB, 1050x1523, 1050:1523, 796af784cfb1cbfa4dae284c82….jpg)

>>8162

>If you click the cog icon on the download page and say 'get tags for files where url known and already in db (this downloader)' and try again, does it work then?

Yeah, that works. Thanks.

>Maybe it should be in 'tag import options'?

I think that'd be better. It's one of the two places I first looked for that option, the second being the settings menus.

I didn't even notice the cog in there, though I am fucking blind to things in front of me.

>what ui improvements would you recommend that I could make so you would have spotted this the first time around?

Aside from putting it into the tag import options, the placement of the cog would help. You have these kinds of clusters of buttons (red lines), which cover a large area, so your eyes wander around there and you end up easily finding the buttons. That cog, however, is kind of in a desolate spot with nothing around it in its subsection.

Putting it near something wide (e.g. the "stop after this many files" thing, like in the pic) would make it more visible, I think.


b701bd  No.8167

>>8162

Regarding the pausing due to an error: as I stated in >>8161, if the client gets "UserWarning: Corrupt EXIF data." while doing subs, it will freeze the sub, prevent the client from closing, and after a forceful shutdown set "pause subscription synchronisation" to true.


69d547  No.8261

>>8164

👌

>>8161

>>8167

Thank you–sorry, I missed this the first time around. I will check this out this week. Can you give me an example sub search or a specific page that will give a bad file like this?



