
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.

New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Experienced user with a bit of cash who wants to help out? ---> Patreon

Current to-do list has: 1,783 items

Current big job: Finishing duplicate db overhaul and filter workflow improvements



505cb1  No.7497

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v286/Hydrus.Network.286.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v286/Hydrus.Network.286.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v286/Hydrus.Network.286.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v286/Hydrus.Network.286.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v286/Hydrus.Network.286.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v286.tar.gz

I had a great week. There are a bunch of small improvements and fixes, and you can do some important new things with sessions.

session improvements

You can now save 'page of pages' into sessions! Just right-click on a page of pages tab to see the new option. The 'save session' menu also lists the existing session names to make it easier to re-save processing sessions.

You can also append sessions from the tab menu, and they'll insert in that location. They will spawn inside 'page of pages' pages named for the session. The idea here is to allow you to load sessions in and out more easily. I have never really been happy with the clunky workflow of sessions, so I hope this helps.

I'm open to doing more on this–like perhaps adding some auto-save functionality to our new page of pages sessions. Anyway, this is a big enough change on its own, so please play with it and let me know what you think.

other highlights

The file import status window (which lists all your import paths/urls from any import context) now has an 'open sources' menu entry on its right-click menu. It will open urls in your web browser and file paths in your file explorer. (Although Linux can't do the file explorer part, unfortunately.)

While doing some booru work and talking to some users, we discovered that booru.org seems to have messed up its SSL certificate. If you go to https://booru.org in a browser, you'll get the 'add exception' page with an error about the cert being set up for hypnohub.net. I presume they mixed up their certs or something. I am not sure how new this error is–I seem to remember it working right before. In any case, if you use hydrus to download from any domains that are explicitly ***.booru.org (by default, the furry@booru.org booru does), you may get connection errors. Please go to network->manage boorus and then click restore defaults->furry@booru.org, which will correct the initial gallery search url to 'http'.

The shortcuts system now allows you to set 'content' shortcuts (tag and rating mappings, typically for the 'media' shortcut domain) to explicit 'set' in addition to the old 'flip on/off' behaviour. Set will only ever set on.
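A minimal sketch of the two behaviours, assuming a tag mapping modelled as a set (the function name here is made up for illustration, not hydrus's actual API):

```python
# Hedged illustration of 'flip' vs 'set' for a boolean tag mapping.
def apply_shortcut(tags, tag, mode):
    tags = set(tags)
    if mode == 'flip':
        # old behaviour: toggle the tag on/off
        if tag in tags:
            tags.discard(tag)
        else:
            tags.add(tag)
    elif mode == 'set':
        # new behaviour: only ever sets on, never removes
        tags.add(tag)
    return tags
```

So a 'set' shortcut is safe to mash repeatedly, while 'flip' would undo itself on the second press.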

If you run in advanced mode, all thumbnails will now have a 'reparse files and regen thumbnails' right-click menu entry. This updates any old borked files that have odd rotation or resolution or duration to the best the newest code can do. It also regenerates thumbnails. I've also improved the update pipeline here, so you'll see both the new metadata and thumbnail immediately!

I have been putting more time into reducing some ui overhead. Clients that have many import pages open should find they run just slightly faster and with less ui lag jitter, particularly when several queues are actively downloading at once.

Also, clients with many files overall will now perform system:similar_to queries much much faster. These queries should now typically always come in faster than a second.
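For context, similar-file searches of this kind typically compare fixed-length perceptual hashes by Hamming distance. A hedged sketch of that comparison step (an illustration of the general technique, not hydrus's actual implementation):

```python
# Compare 64-bit perceptual hashes (as integers) by counting differing bits.
def hamming_distance(phash_a, phash_b):
    return bin(phash_a ^ phash_b).count('1')

# Keep only candidates within the given bit-distance of the query hash.
def similar_to(candidate_hashes, query_hash, max_distance=4):
    return [h for h in candidate_hashes
            if hamming_distance(query_hash, h) <= max_distance]
```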

full list

- simplified how the thread watcher assigns DEAD and 404 status–it should do it more reliably now

- thread watchers now publish their page names (including updating dead/404 status) right after they check

- the save session menu under pages now lists the existing sessions for quick-save–if you select one of these, it will throw up a yes/no overwrite confirmation dialog

- 'appending' sessions now loads them in a new page of pages named as the session name!

- right-clicking on a page of pages now gives you a 'save this page of pages to a session' option

- right-clicking on the page tabs lets you append a session anywhere

- 'load' session is now 'clear and load', and it throws a yes/no dialog for confirmation

- the gallery pending queries box now allows multiple selection with ctrl/shift clicks. it'll even move up/down correctly on non-contiguous selections!

- the gallery query input is now a new combined textctrl-and-paste button control that will appear in other places across the program

- the page of images input now also uses this new control

- the page of images and raw urls queues will now ignore any input that does not start with 'http'

- pasting a list of texts from the clipboard will now typically strip the pasted content of leading or trailing whitespace

- some text and tag cleaning is a bit faster and neater

- the file import status panel now supports 'open sources' on its right-click menu–it opens in your web browser or file explorer as appropriate. (linux can't do file explorer though)

- thumbnails will no longer 'fade' when their pages are not visible, reducing CPU load in several contexts

- thumbnail fade will use less CPU overhead on very fast computers

- the timer loops that all import pages use to keep themselves updated are now harmonised into one loop at the top gui level (this should reduce some idle CPU overhead and improve some ui responsiveness)

- content (tag and rating) shortcuts can now be set to 'flip on/off' or 'set'

- the manage subscriptions dialog now waits for any currently syncing subs to cancel and save themselves before opening itself (keeping it all synced and rollback-proof)

- added a 'watchable' url class type and wrote examples for 4chan/8chan thread urls

- added 4chan/8chan file url classes to the examples, which if added will auto-hide them from the media viewer

- fixed 'add parameter' in edit url match panel

- simplified some stuff in the existing parsing system

- added preliminary 'url type' support to the parsing system

- the html formula parsing object can now return the full child html of the node it finds

- the new url and html stuff doesn't do anything yet–but it will in the new downloader engine

- added a bunch more booru entries to the url classes defaults. I will continue filling these out, hopefully setting comprehensive defaults next week

- the advanced mode thumbnail right-click menu lets you reparse and regen thumbnails for any files! if you have rotated or stretched files, you can now fix them from the menu!

- added new tools to let the client update media file metadata and thumbnails live!

- the new reparse and regen should update file metadata and thumbnails live!

- improved how some 'quiet' errors are printed to the log. they'll now have the full trace of the error as well as the stack

- added a daemon_report_mode debug mode. it throws up a popup every time a daemon fires its callable

- fixed an issue with the editstringtostringdictcontrol

- fixed an issue with loading (and updating to a new version) import pages or import folders with some kinds of unicode paths

- you can now set the 'woah, you should close some pages' warning value under options->gui. its default remains 165 pages

- system:similar_to searches should be a lot faster on clients with many files

- the manage subscriptions and edit subscription panels now both list their num_urls summary column as '22' if all done but '11/22' if not. this column also sorts based on percentage completion, then num total, then num done

- the database migration dialog now uses the new listctrl–its buttons are now also clever, and will disable when they are invalid

- misc fixes

- fixed an index bug in the new listctrl after certain types of 'setdata' call

- cleaned up some search code

- some veto code cleanup

- cleaned and harmonised how text is pulled from the clipboard

next week

I plan to spend the two weeks from the 20th to the 3rd focusing on a big ui update, so next week will be the last normal work week of the year. I would like to catch up on outstanding bugs and maybe tie up something neat for the break.

0beeec  No.7498

Ok, with being able to save page of pages as their own sessions, I'm splitting my current cluster fuck session up into other sessions that are ~40k images in total. This should eliminate most of the program's issues with massive amounts of files being open.

Is there a way to change what a page of pages is saved as, though? Right now it saves it as

[user] pop name

Personally I want to change [user] to [PoP]

While I'm talking about saving sessions: prior to this version, I had really only been saving 2 kinds of sessions, 1 was the pre-upgrade session and 2 was the date and time save.

Is there any way to, let's say, change the default session name to the current date and time?

>>7423

Is the unsetting debug option in yet? I'm not obviously seeing anything, but I will also admit I didn't comb through everything.


83762e  No.7499

>The shortcuts system now allows you to set 'content' shortcuts (tag and rating mappings, typically for the 'media' shortcut domain) to explicit 'set' in addition to the old 'flip on/off' behaviour. Set will only ever set on.

I can't seem to understand how this works.

I have a shortcut inside media that reads:

>shortcut:f1

>command: set ratings "1.0" for x

However, upon pressing F1, nothing happens, even after refreshing. I tested to make sure the F1 key worked with another basic command, and it seems to work.

So I'm wondering what I'm doing wrong.


0beeec  No.7500

>>7498

Going through my session, I'm noticing that top level page of pages don't have anything in front of them, but if you go down a level they get [user] in front of them.

I honestly like this, but wish I could change the [user] bit if only to make them easier to differentiate from actual sessions.


0beeec  No.7501

Ok, had an issue that's ultimately my fault, however it led to a suggestion.

So, my session, I found out when I consolidated everything, was 460k images big. Ok, that's fucking huge.

I have my session saved. I went on to splitting everything into chunks and came out with 15 chunks, all under 50k, but because of how some things were sorted, some were only 8000.

so, now to get a new blank tab

-and I want to make a suggestion to have a 'restart as blank tab' option, as the last time I did the test, loading up 60-100k images and then removing those tabs did not improve performance. This may have changed, but I would still prefer to start fresh.

So getting the blank tab loaded, I go to load up the first image set… and 10 seconds later I realize I fucked up and saved over the session.

I know, my own fault, however I want to make the suggestion to have this

Have 'save session' have the first option be 'save current session',

a second option 'save session (session name here) again'

This would be an option that is greyed out until the session is saved or loaded. This just makes it easier when going through the session menu, seeing as I personally have 38 sessions saved, around 20 of which would still be relevant if I cleaned everything up.

Then another menu that would drop down with a list of sessions so you could save over a different session if you wanted.

so yea, back to loading the 460k session to resave the one I fucked up on…


0beeec  No.7502

Ok, have a question. With more and more of a move to adding more websites where images are found, is there a method, let's say if the hdd dies and you don't have a backup, to pull the images from where they were found?

I know a good deal of images I have wouldn't be able to be imported through a method like this unless you implement redirects for 4chan archives, but another good deal could be recovered this way.

I'm asking because I don't think I will ever have the spare money for proper working backups; instead I have the money for a rollback backup (my current hdd is an 8tb that has a backup of a 4tb drive, which has a backup of 1.5, 700, 300 and 250gb drives, a chain like that). I just got confirmation of a 3tb drive I'm pretty much going to use exclusively for hydrus archives, and seeing as my current game drive that is acting as my hydrus archive is sitting at 700gb, I want to get it off there and onto the 3tb as fast as possible. Then I'm also thinking of cold-storing my hand-picked saved images to blu-rays and deleting that hdd backup too, as it's sitting in the 180-240gb range.

But I'm thinking: if the hdd fails within the first 3 months, when crib death usually happens, how recoverable would the archive be?


714464  No.7503

I am the anon who cannot download files. Here's the log from this version:


2017/12/14 16:49:36: booting gui...
2017/12/14 16:49:36: The client has updated to version 286!
2017/12/14 16:51:09: Traceback (most recent call last):
File "/opt/hydrus/include/ClientNetworking.py", line 1556, in Start
response = self._SendRequestAndGetResponse()
File "/opt/hydrus/include/ClientNetworking.py", line 1902, in _SendRequestAndGetResponse
response = NetworkJob._SendRequestAndGetResponse( self )
File "/opt/hydrus/include/ClientNetworking.py", line 1087, in _SendRequestAndGetResponse
response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
File "/home/pat36/.local/lib/python2.7/site-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/home/pat36/.local/lib/python2.7/site-packages/requests/sessions.py", line 546, in send
while request.url in self.redirect_cache:
TypeError: argument of type 'NoneType' is not iterable

2017/12/14 16:51:09: Failed to refresh account for public tag repository:
2017/12/14 16:51:09: Traceback (most recent call last):
File "/opt/hydrus/include/ClientServices.py", line 876, in SyncAccount
response = self.Request( HC.GET, 'account' )
File "/opt/hydrus/include/ClientServices.py", line 773, in Request
network_job.WaitUntilDone()
File "/opt/hydrus/include/ClientNetworking.py", line 1696, in WaitUntilDone
raise self._error_exception
TypeError: argument of type 'NoneType' is not iterable


c83217  No.7504


I made an alternate Linux build for people having trouble with the official build. A few things are still weird, but overall it works better for me than the official. Run from source works slightly better, but pip is a headache. Built on Linux Mint 18, so it should also work on Ubuntu 16.04.

If you want open externally to work, you will have to manually set the open program for each mimetype in options->files and trash. For some reason, when Hydrus invokes xdg-open it opens a program in WINE.

http://www.mediafire.com/file/jodg6u33idbjep6/hydrus-286.tar.gz

I'll post one of these every week until the official Linux build works on my machine. If anything is broken, tell hydrus_dev (and thank him while you're at it). He was kind enough to give me his build command, but I don't know Python. If something is broken, I'm probably just as clueless as you are.


89f642  No.7505

File: 0a7ed492b78c053⋯.webm (277.59 KB, 734x800, 367:400, 0a7ed492b78c0531ceda14b21….webm)

>>7497

Just picked up a webm from http://danbooru.donmai.us/posts/2919604 (via Hydrus downloader) that has incorrect frame timing if you're still looking for webms to test on.

Unfortunately the 'reparse files and regen thumbnails' option elicited no change; the whole webm plays out in the first 30 frames or so out of "547"


32b804  No.7509

Hi there.

I just saw this change in the 285 changelog:

"like the recent subscription query randomisation update, subscriptions themselves are now synced in random order (this stops a subscription named 'aadvark' always getting first bite into available bandwidth)"

Is there a way around this?

I named most of my subs in a way that they are done in a certain, deterministic order. And I've grown rather fond of it working that way.


505cb1  No.7513

>>7498

Thank you for this report. The [USER] stuff is supposed to be hidden–it is a quick hack to test if a page name is user-created. I will check this week.

You can only set the default session name under options->gui at the moment, but I am open to new thoughts. Can you explain your date/time session names a bit more?

For the set/flip stuff, I skipped the debug step entirely and just wrote set/flip support into the shortcut system. Please check your shortcut settings and change the new set/flip dropdown in the edit shortcut action panel. The ui is a bit ugly as I just threw it in, but I think it works ok! Let me know otherwise.

>>7499

I think the 'content' shortcuts only work in the media viewer at the moment. I need to put a bit more work in to get them to work at the thumbnail level, which I assume is what you are seeing. If you open a file up in the media viewer, does your F1 work then?

>>7501

Thanks. I think moving to the client being aware of the current session is one way to go. At the moment, it doesn't know that stuff, and only keeps track of 'last session'.

I am also thinking about just moving to the new page of pages sessions stuff, to treat sessions less as full 'replacing' sessions and instead just as handfuls of pages.

>>7502

There isn't a recovery like this, but it might be possible. I can't really promise anything when it comes to recovery operations, as by definition things are broken, and fixing from other data is often more complicated (and hence more work) than one expects. My main advice remains to set up a decent backup for all your documents and everything, not just hydrus. A WD Passport is usually a good cheap solution.

There will be more url actions as I develop the domain manager. Maybe a mass-export of known urls will serve what you want?


505cb1  No.7514

>>7503

Thank you. I will look into this this week.

>>7505

Thank you for this link. I will check this out!

>>7509

I will add an option to revert back to the old behaviour. Let me know if you have any further problems!


0beeec  No.7515

>>7513

With the date and time: really, when I save a session, it's either me upgrading the program, or me going through a session and wanting to save a progress report, as up until the page of pages saving, I was always riding the line of stability.

I, at least in my case, find just a date and time to be the most useful session name, as if something fucks itself, I can restart blank from a time when it didn't fuck itself.

So with that in mind: a setting you could check so that when you went to save a session, the name defaulted to the date and time, like '12-18 10:53' as an example. If you want to append more data to that, you could, but if checked it would just add the current day and time. Possibly in a few different formats, like instead of 12 it would say december, or dec, or possibly a 4chan x style where the user puts in the format they want it to pull from.
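The '12-18 10:53' style default being asked for here is a strftime one-liner; the format string below is a guess matching that example, not anything hydrus actually does:

```python
from datetime import datetime

# Month-day hour:minute, e.g. '12-18 10:53'; format string is assumed.
def default_session_name(now=None):
    now = now or datetime.now()
    return now.strftime('%m-%d %H:%M')
```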

On the recovery side, I'm thinking more like this:

hydrus knows if an image is not there, or can check

Hydrus also knows the url it was acquired from if it was a url

If the file is gone, and it sees it was acquired from a url, could it try to recover the image from the url, or potentially urls?

At least the way I'm thinking, I have the thumbs and client in a separate area from the full images; if the full image drive goes kaput, this would be a way to recover from that.

Thinking more on this, would it be possible, when a full image is gone/not findable, to move the thumbnail of said image to the dup finder/archive? That way it would at least be in the archive, and if the same file was found again in the future, the dup detector should show them as exact or close to exact matches.
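The url re-fetch idea in this post could be sketched like this (the function and the way the recorded urls would be obtained are hypothetical, not hydrus's actual API):

```python
import urllib.request

# Try each recorded source url in turn until one download succeeds.
def recover_missing(known_urls, dest_path):
    for url in known_urls:
        try:
            urllib.request.urlretrieve(url, dest_path)
            return url   # recovered from this source
        except OSError:
            continue     # dead link: try the next recorded url
    return None          # nothing recoverable
```

In practice the recovered file would also need its hash checked against the original to confirm it is the same file.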


0beeec  No.7516

Forgot, I was here for a reason.

So I have an issue coming up and i'm wondering if there is anything that can be done.

I'm moving from my 1920x1200 to a tcl p 605 55 inch 4k tv for a monitor. By every metric it should kick my current display's ass handily. However, thumbnails are currently 200x200. I don't think that should change; however, you are looking into thumbnail stretching of sorts to fit an area.

With this, would it be possible to define the area to be larger than 200x200, like 300x300, 400x400, 800x800, and have the thumbnail stretch to be bigger? Hell, in the case of 400x400 or 800x800 it may actually be better to use a thumb as a placeholder and bring in the full size image.

For me, because I'm going to be using a 55 inch 4k tv, and if that fails me utterly, a 43 inch 4k monitor, the size of the image will be roughly the same as it is now on my 1200p. But on an actual 4k monitor that is 24-30 inches, the size of the thumbnails must be pretty painful.


cd36e1  No.7517

File: 564ade8700235e6⋯.png (1.89 MB, 1920x1080, 16:9, Untitled.png)

>You can now save 'page of pages' into sessions! Just right-click on a page of pages tab to see the new option. The 'save session' menu also lists the existing session names to make it easier to re-save processing sessions.

Nice. Finally going to get rid of the lag on startup.

I swear I need all these pages, it's to help me tag stuff and find untagged pics.


83762e  No.7520

>>7513

>I need to put a bit more work in to get them to work at the thumbnail level, which I assume is what you are seeing

Yes that's what I meant. Can't I just like, pay you some money to prioritize my feature first lol

>If you open a file up in the media viewer, does your F1 work then?

I'm sure it does, but I don't use the media viewer since I'd rather view the images in irfanview


505cb1  No.7523

File: b8a33954a83f4ae⋯.jpg (151.66 KB, 1202x900, 601:450, b8a33954a83f4ae1c1c94330bc….jpg)

>>7503

I looked at this today. This error is actually inside the 'requests' library code, which isn't mine. I am not sure what is going on. I looked into my own version of requests, but I don't have self.redirect_cache in sessions.py. It looks like you are running from source–is that correct? If so, please open a new terminal and go:

python
import requests
requests.__version__
exit()

My requests on my Windows dev machine is 2.18.4. What's yours? If you have a much earlier version than that, then my best suggestion is you update it–maybe you happen to have a version that has a bug or needs some kind of initialisation call that I don't do?

>>7505

I looked at this today. It has I think 12 frames, but my ffmpeg reports fps of either 2.5 or 50, wew. Since it has a duration of about 10.9s, I am not sure how to combine 2.5 and 50 to make the correct result. I appreciate this example, and I will hang on to it for the next time I do a proper pass over the video parsing. Perhaps a new version of ffmpeg will suddenly fix this magically.

I noticed it is a danbooru webm of an ugoira–I guess their webm generating script gives some funny framerates or something? I don't know much about that stuff, or how widespread this problem is.
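For what it's worth, a quick arithmetic check of the numbers quoted above shows neither reported rate matches the frame count and duration:

```python
# Figures quoted above: ~12 frames over ~10.9 seconds.
frames = 12
duration_s = 10.9
effective_fps = frames / duration_s  # ~1.1 fps, far from both 2.5 and 50
```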

>>7515

I will see about adding a timestamp to session save times and listing that in the save/load menus or something. This might be tricky and possibly laggy, as it would require a full load to get that info, but perhaps I can cache it in some clever way. I think having '3 days old' or similar would be useful, so thank you for the suggestion. I hope that fulfils a bit of what you are looking for.

I will keep your thoughts on what to do with missing files in mind. I always want to improve reliability for this sort of thing, but I don't have time to do much clever stuff for rare events.

>>7516

Yeah, I am planning to do some kind of thumbnail overhaul at some point. Several users have asked for bigger than 200x200, but it's obviously not so simple as just bumping the max numbers up.

I don't really want to scale things as it mostly just looks like shit, but I also haven't found the time to do a proper solution either, so I may have to settle for it as a compromise.

For the next two weeks I will be updating the big gui library I use (wx). The new version is supposed to have better support for high dpi modes, which is closely related to all this. I have a 4k monitor to test with, so depending on how all that goes and how you and other users find it, that may inform what we really want here.

>>7520

The shortcuts system is in ongoing long-term overhaul. The work will accelerate as I finish off the listctrl replacement. Adding these shortcuts to thumbnail view won't be super-hard–it'll mostly just be refactoring the currently canvas-only content processing code down to the media level and then have both the canvas and thumbnail grid point at that. It won't be super long, so please hang in there!

For a super-secret solution that may work in the meantime: I think the preview window (which is really just an embedded media viewer canvas) will process media shortcuts properly, so if you click on the preview window to give it focus and then hit your shortcut, it should work. This is obviously awkward, but it will work for small jobs.


89f642  No.7524

>>7523

>I looked at this today. It has I think 12 frames, but my ffmpeg reports fps of either 2.5 or 50, wew. Since it has a duration of about 10.9s, I am not sure how to combine 2.5 and 50 to make the correct result. I appreciate this example, and I will hang on to it for the next time I do a proper pass over the video parsing. Perhaps a new version of ffmpeg will suddenly fix this magically.

>I noticed it is a danbooru webm of an ugoira–I guess their webm generating script gives some funny framerates or something? I don't know much about that stuff, or how widespread this problem is.

I wouldn't know either, nor do I need the file itself as I have the ugoira zip; I just got it from the download, saw it was broke, and thought you'd be interested. The actual ugoira is here https://www.pixiv.net/member_illust.php?mode=medium&illust_id=63819160 if that would somehow help your testing.

Speaking of ugoiras: would it be fine to import the actual ugoira zip files for whenever the eventual ugoira->apng thing gets worked on, or would it be best to keep them out of Hydrus altogether until that time comes? Just curious as to what would be the best approach.


505cb1  No.7531

>>7524

Thanks.

For whether to import, if you would like to use hydrus stuff like tags to find those files again, for easy upload somewhere else or associating zips with an external ugoira viewer program, then please do. But make sure to keep track of them by adding an 'ugoira' tag or similar, so that when I add ugoira support, you have a way of finding them all again so you can help hydrus reclassify them or convert them to apng or whatever. I am not decided on how ugoiras will be handled in the client yet, but figuring out whether an arbitrary zip file is an ugoira is probably a bit difficult to automate, so make sure you have a quick way to find them all again. Although, having said that, if you don't have any other zips in your client, then the whole issue is probably moot.

If you don't have immediate need for hydrus searching for your ugoiras, I recommend you not import them yet. Just keep them in a 'import later' folder somewhere.


714464  No.7533

>>7523

I don't know what happened. My package manager thought I had requests 2.18.4-1, but I actually had 2.4.3. I updated it, but still got the error.


714464  No.7535

>>7533


2017/12/20 23:04:13: Traceback (most recent call last):
File "/opt/hydrus/include/ClientNetworking.py", line 1556, in Start
response = self._SendRequestAndGetResponse()
File "/opt/hydrus/include/ClientNetworking.py", line 1087, in _SendRequestAndGetResponse
response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
File "/home/pat36/.local/lib/python2.7/site-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/home/pat36/.local/lib/python2.7/site-packages/requests/sessions.py", line 546, in send
while request.url in self.redirect_cache:
TypeError: argument of type 'NoneType' is not iterable

2017/12/20 23:04:13:
2017/12/20 23:04:13: Exception:
2017/12/20 23:04:13: TypeError: argument of type 'NoneType' is not iterable
Traceback (most recent call last):
File "/opt/hydrus/include/ClientImporting.py", line 5032, in _CheckThread
network_job.WaitUntilDone()
File "/opt/hydrus/include/ClientNetworking.py", line 1696, in WaitUntilDone
raise self._error_exception
TypeError: argument of type 'NoneType' is not iterable

File "/usr/lib/python2.7/threading.py", line 774, in __bootstrap
self.__bootstrap_inner()
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/opt/hydrus/include/HydrusThreading.py", line 242, in run
callable( *args, **kwargs )
File "/opt/hydrus/include/ClientImporting.py", line 5541, in _THREADWorkOnThread
self._CheckThread( page_key )
File "/opt/hydrus/include/ClientImporting.py", line 5117, in _CheckThread
HydrusData.PrintException( e )
File "/opt/hydrus/include/HydrusData.py", line 936, in PrintException
stack_list = traceback.format_stack()



