
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.


New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Experienced user with a bit of cash who wants to help out? ---> Patreon

Current to-do list has: 1,494 items

Current big job: updating to python 3 over Christmas



fe12cd  No.5946

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v257/Hydrus.Network.257.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v257/Hydrus.Network.257.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v257/Hydrus.Network.257.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v257/Hydrus.Network.257.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v257/Hydrus.Network.257.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v257.tar.gz

I had a great week. I am almost done with the new duplicate system.

dupe stuff

I have essentially finished the duplicate filter. The 'this file has a higher resolution' statements are better and the underlying database code is as efficient and productive as I can think of. I have also added some new merge option checkboxes–you can now apply 'archive' status across a processed pair or even choose to delete both files.

You can also filter which tags are merged when you say to copy or move tags between the pair. Check this out if you are interested and let me know what you think–the panel to edit this new 'tag censor' object is a little technical and I think it could do with some user-friendliness work.

And I have made a new 'system:duplicate relationships' system predicate. It finds files that have a certain number of the given kind of relationship, such as 'files that are better than at least one other file' or 'files that have alternates' or 'files that are equal to two other files'. This is not wholly useful yet, but it will be when thumbnails are dupe-aware and actionable. (Thumbnails will know their existing dupe relationships and will allow you to set new dupe relationships manually through the right-click menu.)

I believe I will have this thumbnail stuff done next week, at which point I will be done with this initial phase of the new duplicates system and I can start on the downloader engine overhaul!

some other stuff

You can now move the main gui's page tabs all the way to the left or right with their right-click menu, and you can also append or insert new pages in the same way.

The statusbar is a little more informative when it comes to db locking–it will say a bit more than 'db locked', and if you hover over it, it will report the current database job in a tooltip. If your client has been locking up from time to time, please hover over it and report to me what you now see.

full list

- the duplicate filter will now maintain zoom on files with the same ratio

- split the duplicate merge options into separate tag/rating controls–you may see some duplicate service entries, but these will be cleaned on your next shutdown

- duplicate merge options now allow syncing 'archive' status

- duplicate merge options now allow 'delete both files', which you may find use for in custom actions

- created a tag censorship object to handle and action a rich tag censorship ruleset

- tag duplicate merge options now have and use this tag censorship object to filter which tags are merged, with an initial value of 'let everything through'

- wrote a tag censorship edit panel and tied it into the duplicate action edit panel so these new tag censorship objects can be edited

- added an optimisation to the duplicate status setting code–if two files are better/worse, they are inherently duplicates, and so 'not dupe' and 'alternate' relationships apply to both equally and can be duplicated

- fixed and culled and normalised the 'this has more tags' dupe filter statements to be more accurate and useful

- added 'this has a larger filesize' type-statements to the dupe filter

- created a 'system:duplicate relationships' predicate that can find files based on how many duplicate relationships of a particular type they have

- cleaned up some misc duplicate filter code and added some tooltips to the top hover window dupe action buttons

- added 'move to left/right end' to main gui page tab right-click menu

- added 'new page' and 'new page here' to main gui page tab right-click menu

- you can now right-click for a menu from empty tab space on the main gui

- the main gui statusbar now updates more efficiently when under heavy refresh load

- the main gui statusbar now shows db read/write/commit status and sets the current db job summary as its tooltip–if you experience persistent hangs, please hover over the statusbar and report what you see!

- export tags to .txts checkbox will now default to 'all services on' when checked

- fixed the thread watcher, which was accidentally disabling its text input early

- 'similar files' searches launched from the thumbnail menu will now default to 'my files' file domain rather than 'all local files'

- downloader pages will now correctly sort their files on initialisation

- refactored and generally cleaned up some collect and sort code

- fixed some unlikely-but-possible collect/sort bugs

- fixed some bad layout in the top-right hover window that was making it grow unreasonably tall when many urls were shown

- on the different download import pages, the progress gauge that shows file download progress will now reset back to 0 as soon as the file download is complete

- fixed a problem where video imports with unicode characters in their path were failing to mime-parse

- improved the file import status update pipeline to better deal with large transactions (like skipping/deleting/retrying a thousand rows at once). all these big transactions should lag the gui far less

- improved some misc import status cache code

- made first step in a big size rewrite job that will size many elements according to local system font size rather than specific pixel values

- hydrus servers now explicitly default to TLSv1.2–we'll see if that clears up some of the handshake timeout problems we have recently seen

- cleaned up a bunch of possible pydeadobjecterrors when the new review services panel is closed

- improved and rescheduled gelbooru redirect url purge

- added a catch-and-recovery to hydrus network session initialisation, which may sometimes receive invalid data after a service deletion

- added similar catches to tag parent/sibling initialisation, which apparently can be vulnerable to a similar invalid data problem

- I think I cleaned up some more Linux ClientToScreen console errors

- refactored and cleaned some frame size event responsibility

- refactored and cleaned the panel and controls that display file import status

- did a little more menu code cleanup

- misc cleanup

- misc fixes

next week

I want to finish the thumbnail dupe stuff if I can, as I would like to start the downloader engine overhaul asap. There is also some IPFS work I accidentally missed this week, and I would like to apply the new Tag Censor object to the client's regular tag censorship system.

a0e281  No.5947

File: 7ff9d51f57d9ea8⋯.png (179.41 KB, 600x639, 200:213, 1495185903181.png)

>>5946

Thanks for the update, OP! Can't wait for the downloader overhaul, I'm going to have some scripts to write.

I updated to v256 without reading the warning about the thread watcher, so I'd been watching the archived threads in 4chan X and biting my nails a little bit, but it was only a few days until this version so I didn't revert because I like to live on the edge. Ultimately, only one of the threads I wanted to scrape got knocked off its archive, and it only had 70-something files which I can get manually from the external archive. You could say I'm a pretty wild guy who lives for the adrenaline. I sometimes think about getting a motorcycle and riding it around with my helmet strap unbuckled, or maybe a moped. Pic related, it's my friend calling, I'm not sure if I'm gonna answer it though, we'll see, he knows I'm a busy and adventurous devil-may-care kind of guy. Thanks for reading.


c81529  No.5950

I asked this in the FAQ thread, and since the answer I got was no, I'll post it here:

"Is there a way to maintain zoom across multiple non same resolution images in duplicate mode?

I have many images where it is questionable whether they were resized up or down, and with the difference in resolution it's difficult to tell, because I need to flip back and forth fast."

I think a relative/maintain-equal-size zoom option for flipping between images is a needed feature.


360f2c  No.5952

>>5947

You can use the page of images downloader to download from all the archives like warosu and desuarchive. No need to do it manually.


360f2c  No.5953

>>5950

As of the latest version you can, for files with the same ratio.


a0e281  No.5957

>>5952

It won't just download the thumbnails then? Thanks anon!


b55a83  No.5958

May have been a thing in 256 already, but I can't now scroll to the next image in media viewer if it isn't the active window.


652497  No.5959

Same person who noticed speed boosts in running from source here.

I'm seeing the same improvements in v257.exe

Seems like the Pillow upgrade made a major improvement in handling large quantities of thumbnails etc.

Thanks!


b6f3f9  No.5960

First of all, thank you for making this program. I just found out about it, and it's the best thing I've downloaded in years. Absolutely everything I need. I'm only really having a couple of issues. I searched the booru and the FAQ to try and find the answer, and I found that using regex to set page/chapter tags works to put the files together when I click the group sort box. I'm wondering if there's an easier way to fix this problem that I am unaware of. I tried reading the regex guide and it confused the hell out of me, so I just copied the regex code from your images on the booru. Anyway, here's my problem.

I have many images saved and many of the images come from multi-part collections. Like the multi-image posts on pixiv where they have different variations or a small comic or something that's in a sequence of images. Sometimes as few as 3 sometimes a whole lot of them. Anyway, I need a way to group them so they aren't scattered all over when I search for a tag. Let's say there's a 3 image sequence of donkey kong eating a banana. I search for donkey kong, is there a way to see the normal donkey kong images but have the 3 image set grouped together and not scattered all over the other images so I have to search around for it or add more tags to the box before I can find them? This seems like a big problem right now for me because I have probably 30 of those tiny image sets in each themed folder full of various other random images. I want the ones that go together to appear together.

Sorry if I'm terrible at explaining things. I'm shit at programming and my computer skills have gone into the shitter over the years. Any help is very much appreciated. For now I'm just saving each small image sequence into a separate folder and just importing the single images that don't have any others in sets.


007708  No.5961

>>5960

Add the title of the imageset as a tag, search for Donkey Kong, then add the title to your search. For example. Or, if you have everything tagged, go up to the top, under the tabs, go to the drop-down list and select "sort by series-creator-title-volume-chapter-page", and even if you just have the artist tagged, they'll be grouped together.


7cbba7  No.5965

File: 8bc652e2a21461d⋯.jpg (771.58 KB, 700x933, 700:933, 8bc652e2a21461d8338116ee4d….jpg)

>>5961

>preparing for downloader engine overhaul

Praise be.

>>5960

And thus another soul was lost.

Hoarding now…!

Your pixiv sets have a title to group them together.

Not saving the title has always been one of the major failings of the boorus. That can be alleviated by creating pools named for the title, but that neither saves the title nor is automated.

Though it won't save you when the title is very common or shared. For those cases I use 'pixiv work:#' and 'nijie work:#'; seiga illustrations are not accounted for, as I don't have an account there.

And remember that pixiv's pics start at 0 when saving, while the default regex renames them as 1,2,3.

use this:

[0-9]\d*(?=\.[^\.]*?$)

to get the 0,1,2,… paging.

I'm sure it can be improved, but this works as is.
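A quick Python check of that pattern (the filenames here are made up just for demonstration; pixiv's real naming may differ):

```python
import re

# The regex from above: match the digits immediately before the
# final file extension, e.g. the page number in a filename.
PAGE_NUMBER = re.compile(r'[0-9]\d*(?=\.[^\.]*?$)')

for name in ['12345_p0.jpg', '12345_p12.png']:
    match = PAGE_NUMBER.search(name)
    print(name, '->', match.group())
# 12345_p0.jpg -> 0
# 12345_p12.png -> 12
```

The lookahead `(?=\.[^\.]*?$)` anchors the match to the digits touching the extension, so the `12345` id earlier in the name is skipped.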


bd2e31  No.5967

>>5961

>>5965

Thanks, that helps a lot! I think I have the basics of it down now. It just felt weird having a useless tag for hundreds of different image sets just to tell them apart, but I guess they won't get in the way. Like having a page tag with multiple different things having the page 2 tag. I guess that's the way most people do it though?

By the way this is the best program ever.


bd2e31  No.5968

Another question from a noob. For boorus like e621, when using a parser, is there a way to automatically tag for pools? Or do I just have to go in after they are downloaded and select the files to add a tag for each image pool?


7cbba7  No.5969

>>5968

There is no system currently for pools.

Some people have added a pool namespace, but it seems a so-so temporary workaround/solution.

>>5967

You can use "collection" sorting to arrange your images by set.

It's located above the search box and should indicate "no collection".

You should hang out in the discord chat and search around, there's a lot of useful discussion there.


2e426e  No.5971

Are you going to make the thumbnail view of the dupe page a bit more useful? At least add next/previous pair buttons instead of the random pairs button. But maybe it's a waste of effort since everyone seems to be using the dupe filter instead. I just don't see why it's there if it's gonna stay useless.


a15c5f  No.5974

>>5971

>Are you going to make the thumbnail view of the dupe page a bit more useful?

He said he's gonna put dupe options in the right click menu so you can mass-mark files as alternates/dupes.


c81529  No.5976

>>5957

you can set it to download images and download the links to images.


fe12cd  No.5977

File: 848298373fa7949⋯.jpg (995.16 KB, 2002x2004, 1001:1002, 848298373fa79499e2f5f8fded….jpg)

>>5947

I am sorry for the inconvenience. I will design the new downloader system to be much more automatically testable, with the intention of reducing these 'typo' errors in future. I almost fucked subscriptions this week as well, it was caught at the last minute.

>>5950

As >>5953 says, this now works for matching ratio as well as resolution. I'd like to generalise this code and add support for it in the regular media viewer. If that works well and people like it, I'll see about figuring out some standard for keeping some sort of zoom when the two images have different ratio. I am not sure what that should be–perhaps something like 'keep the smaller edge the same length'?

>>5958

Thanks. I will be putting some work into this this week. I've been working on some of that code recently, so I think I may have changed it by accident.

>>5959

Great!

>>5960

>>5961

>>5965

>>5967

I am glad you like my program! These answers are good–best thing atm is to add title and page tags.

I expect to eventually add support for a 'multi-page single-file' format that will allow you to treat these multi-part images as a single object and thumbnail with internal page order and so on to take all the hassle out of this. It will probably just be a cbz/cbr that hydrus will be able to browse inside the program. When I get around to this, there will be easy gui in the program to create these new files, so please use title and page tags for now, until this better system comes in.

>>5968

When the new downloader engine comes in, you'll be able to parse whatever info from the page's html you like and assign it to whatever namespace you like. I assume 'pool' info is on the image page somewhere, right? You'll be able to grab that text and stick 'pool:' or whatever you want on the front. If you know html it'll be no prob to set up the parser, but if you don't and the help I write isn't good enough, just grab me wherever and I'll walk you through it. In any case, I expect the more experienced users here will generate rich parsers for the big boorus anyway. It'll all be easily shareable on imageboards.

>>5971

The thumbnail part of the dupe page is mostly there because all pages have a thumbnail part and I didn't want to put the time into creating a new class of page yet. I wrote the button to select some random pairs just because it was easy, but yeah, as >>5974 says, you'll be able to mass-assign 'these are all alternates' statuses to thumbnails soon enough, so the randomness may become less of a problem.

Let me know in a few weeks if you would still like to see unknown pairs in a more rigorous and navigable fashion on this page.


2e426e  No.5992

Update on this bug: >>5886

It happens in the full screen media viewer as well, not just in the duplicate filter.

It only happens if you try to remove a tag from the PTR and the "Enter a reason" window pops up. Once this window has popped up, and you close the manage tags window, Hydrus will focus on and bring to the front the main window instead of the fullscreen window you were using.


512275  No.5997

Can confirm that known urls still don't show up on page B/white background in the dupe filter.

Also I have a situational question, let's say image A is an alternate of images B and C, and that image B and C are dupes of each other. If the dupe filter compares A to B and I mark it as alternate, and then it compares A to C and I do the same, how would I get it to compare B to C? Or will this happen eventually since it's somewhat random which ones show up first?


daf5c2  No.5998

>>5997

>Also I have a situational question, let's say image A is an alternate of images B and C, and that image B and C are dupes of each other. If the dupe filter compares A to B and I mark it as alternate, and then it compares A to C and I do the same, how would I get it to compare B to C? Or will this happen eventually since it's somewhat random which ones show up first?

If A is alternate of B and C, hydrus does not make assumptions on the relationship between B and C, so they'll eventually be shown to you so you can make a choice like normal.


2a273f  No.6000

This program is amazing. I have many suggestions for improvement, but quite a few of them are just basic visual things, and I'm sure you're not wanting to spend much time in that area when you have the meat and potatoes of the program to deal with. Things like a complete dark ui toggle which makes the entire UI dark mode–I think this might actually be a bit complicated, but I'm not a programmer. Also, the psi icon needs an upgrade: I'm using a black taskbar in windows 10 and I can't see the icon at all. I can try and make one with the same character if you want. I'll just upload it here when I do, I guess, and whoever wants to use it can.

I have so many image files saved all over my PC right now I am going insane. This is the best thing ever, but at the same time it's getting me to realize just how much I collected over the years and just stuck into random folders in folders in folders.

I have a couple of questions, let's say I want to download a single file off of let's say gelbooru, and I want all the tags to automatically be applied like it does when I download a gallery/booru. Only instead of an entire gallery, I just want to download a single image quickly without setting up a subscription to the tag or anything, just download and go. What is my best course of action for this?

One more question. If I dump tons of non-tagged images into the program and then I download a bunch of files from boorus using the gallery downloader, it will automatically apply tags to those images that I already have if it finds the image in the gallery, or am I setting myself up for hell? I have hundreds of thousands of images and frankly I don't have enough time left on this earth to tag them all. I was going to mess with the tag repositories, but I don't want to be connected to the network, I just want to use my local stuff and a few tags from the boorus when I download a gallery.

Thanks to anyone who read that.


50b427  No.6006

Will it be possible to do whatever things you can do with the filter without the filter?

>>6000

The icon is stored in /hydrus network/static/ so if it really bothers you you can change it yourself.

As far as I know there is no way to download single images from a booru. You'll have to download it manually and apply the tags manually I think.

I think it is possible to download just tags, but I don't know how. Don't use the public tag repository, I've heard it has tons of terrible tags.


daf5c2  No.6007

File: 2b5b0e65826ffd3⋯.png (34.87 KB, 727x948, 727:948, screen727948.png)

File: 775eb16df1785bf⋯.png (2.93 KB, 512x75, 512:75, test.png)

File: 1b716e71ec8b0fd⋯.png (28.58 KB, 375x972, 125:324, e62a523fa18144fb0b62a82535….png)

>>6000

First of all, nice trips.

>I have a couple of questions, let's say I want to download a single file off of let's say gelbooru, and I want all the tags to automatically be applied like it does when I download a gallery/booru. Only instead of an entire gallery, I just want to download a single image quickly without setting up a subscription to the tag or anything, just download and go. What is my best course of action for this?

My suggested way to do this is to save the file manually and import it (after all, it's only one file–if you instead want to import whole galleries without subscriptions, you can just use F9 > download > gallery > booru), and then use the file lookup scripts (first pic related) to fetch the tags from the booru. If you don't have this tab on by default, activate it in options > tags; it should be in the middle/bottom, between several tabs.

Also, I suggest using this script (second pic related), which I understand is not in the program by default but is much more effective. You can import it by saving the picture to your PC and then going to services > manage parsing scripts, then "import" > "from png".

>If I dump tons of non-tagged images into the program and then I download a bunch of files from boorus using the gallery downloader, it will automatically apply tags to those images that I already have if it finds the image in the gallery, or am I setting myself up for hell?

It won't download them automatically. If you set up the PTR you'll automatically get tags for the files whose hashes are exactly the same as the ones people tagged in it, however. If your files are unique/edited, you're out of luck, and your only ways to tag them "automatically" are to use the parsing scripts I mentioned above, or to redownload them (if you have a lot of files from the same tag) and use third pic related (along with the options under "import options - tags", checking all checkboxes if you want all the tags) to tell the program that you want the tags even if the files are already in your database.

If you need more help check out the discord, it's full of experienced users who can help you sort most stuff out.


2a273f  No.6008

>>6007

Thanks for that awesome reply, I got everything to work perfectly. The tag fetch for single files is going to be very useful for me. I also downloaded the script and installed it. Do I delete the old gelbooru md5 GET script, or should I keep them both there? It looks like it also fetches the file ratings, is that what it does differently?

Also I'll check out the discord when I get a chance, thanks!


daf5c2  No.6009

>>6008

No need, you can keep all the scripts you want. The different thing it does is that it fetches tags through IQDB rather than directly from gelbooru. IQDB is an aggregator, which means it'll reverse-search the picture rather than searching for the precise hash, and it will find tags even if you don't have the exact same picture as the one on gelbooru.
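A toy sketch of the difference between exact-hash lookup and reverse-image search. This is not IQDB's actual algorithm; a tiny difference hash over hypothetical pixel rows stands in for whatever perceptual hash the real service uses:

```python
import hashlib

def exact_key(data: bytes) -> str:
    # Exact lookup: any changed byte produces a completely different key.
    return hashlib.sha256(data).hexdigest()

def dhash(pixels):
    # Toy difference hash: one bit per horizontal neighbour comparison.
    # Only the *relative* brightness of neighbours matters, so it
    # survives a uniform brightness shift, unlike the exact hash.
    return tuple(
        1 if a < b else 0
        for row in pixels
        for a, b in zip(row, row[1:])
    )

original   = [[10, 20, 30], [30, 20, 10]]
brightened = [[15, 25, 35], [35, 25, 15]]  # same picture, +5 brightness

print(exact_key(bytes(sum(original, []))) == exact_key(bytes(sum(brightened, []))))  # False
print(dhash(original) == dhash(brightened))  # True
```

That is why a reverse search can match an edited or re-saved copy that an exact hash lookup would miss.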


bf44a1  No.6010

Trying to download https://www.youtube.com/watch?v=ioaex1oSw7Y


Exception
Could not fetch video info from youtube!
File "/home/hydrus/Desktop/hydrus/build/client/out00-PYZ.pyz/include.ClientGUIMenus", line 122, in event_callable
File "/home/hydrus/Desktop/hydrus/build/client/out00-PYZ.pyz/include.ClientGUI", line 2488, in _StartYoutubeDownload
File "/home/hydrus/Desktop/hydrus/build/client/out00-PYZ.pyz/include.ClientDownloading", line 218, in GetYoutubeFormats

unknown url type: unknown


c81529  No.6013

>>5977

I posted that >>5950 right after the q&a thread told me it didn't work; I had asked it shortly before or after the update, I forget which.

If in duplicate mode there was a relative zoom, as in either the height or width is matched, that would be perfect.

On a side note I have been looking at other features and seeing what more I can do with the program.

With the thread watcher, I have one thread that 404ed because 4chan does not keep an archive of that board, and hydrus knows that: if I check now, it tells me it leads to a 404.

I'm not sure what it does to threads that hit archived status, as I'm not using it for any that have yet.

However, it would be immensely helpful if the thread watcher had a 'change name when 404/over' option: it starts as 'thread watcher' until you give it a thread, then changes to 'watching thread', and when it hits a 404 or sees the message 4chan gives for archived threads–

'Thread archived.

You cannot reply anymore.'

–it turns from 'watching thread' to 'thread done'.

Also with the thread watcher, if there was an option to just keep checking instead of a check-X-times limit, that would be helpful. Currently I have it set to once an hour with 7 days' worth of checking times; I could probably set that to 999 and be done with it, but threads on some boards and sites like 8chan can span multiple months, so a button to keep checking indefinitely would be very welcome.

Then with the download page of images: as I'm going through a fairly massive back catalogue right now, this hits me harder than it normally would, but when I add a page I check to make sure it's on the list, and if I add, let's say, 8 pages, I have to scroll to the bottom of the list to make sure the page I added was in fact added. Being able to invert that list, or having the scroll automatically jump to the bottom to show the most recently added thing, would be helpful.

On the download page of images, I personally find the progress bars useless once you have more than 1 page in the queue. The single-image download bar I usually only see move for a fraction of a second, either because I have that fast an internet connection or because the program is laggy and sticks. As for the total bar, right now I am at 6200-6400, and when a new page adds more images the bar is basically full green. So I have a suggestion (I don't want to install an image editor just for this on a new computer, so imagine this is all centered or takes up a full line):

———————————–

Imports

Current page urls - ####

#### successful - #### failed - #### failed to timeout - #### in db

Processing ####/####

(Current image) [———————-##%————————–]

(Current page) [———————-##%————————–]

(total queue) [———————-##%————————–]

———————————–

With this, adding the ability to pause the download (already there) and to process the pending page urls all at once instead of as they come up would let the queue reflect total progress better, instead of sitting at 125, then processing a page and jumping to 523 images.

With that, 'failed due to timeout' is something I would like to address. I probably processed 100 or so links before I went through the log and found that a number were failing due to timeout. Inside the log you can already retry the images by selecting them and telling them to retry; a button for this outside of the log would be convenient.


c81529  No.6014

>>6013

On a side note–and I ask this because where I'm currently pulling from seems to have little issue with me loading entire archived threads at once–an override function that lets me download more than one image at a time would be nice. qtorrent has spoiled me a bit by letting me set 2 different download/upload rule sets at the click of a button; having a button like that, so I can activate it when sites don't care, would be great. I don't know if downloading one at a time is a program limit or a limit you imposed.

Now, there is one more thing I can think of that would not be programming based; as you know the program best, you would be the best person to do it. The whole download/thread watch/other things I have yet to do are not explained in the help, or at least are not jumping out at me there. I'm also going to be honest: reading text about what to do, with no 'look at this, you did it' effect like the download url gives, makes the program a bit hard to understand, and with how seemingly easy it could be to set it off on a download-ALL-THE-WEBSITE kick, a video series on youtube by you, or someone you trust to explain things right, would be amazing.

Barring a video series, a revamp of the help to a wiki-like format where anyone willing to contribute could step in, or a wiki-like format as an aside to the help, would likely be helpful to everyone.

Sorry if that was a lot, or assish; I just had a lot of thoughts after using it for more than just my personal image archive programs for a few days.


fe12cd  No.6030

File: 79461bf55504d36⋯.jpg (263.83 KB, 1280x1720, 32:43, 79461bf55504d36ee75dd65fb3….jpg)

>>5992

Thank you for the update, and I apologise for not getting to this–I am a bit overwhelmed with IRL and other stuff atm and some things are falling through the cracks.

I was able to reproduce this issue with the 'enter a reason' dialog bit and I have figured out a fix that returns the focus to the tag manager and then the media viewer. Let me know how it works for you.

>>5997

I am unable to reproduce this error–is there anything special about the URLs, that you can think of?

>>5997

>>5998

There are several optimisations where if you say:

A alternate B

B better than C

Then the client can assume "A alternate C" since B and C are dupes. Most of this stuff happens behind the scenes and just saves you time on making the same decision over and over, so you don't have to worry about it. Anything it can't infer will end up in your filter eventually.

I am not sure if there is an alternate/alternate rule, i.e.:

A alt B

B alt C

A alt C

But I don't think that is always true. Maybe it is, I will think about it. Most of the optimisations in place treat copying equal and dupe relationships as valid.

Please let me know if you ever notice this going wrong for you.
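The propagation rule described above could be sketched like this. This is my own illustration of the idea, not hydrus's actual code; the file ids and data structures are made up for the example:

```python
# Duplicate groups: better/worse pairs are duplicates, so members of a
# group share their other relationships. Groups are plain sets of ids.
dupe_groups = [{'B', 'C'}]   # B better than C => B and C are duplicates
alternates = {('A', 'B')}    # user decided: A is an alternate of B

def propagate_alternates(alternates, dupe_groups):
    """If X is an alternate of Y, X is an alternate of all of Y's dupes."""
    result = set(alternates)
    for x, y in alternates:
        for group in dupe_groups:
            if y in group:
                result |= {(x, member) for member in group if member != x}
    return result

print(sorted(propagate_alternates(alternates, dupe_groups)))
# [('A', 'B'), ('A', 'C')] -- 'A alternate C' was inferred for free
```

Note the rule only copies a relationship across a dupe group; it deliberately does not chain alternate-of-alternate, matching the uncertainty about that case above.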

—-

I have to get the release started, but I want to get back to this thread tonight. I am sorry for my lateness just recently. Things should be easing up for me (in terms of time and stress) in two weeks.


fe12cd  No.6033

File: 617f19429e612f2⋯.jpg (204.28 KB, 650x951, 650:951, 617f19429e612f27b08e418e1a….jpg)

>>6000

Thank you, I am glad you like it!

The ui takes its colours that are editable in options->colours from your system, so if your system has a darkmode, perhaps engaging that will cue hydrus? I think I remember another user saying an external darkmode program was able to apply itself to hydrus, but I forget the details.

The new downloader engine will have the ability to take a single booru url and just download that single file and parse its tags. until then, follow >>6007 's excellent advice.

>>6010

I think I talked to you about this in discord. I can reproduce this in Linux and will have a look at it next week. Thank you for the report!

>>6013

I hope I can address most of what you talk about here in the downloader engine overhaul, which I will be starting next week.

I like the idea of the thread watcher notifying the 404 in some way. I'll fold that into the downloader engine work, see what my options are. I'll see about increasing the check times as well–maybe it would be good (and polite to the servers) to extend the check period as the watcher goes on. It might do every 5 mins to begin with, but only do every hour 24 hours later, and once a day after a week.
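The scaling-back schedule floated above could be sketched like this (my own sketch using the example periods from the post; not something hydrus actually does yet):

```python
def next_check_period(thread_age_seconds: float) -> int:
    """Check young threads often, old threads rarely, to be polite to servers."""
    HOUR, DAY, WEEK = 3600, 86400, 7 * 86400
    if thread_age_seconds < DAY:
        return 5 * 60       # every 5 minutes on day one
    if thread_age_seconds < WEEK:
        return HOUR         # hourly for the rest of the first week
    return DAY              # once a day after that

print(next_check_period(0), next_check_period(2 * 86400), next_check_period(8 * 86400))
# 300 3600 86400
```

A real implementation would more likely scale with observed post velocity than raw age, but the step function shows the intent.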

The file import status list should be somewhat invertable if you click the 'added' tab a couple of times. Pages are all added with the same timestamp, so I am not sure about the secondary sort, but the pages' worth should be invertable. I agree that this panel is not very helpful at the moment. It will be getting more controls and view options in the overhaul.

I'd like to split the file import queue into a another layer of queues as you suggest as well. Having a separate layer of cached 'when I looked at this gallery page, I got these urls' will be helpful, I think. The current systems do not do well with >1,000 urls total.

I am not sure why the timeout bug occurs. I don't get it myself, so I think this is my old network engine having problems with some people's networks. It will all be getting an upgrade soon, so I hope it will disappear with that.

>>6014

There will be improved bandwidth controls as well, including on a per-domain basis. I hadn't thought of simultaneous downloads, but I will look into it. That might complicate things more than I want to in this phase, but we'll see.

Writing good help and keeping it up to date is a constant battle. I had a wiki a long time ago, but it withered. Perhaps it would do better now there are more users. I would certainly not be opposed to any user writing/filming their own guide and would happily link it.

Thank you for your feedback!


512275  No.6034

File: b85745b90908b70⋯.png (48.09 KB, 746x604, 373:302, Colors.png)

>>5997

>>6030

Nothing special about the URLs. I'm just not seeing any on page B in the dupe filter, at all.

Do they render as black text on the white background? I'm positive they have URLs–they display when I search similar and open up the media viewer, but I have yet to see them show up on page B. If I had to guess, they're being shown as white text on the white background, so they aren't visible.

Could this have to do with my color settings perhaps?

Also, don't burn yourself out m8


fe12cd  No.6080

>>6034

Thanks, I bet that's it! I will look into it this week.

I'm going to take some time off next week.


2e426e  No.6091

>>6030

>Let me know how it works for you.

Yeah it's fixed now. Thanks!



