
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.
New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Experienced user with a bit of cash who wants to help out? ---> Patreon

Current to-do list has: 2,017 items

Current big job: Catching up on Qt, MPV, tag work, and small jobs. New poll once things have calmed down.



aef60a  No.9287

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v312/Hydrus.Network.312.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v312/Hydrus.Network.312.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v312/Hydrus.Network.312.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v312/Hydrus.Network.312.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v312/Hydrus.Network.312.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v312.tar.gz

I had an ok week. I mostly worked on smaller downloader jobs and tag import options.

tag import options

Tag import options now has more controls. There is a new 'cog' icon that lets you determine whether tags should be applied–much like file import options' recent 'presentation' checkboxes–to 'new files', 'already in inbox', and 'already in archive', and there is a new entry to only add tags that already exist (i.e. have non-zero count) on the tag service.

Sibling and parent filtering is also more robust, being applied before and after tag import options does its filtering. And the 'all namespaces' compromise solution used by the old defaults in file->options and network->manage default tag import options is now automatically replaced with the newer 'get all tags'.

Due to the sibling and parent changes, if you have a subscription to Rule34Hentai or another site that only has 'unnamespaced' tags, please make sure you edit its tag import options and change it to 'get all tags', as any unnamespaced tags that now get sibling-collapsed to 'creator' or 'series' pre-filtering will otherwise be discarded. Due to the more complicated download system taking over, 'get all tags' is the new option to go for if you just want everything, and I recommend it for everyone.

For those who do just want a subset of available tags, I will likely be reducing/phasing out the explicit namespace selection in exchange for a more complicated tag filter object. I also expect to add some commands to make it easier to mass-change tag import options for subscriptions and to tell downloaders and subscriptions just to always use the default, whatever that happens to be.
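The two new controls can be sketched in a few lines (illustrative Python only; these function and parameter names are made up and are not hydrus's actual API):

```python
# Illustrative sketch of the new tag import options behaviour described
# above. All names here are hypothetical, not hydrus's real code.

def filter_tags(parsed_tags, existing_counts, only_existing=False):
    """If only_existing is set, keep only tags that already have a
    non-zero count on the tag service."""
    if not only_existing:
        return list(parsed_tags)
    return [tag for tag in parsed_tags if existing_counts.get(tag, 0) > 0]

def should_apply_tags(file_status, apply_to):
    """file_status is 'new', 'inbox', or 'archive'; apply_to is the set
    of statuses ticked in the new cog menu."""
    return file_status in apply_to
```

So a tag only lands on a file when its status is ticked in the cog menu and, if 'only existing' is on, when the service already knows the tag.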

misc downloader stuff

I have added a Deviant Art parser. It now fetches the embedded image if the artist has disabled high-res download, and if it encounters an nsfw age-gate, it sets an 'ignored' status (the old downloader fetched a lower-quality version of the nsfw image). We will fix this ignored status when the new login system is in place.

Speaking of which, the edit subscriptions panels now have 'retry ignored' buttons, which you may wish to fire on your pixiv subscriptions. This will retry everything that has previously been ignored due to being manga, and should help in future as more 'ignored' problems are fixed.

The 'checker options' on watchers and subscriptions will now keep a fixed check phase if you set a static check period. So, if you set the static period as exactly seven days, and the sub first runs on Wednesday afternoon, it will now always set a next check time of the next Wed afternoon, no matter if they actually happen to subsequently run on Wed afternoon or Thurs morning or a Monday three weeks later. Previously, the static check period was being added to the 'last completed check time', meaning these static checks were creeping forward a few hours every check. If you wish to set the check time for these subs, please use the 'check now' button to force a phase reset.
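A toy calculation shows the difference (assumed logic with made-up names, not hydrus's real checker code):

```python
# Toy numbers showing why the static check phase used to creep forward.
# Function names are illustrative only.

WEEK = 7 * 24 * 3600

def next_check_old(last_completed_check_time, period=WEEK):
    # old behaviour: period added to when the check actually finished,
    # so a late-running check delayed every future check as well
    return last_completed_check_time + period

def next_check_new(last_next_check_time, period=WEEK):
    # new behaviour: period added to the previously scheduled time,
    # so the phase ('Wednesday afternoon') stays fixed
    return last_next_check_time + period
```

If a check scheduled for Wednesday afternoon actually runs six hours late, the old rule pushes every subsequent check six hours later too, while the new rule still lands on the next Wednesday afternoon.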

I've jiggled the multiple watcher's sort variables around so that by default it will sort alphabetically by subject but grouped by status, with interesting statuses like DEAD at the top. It should make it easier to see at a glance whether you need to action anything.

full list

- converted much of the increasingly complicated tag import options to a new sub-object that simplifies a lot of code and makes things easier to serialise and update in future

- tag import options now allows you to set whether tags should be applied to new files/already in inbox/already in archive, much like the file import options' 'presentation' checkboxes

- tag import options now allows you to set whether tags should be filtered to only those that already have a non-zero current count on that tag service (i.e. only tags that 'already exist')

- tag import options now has two 'fetch if already in db' checkboxes–for url and hash matches separately (the hash stuff is advanced, but this new distinction will be of increasing use in the future)

- tag import options now applies sibling and parent collapse/expansion before tag filtering, which will improve filtering accuracy (so if you only want creator tags, and a sibling would convert an unnamespaced tag up to a creator, you will now get it)

- the old 'all namespaces' checkbox is now removed from some 'defaults' areas, and any default tag import options that had it checked will instead get 'get all' checked as they update

- caught up the ui and importer code to deal with these tag import option changes

- improved how some 'should download metadata/file' pre-import checking works

- moved all complicated 'let's derive some specific tag import options from these defaults' code to the tag import options object itself

- wrote some decent unit tests for tag import options

- wrote a parser for deviant art. it has source time now, and falls back to the embedded image if the artist has disabled high-res downloading. if it finds a mature content click-through (due to not being logged in), it will now veto and set 'ignored' status (we will revisit this and get high quality nsfw from DA when the login manager works.)

- if a check timings object (like for a subscription or watcher) has a 'static' check interval, it will now apply that period to the 'last next check time', so if you set it to check every seven days, starting on Wednesday night, it will now repeatedly check on Wed night, not creep forward a few minutes/hours every time due to applying time to the 'last check completed time'. if you were hit by this, hit 'check now' to reset your next check time to now

- the multiple watcher now sorts by status by default, and blank status now sorts below DEAD and the others, so you should get a neat subject-alphabetical sort grouped by interesting-status-first now right from the start

- added 'clear all multiwatcher highlights' to 'pages' menu

- fixed a typo bug in the new multiple watcher options-setting buttons

- added 'retry ignored' buttons to edit subscription/subscriptions panels, so you can retry pixiv manga pages en masse

- added 'always show iso time' checkbox to options->gui, which will stop replacing some recent timestamps with '5 minutes ago'

- fixed an index-selection issue with compound formulae in the new parsing system

- fixed a file progress count status error in subscriptions that was reducing progress rather than increasing range when the post urls created new urls

- improved error handling when a file import object's index can't be figured out in the file import list

- to clear up confusion, the crash recovery dialog now puts the name of the default session it would like to try loading on its ok button

- the new listctrl class will now always sort strings in a case-insensitive way

- wrote a simple 'fetch a url' debug routine for the help->debug menu that will help better diagnose various parse and login issues in future

- fixed an issue where the autocomplete dropdown float window could sometimes get stuck in 'show float' mode when it spawned a new window while having focus (usually due to activating/right-clicking a tag in the list and hitting 'show in new page'). any other instances of the dropdown getting stuck on should now also be fixable/fixed with a simple page change

- improved how some checkbox menu data is handled

- started work on a gallery log, which will record and action gallery urls in the new system much like the file import status area

- significant refactoring of file import objects–there are now 'file seeds' and 'gallery seeds'

- added an interesting new 'alternate' duplicate example to duplicates help

- brushed off and added some more examples to duplicates help, thanks to users for the contributions

- misc refactoring

next week

I also got started on the gallery overhaul this week, and I feel good about where it is going. I will keep working on this and hope to roll out a 'gallery log', very similar to the file import status panel, that will list all gallery pages hit during a downloader's history, with their status and how many links were parsed, within the next week or two.

The number of missing entries in network->manage url class links is also shrinking. A few more parsers to do here, and then I will feel comfortable starting to remove the old downloader code completely.

____________________________
Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

7f0021  No.9288

$250 added to the Patreon for this month. Thanks for fitting it in this week!


252934  No.9289

>>9288

Found one flaw in my request, and it's my bad for not thinking about my "actual" problem when importing new tags from boorus. Specifically, the unnamespaced tags are trash on most boorus, while the character and series namespaces are usually useful. I still want to include unnamespaced tags since I have a pretty significant overlap (several hundred tags).

Could the option be expanded so that I can choose to only apply it to tags without a namespace? If that would be too difficult, as-is still works better than before, since I already have several thousand characters/series as tags. I can always add the tag once manually before an import to make sure it gets imported for every image, since most of my imports tend to be of a certain character or series.


c461d8  No.9290

>>9288

Based $250 Anon.


034416  No.9291

>>9287

>I have added a Deviant Art parser. It now fetches the embedded image if the artist has disabled high-res download

Was going to ask or make a suggestion in another thread, but since you brought this up now: in past versions, I would occasionally come across certain artists on DA whose images have such large resolutions that hydrus would slow to a crawl for me just by selecting the thumbnail and/or viewing them. Before version 312 I was going to ask if there was a way to select smaller res sizes when downloading from DA if the largest exceeded a certain resolution.


060c31  No.9294

I don't know if it's related to this release or not, but I was in the middle of migrating my storage (bought a new HDD and set up a new path to move files there) and now I keep getting 'database disk image is malformed' every time a subscription starts. Any way I can fix this? Thanks


DBException
DatabaseError: database disk image is malformed
Traceback (most recent call last):
File "include\HydrusThreading.py", line 197, in run
self._callable( self._controller )
File "include\ClientDaemons.py", line 275, in DAEMONSaveDirtyObjects
controller.SaveDirtyObjects()
File "include\ClientController.py", line 1114, in SaveDirtyObjects
self.WriteSynchronous( 'serialisable', self.network_engine.bandwidth_manager )
File "include\HydrusController.py", line 640, in WriteSynchronous
return self._Write( action, HC.LOW_PRIORITY, True, *args, **kwargs )
File "include\HydrusController.py", line 201, in _Write
result = self.db.Write( action, priority, synchronous, *args, **kwargs )
File "include\HydrusDB.py", line 908, in Write
if synchronous: return job.GetResult()
File "include\HydrusData.py", line 1710, in GetResult
raise e
DBException: DatabaseError: database disk image is malformed
Database Traceback (most recent call last):
File "include\HydrusDB.py", line 537, in _ProcessJob
result = self._Write( action, *args, **kwargs )
File "include\ClientDB.py", line 11063, in _Write
elif action == 'serialisable': result = self._SetJSONDump( *args, **kwargs )
File "include\ClientDB.py", line 9027, in _SetJSONDump
self._c.execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
DatabaseError: database disk image is malformed




060c31  No.9296

>>9294

I should note I read 'help my db is broke.txt', and the errors originate from client.db & client.master.db. It specifically gives 'database disk image malformed' when running a PRAGMA integrity_check on client.master.db.
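For reference, the same check can be run outside hydrus with Python's standard sqlite3 module (this is plain SQLite behaviour, nothing hydrus-specific):

```python
import sqlite3

def integrity_check(db_path):
    """Run SQLite's own corruption check on a database file.
    A healthy file returns the single row 'ok'."""
    conn = sqlite3.connect(db_path)
    try:
        return [row[0] for row in conn.execute('PRAGMA integrity_check;')]
    finally:
        conn.close()
```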

I cloned both of them and then ran another integrity_check, and they came back ok. I don't get 'database disk image malformed' anymore when running Hydrus, but now I can't import any new files (they fail), and subscriptions are still erroring out.

I have a full .db backup from 2 weeks ago. My main concern is, if I use the backup, what will happen to all the files I added within that time span? Will they just remain floating in client_files unused since the .db doesn't recognize them? I believe I added something like 200k files within that time.


15f7ec  No.9297

Hello hdev, back to report on how fast my massive fuck-off session loaded this time.

I watched it load the first page in chunks of 400-1500 files a second. There were no hangs, and the moment it was loaded it was able to bring up the images while jumping around to other tabs. I'm not sure, it locked up again while loading, but it only did that once it finished loading a second tab. All in all it took about 5 minutes; I'm sure if I restarted it now it would load slower.

I don't understand why sometimes it loads fast and other times it's slow as hell, especially when everything except full size files is loading off of an nvme.


060c31  No.9298

>>9296

After a bunch of trial & error, I decided to just use my backups from 2 weeks ago. It contains a backup of client, caches, master, mappings and cache. I extracted a clean Hydrus (new version) and moved the databases into the db folder.

After the update finished, it gave me a 'database disk image is malformed'. Now, I know for a fact that can't be possible, considering I made those backups while my DB was working perfectly fine.

I don't know why this is happening.


9ed11d  No.9299


9e0012  No.9301

File: e795d8859254a00⋯.jpg (96.11 KB, 829x737, 829:737, Capture.JPG)

was the tag-parsing removed from the Inkbunny parser? I'm downloading images but they bring in 0 tags.

I've set "grab all tags" by default, and I've also tried with the 2 boxes on top (grab all tags even if…) checked, and still no tags.


9e0012  No.9302

>>9301

update: this only seems to happen on multi-page submissions.

single file submissions actually grab all the tags


8d27ff  No.9303

I LOVE YOU


7f95fd  No.9304

You're doing gods work hydrus_dev. The god of smut that is. Thanks.


aef60a  No.9306

File: 4efa78b369eef8e⋯.jpg (68.54 KB, 456x810, 76:135, 4efa78b369eef8e870e16f2112….jpg)

It looks like Deviant Art changed their preferred URL format just as I released this new parser, so it isn't matching. Your DA subs will likely complain about hitting their 'periodic file limits' this week (they'll also do a bit of redundant re-downloading). You can ignore them for now–I will roll out a new url class for v313 to match the new URL format, and your subs will settle down on their subsequent syncs.

>>9289

>>9288

Thank you, I really appreciate it.

I can see the follow-up issue as well. I'll be adding the new 'tag filter' object to TIO to make it easier to express more complicated namespace filtering requests like 'everything except the species namespace', so I'll tack one onto this 'only existing tags' option as well. You'll want to say 'only add unnamespaced tags if they already exist', which will let the namespaced tags go through as normal, just filtered by that second filter.

>>9291

There isn't an easy way to do this in the new system I have built, at least not on like a per-artist basis. If you want to change your whole DA parser (although be aware that it isn't actually kicking in this week, as above), you could just go network->manage parsers->deviant art file page parser->content parsers and delete the file url from download button entry. This will cause the parser to fall back to the embedded, smaller image every time.

I'll probably be rolling out a new version of the DA parser in v313, so if you do this, wait until then; otherwise I'll just overwrite your changes again when you update.

TBH I recommend just downloading the big files and giving them a 'check back in three years' tag or something and seeing if your next computer handles them better. It depends how crazy the artist is–if it is 20MB pngs, I say keep them and deal with them later, but if it is 120MB pngs, you may want to follow that artist by hand anyway as there may be no easily automatable solution to their nonsense.

>>9294

>>9296

>>9298

I am sorry you are having trouble here. For the 'are my files just floating in client_files' question–yes, they are, and you can recover them with database->maintain->clear orphan files. It will let you move these orphan files somewhere else, at which point you can just reimport them manually.

That said, if you are still getting 'malformed' errors, I would not use that db even if it boots. When you cloned those dbs, what errors did you get for file imports and subscriptions? (I am guessing the sub error was the same as the file error?)

In general, I cannot cause a 'malformed' error through code. It almost always means a hard drive fault screwed up a sector of your db. I suspect this hit your db more than two weeks ago, but maybe your subs didn't run between the time it occurred and when you made the backup? I am not sure–can you remember your subs running fine until just recently, after the backup was made?

Although if you have problems in multiple dbs, you may have had multiple events at different times. What was the 'malformed' error you got on your backup once it updated? Was it a problem stopping it from booting at all? Do you have a traceback for it, and is it for a different request than json_dumps?

Have you had any rough power cuts recently? If you can't explain why your hard drive might have messed some things up, I recommend you first make sure you have a backup of your client_files and anything else important on that drive. There is the smallest chance that it is on its last legs and creating new errors as you try things. If your db backup is stored on a different drive, make sure that copy stays unmolested for now.

I recommend you stick with this backup, even if it could not boot, as it is likely to have the fewest problems. If you remember it was v310 or whatever, you might want to try creating a new test environment, on a new known-good drive if you have one, with the v310 release and that backup to see if it still runs as perfectly as you remember. (you don't need to give it files in client_files for the test, it'll just complain with normal errors if these are missing) If it boots, try updating it again. If it really does turn malformed only on update, let me know and we will drill into that.

If it is not good, I recommend trying the clone trick with the backup. If you can get it to complain about anything but 'malformed', I can figure out some scripts to run or whatever to fill-in any lost data. But before we start rebuilding, first make sure that drive is truly ok.

Let me know how you get on!

>>9297

Almost all bottlenecks are now down to disk latency, which means almost all load/search times are subject to whether your OS has the disk sectors it wants in the disk cache. The algorithms on what to keep and pre-load in here are pretty complicated these days, but the basics are if you start the client within an hour of most recently closing it, the important bits are probably already in memory, so it'll be fast, but if you start after a fresh system boot, everything is laggy.

Even on a super SSD, the difference between that and ram is something like 50µs vs 10ns, I think, so it still matters for 'loading lots of small things from random places' requests.

>>9299

Thank you, this looks very useful. I have saved this for when I next revisit the duplicate filter. It would be great to be able to automatically say 'this image is a lower quality version of this' with confidence.

>>9301

>>9302

Thank you for this report. I was just working on a similar thing with someone else about this. There is some bad tag import option calculation going on for file import options to direct file urls in the url downloader. This is fixed for v313 (and the url downloader may get its own tag import options button anyway, which forces the issue). Let me know if it gives you any more trouble.

>>9303

>>9304

Ψ 👌


15f7ec  No.9308

>>9306

>Even on a super SSD, the difference between that and ram is something like 50µs vs 10ns, I think, so it still matters for 'loading lots of small things from random places' requests.

I leave my computer on for months at a time; really, it's only hard crashes or power outs that take it down, and since I got the battery around November, there has been one power out long enough that the system went down, when the town transformer blew.

Every other time it was on an upgrade or the program crashing; however, I haven't had the program crash in a while, since I moved over to saving large amounts of tabs to grab all at once, and more recently when you added the multi watcher. Sometimes when I close and immediately reopen, everything goes fast; other times, everything chugs. Sometimes when the program would crash, and this was on a 300k file session, it would load slow, but then the next time, it loads faster than ever before.

If I have the session below 60k images, regardless of how it closes, it opens almost instantly, but with my 300k and this current 600k image session, it loads very slow. Even when it loads fast, it's proportionally different than when it's loading smaller sessions, which leads me to think it's something in-program bottlenecking rather than storage, unless it's trying to load storage all at once and hitting storage as much as possible. If that's the case, it may be replicable with low numbers of images spread over many tabs; currently I have 78 tabs open thanks to the multi watcher being able to trim the crap down.

This is just my thought based on seeing every tab trying to load at the same time; if each tab tried to load sequentially rather than every tab trying at once, it might load differently.

With that being said, would it be possible to have a mode in the multi watcher that does not show the date and time but instead always shows relative time? At least for me it's easier to see when I loaded batches of threads with '5 days 16 hours' rather than '2018/06/23 00:34:01'.


7650af  No.9309

Is there an easy way to mass-replace tags or add a namespace to them? I see that you added the meta namespace with the latest update, but previous images I've fetched from danbooru were unnamespaced, unlike the current ones. It's not that big of a deal, but it bothers my autism.


aef60a  No.9310

File: f1862dcc637cde9⋯.jpg (114.95 KB, 720x960, 3:4, f1862dcc637cde98c93f8307ad….jpg)

>>9308

Thanks. Yeah, I think there is some additional bottleneck during simultaneous access.

I will try to add that option for the timestamps this week. I am doing some other work in that region, making more places across the program use the same code, so you should see that option take effect in more and more places in coming weeks.
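The requested display is a simple duration format; a minimal sketch (illustrative only, not hydrus's actual code):

```python
# Render a duration like '5 days 16 hours', as in the request above.
# This is a hypothetical helper, not hydrus's implementation.

def relative_timestamp(delta_seconds):
    units = (('days', 86400), ('hours', 3600), ('minutes', 60), ('seconds', 1))
    parts = []
    for name, size in units:
        value, delta_seconds = divmod(delta_seconds, size)
        if value:
            # singular form drops the trailing 's'
            parts.append(f'{value} {name if value != 1 else name[:-1]}')
        if len(parts) == 2:  # keep only the two most significant units
            break
    return ' '.join(parts) or '0 seconds'
```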

>>9309

The tag siblings system can do basic jobs here–it replaces one tag with another in the front-facing gui. You can edit tag siblings under services->manage tag siblings. Please check the help here for some background reading:

https://hydrusnetwork.github.io/hydrus/help/advanced_siblings.html

And let me know if you would like any help with it! Tag siblings has some messy logic now, but I have plans to let it do more things in cleaner ways in future.
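Conceptually, a sibling collapse at display time is just a mapping lookup that follows chains; a minimal sketch (hydrus's real implementation is messier, as noted, and also handles loops and petitions):

```python
# Minimal sketch of display-time sibling replacement: each tag is
# swapped for its preferred sibling, following chains like a -> b -> c.

def collapse_siblings(tags, siblings):
    result = []
    for tag in tags:
        seen = set()
        while tag in siblings and tag not in seen:
            seen.add(tag)  # guard against accidental sibling loops
            tag = siblings[tag]
        result.append(tag)
    return result
```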


7650af  No.9311

>>9310

I'll check it tomorrow and will ask if there's anything I couldn't figure out, thanks hydrus dev. You're the best.


5bc971  No.9313

I tried to download this profile on pixiv: 16488

-From pixivutil : 518 files + the profile avatar

-From Hydrus (subscription) : 354 files

-From Hydrus (gallery downloader) : 399 files (354) total files contained in the downloader page

Could someone explain what I could be setting up wrong?

@hydrus dev specifically, would it be doable to configure the page numbering between 1,2,3,… and 0,1,2,… in the subscription windows?


aef60a  No.9315

File: bab19ddc70c9762⋯.png (36.34 KB, 1109x968, 1109:968, new gallery import log.png)

>>9313

Thank you for this report. It looks like my old pixiv gallery downloader was fetching just type=illust pages, like so:

https://www.pixiv.net/member_illust.php?id=16488&type=illust&p=1

While the 'illust' class in pixiv includes some multi-page media, it doesn't include what the artist classifies as 'manga', which are listed under type=manga:

https://www.pixiv.net/member_illust.php?id=16488&type=manga&p=1

I have made the pixiv downloader get the 'all' type for v313:

https://www.pixiv.net/member_illust.php?id=16488&type=all&p=1

Which includes everything including ugoira (and maybe novels, whatever that means?). I will make a note about this in the release post as well, with instructions on how to get whatever was previously missed.
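The three gallery urls above differ only in the `type` query parameter; a toy generator (matching the url format quoted in this post, which pixiv may change at any time):

```python
# Toy sketch of the gallery url generation discussed above, using the
# exact url format quoted in this post (subject to change by pixiv).

def pixiv_gallery_url(member_id, page, media_type='all'):
    assert media_type in ('illust', 'manga', 'all')
    return ('https://www.pixiv.net/member_illust.php'
            f'?id={member_id}&type={media_type}&p={page}')
```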

For zero-indexed manga, if you did not see it, please check my post at >>9248 , which should have a zero-indexed pixiv parser png attached. You can drag and drop this png onto the network->manage parsers dialog's list and then swap the 'link' for pixiv file pages under network->manage url class links. Let me know if it doesn't work.

If you specifically want to customise which parser is used by which subscription, this is probably not something I will do in this iteration of the downloader overhaul, as it is too complicated. I want to get everything basically working now so I can move on to other, more overdue jobs and do not want to get tied up in any additional complicated bells and whistles.

I do expect to add some better page: tag management to the manage tags dialog soonish, which will have a tool for 'please shuffle these 5, 6, 7, 8, 9, 10, 11, 12 page tags up one' to make these finicky fixing jobs a bit easier.


5bc971  No.9316

>>9315

Indeed, I had missed the linked post.

Thanks for taking the time to consider my request.

And no, it's needed for every pixiv link, since pixiv's works are numbered from 0.

Being able to fetch ugoira tags is really appreciated. I wonder if it'll even fetch response works, depending on how pixiv sets up their site.


034416  No.9317

>>9306

>TBH I recommend just downloading the big files and giving them a 'check back in three years' tag or something and seeing if your next computer handles them better. It depends how crazy the artist is–if it is 20MB pngs, I say keep them and deal with them later, but if it is 120MB pngs, you may want to follow that artist by hand anyway as there may be no easily automatable solution to their nonsense.

The images' file sizes don't seem to be a problem for me, as most are less than 5MB; it's just these 7,000~10,000 x 7,000~10,000 resolutions that slow Hydrus down a bit for me. DA is the only site where I've noticed artists going crazy with resolutions for their images, but it's not that big of an issue, just annoying.


5bc971  No.9318

>>9315

This works perfectly thus far, in the few profiles I tested.


15f7ec  No.9319

>>9317

The reason DA has sizes like that, at least for artists you like/think are worth downloading, is because they also sell prints of those images.



