
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.



New user? Start here ---> http://hydrusnetwork.github.io/hydrus/

Current to-do list has: 714 items

Current big job: finishing off duplicate search/filtering workflow



59ea49 No.5108

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v244a/Hydrus.Network.244a.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v244a/Hydrus.Network.244a.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v244a/Hydrus.Network.244a.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v244a/Hydrus.Network.244a.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v244a/Hydrus.Network.244a.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v244a.tar.gz

EDIT: There was a problem with the initial release and autocomplete matching of siblings. The 244a above is a hotfix.

I had a great week. The db rewrite went very well. I have reduced the size of the client database files by about 33%.

This is a heavy update. It will use a lot of HDD activity for about 30-40 minutes. If you sync with my PTR, you will need about 6GB free on your hydrus install's hard drive.

the final compaction

This all went better than I expected. There was a lot of work–I changed about 1,200 lines of code this week–but the fundamental problem proved simple, and there were only a couple of difficult bumps along the way. The database is smaller, operations are faster, and the code is simpler.

Ultimately, I have reduced how tag mappings are stored in the database. Before it used three numbers per row, and now it uses two. This reduces the size of client.mappings.db by about 40%. In order to map the missing number, I had to bump up client.master.db by a little bit. There are many other small changes, but it seems to be shaking out to about a 33% reduction in total db size, or about 1.7GB for a typical PTR-syncing client.
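Schematically, the change looks like the sketch below. The table and column names here are made up for illustration, not hydrus's actual schema; the point is that a master table registers each ( namespace, subtag ) pair once, and mapping rows then only need two integers instead of three.

```python
import sqlite3

con = sqlite3.connect( ':memory:' )
c = con.cursor()

# Old layout: three integers per mapping row.
c.execute( 'CREATE TABLE old_mappings ( namespace_id INTEGER, subtag_id INTEGER, hash_id INTEGER );' )
c.execute( 'INSERT INTO old_mappings VALUES ( 1, 7, 100 ), ( 1, 7, 101 ), ( 2, 9, 100 );' )

# New layout: a master tags table gives each ( namespace, subtag ) pair
# a single tag_id, so every mapping row shrinks to two integers.
c.execute( 'CREATE TABLE tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' )
c.execute( 'CREATE TABLE mappings ( tag_id INTEGER, hash_id INTEGER );' )

# The conversion registers each distinct pair once, then rewrites the rows.
c.execute( 'INSERT INTO tags ( namespace_id, subtag_id ) SELECT DISTINCT namespace_id, subtag_id FROM old_mappings;' )
c.execute( 'INSERT INTO mappings SELECT tags.tag_id, old_mappings.hash_id FROM old_mappings JOIN tags USING ( namespace_id, subtag_id );' )
```

The small growth in the master file comes from that new pair-registry table; the big mappings file only pays for it once per tag rather than once per row.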

My dev pc took 33 minutes to convert while my laptop took 45. If you have an SSD or a fast CPU, it should take a little less time, and if you have an old computer, expect it to take longer. If you do not sync with my PTR, it will take about ten seconds.

If you do sync with my PTR, you will need about 6GB free on your hydrus install's hard drive for the update to work. If you don't have this, the client will warn you beforehand. It is optional and beneficial to also have about 3GB free on your system drive, if that is different from your hydrus install.

The particularly good news with this change is that nothing seems to be architecturally broken or suddenly slowed down by the change. Because every mappings row is that bit smaller and simpler, most operations (including tag repository processing) are actually going through a bit faster.

Please let me know if you do encounter any problems. I expect there is at least one unusual operation I have made a typo on that slipped through my testing.

This change was a long time coming, but I am also glad I had the chance to think about it. I am now ready to overhaul the network next week.

some other stuff

I fixed the Deviant Art parser!

I fixed an issue with Linux session-loaded pages not accepting key events!

OS X can now handle windows with no pages open, and it won't eat pages on session loads!

The client will stop spamming so many 'shutdown work' dialogs when there seems to be nothing to do!

full list

- updated client database to compact ( namespace_id, tag_id ) pair into a single id for storage

- added some bells and whistles to the update code

- added a free space check and messagebox warning before the update

- updated db, service, and a/c cache creation code to reflect new schema

- updated absolutely everything else in the db to reflect the new schema

- for users with plenty of tags, the db should now be about 33% smaller!

- unified how unnamespaced tag searching counts are totalled

- unnamespaced tag searching counts are now totalled when the tags are fetched from the in-view ui media

- unified how tags are split into ( namespace, subtag ) across the program

- fixed deviantart gallery thumbnail parser

- fixed linux session load page key event handling bug

- os x can now support notebooks with zero pages open

- fixed an issue where os x was losing the first page of some session loads

- fixed some similar files shutdown work false positive calculation

- reduced server bandwidth check period from 24 hours to 1 hour

- improved calltothread scheduling under heavy load

- improved scheduling of how files are physically deleted

- numerous laggy temp_table replacement/cleanup

- more temp_table replacement

- misc efficiency improvements and general db code cleanup

- misc path code cleanup

next week

Due to recent issues with growing network bandwidth usage, I will now overhaul how repositories and clients synchronise. There is a lot of wasted bandwidth, CPU, and HDD in this at the moment, and the database is now in a good place to receive and process the content data in a cleverer way.

This database rewrite turned out to be easier than I thought, but I really do think this will be a big job. If it does end up needing two weeks, I will say so on Tuesday the 14th.

8292bc No.5109

Excellent, keep up the good work.


11630c No.5110

The recent tag window doesn't seem to put most recent tags on top any more. Is there a setting for this? Or can we get options to change the sort behavior for recent tags?


cc7f8b No.5111

>>5108

The DB update failed for me on starting the 244a Linux binary.

Traceback (most recent call last):

File "/home/hydrus/Desktop/hydrus/build/client/out00-PYZ.pyz/include.HydrusDB", line 223, in init

File "/home/hydrus/Desktop/hydrus/build/client/out00-PYZ.pyz/include.ClientDB", line 9428, in _UpdateDB

OperationalError: database or disk is full

Over 200GB were free on the respective partition at that point, so I'm going to guess the database is full - whatever that might mean…?


c3c5f7 No.5112

>>5111

Did you try running from source?


e8fcbc No.5113

File: 2e710610a45e1ca⋯.jpg (285.58 KB, 1476x1042, 738:521, 2e710610a45e1caa2961bc9145….jpg)

>>5110

Thank you for this report. I can't repeat it here, but I expect something got switched around with all the changes this week. I will look into it this week.

>>5111

I am not sure about this. SQLite can typically go up to zillions of TB, and I don't overrule that anywhere, so I don't think the db is full. Sometimes it needs to create temporary db files in your temporary directory to do big transactions, and that has sometimes hit Linux users who have 500MB limits or whatever. I remember one user who had a ramdisk tempdir who had that problem. If your system partition is full up or your tempdir has a different kind of limit, I think you'll need about 3GB of spare space. If you are interested, you should be able to watch the file(s) being populated in your tempdir as the bigger parts of the update occur.

Please let me know how you get on. If this is the problem, I will improve the free space check for this update step.


1838a3 No.5116

>>5112

No, I run the pre-compiled binary.

>>5113

> Sometimes it needs to create temporary db files in your temporary directory to do big transactions

Bingo - thanks for that hint!

It was my 4GB $TMPDIR ramdisk that did get full. Wasn't aware it was being used at all, but I guess it's a thing sqlite does on its own…?

When I checked after the error message was thrown, $TMPDIR was already empty again. And I had only monitored whether I'd have some read failure or space problems on Hydrus' partition.

> I think you'll need about 3GB of spare space.

Almost 4GB weren't enough. But I'm pretty sure it's just because my DB files are growing large.

> If this is the problem, I will improve the free space check for this update step.

Sure.

I'd also have caught on if the error had reported which temporary file it was operating on at the time of failure.

Thanks for the update & help!


79620c No.5117


2017/02/09 22:02:49: hydrus client started
2017/02/09 22:02:49: booting controller...
2017/02/09 22:02:49: booting db...
2017/02/09 22:02:51: updating db to v244
2017/02/09 22:02:51: converting existing tags to subtags
2017/02/09 22:03:02: creating the new tags table
2017/02/09 22:03:03: compacting smaller tables
2017/02/09 22:03:03: compacting combined_files_ac_cache_3
2017/02/09 22:03:04: compacting specific_files_cache_1_3
2017/02/09 22:03:04: compacting specific_current_mappings_cache_1_3
2017/02/09 22:03:05: compacting specific_pending_mappings_cache_1_3
2017/02/09 22:03:05: compacting specific_ac_cache_1_3
2017/02/09 22:03:05: compacting specific_files_cache_2_3
2017/02/09 22:03:05: compacting specific_current_mappings_cache_2_3
2017/02/09 22:03:05: compacting specific_pending_mappings_cache_2_3
2017/02/09 22:03:05: compacting specific_ac_cache_2_3
2017/02/09 22:03:05: compacting combined_files_ac_cache_7
2017/02/09 22:03:06: compacting specific_files_cache_1_7
2017/02/09 22:03:06: compacting specific_current_mappings_cache_1_7
2017/02/09 22:03:08: compacting specific_pending_mappings_cache_1_7
2017/02/09 22:03:08: compacting specific_ac_cache_1_7
2017/02/09 22:03:08: compacting specific_files_cache_2_7
2017/02/09 22:03:08: compacting specific_current_mappings_cache_2_7
2017/02/09 22:03:08: compacting specific_pending_mappings_cache_2_7
2017/02/09 22:03:08: compacting specific_ac_cache_2_7
2017/02/09 22:03:08: compacting sqlite_stat1
2017/02/09 22:03:08: compacting shape_perceptual_hashes
2017/02/09 22:03:08: compacting shape_perceptual_hash_map
2017/02/09 22:03:08: compacting shape_vptree
2017/02/09 22:03:08: compacting shape_maintenance_phash_regen
2017/02/09 22:03:08: compacting shape_maintenance_branch_regen
2017/02/09 22:03:08: compacting specific_files_cache_8_3
2017/02/09 22:03:08: compacting specific_current_mappings_cache_8_3
2017/02/09 22:03:09: compacting specific_pending_mappings_cache_8_3
2017/02/09 22:03:09: compacting specific_ac_cache_8_3
2017/02/09 22:03:10: compacting specific_files_cache_8_7
2017/02/09 22:03:10: compacting specific_current_mappings_cache_8_7
2017/02/09 22:03:11: compacting specific_pending_mappings_cache_8_7
2017/02/09 22:03:11: compacting specific_ac_cache_8_7
2017/02/09 22:03:11: compacting shape_search_cache
2017/02/09 22:03:11: compacting duplicate_pairs
2017/02/09 22:03:11: compacting specific_files_cache_10_3
2017/02/09 22:03:11: compacting specific_current_mappings_cache_10_3
2017/02/09 22:03:11: compacting specific_pending_mappings_cache_10_3
2017/02/09 22:03:11: compacting specific_ac_cache_10_3
2017/02/09 22:03:11: compacting specific_files_cache_10_7
2017/02/09 22:03:11: compacting specific_current_mappings_cache_10_7
2017/02/09 22:03:11: compacting specific_pending_mappings_cache_10_7
2017/02/09 22:03:11: compacting specific_ac_cache_10_7
2017/02/09 22:03:12: compacting current_mappings_3
2017/02/09 22:03:12: indexing current_mappings_3
2017/02/09 22:03:13: compacting deleted_mappings_3
2017/02/09 22:03:13: indexing deleted_mappings_3
2017/02/09 22:03:13: compacting pending_mappings_3
2017/02/09 22:03:13: indexing pending_mappings_3
2017/02/09 22:03:13: compacting petitioned_mappings_3
2017/02/09 22:03:13: indexing petitioned_mappings_3
2017/02/09 22:03:13: compacting current_mappings_7
2017/02/09 22:04:25: A serious error occured while trying to start the program. Its traceback will be shown next. It should have also been written to client.log.
2017/02/09 22:04:25: Traceback (most recent call last):
File "/opt/hydrus/include/ClientController.py", line 1167, in THREADBootEverything
self.InitModel()
File "/opt/hydrus/include/ClientController.py", line 530, in InitModel
HydrusController.HydrusController.InitModel( self )
File "/opt/hydrus/include/HydrusController.py", line 203, in InitModel
self._db = self._InitDB()
File "/opt/hydrus/include/ClientController.py", line 64, in _InitDB
return ClientDB.DB( self, self._db_dir, 'client', no_wal = self._no_wal )
File "/opt/hydrus/include/ClientDB.py", line 1054, in __init__
HydrusDB.HydrusDB.__init__( self, controller, db_dir, db_name, no_wal = no_wal )
File "/opt/hydrus/include/HydrusDB.py", line 244, in __init__
raise e
Exception: Updating the client db to version 244 caused this error:
Traceback (most recent call last):
File "/opt/hydrus/include/HydrusDB.py", line 223, in __init__
self._UpdateDB( version )
File "/opt/hydrus/include/ClientDB.py", line 9424, in _UpdateDB
self._c.execute( 'DROP TABLE old_table;' )
DatabaseError: database disk image is malformed

This happened after today's update. When I start hydrus, it tries to update the db again.


ac31e8 No.5118

>>5117

> database disk image is malformed

I'm not very experienced with this, but it could be a sqlite3 error about the database file actually being damaged on disk.

Maybe you want to back up (copy) the *.db database files to another folder, then either try the db check on the older version of hydrus you used before, or run this in the db folder of the currently installed version:


sqlite3.exe "*.db" "pragma integrity_check"

You probably have to substitute "*.db" with the actual full client*.db file names. I don't think the wildcard syntax will actually work; oddly, it reports "ok" when you run it on nonexistent files.
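A sketch of the same check in Python, iterating over the client db files by name (the file names are the ones mentioned elsewhere in this thread) so the wildcard and missing-file problems don't arise:

```python
import os
import sqlite3

# Check each client db file by name; the sqlite3 shell won't expand a
# *.db wildcard, and it happily reports 'ok' for a nonexistent file.
def check_integrity( db_dir ):
    results = {}
    for name in ( 'client.db', 'client.caches.db', 'client.mappings.db', 'client.master.db' ):
        path = os.path.join( db_dir, name )
        if not os.path.exists( path ):
            results[ name ] = 'missing'
            continue
        con = sqlite3.connect( path )
        ( result, ) = con.execute( 'PRAGMA integrity_check;' ).fetchone()
        con.close()
        results[ name ] = result
    return results
```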


27fa50 No.5120

>>5113

The recent tag list does add new tags to the list, they just have no order that I can figure out. Sometimes they end up at the end of the list, sometimes in the middle somewhere. Before, they all went on top, so if I was putting the same tags on several sequential files I could just select the top 10 or so tags in the recent list and add them.


b9a2d9 No.5122

>>5118


$ sqlite3 "client.mappings.db" "pragma integrity_check"
*** in database main ***
On tree page 225588 cell 10: invalid page number 50922042
Multiple uses for byte 2673 of page 225588
Error: database disk image is malformed


56fd1c No.5123

>>5122

Looks like that's the problem. It might be worth checking the HDD with a SMART extended test or something to figure out whether the error happened because of a hardware problem.

As for repairing this problem: I have no practical experience with sqlite3, but it seems like on many databases, one thing worth trying is to dump and restore the whole database:

https://stackoverflow.com/questions/5274202/sqlite3-database-or-disk-is-full-the-database-disk-image-is-malformed
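For reference, a minimal Python sketch of that dump-and-restore approach, assuming the standard sqlite3 module; with a damaged source the dump may stop early at the bad pages, so whatever sqlite can still read lands in the new file:

```python
import sqlite3

# Dump whatever sqlite can still read out of the damaged file as SQL
# text, then replay it into a brand-new database file.
def dump_and_restore( damaged_path, repaired_path ):
    src = sqlite3.connect( damaged_path )
    dump = '\n'.join( src.iterdump() )  # may stop early at the bad pages
    src.close()

    dst = sqlite3.connect( repaired_path )
    dst.executescript( dump )
    dst.close()
```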


6168af No.5124

>>5123

So the repair is not working (the repaired file is only 4 KB), but the previous version of hydrus is working just fine.


a1a851 No.5125

File: 5548dc98f6c2866⋯.jpg (3.52 MB, 3191x2491, 3191:2491, 5548dc98f6c286613f73e0145b….jpg)

>>5116

I'm glad this got sorted out. Unfortunately, I haven't figured out a way to tell sqlite to use a different location for its temporary tables. This might not ever affect you again, but as this update had to deal with a single gigantic table, there wasn't a good way to break it up.

Please let me know if you get hit by this again in future.

>>5120

Thanks–I'll have a look to see why that is happening.

>>5117

>>5118

>>5122

>>5123

>>5124

Do you have a backup? Is that backup your 'previous version' of hydrus? If so, does that older db also have problems with an integrity check, or is it clean?

If you haven't checked 'help my db is broke.txt' in the db dir, give it a quick read. It describes another dump/restore recovery method that may be different to what you have tried before.

This sort of thing is almost always due to a hard drive problem, and while it may just be a blip from a recent hard power cut, it could suggest something more serious. It may be that this broken record has been around for ages but you've never stumbled over it, as it is in a gigantic mappings table, and now that my update code is iterating over absolutely everything, it can't avoid it. It could also be that this error is being thrown mistakenly.

Please run chkdsk or your OS's equivalent as the help document describes and make sure everything is backed up.

If you don't have a clean backup and this can't be fixed by a simple clone, I can walk you through some sql to try to fix this manually. I am confident we can recover your db, but it might take a bit of back and forth.


0b2e0b No.5128

>>5125

> Unfortunately, I haven't figured out a way to tell sqlite to use a different location for its temporary tables.

Point 5 in this might offer a way:

https://www.sqlite.org/tempfiles.html

At least on *nix OSes.

> Please let me know if you get hit by this again in future.

I'll probably fairly actively try to make it happen again by importing more files. :)


2e8fb2 No.5133

>>5125

I cloned the database and now everything seems to work fine.


95551a No.5136

File: 3e46209822b55f4⋯.jpg (621.43 KB, 731x1017, 731:1017, 3e46209822b55f411b8ecc8860….jpg)

Hi dev, may I request a persistent icon or something for the thread watcher that indicates when a thread has been archived or has 404'd?

Thank you


1f9a30 No.5156

File: 085b68f362c85f1⋯.jpg (89.37 KB, 740x1280, 37:64, daemon stop orig.jpg)

Hello hydrus,

I want to fully migrate into hydrus, but I also wish to retain a lot of the metadata in the tags. The mass importer allows me to pull in filenames and paths, but I was wondering how complicated it would be to add support for pulling in the rest of the file stats, like creation time, modification time, filesize, etc., and possibly formatting them however I want with something like regex groups or any other special formatting syntax (whatever is easiest to implement). For example, I could take the date and translate it from whatever the OS/API provides (maybe "mm/dd/yy") into a tag "file-creation-date:yyyy/mm/dd".

I personally only care about creation time, but maybe others would care about importing other metadata that is exposed by the OS/Python API.

I'm also curious whether these can be applied to the auto importer. I want to set up a directory to save things to so they get imported into hydrus and then deleted from the original file system, and I'd like the same metadata to be preserved, as well as maybe a special dynamic tag like "import date", i.e.

filename:whatever.ext

path:/usr/home/coolman/Pictures/autistic things/

file-creation-date:yyyy-mm-dd-hh-mm-ss

file-import-date:yyyy-mm-dd-hh-mm-ss

What do you think about this request? If it's too complicated I can live without it, but I would like to retain this data for indexing purposes. "show me all the images I saved in 2015", "images I saved in May of any year", "images I imported yesterday", "images imported from this directory", etc. I feel like these things would help me divide things into smaller workloads for the initial tagging passes of my big collections.

Also, forgive my English; I injured my legs and they gave me drugs, so maybe I'm explaining things poorly or not seeing the feature in the client.


2ba0f4 No.5162

File: 9fe44fdaad6246e⋯.jpg (642.08 KB, 1914x2290, 957:1145, 9fe44fdaad6246ed76a19f302f….jpg)

>>5128

Yeah, I think I might just bite the bullet and use the pragma, even though it is deprecated, or at least fall back to it when the temp partition doesn't seem large enough.
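For anyone wanting to work around this themselves in the meantime, a sketch of both routes, assuming a hypothetical roomy partition at '/mnt/big': on unix, sqlite consults the SQLITE_TMPDIR environment variable before $TMPDIR, and the deprecated temp_store_directory pragma overrides that per connection.

```python
import os
import sqlite3

# '/mnt/big' is a hypothetical path with plenty of free space.
def use_roomier_tempdir( db_path, temp_dir ):
    # Route 1: on unix, SQLITE_TMPDIR is checked before $TMPDIR; set it
    # before the connection starts doing any real work.
    os.environ[ 'SQLITE_TMPDIR' ] = temp_dir

    con = sqlite3.connect( db_path )

    # Route 2: the deprecated per-connection pragma; quoting the path
    # keeps any spaces intact.
    con.execute( 'PRAGMA temp_store_directory = "{}";'.format( temp_dir ) )

    return con

# e.g. con = use_roomier_tempdir( 'client.db', '/mnt/big' )
```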

>>5133

Great! Let me know if anything else comes up.

>>5136

Sure–I'll make sure it disables the controls and leaves a message permanently saying what happened.

>>5156

I would like to expand the manual import and the auto import to allow this sort of parsing, but I can't get to it quickly. I would also like to add regex parsing to the auto importer to allow for better workflows here.

I think your best bet at the moment is to use another program or a script to convert this information you want into the import file's path or filename and then parse it through hydrus using regexes. This is a little awkward and cumbersome at the moment and will only work for manual imports, but if you batch them up it might be ok.

The client adds import date as the file's 'age', btw. You can search for that with system:age.


8bef39 No.5171

>>5108

Hello. I have a problem with updating db. Here is traceback:

File "include\ClientController.py", line 1167, in THREADBootEverything

self.InitModel()

File "include\ClientController.py", line 530, in InitModel

HydrusController.HydrusController.InitModel( self )

File "include\HydrusController.py", line 203, in InitModel

self._db = self._InitDB()

File "include\ClientController.py", line 64, in _InitDB

return ClientDB.DB( self, self._db_dir, 'client', no_wal = self._no_wal )

File "include\ClientDB.py", line 1054, in init

HydrusDB.HydrusDB.init( self, controller, db_dir, db_name, no_wal = no_wal )

File "include\HydrusDB.py", line 244, in init

raise e

Exception: Updating the client db to version 244 caused this error:

Traceback (most recent call last):

File "include\HydrusDB.py", line 223, in init

self._UpdateDB( version )

File "include\ClientDB.py", line 9402, in _UpdateDB

self._c.execute( 'INSERT INTO ' + cache_table_name + ' ( hash_id, tag_id ) SELECT hash_id, tags.tag_id FROM ' + the_table_join + ';' )

OperationalError: no such column: old_table.namespace_id


67b93a No.5172

File: bb82bcf682aac51⋯.png (884.98 KB, 640x1136, 40:71, 1486082739872.png)

>>5162

Thanks for letting me know.

>convert this information you want into the import file's path or filename and then parse it through hydrus using regexes

I think I'll end up doing that, but if there is any way to interface with hydrus directly, I might pursue that instead. Is there a way I can pass tag information to hydrus, either through an API, manipulating the database myself, or something else? I figure I just have to implement the means to get the metadata I want, then figure out where to move the actual files and how to tell the client/database about it, is that right? Or even just insert the tags into hydrus and let the importer handle the placement of the files themselves, and hopefully the client would pick up on the tags I inserted for the matching hash.

I figure if I can make something like that I can schedule a task to have it run on a directory and simulate the automatic import until later. I just need to know how to approach it.

>system:age

Neat.

Thanks as always hydrus.


5ea8df No.5175

File: bfb865d8b597f31⋯.jpg (94.62 KB, 692x1100, 173:275, bfb865d8b597f3190a07eb27af….jpg)

>>5171

I am sorry you are having problems.

Have we previously done some database recovery work, or have you played around with your db? Anything that might have created some unusual spare tables in your database?

Which OS are you on, and which hydrus release are you using?

In any case, please go to your database directory (default install_dir/db) and double-click the sqlite3 executable. This should open up a new terminal window. Copy/paste this block into it (and then hit enter if you need to):

.open client.caches.db
.mode csv
.once muh_sqlite_master.txt
select * from sqlite_master;
.exit

It should have created muh_sqlite_master.txt in the same directory. Please post the contents of that here, or pastebin it if it is too large, or email it to me.

>>5172

There's no API yet, but I would like to create one through a localhost http server the client will optionally run. It'll take hash/tag pairs through a POST request, so if you can script that, you could store the tag/hash pairs for now and wait for that to come in.

That might take a while though.

Wait–I just remembered I support importing newline-separated tags through 'neighbouring' txt files, and this works for automatic import folders as well. If you have a file 'blah.jpg' and have 'import tags from neighbouring .txt files' checked, the client will look for 'blah.jpg.txt' and parse it for newline separated tags. If you knock up a script to automatically create your tags and stick them in a .txt file and then pass both files into your import folder destination for automatic import, I think you'll get what you want.

Let me know if you try this but it doesn't quite do what you need.
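A hypothetical sketch of such a script: for every file in a folder, write a neighbouring 'name.ext.txt' holding one tag per line. The tag namespaces ('filename', 'file-creation-date') are made up for the example, and note that st_ctime only means creation time on Windows; on most unixes it is the metadata-change time.

```python
import datetime
import os

# Write a 'blah.jpg.txt' sidecar next to each file, with one tag per
# line, ready for 'import tags from neighbouring .txt files'.
def write_tag_sidecars( directory ):
    for name in os.listdir( directory ):
        path = os.path.join( directory, name )
        if name.endswith( '.txt' ) or not os.path.isfile( path ):
            continue

        # st_ctime is creation time on Windows, metadata-change time on
        # most unixes, so treat it as an approximation there.
        created = datetime.datetime.fromtimestamp( os.stat( path ).st_ctime )

        tags = [
            'filename:' + os.path.splitext( name )[0],
            'file-creation-date:' + created.strftime( '%Y-%m-%d' ),
        ]

        with open( path + '.txt', 'w' ) as f:
            f.write( '\n'.join( tags ) )
```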

I would strongly advise against inserting tags to the hydrus db directly. You can play around if you want, but make sure you have a backup. There are a bunch of caches that it would be a huge headache to manually adjust.


f2891a No.5177

>>5175

No, I didn't do anything with db by myself.

My OS is Win10, and I tried to update Hydrus from 230 to the current 244.

Here is the pastebin link: http://pastebin.com/mHV7yVLb


ab505f No.5185

File: bc90eb364731802⋯.png (23.21 KB, 660x662, 330:331, tags.png)

File: ccf37c136afd5e3⋯.jpg (46.13 KB, 600x424, 75:53, 1462556963925-1.jpg)

>>5175

>'neighbouring' txt files

Perfect!

The only thing I'd like to see is a checkbox for deleting those neighboring text files after a successful import in the same way you can delete the files themselves after import. Both in the manual and automatic importer.

`add tags based on filenames>delete neighboring .txt files after successful import`

In case anyone else wanted it, Here is what I wrote for the task+a Windows binary:

https://ipfs.io/ipfs/QmWPUxfKo6FcNzc3TCTm69ZoUcbteyLAaqYWos95nx9FFV

Usage: just run it and it will process the current directory recursively; you can also provide as many space-separated paths as you want.

I only tested to make sure it works and imports; I haven't actually used it to migrate everything yet. I'm gonna hold off for a little bit to make sure that's the only data I want and that's how I want it formatted before doing it for real. When I do, though, I'll be sure to report any issues or success in the latest thread.

Thanks hydrus~


bc5bbc No.5186

>>5177

I am sorry for not jumping on this faster. I have burned myself out concentrating on this rewrite and I don't think I'll be able to get it done anyway. My brain is fried right now, but I will make sure to respond to this properly tomorrow.

If you haven't tried it yet, I'm confident you can revert to the old version.


fef9d7 No.5190

>>5185

>The only thing I'd like to see is a checkbox for deleting those neighboring text files after a successful import in the same way you can delete the files themselves after import. Both in the manual and automatic importer.

Hydrus already does that automatically.


4a9ea7 No.5192

File: e4c598eb1bc4ff3⋯.gif (1.32 MB, 387x263, 387:263, 1462024202909.gif)

>>5190

Whoops. For some reason it wasn't working earlier when I was testing it; it would delete the images but not the text. I removed everything but the db, replaced my install with an updated copy, and it's working as intended now.

I usually just replace my install with newer versions, so I had some leftover files like "cv2.pyd" and various dlls that are gone in newer releases; not sure if that was causing issues before.


e1c7d8 No.5197

>>5186

Don't worry, there is no need to rush. In any case, I'm glad that your program exists; thank you for this.


16f3e0 No.5199

>>5186

Don't overwork yourself my dude. Take it easy.


8598a5 No.5203

File: c012bf26b0dfe0a⋯.webm (5.96 MB, 1280x636, 320:159, c012bf26b0dfe0a7a37913496….webm)

>>5177

>>5186

Thank you for this follow-up. It looks like the update breaks for anyone updating from v237 or before. If you would prefer to wait, I think I have fixed the update code for v245, which will be out on 1st March.

If you would like to fix it now, I think you have two options:

If you still have a backup of your v230 db, download v238 and update it to that. Then retry v244 and it should work.

If you do not still have a backup, please make one of your database as it stands right now. Just copy the .db files somewhere at least, in case my instructions here break something permanently. Then go to your database directory and double-click the sqlite3 executable. A terminal window will appear–copy-paste this into it and hit enter if you need to: (Please note any other users with this problem, these commands only work for this user.)


.open client.caches.db
DROP TABLE specific_files_cache_8_3;
DROP TABLE specific_current_mappings_cache_8_3;
DROP TABLE specific_pending_mappings_cache_8_3;
DROP TABLE specific_ac_cache_8_3;
DROP TABLE specific_files_cache_8_7;
DROP TABLE specific_current_mappings_cache_8_7;
DROP TABLE specific_pending_mappings_cache_8_7;
DROP TABLE specific_ac_cache_8_7;
.exit

Then try to update. If the client boots, you might still get a couple of popup errors about the missing autocomplete cache tables. Even if you don't, go database->regenerate->autocomplete cache to recreate them. It will take a few minutes to complete.

If you decide to try to fix it either way, please let me know how you get on, and if not, let me know if v245 does it in the end.

>>5185

>>5190

>>5192

This is supposed to work all the time, but I'll make a note to check it out.

>>5197

>>5199

Thanks. I still have 161 tasks left in the 'rewrite' job wew lad, but a bunch of that is just stuff I haven't ticked off yet. Although I pushed a little too hard this week, it was fun work, and it is 90% done now. I unfortunately just can't hit tomorrow.


098b78 No.5206

>>5203

> v245, which will be out on the 1st March.

AUGH

Will 245 have more of the duplicate stuff, or do you still have to dedicate all of the week on the network rewrite?


7967f3 No.5208

>>5206

I should have some spare time next week, but there are a lot of emergency bugs I need to jump on, so they'll be first. Absent any new problems, I expect to be back to dupe stuff for v246.


bcec26 No.5211

>>5203

I did the second option. It took about 5 minutes to update, but there were no error messages and it seems to be working well. Thank you again for the help.



