
/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.

This board will be deleted next Wednesday. I am moving to a General on 8chan.moe /t/. This board is archived at 8chan.moe /hydrus/!

File: 442eb99a34c2ff7⋯.jpg (120.39 KB,379x834,379:834,442eb99a34c2ff7c026c477316….jpg)

c54963 No.8396

ITT: post, discuss, jerk off to, and improve scripts to use with hydrus.

One of my scripts was broken as shit and I just fixed it, but I figured it was better to graduate from the Q&A thread and make these easier to find.

____________________________
Disclaimer: this post and the subject matter and contents thereof - text, media, or otherwise - do not necessarily reflect the views of the 8kun administration.

c54963 No.8400

File: 8b83d3d31991e3f⋯.jpg (101.06 KB,379x834,379:834,8b83d3d31991e3f2d78458a9f1….jpg)

Here's the MakeNew script I just fixed.

It takes every line from a file named s.txt, compares each of those lines with the lines of old.txt (if it exists), and spits out a file named new.txt with the new lines. All the new lines are then appended to old.txt.

It's meant to be used along with my md5 script from the Q&A thread (I want to mess around with it a bit, so I'll post it here later), which creates a file named s.txt with the md5s of all the files in your hydrus folder, so you can easily perform an md5 search to get tags for them. Use that script first to generate the s.txt file, then either rename s.txt to old.txt (if it's the first time you're running the script) or run this one (if you already have an old.txt) to get all the fresh new hashes in new.txt while also saving them to old.txt.

The scripts are independent, so you can name them whatever you like (as long as the extension is ".py"). I just call this one MakeNew since that's pretty much what it does.

Sometimes I make seemingly retarded stuff, so feel free to ask if there's anything you don't understand about the script.

oldFile = open("old.txt", 'r')

line = ""
write = False
for line in oldFile:
    continue
if ((not(line[-2::] == "\n")) and (len(line) > 0)):
    print("line:" + line)
    print(len(line))
    write = True

oldFile.close()
write = False


readFile = open("s.txt", 'r')
newFile = open("new.txt", 'w')

for tempReadLine in readFile:
    for readLine in tempReadLine.split('\n'):
        if(len(readLine) < 1):
            continue

        newLine = True

        oldFile = open("old.txt", 'r')
        for tempOldLine in oldFile:
            for oldLine in tempOldLine.split('\n'):
                if(len(oldLine) < 1):
                    continue

                if oldLine == readLine:
                    newLine = False
                    break

        oldFile.close()
        if(newLine):
            newFile.write(readLine+"\n")
            newFile.flush()

newFile.close()
readFile.close()


oldFile = open("old.txt", 'a')
newFile = open("new.txt", 'r')

if(write):
    oldFile.write("\n")

for line in newFile:
    oldFile.write(line)

newFile.close()
oldFile.close()
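For comparison, the whole "which lines are new" step above can be sketched with a Python set, which avoids re-opening and re-scanning old.txt once per input line. This is a minimal sketch under the same s.txt/old.txt/new.txt conventions, not the poster's script; the function names are mine.

```python
# Minimal sketch of the same diff using a set for O(1) lookups.
# Names (make_new, run) are illustrative, not from the original post.
def make_new(s_lines, old_lines):
    """Return entries of s_lines that are non-empty and absent from old_lines."""
    old = set(old_lines)
    return [line for line in s_lines if line and line not in old]

def run(s_path="s.txt", old_path="old.txt", new_path="new.txt"):
    with open(s_path) as f:
        s_lines = [line.strip() for line in f]
    try:
        with open(old_path) as f:
            old_lines = [line.strip() for line in f]
    except FileNotFoundError:
        old_lines = []  # first run: everything in s.txt is new
    fresh = make_new(s_lines, old_lines)
    with open(new_path, "w") as f:
        f.writelines(line + "\n" for line in fresh)
    with open(old_path, "a") as f:
        f.writelines(line + "\n" for line in fresh)
```

The set makes each membership check constant time instead of a full rescan of old.txt, and the with blocks close every file automatically.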


c54963 No.8401

>>8400

>Sometimes I make seemingly retarded stuff

>

if ((not(line[-2::] == "\n")

And also actually retarded stuff sometimes.


e5ff83 No.8520

First, this is a script to get iqdb results. It can also be used as a server to get tags:

https://github.com/softashell/iqdb_tagger

This one is used to fetch images from DuckDuckGo and serve them as a hydrus booru server:

https://github.com/rachmadaniHaryono/duckduckgo-images-api

This one is planned to get images and data from multiple sources, but currently only works with Google Images. It's also capable of searching for alternative sizes and similar images with Google Images' help:

https://github.com/rachmadaniHaryono/gbooru-images-download

This is still a WIP, but the plan is to create a pixiv booru with its own tags and serve it to hydrus:

https://github.com/rachmadaniHaryono/PixivUtil2

Another WIP project, to serve reddit images as a hydrus booru:

https://github.com/rachmadaniHaryono/RedditImageGrab


26ac4c No.8523

>>8520

Getting a bunch of "PermissionError: [Errno 13] Permission denied" errors trying to run the iqdb one on Windows. I'm running from an admin command prompt. Any ideas?


2b2283 No.8525

>>8523

Can you post the error?

Also, maybe run it as administrator?

https://stackoverflow.com/a/47495497/1766261


26ac4c No.8550

>>8525

That is the error. It's a bunch of stuff like this:

"2018-04-03 10:01.09 path: imagedump\78e400ec2808bcde2012ec7bdd6e6d78369f1813ac8ea1ac0bd5f4731a2d061e.webm

error: [Errno 13] Permission denied: 'C:\\Users\\An0n\\AppData\\Local\\Temp\\tmpsq3l6z73'"

And as I said, I am running from an admin command prompt. My research suggests the problem comes from the temp file handling working differently on Windows than on Linux. Have you run this successfully on Windows yourself?
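One plausible culprit on Windows (an assumption on my part; I haven't traced iqdb_tagger's code): tempfile.NamedTemporaryFile with the default delete=True keeps the handle open, and on Windows a path held open that way can't be opened a second time, which surfaces exactly as Errno 13 on a Temp path. A sketch of the usual workaround:

```python
import os
import tempfile

# Hypothetical illustration of the Windows temp-file gotcha: with
# delete=True (the default), the file stays open, and on Windows a
# second open() on its path fails with PermissionError (Errno 13).
# Workaround: create with delete=False, close the handle, let the
# caller reopen the path, and remove it manually afterwards.
def write_temp(data):
    tmp = tempfile.NamedTemporaryFile(delete=False)
    try:
        tmp.write(data)
    finally:
        tmp.close()  # release the handle so other code can open the path
    return tmp.name  # caller is responsible for os.remove() when done
```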


c54963 No.8596

File: ca7a1b1ef952ecd⋯.jpg (114.22 KB,709x1000,709:1000,ca7a1b1ef952ecd6f4c6d9a561….jpg)

Here's the MD5 reader.

For some reason it would fuck up randomly when reading files; it happened most of the time when I left my PC on overnight. I made it print the filename and thread number to the terminal, which somehow fixed it and made it slightly faster, though the speed might just be placebo.

My guess is that since the terminal sat on hold for a fuckton of time, something eventually broke (because lol, scripting language on intensive tasks), and printing each file forces the terminal to update instead of just standing there drifting into death.

Because of that, it now prints the start date and end date at the end of the batch.

Usage is:

python scriptname.py numberofthread [anything here if you want it to clear the a.txt file instead of just writing to the end of it]

or just

scriptname.py * numberofthread [anything here if you want it to clear the a.txt file instead of just writing to the end of it]

import glob, os, subprocess, sys, datetime, Queue, threading

def writeHash(threadNum, file):
    hashProcess = subprocess.Popen(["CertUtil", "-hashfile", file, "MD5"], stdout=subprocess.PIPE)

    hash = str(hashProcess.communicate()[0]).split("\n")[1]
    hash = hash.replace(" ","")
    hash = "md5:"+hash
    q.put([threadNum, hash])


startDate = datetime.datetime.now()
os.chdir("./")


if(len(sys.argv)>3): #for retards without python in path
    open("a.txt", 'w').close()


threadNum = 2
if(len(sys.argv)>2):
    threadNum = int(sys.argv[2])

threads = []
for i in range(threadNum):
    threads.append(threading.Thread(target=writeHash, args=(0,0))) #placeholder
    threads[i].daemon = True


q = Queue.Queue()
outFile = open("a.txt", 'a')

curThread = 0
count = 0
podracing = False

for file in glob.glob("f*/*"):
    print(str(curThread) + ": " + file)
    if(podracing):
        result = q.get()
        curThread = result[0]

        hash = result[1]
        outFile.write(hash)
        outFile.flush()

    threads[curThread] = threading.Thread(target=writeHash, args=(curThread, file))
    threads[curThread].daemon = True
    threads[curThread].start()

    if(podracing): #should be faster than the operation below
        continue

    curThread += 1
    if(curThread == (threadNum-1)):
        podracing = True

outFile.close()
print(startDate)
print(datetime.datetime.now())
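For what it's worth, Python 3's hashlib can do the same job in-process, with no CertUtil subprocesses or thread juggling. This is a hedged sketch, not the poster's script; it keeps the f*/* glob and a.txt conventions from above.

```python
import glob
import hashlib

# In-process alternative to the CertUtil/Queue version above (Python 3).
# Reads each file in chunks so large files don't need to fit in memory.
def md5_of(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return "md5:" + h.hexdigest()

def hash_all(pattern="f*/*", out_path="a.txt"):
    # One "md5:<hash>" line per file, appended like the original script.
    with open(out_path, "a") as out:
        for path in glob.glob(pattern):
            out.write(md5_of(path) + "\n")
```

Since hashing here is CPU-plus-disk work in the same process, there is also no terminal-on-hold weirdness to work around.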


c54963 No.8645

>>8400

I just tried using the script after my latest 100-file update and it just fucking doubled the size of my old.txt. Turns out it was because of the shitty way that Windows handles files. I don't know exactly what happened, but the files just fuck up in unexpected ways sometimes. I once made a python script that wasn't working in some folders, so I made it print out the file names, and it still did nothing in those folders; I remade the script in batch, and the same folders that fucked up with python also fucked up with batch.

There's probably some complex fuck way to fix it, but I have no idea how to do that, so here's a makeshift fix I'm using:

>Make an old.txt file before using the script

>run it and see if it seems it's fucked up (old.txt fucking doubled in size)

>if it is, just copy makeNew to another folder

>create new empty text files for old.txt, new.txt and a.txt (do not just copy the files; that copies the broken file instead)

>copy the contents into each respective file

>run the script, it should work now


c54963 No.8646

>>8645

>>Make an old.txt file before using the script

*>Make an old.txt file backup before using the script


cc83c2 No.8875

>>8396

Does anyone have a Tumblr script that uses the Amazon links instead of tumblr's default media host?


44d58f No.8959


// ==UserScript==
// @name Use Tumblr Raw Image
// @namespace UseTumblrRawImage
// @description Changes all Tumblr hosted images to use the raw version
// @author jcunews
// @version 1.0.5
// @match *://*.tumblr.com/*
// @grant none
// @run-at document-start
// ==/UserScript==

(function() {

  var regex = /^(https?:\/\/)\d+\.media\.tumblr\.com(\/[0-9a-f]{32}\/tumblr_(?:inline_)?[0-9A-Za-z]+_(?:r\d+_)?)\d+(\.[a-z]+)$/;

  function processSrc(ele) {
    if (!ele.src || (ele.tagName !== "IMG")) return;
    var match = ele.src.match(regex);
    if (!match) return;
    match = match[1] + "s3.amazonaws.com/data.tumblr.com" + match[2] + "raw" + match[3];
    if (ele.getAttribute("data-src") === ele.src) ele.setAttribute("data-src", match);
    ele.src = match;
  }

  function processContainer(container) {
    var eles = container.querySelectorAll('img[src*=".media.tumblr.com/"]');
    processSrc(container);
    Array.prototype.slice.call(eles).forEach(processSrc);
  }

  var observer = new MutationObserver(function(records) {
    records.forEach(function(record) {
      if (record.attributeName) {
        if (record.attributeName === "src") processSrc(record.target);
      } else {
        var nodes = Array.prototype.slice.call(record.addedNodes);
        nodes.forEach(function(node) {
          if (node.nodeType === 1) processContainer(node);
        });
      }
    });
  });

  addEventListener("load", function() {
    processContainer(document.body);
    observer.observe(document.body, {
      childList: true,
      attributes: true,
      subtree: true
    });
  });

})();


44d58f No.8960

>>8875

check >>8959 but it needs some touch-up work


ea2f13 No.9365

I made a quick batch script to help import mangos. I usually rename all the images starting at 0 if there's a cover, or at 1 if not, then import them all using the filename as the page number, so I made this quick script to do it for me. Just throw it in the folder and double-click it.

@echo off
SETLOCAL EnableDelayedExpansion
CHOICE /C 01 /M "Start at "
IF ERRORLEVEL 1 SET a=0
IF ERRORLEVEL 2 SET a=1
for %%i in (*) do if NOT %%~xi==.bat echo %%~xi & ren "%%~fi" "!a!%%~xi" & SET /a a+=1
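If anyone wants the same trick without cmd quirks, here's a rough cross-platform Python sketch of it (my own take, not a drop-in replacement; the function name and the skipped extensions are assumptions):

```python
import os

# Rough Python take on the batch renamer above: rename every file in a
# folder to 0.ext, 1.ext, ... (or starting at 1), keeping extensions and
# skipping script files. Assumes the numbered target names are not
# already taken by other files in the folder.
def rename_sequential(folder=".", start=0):
    n = start
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        ext = os.path.splitext(name)[1]
        if not os.path.isfile(path) or ext in (".bat", ".py"):
            continue
        os.rename(path, os.path.join(folder, str(n) + ext))
        n += 1
    return n - start  # how many files were renamed
```

Unlike the batch version, this sorts the names first, so the numbering follows the original filename order rather than whatever order the shell enumerates.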


b2d6fb No.9401

File: 878199433bd98cc⋯.png (15.8 KB,302x302,1:1,Screaming-Infant.png)

Someone is uploading rating:[website address] tags to the PTR. Check your scripts, people. The rating tag will not always be in the same place on every booru, so you can't just grab the third line under statistics or whatever.


0f0a54 No.12602

File: 95ef85ccc17bb8d⋯.jpg (58.11 KB,804x600,67:50,95ef85ccc17bb8da219b1d4081….jpg)

Here's a quick python one for md5 search.

import sys


if(len(sys.argv) < 2):
    print("script.py file.txt")
    sys.exit(0)


fileName = sys.argv[1]
file = open(fileName)

gel = open(fileName+"-gelbooru.txt", 'w+')
e621 = open(fileName+"-e621.txt", 'w+')
r34xxx = open(fileName+"-rule34xxx.txt", 'w+')
dan = open(fileName+"-danbooru.txt", 'w+')
tbib = open(fileName+"-tbib.txt", 'w+')
xb = open(fileName+"-xbooru.txt", 'w+')
safe = open(fileName+"-safebooru.txt", 'w+')

for line in file:
    gel.write("https://gelbooru.com/index.php?page=post&s=list&tags=md5%3a"+line)
    e621.write("https://e621.net/post/index/1/md5:"+line)
    dan.write("https://danbooru.donmai.us/posts?page=1&tags=md5:"+line)
    r34xxx.write("https://rule34.xxx/index.php?page=post&s=list&tags=md5%3a"+line)
    tbib.write("https://tbib.org/index.php?page=post&s=list&tags=md5%3a"+line)
    xb.write("https://xbooru.com/index.php?page=post&s=list&tags=md5%3a"+line)
    safe.write("https://safebooru.org/index.php?page=post&s=list&tags=md5%3a"+line)

Just open the files you want to search in hydrus, select all, right click, select -> copy -> hashes -> md5, wait for it to copy all of them, then paste the hashes into a .txt file.

You run the script with "scriptname.py filename.txt", so you need python in your path. It'll create 7 different text files with the hashes already formatted for 7 different boorus, so you can select all, copy, and paste into URL import to get hydrus to download all the files it finds.

I did those 7 because they're the only ones I know of with md5 search, but I might just be retarded and have missed some other boorus.


0f0a54 No.12604

>>12602

I always forget to close shit. Depending on your computer, that script might fuck some files up; here's the fix.

import sys


if(len(sys.argv) < 2):
    print("script.py file.txt")
    sys.exit(0)


fileName = sys.argv[1]
file = open(fileName)

gel = open(fileName+"-gelbooru.txt", 'w+')
e621 = open(fileName+"-e621.txt", 'w+')
r34xxx = open(fileName+"-rule34xxx.txt", 'w+')
dan = open(fileName+"-danbooru.txt", 'w+')
tbib = open(fileName+"-tbib.txt", 'w+')
xb = open(fileName+"-xbooru.txt", 'w+')
safe = open(fileName+"-safebooru.txt", 'w+')

for line in file:
    gel.write("https://gelbooru.com/index.php?page=post&s=list&tags=md5%3a"+line)
    e621.write("https://e621.net/post/index/1/md5:"+line)
    dan.write("https://danbooru.donmai.us/posts?page=1&tags=md5:"+line)
    r34xxx.write("https://rule34.xxx/index.php?page=post&s=list&tags=md5%3a"+line)
    tbib.write("https://tbib.org/index.php?page=post&s=list&tags=md5%3a"+line)
    xb.write("https://xbooru.com/index.php?page=post&s=list&tags=md5%3a"+line)
    safe.write("https://safebooru.org/index.php?page=post&s=list&tags=md5%3a"+line)

file.close()
gel.close()
e621.close()
r34xxx.close()
dan.close()
tbib.close()
xb.close()
safe.close()
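The forgot-to-close problem goes away entirely with context managers; contextlib.ExitStack handles a variable number of files at once. Here's a hedged rewrite of the same script (the URL prefixes are copied from the post; the structure and names are mine):

```python
from contextlib import ExitStack

# Same 7 boorus and URL prefixes as the script above; a dict plus
# ExitStack replaces the seven open()/close() pairs, and every file
# gets closed even if something raises midway through.
BOORUS = {
    "gelbooru": "https://gelbooru.com/index.php?page=post&s=list&tags=md5%3a",
    "e621": "https://e621.net/post/index/1/md5:",
    "danbooru": "https://danbooru.donmai.us/posts?page=1&tags=md5:",
    "rule34xxx": "https://rule34.xxx/index.php?page=post&s=list&tags=md5%3a",
    "tbib": "https://tbib.org/index.php?page=post&s=list&tags=md5%3a",
    "xbooru": "https://xbooru.com/index.php?page=post&s=list&tags=md5%3a",
    "safebooru": "https://safebooru.org/index.php?page=post&s=list&tags=md5%3a",
}

def write_search_urls(file_name):
    with ExitStack() as stack:
        src = stack.enter_context(open(file_name))
        outs = {name: stack.enter_context(open(file_name + "-" + name + ".txt", "w"))
                for name in BOORUS}
        for line in src:
            md5 = line.strip()
            if not md5:
                continue
            for name, prefix in BOORUS.items():
                outs[name].write(prefix + md5 + "\n")
```

Adding another booru is then a one-line addition to the dict instead of three new statements.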


c3e71a No.12631

>>12604

Thanks, this is pretty useful. Sankaku has an md5 lookup as well:


sankaku = open(fileName+"-sankaku.txt", 'w+')
sankaku.write("https://chan.sankakucomplex.com/?tags=md5%3A"+line)
sankaku.close()


2f8c10 No.12982

>>12604

>>12631

>>12602

I'm not able to get the md5s unless I open each image in the hydrus media viewer first, in which case there is no easy way to select them all and get the hashes for all of them.


4c0673 No.13155

>>8396

Can someone bundle a torrent client with Hydrus+IPFS?

Just found this https://unix.stackexchange.com/questions/44247/how-to-copy-directories-with-preserving-hardlinks

(People in >>>/animu/81926 recommended hard links rather than soft links, so yeah)



