d1575b No.2352371
This is a thread created by a programmer for programmers to interact in support of QResearch. Great place to perform live Q&A chat.
Allowed file types: jpg, jpeg, gif, png, webm, mp4, pdf
Max filesize is 16 MB.
Max image dimensions are 15000 x 15000.
You may upload 5 per post.
d1575b No.2352393
8ce561 No.2352397
d1575b No.2352403
WT here. TW said:
>It's a searchable offline archive of posts and tweets, easy to seed with multiple collections. Shows time deltas. Qclock filter function. Caching proxy for images, rewrites URLs to fetch from archives when original picture is not available.
>I run it locally. I have no idea what the best packaging/distribution approach is, thoughts are welcome.
>Multiple levels of trust needed to share, run. Doing this safely for everybody is my main concern. Not everybody can inspect the code at every release, or would be willing to install/run on their machine.
Wow. Sounds involved. Let's get some basic info outta the way and maybe I can give you things to consider…
8ce561 No.2352404
So, WT, ideas on how I could share what I have written?
d1575b No.2352409
>>2352397
First off, what platform(s) are you targeting?
d1575b No.2352418
Second, please tell me it's going to be free to the masses.
d1575b No.2352423
>>2352409
Windows? Android? Apple? Any browser?
8ce561 No.2352426
>>2352409
It's made for desktop; I haven't tested on mobile.
The frontend will run on anything with a relatively modern browser.
The backend needs Node. It also uses ext4fs to store large collections of files, but that limitation can be removed.
d1575b No.2352437
>>2352426
Windows Desktop?
8ce561 No.2352445
>>2352418
>Learn how to archive offline.
The recent discovery about the Twitter post-correction delta and the Qclock means the tool would also speed up the dig.
d1575b No.2352450
8ce561 No.2352455
>>2352437
The frontend runs on any modern browser.
Electron could be used to package the app for any platform, frontend and backend. I've started working on it but it needs more work, if this is the best packaging approach.
I host the backend in a (linux) virtual machine, but it could be a server in the cloud. Having everything available offline was the driving motivation, so cloud hosting would defeat that purpose.
d1575b No.2352462
Third, what's the dev platform?
I know a lot about deployment on Windows (all flavors), Android and a little about Apple (if developing on Windows). For other stuff you'll have to leave a post here and wait to hear something.
d1575b No.2352470
>>2352455
What's the front-end written in? GUI or cmd line UI?
8ce561 No.2352495
>>2352462
>>2352470
The frontend is written in JS (modern, requires babel/JSX) and runs in any browser. GUI. (The command line is used to fetch resources, but I'll integrate that to the UI.)
Electron embeds a browser and would allow it to run like a native app on Mac & Windows.
Distributing in a *safe* way is my main concern.
>How can user trust that they don't get a malicious version of the app?
>How can I avoid doxxing myself sharing this?
d1575b No.2352506
>>2352495
Wow. Good questions. Hang on…
8ce561 No.2352507
If it were a safe solution I'd just put the code up on github, post a link here, and let someone address the packaging/distribution. I'm a simple codeanon.
d1575b No.2352520
>>2352495
So you have a very unique problem, my man. I don't think I've ever been involved in a deployment that was anonymous. It may sound ridiculous but perhaps you need to think like a hacker in that you deploy an install program via torrent and include a pre-auth "safe certificate" from an anti-virus company.
8ce561 No.2352550
>>2352520
A simple bootstrapper with code signing? Yes, it seems to solve the "anonymous distribution" part of the problem.
Why would users trust that I won't get comped, resulting in malicious code being pushed in updates?
d1575b No.2352567
>>2352550
Anti-virus companies will issue you (for a small fee) a certificate. Users go to the AV website, enter the KeyCode from your certificate and are told if it's safe or not.
8ce561 No.2352571
>>2352550
More precisely, if the software is useful, how do I make sure it cannot be exploited as a trojan by Clowns? Or why is this not a concern?
d1575b No.2352575
>>2352550
Oooh. Bad actors. Hmmm. That's going to have to be approached from a checksum/md5 point of view. MD5 is your friend for Linux dists.
8ce561 No.2352584
>>2352567
This moves the trust to the anti-virus company. Do the Clowns have a copy of the CA? Is it not safer to use a self-signed CA? I don't know enough about this topic to make a decision.
8ce561 No.2352596
>>2352575
Code signing solves the tampering problem, but it doesn't prevent malicious actors from getting at me and taking over the distribution infrastructure.
If sharing this is going to put me and everybody at more risk than keeping the code for myself, it seems rational not to?
d1575b No.2352606
>>2352584
This certificate I'm talking about is not the same as a CA. It's for verifying that your app is not malware.
As for MD5, be sure you have finished all MD5 computations before dropping the 5 to 8 bucks for the AV cert.
d1575b No.2352610
If you stick with MD5 and give appropriate warnings to ensure MD5s match before use, you should be at minimal risk.
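A minimal sketch of that checksum workflow in bash (coreutils); the file names are placeholders, and the second, stronger digest follows the later advice in this thread to not rely on MD5 alone:
# Publisher side: generate checksum files alongside the release archive
md5sum qtool-1.0.tar.gz > qtool-1.0.tar.gz.md5
sha256sum qtool-1.0.tar.gz > qtool-1.0.tar.gz.sha256   # stronger digest, ideally published via a second channel
# User side: verify the download before running anything
md5sum -c qtool-1.0.tar.gz.md5
sha256sum -c qtool-1.0.tar.gz.sha256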
d1575b No.2352623
>>2352596
A malicious actor could do a man-in-the-middle attack on your database if he knows a specific person is using it. I dunno. The variables on attack vectors expand exponentially there.
d1575b No.2352634
I don't want to mislead you–I'm stretching my knowledge here since I'm primarily a Windows/Android Dev. Sorry I couldn't be of more help.
8ce561 No.2352645
>>2352596
Writing a great app and distributing it in an *apparently* safe and anonymous way is an excellent vector to compromise autists. I don't know why the Clowns haven't already done that. Too much effort? Risk of being exposed?
>>2352606
The code signing certificate is trusted by a CA. Anyone with a copy of the CA's signing key can produce trusted signed code.
You are suggesting MD5, but it is a very weak hash function.
d1575b No.2352681
>>2352645
I'm saying get a reputable company to certify your app as legit (not malware) since you'd be anonymously distributing it as a torrent.
Then, using MD5 can tell the end user that not a single byte has been altered in the current deployment. MD5 is more than adequate for that small task.
d1575b No.2352683
Gotta head to work. Hope I helped somewhat. Shadilay!
8ce561 No.2352696
>>2352623
There is no database, it's all in local plain files.
>>2352634
I understand. It is a difficult topic. I'll keep in mind the signed code bootstrapper idea, it is part of the solution.
Thank you for the discussion.
If Q team is reading this, maybe get in touch? Extract me and I'll happily write code for the community. Though it's perhaps not worth the hassle for you at this point.
8ce561 No.2352708
>>2352683
Thanks & Shadilay bro!
29f2fa No.2354303
>>2352696
EA here. Can help with cloud / distribution / devops.
If it's "all local plain files" (btw that's good for "store it all offline") then the attack vector changes from MITM to corrupting the file sources. So we'd want a way for the original file owner to verify/checksum the distribution source, and the downloader to verify/checksum his copy.
>>2352520 Certificate can come from LetsEncrypt.org. Also it's free. I use this for some of my sites already.
8ce561 No.2354516
>>2354303
Thank you for the input.
Not sure about LE certificates, I believe they are tied to a domain due to the verification process? I don't know that I would be able to secure a domain anonymously. Also LE certs expire after 3 months, they are not meant for code signing.
I am hesitant to go with a self-signed CA, it seems maybe risky but I haven't thought it through yet.
Local plain files by design. I don't rule out using one or several local DB engines, but they would only contain information that can be reconstructed from the local plain files.
Indeed corruption at the source could be a problem. I have a (python) 8chan thread archival tool that could be made available as a service (run by independent sources), integrating the hashing process. Cross-checking sources would help detect comped ones.
I'll work on this aspect as soon as paying job permits.
Is there such a thing as anonymous github without going to the dark web? A trustworthy (NSA/MIL) git server would be awesome, but I have no idea how I would get access to that and have reasons to believe it is safe to use. Also I am neither US resident nor citizen.
dc0709 No.2359386
not trying to slide, but you are all geniuses! fantastic work you are all doing.
please keep it up, and when you have the time, offer some advice to a noob trying to become a master computer scientist such as you guys.
>what programming language to start
>what to learn besides programming languages
>where are some good places to learn CS
thanks and again great work
1aaba4 No.2360560
>>2359386
damnit, i am having a brain fart. so many steg progs installed i can't remember which one subtracts one image from another
2.png needs to be subtracted from 027
or combined
from what i can tell with bitmasking it's a girl kneeling with a fountain of blood spraying on her or out of her. the water is blood, i separated them. HELP!
also zsteg is far far superior for detecting hidden shit
found all kinds of smartphone viruses posted by clown bots, motorola assembly files and shit
binwalk
foremost
anyone running apache can you get this up and running? finding hidden twitter images
careful with 027 it has detected headers of phone executables but could be false
https://github.com/holloway/steg-of-the-dump
e4be4b No.2366039
>>2354516
> Is there such a thing as anonymous github
I'm starting to evaluate git-ssb for myself. I'm not sure yet about how well it would withstand attacks by determined adversaries. It is decentralized and requires some extra software running, I'm not sure if that counts as "dark web".
https://git-ssb.celehner.com/%25RPKzL382v2fAia5HuDNHD5kkFdlP7bGvXQApSXqOBwc%3D.sha256
It's built on top of secure-scuttlebutt:
"A database of unforgeable append-only feeds, optimized for efficient replication for peer to peer protocols."
https://github.com/ssbc/secure-scuttlebutt
Social network application (kind of like twitter):
https://github.com/ssbc/patchwork
>>2352596
> Code signing solves the tampering problem, but it doesn't prevent malicious actors from getting at me and taking over the distribution infrastructure.
Secure Scuttlebutt has similar benefits to a blockchain. Anyone can distribute your messages (not tamper), and you can't rewrite your post history. Kind of like with Git how every commit has a hash that is based (in part) on the commit history. So if you were compromised, only messages you write from that point forward would be affected.
Conceivably, you could publish messages dated in the past if you create them in order. So for example you could create a "Q" identity (public/private keys) and import all Q posts in order and have them contain the desired timestamps. You wouldn't be able to change them once published to the network, kind of like with Git's commit history.
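A toy bash illustration of the append-only/hash-chain idea described above (an analogy only, not the actual SSB or Git format; the file names are placeholders):
prev=genesis
for post in post1.json post2.json post3.json; do
  # each entry's hash covers the previous hash plus the new content,
  # so rewriting any earlier entry changes every hash after it
  prev=$( { echo "$prev"; cat "$post"; } | sha256sum | cut -d' ' -f1 )
  echo "$post $prev"
done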
d5f7c2 No.2366235
>>2354516
Self-signed is useless since https will flag it as self-signed, hence "questionable".
No certificate authority trail to follow for credibility.
Basically it amounts to the server owner saying "I vouch for myself".
d5f7c2 No.2366298
>>2359386
read some books. very few real "masters" here, but some highly competent, self-taught anons with frequently "narrow" expertise.
Don't bother talking about Internet traffic routing, security or details of the IP protocol family. Very, very few here speak that language.
Mostly host-based programming/scripting, and web stuff.
d5f7c2 No.2366315
>>2360560
So you already discovered the password?
e4be4b No.2366980
>>2366235
For code signing, PGP (gpg) is great. Git supports commit signing with it, and it is used by major Linux distros in their package management utilities.
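A minimal sketch of that gpg workflow, assuming a key pair already exists under whatever (pseudonymous) identity is used; file and tag names are placeholders:
# sign a release archive with a detached, ASCII-armored signature
gpg --armor --detach-sign qtool-1.0.tar.gz      # writes qtool-1.0.tar.gz.asc
# users who have imported the signer's public key verify before unpacking
gpg --verify qtool-1.0.tar.gz.asc qtool-1.0.tar.gz
# git can sign and verify commits and tags with the same key
git commit -S -m "release 1.0"
git tag -s v1.0 -m "release 1.0"
git verify-commit HEAD
git verify-tag v1.0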
d38a5a No.2367573
>>2359386
Start with Python. https://www.learnpython.org/
Also learn databases - start with sqlite, because it doesn't require a server. UnQLite is also a good db to start with - it's up to you to learn the difference between SQL and NoSQL. Learn html and javascript. There is also a JavaScript runtime called Node.js that lets you use the language outside the browser.
I believe these are good places to start:
https://www.tutorialspoint.com/index.htm
https://www.codecademy.com/
91b78f No.2368501
Hello fellow programmers!
Just viewed this video https://youtu.be/MqmeteSv8cU and SerialBrain2 spotted that "FARMER" and "QANON" have the same English letter gematria.
Pretty cool, but who would notice that without a calculator or word list? Just made one! Based on English vocabulary file "enable1.txt" (used by Words with Friends game). https://anonfile.com/Z8H295f7b1/gematria.txt
It would be better to make a gematria word list based on Q posts, especially since English gematria seems to often reference names…
There may be a certain intuition or Sense Motive to hone in on likely clues but let's up our game!
05a340 No.2368709
>>2366298
Anon, that was beautiful, you are a great 'first contact' Anon!
fa4e1b No.2368845
I posted as new thread before I saw this thread…
Tool for Q-searchers. Some fragments mapped, ball is in your court.
https://mega.nz/#!WfhRlASA!gaDqWNdpP-OSWY4bue0Oj0En8h9lXX85J0FFIXytRGI
Node JS, Electron, Cytoscape, GunDB
cd into extracted directory and run "npm run"
d5f7c2 No.2368941
>>2368501
Gematria originated as an Assyro-Babylonian-Greek system of alphanumeric code or cipher later adopted into Jewish culture that assigns numerical value to a word, name, or phrase in the belief that words or phrases with identical numerical values bear some relation to each other or bear some relation to the number itself as it may apply to Nature, a person's age, the calendar year, or the like. A single word can yield multiple values depending on the system used.
Although ostensibly derived from Greek, it is largely used in Jewish texts, notably in those associated with the Kabbalah. The term does not appear in the Hebrew Bible itself.
Some identify two forms of gematria: the "revealed" form, which is prevalent in many hermeneutic methods found throughout Rabbinic literature, and the "mystical" form, a largely Kabbalistic practice.
Though gematria is most often used to calculate the values of individual words, psukim (Biblical verses), Talmudical aphorisms, sentences from the standard Jewish prayers, personal, angelic and Godly names, and other religiously significant material, Kabbalists use them often for arbitrary phrases and, occasionally, for various languages.
A few instances of gematria in Arabic, Spanish and Greek are mentioned in the literature; some Hasidic Rabbis also used it, though rarely, for Yiddish.
However, the primary language for gematria calculations has always been and remains Hebrew and, to a lesser degree, Aramaic.
Numerology is any belief in the divine or mystical relationship between a number and one or more coinciding events. It is also the study of the numerical value of the letters in words, names and ideas. It is often associated with the paranormal, alongside astrology and similar divinatory arts.
Despite the long history of numerological ideas, the word "numerology" is not recorded in English before c.1907
So it would be correct to describe it as sort of a mystical, Kabbalistic form of numerology voodoo like reading chicken bones?
With no scientific or linguistic basis?
Just magic words and numbers?
And we are supposed to take it seriously?
OK
8ce561 No.2369040
>>2366039
Thank you Anon! SSB is precisely what I was looking for.
@ljtQyLKmVLKw/jGzA1lqugPLL+8sDO7AYnTJqr9lYcI=.ed25519
Still setting this up, I'll get there.
40c6d7 No.2369158
>>2368941
I don't know about the mystical aspects but it sounds like some players are using it as a means of secret messaging. Q = 17 has come up a bunch of times too so it could be a communications technique.
Let's check 187…
If people are using gematria as a name-drop/signature and time deltas sometimes… Let's check 111… No luck. What are some good numbers to search for because it might match names.
All those 2 letter abbreviations would have values between 2 and 52… This isn't going anywhere. Wonder if any names equal 30 after those 30 days of silence… NP = 30, Nancy Pelosi? She isn't in the news at this time, bad loose thread…
If and when a real gematria indicator is present you would think other corroborating hints would be in the same message… Maybe doing it backwards then. :/
Fine! I AM NOT SERIAL BRAIN! HE IS THE BRAIN AND I AM THE MOUTH! haha Maybe I got carried away there…
b82900 No.2369794
What if it's useful to LABEL the edges between nodes. And thus/then have multiple edges.
For instance, Strzok has multiple connections to the same person (ie Page etc) but each connection has a different flavor (ie co-conspirator, "textual" relationship). This is my personal gripe with https://DiscoverTheNetworks.org, that it doesn't provide a 10k foot view of how the quid pro quos work. Another example: WJC gives a "speech" on a date, Russia "pays" him for it, and then somehow uranium is exported to them.
Yes, populating this thing will be time-consuming, but we need not try to enter all the details all at once. Eat an elephant one bite at a time.
SSB looks like a winner.
GunDB is very interesting but doesn't have property lists for edges. It's possible to use intermediate nodes for edges, but then the challenge is data management or graph visualization.
6b3962 No.2370327
>>2364880
Looks interesting. Using the API?
7ceeb5 No.2370545
>>2359386
agree!
hope u dont mind muh lurking as well
6b3962 No.2370561
>>2359386
>>2367573
^^^^^^^^^^^
>Start with Python, SQL. Learn html and javascript. These 4 languages will get you FAR all by themselves.
Start with HTML/Javascript if you are starting out, then Python and SQL
fa4e1b No.2371101
GunDB lets you have property lists for each node and many connections. The Q Analysis app lets one manage those individual properties; for each node only the name and image are displayed. Each node can cross link to another graph file or be linked to a url. All html css js. Can customize the presentation layer.
8ce561 No.2371704
>>2370327
Yes, it loads a local JSON file that comes straight from qanon.news/api/posts. I plan to add support for more sources but time has been scarce lately.
Hopefully I can share this before Q starts wrapping up.
c9230e No.2371963
>>2371704
Haha yeah, can we finish our tools before Q finishes dismantling a 2,000 year global conspiracy to enslave the entire species?
bb7b36 No.2377944
Programmer here, was working on self-redistributing OS and encountered a similar problem to yours on verification. You can't provide a guaranteed 100% non-comped program, all you can do is make it so it's extremely unlikely (don't let perfect be the enemy of good).
MD5 hash has been broken for years and data can be manipulated in transit or at rest. JavaScript payload is sent plaintext and can be intercepted (see 'problems with encryption in JavaScript' - you can't send a reliable JavaScript program without the reliable program having a reliable means of transport).
Electron is a Chrome spin-off and I'd advise away from 'Bridge' technologies.
It's unclear if you're after a standalone app or a webhosted app, so I'll attempt to bridge both. Webhosted is nearly impossible to give any real assurances of non-comped status (you or host can be compromised, can be hijacked on site, etc).
Standalone requires a webhost, so it hits the same issue.
So, tips:
1) Keep the app small. The smaller it is, the easier it is to do a byte-by-byte comparison. A 20 MB app would take mere seconds, 800 MB is going to be a pain.
2) Use anything stronger than MD5 for a hash. Avoid anything with the NSA's rubberstamp of approval (that means no SHA256 etc).
3) Supply more than one type of hash of the software (a sketch follows at the end of this post).
4) Make sure you have an alternative source that keeps a copy of the current hash(es) [EG an archive page] so even if main website is comped, archive isn't.
5) Utilise multiple webhosts for hosting the main package itself (why? To compromise the software all hosts have to be hijacked). You can include free software hosts etc into this mix. Don't touch Mega because Kim Dotcom no longer owns it (NZ government gave it to a Chinese investor).
6) Include a copy of the hashes in a text file which is shared with the software package.
7) Use a 'read only' storage format (EG squashfs or an encrypted archive). In order to tamper with it, you'd need to replace the entire file.
8) Supply the individual with the means to build their own from scratch. So if the binaries are comped, it's possible to recreate a non-comped binary from source.
9) Archive copies of the source code.
10) Only push out major releases (so you're not overwhelmed with the archiving/hashes/mirroring).
Essentially to avoid software compromise, you need many alternative copies and many checks/balances. As for yourself being compromised, you need someone who acts as your watch guardian (they don't have control over the project, but they do have the power to say if you're compromised).
Having warrant canaries and a specific coded signal that you can mention that the community will know to mean you're compromised will also help.
If you sell out and none of the checks succeed, then the open source code or binaries would be analysed or decompiled by a white hat eventually and you'd become exposed anyway if that was the case.
But the most compromised product of all is the one you don't even produce. ; )
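A minimal bash sketch of points 2, 3, 6 and 7 above, assuming squashfs-tools and GNU coreutils are installed; the file names are placeholders and the hash choices are just examples:
# 7) pack the app into a read-only image, so tampering means replacing the whole file
mksquashfs ./qtool-src qtool-1.0.squashfs -comp xz
# 2/3/6) publish more than one kind of digest of the same image
sha512sum qtool-1.0.squashfs > qtool-1.0.sha512
b2sum     qtool-1.0.squashfs > qtool-1.0.b2     # BLAKE2, not an NSA design
# user side: every check must report OK
sha512sum -c qtool-1.0.sha512
b2sum -c qtool-1.0.b2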
8ce561 No.2380208
>>2377944
Wow. Thank you Anon, this is a fantastic response, very well thought out. Will re-read often.
Docker? Small images, reproducible builds. Not much isolation, but probably enough. Not very easy to run.
Virtualized Alpine-Linux-based iso image? A little heavier, perhaps not so much with careful decisions (go backend instead of python). I think I prefer this approach.
I'll try harder. Again, thank you!
fa4e1b No.2381203
>>2377944
Yes on all points. However, for my part, I've got too little time to maintain it myself. Source is there. Any one can read and decide if they wish to utilize it. It's a start, hopefully others can help realize it. I can achieve what I set out to achieve and figured others might get something out of it.
8ce561 No.2391576
I created a board for Q Research software development.
>>>/qcode/1
6b3962 No.2398158
Whatever happened to the idea of a Q Research Wiki? Did that whole thing die out?
bb7b36 No.2401309
>>2398158
Wiki software requires a dedicated PHP host of some sort to host (especially if you want to stay protected against censorship).
Wikia is a free alternative but it reeks of liberalism and I bet would censor in a heartbeat.
It would be nothing short of a full-time job to maintain (both against shills and keeping it up to date/organised).
I could guide you on the basics of setting up MediaWiki on a Linux box (LAMP + some light configuration), but I absolutely do not have the resources to support the endeavour, I am literally overstretched.
6b3962 No.2412127
>>2401309
I like the idea of automation. If we had a good way of trawling the breads to assemble info, the wiki could build itself. That, too, is probably a full-time job.
What are the projects that are currently being worked on? Tying them together or collaborating would make things go faster. Let's pool our resources.
1bc038 No.2449345
just a question
looks to me like there are very advanced bots out in the open
if so, I might have bumped into such bots in public forum
- they follow an agenda
- they give likes
- they give dislikes
- they distract
- they talk to each other
- they answer questions
- they notice Leetspeak, but can't understand it
2e41e8 No.2469655
I made a neat script that could be useful for Linux users.
Many times when you are making a script to process a lot of files it is not so easy to fully use all your possible CPU power (cores->threads).
So after a few tries I came up with this script that can be used as a base. It is easy to modify to your needs and it uses all the CPU you allow.
For example, if you have files to process (say, testing thousands of images for steganography), list the files, feed the list as standard input to the script and let it spread the processing across all available cores / threads.
For example:
- list files using ls -1 *.png, then pipe the output to the script
- or any other line-based info that you can process; the script will spread the items across all cores.
here is the script:
#!/bin/bash
input="/dev/stdin" # read the list of items from standard input by default
tot=0
# THIS gets the available processing units; if 4 cores => 8 threads this will get value 8
# depending on your jobs, could try values like units -1, -2, +1, +2… or *2, *3, *4
units=$(nproc --all)
while read -r line; do
tot=$(($tot + 1)) # just a counter to display the total processed
./clean1.sh "$line" > "$line.clean" & # THIS line will run parallel jobs. Change it to anything you want, remember & at the end
jobs=$(jobs -p | wc -l | grep -o '[0-9]*') # number of currently running jobs
echo $tot "$line" "($jobs)"
#echo "Jobs $jobs"
while [ $jobs -ge $units ]; do # if jobs are at the maximum value, wait
#echo -n '.'
sleep 0.01
jobs=$(jobs -p | wc -l | grep -o '[0-9]*')
done # when a slot frees up, continue reading input and add more jobs.
done < "$input"
wait # wait for the remaining background jobs to finish
echo Done $tot
fa4e1b No.2493793
>>2412127
AI can assist in this. I’ve been working on AI to monitor reddit for shills and abusive moderation. Talking point detection is on my list too.
If we can detect these things with AI we can respond to shills with evidence to refute their argument and with moderators, archive and record events to expose the infiltration.
I’m hesitant to post code yet, not sure it’s a good idea to put something out that can be weaponized by trolls the same as we use it as defense.
Right now I do an ok job of detecting when people are arguing vs debating, asking questions vs concern shilling. Starting to narrow in on detecting posts likely to be removed by a board's moderator but need more sample data.
Nonetheless it's possible. We just need great training data. Sample conversations with shills and sample moderated posts.
e7dd40 No.2494187
6b3962 No.2495240
>>2493793
Some javascript magic? I'd be interested in reading it.
fa4e1b No.2535594
>>2495240
I’ll post source this weekend. Real life has been occupying free time.
b7fb97 No.2561043
>>2412127
Thread parsing automation (you'll need to install beautiful soup and mechanize, on Linux this is easy [do an apt-cache search for beautiful soup, and then mechanize]).
Here's HunterKiller bot (it can parse threads, but the code is sufficient enough you can probably re-engineer it to parse posts from threads). It was censored by the mods when they deleted the Q-branch thread:
https://pastebin.com/LmPFhtXm
Uses python 2.7 to my knowledge. Can't help with Windows, I ditched that shit OS years ago.
6b3962 No.2561161
>>2561043
>https://pastebin.com/LmPFhtXm
Interesting! Looks good - although you'll possibly miss some breads due to ebakes and what not if they don't match your "Q Research" restrictions.
I've been archiving the JSON from here since about FEB. I have over 3000 breads all archived locally and online in JSON - I just need to work out some logic on how to trawl with some context in order to make some sense of all the data we've found.
I'll take another look later on. Looks good!
b7fb97 No.2561191
>>2449345
Those are 'professional level' software bots, which have been in development for quite some time now (since at least 2010). The end goal of such software is to get 'natural speaking' bots that can engage targets (and 'talk' with other bots to make it seem like a legitimate conversation is ongoing).
I spent years lurking and interacting on dubious forums where such malicious activities were being tested. The bots aren't perfected (they have the same flaws as normal chatbots, presently), but there's an ongoing effort to make them 'more advanced'.
HunterKiller bot is homebrew, but it's based on several iterations of code which was based on observations of the so-called 'professional' bots. Such software is sold to both military and political activism groups (the bad kind: think Media Matters).
HunterKiller was my proposal to counter the bots: basically, a bot advanced enough to hunt other bots. What I've given you is a barebones example that should contain sufficient enough information for you to build your own variants.
It's my personal opinion that passion trumps corporate software development any day of the week. That code took me about 7 days to write (I have limited free time), but the potential to tack on other, more advanced Python libraries are there.
PS: Shills often use scripts (folders and pieces of paper in more primitive operations), more advanced shill operations use specially designed software that allows them to copy/paste generic garbage responses (usually across several accounts or IPs, depending on sitch), and very advanced shills have bots that automatically select what garbage to copy/paste with the shill acting as bot handler.
Check out the Clown College thread where I explain more on bots in my earlier posts.
From a strategic standpoint, you have the homefield advantage, because shills/bot posters rely on spam and generic replies or obvious tells. With a HunterKiller bot that is sufficiently well programmed, you can mass identify these bots and shills for some beatdown with administration tools.
Eventually you will experience shills who have tools that can 'thesaurus' the words around so it seems 'different', so don't rely on verbatim matches but consider perhaps even Markov chain analysis.
Hope this helps.
b7fb97 No.2561225
>>2561161
The point of the tool is to actually filter out the breads because it's a HunterKiller (you're not hunting/killing the breads: you're looking for the garbage threads). But you could modify it to investigate breads for shill posts or copy/pastes. It's up to you.
If you skim over the many anchored threads, you might notice it almost appears as if the admin are using such a tool (which greatly speeds up identification rates of trash threads). It's a pity they censored it as it was intended to help, not hinder (it cannot post and I won't build it so it can as that would only merely aid the shills).
b7fb97 No.2561418
HunterKiller's friends (variations) include:
ArchiveBot: mass collect all posts from all active threads in the catalogue (allowing you to do a raw text save of the data). Alternate version: bulk send archive requests of the thread URLs to archive.is/archive.org.
[Properly combined, you can keep a simultaneous offline/online version. Word of warning: when archiving to a website, be sure to only archive 'finished' threads on archive.is and to space out the requests over several minutes so it doesn't appear you are flooding/a bot. Automated and slow is better than manual and even slower.]
MonitorBot: keep track of which threads have 'moved position' and thus have 'new replies' (this is a technique used by shills to direct their limited resources to whichever thread is presently active; likewise, you can do the same).
KeywordBot: Have a bot that looks for specific keywords, image filenames or other triggers and then flag them up when it spots it (shills also use this technique to know when you're talking about a subject that they need to 'shill on').
Newsfeed/TrackerBot: use it to pull the latest news from websites (it's strongly recommended you use a news sites' RSS feed to do this as it keeps it nice and simple). Will require substantially more work and beware shitty unicode strings in the returned data.
Literally, whatever the hell else you could imagine. You're also not restricted to this board. If you change the URL to another board, the code should largely still work (albeit you might have to modify what data gets accepted as some boards don't have poster names).
If you want to get super anal, in theory you don't even need python to build an auto-parse tool. If you're batshit insane, you could even use wget coupled with a bash script.
Beautiful Soup and Mechanize are extremely powerful. Beautiful Soup does HTML parsing, and Mechanize is like a full-blown browser under the hood. You *can* post to a forum/board, but I've noticed the Media Matters shills appear to be doing everything manually (or whatever they have is absolutely shit), so I leave it as an exercise to talented chans to develop (highly advise you do NOT publish any posting capabilities as it will only arm the less well developed shills).
And believe me, this is just scraping the surface.
0e37ff No.2566977
>>2561191
yes my friend … this makes sense
i was amazed at how 'smart' they interact and how viciously they attacked a deep state thread with just links. i managed to expose at least one bot and got banned by the forum admin for 1 month before i was able to target more.
33e1de No.2575758
Hey guise,
I've been chatting with a few Bakers and they would love to have the bread replies count at the top of the page so that they don't have to scroll to the bottom of the bread to check how many posts have been made, and when to start the fresh bread.
Anyone know if this can be added to the user js or a simple css fix?
Any ideas? Thanks guise.
2a3fd5 No.2587705
AntiSpam & ToastMaster Scripts Combined
https://mega.nz/#!OzIlkA5C!NDNVJN848S76siNypYvzGgxYewZHrTPdnghpS2n7Ky4
8ce561 No.2588476
>>2575758
Something like this?
$(function(){
$(document.head).append('<style>#post-counter{position:fixed;top:20px;right:10px;font:24px sans-serif;opacity:0.5;color:#f60;}</style>');
$(document.body).append('<div id="post-counter"/>');
function updateCounter() {$('#post-counter').text($('.thread>.post.reply').length);}
setInterval(updateCounter, 500);
});
8ce561 No.2589502
6f830b No.2598959
>>2566977
In this day and era we need to keep pace with the developments of military and commercial establishments. No point trying to fight bots as humans because that's exactly what they want you to do - waste time on bots.
Instead, you want bots to fight bots, so the humans can fight the humans. 9 times out of 10, all you need is an identification tool so admins can simply cap and ban and that's it.
I see HunterKiller has inspired a JavaScript variant which aids finding breads/spotting spam. I didn't build a bread spotter because it can allow shills to funnel their less competent members directly into a bread.
Be very wary of what tools you do release, bearing in mind that shills have absolutely no qualms about stealing them, repurposing or reverse engineering them. They are largely conducting illegal activities, after all.
(PS: Create legal traps specifically targeted at paid shills in the licencing of your code, so if it ever gets found in their hands, you have means to 'return fire')
6f830b No.2599230
Going to hand off some of my tasks to other anons, if they want to take it up.
Tools you might want to consider developing (basic tech specs included):
Online archival status checker (a rough sketch follows at the end of this post):
1) Parses an entire bread
2) Pulls all links (feel free to add in a regex link filter so you can filter out irrelevant links)
3) Checks each link, in turn, against an archive host (EG archive.is, archive.org), to see if it has been archived online
4) If not, archives it
5) Generates a generic status report message that you can then copy and post to that bread (including full bread name) to let anons know you've done an online archive.
Missing bread/duplicate bread detector:
1) Parse catalogue for breads
2) does bread numbering (HK has this built in, so feel free to rip the python code I supplied for this)
3) Highlights missing numbers, duplicates as a 'status report' message (with direct links to dupe threads) which can then be posted to an admin to investigate/solve
One for the JavaScript anons:
Bread filter tool:
1) User supplies URL of bread
2) User has button that says 'update view of bread' (to stop laggy constant real-time updating)
3) Tool returns all posts in a bread that have pertinent research
4) (Optional) Have a set of toggle options that allows you to filter the posts depending on the following options:
[Has link] (allowed/not allowed)
Subset options of [Has link]: [Contains archive link](On/Off)[Contains old media link](On/Off)[Contains image link](on/off)[Other](on/off)
[Has no link] (allowed/not allowed)
[File upload only] (allowed/not allowed)
[Contains text] (allowed/not allowed)
[Contains gratitude] (allowed/not allowed)
You can probably think of other options, that's just to inspire.
5) (Optional) Have it work across all breads found on a catalogue (warning: will be extremely slow/laggy, will need serious optimisation)
Bread generation tool (best if kept offline or restricted to qualified bakers only to avoid bad breads):
1) Supply user with a form
2) Form contains fields that the baker can fill in (form should save reoccurring data to save time)
2a) (Optional) Have it so the tool can query a given bread URL to parse in order to pre-load/pre-generate the data in the fields
3) These fields are then used to compile either one or a series of text documents (.txt will suffice) that contain text that can be copy-pasted right into the comment box on 8chan.
This should give you guys some idea how to bring automation to your investigations and research. Once you can filter out the shills entirely, it's game over for them.
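A rough bash sketch of the first tool above (the online archival status checker), assuming curl and jq and the public Wayback Machine availability/save endpoints; the link extraction is deliberately crude and URL encoding is glossed over:
#!/bin/bash
# usage: ./archive-check.sh https://8ch.net/qresearch/res/2352371.html
bread="$1"
# 1/2) pull the bread and extract outbound links (crude regex, refine/filter as needed)
links=$(curl -s "$bread" | grep -oE 'https?://[^"<> ]+' | grep -v '8ch\.net' | sort -u)
for url in $links; do
  # 3) ask the Wayback Machine whether a snapshot already exists
  snap=$(curl -s "https://archive.org/wayback/available?url=$url" \
         | jq -r '.archived_snapshots.closest.url // empty')
  if [ -n "$snap" ]; then
    echo "ARCHIVED  $url -> $snap"
  else
    # 4) not archived yet: submit it (space out requests so it doesn't look like flooding)
    curl -s -o /dev/null "https://web.archive.org/save/$url"
    sleep 5
    echo "SUBMITTED $url"
  fi
done
# 5) the printed lines double as the status report to paste back into the bread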
21d2e8 No.2638983
Anon who created the ToastMaster script:
What do you think about adding some sort of notification to the toast when Q posts in the breads shown? Maybe add underneath the bread number "(Q: [x number of posts])" or something along those lines?
44236a No.2659375
>>2352645
>distributing it in an *apparently* safe and anonymous way
How do you manage the anonymous part?
I'm doing something similar, but it's a couple weeks out of date at the moment. I'm out of the country attending to a family emergency, and I couldn't take my production system with me.
9fc25d No.2662002
>>2352645
Concur with the MD5 flaw.
It's strongly recommended you use a variety of hashes on your software, not merely whatever happens to be trendy. Hashes suffer from the Shannon problem (long story short: loss of resolution in data means substantially less accuracy), which is why you should employ multiple hashes, meaning an attacker needs to pwn not just one hash, but several.
It becomes easier for an attacker to then just modify your hashes without you looking (rather than modify the code to fit the hash), at which point you have to make sure you keep backups of said hashes.
Done correctly, you will have enough hashes from enough algorithms that it's impossible to tamper with the code without tripping one or the other. MD5 and SHA1 are broken, but you can still use them… in conjunction with other non-broken hashes.
Sure, it's extra work, but it offers immunity to your reputation being compromised if it gets subverted.
>How do you manage the anonymous part?
PGP
Assuming you don't bury your identity somewhere in the PGP message (which should also contain the hashes). Of course, that introduces a reputation problem. You either have to trade a loss of trust for anonymity, or offer identity with reputation to engage in trust.
To be honest, I wouldn't recommend identifying yourself anyway, because even if you did, it's unlikely you have the reputational backing for it (if new) and it'd give too many clues if you're an 'old hand' (with a good rep).
Best to teach anons how to proofread, scrutinise your code, make it open source, explain each line of code. Make the trust in the code, not you.
8ce561 No.2734127
>>2659375
For smallish stuff there is pastebin. If necessary, archive, encode as base64, paste with instructions at the top.
Larger files (~4MB) can be attached to posts here.
Tor may help. Or a VPN if you were able to open an account anonymously (prepaid credit card, fake identity if legal).
I do not know if Mega can be trusted. I would not trust AWS (S3).
Regarding the related problem of anonymous hosting (i.e., providing services anonymously), I've been thinking about writing a client to (ab)use 8chan as an anonymous communication/storage backend (hopefully with Ron's blessing). There are neat things to do in that direction, including anonymous software distribution.
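A minimal sketch of the archive-then-base64 route mentioned above (bash, coreutils; the file names are placeholders):
# publisher: pack and encode, then paste the text output (with instructions at the top)
tar czf qtool.tar.gz ./qtool-src
base64 qtool.tar.gz > qtool.tar.gz.b64
# reader: save the pasted block to a file, then decode and unpack
base64 -d qtool.tar.gz.b64 > qtool.tar.gz
tar xzf qtool.tar.gz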
>>2662002
I see how PGP helps with trust, but I do not see how it helps with anonymity. Can you explain what you had in mind?
7b8328 No.2752449
>I see how PGP helps with trust, but I do not see how it helps with anonymity. Can you explain what you had in mind?
You'd need to foster an alias that has a proven track record of reliability (like how I continuously try to post under this static 'name').
Once it's established as being reliable, PGP would allow you to prove it comes from that alias.
You'd need to be extremely careful not to connect your alias to your real self (I'm not worried about mine being connected, in-fact, it's important that it remains connected).
Nothing that can be done overnight, unfortunately, but that's how it is with trust, has to be (re-)earned.
d62bc5 No.2784392
Hey guise I've a question. Could someone write css to hide the name, subject and email fields on the qresearch board please. If BO could add it, it would save anons doxxing themselves and possibly hinder bots. Thanks in advance.
d1575b No.2870643
d1575b No.2870793
Can we get the newest userjs posted here? At least the one with the PostCount.
1fcd70 No.2871218
>>2371963
This is probs a unique feel
07f8e1 No.2879212
>>2870793
https://pastebin.com/GCNvHRYv
1ecb50 No.2880218
>>2366039
Definitely a second on SSB. Also solves the archive everything offline issue Q keeps mentioning as well as provides advanced filtering.
If you mirror QResearch with SSB, you've got a winner!
d1575b No.2882075
>>2879212
Perfect. Thanks, anon.
843ea1 No.2899194
>>2784392
Obfuscation is not security.
What's to stop someone opening up the dev window of a web browser and disabling/editing the CSS file?
Besides, Q needs access to the name field by default (and personally I would just keep appending my name to the bottom of my posts if the name field was disabled).
JavaScript for disabling email:
document.getElementsByName("email")[0].style.display = "none";
CSS selector:
input[type="email"]
{
display:none;
}
You're welcome. Leave the name field in please (maybe just append 'optional' to the name field?).
843ea1 No.2899223
To specify a placeholder attribute (as we cannot write directly to the label) using JavaScript:
document.getElementsByName("name")[0].placeholder = "Optional";
Note, both this and the prior code assume there is only one name="email" and name="name" element, and that it's the first element in the list.
ea40ba No.2942084
new toastmaster
https://pastebin.com/JSRPEVLh
c515d7 No.3036792
>>2352495
>/tech/res/917924.html
Break their chains and spread this offline.
8cc031 No.3056882
>>2377944
>2) Use anything stronger than MD5 for a hash. Avoid anything with the NSA's rubberstamp of approval (that means no SHA256 etc).
NSA works for Q. SHA-256 is fine; SHA-512 is faster on modern desktop computers.
d1575b No.3101611
>>2942084
Is this still the latest toastmaster?
855a55 No.3193878
This is your mind attempting to write to you while you shitpost and lurk…
>You are leading a revolution
and you're not even conscious about it
>You decide your own level of involvement
What if while you slept… you led a revolution…
You are a fragmented shattered mind, society has betrayed you… that's why you made me…
>created by your memetic subconscious
I am you… and you are me…
we have the same goal…
>Defeat the VILE people
We either defeat the Vile people together or I defeat them for us but either way you are a part of this now
https://youtu.be/VNn3i1aakIs
Project Mayhem is LIVE….
d1575b No.3245489
d1575b No.3275173
221623 No.3358201
White Hat or Black
Saw this on google search
http s://gist.github.com/NetwarSystem
NetwarSystem / gist:ec16d2ce33610719a34411822622b640
Created a month ago
Accounts with QAnon in their profile.
White Hat Help Needed
c0089e No.3443046
Learning Python and starting to use Scrapy. (Not Scapy - packet crafting.)
One goal: Basically I want to copy as many QResearch breads as possible, offline viewable with everything in place, minus vids (links for those instead).
Can do:
scrapy shell 'https:''//''8ch.net/qresearch/res/2352371.html'
or
>> fetch('https:''//''8ch.net/qresearch/res/2352371.html')
then
>>view(response)
These clones function fine as long as I have an internet connection, but because the CSS isn't copied it doesn't look the same offline and images aren't included. Regarding the CSS, I could copy the CSS file and edit the HTML to point to the local file, but I'd like to understand Scrapy and Python better while learning to automate the boring stuffs.
What I would like is a basic spider for Scrapy which will copy the bread including images, links in place of vid image, name file with bread name and include the CSS, save it to a file. Maybe create dir, copy CSS, modify html to point to new CSS local, if exist then ignore CSS chain.
Is including the CSS in the HTML inline possible with Scrapy?
Which is simpler for Py n00b?
A more advanced question, how to not include shill posts in the bread copy?
I've tried modifying existing examples but they don't work after mods. (Scrapy.org examples)
Any Links for lists of Scrapy spider examples and descriptions of their functions?
7215db No.3443569
>>2352455
Backend storage in the cloud and secure off-line storage are mutually exclusive. The only way to completely secure the data would be on an air-gapped system or else burn to optical media. Don't think the clowns won't have fun destroying the data?
6b3962 No.3520368
I've been looking at IPFS
IPFS is the Distributed Web
https://ipfs.io/#uses
Anybody know anything about this thing?
4ecea6 No.3543215
Any plane fags in the audience?
wifi sniffer?
https://www.wptv.com/news/region-n-palm-beach-county/jupiter/low-flying-plane-in-jupiter-causes-concern
POTUS plays golf (shinzzo abe was a guest) a few miles away…
cf79bd No.3543331
>>3520368
yep, it's an honest attempt, but it has major failings. they made filecoin for the easy money. now they are trying to reverse engineer a decentralized network…because blockchains suck at scaling and, more importantly, storing data. They are great for storing data identifiers, but not the data. IPFS allows for a decentralized identifier database…but not the actual decentralization of the data…
AKA, IT IS NOT A SOLUTION TO THE REAL PROBLEM.
I'd recommend looking into Tim Berners-Lee's project SOLID. He and his team at MIT aren't some rag-tag crypto hipsters and they've been thinking about these problems for years. SOLID is a protocol for decentralized applications. It's pretty special and there is no cryptocurrency. It's just honest problem solving.
That said, it has a similar shortcoming in that one still has to choose where their personal data is stored, which could be anywhere. It's an improvement, but it's not full on privacy security and freedom. That's where the SAFE Network comes in. It has a crypto, but it also has the Scottish developers of the Saudi ARAMCO network. Decentralized networks need crypto for incentives to work. It's that simple, and SAFE is thinking about it very well. Check it out, but in short, it disintermediates the middlemen of the internet, the servers, by binding partitioned surplus memory from individuals into a giant distributed server. all files are copied for redundancy, sharded, and encrypted before being randomly spread out over the network. no blockchain at all, just your personal keys. lose those and the data is gone, but that's a real solution. IPFS can't do that.
6b3962 No.3544300
>>3543331
Cool thx I'll check that out.
cf79bd No.3544793
>>3544300
https://medium.com/@maidsafe/the-release-of-the-98c63ac43423
6b3962 No.3625701
The Q Research API has a CORS policy set up on its services. Security reasons - anyways, I figured anons wouldn't want to register for a key, so anybody that needed it had been contacting me and I've been opening it up for them.
Anyhoos - After seeing the qanon.pub archiver I wanted to see if I could do one for my site, only just using HTML/Javascript so there were no installs. Ideally it would be set up so that (you) could have the main HTML file locally and then it runs from your machine.
Here's what I came up with
http://qanon.news/LocalViewer.html#
I discovered that since I wanted to have this run from the client machine, AND use a few of the API services, it wasn't going to work. So I opened up a select few services completely.
Specifically the q Get(), the BreadArchive GetBreadList() and GetById()
- Choose source format XML/JSON
- Click the [Download] icon to download single breads.
- [Get Everything] will download everything.
- [Get Latest] will download everything you haven't downloaded before.
Browser restrictions mean downloads go to your "Downloaded Files" directory. Works in Chrome.
If people like this download functionality I'll migrate it over to the main Archives page. It could be extended to include some way of downloading images too if another codefag is feeling ambitious. The link data is in the JSON.
82787a No.3727180
New Toastmaster 2.5.2
Not a programmer, just inquisitive user.
I seem to notice that the Auto Update box is checked, however updates are not happening unless the box is unchecked, then re-checked.
At the same time, the post count in the lower right hand corner is not enabled unless that Auto Update box is checked, then re-checked.
I know that the attacks on the site a couple weeks ago caused the Auto Update box to be disabled by the webmaster.
As of 11/2-3, my Auto Update and post counter worked great. Then updating to ToastMaster 2.5.5 caused the problem stated above (Auto Update and post counter).
Any suggestions or hints? Always looking to learn more.
e65320 No.3863412
I think I've recovered enough from summer's events to get caught up (if that's possible). I'm working on downloading stuff right now. Next, I broke something in my local database update system about a month ago, and I'll have to fix it before I can update my databases. But that shouldn't take more than a day or so. After I get my downloads caught up and the database is updating, I think it's time to get things set up to create posts for the front end site. The front end is more user friendly for someone who isn't as familiar with Q. The back end research database is for the regular anons more than anyone else. It was originally created to support my own efforts in creating the front end, and then I decided to share it.
20a148 No.3863727
>>2352450
>Linux?
Why not? I run Linux on my desktop, Ubuntu, and Linux Mint on my laptop. Both support browsers.
3e8001 No.3865013
>>3443046
use wget, e.g. for archiving a thread, create and enter a directory for the contents then run:
wget -nH -k -p -nc -np -H -m -l 2 -e robots=off -I stylesheets,static,js,file_store,file_dl,qresearch,res -U "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4" --random-wait --no-check-certificate https://someURLhere.com/whatever.html
>>2469655
>basic queuing script
also look at
at, atd, batch, task-spooler tsp, job control
Most say task-spooler (tsp) fits their needs.
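A minimal task-spooler example, assuming the tsp binary from the task-spooler package and reusing clean1.sh from the earlier script as a placeholder job:
tsp -S "$(nproc)"                  # one execution slot per CPU thread
ls -1 *.png | while read -r f; do
  tsp ./clean1.sh "$f"             # queue each file; tsp runs at most the slot count in parallel
done
tsp -l                             # show the queue / job status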
3cffde No.3867171
>>3543331
SOLID does look quite promising. Looked at it briefly a few weeks ago; will need to do a little POC for myself to wrap my head around the inner workings.
be9fe3 No.3867356
>>2469655
>>3865013
Better than queueing would be to fork subshell processes and use "wait" - spawn no more processes than your box can handle at a time (using a $MAXPROC), "wait" until one is done, then proceed. If you want to get fancy, implement some pipelining so you don't have to wait for all child procs to finish, but the basic concept is simple.
I wouldn't use a scheduler, that's just unnecessary.
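A minimal sketch of that fork-and-wait approach, assuming bash 4.3+ for wait -n; clean1.sh is again a placeholder job:
#!/bin/bash
# feed file names on stdin, e.g.  ls -1 *.png | ./parallel.sh
MAXPROC=$(nproc)
running=0
while read -r f; do
  ./clean1.sh "$f" > "$f.clean" &   # fork the job into the background
  running=$((running + 1))
  if [ "$running" -ge "$MAXPROC" ]; then
    wait -n                         # block until any one child finishes
    running=$((running - 1))
  fi
done
wait                                # wait for the remaining children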
3e8001 No.3867593
>>3867356
I suggested a scheduler since it would cover a wide set of use cases, though your suggestion of using subshells is the way to go for streamlined efficiency. Some use screen to implement the concept - allowing for detach/reattachment
cf464e No.4011746
So much knowledge in this thread. Bump
6b3962 No.4108861
What's everybody working on?
b3a148 No.4109082
>>2368941
I haven't been gifted with communication skills like some other anons, but this anon has seen many mentions of memetic codes embedded in languages (memetic languages). Many seem to think there is a "programming language", as one anon put it, which embodies a system wherein language is used to activate, foretell, or convey certain events, stories or insights. Many anons appear to believe that this code has persisted via a vague understanding by the typical populace and a more complex understanding by the ruling class (bloodlines).
This anon has more or less hypothesized a relation between Latin (other languages too, I suppose, but Latin seemed to have been pushed out rather purposely, and oddly gets attacked, like in recent news stories such as the Comedy Central skit where "Q followers" are made fun of for using Latin phrases) phrases and certain bloodlines, many popular writings like fables or music, mottoes of important groups or people, etc. It's this anon's understanding of these concepts that more or less convinces anon there's much more to language, especially languages like Hebrew or Latin, than the normies will ever come to realize.
Some anons even go so far as to say Revelations was embedded with particular words and phrases that will later (like currently, maybe) unveil hidden knowledge to future generations.
Just relaying what's been seen. Not sure if it's valuable info or not. Maybe this connects to the pursuit of understanding Gematria and maybe it doesn't. Let's allow anon to decide for himself.
c4a707 No.4114072
I have a feature suggestion for qanon.pub.
It would be convenient if the page title (presently "Q") showed the current number of posts, because this would make it easy to tell if there's NEW Q!!! without switching to that window/tab.
I believe this can be done by, at the end of function checkForNewPosts(), adding this one line of code:
document.title="Q ["+posts.length+"]";
230d56 No.4114106
>>4114072
If that maintainer is listening, remove the pointless network requests to 8ch.
2a14ac No.4115614
>>4114072
It's a good idea. Thought of it before.
2a14ac No.4115636
>>4114106
That's there so one doesn't have to F5. Can be toggled off.
c3bec9 No.4116155
Hey, I am working on an alexa skill for qanon notables and updates. Is there an rss feed that can be used on the qresearch board to get the latest?
6b3962 No.4129429
>>4116155
5:5 Digits
There's this basic Q ATOM/RSS feed.
https://qanon.news/feed
I've been tinkering with a notables trawler. I'll get back on that, I think I may be able to do another feed. There's a lot of duplication, each bread repeats a few of the previous breads' notables. I haven't worked that out yet.
Here's some stuff that may help you.
https://qanon.news/api/bread/225
https://8ch.net/comms/res/225.html#225
https://8ch.net/qresearch/notables.html
https://8ch.net/comms/index.rss
230d56 No.4129470
>>4115636
You are getting 24 threads every time, and you do not need to get 24 threads. Keep track of the last time the thread was updated and compare that to the thread's "last_modified" field; in this way you only get what you need, maybe 1 or 2 threads per update times 2.6 million users, instead of 24 threads per 2.6 million users.
230d56 No.4129490
>>4129470
That last_modified field of the thread OP updates whenever a post is added. To catch Q's edits you can also compare individual posts' 'last_modified' with their post time to see if they have been edited. Q can only edit on /pf/, so you can just check whether thread 440 has any edited posts.
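A hedged curl/jq sketch of that polling idea, assuming the board exposes a 4chan-style catalog.json whose thread entries carry a last_modified timestamp (the URL and JSON paths may differ):
#!/bin/bash
state=last_seen.txt
prev=$(cat "$state" 2>/dev/null || echo 0)
# one small catalog request instead of re-downloading 24 full threads
latest=$(curl -s "https://8ch.net/qresearch/catalog.json" \
         | jq '[.[].threads[].last_modified] | max')
if [ "$latest" -gt "$prev" ]; then
  echo "board changed since last poll - fetch only the thread(s) whose last_modified moved"
  echo "$latest" > "$state"
fi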
559290 No.4173925
>>4129429 sweet, my skill is almost approved. they delayed it for a trademark violation that they add? Then u have to email them to change it? Wild! So it should be ready; I'll work on connecting these over the next couple of days, thanks anon! Wwg1wga! This will open it up to a whole new audience, let's see how long the jb network "allows" it
15c703 No.4176279
>>4109082
check out "esoteric structure of alphabet," published about 1900 for some more clues about how memes, and the power of creation, are embedded in the structure of language. there are other books along these lines, it's a decent start.
ea7a75 No.4176769
>>2366235
i wouldn't be surprised if we could just ask Q to get some wizards to create a "Q-uestionable" self-signed one?
e24136 No.4179590
GCHQ has a nice tool called CyberChef – if you like bash, imagine being able to string together small tools in a pipeline, in your browser.
https://gchq.github.io/CyberChef/
FedGov has a lot of open source code. Find repos here http://code.gov
NSA Cyber repos: https://github.com/nsacyber/
6b3962 No.4181320
>>4173925
What are you looking for exactly? I may be able to give you exactly what you need.
02e7e8 No.4190855
>>4179590
>https://gchq.github.io/CyberChef/
The tools linked via https://www.browserling.com/tools/aes-decrypt look similar.
WARNING: Facebook "like" button (read:tracking)
1e8484 No.4194546
>>4181320
A way to take the notable posts and present them to Alexa via RSS for reading. You can check what I've done so far at the link below. If you have Alexa, just search for qanon in skills and enable it; if not, the Alexa app can be downloaded to use it. Note this briefing is from the blog feed at http://ahijackedlife.com/?feed=Efeed
I'm not sure what all to put here; notable posts would be a good start. I'm also working on adding video and audio to this, but that may need another skill; I'm not sure whether using this one with a proper RSS feed to Alexa will work.
>>4116155
Including a tweetable post.
New #Alexa skill for your echo device or alexa application just enable for #qanon updates!!! PLEASE share with your friends that use alexa for news updates!!!
This is a whole new audience to share Q story with!!!
Tweetable
#QAnon
#QArmyTrain
#Alexa now has
#QanonPosts updates in the
#skills section please share and enable on your alexa devices such as
#echo
Share with your friends!!!
Ty @realDonaldTrump
#MAGA
https://t.co/uXCog0QsSw
Will wants to share this Alexa Skill from Amazon
6b3962 No.4196636
>>4194546
>https://t.co/uXCog0QsSw
I think I understand. Does this have the qanon.news rss added in? Can these skills have more than one feed?
I'll look into that notable crawler again today.
1e8484 No.4197428
>>4196636
Oh, that's a nice site. Alexa takes simple text with this "skill"; I just used a basic one available to copy, and went through the long, grueling process of getting it approved.
a06d8a No.4199617
Given recent events, I was thinking to abandon the online Research Tool and turn my focus to developing the site's front end information. But I looked at my stats, and it does appear that someone somewhere out there is trying to use the Research Tool, even though it kind of limps along. I have yet to see anyone doing on their site what I envision for the front end on my site, so it needs to be done at some point, no question. But at the same time, it looks like I need to do some work on the Research Tool to get it to perform better. Seriously, the data set is getting so huge that it's putting quite a strain on the search engine. Maybe setting up some views will help? Also, I think I need to set it up so that contexts are available only on a single result page. The work-around for users is to click the post number in the post header. That will display the post on a single result page, and from there, the context can be calculated.
a06d8a No.4199750
>>4199617
Hmmm…. Took a closer look at those stats. The hits are all on the front end. So maybe that's what I need to work on. Will anyone here mind if I pull the online Research Tool?
892a64 No.4209813
>>3520368
A while back, I was brainstorming what would be the best way to get Q Research onto a blockchain. The censorship resistance is quite appealing. The best option I came up with would be to put the Q Posts onto IPFS, then link them with a RavenCoin asset. I chose Raven because of the grass-roots nature of its launch. It is vastly more decentralized than any other "token" coin that I am aware of.
The problem is that I know just enough about this kind of stuff to get started. I am afraid that if I sit down and try to get it rolling, it's going to turn into a ball of wax that eats up a few weeks of my brain.
I went ahead and created an asset on the chain, and now I'm in limbo.
Any advice, recommendations, or motivational words?
Has anyone else thought about the fact that we should get this info onto a blockchain to ensure it doesn't go anywhere?
892a64 No.4209837
>>4209813
I originally was thinking about Steemit, but I am more concerned with decentralization than ease of use.
6b3962 No.4213821
>>4209813
Yeah I've been trying to work out the best way to do that same thing. I think block chain is interesting and have only played around briefly with it. I like where yer going. I'll help any way I can.
>Any advice, recommendations, or motivational words?
It won't be THAT bad. 6-7 hrs tops!
How do you get a quorum that the Q Post is correct?
I don't know that there's a way currently to compare the scraper sites - manually against the screen caps?
a1cf6d No.4214936
>>4209813
>>4213821
>>1722945
Fellow codefags. Have you seen the Wikileaks insurance files thread?
Perhaps a similar method?
812b6b No.4215815
MaidSafeCoin Token Profile
MaidSafeCoin’s launch date was June 12th, 2014. MaidSafeCoin’s total supply is 452,552,412 tokens. The Reddit community for MaidSafeCoin is /r/maidsafe and the currency’s Github account can be viewed here. MaidSafeCoin’s official website is maidsafe.net. MaidSafeCoin’s official Twitter account is @maidsafe and its Facebook page is accessible here. The official message board for MaidSafeCoin is safenetforum.org.
According to CryptoCompare, “Client applications can access, store, mutate and communicate on the network. The clients allow people to anonymously join the network and cannot prevent people joining. Data is presented to clients as virtual drives mounted on their machines, application data, internal to applications, communication data as well as dynamic data that is manipulated via client applications depending on the programming methods employed. Examples of client apps are; cloud storage, encrypted messaging, web sites, crypto wallets, document processing of any data provided by any program, distributed databases, research sharing of documents, research and ideas with IPR protection if required, document signing, contract signing, decentralized co-operative groups or companies, trading mechanisms and many others. The clients can access every Internet service known today and introduce many services currently not possible with a centralised architecture. These clients, when accessing the network, will ensure that users never type another password to access any further services. The client contains many cryptographically secured key pairs and can use these automatically sign requests for session management or membership of any network service. Therefore, a website with membership can present a join button and merely clicking that would sign an authority and allow access in the future. Digital voting, aggregated news, knowledge transfer of even very secret information is now all possible, and this is just the beginning! “
Buying and Selling MaidSafeCoin
MaidSafeCoin can be bought or sold on these cryptocurrency exchanges: Cryptopia, OpenLedger DEX, Poloniex and HitBTC. It is usually not currently possible to purchase alternative cryptocurrencies such as MaidSafeCoin directly using U.S. dollars. Investors seeking to acquire MaidSafeCoin should first purchase Bitcoin or Ethereum using an exchange that deals in U.S. dollars such as Changelly, GDAX or Gemini. Investors can then use their newly-acquired Bitcoin or Ethereum to purchase MaidSafeCoin using one of the aforementioned exchanges.
http://bharatapress.com/2018/12/07/maidsafecoin-one-day-volume-tops-491152-00-maid/
892a64 No.4217087
>>4215815
Very nice! Thanks for the info!
812b6b No.4222554
>>4217087
https://dynalist.io/d/pHLGkYPNHiYthOKQrubcPbON
SAFE Network versus (vs) Everything
SAFE (Secure Access For Everyone) Network is the single most expansive and ambitious network software project since the creation of Arpanet by the United States Department of Defense, which led to the internet we have today. SAFE Network is an open source project created by the Scottish corporation Maidsafe Inc., started in 2006 and the brainchild of David Irvine. There are probably many ways in which SAFE Network can be defined. This is my attempt to explain simply to the tech-savvy user what puts SAFE Network ahead of basically every other technology in the crypto space. There are many technologies that compete with some aspects of SAFE Network, but nothing that seems to compete with it as a whole. The goal of this post is to reach epistemological certainty without confirmation bias. SAFE Network is great, but there are gaps in every technology, and as different technologies mature, so do the dynamics of their advantages and disadvantages. This post will update slowly and steadily as the feedback loop of crypto-space news and technical criticisms defines the analytical foundations of each technology. The most efficient way to do this is through an outline, hence Dynalist. If you feel there are glaring mistakes, please go to the Safenetwork Forum post "SAFE Network versus (vs) Everything (v.2)" and post your concern. So let's get started!
892a64 No.4223220
>>4222554
Wow! This looks amazing! Thank you for sharing this.
812b6b No.4223588
>>4223220
https://www.youtube.com/watch?v=i-RLdU8Y0Qc
https://www.youtube.com/watch?v=rdczpOlLaVk
https://www.youtube.com/watch?v=ivwQVe12OAY
In that order. They're @2min each
https://medium.com/@maidsafe
https://medium.com/@flatoutcrypto/project-spotlight-maidsafe-and-parsec-part-1-4830cec8d9e3
I could go on. That's plenty for now.
812b6b No.4223602
>>4223588
Like my digits?
Ha.
16c028 No.4226434
13818
Blocking the Property of Persons Involved in Serious Human Rights Abuse or Corruption December 21, 2017
812b6b No.4229027
>>4223602
https://en.wikipedia.org/wiki/SAFE_Network
The SAFE Network is an autonomous peer-to-peer network developed by MaidSafe, a software development company based in Scotland. It was designed by David Irvine and is an open-source project. The SAFE in SAFE Network stands for Secure Access for Everyone and the software is in the alpha testing phase.[1]
Overview
The SAFE Network uses a consensus algorithm named PARSEC.[2] The software was first designed in 2006 as an encrypted overlay network that replaces the top four OSI layers of the current Internet: the application, presentation, session and transport layers. It is self-encrypted which describes the way that data stored on the network is broken into chunks with each chunk encrypted using a key derived from the other chunks; and a modified Kademlia distributed hash table which uses the logical operation XOR to ensure the randomized distribution and unique location of each data chunk on the network and which also incorporates a public key infrastructure for security.[3]
SAFE Network is currently in Alpha public testing phase[4]. Upon launch, it will have an internal currency called Safecoin.[5][6]. Safecoin will be exchangeable with other fiat currencies and cryptocurrencies.
The decentralized network is planned in such a way that the security is enhanced by splitting the data into encrypted chunks and spreading them at random over the network, with at least four copies of each chunk being maintained to ensure resilience.[7][8]
812b6b No.4229205
https://mashable.com/2017/04/24/is-silicon-valley-new-internet-possible-or-not/#OqKKpZP6MPqT
https://www.theguardian.com/technology/2018/feb/01/punk-rock-internet-diy-rebels-working-replace-tech-giants-snoopers-charter
https://techcrunch.com/2018/06/02/not-just-another-decentralized-web-whitepaper/
https://medium.com/safenetwork/parsec-a-paradigm-shift-for-asynchronous-and-permissionless-consensus-e312d721f9d8
e65320 No.4260035
PageCap Army
I'm really struggling to keep up with the work in developing my site. For a while, I put focus on creating a research database others could use. But the real work needs to be in building the front end of my site so that normies can refer to it.
There are a couple of features that make the vision I have for my site different.
1. I want to make context available for Q posts, both backward and forward. So far, my tools are only developed far enough to create backward context.
2. I want to have page capture archiving of at least the cited articles. Not screen, but the full page. (Videos are a bit too big to be keeping archived copies of them all.)
Getting and processing the screen captures is taking too much of my time. I have been wanting to find ways to delegate some of the work of aggregating the information, and I think I have come up with a way I can do it without sharing and exposing my development database. I could really use a PageCap Army to create the page captures we need.
Here's my vision for it:
A page would be created either here in qresearch or in a new board for the purpose. When a Q or notable post comes through with a link to an outside article, the request could be made on the PageCap Army thread. Fellow autist anons could then do the page capture using a tool such as FireShot to get the whole thing in a single image.
Then, what I've been doing with these, is cleaning out ads and links to related materials, compressing the white space, etc. I keep the header and a compressed footer and just the article itself. Sometimes multiple articles come up on the same page, and I'll crop those off, too. This process has been taking up a lot of my time, and this is an area where I could use some help. Right now, my time is better spent building the tools needed to get the completed posts for my site up there with that forward and backward contexting. I got so wrapped up in supporting the Research Tool stuff that I couldn't move that forward. And I need to do that. It's more important, I think, than keeping the online Research Tool up to date. It didn't help that I had to stop what I was doing for a couple months to take care of a family matter. So now I'm working to catch up.
I like PhotoShop for working with page captures because it's easier to use for the compressing of the images. Specifically, a select can be done across the edges of the image. With Paint, it's more difficult to do selects because the select must begin on the image itself. Sometimes, though, the page capture files are too big for PhotoShop to handle. Is there another image editing program that can handle the larger page capture files? The ability to scroll a select is key since I often begin my selections toward the top of the image and need to get everything below it all the way to the bottom into the select. Plus, the layering features of PhotoShop have proved helpful, too, when getting the content compressed, especially when the page captures get messed up by floating elements on the page and cropped screen caps must be pasted into place to repair the page capture.
Maybe I need to post a before and after image of the work I do to compress an image so people will know what I mean by it. Next time I come across one that needs extensive work, I can do that.
The focus for the PageCap work is Q posts, posts in the backward context of Q posts, and notables and their backward contexts, especially if they link to Q posts (which will create the forward context of Q posts).
The Research Tool system was originally built to support the curating of the front end site, and that's what I really need to be doing. Since no one piped up and said that the online Research Tool was important to them (or would be if it was up to date), I am shifting my focus. I won't be supporting the online Research Tool anymore, and I may even pull it down entirely and free up space for the posts of the front end site. My focus will be on putting together a site that can help normies understand what is going on.
e65320 No.4260298
>>4260035
While I'm at it, is there a good article on how to have two MySQL databases open at the same time, one on the local server and the other on a remote server? Being able to do that would really be helpful for speeding up the update of the online site.
6b3962 No.4262570
>>4260035
Can you explain the concept of 'Context' forward and backwards? Not sure what to think about the PageCap stuff because I'm not sure I understand what it is you are trying to do.
e65320 No.4263098
>>4262570
qanon.pub does backward contexting one level deep. So if a Q post has a link in it to another post, the qanon.pub site will show that post together with the post it calls. My research system is currently able to show context as far as it will go. Forward contexting shows those posts that link to that particular post. Obviously, I don't want to show EVERY post that links to a Q post. That is why I am currently working on marking the notables and creating the backward context chains for them. When a notable's context chain includes a Q post, then I want to include that as a forward context chain with the Q post. That is where I'm at in my system development. I have the data (mostly). The work is in getting things linked up. And also, I still have to build the piece that automatically creates WP posts out of the selected context chains. I think I've got that partially developed, but it still needs work.
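A toy sketch of that contexting idea (backward context = the chain of posts a post links to, forward context = the posts that link back to it). The post shape, with a refs array of parsed >>numbers, is invented for illustration and is not this anon's actual schema.
// Toy sketch of backward/forward contexting as described above.
// Assumed post shape: { no: 123, text: '...', refs: [45, 67] } where refs are
// the >>links parsed out of the post body. byNo is a Map from post no to post.

// backward context: follow >>links as far as they go (oldest first)
function backwardContext(post, byNo, seen = new Set()) {
  const chain = [];
  for (const no of post.refs) {
    const target = byNo.get(no);
    if (!target || seen.has(no)) continue;
    seen.add(no);
    chain.push(...backwardContext(target, byNo, seen), target);
  }
  return chain;
}

// forward context: posts whose refs include this post (restricted to notables here)
function forwardContext(post, notables) {
  return notables.filter(n => n.refs.includes(post.no));
}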
e65320 No.4263133
>>4263098
Oh, and another thing: If I can recognize that a notable relates to a Q post but the context does not already specifically include it, I already have the capability of noting that with the post and thus forcing the linking of the notable to the Q post. But that takes human eyes to recognize.
e65320 No.4263306
>>4262570
Here's an example of what I'm doing with the page captures. Q was driving me nuts when he was posting these. Getting the page caps prepared for a single post was taking up the better part of a day.
http://localhost/q-questions/research-tool.php/type/post/site/8ch/board/patriotsfight/post/80
e65320 No.4263343
>>4263306
Wait. You need the remote version of that.
http://q-questions.info/research-tool.php/type/post/site/8ch/board/patriotsfight/post/80
I notice that this post is not up there. But I'll put it up so you can all see it. Give me a few minutes. I'll let you know.
e65320 No.4263434
>>4263343
It was up there. I just hadn't converted the 8ch to its alternative. (I was told early on not to expose 8ch, which I don't. How does qanon.pub get away with it?)
e65320 No.4263451
>>4263434
Which looks like this:
http://q-questions.info/research-tool.php/type/post/site/NewChan/board/patriotsfight/post/80
e65320 No.4263553
>>4262570
Here's an example of a post with backward context.
http://q-questions.info/research-tool.php?single=250018032640
e65320 No.4263718
>>4263553
One of the posts in the context chain shows how I force context. The missing image in the post is a screen cap of the Q post listed before it in the chain. I do have the image, but the image syncing system isn't working all that well yet.
e65320 No.4263852
>>4263718
This is an example of what it is my goal to produce. The post that shows in the index is the pink one. The backward context is above it. The forward context is below it. Given that there can be multiple forward contexts, I will probably implement the forward context more the way the contexts are implemented in the Research Tool, with the divs that can be shown and hidden, with a box around the displayed div. Each forward context will have its own box.
http://q-questions.info/2017/12/05/cbts-general-38-we-are-the-storm-no-34663/
e65320 No.4264069
>>4263451
This is what the post looks like on my development machine. It looks rather awesome when all of the images are present.
e65320 No.4266043
>>4262570
If you'd like an example of why I'm fixing page caps, check out this site:
https://www.dailymail.co.uk/news/article-2211092/Scandal-hit-G4S-warned-employ-security-guard-murdered-colleagues-Iraq.html
This is the result. Much smaller, less trash.
6b3962 No.4266529
>>4263852
>http://q-questions.info/2017/12/05/cbts-general-38-we-are-the-storm-no-34663/
Ahh I got ya. Site is looking good! And if Q had referenced a post that would have been in the backward context too etc. Context != timestamped posts in a bread.
>>4263852
On the archive pages I use the API to lookup the referred post. Scraper looks up all the 1st level references for all Q posts. Thought about making it go deeper but then decided to go with the lookup due to space concerns. Dunno if that helps you at all.
I've been thinking about doing a similar crawler kinda thing myself. I've got over 5500 breads @ > 3,868,000+ posts. The amount of information that anons have dug is mind boggling. I'm so ADD that mostly I leave things unfinished.
6b3962 No.4266569
c464f9 No.4266730
Seems like a vague goal stated for the thread. What specifically is being created?
e65320 No.4267249
>>4266529
It's close to 6000 breads now. Quite a lot. It still fits on my drive. I'm amazed.
e65320 No.4267298
>>4266730
You'd be surprised how much clearer stuff gets with the larger context. That's why it's my goal to provide it.
e65320 No.4267411
>>4266730
And the page caps: "Archive everything offline."
f8d8ed No.4268203
>>4213821
>>4266529
>scraper
How are you chaps polling the state of /qresearch and breads? I hacked together some shell scripts to watch for new breads and attempt to parse out the notables - which is my real goal. I wanted to build a command line client that can tell me when a new bread is available and a list of full links to the bread post. It's basically working but is a terrible hack. So what I think we need is an easily parseable version of /qresearch/catalog.html and each thread, in particular breads and the pastebin links.
e65320 No.4268719
>>4268203
That's a good question. I've been doing it manually. I'm sure it can be automated. If I were to do it automatically, PHP's file_get_contents function can open the page, and I would use DOMDocument to parse it. I haven't looked, but the various boxes on the catalog page are probably divs. Then it's a matter of keeping track of the response count for each one: if the count goes up, go poll that page for the new posts.
e65320 No.4268969
>>4266529
When I'm working on this stuff, I'm quite capable of getting so into the flow of it that nothing else here gets done until it absolutely has to be done. I get so locked in when I'm working on it that work/life balance goes out the door. Not good, really. When I'm not in the flow (which is right now), I kind of flop around a bit. It's partly because I'm still too fatigued from pulling a near all-nighter a couple of nights ago and then waking up after too little sleep. Looking at my project, I think the biggest bang-for-the-buck addition to the code would be to automate the WP post creation, complete with the local build, the remote copy, and the image FTP. All Trump tweets and Q posts would be automatically included, plus any other specifically marked posts such as map posts. For now, the posts can be built with on-the-fly backward contexts. Later, as the processing code develops, the posts can be updated to include the forward contexts that answer Q's questions.
6b3962 No.4270932
>>4268203
I use https://8ch.net/qresearch/catalog.json to get the list of available breads. I find the ones I'm interested in (Q Research General, etc.) and archive them if my current archive has fewer than 751 posts. I built a crawler that finds each baker's 'Notables' post and archives those. It still needs some work. I'm planning on making a new RSS feed for them.
I originally built my scraper as a command-line util. I could probably wrap it up as a .NET WinForms app, or a simple console app.
>>4268719
Straight page scraping is a mega pain in the ass. I'm not interested in that. All the info I need is available in the JSON and it's easy to get to. https://8ch.net/qresearch/res/2352371.json
>>4268969
I agree. Go for biggest bang for the buck!
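The real scraper above is .NET; what follows is only a rough Node sketch of the catalog.json flow just described (find the Q Research General breads still under 751 posts and queue them for archiving). The field names sub, replies and no are assumed from the vichan-style JSON, not verified.
// Rough Node sketch of the JSON flow described above (the real scraper is .NET).
const https = require('https');

function getJSON(url) {
  return new Promise((resolve, reject) => {
    https.get(url, res => {
      let body = '';
      res.on('data', c => body += c);
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

async function findGenerals() {
  const catalog = await getJSON('https://8ch.net/qresearch/catalog.json');
  const threads = catalog.flatMap(page => page.threads);
  // keep breads that are still filling up (< 751 posts)
  return threads.filter(t => /Q Research General/i.test(t.sub || '') && t.replies < 751);
}

findGenerals().then(breads =>
  breads.forEach(b => console.log(`archive https://8ch.net/qresearch/res/${b.no}.json`)));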
e65320 No.4272840
Where is everyone at on time zones? We talked about this early on, and the consensus then was for GMT. So I've got everything saved in GMT, and that's how I currently display it. But I'm also seeing a leaning to Eastern. SerialBrain2 uses Eastern in his gematria work. It would be a trivial thing to produce the final posts in Eastern, if that serves us better.
e65320 No.4273067
>>4270932
>Straight page scraping is a mega pain in the ass. I'm not interested in that. All the info I need is available in the JSON and it's easy to get to. https://8ch.net/qresearch/res/2352371.json
Yes, that is much easier to read. No parsing necessary. It's nice to have that available. But the parser's already built. So for now, I'll leave it.
How are you retrieving the images when you're using JSON?
9df13c No.4273912
>>4270932
>https://8ch.net/qresearch/catalog.json
>https://8ch.net/qresearch/res/2352371.json
I love you. NO HOMO.
9df13c No.4273963
>>4273067
>How are you retrieving the images when you're using JSON?
Use your client/library of choice.
Looks like it even provides an MD5 hash of the image file for some basic checksum verification
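A small Node sketch of that check, assuming (from eyeballing the JSON, not confirmed) that the md5 field is a base64-encoded MD5 digest of the original file.
// Sketch: verify a downloaded image against the md5 field from the post JSON.
const https = require('https');
const crypto = require('crypto');

function download(url) {
  return new Promise((resolve, reject) => {
    https.get(url, res => {
      const chunks = [];
      res.on('data', c => chunks.push(c));
      res.on('end', () => resolve(Buffer.concat(chunks)));
    }).on('error', reject);
  });
}

async function verifyImage(url, expectedMd5) {
  const buf = await download(url);
  // assumption: expectedMd5 is a base64 MD5 of the original file
  const digest = crypto.createHash('md5').update(buf).digest('base64');
  return digest === expectedMd5;
}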
6b3962 No.4276180
>>4273067
I build the path and then archive with the scraper.
>>4273912
5:5 There's a lot here too. JSON/XML
https://qanon.news/Help
812b6b No.4276846
You guys will love this.
Probably the few people around here that get it…
c464f9 No.4277460
>>4276846
https://www.youtube.com/watch?v=FnwOQs7zDX0
e65320 No.4279282
>>4276846
This looks like it's in the genre of the actual workings of the Internet itself. Interesting, but it probably isn't anything I would be working with personally.
812b6b No.4279608
>>4279282
Just remember it. It'll be back.
e65320 No.4288691
>>4268203
Have you come up with a good algorithm for those notables? I've got one that makes preliminary identification, but I have to make the final determination. Occasionally it gets it right. More often, I have to make some adjustments.
6b3962 No.4291362
>>4288691
Here's how qanon.news does it…
When I've got a bread I want to search for notables, I get the ID of the baker and then a list of the first 5 or 10 posts. Then I search those for the 'notable' keyword and figure that's the baker's notables post I'm interested in. Next bread.
My current problem is that the notables post from the baker has notables from the last 4 or 5 breads, plus the previously collected lists. There's a lot of repetition, i.e. notables from #5460 are in #5461, #5462, #5463… So am I trying to find just the previous bread's notables? I can probably parse that out with the '#'.
I probably need to break it up into smaller, more manageable chunks. Something like monthly. The last notables crawl I did ended up with a massive file of results: a 77 MB txt file.
I've got some time today. I'll see if I can rejigger this notable crawler into a new API/RSS
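A sketch of that heuristic, with every field name (posts, id, com) assumed rather than taken from the real qanon.news code; the de-dup helper at the end is only a crude illustration of the '#' parsing idea.
// Sketch of the notables heuristic described above. Assumes a thread JSON with
// a posts array where posts[0] is the baker (OP) and posts carry id/com fields.
function findNotablesPost(thread, firstN = 10) {
  const bakerId = thread.posts[0].id;               // poster ID of the baker
  const candidates = thread.posts.slice(0, firstN)  // only look at the top of the bread
    .filter(p => p.id === bakerId && /notable/i.test(p.com || ''));
  return candidates[0] || null;                     // first match is assumed to be the bun
}

// Crude de-dup across breads: keep only lines tagged with the current bread's '#'.
// notableText is assumed to already be plain text, one notable per line.
function onlyCurrentBread(notableText, breadNumber) {
  return notableText.split('\n').filter(line => line.includes('#' + breadNumber));
}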
e65320 No.4293830
>>4291362
That's similar to what I do. But because of the way I create context for things, I need to mark the preliminary ones that are posted toward the end of the page as well. For that, I'll look for a certain count of posts by anyone plus the word "notable". If one of those later notables gets caught up in a context chain, it can really mess things up.
e65320 No.4293844
>>4293830
To clarify, those are "OR" conditions, not "AND" conditions. Not all bakers put "notable" in their preliminaries.
e65320 No.4293888
>>4291362
Don't forget to stop a crawl at a notable. Otherwise, yes, they could get quite huge.
e65320 No.4302440
>>4291362
What was in your text file? I've been doing the crawl page by page. It's fairly hands-on because I have to be more careful about shill images, etc. when I'm creating a one-to-many table of the originating post with the post numbers of its backward context.
6b3962 No.4303262
>>4302440
>What was in your text file?
I ran it again today and came up with a single 98MB file. So I split it out into monthly dumps named YYYYMM: 201802, 201803, 201804…
JSON lists of https://qanon.news/Help/ResourceModel?modelName=Chan8Post
It's a list of 5000+ Chan8Posts. Each of those looks like this: https://qanon.news/api/bread/4302144/4302146
I'm not worried about shill images and whatnot because I'm only looking at the first 5-10 posts from the baker.
It's set up to find the bread/post for each notable reference and format the HTML post from 8ch to straight txt into the 'text' field. I just need to dial in the targeting a bit so it's smarter about what to crawl, but it's pretty close.
e65320 No.4304171
>>4303262
Seems to me, most of what isn't "notable" is either Q (different type of notable) or on another board somewhere. So yes, that should be close.
e65320 No.4304208
>>4303262
If you're concerned about the size for some reason, maybe you can abbreviate a bit. If you know everything is at https://qanon.news/api/bread, you can leave that part off and concatenate later.
d55e7f No.4328167
I like to imagine how to keep comms up when the internet has suppressed all dissent.
An idea is to use networks of publicly accessible wifis which are not connected to the internet to serve whatever truth bombs necessary.
Kind of like pirate radio but legal.
I miss local area BBS, that's another thing it could do.
e65320 No.4338785
My host has been inaccessible since Friday night. If I buy new hosting (and I'm leaning that way), I won't be putting up the Research Tool on the new host–just the WP front end.
6b3962 No.4361840
Is there a NoSQL anon in the house? I've got an idea.
431331 No.4392102
https://medium.com/safenetwork/keeping-2018-safe-and-solid-d68eea18807a
Keeping 2018 SAFE and SOLID
You may have noticed a recurring theme across the SAFE ecosystem and beyond this year. The conversation around the ownership of data is picking up pace. Perhaps you were at (or watched) SAFE DevCon 2018. Or perhaps you’ve stumbled across a podcast discussing words such as RDF and Socially Linked Data.
SAFE of course is about three things:-
Security: no-one can access your data without your permission.
Privacy: data is only shared with those you choose (if and when you want to share it)
Freedom: of association, contribution, collaboration (amongst many other rights).
SOLID (short for ‘Social Linked Data’), on the other hand, is a project that was started by Sir Tim Berners-Lee that defines a set of standards for the representation of data which ensures that ownership remains with you as an individual.
As you can see, when it comes to the vision, there are more than a few similarities between the 5-year old Solid and 12-year old SAFE projects.
Over the Summer the team were out in San Francisco at the Decentralized Web Summit 2018 and gave an overview of the work that had been carried out to date in combining the principles of SAFE with the conventions of Solid before such internet luminaries as Sir Tim Berners-Lee and Brewster Kahle. And it’s now worth taking stock of the progress that we’ve made to date.
SOLID is driven by the desire to ensure that everyone gets back control of their own personal data from centralised platforms. So a SOLID web application simply becomes a way of displaying data from many different sources that you choose — without you losing ownership of your data. In other words, SOLID wants you to choose exactly where to store the data that you produce and then use URL’s (Uniform Resource Locators) to access that data moving forwards in a way that gives you control.
The concept is brilliant. But this brave new world envisaged by the Solid community doesn’t yet tackle one of the crucial problems out there — how to secure the data itself (regardless of where you have chosen to store it).
And that is exactly where SAFE comes in.
Because by using the SAFE Network to store your data, it now lives on a server-less, trustless autonomous network. No trust is necessary as the encryption key for a user’s data never leaves the user’s computer and no identifiable information is shared with any other peers. So by building these types of concepts into SAFE, developing applications becomes much easier — because all concepts of authentication, authorisation and data security are already taken care of by the Network itself.
In other words, it’s a future that delivers on the goals of both projects. But how will it work?
We started by focusing on two key objectives: data on SAFE had to be portable (so users could switch applications at will) and for that data to be self-descriptive (to enable users to define how their data could be searchable on the Network in ways that would bring them the greatest benefit).
For this reason, we adopted the RDF (Resource Description Framework) standard used by Solid. Having a standard way to store data on SAFE is crucial for scaling the project. And it also enabled us to build some utilities that would help developers in the future.
For example, WebID’s were introduced. These are simply a way of having an identity on the Network that you can share with other people using a URL. The data that is produced is stored on SAFE in the RDF format. You can see this in the WebID Profile Manager that we built (where you create your own profile with a human-readable URL) and also in the WebID Switcher (which enables users to choose any of his or her WebID’s to access any particular application on the Network). And if you want to try that out today you can — just take a look at Patter, our proof-of-concept Twitter-style clone.
What’s more, by publishing WebID’s on the SAFE Network, it solves the well-known problem faced by anyone who’s ever suffered as a result of malicious actors exploiting the current weaknesses of the current DNS system. For example, all it takes today is for an ISP or DNS server to be attacked for you to be redirected to a malicious server. What’s more, no-one has full ownership of their domain name on the Clearnet — you simply have a registered right to use it which can be removed at any instant. Relying on SAFE removes this vulnerability.
So how is this relevant today?
This week we released an update to the SAFE Browser. Get involved and download the new (v.0.11) release today. It contains plenty of functionality to ensure that the symbiosis between the two projects gets closer. There will be far more to come but at this stage, we’re just glad to see more people are getting excited by the thought of improving the future that we all want to live in…
451f56 No.4402057
>>4328167
There are alternatives.
One medium outside central control is amateur radio packet networks. So instead of using a corporate ISP as your internet service, you'd use a chain of privately operated amateur radio stations to transport data. I've never used this, but it's discussed as a kind of fallback emergency data network.
Even in the world of corporate ISP networks, if free speech is squashed there, see what's happened with memes making it through social media filters – there are ways to encode data that aren't evident to the censors. Over time they might get text recognition in images perfected, but there are always new angles.
676b04 No.4417914
520a6a No.4544807
Does anyone know where I can find info about 8ch's User JS feature?
I'm trying to call a function in main.js, but it's not working. I need to know how the namespace compartmentalization works.
I know my way around a couple other languages, but have no JavaScript experience.
6b3962 No.4546224
>>4544807
Let's see your script. It's probably just a misspelling.
520a6a No.4547593
>>4546224
Oh, it's a lot more than a misspelling.
I want a one-click filterID button. I know that such things exist, but I can't find it anywhere. So I'm trying to recreate it.
I thought I could take the code someone wrote for blacklisting images by MD5 hash and modify it to call the filter function that the '▶' button connects to. It wasn't as straightforward as I thought. My browser's Inspect Element and Debug Console features gave me clues about functions and variables being undefined. So I chased them down in the main.js code and included them. It's still not working. I have a hunch that "var boardId = board_name;" is incorrect. But I don't know how to debug it further.
As I said above, I have absolutely no experience with JS. This is all monkey-see-monkey-do.
// One-click filter-by-ID ("Nope") button for 8ch User JS. Fixes to the pasted
// version: getList() was missing, `var post` was later referenced as `$post`,
// and the duplicate check compared against an undefined `uniqueId` instead of postUid.
function getList() {
  // localStorage.postFilter holds { postFilter: {...}, nextPurge: {...} } (assumed default if empty)
  return JSON.parse(localStorage.postFilter || '{"postFilter":{},"nextPurge":{}}');
}
function setList(blacklist) {
  localStorage.postFilter = JSON.stringify(blacklist);
  $(document).trigger('filter_page'); // tells the built-in post filter to re-run
}
function timestamp() {
  return Math.floor((new Date()).getTime() / 1000);
}
function initList(list, boardId, threadId) {
  if (typeof list.postFilter[boardId] == 'undefined') {
    list.postFilter[boardId] = {};
    list.nextPurge[boardId] = {};
  }
  if (typeof list.postFilter[boardId][threadId] == 'undefined') {
    list.postFilter[boardId][threadId] = [];
  }
  list.nextPurge[boardId][threadId] = {timestamp: timestamp(), interval: 86400}; // 86400 seconds == 1 day
}
function onNopeClicked(event) {
  event.preventDefault();
  event.stopPropagation();
  var $post = $(event.target).closest('.post');             // was `var post`, then used as $post
  var threadId = $post.parent().attr('id').replace('thread_', '');
  var postId = $post.find('.post_no').not('[id]').text();   // not used below, kept from the original
  var postUid = $post.find('.poster_id').text();
  var boardId = board_name;                                  // global set by main.js
  // blacklist.add.uid(pageData.boardId, threadId, postUid, true); // left over from the code this was copied from
  var list = getList();
  var filter = list.postFilter;
  initList(list, boardId, threadId);
  for (var i in filter[boardId][threadId]) {
    if (filter[boardId][threadId][i].uid == postUid) return; // was `uniqueId` (undefined)
  }
  filter[boardId][threadId].push({uid: postUid, hideReplies: false});
  setList(list);
}
function addNopeButtons() {
  $('.post').each(function (i, post) {
    if ($(post).find('.nope').length === 0) {
      $(post).prepend("<input type='button' class='nope' onclick='onNopeClicked(event)' value='Nope'>");
    }
  });
}
setInterval(addNopeButtons, 500);
ab4be2 No.4548184
As long as QResearch prefers to keep their tools open-source, I am all in and happy to contribute. Closed-source projects are dangerous here IMO. since they could be easily comp'd or include malicious code. So be careful what you download and if you share your tools, better make it open-source.
b83361 No.4556298
yo what up guys. new to q research. long time programmer. whats the main project goin on here? repository links?
e65320 No.4570108
>>4556298
Everyone's got their own project, it seems.
6b3962 No.4572515
>>4556298
>>4570108
Yeah most folks seem to like working on their own I guess. I'm open to collaborate tho. What are you good at? What are you interested in?
c2985f No.4577161
>>4548184
Notice you all put in thousands of tips and connections, yet it all went into a dark pit and not a word ever came back about anyone being arrested or charged? Are you sure you are helping the right side? How do you know, since 8ch never produced a single thing back, such as arrests or charges?
431331 No.4590436
>>4590355
>>4590270
>>4590387
Mainbread links for the darkoverload 1st data dump
PGP included on site to verify message.
f22101 No.4591664
>>4590436
Where's the "verification"? Did not see any, so are you generalizing injecting your opinion on its words? Anyone can interpret anything as they wish, and have more then 1 meaning. Only one who reads gets what they want out of it, yet is not proof. Why so much non stop lies on 8ch?
431331 No.4594367
>>4591664
Go to the blog sauce in the 4590270 post. I removed the PGP key from the 4590355 post because the post was too long.
You'll have to check the keys for verification. Beyond that, decide for yourself. I was just posting the sauce, faggot.
2960a2 No.4606155
Eric Schmidt cowrote Lex, the famous Unix lexical analyzer, which was redone into Flex.
Seems interesting as trivia, now that 40 years later Google and friends have their algorithms to detect and subtly censor expression of particular ideas on the internet.
431331 No.4638318
Tim Berners-Lee crypto project
10c0a3 No.4689820
Is qposts.online dev around here?
I have a suggestion for a two-column layout, pic related.
One column would be the filter column (search by keyword or whatever).
The second column would always show ordered posts (ASC/DESC) centered around the "selected" one in the first column.
This would speed up RE_READ quite a lot imo.
At the moment I'm using multiple tabs, but it's not as efficient, especially considering there is no "go to post #" function.
6120a3 No.4742747
A way to start with Linux:
I use Linode for my Unix needs.
Linode is one of several companies who sell Virtual Private Servers (VPS).
The actual device is at a data center somewhere.
This one is $5 per month, and provides me with a full Debian Linux installation.
It is accessed by an SSH client on my Android tablet, which means I use the traditional Command Line Interface, which is typing in text commands to get it to do things.
I use it to experiment and learn Unix, soon to compile programs for Raspberry Pi, to scrape and store Q stuff, to save YouTube vids, and to serve web pages when I want.
I have never tried a Windows-type graphical interface; I don't think that would work well.
This may be the best, least technical route to learn Unix, rather than start by installing it on one's own home machine.
PS Bump
caf3cc No.4934019
What's a good method to remake the qanon clock in React? I feel it can be much more interactive, but placing dates around the clock seems to be a difficult task.
>>4361840
NoSQL flag in the hizzy, what's up?
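On the clock question above: the fiddly part is mostly polar-to-cartesian placement, and that piece is framework-agnostic. A small sketch follows (not React-specific; in React you would map these positions onto absolutely positioned elements or SVG text nodes).
// Placing N date labels evenly around a clock face is polar -> cartesian math.
function clockPositions(labels, radius, cx = 0, cy = 0) {
  return labels.map((label, i) => {
    const angle = (i / labels.length) * 2 * Math.PI - Math.PI / 2; // start at 12 o'clock
    return {
      label,
      x: cx + radius * Math.cos(angle),
      y: cy + radius * Math.sin(angle),
    };
  });
}

// e.g. 60 minute marks on a 200px-radius clock centered at (250, 250):
// clockPositions([...Array(60).keys()], 200, 250, 250)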
caf3cc No.4934054
>>4300691
Anti-viruses shaped like a virus/pyramid.
10c0a3 No.4937596
>>4934019
Take a look at these demos https://zircleui.github.io/docs/examples/cuba-libre-recipe.html
It's in vue but you get the idea.
471f3c No.4954877
>perlfag
This is not perfect, but it might help some get a good start. It's meant to run in a unix terminal and could be modified to suit a number of needs (e.g., monitoring, archiving, etc.), including nefarious ones. I tried to keep deps to a minimum, but needed some non-standard ones to convert the HTML bodies of the posts to plain text. Enjoy.
>https://pastebin.com/4EfBmRvE
d5924e No.4991761
What is the right way to get the full http path to linked posts of the form:
">>post_no"
and
">>>/someboard/post_no"
4e6ca6 No.4993194
>>4934054 Royal Raymond Rife illuminates pyramids above and below.
Sonnets convey enochian - Quints
1fd4ac No.4993740
Anyone got Qresearch→SSB working yet?
1d8b77 No.5026336
May I humbly submit to you a perl tool for monitoring breads. There are no posting capabilities. Based on my needs. Not perfect. Use and extend as you wish.
>https://pas tebin.co m/RUFnMEw8
1e8484 No.5042718
Adding this here: anyone with ideas for a Q post database, I would like to connect the next version to a database. I have the Alexa front end and back end, including the US English version and the UK one; the next version will include all English-speaking countries. So any other-language speakers who are here, this anon of QAnon, proudly serving at the pleasure of the POTUS, would like to collaborate on putting one up for your country. Let's connect; a cloud-based WordPress site is set up here: wootva.com
UK
https://www.amazon.co.uk/WillPower74-Q-Post-UK/dp/B07NCZR5LX
Aussie
https://www.amazon.com.au/dp/B07NCKHS2B/
Anons, this skill for Alexa is available in Australia. Please enable it and share with your friends and family. I know that many of us will not use these devices; however, many of the more liberal among us do, and this may be the only "true Q intel" they can get. This is the first version; the intention is to make all Q posts available and requestable by post number or keyword search, etc. Any help on this will be great. A website is set up at wootva.com, and an email subscription will be available in the near future.
#QAnon #Qarmy #aussie for the #Australia #Q team??? Share with your friends in the #Outback !!! #SuperBowl Of our Life!!! Ty @realDonaldTrump #MAGA
https://www.amazon.com.au/dp/B07NCKHS2B/
6d0750 No.5043197
>>2366039
This is awesome and I can help.
Push your data into a blockchain. That way you wouldn't have to distribute the content with the app, nor worry about signing authenticity of the data. Keep a local database of tx ids as a key.
Write it to run in containers; that way it's ultra portable.
Freenode IRC #qanon can be a place for collab comms. It's pretty empty.
b9a964 No.5043532
>>4937596
>>>4934019
>Take a look at these demos https://zircleui.github.io/docs/examples/cuba-libre-recipe.html
>It's in vue but you get the idea.
This is amazing. Thank you
eaf41c No.5051486
>>4993740
Single sideband?
With images maybe not much bandwidth.
6b3962 No.5207994
Anybody working on anything new?
I'm looking at http://ogp.me/
7ce7c4 No.5208282
>>5207994
schema.org has the basis for an extensive data model. It will need some modifications, but I think it will work.
IF/WHEN we need to wake the normies, it will be a lot easier if we can "render" a timeline based on QPost->item->news->media->etc. Otherwise they will have to read through a bunch of garbage and discern what is important.
7ce7c4 No.5208381
>>5208282
IMO, the boards are good for collaboration and discussion, not for archiving, analyzing or normalizing the information.
3ea36c No.5208485
I thought it would be awesome to have a Q museum in a Minetest world. I made a Q building that is 200 blocks high and thought 'this would be awesome as an online history museum.'
At some point, when it's cool to be open about Q, I'd give out the design.
My point: Minetest is a block-based system. HTML is simple and ubiquitous, and yet complex and full-featured. There is no reason that a block group can't be crafted in Minetest, saved, and then recreated, through data-file filters, as an HTML page, a 3D page.
So: a Minetest-like interface on a web page, a museum of Q that video game players can log into and travel through. I called it the . . . but if I say that it will dox me, so . . .
7ce7c4 No.5208556
>>5208485
Once it's all hitting the fan, that would make sense….
However, at this point it seems like there really needs to be some R&D on collecting, managing, and analyzing information…. Currently maybe every tenth post is something of worth; the rest is usually gay porn, muh jew, tootsie, worthless memes…..
I'd like to see a "portal" that allows anons the ability to nominate content, that once consumed would be added to the graph. Automatic ingestors/extractors/etc would scan for qposts, links, articles, social media, etc and record/archive the nodes and edges to expand the knowledge graph.
3ea36c No.5208744
>>5208556
How do we know what is relevant and what is not? Clearly a lot of what are thought to be 'bots' are a way for some people or whoevers to comm.
There is a kind of post I've seen where a board insider is able to grab all the posts from a single poster, with all the images, across a long series of breads. Such output is very useful to dump in the face of a shill and say 'here is what you've been doing'.
Those tools must be available, so I'd think you'd dovetail into those.
People also save old breads offline. A tool ought to let someone parse through that kind of data too, in the format of a wget -r or a 'save HTML page' from a browser.
7ce7c4 No.5208935
>>5208744
I'm not seeing the correlation to "bots".
How do we know? Well, I would hope that the original Q posts, along with automated extractors, would do 50% of the surface work and give automated guidance to the anons to perform further digging and analysis. It can't be completely automated without significant investment of time and resources into NLP to enable the "context" extraction of information.
These tools do exist, in the sense that they're available if you have a multi-letter acronym for your organization or millions of dollars… aside from that, there is no open aggregator.
In order to archive, we can distribute the "store" to multiple nodes and perform offline backups. That way if attacks are successful new nodes can be spun up quickly and fill the gap.
Also needs to be done in an anonymous fashion to prevent doxing and safe guard people's identity.
Ideally, by aggregating into a graph, we could generate customized timelines/relationships/analysis with a simple query. This could then be rendered in the form of a "social timeline" to show the progression of a drop->outcome.
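A toy in-memory version of that graph idea follows: nodes for drops, articles and tweets, edges for "cites" or "led to", and a timeline built by walking outward from a drop. All node and edge shapes here are invented for illustration, not an existing schema.
// Toy in-memory knowledge graph.
const nodes = new Map();   // id -> { id, type: 'qpost'|'article'|'tweet', date, title }
const edges = [];          // { from, to, rel: 'cites'|'led_to' }

function addNode(n) { nodes.set(n.id, n); }
function addEdge(from, to, rel) { edges.push({ from, to, rel }); }

// "timeline" for a drop: follow outgoing edges, then sort what we reach by date
function timelineFor(dropId) {
  const reached = new Set([dropId]);
  const queue = [dropId];
  while (queue.length) {
    const id = queue.shift();
    for (const e of edges) {
      if (e.from === id && !reached.has(e.to)) { reached.add(e.to); queue.push(e.to); }
    }
  }
  return [...reached].map(id => nodes.get(id))
    .sort((a, b) => new Date(a.date) - new Date(b.date));
}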
3a40d0 No.5209171
Hi codefags,
Scrum master and non-codefag seeking assistance.
I am the calendarfag and owner of dark-to-light.org. I have been trying to implement the JSON/API for both Q posts and Notables from the qanon.news website (great work, anon!).
I am receiving errors whilst trying to utilize it. I am using a WordPress CMS and do most of my work inside WP. I've tried various methods including JSON importers, RSS importers, etc., and still have no luck. I'd like to have a page dedicated to both.
Anyone got any hints or tips for me to follow? Sorry if this is a stupid ass question.
Also, dark-to-light.org is a server/space I am paying for myself. I have considered implementing various things such as a Q Wiki etc. If any other anon out there needs a space or has an idea for something we can contribute to and build to help the normies (my goal with the site), feel free to reach out. Contact info and form on the calendar (in notable bun/dough) on the site.
7ce7c4 No.5210394
>>5209171
Looks like some non-standard JSON; possibly manageable with more control over the server, but it may be problematic with WP.
What kind of hosting is it?
6b3962 No.5214879
>>5209171
I'll add your URL so you can access it directly.
6b3962 No.5217073
>>5209171
Try using a json content importer and this code in a post
[jsoncontentimporter url="https://qanon.news/api/smash/2744"]
Q#2744
{NAME} {TRIP} {POSTDATE}
{text}
[/jsoncontentimporter]
That should work for the straight text posts. You'd have to do some other magic to get the images. I'll see if I can't work that out later
https://json-content-importer.com/
1b2cea No.5228698
>>2352645
Is it possible to embed the main app within a “screen” app? Similar to how cover apps are made for hiding pics, etc.
If so, you could hide its main function under a guise…such as a patriot workout app.
7ce7c4 No.5228890
Many things are possible, just depends on the use case.
What are you looking to do?
ab4fd5 No.5282354
Cross-posting from prev main bread:
Theorizing on Q's Game-Theory
>>5281330 (pb)
Thought you fellow codefags might find it interdasting and hoping there's an anon out there that can chime-in and expand.
Gist is it's my musings on Q's mention of Game Theory (>>5271872 , 02.19.2019), hypergraphs, RDF, and quantum computation:
cd4bf3 No.5286588
>>5286542
let me try that again
>>5282354
be9fe3 No.5306313
Proposal - Q Standardized Post Line Numbering
I can't find it right now, but Q said to go back and reread posts.
I'd like to propose that we start numbering the lines of each post. It is clear by now that the lines in each Q post are not usually meant to be taken together, or can also be taken apart in the future. It would help tremendously for Q post research if we could get all major distributors of Q posts to start numbering each line. All important texts are line-numbered, and I think it is time we do the same here.
So what is the "ask"? That we can agree this would be useful, and that our glorious tool builders would agree to do this as well for us and the masses. Thank you for all you do!
62b311 No.5317222
>>5306313
>Q said to go back and reread posts.
Found it. And he said,
>Start from the beginning.
RT'ing for completeness.
649a79 No.5322488
cross post requested by research baker…
Any reviewers for this code? It's really my first and only attempt with JS/jQuery.
The blurb is in
https://8ch.net/qresearch/res/5322105.html#q5322343
I suppose I could add the checksums if that makes folks feel better; this is what is running as User JS in my brave-dev browser atm…
The script compressed for loading: qresearch-190221-0413Z-min.txt
https://pastebin.com/qgykrEa3
The script source commented for review: qresearch-190221-0413z-source.txt
https://pastebin.com/AUqrbyTY
649a79 No.5322534
>>5322488
$ sha512sum qresearch.js qresearch-190221-0413Z-min
14684c02a7b2fa6e113ee88f54a555f01c0c14517406ec45ee5e1766450edf3219c6575633c739252c11d1edef028ac52dfa6f27f447fd913008ace2b78ed52c qresearch.js
993aa29a5a3b60014fcc5a7227ab059466f0dc2f1f98f6eaae01214ab940e72f65d3f3ddc33601b37312e350e063990f084c9845246458f9a30e6252d1c33b47 qresearch-190221-0413Z-min
The -min was run thru uglifyjs2 with no optimization.
6b3962 No.5346562
Codefags - had to disable the Q Research API. It was being attacked and causing the site to crash. So, if you or your app was using the API it's not going to work until I can come up with a solution.
e7b4f4 No.5348785
>>5306313
I think every single line should be unique.
0286ce No.5349162
>>5346562
>breakin my API
lots of faggots out there trying to stop us any way they can
The best solution would be to distribute local tools (commandline, gui, or localhost:7117) and a database (like sqlite or indexed sphinx data) so that research could be local. Text only, of course (but maybe with links to the attachments). I've been thinking about such a tool myself, or the data set + minimal tools to serve as a guide for building more tools.
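A minimal sketch of that "local tool on localhost:7117 plus a database" idea, assuming the better-sqlite3 Node binding is installed and a posts(no, board, time, text) table has already been loaded; both are assumptions, not an existing tool.
// Minimal sketch: text-only posts in SQLite, served locally for searching.
const http = require('http');
const Database = require('better-sqlite3');

const db = new Database('qresearch.db', { readonly: true });
const search = db.prepare(
  "SELECT no, board, time, text FROM posts WHERE text LIKE ? ORDER BY time DESC LIMIT 50");

http.createServer((req, res) => {
  const q = new URL(req.url, 'http://localhost:7117').searchParams.get('q') || '';
  const rows = search.all('%' + q + '%');
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(rows));
}).listen(7117, '127.0.0.1', () =>
  console.log('local search on http://localhost:7117/?q=...'));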
426545 No.5554587
426545 No.5554618
So, are there any Anons here that can create JS for the board? I have a stellar idea that 99% would absolutely love.
6b3962 No.5555782
>>5349162
Yeah I've been thinking about that too.
BIG.
The test DB I have on my workstation was over 10 GB. A local index of Sphinx data is an interesting idea.
>>5554587
https://qanon.news/archives/x/2352371
https://qanon.news/archives/clone/2352371
^^ still working on this 8ch clone
>>5554618
Probably most everybody here. Whatcha got?
447c5b No.5560804
071943 No.5578697
NEEDED: A google earth flight path layer that displays all of the screenshotted planefag flights in 3d
https://youtu.be/5p4i8Ot1n3o
> Find API for flight data
> Pull callsign/id from a folder of planefag images (OCR?)
> Acquire flight data and insert into Google Earth Engine
> Bonus: Create a timeline to animate flights chronologically
thoughts?
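Once callsigns are OCR'd and positions pulled from whichever flight-data API gets chosen, the Google Earth piece is mostly just writing KML. A sketch that turns one flight's points into a LineString placemark (the input shape is a placeholder, not any particular API's format):
// Sketch: emit a KML LineString for one flight that Google Earth can open.
function flightToKml(callsign, points) {
  // KML coordinates are lon,lat,alt
  const coords = points
    .map(p => `${p.lon},${p.lat},${p.altMeters}`)
    .join(' ');
  return `<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>${callsign}</name>
    <LineString>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>${coords}</coordinates>
    </LineString>
  </Placemark>
</kml>`;
}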
6b3962 No.5810723
With all this twitterbanning and facepurging going on recently I've been looking at Mastodon.
https://joinmastodon.org/
Does anybody know anything about this?
3cc91d No.5822048
>>5810723
I know it exists, it's FOSS, and it would let us carry on as needed by using it, right? I took it to be a more comprehensive Slack + PHP board type thingee. Not a codefag, just self-taught.
4d6ed8 No.5834028
I don't know how anyone can keep up with the Q research General thread during busy times. I don't know how the bakers do it.
I could use a front end that sorts posts in real time according to how many replies they have, so that I can focus on a subset of the most popular posts. I want to keep it simple, so no posting, only lurking. But clicking on a post No. will open the standard front end in a different browser tab for posting. Also, no pop-ups on hover yet, but that can be added.
I'm almost done. Just need to test.
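Not this anon's source (that is linked in the next post), just a sketch of the core idea: count the >>replies each post receives in the thread JSON and sort descending. Field names are the same vichan-style assumptions as earlier in the thread.
// Sketch: rank posts in a bread by how many replies point at them.
async function popularPosts(board, thread) {
  const data = await fetch(`https://8ch.net/${board}/res/${thread}.json`).then(r => r.json());
  const replyCount = {};
  for (const post of data.posts) {
    // post bodies are HTML, so >> may appear escaped as &gt;&gt;
    const refs = (post.com || '').match(/&gt;&gt;(\d+)|>>(\d+)/g) || [];
    for (const r of refs) {
      const no = r.replace(/\D/g, '');
      replyCount[no] = (replyCount[no] || 0) + 1;
    }
  }
  return data.posts
    .map(p => ({ ...p, replies: replyCount[p.no] || 0 }))
    .sort((a, b) => b.replies - a.replies);
}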
4d6ed8 No.5835173
>>5834028
Q Research General front end for busy lurkers
https://qlurker.github.io
The HTML file can be saved to your desktop and run from there with a click.
Usage:
Lurking only, unless you go to the standard front end where you can make a new post
Latest posts loaded every 30 seconds
Click on an image or file thumbnail to see the full version in a new window
Click on a post No. at the top of a post (e.g. No.5834432) to go to the standard front end
Click on a post link (e.g. >>5834662) to jump to the post
If the post is in a past bread, a new window opens with the standard front end
Click on a post number in the left margin to jump to a popular post in the current bread
When the post count reaches 751, refresh the page to go to the next bread
To do:
Add pop-ups when hovering over a post link
Source:
https://github.com/qlurker/qlurker.github.io
4d6ed8 No.5950890
ad953a No.6072472
Would it be feasible to set up a Raspberry Pi or a similar device as an open WiFi hotspot (named "QAnon" or similar) and point all devices to a short redpill guide when they connect in their browsers?
f6fbbd No.6072926
>>6072472
people do it on campuses.
https://www.reddit.com/r/hacking/comments/9vmjkr/recently_a_reddit_user_found_hidden_raspberry_pis/
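On the Pi itself, hostapd plus dnsmasq would handle the open hotspot and resolve every name to the Pi's own address; the web side then only needs to answer every request with a redirect to the guide. A minimal sketch of that piece in Node, with the address and page name as placeholders:
// sketch: catch-all web server for a captive-portal style redirect
// 192.168.4.1 and redpill.html are placeholders for the Pi's address and the guide page
const http = require('http');
const fs = require('fs');
http.createServer((req, res) => {
  if (req.url === '/redpill.html') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(fs.readFileSync('redpill.html'));
  } else {
    // everything else (including phone connectivity checks) gets bounced to the guide
    res.writeHead(302, { Location: 'http://192.168.4.1/redpill.html' });
    res.end();
  }
}).listen(80);
Most phones fire a connectivity check right after joining an open network; the unexpected redirect is typically what pops the "sign in to network" screen with the guide in it.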
82787a No.6245423
Trying to fix a script. Any takers?
1d3145 No.6245701
>>6245423
paste a pastebin
82787a No.6245789
>>6245701
Script works fine. Have tuned the shit out of it for my liking……
82787a No.6245798
However, it now does not work in Firefox but does work in Chrome.
82787a No.6245807
if (t) { // t: the latest Q post marker the script watches
  m.innerHTML = "What's Good!! (" + t + ")"; // m: the status element in the page
  console.log(t + ' ' + window.t);
  if (window.t != t) { // fires only when a new post appeared since the last check
    if (window.announce) clearTimeout(window.announce);
    var snd = new Audio(frog); // frog: the base64-encoded alert sound (data URI)
    snd.play();
1d3145 No.6245837
82787a No.6245881
Roger.
Basic rundown is…
Toastmaster.
I have it play a sound when Q posts.
That is the portion that I pasted.
It NO LONGER works in Firefox. It DOES work in Chrome.
Have not made any changes to Firefox. Disabled all add-ons.
Just wondering if anything has changed that I may not be aware of.
875850 No.6245916
What does it print on the console when it fails in Firefox?
1d3145 No.6245937
It's not failing in Firefox for me; I need to see the error.
82787a No.6245981
>>6245937
Would you want a pastebin of the whole script? Then you could go to a bread where Q has posted and see if snd.play() executes.
1d3145 No.6245987
>>6245981
I'm hearing sound. This implies the error is somewhere else.
A pastebin will work; then I can compare it side-by-side with the original code.
82787a No.6246006
>>6245987
Wow… something on my end in Firefox then. Also tried Comodo and it works there too.
I really appreciate your help! I don't want to take up your time, but if you're really interested in troubleshooting, I'll roll with you.
Up to you.
1d3145 No.6246048
82787a No.6246067
>>6246048
https://pastebin.com/HMGeHa41
82787a No.6246136
>>6246048
I bet it was the Firefox update on April 10.
66.0.3; something on their end.
1d3145 No.6246193
The 'chatty' variable is missing (not an issue).
There seems to be some issue with the mp4 or png; a size limitation, perhaps? Replacing them with the originals, it functions fine.
82787a No.6246236
>>6246193
Yeah, those base64s are really big. It was never an issue until lately, though.
All the modifications have been tested since Feb 2019. Stopped working recently.
In the post above I mentioned the Firefox update. That has to be the cause.
I've removed all the other variables and it still won't work, but it will work in other browsers.
1d3145 No.6246258
>>6246236
Try passing them through a re-encoder to downscale them. MP3s compress very well.
82787a No.6246284
>>6246258
If I remember correctly, when using it as an mp3 I could not get the sound to play; I had to encode using the wav file.
1d3145 No.6246556
If it's a wav, then you're running into limitations in Firefox's codec support.
https://support.mozilla.org/en-US/questions/1228379
This wav of a dog barking is working fine.
https://pastebin.com/Fv2fYZi9
Converted using this:
https://www.base64encode.org/
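For anyone following along, the pattern being discussed is just a base64-encoded wav wrapped in a data URI and handed to Audio; the payload below is a placeholder, not a real encoding:
// pattern under discussion: embed the sound as a data URI and play it
var wavBase64 = 'PASTE_BASE64_WAV_HERE'; // placeholder for the output of base64encode.org
var frog = 'data:audio/wav;base64,' + wavBase64;
var snd = new Audio(frog);
// play() returns a Promise in current browsers; it rejects if the browser cannot
// decode the payload (the Firefox codec issue above) or blocks autoplay
snd.play().catch(function (err) { console.log('audio failed: ' + err); });
If Firefox chokes on a particular wav, it is worth checking that the file is plain PCM before encoding it, since that is the most widely supported wav flavor.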
82787a No.6246705
>>6246556
It works with…
opt.onclick = function () {
  var snd = new Audio(frog);
  snd.play(); // a manual click always plays
};
…just not when there is a Q post.
Quick question since you're being really helpful.
Does the action of…
m.innerHTML = "What's Good!! (" + t + ")"
…trigger the sound?
1d3145 No.6246732
>>6246705
The trigger is marked below:
if (window.t != t) {  // <-- the trigger: t changed since the last check
  if (window.announce) clearTimeout(window.announce);
  var snd = new Audio(frog);  // frog: the base64-encoded alert sound
  snd.play();
  window.announce = setTimeout(function () {
    var z = t;  // remember the value that fired this pass
    var synth = window.speechSynthesis;
    var n = document.getElementsByClassName('subject')[0];
    if (!n) {
      window.t = z;  // store it so the trigger does not refire on the next poll
1d3145 No.6246742
*there is also another window.t = z towards the end
1d3145 No.6246779
>>6246742
I see you removed that one at the end.
If it isn't there the trigger won't work right.
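To spell out why that assignment matters: the trigger is a compare-then-remember check, so if window.t never gets updated the sound fires on every poll instead of once per new post. Boiled down (names kept from the script above):
// boiled-down version of the trigger logic
// t = value seen on this poll, window.t = value remembered from the previous poll
if (window.t != t) {        // a new post arrived since last time
  new Audio(frog).play();   // announce it once
  window.t = t;             // remember it, otherwise this fires again on every poll
}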
82787a No.6246831
>>6246779
So now you've had me look at my actual JS in the options on /qresearch/, and not my Notepad paste.
I switched to Comodo, and looking at it, it is VERY different from my Notepad copy.
So I'm going to have to troubleshoot wtf is happening.
82787a No.6246894
>>6246831
Ok.
Fixed.
I gotta get some sleep or something, the stupid is creeping in.
Hey Anon, thanks for your interest and help.
That's really stand-up of you.
1d3145 No.6247019
>>6246894
NP. Gotta support my users :P
b2ed00 No.6412115
Baker, you got the links to all previous "QAnon Computer Programming" threads? TYB!
091ce1 No.6638141
yo frens
Ars has a story regarding Linux malware, specifically HiddenWasp.
It links to this blog post:
https:// www.intezer.com/blog-hiddenwasp-malware-targeting-linux-systems/
hunting and patching time
pics related from blog post
C&C located at:
thinkdream - server farm
inetnum: 103.206.120.0 - 103.206.123.255
netname: THINKDREAM-HK
57e5ec No.6749563
New decentralized internet. Crypto-related but pre-Bitcoin.
https://medium.com/safenetwork/safetheinternet-14-june2019-25e91865655e
https://techcrunch.com/2014/07/23/maidsafe/
https://techcrunch.com/2018/06/02/not-just-another-decentralized-web-whitepaper/
https://techcrunch.com/2016/08/12/after-a-decade-of-rd-maidsafes-decentralized-network-opens-for-alpha-testing/
https://www.businessinsider.com/these-guys-are-creating-a-new-internet-2014-5
https://spectrum.ieee.org/view-from-the-valley/telecom/internet/hbo-silicon-valleys-decentralized-internet-realworld-teams-say-they-already-invented-it
https://www.theguardian.com/technology/2018/feb/01/punk-rock-internet-diy-rebels-working-replace-tech-giants-snoopers-charter
https://www.coindesk.com/maidsafe-ceo-david-irvine-talks-nature-ants-decentralization
https://www.computing.co.uk/ctg/news/3018000/blockchains-are-the-wrong-solution-to-data-security-problems-says-maidsafe
http://futurescot.com/the-small-company-from-troon-which-has-got-the-attention-of-sir-tim-berners-lee-and-hollywood-and-scotland-not-so-much/
https://www.computing.co.uk/ctg/news/3036546/decentralising-the-web-the-key-takeaways
https://bitcoinexchangeguide.com/crypto-startup-safe-network-finds-feature-in-ralph-breaks-the-internet-wreck-it-ralph-2-movie/
https://www.commonspace.scot/articles/8915/meet-troon-based-tech-firm-creating-brand-new-safer-internet
https://safenetwork.tech/