
/qresearch/ - Q Research Board

Research and discussion about Q's crumbs


File: f4f8f5614ce4cd4⋯.jpg (580.13 KB, 696x694, 348:347, scripting scraping.jpg)

e6b154 No.513266

Technical thread for all things related to working with your local archive

Once you have a local archive (e.g. as >>493884 describes), it's not too hard to extract stuff, search it, reformat content or correct references.

Open-source text processing tools (familiar to unix/linux users) can do that for you, and even anons on a Windows OS can use them, for example via Cygwin or PowerShell.

Use this thread for recipes, tips and questions of that geeky technical nature, and don't hesitate to try things yourself.

e6b154 No.513387

For the sake of completeness, here is a list of all qResearch threads (generals only) via pastebin.com/BL49G1eJ


e6b154 No.513456

>>513387

The above list was created by parsing the HTML code of the catalog page.

There should be 752 posts in each of them, but at some point certain breads started shrinking (and were later partly refilled); I can only hope the removed posts were of the shilly rather than the sensitive kind.


e6b154 No.513632

While I wonder at times why so many anons still use a (commercial, known-for-fuckery) Windows OS, it is not too hard to make a Windows machine scriptable once you have installed something like Cygwin or know how to use PowerShell.

Once a command line that knows "grep" & "wc" is running (for linux/unix/mac that goes quickly), one can easily extract the number of posts contained in a thread's HTML:

grep -ob "<time unixtime=\"[0-9][0-9]*\"" ${HOME}/archive/qresearch/res/502597.html | wc -l

All posts have timestamps, and these can be found by searching for <time unixtime="[unixSecs]".

The above line first extracts this time string (via "grep"), and "wc" (after the "|" sign) just counts the number of lines written out by "grep".

To see what "grep" does on its own, leave out everything starting from the "|" sign.
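
A quick way to get the post count for every thread in the archive at once (just a sketch, using the same time-tag marker and archive path as above):

for f in ${HOME}/archive/qresearch/res/*.html; do echo "${f}: $(grep -o '<time unixtime="[0-9][0-9]*"' "${f}" | wc -l) posts"; done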


e6b154 No.513981

Also useful might be correcting the references in a local archive. In most cases, those still point to the online source, even though a local copy is available.

References of the >>513266 kind can be extracted/addressed using regular expressions. To search for them, I'd try:

grep "href=\"/qresearch/res/[0-9][0-9]*\.html#[0-9][0-9]*\"" ${HOME}/archive/qresearch/res/*.html

In regexp terms this selects all occurrences of a string of the form

href="/qresearch/res/[someNumber].html#[someNumber]"

Substituting these strings (via sed, using the local path) should fix the refs for local use.
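
One way to do that substitution, assuming all thread HTMLs sit next to each other in res/ so the refs can simply be made relative (GNU sed shown; work on a copy first, since -i edits the files in place):

sed -i 's|href="/qresearch/res/\([0-9][0-9]*\.html#[0-9][0-9]*\)"|href="\1"|g' ${HOME}/archive/qresearch/res/*.html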


e6b154 No.514619

>>513456

There appears to be a maximum of around 375 threads in the catalog file (qresearch/catalog.html), after which point (dormant) threads start "falling" out of the catalog.

You can use the archive at qresearch/archive/index.html, or check for snapshots on archive.org:

web.archive.org/web/*/8ch.net/qresearch/catalog.html


0fe486 No.514781

Anons - I have a copy of the qresearch board in its entirety as of an hour or two ago. The archive includes all images too, around 23GB (I think some of you are forgetting extra_files)

the scraping app is written in node - i can seed a torrent as well, which may be better because of bandwidth


3d5966 No.514839

>>514781

mega.nz


0fe486 No.514873

>>514839

coming soon, thanks friend!

signed, friend anon


e6b154 No.515050

The (fixed) structure of the 8ch HTML pages, as long as it stays fixed, allows extraction of all contained parameters & elements (e.g. post IDs, attached images, message text) from a copy of the original source (if your comms are safe).

Depending on what the extracted data will be used for, one has to choose an output format (e.g. json, xml, csv). There's not much difference between them, except when you don't know them.

One has to be careful though when Content Management Systems are involved: it might become important to know what format you can feed into your CMS or webpage, and/or what kind of API your provider supports.
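
As a tiny example of extracting one such element and re-formatting it: the time tags already used above can be dumped as readable dates, one line per post (a sketch; GNU date assumed):

grep -o '<time unixtime="[0-9]*"' ${HOME}/archive/qresearch/res/502597.html | grep -o '[0-9][0-9]*' | while read t; do date -u -d "@${t}" '+%Y-%m-%d %H:%M:%S UTC'; done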


e6b154 No.515106

>>514781

>>514839

"As of an hour ago" means the archive is missing those deleted posts? >>513456

Is it known who deleted them, and what/why was deleted?

Shouldn't be too hard to find the most "original" version in all of our archives.


0fe486 No.515167

>>515106

much agreed - the more people who do this and have different copies - the better for analysis.

i want to get a general copy out to non technical people as well. something they can hang on to, you know?

thanks for your work patriot


e6b154 No.515270

>>515167

I should have a local copy of all threads (general) from cbts & thestorm. Also have the halfChan and occasional /pol/ threads on 8ch, when Q was there. There's also what the MA has on qarchives.ml. I'll think about a way to compare/crosscheck archives ….


0fe486 No.515344

>>515270

everything is a json object that we can get - basically we have the index we need - vichan-devel on github

would be more than willing to help!


e690a2 No.516907

hello anons -

the upload is done - here is the archive to mega

https:// mega.nz/#!2DRl0TwI!0jCtPpMm4Vd_Uul1Vc-wSjWj3eKNkD4gXl3zfuUYyuI

real patriots archive offline - will provide another if needed at a later date

#maga


ff5382 No.518079

Here's an example of how to search your local archive for some string/expression you know appears in some thread:

Say you're looking for a previous post that you remember contained, for example, "puzzle".

If your archive were at ${HOME}/archive/qresearch/res/*.html, then you could save a list of all HTMLs in a temp file:

ls ${HOME}/archive/qresearch/res/*.html > tmp.log

and loop over all the files, searching for your string using this one-liner:

while read ifile; do a=$(grep -iob "puzzle" "${ifile}"); [[ "${a}" != "" ]] && echo -e "${ifile}\n${a}"; done < tmp.log

If the search term is found, you'll get a list of the HTML-filenames followed by all occurrences of the term and their byte-offsets within each file.

"grep -i" makes it case-insensitive

"grep -o" outputs only the search term, if found

"grep -b" adds byte-offset to output

Once you have a byte-offset, you can do for example:

tail -c+[byteOffset] [htmlFile] | head -c100

to see the first 100 characters following the start of your search term
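
If your grep supports recursion (GNU grep does), the temp file and the loop can be skipped; a single call then prints filename, byte offset and match in one go:

grep -r --include='*.html' -iob "puzzle" ${HOME}/archive/qresearch/res/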


ff5382 No.518484

>>518079

Here is an example script that does the search for some words/expressions in a slightly more sophisticated way:

pastebin.com/tM53Q6AM

Linux/Mac users should be able to run it on the console; Windows users would need, for example, Cygwin to run it. It's meant as an example, so feel free to use/edit it as needed.


0bce00 No.524383

>>513266

Is there a wget command or script to make a local archive of the qcodefag.github posts? I get the frame and styles, no data. Did I miss the method somewhere?


2b8bb5 No.525585

>>524383

Not sure if QCodeFag still maintains the archive and updates it with new posts, so I'd suggest using QCodeFag's backup/mirror at qanonmap instead.

Since it is github, you can clone the entire branch using:

git clone https:// github.com/qanonmap/qanonmap.github.io.git

if you have the git tools installed (e.g. "sudo apt-get install git", or "sudo dnf install git")

You'll have a folder qanonmap.github.io after "git clone" is done, in which you'd open index.html with your browser. Works for me on firefox …..
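
To refresh that local copy later (assuming the mirror keeps being updated), a pull from inside the clone should be enough:

cd qanonmap.github.io && git pull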

Other than that, you can always visit the page you want to archive and use, for example, firefox's addon "SavePageWE", which produces a single HTML file with all (thumbnail) images included. (You might have to download full images manually then ….)


2b8bb5 No.535869

File: 41dc25b69dc22dd⋯.png (124.62 KB, 648x732, 54:61, rawText.png)

>>513387

In case someone is looking for a text file in ASCII to work with:

pastebin.com/2HrnESSX

has all of Q's confirmed posts in EST (except those which were deleted from GA, e.g. No.[6, 51, 54, 55, 56]).

No manual work involved – pure scripting and scraping from the HTML sources.


45dd29 No.688574

File: f78d54d55602864⋯.jpg (60.84 KB, 750x748, 375:374, Screenshot from redacted.jpg)

Anyone Can Cook: Build Your Own Kitchen Edition Like The Experts Edition

This guide was written with Scientific Linux 7 in mind. Scientific Linux is not an exact copy of Red Hat Enterprise Linux. Scientific Linux is a Linux distribution produced by Fermi National Accelerator Laboratory under contract to the Department of Energy. It is a free and open source operating system based on A Prominent North American Enterprise Linux Vendor and aims to be "as close to the commercial enterprise distribution as we can get it."

Scientific Linux is derived from the free and open source software made available by The Upstream Vendor, but it is not produced, maintained or supported by Red Hat. Scientific Linux has its own build system, compiler options, patchsets, and is a community supported, non-commercial operating system. Scientific Linux does not inherit certifications or evaluations from Red Hat Enterprise Linux.

The United States Government / Department of Defense is the biggest customer of Red Hat. Red Hat was the first Linux company to be valued at over one billion dollars. They do this by selling yearly subscriptions to security and bugfix updates of packages. They literally sell old versions of free open source software. How do you get the government to want to buy old versions of free software? By listening to their wants and needs and by building it in an environment they can trust.

Enterprise Linux 7 uses systemd for init and logging purposes. Systemd logs are binary in nature, making them difficult to modify on a freshly intruded system. Although outside the scope of this post, consider the intention behind policies that mandate logging, auditing and remote loghosts. Logging and auditing take different approaches to collecting data. A logging infrastructure provides a framework for individual programs running on the system to report whatever events are considered interesting. An auditing infrastructure, on the other hand, reports each instance of certain low-level events, such as entry to the setuid system call, regardless of which program caused the event to occur. Successful local or network attacks on systems do not necessarily leave clear evidence of what happened. It is necessary to build a configuration in advance that collects this evidence. As you can see, anyone attacking systemd for its binary logs (EL7 also exports those to text) may be uninformed or giving you bad advice.

No matter which logging software is used, a system should send its logs to a remote loghost. An intruder who has compromised the root account on a machine may delete the log entries which indicate that the system was attacked before they are seen by an administrator. If system logs are to be useful in detecting malicious activities, it is necessary to send them to a remote server.

SELinux is produced by the National Security Agency. It adds DOD-styled Mandatory Access Controls in addition to Linux's Discretionary Access Controls. As a comparison, in addition to requiring permission to know (DAC, clearance) you also need a need to know (MAC, duty). Consider if your web browser were compromised. On a regular operating system it would then have permission to do anything the user running it has, including wreaking havoc. On SELinux computers, a compromised browser would not be able to use the nano or sudo commands even though you can.

This guide will show you how to set up an Enterprise Linux 7 host to a reasonable degree of security. It will walk you through configuring the system and setting up a zfs environment for archiving everything. Tips, suggestions and addendums are welcome. I don't know everything. I personally will be looking into writing some sort of cron task to add to this guide. A Board Owner/Volunteer's input would be welcome in determining how often to run the wget command.

These skills were built up over the years with the intention of playing games through wine with the benefit of snapshotting my save files. As they have tested consistently over the years, now is the time to share them.

ftp:// linux1.fnal.gov/linux/scientific/

https:// www.iad.gov/iad/library/ia-guidance/security-configuration/operating-systems/guide-to-the-secure-configuration-of-red-hat-enterprise.cfm

https:// www.tens.af.mil/


45dd29 No.688588

Install updates, repos and packages for later

Optional: Disable kdump, select DISA-STIG, encrypted partitioning. Do note that in Enterprise Linux 7.5 Beta a separate /home partition or volume is required if using the STIG.

Optional: Delete pre-provided repos and replace with fnal.repo (see annex)

sudo rm /etc/yum.repos.d/repos.repo /etc/yum.repos.d/sl*.repo

sudo nano /etc/yum.repos.d/fnal.repo

Now would be a good time to allow the system to connect to the network and allow it to connect at startup.

sudo yum update -y && sudo reboot

sudo rpm -Uvh https:// dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo rpm -Uvh http:// download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm

sudo yum install kernel-devel kernel-headers dkms yum-plugin-copr git scap-workbench

If you have a password, or wireless keyboard, switch the comments on the nopasswd lines.

sudo nano /etc/sudoers

If using fnal.gov as update server and DISA-STIG at home, then comment out the last two.

sudo nano /etc/yum.conf

If using no password and DISA-STIG, use a live cd to edit the following to allow local login without password.

sudo nano /etc/security/pwquality.conf

PROTIP: If using DISA-STIG at home, edit issue file. See annex for suggestion.

sudo nano /etc/issue

Optional: Create a bridge, useful for virtual computers and/or if you have multiple ethernet ports.

sudo nmcli c add type bridge autoconnect yes con-name br0 ifname br0

sudo nano /etc/sysconfig/network-scripts/ifcfg-en{TAB COMPLETE}

BRIDGE=br0 #Add that to the end

sudo nano /etc/sysconfig/network-scripts/ifcfg-en{other ethernet ports if available}

Recommended: Lock down ssh daemon to accept logins from devices with keys added to your home ~/.ssh/authorized_keys file.

Edit this file on the computer receiving a ssh connection.

sudo nano /etc/ssh/sshd_config

Run this command on the device that will initiate the ssh connection.

ssh-keygen -b 4096

Copy contents of ~/.ssh/id_rsa.pub from initiating device to receiving device ~/.ssh/authorized_keys file.
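
If the initiating device has ssh-copy-id available, that copy step can be done in one command (user/hostname here are just examples):

ssh-copy-id -i ~/.ssh/id_rsa.pub owner@backend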


45dd29 No.688595

Allow yourself to use virtual machines without using root account. Useful if sshing in from another virt-manager (see annex).

sudo nano /etc/polkit-1/rules.d/80-libvirt.rules

Install zfs. Enabling zfs-testing was necessary at the time of writing.

sudo yum install zfs
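
One way to enable that zfs-testing repo beforehand (a sketch; assumes yum-utils provides yum-config-manager, otherwise flip enabled=1 in /etc/yum.repos.d/zfs.repo by hand):

sudo yum install yum-utils

sudo yum-config-manager --enable zfs-testing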

Enable zfs services:

sudo systemctl enable zfs-import-cache zfs-import-scan zfs-import.target zfs-mount zfs-share zfs-zed zfs.target

Suggestions on how to create your delicious zpool.

sudo zpool create bread /dev/loop9

sudo zpool create bread /dev/mapper/luks-blahblah

sudo zpool create bread /dev/vdb

Create zfs partition as desired and take ownership of it.

sudo zfs create bread/qresearch

sudo zfs set compression=on bread/qresearch

sudo zfs set dedup=on bread/qresearch

sudo zfs set snapdir=visible bread/qresearch

sudo chown 1000:1000 /bread/qresearch -R

(Take note of the lack of a trailing slash. This seems to chown the folder and everything in it, versus only the files in it.)
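
To double-check that the properties took effect before filling the dataset:

sudo zfs get compression,dedup,snapdir bread/qresearch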

Recommended: Install and configure sanoid for auto snapshoting.

sudo yum install perl-Config-IniFiles git

cd /opt/

sudo git clone https:// github.com/jimsalterjrs/sanoid

sudo ln /opt/sanoid/sanoid /usr/sbin/

sudo mkdir -p /etc/sanoid

sudo cp /opt/sanoid/sanoid.conf /etc/sanoid/sanoid.conf

sudo cp /opt/sanoid/sanoid.defaults.conf /etc/sanoid/sanoid.defaults.conf

sudo nano /etc/sanoid/sanoid.conf (see annex)

sudo nano /etc/crontab

*/5 * * * * root /usr/sbin/sanoid --cron #Add to bottom
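
Once that cron entry has fired a few times, the snapshots sanoid created can be listed with:

sudo zfs list -t snapshot -r bread/qresearch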

Secure rsync copy from one to another.

rsync -avzhe ssh /stuff/mountable/bread.qcow2 owner@backend:/var/lib/libvirt/images/bread.qcow2 --progress

Enable rc.local service if mounting image file at boot (see annex).

sudo nano /etc/rc.d/rc.local

sudo chmod +x /etc/rc.d/rc.local

sudo systemctl enable rc-local

Optional: Set up samba server. Do note that the ssh server also acts as a file server.

sudo yum install samba

sudo nano /etc/samba/smb.conf

sudo smbpasswd -a owner

sudo setsebool -P samba_enable_home_dirs on

sudo semanage fcontext -a -t samba_share_t "/bread(/.*)?"

sudo restorecon -R /bread

sudo systemctl enable smb

sudo systemctl enable nmb
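
Before starting the services, testparm (ships with samba) will sanity-check the config file:

testparm /etc/samba/smb.conf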

PROTIP: Use virt-manager to create qcow2 files. Use gnome-disks to mount image files temporarily.

PROTIP: Gnome-disks can also be used to create encrypted containers, and to manage an entry's fstab and crypttab setup

PROTIP: Use scap workbench to analyze your system for DISA-STIG compliance. Deviate as appropriate. Review that and NSA Guide to RHEL 5 and you will quickly become more knowledgeable than the experts who pay thousands of dollars for courses and certifications.

PROTIP: Use a wired keyboard for typing luks passwords on a workstation. On a laptop, disconnect the power cord before typing in luks password.

PROTIP: Seriously if using DISA-STIG edit your issue file. IT IS NOT GOOD TO PRETEND TO BE THE GOVERNMENT WHILE VISITING THIS PART OF THE INTERNET!

PROTIP: Never trust someone who tells you to disable selinux. That’s no good.


45dd29 No.688609

/etc/yum.repos.d/fnal.repo

[sl]

name=Scientific Linux $slreleasever - $basearch

baseurl=ftp://linux1.fnal.gov/linux/scientific/$slreleasever/$basearch/os/

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-security]

name=Scientific Linux $slreleasever - $basearch - security updates

baseurl=ftp://linux1.fnal.gov/linux/scientific/$slreleasever/$basearch/updates/security/

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-fastbugs]

name=Scientific Linux $slreleasever - $basearch - bugfix updates

baseurl=ftp://linux1.fnal.gov/linux/scientific/$slreleasever/$basearch/updates/fastbugs/

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-rolling]

name=Scientific Linux 7rolling (pre-release) - $basearch

baseurl=ftp://linux1.fnal.gov/linux/scientific/7rolling/$basearch/os/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

metadata_expire=3600

[sl-rolling-security]

name=Scientific Linux 7rolling (pre-release) - $basearch - security updates

baseurl=ftp://linux1.fnal.gov/linux/scientific/7rolling/$basearch/updates/security/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

metadata_expire=3600

[sl-rolling-fastbugs]

name=Scientific Linux 7rolling (pre-release) - $basearch - bugfix updates

baseurl=ftp://linux1.fnal.gov/linux/scientific/7rolling/$basearch/updates/fastbugs/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

metadata_expire=3600

[sl-testing]

name=Scientific Linux Testing - $basearch

baseurl=ftp://linux1.fnal.gov/linux/scientific/7rolling/testing/$basearch/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

metadata_expire=60

[sl-testing-source]

name=Scientific Linux Testing - Source

baseurl=ftp://linux1.fnal.gov/linux/scientific/7rolling/testing/SRPMS/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

metadata_expire=60

[sl-extras]

name=Scientific Linux Extras - $basearch

baseurl=ftp://linux1.fnal.gov/linux/scientific/7x/external_products/extras/$basearch/

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-extras-debuginfo]

name=Scientific Linux Extras - debuginfo

baseurl=ftp://linux1.fnal.gov/linux/scientific/7x/external_products/extras/$basearch/debuginfo/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-extras-source]

name=Scientific Linux Extras - source

baseurl=ftp://linux1.fnal.gov/linux/scientific/7x/external_products/extras/SRPMS/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-source]

name=Scientific Linux $slreleasever - Source

baseurl=ftp://linux1.fnal.gov/linux/scientific/$slreleasever/SRPMS/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta file:///etc/pki/rpm-gpg/release file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-other file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[sl-debuginfo]

name=Scientific Linux Debuginfo

baseurl=ftp://linux1.fnal.gov/linux/scientific/$slreleasever/archive/debuginfo/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[repos]

name=Scientific Linux repos - $basearch

baseurl=ftp://linux1.fnal.gov/linux/scientific/7x/repos/$basearch/

enabled=1

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7

[repos-source]

name=Scientific Linux repos - source

baseurl=ftp://linux1.fnal.gov/linux/scientific/7x/repos/SRPMS/

enabled=0

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl7


45dd29 No.688619

/etc/issue

You are not accessing a U.S. Government (USG) Information System (IS) that is

provided for USG-authorized use only. By using this IS (which includes any

device attached to this IS), you consent to the following conditions:

-The owner routinely intercepts and monitors communications on this IS for

purposes including, but not limited to, penetration testing, COMSEC monitoring,

network operations and defense, personnel misconduct (PM), law enforcement

(LE), and counterintelligence (CI) investigations.

-At any time, the owner may inspect and seize data stored on this IS.

-Communications using, or data stored on, this IS are not private, are subject

to routine monitoring, interception, and search, and may be disclosed or used

for any owner authorized purpose.

-This IS includes security measures (e.g., authentication and access controls)

to protect owner interests – not for your personal benefit or privacy.

-Notwithstanding the above, using this IS does not constitute consent to PM, LE

or CI investigative searching or monitoring of the content of privileged

communications, or work product, related to personal representation or services

by attorneys, psychotherapists, or clergy, and their assistants. Such

communications and work product are private and confidential. See User

Agreement for details.


45dd29 No.688628

/etc/polkit-1/rules.d/80-libvirt.rules

polkit.addRule(function(action, subject) {

if (action.id == "org.libvirt.unix.manage" &&

subject.isInGroup("wheel")) {

return polkit.Result.YES;

}

});

/etc/sanoid/sanoid.conf

####################

# sanoid.conf file #

####################

[bread/qresearch]

use_template = production

#############################

# templates below this line #

#############################

[template_production]

# store 9000 hourly snapshots (roughly a year's worth)

hourly = 9000

# store 3650 daily snaps (roughly ten years)

daily = 3650

# store 120 monthly snaps (ten years)

monthly = 120

# store 10 yearly snaps (remove manually if too large)

yearly = 10

# create new snapshots

autosnap = yes

# clean old snapshot

autoprune = yes

/etc/rc.d/rc.local

#!/bin/bash

# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES

#

# It is highly advisable to create own systemd services or udev rules

# to run scripts during boot instead of using this file.

#

# In contrast to previous versions due to parallel execution during boot

# this script will NOT be run after all other services.

#

# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure

# that this script will be executed during boot.

touch /var/lock/subsys/local

losetup /dev/loop9 /stuff/mountable/bread.qcow2

zpool import bread

/etc/samba/smb.conf

[global]

workgroup = workgroup

netbios name = researchserver

security = user

server signing = mandatory

client signing = mandatory

[bread]

path = /bread

browseable = yes

read only = no

valid users = owner someone anyone

force user = owner


45dd29 No.692123

Posting because >>493884 has gone missing.

wget -nH -k -p -np -H -m -e robots=off -I stylesheets,static,js,file_store,file_dl,qresearch,res -U "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4" --random-wait --no-check-certificate http:// 8ch.net/qresearch

Real patriots archive offline.


2b8bb5 No.707829

>>692123

The above command creates an archive (or snapshot) of the entire QR board. That's a lot of data (images etc.); >>516907, amounting to around 22 GB, was posted during #633. For a quick reference, here are the options involved in the above command:

-nH → don't create host-folder (i.e. folder 8ch.net)

-k → convert links for local viewing

-p → download all files to properly view page (i.e. page-requisites)

-np → do not ascend to parent directory during download (i.e. folder 8ch.net)

-H → span across hosts when recursive

-m → mirror page [turns on recursive (-r), timestamping (-N), sets recurse-level to inf]

-e → execute command (i.e. robots=off)

-I → comma-separated list of directories to (I)nclude

-U → user-agent

Subsequent options are self-explanatory and should be invoked using double "-".


2b8bb5 No.707925

>>707829

In case you do not want to archive the entire QR board and, for example, only want an archive of (some) Research Generals, you can omit the mirror option (-m), e.g.:

wget -nH -k -p -H -np -e robots=off -I stylesheets,static,js,file_store,file_dl,qresearch,res -U "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4" --random-wait https:// 8ch.net/qresearch/res/704471.html

which would download only #873 and its requisites. Note that large images will not be downloaded, only their thumbnail versions.

Thumbnails (on the 8ch server) usually live in folder: media.8ch.net/file_store/thumb/, while large images are in media.8ch.net/file_store/. Large images and their thumbs have identical filenames, which should correspond to the original (i.e. large) image's sha256sum.
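
Since large images and thumbs share filenames, the full-size originals for thumbnails you already downloaded can be fetched afterwards. A sketch (run from the folder containing file_store/; remove the space in the URL as usual; -nc skips files that are already there):

for t in file_store/thumb/*; do wget -nc "https:// media.8ch.net/file_store/$(basename "${t}")"; done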


2b8bb5 No.777427

>>535869

An updated pastebin (times in EST) with posts up to & including March 23rd:

pastebin.com/zkJGjtCD


2b8bb5 No.817813

>>777427

Another update of Q's posts, including a correction that was necessary for all posts after March 11th (the day EST switched to EDT; times in the pastebin are Eastern):

pastebin.com/HrFMnRVf


d8878e No.1000640

>>777338



