▶ No.983922>>999838 >>1000061 >>1007034 >>1009913 >>1021392
Last thread:
>>915966 (https://archive.is/rQakE)
Updates
0.4.17
>ipfs 0.4.17 is a quick release to fix a major performance regression in bitswap (mostly affecting go-ipfs → js-ipfs transfers). However, while motivated by this fix, this release contains a few other goodies that will excite some users.
>The headline feature in this release is urlstore support. Urlstore is a generalization of the filestore backend that can fetch file blocks from remote URLs on-demand instead of storing them in the local datastore.
>Additionally, we've added support for extracting inline blocks from CIDs (blocks inlined into CIDs using the identity hash function). However, go-ipfs won't yet create such CIDs so you're unlikely to see any in the wild.
Features
>URLStore (ipfs/go-ipfs#4896)
>Add trickle-dag support to the urlstore (ipfs/go-ipfs#5245).
>Allow specifying how the data field in the object get is encoded (ipfs/go-ipfs#5139)
>Add a -U flag to files ls to disable sorting (ipfs/go-ipfs#5219)
>Add an efficient --size-only flag to the repo stat (ipfs/go-ipfs#5010)
>Inline blocks in CIDs (ipfs/go-ipfs#5117)
Changes/Fixes
>Make ipfs files ls -l correctly report the hash and size of files (ipfs/go-ipfs#5045)
>Fix sorting of files ls (ipfs/go-ipfs#5219)
>Improve prefetching in ipfs cat and related commands (ipfs/go-ipfs#5162)
>Better error message when ipfs cp fails (ipfs/go-ipfs#5218)
>Don't wait for the peer to close its end of a bitswap stream before considering the block "sent" (ipfs/go-ipfs#5258)
>Fix resolving links in sharded directories via the gateway (ipfs/go-ipfs#5271)
>Fix building when there's a space in the current directory (ipfs/go-ipfs#5261)
tl;dr for Beginners
>decentralized P2P network
>like torrenting, but instead of getting a .torrent file or magnet link that shares a pre-set group of files, you get a hash of the files which is searched for in the network and served automatically
>you can add files to the entire network with one line in the CLI or a drag-and-drop into the web interface
>HTTP gateways let you download any hash through your browser without running IPFS
>can stream video files in mpv or VLC (though it's not recommended unless the file has a lot of seeds)
How it Works
When you add a file, it is cryptographically hashed and a Merkle tree is created. These hashes are announced by the IPFS client to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and the nodes set up peer connections automatically. If two users share the same file, both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets, which require both seeders to use the same torrent.
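The chunk-and-hash idea can be sketched with plain coreutils (this mimics the scheme only, not IPFS's actual block format or CID encoding; the filenames here are made up):

```shell
# Split a file into 256 KiB pieces, hash each leaf, then hash the list
# of leaf hashes to get a single root identifier for the whole file.
printf 'example payload\n' > demo.bin
split -b 262144 demo.bin chunk_
sha256sum chunk_* | awk '{print $1}' > leaves.txt
sha256sum leaves.txt | awk '{print $1}'   # the root "address" of demo.bin
```

Any peer holding the same bytes produces the same root, which is why seeding between strangers works automatically.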
FAQ
>Is it safe?
It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.
>Is it fast?
Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.
>Is it a meme?
You be the judge.
It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration), both functional and in active development right now; it has a bunch of side projects that build on it; and it splits important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.
On the other hand, it's still alpha software with a small userbase and has poor network performance.
Websites of interest
https://ipfs.io/ipfs/
Official IPFS HTTP gateway. Slap this in front of a hash and it will download the file from the network. Be warned that this gateway is slower than using the client and honors DMCA takedowns.
glob.me --- dead
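For anyone who wants to try the gateway from a shell, it's just an HTTP fetch (the hash below is a placeholder; substitute a real one):

```shell
# Through the public gateway, no IPFS client needed:
curl -sL "https://ipfs.io/ipfs/<hash>" -o file.out

# Or through a local daemon's gateway (default port 8080):
curl -sL "http://127.0.0.1:8080/ipfs/<hash>" -o file.out
```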
▶ No.983928>>983930
>Thread #3
Fuck you faggot OP
Newfags shouldn't create threads.
▶ No.983930>>983932 >>984194
>>983928
t. actual retard until your third line
>>983929
Fork when?
I'll delete this thread if negative pressure ramps up. >>>/ipfs/
▶ No.983932>>983973 >>984194
>>983930
>I'll delete this thread
Ooops I'm unable to delete the whole thread
Again:
>>>/ipfs/
▶ No.983973>>984179 >>990981
>>983932
Fuck off. Either you stick with IPFS, or we turn this car around for Beaker/Dat
https://beakerbrowser.com/ https://datproject.org
▶ No.984128>>984222 >>1004418 >>1004647
Is there a GUI for Windows x86? I still wanna run my old legacy programs on my IPFS box.
▶ No.984179>>984227
>>983973
>implying both of them do not have CoCks
What about making your own alternative?
▶ No.984194>>984228 >>984232
>>983932
>>983930
IPFS threads are fine. I was calling you out for being a filthy newfag because you continued the gay numbering of threads started by the last faggot cuckchan OP. We've had perpetual IPFS threads for over 4 years. We don't number threads.
▶ No.984222>>987060
>>984128
Why the hell is your old ass windows install not airgapped?
▶ No.984227
>>984179
Nobody will listen to 8chan... unless it is QAnon, in that case fake and gay
I would rather hope that we can fork all the source code and integrate it into our stuff
▶ No.984228>>985209 >>1041943
>>984194
Numbering threads is a tradition (see Gamergate and /pol/ threads about Happenings)
▶ No.984232>>985209
>>984194
>newfag
NOT AN ARGUMENT
>the last faggot cuckchan OP
Then invite him back here and tell him why
>We don't number threads.
The next threads will no longer be numbered, hopefully.
▶ No.985209>>985408
▶ No.985408
>>985209
<reading comprehension
Just ignore the autistic shitposting this thread started with.
▶ No.987060>>987095
>>984222
I'm a poor fag with limited hardware. My rig is so old even just browsing the chans can result in freezing. It even has problems on threads with >600 posts or >300 files, to say nothing of anything more resource-intensive, like even vidya from the 90s, which often results in full-blown overheating in <30 mins.
▶ No.987095>>987341
>>987060
Why don't you use GNU/Linux or BSD on such a weak system?
▶ No.987341>>987348 >>987537 >>993685
>>987095
Partially because, to be quite honest, I never imagined tech would degrade this far, that such an unacceptable level of quality would be condoned for mass production, yet it's all I've got; and partially because I have some things I gotta use Windows for. I actually have quite a lot of stuff that requires old programs, yet no one ever made things to handle these applications, and nowadays all the code is so shit no one is even thinking on that level of power use anyway, so I'm left mostly living in an old software bubble, as it's the most technically functional and productive option in a sea of nearly unusable products and services with shittastic interfaces and fewer and fewer hotkeys.
Really though, the main reason I stay is one program, Everything Search, which lets me index my drive in realtime. So useful is it that not only do I keep it on a global hotkey I use hundreds of times a day, it's functionally altered my whole style of using the computer. It also functions for me as a launch bar, the entirety of file search, and a lot of file management too. Being able to access anything on my computer in <2 seconds, without ever having to do something as archaic as navigating again, is pretty hard to beat.
One of its key operating principles, which allows for such speed, is a table unique to NTFS, and for that reason I have found nothing that works on a competitive level on Linux. Maybe I missed it, but I've looked several times over the years, and I have used Linux as a main, even for years; but in the end I was spending so much time on my dual-boot Windows drive that, on a machine with such inane limits, here I am.
▶ No.987348>>987351
>>987341
You could just install Gentoo with an NTFS drive and then run the program in Wine.
▶ No.987351>>987362
>>987348
For whatever reason it's not in the Wine DB.
▶ No.987362
>>987351
That doesn't stop you from doing it, though. Just run it in Wine with an NTFS drive and see if it works. Considering how simple and old it is, you should be able to run it just fine.
▶ No.987537
▶ No.987731>>987887 >>987904
it's called IPFS because even FTP over moon-bounce amateur radio outperforms it
▶ No.987887>>987904
Bump because the (((distributed web))) is getting BTFO
>>987731
Holy shit, fucking savage!
▶ No.987904
>>987887
>>987731
> Trusting (((the cloud)))
Are you scared of P2P, Moshe? What, Torrents destroyed your business plan?
▶ No.988705
Look at this soyboy and laugh
▶ No.988809>>989395 >>989448
To all the bluepilled autists who still foolishly have hope in decentralization, read this thread
>>988725
▶ No.988844
Seems to be a lot of shilling in this thread. Let's clean it up with some good news!
https://www.youtube.com/watch?v=xzYEjHER6x4
▶ No.988930
The increasing pushback only tells me that this thing is becoming more and more sane as an alternative to paying Shlomo Shecklestein to host your data for you.
>No goy stop hosting things for free! My cousin Marty Rothenberg will only charge you $20 per GB, that's only as much as half a cup of coffee for *100% guaranteed uptime!
*up to 100%; terms and conditions may apply; we reserve the right to remove content at any time. Content hosted in our BaseMentTech™ datacenters is only allowed to be accessed in select geographical regions
▶ No.988994>>989986
▶ No.989350>>989986
▶ No.989395
>>988809
Guess we'll just do nothing then.
▶ No.989448>>989985 >>989997
▶ No.989985>>989989 >>989997
▶ No.989986>>990026
>>988994
Whitepill
>>989350
glow in the darkpill
▶ No.989989>>989997
▶ No.990026>>990085 >>990134
>>989986
FUCKING THIS
(((IPFS))) BTFO. We're literally stuck with Jewgle forever.
▶ No.990085
▶ No.990114
▶ No.990134>>990253
>>990026
Google bends to the will of IPFS.
Now it is us who glow in the dark.
▶ No.990253
▶ No.990449
▶ No.990981
Is anyone building an IPFS Nyaa?
>>983973
Dat looks good, too.
▶ No.993682>>993848
Is this real or another sketchy CIA trap like that stream alternative shilled by 4chan's /g/ a couple months ago?
▶ No.993685
>>987341
Have you tried https://duckduckgo.com/?q="Everything+Search"+linux
▶ No.993848>>993997
>>993682
> sketchy CIA trap like that stream alternative
never heard of this. give link?
either way this is a real open source project, many companies are starting to adopt it as well
▶ No.993887>>995519
IPFS v0.4.18 Is Out
This is a really big update.
>experimental support for the QUIC protocol. QUIC is a new UDP-based network transport that solves many of the long standing issues with TCP
>now supports the gossipsub routing algorithm (significantly more efficient than the current floodsub routing algorithm) and message signing
>the new `ipfs cid` command allows users to both inspect CIDs and convert them between various formats and versions
>the refactored `ipfs p2p` command allows forwarding TCP streams through two IPFS nodes from one host to another. It's ssh -L but for IPFS
>there is now a new flag for `ipfs name resolve` - `--stream`. When the command is invoked with the flag set, it will start returning results as soon as they are discovered in the DHT and other routing mechanisms
>Finally, in the previous release, we added support for extracting blocks inlined into CIDs. In this release, we've added support for creating these CIDs. You can now run ipfs add with the --inline flag to inline blocks less than or equal to 32 bytes in length into a CID, instead of writing an actual block
>you can now publish and resolve paths with namespaces other than /ipns and /ipfs through IPNS. Critically, IPNS can now be used with IPLD paths (paths starting with /ipld)
>this release includes the shiny updated webui
>this release includes some significant performance improvements, both in terms of resource utilization and speed
>In this release, we've (a) fixed a slow memory leak in libp2p and (b) significantly reduced the allocation load. Together, these should improve both memory and CPU usage
>we now store CIDs encoded as strings, instead of decoded in structs (behind pointers). In addition to being more compact, our Cid type is now a valid map key, so we no longer have to encode CIDs every time we want to use them in a map/set
>bitswap will now pack multiple small blocks into a single message
>this release saw yet another commands-library refactor, work towards the CoreAPI, and the first step towards reliable base32 CID support
>CoreAPI is a new way to interact with IPFS from Go. While it's still not final, most things you can do via the CLI or HTTP interfaces, can now be done through the new API
>from now on paths prefixed with /ipld/ will always use IPLD link traversal and /ipfs/ will use unixfs path resolver, which takes things like sharding into account
>we intend to switch IPFS gateway links from https://ipfs.io/ipfs/CID to https://CID.ipfs.dweb.link
>this way, the CID will be a part of the "origin" so each IPFS website will get a separate security origin
https://github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md
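A couple of the new commands as a sketch, assuming go-ipfs >= 0.4.18 and a running daemon (the hashes are placeholders):

```shell
# Re-encode a v0 CID (Qm...) as a case-insensitive base32 CIDv1:
ipfs cid base32 <cid>

# Stream IPNS resolution results as soon as they're found in the DHT:
ipfs name resolve --stream /ipns/<peer-id>
```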
▶ No.993997
>>993848
Steemit/Bitchute is not CIA, just shekel grubbing.
Neither is GitGud.tv, just GaymerGayte
▶ No.995519
>>993887
>QUIC
>ipfs p2p
>ipfs name resolve --stream
>performance patches
>all this browser standards stuff
I like it.
▶ No.998746>>998765 >>1000218
Does /g/ even use this? How safe is it? Are files encrypted and anonymous?
▶ No.998765>>1021960
>>998746
>/g/
I dunno, why don't you ask them?
>anonymous
If they add I2P support, sure. In the meantime tracking peers is as easy as bittorrent.
▶ No.999838>>1000049 >>1000218
>>983922 (OP)
>(though it's not recommended unless the file has a lot of seeds)
So same as torrents then? popcorn time for ipfs when?
>Is it a meme?
What are some use-case scenarios besides replacing torrents? hosting websites? storage? can compete against datacenters in terms of cost?
▶ No.1000049>>1000050 >>1000065 >>1000067
>>999838
I don't think it is fit for any useful storage except archiving, as (as far as I know) there exists no deletion or edit function, and the system tries to bloat itself forever.
I'd rather rent anonymous decentralised storage for money, with the bonus that I have a guarantee that the files will not just be lost, and that I can edit/remove my files. Also, there is literally no incentive for nodes to participate in IPFS, so you can expect most nodes to actually come from the NSA for surveillance purposes.
▶ No.1000050
>>1000049
This is not to shill any cryptocurrencies that rent storage, because they are mostly shit, from a technological point of view.
▶ No.1000061>>1000218
>>983922 (OP)
I just installed IPFS. I have not downloaded anything and it has used 700 megabytes in less than 25 minutes. This is retarded.
▶ No.1000065>>1000067
>>1000049
>I don't think torrents are fit for any useful storage except archiving, as (as far as I know) there exists no deletion or edit function, and the system tries to bloat itself forever. I'd rather rent anonymous decentralised storage for money, with the bonus that I have a guarantee that the files will not just be lost, and that I can edit/remove my files. Also, there is literally no incentive for peers to participate in bittorrent, so you can expect most peers to actually come from the NSA for surveillance purposes.
▶ No.1000067
>>1000049
>>1000065
>edit function
Use IPNS, faggot
>deletion
If you upload files to a distributed storage system and expect some way to magically delete the file from all the seeders' computers and prevent them from reuploading the file with the same hash, congratulations: you're retarded.
▶ No.1000218>>1000219 >>1000241 >>1000249 >>1004647 >>1048455
>>1000061
Are you talking about 700 megabytes of bandwidth?
How did you measure this?
>>998746
There used to be IPFS threads but they can't exist there anymore as a result of the spam filter changing. The hashes almost always trip the filter.
>>999838
>What are some use-case scenarios besides replacing torrents? hosting websites? storage? can compete against datacenters in terms of cost?
As the name implies, it is much like a filesystem. It's literally just content-addressable data, nodes, records, and probably other things, all built on the same standards and interfaces, with networking taken into consideration.
The ability to reference data globally, with read and write access, is very generic, so you could do a lot with that.
Hosting a static website is as simple as hashing the content and connecting to the network, that's 2 commands and anyone can see your content using their own node or any gateway.
You don't have to think about a domain, load balancing, implementing your own protocols, formats, a reference system, NAT punching, etc.
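The "2 commands" might look like this (the directory name is made up; the root hash is whatever `ipfs add` prints):

```shell
ipfs add -r ./public    # hash the site's content, prints a root hash
ipfs daemon             # connect to the network and start serving
# anyone can now view it via their own node or any gateway:
#   https://ipfs.io/ipfs/<root-hash>/
```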
The way they handle dynamic data and peers is built on the same principles. A peer ID is just a hash that doesn't change if the peer suddenly changes networks, so you don't have to deal with NAT, implement anything new, or keep track of the latest transport protocol in use.
It's basically whatever HTTP stack you like, but instead of URLs and servers, it's hashes and P2P.
That's all you need to know, you don't need to know the entire stack, and better yet you don't need to know the damn user environment, like can they connect to domain xyz directly, did a file change paths, etc.
Another way to think about it is probably just simply, what if instead of your OS asking your hard disk for data, you also asked the network for it, and could make reasonable guarantees that it's the data you wanted, whether it came from the disk, or someone else.
From a user perspective I think it's really convenient to have a worldwide-reachable chunk of arbitrary data, in one command, that progressively gets more reliable as time goes on and development continues to add more efficient traversal, storage, etc., all transparent to users and developers.
Stagnation is prevented by using the same layering scheme as IP. Components can be added and deprecated without disrupting the whole system.
IPFS as a project is basically just saying "we should do it this way, because we can" and gluing everything together so it works that way.
You should be able to say this is the address of my data, and through some means that I don't even know, my packets will get to your machine if you request them. And that's not really an insane or improbable goal. A lot of these concepts are ancient, and already tried, but nobody has really tied them all together like this.
>I don't think it is fit for any useful storage except archiving, as (as far as I know) there exists no deletion or edit function, and the system tries to bloat itself forever.
In what way are you talking about? Locally or globally?
Locally you just initiate garbage collection and it deletes the data. Edit is just adding the changed data and deleting any orphaned children. Like diffs in git, an edit should only consist of the changed bytes if the file is chunked; it's not like every commit duplicates the entire file that was changed.
Globally, there isn't a delete; something becomes unavailable when everyone deletes it locally or is disconnected from the rest of the swarm. I personally see this potential permanence as a benefit.
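Concretely, the local-side operations are (hash and filename are placeholders):

```shell
# "Delete": drop the pin, then garbage-collect unreferenced blocks
ipfs pin rm <root-hash>
ipfs repo gc

# "Edit": add the new version; unchanged chunks are deduplicated
# against the old blocks, so only the changed bytes cost extra space
ipfs add edited-file.bin
```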
>I'd rather rent anonymous decentralised storage for money, with the bonus that I have a guarantee that the files will not just be lost
Literally their sister project.
https://filecoin.io
>there is litterally no incentive for nodes to participate in IPFS, so you can expect most nodes to actually come from the NSA for surveillance purposes
That doesn't make much sense to me. What would they surveil, unless they were hosting content for people on nodes determined to be the best, which probably means geographically closest and/or fastest?
At most they could determine that some peers downloaded hashes X,Y,Z from their nodes.
The NSA would likely have to act as a CDN for whatever they want to surveil, and be the best CDN on top of that, to be chosen by clients.
▶ No.1000219>>1048455
>>1000218
File limit.
I wanted to post the rabin chunker, but may as well post these too.
▶ No.1000241>>1000460
>>1000218
Filecoin vs Stroj vs Sia vs Maidsafe
▶ No.1000249>>1000460
>>1000218
>Are you talking about 700 megabytes of bandwidth?
Yes. And it is still going all these hours later. Not as much but still a lot. High idle bandwidth usage has been listed as an issue for years now and it is still shit.
▶ No.1000460>>1000536
>>1000241
I'm not familiar with them all. I remember being interested in Maidsafe, but at the time, nothing was released. A lot of the Filecoin information was already published when I started reading about it, I forget if this was around the same time or later.
Maidsafe would have the name advantage if they changed it to maido-safe though.
Can you tell me about the others?
In any case, I like the concept, regardless of implementation.
Being able to utilize my free space and bandwidth until I need it is pretty appealing to me. Not even for profit per se; a token redeemable for, at the very least, guaranteed storage is appealing on its own.
Everyone should be able to participate in this market, and more importantly, everyone should have access to store and retrieve data reliably.
Having autonomous consensus mechanisms and mathematical proofs managing all of this is what makes it so reliable, especially when coupled with all the libp2p stuff from IPFS.
On one side you have a system for connecting peers/nodes together through any means, and on the other you have a set of functions around verifiable distributed data storage.
I like it.
>>1000249
It's probably the DHT; I think there are issues with it right now. In one of the previous threads someone was talking about a new non-Kademlia approach, but I forget if they were just mentioning it or if it's something they plan to implement.
https://en.wikipedia.org/wiki/Coral_Content_Distribution_Network
Until it's fixed you could try
https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#client-mode-dht-routing
I think this makes it so you only make DHT requests.
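If the linked doc still matches, it's just a daemon flag (experimental, so the name could change):

```shell
# DHT client mode: issue queries but don't answer other peers',
# which should cut the idle chatter considerably
ipfs daemon --routing=dhtclient
```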
I wonder what specifically is causing it to be so high compared to other networks.
▶ No.1000536>>1003003
>>1000460
>Until it's fixed you could try
Well anon, knowing IPFS it will be a decade until it's ready.
▶ No.1003003
>>1000536
Better late than never imo. As someone who has been interested in these concepts for as long as I've been on the internet, I'm glad to finally see someone producing, even if it's slow.
For years we've had nothing but speculations and white papers without implementations. Multiple projects have died without shipping anything usable. I can use IPFS today and it's been improving regularly over time.
Being recognized by Mozilla, Google, etc. and being supported in things like Firefox, Chrome, Brave, etc. make me believe that this will actually still be around in a decade.
It's not like we have any alternatives anyway. I don't trust any of the proprietary BitTorrent Inc. projects and I haven't seen another project like this, that isn't strictly for experimentation, unsupported vaporware, or practically useless.
We have Zeronet which people point out the flaws of every thread.
Or things like "dat", which say they're decentralized but then rely on you connecting to a custom DNS server that they host for content resolution, and on custom browsers like Beaker.
I still need to look into urbit but that may serve a different purpose. If anyone wants to give me the quick rundown I'd like that.
▶ No.1004343>>1004350 >>1004372
Don't bother. IPFS has been considered useless to those who truly know what is actually going to happen.
Read >>997341 for example. More and more people here are taking the blackpill and accepting that nothing we do will make us able to escape the eternal kikery.
Don't bother fighting back anymore. You will either take it up the ass like the rest of us, or take your own life. We lost. We're done forever.
▶ No.1004350
>>1004343
>You will either take it up the ass like the rest of us, or take your own life. We lost. We're done forever.
>please take it up the ass like me
Gives a whole new meaning to asses and elbows.
▶ No.1004372
>>1004343
don't listen to the blackpill jew, the future is not yet lost
only the people themselves can bring it back now
do whatever you can to act and make the world a better place!
▶ No.1004418>>1004423
>>984128
By default, there is a web gui at http://127.0.0.1:5001/webui, where 5001 is your ipfs api port. Note that localhost is hardcoded into the webui, so if you're trying to use the webui remotely I'd recommend ssh tunneling ports 5001 and 8080.
Also please move your old windows boxes as far away from the internet as possible
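The tunnel would look something like this (`user@host` being your own remote box running the daemon):

```shell
# Forward the API (5001) and gateway (8080) ports over ssh, then open
# http://127.0.0.1:5001/webui in a local browser
ssh -N -L 5001:127.0.0.1:5001 -L 8080:127.0.0.1:8080 user@host
```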
▶ No.1004420>>1004430
Could we replace Tumblr and all those other lame blogging platforms with IPNS sites and RSS/atom feeds?
▶ No.1004423
>>1004418
Also, since there's no documentation for this, I'd like to point out a peculiarity in file chunking. The default chunker (size-262144) tries to split the file into pieces <= 256 KiB in size. However, it also limits the number of links an object can contain to 174. I think this was called MaxChildrenPerNode at some point, but I can't find any recent references to that name.
This means that if you try to add a file larger than 174 × 256 KiB, you will end up with a nested structure of IPFS objects pointing to other IPFS objects. IPFS seems to try to fill each level of the tree before adding new levels.
As an example, when I tried adding a 2GB file, I ended up with an object containing 43 links. The first 42 links each pointed to an object pointing to 174 objects of size 262144 (plus 14 bytes of protobuf wrapper), while the last object pointed to 81 "full" data blocks and 1 "partial" data block.
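Sanity-checking the fanout math (assuming an exactly 2 GiB input, so the counts come out slightly different from the roughly-2GB file above):

```shell
chunk=262144                                # default chunk size, 256 KiB
fanout=174                                  # max links per object
echo $((chunk * fanout))                    # 45613056 bytes (~43.5 MiB),
                                            # the largest single-level file
size=$((2 * 1024 * 1024 * 1024))            # an exactly 2 GiB file
leaves=$(( (size + chunk - 1) / chunk ))
echo "$leaves"                              # 8192 leaf blocks
echo $(( (leaves + fanout - 1) / fanout ))  # 48 second-level objects
```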
▶ No.1004430>>1004446 >>1004465 >>1004660
>>1004420
Yes, but the entire reason people use lame blogging sites is because they're too lazy to run their own websites. The technical know-how required to set up a /comfy/ rss-ipfs system is astronomically greater than "log into _, then use the wysiwyg editor."
Honestly, my main hope is that ipfs displaces torrents as an easy means of redistributing content. The whole "I changed a letter in one of my filenames so now there's two separate swarms of seeders" retardation needs to stop. All the different chunking options give me pause though, as they have the same effect.
▶ No.1004446>>1004465 >>1004660
>>1004430
With some normalfag-friendly application or web UI, you could boil it down to "log in with your key, then write a post/attach files and upload." Popular content would load faster as more people view and reblog it, and you wouldn't be limited to a centralized, proprietary platform's whims.
The big disadvantage to this over Tumblr and other blogging platforms is the lack of a search function or asks. This does mean pedophiles couldn't flood service-wide tags or search terms with CP, but it would make finding blogs outside word of mouth or a future IPFS search engine harder. It would also eat up precious data for phoneniggers and other burgers stuck with data caps.
▶ No.1004465>>1004476 >>1004647
>>1004430
>ipfs displaces torrents as an easy means of redistributing content.
Why?
>I changed a letter in one of my filenames so now there's two separate swarms of seeders" retardation needs to stop
And how is IPFS going to fix this?
On BitTorrent I just update my RSS feed and let my followers update. The RSS article is OStatus-compliant, so it's self-contained, with multiple checksums and a signature.
*nix: do simple things well.
>>1004446
You just described Scuttlebutt.
▶ No.1004476>>1004529
>>1004465
>And how is IPFS going to fix this?
Read the OP, faggot. IPFS really, really hates file duplication and changing filenames doesn't change file hashes.
>Scuttlebutt
I'll look into this.
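The rename point is easy to demo with plain sha256, since IPFS hashes the bytes, not the name (filenames made up):

```shell
# Same bytes under two different names hash identically, so both copies
# would get the same IPFS hash and end up in the same swarm
printf 'same payload' > release.iso
printf 'same payload' > release_FINAL_v2.iso
sha256sum release.iso release_FINAL_v2.iso   # identical digests
```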
▶ No.1004529>>1004550 >>1004647 >>1004673 >>1004680
>>1004476
I read IPFS Golang, and JavaShit. It's kike software.
They have no solutions, only begging implementations.
I'm not sure how ANSI C code in libtorrent can beat Google Spysoft.
▶ No.1004550
▶ No.1004647>>1004673 >>1004680 >>1004807
>>1004465
>and let my followers update
Swarm deduplication should be automated in the standard by default, not left up to clients to implement (or not), like in the case of Bittorrent.
Going through the trouble to automate 90% of the work seems like a waste of time when you can just have it 100% automated in the majority of the network rather than the minority.
There's no circumstance where it makes sense not to do this. If you're trying to get a hash, why care where it comes from as long as it's valid?
If you're publishing data, the intent is already to share the data with random peers.
>>984128
>a gui
https://github.com/ipfs/go-ipfs/issues/5003
Like this or do you mean something else?
>>1004529
Why not separate the specifications from the implementations?
There's multiple implementations on github that are written in C. You could probably re-use a lot of existing libraries because of how they separate systems >>1000218
This is like people claiming i2p is somehow bad because the reference implementation is in Java, despite a functioning C++ variant also existing.
You'd be better off spending your time writing the implementations you want rather than complaining about the ones you dislike. The specifications are there. The designs are already done and you have multiple reference implementations.
You're not a nodev are you?
▶ No.1004660
>>1004446
>With some normalfag-friendly application or web UI, you could boil it down to "log in with your key, then write a post/attach files and upload."
This looks like it might be that
https://github.com/Peergos/Peergos
>>1004430
>All the different chunking options give me pause though, as they have the same effect.
Thankfully the default is a fixed size and not "auto" like in most torrent clients, so it shouldn't be an issue unless people intentionally change the chunking method when adding files.
▶ No.1004673>>1004680 >>1004807
▶ No.1004674>>1004680
▶ No.1004680>>1004807
>>1004529
After seeing >>1004674
I take back what I said here >>1004647
>implementing something in Python when ANY of the other implementations exist
What I should have said was, we shouldn't diverge into language wars. It's not really critical to the concepts and can change at any time if people find enough merit in the concept, to warrant the effort of implementing it.
>>1004673
No Jai, no buy. Obvious trash.
▶ No.1004807>>1004862 >>1004872 >>1004913 >>1004925
>>1004647
>should be automated
Ah, you have no respect for security or privacy.
>you can just have it 100% automated in the majority of the network
Seems you've never installed your own tracker.
>This is like people claiming
I don't sell projector screens.
>You're not a nodev
What's your github account? I don't use Social Networks, esp. Microsoft ones.
>>1004673
I'm fine https://www.libtorrent.org/projects.html
>>1004680
Sell me IPFS when we're making history:
https://en.wikipedia.org/wiki/Named_data_networking
▶ No.1004862
>>1004807
I get the impression you're being insincere for the sake of attention and are not actually interested in discussing this. Maybe you should use social media instead of trying to do that here. Or try /g/.
▶ No.1004872>>1004873 >>1004924
>>1004807
>Ah, you have no respect for security or privacy.
Renaming a filename without changing the hash is disrespecting your security and privacy?
▶ No.1004873
>>1004872
Renaming a file*
▶ No.1004913>>1004924
>>1004807
How is swarm merging, in concept, a breach of security or privacy?
Security is completely irrelevant, and privacy can be maintained the same way you would if swarms were not merged. It's exactly the same as BT, except automated.
If you want privacy, use anonymous routing or a private network.
>Seems you've never installed your own tracker.
What do trackers even have to do with this, and what protocol are you even talking about? KAD, Bittorrent, and IPFS all use a DHT for this, not trackers. Merging is handled client-side in all of them.
>NDN
IPFS seems to have actual implementations for a lot of these concepts. What are you trying to say by posting an advertisement for a conceptual "future internet"?
If something like that does come out, what difference would it make since it's content addressed? All the IPFS developers would need to do is figure out a way to resolve NDN hashes and fetch content from their nodes. That's the whole point of IPLD.
There are already implementations to add support for native hashes of various other formats and networks.
https://github.com/ipld/ipld
I'm not really confident in some research project that hasn't yielded anything since 2006 suddenly being relevant.
As for IPFS you can already work with git commits over HTTP, and ethereum blocks over whatever their network is. And Bittorrent is on their planned list.
I love the idea of having 1 program handle all my hashes/magnets. It's much better than having multiple different clients open, connected to multiple different networks independently.
If I request data, I want the program to get that data, through any means, over any network, using whatever URN is considered optimal that day.
The whole benefit of IPFS is that these things can change.
How routing is done, what hash algorithm is used, what chunking method is used, what encryption method is used, what transport is used, etc. but the interface stays the same "ipfs get URN", "ipfs add ~/dont_click/boku_no_pico.xvid"
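That "one interface, many backends" idea can be sketched in a few lines. The URN prefixes and backend strings here are hypothetical placeholders, not real client code:

```python
# Hypothetical dispatcher sketch: one "get" front-end routing a URN to
# whichever backend can resolve it. Prefixes and backends are made up.
RESOLVERS = {
    "urn:ipfs:": lambda urn: f"fetching {urn} over bitswap",
    "urn:btih:": lambda urn: f"fetching {urn} over bittorrent",
    "urn:ed2k:": lambda urn: f"fetching {urn} over ed2k/kad",
}

def get(urn: str) -> str:
    # The caller-facing interface stays the same no matter which
    # network actually serves the data.
    for prefix, resolve in RESOLVERS.items():
        if urn.startswith(prefix):
            return resolve(urn)
    raise ValueError(f"no resolver for {urn}")
```

Adding a new network is then just another entry in the table; callers never change.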
Maybe someone sent you a bittorrent hash and they're only hosting over i2p, the IPFS program should eventually be able to resolve that. Maybe a fork already does.
What's so great about NDN and why should I wait more decades for it when this exists today?
▶ No.1004924>>1004925 >>1005001
>>1004872
Full automation: misleading bit, download the whole repo.
You want semi automation, the more you can control the better, even if it runs mostly by itself.
>>1004913
>Security is completely irrelevant
Yeah, I concluded from you as much.
>except automated.
A windmill is also automated. Does it work like a windmill?
Does it not need my intervention if I made a typo?
>trackers even have to do with [Automation]
A lot. It's what you are claiming trackers don't have: a way to publish updates, even for typos.
>actual implementations for a lot of these concepts
We have prototypes, empirical evidence, and scientists working on this. Not hobbyists that wrote a thesis paper.
http://named-data.net
http://www.named-function.net
>IPFS developers would need to do
Nodev, got it.
>some research project that hasn't yielded anything
It's ok to ignore the 10 years worth of work in CCN, and working implementations.
Reminds me of ZeroCoin.
>1 program handle all my hashes/magnets
Be sure to run it in ring 0, so you can't expect any breaches.
This is the systemd of statements.
>What's so great about NDN and why should I wait more decades for it when this exists today?
The same question Linux faces in desktop adoption and display server support: lack of support.
Bittorrent works here and now with a proven, solid track record, while IPFS is still begging to complete its not-so-sound ideals.
It's snake oil, and you don't even see it.
▶ No.1004925>>1004934
>>1004924
IPFS already has the openBazaar fork with TOR/I2P but it has not been mainlined as the development is slow as shit.
>>1004807
> Libtorrent has no de-dup function
> Shilling NDN, which is in a worse state than IPFS and Dat Protocol
▶ No.1004934>>1004940 >>1004966
>>1004925
>Shilling
Yeah, that's what I thought of IPFS when it was announced, and thanks to you, even more refined.
Anything IPFS says it wants to do, Bittorrent software already provides, and more.
But we're done with conversation Nodev.
In the next ten years, we'll see Bittorrents on post quantum crypto, making it impossible to spy and tamper, unneeding TOR & I2P.
▶ No.1004940
>>1004934
> we'll see Bittorrents on post quantum crypto, making it impossible to spy and tamper, unneeding TOR & I2P.
Well you are now the nodev.
▶ No.1005001
>>1004924
>I concluded from you as much.
I feel like you're being dishonest on purpose.
If you're going to pretend to be an expert here you should know that security is a separate layer irrelevant to peer management in practically every P2P system that still exists. It just wouldn't work any other way. This isn't even IPFS specific, as you should be aware.
>A windmill
Please be direct, don't try and use analogies, or you're going to make things more convoluted.
There shouldn't be anything confusing about the automation of swarm merging, if there is, just ask directly.
>you're claiming trackers don't have
My claim is that it's implementation specific, and requires coordination from multiple parties (tracker and client devs, as well as users choosing to use those extensions) to actually see benefit, as opposed to it being implicit.
I see no reason this shouldn't be the case.
You had concerns about security and privacy but there isn't any reason to be concerned, those are irrelevant to distribution. Security around these systems is basically a solved problem and we have multiple anonymous networks to choose from to handle that, with i2p probably being the best example today.
>and scientists working on this. Not hobbyists that wrote a thesis paper.
I'm disregarding this as an appeal to authority.
These people are getting results and pushing out usable products, not siphoning academia funds while putting out whitepapers every few years.
>Nodev, got it.
That doesn't make any sense. You're the one interested in NDN, not me.
It would literally just be a matter of detecting NDN URNs, so I could probably do it if NDN existed.
>Be sure to run it in ring 0
I'm sorry but this is just inane garbage.
A single program is not a single component, and your CPU analogy isn't really relevant.
>Bittorrent works here and now, and a proven solid track record, while IPFS still begs to complete
What point are you trying to make?
I'm having a hard time believing your whole argument is based around being content with the status quo while simultaneously pushing a conceptual future internet academia moneypit.
More importantly, it doesn't have the distinct advantage we've been talking about.
Bittorrent clients are bittorrent clients. IPFS is a set of interfaces around distributed data publishing and retrieval. Nothing prevents you from using Bittorrent as your exchange instead of their native exchange, and torrent files instead of the IPFS internal format, if you wanted to.
That's the entire point, that it's flexible. Again, if you think NDN is such hot shit set to deprecate Bittorrent, nothing would prevent IPFS developers from using those concepts and networks.
IPFS itself is nothing but a set of specifications. The reference implementations are not bound by anything other than those, and they're intentionally designed to be modular.
>It's snake oil, and you don't even see it.
I don't see how it's snake oil or harmful in any way and you're not doing a good job of demonstrating how it is.
We're all already 100% aware that this isn't done yet, but that doesn't invalidate its merits compared to the current software out there. What has been finished is good, what's not yet finished looks good. You can't just say that the good parts are bad because the rest isn't finished.
If anything it should be a mark of embarrassment that these people are pushing out the most robust thing while others do nothing. All the concepts of IPFS are unashamedly taken from "proven" projects, most of which are older than BT itself.
Not to mention the comparisons to Bittorrent are so secondary.
In the scope of IPFS that's literally only the exchange layer. Long term they're competing with HTTP, so it makes more sense to compare it to that in functionality.
The nice thing here is that you shouldn't have to focus on the exchange layer since it can change. All you should be worried about is what the URNs look like, and that you can issue a get request to it and somehow it gets exchanged, maybe over IPFS, maybe over HTTP, maybe over BT, or, as we've talked about, an aggregate of everything available to you.
I'd appreciate it if you gave me legitimate reasons to avoid this project. I've spent a great deal of time looking into this and am only trying to see and prepare for what is coming next. Right now that looks like IPFS and for years of people posting these threads, nobody has convinced me otherwise. In the meantime several of the projects people shilled have fallen into unmaintained irrelevance, or had actual flaws pointed out AND exploited.
Nobody is doing that for IPFS and they've had more time to do so.
▶ No.1005226>>1007036
>#3
you're missing a couple of zeros
▶ No.1007034>>1023813
>>983922 (OP)
>integration with TOR
Doesn't tor get slow when people torrent over it? How would this even be implemented?
▶ No.1007342>>1007407 >>1007409
So this currently isn't released yet, but look at what they have planned for v0.34 js-ipfs. The ">" indicates a check, while the "<" indicates no check.
https://github.com/ipfs/js-ipfs/issues/1721
>Refactor files API
>Upgrade to latest ipld-dag-pb
>Refactor Object API
<--cid-base option
<CID version agnostic get and #1757
<DHT
>IPNS over pubsub
>IPNS over DHT
<addFromURL, addFromStream, addFromFs
>Support for HAMT directories in MFS
The DHT isn't completed yet, but if you check this pull request [https://github.com/ipfs/js-ipfs/pull/856] there are only two things left to do.
▶ No.1007407>>1007577
>>1007342
So when the DHT Implementation is complete, does that mean JS-IPFS and Go-IPFS can finally pull data from each other?
▶ No.1007409>>1007577
>>1007342
you forgot the nocopy bug
▶ No.1007413>>1007465 >>1007501
IPFS is bloat, just use DHT+bittorrent
▶ No.1007465>>1007493
>>1007413
Fuck off, IPFS' main draw is its per-file deduplication autism and bittorrent doesn't have that. You might as well call bittorrent bloat and tell everyone to use FTP.
▶ No.1007493>>1007497
>>1007465
>per-file deduplication
solved by dht
>You might as well call bittorrent bloat
bittorrent solves a problem, ipfs does not
▶ No.1007497>>1007498
>>1007493
>solved by DHT
Bittorrent's DHT does not magically give bittorrent per-file deduplication, you stupid nigger. Even Dat's deduplication is limited to files within a specific folder and not files across the entire network.
>ipfs does not solve a problem
Yes anon, torrents breaking if you change a filename or sharing shittons of files with other torrents without sharing peers totally aren't problems and don't happen in the real world all the fucking time.
▶ No.1007498>>1007499 >>1007512
>>1007497
>the entirety of ipfs could be duplicated using bittorrent and symlinks
Nice work guys
▶ No.1007499>>1007502
>>1007498
>he's so fucking stupid he thinks sharing peers for the same files across otherwise entirely different torrents is the same as bittorrent with symlinks
>this is what anti-IPFS fags actually believe
▶ No.1007501>>1007502
>>1007413
SHA1 is broken though
▶ No.1007502>>1007504 >>1007531
>>1007499
So just track each file separately and have the client bundle together related files? Bittorrent can do it
>>1007501
It's much easier to have bittorrent clients optionally upgrade to sha256 than it is to try and create a whole new network
▶ No.1007504>>1007507
>>1007502
>a separate torrent for each file
>assemble them into folders yourself
>god help you if there's a folder hierarchy
There's a reason no one does this. Maybe you'll figure out why someday.
▶ No.1007507>>1007508 >>1007509 >>1007900
>>1007504
Those are extraordinarily trivial problems which could be solved in a bittorrent client, you don't need a new fucking network for that
▶ No.1007508>>1007510
>>1007507
>lol just track each file separately
>what are folder hierarchies
>why would you need those
>let the client figure it out
>how dare you suggest a new protocol
Just accept that bittorrent and IPFS have different design goals before you make yourself look like an even bigger idiot.
▶ No.1007509
>>1007507
Also, the
>you don't need a new fucking network for that
excuse is like shitters insisting on using HTTP for fucking everything even if it doesn't fit their use case. To make the existing bittorrent network work like IPFS would create an ungodly mess which takes away bittorrent's existing perks so it can ape another protocol.
▶ No.1007510>>1007511 >>1007513
>>1007508
>a torrent of torrents
>one text file in the top-level torrent lays out the folder hierarchy and location for all the other torrents included in the bundle
There we go, took me 30 seconds and I just implemented IPFS in bittorrent+DHT
▶ No.1007511>>1007514
>>1007510
And how would you replicate IPNS, oh wise perverter of bittorrent?
▶ No.1007512>>1007513 >>1007514 >>1007558
>>1007498
>symlinks
What if you torrent some music, and you want to change tags? Now the whole file is different and you can't seed it anymore.
▶ No.1007513>>1007514
>>1007510
The multitorrent clusterfuck still breaks if someone renames a file or changes music tags as >>1007512 suggested.
▶ No.1007514>>1007515 >>1007751 >>1007900
>>1007511
I've got no idea what that is
>>1007512
>>1007513
IPFS is file-addressed, that breaks IPFS too
▶ No.1007515>>1007519
>>1007514
>that breaks IPFS too
That's the thing
It doesn't
lurk moar
▶ No.1007519>>1007539
>>1007515
You can't seed to a swarm if you've modified your files, that's literally impossible. If you're talking about something to do with IPNS, maybe that's revolutionary technology, but as for IPFS itself there's nothing in it that can't be done with bittorrent
▶ No.1007531
>>1007502
> It's much easier to have bittorrent clients optionally upgrade to sha256 than it is to try and create a whole new network
This will never happen. I had this conversation back when SHAttered happened and nothing has changed since.
The only way to unfuck bittorrent is by abandoning it and creating something better.
▶ No.1007539>>1007543
>>1007519
>You can't seed to a swarm if you've modified your files, that's literally impossible
Read https://medium.com/@ConsenSys/an-introduction-to-ipfs-9bba4860abd0 , faggot. Get your mind around this idea: a file's contents can be separate from its metadata (including the filename and timestamp).
▶ No.1007543>>1007546 >>1007547 >>1007900 >>1039270
>>1007539
I'm not even going to read that, your proposition that you can seed into the same swarm with different files is absurd
▶ No.1007546>>1007558
>>1007543
How is it? If there's a document originator, he just announces that there was a change and others download the changes.
▶ No.1007547>>1007558
>>1007543
>he thinks metadata == file contents
Nigger, with a decent hashing system the file's hash doesn't change if you tweak the filename, timestamp, or other metadata. It's the same fucking file with different metadata so even if some other asshole changed his file's name, you can still peer with him because the hash and file contents are the same.
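A toy sketch of why: content addressing derives the ID from the bytes alone (plain SHA-256 here, not IPFS's actual multihash/CID format):

```python
import hashlib

def content_id(data: bytes) -> str:
    # The ID is derived from the file's bytes alone, never from the
    # filename, timestamp, or other metadata.
    return hashlib.sha256(data).hexdigest()

data = b"the same file contents"
# Two peers holding identical bytes under different names...
peer_a = {"name": "song.mp3", "cid": content_id(data)}
peer_b = {"name": "renamed.mp3", "cid": content_id(data)}
# ...advertise the same content ID, so they stay in one swarm.
assert peer_a["cid"] == peer_b["cid"]
```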
▶ No.1007558>>1007562 >>1007565
>>1007546
>>1007547
This whole discussion is pedantic as fuck and worthless, but this entire tangent started at >>1007512 with some downloader changing id3 tags in his music. That changes the hash and creates a new swarm. Regardless, from the link provided, do versioned files even work yet? The issue linked from the medium post, https://github.com/ipfs/notes/issues/23, is over three years old and still open. The only interesting part of this whole project seems to be IPNS. I will read into it some, but from what I've seen of IPFS, I find it hard to believe that IPNS is not some silicon valley-tier rebranding of a preexisting technology
▶ No.1007562>>1007565 >>1007900
>>1007558
Anon's id3 tags example is silly and likely wouldn't work, but changing filenames, timestamps, and similar metadata does not change the hash in IPFS or create a new swarm. Combine that with the ability to peer with those seeding a different folder with some shared files, and you have some significant advantages over bittorrent when it comes to actually keeping torrents alive.
▶ No.1007565
>>1007558
>>1007562
>versioned files
I don't remember that going anywhere.
▶ No.1007577>>1007743
>>1007407
I think so. They say this in the dht section
>It allows your IPFS node to find peers and content way more easily and it has full interop with Go IPFS nodes so y'all have the full IPFS network at your fingertips
I think before, you'd have to have an ipfs-go node set up a websocket connection in order to interact with one using js-ipfs. And since there wasn't a dht you couldn't really download content without knowing a peer that contained the content (could be wrong).
>>1007409
Not sure what that is. Don't see anything recent related to it for js-ipfs.
▶ No.1007743
>>1007577
nocopy is a ipfs-go bug, search it in the issues page.
▶ No.1007751>>1007955
>>1007514
>that breaks IPFS too
▶ No.1007895>>1007900
Ok anons, here's my pitch for ipfs "meta files." The idea is that since directories are light-weight, you can distribute an ipfs directory heirarchy as raw blocks. This way if the directory blocks are forgotten, you can still recover some if not all of the files.
My current idea is to have a flat tar file containing an "index" file (which contains the master hash) and raw blocks named <hash>_block. Then to import the directory structure, you run `ipfs block put < block` for each block, retrieve the master hash from the index file, and proceed to use the master hash as you normally would.
Any suggestions on the file format and general premise are welcome.
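For what it's worth, a rough sketch of that tar layout as described above; the master hash here is a placeholder, and the block contents are dummies:

```python
import hashlib, io, tarfile

def pack(master_hash, blocks):
    # Flat tar per the pitch: an "index" entry holding the master hash,
    # plus one "<hash>_block" entry per raw block.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        def add(name, data):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        add("index", master_hash.encode())
        for h, block in blocks.items():
            add(h + "_block", block)
    return buf.getvalue()

# Dummy blocks keyed by their own hash.
blocks = {hashlib.sha256(b).hexdigest(): b for b in (b"dir node", b"file node")}
archive = pack("placeholder-master-hash", blocks)
names = tarfile.open(fileobj=io.BytesIO(archive)).getnames()
assert "index" in names and len(names) == 3
```

Restoring would be the reverse: `ipfs block put` each `*_block` entry, then read the master hash out of `index`.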
▶ No.1007900>>1007901 >>1007919
>>1007507
>I see what you're saying, and I agree it could be done. I also recognize that this is not standard for BT. However, I still don't see the need for someone to make a protocol where this desirable feature is standard, when we could just stick with the existing protocol and make unofficial extensions that will only ever have azureus users connected because people insist on using minimal and or outdated clients with a subset of modern features
why nigga
How isn't moving to a format and network that has these things considered in the standard a better solution?
There isn't even a migration path. You literally just add the directories that have your seeding content in it, and you're done. Migration complete, and if everyone did this all the swarms would merge.
Now think of how bad it is moving from 1 bittorrent client to another.
>oh man macroTurret just came out and it has this feature I want but my client doesn't have, I'll just migrate from 1 non-standard to another easily!
It's ASS.
>>1007514
>IPFS is file-addressed
IPFS is content addressed, that's the entire advantage. Any named content you see is metadata that's part of the node.
Notice that the "Hash:" value is different from the object I'm requesting. The one I'm requesting contains the metadata and a reference to the actual data. Like a real networked filesystem, it separates metadata from data, but guess what, metadata is also just data and you can content address it too.
Bittorrent can't compete with this; the transfers may as well be fire and forget, unless you absolutely never modify the content or meticulously manage it yourself. Why would anyone want to bother with that when it can just be automated and built in?
The absurd amount of dead torrent files that are basically duplicates with an additional text file, needs to stop.
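A minimal content-addressed store makes that metadata/data split concrete. This is a toy model with JSON nodes and raw SHA-256, not the real IPFS object format:

```python
import hashlib, json

def put(store, obj: bytes) -> str:
    # Store any object (data or metadata) under the hash of its bytes.
    h = hashlib.sha256(obj).hexdigest()
    store[h] = obj
    return h

store = {}
data_hash = put(store, b"...the actual file bytes...")

# Two metadata nodes that differ only in the filename they record,
# both linking to the same data block.
old = put(store, json.dumps({"name": "a.flac", "link": data_hash}).encode())
new = put(store, json.dumps({"name": "b.flac", "link": data_hash}).encode())

assert old != new  # the tiny metadata objects differ...
# ...but both resolve to the same data block, so the rename costs one
# small metadata object, not a second full copy of the file.
assert json.loads(store[old])["link"] == json.loads(store[new])["link"]
```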
>>1007543
>your proposition that you can seed into the same swarm with different files is absurd
Do you not understand how chunking works?
If you add a file, you chunk it and save the chunks; if you modify it and add it again, the majority of the blocks will be duplicates of the first version. The only thing you have to store to have 100% availability for both copies is the difference between them. The shared blocks are stored once, and only the delta takes extra space.
Read this
https://en.wikipedia.org/wiki/Data_deduplication
it's not a new concept, and everyone goes apeshit about ZFS for having it.
What if you could have that, but for network transfers too?
Why not use a tool that's more efficient with storage and bandwidth, for data transfers?
HTTP and BT need to be deprecated.
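A toy version of that chunk-level dedup, using fixed 4-byte chunks instead of the real 256 KiB default:

```python
import hashlib

def chunks(data: bytes, size: int = 4):
    # Fixed-size chunking, the simplest of the chunking strategies.
    return [data[i:i + size] for i in range(0, len(data), size)]

def add(store, data: bytes):
    # Store each chunk under its own hash; identical chunks dedupe for free.
    for c in chunks(data):
        store[hashlib.sha256(c).hexdigest()] = c

store = {}
add(store, b"AAAABBBBCCCCDDDD")  # original file: 4 chunks
add(store, b"AAAABBBBXXXXDDDD")  # modified copy: only 1 chunk differs
# 5 unique chunks stored instead of 8: both versions stay fully
# available for the cost of the delta.
assert len(store) == 5
```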
>>1007562
>Anon's id3 tags example is silly and likely wouldn't work
You could implement the MP3 format in the same way files are implemented.
Since ID3 is just metadata, you could literally just extend regular files to have fields like Artist, CoverArt, etc. and the Data field point to an audio stream or another container.
If that was done the blocks would be separated and the data stream would always have the same hash/swarm. Since metadata is so small compared to an entire extra copy, it's likely people would seed the original anyway since it would be the difference of a few bytes, not a few MB.
You could set something up to output this ipfs mp3-object in whatever format, like json, and that would be really easy to support.
`ipfs object get hash --encoding=json` would give you the metadata with a link to the data.
Which seems really easy to add support for compared to other shit media players support. Like RTMP, DASH, etc.
>>1007895
I'm not 100% sure but I think that's how this works.
https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#directory-sharding--hamt
▶ No.1007901>>1008063
>>1007900
Did this trip a spam filter?
▶ No.1007919>>1007940
>>1007900
>instead of just extending bittorrent to do what I want, I'll implement my own standard from scratch
The absolute state of /tech/, lapping up this silicon valley shitware
▶ No.1007940
>>1007919
You forgot to provide a reason why I'd want to build on a legacy platform instead of dropping all the incidental shortcomings and designing something built with the concepts I want in mind.
There's nothing special about bittorrent that we need to clutch on to it.
Not the hashing, the chunking, the dht, nothing.
You act like just because something already exists that we can't replace it with something better.
That's not how this has ever worked.
Did you forget how we got here in the first place? Should I send you the rest of this post via an ed2k link?
Regardless the best part about IPFS is that I don't have to implement it from scratch, and I don't have to extend BT either. It already exists and is in active development. It gets better while I sit on my ass. So what's the problem and why should I care about bittorrent at all?
I've yet to see a reason to avoid using IPFS. And I don't think my life would be any better if I tried doing the same things I do with IPFS, with bittorrent.
Not sure why you feel the need to defend BT. As if I'm not using both as well as others.
It's not like once you install ipfs you have to uninstall your torrent client.
Not sure what your end goal is or what point you're trying to make.
It just sounds like a bunch of "stop liking what I don't like" or "it's new and I don't understand it so it must be bad".
▶ No.1007955>>1008063
>>1007751
It breaks the folder and file hashes at the macro scale, but not the data chunk hashes or the hashes of the other files at the micro scale.
▶ No.1008063>>1008476
>>1007955
The original hashes don't break. The new hash is different but the old hash will continue to work.
When you change the metadata you're only adding those changes, so the original will still work as long as the metadata and data blocks are available.
It's only a few bytes so there's no reason not to just keep the original pinned as well as your modified copy if you're worried about permanence.
Look here >>1007901
you're basically just changing this struct for the directories that changed, and these are tiny, even when you include upwards recursion on parent directories.
Anyone that lists the index is mirroring the metadata themselves until their next gc too, so even if you delete the original metadata, people can still retrieve the index from someone who has it, while retrieving the main data blocks from you.
You don't store 2 full copies of a file, to provide 2 full copies. You only need to know which blocks are in common and which are not.
But in any case changing something never breaks the original.
Just like how if you make a new magnet link the old one still works as long as people still have the data, but the issue here is that magnets will only download from peers of the same swarm which is only the people with the exact same metadata/torrent file. IPFS simply doesn't have this restriction, you can get any data from any peer, and in theory over any network they add support for, which will include bittorrent itself at some point.
If this happens you could not only swarm merge within the same network, but across networks too. So you could download X% from ipfs peers and Y% from bittorrent peers for the same content.
You'd have to know the hashes for both networks, but this is how in-network swarm merging already works.
And the magnet link format already allows you to specify hashes for bittorrent, gnutella, ed2k, and probably more so you could just post shit like this.
magnet:ipfs-hash;bt-hash;http-backup-location
and feed it to a single client and let it sort it out.
As it should be imo. Let the download manager manage the downloads and figure out where the data is and how to get it. I shouldn't have to know ANY of this bullshit but I do because it's all terrible.
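For reference, the standard way to build that multi-source link is repeated `xt` (exact topic) parameters plus `as`/`ws` fallback sources in one magnet URI; the hashes below are made up:

```python
from urllib.parse import urlencode

# Made-up hashes. The magnet URI scheme allows several "xt" params
# in a single link, plus "as" (acceptable source) fallback URLs.
params = [
    ("xt", "urn:btih:0123456789abcdef0123456789abcdef01234567"),
    ("xt", "urn:ed2k:31d6cfe0d16ae931b73c59d7e0c089c0"),
    ("as", "https://example.com/backup/file.bin"),
]
magnet = "magnet:?" + urlencode(params)
# A multi-network client can pick whichever source it can resolve.
assert magnet.count("xt=") == 2 and "as=" in magnet
```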
▶ No.1008476
>>1008063
>You don't store 2 full copies of a file, to provide 2 full copies.
This is important. I only know of 1 BitTorrent client that does swarm merging and it forces you to make duplicate files on disk.
https://github.com/BiglySoftware/BiglyBT/wiki/Swarm-Merging
I must have hundreds of gigabytes of duplicate data in different temporary directories (to avoid name conflicts) from the manual merging process of multiple seedless torrents.
My options are to
- manually find and delete duplicates, removing myself as a seeder which is bad for the network
- symlink the files myself by hand, still leaving me the problem of storing them in a unique directory and spending time on it
- waste disk space keeping the duplicates. Which is not efficient.
I'd rather the filesystem take care of this for me, and IPFS does.
▶ No.1008554>>1008608 >>1008660 >>1008697
>You act like just because something already exists that we can't replace it with something better.
>That's not how this has ever worked.
>Did you forget how we got here in the first place? Should I send you the rest of this post via an ed2k link?
Well.. you are right, something always replaces the old shit.
However! You have it all backwards. ED2K is better than Bittorrent and Bittorrent is better than IPFS. Bittorrent still hasn't caught up with the capabilities of ED2K/Kad. It was fun to watch the development.
Bittorrent now has PEX!
err.. come on grandpa.. been having that shit forever.
Bittorrent now has DHT!
phew.. cool so that's what you call your Kad implementation.
Bittorrent has.. well still does not have distributed file search..
Ha! Move along fucker. The donkey has been stale for a decade and you still can't keep up.
IPFS.. hmm.. you have TOOLS to mount it and an HTTP proxy. Also some sort of revision system? Wow. Tell me more. Oh, no wait, I think even bittorrent built git over torrent. Hell, the donkey even pioneered using your fucking DAG for error correction.
Tech is dead. Showmanship isn't.
Donkey is king.
▶ No.1008608
>>1008554
I'm too young to remember edonkey, but why did it die? If you were to build a new filesharing application today would you use ed2k?
▶ No.1008660
>>1008554
You're conflating the protocols with the implementations.
You can't say ED2K is good when talking about ED2K/KAD. Those are obviously 2 different things.
You have to remember that if you wanted to use the KAD network in an ed2k client, you had to migrate from one client to another to get it. And even though multiple clients eventually implemented a way to search KAD, it doesn't mean they did it in the same way, so everything is not only network specific, but often client specific too. Obviously edonkey clients are not going to find emule clients through KAD.
Look at this
https://en.wikipedia.org/wiki/Comparison_of_eDonkey_software#Features
And it got worse
https://en.wikipedia.org/wiki/Comparison_of_BitTorrent_clients#Features_I
Why do we have multiple global torrent rating systems. Does anyone actually want these networks fragmented?
Why even implement a chat protocol extension if it can only be used by people using the same client as you?
What do we even have, like 2 torrent libraries that people use to make torrent clients? And people can't keep feature parity across clients.
It doesn't have to be this terrible.
The whole benefit of IPFS is that the interfaces and formats are standard, and extensions should be transparent to users and application developers; extending the clients and network should only be the concern of IPFS developers.
Not force people to switch clients, hash formats, or any other needless thing.
At the end of the day the api directives "get multihash" and "post nodeid" are going to remain the same. Just like with HTTP through several versions.
"get uri", "post uri".
Your client is either going to be capable of resolving something or not, but if you write a protocol extension implementation in 1 language, for 1 client, you should be able to easily port it to another.
It's not tied to a specific program's SDK or hacked-up internals. It's tied to broader, higher-scoped specifications designed with modules in mind from the start.
Yes you could probably do all of this over bittorrent but there's no way to coordinate that. Anything you've already posted is nothing but evidence of that to me. We've had the time, and some people have done it for some of the clients, covering some of the networks.
Fragments of fragments of peers.
IPFS is doing it different. Trying to allow for cooperation between clients and networks, by defining standard interfaces between layers.
I can't see this as anything other than a good thing for a P2P network. At every level.
If you think bittorrent is good, I don't know how you can't be excited for something that's more flexible from the foundation up.
It has the seamless upgrade potential that everyone else has neglected.
If I had it my way I would migrate to something if I could have the promise of never having to migrate again.
If HTTP can get 28 years of use by accident, I can only imagine how long IPFS will last when longevity is actively considered and they have past systems to learn from.
You seem to have some problem with IPFS but I am failing to understand what that problem is other than it being incomplete right now.
Obviously appealing to past tech like bittorrent fails on me too, for the reasons mentioned above in this post. It's the current best, but that doesn't mean it will remain the current best or that we should build on top of it either. It helps nobody and is a bigger pain in the ass long term than nailing standards down and implementing them fresh short term.
That's my perspective on it at least. I'm still interested in your opinions on the matter though.
I have no personal attachment to IPFS so even though it would be disappointing, I can be convinced that I missed something and it's actually bad.
But that hasn't happened in the years we've had these threads and IPFS has only improved over that time and grown in popularity with people like Mozilla and archive.org.
It's a whitelisted protocol in web browsers. When do they ever do that?
You don't see people putting NNTP:// into Firefox but ipfs://hash is valid
We have shit like webtorrent and they still don't want to implement magnet handling natively despite so many people using BitTorrent regularly.
I'm not saying it's guaranteed to succeed because of this, but I doubt it's going anywhere one way or the other. Despite people saying "memeware" forever.
▶ No.1008697
>>1008554
I'm not sure how old you are, but if you're past your 30s you shouldn't even need to think about why encompassing solutions are always preferred in the long run. Nobody likes to deal with abstraction layers.
▶ No.1009028>>1009515
▶ No.1009548
>>1009515
Found it in the usual place and I'm wondering what the commenters are raving about.
It just looks like a blog platform. Are they just upset that articles can't be removed and instead targeting the name for some reason?
What's going on here
▶ No.1009571
>>Is it a meme?
>You be the judge.
<go
<JS
<thinks they are the only content addressable storage
i have judged
▶ No.1009913>>1009928
>>983922 (OP)
N00b question.
I've installed ipfs. I can launch it via command line / terminal on different operating systems, etc.
So, how the fuck do I find anything worthwhile on it?
It's not like I can go to a search engine, type in something_cool.zip, and have it take me to it on IPFS.
I can load my own shit up in it, and see the hash, etc, but I want to access other people's cool shit, not just shit I've created.
▶ No.1009928>>1010458 >>1025849
I'm cleaning out 4 "tmp" directories of shit I downloaded. I'm realizing the benefit of downloads being considered garbage by default.
If I was using IPFS for these, I could just run garbage collection right now and it would automatically clean up shit that I wanted to seed, but only until I ran out of free space.
Waiting until I have TBs of garbage to manually sift through sucks ass. It was different when storage used to be precious and disk cleanup was frequent, not months or years apart.
I just found 4 copies of the same romset in different formats as well as duplicates from swarm merging.
500GB of something I said I'd seed for someone and forgot about.
And I still have more. This was a mistake.
>>1009913
>It's not like I can go to a search engine
https://ipfs-search.com/#/search?search=.zip
People post hashes to video games in the /v/ share threads and wherever else you'd post a magnet link.
Some Anons here used to have little web pages with personal indexes of things. Almost like geocities pages that updated over IPNS.
https://ipfs.io/ipfs/QmXta6nNEcJtRhKEFC4vTB3r72CFAgnJwtcUBU5GJ1TbfU
▶ No.1010220>>1010380
▶ No.1010380
>>1010220
>The Vulpine Club is a friendly and welcoming community of foxes and their associates, friends, and fans! =^^=
What point are you trying to make by linking a random conversation on furry twitter? Is this spam?
▶ No.1010458
>>1009928
>ipfs-search.com
>sniffs the DHT gossip and indexes file and directory hashes
Very nice. Though I should start wrapping files in directories so the filename actually makes it to the dht
▶ No.1010556>>1010564
>>1010493
> my unpopular program is much better because they get money a different way and are licensed differently
if dat's so good, why didn't it get as popular or as widespread as IPFS?
> inb4 muh VCs influence
fuck off. you should know that Dat is BSD licensed versus IPFS's MIT license, both perfectly respectable free software licenses
▶ No.1010564
>>1010493
>random masodong namefag
Fuck off.
>>1010556
>if dat's so good, why didn't it get as popular or as wide-spread as IPFS?
Dat is closer to a bittorrent replacement with a built-in VCS for updating your torrent. It lacks IPFS' deduplication autism, aka the reason most people are interested in IPFS in the first place, so many of IPFS' more interesting implications and uses flat out wouldn't work on Dat.
▶ No.1010565
>>1010493
>Exactly, but it changes the fact that if we use it and collaborate on its ecosystem we make easier for them to continue with their business model.
If there are other alternatives, it's better to focus on them instead of in VC funded software.
>IPFS is always going to be able to copy what we do, but they will need to put some effort on that, instead of using us as free labor.
I'm really surprised to see someone post something like this openly from a Mastodon person.
Is this not just being intentionally spiteful to a software project solely because you don't like the people who make it? And you don't like those people because they have funds and are partnering with other funded software projects that also release free software and services?
How is this going to help Mastodon users, or anyone for that matter?
Not adopting a platform that you admit is likely to become dominant, because the developers get paid.
Nobody complains about this for any other open source software project that I know of.
The main argument BSD users like to throw around is that FreeBSD developers get full time salary from Apple and Sony. They treat it as a good thing and I don't see how it could be anything else.
Literally what's the fear? I'd much, much rather use software from someone who is getting paid than from someone who has to be supported some other way, probably through ads or selling user data.
If protocol labs wants to trick /biz/ out of their money with cryptomemes why should I even care if I get distributed anime pictures as a result?
Who loses?
▶ No.1011728>>1011735
Would people use an IPFS tracker that functions like a BitTorrent tracker? It would periodically keep track of how many online people have pinned specific file hashes. To simplify things, it would recursively track the items in the subdirectories of a given directory hash rather than tracking directories directly. Users could recursively query directories, so a client could easily summarize the seeder count for each subdirectory and each individual file.
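The aggregation described above can be sketched in a few lines. This is a hypothetical model, not real tracker code: the directory layout and pin announcements are assumed inputs, and all hashes and peer IDs are made up.

```python
# Hypothetical sketch of an IPFS "tracker" that counts online pinners.
# Nothing here talks to a real IPFS node; the inputs are assumed.

# Maps a directory hash to the hashes it contains (files or subdirectories).
directory_links = {
    "QmRoot": ["QmSubdir", "QmFileA"],
    "QmSubdir": ["QmFileB", "QmFileC"],
}

# Maps a file hash to the set of peer IDs currently announcing a pin for it.
pinners = {
    "QmFileA": {"peer1", "peer2"},
    "QmFileB": {"peer2"},
    "QmFileC": set(),
}

def seeder_counts(root):
    """Return {hash: seeder_count} for every item under root.

    A file's count is its number of pinners; a directory's count is the
    number of distinct peers pinning anything underneath it."""
    counts = {}

    def walk(h):
        if h in directory_links:             # directory: union of children
            peers = set()
            for child in directory_links[h]:
                peers |= walk(child)
            counts[h] = len(peers)
            return peers
        peers = pinners.get(h, set())        # file: direct pinners
        counts[h] = len(peers)
        return peers

    walk(root)
    return counts
```

Querying `seeder_counts("QmRoot")` gives a per-item summary in one pass, which is exactly what a client would want to render next to each file and subdirectory.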
▶ No.1011731
I try to upload and then it says
Error: read tcp 127.0.0.1:50383->127.0.0.1:5001: use of closed network connection
What does this mean?
▶ No.1011735>>1011745
>>1011728
>an IPFS tracker that functions like a Bittorrent tracker
Something like https://github.com/DistributedMemetics/DM/issues/2 ?
▶ No.1011745>>1011757
>>1011735
No, not an indexer; I already talked to him/you. I mean something like a BitTorrent tracker in how it keeps track of available peers and seeders for a specific torrent. It would let people know the minimum number of online seeders available for files on IPFS.
▶ No.1011757>>1011791
>>1011745
What would this provide over the DHT? If you're just wanting to find how many people have a file, you can already do that with `ipfs dht findprovs <hash>`.
Note that there is no way to know if someone has pinned a file or just has it in their cache.
▶ No.1011791
>>1011757
Wow, the DHT got a lot faster. I got results in 3 minutes, compared to over 10 minutes when I tried that last year. As you said, it would be a more conservative but reliable number, since it would only include people who have actually pinned the file in question. I figured that instead of waiting for DHT results people could query the tracker database instead. And no, I wouldn't want to cache the DHT results, since they include files in people's cache, which is unreliable.
▶ No.1016594
https://github.com/libp2p/js-libp2p-websockets/issues/74
Just in case any JShitters like me are having trouble connecting web peers check above.
▶ No.1018587>>1018673 >>1018738
>2019
>still no I2P support in IPFS
>devs claim they won't add Tor support until go-ipfs gets a full security audit (https://github.com/ipfs/notes/issues/37#issuecomment-444685370)
fugg
▶ No.1018673>>1018679 >>1022544
>>1018587
>the guy in that thread being buttblasted because they didn't adopt his code yet
Doing anything else defeats the whole purpose of using Tor. Is he really upset that they're refusing to integrate his unaudited anonymous routing method into their own unaudited p2p network until security experts have looked at them both?
Anyone that has a use for Tor wouldn't trust either until this happened anyway.
How can a Tor developer be this blind? There's something (((glowingly))) suspicious about them being so insistent and inflammatory.
>It's Tor not TOR.
kek
>At least Open Bazar is using the code we wrote.
later
>I explicitly told Juan that I was concerned about the prospect of writing a bunch of code for free that will then not get used.
meanwhile
>communist flag in their profile on an open source development platform
Why even be mad when the silk road replacement is using your code anyway? As if something that large for Tor counts as "not used".
As if people can't literally just use the bazaar fork right now.
As if that entire thread doesn't have multiple IPFS developers saying it needs an audit before the work was even done.
If this shit isn't provably secure people that need it could actually die, why try to rush it? When is that ever a good idea for security?
Regardless of anything else this dude seems like a dickhead manbaby. Especially for saying this shit.
>>At least Open Bazar is using the code we wrote.
>we
Looks like he shat something out and someone else fixed it up and maintained it. Now he's trying to take credit.
Later in the thread he can't even figure out how to handle import paths, yet he lists himself as a senior engineer on LinkedIn.
If there was a /tech/ equivalent for /cow/, this man would be on it.
▶ No.1018679>>1018851
>>1018673
I don't mind it not having Tor support, but I2P would be nice.
>manbaby
Maternal insults are lame.
▶ No.1018738>>1018851
▶ No.1018851>>1021117
>>1018679
>Maternal insults are lame.
Does "immature" fall under this? "arrogant" seems appropriate. I stand by what I said though, it's childish behavior.
>I don't mind it not having Tor support, but I2P would be nice.
Sorry, I got off topic.
It just makes me mad that Tor users are already seen as criminals and pedophiles, and then we have people like this trying to push the project on top of that. It doesn't seem helpful. Although my complaints aren't either.
I would like to see these integrated but I'm not in any rush for it either. To me it doesn't matter if they support it if they can't guarantee anonymity yet. But I don't see what that dev is complaining about since if you don't have this opinion, you can still use it today. And apparently multiple projects and people do.
>>1018738
Neat.
▶ No.1021117>>1021223
>>1018851
>It just makes me mad that Tor users are already seen as criminals and pedophiles
Because the TOR devs are a bunch of fags who deserve no sympathy https://twitter.com/torproject/status/898256109789687808
more evidence for IPFS+I2P, not BitTorrent+TOR
▶ No.1021223
>>1021117
>Because the TOR devs are a bunch of fags who deserve no sympathy
Their work in favor of privacy is unmatched and that alone deserves sympathy.
A dumb opinion doesn't undo it all, especially as they don't act on it.
▶ No.1021392>>1021394
>>983922 (OP)
All the certs for archive.is, archive.fo, archive.today, etc... seem to be bad.
It also appears someone might be trying to hijack their domain name
https://news.ycombinator.com/item?id=18834467
I think having a similar service using IPFS over i2P would be perfect.
Are either of these projects mature enough to have something like this?
Does anything like this already exist?
▶ No.1021394>>1021396
>>1021392
Loads fine for me. Maybe your security settings are too conservative.
▶ No.1021396>>1021399
>>1021394
Could someone be trying man-in-the-middle me?
▶ No.1021399
>>1021396
You may have the wrong date settings. Check whether the date on your computer is accurate.
▶ No.1021728>>1021772 >>1021777 >>1021822
Somebody tell me if this shit will do anything resembling a normal file system, or is everything just an object because some developer was weaned off his mom's tit on Java and soy.
And yes, yes, I have read the fucking docs. They were written by someone who wears a pussyhat.
▶ No.1021772
▶ No.1021777
>>1021728
>on Java
Object doesn't have to refer to objects from OOP.
▶ No.1021822>>1021962
>>1021728
>he's read the docs and still doesn't know this
Soy or not, something is rotting your brain if you haven't figured this out yet. Yes, you can use it as a filesystem and mount it through FUSE. You're especially retarded if you expect deleting a file on your end will magically keep others from seeding it.
https://github.com/ipfs/go-ipfs/blob/master/docs/fuse.md
▶ No.1021960>>1021978
>>998765
ipfs is not meant to be anonymous; it uses a DHT, and nodes are essentially broadcast. What it /is/ useful for is making content hard to censor, since that would require taking down all the bootstrap servers (which you can replace with your own) and the clients that host the content.
To anonymize it you simply need to tunnel the traffic over a network interface (real or virtual) that is secured.
▶ No.1021962>>1022647
>>1021822
Let's also not mention that he doesn't know what a merkle DAG data structure is, or that it's also used to de-duplicate content and optimize for speed.
▶ No.1021978>>1022107
>>1021960
>it is meant to be insecure
Ok. Into the garbage it goes.
▶ No.1022107
>>1021978
>program is designed to reuse existing anonymity layers like I2P instead of reinventing the wheel
>this somehow means it's "meant to be insecure"
▶ No.1022536>>1022549 >>1022904
>>983929
>We are committed to providing a friendly, safe and welcoming environment for all, regardless of gender identity, sexual orientation, disability, ethnicity, religion, age, physical appearance, body size, race, or similar personal characteristics.
>see the hangout meetings between the devs
>literally nothing but white guys, no wimminz, kangz, nothing
>all this shit to appease faggots who dont even contribute to this
WEW
▶ No.1022544
>>1018673
Ever since tor started with muhnatzees I've been suspicious that they might have been compromised
▶ No.1022549>>1022588
>>1022536
Software projects constantly get into trouble for not having a so-called code of conduct in their repositories, so adding one could be merely a preventative measure. At worst, it's virtue signaling. You probably shouldn't let it bother you, although if IPFS gets really big and starts being used at a massive scale, the above-mentioned cock will likely be used by yids to purge bad goyim from the development team and install their own actors, similar to what's been happening with Linux recently.
▶ No.1022588>>1022888
>>1022549
What happened to just telling annoying faggots to fuck off?
▶ No.1022647>>1022888
>>1021962
This is the laziest argument I've ever heard on /tech/; this is worse than "install gentoo" or "delete system32". Seriously, get off the motherfucking soy, you fucking faggot.
▶ No.1022888
>>1022647
>getting mad because you're stupid
lmao
>>1022588
Defamation, blackmail, and rape allegations happened.
▶ No.1022892>>1022896 >>1022904
This thread has devolved into bickering about nonsense. I suggest we make an IPFS board. When was the last time any of you even shared something over it? I was trying, but the size of the file I was uploading crashed my connection.
Keep linking shit and stop arguing over nonsense. It's not meant to be anonymous, dumbasses. The CoC is pointless too, since the only people using it and programming it are whites.
▶ No.1022896>>1022904
>>1022892
There's already >>>/ipfs/ . We used to have more sharing before /tech/ went to shit through cuckchan newfags and the mods not giving a shit.
▶ No.1022904
>>1022536
>>1022892
Delete this before the bright hairs find out.
>>1022896
We don't need a whole board for this. /v/ has a constant "share thread" that's not bound by protocol or distribution method. They post ipfs hashes, magnet links, mega urls, vola rooms, and other things.
Someone could start one here if the board owner would allow it. But I'm not sure what kind of stuff /tech/ would even (want to) share.
>>>/v/15953491
▶ No.1023813>>1023868
>>1007034
This is why we need i2p. With Tor people can leech, and all traffic has to go through a few somewhat fast central nodes. With i2p everyone is a node just by using it. The more people use i2p the faster and better it gets. Tor just gets congested with more people.
▶ No.1023868
>>1023813
IPFS+I2P through the Temporal project
▶ No.1025320>>1025349 >>1025657 >>1025893
Is there any distributed imageboard being currently worked on? What's your take on proof of human work blockchain? Captcha puzzles are generated collaboratively between recent miners and none of them know the solution
https://eprint.iacr.org/2018/722.pdf
https://eprint.iacr.org/2011/535.pdf
The first paper suggests rewarding betrayal to prevent collusion between captcha generators, thus involving currency. This is also an incentive to mine, to prevent attackers from taking over. There must be a way; it could be fun to watch anime directly in an anon's post.
▶ No.1025349>>1025657
>>1025320
Distributed =/= decentralized in terms of software development/engineering.
Polite sage.
▶ No.1025657>>1025723 >>1028894 >>1028994 >>1035790
>>1025320
I know this one https://github.com/smugdev/smugboard but there's more that I don't remember.
I honestly like the idea of machine proof of work over human pow, especially if the owner can profit from it.
This kills 2 birds with 1 stone. People have to be at least somewhat committed to their input so they don't spam total garbage, and the owner gets profit without having to resort to other methods.
Most captchas benefit nobody and are simply a blockade, the exception being Google.
And worthless cryptographic challenges (like hashcash) fell out of style in favor of ones that reward tokens (that hold some form of value).
The only real issue is fairness across devices. You can't exactly have weak hashes for some people since obviously people are going to exploit that on powerful systems. And having only strong hashes unfairly locks out low power devices.
It would be interesting to see unique token exchanges become decentralized and simplified. Maybe the Ethereum people are already doing something like that.
When it comes to pay-to-post I hate that idea, but I have no problem with token exchanges since they're more specific and tied more to services.
It would be cool to have some domain specific token like an 8ch post token, and be able to automatically barter with the owner's system with other tokens.
So I wouldn't have to compute something like heavy hashes with monero at the time of posting, but could, say, preemptively host data on a low-power machine to generate filecoin and exchange that for post tokens.
Ideally these things would all hold no monetary/fiat value and be pure barter tokens, but that's unavoidable, and it ruins the whole system since someone could just buy post tokens. But it would be cool if we could force service-based tokens rather than a generic cash economy.
It might be possible, but I can't imagine how.
I want to see a digital world where only the people who generate hentai@home credit can post on my website. More seriously imagine using something like folding@home points.
Something that implies you did useful work, not something money-based, since people can just steal cash; money implies nothing and can't be sourced or proven. Having a lot of it doesn't even prove you deserve free passage everywhere either.
Having things tied to a crypto identity would be good but then people would be upset over the lack of anonymity and the ability to have multiple accounts for the same service.
>>1025349
Sage shouldn't be considered inherently rude.
▶ No.1025723>>1025764 >>1026335
>>1025657
>I want to see a digital world where only the people who generate hentai@home credit can post on my website
How would you implement this using IPFS and Ethereum/Tron/Neo/Cardano/Eos/Stellar/Iota/etc... (smart contracts)? Asking for >>>/hydrus/ (to create a social network seeding/tagging layer on top of it)
▶ No.1025764>>1025845 >>1026335
>>1025723
For reference https://bitcoinexchangeguide.com/neo-vs-eos-vs-trx-tron-vs-xlm-stellar-vs-ada-cardano-altcoin-battle/
Some ideas:
1. Those who seed more have more rights in downloading materials they like, similar to private trackers and Sia/Storj/Filecoin/Maidsafe/etc. storage tokens
2. Those who tag more may or may not be genuine (e.g. tag trolls, tag bots), with or without incentives, thus we need an overseer to make sure the tags are good
3. Tying #1 and #2 together may not be a good idea, as a seeder could also be a tag abuser, or worse, use tag bots to rig the system and gain dominance
4. To fix #2, those who engage with the community through tagging should have the most power, but we must create a consensus system for a Steem-esque "proof of collaboration"
▶ No.1025845>>1026335
>>1025764
Some ideas regarding voting https://en.wikipedia.org/wiki/Penrose_square_root_law https://en.wikipedia.org/wiki/Jagiellonian_compromise
The power of a person's vote should be the square-root of the person's contributions.
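The square-root rule above is easy to sketch. This is only an illustration of the Penrose-style weighting, with made-up contribution numbers; how "contributions" are quantified is exactly the open problem discussed below.

```python
import math

def vote_weight(contributions):
    """Penrose-style weighting: voting power grows as the square root of
    a person's contribution count, so heavy contributors still matter
    more but can't linearly dominate the vote."""
    return math.sqrt(max(contributions, 0))

def tally(votes):
    """votes: list of (contributions, choice) pairs.
    Returns the weighted total per choice."""
    totals = {}
    for contributions, choice in votes:
        totals[choice] = totals.get(choice, 0.0) + vote_weight(contributions)
    return totals
```

Note the effect: one voter with 100 contributions (weight 10) is exactly balanced by two voters with 25 each (weight 5 + 5), which is the anti-domination property the square-root law is after.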
Some problems regarding quantification of file contributions:
1. If we assume every file is created equal, does that mean a high-resolution image is the same as an emote/flag?
2. If we assume files are ranked by file size, people would cheat by converting low-res jpeg into png, mp3 into flac, and epub into pdf.
3. Even if we can quantify the files, how does that stop things like Waifu2x 2x upscale and SVP 60fps from slipping through the cracks?
4. If we actually use file popularity to rank file weight, then it becomes a shilling and spamming operation, how do you tag and manage them?
▶ No.1025849>>1025958
>>1009928
Anyone have a plain-English guide on setting up "geocities pages that update over IPNS"? Because for the personal "I'm soooo random!" website, IPNS looks tempting.
▶ No.1025893>>1025915
>>1025320
There are no captcha puzzles that can't be solved by AI. Human pow has no future.
▶ No.1025915>>1026335
>>1025893
What about faptcha?
▶ No.1025958
>>1025849
You don't need a guide.
ipfs name publish /ipfs/$(ipfs add -Q -r my_website_root)
When it says "Published to Qm..." that Qm... is your IPNS reference: "/ipns/Qm.../"
The apis you want to read about are
ipfs key --help
ipfs key gen --help
ipfs key list --help
ipfs name --help
ipfs name publish --help
▶ No.1026335>>1026364 >>1026432
>>1025723
I think the answer is simply "yes, smart contracts could help" at this level. The things mentioned here
>>1025764
>>1025845
should be ironed out ahead of time by the hydrus community (and/or developer).
In principle, smart contracts and blockchains allow you to emphatically claim history and truth at a global/network scale. But that's not really interesting to talk about until you know what you need to prove and how the data is generated. What "value" is, has to be defined before we can speculate how to implement it.
Then you can discuss potential exploits and how to defeat them.
I think you should start a thread there if one doesn't already exist, either way link to it here.
When it comes to auto image analysis we have lots of options today. This thread has some and there's A LOT more >>>/hydrus/1553
When the results are unsure, we can always fallback to human intervention. How those people are elected, what power they have, etc. again has to be defined in human language first.
For reputation, I guess I would look into older p2p networks that tried to have reputation systems, like ed2k/Kademlia.
And I'm sure things like sybil mitigation are good to know.
https://ieeexplore.ieee.org/document/6363920
https://www.sciencedirect.com/science/article/abs/pii/S1389128615004168
If you can figure out how DHT networks work without spam killing them, you might have a basis for distributing votes, once a vote's weight and impact are defined.
Somewhat related to A3 - (bots to rig the system and gain dominance)
I wonder when the IPFS people are going to release filecoin research. If they're going to have a credit system based around hosting data, they likely have to solve a lot of these same problems. But IPFS isn't even out of alpha yet and it's been years so it might not make sense to wait for that. Then again if it's taking them years maybe it's not worth trying to solve on your own if they're going to do it for you.
At the very least they'd have to have some way to prove you're hosting and that the data you're hosting is cryptographically legit.
Helping A1 - (Those who seeds more have more rights in downloading)
Without relying on peers or trackers to be trusted.
We need to have a thread where we can dump knowledge. I'm not really strong on consensus or trust tech since it's been constantly evolving for years. But I'm wondering how other people handle this: blockchains, p2p networks, and even centralized systems. Like if there are boorus with points/credit systems, how they handle it, etc. How do federated platforms deal with it too, things like NNTP-based services, or even modern things like Mastodon. Things like OpenBazaar might be worth investigating too.
There's also the idea of using libp2p directly to implement your own system that is decentralized in most parts but (semi-)centralized in authority.
Like distributing files via IPFS while forcing vote data to be sent to specific peerID(s) who processes it and then updates the single source of truth.
This way you don't really have to maintain your own blockchain if you don't feel like you need all the things that offers you. Like I doubt most people actually need an entire history of events versus just operating on the latest copies of something in a traditional database kind of fashion.
i.e.
>Peers a,b,c tagged file X with tag y
This is everything the auth server needs to know and distribute, probably best on demand.
versus
>file X was tagged y by peer a on $date -> tag count was incremented by peer b -> 3 million entries...
This is not central; everyone has to have a copy of it (or its head), and they traverse it themselves to find the truth about something.
Depends on what you want and need.
If I remember correctly the former is more or less how the hydrus tag repos work already
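The "latest state only" authority model can be sketched like this. Everything here is hypothetical (peer IDs, hashes, tag names); it just shows why the authority never needs an event log, only the current assertions.

```python
# Sketch of the "single source of truth" model: an authority peer keeps
# only the latest tag state, not a history of events. All names are
# illustrative.

class TagAuthority:
    def __init__(self):
        # (file_hash, tag) -> set of peer IDs that asserted it.
        # Storing the set (instead of a counter) makes submissions
        # idempotent: a peer re-sending the same tag changes nothing.
        self.assertions = {}

    def submit(self, peer_id, file_hash, tag):
        self.assertions.setdefault((file_hash, tag), set()).add(peer_id)

    def tag_counts(self, file_hash):
        """The only thing clients need on demand: tag -> peer count."""
        return {tag: len(peers)
                for (fh, tag), peers in self.assertions.items()
                if fh == file_hash}
```

Because a re-submitted tag is a no-op, a spammy peer can only ever count once per (file, tag) pair, which is a big part of the appeal over an append-only event log.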
>>1025915
Can't you just do automated image recognition?
>>>/hydrus/1553
https://trace.moe/
▶ No.1026364>>1026380
>>1026335
>yet another "guys we should have an imageboard ON THE BLOCKCHAIN" shitter
Fuck off.
▶ No.1026380>>1026394 >>1026626
>>1026364
>list at least 3 options
>bias against blockchain specifically in favor of simpler approach
>this is somehow DUDE BLOCKCHAIN LMAO
Can you just not.
▶ No.1026394>>1026432
>>1026380
All of said options are stuffed with convoluted bullshit just so you can set up a crappier and less convenient version of something people already have. For imageboards we're better off with a hybrid solution where images and videos are served over IPFS to save server bandwidth.
▶ No.1026432>>1026626
>>1026394
Hydrus isn't an imageboard, it's a booru.
>a hybrid solution where images and videos are served over IPFS to save server bandwidth.
>>1026335
>Like distributing files via IPFS while forcing vote data to be sent to specific peerID(s) who processes it and then updates the single source of truth.
▶ No.1026626
>>1026432
>>1026380
We need to consider WHEN simpler approaches break and WHEN blockchain breaks. To put it simply, simpler approaches can't scale with user count.
Whether it's a private tracker on libp2p and IPFS, storage coins like Sia/Storj, or NNTP-esque solutions, we need a solution that holds up once we expand out.
▶ No.1027339>>1027345
Could IPFS be used for web page archiving? Like archive.org or archive.fo type of thing?
How could I do something like this? Would saving a webpage with relative links and then uploading it work?
▶ No.1027340
>>1026385
But you can use Bitcoin SV (Satoj's Vishnu) to store data files, as big as 128 MB, forever and ever in the blockchain, for a minimal fee. Miners will be forced to keep your file around forever.
▶ No.1027345>>1027349
>>1027339
Absolutely.
wget the site with links converted to relative (--convert-links) and just ipfs add -r the files.
▶ No.1027349
>>1027345
It worked. You need to add -w also.
▶ No.1028703>>1028894 >>1029039
So, it should be a distributed imageboard with proof of work tangle.
>each post must validate two random previous ones
>time/sorting is estimated by the sum of PoW accumulated by all validations (weight)
It should be more resilient to censorship than a blockchain because history cannot be rewritten even with more mining power. You can post offline or in a local network and merge the subgraph when back online. The client settings define a target difficulty to filter and mine posts. In a way this is a choice between quality and quantity without having to split the community.
>How does it do against spam?
People will raise the difficulty until they are satisfied with the amount of spam. What should happen is an endless race of computing power until an equilibrium is found where either spamming becomes impractical or the network is too weak, but then it is still possible to dynamically reduce the difficulty when the spamming stops.
>How could this be abused?
Since time is calculated from the amount of validations, it is technically possible to mine against selected posts. Nevertheless, it is easier to censor posts with few or no validations, so hopefully the majority will keep it random and unbiased.
I'm going to try doing it in Rust.
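The weighting rule above ("sorting is estimated by the sum of PoW accumulated by all validations") can be sketched directly. This is an illustration, not the anon's Rust implementation: the post graph and PoW values are made up, and early posts validate fewer than two parents because fewer exist.

```python
# Sketch of the tangle weighting described above: each post carries its
# own PoW and validates earlier posts; a post's weight is its own PoW
# plus the PoW of every post that directly or transitively validates it.

posts = {
    # post_id: (own_pow, [validated_post_ids])
    "genesis": (1, []),
    "p1": (2, ["genesis"]),
    "p2": (3, ["genesis"]),
    "p3": (5, ["p1", "p2"]),   # validates two previous posts
}

def ancestors(pid, seen=None):
    """Every post that pid validates, directly or transitively."""
    if seen is None:
        seen = set()
    for parent in posts[pid][1]:
        if parent not in seen:
            seen.add(parent)
            ancestors(parent, seen)
    return seen

def weight(pid):
    """Own PoW plus the PoW of every post approving this one."""
    w = posts[pid][0]
    for other in posts:
        if pid in ancestors(other):
            w += posts[other][0]
    return w
```

This is also why rewriting history needs more than raw hash power: to outweigh an old post you would have to out-mine every later post whose validations already point at it.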
▶ No.1028815>>1028863
Since it's still in heavy development I'm worried about getting hit by some zero-day if I install IPFS directly on my computer, so I was thinking about ways to isolate it from the rest of my system. Using a VM should probably be enough, but obviously I don't have the resources to launch a VM for every other program, so my next best option is probably a sandbox.
So how did you set up IPFS? Should I just install it directly on my system and live on the edge?
▶ No.1028863
>>1028815
Use firejail or some unix user fuckery to sandbox it.
▶ No.1028894>>1028900 >>1029039 >>1029249
>>1025657
>especially if the owner can profit from it
How are you defining profit?
>And worthless cryptographic challenges (like hashcash) fell out of style in favor of ones that reward tokens (that hold some form of value)
Hashcash still serves as a good method to reduce spam. I agree though that good behavior should be rewarded in a provable way.
>You can't exactly have weak hashes for some people
>And having only strong hashes unfairly locks out low power devices
I too was concerned about this. The way I solved it is by giving users the option of either paying a fee per transaction or attaching a PoW to their transaction. Miners are incentivized to prefer transactions with fees over transactions with PoW attached. Miners can choose the number of PoW transactions they will accept into a block.
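The fee-or-PoW policy described above can be sketched as a block-selection function. This is a toy model with hypothetical field names, not code from any real chain: fee-paying transactions are preferred, and the miner admits at most a chosen number of PoW-only transactions per block.

```python
# Sketch of the fee-or-PoW admission policy: miners prefer fee-paying
# transactions (highest fee first) and accept at most `max_pow_txs`
# PoW-only transactions into the remaining space. Field names are
# illustrative.

def select_block(mempool, block_size, max_pow_txs):
    """mempool: list of dicts like {"id": ..., "fee": int, "pow": int}.
    A tx either pays a fee (fee > 0) or attaches PoW (pow > 0)."""
    fee_txs = sorted((t for t in mempool if t["fee"] > 0),
                     key=lambda t: t["fee"], reverse=True)
    pow_txs = sorted((t for t in mempool if t["fee"] == 0),
                     key=lambda t: t["pow"], reverse=True)
    block = fee_txs[:block_size]                      # fees come first
    room = block_size - len(block)                    # leftover slots
    block += pow_txs[:min(room, max_pow_txs)]         # capped PoW quota
    return [t["id"] for t in block]
```

The cap is the knob that makes the scheme device-fair: low-power devices can still get in via PoW, but PoW transactions can never crowd out the fee market.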
>When it comes to pay-to-post I hate that idea
>It would be cool to have some domain specific token like an 8ch post token, and be able to automatically barter with the owner's system with other tokens
You just contradicted yourself. How would the 8ch tokens be used and produced?
>I want to see a digital world where only the people who generate hentai@home credit can post on my website
That is entirely possible to do right now. hentai@home could transition to a public privately permissioned blockchain such as Multichain where the current rules still apply. Users would be credited in "coins" that may or may not be freely tradable. Even if they're not freely tradable, the user who owns the coins could sign a message proving they own their amount of coins. Then you as the website admin could check the hentai@home public blockchain to verify they really have them.
>Something that implies you did useful work and not just money based since people can just steal cash it implies nothing and can't be sourced or proven
As I just explained, a public privately permissioned blockchain solved that problem. Account rules and coin distribution are entirely up to the owners of the blockchain. It's basically just like a SQL database where the admins have complete control but anyone can download it and look at the changes in real time.
>Having things tied to a crypto identity would be good but then people would be upset over the lack of anonymity and the ability to have multiple accounts for the same service
In UTXO based blockchains generating a new deposit address for each transaction is encouraged. The problem is when a service requires users to submit more than one of their transactions. Then that service is able to link multiple addresses to a single user.
>>1028703
DAGs can never provide an absolute global view of the network. Eventually nodes will become partitioned with some discussions only happening on a particular node. Even comments on a universally accepted thread can become partitioned among nodes due to partial network failure. That said, I agree that for an imageboard a DAG would probably be the best approach. Using the IOTA Tangle as a base is not a good idea. It was designed for monetary consensus, not for message consensus. The best approach I've come up with is simply extending the Matrix DAG messages to include PoW and IPFS image hash fields. With a few more quality of life adjustments, I really think Matrix is the solution we've been looking for. Instead of like in Tangle where new transactions refer to only two previous "tip" transactions, in Matrix new messages refer to all previous "tip" messages. This allows the DAG to always approach a linear graph. Under ideal circumstances this will make a linear graph where messages are ordered linearly. Individual nodes may determine how to present multiple branches to users.
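To illustrate the tip-referencing idea, here's a minimal sketch (my own toy model, not Matrix's actual data structures): each new message references every current "tip" (message nobody has referenced yet), so the graph keeps collapsing back toward a single chain, and offline branches get merged back in by whatever message comes next.

```python
# Toy DAG sketch: new messages reference ALL current tips, so under
# ideal conditions the graph stays linear; an offline branch just
# adds a second tip that the next message consolidates.
class Dag:
    def __init__(self):
        self.parents = {}   # message id -> tuple of parent ids
        self.tips = set()   # messages nobody references yet

    def add(self, msg_id):
        # reference every current tip, becoming the sole new tip
        self.parents[msg_id] = tuple(sorted(self.tips))
        self.tips = {msg_id}

    def add_offline_branch(self, msg_id, last_seen_tips):
        # a node that was offline references the tips it last saw;
        # its message becomes an extra tip alongside the main branch
        self.parents[msg_id] = tuple(sorted(last_seen_tips))
        self.tips.add(msg_id)
```

After an offline node rejoins with a branch, the very next ordinary message references both tips and the graph is linear again.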
>It should be more resilient to censorship than a blockchain because history can not be rewritten even with more mining power
IOTA Tangle only requires attackers to have 34% of the global hash rate to take over and remove transactions before they can be confirmed by nodes. Besides, the PoW wouldn't need to be an arms race. All messages above the global minimum PoW would be downloaded and individual nodes could choose the fixed minimum PoW level they require to be accepted into their graph. A general consensus of acceptable PoW levels would be an ongoing discussion for each node.
I'm personally working on a design for an improved version of DPoS that should make things much more palatable (since competitive PoW on all blockchains will eventually be phased out) though as of now I still think a Matrix inspired DAG is the best bet for a long history preserving distributed imageboard.
▶ No.1028900>>1028930 >>1029039
>>1028894
>How are you defining profit?
Gain of any kind that prolongs the service. Be it monetary or some other kind of token for distributed computation, storage, etc.
"Profit" may not be the most appropriate word here though.
>The way I solved it is...
Interesting approach.
>You just contradicted yourself
You left "but" out of the quote.
Despite the 2 being similar they are not the same, given that 1 is inherently generic (cash) while the other isn't intended to be (post token). Likewise with the other tokens. Specific vs generic.
As already mentioned, the issue that would need to be solved is some way to prevent exchange for monetary value. Potentially you could maintain a blacklist of tokens going through known fiat exchanges, but that's just cat and mouse and doesn't prevent individual private trade. As far as I can tell there's no way to handle this.
>How would the 8ch tokens be used and produced?
This was just an example, tokens could be used instead of a captcha challenge, generated through whatever hashing scheme you wish.
>a public privately permissioned blockchain solved that problem
If you can have access to an account, you can sell the access to that account. This happens a lot in video games for non-transferable items.
It only prevents the tokens from leaving the system, not people buying into the system with (preexisting) credit.
>Eventually nodes will become partitioned with some discussions only happening on a particular node
Wouldn't CRDTs resolve this?
https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
I believe this is how people are building chat and document editing programs on top of IPFS now.
They will still diverge in the event of a large outage, but can become joined again.
Clients could have bias on this too, when a conflict was resolved, in the data it should still be in the correct order, but you could render it as if it's a post at the bottom of the thread with out of order timestamps.
I think meguca does this. Assume 2 posters, A and B.
A reserves a post slot, B reserves a post slot, B submits post, A submits post. I think it renders AB until you refresh, then it would be BA. Or it might render AB with out of order timestamps. I forget. Not that the rendering of the data is worth considering before the data itself. But it seems like divergence and then (re)unification shouldn't cause a technical or visual disaster. But I'm not very familiar with all that, so something could be problematic with what I'm saying.
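The CRDT idea above can be shown with the simplest case, a grow-only set: two replicas diverge during a partition, then converge by set union, and rendering order is a separate client-side choice. (This is a generic G-Set sketch, not how IPFS chat apps actually store posts.)

```python
# G-Set (grow-only set) CRDT sketch: merge is set union, which is
# commutative, associative, and idempotent, so replicas converge no
# matter in what order they exchange state after a partition heals.
def merge(replica_a, replica_b):
    return replica_a | replica_b

node1 = {("t1", "post A")}
node2 = {("t1", "post A")}
# partition: each node sees a different new post
node1 |= {("t2", "post B")}
node2 |= {("t3", "post C")}
# after the partition heals, both sides converge to the same state
merged = merge(node1, node2)
# ordering is a rendering decision, e.g. sort by timestamp
timeline = sorted(merged)
```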
>Matrix DAG
Neat. I didn't know about this. Seems like a good base if it's designed for chat in the first place.
▶ No.1028930>>1029039 >>1029281
>>1028900
>"Profit" may not be the most appropriate word here though
I think you're thinking of a store of value. I agree we should be storing (subjective) value in some systems. Imageboards are not one of those systems. Instead we should (and have been) penalizing disvaluable behavior.
>given that 1 is inherently generic (cash) while the other isn't intended to be (post token)
>the issue that would need to be solved is some way to prevent exchange for monetary value
>As far as I can tell there's no way to handle this
I understand the intended difference between tokens but unfortunately I can't think of a way to prevent it either. Ultimately all "cash" transactions are barters too. Unless there's an always correct whitelist I don't see how you can selectively allow some barters but not others.
>If you can have access to an account, you can sell the access to that account
>It only prevents the tokens from leaving the system, not people buying into the system with (preexisting) credit.
I overlooked that. I actually assumed because the seller would retain a copy of the only private key valid for those coins that people wouldn't purchase them. You're right, people are very stupid and/or risky. You might be able to prove value for some service but only in real time. Historical records of the service wouldn't be reliable.
>Wouldn't CRDTs resolve this?
CRDTs are really just conflict resolution rules when combining alternate representations of a solution. Each message is a solution so there's nothing inherently conflicting about them. I was talking more along the lines of individual node selective PoW requirements. Message propagation shouldn't be much of an issue due to longer reply times since users aren't really chatting. Even then, your personal local node could go offline, you could reply to a few posts on different threads, go back online and resync with the network a day later and everything would be fine. Thanks to the way the DAG is set up, your posts would be seen as a branch from older transactions and would be accepted since they stemmed from an older valid message. The next new message would refer back to your branch consolidating it back into the main branch. This actually doesn't even need to happen but it's nice because then we don't have to worry about a million branches when traversing.
>I think meguca does this
I'm pretty sure meguca orders and displays posts after the captcha is completed and in order of when users first begin typing regardless of when they hit the submit button.
>Seems like a good base if it's designed for chat in the first place
It's actually designed for decentralized text chatting, VoIP, and video calling. Of course we'd only want to use text messaging and change a few other things like removing a mandatory domain name for each node. Scroll down to the "How does it work?" section at the bottom.
https://matrix.org/blog/home/
▶ No.1028994>>1029088
>>1025657
>I want to see a digital world where only the people who generate hentai@home credit can post on my website.
>decide to duckduckgo hentai@home
>first result is FBI's recruit website
hmm
▶ No.1029039>>1029088
▶ No.1029088>>1029281
>>1028994
>2011+8
>not knowing what hentai@home is
>>1029039
I wasted my time reading the docs. Looks like he's just writing his own implementation of Matrix but without domain names required and with decentralized image storage and real documentation. He said he wants Grid to be federated with Matrix. The PoW he's describing is literal proof of work. Only well thought out and well backed ideas will be considered when coding things.
I'm talking about creating something brand new. Take Matrix's DAG idea (every new message references all previous tip messages), add a configurable hashcash PoW to each message, and use a gossip protocol like in Bitcoin to create decentralized full nodes. Full nodes will determine consensus rules, just like in Bitcoin. The advantage of not having to enforce a global universal state is that each node could determine the appropriate level of PoW required to be added to their DAG. They will still propagate all messages that conform to the global consensus rules. Thus nodes that choose to enforce higher PoWs than required will be subsets of the larger global graph. Light clients connect to full nodes to view and post messages. Higher level things like client level message blacklists created by full node operators could optionally be used for further reducing spam or optionally enforcing administrative policies at a given PoW level.
▶ No.1029249>>1029281
>>1028894
>DAGs can never provide an absolute global view of the network. Eventually nodes will become partitioned with some discussions only happening on a particular node.
IOTA chooses transactions to validate by computing random walks towards the tips, giving a higher probability to nodes with higher cumulative weight, to achieve some sort of "soft" consensus.
>IOTA Tangle only require attackers to have 34% of the global hash rate to take over and remove transactions
Unlike IOTA, if we choose not to prune orphaned posts we can make censoring almost impossible, at the cost of giving attackers the possibility to forge subtangles and manipulate the weight of older entries.
>in Matrix new messages refer to all previous "tip" messages.
I didn't know about Matrix. It's true it seems more appropriate. Although I wonder if it can handle spam.
>A general consensus of acceptable PoW levels would be an ongoing discussion for each node.
I agree. We might even set a target number of posts per second (preferably low, e.g. one every 10 minutes) which would automatically be adjusted according to the global hashrate. Not only would this help to mitigate spam without having to actively review parameters, but it would also discourage mindless posting no matter how popular the imageboard could get. This should be achieved with synchronized network time. At least it works if we forget the fact that shills definitely have more computing power than legitimate users.
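One way the automatic adjustment could work is a retarget rule like Bitcoin's: scale difficulty by the ratio of the observed post rate to the target rate. The clamp constants and function below are my own illustrative assumptions, not part of any spec.

```python
# Illustrative PoW retarget sketch (assumed design): keep the network
# near a target posting interval by scaling difficulty up when posts
# arrive too fast and down when they arrive too slowly.
def retarget(difficulty, posts_seen, window_seconds, target_interval=600):
    observed_interval = window_seconds / max(posts_seen, 1)
    # shorter observed interval (more posts) -> larger factor
    factor = target_interval / observed_interval
    # clamp the swing per window, like Bitcoin's 4x retarget limit
    factor = min(max(factor, 0.25), 4.0)
    return difficulty * factor
```

E.g. 120 posts in an hour against a 10-minute target hits the 4x clamp, while 3 posts in an hour halves the difficulty.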
>I'm personally working on a design for an improved version of DPoS
This sounds interesting. Do you mind sharing more details? The reason I won't use PoS is that I can't see a way to make it anonymous but I would gladly give up on PoW.
▶ No.1029281>>1029956
>>1029249
>to achieve some sort of "soft" consensus
Each node having their own view of the network is too much of a trade off when dealing with serious monetary systems. We'll see what happens when they turn off their central coordinator.
>giving attackers the possibility to forge subtangles and manipulate the weight of older entries
In a message oriented system long subbranches aren't an issue because no message takes priority over others and relative ordering doesn't matter. A message branch is "confirmed" (added to the graph) if and when the trailing message refers to a previously known message. There is no waiting. It either is or is not valid. See my image and explanation >>1028930 for offline merging and >>1029088 for my whole idea.
>We might even set a target number of posts / seconds (preferably low, e.g. one every 10 minutes)
I also think we should have a hard cap of incoming messages per second. Not quite that low though. 8chan experiences less than 1 post per second and 4chan experiences 9-10 posts per second on average. I think matching 8chan levels is a good starting point.
>which would be automatically be adjusted according to the global hashrate
I'm not advocating for a PoW arms race like how cryptocurrencies use it. See my previous posts. I would also argue against dynamically adjusting the acceptable PoW level on the node level because as you said, attackers can always raise the difficulty high enough that no honest users can reasonably post anymore.
>Do you mind sharing more details?
Basically anonymous coins using DPoS but publicly revealing your votes doesn't compromise your or anyone else's privacy.
▶ No.1029656>>1029657 >>1029763 >>1029765
I WOULD upload a bunch of big awesome directories, but this error keeps appearing. Asked help chat for help, the only person there was confused. Seems to be an error in Go itself (they provided the link).
https://github.com/golang/go/issues/11745
▶ No.1029657
>>1029656
Independent uploads (very inconvenient)
I physically stole these things and made copies before destroying the source. All wrapped with -w, so you should be able to see the directory names (right?)
ipfs://QmagA7ixdDe88cdyxCbZGtHfMcttrZ2sWWypgL855jCRsa
ipfs://QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn
ipfs://QmQLaJUKQ5gLJZdzrQ6faJmVsdFVH4GBBtpaoGtDeX6q4p
God damn, can't even upload small packages rn. wtf did they do?
Gonna keep trying here..
ipfs://QmUAsJPMWurh2XecDizt4ec7FETm4WJJHkACDyuRk1HPWo
ipfs://QmYU8naXg15E5nnFaogfC9a2ahnTegUe28g1gFUpVM5e8q
ipfs://QmYz7rwABzp1wKDeyGJCWs1FHgkbNcQzzrhszUTkvjMBEn
There are several directories I STILL can not upload due to this error. This annoys me greatly. Enjoy what I did upload.
▶ No.1029763>>1029765 >>1029833
>>1029656
>Note: This is actually just a bug in error reporting. That is, go reports one error when it should be reporting another. In this case, you likely don't have read permissions on one of the files you're trying to add to ipfs.
Does it error out on the same file every time?
Fixing the issue in Go won't help, it will just make the error message what it's supposed to be. You'd still have to fix whatever is wrong with the local file. Since you can add some files but not others it's probably permissions.
You should try running
cd /where/it/is
find . -type d -exec chmod 755 {} ";"
find . -type f -exec chmod 644 {} ";"
on the directory you want to add and see if it shows any errors. If not, try adding it to IPFS again.
▶ No.1029765
>>1029763
>>1029656
I forgot
chmod -R +rX directory
also works if your chmod supports it. Not sure what OS you're on.
▶ No.1029833>>1029837 >>1029842
>>1029763
Well slap my ass and call me Judy, it worked. When I physically stole these CDs I had full intention of pirating them for free on IPFS and the like, and it's been a long while since I could attempt to.
ipfs://QmV16dyLSoBnb7JMfpy7wgHh8SLKvfSGuA9opjCeifaDY5
ipfs://QmQK4fznj8im3mxNx3bMi4eyrjka8581EHs8uy7Dedsp3w
Also, IPFS Companion for Firefox no longer exists, so I downloaded the beta channel and it doesn't work. Wat do?
▶ No.1029837
>>1029833
Some misc now, also taken from physical media
ipfs://QmWQ3U5uqLTgxgt1TKJYiHesZjfKa248DsCm4qQGzUeSHw
ipfs://QmYfvxWyE81kArais9TVPe7e6bxzLaXYLEZmKi5fmud8FE
ipfs://QmR8XeoJnkAe2P1Kq5tNppHUf6Wz9GiwAvz7YNX22EQGGR
Bunch of movies I have saved to my machine
ipfs://QmXpgQDp8iDav42M5VN4wF5c354xrLiSgegYMDgboR8cxP
ipfs://QmY4UJMxkqaB4jLYGXghsQQHnYnSpcy1yfmsFFu1qYhAph
ipfs://QmeybnTMwFUsirCP6jebtGvLikJk56zTD9kyUvx8gbE7wH
ipfs://QmZfU3ybfYG8MQhNPqNYWj179EPMjoK3jR4NJacioPZHjd
ipfs://QmPRVdYh5qECazXBsqth7SbL33Wb7B9oK1jyWPakSoxEZm
ipfs://QmR6wuCjADqMQBkpLLevLAxehxPkigKDFLPf9doVDfgj61
ipfs://QmVxR78VHAZFWgU7EStGGrh2a1AkHwENjTRTRDaAHwN4SA
ipfs://Qmcp6FobaVBgNKAf88dgdL1UPEQZDnW8Um9kqA2gj4gC7m
ipfs://Qme6NYm7ny28eNLzMHJhcJvCsdgeFTCNPw2QM3DvW7UZuy
ipfs://QmVQGuymt5BLHYFgER2amm9mApdQoaiV3wDysv5vCbRbbK
ipfs://QmXtsoszmvijiRpsjHCJ1jxhaAqhQ8sXxTKdYqmgbj2WaY
ipfs://QmViWvJKsRRkT8D4ZAXpM8mvDoYGo9YmnpKFxpTbTyyo5W
ipfs://QmebWELSxDkAwyL5d19rSDZZrngWtkQzB7NAQJS6Guzk5s
ipfs://Qmd4HZJF1kaAUAeHZtpRwePM1SD2Y335fHFPGhHUTHx48F
ipfs://QmSbCFVQ6gy9gd9MmstSNnM49hRXDTUxvhhjXzJeDwCjww
This is it for now. Please mirror, enjoy, etc.
▶ No.1029842>>1029844 >>1029867
>>1029833
>IPFS Companion for Firefox no longer exists
It looks like Mozilla only removed it temporarily because the builds failed.
https://github.com/ipfs-shipyard/ipfs-companion/commit/bf9a69ed9dbc36d56e165c2dfaa83e833cd462ec
I don't know how their process works so maybe it's waiting on review from Mozilla, or maybe it has to be built by them.
I can't believe they deleted the whole page though.
I have 2.7 stable if you want to try it
ipfs://Qmb1dT9M2jEMLkB9HgidcQpwUkQ74wHVf4J3meFejpiZB9
But the Beta branch works fine for me. What's it doing wrong for you?
▶ No.1029844
>>1029842
The beta branch simply does nothing. I click it to see if there are options I can configure and the pop-up is a blank white window. No links are made clickable. I'll download your 2.7 version.
▶ No.1029845>>1029846 >>1029851
After uploading all those gigabytes to IPFS, my net as a whole is not working very well. Any tips to rectify this?
▶ No.1029846>>1029850
>>1029845
Restart everything like your computer, modem and anything else in between. Works 99% of the time, but if it doesn't, just reinstall your operating system.
▶ No.1029850
>>1029846
>if it doesn't, just reinstall your operating system
Lol, ok. I'm not that stupid
▶ No.1029851>>1029854
>>1029845
IPFS is P2P, when you add something to IPFS you're not uploading it, you're just making it available for download from that machine (while the daemon is running).
Your daemon also has to tell the DHT about it so people know you're providing it. IIRC someone in a previous thread (or this one) said that this is really inefficient right now, but is being worked on so maybe it will be different soon.
This only has to happen if the network doesn't already know about something (if a DHT record isn't expired). At least I think that's how it works for KAD-DHT, I don't know how theirs works specifically.
If it's even remotely sane this should only happen when you add something new or if you've been offline for a long time and come back online. You need to tell some peers that you have the content and where you can be reached which takes some bandwidth and opens a lot of P2P connections temporarily. Some ISPs probably throttle you when this happens.
▶ No.1029854>>1029867
>>1029851
I used "Upload" for want of a better word. My ISP is Cox Communications and I don't know if I am being throttled or not. However, I am only running this for 30 minutes now. It probably just needs more time. After that, my connection should normalize, right? According to the WebUI I am currently peered with 69 peers, but when I checked earlier it was at over 280. As I was typing that it jumped to 183. IDK then...
▶ No.1029867>>1029874
>>1029854
>After that, my connection should normalize, right?
It should, after the DHT records are propagated.
For multiple GBs at once that's a lot of records to provide, since each hash is made up of smaller hashes that also need to be provided.
For example check this out >>1029842
>ipfs ls Qmb1dT9M2jEMLkB9HgidcQpwUkQ74wHVf4J3meFejpiZB9
QmVXMmiViNa3LtwJDHhkNrqYY16XHgYLusEsAAPn7F4H5c 6831852 ipfs-firefox-addon@lidel.org.xpi
Shows the file but even the file itself is made up of smaller blocks (in BT these are called pieces/chunks)
You can see the individual blocks with the same command
ipfs ls QmVXMmiViNa3LtwJDHhkNrqYY16XHgYLusEsAAPn7F4H5c
So that 1 6MB file with a directory turns into 28 hashes that I need to tell the network I provide.
If the number of peers I need is 5 (I don't know the actual number) that's 140 DHT STORE requests that have to succeed. If we count just the hash as 46 bytes that's about 6.3KiB (6,440 bytes) of total DHT traffic to provide the folder to the network, and real requests have overhead and other shit.
Take into account you have a lot more records than that coming from GBs of files and they're all probably fighting for the same bandwidth.
If a STORE times out, you try another peer until it succeeds.
So forwarding the port might help you not waste bandwidth since you'll just connect to whoever is fastest and know the connection won't fail.
My knowledge of DHT comes from using ed2k, bittorrent, and Wikipedia though, not IPFS specifically. So it might work differently, or I could be flat out wrong.
They might only provide the root hash or something more optimal.
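For what it's worth, the back-of-the-envelope above works out like this (the replication factor of 5 is the same guess as in the post, not IPFS's real number):

```python
# Rough provide-traffic estimate: every block hash gets announced to
# several DHT peers, each announcement carrying at least the 46-byte
# base58 hash string (real messages add framing overhead on top).
hashes = 28        # blocks for the ~6MB file plus its directory
replication = 5    # assumed peers per provide record (guess)
hash_bytes = 46    # length of a base58 CIDv0 string

store_requests = hashes * replication
minimum_traffic = store_requests * hash_bytes
print(store_requests, minimum_traffic)  # 140 requests, 6440 bytes
```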
> I am only running this for 30 minutes now.
It depends on your upload bandwidth and other DHT nodes/peers.
>over 280
Right now I have around 900 but I leave my daemon up all the time (except when playing games). I also have my ports open, a fast connection, and am in the US so there are likely a lot of nodes wanting to connect to me that are also physically closer.
▶ No.1029874>>1029883 >>1029900
>>1029867
If I could, I'd have it running 24/7 on a dedicated archive/seedbox machine but I have no money and this is running on a laptop. My options are limited atm
▶ No.1029883>>1029886 >>1029900
>>1029874
is it some fuckhuge collection of memes and books again? let that shit die already if it is
▶ No.1029886
>>1029883
No, they're electronic textbooks and video games I stole, and various films/videos
▶ No.1029900>>1029953
>>1029883
Do not disparage the meme posters.
>>1029874
If you keep your connection up long enough for me to mirror it, I will host it until my next repo garbage collection. So far though I only managed to grab 2 blocks. So it might be working, but it's getting out there slowly. Maybe your connection is dropping in and out.
Hopefully your personal situation improves regardless.
▶ No.1029953>>1029965
>>1029900
I've kept it up for hours, what's your status?
▶ No.1029956>>1030028
>>1029281
>Each node having their own view of the network is too much of a trade off when dealing with serious monetary systems
The computational power required to assemble the main tangle has to be large enough to eclipse small subgraphs made by isolated or malicious nodes. Though I agree that unlike bitcoin, consensus is never 100% achieved. Only the probability that a transaction is valid is known, from the cumulative weight of all confirmations.
>I also think we should have a hard cap of incoming messages per second.
I have been thinking about how to implement it on a DAG and prove it from historical records, but I'm not sure. I might be missing something stupid. We could make nodes refuse to validate posts which were witnessed being created too fast or with too little PoW/PoS, but the problem is: can the spammer confirm his own posts and then merge into the main branch? We could let nodes vote against posts ([-1, 1] weight with each DAG link) but that opens an easy way to censorship (although always publicly visible). It isn't even fair for high latency nodes. Only the hard consensus of a blockchain provides a straightforward solution. Otherwise users have to decide individually, or it has to be coordinated by moderators.
>I'm not advocating for a PoW arms race like how cryptocurrencies use it.
Neither do I. I think I'll just wait for a better solution than PoW.
>attackers can always raise the difficulty high enough that no honest users can reasonably post anymore.
The main way to defend against it is to have the right amount of centralization through opt-in signed moderation. Nodes must agree that blacklisted posts don't count toward the speed limit so moderators are able to negate such attacks without cpu work. Some moderation could even be handled by neural networks.
>Basically anonymous coins using DPoS but publicly revealing your votes doesn't compromise your or anyone else's privacy.
Having untradable tokens to do anonymous staking would be nice but how would they be distributed? If they are automatically rewarded through time, how to protect against Sybil shills creating many fake identities?
We need to set rules in a way we don't even have to compete against corporate shills or spammers. We could trust the ability of people to process information comprehensively. For example, this vague idea: a board where an identity is required, but posting is anonymous through a zero-knowledge proof for user validity (ring signature). Misbehaving would get the identity tainted, then doing so repeatedly would increase the probability of being blacklisted. This could work as an anonymous Twitter too (only the public messages from followed users get displayed but it's impossible to tell identities apart, adding and removing follows would be random but statistically optimal after enough votes for one user).
▶ No.1029965>>1029970 >>1032712
>>1029953
ipfs get QmV16dyLSoBnb7JMfpy7wgHh8SLKvfSGuA9opjCeifaDY5
Saving file(s) to QmV16dyLSoBnb7JMfpy7wgHh8SLKvfSGuA9opjCeifaDY5
7.99 MiB / 1.33 GiB [>---------------------------------------------------------------------------] 0.59% 37d5h12m28s
I tried connecting to you directly (via peerid of the only provider for that hash) but it kept timing out so there must be some connection problem going on. You're only reachable rarely. This shows some kind of network problem because regardless of the DHT I should be able to connect to you.
▶ No.1029970>>1032712
▶ No.1030028
>>1029956
>I have been thinking about how to implement it on a dag and to proves historical records, but I'm not sure
Easily, the node software has a queue for incoming messages and only relays them at a fixed rate, say 2 messages per second. Honest nodes won't send or relay messages over twice per second and will block nodes that submit messages more than twice per second. This weeds out spam nodes that try to submit messages at much higher rates. It becomes a consensus rule.
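A minimal sketch of that queue (the `RelayQueue` name and shape are my own, purely illustrative): messages buffer up and get relayed at most RATE per second; anything a peer pushes faster than that would get it blocked by honest nodes.

```python
import time
from collections import deque

RATE = 2  # assumed consensus rule: max relayed messages per second

# Fixed-rate relay sketch: incoming messages queue up and are handed
# out at most once per 1/RATE seconds, smoothing out bursts.
class RelayQueue:
    def __init__(self, clock=time.monotonic):
        self.queue = deque()
        self.clock = clock        # injectable for testing
        self.last_relay = None

    def submit(self, msg):
        self.queue.append(msg)

    def poll(self):
        # relay the next message only if enough time has passed
        now = self.clock()
        if self.queue and (self.last_relay is None
                           or now - self.last_relay >= 1.0 / RATE):
            self.last_relay = now
            return self.queue.popleft()
        return None
```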
>We could make nodes not validating posts which were witnessed being created too fast or with too title PoW/PoS, but the problem is that can the spammer confirm his own posts and then merge into the main branch?
Messages can only be broadcast at a fixed rate. If I'm connected to 8 nodes, the overall incoming message rate will be 16 messages per second if each node is limited to 2 messages per second. Individual nodes can connect to any number they want but a minimum of 8 would be recommended.
>The main way to defend against it is to have the right amount of centralization through opt-in signed moderation
I agree but on the node or client level, not the overall network level.
>Nodes must agree that blacklisted posts don't count toward the speed limit so moderators are able to negate such attack without cpu work
Blacklists shouldn't be network wide. They should be applied on the node or client level. Multiple nodes can contribute to shared message blacklists. One of the great qualities about PoW is that it can be very hard to produce but very cheap to verify. To verify all you have to do (simplified) is hash the message and check it against the PoW hash. It takes milliseconds to verify a message with an arbitrary PoW difficulty.
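The verify-cheap/mint-expensive asymmetry looks like this in a simplified hashcash-style sketch (my own toy scheme, not the real hashcash header format): verification is a single hash, while minting brute-forces nonces.

```python
import hashlib

# Hashcash-style PoW sketch: a message's PoW is valid if
# sha256(message || nonce) has `difficulty_bits` leading zero bits.
def verify_pow(message: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    value = int.from_bytes(digest, "big")
    # checking costs exactly one hash, regardless of difficulty
    return value >> (256 - difficulty_bits) == 0

def mint_pow(message: bytes, difficulty_bits: int) -> int:
    # the expensive part: brute-force a nonce (~2^difficulty_bits tries)
    nonce = 0
    while not verify_pow(message, nonce, difficulty_bits):
        nonce += 1
    return nonce
```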
>but how would they be distributed?
>If they are automatically rewarded through time, how to protect against Sybil shills creating many fake identities?
I'm talking about a regular cryptocurrency not anything special. DPoS works with 1 coin being 1 vote. Votes are cast by locking up coins for a small period of time and signaling your vote for specific nodes to be allowed to produce blocks. Once the voting round is over (it happens periodically) the top 30 or so voted nodes get to "mine" blocks at a fixed rate and in a fixed order. Free market competition drives the limited amount of block producers ("miners") that are voted for to give away a large portion of their block reward to their voters. They're basically buying votes. This is a good thing because it distributes coins most to those who secure it, the voters. Most nodes will give away between 70% to 90%. If the miner node violates consensus rules their block will be rejected and both they and their voters won't get paid. This will lead voters to vote for another node who follows the rules because they directly profit from it. It's a pretty good self regulating system with no power wasted on heavy calculations. The only weakness I see is that attackers could take advantage of low voter turnout to get 51% of the votes to vote themselves as "miners". They would profit the most but the system itself would still continue to be secure if checkpoints are included. The worst they could do to maximize their profit would be raising transaction fees. That's the same outcome as PoW but with way less capital needed to sustain the system.
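The election step of the scheme described above can be sketched in a few lines (a toy model with made-up names, not a real DPoS implementation): locked coins are weighted votes, and the top-N nodes by total weight become the fixed-order block producers for the round.

```python
# Toy DPoS election sketch: tally coin-weighted votes and pick the
# top-N nodes as block producers for the next round, in fixed order.
def elect_producers(votes, n_producers=3):
    # votes: list of (producer, coins_locked) pairs
    totals = {}
    for producer, coins in votes:
        totals[producer] = totals.get(producer, 0) + coins
    # rank by total locked coins, ties broken by name for determinism
    ranked = sorted(totals, key=lambda p: (-totals[p], p))
    return ranked[:n_producers]

votes = [("alice", 50), ("bob", 30), ("carol", 30),
         ("dave", 10), ("bob", 25)]
producers = elect_producers(votes)
# bob: 55, alice: 50, carol: 30, dave: 10
```

A misbehaving producer's blocks get rejected, so its voters earn nothing and rationally redirect their weight next round.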
>a board where an identity is required, but posting is anonymous
I suppose that could work if you wanted a whitelisted board. Ring signatures can be used.
>posting is anonymous through a zero-knowledge proof for user validity (ring signature)
Looks like you don't understand zero-knowledge proofs. They can only prove an unknown variable is within a range of possible values. In the case of Monero, they prove all inputs to a transaction are equal to the outputs. This means the transaction's net value cancels out to zero. This proves someone can't create coins from nowhere and some unknown valid amount is being transferred. Only the sender and receiver know how much was transferred.
>Misbehaving would get the identity tainted, then doing so repeatedly will increase the probability of being blacklisted
If you use ring signatures you can confirm all the given signatures for a message are whitelisted. That means any one of the whitelisted users whose signature was used could have posted the message. It's impossible to determine which in the group of signatures was the actual message sender. That's the whole point. If you were to profile undesired posts you would be forced to correlate message content with multiple posts that share a common signature and deduce the bad actor relatively easily. If you did that, it defeats the whole purpose of the ring signature in the first place. You can't have your cake and eat it too.
▶ No.1031160>>1036812
Looks like Filecoin is more or less public, but at the moment it's only good for research and devs, I guess.
https://filecoin.io/blog/opening-filecoin-project-repos/
https://github.com/filecoin-project/specs
https://github.com/filecoin-project/go-filecoin
>Please note: Filecoin is in heavy development. Code is changing drastically day to day. At this stage, the repos, devnets, and other resources are for development. This release is aimed for developers, researchers, and community members who want to help make Filecoin. Miners and users who seek to use Filecoin will want to wait for a future release (likely, the testnet milestone).
▶ No.1032712
>>1029970
>>1029965
My connection is significantly better now and I am at 610 peers and growing. Please continue to try, or retry if you stopped.
▶ No.1035190>>1035793 >>1036812
Time to share some shite
/ipfs/QmWDrYpZwEpUXcnZv8abLNDVsP4eTW27A1SryuoQUvdReB/
▶ No.1035761>>1036812
▶ No.1035790
>>1025657
What if we fork filecoin and make a version where coins cannot be transferred between wallets, only people with a certain amount of hentai@homecoin are allowed to make posts, and seeding everyone else's posts is the only way to gain hentaicoin (though you can just buy wallets, so maybe this isn't necessary). It isn't spent to post, and once you're over the requirement you could spam the network but there's an opt-out moderation system to offset this. You subscribe to a moderator group and will never see posts that get filtered by your mods, so one group can automatically filter every post made by a user marked as a spammer. The spammer will have to make new accounts to continue spamming and host the site for everyone.
Also, I'm not an expert on crypto stuff but I have an alternative way to allow for anonymity in this system. There can be "post tumbler" services similar to VPNs which are accounts that post anything sent to them from approved users, those users being anyone that hasn't been filtered for spamming. If a post tumbler allows spammers to post through them then the spam mods can just filter the tumbler. Of course these are centralized and go against the idea of the decentralized service, but just like the moderator groups if people don't like one service they can just use another one. If a service becomes compromised the filter can be set up to filter any post from them after a certain date.
Lastly, I think a generic social media backend would be interesting, pretty much every social media site is just posts with linked media and chains of replies. Clients could be made to mimic anything from anonymous imageboards to facebook to twitter, and if a client sucks you can move to an alternative one without having to abandon the userbase. If people don't want to be anonymous or interact with anons they can just filter the tumblers and put all their personal info on a "profile" (which would just be a website on IPFS whose IPNS name is their public key).
▶ No.1035793>>1036812 >>1038155
>>1035190
And a little more
QmVtgbpMSKEufQocyMaQgk4RbG6eW8eJH34qRbS63TVw8i
▶ No.1036812>>1037267 >>1038155 >>1038797
>>1031160
Anyone find anything interesting?
>>1035190
>>1035793
That page is CUTE
>>1035761
Things seem to load much faster than they used to. Even when I add something and try to get it through the gateway, it shows up pretty quick.
>experimental AutoRelay and AutoNAT
Really excited to see these since people either don't want to, don't know how to, or can't open ports.
I hope it's on by default in the next release.
▶ No.1037267
>>1036812
> I hope it's on by default in the next release.
AutoNAT is already enabled by default, not sure about AutoRelay though. They elaborate that it uses other IPFS peers (described as relays) to get around a NAT.
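If you want to flip them on now rather than wait, the keys live under Swarm in ~/.ipfs/config (these are the key names as I understand them; double-check against the experimental features doc before trusting me):

```json
"Swarm": {
  "EnableAutoRelay": true,
  "EnableAutoNATService": true
}
```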
▶ No.1038155
>>1036812
>Things seem to load much faster than they used to. Even when I add something and try to get it through the gateway, it shows up pretty quick.
I agree, using this hash here >>1035793, directories load much quicker. Looks like IPFS is actually becoming viable! The only problem is that files aren't loading, but that may be because peers aren't available or my router is blocking something.
▶ No.1038483>>1038492 >>1038536
I ask for your help, /tech/. After adding a bunch of files to IPFS is there anything else I need to do in order for the network to be able to access them?
I can access the file by going to my own hosted node (http://localhost:32148/ipfs/QmUwsKgJxpxtEgqWrwjuYFbudDgm3JchPvy9TcK3LPuUFT), but if I want to use an external node I just can't access the files. It times out or just throws an error. I tried both ipfs.io and ipfs.infura.io
How long does it take for ~100 GB of files to reach the network?
I'm using version 0.4.19
▶ No.1038492>>1038495
>>1038483
>~100 GB of files
I'm guessing you probably have these in a fuck-huge directory with hundreds of files, while at the same time having a slow internet connection.
Maybe try organizing these into more manageable sub-directories?
Alternatively, it might just be that your hosted node is having trouble connecting to the network?
▶ No.1038495>>1038517 >>1038874
>>1038492
Yup. More than one thousand files.
I typed "ipfs swarm peers" and I see a bunch of IP addresses and stuff. Is that sufficient evidence my node is properly connected to the network?
But yeah, there's close to no traffic according to the WebUI so maybe that's the problem.
▶ No.1038517>>1038552 >>1038797
>>1038495
How did you add your files? It's possible your node didn't announce that you're a provider for the hashes in question, in which case no nodes know you have those files in the first place.
Try to run:
ipfs dht provide <cid>
and then try again?
▶ No.1038536>>1038552
>>1038483
this may be a dumb question, but are your ports open? open port 4001, tcp only. you could alternatively use UPnP, but that isn't as safe.
▶ No.1038552>>1038555
>>1038536
It is a dumb question but since I'm retarded it was the appropriate one. I guess that's what I deserve for using DHCP. Now the port is open and traffic is flowing.
>>1038517
It seems this is also a needed step. Is there a way to batch provide all the CIDs or should I just create a script?
But yeah, now a test video I added is working.
THANK YOU!
▶ No.1038555>>1038797
>>1038552
They should be provided by default when adding, and I think they're provided at an interval regardless of how you added them (while offline or not).
You don't have to manually provide them, but they were probably not available after forwarding the port because the provide hadn't happened automatically yet.
If this is wrong, someone please correct me.
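For the batch question: a quick sketch that should do it, assuming a running daemon. `ipfs pin ls --type=recursive --quiet` and `ipfs dht provide` are real commands; the wrapper function is just my glue.

```python
import subprocess

def provide_all(run=subprocess.check_output):
    """Re-announce every recursively pinned root CID to the DHT."""
    # --quiet prints bare CIDs, one per line
    out = run(["ipfs", "pin", "ls", "--type=recursive", "--quiet"])
    cids = [line.strip() for line in out.decode().splitlines() if line.strip()]
    for cid in cids:
        run(["ipfs", "dht", "provide", cid])
    return cids
```

Calling `provide_all()` uses the real CLI; passing a stub function instead lets you dry-run it.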
▶ No.1038797>>1038874
>>1038555
>>1038517
Not providing CIDs for blocks you possess defeats the point of IPFS, so that should not be the problem.
As for ports - how many peers is 'a bunch of IP addresses and stuff'? With forwarded ports and around 800-900 peers, I still get random delays between `ipfs add` and anything becoming visible at the ipfs.io gateway.
>>1036812
Thanks :3
And more junk: QmbxsmdSN9wt7SSFpRpdYnjk5A4PNc2H2vLhCsHaadHe96
▶ No.1038874>>1038897 >>1039037 >>1047335
>>1038797
I'm >>1038495
It doesn't matter because now I have a different problem altogether. Ever since opening port 4001 (i.e.: getting real traffic) my shit-tier Technicolor cablemodem router DPC3848VE craps itself after about 10 minutes of running ipfs daemon.
I'm thinking about getting a new router and using that piece of crap in bridged mode but at the same time I really don't feel like buying a new router just for IPFS, especially since I live in a third world country and everything is expensive as fuck.
Disabling MDNS helped a little, but not enough.
▶ No.1038897>>1039030
>>1038874
Start your daemon with the flag "--routing=dhtclient". That should help stop your router from shitting itself.
▶ No.1039030
>>1038897
Didn't help, crapped itself after less than 5 minutes. Thanks for your suggestion.
▶ No.1039037>>1039255
>>1038874
dht is just really heavy. no consumer nat router can handle it well. they are designed for normie web browsing, not for thousands of active connections at the same time, and that's what dht creates.
▶ No.1039255
>>1039037
Why is it done at the router level? How much router RAM is needed in order to use it without problems?
▶ No.1039270>>1047532
>>1007543
Have you ever wondered why an empty folder shows a size of 0 bytes even if it has a long name? The name is not part of the file: if you dump a file's raw bytes you won't find its filename in a header, because it isn't stored in the file. This is also why, when you wipe a hard drive and have to do data recovery, you lose all the filenames and metadata; that stuff is all stored separately in the file system.
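Easy to check the file half of this yourself (plain Python, no assumptions beyond a writable temp dir):

```python
import os
import tempfile

# An empty file reports size 0 no matter how long its name is:
# the name lives in the directory entry, not in the file's data.
d = tempfile.mkdtemp()
path = os.path.join(d, "a_very_long_filename_that_adds_nothing_to_the_size.txt")
open(path, "wb").close()
size = os.path.getsize(path)  # 0
```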
▶ No.1041943
>>984228
>Numbering threads is a tradition
Not when it's an eterna-bread, newfaggot.
▶ No.1047335>>1047364
>>1038874
23 days later I am here to tell you that you can get a netgear R7800 from china for $70.
I have an Asus AC68U and my node hovers around 600-800 peers. I also run H@H. No issues so far. The R7800 is an even better router.
▶ No.1047364>>1047373
>>1047335
Well, I didn't want to turn this into my personal blog but I ended up buying a router and my cablemodem still craps itself, even in bridged mode. Maybe it's the sheer amount of connections or something. I don't even know how to diagnose it.
▶ No.1047373>>1047476
>>1047364
What is "bridged" mode? How many peers do you get?
I'm running on a home connection that also relies on an ISP modem. I have it in bridge mode and it works fine. Router says 1500 connections.
▶ No.1047476
>>1047373
"Bridged mode" is the cablemodem's term for not acting like a router.
More than 900 peers last time I tried.
▶ No.1047532
>>1039270
>memory of a file
>that stuff is all stored in one place in the file system
Are you trying to remember programming lessons you took back in 1993? Because that sounds like a mangled description of FCB & FAT.
▶ No.1047569>>1047677
>>1047336
Is that actual memory usage or just allocated/reserved memory usage?
▶ No.1047677>>1047722
>>1047569
Pretty sure it's actual usage.
It's up to 528MB now
▶ No.1047722>>1047740
>>1047677
My ipfs running in the background is using 6MB, you ginormous faggot, and I have many connections. Did you compile it from source? Did you set how many peers are allowed to a saner value than 2000 in the config, i.e. "HighWater" and "LowWater" being the max and min connections? If not then you are a faggot who needs to go back.
▶ No.1047731>>1047733
I've never looked at IPFS thing, convince me to try this.
▶ No.1047733>>1047827
>>1047731
IPFS is just bittorrent but without the bloat and duplication of files via multiple torrents. Do you have a file you want p2p'd and don't want jewgle to take it down (i.e. piracy)? Are you a reporter that's not a fag and will get censored by the cabal? Are you a giant media organization trying to save costs by making all your clients act as CDNs, thereby saving you electricity, bandwidth, and time, and keeping your files up even after your organization ends or changes, all without you having to worry about it after publishing? Then IPFS is for you.
▶ No.1047740>>1047745 >>1048030
>>1047722
Who gives a shit, I just download it and run it, I'm not autistic enough for your bullshit.
And it only has ~900 peers at most.
542MB now
▶ No.1047745>>1047751
>>1047740
Ok then, you can still adjust the config file under ~/.ipfs/config to change the highwater and lowwater values. You could also update to the latest version instead of whatever you are using. Where are you downloading ipfs from?
install gentoo faggot
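For reference, the section in ~/.ipfs/config looks like this (Swarm.ConnMgr is the real key; the numbers here are just example values, tune them to what your router can take):

```json
"Swarm": {
  "ConnMgr": {
    "Type": "basic",
    "LowWater": 100,
    "HighWater": 400,
    "GracePeriod": "20s"
  }
}
```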
▶ No.1047751
>>1047745
0.4.19
What do you have yours set to?
▶ No.1047827>>1048189
>>1047733
>IPFS is just bittorrent but without duplication of files via multiple torrents.
This. Much better than the official description, which is full of buzzwords and outright bullshit about IPFS being persistent storage.
▶ No.1048030>>1048034
>>1047740
>682MB
I'm suspecting some kind of memory leak now
▶ No.1048034
>>1048030
maybe ipfs checks available ram to decide how big to make some cache, and maybe that anon has way more ram than you?
just guessing though as I haven't used ipfs yet.
▶ No.1048081
>>1048047
>666M memory usage
>ipfs daemon
Pretty obvious :^)
In all seriousness I'm quite sure computers and the Internet aren't just "happy accidents" and were created with malicious intent.
▶ No.1048189>>1048508
>>1047827
IPFS is like 5 projects: IPFS, libp2p, multiformats, IPLD, Filecoin.
https://protocol.ai/projects/
>IPFS
web protocol that is supposed to replace HTTP
>IPLD
abstract data model (requires high IQ)
>multiformats
self-describing data. Numbers that tell you their base, hashes that tell you their hash function, etc.
>libp2p
network stack (can be used separately from IPFS)
>filecoin
paid storage on IPFS
▶ No.1048298>>1048329 >>1048455
Anyone can correct me if I'm getting something wrong or if I'm being too optimistic.
I think one of the best things about IPFS is that it makes it possible to create and sustain websites like YouTube for astronomically less money than what Google pays to keep it up. You just outsource storage to everyone else (the users), and they will do it willingly because they will be getting paid with Filecoins for storing your shit. It's genius really, as you need an incentive for people to lend anything; whoever came up with Filecoins is clearly really damn smart to look this far ahead and come up with this shit.
▶ No.1048329
>>1048298
Yup. It's just about getting the implementation right but getting it right is not that simple. I guess we'll see profitable IPFS usage around 2023.
▶ No.1048455
>>1048298
>whoever came up with Filecoins is clearly really damn smart to look this far ahead and come up with this shit.
The founder says in basically every conference talk, that few to none of the ideas in IPFS are new, and that IPFS is just an aggregation of existing, and proven concepts.
See the images here even
>>1000218
>>1000219
IPFS could probably better be summed up as "git over bittorrent" in concept. Neither of those is new, but the combination would be good. The challenge is in implementing it. Reminds me of PARC, and how lots of companies basically just took their research and made it widely available.
But in this case it's more like rounding up standards, and implementing your own modular variant of it, rather than rounding up single products.
That concept in itself is the IP thin waist model.
Not that it has to be original, I'm not knocking them for it. The concept may be simple and unoriginal, but furthering an interface and then implementing it is no easy task. I'm reminded of all the poorly extended forks of p2p tools like amule. Also failed attempts like Bittorrent DNA, which probably would have worked fine if it wasn't proprietary (a p2p network without peers isn't very useful).
>for astronomically less money
I'm honestly surprised people are not attacking them aggressively because of it. It disrupts the necessity for a lot of services. I'm thinking CDNs, DNS, HTTP, and other services people pay to host, that are basically eliminated so long as you have any internet connected box and can run a command that's 2 words.
▶ No.1048508>>1048711
>>1048189
That's exactly what I was referring to as 'buzzwords and outright bullshit'.
>web protocol that is supposed to replace HTTP
Bullshit. Makes as much sense as 'torrents are replacement for email attachments'.
>IPLD
DAG of content-addressed nodes. It's a paragraph in the IPFS documentation that for some reason is called a 'project'.
>multiformats
Ultimate hipster shit. Take something that has been practised since programming stopped involving physical rewiring, and is so ubiquitous that nobody even notices it - and give it a brand name.
▶ No.1048620>>1048630
is there any way to search files on the ipfs network (like trackers and dht with bittorrent)? i'm not very familiar with it, so that could be a stupid question.
▶ No.1048630
>>1048620
Read the thread.
ctrl + f "search engine".
▶ No.1048711>>1048718
>>1048508
>Bullshit. Makes as much sense as 'torrents are replacement for email attachments'.
There is no reason IPFS cannot replace HTTP.
>It's a paragraph in IPFS documentation, that for some reason is called 'project'.
IPLD is a set of specifications. You are probably looking at the wrong repo if all you are seeing is a paragraph. Look at ipld/specs instead of ipld/ipld
>Ultimate hipster shit. Take something that was practised since programming stopped involving physical rewiring, and is so ubiquitous that nobody even notices it - and give it brand name.
But that's wrong you idiot. They have a whole fucking section explaining why that isn't the case. Copy paste:
Q. Is this a real problem?
A. Yes. If i give you "1214314321432165" is that decimal? or hex? or something else?
Well, which one is it nigger?
And that's just bases. SHA-256 vs SHA-3, can you tell the difference?
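The whole point is that you don't have to guess: it's two prefix bytes. Quick sketch of a sha2-256 multihash (0x12 really is the sha2-256 code in the multihash table; the helper function is just for illustration):

```python
import hashlib

def multihash_sha256(data: bytes) -> bytes:
    """sha2-256 multihash: <fn code 0x12><digest length 0x20><digest>."""
    digest = hashlib.sha256(data).digest()
    return bytes([0x12, len(digest)]) + digest

mh = multihash_sha256(b"hello")
# mh.hex() starts with "1220": any parser can tell the hash function
# and digest length without guessing.
```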
▶ No.1048718>>1048794
>>1048711
>replace HTTP
Ok, let's start from beginning.
What features does HTTP provide?
>Look at ipld/specs
Looked at it. It's still DAG+content addressing.
>you are seeing is a paragraph
I don't see 'a paragraph'. I see a paragraph's worth of information, inflated to look like a full standard.
>real problem
Yes, it's a problem. And it was solved decades ago, from tagged pointers (first LISPs) to URNs and glibc's crypt.
multiformats is the design equivalent of the leftPad js library.
▶ No.1048730>>1048731
So as IPFS stands today, could I host a website using it? Assuming of course I was able to convince my users to install an IPFS client.
▶ No.1048731>>1048733
>>1048730
Your users could just use a gateway
It'll probably be very slow when you're just starting out though.
▶ No.1048733>>1048794
>>1048731
I don't care about users needing to install software. I've been playing around with wireshark recently to figure out how acestream works because I want to make my own free version of it. All I really need for it is some way of hosting dynamic content (eg a constantly updating m3u8 playlist), and the video chunks linked by that playlist. I am new to IPFS, but this should be possible using IPNS right?
▶ No.1048742>>1048794
Not inherently related to IPFS, but if I get a Intel NIC (currently using realtek) will it handle the many connections IPFS makes better?
▶ No.1048794>>1048797 >>1048870
>>1048718
>multiformats is design equivalent of leftPad js library.
If something is important, it should be standardized ESPECIALLY if it's a solved problem.
The complexity or size of it doesn't matter, since it's just as arbitrary as the rest of the features of a specification. If we all agree that this is how it should be then there's no problem in having a spec devoted to it. There's no ambiguity, no "de facto" bullshit, and extensions and modifications to it are comparable and explicit. A multihash is a multihash, as described in the standard.
Not having it defined is just as bad as not having fixed-size ints defined; those fit in much less than a paragraph as a concept, but they're obviously important, and they're the same kind of thing: a type declaration/specification.
Fundamental building blocks of technology should be varied in size like this, it's basically the entire point.
Also it's ironic for you to point at LISPs and then speak like leftpad is a problem when high modularity and reuse is a core LISP concept. Everything problematic about that package is meta, and mostly centered around the removal incident. The package itself in concept is an ideal. Having the potential to disappear is not a library problem, it's an infrastructure problem. Discouraging reuse for any reason (slow package manager/interpreter/loader/compiler/etc.) is a tooling problem. In a perfect world there would be no penalty or risk for importing dependencies.
>>1048733
Depending on the frequency, you'll want to look into these things in order of slowest to fastest
IPNS, pubsub, IPFS P2P sockets / LibP2P streams.
IPNS publish takes a few seconds to propagate to the entire network.
pubsub has peers subscribe to a stream, and they can all publish to that stream, so it's like a shared broadcast channel. It's as fast as the broadcast, which can be sent multiple ways (floodsub = you broadcast to everyone, gossipsub = some kind of message relay)
sockets are sockets, you connect peers via their peerid, either by wrapping go-ipfs's p2p command or using libp2p directly. It's as fast as the connection.
https://blog.ipfs.io/25-pubsub/
https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#ipfs-p2p
I'd recommend pubsub for this kind of thing. It seems like the easiest way to do it without implementing your own complex peer management and broadcasting, but you can still have that if you need it. Like if you need to transmit data to a specific peer and not the entire group, you still can.
i.e. if a peer joins the pubsub topic, they announce they joined in the topic, you can grab their peerid from that announce message, establish a connection with them, send them the current header for the playlist, and then they'd be in a ready state for whatever the current stream data is supposed to be (probably text multihashes to video fragments in this case).
But it all depends on what you want to do and how you want to do it. You could easily just publish to IPNS and be done in 1 command if you make sure your stream delay is large enough to populate enough fragments before each publish. But it seems less than ideal.
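If it helps, the floodsub model itself fits in a few lines (toy code just to show the shape; the real interface is go-ipfs's `ipfs pubsub pub`/`sub` commands):

```python
class FloodSub:
    """Toy floodsub: every message on a topic goes to every subscriber."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic, handler):
        self.topics.setdefault(topic, []).append(handler)

    def publish(self, topic, msg):
        for handler in self.topics.get(topic, []):
            handler(msg)

bus = FloodSub()
received = []
bus.subscribe("stream", received.append)
# The streamer announces each fragment's hash as it becomes ready;
# "QmFragment1" is a made-up placeholder, not a real CID.
bus.publish("stream", "QmFragment1")
```

gossipsub does the same job but relays messages through a subset of peers instead of the publisher flooding everyone directly.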
>>1048742
There's too many variables. The nic model, the driver stack, the network config.
You're most likely to encounter problems at the router level because router vendors hate you.
▶ No.1048797
>>1048794
The Intel NIC is an EXPI9404PT. The realtek is an RTL8111G (built into my MB).
Drivers are just linux mainline kernel drivers. No special network config.
My router doesn't seem to have any issue either. It reports about 1100 connections (total, not just ipfs).
▶ No.1048870>>1048876 >>1048890
>>1048794
Thank you. I was planning on essentially duplicating the HLS protocol and having the client refresh the master playlist, but pubsub seems like a much more direct way of doing what I want, I'll just broadcast the hash of each chunk as it becomes ready. I doubt I'll need to do any sort of peer management as I'll just be broadcasting an immutable list of chunks, then the client will request the chunks it requires as HLS does, but it's good to know I've got the option if necessary.
But from that IPFS blog link:
>As it is today, the pubsub implementation can be quite bandwidth intensive. It works well for apps with few peers in the group, but does not scale. We have designed a more robust underlying algorithm that will scale to much larger use cases but we wanted to ship this simple implementation so you can begin using it for your applications.
So it isn't ready just yet, but as long as there's eventually some sort of peer hierarchy so the original host doesn't have to broadcast to each client individually, this should work just fine.
▶ No.1048876>>1048890
▶ No.1048890>>1049078
>>1048870
>So it isn't ready just yet, but as long as there's eventually some sort of peer hierarchy so the original host doesn't have to broadcast to each client individually, this should work just fine.
I think that's what gossipsub is trying to solve.
https://youtu.be/vveUuE7YlZ8?t=293
>>1048876
Very cool. I'd love to see someone take it further and have the server be standalone instead of a proxy to a container though (dropping the legacy stream). Probably wouldn't be a hard thing to convert.
I'm real interested in seeing how viable P2P streams are. Sometimes I do want to just stream my desktop with like 10 people, and I'd love that to be doable without relying on external services or shouldering the entire bandwidth requirement yourself. If I can stream and just have other peers relaying the blocks, the delay shouldn't be as bad as existing amplification solutions. I wonder how Skype handles all this.
▶ No.1049078
>>1048890
>I'd love to see someone take it further and have the server be standalone instead of a proxy to a container though
I think that would be fairly difficult to do practically. Someone streaming would need to be broadcasting (ideally) to at least 3 or 4 peers himself so the stream doesn't immediately cut out if a peer drops off, and that would be quite challenging for a client trying to stream HD video off of a home connection. Doubly so if you're streaming on wifi, or on some sort of conference's internet.
▶ No.1049575>>1049747
▶ No.1049747>>1049778
>>1049575
>RationalWiki
because everyone knows the far-left is rational
▶ No.1049778>>1049783
>>1049747
Corbett is baby's first /pol/ come on
▶ No.1049783>>1050045
>>1049778
I've got no idea who corbett is, just pointing out rationalwiki is only good for a quick laugh.
▶ No.1050045
>>1049783
>rationalwiki is only good for a quick laugh.
Uh yeah, i know, that's why I linked it?
▶ No.1050146>>1050147 >>1050148 >>1050235 >>1050334 >>1050347
Why has IPFS not become huge with the recent crackdown on piracy? Like a lot of old trackers getting shutdown and whatnot? Or is there a large 'underground' community that I'm not aware of?
▶ No.1050147
>>1050146
maybe it's harder to use than torrent magnets. public "trackers" are just magnet lists now.
▶ No.1050148>>1050171
▶ No.1050171
▶ No.1050235
>>1050146
IPFS is inferior to torrents for every use case
▶ No.1050334
>>1050146
It's not finished yet.
You can share multihashes today in the same way you share magnet links. With the index being just a file in IPFS itself, it's hard to take it down without going after every node hosting it.
IPFS will likely gain popularity when anonymous routing is integrated into the main branch.
It would make openly distributed indexes simpler to make and use, while harder to shut down as well.
There was something like this
/ipfs/QmXta6nNEcJtRhKEFC4vTB3r72CFAgnJwtcUBU5GJ1TbfU
that used some json hosted over IPNS so the client was static but the index was dynamic.
If you scrape the DHT you can just index the entire network as a distributed database. Some people are doing this already.
https://torrent-paradise.ml/about.html
(https://github.com/urbanguacamole/torrent-paradise)
https://ipfs-search.com/#/search
(https://github.com/ipfs-search/ipfs-search)
▶ No.1050347>>1055452
>>1050146
Because 99.99% of people aren't autistic like you. If you want it to become "huge" then you need to make it accessible to people who have less than 20 centimeters of beard on their neck.
▶ No.1055452>>1055457 >>1055467
>>1050347
https://orion.siderus.io
There, now answer me again, WHY has IPFS not caught on? And don't tell me it is "shit marketing"
▶ No.1055457
>>1055452
>some obscure meme address that doesn't mean anything nor make reference to ipfs
>website doesn't even load without javascript
Do I even need to answer?
▶ No.1055467>>1055510
>>1055452
My brother is relatively normie tier. He asked me for help pirating textbooks this fall. First he tried looking them up on the Pirate Bay, but they weren't there. Then I showed him libgen, which is where he found them.
The two major problems for pirates are:
1. you need to be able to find the files you're looking for.
2. you need to be able to rely on the file being available (have peers)
For web based platforms like HTTP and IPFS, 1 is basically impossible. Look at the money google pulls in doing a shit-tier job of this. 2 is unsolved in bittorrent, and IPFS (afaik) does nothing to improve on this. This is why libgen beats torrenting here, because you know the file will be there.
Okay I installed orion. How is this supposed to help non-autists? I probably have autism, and I have no clue what to click on to start downloading things. Do you really think I could suggest this to my brother and have him start pirating his books with it? Fuck, do you think I could recommend this to my brother and have him do anything with it? If this is what normie-tier software looks like in the IPFS world, then it has a really long way to go.
▶ No.1055491>>1055503
You could just install the ipfs browser plugin with ipfs in the background. I don't know what orion is but the plugin seems pretty normalfag friendly. Just find an ipfs address on the web and click; the download starts, assuming it was pinned and/or there's peers, with a nice GUI showing the progress.
▶ No.1055503
>>1055491
this isn't normie friendly at all. Normies don't normally run daemons. "Assuming it was pinned and or there's peers" is way too strong an assumption. TPB sorts purely on peer count for a good reason. ipfs-search.com only hosts dead links. First search I tried: last seen a month ago, last seen ten days ago, last seen 4 hours ago ... what about things I can download right now? What niggerfaggot normie is going to sit there for ten days waiting for the resource to come back up? (they have a cute color scheme: green means "seen within the last month", yellow means "seen within the last year", and red means "seen longer ago than a year". how helpful)
Normie tier software would look like this: you install a standalone program. You double click the icon on your desktop. Up comes a window with a search bar and some recommended links. The search bar finds resources that are: 1. alive 2. relevant, in that order. The recommended links are new, interesting, and of course alive. It also has a little pane where you can add new resources from your computer, add a description to them, whatever.
▶ No.1055510>>1055528
>>1055467
>For web based platforms like HTTP and IPFS, 1 is basically impossible
With IPFS you really only need to know the hash of the book you are looking for and IPFS will automatically query nodes for this file. If there are any seeders it will find them without any extra work. But then there is the problem of where you find these hashes. A good improvement to IPFS would be some sort of hash database for when you are looking for something, and since it's just hashes you can't be fucked with the excuse of copyright infringement, since a hash isn't copyrighted. This could make #1 normalfag and non-autist friendly.
>2 is unsolved in bittorrent, and IPFS (afaik) does nothing to improve on this
There's filecoin which was made with the goal of incentivizing people to store files and make them available on the network for as long as possible. It rewards people with the cryptocurrency according to how much time they seed the files, and if I'm not wrong, also based on their demand.
▶ No.1055514
Is there a way to use ipfs without filecoin? I.e. without having your downloads/uploads linked to a hash chain that is immutable, which is to say always able to be traced back to the filecoin account/metadata that made the transfer? Or is there a way to generate a new filecoin account/identifier on demand?
▶ No.1055528
>>1055510
>since it's just hashes you can't be fucked with the excuse of copyright infringement since it's just a hash and that's not copyrighted.
Ha! how kindly you think of various western governments. I think "what color are your bits?" is the most complete treatment of this, but the pervasive banning of TPB demonstrates that you don't need to host the resource itself to get torn down.
▶ No.1057797
Hosting your own data on your own machine? Who let this happen?
▶ No.1059027
Necrobumpity with a chat log showing the developers of LineageOS are fucked
/ipfs/QmcbapngtcQFVHHqGCf7eCtjAhjcPHvsgQwKjPS1NHW6G7
▶ No.1064192>>1065147 >>1066483
Is there a way to download a hash from a specific peer using their node address?
▶ No.1065147>>1066484
▶ No.1066483>>1066484
Did something weird happen on 8ch? I made this post already but it's gone now. Maybe something spam related.
>>1064192
For what purpose? If it's for privacy you probably want to create a private network.
https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#private-networks
If it's for speed, downloads should swarm by default (I'm pretty sure, someone correct me if I'm wrong)
▶ No.1066484
>>1065147
>>1066483
Interesting, I guess the index just needed to be rebuilt.
▶ No.1066588>>1067570 >>1067709
so it's just torrents but the client is named ipfs. that's how op described it. magnets can already do the same thing.
▶ No.1067570
>>1066588
IPFS has deduplication, that is the advantage.
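Rough illustration of why (toy Python with made-up 4-byte chunks; go-ipfs actually chunks at 256 KiB by default and addresses every chunk by its hash):

```python
import hashlib

def chunk_hashes(data: bytes, size: int = 4) -> list:
    """Hash fixed-size chunks; identical chunks -> identical block hashes."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

v1 = chunk_hashes(b"AAAABBBBCCCC")
v2 = chunk_hashes(b"AAAABBBBDDDD")  # same file with the last chunk changed
shared = set(v1) & set(v2)  # the two unchanged chunks are stored only once
```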
▶ No.1067709
>>1066588
Did you only read the OP? Magnets are just a way to resolve torrent files themselves. They don't do anything interesting. Read the thread if you want to know the differences.