I'm aware of but not familiar with IPFS, so that's probably a good place to start researching, but...
Does anyone know of any efforts to create a program like SETI@home that would allocate blocks of storage to be used globally by the filesystem it supports?
I'm just imagining how interesting and possibly useful it would be to have a storage pool of millions, perhaps billions, of nodes connected sharing some amount of the replicated and encrypted data.
And if contributing were as easy as running some simple program and saying "OK, here's 20GB for you to use," that would get us closer to millions of nodes.
Just a thought.
I don't know that many people realize just how much infrastructure is in place for things like YouTube, localized caches in datacenters near you, etc.
Hell, while we're on this, we could probably find a better protocol to deliver video that doesn't use HTTP.
Is it #QUIC's time to shine?!‽
If we name this tool Shardy McShardface, do you think we'd get more interest and use?
I think that's key... Gotta have a name for the kids these days. Tho... The McFace thing might be a bit dated by now... Would it be retro and cool, nostalgic and funny?
@bobdobberson pretty much free cloud storage, no? unless i'm misunderstanding.
i don't think it's technically difficult, it's a matter of funds. that's going to require a lot of disk space, a lot of energy, and a lot of manpower for administration and maintenance.
@benda I don't understand what you mean.
Millions of people already have more than 20GB to dedicate to something like this.
This would require no additional resources, just coordination. I do think IPFS is pretty close to this situation, but I don't currently wish to spend my resources on that research.
The user of the program would be dedicating a portion of their available storage to this use; 20GB is just an example. If you want to host 1TB or 20PB, go for it.
@bobdobberson ooooh i see what you're saying. hm. it would be interesting to see it happen and how it would work out.
@benda to make it successful, one could pour resources into it so that there is enough global storage to support enough replicas for redundancy and reliability, which would broaden support and bring users into the ecosystem.
But it is also quite possible to do this with enthusiasts who wish to volunteer resources enough to get things off the ground.
You make a good point that there would need to be quotas of sorts, to ensure that redundancy and reliability can be sustained, meaning the first amounts of storage available to general users would be pretty small.
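To put rough numbers on that quota idea (all figures here are made up for illustration): if every shard has to be stored R times, the usable pool is the total contributed storage divided by R, so each volunteer's "real" quota is well below what they contribute.

```python
# Back-of-the-envelope: usable space in a volunteer pool once every
# shard is stored `replicas` times. All numbers are hypothetical.

def usable_storage_gb(nodes: int, contribution_gb: float, replicas: int) -> float:
    """Total contributed space divided by the replication factor."""
    return nodes * contribution_gb / replicas

# 10,000 volunteers each offering 20 GB, with 5 replicas per shard:
pool = usable_storage_gb(10_000, 20, 5)
print(pool)  # 40000.0 GB usable, i.e. ~4 GB of "real" quota per volunteer
```

Which is why early general-user quotas would have to be small until the node count grows.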
That may be where IPFS is running into issues. I know they do some sort of distributed storage, but if I recall correctly, the tools and mechanisms to leverage it are cumbersome.
@benda then my question is... Are we getting into "just because we can, doesn't mean we should" territory, as I wonder if that would /really/ be the best use of our networking infrastructure.
Then I look at what our networking infrastructure is being used for... And... Oh it makes me want to weep.
@bobdobberson lol. i don't think we would be doing it just because we can. there is some need for non-corpo cloud storage, for privacy and freedom's sake. but i'm going the personal route. the community route is also good, even preferable, as it removes the financial and technical barrier for others (at least to some extent). but it's going to be a lot of heavy lifting. take redundancy, for example. even if, over the internet, it's possible to do something like a raid setup on a logical volume that's distributed all over the world, there would still be the possibility that the same day a piece of the pie fails mechanically, the redundant copies are offline for whatever reason. there would have to be layers of redundancy, and i'm not sure how one could plan for that. incorporate probabilities for physical failure, malicious attacks, and other causes of a drive going offline? incorporating that would mean writing the code in the first place, since the tools we usually use to address data redundancy don't actually factor these things in.
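The "all copies offline on the same day" worry can at least be roughly modeled. Assuming node outages are independent (a big assumption: correlated failures like regional power or network events break it), the chance that every one of R replicas is unavailable at once is p^R:

```python
# Rough model of the "all replicas offline at once" risk.
# Assumes independent node failures; real correlated outages violate this,
# so treat these numbers as optimistic lower bounds.

def prob_shard_unavailable(p_node_offline: float, replicas: int) -> float:
    """Probability that every replica of one shard is offline simultaneously."""
    return p_node_offline ** replicas

# If any given volunteer node is offline 10% of the time:
for r in (2, 3, 5):
    print(r, prob_shard_unavailable(0.10, r))
# 2 replicas -> 1% chance a given shard is unreachable; 5 replicas -> 0.001%
```

Extra replicas buy reliability exponentially under this model, which is why the replication factor matters more than any single node's uptime.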
(btw, i know nothing about how ipfs works. reading on it now).
@benda imagine something more like BitTorrent, with directory listings.
You would not need to make your own data redundant; the sharding of the data, the hashes, and everything else would ensure there are X replicas, where X is determined to be safe for however many nodes might be lost at any one time.
@benda I also think we tend to generate too much data, I've been happy without a cloud-based solution of any sort for a very long time. I cheat by using syncthing to keep the few devices I use updated with relevant shared data.
@benda also, that's not entirely true, I do have virtual private servers, and off-site 'cloud-ish' backups.
@bobdobberson yeah, the first thing i thought when i read your original post was bittorrent. and it seems that's the underlying technology behind ipfs, as well.
@benda I don't know that bittorrent, as is or as implemented by IPFS, is the ideal solution. It can likely be improved upon in ways that consider network topology and speeds, and I don't know enough about bittorrent's guts to know whether it can be tweaked for those considerations easily, or whether the mechanism it employs really lends itself to this use.
@benda I was wondering why I even had this idea, since I didn't see a personal use for the tech, and I backtracked to my thinking about alternatives to YouTube: the biggest things a YouTube killer would need are massive storage for the videos and ways to cache content 'locally' to the end user to keep it speedy.
@bobdobberson ah yes. video hosting is a whole other beast, mostly on the resources front. if that's the goal, i would look into it and maybe touch base with the devs of #hyper8 and #peertube
hyper8 being @benpate
i know with peertube part of the solution is in fact p2p. there's a host server, but if a video is popular enough, concurrent watchers are all seeding and leeching to one another to take some of the load off the server.
(i dont really know anything about how hyper8 works-- sorry ben!)