
Pope Bob the Unsane

I'm aware of, but not familiar with, IPFS, so that's probably a good place to start researching, but...

Does anyone know of any efforts to create a program like Seti@Home that would allocate blocks of storage to be used globally by the filesystem it is supporting?

I'm just imagining how interesting and possibly useful it would be to have a storage pool of millions, perhaps billions, of connected nodes, each sharing some amount of the replicated and encrypted data.

And if contributing were as easy as running some simple program and saying "OK, here's 20GB for you to use," that would get us closer to millions of nodes.
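
Purely as a sketch of what I'm picturing (every name, flag, and behavior here is hypothetical, not a real tool):

```python
# Hypothetical sketch only: a "donate some disk to the pool" client.
# Nothing here corresponds to an existing project; names are made up.
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Offer local disk space to a shared, replicated storage pool."
    )
    parser.add_argument("--share-dir", default="~/.pool/shards",
                        help="where encrypted shards from the network would be kept")
    parser.add_argument("--quota-gb", type=int, default=20,
                        help="how much space to donate")
    args = parser.parse_args()

    # A real client would register with the network, accept encrypted shards
    # it cannot read, and serve them back to peers on request.
    print(f"Donating {args.quota_gb} GB under {args.share_dir} to the pool.")

if __name__ == "__main__":
    main()
```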

Just a thought.

I don't know that many people realize just how much infrastructure is in place for things like YouTube: localized caches in datacenters near you, and so on.

Hell, while we're on this, we could probably find a better protocol to deliver video that doesn't use HTTP.

If we name this tool Shardy McShardface, do you think we'd get more interest and use?

I think that's key... Gotta have a name for the kids these days. Tho... The McFace thing might be a bit dated by now... Would it be retro and cool, nostalgic and funny?

@bobdobberson I believe there is a "Silicon Valley" company called Pied Piper working on this.

@bobdobberson pretty much free cloud storage, no? unless i'm misunderstanding.
i don't think it's technically difficult, it's a matter of funds. that's going to require a lot of disk space, a lot of energy, and a lot of manpower for administration and maintenance.

@benda I don't understand what you mean.

Millions of people already have more than 20GB to dedicate to something like this.

This would require no additional resources, just coordination. I do think IPFS is pretty close to this situation, but I don't currently wish to spend my resources on that research.

The user of the program would be dedicating a portion of their available storage to this use; 20GB is just an example. If you want to host 1TB or 20PB, go for it.

@bobdobberson ooooh i see what you're saying. hm. it would be interesting to see that happen and how it would work out.

@benda to make it successful, one could pour resources into it so that there is enough global storage to support the appropriate number of replicas for redundancy and reliability, which would build broader support and bring users into the ecosystem.

But it is also quite possible to do this with enthusiasts who wish to volunteer resources enough to get things off the ground.

You make a good point that there would need to be quotas of sorts to ensure that redundancy and reliability can be sustained, meaning the initial amounts of storage available to general users would be pretty small.
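
Rough arithmetic on why (the replication factor here is just an assumption for illustration):

```python
# Illustrative numbers only: replication eats most of what a contributor donates.
donated_gb = 20   # what one volunteer offers
replicas = 5      # copies kept of every shard (assumed, not from any real spec)

net_usable_gb = donated_gb / replicas
print(f"{donated_gb} GB donated -> ~{net_usable_gb:.0f} GB of net new capacity")
# 20 GB donated -> ~4 GB of net new capacity
```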

That may be where IPFS is running into issues; I know they do some sort of distributed storage, but if I recall correctly, the tools and mechanisms to leverage it are cumbersome.

@benda then my question is... Are we getting into "just because we can, doesn't mean we should" territory? I wonder if that would /really/ be the best use of our networking infrastructure.

Then I look at what our networking infrastructure is being used for... And... Oh it makes me want to weep.

@bobdobberson lol. i don't think we would be doing it just because we can. there is some need for non-corpo cloud storage, for privacy and freedom's sake. but i'm going the personal route. the community route is also good, even preferable, as it removes the financial and technical barrier for others (at least to some extent). but it's going to be a lot of heavy lifting.

take redundancy, for example. even if, over the internet, it's possible to do something like a raid setup on a logical volume that's distributed all over the world, there would still be the possibility that the same day a piece of the pie fails mechanically, the redundant copies are offline for whatever reason. there would have to be layers of redundancy, and i'm not sure how one could plan for that. incorporate probabilities of physical failure, malicious attacks, and other causes of a drive going offline? and to incorporate that would mean writing the code in the first place, since the tools we usually use to address data redundancy don't actually factor these things in.
(btw, i know nothing about how ipfs works. reading up on it now).

@benda imagine something more like bit-torrent, with directory listings.

You would not need to make your data redundant yourself; the sharding of the data, the hashes, and everything else would ensure there are X replicas, where X is chosen to stay safe against however many nodes might be lost at any one time.
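
A back-of-the-envelope sketch of how X might be chosen (the availability and durability numbers are assumptions, nothing measured):

```python
# How many replicas X are needed so the chance that every copy of a shard
# is offline at the same moment stays below a target? Toy model: nodes go
# offline independently with the same probability.
def replicas_needed(p_node_offline: float, max_loss_probability: float) -> int:
    x = 1
    while p_node_offline ** x > max_loss_probability:
        x += 1
    return x

# e.g. volunteer nodes offline 30% of the time, and we want less than a
# one-in-a-million chance that no copy of a shard is reachable right now
print(replicas_needed(0.30, 1e-6))  # -> 12
```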

@benda I also think we tend to generate too much data; I've been happy without a cloud-based solution of any sort for a very long time. I cheat by using syncthing to keep the few devices I use updated with relevant shared data.

@benda also, that's not entirely true: I do have virtual private servers and off-site 'cloud-ish' backups.

@bobdobberson yeah, the first thing i thought when i read your original post was bittorrent. and it seems that's the underlying technology behind ipfs, as well.

@benda I don't know that bittorrent, as is, or as implemented by IPFS is the ideal solution. It can likely be improved upon in ways that consider network topology and speeds, and I don't know enough about bittorrent's guts to know if it can be tweaked for those considerations easily, or if the mechanism it employs really lends itself to this implementation.

@benda I was wondering why I even had this idea, as I didn't see a personal use for the tech, and retraced my steps to my thinking about alternatives to YouTube, and how the biggest thing to implement in a YouTube killer would be massive storage for the videos, plus ways to cache content 'locally' to the end user to ensure it's speedy.

@bobdobberson @benda Youtube videos are unnecessarily big, by the way. 360p is almost always adequate. And most of YouTube is worthless - just like everything else, 99% of it is crap. If you decide what is useful, and keep it in 720p, and downgrade to 360p for archival, you use a lot less space.

@immibis depends on what you watch, I suppose. Most of the engineering stuff isn't crap, unless you've found the wrong people to watch. Lots of quality informative / educational content on YouTube from my perspective. DEFCON talks for one... Which are likely hosted elsewhere, but...

There's plenty of garbage on there, too, don't get me wrong. Some of that serves the function of allowing one's brain to zone-out and do some processing.

More or less agree with the idea that 360p is fine. Videos don't need to be huge. Especially since many are watching on tiny screens.

@benda

@bobdobberson @benda I have an idea to create some kind of curated feed aggregator. Like Reddit, but without user submissions to subreddits, or comments. Of course you'll probably be able to mix in arbitrary feed URLs, but they won't be on the site by default.
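
Something like this minimal sketch, maybe (feedparser is a real third-party package, but the feed URLs and the whole structure are placeholders, not a design):

```python
import time
import feedparser  # third-party: pip install feedparser

CURATED_FEEDS = ["https://example.org/feed.xml"]  # picked by the curator
user_feeds = ["https://example.net/rss"]          # mixed in by the reader

entries = []
for url in CURATED_FEEDS + user_feeds:
    entries.extend(feedparser.parse(url).entries)

# newest first; no submissions, no comments, just a merged reading list
entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0), reverse=True)
for entry in entries[:20]:
    print(entry.get("title", "(untitled)"))
```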

Speaking of Reddit, apparently they're somehow still alive, they've just started banning people for upvoting pro-Luigi or anti-Trump/anti-Musk stuff, and Digg will soon try to take their audience back? Redditors must be living in the weirdest internet timeline.

@immibis right, so, continue to use YouTube and experience the videos that YouTube deems worthy? A big part of the reason I don't want to stick with YouTube is that their centralized authority means they get to control the people making videos and their content.

What spurred this was a video of someone train-hopping that used obviously copyrighted material for music, and the person doing the filming made the decision to damn the consequences, suffer the demonetization and copyright strike(s), and just put it out there. But stuff like that gets taken down /constantly/, if only for being train-hopping.

It'd be nice to find a platform where the people making videos don't feel compelled to follow any sort of guidelines, and algorithms, and all that.

What you propose /would/ solve the problem of YouTube constantly unsubscribing me from channels silently, but one of the benefits of using the platform directly /is/ the algorithm, as shitty as it is.

Ideally we would have better algorithms, or user-defined algorithms / search functions, for finding the videos we're looking for.

Of course, it would probably all descend into porn without advertisers... But maybe we need to embrace that as a species, and figure out how to normalize having sexual organs.

@benda

@immibis creating the curated feed aggregator actually sounds like a better idea than what that triggered mentally for me; I thought you were proposing using a feed aggregator to follow YouTube channels, which is something I tried out before, and was ... less easy to use in the end.

I have a bad habit that I'm working on where I start to reply before I've even finished reading -- talk about people being too demanding of lightning-fast responses and such... Sigh... We're all works in progress...

The Reddit situation is definitely similar to what I was talking about with YouTube conformity. The moderation and censorship have become very political in recent times. Coupled with our seeming desire to be edge-lords -- to say things and demand people censor us or try...

It feels like it is intentional sowing of division meant to keep us at each others' throats.

@benda

@bobdobberson ah yes. video hosting is a whole other beast, mostly on the resources front. if that's the goal i would look into, maybe touch base with, the devs of peertube and hyper8,
hyper8 being @benpate

i know with peertube part of the solution is in fact p2p. there's a host server, but if a video is popular enough, the concurrent watchers are all seeding and leeching to one another to take some of the load off the server.
(i don't really know anything about how hyper8 works -- sorry ben!)

@bobdobberson @benda Are you thinking of allocating storage to a particular project or just to the world in general? My view of this thread starts with "I'm aware of but not familiar with IPFS" - I might be missing some context about what problem you're trying to solve.



An[REDACTED]ve is one project trying to replicate itself using Bittorrent. Bittorrent doesn't allow the project to choose which data to store with which participants.

IPFS feels like a failure. It allows anyone to choose to mirror specific pieces of data, just like Bittorrent, and in that regard it's a success. But it's way too slow to be useful for a lot of things. Like Bittorrent, it doesn't let the project leader choose how to mirror data. Unlike Bittorrent it comes with a URL scheme so NFTs can use it... and that's all.

Every protocol that follows the "just throw a distributed hash table at it" approach seems to be slow.

@immibis I suspect IPFS _feels_ like a failure because very few people know about it or use it, and it would almost certainly benefit from more users.

I have no idea how it works, mechanically. It may be imperfect in numerous other ways.

@bobdobberson It's slow. Really slow. Two minutes to load a plain HTML page slow. I think that's all you need to know. If you want people to use something, you have to start and end every thought with user experience.

@immibis oof. Yeah, that's not terribly practical at all.

I /do/ think that we're accustomed to things being lightning fast, and that is setting us up for future pain when it all comes crumbling down.

@SpaceLifeForm likely because of currently low adoption. I suspect as tools like these see more nodes, the speed would increase, as it becomes a bit more like bit-torrent.

I have to check that link to see what they specifically cite though. Might be a design issue. There's a lot that can be done by intelligently spreading replicas around where they are used.

@bobdobberson generating nonsense sounds like a use case for LLMs! oh wait, they're still crap.

Ironically, ByteDance would also fit.

@bobdobberson Probably undergoing a fourth rediscovery phase when it's re-retro nostalgia and yes, funnier this time around, too. I use SNTPS 1.x.5 v 2 with encryptoken plus to bypass the data mining nodes

@bobdobberson "the kids these days" just doesn't cut it now

@bobdobberson I think Filecoin was intended to do something like this. Don't know if it worked well. You pay for the storage in Filecoin, of course, that's why it's called Filecoin and not just File.

@immibis yeah, that defeats the whole purpose of the open-source, for-the-good-of-the-people kinda thing I'm aiming for.

The incentive to use it should be that our use makes it better. I realize there's a hump to get over, which is why people develop these motivational tools, but that can only end badly.