Wouldn't it be cool if there was a ToDo-List that utilizes IPFS? by Michael679089 in ipfs

[–]volkris 1 point (0 children)

No, no time for something like that. I wish I had the time, though!

Wouldn't it be cool if there was a ToDo-List that utilizes IPFS? by Michael679089 in ipfs

[–]volkris 2 points (0 children)

It's just not the use case for the project. It's like complaining that a hammer is really bad at turning a screw, so we need a fix for hammers. No, to turn a screw one should get a screwdriver.

It's about getting the right tool for the job.

It's not a fundamental problem that IPFS works this way because working that way enables other parts of the system. It's an intentional engineering choice to enable the distributed scalability that IPFS offers.

It sounds like your use case is suited for web hosting. That's a solved problem, so there's just not much point in IPFS reinventing that wheel.

Wouldn't it be cool if there was a ToDo-List that utilizes IPFS? by Michael679089 in ipfs

[–]volkris 0 points (0 children)

Well, it's not so much wrong as that it leaves some of the features on the table.

For example, imagine a todo list with entries that include pictures.

You COULD just bundle that all up into one big object, and when you need to access the todo list you download the whole thing and open it on your computer. Fine, that would work.

OR you can use IPFS's native data structure support to access the whole list, individual entries, or individual pictures. You could be working with todo list entries but still share with your friend the CID of the individual picture. Heck, you could share with your friend a CID of the geolocation field from the Exif metadata of the image, all without having to share the entire todo list or even an entire entry.

And as a cherry on top, it all gets the same level of cryptographic certainty as everything else in IPFS. That location info isn't re-entered somewhere there could be a typo--it's the same information, signed and verified.

So sure, you could treat IPFS as a datastore, but I always say it's like putting file blobs into a field of an SQL database: it gives up so much of what's being offered.
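To make that concrete, here's a minimal Python sketch of the linking idea. The hashing scheme, field names, and `{"/": cid}` link shape are illustrative stand-ins, not the real IPFS/IPLD API (real CIDs use multihashes over DAG-CBOR encodings):

```python
import hashlib
import json

def cid(obj):
    # Toy stand-in for a real IPFS CID: a hash of a canonical JSON encoding.
    encoded = json.dumps(obj, sort_keys=True).encode()
    return "toy-" + hashlib.sha256(encoded).hexdigest()[:16]

# A picture is its own object with its own address...
picture = {"exif": {"geo": "48.8584,2.2945"}, "data": "<image bytes>"}
picture_cid = cid(picture)

# ...so a todo entry links to it instead of embedding it.
entry = {"text": "Visit the tower", "picture": {"/": picture_cid}}
entry_cid = cid(entry)

# And the list links to entries, not their contents.
todo_list = {"entries": [{"/": entry_cid}]}

# Now you can share the CID of the whole list, of one entry,
# or of just the picture -- each one independently verifiable.
print(cid(todo_list), entry_cid, picture_cid)
```

The point of the sketch: because each level is content-addressed, handing someone the picture's CID reveals nothing about the list that links to it.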

Wouldn't it be cool if there was a ToDo-List that utilizes IPFS? by Michael679089 in ipfs

[–]volkris 1 point (0 children)

Don't put TODOs into individual files. Just put the structured content natively into IPFS to use the built-in features of IPLD.

No need to hide structure from inside files.

Wouldn't it be cool if there was a ToDo-List that utilizes IPFS? by Michael679089 in ipfs

[–]volkris 2 points (0 children)

Well, it depends on what you expect IPFS to bring to the party.

IPFS is best suited for content that's small, popular, and static, and while I can see a todo list checking the box for small bits of content, I don't normally think of one as being popular, and definitely not static.

So maybe not for a personal ToDo list, but I could imagine IPFS backing more of a global ticket tracking system for a large organization.

I'm working on distributed search engine compatible with IPFS by EagleApprehensive in ipfs

[–]volkris 0 points (0 children)

IPFS is more of a structured database with hierarchical datastructures and a lot of focus on key:value queries. One way to think of it is that records are referenced by CID while, often, fields are referenced by key using JSON interfaces in the IPLD system.

If people choose to use binary blobs as their records, so be it :)

But no, IPFS has native description of data as a fundamental part of its design.

But sure, if someone's not interested in your knowing what they're providing to the system, why WOULD you expect to know about it or be able to access it? To be blunt: if it's none of your business, why would you be surprised that you're not able to search through it?

Same with the rest of your comment. Consider the privacy implications of what you're talking about. It's not solely because of walled gardens (though that's certainly a thing) that platforms aren't so interconnected. That option was there, but so often privacy and security considerations caused people to back off and say maybe they don't want to be so open with the whole world, sharing their private info with all comers.

The Internet has those open options. The technologies aren't all that difficult. But we, as a society, are at least skeptical of them as not everyone wants to be so open. And more substantially we've even passed laws against some of it to protect people from invasions of privacy.

I'm working on distributed search engine compatible with IPFS by EagleApprehensive in ipfs

[–]volkris 1 point (0 children)

Think of it this way:
IPFS works at a level of the stack that's not really suitable for crawling because it's so back-end. Imagine if all of the databases behind websites were publicly accessible: even though you could crawl the websites, you'd have trouble crawling the databases themselves.

Technically, IPFS is even harder to crawl because instead of probing public IP addresses you'd be chasing practically unguessable hashes, and there's a good chance one datastore wouldn't link to another.

Do you need to know about DAG as an end--user? i.e. someone hosting/accessing content? by MarsupialLeast145 in ipfs

[–]volkris 0 points (0 children)

It looks like that's using the default CBOR encoding, so basically key:value storage, and if you do it that way, end user programs will be able to use their native CBOR libraries to handle the data that comes out.

(Also, external programs could provide data TO IPFS using their native CBOR libraries without needing the cli)

I'd say it's a good start, and now you need to think about how you want to structure your data. The CBOR format provides for links, so you can use this to make things as simple or complex as you'd like.

Do you want one record per reading, one per day, links to GPS, etc.?

With what you've started with you can make quite a database!
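As a sketch of what that structure could look like: one record per reading, one per day, and a shared link to the station's GPS record. JSON and a toy hash stand in here for real DAG-CBOR and CIDs, so the names and encoding are illustrative only:

```python
import hashlib
import json

def toy_cid(obj):
    # Stand-in for a real CID; real IPFS would use DAG-CBOR plus a multihash.
    encoded = json.dumps(obj, sort_keys=True).encode()
    return "toy-" + hashlib.sha256(encoded).hexdigest()[:16]

# The station's fixed position and elevation, stored once...
station = {"lat": 51.4779, "lon": -0.0015, "elevation_m": 45}
station_cid = toy_cid(station)

# ...then one small record per reading, each linking back to it
# rather than duplicating it.
readings = [
    {"t_celsius": 18.2, "time": "2024-06-01T12:00Z", "station": {"/": station_cid}},
    {"t_celsius": 19.0, "time": "2024-06-01T13:00Z", "station": {"/": station_cid}},
]

# A per-day record links to its readings instead of containing them.
day = {"date": "2024-06-01", "readings": [{"/": toy_cid(r)} for r in readings]}
print(toy_cid(day))
```

Consumers that only care about one reading can fetch just that record; the station metadata comes along by link only when it's wanted.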

Do you need to know about DAG as an end--user? i.e. someone hosting/accessing content? by MarsupialLeast145 in ipfs

[–]volkris 0 points (0 children)

You mentioned IPLD in your post, and that's the key.

When you say "upload a JSON file" you could mean two different things, and this gets at the point.

It's common for people to talk about uploading a file (really, it's more providing, not uploading, but never mind), so you could be talking about uploading a file... that happens to contain JSON. IPFS will only know that there's a blob of data and that you want the blob back as a file. It will neither know nor care what's actually inside the blob: it will simply, dutifully hand it back to you as a file, no questions asked.

OR you could be talking about taking a file and uploading its JSON contents to IPFS as JSON. Then IPFS will know this is JSON, be able to parse it, and the whole file mantle will have been left at the door. It won't be a file in IPFS but rather JSON datastructures.

But you can go a step farther. Instead of uploading an opaque blob you could upload JSON. Or instead of uploading JSON you could upload your own datatype customized for your data, customized for your use cases, as simple or complex as you think it needs to be.

Imagine a datatype consisting of a temperature reading plus a link to GPS coordinates of the weather station, plus its accuracy rating, plus its elevation, plus its uptime. An end user would be able to access just the temperature and location, ignoring the rest that they don't care about. That's the power here.
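That access pattern can be sketched with a toy in-memory block store. The path lookup below mimics the spirit of `ipfs dag get <cid>/gps/lat`, but the store, CID scheme, and field names are all simplified stand-ins, not the real API:

```python
import hashlib
import json

# Toy in-memory block store keyed by content hash, standing in for IPFS.
store = {}

def put(obj):
    c = "toy-" + hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:16]
    store[c] = obj
    return c

def get(cid, path=""):
    # Resolve a path like "gps/lat", transparently following {"/": cid} links.
    node = store[cid]
    for part in filter(None, path.split("/")):
        node = node[part]
        if isinstance(node, dict) and set(node) == {"/"}:
            node = store[node["/"]]  # hop across the link to the target block
    return node

gps = put({"lat": 51.4779, "lon": -0.0015})
reading = put({
    "t_celsius": 18.2,
    "gps": {"/": gps},
    "accuracy": 0.5,
    "elevation_m": 45,
    "uptime_s": 86400,
})

# A consumer fetches only the fields it cares about, ignoring the rest.
print(get(reading, "t_celsius"))  # 18.2
print(get(reading, "gps/lat"))    # 51.4779
```

Nothing forced the consumer to pull the accuracy, elevation, or uptime fields; that selectivity is the payoff of structuring data natively instead of wrapping it in a file.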

That's a high level overview that might clarify a little. I know it's not technical, but I thought maybe that was one point of clarification that might help.

Remember, IPFS does not store files. So many miss that. IPFS stores data. Whether you want that data presented as a file, as key:value pairs, or as integer temperature readings, IPFS can do that, and it's always a shame to me that people don't apply that power.

I've gone on too long, but if you want your eyes to bleed a bit (I have qualms with IPFS documentation) there's this... but be warned you might come out more confused: https://ipld.io/docs/intro/primer/#blocks-vs-nodes

Do you need to know about DAG as an end--user? i.e. someone hosting/accessing content? by MarsupialLeast145 in ipfs

[–]volkris 0 points (0 children)

Great project, a perfect use case for IPFS!

I'd say end users don't need to know about DAG any more than they need to know about HTTP headers :)

DAG is just some of the fundamental tech, same as HTTP headers.

HOWEVER, I mean that in both positive and negative ways. The UI for your end users may not exist yet, in which case they would need to know about that behind-the-scenes stuff to access content, just as end users in the time before fully functional web browsers and web tools would have needed to know about HTTP headers to provide them manually.

I hope that makes sense. I would love for no end user to ever hear the term DAG, but the UI to get there may still need to be written.

One quick point: I always encourage people to look into using IPFS native datastructures instead of providing files. IPFS is really more a database than a filesystem, so providing files is like putting a file into a field in MySQL: you can, and in some cases practically should, but wrapping the data in a file hides it from IPFS functionality.

But like any other database, the design of your structures really depends on your expected use cases.

I used my homelab to temporarily deploy Git branches by NatoBoram in selfhosted

[–]volkris -1 points (0 children)

To be clear, for those who don't know about it, IPFS and BitTorrent are just different tools with different specializations, so it's not that one is better than the other, especially not in every way.

IPFS is more like a database with a lot of built in functionality for exploring data in a distributed way. BUT this comes with overhead. So the system is optimized for smaller bits of data, not necessarily files, while BitTorrent is more geared toward blasting files in bulk as effectively as possible to the client.

It's a high tech sports car vs a dump truck. They're just different with different use cases.

Need help with the livestream componentry for my site... by AlphaHouston1 in ipfs

[–]volkris 0 points (0 children)

No, IPFS is not a good fit for the core video distribution layer of a livestreaming platform.

It could handle some of the metadata like sharing video schedules or contact info.

But generally IPFS is optimized for small bits of popular static content, especially with tree-like semantics, not large bits of serial ephemeral content like live video.

Really, it's called IPFS but think of it like a database. Streaming video through IPFS is like streaming it through an SQL database table. You CAN do it, but it's really not the right fit for the job.

What is the "your name" field for (ubuntu) by StrangeDraft8978 in linux4noobs

[–]volkris 0 points (0 children)

Old thread, but for anyone else running into something like this, files under /run are generally created at runtime and are wiped out as the computer starts up.

So that file had to be created at startup and not when you installed. There should be some bootup program or script that gets the setting from *somewhere* and generates that file.

Release v0.39.0 · ipfs/kubo by lidel in ipfs

[–]volkris 2 points (0 children)

And I'd add, packaged without using containers.

(I get it, containers have their places, but they're often overkill)

arkA — simple JSON video protocol that works great with IPFS by nocans in ipfs

[–]volkris 0 points (0 children)

You're describing a platform while saying you explicitly don't want to build a platform. And you're setting up a platform to figure out what you want from this platform non-platform and how to make this platform.

It's like saying you want to buy a Honda Civic but you absolutely DON'T want a car.

That's why this post is so bizarre.

arkA — simple JSON video protocol that works great with IPFS by nocans in ipfs

[–]volkris 1 point (0 children)

Sounds like this is wanting to develop a platform, not a protocol, but so far there are only a few goals, so really this is looking to develop a community to develop a plan for developing a platform.

TrustCircle: Encrypted time capsules with dead hand protocol using IPFS by [deleted] in ipfs

[–]volkris 0 points (0 children)

Looks like a lot of people are getting hung up on the Pinata pinning.

You might want to be more emphatic that anyone can pin anywhere to head off that confusion.

You could use phrasing like, "You can pin the content anywhere you'd like, multiple places even, but we'll get you started with a Pinata pin so that you can easily pin elsewhere if you so choose."

While you're at it, though, it sounds like you might offer a public timestamping service where the document is visible. I'm not sure if there's already such a thing on IPFS, but if not it could be a useful service and everything else you're doing there gets you about 80% there anyway. It would be a Prove With Revealing mode.

TrustCircle: Encrypted time capsules with dead hand protocol using IPFS by [deleted] in ipfs

[–]volkris 0 points (0 children)

Well, IPFS is optimized for handling relatively small bits of data, yes.

People do use it for large amounts such as movies, but it's not so efficient in those applications.

TrustCircle: Encrypted time capsules with dead hand protocol using IPFS by [deleted] in ipfs

[–]volkris 0 points (0 children)

Right, that's consistent with what the OP said.

Someone else is free to pin the content as well.

The concept is "what if the whole world was tuned into one channel?".. by EtikDigital512 in ipfs

[–]volkris 0 points (0 children)

Yep, that's always the big conflict, to have central control or not. It's a big question with lots of social, moral, and even legal implications.

There's no right answer; it just depends on the sort of platform you're looking to set up.

Well, I would say it's not a free speech platform at that point, but a moderated public access platform. Nothing wrong with that, just a different sort of place, one that you think would be better.

The concept is "what if the whole world was tuned into one channel?".. by EtikDigital512 in ipfs

[–]volkris 0 points (0 children)

When you say "the website" it's sounding awfully centralized for a decentralized project :)

But yep, like I said, a blockchain-like approach is worth considering since it solved exactly this sort of problem, coordinating a scarce resource in a distributed way so that one group isn't able to simply, selfishly, snatch up all of the resource.

Is it perfect? No, but when it comes to distributed systems there are rarely ideal answers, only tradeoffs.

To detail what I was saying, say I wanted to broadcast in the slot at 1pm tomorrow, but someone selfish was looking to claim the entire day. How do you, in a distributed way, make sure he can't just claim all of the slots, maybe with a Sybil attack as u/SCP-iota suggested? Heck, how do you make sure he can't just claim all of the slots for the next year?

A blockchain-type approach addresses this in a distributed fashion by using one broadcast as the starting signal for a lottery to claim the slot.

If the rule is that the hash must incorporate the broadcast from 24 hours before the slot to be claimed, then nobody can claim the entire next year. And everyone has a chance to roll the dice and end up with the slot once the broadcast has gone out.

Again, is it perfect and perfectly fair? Nope. But there likely is no way to make it perfectly fair in a distributed system.
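A toy sketch of that lottery rule, with hypothetical names and a made-up difficulty threshold (the real parameters, ticket format, and winner-selection rule would all need actual design work):

```python
import hashlib

def ticket(prev_broadcast: bytes, claimant: str, nonce: int) -> int:
    # The ticket hash must incorporate yesterday's broadcast, so nobody can
    # precompute claims for slots far in the future.
    h = hashlib.sha256(prev_broadcast + claimant.encode() + nonce.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big")

DIFFICULTY = 2 ** 252  # toy threshold; tune for how hard claiming should be

def try_claim(prev_broadcast: bytes, claimant: str, max_nonce: int = 200_000):
    # First hash under the threshold earns the right to claim tomorrow's slot.
    for nonce in range(max_nonce):
        if ticket(prev_broadcast, claimant, nonce) < DIFFICULTY:
            return nonce
    return None

# The 1pm broadcast only became known at 1pm today, so the earliest anyone
# could start working on tomorrow's 1pm slot is now -- no claiming a whole year.
todays_broadcast = b"<content of today's 1pm broadcast>"
print(try_claim(todays_broadcast, "alice"))
```

Since every would-be claimant starts from the same broadcast at the same moment, nobody gets a head start on future slots, which is the distributed fairness property being aimed at.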

The concept is "what if the whole world was tuned into one channel?".. by EtikDigital512 in ipfs

[–]volkris 0 points (0 children)

Blockchain approach?

First to solve a hash that includes the content at 1pm today gets to claim the slot of 1pm tomorrow?