mafintosh's comments

It's through the Holepunch stack (I am the original creator). Incentives for sharing are social, like in BitTorrent: if I use a model with my friends and family, I can help rehost it for them.


The modular philosophy of the full stack is to give you the building blocks for exactly this also :)


Looking through the rest of the material I can see that, but at first glance this seems like an easy point to get confused about.


Already supported :)


Will this be documented at some point?


coming later this year once we stabilise a bit more :)


The Rust port of Hypercore has been very active recently and they are making good progress. Part of the latest Hypercore release was moving some of the transport crypto so it's easier to port to other languages such as Rust.

The wire protocol works now: https://github.com/Frando/hypercore-protocol-rs and the community is active in #datrs on freenode


Yes, we'll definitely be doing this. Thanks for the feedback.


Hypercore is a single-writer append-only log. The website has a bit more info about how it works, but it's basically a Merkle log signed with a private/public key pair. We build collaborative data structures by combining multiple Hypercores.

Hyperdrive builds a p2p filesystem on top of Hypercore for a single writer. Using mounts you can mount other people's drives, so merge conflicts don't happen since there are no overlapping writes.

We are working on a union mount approach as well for overlapping drives (we talk a bit about this in the post)
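
Rough sketch of what that looks like with the hypercore and hyperdrive JS modules (illustrative only; the exact API differs a bit between versions, and the drive key below is a placeholder):

```js
const hypercore = require('hypercore')
const hyperdrive = require('hyperdrive')

// A Hypercore: single-writer, append-only, addressed by its public key.
const feed = hypercore('./my-log')

feed.ready(() => {
  console.log('key:', feed.key.toString('hex'))
  feed.append('hello')                 // only the secret-key holder can append
  feed.append('world', () => {
    feed.get(0, (err, block) => console.log(block.toString())) // -> hello
  })
})

// A Hyperdrive: a filesystem built on top of Hypercores.
const drive = hyperdrive('./my-drive')

// Placeholder: in practice this would be the other drive's real 32-byte public key.
const theirKey = Buffer.alloc(32)

drive.ready(() => {
  drive.writeFile('/hello.txt', 'world', () => {
    // Mount someone else's drive under a path. Their drive is its own
    // single-writer log, so there are no overlapping writes to merge.
    drive.mount('/their-stuff', theirKey)
  })
})
```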


So it sounds a bit like you've replicated git.

Do you agree that for the collaborative data structures side of things (like the chat app) users of the hypercore-protocol will likely run into clock trust problems?

PS I'm a big fan of your work/repos.


Yea you have to trust the original writers atm. I have some ideas for reducing this trust in the future through some consensus schemes but nothing fully baked yet. Def something I wanna hit tho, so we can get better security in something like a massively distributed chat system.


The peers gossip using compressed bitfields about what data they have. These bitfields are super small, so we can pack quite a bit of information into them.

At the moment we don't do anything special in regards to discovery, but as we scale that's something we want to investigate. Since everything is running on append-only logs we can group the data into sections quite easily, so there are some easy wins there, like announcing to the DHT that you have data in a specific region.
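
To give a feel for why the bitfields stay small, here's a simplified run-length encoding sketch (illustrative only, not the exact format used on the wire):

```js
// Illustrative only: a naive run-length encoding of a have-bitfield.
// Long runs of "have" / "don't have" collapse to a pair of numbers each,
// so a peer that has the first million blocks can say so in a few bytes
// instead of ~125 KB of raw bitfield.
function encodeRuns (bits) {
  if (bits.length === 0) return []
  const runs = []
  let bit = bits[0]
  let len = 0
  for (const b of bits) {
    if (b === bit) {
      len++
    } else {
      runs.push([bit, len])
      bit = b
      len = 1
    }
  }
  runs.push([bit, len])
  return runs
}

const bits = new Array(1e6).fill(1)   // peer has the first million blocks
console.log(encodeRuns(bits))         // [ [ 1, 1000000 ] ]
```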


So do you opportunistically gather info about what other peers have via this gossip protocol, or just when you need something?

I had looked at https://datprotocol.github.io/how-dat-works/ , but I don't remember anything about a gossip protocol or a peer building a "view of the world". Is that new?


It's only between the peers in your subset of the swarm for now. They exchange a series of WANT and HAVE messages where they subscribe to the sections of each other's logs they are interested in.

We are working on expanding this scheme so peers can help discover peers that have the section you are looking for.

Due to the compressed bitfields these sections can be quite large. In most cases, using a few kilobytes, you can share WANT/HAVE state for millions of blocks.
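
Simplified shape of that exchange (illustrative objects only, not the actual binary wire encoding; field names are just for explanation):

```js
// Peer A tells Peer B which region of B's log it cares about:
const want = { type: 'WANT', start: 0, length: 1024 * 1024 }

// Peer B replies (and keeps replying as it downloads more) with what it
// actually has inside that region, as a compressed bitfield in practice:
const have = { type: 'HAVE', start: 0, length: 750000 }

// With WANT/HAVE state exchanged, A only requests the blocks it is
// missing and that B actually has.
```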


Yes, I saw the compressed bitfields. The most bit twiddling I have ever seen in a pure JavaScript library...

Being able to identify a piece of content by an integer instead of a hash makes things more efficient compared to content-addressed storage a la IPFS.


Yes, we are continuously exploring and researching this. The mount support is our first stepping stone towards it. See the union mounts section of the post.


It should work on 14. Could you open an issue on the repository?


Filed hyperdrive-daemon #47: a problem with hyperdrive-daemon/node_modules/fuse-native/prebuilds/linux-x64/node.napi.node


Thanks, appreciate it

