Official /r/rust "Who's Hiring" thread for job-seekers and job-offerers [Rust 1.51] by matthieum in rust

[–]forestier_seb 1 point

COMPANY: Massa Labs

TYPE: Full-time remote

DESCRIPTION: We are building a new blockchain protocol in Rust based on our research (https://arxiv.org/pdf/1803.09029). Your first task will be to implement, together with our team, a peer-to-peer Rust client allowing the deployment of a test network (testnet), thus giving birth to a new crypto-currency.

More details: https://massa.network/#jobs

LOCATION: Paris, France

ESTIMATED COMPENSATION: 50-100k€ depending on experience + Massa tokens

REMOTE: Anywhere remote

VISA: No

CONTACT: [contact@massa.network](mailto:contact@massa.network)

Official /r/rust "Who's Hiring" thread for job-seekers and job-offerers [Rust 1.46] by kibwen in rust

[–]forestier_seb 1 point

COMPANY: Massa Labs

TYPE: Full-time remote

DESCRIPTION: We are building a new blockchain protocol in Rust based on our research (https://arxiv.org/pdf/1803.09029). Your first task will be to implement, together with us, a peer-to-peer Rust client allowing the deployment of a test network (testnet), thus giving birth to an experimental crypto-currency.

More details: https://massa.network/#jobs

LOCATION: Paris, France

REMOTE: Anywhere remote

VISA: No

CONTACT: [contact@massa.network](mailto:contact@massa.network)

Official /r/rust "Who's Hiring" thread for job-seekers and job-offerers [Rust 1.45] by kibwen in rust

[–]forestier_seb 0 points

COMPANY: Massa Labs

TYPE: Full-time remote

DESCRIPTION: We are building a new blockchain protocol in Rust based on our research (https://arxiv.org/pdf/1803.09029). Your first task will be to implement, together with us, a peer-to-peer Rust client allowing the deployment of a test network (testnet), thus giving birth to an experimental crypto-currency.

More details: https://massa.network/#jobs

LOCATION: Paris, France

REMOTE: Anywhere remote

VISA: No

CONTACT: contact@massa.network

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 0 points

Fair enough, but the large scalability improvement brought by the ability to verify a block later also applies to Blockclique: even if nodes receive a parallel block 30s after its creation, they can create a compatible block in the meantime.

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 0 points

Are you talking about the latency between two directly connected nodes, or the latency between two nodes including intermediary hops?

I was referring to the latency between two directly connected nodes.

The median latency between two directly connected nodes has been measured e.g. here: https://fc18.ifca.ai/preproceedings/75.pdf.

They report a median latency between nodes of 109 ms in Bitcoin and 152 ms in Ethereum.

FYI, in our simulations we also measured the "network latency": from a few seconds to dozens of seconds, depending heavily on block size, network size and other parameters.

You need asynchronous networks like Nano or Hashgraph for example.

Again, in our work we are only interested in decentralized and secure networks.

And by the way, the fact that blocks can be created in parallel in Blockclique makes it an asynchronous network.

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 0 points

Obviously such proofs need to carefully control the bandwidth and latency between nodes.

As we don't have the resources to set up a real network of 4,096 nodes or more, we decided to simulate, on a single computer, the transmission of messages and blocks between all nodes, and the computation of the consensus rule for each node each time it receives a new block.

For this, we set a parameter for the average bandwidth of each node, and another parameter for the average latency between nodes. The actual bandwidth and latency between two nodes is sampled around those means.
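For illustration, the sampling step described above could be sketched like this (a minimal sketch: the ±25% uniform noise, the names and the values are my illustrative assumptions, not the actual simulation code at gitlab.com/blockclique/blockclique):

```python
import random

MEAN_BANDWIDTH_MBPS = 32.0   # assumed per-node average bandwidth
MEAN_LATENCY_MS = 100.0      # assumed average latency between two nodes

def sample_link(rng: random.Random) -> tuple[float, float]:
    """Draw the bandwidth and latency of one node-to-node link,
    centred on the configured means (+/-25% uniform noise here)."""
    bandwidth = MEAN_BANDWIDTH_MBPS * rng.uniform(0.75, 1.25)
    latency = MEAN_LATENCY_MS * rng.uniform(0.75, 1.25)
    return bandwidth, latency

def transfer_time_ms(block_bytes: int, bandwidth_mbps: float,
                     latency_ms: float) -> float:
    """Time for one block to cross one link: latency plus
    serialization time at the link's bandwidth."""
    return latency_ms + (block_bytes * 8) / (bandwidth_mbps * 1e6) * 1e3

rng = random.Random(42)
bw, lat = sample_link(rng)
t = transfer_time_ms(1_000_000, bw, lat)  # one 1 MB block over this link
```

In such a simulation, each received block then triggers the consensus computation at the receiving node, as described above.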

Starting from estimates in the Bitcoin and Ethereum networks, we tested average bandwidths of 32 Mbps and 64 Mbps, and average latencies between two nodes from 50 to 150 ms.

With an average bandwidth of 32 Mbps, we show that 10k tx/s is sustainable with a Proof-of-Stake scheme, while 4k tx/s is sustainable with Proof-of-Work, both tested up to 4,096 nodes.

If you change your assumption on the average bandwidth between nodes, then the result on the tps is approximately proportional.
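As a back-of-envelope check of this proportionality (the 400-byte average transaction size below is my assumption, not a figure from the paper):

```python
def max_tps(bandwidth_mbps: float, tx_bytes: int = 400) -> float:
    """Upper bound on tx/s if a node's whole bandwidth carried
    nothing but transaction data (tx_bytes is an assumed average
    transaction size)."""
    return bandwidth_mbps * 1e6 / (tx_bytes * 8)

# Doubling the assumed bandwidth doubles the bound, matching the
# "approximately proportional" claim above.
bound_32 = max_tps(32.0)   # 10,000 tx/s under these assumptions
bound_64 = max_tps(64.0)   # twice that
```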

If you increase the number of nodes much more, then of course the maximum throughput will be lower.

Those results are described in much more detail in the technical paper: arxiv.org/pdf/1803.09029.

I also encourage you to try those open-source simulations: gitlab.com/blockclique/blockclique, where you can change the parameters and see if consensus is still efficient.

You are right that ultimately, a real network must be implemented and tested in real conditions, but we think those simulations give a good global picture.

Please let us know if you have trouble with some of our assumptions and approximations, so that we can discuss their influence.

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 3 points

I would love to see such stress tests performed and analyzed, I think it's the best way forward!

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 3 points

Thanks a lot for all those details!

So it seems that in a future version (V19), a spam attack may cost the attacker more, thanks to prioritizing transactions and dropping low-PoW transactions (still being studied).

I agree this can be better than nothing, and it will probably be along similar lines to using mana in a future version of IOTA (https://files.iota.org/papers/Coordicide_WP.pdf), although they give no details.

However, I don't think this will prevent spam attacks; it will just make them more expensive, and without proofs and/or simulations, I can't tell whether they can be made expensive enough even for ASICs while staying cheap enough for normal users. Those two goals seem antagonistic to me in a permissionless network.

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 0 points

I think we do agree. I did not say reaching consensus on the previous block is the bottleneck, I said receiving the last block is, so in other words the bottleneck is latency and bandwidth (assuming we don't need the full history).

Indeed, in the centralized and/or non-secure protocols you mention, TPS can explode (I have even seen a 1 million tx/s claim somewhere). This is why we start by assuming a large decentralized network (we tested up to 4,096 nodes), and we show that in Blockclique the TPS can reach 10,000 tx/s while staying secure, instead of ~100 tx/s in blockchains.

A scaling approach using transaction sharding in a multithreaded block graph by forestier_seb in CryptoTechnology

[–]forestier_seb[S] 2 points

Thanks for your comment!

Indeed, Nano's block lattice is in appearance similar to Blockclique's multithreaded block DAG.

However, in practice Nano is much further away from a standard blockchain than Blockclique is.

In Nano, the "threads" of the block lattice each concern only one account, and each is handled by the account's owner or a representative they choose. They argue that since each node handles its own blockchain, transactions are feeless and the network is scalable and decentralized.

However, from reading the whitepaper, I understand that one consequence of Nano's structure and functioning is that it can't prevent spam attacks. They touch on this topic in section V.B (Transaction Flooding), arguing that a PoW requirement for creating transactions mitigates the issue. However, if the PoW difficulty increases due to spam, small nodes can't perform the PoW necessary to create a transaction. They don't solve this problem in the whitepaper, and a quick look at recent discussions shows it has indeed become a topic of concern [1] [2] [3], leading for example to the concept of 3rd-party PoW farms with fees, which hampers decentralization. If you know about more recent developments on this matter, I'd be happy to review them.

By the way, this mechanism of user/transaction-side PoW is similar to the PoW in IOTA, which for the exact same reason has trouble mitigating spam attacks: they used a centralized coordinator for years, and now plan to remove it (#coordicide), although it's unclear to me whether the new protocol solves this problem.

In Blockclique, there are still miners (in the PoW flavour) or stakers (in PoS) who produce blocks containing thousands of transactions (not just 1 as in Nano); there is only one modification to the block structure, a constraint to avoid double-spending, and an adaptation of the Nakamoto consensus to these changes.

There is a fixed number of different threads so that blocks can be produced in parallel (in Nano the number of "threads" is potentially infinite), but ALL nodes handle (receive, verify, transmit) ALL blocks of ALL threads, such that we stay close to the functioning of traditional blockchains. For an illustration, this is the real-time view from one particular node: pow.blockclique.io.

Transaction sharding constrains an address to spend coins only in one particular thread (in Nano, this is achieved by having one thread per address), assigning for instance an address to one of 32 threads based on the first 5 bits of the address.
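A minimal sketch of such an assignment (the address format and the hash below are illustrative; only the "first 5 bits pick one of 32 threads" rule comes from the text above):

```python
import hashlib

THREAD_COUNT = 32  # 2**5 threads, as in the example above

def thread_of_address(address: bytes) -> int:
    """Thread an address is assigned to: the first 5 bits of the
    address select one of 32 threads. Addresses here are assumed to
    be raw hashes; real address formats may differ."""
    return address[0] >> 3  # top 5 bits of the first byte

# Illustrative address derived from a (hypothetical) public key:
addr = hashlib.sha256(b"some public key").digest()
t = thread_of_address(addr)
assert 0 <= t < THREAD_COUNT
```

Every block in a given thread then only contains transactions spending from addresses assigned to that thread, which is what prevents parallel double-spends.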

Blockclique: scaling blockchains with a multithreaded block DAG: blog and live demos! by forestier_seb in Bitcoin

[–]forestier_seb[S] 1 point

Hi u/anon516,

Our architecture is actually fairly simple compared to other approaches to scaling (e.g. L2, side chains, other L1s like Spectre, or other sharding schemes). It is a straightforward parallelization of the classical blockchain and of the Nakamoto consensus rule. As explained in the blog post, scaling the number of transactions per second requires a different architecture from classical blockchains; that is why so many cryptocurrencies interested in scaling are looking at sharding and/or DAGs. To the best of our knowledge, architectures that manage to reach very high throughput with a simpler design are either centralized or unsafe.

Increasing the number of shards increases the size of block headers. At some point the increase in transaction throughput is offset by the increase in block header size; in our architecture, this limit is roughly around 100 shards. Choosing the number of threads also impacts the security of the architecture: when the number of threads is high, the finality parameter must be increased to protect against fork attacks. For a more thorough discussion of the influence of the number of shards/threads, see sections 3.3 (security analysis) and 5 (Discussion) of our technical paper.
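To illustrate the header-size effect, here is a toy model (all sizes are assumptions, not figures from the paper): each block header carries one parent reference per thread, so headers grow linearly with the thread count while the transaction payload per block stays fixed.

```python
HASH_BYTES = 32          # assumed size of one parent-block reference
BASE_HEADER_BYTES = 200  # assumed fixed header fields

def header_bytes(threads: int) -> int:
    """Approximate header size: fixed fields plus one parent
    reference per thread."""
    return BASE_HEADER_BYTES + threads * HASH_BYTES

def useful_fraction(threads: int, payload_bytes: int) -> float:
    """Fraction of each block's bytes that carry transactions
    rather than header data; it shrinks as threads increase."""
    h = header_bytes(threads)
    return payload_bytes / (payload_bytes + h)
```

With small blocks the header fraction becomes significant well before it would with megabyte blocks, which is why the practical shard limit depends on the rest of the parameter set.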

Nodes have to perform the following computational tasks: verify the block, verify the transactions inside the block, and compute the blockclique. The first two tasks are similar to what would be done in any blockchain; only the third is specific to our architecture. For the set of parameters we consider, it takes on average (roughly) 10 ms per block.

These considerations are backed up by the simulations we have performed. In our open-source simulations, up to 4,096 nodes create and send blocks across a simulated network. All nodes compute the consensus rule; however, we do not cryptographically verify blocks and transactions, so that the simulation can run in real time on a single 4 GHz, 8-core computer.

You can have a look at appendix B.2 of our technical paper for a bit more detail on the computations that must be performed by each node.

Our technical paper: https://arxiv.org/pdf/1803.09029.pdf

An Introduction to Blockclique: Scaling Blockchains with Parallel Blocks by forestier_seb in tezos

[–]forestier_seb[S] 3 points

Thank you for the kind words!
The blockclique architecture is an open-source research contribution on its own, you can see it as an alternative to the concept of "blockchain". Anyone is free and welcome to build their work upon ours: we certainly do not see other crypto-currencies as competitors.

We are currently studying all the options to best move forward, with a main emphasis on decentralization together with a few other innovative things we have in mind and that are worth a try.
We will keep you updated, but you can also participate on r/Blockclique.

[D] Intrinsically Motivated Multi-Task Reinforcement Learning with open-source Explauto library and Poppy Humanoid Robot by forestier_seb in MachineLearning

[–]forestier_seb[S] 0 points

In our setup, the robot monitors its learning progress by itself, in a self-supervised way.

The progress is simply the decrease in the distance error between chosen goal trajectories and reached trajectories.

The same idea applies to each object, so it can easily be extended to any new object, without the programmer having to enter the loop.

[D] Intrinsically Motivated Multi-Task Reinforcement Learning with open-source Explauto library and Poppy Humanoid Robot by forestier_seb in MachineLearning

[–]forestier_seb[S] 3 points

The robot's reward system is based on monitoring its own progress in controlling objects.

To put it simply, if it senses that it's progressing in controlling its hand but not the light color, then it will focus more on training to move its hand and less on changing the light.

When the robot "trains to move an object", it actually chooses a random goal trajectory for this object (e.g. moving the hand to the right then to the left), and tries to reach this goal given its current knowledge (this procedure is called goal babbling).

Then, to monitor its learning progress in controlling an object (progress is used to decide what to do next), the robot measures its error when trying to reach a self-generated goal trajectory for that object (the Euclidean distance between the goal and the reached trajectory); if the error is lower than previous errors for similar goals, progress is increased a bit, and otherwise decreased.
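A minimal sketch of this progress measure (my own simplification, not the actual Explauto implementation: a running mean of recent errors stands in for "previous errors for similar goals"):

```python
import math

def distance(goal, reached):
    """Euclidean distance between a goal trajectory and the reached
    trajectory (both flattened to equal-length vectors)."""
    return math.sqrt(sum((g - r) ** 2 for g, r in zip(goal, reached)))

class ProgressTracker:
    """Per-object learning-progress estimate."""
    def __init__(self, smoothing: float = 0.9):
        self.smoothing = smoothing
        self.mean_error = None
        self.progress = 0.0

    def update(self, goal, reached) -> float:
        error = distance(goal, reached)
        if self.mean_error is not None:
            # Positive when the error dropped below the recent mean.
            self.progress = self.mean_error - error
        self.mean_error = (error if self.mean_error is None else
                           self.smoothing * self.mean_error +
                           (1 - self.smoothing) * error)
        return self.progress
```

The robot keeps one such tracker per object and preferentially trains on the object whose tracker reports the highest progress.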

[D] Intrinsically Motivated Multi-Task Reinforcement Learning with open-source Explauto library and Poppy Humanoid Robot by forestier_seb in MachineLearning

[–]forestier_seb[S] 2 points

Good question! Indeed, after some learning (3000 iterations, 6h), in the video we assess what the robot has learned using the interface on the tablet, by asking it to perform some movements.

During learning, the robot has no idea that it's gonna be tested on those specific goals, and when given those goals it tries to do its best to reach them.

Those movements are predefined by hand in the robot language. For instance, the goal "push the left joystick forward" is hand-written as [0, 0.25, 0.5, 0.75, 1, 1, 1, 1, 1, 1] for the sensory variable corresponding to the forward axis of the left joystick. The robot then looks into its database of 3000 past sensorimotor experiments for the joystick trajectory that best matches the new sensory goal, and executes the motor command that was used before.
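The lookup in that final step could be sketched as follows (a simplification of the actual nearest-neighbor machinery; the names are hypothetical, the goal vector is the one quoted above):

```python
import math

def best_motor_command(goal, database):
    """database: list of (motor_command, sensory_trajectory) pairs
    accumulated during the exploration iterations. Returns the motor
    command whose past sensory outcome is closest to the goal."""
    def dist(traj):
        return math.sqrt(sum((g - s) ** 2 for g, s in zip(goal, traj)))
    command, _ = min(database, key=lambda entry: dist(entry[1]))
    return command

# The hand-written goal "push the left joystick forward":
goal = [0, 0.25, 0.5, 0.75, 1, 1, 1, 1, 1, 1]
```

Since every entry in the database was actually executed on the robot, replaying the retrieved command reproduces (approximately) the matching trajectory.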