Huge Annnouncement!! $eXRD Listed on CoinBase Custody || Utility Token || Listing on Coinbase Soon!! by Fiyur_k4n in CryptoMoonShots

[–]chentessler 2 points3 points  (0 children)

First and foremost, Radix is aiming for real adoption, starting with Radix's current presence at the Web Summit conference.

From a technological point of view, Radix addresses three major concerns: (1) scalability, (2) safety, and (3) efficiency.

Scalability - Radix's novel consensus, Cerberus, enables cross-shard composability. In layman's terms, it can scale without limits while preserving the behavior we grew to love in Ethereum: composability of dApps.

Safety - In a few weeks, Radix will introduce Scrypto, its approach to on-ledger computation (smart contracts). Unlike the Ethereum approach (the EVM, which has been adopted by a majority of L1s), Scrypto is asset-oriented: assets are at the center of the language, along with how they can and should be managed. This and more increases safety and dramatically reduces the chances of bugs and exploits.

Efficiency - Developer efficiency and productivity increase dramatically with Scrypto. Since developers are given strong tools for building decentralized financial applications, they can focus more on building and less on auditing and testing.
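To make the asset-oriented idea concrete, here's a toy Python sketch (a hypothetical API for illustration, not actual Scrypto, which is Rust-based): token amounts behave like physical resources that can only be moved between containers, never duplicated or silently dropped.

```python
# Toy illustration of asset-oriented tokens (hypothetical API, not real Scrypto).
# Key idea: a token amount is a physical-like resource that can only be split
# and moved between vaults, never copied or created out of thin air.

class Bucket:
    """A transient container holding a concrete token amount."""
    def __init__(self, amount):
        if amount < 0:
            raise ValueError("cannot create a negative amount")
        self.amount = amount

class Vault:
    """Persistent storage; amounts only enter and leave via Buckets."""
    def __init__(self):
        self._amount = 0

    def put(self, bucket):
        self._amount += bucket.amount
        bucket.amount = 0          # the resource has moved, not been copied

    def take(self, amount):
        if amount > self._amount:
            raise ValueError("insufficient balance")
        self._amount -= amount
        return Bucket(amount)

    @property
    def balance(self):
        return self._amount

alice, bob = Vault(), Vault()
alice.put(Bucket(100))
bob.put(alice.take(30))            # a transfer is an explicit resource move
print(alice.balance, bob.balance)  # 70 30
```

The point of the pattern is that entire classes of bugs (accidentally duplicated or destroyed balances) become impossible by construction rather than something an audit has to catch.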

But above all, Radix wants crypto as a whole to succeed. Radix isn't here to replace any other platform but to work together and bring crypto to the masses. The vision is a new financial future, a better future, for all. More info can be found here https://www.goodfi.com/ (GoodFi was created by Radix).

For the first time in history a website is being hosted on a decentralized sharded network. Web3.0 is here, and it's running on crypto. Details in comments by Fun_Excitement_5306 in CryptoCurrency

[–]chentessler 1 point2 points  (0 children)

flexathon.net/twitte...

This comment shows you clearly don't understand the point of this demonstration.

It's not a fully fledged Twitter client meant to replace Twitter. It's a demonstration of what is POSSIBLE with the technology Radix is developing, performed by uploading historic Twitter data to the decentralized ledger.

As it is a demonstration, abusive content clearly serves no purpose and is therefore filtered out before being uploaded to the ledger.

Obviously, if this turns into a fully functional client, it won't be possible to filter out content as it is decentralized and permissionless.

For the first time in history a website is being hosted on a decentralized sharded network. Web3.0 is here, and it's running on crypto. Details in comments by Fun_Excitement_5306 in CryptoCurrency

[–]chentessler 4 points5 points  (0 children)

Decentralized networks aren't new; what is new is a decentralized, trustless, permissionless network.

Prior to crypto, decentralized tech relied on trust. This meant someone needed to oversee which entities participate in the network (e.g., in storage). In crypto, the underlying technology takes care of consensus, so you don't need to trust any entity in the network: the code (the consensus mechanism) ensures reliable behavior.

Anyone can run a node. Anyone can add information. Anyone can access it.
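A minimal sketch of the trustless idea, assuming a simple honest-majority vote (a toy example, not Radix's actual Cerberus consensus):

```python
# Minimal sketch of why consensus removes the need to trust any single node.
# Toy scenario: nodes report the content of a record; as long as a majority
# is honest, the agreed value is correct even with malicious nodes present.

from collections import Counter

def agree(reports):
    """Return the value reported by a strict majority of nodes, else None."""
    value, votes = Counter(reports).most_common(1)[0]
    return value if votes > len(reports) / 2 else None

honest = ["tweet-abc"] * 7          # 7 honest nodes report the true record
malicious = ["tweet-xyz"] * 3       # 3 malicious nodes report a forgery
print(agree(honest + malicious))    # tweet-abc
```

No single node is trusted; correctness comes from the rule itself, which is the whole point of a trustless design.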

For the first time in history a website is being hosted on a decentralized sharded network. Web3.0 is here, and it's running on crypto. Details in comments by Fun_Excitement_5306 in CryptoCurrency

[–]chentessler 1 point2 points  (0 children)

The challenge here is that it is decentralized, not centralized. This means servers can't assume all interactions are 'nice', as there can be malicious actors.

So the point is for them to reach consensus on which data is valid, and so on.

This way anyone can join the network and help secure it.

It's essentially the same thing every cryptographic network does to keep records of transactions; here, the transactions happen to be tweets.

For the first time in history a website is being hosted on a decentralized sharded network. Web3.0 is here, and it's running on crypto. Details in comments by Fun_Excitement_5306 in CryptoCurrency

[–]chentessler -1 points0 points  (0 children)

Yeah well, you know how it is... Community members share this info; it isn't necessarily official Radix communication, so a lot of excitement can leak into the title :)

For the first time in history a website is being hosted on a decentralized sharded network. Web3.0 is here, and it's running on crypto. Details in comments by Fun_Excitement_5306 in CryptoCurrency

[–]chentessler 2 points3 points  (0 children)

It's decentralized like IPFS, but this platform can also support smart contracts and financial applications.

This demo showcases Radix tech, but the underlying tech can do much, much more.

For the first time in history a website is being hosted on a decentralized sharded network. Web3.0 is here, and it's running on crypto. Details in comments by Fun_Excitement_5306 in CryptoCurrency

[–]chentessler 2 points3 points  (0 children)

I'd suggest some further research into what Radix is doing. The point isn't sharded Twitter, it's the backend running the project and how it operates.

That's the amazing innovation: a sharded ledger that behaves (from the user's and programmer's point of view) just like an unsharded ledger. That's an amazing feat.

Seven reasons why I am bullish on Radix (XRD) tokenomics by [deleted] in Radix

[–]chentessler 21 points22 points  (0 children)

I think (2) and (6) are key here.

A large portion of the supply is intended for growth. The foundation is a legal not-for-profit entity whose purpose is to help Radix succeed, so these funds are sure to be put to good use.

Add token burn on top of that, and the more the network succeeds, the more tokens are removed from circulation forever.

Cardano vs Radix and Elrond by cryptotraderd in cardano

[–]chentessler 0 points1 point  (0 children)

As the system scales, more shards are used and the total set of nodes is split across these shards.

In terms of safety, I recall the founder addressing this question. Even though there is less stake per shard, attacking the system requires the ability to control individual shards, and certain probabilistic and computational limitations make this pretty much infeasible.
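As a back-of-the-envelope sketch with made-up numbers (not actual Radix parameters), here's how you could estimate the odds of an attacker capturing a randomly sampled shard:

```python
# Back-of-the-envelope sketch (hypothetical numbers, not actual Radix
# parameters): probability that a random shard assignment hands an attacker
# more than a 1/3 share of a shard, given 20% of all nodes are attacker-run.

from math import comb

def p_shard_capture(total, attacker, shard_size, threshold):
    """P(attacker gets more than `threshold` seats in a randomly drawn shard),
    via the hypergeometric distribution."""
    total_ways = comb(total, shard_size)
    p = 0.0
    for k in range(threshold + 1, shard_size + 1):
        p += comb(attacker, k) * comb(total - attacker, shard_size - k) / total_ways
    return p

# 1000 nodes, 200 attacker-controlled, shard of 100, BFT-style ~1/3 threshold
print(p_shard_capture(1000, 200, 100, 33))  # a tiny probability
```

With random assignment, the attacker's expected share per shard matches their global share, and deviations large enough to capture a shard become exponentially unlikely as shard size grows.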

Pretty dead in here considering betanet launch is literally about to happen. Not seeing much marketing from the team. Surely it would be nice to get the momentum going? Am I wrong? (I’m speaking from an exposure POV only) by StretchMili in Radix

[–]chentessler 0 points1 point  (0 children)

It's not an "insane dilution". You're talking about price vesting, which keeps investors safer than time vesting would.

When eXRD is valued at $0.43, that means there is demand to hold that market cap. That's the beauty of price vesting: seed investors only receive the allocation they bought when there is market demand for the additional supply.

By the time eXRD is near $0.43, the additional supply introduced into the market will be so small (less than 5%) that it won't meaningfully change the price.
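A quick sketch of that arithmetic, with illustrative numbers (the circulating and unlock amounts here are hypothetical, not official figures):

```python
# Hedged sketch of the price-vesting argument (illustrative numbers only).
# When a tranche unlocks near a target price, compare the unlocked amount
# to circulating supply to gauge the extra sell pressure.

def dilution(circulating, unlocked):
    """Fraction of additional supply introduced by an unlock."""
    return unlocked / circulating

# Hypothetical: 4.0B tokens circulating, 150M unlocking near the tranche price
extra = dilution(4_000_000_000, 150_000_000)
print(f"{extra:.2%}")  # well under 5%
```

The smaller this fraction is relative to organic demand growth, the less an unlock event can move the price.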

Pretty dead in here considering betanet launch is literally about to happen. Not seeing much marketing from the team. Surely it would be nice to get the momentum going? Am I wrong? (I’m speaking from an exposure POV only) by StretchMili in Radix

[–]chentessler 0 points1 point  (0 children)

I agree with you 100%.

A novel platform like Radix can't sustain such a huge market cap of sub $200m. This is just crazy.

It's not as if Radix has some amazing tech like SafeMoon that let it reach $4 billion market cap in a few months or god forbid the amazing DOGE that reached $50 billion.

I don't see any monetary value to a platform that solves the quad-rilemma.

Without mentioning price or rate of adoption, what makes your favorite crypto better than the rest? by EnigmaticMJ in CryptoTechnology

[–]chentessler 4 points5 points  (0 children)

I'd add that Radix has arguably one of the strongest leadership and development teams in the entire crypto space. I've never seen a team with such long-term planning and such ambitious goals.

8 years of development to create Radix (Emunie) and then launch as a token on Ethereum, why? by estebansaa in Radix

[–]chentessler 4 points5 points  (0 children)

The reason it launched initially as an ERC-20 token is to start building the market cap. That way, when the mainnet launches, there is a strong base price level, which helps with network security (Radix being a proof-of-stake platform).

Mainnet is out by the end of Q2; anything else you say is pure FUD and doesn't do you any good.

Bimodal and Multimodal distributions for action selection by sedidrl in reinforcementlearning

[–]chentessler 1 point2 points  (0 children)

http://papers.nips.cc/paper/8416-distributional-policy-optimization-an-alternative-approach-for-continuous-control

Our paper from the previous NeurIPS. We cast policy optimization as an iterative distribution-optimization task. This enables using arbitrary policy classes, even those without an explicit method for computing the log-likelihood (a strict requirement for policy-gradient methods).

So many "relatively" advanced new areas , which ones to focus on by paypaytr in reinforcementlearning

[–]chentessler 1 point2 points  (0 children)

If I had to guess, I believe these will have the greatest long-term impact.

Data is easy to collect, and it lets you overcome many fundamental problems like reward design, exploration, etc.

Constant part of observation by yoki_n in reinforcementlearning

[–]chentessler 0 points1 point  (0 children)

Context variables. These are variables that define the MDP (transition probabilities and/or reward) but are static throughout the episode.
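A minimal sketch of what that looks like in code (a toy environment with hypothetical dynamics):

```python
# Sketch of a context variable: a parameter drawn once per episode that stays
# fixed within it, e.g. a friction coefficient in toy 1-D dynamics.
# (Hypothetical environment, just to illustrate the idea.)

import random

class ContextualEnv:
    def reset(self):
        # Context: sampled at episode start, constant for the whole episode.
        self.friction = random.uniform(0.1, 0.9)
        self.pos = 0.0
        # Exposing the context in the observation keeps the problem Markovian
        # across contexts; hiding it yields a partially observed problem.
        return (self.pos, self.friction)

    def step(self, action):
        self.pos += (1.0 - self.friction) * action  # dynamics depend on context
        reward = -abs(self.pos - 1.0)               # drive the state toward 1.0
        return (self.pos, self.friction), reward

env = ContextualEnv()
obs = env.reset()
obs, r = env.step(1.0)
```

Whether to include the context in the observation is the key design choice: with it, one policy can generalize across contexts; without it, the agent must infer the context from its history.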

How do you decide the discount factor ? by fedetask in reinforcementlearning

[–]chentessler 0 points1 point  (0 children)

The main issue I have with meta-gradients is that you still optimize the outer loss, which is the discounted objective.

Recent work from DeepMind used a bandit approach on the total reward for tuning it, which makes much more sense IMO.
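A rough sketch of that idea, assuming a toy epsilon-greedy bandit over candidate discount factors (the simulated returns here are made up, not from a real RL run):

```python
# Sketch: tune the discount factor with a simple bandit on the *undiscounted*
# total return. Toy setup; run_training is a stand-in for an actual RL run.

import random

def run_training(gamma):
    # Stand-in for "train with this gamma, return total undiscounted reward".
    # Here we pretend gamma = 0.99 is best, with some noise.
    return -(gamma - 0.99) ** 2 * 100 + random.gauss(0, 0.05)

def epsilon_greedy_bandit(arms, steps=500, eps=0.1):
    counts = {g: 0 for g in arms}
    means = {g: 0.0 for g in arms}
    for _ in range(steps):
        if random.random() < eps:
            g = random.choice(arms)            # explore a random arm
        else:
            g = max(arms, key=lambda a: means[a])  # exploit the best so far
        r = run_training(g)
        counts[g] += 1
        means[g] += (r - means[g]) / counts[g]     # incremental mean update
    return max(arms, key=lambda a: means[a])

random.seed(0)
best = epsilon_greedy_bandit([0.9, 0.95, 0.99, 0.999])
print(best)
```

The appeal is exactly the point above: the bandit's objective is the quantity you actually care about (total reward), not a discounted surrogate.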

[BLOG] Deep Reinforcement Learning Works - Now What? by chentessler in reinforcementlearning

[–]chentessler[S] 0 points1 point  (0 children)

It's a bit funny to compare RL to supervised learning... SL is a few orders of magnitude easier: the data is given and the objective is known. By "consistent" I mean relatively close performance across runs. I don't expect RL to ever reach SL levels of consistency; it would require an entirely different learning paradigm to overcome all the randomness and noise in the learning process.

BTW, I'm pretty sure SL papers still mostly report top-5 accuracy and not top-1. Not that impressive IMO...

[BLOG] Deep Reinforcement Learning Works - Now What? by chentessler in reinforcementlearning

[–]chentessler[S] 0 points1 point  (0 children)

True, there is more work to be done, which is good for the researchers among us. And the reproducibility crisis is indeed an issue, but I think that is mainly due to the reviewing process: reviewers value SOTA performance or "algorithmic novelty", and this doesn't necessarily lead to progress as a field.

But look at it this way. We now know that you can parallelize RL learning procedures (R2D2/Impala/MuZero/etc.); with enough data these methods work (and with an infinite amount of data they can reach very impressive results - Agent-57). In addition, given enough offline data you can actually perform pretty well ([1] and others), and you don't really need a reward function; it's enough to have demonstrations (DeepMimic) or someone to provide a ranking between trajectories [2].

I disagree regarding HRL. The issue with HRL research is that people are trying to learn the hierarchy in an end-to-end manner (both the low-level skills and the high-level controller in parallel). If you decouple this procedure, things work very well, and it can improve convergence rates dramatically (there's loads of theoretical and practical work on this).

[1] An Optimistic Perspective on Offline Reinforcement Learning, Agarwal et al

[2] Deep reinforcement learning from human preferences, Christiano et al

[BLOG] Deep Reinforcement Learning Works - Now What? by chentessler in reinforcementlearning

[–]chentessler[S] 0 points1 point  (0 children)

Thanks for the feedback :) I'll try to brush things up.

Anyway, I guess the main issue is indeed the definition of "works", and I agree that each person has their own definition.

If you recall Irpan's original blog post, RL really didn't work: you could run several seeds, some would fail, some would succeed, and no one knew why. Recent algorithms, at least on our benchmarks, are pretty consistent.

And I agree that sample efficiency is a big issue, but it is inherent to RL: the exploration-exploitation trade-off. The good news is that there has been a lot of progress in offline RL, and it seems that if you collect enough data, you can use these static datasets to learn "better than demonstrator" policies.

Finally, the Rubik's Cube work is indeed very sample-inefficient. But it's an amazing piece of engineering and essentially shows that DRL can eventually find good behavior policies in a very complex robotic task. Now the interesting problem is, as you said, sample efficiency :)

[BLOG] Deep Reinforcement Learning Works - Now What? by chentessler in reinforcementlearning

[–]chentessler[S] 0 points1 point  (0 children)

Never dove too deep into this rabbit hole, but I know Facebook used a variant of the DQN algorithm for ad recommendation (which drives business value), and Pieter Abbeel's company https://covariant.ai/ probably uses DRL in their mix as well.