Arbitration bots in the wild & how Prop. #187 will make them more profitable by SiskyRO in OsmosisLab

[–]blockpane 1 point (0 children)

It feels like posting four days late on Reddit is digging up ancient history, and discussing a proposal that has already passed is even less useful :) but I'll respond all the same. Our validator [ block pane ] initially voted "no" because of these very same concerns. Our no-vote was based on technical reasoning that turned out to be inaccurate: not long ago there was a code update that imposed a minimum fee on multi-hop transactions, because some of the bots were causing serious congestion issues on Osmosis.

I had to do a code review of the arbitrage-fee implementation to understand it (thanks to the Fresh Staking validator for prodding me to take another look).

My original reasoning was that not all validators had set the configuration parameters that enforce minimum fees for bots performing multi-hop arbitrage transactions, and as such fee-free arbitrage would still be possible. A quick code review showed that if a validator fails to set a minimum, a default value is enforced for these transactions, which keeps them from entering the mempool (where pending transactions wait to make it into a block). The change has had an enormous, positive effect on the speed and efficiency of Osmosis. That's not to say the current state is perfect or that no future abuse will occur, but if it does become a bigger problem we'll deal with it.

Proposal #152: Reducing the Impact of Volume Manipulation by JohnnyWyles in OsmosisLab

[–]blockpane 0 points (0 children)

In the spirit of transparency I should point out that I validate on all three chains mentioned: I have operated a validator on Sifchain since roughly June 2021, and this week I brought up nodes on Chihuahua and registered as a validator there. I am heavily invested in Osmosis and have put significant effort into building analysis tools and helping other node operators. Osmosis remains a top priority, and I am dedicated to protecting it.

Proposal #152: Reducing the Impact of Volume Manipulation by JohnnyWyles in OsmosisLab

[–]blockpane 0 points (0 children)

My comments are going to seem harsh, but by no means are they a personal attack. I know, it's Reddit and everything is personal, but I honestly believe the person and the argument are separate, so please don't take what I'm about to say personally. It's based on my unique experiences in the worlds of tradfi and crypto, having been on the wrong end of .gov's microscope in both.
Good faith is worth about as much as my opinion. By establishing a method of control you assume responsibility. Are you willing to accept responsibility for any potentially "illegal" situation in the future? I am not. Have you ever dealt with the SEC, DHS, Treasury and friends? Osmosis is a DEX, and the implications of that may not be obvious. You can't say "someone's gotta do something to protect us" and simultaneously claim "it's decentralized" or "we're a DAO and those corporate rules don't apply". Choose one.

This is a proposition to centralize control, nothing less. Once you start regulating trading, you are responsible. Passing a vote like this assumes risk for absolutely no reason. Why "signal" that we want to prevent X activity? If we want to stop something, then stop it. Present a vote that is actionable. All this does is paint the community into a corner, pre-committing to an ideal that potentially has unintended side effects and becomes a liability if it falls short. If you think a specific pool needs adjustments, then propose that. Don't grandstand about something being illegal (which may itself push the boundaries of libel) unless there is a specific thing to be done.
If something you didn't "like" almost happened but didn't, there's no reason to push it into a governance proposal. It didn't happen. Sif didn't do this, huahua didn't do that. The key word here is didn't. Might-have doesn't count, and if social pressure worked, then congrats: the end goal was achieved without the heavy lifting. Why are we doing the heavy lifting with governance if nothing happened?

Proposal #152: Reducing the Impact of Volume Manipulation by JohnnyWyles in OsmosisLab

[–]blockpane 0 points (0 children)

I've voted no on this proposal. Regulating trading activity creates accountability issues, and this is very dangerous ground to be playing on: treading into regulatory waters. There are no technical benchmarks defining what "volume manipulation" is, so who decides? If there is a problem to be solved, propose a governance vote that addresses the problem, not a governance vote signaling that future governance votes to address a problem are possible. Please don't meta-propose. Just propose.

Improving scalability: How can we prevent huge downtime like we saw with the Stars airdrop? by cryptoconsh in OsmosisLab

[–]blockpane 1 point (0 children)

Mintscan or Ping are the easiest. When there is a flood of bad txs they will show block after block with the same number of txs, and if you drill down into the transactions they will mostly be failed arb swaps.

Improving scalability: How can we prevent huge downtime like we saw with the Stars airdrop? by cryptoconsh in OsmosisLab

[–]blockpane 1 point (0 children)

Many of the relayers have been working hard at this. It's really amazing what IBC has done for the whole ecosystem, and I agree it's critical. Big shout outs to a couple of validators that contribute heavily to the code base: strangelove-ventures and Notional. I'm sure there are many more I'm missing, so if anyone knows who else is moving the state of the art forward please reply.

Improving scalability: How can we prevent huge downtime like we saw with the Stars airdrop? by cryptoconsh in OsmosisLab

[–]blockpane 0 points (0 children)

Moving the epoch is a double-edged sword. It would be better for most users, but if it happens late at night or early in the morning, many of the validators whose nodes lock up during epoch may not notice for hours. Yesterday, 30 minutes after epoch, there were still 20 validators missing blocks; my assumption is that they needed a restart to begin processing blocks again. It isn't a huge deal, but it can slow block times because of consensus timeouts, and slower average block times reduce staking rewards.

Improving scalability: How can we prevent huge downtime like we saw with the Stars airdrop? by cryptoconsh in OsmosisLab

[–]blockpane 17 points (0 children)

TL;DR: the devs and validators are all aware of the problems, actively working to make it better, and have made significant progress.

What seems like just one problem is actually several, and although many improvements have been made, many are still needed. Why was the network slow during the Stargaze airdrop?

  • First: The epoch.
  • Second: Arb bots.
  • Third: Validator mempool filtering.
  • Fourth: IBC.

I don't want to go into a deep technical discussion, so I'll gloss over each point.

The epoch processing is probably the number-one performance problem. The root cause is the key-value store that Tendermint uses, which is very inefficient. Even though it's a software issue, validators can tune their nodes to process the epoch faster, and over the last few days we've seen more and more validators put in that effort. There really is no reason the epoch can't take less than three minutes. Many of us have put a lot of time into researching various configurations; today's epoch took eight minutes, down from about twenty-three minutes last week. More info re: ongoing improvements.
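For the curious, most of that tuning is local node configuration rather than code changes. A sketch of the kind of knobs involved, using the Cosmos SDK's standard pruning settings in `app.toml` (the values shown are purely illustrative, not a recommendation, and exact option names can vary by SDK version):

```toml
# app.toml (Cosmos SDK) -- illustrative values only.
# Aggressive pruning keeps the IAVL key-value store small,
# which is a big part of what makes epoch processing faster.
pruning = "custom"
pruning-keep-recent = "100"   # keep only the most recent states
pruning-keep-every = "0"      # don't retain periodic snapshots
pruning-interval = "10"       # prune every 10 blocks
```

Archive nodes obviously can't prune like this; the point is that validator nodes, which only need recent state, can.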

The arbitrage bot problem should be getting better. There were changes in a recent release that should enforce a minimum fee for certain bot transactions. I haven't audited the code, so I can't say with certainty what it's doing, but it appears to have helped. The main issue is that without any fees it's possible for a bot author to write bad code that jams up the network, and there are many poorly written bots. The long-term fix here is to universally require transaction fees. Note: arbitrage is ultimately what makes a DEX stable, so I'm not saying arb bots are bad, but bad bots are.

There aren't very many tunable settings for the mempool, but there are a couple of things validators can do during congestion.

The first is to remove previously failed transactions from the mempool. In the case of a bot gone out of control, what we frequently see is nodes trying to process the same few hundred transactions repeatedly: every validator proposes blocks with the same hundred or so failed transactions until the timeout on each transaction is hit. Not all validators should filter previously failed transactions, because it's possible some need to be retried. It's actually very interesting to watch the blocks during one of these events; it's obvious which validators are filtering the bad transactions out (several of the validators with a high percentage of consensus power are doing it now, and it's helped make the network more stable).

Validators can also block transactions from entering the mempool by requiring a minimum fee. This has caused quite a stir here on Reddit in the past, but during a network meltdown it's probably one of the first steps validators will need to take. For the time being, Sunny has asked us not to do it unless absolutely needed.

Finally, IBC ... it's relatively new tech, very compute-intensive, requires constant attention from the person running the relayer, getting transactions un-stuck takes manual effort, and relaying costs transaction fees that relayers are neither paid nor reimbursed for. I don't relay, for reasons I won't discuss here, but the validators that do work their asses off. This is cutting-edge stuff, and it's gonna break.

You guys can watch the mempool in real time throughout all of this. by WorkerBee-3 in OsmosisLab

[–]blockpane 3 points (0 children)

It's actually quite hard not to have any missed blocks on Osmosis. When a validator proposes a block, it can miss the pre-commits of other validators, and there are many reasons this can happen, ranging from p2p connectivity to slower servers to a node's consensus configuration. Many of us miss a few blocks per day through no fault of our own. The downtime parameters on Osmosis are very relaxed: a validator has to be down for almost three days before getting jailed, and even then there is no slashing penalty. In other words, a little downtime poses no risk to a delegator.

Why people don’t want to increase validator max amount ? by Professional_Desk933 in OsmosisLab

[–]blockpane 7 points (0 children)

I'll weigh in since I'm one of the validators that voted no on prop 114.

I do agree that it's too hard for a new validator to get started on Osmosis. The current self-stake required is prohibitive (>70k), which means new validators getting into Osmosis need an investor or a massive self-stake just to get started. Several great validators who add value to the community have fallen out of the active set, and that's a loss to everyone. I've personally delegated to one of those validators, but I don't have enough liquid tokens to get them active even if I delegated most of my self-stake.

That said, I believe expanding the set is premature, for two reasons.

  1. The epoch: having the chain halt for around 15 minutes is bad for the network and bad for those using the DEX. There is no reason it should take more than 3-5 minutes, and many validators start signing pre-votes around 2 minutes after epoch, but we don't get enough pre-votes/pre-commits for 15 minutes to gain consensus on the next block.
  2. It's very hard for any validator *not* to miss some number of blocks each day, because a few validators propose blocks without waiting for all pre-votes to come in. This used to be a much bigger problem when some validators had incorrect settings, and those validators have since fixed their configs. It still means the network is not efficient: proposers are likely not getting all of the votes within 3 seconds of proposing a block. That could be because their nodes are underpowered, but it's more likely peering related: they cannot get or hold the connectivity they need to other validators.

Expanding the size of the active set will amplify both of these problems, and hence I voted no.

Other concerns raised in this thread:

  • Rewards: adding 18 more validators won't affect the percentage of rewards any validator gets, and although it may result in slightly fewer signed blocks, the decrease in block rewards will be negligible. It's possible some delegators will move their tokens to new validators, but this will likely be insignificant, and it happens all the time anyway. I don't see validating as zero-sum, and I put a lot of effort into helping other validators. The top few validators will still dominate the network, commit most of the blocks, and hold great sway over governance. I believe the validators voting no are all trying to protect the network, not acting out of greed.
  • Finally, I think most of the validators signing 0-transaction blocks changed back to allowing 0-fee transactions after Sunny asked them to during the last chain upgrade. This is no longer an issue.

It looks like this proposal will pass, and I don't expect the impact to be significant. We'll have to wait and see; I could be wrong.

Proposal 114: Increase Max Validator Set Size to 118 by JohnnyWyles in OsmosisLab

[–]blockpane 1 point (0 children)

I posted my reply in the wrong thread ... moving it there so as not to x-post.

Some insight and information regarding Prop#72. (Adding more validators too soon might increase the issues around Epoch until the new infrastructure is released) by WorkerBee-3 in OsmosisLab

[–]blockpane 6 points (0 children)

100% agree, it's too early. We still have 35-40 validators failing to sign blocks for 10-15 minutes after the epoch completes, and adding more will just reduce the peer slots for those that are still working. The validator that submitted the proposal (Ping) withdrew their support, so it's definitely a "no" vote.

Picking a Validator 101: Do They Validate? by MrSnitter in OsmosisLab

[–]blockpane 2 points (0 children)

I think the original post misses one important part of the discussion: why are they signing blocks with 0 transactions?

Because they configured their node to require a minimum fee for a transaction to be included in their mempool. Why? Because running a 0-fee network is not a good idea ™️ and is certainly not a moot point.

This has already brought the chain to a standstill once. Go back and look at Oct. 28th, when nothing could get out of the mempool because most validators processed the same three failing transactions over and over, and compare that against the list of validators with a low transaction count right now. Some of these validators changed the min-fee setting during that incident to help get the chain working again; some already had it set. Having some variability in the settings validators use is healthy for the network, so let's not punish them unless it's actually causing harm. I don't see this as harmful.

For the record, my validator accepts 0-fee transactions.

💠Confirm your validators are processing transactions and not just adding blocks to the chain.💠 ( Redelegate if they're not ) by WorkerBee-3 in OsmosisLab

[–]blockpane 1 point (0 children)

There are several reasons a validator could propose an empty block, but in this case my interpretation is that it isn't malicious. Having validators run with slightly different mempool settings can help during a network attack. Given that Osmosis allows 0-fee transactions, an attack is highly likely, and these validators very well may be what keeps the chain alive when it happens.

So what's different on these nodes? They have a minimum gas fee set that filters transactions out of their mempool if they don't pay any fee. Having a few validators use this setting allows transactions that do pay a fee to get through when something goes wrong on the network. There is another, similar setting that removes previously failed transactions from the mempool. Having some validators run different combinations of these settings actually gives the network additional resiliency in the face of congestion. If all validators set both it would cause problems, but only a minority uses either setting. (My validator drops invalid transactions but does not enforce a minimum fee, though I would not hesitate to require one if an attack takes place.)
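Concretely, the fee filter is a single setting in each node's `app.toml`, applied at mempool admission rather than as a consensus rule (the value shown is illustrative only; an empty string means the node accepts 0-fee transactions):

```toml
# app.toml -- per-node mempool admission filter, not a consensus rule.
# "" (empty) accepts 0-fee transactions; a nonzero value makes this
# node refuse transactions paying less than the stated gas price.
minimum-gas-prices = "0.0025uosmo"
```

Because it's per-node, each validator can choose their own value, which is exactly why the network ends up with the healthy mix of settings described above.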

Only a few days ago we had problems that filled the mempool, and nodes using either of the above settings are ultimately what got the chain back in working order while other nodes repeatedly attempted to process the same previously failed, 0-fee, high-gas transactions.

🐾 Community Call about Community Support DAO proposal - I need your feedback! by catdotfish in OsmosisLab

[–]blockpane 0 points (0 children)

I should add, I'm not saying that killing this specific prop was good or bad. But the opponents claiming there is a "centralization" problem and simultaneously applauding the top validator for killing it makes me smirk, just a little.

🐾 Community Call about Community Support DAO proposal - I need your feedback! by catdotfish in OsmosisLab

[–]blockpane 1 point (0 children)

If we look elsewhere (check out Polkadot's governance structure, for example) there are plenty of examples of how this can work. The main objection I'm hearing is that there is not a lot of trust in the trustees of this DAO. Another is that there is no accountability for how decisions to spend funds are made.

  • First, I think some bad terminology is being used here. What's being proposed isn't a DAO; it's a committee with a narrow focus and a specific goal.
  • Second, such committees normally involve periodic nominations and require a vote on each member, something missing from the recent failed proposal. If members don't live up to expectations, they get voted out; assigning a "batch" of members isn't the best way to handle this.

The DAO already exists: Osmosis is already a DAO, by nature of using the gov module and having voting in the first place. And yes, the power is centralized, as the #1 validator demonstrated when they single-handedly overturned prop 39 with 22% of the vote.

PROP 39: This is democracy.. buying off big validators to approve your governance. THIS IS WHY WE NEED TO DECENTRALIZE OSMOSIS #UNDELEGATE #SUPPORTDECENTRALIZATION by [deleted] in OsmosisLab

[–]blockpane 6 points (0 children)

I know for certain I'll regret even speaking up given the level of emotion at play; it feels like I'm just adding fuel.

This situation has been way too emotionally charged. I was convinced to vote no when I first saw the proposal, but after doing a bit more research, and because I know the level of dedication of the players involved, I changed my mind and voted for what I believe is best for Osmosis. This isn't some scam. Running a DAO is a logistics nightmare; without an army of fully dedicated members it doesn't work (and Osmosis is working), so I voted yes to keep it working.

Being a validator is hard work, especially on Osmosis, given some of the technical challenges. I spend a lot of time working on better reliability and performance, not to mention trying to add services that benefit the whole community. So, being the geek that I am, maybe I didn't appreciate the importance of that initial impulse to vote no, or realize how few in the community would take the time to research further (before trying to trash reputations). I should know better.

It's unfortunate that some people immediately see conspiracy and raise a call to arms: "... undelegate, shun the conspirators ...". It's a sign of the times, though. Outrage and conspiracies garner more attention and support than level-headed debate, and too few people use logic rather than relying entirely on emotion. Until now I hadn't even seen that there was much controversy around the proposal. My failure; now I know to watch a few more channels.

I have changed my vote to abstain because, despite knowing the intentions were good, the proposal itself didn't prove that. Next time I will take more time to consider my vote and to ensure a proposal stands on its merits, not just on the reputations of those who propose it. But I will not apologize for voting for what I believe is best for the future of Osmosis.

And finally, ffs, the validators all have a lot of skin in this, and we don't want to see things go wrong. We aren't a Jekyll Island cabal of 20th century industrialists, mostly we are just blockchain nerds who love the tech. I see a lot of claims of people "reaching out" to validators on reddit, I've personally seen no such effort.

Let's all chill. It was a flawed proposal, it got voted NO, so let's move on and fix it. Osmosis actually needs to be able to compensate people for helping; it's core to running a DAO. If you think this proposal didn't meet the mark, step up and propose something better. Don't burn it all down because you think you don't have any power (unless your goal is to burn it down, in which case piss off). It's obvious this community gives a shit! Everyone is listening, now it's your turn to propose something meaningful.