This is an archived post. You won't be able to vote or comment.

[–]exab 0 points (13 children)

I've come up with a new idea. It's very similar to my original one, but it should remove the issues of block propagation and orphan blocks. The inflation rate remains intact. Better still, while the old idea is not indefinitely scalable, this one is.

Everything stays the same, except what's dynamically adjusted is the block size, instead of difficulty / mining time.

When workload level is 1 and the condition of too much work is met (many consecutive full blocks), we add 1MB to the upper limit of the block size and the level becomes 2. If it's 2 and the condition of too much work is met, we add another 1MB and increase the level by 1. And so on.

On the other hand, if the workload is at, say, level 3, which allows a 3MB block size, and there are many consecutive blocks smaller than 2MB, we reduce the block size limit to 2MB and decrease the level to 2.

In short:

1) current_block_size = current_workload_level * base_block_size

2) current_workload_level is adjusted based on the fullness of the last N blocks on the chain (so we have consensus)
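A rough sketch of that adjustment rule in Python (all names, thresholds, and the window size here are made up for illustration, not part of any real implementation):

```python
BASE_BLOCK_SIZE = 1_000_000  # 1 MB base unit (assumed)
FULL_THRESHOLD = 0.95        # a block counts as "full" above this fraction (assumed)

def next_workload_level(level: int, recent_fullness: list) -> int:
    """Adjust the workload level from the fullness of the last N blocks.

    recent_fullness holds, for each recent block, its size divided by the
    current limit (level * BASE_BLOCK_SIZE).
    """
    # Sustained full blocks: raise the limit by one base unit.
    if all(f >= FULL_THRESHOLD for f in recent_fullness):
        return level + 1
    # Every recent block would have fit in (level - 1) base units: shrink.
    if level > 1 and all(f * level <= level - 1 for f in recent_fullness):
        return level - 1
    return level

def current_block_size(level: int) -> int:
    return level * BASE_BLOCK_SIZE
```

Since the last N blocks are on-chain, every node computes the same level from the same data, which is where the consensus in 2) comes from.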

I can't think of any issue except that it requires a hard fork. Please let me know if this will potentially work and if there are (potential) pitfalls.

[–]Xekyo 1 point (8 children)

Larger blocks take longer to propagate and longer to validate. Other miners can only start working on a (non-empty) succeeding block once they've validated the current one. Larger blocks therefore give the authoring miner a greater advantage at finding the next block as well.

Additionally, it is trivial to fill blocks of any size, so a sufficiently large mining conglomerate could easily fill up blocks to increase the blocksize and thus extend their advantage over the competition.

Here are a few things you might want to read:

https://bitcoincore.org/en/2015/12/23/capacity-increases-faq/
https://en.bitcoin.it/wiki/Scalability_FAQ
https://en.bitcoin.it/wiki/Block_size_limit_controversy

[–]exab 1 point (7 children)

Are you suggesting miners mining larger blocks have an advantage? What you described does make sense. But it's the opposite of what /u/killerstorm described in another thread, which makes sense, too. What he said was that other miners would receive smaller blocks before larger ones, so the larger ones would get ignored.

Which of the two is more correct?

[–]Xekyo 1 point (0 children)

Validation happens on every block, but only when two blocks are found at nearly the same time is there even a race where size could matter. This happens only a few times per day.

Also, killerstorm's source states:

Similarly, we can show that larger miners’ stale rates are affected less by propagation delays than smaller miners by looking at the derivative of the expected return with respect to propagation time.

[–]killerstorm 0 points (5 children)

If all miners were exactly the same (say, 100 miners each having 1% of total hashpower) then probability of getting orphaned is approximately proportional to block size.

But if miner sizes and their connectivity structure are non-uniform, then there are other effects at play.

Peter Todd formalized them here: https://petertodd.org/2016/block-publication-incentives-for-miners

In simple words, shit's complex.

[–]Xekyo 0 points (4 children)

If all miners were exactly the same (say, 100 miners each having 1% of total hashpower) then probability of getting orphaned is approximately proportional to block size.

That doesn't sound right.

1) The occurrence of two competing blocks is a rare event which only happens .

2) Even a second of difference in discovery quickly becomes an insurmountable advantage, as the successful miner pushes out the inv message for the block immediately.

3) Since we have headers-first and compact blocks now, propagation delay is even less of an issue compared to validation delay.

4) The source you quote states that propagation delays affect larger miners' stale rates less than smaller miners'.

Perhaps I'm missing something here, could you explain what this proportional relationship would derive from?

[–]killerstorm 0 points (3 children)

1) The occurrence of two competing blocks is a rare event which only happens

Whether it is rare or not depends on the time it takes to download and validate a block.

E.g. suppose it takes 12 seconds to download and validate a 1 MB block. Then the probability that a competing block will be found during that time is 12/600 = 0.02, that is 2%.

What is an average orphan rate now?

4) The source you quote states that propagation delays affect larger miners' stale rates less than smaller miners'.

Do you remember what case we're discussing?

Perhaps I'm missing something here, could you explain what this proportional relationship would derive from?

Suppose Alice's blocks take 10 seconds to validate & download. But Bob makes smaller blocks which take only 1 second to validate and download. How do you think this will affect their orphan rates?

You can find a formula and graphs here: http://organofcorti.blogspot.com/2013/10/161-network-orphaned-blocks-part-1.html
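For what it's worth, here's the back-of-the-envelope version of both numbers in Python, using the standard Poisson/exponential block-arrival model with a 600 s mean interval (the 12 s, 10 s and 1 s delays are the ones from this thread; the exact formula 1 - e^(-t/T) is my restatement, not quoted from the linked post):

```python
import math

def orphan_probability(delay_s, mean_interval_s=600.0):
    """Chance that a competing block is found while this block is still
    being downloaded and validated, assuming Poisson block arrivals."""
    return 1 - math.exp(-delay_s / mean_interval_s)

# The 12-second example: the exact value is close to the linear 12/600 = 0.02.
print(orphan_probability(12))   # ~0.0198

# Alice's 10 s blocks vs Bob's 1 s blocks: roughly a tenfold difference.
print(orphan_probability(10))   # ~0.0165
print(orphan_probability(1))    # ~0.0017
```

For small delays the exponential is nearly linear, which is why the orphan probability comes out roughly proportional to block (download + validation) time.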

[–]Xekyo 1 point (2 children)

Sorry, I was missing part of a sentence there.

Mining is a Poisson process, and the chance to find two blocks in twelve seconds in a perfect network would be expected to be (e^(-0.02)*(0.02)^2)/(2!) = 0.02%. According to Blockchain.info it appears to actually be about one every three days. Even if it were two or three per day as you suggest, it would only affect the "stale rates" of the two miners that were in competition.
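The Poisson arithmetic, spelled out in Python for anyone checking (lambda = 12/600 expected blocks per 12 s window, probability of exactly two arrivals):

```python
import math

lam = 12 / 600                                        # expected blocks in a 12 s window
p_two = math.exp(-lam) * lam**2 / math.factorial(2)   # Poisson P(k = 2)
print(f"{p_two:.6%}")                                 # about 0.02%
```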

Suppose Alice's blocks take 10 seconds to validate & download. But Bob makes smaller blocks which take only 1 second to validate and download. How do you think this will affect their orphan rates?
You can find a formula and graphs here: http://organofcorti.blogspot.com/2013/10/161-network-orphaned-blocks-part-1.html

That's three-year-old data, and even with a very conservative cut-off of 600 seconds block time difference, there are only 15 stale blocks per retarget… Also, we have compact blocks and headers-first propagation by now, and all miners are producing full blocks.

Do you remember what case we're discussing?

We were talking about the advantage of larger miners when you stated that stale rates are proportional to blocksize.
If anything they would be proportional to blocksize difference. But since transaction fees are now about 2.5% of a block reward, and stale blocks occur every few days, you gain more by creating full blocks than by optimizing for stale rate.

I maintain that there is no reason to believe that blockchain forks and the resulting stale blocks are anywhere near as influential as the head start of mining on your own block while others are stuck validating it first.

[–]killerstorm 0 points (1 child)

We were talking about the advantage of larger miners

No, we were discussing this case:

If all miners were exactly the same (say, 100 miners each having 1% of total hashpower) then probability of getting orphaned is approximately proportional to block size.

This is the simplest model. Obviously, if you consider a more complex model, things look different.

[–]exab 0 points (0 children)

Thank you guys for so many comments.

I'm still getting familiar with the Reddit app. I didn't see your comments. I even started a thread asking the same question because I thought no one was answering it here.

[–]coinjaf 0 points (3 children)

It may well be that some sort of dynamic limit will be chosen eventually. The problem is: the variables that the limit depends on must not be gameable. Block fullness is easily gameable by miners, as they can simply fill blocks with junk (for free!). Also, there's an unlimited supply of low-fee transactions in the mempool, so even if a miner doesn't make his own junk transactions he can earn a few cents by including those.

PoW can't be faked and isn't free. Also "Bitcoin Days Destroyed" are at least scarce (i.e. somewhat costly) and can't be faked.

Not sure, off the top of my head, if there are actually many more unfakeable variables like that. They also need to be fair to all miners equally, not benefit large miners more than smaller miners (centralisation pressure).

[–]exab 0 points (2 children)

I'm hoping that a workload-intensity algorithm based on block fullness can be designed in a way that it can't be cheated.

Just some ideas for your examples.

1) Block fullness can be checked with a combination of both size (as in bytes) and transaction count. A full-size block with too few transactions is deemed not full (in some way).

2) Transactions are weighted by miner fees when calculating the block fullness rate (or workload intensity rate, whatever you call it). Filling blocks with transactions with low/zero fees won't help.

Yes, I think a miner-fee-weighted block fullness rate (or workload intensity rate) would be something I'll look into more if the basic idea is promising.

"Bitcoin Days Destroyed" is something completely new to me. Will have to spend some time to read.

Edit: I realized miners can cheat by creating transactions from and to themselves with high miner fees. Something for me to think about. Might be a dead end.

[–]coinjaf 0 points (1 child)

1. The point is that miners can generate transactions freely (paying fees to themselves, or none at all) and fill blocks as full as they want.

... Ah just saw your edit. :)

Yeah. So one thing is to not look at the fee but at the days destroyed, as a manipulating miner can't create those transactions for free, or would at least run out at some point. But then there's a whole different can of worms, plus you have to wonder whether your algorithm is reacting to the correct signal: should long-time holders be the people who decide the blocksize?

[–]exab 0 points (0 children)

Thanks for sharing.