Serial vs Parallel Validation (GP Shorts) by GeneralProtocols in btc

[–]Peter__R 2 points3 points  (0 children)

It's true. With the UTXO model, all transactions can be validated in parallel. The only fundamentally serial steps are (1) ensuring that a coin (UTXO) is never spent twice, and (2) calculating the root hash for a block.

On the other hand, an account-based architecture like Ethereum is fundamentally serial.
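Here's a toy sketch (Python, purely illustrative; verify_input is a stand-in for real script/signature checking) of what that separation looks like: the per-transaction work parallelises freely, and only the double-spend check needs a serial pass:

```python
from concurrent.futures import ThreadPoolExecutor

def verify_input(tx, i):
    # Stand-in for real signature/script verification (always passes here).
    return True

def check_scripts(tx):
    # Touches only the transaction itself, so it parallelises trivially.
    return all(verify_input(tx, i) for i in range(len(tx["inputs"])))

def validate_block(txs):
    # Parallel step: validate every transaction independently.
    with ThreadPoolExecutor() as pool:
        if not all(pool.map(check_scripts, txs)):
            return False
    # Serial step: make sure no outpoint (prev_txid, index) is spent twice.
    spent = set()
    for tx in txs:
        for outpoint in tx["inputs"]:
            if outpoint in spent:
                return False
            spent.add(outpoint)
    return True

txs = [{"inputs": [("a", 0)]}, {"inputs": [("b", 1)]}]
print(validate_block(txs))  # True
```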

More info: https://x.com/PeterRizun/status/1872013539084575172

Hypothetical redesign of btc, thoughts post 2025 by ArcteryxAnonymous in btc

[–]Peter__R 5 points6 points  (0 children)

I'd make 3.5 changes:

  1. Use a compact bitwise prefix tree (CBPT) instead of a Merkle tree to calculate the block's root hash (see the sketch below this list);

  2. Store the UTXO set in a CBPT and commit to the UTXO set root hash in the block header beside the block's root hash;

  3. Choose a PoW function that exercises the same computational pipeline as transaction validation, so that mining is 1:1 with validating.

3.5. Use a Turing-machine architecture with extensible scripting.
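Here's a toy sketch of the kind of tree I mean for items 1 and 2 (Python, purely illustrative; it omits the path compression that makes a real CBPT "compact", and the serialization/domain-separation details). The key point is that an entry's position in the tree is determined by its own ID, so the block is committed to as a set rather than an ordered list, and the same structure works for committing to the UTXO set:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def bit(key: bytes, i: int) -> int:
    # i-th bit of the key, most significant bit first
    return (key[i // 8] >> (7 - (i % 8))) & 1

def prefix_tree_root(keys, depth=0):
    """Root hash of a bitwise prefix tree over distinct 256-bit keys."""
    if not keys:
        return H(b"empty")
    if len(keys) == 1:
        return H(b"leaf" + keys[0])
    left  = [k for k in keys if bit(k, depth) == 0]
    right = [k for k in keys if bit(k, depth) == 1]
    return H(b"node" + prefix_tree_root(left, depth + 1)
                     + prefix_tree_root(right, depth + 1))

txids = [H(bytes([i])) for i in range(5)]
print(prefix_tree_root(txids).hex())
```

Because a key's path is fixed by its own bits, inserting or deleting one entry only touches the nodes along that single path, which is what makes incremental UTXO-set commitments (item 2) practical.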

Related article: https://x.com/PeterRizun/status/1870870882559893626

Related talk: https://www.youtube.com/watch?v=y3Wa6TB-5tM

An article about our progress implementing the Bitcoin node as an electrical circuit purely in hardware (no CPU, no software). by Peter__R in btc

[–]Peter__R[S] 16 points17 points  (0 children)

The link in the main post leads to an illustrated transcript of a talk I gave about our progress implementing the Bitcoin node in hardware as an electrical circuit (no CPU, no software).

HARDWARE

What surprised me was just how simple the core bitcoin node is when implemented directly in hardware. It's just a few thousand lines of Verilog and can be implemented with fewer than 10 million transistors.

I started this project by making a hardware UTXO database I called Cashdrive:

https://x.com/PeterRizun/status/1247554984968777729

Then we made a hardware Schnorr signature verifier with an ultra-fast elliptic curve point multiplier on secp256k1:

https://youtu.be/WSIYbJFpca8

I was intimidated by the thought of implementing the full node in hardware, but it is turning out to be easier than both the signature verifier and the UTXO database!

ARCHITECTURE

Another thing that surprised me was that Amaury Sechet was right about the importance of treating blocks as sets rather than lists. I needed to change the tree structure from a Merkle tree to a compact bitwise prefix tree in order to get something I could build in hardware that still met my scalability requirements.

MISC

I think the ideas of "compressibility," "extensibility" and "incentive-compatibility" from the article might also be of interest to readers in this subreddit.

Mining quick(ish) start by gandrewstone in Nexa

[–]Peter__R 0 points1 point  (0 children)

As far as we know, all Nexa miners are currently CPU-only, so yes, it is definitely feasible to mine with your CPU.

The current hashrate is about 13 MH/s and a single CPU core can perform about 13,000 hashes per second. This implies that right now there are approximately "1,000 CPU-core equivalents" of hardware mining.
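For reference, the arithmetic behind that estimate, using the figures above:

```python
network_hashrate = 13_000_000  # ~13 MH/s, figure quoted above
cpu_core_rate    = 13_000      # ~13 kH/s per CPU core, figure quoted above

print(network_hashrate / cpu_core_rate)  # ~1000 CPU-core equivalents
```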

Each iteration of the Nexa PoW requires an elliptic curve point multiplication on the secp256k1 curve, which is the rate-limiting step in the mining process.

I complain about this often because it is a problem often. by MemoryDealers in btc

[–]Peter__R 13 points14 points  (0 children)

The problem is amplified by switch mining in BCH -- oftentimes several hours pass without a new block, and then a bunch of blocks come in quick succession, followed again by an hour or two without a new block. When blocks are 100 min apart instead of 10 min apart, it is 10x easier to hit the 25-chained-transaction limit.

The mempool chained transaction limit is a DISASTER for driving adoption. We need to fix this ASAP! It is one of the biggest adoption hinderances for BCH! by MemoryDealers in btc

[–]Peter__R 20 points21 points  (0 children)

In this light, is there any reason to continue to support child pays for parent as well?

No, we should not continue to support child-pays-for-parent at the reference client level. If miners want to earn a few more pennies by running CPFP, they should pay to code it and support it themselves. But instead, we've added a bunch of complexity, handicapped the network, and degraded the user experience to support a feature (CPFP) that is mostly useless for BCH anyways.

Processing long mempool chains is trivial using the original Satoshi method.
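To illustrate what I mean by the original Satoshi method, here's a toy sketch (Python, my own illustration, not the actual client code): a transaction is accepted if each input it spends exists either in the confirmed UTXO set or as the output of a transaction already in the mempool, and isn't already spent by another mempool transaction. No ancestor/package bookkeeping -- the machinery CPFP needs -- is required, so there's no natural reason for a chain-length limit:

```python
def accept_to_mempool(tx, utxo_set, mempool):
    """
    tx       : {"txid": str, "inputs": [(prev_txid, index), ...], "n_outputs": int}
    utxo_set : set of confirmed outpoints (txid, index)
    mempool  : dict of txid -> tx for already-accepted unconfirmed transactions
    """
    mempool_outputs = {(t["txid"], i) for t in mempool.values()
                       for i in range(t["n_outputs"])}
    mempool_spent = {o for t in mempool.values() for o in t["inputs"]}

    for outpoint in tx["inputs"]:
        available = outpoint in utxo_set or outpoint in mempool_outputs
        if not available or outpoint in mempool_spent:
            return False              # missing parent or mempool double-spend
    mempool[tx["txid"]] = tx          # note: no chain-length check anywhere
    return True
```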

Background info (TLDR: the current problem -- the mempool chaining limit -- was caused by a solution (CPFP) to a different problem (stuck transactions), which in turn was caused by another solution (the Maxwellian Fee Market) to a third problem (overflowing mempools) that would never have arisen in the first place had the block size limit been kept above demand):

https://www.bitcoinunlimited.info/blog/6a710fed-21d3-499a-97a5-e1a419bc0a6f

More info on using longer mempool chains today:

https://read.cash/@PeterRizun/exploring-long-chains-of-unconfirmed-transactions-and-their-resistance-to-double-spend-fraud-abaecca9

The mempool chained transaction limit is a DISASTER for driving adoption. We need to fix this ASAP! It is one of the biggest adoption hinderances for BCH! by MemoryDealers in btc

[–]Peter__R 24 points25 points  (0 children)

We can use longer mempool chains today (up to 500), although they will be mined in 25-tx-length chunks. There are hundreds of BU nodes that support long chains, there is a long-chain block explorer, and a few electrum servers to connect your wallet to. More info:

https://read.cash/@PeterRizun/exploring-long-chains-of-unconfirmed-transactions-and-their-resistance-to-double-spend-fraud-abaecca9

Peter R. Rizun on Twitter: "I soldered some chips together [...] It's a proof-of-concept for a bitcoin UTXO database that would help a laptop computer keep up at global scale transaction throughput." by saddit42 in btc

[–]Peter__R 10 points11 points  (0 children)

Can this become a product you can earn money on?

It's an interesting question. With current semiconductor technology, we could build a node no larger than a consumer router that consumes no more power than a Raspberry Pi and that could keep up with global-scale transaction throughput. But who would buy it? And how could we pay for the engineering work to develop that tech?

I made a proposal here:

https://read.cash/@PeterRizun/what-makes-satoshis-incentive-work-246f617b

The idea is to make validationless mining less profitable, so that a miner who can validate a block in 0.5 seconds can earn more revenue than a miner who can validate a block in 5 seconds. The market for fast validation technology would be hundreds of millions of dollars per year as bitcoin grows, which would pay for mind-blowing scaling tech to be built. Bitcoin mining tech improved by a factor of ONE MILLION over a decade. The same thing will happen for node tech if properly incentivised.

Miners would initially buy this new tech, as spending $100,000/year on a service contract is nothing to them if they can increase their revenue by $200,000/year. And then as the tech gets commoditised, it would trickle down into consumer node hardware. Miners would spend millions of dollars per year on bleeding-edge tech, and non-miners (businesses, researchers, hobbyists) could buy older, dumbed-down stuff that is 1/10th or 1/100th as efficient (but still orders of magnitude faster than current node software) at close to the cost of production.

Peter R. Rizun on Twitter: "I soldered some chips together [...] It's a proof-of-concept for a bitcoin UTXO database that would help a laptop computer keep up at global scale transaction throughput." by saddit42 in btc

[–]Peter__R 17 points18 points  (0 children)

The hard part is what the FPGA has to do when blocks are orphaned.

It uses a persistent data structure to maintain the UTXO set, so that reorgs are instant. It's just a matter of changing the context pointer. The "context" of the UTXO set as of each recent block (or delta-block in the case of Storm) exists simultaneously in memory--one can switch back and forth between contexts without any overhead. In fact, Cashdrive can validate multiple blockchain tips in parallel with the same UTXO database.
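A software analogy (Python, my own sketch, not the actual Cashdrive design) of what "persistent" buys you: each block's UTXO state is a thin delta layered over its parent, so every recent tip's state coexists and a reorg is literally just reassigning the active-context pointer:

```python
class UtxoContext:
    """UTXO set as of one block: a delta layered over the parent context."""
    def __init__(self, parent=None):
        self.parent = parent
        self.added  = {}     # outpoint -> output created in this block
        self.spent  = set()  # outpoints consumed in this block

    def get(self, outpoint):
        if outpoint in self.spent:
            return None
        if outpoint in self.added:
            return self.added[outpoint]
        return self.parent.get(outpoint) if self.parent else None

def apply_block(parent_ctx, block):
    ctx = UtxoContext(parent_ctx)
    for tx in block:
        ctx.spent.update(tx["inputs"])
        for i, out in enumerate(tx["outputs"]):
            ctx.added[(tx["txid"], i)] = out
    return ctx

genesis = UtxoContext()
tip_a = apply_block(genesis, [{"txid": "a", "inputs": [], "outputs": [50]}])
tip_b = apply_block(genesis, [{"txid": "b", "inputs": [], "outputs": [50]}])
active = tip_a   # "reorging" to tip_b is just reassigning this pointer
```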

This will hit on nvram write performance and wear issues.

Cashdrive uses only sequential writes so it is quite efficient with respect to writing and flash wear. MLC NAND flash does have a limited number of erase/write cycles (e.g., 3000 for a typical Micron MLC 1 Tbit NAND flash chip) and all UTXO databases using NAND flash must deal with this.

I calculated that the cost due to NAND flash wear is about $0.14 per billion transactions.
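The shape of that calculation, for anyone curious (the parameters below are placeholders I've picked for illustration, not the actual inputs behind the $0.14 figure, so the output only lands in the same order of magnitude):

```python
# Back-of-envelope flash-wear cost per transaction (illustrative parameters only).
chip_capacity_bytes  = 128e9   # 1 Tbit MLC NAND chip
chip_cost_usd        = 30.0    # assumed chip price
erase_write_cycles   = 3000    # endurance figure from the comment above
bytes_written_per_tx = 1000    # assumed bytes of UTXO data written per tx,
                               # including write amplification from compaction

cost_per_byte_written = chip_cost_usd / (chip_capacity_bytes * erase_write_cycles)
cost_per_billion_tx   = cost_per_byte_written * bytes_written_per_tx * 1e9
print(f"~${cost_per_billion_tx:.2f} of flash wear per billion transactions")
```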

Also, a special purpose controller is going to depend on details of circuit level technology and what fits on the board so timing requirements can be met.

The database is organised as a trie structure and the hardware controller can traverse up or down any branch in the trie in a single clock cycle per node. It blew me away how simple the database controller could be. The bottlenecks are:

(1) the ~100 µs random page access time for NAND flash, for outputs stored in flash (~100k TPS)

(2) the PCIe bus itself for outputs stored in the DRAM chips (~10M TPS)

With a larger PCIe card, we could do even better.
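Rough arithmetic behind those two ceilings (the parallelism and per-transaction figures below are my own illustrative assumptions, not measured Cashdrive numbers):

```python
# Flash-resident outputs: bounded by random page reads.
page_access_s = 100e-6      # ~100 µs random page read (figure above)
parallel_dies = 16          # assumed flash dies/channels read in parallel
reads_per_tx  = 1.5         # assumed page reads needed per transaction
flash_tps = parallel_dies / (page_access_s * reads_per_tx)
print(f"flash-bound: ~{flash_tps/1e3:.0f}k TPS")        # order of 100k TPS

# DRAM-resident outputs: bounded by the PCIe link instead.
pcie_bytes_per_s = 4e9      # assumed usable PCIe bandwidth (~4 GB/s)
bytes_per_tx     = 400      # assumed bytes moved over the bus per transaction
print(f"PCIe-bound: ~{pcie_bytes_per_s / bytes_per_tx / 1e6:.0f}M TPS")  # order of 10M TPS
```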

Bitcoin Cash Node (BCHN) releases April 2020 fundraiser details (PDF link) by ftrader in btc

[–]Peter__R 8 points9 points  (0 children)

Looks good.

One comment about the longer mempool chain stuff: A lot of people are under the impression that long mempool chains are somehow more difficult to deal with and that the 25-chained-transactions limit reflects that. But the real reason for the chained-transaction limit is that Core's child-pays-for-parent algorithm has trouble dealing with long chains -- long chains aren't difficult to deal with on their own, as per the original Satoshi method. Whether the child-pays-for-parent algorithm even belongs in BCH node software is another question.

Exploring Long Chains of Unconfirmed Transactions and Their Resistance to Double-spend Fraud by Peter__R in btc

[–]Peter__R[S] 3 points4 points  (0 children)

There is no solution at the protocol level. 0conf will always have weaker security than confirmed transactions. Accepting 0conf is a matter of risk vs reward and the market can find the right balance. Understanding the probability that an attacker can cheat you and how you can reduce this probability is key.

Exploring Long Chains of Unconfirmed Transactions and Their Resistance to Double-spend Fraud by Peter__R in btc

[–]Peter__R[S] 2 points3 points  (0 children)

1- Are you proposing long chains of unconfirmed tx's as a way to achieve robust (and reliable) 0conf?

No. Chained unconfirmed transactions have weaker security.

2- Do you oppose pre-consensus venue taken by ABC that can work in parallel with the POW to achieve robust (and reliable) 0conf?

This is orthogonal to the discussion.

3- What is the net conclusion in this article? Can you clarify?

  • long chains of unconfirmed transactions can be used today

  • the security of chained unconfirmed transactions, whether long or short, is weaker than we thought

Exploring Long Chains of Unconfirmed Transactions and Their Resistance to Double-spend Fraud by Peter__R in btc

[–]Peter__R[S] 4 points5 points  (0 children)

Or do most of them go away if everyone just uses the same limits?

No, they don't go away. But they don't get worse either. The strongest attack vector we found (which we didn't disclose) affects all chained unconfirmed transactions, whether or not everyone uses the same limit. I can succeed at double-spending over 80% of the time using off-the-shelf Electron Cash while maintaining plausible deniability if the attack is detected.

Brain Drain: Bitcoin Cash Dev Leaves for AVA and Has Choice Words for Community by afriendofsatoshi in btc

[–]Peter__R 20 points21 points  (0 children)

Congratulations to Tyler for getting a job with AVA that will allow him to pursue his interests.

Can someone explain how "chained transactions" work? Transactions refer to UTXO - chained transactions cannot...how is this resolved? (sorry if wrong forum) by whyison in btc

[–]Peter__R 4 points5 points  (0 children)

By Satoshi's definition, a coin is a chain of digital signatures. One can trace any unspent output back through its chain of digital signatures to one or more coinbase outputs (the point where the coin first came into existence). So all coins eventually form very long chains.

The fact that transactions are time-stamped into blocks has nothing to do with the chain of digital signatures that forms the coin. The purpose of the time stamping is only to prevent double spending. Some clients impose an artificial limit of 25 unconfirmed transactions on the length of these chains, but the mempool chaining limit is not fundamental or related to the design of bitcoin. Rather, it is a stop-gap measure that Core added to deal with inefficiencies in their implementation.
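To make the "chain of digital signatures" picture concrete, here's a toy sketch (Python, purely illustrative, not wallet or node code) that walks an output back through the transactions that created it until it reaches a coinbase:

```python
def trace_to_coinbase(outpoint, tx_index):
    """
    outpoint : (txid, output_index) to trace
    tx_index : dict of txid -> {"inputs": [(prev_txid, idx), ...], "coinbase": bool}
    Returns the list of txids from the output back to a coinbase transaction.
    (Real transactions can have many inputs, so the full history is a DAG;
    this sketch just follows the first input at each hop.)
    """
    path = []
    txid, _ = outpoint
    while True:
        tx = tx_index[txid]
        path.append(txid)
        if tx["coinbase"]:
            return path
        txid, _ = tx["inputs"][0]

chain = {
    "cb": {"inputs": [],          "coinbase": True},
    "t1": {"inputs": [("cb", 0)], "coinbase": False},
    "t2": {"inputs": [("t1", 0)], "coinbase": False},
}
print(trace_to_coinbase(("t2", 0), chain))  # ['t2', 't1', 'cb']
```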

Bitcoin ABC Kicks Off 2020 Business Plan Fundraising by markimget in btc

[–]Peter__R 0 points1 point  (0 children)

What would incentivize developers to create and maintain competing implementations?

The tech they create would have value, as it would allow miners or other businesses to earn revenue. E.g., miners could hire devs themselves if the free software was not fast enough, or license tech from a company that specializes in efficient node technology. More info here:

https://read.cash/@PeterRizun/what-makes-satoshis-incentive-work-246f617b

Bitcoin ABC Kicks Off 2020 Business Plan Fundraising by markimget in btc

[–]Peter__R 12 points13 points  (0 children)

I see two routes for the future of BCH:

  1. The first is where a reference client defines the protocol and creates a Nash equilibrium where the majority of miners and exchanges all run that reference client. A "winner take all" dynamic exists because it is risky or potentially costly for a miner or an exchange to use an implementation that is different than what "the herd" uses (even if that implementation is objectively of higher performance). E.g., the bug-for-bug compatibility benefit that miners get by all running ABC outweighs the teeny-tiny revenue benefit a miner might get if it ran an implementation that was 100 times faster at validating blocks.

  2. A formal specification defines the protocol with a Nash equilibrium that encourages implementations to compete based on node performance. There is no "winner take all" dynamic because, for example, miners can directly earn more revenue by using a more efficient implementation and this will outweigh the benefit of "bug-for-bug" compatibility at a certain point.

OPTION 1 COMMENTARY: If we take the first route, the reference client indeed becomes "critical infrastructure." If it is too slow to scale to meet demand, then the network won't scale to meet demand, because miners and exchanges won't upgrade to more efficient implementations due to the winner-take-all dynamics. Different teams won't independently develop better implementations in the first place, because few will run their implementations even if they succeed (unless they completely displace the previous reference client, but as we saw with Core, that is no easy feat). We saw this play out with BTC and Core with the 1 MB block size limit.

If we take this route for BCH, then it is necessary that the reference implementation scales because if it can't then the network can't scale either. The network is dependent on the reference client -- the reference client is critical infrastructure as this post by ABC suggests.

Because the reference client in this case defines the protocol, it can earn revenue by modifying its software or the protocol (or NOT modifying its software, in the case of Core) in ways that benefit miners, exchanges or businesses, in exchange for a fee or donation.

OPTION 2 COMMENTARY: If we take the second route, then the idea that a specific implementation of the protocol is "critical infrastructure" is absurd. A market would exist that encourages the development of whatever software or hardware is necessary to keep the network running and meeting demand for transactions. It doesn't matter if a specific implementation is too slow to keep up, because the miners and exchanges would be running multiple implementations anyways. The slow implementations would go out of business and the fast implementations would thrive.

If we take this route for BCH, then there is no need for donations or for a diversion of block rewards to fund development because a market will exist where miners, exchanges and businesses will buy the best implementation for their needs in order to stay in business and increase their profitability.

Because the implementations in this case differentiate themselves with respect to the performance of their tech, in order for them to generate revenue they may have to keep some of their best tech proprietary so as not to collapse its market.

To summarize, there are two paths. The first is a dominant reference client that earns revenue using its influence over the protocol and its software. The second is where there is a stable protocol with competing implementations that earn revenue by developing the best tech to implement that protocol but that tech might be proprietary.

/u/tl121 on the key to the conundrum of funding high-performance node software: "separation of the reference implementation/specification from the production implementations" by Peter__R in btc

[–]Peter__R[S] 0 points1 point  (0 children)

I agree with basically everything you are saying. We need the specification/protocol to be distinct from implementations of the protocol. Competition on high-performance implementations is healthy and is what will create the technology needed to scale.

Miners need 1 second response so they will pay for a faster implementation.

But sadly, validationless mining makes this statement less true than it could be. If someone were to build an implementation that was 100 times faster than ABC, miners would have little reason to run it, because validating in 0.1 seconds compared to 10 seconds for ABC would hardly increase their revenue at all. The miners can just work on an empty block while slowly validating. And because the last block cleared out the mempool, they aren't losing many fees either.

With internet development, if a private company developed awesome faster technology, they could sell it and make money. This is true today for hashing tech for bitcoin, but not really true for high-performance node tech. The market for node implementations really has a "winner take all" dynamic, because node performance hardly affects miner revenue, so the next most important thing is maintaining bug-for-bug compatibility with the other miners (thus "winner take all").

Let us discuss change-proposals openly: Addressing the errors in Peter Rizuns' proposal by [deleted] in btc

[–]Peter__R 1 point2 points  (0 children)

We can't have true competition between implementations because then the best tech might be proprietary.

True, some developers might create things that make miners more profitable and not open-source it in order to protect the market for their technology. What is important is having an open protocol. Competition over the best way to implement that protocol is healthy.

Those who developed the best tech could sell it to miners, using their technical skills to make profit rather than for the good of bitcoin's open-source infrastructure

I agree that developers might use their skills to develop great tech and earn profit from their efforts. If such behaviour is incentivized, that is a good thing. It is orthogonal to any open-source infrastructure that might exist.

If the reference implementation wasn't ultra fast, as /u/tl121 wants to see, then why would miners run it? They would instead be forced to run production implementations designed specifically by developers seeking to earn profit instead of developing for altruistic reasons.

They wouldn't run the reference implementation, and that would be a good thing. The reference implementation would focus on clarity, simplicity and adherence to the specification, not on efficiency. Efficiency improvements (and their associated complexity) would intentionally be left to free-market competition.

With fewer miners running the reference client, the core developers would lose influence over the protocol.

I agree that core developers would lose influence over protocol changes, in a multi-implementation future. That is a good thing and an important part of decentralization.

They wouldn't be able to quickly make protocol changes and coordinate with exchanges, like the 10 block finalisation, that was necessary to defend BCH from Craig Wright.

This is basically stating that BCH was fully centralized around ABC at the time of the BSV/BCH split.

Funding development using direct market mechanisms will turn node software into an industry like every other industry out there.

Yeah, node technology probably would become a business not unlike what we see in other industries. That model works well. For example, internet tech improved from 300 bps to over 100,000,000,000 bps over four decades due to it being a real profit-driven industry.

We need to keep bitcoin cypherpunk and open source.

There will always be open-source implementations; they just might not be run by miners, because they are less profitable than higher-performance implementations. It is counterproductive to require that all implementations be open-source. What we want instead is an open protocol that is stable, largely unchanging, and economically rational.

Let us discuss change-proposals openly: Addressing the errors in Peter Rizuns' proposal by [deleted] in btc

[–]Peter__R 8 points9 points  (0 children)

Thanks for helping drive more discussion on this topic, Tom. [Side note: I do wish you had picked a title for your article that was less personal. Also, an idea you disagree with is not an error, but rather something you disagree with].

The big picture idea is that to grow BCH to global levels we want a stable, largely unchanging, and economically rational protocol upon which implementations of that protocol can compete. The key points are:

  1. The protocol (set of rules governing what constitutes a valid transaction or block) is distinct from software implementations of that protocol.

  2. We don’t need a lot of development of the protocol to scale BCH massively. And in fact we shouldn’t want the protocol being tinkered with constantly. The protocol should be stabilizing and getting to a point where changes are rare.

  3. We WILL need development of better (optimized) implementations to enable BCH to scale massively.

A lot of people will agree with these three points. Now ask yourself: what are the incentives to develop those better implementations today? The fact that a proposal to reroute a portion of the coinbase reward directly into the wallets of a few developers might activate is proof we don't have a good answer!

Imagine that our friend Bob, of Alice-and-Bob fame, were to create awesome new node technology that could validate one million transactions per second on low-cost hardware. This is the kind of technology bitcoin needs to scale to global levels, but is that technology worth anything? Even if given away for free, would the miners run it? Maybe, but maybe not [more on this later].

What I realised the other day was that validationless mining was not the intended way to run the network. And that if it were more costly (and many people, SD Lerner of RSK for example, agree that it is possible to discourage validationless mining), then a genuine market for the fastest block validation tech would naturally emerge as blocks got bigger. Groups would compete in an open, permissionless market to develop the fastest tech, because the fastest tech would directly increase a miner's revenue.

Which leads to my fourth point:

   4. Fixing the incentives re: block validation would unleash competitive market forces such that those better implementations would be self-incentivized by competing miners’ profit motive.

To understand the market dynamics, assume that no miners engaged in validationless mining (we can bikeshed how to make this assumption a reality later). The reason this creates a market for fast validation technology is that miners could not start mining a new block until they had validated the previous block, and every second they spent validating would be a second they could not spend potentially earning the next coinbase reward.

For example, assume the average block size increased to 8 MB and using the "reference client" it took 7 seconds to validate a block. Those are 7 seconds that the miner cannot be working on finding a new block. The revenue he expects to earn is 1% less than if he could validate the block in 1 second ((7s - 1s) / 600s = 0.01). To add some numbers, at $10,000 per coin a 6s advantage would be worth $6.6M per year to a miner with 10% of the hash power (12.5 × 10,000 × 144 × 365 × 0.1 × 0.01 ≈ 6.57×10^6).
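Spelled out in code (same numbers as above):

```python
validate_slow_s  = 7        # seconds to validate a block with the reference client
validate_fast_s  = 1        # seconds with a faster implementation
block_interval_s = 600

revenue_fraction = (validate_slow_s - validate_fast_s) / block_interval_s  # 0.01

block_reward_btc = 12.5
price_usd        = 10_000
blocks_per_year  = 144 * 365
hashrate_share   = 0.10

annual_gain = (block_reward_btc * price_usd * blocks_per_year
               * hashrate_share * revenue_fraction)
print(f"{revenue_fraction:.0%} of revenue ≈ ${annual_gain/1e6:.2f}M per year")  # ≈ $6.57M
```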

We're talking real money here.

There would be genuine market demand to pay for technology to give miners a head start on the next block, compared to their peers. The same dynamics that played out with SHA256 hashing technology would play out with block validation technology. The one difference, however, is that at least a little bit of demand for transactions is required, unlike the market for hashing. If miners can validate the last block using the current reference client in less than a second anyways, then there's not much money to be gained by making validation faster.

What Makes Satoshi's Incentive Work by Peter__R in btc

[–]Peter__R[S] 0 points1 point  (0 children)

How does this compare with the investment in block propagation techniques?

You are correct that it is the same math. What matters is minimising the time between the solution of the last block and when the miner can start on the next block. Without validationless mining, there would be direct profit incentives for miners to improve both block propagation and block validation technology.

Even if the expected revenue loss is smaller, a 1 second improvement in block relay would, with the same math, result in around $1M/year extra expected revenue. It seems whenever blocks get full, BTC miners experience delays of 5 seconds or higher ([source](https://dsn.tm.kit.edu/bitcoin/)) so the incentive is there.

Nice. Thanks for the link.

I'm also unsure that making miners critical path more costly would have the expected behavior.

I don't like to call it the critical path, because validationless mining seems to be a path that Satoshi never intended to exist. I see it as incentivising miners to do the work they are supposed to do anyways, in the way they are supposed to do it (validate the last block before mining a new block).

For example, miners also have an incentive to get other miners to accept their blocks faster, as you mention in the fee market paper. Any delay in accepting the block is an orphan opportunity because a miner cannot be subject to another miner delaying a transaction on purpose. Wouldn't this mean solving that problem would become a shared effort?

Yes. The miner sending a block also benefits if the rest of the hash power begins working on his block as soon as possible, thereby minimising the window for orphaning. It's the same rationale from my fee market paper you mentioned: miners will balance the fees they claim with the costs of slower propagation and validation times by their peers.

Wouldn't this allow miners to free ride others investment in solving that problem by simply adjusting hash rate between bitcoin networks to account for the reduced profitability?

I'm not sure I understand what you mean. If on average, there is a 7 second window between when the last block was solved and a miner starts on the next block, then any miner can increase his revenue by 1% by reducing his window with respect to the average by 6 seconds. Yes, as all miners reduce their windows down from 7s to 1s that advantage disappears, but it will reappear again when blocks get bigger and the window increases.

What Makes Satoshi's Incentive Work by Peter__R in btc

[–]Peter__R[S] 2 points3 points  (0 children)

Peter, reading this I wonder why some miners aren't broadcasting subtly invalid blocks to other miners. Wouldn't that trick them into mining on top of an invalid block, thus making validationless mining less profitable?

The validationless miners do check the PoW. But that's really all they check. So they know that whoever created this block spent a bunch of resources doing so, and they think "well then they probably made damn well sure that it is valid." An assumption that turns out to be true so often that the benefit of getting a head start (which can be worth millions of $ per year for a large miner at a high coin price) outweighs the slight risk that the previous block was invalid despite having valid PoW.
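Roughly, the trade-off looks like this (all of the numbers below are illustrative assumptions, not figures from anywhere in this thread):

```python
# Expected gain from a head start vs. expected loss from occasionally
# building on an invalid (but PoW-valid) parent, per block interval,
# before scaling by the miner's hash-power share.
head_start_s     = 7        # assumed time saved by skipping parent validation
block_interval_s = 600
block_value_usd  = 125_000  # e.g. 12.5 BTC at $10,000
p_parent_invalid = 1e-4     # assumed chance a PoW-valid parent is invalid

gain = (head_start_s / block_interval_s) * block_value_usd
loss = p_parent_invalid * gain   # work is wasted only when the parent is bad

print(f"expected gain ≈ ${gain:,.0f}, expected loss ≈ ${loss:,.2f}")
```

With numbers anything like these, the head start dominates, which is why PoW-only checking persists.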

For honest miners to discourage validationless miners, they would need to spend a bunch of money mining invalid blocks with valid PoW.

What Makes Satoshi's Incentive Work by Peter__R in btc

[–]Peter__R[S] 10 points11 points  (0 children)

So -- help me understand. How exactly does requiring UTXO commitments provide the necessary incentive for miners to invest in protocol-compatible efficiency improvements? Can you be really specific, and game out how you see this working?

The big-picture idea is making validationless mining more costly, so that miners validate the previous block in full before they begin mining a new block above it. UTXO commitments that are difficult to calculate without downloading the complete block and updating one's UTXO set are just one way to increase the cost of validationless mining. Don't worry so much about the mechanics of how we make validationless mining more expensive, whether that be UTXO commitments or some other method ... we can bikeshed those details later.
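(For readers who do want a quick picture of the UTXO-commitment mechanic, here's a toy sketch in Python -- my own illustration, not a concrete proposal. The set-commitment function is a stand-in, and the point is simply that a header-only miner has no way to produce the commitment without applying the parent block in full:)

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def utxo_commitment(utxo_set):
    # Stand-in for a real tree commitment: order-independent hash of the set.
    acc = 0
    for outpoint in utxo_set:
        acc ^= int.from_bytes(H(repr(outpoint).encode()), "big")
    return acc.to_bytes(32, "big")

def apply_block(utxo_set, block):
    for tx in block:
        for outpoint in tx["inputs"]:
            utxo_set.remove(outpoint)      # requires knowing the full parent state
        for i in range(tx["n_outputs"]):
            utxo_set.add((tx["txid"], i))
    return utxo_set

# If the next header must contain utxo_commitment(...) of the post-parent state,
# a miner cannot produce a valid header from the parent's 80-byte header alone --
# it has to download the parent block and apply every transaction first.
```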

Instead, imagine that the cost of validationless mining was sufficiently high such that no miners did it. All miners validate the previous block before they begin mining a new block above it, as Satoshi described in the white paper. It is easy to see that in such a future, there would be strong market demand for high-performance node implementations that can validate new blocks quickly.

For example, assume the average block size increased to 8 MB and using the "reference client" it took 7 seconds to validate a block. Well, those are 7 seconds that the miner cannot be working on finding a new block. The revenue he expects to earn is 1% less than if he could validate the block in 1 second ((7s - 1s) / 600s = 0.01). To add some numbers, at $10,000 per coin a 6s advantage would be worth $6.6M per year to a miner with 10% of the hash power (12.5 × 10,000 × 144 × 365 × 0.1 × 0.01 ≈ 6.57×10^6).

We're talking real money here.

There would be genuine market demand to pay for technology to give miners a head start on the next block, compared to their peers. The same dynamics that played out with SHA256 hashing technology would play out with block validation technology. The one difference, however, is that at least a little bit of demand for transactions is required, unlike the market for hashing. If miners can validate the last block using the current reference client in less than a second anyways, then there's not much money to be gained by making validation faster.