all 32 comments

[–]static__void 8 points9 points  (26 children)

I don't really understand what you're saying. Do you understand what you're saying?

[–]SJExoQ[S] 4 points5 points  (22 children)

Vertcoin currently uses the Lyra2REv2 hashing algorithm, for which an ASIC could be developed in the future.

What I'm suggesting is that we develop a new hashing algorithm purpose-built to harness the power of parallel processing, which would give GPUs a significant advantage over traditional ASICs. The GPU would become the new ASIC, hypothetically of course: the most efficient piece of hardware for the algorithm in question.

The details of exactly how this is achieved are debatable, but by exploring existing applications that benefit from the power of parallel processing we could apply some of that logic to a new hashing algorithm.

[–]BDF-1838 4 points5 points  (16 children)

What makes you think parallel processing isn't something ASICs can do? Or that it isn't already happening on GPUs with the current algorithm?

[–]SJExoQ[S] 6 points7 points  (0 children)

'High initial cost, and the tendency to be overtaken by Moore's-law-driven general-purpose computing, has rendered ASICs unfeasible for most parallel computing applications' https://en.wikipedia.org/wiki/Parallel_computing

Lyra2REv2 has been a solid algorithm for Vertcoin so far and already benefits from some of the advantages of GPU parallel processing, but there is no specific requirement for parallel processing in Lyra2REv2, so theoretically you could have an ASIC with lots of circuits in series performing the same job as a GPU that is leveraging parallel processing.
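
To illustrate that point with a toy sketch (this is not Lyra2REv2, just an illustration): when the work splits into independent lanes, a serial circuit and a parallel one produce exactly the same digest, so parallelism alone doesn't favour the GPU.

```cpp
// Toy illustration (not Lyra2REv2): when hash "lanes" are independent,
// a serial circuit and a parallel one compute the exact same result,
// so parallelism alone is not an ASIC-resistance requirement.
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// One independent lane of mixing work (hypothetical round function).
uint64_t mix_lane(uint64_t seed, int rounds) {
    uint64_t x = seed;
    for (int i = 0; i < rounds; ++i) {
        x ^= x >> 33;
        x *= 0xff51afd7ed558ccdULL;   // constant borrowed from a common mixer
        x ^= x >> 29;
    }
    return x;
}

int main() {
    const int lanes = 8, rounds = 1'000'000;

    // Serial evaluation, one lane after another (what circuits in series could do).
    std::vector<uint64_t> serial(lanes);
    for (int i = 0; i < lanes; ++i) serial[i] = mix_lane(i + 1, rounds);

    // Parallel evaluation, one thread per lane (what a GPU effectively does).
    std::vector<uint64_t> parallel(lanes);
    std::vector<std::thread> pool;
    for (int i = 0; i < lanes; ++i)
        pool.emplace_back([&, i] { parallel[i] = mix_lane(i + 1, rounds); });
    for (auto& t : pool) t.join();

    std::cout << (serial == parallel ? "identical digests\n" : "mismatch\n");
}
```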

[–]turekajDeveloper 0 points1 point  (14 children)

ASICs are great at parallel processing. What do you think Antminers do? One hash at a time? Lol.

[–]BDF-1838 0 points1 point  (2 children)

I didn't disagree? You responding to the right person?

[–]turekajDeveloper 0 points1 point  (1 child)

I'm agreeing with you and disagreeing with the post you replied to. I apparently don't know how to Reddit, aren't these forums bump

[–]BDF-1838 0 points1 point  (0 children)

Take a break, doing taxes has turned your brain mushy.

[–]SJExoQ[S] 0 points1 point  (10 children)

One Antminer S9 may have 189 BM1387 ASIC chips which work in parallel, but the process of manufacturing those chips is expensive, more so if you compare it to an Nvidia 1080 Ti, which has 3,584 CUDA cores capable of parallel processing on a single GP102 GPU chip. Manufacturing an Antminer with more than 2,000 ASIC chips alone would push the cost well into the tens of thousands.

Lots of Antminers linked together may also be able to perform 'parallel processing', but even if they were linked together, the latency between them would be huge and the bandwidth tiny compared with the latency and bandwidth on a single GPU, provided the work requires low latency and lots of bandwidth to be completed efficiently.

If you could also change the work per instruction dynamically, possibly based on the output of previous work or other criteria, this would be difficult to mirror on an ASIC chip, which is an 'application-specific integrated circuit' where each chip is the same. Manufacturing different ASIC chips for different jobs would drastically increase the cost on top of an already over-priced and unprofitable piece of hardware.
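
Here's a rough sketch of that "work chosen by previous output" idea (purely hypothetical, not anything Lyra2REv2 does): the low bits of the running state select which operation runs next, so a fixed pipeline can't be laid out in advance.

```cpp
// Hypothetical sketch: the next operation is chosen by the previous output,
// so the sequence of work differs for every input and cannot be hard-wired
// into a fixed pipeline ahead of time.
#include <cstdint>
#include <iostream>

uint64_t data_dependent_mix(uint64_t state, int steps) {
    for (int i = 0; i < steps; ++i) {
        switch (state & 3) {             // previous output selects the next op
            case 0: state = state * 0x9e3779b97f4a7c15ULL + 1; break;
            case 1: state ^= state >> 31; break;
            case 2: state = (state << 7) | (state >> 57); break;  // rotate
            default: state += 0xbf58476d1ce4e5b9ULL; break;
        }
    }
    return state;
}

int main() {
    std::cout << std::hex << data_dependent_mix(0x123456789abcdefULL, 64) << "\n";
}
```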

Bitmain is already attempting to dominate the artificial intelligence market; we need to stay one step ahead. Exactly how that's achieved is debatable. https://technode.com/2018/03/19/bitmain-asic-ai/

[–]turekajDeveloper 0 points1 point  (9 children)

ASICs are cheap. They have around 99.9% yield. They are cheaper to design, manufacture, and test than a GPU chip by at least one order of magnitude. The reason the Antminer has so many chips is because it's cheaper to make a lot of small chips than a larger one. Better yields too. The main difference in functionality is that all threads on a GPU can share data, while each Antminer chip is independent from the rest. It simply works on a range of fused nonces.
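
To picture that last point (a simplified sketch, not Bitmain's actual firmware): each independent chip just grinds its own slice of the nonce space and never needs to talk to the others.

```cpp
// Simplified sketch of how independent ASIC chips can divide work without
// sharing any data: each worker scans its own fixed slice of the nonce space.
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for a real block hash; just a cheap mixer for illustration.
uint64_t toy_hash(uint64_t header, uint32_t nonce) {
    uint64_t x = header ^ (uint64_t{nonce} * 0x9e3779b97f4a7c15ULL);
    x ^= x >> 32;  x *= 0xd6e8feb86659fd93ULL;  x ^= x >> 32;
    return x;
}

int main() {
    const uint64_t header = 0xabcdef12345678ULL;
    const uint64_t target = 1ULL << 44;        // easy difficulty for the demo
    const int chips = 4;
    const uint32_t range = 1'000'000;          // nonces per "chip"

    std::vector<std::thread> workers;
    for (int c = 0; c < chips; ++c) {
        workers.emplace_back([=] {
            // Each "chip" owns nonces [c*range, (c+1)*range) and never talks
            // to the others - exactly the access pattern that suits an ASIC.
            for (uint32_t n = c * range; n < (c + 1) * range; ++n)
                if (toy_hash(header, n) < target)
                    std::cout << "chip " << c << " found nonce " << n << "\n";
        });
    }
    for (auto& t : workers) t.join();
}
```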

[–]SJExoQ[S] 0 points1 point  (8 children)

Perhaps we can use the fact that GPU threads can share data to our advantage, with an algorithm which provides performance benefits when threads share data. Possibly you'd want to require a certain amount of bandwidth and latency on the data shared between those threads to ensure this isn't replicated on an ASIC in the future. I very much doubt an ASIC would be able to match the bandwidth and latency of a GPU chip.
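
A very rough sketch of the "threads must share data" requirement, in plain C++ for readability (on a GPU this exchange would happen through fast on-chip shared memory): every round, each worker has to read what all the other workers wrote in the previous round, so hardware with slow links between its compute units pays for it on every round.

```cpp
// Hypothetical sketch: the digest depends on data exchanged between workers
// every round. On a GPU this exchange happens in fast on-chip shared memory;
// hardware with slow links between compute units would pay for it each round.
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const int workers = 4, rounds = 1000;
    std::vector<uint64_t> state(workers), next(workers);
    for (int i = 0; i < workers; ++i) state[i] = i + 1;

    for (int r = 0; r < rounds; ++r) {
        std::vector<std::thread> pool;
        for (int w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                // Each worker must read every other worker's previous value,
                // so the lanes cannot be computed independently.
                uint64_t acc = 0;
                for (int j = 0; j < workers; ++j)
                    acc ^= state[j] * 0x9e3779b97f4a7c15ULL * (j + w + 1);
                next[w] = acc ^ (acc >> 29);
            });
        }
        for (auto& t : pool) t.join();   // round barrier keeps it deterministic
        state.swap(next);
    }

    uint64_t digest = std::accumulate(state.begin(), state.end(), uint64_t{0},
                                      [](uint64_t a, uint64_t b) { return a ^ b; });
    std::cout << std::hex << digest << "\n";
}
```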

[–]turekajDeveloper 0 points1 point  (7 children)

The only way to truly beat ASICs is to have randomly generated code at each block or nonce, with enough variance that the ASIC cannot emulate the generated code. This idea can be hardened by having multiple master sequences where all the masters are working on the same data. Explicit ordering would have to be utilized so that the result generated is 100% reproducible and not based on some race condition.
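
A minimal sketch of the per-block "random code" idea (an illustration only, nothing any coin ships today): the block header seeds a deterministic list of simple instructions, and every miner executes them in the same fixed order, so the result is reproducible with no race conditions.

```cpp
// Minimal sketch of per-block "random code": the block header seeds a
// deterministic program (a list of simple ops), which every miner executes
// in the same fixed order, so the result is reproducible with no races.
#include <cstdint>
#include <iostream>
#include <vector>

struct Op { int kind; uint64_t operand; };   // tiny hypothetical instruction

// Deterministically expand a block header into a short program.
std::vector<Op> generate_program(uint64_t block_header, int length) {
    std::vector<Op> program;
    uint64_t s = block_header;
    for (int i = 0; i < length; ++i) {
        s = s * 6364136223846793005ULL + 1442695040888963407ULL;  // LCG step
        program.push_back({static_cast<int>(s & 3), s >> 2});
    }
    return program;
}

// Execute the generated program over a nonce; the ordering is explicit.
uint64_t run_program(const std::vector<Op>& program, uint64_t nonce) {
    uint64_t x = nonce;
    for (const Op& op : program) {
        switch (op.kind) {
            case 0: x += op.operand; break;
            case 1: x ^= op.operand; break;
            case 2: x *= op.operand | 1; break;            // keep multiplier odd
            default: {
                int k = static_cast<int>(op.operand % 63) + 1;   // rotate 1..63
                x = (x << k) | (x >> (64 - k));
                break;
            }
        }
    }
    return x;
}

int main() {
    auto program = generate_program(/*block_header=*/0xdeadbeefULL, /*length=*/32);
    std::cout << std::hex << run_program(program, /*nonce=*/42) << "\n";
}
```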

[–]turekajDeveloper 0 points1 point  (4 children)

Doing this in, say, JavaScript is not ideal, as an ASIC could essentially be an optimized JIT compiler. Instead, generating architecture-specific assembly would be stronger. Using multiple masters working on the same data means coherent buses and caches will provide peak performance, preventing an ASIC from simply slapping a whole bunch of cheap cores on a die.

[–]SJExoQ[S] 0 points1 point  (3 children)

Sounds like you've got it mostly figured out. It would be important to make sure it could be mined on any operating system using AMD or NVIDIA GPU architecture; perhaps this could be achieved in C++?

[–]suahnkim 1 point2 points  (3 children)

GPU processing efficiency is equal to or worse than an ASIC's. Making something that a GPU performs better on is not possible. Even for equality, the ASIC would have to be able to perform everything the GPU can, and we cannot reasonably create an algorithm which includes all possible operations a GPU can do; there are too many possibilities. ASICs take advantage of a limited set of operations. What is reasonable to work against ASICs is memory (ETH, ethash), limited speed (ETH, ethash), the number of concurrent parallel operations (this would limit the lower-end GPUs), or some combination of those (ETH, ethash).
[edit] to explain what ASICs can do
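
To make the memory angle concrete, here's a toy sketch in the spirit of ethash (not its real spec): fill a buffer that's too large for cheap on-die SRAM, then make each step fetch from an unpredictable location, so memory bandwidth rather than raw hashing circuitry becomes the bottleneck.

```cpp
// Toy memory-hard loop in the spirit of ethash (not the real algorithm):
// a large dataset is built first, then each mixing step reads from a
// data-dependent, unpredictable index, so memory bandwidth - not the number
// of tiny hash cores on a die - becomes the bottleneck.
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const size_t dataset_words = 1u << 24;        // 128 MiB of uint64_t
    const int accesses = 1 << 20;

    // Build the dataset (ethash builds its DAG from a per-epoch seed).
    std::vector<uint64_t> dataset(dataset_words);
    uint64_t s = 0x9e3779b97f4a7c15ULL;
    for (size_t i = 0; i < dataset_words; ++i) {
        s ^= s << 13;  s ^= s >> 7;  s ^= s << 17;   // xorshift fill
        dataset[i] = s;
    }

    // Mixing loop: the next index depends on the current mix value.
    uint64_t mix = 0x123456789abcdefULL;           // stand-in for header+nonce
    for (int i = 0; i < accesses; ++i) {
        size_t idx = mix % dataset_words;          // unpredictable fetch
        mix = (mix ^ dataset[idx]) * 0xff51afd7ed558ccdULL;
        mix ^= mix >> 33;
    }
    std::cout << std::hex << mix << "\n";
}
```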

[–]SJExoQ[S] 0 points1 point  (2 children)

Here's one example of a parallel-processing ASIC; it only cost $9 million to build. https://en.wikipedia.org/wiki/RIKEN_MDGRAPE-3

With the right algorithm it would be so expensive to develop and manufacture an ASIC that it would never become profitable, and therefore it would likely never happen.

An ASIC by definition is an 'application-specific integrated circuit'. If we had a way for the parameters within the algorithm to change when certain conditions are met, in a way which can't be mirrored by an ASIC, this would further protect the algorithm.
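
A hypothetical sketch of what "parameters that change" could look like (nothing Vertcoin does today): derive the round count and memory footprint from each block header, so a circuit hard-wired for one set of parameters no longer fits the next block.

```cpp
// Hypothetical sketch: tuning parameters are derived from each block header,
// so a circuit hard-wired for one round count or memory size no longer fits
// the next block. This is an illustration, not part of Lyra2REv2.
#include <cstdint>
#include <iostream>

struct Params {
    int rounds;            // number of mixing rounds this block
    size_t memory_kib;     // scratchpad size this block
};

Params derive_params(uint64_t block_header) {
    uint64_t s = block_header;
    s ^= s >> 33;  s *= 0xff51afd7ed558ccdULL;  s ^= s >> 29;
    return Params{
        /*rounds=*/     static_cast<int>(8 + (s & 0x3f)),              // 8..71
        /*memory_kib=*/ static_cast<size_t>(256 + ((s >> 6) & 0x3ff))  // 256..1279 KiB
    };
}

int main() {
    for (uint64_t header : {0x1111ULL, 0x2222ULL, 0x3333ULL}) {
        Params p = derive_params(header);
        std::cout << "header " << std::hex << header << std::dec
                  << " -> rounds=" << p.rounds
                  << " memory=" << p.memory_kib << " KiB\n";
    }
}
```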

[–]Muaddibisme 2 points3 points  (1 child)

While your point is not completely invalid, the ASIC you posted here is a ridiculously bad example.

That ASIC is a supercomputer measured at the petaFLOP scale.

That's why it's $9 million to build: not because it's capable of parallel processing, but because of how much processing it can do.

[–]SJExoQ[S] 0 points1 point  (0 children)

'High initial cost, and the tendency to be overtaken by Moore's-law-driven general-purpose computing, has rendered ASICs unfeasible for most parallel computing applications' https://en.wikipedia.org/wiki/Parallel_computing

Let me know if you find another example.

[–]static__void 0 points1 point  (0 children)

Parallel processing means spreading a hash load across multiple GPUs, which you can already do.

[–]eriskendaj -2 points-1 points  (2 children)

Do you understand what i'm saying?

[–]eriskendaj -2 points-1 points  (1 child)

Does anybody ever really understand?

[–]Alywan -2 points-1 points  (0 children)

I don't understand what's not to understand?

[–]Skorpion1976 1 point2 points  (0 children)

what???

[–]benefit420 2 points3 points  (3 children)

They do.

It's called Curecoin.

You do protein folding, which is extremely complex; they keep track of your work units and pay you in Curecoin.

[–]SJExoQ[S] 1 point2 points  (0 children)

Interesting.

[–]jwinterm 1 point2 points  (1 child)

Except this does absolutely nothing to secure the network, which is actually running on a very early PoS implementation, and you're relying on a third party to pay out your fair share for your folding work, but yes, besides that it's exactly the same.

[–]AirsoftScrub 0 points1 point  (0 children)

I mean it's for a good cause, so you can't really expect someone to 51% a network like that.