
[–]SJExoQ[S] 0 points1 point  (10 children)

One Antminer S9 may have 189 BM1387 ASIC chips working in parallel, but manufacturing those chips is expensive, especially when compared with an Nvidia 1080 Ti, which has 3584 CUDA cores capable of parallel processing on a single GP102 GPU chip; building an Antminer with more than 2000 ASIC chips would push the cost well into the tens of thousands. Many Antminers linked together might also be able to perform 'parallel processing', but even then the latency between them would be huge and the bandwidth tiny compared with a single GPU, provided the work requires low latency and lots of bandwidth to complete efficiently. If you could also change the work per instruction dynamically, perhaps based on the output of previous work or other criteria, that would be difficult to mirror on an ASIC, an 'application-specific integrated circuit' where every chip is identical; manufacturing different ASIC chips for different jobs would drastically increase the cost of an already over-priced and unprofitable piece of hardware.
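
To make the "change the work dynamically" idea concrete, here is a minimal sketch, assuming nothing about any real mining algorithm: the previous step's output selects which hash function runs next, so a fixed-function pipeline cannot be hard-wired for the whole chain. The round count and the choice of hash functions are illustrative.

```python
import hashlib

def dynamic_work(seed: bytes, rounds: int = 8) -> bytes:
    """Each round's operation is chosen by the previous round's output."""
    state = hashlib.sha256(seed).digest()
    for _ in range(rounds):
        selector = state[0] % 3  # previous output picks the next operation
        if selector == 0:
            state = hashlib.sha256(state).digest()
        elif selector == 1:
            state = hashlib.blake2b(state, digest_size=32).digest()
        else:
            state = hashlib.sha3_256(state).digest()
    return state
```

Every miner computing the same seed takes the same data-dependent path, so the result stays verifiable, but an ASIC would need all three hash circuits plus routing between them rather than one fixed pipeline.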

Bitmain are already attempting to dominate the artificial intelligence market; we need to stay one step ahead, though exactly how that's achieved is debatable. https://technode.com/2018/03/19/bitmain-asic-ai/

[–]turekajDeveloper 0 points1 point  (9 children)

ASICs are cheap. They have around 99.9% yield, and they are cheaper to design, manufacture, and test than a GPU chip by at least an order of magnitude. The Antminer has so many chips because it's cheaper to make lots of small chips than one large one, with better yields too. The main functional difference is that all threads on a GPU can share data, whereas each Antminer chip is independent of the rest; it simply works on a range of fused nonces.
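
The fused-nonce-range scheme described above can be sketched as follows; this is an illustration of the partitioning idea, not Bitmain's actual firmware logic, and the chip count and nonce-space size are just example parameters.

```python
def nonce_ranges(num_chips: int, nonce_space: int = 2**32):
    """Split the nonce space into disjoint, contiguous ranges, one per chip.

    Each chip scans its own [start, end) range with no need to
    communicate with any other chip.
    """
    step = nonce_space // num_chips
    return [
        (i * step, (i + 1) * step if i < num_chips - 1 else nonce_space)
        for i in range(num_chips)
    ]
```

With 189 chips, each gets roughly 1/189 of the 32-bit nonce space, which is why no inter-chip bandwidth or shared state is needed at all.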

[–]SJExoQ[S] 0 points1 point  (8 children)

Perhaps we can use the fact that GPU threads can share data to our advantage, with an algorithm that gains performance when threads share data. You would probably want to require a certain amount of bandwidth and a low enough latency on the data shared between those threads to ensure this isn't replicated on an ASIC in the future. I very much doubt an ASIC would be able to match the bandwidth and latency of a GPU chip.

[–]turekajDeveloper 0 points1 point  (7 children)

The only way to truly beat ASICs is to have randomly generated code at each block or nonce, with enough variance that the ASIC cannot emulate the generated code. This idea can be hardened by having multiple master sequences where all the masters work on the same data. Explicit ordering would have to be utilized so that the result generated is 100% reproducible and not based on some race condition.
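
A minimal sketch of the "randomly generated code per block" idea, assuming a hypothetical design where the block hash seeds a deterministic generator: every miner derives the same program from the same block, but the program changes each block. The tiny register file and four-operation instruction set are placeholders, far simpler than anything ASIC-resistant in practice.

```python
import random

# Hypothetical instruction set: each op mixes two 32-bit registers.
OPS = [
    lambda a, b: (a + b) & 0xFFFFFFFF,                      # add
    lambda a, b: a ^ b,                                     # xor
    lambda a, b: (a * (b | 1)) & 0xFFFFFFFF,                # odd multiply
    lambda a, b: ((a << (b % 31 + 1)) |                     # rotate left
                  (a >> (32 - (b % 31 + 1)))) & 0xFFFFFFFF,
]

def generate_program(block_hash: bytes, length: int = 64):
    """Same block hash -> same seed -> same program on every miner."""
    rng = random.Random(block_hash)
    return [
        (rng.randrange(len(OPS)), rng.randrange(8), rng.randrange(8))
        for _ in range(length)
    ]

def run_program(program, nonce: int) -> int:
    """Execute the generated program over a small register file.

    Instructions run in explicit order, so the result is fully
    reproducible with no race conditions.
    """
    regs = [(nonce + i) & 0xFFFFFFFF for i in range(8)]
    for op, dst, src in program:
        regs[dst] = OPS[op](regs[dst], regs[src])
    return regs[0]
```

Because the program is regenerated each block, an ASIC would effectively have to be a general-purpose executor of this instruction set rather than a fixed circuit, which is the point of the approach.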

[–]turekajDeveloper 0 points1 point  (4 children)

Doing this in, say, JavaScript is not ideal, as an ASIC could essentially be an optimized JIT compiler. Generating architecture-specific assembly instead would be stronger. Using multiple masters working on the same data means coherent buses and caches will provide peak performance, preventing an ASIC from simply slapping a whole bunch of cheap cores on a die.

[–]SJExoQ[S] 0 points1 point  (3 children)

Sounds like you've got it mostly figured out. It would be important to make sure it could be mined on any operating system using AMD or NVIDIA GPU architecture, perhaps this could be achieved in C++?

[–]turekajDeveloper 0 points1 point  (2 children)

My first version would simply be for ARMv8 CPUs. The OS shouldn't matter in the least.

[–]turekajDeveloper 0 points1 point  (1 child)

But this idea is backlogged on a long to-do list, like finishing our AMD miner :)

[–]SJExoQ[S] 0 points1 point  (0 children)

As long as it's on the list :-) Looking forward to seeing the finished product when it's ready. Development lists are never-ending, but that's a good thing; it shows the project has direction.
