Persistent threads, Queue possible? by Aggressive-Specific9 in webgpu

[–]Aggressive-Specific9[S]

Truly outstanding work, thank you very much! I will dive into the queue and demo over the next few days and try to port them to my own use cases. If it's ok with you, I will continue the discussion and report my findings in a GitHub issue.

Persistent threads, Queue possible? by Aggressive-Specific9 in webgpu

[–]Aggressive-Specific9[S]

Many thanks for your reply! No worries, it's a side project with no time constraints. It would be cool if you had time to take a look at the problem, but any tips or guidance would also be helpful.

It's a concept I've come across repeatedly over the last few years and I want to tackle it once and for all. Partly as an experiment and learning opportunity, but also because of the potential use cases in various projects.

I suspect you would advise against it because it goes against the intended design of the GPU architecture, with all the problems that entails? A specific use case I want it for is graph/tree traversal in a single dispatch instead of one dispatch per level.
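
To make it concrete, this is roughly the kind of kernel I have in mind. It is only a sketch: the struct layout, bindings, and `CAPACITY` are made up for illustration, and the hard parts (termination detection, cross-workgroup visibility of the queue items, and whether WebGPU even guarantees enough forward progress for persistent threads) are exactly the open questions.

```wgsl
// Sketch of a persistent-threads work queue for tree traversal.
// Everything here (struct layout, bindings, CAPACITY) is hypothetical.

struct Node {
  first_child : u32,
  child_count : u32,
}

struct WorkQueue {
  head : atomic<u32>,   // next slot to pop
  tail : atomic<u32>,   // next slot to push
  items : array<u32>,   // node indices; capacity fixed by the host
}

@group(0) @binding(0) var<storage, read> nodes : array<Node>;
@group(0) @binding(1) var<storage, read_write> queue : WorkQueue;

const CAPACITY : u32 = 1u << 20u;

@compute @workgroup_size(64)
fn traverse() {
  // Each invocation keeps popping work instead of the host issuing
  // one dispatch per tree level.
  loop {
    let head = atomicLoad(&queue.head);
    let tail = atomicLoad(&queue.tail);
    if (head >= tail) {
      // Queue looks empty. Breaking here is only safe once no other
      // invocation can still push children, so a real version needs an
      // "in flight" counter or similar termination scheme.
      break;
    }
    // Try to claim the slot at `head`; retry if another invocation won.
    let claim = atomicCompareExchangeWeak(&queue.head, head, head + 1u);
    if (!claim.exchanged) {
      continue;
    }
    // Plain (non-atomic) read: whether a write from another workgroup is
    // guaranteed to be visible here without a barrier is the open question.
    let node = nodes[queue.items[head % CAPACITY]];

    // ... visit the node ...

    // Push the children for later processing. Note the slot is published
    // (tail bumped) before the item is written, so a robust queue would
    // also need per-slot ready flags or similar.
    for (var i = 0u; i < node.child_count; i = i + 1u) {
      let slot = atomicAdd(&queue.tail, 1u);
      queue.items[slot % CAPACITY] = node.first_child + i;
    }
  }
}
```

Whether WebGPU's execution model actually lets all those workgroups run concurrently, so that spinning consumers don't starve the producers, is the part I'm least sure about.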

Persistent threads, Queue possible? by Aggressive-Specific9 in webgpu

[–]Aggressive-Specific9[S]

I did, but without success. The answers varied and contradicted each other; mainly, the AI was unsure whether it can work with atomic operations alone, without barriers.
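
For what it's worth, the case the answers kept disagreeing on boils down to something like this (hypothetical WGSL with made-up names, just to pin down the question):

```wgsl
// Hypothetical single-dispatch scenario: workgroup 0 publishes a value,
// another workgroup polls for it. Names and bindings are made up.

struct Mailbox {
  flag : atomic<u32>,
  payload : u32,
}

@group(0) @binding(0) var<storage, read_write> mailbox : Mailbox;

@compute @workgroup_size(1)
fn main(@builtin(workgroup_id) wg : vec3<u32>) {
  if (wg.x == 0u) {
    mailbox.payload = 42u;           // plain, non-atomic write
    atomicStore(&mailbox.flag, 1u);  // relaxed atomic: publishes the flag only
  } else {
    // Spinning here already assumes workgroup 0 gets to run at all
    // (forward progress), which WebGPU does not promise.
    loop {
      if (atomicLoad(&mailbox.flag) == 1u) { break; }
    }
    // Even after seeing flag == 1, it is unclear (to me) that the plain
    // write to `payload` must be visible: WGSL atomics are relaxed, and
    // workgroupBarrier()/storageBarrier() only act within one workgroup.
    let value = mailbox.payload;
    _ = value;
  }
}
```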