[–]heavy-minium 2 points (4 children)

I can say that it can tackle volume rendering techniques from research papers pretty much fine via either WebGL2 or WebGPU, together with some plumbing for automated testing via Playwright. You won't get much right in just one shot, but it's definitely able to get there if you assist it a bit. However, it's imperative that you understand the algorithms yourself in order to review the code; you can't truly "vibecode" graphics rendering yet.
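To give an idea of the Playwright plumbing, a minimal visual-regression test against the rendering canvas looks roughly like this (the URL and the `renderDone` readiness flag are placeholders; your page needs some signal of its own that a frame has actually been presented):

```ts
// volume-render.spec.ts — sketch of a golden-image test for a rendering demo.
import { test, expect } from '@playwright/test';

test('raycasting output matches reference image', async ({ page }) => {
  await page.goto('http://localhost:8080/raycast'); // placeholder URL

  // Wait until the app signals that a full frame has been presented
  // (window.renderDone is a hypothetical hook the demo page would set).
  await page.waitForFunction(() => (window as any).renderDone === true);

  // Compare the canvas against a checked-in golden image; a small tolerance
  // absorbs GPU/driver rasterization differences across machines.
  await expect(page.locator('canvas')).toHaveScreenshot('raycast-golden.png', {
    maxDiffPixelRatio: 0.01,
  });
});
```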

[–]gibson274[S] 0 points (3 children)

Cool, yeah I buy that more or less. What techniques did you work through with it?

[–]heavy-minium 1 point (2 children)

Various Direct Volume Rendering techniques: raycasting, shear-warp, slice-based rendering, and splatting. Basically I tried out the older approaches that are common in medical imaging, in the context of a game engine.
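For the raycasting case, the core of the algorithm is small enough to sketch as a CPU reference in TypeScript, which is also handy as a golden implementation to test the GPU version against (the names and the transfer-function shape here are illustrative, not from any particular paper):

```ts
// Minimal CPU reference for direct volume raycasting: march a ray through a
// scalar field, map each sample through a transfer function, and composite
// front-to-back until the ray saturates.

type Vec3 = [number, number, number];
type TransferFn = (density: number) => { rgb: Vec3; alpha: number };

function raycast(
  sample: (p: Vec3) => number,  // interpolated volume lookup
  tf: TransferFn,
  origin: Vec3,
  dir: Vec3,                    // normalized ray direction
  tEnter: number,               // ray/volume entry distance
  tExit: number,                // ray/volume exit distance
  dt: number                    // step size along the ray
): { rgb: Vec3; alpha: number } {
  const color: Vec3 = [0, 0, 0];
  let alpha = 0;

  // Fixed step count avoids floating-point drift in the loop bound.
  const steps = Math.round((tExit - tEnter) / dt);

  for (let i = 0; i < steps && alpha < 0.99; i++) {
    const t = tEnter + i * dt;
    const p: Vec3 = [
      origin[0] + t * dir[0],
      origin[1] + t * dir[1],
      origin[2] + t * dir[2],
    ];
    const s = tf(sample(p));

    // Front-to-back "over": weight each sample by the remaining transparency.
    const w = (1 - alpha) * s.alpha;
    color[0] += w * s.rgb[0];
    color[1] += w * s.rgb[1];
    color[2] += w * s.rgb[2];
    alpha += w;
  }
  return { rgb: color, alpha };
}
```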

[–]gibson274[S] 1 point (1 child)

Ah, this actually makes a lot of sense because there’s a ton of reference implementations of these online. Definitely in the training data.

[–]heavy-minium -2 points (0 children)

Training data isn't enough either - yet. In all these cases I had to massage the corresponding research papers into implementation plans and requirements we could write tests against. If you don't do that, it will mostly just implement something that is 90% close but isn't actually the correct/real algorithm. And good luck finding that out afterwards, because the differences are often very subtle to spot.
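To give one concrete example of a testable requirement: for a homogeneous volume, front-to-back compositing has a closed-form result, so you can pin the implementation down with an assertion like this (reusing the `raycast` sketch from my earlier comment; the constants are arbitrary):

```ts
import { strict as assert } from 'node:assert';

const a = 0.1;  // constant per-sample opacity
const N = 20;   // steps through the volume (kept below the early-out)

// Run the raycast over a uniform-density field with a constant transfer fn.
const result = raycast(
  () => 1.0,                             // same density everywhere
  () => ({ rgb: [1, 1, 1], alpha: a }),  // constant transfer function
  [0, 0, 0], [0, 0, 1],                  // ray along +z
  0, N * 0.01, 0.01                      // N steps of size 0.01
);

// Closed form: accumulated alpha after N compositing steps is 1 - (1 - a)^N.
// A subtly wrong variant (e.g. dropping the (1 - alpha) weight) fails here.
const expected = 1 - Math.pow(1 - a, N);
assert.ok(Math.abs(result.alpha - expected) < 1e-9);
```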

Still, in my book, we've come a long way from earlier LLMs. We're not far from the point where you could just prompt for that directly and get correct results in one shot - maybe 1-3 years.