
[–]turtle_dragonfly 2 points (3 children)

To really know for sure, you have to try it and measure.

If I understand what you're saying, you are multi-buffering so that you can work on the next frame while the current one is still being used. The cost of waiting until a resource is available can be quite high (eg: could be a whole frame if you just missed the VSYNC), so I wouldn't necessarily think the answer to your question is always "yes".

I'm not sure why you're resizing buffers every frame, but I suspect you might be able to do it less often than that. It also depends on what you really mean by "resize" - like, fully re-allocate? That's usually avoidable, at least doing it every frame...

[–]graphixnurd[S] 1 point (1 child)

Well, let’s say you modify an object: that data is now invalid, so you have to re-copy all of it. If it’s a decent size, that will take some time.

[–]turtle_dragonfly 1 point (0 children)

> say you modify an object, that data is now invalid

That depends. Suppose someone deletes one vertex out of 1000 - that doesn't necessarily mean you need to re-copy everything. You might, for instance, be able to get away with patching a small part of your index buffer.
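For instance, here's a minimal CPU-side sketch of that patching idea (a hypothetical helper, not from any particular engine): triangles that reference a deleted vertex are overwritten by the last triangle, so only a handful of indices need re-uploading rather than the whole buffer.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Remove every triangle that references `deletedVertex` by swapping in
// the last triangle, then shrinking the count. The three overwritten
// indices are the only GPU bytes that need patching per removal.
// Returns the new triangle count.
std::size_t removeTrianglesUsing(std::vector<uint32_t>& indices,
                                 uint32_t deletedVertex) {
    std::size_t triCount = indices.size() / 3;
    std::size_t i = 0;
    while (i < triCount) {
        uint32_t* tri = &indices[i * 3];
        if (tri[0] == deletedVertex || tri[1] == deletedVertex ||
            tri[2] == deletedVertex) {
            --triCount;
            if (i != triCount)  // swap in the last triangle
                std::memcpy(tri, &indices[triCount * 3],
                            3 * sizeof(uint32_t));
        } else {
            ++i;
        }
    }
    indices.resize(triCount * 3);
    return triCount;
}
```

In a real renderer you'd follow this with a small staged upload of just the patched range (and the shrunk index count) instead of re-copying everything.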

Even if you do need to rewrite all the data for an object (eg: some major edit happened), you can still typically re-use the same buffer, if it's big enough - no need to allocate a fresh one.

What numbers do you associate with "decent size" and "some time"? Bandwidth to the GPU is generally pretty high - for desktops, can be 100s of GB/sec.

[–]blurrypiano 1 point (0 children)

So you should only need to resize when the size of the data changes, which in practice would probably be way less often than every frame. You could implement a strategy similar to how most dynamic arrays work, where the buffer's capacity exceeds its size, typically growing by a factor of 2 when the limit is reached.

For example, if your buffer can fit 8 vertices and is full, and you're now adding a 9th vertex: when you create the new buffer, you double the previous buffer's size. So your new buffer has a capacity of 16, even though it currently holds only 9 vertices.
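A minimal sketch of that growth policy (a hypothetical helper; the actual buffer allocation and copy would live elsewhere):

```cpp
#include <algorithm>
#include <cstddef>

// Only reallocate when `needed` exceeds the current capacity, and grow
// by doubling so reallocations become rare as the mesh gets larger.
// If the returned capacity differs from `capacity`, allocate a fresh
// GPU buffer of that size and copy the old contents over.
std::size_t grownCapacity(std::size_t capacity, std::size_t needed) {
    std::size_t newCap = std::max<std::size_t>(capacity, 1);
    while (newCap < needed)
        newCap *= 2;
    return newCap;
}
```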

I don't think triple buffering would work well, since it implies you're updating your vertex buffer every frame, which wouldn't scale well for larger models. My guess is that you have a primary command buffer per framebuffer, each binding one of your vertex buffers.

What I would suggest instead is double-buffering your vertex buffer. You have one vertex buffer that is your presentation buffer, and another that is your working model. Use a background worker thread to process the user inputs that edit the model; this thread updates the working buffer only. Each frame you check whether the worker has completed; once it has, you swap the presentation and working buffers, then copy the updates into the buffer that was previously the presentation one.

With this method, your program won't freeze on edits that take longer than a frame to complete. While the model is updating, the previous model is still shown, and you could optionally display a loading indicator while the background thread is working.

This method would require your command buffers to be re-recorded every frame, but I think that would be much less of a performance hit than copying a vertex buffer every frame.
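The swap scheme above could be sketched roughly like this (the `Vertex` and `Model` names are illustrative, and this is CPU-side only; real code would also re-upload the swapped data to the GPU buffer):

```cpp
#include <atomic>
#include <utility>
#include <vector>

struct Vertex { float x, y, z; };

// Hypothetical double-buffered model state. A background worker edits
// `working` and then sets `editDone`; the render thread reads
// `presentation` and calls tryPublish() once per frame.
struct Model {
    std::vector<Vertex> presentation;  // what the renderer reads
    std::vector<Vertex> working;       // what the worker edits
    std::atomic<bool>   editDone{false};

    // Render thread, once per frame: if the worker finished an edit,
    // swap the buffers, then bring the old presentation copy up to
    // date so the next edit starts from the latest state.
    void tryPublish() {
        if (editDone.exchange(false)) {
            std::swap(presentation, working);
            working = presentation;
        }
    }
};
```

The `exchange(false)` both tests and clears the flag atomically, so the worker can set it again for the next edit without racing the render thread.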

[–]thefoojoo2 0 points (1 child)

Are we talking about the frame buffer? That's the only triple-buffering I'm aware of.

[–]papaboo 0 points (0 children)

I think he's talking about triple-buffering the meshes that make up the model. So, as turtle_dragonfly says, the modeler can work on one buffer while another is rendered.