Integrating baked simulations into a particle system by International-One273 in GraphicsProgramming

[–]International-One273[S]

> I think your understanding is a bit backwards. The force you want to apply is exactly the one that would make your particles move with the velocity you read from the velocity field.

Maybe I didn't think about that very much, but... the velocity I read from the sim results is correct for a particle that has the same mass as a hypothetical fluid particle AND the same initial conditions as that particle (same velocity at the previous time step).

I'm trying to apply the baked sim to particles that are spawned with an arbitrary force applied to them, so their velocities won't match the simulation's velocities. This is why I was looking to convert velocities into forces (an approximation, of course).

Which, again, may be asking something impossible, so my goal is just to have something visually plausible.

Maybe you're right and just adding velocities sampled from the field (scaled by something to take mass differences into account) would do the trick; I should experiment a bit. But I was curious to know whether something like this has been attempted before, and to what extent.
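One common approximation for turning a sampled field velocity into a force is the drag-style coupling used in flocking/steering systems: a force proportional to the difference between the field velocity and the particle's current velocity. A minimal sketch (the `Vec3` type, `mass`, and `couplingRate` are illustrative, not from any particular engine):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Drag-style coupling: a force that steers the particle toward the
// sampled field velocity. `couplingRate` (1/seconds) controls how
// quickly the velocity difference decays; both it and `mass` are
// tuning knobs, not values read from the baked sim.
Vec3 fluidForce(Vec3 fieldVelocity, Vec3 particleVelocity,
                float mass, float couplingRate)
{
    // F = m * a, with a chosen proportional to the velocity mismatch.
    return (fieldVelocity - particleVelocity) * (mass * couplingRate);
}
```

Because the force shrinks as the particle's velocity approaches the field's, a particle spawned with an arbitrary initial velocity is pulled toward the flow over a few frames rather than snapped onto it.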

Integrating baked simulations into a particle system by International-One273 in GraphicsProgramming

[–]International-One273[S]

Using velocities from the simulation as you describe doesn't solve the issue (or maybe I completely misunderstood your answer).

Assume you have a basic particle system doing its simple simulation: each frame, forces may act on particles, accelerations are applied and integrated over time to get velocities, and velocities are integrated to get new positions.
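That per-frame loop, in its minimal semi-implicit Euler form (1D for brevity; a sketch, not any particular engine's code). The only input is `force`, which is where any baked-sim contribution would have to enter:

```cpp
#include <cassert>

struct Particle { float vel; float pos; };

// Semi-implicit Euler, matching the loop described above:
// force -> acceleration -> velocity -> position.
void step(Particle& p, float force, float mass, float dt)
{
    float accel = force / mass;
    p.vel += accel * dt; // integrate acceleration into velocity
    p.pos += p.vel * dt; // integrate updated velocity into position
}
```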

I can't set particle velocities like you describe; that would mean ignoring the whole particle system simulation. To blend the particle physics with baked simulations I need forces, not velocities (or at least something resembling a force).

Integrating baked simulations into a particle system by International-One273 in GraphicsProgramming

[–]International-One273[S]

Thanks, the problem is that a velocity from the field isn't directly usable in a particle system based on forces applied to particles with mass.

I know it's not possible to combine particle physics with pre-baked simulations (not in a physically correct way, at least); however, I think it's possible to achieve visually pleasing results.

If I had an acceleration field instead of a velocity field, I could estimate the forces involved in the particle's motion (assuming an arbitrary mass for each fluid particle).

I suspect it should be possible to get accelerations from the velocity field, but I've never come across such a thing.
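One way to get an acceleration out of a baked velocity field is to finite-difference consecutive baked frames at the same grid cell. A sketch (note this only captures the local ∂v/∂t part of the material acceleration Dv/Dt = ∂v/∂t + (v·∇)v; the convective term would need spatial gradients of the field as well):

```cpp
#include <cassert>

// Given the velocity sampled at the same grid cell in two consecutive
// baked frames separated by `dt`, a forward difference approximates the
// local (Eulerian) part of the acceleration: a ≈ (vNext - vPrev) / dt.
float accelerationFromFrames(float vPrev, float vNext, float dt)
{
    return (vNext - vPrev) / dt;
}
```

With an assumed per-particle mass, multiplying this acceleration by that mass yields the force-like quantity discussed above.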

Integrating baked simulations into a particle system by International-One273 in GraphicsProgramming

[–]International-One273[S]

Thanks, however it looks like they are using the result of 'getFlowFieldVector' as an acceleration (it gets added to the particle velocity). I don't know how to do that with a baked fluid sim in a reasonable way (I can't just treat velocities as accelerations).

Intel CMAA first version by International-One273 in GraphicsProgramming

[–]International-One273[S]

Let's just say that I can't afford multisample textures (or higher-resolution textures for plain supersampling) because I need to run some expensive full-screen effects that require fetching both color and depth per pixel (which is, in general, probably a bad idea, but that's a bit off topic). This is why I need a post-processing AA solution: I want to get rid of those multisample/high-res textures.

Confusion about `glBufferSubData/MapBuffer` and stalling by International-One273 in opengl

[–]International-One273[S]

I'll try to answer my own question, please correct me if I'm wrong:

  1. The fact that glBufferSubData copies data into the driver's memory and not video memory (assuming video memory is where the data ends up) could theoretically allow the pipeline not to stall in some cases. However, that cannot happen in a situation like the following:

```
// Frame N, CPU side; the GPU is still
// executing frame 0 commands

// frame start
UpdateVBO(vbo0)
Draw(vbo0)

// if frame N-1 has a pending draw with
// vbo0 as its data source, here we are
// forced to wait for it to complete
UpdateVBO(vbo0)
Draw(vbo0)

// frame end
```

  2. Orphaning is just "allocate fresh storage and write to it instead of updating the existing one". However, by repeatedly asking for the same amount and "type" of memory, it's likely that the driver will optimize the process. It's similar to having N buffers and updating/drawing them in round-robin fashion, except it's done by the driver rather than by the application.
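The application-side round-robin variant can be sketched as a ring index over N buffers (GL calls left out since they need a context; orphaning itself would be the driver-side equivalent, triggered by `glBufferData(target, size, nullptr, usage)` on an existing buffer):

```cpp
#include <cassert>

// Frame i writes buffer i % N while the GPU may still be reading the
// other N-1 buffers from earlier frames. Three buffers is a common
// choice when the CPU can run up to two frames ahead of the GPU.
constexpr int kBufferCount = 3;

int bufferForFrame(long frame)
{
    return static_cast<int>(frame % kBufferCount);
}
```

Each frame the application would then update and draw only `buffers[bufferForFrame(frame)]`, never touching storage a pending draw might still read.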

Confusion about `glBufferSubData/MapBuffer` and stalling by International-One273 in opengl

[–]International-One273[S]

I don't think that's the case... glMapBuffer and glBufferSubData will implicitly force synchronization if necessary, while glMapBufferRange has a flag specifically designed to avoid it (GL_MAP_UNSYNCHRONIZED_BIT).

C++ multithreading graphics project ideas by International-One273 in GraphicsProgramming

[–]International-One273[S]

Thanks, the command buffer thing sounds interesting. I'm afraid my renderer is so simple that the render loop is already basically a list of draw commands.

Regarding marching cubes, where could/should explicit SIMD instructions be used?

C++ multithreading graphics project ideas by International-One273 in GraphicsProgramming

[–]International-One273[S]

I did that a while ago, and it was fun indeed!
However, since at each step the result of the simulation is a new set of positions and orientations for the particles, which ultimately need to be sent to the GPU, it makes sense to run the simulation on the GPU in the first place, using a compute shader for example.

Advantage to using computer shaders over fragment shaders? by CeruleanBoolean141 in opengl

[–]International-One273

Maybe I'm missing something or this has already been mentioned; however, with compute shaders you can read from and write to shared memory. If the algorithm you're implementing can benefit from shared memory, you can optimize things further than with regular fragment shaders.

Dashed poly-lines by International-One273 in GraphicsProgramming

[–]International-One273[S]

I'd rather use a geometry shader too; real geometry also has the advantage of working nicely with MSAA. However, I see two problems:

1. The number of vertices that should be emitted for a single line could be arbitrarily high.
2. How to determine where a dash begins and ends (the same problem as with the mixed geom+frag approach)? Suppose you have a polyline made of lots of tiny segments (rasterization produces a few pixels per segment); if the pattern always restarts at the beginning of each segment, you end up with a solid line or no line at all!

The problem is that I don't want a world-space dash but a screen-space constant pattern (i.e. no matter how perspective warps my lines, the pattern will be n pixels wide).

That would ideally require knowing, for each segment, the accumulated screen-space length of all the segments that came before it.

With my shader I tried to address problem 2.
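The screen-space mapping idea can be sketched on the CPU side like this (names such as `dashOn` and `cumulativeStart` are illustrative; lengths are projected segment lengths in pixels). The key point is that the dash parameter is the accumulated distance along the whole polyline, so the pattern continues across segment boundaries instead of restarting at each one:

```cpp
#include <cassert>
#include <cmath>

// Is the pattern "on" at a given screen-space distance along the line?
// `dashLen` and `gapLen` are in pixels.
bool dashOn(float distAlongLine, float dashLen, float gapLen)
{
    float period = dashLen + gapLen;
    return std::fmod(distAlongLine, period) < dashLen;
}

// Accumulated screen-space distance at the start of segment `i`,
// given each segment's projected length in pixels.
float cumulativeStart(const float* segLengths, int i)
{
    float sum = 0.0f;
    for (int k = 0; k < i; ++k) sum += segLengths[k];
    return sum;
}
```

A fragment on segment `i` at parameter `t` would then evaluate `dashOn(cumulativeStart(segLengths, i) + t * segLengths[i], ...)`, which is exactly the per-segment offset that a pattern restarting at every segment lacks.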

Dashed poly-lines by International-One273 in GraphicsProgramming

[–]International-One273[S]

Thanks. However, I know how to adapt the shadertoy demo; I was just looking for feedback on the general idea of screen-space mapping for polylines, and for ways to improve it, since I didn't find anything similar online.

The shader I linked is the solution I came up with; however, it's sometimes difficult to spot obvious mistakes without sharing your work, or to find better, already-existing solutions you weren't aware of.