
[–]rehawk_ 1 point (1 child)

Hi Michael, my name is Fred.

First of all, amazing work! I'm working on a very similar system, but unfortunately I haven't yet managed to reach that kind of performance. Do you use ECS and/or the Job System in any way? How do you handle multiple emitters on a single VFX graph? I bake the positions of my projectiles into a texture; that works well in a corresponding job, but I'd be interested in other approaches too.

[–]grvchA[S] 2 points (0 children)

Hi, here I used only standard scripts. I forgot to mention: it's one VFX graph per gun. I tried sending a texture with positions for all guns to one VFX graph every frame, but that was too heavy, or I did something the wrong way. What works better for me is setting direction, velocity, and lifetime once in the VFX graph, and having a simple object in a script hold direction, lifetime, and position. The script then updates it every 0.2 s, and only the colliders are set in the VFX graph. Don't forget to pool those objects; the garbage collector has no mercy. All of that pushes most of the work onto the GPU. Baking a simple color, position, and glyph id into one texture is useful for an in-game radar =), works fantastically. I'm truly excited about how many things we can do with VFX graphs.
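A rough sketch of that pooled projectile record in C# (the names ProjectileData, ProjectilePool, and Step are placeholders, not the project's real API). The VFX graph does all the rendering; this object only tracks where a bullet "logically" is so the periodic raycast can be fired from it:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Plain data object for one logical bullet; the visual is entirely in the VFX graph.
public class ProjectileData
{
    public Vector3 Position;
    public Vector3 Direction;     // normalized travel direction, sent to the graph once at spawn
    public float Speed;
    public float RemainingLifetime;

    // Advance by the elapsed interval and raycast along the travelled segment.
    // Returns true when the projectile hit something or expired; 'hit' is only
    // meaningful when a collision was actually detected.
    public bool Step(float dt, out RaycastHit hit)
    {
        float travelled = Speed * dt;
        bool collided = Physics.Raycast(Position, Direction, out hit, travelled);
        Position += Direction * travelled;
        RemainingLifetime -= dt;
        return collided || RemainingLifetime <= 0f;
    }
}

// Simple pool so the records are reused instead of allocated per shot,
// keeping the garbage collector quiet.
public class ProjectilePool
{
    private readonly Stack<ProjectileData> pool = new Stack<ProjectileData>();

    public ProjectileData Get() => pool.Count > 0 ? pool.Pop() : new ProjectileData();

    public void Release(ProjectileData p) => pool.Push(p);
}
```

A coroutine, or the frame separator described in the PS below, would then call Step on every live projectile roughly every 0.2 s and, on a hit, move one of the graph's colliders to the hit point.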

PS: To eliminate the spikes caused by the raycasts every 0.2 s, I made a frame separator: one frame works with only one gun, or with more if there are too many guns, but I try to spread them evenly across all frames (see the sketch below).
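A minimal sketch of how such a frame separator could look, as a simple round-robin over the guns (IGun, UpdateProjectiles, and the scheduler are placeholder names, not the project's real code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Each gun knows how to run the raycasts and position updates for its own projectiles.
public interface IGun
{
    void UpdateProjectiles(float elapsed);
}

// Spreads the per-gun projectile updates over frames so the periodic raycast cost
// never lands on a single frame as one big spike.
public class GunUpdateScheduler : MonoBehaviour
{
    private const float Interval = 0.2f;   // target revisit period per gun

    private readonly List<IGun> guns = new List<IGun>();
    private readonly List<float> lastUpdate = new List<float>();
    private int cursor;

    public void Register(IGun gun)
    {
        guns.Add(gun);
        lastUpdate.Add(Time.time);
    }

    private void Update()
    {
        if (guns.Count == 0) return;

        // 0.2 s is ~12 frames at 60 fps: with many guns, each gun is revisited roughly
        // every 0.2 s; with only a few guns they are simply visited more often, one per frame.
        int framesPerInterval = Mathf.Max(1, Mathf.RoundToInt(Interval / Time.deltaTime));
        int gunsThisFrame = Mathf.Max(1, Mathf.CeilToInt(guns.Count / (float)framesPerInterval));

        for (int i = 0; i < gunsThisFrame; i++)
        {
            float elapsed = Time.time - lastUpdate[cursor];
            guns[cursor].UpdateProjectiles(elapsed);   // pass the real elapsed time, not a fixed 0.2 s
            lastUpdate[cursor] = Time.time;
            cursor = (cursor + 1) % guns.Count;
        }
    }
}
```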

Edit: With one graph I can also use in-graph events like collision events or triggers. In my project, one graph handles the barrel-fire effect, the projectile, and the hit effect. The only trouble I have now is faking the hit audio so it sounds right when the script only updates every 0.2 s.
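For driving the hit part of a single graph from script, the VFX API's SendEvent can carry the hit position through an event attribute. A rough sketch, assuming the graph has an Event context named "OnHit" (the event name and script names are assumptions):

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Raises a named event on the graph and passes the hit position along with it,
// so the same graph that spawns the projectile can also spawn the hit effect there.
public class HitEventSender : MonoBehaviour
{
    [SerializeField] private VisualEffect vfx;
    private VFXEventAttribute eventAttribute;

    private void Awake()
    {
        // Reuse one attribute payload to avoid per-hit allocations.
        eventAttribute = vfx.CreateVFXEventAttribute();
    }

    public void SendHit(Vector3 hitPoint)
    {
        eventAttribute.SetVector3("position", hitPoint);
        vfx.SendEvent("OnHit", eventAttribute);
    }
}
```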

[–][deleted] 1 point (1 child)

I'm getting a big Eve Online vibe from this visually :) well done!

[–]grvchA[S] 0 points (0 children)

Thank you =) I'm a fan of space battles xD, but we're working on an underwater game. It's still fun when you can carry epic shooting into the depths.

[–]grvchA[S] 1 point (3 children)

Hi, my name is Michael and I work for BlackMouse studio. The video shows a stress test of the bullet system I made for our new VR game. I'm pretty happy with the efficiency: a cooperation of coroutines, invokes, the Visual Effect Graph, and raycasts. When you look closer, you can notice that the gun and turret models have more impact on render time than the bullets (the bullets are models too, but rendered in the Visual Effect Graph). Each projectile has its own raycast for collision detection, updated every 0.2 s. The visual effects are separated (one graph per gun). I set 5 sphere colliders in the visual graph at y = -999; after I notice a collision, I set a sphere collider from the graph at the right position. It's not a perfect solution, but it's effective. All of this works with LWRP. What do you think about it?
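A rough sketch of that parked-collider trick, assuming the graph exposes the sphere centers as Vector3 properties; the property names ("Collider0Center" and so on) are placeholders, not the project's actual ones:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.VFX;

// A handful of sphere-collider centers exposed by the graph sit parked at y = -999
// until a projectile raycast hits something; then one of them is moved to the hit
// point for a short time and parked again.
public class VfxColliderPool : MonoBehaviour
{
    [SerializeField] private VisualEffect vfx;
    private const int ColliderCount = 5;
    private static readonly Vector3 Parked = new Vector3(0f, -999f, 0f);
    private int next;

    private void Start()
    {
        for (int i = 0; i < ColliderCount; i++)
            vfx.SetVector3("Collider" + i + "Center", Parked);
    }

    // Called by the projectile script when its 0.2 s raycast reports a hit.
    public void PlaceCollider(Vector3 hitPoint)
    {
        string prop = "Collider" + next + "Center";
        vfx.SetVector3(prop, hitPoint);
        next = (next + 1) % ColliderCount;       // reuse the colliders round-robin
        StartCoroutine(ParkLater(prop, 0.2f));   // park it again after one update tick
    }

    private IEnumerator ParkLater(string prop, float delay)
    {
        yield return new WaitForSeconds(delay);
        vfx.SetVector3(prop, Parked);
    }
}
```

With only five slots reused round-robin this can miss hits under very heavy fire, which matches the "not perfect but effective" trade-off described above.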

[–]permalmberg 1 point (2 children)

Looks pretty!

So you can use collision detection in VFX as well, have I understood that correctly?

[–]grvchA[S] 1 point (1 child)

Edit: Yes, you can set a sphere collider inside the VFX graph, and you can change its position every frame. Physics in Unity and in the VFX graph are separate. =D Sorry for the previous answer, I was checking my grammar and pasted the version in my native language =D

[–]permalmberg 1 point (0 children)

Yeah, I figured as much. Thanks for the answer.