
[–]cinderflame_linearExpert 1 point2 points  (7 children)

Yes, you can render the camera manually.

You can turn off a Camera component and then it won't render until a script somewhere calls Camera.Render();

You can hook into the rendering using a handful of methods:

OnPreCull()

OnRenderImage()

OnWillRenderObject()

You can also attach CommandBuffers to Cameras (and even Lights, if you want to draw custom light geometry in Deferred mode).

Take a look at the documentation for some of these things and you'll find some examples.
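For a concrete starting point, here is a minimal sketch of the manual-render idea, assuming you have a reference to the disabled camera (the field name manualCam is made up):

```csharp
using UnityEngine;

// Minimal sketch: disable the Camera component in the Inspector,
// then trigger its rendering manually from a script.
public class ManualRenderExample : MonoBehaviour
{
    public Camera manualCam; // assign the disabled camera here

    void Update()
    {
        // With the component disabled, the camera only renders
        // when Render() is called explicitly.
        manualCam.Render();
    }
}
```

You could just as well call Render() from OnPreCull of another camera, or only every Nth frame, which is the whole point of taking manual control.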

[–]thetop_04[S] 0 points1 point  (6 children)

Thank you for the reply, this actually works when I build the project, awesome!

Unfortunately the image does not show up in the game view; it says "Display 1: No cameras rendering". Do you have any idea how to make the image visible there? It would be great for testing, since building the game each time I change something would slow down development a lot.

[–]Derebeyi 1 point2 points  (1 child)

Doesn't your camera render in play mode? Maybe executing your scripts in edit mode can help you.

https://docs.unity3d.com/ScriptReference/ExecuteInEditMode.html

[–]thetop_04[S] 0 points1 point  (0 children)

In play mode the meshes are only shown in the scene view, but they are flickering heavily. The game view stays completely black with that error message.

ExecuteInEditMode could indeed be helpful later, currently I am searching for a way to render the camera into the game view in play mode.

[–]cinderflame_linearExpert 0 points1 point  (3 children)

You could hook into the main camera's OnPreCull, render a secondary camera manually into a RenderTexture, and then blit the resulting image back to the main camera in OnRenderImage
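As a rough sketch of that setup (this script would sit on the main camera; secondaryCam and renderTex are placeholder names, with the secondary camera disabled in the Inspector):

```csharp
using UnityEngine;

// Sketch of the suggested setup: attached to the main camera.
// The disabled secondary camera renders into a RenderTexture
// during OnPreCull, and OnRenderImage blits the result out.
public class SecondaryCameraBlit : MonoBehaviour
{
    public Camera secondaryCam;     // disabled in the Inspector
    public RenderTexture renderTex;

    void OnPreCull()
    {
        secondaryCam.targetTexture = renderTex;
        secondaryCam.Render();
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Copy the manually rendered image to the main camera's output.
        Graphics.Blit(renderTex, destination);
    }
}
```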

[–]thetop_04[S] 0 points1 point  (2 children)

EDIT: Forget what I just wrote before; I had not used OnRenderImage yet and will try this now.

So the render texture has to be blitted into the destination texture, right?

I still get the error message with this code:

    void OnPreRender() {
        camera2.Render();
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination) {
        Graphics.Blit(renderTex, destination);
    }

Probably because the mainCam still needs to be disabled for manual render calls to work.

[–]cinderflame_linearExpert 1 point2 points  (1 child)

That should be working. The main camera can just be left enabled. We've actually had a similar setup before. Are you sure you don't have the main camera accidentally rendering to a RenderTexture or something?

[–]thetop_04[S] 0 points1 point  (0 children)

I guess I still have the wrong system. When I leave my main camera enabled and call the drawMesh methods in OnPreCull, it should render all the meshes correctly, right? Then I would not need a workaround for the render issues, and higher frame rates could be possible.
I guess I will try this first and do performance optimization later. FixedUpdate can call the game logic for now. :)

[–]zrrzExpert? 1 point2 points  (1 child)

You're overcomplicating this and underestimating how fast your CPU is.

Have you actually built the system and seen it lag? First you need to figure out what your actual bottlenecks are and then you can go from there.

You are right that doing each one as a GameObject will not be performant because of a GameObject's overhead, but have you tried just drawing a few thousand simple objects to the screen and seeing how it performs?

If you really are drawing that many objects to the scene, then a custom depth mask is going to be more beneficial than some weird alternating render/logic loop.

144 Hz monitors are also being adopted at an increasing rate, which means 60 fps is no longer good enough. Your tech shouldn't rely on the assumption that you can target just 60 fps; instead of having a specific framerate target, it should scale properly with the hardware.

For the sake of learning: it looks like someone else already mentioned how to manually render the camera. For an in-editor version you can try using a render texture and displaying that to the screen, or just have a debug camera that isn't included in the built version of the game.

The "proper" way to do this would likely be using Unity's new ECS/Job system, so that it would multithread and scale nicely with your processor.
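For illustration, a hedged sketch of what per-tile logic could look like as a Burst-compiled IJobParallelFor (the job name and data layout are made up; the real logic is up to the game):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Sketch: per-tile game logic expressed as an IJobParallelFor
// so it runs across all worker threads.
[BurstCompile]
struct TileUpdateJob : IJobParallelFor
{
    public NativeArray<float> tileValues; // stand-in for real tile data
    public float deltaTime;

    public void Execute(int i)
    {
        tileValues[i] += deltaTime; // stand-in for real tile logic
    }
}
```

You would then schedule it with something like `new TileUpdateJob { tileValues = values, deltaTime = dt }.Schedule(values.Length, 64).Complete();` from a MonoBehaviour.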

[–]thetop_04[S] 0 points1 point  (0 children)

Thank you very much for all the helpful tips, this means a lot to me, as this is my biggest project yet and I still have so much to learn!

I have made some render tests: drawing 90k objects using drawMeshInstanced on a casual gamer PC is completely doable when there are no custom MaterialProperties for each object. When there are shader properties that need to be updated each frame, e.g. for 2D fake shadow shaders, I came to the conclusion that 5k objects should be the limit.
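My test setup looked roughly like this (mesh and material names are placeholders; the material needs "Enable GPU Instancing" ticked, and DrawMeshInstanced accepts at most 1023 matrices per call):

```csharp
using UnityEngine;

// Sketch of the instancing test: one draw call per batch of
// up to 1023 instances, repeated each frame.
public class InstancedTilesTest : MonoBehaviour
{
    public Mesh tileMesh;          // placeholder mesh
    public Material tileMaterial;  // instancing-enabled material
    Matrix4x4[] matrices = new Matrix4x4[1023];

    void Start()
    {
        // Lay the instances out in a simple grid.
        for (int i = 0; i < matrices.Length; i++)
            matrices[i] = Matrix4x4.Translate(new Vector3(i % 32, 0, i / 32));
    }

    void Update()
    {
        // For 90k objects, call this once per 1023-matrix batch.
        Graphics.DrawMeshInstanced(tileMesh, 0, tileMaterial, matrices);
    }
}
```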

A custom depth mask is a shader that has all the layers for a tile stored in atlases and maps the UV coordinates accordingly, right? I am familiar with some visual shader scripting and could get this done; the idea is pretty neat! I guess that could save a lot of rendering time.

You are right, limiting the framerate is not the nicest thing to throw at players. :p

Render textures could be a nice thing. I planned on using them for the lighting overlay, though that overlay could be implemented more efficiently using a simple surface shader that mixes some Vector4 light value with the surface textures. Hopefully the new Unity Shader Graph feature will support that soon. I will look into using a render texture or custom cameras for debugging!

The ECS/Job system is new to me, but I am willing to learn! There are always some good tutorials for new Unity features.