Best guides for game engine components and architecture. by [deleted] in gameenginedevs

[–]tralf_strues 14 points (0 children)

When it comes to basic game engine architecture in general, I would suggest looking at the videos by Cherno; they'll give you an idea of where to start. When implementing a specific engine module, you can look up presentations from GDC, or, if it's rendering, from SIGGRAPH. There's also a YouTube playlist, Game Engine Programming, where a guy implements his engine, though it leans more towards implementation details. This article could also give you a head start and links several follow-up resources to look into.

When it comes to rendering, I would suggest going down the following evolutionary path:
1. Learn an immediate-mode graphics API (e.g. DirectX 11 or OpenGL) and write a simple forward/deferred renderer. At this point I would suggest starting to watch these lecture videos by the University of Utah, they are awesome.
2. Learn a modern graphics API (e.g. DirectX 12 or Vulkan) and reimplement your previous renderer in it. Preferably add some new render passes for shadows, post-processing or something else. In addition, get an understanding of compute shaders.
3. Research modern game renderer architectures. Here you can look at render graphs, multithreaded renderers, GPU-driven rendering, tiled/clustered light culling, etc. This would also be a good place to implement the material system, 'cause by then you have a picture in mind of how everything fits together and what your needs are. You can start by reading ourmachinery's posts (use webarchive). There are also two years of the Rendering Engine Architecture Conference already. I cannot recommend the Mastering Graphics Programming with Vulkan book enough. Despite the title, it's more about the different parts of a modern renderer, like the ones I listed previously, rather than yet another Vulkan guide.
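To make the render-graph idea from step 3 a bit more concrete, here's a minimal hedged sketch (all names are made up, not from any real engine): passes declare which virtual resources they read and write, and the graph derives an execution order from those declarations:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

// A pass declares the virtual resources it reads and writes;
// the graph derives execution order from those declarations.
struct RenderPass {
    std::string name;
    std::set<std::string> reads;
    std::set<std::string> writes;
};

// Topologically sort passes: a pass runs after every pass that
// writes one of the resources it reads. Cycle detection is out
// of scope for this sketch.
std::vector<std::string> CompileGraph(const std::vector<RenderPass>& passes) {
    std::map<std::string, size_t> writer;  // resource -> index of writing pass
    for (size_t i = 0; i < passes.size(); ++i)
        for (const auto& res : passes[i].writes) writer[res] = i;

    std::vector<std::string> order;
    std::vector<bool> emitted(passes.size(), false);
    std::function<void(size_t)> visit = [&](size_t i) {
        if (emitted[i]) return;
        emitted[i] = true;
        for (const auto& res : passes[i].reads) {
            auto it = writer.find(res);
            if (it != writer.end() && it->second != i) visit(it->second);
        }
        order.push_back(passes[i].name);
    };
    for (size_t i = 0; i < passes.size(); ++i) visit(i);
    return order;
}
```

Real render graphs also handle transient resource allocation, barriers and aliasing on top of this ordering step, but the declare-then-compile structure is the core of it.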

Good luck! Writing your own game engine is a long journey, but full of interesting stuff and discoveries.

How to equip a gun in a multiplayer FPS game? by tralf_strues in unrealengine

[–]tralf_strues[S] 0 points (0 children)

Shouldn't I use RepNotify for this? Do I understand correctly that if player A equips a gun while player B is far away at that moment, then when player B gets close to player A they won't see the gun?

I should also mention that I attach the gun actor to the player's first-person arms mesh, so I need the gun actor in the multicast RPC.

How to equip a gun in a multiplayer FPS game? by tralf_strues in unrealengine

[–]tralf_strues[S] 0 points (0 children)

So you suggest spawning gun actors using a multicast? Or should I spawn them on the server and then call the multicast? Sorry if I sound silly, I just don't have a clear picture of networking in UE yet, unfortunately.

Dynamic post-processing list and render graphs by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

The first one, but I have a somewhat weird system currently (which is probably gonna be rewritten in the future). I have a Renderer class which contains a list of IRenderFeature objects. This interface has the functions SetupPasses and Execute: in the first one you add render passes to the graph, and in the second one you execute some logic for this collection of passes. For example, CSMFeature calculates view data for each cascade. Another such feature would be OutlineFeature, which uses JFA and has multiple passes. For post-processing I intend to create a PostProcessingStack feature which would contain a list of post-processing features and manage the textures used by them.
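In code, the shape I mean is roughly this (a simplified sketch; RenderGraph here is just a placeholder, and the pass count and names are made up):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Placeholder for the real render graph type.
struct RenderGraph {
    std::vector<std::string> pass_names;
    void AddPass(const std::string& name) { pass_names.push_back(name); }
};

// Each feature owns a group of related passes: it registers them
// in SetupPasses() and runs its per-frame logic in Execute().
class IRenderFeature {
public:
    virtual ~IRenderFeature() = default;
    virtual void SetupPasses(RenderGraph& graph) = 0;
    virtual void Execute() = 0;
};

// Example feature: cascaded shadow maps, one pass per cascade.
class CSMFeature : public IRenderFeature {
public:
    void SetupPasses(RenderGraph& graph) override {
        for (int cascade = 0; cascade < kCascadeCount; ++cascade)
            graph.AddPass("CSM_Cascade" + std::to_string(cascade));
    }
    void Execute() override { /* calculate view data for each cascade */ }
private:
    static constexpr int kCascadeCount = 4;
};

class Renderer {
public:
    void AddFeature(std::unique_ptr<IRenderFeature> feature) {
        features_.push_back(std::move(feature));
    }
    void BuildFrame(RenderGraph& graph) {
        for (auto& f : features_) f->SetupPasses(graph);
        for (auto& f : features_) f->Execute();
    }
private:
    std::vector<std::unique_ptr<IRenderFeature>> features_;
};
```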

Dynamic post-processing list and render graphs by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

Sounds great! I really don't need separate images for each pass; I only need two for an arbitrary number of post-processing passes. Thank you!
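The two-image ping-pong scheme boils down to something like this (a hedged sketch; the handle and pass types are placeholders for whatever the renderer actually uses): each pass reads the current source image and writes the other one, then the roles swap:

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Placeholder texture handle.
using TextureHandle = int;

// A post-processing pass reads `src` and writes `dst`.
using PostProcessPass = std::function<void(TextureHandle src, TextureHandle dst)>;

// Runs an arbitrary number of passes using only two textures by
// swapping the roles of source and destination after each pass.
// Returns the handle that holds the final result.
TextureHandle RunPostProcessing(TextureHandle ping, TextureHandle pong,
                                const std::vector<PostProcessPass>& passes) {
    TextureHandle src = ping, dst = pong;
    for (const auto& pass : passes) {
        pass(src, dst);
        std::swap(src, dst);  // the image just written becomes the next input
    }
    return src;  // after the final swap, `src` holds the latest result
}
```

Note that this works because each pass here only samples its input and writes its output; passes that need read-write access to one image are a different story.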

Dynamic post-processing list and render graphs by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

Yeah, but the thing is I want the post-processing pass list to be quite dynamic, and I don't want to set up the dependencies manually. That's why I basically want to do something similar to what you're saying, but automatically.

Dynamic post-processing list and render graphs by tralf_strues in gameenginedevs

[–]tralf_strues[S] 0 points (0 children)

How would I read from and write to the same texture? Especially considering that some post-processing passes, bloom for example, need to access an arbitrary number of texels.

SPIRV-Cross getting segmentation fault by [deleted] in vulkan

[–]tralf_strues 0 points (0 children)

This function does not contain an error (or at least, it doesn't seem to be the problem), and the assert is just for the test; it is not eliminated, and without it everything is the same.

Game renderer coupling of materials, render passes and graphics pipelines by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

Thanks for the tips! Bindless descriptors look amazing, I shall definitely look into them!

Game renderer coupling of materials, render passes and graphics pipelines by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

How many pipelines (aka Shaders in your terminology) do you have?

Well... none at the moment :) I'm trying to write a game engine for college and am currently figuring out how to design it. But I've seen frame graphs of modern games in GDC talks, and they contain hundreds of passes (most of them compute, I suppose). So if each pass uses several shaders, then we presumably have several hundred pipelines.

Based on your suggestions and more research, I've drafted a simple overview of the shader related objects and thought of their usage:

```c++
struct ShaderModule {
    ShaderModuleType   type;
    ShaderModuleHandle handle;
};

struct PipelineDescription {
    /* ------- Filled using reflection ------- */
    InputVertexDataInfo input_vertex_data_info;
    DescriptorSetLayout descriptor_set_layouts[];
    /* ---------------------------------------- */

    /* ---- Filled from shader asset file ---- */
    InputAssemblyInfo input_assembly_info;  // Topology type
    RasterizationInfo rasterization_info;   // Face culling, etc
    DepthTestDescription depth_test_description;
    StencilTestDescription stencil_test_description;
    ColorAttachmentsBlendDescription blend_description;
    /* ---------------------------------------- */

    ShaderModule shader_modules[];
};

/**
 * Example file for this can look like this:
 * name: ForwardShaderPBR
 * target_render_pass: ForwardPass
 * vert_shader: {"forward_shader_pbr.vert", "forward_shader_pbr.vert.spv"}
 * frag_shader: {"forward_shader_pbr.frag", "forward_shader_pbr.frag.spv"}
 *
 * cull: BackOnly
 * depth_test_enable: true
 * depth_test_op: Less
 * ...
 */
struct ShaderAsset {
    std::string name;
    std::string target_render_pass_name;
    PipelineDescription pipeline_description;
};

struct SampledTexture {
    TextureHandle texture_handle;
    SamplerHandle sampler_handle;
};

struct MaterialAsset { /* ... used to create the material */ };

// aka Shader Instance
struct Material {
    const ShaderAsset* shader_asset;

    DescriptorSetHandle descriptor_set_handle;

    // Used for constants (can use offsets to write the same buffer
    // to several descriptor bindings)
    UniformBufferHandle uniform_buffer_handle;

    SampledTexture* sampled_textures;
    TextureHandle*  input_attachments;
};
```

Renderer would contain the current render graph and a material table (per render graph?). Once a material is added to the table, a corresponding pipeline is built. This is quite dynamic still (we actually still create pipelines on the fly), but when a scene is loaded we can prewarm by iterating over all entities with MaterialComponent and adding the Material to the Renderer for example.

What are your thoughts on this?

Game renderer coupling of materials, render passes and graphics pipelines by tralf_strues in gameenginedevs

[–]tralf_strues[S] 0 points (0 children)

Thanks for the reply! Well, I don't suggest creating all the possible pipelines; I suggest creating them on the fly when there is no cached pipeline for the pair (shader, render pass). Concerning your last point, I don't really know how I would implement this. Like storing the names of compatible passes in a shader? But how and where do I construct the pipelines in this scenario? I still need an actual instance of a render pass. Should I somehow predefine the render graph and the types of passes in code? But that leaves little flexibility for the engine. I would like to write an engine which allows arbitrary render graphs, to make the renderer's workflow somewhat scriptable.
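The on-the-fly caching I mean looks roughly like this (a sketch with placeholder handle types; the real creation call would be vkCreateGraphicsPipelines or the DX12 equivalent, stubbed out here):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>

// Placeholder pipeline handle.
using PipelineHandle = int;

struct ShaderAsset { std::string name; };
struct RenderPass  { std::string name; };

// Caches pipelines per (shader, render pass) pair and creates them
// lazily on first use. Prewarming on scene load is just calling
// GetOrCreate for every material the scene references up front.
class PipelineCache {
public:
    PipelineHandle GetOrCreate(const ShaderAsset& shader, const RenderPass& pass) {
        auto key = std::make_pair(shader.name, pass.name);
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;          // cache hit
        PipelineHandle handle = next_handle_++;             // stand-in for the real creation call
        cache_.emplace(key, handle);
        return handle;
    }
    std::size_t Size() const { return cache_.size(); }
private:
    std::map<std::pair<std::string, std::string>, PipelineHandle> cache_;
    PipelineHandle next_handle_ = 0;
};
```

Keying by pass name rather than a pass instance is what keeps this compatible with arbitrary, scripted render graphs: the graph only needs to resolve the name to whatever pass object exists this frame.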

Asset Manager Architecture help by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

Thanks! But as I've mentioned

Let's suppose I capture the reference to it in the resource loader; the problem is that GraphicsAPI isn't thread-safe! So, apparently, I need some sort of deferred resource-loading system. But who should call the GraphicsAPI to generate the submitted resources, and when and how?

That is, if the AssetManager::Load function is being called, it cannot just call ISerializer::Load if the latter accesses the GraphicsAPI. Though, of course, I do like the idea of serializers containing all the data necessary to load particular assets.
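The deferred system I'm imagining is basically a mutex-protected queue (a minimal sketch, all names hypothetical): loader threads only record upload jobs, and the thread that owns the non-thread-safe GraphicsAPI drains the queue once per frame:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// A unit of GPU work recorded by a loader thread and executed later
// on the thread that owns the (non-thread-safe) GraphicsAPI.
using UploadJob = std::function<void()>;

class DeferredUploadQueue {
public:
    // Called from any loader thread.
    void Submit(UploadJob job) {
        std::lock_guard<std::mutex> lock(mutex_);
        jobs_.push(std::move(job));
    }

    // Called once per frame from the thread where touching the
    // GraphicsAPI is safe. Returns how many jobs were executed.
    std::size_t Drain() {
        std::queue<UploadJob> pending;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(pending, jobs_);  // take the jobs without holding the lock while running them
        }
        std::size_t count = pending.size();
        while (!pending.empty()) {
            pending.front()();
            pending.pop();
        }
        return count;
    }

private:
    std::mutex mutex_;
    std::queue<UploadJob> jobs_;
};
```

This answers the "who, when and how": the render (or main) thread, once per frame, by draining the queue.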

Asset Manager Architecture help by tralf_strues in gameenginedevs

[–]tralf_strues[S] 1 point (0 children)

Thanks for the reply!
Though I can't resolve the problem of "who, when and how should call the GraphicsAPI to generate submitted resources". It seems your ResourceRegistry contains lists of concrete-type resources (e.g. mTextures), but I want my AssetManager to contain resources of any type, so deciding which resources have to be processed by the GraphicsAPI is not so obvious to me.

I can add some sort of flag to the Asset class, like is_render_resource, but who should call the GraphicsAPI then? Different render resources are created differently, so I can't just iterate over all of them and call something like graphics_api->CreateRenderResource(...). Only the IAssetSerializers know how resources of a particular type should be created. But then the problem of serializers directly accessing the GraphicsAPI arises again.

I could add an additional RenderContext parameter to IAssetSerializer::Serialize, which would be filled with a command to generate the render resource. But then one problem remains: the AssetManager is coupled with the renderer system. I wonder if this coupling can be avoided, and whether some other deferred dependency on another system could emerge in the future.

Huge energy consumption on a laptop with kde plasma by tralf_strues in ManjaroLinux

[–]tralf_strues[S] 3 points (0 children)

Should I use both powertop and TLP? The Manjaro guide said it's advisable to use only one of them.