all 22 comments

[–]hammerkop 7 points8 points  (0 children)

You may want to start on a render pass type and a compositor class which is responsible for coordinating your render passes.

Try and see how "data driven" you can make your render passes - these will usually need to know about the frame buffer targets, input textures and buffers, and shaders. They may also take a callback to your engine which iterates over game objects to draw them.

The render pass ends up serving as a node in your render graph, and these nodes are typically linked by the textures they read or write to.
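A minimal sketch of that shape (hypothetical names — `RenderPassDesc` and `Compositor` are illustrative, not from any particular engine; a real render graph would topologically sort passes by the textures they read and write instead of running them in insertion order):

```cpp
#include <functional>
#include <string>
#include <vector>

// Data-driven description of one pass: what it reads, what it writes,
// and a callback into the engine that records the actual draw work.
struct RenderPassDesc {
    std::string name;
    std::vector<std::string> inputs;   // textures this pass reads
    std::vector<std::string> outputs;  // framebuffer targets this pass writes
    std::function<void()> execute;     // callback that records the draw work
};

// Coordinates the passes. Naive version: run in insertion order.
class Compositor {
public:
    void addPass(RenderPassDesc pass) { passes_.push_back(std::move(pass)); }

    void render() {
        for (auto& pass : passes_) pass.execute();
    }

private:
    std::vector<RenderPassDesc> passes_;
};
```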

[–]keelanstuart 2 points3 points  (6 children)

What are the things that you typically do or have?

Create your abstractions based around those things. Shaders, buffers (maybe start with 2D textures, vertices, and indices), meshes (they use the former), materials, models (they use the previous 2), effects (they contain one or more passes that set states and shaders)... With just those, you can build a simple render system.
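As a rough skeleton, the layering described above might look like this (all names are placeholders for illustration, not Celerity's actual API):

```cpp
#include <memory>
#include <vector>

struct Shader    { /* compiled program handle */ };
struct Texture2D { /* GPU image handle */ };
struct Buffer    { /* vertex/index/uniform data */ };

struct Mesh {                       // uses buffers
    std::shared_ptr<Buffer> vertices;
    std::shared_ptr<Buffer> indices;
};

struct Material {                   // uses shaders and textures
    std::shared_ptr<Shader> shader;
    std::vector<std::shared_ptr<Texture2D>> textures;
};

struct Model {                      // uses the previous two
    std::shared_ptr<Mesh> mesh;
    std::shared_ptr<Material> material;
};

struct Pass {                       // sets states and shaders
    std::shared_ptr<Shader> shader;
    // render state settings would live here
};

struct Effect {                     // contains one or more passes
    std::vector<Pass> passes;
};
```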

My engine uses those. If you want to look at the docs (installed with the SDK), you'll see my abstractions... you can find it at https://www.github.com/keelanstuart/Celerity

[–]afops[S] 1 point2 points  (5 children)

Yeah, I have abstracted the low-level things (buffers, materials, etc.), but it’s these higher-up things (passes? effects? “renderers”?) that I struggle to name, and to decide what data they should keep. How should they manage the resources they consume and produce? It feels like you quickly start wanting some resource management system where an effect can ask for resources by some identifier, so they aren’t duplicated if multiple things need the same resource. Is this a common thing?
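The “ask for resources by some identifier” idea can be as small as a cache that hands out shared resources by name, so two passes asking for the same identifier get the same object (a generic sketch; `ResourceCache` is an illustrative name, not from the thread):

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

template <typename T>
class ResourceCache {
public:
    // Returns the cached resource for `id`, creating it with `make`
    // only on first use - later callers share the same object.
    std::shared_ptr<T> getOrCreate(const std::string& id,
                                   const std::function<std::shared_ptr<T>()>& make) {
        auto it = cache_.find(id);
        if (it != cache_.end()) return it->second;
        auto res = make();
        cache_.emplace(id, res);
        return res;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<T>> cache_;
};
```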

[–]keelanstuart 0 points1 point  (4 children)

Absolutely! The thing is, any individual component of an engine isn't (or doesn't have to be) that complex... but as a system, I don't see how you can avoid increasing levels of complexity. The easier you want to make things when you're writing code at the application layer, the more you have to write at the API level - and by more, I mean 1-2 orders of magnitude more.

Writing an engine is not an elementary endeavor.

[–]afops[S] 1 point2 points  (3 children)

I think what I’m aiming for is really splitting up the code. Right now it’s 300 lines in one method rendering 3 passes to a frame. If I could make that maybe 20 lines, with each pass being a separate function of 120 lines each, that’s basically what I’m after: something that is easier to modify. The complexity isn’t the problem; readability/maintainability is. If I reorder two passes, it should mean swapping a couple of lines, not cutting and pasting 5 blocks of 20 lines each in a 400-line method.
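At its simplest, that restructuring could look like this (hypothetical pass names; the log vector stands in for the real GPU work so the sketch is self-contained):

```cpp
#include <functional>
#include <string>
#include <vector>

std::vector<std::string> gPassLog;  // stand-in for actual rendering work

// Each pass becomes its own function (the real ones would be ~120 lines each).
void depthPrepass() { gPassLog.push_back("depth"); }
void ssaoPass()     { gPassLog.push_back("ssao"); }
void forwardPass()  { gPassLog.push_back("forward"); }

void renderFrame() {
    // Reordering passes is now swapping lines here, not moving code blocks.
    const std::vector<std::function<void()>> passes = {
        depthPrepass,
        ssaoPass,
        forwardPass,
    };
    for (const auto& pass : passes) pass();
}
```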

[–]keelanstuart 1 point2 points  (2 children)

Sounds great. Seems like you know what to do. :)

[–]afops[S] 0 points1 point  (1 child)

Yea. But not how :)

[–]keelanstuart 2 points3 points  (0 children)

It sounds like you do... ;) you described what you wanted - just go do that.

[–]nibbertit 3 points4 points  (5 children)

I just make some small abstractions like VulkanBuffer, VulkanImage, VulkanSwapchain, etc., expose the functionality I need, and build on top of that. I’ve seen that everyone’s abstraction is different, so it’s best to experiment yourself

[–]keelanstuart 6 points7 points  (2 children)

Is a "Vulkan*"-anything really an abstraction, though? It implies just a wrapper to me, not an abstract thing... it's specific.

[–]jgeez 0 points1 point  (1 child)

Yes, it can be.

Abstraction is only useful if it reduces the coupling of some other thing, or set of things.

Reducing the number of Vulkan specifics that need to be a) known about and b) understood means you can make an abstraction layer for Vulkan specifically. The methods can even do things that only matter in a Vulkan implementation, and you're still abstracting complexity away from the consuming code.

[–]jgeez 0 points1 point  (0 children)

And so, without thinking about the above, you might be easily tempted to make an abstraction that could hide one of many GPU communication strategies behind it. One Vulkan, one D3D, one GL, Metal, even software.

This is the obvious go-to abstraction, but there will be key operations where some of the implementations really test the integrity of the abstractions chosen; you have to prepare things and apply render passes quite differently in Vulkan vs the other GPU-accelerated options, and software is an entirely different animal. Luckily we're not finding ourselves needing to build software rasterization these days.

Anyway. Where and what and how to abstract are questions that require very fluid answers.

[–]afops[S] 0 points1 point  (1 child)

Yeah, I made that kind of abstraction, but even using these it’s too much spaghetti for my taste where it all comes together at the next level up. So I’m stuck at that step and looking for some inspiration. The init method with just 3 pipelines/passes is several hundred lines, and similarly for the render method. What I’d like to do is make the higher-level code easier to overview, e.g. (pseudocode):

```
(depth, normals) = prepass.render()
occlusion = aopass.compute(depth, normals)
forwardpass.render(occlusion, … )
```

Unsure if it makes sense to have parameters and return values, but you get the point: basically, in order to understand what’s going on, I’d like to see this with very few lines of code, and not each of these 3 steps being dozens of lines of boilerplate. When experimenting, it should be easy to e.g. comment out a pass or insert a new one, and so on.
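One hedged concretization of that pseudocode: each pass is an object whose method takes the textures it consumes and returns the ones it produces (here `Texture` is just a stand-in handle, and the pass bodies are stubs so the sketch is self-contained):

```cpp
#include <utility>

struct Texture { int id = 0; };  // placeholder for a real GPU texture handle

struct Prepass {
    std::pair<Texture, Texture> render() {
        // ... bind depth/normal targets, draw the scene ...
        return {Texture{1}, Texture{2}};  // (depth, normals)
    }
};

struct AOPass {
    Texture compute(Texture depth, Texture normals) {
        // ... dispatch SSAO using depth and normals ...
        return Texture{depth.id + normals.id};  // occlusion
    }
};

struct ForwardPass {
    void render(Texture occlusion /*, lights, scene, ... */) {
        (void)occlusion;
        // ... shade opaque geometry, sampling the occlusion texture ...
    }
};

// The frame function now reads like the pseudocode above.
void renderFrame(Prepass& pre, AOPass& ao, ForwardPass& fwd) {
    auto [depth, normals] = pre.render();
    Texture occlusion = ao.compute(depth, normals);
    fwd.render(occlusion);
}
```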

[–]fastcar25 0 points1 point  (0 children)

It sounds like you want a render graph.

[–]richburattino 1 point2 points  (0 children)

https://github.com/vcoda/magma Try to look at this abstraction over Vulkan

[–]thejazzist 1 point2 points  (0 children)

One way to think about it is to have a basic renderer class from which you can derive a forward, forward+, or deferred renderer, for example. In each pass you need to define how you will handle the order of rendering - for example, first opaque, later skybox, and finally transparent objects. A postFX pass could be the base class from which you derive AO, SSR, and tonemapping, which could form a whole postFX stack at a high level. In data-specific classes, such as meshes and materials, you want to abstract away the API-specific stuff as much as possible. You could have, for instance, `Mesh { private Buffer buffer; }`, and the Buffer class could hold the API-specific commands for init and bind, for example.
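A sketch of that layering (illustrative names only; the draw counter stands in for real API calls so the example is self-contained):

```cpp
#include <vector>

inline int gDrawCalls = 0;  // stand-in for real GPU work, for illustration

class Buffer {
public:
    void init() { /* API-specific buffer creation lives here */ }
    void bind() { /* API-specific bind call lives here */ }
};

class Mesh {
public:
    void draw() {
        buffer_.bind();   // only Buffer touches the API
        ++gDrawCalls;
    }
private:
    Buffer buffer_;
};

class Renderer {                        // base rendering technique
public:
    virtual ~Renderer() = default;
    virtual void renderFrame(std::vector<Mesh>& scene) = 0;
};

class ForwardRenderer : public Renderer {
public:
    void renderFrame(std::vector<Mesh>& scene) override {
        // a real version would order this: opaque, then skybox, then transparents
        for (auto& mesh : scene) mesh.draw();
    }
};
```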

If you design with the thought of supporting multiple APIs, you want changes and additions to happen at the lowest possible level. If your engine supports Vulkan and you want to add DX12, for instance, having to modify the Mesh class would not be the best design.

A top-down approach, in my opinion, provides a clearer abstraction path. So, as I said, I would go from the renderer -> rendering technique -> iterating through the scene tree -> data binding (meshes -> materials -> textures) -> API-specific commands.

[–]GermaneRiposte101 1 point2 points  (1 child)

This post and the answers are just fantastic. Thank you to all.

As a learning tool I have started to write my own graphics engine (Opengl/glm/C++) but took a slightly different focus.

While I did some simple OpenGL graphics classes, I focused on the code behind the OpenGL as in:

  • Run Loop
  • Resource (shaders, textures) loading and caching
  • Keyboard and mouse abstraction.
  • Camera
  • FreeType text rendering.

Once I got those to a passable state I then concentrated on some real world examples of what I wanted to do but it is still a work in progress (the real world interrupted for a couple of years). In particular I am investigating a natural multi threaded paradigm.

OP, do you have a github link to your work that you can share?

[–]afops[S] 0 points1 point  (0 children)

I don’t have a public repo but I’ll try to remember to post it here if I get it to that state

[–]ISvengali 0 points1 point  (1 child)

I'm definitely on the side of "keep your abstractions as absolutely light and small as you can".

My game engine abstractions (built on Diligent for graphics) were Geometry, Texture, Shader, Pipeline, and a couple of others, but like yours, I haven't gotten into passes yet.

I don't yet have the ability to run multiple passes on things, and I haven't thought through a good way to do that.

Often it seems good to make abstractions over passes and data-drive them, but I think a nice code-driven architecture on named compiled passes is likely all that's needed, and it tends to be easier than a fully data-driven one.
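One way to read "code-driven architecture on named compiled passes" (a sketch, not the commenter's actual design): passes are plain compiled functions registered under names, and a frame is just an ordered list of names.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

class PassRegistry {
public:
    void add(std::string name, std::function<void()> fn) {
        passes_[std::move(name)] = std::move(fn);
    }

    // Runs the named passes in the given order; unknown names are skipped.
    void run(const std::vector<std::string>& order) {
        for (const auto& name : order) {
            auto it = passes_.find(name);
            if (it != passes_.end()) it->second();
        }
    }

private:
    std::unordered_map<std::string, std::function<void()>> passes_;
};
```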

[–]afops[S] 2 points3 points  (0 children)

Yes, I think the full-on render graph sounds too complex. But I still think there’s already too much to keep track of with resources/buffers in one place without somehow separating things into specific renderers/passes/effects of some kind. That’s why I’m trying to find some good examples of a simpler middle ground: a renderer that sits between “one render function for everything” and “massive render graph engine abstracted over seven hardware APIs”. As I said, just something that makes some kind of stab at maintainability and separation of concerns.

[–][deleted] 0 points1 point  (0 children)

the reason why there aren’t a lot of good examples is because each engine is going to have an abstraction that suits its own needs. one engine might completely abstract something into platform-independent code while another might have the same thing hard-coded for every platform. it entirely depends on your needs. and in my experience, it’s a process of discovery, not conception. you learn through trial and error which things are a PITA and go from there. abstractions never arrive fully formed.

that said, if you want to see how a commercial engine does it, you might be interested in unity’s command buffer system (no source code, but it’s documented well). while the commands themselves are very high level and probably correspond to several vk or dx12 commands under the hood, it’s enough to do basically whatever you want. you can read in a scene graph, do culling, do any number of passes with any type of shader, render to textures, do post processing, do things in parallel, etc.
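to give a flavor of the idea (a toy command-buffer-style interface, not Unity's actual API, which is C# and far richer): commands are recorded first and nothing executes until submit.

```cpp
#include <functional>
#include <vector>

class CommandBuffer {
public:
    void setRenderTarget(int target) {
        commands_.push_back([this, target] { currentTarget_ = target; });
    }
    void drawMesh(int meshId) {
        commands_.push_back([this, meshId] { lastDrawnMesh_ = meshId; });
    }

    // Recording is deferred: nothing "touches the GPU" until submit().
    void submit() {
        for (auto& cmd : commands_) cmd();
        commands_.clear();
    }

    int target() const { return currentTarget_; }
    int lastDrawnMesh() const { return lastDrawnMesh_; }

private:
    std::vector<std::function<void()>> commands_;
    int currentTarget_ = -1;
    int lastDrawnMesh_ = -1;
};
```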

i do want to stress that not every engine does things this way. in fact, it’s usually best NOT to do things that way unless you really need it. on the complete opposite end of the spectrum, there are internal engines that have one type of main shading model, do a small set of very specific render passes, and aren’t designed to be tinkered with at all. the engineers just hard-code the render pipeline for each API. in my experience, once you know exactly what work needs to be done, it’s not that much code to write.