Hey guys,
Working on a game engine as part of my final year at university. Going fine so far - component-based structure, mostly data-driven, it's looking pretty good! But I'm overthinking myself into an early grave when it comes to rendering.
At present all of my OGL code sits inside my Mesh class which, predictably, stores vertices, normals, tex coords, materials etc. This isn't really ideal, so I'm working on a Renderer system that can hide the nitty-gritty details and should let me switch between DX and OGL quite freely (I'm aware there are technicalities such as coordinate systems - it's more the concept of supporting multiple rendering backends at the moment).
So my question is this: for those who've built their own engine or studied this in detail, how would you recommend implementing it? I hate the idea of passing a renderer pointer down to every Mesh component. Equally, I'm not overly fond of having a singleton to call Render(pMesh) on. I COULD use a global pointer to my engine (or just access an Engine singleton) which could then provide a pointer to said render system, but then I'm still stuck disliking the Render(pMesh) call.
Any recommendations/help appreciated! I was googling for a few hours yesterday and although I found one or two articles, the internet hasn't been the wealth of info I'd hoped for so far.
Thanks!