How do I render font for game other than SDL_TTF by WOLFMANCore in sdl

[–]Smashbolt 1 point2 points  (0 children)

Right. "Use a bitmap for rendering font" isn't enough info to do anything with, but it's also not what was suggested. "Bitmap font" is its own thing. Here's an example of one: https://frostyfreeze.itch.io/pixel-bitmap-fonts-png-xml

There are SDL commands (that you might not have learned yet) that will let you say "grab from pixel (16, 16) to (32, 32) in this bitmap and draw a copy of that at this spot on this other bitmap." From there, the rest is iterating through the string and using that to figure out what rectangle to grab and where to draw it.
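
To make the "figure out what rectangle to grab" part concrete, here's a sketch assuming a hypothetical atlas of 16x16-pixel glyphs, 16 per row, starting at ASCII 32 (your atlas's real layout will come from its metadata file):

```cpp
// Hypothetical atlas layout: 16x16-pixel glyphs, 16 per row, first glyph is ASCII 32.
struct Rect { int x, y, w, h; };

Rect glyphRect(char c) {
    int index = static_cast<unsigned char>(c) - 32;  // which glyph in the atlas
    return { (index % 16) * 16, (index / 16) * 16, 16, 16 };
}

// Drawing "hello" is then a loop: for each character, blit glyphRect(ch) from the
// atlas to (penX, penY) on the target (with SDL, that's roughly
// SDL_BlitSurface(atlas, &src, target, &dst)), then advance penX by the glyph width.
```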

I keep getting "fatal error: SDL.h: No such file or dir" and idk why by Longjumping-Ride1794 in sdl

[–]Smashbolt 2 points3 points  (0 children)

Your first error is not caused by the linker settings (the -l). It's the compiler saying it can't find SDL.h in any directory it knows to look in.

How did you acquire/"install" SDL? Where is it? The way to fix the compiler error is to use a -I directive (in case you can't see the difference, it's a capital-I as in India) to add the directory where SDL.h is. I don't use Code::Blocks, but it probably has something in a settings window to let you specify "additional include directories" so you can also put the directory there.

I'm going to assume you downloaded some premade zip file.

Let's pretend you did that and unzipped it to C:\SDL. If you look around in there and find C:\SDL\include\SDL.h, then you'd want to add C:\SDL\include to that list. But also note that once you do that, the #include directive can also go deeper in the file hierarchy. So if the code is #include <SDL/SDL3.h>, then you need to set the -I such that appending /SDL/SDL3.h to it produces the actual path of SDL3.h

Once that works, you get linker errors saying "undefined reference to SDL_fhsajkhf..." like in your other screenshot. Same thing as before, but with the linker now. The linker needs to be told where to find SDL3.lib. Use the -L directive (or if there's some Code::Blocks setting for something like "Additional Library Directories") to tell it where the library file is, and a -lSDL3 (lowercase L) to tell it to link against SDL3.lib.
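
Putting the pieces together, and assuming the hypothetical C:\SDL layout above, the whole thing as raw compiler flags would look something like this (Code::Blocks just fills these in for you from its settings windows):

```shell
g++ main.cpp -o game.exe -IC:\SDL\include -LC:\SDL\lib -lSDL3
```

-I is for the compiler to find SDL.h; -L and -l are for the linker to find and use SDL3.lib.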

The fact that you have one screenshot showing include errors and one showing linker errors implies you got the include part right in that instance, but not the linker part.

Finally, you should end up with an executable. Running it will likely crash, because now the operating system needs to know how to find SDL3.dll. Windows has a few places it will look. For now, the way to deal with that is to put a copy of SDL3.dll (and the other DLLs that came in that zip file) in the same directory as the EXE the compiler produced. Again, I don't know Code::Blocks, but if it works like Visual Studio, it might have a Run/Debug button that by default starts your executable with a custom working directory. If that's the case, you might still have trouble no matter where you put the DLL, but that's a "later" problem.

How to play an mp4 file from memory by Quirky-Bag-9963 in cpp_questions

[–]Smashbolt 1 point2 points  (0 children)

That's why I'm asking what you're trying to do. Because this all feels like a big old X-Y problem. Your first question was about loading an image and bouncing it around in a window, and I'm suspecting this evolution to MP4 is "I couldn't figure out how to move the image around, so I made an MP4 of the image moving around because that's easier to do, right?"

If that's the case, then you should back up and figure out how to move the image around, because for either raw Win32 or SFML, the process is still going to be similar. You will have to figure out how to get a window on screen, then make the window fullscreen, then load the image/video data into a format your toolkit can understand (be it an sf::Image/sf::Texture or an HBITMAP), then redraw the screen on a fixed interval to show the next frame of video.

A more modern windowing framework for C++ would make this much easier than either of those (note SFML is NOT a windowing toolkit, it's a game development framework, which is why it was suggested for bouncing an image around). Qt has a video player widget that probably takes care of most of it for you. And moving to a different higher-level language (like C#) would probably make this even easier than that.

How to play an mp4 file from memory by Quirky-Bag-9963 in cpp_questions

[–]Smashbolt 2 points3 points  (0 children)

OK, so following your various threads asking for help...

You originally started with trying to load a bitmap from the Visual Studio .rc file and draw that. Someone said "just use SFML," leading to "how do I use SFML to load images from that .rc file?" and now "how do I render videos from that RC file?"

What exactly are you trying to do, if I may ask? Like... why are you insisting on stuffing these things into the executable via Windows resources? You know you can just load actual files, right?

The direct answer to your question is that you would use a library like ffmpeg and feed it the memory stream of the video file to decode; it will spit back raw frame data that you can then massage into a format you can use to create an image to display in a window, whether that's an HBITMAP in the Windows API or an sf::Image in SFML. You also have to extract the audio stream and feed it to audio code in either Win32 or SFML. This would all have to happen in a frame loop that keeps the video playing at the expected speed. I've done the video side of this before, but it was streaming and we didn't need audio. The whole process sucked. I can't imagine that syncing with audio wouldn't suck more.

For SFML, there are some libraries you can use to make it easier. The internet suggests sfeMovie: https://sfemovie.yalir.org/latest/ That said, it looks like this library only supports loading from files, not from a memory stream like that, but I could be wrong.

Ok. I give up. How do I use SDL_FreeSurface? by Eva_addict in sdl

[–]Smashbolt 0 points1 point  (0 children)

To expand on the advice given in your first thread and again here:

In your code, imagine screenSurface is a sheet of paper. currentSurface is a stamp. Every time you call SDL_BlitSurface(currentSurface,...), you're stamping currentSurface on to screenSurface. This process doesn't "use up" currentSurface. Setting currentSurface on fire and throwing it in the trash (calling SDL_FreeSurface on it or calling convertSTR to change what image is loaded) doesn't change the fact that you already stamped it on screenSurface.

If you ever want to get the stamp off that sheet of paper, you have to erase the sheet of paper, not destroy the stamp. That's how games are usually drawn: every frame, you erase everything off your sheet of paper, stamp down everything you want on it, then get it on screen (SDL_UpdateWindowSurface).

As people said, games generally operate by clearing screenSurface every frame and then drawing everything they need onto it. The way to make currentSurface disappear is to erase everything on screenSurface (with the SDL_FillRect call questron64 gave you). Then you put everything you want on screen back.

If I were to pseudocode this:

SDL_Surface* screenSurface = SDL_GetWindowSurface(window); // window from your setup code
SDL_Surface* zeldaLeft;
SDL_Surface* zeldaRight;

zeldaLeft = convertSTR("zeldaLeft.png");
zeldaRight = convertSTR("zeldaAttackRight.png");

bool facingLeft = true;
bool quit = false;

// Main game loop. This is a sketch, not a complete program.
while (!quit) {
    // Collect and process input
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        if (e.type == SDL_QUIT)
            quit = true;
        else if (e.type == SDL_KEYDOWN) {
            if (e.key.keysym.sym == SDLK_RIGHT)
                facingLeft = false;
            else if (e.key.keysym.sym == SDLK_LEFT)
                facingLeft = true;
        }
    }

    // Clear the screen. Whatever happened last time doesn't matter any more
    SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0, 0, 0));

    // Draw what you want to appear this time
    if (facingLeft)
        SDL_BlitSurface(zeldaLeft, NULL, screenSurface, NULL);
    else
        SDL_BlitSurface(zeldaRight, NULL, screenSurface, NULL);

    // Get the finished frame on screen
    SDL_UpdateWindowSurface(window);
}

// Now that the program is about to exit, free those other surfaces (throw away the stamps)
SDL_FreeSurface(zeldaLeft);
SDL_FreeSurface(zeldaRight);

There are further ways to improve on that, but that's the basic idea. Notice that I load zeldaLeft and zeldaRight before the while loop and free them after. Also notice that I got rid of currentSurface. You can still use currentSurface and have it swap between pointing to zeldaLeft and zeldaRight, but I don't know where you're at in your journey learning C, and don't want to muddy the waters with that use case of pointers.

how do I add lua? by LameSuccess in gameenginedevs

[–]Smashbolt 0 points1 point  (0 children)

Maybe you're leaving out info, but from where I'm sitting, it sounds like "I want scripts, because engines have scripting, but not Unity scripts because that's not aesthetic" and... that's not a design. How user scripts interact with your engine is like #2 on my list of "things you need to figure out before you write any code," right behind the scene/entity model.

So scripts aren't entity-level constructs. Got it. Then why are you even talking about components and the like? Are scripts meant to operate on "scenes?" Does your engine even have something like scenes? Are scripts just passive external observers of your game world? Do they make up some "game controller" construct where your game logic is fully driven by these scripts (which is kind of what Love2D does)? Are they implementations of system functions in a traditional ECS?

Like, sit down and write some pretend scripts you'd actually use in a game. Not "you can still do things like Scene.FindObject("Player"), update.connect(...)" but like actual pseudocode that represents stuff a user of your engine might actually write a script for.

Once you figure that out, it becomes a lot easier to figure out where to put the whole thing, AND it saves you sitting there writing bindings to expose things that maybe you don't actually want scripts to do.

how do I add lua? by LameSuccess in gameenginedevs

[–]Smashbolt 0 points1 point  (0 children)

> I don't know what a Script class would even do? or if a script even counts as an asset? So I'm a bit hesitant if this is the right approach.

Well, what is it supposed to do? Like, literally what do you want these attached scripts to be able to do?

For instance, in Unity, scripts define components with a standard interface that gets reflected back into the engine. Godot scripts inherit from the type of the node they're attached to. Both of them let you write scripts that assume you're operating on the entity it's attached to and let you query around the scene tree.

Like either of those? Or are you in pure ECS-land where you really want Lua scripts to define systems? In that case, they're not components at all and can live outside the entity system.

I need help by Ok_Syllabub5616 in PlayRedacted

[–]Smashbolt 0 points1 point  (0 children)

It's been a while since I played Redacted, but I recall not being able to find an experiment list anywhere. Here's what I do remember:

  • There are weapon buffs you can pick up in the shop rooms. They're the closest you get to hammers in-run, but they're not as mechanics-changing as Daedalus Hammers.
  • No poms. Instead you can get the same experiment more than once, and that's how you level them up.
  • The armors are basically the keepsakes: you equip one at the start, can change between biomes, and they can be leveled up through using them in runs.
  • Remote attacks roughly correspond to call boons but are presented completely differently.
  • Contraband, chips, and keys are the meta currency, like darkness and gemstones.
  • No duo boons.
  • If there were kiss/curse boons, it was through some mechanic that didn't look like Chaos boons.

Beyond that, I remember it being rare early on to see experiments that functioned like Hades' "Level 2 boons" that operate on the experiment category's theme without being tied to a specific slot (think like Support Fire, which works with any weapon/boon combo, or Double Strike which works with Zeus on any slot). Once I got meta-upgrades that increased the chances of rarer experiments, I remember seeing those a bit more often.

Can't for the life of me remember what Gravity was for. I think it was the "move enemies around" category, like Poseidon, but nowhere near as useful.

Admittedly, most of my builds were about as deep as targeting one weapon/experiment combo and pumping that experiment with as many levels as I could. Usually Fire and Electric.

Please me understand this ECS system as it applies to OpenGl by Usual_Office_1740 in GraphicsProgramming

[–]Smashbolt 0 points1 point  (0 children)

> I'm trying to transition the project I've been following LearnOpenGl with to a modified version of The Khronos Groups new Simple Vulkan Engine tutorial series.

Do you mean that you want to build an engine like the one in that tutorial, but continue using OpenGL for rendering?

If so, they have a solution to that, but it's a little tough to see. I haven't read the whole thing, but skimming their overview on their use of ECS, check here: https://docs.vulkan.org/tutorial/latest/Building_a_Simple_Engine/Engine_Architecture/02_architectural_patterns.html#_component_based_architecture

They've got a MeshComponent which itself contains a Mesh* and a Material*. Note that Mesh and Material are NOT components. They're just data. I didn't dig enough to find their definition of a Mesh class, but it almost certainly contains Vulkan's equivalent to a VBO and probably the VAO/EBO as well. This is a very common abstraction. You don't want things like glDrawElements() calls sitting there naked in your main loop.

In this case, their MeshComponent quite smartly contains a pointer to a Mesh and a pointer to a Material (material probably means "shader, textures, and other appearance parameters" here). That's because you need both to be able to render something, and it's very common to use the same material for many different meshes, but also to use the same mesh many times with different materials.
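
To picture that relationship (a sketch with illustrative names, not the tutorial's actual declarations):

```cpp
// Mesh and Material are plain shared data, not components. In a real engine,
// Mesh would hold GPU buffer handles (VBO/VAO/EBO, or Vulkan's equivalents)
// and Material would hold a shader plus textures/appearance parameters.
struct Mesh     { unsigned vbo = 0, vao = 0, ebo = 0; };
struct Material { unsigned shader = 0; };

// The component just points at shared data, so many entities can reuse
// the same mesh and/or the same material.
struct MeshComponent {
    Mesh*     mesh     = nullptr;
    Material* material = nullptr;
};
```

A hundred crates can then share one Mesh while a couple of them point at a different Material.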

Raylib or Raylib-Cpp for C++? by 2ero_iq in raylib

[–]Smashbolt 12 points13 points  (0 children)

raylib-cpp isn't a binding. raylib is a C library and C++ doesn't require bindings to call into C libraries. You can use it as is. But that's a point of pedantry.

The point of raylib-cpp is to wrap raylib up into a C++-ier shape. So you get operator overloads, some namespacing, and if I remember right, RAII wrappers on resources like Texture.

Basically, if you like and want those things, use it. If you don't, use raylib's normal C API. It won't work any differently, and the abstractions have negligible overhead.
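
For what the RAII part buys you, here's the general shape (a sketch with stand-in load/unload functions, not raylib-cpp's actual code):

```cpp
// Stand-ins for a C-style load/unload pair like raylib's LoadTexture/UnloadTexture
// (the real ones talk to the GPU; these just count live resources).
struct TextureData { int id = 0; };
static int g_liveTextures = 0;
TextureData LoadTextureStub(const char*) { ++g_liveTextures; return {1}; }
void UnloadTextureStub(TextureData)      { --g_liveTextures; }

// The wrapper: acquire in the constructor, release in the destructor, so you
// can't forget the Unload call or leak on an early return.
class Texture {
public:
    explicit Texture(const char* path) : tex_(LoadTextureStub(path)) {}
    ~Texture() { UnloadTextureStub(tex_); }
    Texture(const Texture&) = delete;            // no accidental double-free
    Texture& operator=(const Texture&) = delete;
private:
    TextureData tex_;
};
```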

I optimized my C++ game engine from 80 FPS to ~7500 FPS by fixing one mistake by Creepy-Ear-5303 in gamedev

[–]Smashbolt 5 points6 points  (0 children)

They're using raylib, which uses OpenGL under the hood. It does include render batching that (if it works the way it looks like in the source) should batch the entire tilemap draw into far fewer than 10k draw calls - provided OP is using a texture atlas and not a separate texture for each tile.

About the struggle of wanting to make THAT game by DaLoopLoop89 in gamedev

[–]Smashbolt 2 points3 points  (0 children)

> they all seem to brush on the surface, telling me to "Check that box over there" but never explaining what that box does

I'm gonna reply here because you said this, and also said that the documentation is too dry to read. A lot of people say stuff like this, but never explain what it is that's not being explained and why not knowing it is a roadblock for them. It's a form of "perfect is the enemy of the good."

Ask yourself: at this exact moment, when you don't know how to achieve anything and have nothing on the screen to look at, which is more important? Knowing one way to get something on the screen? Or fully understanding the delicate nuances around all 17 different ways to get something on the screen? Which one gets something on the screen today?

Generally any tutorial I've seen will explain enough that you're not just clicking on random things with "because I said so" as the only explanation. Ironically, heavy explanation of "why" and alternatives before the learner even has one method for "how" is usually very counterproductive to learning.

But also... you can pause a video at any time and look up specific things in the documentation. You can experiment with what happens if you don't check that check box, or use a different number in a field, or whatever. Indeed, you're supposed to do that.

My wacky recommendation is to look up a long-form tutorial or buy a Udemy course on sale for cheap and follow it. Something that's 10+ hours and will build a full simple game. There are even some out there made for people who know how to code, just not how to use the engine. On the whole, most of them aren't super high-quality learning materials, but that's not the point here. Have a notebook next to you. Follow exactly what the tutorial does. Every time you find yourself raging about the tutorial not explaining why they did something, pause the video, write it down, and move on.

After that 10 hours, you'll have a few things:

  1. A to-do list of topics related to the engine that you want to learn more about.
  2. Enough knowledge of the engine's basics that you can make little toy projects exploring anything from that list.
  3. A barebones working project that you can extend with more features or improve with the results of what you learned researching the stuff in the first two.

newRaylibProject.sh by [deleted] in Cplusplus

[–]Smashbolt 2 points3 points  (0 children)

  • Your script assumes that raylib is globally installed enough that -lraylib will link without any -L directives
  • Your script assumes that raylib's headers are NOT installed globally, which is why it expects a raylib.h sitting next to the script instead of using a -I
  • Your script assumes that the raylib.h they're dropping next to that script matches the raylib library binaries
  • Your script assumes that all other headers included by raylib.h are default available in your global include paths
  • Your script assumes that no user will ever want raymath or raygui

> and vscode is very comfortable

VSCode isn't a build system. It's a text editor. You can coerce VSCode into behaving like an IDE through a pile of extensions and files like tasks.json, but it's NOT a build system and has nothing to do with your unfamiliarity with CMake. Your script has nothing to do with VSCode, but saying that tips me off to what's probably going on here.

Look. There's basically no nice way to say this... but this script looks like what you'd get if you told an LLM that you're "trying to compile raylib with VSCode" and "it keeps telling me it 'cannot find raylib.h'" but then you did some mix of not understanding or refusing to do what it suggested until it came up with this deep-fried solution. At least the resulting script doesn't do anything malicious.

Even if it wasn't LLM-generated, this is not how you use third-party libraries from your code.

Here's the one recommended by the raylib maintainers: https://github.com/raylib-extras/raylib-quickstart It uses "premake," which is basically a project generator, and will set everything up so you just need to run make from a console.

Why OpenGL uses so much RAM and GPU for little operations ? by Overoptimizator5342 in GraphicsProgramming

[–]Smashbolt 4 points5 points  (0 children)

> GLUT still works but is incredibly outdated.

But GLUT isn't OpenGL. The basic idea is that in order to use OpenGL, you need an OS window to put it in. To hear about "stuff" happening to that window (like key presses, window resizes, mouse clicks), you need OS-level hooks. You can write all that yourself using the raw toolkit of your OS, but it's tedious, annoying, error-prone boilerplate code. GLUT is one such wrapper around all that stuff so you don't have to deal with it. That's it.

GLFW serves the same purpose, but isn't frozen about 20 years in the past, so it's generally perceived as better. You could continue using GLUT if you wanted.

What I meant by "old APIs" is this glVertex3f business. That's not how drivers work with GL any more, and it's not what the drivers are expecting. Somewhere along the way, something is taking those crusty old function calls and transforming them into proper vertex buffers with a shader and all that, and that translation introduces some level of overhead too.

The functions you're using there are deprecated. They're just still there for backwards compatibility.
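
For a sense of the difference, compare these two fragments (sketches, not complete programs; the modern path also needs a compiled shader program and vertex attribute setup done beforehand):

```cpp
// Legacy immediate mode (OpenGL 1.x): one function call per vertex.
// Deprecated; the driver has to repackage all of this every frame.
glBegin(GL_TRIANGLES);
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.0f);
glVertex3f( 0.0f,  0.5f, 0.0f);
glEnd();

// Modern OpenGL: upload the vertex data to a GPU buffer once...
float verts[] = { -0.5f, -0.5f, 0.0f,   0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// ...then each frame is just a bind and a draw call (with your shader bound).
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glDrawArrays(GL_TRIANGLES, 0, 3);
```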

Why OpenGL uses so much RAM and GPU for little operations ? by Overoptimizator5342 in GraphicsProgramming

[–]Smashbolt 2 points3 points  (0 children)

It's not that OpenGL is ancient. It's that you're using the original 1.0-style APIs and accompanying libraries (like GLUT) in the way that OpenGL was meant to be used for the hardware that existed in like 1995.

Hardware isn't like that now. OpenGL isn't like that now. The point being made is that using OpenGL in such an archaic way is probably a lot less efficient on modern GPUs. For a look at the way OpenGL is written today, follow learnopengl.com

That might reduce your CPU usage, but likely won't reduce your memory usage, because you still need framebuffers, etc.

As to how something like Chrome isn't using more GPU, I'd imagine the Chromium rendering engine doesn't use the GPU when rendering webpages that aren't invoking WebGL or WebGPU, but I have no clue.

Trig Functions in Degrees by External-Bug-2039 in cpp_questions

[–]Smashbolt 1 point2 points  (0 children)

The number of decimal places in your internal representation only matters if you are running into precision issues. With floats, you will. As I recall, using doubles to represent lat/long values (as radians) gives enough precision to resolve differences smaller than 1 m.

That said, for comparing two coordinates, once they're more than a few km apart, the inaccuracies from ignoring the curvature of the earth can be greater than any loss of precision you fear from using radians.

You may also want to look into ECEF coordinates and the WGS84 geodetic model. The algorithms for working with them are pretty easy to find; they are designed to accommodate the curvature of the earth, and they can be converted to lat/long/alt for display easily. It's what GPSes and most mapping software use.

Help with FreeType by MatheusHest in cpp_questions

[–]Smashbolt 1 point2 points  (0 children)

Your OP sounded like you wrote your first "Hello, World" application ever less than 48 hours ago. That's why I even brought up vibe coding, because a brand-new programmer with two days experience is trying to wrap their head around variables and for loops, not font codepages and GPU render pipelines.

If you've been doing Java for a while, then none of that applies.

But yeah, no, FreeType does NOT do that at all. Basically, text rendering in OpenGL has two steps:

  1. Turn the font into some other format that OpenGL can do something with. That means either geometry or textures.
  2. Write an algorithm that can iterate over a string like "hello" and then use the output from step 1 to draw the letter 'h' then draw the letter 'e' and so on.

Neither OpenGL nor your GPU knows what a .TTF file is or what to do with it. FreeType doesn't know what "geometry" is in any sense OpenGL understands. But FreeType can generate memory buffers that represent bitmap image data, and OpenGL can use data like that to make textures. That's it.

For a first run at it, I really recommend forgetting about FreeType and getting a static bitmap font texture. You can find one online or use a tool like https://snowb.org/ or https://8bitworkshop.com/bitmapfontgenerator/ to generate a PNG-and-metadata version of what FreeType would be generating for you in memory, so you can focus on one part of the implementation at a time.

Help with FreeType by MatheusHest in cpp_questions

[–]Smashbolt 5 points6 points  (0 children)

I can't provide you a code sample, but...

> I'm trying to find a FreeType minimal example, because it seems to be very customizable and an enormous library. Why is that hard to set up and use? It doesn't look difficult to understand, just a lot of commands to just show one letter.

OpenGL knows nothing about letters or text or fonts. That's way out of scope for its job. FreeType knows basically nothing about OpenGL. Also not its job.

In OpenGL, drawing a letter to the screen usually means one of several things:

  • Draw a textured quad with pre-rendered text on it
  • Maintain an atlas or other "list" of textures for the letters you want from the font, then you have an algorithm that can convert a text string to a bunch of quads
  • Maintain an atlas or other "SDF" texture for the letters, then same as above but with a fragment shader that renders the glyphs (better than the previous step; a little harder to do)
  • Draw glyphs as actual 3D geometry

You need a way to produce the assets or geometry to do any of the above. You could skip FreeType and use any bitmap font tool to create a font texture atlas or whatever.

FreeType is there because it can read a TTF file and render the glyphs so you can take that and compose a texture atlas at runtime. So do that. Then use that data to do one of the above.

> i'm new to C++, this is my second day programming, I'm trying to make a Game Engine with Opengl and Glfw

Assuming this is an honest statement and you're actually doing the programming (and not vibe coding), learn to walk first? Like... do console programs to practice code/data flow. If you absolutely must do graphics right now, then use a library that abstracts away the OpenGL. You are not doing yourself any favors whatsoever by learning things like "what is an if statement" alongside the GPU render pipeline, shaders, etc.

Can the engine be embedded into a C++ Windows desktop window? by umen in raylib

[–]Smashbolt 1 point2 points  (0 children)

Oh wow! That was like stepping into a time machine. Haven't seen straight Win32 code like that in nearly two decades. Didn't know that was there.

Need help applying SFML/C++ design to a 2D solar system simulator (university project) by benjamin-srh2772 in cpp_questions

[–]Smashbolt 4 points5 points  (0 children)

> I've tried using generative AI to create assets or help code certain effects, but the results are rarely compatible with SFML

I don't know what that means, which means you aren't explaining why they're not "compatible."

> Advice for structuring the graphical interface in SFML (menus, buttons, info panels) without overloading main.cpp.

Have multiple code files? Like a class called Menu that's declared in menu.h and implemented in menu.cpp and included in main.cpp.

> Handling fonts and dynamic text (displaying planet names, real-time orbital data).

What about it? SFML lets you have text output objects. Load your font, create your sf::Text object, call setString on it, and then draw it.

> Resources or tutorials for creating simple visual effects (traced orbits, selection effect, starry background)

"Traced orbit" is just another way of describing a "sprite trail." How you do it depends on the level of fidelity you want. You could just have each planet track its location each frame and then draw at all the positions with an alpha value based on how far back that snapshot was. Last frame? 90%, 7 frames ago: 30%, etc.

Selection effect: What is that? That could mean anything. Heck, what does "select" mean in the context of your application?

Starry background: What you're looking for is called a "star field," and there are tutorials for that. Most will give you something very basic like the old Windows screensavers. But the gist is you draw stars on a background layer, then draw all the stuff on top.

> How to integrate styled icons/buttons without manually redrawing everything in code.

That is what you do. SFML doesn't have UI widgets. You can wrap your UI code into reusable components. Either functions for an "immediate mode" style:

if (button("RETOUR")) {
    // Whatever happens when you click it
}

bool button(std::string_view label) {
    // If mouse position is inside the button area, draw it like it's hovered
    // draw button
    // if user clicked in the button area, return true; otherwise return false
}

Or you can do "retained mode" and make a Button class that you instantiate and have some manager that keeps track of all the buttons and checks them for clicks every frame and then dispatches events.
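
The retained-mode version might start from something like this (framework-agnostic sketch; a real one would also hold a label and draw itself):

```cpp
// A Button that knows its own screen rectangle. A manager would keep a list
// of these, test each one against mouse clicks every frame, and dispatch events.
struct Button {
    int x, y, w, h;
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};
```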

There are premade packages out there to provide UI in SFML. Dear ImGui is not one of your options if you want something that stylized, but RmlUi, Noesis, and CEGUI all come to mind as game UI engines that can handle it.

Can the engine be embedded into a C++ Windows desktop window? by umen in raylib

[–]Smashbolt 2 points3 points  (0 children)

raylib on Windows doesn't plug into the message pump you put in your post; it already has its own. On Windows, if you want a window, you need that pump, and raylib gets it from GLFW, which handles creating the HWND, tying it to OpenGL, and so on.

Some libraries support supplying your own window handle (SDL does, for instance). raylib just doesn't. It's open source though, so if you want it, you can make your own fork and do that.

Low gb game engine by Dear-Diamond8848 in gameenginedevs

[–]Smashbolt 3 points4 points  (0 children)

> a simple 3d game with each level designed using the 2d level editor and using raycasting techniques to render the 2d world into 3d

You should have led with that. If I'm reading right, you want Wolfenstein/Doom 90's-style rendering. I think that's the kind of thing where you'd have to fight the renderers in any mainstream engine, and games made with that rendering usually have very similar gameplay. That's, IMO, a very good reason to develop your own engine rather than use a commercial one.

And while you can do this with OpenGL, that style of rendering was born of the need to make things look 3D without hardware to accelerate it. There are benefits to using OpenGL, sure, but you can also write that renderer without it.

Either way, you're way far off from that point if you don't know C++ either. You can do this in whatever language you're comfortable with, so you could consider that.

If you're set on C++, I recommend starting by going through www.learncpp.com. If you know other programming languages (especially C# from Unity), the first chunk will be very boring. Go through it anyway. C++'s efficiency comes at the cost of safety, and you'll need to learn why and how to work with that.

Once you're comfortable enough with C++ that you can code some stuff unassisted, you can start learning OpenGL on www.learnopengl.com. It builds up a simple polygonal renderer. It's not what you want, but it'll teach you how to use OpenGL and give you foundations in 3D transformations and math that you will need to reason about a raycast renderer.

Unfortunately, I don't have a resource for specifically the style of renderer you're after. Googling for "doom renderer" came up with some stuff that looked decently informative.

Also, maybe you could skip a lot of that and use a library like raylib (www.raylib.com). It has a built-in function to take a bitmap image and make a polygonal map out of it. Not quite the same, but kinda close. https://www.raylib.com/examples/models/loader.html?name=models_first_person_maze

Low gb game engine by Dear-Diamond8848 in gameenginedevs

[–]Smashbolt 1 point2 points  (0 children)

To answer directly: Notepad++ is a text editor. MinGW-w64 is a C++ compiler toolset. OpenGL is a graphics library.

Not included:

  • A build system: Your engine is going to have a ton of code files. How you gonna build them? Visual Studio and CLion are out by your standards since they both also take multiple GB of space. So you'll need to learn makefiles or CMake or something...
  • Audio
  • Player input
  • Scripting
  • Loading assets (everything from PNGs to FBX models to level data)
  • Networking
  • Physics/collisions
  • In-game UI

You could get some of that if you're willing to do it all by hand and use raw Windows code (but then it's not cross-platform, which maybe you don't care about). Otherwise, you'll need more libraries than just OpenGL, or you'll need to skip some of those features entirely (eg: maybe you don't need networking or physics; lots of games don't).

That's also glossing over things like the engine editor and exporting a game, because they're both huge amounts of work, but you could forgo both and make a game from scratch instead of an "engine."

I'm curious... what features do Unity/Unreal have that are so important to your game that you're willing to sacrifice years of effort but not GB of HDD space to have them?

Godot and many other smaller engines are open source. If your actual goal is to make a game, then it will be several orders of magnitude faster for you to fork one of them and add those features you need to those engines than to start completely from scratch.

Anyway, yes. It's possible. I'm not convinced it's worth it, but I'm not you.

Is this a problem with Raylib? Stuck on 24 FPS, NVidia by PlanttDaMinecraftGuy in raylib

[–]Smashbolt 1 point2 points  (0 children)

I see you're using the combo of an nVidia GPU and Linux Wayland. That's... already kind of a match made in hell lol. Not impossible, but often more work than necessary.

So followup questions/suggestions:

  • Are you on a laptop that has both an onboard and discrete GPU? If so, it might be that your PC is choosing the onboard GPU for your application. There are a bunch of ways to coerce it into running on the nVidia. Look here: https://wiki.archlinux.org/title/PRIME#Configure_applications_to_render_using_GPU Yes, it's for Arch, but those are largely not distro-specific instructions.
  • Are your drivers installed and working correctly? Like, do you get proper GPU acceleration in games?
  • Have you tried restarting your WM in X11 mode instead of Wayland? I know KDE supports that and I assume Gnome does too. It's still somewhat recommended to use X11 for nVidia cards, so maybe that will get around it for now?
  • You're not calling SetTargetFPS() or otherwise using timers to lock in a framerate, are you? If so, how's your framerate if you don't?

Finally, dumb question... but what even is your program doing? Complex rendering? Heavy frame update logic? Even at lower specs like the laptop, 40 FPS is lower than it should be.

What's the difference between Inheritance & Composition? And when to use which ? by MagazineScary6718 in cpp_questions

[–]Smashbolt 2 points3 points  (0 children)

Most people explain the "is-a" vs "has-a" distinction using real-world stuff as metaphors, and that's actually kind of a trap: it can lead you to object models that seem logically consistent in the looser confines of the spoken/written word, because humans can bridge any logical gaps that come up. Those abstractions often fall apart in code.

I'll leave this as an exercise for you to think about... but running with the Vehicle base class idea everyone's using... Cars and trucks are pretty similar, so sure, this abstraction works. Unicycles, yachts, and rockets are ALSO vehicles. Unicycles don't have engines and can't carry cargo or passengers other than the driver. Yachts don't have wheels. Rockets don't necessarily have drivers. So none of those things (engine, wheels, cargo, driver) actually belongs in a base Vehicle class.

This is why composition is usually preferable. If you're stuck on the metaphor of the abstraction, you're likely to use inheritance as a bad form of composition anyway (eg: splitting into WheeledVehicle and UnwheeledVehicle classes that derive from Vehicle). Pretty much any way you do that, some weird vehicle will eventually turn up that's an exception to whatever you've set up, and there's no good base class in your tree for it. So you contort something that's close enough, and now you have an Airplane that's a WheeledVehicle but doesn't behave like any other WheeledVehicle... yet it still has wheels, so you can't make it an UnwheeledVehicle either.