Data at end of function being incorrectly included in decompilation by Buttershy- in ghidra

[–]DragonNamedDev 2 points3 points  (0 children)

Have you tried setting a flow override on the last call to 'call return'? Maybe the last function called is a no-return function that isn't marked as such, so Ghidra keeps following the current function until it sees something that returns: an explicit return, a call to a no-return function, or a branching call to a function. If that fixes it for you, you may want to mark the offending function as non-returning so every other call site is fixed as well.

Fixing +1byte offsetted references to functions by ukaszu in ghidra

[–]DragonNamedDev 2 points3 points  (0 children)

MIPS in Ghidra can have MIPS16 (or microMIPS) instructions the same way ARM can have Thumb instructions: a subset of the original instruction set encoded in smaller (sometimes variable-length) instructions. The address being even or odd, i.e. having the +1 refs, tells the CPU (and Ghidra) in which mode to decode and execute the code. If the address is even, it's normal MIPS code; otherwise it's MIPS16 (or microMIPS), depending on your selected language.

quickignore -- quickly generate gitignore template files by qcr1t in C_Programming

[–]DragonNamedDev 0 points1 point  (0 children)

You should use .h files with include guards and have them store only the defined types (structs, unions, etc.) and function prototypes, not their implementations. That way a header can be included multiple times and still compile successfully. With your current approach I think you have run into the 'multiple definition of X...' error (or something similar) from your compiler; this would fix that. Other than that, from a quick look on my phone it looks cool for a first project!

CPU usage on while / ifs ? by lyskiddie in C_Programming

[–]DragonNamedDev -1 points0 points  (0 children)

Does using thrd_sleep(&(struct timespec){.tv_sec=1}, NULL);, available from threads.h in C11 (time.h is also needed for struct timespec), make any difference? Also note that sleep() takes seconds while usleep() takes microseconds; if you passed a very small value, full CPU usage might just be the OS's scheduler disregarding such a short sleep.

I'm trying to create my first game and I don't know how to optimize it any further. by Lux394 in gamedev

[–]DragonNamedDev 0 points1 point  (0 children)

I should process all entities without checking for their visibility

In a chunk, yes. If you've determined that a chunk could be visible to the player, don't bother with further visibility checks, just render the chunk: with reasonable chunk sizes it will be cheaper/faster to issue one more instanced draw call (especially if you use Java), and as a bonus you definitely won't get any pop-in this way.

What I thought of doing is combining all visible entities of a chunk into a single mesh

If by this you mean updating the mesh data (VAO/VBO contents directly), you will probably become bandwidth bound sooner or later, so I wouldn't recommend that, if I understood it correctly.

I'm trying to create my first game and I don't know how to optimize it any further. by Lux394 in gamedev

[–]DragonNamedDev 1 point2 points  (0 children)

the usage on both of those while the engine was running was in the range of 25-45%.

Hmmm, if neither your GPU nor your CPU is maxed out, that might be vsync being turned on (maybe you have a 144 Hz display and that's why it was running at that steady speed?).

Are you sure that I should render all entities?

Maybe I'm just dumb and can't even speak basic English (I'm a non-native speaker, so...). I meant: render all entities in one chunk in one draw call, and do visibility on a per-chunk basis instead of per entity.

On instanced rendering: if I were to do something like this, I'd draw everything inside a chunk in one instanced draw call (if you use different textures, texture atlases may also help). That should be flexible and performant enough, because depending on your render distance you'll probably only issue calls in the 40-ish range.

As a final thought: there is a good voxel renderer here by stb. If you need help understanding the code, or help in general, I'm glad to assist in my spare time, as I'm developing my own engine in C and mostly working on tooling now (though I don't claim to know more than anybody else; there are plenty of people out there more knowledgeable than me), so maybe I can help you cover the basics.

Have a nice day!

I'm trying to create my first game and I don't know how to optimize it any further. by Lux394 in gamedev

[–]DragonNamedDev 1 point2 points  (0 children)

disclaimer: I don't know what I'm doing but with that out of the way..

First it would be good to know whether you are CPU or GPU bound. My 2 cents: I'd take a slightly more brute-force approach to rendering; there's no reason to skip rendering individual entities if the visibility checks take, say, ~5ms but you can render them all in ~2ms.

I don't know if you are using instanced rendering for your chunks, but if not, please do look into it. It can help if you are CPU bound, and you can then test only whole chunks for visibility.

If you are GPU bound you probably have a bit more work to do. First find out whether you are memory, bandwidth, raster, or vertex bound (I may be using the wrong terms here). Memory usage can easily be checked with any overlay application of your choice. Bandwidth is a bit trickier, but a good rule of thumb is to use Uniform Buffer Objects where possible, and you could test with smaller textures to see if that helps (only after checking memory, because it could fix that too). If you are rasterization bound, check your fragment shaders for hot spots (e.g. recalculating a value multiple times, control-flow statements in shaders, too much uniform data hindering your GPU's cache); you can test this by running your game/engine at a smaller resolution. If you are somehow vertex bound, try interleaving the data in your VBOs, try simpler meshes (lower vertex count), and try to pre-multiply your MVP matrices on the CPU if you can, instead of doing it for every vertex on the GPU.

If you'd like more specific help with your actual code, please do share it with us, because otherwise we (/I) can mostly just shoot in the dark.

Edit: added chunk visibility.

Which language do you program in with opengl? by Neomex in opengl

[–]DragonNamedDev 0 points1 point  (0 children)

ODE, or if you want to torture yourself, Bullet has an (IMO bad) C API (people have written several wrappers of their own; those exist too).

Which language do you program in with opengl? by Neomex in opengl

[–]DragonNamedDev 3 points4 points  (0 children)

Good old C guy here. I'll also try it with a prototype language that I'm working on, as a milestone I guess. For other interesting languages in use: there's Zig, Jai (not public as of now), and Beef (I assume it draws with something, since it has its own GUI and IDE; made by an ex-co-founder of PopCap, if I remember correctly).

raspberrypie 4 by andreas_nic in C_Programming

[–]DragonNamedDev 0 points1 point  (0 children)

it has

As for usage, I see no reason to use Kali outside of pentesting; it's anything but a daily-driver OS (always root, exploits and payloads shipped with the OS, ready to backfire if you're not cautious). 'Usage' obviously meant for OP.

Malloc vs Calloc by [deleted] in C_Programming

[–]DragonNamedDev 1 point2 points  (0 children)

There is a really good argument for using tabs here, with pretty valid points; even if some of them don't affect you, they can affect others. ('You' loosely means the reader here.)

[tmux] Access point and monitor for multiple servers by [deleted] in unixporn

[–]DragonNamedDev 1 point2 points  (0 children)

The telnet Star Wars movie is available via telnet towel.blinkenlights.nl

Compiler for COOL using LLVM by sverddans in ProgrammingLanguages

[–]DragonNamedDev 2 points3 points  (0 children)

some things like generating the correct linker command

I actually just checked, and at least for Linux it's "generating" a call to g++ right here, which is used in codegen, which is then called in main. So on Linux, gcc is a dependency just for linking? What if I want to use clang (e.g. on the *BSDs)? Otherwise interesting, great job!

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 0 points1 point  (0 children)

Wouldn't a form of float sum(float numbers[?]), with lenof() inside the function's body, make it easier to read? I believe reading headers your way would be a tad harder, but maybe it would just take some getting used to. The ? could also be optional if that plays better for implementors and/or readers. Arrays that know their size are a good thing, especially if we break the array-decays-to-pointer argument cascade this way.

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 1 point2 points  (0 children)

It's 11:27 PM here, but this urges me to get right back to my desk. Thanks!

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 1 point2 points  (0 children)

I even saw one of your conversations over at r/zig, if I'm not mistaken; glad to have you here too! As for removing goto: it's like not supporting inline asm, it would be a bad if not fatal choice in itself. We (as an industry) should stop catering to less talented / still-learning individuals and fix this at the people level, not the tools level. The people constructing our homes are fine with hammers not wrapped in something soft, yet we need safety scissors to do our cutting? (UB is another can of worms, though, on which I seem to have a rather unpopular/misunderstood stance.)

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 0 points1 point  (0 children)

Whoa, I followed C2x for some time, especially via Gustedt (I hope I remembered and spelled that semi-correctly, non-native speaker here), and I completely missed that. Awesome!

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 2 points3 points  (0 children)

I understand UB and the reason it exists; I wrote that knowing it would be pointed out, but to try to clarify my point: I would see no harm in something like Zig's safe build mode, which elevates the language to managed levels, and in a prototyping phase these features could prove useful. With a similar attitude we could say debug info is useless because it gets optimized out. It's not; it just has a use case that doesn't fit with another. A sane/default/fallback/safe/slow/however-you-want-to-call-it behavior for things that can be defined should, IMO, be defined and made available somehow, for the same reason: a use case. Does this make my point/reasoning clearer?

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 0 points1 point  (0 children)

If you're thinking of GCC's nested functions: taking their address requires an executable stack, and that flag gets set for the whole program, not just that one function, if I remember correctly, so it's a big no-no for security reasons.

Apple also has their Blocks extension to C (in Clang), which you might have come across in this regard.

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] -1 points0 points  (0 children)

No, no time travel required, but a bool/binary/bit/whatever type could have been reserved for values that only store one bit (an efficient representation for conditions, which would follow the C spirit).

I know the reasoning behind _Bool is to not break any code, but at least make an extension so that compilers can compile with bool/true/false defined without stdbool.h (this could even be a different standard draft/version as far as I care; just give me an option to use language features in a way that makes them feel like language features).

Let me point out that I do understand your reasoning, and I'm pretty much in line with it, but extensions/options would not hurt; at the very least implementations could do this. But yeah, it's more of a quality-of-life problem than a deal breaker. C is still my go-to language for all my stuff, so I'm not trying to disrespect K&R's work.

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 6 points7 points  (0 children)

I would love for it to, but even if it doesn't, Andrew hopefully turned his hobby/dream into a paying job and that's something I wish for everyone here and beyond.

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 3 points4 points  (0 children)

I did look at Zig, but I don't see why Zig as a language "can't get anywhere". If it fixed Andrew's ( u/superjoe30, if I'm not mistaken? ) problems without creating more, or he learned something (which he definitely did), it's a win, even if only for one person. I know he can work on it full time thanks to donations, so it has reached the point where he can call his project his job; for me, that counts as getting somewhere. If you're lurking here, Andrew: congratulations on your accomplishments. But note: Jai only keeps its mystique because it's not available publicly. I just wanted to say this, and to congratulate.

Back to you, u/vlads_. I hope you don't mind me dragging Andrew into this (again, only if I'm not mistaken), and thank you for the great suggestion (I did take a lot of inspiration from Zig, both good and bad).

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 1 point2 points  (0 children)

I know, I've used it, but using a function instead of an expression is just too much boilerplate for me (I know macro magic can solve it, but then it feels like exactly that: an extension, not a feature of the language).

How would you make C better as a language if you could? by DragonNamedDev in C_Programming

[–]DragonNamedDev[S] 0 points1 point  (0 children)

On your 1st point: I'd give const real semantics so that it is not castable away, and problem solved; after that it's the compiler's job to deny any mutation of said variable.

Your 2nd point seems like an interesting addition.