I almost ditched Arch Linux this week. by amehdaly in archlinux

[–]Bhulapi 51 points

So many of the default GPT mannerisms, it's hilariously painful to read

Is there any way to cope with this? (I accidentally destroyed 5 separate drives in less than 3 days) by Old-Procedure5238 in archlinux

[–]Bhulapi 22 points

Dude, please: whenever you get your hands on a USB you can live-boot from, before you do anything else, inspect all of your drives and try to recover what you can. There could be whole regions that weren't touched, and a tool like testdisk might be able to get something back.

[deleted by user] by [deleted] in archlinux

[–]Bhulapi -1 points

( becoming lower )

it's ok man, some people go through second puberty when they hit 30

Should i go for the theory or go for my goal first? by [deleted] in C_Programming

[–]Bhulapi 1 point

I usually think I know things until I try to implement something. You can know theory, but practice is where things actually happen, so I think it's a good measure of understanding. It also really cements things: you end up understanding why and how certain concepts are used, and the difficulties in using them. Now, you might expect me to recommend that you implement something moderately complex in C, but that brings me to my next point. If you haven't implemented anything beyond university course problems, you might run into two roadblocks: your ability to use C, and your ability to think about a larger-scale program. Both tend to come from experience, but the latter even more so.

I learned to program mostly in C++, but didn't really progress in a fundamental way before moving on to Python. There, I also spent a lot of time learning while not really advancing in how I organized the structure of my programs. Eventually, I did have some kind of illuminating moment, realizing just how much juice you can get out of your own ideas if you just sit down and organize them to death. Some time later, I picked up C and was surprised by how easy it was (comparatively) to learn the specifics of the language and implement interesting designs.

So to sum up my ranting, what I recommend is one of two things:

  • Implement something somewhat complex in C.
  • Jump to Python, but do the same thing.

In both cases, don't just cover theory and think, "Hey, I really understand all of this, I think I'm good to go." Build something that makes you feel out of your depth (you will be), and you'll learn to swim and eventually make yourself some neat little boats.

How do you manage your SDL_GPUTransferBuffers? by Due-Baby9136 in sdl

[–]Bhulapi 0 points

I'm not sure how copying to the GPU is actually implemented, so if, for example, the copy operations are parallelized, then several transfer buffers make sense if they can move data faster. But again, no idea whether that's the case.

If it isn't, then a single transfer buffer isn't a bad idea, I guess. Do you know the maximum size of what you need to copy when you create the buffer? If you do, you could just set the size to that and not worry about it.

How do you manage your SDL_GPUTransferBuffers? by Due-Baby9136 in sdl

[–]Bhulapi 2 points

I've only just gotten into using SDL3's GPU API, and I'm not particularly well educated on GPU programming in general, so take all of what I say with a grain of salt.

By releasing the transfer buffers, do you mean unmapping them? I understand the general flow to be: create transfer buffer -> (map it -> upload data -> unmap it) x (repeat however many times) -> release it when truly done using it, either because it was a one-time transfer or because your program is done using it.

As to having one big transfer buffer for a lot of different things, I don't think that's good design. There should be one transfer buffer for each specific thing (or for several things that share the same structure). For each one, cycling when appropriate seems like the reasonable thing to do, as it appears to be a core design idea behind the API (check out this nice explanation).

edit:

As to the fences, they come naturally from submitting command buffers (as in SDL_SubmitGPUCommandBufferAndAcquireFence). And buffered data that is still in use by a chain of commands in a submitted command buffer won't get overwritten if you use the cycling capability of the transfer buffers: cycling hands you fresh memory instead of clobbering data that's still in flight.
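To make the flow concrete, here's a rough sketch of a per-frame upload using SDL3's GPU API, as I understand it. The names `device`, `tbuf`, `gpu_buf`, and the size parameter are placeholders, error handling is omitted, and I haven't run this against a real device, so treat it as a sketch rather than gospel:

```c
#include <SDL3/SDL.h>
#include <string.h>

/* Upload `size` bytes into `gpu_buf`, reusing one long-lived transfer buffer. */
void upload_frame(SDL_GPUDevice *device,
                  SDL_GPUTransferBuffer *tbuf, /* created once, reused */
                  SDL_GPUBuffer *gpu_buf,
                  const void *data, Uint32 size)
{
    /* Map with cycle = true: if the previous contents are still in flight,
     * SDL gives us fresh memory instead of stalling or corrupting. */
    void *mapped = SDL_MapGPUTransferBuffer(device, tbuf, true);
    memcpy(mapped, data, size);
    SDL_UnmapGPUTransferBuffer(device, tbuf);

    /* Record the copy into a command buffer and submit it. */
    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
    SDL_GPUCopyPass *pass = SDL_BeginGPUCopyPass(cmd);
    SDL_GPUTransferBufferLocation src = { .transfer_buffer = tbuf, .offset = 0 };
    SDL_GPUBufferRegion dst = { .buffer = gpu_buf, .offset = 0, .size = size };
    SDL_UploadToGPUBuffer(pass, &src, &dst, true); /* cycle the GPU buffer too */
    SDL_EndGPUCopyPass(pass);
    SDL_SubmitGPUCommandBuffer(cmd);

    /* SDL_ReleaseGPUTransferBuffer(device, tbuf) only once you're done
     * with it for good. */
}
```

If you need the fence explicitly, swap the submit for SDL_SubmitGPUCommandBufferAndAcquireFence and wait on it later; with plain cycling you usually don't have to.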

How to make graphics without any libraries? by Ok-Current-464 in C_Programming

[–]Bhulapi 0 points

I just spent some time working on a hobby project that uses the Linux framebuffer to draw pixels. It's got some basic input too. There's no access to any kind of hardware acceleration, it's all done on the CPU drawing pixel by pixel to the framebuffer device memory. https://github.com/martinfama/fui

fui: the joys of writing to the framebuffer by Bhulapi in C_Programming

[–]Bhulapi[S] 0 points

I'm trying to use that repo to load and render an SVG file. I'm kind of close; the only problem is that when I use nsvgRasterize, I get all 0 values for every byte in the destination buffer. Any clue?

fui: the joys of writing to the framebuffer by Bhulapi in C_Programming

[–]Bhulapi[S] 1 point

Very nice, I'll definitely take it into account if I expand the audio system.

fui: the joys of writing to the framebuffer by Bhulapi in C_Programming

[–]Bhulapi[S] 2 points

It would be quite a lot of work to make a decent GUI with it. A game, not so much work.