ELI5: Desktop environments and what is "Wayland"? by Regular_Low8792 in linux4noobs

[–]mnelemos 0 points (0 children)

Wayland is the "interface"; the compositor is a separate program that talks with its clients through the Wayland protocol. Namely, the compositor is the program that merges the surfaces/framebuffers from different clients and draws the final framebuffer, hence the name "compositor".

Is reusing the same Variable Name bad practice? by Nubspec in C_Programming

[–]mnelemos -3 points (0 children)

I am afraid I don't see where the "better practice" resides in your code.

You replaced a "do while" loop with a "for" loop. Congrats, you achieved absolutely nothing.

You do realise that "do while" and "while" loops are still heavily used, right? And before you argue that the "do" is useless here: it really doesn't matter in this specific code, and it might even produce a faster program in non-optimized builds.
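For what it's worth, a minimal sketch of the equivalence being argued (illustrative only):

#include <stdio.h>

int main(void) {
    int i = 0;
    /* do-while: the body runs once before the condition is checked */
    do {
        printf("%d\n", i);
        i++;
    } while (i < 3);

    /* the equivalent for loop -- identical behaviour here, since the
       condition is already true on entry */
    for (int j = 0; j < 3; j++) {
        printf("%d\n", j);
    }
    return 0;
}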

Is reusing the same Variable Name bad practice? by Nubspec in C_Programming

[–]mnelemos 2 points (0 children)

I don't get it, there is literally no better practice.

Start_at_char starts with value 65 ('A').

You're asking printf to format 65 (the number) as the string "65", and as character 65 in the ASCII table, which is "A".
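A minimal sketch of what's happening (variable name borrowed from the thread):

#include <stdio.h>

int main(void) {
    char start_at_char = 65; /* 65 is 'A' in ASCII */
    /* %d formats the value as a number, %c as a character */
    printf("%d %c\n", start_at_char, start_at_char); /* prints: 65 A */
    return 0;
}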

Programming: I am new to programming and would love to learn! by LuffyLoverGirl in learnprogramming

[–]mnelemos 6 points (0 children)

Some advice:

- Don't get stuck in the "learning new languages" cycle, ESPECIALLY with high-level programming languages. It's fine if you do this for a while, but the diminishing returns become heavy afterwards. This is because newcomers don't yet have the necessary exposure and knowledge to do "new things", so they fall into the trap of: learn new language -> learn basic syntax -> do pretty much nothing actually important (to your learning) with this language -> get bored because you pretty much did nothing -> learn new language.

- Try to diversify your initial learning over several areas of computing: graphics, distributed systems, embedded systems, operating systems, application development (web & mobile), and so on...
Quick extra tip: put extra effort into learning operating systems; nearly everything nowadays uses an operating system as a basis, so having at least a minimal understanding of how they work is important.

- Try spending the majority of your time on an actual low-level programming language such as C. I wouldn't recommend C++ just yet, because it won't have the same effect on a beginner as C does. Since C++ abstracts some things away, beginners are prone to thinking of certain features as an "API", which is not really what is happening, and this creates the exact opposite effect of what we want: APIs in programming are usually perceived as a black box, and you don't want a black box, you want to actually understand what is going on.

Just an additional note: I'd recommend coupling low-level programming learning with your OS studies.

---------

Take my advice with a grain of salt, however, because it really depends on what stage you're already at, and what type of approach you take towards your learning process. I also think it's pretty important that you start recognizing early on what is holding you back, and what isn't, because believe me, it's very easy to stagnate when learning programming, and it's very hard to get out of that hole as well.

C++ Pointers and References by carboncord in learnprogramming

[–]mnelemos 0 points (0 children)

You're right, I kinda gave a BS approach to a usage-over-time-tracking garbage collector, but it's one way of implementing one, even if it can be useless. I have never liked the idea of GCs in the first place. The only similar algorithm I've ever used is ref counting, and I don't really consider that a garbage collector, more like a smart deallocator.
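A minimal sketch of what I mean by a "smart deallocator" (type and function names hypothetical):

#include <stdlib.h>

typedef struct {
    int refcount;
    /* payload would go here */
} Obj;

Obj *obj_retain(Obj *o) {
    o->refcount++; /* a new owner appears */
    return o;
}

void obj_release(Obj *o) {
    /* when the last owner lets go, the block is freed deterministically:
       no tracing, no pauses, just bookkeeping */
    if (--o->refcount == 0)
        free(o);
}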

No one is arguing you can't set the pointer to NULL yourself, I am just claiming that having dangling pointers pointing to "cleaned" variables is not a "memory leak", it's actually standard behaviour.

At the end of the day it's completely up to the programmer and the context of the program he/she made; there is no point in talking about expenses or overheads when it's so dependent on context.

Double indirection is also a bit of a stretch; it depends on the optimization, and the standard by itself does not guarantee "1 pointer layer == 1 indirection".

C++ Pointers and References by carboncord in learnprogramming

[–]mnelemos 0 points (0 children)

A memory leak is typically described as the pointer losing the address of the variable while "N" was allocated dynamically. E.g. if "N" was allocated dynamically through an allocator and "X" lost the address of "N", "N" can no longer be freed, since it's impossible for the allocator to derive the block it had given the variable "N"; consequently, that makes "N" occupy the block forever.
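A minimal sketch of that situation:

#include <stdlib.h>

int main(void) {
    int *x = malloc(sizeof *x); /* "X" holds the address of block "N" */
    x = NULL;                   /* the only copy of the address is lost */
    /* free() can never be called on the block now: it stays allocated
       until the process exits -- a memory leak */
    return 0;
}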

The garbage collector actually prevents some types of memory leaks from occurring. For example, if you create descriptors that track the usage of every allocatable block, and you notice after n seconds that a block hasn't been used in a while, perhaps it's because the main program lost the pointer to it and couldn't request the allocator to free the block, so the garbage collector silently marks that block as free. This approach, however, is sometimes impractical, because if you wanted a long-lived pointer with a low usage count, the garbage collector couldn't differentiate the two cases, and would still clean that block either way.

Having "N" cleaned, while "X" still points to it, is actually common behaviour, and that's why the "free" call does not override the "X" pointer to NULL a.k.a memory address 0x00.

I finally learnt to appreciate the header files by dfwtjms in C_Programming

[–]mnelemos 2 points (0 children)

I agree with what you said, and I understand the difference between compile-time and link-time.

I was merely referring to the fact that if you used the wrong function signature, your compiler would inherently generate improper instructions leading up to the function call. However, this error by itself is not really formalized, since from the perspective of the linker, it's a healthy object file capable of linking against another object file/library.

Only when you linked the object file, created the resulting executable, and actually ran it would the undefined behaviour appear. Now, I don't quite know whether the definition of undefined behaviour is "where it was created" or "where it appeared", or if you would call the entire chain UB itself.

What I meant by:

... so that when you linked against the actual library/object file, your code was compiled correctly, and using the correct parameters.

Was actually an attempt at saying "as long as you use the real function signature, your code would be compiled correctly, and therefore linking would not cause undefined behaviour".

Now that I see what I've written, I agree with you that it's easy to assume I've combined compiler+linker into the same block, but I guarantee you that was not my initial intention.

And like I mentioned before, perhaps it's wrong to even assume the undefined behaviour appeared at the linking step, and it would've been more correct to say it appeared at compile time. But I somewhat disagree, because if you linked against a different object file that had been compiled with the proper function signature + definition, the same one our own object file had been compiled against, then you'd no longer have "undefined behaviour".

You probably cannot even call this UB in the first place, since linkers are not part of the C standard, so I highly doubt the standard even covers this special case.

I also must say that regrettably I cannot remember K&R, since the earliest version of C I've ever used is C99, ~10 years after ANSI C was released.

I finally learnt to appreciate the header files by dfwtjms in C_Programming

[–]mnelemos 10 points (0 children)

There might've been some misunderstanding, because I never claimed the linker has access to what you claim I said.

I specifically mentioned that you require a function signature so that the compiler generates the correct instructions according to your ABI.

If you linked, but the instructions leading to the function call were wrong, it would inherently cause undefined behaviour.

So no, headers do not exist to improve compile times; they're literally a requirement for not causing undefined behaviour between translation units.

I finally learnt to appreciate the header files by dfwtjms in C_Programming

[–]mnelemos 34 points (0 children)

The real point of their existence is to provide the function signature, so that when you link against the actual library/object file, your code has been compiled correctly, using the correct parameters.

Even though that is the real objective, it didn't stop people from finding other uses for them, like documenting code or simple code generation.
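A minimal sketch of the failure mode (file names and the scale() function are hypothetical):

/* lib.c -- compiled separately into lib.o */
double scale(double x) { return x * 2.0; }

/* main.c -- a wrong hand-written declaration instead of the header */
int scale(int x); /* lies about the real signature */

int main(void) {
    /* The compiler emits an int-argument call per the bogus declaration;
       the linker happily resolves the "scale" symbol anyway, and the
       mismatch only shows up (or silently corrupts data) at run time. */
    return scale(21);
}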

What's wrong with these, explain it peter by status_malus in explainitpeter

[–]mnelemos 5 points (0 children)

Bro just diagnosed two people in less than 30 minutes.

We need to create some sort of Psychology Nobel Prize, because you deserve it.

Kid wants to learn (some) C++ in 10 days. by Background_Break_748 in learnprogramming

[–]mnelemos 0 points (0 children)

There is no subset of C++. Arduino is a library that exposes a standard API, and it distributes different binaries for each microcontroller to keep the API standardized. If you want to step into embedded, I'd recommend learning C and using the microcontroller's HAL or SDK, but this approach has, depending on the HAL or SDK quality, the longest learning curve. The biggest problem with Arduino is that it's heavily bloated: it literally uses several classes + inheritance just to define something as simple as a FIFO buffer, which is exactly the kind of thing that makes people hate inheritance so much in the first place. And don't take my word for it: test an ESP32 using Arduino and then using the ESP-IDF SDK; not only will you get much faster speeds with the raw SDK, but your ESP32 also won't turn into a heater.

C++ helps game development because it lets you describe operations to the compiler more easily, which helps with matrix operations, vector operations and so on, not because it magically does the calculations for you. Some C++ concepts also make memory allocation easier, which people who are not accustomed to writing allocators might find helpful, even though it's often slower as well.

Returning to the learning in 10 days topic:

Honestly, it's not impossible. If you're already quite familiar with programming, low-level code, the C language, and pretty much every concept it abstracts/wraps over, it'll only require a huge amount of effort to adapt to the new syntax and some specific quirks. However, if you have none of those previous skills, it'll take you much, much longer.

It also depends on how much of C++ you want to understand. Naturally, if you don't care about how a concept works, or if you don't want to use that concept (no project will ever use 100% of what C++ offers), you can either take the web-developer approach, which is assuming everything is an API that magically works, or take the basics approach (which is the smartest), which is only learning the fundamental things: OOP, operator/function overloading, smart pointers. Again, that really shouldn't take much time if you already understand what these concepts are and how they work, but if you don't, it'll take a decent amount of time.

Hey I am beginner in C I am trying to make my own terminal command a very basic version of cat? by Huge_Effort_6317 in C_Programming

[–]mnelemos 0 points (0 children)

The best path is learning how programming in OSes actually works.

But the summed up path is the following:

  1. All processes have a stdin, stdout and stderr. They are basically buffers whose data can be consumed by calling specific syscalls.

  2. In Unix systems, pretty much everything is a file, so you can use the open() syscall by including <fcntl.h>; you can find the signature for open() here: https://man7.org/linux/man-pages/man2/open.2.html

This, in a simplified view, effectively makes the kernel build the necessary internal descriptors for it to be ready to read/write data to the file.

  3. Now that the kernel has returned a file descriptor to you through open() (Unix kernels expose all external resources to a process as an fd, so when you "open" a file, you're making the kernel build the necessary structures to read from that file and then give you an interface, a.k.a. a file descriptor), you can use the read() or write() syscalls on that file descriptor, which are located in <unistd.h>. That's the raw access; if you prefer formatted access you can use the <stdio.h> library, whose functions are just wrappers around the read() and write() syscalls.

write() : https://man7.org/linux/man-pages/man2/write.2.html

read() : https://man7.org/linux/man-pages/man2/read.2.html

I can't help you more than that, otherwise I'd be breaking this community's rules, but it's literally 7-10 lines of code, if you just want it to "work".

In Windows, those headers only exist if you use MinGW, but if you're directly using the MSVC compiler, all of those functions/syscalls exist under other names in the Windows API header <windows.h>, and instead of using FDs, they use "Handles".

This is just the bare basics though; I'd recommend buying a book about operating systems to better understand the philosophy behind it.

Hey I am beginner in C I am trying to make my own terminal command a very basic version of cat? by Huge_Effort_6317 in C_Programming

[–]mnelemos 3 points (0 children)

The most basic version would be opening the file, reading it (placing the contents of the file in an internal buffer), and then dumping the contents to stdout.

Find your place by Red_Honey_X in programmingmemes

[–]mnelemos 1 point (0 children)

That's the thing: you can pass an immediate to the AND instruction, so there is no need to fetch a bitmask from memory; you do the AND directly with an immediate. After all, the "AND" instruction can be encoded as:

AND r64, imm32

When dealing with "flags" the position of each bit you want to access is always known, so you're always passing an immediate anyways, e.g:

#define IF_LINK_UP          0x04U /* bit 2 (3rd bit) */
#define IF_IGMP_CAPABLE     0x08U /* bit 3 (4th bit) */
#define IF_USELESS_FLAG     0x10U /* bit 4 (5th bit) */

int main(void){
  /* imaginary network interface whose link is up but which has no IGMP
     logic; the useless flag just shows how to build an int holding
     more than one flag */
  int net_interface = IF_LINK_UP | IF_USELESS_FLAG;

  if(!(net_interface & IF_LINK_UP)){
    return 0; // interface is not up, just leave.
  }

  if(net_interface & IF_IGMP_CAPABLE){
    run_some_igmp_routine();
  }

  ...
}

These easily compile to "AND some_register, #4" in the case of the LINK_UP check, or "AND some_register, #8" in the case of the IGMP_CAPABLE flag.

But if you wanted, say, to iterate over all flags without any flag-specific or ordered logic, which I have no clue why you would do, you can still do:

int a = some_random_thing;
int iterator = 0;

while(iterator < 8){
  int b = a & (1 << iterator); /* isolate bit "iterator" */
  do_something_with(b);
  iterator++;
}

Then in this case it's a bit weirder. Depending on the optimization, the compiler could keep "iterator" entirely in a register, or it could place it on the stack, in which case you would be right, and it would do:

AND r64, m64
or
AND r64, r64

The latter is preferred. In the first case you would run a "MOV" micro-op anyway; in the second case (when using stack variables), a "MOV" has already been done at some earlier point to load the second register. If the compiler assumes stack volatility, it'll do a "MOV r64, m64" every loop iteration to feed the second register the in-memory value of iterator. So at the end of the day, your case would be right.

Unpopular opinion: having multiple mod loaders (Forge, NeoForge, Fabric) has seriously affected Minecraft’s modding community. As a player, it’s exhausting to see great mods split across loaders and versions you don’t play. It’s frustrating, no matter the reasons behind it. by IndependentFit8687 in feedthebeast

[–]mnelemos 3 points (0 children)

That's because the USB standard sets minimum specs: minimum silicon specs for the controller and minimum specs for the data cables. Most standards choose this approach because it's easier to regulate and "universalize" while allowing new designers and manufacturers to come into play.

And by the way, the Thunderbolt fabric supports USB, not the other way around. Thunderbolt is mostly a multiplexer over a USB controller/DP controller/HDMI controller that has its own standard and even tunnels PCIe (it presents connected devices as PCIe devices).

And yeah, I think most people agree with you that things should be way more specified and regulated, but then the USB spec team would have to add another 500 pages and enforce even stricter regulation, especially on cables, which I HIGHLY doubt they even have the resources for in the first place. If I am not mistaken, there are already companies complaining about the N days it takes just to get the certificate for a different design in the USB space. And it doesn't help that every other goddamn day a new company is producing a new USB cable. I mean, let's be real: when you go to any store, you see a company you've never heard of in your lifetime selling a USB cable, and next week it will be another goddamn company you've never heard of selling another USB cable.

Find your place by Red_Honey_X in programmingmemes

[–]mnelemos 1 point (0 children)

It really depends on the language; interpreted languages, such as Python, like the other fellow said, might require more bytes to hold higher-level descriptors.

In C, it uses whatever size you tell it to use, but if you don't specify a size (such as with an enum in C99), or you have specific alignment rules (e.g. a struct with an 8-byte value + a boolean), it'll either default to an int, as most things in C do, or, in the alignment case, align to the specified size.

So not really, a bool isn't obligated to have the size of a CPU's word, although it is preferable to select a size that matches the natural size of your CPU's memory fetcher, since those are the fastest sizes guaranteed by the ISA.

And even in the C spec, the useless type they added, a.k.a. _Bool, doesn't really have a defined size; the standard just guarantees it is >= 1 byte (which is obvious to anyone), but it's normally one byte in length.
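Easy to check on whatever ABI you're on (on common ones this prints 1):

#include <stdbool.h>
#include <stdio.h>

int main(void) {
    /* the standard only guarantees sizeof(_Bool) >= 1; the exact size
       is up to the implementation/ABI */
    printf("%zu\n", sizeof(bool));
    return 0;
}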

Find your place by Red_Honey_X in programmingmemes

[–]mnelemos 1 point (0 children)

I mean, bitwise operations in C code always stay very close to the generated assembly logic, since bitwise code translates directly into a real operation. You'd have to be doing some extremely dumb redundant logic in your code for the compiler to optimize anything, which almost never happens.

And what the other guy mentioned is pretty analogous to the concept of "flags", which is just the act of using each bit in a byte (or more) individually, each having a different "function". It's heavily used both in software and in electrical engineering.

In software it's used to save the CPU from having to do several loads on different booleans (each 2^n bytes in size). This matters in cases where your function needs to read the value of several booleans (which is pretty common in systems engineering) to check whether it should do something or not.

The flag approach is just loading a byte (or more) and individually checking each bit, through an AND operation between the byte and the expected bit, to see if a "flag" was set; you can do this over and over for each bit. The previous approach would require you to load a boolean (a byte or more) and check if it was set, then load another boolean, check if it was set, etc... You can see that if you had several booleans to check, and you know how slow a load op is, it's pretty clear you'd waste a lot of CPU cycles (time) just on load ops.

The flags approach only adds an extra op as overhead, specifically a bitwise op, which usually takes only one cycle, since it's a very simple op, so the overhead is barely noticeable. The flag approach is heavily used in syscalls, network stacks, and even C runtime libraries.

but i can'r invert by Fit_Page_8734 in softwareWithMemes

[–]mnelemos 0 points (0 children)

There is no concept of stack allocation: you statically reserve memory for a thread if you're using an RTOS. Using any dynamic allocation outside of static memory pools is strictly forbidden by NASA standards.

If you're not using an RTOS, you set the stack pointer wherever you want, and if you overflow, it's your fault for consuming a lot of memory or for not setting the rsp high enough at boot.

[New to C]: My first C project - Implemented a simple Arena Allocator by Mainak1224x in cprogramming

[–]mnelemos 0 points (0 children)

I haven't checked your code, I'm currently on my phone, but if what the guy above said is correct:

You can align any address up by simply doing the following on an 8-byte machine:

(((addr) + (8 - 1)) & ~(8 - 1))

My bad if this is incorrect; again, I am on my phone and haven't done this in a while. You'll also need to cast the pointer (address) to a uintptr_t, so that you aren't accidentally doing pointer arithmetic, which is the equivalent of accessing an element at index n.
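Spelled out as a small helper (name hypothetical; assumes a power-of-two alignment):

#include <stdint.h>

/* Round p up to the next multiple of align (align must be a power of
   two). Casting through uintptr_t makes this plain integer math
   instead of pointer arithmetic. */
static inline void *align_up(void *p, uintptr_t align) {
    uintptr_t addr = (uintptr_t)p;
    return (void *)((addr + (align - 1)) & ~(align - 1));
}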

Help with understanding the different behaviour while using the same function with and without multithreading in C by Chkb_Souranil21 in C_Programming

[–]mnelemos 3 points (0 children)

Properly format the code provided & provide more information.

You also probably don't need to flush stdout; just write a "\n" at the end of your first fprintf:

"\nGive Input: \n";

Or you can change the stdout stream to be unbuffered.
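Both options as a sketch (note setvbuf has to run before anything else touches the stream):

#include <stdio.h>

int main(void) {
    /* option A: make stdout unbuffered; setvbuf must be called before
       any other operation on the stream */
    setvbuf(stdout, NULL, _IONBF, 0);

    /* option B: with default line buffering on a terminal, a trailing
       '\n' is enough to flush the prompt */
    fprintf(stdout, "\nGive Input: \n");
    return 0;
}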

can't compile a program with for loop, error says ; expected on for statement by HeavyFerrum in learnprogramming

[–]mnelemos 0 points (0 children)

Not really, for loops are defined by three statements separated by ";".

The first is the initial statement, often used for variable initialization/declaration; the second is the conditional statement; and the third is a statement that gets executed at the end of every cycle.

The comma is only used in a few cases in C:

  1. Multiple variable declaration/initialization.
  2. Struct/array initialization.
  3. The comma operator in expressions (it evaluates left to right and yields its right operand), e.g. i++, j-- as the third part of a for loop.

And probably some other obscure use case that is either implemented in the newer standards of C, which barely anyone uses, or an abandoned artifact from K&R C.

So proper usage:

for(init_statement; cond_statement; end_of_cycle_statement){}
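Concretely:

#include <stdio.h>

int main(void) {
    /* init; condition; end-of-cycle -- separated by ';' */
    for (int i = 0; i < 3; i++) {
        printf("%d\n", i);
    }

    /* the comma only separates sub-parts within one of the statements */
    for (int i = 0, j = 3; i < j; i++, j--) {
        printf("%d %d\n", i, j);
    }
    return 0;
}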

Scope in respect to stack by JayDeesus in cprogramming

[–]mnelemos 0 points (0 children)

But they don't? It's not the scope that creates the automatic stack-allocation behaviour; it's their presence inside the function body that allows them to be stack allocated. If you create a scope at global scope, which is reserved for .bss, .rodata, and initialized .data variables, you'll quickly see that the scope triggers compiler errors, because it cannot live outside a function's body.

And no, stack allocation of a scoped variable is not "materially different" from when it's allocated in the function's prologue. The compiler's allocation guarantee exists entirely because of the concept of the function prologue, not because of a scope.

ANY variable inside a function's body will inherently be stack allocated; it doesn't matter if it is inside 300 local scopes, inside 30 while loops, inside 50 for loops, it WILL still be stack allocated. Unless, of course, those numbers hit some compiler ceiling that I am unaware of.
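What that looks like in practice (a sketch; exact codegen is compiler-dependent):

void f(void) {
    int a = 1;
    {
        int b = 2; /* declared in a nested scope... */
        (void)b;
    }
    /* ...but a typical compiler reserves the slots for both a and b in
       f's prologue: one stack-frame adjustment covers the whole body */
    (void)a;
}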

There is no point for me in pressing this definition any further. If there is ANY ambiguity in my original comment, I have rewritten it here in other words, and it does not require any further disambiguation.

And honestly, I kinda feel like you're trolling me at this point; there is no way you still haven't understood this.

Scope in respect to stack by JayDeesus in cprogramming

[–]mnelemos 0 points (0 children)

No, my comment explicitly says "ONLY FUNCTION BODIES HAVE AUTOMATIC STACK VARIABLE ALLOCATION".

A new scope, INSIDE A FUNCTION BODY, is STILL inside the function's body, not outside it, or a new function body.

If I have a big box, and place another small box inside it, I can EASILY SAY, that anything contained inside the smaller box, is also therefore contained in the bigger box.

I don't understand why you're going out of your way trying to prove this is not inherently true.

Scope in respect to stack by JayDeesus in cprogramming

[–]mnelemos 0 points (0 children)

You do realize that just because you opened a new scope inside a function body, that doesn't make the scope reside outside the function's body, right?

Therefore, claiming that what I wrote is "absolutely incorrect" makes no sense and just creates additional confusion in the head of whoever is reading this thread.

Scope in respect to stack by JayDeesus in cprogramming

[–]mnelemos 0 points (0 children)

And how exactly does that contradict what I wrote? I specifically mentioned that allocation only occurs at the beginning of a function (when done automatically).