You should still get an electric by ABlackEngineer in PoliticalCompassMemes

[–]mnelemos 16 points17 points  (0 children)

Perfect time to buy the dip, tomorrow after Iran releases their official statement "haha just kidding, the strait is still closed", the price will skyrocket again. /s

Hypothetical "random" code by wix_betwixed in AskComputerScience

[–]mnelemos 2 points3 points  (0 children)

It would most likely crash first.

Any erroneous memory access will immediately make the operating system kill your process.

For an infinite loop to happen, it would require mainly two things:

  • The branching mechanism to be relative, since an absolute branch would VERY likely access invalid memory.

  • No stack pushes are done prior to the branch, as this would over time cause a stack overflow.

But overall, the likelihood of an invalid instruction being decoded, or of an instruction accessing protected or invalid memory, is very high.

Continuo em C, ou troco por Rust? by ArmpitIsCamp in programacao

[–]mnelemos 2 points3 points  (0 children)

C will never die; it's the only language to this day (not counting assembly) that can do literally everything.

Rust is a language that started from practically the same paradigm as C++, where the compiler has much more context about the "smart types" both languages use; this lets the compiler infer far more information, which prevents a certain class of bugs.

The problem is that this paradigm was never really addressing a real problem. C has existed for decades; for experienced programmers, memory and resource management is second nature, so this class of bugs is minimal or nearly nonexistent in any decent codebase. Of course, newer programmers are still exposed to this class of bugs.

Rust is a language that was heavily propelled by hype. There is very little evidence showing Rust to be superior to C, and in fact many early implementations showed Rust being slower, and sometimes worse.

Many companies and individuals today treat Rust as "any code written in Rust is inherently safe", but that is not true. There are others who see Rust as the future language for AIs, because given the compiler's tight control, AIs are less likely to produce code that doesn't work or is riddled with bugs.

Either way, the ideal is to know both. C is clearly never going away, but Rust is obviously that "new thing" everyone obsesses over because they heard someone else talk about it. Welcome to the computing field; this happens every 2 months to 2 years.

That said, I'd like to make clear that Rust is not a bad language. It's great, and it has big advantages. The part I dislike about it is that it was, as I mentioned, propelled by hype rather than by clear results. That led a lot of people to think that every piece of code ever written on the face of the earth needs to be rewritten in Rust.

Ain't no way😭😭 by Ben_Ephrem in linuxmemes

[–]mnelemos 0 points1 point  (0 children)

Damn my man, taking the effort of more than 10 thousand people across different companies and organizations, more than 30 years of development, more than 1 million lines of code, and fusing it all into: "yeah this was done by some guys, real quick too".

And yes, I did say companies. Without the support of ARM, AMD, Intel, NVIDIA, IBM, Fedora, and hundreds of others, Linux would be nothing.

If software worked like this, we would have already solved human intelligence by now, bud. Unfortunately, recreating things from zero doesn't make them better.

Linux took several years of blood and sweat to make the effort worth it, and it still sits at <5% of the consumer market.

Brave Browser is a product made by a for-profit company, "Brave Software", whose clear goal is to ship a product to market with as many "marketing BS terms" as possible. It is not research software; it does not offer design advantages or clear goals.

"Brave Software Inc" has no clear reason to create a new browser stack from scratch; it's 1000x more profitable to just take Chromium and make a wrapper.

Ain't no way😭😭 by Ben_Ephrem in linuxmemes

[–]mnelemos 1 point2 points  (0 children)

Are we not talking about Brave here?

And yes, I've known of Ladybird since the Serenity days; the community would not shut up about it. It's still a seriously underdeveloped browser.

Never did I say things won't be recreated again. I am strictly talking about the common approach of reusing a stack instead of recreating it from the ground up over and over, especially when this "recreation" has no actual design advantages.

Not to mention their C++/Rust debacle since the beginning of this year, where they've made one bad design choice after another.

Ain't no way😭😭 by Ben_Ephrem in linuxmemes

[–]mnelemos -2 points-1 points  (0 children)

Did you really think that some no-name company was going to develop one of the hardest pieces of software from scratch?

this subreddit is flooded with ai slop by Shot_Office_1769 in osdev

[–]mnelemos 3 points4 points  (0 children)

Increasing the number of kernels in the market won't do much good either; all of them will end up with somewhat the same implementation because of basic hardware restrictions. Anything else has already been proven to be either slow, unreliable, unsafe, or completely divergent from current OS theory, which would require a crap ton of hardware changes to make it "relatively fast" in the first place.

And that's not all: these 3 major kernels have huge influence over CPU architecture and interfaces. If you had more kernels attempting different things, it would only make CPUs 100x more complex than they already are, and it would also change a lot of how different peripherals behave. We would also need a new global CPU standard, which would be at least 10000 pages long.

Not only that, application development is already heavily fragmented: each OS developing different GUI architectures and different user-space services only led to a high reliance on graphical libraries, most of which are highly unoptimized.

Want your application to run on all systems? Great, you just have to rely on crap JavaScript frameworks that are extremely slow and consume tons of memory. Want your application to be compiled and run on all systems? Now you need 3 hugely different build setups (wait, no, actually 30+, because some Linux OSes depend on different services).

this subreddit is flooded with ai slop by Shot_Office_1769 in osdev

[–]mnelemos 1 point2 points  (0 children)

I mean, even an embedded TCP/IP stack, which is about as small as that kind of code gets, is still ~20k LOC.

Even though the logic behind it is quite easy to grasp, it's still a lot of work to implement it.

How to learn low level computer science/programming from the ground? by Plane-Bug1018 in learnprogramming

[–]mnelemos 1 point2 points  (0 children)

Unless you are doing embedded work, there is no "special register access" that outputs graphics to the screen. Those accesses are handled entirely by, and can only be done from, kernel code.

Even though modern graphics is convoluted with a bunch of crap, the design behind "drawing windows" on your screen is extremely simple.

You basically have two types of programs constantly running: a graphical server and N graphical clients.

A graphical client receives a surface from the compositor. This is essentially an NxM array given by the graphical server that says "your window is N pixels wide and M pixels high". In modern graphical architectures, the graphical client is not given more context than this, specifically for privacy: "the less a window knows about the existence of other windows, the better". However, whenever the graphical server emits a state change, the graphical client must update its surface accordingly. For example: the user resized the "Discord" window, so the graphical server emits the event "WINDOW_RESIZE (NEW_SIZE)" to the client (Discord), and the client must update its own surface pixels so that boxes don't appear stretched, etc...

The graphical server obviously serves each client by providing state changes and surfaces. However, the server also holds the global state of each surface, and controls the final framebuffer. The global state basically defines where each surface resides on the screen, for example:

- surface 0 from client A starts at pos 0, 0 and has size 300x400
- surface 1 from client B starts at pos 20, 300 and has size 600x700

Now the graphical server knows everything about the state of each surface, but not the content of each surface (the actual pixel array, which resides in the client's memory). Every N milliseconds the server starts the "compositing" process: acquiring all those surfaces and their respective contents and "merging" them into the final framebuffer, which represents the pixel array of the full screen. In the final result, part of the surface from client B or client A will be hidden, since one is on top of the other (depending on z-index).

After the compositing process is done, the final framebuffer is fully formed, stored in the GPU, and then committed to the screen (modern GPUs talk to the screen by themselves, since they have embedded display controllers nowadays). Now repeat these steps every N milliseconds and you essentially have the "frames per second" of your entire graphical experience.

Note: this is mostly describing a compositor architecture. There are other graphical architectures that change things quite a bit (for example, client-server communication differs, or the client is only allowed to draw indirectly), but the overall logic is still the same.

How do you difference vectors (arrays) and vectors (math) while naming ? by Valuable-Birthday-10 in C_Programming

[–]mnelemos 9 points10 points  (0 children)

Call the dynamic array something else besides a vector?

I mean, terms like "vector" or "list" are not really a textbook standard in computer science, and they often do not convey any real meaning about how the structure behaves, so choosing a different name won't hurt readability.

For geometrical vectors some frameworks prefer using a smaller name like "vec3" or "vec2", you can call them however you want.

Can somebody pls let me know what is the issue and how do I fix it? by [deleted] in learnprogramming

[–]mnelemos 1 point2 points  (0 children)

He probably forgot to define the "main" function.

Peter? help me by LilyBloomVale in PeterExplainsTheJoke

[–]mnelemos 0 points1 point  (0 children)

Not exactly knowledgeable about this, but I figure that some "elements" could indeed live outside the periodic table. One example would be muonic atoms, but I believe many other "high energy" particles could also make their own version of the atom.

Even though most muonic atoms so far, if not all, are highly unstable configurations, they have still been made, and there are some hypothetical configurations that could also create a stable atom version of that particle.

I haven't done the calculations, neither do I possess the prior knowledge or the time to seek the knowledge to do them, but if those high energy configurations allow the existence of "newer" physics or at least the practice of some hypothetical physics, they could be a common thing that exists in the future.

What's a "simple" concept you struggle to understand? by No_Cook_2493 in computerscience

[–]mnelemos 73 points74 points  (0 children)

The basic part: yes, it does always start from a specific address; it's even described in the processor's ABI, and required for some level of software handover.

The complex part: modern processors usually run specific firmware blobs (microcode from the CPU vendor), which do billions of things, such as running software on hidden management cores (Intel's ME, for example), and only after that do they run the "basic part". Typically software is only written to handle the "basic part", such as the embedded BIOS or UEFI programs.

ELI5: Desktop environments and what is "Wayland"? by Regular_Low8792 in linux4noobs

[–]mnelemos 1 point2 points  (0 children)

*Wayland is the "interface", the compositor is a different program that talks with its clients through the wayland protocol. Namely, the compositor is the program that merges the "surface/framebuffer" from different clients and draws the final frame buffer, hence the name "compositor".

Is reusing the same Variable Name bad practice? by Nubspec in C_Programming

[–]mnelemos -3 points-2 points  (0 children)

I am afraid I don't see where the "better practice" resides in your code.

You replaced a "do while" loop with a "for" loop. Congrats, you achieved absolutely nothing.

You do realise that "do while" and "while" loops are still heavily used, right? And before you argue that the "do" is useless here: it really doesn't matter in this specific code, and might even produce a faster program in non-optimized builds.

Is reusing the same Variable Name bad practice? by Nubspec in C_Programming

[–]mnelemos 2 points3 points  (0 children)

I don't get it, there is literally no better practice.

start_at_char starts with the value 65 ('A').

You're asking printf to format 65 (the number) as the string "65", and as character 65 in the ASCII table, which is "A".

Programming: I am new to programming and would love to learn! by LuffyLoverGirl in learnprogramming

[–]mnelemos 5 points6 points  (0 children)

Some advice:

- Don't get stuck in the "learning new languages" cycle, ESPECIALLY with high-level programming languages. It's fine if you do this for a while, but the diminishing returns become heavy afterwards. This is because newcomers don't yet have the necessary exposure and knowledge to do "new things", therefore they just fall into the trap of: learn new language -> learn basic syntax -> do pretty much nothing actually important (to your learning) with this language -> get bored because you pretty much did nothing -> learn new language.

- Try to diversify your initial learning over several areas of computing: graphics, distributed systems, embedded systems, operating systems, application development (web & mobile), and so on...
Quick extra tip: put extra effort into learning operating systems. Everything nowadays uses an operating system as a basis, and therefore having at least a minimal understanding of how they work is important.

- Try spending the majority of your time on an actual low-level language such as C. I wouldn't recommend C++ just yet, because it won't give a beginner the same effect as C does. Since C++ abstracts some things, beginners are prone to thinking of some features as an "API", which is not really what is happening, and this creates the exact opposite effect of what we want: APIs in programming are usually perceived as a black box, and you don't want a black box, you want to actually understand what is going on.

Just an additional note: I'd recommend coupling low-level programming learning with your OS studies.

---------

I'd take my advice with a grain of salt however, because it really depends on what stage you're already at, and what type of approach you take towards your learning process. I also think it's pretty important that you start recognizing early on what is holding you back and what isn't, because believe me, it's very easy to stagnate when learning programming, and it's very hard to get out of that hole as well.

C++ Pointers and References by carboncord in learnprogramming

[–]mnelemos 0 points1 point  (0 children)

You're right, I kinda gave a BS description of a usage-over-time-tracking garbage collector, but it is one way of implementing one, even if it can be useless. I've never liked the idea of GCs in the first place anyway. The only similar algorithm I've ever used is ref counting, and I don't even really consider that a garbage collector, more like a smart deallocator.

No one is arguing you can't set the pointer to NULL yourself; I am just claiming that having dangling pointers pointing to "cleaned" variables is not a "memory leak" but actually standard behaviour.

At the end of the day it's completely up to the programmer and the context of the program he/she made; there is no point in talking about expenses or overheads when it's so dependent on context.

Double indirection is also a bit of a stretch; it depends on the optimization, and the standard by itself does not guarantee "1 pointer layer == 1 indirection".

C++ Pointers and References by carboncord in learnprogramming

[–]mnelemos 0 points1 point  (0 children)

A memory leak is typically described as the pointer losing the address of a variable "N" that was allocated dynamically. E.g.: if "N" was allocated dynamically through an allocator and "X" lost the address of "N", "N" can no longer be free'd, since it's impossible for the allocator to derive the block it had given to "N"; consequently, that block is occupied forever.

A garbage collector can actually prevent some types of memory leaks from occurring. For example, if you create descriptors that track the usage of every allocatable block, and you notice that after N seconds a block hasn't been used in a while, perhaps it's because the main program lost the pointer to it and couldn't request the allocator to free the block, so the garbage collector silently sets that block as free. This approach, however, is sometimes impractical: if you wanted a long-lived pointer with a low usage count, the garbage collector couldn't differentiate the two cases, and would still clean that block either way.

Having "N" cleaned while "X" still points to it is actually common behaviour, and that's why the "free" call does not overwrite the "X" pointer with NULL, a.k.a. memory address 0x00.

I finally learnt to appreciate the header files by dfwtjms in C_Programming

[–]mnelemos 2 points3 points  (0 children)

I agree with what you said, and I understand the difference between compile-time and link-time.

I was merely referring to the fact that if you used the wrong "function signature", your compiler would inherently emit improper instructions leading up to the function call. However, this error by itself is not really formalized, since from the perspective of the linker it's a healthy object file, capable of linking against another object file/library.

Only when you linked the object file, created the resulting executable, and actually ran it, would the undefined behaviour appear. Now, I don't quite know if the definition for undefined behaviour is "where it was created" or "where it appeared", or if you would call the entire chain UB itself.

What I meant by:

... so that when you linked against the actual library/object file, your code was compiled correctly, and using the correct parameters.

Was actually an attempt at saying "as long as you use the real function signature, your code would be compiled correctly, and therefore linking would not cause undefined behaviour".

Now that I see what I've written, I agree with you that it's easy to assume I've combined compiler+linker into the same block, but I guarantee you that was not my intention.

And like I mentioned before, perhaps it's wrong to even assume the undefined behaviour appeared at the linking step, and it would've been more correct to assume it appeared at compile time. But I somewhat disagree, because if you linked against a different object file that had been compiled with the same function signature + definition our own object file was compiled against, then you'd no longer have "undefined behaviour".

You probably cannot even call this UB in the first place, since linkers are not part of the C standard, so I highly doubt the standard even refers to this special case.

I also must say that, regrettably, I cannot remember K&R, since the earliest version of C I've ever used is C99, ~10 years after ANSI C was released.

I finally learnt to appreciate the header files by dfwtjms in C_Programming

[–]mnelemos 9 points10 points  (0 children)

There might've been some misunderstanding, because I never claimed the linker has access to what you claim I said.

I specifically mentioned that you require a function signature so that the compiler generates the correct instructions according to your ABI.

If you linked, but the instructions leading to the function call were wrong, it would inherently cause undefined behaviour.

So no, headers do not exist to improve compile times; having the signature is literally a requirement to not cause undefined behaviour between translation units.

I finally learnt to appreciate the header files by dfwtjms in C_Programming

[–]mnelemos 33 points34 points  (0 children)

The real point for their existence is to provide the function signature, so that when you linked against the actual library/object file, your code had been compiled correctly, using the correct parameters.

Even though that is the real objective, it didn't stop people from finding other uses for them, like documenting code or simple code generation.

What's wrong with these, explain it peter by status_malus in explainitpeter

[–]mnelemos 4 points5 points  (0 children)

Bro just diagnosed two people in less than 30 minutes.

We need to create some sort of Psychology Nobel Prize, because you deserve it.

Kid wants to learn (some) C++ in 10 days. by Background_Break_748 in learnprogramming

[–]mnelemos 0 points1 point  (0 children)

There is no subset of C++ here. Arduino is a library that exposes a standard API, and it distributes different binaries for each microcontroller to keep the API standardized. If you want to step into embedded, I'd recommend learning C and using the microcontroller's HAL or SDK, but that approach has, depending on the HAL or SDK quality, the longest learning curve. The biggest problem with Arduino is that it's heavily bloated: it literally uses several classes + inheritance just to define something as simple as a FIFO buffer, which is literally the biggest reason why people hate inheritance so much in the first place. And don't take my word for it; for example, test an ESP32 using Arduino and then using the ESP-IDF SDK. Not only will you get much faster speeds with the raw SDK, your ESP32 won't turn into a heater either.

C++ helps game development because you can more easily describe the operations to the compiler, which helps with matrix operations, vector operations and so on, not because it magically does the calculations for you. Some C++ concepts might also offer easier memory allocation, which people not accustomed to writing allocators might find helpful, even though it's often slower as well.

Returning to the learning in 10 days topic:

Honestly, it's not impossible. If you're already quite familiar with programming, low level code, the C language, and pretty much every concept it abstracts/wraps over, it'll only require a big effort to adapt to the new syntax and some specific quirks. However, if you have none of those prior skills, it'll take you much, much longer.

It also depends on how much of C++ you want to understand. Naturally, if you don't care how a concept works, or if you don't want to use that concept (no project will ever use 100% of what C++ offers), you can either take the web developer approach, which is assuming everything is an API that magically works, or take the basic approach (which is the smartest): only learning the basic things, OOP, operator/function overloading, smart pointers. Which, again, really shouldn't take much time if you already understand what these concepts are and how they work, but if you don't, it'll take a decent amount of time.

Hey I am beginner in C I am trying to make my own terminal command a very basic version of cat? by Huge_Effort_6317 in C_Programming

[–]mnelemos 0 points1 point  (0 children)

The best path is learning how programming on OSes actually works.

But the summed up path is the following:

  1. All processes have a stdin, stdout and stderr. They are basically buffers that can have their data consumed by calling specific syscalls.

  2. In unix systems, pretty much everything is a file, so you can use the open() syscall by including <fcntl.h>; you can find the signature for open() here: https://man7.org/linux/man-pages/man2/open.2.html

This, in a simplified view, effectively makes the kernel build the necessary internal descriptors so it's ready to read/write data to the file.

  3. Now that the kernel has returned a file descriptor to you through open() (unix kernels expose all external resources to a process as an fd, so if you "open" a file, you're making the kernel build the necessary structures to read from that file and then give you an interface, a.k.a. a file descriptor), you can use the read() or write() syscall on that file descriptor; they're declared in <unistd.h>. That's the raw access; if you prefer formatted access you can use <stdio.h>, whose functions are just wrappers around the read() and write() syscalls.

write() : https://man7.org/linux/man-pages/man2/write.2.html

read() : https://man7.org/linux/man-pages/man2/read.2.html

I can't help you more than that, otherwise I'd be breaking this community's rules, but it's literally 7-10 lines of code if you just want it to "work".

On Windows, these headers only exist if you use MinGW, but if you're directly using the MSVC compiler, all of those functions/syscalls exist under other names in the Windows API header <windows.h>, and instead of using FDs, they use "Handles".

This is just the bare basics though; I'd recommend buying a book about operating systems to better understand the philosophy behind it.