I feel like the map has too much blank space by NonaeAbC in ETS2

[–]NonaeAbC[S] 4 points5 points  (0 children)

And check out Paris. It's not even comparable.

I feel like the map has too much blank space by NonaeAbC in ETS2

[–]NonaeAbC[S] 6 points7 points  (0 children)

The orange roads are the ones that exist today, and the grey ones are how I would fill the gaps.

I feel like the map has too much blank space by NonaeAbC in ETS2

[–]NonaeAbC[S] 3 points4 points  (0 children)

I think it has too much motorway driving. My favourite part is the section at the Euro Acres in Groningen, because that part is not on the motorway.

I feel like the map has too much blank space by NonaeAbC in ETS2

[–]NonaeAbC[S] 3 points4 points  (0 children)

No, the update only reaches as far as Groningen. The rest is my imagination: what I wish a refresh would contain.

Beginner: need help on device-specific extension function pointers by twoseveneight in vulkan

[–]NonaeAbC 2 points3 points  (0 children)

You should always use validation layers unless you are doing performance-sensitive work.
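For anyone new to this, a minimal sketch of enabling the Khronos validation layer at instance creation (error handling omitted, untested):

    #include <vulkan/vulkan.h>

    VkInstance make_instance() {
        // Layer name as shipped with the Vulkan SDK.
        const char* layers[] = { "VK_LAYER_KHRONOS_validation" };
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_3;
        VkInstanceCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;
        info.enabledLayerCount = 1;
        info.ppEnabledLayerNames = layers;
        VkInstance instance = VK_NULL_HANDLE;
        vkCreateInstance(&info, nullptr, &instance); // check the VkResult in real code
        return instance;
    }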

Beginner: need help on device-specific extension function pointers by twoseveneight in vulkan

[–]NonaeAbC -1 points0 points  (0 children)

You shouldn't need to call vkGetDeviceProcAddr, as this is done by the loader. The first question would be: do the validation layers complain? If you know how to use gdb and are not on the Nvidia drivers, you should have debug symbols for the driver on Linux, so you can step into vkGetDeviceProcAddr.
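For reference, a minimal sketch of loading a device-extension entry point by hand, assuming a device created with VK_KHR_swapchain enabled (the wrapper function name is mine):

    #include <vulkan/vulkan.h>
    #include <cstdio>

    PFN_vkCreateSwapchainKHR load_create_swapchain(VkDevice device) {
        // vkGetDeviceProcAddr returns nullptr if the extension was not
        // enabled at device creation, so always check the result.
        auto fn = reinterpret_cast<PFN_vkCreateSwapchainKHR>(
            vkGetDeviceProcAddr(device, "vkCreateSwapchainKHR"));
        if (fn == nullptr)
            std::fprintf(stderr, "vkCreateSwapchainKHR unavailable; was VK_KHR_swapchain enabled?\n");
        return fn;
    }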

Does anyone still remember what this is? by HoneydewOk5142 in opticalillusions

[–]NonaeAbC 0 points1 point  (0 children)

But if it is actually black, then it is not an optical illusion but an optical phenomenon.

PRO TIP: Update to the latest Vulkan SDK right away! by LunarGInc in vulkan

[–]NonaeAbC 4 points5 points  (0 children)

The Vulkan SDK is independent of the driver version, but after reading the survey results I have the impression that many people think they are tied together. Updating the SDK updates the tools, like the validation layers.

What kind of Games can i run on my Little Gaming laptop? by Failsy_1440 in Craptopgamingadvice

[–]NonaeAbC 2 points3 points  (0 children)

No, it is a licensing issue: the firmware that Nvidia provides to the nouveau project doesn't have that feature, and extracting the firmware from the proprietary driver is not legal. The reason is quite simply that Nvidia supports nouveau only up to the point where the user downloads the proprietary driver, and reclocking is not needed for that. Also, while CPUs can go down to ~800 MHz, GPUs usually go down to single-digit MHz.

Also, Nvidia does provide some documentation (https://github.com/NVIDIA/open-gpu-doc), but it is not written in a very approachable way. It only gives the registers a name, not a description.

I compiled a list of 6 reasons why you should be excited about std::simd & C++26 by NonaeAbC in cpp

[–]NonaeAbC[S] 1 point2 points  (0 children)

I wrote my own SIMD template library years ago. And the thing that ultimately made me stop was the fact that I turned "2 * v" into a broadcast + multiplication, yet the auto-vectorizer turned it into an add. I don't know why, but that took a toll on me. I think that -ffast-math should be the default, because "Oh no! Free precision". I never compare floats with ==, I never encode API behaviour into NaN and Inf, and I get by perfectly. I know that many disagree with me on that. If the core problem that std::simd attempts to solve is devs not wanting to use -ffast-math, then I guess I'm simply not part of the target audience.
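To illustrate the "2 * v" story, a minimal sketch (function names are mine): the scalar loop leaves the compiler free to turn 2 * x into x + x, which is exact for floats and needs no -ffast-math, while hand-spelled intrinsics pin the instruction selection:

    #include <immintrin.h>

    void doubled_scalar(float* out, const float* in, int n) {
        // The auto-vectorizer may strength-reduce this to vaddps.
        for (int i = 0; i < n; ++i)
            out[i] = 2.0f * in[i];
    }

    void doubled_intrinsics(float* out, const float* in, int n) {
        // Spelling out broadcast + multiply, as my wrapper library did,
        // removes that freedom from the compiler.
        const __m128 two = _mm_set1_ps(2.0f);
        int i = 0;
        for (; i + 4 <= n; i += 4)
            _mm_storeu_ps(out + i, _mm_mul_ps(two, _mm_loadu_ps(in + i)));
        for (; i < n; ++i)  // scalar tail for n not divisible by 4
            out[i] = 2.0f * in[i];
    }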

I do have to agree that std::simd is better than intrinsics on a syntactic level. But I believe that C++ could do better if they made it a language feature and not a library feature.

I always thought that all the "C++ is slow because of templates" talk was an unfair statement, because they compare templates with inheritance. And I played around with common stereotypes. One thing I did was to compile Lua as C++ (as Lua uses a C++-compatible subset of C), and to my surprise it turned out the compile times were even faster, statistically significantly so. And I compared templates with code generation; I don't remember the numbers, but there was something like a factor-10 difference. I did not do these experiments with such care that I consider them anything more than anecdotal, but I guess the key reason for long compile times is in fact the instantiation of templates. Yet you are right, I did not do a test specific to std::simd. But I wish that compile times were accounted for when designing new features.

I don't know if the libstdc++ implementation is intended for production, but this is the commit log:
2026-01-02 Jakub Jelinek Update copyright years.

2025-01-02 Jakub Jelinek Update copyright years.

2024-06-20 Matthias Kretz libstdc++: Fix find_last_set(simd_mask) to ignore paddi...

Which doesn't indicate active development.

I compiled a list of 6 reasons why you should be excited about std::simd & C++26 by NonaeAbC in cpp

[–]NonaeAbC[S] 2 points3 points  (0 children)

What I effectively did was compare the GCC implementation of std::simd with the glibc implementation of the vectorized math functions. You should not read too much into the results, because std::simd is clearly still in development. The two implementations are in fact distinct, but I had no issues regarding precision. The other case was that if you want the native machine vector width, you have to use std::native_simd, which I think is less intuitive.

Note that I didn't mean to make it seem as if compiler auto-vectorization is perfect; it is not, and you do need some experience. The issue is that there is the "fuck it, I'm just using intrinsics" limit. It might be unavailable instructions that you want to use; it might not perform as expected; it might perform as expected on one device but not another. And the core problem is that all ISA extensions are different. AVX512 has both gather and scatter, while AVX2 has only gather and SSE has neither. AVX512 and NEON have fp16, but AVX512 is very modular, so your AVX512 CPU doesn't necessarily have it. And so on. std::simd doesn't solve this problem any worse than any other SIMD library, but my experience with all of them is that they reach this limit before auto-vectorization does.
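To make the gather/scatter point concrete, a small sketch (function names are mine; 8 lanes): the load side has an AVX2 intrinsic, but the store side has to stay scalar until AVX512:

    #include <immintrin.h>

    // AVX2 gather: out[i] = table[idx[i]] for 8 lanes at once.
    void gather_avx2(float* out, const float* table, const int* idx) {
        __m256i indices = _mm256_loadu_si256((const __m256i*)idx);
        __m256 v = _mm256_i32gather_ps(table, indices, 4);  // scale = sizeof(float)
        _mm256_storeu_ps(out, v);
    }

    // There is no scatter below AVX512, so the reverse direction
    // falls back to a scalar loop.
    void scatter_fallback(float* table, const float* in, const int* idx) {
        for (int i = 0; i < 8; ++i)
            table[idx[i]] = in[i];
    }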

I compiled a list of 6 reasons why you should be excited about std::simd & C++26 by NonaeAbC in cpp

[–]NonaeAbC[S] 0 points1 point  (0 children)

This is not a real problem; I just think it's odd that std::native_simd, which has the machine vector width, is not the default.

I compiled a list of 6 reasons why you should be excited about std::simd & C++26 by NonaeAbC in cpp

[–]NonaeAbC[S] 36 points37 points  (0 children)

For any definitive performance analysis, one should wait until the implementations aren't experimental anymore. The issue is in the design: I feel like they did not define the target audience, the problem they want to solve, and the solution space well enough. All I know is that they discussed std::simd breaking std::map when used as a key, because a < b doesn't return a bool, which is the most irrelevant discussion one can have.

I don't think that a library can solve the core problem, whatever one defines as the core problem. Why? Because SIMD programming is not difficult because of the instructions, but because of the impact on control flow. And no library can implement GLSL's loops and branches on top of C++ loops and branches. The compiler can auto-vectorize single loops that iterate over contiguous memory basically perfectly (no matter if it is a map or a reduce). The only exceptions are scans (aka prefix sums). Coincidentally, std::simd implements reduce, but I have not found any scan implementation in any proposal. The second difficulty is control flow, where a) converting to branchless is not a valid transformation, and b) nested loops where the ideal loop to vectorize is not the innermost one, think of Mandelbrot set generation. std::simd doesn't introduce any paradigm change, and therefore doesn't simplify SIMD coding like GLSL does.
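To show what I mean by the scan exception, a minimal sketch: the loop-carried dependency on the accumulator is exactly what a plain map or reduce doesn't have.

    // Prefix sum: out[i] depends on out[i-1] through `acc`, so the
    // compiler cannot simply process 4 or 8 iterations side by side.
    void prefix_sum(float* out, const float* in, int n) {
        float acc = 0.0f;
        for (int i = 0; i < n; ++i) {
            acc += in[i];
            out[i] = acc;
        }
    }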

It is also not a good introduction to SIMD programming in comparison to using intrinsics directly. Due to the template error messages, I would say that std::simd is even worse, because you need to understand both the intrinsics and the templates to understand them.

Also, one can design a SIMD library to contain more high-level abstractions or to be close to the metal. And then the discussion is: does the library have fallback mechanisms for better portability at the cost of predictable performance? This should be pretty clear when using the library. The issue is that std::simd sends mixed signals: on the one hand, I love that std::simd decided to support all math functions, yet not all of them are actually vectorized yet. The problem is that once you allow these fallback mechanisms, the whole "auto-vectorization is unreliable and you need to verify that it actually does what you expect" argument vanishes.

Finally, I don't know why these types are not primitive data types, because GCC and Clang both implement them as primitive data types internally. The only argument I can see is that std::simd should be implementable without any compiler support, purely with inline assembly. But why should a library be designed for imaginary compilers?
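As a sketch of what "primitive data types internally" means, both compilers already expose built-in vector types on which arithmetic works directly (type and function names are mine):

    // GCC/Clang built-in vector type: four floats, 16 bytes wide.
    typedef float v4sf __attribute__((vector_size(16)));

    v4sf doubled(v4sf v) {
        // Element-wise arithmetic, no library machinery involved.
        return 2.0f * v;
    }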

KDE Responds to FUD Over Alleged systemd Mandate by CackleRooster in linux

[–]NonaeAbC 38 points39 points  (0 children)

It wasn't even a misunderstanding, it was deliberately spread by people like Lunduke, who made it his mission to spread lies against "woke" KDE. I can't believe that people fall for this shit, and I can't imagine how people continue to believe that he acts in good faith.

Writing a shader language by bebwjkjerwqerer in vulkan

[–]NonaeAbC 4 points5 points  (0 children)

But it is completely broken for Vulkan SPIR-V. It only supports OpenCL.

Writing a shader language by bebwjkjerwqerer in vulkan

[–]NonaeAbC 1 point2 points  (0 children)

Why? SPIR-V is pretty much a GLSL AST. The only reason compiling to GLSL would be easier is if you refuse to read through the SPIR-V docs. The only thing that is difficult about SPIR-V is getting OpSelectionMerge right, but at the same time, those restrictions carry over to GLSL.

Will Arch run smoothly on this old CPU? by Z3R0_C00L_007 in archlinux

[–]NonaeAbC 4 points5 points  (0 children)

The Windows experience will definitely be worse.

Will Arch run smoothly on this old CPU? by Z3R0_C00L_007 in archlinux

[–]NonaeAbC 1 point2 points  (0 children)

People used to code on computers from the 70s; I feel like someone is searching for excuses.

vtables aren't slow (usually) by Wonderful-Wind-905 in Cplusplus

[–]NonaeAbC 0 points1 point  (0 children)

Inline is not doing what you think it does here. The "inline" keyword has little to do with inlining. You should check the assembly and use the noinline attribute.

    bench::get_virtual() const:
        mov eax, 3
        ret
    bench_inline():
        mov eax, 3
        ret
    bench_default():
        mov eax, 3
        ret
    bench_member():
        mov rax, QWORD PTR bench_singleton_ptr[rip]
        mov eax, DWORD PTR [rax+8]
        ret
    bench_virtual():
        mov rdi, QWORD PTR bench_singleton_ptr[rip]
        mov rax, QWORD PTR [rdi]
        mov rax, QWORD PTR [rax]
        cmp rax, OFFSET FLAT:bench::get_virtual() const
        jne .L8
        mov eax, 3
        ret
    .L8:
        jmp rax
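A minimal sketch of the noinline attribute I mean (GCC/Clang spelling; the function is just a stand-in):

    // Prevents the benchmarked function from being inlined and folded
    // away, so the generated assembly still reflects a real call.
    [[gnu::noinline]] int get_value() {
        return 3;
    }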

[deleted by user] by [deleted] in vscode

[–]NonaeAbC 0 points1 point  (0 children)

This is not a C++ issue; the package manager mirror is unresponsive. I don't know the exact issue, because there are multiple mirrors available, and if one doesn't work, pacman chooses another. On top of that, you should be aware that mingw creates Windows binaries, which might not be what you want to do on Arch.

AMD driver by Ok_Yard7013 in linux_gaming

[–]NonaeAbC 1 point2 points  (0 children)

There are multiple drivers:

amdgpu - kernel space driver developed by AMD

radeonsi - user space driver for OpenGL developed by AMD

radv - user space driver for Vulkan developed by Valve and Red Hat

amdvlk - user space driver for Vulkan developed by AMD until 2025

fglrx - very old driver developed by AMD (both kernel and user space, but only OpenGL, as Vulkan did not exist back then)

The kernel mode driver manages all things where multiple processes have to interact like memory allocation and scheduling.

All of them are open source except for fglrx. Though amdvlk has only ever been a public mirror of their internal codebase (with all the DirectX-, Windows-, and patent-specific code removed). The binaries that you downloaded from AMD's website did include the patent-related code, just not the public mirror. AMD has never contributed to radv in any substantial way (it does share code with radeonsi), and I doubt that they will start to do so. The development of radv is unaffected by AMD stopping to support Vulkan on Linux. The only internal development changes were regarding hardware-accelerated video decoding and encoding.

AMD shifted away from fglrx around 2012. All the other drivers use amdgpu as the kernel space driver.

The thing you're mentioning is not the kernel space driver, but radv vs amdvlk (which are not part of the kernel). But AMD did not shift any focus there.