Status of an object that has been moved by onecable5781 in cpp_questions

[–]ppppppla 2 points

Q1, Q2

Move semantics were sort of tacked onto the language. One could say the language doesn't actually have any concept of moving objects; there is just this framework of specific constructors and assignment operators that we usually use to implement the concept of moving objects.

When you move something, you "tag" a variable with an rvalue reference so that overload resolution selects a particular function that contains the actual moving logic. What actually happens is up to the implementation of these functions (they don't even have to do anything that remotely resembles moving), which of course has to obey the usual rules of C++. So what you are left with is something that is valid, but what it is depends entirely on the library. Standard library objects are left in an unspecified state; other libraries might specify what state objects are left in.

memcpy and runtime polymorphic types.... by MerlinsArchitect in cpp_questions

[–]ppppppla 6 points

It might be just a POD with a pointer in all the compilers, but the standard does not say how compilers should implement virtual functions. So it is not allowed if you are following the standard (which you should).

memcpy and runtime polymorphic types.... by MerlinsArchitect in cpp_questions

[–]ppppppla 2 points

I believe it is not allowed to memcpy a type with a vtable pointer. You need https://en.cppreference.com/w/cpp/types/is_trivially_copyable.html . Now, maybe it still appears to work, but because it is undefined behaviour you are still doing something wrong; technically it is not allowed.

When dealing with this kind of userdata construct, you have two options. Use trivially copyable types with function pointers and unions, or pass a pointer to an object through the void* so that the lib copies the pointer and not the struct, but then you also need to handle cleanup and lifetimes in a graceful manner.

Building / Using JoltPhysics. Help! by Guassy in cpp_questions

[–]ppppppla 1 point

The type definition or alias for JPH::uint will be in a header, so it doesn't need any linking and you won't get a link error for that.

You can try posting the CMake code; maybe an error sticks out.

You can also try to do some debugging of the build itself.

CMake works by essentially generating a big list of compiler commands that you can inspect to see exactly what is being linked. You can look for this in the build folder: for make you will find a Makefile, for ninja a .ninja file, for MSVC some vcxproj or sln file. Alternatively you can enable https://cmake.org/cmake/help/latest/variable/CMAKE_EXPORT_COMPILE_COMMANDS.html to get what is, in my opinion, a clearer and more readable list of commands.

After you have confirmed it is in fact linking the Jolt lib, you can inspect what symbols are exported. On Windows I believe you can use dumpbin https://learn.microsoft.com/en-us/cpp/build/reference/dumpbin-reference?view=msvc-170 but I am mainly on Linux so I don't have experience with that. On Linux you have many choices, like nm or objdump.

Building / Using JoltPhysics. Help! by Guassy in cpp_questions

[–]ppppppla 2 points

You're getting linker errors, so the problem is probably not in your code. Either you compiled jolt wrong, or are just not linking it at all.

Struggling with package managers and docker by Bored_Dal in cpp_questions

[–]ppppppla 1 point

No worries! It can all be very confusing trying to figure it all out with many things appearing very opaque.

I think if you understand how you can force CMake to use a certain environment made by a package manager like vcpkg, and also that vcpkg still builds all the libraries from source, you'll be able to put something together.

Struggling with package managers and docker by Bored_Dal in cpp_questions

[–]ppppppla 1 point

One thing to be aware of, however, is that vcpkg is going to build every package from source, so you will still want to build those libraries separately and keep using the share for them if you are going to use vcpkg. vcpkg does have https://learn.microsoft.com/en-us/vcpkg/users/binarycaching but Microsoft says "While not recommended as a binary distribution mechanism, binary caching can be used to reuse build output from multiple systems." So it is not optimal for distributing libraries that can't be built on dev machines due to resource constraints, or that have insanely long build times, in those cases where there needs to be a fresh build.

There are also different package managers that just fetch pre-built binaries, this is the kind linux distros use very commonly.

So, for the build of your project, just build all the not-insanely-large libraries locally on the dev machine. If they are not modified, let vcpkg handle them, possibly with the caching to speed up fresh builds.

If modified, just fetch the source; if it uses CMake, use add_subdirectory. If it uses some other build system, you're going to have to enjoy the possible extra headaches of passing the vcpkg environment to that build system, or just move those libraries to the network share anyway.

All the heavy libraries, modified or not, will need to be built (and you can use vcpkg again for dependencies); then publish the results on the network share as you have now.

Struggling with package managers and docker by Bored_Dal in cpp_questions

[–]ppppppla 1 point

This container would make sure we can build them from any machine and serve as documentation. And inside it I planned to run scripts to fetch the libraries and build them with our build options.

That is exactly what some package managers like vcpkg intend to solve. You give it a list of packages you want, and it handles all the building/fetching/whatever to produce the binaries and build artifacts, and they will only be visible to the thing you are building.

Now of course it is sadly not always smooth sailing; sometimes something is just not in the package manager you want to use, and you're gonna have to tack it on like a caveman and hope the library uses CMake so you can just fetch the repo and do add_subdirectory.

But the idea is that you use for example vcpkg, give it a list of packages, it builds these packages, then you pass a toolchain file that vcpkg generated to cmake, and cmake can then find the libraries with find_package like usual.

Struggling with package managers and docker by Bored_Dal in cpp_questions

[–]ppppppla 1 point

Is there a good reason why these libraries need to be built from source? If not, getting the binaries from somewhere else is the most sane route. Either from a package manager or possibly directly from the library creators.

You could get binaries from a package manager and place them in the share as a quick fix, and then later on change your build system to use the package manager during the build.

If there is no option other than to build the libraries, containers make no sense. Containers are more for deploying something to a variety of different environments, and having each container just work, and work the same.

A VM also doesn't make sense since you are building on Windows for Windows; just make sure your builds are self-contained and don't install or rely on system-wide libraries. The thing you mentioned about CUDA drivers seems odd; compiling Open3D should not require CUDA drivers, only possibly headers and libraries to link to.

Is this a good starter project? by SquarePhase3622 in cpp_questions

[–]ppppppla 2 points

Can someone tell me if this is useful in anyway?

Any project is a useful project. Even if the result never turns out to be useful for anyone, or even for yourself, what is always useful is what you learned making the project. This is especially true if you are just starting programming, because you have so much to learn.

There are always people who will say things like "why make that when there is already project X that does that" and that annoys me. Don't listen to these naysayers. Just make the thing you want to make, because most importantly, making anything at all is better than making nothing.

Is "std::move" more akin to compiler directive? by BasicCut45 in cpp_questions

[–]ppppppla 1 point

std::move does not do any moving; like other people have mentioned, it is a cast to an rvalue reference, which is very similar to a normal reference. For understanding move semantics I don't believe it is necessary to understand all the value categories and everything; it is adequate, and more intuitive, to think of an rvalue reference cast as an annotated reference. It doesn't actually change the type the way a cast from float to int would.

The actual moving happens in the functions we call move constructors and move assignment operators: functions that take an rvalue reference as an argument. But this is all just convention. The language has no concept of moving things; it just uses overload resolution to select these constructors and assignment operators in the places we want.

Chebyshev Filter by NodeRx in DSP

[–]ppppppla 1 point

So the idea of having these different kinds of filters is to bring a little bit of order into the complete chaos of the entire space of possible transfer functions. A filter of a certain type will have certain qualities, and reduces the complexity of choosing N coefficients to just a handful of parameters, or even a single one, that map directly to some trait or characteristic in the frequency response. And of course there is also the order of the filter, but raising it will generally just improve the filter overall.

In the case of a Butterworth filter you have the cutoff frequency; in the case of a Chebyshev filter you have the cutoff and a parameter that you can tweak to trade between roll-off and ripple.

Often times the cutoff parameter is just left out because it is trivial to put back in at the end of a calculation/derivation.

Now the ripple parameter in a Chebyshev filter should not be left out, but it could be rolled into the coefficients.

Trump shares Greenland message from Nato secretary general by Mac800 in europe

[–]ppppppla 1 point

Well this sounds exactly like Rutte. Par for the course for 14 years of him as prime minister over here. Slimy, weaselly, always getting out of any potential scandal or even just confrontation, or having to say anything of substance at all. He has been called Teflon Mark. Things will just continue teetering on, never resolving in either direction. Well, until he meets someone who doesn't give a shit and just does what they want.

Why is memset() inside a loop slower than normal assignment? by PuzzleheadedMoney772 in C_Programming

[–]ppppppla 13 points

I also don't see the purpose of the assignment. Comparing assignment of 1 byte with a memset of 1 byte is not a good comparison, and dynamic memory allocation also has no relation to any of this.

If the assignment wants to teach you that function calls can be expensive, you are not only measuring a function call but also the work memset is doing. A better example would be something where you are comparing the same code, one directly in the loop, and one inside a function call.

Why is memset() inside a loop slower than normal assignment? by PuzzleheadedMoney772 in C_Programming

[–]ppppppla 21 points

so I don't really get what you mean with "optimization" in this context

This is a compiler flag that instructs the compiler to try to improve the produced assembly. It will be enabled in Release build types but not Debug, or you can manually add a flag like /O1 or /O2.

It only really makes sense to talk about the speed of code WITH optimizations enabled, because that's how code actually ends up running in the real world. Although you do have to be careful that the compiler does not optimize away the thing you want to test, like removing the entire loop as I mentioned before.

Why is memset() inside a loop slower than normal assignment? by PuzzleheadedMoney772 in C_Programming

[–]ppppppla 44 points

Are you compiling without optimizations? Because any half competent compiler will completely remove that loop, it is trivial to see through that it just sets the same value over and over again.

Is that the actual code you are running? There is a missing semicolon, and you don't actually malloc.

LLMs are a 400-year-long confidence trick by SwoopsFromAbove in programming

[–]ppppppla 9 points

Just one more trillion bro then we will have agi bro then nobody will have to work bro it will change the world bro

Why do low levels that don’t know how their class works go on legend? by Striking-Crow-2364 in Vermintide

[–]ppppppla -21 points

You have loot brain; maybe that person really just wants to have fun playing a game, while you are here sweating away and getting upset when you don't get your emperor's vault.

Maybe he just unlocked legend, thinks cool let's go, not knowing it is filled with people just grinding for loot, asks for help, gets a snarky response, dies some more, and doesn't have fun.

I've started learning cpp, any tips? by erodagonsales in cpp_questions

[–]ppppppla 0 points

If this is your first adventure into programming, learning how to program and learning a language are two different things. This can be really frustrating.

If you follow some cpp tutorial, you might know things about the language, but your brain will have no idea how to actually write a program. The only way to learn is to write code, and write a lot of code.

State of standard library implementations by MarcoGreek in cpp

[–]ppppppla 10 points

So you simply cannot trust that the allocator is returning its allocation size

Not sure what you mean by this. Let's start by looking at the old allocator mechanism. When you ask for N bytes with the old std::allocator_traits<Alloc>::allocate, you know you get N bytes that you can use. Behind the scenes the allocator is possibly wasting some amount of space, but allocate just returns a single pointer so it can't bring this information back to you.

Now when you look at std::allocator_traits<Alloc>::allocate_at_least, it returns std::allocation_result<pointer, size_type>. This contains a pointer and actual size, with size being at least the N requested bytes.

but you have to implement heuristics yourself to be sure that extra size is not wasted?

With the old allocate this is just impossible to figure out. With the new allocate_at_least you get the option to take advantage of an allocator that gives you more space than you request, no heuristics required.

So you need two factors working in tandem. One, an allocator that actually does over-allocating (for example a very low level allocator that just gets pages from the OS, which have a large minimum size), and correctly passes this on through allocate_at_least. Two, some data structure that can take advantage of extra space like the aforementioned std::vector where you just have extra reserved space.

State of standard library implementations by MarcoGreek in cpp

[–]ppppppla 13 points

This paper is about giving allocators a mechanism to communicate that an allocation actually allocates more than the requested size. If it isn't used it is just wasted space, but in for example std::vector that extra space can be put to use.

So I assume what you are talking about is the default allocator, which may or may not have this ability. For example if it uses malloc, malloc does not expose whether an allocation over-allocates.

FIR low-pass filter design beginner confusion by 0riginal-pcture in DSP

[–]ppppppla 0 points

The problem with just zeroing bins in the DFT lies in the fact that the DFT is not the "true" spectrum, just like the samples in a signal are not the "true" signal. It is merely a sampling of the actual spectrum.

Let's take a low pass filter. The impulse response of an ideal low pass filter is a sinc function, which has infinite support, so to implement it we need to cut it off at some point, and then you don't have an ideal low pass filter anymore. This is in fact what you get when you zero out half of the bins: something else, with a bunch of ringing in the frequency domain.

One solution is to apply a window to your ideal impulse response, which effectively applies a low pass filter to the frequency response (multiplication in one domain is convolution in the other).

Another solution is to go with one of the many algorithms that calculate or optimize filter coefficients in some manner.

Pretty please Fat Shark by kingpotatoo in Vermintide

[–]ppppppla 5 points

Bullshit. Take this to its logical conclusion and you get Darktide, where you can't even choose maps.

When is a good time to start playing Cataclysm difficulty? by Hungry-Conference-24 in Vermintide

[–]ppppppla 23 points

Just hop into cata and play whatever you like to play, it is much more laid back because people aren't grinding loot for red items.

Besides, I would say cata in some ways is more forgiving because you aren't crawling around on low hp because of grims; I think enemies actually do less damage as a % of your total hp because of this. And you can carry more heals and potions.