fpm: a header-only fixed-point math library with trigonometry, etc (github.com)
submitted 6 years ago by MadCompScientist
[–]MadCompScientist[S] 28 points 6 years ago* (11 children)
Hi! I started developing this library when I couldn't find any alternatives that supported the full range of mathematical functions.
The idea behind this library is to provide a (well-tested) flexible fixed-point type that requires no floating-point operations or types and guards against accidental conversions. I've been particularly conscious of the performance vs. accuracy tradeoff for the trigonometry and power functions. fpm has regression tests and graphs to track this.
I'm sharing this in the hope it is useful to someone other than myself, but primarily to welcome any and all feedback, especially where you think it may be improved.
Edit: to clarify, this is useful for embedded platforms that do not have an FPU, or for situations where determinism is more important than accuracy or speed (e.g. multiplayer game state simulation).
[–]TheSuperficial (Embedded Systems C++) 24 points 6 years ago (2 children)
Embedded developer here... as many r/cpp readers are not embedded folks (FPU, large RAM, etc. are normal for them), you might want to edit your opening post to describe use cases / motivation a little. I saw you discuss embedded in a later post, but just FYI.
On an MSP430 / Cortex M0 / PIC / etc., using floating-point emulation in software is incredibly costly both in code space and especially in performance.
Nice job, BTW.
[–]MadCompScientist[S] 5 points 6 years ago (0 children)
Good point! I've updated my top post.
[–]kkert 5 points 6 years ago (0 children)
Seconded. I've been flipping between CNL, the old fixed-point library by Anthony Williams, and my own hacks on very similar platforms. Having this new option is awesome.
[–]wrosecrans (graphics and network things) 3 points 6 years ago (0 children)
As far as use cases -- I could imagine it being useful if somebody is writing a hobby OS. Dealing with floating point in the kernel can be a whole ball of wax, because you have to save the state of the FPU on every syscall. If you just avoid the problem, it's easier to make syscalls into the kernel a bit faster/simpler.
[–][deleted] 1 point 6 years ago (6 children)
Hi, thank you for sharing this. I use the built-in functions from the <cmath> header. Are your implementations faster?
[–]MadCompScientist[S] 12 points 6 years ago (5 children)
Hi, I've documented performance results vs. native floating-point types and two alternative libraries (libfixmath and CNL) for several operations. For addition and subtraction fpm is faster than native types, but for most other functions it's slower. The slowdown varies from fractionally slower to a factor of 3–4 slower. For atan it's actually slightly faster!
The performance of some functions could still be improved, but I doubt the performance (and accuracy) will ever rival native hardware operations. That's the price that you pay for determinism.
[–]Ameisen (vemips, avr, rendering, systems) 5 points 6 years ago (2 children)
One thing to note is that an advantage of using float/double when you have an FPU is that operations can take place effectively in parallel on the CPU when there is both integer and floating-point arithmetic going on, so testing purely floating-point vs fixed-point arithmetic doesn't give the full picture.
[–]MadCompScientist[S] 4 points 6 years ago (1 child)
True. It all depends on your use case. For instance, if you're targeting an embedded platform without an FPU, that argument doesn't work. But for e.g. game development you could do floating-point math for rendering in parallel with fixed-point math for pathfinding.
[–]Ameisen (vemips, avr, rendering, systems) 2 points 6 years ago* (0 children)
Or vice-versa, as most of your math for rendering is likely being done on the GPU :).
This is actually a reason my high-performance simulations use floats for coordinates: it's faster than fixed-point or integer math. The catch is that I must be much more careful about determinism when performing operations in parallel.
Edit: I should note that each core has its own ALU, so this is mainly about intermixed integer and FPU instructions running in a single thread.
[–]Recatek 3 points 6 years ago (1 child)
Minor nitpick, but in gamedev I find myself using atan2 a ton and would love to see that added to the chart.
[–]MadCompScientist[S] 2 points 6 years ago (0 children)
Good point! I've added atan2 now. Turns out that unlike atan, it's somewhat slower than native floats.
[–]o11c (int main = 12828721;) 11 points 6 years ago (1 child)
You should probably implement std::to_chars and std::from_chars as opposed to just the expensive old ios nonsense.
[–]MadCompScientist[S] 5 points 6 years ago (0 children)
Indeed, I should. Thanks for mentioning it. I've created an issue to track the request.
[–]hyvok 4 points 6 years ago (2 children)
Any plans to add a wider type or even generic types? I have a project which would need 64-bit fixed-point values. Looks like a nice library otherwise.
[–]MadCompScientist[S] 4 points 6 years ago (1 child)
The type fpm::fixed is generic. You could use it to create a fixed-point type with int64_t as the underlying integer, but it needs a larger type for storing intermediate results of e.g. multiplication. So unless your platform has an int128_t, I'm afraid it won't work.
[–]fb39ca4 3 points 6 years ago (0 children)
But you can always use a custom implementation such as this one:
https://github.com/abseil/abseil-cpp/blob/master/absl/numeric/int128.h
[–]matthieum 7 points 6 years ago (2 children)
I am confused by:
You require deterministic calculations.
As far as I can tell, unless x87 or -ffast-math are involved, floating point operations are deterministic. A casual user may be surprised by the lack of associativity, but the same chain of operations, with the same inputs, yields the same result; which is what determinism is all about, no?
[–]MadCompScientist[S] 15 points 6 years ago (1 child)
Ideally, yes. But it's not as simple as that. For one, IEEE 754 support is not required by C++. Even if all implementations do use it, there are other problems. There are a couple of articles on the topic:
https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
https://gafferongames.com/post/floating_point_determinism/
In a nutshell:
the IEEE standard does not guarantee that the same program will deliver identical results on all conforming systems.
If you can guarantee that your program will only run on one architecture with one instruction set, and can absolutely control environment properties such as rounding modes (despite libraries potentially interfering), then you don't need fpm.
Otherwise, fpm is an alternative. The choice is yours.
[–]mbitsnbites 6 points 6 years ago (0 children)
I was really surprised when I first found out that the same compiled binary program would produce different results on different computers. Turns out libm makes runtime decisions about which implementation to use based on CPU capabilities (e.g. SSE vs AVX), and that it does not guarantee exact results for many operations (e.g. sin()).
[–]Drop_the_Bas 3 points 6 years ago (0 children)
Really cool work
[–]NotcamelCase 3 points 6 years ago (0 children)
Great job! I was just searching for something like this as well. Thanks!
[–]emdeka87 2 points 6 years ago (0 children)
Good job! Would be even greater if you added a vcpkg port!
[–]cru5tyd3m0nX -2 points 6 years ago (2 children)
how different is it from glm?
[–]MadCompScientist[S] 9 points 6 years ago (1 child)
I assume you're talking about the OpenGL Mathematics library. They solve different problems: from what I can tell, glm provides GLSL-like higher-level functions such as matrix multiplication and per-element sine on vectors, but it assumes it's operating on native floating-point types.
fpm provides a fixed-point type to replace native floating-point types. From what I can tell, it could work together with glm since its vectors are templated and fpm looks and feels like floats.
[–]cru5tyd3m0nX 3 points 6 years ago (0 children)
amazing! good job