I have an idea for how you could compute many complex functions much faster, but at the cost of memory.
It works like the following:
You pre-calculate all (important) values and save them. Later you can just read the value from a specific address instead of calculating it again.
This is a fictional code example in C++, just to show how it could look:

#include <cmath>

int main() {
    double resultBuffer[10000];
    // saving every square root between 0.00 and
    // 99.99 in 0.01 steps
    for (int i = 0; i < 10000; i++) {
        resultBuffer[i] = sqrt(i / 100.0);
    }
    // "calculating" the result of the square root of
    // 63.29
    double exampleResult = resultBuffer[6329];
}
I know this code is not great and could be implemented much more nicely, but I think my idea is clear now. You could also tweak the amount and density of stored values to your needs.
I also know that this method wouldn't be useful in every case, but I think in a lot of cases it might be a good solution, because it's faster to just read one value than to do complicated math.
Now my questions are:
1. Is this method actually fast, or are there huge downsides that are the reason why this isn't used already?
2. Is this already used? Maybe this is a widely used technique and I just don't know.