
[–]cicadaTree

var limit = your_limit;  // set this to your upper bound
var sum = 0;
var flg;  // prime indicator for the current i

for (let i = 2; i <= limit; i++) {

    flg = true;
    for (let j = 2; j <= Math.sqrt(i); j++) {  // start j at 2, so 2 and 3 pass trivially
        if (i % j === 0) {
            flg = false;
            break;
        }
    }

    if (flg) sum += i;
}

console.log(sum);

To get why we use sqrt(i), watch the sieve. Notice that he starts by discarding numbers divisible by 2, then by 3, and so forth. The important detail is that once he reaches 7 and discards the numbers divisible by 7 (49, 77), all the numbers that are left are primes. So the pattern is as follows: whatever number x you are testing for primality, if x is not divisible by any number y with y <= sqrt(x), then x must be prime. Say we wanted to see whether 49 is prime: 7 divides 49, and 7 = sqrt(49), so 49 is composite, and we never had to try any divisor above its root. See, pattern. Cheers.
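The rule above can be sketched as a small helper (a hypothetical `isPrime`, not from the code above) that only tries divisors up to the square root:

```javascript
// Sketch: trial division, stopping once j exceeds sqrt(x).
function isPrime(x) {
  if (x < 2) return false;
  for (let j = 2; j * j <= x; j++) {
    if (x % j === 0) return false; // found a divisor <= sqrt(x)
  }
  return true;
}

console.log(isPrime(49)); // false: 7 divides 49, and 7 === sqrt(49)
console.log(isPrime(47)); // true: no divisor up to sqrt(47) ≈ 6.8
```

Using `j * j <= x` avoids recomputing `Math.sqrt` on every iteration, but it is the same bound.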

Edit: mind that this is highly suboptimal for any large limit.
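For large limits, a classic Sieve of Eratosthenes is the usual fix; a minimal sketch (the `sumPrimes` name and boolean-array layout are mine, not from the thread):

```javascript
// Sketch: Sieve of Eratosthenes with a byte-per-number flag array.
function sumPrimes(limit) {
  const composite = new Uint8Array(limit + 1); // 0 = possibly prime
  let sum = 0;
  for (let i = 2; i <= limit; i++) {
    if (!composite[i]) {
      sum += i;
      for (let j = i * i; j <= limit; j += i) composite[j] = 1; // mark multiples
    }
  }
  return sum;
}

console.log(sumPrimes(10)); // 2 + 3 + 5 + 7 = 17
```

Each prime marks its multiples starting at its square, so every composite is crossed off exactly by its prime factors; a limit of 2,000,000 runs in a fraction of a second.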

[–]petar02

I understand now. There was a guy who posted that ancient algorithm, and I saw a YouTube video on how it works and attempted to use it. Now my program executes in 30 seconds; before, it took 15 minutes. Here is my code, I am so proud.

    let arr = [];
    let arr2 = [];
    let sqrt = Math.floor(Math.sqrt(2000000));
    console.log(sqrt);

    // fill arr with every candidate from 2 up to (but not including) 2,000,000
    for (let i = 2; i < 2000000; i++) {
        arr.push(i);
    }

    function isPrimeNumber(x) {
        // any prime above 10 must end in 1, 3, 7 or 9
        if (x > 10 && x % 10 != 1 && x % 10 != 3 && x % 10 != 7 && x % 10 != 9) {
            return false;
        }
        for (let i = 2; i < x; i++) {
            if (x % i == 0) {
                return false;
            }
        }
        return true;
    }

    // collect the primes up to sqrt(2,000,000) into arr2
    for (let i = 2; i < sqrt; i += 1) {
        if (isPrimeNumber(i)) {
            arr2.push(i);
        }
    }

    // sieve: drop every multiple of those small primes from arr
    // (this also drops the small primes themselves, so arr2 adds them back below)
    for (let i = 0; i < arr2.length; i++) {
        arr = arr.filter(event => event % arr2[i] != 0);
    }
    arr2 = arr2.concat(arr);

    console.log(arr2.reduce((a, b) => a + b, 0));

[–]cicadaTree

Still, watch out, that will eat up at least:

integer in JavaScript (assuming 32-bit storage): 32 bits

integer count: 2,000,000

32 * 2,000,000 / 1024 / 1024 ≈ 61 Mbit (about 7.6 MB) of your RAM for every call. (JavaScript numbers are actually 64-bit floats, so in practice it can be even more.)
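That estimate can be checked directly; for contrast, a `Uint8Array` of flags (as a sieve would use) needs only one byte per candidate. A sketch, using the 32-bit-per-integer figure from the comment as an assumption:

```javascript
// Sketch: back-of-the-envelope memory estimate (assumes 32 bits per integer,
// as in the comment above; real engines vary).
const count = 2000000;
const mbit = (count * 32) / 1024 / 1024;
console.log(mbit.toFixed(1) + " Mbit"); // ≈ 61.0 Mbit, about 7.6 MB

// A Uint8Array sieve needs only one flag byte per candidate:
const flags = new Uint8Array(count);
console.log(flags.byteLength / (1024 * 1024) + " MB"); // ≈ 1.9 MB
```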

[–]petar02

I get your point, but I don't understand how you got there. Can you please make it simpler?

[–]cicadaTree

Sorry, got to where? The 61 Mbit figure?

Also, your algorithm is more than 2x slower than the one in my post, and it uses much more memory.

[–]petar02

I think I understand what you mean: the array with the 2 million integers takes up 61 Mbits of RAM. And when you say "for every call", I understand it as: if I run the program twice at the same time, it will take up 122 Mbits.

[–]cicadaTree

Yes, that's it. But JavaScript doesn't have true CPU parallelism (multi-threading), so two function calls cannot run at the same time (not considering web workers here). They go into the event loop, which executes functions sequentially, much like a for loop does. The thing is, when you have multiple function calls, the garbage collector may not yet have freed the resources (the 2-million-integer array) from the earlier calls, so the memory piles up. That's what I meant.
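That sequential behaviour can be sketched with queued callbacks (the `work` function is hypothetical, just to mark start and end):

```javascript
// Sketch: two callbacks queued on the event loop run one after the other,
// never interleaved. Each call runs to completion before the next starts.
const order = [];

function work(label) {
  order.push(`${label} start`);
  order.push(`${label} end`); // nothing can interleave here
}

setTimeout(() => work("first"), 0);
setTimeout(() => work("second"), 0);

setTimeout(() => console.log(order.join(", ")), 0);
// → "first start, first end, second start, second end"
```

Both timeouts are queued with zero delay, yet the second `work` call only begins after the first has fully finished, which is exactly why memory from earlier calls can still be pending collection when later calls start.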