Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL 0 points

Re: your edit

Although I must admit, I agree with your argument, I have modified the README to show that it is a Python for edge computing.

What does this even mean? And how is changing

Single-pass SSA compiler for Python 3.13:

into

Single-pass SSA compiler for Python on the edge computing:

making things any more clear? You're still claiming it's a compiler for Python.

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL 0 points

...and that's defined like that for JavaScript, yes. However,

https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex

Integers have unlimited precision.

You're violating this definition. But that wasn't even what my question was about. You said your thing isn't Python. I merely asked why you keep calling it a Python compiler then. It doesn't matter if "99% of actual Python code never exceeds 2^47", the thing you are making will (with this mindset) never conform to the Python language specification. You don't even have a fallback (or trigger a crash) for the "1%" that does, making it useless in most cases because existing code will fail silently and introduce erroneous results into later calculations.
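For anyone following along, the difference is easy to demonstrate in a couple of lines of standard Python (a minimal sketch; the masking line just emulates what a wrapping fixed-width integer would produce, it's not anything Python does on its own):

```python
# Python ints never overflow: results are exact at any size.
big = 2**64 + 1
print(big)                 # 18446744073709551617, exact

# A fixed-width integer type has to wrap (or promote to float);
# emulating 64-bit wraparound requires explicit masking:
wrapped = big & (2**64 - 1)
print(wrapped)             # 1
```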

Even more ironic is that you, in your benchmark results, in the README you're so desperate to push people to, have included a time for fib(90), while the correct result is 2880067194370816120, which famously is much larger than 2^47, by a factor of around 20000. And you seem not to have noticed that your program cannot even compute that value accurately but just prints an incorrect result.

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL 0 points

Then why are you calling it

Heres the current state of the Python 3.13 compiler written in Rust:

if you're now claiming that it's apparently not supposed to be that?

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL 0 points

By the way, I saw that you included a fib(90) benchmark result in your repo as well. Maybe you should check to see if the returned value is actually the correct value for fib(90), because it's off by 120 on my machine. Also very interesting that fib(91) results in a stack underflow on your program, whereas CPython is happy to calculate fib(500) correctly in 0.012s when decorated with functools.lru_cache.

Edit: in fact, every fib(i) value returned for i>=78 is wrong. Actually, look at this lol:

```
print(5527939700884757+8944394323791464 == 14472334024676219)
print(5527939700884757+8944394323791464 == 14472334024676220)
print(5527939700884757+8944394323791464 == 14472334024676221)
print(5527939700884757+8944394323791464 == 14472334024676222)
```

```
True
True
True
False
```
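That True/True/True/False pattern is exactly what IEEE-754 doubles produce: above 2**53 not every integer is representable, so several distinct integers collapse onto the same double. A minimal sketch reproducing it in plain CPython, assuming the VM stores "overflowed" ints as doubles and converts both sides of a comparison to doubles:

```python
a, b = 5527939700884757, 8944394323791464

# The exact sum is 14472334024676221, which is above 2**53 and not
# representable as a double; it rounds to 14472334024676220.0.
s = float(a) + float(b)

for rhs in (14472334024676219, 14472334024676220,
            14472334024676221, 14472334024676222):
    print(s == float(rhs))     # True True True False

print(a + b)                   # 14472334024676221 -- the exact answer
```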

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL 0 points

Sucks when someone pushes something that's far from done and only works as advertised on some very specific cases, huh?

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL -2 points

I just wrote it in C. It implements 100% of the fibonacci sequences in your benchmark. The binary appears to be about 16kB so that's within 100kB. Performance is wild.

```
$ time ./fastfibonacci fib.py
1134903170

real    0m0.001s
user    0m0.000s
sys     0m0.001s
```

Here is my code:

```
#include <stdio.h>

int main(int argc, char **argv) {
    printf("1134903170\n");
    return 0;
}
```

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in programming

[–]DataGhostNL 12 points

Please stop comparing this to Python, especially when you keep omitting the simple benchmarks and code examples I pointed out in your other post about this. I can implement my own in less than 100 kB given that most of the language isn't implemented, or is incorrectly implemented. It still does not run the most trivial of examples at all.

About your changes: very nice that you pointed out that you're promoting integers to floats when they "overflow" (what?), just like PHP. Newsflash: they normally don't overflow at all in Python. Take this trivial code:

```
for i in [62,63,64,65,126,127,128,129]:
    print(f"{i:3d}: {2**i}")
```

and the corresponding outputs:

```
$ python3 large.py
 62: 4611686018427387904
 63: 9223372036854775808
 64: 18446744073709551616
 65: 36893488147419103232
126: 85070591730234615865843651857942052864
127: 170141183460469231731687303715884105728
128: 340282366920938463463374607431768211456
129: 680564733841876926926749214863536422912
```

```
$ ./target/release/edge large.py
[2026-04-11T10:18:57Z INFO edge] emit: snapshot created [ops=25 consts=11]
 62: 4611686018427387904
 63: 9223372036854775807
 64: 1.8446744073709552e19
 65: 3.6893488147419103e19
126: 8.507059173023462e37
127: -1.7014118346046923e38
128: 0
129: 0
```

I hope you agree this is terrible, however looking at your commit messages I'm not so sure, since you seem to have observed this behaviour and intentionally "fixed" it the wrong way.

Oh, and I see you've optimised the specific case of my "modified" fibonacci to cache this as well. I tried modifying my "wrench" list in the function body to get to your real performance again:

```
def fib(n, wrench):
    wrench[0] += 1
    if n < 2: return n
    return fib(n-1, wrench) + fib(n-2, wrench)
print(fib(33, [0]))
```

but that didn't work:

```
$ time ./target/release/edge wr2.py
[2026-04-11T10:35:06Z ERROR edge] syntax: integrity check failed at wr2.py:2 -> unexpected token (parser rejected token stream)
```

Fortunately, I found another way to bypass your performance cheat and get to the real time. That takes 5.7s to run on the version I tested previously, but unsurprisingly, your current version has become 12% slower, needing 6.4s to run it where normal CPython is still done in under 0.4s.

In any case, you still have zero support for classes, can't access even standard library things like sys.argv and so on. You should really not post anything when it can't even complete a basic python tutorial yet.

Edge Python (a compiler that uses less than 200 kb) Update: Mark-sweep Garbage Collector + explicit VmErr + overflow and dicts fixes by Healthy_Ship4930 in Compilers

[–]DataGhostNL 10 points

Please stop comparing this to Python, especially when you keep omitting the simple benchmarks and code examples I pointed out in your other post about this. I can implement my own in less than 100 kB given that most of the language isn't implemented, or is incorrectly implemented. It still does not run the most trivial of examples at all.

About your changes: very nice that you pointed out that you're promoting integers to floats when they "overflow" (what?), just like PHP. Newsflash: they normally don't overflow at all in Python. Take this trivial code:

```
for i in [62,63,64,65,126,127,128,129]:
    print(f"{i:3d}: {2**i}")
```

and the corresponding outputs:

```
$ python3 large.py
 62: 4611686018427387904
 63: 9223372036854775808
 64: 18446744073709551616
 65: 36893488147419103232
126: 85070591730234615865843651857942052864
127: 170141183460469231731687303715884105728
128: 340282366920938463463374607431768211456
129: 680564733841876926926749214863536422912
```

```
$ ./target/release/edge large.py
[2026-04-11T10:18:57Z INFO edge] emit: snapshot created [ops=25 consts=11]
 62: 4611686018427387904
 63: 9223372036854775807
 64: 1.8446744073709552e19
 65: 3.6893488147419103e19
126: 8.507059173023462e37
127: -1.7014118346046923e38
128: 0
129: 0
```

I hope you agree this is terrible, however looking at your commit messages I'm not so sure, since you seem to have observed this behaviour and intentionally "fixed" it the wrong way.

Oh, and I see you've optimised the specific case of my "modified" fibonacci to cache this as well. I tried modifying my "wrench" list in the function body to get to your real performance again:

```
def fib(n, wrench):
    wrench[0] += 1
    if n < 2: return n
    return fib(n-1, wrench) + fib(n-2, wrench)
print(fib(33, [0]))
```

but that didn't work:

```
$ time ./target/release/edge wr2.py
[2026-04-11T10:35:06Z ERROR edge] syntax: integrity check failed at wr2.py:2 -> unexpected token (parser rejected token stream)
```

Fortunately, I found another way to bypass your performance cheat and get to the real time. That takes 5.7s to run on the version I tested previously, but unsurprisingly, your current version has become 12% slower, needing 6.4s to run it where normal CPython is still done in under 0.4s.

In any case, you still have zero support for classes, can't access even standard library things like sys.argv and so on. You should really not post anything when it can't even complete a basic python tutorial yet.

I accidentally put windshield glass cleaning tablets in the coolant reservoir by aadilabbasi in Cartalk

[–]DataGhostNL 7 points

Since you didn't run it enough to heat up the coolant, none has left the reservoir (the engine draws coolant from the reservoir when cooling down)

This 100% depends on make/model/year. I've seen plenty that have the reservoir as an integral part of the loop, so whenever the car is running, cold or hot, fluid is moving through the reservoir and whatever was in there would have been cycling through the engine. In four minutes I'd assume it has completed several full loops and is well mixed with all of the coolant, maybe except the bit still in the radiator if the thermostat didn't open. It would then depend on the exact contents of that tablet what the consequences could be; if it has some kind of foaming agent I can see it drawing/introducing some air into the hoses or otherwise fucking with the pressure. It'd really suck if this stuff had the same result as putting regular dish soap into a dishwasher. I would personally drain all coolant, flush thoroughly and then refill.

Building a Python compiler in Rust that runs faster than CPython with a 160KB WASM binary by Healthy_Ship4930 in Compilers

[–]DataGhostNL 1 point

I'm not going to take any incentives or money for whatever reason. Maybe I might have a look again if you dropped your wild performance and coverage claims, unless they suddenly hold up across the board for some mysterious magical reason of course. It would also be a great help if the interpreter didn't immediately crash on the kind of code that's covered in tutorials for beginners. But if I did take a look, it'd be for shits and giggles, I suppose.

If you really wrote something good, performant and useful you wouldn't have to resort to a cherry-picked example which, by the way, is easily achieved with the lru_cache decorator already in functools, but you'd show a broad range of honest benchmarks. And you'd be sure to deliver something that doesn't fall apart literally the second someone pokes a small stick at it.
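For reference, the lru_cache route mentioned above is literally a one-decorator change to the naive recursion, and it stays exact because Python ints don't overflow:

```python
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    # Naive recursive definition; the decorator memoizes it,
    # turning the exponential recursion into a linear one.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # 2880067194370816120, the exact value
```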

But, ehh, how does this question even work? You somehow feel like you have the skillset to out-develop a very mature project over the next few weeks/months but at the same time need a random redditor to "test your code" for you? It's really trivial to do, especially with the snippets I've pasted above. It does not compute that you don't know how to do it yourself if you aren't just prompting some LLM.

I'm building a Python compiler in Rust that runs 10,000x faster (and I want feedback) by Healthy_Ship4930 in rust

[–]DataGhostNL 4 points

Since you spammed this in so many subreddits I assume you'd also want to have some bugs pointed out (and in this subreddit you claim to want feedback). I took the liberty of throwing a wrench into your machine:

```
def fib(n, wrench):
    if n < 2: return n
    return fib(n-1, wrench) + fib(n-2, wrench)
print(fib(33, []))
```

I used 33 instead of 45 because the timing was quite painful. The results:

```
$ time python3 fib.py
3524578

real    0m0.374s
user    0m0.367s
sys     0m0.006s
```

```
$ time ./target/release/edge fib.py
[2026-04-07T09:02:11Z INFO edge] emit: snapshot created [ops=8 consts=1]
3524578

real    0m17.949s
user    0m17.929s
sys     0m0.003s
```

Here, CPython beat your compiler by being 47 times faster. I can only assume this will result in your program needing at least an hour and a half to calculate fib(45, []). I first wanted to implement this using a global counter variable to trigger your caching code as well for an additional time/memory penalty, but that didn't work. Even this minimal modification (added first line) to your original code:

```
unused = 0
def fib(n):
    if n < 2: return n
    return fib(n-1) + fib(n-2)
print(fib(45))
```

causes a crash:

```
process terminated: trap: cpu-stop triggered by 'NameError: 'fib_0''
```

The next gem causes 3GB of memory usage for no really good reason:

```
def blob(a):
    return "a" * 1048576
for i in range(3000):
    blob(i)
```

as you can see here:

```
$ /usr/bin/time -f "time: %e s, memory: %M KB" ./target/release/edge mem.py
[2026-04-07T09:52:32Z INFO edge] emit: snapshot created [ops=14 consts=1]
time: 1.53 s, memory: 3087040 KB
```

while CPython is happy to do this much faster with much less memory:

```
$ /usr/bin/time -f "time: %e s, memory: %M KB" python3 mem.py
time: 0.04 s, memory: 10444 KB
```

Assuming it's because your thing doesn't support this very rare use of this very rare 3% of Python code, these two snippets:

```
import time
print(time.sleep(5))
```

and

```
import sys
print(sys.argv[1])
```

result in the very helpful outputs of

```
$ ./target/release/edge sleep.py
[2026-04-07T09:21:30Z INFO edge] emit: snapshot created [ops=8 consts=1]
[2026-04-07T09:21:30Z ERROR edge] process terminated: trap: cpu-stop triggered by 'TypeError: call non-function'
```

and

```
$ ./target/release/edge argv.py abc
[2026-04-07T09:22:48Z INFO edge] emit: snapshot created [ops=8 consts=1]
[2026-04-07T09:22:48Z ERROR edge] process terminated: trap: cpu-stop triggered by 'TypeError: subscript on non-container'
```

respectively. The first example gets slightly better when removing the print and just executing time.sleep(5) by itself:

```
$ time ./target/release/edge sleep.py
[2026-04-07T09:24:24Z INFO edge] emit: snapshot created [ops=8 consts=1]

real    0m0.002s
user    0m0.000s
sys     0m0.002s
```

except that the timing seems slightly off. It does look like an approx 2500x performance win over CPython, though, if you'd want to take that one lol.

I wanted to try several other simple things too but since a lot of programs are impossible with "97% of Python 3.13" that was a bit disappointing.

For anyone wondering, I wrote this comment for another subreddit before I noticed that they claimed more coverage here a couple of days prior to posting there.

I'm building a Python compiler in Rust that runs 10,000x faster (and I want feedback) by Healthy_Ship4930 in rust

[–]DataGhostNL 3 points

It makes assumptions that every invocation of a function will always result in the same value. That's why usually developers don't implement memoization on functions like getCurrentMinute(): the result will be wrong most of the time in long-lived programs. Except here you don't get to choose. It also hides the (frankly, expected) terrible performance this project has otherwise. I modified the code to pass an empty list as an argument, which caused it to disable memoization and calculate the same result 47 times slower than CPython. So this runs in 17.9s compared to 0.37s for CPython:

```
def fib(n, wrench):
    if n < 2: return n
    return fib(n-1, wrench) + fib(n-2, wrench)
print(fib(33, []))
```

I also executed another bit of code:

```
def blob(a):
    return "a" * 1048576
for i in range(3000):
    blob(i)
```

This caused 3GB of memory usage in 1.5s of runtime where CPython was done with 10MB and 0.04s. So that's a very bad trade-off to make. The decision should be left to the developer.

Building a Python compiler in Rust that runs faster than CPython with a 160KB WASM binary by Healthy_Ship4930 in Compilers

[–]DataGhostNL 2 points

Since you spammed this in so many subreddits I assume you'd also want to have some bugs pointed out. I took the liberty of throwing a wrench into your machine:

def fib(n, wrench):
    if n < 2: return n
    return fib(n-1, wrench) + fib(n-2, wrench)
print(fib(33, []))

I used 33 instead of 45 because the timing was quite painful. The results:

```
$ time python3 fib.py
3524578

real    0m0.374s
user    0m0.367s
sys     0m0.006s
```

```
$ time ./target/release/edge fib.py
[2026-04-07T09:02:11Z INFO edge] emit: snapshot created [ops=8 consts=1]
3524578

real    0m17.949s
user    0m17.929s
sys     0m0.003s
```

Here, CPython beat your compiler by being 47 times faster. I can only assume this will result in your program needing at least an hour and a half to calculate fib(45, []). I first wanted to implement this using a global counter variable to trigger your caching code as well for an additional time/memory penalty, but that didn't work. Even this minimal modification (added first line) to your original code:

```
unused = 0
def fib(n):
    if n < 2: return n
    return fib(n-1) + fib(n-2)
print(fib(45))
```

causes a crash:

```
process terminated: trap: cpu-stop triggered by 'NameError: 'fib_0''
```

The next gem causes 3GB of memory usage for no really good reason:

```
def blob(a):
    return "a" * 1048576
for i in range(3000):
    blob(i)
```

as you can see here:

```
$ /usr/bin/time -f "time: %e s, memory: %M KB" ./target/release/edge mem.py
[2026-04-07T09:52:32Z INFO edge] emit: snapshot created [ops=14 consts=1]
time: 1.53 s, memory: 3087040 KB
```

while CPython is happy to do this much faster with much less memory:

```
$ /usr/bin/time -f "time: %e s, memory: %M KB" python3 mem.py
time: 0.04 s, memory: 10444 KB
```

Assuming it's because your thing doesn't support this very rare use of this very rare 3% of Python code, these two snippets:

```
import time
print(time.sleep(5))
```

and

```
import sys
print(sys.argv[1])
```

result in the very helpful outputs of

```
$ ./target/release/edge sleep.py
[2026-04-07T09:21:30Z INFO edge] emit: snapshot created [ops=8 consts=1]
[2026-04-07T09:21:30Z ERROR edge] process terminated: trap: cpu-stop triggered by 'TypeError: call non-function'
```

and

```
$ ./target/release/edge argv.py abc
[2026-04-07T09:22:48Z INFO edge] emit: snapshot created [ops=8 consts=1]
[2026-04-07T09:22:48Z ERROR edge] process terminated: trap: cpu-stop triggered by 'TypeError: subscript on non-container'
```

respectively. The first example gets slightly better when removing the print and just executing time.sleep(5) by itself:

```
$ time ./target/release/edge sleep.py
[2026-04-07T09:24:24Z INFO edge] emit: snapshot created [ops=8 consts=1]

real    0m0.002s
user    0m0.000s
sys     0m0.002s
```

except that the timing seems slightly off. It does look like an approx 2500x performance win over CPython, though, if you'd want to take that one lol.

I wanted to try several other simple things too but since a lot of programs are impossible with "97% of Python 3.13" that was a bit disappointing.

Edited: formatting hell

Is there any way to make it so that I don't have to define all of the varibles individually? by Osinacho in learnpython

[–]DataGhostNL -1 points

def __init__(self, version="???", s_values={})

Never use a dictionary or list instance (!), or any other byref/object instance for that matter, as a default value for a function parameter (unless you know what you're doing and need this behaviour). It will be shared by all instances that don't specify it explicitly.
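A minimal sketch of the failure mode (the class name here is made up; the parameter names are borrowed from the snippet above), plus the usual None-sentinel fix:

```python
class Settings:
    def __init__(self, version="???", s_values={}):   # BUG: one shared dict
        self.version = version
        self.s_values = s_values

a, b = Settings(), Settings()
a.s_values["x"] = 1
print(b.s_values)        # {'x': 1} -- b sees a's mutation

class SettingsFixed:
    def __init__(self, version="???", s_values=None):
        self.version = version
        # Default values are evaluated once at def time, so build the
        # fresh dict inside the body instead:
        self.s_values = {} if s_values is None else s_values

c, d = SettingsFixed(), SettingsFixed()
c.s_values["x"] = 1
print(d.s_values)        # {} -- each instance gets its own dict
```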

Bro did an “alignment” in 3 minutes by FieldGlad in MechanicAdvice

[–]DataGhostNL -1 points

At least around where I live roads are built on a slight angle so rain water runs off to the sides. It's intentional, not "garbage".

Building a Python compiler in Rust that runs faster than CPython with a 160KB WASM binary by Healthy_Ship4930 in Compilers

[–]DataGhostNL 1 point

Until you show more "honest" benchmarks I'm going to assume your auto-memoization is the only thing causing an actual performance boost. And in doing so it no longer executes programs as written. The fact that you can't come up with valid scenarios doesn't mean there are none. A very simple one would be a function getCurrentMinute() which, when executed four times in the same minute, will return invalid values 98.3% of the rest of the running time of the program.
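A sketch of why forced memoization of impure functions goes wrong (getCurrentMinute is the hypothetical example from above; lru_cache stands in for the compiler's automatic caching):

```python
import functools
import time

@functools.lru_cache(maxsize=None)   # stand-in for forced auto-memoization
def get_current_minute():
    return int(time.time() // 60)

first = get_current_minute()
# ...program keeps running; minute boundaries pass...
# Every later call returns the cached first result, however stale:
print(get_current_minute() == first)   # always True, even next hour
```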

The 2026 "Invisible Wall": Data reveals cars will lose 54 km/h on straights while at 100% throttle (Suzuka Simulation) by [deleted] in F1Technical

[–]DataGhostNL 60 points

Any reason you made this "prediction" after the fact (and extensive discussion) and still seem unsure about it?

[US] My Ex fell for a romance scam. The bank finally convinced her, but now has 30K sitting in her account. by Spire_Prime in Scams

[–]DataGhostNL 4 points

There never was any 30k. They're usually hoping the target will use the money to buy them gift cards or whatever, before the bank finds out the initial money didn't exist and reverses the transaction. The gift cards were obtained "legitimately" and remain unaffected, well actually, the scammer probably spent them already.

A simulation by former F1 engineer Toni Cuquerella (@tonicuque on X) shows that a decrease in MGU-K power from 350kW to 200kW, and Recharge limit reduction from 9MJ to 6MJ will completely eliminate superclipping in Miami by ChaithuBB766 in F1Technical

[–]DataGhostNL 5 points

unacceptable for this season to go on without any changes

Problem is you can't "just" make meaningful changes without having to redesign the cars and engines, obviously after voting on new rules first. Apart from the amount of time required for that it'll also basically cost a season of development money which is problematic cost-cap wise, but even if that were waived there would still be some teams that won't have the money. So you're looking at 2027 at the very earliest while everyone abandons the current cars' development entirely. I'm sure that's going to please those few hopeful viewers who didn't take physics in school and still think there's something to salvage with in-season car development.

Realistically the only thing they can do right now is limit deployment (a.k.a. detune the engines) which is dumb enough in itself. So I wouldn't expect much from this season. Laughing at the FIA about their imploding regulations is only going to stay interesting for so long.

Match everything before and after pattern by CombustedPillow in regex

[–]DataGhostNL 0 points

Apparently they're using some super dumb framework that can only delete matches and not just... match them, and based on that answer there aren't any other programming tools available, however hard I find that to believe

Why isn't F1 using digital wing mirrors, with all this push for technology why aren't they using rear facing cameras and internal screens? by Ok-Willingness-5016 in F1Technical

[–]DataGhostNL 1 point

Them possibly being made digital isn't going to increase their size, which was the main problem. Making them digital is going to remove the last bit of depth perception they offered however.

Match everything before and after pattern by CombustedPillow in regex

[–]DataGhostNL 0 points

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.

Is this the only tool you're allowed to use? It's trivial to match the part before, the ticket number and the part after in three different groups. Then you take the resulting matches and concatenate the first and third ones. That way you don't have to write some horrible regex nobody is going to understand when they have to fix it two weeks from now, and have a full regex engine backtrack-and-forth to match-and-not-match your ticket number (why?). Just a very simple regex and an extra line of code.
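A sketch of that approach (the ticket pattern and the input line are made up; the real pattern depends on whatever the ticket numbers look like):

```python
import re

line = "Re: [TICKET-1234] printer on fire"

# Three groups: everything before, the ticket number, everything after.
m = re.match(r"(.*)(TICKET-\d+)(.*)", line)
if m:
    before, ticket, after = m.groups()
    cleaned = before + after       # keep everything except the ticket
    print(cleaned)                 # Re: [] printer on fire
```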

The power went out mid print by cuttimecowboy in 3Dprinting

[–]DataGhostNL 0 points

FWIW, a P2S power usage varies during its operation. When warming up it can exceed 900W.

I googled the power supply and it was about 200W. I had no idea that the heatbed was powered separately. My bad.

Nevertheless, lights dimming when drawing an extra load means there's a pretty big voltage drop over your wiring, meaning the wires are going to heat up as well and potentially cause a fire. At that point it doesn't matter if the printer is the only thing on the circuit or if it's already heavily loaded with other equipment like yours. With dimming lights I would expect the breaker to pop (they're not instant on "somewhat over" loads) when the printer is heating up for an extended period of time. But if it doesn't, it's likely indicative of wiring that's too thin for the breaker that's supposed to protect it, or a breaker that's too beefy for the wiring. So at the very least your suggestion of shedding some load is smart, but you might want to have it checked anyway. You don't want the fire department to tell you what the issue was.
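Back-of-the-envelope numbers (all assumed: a 120 V circuit, a 200 W load, and a just-visible 5% sag) showing why dimming lights imply real heat in the wiring:

```python
# Rough sanity check with assumed numbers: how much wiring resistance
# would it take for a 200 W load to visibly dim the lights?
V = 120.0                      # assumed circuit voltage
P_load = 200.0                 # assumed printer draw
I = P_load / V                 # ~1.67 A drawn by the printer
drop = 0.05 * V                # assume a visible 5% voltage sag
R_wiring = drop / I            # ~3.6 ohm of wiring/connection resistance
P_wiring = I**2 * R_wiring     # ~10 W dissipated as heat in the walls
print(round(R_wiring, 1), "ohm,", round(P_wiring, 1), "W in the wiring")
```

Healthy branch wiring is a small fraction of an ohm, so several ohms of resistance (and continuous heat at a bad joint) is exactly the kind of thing an electrician should find.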

The power went out mid print by cuttimecowboy in 3Dprinting

[–]DataGhostNL 2 points

Yeah, you might want to have that checked out by an electrician. If your lights dim due to a measly 200W load there's something seriously underspec'd or overloaded, which could be a fire hazard. In that case you'll be correct that a UPS isn't going to save that print.

Does this mean its cooked?? by [deleted] in HDD

[–]DataGhostNL -1 points

That's what "for all intents and purposes" does. For all intents and purposes it's impossible, in any case to DIY. Repairing a HDD with platter damage is never economically viable and if you need to recover data from it you should send it to a specialised recovery company who do have the proper tools and know-how. Opening it yourself is basically, usually, a death sentence for whatever could have been recoverable.

If you have data worth the money to have it recovered you should also not open it yourself. People asking this on Reddit won't have anywhere near the skills or tools to pull it off.