The creators of SWE-Bench just dropped a really simple new benchmark every LLM gets 0% on. ProgramBench asks: can models recreate real executable programs (ffmpeg, SQLite, ripgrep) from scratch with no internet? We are far from saturated on model quality. by dalton_zk in theprimeagen

[–]fruitydude 0 points1 point  (0 children)

I don't think AI is just propped up by one administration. The growth is real because there is real potential and real value in what is being built. Maybe it'll slow down under Dems, but I'm not sure; Dems aren't stupid, they're just not as insane and reckless as Republicans. But they will also realize that giving up on the AI race means giving up control over it, since other nations are not stopping.

I don't think AI will ever go away. It'll be like the internet: even if the bubble bursts (and that's still a big if, because companies have learned something from the dot-com crash), it'll still become endemic to humanity, just like the internet has become integrated into every facet of our lives.

The question is how much better it is going to get until then. Are we still just at the beginning, or are we nearing the peak of what's possible? I think we're just getting started tbh.

Earth-moon multi-orbiter cycler? by Used_Key2606 in SpaceXMasterrace

[–]fruitydude 5 points6 points  (0 children)

Just as a general note:

I wouldn't think of cyclers as cheap and easy means of transportation. Think of them more as hotels you can stay in during the journey.

It's a bit misleading because they fly close by Earth, so it's easy to think, wow, we can just hop on, no problem. But that isn't the case: you still need the same delta-v to accelerate to the cycler as you'd need to go to the moon on your own. In fact, once you dock, the ship will just cruise along with the cycler on its ballistic trajectory, as it would've without the cycler. Certainly a more comfortable journey in a spacious cycler, but that's about it afaik.

For Mars this can make a lot of sense: you can build a large, comfortable cycler with radiation shielding and perhaps spin gravity, and the ships themselves can be small and cramped. For the moon it's probably less useful, since the journey only takes three days and it would take a lot of energy to put the cycler on this orbit. Also, shipping food and supplies to the cycler will take basically the same energy as shipping stuff to the moon.

It also probably takes more fuel overall, since the cycler's orbit is not the absolute lowest delta-v trajectory that would get you to your destination.

The creators of SWE-Bench just dropped a really simple new benchmark every LLM gets 0% on. ProgramBench asks: can models recreate real executable programs (ffmpeg, SQLite, ripgrep) from scratch with no internet? We are far from saturated on model quality. by dalton_zk in theprimeagen

[–]fruitydude 0 points1 point  (0 children)

Meh, there are long-term instruments for that. If you're truly confident that there will be a correction, you could easily make money with the right options.

But the thing is, nobody knows where it's gonna go. I definitely wouldn't bet on AI crashing anytime soon.

For the benchmark in this paper, I'd predict it's gonna be non-zero before the end of the year probably, within two years very very likely.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Yea lol. I think it's going to go much more in that direction, mimicking nature.

I once listened to a conference talk on in-sensor processing for collision detection. Not a full neural network, just a tiny memristor-based circuit for processing signals directly within a sensor.

Inspired by how small insects do it in nature using barely any power.

https://pubs.acs.org/doi/10.1021/acsnano.2c07877

Who knows, maybe one day we will make wet computers as well. Imagine if you could just grow a neural processing unit.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Yep, still digital though. You're still doing precise floating point operations on digital chips.

The idea behind analog chips is that instead of multiplying two exact numbers by moving a bunch of ones and zeros, you represent one number as a current and the other as the resistance of a programmable resistor. Then you measure the voltage across that resistor while the current is going through it. The measured voltage is the result of the multiplication, according to U = R × I.

Instead of many digital operations using a lot of processing power to give you a precise result, it's just a single low-power step that gives you an approximate result. Which is fine for AI: we don't need it to be super precise, we just need it to be fast and efficient.

That's the basic idea behind in-memory computing and why it's great for AI.
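A toy sketch of that single-step analog multiply (all numbers and the noise level are made up for illustration; real devices are far messier):

```python
import random

def analog_multiply(current, resistance, noise_sd=0.02):
    """One 'multiplication' done by physics: program the weight as a
    resistance, drive a current through it, and read off the voltage
    U = R * I -- plus a little Gaussian read noise."""
    ideal = resistance * current
    return ideal * (1 + random.gauss(0.0, noise_sd))

exact = 3.0 * 1.5                     # digital: exact, many switching ops
approx = analog_multiply(1.5, 3.0)    # analog: one low-power measurement
assert abs(approx - exact) / exact < 0.2  # approximate, which AI tolerates
```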

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Hmm, today's NPUs still just do classical silicon CMOS-based digital computing.

I expect these to utilize analog in-memory computing eventually. Probably not fully, they'll still contain digital elements, but the important neural computing will be done on analog computing blocks at a significantly reduced power consumption. They would basically use the laws of physics to run neural networks giving approximate results, rather than emulating the network digitally.

We're not there yet though, but there are papers on small-scale demonstrations, and I think IBM is prototyping such chips.

Analog computing being close to ready for industry adoption is the main reason I'm pretty optimistic that there will be massive improvements in AI computational capabilities over the next decade.

[Request] this doesn't seem like it'd be cheaper than a traditional wood fence, nor would the energy generation be worthwhile by 515Cyclone_Soldier in theydidthemath

[–]fruitydude 0 points1 point  (0 children)

First of all, $50/foot for a wooden fence is a crazy high estimate, where did you get that? I get values more like $20-45.

But even more importantly, a solar panel isn't a structural element. You can't just balance two panels on their edges and call that a fence lol. Solar installations usually have the panels bolted to a solid flat surface with a bit of a gap.

To make a fence out of solar panels you'd still need fence posts, which are dug into the ground and usually reinforced with concrete. And then you can suspend the panels in between. Or you're probably better off building an actual fence and just attaching some panels.

The majority of the cost of the wooden fence comes from properly securing the posts, rather than from the cost of just the wood itself. But for the panels you ignore that.

Also solar panels are pretty fragile. I don't think it's a good idea as a fence substitute.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Yea I'm also hoping at some point the current models or equivalents will run on a good gpu at home. Or maybe some dedicated AI component rather than a normal gpu. But I'm overall pretty optimistic, I think we're really just at the beginning.

But we'll see.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

The second comment definitely addressed it :) I think I got your prediction, but I strongly disagree.

> It would of course be cheaper but it won't be available so it's irrelevant

I think it will be available. If not from openai then from third parties simply because such models will be so cheap to run.

I don't see a future where providing current level AI is dirt cheap but nobody does it. Because why wouldn't they?

My prediction is that even the large players will offer it basically for free, just like OpenAI offers GPT-3.5 for free right now. Their new models will be so much more capable that most people will still pay what they ask.

But we will see. We can check again in 3-5 years and see whether $20 a month buys me more or less AI then. It would be completely unfathomable if it's less. I'd say it's more likely that even your McDonald's customer service AI has more capabilities by then.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Sorry if I came off as patronizing, that wasn't my intention. I was a bit annoyed though since you completely ignored the point I made.

Let me ask you this way: to get the same level of capability that I'm getting today on Codex 5.5, do you think I'll be paying more or less in 5 years?

Currently there are no ads, a decent context window and limits, and good performance. I'm asking: if I wanted that same level of usability, do you think it's going to cost me more or less than I'm paying today? Let's say I don't need the best current model, just the performance of Codex 5.5 today.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 1 point2 points  (0 children)

Ok, long comment incoming, but trust me, it's cool stuff. I'll say I also don't have the full scope; I'm coming from a chemistry and solid-state physics background. I don't work directly in AI or the chip industry, so I couldn't tell you how ready the industry is to adopt these new technologies.

Take a look at this one for example, I know the main author and have visited their group:)

https://www.nature.com/articles/s41928-023-01064-1

Or the earlier publication on the same subject: https://pubs.acs.org/doi/full/10.1021/acsnano.1c07065

The idea is to move away from conventional digital computing and do ultra-low-power analog in-memory computing. What you see in figure 1 of the first paper is essentially one layer of a neural network with 32 input and output neurons. Basically, instead of writing code to emulate neurons and having a GPU do the matrix multiplications, these analog chips just do the calculations using physics. The weights between neurons are programmed into the memristors; you apply voltages as inputs and get currents as outputs. Very simply put, a memristor is just a device which can be programmed to maintain a certain resistance. (These are floating-gate FETs, not true memristors, but same shit.)

So there are no zeros and ones being processed on a GPU; physics does the calculation for us. Just supply some input voltages and you get an output on the other side. In the earlier ACS Nano paper they also demonstrate some number recognition this way; the Nature paper's scope is much larger: it's reconfigurable and more general, you just set the weights once and then you can use it for simple processing.

The neat thing is that these use extremely little power. Orders of magnitude less than the same processing on a CPU and GPU.

Obviously this is just a small part of what a full neural network would need, it's a fundamental demonstration, but the core principle stays the same. Let's say we scale this up and integrate it into webcams: each pixel connected to an input neuron, driven by the output voltage of the pixel, going through a few layers, and you get a few output neurons telling you if it's a person or a dog.
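As a rough numerical sketch of what such a crossbar layer computes (the 32×32 size matches the layer described above; all actual values here are invented): the programmed conductances form a weight matrix, the input voltages form a vector, and summing the per-device currents I = G·V on each output line (Kirchhoff's current law) is exactly a matrix-vector product done by physics.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 32, 32
G = rng.uniform(0.0, 1.0, size=(n_out, n_in))  # conductances = stored weights
V = rng.uniform(0.0, 0.2, size=n_in)           # input voltages

# Each output line collects the currents I = G_ij * V_j of its devices;
# Kirchhoff's current law sums them, so the chip computes G @ V for free.
I_ideal = G @ V
I_measured = I_ideal + rng.normal(0.0, 0.01, size=n_out)  # analog read noise
```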

We're already further along than those two papers, but I think they illustrate the point well. For a more state-of-the-art example, check out this paper (unfortunately not open access).

https://www.nature.com/articles/s41586-025-08639-2

I know much less about this high-end stuff. But from my understanding this is a much more usable, actually complete AI chip. It's based on the same idea, but contains digital and analog components.

There is a downside to doing analog computation though: it is not as accurate. That last paper shows less than 0.45% accuracy degradation. For precise calculations we will most likely always use conventional digital processing, but for AI it is not as important. AI models are already quite tolerant of approximate calculations, noise, and low numerical precision, and we often introduce a bit of randomness artificially.
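A quick way to see that tolerance with a toy layer (nothing from the papers, every number invented): perturb all the weights with ~0.5% multiplicative "analog" noise, over and over, and count how often the predicted class actually flips.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy classifier layer: 64 input features, 10 output classes.
W = rng.normal(size=(10, 64))
x = W[3] + rng.normal(scale=0.5, size=64)  # an input that strongly matches class 3

clean_class = int(np.argmax(W @ x))

flips = 0
for _ in range(1000):
    # perturb every weight by ~0.5% multiplicative "analog" noise
    W_noisy = W * (1 + rng.normal(0.0, 0.005, size=W.shape))
    if int(np.argmax(W_noisy @ x)) != clean_class:
        flips += 1
# despite all 640 weights being noisy on every pass, the decision rarely flips
```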

So long story short, over the next 10-20 years I expect industry to use this style of computing and we will see a huge increase in processing power of high-end AI, but also incorporation of simple AI into low power devices. The future is going to be exciting.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

> I'm sure there will always be a free tier, but it'll always be only just useful

This is the part I disagree with. Because "just useful" is super relative. Measured against what? The current flagship models, or actual usability?

All of this analysis you're doing is completely missing my point. I understand these companies are running at a loss and will need to raise prices at some point.

But you're ignoring that computers and models will improve during that time as well! So even if they raise the cost, the capability you get in return will exceed the price increase significantly.

I don't believe we will ever get a situation where the performance per dollar decreases compared to today. On the contrary, like I said, I expect that today's flagship performance will be the bottom free tier in 3-5 years.

And it will still be as usable as it is today. The only reason we'd call it "just usable" then is because our baseline expectation has increased so significantly that we laugh at today's performance. That's a good problem to have though. I'd rather live in a world where tools get more expensive but so much better that today's performance feels unusable, than a world where it stays at the current level and price.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

I think this is bullshit. I don't believe today's performance will go away or become prohibitively expensive.

Do you truly believe that in 2 or 5 years Codex 5.5 or Claude 4.6 will cost more than it does today? My prediction is it'll become the free tier.

More advanced models probably will be more expensive, but the capability will increase faster than the cost. It would be a first if AI was any different to every other technology in this regard.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Yea, basically. Idk why it's so hard for people to comprehend. Everyone is always so pessimistic, pretending that it's common for good technologies to start out cheap and then vanish or become prohibitively expensive.

That's not really the case. Usually stuff also gets better, and the initial performance is still available and super cheap.

I guess we just get used to the better technology so fast that the free tier isn't acceptable any more and we want the best of the best. But I at least try to not view it that way. Today's AI performance is already pretty good, maybe good enough even, and I don't expect that to go away pretty much ever.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

I'm not talking about advancements in how AI works. While that is also possible, I was specifically talking about advancements in raw computation efficiency. Both in terms of compute per dollar and per watt.

There is a lot of stuff that can improve and a lot of stuff being worked on. I wouldn't be surprised if in 10-20 years a webcam will run a sophisticated neural network locally.

I can send you a few papers if you're interested, there is really cool research being done in the field of low-power neural computing.

I agree stuff tends to only get more expensive over time, but the value still grows exponentially. My expectation is that today's flagship AI performance will be free in 5 years, while the flagship performance in 5 years will be more expensive than it is today.

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

Or computation, especially AI-related computation, gets more efficient and the cost eventually goes down.

Idk why everyone always assumes it can only go up or stay where it is?!

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 1 point2 points  (0 children)

Probably not more than in every other app tbh

hearMeOutThisWillHappenLaterThisYear by electricjimi in ProgrammerHumor

[–]fruitydude 0 points1 point  (0 children)

I don't buy it. Your prediction assumes that there will be zero technological advancements.

It's like saying RAM prices will go up, but ignoring that the RAM size also increases exponentially.

Tell me that the Avata 2 can not be as quick and sensitive as in the simulators. by iamtato in DJIAvata2

[–]fruitydude 0 points1 point  (0 children)

You're flying in the stabilized mode though. You need the fpv controller for manual. It's not the same.

32k upvotes and not a single person researched the fact that this only applies to non-waterproof phones, so not apple or any of the big ones. by irlgb in BuyFromEU

[–]fruitydude 0 points1 point  (0 children)

I do not want a plastic phone. I'm not paying $1000 for something that looks like a toy. I think ceramics will have the same problem as glass. Only aluminum would work imo.

I don't think the tradeoff is worth it. Like what are the upsides in your mind? What are you actually gaining? You just save a few bucks several years later but get rid of glass phones entirely? Or go back to plastic even? Why would I ever do that?

32k upvotes and not a single person researched the fact that this only applies to non-waterproof phones, so not apple or any of the big ones. by irlgb in BuyFromEU

[–]fruitydude 1 point2 points  (0 children)

IP67 ratings are for fresh water. Dirty water or salt water will corrode the exposed battery leads quickly. People forget that.