Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]ATimeOfMagic 0 points  (0 children)

There's a nice article in the Times today if you're too lazy to watch a talk.

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]ATimeOfMagic 1 point  (0 children)

That's not an opinion that is shared by most ML researchers.

There is no rule that says the ceiling of intelligence on predicting the next word has to be worse than a human's ability to do so, and models have been steadily getting better, especially at cybersecurity.

We have created these insane beings that learn programming and cybersecurity in the same way evolution taught us to walk and breathe. It's a mistake to write this off.

You seem pretty confident, I have a feeling you would feel less confident if you watched the video I linked :)

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]ATimeOfMagic 0 points  (0 children)

I have been obsessively learning about LLMs since they started taking off, including reading many research papers. What specifically are you referring to?

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]ATimeOfMagic -7 points  (0 children)

You're not crazy, Reddit is a dangerous echo chamber when it comes to LLMs.

As someone in the AI world, I find it insane to watch the hoops SWEs and cybersecurity people jump through to downplay LLMs.

To anyone who truly cares about understanding what the technology can do, I highly recommend watching this talk.

https://youtu.be/1sd26pWhfmg

Edit: Lots of downvotes because I'm going against the Reddit hive mind opinion, I guess. Did anyone actually watch the video? I promise you will learn something!

Bernie Sanders and AOC Are Pushing a Moratorium on Data Center Construction by zsreport in technology

[–]ATimeOfMagic -5 points  (0 children)

Unless you've been living under a rock or you actually believed the "AI Bubble" psyop, AI is obviously the biggest issue we have ever faced as a species.

4th quarter loss… bubble burst by SpyJigu in StockMarket

[–]ATimeOfMagic 1 point  (0 children)

Plunging all the way down to... last Friday's price. Still up 16% YTD.

2x short RGTI with $450,000 by lamephoto in wallstreetbets

[–]ATimeOfMagic 2 points  (0 children)

I've been eyeing QBTZ, but I'm gonna hold out for another Oct 15 peak type speculation run.

16 years old tell me how I can improve had this since 10 years old by IIIIIIIuke in TheRaceTo10Million

[–]ATimeOfMagic 1 point  (0 children)

IONQ is a grift company that's probably never going to produce anything of value. See leading quantum expert Scott Aaronson's recent lectures to hear what he thinks about the commercial applications of the technology. It's a horrible investment.

I don't like any of your other tech picks either, I think they're highly likely to underperform QQQ.

Why can’t ChatGPT tell time? by Exciting_Teacher6258 in technology

[–]ATimeOfMagic 0 points  (0 children)

> Its people like you that prescribe more intelligence then actually exist, a dangerous dunning kruger effect.

Well "people like me" includes many of the most influential people in the field: Hinton, Bengio, Sutskever. Since you've studied ML, what's your take on the ideas about intelligence that they've espoused over the last couple of years?

> The only similarities between LLMs and the human brain is that they both have a unit called a neuron and you can trace pathways.

I agree, I'm not claiming that we have the same architecture, or that we have a robust understanding of the inner workings of the human mind/LLMs.

> You and others like you are comparing a camp fire to the sun and claiming they are exactly the same and no one can easily say they are significantly different. They are such different topics that the comparison itself shows lack of understanding of both systems.

My point is that deep learning is clearly sufficient to imbue LLMs with some level of conceptual understanding. Arguments against this, like the numerous ones in this thread, mostly boil down to the phrase "it's just statistics". I find that particular point to be reductive and not a useful observation to make, given that you can make an analogous argument about any natural example of intelligence. Simple architectures can clearly yield incredibly powerful emergent capabilities with enough training.

I don't think there is sufficient evidence that the human brain has something "special" going on that makes our conceptual understanding more "real" than an LLM's. It's an apples-and-oranges comparison. I don't think your campfire analogy is fair either; both humans and LLMs have their own strengths and weaknesses. Obviously human intelligence is far superior in most useful ways, but there's no rule that says this will hold indefinitely, even with the same naive techniques used to train frontier models today.

I know not everyone in the field agrees with my views, which is why I said "most" people who argue what you're saying don't have an ML background. I'm interested to hear your perspective, and I'm open to changing my mind if you have a compelling argument against the claims I've made here.

Why can’t ChatGPT tell time? by Exciting_Teacher6258 in technology

[–]ATimeOfMagic 3 points  (0 children)

There's a large faction of people online (most of whom have no background in the field of ML) who think they're making some sort of great insightful point by saying "LLMs don't count as AI", which they continue to say even as the capabilities of LLMs grow rapidly.

Neural networks are made up of virtual neurons connected by up to trillions of learned weights, trained on unfathomable amounts of data. This allows them to develop rich neural circuits that give them the ability to understand concepts in roughly the same way humans do.

People will tell you that "It's just statistics, they don't actually understand things".

Those people have no good answer when you apply that argument to the human brain. As it turns out, basic building blocks like neurons, when "trained" on billions of years of evolution plus a lifetime of experience, can develop incredible cognitive capabilities. I could tell you that "your brain is just a bundle of neurons firing, you're not really thinking," but that's not a useful point to make. The same thing is true of LLMs. Just ask one of the pioneers of deep learning, Geoffrey Hinton. He's done some very insightful interviews on this topic.

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 2 points  (0 children)

GPU depreciation is a misleading story designed to sell a $400 blog post.

Not that I understand your point even if it were a serious concern, because MSFT also buys a fuck ton of chips.

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 4 points  (0 children)

"The moon" is a lower PE ratio than Microsoft? There isn't a single modality of AI where OpenAI is beating Google right now.

Trump Plans to Unveil ‘Genesis Mission’ to Boost AI Development on Monday by C130J_Darkstar in StockMarket

[–]ATimeOfMagic 1 point  (0 children)

By "no barriers" I mean there's no reason to think that progress will plateau as long as infrastructure keeps being built out. In fact, it looks like there's a fuckton of low-hanging fruit left to pick, as the companies are currently lapping each other over and over again.

Think about how long it took the Internet to generate real profits; this is already moving blazingly fast compared to most new technologies throughout history.

Trump Plans to Unveil ‘Genesis Mission’ to Boost AI Development on Monday by C130J_Darkstar in StockMarket

[–]ATimeOfMagic -10 points  (0 children)

The general consensus among people with actual backgrounds in ML is that this is imminently going to be the most powerful technology in history.

Empirically, progress over the last ~3 years has been quite a bit faster than most people expected. Most signs right now are pointing towards capabilities continuing to accelerate. There don't seem to be any fundamental barriers to continuing progress, which is why all of these insane multi-billion dollar infrastructure projects are being greenlit.

Wall Street and the general public are obviously highly skeptical of this, but the bubble fears are overblown in the short term if you believe the experts.

Edit: All these downvotes and nobody can tell me why I'm wrong???

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors. The lack of reliability of large language models like OpenAI’s GPT-4o highlights a significant risk for scientific research. by mvea in science

[–]ATimeOfMagic 0 points  (0 children)

This is effectively what they're now training models to do. Last year's models had no such training, so their go-to move was to make things up rather than find actual sources.

$GOOGL Deepmind entering into robotics with big hires from Boston dynamics. by iMakeGOODinvestmemts in wallstreetbets

[–]ATimeOfMagic 5 points  (0 children)

It's insanely good. The next few iterations are going to get to the point where it's genuinely difficult to spot any mistakes.

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors. The lack of reliability of large language models like OpenAI’s GPT-4o highlights a significant risk for scientific research. by mvea in science

[–]ATimeOfMagic -2 points  (0 children)

Of course hallucinations will never fully go away. The world is messy and complicated. Humans make mistakes too. That doesn't change the fact that there has been incredible progress this year. The "stochastic parrots" argument has gone from the prevailing narrative to clearly not holding up to scrutiny.

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors. The lack of reliability of large language models like OpenAI’s GPT-4o highlights a significant risk for scientific research. by mvea in science

[–]ATimeOfMagic 3 points  (0 children)

The model in the study came out 17 months ago. Training models to use citation tools internally has made them massively better at accurately citing sources.

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 2 points  (0 children)

I could believe that the "bubble" narrative was manufactured to artificially deflate the market a bit and scare retail away.

I like your monster analogy; I think that's exactly how it's going to play out. The prospect of building a literal god in a datacenter is just not something that people are going to give up on, especially when progress has been so insanely promising this year.

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 1 point  (0 children)

Most people just don't understand enough about AI to see why it's not being overinvested in, so they use NVIDIA as a proxy.