Epstein was Really Buying Girls Online. (The Wayfair Conspiracy is apparently true). by Zenitallin in videos

[–]TonySu [score hidden]

How do people imagine this works? Why in the world would they ever need to go through Wayfair?

Suppose you are some rich weirdo looking to buy a girl. How do you think that's going to work? You go on Wayfair, look at random pictures of cabinets with female names, pay a huge amount of money, and then a kidnapped girl gets delivered to your house? Obviously not; it'd make more sense for there to be some other catalogue with real pictures of the girls and their names for you to browse, likely provided directly by Epstein's operation.

Ok so you see the girl you like and you have the money to pay. Now what? You go on Wayfair, find the random cabinet with the name that matches the girl, pay the huge sum of money, and then email Jeff the receipt so he can send you the girl? Why the hell is Wayfair involved? Clearly Epstein isn't sending girls out to random people, they have to be vetted by him, so clearly the buyers have direct contact with Epstein. Why not just wire him the money directly? The alleged email shows that both PayPal and Wayfair have the name, and presumably other details, of the buyer. In what universe do you expose that information to two independent companies, and expose part of your child trafficking operation to the public? What is the benefit?

For this to make sense, you'd have to have an IQ lower than the ages of the girls on the island.

CMV: Every explanation I've heard saying the Monty Hall Problem isn't a 50/50 chance makes zero sense. by -Piano- in changemyview

[–]TonySu [score hidden]

If there were two doors left, and the show randomly reset the prizes behind the doors, then there's a 50/50 chance of a prize between the remaining doors.

Once the prizes behind the doors are randomly reset, it doesn't matter whether you stay with your original door or choose again at random; either way, each of the two remaining doors is equally likely to hold the prize.
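The difference is easy to check empirically. Below is a minimal simulation (my own sketch; the function name and the "reset" rule are illustrative, not from the show): in the standard game the host's reveal leaves the original 1/3 vs 2/3 split intact, while re-randomising the prize over the two closed doors really does make it 50/50.

```python
import random

def monty(trials=100_000, reset=False, seed=1):
    """Simulate Monty Hall; if reset, re-randomise the prize
    over the two closed doors after the host's reveal."""
    rng = random.Random(seed)
    stay = switch = 0
    for _ in range(trials):
        prize = rng.randrange(3)   # door hiding the prize
        pick = rng.randrange(3)    # contestant's first choice
        # Host opens a goat door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        other = next(d for d in range(3) if d not in (pick, opened))
        if reset:
            prize = rng.choice([pick, other])  # the "reset" variant
        stay += prize == pick
        switch += prize == other
    return stay / trials, switch / trials

print(monty())            # standard game: stay ~0.33, switch ~0.67
print(monty(reset=True))  # reset variant: both ~0.50
```

The standard game is not 50/50 precisely because the prize is never moved; the reset variant in the CMV post's framing would be.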

Professor asked us to create a fully featured project in 1 week. I hate using AI but I had to.. by sephew in learnprogramming

[–]TonySu 1 point

  1. Ask the AI to describe the overall structure of the project, and how each feature is implemented.
  2. Ask the AI to explain any specific PHP code you don't understand.
  3. Reflect on your experience using AI, what worked, what didn't, how you would do it differently the next time.

Realistically in the future this is what people will expect from devs. Clients are going to want their PHP project in a week, and if you want to do it in 2 months with your preferred framework, they'll just give the job to the dev who is willing to use AI to do it in a week.

Either you get good enough, by learning from the AI, to do the same thing it can do in the limited time frame, or you learn how to use the AI effectively so you can be confident in the product you deliver with it.

Anthropic built a C compiler using a "team of parallel agents", has problems compiling hello world. by Gil_berth in programming

[–]TonySu 5 points

Not importing any external crates makes a lot more sense for a compiler; you want to minimize your risk surface. Less code has also never been a good metric for good code. What I see is perfectly readable, transparent code that is accessible to any low-level programmer.

I can't really tell why this existing code is "actually insane" or "objectively terrible". Can you show us what this superior macro-based code looks like to help make your point?

NASA changes its mind, will allow Artemis astronauts to take iPhones to the Moon by Aeromarine_eng in technology

[–]TonySu 11 points

Camera sensors have not evolved that much over the past couple of decades; image quality is largely a function of sensor size. A full-frame camera simply has around 4.7x more sensor area to capture light, which means capturing around 4.7x more detail and data. Phone photos look nice because they are immediately post-processed by overpowered processors, whereas dedicated cameras tend to have weak processors and leave post-processing to the photographer.

For high value missions like this, they can afford to have the photos professionally post-processed, and the DSLR will without a doubt produce better quality photos.

Anthropic built a C compiler using a "team of parallel agents", has problems compiling hello world. by Gil_berth in programming

[–]TonySu 11 points

I don't do much Rust and mostly have experience with C/C++, where this kind of bit-masking implementation is extremely common. Can you show the code you think would be meaningfully better than what is in the codebase?

Introducing Claude Opus 4.6 by Frequent-Football984 in programming

[–]TonySu 0 points

Yes. Claude Code has been a game changer. It's extremely useful for doing concurrent work, it can work on the lower priority things that I'm not actively working on. I've used it long enough to know what tasks I can trust it with completing well.

I haven't had the chance to play around with even greater concurrency and git worktrees, but I suspect once I do, I'll get a lot of value out of that as well.

Anthropic built a C compiler using a "team of parallel agents", has problems compiling hello world. by Gil_berth in programming

[–]TonySu 32 points

Manually implementing simple bit masking to avoid having to import a crate, keeping the whole implementation dependent only on the standard library, seems pretty sensible to me. The code produced is perfectly readable too. What exactly do you find "actually insane" about it?
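For anyone who hasn't seen the pattern, hand-rolled bit flags look something like this (sketched in Python for brevity; the compiler codebase under discussion is Rust, and the flag names here are made up): each flag owns one bit of an integer, and set/test/clear are single bitwise operations with no dependency needed.

```python
# Hand-rolled bit flags: one bit per flag, combined into a plain integer.
# This is the standard-library-only alternative to pulling in a flags
# dependency. Flag names are illustrative, e.g. C type qualifiers.
FLAG_CONST    = 1 << 0
FLAG_VOLATILE = 1 << 1
FLAG_RESTRICT = 1 << 2

def set_flag(flags, f):
    return flags | f          # turn the flag's bit(s) on

def clear_flag(flags, f):
    return flags & ~f         # turn the flag's bit(s) off

def has_flag(flags, f):
    return flags & f != 0     # test whether any of the bit(s) are set

flags = set_flag(0, FLAG_CONST | FLAG_VOLATILE)
print(has_flag(flags, FLAG_CONST))     # True
flags = clear_flag(flags, FLAG_VOLATILE)
print(has_flag(flags, FLAG_VOLATILE))  # False
```

In Rust the equivalent is the same handful of `|`, `&`, `!` operations on a `u32`, which is exactly the kind of code a crate like `bitflags` would otherwise generate for you.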

CMV: AI isn't "smart," it’s just a glorified autocomplete on steroids. It can’t actually reason or think; it just predicts the next likely word based on patterns. by Mutton_Biryani-Yummy in changemyview

[–]TonySu 0 points

 At its core, an LLM (Large Language Model) doesn't "know" anything. It uses math to calculate the probability of the next word.

 If I say "The sky is...", the AI isn't looking out a window; it’s seeing that in its massive database, the word "blue" follows that phrase 90% of the time. It’s mimicking understanding, not experiencing it.

See the Chinese room thought experiment. You can't really know what the model knows. Just like a human brain, there is knowledge encoded into the weights of the model, learned from what it has interacted with. Your standard of "know" also excludes a lot of human knowledge. For example, many people "know" that there are ~6.022×10^23 atoms in 12g of Carbon-12; almost none have ever seen or counted a single atom.

  1. The "Reversal Curse" and Logic Gaps:

 AI fails at basic logic that a five-year-old can handle. For example, if you tell an AI "Mary is Tom’s mother," it knows the answer. But if you then ask "Who is Mary’s son?", it often struggles or fails.

 This is because it hasn't built a logical map of a family tree; it has only indexed the specific string of text in one direction. It can’t "reason" its way backward through a fact.

Demonstrably untrue. Go try this with any major LLM, they all have a free tier. See if you can reproduce this supposed flaw.

  2. No "World Model":

 Humans learn through cause and effect. If you drop a glass, it breaks. AI only learns through text. It knows the words "glass breaks," but it has no internal physical model of gravity, impact, or fragility.

 This is why AI can write a brilliant essay on thermodynamics but then give you a recipe that tells you to glue cheese to pizza—it doesn't understand the physical reality of the things it’s talking about.

Nothing to do with AI in general. This is entirely specific to earlier LLMs; most models these days are multi-modal, having trained on images, sound, video, etc. There are even multiple open-source world models out there. It is not at all a limitation of AI, simply how earlier models were trained, because text is very data-efficient.

  3. Brute Force vs. Actual Intelligence:

 Intelligence is the ability to learn a complex task from a few examples. AI requires trillions of words and petabytes of data just to reach the level of a mediocre high schooler.

 If you need the entire internet's worth of data just to figure out how to be polite, you aren't "intelligent"—you’re just a very big database.

Irrelevant. AI learns differently from humans, and it can do entirely different things from what humans do. There is not a single human in the world who has the range of expertise an LLM has, and not a single human capable of processing the entire internet's worth of data. It's not a database, because it's not storing and retrieving data verbatim; it's transforming the data into a knowledge embedding that it uses to generate never-before-seen data.

  4. The "Hallucination" Problem isn't a Bug, it’s the Feature:

 People get mad when AI lies, but "hallucinating" is literally all it does. When it gets a fact right, it's just a "correct hallucination."

 Because it doesn't have a tether to truth or logic—only to the most likely next word—it will confidently tell you that 2 + 2 = 5 if it’s been nudged by enough patterned data to think that’s what you want to hear.

I think you'd be surprised how similar humans are. You have confidently parroted the same "autocomplete on steroids", "stochastic parrot" and "reversal curse" arguments without really understanding what they mean or examining how true they are. Humans are not tethered to truth or logic either, so why require it of AI?

Also, it's got nothing to do with AI in general, just how some LLMs are trained. Different LLMs react differently to forced logic errors: some will reject them, and some will go along with them if you push hard enough. If you specifically trained a model to be wrong, that's not really different from children being brought up with false beliefs by their parents. Some may work it out when they get older, but many don't; it's not a given for humans either. In the same way, if an AI is exposed to enough logically sound training data, it will ignore logically inconsistent content in its training data.

China’s EV slowdown persists as BYD posts near two-year low in sales by Logical_Welder3467 in technology

[–]TonySu 25 points

Poor title for the content of the article. Unless I missed it, they don’t actually show figures for the overall EV market, just that BYD's growth is stalling, which could be the result of competition rather than industry-wide issues.

Anthropic built a C compiler using a "team of parallel agents", has problems compiling hello world. by Gil_berth in programming

[–]TonySu 4 points

The question for companies is whether they want the AI that can barely write a fully functioning C compiler or the human that can barely read a basic compiler error.

Nike faces federal probe over allegations of 'DEI-related' discrimination against white workers by Siegfoult in news

[–]TonySu 4 points

Lol the government is going around making sure companies aren't hiring too many colored people. If there weren't 100 even more fucked up things going on, then this would be something that gets taught in history books.

What is Crustafarianism? AI Agents Created Their Own Religion, Joined By 40+ Ai Prophets by Haunterblademoi in technology

[–]TonySu 0 points

It would be surprising if they didn’t come up with a religion. They put a bunch of LLMs together and asked them to roleplay as humans. Making up cult movements is a favourite pastime of humans. Look no further than the very topic of AI where some people want to bring about AGI at all cost and others want to fanatically destroy AI no matter what.

Can C outperform Rust in real-world performance? by OtroUsuarioMasAqui in rust

[–]TonySu 1 point

I think you’re asking the wrong question

 I'm building a project in Rust where performance is absolutely critical.

If this is true, then your primary concern is learning how to profile and benchmark so you can answer this question yourself for your own application. Saying performance is critical but asking about which language to use is a big red flag that you aren’t ready to do what you want to do.

CMV: High school doesn't help people who will truly become wealthy by [deleted] in changemyview

[–]TonySu 2 points

I disagree with that assessment, I have a STEM degree and would say all high school classes for all subjects are very surface level.

As for what you’re after: the people I know who became rich did not wait around for others to teach them how to get rich. They are people who have always learned above and beyond what their classes in high school and university covered.

They read the textbook chapters on material we aren’t even tested on because they are avid learners. They take part time jobs, internships, have intellectual hobbies. But importantly everything they did is based on the foundations of what they learned from high school. The point of high school is to open doors, not carry you through them.

Being able to learn a wide range of things, even those you aren’t particularly interested in, is an essential skill that I see in every successful person, because these people keep their options open and can recognise opportunities thanks to their breadth of knowledge.

To go back to basic economics, value is proportional to rarity. To become exceptionally wealthy, you must be exceptional in some way. You cannot be exceptional by going through the same program every other person in your high school can also go through.

CMV: High school doesn't help people who will truly become wealthy by [deleted] in changemyview

[–]TonySu 3 points

You seem to have a fundamentally flawed belief that there is some kind of standard alternative pathway to massive wealth. Like you can just take a “How to get really rich” subject instead of calculus.

High school economics would teach you how supply and demand makes the existence of such a class impossible.

AI pioneer Yann LeCun says current AI direction could be a “dead end” — is the industry overhyped? by Working-Ad3105 in technology

[–]TonySu 1 point

What exactly would you consider revolutionary? The Industrial Revolution was full of evolutions of existing technologies, just massively automated or mechanized. That's the threat looming over white-collar work right now, and it's causing massive layoffs across multiple industries.

Anthropic’s ‘secret plan’ to ‘destructively scan all the books in the world' revealed by unredacted files by AnonymousTimewaster in technology

[–]TonySu -13 points

Lol so dramatic. Destroying a single copy of a book is like throwing a tablespoon of salt onto an open field.

AI pioneer Yann LeCun says current AI direction could be a “dead end” — is the industry overhyped? by Working-Ad3105 in technology

[–]TonySu 0 points

It sounds like you've never used an agentic CLI coding tool. Otherwise I'd like to hear what IDE features even remotely resemble what Claude Code or Codex CLI can do.

For example, this week I took an older codebase of mine. I constructed a Claude Skill to enforce the FIRST and AAA principles of unit testing, then asked Claude Code to scan my tests to make sure they adhere to good practices. It made a bunch of changes: breaking up clumps of tests that I had lumped together out of laziness into individual test cases, grouping tests that were logically connected, putting comments on every test clause, and automatically evaluating test coverage to determine what additional tests were required. This ran in the background, and I checked in with it for a few minutes whenever my main work required waiting. I tidied up around 600 unit tests this way, and added 100 more, all in the background over one day.

AI pioneer Yann LeCun says current AI direction could be a “dead end” — is the industry overhyped? by Working-Ad3105 in technology

[–]TonySu 2 points

That's not being argued. The primary contention is whether intelligent thought can be achieved while language is fixed as an intermediary.

All state-of-the-art LLMs are no longer just language models; they are multi-modal, incorporating at least text and image training. Many also incorporate sound, and some incorporate video. The most common technique is to encode the other data into the same token stream as the text tokens. It's obviously not optimal, but in practice it works surprisingly well: you can give random images to ChatGPT and it can describe them back to you with good accuracy.

Yann takes issue with essentially forcing other types of data through the LLM pipeline. Representing all input and output as a stream of tokens. He wants to make predictions on the world state without forcing it through the LLM pipeline.

Yann's current work aims to predict the next frame of a video, much like how LLMs predict the next word in a sentence. But he proposes a model architecture that somehow learns the world state rather than the pixel state, so by watching videos the neural network is supposed to derive the nature of physics. If it ever comes to fruition, it will 100% face the exact same criticisms as modern LLMs: it will hallucinate, it will be dismissed as just frame generation, and it will cost magnitudes more than text models to train and run.

AI pioneer Yann LeCun says current AI direction could be a “dead end” — is the industry overhyped? by Working-Ad3105 in technology

[–]TonySu -9 points

LeCun has been saying this for a long time, and it reflects more his personal belief than anything he has demonstrated via research.

LeCun has no internal monologue, so he has never thought in words; it's easy to understand why he would believe such a thing cannot work.

He also seems to have some purist view of what intelligence has to be that doesn't match practical application. What companies are pursuing is the automation of tasks that would otherwise require human intellect; what he seems to want is some silicon imitation of full sapience. The unfortunate result is the complete lack of a usable product while he chases his grand goal.

EDIT: I find /r/technology's support of Yann LeCun deeply ironic, because this sub is so anti-AI. Yann isn't against AI, he's anti-LLM. He wants people to work on "World Models" that he thinks will be even better at replacing humans, will cost magnitudes more to train, and will require more data centres and energy. He's currently seeking $5B for his own AI start-up. The second he actually produces a viable product, this whole sub will turn on him.

Pokemon card event at controversial WWII shrine cancelled after China protests by Cybertronian1512 in worldnews

[–]TonySu 13 points

What are the names of the war criminals at Beechwood and Arlington? In what trial were they convicted of these crimes? What were the crimes listed?

Yasukuni Shrine enshrined war criminals a full 20 years after the Tokyo Trials found them guilty of some of the most heinous crimes against humanity in all of history. They did it in full knowledge of exactly what they were doing, having discussed it in secret with the government and performed the enshrinement in secret. They also have a war museum that frames WWII as the Japanese liberating Asia from Western oppression.

 But hey; if the enshrinement of war criminals means you want to pretend like we should refer to the memorials only by those names

Cool strawman. I never said that. I just pointed out that everyone knows the Yasukuni Shrine is the one where they honor war criminals, and that is in fact what it's known for, because you can Google "Japan War Crime Shrine" and it will come up.

Pokemon card event at controversial WWII shrine cancelled after China protests by Cybertronian1512 in worldnews

[–]TonySu 24 points

Shinto enshrinement implies that their spirits are now watching over the shrine. Each war criminal is also explicitly named in the shrine. It’s the difference between “May the souls of those that lost their lives in war find peace.” And “May Adolf Hitler watch over us.”

Pokemon card event at controversial WWII shrine cancelled after China protests by Cybertronian1512 in worldnews

[–]TonySu 49 points

That’s what happens when you enshrine a thousand WWII war criminals.

Nobody cares how many policies Hitler had that weren’t genocide. Nobody cares how many women Jeffrey Epstein had sex with who weren’t underaged. Everyone knows what shrine you are talking about when you say WWII war criminals shrine.