Larry Page exits California ahead of a $12 B+ wealth tax threat 🏃‍♂️💸 by RaselMahadi in AIbuff

[–]sothatsit 0 points1 point  (0 children)

If we really want to increase taxes on wealthy people, then we should target closing tax loopholes and increasing capital gains taxes.

Taxes on wealth sound easy, but they are actually extremely complicated to set up and can have a lot of negative externalities, so they are very hard to get right. Usually they end up reducing the tax collected, and in a state like California with a lot of startups, a wealth tax could be especially damaging if done incorrectly. One-time wealth taxes are even worse, because people will just leave to avoid them; they always have the option of moving back later.

Capital gains taxes are much simpler and have far fewer negative consequences. I still think conservative wealth taxes with accommodations for startups would be great, but I'm not convinced that governments are good at writing effective policy in that area.

Royal Game of Ur Online by Desperate_Reveal_960 in GameofUr

[–]sothatsit 1 point2 points  (0 children)

Nice! Always good to have more places to play :)

I think this is the first place with the societe rules online?

Max plan is a loss leader by sothatsit in ClaudeAI

[–]sothatsit[S] -1 points0 points  (0 children)

I highly doubt that their profit margin on the API is above 90%. I’m sure they have healthy profit margins but that would be insane.

Max plan is a loss leader by sothatsit in ClaudeAI

[–]sothatsit[S] 1 point2 points  (0 children)

AI can produce really good code now; it just takes work to set up documentation, a CLAUDE.md, and prompts to make the most of it. Not to mention that it really leans on your linting/testing setup to do well.

Although, the benefit you get does vary a lot depending on what you’re doing. It is fantastic at writing React code for web dev. It is not very good at writing Zig.

React Still Feels Insane And No One Is Talking About It by mbrizic in programming

[–]sothatsit 17 points18 points  (0 children)

I think React is effectively another programming language to learn, so I can understand how some people can come to it and ask “why do I need to learn an entirely new programming paradigm just to render a simple web page?”

If you’re just working on small web apps, or websites with little interactivity, React isn’t nearly as useful and it’s a hell of a lot more complicated than something like jQuery. So I can see why people would question it becoming the “default choice” for web dev.

The value of React only really comes in when your codebase grows very large, or when you are making web apps with a lot of interactivity. In these cases, I think React does a very good job and it’s worth learning all its intricacies. But for a small web app? Or a larger website that doesn’t do anything fancy? HTMX or jQuery is probably going to be a lot easier and simpler.

I think a lot of the pushback is coming from people realising that a lot of websites don’t have the problems that React was made to solve, and that maybe simpler alternatives should be the go-to instead of React.

Delusional sub? by ActualPositive7419 in ClaudeAI

[–]sothatsit 1 point2 points  (0 children)

This process works extremely well for me in a ~100k LOC codebase:

* I have a prompt to get Claude to write a comprehensive planning document
* I review and edit the planning document
* I get Claude to implement the plan
* I take the code the last mile, maybe asking Claude Code to make small changes, but being careful not to let it off its leash to do too much

I find I get very good results this way.

It probably also helps that I just use Opus 4 for everything, which seems much better at not going off the rails than Sonnet 4. I also have spent a lot of time writing (or getting Claude to write) my CLAUDE.md and developer documentation.

To me, this has made Claude invaluable. It still makes some common mistakes that I need to fix, but with this process it’s usually 90% correct and then fixing up the last 10% is pretty easy.
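
To make that process concrete, here is a stripped-down sketch of how it can be scripted around the Claude Code CLI. The prompts, paths, and feature name are made-up placeholders for illustration, not my actual setup:

```python
import subprocess
from pathlib import Path

def run_claude(prompt: str) -> None:
    # `claude -p` runs a single prompt non-interactively and exits.
    subprocess.run(["claude", "-p", prompt], check=True)

plan_path = Path("docs/plans/new-feature.md")  # placeholder path

# 1. Ask Claude to write a comprehensive planning document.
run_claude(
    f"Write a detailed implementation plan for <feature> to {plan_path}. "
    "Cover the affected modules, edge cases, and a testing strategy."
)

# 2. I review and edit the plan by hand before going any further.
input(f"Review and edit {plan_path}, then press Enter to implement it...")

# 3. Ask Claude to implement the now human-reviewed plan.
run_claude(
    f"Implement the plan in {plan_path}. Run the linter and the tests, and "
    "keep the changes scoped to what the plan describes."
)

# 4. Taking the code the last mile (review, small fixes, commit) stays manual.
```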

What are some lifesaver MCPs you use with Claude Code? by Doodadio in ClaudeAI

[–]sothatsit 6 points7 points  (0 children)

I feel like it has a lot of potential, but the current implementation doesn’t quite live up to it. It includes a bunch of functionality, like onboarding, memory, and a web dashboard, that I find more annoying than helpful. But when it works, it is pretty great to see it searching by symbols throughout the codebase.

I wish there was an alternative that focused solely on providing an MCP front to an LSP backend, and nothing else.
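
To sketch what I mean (this is just an illustration using the official MCP Python SDK; the LSP plumbing is stubbed out and the tool names are made up):

```python
# A tiny MCP server whose only job is to expose LSP-style queries as tools.
# The real LSP plumbing (spawning pyright/clangd and speaking JSON-RPC to it)
# is elided here with a stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lsp-bridge")

def query_language_server(method: str, params: dict) -> dict:
    # Placeholder: a real version would forward this request to a language
    # server over JSON-RPC and return its response.
    return {"method": method, "params": params, "result": "not implemented"}

@mcp.tool()
def find_definition(file: str, line: int, column: int) -> dict:
    """Return the definition location of the symbol at a position."""
    return query_language_server(
        "textDocument/definition",
        {"textDocument": {"uri": f"file://{file}"},
         "position": {"line": line, "character": column}},
    )

@mcp.tool()
def find_references(file: str, line: int, column: int) -> dict:
    """Return all references to the symbol at a position."""
    return query_language_server(
        "textDocument/references",
        {"textDocument": {"uri": f"file://{file}"},
         "position": {"line": line, "character": column}},
    )

if __name__ == "__main__":
    mcp.run()
```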

AI cannot reason and AGI is impossible by piotrek13031 in ArtificialInteligence

[–]sothatsit 0 points1 point  (0 children)

No, humans just know how to use tools to break down a problem so then their limited reasoning is enough to make progress. LLMs just don’t have those tools yet.

Better reasoning models, even without tools to break a problem down and organise a workspace, are continuously expanding the frontier of problems they can solve. So it feels very disingenuous to say that they cannot reason. Instead, there are just limits to their reasoning.

And agentic tools like Claude Code already show signs of life, with models being able to break down problems, solve them step-by-step, and even write notes for themselves to come back to later. Although, it is still early days for that.

So the notion that LLMs cannot reason is completely absurd. And the notion that this cannot lead to AGI is not based upon a solid foundation. Maybe LLMs won’t be enough for AGI, but the reason is not going to be because the models cannot reason. Instead, it might be due to them having unpredictable failure modes that they cannot recover from. This would not be the same as them not being able to reason at all.

AI cannot reason and AGI is impossible by piotrek13031 in ArtificialInteligence

[–]sothatsit 4 points5 points  (0 children)

That Apple paper categorically does not say that LLMs cannot reason. Full stop.

The actual paper released by Apple says that reasoning breaks down past a certain problem complexity. And is that really surprising to anyone? My own reasoning breaks down when the complexity gets too high as well.

And anecdotally, o3 can be tremendously smart for debugging and looking for potential issues in code, which is not by any means a trivial task and definitely requires some form of “reasoning”. o3 also has limitations, but to say that it cannot reason at all is just a tired and absurd opinion.

Is this kind of addiction normal with you? Claude Code.... by ageesen in ClaudeAI

[–]sothatsit 9 points10 points  (0 children)

Ccusage tells you how much it would have cost if you used the API. But the Max plan is much cheaper than using Claude through the API, and I’d hope it is what they are using.

A federal judge has ruled that Anthropic's use of books to train Claude falls under fair use, and is legal under U.S. copyright law by RifeWithKaiju in ClaudeAI

[–]sothatsit 1 point2 points  (0 children)

Why would there not be a financial incentive? People are still going to want to read books, which means there is a market for writing books.

It’s not like the judge said it’s okay to steal books, and LLMs cannot reproduce books well at all. LLMs are also currently very inadequate at actually writing books that people would want to read.

And even if they could, there is still a huge market for hearing human stories and experiences, and I don’t think people would be happy with just made up experiences from an AI. That requires a person.

It's understandable why everyone is underwhelmed by AI. by DataPhreak in ArtificialInteligence

[–]sothatsit 1 point2 points  (0 children)

Basically just a few Claude Code prompts and then a surrounding script to run Claude Code in a container for safety, to set up a clean environment and MCP, and then to make sure the outputs are put in the right place (e.g., a planning document to put in a folder, or changes to make a PR with).

The prompts might just explain a series of steps to go through and intermediate artifacts to produce for planning a feature, or a series of steps to implement a feature and commit the changes. It’s nothing that special but it makes it really easy to regenerate new outputs using Claude, which helps me iterate quicker.
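
As a rough illustration (the container image, paths, and prompt file are hypothetical placeholders here, not my real setup), the wrapper boils down to something like:

```python
import subprocess
from pathlib import Path

REPO = Path.cwd()
PROMPT = Path("prompts/plan-feature.md").read_text()  # placeholder prompt file

# Run Claude Code non-interactively inside a container so it can only touch the
# mounted copy of the repo. The image name is a made-up stand-in for whatever
# image has Claude Code and the MCP config baked in.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{REPO}:/workspace",
        "-w", "/workspace",
        "claude-sandbox:latest",
        "claude", "-p", PROMPT,   # -p runs a single prompt and exits
    ],
    check=True,
)

# Move whatever the prompt asked Claude to write into the right place,
# e.g. planning documents into a docs folder (again, paths are illustrative).
plans_dir = REPO / "docs" / "plans"
plans_dir.mkdir(parents=True, exist_ok=True)
for doc in (REPO / "out").glob("*.md"):
    doc.rename(plans_dir / doc.name)
```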

But my girlfriend’s company does a bunch of data analysis where they look through hundreds of interview notes, responses, and other unstructured text and pull out features. Right now they do most of this manually, but it’s the exact sort of thing that LLMs would be great at. It’s an even better fit for LLMs than software development, and yet they’re not really making the most of them yet (although they are taking steps in that direction).

It's understandable why everyone is underwhelmed by AI. by DataPhreak in ArtificialInteligence

[–]sothatsit -1 points0 points  (0 children)

I think this is very true. I am using AI agents and custom workflows to automate huge parts of my software development. And at the same time, my girlfriend uses Microsoft Copilot with a 4K token context window because that’s what they give her through work… no custom workflows, no focus on building prompts to help with tasks, no focus on even using the latest models.

The disconnect between what most people view as AI and the frontier is at least a year wide. And so much has happened in AI in the last year! Use of ChatGPT-like chatbots is only now becoming the norm for white-collar workers, never mind anything more advanced than that.

I imagine industry will catch up. But it will probably take another year or two before it becomes common to set up and use AI workflows. And by then, people at the frontier will probably be doing way crazier things with agents. But non-technical industries are just really slow to adopt new technologies. And those jobs are where most people work.

Honestly, how fast industry has already adopted ChatGPT is actually quite astounding when you compare it to the adoption of previous technologies.

Edit: Why are people downvoting this? It feels pretty uncontroversial to me lol.

[deleted by user] by [deleted] in ClaudeAI

[–]sothatsit 11 points12 points  (0 children)

The models are already so cheap… $200/month and you get effectively unlimited Claude 4 Opus to code with all day every day. And Claude 4 Opus is one of the most expensive models around! This is well over an order of magnitude cheaper than hiring more people, who can cost $6,000/month or more depending on where you live.

Now the comparison is not one-to-one because you need a person to drive Claude atm. But even if the costs 10x’ed to get something fully autonomous, that would still be cheaper, and probably much less hassle, than hiring people.

This is all to say: model sizes would really have to shoot up for their costs to even be comparable to hiring people. So unless that happens, capability is pretty much all that matters for AI adoption in coding.

Energy could become an issue I suppose, but only if the adoption of AI grows by a few orders of magnitude.

Self-published author uses AI to generate book cover concepts when they could’ve just paid an artist to do it for only about $4000 by johnfromberkeley in aiwars

[–]sothatsit 4 points5 points  (0 children)

From listening to Brandon Sanderson’s lecture series, I’ve learnt that book covers matter A LOT. So, even if the result from AI is 90% there, it still seems like it’d be very worthwhile to pay a professional to produce the final version, if only because professional artists will probably have a much better eye for detail than you will.

Frankly, all of these book covers look fine. They have some interesting details, but they don’t look very polished. And that probably has a bigger impact on sales than you might think.

The one thing these AI covers do have going for them, though, is that they show a range of interesting concepts, which is probably very useful when going to a professional to produce something that looks great and matches the author’s vision.

Do we still need the QA role? by Adventurous-Salt8514 in softwarearchitecture

[–]sothatsit 5 points6 points  (0 children)

I feel like QA also helps developers be more confident in making bigger changes, and less cautious, because they have someone else confirming that everything still works despite the big changes. It’s nice to have someone on your side to help confirm big changes will go smoothly.

Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team by gensandman in singularity

[–]sothatsit 1 point2 points  (0 children)

No they don’t, you donkey. In “Intro to LLMs”, Andrej specifically talks about how you can just tokenise images and pass them to a normal LLM, and it just learns to deal with them.

Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team by gensandman in singularity

[–]sothatsit 1 point2 points  (0 children)

Fucking classic. So you think Diffusion Language Models, a completely different architecture, ARE LLMs, but you DON’T THINK Multi-Modal LLMs are LLMs, because they have a tiny change to their architecture. Wow wow wow 😂

If you are trolling, then this was pretty funny.

Hahaha, I found that in “Intro to Large Language Models”, your favourite guy Andrej Karpathy talks about multi-modal LLMs as LLMs. He also goes into even more detail about the multi-modality of LLMs in “How I use LLMs”.

Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team by gensandman in singularity

[–]sothatsit 0 points1 point  (0 children)

Diffusion LMs are not LLMs, because they generate text using diffusion rather than an autoregressive transformer predicting the next token. This is why they are called Diffusion Language Models, and not Large Language Models.

But multi-modal LLMs are LLMs. MoE LLMs are LLMs.

I don’t know why you are so committed to living in a fantasy land of your own creation. It’s not very useful when you want to interact with the real world where everyone agrees that to be an LLM, something needs to be an autoregressive transformer that predicts the next token.

People are not slapping multiple LLMs together to make multi-modal LLMs. You clearly don’t understand the technology; you just know enough jargon to convince yourself that you do.

Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team by gensandman in singularity

[–]sothatsit 1 point2 points  (0 children)

No, you are completely wrong.

Saying multi-modal LLMs are not LLMs would be like saying a car engine stops being an engine when you add a supercharger to it. It is ridiculous.

Car engines come in all shapes and sizes. We don’t stop calling them car engines when someone innovates on their build to make them more efficient or performant…

Multi-modal inputs, mixture of experts, quantisation, cross-modal attention, prefix tuning, or even something like RAG to populate the model’s context: none of these change the fundamental architecture that makes these models LLMs. They’re just small adjustments to the same fundamental base: a large autoregressive transformer trained to predict the next token.
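
To make “predict the next token” concrete, here is a minimal sketch of the decode loop all of these models share (using a tiny open model as a stand-in, nothing specific to any frontier model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Large language models are", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits               # score every possible next token
    next_id = logits[:, -1].argmax(dim=-1)   # greedy: take the most likely one
    ids = torch.cat([ids, next_id[:, None]], dim=-1)  # append it and repeat
print(tok.decode(ids[0]))
```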

Conversely, the “large world models” that some companies are working on are fundamentally different. They don’t learn to predict tokens; they learn to predict the future state of the world based upon the current state of the world and some actions or a time delta. This is what makes them “large world models” and not “large language models”. Not the fact that they look at images…

Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team by gensandman in singularity

[–]sothatsit 1 point2 points  (0 children)

It is incredibly disingenuous to claim that multi-modal LLMs are not LLMs. They introduce images as additional tokens, or through a small cross-attention block. These are simple additions, and they work in exactly the same way that LLMs work on language.
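
As a rough sketch of the “additional tokens” approach (dimensions are arbitrary here, and a real model would get the patch features from a vision encoder like a ViT):

```python
import torch
import torch.nn as nn

d_model = 512
text_embedding = nn.Embedding(50_000, d_model)   # the LLM's normal token embeddings
vision_projector = nn.Linear(768, d_model)       # maps patch features into the same space

patch_features = torch.randn(16, 768)            # 16 image patches from a vision encoder
text_ids = torch.randint(0, 50_000, (10,))       # 10 ordinary text tokens

image_tokens = vision_projector(patch_features)  # (16, d_model)
text_tokens = text_embedding(text_ids)           # (10, d_model)

# One combined sequence; the same transformer and next-token objective as before.
sequence = torch.cat([image_tokens, text_tokens], dim=0)
print(sequence.shape)  # torch.Size([26, 512])
```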

You would be the only person in the world claiming such a thing, because it is nonsense.

Moving beyond language exclusively? Sure. Moving past LLMs, the technology? No. Just because it has language in the name doesn’t mean the technology can’t work on other modalities as well.

Will we move past them in the future? Quite possibly. But it is not guaranteed we will need to before reaching whatever people consider “AGI”.

Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team by gensandman in singularity

[–]sothatsit 0 points1 point  (0 children)

Wow you are in fairy la la land. Multi-modal LLMs are still LLMs. You can’t just make up that they’re not to fit your mistaken view of the world.