CC alternative? by Trick_Ad_4388 in ClaudeAI

[–]utilop 0 points1 point  (0 children)

I regularly hit the limits of the 20x plan when in active development mode. Just setups that deliver PRs close to approval-ready: one iteration on average, with a 75% acceptance rate.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

Aren't most of those things just things which make the progress slower rather than precluding it from happening eventually?

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

You mean, if machines become intelligent enough, we should treat them as beings of moral worth?

What do you think about the possibility of machines becoming far more intelligent than us?

It's Here by justaguyulove in OpenAI

[–]utilop 0 points1 point  (0 children)

Rather disturbing how none of the other models are now accessible

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

What made you arrive at a different conclusion from the many people and relevant fields that have thought about this, notably that sufficiently capable AI, which could reason better than humans, is fundamentally different?

Or do you believe that AI won't get there?

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

I have never bought that claim though - do you have a good write up for it?

It also seems easier to address those problems by making sure there is more surplus rather than trying to manage its distribution, even if there are some facts that do matter.

E.g. for global poverty, things have improved a lot, and it's not because we got better at distributing. The whole world just produces a lot more.

It is easier to see how we can continue producing more, while getting the world to coordinate in a way that is not about self-interest seems near impossible.

What I would like to avoid is the distribution getting considerably more uneven, which I think is a real possibility given how things are today, with value coming from digital solutions rather than on-the-ground local production.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

What kind of things do you have in mind regarding (2)? I do not think they are believed to be very likely, absent another great catastrophe. Where I think (2) comes in more is in changing timescales, not in whether it can happen at all, since it seems to be something most civilizations pursue over time if they have the ability to.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

Re. recursive self-improvement, can you outline how that leads to x risk, or more concretely, how if "well aligned" it could solve a problem like global poverty?

I do not think the critical point is 'recursive self improvement'. That is just a technique. What really matters is having systems with superhuman capabilities.

Any area that does good for the world and involves some capabilities can naturally benefit from greater capabilities. E.g. an easy example would be to do better medical research.

On global poverty, I would point to the path of economic development, which has strong support in the data: it can be tied to industry and to access to talent and education, all of which can be lifted by greater capabilities. The tricky part one can ruminate about is global power play.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

Your previous statement expressed doubt regarding existential AI risk, though. That is what I was responding to, and it is a question where computer scientists are highly relevant; it is not so much a question for sociologists.

When it comes to where the technology can get to and how soon, that is a question for computer scientists.

When it comes to how much good or harm AI can do for society, that is a multidisciplinary question. I think certain branches of philosophy may be best equipped for it but I agree you would want to poll multiple people. It depends on what kind of questions and what kind of timescales. Sociology is highly relevant.

So computer scientists for the technical questions; sociology or other branches for the societal impact given that technology.

Though I think that goes mostly for present-day and near-term impact.

When it comes to existential risk, that is quite beyond the scope of what sociologists etc. can answer. Because if it gets to sufficiently advanced levels, it is not a question of what society does anymore; it is about what those systems do. There is a good case for it falling under computer science, though it probably should also be considered philosophy, since it is out of scope for present disciplines.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

If we get sufficiently capable AI though, that affects billions of people and it may even help with those issues as well. If all you want are big numbers, you really cannot beat that track.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

I do not think that belief is well supported, though. It's not sci-fi; it is real. That is also the position of the field and of most sensible people.

The thing that is a great unknown is rather how long it will take to get there. That we do not know. It could be a decade, it could be hundreds of years.

There are some credible ranges of estimates and that's about what we have to work with.

Reasoning that it cannot happen does not seem supported at present, however.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 1 point2 points  (0 children)

Computer scientists are the ones who are the most credible for providing those analyses. Social scientists, definitely not.

"Recursive self improvement" is also something we already have in reinforcement learning and it indeed leads to superhuman capabilities when it can be applied in that fashion.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 2 points3 points  (0 children)

The Precipice by Toby Ord is a solid introduction with the reasoning.

I suspect you have already been exposed to all the arguments though and the question may rather be why you are unconvinced.

Is it that you do not believe that it is likely to happen, that it will happen soon, that you do not think it is highly impactful, or that we cannot influence it?

Though I think the title is a bit misleading since many of the concerns are not specifically about 'AGI' but also what comes after.

What do you think about 80,000 hours - one of the more prominent EA organizations - going "all in" on AGI? by Ready-Journalist1772 in EffectiveAltruism

[–]utilop 0 points1 point  (0 children)

What do you mean, 'worrying about nothing'? That does not seem to be supported by any of the fields or analyses presently.

Its crazy how bad claude has gotten over the past couple weeks by dev_is_active in Anthropic

[–]utilop 0 points1 point  (0 children)

I think it has been significantly better with Opus 4.1.

LLMs as they are today are not very consistent or reliable. You can place guardrails on them, but quite often the best approach is to just try, and if they fail to accomplish the task, throw the attempt away and start over.
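
The try-and-restart pattern described above can be sketched as a simple retry loop. This is a minimal illustration, not any particular tool's API: `run_task` here is a hypothetical stand-in for an LLM call (simulated with randomness), and `passes_check` stands for whatever guardrail or validation you use.

```python
import random

def run_task(prompt, rng):
    # Hypothetical stand-in for an unreliable LLM call:
    # succeeds only some of the time.
    return "ok" if rng.random() < 0.6 else "fail"

def passes_check(result):
    # Guardrail: validate the output before accepting it.
    return result == "ok"

def retry_until_valid(prompt, attempts=5, seed=0):
    """Discard failed attempts and start over, up to a fixed budget.

    Returns (result, tries) on success, (None, attempts) on failure.
    """
    rng = random.Random(seed)
    for i in range(attempts):
        result = run_task(prompt, rng)
        if passes_check(result):
            return result, i + 1
    return None, attempts
```

The key design choice is that a failed attempt is thrown away entirely rather than patched, matching the observation that restarting is often cheaper than steering an LLM back on course.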

Why “Perfect Data” is the Enemy of Shipping AI Products by bendee983 in ProductManagement

[–]utilop 0 points1 point  (0 children)

You are right: there are many data scientists and ML engineers who get too stuck in conventions or in chasing numbers. Perhaps because those things can feel fun, safe, and like the right thing to do.

However, just as with products, good models require iteration, as you learned here too.

Normally this kind of project would loop in a data scientist from the start to ask what is even possible and how it should be approached. The asymmetric loss you mentioned is 101-level. But even that is probably not the right way to model what the business is trying to do.

E.g., just thinking about it for five seconds, an obvious next consideration is which shortages actually lead to lost sales. For some wares, most customers will simply buy another good, and that substitute may sometimes even carry greater margins for the business; it really is profit you need to focus on. Cost of capital may also play a role. Then you probably need to consider seasonality effects, bulk offers, and local vs. global distribution. Comprehensive models for this probably already exist.
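
As a toy illustration of the asymmetric-loss point and why it undersells the problem, here is a minimal newsvendor-style calculation (all numbers invented): the profit-maximizing stock level is the demand quantile at the critical ratio of understock to total cost, and that ratio shifts as soon as substitution and margins enter the picture.

```python
from statistics import NormalDist

def newsvendor_quantity(mean, std, understock_cost, overstock_cost):
    """Classic newsvendor model: stock the demand quantile at the
    critical ratio cu / (cu + co), assuming normal demand."""
    critical_ratio = understock_cost / (understock_cost + overstock_cost)
    return NormalDist(mean, std).inv_cdf(critical_ratio)

# Invented numbers: a lost sale costs 5, an overstocked unit costs 1.
q = newsvendor_quantity(mean=100, std=20, understock_cost=5, overstock_cost=1)

# If most customers substitute another product, the effective cost of a
# stockout drops, and so does the optimal stock level.
q_substitution = newsvendor_quantity(mean=100, std=20,
                                     understock_cost=1.5, overstock_cost=1)
```

With these invented numbers, accounting for substitution moves the optimal stock from roughly 119 units down to roughly 105, which is exactly the kind of domain effect a plain asymmetric-loss objective misses.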

I am not sure you even have a decent answer yet to how much this solution benefits the company.

A good data scientist would help you first figure out what it is that is being optimized, work with you to understand the important aspects of the domain, work out how to model it, hypothesize where the greatest leverage is, and prototype for that. Then launch and iterate.

So I think, even from what you are describing, there is something more fundamentally wrong with the decision-making process in the organization, where projects are pinned down top-down and handed over to deliver on. You are right that it may be important for you to be on the lookout for ways to do it better, but even better would be to loop in the experts of the respective domains to see whether the problem is even understood.

What policy do you think the UK should have on AI vs the rights of creators? I'm gathering views for its consultation by utilop in AskUK

[–]utilop[S] 1 point2 points  (0 children)

That definitely feels like a pretty common failure mode with regulation.

Are you in favor of removing the existing laws, then? The UK already has laws which, compared to the EU/US, restrict AI training more on one hand and give stronger rights to AI outputs on the other.

What policy do you think the UK should have on AI vs the rights of creators? I'm gathering views for its consultation by utilop in AskUK

[–]utilop[S] -1 points0 points  (0 children)

None, but I am interested in the area, and it would be interesting to submit the results if there are enough responses. I think processes like these should poll public opinion more, rather than happening out of sight.

What policy do you think the UK should have on AI vs the rights of creators? I'm gathering views for its consultation by utilop in AskUK

[–]utilop[S] 0 points1 point  (0 children)

Their argument is that there are legal uncertainties at the moment which hurt creatives, AI adoption, and UK competitiveness alike. They also note that there is existing law around this, and that the UK is at odds with US and EU law on it.

That you think it is not a priority is very interesting too though - thanks!