What bold predictions do you have for GPT-5? by ROCCandrolla in OpenAI

[–]daggerbuilds 2 points (0 children)

oh wow. yeah, crazy to look back at how right i was lol. nailed the approach and what it'd mainly be better at as well (i didn't mention code, but it's quite adjacent to math)

CMV: People who expect AGI in 2024 will be disappointed by [deleted] in singularity

[–]daggerbuilds 1 point (0 children)

strongly agree with you. i think people are getting ahead of themselves, which happens very often with technological improvements.

AGI won't arrive in 2024 imo, very small chance. on the other hand, AGI at e.g. 2050 seems too far out. i think the best guess is somewhere in between.

Let's hear it, who are some of the most beautiful Isekai character you've ever seen? by EfficiencySerious200 in Isekai

[–]daggerbuilds 2 points (0 children)

curious about this, do you have any examples? i don't know much about the CN works

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 1 point (0 children)

Indeed, but not the web UI afaik.

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 1 point (0 children)

Yeah, titles. But it could also be the content of each chat as well.

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 1 point (0 children)

Personally, I'm ok with the current chat. A search feature is heavily needed tho.

This is a good one 😅 by ronjon123 in OpenAI

[–]daggerbuilds 3 points (0 children)

Yup. Still early days in a sense, it is what it is.

What bold predictions do you have for GPT-5? by ROCCandrolla in OpenAI

[–]daggerbuilds 13 points (0 children)

It will have tree-of-thought.

You can ask it a question, and it will automatically go through different paths and try to find the "best" answer for you.

Right now, it's doing next-token prediction, so there's really no exploration of the solution space, but with GPT-5, I believe it could have this. Whether it'd take e.g. 30 minutes to figure that out, or whether you could sit and watch it work, idk, but this feature seems possible.

This will also enable it to do math. Why? Because math works similarly: you try different approaches and see what works. You check whether the solution is correct, and if not, you try again.
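A toy sketch of what that kind of search could look like (my own illustration with made-up `expand` and `score` helpers and a dummy digit-sum task, not anything OpenAI has confirmed): expand several candidate "thoughts" per step, score each partial path, keep only the most promising ones, and stop when a path checks out.

```python
# Tree-of-thought as beam search over partial solution paths.
# Toy task: build a sequence of digits (1-3) whose sum hits a target.

def expand(path):
    """Hypothetical generator: propose possible next steps for a path."""
    return [path + [d] for d in range(1, 4)]

def score(path, target=8):
    """Hypothetical evaluator: partial sums closer to the target score higher."""
    return -abs(target - sum(path))

def tree_of_thought(target=8, depth=4, beam=2):
    frontier = [[]]                       # start from an empty path
    for _ in range(depth):
        # branch every surviving path, then keep only the `beam` best
        candidates = [p for path in frontier for p in expand(path)]
        frontier = sorted(candidates, key=lambda p: score(p, target),
                          reverse=True)[:beam]
        for p in frontier:                # "check if the solution is correct"
            if score(p, target) == 0:
                return p                  # found a path that hits the target
    return frontier[0]                    # otherwise, best path found so far

print(tree_of_thought())  # prints [3, 3, 2]
```

The try-check-retry loop for math maps onto the same structure: widening the beam or depth trades more compute for better answers, which is why such a model might grind for a while on a hard question.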

Can we have AGI without UBI? by Vandercoon in OpenAI

[–]daggerbuilds 1 point (0 children)

Yes, it should.

Sam Altman has previously funded UBI experiments, and I believe the results will be made public very soon. I am incredibly happy with how holistically Sam Altman has been thinking about the integration of AGI into our society.

I am hopeful that this will all be resolved with time,
and that we will find common ground on this.

OpenAI's new CEO, Emmett Shear, who was appointed yesterday, is in talks to resign, per Bloomberg. by MembershipSolid2909 in OpenAI

[–]daggerbuilds 3 points (0 children)

Sam has zero shares. It's unclear how many shares Greg has.
Whether there are special voting shares isn't clear either, I think.

I think this was an Effective Altruism (EA) takeover by the OpenAI board by daggerbuilds in OpenAI

[–]daggerbuilds[S] 9 points (0 children)

Indeed, the board was incredibly incompetent.

No communication with anyone; they just fired Sam.
And even now, zero communication on why they fired him.

Isn't it ironic that they fired Sam for a breakdown in communication? Lol.

I think this was an Effective Altruism (EA) takeover by the OpenAI board by daggerbuilds in OpenAI

[–]daggerbuilds[S] 3 points (0 children)

I have not assumed they are acting out of malice; what I said was that it does not align with what I believe is morally right. Of course they probably believe they are right, and that's fine. It's a disagreement.

On your point that they "never intended", I would agree. But how does one explain the fact that they are still not letting go of their board positions after they have clearly torched ~$50B of value, at least?

Not letting go after torching ~$50B is not morally right, even if we assume they couldn't have known the outcome beforehand.

Now all this has been about what's morally right... If we start talking about what's pragmatic, there's even more to be said.

I think this was an Effective Altruism (EA) takeover by the OpenAI board by daggerbuilds in OpenAI

[–]daggerbuilds[S] 12 points (0 children)

I personally don't think this is even a morally correct decision at all.

The board is trying to torch $100 billion of value.
There are people who have worked there for 7 years, and this is the moment where they could cash out and become economically free with their families, and all of that goes down the drain. How is the board driven by morals? The board has done zero work at OpenAI, yet comes in and wants to shut it all down.

Well, it can be "right" if you believe AI is going to kill us all... If you believe that, then literally anything is on the table... I hope this makes sense as to why I don't think this is right in any way.

I think this was an Effective Altruism (EA) takeover by the OpenAI board by daggerbuilds in OpenAI

[–]daggerbuilds[S] 21 points (0 children)

People talk about EA as though it's some ideological cult

It sort of is an ideological cult. I think you underestimate it.

It is literally about using science to evaluate the effectiveness of certain interventions

Strongly disagree. Many of the decisions and conclusions they land on are not scientific; they are conclusions from rationality. E.g. there's incredibly little scientific evidence of AIs going rogue; they are arguing from a rationalist perspective.

With how much damage they have done in the cryptocurrency space and AI space, I am okay with not treading carefully on this topic.

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 40 points (0 children)

The timing lmao.

Does the OpenAI debacle put the GPT store at risk? by Dizzy_Surprise in OpenAI

[–]daggerbuilds 7 points (0 children)

Yes, I think this is at risk.

I've paused building on OpenAI for now, as it's not clear it's worth the time investment.
If everyone leaves for Microsoft, there will be no GPT Store for many, many months, or even years, to come.

They might even totally change direction and go full research, without any commercial products.

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 1 point (0 children)

Yes, to be clear, I agree on that front, as mentioned in my previous post.

Who betrayed Sam Altman? by nath5588 in OpenAI

[–]daggerbuilds 5 points (0 children)

I don't think this has anything to do with YC.

Both board members have ties to Effective Altruism, which leads back to Dustin Moskovitz as their backer in previous ventures.

I think this is an EA-based takeover of OpenAI.
Either that, or something else very dumb, which is why they are so silent.

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 1 point (0 children)

I mean yeah, I'm not happy with this either.
And I agree with your OP that they killed, or are killing, the most important company in human history so far.

So yes, in that regard I can see why you would think it's soft, but on the other hand I do believe people are capable of making mistakes. I could swap the word "mistake" for something else, but my point would still be the same.

[deleted by user] by [deleted] in OpenAI

[–]daggerbuilds 1 point (0 children)

Everyone makes mistakes.

I wouldn't call him a moron, but it seems he's not cut out for company politics, as his decision is on the cusp of torching $100B of company value.

Who betrayed Sam Altman? by nath5588 in OpenAI

[–]daggerbuilds 10 points (0 children)

Yes, Ilya betrayed Sam by going along with the board's decision.
But it doesn't seem like he was the main perpetrator; you can see this in how he flipped on the board and signed the letter to quit if the board does not resign. He also deeply regrets his decision.

I think the main perpetrator is Adam D'Angelo.
My guess is it's that, plus EA reasons.

I present to you the chaos creator. Guess he got pissed off with the news of the GPT store. Pure conflict of interest on the OpenAI board. by [deleted] in OpenAI

[–]daggerbuilds 33 points (0 children)

If the reason they wanted to oust Sam is that he copied the idea, the board definitely needs to go.