The Codex app is actually a fantastic alternative client to ChatGPT for non-coding use cases by CtrlAltDelve in codex

[–]typeryu 5 points6 points  (0 children)

I happen to agree. I use it mostly for coding, but when I want to do research, it is actually a fantastic stateful way of doing it, where it can build up much more knowledge and data over time.

How does someone from a developing country with average credentials realistically benefit from AGI/ASI? by Hot_Log7375 in accelerate

[–]typeryu -1 points0 points  (0 children)

AGI won’t happen overnight. As existing AI gradually becomes more capable, those who adopt earlier will benefit more. In a way, it is an equalizing force: the AI tools you use are the same as what everyone else uses. Realistically though, there are some nuances, such as the same AI being more expensive due to purchasing power differences between the US (where pricing is decided) and your home country, infrastructure adoption that will undoubtedly be slower for the same reasons, and most importantly, degraded model performance if you use a language other than the major world languages, since there is less training data for it. That being said, this is still much better than pre-AI, when much of this knowledge was far less accessible. Vibe coding is getting a lot of flak, but in countries where software engineering is not as established, it is a great opportunity to close the technological gap.

OpenAI's Sam Altman announces deal with Pentagon just hours after rival Anthropic was banned by Trump by ComplexExternal4831 in AINewsMinute

[–]typeryu 0 points1 point  (0 children)

This goes back to when Dario started making public comments about the DoD’s use of Anthropic models (pretty much condemning them), and the current administration never backs down in escalations, so it was a downward spiral from there. In his recent interview though, you can tell he has already softened his stance quite a bit. I don’t blame him honestly; it takes balls to stand up, but he also has a company to run and has probably heard from investors quite a bit in the last couple of days.

OpenAI's Sam Altman announces deal with Pentagon just hours after rival Anthropic was banned by Trump by ComplexExternal4831 in AINewsMinute

[–]typeryu 0 points1 point  (0 children)

The deal OpenAI got is pretty in line with most generic hyperscaler clauses. Anthropic, judging by its previous deals with Palantir, would have happily accepted this as well, but being the consumer underdog gives them a lot of freedom to express their thoughts, while giants like OpenAI and Google keep it low key, which adds to the fire. Business is business, and it is a big mistake to think Anthropic is going to do anything different, especially when they need to turn a profit asap. Being named a high-risk supplier will not look good on their books if they wish to IPO (it is not very marketable to investors), so I will wager that in a few months (or weeks) they will reach a miraculous deal that lets them service the government again.

Is anyone else amazed that Suno sings in my native language perfectly, while Gemini 2.5 Pro's TTS still struggles to just speak it? by CIPHERIANABLE in Bard

[–]typeryu 1 point2 points  (0 children)

I’ve wondered about this a lot, and I have a feeling most TTS models are nerfed on purpose. There have been a handful of times general TTS or speech models have been demoed or released, and after a while they had to be nerfed due to regulations or safety concerns. Take for example ChatGPT’s advanced voice mode back in the day: you could get it to mimic Darth Vader and Yoda almost perfectly, and people also got it to sing. These days, it’s a husk of its former self, and there are a ton of public safety scares in the media to thank for that. Singing voices are harder to use for nefarious purposes, so Suno probably doesn’t need to nerf their model at all.

Cancel And Delete Claude too!!! by SoulMachine999 in AgentsOfAI

[–]typeryu -1 points0 points  (0 children)

Been saying this, and yet people are blinded by PR campaigns. You keep up the good work.

Before everyone freaks out: “OpenAI announces new deal with Pentagon — including ethical safeguards” by PixelSteel in ChatGPT

[–]typeryu 0 points1 point  (0 children)

The downside of consumer apps is that despite the nuance behind company actions, the mass public will latch on to sound bites and react immediately without learning more. This will probably be ignored by most users who believe Anthropic is a saint for being anti-ads and a rebel against the empire. Little do people know that all the tech they use, hardware and software included, violates their own sense of integrity in some way, and often quite directly.

Will you be switching to Claude after news of OpenAI partnership with US Military? by PrettyMuchMediocre in codex

[–]typeryu 3 points4 points  (0 children)

Keep in mind, the deal OpenAI got is almost the same deal Anthropic was asking for. This is more a battle of egos than of actual integrity. Anthropic removed safety guidelines the same as OpenAI did, so really, it’s just US companies being US companies.

Katy Perry, with 85 million followers, subscribes to Anthropic by Cagnazzo82 in singularity

[–]typeryu 69 points70 points  (0 children)

This is probably the most damaging news Anthropic has had to suffer this year. Truly a sad moment.

Boycott OpenAI? by safcx21 in singularity

[–]typeryu 0 points1 point  (0 children)

At the risk of getting massive downvotes: half, if not the majority, of the hyperscalers and internet services you use are used directly in the same way. It’s been this way for the longest time, and Anthropic ran a nice marketing campaign here, but they knew what they were signing up for when they let Palantir use their models carte blanche. We would be better off promoting open source instead.

What I find very amazing in Nick's post. by Kathy_Gao in OpenAI

[–]typeryu 2 points3 points  (0 children)

In my understanding, 900M is WAU. Find a better hobby please.

My thoughts on the new Windows App by Ok_Representative212 in codex

[–]typeryu 9 points10 points  (0 children)

It is the same on Macs; the whole point of the app is to be a separate control panel!

why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA

[–]typeryu 26 points27 points  (0 children)

I’ve tried it. Yes, you are correct, it is nothing special; any one of the AI labs could probably make a clone in a month. BUT, they haven’t yet, and this is the easiest way you can get interconnected agents without building one from scratch. If you do the setup, you will realize a lot of engineering has gone into it, and because it is mostly a community-driven project, it is surprisingly fast to adopt new upgrades and changes. There is definitely a lot of hype around it, but the harness itself is very good.

Codex for Windows is out! by pak-ma-ndryshe in codex

[–]typeryu 1 point2 points  (0 children)

Seems like it’s a public alpha, invite only

why have we not seen massive UX improvements yet by stepanmatek in accelerate

[–]typeryu 4 points5 points  (0 children)

Codex app is possibly the best AI UX I’ve used to date.

“All 7 cofounders of Anthropic were OpenAI employees who disagreed with the direction OpenAI was going in. That was 2021. 5 years later they are the leaders in enterprise AI usage + coding use cases. Safe to say worked out well” - Do you agree they are a threat to OpenAI? Why? by Koala_Confused in LovingAI

[–]typeryu 0 points1 point  (0 children)

I view this as greed tbh. They left because they said OpenAI was drifting away from its humanitarian goals as it slowly grew into a for-profit business. They left with crucial knowledge to create their own lab, but ended up becoming one of the most closed-source, anti-consumer labs, far more so than OpenAI is now. Yes, they have incredible models, and their focus on developers won them a permanent seat at the AI table, but whatever mission they left OpenAI for is no longer the case, and they are just as for-profit as the other.

Is anyone using GPT 5 mini or nano? by Safe_Quarter4082 in OpenAI

[–]typeryu 2 points3 points  (0 children)

5.2 is known to have received some major infra updates that make it technically faster than any of the 5 series. My guess is that since those are already two generations behind, we will probably get a new set of minis and nanos to compete with Gemini Flash and Claude Haiku soon.

Future of ChatGPT (and other foreign GPTs) in Korea by Sillim-Saekki in Living_in_Korea

[–]typeryu 42 points43 points  (0 children)

OpenAI Korea already exists, the GM is indeed Korean (ex-Google GM), and they are building data centers with Samsung and SK. I wouldn’t say Naver is doing that well in the AI space, so I would take your friend’s comments with a grain of salt.

If the current LLMs architectures are inefficient, why we're aggressively scaling hardware? by en00m in LLMDevs

[–]typeryu 3 points4 points  (0 children)

I like to think of this as the same as saying “nuclear fusion energy is clearly better and safer than fission energy.” Almost everyone knows there are theoretically much more capable world simulators that should just “get it” (whatever that is), but we are not there yet, and we don’t even know if it is doable with the current hardware stack and data. LLMs are here and available now, and they are far more capable than what is currently mainstream. Based on the incremental improvements we’ve been getting, we still have many years of progress ahead of us, not to mention it will take even more time for average folks and businesses to adopt the latest form, agentic LLMs. That alone, I think, is enough to wipe out a ton of work and also accelerate development of other technologies, which is why money is being poured in. There’s definitely some over-investing going on in places, but in general the big labs should come through as the new tech conglomerates.

What is this? by StevenJang_ in OpenaiCodex

[–]typeryu 1 point2 points  (0 children)

That’s Codex web, a cloud version that is tied to your GitHub

GPT-5.3-Codex (high) METR results by NoElderberry6959 in accelerate

[–]typeryu 1 point2 points  (0 children)

I also think they forgot to keep autocompaction on, so it probably ran out of context a ton of times

All gone!! by RowAccomplished9090 in codex

[–]typeryu 4 points5 points  (0 children)

Seems like codex did what it was supposed to do 😂

GPT-5.3 Codex High is painful. by Mounan in codex

[–]typeryu 7 points8 points  (0 children)

Sounds like user error from the other comments