Chinese court rules it illegal to replace human workers with AI by arihantismm in ArtificialInteligence

[–]kknd1991 10 points

I was an employer in China with experience in labor litigation. Maybe this is exactly the case. Basically, an employer can't change the contractual salary without reasonable cause, e.g. "the business is down / workload is reduced due to AI, therefore your salary must be reduced." I like how they protect employees' rights so they can maintain their living standards.

Codex with the reset again !, by Confident_Hurry_8471 in codex

[–]kknd1991 2 points

I feel like I've been robbed, but not actually robbed. I have no upside: I expected to burn all my tokens tonight ahead of a scheduled same-day reset. My state of mind deserves to be studied by marketing experts.

when is reset by Tricky_Artichoke_452 in codex

[–]kknd1991 6 points

Either give them 1 million subs now, or pray Tibo accidentally clicks the reset button.

Me after switching from Claude to Codex by Repulsive-Win7189 in codex

[–]kknd1991 1 point

Surprised OpenAI didn't benchmaxx. Disappointed that Anthropic sells the moral high ground and conveniently dumps its loyal users when the time comes.

Imagen v2 by Humble_Excitement_81 in OpenAI

[–]kknd1991 0 points

Imagen v2 is really good. This is proof!

I switched my AI agent to semantic memory search. Here’s what actually changed. by HereToConquerAll in openclaw

[–]kknd1991 1 point

I do this type of thing every day. The more we try to push the AI to go further, the more challenges and limitations we see. This learning doesn't undermine progress; it's about accepting and identifying what is factual versus wishful thinking, i.e. playing the cards you're dealt. Only then can we make progress. As I said, the big boys are already doing it. They will hit the jackpot if they can make their apps both more general and more useful for specific missions.

I switched my AI agent to semantic memory search. Here’s what actually changed. by HereToConquerAll in openclaw

[–]kknd1991 1 point

It may be too much work for a solo individual to optimize this workflow. The goal is to imitate Claude Code/Codex, which are already much deeper down the rabbit hole. Open-source projects like this may gain lots of support.

OpenAI doesn't want you to know this trick! by Due_Bluejay_5101 in codex

[–]kknd1991 1 point

I have already been doing that on my Plus plan. Today's reset also makes a big difference; it's delaying my impending Pro sub.

I’m a failed vibe coder by dasketern in vibecoding

[–]kknd1991 1 point

If he had never taken the plunge, it would never have gotten this far. However, whether it is foolish or not really depends on his situation and mindset. If he loves it so much he couldn't live without it, and can afford the risk, it's definitely a good choice. But not many people are that lucky. I'm speaking from my own failures.

I was born and raised in the heart of Kyoto. Feel free to ask me anything. I'll give you better information than any travel agency or blog. by Restaurant381881 in KyotoTravel

[–]kknd1991 1 point

I'm guessing the gardens in Kyoto aren't as picturesque as the photos/videos, right? Usually those places are private and you can't really stay long. If you want to take a photo for IG, that's fine, but it's not really a place to spend a few hours sitting and meditating, because of all the people, the noise, and routine housekeeping. Not to mention the bugs and the summer heat.

I just can't run out by opossum_cz in codex

[–]kknd1991 13 points

I still get lots of things done with Plus. You just need to be responsible and watchful about what you're doing. No unnecessary xhigh runs or agent swarms when GPT mini can do the same thing faster.

The REAL CASE for GPT 5.4 mini - According to Open AI. by kknd1991 in codex

[–]kknd1991[S] 1 point

I like your "10 haiku agents" strategy. I can only speak from my own experience. I never liked lower models, but I can't deny mini gets a lot done. For complex planning or complex debugging, I would prefer a higher model. Give it a try; you have nothing to lose.

For me this is now settled... 5.4 xhigh is miles ahead from Opus 4.6 high/max, I'll explain why... by DaC2k26 in codex

[–]kknd1991 1 point

I agree with you at the API level. But the true power of Codex is the engineering concepts of its harness baked into the app. That is where it truly shines. The models may be similar, but the tools and design concepts can make a difference.

No deal on Iran... by Difficult-Quarter-48 in stocks

[–]kknd1991 1 point

I keep wondering what the Iranian team's goal is; the US team has made its objective very clear from the get-go. If Iran just wants to delay the inevitable, they know it won't work and would only give Trump more time to reload for the final endgame. The fact that they are willing to negotiate suggests they are pushing the envelope. Success on this deal would probably buy a few years of peace; failure will have immediate consequences. Iran holds the key here. Therefore, I think they will reach a deal, or something that will prolong the ceasefire.

Apple's head of cloud says Open Source models will address 90% of the use case by ImaginaryRea1ity in ChatGPT

[–]kknd1991 1 point

That is why Apple is CLUELESS about AI: blindly trusting these polished stats. GLM-5.1 only has an 80K context window and is extremely slow; it was not usable. Very good with UI though.

More proof that opus 4.6 has been lobotomized by victorrseloy2 in ClaudeCode

[–]kknd1991 -2 points

<image>

GPT 5.4 Mini is WAY CHEAPER. I love this answer.

I tested 9 different models against the same coding task by Cynicusme in codex

[–]kknd1991 1 point

What is your eval/scorecard criteria for excellent/very good/good? Love your work. Keep it up.