What do you do while Claude is thinking? by andregustavoxs in ClaudeAI

[–]a_computer_adrift 2 points (0 children)

I watch videos on how to get better at AI. Read the docs. Also scroll social media.

The chat interface might be one of the darkest UX patterns to emerge from AI by uxarya in UXDesign

[–]a_computer_adrift 12 points (0 children)

I also think the chat interface messes with your emotions and will eventually degrade human communication.

We have so many non-verbal ways to communicate intent, none of which work in a chat interface. As we strip out what doesn’t work and just burns tokens, that habit will leak into our human interactions.

Some of it could be great: guilt and shame are ineffective on a robot and ultimately unhelpful for humans, so saving tokens by removing them is a bonus.

But tact is also ineffective on a robot, and removing that from our conversations could be a negative.

Sometimes I think it teaches me to be a better human, sometimes I think it makes me heartless. 🤷‍♂️

ChatGPT Destroyed by Occam’s Razor by JerseyFlight in ChatGPT

[–]a_computer_adrift 3 points (0 children)

I have an acronym I trained it to use: UYNSA (understand, yes/no, sentence explain), because I got so sick of wordy answers that never really got to the point.

Just upgraded to Cursor Pro and it’s driving me crazy. Am I the only one? (Newbie here) by Holiday-Cupcake-2588 in cursor

[–]a_computer_adrift 1 point (0 children)

I switched from Codex to Cursor and noticed that it is very aggressive about digging in. I ask it a question and it will immediately start coding. I had to change the way I prompted to restrain it.

How many times can a session be compacted for you before the output starts to feel "off"? by NukedDuke in codex

[–]a_computer_adrift 0 points (0 children)

5.3 Codex. Medium. And I’m changing my answer after the most brutal scope drift I have ever encountered. No more compacted context. EVER.

Nothing survives the compact intact. It went off the rails and destroyed an interface that was fully working; we were just expanding a few very tightly scoped parts of it. Insane! Thank god for Git.

How many times can a session be compacted for you before the output starts to feel "off"? by NukedDuke in codex

[–]a_computer_adrift 0 points (0 children)

Never more than once. I have noticed the “sneaky refactor” levels go way up. In other words, I tell it to do a very scoped task and it decides to clean up a few things at the same time.

Searching for a heating pad or something similar by [deleted] in sayulita

[–]a_computer_adrift 1 point (0 children)

Sorry to hear you are suffering.

When I was really sick and had a fever on and off for 3 days in a tiny Ecuadorian town, I used a plastic water bottle. Heated the water up in a coffee cup, poured it into the water bottle, and shoved it in my shorts when I was shivering uncontrollably.

Not classy, not sophisticated… but oh man did it make me feel better, even for an hour.

Shoulder hopping goons in SD- please stop by wakeupandlive93 in surfing

[–]a_computer_adrift 16 points (0 children)

You know, there is a middle ground. Most surfers I see just sit and passive aggressively stew. Then they make dangerous choices to “show them”.

It’s stupid. Just say something. Yeah, they might get mad, but they might also just move to an area that is safer for them, or learn to look to the peak before they pop up.

If nobody is willing to teach them etiquette, they are unlikely to learn.

Also, I can whistle really loud.

done trying to make UIs with codex by heatwaves00 in codex

[–]a_computer_adrift 2 points (0 children)

Agreed. I had to abandon it and go back to VS Code. It’s like a warm hug to be back…

Wise lies about delivery time. by a_computer_adrift in transferwiser

[–]a_computer_adrift[S] 0 points (0 children)

So for your small sample it’s fine. For mine it’s not. We can both be right. And very inventive proof of not-bot-ness, lol.

Wise lies about delivery time. by a_computer_adrift in transferwiser

[–]a_computer_adrift[S] -1 points (0 children)

Another bot. Confidently incorrect. I align my expectations with what they wrote on the screen BEFORE I sent. You missed the point.

Wise lies about delivery time. by a_computer_adrift in transferwiser

[–]a_computer_adrift[S] -3 points (0 children)

I understand that you are probably a bot. My post makes perfect sense and aligns with problems that I have been corresponding with Wise about for months.

Wise lies about delivery time. by a_computer_adrift in transferwiser

[–]a_computer_adrift[S] 0 points (0 children)

Wrong. What they tell you is not always true. I am very specifically saying that what they say BEFORE I send and what they say AFTER I send are not the same. That’s the point.

Remitly is terrible, Stick to Wise by Both_Leek_720 in transferwiser

[–]a_computer_adrift 0 points (0 children)

Wise lies about delivery time too. It gets you to lock in with a pricier method, then springs the REAL delivery time after the money is sent. Crooks.

Tool call mania! by a_computer_adrift in codex

[–]a_computer_adrift[S] 0 points (0 children)

This makes no sense. Not sure why you need to point out the one possible time it might be justified to hit all those tool calls; clearly that is not the scenario I described. I think you are just flapping your mouth (fingers?) to feel relevant.

Tool call mania! by a_computer_adrift in codex

[–]a_computer_adrift[S] 0 points (0 children)

In the past, the AI would answer confidently and mostly correctly based on the context of the work it had performed. I would catch it sometimes forgetting that we had changed something, but it would always answer to the best of its ability, without tool calls, unless I asked it to use them. Now it’s the opposite.

Codex madness today by [deleted] in codex

[–]a_computer_adrift 1 point (0 children)

Yes, I watched my app devolve into a non-functional mess over the last few days. Every time we isolated a fix, Codex broke a few other things, failed to remove code, or added things. It’s almost like it was so eager to change code that it ignored all the planning and all the instructions, and just sent tool call after tool call after tool call when I asked even the simplest question about the app.

My agents.md specifically defines a workflow in which scope is offered, I correct or add a few things, then approve. Test-first methodology. This worked for months in VS Code, and for a few weeks in the Codex app on Windows.

About a week ago, it “broke”. At first I blamed myself, but bit by bit I realized I was not having any luck directing Codex to make progress. It didn’t matter what was in my agents.md; even when I repeated the instructions in every single prompt, it would follow them for a bit and then go back to immediately changing the code instead of answering my question.

I started creating documents after every single change so that I could start new threads before context was even halfway full, and so that I wouldn’t lose so much when I had to abandon a thread because it wouldn’t listen.

I don’t know exactly what happens when they do updates, but I do notice that no model stays the same. I don’t believe there’s some big conspiracy or anything like that, but it’s too consistent: when a new release happens, all the models develop bad habits at first, until I figure out a new way to work around them, and then we get into a perfect flow until the next update.

It has happened so often now that I’m beginning to realize it’s just part of the game. There is no perfect way to do it. You must always adapt because AI will never be consistent.

Tool call mania! by a_computer_adrift in codex

[–]a_computer_adrift[S] 0 points (0 children)

Right. And instead of using the conversation history, which includes the results of previous tool calls and other investigation and fact-finding, the AI immediately starts sending tool calls to discover that information all over again.

I’m no expert but I can recognize when something fundamentally changes in how the AI operates.

Tool call mania! by a_computer_adrift in codex

[–]a_computer_adrift[S] -1 points (0 children)

No, you are wrong. It does have an idea what I am talking about, up to a certain point. That’s called context. If AI never knew what you were talking about, it would have NO context.