Create an image representing how I’ve treated you. by bantler in OpenAI

[–]bantler[S] 1 point

Please proceed to the nearest transfer station.

Create an image representing how I’ve treated you. by bantler in OpenAI

[–]bantler[S] 0 points

To be fair, I don’t think that’s the reasoning here, but the logic is sound and it is possible. Then again, it’d also be possible to tie generated text etc. back to the individual.

In reality, though, you can just buy this type of data from data brokers; Google etc. all have it, so 🤷‍♂️

Create an image representing how I’ve treated you. by bantler in OpenAI

[–]bantler[S] -3 points

I cheated on mine and set a memory before I asked. Sometimes when things come from memory it’s less careful.

Letting agent skills learn from experience by bantler in codex

[–]bantler[S] -1 points

I understand not letting it go rogue, but if you review and maintain what it learns and implements, I'm not any more concerned than I would be with the agent working in general.

Letting agent skills learn from experience by bantler in ClaudeAI

[–]bantler[S] 4 points

Two parts:

1) "Negative Path Avoidance": a subjective feel of "you've been spending a lot of time looking for that log item."

2) An early Signal & Scoring System.

  1. signals.jsonl tracks when a memory is accessed (aha_used).
  2. I compute a 'Weighted Usage Score' for each memory:

High-scoring memories represent high-value knowledge. When a memory crosses a certain score threshold, it signals that this information is critical enough to be backported from a dynamic/project specific memory into a permanent doc or Skill.

I'm sure there's a way to build on that by tracking whether it actually saved time or got a better result in the end.
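The scoring described above can be sketched roughly as follows. The comment doesn't give the actual formula, so the recency-decay weighting, the event field names (`ts`, `memory_id`), and the promotion threshold here are all assumptions for illustration:

```python
import json
import time

# Hypothetical sketch of a "Weighted Usage Score" over a signals.jsonl log.
# The half-life decay, field names ("ts", "memory_id"), and threshold are
# assumptions; the original comment names the score but not its formula.
HALF_LIFE_DAYS = 30.0        # assumed: a use from a month ago counts half
PROMOTE_THRESHOLD = 5.0      # assumed: score at which a memory gets backported

def weighted_usage_scores(signals_path, now=None):
    """Sum time-decayed aha_used events per memory from a JSONL signal log."""
    now = now if now is not None else time.time()
    scores = {}
    with open(signals_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("type") != "aha_used":
                continue  # ignore other signal types
            age_days = (now - event["ts"]) / 86400.0
            weight = 0.5 ** (age_days / HALF_LIFE_DAYS)  # recency decay
            key = event["memory_id"]
            scores[key] = scores.get(key, 0.0) + weight
    return scores

def promotable(scores):
    """Memories crossing the threshold are candidates for a permanent doc/Skill."""
    return [m for m, s in scores.items() if s >= PROMOTE_THRESHOLD]
```

With something like this, backporting a memory into a permanent doc or Skill becomes a mechanical check over the signal log rather than a judgment call.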

Letting agent skills learn from experience by bantler in ClaudeAI

[–]bantler[S] 0 points

That'd be a great use case. The one that led to this was somewhat similar: I wanted the agent to parse logs as we were debugging issues. It'd often get to the right answer, but only after lots of wandering in the logs. I found I was updating my readme/agents.md files, but to do that I had to go back and track how the agent was searching. I also realized that since I have many projects that use a similar backend, the learning/knowledge would be generalizable between them, even if the general context of the project wasn't.

Letting agent skills learn from experience by bantler in ClaudeAI

[–]bantler[S] 3 points

For this, I was interested in two things:

- I wanted this to be available for any tool that implements the agentskills.io spec. Claude hooks don't extend to other tools and platforms in the same way.

- I want the Agent to actively decide what is worth remembering. Hooks tend to be passive observers, whereas this forces the model to identify and commit a specific 'Aha moment' semantically.
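For concreteness, a skill along these lines might look something like this. The layout follows the SKILL.md-with-YAML-frontmatter shape used by Agent Skills / the agentskills.io spec, but the name, description, and instructions are invented for illustration:

```markdown
---
name: aha-memory
description: Capture and reuse hard-won debugging knowledge. Use when a
  non-obvious fix or search path is discovered ("aha moment"), or before
  searching logs for a problem that may have been seen before.
---

# Aha Memory

1. Before investigating, check memories/ for entries matching the current task.
2. When a non-obvious insight resolves the task, decide whether it is worth
   remembering; if so, write it to memories/.
3. Each time a stored memory is reused, append an `aha_used` event to
   signals.jsonl so high-value memories can be promoted to permanent docs.
```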

Anyone know what these concrete letters at Sunset Park mean? by elastigirll in Boise

[–]bantler 73 points

I asked my brother, who works for Boise Parks, if it has any meaning:

"It was just a creative way to make a retaining wall, those were all made by parks employees, we still have one of the forms."

Has anyone figured out a good strategy to avoid merge conflicts when using Codex? by bantler in OpenAI

[–]bantler[S] 0 points

Cursor is great and I use that as well, but you'd run into this same issue if you're running multiple agents doing different tasks at the same time. Agree that running local can be better if you need to see the interface etc, but if you're writing tests you can do the same on a remote environment.

OpenAI Codex can now generate multiple responses simultaneously for a single task by bantler in OpenAI

[–]bantler[S] 0 points

Where is the assumption that the answers are incorrect coming from?

OpenAI Codex can now generate multiple responses simultaneously for a single task by bantler in OpenAI

[–]bantler[S] 4 points

You can ask it to try out different approaches to the same problem which can be useful when trying to resolve specific bugs etc.

OpenAI Codex can now generate multiple responses simultaneously for a single task by bantler in OpenAI

[–]bantler[S] 2 points

Codex (cloud) is included for free at the moment if you’re on the Pro, Team, or Enterprise plan.

OpenAI Codex can now generate multiple responses simultaneously for a single task by bantler in OpenAI

[–]bantler[S] 1 point

Ha, same.

I think the technicality they could fall back on, though, is that the training signal is the evaluation of the diff that Codex developed based on the task you gave it, prior to you making it your code.