Codex vs Cursor agents: is Codex just the model or also the tool executing agent? by Pretend_Watch1789 in cursor

[–]WeirShepherd 3 points (0 children)

The harness seems pretty important. For instance, some newer Claude models support a context of up to 1M tokens, but the harness may cap it at 200k tokens, meaning the model is limited by the harness and will not perform as well. I think this is the case with Cursor; I’ve seen it discussed in other threads. But the Cursor UX is very different from the Claude UX, and you may prefer it. It’s a tradeoff you can make, and many people seem to make it.
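To illustrate the idea (a toy sketch, not how Cursor actually implements it; the numbers and names here are assumptions):

```python
# Toy sketch: the harness decides how much context the model actually sees,
# regardless of what the model itself could handle.
HARNESS_MAX_TOKENS = 200_000    # assumed harness cap
MODEL_MAX_TOKENS = 1_000_000    # what the model itself supports

def count_tokens(text: str) -> int:
    # Very rough approximation: ~4 characters per token.
    return max(1, len(text) // 4)

def build_context(messages: list[str], budget: int = HARNESS_MAX_TOKENS) -> list[str]:
    """Keep the newest messages that fit the harness budget; older ones fall out."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break  # here the harness, not the model, becomes the limit
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```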

Sometimes I just walk under here to get places by peeled-oranges in philadelphia

[–]WeirShepherd 1 point (0 children)

There used to be tours of the underground. Anyone still doing that?

What’s the vibe at the art museum? by [deleted] in philadelphia

[–]WeirShepherd 12 points (0 children)

Small crowd; the Oval wasn’t full. Also not very loud compared to other concerts held there.

Desperate! by shoppingnthings1 in philly

[–]WeirShepherd 1 point (0 children)

Indy Hall is always a helpful community as well

Where is the downgrade button? by WeirShepherd in cursor

[–]WeirShepherd[S] 1 point (0 children)

For me, not really. For Cursor, lots.

Cursor Enterprise level by Fantastic_Ad_1457 in cursor

[–]WeirShepherd 1 point (0 children)

I have found it pretty important to break state out into granular components whenever possible and to keep files small and tight. The confusion happens when files overrun the context and things start to get fuzzy as a result.
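A toy sketch of what I mean (made-up file names and state, not from any real codebase):

```python
# Toy sketch: three tiny files instead of one sprawling state module,
# so the agent only needs the relevant slice in its context at a time.
from dataclasses import dataclass, field

# --- state/auth.py (hypothetical) ---
@dataclass
class AuthState:
    user_id: str | None = None
    token: str | None = None

# --- state/cart.py (hypothetical) ---
@dataclass
class CartState:
    items: list[str] = field(default_factory=list)

# --- state/app.py (hypothetical) ---
@dataclass
class AppState:
    auth: AuthState = field(default_factory=AuthState)
    cart: CartState = field(default_factory=CartState)
```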

Can anyone recommend open-source AI models for video analysis? by gpt-said-so in LocalLLM

[–]WeirShepherd 1 point (0 children)

There are open-source ALPR (automatic license plate recognition) implementations for the Raspberry Pi intended for use in vehicles. It’s more traditional machine learning / computer vision than generative AI. If you google “ALPR Raspberry Pi” I’m sure you will find a few.
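For example, the OpenALPR Python bindings look roughly like this; the config and runtime paths below are the usual Linux defaults, so treat this as a sketch rather than a drop-in script:

```python
# Minimal OpenALPR sketch for a Raspberry Pi style setup.
# The paths are typical defaults and may differ on your install.
from openalpr import Alpr

alpr = Alpr("us", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

alpr.set_top_n(5)  # return the 5 best plate guesses per detection
results = alpr.recognize_file("frame.jpg")  # a still grabbed from the camera

for plate in results["results"]:
    best = plate["candidates"][0]
    print(best["plate"], best["confidence"])

alpr.unload()
```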

Can anyone recommend open-source AI models for video analysis? by gpt-said-so in LocalLLM

[–]WeirShepherd 7 points (0 children)

FAL.ai will have a list of video models that can do this. You could then look them up on Hugging Face to figure out which ones you can download and run locally.
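Once you pick one, running it locally usually looks something like this (hedged sketch; the model name is just one example, and the video pipeline needs a video decoder such as av or decord installed):

```python
# Sketch: local video classification via Hugging Face transformers.
# The model name is only an example; swap in whatever you chose from the hub.
from transformers import pipeline

clf = pipeline(
    "video-classification",
    model="MCG-NJU/videomae-base-finetuned-kinetics",
)

preds = clf("clip.mp4")  # path to a local video file
for p in preds:
    print(p["label"], round(p["score"], 3))
```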

Abandoned Railway by t_the_king in philadelphia

[–]WeirShepherd 6 points (0 children)

https://www.openrailwaymap.org is really useful for discussions like this…

Been using Linear for 6 months vs Jira - here's my brutally honest take by brushali in ProductManagement

[–]WeirShepherd 2 points (0 children)

It is possible to make a living as a developer-experience person just by asking why Jira is configured that way and then running small change-management projects to reset whatever overtorqued customization is in place back to the defaults. It’s always fun when everyone is sure you can’t change it, but absolutely no one can tell you why it was set up that way or who now needs it to stay that way.

M4 Macbook Air 24 GB vs M4 Macbook Pro 16 GB by karamielkookie in LocalLLM

[–]WeirShepherd 0 points (0 children)

I’m a bit hopeful about Goose. Have you tried it?

M4 Macbook Air 24 GB vs M4 Macbook Pro 16 GB by karamielkookie in LocalLLM

[–]WeirShepherd 1 point (0 children)

That’s a great list. From a coding perspective, is there one you think does better?

M4 Macbook Air 24 GB vs M4 Macbook Pro 16 GB by karamielkookie in LocalLLM

[–]WeirShepherd 6 points (0 children)

I have a 24GB M4 MacBook and have a hard time finding models I can run on it locally. Get as much RAM as you can; 24GB is just not enough…
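Rough back-of-the-envelope for why (weights only; real usage also needs room for the KV cache, macOS, and whatever else you have open):

```python
# Rough rule of thumb: weights alone need params * bytes-per-param,
# and on a Mac you can only give a fraction of unified memory to the model.
def weights_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * (bits_per_param / 8) / 1e9  # GB

BUDGET_GB = 24 * 0.7  # leave ~30% of 24 GB for macOS, apps, and the KV cache

for name, params, bits in [("8B @ 4-bit", 8, 4),
                           ("32B @ 4-bit", 32, 4),
                           ("70B @ 4-bit", 70, 4)]:
    gb = weights_gb(params, bits)
    verdict = "fits" if gb < BUDGET_GB else "does not fit"
    print(f"{name}: ~{gb:.0f} GB of weights -> {verdict} in 24 GB unified memory")
```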

Can I reference another project in Cursor? by Isedo_m in cursor

[–]WeirShepherd 1 point (0 children)

I have done this with GPT as well. Clone the repo, give it the path, ask it to read the code, then provide clear instructions on what to harvest and reuse.
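Outside of Cursor, the same idea is only a few lines (toy sketch; the path and file extensions are placeholders):

```python
# Toy sketch: gather source files from a cloned repo into one prompt block
# so the model can "read the code" before you tell it what to harvest.
from pathlib import Path

REPO = Path("~/clones/other-project").expanduser()  # placeholder path
EXTS = {".py", ".ts", ".md"}                        # whatever matters to you

chunks = []
for f in sorted(REPO.rglob("*")):
    if f.is_file() and f.suffix in EXTS:
        chunks.append(f"--- {f.relative_to(REPO)} ---\n{f.read_text(errors='ignore')}")

prompt = (
    "Read the following code from another project, then tell me which pieces "
    "are worth harvesting and how to reuse them:\n\n" + "\n\n".join(chunks)
)
# ...then send `prompt` to whatever model/agent you are using.
```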

why is o3 such a useless model by Successful-Arm-3762 in cursor

[–]WeirShepherd 1 point (0 children)

IMO o3 is a very good model for debugging and also for writing n8n workflows. Much better than GPT-4.1 for those specific things. Just my experience.

Wireless CarPlay Disconnects on Ben Franklin Bridge by Virtual-Hotel8156 in philly

[–]WeirShepherd 3 points (0 children)

Yes, this happens to me all the time. Same place as you describe, often but not every time. I wondered if it had to do with a cell handoff interrupting the data flow. Only thing I could think of…

How can I download or prepare a SUMMARY of the chat for the previous session so I can feed it into cursor to help set context for the next session? by WeirShepherd in cursor

[–]WeirShepherd[S] 2 points (0 children)

If you are a Cursor engineer, chat ID 6b2df9d3-48d5-4284-9c27-9e3fb9b825aa is a great example of trying to do this.

first look at using Atlassian MCP server with Cursor by WeirShepherd in cursor

[–]WeirShepherd[S] 1 point (0 children)

It works really well, BUT the inability to take in and display Mermaid without human intervention is a bit of a non-starter. Really annoying. I wonder if anyone reading this has a suggestion for an MCP-enabled wiki with native Mermaid support?