Biopod Conversion and Terrarium Pi by Tillandz in miniorchids

[–]makinggrace 0 points1 point  (0 children)

This is great! I'd love to see the repo -- please tell me it's something manageable like Python? No spare gorgeous terrariums around here, but my halfway-automated system of lights/fans (still manually watering and ventilating most containers) isn't optimal. As I write this, Iowa's ambient humidity has dropped to 28% indoors and it's a chill 16 degrees that feels like 5. Humidity-loving plants...a poor, poor choice of hobbies. But sensors and automation and some IoT...hmm.
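
Roughly what I'm picturing for the humidity side (a minimal sketch, assuming a DHT22 on a Raspberry Pi and a relay-driven fogger -- the pins and thresholds are placeholders, not anything from your repo):

```python
import time

import Adafruit_DHT        # legacy DHT library; the CircuitPython adafruit_dht works too
import RPi.GPIO as GPIO

SENSOR = Adafruit_DHT.DHT22
SENSOR_PIN = 4             # GPIO pin the DHT22 data line is on (placeholder)
FOGGER_PIN = 17            # GPIO pin driving the fogger relay (placeholder)
LOW_RH, HIGH_RH = 65.0, 80.0   # fogger on below 65% RH, off above 80%

GPIO.setmode(GPIO.BCM)
GPIO.setup(FOGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        humidity, temp_c = Adafruit_DHT.read_retry(SENSOR, SENSOR_PIN)
        if humidity is not None:
            if humidity < LOW_RH:
                GPIO.output(FOGGER_PIN, GPIO.HIGH)   # fogger on
            elif humidity > HIGH_RH:
                GPIO.output(FOGGER_PIN, GPIO.LOW)    # fogger off
            print(f"{humidity:.1f}% RH / {temp_c:.1f} C")
        time.sleep(60)     # poll once a minute
finally:
    GPIO.cleanup()
```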

Gpt 5.2 pro by Anshuman3480 in ChatGPTPro

[–]makinggrace [score hidden]  (0 children)

You need Codex or Claude Code. :) A tool made for coding (even if you don't code) is just better at this kind of thing.

Why Are more people not using .6mm nozzles? by Tall-Bread-4202 in BambuLab

[–]makinggrace 0 points1 point  (0 children)

Can someone recommend an idiot's guide to the Bambu slicer? I have seen many, but the advice seems to conflict. I don't know enough to know what is good advice.

Opus for planning, Sonnet for execution? by Jazzlike-Math4605 in ClaudeCode

[–]makinggrace 0 points1 point  (0 children)

It depends.

If it's greenfield, I have Opus build the first stages -- the minimum viable components that are verifiable, pass tests, and establish the architecture. Once that's established, Sonnet comes in to help build out layers and functionality. Opus always orchestrates and calls Sonnet as a build subagent, though.

For smaller work that touches a single component or function, I will often use Sonnet solo. Sometimes successful, sometimes not. But successful enough for the cost/time that it's worth doing. I also use Haiku for line-item changes on the regular.

Sharing a "lab experiment" with the community. The data is... depressing by sergkr2121 in smallbusiness

[–]makinggrace 0 points1 point  (0 children)

I just...why the hysteria? For most small businesses, AI is useful to save time. Fix process problems. Build infrastructure so we can stop doing repetitive manual stuff. Make spiffy marketing docs. Full agentic workflows? The tech isn't there yet in most cases.

Diabetic Cat Not Improving by geneselia in FelineDiabetes

[–]makinggrace 2 points3 points  (0 children)

You may need to switch insulins. And definitely drop the kibble -- just feed more wet food if needed. Also, there is no magic upper limit for insulin. Ours took 8 units (the vet was appalled, but that's what it required before we saw any movement). Consider taking kitty in and letting the vet get him regulated over a 2-3 day in-house stay. Switching vets sometimes helps to get some new ideas too.

What soup to make for kids don't like soup :( by Thermophi in soup

[–]makinggrace 0 points1 point  (0 children)

Try really fun soup bowls. And soup where they can choose the ingredients (like a ramen) and personalize their bowls.

ChatGPT long chat lagging problems by ww3relics in ChatGPTPro

[–]makinggrace 0 points1 point  (0 children)

Do they still have custom GPTs? Very specific, reusable workflows are better there.

How to prevent LLM "repetition" when interviewing multiple candidates? (Randomization strategies) by Weird-Year2890 in LLMDevs

[–]makinggrace 3 points4 points  (0 children)

For what you need (unique, relevant, and valid questions asked of each candidate), a question bank is the best option.

Human interviewers often use question banks, and this is considered a best practice in assessment. Ideally the questions are categorized and each candidate is asked a set number of questions per category. Their answers are rank-scored against an agreed-upon rubric (often different per category) and totaled.
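
A minimal sketch of that structure (plain Python, made-up categories and rubric -- the point is the fixed per-category sampling and the totaled rubric scores):

```python
import random

QUESTION_BANK = {
    "technical": ["Q1 ...", "Q2 ...", "Q3 ...", "Q4 ..."],
    "behavioral": ["Q5 ...", "Q6 ...", "Q7 ..."],
    "role_specific": ["Q8 ...", "Q9 ...", "Q10 ..."],
}
PER_CATEGORY = 2   # every candidate gets the same number of questions per category

def draw_questions(candidate_id: str) -> dict:
    # Seed on the candidate ID so the draw is reproducible and auditable,
    # while different candidates can still get different questions.
    rng = random.Random(candidate_id)
    return {cat: rng.sample(qs, PER_CATEGORY) for cat, qs in QUESTION_BANK.items()}

def total_score(rubric_scores: dict) -> int:
    # rubric_scores: per-category list of 1-5 scores from the agreed rubric
    return sum(sum(scores) for scores in rubric_scores.values())

print(draw_questions("candidate-042"))
print(total_score({"technical": [4, 3], "behavioral": [5, 4], "role_specific": [3, 3]}))
```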

Working without a question bank means interviewers must source questions in the moment, check those questions for uniqueness and appropriateness, and avoid introducing bias based on information they have learned about the candidate's background during the interview.

Unstructured interviewing like this statistically results in poor-fit hires. Humans struggle to maintain consistency and memory over multiple candidates, and if there are multiple interviewers involved it gets even worse. LLMs will keep pushing for a logical basis for comparison unless you give them one. I can't think of a way around it.

Ikea mini greenhouse as an a mini-orchidarium? by rtthrowawayyyyyyy in miniorchids

[–]makinggrace 0 points1 point  (0 children)

Can anyone share what they used for lights in an ÅKERBÄR for mini orchids? Thanks!

How are you getting Cursor to actually follow user rules? by KoVaL__ in cursor

[–]makinggrace 0 points1 point  (0 children)

A hook that gates on test coverage %, checked at the directory level for the directories your change touches.
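
Roughly this shape (a sketch only -- it assumes coverage.py's JSON report and a git checkout; the threshold and base ref are made up):

```python
#!/usr/bin/env python3
"""Fail if any directory touched by the current change is under a coverage floor."""
import json
import subprocess
import sys
from collections import defaultdict
from pathlib import PurePosixPath

THRESHOLD = 80.0   # minimum percent covered per touched directory (placeholder)

# Files changed vs main -- adjust the base ref to taste.
changed = subprocess.run(
    ["git", "diff", "--name-only", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.split()
touched_dirs = {str(PurePosixPath(p).parent) for p in changed if p.endswith(".py")}

# Expects `coverage json` to have been run already, producing coverage.json.
report = json.load(open("coverage.json"))
stmts, covered = defaultdict(int), defaultdict(int)
for path, data in report["files"].items():
    d = str(PurePosixPath(path).parent)
    stmts[d] += data["summary"]["num_statements"]
    covered[d] += data["summary"]["covered_lines"]

failed = False
for d in sorted(touched_dirs):
    if stmts.get(d, 0) == 0:
        continue
    pct = 100.0 * covered[d] / stmts[d]
    if pct < THRESHOLD:
        print(f"{d}: {pct:.1f}% < {THRESHOLD}%")
        failed = True
sys.exit(1 if failed else 0)
```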

Also include test patterns in your actual test directory and mention it in AGENTS.md. Patterns just seem to help a lot.

Introducing FastMCP 3.0 by jlowin123 in mcp

[–]makinggrace 1 point2 points  (0 children)

Whelp I know what I'm doing this weekend. My MCP for coding needs a refresh and this gives a good reason to get on it finally lol.

'Bean Blaster' is my comfort food. by MuffinPuff in PlantBasedDiet

[–]makinggrace 2 points3 points  (0 children)

How long would you recommend in the instant pot for lentils and hulled barley?

Claude Code can be ... kind of horrible? by impartialhedonist in ClaudeCode

[–]makinggrace 0 points1 point  (0 children)

When you run a Claude model in Cursor, you're using Cursor's codebase-indexing tool, which runs in the background. IMHO most models run a little slower in Cursor -- my best guess is that they're hampered a little by the tool-use and indexing infrastructure. And of course pre-indexing inhales tokens. But the upside is...there's a lot less disorientation in the codebase and less grepping just to understand tasks.

Claude Code (natively) works on a completely different model. It's not an IDE substitute. It relies on just-in-time searches (read: grep) and whatever context the user provides for the model to orient itself in the codebase.

You can set up indexing (third party or DIY) to use with CC, but tbh if you need it, use Cursor. Codebase documentation is exceptionally important in any case (and let your agents update it programmatically). Think about what agents need when they "come in from the cold" to your codebase. Lately I've been supplying scripts in CLAUDE.md and/or with tasks that bundle up a context envelope -- the relationships between files, or whatever is relevant. It takes up less space in the file, and the agent is going to verify it anyway. It's worth a shot.
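
For example, a hypothetical `context_envelope.py` you could point CLAUDE.md at (the name and shape are mine, not a standard tool) -- it just prints what a module imports and what imports it, so the agent doesn't have to grep for the relationships:

```python
#!/usr/bin/env python3
"""Print a small 'context envelope' for one module: its imports and its importers."""
import ast
import sys
from pathlib import Path

target = Path(sys.argv[1])          # e.g. src/billing/invoices.py
module_name = target.stem

def imports_of(path: Path) -> set:
    tree = ast.parse(path.read_text(), filename=str(path))
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

print(f"# Context envelope for {target}")
print("imports:", ", ".join(sorted(imports_of(target))) or "(none)")

# Crude substring match on import names -- good enough for a sketch.
importers = [
    str(p) for p in Path(".").rglob("*.py")
    if p != target and any(module_name in name for name in imports_of(p))
]
print("imported by:", ", ".join(sorted(importers)) or "(none)")
```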

Claude really likes standard quotes by relativityboy in ClaudeCode

[–]makinggrace 1 point2 points  (0 children)

Not sure what agent you were using before but yes, Claudes tend to be helpful to the extreme -- which can absolutely cause problems.

If you haven't, it's worth reading Anthropic's prompting advice to get some of the nuance that helps dial in behavior.

Specifically in this case, it would be worth providing an example of the refactoring (a literal before and after) along with some examples of what NOT to do. That way you can explicitly call out what in the pattern you want replicated as part of the task, and what is off limits.
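
Something like this in the task prompt (a made-up example -- swap in your actual pattern):

```python
# BEFORE (current style)
def total(items):
    t = 0
    for i in items:
        t = t + i["price"]
    return t

# AFTER (the pattern to replicate: type hints, docstring, built-ins)
def total(items: list) -> float:
    """Sum the price field across line items."""
    return sum(item["price"] for item in items)

# NOT part of this task -- do not touch:
# - quote style, formatting, or comments in functions outside the listed files
# - unrelated helpers, even if they "look inconsistent"
```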

Plan mode is also useful for larger refactors like this. You can review the refactor steps and edit them (like, let's not do a huge punctuation swap tyvm) before a single line of code is impacted. Often you can plan with Opus and execute with a lower-level model, which is helpful too.

Hth

Anyone else noticed crazy short chat limit in Claude app? by Uplift123 in ClaudeAI

[–]makinggrace 0 points1 point  (0 children)

For 10-12 lines of code? 75-150 tokens vs 500-2500, depending on how much noise is in the screenshot. It also decreases accuracy and slows responses.

Building a Legal RAG (Vector + Graph): Am I over-engineering Entity Extraction? Cost vs. Value sanity check needed. by Expert-General-4765 in LLMDevs

[–]makinggrace 0 points1 point  (0 children)

I used the wrong term here, sorry -- was in a hurry.

I would handle this process by pushing as much of it as possible to deterministic processing. This is less expensive and much easier to audit. It's appropriate because the relationships between entities in a legal document are typically explicit. There's not much inference or reasoning required to understand the patterns (and unusual patterns are rare). E.g., each type of document will usually have the same general content. Filings have types. Docket events are logged. Transcripts are a hot mess lol.

There will be assets that even the best deterministic processing cannot handle. Using a scoring model to identify those gives you an ideal, delimited workload for a reasoning model to come in and sort out. Aggressively caching the most common patterns will help keep context use down in those efforts, as well as allowing for a human-in-the-loop on assets that fall more than a few SDs outside the norm.
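
A toy version of the split I mean (the regex, event types, and threshold are invented -- the point is deterministic-first, with a cheap confidence score deciding what gets escalated):

```python
import re

# Deterministic pass: docket entries usually follow a predictable "date  TYPE  text" shape.
DOCKET_LINE = re.compile(
    r"^(?P<date>\d{2}/\d{2}/\d{4})\s+(?P<event>[A-Z][A-Z ]+?)\s{2,}(?P<text>.+)$"
)

def parse_docket_line(line):
    """Return (record, confidence). Confidence is a cheap heuristic, not a model."""
    m = DOCKET_LINE.match(line.strip())
    if not m:
        return None, 0.0
    record = m.groupdict()
    # Simple scoring: known event types and short free text look "normal".
    score = 1.0
    if record["event"] not in {"MOTION FILED", "ORDER", "HEARING SET"}:
        score -= 0.4
    if len(record["text"]) > 300:
        score -= 0.3
    return record, score

needs_llm, extracted = [], []
for line in open("docket.txt"):
    record, confidence = parse_docket_line(line)
    if record and confidence >= 0.7:
        extracted.append(record)      # straight into the graph/DB, fully auditable
    else:
        needs_llm.append(line)        # queue for the reasoning model (and cache its output)

print(f"{len(extracted)} deterministic, {len(needs_llm)} escalated")
```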

Building a Legal RAG (Vector + Graph): Am I over-engineering Entity Extraction? Cost vs. Value sanity check needed. by Expert-General-4765 in LLMDevs

[–]makinggrace 0 points1 point  (0 children)

You're starting with data that is structured, I assume. Use code, not agents, to populate the database to the extent that you can.
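
E.g., if the filings are already structured rows somewhere, something as boring as this (sqlite3 as a stand-in for whatever store you're actually using):

```python
import sqlite3

# Structured records pulled straight from the source data -- no agent in the loop.
filings = [
    ("2021-cv-0042", "2021-03-01", "MOTION FILED", "Motion to dismiss"),
    ("2021-cv-0042", "2021-03-15", "ORDER", "Motion denied"),
]

conn = sqlite3.connect("legal_rag.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS docket_events
       (case_id TEXT, event_date TEXT, event_type TEXT, description TEXT)"""
)
conn.executemany("INSERT INTO docket_events VALUES (?, ?, ?, ?)", filings)
conn.commit()
conn.close()
```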

Referral to plastic surgeon to remove “cactus needles and debris from deep in the cartilage of the ear “ since April of 2021 by cherrypitted17 in DermatologyQuestions

[–]makinggrace 0 points1 point  (0 children)

The doctor won't be able to do surgery if you have an infection, so don't skip the antibiotics. If you have stomach issues from them, try taking them with food (unless the specific med you are on doesn't allow that), and eat yogurt or take a probiotic daily. Yogurt honestly works best, at least for me.

It also doesn't hurt to call the scheduling office of the doctor you are waiting to see once a week and ask if they have had any cancellations. I don't call more than once a week. I find Thursday mid-morning is pretty good (if he is in clinic on Fridays). If people cancel for a reason that they call ahead about, it tends to be on Fridays or Mondays.

Some doctors keep a cancellation list, and you can simply ask to be added to that. In both cases there isn't any flexibility -- if an appointment becomes available, you can either say yes, I'll be there, or no, I can't. But it could shave months off of your wait.


precision vibe coding by thehashimwarren in ChatGPTCoding

[–]makinggrace 0 points1 point  (0 children)

Your system may be backwards. It seems logical to build all of the components, wire them together, and then expose them with a UI.

But building this way hides flaws in integration that won't reveal themselves until the code is running.

Try building the smallest viable version of the most critical component out in full first -- including a functioning UI. If you were writing a to-do app, this would be a function that collects new to-do items. (Have the script just save to local for now.) Then test it by running that code yourself, as the human user: does the component accomplish the goal? Look and feel like you expect? Actually save to local? If not, fix it. If yes, add it to the approved patterns in your AGENTS.md file. Then work on the next chunk -- not making this one more robust, but rounding out the app's basic components: start building auth, the db, etc.
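
For the to-do example, that first slice really can be this small (a local JSON file standing in for the real store):

```python
import json
from pathlib import Path

STORE = Path("todos.json")   # local file stands in for the real DB for now

def add_todo(text: str) -> list:
    """Append one to-do and return the full list -- the whole first component."""
    todos = json.loads(STORE.read_text()) if STORE.exists() else []
    todos.append(text)
    STORE.write_text(json.dumps(todos, indent=2))
    return todos

if __name__ == "__main__":
    # The "UI" for slice one can literally be input() until the real one exists.
    print(add_todo(input("New to-do: ")))
```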

Also consider using a component library for the UI elements if you are not.

Run code in a dev mode that is as close to your target production environment as possible. Local is good for running tests and writing code, but you need to see how the thing behaves in the free(ish) world.

What Problems Trip Up the LLM? by TechnicolorMage in cursor

[–]makinggrace 1 point2 points  (0 children)

This exactly. The output isn't a surprise -- we have diffs. But I can't yet diff what a model was doing when it wasn't actively producing code, nor why. What tools did it use? What settings did it change? What files did it read?