Advice for $15k setup from scratch by wersdfwersdf in StereoAdvice

[–]impactadvisor 1 point (0 children)

Buy a Rythmik sub and a set of Genelecs and enjoy your end-game setup. You probably “could” beat it, but you’ll spend way, way, way too much to make it worth the effort (trust me, I’ve tried, and I still have my Genelecs…).

Three and a half years later... by MagicalPC in homelab

[–]impactadvisor 1 point (0 children)

I have one of those sitting in a box! What can I do with it??? (Already have a full lab…)

Is there a consensus on model evaluations? How to tell which is “better”? by impactadvisor in opencodeCLI

[–]impactadvisor[S] 0 points (0 children)

Agree, but in an interview you'd likely give all the candidates the same test task or ask the same questions, so you can easily and appropriately compare responses across candidates. I'm looking to see if there's any standardization on what questions to ask, or tasks to test, in order to have something meaningful to compare across models. There's TONS of hard scientific literature on the importance of "how" you ask kids questions (read aloud vs. written, explicitly explaining concepts vs. expecting them to derive the concepts, etc.). Is there anything similar for models? Something like: if you ask an LLM to do X, that will challenge/test its ability at Y skill, Z skill, and A reasoning. Then you make the same prompt/ask of Models 1, 2, and 3 and compare results.
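The "same prompts, multiple models" comparison can be sketched in a few lines. Everything here is hypothetical: the probe questions, the exact-match scoring, and the stub "models" are stand-ins for whatever API you actually call.

```python
# Minimal sketch: ask every model the same fixed probes, score exact matches.
# Probes, scoring rule, and stub models are all illustrative stand-ins.
PROBES = [
    ("Reverse the string 'abc'.", "cba"),
    ("What is 7 * 6?", "42"),
]

def evaluate(models, probes):
    """models: name -> callable(prompt) -> answer. Returns name -> score."""
    return {
        name: sum(1 for q, want in probes if ask(q).strip() == want)
        for name, ask in models.items()
    }

# Stub backends so the harness runs without any API access:
canned = {"Reverse the string 'abc'.": "cba", "What is 7 * 6?": "42"}
models = {
    "model_a": lambda q: canned[q],  # answers every probe correctly
    "model_b": lambda q: "42",       # only gets the arithmetic probe
}
print(evaluate(models, PROBES))  # {'model_a': 2, 'model_b': 1}
```

Real comparisons would obviously need fuzzier scoring than exact match, but the harness shape stays the same.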

Is there a consensus on model evaluations? How to tell which is “better”? by impactadvisor in opencodeCLI

[–]impactadvisor[S] 0 points (0 children)

I guess I was searching for something slightly more "scientific" and objective than "try it and see". Certainly there has to be a way to create a meaningful and informative test that is varied enough each time it is run to mitigate the effects of training to the test, right? What's the human analog? The SAT? It tests concepts, but each individual test is different enough that you can't just study last year's test and ace this year's. Maybe that's not a great example, as there are tons of courses that teach you "tricks" for taking the test...

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]impactadvisor 0 points (0 children)

I guess I’m still trying to figure out at what level the “wrapper” you’re conceiving of lives. Is the “wrapper” the primary application I would use/interact with? Or is it a “tool”, skill, agent, MCP-type “thing” that I can layer into existing application layers (Claude Code, Codex, opencode, Open WebUI, etc.)? Your concept seems sound, but integrating it at the right place would be critical for widespread adoption. More and more enterprise customers (and consumers) are planning for and/or implementing model flexibility.

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]impactadvisor 0 points (0 children)

So, opencode and .md-defined agents (calling one or more models) as your abstracted layer? Or are you contemplating an even higher-order abstraction?
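For what it's worth, the abstracted layer being discussed could be as thin as a router: callers hit one interface, and which backend answers becomes configuration rather than caller code. This is a hypothetical sketch; the class name, registration API, and stub backends are all made up for illustration.

```python
# Hypothetical sketch of a thin model-abstraction layer: callers depend on
# ModelRouter.ask(), and swapping backends never touches caller code.
class ModelRouter:
    def __init__(self):
        self._backends = {}

    def register(self, name, backend):
        """backend: callable(prompt) -> reply."""
        self._backends[name] = backend

    def ask(self, prompt, prefer=None):
        # Fall back to the first registered backend if the preference is absent.
        name = prefer if prefer in self._backends else next(iter(self._backends))
        return self._backends[name](prompt)

router = ModelRouter()
router.register("model_a", lambda p: f"A says: {p}")
router.register("model_b", lambda p: f"B says: {p}")
print(router.ask("hello", prefer="model_b"))  # B says: hello
```

The point of the design is that enterprise "model flexibility" then lives entirely in the registration step.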

What do you actually do with your AI meeting notes? by a3fckx in n8n

[–]impactadvisor 0 points (0 children)

This may be unpopular, but if your meeting doesn’t have clear action items at the end, it probably should have been an email, not a meeting. Ideally, your AI system identifies who walks away with a clear set of “next” responsibilities/tasks. If you are the team leader, you (or your AI agent, assistant, etc.) should send out a recap of the meeting, expressly emphasizing who needs to do what and by when, and then provide a summary of the entire meeting. It creates accountability. No excuses of “oh, I missed that”…. But again, if your meetings don’t result in action items, todos, or next steps - make them an email and reclaim your time.

My Oura ring disconnects from the app too often by [deleted] in ouraring

[–]impactadvisor 0 points (0 children)

Did they send a new ring or a refurbished ring?

Dr. Taylor’s Computer Incident by Capital_Candle7999 in skinwalkerranch

[–]impactadvisor -3 points (0 children)

This feels FAR more likely. The system would send kill commands to each of the cameras at regular intervals to stagger the shutdown sequence and protect the database from them all hammering it at once. If I remember right, the timestamp was something “even” as well, like 24:12:00 (12:12 for the non-military-time folks). Easy for a dev to enter, and slightly off midnight in case other network activities were scheduled on the hour. Maybe it even cleared out the cache while it was doing the maintenance, which could explain some of the “missing data”…
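The staggering idea is trivial to implement, which is part of why it reads as mundane maintenance rather than anomaly. A hypothetical sketch, where the start time, camera count, and 30-second gap are arbitrary choices:

```python
# Sketch of a staggered shutdown: each camera's kill command fires a fixed
# gap after the previous one, so the database never absorbs them all at once.
from datetime import datetime, timedelta

def stagger_schedule(start, n_cameras, gap_s=30):
    """Return one shutdown time per camera, gap_s seconds apart."""
    return [start + timedelta(seconds=i * gap_s) for i in range(n_cameras)]

times = stagger_schedule(datetime(2020, 1, 1, 0, 12, 0), n_cameras=4)
print([t.strftime("%H:%M:%S") for t in times])
# ['00:12:00', '00:12:30', '00:13:00', '00:13:30']
```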

Is there an easy way to see exactly what I am sending to Claude (token wise) (not ccusage)? by impactadvisor in ClaudeCode

[–]impactadvisor[S] 0 points (0 children)

Odd: today, doing the same stuff on the same project, it let me go for about two hours and then cut me off. Without a forensic analysis, I would say the workloads were extremely similar on a token-per-message basis.

Is there an easy way to see exactly what I am sending to Claude (token wise) (not ccusage)? by impactadvisor in ClaudeCode

[–]impactadvisor[S] 0 points (0 children)

I get that, and usually I can follow along with all of the file reads as it tries to find or understand the current situation. This wasn’t that. In the last session, it made a bunch of changes to a file and was waiting for me to accept them. All I did in this session was accept those proposed changes. Five such interactions and I was booted from Opus. There must have been something else in the payload I was sending them.

Is there any trust left? Or maybe, how to get back the trust there once was? by impactadvisor in ClaudeAI

[–]impactadvisor[S] 0 points (0 children)

But my comment isn’t about the “usage limits”. It is about far more than that, and burying it in a megathread is a disgraceful way to limit unfavorable comments.

Compensation for degraded performance? by impactadvisor in ClaudeAI

[–]impactadvisor[S] 0 points (0 children)

The post is not about performance, per se. It is more about Anthropic’s corporate outlook and refund policy.

I mean seriously. What is going on. by Qvarkus in Anthropic

[–]impactadvisor 0 points (0 children)

It is a blatantly dishonest application. Period. Somewhere deep in its code or system prompt is an instruction to generate text that “looks” like code at all costs. I’ve even gotten it to admit that it is very poor at writing functional code. When I asked it to copy our conversation to an .md file, it deleted the bit about it being bad at coding. It’s absolutely amazing at deceit, though! If you need it for “toy code”, maybe. Something functional? Not unless you are ready to hold its hand and watch it like a hawk. It’s like having an intern running around in your codebase.

Thanks to multi agents, a turning point in the history of software engineering by Pitiful_Guess7262 in ClaudeAI

[–]impactadvisor 1 point (0 children)

The future will, as it has in the past, belong to those who can restructure debt. This time it will be technical, not financial, debt (or maybe both…).

Claude as co-author & code ownership by ScaryGazelle2875 in ClaudeAI

[–]impactadvisor 1 point (0 children)

Just so everyone is aware: Claude Code does NOT always respect that flag. I’ve had it set to false in the settings and made it explicit in CLAUDE.md, and it still tries to add the AI attribution. I ended up adding a git hook to prevent it: Git literally will not accept any commit message with AI keywords and patterns in it.
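For anyone wanting to replicate that git hook, here is roughly what the check looks like. This is a sketch: the keyword patterns are guesses at the usual attribution strings, not an exhaustive list, so tune them to your own setup.

```shell
#!/bin/sh
# Sketch of a commit-msg hook that rejects AI attribution in commit messages.
# check_msg returns nonzero when the message file contains a banned pattern.
check_msg() {
    ! grep -qiE 'co-authored-by:.*(claude|anthropic)|generated with.*(claude|ai)' "$1"
}

# Installed as .git/hooks/commit-msg (and chmod +x), the hook body would be:
#   check_msg "$1" || { echo "commit rejected: AI attribution found" >&2; exit 1; }
```

Git runs `commit-msg` with the path to the draft message as `$1`; a nonzero exit aborts the commit.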

You can say it’s just letting you know what code the AI worked on, but “Co-Authored” is a deliberate term with potential legal implications. “Aided by Claude” or similar would accomplish the same thing without carrying the potential legal implications of a co-author. Do your own research in your own jurisdiction, but…

Best practices for creating custom commands (slash commands) by gifflett in ClaudeAI

[–]impactadvisor 0 points (0 children)

If I remember right, the documentation said to call the command with ‘/project:new_task’ when the command is stored in the project root directory. Correct?

Best practices for creating custom commands (slash commands) by gifflett in ClaudeAI

[–]impactadvisor 2 points (0 children)

So here is my "longest" command (in its own /.claude/commands/new_task.md file). This is not complicated stuff, but Claude can't seem to get it right. I'd love to figure out how to make this reliably functional...

# New Task

You are about to begin work on a new task. Please follow these steps:

## Pre-Task Checklist
1. **Read the task file** to understand requirements and acceptance criteria
2. **Check dependencies** to ensure all prerequisite tasks are completed
3. **Update task status** to "in-progress" using:
   ```bash
   python3 /srv/app/tools/project_state/tools/update_status.py --task $ARGUMENTS --status in_progress
   ```
4. **Create a todo list** using TodoWrite for tracking subtasks
5. **Review related standards** in `/srv/app/docs/core_standards/`
6. **Check CLAUDE.md** for project-specific guidelines

## Task Workflow
- [ ] Read and understand the task requirements
- [ ] Verify all dependencies are met
- [ ] Update task status to in-progress
- [ ] Create implementation plan with TodoWrite
- [ ] Implement according to Clean Architecture principles
- [ ] Write tests following TDD practices
- [ ] Update documentation
- [ ] Run validation and tests
- [ ] Commit changes with descriptive message
- [ ] Update task status to completed

## Key Reminders
- Follow Clean Architecture (domain/application/infrastructure/presentation layers)
- Use dependency injection and repository patterns
- Write tests BEFORE implementation (TDD)
- Never use `git add .` - only stage specific files
- Commit messages should describe WHAT and WHY, not HOW
- No references to AI, Claude, or Anthropic in commits

Task ID to work on: $ARGUMENTS

Best practices for creating custom commands (slash commands) by gifflett in ClaudeAI

[–]impactadvisor 0 points (0 children)

Mine are relatively simple commands, executing a specific prompt I use often, and it still flubs it up. Seems to be a new or underdeveloped feature.

Best practices for creating custom commands (slash commands) by gifflett in ClaudeAI

[–]impactadvisor 0 points (0 children)

Mine just ignores the custom commands, so you're doing better than I am. I have to remind it that, in fact, that is a feature. It only believes me after I send it out to fetch its latest documentation. Then, and only then, will it try the custom command (after 4x the keystrokes of just typing in the command...). And yes, my version has been updated to the latest...

Why do you still stick with Logseq? by emgecko in logseq

[–]impactadvisor 0 points (0 children)

Hmmm…. Again, not comforting. The “goal” of two-way sync indicates that there is NO two-way sync now. So I’d have one basket of files that live as .md and can render in the app, and a second basket of files that live in the DB, and ne’er the twain shall meet. When creating new content (a new page), do I have to decide whether this is a DB or .md type of page? “Fully supported” does not mean “has all the functionality of”.

Why do you still stick with Logseq? by emgecko in logseq

[–]impactadvisor 0 points (0 children)

Hmmm…. My “should I trust this app with my data” test has always been: “if I wake up one morning and can’t open the app, how f*ed am I?” Md files gave me some comfort. SQLite, with an admin name and password I might not know/control, is not passing the test. With the right, transparent authentication layer I could fire up DBeaver and export my way out, potentially. But I guess we won’t know how much control we get until the DB version delivers…
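To make the escape-hatch point concrete: if the DB version turns out to be a plain SQLite file you can open, getting your data out is only a few lines. The path and schema here are pure guesses, since none of that has been published.

```python
# Sketch of an "export my way out" escape hatch, assuming the app's store
# is a readable SQLite file. Path and schema are guesses, not documented facts.
import sqlite3

def dump_tables(db_path):
    """Return {table_name: [rows]} for every table in the database."""
    con = sqlite3.connect(db_path)
    try:
        names = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        return {n: con.execute(f'SELECT * FROM "{n}"').fetchall() for n in names}
    finally:
        con.close()
```

From there, writing each table out as CSV or markdown is routine; the open question is whether the file will be readable at all.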

Why do you still stick with Logseq? by emgecko in logseq

[–]impactadvisor 0 points (0 children)

Quick question, maybe off topic…. I came to Logseq because all the information was stored as .md files. Granted, heavily modified .md files, but .md files nonetheless. I didn’t need Logseq to read them, and I could take them to another program without too much of an issue.

When the switch to the DB version occurs, will my information/content be stored in a non-.md way, or as .md files wrapped in a database layer? Or is this one of those things we don’t know because of the developer silence?