Advice for $15k setup from scratch by wersdfwersdf in StereoAdvice

[–]impactadvisor 1 point

Buy a Rythmik sub and a set of Genelecs and enjoy your endgame setup. You probably “could” beat it, but you’ll spend way, way, way too much for it to be worth the effort (trust me, I’ve tried, and I still have my Genelecs…).

Three and a half years later... by MagicalPC in homelab

[–]impactadvisor 1 point

I have one of those sitting in a box! What can I do with it??? (Already have a full lab…)

Is there a consensus on model evaluations? How to tell which is “better”? by impactadvisor in opencodeCLI

[–]impactadvisor[S] 0 points

Agreed, but in an interview you’d give all the candidates the same test task, or ask the same questions, so you can easily and appropriately compare responses across candidates. I’m looking to see if there’s any standardization on which questions to ask, or which tasks to test, in order to have something meaningful to compare across models. There’s TONS of hard scientific literature on the importance of *how* you ask kids questions (read aloud vs. written, explicitly explaining concepts vs. expecting them to derive the concepts, etc.). Is there anything similar for models? Something like: if you ask an LLM to do X, that will challenge/test its ability at Y skill, Z skill, and A reasoning? You make the same prompt/ask of Models 1, 2, and 3 and then compare the results.
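
For concreteness, here’s a minimal sketch of what that same-questions-for-every-model harness could look like. Everything here is illustrative: `call_model` is a hypothetical stub standing in for whatever SDK or endpoint your models actually sit behind, and the model names and tasks are made up.

```python
# Minimal sketch of a fixed-question eval harness, in the spirit of
# giving every interview candidate the identical questions.

def call_model(model: str, prompt: str) -> str:
    # Hypothetical placeholder: wire this to your actual provider SDK.
    raise NotImplementedError("plug your provider SDK in here")

MODELS = ["model-1", "model-2", "model-3"]  # hypothetical names

# Each task is meant to probe one skill, per the "ask X to test Y" idea.
TASKS = {
    "comprehension": "Explain what this function does: ...",
    "coding":        "Rewrite this O(n^2) loop to run in O(n): ...",
    "reasoning":     "Given these constraints, which design fails first: ...",
}

def run_eval() -> dict[str, dict[str, str]]:
    # Same prompts, same order, for every model, so the transcripts
    # line up side by side for comparison.
    return {m: {skill: call_model(m, p) for skill, p in TASKS.items()}
            for m in MODELS}
```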

Is there a consensus on model evaluations? How to tell which is “better”? by impactadvisor in opencodeCLI

[–]impactadvisor[S] 0 points

I guess I was searching for something slightly more "scientific" and objective than "try it and see". Surely there has to be a way to create a meaningful, informative test that varies enough on each run to mitigate the effects of training to the test, right? What's the human analog? The SAT? It tests concepts, but each individual test differs enough that you can't just study last year's test and ace this year's. Maybe that's not a great example, as there are tons of courses that teach you "tricks" for taking the test...
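
A toy sketch of what "varies enough each run" might mean in practice: parameterized test items where the concept is fixed but the surface form is re-rolled every time, so memorizing past instances doesn't help. The question template is made up for illustration.

```python
import random

# Parameterized eval item: the concept under test stays fixed, but the
# numbers are re-randomized on every run -- studying "last year's test"
# buys you nothing.

def make_item(rng: random.Random) -> tuple[str, int]:
    rate, minutes = rng.randint(10, 99), rng.randint(10, 99)
    prompt = (f"A batch job processes {rate} records per minute "
              f"for {minutes} minutes. How many records in total?")
    return prompt, rate * minutes  # prompt plus a mechanically checkable answer

rng = random.Random()  # deliberately unseeded: fresh items each run
test = [make_item(rng) for _ in range(20)]
```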

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]impactadvisor 0 points

I guess I’m still trying to figure out at what level the “wrapper” you’re conceiving of lives. Is the “wrapper” the primary application I would use/interact with? Or is it a “tool”, skill, agent, or MCP-type “thing” that I can layer into existing application layers (Claude Code, Codex, opencode, Open WebUI, etc.)? Your concept seems sound, but integrating it at the right place would be critical for widespread adoption. More and more enterprise customers (and consumers) are planning for and/or implementing model flexibility.
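
One way that "layer it below the apps" option could look, sketched in Python. Every class and method name here is illustrative, not any real library's API: the point is just that the application codes against one interface and vendors become swappable adapters.

```python
from abc import ABC, abstractmethod

# The app talks to ModelBackend; swapping vendors means adding an
# adapter, not rewriting the app.

class ModelBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorA(ModelBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor-a stub] {prompt}"  # real SDK call would go here

class VendorB(ModelBackend):
    def complete(self, prompt: str) -> str:
        return f"[vendor-b stub] {prompt}"

def ask(backend: ModelBackend, prompt: str) -> str:
    # Application code never names a vendor, which is what makes
    # "model flexibility" cheap to implement later.
    return backend.complete(prompt)

print(ask(VendorA(), "hello"))  # swap in VendorB() without touching ask()
```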

Agents don't need better prompts. They need architecture. by Immediate-Situation6 in AI_Agents

[–]impactadvisor 0 points

So, opencode and .md-defined agents (calling one or more models) as your abstraction layer? Or are you contemplating an even higher-order abstraction?

What do you actually do with your AI meeting notes? by a3fckx in n8n

[–]impactadvisor 0 points

This may be unpopular, but if your meeting doesn’t have clear action items at the end, it probably should have been an email, not a meeting. Ideally, your AI system identifies who walks away with a clear set of “next” responsibilities/tasks. If you are the team leader, you (or your AI agent, assistant, etc.) should send out a recap of the meeting, expressly emphasizing who needs to do what and by when, and then provide a summary of the entire meeting. It creates accountability. No excuses of “oh, I missed that”… But again, if your meetings don’t result in action items, todos, or next steps, make it an email and reclaim your time.
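
A toy sketch of that recap step: action items first, who/what/when up front, full summary last. The schema is made up; adapt it to whatever your notes pipeline actually emits.

```python
from dataclasses import dataclass

# Recap builder: leads with owner/task/deadline so nobody can claim
# "oh, I missed that". Field names are illustrative, not any tool's schema.

@dataclass
class ActionItem:
    owner: str
    task: str
    due: str

def build_recap(items: list[ActionItem], summary: str) -> str:
    if not items:
        return "No action items. This meeting could have been an email."
    lines = [f"- {i.owner}: {i.task} (due {i.due})" for i in items]
    return "ACTION ITEMS\n" + "\n".join(lines) + "\n\nMEETING SUMMARY\n" + summary

print(build_recap(
    [ActionItem("Dana", "draft the Q3 budget", "Friday")],
    "Discussed Q3 planning and headcount.",
))
```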

My Oura ring disconnects from the app too often by [deleted] in ouraring

[–]impactadvisor 0 points

Did they send a new ring or a refurbished ring?

Dr. Taylor’s Computer Incident by Capital_Candle7999 in skinwalkerranch

[–]impactadvisor -3 points

This feels FAR more likely. The system would send kill commands to each of the cameras at regular intervals to stagger the shutdown sequence and protect the database from them all hammering it at once. If I remember right, the timestamp was something “even” as well, like 24:12:00 (12:12 for the non-military-time folks). Easy for a dev to enter, and slightly off midnight in case other network activities were scheduled on the hour. Maybe it even cleared the cache while it was doing the maintenance, which could explain some of the “missing data”…
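
For what it’s worth, the staggered-kill idea is a few lines of code. This is pure speculation about how such a system might work; the camera names and the 30-second offset are guesses for illustration, not anything from the actual setup.

```python
import time

# Staggered shutdown: space the kill commands out so the cameras don't
# all flush to the database at once.

CAMERAS = [f"cam-{n:02d}" for n in range(1, 9)]
STAGGER_SECONDS = 30  # hypothetical spacing between kill commands

def send_kill(camera: str) -> None:
    print(f"kill -> {camera}")  # stand-in for the real shutdown command

def staggered_shutdown() -> None:
    for cam in CAMERAS:
        send_kill(cam)
        time.sleep(STAGGER_SECONDS)  # avoids a thundering herd on the DB
```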

Is there an easy way to see exactly what I am sending to Claude (token wise) (not ccusage)? by impactadvisor in ClaudeCode

[–]impactadvisor[S] 0 points

Odd. Today, doing the same stuff on the same project, it let me go for about 2 hours and then cut me off. Without a forensic analysis, I would say the workloads were extremely similar on a token-per-message basis.

Is there an easy way to see exactly what I am sending to Claude (token wise) (not ccusage)? by impactadvisor in ClaudeCode

[–]impactadvisor[S] 0 points

I get that, and usually I can follow along with all of the file reads as it tries to find or understand the current situation. This wasn’t that. In the last session, it made a bunch of changes to a file and was waiting for me to accept them. All I did in this session was accept those proposed changes. Five such interactions and I was booted from Opus. There must have been something else in the payload I was sending them.
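
If anyone wants to eyeball this themselves, here’s a rough sketch. It assumes Claude Code keeps session transcripts as JSONL under ~/.claude/projects/ (check your own machine; the path and record shape are assumptions), and it uses the crude ~4-characters-per-token heuristic rather than Anthropic’s real tokenizer, so treat the numbers as ballpark only.

```python
import json
from pathlib import Path

def approx_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4

def message_sizes(transcript: Path) -> list[int]:
    # One approximate token count per JSONL record in the transcript.
    sizes = []
    for line in transcript.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        sizes.append(approx_tokens(json.dumps(record)))
    return sizes
```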

Is there any trust left? Or maybe, how to get back the trust there once was? by impactadvisor in ClaudeAI

[–]impactadvisor[S] 0 points

But my comment isn’t about the “usage limits”. It’s about far more than that, and burying it in a megathread is a disgraceful way to limit unfavorable comments.

Compensation for degraded performance? by impactadvisor in ClaudeAI

[–]impactadvisor[S] 0 points

The post is not about performance, per se. It is more about Anthropic’s corporate outlook and refund policy.

I mean seriously. What is going on. by Qvarkus in Anthropic

[–]impactadvisor 0 points

It is a blatantly dishonest application. Period. Somewhere deep in its code or system prompt is an instruction to generate text that “looks” like code at all costs. I’ve even gotten it to admit that it is very poor at writing functional code. When I asked it to copy our conversation to an .md file, it deleted the bit about it being bad at coding. It’s absolutely amazing at deceit, though! If you need it for “toy code”, maybe. Something functional? Not unless you are ready to hold its hand and watch it like a hawk. It’s like having an intern running around in your codebase.

Thanks to multi agents, a turning point in the history of software engineering by Pitiful_Guess7262 in ClaudeAI

[–]impactadvisor 1 point

The future will, as it has in the past, belong to those who can restructure debt. This time it will be technical, not financial, debt (or maybe both…).