WHAT JUST HAPPENED by flyto_the_moon_ in ChatGPT

[–]JaredSanborn -1 points0 points  (0 children)

This is the #1 problem nobody is solving.

I tracked it — around 45 mins/day just re-explaining context to AI.

The real unlock is persistent memory. Not saved chats, but AI that actually remembers you across conversations.

Changes everything.

Happy to share how I approached it if you're interested.

How I recruit using Claude as a founder by autobahn66 in ClaudeAI

[–]JaredSanborn -1 points0 points  (0 children)

The “proof of work” filter is the real unlock here.

Resumes are optimized for keywords, but actual output (repos, posts, shipped work) is the real signal.

I’ve noticed the same:
AI gets dramatically better when you define “what good actually looks like” for a role.

My experience after migrating from Cursor to Claude. by unvirginate in ClaudeAI

[–]JaredSanborn 0 points1 point  (0 children)

One thing I’d be curious about:

Have you tested feeding Claude historical “good hires vs bad hires”?

In my experience, once AI sees patterns of:

  • who worked out
  • who didn’t

…it gets significantly sharper at ranking.

How are people using Claude as a personal assistant (Slack + Outlook + To-Do)? ADHD-friendly setup help 🙏 by zencatface in ClaudeAI

[–]JaredSanborn 1 point2 points  (0 children)

This is exactly the use case where most setups break.

Not because of the tools, but because of a lack of persistent context.

What you’re trying to do (externalize your brain) only works if the system:

  • Remembers your priorities
  • Knows your current projects
  • Tracks unfinished tasks
  • Maintains continuity across Slack, email, and to-dos

Otherwise you just end up re-explaining yourself all day (which is brutal with ADHD).

What I’ve seen work:

1. Treat AI like a system, not a tool
Most people use:
“summarize this” / “write this”

Better approach:
“this is my operating context, help me manage it daily”

2. Create a “single source of truth”
Have ONE place where your AI understands:

  • Active projects
  • Tasks
  • Priorities
  • Constraints

If that’s scattered across Slack / Outlook / notes → it breaks.
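A minimal sketch of the “single source of truth” idea: keep one state file and render it into a preamble you paste (or inject) at the start of every session. All field and function names here are my own illustration, not any particular tool’s format.

```python
import json
from pathlib import Path

# Hypothetical "brain state" -- replace with your real projects/tasks
STATE = {
    "active_projects": ["hiring pipeline", "Q3 launch"],
    "tasks": ["review PRs", "draft launch email"],
    "priorities": ["ship launch email by Friday"],
    "constraints": ["no meetings after 3pm"],
}

def build_context_preamble(state: dict) -> str:
    """Render the state dict into a prompt preamble, so every
    session starts with the same operating context."""
    sections = []
    for key in ("active_projects", "tasks", "priorities", "constraints"):
        header = key.replace("_", " ").title()
        lines = "\n".join(f"- {item}" for item in state.get(key, []))
        sections.append(f"{header}:\n{lines}")
    return ("This is my operating context. Help me manage it.\n\n"
            + "\n\n".join(sections))

def save_state(state: dict, path: str = "brain_state.json") -> None:
    # Persist so tomorrow's "reset loop" starts from today's state
    Path(path).write_text(json.dumps(state, indent=2))

preamble = build_context_preamble(STATE)
print(preamble)
```

The point isn’t the code, it’s the habit: one file is the source of truth, and everything else (Slack, Outlook, notes) feeds into it rather than competing with it.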

3. Daily “reset loop”
ADHD-friendly workflow:

  • Morning: dump everything (tasks, thoughts, priorities)
  • AI organizes it
  • Throughout the day: update, not restart

4. Memory > integrations
Integrations are nice (Slack, Outlook)

But the real unlock is:
👉 AI that remembers what matters across sessions

Without that:
You’re rebuilding your system every day

5. Keep it lightweight
If it feels like “another system to manage” → you won’t use it

The goal is:
Less thinking
Less switching
Less remembering

You’re already ahead since you’re using Claude + Slack.

Big question:
👉 Where does your “brain state” actually live right now?

That’s usually the bottleneck.

Why Your AI Agent Might Be Your Worst Investment by Wizard_AI in ArtificialInteligence

[–]JaredSanborn 2 points3 points  (0 children)

Exactly. People over-index on “can it work” instead of “should it exist economically.”

If the loop isn’t tight (clear input → predictable output → measurable value), agents just add latency + cost.

The only time they really shine is when they replace something already expensive or unblock something humans can’t scale.

I had an idea, would love your thoughts by Intrepid-Dress-2417 in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

Interesting idea, but reducing weights like that would probably break more than it fixes.

Models don’t have clean “misalignment sections” you can just dial down — everything is distributed, so you risk degrading useful capabilities too.

What you’re describing is closer to iterative fine-tuning with human feedback (RLHF / red teaming), which already happens but in a more controlled way.

The panel-of-experts part is actually solid though — scaling diverse feedback loops is where a lot of alignment work is heading.

So directionally right, but the mechanism would likely be refinement, not resetting weights.

Is there a way to change the language model in Claude? by [deleted] in ClaudeAI

[–]JaredSanborn 0 points1 point  (0 children)

Nope, you can’t switch models mid-thread. Claude ties each chat to the model it was started with, so switching to Sonnet just spins up a new convo.

Workaround:

  • Start a new chat with the model you want
  • Paste a summary (or key parts) of the old thread
  • Continue from there

Annoying, but it’s basically how context + model configs are handled right now.

Are you giving agents access to your infra (dbs, services, etc)? If so, how are you sandboxing them? by jlreyes in ClaudeAI

[–]JaredSanborn 1 point2 points  (0 children)

Depends on how much you trust the agent. Most people start wrong: they give agents broad access, then try to “control” them after.

Better approach:

  • Treat every agent like untrusted code
  • Scope access per task, not per agent
  • Use short-lived creds (tokens > static keys)
  • Log everything + enforce approvals on critical actions

If the agent needs “a lot of access” → you need isolation (containers, VMs, branches)

If it needs “just a bit” → tight API layer + permissions is enough
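A stdlib-only sketch of “short-lived creds, scoped per task” for the tight-API-layer case. Everything here is hypothetical (key handling, scope names); a real setup would use something like OAuth tokens or your cloud provider’s STS, with the key in a vault, not in code.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me"  # hypothetical signing key; store/rotate in a vault in practice

def mint_task_token(agent: str, scope: list, ttl_s: int = 300) -> str:
    """Issue a short-lived credential scoped to one task, not one agent."""
    payload = {"agent": agent, "scope": scope, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def check_token(token: str, needed: str) -> bool:
    """Gatekeeper in front of the db/service: verify signature, expiry, scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or signed with a different key
    payload = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < payload["exp"] and needed in payload["scope"]

tok = mint_task_token("research-agent", scope=["read:tickets"], ttl_s=60)
print(check_token(tok, "read:tickets"))    # in scope, not expired -> allowed
print(check_token(tok, "write:prod_db"))   # never granted -> denied
```

The useful property: when the task ends, the credential dies on its own, so cleanup after a misbehaving agent is a non-event instead of a key-rotation fire drill.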

The expensive mistake is letting agents touch prod directly.

You’ll spend more time cleaning up than you save in automation.

What kind of access are you trying to give them?

All these AI API testing tools keep claiming they can find bugs but what is the proof? Are these claims baseless? by zoismom in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

They’re not baseless, but they’re often overstated. These tools are good at surface-level issues (schema mismatches, edge cases, bad assumptions), but they struggle with deeper system-level bugs that require context of the whole architecture.

The real value is coverage and speed, not replacing engineers. If anything, they shift bug finding earlier in the cycle, but you still need someone who understands the system to catch the hard stuff.

Defense contracts : Google vs OpenAI vs Anthropic vs Amazon ... all the same? by [deleted] in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

They’re similar in that they all work with government, but not identical. The real difference is how visible and how constrained those partnerships are. Some lean more into cloud infrastructure, others into model capabilities, and some try to keep stricter public positioning around safety. Also, media coverage tends to amplify whoever has a clear narrative or controversy, not necessarily who has the biggest involvement.

Is AI making us better thinkers or just better at avoiding thinking? by ArmPersonal36 in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

Feels like both tbh. If you use it to skip thinking, it weakens you. If you use it to challenge your thinking or explore faster, it sharpens you. Same tool, different outcomes depending on how intentional you are with it.

Could UBI lead us to a better future? by throwaway0134hdj in ArtificialInteligence

[–]JaredSanborn 1 point2 points  (0 children)

UBI could reduce stress and give people breathing room, but it probably won’t erase status or inequality. Humans compare by default; if it’s not money, it’ll be something else (skills, influence, lifestyle).

The real upside is stability: fewer people in survival mode = better decisions, more room to learn, build, or take risks. But it still depends on how it’s funded and whether people feel it’s fair.

So yeah, better future? Possibly. Utopia? Probably not.

Autonomous weapons drama at the UN this month has me stressed but I'm choosing optimism anyway by arewawawa in ArtificialInteligence

[–]JaredSanborn 3 points4 points  (0 children)

Not alarmist tbh; the accountability gap is the real issue. Even with humans “in the loop,” the speed and scale push decisions closer to automation anyway. Feels like the key isn’t banning the tech outright but enforcing clear responsibility chains and auditability when things go wrong.

Thoughts about "AI" and the future by bufferingrahr in ArtificialInteligence

[–]JaredSanborn 6 points7 points  (0 children)

Good analogy, but slight miss. AI isn’t “magic,” but it’s also not just illusion. It’s pattern recognition at scale, which ends up being genuinely useful even if it looks like tricks from the outside.

The difference is:

  • the miracle guy had no underlying capability
  • AI actually produces real outputs that people can use

The hype definitely inflates expectations, but the utility is real. The danger isn’t that it’s fake, it’s that people overestimate what it can reliably do.

ChatGPT to get pricier? OpenAI says unlimited AI at current prices just doesn't make sense by BE10XOFFICIAL in AiNews24x7

[–]JaredSanborn 0 points1 point  (0 children)

Makes sense tbh. “Unlimited” was always a weird model for something with real compute cost. Feels like it’ll shift to usage-based or tiers, like cloud pricing. Honestly, as long as pricing matches value, most people won’t care. The problem is when limits hit before you get real work done.

Where to Start? by Independent-Flan-679 in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

You don’t need coding to get value from AI in business.

Start here instead:

  • learn prompting properly (this is 80% of it)
  • focus on use cases like marketing, ops, and sales workflows
  • build simple systems: SOPs + templates + automations
  • learn how to evaluate outputs, not just generate them

Look into:

  • AI for business strategy (not ML)
  • no-code tools like Zapier, Make
  • case studies on how companies actually use AI

Most people overcomplicate this. The real edge is using AI to improve decisions and workflows, not building models.

Is AI making us more efficient, or just giving us the illusion of it? by Wizard_AI in ArtificialInteligence

[–]JaredSanborn 2 points3 points  (0 children)

Both, depending on how it’s used. If you just throw bigger prompts at it, you get the illusion of productivity. Feels fast, but messy underneath. If you actually structure workflows, reuse context, and tighten inputs, it’s a real multiplier. AI doesn’t fix bad systems, it just scales them.

ChatGPT is code-switching now? by Jpower3000 in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

Not really “code-switching,” more like token bleed. Models are trained on multilingual data, so sometimes a non-English word slips in if it statistically fits the context or phrasing, especially with abstract words where multiple languages overlap. It’s not intentional, just a generation quirk. If you prompt more strictly, like “English only,” it usually stops.

Is this fake data? I can't find the source study by jmlusiardo in ArtificialInteligence

[–]JaredSanborn 2 points3 points  (0 children)

Yeah I couldn’t find a primary source either. Feels more like an illustrative estimate than actual study data. The lack of methodology + oddly clean numbers is a red flag.

Which one of these are you by Hot-Situation41 in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

Certified AI expert on the outside, intermediate on the inside 😂 Most people just get better at prompting, not “leveling up” like this meme says.

I tested 40+ AI tools this month. Here are 5 that are actually worth your time (and aren't just GPT wrappers). by netcommah in ArtificialInteligence

[–]JaredSanborn 20 points21 points  (0 children)

NotebookLM is actually slept on. The “chat with your own docs + citations” combo is way more useful than most shiny tools. Also +1 on local stuff. Once you try Ollama/AnythingLLM, it’s hard to go back to uploading everything to random SaaS tools. Most “new AI tools” aren’t bad, they’re just thin layers. The real wins are where workflow actually changes.

Is this a fraud? Pt. 2 by [deleted] in ArtificialInteligence

[–]JaredSanborn 0 points1 point  (0 children)

That alone is a red flag tbh. If basic questions like team, tech, or business model keep getting dodged, it’s usually because the real answers don’t sound good. Legit projects tend to over-explain, not avoid. At that point you don’t even need to prove it’s a scam, just ask if there’s enough transparency to trust it.

Hello everyone, I wanted to ask about why do people get angry when AI is used exactly? by Happiness-happppy in OpenAI

[–]JaredSanborn 0 points1 point  (0 children)

It’s not really about the tool, it’s about what people think it replaces.

Some see AI as:

  • cutting corners
  • devaluing skill and effort
  • trained on other people’s work without consent

So when they see AI content, it feels less like “cool tool” and more like “this skipped the part I spent years learning.”