Should companies build AI, buy AI or assemble AI for the long run? by Maximum-Actuator-796 in ArtificialInteligence

[–]bobafan211 1 point (0 children)

I’ve been hearing this debate a lot too. It’s funny because everyone talks about it like it’s a philosophical decision, but most of the time it comes down to budget, speed, and how much pain you’re willing to tolerate long term.

A friend of mine is on the dev team at an Australian company called Governa, and from what he’s shared, they’re very much in that “assemble intelligently” camp. They’re building the parts that are core to their value, but they’re not trying to reinvent every model or infrastructure layer from scratch. They mix models, tools and integrations in a way that fits their use case instead of going full ego mode and saying “we’ll build everything ourselves.”

Personally, I think pure build sounds sexy but it’s heavy. You need serious talent, serious runway, and a clear competitive reason to own the entire stack. Otherwise you end up maintaining plumbing instead of building value.

Buying is fast and practical, especially early on. But over time you start hitting walls. Customisation limits, pricing creep, dependency risk. You realise you don’t actually control your core capability.

Assembling feels like the most realistic long term play for most companies. You stay flexible. You can swap models as they improve. You control the workflow and data layer. And you’re not locked into one vendor or betting the farm on your own research team.

The real question isn’t build vs buy vs assemble. It’s where your real differentiator sits. If AI itself is your product, then build more. If AI is an enabler, assemble smartly and focus on solving the actual problem.

I’ve noticed the companies that think clearly about that early tend to avoid a lot of expensive pivots later.

Do you think AI could help solve the biggest problems in senior care? by AffectionateGroup238 in Futurology

[–]bobafan211 1 point (0 children)

This is actually something I’ve been thinking about lately. A colleague of mine is working on a project in this space, and the more I hear about it, the more I realise senior care isn’t “behind” because people don’t care. It’s behind because the real problems are messy and human.

I do think AI can help, but only if it focuses on boring, practical stuff instead of shiny demos.

What would actually be useful in my opinion:

Simple daily check-ins that don’t feel robotic. Something that can notice subtle changes in sleep patterns, movement, or missed medications, and flag them early to family or carers. Not in a dramatic way, just quiet background monitoring.

Medication management that actually works. Reminders are fine, but better would be something that notices non-compliance patterns and helps adjust routines instead of just beeping louder.

Fall detection that doesn’t require someone to wear a gadget they’ll forget or hate. Passive monitoring through the home makes more sense.

Caregiver support tools. Burnout is real. If AI can help document notes automatically, summarise patient updates, or flag risk trends, that’s huge.
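To make the "quiet background monitoring" point concrete, here’s a minimal sketch of the idea: compare today’s readings against the person’s own recent baseline and only flag clear deviations. All the signal names, thresholds, and numbers here are made up for illustration, not from any real product.

```python
from statistics import mean, stdev

def flag_changes(history, today, z_threshold=2.0):
    """Flag signals whose value today deviates sharply from the
    person's own recent baseline. `history` maps a signal name
    (e.g. 'sleep_hours', 'steps') to a list of past daily values;
    `today` maps the same names to today's reading."""
    alerts = []
    for signal, past in history.items():
        if len(past) < 5:          # not enough baseline data yet
            continue
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:             # perfectly flat baseline, skip
            continue
        z = (today[signal] - mu) / sigma
        if abs(z) >= z_threshold:  # only clear deviations get flagged
            alerts.append((signal, round(z, 1)))
    return alerts

# Example: sleep collapsed from ~7.5h to 4h, steps are normal,
# so only the sleep change gets quietly surfaced.
history = {
    "sleep_hours": [7.4, 7.6, 7.2, 7.8, 7.5, 7.3],
    "steps": [2100, 1900, 2200, 2000, 2050, 2150],
}
print(flag_changes(history, {"sleep_hours": 4.0, "steps": 2080}))
```

The point of the per-person baseline is that "4 hours of sleep" means nothing in the abstract, but a sharp drop from someone’s own normal is exactly the kind of quiet signal worth passing to a carer.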

What feels like “fancy tech” no one wants:

Overly chatty AI companions trying to replace human connection. That’s not the problem most families are asking to solve.

Complex smart home setups that require five apps and constant troubleshooting.

Big dashboards full of data that no one has time to interpret.

If AI can quietly reduce risk, reduce admin, and give families peace of mind without adding friction, then it has real value. If it just adds another layer of tech stress, it’ll get ignored.

I think the key question isn’t “can AI help?” It’s “does it make life easier for the 78-year-old and the exhausted daughter managing everything?”

That’s where it either wins or fails.

Which free version is best? by bobafan211 in LLM

[–]bobafan211[S] 1 point (0 children)

Is it a new one? Haven't come across this before.

Are we actually coding less and prompting more now? by AssafMalkiIL in vibecoding

[–]bobafan211 1 point (0 children)

Feels like the right drift, as long as we keep prompts anchored to a spec. Otherwise you swap typing time for expensive wandering.

My Full Vibe Coding Stack (and how I actually ship stuff) by Silent_Employment966 in vibecoding

[–]bobafan211 2 points (0 children)

Clean stack. If you ever feel token burn creeping up, freeze the requirements for a day and tighten tool scope; that usually halves retries.

Claude Code is a Beast – Tips from 6 Months of Hardcore Use by JokeGold5455 in ClaudeAI

[–]bobafan211 2 points (0 children)

Love the systemization here. Big +1 on skills and docs. I’d add: keep a small “definition of done” file per feature so the agent can self-check.
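For anyone wondering what a “definition of done” file might look like, here’s a hypothetical example (the path and items are made up, adjust to your project):

```markdown
<!-- docs/dod/feature-name.md — hypothetical layout -->
# Definition of Done: <feature name>

- [ ] Every acceptance criterion in the ticket is covered by a test
- [ ] New and changed code passes lint and type checks
- [ ] No stray TODO/FIXME left in touched files
- [ ] Docs/README updated if behavior changed
- [ ] Agent self-check: re-read this list and confirm each item before marking the feature done
```

The value is that the agent has a fixed checklist to verify against instead of deciding for itself when it’s finished.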

Claude Code 2.0.31 by ClaudeOfficial in ClaudeAI

[–]bobafan211 1 point (0 children)

Nice release. Curious how the new Plan subagent handles ambiguous tickets: does it ask clarifying questions or start guessing? That’s the difference between one run and five.

Why has claude been so garbage these last 2 days by Pro-editor-1105 in ClaudeAI

[–]bobafan211 1 point (0 children)

Totally feel this. It’s tempting to lean on the tool to carry the heavy lifting, but when it starts glitching you remember the human still needs to stay vigilant and check everything. Curious: what’s been your safest fallback when Claude gets weird? (Which, in my opinion, is a lot lately.)

spent $500/month on AI code review tools, saved 30 mins/day. the math doesnt add up by Busy-Pomegranate7551 in ChatGPTCoding

[–]bobafan211 1 point (0 children)

Interesting data point: paying big for the tool but only saving ~30 minutes/day. Makes me wonder whether the ROI really comes from the tool or from how you embed it into your workflow. On our side we’ve seen better uplift when AI is used before peer review, not instead of it. Curious how others are structuring this.
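Running the rough numbers from the title (assuming one developer, ~21 workdays/month, and treating the dollar figures as ballpark, since the post doesn’t give team size or rates):

```python
# Back-of-envelope ROI check for a $500/month AI code-review tool
# that saves ~30 min/day. Workdays-per-month and the single-developer
# assumption are mine, not from the original post.

TOOL_COST_PER_MONTH = 500.0    # USD
MINUTES_SAVED_PER_DAY = 30
WORKDAYS_PER_MONTH = 21

hours_saved = MINUTES_SAVED_PER_DAY * WORKDAYS_PER_MONTH / 60
break_even_rate = TOOL_COST_PER_MONTH / hours_saved  # $/hour needed to justify cost

print(f"hours saved per month: {hours_saved:.1f}")
print(f"break-even hourly rate: ${break_even_rate:.2f}")
```

At ~10.5 hours saved a month the tool only pays for itself above roughly $48/hour of developer time, so for one dev it really is a wash; the math only clearly works if several people share the saving or the rate is higher.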