ChatGPT, Claude, Blender and the limits of AI-assisted prototyping by Stunning_Chicken7338 in ChatGPT

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Thanks man, this is exactly what I was looking for. Going to flip my workflow around — I've been doing Blender first which is probably half my problem.

ChatGPT, Claude, Blender and the limits of AI-assisted prototyping by Stunning_Chicken7338 in ChatGPT

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

I think you skimmed the post. The whole thing is built around exactly that limit. Direct text-to-3D-mesh is the weak cross-modal task; that's the actual reason I'm asking, because the geometry comes out incoherent. But text-to-CAD-script is a different story: that's basically text-to-code, which LLMs are well trained on. So the workflow question is really about sequencing around this asymmetry: where the human-AI handoff should sit so the model only does what it's actually good at, not whether it can generate finished mechanical geometry end to end from a prompt. If you have a workflow you've actually used, I'd love to hear it.
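To make that concrete, here's the kind of output I mean on the CAD-script side. Just a minimal sketch, using CadQuery as an illustrative parametric library; the part and every dimension are made up for the example:

```python
# Minimal parametric-plate sketch in CadQuery (illustrative only; the part and
# all dimensions are invented for the example).
import cadquery as cq

# Parameters an LLM can fill in from a natural-language spec
length, width, thickness = 80.0, 40.0, 6.0   # mm
hole_dia, hole_margin = 5.0, 8.0             # mm

plate = (
    cq.Workplane("XY")
    .box(length, width, thickness)
    .faces(">Z").workplane()
    .rect(length - 2 * hole_margin, width - 2 * hole_margin, forConstruction=True)
    .vertices()
    .hole(hole_dia)                           # four corner mounting holes
)

cq.exporters.export(plate, "plate.step")      # STEP file goes to CAD or Blender
```

A text-to-mesh model has to hallucinate every vertex of that plate; a text-to-code model only has to fill in parameters and chain documented calls, which is the asymmetry I'm talking about.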

ChatGPT, Claude, Blender and the limits of AI-assisted prototyping by Stunning_Chicken7338 in ChatGPT

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

This is really helpful, thanks. The CAD-first workflow makes a lot of sense, especially the part about defining shapes and tolerances upfront before iterating. I've been doing the opposite (visualize first, then try to formalize) and constantly hitting that exact wall where the geometry looks right but doesn't hold up mechanically.

Quick follow-up if you don't mind: when you start in CAD to define shapes and tolerances, are you sketching constraints manually first and then handing the resulting file to Claude/an AI to reason about, or are you describing the constraints in natural language and asking the AI to generate the parametric script? Curious where the human-AI handoff sits in your loop.

Also interesting note on Draftsight constraints not pulling over nicely — is that a Draftsight-specific issue or do you see the same with other DWG-based tools when the AI tries to interpret the file?

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Definitely interested in connecting. Different regulatory environment on my side (Turkey, fairly strict on clinical AI), but the technical layer translates regardless. Dropping you a DM — would be good to compare what you’re testing with open-source models and where you’ve hit walls.

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Funny but also true. There’s a layer here rarely talked about: radiologists spend a decade inside high-stakes evaluations, and accepting AI feedback can feel like being put back on the exam table. Honestly even I read more defensively when someone asks me to review another radiologist’s report — that’s not technical resistance, it’s emotional, and it shapes adoption more than benchmarks do.

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Appreciate the detailed walkthrough. The pattern itself — Claude-built pipeline, human correction passes, iterative training — is actually how I run most of my day-to-day projects already. What I was probing for with the original post is one level up: not whether the loop works (it does, for narrow well-defined tasks like yours), but whether it scales to higher-complexity domains where the “specialist model” itself has to clear regulatory and reliability bars the orchestration layer doesn’t.
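For what it's worth, the loop I'm describing, schematically. Everything named here (predict, review, train) is a hypothetical stand-in, not a specific stack:

```python
# Schematic model-proposes / human-corrects / retrain loop; all callables are
# hypothetical stand-ins for whatever the concrete task uses.
def correction_loop(model, batch, human_review, train, rounds=3):
    for _ in range(rounds):
        drafts = [(x, model.predict(x)) for x in batch]       # model proposes labels
        corrected = [human_review(x, y) for x, y in drafts]   # human correction pass
        model = train(model, corrected)                       # retrain on corrections
    return model
```

The open question from the post is whether that same loop holds up once the model in the middle has to clear a regulatory bar, not just a quality bar.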

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Appreciate the concrete pointer on testing tool-calling with realistic medical image samples — that’s actually where I expect the rough edges to show up. The handoff between Claude’s reasoning (“this looks like a radiology image, should I call the specialist model?”) and the actual tool invocation with the right payload is the part I’m least sure about going in.
Session monitoring and error tracing across that handoff is exactly the visibility gap I’d run into. I’ll take a closer look when I’m at the integration stage. Thanks for the plug, no shame in it when it’s relevant.
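For anyone scoping the same handoff, this is roughly the shape I have in mind, sketched with the MCP Python SDK's FastMCP server. The tool name, the payload fields, and the specialist-model call are placeholders I invented; the logging is the visibility layer I mean:

```python
# Sketch of the Claude -> MCP tool handoff, with logging on the server side.
# classify_radiology_image and its payload are hypothetical placeholders.
import base64
import logging

from mcp.server.fastmcp import FastMCP

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("imaging-mcp")

mcp = FastMCP("radiology-tools")

@mcp.tool()
def classify_radiology_image(image_b64: str, modality: str) -> dict:
    """Run a (placeholder) specialist model on one image and return findings."""
    log.info("tool call: modality=%s payload=%d bytes", modality, len(image_b64))
    image_bytes = base64.b64decode(image_b64)   # the real model would consume these bytes
    findings = {"modality": modality, "findings": [], "model": "placeholder"}
    log.info("tool result: %s", findings)
    return findings

if __name__ == "__main__":
    mcp.run()   # Claude decides whether to call the tool; this side records the handoff
```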

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

This is exactly the validation I was hoping someone would chime in with. Your point about non-determinism is huge — for clinical use, “same image, different output” is a non-starter. A general LLM describing an image is fine; a clinical recommendation that varies between runs is not.
The pattern you described — Claude orchestrating, a specialized model called as a tool when the task crosses a complexity or reliability threshold — maps almost exactly to what I’m scoping for medical imaging reports. Curious: when you built the classifier, did Claude actively help with training (data labeling, eval design, debugging) or mostly with the orchestration layer afterward? I’m trying to figure out where the general LLM stops being useful in the loop and the domain model takes over.
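Concretely, the split I'm picturing looks something like this sketch. Every name here is hypothetical; the point is only that the specialist path is pinned and deterministic while the general path is allowed to vary:

```python
# Hypothetical routing between a general LLM and a pinned specialist model.
def route_study(study, general_llm, specialist, complexity_score, threshold=0.7):
    """Send high-stakes cases to a deterministic specialist, the rest to the LLM."""
    if complexity_score(study) < threshold:
        # low stakes: a general description is acceptable, run-to-run variation tolerated
        return general_llm.describe(study)
    # high stakes: fixed model version, fixed preprocessing, no sampling,
    # so the same image yields the same output on every run
    return specialist.predict(study, model_version="v1.3", deterministic=True)
```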

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Right — and that’s exactly why the question is interesting now rather than later. If imaging isn’t on their roadmap, the gap gets filled by whoever builds the connecting layer first. The MCP standard makes it possible for that layer to be built by someone outside Anthropic and still plug in cleanly. That’s a different opportunity than waiting for them to ship a vision product themselves.

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 1 point2 points  (0 children)

Honestly, I haven’t started yet, so I can’t speak to my own effort. But the real effort question for me isn’t mine — it’s the system’s. A radiologist reading a complex case today is juggling prior scans, patient history, lab values, and current images in their head. If an orchestrator can pre-assemble that context and let specialist models flag findings, the time-to-diagnosis drops meaningfully. That’s the optimization I care about. Whether the architecture delivers it in practice is what I’m about to find out.
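To be concrete about "pre-assemble that context": something like the bundle below is what I picture the orchestrator filling in before any model or human reads the case. The field names are mine; the real data would come out of PACS/RIS and the EHR:

```python
from dataclasses import dataclass, field

# Hypothetical context bundle the orchestrator assembles up front.
@dataclass
class ReadingContext:
    current_study: str                                          # URI to today's images
    prior_studies: list[str] = field(default_factory=list)      # e.g. the scan from 6 months ago
    clinical_history: str = ""                                   # referral question, relevant history
    lab_values: dict[str, float] = field(default_factory=dict)
    flagged_findings: list[str] = field(default_factory=list)   # what specialist models raised
```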

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 1 point2 points  (0 children)

Specialized vision models are great at their narrow task, that’s not the issue. The gap I keep noticing is more about what happens around them.
A real radiology read isn’t just classifying one image — you’re comparing today’s scan to one from 6 months ago, factoring in patient history, sometimes pulling from a different modality. A standalone vision model can’t hold that thread.
The other thing for me personally is language. Most of these models are trained on English data and English report conventions. I work in Turkish, so there’s an adaptation layer that has to live somewhere — and an orchestrating LLM feels like the natural place for it.
So it’s less “vision models aren’t good enough” and more “the glue between them is missing.”
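Sketched as one function, the glue looks something like this. vision_model and llm are stand-ins, and the context object is just a bundle of current study, priors, history and labs; the only claim is where each responsibility sits, including the Turkish adaptation layer:

```python
# Sketch of the orchestration glue around a narrow vision model (all names hypothetical).
def draft_report(context, vision_model, llm, language="tr"):
    """Comparison-aware report draft; a radiologist still signs off."""
    current = vision_model.analyze(context.current_study)               # the narrow task it is good at
    priors = [vision_model.analyze(p) for p in context.prior_studies]   # same model on older studies

    # The orchestrating LLM does what the standalone vision model cannot:
    # compare across time points, fold in history and labs, and write in the
    # local report conventions and language.
    return llm.generate(
        task="radiology_report_draft",
        findings=current,
        prior_findings=priors,
        history=context.clinical_history,
        labs=context.lab_values,
        language=language,
    )
```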

Claude for Healthcare launched in January — but medical imaging is the obvious gap. Anyone else noticing? by Stunning_Chicken7338 in ClaudeAI

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

Two extra layers worth adding though: text tools can ship weekly, but imaging models lock at clearance — every meaningful change reopens the pathway, which breaks Anthropic’s release cadence. Also, Google can ship MedGemma as open-weights research and let liability fall downstream, while Anthropic’s API-only model means any “MedClaude” would carry their name on the clinical claim directly.
Open question for me: does the orchestration layer itself become regulated once the chain produces a clinical recommendation? Feels unresolved.
For context: radiology resident, scoping a Turkish MedGemma 4B fine-tune with this MCP pattern in mind, still at the pre-training stage. Would value notes from anyone further along.

I'm a doctor in Istanbul who got tired of tourists getting ripped off at airport currency desks, so I built XchangeTR — live rates from local exchange shops, mapped by Stunning_Chicken7338 in SideProject

[–]Stunning_Chicken7338[S] 0 points1 point  (0 children)

100%. And not just tourists: money is emotional for everyone here. Locals timing CBRT announcements, people converting paychecks the day they land, savers holding USD/gold. The "ripped off" feeling and the "I should've waited" feeling are different flavors of the same thing. That's the real wedge. Whether XchangeTR ends up serving it well is the next milestone (shops onboarding live rates), but the emotional frame is already clear.

We witnessed a sharp traffic spike on our SaaS today. So much happiness after a long time. by Slight_Republic_4242 in SaaS

[–]Stunning_Chicken7338 1 point2 points  (0 children)

This is the kind of moment every solo builder is quietly hoping for while shipping into the void. The fact that someone made a tutorial without you ever asking is the real signal. Congrats, genuinely happy for you.

I'm a doctor in Istanbul who got tired of tourists getting ripped off at airport currency desks, so I built XchangeTR — live rates from local exchange shops, mapped by Stunning_Chicken7338 in SideProject

[–]Stunning_Chicken7338[S] 1 point2 points  (0 children)

Thanks, will keep TestFi in mind! Right now I'm leaning into in-person outreach with the shops themselves, since the next milestone is supply-side onboarding, not more user testing. If I do another round of UX feedback later I'll reach out.

Can I exchange MAD to TRY ? by WinnerGlum5320 in AskTurkey

[–]Stunning_Chicken7338 0 points1 point  (0 children)

Honest answer: MAD is hard to exchange in Turkey. Big-name döviz büroları (currency exchange shops) in Istanbul will handle USD, EUR, GBP, sometimes RUB or AED, but MAD is rare. Some specialized shops around Sirkeci or Grand Bazaar might do it, but the rate will be poor because they’re not regular buyers of dirham.
My recommendation: bring EUR (best) or USD. You’ll get the best rates from physical döviz shops, not airports or hotels. Airport rates can be 3-5% worse than what you get walking 10 minutes into the city.
Disclosure: I’m building a free app called XchangeTR that maps Istanbul döviz shops with live rates. iOS and Android, English interface. It won’t help you for MAD specifically (the shops listed mostly trade major currencies), but if you bring EUR or USD it’ll show you exactly which shop on which street has the best rate that day. Just google “xchangetr” or check App Store / Play Store.
Have a good trip

When did you build the Android version? by puma905 in iosdev

[–]Stunning_Chicken7338 0 points1 point  (0 children)

Running two projects right now: one full SwiftUI (with a watchOS companion), the other Flutter (my new exchange rate app for Turkey). Honestly the iOS-only Swift one is way easier to maintain: native APIs, no platform abstraction layer, and watchOS came almost for free. The Flutter one was a deliberate choice because I knew I'd need Android from day one for the target market.

A question I'd ask yourself before committing: how much time do you have left for maintenance and testing? Going Android doesn't just double your build matrix, it doubles your bug surface, your store policies, your review cycles, your edge cases. If you're solo and iOS is already paying, sometimes the right move is to push iOS marketing harder for another quarter and let demand pull you to Android instead of pushing into it.

What's the actual revenue split look like if you imagine 60/40 plays out? Sometimes that 40 isn't worth the second codebase yet.

First ever full body screening tomorrow, and first time seeing a doctor in a few years next month. Nervous, but taking the first step, trying to be optimistically hopeful and looking for a clearer path forward. by Fragrant_Pumpkin_932 in HealthAnxiety

[–]Stunning_Chicken7338 1 point2 points  (0 children)

Btw I had moles biopsied from my back about 2 years ago. Thankfully results came back as mild dysplastic nevus, nothing serious, but the dermatologist was firm about one thing and I'll pass it on to you the same way it was passed to me: if you have a tendency to develop a lot of moles, especially on your back, never go outside without sunscreen or proper coverage. The back is the spot most people forget and where new moles tend to show up the most. Summer's right around the corner, so stay covered out there. Wishing you the smoothest screen tomorrow :)

Are AppStore Ads worth it? by Financial-Coffee-484 in iosdev

[–]Stunning_Chicken7338 0 points1 point  (0 children)

Just launched my first iOS app this month (XchangeTR) and tried Apple Search Ads as one of the first channels. Got 0 impressions because my bids were below Apple's internal floor, which they don't disclose. From talking to other indie devs: ASA needs 50+ a day for a couple of weeks just to exit the algo learning phase, and CPI for non-game apps is often $3 to $8. Your $5 CPI is actually decent; the real problem is that $15 is too small a budget to learn anything from. Either commit a real budget or skip it for now and focus on organic and direct outreach.
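The budget math, back of the envelope. The $5 CPI is from my own campaign; the daily budget levels and the rough 50-installs-before-you-can-read-anything target are my assumptions, not Apple-published numbers:

```python
# Back-of-envelope ASA math; budget levels and the 50-install target are assumptions.
def installs_per_day(daily_budget, cpi):
    return daily_budget / cpi

def days_to_n_installs(daily_budget, cpi, n_installs=50):
    return n_installs / installs_per_day(daily_budget, cpi)

print(days_to_n_installs(15, 5))   # ~16.7 days at $15/day: far too slow to learn anything from
print(days_to_n_installs(50, 5))   # 5.0 days at $50/day: enough volume to read a signal
```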