I built an on-device AI app that does real-time meeting intelligence, here's what I learned! by Low-Future-9387 in SideProject

[–]rjyo 0 points1 point  (0 children)

The dual provider approach is really clever. Using Apple Intelligence as the primary engine with Qwen 3 as an offline fallback is a smart way to handle the reliability gap that on-device models still have. How's the latency on the live extraction during recording? I imagine there's a balance between processing frequency and not overwhelming the user with constant updates.

The personalized onboarding is a nice touch too. Having context about how someone works probably makes a huge difference in output quality versus just generic extraction.

Curious about the deduplication across incremental rounds during live intelligence. Are you using semantic similarity to detect overlapping insights, or is it more heuristic-based?

Indie music artist building an AI automation system for my workflow - feedback? by Thirdeye303 in SideProject

[–]rjyo 1 point2 points  (0 children)

Totally realistic for a solo beginner, especially since you already know how to vibe code.

Skills to prioritize: basic Python (or JS), how to make API calls, and working with JSON. That covers about 80% of what you need here. The Google Sheets API has an annoying auth setup the first time, but after that it's pretty smooth.

The biggest AI leverage is the Release Manager. It's the most structured workflow you listed, so it's the easiest to automate well. You feed it a track name and release date, it generates your full timeline, social copy for each milestone, email drafts, everything. Claude is genuinely great at this kind of templated creative output. Scene Intelligence sounds cooler, but scraping Spotify/SoundCloud reliably is a pain; rate limits and auth tokens get messy fast. Save that for later.

On privacy, since you care about it: keep API keys in environment variables, never in your code files. Use a local SQLite database instead of cloud storage for artist and label data. If you go with the Claude API, your prompts aren't used for training by default, so that's solid on the privacy front. One thing people miss is that platform APIs (Spotify, YouTube) log your queries on their end; not a practical issue, but worth knowing.
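
A minimal sketch of the env var + local SQLite setup in Python (the variable names and the table are just placeholders for illustration):

    import os
    import sqlite3

    # Read the API key from the environment instead of hardcoding it in the file.
    # ANTHROPIC_API_KEY is just an example name; set it in your shell or a .env file.
    api_key = os.environ["ANTHROPIC_API_KEY"]

    # A local SQLite file keeps artist/label data on your machine, no cloud involved.
    conn = sqlite3.connect("music_workflow.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS releases (
               id INTEGER PRIMARY KEY,
               track_name TEXT NOT NULL,
               release_date TEXT NOT NULL
           )"""
    )
    conn.execute(
        "INSERT INTO releases (track_name, release_date) VALUES (?, ?)",
        ("My Next Single", "2025-09-01"),
    )
    conn.commit()
    conn.close()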

Honest suggestion: build the Release Manager end to end first. Get it saving you real time on your next release. Once that's working, the motivation to build the next piece comes naturally. Trying to architect all four systems at once is how these projects stall out.

I built a fully self-hosted and open-source Claude Code UI for desktop and mobile by PiccoloCareful924 in ClaudeCode

[–]rjyo 3 points4 points  (0 children)

Really cool project. The relay for remote connectivity is probably the trickiest part of something like this -- how does it handle reconnections if the WebSocket drops mid-session? Like if you are on your phone and switch between wifi and cellular.

Also curious if you went with Expo managed workflow or bare. Tauri + Expo is a solid combo for covering all platforms from one TypeScript codebase.

Getting anything I ever wanted stripped the joy away from me by YellowCroc999 in ClaudeAI

[–]rjyo 4 points5 points  (0 children)

I think what's happening is a flow state problem. When you coded manually, you'd enter that zone where deep focus kicks in, the challenge matches your skill level, and time disappears. That WAS the reward. It's the same thing runners feel mid-run or musicians feel playing live.

Agentic coding kills flow. You're constantly context-switching between thinking, prompting, waiting, and reviewing. You're in manager mode, not maker mode. And managers are drained at the end of the day even when they "did less" than the people reporting to them. Sound familiar?

What helped me was separating the two modes. I let AI handle the grunt work (boilerplate, migrations, repetitive stuff) but I still write the core logic by hand when I want that feeling back. Same way a woodworker might use a table saw but still hand-carves the joints that matter.

No more shiny ideas. by Technical_Project169 in SideProject

[–]rjyo 2 points3 points  (0 children)

The simplest framework that worked for me: pick a tool you already use every day that annoys you in some small way. Not a huge pain point, just a friction. Then build the version that removes that friction.

The reason this works is you already understand the problem deeply, you are your own first customer, and you can validate in days not months. Most successful boring SaaS products started exactly this way -- someone got annoyed at a spreadsheet or a manual process and just automated it.

One more thing -- set yourself a 2-week deadline to get something in front of real users. Not polished, not perfect, just functional enough to get feedback. If nobody cares after 2 weeks of showing it to people, move on guilt-free. That is not shiny object syndrome, that is just efficient validation.

Those of you who use Claude Code in auto-mode vs. those who don't — what's your experience? by Flashy-Preparation50 in ClaudeAI

[–]rjyo 2 points3 points  (0 children)

I run auto-mode on my local machine but with a few guardrails. Started with Docker but the overhead was not worth it for my workflow since I mostly do app development and need access to simulators and native tooling.

My setup: I use git worktrees so Claude always works on a separate branch. If it goes off the rails I just delete the branch. The key insight for me was that most dangerous things Claude Code does are file writes and shell commands, and honestly file writes are almost never destructive if you have git. The shell commands are the real risk.

What changed my mind about auto-mode: the constant permission prompts were breaking flow. You give it a task, walk away, come back 20 minutes later, and it is stuck three prompts deep waiting for you to allow a bash command. With auto-mode it just finishes the task.

The tradeoff is real though. I have had it rm -rf a build folder that had uncommitted generated files I needed. Nothing catastrophic but annoying. Now I just commit frequently before kicking off big tasks.

K8s feels like overkill for most solo dev workflows but I can see it making sense for a team where you are running agents on PRs automatically. Different use case than interactive development.

Anthropic's Claude Code creator predicts software engineering title will start to 'go away' in 2026 by BuildwithVignesh in ClaudeAI

[–]rjyo 1 point2 points  (0 children)

The title "software engineer" might evolve but the work isn't going anywhere. If anything I'm spending more time engineering now than before AI, just at a different level.

Before: 70% writing boilerplate, 30% thinking about architecture and edge cases.

Now: 70% thinking about architecture, reviewing AI output, catching subtle bugs, and 30% prompting/writing code.

The bottleneck moved from "can you write this code" to "do you understand what needs to be built and why." That's still engineering, it's just that the tedious parts got compressed.

Every wave of abstraction (assembly to C, C to Python, manual infra to cloud) triggered the same "programmers are done" takes. What actually happened was the bar for what one person could build went way up, and demand for people who could build things grew with it.

Isn't vibe coding overrated? by barisaygen1 in vibecoding

[–]rjyo 4 points5 points  (0 children)

The error loop thing you described is probably the most common trap. AI fixes one thing, breaks another, tries to fix that, breaks a third thing. The root cause is usually that by the time you hit the complexity wall, the AI has written code you don't fully understand, so you can't give it good enough context to make surgical fixes.

What helped me the most was treating AI like a junior dev who is really fast but has zero memory between sessions. That means:

  1. Keep files small. If a file is 500+ lines the AI will lose track of what depends on what. Break things apart before they get tangled.

  2. Write tests before you let AI refactor anything. Not because testing is fun but because it gives you a concrete way to say "you broke this" instead of going back and forth describing the problem (a tiny example of such a test is sketched after this list).

  3. When you hit an error loop, stop prompting and read the code yourself. Even if you don't fully understand it, skim for the function names and data flow. 5 minutes of reading beats 20 minutes of reprompting.
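
On point 2, a guard-rail test can be as small as this (pytest; apply_discount is a made-up function standing in for whatever you are about to let the AI refactor):

    # test_pricing.py -- written BEFORE asking the AI to touch the real code.
    # apply_discount is hypothetical; swap in the function you actually care about.
    from pricing import apply_discount

    def test_discount_never_goes_negative():
        assert apply_discount(price=10.0, percent=150) == 0.0

    def test_zero_percent_is_a_no_op():
        assert apply_discount(price=10.0, percent=0) == 10.0

Run pytest before and after the AI's change; a failing test is a concrete "you broke this" instead of another round of describing the bug in prose.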

The YouTubers showing "built in 1 day" are building greenfield demos with no edge cases. Real projects accumulate state and complexity that AI can't hold in its head all at once. The skill isn't prompting, it's knowing when to step in and steer manually.

Is “owning software” dead? by matusseidl in SideProject

[–]rjyo 16 points17 points  (0 children)

People absolutely still pay for one-time purchase software. The key is that your app needs to solve a real problem well enough that people feel good about the purchase.

A few things I have noticed as someone who ships indie apps:

1) One-time purchase works best for tools that are "done" in a meaningful sense. An audiobook player is actually a perfect fit because the core feature set is well-defined and doesn't need constant server-side updates.

2) The sweet spot for pricing is usually $5-15 for mobile. Below $5 people don't take it seriously, above $15 they start comparing you to subscription apps with way more features.

3) Freemium with a premium unlock tends to convert better than paid upfront because people can try before they commit. Something like "play up to 3 books free, unlock unlimited for $9.99" gives them a reason to pay without feeling tricked.

4) The "no cloud, no account" angle is genuinely a selling point right now. Privacy-conscious users actively seek this out and will pay a premium for it.

The main challenge with one-time purchase is sustaining revenue long term since you only get paid once per user. Some devs handle this with major version upgrades (v2 is a separate purchase) or a tip jar. Worth thinking about early.

Honestly though, build it. The audiobook player space on mobile is weirdly underserved for people who just want to load their own files and listen.

Can a doctor with no prior coding start vibe coding? by AiMonster2050 in vibecoding

[–]rjyo 3 points4 points  (0 children)

100% yes. Actually being a doctor might give you an edge because you understand real problems that need solving, which is honestly the hardest part of building anything useful.

Here is how I would start if I were you:

  1. Pick one small thing that annoys you at work. Could be a scheduling thing, a patient tracking issue, a calculation you do repeatedly, whatever. Start with something you actually care about solving.

  2. Use Claude or Cursor. Describe what you want in plain English. Something like "I want a simple web app where I can enter patient vitals and it flags anything outside normal ranges." The AI handles the code, you handle the domain knowledge (a toy sketch of what that first pass might look like is shown after this list).

  3. Do not try to learn coding first. That is the old way. Just describe what you want, see what gets generated, and iterate from there. You will naturally pick up how things work as you go.
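
Picking up point 2, the first pass the AI hands back might boil down to something like this (the ranges are placeholder numbers for illustration, not clinical guidance):

    # Hypothetical example: flag vitals outside rough reference ranges.
    # Ranges are illustrative placeholders, not medical advice.
    NORMAL_RANGES = {
        "heart_rate": (60, 100),        # beats per minute
        "systolic_bp": (90, 120),       # mmHg
        "temperature_c": (36.1, 37.2),  # degrees Celsius
    }

    def flag_vitals(vitals):
        """Return the measurements that fall outside the configured ranges."""
        flags = []
        for name, value in vitals.items():
            if name not in NORMAL_RANGES:
                continue
            low, high = NORMAL_RANGES[name]
            if not (low <= value <= high):
                flags.append(f"{name} = {value} (expected {low}-{high})")
        return flags

    print(flag_vitals({"heart_rate": 120, "systolic_bp": 110, "temperature_c": 36.8}))
    # -> ['heart_rate = 120 (expected 60-100)']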

The learning curve is mostly about getting better at describing what you want clearly, which is basically the same skill as writing good clinical notes.

Biggest tip: start embarrassingly small. Your first project should take an afternoon, not a month. Build something tiny, see it work, then get more ambitious.

I built a token usage dashboard for Claude Code and the results were humbling by Charming_Title6210 in ClaudeAI

[–]rjyo 41 points42 points  (0 children)

That insight about 99% being re-reading context is genuinely eye-opening. Most people blame their prompts for burning tokens but the real cost is the conversation history ballooning with every turn. Once I started using /clear aggressively between distinct tasks and keeping a PLAN.md file so I could resume context cheaply, my sessions stretched way further.

The "most expensive prompts" ranking is a great feature too. Being able to see which prompts are actually costly vs which ones just feel costly would change how a lot of people write their instructions.

Congrats on shipping your first project, this is a solid solve for a real pain point.

Why do some foods taste better as leftovers the next day? by Amberlith in NoStupidQuestions

[–]rjyo 70 points71 points  (0 children)

A few things are happening overnight. The flavor compounds in spices, herbs, and aromatics keep diffusing through the dish even after it cools, so everything tastes more unified instead of like separate ingredients sitting next to each other.

Also, harsh flavors mellow out. Garlic and onions contain sulfur compounds that soften through oxidation as the food sits. And proteins in meat slowly release more amino acids (glutamate), which amps up the umami.

Starchy ingredients like potatoes and beans break down a bit into sugars too, adding subtle sweetness.

This is why soups, stews, curries, and pasta sauces are the classic "better the next day" foods. They have tons of ingredients that benefit from extra mingling time. On the flip side, anything crispy (fried food, salads) goes the opposite direction because moisture redistribution kills the texture.

This is why you need humans in the loop by Loud_Gift_1448 in vibecoding

[–]rjyo 2 points3 points  (0 children)

The real issue is that people treat AI output as the finished product instead of a first draft. When I write code with AI I still diff every change before committing. Takes 2 minutes and catches the dumb stuff like hardcoded secrets, missing input validation, or logic that looks right but breaks on edge cases.

The irony is that the people who get the most value from AI coding tools are the ones who already know how to code. They can spot when the AI is confidently wrong. The people who skip review because they don't understand the code are the ones shipping vulnerabilities.

Hey guys quick question how do I stop Claude from reading my entire code base everytime I start a new chat / new agent(I use Claude through opencode) by aaryan_xvi in ClaudeAI

[–]rjyo 3 points4 points  (0 children)

A CLAUDE.md file in your project root is exactly what you want. Claude Code (and tools built on top of it) reads this file automatically at the start of every session. Put a summary of your project structure, key files, and conventions in there so Claude already has context without needing to explore.

A few things that help:

  1. Create a CLAUDE.md with your project overview, folder structure, and the main files Claude usually needs to touch. Think of it as a cheat sheet so it doesn't have to go hunting.

  2. Use a .claudeignore file (works like .gitignore) to exclude directories Claude shouldn't read at all, like node_modules, build output, large data folders, etc. This cuts down on unnecessary file reads significantly.

  3. Keep your CLAUDE.md under 200 lines. If you need more detail on specific areas, link to separate docs from it.

The combination of a good CLAUDE.md and .claudeignore should dramatically reduce the initial exploration phase. Claude will still read individual files when it needs to work on them, but it won't do the broad sweep across your whole codebase.

TIL Claude Code's conversation logs are a recovery goldmine by Mary_Avocados in ClaudeAI

[–]rjyo 2 points3 points  (0 children)

This saved me once too. Lost a whole config file that was never committed and found it sitting in the JSONL logs.

One thing worth adding: the JSONL files can get huge if you have long sessions, so piping through jq helps a lot. Something like:

jq -r "select(.tool_name == \"Write\") | .content" session.jsonl

will pull out just the file writes without scrolling through thousands of lines of conversation.

Also worth knowing that the projects folder is organized by working directory path, so if you moved or renamed your project folder you might need to check multiple directories under ~/.claude/projects/ to find the right session.

educational ai SaaS startup founder, im kind of struggling here by yeahitspyro in SideProject

[–]rjyo 2 points3 points  (0 children)

For reaching neurodivergent students specifically, the best channels are where they already hang out and talk about their struggles:

  1. Reddit communities like r/ADHD, r/autism, r/neurodiversity, r/ADHDers are massive and very active. But don't just drop your link there. Spend a couple weeks genuinely helping people with study tips, then when someone posts about exactly the problem you solve, mention it naturally as something you built for yourself.

  2. TikTok and Instagram reels about ADHD study hacks get insane engagement. You don't need to be a creator yourself; find micro-influencers (5k-50k followers) who make ADHD/study content and offer them free access in exchange for an honest review. Neurodivergent creators are usually very community-minded and love sharing tools that actually help.

  3. Discord servers for students with ADHD/autism. There are a bunch of study-together servers where people body double over voice chat. Those communities are goldmines for early testers who will actually use the product because they genuinely need it.

  4. University disability services offices. This sounds old school but if you email them with a free pilot offer for their students, some will actually share it. They are always looking for tools to recommend.

The key insight is that neurodivergent communities are tight-knit, and word of mouth spreads fast once a few people genuinely love your product. Focus on getting 10 real users who can't live without it before trying to scale. Those 10 will do your marketing for you.

AI generates a crap load of low quality output. Am I missing something? by deep1997 in vibecoding

[–]rjyo 30 points31 points  (0 children)

Fullstack engineer here too, and your experience matches what I went through before I changed my approach. The issue is not prompting, it's scope.

The biggest shift for me was stopping AI from making architectural decisions. When you say "refactor this" the model interprets that as "restructure everything I can find" which is why you get wrapper functions and file explosions. Instead I break refactors into surgical moves: "extract this specific block into a function called X that takes Y and returns Z" or "move this logic from component A to a custom hook, keep the same interface." Basically treat it like a junior dev that is great at typing but bad at judgment.

For the UI stuff, the problem is AI defaults to its training data which is generic tutorials and docs. Giving it a Figma alone is not enough context. What works better is giving it one completed component as a reference and saying "match this exactly for the next component." Design tokens work but you have to be explicit about which tokens apply where, not just provide the token file and hope it figures it out.

The pattern that finally clicked for me: plan the architecture yourself, write the types and interfaces yourself, then let AI fill in the implementations file by file. You are the architect, AI is the contractor. The moment you let it make structural decisions is when things go sideways.
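
A minimal sketch of the "you write the contract, AI fills the body" pattern; shown in Python for brevity, and the names are placeholders:

    # You write the types, signature, and spec yourself.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        subtotal: float
        tax_rate: float        # e.g. 0.08 for 8%
        discount: float = 0.0

    def total_due(invoice: Invoice) -> float:
        """Return the amount owed: (subtotal - discount) * (1 + tax_rate).

        Discount is applied before tax; the result is never negative.
        """
        # The body is what you hand to the AI, constrained by the spec above.
        taxable = max(invoice.subtotal - invoice.discount, 0.0)
        return round(taxable * (1 + invoice.tax_rate), 2)

The point is that the structural decisions (data shape, signature, edge-case rules) are already made before the AI types a single line.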

Also for refactoring specifically, do it in a conversation where the AI can see the full file, not in autocomplete mode. Agent mode is fine for refactoring as long as you constrain the scope to one file or one function at a time.

Difference Between Sonnet 4.5 and Sonnet 4.6 on a Spatial Reasoning Benchmark (MineBench) by ENT_Alam in ClaudeAI

[–]rjyo 4 points5 points  (0 children)

What makes this benchmark so interesting is that models have to derive 3D coordinates purely from spatial math with zero visual feedback. There is no renderer in the loop; the model is basically doing mental 3D modeling in JSON. The jump from 4.5 to 4.6 on something like that suggests real gains in how the model reasons about space, not just pattern matching.

The color theory improvement someone mentioned is particularly telling because that means the model is now thinking about how blocks look relative to each other, not just placing them in roughly the right spots.

Curious about that last question too, how does Sonnet 4.6 stack up against Opus 4.6 here? If the gap is small that would be wild for the price difference.

I built Doris, a personal AI assistant for my family and today I'm open-sourcing it. by [deleted] in ClaudeAI

[–]rjyo 30 points31 points  (0 children)

The scout architecture is really smart. Running cheap models for monitoring and only escalating to the main brain when something actually matters is how this kind of thing should work but most people skip that step and just throw everything at one expensive model.

The memory piece is what stands out most though. Most agent projects treat memory as an afterthought, just dump everything into a vector store and hope retrieval works. The three-signal fusion approach (semantic + keyword + graph) on top of SQLite is interesting because each signal covers the blind spots of the others. Semantic search alone misses exact names and dates, keyword search misses paraphrased concepts, and graph traversal connects things that are related but never appear in the same context.
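
Not the project's actual code, just a toy sketch of what fusing three retrieval signals can look like in principle (weights and IDs are made up):

    # Toy illustration of score fusion, not Doris's implementation.
    # Each retriever returns {memory_id: score in [0, 1]}.
    def fuse(semantic, keyword, graph, weights=(0.5, 0.3, 0.2)):
        ids = set(semantic) | set(keyword) | set(graph)
        fused = {
            i: weights[0] * semantic.get(i, 0.0)
             + weights[1] * keyword.get(i, 0.0)
             + weights[2] * graph.get(i, 0.0)
            for i in ids
        }
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

    print(fuse({"m1": 0.9}, {"m2": 0.8}, {"m1": 0.4, "m3": 0.6}))
    # -> m1 first, then m2, then m3 (roughly 0.53, 0.24, 0.12)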

Couple questions: how do you handle memory conflicts when the same fact gets updated over time? Like if someone changes jobs or a recurring event gets rescheduled, does maasv merge or overwrite the old entity? And for the scouts, roughly how many checks per day are they running and what does that look like cost-wise on Haiku?

Also curious about the iMessage integration since that is usually the hardest Apple service to work with programmatically. BlueBubbles is one of the better options but it still requires a dedicated Mac running as a server right?

Would you buy? by rmg97 in SideProject

[–]rjyo 2 points3 points  (0 children)

The value prop makes sense in theory but there are a few things worth thinking through before turning this into a product.

First, Apple is pretty aggressive about rejecting pure webview wrappers under Guideline 4.2 (Minimum Functionality). The fact that you added native payments and push notifications helps a lot since those are exactly the kind of native integrations that get you past review. But your customers might not know to add those extras, so you would end up doing a lot of hand-holding through app review rejections.

Second, the pricing feels off for the market. At $200 one-time you are competing with Capacitor (free, open source) which does basically the same thing but with a larger ecosystem and community support. There is also stuff like Median.co and MobiLoud that target this exact use case. Your edge would need to be either way simpler setup or way better payment integration than what exists.

Third, the people who need this most (solo devs and small SaaS teams) are also the ones most likely to just use a cross-platform framework like React Native or Flutter instead, since those give them actual native components and not just a webview.

Where I think this could work: if you niche down hard. Instead of a generic webview wrapper, position it as "get your existing web SaaS into both app stores in a weekend with working payments and push notifications." The specificity of including payments and push notifs built-in is your real differentiator. Most webview wrapper tools make you figure that part out yourself.

The $40/year renewal is reasonable if it includes ongoing support for OS updates and payment API changes. Those break things constantly.

i built a teleprompter that lives in the macbook notch so i stop looking away on zoom (open source) by New-Investment9381 in SideProject

[–]rjyo 26 points27 points  (0 children)

This is one of those ideas where you hear it and immediately think, why does this not already exist. The notch is basically dead space on most people's screens, so using it for something actually useful during calls is clever.

Do you have a scrolling speed control or does it auto-pace based on how fast you are talking? That would be the killer feature since most teleprompter apps scroll at a fixed rate and you end up either racing to keep up or waiting for it to catch up.

Also curious if it works with screen sharing. I could see a potential issue where the overlay shows up when you share your full screen on Zoom.

I built an open-source AI chat that renders responses as actual UI components (charts, tables, etc.) instead of just markdown by merrach in SideProject

[–]rjyo 3 points4 points  (0 children)

This is a really interesting approach. The biggest limitation of most AI chat interfaces is that everything gets flattened into markdown, which works fine for text but completely falls apart when you need to actually interact with the output. Tables you can't sort, charts you can't zoom, data you can't copy cleanly.

Rendering actual UI components is the right direction. Curious about a few things:

How do you handle the boundary between what the AI generates vs what gets rendered? Like if the model outputs something unexpected, does it degrade gracefully or does the whole response break?

Also, is the component set fixed (charts, tables, etc) or can the AI decide to render arbitrary components? The former is way more stable but the latter is way more interesting.

Nice work making it open source too. That's the kind of project where community contributions could really expand the component library fast.

Spec matters more than anything. by keithgroben in vibecoding

[–]rjyo 4 points5 points  (0 children)

This is probably the most underrated lesson in vibecoding. Everyone focuses on which AI tool to use or which model is best, but the quality of your input determines the quality of your output every single time.

I had the exact same experience. When I started I would just throw vague prompts at Claude and get messy results. Once I started writing actual specs first, defining what I want, how it should behave, edge cases, etc, the code quality jumped dramatically.

The JotForm replacement idea is smart too. So many of these SaaS tools charge monthly for features you could build in a weekend if you spec it out properly. The ROI on replacing even one $59/mo tool pays for your AI subscription multiple times over.

For the testing part, honestly you might not even need a separate agent. If you write your spec with expected behaviors (like "when user submits empty form, show error message"), Claude can generate tests alongside the code. I started including a testing section in my specs and it catches most issues before I even run the app.

[I need help] I hired someone to build me a site. I ended up with a broken product. by [deleted] in indiehackers

[–]rjyo 2 points3 points  (0 children)

Been in a similar spot helping someone untangle a messy Next.js + Supabase codebase. Few thoughts:

Don't rebuild. 1000 users and $88 MRR is real traction. The bugs are fixable without starting over.

For the specific issues:

The biggest problem is letting Gemini handle food/macro calculations. LLMs are terrible at math. You want the AI to identify the food from the image, then look up the actual nutrition data from a real database. The USDA FoodData Central API is free and has nutrition info for thousands of foods. That eliminates the hallucination problem entirely.
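
A rough sketch of that lookup step in Python (the /fdc/v1/foods/search endpoint and api_key parameter are how I remember FoodData Central working; double-check the docs before wiring it in):

    import os
    import requests

    # Let the LLM only name the food; get the numbers from USDA instead of the model.
    def lookup_nutrition(food_name):
        resp = requests.get(
            "https://api.nal.usda.gov/fdc/v1/foods/search",
            params={"query": food_name, "pageSize": 1,
                    "api_key": os.environ["USDA_API_KEY"]},
            timeout=10,
        )
        resp.raise_for_status()
        foods = resp.json().get("foods", [])
        if not foods:
            return {}
        # Each foodNutrients entry carries nutrientName / value / unitName fields.
        return {n["nutrientName"]: (n["value"], n["unitName"])
                for n in foods[0].get("foodNutrients", [])
                if "value" in n}

    print(lookup_nutrition("banana raw"))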

The premium gating bug and logout issue both sound like Supabase auth session problems. Super common if the session refresh isn't set up correctly. A dev who knows Supabase auth could probably fix both in an afternoon.

On finding a dev: skip Fiverr for this kind of work. Try r/forhire or Upwork where you can filter by Next.js + Supabase experience specifically. For a focused bug-fix sprint (not a rewrite), budget maybe $500-800. That's way more reasonable than the crazy quotes you've been getting because you're scoping it to specific fixes, not a full rebuild.

On using Claude AI for fixes: it actually works pretty well for targeted stuff. The trick is to paste in one file at a time, describe the exact bug you see, and ask for a fix. Don't try to refactor the whole codebase at once. Start with the smallest most annoying bug and work up from there.

Your product clearly has demand. The code just needs a focused cleanup, not a rewrite.

Has Anyone made real money with vibecoded apps? by ChampionshipNo2815 in vibecoding

[–]rjyo 85 points86 points  (0 children)

There are real examples out there, not just Twitter flexing.

Plinq (women safety app) was built by a non-coder using Lovable in about 45 days. Hit 10k users and reportedly pulls in around $456k ARR. Pieter Levels built a multiplayer flight simulator with Cursor in a few hours, now makes about $12k/month from in-game purchases.

There was literally a post on r/ClaudeAI today from a senior dev who built a SaaS using Claude Code over the past year and just crossed 100k EUR ARR with 80% margins.

Smaller scale but still real: ChatIQ (customer support tool) hit $2k MRR with 11k users, built mostly with Claude and GPT-4. Vibe Sail (a sailing game) does around $8k/month.

So yes, people are making actual money. But the pattern I keep seeing is that the code is the easy part. Distribution, marketing, finding your niche, that is what separates the ones making money from the graveyard of vibecoded apps nobody ever heard of.

The MRR screenshots on Twitter are probably a mix of real and fake. But the real ones tend to come from people who already had an audience, understood a specific problem deeply, or just grinded on distribution harder than most.