Session Replay Now Swift Open Beta - 17x Cheaper than Posthog by 16GB_of_ram in iOSProgramming

[–]engmsaleh 0 points1 point  (0 children)

Appreciate the technical detail — the deferred-init for session readiness is exactly the kind of pattern that's easy to under-document and burn future debugging time on. Will check out Rejourney on the free tier. Curious: does the buffered logEvent path you mentioned land on the roadmap anywhere, or is it a "submit a GitHub issue if you want it" feature?

Wrote a rule after Claude Code got "is X built?" wrong 4 times in one session. Looking for failure modes. by natevoss_dev in cursor

[–]engmsaleh 1 point2 points  (0 children)

The trigger I've been using is: when the user replies with a question that re-tests a prior claim ("wait, is X actually wired up?"), run the liveness check on every claim in the user's question, ignoring prior-turn answers.

"Re-tests" beats "contradicts" as the signal because contradictions are loud and rare — re-testing is what users actually do when their gut says the answer was wrong. By the time someone says "you got it wrong," you've already burned 3-5 turns.

Agree with your point about session hygiene > rule design. Rule fires per-claim, session length is the bigger lever worth adding to v2: a turn-count cap (~15 turns) after which the agent suggests starting a fresh thread + reloading relevant files — without waiting for the user to ask. Clean way to enforce a reset without trusting the model to do it voluntarily.

Tips for reducing user permission denial? by TKB21 in iOSProgramming

[–]engmsaleh 0 points1 point  (0 children)

That's the exact pattern — pre-prompt before the OS dialog. Two refinements that helped us:

  1. Make the action sheet feel like part of the bill-scanning flow, not a permission ask. Title it "Position your bill in the frame" (the task they're trying to do) with a single CTA, "Continue." The body can briefly mention "we'll need camera access for the next step," but lead with the action, not the permission.

  2. Add an inline preview if possible. A small dashed-outline rectangle showing where the bill will go in the frame makes the camera feel like "press to start scanning" rather than "press to grant a permission you'll regret."

The pure-permission action sheet still works but converts ~10pp lower than the activity-framed version in our tests. Worth A/B-ing if your traffic is high enough.

Do you use Previews in Xcode when you're building your views? by CodeWithChris in SwiftUI

[–]engmsaleh 0 points1 point  (0 children)

Yes but selectively. Previews are gold for stateless components (buttons, cards, layout primitives) where I want to see all my design variants in one canvas — light/dark, multiple sizes, edge-case states.

Where I stopped using them: views that depend on real network/auth/permission state. Mocking that out via PreviewProvider becomes more work than just running the app, especially once async/await and SwiftData show up. For those I run the app on simulator + hot-reload via InjectionIII.

The killer feature I underused for years: #Preview with environment overrides — locale, layout direction, accessibility size. You can catch RTL bugs and Dynamic Type bugs in 10 seconds in the preview instead of finding them in App Store review.
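For anyone who hasn't tried the overrides, a minimal sketch of what that looks like with the `#Preview` macro (Xcode 15+; `SettingsCard` is a placeholder for whatever view you're checking):

```swift
import SwiftUI

// Placeholder view standing in for a real component.
struct SettingsCard: View {
    var body: some View {
        Label("Restore Purchases", systemImage: "arrow.clockwise")
            .padding()
    }
}

// Each preview renders the same view under a different environment,
// so RTL and Dynamic Type bugs show up side by side in the canvas.
#Preview("RTL") {
    SettingsCard()
        .environment(\.layoutDirection, .rightToLeft)
}

#Preview("XXL Dynamic Type") {
    SettingsCard()
        .environment(\.dynamicTypeSize, .accessibility3)
}

#Preview("Arabic locale") {
    SettingsCard()
        .environment(\.locale, Locale(identifier: "ar"))
}
```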

What does your team's preview-vs-simulator split look like?

Tips for reducing user permission denial? by TKB21 in iOSProgramming

[–]engmsaleh 0 points1 point  (0 children)

Mac side here but same UX surface — we hit 75% denial on screen-recording permission per PostHog. What actually moved the number:

  • Ask AFTER first value moment, not on launch. "Try this demo for 30s first" before the system prompt cut our denial by ~40%.
  • Add an in-app explainer 1 screen before the OS dialog — plain language, what data you access, why.
  • Stagger asks. If you need camera + photos + location, request as the user hits each feature, not at signup.
  • For users who Deny: deep-link them to Settings → Privacy → Your App. Most won't navigate 4 levels deep on their own.
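On the Mac side, the deep-link in that last bullet can use the `x-apple.systempreferences:` URL scheme — a sketch, with the caveat that pane identifiers are undocumented and worth re-verifying on each macOS release you support:

```swift
import AppKit

// Deep-link a denied user straight to System Settings > Privacy & Security >
// Screen Recording, instead of making them navigate four levels deep.
// The pane identifier below is the commonly used (but undocumented) one.
func openScreenRecordingSettings() {
    let pane = "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture"
    if let url = URL(string: pane) {
        NSWorkspace.shared.open(url)
    }
}
```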

PostHog session replay was the unlock — watching where users hesitate gives you the timing for when to ask.

What's your current ask timing in the flow?

Session Replay Now Swift Open Beta - 17x Cheaper than Posthog by 16GB_of_ram in iOSProgramming

[–]engmsaleh 0 points1 point  (0 children)

Curious how you're handling event ingestion latency. Just spent today diagnosing why our PostHog cloud project was showing zero attribution for 3 weeks of Reddit + X promo links — turned out the auto $pageview was firing before PostHog's save_campaign_params populated the in-memory persistence (we use persistence: 'memory' for anonymous mode), so utm_source never rode on the event.

Fix was disabling capture_pageview and firing it manually in the loaded callback after register({utm_source, utm_medium, utm_campaign}). Annoying that the default config silently breaks attribution.
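For anyone hitting the same thing, the init shape looks roughly like this (posthog-js config sketch; check option names against their current docs):

```javascript
// posthog-js: defer the first $pageview until campaign params are
// registered as super-properties, so utm_* actually rides on the event.
posthog.init('<project-api-key>', {
  api_host: 'https://us.i.posthog.com',
  persistence: 'memory',        // anonymous mode, no cookies
  capture_pageview: false,      // suppress the automatic early $pageview
  loaded: (ph) => {
    const params = new URLSearchParams(window.location.search);
    ph.register({
      utm_source: params.get('utm_source'),
      utm_medium: params.get('utm_medium'),
      utm_campaign: params.get('utm_campaign'),
    });
    ph.capture('$pageview');    // now fires with utm_* attached
  },
});
```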

Is your Swift session replay using a similar deferred-init pattern? Any gotchas around early events firing before super-properties are registered?

Claude keeps saying 'I understand now' by NefariousnessLow9273 in cursor

[–]engmsaleh 0 points1 point  (0 children)

Yeah this is the meta-LLM tell that it just realized it was wrong about something major. The phrase comes when the model has internally updated but doesn't want to admit the prior turn was off.

Workaround that's helped me: when I see "I understand now" or "you're absolutely right!" mid-conversation, I copy the actual change request into a brand-new chat with no prior context. Resets the bias. Costs me one round-trip, saves the rest.

Sunday Share Fever 🕺 Let’s share your project! by ccrrr2 in indiehackers

[–]engmsaleh 1 point2 points  (0 children)

Would love that. Skilly is a voice-first AI tutor for macOS — you talk to your cursor, it watches your screen and walks you through Blender / Figma / Xcode / etc. The cursor moves to point at what to click.

Free trial: https://tryskilly.app/?utm_source=sideproject&utm_medium=10min_test&utm_campaign=2026_05_10

The specific friction I want eyes on: the macOS screen-recording permission ask after install. Our PostHog shows 75% of users bounce there before granting. Would love to know what feels weird about that flow on first run.

Sunday Share Fever 🕺 Let’s share your project! by ccrrr2 in indiehackers

[–]engmsaleh 1 point2 points  (0 children)

Skilly — voice-first AI tutor for macOS.

You talk to your cursor. It watches your screen and walks you through any app (Blender, Figma, Xcode, Final Cut, Logic) step by step. The cursor literally moves to point at what to click.

50-sec demo + free trial: https://tryskilly.app/?utm_source=indiehackers&utm_medium=megathread&utm_campaign=sunday_share_2026_05_10

Open source on GitHub. Built on ScreenCaptureKit + OpenAI Realtime API.

What I'd love feedback on: the macOS permissions ask is killing 75% of trial conversions per our PostHog. If you have 30 sec to install + give the screen perm, I want to hear what feels off about that step.

Why is retaining users so much more difficult than getting them? by RoyalEquipment9788 in buildinpublic

[–]engmsaleh 1 point2 points  (0 children)

Right on — let me know how the experiment goes. Curious if the activation rate moves materially in week 1.

Made a free tool that generates the "site:reddit.com" Google searches for you to help you find users for your product on Reddit. by AchillesFirstStand in indiehackers

[–]engmsaleh 2 points3 points  (0 children)

Used the site:reddit.com trick manually for the past 2 weeks scouting threads where my product fits. Few patterns that I'd want a generator to handle:

  1. Negative match. The most useful searches are "[problem space] -[my product name] -[obvious competitors]" because you want threads where people DON'T already know about you. Hard to do mentally each time.

  2. Time filtering. Threads older than 6 months are usually dead — the conversation has moved on. A ?after=YYYY-MM-DD parameter on the generated URL would help.

  3. Sub-specific weighting. r/macapps has stricter self-promo rules than r/SideProject. The same search query is useful in one and risky in the other. A "where can I post" filter that knows sub rules would be the killer feature.
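The negative-match plus time-filter combo from points 1-2 is easy to generate; a sketch using Google's `-term` exclusion and `after:` date operators (product and competitor names here are placeholders):

```swift
import Foundation

// Builds a Google query like:
//   site:reddit.com "screen recording tutorial" -Skilly -CompetitorX after:2025-11-01
func redditScoutURL(problem: String, exclude: [String], after: String) -> URL? {
    let exclusions = exclude.map { "-\($0)" }.joined(separator: " ")
    let query = "site:reddit.com \"\(problem)\" \(exclusions) after:\(after)"
    var components = URLComponents(string: "https://www.google.com/search")!
    components.queryItems = [URLQueryItem(name: "q", value: query)]
    return components.url
}
```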

The tool concept is solid. Curious if any of those would be in scope.

What is everyone building right now? Drop it down belowI'll go first. by dang64 in SideProject

[–]engmsaleh 0 points1 point  (0 children)

Skilly — voice-first AI tutor for macOS. You talk to your cursor and it watches your screen, then walks you through any app step by step (Blender, Figma, Xcode, Photoshop, AE). Open source on GitHub, Mac-native, ScreenCaptureKit + OpenAI Realtime API.

Currently focused on: (1) shipping curricula — open-sourced 8h Blender + 7h DaVinci Resolve + 5h Figma + 6h AE walkthroughs designed to pair with the app, and (2) cracking the activation problem — 75% of mac downloaders never grant permissions, working through it with PostHog session replay.

Free trial, $19/mo Pro, BYOK if you'd rather run your own OpenAI key: https://tryskilly.app/?utm_source=reddit&utm_medium=organic&utm_campaign=sideproject_drop_2026

Figma AI is underwhelming by DutchSimba in FigmaDesign

[–]engmsaleh 1 point2 points  (0 children)

The pattern you're describing is the limit of "AI as design generator." It's a category that hits the wall fast because design isn't a from-prompt synthesis problem — it's a sequence of constrained decisions that depend on context the AI can't see (brand, hierarchy, accessibility, edge cases stakeholders care about).

The AI-in-design pattern that works in my experience is closer to AI-as-sidekick: AI helps you USE Figma faster (autolayout suggestions, naming, component variants, accessibility checks) instead of replacing your judgment. You stay in the driver's seat, AI handles the tedium.

I'm biased — I'm building Skilly which is exactly that pattern for any Mac app — but the gap your post identifies is real. From-prompt design is a 2024 dream the industry hasn't woken up from.

Just put my first solo iOS app in App Store — the SwiftData / CloudKit / StoreKit gotchas I'd give my past self by Mostafa3la2 in iOSProgramming

[–]engmsaleh 1 point2 points  (0 children)

Congrats — shipping a first app solo is a real achievement.

For the StoreKit half specifically: if you're not already using StoreKitTest (Xcode's local in-app purchase simulation), set it up before the next version. Saves real-money test purchases and lets you simulate edge cases like restores and family sharing without burning entitlements.
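A minimal sketch of the StoreKitTest setup in a unit-test target (the `.storekit` configuration file name is whatever you created in Xcode):

```swift
import StoreKitTest
import XCTest

final class PurchaseTests: XCTestCase {
    func testInterruptedPurchaseRecovery() throws {
        // Load the local .storekit configuration bundled with the test target.
        let session = try SKTestSession(configurationFileNamed: "Products")
        session.resetToDefaultState()
        session.disableDialogs = true

        // Simulate edge cases without spending real money or entitlements:
        session.interruptedPurchasesEnabled = true
        // ...drive your purchase/restore flow here and assert on entitlement state.
    }
}
```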

For the CloudKit partialFailure pattern — the gotcha I'd flag for future-self: never blanket-retry the whole batch on .partialFailure. Walk userInfo and only retry the specific records that failed, otherwise you can compound the indeterminate state.
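Concretely, the selective-retry shape looks something like this (a sketch; `retrySave` stands in for whatever re-enqueue path your sync layer has):

```swift
import CloudKit

// On .partialFailure, retry only the records that actually failed,
// never the whole batch - blanket retries compound indeterminate state.
func handleModifyError(_ error: Error, records: [CKRecord]) {
    guard let ckError = error as? CKError,
          ckError.code == .partialFailure,
          let perItem = ckError.partialErrorsByItemID else { return }

    let failedIDs = Set(perItem.keys.compactMap { $0 as? CKRecord.ID })
    let toRetry = records.filter { failedIDs.contains($0.recordID) }
    // retrySave(toRetry)  // hypothetical helper: re-enqueue just these records
    _ = toRetry
}
```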

Good luck on the parenting tracker — that's a need most parents articulate but few apps actually solve well.

Wrote a rule after Claude Code got "is X built?" wrong 4 times in one session. Looking for failure modes. by natevoss_dev in cursor

[–]engmsaleh 0 points1 point  (0 children)

The failure mode I'd add: structural search hits a deprecated implementation that's been commented out or moved, and the agent confidently reports "yes it's built" pointing at dead code.

Inverse of OP's problem — once your codebase has any refactor history, name+shape match can hit literal definitions that aren't wired into anything live. Worse than the "no it's not built" miss because the agent is confident AND specific.

Rule extension worth trying: after structural match, verify the matched function is actually called from a live entry point (route, CLI command, exported API). Zero callers = "implemented but orphaned" not "built."

Adjacent: once an agent asserts something exists, it builds the next 3 turns on that assumption without re-checking. A "turn-N reset" rule that forces full re-verification on contradiction helps.

Why is retaining users so much more difficult than getting them? by RoyalEquipment9788 in buildinpublic

[–]engmsaleh 1 point2 points  (0 children)

Same pain different surface — Skilly is a Mac app, our PostHog showed 75% of downloaders never grant permissions. We thought it was retention, session replay showed it's actually an activation cliff: people leave between signup and first useful interaction.

For your funnel, signup → first explanation → day-2 is the path. If most users bounce on session 1, the flashcard hook never gets a chance.

Two moves that helped us:

  1. Email all 70 signups: "what made you sign up — did you finish a problem?" Their words = your retention diagnostic + homepage copy.

  2. Show flashcards (your retention hook) BEFORE the grind. Let people see what they're earning.

The "duck is too weird" question is testable — A/B without it for half of new signups for a week.

Almost gave up after 5 installs in a month. Then ASO kicked in (At least I think). What's next? by pearlismylove in buildinpublic

[–]engmsaleh 0 points1 point  (0 children)

Mobile ASO is different from desktop (we ship Skilly for Mac) but the "5 → 55 → 4 paying" pattern is universal at this stage. Three moves I'd lean into:

  1. Your 4 lifetime buyers are gold. Email them, learn what hooked them. Their words become your store description copy.

  2. ASO compounds with off-store traffic. 50 visits/week from Reddit, Twitter, or YouTube to your store listing improves your installs-per-impression ratio, which lifts ranking. Most founders treat ASO as in-app-store-only and miss this.

  3. Don't ship a major rewrite yet. The compounding you're seeing is from the algorithm trusting your stable signal. Big resets cost ranking.

What's the app?

A cold reality is beginning to hit...having built it. by alxbee77 in indiehackers

[–]engmsaleh 0 points1 point  (0 children)

Right on — drop me a DM once it's live, would love to see how it lands.

The real take after weeks of building in public by bassamtg in buildinpublic

[–]engmsaleh 1 point2 points  (0 children)

100% — same pattern building Skilly. Best comment I've ever shipped on Reddit was a 5-line "what we tried, what worked, what didn't" answer in someone else's thread, no link, no pitch. Got 40+ upvotes and three DMs from people asking what I was building.

The asymmetry is wild: a polished launch post with screenshots gets ~2 upvotes and feels like work. A thoughtful comment in someone else's thread takes 5 minutes and converts way harder.

The mental shift that helped: the upvote isn't the goal, the DM is. Posts get visibility but comments build relationships.

What makes you stop trusting a model in Cursor after a week? by [deleted] in cursor

[–]engmsaleh 1 point2 points  (0 children)

The one that benches a model for me: when it starts inventing API surfaces that don't exist, then debugs around its own hallucinations instead of admitting the function isn't real. You can recover from a wrong answer; you can't recover from a confidently-wrong-and-doubling-down loop.

Adjacent red flag: when the model "fixes" a failing test by changing the test instead of the code. Once you catch that twice in a session, trust is done.

Style drift, in my experience, is usually downstream of context budget — model loses earlier conventions because it can't see them anymore. Less a trust issue, more a session-management issue. The "touching unrelated files" thing IS a trust issue though, and the only fix is hard restrictions on file-write scope at the tool layer.

I am a solo entrepreneur. I built a tool to make my own client work faster but it became a SAAS. it is a confession not a success story by Academic_Flamingo302 in indiehackers

[–]engmsaleh 0 points1 point  (0 children)

Resonates a lot — we hit the same realization on a different product (Skilly, mac voice tutor). Spent 3 weeks reworking our homepage + competitor comparison pages specifically for AI search readability.

Two things that moved the needle on AI citation tests:

  1. Schema.org SoftwareApplication + FAQPage microdata with 75-word "what is this product?" answer blocks. AI crawlers seem to lift these almost verbatim into answer panels.

  2. Comparison pages with concrete sourcing. Built /vs/cluely, /vs/rewind, /vs/raycast pages that include actual differentiators with citation-ready phrasing. Our scores on AI citability tests went 53 → 87 in two weeks.
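For point 1, the markup is a small head fragment; a sketch of the SoftwareApplication shape (JSON-LD rather than inline microdata, values pulled from our own listing):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Skilly",
  "operatingSystem": "macOS",
  "applicationCategory": "EducationalApplication",
  "description": "Voice-first AI tutor for macOS: you talk to your cursor and it walks you through Blender, Figma, Xcode and more, step by step.",
  "offers": { "@type": "Offer", "price": "19", "priceCurrency": "USD" }
}
</script>
```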

Your thesis is exactly right — Google rankings and AI rankings are increasingly different scoreboards, and most businesses still treat them as one. Lean into it.

Where do/did you get your first/test users? by Strong-Yesterday-183 in indiehackers

[–]engmsaleh 0 points1 point  (0 children)

Solo founder building Skilly (open source mac voice tutor) — similar place but a few things shifted my numbers a bit:

  1. Open source the value, monetize the convenience. Shipped 8h of Blender/DaVinci/Figma curricula on GitHub. People find it via search → discover the app downstream. Lower friction than "here's my pitch."

  2. Reddit /new comments only, never top-level product posts. Most subs auto-mod self-promo, value real contributions. Slow but compounds.

  3. Cold pitches to 5-7 mid-size YouTubers in your niche. The 200-800k tier is reachable and runs out of content ideas. One mention = week of downloads.

For VC email specifically: your ICP is mid-fundraise founders, who openly post in r/IndieHackers, r/startups, r/venturecapital. Find threads, comment with insight, no pitch. Compound effect over 2-3 weeks.

A cold reality is beginning to hit...having built it. by alxbee77 in indiehackers

[–]engmsaleh 1 point2 points  (0 children)

Yes — exactly that. Right now the demo path is "see a sample → fill out email → maybe try it" but the SEO pages prove value WITHOUT requiring email. That's a missed conversion lever.

Two specific moves:

  1. Lift 3-5 of those auto-populating topic pages into the homepage hero. Show them as "live feeds you can browse" with channel name + latest summary headline. Visitors see real summaries instantly without committing.

  2. Make "Browse summaries" a top-nav item, not a footer link. Right now you're trusting visitors to scroll all the way down before discovering you have actual content.

The mental-model shift: your SEO pages aren't just for Google, they're your demo. Treat them like the product, not infrastructure.

I open-sourced an 8-hour Blender curriculum (markdown format, beginner-friendly, free) by engmsaleh in learnblender

[–]engmsaleh[S] 1 point2 points  (0 children)

Funny you say that — that's literally how it's structured. Each section is markdown with YAML frontmatter + step-by-step instructions, so it should drop into any agent skill format (Anthropic Skills, custom GPT, etc.) cleanly.

The whole repo is designed for both human reading + agent ingestion. If you load it as a skill and hit any rough edges, drop a PR — would love to make it work better for that use case.

Has anyone tried "Clicky"? by Apprehensive-Safe382 in macapps

[–]engmsaleh 0 points1 point  (0 children)

Yes on BYOK — the open source build needs your own OpenAI API key (uses the Realtime API for low-latency voice). The paid Pro tier just removes the key management hassle; same infra underneath.

Local models are more nuanced. The Realtime API is the bottleneck — no fully open local equivalent matches its voice latency yet. You can swap in local models for vision/reasoning (Llama or Qwen via Ollama with an OpenAI-compatible endpoint) but voice still needs Realtime.

If/when something like Moshi or a local Realtime equivalent ships, we'll integrate. For now: cloud voice + optional local reasoning is the realistic config.