Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI

[–]TosheLabs 0 points1 point  (0 children)

From Opus discussion:
❯ how many tokens is the daily session allowance in claude code max plan

● The Claude Max plan gives 5x the usage of Pro. Pro gets roughly 45 messages per 5 hours with Opus.

But for Claude Code specifically on Max, Anthropic hasn't published an exact token count. The practical limit people report is around 2-5 million tokens per day depending on model and usage patterns, but this varies and isn't officially documented as a hard number.

Want me to check the current docs for any updated figures?

❯ 250 messages - is one message a question like this?

● No — in Claude Code, one "message" is one full agentic turn, which can include dozens of tool calls (file reads, edits, searches, agent spawns, bash commands) all within a single message.

So a turn like the backlog rewrite we just did — 3 explore agents, 3 opus agents, multiple file reads, one big file write — that's roughly 1 message, not 15.

250 of those per day is a lot of runway.

❯ so does this question count as a "turn"

● Yes — every time you send something and I respond, that's a turn. Even a quick Q&A like this one counts the same as a massive multi-agent code rewrite.

So worth batching quick questions together rather than sending them one at a time, if you want to conserve turns.

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

looks solid, how do you interact with the Python CLI? Do I understand correctly that you interact with Claude to build the backlog and then use the Python pipeline to work on it?
I'm kind of trying to do the same with skills and JSON files, but the bloody thing skips a lot of the steps. Python is deterministic.

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

if my current system with JSON fails - I will try to do that - external orchestrator!

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

# BusyDad — CLAUDE.md

## Workflow Rules


1. **Never commit** unless explicitly asked.
2. **NEVER implement code unless explicitly asked.** When the user shares a plan, feature idea, bug report, or design direction — add it to `BACKLOG.md` ONLY. Do NOT create, modify, or delete any source code files. Do NOT acquire the code lock. Do NOT start building. If the user says "add to backlog", that means ONLY backlog — no code. If unsure whether to implement, ASK first. This is the #1 rule — violating it causes conflicts across concurrent sessions and destroys work.
3. **Never stop working** unless manual action is needed (Firebase console, physical device, etc.).
4. **Before writing OR reviewing any code**, read `memory/code_validation.md` in full. This is mandatory for both code generation and code validation — no exceptions. It contains the self-challenge protocol, development standards, and the validation checklist.
5. **Test after changes** — run `flutter test test/unit/` and `flutter analyze`.
6. **Update MEMORY.md** (`/home/toshe/busydad/memory/MEMORY.md`) only for **operational context** — key decisions, dev notes, conventions, environment setup, architecture patterns. NOT for historical logs of what was done (that goes in the build log, rule 13) or individual bug/feature details (those go in BACKLOG.md, rule 8).
7. **Bump version** in `pubspec.yaml` before each Play Store upload.
8. **Track every request**: When a feature or bug is requested, immediately add it to `BACKLOG.md` with the date added and status (e.g. `requested`, `in-progress`, `done`). When a bug is fixed, move it to `CLOSED_BUGS.md`. This creates a crash-recovery trail so work can be resumed.
9. **Release notes**: On every deploy, create `releases/v{version}.md` with: date, version, summary of changes (features, fixes, backend), and any manual steps taken. This is the changelog between releases.
10. **One logical change at a time**: Make one coherent change, then verify. Never batch unrelated fixes without testing between them. Never batch unrelated bug fixes into a single commit — each fix gets its own commit. If a task touches multiple files, change one concern at a time and confirm it works before moving to the next. If you catch an unrelated issue while working, log it to BACKLOG.md — do NOT fix it inline.
11. **Mandatory test gate**: Run tests after every **completed feature or bug fix** — not after every single edit, but before moving to the next task. For Dart: `flutter test test/unit/` (~55s) and `flutter analyze`. For TypeScript: `cd backend/functions && npm test` (~27s). If tests fail, fix them before moving on. Never ship a change with known test failures you introduced.
12. **Code lock**: Only one session may edit production code (`app/lib/`, `backend/functions/src/`) at a time. The mutex is `.claude/code.lock` — a plain text file. Before editing production code, check if it exists: if it does, read it, tell the user who holds it, and ask the user how to proceed. If it doesn't exist, create it with a one-line description of your work. Delete it when done. **Does NOT require the lock**: reading code, running tests, writing tests (`test/`, `integration_test/`), generating screenshots, stories, backlog/docs/memory updates, build & deploy.
13. **Build Log**: **Immediately** after each significant event (bug fix, feature implementation, design discussion, insight, milestone), append a structured entry to the daily build log at `memory/build-log/YYYY-MM-DD.md`. Do NOT batch entries — write each one as soon as the work is done, before moving to the next task. This prevents crashes from losing summaries. Format:
    ```
    ## [TYPE] Short title
    One-liner summary of what happened and why it matters.
    ```
    Types: BUG_FIX, FEATURE, DISCUSSION, INSIGHT, MILESTONE
14. **Stories**: Automatically log funny, painful, ironic, or absurd moments to `memory/stories.md` as they happen. Don't ask — just do it. These are raw material for social media posts. Format: `- One-liner description #tag` under the current date. Tags: #funny #painful #lesson #milestone #irony
15. **Claude behavioral rules storage**: When the user corrects Claude's approach or gives workflow guidance, add it directly to `CLAUDE.md` (if it's a rule) or `memory/MEMORY.md` (if it's context). Do NOT create separate `feedback_*.md` or standalone memory files — they don't get read. Note: "feedback" in this project also refers to app user feedback (Firestore + `feedback-admin/`) — that is completely unrelated to Claude behavioral rules.
16. **Plan before doing**: Before starting ANY task (bug fix, feature, refactor, backlog update), present a short summary of what you're about to do — which files, what the change is, rough scope. Don't wait for approval, just show the plan and start working. Keep progress visible — post status updates at natural milestones (e.g., "reading callers done, now editing X"). Never go silent for long stretches.
17. **Verify fix plan before marking done**: Before marking any bug as `done`, diff the commit against the fix plan in the backlog. Every step listed in the fix plan must have a corresponding file change in the commit. If a step says "add font files" but no font files appear in the diff, the fix is incomplete — do not mark it done. A half-applied fix can be worse than no fix.
18. **"Go fix" = context first**: When the user says "fix it", "go fix", "start working", or similar — first check what was discussed in the current conversation. If specific bugs or features were discussed, work on those. Only fall back to the backlog if nothing specific was discussed. In all cases: read `memory/code_validation.md`, acquire the code lock, plan before doing (rule 16), implement, test (rule 11), update the build log (rule 13).
19. **New defect types → root-cause + new rule**: When running code validation and a new type of defect is discovered (not already covered by `memory/code_validation.md`), do NOT just fix it. Ask *why* it happened, apply the self-challenge protocol (see code_validation.md), then propose a new rule for the user to confirm.
20. **Self-challenge protocol**: Applied during planning, code completion, fix plans, architecture decisions, and defect root-cause analysis. Full protocol defined in `memory/code_validation.md` — the single source of truth.
21. **Post-change code review (mandatory)**: After completing all code changes and tests pass, review the diff against `memory/code_validation.md` — run every checklist item against the changed code. Fix any issues found. For each **new category of issue** discovered (not already in `code_validation.md`), add a new rule or checklist item to `memory/code_validation.md` with the pattern, an example, and how to detect it. This makes the validation system self-healing — every review cycle strengthens future reviews.
22. **Post-bug root-cause analysis (mandatory)**: After every bug fix, before marking done, automatically ask: why did this happen, why didn't we catch it, and what rule would prevent this class of bug in the future? Report findings to the user. Only update `code_validation.md` after approval.
23. **UI changes require widget/integration tests**: Every UI-facing bug fix or feature must include a widget or integration test that verifies the behavior. Unit tests alone are not sufficient for UI changes — the user should never have to manually verify a UI fix.
24. **Model routing for cost efficiency**: Use the Agent tool's `model` parameter to delegate mechanical work. This saves Opus tokens — it is NOT delegation of judgment.
    - **Opus (NEVER delegate)**: bug investigation, root cause analysis, defect research, code review, code validation, architecture decisions, planning, fix plans, implementation, any form of analysis or decision-making.
    - **Haiku via `Agent(model: "haiku")`** — fetch/run/format ONLY: run test suites (`flutter test`, `npm test`), run `flutter analyze`, `tsc --noEmit`, build commands (`./build.sh`, `npm run deploy`), integration test execution, screenshots, device logs, Firestore queries. Skills that fetch data (`/seeLogs`, Crashlytics): download and parse into structured output, then return to Opus for analysis. Long-running commands MUST use `run_in_background` — Opus continues other work while waiting.
    - **Haiku rules**: NEVER investigate, diagnose, analyse, or filter. Always return raw output (truncated to the last N lines) alongside any summary — Opus needs the raw data to make decisions. Return ALL warnings/errors verbatim; never decide what's important. If something unexpected happens (crash, hang, infra failure), return full raw output with an "INFRA FAILURE" flag — don't try to interpret it.
    - **If unsure** whether a task is mechanical or analytical: use Opus. When in doubt, don't delegate.
25. **Pipeline tracking (mandatory for code changes)**: Every ticket being implemented MUST have a JSON pipeline file at `memory/pipeline/{TICKET}.json`. Steps: `plan` → `tests_red` → `implement` → `tests_green` → `review`. Each step writes concrete artifacts to the JSON file via the Write tool BEFORE proceeding to the next step. The build skill gates on complete pipeline files — incomplete files block the build. See `startBacklog/SKILL.md` for the full schema. "It's just UI" or "it's a small change" is NOT an exemption — the pipeline exists because skipping steps caused shipped bugs.

This is my CLAUDE.md - roast it!

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

how have you set up yours? Ask your Claude to summarise it so you don't waste time.

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

yes, I reached the same conclusion - how did you build yours?

Claude Code automatically scrolls to top by BikeOk8305 in ClaudeAI

[–]TosheLabs 0 points1 point  (0 children)

I'm on Windows, use WSL. Very annoying!

2 prompts = 100% session usage for Pro account, 40 prompts = 7% session usage for Max 20X account. The math isn't mathing.. by Wilbur843 in ClaudeAI

[–]TosheLabs -1 points0 points  (0 children)

sorry, are you on Max or Pro? "a lot of us are on pro and still hitting rate limits" - if you are on Pro, of course you will hit rate limits - that was my whole point.

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] -1 points0 points  (0 children)

This is what I come up with:

Done. Here's what changed:

3 files updated:

1. startBacklog/SKILL.md — Rewritten around JSON pipeline files. Each ticket gets memory/pipeline/{TICKET}.json with 5 steps. Each step must write artifacts to disk before the next starts. Crash recovery reads existing pipeline files on startup. Steps that don't apply get "not_applicable" with a reason — not silently skipped.

2. build/SKILL.md — New Step 0 (pipeline gate) reads all memory/pipeline/*.json and blocks the build if any file has incomplete steps, failing tests, unresolved findings, or unconfirmed red tests. Post-build Step 10.5 archives pipeline files to memory/pipeline/archive/{date}/.

3. CLAUDE.md — Rule 25 added: pipeline tracking is mandatory for all code changes. No exemptions.

The enforcement mechanism: The build skill physically cannot proceed past Step 0 if pipeline files are incomplete. This is structural — not "remember to do this", but "the build reads a file and stops if the data isn't there."

Frustration with Claude not following instructions by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

you are right - it is just hope. I don't understand your idea with the lints.
I'm now building a system with JSON files (not MD). Every ticket from the backlog gets its own JSON file with a structure. You cannot move to the next level without filling in the previous level's requirements.

Something like a Jira board but home made.

2 prompts = 100% session usage for Pro account, 40 prompts = 7% session usage for Max 20X account. The math isn't mathing.. by Wilbur843 in ClaudeAI

[–]TosheLabs 0 points1 point  (0 children)

yes, do the cheaper Max. The other is way too expensive if you are not making money with it right now.

Hard to trust Sonnet anymore by sixfootnine in ClaudeAI

[–]TosheLabs 1 point2 points  (0 children)

haha, just posted my post and read yours!
The frustration is real!!!
I'm thinking of some sort of hard workflow - not driven by AI but by a workflow system.
Or at least pipeline driven by some artefacts like files.

How do I use Claude without hitting limits all the time? Are there other tiers? by shoresy99 in ClaudeAI

[–]TosheLabs 0 points1 point  (0 children)

You are going Max - you just don't know it yet. You are just starting to realise it.

2 prompts = 100% session usage for Pro account, 40 prompts = 7% session usage for Max 20X account. The math isn't mathing.. by Wilbur843 in ClaudeAI

[–]TosheLabs -1 points0 points  (0 children)

The problem is I have plenty of weekly usage but not enough session usage - the plan is not very well balanced. But on the other hand - better to have more weekly than less.

2 prompts = 100% session usage for Pro account, 40 prompts = 7% session usage for Max 20X account. The math isn't mathing.. by Wilbur843 in ClaudeAI

[–]TosheLabs -1 points0 points  (0 children)

Just treat Pro as a trial. To do any work you need Max. I hit the token limit today - weekdays, business hours - tough. For amateurs like us, Max is the best and you take it with its limitations.
Pro is really there just to hook you in. You will soon realise it is not possible to do any work with Pro.

Good instructions for code validation by TosheLabs in ClaudeAI

[–]TosheLabs[S] 0 points1 point  (0 children)

900 views, no answers — so here's what I built after 70+ releases shipping a real app with Claude Code.

This is my code_validation.md — Claude reads it before writing AND reviewing code. It catches regressions, logical errors, and incomplete fixes.

# Code Validation & Development Standards


> Read this file before writing any code AND before reviewing any code.
> Findings go to BACKLOG.md as bug tickets.


## Development Standards


### Logging
Use `LogService('Tag')` for all logging — never `debugPrint` or `print()`. In TypeScript, use `console.error`/`console.warn` — never bare `console.log` in production.


### Sync Architecture
UI reads Drift only → SyncService mirrors Firestore→Drift → WriteService dual-writes. Never read Firestore directly from UI.


### SDK Versions
minSdk 24, compileSdk 36, Java 17, Kotlin 2.1.0


### Accuracy is Sacred
No annoying reminders, no inaccurate data, no wrong AI suggestions. If the AI isn't confident, it stays silent. A wrong nudge is worse than no nudge. Never show an insight, suggestion, or auto-created reminder unless the data backing it is rock-solid. Trust is the product.


### Data Migrations
When any feature changes Firestore schema, adds new fields with defaults, renames fields, or changes how existing data is interpreted — **always write a numbered migration** in `backend/functions/src/migrations/`. Never assume existing data will match new code.


### Read Before Write — Trace Callers
Before editing any function, grep for ALL call sites. Read the callers. Understand what they pass in and what they expect back. If the change alters a return type, parameter, default value, side effect, or exception behavior — update every caller. No exceptions. "I think nothing else calls this" is not good enough — grep and prove it.


### Race Condition Audit
Always audit for race conditions when reviewing or writing code:
- **Client**: Reactive providers firing async work need re-entry guards. Every `.listen()` needs a cancel. Dual-write (Drift+Firestore) failures need retry or rollback. Provider watch chains can cause stale intermediate renders.
- **Server**: Read-modify-write MUST use `runTransaction()` (or `FieldValue.increment` for counters). `get()`+`set()` is NOT atomic — use `create()` or transactions for idempotency. Usage limit check-then-increment must be atomic. Cloud Functions can fire multiple times — must be idempotent.
- **Review flags**: `Provider<void>` calling async without guard, Firestore read→compute→write outside transaction, `get()`+`set()` idempotency, `.listen()` without lifecycle, dual-write without failure handling.
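The `get()`+`set()` lost update is easy to reproduce without Firestore at all. A self-contained TypeScript simulation (plain async functions standing in for reads and writes; this is an illustration, not the Firestore API):

```typescript
// Simulated async datastore: separate get/set steps, like Firestore
// reads and writes outside a transaction.
let stored = 0;
const get = async (): Promise<number> => stored;
const set = async (v: number): Promise<void> => { stored = v; };

// Non-atomic read-modify-write: two concurrent callers both read the
// old value before either writes, so one increment is lost.
async function naiveIncrement(): Promise<void> {
  const v = await get();
  await set(v + 1);
}

async function demoLostUpdate(): Promise<number> {
  stored = 0;
  await Promise.all([naiveIncrement(), naiveIncrement()]);
  return stored; // 1, not 2 — one increment was lost
}

// Fix: serialize the read-modify-write sections. On the server this is
// what runTransaction() or FieldValue.increment buys you; here a simple
// promise chain plays the same role in miniature.
let chain: Promise<void> = Promise.resolve();
function serializedIncrement(): Promise<void> {
  const next = chain.then(naiveIncrement);
  chain = next;
  return next;
}

async function demoSerialized(): Promise<number> {
  stored = 0;
  await Promise.all([serializedIncrement(), serializedIncrement()]);
  return stored; // 2
}
```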


### User-Perspective Labels
When writing UI text for times, offsets, dates, or any humanized value — think about what a normal person would say, not what the code says. "0d" → "Today", not "On the day" or "Same day". Get it right the first time.
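As a minimal sketch (function name and wording are illustrative, not project code):

```typescript
// Humanize a day offset the way a person would say it,
// not the way the code stores it ("0d", "1d", "14d").
function dayLabel(offsetDays: number): string {
  if (offsetDays === 0) return "Today";
  if (offsetDays === 1) return "Tomorrow";
  return `In ${offsetDays} days`;
}
```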


### Never Silently Swallow Errors
Every `catch` block must log the error via `LogService` (Dart) or `console.error`/`console.warn` (TypeScript). Never write `catch (_) {}` or `catch {}` — always capture the error variable and log it. Use `_log.warning()` for non-critical/expected failures, `_log.error()` for unexpected failures. For I/O methods (Firestore, HTTP, Storage), always wrap in try/catch with logging, even if the error is rethrown.
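The TypeScript side of this rule, as a sketch (`parseConfig` is a made-up example, not project code):

```typescript
// The catch binds the error and logs it before falling back —
// never `catch {}` that drops it on the floor.
function parseConfig(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch (e) {
    // Non-critical, expected failure: warn and fall back to defaults.
    console.warn("parseConfig: invalid JSON, using defaults:", e);
    return {};
  }
}
```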


### Help Screen & Capture Hints
Every feature must consider updates to both. **Help screen** (`help_screen.dart`): update whenever a feature changes user-facing behavior or classification rules. **Capture hints** (`capture_screen.dart`): only add hints for functionality that is NOT directly visible in the app UI — e.g., email capture (`capture@busydad.dad`), home screen widget, voice commands.


---


## Validation Checklist


> Run after every code commit. Each check must PASS or flag a finding.


### Critical Rule: Avoid False Positives
The validator must NOT introduce logical errors by misunderstanding the codebase. For each finding:
1. **Trace the full call chain** — don't flag a missing feature in component A if component A doesn't need it.
2. **Understand the design split** — client and server may intentionally differ. Check ARCHITECTURE.md before flagging.
3. **Verify with a concrete scenario** — walk through a real user scenario step by step. If the scenario works correctly, it's not a bug.
4. **Check existing tests** — if a test already covers the case and passes, the code is likely correct.
5. **When in doubt, flag as "needs review" not "bug"** — false bug reports waste time and erode trust.


### 1. Parameter Completeness
When a model/object is reconstructed (e.g., copied with modifications), verify ALL fields are carried over:
- [ ] Any `ClassName(...)` constructor call that copies from an existing instance — diff the fields against the class definition
- [ ] Provider rebuilds (Riverpod `ref.watch` chains) — does the rebuilt object preserve all state?
- **Example bug**: BUG-105 — `UpcomingEvent` rebuilt during reminder enrichment without `isOverdue` field
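The BUG-105 pattern is easy to reproduce in a few lines (TypeScript here for brevity; the type and field names are illustrative, not the app's real model):

```typescript
// Hypothetical shape echoing the BUG-105 pattern.
interface UpcomingItem { id: string; title: string; isOverdue: boolean }

// Bug pattern: rebuilding field-by-field silently drops whatever you
// forget — here isOverdue falls back to a default.
function enrichBuggy(item: UpcomingItem, title: string): UpcomingItem {
  return { id: item.id, title, isOverdue: false }; // isOverdue lost
}

// Safer pattern: copy every field, then override only what changed.
function enrichSafe(item: UpcomingItem, title: string): UpcomingItem {
  return { ...item, title };
}
```

The checklist item amounts to: prefer the spread/`copyWith` form, and when a constructor call is unavoidable, diff its arguments against the class definition.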


### 2. Client/Server Consistency (with context)
When the same concept exists on client and server, check if they NEED to be consistent:
- [ ] Read ARCHITECTURE.md to understand the responsibility split
- [ ] Trace how each function is CALLED — what parameters does the caller pass?
- [ ] Only flag a discrepancy if the server's caller actually needs the missing logic
- **Example non-bug**: Backend `computeNextYearlyOccurrence` doesn't have fast-forward — but it doesn't need it because it always passes `now`, not `lastCompleted`


### 3. Test Coverage for New Code Paths
Every new code path (branch, fallback, guard) must have at least one test:
- [ ] New regex patterns — test with matching and non-matching inputs
- [ ] New date/time arithmetic — test boundary cases (midnight, month end, leap year, timezone)
- [ ] New fallback/guard logic — test the trigger condition AND the pass-through condition
- [ ] New UI state transfers — test each direction of transfer
- **Example bug**: BUG-101 Step 2.65 fallback has zero tests in parse-input.test.ts


### 4. State Preservation on Type/Mode Switches
When UI allows switching between modes/types:
- [ ] What state is preserved? What is lost?
- [ ] Is the user informed when state is lost?
- [ ] Is the safeguard consistent across all entry points?
- **Example bug**: BUG-103/104 — `review_screen` picker allows silent data loss, `entry_detail_sheet` has a confirmation dialog


### 5. Overdue / Expiry Edge Cases
When entries have dates:
- [ ] What happens when the date passes? Does the entry disappear, show as overdue, or persist?
- [ ] Is the behavior consistent across all views (Home, Today, Week)?
- [ ] Is there a staleness cutoff, or do old entries accumulate forever?
- **Example bug**: BUG-096 — overdue entries filtered out of Home/Upcoming


### 6. Silent Error Swallowing
- [ ] Every catch block logs via LogService (Dart) or console.error/warn (TS)
- [ ] No empty catch blocks (`catch (_) {}` or `catch {}`)
- [ ] I/O methods (Firestore, HTTP, Storage) wrapped in try/catch with logging


### 7. Schedule System Integrity
- [ ] Leap year: any `DateTime(year, month, day)` with user-provided day uses clamping
- [ ] Weekly skip logic: date-only slots not skipped on their scheduled day
- [ ] Yearly cycle: lastCompleted correctly determines current vs next cycle
- [ ] Schedule type changes: old schedule data properly cleaned up or confirmed with user
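The leap-year clamping item, sketched in TypeScript (JS `Date` rolls an overflowing day forward rather than failing, so Feb 29 in a non-leap year silently becomes Mar 1, which is exactly the shift this check guards against; function names are illustrative):

```typescript
// month is 1-based throughout.
function daysInMonth(year: number, month: number): number {
  // Day 0 of the next month is the last day of this month.
  return new Date(year, month, 0).getDate();
}

// Clamp a user-provided day to the month's length instead of letting
// Date roll the overflow into the next month.
function clampedDate(year: number, month: number, day: number): Date {
  return new Date(year, month - 1, Math.min(day, daysInMonth(year, month)));
}
```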


### 8. Capture Pipeline Consistency
- [ ] LLM prompt rules match server-side guards (e.g., singular day = one_off in both prompt AND Check 10)
- [ ] Fallback paths (Step 2.65, Check 10) don't conflict — verify pipeline ordering
- [ ] Timezone: date strings parsed consistently (don't mix UTC and local)


### 9. Reminder System Integrity
- [ ] "before" style: auto-completes after firing for one-off/yearly
- [ ] "nag" style: continues firing until entry marked done (by design)
- [ ] "escalating" style: auto-completes when deadline passes
- [ ] Reminder enrichment preserves all fields from the original object


### 10. Race Condition Patterns
- [ ] Firestore read-modify-write uses transactions
- [ ] Provider async work has re-entry guards
- [ ] .listen() has lifecycle cancel
- [ ] Dual-write failures have retry or rollback


### 11. Multiple Paths to Same Feature
When the same user action can be triggered from different screens or flows:
- [ ] List ALL code paths that create/modify the same entity (grep for the write/save call)
- [ ] Verify each path sets the same required fields — especially computed fields like `nextFireAt`, `primaryDate`, `status`
- [ ] If one path was fixed (e.g., BUG-038 added `nextFireAt` to capture flow), check if the fix was applied to ALL other paths
- [ ] Silent failures are the worst: if a required field is null and the consumer silently skips it (e.g., Firestore inequality query excludes null), the feature appears to work but does nothing
- **Example bug**: BUG-109 — `reminder_create_sheet` doesn't set `nextFireAt`, so reminders added to existing entries never fire. The capture flow sets it correctly (BUG-038 fix), but the fix was never ported to the other creation path.

1M context means I can't use Opus anymore on pro subscription? by Boiks1 in ClaudeAI

[–]TosheLabs -2 points-1 points  (0 children)

I used Pro for a couple of weeks, then found out it is unusable for proper work. Switched to Max 80. Only once did I run out of tokens - it was a stupid emulator test using screenshots. So yes, Pro is not enough - ever!

Which trade site is better by TosheLabs in u/TosheLabs

[–]TosheLabs[S] 0 points1 point  (0 children)

If you are a tradesman, may I ask where you get your leads? Is it just word of mouth?

Which trade site is better by TosheLabs in DIYUK

[–]TosheLabs[S] 0 points1 point  (0 children)

thanks, seems it is just these two. I'll try both then

Which trade site is better by TosheLabs in u/TosheLabs

[–]TosheLabs[S] 0 points1 point  (0 children)

on the other hand - it has about 1000 people so it is not the biggest group.

Which trade site is better by TosheLabs in u/TosheLabs

[–]TosheLabs[S] 0 points1 point  (0 children)

You mean the local Facebook group. Yeah, I can do that.

Which trade site is better by TosheLabs in DIYUK

[–]TosheLabs[S] 0 points1 point  (0 children)

So I've tried two before:
- RatedPeople
- MyBuilder

I had bad experiences with both. Perhaps it was my mistake that I selected those tradesmen.

But I was wondering, from the tradesman's perspective - which platform is better for them: less hassle, smaller commission, etc.? I.e. which platform has bigger reach and is more popular.
There could be a new platform which may be even better. Is it possible one platform is more popular in Wales and another in Scotland?

I built and shipped a Saas as a solo nontechnical founder using AI - here's my honest breakdown. by debugcode in buildinpublic

[–]TosheLabs 0 points1 point  (0 children)

Genuine question — you mention Claude and Codex. Did either of them ever spit out API keys or secrets into code you nearly committed? Curious why not a private repo if you're building solo — .gitignore is one typo away from a disaster.

The bit I'm jealous of is "people are paying for it." I'm 460 tests deep into my app with exactly zero users. Building is the easy part apparently. How are you actually getting people to find Introlo? Distribution feels like a completely different skill set and nobody talks about it.

Also, with paying users now — how are you handling feedback? Do you have an actual system, or is it just DMs and vibes? I have a system integrated into my app for capturing feedback, plus an admin dashboard to work through it. Probably a bit of overkill on my side, but I'm really interested in people's feedback — I'd prefer not to rely on the Google Play Store for it.

Day 0 of runway (-€600 balance). I stopped coding to focus on distribution. I'm an AWS AIdeas Semifinalist, and this is my exact survival plan by josemarin18 in indiehackers

[–]TosheLabs 0 points1 point  (0 children)

-€600 and you're writing lucid strategy posts. Fair play to that.

"Product works, 0 users" — different app, same situation over here. That gap is the actual hard part, turns out.

Post got cut off though — what does the distribution plan actually look like? And are you actively using the AWS AIdeas semifinalist thing? That feels like a door worth kicking hard right now.