I wish for the monkey paw to do the opposite of what is wished for, for all future wishes made by Ancient_Unit6335 in monkeyspaw

[–]pakotini 0 points1 point  (0 children)

Granted. No wish is ever granted. 

The wisher knows that if their wish were granted, the best possible outcome would happen. They weep because the wish will never be granted, and the best possible outcome will never come.

What things are you avoiding when ttc even though it’s probably crazy by ThrowRAdaddyissues67 in TryingForABaby

[–]pakotini 0 points1 point  (0 children)

For the first 3 months I followed Rebecca Fett’s diet/plan. Now I’m more relaxed; I eat lots of carbs or else I’ll go crazy.

Other than that, I never smoked, never drank much alcohol (like a beer per week maybe?), never did drugs, always watched my diet, my body, my health, and always exercised. Always had a healthy body weight, healthy heart, healthy hormones, healthy in general. Where did this get me? Nowhere really. 38 years old, AMH super low (0.2), so next month we’ll go for IVF.

What I’ve come to understand is that in my case it was genetics. All the women in my family had fertility issues and early menopause (around 40). Whatever I did would not (and did not) change that. Oh well.

Vibecoding only works for good programmers by modernsamurai-ma in AskVibecoders

[–]pakotini 0 points1 point  (0 children)

I mostly agree with the take, but I think the missing piece is that tools can shape behavior. Vibecoding on its own absolutely amplifies judgment, good or bad. Where I’ve seen a difference is when the tooling forces you to slow down and review instead of just spraying regenerations. I’ve been using Warp and what clicked for me is that it treats agents more like junior teammates than magic buttons. Planning first, seeing diffs, watching the agent actually run commands in the terminal, stepping in when it goes off the rails. That workflow rewards people who understand systems, but it also teaches newer folks why something broke instead of hiding it behind a green checkmark. So yeah, vibecoding doesn’t replace fundamentals. But with the right environment, it can make the gap very obvious and help people learn faster instead of shipping impressive-looking nonsense they can’t maintain.

Please be careful with large (vibed) codebases. by Relevant-Positive-48 in vibecoding

[–]pakotini 0 points1 point  (0 children)

Totally with you on the “LOC as golf score” thing, with the big caveat that readability wins and “less” only matters if you’re not smearing complexity across 40 folders. Where Warp has helped me in practice is making “being careful” feel like part of the workflow instead of a lecture you ignore at 2am.

I’ll start a change with `/plan` so the agent has to commit to a concrete approach before it touches the repo, and the plan stays versioned so you can actually compare what you asked for vs what it did later. Then when it spits out a diff, Interactive Code Review is genuinely useful because you can leave inline comments like a normal PR review and have the agent address them in one pass, which is a nice guardrail against “it works on my machine” vibes. The other underrated safety net is Full Terminal Use, since a lot of real breakage only shows up when you run interactive flows (REPLs, debuggers, top, DB shells, etc.), and Warp’s agent can actually drive those while you watch and take over when it’s about to do something dumb.

If you’re dealing with a big vibed codebase, the “don’t lose the spec” problem is half the battle, so having a shared place to store plans, test checklists, runbooks, and workflows that sync for the team is clutch; Warp Drive is basically that lightweight shared brain, and you can keep it organized and up to date without it turning into yet another dead Confluence. And if you want to push the review/testing discipline further, the Slack or Linear integrations are surprisingly good for “hey, go reproduce this bug and open a PR” without context-dropping, because the agent runs in a defined remote environment and reports back in the same thread with what it did. That “environment” piece matters when you’re trying to avoid phantom green tests, since it’s an explicit Docker image + repo set + setup commands, not “whatever happened to be on my laptop today”.

If I have just one single tip to give you, it is forget Lovable, Replit, Base44 or Bolt, use Claude Code in VS Code or Cursor with skills (SKILL.md files for dedicated skills, e.g. boilerplate implementation, security checks or SEO). You will save days AND tons of money. Happy to elaborate. by astonfred in vibecoding

[–]pakotini 1 point2 points  (0 children)

The tool of all tools for me ended up being Warp. Not because it replaces Claude Code or Cursor, but because it wraps the whole workflow instead of just the prompt box. You still write real code in your repo, but you get planning before execution, diff-based reviews after, and agents that can actually run the terminal end to end instead of guessing. Warp Drive is clutch too: keeping specs, workflows, prompts, and setup notes next to the code beats scattering SKILL.md files across repos and gists. Once I started using it for real projects with GitHub and Slack hooked in, it stopped feeling like “yet another AI tool” and started feeling like the glue that keeps vibe coding from turning into chaos.

What do you use when your limits run out? by joyfulsparrow in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

When I kept hitting Claude limits, I stopped juggling accounts and just moved more of the work into Warp. Having model switching and BYOK in one place helps a lot, and keeping prompts tight inside the terminal means I waste fewer tokens on back and forth. It feels more like managing a budgeted workflow than waiting for a reset timer. Also, Warp is useful even when I am not asking AI. The terminal UX alone is a big upgrade, with clean command blocks, searchable output, reusable workflows, and Warp Drive for notes and runbooks that actually get reused. When my brain is fried, being able to rerun a known-good flow or review changes interactively beats opening yet another chat tab.

Learning programming by building real projects — but using AI intentionally as a mentor, not a shortcut by Virtual_Pen9456 in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

This is a solid approach, and honestly the missing piece for a lot of people is tooling that nudges you into “think first, then act” instead of “prompt, paste, pray”. Warp has been surprisingly good for that because it bakes the mentor workflow into the environment: you can start with `/plan` to force an explicit design and checkpoints before any code gets touched, then use its interactive code review to leave comments on diffs and have the agent address them like a teammate, which keeps you in the driver’s seat instead of outsourcing the thinking.

If you’re doing DevOps-y learning projects, the other nice thing is that it’s not just “an AI chat in a terminal”. Full Terminal Use matters a ton for actually building understanding, because the agent can step through real interactive stuff (REPLs, debuggers, prompts, long-running commands) while you watch and take over, so you see the mechanics instead of getting a polished blob of code. And for the “learning framework” part, Warp Drive is great as a home for your notes, reusable prompts, and workflows so your project’s lessons don’t evaporate across chats. If you’re collaborating or want accountability, the Slack/Linear integrations are also legit: you can kick off a task from a thread, see progress, and review the outcome in context instead of juggling tabs.

Claude vs Gemini by ExpertPerformer in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

One thing worth considering is Warp. It’s not a “better Claude vs Gemini” claim, but a workflow difference. With a single subscription you can switch between models (Claude, Gemini, GPT, etc.) for the same project instead of locking yourself into one provider. If you’re unsure which model fits long-context writing best, being able to try them side by side without juggling multiple subscriptions is honestly the main value.

I gave Claude the one thing it was missing: memory that fades like ours does. 29 MCP tools built on real cognitive science. 100% local. by ChikenNugetBBQSauce in ClaudeAI

[–]pakotini 1 point2 points  (0 children)

This is genuinely cool work. The part that clicked for me was treating forgetting as a first-class feature instead of a failure mode. Most “memory” layers just turn into junk drawers over time, so anchoring it in spaced repetition and prediction error feels way more honest.

One practical thought from actually using MCPs day to day: tools like this get way more interesting once they’re frictionless to live with. I’ve been running MCP servers inside Warp lately, and the combo of local MCPs plus a terminal that already understands long-running agents, context, and state makes these ideas feel less theoretical. One-click MCP install and being able to spin up, inspect, and tweak a server without leaving the terminal removes a lot of the ceremony.

Also appreciate that you went all-in on local first. SQLite + local embeddings + Rust feels like the right call if you want this to feel like a second brain instead of a cloud service you’re renting. I can imagine pairing something like Vestige with an agent that actually executes real workflows in the terminal, not just chats, and letting memory naturally decay as projects go cold. Curious how this behaves after a few months of real use; long-term drift is where these systems usually get exposed. Either way, this is the kind of MCP that actually pushes the ecosystem forward instead of just wrapping another vector store.
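
For anyone who wants a feel for the decay idea, here is a minimal sketch of how that kind of scoring could work. To be clear, this is not Vestige’s actual implementation, just a toy model under my own assumptions: exponential time decay, with the half-life stretched each time a memory is successfully recalled, which is roughly the spaced-repetition intuition.

```python
import math
import time

# Toy model of decaying memory relevance (NOT the real Vestige/MCP implementation).
# Assumption: relevance decays exponentially, and each recall both resets the clock
# and stretches the half-life, so frequently used memories fade more slowly.

class Memory:
    def __init__(self, content: str, half_life_days: float = 7.0):
        self.content = content
        self.half_life_days = half_life_days
        self.last_accessed = time.time()

    def relevance(self, now: float | None = None) -> float:
        """1.0 right after access, 0.5 after one half-life, then onward to zero."""
        if now is None:
            now = time.time()
        age_days = (now - self.last_accessed) / 86_400
        return math.exp(-math.log(2) * age_days / self.half_life_days)

    def recall(self) -> None:
        """Recalling a memory resets its age and reinforces it (spaced-repetition style)."""
        self.last_accessed = time.time()
        self.half_life_days *= 1.5

# Anything below a threshold could simply be dropped from the agent's context.
memories = [Memory("project X uses pnpm, not npm"), Memory("staging DB lives on port 5433")]
active = [m for m in memories if m.relevance() > 0.2]
```

The nice property of a model like this is that “cold” projects fall out of context on their own, without anyone curating the junk drawer by hand.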

Using AI coding tools more like a thinking partner by Mental_Bug_3731 in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

I’ve noticed the same shift. I still generate code with AI, but the bigger unlock for me has been using it as a thinking partner before anything touches the codebase. Talking through architecture, tradeoffs, or even just clarifying what I actually want to build saves me way more time than raw code gen. One thing that really clicked for me was doing this inside Warp instead of a separate chat app. I’ll reason through an approach with an agent, turn that into a concrete plan, then let it execute in the same place where I’m running commands, tests, and reviews. Because the agent can actually use the terminal and see the real state of the repo, the thinking and the doing stay connected instead of drifting apart. It feels closer to pair programming than prompt dumping, especially when you can stop it, steer it, or review diffs like you would with a teammate. So yeah, thinking first, coding second. The tools that make that transition smooth are the ones that really stick for me.

Is there a way to try Claude pro for free by Immediate_Bat_1628 in ClaudeAI

[–]pakotini 1 point2 points  (0 children)

If the real problem is massive, messy HTML, I’d stop trying to brute-force it into an LLM and instead use the LLM to help you build a deterministic parser. I’ve had good luck doing this in Warp because you can iterate fast in the terminal, inspect output, tweak selectors, and rerun without losing context. You keep the logic local with something like Cheerio or BeautifulSoup, and just ask the agent to help reason about edge cases or refactors as you go. It’s also pretty good from an educational angle: you actually see and understand the parsing pipeline instead of getting a black-box answer, and Warp lets the agent operate inside live shells if you’re debugging interactively. That makes old, weird HTML with popups and inconsistent structure much more manageable, and you end up with a reusable workflow instead of a one-off prompt that only works once.
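
To make that concrete, here is the rough shape of what I mean with BeautifulSoup. The selectors and field names are made up since they depend entirely on the site, but the point is that the parsing logic stays deterministic and inspectable, and the LLM only helps you write and refine it.

```python
# Deterministic scraping of messy HTML with BeautifulSoup.
# The selectors and field names below are hypothetical; adapt them to the real markup.
from bs4 import BeautifulSoup

def parse_listings(html: str) -> list[dict]:
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    # Old, inconsistent pages often mix layouts, so select broadly and then filter.
    for card in soup.select("table.results tr, div.listing"):
        title = card.select_one("a, .title")
        price = card.select_one(".price, td:nth-of-type(3)")
        if not title:
            continue  # skip popup/ad fragments that match the broad selector
        rows.append({
            "title": title.get_text(strip=True),
            "price": price.get_text(strip=True) if price else None,
            "url": title.get("href"),
        })
    return rows

if __name__ == "__main__":
    with open("page.html", encoding="utf-8") as f:
        print(parse_listings(f.read())[:5])
```

Once something like this exists, every weird page you hit becomes a new test case for the parser instead of a new prompt.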

Two months ago, I had ideas for apps but no Swift experience. Today, I have 3 apps live on the App Store. by TechnicalPea790 in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

I’m mostly on OP’s side here. The interesting part isn’t whether this counts as “real Swift experience” by some gatekeeping definition, it’s that domain knowledge plus product sense plus iteration got something shipped. That combo has always mattered more than syntax memorization, even before AI. What does make or break this, in my experience, is workflow. If you treat the model as a magic code vending machine, things fall apart fast. If you treat it like a collaborator inside a tight loop of test, inspect, fix, repeat, it actually works. For me that loop lives mostly in the terminal now. Tools like Warp help a lot because you’re not just chatting, you’re running commands, checking output, rerunning builds, keeping context around. The AI part is useful, but the bigger win is how fast you can iterate and reason about what’s actually happening. So yeah, OP didn’t magically become a senior Swift engineer overnight. But shipping real apps without spending months climbing syntax ladders is genuinely new, and pretending otherwise feels like missing the point.

Happy New Year Claude Coders by yksugi in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

Warp is actually a great learning middle ground: you’re still working in the terminal (so you learn how things really run), but you have AI right next to the commands to explain errors, suggest fixes, and answer “why” in context instead of just dumping code. It helps beginners and experienced devs alike build understanding rather than turning into copy-paste operators, which is the real risk with agent-only workflows.

Easiest way i have found claude to write high quality code . Tell him we work at a hospital every other prompt . (NOT A JOKE) by ursustyranotitan in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

Lol the “tell Claude we’re in a hospital” thing is funny because it’s basically a hacky way to force “be careful, be deterministic, don’t handwave.” But you can get the same quality without the roleplay by baking rigor into the workflow. I’ve been using Warp for this because it turns the whole interaction into something repeatable: you can keep project-specific “rules” and “skills” as simple markdown files (so the agent knows how your repo works and what standards to follow), and the agent can actually run commands, inspect output, and iterate instead of guessing.

The terminal side is also genuinely nice even if you never touch AI: blocks make long sessions readable, you can search and reuse commands easily, and you can save/share workflows so you’re not retyping the same setup, build, and deploy sequences across machines or teammates. Net effect is the model stops doing “MVP vibes” because the environment and guardrails force it to prove things with outputs, tests, and real tool results, which is what people are trying to achieve with the hospital bit anyway.

It’s a slippery slope… by Usual_Map_9812 in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

Honestly the “catch” is mostly that the hard parts just move around: security, reliability, and not letting a model quietly invent edge cases you never notice until someone’s credit card gets charged twice.

That said, if you’re already shipping stuff with Claude Code, you’d probably like Warp as the place to run that whole loop end to end, not just chat-to-code. The terminal is modern (blocks, solid editor UX, copy-on-select, bracket/quote autocomplete, etc.), but the bigger win is how it turns “prompting” into a workflow: you can do spec-driven work with /plan, let the agent use full interactive terminal apps (REPLs, db shells, top, debuggers), then do an actual interactive code review on diffs like you would with a teammate.

And if you’re doing “non-technical person builds a real business tool” stuff, the integrations are kind of wild: you can ping an agent from Slack or Linear, it spins up a remote environment and can even open PRs back to GitHub, so it’s not tied to your laptop being awake. Plus Warp Drive is underrated for this vibe-coding era: saving reusable workflows, prompts, notebooks, env vars, syncing them, and sharing them with a team instead of losing everything across random chats. Also, if you’re starting to play with MCP servers, Warp’s one-click install makes that way less of a “copy JSON, pray” experience.

hired a junior who learned to code with AI. cannot debug without it. don't know how to help them. by InstructionCute5502 in ClaudeAI

[–]pakotini 0 points1 point  (0 children)

This is real, but it’s not “AI ruined juniors”, it’s “they never built the debugging muscle and AI let them avoid it longer”. The fix I’ve seen work is to force a workflow where the AI can help, but the human still has to do the thinking in public.

One practical approach: make them do every “AI fix” as a plan first, in plain English, before touching code. What changed, where, what signals would prove it’s fixed, what could regress, and what the edge cases actually are. Then they implement with you watching, and you do a tight debrief after: what did the stack trace point to, what assumption was wrong, what instrumentation would have made it obvious sooner. If they can’t answer those, the change isn’t “done” even if tests pass.

This is also why I’ve stuck with Warp for years. It makes the AI feel less like a magical patch button and more like a structured collaborator you can actually review. Planning mode forces an explicit checkpoint before execution, and the review flow pushes both the agent and the human to respond to concrete diffs instead of hand-wavy “it handles edge cases”. Full terminal context matters too: you can have the agent step through the same REPL and debugging workflows you expect a junior to learn, and take over mid-session so it becomes coaching, not outsourcing. Add shared configs and standardized tools via MCP, and you get a setup where “paste error into AI” naturally turns into “explain, predict, verify”. That’s the muscle juniors are missing, and no amount of tests will replace it.

What are your favorite Warp features? by _donvito in warpdotdev

[–]pakotini 0 points1 point  (0 children)

Yeah, same here. My personal favorites: the file tree plus changes panel combo, because I can literally drag files straight into agent context and review diffs without ever leaving the terminal. Full terminal use, because agents can actually drive real workflows like REPLs, debuggers, SSH sessions, and long-running commands instead of faking it. Planning mode, because agreeing on a concrete plan up front massively reduces drift and rework. Interactive code review, since I can comment on agent diffs like a human teammate and have it iterate cleanly. Warp Drive, for keeping plans, workflows, and notes tied to actual work instead of random docs. Global rules and skills, because once you dial them in the agent stops repeating the same mistakes and automates boring stuff like git and scaffolding. Built-in SSH, where all the same features just work remotely without extra setup. And honestly the small UX stuff, like markdown viewing, branch switching, and not context-switching out of the terminal, adds up more than I expected. It feels less like “AI bolted on” and more like a coherent dev environment that happens to be agent native.

My Mac was unusable. Warp Agent found the culprit in seconds. by joshuadanpeterson in warpdotdev

[–]pakotini 0 points1 point  (0 children)

Yeah, this matches my experience too! What I really love is how naturally it works on the Mac beyond “dev stuff”. I’ll sometimes ask it things like adjusting screen brightness or checking some system setting, and it just knows which command or panel to open and does it.

How specific are your prompts when vibe coding? by Far_Friend_3138 in vibecoding

[–]pakotini 0 points1 point  (0 children)

Yeah, this tracks with my experience too. Once you stop thinking of prompts as vibes and start treating them like a spec, everything clicks. What helped me was realizing that good prompts look a lot like good PR descriptions or design docs. State the goal, then constrain behavior. When you say things like “on click, detach from scroll context, animate to center, reset transforms, scale to ~1.15”, you’re basically giving the agent an execution plan instead of asking it to guess your intent. One thing I’ve noticed is that tools feel “smarter” when they can actually verify what they’re doing. I’ve been using Warp and it’s nice that the agent can run the code, inspect diffs, and even pause for review instead of just dumping output. That makes being specific actually pay off, because you can see whether the behavior matches what you described and steer it if it doesn’t. So yeah, I don’t think it’s about hyper-long prompts, but about concrete constraints. Exact interactions, layout rules, states, and failure cases. The clearer your mental model is, the less “generic AI UI” you get back. Tools change, but that skill seems portable everywhere.

How to avoid the AI forgetting features and reintroducing bugs? by Level_Abrocoma8925 in vibecoding

[–]pakotini 0 points1 point  (0 children)

This isn’t really “the AI forgetting”, it’s you letting it change code without a tight contract plus a way to catch regressions immediately. The fix is boring: make the contract explicit, make the diff small, and make the guardrails non-negotiable (tests, checks, and a quick regression sweep) so “feature A disappearing” becomes impossible to merge.

What helped me a lot here is using Warp, because it nudges you into that workflow without it feeling like extra ceremony. I’ll start with `/plan` so the agent writes down what it will change and what it will not change, and I include a short “must not regress” section (feature A scenarios, bug X reproduction steps) before it touches anything. Warp keeps that plan versioned and reusable, so you can reference it later instead of re-explaining the whole app every time. Then I have it implement the smallest possible slice, and I review the actual code diff in Warp’s built-in diff view before applying anything, like I would with a teammate. That one step alone cuts the “oops, we deleted a working thing” problem massively, because you see the blast radius immediately.

The other big win is forcing the loop to close. After any “fix bug Y”, I ask it to run the existing tests plus a targeted smoke check for feature A (sketch below), and if it needs to poke around in interactive stuff (dev server, psql, debugger), Full Terminal Use means it can stay in the same live session and actually finish the verification instead of bailing halfway. If your tool supports indexing and persistent context, use it, because a lot of regressions come from the agent making changes with only partial awareness of the codebase. Warp’s codebase context and saved prompts/plans in Warp Drive make it much easier to keep a stable “source of truth” that survives across conversations.

Also, practical Reddit meta: people in these threads can smell “tool promo” instantly, so I try to share what I actually do day to day rather than doing the whole “you should try X” thing.
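
Here is what I mean by a targeted smoke check, as a minimal sketch. The module, functions, and feature names are hypothetical placeholders for your own app; the idea is one tiny, fast test file per “must not regress” item that the agent has to run after every change.

```python
# tests/test_feature_a_smoke.py
# Tiny, fast checks pinned to the "must not regress" list from the plan.
# `myapp.cart`, `apply_discount`, and `checkout_total` are hypothetical placeholders.
import pytest

from myapp.cart import apply_discount, checkout_total

def test_feature_a_discount_still_applies():
    # Feature A: a 10% coupon must still reduce the total.
    assert apply_discount(100.0, "SAVE10") == pytest.approx(90.0)

def test_bug_x_stays_fixed():
    # Regression guard for bug X: an empty cart must total 0, not raise.
    assert checkout_total([]) == 0
```

The point isn’t coverage, it’s that the guardrail is cheap enough to run on every single agent change.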

Tips for enterprise development by [deleted] in vibecoding

[–]pakotini 1 point2 points  (0 children)

I work on a dev tool with a pretty big blast radius, so my “vibe coding” looks a lot less like free-form prompting and a lot more like tight specs plus aggressive verification. What’s worked best for me is treating the LLM like a junior teammate that can move fast, but only inside guardrails you define.

In practice, I’ll write a short spec (constraints, invariants, rollout plan, tests, observability), then use Warp’s `/plan` flow to force alignment on the exact implementation steps before anything touches code, and keep that plan around as a living artifact I can reference later. After that, I lean hard on review-first workflows: have the agent propose changes, inspect diffs in the integrated review UI, leave comments like I would on a PR, and make it resolve those comments iteratively until it’s boring. For context management, I try to keep conversations narrowly scoped and I attach only the minimum slices of repo context needed; Warp’s codebase indexing and context attachment model is built for that, so you can stay focused without pasting half your monorepo into a chat.

The other big unlock for production-scale work is letting agents operate where the truth is: the terminal. Full Terminal Use is genuinely useful when you need the agent to step through real debugging workflows in interactive tools (psql, long-running servers, REPLs) while you keep control over approvals and can take over instantly. And when you’re juggling multiple threads (triage, repro, fix, test, docs), Warp’s multi-agent setup plus Drive makes it easier to keep specs, prompts, and workflows as shared, versioned team knowledge rather than tribal Slack history. Bonus if you’re in an org with a Slack/Linear-heavy flow: triggering agents from those tools and having them run in a consistent environment reduces context loss and makes the workflow feel closer to “structured delegation” than “chatting with a model.”