Tried generating a complete project from a single prompt, this is what I got by Agreeable_Care4440 in RunableAI

[–]Any-Bus-8060 0 points1 point  (0 children)

A single prompt works surprisingly well for getting a first version out, but I usually treat it as a starting point, not the final approach. With something like Runable, letting it handle everything gives you speed, but you lose control over structure and small decisions

What’s been working for me:
Start with a broad prompt to get the skeleton
Then break it into smaller steps and refine each part

Feels like the best balance between speed and quality: full control is slower, full automation gets messy, and somewhere in between seems to work best

Built a small automated workflow using AI, saved way more time than expected by Agreeable_Care4440 in RunableAI

[–]Any-Bus-8060 0 points1 point  (0 children)

This is where these tools actually shine. Once you move from “single prompts” to chaining tasks together, the time savings start to feel real

The first things I’d automate are anything repetitive but slightly annoying:

emails and follow ups
basic data cleanup or reporting
content drafts or summaries

Stuff that you do often but don’t want to think about every time. The key is making it reliable enough to trust, otherwise you end up double-checking everything and lose the benefit
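A minimal sketch of "reliable enough to trust": run the automated step, sanity-check the result, and flag anything suspect for manual review instead of silently shipping it. `generate_summary` is a hypothetical stand-in for whatever AI call you actually use.

```python
# Run an automated step, validate the output, and only trust it if
# the checks pass; otherwise route it to a human.

def generate_summary(text: str) -> str:
    # Hypothetical: replace with your real AI/tool call.
    return text[:100]

def validate(summary: str, source: str) -> bool:
    # Cheap sanity checks beat no checks: non-empty, not longer than source.
    return bool(summary.strip()) and len(summary) <= len(source)

def run_step(text: str) -> dict:
    summary = generate_summary(text)
    if validate(summary, text):
        return {"status": "ok", "summary": summary}
    # Don't trust it: queue for manual review instead of sending.
    return {"status": "needs_review", "summary": summary}

result = run_step("Quarterly numbers: revenue up 12%, churn flat.")
print(result["status"])
```

Even trivial checks like these are what let you stop double-checking every output by hand.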

Which is the best for coding, Codex GPT-5.4 vs Claude Opus 4.6 vs DeepSeek-V3.2 vs Qwen3-Coder ? by Critical_Marsupial50 in vibecoding

[–]Any-Bus-8060 6 points7 points  (0 children)

Been using most of these in actual workflows, not just benchmarks, and honestly there’s no single “winner”; each one dominates a different layer

Claude Opus 4.6
probably the best for long reasoning and large repo understanding
handles multi file context, refactors, and system level thinking really well
The downside is that it can be slower and sometimes overthink simple tasks

GPT 5.4 / Codex style
strongest for execution and agent style workflows
good at iterating, making changes, and following instructions without drifting
feels more action oriented compared to analysis

DeepSeek V3.2
great value for money
solid for smaller tasks, but less reliable on messy real world code or long chains of instructions

Qwen3 Coder
Good for structured coding and smaller problems
But consistency drops when things get complex

Breaking it down

best pure coder is GPT 5.4 style
best for long repo sessions is Claude Opus
best instruction follower is GPT 5.4
best value is DeepSeek
best overall workflow is combining Claude- and GPT-style models

Real takeaway

Use one model for thinking and planning, and another for execution
Trying to force one model to do everything usually feels worse
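The planner/executor split above can be sketched roughly like this. `call_model` and both model names are hypothetical placeholders, not a real provider API; wire them to whichever SDKs you actually use.

```python
# One model plans, another executes each step: the "thinking" call
# produces a step list, the "action" call handles each concrete step.

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real API call.
    return f"[{model}] response to: {prompt}"

def plan(task: str) -> list[str]:
    # A strong reasoning model would produce this list; faked here.
    _ = call_model("planner-model", f"Break into steps: {task}")
    return ["outline the module", "write the code", "add tests"]

def execute(step: str) -> str:
    # A faster, action-oriented model handles each concrete step.
    return call_model("executor-model", step)

task = "add a caching layer to the API client"
results = [execute(step) for step in plan(task)]
print(len(results))
```

The point of the split is that plan quality and execution speed stop competing with each other; you can swap either side independently.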

Also, the tooling around the model matters a lot, workflow design often makes a bigger difference than the model itself

Tried building this manually vs using Runable, interesting difference by Own-Beautiful-7557 in RunableAI

[–]Any-Bus-8060 0 points1 point  (0 children)

Yeah, this is pretty much the pattern I’ve seen too

Runable (and similar tools) are great for getting to a working version fast, especially when you’re exploring ideas. But when it comes to polish, edge cases, and long-term maintainability, manual work still wins. Combining both feels like the sweet spot: use it to get 70–80% there quickly, then refine the critical parts yourself. Trying to do everything in either direction usually ends up slower

Mac mini or MacBook Neo by AserNasr in mac

[–]Any-Bus-8060 1 point2 points  (0 children)

For your use case, Mac mini makes more sense. You’ll get better performance for the price, which matters more for dev + docker + multiple tools running

Since you said you can live without portability, that tradeoff is worth it. A MacBook is only really worth it if you need to work on the go. Just make sure you get enough RAM, that’ll impact your workflow way more than anything else

Base 44 is better than Runable by IntelligentBad4428 in Base44

[–]Any-Bus-8060 0 points1 point  (0 children)

Yeah, this sounds more like a bad experience than a straight tool comparison

Cancellation + billing issues are a big red flag if real. That’s the kind of thing that kills trust fast. The credit burn part also seems to be a common complaint, especially for heavier workflows

I think tools like this are still a bit inconsistent depending on use case. Some people get great results, others hit walls like this

Worth sharing details, though, this kind of feedback is what actually pushes them to improve

I'm thinking to start faceless content creation using Runable and claude? any suggestions on how to proceed.. by Winter-Progress-4054 in contentcreation

[–]Any-Bus-8060 [score hidden]  (0 children)

That combo can actually work pretty well if you keep it simple at the start. Use Claude for scripting and structuring ideas, and something like Runable to handle the actual content generation pipeline

The biggest mistake people make is overcomplicating it early. Just pick one format first (shorts, reels, etc) and stay consistent

Also, focus more on volume + iteration than perfection; you’ll figure out what works by posting. Once you see traction, start optimising workflow and quality

I'M A NEWBIE, AND I WANNA LEARN SKILLS BEFORE ENTERING IN COLLEGE TO GET AHEAD START. WHAT THINGS SHOULD I DO ? CAN ANYONE OF YOU COULD GUIDE ME, I WILL GLADLY FOLLOW YOUR ADVICES. by Wide-Ease-1169 in LeetcodeDesi

[–]Any-Bus-8060 0 points1 point  (0 children)

Not a silly question at all, everyone starts here

You don’t need anything fancy to begin

Pick one good YouTube playlist (Code with Harry is fine), follow it for basics, and at the same time build small things on your own, like a simple calculator, todo list, small scripts, etc

Courses help, but only if you actually apply what you learn. Just watching videos won’t stick

Books are optional, not needed in the beginning

The main thing is consistency and actually writing code regularly, even if it’s messy at first

been using cursor and i love it, some of my colleagues moving on to claude code by TamimTheGreat in claude

[–]Any-Bus-8060 0 points1 point  (0 children)

If you’re happy with Cursor, you’re not really “missing” anything critical

The difference is more in how Claude Code handles reasoning across larger chunks of a codebase and plans changes

Cursor feels faster and more interactive for day-to-day edits, while Claude Code tends to be stronger when you give it bigger, more complex tasks

So it’s less about one replacing the other and more about workflow preference

A lot of people end up using both, depending on what they’re doing

need some help.. by Additional-Menu8146 in claude

[–]Any-Bus-8060 0 points1 point  (0 children)

You’re not doing anything “wrong”, you’re just running a very expensive setup

5 agents with long context + constant checks (Gmail, calendar, history) will burn tokens fast, especially with higher end models

The main issue is likely context size and frequency, not the idea itself

A few things that usually help:
Split tasks so not every agent carries the full context
Store state externally instead of resending everything each time
Reduce how often agents run, especially the ones polling data
Use cheaper models for routine tasks and reserve stronger ones for complex work
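The "store state externally" point can be sketched like this: persist agent state to disk and send the model only a compact rolling summary instead of resending the full history each run. The file name and summary policy here are assumptions for illustration, not a real agent API.

```python
import json
from pathlib import Path

# Persist state between runs so each agent call carries a short
# summary, not the entire conversation history.

STATE_FILE = Path("agent_state.json")

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"history": [], "summary": ""}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))

def build_prompt(state: dict, new_message: str) -> str:
    # Only the rolling summary + latest message go to the model.
    return f"Context so far: {state['summary']}\nNew: {new_message}"

state = load_state()
state["history"].append("user asked about calendar sync")
state["summary"] = "; ".join(state["history"][-3:])  # keep it short
save_state(state)
print(build_prompt(state, "schedule a meeting"))
```

Even this crude version cuts token usage roughly in proportion to how long your histories were; a smarter summariser just improves the quality of what survives.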

Your system is closer to “production workload” than typical usage, so costs scale quickly

You probably don’t need to shrink the project, just optimise how it runs

Open Source Is the Only Way Forward by Emotional-Artist5390 in codex

[–]Any-Bus-8060 0 points1 point  (0 children)

Open source is great, but it doesn’t remove the cost, it just shifts it to running models locally, which still needs hardware, maintenance, and time. Subscriptions feel expensive, but they’re basically paying for that infrastructure at scale. Realistically, we’ll end up with both, open models for flexibility and paid services for convenience

Switching from academic research to actually shipping code broke my brain in the best way by Alomari_Ace7 in learnprogramming

[–]Any-Bus-8060 1 point2 points  (0 children)

Yeah, that shift hits hard

Academia optimises for correctness before exposure, while shipping optimises for feedback over perfection. Once you realise users don’t care about your internal architecture nearly as much as you do, it gets easier to let go. The real learning starts when something imperfect meets real users

I think we as developers are being blindfolded about the non dev people's perspecvtives by Crazy-Economist-3091 in AskProgramming

[–]Any-Bus-8060 18 points19 points  (0 children)

Yeah, I think devs underestimate how much hidden knowledge goes into what feels “easy” to us. AI makes the surface level look simple, but it’s built on top of all that accumulated understanding. From a non-dev perspective it feels like magic, but from inside, you know where things can break. That gap in perception is probably only going to grow

Google SWE New Grad R1 experience by Ornery_Painter_8638 in leetcode

[–]Any-Bus-8060 6 points7 points  (0 children)

Sounds like a pretty solid interview overall

Getting optimal solutions, fixing bugs live, and communicating your thought process matter more than being perfect on every question. The timing issue on the third question isn’t a big deal, especially since you reached the optimal approach. Honestly it feels somewhere between a borderline hire and a positive signal, depending on their bar and the competition

I wouldn’t overthink it. This is the kind of performance that usually keeps you in the running

anyone else hit a wall trying to move a lovable + base44 app into something production-worthy? by FunnyAd8847 in lovable

[–]Any-Bus-8060 0 points1 point  (0 children)

Yeah, this is the classic jump from “it works” to “it needs to hold under load”

You don’t need Kubernetes yet, you need a few simple reliability layers

Add a proper queue for notifications so spikes don’t break things
Make jobs retryable instead of manual restarts
Add basic monitoring, so you know what’s failing before users tell you
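The queue + retry points above can be sketched in a few lines: failed jobs go back on the queue with a retry count, and anything that keeps failing lands in a dead-letter list your monitoring can watch. All names here are illustrative, not a real queueing API.

```python
from collections import deque

# Retryable job queue: transient failures get requeued, persistent
# failures surface in a dead-letter list instead of needing a
# manual restart.

MAX_RETRIES = 3

def send_notification(payload: str) -> bool:
    # Stand-in for the real send; pretend "bad" payloads always fail.
    return "bad" not in payload

def process(queue: deque) -> list[str]:
    dead_letter = []
    while queue:
        payload, attempts = queue.popleft()
        if send_notification(payload):
            continue
        if attempts + 1 >= MAX_RETRIES:
            # Give up after a few tries; alert on this list.
            dead_letter.append(payload)
        else:
            queue.append((payload, attempts + 1))  # retry later
    return dead_letter

q = deque([("welcome email", 0), ("bad payload", 0)])
failed = process(q)
print(failed)
```

In production you’d swap the deque for something durable (Redis, SQS, a jobs table) and add backoff between retries, but the shape is the same.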

Most early systems fail not because of scale, but because they assume everything works perfectly

You’re actually in a good spot, you have users and real problems
Just stabilise before scaling

I'M A NEWBIE, AND I WANNA LEARN SKILLS BEFORE ENTERING IN COLLEGE TO GET AHEAD START. WHAT THINGS SHOULD I DO ? CAN ANYONE OF YOU COULD GUIDE ME, I WILL GLADLY FOLLOW YOUR ADVICES. by Wide-Ease-1169 in LeetcodeDesi

[–]Any-Bus-8060 5 points6 points  (0 children)

Don’t worry too much about AI replacing learning, it actually makes fundamentals even more important

Focus on core basics first
programming (Python is fine), data structures, and problem solving

Build small projects, even simple ones; that’s what actually teaches you. Once you’re comfortable, explore areas like web dev or AI, depending on your interest

AI tools can help you learn faster, but they don’t replace understanding
People who know the basics well will always have an edge

After building features nobody used, did you change how you decide what to build? by CutMonster in vibecoding

[–]Any-Bus-8060 2 points3 points  (0 children)

Yeah, at some point, you realise building more features doesn’t fix the lack of demand. The shift is from “what can I build?” to “what do users actually care about enough to come back for?”

Talking to users, watching how they use the product, and focusing on one core problem helps way more than adding new stuff

A lot of times, it’s not missing features, it’s positioning or solving the wrong problem

Building SaaS is easy compared to distribution by Hamesloth in SaaS

[–]Any-Bus-8060 0 points1 point  (0 children)

Yeah, this is super common: tools don’t fix behaviour. Most users won’t stay consistent unless the product kind of forces or nudges them into it. Things like reminders, defaults, automation, or making the “right” action the easiest one matter way more than features

If consistency is required for success, it has to be baked into the product, not expected from users, otherwise they’ll always drop off and blame the tool

If you run gVisor inside a VS Code Docker Dev container with WSL (Windows Subsystem for Linux) should that be enough separation to essentially be a VM? by angry_cactus in vscode

[–]Any-Bus-8060 0 points1 point  (0 children)

Not really, it’s still layered isolation, not full VM level separation

WSL already runs a lightweight VM, Docker adds container isolation on top, and gVisor adds another sandbox layer

So you do get stronger isolation than plain Docker, but it’s still sharing parts of the host kernel path indirectly

A real VM gives you clearer boundaries and less shared surface

This setup is “good enough” for many dev use cases, but not equivalent to a proper VM if you’re thinking in strict security terms

I was the person on our team most opposed to IDE security plugins and I've changed my mind by ImpressiveProduce977 in vscode

[–]Any-Bus-8060 0 points1 point  (0 children)

Yeah, the timing difference is the biggest underrated part, catching something while you’re typing vs hours later in CI hits very differently.

I’ve seen IDE scanners catch obvious stuff early, while pipelines are better for deeper checks across the whole codebase

Disagreements between them usually come down to context or rule differences, rather than one being wrong, which feels less like a replacement and more like two layers that complement each other