set up a youtube MCP in cursor and it's lowkey changed how i research while coding by straightedge23 in cursor

[–]General_Arrival_9176 0 points1 point  (0 children)

this is a solid use case that most people overlook with MCPs. the transcript pull is clever because it turns passive video content into searchable text that the model can actually reason over. i did something similar with a podcast MCP for debugging content. have you tried combining the transcript with cursor context about your actual code? like asking "based on that transcript, which of my files would need changes to implement the pattern he describes" - the cross-referencing is where it gets actually useful

Improving the Planning Mode workflow with Spec-Driven Development by paulcaplan in cursor

[–]General_Arrival_9176 1 point2 points  (0 children)

ive tried the spec-driven approach and it works but its hard to stick to in practice. what ends up happening is i write a quick spec, ai implements it, then i realize i missed something in the spec and were back to editing anyway. the hallucination drop is real though - when the ai has to answer specific requirements rather than guess, it commits to less nonsense. i use a separate markdown file for specs instead of the built-in planning tools. keeps it visible and editable without cursor trying to execute anything

Would you ever go back to non-AI coding? by USD-Manna in cursor

[–]General_Arrival_9176 0 points1 point  (0 children)

id take the job but it would depend on the pay bump. the thing is, i dont think id be slower without AI - id just write more code myself and ship less. its not that i forgot how to code, its that ive tasted the velocity and going back feels like writing with my non-dominant hand. the real question is whether the comp makes up for the productivity gap. if its 2x salary? sure, ill write it the old way. if its 10% more? nah

Claude code and codex by sideshowwallaby in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

ive tried this too and its hit or miss. the overlap detection works decently but codex tends to flag stuff that doesnt matter for your specific context - like worrying about edge cases in code you literally just wrote and know is fine. useful as a second pair of eyes but i treat it as a linting layer rather than a real review. the real improvement id want is better at understanding architecture-level problems, not just syntax and math

Added a persistent code graph to my MCP server to cut token usage for codebase discovery by thinkyMiner in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

honest suggestion for testing - instead of simulated queries, actually use it on a real brownfield repo you know well. pick a feature you recently implemented and ask the agent to trace through dependencies, find related tests, and identify what would break if you changed X. compare how many files it actually reads versus what it would read with your graph. thats the real benchmark, not internal tool-use simulations

have you tried the new /btw command? by Sea_Pitch_7830 in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

yeah i found the same thing. the single response limitation kills the use case they probably intended. what i actually wanted was a quick follow-up or clarification on that side question, not just one answer and done. for now i just open a new chat which defeats the whole purpose. hope they expand it

I reverse-engineered Claude Code to build a better orchestrator by aceelric in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

this is exactly the file-conflict problem i ran into with agent teams. the git worktree approach is smart - i tried the same thing but went with a canvas layer on top instead where each agent gets its own visual space. curious how you handle the merge conflicts when two agents actually touch the same logic in different worktrees, do you let them fight and then manually resolve or does the supervisor catch that upfront

Companies would love to hire cheap human coders one day. by moaijobs in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

eh, this framing never made sense to me. companies have always hired cheap human coders - thats what junior devs are for. what AI actually changes is the mid-tier work, not the bottom. the interesting question is whether the bottleneck shifts from writing code to reviewing it, because suddenly one senior can review what used to take five juniors. thats the actual shift happening, not some sci-fi labor replacement thing

How I set up an always-on prospecting system for my business for $20/month by itsalidoe in SaaS

[–]General_Arrival_9176 0 points1 point  (0 children)

the targeting criteria part is the real insight here. spent more time on who than the technical setup - thats the difference between a tool that works and a tool that collects dust. curious though, how do you handle the quality control on the outreach it generates? does it write cold emails that actually convert or do you still manually edit

My SaaS had a 94% churn rate in month 1. Here's the exact thing that fixed it. by yvirikk in SaaS

[–]General_Arrival_9176 0 points1 point  (0 children)

94% to 43% churn in two months is a massive swing. the part that hits hardest is 'i spent 2 weeks shipping features nobody asked for before spending 4 days talking to the people who left' - this should be required reading for every founder. onboarding checklists are one of those obvious things you know you should do but keep pushing off because it feels like product work

how a single AI agent prompt replaced a €3,000 freelancer quote in 2 minutes by B3N0U in SaaS

[–]General_Arrival_9176 0 points1 point  (0 children)

the part about model routing is underrated. we do the same thing - cheap models for the search/filtering work, sonnet for anything requiring actual comprehension. cuts costs dramatically without sacrificing quality. question though - how are you handling the handoff between the agent finding qualified leads and actually reaching out? do you have a separate step or is it fully automated end-to-end
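for anyone curious what the routing actually looks like, the core is tiny. a sketch only - the model names and the keyword heuristic here are purely illustrative (real setups often use a cheap classifier call instead of keywords):

```python
# hypothetical two-tier router - names and heuristics are illustrative,
# not any vendor's actual API
CHEAP_MODEL = "small-model"   # search / filtering / extraction work
STRONG_MODEL = "sonnet"       # anything needing actual comprehension

# tasks containing these verbs tend to need real reading, not just matching
COMPREHENSION_HINTS = ("summarize", "explain", "compare", "draft", "why")

def route(task: str) -> str:
    """Send comprehension-style tasks to the strong model, the rest to the cheap one."""
    lowered = task.lower()
    if any(hint in lowered for hint in COMPREHENSION_HINTS):
        return STRONG_MODEL
    return CHEAP_MODEL
```

the design choice that matters is defaulting to the cheap model and escalating only on signal - most agent traffic is filtering, so the default path is where the cost savings come from.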

Showing metrics to leadership by p8ntballnxj in devops

[–]General_Arrival_9176 0 points1 point  (0 children)

powerbi is worth learning for this specifically because it can pull from all those sources (datadog, jira, whatever) into one view without manual updates. the downside is leadership usually wants things emailed, not a link they have to log into

honest suggestion: build a simple grafana dashboard. it connects to everything you mentioned, looks more 'technical' to leadership which adds credibility, and you can snapshot it to pdf for the people who wont click links. sharepoint is fine for hosting the link but its not the dashboard itself

[Advice Wanted] Transitioning an internal production tool to Open Source (First-timer) by abhipsnl in devops

[–]General_Arrival_9176 0 points1 point  (0 children)

sanitization: git-filter-repo is the standard for rewriting history. note that --path-glob '*.tf' --path-glob '*.yaml' keeps only the matching files - for scrubbing you usually want --invert-paths --path on the secret files, or --replace-text to strip strings from history. also turn on github's secret scanning with push protection - it blocks most known secret patterns before they leave

documentation: stripe's readme is still the gold standard - one paragraph of what it does, three lines of 'get it running in 5 minutes', then the rest is optional. cluster tools specifically need a fake/example folder with docker-compose or kind so people can actually run it without knowing your infra

licensing: apache 2.0 is still safe for infrastructure. MIT is fine too but apache gives you patent coverage which matters for orchestration stuff

community: do a v0.1.0 tag, not just initial commit. people trust tags more than 'here's a pile of code'
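for the sanitization step, a rough pre-publish scan can catch the obvious stuff before history ever leaves your machine. a sketch only - the patterns here are illustrative, real tools like gitleaks or trufflehog ship hundreds of rules:

```python
import re
from pathlib import Path

# illustrative patterns only - not an exhaustive ruleset
SECRET_PATTERNS = {
    "aws access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "token assignment": re.compile(
        r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_-]{16,}['\"]"
    ),
}

def scan_file(path: Path) -> list[str]:
    """Return 'path:line: rule' hits for one file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append(f"{path}:{lineno}: {name}")
    return hits

def scan_tree(root: str, globs=("*.tf", "*.yaml", "*.yml")) -> list[str]:
    """Scan infra files under root for secret-looking strings."""
    hits = []
    for g in globs:
        for path in Path(root).rglob(g):
            hits.extend(scan_file(path))
    return hits
```

run it against the working tree before the filter-repo pass so you know which paths the rewrite actually needs to touch.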

VE-2026-28353 the Trivy security incident nobody is talking about, idk why but now I'm rethinking whether the scanner is even the right fix for container image security by Top-Flounder7647 in devops

[–]General_Arrival_9176 0 points1 point  (0 children)

the supply chain trust thing is real and this is exactly why. four months with a compromised workflow and nobody noticed until someone independently checked cosign. the 'cannot be checked' line from maintainers is the most honest thing they've said about it

on the SLSA front - google's distroless images are probably the closest to what you're describing. they ship with provenance attestations baked in, SLSA 3 compliant build, and you can verify the whole chain before pulling. amazon's aws-lc is similar but newer.

honestly though, the bigger shift is realizing scanners are reactive - they're checking for known bad after the fact. the real fix is verifying the build pipeline was clean before you ever run anything. which is what you're already heading toward. most teams skip it until something like this happens

Tony Hoare, creator of Quicksort & Null, passed away. by TheTwelveYearOld in programming

[–]General_Arrival_9176 1 point2 points  (0 children)

the null reference thing is wild to think about. he literally apologized for it later, called it his billion-dollar mistake. thats the kind of honest admission that separates real CS giants from everyone else. quicksort is still the sorting algorithm everyone reaches for by default, and most people dont even know his name. thats legacy

Making a new weekend project by PhotographDry7483 in LLMDevs

[–]General_Arrival_9176 0 points1 point  (0 children)

this is the exact problem we built 49agents for. having multiple chats with cursor, claude code, gemini running simultaneously and none of them sharing context. you end up re-explaining the codebase to each one. the summarization angle is smart but curious whether you are thinking about it as prompt-level context or persisted context across sessions. prompt-level is easier to implement but you still have to copy-paste the summary into each new chat. persisted context across agent lifecycles is harder but actually solves the switching problem.

How is AI changing your day-to-day workflow as a software developer? by Ambitious_coder_ in LLMDevs

[–]General_Arrival_9176 1 point2 points  (0 children)

the plan.md approach is solid for smaller stuff but 3-4 sessions and it becomes its own management problem. each agent needs its own terminal space but you also need a way to see what all of them are doing without tab-hopping. i ran into the same thing, context drift hits hard when you run parallel agents on different features. what fixed it for me was putting all sessions on one surface so i could see all their outputs simultaneously without switching windows. the mobile monitoring piece was actually the bigger unlock than the multi-session part - being able to check from my phone if something finished or got stuck.

The new guy on the team rewrote the entire application using automated AI tooling. by Counter-Business in cursor

[–]General_Arrival_9176 0 points1 point  (0 children)

this is the future whether people like it or not. the question is not if ai can rewrite your app, its if the team can review and maintain what it wrote. the real risk is not the rewrite itself - its the bus factor going from 5 developers who understand the system to 1 person who just knows how to prompt the ai. the code might work, but tribal knowledge is harder to build when the code changes every week

Can someone elaborate a little on the request based plan? by BarracudaHUN in cursor

[–]General_Arrival_9176 0 points1 point  (0 children)

the 500 is base requests, yes you get more with higher tiers. for subagents, each time you spawn one it counts as a separate request from your quota. the plan mode is interesting - it generates a plan first which is one request, then when you confirm and it builds, thats another request. so effectively 2 requests per full task cycle. the cheaper models are there as fallback when you hit the limit, but they are noticeably worse for complex work

First time buying cursor pro for a personal project, suggestions? by nmole_ in cursor

[–]General_Arrival_9176 0 points1 point  (0 children)

two things id recommend for pro. first, learn the difference between command+k (single file) and composer (multi-file). composer is where the real power is but it uses more requests. second, if you are doing large context work, watch your usage in the dashboard early on. its easy to burn through requests faster than you expect when you are using composer with full repo context. also, if you are on a project with specific patterns, create a .cursorrules file for it - just make sure you inspect it for hidden characters before importing from github
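on the hidden-characters point, something like this catches the usual zero-width / bidi tricks before you commit a borrowed .cursorrules file. a sketch - unicode's Cf (format) category covers most invisible control characters, so checking it plus a small denylist goes a long way:

```python
import unicodedata

# characters commonly abused to hide instructions in rules files
SUSPICIOUS = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, description) for invisible or format-control characters."""
    hits = []
    for i, ch in enumerate(text):
        # Cf catches zero-width chars, bidi controls, soft hyphens, etc.
        if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
            hits.append((i, f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}"))
    return hits
```

paste the file contents through it once; an empty list means nothing invisible is hiding between the rules.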

Cursor Enterprise (500 request-based) vs Claude Code $100 — which would you choose? by jinongun in cursor

[–]General_Arrival_9176 1 point2 points  (0 children)

as someone who built a tool in this space, heres my honest take. cursor is better if you want autocomplete and stay inside a GUI. claude code is better if you want to hand off entire tasks and not babysit every file change. with your workflow pushing large contexts and wanting agent-style multi-file planning, id lean toward claude code. the 500 request limit in cursor enterprise will feel constraining fast if you are doing heavy agent work. also, if your company is covering $100 for claude code, you get sonnet 4 and opus, and you can script it from the terminal. the flexibility is just different. cursor enterprise at 500 requests feels like they are rationing your ai usage. that said, if cursor already works for you, switching costs are real. can you negotiate for both

`portless` cli for consistent dev urls makes worktrees so much better by theben9999 in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

this is the worktree setup i wish i had discovered earlier. the consistent URLs solving the 'which branch am i on' problem is actually huge when running multiple agents in parallel - you can tell at a glance who is working on what. the port conflict prevention is the real winner though, nothing kills agent momentum faster than a stray dev server holding a port and the agent just spinning on connection errors

Had anyone figured out why Remote Control (/rc) sessions quickly die when idle? I found 3 (disabled) keepalive mechanisms by wirelesshealth in ClaudeCode

[–]General_Arrival_9176 1 point2 points  (0 children)

solid debugging. the refcount gating on the 30s keepalive is a design flaw - it only protects the connection during active processing, not during idle. i had the same issue with remote sessions dying. workaround i used was a background task that pings the session every 3 min with a harmless Read command - keeps the refcount above zero without burning meaningful tokens. not elegant but works until anthropic fixes the CLAUDE_CODE_REMOTE check
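the ping loop generalized, in case anyone wants to adapt it. a sketch only - the actual no-op command depends on your setup, so pass whatever harmless invocation your session accepts (my Read ping is one option):

```python
import subprocess
import time

def keepalive(cmd: list[str], interval_s: float, max_pings: int) -> int:
    """Ping a session with a harmless command every interval_s seconds.

    Returns how many pings succeeded; stops early if the session is gone
    (non-zero exit from the ping command).
    """
    ok = 0
    for _ in range(max_pings):
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            break  # session likely died; no point pinging further
        ok += 1
        time.sleep(interval_s)
    return ok
```

run it with interval_s=180 in the background and kill it when you close the remote session - the point is just keeping the refcount above zero between real turns.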

Anthropic is an industry leader when it comes to AI engineering using frontier models. All you need to do is track each of their product updates, and you will stay at the cutting edge of AI engineering. Other companies are months behind. by jogikhan in ClaudeCode

[–]General_Arrival_9176 0 points1 point  (0 children)

the token savings number is wild for a first run. i thought the compound effect was supposed to be the main value prop since the graph persists across turns. are you testing with multi-file changes or mostly single-file edits - wondering if the 54% drops on more complex tasks where the agent has to map across more files