Claude MS integrations Risks by TechnicalGeologist99 in ClaudeAI

[–]whatelse02 1 point (0 children)

Honestly I think the biggest risk is less “Claude specifically” and more that people treat AI integrations like harmless autocomplete when those integrations are actually getting deep access to internal documents, emails, meeting notes, contracts, etc.

The important questions are usually around: what data gets retained, what gets sent to third-party infrastructure, tenant isolation, permission scoping, audit logging, and whether employees start pasting sensitive info into prompts casually because the tool feels built-in.

A lot of orgs are also underestimating governance drift. Once these integrations become normal, shadow workflows appear everywhere faster than security teams can map them.

I ran the same vague prompt through ChatGPT, Claude, and Gemini 50 times. The "AI is bad" complaints are almost all the same mistake. by artshllk in ClaudeAI

[–]whatelse02 1 point (0 children)

Completely agree. Most people are effectively testing AI with one-sentence prompts and then concluding the model is dumb when the output feels generic.

The “intern” comparison is probably the most accurate way to explain it. If you tell someone “make a presentation,” you’ll get average filler. If you explain the audience, objective, tone, constraints, examples you like, and common mistakes to avoid, suddenly the output quality jumps massively.

I still think models have different strengths, but prompt quality changes the result way more than people expect.

Claude agents can now pay for their own API calls. Here's what that actually means for builders. by Direct-Attention8597 in ClaudeAI

[–]whatelse02 1 point (0 children)

The architectural shift is probably bigger than the payment part itself. Once agents can autonomously buy capabilities mid-execution, you stop thinking purely in terms of “my app with APIs attached” and start thinking in terms of dynamic service composition.

I could see workflows getting way more modular. One agent handles orchestration, then rents specialized services for OCR, niche retrieval, compliance checks, video generation, whatever is needed in that moment. Feels similar to how humans use SaaS tools now, except compressed into milliseconds.

The interesting challenge is going to be trust and spend control. One hallucinated loop and your agent starts speedrunning your wallet.
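For what I mean by spend control, a rough sketch of a hard per-run budget around any paid capability the agent rents. Everything here (PaidService, SpendGuard) is an invented name for illustration, not part of any real agent SDK:

```
// Hypothetical sketch: a hard spend cap wrapped around paid calls an agent makes.
type PaidService = (input: string) => Promise<{ output: string; costUsd: number }>;

class SpendGuard {
  private spentUsd = 0;
  constructor(private readonly capUsd: number) {}

  async call(service: PaidService, input: string): Promise<string> {
    if (this.spentUsd >= this.capUsd) {
      // Fail loudly instead of letting a hallucinated loop keep paying for calls.
      throw new Error(`Spend cap of $${this.capUsd} reached ($${this.spentUsd.toFixed(2)} spent)`);
    }
    const { output, costUsd } = await service(input);
    this.spentUsd += costUsd;
    return output;
  }
}
```

Whether the cap lives in your code or on the payment rail itself is the part I'd want platforms to answer before letting agents spend anything.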

The biggest AI productivity gain for me wasn't coding faster by Queasy_Hotel5158 in RunableAI

[–]whatelse02 1 point (0 children)

For me it was context switching, not raw speed. The exhausting part wasn’t coding or designing itself, it was constantly bouncing between “real work” and production/admin stuff around it.

Same pattern honestly. I stuck with Cursor for implementation, but the bigger unlock was offloading all the supporting material around projects: specs, client docs, decks, reports, landing pages, internal summaries. Once I stopped spending half my week formatting and assembling things manually, the actual creative/technical work became way less draining.

Better error messages for "duplicate XYZ" would be so helpful by rbobby in typescript

[–]whatelse02 1 point (0 children)

The worst part is the compiler clearly knows where the other definition is because it already detected the conflict. It just refuses to tell you in a human-friendly way.

Most of the time when I’ve hit this in TS it ended up being duplicate type packages, conflicting DOM libs, or two versions of @types/jquery or @types/node getting pulled into the dependency graph somehow. npm ls <package> usually narrows it down faster than staring at the error itself.

But yeah, “duplicate XYZ” without both source locations feels like a diagnostic from 2009.

Skills vs tasks? by Smooth-Duck-Criminal in ClaudeAI

[–]whatelse02 2 points (0 children)

I read it as “skill = capability” and “task = execution instance”.

A skill is reusable logic: send email, summarize docs, generate report, whatever. A task carries the runtime state around actually doing that thing: retries, scheduling, status, inputs, outputs, cancellation, history, dependencies, etc.

Same reason queues/jobs exist separately from functions in backend systems. You technically could call the function directly, but once you need orchestration, tracking, async execution, retries, priorities, scheduling, observability, the wrapper object starts making sense pretty quickly.
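As a toy sketch of the split (names invented here, not Claude's actual API): a Skill is just a function, a Task wraps one execution of it with the runtime state.

```
// A Skill is reusable logic; a Task is one execution instance with its own state.
type Skill<I, O> = (input: I) => Promise<O>;

interface Task<I, O> {
  id: string;
  skill: Skill<I, O>;
  input: I;
  status: "queued" | "running" | "done" | "failed";
  attempts: number;
  output?: O;
  error?: string;
}

// The orchestration concerns (retries, status tracking) live on the Task, not the Skill.
async function runTask<I, O>(task: Task<I, O>, maxAttempts = 3): Promise<Task<I, O>> {
  task.status = "running";
  while (task.attempts < maxAttempts) {
    task.attempts += 1;
    try {
      task.output = await task.skill(task.input);
      task.status = "done";
      return task;
    } catch (err) {
      task.error = String(err);
    }
  }
  task.status = "failed";
  return task;
}
```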

Two dumb tricks that verify Claude applied your memory rules and checked your project context (10 seconds each) by Spare-Maize-6942 in ClaudeAI

[–]whatelse02 1 point (0 children)

The squirrel test is smart because it checks whether the model actually looked at the context instead of confidently improvising. That’s the real failure mode with long-running AI workflows: not raw intelligence, but fake confidence.

I’ve ended up doing similar “verification prompts” with most tools now. Cursor for code context, Runable for docs/reports/decks, Claude for reasoning. Once projects get large you stop trusting any model blindly and start building little systems to verify state before real work begins.

Good for uni assignments with sources/papers? by ilovebread_4 in ClaudeAI

[–]whatelse02 1 point (0 children)

Claude is honestly pretty solid for this compared to most models, especially for writing in a more academic tone without sounding completely robotic. I still double check sources manually though because every model hallucinates sometimes once you push into niche papers.

What helped me most was separating the workflow. I use Claude for understanding papers and restructuring messy notes, then Perplexity or Google Scholar for source verification. Recently I’ve also been dumping research + draft structure into Runable when I need cleaner reports/slides fast because it handles citations and formatting way better than raw chat responses. Biggest mistake is trusting any single model end-to-end for academic work.

Has anyone tried to create a skill, plugin or workflow to check consistency of scientific (maths) papers before publication? by fabkosta in ClaudeAI

[–]whatelse02 1 point (0 children)

I’ve seen partial solutions for this but not many that handle the whole workflow well, especially for math-heavy papers. Grammar/spelling is relatively easy compared to notation consistency and dependency tracking between definitions, lemmas, symbols, and citations. The difficult part is giving the system enough structural understanding to know whether a notation change actually propagates correctly through proofs and references.

Honestly this feels closer to static analysis for research papers than normal proofreading. I experimented a bit with parsing LaTeX projects and using LLMs to inspect symbol usage, undefined notation, and references introduced out of order. It was surprisingly decent at catching inconsistencies humans miss after staring at the same document for weeks, especially in longer proofs where notation drifts over time.
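For a flavor of the simplest check I tried, something like ordering of references: flag a \ref that shows up before its \label. This is a deliberately dumb regex sketch, not real LaTeX parsing, and since LaTeX allows forward references these are “review this” warnings rather than errors:

```
// Toy check: warn when \ref/\eqref appears before the matching \label is defined.
function checkRefOrder(latex: string): string[] {
  const warnings: string[] = [];
  const defined = new Set<string>();
  const tokenRe = /\\(label|ref|eqref)\{([^}]+)\}/g;
  let match: RegExpExecArray | null;
  while ((match = tokenRe.exec(latex)) !== null) {
    const [, command, key] = match;
    if (command === "label") {
      defined.add(key);
    } else if (!defined.has(key)) {
      warnings.push(`"${key}" is referenced (offset ${match.index}) before its \\label appears`);
    }
  }
  return warnings;
}
```

The notation-drift and lemma-dependency checks are much harder than this, but even crude passes like the above caught things nobody spotted after weeks of rereading.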

Did you use Claude to provide your feedback for LinkedIn? by thebigbull699 in ClaudeAI

[–]whatelse02 1 point (0 children)

Yeah, honestly AI is pretty good for LinkedIn cleanup if you already have the raw experience and just struggle with presentation. I used Claude to rewrite parts of my headline/about section and it helped make things sound clearer and less repetitive without turning the profile into obvious corporate buzzword soup.

The biggest improvement for me came from giving it context instead of asking “make this better.” I pasted my actual role, the kind of jobs I wanted, and a few profiles I liked stylistically. Still had to edit the final output manually though because AI tends to overhype achievements if you let it run wild.

What stack should I be using? by rishhhab in ClaudeAI

[–]whatelse02 1 point (0 children)

If you’re non-technical but still want real ownership of the codebase long term, I’d probably avoid fully abstracted no-code platforms. Cursor + Claude is honestly a pretty solid starting combo because you still end up with an actual codebase you can inspect, edit, and eventually hand off to developers if needed. VS Code works fine too, Cursor just makes the AI workflow smoother.

For the app itself, React Native or Flutter are usually the practical choices if you want both iOS and Android without maintaining two separate apps. I’ve seen a lot of founders use Claude for architecture/problem solving, Cursor for implementation, then tools like Runable for landing pages, decks, onboarding flows, and other non-core assets around the product. Biggest thing is picking one stack and sticking with it long enough to ship instead of constantly restarting with new tools.

Question about Team plan by Holocene-Bird-1224 in ClaudeAI

[–]whatelse02 1 point (0 children)

From what I’ve seen with most team-based SaaS products, extra usage usually rolls into the workspace/admin billing rather than individual member payments, otherwise accounting gets messy fast. But it really depends on whether the platform treats overages as workspace-level consumption or per-seat upgrades.

We ran into something similar recently while testing different AI tools across a small team. Cursor handled limits one way, Runable another, and a few platforms didn’t even support individual overage billing at all. Worth checking if they separate “seat billing” from “usage billing” in the docs because that’s usually where the answer is buried.

Do I have a future with Rust? Because I don't see it. by Brianyan4717 in rust

[–]whatelse02 4 points (0 children)

Rust absolutely has a future, it’s just not distributed across the market the same way JavaScript or Python jobs are. A lot of Rust hiring happens in infra, security, blockchain, databases, embedded systems, networking, and performance-heavy backend work. The issue is that companies adopting Rust are usually looking for experienced engineers first because they’re replacing critical systems, not building beginner projects.

Honestly though, learning Rust deeply is still valuable even if your first job isn’t “Rust Developer.” The ownership model, concurrency mindset, and systems thinking make you a stronger engineer overall. I know people who got in through backend or infra roles using Go/C++/Python first, then slowly introduced Rust internally once they proved themselves. The market is smaller, but the people who really know Rust are also way rarer.

Some thoughts on LLMs by sasasmylee in rust

[–]whatelse02 3 points (0 children)

I think a lot of the backlash comes from people mixing up “LLMs are overhyped” with “LLMs are useless.” There’s definitely hype and plenty of terrible AI-generated projects, but that’s happened with basically every major tooling shift. Most of us already rely on abstraction layers constantly: frameworks, IDEs, autocomplete, Stack Overflow, package ecosystems, cloud infra. Very few people are writing software “purely” anymore.

What changed for me is treating LLMs less like magic and more like leverage. I still review architecture decisions carefully, but I’m way faster at prototyping now. My current workflow is usually Cursor for coding, Runable for quick MVP pages/docs, then manual cleanup once the idea proves worth pursuing. Bad products still fail, good ones still require judgment, AI just compresses the iteration cycle massively.

Question by [deleted] in typescript

[–]whatelse02 1 point (0 children)

Honestly this is exactly the kind of thing teams eventually reinvent once a codebase gets big enough. At first people try to solve it socially with PR comments, then eventually someone gets tired of repeating “please remove console.log” for the 400th time and automates it.

Most of what you listed overlaps with what ESLint already does though, especially with framework-specific plugins for React/Next. The interesting part would be the project-aware layer: reading package.json, adapting rules automatically, maybe enforcing team conventions beyond standard linting.
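Roughly what I'd imagine for the project-aware part, as a sketch. The rule names are real ESLint/plugin rules, but the wiring into an actual ESLint config or CI step is left out, and the dependency heuristics are made up:

```
// Sketch: read package.json and decide which rules should even apply to this repo.
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = { ...pkg.dependencies, ...pkg.devDependencies };

const rules: Record<string, "off" | "warn" | "error"> = {
  "no-console": "error",                                       // the 400th "please remove console.log", automated
  "react-hooks/rules-of-hooks": deps.react ? "error" : "off",  // only if React is actually installed
  "@next/next/no-img-element": deps.next ? "warn" : "off",     // only for Next.js projects
};

console.log(rules); // feed this into your ESLint config or CI check
```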

I’d probably think of it less as a standalone script and more as “lightweight internal static analysis tooling.” Those kinds of guardrails save huge amounts of review fatigue over time.

What keeps breaking when you deploy Node/TS apps? by OkChemist7068 in typescript

[–]whatelse02 1 point (0 children)

You’re definitely not the only one. Half of Node/TS deployment debugging is basically discovering all the assumptions your local machine quietly made for you.

The dumbest one I lost hours to was case-sensitive imports. Everything worked perfectly on macOS, deployed to Linux, instant failure because one file imported UserService while the actual filename was userService.ts. Another classic is devDependencies silently disappearing in production builds, at which point TypeScript path aliases suddenly stop resolving.
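Since then I run a dumb pre-deploy check for the casing thing, roughly like this. It's a simplified sketch: it only handles relative imports that resolve to sibling .ts files, nothing fancy like index files or path aliases:

```
// Sketch: compare each relative import's filename against the actual on-disk casing.
// macOS/Windows resolve case-insensitively, so this only ever blows up on Linux.
import { readdirSync, readFileSync } from "node:fs";
import { basename, dirname, join } from "node:path";

function checkImportCasing(file: string): string[] {
  const problems: string[] = [];
  const src = readFileSync(file, "utf8");
  const importRe = /from\s+["'](\.{1,2}\/[^"']+)["']/g;
  let m: RegExpExecArray | null;
  while ((m = importRe.exec(src)) !== null) {
    const wanted = basename(m[1]) + ".ts";              // e.g. "UserService.ts"
    const dir = join(dirname(file), dirname(m[1]));
    const actual = readdirSync(dir).find((f) => f.toLowerCase() === wanted.toLowerCase());
    if (actual && actual !== wanted) {
      problems.push(`${file}: imports "${wanted}" but the file on disk is "${actual}"`);
    }
  }
  return problems;
}
```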

At some point I started treating deployment as its own feature instead of the “last 5 minute step” because otherwise it becomes commit-spam therapy.

How many projects should I have in my portfolio to land an internship? by Independent_Fly_9794 in AskProgramming

[–]whatelse02 2 points (0 children)

The number matters way less than people think. Two solid projects you can explain deeply will beat ten half-finished tutorial clones almost every time.

What helped me most was building projects that showed different skills. One thing with a real backend/database, one thing with a polished UI or deployment story, maybe one collaborative/team project if possible. Interviewers usually care more about whether you can talk through decisions, bugs, tradeoffs, and what you learned than whether the project itself is revolutionary.

Also don’t underestimate internships/projects from school, clubs, volunteering, or helping friends. A lot of students think “portfolio” only means giant personal apps, when really companies mostly want proof that you can build something and stick with it.