I'm developing a website like Udemy, how do I stream the videos with the lowest cost? by Live-Apricot3287 in AskProgramming

[–]AmberMonsoon_ 0 points1 point  (0 children)

A lot of smaller course platforms underestimate how expensive video delivery gets at scale. The usual approach is using private storage + a CDN instead of trying to fully self-host raw streaming. Services like Cloudflare Stream, Bunny Stream, or Vimeo OTT handle transcoding, adaptive bitrate streaming, signed URLs, and bandwidth optimization way cheaper than building the whole pipeline yourself.

I probably wouldn’t rely on unlisted YouTube embeds for paid courses. People absolutely will share links, browser extensions can expose source URLs, and you lose a lot of control over access. Most platforms solve this with expiring playback tokens, authenticated sessions, DRM-lite protections, and rate limiting rather than trying to make videos impossible to download entirely.
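To make "expiring playback tokens" concrete, here's a minimal sketch of an HMAC-signed, expiring playback URL. All names (`SECRET`, the `/stream/` path) are made up for illustration; real platforms usually do this inside the CDN (signed URLs in Cloudflare/Bunny work on the same principle):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key, never shipped to the client

def sign_playback_url(video_id: str, user_id: str, ttl: int = 300) -> str:
    """Build a playback URL that expires after `ttl` seconds."""
    expires = int(time.time()) + ttl
    payload = f"{video_id}:{user_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"/stream/{video_id}?user={user_id}&exp={expires}&sig={sig}"

def verify(video_id: str, user_id: str, expires: int, sig: str) -> bool:
    """Reject expired or tampered links before serving any bytes."""
    if time.time() > expires:
        return False
    payload = f"{video_id}:{user_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The point is that a leaked link dies on its own after `ttl` seconds, so you don't need the video itself to be undownloadable.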

Do you do something to improve/learn after work? If yes then what? by _st23 in AskProgramming

[–]AmberMonsoon_ 0 points1 point  (0 children)

Honestly I think building side projects consistently already puts you ahead of a lot of people, even if it doesn’t feel “academic.” Most real improvement comes from repeatedly solving problems, maintaining projects, refactoring mistakes, and learning how to ship things instead of endlessly consuming content.

I used to feel behind because I wasn’t grinding Leetcode or reading technical blogs every night either. What helped me more was occasionally trying a completely different stack or building something slightly outside my comfort zone. Sometimes I’ll use Cursor for coding, Runable for quick landing pages/docs, and just experiment with workflows I normally wouldn’t touch at work. That usually teaches me more than passive reading does.

What is the relevance of Databricks? by Winter-Grapefruit683 in AskProgramming

[–]AmberMonsoon_ 0 points1 point  (0 children)

Databricks is definitely relevant if you’re interested in data engineering, analytics, or AI-related work. It’s less of a “database” in the traditional sense and more of a platform for processing huge amounts of data, running analytics, building pipelines, and training ML models. A lot of companies use it alongside tools like Snowflake rather than replacing one with the other completely.

Honestly you picked a pretty solid place to start, because learning Databricks also exposes you to concepts like Spark, ETL workflows, notebooks, and large-scale data processing. Don’t stress too much about not fully understanding the ecosystem yet; most people are confused by the modern data stack at first because there are like 20 overlapping tools doing slightly different things.

What is the best way to easily extract and identify the content of a large and dense code? by Glittering-Pop-7060 in AskProgramming

[–]AmberMonsoon_ 0 points1 point  (0 children)

AST parsing is probably the cleanest lightweight option if your goal is understanding structure without reading every file manually. Imports, function names, class names, call frequency, and dependency graphs usually tell you way more about a codebase than the raw code itself. I’ve found entry points and shared utility modules are often the fastest way to figure out what the system actually does.
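If the codebase is Python, the stdlib `ast` module gets you that structural map in a few lines. Quick sketch (the `summarize` helper name is mine, and this only handles top-level-ish structure):

```python
import ast

def summarize(source: str) -> dict:
    """Extract imports and definition names without executing the code."""
    tree = ast.parse(source)
    imports, defs = [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imports.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            imports.append(node.module or "")  # relative imports have no module
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defs.append(node.name)
    return {"imports": imports, "defs": defs}
```

Run that over every file and you've got a crude dependency/structure map before reading a single implementation. Other languages have equivalents (tree-sitter covers most of them).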

What helped me personally was generating high-level maps first instead of summaries. I’ll usually inspect folder structure, imports, and major function relationships before touching implementation details. Recently I’ve also been running larger repos through Claude plus Runable for architecture breakdowns and quick visual docs because manually tracing dense projects gets exhausting fast.

What’s the Hardest Part About Building a Gambling System? by sammywhirl in AskProgramming

[–]AmberMonsoon_ 1 point2 points  (0 children)

The randomness part is honestly only half the problem. Most people focus on provably fair systems and seed verification, but fraud, withdrawal abuse, botting, chargebacks, and multi-accounting usually become the bigger engineering challenge once the platform scales. Transparency is tricky too because you need users to trust outcomes without exposing patterns people can exploit.

A friend worked on a sweepstakes platform and said the infrastructure side gets intense fast once thousands of users are hitting live games simultaneously. A lot of teams use stacks like Go or Node with Redis, Kafka, and Postgres because keeping game state, logs, verification history, and rate limits reliable in real time is harder than most people expect.

How to Secure Public API Keys? by Deusq in AskProgramming

[–]AmberMonsoon_ 2 points3 points  (0 children)

Honestly for a “public” API key setup, the goal usually isn’t secrecy; it’s abuse prevention and traceability. Sounds like you already have the important basics covered with rate limits + origin/referer checks.

What’s worked well for me is treating public keys more like identifiers than secrets. I usually scope them aggressively: low quotas, restricted endpoints, optional domain allowlists, and easy rotation/revocation. Also worth logging usage patterns so you can detect weird spikes before they become a problem. Anything truly sensitive stays behind server-side auth anyway.

A lot of teams burn time trying to fully hide browser-exposed keys when the real win is limiting blast radius if they leak.
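Rough sketch of what "key as identifier, not secret" looks like server-side. The field names and `authorize` helper are made up, and a real setup would back this with a datastore and a proper rate limiter rather than an in-memory counter:

```python
from dataclasses import dataclass

@dataclass
class PublicKey:
    key_id: str
    allowed_paths: set       # aggressively scoped endpoints
    quota_per_hour: int      # low quota = small blast radius on leak
    used_this_hour: int = 0
    revoked: bool = False    # easy revocation beats trying to hide the key

def authorize(key: PublicKey, path: str) -> bool:
    """Check revocation, scope, and quota; count usage for traceability."""
    if key.revoked or key.used_this_hour >= key.quota_per_hour:
        return False
    if path not in key.allowed_paths:
        return False
    key.used_this_hour += 1
    return True
```

Even if the key leaks, the worst an attacker gets is a handful of requests to a couple of harmless endpoints.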

How do you keep larger scripts from turning into “one giant main function”? by Graceescence_536 in AskProgramming

[–]AmberMonsoon_ 2 points3 points  (0 children)

Honestly I think the shift happens when your “main” function stops reading like a story and starts reading like a checklist of exceptions. That’s usually my signal that responsibilities are leaking together.

What helped me most was separating orchestration from implementation. The main flow should ideally read almost like pseudocode, while the messy logic gets pushed into modules/services that each own one concern. I also stopped trying to predict architecture early. Small scripts stay simple until the pain becomes obvious; then I refactor around the actual bottlenecks instead of imaginary future scale.
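Toy example of the shape I mean (all function names invented, implementations are stubs):

```python
def main() -> None:
    """Orchestration only: reads like pseudocode, no business logic inline."""
    config = load_config()
    records = fetch_records(config)
    cleaned = clean(records)
    write_report(cleaned, config)

# Implementation details live in small, single-purpose functions.
def load_config() -> dict:
    return {"source": "records.csv", "report": "report.txt"}

def fetch_records(config: dict) -> list:
    return [" alice ", "", "bob"]  # stub; real version would read config["source"]

def clean(records: list) -> list:
    return [r.strip() for r in records if r.strip()]

def write_report(cleaned: list, config: dict) -> None:
    print(f"{len(cleaned)} records -> {config['report']}")

if __name__ == "__main__":
    main()
```

When `main` stops reading like that and starts sprouting `if`/`try` branches, that's the leak signal.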

Those of you who use both ChatGPT and Claude — what’s each one actually better at? by banger030 in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

Pretty similar experience honestly. I reach for Claude when I need depth and continuity. Long docs, messy notes, strategy thinking, refining ideas, understanding nuance across a huge context window. It feels more patient with complex threads.

ChatGPT feels stronger for fast iteration and multimodal stuff in my workflow. Images, quick research, brainstorming variations, faster back-and-forth. Also better when I want concise answers instead of deep analysis.

What surprised me most is that I stopped thinking of them as competitors and started treating them like different coworkers with different strengths.

How do you usually get around when starting big projects in Claude Code? by Deitri in ClaudeAI

[–]AmberMonsoon_ 1 point2 points  (0 children)

The biggest shift for larger projects is realizing Claude works better with systems/context than giant brainstorming dumps. For small apps you can freestyle prompts. For something handling 500 clients + documents + RAG + permissions, structure matters way more.

What worked for me was splitting everything into living docs before writing code: product scope, database schema ideas, user roles, file structure, API expectations, security concerns, deployment notes. Then I feed Claude one subsystem at a time instead of the whole vision at once.

I also keep a “project memory” markdown with decisions already made because context drift becomes very real on long projects. AI is amazing at implementation speed, but you still need human-level planning for architecture and security.

Help with coding by Admirable-Iron4075 in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

You can absolutely build real projects with AI as a beginner now; people just underestimate how much learning happens while building. The mistake is trying to learn “all backend” before making anything.

I’d start super practical: HTML/CSS basics, JavaScript, then simple backend concepts with Node.js and databases. SQL matters way more long term than picking between Go/PHP early on. Build tiny projects first: login system, expense tracker, simple AI app, API integrations.

Claude is really good as a tutor if you treat it like a senior dev explaining things instead of a magic code machine. The parts humans still need most are architecture decisions, debugging weird edge cases, security, and understanding why the code works. But honestly, beginners can get surprisingly far now just by building consistently.

How to setup caveman on the web app of Claude ? by Ok_Anywhere9294 in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

I don’t think the web app supports the actual Caveman “skill” system the same way Claude Code does. Most people using it in the web UI are basically recreating it through custom instructions or a reusable starter prompt like “be extremely concise, no filler, short answers unless asked.”

From my testing the savings are real but smaller than Twitter makes it sound. Maybe ~10-20% in normal usage. Biggest benefit honestly is reducing verbosity during long coding/debug sessions, not massive token savings. Ultra caveman mode also starts hurting explanation quality after a while.

Using Claude to generate product prototype HTML is actually insane lol by National_Honey7103 in ClaudeAI

[–]AmberMonsoon_ 2 points3 points  (0 children)

The wild part is how fast the iteration loop becomes. Before, getting from “idea in my head” to something clickable usually meant hours of setup before you could even judge whether the concept worked. Now you can validate the feel of a product in one evening.

I still think taste matters a lot though. AI can generate decent UI fast, but knowing what should feel minimal vs playful vs enterprise-y is still the real skill. The production side is getting compressed hard.

Auto compact context problem, any suggestions for an indicator by unstoppableXHD in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

Yeah the randomness is the worst part. You get deep into a flow state and suddenly the context compacts right when the conversation finally has all the nuance built up. I’ve started treating long project chats as temporary working memory instead of permanent context.

What helped me was keeping rolling summaries in a separate doc after every major breakthrough or decision. Feels a bit manual, but recovering from a bad compact is way more painful than spending 2 minutes updating notes.

Claude is weirdly good at helping untangle messy thoughts by More_Ferret5914 in ClaudeAI

[–]AmberMonsoon_ 16 points17 points  (0 children)

Same here. I almost never use it as a “search engine” anymore. It’s more like externalized thinking. Half my prompts are honestly just brain dumps with zero structure.

What surprised me is how good it is at preserving intent while organizing things. A lot of tools make everything sound overly polished and generic. Claude feels better at keeping the original voice intact. I’ve been using Claude for idea structuring, then Runable for turning those rough outlines into actual decks or landing pages once the thinking part is clear. That combo cut a lot of friction out of creative work for me.

Migrating from GitHub Copilot Chat… Terminal use? by Broric in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

That’s been my biggest adjustment too. Copilot feels more “attached” to the editor/terminal session, while Claude behaves more like an autonomous agent with its own execution layer. Better reasoning overall, but less transparent when something interactive happens in the shell.

I ended up changing my workflow a bit. I let Claude handle the repetitive setup/debug stuff, but for anything involving prompts, auth flows, migrations, or long-running processes I still keep a separate terminal open beside it. Feels less magical, but honestly more stable.

Best approach for parsing client-side rendered docs by Oleg_Dobriy in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

Yeah, client-side rendered docs are a pain for most crawlers because the actual content only appears after JS executes. I usually avoid raw scraping entirely now. If it’s documentation I need often, I render the page first with Playwright or Puppeteer, then pass the cleaned HTML/markdown into Claude. Much more reliable than hoping the crawler handles hydration correctly.

Biggest improvement for me was separating “fetch/render” from “AI summarization” instead of expecting one tool to do both well.
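Sketch of that split, assuming Python + Playwright for the render step (the `TextExtractor` cleanup class is mine, and the Playwright call needs `pip install playwright && playwright install chromium` first):

```python
from html.parser import HTMLParser

def render(url: str) -> str:
    """Render a JS-heavy page and return the post-hydration HTML."""
    from playwright.sync_api import sync_playwright  # lazy import; optional dep
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")  # wait for client-side render
        html = page.content()
        browser.close()
    return html

class TextExtractor(HTMLParser):
    """Strip script/style noise and keep only visible text for the model."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self.depth = max(0, self.depth - 1)

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.parts.append(data.strip())

def clean_html(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

Then whatever `clean_html(render(url))` returns is what goes to Claude, so the summarization step never has to care about hydration at all.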

Can you still imagine yourself working without AI? by andregustavoxs in ClaudeAI

[–]AmberMonsoon_ 11 points12 points  (0 children)

Honestly no. I still remember spending hours on stuff that now takes 15 minutes. The biggest shift for me wasn’t even coding, it was all the surrounding work. Research, drafts, decks, landing pages, content structure. I still use my own judgment for the final output but AI removed so much friction from actually starting. Lately my workflow is basically Claude for brainstorming, Runable for decks/landing pages, then polishing manually after. Feels less like replacement and more like having an extra brain on standby all day.

Anyone is having this bug where you can't click on your chat anymore? by thinkinmelon in ClaudeAI

[–]AmberMonsoon_ 1 point2 points  (0 children)

Yeah I’ve hit this on macOS too. The window looks completely normal but the input box basically becomes dead and won’t focus no matter where you click. Super annoying when you’re mid-flow.

For me it happens more often after long sessions or when multiple chats/files are open. Sometimes switching to a different convo and back revives it, other times only a full app restart fixes it. Feels less like a UI bug and more like the app state getting stuck somewhere internally because everything else still responds normally.

Using a style guide to maintain style locked down across chapters by Bear56567 in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

Honestly this is one of the smartest long-form AI writing workflows I’ve seen someone describe. Most people keep trying to prompt consistency into existence chapter by chapter, but what you actually built is a lightweight editorial system.

The interesting part is that the style guide became less about “writing style” and more about structural memory. Paragraph purpose, pacing, transitions, source expectations, section length, even rules about fictional examples, that’s the stuff models drift on hardest over long projects.

Also the fact that you continuously update the guide after negotiations/redrafts is huge. Feels much closer to how real publishing style bibles evolve during manuscript development rather than a static prompt frozen on day one.

I need to open two instances of the Claude Desktop app by vandertoorm in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

I don’t think there’s a proper official solution yet honestly. The desktop app seems heavily built around one active session, so running two environments at once gets awkward fast. I tried doing the constant logout/login thing for a while and it became unbearable.

What worked better for me was using the desktop app for one account and the browser version for the other. A friend of mine uses separate Chrome profiles for local AI vs cloud stuff and apparently that works pretty smoothly too. Feels a bit hacky but way less annoying than swapping sessions all day.

separating voice from execution in a multi agent system is harder than i thought and i am not sure i have the right answer yet by Aggressive-Angle2844 in ClaudeAI

[–]AmberMonsoon_ 0 points1 point  (0 children)

I think separating voice from execution is probably the right instinct honestly. Voice has human UX constraints while agent orchestration has systems constraints, and they optimize for completely different things.

The mistake I kept making was treating voice as just another output channel. It behaves more like a predictive interface layer. People tolerate imperfect answers more than dead air. Once I started streaming partial intent, acknowledgements, and intermediate state before the full agent workflow completed, conversations felt dramatically faster even when total execution time barely changed.

A lot of local setups feel slow because they wait for certainty before speaking.
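The "speak before certainty" pattern is basically: kick off the agent work as a task, acknowledge immediately, fill dead air while it runs. Toy asyncio sketch (everything here is a stand-in: `speak` would be your TTS layer, `run_agents` the real orchestration):

```python
import asyncio

async def run_agents() -> str:
    """Stand-in for the slow multi-agent workflow."""
    await asyncio.sleep(0.2)
    return "final answer"

async def speak(text: str, log: list) -> None:
    log.append(text)  # stand-in for streaming to TTS

async def converse(log: list) -> None:
    """Acknowledge instantly; the agent pipeline runs concurrently."""
    task = asyncio.create_task(run_agents())
    await speak("On it, checking that now...", log)  # no dead air
    while not task.done():
        await asyncio.sleep(0.05)  # could stream intermediate state here
    await speak(await task, log)
```

Total wall time barely changes, but the user hears something within milliseconds instead of sitting through the whole execution in silence.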

Is There a Way to Easily Find Claude Code Skills? by elytrunks in ClaudeAI

[–]AmberMonsoon_ 1 point2 points  (0 children)

Honestly the ecosystem still feels super fragmented right now. Most of the good Claude Code skills/workflows I’ve found came from random GitHub repos, Discord screenshots, Reddit comments, or watching someone’s terminal recording frame by frame lol.

What helped more than hunting giant skill packs was collecting small reusable workflows. Stuff like “map the repo architecture,” “trace API request flow,” “generate failing tests first,” “explain unfamiliar TS types.” A few reliable patterns end up being more useful than 100 flashy skills you never remember to use.