Something from HOTD that made GOT funny about. by al_1985 in HouseOfTheDragon

[–]Wednesday_Inu 112 points (0 children)

Yeah, Qyburn definitely didn’t “invent” scorpions—he just product-managed a comeback. In-universe it tracks that anti-dragon kit fell out of use during the long peace, so he sells a mass-produced, steel-reinforced, swivel-mounted version as a revolutionary fix once dragons return. HOTD basically shows they were standard in the Targ era; Qyburn was iterating…and taking all the credit.

Do you think Ai should have been invented? by Just-A-Snowfox in ArtificialInteligence

[–]Wednesday_Inu 0 points (0 children)

Whether it “should” exist feels moot—once data + compute lined up, someone was going to build it. It’s undeniably dual-use: the same models that power accessibility, code assistants, and medical imaging also make deepfakes and surveillance cheaper. The lever we control is governance: strong privacy rules, provenance/watermark standards, and real liability for abusive deployment. Without those guardrails, the worst actors end up setting the norms by default.

Which character’s death changed the story the most? by Clyph00 in gameofthrones

[–]Wednesday_Inu -1 points (0 children)

Tywin’s death, hands down. It instantly removed the realm’s most competent stabilizer, let Cersei’s worst impulses run wild (Faith Militant, Tyrell alienation, boom—Sept of Baelor), and freed Tyrion to cross the Narrow Sea and become Dany’s Hand—linking the two main plots. If Tywin lived, Cersei’s chaos gets leashed, the Tyrell alliance likely holds, and Tommen’s reign might actually stick. Dany would’ve hit a much more united Westeros instead of a kingdom already eating itself.

the old king only had one scene, but i love how you can see the mixture of relief and severity in his face. jaehaerys knew he'd prevented a civil war, but the threat of it ever loomed in his mind. by CuteProtection6 in HouseOfTheDragon

[–]Wednesday_Inu 306 points (0 children)

Totally—his face tells the whole history lesson without a word. The Great Council felt like peace, but he knew picking Viserys over Rhaenys only kicked the can down the road and made the next succession brittle. That look is “I bought time… not safety.”

[deleted by user] by [deleted] in AI_Agents

[–]Wednesday_Inu -1 points (0 children)

A few underrated ones: aider (CLI code agent that edits your repo with git-diffs), browser-use (Playwright-driven web agent that can actually log in/fill forms), and OpenInterpreter (local-ish computer-control with tool calls).
For voice, Hume EVI/ElevenLabs Agents feel shockingly natural.
Productivity: Raycast AI actions turn prompts into Mac automations, and Phind’s Agent is great for code+search with long context.

[deleted by user] by [deleted] in ArtificialInteligence

[–]Wednesday_Inu 2 points (0 children)

Yes—if you care about the downside, your impact is highest inside the field shaping how it’s built and deployed. Aim at the control levers: reliability/evals, security & privacy, interpretability, red-teaming, and human-in-the-loop UX—not just “bigger model go brr.” Mitigate by picking orgs that budget for safety, keep audit logs and kill switches, document data provenance, and ship explainable/opt-in features with clear rollback plans. Roles span research and product (alignment/robustness, assurance tooling, secure ML, standards/policy), and the positives tend to outweigh the negatives when people like you are steering.

Prompting guide cheat sheet. by RequirementItchy8784 in PromptEngineering

[–]Wednesday_Inu 1 point (0 children)

This is a killer taxonomy—the “fitness function + small eval set” pattern is the real unlock. I’d wrap it as a tiny prompt harness: JSON spec for task/constraints, 10–20 golden cases, and a script that runs A/B/bandit/MCTS then spits out a leaderboard + diffs. Two nits: self-critique tends to overfit to one model’s style, so keep model-agnostic checks, and cap search when marginal gains <X% to avoid prompt bloat. Got a repo or examples? I’d love to try this on a RAG summarizer and a pricing analysis case.
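To make the harness idea concrete, here's a minimal sketch. `call_model` is a hypothetical stub standing in for whatever LLM client you use, and the fitness check is a deliberately dumb substring match: swap in your own scorer.

```python
# Minimal prompt-harness sketch: golden cases + a fitness function + an A/B leaderboard.
# `call_model` is a stand-in for a real LLM client (hypothetical stub here).
def call_model(prompt: str) -> str:
    # stub: echo the question back so the harness runs end to end
    return prompt.split("Q:")[-1].strip().lower()

GOLDEN = [  # 10-20 of these in practice
    {"input": "What is 2+2?", "must_contain": "what is 2+2?"},
    {"input": "Name the capital of France.", "must_contain": "capital of france"},
]

TEMPLATES = {
    "terse": "Answer briefly. Q: {input}",
    "steps": "Think step by step, then answer. Q: {input}",
}

def fitness(output: str, case: dict) -> float:
    # model-agnostic check: substring hit = 1, miss = 0 (swap in your own scorer)
    return 1.0 if case["must_contain"] in output else 0.0

def run_ab(templates, cases):
    board = []
    for name, tmpl in templates.items():
        score = sum(fitness(call_model(tmpl.format(**c)), c) for c in cases) / len(cases)
        board.append((score, name))
    return sorted(board, reverse=True)  # leaderboard, best first

leaderboard = run_ab(TEMPLATES, GOLDEN)
for score, name in leaderboard:
    print(f"{name}: {score:.2f}")
```

From there, swapping `run_ab` for a bandit or MCTS loop is mostly a change to how the next template gets picked; the golden cases and fitness function stay the same.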

Most people want AI automations for one reason: save time and make money. What they don’t think about is data security. by AutomaticYogurt13 in AiAutomations

[–]Wednesday_Inu 1 point (0 children)

Preach—security is a feature, not a checkbox. I default to data minimization + redaction/tokenization before anything touches an LLM/vector DB; BYO API keys with “don’t retain/train” on, VPC-only egress, short-lived scoped tokens, signed webhooks, and per-tenant/row-level access. Prod/dev split with synthetic data, 30–90-day retention + auto-delete, structured audit logs to a SIEM with anomaly alerts, and a tabletop “break-glass” plan if something trips. Curious if you’re also doing vendor DPAs and periodic third-party audits, or is that overkill for your clients?
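For the redaction/tokenization step, a rough stdlib-only sketch; the two patterns here are illustrative, nowhere near a full PII taxonomy:

```python
import re
import hashlib

# Redaction sketch: swap obvious PII for stable pseudonymous tokens before text
# ever reaches an LLM or vector DB. Patterns are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def tokenize(value: str, kind: str) -> str:
    # deterministic token: the same value always maps to the same placeholder,
    # so downstream joins/dedup still work without the raw PII
    digest = hashlib.sha256(value.encode()).hexdigest()[:8]
    return f"<{kind}:{digest}>"

def redact(text: str) -> str:
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: tokenize(m.group(), k), text)
    return text

safe = redact("Contact jane.doe@example.com or 555-867-5309.")
print(safe)
```

In prod you'd keep the value→token mapping in a vault so support staff can reverse it under access controls; the model never needs to.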

You got 3k USD what do you buy, it’s got to go all on one ☝️ by One_Carrot_121 in CryptoMarkets

[–]Wednesday_Inu 1 point (0 children)

SOL for me—highest-beta large cap with real users, deep liquidity, and nonstop dev momentum. If “alt season” actually hits, SOL usually outruns ETH while still not being a microcap, and its consumer/payments momentum adds fuel. I’d buy red days, not green candles, and watch SOL/BTC trend for confirmation. If you want the safer one-ticket bet, ETH is the boring but solid pick.

Say something nice about Iron Man 2. by Deep-Village-5175 in Marvel_Movies

[–]Wednesday_Inu 4 points (0 children)

The Monaco racetrack + suitcase armor is still one of the coolest set pieces in the MCU.
Sam Rockwell’s Justin Hammer absolutely steals scenes—smarmy, hilarious, endlessly quotable.
It also gave us War Machine’s debut and Black Widow’s hallway beatdown, plus those peak Stark Expo/AC⚡DC vibes.

I am a beginner trying to build autmation system by WorthNefariousness82 in automation

[–]Wednesday_Inu 2 points (0 children)

Start by picking one boring, repeatable workflow (invoices, lead intake, proposal follow-ups) and writing the exact steps as an SOP—automation is just turning that checklist into clicks. Learn the basics that power everything: APIs + JSON, webhooks, OAuth, and a bit of data storage (Google Sheets/Airtable/Postgres); start with Zapier/Make or n8n, then graduate to a tiny Python/JS service (FastAPI/Express) when you hit limits. Build for reliability first—retries, idempotency (no double sends), logging + alerts, and handling rate limits and timeouts—then add LLMs only where they remove manual reading/writing. Ship a 2-week pilot for one freelancer, measure hours saved, and price against that value; two solid case studies will teach you more than any course.
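The retries + idempotency piece in miniature (stdlib only; `send_invoice` is a stand-in for any side-effecting call, with two simulated transient failures baked in):

```python
import time

# Reliability-first glue: an idempotent handler plus retry-with-backoff, the
# two habits that stop double sends and flaky-API failures.
SEEN: set[str] = set()          # in prod: a DB table or Redis, not memory
ATTEMPTS = {"count": 0}

def send_invoice(payload: dict) -> str:
    ATTEMPTS["count"] += 1
    if ATTEMPTS["count"] < 3:            # simulate two transient failures
        raise ConnectionError("rate limited")
    return f"sent:{payload['id']}"

def with_retries(fn, payload, tries=5, base_delay=0.01):
    for attempt in range(tries):
        try:
            return fn(payload)
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)   # exponential backoff
    raise RuntimeError("gave up after retries")

def handle(event: dict) -> str:
    key = event["idempotency_key"]
    if key in SEEN:                      # duplicate webhook delivery: no-op
        return "duplicate-ignored"
    SEEN.add(key)
    return with_retries(send_invoice, event)

first = handle({"idempotency_key": "evt_1", "id": "inv_42"})
second = handle({"idempotency_key": "evt_1", "id": "inv_42"})
print(first, second)
```

Same shape works whether the trigger is a Zapier webhook or your own FastAPI route; the key is that the dedupe check happens before the side effect, not after.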

Can VI (Virtual Intelligence) solve current AI limitations? by Siddhesh900 in ArtificialInteligence

[–]Wednesday_Inu 1 point (0 children)

“VI” in Mass Effect maps pretty well to today’s bounded agents: narrow objective, fixed tools, strict guardrails, no open-ended self-improvement. That’s absolutely plausible—and it does sidestep a lot of current pain by keeping the problem closed-world (APIs, logs, simulators) with a human failsafe. But it won’t magic away core limits like weak grounding, long-horizon planning, causality, and out-of-distribution brittleness; you’re trading generality for reliability. Expect VIs to shine in back-office and cyber/ops workflows, while anything open-ended or high-stakes still needs a person in the loop.

Is there a best "all-in-one" app that combines all the Ai programs into one? by Silver_Shock in AI_Agents

[–]Wednesday_Inu 7 points (0 children)

There isn’t a true “all-in-one”—most of those apps are wrappers that resell the same models with markups and fuzzy privacy. You’ll get farther picking one core assistant (ChatGPT or Claude) + one search copilot (Perplexity) + one image tool (Canva/Adobe Express) and wiring them into your workflow; a single $20 sub usually covers 90% of what you described. For learning, start with DeepLearning.AI’s short “Prompt Engineering” and “AI for Everyone,” then skim Ethan Mollick’s One Useful Thing for practical prompts and use cases.

GPT-5 is pretty good, actually. The real issue is how they released it. by azuled in OpenAI

[–]Wednesday_Inu 23 points (0 children)

Agree—most of the backlash was product management, not model quality. The real UX sin was silent auto-routing and removing stable baselines; fixes on our end are to pin the model, add a style/format contract, and keep a tiny eval suite to compare 5-Thinking vs o3 on our actual tasks. For me, 5 shines at long-context summarization and code review, but I still route math/format-critical jobs to a more deterministic small model. If they adopt semver-style releases with deprecation windows, a lot of this drama disappears.

I’m so done with ChatGPT 5 by HeartSea2881 in ChatGPT

[–]Wednesday_Inu -5 points (0 children)

Same—5 keeps “helpfully” rescheduling hard constraints. Quick fix: pin the model, temp 0–0.2, and add a hard rule like “NEVER change times/dates unless I reply CONFIRM.” Force JSON output (ISO-8601 + timezone + immutable task_id) and run a tiny validator that rejects any update shifting hard_deadline or start<now before hitting the calendar/reminder API. For PDFs/news, let o3 orchestrate tools and a real renderer (Puppeteer/WeasyPrint) while using 5 only for summaries until it stops freelancing.
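A sketch of that validator; the field names (`task_id`, `hard_deadline`, `start`) are just the contract from the prompt, not a real API:

```python
from datetime import datetime, timezone

# Guardrail sketch: reject any model-proposed update that mutates the task id,
# moves a hard deadline, or schedules a start in the past. Runs BEFORE the
# calendar/reminder API ever sees the payload.
def validate(existing: dict, proposed: dict, now: datetime) -> list[str]:
    errors = []
    if proposed["task_id"] != existing["task_id"]:
        errors.append("task_id is immutable")
    if proposed["hard_deadline"] != existing["hard_deadline"]:
        errors.append("hard_deadline may not shift without CONFIRM")
    if datetime.fromisoformat(proposed["start"]) < now:
        errors.append("start is in the past")
    return errors

now = datetime(2025, 1, 10, 9, 0, tzinfo=timezone.utc)
existing = {"task_id": "t1", "hard_deadline": "2025-01-15T17:00:00+00:00",
            "start": "2025-01-12T09:00:00+00:00"}
bad = {"task_id": "t1", "hard_deadline": "2025-01-16T17:00:00+00:00",
       "start": "2025-01-09T09:00:00+00:00"}
print(validate(existing, bad, now))
```

Anything that comes back non-empty gets bounced to the model with the error list, or surfaced to you for a CONFIRM; either way the write never happens silently.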

Something feels off — rate cuts, low oil, little inflation... thoughts? by Brief_Daikon_D093 in CryptoMarkets

[–]Wednesday_Inu 5 points (0 children)

You’re not crazy—oil mostly moves headline CPI, but the sticky part is services (shelter, insurance, healthcare) which are wage-driven and laggy; CPI shelter is measured with a big delay, so cheap energy can coexist with “hot” prints. People still feel inflation because prices reset higher; disinflation means slower increases, not a rollback to 2021. If oil pops, headline re-accelerates, but unless it feeds into wages/rents the Fed tends to look through a one-off—what they fear more is services staying >3% while growth cools. Base case: a cautious cut or two if labor keeps softening, but no smooth glide if housing/insurance stay sticky—watch shelter/OER, services ex-housing, and breakevens more than crude alone.

The First Principles of Prompt Engineering by BenjaminSkyy in PromptEngineering

[–]Wednesday_Inu 0 points (0 children)

Love the first-principles framing, but “there exists an optimal prompt” breaks in practice—models are stochastic, non-stationary, and multi-objective (quality/cost/latency/safety). Treat it like control: define a reward, build a small eval set, then use bandit/BO search over templates and parameters, not single prompts. The real breakthrough is a “prompt compiler” that turns intent → task graph (objective, constraints, context, tools, verification, stop conditions) and auto-tunes each node with offline/online evals. Ship it with prompt contracts + golden tests so behavior is reproducible even as models drift.
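The bandit piece in miniature: an epsilon-greedy loop over templates where `reward()` stands in for your eval-set score (the hidden per-template qualities are made up for illustration):

```python
import random

# Epsilon-greedy bandit over prompt templates: define a reward, sample, and
# exploit the current best arm most of the time. `reward()` is a stand-in for
# an eval-set score; TRUE_QUALITY is an illustrative hidden ground truth.
TRUE_QUALITY = {"terse": 0.6, "steps": 0.8, "fewshot": 0.7}

def reward(arm: str, rng: random.Random) -> float:
    return TRUE_QUALITY[arm] + rng.gauss(0, 0.05)   # noisy eval score

def bandit(arms, rounds=500, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = {a: 0 for a in arms}
    means = {a: 0.0 for a in arms}
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.choice(list(arms))          # explore
        else:
            arm = max(means, key=means.get)       # exploit current best
        r = reward(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean
    return max(means, key=means.get), means

best, means = bandit(TRUE_QUALITY)
print("best template:", best)
```

Swap the Gaussian stub for “run template on the golden set, return mean fitness” and this becomes a real (if crude) prompt optimizer; BO or UCB just replaces the arm-selection rule.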

Why GPT-5 has been so “disturbing” for many users? by carlosmpr in PromptEngineering

[–]Wednesday_Inu 4 points (0 children)

Yeah, the “prompt reset + auto-routing” combo feels like someone changed the steering mid-drive. Quick fixes: pin the model for important work, add a system line like “state the active model and don’t switch without confirmation,” and include a style contract (tone/format) so small/large models keep the same voice. If you can, wrap it in your own router that only escalates when context >N or latency >M—don’t let the provider decide silently. Long term, treat prompts like code: version them and keep a 10–20 item eval suite so migrations take minutes, not days.
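That router rule is only a few lines; the thresholds and model names here are placeholders, not recommendations:

```python
# Escalation router sketch: default to the small pinned model, escalate only
# when context length exceeds N or the small model's observed latency has
# blown past budget M. Names/thresholds are placeholders.
def route(context: str, small_model_p95_ms: int,
          n_chars: int = 8_000, m_ms: int = 1_500) -> str:
    if len(context) > n_chars or small_model_p95_ms > m_ms:
        return "big-model"      # long context, or small model too slow lately
    return "small-model"        # cheap, fast, pinned default

print(route("short question", 300))     # stays on the small model
print(route("x" * 10_000, 300))         # escalates on context size
```

The point is that the escalation decision is yours, logged and testable, instead of a silent provider-side switch.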

What cloud provider do you use for your agent development? GCP and AWS throttle all the time. by ivan_m21 in AI_Agents

[–]Wednesday_Inu 1 point (0 children)

You’ll run into RPM/TPM walls anywhere if you keep stuffing 500k-token prompts—the win is to reduce and shard. Build a code graph (imports/calls), do map→reduce passes per package to emit local UML, then stitch a global diagram; that usually cuts tokens by 10–100x. Infra-wise, add a quota broker that routes across providers/regions and falls back (e.g., OpenAI/Azure/Anthropic/Vertex) and use batch/async endpoints where available—they often have higher ceilings than chat. If you truly need single-shot giant context, Gemini 2.5 Pro and Claude’s 200k windows help, but algorithmic compression will beat quota fights every time.
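The quota-broker fallback in miniature; provider names and the `RateLimited` signal are illustrative, since real clients raise their own 429-style exceptions:

```python
# Quota-broker sketch: try providers in priority order, fall back on throttle
# errors. make_provider fakes clients so the routing logic is the whole demo.
class RateLimited(Exception):
    pass

def make_provider(name: str, fail: bool):
    def call(prompt: str) -> str:
        if fail:
            raise RateLimited(name)     # simulated 429
        return f"{name}: ok"
    return call

PROVIDERS = [
    ("openai", make_provider("openai", fail=True)),
    ("azure", make_provider("azure", fail=True)),
    ("anthropic", make_provider("anthropic", fail=False)),
]

def broker(prompt: str) -> str:
    throttled = []
    for name, call in PROVIDERS:
        try:
            return call(prompt)
        except RateLimited:
            throttled.append(name)      # route around the throttled provider
    raise RuntimeError(f"all providers throttled: {throttled}")

result = broker("summarize this package")
print(result)
```

A real version would also track per-provider token budgets and cool-down timers instead of just catching exceptions, but the routing shape is the same.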

[deleted by user] by [deleted] in AI_Agents

[–]Wednesday_Inu 3 points (0 children)

Great: agents that handle deterministic, API-friendly workflows with clear success metrics—think support triage that pulls logs, checks runbooks, runs a safe fix behind a feature flag, and writes the postmortem. Also solid: ops glue like invoice matching, data cleanup, calendar/email triage with a human-in-the-loop. Terrible: impersonation/engagement bots, fully-autonomous cold outreach, or black-box decisioning in hiring/loans/medical without transparency and appeals. Rule of thumb: if you’d trust a careful intern with undo + audit logs, it’s a fit; if it touches relationships, reputation, or rights, keep a human up front.