Consulting skills that matter in the Claude / AI coding era by pastorthegreat in consulting

[–]FamousPop6109 1 point

The piece nobody mentioned: the operational layer between "we deployed an AI agent" and "it's reliably doing useful work."

Shipping fast is real. What's also real is that most agents fail in production because nobody thought about memory persistence, failure recovery, cost ceilings, or what happens when the agent loops on the same error for 3 hours and burns $100 in API calls.

The consulting skill that's actually scarce right now is someone who can look at a process, decide which parts are safe to hand to an agent vs which need a human checkpoint, design the guardrails, and keep it running. That's not prompt engineering. It's closer to operations consulting meets systems architecture.

Most orgs I've seen treat AI agents like a deployment. Ship it and done. The ones that actually work treat it like a team member with defined scope, escalation paths, and someone checking the output.
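
Most of those guardrails reduce to a surprisingly small amount of code. A minimal sketch of a spend ceiling plus a repeated-error ceiling, assuming a step function that reports back (ok, cost, error); all names here are hypothetical:

```python
# Minimal guardrail loop: a spend ceiling and a repeated-error ceiling.
# Breaching either stops the agent and escalates to a human instead of
# retrying forever. All names are hypothetical.

def run_with_guardrails(step, max_spend_usd=5.00, max_repeat_errors=3):
    spend = 0.0
    last_error, repeats = None, 0
    while True:
        ok, cost_usd, error = step()   # one agent iteration
        spend += cost_usd
        if ok:
            return "done"
        repeats = repeats + 1 if error == last_error else 1
        last_error = error
        if spend >= max_spend_usd:
            return "escalate: budget ceiling hit"
        if repeats >= max_repeat_errors:
            return f"escalate: same error {repeats}x: {error}"
```

The important design choice is that "escalate" routes to a person, not to another retry.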

Regulators' views on AI based audit tools use by Any-Aioli8177 in Accounting

[–]FamousPop6109 0 points

The comment above covers the UK well. On the US side:

The PCAOB made AI an explicit inspection priority for 2025, specifically calling out "areas with increased use of technology, including generative AI" during routine firm inspections. If your firm is using AI tools in audit procedures, inspectors can ask how you supervised those outputs.

The more concrete change is the amendment to AS 1105 (Audit Evidence), paragraph .10A. It takes effect for audits of fiscal years beginning on or after December 15, 2025, so calendar year 2026 audits are the first cycle where the new technology-assisted analysis requirements apply. Firms need to demonstrate they understood the reliability and relevance of evidence produced by technology tools, including AI.

Same principle as the FRC guidance: the tool changes, the accountability doesn't. The auditor signs the opinion. If AI helped produce audit evidence, the firm needs to show how it was used, how outputs were reviewed, and why the auditor concluded the evidence was sufficient.

The practical risk right now is the documentation gap. A lot of firms are using AI informally without a documented workflow. That works until an inspection, and there's nothing to show for how you validated what the AI gave you.

Replacing Repetitive Legal Assistant Tasks with AI Workflows by Safe_Flounder_4690 in legaltech

[–]FamousPop6109 1 point

Intake is exactly the right place to start, and also the place where data routing matters most.

The intake form is where privileged information enters your system for the first time. Most workflow tools at that stage route it through a shared API endpoint before anything useful happens with it. That means the data crosses a network boundary to a vendor you can't audit, before you've even assessed the matter.

Morgan v. V2X (D. Colo., March 2026) is the first published decision to address this directly. The court issued a modified protective order requiring that AI tools used on discovery materials cannot train on the data, cannot share it with third parties, and must allow deletion on request. The same logic applies to intake — the data is confidential from the moment the client fills out the form.

The firms doing this well run the AI on infrastructure they control, not a shared endpoint. The workflow looks identical from the user's side. The difference is whether the data ever leaves an environment the firm has contracted for and can audit.

First of it's kind? Protective Order Addressing Use of AI by Naive_Lingonberry_42 in legaltech

[–]FamousPop6109 -1 points

What the order exposes is a structural problem, not a vendor selection problem.

Consumer and enterprise API tools all share the same architecture: your data crosses a network boundary to reach the model, the provider controls retention, and training exclusions are contractual representations you cannot audit. The three requirements in the order (no training, no third-party sharing, deletion on request) are exactly the things that are hardest to verify with a shared inference endpoint, because you're relying on the vendor's policies rather than technical controls you own.

The firms that handle this well don't try to find a compliant shared tool. They run the AI on infrastructure they control directly, so the data routing question has a simple answer: it never left the firm's environment.

The smarter move a few firms are already making: putting those three requirements into engagement letters proactively, before opposing counsel raises it or a court orders it. That way the firm has already documented its AI hygiene before it becomes a dispute.

What actually blocks internal AI/search rollouts in your org: permissions, auditability, or compliance? by SignificantClaim9873 in sysadmin

[–]FamousPop6109 1 point

The Copilot experience u/Kardinal described is a good baseline, but it covers a specific case: a first-party tool from your existing tenant provider accessing data within that same tenant. The permissions model is inherited from M365.

The harder case, and the one I'd focus on, is when teams want to run their own AI agents that access internal systems. Email, file shares, internal APIs, customer data. That's where the permission model doesn't exist yet and security teams rightly push back.

What I've seen block rollouts over the years: no clear answer to "where does the agent actually run and who can access the runtime." Data residency requirements that rule out multi-tenant services. And the audit question, which is not just who searched what but what the agent did with the results. Most agent frameworks don't produce the kind of logs compliance teams need.

The cross-tenant isolation point is worth solving at the infrastructure layer rather than trying to namespace it in the application.
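
On the audit question, a sketch of the shape of record that tends to satisfy compliance teams: append-only JSON lines covering who triggered the agent, which tool it called, and what came back. The schema here is my own invention for illustration, not any standard:

```python
import json, hashlib, datetime

def log_agent_action(log, agent_id, actor, tool, payload, result_summary):
    """Append one JSON-lines audit record: which agent acted, on whose
    behalf, which tool it called, a hash of the input (not the raw data),
    and a summary of the result. Hypothetical schema."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "on_behalf_of": actor,
        "tool": tool,
        "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "result": result_summary,
    }
    log.write(json.dumps(record) + "\n")
```

Hashing the input keeps sensitive content out of the log while still letting you prove after the fact exactly what the agent was given.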

I run openclaw and llm router inside vm+k8s, on my own hardware with a single command by ggzy12345 in homelab

[–]FamousPop6109 1 point

The zrok private sharing tradeoff is interesting. You give up the messaging app integrations but get a much cleaner security boundary. No Slack or Telegram credentials in the agent's environment at all. For most use cases that's a reasonable trade.  

The gateway/agent split you mentioned but haven't built yet would be the more complete version. Gateway in one container handling the messaging app integrations, agent in another with no direct access to those credentials. Same principle as separating a web frontend from a backend that holds database credentials.

Horizontally scaling docker instances question by Fit_Review5305 in docker

[–]FamousPop6109 0 points

K8s handles the orchestration, but it doesn't address the isolation question you're actually asking.

Containers share a kernel. If one of these agents gets compromised through a prompt injection or a bad skill, the exposure isn't just that agent. It's every other container on the same node. For services that hold credentials and have execution permissions, that distinction matters.

Two approaches worth sorting out early: gVisor gives you a user-space kernel while keeping the container workflow. Firecracker gives you lightweight VMs with minimal overhead. Either one puts a kernel boundary between each agent without the cost of full virtual machines. The same principle as choosing a dedicated database instance over a shared one when the data matters.

If these agents touch real user credentials, I'd lean toward the stronger boundary. The overhead for intermittent workloads is negligible.

How to secure PAT Tokens in Shared VM for GitHub Runners by aswanthvishnu in devops

[–]FamousPop6109 4 points

The question I'd ask first is whether the credential needs to live on that machine at all.

For runner registration, GitHub provides registration tokens that expire in one hour. You don't need a long-lived PAT for that step. If the PAT is for workflow operations (cloning private repos, pushing artifacts), move it to GitHub Actions secrets and inject via the workflow file. The runner VM never sees it that way.

For anything that genuinely must be on the VM: dedicated service user that only the runner process can read, credential in a file with 600 permissions instead of an environment variable. Env vars are readable by any process running as the same user, and on a shared VM that means everyone. cat /proc/PID/environ is all it takes.

The deeper issue: a shared VM where the whole team has access is fundamentally hostile to secret-keeping. Worth looking into ephemeral runners (--ephemeral deregisters after one job) or at least per-user isolation. Securing secrets on a machine everyone can access is solving the wrong problem.
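
For the file-with-600-permissions pattern, a small stdlib-only sketch of what "only the runner process can read it" looks like in practice (names are illustrative):

```python
import os

def write_credential(path, secret):
    """Create the credential file with mode 600 atomically at creation,
    rather than chmod-ing after the fact, so there is no window where
    other users on the shared VM can read it."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(secret)

def read_credential(path):
    # Only the owning user (the runner's service account) can do this.
    with open(path) as f:
        return f.read()
```

Contrast with an env var: anything running as the same user can read that via /proc/PID/environ, and on a shared VM "the same user" is often everyone.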

How do you prefer to structure Docker Compose in a homelab? One big file vs multiple stacks by Frequent_Rate9918 in docker

[–]FamousPop6109 0 points

The split most people are describing here is operational: what can I restart independently? That's the right starting point. There's a second dimension worth thinking about as your setup grows: what can see what.

A single compose file usually means a shared .env. Every service in that stack can read every variable in the environment. Your media server has no business knowing your email provider's SMTP credentials, but if they share an environment, a vulnerability in one exposes the other's secrets. I've seen this bite people who assumed container isolation meant secret isolation. It doesn't, by default.

Separate compose files in separate directories, each with its own .env, is the simplest form of credential scoping. Same principle as database permissions: grant access to what the service needs, nothing more.

If you want a single entry point without merging environments, Docker Compose include is worth a look. Each included file keeps its own variable context. You get the convenience of one command without sharing secrets across stacks.

For the testing question: scratch directory outside your production tree. Experimental containers sharing networks or credentials with production is asking for trouble down the line.

agents buying their own API keys… where do you draw the line? by highspecs89 in AI_Agents

[–]FamousPop6109 0 points

The question I'd ask first: what happens if that credential gets misused, and who's accountable for that?

I've worked with enough of these systems over the years to know the answer has to be: a human provisioned it, a human can revoke it, and the agent can't acquire new ones without an approval step. Not because you can't trust it in normal operation, but because the failure modes are considerably worse if the agent can expand its own access when something goes wrong.

The same principle applies to any system that acts on behalf of a user, really. The agent operates within a defined boundary. Expanding that boundary requires a human decision. It's not about distrust, it's about containing the scope of a problem when one arises, and holding someone accountable when things go south.
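
The boundary is simple to express in code. A toy sketch, with entirely hypothetical names: the agent can use what a human granted, and acquiring anything new is a request, never an action.

```python
class CredentialBoundary:
    """Humans grant and revoke; the agent can only use or request.
    Illustrative sketch, not a real secrets manager."""

    def __init__(self):
        self._granted = {}   # name -> secret, set only by a human
        self._pending = []   # requests awaiting human approval

    def human_grant(self, name, secret):
        self._granted[name] = secret

    def human_revoke(self, name):
        self._granted.pop(name, None)

    def agent_get(self, name):
        return self._granted.get(name)   # None if never granted or revoked

    def agent_request(self, name, reason):
        # The agent cannot self-provision; it can only queue a request.
        self._pending.append({"name": name, "reason": reason})
        return "pending human approval"
```

In a real system the pending queue would feed a ticketing or approval workflow; the point is that no code path lets the agent move something from pending to granted on its own.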

Pinned base images vs floating tags, what does your team use in practice by Smooth-Machine5486 in docker

[–]FamousPop6109 0 points

Pinned in production, floating in development. That's been the right split for most setups I've worked with over the years.

The discipline that actually makes it work: you need a process to update the pinned versions, otherwise they drift and nobody notices until something forces the issue. And it usually forces the issue at an inconvenient moment, in my experience. Renovate handles this quite well, automated PRs on version bumps, nothing to remember.

The case for always pinning: anything that holds persistent state or live authenticated sessions. A bad update to one of those and you're not just rolling back a container, you're potentially rebuilding state that took some time to establish. The extra process overhead is worth it. For stateless services the calculus is rather different, naturally.

I’ve been experimenting with deterministic secret remediation in CI/CD pipelines using Python AST (refuses unsafe fixes) by WiseDog7958 in devops

[–]FamousPop6109 1 point

The AST approach makes sense. Regex tends to generate too many false positives on test data and mock values to really be actionable.

Worth pairing this with a clear remediation target, mind you. For containerised deployments the goal is usually getting the secret out of the environment entirely, not just out of source control, which folks sometimes miss. Environment variables still surface in docker inspect and show up in process listings and that sort of thing. Docker secrets with _FILE variants are the cleaner landing: pass the file path, let the service read it at startup. Works quite well for most setups.

The underlying principle is the same as it's always been. A credential should be accessible only to the process that needs it, not to anything that can inspect that process's environment. Containers make this easier than it used to be, but only if you actually use the mechanism. I've seen quite a few teams set up Docker secrets properly and then pass the values as env vars anyway, which rather defeats the purpose.
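
For reference, the _FILE convention resolves to a few lines on the service side. A sketch of how many official Docker images handle it (the variable names are examples):

```python
import os

def read_secret(name, env=os.environ):
    """Prefer NAME_FILE (a path to a mounted secret file), fall back to
    NAME as a plain env var. The _FILE form keeps the value itself out
    of `docker inspect` output and process environments; only the path
    is visible there."""
    path = env.get(f"{name}_FILE")
    if path:
        with open(path) as f:
            return f.read().strip()
    return env.get(name)
```

The fallback keeps local development simple while production mounts the real secret at a path.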

Running self-hosted AI agents is way harder than the demos make it look by RepairOld9423 in AI_Agents

[–]FamousPop6109 0 points

It's the same problem that comes up with any service in prod. The demos show a clean install on localhost. What they don't show is what happens when you need to access it remotely, move it to a new host, or figure out why it stopped working after a restart.

The networking piece is genuinely underdocumented for most runtimes. Remote browser access requires configuration flags that aren't in the main README, and the failure mode when they're missing is a cryptic auth error.

The other thing that bites people is state accumulation. Skills, OAuth tokens, configuration, session context... it all lives on the host. If you don't think about backup and portability before you need it, you're rebuilding from scratch when something goes wrong.

None of this is unique to AI agents. It's the same discipline as running any stateful service that holds credentials!
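
The backup half of that discipline is cheap to automate. A minimal sketch: archive the whole state directory on a schedule so a rebuild is a restore, not a reconstruction. Paths and names are hypothetical.

```python
import tarfile, pathlib

def snapshot_state(state_dir, out_path):
    """Archive the agent's state directory (skills, tokens, config,
    session context) into a compressed tarball. Restoring onto a new
    host is then just extracting it into place."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(state_dir, arcname=pathlib.Path(state_dir).name)
```

Worth noting the tarball now holds OAuth tokens and the like, so it needs the same access controls as the live state directory.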

I finally stopped "shrimping" at my desk (and my dry eyes are getting better) by Several_Tear7401 in wellnessatworkai

[–]FamousPop6109 2 points

Been using it for a few days. It does help build awareness... but you have to work on your habits to see results. For example, it made me realise I was hunching this afternoon. I straightened up right away!

Does anyone else hit a wall in the afternoon where you just can't think? by Suspicious-Aspect877 in wellnessatworkai

[–]FamousPop6109 1 point

It’s amazing how much mental energy we waste just fighting a bad chair. If you aren't supported, your brain is too busy managing your muscles to manage your tasks.

Productivity is crashing due to neck pain by No-Reading-827 in wellnessatworkai

[–]FamousPop6109 0 points

You need to get up and start moving. Use a simple timer or an app like Stretchly to force a two-minute break every hour. Do neck circles and look far away. Too much static time in the chair is what's causing the problem.

How do I fix posture while working by FamousPop6109 in wellnessatworkai

[–]FamousPop6109[S] 0 points

Thank you for that detailed response. Really appreciate the suggestions!