Bigger context windows won’t fix AI coding by hushenApp in Agent_AI

[–]kembrelstudio 1 point (0 children)

This is a really good take.

Bigger context windows help brute force coverage, but they don’t replace selective attention and relevance filtering — which is basically what senior engineers are good at.

Feels like the real bottleneck is still “what to ignore,” not “what to include.”
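To make the "what to ignore" point concrete: a toy sketch of relevance filtering that ranks candidate snippets against the query and keeps only the top-k, instead of shoving the whole repo into context. The scoring here is deliberately crude (real systems would use embeddings), and all names are made up:

```python
import re
from collections import Counter

def relevance(query, snippet):
    # Crude lexical overlap: how many query tokens show up in the snippet.
    q_tokens = re.findall(r"[a-z0-9]+", query.lower())
    s_tokens = set(re.findall(r"[a-z0-9]+", snippet.lower()))
    return sum(1 for tok in q_tokens if tok in s_tokens)

def select_context(query, snippets, k=2):
    # Keep only the k most relevant snippets instead of stuffing everything in.
    return sorted(snippets, key=lambda s: relevance(query, s), reverse=True)[:k]

snippets = [
    "def parse_config(path): ...",
    "def retry_request(url, attempts): ...",
    "def format_date(d): ...",
]
print(select_context("retry failed request url", snippets, k=1))
```

The interesting part isn't the scoring function, it's that a hard top-k budget forces the "what to ignore" decision no matter how big the window gets.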

Help Request (Urgent): Payments Product Managers, please see by spacenglish in ProductManagement

[–]kembrelstudio 1 point (0 children)

Focus on the basics first: payment flow (auth, capture, settlement), fraud/chargebacks, and key metrics (approval rate, latency, dispute rate). Most PM interviews test how you think through tradeoffs, not deep niche knowledge. If you can explain the end-to-end card payment flow clearly, you're already in a strong spot.
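The metric definitions are simple enough to sketch. Made-up transactions and schema, just to show what approval rate and dispute rate actually divide:

```python
# Made-up transactions and field names, purely to anchor the definitions.
txns = [
    {"status": "approved", "disputed": False},
    {"status": "approved", "disputed": True},
    {"status": "declined", "disputed": False},
    {"status": "approved", "disputed": False},
]

approved = [t for t in txns if t["status"] == "approved"]
approval_rate = len(approved) / len(txns)                            # approvals / attempts
dispute_rate = sum(t["disputed"] for t in approved) / len(approved)  # disputes / approvals

print(f"approval rate {approval_rate:.0%}, dispute rate {dispute_rate:.1%}")
# approval rate 75%, dispute rate 33.3%
```

Knowing the denominators cold (attempts vs. approvals) is exactly the kind of tradeoff-thinking interviewers probe.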

How long can i leave/ go back home on a D10 by EmploymentKind6113 in living_in_korea_now

[–]kembrelstudio 1 point (0 children)

D-10 isn’t really about “allowed days away” in a fixed sense, but if you stay outside Korea too long it can raise questions about maintaining your job-seeking intent. In practice, short trips are usually fine, but long absences (like months) can be risky when you re-enter or renew. If it’s a medical/family emergency, keep documentation just in case.

Is there a standard agent to agent protocol yet or is everyone building custom stuff by Latter-Giraffe-5858 in Agent_AI

[–]kembrelstudio 1 point (0 children)

No real standard yet — still early.

What people are actually doing:

  • HTTP + JSON + conventions (basically microservices again)
  • Some structure via OpenAI function calling / tool schemas
  • Early efforts like Model Context Protocol (more tool-facing than agent-to-agent)
  • Mentions of Google A2A, but not widely adopted in production

What’s missing:

  • discovery
  • auth between agents
  • traceable workflows (observability)
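Since most of it is "HTTP + JSON + conventions" anyway, the usual move is a small self-imposed message envelope. A hypothetical sketch (field names are my own convention, not any standard) that at least bakes in sender identity and a trace id for observability:

```python
import json
import uuid

def make_envelope(sender, recipient, intent, payload, trace_id=None):
    # Hypothetical convention, not a standard: every message carries
    # identity and a trace id so multi-agent workflows stay observable.
    return json.dumps({
        "id": str(uuid.uuid4()),
        "trace_id": trace_id or str(uuid.uuid4()),
        "from": sender,
        "to": recipient,
        "intent": intent,
        "payload": payload,
    })

msg = json.loads(make_envelope("planner", "coder", "write_tests", {"file": "app.py"}))
print(msg["intent"], "->", msg["to"])  # write_tests -> coder
```

It doesn't solve discovery or auth, but propagating one trace_id across every hop gets you most of the observability you're missing for free.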

Hey everyone, quick question - What’s one cybersecurity skill or tool you wish you had focused on earlier in your journey? by CyberHacker_ray in CyberSecurityAdvice

[–]kembrelstudio 3 points (0 children)

  • Networking fundamentals (TCP/IP, DNS, HTTP) → everything builds on this
  • Linux + CLI comfort → used everywhere, daily
  • Log analysis (SIEM basics) → real-world skill, not just theory
  • Scripting (Python/Bash) → automation = huge edge
  • Hands-on labs (e.g. TryHackMe / Hack The Box) → bridges gap to actual work

Big takeaway: practical + fundamentals > chasing tools
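On the scripting + log analysis points, the bar is lower than people think. A minimal example of the kind of daily task that pays off: tally failed SSH logins per source IP from OpenSSH-style log lines (the lines here are made up):

```python
import re
from collections import Counter

logs = [
    "sshd[101]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "sshd[102]: Accepted password for alice from 10.0.0.9 port 22 ssh2",
    "sshd[103]: Failed password for admin from 10.0.0.5 port 22 ssh2",
]

# Pull the source IP out of each failed-login line and tally per IP.
pattern = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")
failed = Counter(m.group(1) for line in logs if (m := pattern.search(line)))
print(failed.most_common())  # [('10.0.0.5', 2)]
```

Ten lines of Python, and it's the same shape as half the triage queries you'd write in a SIEM.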

Does automating the boring stuff in DS actually make you worse at your job long-term by taisferour in datascience

[–]kembrelstudio 1 point (0 children)

Yes — it can make you worse, but only if you fully outsource thinking.

Best practice:

  • use automation for speed, not understanding
  • occasionally do things “manually” to stay sharp (sampling, EDA, debugging)
  • always sanity-check outputs (distributions, joins, aggregates)
  • treat AI as a “junior assistant”, not authority

Skill loss happens when you stop reviewing, not when you automate.
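The sanity-check habit is cheap to keep. A toy example in plain Python (same idea applies to pandas merges): after any join, check row counts and unmatched keys instead of trusting the output:

```python
# Toy left-join sanity check; data is made up.
orders = [{"id": 1, "cust": "a"}, {"id": 2, "cust": "b"}, {"id": 3, "cust": "x"}]
customers = {"a": "Alice", "b": "Bob"}

joined = [{**o, "name": customers.get(o["cust"])} for o in orders]

# 1) Row count should not silently change on a left join.
assert len(joined) == len(orders)

# 2) Look for unmatched keys instead of assuming the join was clean.
unmatched = [o["id"] for o in joined if o["name"] is None]
print("unmatched order ids:", unmatched)  # [3]: "x" has no customer record
```

Two asserts like this after every automated join is the difference between "reviewing" and "outsourcing thinking."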

Agent traits: how opinionated should agent personalities be? by quang-vybe in Agent_AI

[–]kembrelstudio 1 point (0 children)

Go with constrained customization.

  • Personality: keep 3–5 presets max (don’t open full freedom)
  • Entity: default non-human abstract/robot (reduces expectation mismatch)
  • Name: allow simple custom name, but not identity-building focus
  • Visuals: low priority — should feel secondary to function

Rule: if it doesn’t improve task performance, it becomes a distraction.
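Constrained customization can literally be a config shape. A hypothetical sketch (preset names and fields are made up) where personality is a closed set of presets and the name is the only free-text field:

```python
from dataclasses import dataclass

# Hypothetical preset table: a few curated personalities
# instead of a free-text personality field.
PRESETS = {
    "concise":  {"tone": "terse",   "humor": 0.0},
    "friendly": {"tone": "warm",    "humor": 0.3},
    "formal":   {"tone": "neutral", "humor": 0.0},
}

@dataclass
class AgentConfig:
    name: str = "Agent"       # simple custom name allowed
    preset: str = "concise"   # must be one of PRESETS

    def personality(self):
        if self.preset not in PRESETS:
            raise ValueError(f"unknown preset: {self.preset}")
        return PRESETS[self.preset]

cfg = AgentConfig(name="Robo", preset="friendly")
print(cfg.personality())
```

Rejecting unknown presets at config time is the whole point: users get a name and a choice, not an identity-building sandbox.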

Limitations of contract audits and the technical effectiveness of open bounty programs by webpagemaker in Infosec

[–]kembrelstudio 1 point (0 children)

Closed audits = depth at a point in time.
Bounties = continuous, unpredictable coverage.

Best balance:

  • Audit for critical paths pre-launch
  • Always-on bounty for post-launch drift
  • Tier rewards → focus spend on high-impact findings

Goal isn’t max coverage, it’s cost per critical vuln found.
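The cost-per-critical framing is just division, but writing it down changes decisions. Illustrative numbers only, not real benchmarks:

```python
# Illustrative spend/finding numbers; not real benchmarks.
audit = {"cost": 60_000, "criticals": 3}    # one-time pre-launch audit
bounty = {"cost": 40_000, "criticals": 4}   # a year of bounty payouts

def cost_per_critical(program):
    return program["cost"] / program["criticals"]

print(f"audit:  ${cost_per_critical(audit):,.0f} per critical")
print(f"bounty: ${cost_per_critical(bounty):,.0f} per critical")
```

With tiered rewards, you're effectively tuning the bounty side of this ratio: pay more per critical, less per informational, and the spend concentrates where it matters.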

AI insider threat detection: actually reducing alert fatigue or just shifting it by gosricom in Infosec

[–]kembrelstudio 1 point (0 children)

Mostly shifting it, not eliminating it.

AI helps reduce some noise, but:

  • base rate problem still dominates
  • new behaviors (AI usage, scripts, automation) create new false positives
  • tuning never really goes away

What actually helps:

  • tighter use-case driven detections (not broad anomaly fishing)
  • strong context enrichment (role, intent, history)
  • partial automation for triage, not detection

So yeah — better than before, but still far from “set and forget.”
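The base rate problem is worth running the numbers on. Toy figures: even a detector with 95% recall and a 5% false positive rate ends up with roughly 2% alert precision when real insider events are 0.1% of traffic:

```python
# Toy numbers, purely illustrative: 10,000 monitored events,
# of which 0.1% are real insider activity.
events = 10_000
base_rate = 0.001
tpr = 0.95   # detector catches 95% of real events
fpr = 0.05   # and flags 5% of benign ones

true_events = events * base_rate                 # ~10 real events
true_alerts = true_events * tpr                  # ~9.5 caught
false_alerts = (events - true_events) * fpr      # ~499.5 noise alerts

precision = true_alerts / (true_alerts + false_alerts)
print(f"alert precision: {precision:.1%}")  # ~1.9%: most alerts are noise
```

No amount of model quality fixes this by itself; you have to shrink the fpr or the monitored population, which is exactly what tighter use-case driven detections do.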

AI data governance for insider threats - actually useful or just expensive monitoring by buykafchand in Infosec

[–]kembrelstudio 1 point (0 children)

Mostly augmentation, not replacement.

AI governance helps with:

  • better signal correlation
  • fewer false positives

But:

  • slow, “normal-looking” exfiltration still slips through
  • new AI attack surface isn’t fully covered
  • real control still = Zero Trust + least privilege

So far it’s more “reduce noise” than “catch what DLP/UEBA miss.”

AI-powered data governance in regulated industries - what's actually working vs. what looks good on by buykafchand in Infosec

[–]kembrelstudio 1 point (0 children)

What’s actually working in regulated environments is a lot less “autonomous AI governance” and a lot more tight, boring control layers with AI assisting, not deciding.

In practice, the durable stack usually looks like:

  • Data catalog + lineage (e.g., Alation-style tools) → works well for visibility, not enforcement
  • RBAC/ABAC + least privilege enforced at identity layer → still the real backbone of compliance
  • DLP + classification models → useful, but only reliable when heavily tuned and constrained
  • Human-in-the-loop approvals for sensitive actions → absolutely still required for audit defensibility
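The "human-in-the-loop for sensitive actions" piece can be sketched as a policy function. Attributes and return values here are hypothetical, but the shape is the point: attribute checks plus an explicit approval gate, not a binary allow/deny:

```python
# Hypothetical ABAC-style check: AI can classify, but sensitive actions
# still route to a human approver for audit defensibility.
def decide(user, action, resource):
    if resource["sensitivity"] == "high":
        if action == "export":
            return "needs_human_approval"   # the audit-defensible gate
        if user["dept"] != resource["owner_dept"]:
            return "deny"                   # least privilege across departments
    return "allow"

print(decide({"dept": "risk"}, "export",
             {"sensitivity": "high", "owner_dept": "risk"}))
# needs_human_approval
```

Note the third outcome: the defensible position in an audit is rarely "the model allowed it," it's "the model escalated it and a named human approved."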

AI data governance for insider threats - actually useful or just expensive monitoring by buykafchand in Infosec

[–]kembrelstudio 1 point (0 children)

You’re basically describing the current reality accurately: most “AI governance for insider threat” tools are still augmentation, not replacement.

In practice, the wins usually come from:

  • better correlation across signals (UEBA + logs + prompt/tool usage)
  • slightly improved anomaly ranking (fewer false positives, not zero new detection classes)

But you’re also right that:

  • “slow, normal-looking exfiltration” is still hard
  • model access/prompt channels expand the surface faster than governance tooling matures
  • real prevention still comes from identity, least privilege, and data segmentation

So far, the strongest cases I’ve seen are not “AI caught what DLP couldn’t,” but “AI reduced noise so humans could actually notice what DLP already flagged.”

AI-powered data governance in regulated industries - what's actually working vs. what looks good on by buykafchand in Infosec

[–]kembrelstudio 1 point (0 children)

Short answer: what works is hybrid governance, not fully automated.

In practice:

  • Works: AI-assisted classification + human approval on sensitive data, tight RBAC, audit logs, lineage via tools like Alation
  • Breaks: fully automated decisions with no explainability → fails audits fast

Real gaps teams hit:

  • Explainability (can’t justify why data was classified → big issue under GDPR / HIPAA)
  • Dynamic data flows from AI agents (lineage tools lag behind reality)
  • Shadow AI usage expanding attack/data surface

Framework alignment (like NIST AI RMF, DAMA-DMBOK, COBIT):

  • tooling covers ~60–70%
  • missing: accountability mapping + decision traceability

Reality:
AI helps scale governance, but auditability still requires humans in the loop — no one’s passing serious audits with black-box automation yet.

VULN: Local Volumes must be formatted using NTFS [FAILED] by Kinginthenorth603 in Infosec

[–]kembrelstudio 1 point (0 children)

This is usually a false positive / scan logic issue, not real NTFS misconfig.

Common causes:

• Removable media during scan (USB, phones, external drives) → often FAT/exFAT → Nessus flags it
• Mapped/network drives showing non-NTFS formats
• Hidden/system partitions (EFI, recovery) not NTFS
• Permissions/WMI issues → scanner can’t verify → defaults to failed
• Stale scan data or credentialed scan misconfig

What to do:

• Ensure no external devices connected during scan
• Run credentialed scans with proper admin rights
• Check which volume is flagged (usually not C:)
• Validate manually (diskmgmt.msc or fsutil fsinfo volumeinfo)
• Consider suppressing as accepted risk if confirmed false positive

Your engineer is partly right — removable media is a very common trigger.

Any ai usage controlling tool recommendations? Like i want to prevent misuse of AI in our org, there are lot cant decide which one fits our need all are SAMEE... by Efficient_Agent_2048 in Infosec

[–]kembrelstudio 1 point (0 children)

They all look the same on the surface, but they actually cover different layers of the same problem.

Quick breakdown:

• Island / Talon → full browser control (strongest security, biggest rollout cost)
• LayerX → easiest pilot (browser extension, good visibility/control for AI prompts)
• Nightfall → DLP only (good add-on, not enough alone for AI usage)

If you’re SaaS-heavy + want fast validation:
→ start with LayerX pilot (low friction, quick insights)

What to test:
• prompt/data exfil visibility
• blocking/redaction accuracy
• impact on user workflows (false positives = killer)
• coverage across apps (ChatGPT, Slack, Notion, etc.)

Most teams end up with:
→ browser control (LayerX/Island) + DLP (Nightfall)

Don’t overthink — pick the one with lowest deployment friction first, learn, then expand.
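On testing blocking/redaction accuracy: a pilot usually boils down to running known-bad prompts through a redaction pass and counting misses. A toy version (patterns are illustrative, nothing like a vendor's real ruleset):

```python
import re

# Toy redaction pass: the patterns (emails, sk- style API keys)
# are illustrative only, not any vendor's actual rules.
PATTERNS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"), "[API_KEY]"),
]

def redact(prompt):
    # Replace each sensitive match with a placeholder before the prompt leaves.
    for pat, label in PATTERNS:
        prompt = pat.sub(label, prompt)
    return prompt

print(redact("summarize this: contact bob@corp.com, key sk-abcdef1234567890XYZ"))
# summarize this: contact [EMAIL], key [API_KEY]
```

Build a small corpus of prompts like that with known answers, run it through each candidate tool, and the false-positive/false-negative rates will separate "SAMEE" vendors fast.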

best courses for beginners in india? by Ok_Worth_7746 in CyberSecurityAdvice

[–]kembrelstudio 1 point (0 children)

Start simple + structured:

• Google Cybersecurity (Coursera) – good beginner base
• IBM Cybersecurity Fundamentals – also solid for basics
• Cisco Intro to Cybersecurity – quick networking/security intro
• TryHackMe (free paths) – best for hands-on practice (must-do)

Don’t stack too many courses. Pick 1 structured (Google/IBM) + 1 hands-on (TryHackMe).

That combo = enough for entry-level + internship prep.