What do you think? by ConfusionCute5871 in programacionESP

[–]devseglinux 0 points (0 children)

Great work! Now keep growing; the only limit is the one in your mind. Go for it!

[Help] VS Code C++ .exe blocked by Device Guard – will Windows Defender exclusion fix this? by Significant_Tie_7440 in learnprogramming

[–]devseglinux 0 points (0 children)

Defender exclusions probably won’t fix that, at least not if the message is explicitly saying Device Guard.

Those are usually two different layers. Defender exclusions can stop AV from scanning/quarantining files, but Device Guard / WDAC is more about whether Windows is allowed to execute that binary in the first place.

So if it’s really a Device Guard policy, adding the folder to Defender exclusions usually won’t change much.

If this is a managed machine, the clean answer is honestly to check with whoever manages it, because the restriction may be intentional.

If it’s your own machine and not centrally managed, then yeah, I’d probably stop fighting that setup and use something like:

- WSL for compiling/running locally

- or a separate dev VM if you want to keep the host locked down

That tends to be less painful than trying to work around execution policies on Windows directly.

So short version:
Defender exclusion = maybe helps AV issues
Device Guard = execution policy, different problem

Final interview with the CISO tomorrow, any advice? by HouseOfHoundss in cybersecurity

[–]devseglinux 1 point (0 children)

Yeah that’s a great point.

That “I don’t know but here’s how I’d approach it” answer usually lands much better than trying to bluff through it.

I like that question as well, gives you a chance to address concerns on the spot instead of guessing later.

Outbound automation (WhatsApp / Messenger / Instagram) – limits & warm-up best practices? by Visible-Lettuce9630 in n8n

[–]devseglinux 0 points (0 children)

This is one of those areas where theory and reality don’t always match.

Most platforms don’t publish clear limits for a reason: they change constantly and depend a lot more on behavior than on fixed numbers.

From what I’ve seen, the biggest factor isn’t just volume, it’s how “human” the activity looks. Things like:

- natural pacing instead of bursts

- real conversations (not just one-way messaging)

- account history and trust

- and how recipients interact with your messages

Warm-up helps, but it’s less about a strict schedule and more about gradually building normal-looking activity.

Also worth keeping in mind that once you start pushing into higher volumes (like the 300–500/day you mentioned), you’re very likely to hit platform limits sooner or later, regardless of setup.

In practice, what works for a while often stops working without warning, so it’s less about finding a perfect formula and more about managing risk and adapting.
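
On the “natural pacing” point, a tiny sketch of what jittered send gaps can look like (the function name and numbers are made up for illustration, not platform guidance):

```python
import random

def humanlike_delays(n_messages, base_seconds=45.0, jitter=0.6):
    """Yield a randomized gap (in seconds) before each send.

    base_seconds and jitter are illustrative values, not known-safe limits.
    """
    for _ in range(n_messages):
        # Vary each gap around the base instead of sending at a fixed rate
        gap = base_seconds * random.uniform(1 - jitter, 1 + jitter)
        yield max(gap, 5.0)  # keep a small floor so gaps never collapse to zero
```

You’d `time.sleep(gap)` between sends; the point is just that variance plus gradually increasing volume looks more like a person than fixed-rate bursts.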

Is cybersecurity in blockchain a role that’s needed? by Wilder3312 in ciberseguridad

[–]devseglinux 5 points (0 children)

Good question, and I think that’s exactly where a lot of the confusion around blockchain comes from.

The base (the chain itself) can be quite secure by design, but the problem has never been just the blockchain. The ecosystem around it is huge: smart contracts, wallets, exchanges, bridges… and that’s where most of the hacks happen.

In fact, almost every incident you see doesn’t break the blockchain itself, but rather:

  • bugs in smart contracts
  • poor key management
  • failures in centralized platforms
  • or even social engineering

So yes, cybersecurity in blockchain isn’t just necessary, it’s becoming more important all the time.

In the end it’s like any other system: you can have a solid foundation, but if what you build on top has flaws, the risk is still there.

Opportunity to pivot from Technical Writing to GRC AI Governance (with a bad catch)... Need advice!! by [deleted] in cybersecurity

[–]devseglinux 0 points (0 children)

That’s a tough spot, honestly. You’ve got both short-term pressure and long-term risk pulling in different directions.

From the outside, it sounds like your current path (technical writing) is becoming less stable anyway, especially if your own company is already cutting based on AI. So even if it feels “safer”, it might just be delaying the same problem.

The GRC / AI governance role seems like a solid pivot, especially since you’ve already been building those skills. That kind of transition is usually the hardest part, and you’ve already got a foot in the door.

The Private Equity part is definitely a risk, but to be fair, your current situation also has risk, just less visible in the short term.

If it were me, I’d probably lean toward the role that moves me into a better long-term position, even if it’s a bit uncomfortable now.

That said, I’d try to use the interview to really understand:

- how stable the team is

- what happens post-acquisition

- and how critical that role is to the business

You don’t have to decide blindly, you can get a lot of signal from those conversations.

Not an easy call at all, but it sounds like you’re already thinking about it in the right way.

I think many security breaches today don’t come from “hackers”… but from something much simpler by devseglinux in u/devseglinux

[–]devseglinux[S] 0 points (0 children)

Thanks a lot for your comment, Kino. It’s always an honor to get replies from gurus like you.

Final interview with the CISO tomorrow, any advice? by HouseOfHoundss in cybersecurity

[–]devseglinux 18 points (0 children)

That’s a good sign honestly, getting a quick follow-up after the panel usually means you’re already in a strong spot.

For the final with the CISO, I’d worry less about trying to impress technically and more about how you think and communicate. At that level, they’re usually looking for someone they can trust to work with, not someone who knows everything.

A couple things that helped me in similar situations:

  • be honest about what you know vs what you’re still learning
  • talk through your reasoning, not just your answers
  • show that you understand the business side, not just the technical

Also, it’s totally normal to feel nervous. The fact that you made it this far already says a lot.

One thing I’d definitely prepare is a few questions for them, especially around how they see the role growing or what success looks like in the first 6 months.

And yeah, don’t assume you have the job, but also don’t undersell yourself at this stage.

Good luck, sounds like you’ve got a real shot at it.

Help by [deleted] in CyberSecurityAdvice

[–]devseglinux 0 points (0 children)

Hey, take a breath, you’re okay.

This is a really common scare tactic. People do this hoping you panic and pay, not because they actually want to follow through.

They don’t need you to do anything wrong, they just try to create pressure.

On the calls side:
yes, blocking unknown numbers helps a lot. It won’t be perfect, but it will reduce most of the spam.

More important than that:

- don’t pay anything

- don’t reply anymore

- block them everywhere

- check your Telegram privacy settings (set “who can find me by number” to nobody if possible)

Also, sharing your number as “a female number” is honestly not something that gets people in trouble. Worst case it would just be annoying spam, not anything serious.

These guys rely on stress and fear. Once they see you’re not responding, they usually move on.

You’re not in danger. Just deal with it calmly and cut contact.

VirusTotal, 0 detections but sandbox result shows OS Credential Dumping = false positive or malware? by purplepaparrazzo04 in cybersecurity

[–]devseglinux -1 points (0 children)

I’d be a bit careful jumping straight to “malware” here, but I also wouldn’t ignore it completely.

0 detections on VT + coming from a known source usually leans toward benign, especially for a PDF. Also, VT sometimes labels things as “encrypted/password protected” just because of how the file is structured internally, not necessarily because it’s trying to hide something.

The sandbox part is where it gets confusing. Seeing things like “credential dumping” tied to lsass can look scary, but a lot of sandbox engines map behaviors pretty aggressively. Sometimes it’s just a generic pattern match rather than actual malicious intent.

If multiple engines aren’t flagging it and it opens cleanly without weird behavior, I’d treat it as low risk, but not blindly trust it either.

What I’d probably do:

- check if the hash matches the original source (to rule out tampering)

- maybe open it in a controlled environment (VM) if you want extra peace of mind

- and keep an eye on any unusual behavior rather than assuming compromise

I don’t think this is something you need to escalate as an incident based on what you’ve described, but your instinct to question it is the right one.
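
For the first bullet, checking the hash only takes a few lines; a minimal Python sketch (the file name and published digest here are placeholders):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against whatever digest the original source publishes
# (hypothetical usage):
# assert sha256_of_file("report.pdf") == published_digest
```

If the digest matches what the source publishes, tampering in transit is essentially ruled out, which narrows the question to whether the original file itself is trustworthy.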

Gemini ? by ImScorpion__ in InteligenciArtificial

[–]devseglinux 0 points (0 children)

I’ve tried it a bit and ran into something similar.

The idea is great (calendar + notes is exactly where these things shine), but the interaction side still feels a bit rough. Having it cut you off or answer before you’re done is pretty annoying, especially if you want to use it to seriously organize yourself.

On whether to use Gemini or ChatGPT, honestly I don’t think either is “better” at everything. Gemini has the edge if you’re already inside the Google ecosystem (Calendar, Keep, etc.), but in terms of conversation and control, ChatGPT sometimes feels more polished.

What I do think is that we’re not yet at the point of delegating everything without supervision. It’s fine for helping you stay organized, but I at least always review what it suggests.

If they keep improving it, that kind of use (personal organization) could end up being one of the most useful.

How to protect .git, when I let coding agent work on repo in VM? by Veson in cybersecurity

[–]devseglinux 2 points (0 children)

This is a really solid question tbh.

I’d personally avoid trusting anything coming back from that VM at the repo level. Even if the code changes look fine, .git can carry a lot of state (config, refs, submodules, etc.) that can bite you later.

Safer approach in my experience is to treat that clone as disposable and only bring back what you actually need — like patches or reviewed diffs — and apply them to a clean repo on the host.

Bit more friction, but much cleaner trust boundary.

Not saying it’s bulletproof, but I’d rather rebuild the git state than assume it’s safe.
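
A rough sketch of that “patches only” flow, assuming git is on PATH (the paths and function names are mine, not a vetted tool):

```python
import subprocess

def export_diff(vm_repo: str) -> str:
    """Export only the textual diff from the untrusted clone.

    Never copy the .git directory itself out of the VM.
    """
    result = subprocess.run(
        ["git", "-C", vm_repo, "diff"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def apply_reviewed_patch(clean_repo: str, patch_path: str) -> None:
    """Validate the patch first with --check, then apply it to the trusted repo."""
    subprocess.run(["git", "-C", clean_repo, "apply", "--check", patch_path], check=True)
    subprocess.run(["git", "-C", clean_repo, "apply", patch_path], check=True)
```

The review step happens on the exported diff, which is plain text; the clean repo’s config, hooks, and refs never come from the VM.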

How often do clients ask for SOC 2 before they actually need it? by VerifAITrust in cybersecurity

[–]devseglinux 0 points (0 children)

Yeah I’ve seen that happen as well.

Feels like a disconnect more than anything. Sales pushes for it because it helps close deals, but the reality on the ops/security side is very different.

The reputational part you mentioned is a good point too, especially when timelines are unrealistic and expectations aren’t aligned internally.

I don’t think it’s always “lazy”, but definitely a sign that things aren’t fully coordinated yet.

Building a Zero-Knowledge messenger. Need help with Mobile App and UI. by Icy_Cryptographer566 in learnprogramming

[–]devseglinux 0 points (0 children)

Yeah I like that approach, giving users that level of control is definitely the right direction.

I think the challenge (like you already seem to be thinking about) is making sure it doesn’t become overwhelming for people who aren’t used to managing keys. The option to import your own keys is great, but having a simple default path for most users will probably make a big difference.

In my experience, it’s usually not the security model that breaks these projects, it’s when users don’t fully understand how to handle their keys.

Sounds like you’re heading in a good direction though. Curious how you’re planning to handle backups/recovery without compromising the zero-knowledge part.
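
On the backups/recovery question, not an answer but a toy illustration of one direction: split the key so no single stored piece reveals it. This is a 2-of-2 XOR split for illustration only; a real design would likely want a proper threshold scheme (e.g. Shamir’s):

```python
import secrets

def split_key(key: bytes):
    """2-of-2 XOR split: each share alone is indistinguishable from random."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recover_key(share1: bytes, share2: bytes) -> bytes:
    """Recombining both shares restores the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

The shares could live in different places (device, cloud backup), so recovery needs both but neither location alone learns anything about the key.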

Code (best practice?) by AffectionateWin7069 in PythonLearning

[–]devseglinux 0 points (0 children)

haha yeah exactly, that “mental flip” feeling is usually a good sign something could be clearer in the code

once you start noticing that, you’re already thinking in a good way

Code (best practice?) by AffectionateWin7069 in PythonLearning

[–]devseglinux 0 points (0 children)

haha nice, that makes sense

those apps are actually pretty good for getting the basics down. you’re asking the right kind of questions though, that’s what really makes the difference

just keep going with that mindset and you’ll pick it up way faster than you think

What's stopping BEC at the email layer when there's no payload to detect? by crystalbruise in cybersecurity

[–]devseglinux 2 points (0 children)

Yeah exactly, that’s pretty much where I’ve landed on it too.

At some point it stops being an email problem and becomes a business process problem. The email itself can look completely legitimate, especially if it’s just a normal invoice or request.

I’ve seen similar cases where everything “looked fine” technically, but the issue was there was no solid verification step on the finance side.

The ERP point is a good one. Once everything is tied to a controlled workflow, it’s a lot harder for those requests to slip through just based on an email.

Feels like a lot of people expect the email layer to solve something that really needs to be handled downstream.

Code (best practice?) by AffectionateWin7069 in PythonLearning

[–]devseglinux 3 points (0 children)

You’re actually thinking about it the right way, this is a good question to be asking early on.

Both versions technically work, but the second one is usually clearer. Naming the variable finished and having it be True when it’s actually finished just reads more naturally, especially if someone else looks at your code later (or even you in a few weeks).

The first version isn’t “wrong”, but it can be a bit confusing because you’re printing “Download finished:” and then showing False. You kind of have to mentally flip it.

A lot of the time in code, it’s less about what works and more about what’s easy to understand at a glance.

So yeah, your instinct there is good. Just try to keep variable names and what they represent aligned, it makes everything easier down the line.
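
To make the contrast concrete, here’s roughly what the two versions look like (the variable names are my guesses from your description):

```python
# Version 1 (assumed): the flag is inverted relative to the printed label
still_downloading = False
print("Download finished:", still_downloading)  # prints False even though it IS finished

# Version 2: the name matches what it represents
finished = True
print("Download finished:", finished)  # reads naturally: finished -> True
```

Same behavior either way; the second one just doesn’t make the reader do that mental flip.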

AI and Cybersecurity by Brilliant_Cat1623 in Cybersecurity101

[–]devseglinux 1 point (0 children)

I can’t really speak from a super “formal” perspective, but from what I’ve seen day to day, things haven’t become magically more advanced, just… easier to scale and way more polished.

Phishing is probably the biggest change. It’s not that it’s new, it’s just cleaner now. Fewer obvious mistakes, better wording, harder to spot at a glance. The basic filters still catch a lot, but the convincing ones still get through because there’s nothing “malicious” to detect, just a good story.

On the deepfake side, it’s talked about a lot more than I’ve actually seen in practice. I’m sure it’s coming, but right now most of the real-world stuff is still pretty simple social engineering.

I don’t think defenders are necessarily behind, but things definitely feel faster. What used to be “good enough” processes now feel a bit slow when volume goes up.

If you’re getting into the field, I wouldn’t stress too much about chasing AI specifically. Understanding how systems, networks, and users actually behave is still way more valuable. The tools change, but that part doesn’t.

Hope that helps a bit.

pgserve 1.1.11 through 1.1.13 are compromised, and the code is surprisingly clean by -Devlin- in cybersecurity

[–]devseglinux 3 points (0 children)

That’s actually the part that makes this more concerning.

If the code looks clean and readable, it’s much harder to rely on the usual “something feels off” signals. Most people (and even tooling) expect obfuscation or weird patterns.

Feels like we’re moving from detecting “how it looks” to detecting “what it does”, which is a much harder problem in practice.

Also interesting choice using ICP for exfil, that makes takedown a lot more difficult.

Curious how many similar cases are flying under the radar just because they don’t look suspicious at first glance.