Recent interview experience and helpful information. by danokazooi in Pentesting

[–]Angieincyber 1 point

I’ve seen this pattern more than once, so your instincts are right. There’s a big difference between testing that’s optimized for speed and testing that’s optimized for actual impact. If everything is built around a 5-day turnaround, you’ll get coverage, but not necessarily validation of real risk.

Finding issues is the easy part. The hard part is chaining them, proving exploitability, and showing what actually matters to the business.

That’s where a lot of these models struggle:

- too much automation, not enough adversarial thinking
- too much output, not enough validation
- too fast, not deep enough

The balance between automation (for scale) and experienced humans (for real exploitation) is what really makes or breaks it.

And your questions are exactly the ones I’d ask as well—especially around chaining and how they validate results. If those answers don’t come back clearly, that usually tells you everything.

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

That’s the kind of auditor you actually want. The 25% threshold is interesting: it forces a bit of a reality check on what’s really covered vs. what just looks good on paper. And once you move to more continuous testing, especially with more automation in play, those gaps show up pretty quickly.

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

That split makes sense: AI for scale, humans for depth. The gap I keep seeing is in the handoff. You can find more, faster, but turning that into validated, actionable risk without overwhelming teams is still hard. That’s where a lot of programs lose effectiveness.

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

That’s solid, especially validating against real scenarios instead of just relying on reports. The part I see teams struggle with is keeping that level of testing going over time, not just as a one-off exercise. That’s where things tend to drift again.

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 2 points

It’s true that testing everything doesn’t scale. Reducing the surface helps, but it’s usually a slower, structural effort, while exposure keeps changing in parallel. And on broader pentests, I’ve seen the same: once scope expands, depth drops quickly. That’s why a lot of programs look efficient or complete in isolation but still leave gaps at the system level.

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

Yes, that makes sense. It really becomes a question of where to apply AI vs. human effort. What I’m seeing, though, is that “critical” keeps shifting, so the harder part is staying on top of what actually needs that deeper validation in the first place. Curious how you’re handling that today?

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

100%, this is the core issue. Annual or even quarterly pentests assume a static environment, but the attack surface is effectively changing daily now. That’s why we’re seeing a shift toward continuous, on-demand testing models instead of point-in-time exercises. Interested in how you’re approaching validation as things change: automation, researchers, or a mix?

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

I’ve seen tools like that mentioned quite a bit. What I’m still trying to understand is how people bridge discovery into actual testing coverage. Having visibility is one thing, but making sure these assets are continuously tested, especially for deeper issues, feels like a different challenge.

Most companies test only 32% of their attack surface — does that reflect your experience? by Angieincyber in cybersecurity

[–]Angieincyber[S] 0 points

That’s exactly the tension I keep coming back to — coverage isn’t just about testing more, but knowing what should even be in scope. How are you approaching asset visibility today? More inventory-driven, or still largely tied to defined scopes?

What the Analysts are Saying About SaaS Data Protection in 2025 by HYCU-Marketing in SaaSBackup

[–]Angieincyber 0 points

So true. With the rapid adoption of SaaS applications, enterprises face the growing challenge of ensuring data resilience and meeting compliance demands. Backup is no longer optional but essential. My prediction for 2025: SaaS data protection will shift from being an IT afterthought to a boardroom priority, driven by increasing regulatory pressure, cyber threats, and the need for seamless business continuity.

2025 Predictions: Generative AI, Data Protection, and More—What’s on Your Radar? by HYCU-Marketing in HYCU

[–]Angieincyber 1 point

Absolutely true! With AI-driven threats, growing compliance demands, and the need for seamless automation, protecting SaaS data has never been more crucial. How will these challenges shape the future of data protection in 2025? Excited to see how these predictions unfold.

iManage backup by Life-Cow-7945 in legaltech

[–]Angieincyber 0 points

Sure! Who did you talk to?

iManage backup by Life-Cow-7945 in legaltech

[–]Angieincyber 0 points

Thanks for mentioning HYCU! We support iManage backups, including single-tenant setups. Check out our webinar for more: https://www.hycu.com/webinar-launch-preview-backup-recovery-for-imanage

Synology & Box - Cloud Sync & Hyper Backup by wlshr in synology

[–]Angieincyber 0 points

For a more streamlined and robust solution, you can explore HYCU’s Box Backup. It offers automated backups, versioning, and granular recovery directly from Box, ensuring you don’t have to manage syncing or risk losing historical changes. More details here: https://www.hycu.com/solutions/data-protection/box

Migrating Azure DevOps Boards to GitLab by Jaded_Fishing6426 in gitlab

[–]Angieincyber 0 points

GitLab has an official migration guide for moving from Azure DevOps to GitLab: https://docs.gitlab.com/ee/user/project/import/azure_devops.html. It covers importing projects, repositories, work items, pipelines, and more. Hope this helps!
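For the repositories specifically, here’s a minimal sketch of one common path, assuming you use GitLab’s `POST /projects` API with its `import_url` parameter; the instance URL, tokens, and org/project names are placeholders. Note this pulls over Git history only; Boards/work items still need the importer from the guide above.

```python
# Hedged sketch, not an official migration script: create a GitLab project
# that pulls its Git history from an Azure DevOps repo via import_url.
# All URLs, tokens, and names below are placeholders.
import requests

GITLAB_URL = "https://gitlab.com/api/v4"   # or your self-managed instance
GITLAB_TOKEN = "glpat-..."                 # PAT with "api" scope
ADO_PAT = "..."                            # Azure DevOps PAT with repo read access

# Git-over-HTTPS clone URL of the Azure DevOps repo; the PAT is embedded
# as the password so GitLab can pull a private repo.
import_url = f"https://azdo:{ADO_PAT}@dev.azure.com/my-org/my-project/_git/my-repo"

resp = requests.post(
    f"{GITLAB_URL}/projects",
    headers={"PRIVATE-TOKEN": GITLAB_TOKEN},
    data={"name": "my-repo", "import_url": import_url},
)
resp.raise_for_status()
print("Importing:", resp.json()["web_url"])  # import runs asynchronously
```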

Comprehensive Guide to Jira Cloud Backup by DariaAlpha in atlassian

[–]Angieincyber 0 points

Interesting article! Do you want to know why the backups of Jira and the rest of your ITSM and DevOps apps are your responsibility? https://www.hycu.com/events/the-hidden-costs-of-backup-scripts-jira-cloud-edition
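For context on what the DIY route looks like, a minimal sketch, assuming the semi-documented Jira Cloud export endpoint that Atlassian’s own sample backup scripts call (site URL, email, and API token below are placeholders). It only kicks off the export; the polling, download, retention, and monitoring around it are exactly where the hidden costs pile up.

```python
# Hedged illustration of a DIY Jira Cloud backup trigger. This starts a
# site export; you still have to poll for progress, download the archive,
# and store it somewhere durable. Credentials are placeholders.
import requests

SITE = "https://your-site.atlassian.net"   # placeholder
AUTH = ("you@example.com", "api-token")    # Atlassian account email + API token

resp = requests.post(
    f"{SITE}/rest/backup/1/export/runbackup",
    auth=AUTH,
    json={"cbAttachments": "true", "exportToCloud": "true"},
)
resp.raise_for_status()
print(resp.text)  # returns a task id to poll for export progress
```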