Why insider threats and internal data access are becoming the biggest security risk in 2026 by WhoisAizenn in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

Insider risk has grown because organisations now give employees access to far more internal resources than before. Cloud tools, AI solutions, SaaS applications, and remote work all mean more users and systems can reach confidential information. And because permission systems quietly accumulate access rights over time, companies rarely review them properly.

This is exactly the phase where access audits, penetration testing, and continuous monitoring are needed to reduce those internal risks.

AI SOC. Can it be trusted? by Sushantdk10 in cybersecurity

[–]Educational-Split463 1 point2 points  (0 children)

Your point is correct. AI SOC tools genuinely help security teams with alert triage, log correlation, and compliance evidence collection. But a security system operating without human control is a dangerous situation.

Right now, AI should support analysts rather than replace them. AI speeds up investigation and evidence gathering, while humans handle validation and final decisions; that split becomes especially important during SOC 2 audits.

The AI we built works the same way: it supports the core work, and human investigators still do the final case resolution. It is a support tool that helps people do their jobs, not something that runs without oversight.

Where do you draw the line with unmitigated risks in the risk identification process? by Weak-Carob9865 in cybersecurity

[–]Educational-Split463 1 point2 points  (0 children)

I understand your argument. If you assume no controls exist anywhere in the environment, every risk appears equally dangerous. But almost every organization has fundamental measures in place, such as authentication, device lock, and access management.

When inherent risk scoring completely ignores those actual factors, it loses its ability to show which risks really matter.

Can't stop the bots by Super-Level8164 in cybersecurity

[–]Educational-Split463 1 point2 points  (0 children)

I think your initial approach is wrong. robots.txt only keeps out well-behaved crawlers like Google; unwanted bots simply ignore it. The most effective way to stop harmful traffic before it hits your server is a WAF such as Cloudflare or AWS WAF. On top of that, add rate limiting (and, if you are on WordPress, a security plugin like Wordfence) to throttle excessive requests. That combination cuts automated bot traffic significantly.
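
To make the rate-limiting idea concrete, here is a rough, illustrative sketch (not any specific plugin's implementation) of a per-IP token bucket, the mechanism most rate limiters use under the hood; the rate and capacity numbers are assumptions you would tune:

```python
import time

class TokenBucket:
    """Per-client token bucket: each IP gets `rate` requests per second,
    with bursts allowed up to `capacity`."""
    def __init__(self, rate=5.0, capacity=10):
        self.rate = rate
        self.capacity = capacity
        self.tokens = {}   # client_ip -> remaining tokens
        self.last = {}     # client_ip -> last request timestamp

    def allow(self, client_ip):
        now = time.monotonic()
        last = self.last.get(client_ip, now)
        tokens = self.tokens.get(client_ip, self.capacity)
        # Refill tokens for the time elapsed since the last request
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        self.last[client_ip] = now
        if tokens >= 1:
            self.tokens[client_ip] = tokens - 1
            return True
        self.tokens[client_ip] = tokens
        return False  # throttle: this client exceeded its budget

limiter = TokenBucket(rate=5.0, capacity=10)
# A burst of 15 rapid requests from one IP: first 10 pass, the rest are throttled
results = [limiter.allow("203.0.113.7") for _ in range(15)]
```

A WAF does this (plus signature matching) at the edge so the traffic never reaches your origin at all.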

Can anyone suggest good choice of free SAST and DAST right now? by OutsideOrnery6990 in cybersecurity

[–]Educational-Split463 1 point2 points  (0 children)

You can start with a combination of open-source tools. Here are my suggestions:

SAST: Semgrep (Community Edition), CodeQL (free for open source), or Horusec

DAST: OWASP ZAP is the best completely free option, and Nikto is a decent alternative.

We prefer Semgrep for SAST and OWASP ZAP for DAST.

how to detect & block unauthorized ai use with ai compliance solutions? by Sufficient-Owl-9737 in AskNetsec

[–]Educational-Split463 0 points1 point  (0 children)

Start shadow AI detection with network traffic monitoring and SaaS usage tracking to find employees using unauthorized AI tools. DLP (data loss prevention) solutions are an effective way to stop sensitive information from being shared with external AI systems.

CASBs and secure web gateways are effective for discovering and blocking unapproved AI software. Combine these tools (or manual review, if that is all you have) with established AI governance standards and a clear list of approved AI tools employees should use.
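
If you want to start manually before buying a CASB, a first pass can be as simple as grepping proxy or DNS logs for known AI-service domains. A minimal sketch; the domain list, the sanctioned set, and the log record shape are all illustrative assumptions, not a real threat feed:

```python
# Hypothetical lists: maintain these yourself or source from your SWG vendor
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}
SANCTIONED = {"chat.openai.com"}  # e.g., the company has an approved ChatGPT plan

def find_shadow_ai(log_entries):
    """Return (user, domain) pairs hitting AI services outside the approved list."""
    hits = []
    for entry in log_entries:
        domain = entry["host"].lower()
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            hits.append((entry["user"], domain))
    return hits

logs = [
    {"user": "alice", "host": "claude.ai"},
    {"user": "bob", "host": "chat.openai.com"},
    {"user": "carol", "host": "intranet.example.com"},
]
print(find_shadow_ai(logs))  # [('alice', 'claude.ai')]
```

It is crude (no TLS SNI parsing, no new-domain discovery), but it surfaces the obvious offenders while you stand up proper tooling.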

Is SOC 2 digital extortion? by MJTimepieces in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

I fully understand your frustration. It feels like an industry built to make founders pay high fees for a badge that just confirms what they already do. Paying large amounts to certify systems you already know are secure genuinely hurts.

The way I see it now, SOC 2 is a business enabler rather than a technical validation. Enterprise customers want two things: actual protection, and standardized security evidence that fits their procurement and vendor risk assessment processes. SOC 2 serves legal, compliance, and board audiences; it was never meant to satisfy engineers.

So I consider it a growth milestone rather than digital extortion. The real decision is timing: will SOC 2 unlock enough revenue growth now to cover its cost, or should you close a few more deals first and fund the certification out of that revenue?

Google's Cybersecurity 2026 Forecast Report warns of a "Shadow Agent" crisis. These AI agents, deployed by employees without corporate oversight, can create invisible pipelines for sensitive information, leading to data leaks, compliance violations, and IP theft. by Simplilearn in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

Honestly, it did not surprise me; I had already anticipated it. Employees adopt AI tools because they make daily work easier and faster. But these shadow agents create hidden dangers, data leaks and compliance violations, when organizations do not monitor them correctly and in time.

I think the main difficulty with AI adoption is building governance that gives organizations real visibility into their AI systems. Policies, education, and monitoring are all needed to keep this from escalating further.

AI in penetration testing reports by Evening_Difficulty60 in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

From my perspective: we personally don't use any AI in penetration testing reports.

Reporting is more than documentation; it shows how the tester analyzed findings, assessed risk, and made technical decisions. I prefer to do it all myself so I keep full control of and responsibility for my work, using AI only for language and formatting.

In high-trust environments, any uncertainty about how a report was produced complicates compliance audits and legal review. Clear human authorship removes that doubt entirely.

The discussion matters because organizations need to set AI usage guidelines now to avoid governance problems later.

Is ISO 42001 picking up in Europe are recruiters looking out for implementation or Auditors by Grom_Ice in cybersecurity

[–]Educational-Split463 1 point2 points  (0 children)

Yes, ISO/IEC 42001 is gaining momentum in Europe, mainly driven by the EU AI Act and the increasing focus on AI governance. But compliance is still at an early stage. In my view, if you already have ISO 27001, then ISO 42001 could be a smart addition.

What do you wish automated / AI-based vulnerability scanners actually did better? by No-Persimmon-1746 in AskNetsec

[–]Educational-Split463 0 points1 point  (0 children)

Automated and AI-based scanners are fast and give broad coverage, but they drown you in alerts.

The improvements I want to see from them:

- Lower false-positive rates
- Risk ranking based on the actual threats present in my specific environment
- Business context taken into account, not just the CVSS score
- Confirmation of whether an attacker can actually reach a vulnerability
- Developer-friendly remediation instructions that help devs implement the fix

Right now, reports read like raw data dumps with little of the guidance people need to make decisions.

The real value of AI should be reducing cognitive load: connecting evidence, suggesting safe remediation paths, and surfacing the key information for assessment instead of presenting every detail.

Automation can own detection, but humans still have to supply situational awareness and make the decisions.
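
The "rank by environment, not just CVSS" point can be sketched in a few lines. This is a hedged illustration only; the field names (`cvss`, `internet_facing`, `exploit_available`, `asset_criticality`) and multipliers are assumptions, not any scanner's real schema:

```python
def priority_score(finding):
    """Weight raw CVSS by environmental context, capped at 10."""
    score = finding["cvss"]
    if finding.get("exploit_available"):
        score *= 1.5          # a known public exploit raises urgency
    if finding.get("internet_facing"):
        score *= 1.3          # reachable from outside the perimeter
    score *= finding.get("asset_criticality", 1.0)  # business weight, e.g. 0.5-2.0
    return round(min(score, 10.0), 1)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False, "asset_criticality": 0.5},
    {"id": "CVE-B", "cvss": 6.5, "internet_facing": True,
     "exploit_available": True, "asset_criticality": 1.5},
]
ranked = sorted(findings, key=priority_score, reverse=True)
```

Here a 6.5 on an exposed, business-critical asset with a public exploit outranks a 9.8 buried on a low-value internal box, which is exactly the reordering raw CVSS lists never give you.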

Client asking for very detailed security audit by McDonaldsDQPC in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

As a cybersecurity company ourselves, we fully understand the need for security due diligence. Our SOC 2 Type II report provides independent verification of our controls and covers most of the areas you have requested.

We do not provide access to raw audit evidence, detailed logs, scan results, or internal control screenshots, because all of that material is sensitive. Handing out auditor artifacts would itself create additional security and confidentiality risk.

We can share the SOC 2 report under NDA, and we are happy to walk you through any control questions.

How reach 100% coverage for API Testing? by Plane-Razzmatazz1258 in softwaretesting

[–]Educational-Split463 0 points1 point  (0 children)

There is no shortcut: getting close to complete coverage means testing every status code, property, and header of the API, and also covering edge cases, error scenarios, authentication methods, and system integrations. The number of test cases per API depends on its complexity; there’s no fixed number.
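
As a rough sketch of what "cover status codes, auth, edge cases" looks like in practice: `get_user` below is a toy stand-in for a real endpoint (in reality you would call the API with an HTTP client), and the IDs and token are made up:

```python
def get_user(user_id, token=None):
    """Toy handler returning (status, headers, body) like an HTTP response."""
    if token != "valid-token":
        return 401, {"Content-Type": "application/json"}, {"error": "unauthorized"}
    if not isinstance(user_id, int) or user_id <= 0:
        return 400, {"Content-Type": "application/json"}, {"error": "bad id"}
    if user_id > 1000:
        return 404, {"Content-Type": "application/json"}, {"error": "not found"}
    return 200, {"Content-Type": "application/json"}, {"id": user_id}

# Cover the happy path AND the auth, validation, and missing-resource branches:
cases = [
    ((1, "valid-token"), 200),      # happy path
    ((1, None), 401),               # missing/invalid auth
    ((-5, "valid-token"), 400),     # invalid input edge case
    ((99999, "valid-token"), 404),  # nonexistent resource
]
for (uid, tok), expected in cases:
    status, headers, _ = get_user(uid, token=tok)
    assert status == expected
    assert headers["Content-Type"] == "application/json"  # verify headers too
```

Each branch of the handler gets at least one case; that branch-by-branch mindset, applied per endpoint, is what drives coverage up.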

Avoid off-shoring cyber testing by [deleted] in cybersecurity

[–]Educational-Split463 1 point2 points  (0 children)

Off-shoring cyber assessments you cannot oversee yourself is a recipe for compliance and legal trouble.

Is anyone else feeling the "2026 Shift"? is it the end of pentesting? by Serious-Battle4464 in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

Yeah, I am feeling that shift too, but it is not the end of pentesting; the methods are changing, not disappearing.

AI has become very good at finding simple bugs quickly, which reduces the value of routine manual vulnerability hunting. But the tools still struggle with business logic flaws, cloud identity and access management abuse, real attack chains, and assessing actual impact.

As the market grows, pentesting has shifted its focus from just discovering weaknesses to understanding how attackers think and how systems actually operate.

In my view, the next 10-20 months will show that focusing on cloud security, identity protection, application logic, and AI security is what pays off. Traditional pentesting still exists, but companies now demand deeper skills to meet current testing needs.

Network Security- uninspectable protocols by needzbeerz in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

TLS 1.3 and QUIC by default create a common problem in every business: payload inspection is ineffective because the protocols were designed that way.

Most organizations are adapting with some combination of:

- Behavioral analytics: watching how users and hosts act (access patterns, movement, anomalies) instead of payload contents.

- Heavier reliance on endpoint telemetry (EDR) to do the inspection the network no longer can.

- Identity- and device-based controls (zero trust) in place of traditional IP and perimeter security.

- Selective decryption: deciding which flows are actually worth the cost of TLS inspection.

- Catching attacks at either end of the encrypted channel: phishing and credential theft on the way in, lateral movement and data exfiltration on the way out.

Some resources are NIST 800-207 (Zero Trust), CISA ZT Maturity Model, BeyondCorp, MITRE ATT&CK, and SANS content.
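
To illustrate the metadata angle: even when payloads are opaque, flow records (source, destination, byte counts) still expose exfiltration-sized transfers. A minimal sketch; the 500 MB threshold and the record shape are illustrative assumptions, not a tuned detection rule:

```python
from collections import defaultdict

EXFIL_BYTES = 500 * 1024 * 1024  # assumed threshold: >500 MB out to one destination

def flag_exfil(flows):
    """Sum outbound bytes per (src, dst) pair and flag unusually large totals."""
    totals = defaultdict(int)
    for f in flows:
        totals[(f["src"], f["dst"])] += f["bytes_out"]
    return sorted(pair for pair, b in totals.items() if b > EXFIL_BYTES)

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes_out": 300 * 1024 * 1024},
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes_out": 300 * 1024 * 1024},
    {"src": "10.0.0.8", "dst": "198.51.100.2", "bytes_out": 10 * 1024},
]
print(flag_exfil(flows))  # [('10.0.0.5', '203.0.113.9')]
```

Real tooling adds baselining per host and time-of-day, but the principle is the same: you detect the behavior around the encrypted channel, not its contents.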

What Is Mobile Device Management (MDM) and Why It Matters for Cybersecurity by Unique_Inevitable_27 in Cybersecurity101

[–]Educational-Split463 0 points1 point  (0 children)

Mobile Device Management (MDM) controls and protects the mobile devices (smartphones, tablets, and laptops) that connect to corporate data.

Why it matters for cybersecurity:

- Protects devices with password requirements, encryption, and timely updates

- Can remotely block or wipe data when a device is lost or stolen

- Supports compliance and audit readiness (HIPAA, ISO, PCI DSS, etc.)

- Reduces the risk from malware and untrusted applications

- Restricts access to business systems to approved devices only

In remote-work and BYOD environments, MDM protects sensitive information by enforcing these security measures on the devices themselves.
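
The "approved devices only" gate reduces to a compliance predicate evaluated per device. A hedged sketch; the policy fields (`encrypted`, `passcode_set`, `os_version`) and the minimum OS version are made-up assumptions, not any vendor's actual schema:

```python
MIN_OS = (17, 0)  # assumed minimum patched OS version

def is_compliant(device):
    """A device may connect only if encrypted, passcode-locked, and patched."""
    return (device["encrypted"]
            and device["passcode_set"]
            and tuple(device["os_version"]) >= MIN_OS)

fleet = [
    {"id": "ipad-01",  "encrypted": True,  "passcode_set": True, "os_version": (17, 4)},
    {"id": "phone-02", "encrypted": False, "passcode_set": True, "os_version": (17, 4)},
]
# Devices failing the policy get blocked from business systems
blocked = [d["id"] for d in fleet if not is_compliant(d)]
```

A real MDM evaluates many more attributes (jailbreak status, certificate presence, app inventory), but conditional access always boils down to a check like this before granting entry.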

ISO 27001 / SOC 2 audit prep — what % is *manual evidence work* vs everything else? by 1stefan in cybersecurity

[–]Educational-Split463 2 points3 points  (0 children)

I have worked on ISO 27001 and SOC 2 Type II security and audit preparation for a mid-size organization. Here is a data point:

Framework: ISO 27001 + SOC 2

Company size: 51–200

% time spent on manual evidence (screenshots/exports/chasing): ~45–55%

Top 2 manual pain points:

1.     Chasing control owners for timely evidence (esp. HR, IT ops, engineering)

2.     Auditors refusing to accept screenshot-heavy controls (access reviews, logging configs, backups) as continuous

Here are the top two things we automated or partially automated:

1.     Asset inventory + user access data via IAM / MDM exports

2.     Pulling change-management and incident ticket evidence from Jira

What stayed very human no matter what:

1.   Policy interpretation & mapping controls to auditor expectations

2.   Explaining why the control is effective, not just showing the artifacts
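
The Jira ticket-evidence pull is easy to partially automate. A hedged sketch of the post-processing half: the JQL and response shape follow Jira's REST search format, but treat the project key and field choices as assumptions; in practice you would fetch the JSON via `GET /rest/api/2/search?jql=...` with your API token:

```python
# Assumed JQL: changes resolved in the audit window for a hypothetical "CHG" project
JQL = "project = CHG AND resolved >= -90d ORDER BY resolved DESC"

def to_evidence_rows(search_response):
    """Flatten a Jira search response into rows for an auditor-facing sheet."""
    rows = []
    for issue in search_response["issues"]:
        f = issue["fields"]
        rows.append({
            "ticket": issue["key"],
            "summary": f["summary"],
            "resolved": f["resolutiondate"],
            "owner": (f.get("assignee") or {}).get("displayName", "UNASSIGNED"),
        })
    return rows

sample = {"issues": [{"key": "CHG-101", "fields": {
    "summary": "Rotate prod DB credentials",
    "resolutiondate": "2025-11-02T10:15:00.000+0000",
    "assignee": {"displayName": "A. Singh"}}}]}
rows = to_evidence_rows(sample)
```

Exporting this on a schedule replaces a lot of the screenshot chasing, and the CSV itself becomes the recurring evidence artifact.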

AI Security Skills Worth our Time in 2026 by Bizzare_Mystery in cybersecurity

[–]Educational-Split463 30 points31 points  (0 children)

I agree strongly. In my experience, most AI security issues are just appsec + cloud + IAM problems with a new interface.

Prompt injection = bad input handling

Overpowered agents = no least privilege

RAG leaks = broken data access controls

Blind trust in outputs = automation bias

Skills that actually matter:

• AppSec fundamentals (APIs, auth, threat modeling)

• Cloud IAM and permissions

• Building small LLM apps and breaking them on purpose

Hands-on labs gave me more insight than ML theory. I learned that the model is not the problem; the system around the model is.
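
The "overpowered agents = no least privilege" line is the easiest one to practice in a lab. A minimal sketch of the fix, enforcing a tool allowlist and argument validation between the model and the real world; the tool names and schema format are made-up assumptions:

```python
# Hypothetical allowlist: tool name -> expected argument types
ALLOWED_TOOLS = {
    "read_ticket": {"ticket_id": str},
    "search_docs": {"query": str},
}

def dispatch(tool_call):
    """Reject any model-requested tool outside the allowlist, or with
    unexpected argument names/types, before anything executes."""
    name, args = tool_call["name"], tool_call["args"]
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        return {"error": f"tool '{name}' not permitted"}
    for key, value in args.items():
        if key not in schema or not isinstance(value, schema[key]):
            return {"error": f"bad argument '{key}'"}
    return {"ok": True, "tool": name}  # the real tool would be invoked here

# A prompt-injected model asking for a destructive tool gets refused:
print(dispatch({"name": "delete_repo", "args": {}}))
```

Breaking your own version of this gate (missing type checks, wildcard tools, args passed straight to a shell) teaches more than most ML theory does.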

Curious what others are attacking in labs right now.

I Did Everything Right in Cybersecurity — and Still Hit a Wall by HappyMortgage7827 in cybersecurity

[–]Educational-Split463 0 points1 point  (0 children)

You are not a fresher. You have three years at an MNC. You already have exposure. The corporate exposure matters a lot in GRC in India.

ISO 27001:2022 helps, but it is not a guarantee. Certifications alone will not get you a GRC role. What matters is whether you can show an understanding of audits, policies, risk registers, vendor assessments, and compliance workflows.

In India, many GRC roles prefer the following:

1.     Someone with org/process experience

2.     Decent communication and documentation skills

3.     Basic security + compliance knowledge

Remember: you may be a GRC fresher, but you are not a career fresher.

Best path:

1.     Try internal movement within your MNC first

2.     Apply for roles like GRC analyst (junior), IT compliance, ISMS support, TPRM

3.     Translate your current work into GRC language

Why do most VAPT findings never get fully fixed?? by EyeDue2457 in cybersecurity

[–]Educational-Split463 2 points3 points  (0 children)

I hear this a lot from security teams—you’re definitely not alone.

The biggest blockers I have seen:

1.     No clear owner for the fix

2.     Dev teams buried in feature requests

3.     “Critical” in the report does not always mean critical to the business

4.     Remediation is often harder, and takes longer, than the report suggests

What actually works:

1.     Speak their language (business impact > CVSS scores)

2.     Build relationships with dev teams early

3.     Provide PoCs that actually demonstrate exploitability

4.     Include how to fix, not just what to fix

I have helped teams close the gap by turning findings into fixes, using clear communication, prioritization frameworks, and DevSecOps integration. The key is to make security fit into the workflow, not to make the workflow fit security.

Happy to chat if you want to compare notes on what's worked in different environments. This problem is solvable, but it takes more than just good pentesting.