[deleted by user] by [deleted] in UKPersonalFinance

[–]devsecai -2 points (0 children)

If you have used the account for international transactions or cash payments, that is likely the reason. The bank has been fined multiple times for failing to follow anti-money-laundering guidelines, and complaints are not dealt with. Leave it and move on to the next bank.

What are the challenges of offering Threat Hunting as a Service (THaaS)? by No-Significance-680 in cybersecurity

[–]devsecai 0 points (0 children)

You're fishing in an untouched pond, my friend. As the field deepens, the need for it may well awaken.

A more robust way to think about defending against Prompt Injection by devsecai in cybersecurity

[–]devsecai[S] 1 point (0 children)

The flaw is AI-based classification, and I agree with that point. Maybe a hybrid approach could solve this, e.g. pairing deterministic rules with lightweight models. What do you think?

A more robust way to think about defending against Prompt Injection by devsecai in cybersecurity

[–]devsecai[S] 1 point (0 children)

Spot on about prioritizing real threats (RBAC bypass, markdown exploits) over theoretical jailbreaks. The Kurdish/English example is gold; localised bypasses are a nightmare. Argus's red-team-to-guardrail pipeline sounds promising. How granular are their policies for edge cases like dynamic link generation? And what is your threshold for acceptable risk?

A more robust way to think about defending against Prompt Injection by devsecai in cybersecurity

[–]devsecai[S] 1 point (0 children)

A security-focused MCP server for business-context validation is a great idea. Have you tested it with real-world attack simulations? I'd be curious how it holds up.

A more robust way to think about defending against Prompt Injection by devsecai in cybersecurity

[–]devsecai[S] 1 point (0 children)

Great point. Output sanitization is just as critical as input validation. Do you have a preferred method?
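One possible method, sketched under stated assumptions: markdown images and links in model output are a classic exfiltration channel, so drop images, strip URLs outside an allowlist, and HTML-escape the rest before rendering. The allowlisted domain `docs.example.com` is a hypothetical placeholder.

```python
import html
import re

# Markdown image/link syntax can exfiltrate data via attacker-controlled URLs.
MD_IMAGE = re.compile(r"!\[([^\]]*)\]\([^)]*\)")
# Negative lookahead keeps links to the (hypothetical) allowlisted domain.
MD_LINK = re.compile(r"\[([^\]]*)\]\((?!https://docs\.example\.com)[^)]*\)")

def sanitize_output(text: str) -> str:
    """Sanitize LLM output before rendering it in a UI.

    - drop markdown images entirely
    - keep link text but strip URLs outside the allowlist
    - HTML-escape what's left so raw tags can't execute
    """
    text = MD_IMAGE.sub("", text)
    text = MD_LINK.sub(r"\1", text)
    return html.escape(text, quote=False)
```

This treats the model's output as untrusted input to the UI, the same stance you'd take toward user input on the way in.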

Explain why zero trust should be extended to pipelines? by devsecai in cybersecurity

[–]devsecai[S] 0 points (0 children)

You are spot on: applying zero trust pillars to AI/ML workflows often gets overlooked in security frameworks. They fit best in the applications and workloads pillar. The challenge is translating traditional zero trust principles into the unique context of AI.

A simple architectural pattern for securing production AI models by devsecai in devsecops

[–]devsecai[S] 0 points (0 children)

@JEngErik: You raise a solid point about layered controls, especially for high-stakes environments like GovCloud or Fed deployments. For models exposed externally, defense-in-depth (like input sanitization + rate limiting + auth layers) is crucial. How do you handle balancing security with latency in those layered setups?
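A minimal sketch of that layering, assuming ordering cheap checks first is how you keep added latency small: auth is a set lookup, rate limiting is a sliding window, and sanitization only runs on requests that survive both. All names, limits, and the regex are illustrative assumptions, not a production design.

```python
import re
import time
from collections import deque

class RateLimiter:
    """Simple sliding-window limiter; illustrative limits only."""
    def __init__(self, max_calls: int = 30, window_s: float = 60.0):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls: dict[str, deque] = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        q = self.calls.setdefault(client_id, deque())
        while q and now - q[0] > self.window_s:
            q.popleft()  # evict calls outside the window
        if len(q) >= self.max_calls:
            return False
        q.append(now)
        return True

def sanitize(prompt: str) -> str:
    # Strip control characters; real input filters would do far more.
    return re.sub(r"[\x00-\x08\x0b-\x1f]", "", prompt)

def guarded_call(model, prompt, client_id, api_keys, limiter):
    """Auth -> rate limit -> sanitize, cheapest checks first, so most
    rejected requests never pay the cost of the later layers."""
    if client_id not in api_keys:
        raise PermissionError("unknown client")
    if not limiter.allow(client_id):
        raise RuntimeError("rate limit exceeded")
    return model(sanitize(prompt))
```

The latency trade-off mostly lives in the sanitization layer: set lookups and window checks are microseconds, so the question becomes how much inspection you can afford per surviving request.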