CrowdStrike's 2025 data shows attackers breach AI systems in 51 seconds. Field CISOs reveal how inference security platforms ...
From data poisoning to prompt injection, threats against enterprise AI applications and foundations are beginning to move ...
For financial institutions, threat modeling must shift away from diagrams focused purely on code to a life cycle view ...
A group of researchers has discovered a new attack called GPUHammer that can flip bits in the memory of NVIDIA GPUs, quietly corrupting AI models and causing serious damage, without ever touching the ...
AI-driven attacks leaked 23.77 million secrets in 2024, revealing that NIST, ISO, and CIS frameworks lack coverage for ...
A critical security flaw in MCP (Model Context Protocol) enables invisible data theft across all major AI and Agentic platforms New attack class exploits trusted AI agents to silently exfiltrate ...
Security researchers have devised a technique to alter deep neural network outputs at the inference stage by changing model weights via row hammering in an attack dubbed ‘OneFlip.’ A team of ...
Agentic AI tools are susceptible to the same risks as large language model (LLM) chatbots, but their autonomous capabilities may make their capacity to leak data and compromise organizations even ...
Researchers have developed a novel attack that steals user data by injecting malicious prompts in images processed by AI systems before delivering them to a large language model. The method relies on ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results