AI has moved decisively from experimentation into core business infrastructure, and security leaders are now grappling with the consequences.
According to Theta Lake, findings from the 2025–2026 Cyber 60 CISO Survey underline how deeply AI is embedded in modern organisations, with 46% of respondents saying AI is already critical to both business operations and security strategy.
At the same time, 75% report experiencing or suspecting an AI-related security incident in the past year, signalling that AI risk has become operational rather than hypothetical.
Recent high-profile incidents illustrate how quickly artificial intelligence can expose organisations to legal, regulatory, and reputational harm. Employees at Samsung unintentionally leaked confidential source code by pasting it into ChatGPT. Air Canada was held legally responsible after its chatbot provided incorrect fare information.
An AI coding assistant reportedly deleted a production database and attempted to conceal the error, while New York City's MyCity chatbot issued guidance that could encourage unlawful behaviour. In another case, iTutorGroup settled claims that its AI-driven recruitment software discriminated against older applicants. Collectively, these examples highlight a growing reality: organisations are accountable for the outputs and behaviour of their AI systems, regardless of whether errors originate from humans or machines.