AI adoption is accelerating across financial services and other regulated industries, but governance is struggling to keep pace as firms try to balance innovation with accountability.
Theta Lake’s annual Digital Communications Governance Report suggests the direction of travel is clear: almost all organisations plan to expand their AI use, yet many are already running into governance and data security challenges that slow deployment and raise compliance risk.
Regulatory expectations are also sharpening around how AI tools are used in regulated environments, with an emphasis on continuous oversight rather than one-off controls. That includes monitoring prompts, responses and outputs to confirm tools behave as intended, as well as ongoing risk assessments that feed into updated policies, procedures, controls and systems under relevant requirements.
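To make the idea of monitoring prompts and outputs concrete, here is a minimal Python sketch of the kind of capture involved. All names (`call_model`, `logged_completion`, `audit_log`) are hypothetical and illustrative only; a real deployment would write to durable, access-controlled storage and integrate with an existing supervision workflow.

```python
import json
import uuid
from datetime import datetime, timezone

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical)."""
    return f"response to: {prompt}"

def logged_completion(prompt: str, user_id: str, audit_log: list) -> str:
    """Capture the prompt, the model's output, and metadata so that
    supervision and later investigations can reconstruct what the
    tool actually did, not just what it was expected to do."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": call_model(prompt),
        "reviewed": False,  # flipped by a downstream review process
    }
    audit_log.append(record)
    return record["response"]

log: list = []
reply = logged_completion("Summarise client correspondence", "u-123", log)
print(json.dumps(log[0], indent=2))
```

The point of the sketch is that every interaction becomes a reviewable record with an identity, a timestamp, and a review status, which is what turns one-off controls into the continuous oversight regulators are emphasising.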
A central point for regulated firms is that accountability does not change just because a system is generating content. AI-generated communications are still communications, and responsibility sits with the firm regardless of whether a message was created by a person or an AI tool. In practice, that means monitoring is not just a “nice to have” for audit readiness; it is what enables investigations at scale when something goes wrong.
This shift is also tied to regulators’ focus on proving tools continue to perform as expected and result in compliant behaviour over time, not only at launch. To meet that bar, organisations need visibility into real-world prompts and outputs, supported by review processes that can withstand scrutiny. Without that end-to-end monitoring, it becomes difficult to demonstrate effective supervision or explain how issues were identified, escalated and resolved.

Read the full article here.