Over the past year, I’ve watched AI move from something we were experimenting with to something we simply use every day. Not just in theory, but in practice.
I see it clearly in my own team. There isn’t a single function in marketing that isn’t using AI in some way — from brand and content to demand generation, operations and analytics. In our case, this wasn’t casual experimentation. Early use required close oversight from our legal and compliance teams, which meant we had to think carefully about data, confidentiality and risk from the outset. What started cautiously is now part of how we work. AI helps us draft ideas, analyse performance, challenge our messaging, summarise research and speed up reporting. It is woven into our daily routines.
Across European organisations more broadly, the same shift is happening. AI tools are becoming part of normal working life. The change has been gradual enough not to feel dramatic, but significant enough that it is now routine.
What I find more interesting is not that adoption is growing, but where it is accelerating.
According to Soldo’s Spring Spend Index, average AI spend per company increased sharply last year, and total spend across the most widely used tools grew even faster. What stood out to me was that some of the fastest growth came from highly regulated sectors — financial services, public sector organisations and manufacturing — where governance, compliance and security are already part of everyday decision-making.
That felt important. As a CMO working in a regulated European business, I find it both familiar and slightly sobering.
Marketing has seen this pattern before
Marketing has been here before.
When SaaS tools first became widely available, teams adopted them quickly. Technology stacks expanded, and visibility often lagged behind usage. Finance and IT sometimes only became fully aware of new commitments once those tools were already embedded in the way teams worked.
The pattern was predictable: teams experimented locally, proved value quickly and only later formalised governance.
AI seems to be following a similar path — only faster, and with more at stake.
The European regulatory environment makes that particularly relevant. The EU AI Act, alongside GDPR, makes it clear that transparency, data protection and risk management are not optional extras but structural expectations (European Commission, Artificial Intelligence Act, 2024).
What makes this moment different is that AI is not just another productivity tool. It influences decisions, generates outputs and interacts directly with customer data and proprietary information. It does not simply support the work; in many cases, it shapes it.
In a regulated environment, that has direct implications for data protection, client confidentiality, intellectual property and regulatory accountability. When AI becomes part of daily workflows, legal and compliance exposure moves with it.
The finance, legal and compliance lens
For finance, legal and compliance leaders, the question is not only: “What are we spending on AI?” It is also: “Do we understand how and why these tools are being used — and what risks or value they create?”
Recent European research reflects this tension (see, for example, the European Central Bank’s work on AI adoption and risk in financial services). The ECB has noted both the rapid pace of AI adoption and the need for strong governance frameworks. Similarly, PwC’s European CEO Survey highlights AI as a source of growth, but also of regulatory and operational uncertainty.
Adoption is moving quickly, and I sometimes wonder whether governance frameworks are keeping pace.
That gap matters because it shapes how organisations deal with both opportunity and risk.
In the Soldo research, one theme was clear: usage is rising faster than structured training and guidance. Employees are experimenting, often with good intent, but not always with clear guardrails.
When adoption outpaces guidance, spend becomes harder to interpret. This is precisely why platforms such as Soldo matter: real-time visibility and controls at the point of spend allow finance teams to see and shape AI adoption as it happens, rather than reconstructing it afterwards.
A quieter challenge
Even the best platforms cannot fully address the personal or informal use of AI — tools accessed through private accounts or used outside formal systems. Putting guardrails around that kind of unseen adoption is far more complex, and it requires not just financial controls, but culture, clarity and shared responsibility.
Without that visibility, it becomes difficult to know whether spending reflects duplication, structured experimentation, genuine productivity gains or unmanaged risk.
Speed is not the problem
I have seen how powerful early adoption can be. When teams identify a real productivity opportunity and move decisively, the benefits can be immediate. The issue isn’t really speed. It’s clarity.
Invisible AI spend is not risky because it is innovative. It is risky because it lacks transparency — for finance, for legal teams and for compliance functions that remain accountable if something goes wrong.
Opacity creates two problems. First, leadership may not fully understand its exposure. Second, organisations may miss the opportunity to scale what is genuinely working.
Where marketing, finance and legal intersect
What strikes me is that marketing, finance and legal leaders are navigating the same shift from different starting points.
Marketing tends to adopt new tools early. Finance safeguards resources and financial discipline. Legal and compliance protect the organisation’s licence to operate.
All three now need to balance pace, compliance and measurable value, which means AI spend is not simply a budget line but a leadership issue.
The challenge is not whether to invest, as most organisations already are. The challenge is ensuring that AI spend is visible, understandable and aligned with organisational values — without stifling responsible experimentation.
Making AI spend legible
For European finance leaders in particular, this will feel familiar.
The goal is not to slow adoption, as that would almost certainly drive usage underground — even if, on this side of the Atlantic, we do occasionally pride ourselves on moving with a little more deliberation. The goal is to make it transparent.
When AI spend is visible and structured, organisations are better able to scale what works, reduce duplication, manage risk proactively and embed governance into everyday processes rather than layering it on afterwards.
In the end, this may not primarily be a tooling question. It may be a question of visibility, accountability and leadership.
I would be interested to hear how others are seeing this unfold.
Are you noticing AI adoption in spend data before it appears in formal strategy discussions? And how are you balancing pace with control in a regulatory landscape that is still taking shape?