Your Procurement Policy Board Needs an AI Update: Who Buys AI Tools (And Who Pays When They Break)?
Right, let's be honest with ourselves for a minute. Your organisation is probably awash with AI tools. Some sanctioned, some… less so. Someone in marketing is trialling a new generative AI for copy, a developer is using a coding assistant, and finance just found a shiny new tool to summarise reports. It’s all happening, isn’t it? And if you’re like most places I’ve worked with, the first time IT or security hears about it is when something goes sideways, a data export request lands on their desk, or a mysterious invoice for 'AI-powered content generation pro-tier' turns up.
This isn't a lecture, it's a reality check. I’ve been in the room when the penny drops, and believe me, it’s rarely a pleasant sound. We’re hurtling through a messy middle where AI is both a game-changer and a potential minefield. And the biggest, most glaring gap I see? It’s often right at the start: **who's actually buying these tools, and what happens when they inevitably go pear-shaped?**
Your procurement policy board, bless its cotton socks, is probably doing a perfectly adequate job for traditional software and SaaS. But AI? That’s a whole different kettle of fish. And if you’re not updating your policies now, you’re not just behind the curve; you’re heading for a right old mess.
## The Wild West of AI Acquisition: Who Buys What, And Why It’s a Problem
Let’s cut to the chase. In many organisations, AI tools are being acquired with all the strategic foresight of a squirrel burying nuts. A team member sees a cool demo, signs up for a free trial, then pops it on the company card. Or maybe a department head, keen to show innovation, greenlights a subscription without a second thought beyond the immediate perceived benefit. It’s quick, it’s easy, and it feels like progress.
This is how **shadow AI** proliferates. It’s the digital equivalent of that rogue server tucked under someone’s desk from a decade ago, but with far greater implications. We’re seeing:
* **Credential sprawl:** Employees signing up with their work email, potentially exposing company data or creating unmanaged accounts.
* **Unmanaged subscriptions:** Costs spiralling out of control as multiple teams unknowingly subscribe to the same or similar services, or forget to cancel trials.
* **Data leakage:** Sensitive company information, customer data, or intellectual property being fed into public AI models, often without the user even realising the implications.
* **Governance gaps:** No one knows what tools are in use, what data they’re processing, or who’s responsible for them.
This isn't about blaming individuals. It’s about a systemic failure to adapt. The tools are so accessible, so persuasive, and often so genuinely useful, that the usual barriers to entry for new software simply don’t exist. Your traditional procurement process, designed for large-scale enterprise software deployments with lengthy vendor assessments, simply can’t keep up. It’s like trying to catch a flock of pigeons with a fishing net.
## “But Our Existing Policy Covers SaaS, So We’re Good, Right?” Wrong.
Ah, the classic counterargument. I hear it all the time. “AI tools are just another type of software, another SaaS vendor. Our robust policies cover all that.” And to that, I say, with all due respect, **absolute rubbish.**
While there’s overlap, the unique characteristics of AI introduce a fresh hell of risks that your standard SaaS procurement checklist simply won't catch. Let me spell it out:
* **Data Privacy & Confidentiality:** AI models thrive on data. Where is that data going? Is it being used to train the vendor’s public model? Does it leave your jurisdiction? Standard SaaS policies might cover data at rest and in transit, but they often don’t adequately address data *ingestion and processing* by a black-box AI model.
* **Intellectual Property (IP):** If your teams are feeding proprietary code, designs, or creative works into an AI, who owns the output? What if the AI generates something strikingly similar to another company’s IP? Traditional software doesn’t typically create new, potentially infringing, content.
* **Ethical Implications & Bias:** AI models can perpetuate or even amplify biases present in their training data. Are you prepared for the reputational damage or legal challenges if your AI-powered hiring tool discriminates, or your customer service bot generates offensive content?
* **Security Vulnerabilities:** AI models can be poisoned through tainted training data, or manipulated at inference time with adversarial inputs. Is your vendor doing due diligence on their model’s security as well as their platform’s? And what about the security of the APIs you're connecting to?
* **Model Drift & Hallucinations:** AI models aren't static. They evolve, they can 'drift' in performance, and generative models can 'hallucinate' – confidently providing incorrect or fabricated information. This isn't a bug in the traditional sense; it’s an inherent characteristic. How do you assess and manage the risk of a tool that might become less accurate over time?
* **Vendor Lock-in & Portability:** Extracting data and models from an AI vendor can be incredibly complex. Are you truly prepared for the implications of switching vendors if your entire workflow is embedded in a proprietary AI system?
See? It’s not just another SaaS. It’s a whole new ball game, with higher stakes and a lot more moving parts.
## Who Pays When They Break? The Unspoken Liabilities
This is where it gets really ugly. Because when an unvetted AI tool malfunctions, misuses sensitive data, or leads to compliance breaches, someone has to pick up the tab. And believe me, that tab can be enormous.
* **Financial Costs:** Beyond the spiralling subscription fees, consider the cost of remediation after a data breach, regulatory fines (GDPR, anyone?), legal fees for IP disputes, or the expense of re-doing work based on flawed AI outputs. We’re talking millions, not just hundreds.
* **Legal & Compliance Risks:** Feeding customer data into an unapproved AI could be a breach of your data processing agreements or privacy regulations. Using an AI that generates biased content could lead to discrimination lawsuits. Who is ultimately liable? The individual? The department? The company? Without clear policy, it’s a free-for-all.
* **Reputational Damage:** A public incident involving AI gone wrong can decimate trust in your brand. In today’s hyper-connected world, bad news travels fast, and regaining public confidence is a long, arduous climb.
* **Operational Disruption:** If a key AI tool fails, or is deemed too risky to use, what’s the impact on your operations? The cost of downtime, manual workarounds, and finding alternatives can be significant.
This isn’t just theoretical. I’ve seen organisations scramble to contain the fallout: lawyers brought in, PR teams working overtime, all because a relatively minor decision to adopt an AI tool without proper vetting spiralled out of control. It’s a mess that’s entirely avoidable with the right governance in place.
## Your Procurement Policy Board: The Unsung AI Heroes (If They Step Up)
This is where your procurement policy board comes in. This body, often seen as a bureaucratic hurdle, is actually your first line of defence. They are the ultimate authority for establishing and enforcing acquisition guidelines across the organisation. But – and this is a big ‘but’ – they often lack the specific context and urgency needed for AI.
They might be asking the right questions for a new ERP system, but they’re probably not asking:
* “What is the training data for this model, and what are its limitations?”
* “How does this vendor handle data anonymisation and deletion from their models?”
* “What’s their policy on model explainability and auditability?”
* “Does this tool introduce new ethical risks we need to mitigate?”
It’s not their fault; they haven't been equipped. But the time for complacency is over. This board needs to become the vanguard of your AI governance strategy.
## Practical Steps to Arm Your Policy Board for the AI Era
So, what’s to be done? You can’t just tell people to stop using AI; that ship has sailed. The solution lies in smart, adaptable governance. Here are some actionable steps your procurement policy board needs to take, pronto:
1. **Define “AI Tool”:** Sounds basic, but it’s crucial. What constitutes an AI tool for your policy? Is it just generative AI, or does it include predictive analytics, machine learning platforms, or even advanced automation? A clear definition prevents ambiguity.
2. **Establish Tiers of Risk:** Not all AI tools are created equal. A simple grammar checker is different from a tool processing sensitive HR data. Develop a tiered system based on data sensitivity, criticality to operations, and potential for harm. This allows for a proportionate review process.
3. **Mandate AI-Specific Due Diligence:** Update your vendor assessment questionnaires to include AI-specific questions. Focus on:
* **Data Handling:** Where is data stored, processed, and used for model training? Data residency, anonymisation, and deletion policies.
* **Model Governance:** How is the model developed, tested, and monitored? What’s the plan for drift, bias detection, and explainability?
* **IP & Output Ownership:** Clear clauses on who owns the outputs generated by the AI using your inputs.
* **Security:** Beyond platform security, what are their AI-specific security measures?
* **Compliance:** How does the tool align with relevant regulations (e.g., GDPR, sector-specific rules)?
4. **Create a Clear Approval Workflow:** Implement a streamlined process for AI tool acquisition, especially for low-risk tools, to avoid stifling innovation. For higher-risk tools, ensure legal, IT security, and potentially ethics committees are involved early.
5. **Build Internal Capability:** This is critical. Your procurement, legal, and IT teams need to understand the nuances of AI. Invest in training. Bring in external expertise if needed. They can’t vet what they don’t understand.
6. **Assign Clear Ownership & Accountability:** For every approved AI tool, there must be a named owner responsible for its usage, monitoring, and compliance. This isn’t a one-and-done; it’s an ongoing responsibility.
7. **Regular Review & Adaptation:** The AI landscape is changing at breakneck speed. Your policies can’t be set in stone. Schedule regular reviews to ensure they remain relevant and effective.
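To make the risk-tiering idea in step 2 concrete, here’s a minimal sketch of what a tiered intake rule might look like if you codified it. Everything in it, the tier names, the criteria, the cut-offs, is a hypothetical illustration, not a standard; your own legal, security, and data protection teams should define the real thresholds.

```python
# Hypothetical sketch of a tiered-risk intake rule for AI tool requests.
# Tier names and criteria are illustrative assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class AIToolRequest:
    name: str
    processes_personal_data: bool   # e.g. HR records, customer data
    processes_ip: bool              # proprietary code, designs, creative work
    vendor_trains_on_inputs: bool   # are your inputs reused to train their model?
    business_critical: bool         # would an outage disrupt core operations?

def risk_tier(req: AIToolRequest) -> str:
    """Map an intake request to a review tier (illustrative rules only)."""
    if req.processes_personal_data or req.vendor_trains_on_inputs:
        return "Tier 3: full review (legal, security, DPO, ethics)"
    if req.processes_ip or req.business_critical:
        return "Tier 2: security and IP review"
    return "Tier 1: lightweight sign-off"

# A simple grammar checker vs. a tool screening HR data:
print(risk_tier(AIToolRequest("GrammarBot", False, False, False, False)))
print(risk_tier(AIToolRequest("HR-Screener", True, False, True, True)))
```

The point of writing it down like this, whether as code, a form, or a flowchart, is that the hard questions get answered once, at intake, rather than after the invoice (or the breach notification) arrives.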
## Time to Get Your House in Order
Look, I know this sounds like a lot. But ignoring it is far more costly. The adoption of AI isn't slowing down; it's accelerating. Your organisation will continue to embrace these tools, with or without proper governance. The choice isn't *if* you use AI, but *how* you manage the risks.
Your procurement policy board has an opportunity, and frankly, a responsibility, to step up. To move beyond the traditional and embrace the complexities of AI. It’s about enabling innovation safely, protecting your organisation, and ensuring that when things do go wrong – because they will – you’ve got a plan, a process, and a clear understanding of who’s accountable. Don’t wait for a crisis to force your hand. Start that conversation with your procurement policy board today. Your future self, and your bottom line, will thank you for it.