Questions From the C-Suite: Shadow AI
This week we continue our series “Questions from the C-Suite” with a question that is coming up around boardroom tables a lot lately:
What is Shadow AI, and is it an HR or IT issue?
The answer is not as cut and dried as you might think. Shadow AI, a subset of Shadow IT, is the use of AI tools and models inside a company without IT, security, or legal knowing. It starts as a people problem and rapidly becomes an IT, operational, and financial one.
Let’s unpack the risks associated with Shadow AI, why you should care, and why HR and IT must act together to limit risk and prevent financial turmoil.
What “Shadow AI” looks like in the wild
Picture a hiring manager copying résumés into a public LLM (Large Language Model) to draft interview questions. Or a sales rep dumping customer lists into a free AI assistant to craft emails. Or an external vendor using an AI code helper on your proprietary codebase. Or a finance team member dropping quarterly stats into ChatGPT to generate a slide deck. These are all Shadow AI actions: convenient, efficiency-motivated, commonly adopted, and dangerously ungoverned.
Surveys suggest this is widespread. Dark Reading estimated that as of early 2024, 27.4% of data inputs into AI tools were considered “sensitive”, up 17 percentage points from a year earlier. That’s not small stuff. In fact, Microsoft reports that 71% of workers in the UK have used unapproved AI tools for work tasks, including for finance or internal documents.
Shadow AI in HR: Three Cautionary Tales
1. Employee data privacy and regulatory risk
When employee or candidate data is pasted into external AI services, that personally identifiable information (PII) may be logged, stored, or used to train third-party models, rarely with an accompanying audit trail. That break in data lineage can create GDPR/HIPAA/PIPEDA exposure. Regulators don’t accept “we didn’t know” as a defense; they ask why you didn’t have controls in place. As we mentioned last week, regulators don’t fine vendors; they fine you, and these fines are significant.
2. Bias, fairness, and legal exposure in people decisions
AI systems trained on unknown or biased datasets can reproduce discrimination. Amazon scrapped an internal hiring tool after it learned to devalue resumes that included the word “women’s”; an early but clear example of automated bias becoming a legal and reputational problem. If a recruiter uses an unsanctioned AI screener and it systematically filters out protected groups, HR becomes the front line: defending hiring choices, investigating complaints, and remediating harm.
3. Trust, morale, and culture
Employees are right to be uncomfortable if they suspect managers use opaque AI to evaluate performance or make promotion decisions. Perceived unfairness damages engagement, hurts retention, and erodes the employer brand. HR owns culture; Shadow AI can corrode it. There are enough existing issues with morale and retention in the IT space; be wary of AI exacerbating them.
Why IT and Security Teams Are Worried
Shadow AI expands attack surfaces and creates blind spots. Unapproved AI tools often bypass endpoint protections, DLP (data loss prevention), and logging. They may send tokens, credentials, or sensitive files into third-party systems that IT can’t audit, increasing breach probability and amplifying impact. When something goes wrong, IT scrambles with one hand tied behind its back to secure the network, data, and company reputation, and to limit collateral damage.
Managing AI risk has been at the top of the IT stress list for the last few years, with 68% of survey respondents reporting data leakage as sensitive information found its way into AI tools. Their stress is warranted.
Budgets are tight, and incidents inflate unexpected costs quickly. The IBM Cost of a Data Breach report documents the astronomical price of breaches globally; unmanaged data flows and poor governance drive those numbers higher. IBM notes, “60% of the AI-related security incidents led to compromised data and 31% led to operational disruption.” Between ultra-lean budgets, absent access controls, and the rapid uptake of unsanctioned AI, IT teams are scrambling to keep their heads above water.
What IT Wishes HR and Operations Would Do, Immediately
1. Co-Own an AI Use Policy (yesterday)
Make HR the owner of how AI can be used to augment people processes. Define what candidate/employee data may never be pasted into external models. Make this policy practical and enforceable. Companies like McLean & Company can offer policies or guidance to help get you started.
2. Require Disclosure & Vendor Assurances
When collaborating with vendors or partners:
- Require transparency about AI usage: any vendor handling HR data must disclose which AI tools (if any) it uses on your project.
- Include audit rights so you can ask for logs, prompts, and usage history.
- Include liability clauses: if data leaks or compliance breaches occur because of unauthorized AI usage, the vendor is accountable.
- Require alignment to your AI governance policy (for access, usage, data handling).
Collaborate with IT and consider hiring a third party to execute software audits if in doubt.
3. Education Over Bans
Inform and empower your staff:
- Train employees on what shadow AI is, why it’s risky, and what’s allowed.
- Encourage reporting of new AI tools or workflows discovered by teams.
- Create a conduit for employees to express ongoing needs they believe could be solved by AI.
- Reward adherence to the policy; make AI safety part of performance reviews where relevant.
Bans often drive Shadow AI deeper underground; approved, secure tools reduce temptation. Samsung opted for a full temporary ban after employees pasted internal, proprietary code into a public chatbot; effective corporate-wide education could have prevented this.
4. Detection & Reporting
Work with IT to deploy DLP tools, monitoring, and a simple reporting channel for new AI tools. TechTarget published a comprehensive list of these tools this year; IT will thank you for reviewing it before talking solutions. Encourage managers to flag suspicious tools rather than hide them, and incentivize transparency.
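To make detection concrete, here is a minimal sketch of one check such a workflow might run: scanning a web-proxy log for traffic to AI services that are not on the approved list. The log format, column names, and domain watchlist are illustrative assumptions; a real deployment would lean on your proxy or DLP vendor’s own reporting.

```python
# Minimal sketch: flag requests to unapproved AI domains in a web-proxy log.
# The CSV columns ("user", "host") and the watchlist are illustrative
# assumptions; adapt them to your proxy's actual export format.
import csv
from collections import Counter

# Hypothetical watchlist: public AI endpoints not on the approved-tool list.
UNAPPROVED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count hits per (user, domain) for unapproved AI services."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in UNAPPROVED_AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in flag_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {domain}: {count} requests")
```

A report like this is a conversation starter, not a disciplinary tool: the goal is to find unmet needs and route people to approved alternatives.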
IT and Security’s Tasks (besides stressing out)
1. Establish an AI Governance Policy
- Clearly define which AI tools are approved, in what contexts, and under what data rules.
- Spell out what data may or may not be input into AI systems (e.g., no PII, no internal strategy documents); be specific. A machine-readable version of these rules also helps automate enforcement (see the sketch after this list).
- Make it a formal policy, not just a request: tie it to contracts, access rules, onboarding (HR), and accountability. Companies like InfoTech can offer support in assessing where you need to start.
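As a starting point, here is a minimal sketch of what such a policy can look like when expressed as machine-readable data rather than prose alone, so the same rules can drive onboarding material and automated checks. The tool names, contexts, and data tags are illustrative assumptions, not a standard.

```python
# Minimal sketch: an AI-use policy as data. Tool names, contexts, and
# data tags below are hypothetical examples, not a standard schema.
AI_POLICY = {
    "approved_tools": {
        "internal-llm": {"contexts": ["code", "documents"], "pii_allowed": False},
        "vendor-copilot": {"contexts": ["code"], "pii_allowed": False},
    },
    "prohibited_data": ["pii", "credentials", "internal_strategy"],
}

def is_use_allowed(tool: str, context: str, data_tags: set[str]) -> bool:
    """True only if the tool is approved for this context and the
    submission carries no prohibited data tags."""
    rules = AI_POLICY["approved_tools"].get(tool)
    if rules is None or context not in rules["contexts"]:
        return False
    return not (data_tags & set(AI_POLICY["prohibited_data"]))

# Example: pasting candidate PII into even an approved tool fails the check.
assert not is_use_allowed("internal-llm", "documents", {"pii"})
```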
2. Create Visibility & Monitoring
- Use tools that monitor network traffic, API usage, and browser activity to detect when unauthorized AI services are being accessed.
- Use User Behavior Analytics (UBA) or DLP to flag suspicious AI interactions (e.g., large document uploads, unusual prompts); a simple content check of this kind is sketched after this list.
- Formally inventory all AI tools in use (internal, vendor, team-level) and require registration.
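For the DLP-style flagging mentioned above, here is a minimal sketch of a content check that looks for likely PII in text before it is sent to an external AI service. The regexes (emails, US-style SSNs, card-like numbers) are deliberately simple illustrations; production DLP uses far richer detection, but the shape of the check is the same.

```python
# Minimal sketch: a DLP-style check flagging likely PII in an outbound
# prompt. Patterns are simplified illustrations of real DLP classifiers.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the PII categories detected in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Draft interview questions for jane.doe@example.com, SSN 123-45-6789."
findings = scan_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")  # email, ssn
```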
3. Offer Approved Alternatives
One reason shadow AI takes hold is unmet demand. If teams don’t have good tools, they innovate in the dark. You can reduce risk by:
- Licensing or building enterprise AI platforms (e.g., secure models, private LLMs, internal sandboxed tools); a minimal gateway pattern is sketched after this list. This is a great opportunity to collaborate with a custom software vendor if you have a gap in skillset on your team.
- Giving employees secure AI assistants that are approved for use, under governance.
- Making the approved tools robust enough so people choose them rather than rogue options.
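One common pattern for an approved alternative is a thin internal gateway in front of a sanctioned model, so every prompt is redacted and logged before it leaves your control. In the sketch below, call_approved_model is a hypothetical stand-in for your governed endpoint (a private deployment or an enterprise API contract), not a real library call.

```python
# Minimal sketch: an internal gateway that redacts and logs prompts before
# forwarding them. `call_approved_model` is a hypothetical placeholder.
import logging
import re

logging.basicConfig(level=logging.INFO)
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def call_approved_model(prompt: str) -> str:
    # Placeholder: in practice this calls your sanctioned, contracted endpoint.
    return f"[model response to {len(prompt)} chars]"

def gateway(user: str, prompt: str) -> str:
    """Redact obvious identifiers, log the request, then forward it."""
    redacted = EMAIL.sub("[REDACTED_EMAIL]", prompt)
    logging.info("ai_request user=%s chars=%d redactions=%d",
                 user, len(redacted), len(EMAIL.findall(prompt)))
    return call_approved_model(redacted)

print(gateway("hr.manager", "Summarize feedback from jane.doe@example.com"))
```

The design point is that governance lives in one choke point you control, rather than in every employee’s browser.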
4. Regular Audits & Reviews
- Periodically audit AI tool usage, review logs, and assess the risk of newly adopted tools; reconciling the registered inventory against observed usage (sketched after this list) is a good starting point.
- Update your governance policy frequently, because AI tools evolve fast.
- Perform security reviews and penetration tests that include the AI components in your stack.
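A simple audit that pays off quickly is comparing the registered tool inventory against what monitoring actually observed. The sets below are illustrative placeholders for data you would pull from your registration process and proxy-log review.

```python
# Minimal sketch: reconcile the registered AI-tool inventory against
# observed usage. Both sets are hypothetical placeholders for real data.
registered = {"internal-llm", "vendor-copilot"}
observed = {"internal-llm", "chatgpt.com", "claude.ai"}  # e.g., from proxy logs

print("Needs review (in use, never registered):", sorted(observed - registered))
print("Possibly stale registrations:", sorted(registered - observed))
```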
Threat or Opportunity – You Decide
Shadow AI is the invisible threat creeping across your business right now. It’s not malicious; it’s opportunistic. Workers will always seek tools that help them do more, faster. But if those tools bypass your governance, you could lose control of your data, your reputation, and your compliance. Every department has a horse in this race, but as a leadership team you must decide which path to take.
By putting in place policies, visibility, governance, approved AI tools, and vendor protections, you can harness AI’s upside while keeping the shadows where they belong: in the background.
If you still have questions about how to harness the power of AI, feel free to reach out to our advisory team to get started.