ChannelLife US - Industry insider news for technology resellers
Tue, 25th Nov 2025

Generative AI (GenAI) tools are reshaping how organizations work, whether their leadership knows it or not. While organizations actively test new artificial intelligence (AI) technologies, they often miss the widespread, unsanctioned use already happening in the shadows.

A TELUS Digital survey found that 57% of enterprise employees who use generative AI tools at work admitted to entering high-risk information. This includes details about unreleased products and customer data. More importantly, two-thirds of those employees are using personal accounts for publicly available free AI tools. 

Personal accounts don't have the same assurances around data privacy or a commitment not to use the information in future model training runs. This is a critical information security risk that most organizations entirely overlook today. 

Beyond security, unsanctioned use of AI tools limits organizational benefits. Integrating AI with operational systems, combining AI solutions to address entire business processes, facilitating shared practices across teams, creating common usage patterns, and developing broad-based employee skills don't happen when Shadow AI takes over.

The Allure of Better Tools

Freemium AI tools such as ChatGPT and Grammarly are a click away. This encourages employees to adopt them independently. Access requires no oversight and bypasses security, governance, and compliance processes. 

Our survey also revealed that 84% of employees who have used generative AI tools at work want to continue using them. The reasons for this are clear. Sixty percent said using AI enables them to execute job tasks faster, and 49% said the work output is better. Humans generally want to succeed in their jobs, and they see AI as giving them an edge. Organizations can fulfill this employee desire, or have it fulfilled secretly by Shadow AI.

Employees who leverage free tools without clear terms-of-service protections may inadvertently compromise sensitive information or violate compliance policies without realizing it. One of the most prominent instances of these risks materializing occurred when Samsung engineers inadvertently leaked sensitive source code, chip design data, and meeting notes by pasting them into the public version of ChatGPT, prompting a temporary company-wide ban on external AI tools.

In client-facing scenarios, unvetted outputs from free AI tools can further expose organizations to legal, reputational, and brand risk, especially in regulated industries. Free AI tools have an increased likelihood of generating biased, plagiarized, or factually inaccurate content. They lack appropriate guardrails, and their undercover use means there are no institutionalized processes to check for common issues. 

Combating Shadow AI

TELUS and TELUS Digital have over 50,000 employees using GenAI, and we have extended these capabilities to dozens of customers. Here are a few things we've learned that may help you mitigate Shadow AI risks and get more benefits from your AI initiatives.  

  • Provide robust AI solutions that everyone can use: If you don't provide a sanctioned general-purpose generative AI solution, employees will simply use freemium alternatives. It's tempting to try to control everything and focus early AI initiatives on just a few high-value use cases. Do both: offer broad sanctioned access and pursue targeted use cases, to reduce risk and drive broader benefits.
  • Don't stop short of capturing the biggest benefits: A 2025 white paper by Cloudera found the top three cost barriers to accelerating AI are integration (50%), data storage (49%), and costs associated with data breaches or leakages (46%). The benefits of systems integration and leveraging larger data sets often far exceed those of isolated task automation. Guardrails can add time and cost to a deployment, but they protect against a variety of problems that can halt AI progress. Many project budgets contemplate only getting the solution live and functional, overlooking these other costs that can be critical.
  • Measure the impact: Delivering sustainable AI value starts with clarity around outcomes, not just outputs, yet many organizations fail to set metrics or follow through on measurement. A 2025 McKinsey study found that only 39% of organizations have benchmark standards for GenAI tools used by employees. AI tools are ushering in substantial change, so it's important to go beyond "vibes" to measuring outcomes. This is essential both for reporting on impact and for prioritizing budget for new and ongoing initiatives.
  • Governance matters: Ensure the business, IT, security, and compliance organizations are collaborating toward a common goal of securing benefits from AI while meeting governance requirements. AI introduces new risks, but many organizations are demonstrating that the risk can be acceptably managed. Governance approaches that ban AI not only limit potential benefits and organizational competitiveness; they also guarantee the underground growth of Shadow AI. Bringing these groups together and establishing shared goals is essential.

In 2025, the International Association of Privacy Professionals found 77% of organizations are working on AI governance, a sign that enterprise leaders see oversight as key to sustainable innovation. Strong governance isn't about restricting experimentation, but about scaling responsibly. 

  • Establish protection from the beginning: While freemium AI solutions may use your data for model training, make sure your provider offers an enterprise assurance that your data will not be used to train models.
  • Augment humans first: There's a temptation for many executives to focus on using AI for automation to directly reduce operating costs. AI definitely has a role in process automation and budget savings. However, most organizations don't even recognize the insidious cost of tedious tasks that employees must tend to every day. Augmenting human capabilities so they can execute these tasks more efficiently and consistently can deliver an immediate boost to productivity. These initiatives are often less complex to implement and can highlight other areas for strategic application of AI.  

From early gains to lasting value

The stakes have never been higher. IBM's 2025 Cost of a Data Breach Report reveals that among organizations experiencing AI-related security incidents, 97% lacked proper AI access controls, and breaches involving shadow AI cost an average of $670,000 more than traditional breaches. 

By shifting from ad-hoc experimentation to strategic, enterprise-wide implementation with proper governance, cost visibility, and performance metrics, organizations can transform initial gains into long-term value. The alternative is a costly lesson in the true price of "free" AI.
