Organisations lag in AI policies & skills as workplace use surges

A new survey has found that while artificial intelligence use is widespread in workplaces, most organisations are unprepared to address associated risks due to a lack of formal policies and training.

ISACA's annual AI Pulse Poll, which surveyed 3,029 digital trust professionals across the globe, revealed that 81 percent of respondents believe employees at their organisation use AI, regardless of whether it is officially permitted. Despite this high adoption rate, only 28 percent of organisations have a formal AI policy in place.

According to the research, 22 percent of organisations provide AI training to all staff. In contrast, almost one third of organisations provide no AI training at all, while 35 percent restrict training to IT-related roles. Most digital trust professionals view this skills gap as pressing, with 89 percent saying they will need AI training within the next two years to retain or advance their careers, and 45 percent indicating it will be required within six months.

Jamie Norton, Board Director at ISACA, highlighted that the integration of AI tools at work is outpacing the development of organisational oversight and policy. He pointed to growing risks from sophisticated threats, such as deepfakes, that organisations are not sufficiently prepared to counter.

"AI is already embedded in daily workflows, but ISACA's poll confirms governance, policy and risk oversight are significantly lacking," Norton said. "A security workforce skilled in AI is absolutely critical to tackling the wide range of risks AI brings, from misinformation and deepfakes to data misuse. AI isn't just a technical tool; it's changing how decisions are made, how data is used and how people interact with information. Leaders must act now to establish the frameworks, safeguards and training needed to support responsible AI use."

The survey found that AI is delivering tangible benefits, with 68 percent reporting time savings and 56 percent expecting a positive impact on their career in the next year, yet organisations lag in implementing comprehensive frameworks. Only 28 percent have a formal AI policy, although this figure is up from 15 percent last year. Similarly, 59 percent permit the use of generative AI, up from 42 percent the previous year.

Respondents are employing AI for a variety of functions: 52 percent to create written content, 51 percent to boost productivity, 40 percent to automate repetitive tasks, 38 percent for analysing large data volumes, and 33 percent in customer service roles.

Despite these applications, understanding of AI remains limited. Over half (56 percent) consider themselves somewhat familiar with the technology, 28 percent very familiar, and only 6 percent extremely familiar.

Concerns about the risks associated with AI are significant. Sixty-one percent report being very or extremely concerned about generative AI being exploited by malicious actors. Fifty-nine percent believe AI-powered phishing and social engineering attacks have become harder to detect, and 66 percent expect deepfake attacks to become more sophisticated within the next year. Despite these risks, only 21 percent of organisations are investing in detection or mitigation tools for deepfakes.

Questions also remain around organisations' ability to manage the ethical aspects of AI: only 41 percent think ethical issues such as privacy, bias, and accountability are being addressed adequately, while just 30 percent express high confidence in their organisations' ability to detect AI-related misinformation.

For many organisations, AI risks are still not a top-level priority. Only 42 percent view them as an immediate concern. The top cited risks include misinformation or disinformation (80 percent), privacy violations (69 percent), social engineering (63 percent), loss of intellectual property (53 percent), and job displacement (40 percent).

Jason Lau, ISACA Board Director and Chief Information Security Officer at Crypto.com, commented on the need for continuous learning and updated AI policies.

"Enterprises urgently need to foster a culture of continuous learning and prioritise robust AI policies and training, to ensure they are equipping their employees with the necessary expertise to leverage these technologies responsibly and effectively, unlocking AI's full potential," Lau said. "It is just as important for organisations to make a deliberate shift to integrate AI into their security strategies. Threat actors are already doing so, and failing to keep pace will expose organisations to escalating risks."

The survey indicates that organisations are recognising the need for more AI skills: nearly a third expect to add AI-related roles within the next year. Additionally, 85 percent believe many roles will be modified because of AI, while 84 percent rate their own AI expertise as beginner or intermediate.

Seventy-two percent of respondents say AI skills are very or extremely important for professionals in their field at present. The findings suggest organisations must address the skills gap and integrate AI risk management into their broader security and governance strategies if they are to respond to the challenges of expanding AI adoption in the workplace.
