Colleagues and friends are the likeliest to know about the secrets you keep, including your employer’s confidential data.
However, with the upsurge of artificial intelligence (AI) in the workplace, workers are increasingly sharing personal data with a new close confidant – their friendly neighborhood chatbot.
Organizations now face an unprecedented security threat to their data – and they're unprepared for it.
When companies do notice what’s going on, their responses tend toward the extremes: either ban tools like ChatGPT and the growing number of AI-aided writing assistants outright, or leave employees’ use of AI entirely unregulated. There’s very little middle ground.
Beyond breaching confidentiality rules, feeding information into AI tools can also hand an opportunity to cybercriminals.
AI Enters the Workforce
CybSafe, a behavioral science and data analytics company, conducted a study of 1,000 office workers across the UK and the US, focusing on the use of AI in the workplace.
- 50% of the respondents already use AI tools at work – one-third weekly, and 12% daily;
- They use the tools for research (44%), writing reports (40%), data analysis (38%), and writing code (15%);
- 64% of US office workers have entered work information into a generative AI tool at some time, and a further 28% aren’t sure if they have;
- 38% of users in the US admit to sharing data they wouldn’t casually reveal to a friend in a bar;
- In the UK and the US, 69% and 74% of the respondents believe that the benefits of using AI outweigh the risks;
- A significant percentage of the respondents would continue to use AI tools even if their employers ban them;
- 21% of respondents can’t discern between human-generated content and AI-generated content.
Employees may find AI tools a useful productivity enabler – but so does another group: cybercriminals.
Dr. Jason Nurse, director of science and research at CybSafe and associate professor at the University of Kent, said: “Generative AI has enormously reduced the barriers to entry for cyber criminals trying to take advantage of businesses.
“Not only is it helping create more convincing phishing messages, but as workers increasingly adopt and familiarize themselves with AI-generated content, the gap between what is perceived as real and fake will reduce significantly.”
Companies With a “No AI” List
- Northrop Grumman, the aerospace and defense company, has banned AI tools until they’re fully vetted;
- Samsung banned AI tools after learning the hard way when employees uploaded confidential code;
- Verizon has blocked access to AI tools from within its systems;
- JPMorgan Chase has restricted the use of AI tools, though more details are unavailable;
- Deutsche Bank has blocked all AI tools;
- Accenture prohibits its employees from using AI tools at the office;
- Amazon encourages its employees to use its proprietary coding assistant, CodeWhisperer, though it doesn’t appear to have blocked access to other AI tools.
The Bottom Line
AI is finding its place in the workplace – but it also opens a new avenue for cybercrime, and companies and their staff must remain vigilant and keep updating how they manage risk.
Meanwhile, the productivity gains AI tools offer employees remain extremely attractive.
Balancing these conflicting truths is a tricky – but necessary – course to steer.