If you’re a recruiter, you’ll doubtless have noticed that knowledge workers nowadays expect not to be tied to an office, certainly not five days per week. Ever since the Covid-19 pandemic swept the globe, the practice of working from home (WFH) has become commonplace.
Naturally, a geographically scattered workforce brings pros and cons like any other working model. On the positive side, many employers have downsized to smaller offices, which are cheaper to rent, buy, heat and cool. There are even savings on such basics as stationery, soap and washroom supplies. Every little helps, so the move from full office attendance to partial or even complete WFH models is great news for many businesses whose cloud computing systems can be accessed remotely.
However, there are some major disadvantages too, as you would expect. For example, certain workers will take any opportunity to slack off around the office watercooler, and it’s even easier to do so when WFH, sitting in pajamas with a TV set and a music system in the room!
Slackers aside, more important are the security implications of people accessing company networks over their own home Wi-Fi or internet connections. The issue is exacerbated further when employees use their own devices rather than company-issued laptops or desktop computers.
Clearly, having a rigorous set of procedures in place for those accessing company networks from home is essential, as is ensuring that employees only use devices approved for company use. Furthermore, it’s crucial that those devices aren’t used for anything other than company business, particularly not for personal activities.
You can imagine how easy it would be to send a picture of a partially clothed lover met on Tinder instead of an infographic about sales figures: a situation that could be deeply embarrassing and could even get the sender fired.
Ghost in the machine?
But even if people follow company procedures to the letter, whether from the corporate HQ or their kitchen table, there’s still the encroaching specter of Shadow AI to contend with.
It sounds rather spooky and a bit like a computer geek’s Halloween costume, but in fact the concept of Shadow AI is deadly serious and carries important implications for businesses and organizations of all sizes. So, what is it, what are those implications, and how can the potential consequences be mitigated?
Some folks might be familiar with the term ‘Shadow IT’, the nomenclature used by human resources (HR) and IT support departments for unapproved hardware and software used in conjunction with company devices and systems.
Shadow IT isn’t necessarily brought to the workplace or its systems maliciously by rogue employees. Often it’s an innocent mistake, such as plugging a USB cable into a computer simply to charge a personal phone; yet a misplaced brush of a finger could trigger a file transfer, and security is compromised. The worker involved might then be carrying highly commercially sensitive data around on their phone without knowing it.
It’s AI, so it must be correct…
Another example might be an employee who prefers a different keyboard layout to the standard one and brings their own into work. Unbeknownst to them, certain function and ‘alt’ keys can type unexpected characters into systems, with potential security or functionality consequences.
Shadow AI is the term for the similarly unintended use of software and external devices that may draw upon third-party AI, producing data that the employee takes as ‘certain’ because, say, ChatGPT or Bard provided the information, so they assume ‘it must be right’.
An example could be the simple act of converting gallons to liters with an AI conversion tool found online, where the tool happens to be set to US gallons rather than Imperial gallons, which are larger by around 20%. Such inaccurate figures could then be fed into company systems, with potentially very far-reaching consequences.
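To make the discrepancy concrete, here’s a minimal sketch in Python; the to_liters() helper and the hard-coded volume are illustrative assumptions, not part of any real conversion tool.

```python
# Illustrative sketch of the gallons-to-liters pitfall described above.
US_GALLON_L = 3.785411784    # liters per US gallon
IMPERIAL_GALLON_L = 4.54609  # liters per Imperial gallon (~20% larger)

def to_liters(gallons: float, imperial: bool = False) -> float:
    """Convert gallons to liters; the caller must know which gallon is meant."""
    return gallons * (IMPERIAL_GALLON_L if imperial else US_GALLON_L)

volume = 100  # gallons, as keyed in by an employee
print(round(to_liters(volume), 2))                 # 378.54 (US gallons)
print(round(to_liters(volume, imperial=True), 2))  # 454.61 (Imperial gallons)
```

If the employee means Imperial gallons but the tool silently assumes US gallons, every downstream figure is off by roughly a fifth.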
So, just as shadow IT needs to be guarded against with strictly enforced company policy, so does third-party AI, which might be powering software that employees use in all innocence. That’s why it’s called ‘shadow’: it may be running unseen in the background of software in daily use.
Think about the last time you wrote a letter and spell-checked it with your word-processing package, only to discover later that the spell check was set to British English rather than US English. An important company letter full of British spellings might lead the recipient to assume that Company X is UK-owned. Again, unintended consequences could follow.
This is why it’s essential that C-suite executives look at every possible scenario in the workplace, from project management through data entry to loading trucks in a warehouse, to ensure that standards are homogenized across the business, with a strict set of procedures in place. An example might be a company handbook stating which online platforms or websites are approved for engineering calculations, or forbidding the use of even the simple calculators on employees’ phones to total weights when loading a truck.
Consequences of not controlling Shadow AI
You can see from the above examples that the smallest errors induced by Shadow AI might have a ‘butterfly effect’ on a large organization. AI thought leaders have identified the more common unintended consequences of failing to spot Shadow AI within a small business or a large corporation. Here are a few:
Generating misinformation – as in the US versus Imperial gallons conversion example above, it’s crucial that every system an organization uses is specified in its standards handbook, and disciplinary action may be taken against employees who diverge from such a policy. It’s not so much generating the inaccurate data that’s the problem; it’s acting upon it that brings potentially serious consequences.
Spreading misinformation – likewise, it’s one thing when an individual employee does something wrong through the careless use of Shadow AI; it’s even worse when they think they’ve found a great way of doing something and go around telling all their co-workers. Now 20 people are making the mistake rather than just one!
Data privacy and copyright – entering information into an AI-driven chatbot, say a large language model (LLM) like ChatGPT, can save an individual hours of typing, cutting and pasting when, for example, summarizing an important specification document for a new product yet to be manufactured.
But remember that LLMs may retain anything sent to them and draw on it when answering other users’ requests. If Joe Smith from marketing uploads a 50-page features-and-benefits document for a product yet to be launched, a competitor performing a similar query could well be served that very same commercially sensitive data the next day. In short, public AI chatbots simply aren’t secure enough to be trusted with anything of a remotely confidential nature.
Work with AI, don’t just ban it
There are more effective and less confrontational ways to get employees, especially those WFH, to follow procedures. Firstly, ask them how they would like to do things, analyze the results of those polls, and draw up procedures based on safe ways of performing tasks in a manner employees actually like. Secondly, if workers are using their own shortcuts, potentially involving Shadow AI, training them to interact better with the tasks at hand can mean they don’t seek solutions outside the procedural rules.
Another way of ensuring that employees use software as effectively as possible is to install a Digital Adoption Platform (DAP) alongside company software. A DAP acts like an AI assistant: a friendly, helpful and experienced colleague sitting at a worker’s shoulder, offering help only when it’s needed.
A DAP is a teaching layer of software that runs alongside the primary software to which it’s assigned, working by hyper-personalizing each employee’s account. For example, when a new hire uses, say, a data-entry screen for the first time, the DAP knows the user is a rookie and proactively offers advice before any mistakes are made; a tooltip might appear stating:
“Hey, Jane, the next screen requires that all the figures you input are rounded up to the nearest whole number, with no decimal places.”
But once Jane has done this successfully two or three times, the DAP will ‘know’ that she has learned and won’t offer any more help until the next time she makes a mistake. Similarly, if Jane logs out of that workstation and a seasoned employee takes her place, the DAP will simply monitor activity in the background, and no prompts will appear at all.
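To make that “knows when to stay quiet” behavior concrete, here’s a minimal sketch of the kind of rule a DAP might apply; the TooltipPolicy class, its method names and the threshold are all illustrative assumptions, not a real DAP API.

```python
# Hypothetical sketch of a tooltip-suppression rule; real DAPs are far
# more sophisticated, but the core idea is a per-user mastery tally.
MASTERY_THRESHOLD = 3  # stop offering tips after this many clean completions

class TooltipPolicy:
    def __init__(self) -> None:
        # Tallies persist per (user, screen), so a seasoned employee who has
        # already mastered a screen receives no prompts at all.
        self.clean_runs: dict[tuple[str, str], int] = {}

    def record_completion(self, user: str, screen: str, had_error: bool) -> None:
        key = (user, screen)
        # A mistake resets the tally, so help resumes the next time around.
        self.clean_runs[key] = 0 if had_error else self.clean_runs.get(key, 0) + 1

    def should_show_tip(self, user: str, screen: str) -> bool:
        # Offer guidance until the user has proven mastery of this screen.
        return self.clean_runs.get((user, screen), 0) < MASTERY_THRESHOLD

policy = TooltipPolicy()
if policy.should_show_tip("jane", "data_entry"):
    print("Hey, Jane, round all figures up to the nearest whole number.")
```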
Obviously, these are very simplistic examples, and a DAP can do so much more than this, including making global and individual reports available to management about which teams might need more training and who should be nominated for employee of the month.
The point is that working with AI itself is no bad thing; management just needs to ensure that everyone in the organization is working with the same approved AI instances, and that staff training is provided so that workers don’t seek unapproved methods of making their jobs manageable.
So don’t have nightmares: Shadow AI isn’t a monster; it just needs to be harnessed for whatever positives you can find within it.