11 Workarounds – the challenges of AI

In this chapter

The rate of development of AI-based applications, and in particular of generative applications such as ChatGPT, makes for a very cloudy crystal ball. This chapter considers some of the potential impacts of AI on workarounds and shadow IT, but many issues remain poorly defined.

AI comes centre stage

When I drafted the outline of this book in July 2022 I included a chapter on AI in which I could consider the implications of machine learning for the propensity of employees to use workarounds. I decided to leave the chapter to the end of the writing process because the rate at which AI routines were being adopted into enterprise applications was already quite significant.

An important contribution to assessing the impact of AI on business has been made by Alter (2022), building on his considerable experience of tracking research on business processes and on a work system life cycle model.

Then came the release of GPT-3 and ChatGPT by OpenAI, supported in enterprise adoption by Microsoft, a joint-venture partner of OpenAI. GPT-4 is now available, there have been some significant changes to ChatGPT, and (at the time of writing) over 20 other applications based on Large Language Models (LLMs) have been released. The underlying language-management technology is not ‘new’; what has happened is a step change in computing power. For a detailed description of how ChatGPT works there is a very comprehensive blog post by Stephen Wolfram.

Because of the installed base of Microsoft Office, the launch by Microsoft in March 2023 of its Copilot application is a very significant development. Over the last two decades the roll-out of new functionality in Office, SharePoint and other Microsoft applications has generally been slow and poorly signposted in advance, and planned release dates have come and gone. The release of Copilot can only be described as a total change of strategy.

To quote Satya Nadella, Chairman and CEO of Microsoft:

“Today marks the next major step in the evolution of how we interact with computing, which will fundamentally change the way we work and unlock a new wave of productivity growth. With our new copilot for work, we’re giving people more agency and making technology more accessible through the most universal interface — natural language.”

In the initial product announcement Microsoft sought to reassure its customers:

“Copilot will fundamentally change how people work with AI and how AI works with people. As with any new pattern of work, there’s a learning curve — but those who embrace this new way of working will quickly gain an edge.”

In the course of the deployment of new enterprise technology over the last four decades, vendors may have provided some degree of training on new applications, but usually on a ‘train-the-trainer’ basis. The full functionality of enterprise applications is usually required by only a relatively small percentage of the total workforce, with most employees using screens and procedures specific to their particular roles and tasks. Even so, implementing these applications comes with major challenges, as outlined in Chapter 5.

Implications for employees

The scope of this book is restricted to the occurrence and management of workarounds and shadow IT. At this stage there is no feedback from early adopters, and when there is, the question that will inevitably be raised is the extent to which Microsoft (and other vendors offering similar LLM-based applications) have provided a level of implementation support that will not be available to the next tier of customers. There is also, of course, very little academic research to call upon. A notable exception is the paper by Alter (2022) cited above, which, although it makes only a passing reference to workarounds, does discuss the potential impact of AI applications on the workplace.

It may be several years before large-scale independent assessments of the impact of these technologies are published. There will no doubt be positive comments from the major IT consulting services firms, but their endorsements rarely contain the level of detail that would assist less well-equipped organisations.

Inevitably this chapter is based largely on conjecture, and all I am able to do is raise issues rather than offer solutions. However, the need for organisations to understand the implications of AI governance in health care has been recognised with the publication in 2022 of Developing Healthcare Workers’ Confidence in AI (NHS AI Lab and Health Education England 2022), which sets out a framework of Advanced AI Education for Specific Archetypes. These archetypes are defined as:

  • Shapers
  • Drivers
  • Creators
  • Embedders
  • Users

This is a useful framework, as it moves away from training for specific job roles and towards training based on the ways in which AI is being adopted.

The document emphasises the scale of the training effort required to prepare employees for the increased use of AI applications. Although it focuses on health care professionals at all levels, in principle it also applies to enterprise situations.

“Educating healthcare workers to develop, implement and use AI effectively and safely is a multidimensional challenge, involving undergraduate education, postgraduate training, and lifelong learning. The challenge is to provide the right resources to the right people and build skills and capabilities across the healthcare workforce in the most efficient and effective way possible.

This challenge demands an approach to educating and training for AI that is flexible, including a mixture of widespread acquisition of awareness and knowledge whilst also supporting specialist skills and capabilities to deploy and maintain these technologies. This means providing a solid foundation for developing AI-related knowledge as well as personalised advanced educational elements to fit the needs of individuals in different roles and responsibilities (the workforce archetypes).”

Along similar lines, a team from the Turing Institute (Morgan et al. 2023) has considered the developing concept of the ‘human in the loop’, defined as ‘human judgement at the moment an algorithm renders a specific prediction or decision’. This reflects the emerging need to recognise the importance of human intervention at a specific crucial point, or ‘moment’, within the decision-making process to constrain or prevent a specific action.
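
As a simple illustration of this idea, consider the sketch below. It is purely hypothetical (it is not drawn from the Morgan et al. paper), and all of the names in it (the claim-scoring function, the review threshold) are my own inventions, but it shows the ‘moment’ at which an algorithmic decision is paused so that a human can constrain or prevent the action.

    # A minimal sketch of a 'human in the loop' checkpoint, written in
    # Python. All names (score_claim, human_approves, REVIEW_THRESHOLD)
    # are hypothetical and for illustration only.

    REVIEW_THRESHOLD = 0.80  # model confidence below this forces human review

    def score_claim(claim: dict) -> float:
        # Stand-in for a model's confidence in an automated decision.
        return 0.65  # hypothetical model output

    def human_approves(claim: dict, score: float) -> bool:
        # The 'moment' of human judgement: a reviewer must confirm the action.
        answer = input(f"Model score {score:.2f} for claim {claim['id']}. Approve? [y/N] ")
        return answer.strip().lower() == "y"

    def process_claim(claim: dict) -> str:
        score = score_claim(claim)
        if score >= REVIEW_THRESHOLD:
            return "approved automatically"
        if human_approves(claim, score):
            return "approved after human review"
        return "held for full manual review"

    print(process_claim({"id": "C-1001"}))

The point of the pattern is not the code itself but where the checkpoint sits: before the action is taken, not after.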

As discussed in this book, there is a range of initiators for workarounds, which include:

  • Maintaining personal productivity at the level expected by the organisation
  • Simplifying complex IT systems
  • Reducing psychological stress
  • Retaining a sense of being in control of IT systems, not being controlled by the system

The issue is whether novel AI systems (novel in the sense that there is no precedent for them) are going to alleviate these initiators or amplify them. Microsoft’s claim is that Copilot promises to unlock productivity for everyone. To back this claim Microsoft reports that among developers who use GitHub Copilot, 88% say they are more productive, 74% say that they can focus on more satisfying work, and 77% say it helps them spend less time searching for information or examples. No information is provided as to how the productivity of developers scales to the productivity of ‘everyone’.

Another statement by Microsoft suggests that:

“With Copilot in Word employees can jump-start the creative process so that they never start with a blank slate again. Copilot gives them a first draft to edit and iterate on — saving hours in writing, sourcing, and editing time. Sometimes Copilot will be right, other times usefully wrong — but it will always put you further ahead.”

I personally find the concept of a system being ‘usefully wrong’ difficult to accept: for it to be usefully wrong, the human in the loop has to know what is correct.

There is a tendency on the part of vendors to see all digital workplaces as having similar processes and similar cultures. Williams (2018) makes an important point in presenting research suggesting that there are six different types of digital workplace design: three people-focused designs, supporting different levels of sophistication of interaction between people working together to create and share information, and three process-focused designs, supporting joint work on business improvement projects and integration with business processes and other enterprise systems.

A workarounds perspective

At the time of writing, in mid-2023, there is a tremendous amount of hype about the potential benefits of using applications such as ChatGPT to enhance the productivity of individual employees. There are already many examples of these applications creating summaries of documents and meeting outcomes, developing press releases and providing high-quality translations. The underlying business case for their adoption is that they will enhance the productivity of employees and the organisation. There is also good evidence that these applications can create software code, which could lead to an increase in shadow IT.
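
The kind of code involved need not be sophisticated. The sketch below is typical of what a generative application can produce from a one-sentence prompt: a self-contained script that summarises a spreadsheet export, written and run entirely outside the oversight of the IT department. The file name and column names are invented for illustration.

    # Hypothetical example of the kind of utility script an employee might
    # obtain from a generative AI application and run locally, bypassing
    # IT governance. The file and column names are invented.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("sales_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["region"]] += float(row["amount"])

    for region, total in sorted(totals.items()):
        print(f"{region}: {total:,.2f}")

A script like this takes seconds to generate, solves a real problem, and leaves no trace in any corporate system catalogue, which is precisely the definition of shadow IT.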

The outcome could be that workarounds increase in number and scope because of the potential of these applications to generate content that is indistinguishable from content created by the employee. It is becoming clear that it is difficult for other employees, and for the organisation itself, to identify whether a specific item of content has been machine-generated rather than employee-generated. This brings the risk that decisions are made on content for which there is no audit trail back to an individual employee. These risks will be of particular concern in the clinical sector, where there is already considerable time pressure to respond quickly to the medical needs of a patient.
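
One way an organisation might begin to address the audit-trail problem, sketched below on my own assumptions rather than on any published standard, is to require that AI-assisted content carries a provenance record identifying the accountable employee, the generating tool, and whether the output was reviewed by a human before use.

    # A minimal sketch of a provenance record for AI-assisted content.
    # The record structure is my own assumption, not an established standard.
    import json
    from datetime import datetime, timezone

    def provenance_record(author: str, tool: str, human_reviewed: bool) -> dict:
        return {
            "author": author,                  # the accountable employee
            "generating_tool": tool,           # e.g. an LLM-based assistant
            "human_reviewed": human_reviewed,  # was the output checked before use?
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    record = provenance_record("j.smith", "llm-drafting-tool", human_reviewed=True)
    print(json.dumps(record, indent=2))

Even a lightweight record of this kind would restore the link between a decision and an accountable individual that the current generation of tools obscures.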

It is still unclear how AI technology will be embedded in enterprise and clinical applications. One scenario is that the resulting complexity of the application makes it more difficult for an individual employee to create workarounds, while also reducing the need to do so. The second scenario is that the sophistication and complexity of the application leave employees with ever less ability to create workarounds for tasks that remain unfit for purpose, and this could increase the stress on the employee.

The bottom line

The next few years are going to be very challenging for organisations as they adapt to the widespread adoption of AI applications. I will leave the last word (for now) to Aleksandr Tiulkanov, who offers a balanced proposition for any organisation facing an uncertain future in adopting AI, and who highlights the importance of risk management with AI applications.

To quote from his blog:

“Let’s assume you’ve identified a use case where employing a certain AI system seems to make sense. Let’s further assume that the apparent benefits outweigh the downsides for you — and, importantly, for other people.

In this case, I would still think about the following points, especially for high-stakes decisions:

  • Are you using the right kind of technology for the job? What evidence do you have the technology use in this case is science-based and actually makes sense?
  • Are you competent to verify the quality of outputs the technology produces? Objectively competent, as certified by diplomas, tests, peers, and people who pay you money for this as your work. If you’re not paid for that, you’re not a professional and thus not competent to verify the technology’s outputs.
  • Are you comfortable taking legal liability and moral culpability for any missed errors in the technology-generated outputs? The question is relevant whenever you use these outputs in real life and this might affect someone besides yourself.
  • Aren’t you over-relying on the technology, trusting it blindly, because of automation bias? Algorithmic outputs may seem authoritative, and research shows you might even disregard evidence to the contrary. How are you making sure this is not the case?”

Issues of risk management and technical debt management are considered in Chapter 12.

References

Alter, S. (2022). Understanding artificial intelligence in the context of usage: Contributions and smartness of algorithmic capabilities in work systems. International Journal of Information Management, 67, 102392. https://doi.org/10.1016/j.ijinfomgt.2021.102392

Morgan, D., Hashem, Y., Straub, V.J. & Bright, J. (2023). ‘Team-in-the-loop’ – organisational oversight of high-stakes AI. https://arxiv.org/abs/2303.14007

NHS AI Lab and Health Education England (2022). Developing healthcare workers’ confidence in AI. https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/developing-healthcare-workers-confidence-in-ai

Licence


Workarounds: the benefits and the risks Copyright © 2023 by Martin White is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
