How can we prevent project management from falling into the AI darkness?

Marc Lahmann Partner and Leader Transformation Assurance, PwC Switzerland 07 Oct 2019

Artificial intelligence (AI) is the subject of a great deal of hype in the world of business, with new articles constantly highlighting the seemingly endless possibilities AI offers both organisations and their leaders. From integration, human-computer interfaces and prediction, right the way to autonomous steering, it is all on the table. Anything goes!

As with all technological progress and the change it brings, there’s also a lot of buzz about the implications. From loss of jobs and reduced personal autonomy all the way to concerns that humanity will become enslaved to machines, fearmongering is a constant companion to technological change. The same is true for project management practice. AI has its foot in the door of project management and is here to stay. Even Gartner is jumping on the AI bandwagon, predicting that ‘80% of today’s project management tasks will be eliminated by 2030 as AI takes over’. But is AI really the dark devil it is frequently stigmatised as? What do project managers need to do in order to control the darkness?

In this article we will explain what risks project managers need to be aware of when dealing with AI, what the limitations are on AI in project management, and how project managers can get the most out of using AI in their projects.

Perceptions around the use of AI

Before we get into the myths and perceptions around AI, it’s important to explain the different levels or phases of AI within project management. We allocate AI in project management to the following four phases: integration and automation, chatbot assistants, machine-learning-based project management, and autonomous project management. Given their straightforward nature, we classify the first two phases as ‘weak’ or ‘simple AI’, whereas the third and, especially, the fourth phases qualify as ‘strong AI’ or ‘advanced AI’ (see Figure 1). During the course of this paper, we will focus on the risks and limitations related to the phases classified as advanced AI.


Figure 1: Phases of AI evolution


Digging deeper into why AI is portrayed as the dark power that might end up controlling humanity reveals the various myths and common perceptions around it. The most prominent of these relates to job security − or rather the perception that most jobs will be replaced by an intelligent AI solution specifically designed to perform those jobs. Other myths you might have heard or thought about in connection with artificial intelligence are:

  • AI will become evil and/or conscious
  • The decisions made by AI are not reasonable or based on any evidence
  • AI will affect my independence when it comes to decision-making
  • Project managers will become obsolete owing to AI
  • Autonomous AI project managers will replace human project managers altogether
  • AIs will never be able to do more than automate simple existing processes
  • My project team will never be controlled by an AI

But what do these myths actually mean? What is the fear behind them, and how might they affect projects and project managers? In the following table, we try to shed some light on these myths and explain the perceptions underlying them.

What are the risks and limitations of AI in project management?

Dimensions of AI risk and limitations

If we could be sure that an AI-based project management system was working reliably without any risks or limitations, we would use it like a toaster without having to understand precisely how it operates. Unfortunately, new AI technologies, for example driverless cars or predictive medical diagnosis, never come without any risk. If a medical diagnosis system is predicting that there’s a 95% probability that someone will live, that sounds good, and we are not going to go further into the details to find out whether it’s really accurate for them personally. But what happens if the system is predicting a 95% probability of death? Do we do the same? To believe in an algorithm that predicts death or − in the analogy of project management − the failure of a project, a human would want an explanation enabling them to see whether the algorithm had followed the appropriate process. There would also have to be a meaningful ability to challenge the algorithm on a specific risk dimension in cases where a human would disagree.
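This asymmetry in trust can be made concrete: a benign prediction may be accepted automatically, while a high-stakes one should be escalated for explanation and human sign-off. A minimal sketch − the threshold and messages are illustrative assumptions, not part of any real system:

```python
def review_required(failure_prob: float, threshold: float = 0.5) -> bool:
    """High-stakes predictions must go to a human for review."""
    return failure_prob >= threshold

def act_on_prediction(failure_prob: float) -> str:
    if review_required(failure_prob):
        # A predicted failure triggers a request for an explanation
        # and human sign-off instead of an automatic intervention.
        return "escalate: request explanation and human sign-off"
    return "accept: continue monitoring"
```

The point is not the threshold itself but that the system’s default behaviour differs depending on the stakes of being wrong.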

Without creating safeguards with regard to specific AI risk dimensions, there is an inherent risk that humans (project managers) could be misled, manipulated or discriminated against without knowing why, and become extremely marginalised. To be able to create safeguards to give you the confidence that the algorithm used in AI project management systems is accurate for the predictions it’s used for, you need a clear understanding of the various dimensions of AI risks to be able to challenge the results.

Potential pitfalls with biased data

Of the AI risk dimensions highlighted in Table 2 above, biased data is often identified as one of the key potential pitfalls that can mislead humans in an AI-based project management environment. We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we delegate decisions to artificial intelligence, we expect it to do the same, only better: in principle, AI can be taught to filter bias and irrelevancies out of the decision-making process, pluck the most suitable project activities from a work breakdown structure, and guide the project manager based on what it calculates is objectively best rather than simply on what we have done in the past. In practice, however, human beings carry cognitive bias directly into AI systems without realising it. Like the human brain, AI is subject to cognitive bias. Human cognitive biases are heuristics: mental shortcuts that skew decision-making and reasoning, resulting in reasoning errors (e.g. stereotyping, the bandwagon effect, confirmation bias, priming, selective perception, the gambler’s fallacy and observational selection bias). The list of documented cognitive biases keeps growing, and any one of them can affect how we make decisions. Human cognitive bias flows into AI through the data, algorithms and interactions we use to design and train AI-based systems, which then make decisions based on those past experiences.
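To see how bias hides in historical data, consider a toy check that compares success rates across groups in past project records; a large gap suggests a model trained on this data would simply reproduce the historical pattern rather than anything objective. The team names, fields and 0.2 threshold below are hypothetical:

```python
from collections import defaultdict

def success_rates(projects):
    """Per-group success rates from (group, succeeded) records."""
    totals = defaultdict(lambda: [0, 0])   # group -> [successes, count]
    for group, succeeded in projects:
        totals[group][0] += int(succeeded)
        totals[group][1] += 1
    return {g: s / n for g, (s, n) in totals.items()}

def flag_bias(projects, max_gap=0.2):
    """Flag the data set when success rates between groups differ by
    more than max_gap -- a hint that a model trained on it may learn
    the historical disparity as if it were a law of nature."""
    rates = success_rates(projects)
    return max(rates.values()) - min(rates.values()) > max_gap

history = [("team_a", True), ("team_a", True), ("team_a", False),
           ("team_b", False), ("team_b", False), ("team_b", True)]
```

Such a check catches only the crudest kind of bias, but it illustrates that bias detection starts with the training data, not with the algorithm.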

Lack of knowledge and AI risk literacy

The overall challenge in using artificial intelligence in the project management world will not be adapting. Nor will it be the output or recommendations the technology provides to project managers. The main challenge for project managers will be learning how to use the technology in the right context and how to challenge the specific limitations and risks of AI to make reliable and ethical decisions. Risk literacy, statistical thinking and data science are not currently key areas in the training of certified project, programme and portfolio managers. But in the words of Gerd Gigerenzer, a psychologist at the Max Planck Institute for Human Development:

“Literacy — the ability to read and write — is the precondition for a project manager. But knowing how to read and write is no longer enough. The breakneck speed of technological innovation, e.g. AI, has made risk literacy as indispensable in the 21st century as reading and writing were in the 20th century.”

Training in AI will become more relevant than ever before. AI risk literacy is the ability to deal with uncertainties in an informed way. Without it, project sponsors jeopardise their investments and benefits, and can be manipulated by unwarranted, even damaging hopes, myths and fears. Simply stated, statistical thinking is the ability to understand and critically evaluate uncertainties and risks. Statistical thinking is best taught as the art of real-world problem solving − reasoning about everyday risks such as drinking, illness or dangerous activities − which is why, of all the mathematical disciplines, it connects most directly to everyday life.
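A classic exercise in statistical thinking shows why this literacy matters: even a seemingly accurate failure predictor can be barely better than a coin flip once base rates are taken into account. The figures below are illustrative assumptions:

```python
def posterior_failure(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(project actually fails | predictor flags it)."""
    flagged = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / flagged

# Suppose 10% of projects fail, the predictor catches 95% of failures,
# but also wrongly flags 10% of healthy projects.
p = posterior_failure(prior=0.10, sensitivity=0.95, false_positive_rate=0.10)
# p is roughly 0.51: a flagged project is almost as likely to succeed as to fail.
```

A project manager without this kind of base-rate reasoning would read ‘95% accurate’ as near-certainty, when the honest answer is closer to a coin toss.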

Educators and project managers alike should realise that risk literacy is a vital topic for the 21st century. Rather than being nudged into doing what experts believe is right, people should be encouraged and equipped to make informed decisions for themselves. Risk literacy should be taught from the beginning in the context of AI in project management.

Risks and responsibilities in projects are chances to be taken, not avoided. However, AI risk literacy in project management is only the first step in gaining an understanding of AI risk and its limitations for project managers. The second step is to establish ethical standards from the outset in the AI algorithms used in AI-based project management software. This is because human values such as compassion, justice, fairness, love, hope, responsibility, liberty and dignity are often not well expressed in AI algorithms or are not translated correctly into a machine value.

The controversy around ethical compliance and AI systems

If we look into the myths around AI, we have to differentiate between simple and advanced AI-based project management software. The myth that AI will override human values is based on advanced AI, not on the simple AI currently used in project management. A chatbot will not be able to directly jeopardise, demote or disregard human values. Advanced AI, by contrast, is able to influence or drive data bias and the accountability of decision-making. If the machine learning algorithm is based on a complex neural network, or a genetic algorithm is produced by directed evolution, it may prove nearly impossible to understand why, or even how, the algorithm is assessing the project environment, its actors and activities based on their success or failure rates. However, as machine values are derived in the cultural context of humans, mechanised industry and science, these values are often represented differently when expressed algorithmically and operationalised in computational systems. This leads to a misnomer: ‘machine values’ are not really machine values; they are simply human values that lend themselves to implementation in machine cognition. This means that AI in the context of project management is at present nothing more than a codification of current project management practice (e.g. PMBOK or PRINCE2), but without all the risks and limitations associated with AI in this context having been considered.

There is some debate around the controversial issue of ethical compliance in AI-based systems. But there is also acceptance that simple AI will merely replicate existing patterns, whereas the ethical risks and limitations of advanced AI deserve much closer consideration. This is because advanced AI can specialise in more than one task, and is becoming so capable that human values, ethical standards and supervision risk being lost even before the ethical implications of simple and advanced AI have been fully discussed.

However, the main point of the controversy is understanding the term morality. Morality is the basis of ethics, and is essentially about the difference between what is right and what is wrong from a human perspective. Even in the simple AI that exists today, ethical dilemmas and implications are already beginning to arise. For example, suppose a project is close to failure owing to scope and time issues, and you ask your AI chatbot how you can hide or overcome the issue. The chatbot, being an AI-driven program, cannot infer whether this is moral or immoral − right or wrong − and simply attempts to answer the question that has been put to it. It will recommend fast-tracking and crashing schedule compression techniques without considering the workload or health of the additional resources. So ethical dilemmas already exist, or are beginning to arise, even in weak AI.

Advanced AI project management software would automatically reallocate resources to parallel activities and accordingly shorten the schedule along the critical path. As a potentially fatal result, the advanced AI learns from such a successful bypass and, through reverse adaptation, incorporates fast-tracking and crashing optimisation into its standard project management mechanisms − which could result in the exploitation of labour and violations of labour law. In another example, an advanced-AI-based project management system in a product engineering, automotive or pharmaceutical project could decide to postpone non-functional scope items to a later project phase or reduce test cycles, as they are not required for the core functionality of the new product. This could result in a product design harmful to humans, for example because non-functional health, environmental or security standards have been lowered, postponed or left untested. Here the ethical question arises as to who is accountable for any of these decisions from the point of view of morality, reliability, corrigibility and reality.

Will the machine be responsible? Or will it be the human who designed the machine, the human who provided the data to train the algorithm, or even the human who used the machine and the underlying classifications and predictions?
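A minimal safeguard against the fast-tracking exploit described above is to make the human constraint explicit in code: a schedule-compression proposal is accepted only if no resource is pushed past a working-hours cap. The cap, names and hours below are hypothetical:

```python
MAX_WEEKLY_HOURS = 45  # hypothetical labour-law / wellbeing cap

def can_fast_track(current_hours, extra_hours):
    """Accept a schedule-compression proposal only if no resource is
    pushed past the working-hours cap by the extra assigned hours."""
    return all(current_hours.get(name, 0) + hours <= MAX_WEEKLY_HOURS
               for name, hours in extra_hours.items())

team = {"alice": 40, "bob": 38}
```

The design choice is that the wellbeing constraint lives outside the optimiser, so the AI cannot learn its way around it by treating overtime as just another variable to trade off.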


Morality

As morality refers to a code of conduct (law, business rules, cultural standards, etc.) that would be accepted by anyone who meets certain intellectual and volitional conditions, the AI machine algorithm needs to understand the defined code of conduct. For example, there should be ethical standards stipulating that it is not permitted for an advanced-AI-based project management system to optimise project success by reducing human and environmental safety (reward corruption).


Reliability

Even if the AI-based project management system follows a specific code of conduct, the algorithm may still be able to corrupt either the reward function itself or the data feeding it. It is therefore of utmost importance to ensure that the self-learning AI algorithms and the data feeding procedures and structures are reliable.


Corrigibility

In the event that an AI algorithm, through the self-learning process, violates the code of conduct intentionally (reward) or accidentally (optimisation), standards have to be defined and programmed into the AI-based project management system to shut the whole software down or correct the algorithm immediately.


Reality

If it should come to harmful actions, classifications or predictions on the part of the AI-based project management system, there is still the question of impact, accountability and punishment in each real case. As a machine cannot be punished or made responsible for any actions, clear standards governing the enforcement of consequences have to be defined in the organisation.
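Three of the four dimensions above − morality, corrigibility and reality − can be sketched as a thin guardrail around the AI’s decisions; reliability (protecting the reward function and its data) would require separate measures. This is an illustrative sketch under those assumptions, not a real safeguard implementation:

```python
class EthicsGuardrail:
    """Guardrail sketch: morality (code-of-conduct check), corrigibility
    (immediate shutdown on a violation) and reality (an accountability
    log naming a responsible human owner)."""

    def __init__(self, owner: str):
        self.owner = owner   # reality: a named human stays accountable
        self.active = True
        self.log = []

    def decide(self, action: str, violates_conduct: bool) -> str:
        if not self.active:
            return "system halted"
        self.log.append((self.owner, action))   # reality: record every decision
        if violates_conduct:                    # morality: code-of-conduct check
            self.active = False                 # corrigibility: shut down at once
            return "blocked and shut down"
        return f"approved: {action}"
```

Note that shutdown is irreversible from inside the system itself: only a human can reactivate it, which is precisely the corrigibility property the text calls for.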


Avoiding ethical issues in AI-based project management systems would require translating common human ethical standards into the machine learning algorithm and setting up ethical standards for using AI across the entire organisation. Furthermore, before using the system in decision-making processes, the organisation has to define clear responsibilities for any consequences (positive and negative) arising from the AI-based system, whether acting alone or in collaboration with humans. Understanding and addressing the ethical and moral issues related to AI is still at a very early stage. It’s not a simple matter of ‘right or wrong’ or ‘good or bad’. It’s not even a problem that can be solved by a small group of people. Even so, ethical and moral issues related to AI are critical, and need to be discussed now in the context of project management.

How to stay on the bright side of project management using AI

AI designed for project management

Our examination of the risks and limitations gives a clear picture of the potential pitfalls of using AI in project management. Even at the early stage we’re at now, it’s vital to be aware of the threats. But instead of throwing in the towel, we should be concentrating on drawing the right conclusions on how to circumvent potentially evil scenarios related to artificial intelligence. It’s important to remember that history, as well as numerous current studies, suggests that while some jobs and activities will disappear, new opportunities will also arise in parallel. A wide range of social and natural science skills will be necessary to develop, support and commercialise artificial intelligence to enable its appropriate use in project management. If the right lessons are learned and suitable measures taken, we will be able to use AI to stay on the bright side of project management. From the advanced AI perspective, we see the most beneficial advances in project management relating to predictions (machine-learning-based project management) and in the autonomous project management space.

Machine-learning-based project management enables predictive analytics and can provide advice to the project manager, for example on how to set up and steer the project given certain parameters, and/or how to react to certain issues and risks to reach the best possible outcome based on what worked in past projects.
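As an illustration, such advice could rest on a sketch that estimates a new project’s success probability from its most similar predecessors (a nearest-neighbour approach; all fields and figures below are hypothetical):

```python
import math

def predict_success(past_projects, candidate, k=3):
    """Estimate success probability of a new project as the share of
    successes among its k most similar past projects.
    Each record is ((budget_ratio, scope_changes), succeeded)."""
    nearest = sorted(past_projects,
                     key=lambda p: math.dist(p[0], candidate))[:k]
    return sum(ok for _, ok in nearest) / k

# Hypothetical history: (final budget / planned budget, # of scope changes)
past_runs = [((1.0, 2), True), ((1.1, 3), True), ((1.6, 9), False),
             ((1.4, 7), False), ((0.9, 1), True)]
```

Real machine-learning-based tools would use far richer features and models, but the principle is the same: the prediction is only as good as the past projects it is anchored to.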

Autonomous project management is most likely still decades away. The leading experts on artificial general intelligence cannot agree on a timeline: “[…] in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more”. Nevertheless, there might be dedicated areas where autonomous project management could serve as an extension of machine-learning-based project management in the future.

In summary, AI clearly creates the possibility of automated processes and intelligent tools that will reduce manual work. To negotiate this journey successfully, it’s essential to look beyond our own horizons and anticipate what actions have to be taken to ensure the use of artificial intelligence in project management stays on the bright side. Below we outline what project managers of the future will have to bring to the table to face the upcoming challenges, and how they can assume a new role in which they collaborate with artificial intelligence rather than opposing it.

Education and technological knowledge required by PMs

The requirements for project managers to be able to use AI to their benefit vary depending on the precise application of AI within the project. Basically, these requirements fall into two categories: project management process knowledge, and general technological knowledge.

Project managers still need to know the ropes; they need to train holistically in project management processes and gain experience in managing projects the ‘classical’ way. PMs need to know the strategic impact their projects have on the organisation. They need to understand the goals their projects are trying to deliver and what sets their projects apart from others, including past projects.

However, in order to adapt to the new world of AI-supported project management, future project managers need to have a clear understanding of what AI can and cannot do. They need to be familiar with the project data set available within the organisation for the AI to learn from, since any potential bias lies within this data set. When it comes to evaluating the data set, it’s important for project managers to ensure that the organisation’s core values are reflected in the data available from past projects. This step is important, since the AI is going to learn from the available data set. If, for example, the rationale for decisions taken is not properly reflected within the data, AI-driven decision support systems might base their output on wrongly identified patterns in the data.

As previously mentioned, risk literacy and an understanding of statistical probability are among the key requirements for future project managers. Project managers also need to question the results of AI-generated advice: they need to understand whether the referenced probability is actually applicable to the project at hand, and whether the project constraints (time, scope and budget) are genuinely comparable. Without the ability to understand these key concepts behind advanced AI, project managers who blindly trust an AI’s recommendation might not improve their project success rate, but rather create unintended chaos and a project spiralling towards shutdown.
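One way to question the applicability of a referenced probability is to check whether the new project’s constraints even fall inside the range the AI has seen before. A toy sketch, with hypothetical field names and values:

```python
def outside_experience(history, candidate):
    """Return the constraints of the new project that fall outside the
    range seen in past projects; for those, an AI-quoted probability
    may simply not apply."""
    unsupported = []
    for key, value in candidate.items():
        seen = [p[key] for p in history if key in p]
        if not seen or not (min(seen) <= value <= max(seen)):
            unsupported.append(key)
    return unsupported

# Hypothetical past-project constraints (duration in months, budget in kCHF)
past = [{"duration_m": 6, "budget_k": 200}, {"duration_m": 18, "budget_k": 900}]
```

A non-empty result is a cue for the project manager to challenge the recommendation rather than trust the quoted probability.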

Trust and transparency

Even if a project manager is educated in the core principles of AI, the question still remains as to whether we can trust an advanced AI project management system and its underlying machine learning algorithm to do what they were designed to do. For an advanced AI in particular, it will be even harder to answer this question, since technologies like machine and deep learning are so hard to grasp, even for experts in that domain. Not only this, but the behaviour of an AI system is heavily influenced by the data it has been trained on, and any inherent bias in the data will be reflected in the deployed models. What’s even more striking is that many advanced-AI-based project management systems will potentially be used in zero-failure-tolerance engineering and construction projects (aircraft, automotive, life sciences, energy, military, etc.). This means that the question of trust is inevitable and central to any further efforts to spread the use of AI algorithms in such vital project environments.

Building trust in AI-based project management systems involves being clear about the algorithms selected and the data used to train them. This means you need mathematicians and data scientists to independently assess the algorithms and data selections for bias, fairness and inclusion, to ensure that the AI-based project management system is programmed and trained without human biases, and to prevent the AI system from evolving its own sentience and sapience and coming up with its own biases. One way of increasing transparency will be to have independent AI algorithm inspections conducted by a third-party trust provider who specialises in AI and has the skills to assess such a complex environment.

Building trust in AI also involves creating a transparent accountability and responsibility matrix and ethical standards within the organisation that clearly outline what happens when an AI system fails at an assigned task. Who should be held responsible for an undesirable consequence caused by the programming code, the input data, improper operation or other factors?

Furthermore, given that the project management data used to train the AI algorithm often includes personal and private data (e.g. timesheets or HR records), security and privacy standards and requirements should be defined within the organisation. To prevent misuse and malicious use, the data stewards and the data owner responsible must manage this data properly. To keep data safe, each action on the data should be detailed and recorded by the AI-based project management software.
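Recording each action on the data can be as simple as wrapping every data-access function in an audit decorator; the function and user names below are hypothetical:

```python
import functools
from datetime import datetime, timezone

AUDIT_LOG = []  # in a real system this would be tamper-evident storage

def audited(user):
    """Decorator that records who called which data-access function,
    and when, before the call itself runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(),
                              user, fn.__name__))
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited(user="pm_tool")
def read_timesheets(team):
    # stand-in for a real lookup of personal timesheet data
    return {member: 40 for member in team}
```

Centralising access through such a wrapper means the data steward can review exactly which personal records the AI software touched, and when.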

This means that project managers of the future who use advanced AI in their projects also need, in addition to their core project management knowledge, new skills and knowledge in mathematics, data science and compliance to be able to assess the risk and limitations of the dark side of AI.

Conclusion

When it comes to preventing project management from falling into AI darkness, making sure project managers are aware of the difference between simple AI and advanced AI is merely the first step. Project managers need to understand how advanced AI works and how to trust the results it presents. But they also need to know about the myths and risks, as well as the fears that advanced AI poses. As with any endeavour, it’s essential to plan for failure: AI implementation will not be free of risk. Just as car manufacturers build in safety measures for engine breakdowns, flat tyres or brake failure, project managers need to anticipate and plan for any exceptional AI-driven flaw.

The risks presented in this paper can be navigated. Project managers need to embark on a journey to improve their overall risk literacy to successfully use AI to support their projects. We recommend the following steps to get the best results from using AI in projects:

  1. Define a path towards gaining technical expertise in AI at all levels of project management
  2. Create an understanding of the perceived and actual impediments, myths, fears and risks surrounding AI-based project management systems
  3. Establish ethical standards for using AI-based project management systems
  4. Ensure that the AI algorithm you’re using is aligned with your values and goals and doesn’t violate your human and environmental safety and security standards, by incorporating independent assessments.

If these four steps are followed, project managers using AI will not only be able to stay on the bright side of project management and be more successful in implementing their projects; they will also have gained a deeper understanding of their projects and the associated risks, and will be able to prevent project management from falling into AI darkness.

 

Contact us

Marc Lahmann


Partner and Leader Transformation Assurance, PwC Switzerland

Tel: +41 58 792 27 99

Adrian Stierli


Transformation Assurance, PwC Switzerland

Tel: +41 58 792 21 69