The dilemmas of artificial intelligence in 2021

No matter which newspaper, magazine, or journal you read, you are likely to find an article on artificial intelligence (AI), usually warning that the “robots are taking over” and that this mysterious technology is the greatest threat to humanity since the invention of the atomic bomb.

Indeed, this week the press has been abuzz about the EU’s proposed legislation to regulate AI, the first attempt of its kind anywhere in the world.

Meanwhile, companies developing artificial intelligence applications are hyping their innovations, explaining how they will change people’s lives while obscuring the real value in a fog of marketing hyperbole.

Then there is the technology itself – the science of maths, data, and computers – which, outside the world of developers, can seem like black magic to the layman.

No wonder business leaders are confused about what AI can do for their business. What exactly is AI? What can it do? What benefits does it bring to businesses? Where to start? These are all legitimate questions that have so far gone unanswered.

AI, in its broader sense, will have a fundamental impact on business; there is no doubt about that. It will change decision-making, enable entirely new business models, and make possible things we never imagined before.

AI is already being used by businesses to augment, improve or even change the way they operate. Even now, more enlightened business leaders are working out how AI can add value to their businesses, trying to understand the different types of technology and working out how to reduce the risks it inevitably entails.

Many of these efforts are hidden or kept secret, either because companies do not want the use of AI in their products or services to be widely known, or because they simply do not want to reveal the competitive advantage it brings.

For managers who want to get to grips with AI, the constant challenge is where to find relevant information without wading through sensationalist articles, listening to vendor hyperbole, or trying to decipher the algorithms themselves.

Artificial intelligence is one of the “known unknowns” – we know we don’t know enough about it.

People generally experience AI as consumers. Every smartphone has access to sophisticated AI, be it Siri, Cortana, or Google Assistant. Artificial intelligence is already present in our homes through Amazon Alexa and Google Home. These are supposed to make it easier to organize our lives and generally do a pretty good job. Most of them rely on the ability to turn speech into words and fill those words with meaning.

Where are we now?

Artificial intelligence can:

  • read thousands of legal contracts in minutes and extract all the valuable information from them
  • identify cancerous tumors with greater accuracy than radiologists
  • detect fraudulent credit card use before it happens
  • drive cars without drivers
  • run data centers more efficiently than humans
  • predict when customers (and employees) will leave us
  • learn and grow from its own experience.

But until business leaders understand, in simple and straightforward terms, what AI is and how it can help their business, it will never reach its full potential.

However, ethics is a critical issue. Although AI is changing the way businesses operate, there are concerns about how it could affect our lives. This is not only a scientific or social concern but also a reputational risk for companies. No company wants to get caught up in data or AI ethics scandals.

What are the ethical dilemmas of AI?

Automated decisions

AI algorithms can be as biased as humans, because humans create them. These biases prevent AI systems from making fair decisions. Biases appear in AI systems for two reasons:

  • Developers build biased assumptions into them without even realizing it.
  • The historical data used to train AI algorithms may not adequately represent the entire population.

Biased AI algorithms can lead to discrimination against minority groups. For example, Amazon stopped using its AI recruitment tool after a year when its developers found that it discriminated against women. Around 60% of the candidates selected by the AI tool were male, a pattern that emerged from Amazon’s previous recruitment data.
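The mechanism behind cases like this is easy to demonstrate: a model trained on skewed historical data simply reproduces the skew. Here is a minimal Python sketch using entirely made-up hiring records (the numbers and the `hire_rate` helper are illustrative, not Amazon’s actual data or system), where a naive model scores candidates by their group’s historical hire rate:

```python
# Hypothetical historical hiring records as (group, hired) pairs.
# The data reflects past bias: most successful hires were male.
history = (
    [("male", 1)] * 60 + [("male", 0)] * 40 +
    [("female", 1)] * 15 + [("female", 0)] * 85
)

def hire_rate(group):
    """Naive 'model': score a candidate by the historical hire
    rate of their group. It learns nothing but the bias in the data."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

print(hire_rate("male"))    # -> 0.6
print(hire_rate("female"))  # -> 0.15
```

A real system would use a far more complex model, but the principle is the same: if the training data encodes a historical imbalance, the model will faithfully recommend more of it.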

Autonomous Things

Autonomous Things (AuT) are devices and machines that perform specific tasks autonomously without human intervention. These machines include self-driving cars, drones, and robots. 

Self-driving cars

The autonomous vehicles market was worth $54 billion in 2019 and is forecast to reach $557 billion by 2026. However, autonomous vehicles raise difficult questions about liability and accountability.

For example, in 2018, Uber’s self-driving car hit a pedestrian, who later died in hospital – the first recorded death involving a self-driving car. After an investigation by the Arizona police and the US National Transportation Safety Board (NTSB), prosecutors concluded that the company was not criminally liable for the pedestrian’s death, since the safety driver had been distracted by a cell phone; police reports labeled the accident “entirely avoidable.”

Lethal Autonomous Weapons (LAW)

LAWs are an element of the artificial intelligence arms race. They autonomously identify and attack targets based on programmed constraints and descriptions. There is an ongoing debate about the ethics of the military use of weaponized AI; the UN met to discuss the issue in 2018, and the Campaign to Stop Killer Robots sent a letter warning of the dangers of an AI arms race. The letter was signed by prominent figures such as Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Jaan Tallinn, and Demis Hassabis.

Unemployment due to automation

This is currently the biggest fear surrounding AI. According to a CNBC survey, 27% of Americans believe AI will eliminate their jobs within five years; among 18–24-year-olds, the figure rises to 37%.

McKinsey estimates that intelligent agents and robots could replace up to 30% of the world’s current human workforce by 2030. Depending on the adoption scenario, automation will displace between 400 and 800 million jobs, requiring up to 375 million people to change careers entirely.

However, there is no data yet on how many people have begun retraining or further study to prepare for this shift.

Misuse of artificial intelligence

Surveillance practices that restrict privacy

“Big Brother is watching you.” Everyone knows the quote from George Orwell’s dystopian novel 1984. Although it was written as fiction, it has arguably become reality as governments use artificial intelligence for mass surveillance. The introduction of facial recognition technology into surveillance systems raises particular concerns about privacy rights.

According to the AI Global Surveillance (AIGS) Index, 176 countries use AI-based surveillance systems, and liberal democracies are the primary users of this technology. The same study shows that 51% of advanced democracies use AI-based surveillance systems, compared to 37% of closed autocratic states. However, this is probably due to the wealth gap between the two groups of countries.

From an ethical perspective, the critical question is whether governments are abusing the technology or using it legitimately. 

Manipulation of human judgment

AI-enabled analytics can provide valuable insights into human behavior, but misusing them to manipulate human judgment is ethically wrong. The best-known example of the misuse of analytics is the Facebook and Cambridge Analytica data scandal.

Cambridge Analytica sold US voter data obtained on Facebook to political campaigns, providing assistance and analytics to the 2016 presidential campaigns of Ted Cruz and Donald Trump. Information about the data breach was made public in 2018, and the Federal Trade Commission fined Facebook $5 billion for the data breach.

The spread of deepfakes

Deepfakes are synthetically generated images or videos in which one person’s likeness is replaced with another’s.

Around 96% of deepfakes are pornographic videos, and the four largest deepfake pornography websites have attracted over 134 million views. The real danger and ethical concern for society, however, is how deepfakes can be used to fake the speeches of political leaders.

Creating a false narrative using deepfakes could damage people’s trust in the media (which is already at an all-time low). This distrust is dangerous for societies, given that the mass media is still the primary means for governments to inform people about emergencies (e.g., pandemics).


The new world is upon us. Do you have dilemmas?
