This article explores why Artificial Intelligence must be ethical and fair as it integrates into human life; our relationship with machines will only become more symbiotic in the future.

Artificial Intelligence (AI) and Machine Learning are transforming industries. AI has a multitude of applications, from solving complex production problems and improving safety to assessing and replicating human emotions. But as machines become more capable, the need for research into, and regulation of, how machines ‘think’ has become more apparent.

What are Artificial Intelligence and Machine Learning?

Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, usually through computer systems. These processes allow machines to acquire information and rules, apply reasoning using those rules to reach approximate or definite conclusions, and independently correct their actions. Machine Learning is the term given to a system’s ability to automatically learn and improve from experience without being explicitly programmed.
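
As a minimal sketch of that distinction, the example below lets a model infer a rule from labelled examples rather than being explicitly programmed with one. The scikit-learn library and the toy maintenance data are assumptions made purely for illustration and are not part of the article.

```python
# A toy "learning from experience" example: the model infers a rule from
# labelled examples instead of being explicitly programmed with that rule.
# scikit-learn and the maintenance data below are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Each example: [running hours, vibration level] -> 1 if maintenance was needed
X = [[100, 0.2], [800, 0.9], [150, 0.3], [900, 1.1], [400, 0.5], [950, 1.3]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rule generalises to a machine the model has never seen before.
print(model.predict([[850, 1.0]]))  # expected output: [1] (flag for maintenance)
```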

The Artificial Intelligence and Machine Learning landscape

AI and Machine Learning are driving change across industries, from revolutions in manufacturing to conversational ‘chatbots’ reshaping customer service.

Within the manufacturing industry, processes using AI and Machine Learning are no longer merely repeating monotonous tasks. Andrew Ng, creator of the Google Brain project and Professor of Computer Science at Stanford University, recognises the vast possibilities for AI in manufacturing: “AI will perform manufacturing, quality control, shorten design time, and reduce materials waste, improve production reuse, perform predictive maintenance, and more.”

The scale at which AI and Machine Learning will change this industry is huge, with experts such as Professor Detlef Zühlke, Head Researcher at the German Research Center for Artificial Intelligence (DFKI), stating: “We will have a fourth industrial revolution”.

Manufacturing businesses are also identifying opportunities for AI and Machine Learning outside of their traditional factory settings. Jaguar Land Rover, the UK’s leading automotive firm, is pioneering the research and development of autonomous vehicles. It is addressing the psychological barriers that stop humans trusting self-driving cars by adding ‘eyes’ to its prototype driverless cars, intended to ‘communicate’ with pedestrians by replicating human eye contact (Forbes, 2018).

Within the telecoms and media industry, intuitive ‘chatbots’ are being used in customer service to pick up on human emotion and give accurate advice. Sentiment analysis within these automated systems assesses a customer’s emotions through text and voice, informing decisions on how best to help them.
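
As a rough sketch of that idea, the example below uses a simple word-list sentiment score to decide whether a chatbot keeps handling a message or escalates it to a human agent. The word lists, threshold and function names are invented for illustration; production chatbots rely on trained sentiment models over text and voice.

```python
# Crude, lexicon-based sketch of sentiment-driven routing in a chatbot.
# The word lists, threshold and function names are invented for illustration.
POSITIVE = {"great", "thanks", "happy", "love", "resolved"}
NEGATIVE = {"angry", "terrible", "broken", "refund", "frustrated"}

def sentiment_score(message: str) -> int:
    """Positive words add 1 to the score, negative words subtract 1."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def route(message: str) -> str:
    """Hand unhappy customers to a person; let the bot handle the rest."""
    if sentiment_score(message) < 0:
        return "escalate_to_human_agent"
    return "continue_automated_support"

print(route("I am frustrated and my router is broken"))  # escalate_to_human_agent
```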

But can this human-like, intuitive intelligence be trusted? And with machines able to pick up on such personal information, should we be concerned about the ethics of AI and data bias?

Ethics for Artificial Intelligence and Machine Learning

On 8th April 2019, the European Commission’s High-Level Expert Group (HLEG) on Artificial Intelligence released the final version of its ‘Ethics Guidelines for Trustworthy AI’, warning that machine algorithms must not discriminate on grounds such as age, race or gender (Financial Times, 2019).

The Guidelines, consisting of three chapters, aim to outline a framework for achieving trustworthy AI and offer guidance on two of its fundamental components: AI should be ethical, and it should be robust from both a technical and a societal perspective.

Chapter two of the Guidelines details seven requirements that AI should meet:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability

AI and Machine Learning help to reduce human error and improve our lives, but should never replace human integrity.

Mariya Gabriel, EU Commissioner for Digital Economy and Society, says: “We don’t want to stop innovation, but the added value of the EU approach is that we are making it a people-focussed process. People are in charge.”

Collaboration 

Francesca Rossi, Head of Ethics at IBM, recognises the increase in collaboration around ethical AI, stating: “I have seen that more and more initiatives are coming out at the same time, and at this point, I think the AI Ethics community recognises the need for coordination and convergence to be as efficient as possible.”

As part of an industry-wide discussion, the Partnership on AI was founded in 2017. The consortium, backed by Google, DeepMind, Facebook, Amazon, Microsoft and Apple, was established to study and formulate best practices on AI technologies. The group contains over 90 collaborators from non-profit organisations, academia and industry, who work together to advance the public’s understanding of AI, serving as an open platform for engagement on AI and its influence on people and society.

Any ethics committee formed to scrutinise and regulate AI and Machine Learning applications needs to be independent. Its members should be representative of society, ensuring that the public can trust the ethics being implemented.

The potential of Artificial Intelligence is only limited by human imagination.

Conclusion

It is clear that AI and Machine Learning are becoming ever more prevalent in our everyday lives, making machines more human. As this happens, we need to ensure that our human compassion and ethics are not lost within the matrix of computerised data. This can be achieved through regulation by trusted and representative committees, along with industry-wide collaboration.


Contact Hayley McCarthy for more information on AI.

Article by Joseph Lee.