What ethics for artificial intelligence?

The Didask team
Passionate about pedagogy that engages your learners and makes an impact

Artificial intelligence (AI) is a global technological challenge. It concerns every sector of society and is present in specialized fields such as research, defense, health, and training, as well as in our daily lives, through numerous intelligent, personalized applications for our professional and private activities. Artificial intelligence is a revolution that augments human capabilities, but it also raises many issues related to the development and use of complex algorithmic systems. This article explains why it is essential to put in place a regulatory framework in order to build an ethical, equitable, and responsible artificial intelligence system.

Why is responsible AI important?

With generative AI, we delegate many tasks, reasoning processes, and increasingly critical decisions to machines. In this exploration of possibilities, driven by a fast-changing technological environment, the question of ethical AI aligned with society's values and expectations is central. Artificial intelligence is an undeniable driver of progress, but the development and deployment of this digital technology must take into account all of its potential societal impacts in terms of bias, transparency, and privacy protection.

Here are the main challenges to be overcome in order to transform an AI into ethical artificial intelligence:

  • The lack of transparency of algorithmic systems. The decisions made by AIs are opaque (most algorithms are not open source), which poses real problems of accountability and ethics.
  • The reproduction of biases and discriminations present in the datasets on which AI algorithms are trained.
  • The production of misleading information in text, sound, image, or video format.
  • The threat to the right to privacy and the protection of personal and confidential data, due to the massive data collection carried out by artificial intelligence, in particular via surveillance devices (real-time facial recognition combined with algorithmic video surveillance).
  • Problems related to the limits of autonomy of AI-equipped digital systems and to the means of controlling them.

The solution for ethical, safe, reliable and responsible AI is based on the implementation of a regulatory framework accepted by all actors (designers, professionals, users).

Establishing principles for ethical AI

Given these risks, it is important to establish a set of universal principles that apply to all designers and deployers of AI. The idea is to create a reliable and robust environment, conducive to innovation, and to oversee the development and use of ethical AI. Europe is particularly active and proactive on the subject, driven by the desire to “ensure that AI is human-centered and trustworthy [1]”.

As we will see, the principles and values of ethical artificial intelligence concern all layers of this technology.

Foundation models and generative AI

Generative AI is capable of generating original content (text, images, audio, video, lines of code...) in response to a user request. Its operation is based on a foundation model, a deep learning model that can be exploited in various types of digital systems involving generative AI. For textual content, the most common foundation models are large language models (LLMs) such as OpenAI's GPT (Generative Pre-trained Transformer, which powers ChatGPT) and Anthropic's Claude, while OpenAI's DALL-E is a text-based image generator. These algorithms are trained on masses of data in order to understand and generate texts in natural language.

The ethics, as well as the performance and reliability, of a foundation model depend on its architecture and on the quality of the data that feeds it. A foundation model trained on a corpus strongly marked by stereotypes, erroneous data, or sensitive and inappropriate data will mechanically reproduce these biases and produce harmful content.

For a foundation model to be ethical, the data must be:

  • Representative in terms of diversity and plurality, to allow a broad and neutral understanding of the world in which the system operates.
  • Reliable and regularly checked, so that the model is based on consistent and accurate data.
  • Balanced, in order to avoid the over or under-representation of groups, opinions...
  • Sufficient and current.

Finally, the data used must comply with data protection regulations (GDPR), respect confidentiality and consent, and must not infringe copyright.
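The balance requirement above can be checked in practice before training. As a minimal sketch (the `representation_report` helper and the field names are hypothetical, for illustration only), the following Python snippet flags groups whose share of a toy corpus strays too far from a uniform baseline:

```python
from collections import Counter

def representation_report(records, attribute, tolerance=0.5):
    """Flag groups whose share deviates from a uniform baseline.

    `records` is a list of dicts; `attribute` is the demographic field
    to audit (names here are illustrative, not from any real dataset).
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    baseline = 1 / len(counts)  # uniform share across observed groups
    report = {}
    for group, n in counts.items():
        share = n / total
        # Over- or under-represented if the share strays more than
        # `tolerance` (relative) from the uniform baseline.
        if abs(share - baseline) > tolerance * baseline:
            report[group] = round(share, 2)
    return report

# Toy example: an imbalanced corpus of labelled documents
corpus = (
    [{"region": "EU"}] * 70
    + [{"region": "US"}] * 25
    + [{"region": "Other"}] * 5
)
print(representation_report(corpus, "region"))
```

A report like this only surfaces imbalance on the attributes you choose to audit; deciding which attributes matter remains a governance decision, not a technical one.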

Technological singularity

Technological singularity is a theory according to which artificial intelligence systems will eventually acquire cognitive abilities greater than human ones, along with the ability to improve themselves independently. The term singularity is borrowed from mathematics, where it denotes a point at which the existing model breaks down. This theory predicts a rapid and uncontrolled evolution of AI, giving rise to autonomous systems capable of innovations beyond human understanding and control.

The regulatory framework for ethical AI is the solution to protect our societies from the risks of uncontrolled and irreversible technological growth, leading to profound changes in human civilization and in value systems.

Impact of AI on jobs

AI-based digital systems offer significant productivity gains thanks to the automation of the most repetitive and mechanical tasks. The IMF estimates that AI will affect 60% of jobs in economically advanced countries [2]. And according to the review of the Institut Polytechnique de Paris, nearly 75 million jobs worldwide could be automated [3].

The article also highlights the impact of deploying artificial intelligence:

  • On low- and medium-skilled office jobs, more than 80% of whose tasks could be performed by bots.
  • On the employment of women, who are twice as likely to hold these administrative positions.
  • As a source of inequality with low-income countries, which will not have access to these advanced technologies.

Depending on their fields of application, technological innovations have always had significant repercussions. With AI, the effects on societies and employment are of unprecedented scale. Automation through generative AI systems calls into question the sustainability of certain categories of jobs and accelerates the obsolescence of many skills.

But the deployment of AI can also be synonymous with collective prosperity, improving the quality of work, and contributing to the reduction of inequalities. In order to preserve the central place of humans in the creation of sustainable wealth, it is necessary to implement ethical artificial intelligence governance focused on training in new jobs and the acquisition of new skills.

Confidentiality

Respect for privacy and the protection of confidential data are among the founding principles of ethical AI. Even if its development is based on the exploitation of massive data, artificial intelligence must protect the personal data of individuals by adopting mechanisms similar to those of the GDPR [4]: each individual must be able to give free and informed consent, and to access their personal information at any time in order to modify or even delete it.
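These GDPR-style rights (consent, access, rectification, erasure) can be modeled very simply. The sketch below is purely illustrative; the class and method names are hypothetical and not part of any specific system:

```python
from dataclasses import dataclass, field

@dataclass
class DataSubjectRecord:
    """Minimal model of a data subject's rights over their personal data."""
    subject_id: str
    consent_given: bool = False
    data: dict = field(default_factory=dict)

    def give_consent(self):
        # Consent must be an explicit, free, and informed act.
        self.consent_given = True

    def access(self):
        # Right of access: the subject can read their data at any time.
        return dict(self.data)

    def rectify(self, key, value):
        # Right to rectification: the subject can correct their data.
        self.data[key] = value

    def erase(self):
        # Right to erasure: delete the data and withdraw consent.
        self.data.clear()
        self.consent_given = False

# Illustrative usage
record = DataSubjectRecord("user-42")
record.give_consent()
record.rectify("email", "user@example.org")
```

In a real system each of these operations would also need authentication, audit logging, and propagation to downstream processors; this sketch only captures the shape of the rights themselves.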

Prejudice and discrimination

The design and development of a digital system based on ethical AI must ensure:

  • Equity and non-discrimination, by setting up procedures to control the algorithms and data used to train foundation models.
  • Non-maleficence. AIs should not produce content that is harmful to people, society, or the environment.
  • Inclusiveness. Ethical AI must respect the diversity of viewpoints and societies in a neutral and balanced way.

Accountability and transparency

Accountability and transparency are also among the principles of ethical AI. Responsibility requires that the design, development, deployment, and use of these technologies be aligned with stakeholder values, legal standards, and ethical principles.

Transparency concerns the functioning of algorithms, in order to facilitate controls and the understanding of users.

How to establish ethical AI

Governance

Governance is the first building block of ethical artificial intelligence. This management approach relies on mechanisms for monitoring the algorithm design and deep learning phases in order to correct possible flaws: maleficence, violation of confidential data, discrimination, overrepresentation of certain opinions, lack of diversity... Governance establishes a balanced framework, essential for the emergence of ethical and responsible AI.

The fields of action of ethical AI

Ethical AI can be implemented in 10 areas:

  • Health,
  • Finance,
  • Logistics and transport,
  • Research, education and training (with tools like an AI to create training courses),
  • Energy and the environment,
  • Media and entertainment,
  • Commerce and marketing,
  • Industry,
  • Project management,
  • Recruiting and managing human resources and skills (educational AI).

Promoting responsible AI practices

The promotion of responsible artificial intelligence practices in companies, based on values centered on respect for human rights, ethics and transparency, is essential. It ensures the development of technologies that improve our capabilities and performance within a resilient ethical framework in terms of impartiality, transparency and privacy protection.

The decisive role of AI in the fields of learning and training

The contribution of AI to training is decisive. Businesses and training organizations take advantage of the power and agility of intelligent digital systems.

This intelligent digital system optimizes the personalization of training offers, for greater efficiency, and prepares employees for a more immersive AI-enhanced environment. By becoming actors in their training, learners are fully committed to the success of their program. It is also a concrete and more controlled return on investment for the company.

Using Didask's AI training assistant with its LMS (Learning Management System) personalizes the learning experience and makes it more effective through a method based on cognitive science. Didask's educational AI is part of a strong ethical approach that respects the values of equity, transparency, responsibility, and the protection of privacy. This technological tool does not replace humans; it simply makes it possible to democratize access to quality pedagogy and to promote tailor-made teaching, for increased efficiency.

The Didask educational artificial intelligence platform is ethical because:

  • It is supervised by humans and places them at the heart of learning.
  • The development of educational paths is based on the principles of cognitive sciences, making it possible to offer content adapted to a great diversity of learners and to support their development of skills.
  • It makes access to training equitable through the agile creation of tailor-made courses.
  • It complies with GDPR (General Data Protection Regulation) standards, guaranteeing maximum protection of user information. In addition, it implements strict measures for the protection and confidentiality of the data collected, ensuring the secure use of Didask e-learning tools.

Sources:

[1] https://digital-strategy.ec.europa.eu/fr/policies/european-approach-artificial-intelligence

[2] https://www.lemondeinformatique.fr/actualites/lire-l-ia-impactera-40-des-emplois-dans-le-monde-selon-le-fmi-92668.html

[3] https://www.polytechnique-insights.com/tribunes/digital/intelligence-artificielle-quelles-consequences-pour-le-travail/

[4] https://www.cnil.fr/fr/rgpd-de-quoi-parle-t-on

About the author
The Didask team

Passionate about pedagogy and e-learning, we share the best practices learned in contact with our customers!

Want to learn more or give it a try?

Book a meeting directly with our eLearning experts for a demo, or simply to get more information.