AI ethics – high time to discuss what we want from it

While some expect autonomous robots to take over the world, others bet on the Promised Land becoming reality. At this moment in history, humankind has the technical means to make utopian visions, or indeed dystopian ones, a reality. The fundamental question, when almost everything is possible, appears to be: what should we do? If well managed, Artificial Intelligence could help us make great strides in tackling climate change, increasing well-being and ensuring social fairness.

The future of Artificial Intelligence (AI) is not carved in stone. In the coming decades, working and private life will change profoundly, depending on today’s choices in research, politics and business. To contribute to this debate, CEC has participated in the stakeholder consultation on ethical guidelines for Artificial Intelligence, launched by the EU’s High-Level Expert Group on AI (see below). These guidelines can be seen as a sort of compass for subsequent policy and business choices. As an advocate of sustainable leadership, CEC highlights the importance of a clear vision, strategy and purpose for developing AI systems and governance.

The document defines the “ethical purpose” of AI as respecting the rights, principles and values enshrined in the EU Treaties and in the Charter of Fundamental Rights of the European Union. Unfortunately, the distinctions between the concepts of rights, principles and values appear rather vague and even tautological in their current formulation. The “rights-based approach” taken falls short of making a proper ethical case for these rights.

Furthermore, the document is ambiguous about the term “ethical purpose”, since AI systems shall on the one hand “comply with” values, principles and rights, and on the other serve them as a purpose. The latter reading implies that AI, and thus also the organisations developing it, can only be ethical if they serve the purpose of advancing fundamental rights. At the same time, these rights and their underpinnings can evolve over time, making a stronger ethical foundation for the guidelines all the more important.

Shifting away from questions of rights, it may be argued that the ground-breaking trait of AI lies in its potential, immeasurable by the standards of previous technologies, to create what would today be considered a utopian or dystopian society. This brings up classical ethical questions about the “good life”, as well as the famous Kantian questions[1]. Since our species, at least seemingly, could soon hope for almost everything, the central question appears to be: what, if almost everything is indeed possible, should human beings do? And who are they in a ubiquitous technological environment?

Of course, these questions are closely related to the purpose of work, both conceptually and factually, as a historically defining feature of human life. Furthermore, they form the backbone of European culture and are decisive for the pathway to be taken by a democratic, diverse and fair society. Artificial Intelligence has the potential to become a tool for tackling our most pressing planetary challenges. For that, however, a shared understanding and vision of the direction to take is needed. Otherwise, we run the risk of becoming blind to the fact that technology has always been an instrument in human history and not an end in itself.

Please find CEC’s contribution to the AI ethical guidelines here (PDF).

[1] Immanuel Kant formulated three questions in his Critique of Pure Reason: what can I know, what should I do, and what may I hope for? A fourth question, “what is the human being?”, was added in his lectures on logic.
