On February 5th, GSMA, INNIT, Lenovo Group, LG AI Research, Mastercard, Microsoft, Salesforce and Telefonica signed a ground-breaking agreement to integrate the values and principles of UNESCO’s Recommendation on the Ethics of AI when designing and deploying AI systems.
UNESCO’s Recommendation on the Ethics of Artificial Intelligence was the world’s first and remains its only normative framework on AI.[1]
More than fifty countries are now actively engaged in its implementation, and multilateral cooperation has increased considerably.
The AI tech companies’ letter of commitment states:
“We agree that the responsibility for guaranteeing and respecting human rights is a duty for all social actors, which, in the case of the design, development, purchase, sale, and use of AI, includes businesses, civil society organizations, the public sector, and higher education and research institutions, among others”.
Due diligence must be carried out to identify the adverse effects of AI, and timely measures to prevent, mitigate, or remedy these effects must be designed and implemented in line with domestic legislation. This is an ethical minimum which must be upheld.
The companies recognize the serious risk of harm associated with AI when it is developed and released without appropriate guardrails, including the deepening of inequalities; the generation and spread of disinformation, hate speech and violence at scale; the risk of pervasive discrimination based on gender, race, ethnicity, and other grounds; the enabling of cyber-attacks and fraud; violations of privacy; the upending of the existing mechanisms for the protection of intellectual property rights, including the copyright concerns of authors; and the erosion of hard-won democratic norms, including human rights and fundamental freedoms.
The tech companies therefore welcome UNESCO’s Recommendation, adopted in 2021, and pledge to adhere to its main values and principles:
- respect, protection and promotion of human rights, fundamental freedoms and human dignity,
- the flourishing of environments and ecosystems,
- ensuring diversity and inclusiveness,
- promoting peaceful, just and interconnected societies,
- the principles of proportionality and do-no-harm, safety and security,
- fairness and non-discrimination,
- sustainability,
- the right to privacy and data protection,
- human oversight and determination,
- transparency and explainability,
- responsibility and accountability,
- awareness and literacy,
- multi-stakeholder and adaptive governance, and
- collaboration, all of which are laid out in UNESCO’s Recommendation on the Ethics of Artificial Intelligence.
Such values and principles are the basis of an international consensus on the design, development, and responsible use of these technologies.
In closing, the companies note that they play an integral role in shaping the ethical landscape of AI: “we acknowledge that as companies which develop, use, purchase and sell these emerging technologies, we have a responsibility to ensure our products meet safety standards and comply with the essential principles and values laid out by UNESCO”.
[1] See https://unesdoc.unesco.org/ark:/48223/pf0000381137