Drawing on Leonardo’s Legacy to Foster Human-Centered AI


What is the connection between Leonardo’s historical Vitruvian Man and the current development of Artificial Intelligence (AI)? The awareness that human centrality, in its perfection, is the inescapable link to both the divine and the full power of technology. Technologies, and especially AI, will have to be governed by human beings to ensure a balance between their innovative power and respect for the ethical and moral values of humanity. How do we achieve Human-Centered AI? I explain how in this article.

The human being at the center of technology and ethics

Every time I have looked at Leonardo da Vinci’s studies on the Vitruvian Man, I have been fascinated by the message that humanity must be central to all our choices and strategies, both personally and professionally.

The Vitruvian Man is a famous drawing created by Leonardo around 1490, showcasing his deep understanding of art, science, and anatomy. The drawing is named after the Roman architect Vitruvius, who described the ideal proportions of the human body in his treatise “De Architectura.” Vitruvius believed that the human body, with its symmetry and proportions, was a model of perfection and harmony that should be reflected in architecture.

Leonardo interpreted and illustrated these proportions in his drawing, demonstrating how the human body could be fitted into both a circle and a square, symbolizing the unity of the physical and the divine, and the link between the microcosm of the human being and the macrocosm of the universe.

Now, with the evolution of digital technologies, Artificial Intelligence is permeating our lives, and it’s even more crucial to ensure that humans remain at the center of its adoption as this magnificent technology promises to change our lives.

Leonardo da Vinci placed human beings at the center of his drawing, symbolizing the harmony between art, science, and human values, and we, too, must aim to place people at the core of technological development. Just as Leonardo’s Vitruvian Man embodies the unity of the physical and the divine and highlights the connection between the microcosm of the human being and the macrocosm of the universe, we must seek to create a harmonious balance between technological innovation and ethical responsibility, aspiring to integrate cutting-edge technology with deep respect for human values, ensuring that the advancements we make enhance our lives without compromising our moral standards.

My analogy wants to emphasize that, like Leonardo’s vision, our pursuit of AI should reflect a deep respect for the intricate balance between our technological achievements and the ethical frameworks that guide them. By striving to create AI that mirrors this unity and balance, we can develop systems that are powerful, efficient, compassionate, and fair, ultimately enriching the human experience in a profound and meaningful way.

We are beginning to call it Human-Centered AI, and it must be more than just a term.


What is Human-Centered AI?

Human-Centered AI (HCAI) is an approach to artificial intelligence that prioritizes human values, well-being, fairness, and transparency. It is centered on the idea that AI technologies should be designed and implemented to enhance human life and society rather than merely achieving technical superiority or efficiency.

Key Principles of Human-Centered AI:

  • Centrality of Human Values: At the core of HCAI is the commitment to uphold and respect human values such as dignity, autonomy, and privacy. AI systems should be designed to reflect and reinforce these values, ensuring that they support and do not undermine what it means to be human.
  • Well-Being: HCAI emphasizes the importance of enhancing the well-being of individuals and communities, creating AI applications that contribute to physical, mental, and social health, and promoting overall quality of life.
  • Fairness: Ensuring fairness in AI systems is crucial and means avoiding biases and discrimination while striving for inclusivity and equality since AI should serve all segments of society without favoring particular groups over others.
  • Transparency: Transparency in AI means making AI processes understandable and explainable to users, being clear about how decisions are made, and ensuring that users can trust the system. Transparency helps build accountability and trust; this quality is increasingly referred to as Explainable AI.

Differences Between Traditional AI and Human-Centered AI

Traditional AI approaches often focus on optimizing performance, accuracy, and efficiency. The primary goals are to develop systems that can perform tasks better, faster, and with fewer errors than humans. While these goals are important, they can sometimes lead to outcomes misaligned with human values and societal needs.

In contrast, Human-Centered AI strongly emphasizes the human context, ensuring that AI technologies align with human life’s social, ethical, and moral dimensions – meaning not only technical excellence but also a commitment to creating technologies that are safe, ethical, and beneficial for all. The shift from a purely performance-oriented approach to one that integrates human values marks a significant evolution in how AI is developed and deployed.

Importance of Ethics in AI

Ethics plays a crucial role in the development and deployment of AI technologies. As AI systems increasingly impact various aspects of society, addressing ethical issues becomes essential to ensure these technologies are used responsibly and for the greater good.

Protecting Personal Data

One of the primary concerns in AI is the protection of personal data: AI systems often require large amounts of data to function effectively, which raises significant privacy issues. The collection, storage, and use of this data must therefore be handled with the utmost care, preventing unauthorized access and misuse and respecting individuals’ privacy.

We must implement robust security measures, such as encryption and anonymization, to protect data integrity and establish clear guidelines and policies that govern data usage – ensuring that individuals have control over their personal information and are informed about how it is used.
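To make the anonymization measures above concrete, here is a minimal Python sketch of pseudonymization: direct identifiers are replaced with keyed hashes so records stay linkable for analysis without exposing identities. The field names, the salted-HMAC scheme, and the record layout are illustrative assumptions, not a prescription from this article.

```python
import hashlib
import hmac

# Secret salt: in practice this would come from a secure key store,
# never be hard-coded, and be rotated according to policy.
SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records remain
    linkable for analysis, but the original value cannot be recovered
    without the secret salt.
    """
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict) -> dict:
    """Pseudonymize identifying fields and drop those not needed at all."""
    out = dict(record)
    out["email"] = pseudonymize(record["email"])
    out.pop("full_name", None)  # data minimization: drop what we don't need
    return out

record = {"full_name": "Ada Lovelace", "email": "ada@example.com", "age_band": "35-44"}
print(anonymize_record(record))
```

A keyed hash is used rather than a plain hash because, with a plain hash, anyone could recompute tokens from guessed identifiers; real deployments would pair this with encryption at rest and in transit.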

Respecting privacy is not just a technical challenge but also an ethical obligation to maintain trust and uphold individuals’ rights.

Ensuring Transparency, Explainability, and Fairness

Transparency means making AI operations understandable to users and stakeholders, clarifying how decisions are made, and ensuring that AI processes can be scrutinized. A key component of transparency is AI explainability, which means that humans can understand and trace the decisions and actions taken by AI systems. Explainability helps users grasp the logic behind AI decisions, essential for building trust and accountability. Lack of transparency and explainability can lead to mistrust and misuse of AI technologies.
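As a toy illustration of that kind of explainability, consider a transparent linear scoring model that reports each feature’s contribution alongside its decision, so a user can trace exactly why a score came out the way it did. The weights and feature names below are hypothetical, invented purely for the sketch.

```python
def explain_score(features: dict, weights: dict):
    """Score = sum of weight * value; return both the score and a
    per-feature breakdown so a user can see *why* the result was produced."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical credit-style example: every name and number is invented.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
features = {"income": 4.0, "debt": 2.0, "years_employed": 3.0}
score, why = explain_score(features, weights)
print(f"score: {score:.2f}")
for name, contribution in why.items():
    print(f"  {name}: {contribution:+.2f}")
```

Models this simple are rarely used in practice, but the principle scales: whatever the model, an explainable system exposes which inputs drove the outcome, not just the outcome itself.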

AI systems must be designed to avoid biases and ensure fairness, addressing issues such as algorithmic bias, which can result in discriminatory outcomes. Explainable AI plays a crucial role in identifying and mitigating these biases, ensuring that decision-making processes are equitable and just.

Fairness also means ensuring that AI benefits all segments of society equitably, providing opportunities for all individuals to benefit from technological advancements.
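One practical way to audit the fairness described above is to measure demographic parity: whether a model produces positive outcomes at similar rates across groups. The following Python sketch uses invented predictions, group labels, and an illustrative tolerance; it is a minimal check, not a legal or complete fairness standard.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Positive-outcome rate per group, where predictions are 0/1 labels."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Illustrative loan-approval predictions for two groups A and B.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
if gap > 0.2:  # illustrative tolerance, not a regulatory threshold
    print("warning: model may treat groups unequally")
```

Demographic parity is only one of several competing fairness metrics; choosing which to enforce is itself an ethical decision of the kind this article argues must stay in human hands.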

Promoting Accountability

Ensuring accountability in AI is critical to determining who is responsible for the actions and decisions made by AI systems, particularly when these systems make errors or cause harm.

Algorithmic bias is a significant ethical problem because AI systems can perpetuate and amplify existing biases in training data. For instance, facial recognition technologies have been shown to perform less accurately for people of color, leading to unfair treatment and discrimination.

Using AI in surveillance raises significant ethical concerns regarding privacy and civil liberties since excessive or unjustified surveillance can lead to a society where individuals’ activities are constantly monitored and scrutinized, eroding trust and freedom.

Integrating Ethics, Transparency, and Accountability in Human-Centered AI

Human-centered AI addresses these ethical concerns by integrating ethical principles into the core of AI development and implementation, ensuring that AI systems are designed with a focus on human values and societal well-being. It advocates for robust data protection measures, ensuring that personal data is handled with care and respect by implementing strong data encryption and anonymization techniques and providing users with control over their data.

HCAI emphasizes the importance of making AI systems transparent and explainable, creating user-friendly interfaces and documentation that help users understand how AI systems work and make decisions. HCAI strives to eliminate biases in AI systems by using diverse and representative datasets and implementing fairness checks throughout the development process – creating AI that equitably serves all users.

Finally, HCAI promotes accountability by clearly defining the roles and responsibilities of AI developers and users and by establishing guidelines and regulations to ensure that AI systems are used ethically and responsibly.

Transformative Impacts of AI on Society and Culture

Artificial Intelligence has the potential to significantly transform various sectors, leading to numerous positive impacts on society and culture.

In the health sector, AI can improve diagnostic accuracy and treatment outcomes. For instance, AI algorithms can analyze medical images more quickly and accurately than humans, leading to earlier detection of diseases such as cancer. AI-driven tools can also assist in personalized treatment plans, ensuring that patients receive care tailored to their specific needs.

In education, AI can facilitate personalized learning experiences. Adaptive learning platforms can analyze students’ progress and tailor educational content to meet their individual needs, helping to bridge gaps in understanding and accelerate learning. This personalized approach can enhance student engagement and improve academic outcomes.

In the workplace, AI can automate repetitive tasks, allowing employees to focus on more complex and creative activities, leading to increased productivity and job satisfaction. For example, AI-powered chatbots can handle routine customer service inquiries, freeing human agents to tackle more challenging issues.

Potential Risks and Negative Consequences

Despite these benefits, AI has potential risks and negative consequences. One major concern is the potential for increased inequality. If access to AI technologies is not evenly distributed, existing disparities could be exacerbated.

If not managed properly, task automation could lead to job displacement, particularly for workers in easily automated roles, resulting in economic instability and social unrest.

Human-centered AI aims to mitigate these risks by ensuring that AI technologies are developed and deployed with a focus on human values and societal well-being. For instance, efforts are made to design AI systems that promote inclusivity and fairness, reducing the risk of exacerbating inequalities.

Human-Centered AI Benefits and Challenges for Businesses

Businesses across various industries are increasingly integrating AI into their operational processes to enhance efficiency and drive growth. AI technologies are being used to optimize supply chains, improve customer service, streamline manufacturing processes, and support decision-making through advanced data analytics.

Benefits of Human-Centered AI for Businesses

The adoption of Human-Centered AI offers several benefits for businesses, including improved productivity. By automating routine and repetitive tasks, AI allows employees to focus on higher-value activities that require creativity and strategic thinking, boosting efficiency and enhancing job satisfaction and workforce morale.

HCAI enables the personalization of services, allowing businesses to leverage AI to analyze customer data and deliver tailored experiences that meet individual needs and preferences. This level of personalization can increase customer satisfaction and loyalty, driving repeat business and long-term growth.

Human-Centered AI fosters product innovation. By utilizing AI to analyze market trends and consumer feedback, companies can develop new products and services that better meet their customers’ evolving demands, providing a competitive edge and opening up new revenue streams.

Business Challenges in Adopting Ethical AI

But not all that glitters is gold. Businesses face significant hurdles when adopting an ethical and human-centered approach to AI. One of the main challenges is the financial burden.

Implementing Human-Centered AI requires substantial investments in advanced technology, skilled talent, and comprehensive training programs. These costs can be prohibitive, especially for smaller enterprises, making it difficult to compete with larger organizations that have more resources.

In addition to financial challenges, specialized expertise is needed. Developing and maintaining AI systems that adhere to ethical standards and human-centered principles demands a multidisciplinary team with skills in AI development, ethics, law, and human-computer interaction. Recruiting and retaining such a diverse team adds to the complexity and cost.

Another critical aspect is navigating regulatory landscapes. As governments and international bodies introduce more stringent regulations on AI, companies must stay compliant with these evolving laws, which can vary significantly across regions – requiring ongoing investment in legal counsel and compliance measures, further increasing the operational burden on businesses.


Human-Centered AI (HCAI) is an approach that ensures the technological benefits of this major innovation while reducing its risks through the adoption of established ethical and moral principles.

Role of Political Leadership

Political leadership is critical in regulating AI to ensure it is developed and used safely, ethically, and respectfully for fundamental human rights. Recent legislative efforts highlight the importance of this role, with the European Commission’s AI Act standing out as a significant example.

The EU AI Act establishes comprehensive regulations aimed at ensuring the safety of AI systems and protecting fundamental rights. It introduces new obligations for high-risk AI applications and proposes the creation of the European Artificial Intelligence Office to oversee compliance and enforcement.

National and International Initiatives

At both national and international levels, various political initiatives are underway to promote ethical AI. These initiatives aim to set standards and guidelines that ensure AI technologies are developed responsibly. For instance, several countries have established national AI strategies, including ethical frameworks for AI development.

Internationally, organizations such as the United Nations and the OECD are working to develop global standards and promote cooperation among nations to address the ethical implications of AI.

Public Policies and Regulations

Public policies and regulations play a crucial role in protecting citizens’ rights in the age of AI. Examples include data protection laws, such as the General Data Protection Regulation (GDPR) in the European Union, which sets strict guidelines for data privacy and security. Additionally, there are regulations to prevent algorithmic bias and ensure fairness in AI systems, such as those mandating transparency and accountability in AI decision-making processes.

Collaboration and Global Standards

Collaboration between governments, businesses, and civil society is essential to developing global ethical standards for AI. A multi-stakeholder approach ensures that diverse perspectives are considered and that the resulting standards are comprehensive and widely accepted, leading to the creation of frameworks that balance the benefits of AI innovation with the need to promote social good.

Balancing Innovation and Human Rights

Political challenges in regulating AI often mean balancing the need for innovation with protecting human rights. Policymakers must navigate complex issues such as ensuring fair competition, preventing misuse of AI, and addressing the socio-economic impacts of AI deployment. Striking this balance requires a nuanced understanding of both AI’s technological capabilities and the ethical principles that should guide its use.

The Role of Collaboration

Collaboration is essential in the development and implementation of Human-Centered AI. Achieving ethical and effective AI systems requires the combined efforts of technologists, policymakers, ethicists, and the general public. Each group brings unique perspectives and expertise – essential for addressing the complex challenges associated with AI.

The Necessity of Multidisciplinary Collaboration

The necessity of collaboration stems from the multifaceted nature of AI. Technologists understand AI systems’ technical capabilities and limitations, while policymakers provide the regulatory frameworks needed to ensure their responsible use.

Ethicists contribute insights on moral and ethical implications, ensuring that AI systems align with societal values. The public offers valuable input on user needs and concerns, ensuring that AI systems are designed to effectively serve a wide range of people.

Examples of Successful Collaborative Initiatives

Several successful collaborative initiatives demonstrate the power of working together. One notable example is the Partnership on AI, a coalition of technology companies, academia, and civil society organizations that aims to promote best practices, advance public understanding, and foster a broad dialogue on the implications of AI.

Another example is the AI for Good Global Summit, organized by the International Telecommunication Union (ITU) in collaboration with other UN agencies, which brings together diverse stakeholders to discuss and develop AI solutions for global challenges.

Benefits of a Collaborative Approach

A collaborative approach can lead to more balanced and sustainable solutions. By incorporating diverse viewpoints, collaborative efforts ensure that AI systems are technically robust, ethically sound, and socially beneficial. This holistic perspective helps anticipate and mitigate potential negative impacts, such as bias or misuse of AI, and promotes the development of more inclusive and fair technologies.

Collaboration fosters innovation by combining the strengths of different sectors. For example, academic research can drive new technological breakthroughs, while industry partners provide the resources and platforms needed to scale these innovations, and policymakers create supportive regulatory environments that encourage ethical AI development and deployment.


Diverse Perspectives on the Ethics-Innovation Dilemma in AI

The role of ethics in AI garners a wide range of opinions from different stakeholders. Some argue that integrating ethical considerations into AI development is crucial for ensuring these technologies benefit society and protect individual rights. They believe that without a solid moral framework, AI could exacerbate existing inequalities, lead to privacy infringements, and create new forms of discrimination.

The Need for Ethical Considerations

Balancing technological innovation with the protection of human rights is a delicate task. Proponents of a human-centered approach to AI stress that innovation should not come at the expense of ethical considerations. They believe that integrating ethics into AI development from the beginning can lead to more sustainable and socially beneficial outcomes. For example, by ensuring that AI systems are fair, transparent, and accountable, we can build public trust and support for these technologies, ultimately enhancing their adoption and impact.

Integrating Ethics Without Stifling Innovation

On the other hand, others view the emphasis on ethics as a potential hindrance to innovation. They argue that overly stringent ethical guidelines could slow down technological progress and limit the competitiveness of companies and nations in the global AI race. This perspective highlights the need for a careful balance between promoting innovation and safeguarding human rights.

Some contend that the rapid pace of AI development requires a more flexible approach to ethics, allowing for experimentation and iterative improvements. This viewpoint suggests that overly prescriptive rules could hinder the creative processes that drive technological advancements.

The Debate Over Regulation

Another area where opinions diverge is the debate over stricter regulation of AI. Advocates for more rigorous regulation argue that clear and enforceable rules are necessary to prevent AI misuse and protect individuals from harm. They point to algorithmic bias, privacy violations, and other negative consequences as evidence of the need for stronger oversight. These advocates believe that well-designed regulations can provide a framework for responsible innovation, ensuring that the benefits of AI are widely shared and its risks minimized.

Conversely, opponents of stringent regulation caution against the potential drawbacks of excessive oversight. They argue that heavy-handed regulations could create barriers to entry for smaller companies and startups, stifling innovation and reducing the diversity of voices in the AI landscape. They also worry that overly detailed rules could become quickly outdated in a fast-evolving field like AI, leading to regulatory bottlenecks and stifling progress.

Embracing Leonardo’s Legacy for the Future of AI

Leonardo’s legacy teaches us that humanity must always be at the center of technological advancements, a principle that extends to Artificial Intelligence. Just as Leonardo da Vinci masterfully integrated innovation with ethical responsibility, Human-Centered AI must strive to achieve the same balance to create a technological future that respects and promotes human values – honoring Leonardo’s legacy and serving as a call to action for all stakeholders.

The importance of a human-centered approach in AI cannot be overstated. By prioritizing human values, well-being, fairness, and transparency, we can ensure that AI technologies enhance our lives in meaningful and equitable ways – mitigating risks such as bias, privacy violations, and the erosion of trust while maximizing AI’s benefits.

Your Role in Shaping Human-Centered AI

Reflecting on your role in promoting ethical and responsible AI is essential. Whether you are a technologist, policymaker, business leader, or concerned citizen, your voice matters in shaping the future of AI. By supporting policies that emphasize ethical considerations, advocating for transparency and fairness in AI systems, and staying informed about the latest developments in AI ethics, you contribute to this crucial cause.

The future of Human-Centered AI holds great promise, with the potential for AI systems that are more advanced, capable, and aligned with our shared values. Collaborative efforts across sectors and borders will be vital in developing global standards and best practices, ensuring AI serves the greater good.

Leonardo da Vinci exemplified the integration of innovation and ethical responsibility, and we must commit to these principles in developing and deploying AI. By doing so, we can create a future where technology enhances our humanity, respects our rights, and promotes the well-being of all.

This vision of Human-Centered AI inspires us to take action and make a positive impact, ensuring that AI remains a force for good in our society. Remember, every voice counts in this transformative journey.

  • Original article previously published here