Why “prosocial AI” must be the framework for designing, deploying and governing AI



Because AI permeates every sphere of modern life, the key challenge for business leaders, policymakers and innovators is no longer whether to adopt intelligent systems, but how. In a world characterized by increasing polarization, resource depletion, declining trust in institutions and volatile information landscapes, the critical need is to design AI in a way that contributes meaningfully and sustainably to the well-being of people and the planet.

Prosocial AI – a framework of design, deployment and governance principles that ensures AI is carefully tailored, trained, tested and targeted to help people and the planet – is more than a moral stance or PR facade. It is a strategic approach to positioning AI within a broader intelligence ecology that values collective thriving over narrow optimization.

The ABCD of AI Potential: From Obscurity to Glory

The case for prosocial AI arises from four interconnected areas – agency, bonding, climate and division (ABCD). Each area highlights the dual nature of AI: it can either reinforce existing dysfunctions or act as a catalyst for regenerative, integrative solutions.

  • Agency: Too often, AI-driven platforms rely on addiction loops and opaque recommendation systems that undermine user autonomy. In contrast, prosocial AI can enable agency by disclosing the origin of its suggestions, providing meaningful user controls, and respecting the complexity of human decision-making. It’s not just about “consent” or “transparency” as abstract buzzwords; it’s about designing AI interactions that take into account human complexity – the interplay of cognition, emotion, physical experience and social context – and enable individuals to navigate their digital environment without succumbing to manipulation or distraction.
  • Bonding: Digital technologies can either divide societies into echo chambers or serve as bridges connecting different people and ideas. Prosocial AI applies nuanced linguistic and cultural models to identify shared interests, highlight constructive contributions, and promote empathy across boundaries. Rather than stoking outrage for attention, it helps participants discover complementary perspectives, strengthening community bonds and reinforcing the delicate social structures that hold societies together.
  • Climate: AI’s relationship with the environment is fraught with tension. AI can optimize supply chains, improve climate modeling and support environmental protection. However, the computational effort required to train large models often results in a significant carbon footprint. A prosocial perspective requires designs that balance these benefits against the environmental costs – adopting energy-efficient architectures, transparent life-cycle assessments and ecologically sensitive data practices. Instead of treating the planet as an afterthought, prosocial AI makes climate a first-order priority: AI must not only advise on sustainability, it must also be sustainable.
  • Division: The misinformation cascades and ideological fissures that characterize our times are not an inevitable byproduct of technology, but the result of design decisions that favor virality over veracity. Prosocial AI addresses this by embedding cultural and historical competency into its processes, respecting contextual differences, and providing fact-checking mechanisms that increase trust. Rather than homogenizing knowledge or imposing top-down narratives, it promotes informed pluralism and makes digital spaces more navigable, credible and inclusive.

Dual literacy: Integrating AI and NI

The realization of this vision depends on cultivating what we might call “dual literacy.” On one side is AI literacy: mastering the technical intricacies of algorithms, understanding how bias emerges from data, and establishing strong accountability and oversight mechanisms. On the other side is natural intelligence (NI) literacy: a comprehensive, embodied understanding of human cognition and emotion (brain and body), personal identity (self), and cultural embeddedness (society).

These NI skills are not soft skills that sit at the edge of innovation; they are fundamental. Human intelligence is shaped by neurobiology, physiology, interoception, cultural narratives and community ethics – a complex web that goes beyond reductive ideas of “rational actors.” By bringing NI literacy and AI literacy into dialogue, developers, policymakers and regulators can ensure that digital architectures meet our multidimensional human reality. This holistic approach promotes systems that are ethical, context-sensitive and capable of complementing human capabilities rather than limiting them.

AI and NI in synergy: Prosocial AI goes beyond zero-sum thinking

The popular imagination often pits machines against humans in a zero-sum competition. Prosocial AI challenges this dichotomy. Consider the beauty of complementarity in healthcare: AI excels at pattern recognition, scanning vast repositories of medical images to detect anomalies that might be missed by human specialists. Doctors, in turn, draw on their physical insights and moral instincts to interpret results, communicate complex information, and consider the broader context of each patient’s life. The result is not only more efficient diagnostics; it is more humane, patient-centered care. Similar paradigms can transform decision-making in law, finance, governance and education.

By integrating the precision of AI with the nuanced judgment of human experts, we could move from hierarchical command and control models to collaborative intelligence ecosystems. Here, machines handle complexity at scale and humans provide the moral vision and cultural fluency needed to ensure that these systems serve authentic public interests.

Building a prosocial infrastructure

To make prosocial AI central to our future, we need a concerted effort across all sectors:

Industrial and technology companies: Innovators can prioritize human-in-the-loop designs and reward metrics that align with well-being rather than engagement at all costs. Instead of designing AI to captivate users, they can design systems that inform, empower and promote well-being – as measured by improvements in health outcomes, educational attainment, environmental sustainability or social cohesion.

Example: The Partnership on AI provides frameworks for prosocial innovation and helps developers adopt responsible practices.

Civil society and NGOs: Community groups and advocacy organizations can guide the development and deployment of AI and test new tools in real-world contexts. They can bring ethnically, linguistically and culturally diverse perspectives to the design, ensuring that the resulting AI systems meet a wide range of human experiences and needs.

Educational institutions: Schools and universities should integrate dual literacy into their curricula while strengthening critical thinking, ethics and cultural studies. By promoting both AI and NI skills, educational institutions can help ensure that future generations are proficient in machine learning (ML) and deeply rooted in human values.

Example: The MIT Schwarzman College of Computing and the Stanford Institute for Human-Centered AI illustrate transdisciplinary approaches that combine technical rigor with humanistic research.

Governments and policy makers: Laws and regulatory frameworks can incentivize prosocial innovation and make it economically viable for companies to develop AI systems that are transparent, accountable, and aligned with social goals. Citizens’ assemblies and public consultations can influence these policies and ensure that the direction of AI reflects the diverse voices of society.

Beyond black boxes, toward a holistic hybrid future

As AI is deeply integrated into the global socioeconomic fabric, we must resist the impulse to view technology as a black box optimized for specific metrics. Instead, we can imagine a hybrid future in which human and machine intelligences co-evolve, guided by shared principles and based on a holistic understanding of ourselves and our environment. Prosocial AI goes beyond simply choosing between innovation and responsibility. It offers a richer spectrum of possibilities, where AI empowers rather than addicts, connects rather than fragments, and regenerates rather than depletes.

The future of AI will not be determined solely by computational power or algorithmic cunning. It will be defined by how organically we integrate these capabilities into the human sphere, taking into account the interplay of brain and body, self and society, local nuances and planetary imperatives. In doing so, we create a broader measure of success, gauged not just by profits or efficiency, but also by people’s prosperity and the resilience of the planet.

Prosocial AI can help us get there. The future begins now, with a new ABCD: A path to an inclusive society; Believe that you are part of making it happen; Choose which side of history you want to be on; and Do what you think is right.

After two decades at UNICEF and the publication of several books, Dr. Cornelia C. Walther is currently a senior fellow at the University of Pennsylvania, where she works on prosocial AI.



