10 Essential Takeaways from ‘AI for Good’ Global Summit

We have this one, very small, narrowing window of opportunity to get the global adoption of AI right.

Last week, we witnessed a moment in history. UN partner agencies, diplomats, and leaders in fields ranging from publishing to robotics convened to discuss the future of AI.

The result? A resounding call for UN governance and global collaboration to mitigate the possible risks while maximizing the astonishing potential of AI adoption.

The 2023 AI for Good Global Summit convened in Geneva is the leading action-oriented UN platform for promoting AI to advance the UN Sustainable Development Goals (SDGs). The heart of these goals is global prosperity, sustainability, and equality by 2030.

Throughout the summit, there was a profound recognition of the power and responsibility at play in this moment, as we navigate the exponentially accelerating adoption of AI technology. 

And there were unmistakable echoes of solemnity, urgency, and excitement as speakers and participants discussed well-founded concerns, determined optimism, and intensifying commitment to the UN SDGs.

On the whole, there was a prevailing sense that we have this one, very small, narrowing window of opportunity to get the global adoption of AI right.

If we get it wrong, globally, the risks could be very high indeed.

If we get it right, we could live in a world in which the UN SDGs of zero hunger, no poverty, gender equality, improved human and environmental health, and universal educational opportunity are fully realized. 

So, how do we get it right?

10 Takeaways for Adopting AI for Good

1. AI is here to stay. The resulting social changes are coming fast.

We need to accept that AI is here to stay. 

The countries in attendance all brought some national experience of AI to the table. Whether in climate change, healthcare, or education, AI has already expanded the range of what is possible. It’s continuing to evolve quickly, and the human impacts and changes to civilization that result will be similarly exponential in pace. 

Historically, this kind of industrial, technological, and social change has taken centuries or decades. Now, it's going to happen very, very fast. Consider how the adoption curve has collapsed: changes that once unfolded over generations are now happening in a matter of months.

2. We need to nudge the trajectory of AI now, at the inflection point, towards sustainability.

Given AI’s rapid evolution and adoption, we have a tiny window of opportunity to seize the moment. We need to maximize opportunities and mitigate risks. With its massive scalability, AI has the potential to make things better or worse for all of us. There is an immense sense of urgency around getting it right the first time. If we don’t, says Wendell Wallach, “The next generation is going to look back at this moment and just shake its head and say, ‘What were they thinking?’” 

Ultimately, the path AI takes is down to us and the actions we take.

We’re at a true moment in history, one where for better or worse we’re going to take hold of our future. To do so for the better, we’re going to need to put frameworks in place. Using the SDGs as our starting point and targeted goals, we need to nudge AI technology onto a sustainable, humane course. 

We must start putting governance structures in place now, at national and international levels, so we can all trust that the application and development of AI technology will ultimately be more beneficial than harmful, providing real opportunities for health, wealth, growth, and equal access.

3. A pause is neither realistic nor desirable. 

The international community generally hopes and believes that most people will use and advance AI in a positive way. At the same time, there's an understanding that a few bad actors could make things go very, very wrong, in ways we might not be prepared for.

So, we need to have plans to regulate and contain possible threats.

Ray Kurzweil argues that, for this reason, it’s a very bad idea to pause for six months. Why? The bad users would still advance, while the good actors would pause. A pause in one place is not going to result in a pause in other nations, industries, or companies. Instead, Kurzweil asserts, we need more intelligent defenses on the side of the good.

4. Ethics, economic structures, and incentives are going to have to shift.

Wendell Wallach argues that to thrive in the era of AI, we need to commit to a new form of ethics and a fundamental shift to our value structure. This means pushing ideas that are ultimately supportive of “the world we want to get to” and the SDGs and their outcomes.

The SDGs may be used as a framework for regulation and, at the very least, as the norm for what AI should help us achieve.

Wallach contends that companies and nations will need to assume responsibility for ameliorating the downsides or side effects of their decisions and actions.

Relatedly, Gary Marcus argues that there's currently a false tension between regulation and innovation. Instead, he posits, we can and should use regulations that raise the bar and challenge Silicon Valley to make their technology better. (Challenge accepted!)

5. Education is perhaps the most promising area for AI development and impact.

While health and climate change were also called out as exciting areas of opportunity, education was consistently recognized as one of the most promising arenas for AI impact and transformation, and often as the single most promising.

Many of the Summit’s talks highlighted the connecting, equalizing potential of AI educational solutions. 

Speakers touted the ability of AI to reflect a growing understanding of cognitive science, how the brain works, and how we learn and acquire skills or knowledge. 

And many participants were hopeful about the ways AI might let us bring education where it’s most needed, where there is no education or almost none, thereby allowing a sort of developmental leapfrogging (Stuart Russell). 

Participants also acknowledged the great opportunity we have at hand to think about different ways of delivering education, how we can use technology to acquire knowledge, and how we can increase digital literacy and access in a world where access to information and learning has historically determined who succeeds and thrives (Baroness Joanna Shields).

6. Artificial Intelligence is going to keep getting smarter. And so should our use and understanding of it as a partner in problem-solving and creativity.

Ray Kurzweil, a leading inventor, thinker, and futurist with a 30-year track record of accurate predictions, delivered one of the final keynotes of the summit. He pointed out that, once a computer masters something, it doesn’t just stop at a human level. It keeps going.

No human can access all of human knowledge. But AI can and eventually will. And since AI has this very broad base, and can articulate itself very quickly, in some ways the technology may be seen as superior to human intelligence or performance. 

We also need to understand that, given its exponential development, AI is going to be an entirely different type of technology in three years’ time. And we’re going to have to continually reinvent how we use and work with it.

As Baroness Joanna Shields notes, there's an enormous opportunity here to use AI as "middleware for problem solving and creativity." This means leveraging never-before-achievable intelligence and skills in the service of human needs and goals.

7. Artificial Intelligence can potentially be understood, embraced, and leveraged as a part of being human.

Kurzweil noted, "This isn't an alien invasion from Mars… People constantly look at it as if it's the machines versus us, but it's not." In other words, our tools and technology are something we created, and are thus part of humanity. They're tools made for humanity, to make us smarter. Sure, artificial intelligence technically exists outside of ourselves, but it is still very much a part of us.

8. The digital divide is going to grow. We need concrete plans to close it.

Speakers and attendees repeatedly voiced concerns about the issue of digital access, digital literacy, and how to truly guarantee everyone is at the table when it comes to AI (Jennifer Woodard).

Given that one-third of the world doesn’t have access to the Internet, it’s likely that the adoption of AI technologies will add to the digital divide within and between countries (H.E. Mr. Jürg Lauber). 

Since LLMs are very data-hungry, we are also very likely looking at an expansion of inequality, because there is simply less data available in languages spoken by fewer people (Gary Marcus).

9. We need global, silo-breaking discussion, collaboration, and coordination to ensure the adoption of AI for Good.

Global collaboration is essential if we are to ensure the use of AI for good. And, despite concerns about its mixed reputation and whether it can act quickly enough, the UN wields enormous moral authority, global scope, and convening power. It is likely our best chance to ensure global AI adoption aligns with "the world we want to get to" and the SDGs.

The Summit was an impressive display of support for this mission. And it’s one that we can only hope gets more coverage and support moving forward.

Organized by the ITU—the UN specialized agency for information and communication technologies—the AI for Good Global Summit partnered with forty UN sister agencies, including:

  • The World Health Organization (WHO)
  • World Intellectual Property Organization (WIPO)
  • World Bank
  • World Food Programme
  • UNESCO (United Nations Educational, Scientific, and Cultural Organization)
  • United Nations Research Institute for Social Development (UNRISD)
  • UN International Strategy for Disaster Reduction (UNISDR)
  • United Nations Institute for Training and Research (UNITAR)

The speaker roster was equally compelling, with backing ranging from the UN Secretary-General to members of the White House; Google, Amazon, and Microsoft C-suite leadership; artists, authors, and publishers; innovators in space and robotics; and health, climate, and education researchers.

Given the complexity, intersectionality, and scale of AI adoptions, the contributions of all these stakeholders—diplomats, civil society, academia, and industry alike—are essential.

10. Intergenerational accountability, discussion, and problem-solving are essential.

It’s our collective responsibility to chart the path of AI for future generations. 

When we discuss and make decisions about AI–and, thereby, the schools, careers, and world of the future–our conversations and solutions must be intergenerational. 

We are all stakeholders here. No one more so than our children.

Taking Assertive Next Steps with Optimism and Determination

As we adjust back into the world beyond the conference, we’re taking a tentatively optimistic outlook. 

Like many of the AI for Good Summit attendees, we here at Prof Jim truly believe that AI can help advance the UN’s Sustainable Development Goals and “the world we want to get to.”

For our part, we’re committed to a safe, inclusive, and responsible AI. And we’re so grateful to be part of the conversation about what’s feasible, what’s already going on, and what can be done. 

As we roadmap together for the short-term, medium-term, and long-term, we hope you’ll stay tuned both here and to AI for Good for more resources.
