AI Public Interest Fund Forms

January 11, 2017

Recognizing the vast potential of artificial intelligence to affect the public interest, the John S. and James L. Knight Foundation, Omidyar Network, LinkedIn founder Reid Hoffman, and others have formed a $27 million fund to apply the humanities, the social sciences and other disciplines to the development of AI.

The MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University will serve as founding academic institutions for the initiative, which will be named the Ethics and Governance of Artificial Intelligence Fund. The Fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally.

Artificial intelligence and complex algorithms in general, fueled by big data and deep-learning systems, are quickly changing how we live and work—from the news stories we see, to the loans for which we qualify, to the jobs we perform. Because of this pervasive but often concealed impact, it is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers.

Hoffman and Omidyar Network each committed $10 million to the fund, while Knight Foundation committed $5 million. With the MIT Media Lab and the Berkman Klein Center, they will form a governing board to distribute awards and facilitate other activities that build connective tissue between computer science, the social sciences and the humanities.

The William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group, have each committed $1 million to the fund, which is expected to grow as other funders come on board.

“Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that,” said Alberto Ibargüen, president of Knight Foundation. “Since even algorithms have parents and those parents have values that they instill in their algorithmic progeny, we want to influence the outcome by ensuring ethical behavior, and governance that includes the interests of the diverse communities that will be affected.”

“As a technologist, I’m impressed by the incredible speed at which artificial intelligence technologies are developing. As a philanthropist and humanitarian, I’m eager to ensure that ethical considerations and the human impacts of these technologies are not overlooked. Omidyar Network is participating in the fund to ensure that critical areas like ethics, accountability, and governance, are considered from the earliest stages of design,” said Pierre Omidyar, founding partner, Omidyar Network, and a principal of the fund.

“There’s an urgency to ensure that AI benefits society and minimizes harm,” said Reid Hoffman, founder of LinkedIn and partner at venture capital firm Greylock Partners. “AI decision-making can influence many aspects of our world – education, transportation, health care, criminal justice, and the economy – yet the data and code behind those decisions can be largely invisible.”

The fund seeks to advance AI in the public interest by including the broadest set of voices in discussions and projects addressing the human impacts of AI. Among the issues the fund might address:

• Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?

• Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?

• Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?

• Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?

• Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

“AI’s rapid development brings along a lot of tough challenges,” said Joi Ito, director of the MIT Media Lab. “For example, one of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society. How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”

The Ethics and Governance of Artificial Intelligence Fund will complement and collaborate with existing efforts, such as the upcoming public symposium “AI Now,” which is scheduled for July 10 at the MIT Media Lab.

The Media Lab and the Berkman Klein Center for Internet & Society will leverage the strengths of existing programs and pursue joint efforts that reinforce cross-disciplinary work and encourage collaboration, both in the United States and internationally. Activities that the fund will support include a joint AI fellowship program supporting people who are working to keep human issues at the forefront of AI, including working with international efforts that are underway; convening and supporting a network of people and institutions working to maximize the benefits of AI; funding research by experts in sectors affected by AI’s implications; and a thematic focus on the issues of artificial intelligence for the 2018 “Assembly” program.

“The thread running through these otherwise-disparate phenomena is a shift of reasoning and judgment away from people,” said Jonathan Zittrain, co-founder of the Berkman Klein Center and Professor of Law and Computer Science at Harvard University. “Sometimes that’s good, as it can free us up for other pursuits and for deeper undertakings. And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish them.”
