The Foundations of Cooperative Intelligence
Most models of intelligence begin with individual agents seeking to achieve objectives, and treat cooperation as a strategic challenge, and opportunity, that is layered on when such agents meet. In this view, the individual is the foundation and cooperation is built on top. In this talk I'll argue that to build human-like intelligence (and beyond) we need to reverse this picture. Cooperating in groups is valuable for many species, but for humans it is foundational: humans wouldn't exist but for their evolved capacity to cooperate. I'll argue that cooperative intelligence is the core of human intelligence, and that at its core is the cognitive capacity to read and participate in maintaining the normative environment: the rules that keep a group's behavior stable and coordinated in distinctive patterns. I'll present published and ongoing work that explores this normative capacity in artificial agents.
Bio: Gillian K. Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a Faculty Member and CIFAR AI Chair at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms. Her book, Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, is now available in paperback and on Audible.