Artificial intelligence is a fascinating, rapidly evolving field with tremendous potential—both good and bad. On one hand, the level of efficiency and automation that an AI-filled world could reach is incredible; on the other, robots destroying humans and taking over our civilization does not sound like a fun way to spend a weekend.
A group of scientists and technology leaders has endorsed a list of principles that aims to steer AI development toward productivity rather than destruction. Among those endorsing the best practices are science legend Stephen Hawking and tech icon Elon Musk.
The Asilomar AI Principles were created by a team of roboticists, economists, and philosophers assembled by the Future of Life Institute.
The 23 principles range “from research strategies to data rights to future issues including potential super-intelligence,” says the Future of Life Institute. “This collection of principles … highlights how the current ‘default’ behavior around many relevant issues could violate principles that most participants agreed are important to uphold.”
Here are some of the principles:
1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.
5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards.
9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given A.I. systems’ power to analyze and utilize that data.
16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.