Giving Compass' Take:
- Jason Matheny explains how regulating three parts of the AI supply chain — written here as hardware, training, and deployment — can substantially reduce the risks posed by AI.
- What role can you play in supporting regulation that limits the harms of AI?
- Read about the social consequences of AI.
Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it. While AI will bring many benefits, it is also potentially dangerous; it could be used to create cyber or bio weapons or to launch massive disinformation attacks. And if an AI is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world.
These concerns are not hypothetical. Such a leak has, in fact, already occurred. In March, an AI model developed by Meta called LLaMA appeared online. LLaMA was not intended to be publicly accessible: Meta shared the model only with AI researchers who requested access to further their own projects. At least two of them abused Meta's trust and released the model online, and Meta has been unable to remove LLaMA from the internet. The model can still be accessed by anyone.
Fortunately, LLaMA is relatively harmless. While it could be used to launch spear-phishing attacks, there is not yet cause for major alarm. The theft or leak of more capable AI models would be much worse. But the risks can be substantially reduced with effective oversight of three parts of the AI supply chain: hardware, training, and deployment.
Read the full article about a model for regulating AI by Jason Matheny at RAND Corporation.