Since AI came into our world, creators have put a lead foot down on the gas. However, according to a new policy document, Meta CEO Mark Zuckerberg might slow or stop the development of AGI systems deemed too "high risk" or "critical risk."

AGI, or artificial general intelligence, is an AI system that can do anything a human can do, and Zuckerberg has promised to make it openly available one day. But in the document, titled "Frontier AI Framework," Meta concedes that some highly capable AI systems won't be released publicly because they would be too risky.

The framework "focuses on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons."



"By prioritizing these areas, we can work to protect national security while promoting innovation. Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems," a press release about the document reads.


For example, the framework aims to identify "potential catastrophic outcomes related to cyber, chemical and biological risks that we strive to prevent." Meta also conducts "threat modeling exercises to anticipate how different actors might seek to misuse frontier AI to produce those catastrophic outcomes" and has "processes in place to keep risks within acceptable levels."

If the company determines the risks are too high, it will keep the system internal instead of allowing public access.


"While the focus of this Framework is on our efforts to anticipate and mitigate risks of catastrophic outcomes, it is important to emphasize that the reason to develop advanced AI systems in the first place is because of the tremendous potential for benefits to society from those technologies," the document reads.

Still, Meta isn't denying that the risks are there.
