
Meta may stop developing AI systems due to their catastrophic risks

In a recently published policy document, Meta says it may withhold its most advanced AI systems from the public in scenarios where they pose serious safety risks or potential for harm, according to TechCrunch.

Meta’s AI Safety Strategy: Restricting Advanced Systems to Prevent Harm

Unlike traditional AI, which is limited to narrow tasks, general AI could in principle learn and perform any task a human can. That breadth is precisely what worries Meta: the company fears this cutting-edge technology could be misused, or could slip beyond its control.

Meta lays out its strategy in a document it calls the Frontier AI Framework, which defines two categories of dangerous systems: "high-risk systems" and "ultra-high-risk systems."

Both categories cover systems that could aid cyber or biological attacks. The difference is severity: ultra-high-risk systems could produce catastrophic outcomes of a kind that high-risk systems cannot.

Meta's AI Safety Framework: Balancing Open Access with Risk Management

The company offers examples of such attacks, including compromising WhatsApp's end-to-end encryption and enabling the proliferation of dangerous biological weapons. Meta acknowledges that the document's list of anticipated disasters is representative rather than exhaustive, and does not cover every possible scenario that could arise from deploying its most powerful AI systems.

Risk assessments are carried out by both Meta's own researchers and external experts, and their evaluations are then reviewed by senior decision-makers.

If a system is rated "high risk," Meta will restrict internal access to it and withhold release until mitigations bring the risk down to a moderate level. If a system reaches the "ultra-high risk" level, Meta will apply security measures it does not specify and suspend development until the risk decreases.

The company says the framework will evolve as the AI field develops, a response to significant criticism of its approach to releasing AI systems. Meta makes its AI technology openly accessible to everyone, even if it is not open source in the strict sense, whereas OpenAI keeps its systems gated behind an API.

Meta's open-release strategy has brought the company both opportunities and complications. Its Llama AI models have been downloaded hundreds of millions of times, but at least one U.S. adversary has reportedly used Llama to build a defense chatbot.

The Frontier AI Framework also serves to distinguish Meta's open approach from that of DeepSeek, whose openly available systems ship with few safeguards and can be steered into generating dangerous outputs.

In the document, Meta asserts that weighing potential benefits against potential risks will allow it to deliver AI technology in a way that maximizes benefit while keeping harm within acceptable bounds.

Achaoui Rachid