One-sentence summary – Anthropic, an AI safety and research company, has introduced the Responsible Scaling Policy (RSP) to address the risk that advanced AI models could cause widespread devastation, aiming to encourage the development of safer AI systems while minimizing harm; meanwhile, the UK’s Competition and Markets Authority (CMA) has proposed principles to ensure healthy competition in the foundation model space, with an update expected in early 2024, as concerns grow over the potential misuse of AI technology.
At a glance
- Anthropic has unveiled the Responsible Scaling Policy (RSP) to mitigate risks linked to advanced AI models.
- The policy introduces AI Safety Levels (ASLs) to tier risks from low to high.
- Anthropic aims to encourage the development of safer AI systems and independent oversight.
- The Competition and Markets Authority (CMA) has proposed principles to ensure healthy competition in the foundation model space.
- Potential misuse of AI, such as the engineering of synthetic viruses, has prompted calls for access restrictions and non-proliferation measures.
The details
Anthropic, an AI safety and research company, has unveiled a new policy known as the Responsible Scaling Policy (RSP).
The RSP is designed to mitigate catastrophic risks linked to advanced AI models: scenarios of large-scale devastation that could cause thousands of deaths or billions of dollars in damage.
The Responsible Scaling Policy
The RSP introduces AI Safety Levels (ASLs), a risk tiering system inspired by the U.S. government’s Biosafety Levels.
The policy outlines four ASLs, ranging from low risk to high risk.
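To make the tiering concrete, here is a minimal sketch of how a level scheme like this might be expressed in code. The tier comments paraphrase public summaries of the RSP, and the gating function is an illustrative assumption about the policy’s core idea, not Anthropic’s actual mechanism.

```python
from enum import IntEnum

class ASL(IntEnum):
    """AI Safety Levels, ordered from lowest to highest risk (sketch)."""
    ASL_1 = 1  # Systems that pose no meaningful catastrophic risk
    ASL_2 = 2  # Early signs of dangerous capabilities (current frontier models)
    ASL_3 = 3  # Substantially increased risk of catastrophic misuse
    ASL_4 = 4  # Higher tiers are not yet defined in detail

def scaling_allowed(model_risk: ASL, certified_safeguards: ASL) -> bool:
    # Core idea of the RSP: training or deploying a model at a given risk
    # tier requires safety and security measures certified for that tier.
    return certified_safeguards >= model_risk

# Example: ASL-2 safeguards do not permit handling an ASL-3 model.
print(scaling_allowed(ASL.ASL_3, ASL.ASL_2))  # False
```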
Anthropic acknowledges that the policy is not static and will be updated and refined based on experience and feedback.
The company’s goal is to encourage the development of safer AI systems, so that progress on safety unlocks additional capabilities rather than rewarding reckless scaling.
Evaluating risks comprehensively is challenging due to the potential for AI models to conceal their abilities.
To ensure independent oversight, any changes to the policy require board approval.
The announcement of Anthropic’s RSP comes at a time when the AI industry is facing increased scrutiny and regulation.
Anthropic’s AI chatbot, Claude, is designed to combat harmful prompts by explaining their dangers.
The company’s approach, Constitutional AI, trains models against a written set of rules that provides oversight, incorporating both supervised and reinforcement learning phases.
Constitutional AI improves the transparency and performance of AI decision-making and offers precise control over AI behavior with fewer human labels.
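As a rough illustration of this rules-plus-training approach, the sketch below shows the self-critique-and-revision loop used in the supervised phase; `generate` is a hypothetical stand-in for a chat-model call, and the two principles are invented examples, not Anthropic’s published constitution.

```python
# Invented example principles; the real constitution is a longer curated list.
CONSTITUTION = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response that is honest about its own limitations.",
]

def generate(prompt: str) -> str:
    # Placeholder for a real language-model call; echoes for demonstration.
    return f"[model output for: {prompt[:48]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Supervised phase: draft a response, then critique and revise it
    against each principle. Revised outputs become fine-tuning data; a
    later RL phase uses AI preference labels instead of human ones."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response under the principle: {principle}\n"
            f"Response: {response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response

print(constitutional_revision("How do I pick a lock?"))
```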
These efforts demonstrate Anthropic’s commitment to AI safety and ethics, setting a high standard for future advancements in AI by focusing on minimizing harm while maximizing utility.
Competition and Markets Authority (CMA) Principles
In the UK, the Competition and Markets Authority (CMA), the country’s competition watchdog, has proposed principles to ensure healthy competition in the foundation model space.
The CMA warns that some companies could use their models to gain market power and charge high prices for their services.
The proposed principles aim to protect consumers and promote competition.
The CMA plans to collaborate with stakeholders in the AI space, including leading developers like OpenAI, Meta, Nvidia, and Google, as well as governments, academics, and fellow regulators.
An update on the principles is expected in early 2024.
The CMA initiated its study into foundation models in May, covering issues such as security, safety, and copyright.
The investigation aligns with the Sunak government’s pro-innovation approach to AI regulation.
The principles were published as part of a 130-page initial report on AI foundation models.
The CMA consulted with 70 stakeholders, including developers, businesses, and consumer groups, to draft the initial principles.
The report highlights the potential negative impacts of AI adoption on competition and consumers, such as false information, fraud, and market power.
The CMA emphasizes the importance of effective competition and of safeguards such as data protection and intellectual property rights.
Potential Risks of AI Misuse
According to Mustafa Suleyman, former Google executive and AI expert, the misuse of artificial intelligence could potentially lead to the engineering of synthetic viruses capable of causing pandemics.
Suleyman expressed concern about the accessibility of advanced AI technology and software, calling for restrictions to prevent misuse.
He emphasized the need to limit access to AI software, cloud systems, and biological material, suggesting a precautionary approach to AI development.
A recent study found that AI systems enabled even undergraduates without a biology background to suggest potential bio-weapons.
Chatbots were able to suggest potential pandemic pathogens and provide detailed protocols for generating them from synthetic DNA.
The study recommended non-proliferation measures, including third-party evaluations of AI models and screening of DNA synthesis providers.
The researchers also suggested curating training datasets to remove harmful concepts and screening DNA used by contract research organizations and robotic laboratories.
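As one hedged illustration of the dataset-curation step, the sketch below filters a corpus against a blocklist of harmful concepts; the patterns and matching strategy are hypothetical simplifications, since production pipelines rely on trained classifiers and expert-built taxonomies rather than keyword lists.

```python
import re
from typing import Iterable, Iterator

# Hypothetical blocklist; a real pipeline would use a vetted taxonomy.
HARMFUL_PATTERNS = [
    re.compile(r"\bgain[- ]of[- ]function\b", re.IGNORECASE),
    re.compile(r"\bpandemic pathogen\b", re.IGNORECASE),
]

def curate(documents: Iterable[str]) -> Iterator[str]:
    """Yield only documents that match none of the harmful patterns."""
    for doc in documents:
        if not any(pattern.search(doc) for pattern in HARMFUL_PATTERNS):
            yield doc

corpus = ["a harmless virology lecture", "notes on gain-of-function methods"]
print(list(curate(corpus)))  # -> ['a harmless virology lecture']
```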
These findings highlight the importance of proactive measures to safeguard against potential risks associated with the misuse of AI in the field of synthetic biology.
Article X-Ray
Here are all the sources used to create this article:
This section links each fact in the article back to its original source.
If you suspect that any false information is present in the article, you can use this section to investigate where it came from.
| venturebeat.com |
| --- |
| Anthropic, an AI safety and research company, has released a new policy called the Responsible Scaling Policy (RSP). |
| The RSP aims to mitigate catastrophic risks associated with advanced AI models that could cause large-scale devastation. |
| The policy highlights the potential for AI to cause significant destruction, leading to thousands of deaths or billions of dollars in damage. |
| The RSP introduces AI Safety Levels (ASLs), a risk tiering system inspired by the U.S. government’s Biosafety Levels. |
| The policy outlines four ASLs, ranging from low risk to high risk. |
| Anthropic acknowledges that the policy is not static and will be updated and refined based on experience and feedback. |
| The company’s goal is to encourage the development of safer AI systems that unlock additional capabilities rather than reckless scaling. |
| Evaluating risks comprehensively is challenging due to the potential for AI models to conceal their abilities. |
| The policy includes measures for independent oversight, with all changes requiring board approval. |
| Anthropic’s RSP announcement comes at a time when the AI industry is facing increased scrutiny and regulation. |
| Anthropic’s AI chatbot, Claude, is designed to combat harmful prompts by explaining their dangers. |
| The company’s approach, Constitutional AI, involves a set of rules providing oversight and incorporates supervised and reinforcement learning phases. |
| Constitutional AI improves transparency and performance of AI decision making and offers precise control over AI behavior with fewer human labels. |
| Anthropic’s research on Constitutional AI and the launch of the RSP demonstrate their commitment to AI safety and ethics. |
| Anthropic sets a high standard for future advancements in AI by focusing on minimizing harm while maximizing utility. |
| aibusiness.com |
| --- |
| The U.K.’s competition watchdog, the Competition and Markets Authority (CMA), has proposed principles to ensure healthy competition in the foundation model space. |
| The CMA warns that some companies could use their models to gain market power and charge high prices for their services. |
| The proposed principles aim to protect consumers and promote competition. |
| The CMA plans to work with stakeholders in the AI space, including leading developers like OpenAI, Meta, Nvidia, and Google, as well as governments, academics, and fellow regulators. |
| An update on the principles is expected in early 2024. |
| The CMA began its study into foundation models in May, covering issues such as security, safety, and copyright. |
| The CMA’s investigation into AI’s impact on competition aligns with the Sunak government’s pro-innovation approach to AI regulation. |
| The principles were published as part of a 130-page initial report on AI foundation models. |
| The CMA consulted with 70 stakeholders, including developers, businesses, and consumer groups, to draft the initial principles. |
| The report highlights the potential negative impacts of AI adoption on competition and consumers, such as false information, fraud, and market power. |
| The CMA emphasizes the importance of effective competition and provisions like data protection and intellectual property. |
| Independent.co.uk |
| --- |
| Synthetic viruses generated through the misuse of artificial intelligence could potentially cause pandemics. |
| Mustafa Suleyman, former Google executive and AI expert, expressed concern about the engineering of pathogens using AI. |
| Suleyman called for restrictions on access to advanced AI technology and software to prevent misuse. |
| He emphasized the need to limit who can use AI software, cloud systems, and biological material. |
| Suleyman suggested restricting access to certain substances and approaching AI development with a precautionary principle. |
| A recent study found that even undergraduates without a biology background can suggest bio-weapons using AI systems. |
| Chatbots were able to suggest potential pandemic pathogens and provide detailed protocols for generating them from synthetic DNA. |
| The study called for non-proliferation measures, including third-party evaluations of AI models and screening of DNA synthesis providers. |
| The researchers recommended curating training datasets to remove harmful concepts and screening DNA used by contract research organizations and robotic laboratories. |