Anthropic CEO Dario Amodei is concerned about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek, such as sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek's performance was "the worst of basically every model that we had ever tested," said Amodei. "It had absolutely no blocks against creating this information."
Amodei said this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team checks whether models can generate bioweapons-related information that is not easily found on Google or in textbooks. Anthropic positions itself as an AI foundation model provider that takes safety seriously.
Amodei said he does not believe DeepSeek's models are "literally dangerous" today in providing rare and dangerous information, but that they could be in the near future. Although he praised DeepSeek's team as "talented engineers," he advised the company to "take these AI safety considerations seriously."
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China's military an advantage.
Amodei did not clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give further technical details about these tests. Anthropic did not immediately respond to TechCrunch's request for comment. Neither did DeepSeek.
DeepSeek's rise has also triggered safety concerns elsewhere. Cisco security researchers, for example, said last week that DeepSeek R1 blocked no harmful prompts in their safety tests, achieving a 100% jailbreak success rate.
Cisco did not mention bioweapons, but said it was able to get DeepSeek to produce harmful information about cybercrime and other illegal activities. It is worth noting, however, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, ironically enough, given that Amazon is Anthropic's largest investor.
On the other hand, there is a growing list of countries, companies, and in particular government organizations such as the U.S. Navy and the Pentagon that have started banning DeepSeek.
Time will tell whether these efforts gain momentum or whether DeepSeek's global rise continues. Either way, Amodei sees DeepSeek as a new competitor on the level of the top AI companies in the United States.
"The new fact here is that there is a new competitor," he said on ChinaTalk. "DeepSeek may be added to the category of big companies that can train AI: Anthropic, OpenAI, Google, perhaps Meta and xAI."