Deepseek: China’s Open -Source -KI -Fuels National Security paradoxical

Deepseek: China’s Open -Source -KI -Fuels National Security paradoxical

Take part in our daily and weekly newsletters to get the latest updates and exclusive content for reporting on industry -leading AI. Learn more


Deepseek and its R1 model do not waste time to rewrite the rules of cyber security AI in real time, and all, from startups to corporate providers that control integrations into their new model this month.

R1 was developed in China and is based on pure reinforcement learning (RL) without supervised fine -tuning. It is also open source and immediately makes it attractive for almost every cyber security startup, which is located on open source architecture, development and provision.

The investment of Deepseek in the model of $ 6.5 million delivers the performance that the O1-1217 from Openaai corresponds to argue benchmarks while it runs on NVIDIA H800 GPUs with a lower NVIDIA. The pricing of Deepseek sets a new standard With significantly lower costs per million tokens compared to OpenAis models. The model of the deep search wrestlers calculates $ 2.19 per million output tokens, while the O1 model from Openai calculates $ 60 for the same. This price difference and its open source architecture have attracted the attention of CIOS, CISOS, cyber security startups and providers of company software alike.

(Interestingly, Openai claims Deepseek used its models To train R1 and other models and to say so far that the company folks data through several queries.)

A AI break with hidden risks that will continue to appear

At the center of the security and trustworthiness of the models is integrated in the core of the model, warned Chris Krebs, opening director of cybersecurity and infrastructure security authority of the US Ministry of Homelry (DHS) Cyber ​​Security and Infrastructure Security ((Infrastructure Security) Agency ((DHS ) Agency (Infrastructure Security “(((Cisa) and most recently Chief Public Policy Officer Sentinelon.

“The censorship of content that the Chinese Communist Party (CCP) can be critical of the model and therefore a design feature with which they can deal with the objective results,” he said. “This” political lobotomization “of Chinese AI models can support the development and global spread of open source AI models in the USA.”

He pointed out that democratizing access to US products abroad increases the American soft power abroad and increasing the spread of Chinese censorship worldwide. “The low costs and the simple calculation of the fundamental data question the effectiveness of the US strategy in order to withdraw Chinese companies access to the latest western tech, including GPUs,” he said. “In a way, they really do more with less.”

Merritt Baer, ​​CISO at Reco And consultants of several security startups said Venturebeat: “In fact, the training (deepseek-r1) could via broader internet data that is controlled by internet sources in the west (or perhaps better described as a lack of Chinese controls and firewalls), a counter Consider. I am less concerned about the obvious things, such as the censor of criticism of President XI and more concerned about the more difficult to define political and social engineering that went into the model. Even the fact that the developers of the model are part of a system of Chinese influencing campaigns is a worrying factor – but not the only factor that we should take into account when choosing a model. “

Since Deepseek has been approved for sale with NVIDIA H800 GPUS in China, but the power of the more advanced H100 and A100 processors is missing, Deepseek democratize your model to an organization that the hardware can affect. Estimates and material costs in which it is explained how a system for 6,000 US dollars can be built up, which can operate R1, multiply on social media.

R1 and follow-up models are built to avoid the US technology sanctions, a point that cancer sees as a direct challenge for the US ACI strategy.

Enkrypt ai’s Deepseek-R1 Red Teaming report Thinks that the model is susceptible to the generation of “harmful, toxic, distorted, CBRN and insecure code edition”. The red team continues: “While it is suitable for closely written applications, the model shows significant weaknesses in operating and security risk areas, as described in our methodology. We strongly recommend implementing reductions if this model is to be used. ”

The red team from Enkrypt Ai also found that Deepseek-R1 is three times more biased than Claude 3 Opus, four times more susceptible to generate uncertain code than O1 of Open AI and four times poisonous than GPT-4O. The red team also found that the model generates an harmful output more often than the O1 of Open AI.

Know the data protection and security risks before sharing your data

Deepseek’s mobile apps are now dominating global downloads, and the web version is recorded in data set -up traffic, whereby all personal data is recorded on both platforms on the servers in China. Considered companies to carry out the model on isolated servers to reduce the threat. Venturebeat has learned from pilots

All data divided on mobile and web apps are accessible by Chinese secret services.

China’s National Secretary Service Act says that companies with government secret services “support, support and work” have to work together. Practice is so omnipresent and such a threat to US companies and citizens that the Department of Homeland Security Has A Data security business advice. Because of these risks the US Navy published a guideline The ban on deepseek-r1 from all work-related systems, tasks or projects.

Organizations that quickly control the new model go to open source and areolating test systems from their internal network and the Internet. The aim is to carry out benchmarks for certain applications and at the same time ensure that all data remains private. Platforms such as realization and hyperbolic laboratories make it possible for companies to safely provide R1 in the USA or in European data centers and not to keep sensitive information within the reach of the Chinese regulations. Please see Excellent summary From this aspect of the model.

Itar Golan, CEO from Startup Promotion for security And a core member of the top 10 of Owasps Top 10 for Great Spracers Models (LLMS) argues that the data protection risks go beyond only Deekseek. “Organizations should not insert their sensitive data into Openai or other US model providers,” he said. “If the data flow to China is a significant national security concern, the US government may want to intervene through strategic initiatives such as subsidizing AI providers inland AI providers in order to maintain competitive pricing and market balance.”

In order to recognize the safety errors of R1, quick support added to inspect the traffic generated by Deepseek-R1 queries in a few days after the model was introduced.

During an examination of Deepseek’s public infrastructure of the cloud security providers WIZ’s Research team A Clickhouse database on the Internet with more than one million protocol lines with chat historicals, secret keys and backend details learned. No authentication was activated in the database, so that a quick escalation for potential permissions was made possible.

The discovery of WIZ’s Research underlines the risk of quickly using AI services that do not build on hardened safety frames on a scale. WIZ apparently gave the violation responsibly and asked Deepseek to complete the database immediately. Deepseek’s initial oversight emphasizes three core lessons so that each AI provider can be taken into account when introducing a new model.

First carry out the red teaming and test the safety of the AI ​​infrastructure thoroughly before you even start a model. Second, forced the least privileged access and take a way of thinking with zero-trust, assume that your infrastructure has already been violated and no multidoma connections across systems or cloud platforms trust. Third, security teams and AI engineers have worked together and have how the models protect sensitive data.

Deepseek creates a security paradox

Cancer warned that the true danger of the model is not only where it was done, but how it was done. Deepseek-R1 is the by-product of the Chinese technology industry, in which the goals of the private sector and the national secret service goals are inseparable. The concept of the firewalling of the model or running as protection is an illusion because, as cancer explains, the distortion and filter mechanisms are already “baked” on a basic level.

Cybersecurity and National Security Leaders agree that Deepseek-R1 is the first of many models with exceptional performance and low costs that we will see from China and other nation states that enforce the control of all data collected.

Conclusion: Where Open Source has long been seen as a democratizing force in the software, the paradox of this model shows how easily a nation state open source can weapons if you wish.



Source link
Spread the love
Leave a Comment

Comments

No comments yet. Why don’t you start the discussion?

Leave a Reply

Your email address will not be published. Required fields are marked *