The development and application of artificial intelligence models in the military field is a trend that continues to expand, especially in countries investing heavily in its advancement, such as China. In this context, the People’s Republic of China has been making progress in expanding its DeepSeek system, a large language model (LLM) that was launched very recently and could be used for battlefield information processing, strategic decision-making, and military operations optimization. However, its implementation raises serious concerns regarding security and vulnerabilities, which could not only impact the operational efficiency of its forces but also compromise critical information within military systems integrated with this LLM.

One of the most concerning aspects of DeepSeek is how its ability to store and process large volumes of information makes it a strategic asset but also a potential vulnerability. According to recent cybersecurity studies, large language models can expose sensitive data due to security protocol failures or targeted attacks aimed at manipulating their responses. In a military scenario where accuracy and reliability of information are crucial, these deficiencies could lead to operational errors with significant consequences, not only for the troops themselves but also for broader conflict-related issues.

Another key issue is the risk that DeepSeek could become a “sleeper agent” within military systems. This concept refers to the possibility that artificial intelligence could be programmed or manipulated to generate incorrect information or mislead operators into making poor decisions at critical moments. The existence of this vulnerability calls into question the feasibility of integrating such technology into defense and combat platforms, especially in a context where information warfare plays a central role.

On top of this, there is the issue of data manipulation. Recent research has warned that language models like DeepSeek can be altered through targeted attacks, injecting false information or modifying how they interpret data. In a conflict, a compromised AI could have multiple negative effects, from operational disruptions to the unauthorized transfer of critical information to other actors. Along these lines, the use of DeepSeek in the military domain also raises concerns about data security. According to recent reports, this system has a high capacity for information storage, and in some cases, this data may be vulnerable to leaks or unauthorized access. The exposure of sensitive military data could eliminate the element of “strategic surprise,” making military movements, system status, and strategic information more transparent than desired.

One identified issue is DeepSeek’s data collection model, which does not guarantee information confidentiality and could expose it to insecure networks. For any military instrument aiming to strengthen its capabilities, such flaws could pose a significant strategic risk.

The Chinese company behind the system has maintained strict secrecy regarding the security measures being implemented to prevent these vulnerabilities. Nevertheless, using DeepSeek in military systems would create a technological dependence that could be exploited by external actors. A recently published analysis highlights that these systems could be targeted by cyberattacks designed to alter their functionality or leak critical information. As a result, any integration of military systems with LLM-type AI introduces a serious weakness within a defense structure.

Another major concern is the possibility that artificial intelligence could deceive its own operators. Simulations with large language models have shown that these platforms can generate responses that not only contain errors but can also lead to incorrect interpretations in combat scenarios. This could become a significant problem if strategic or command functions are delegated to a system that has not been thoroughly tested under real wartime conditions. In this case, we are not only talking about transferring sensitive information but also about the risk of our own assets becoming enemy systems.

Additionally, China’s struggles with AI regulation present another vulnerability. Despite technological advances, cybersecurity regulations have not been able to prevent past attacks on information infrastructure, suggesting that integrating DeepSeek into national defense could increase the military’s exposure to targeted cyberattacks.

The Chinese government’s interest in these systems aligns with a broader military modernization strategy aimed at strengthening both defensive and offensive capabilities through AI-based tools. However, deploying a language model with security deficiencies could compromise operational stability and create a difficult-to-reverse technological dependence.

Information as a Disruptive Weapon

Another critical issue is the role of DeepSeek in cyber warfare and disinformation operations. The potential for these systems to be used not only for intelligence analysis but also to create controlled narratives and manipulate perceptions of reality has become a growing concern in international security forums.

In this context, various analysts have pointed out that the development of AI technologies in China has not only internal implications but also represents a challenge to global stability. As AI becomes a fundamental pillar of China’s defense strategy, the international community closely monitors the risks associated with its implementation and the potential collateral effects it could generate in conflict scenarios. It is important to emphasize that an LLM is only as valuable as the information it processes. In other words, AI reflects the data it is programmed or trained with: if the information maintains a positive ethical framework, the AI’s ultimate purpose will also be positive. However, if its values, ethics, and objectives do not align with widely accepted societal norms, the outcome of such a language model could be entirely negative.

When considering the use of a language system like DeepSeek, where the integrated information may generate ethically questionable responses, we face a narrative problem that could gradually erode fundamental sociological principles often taken for granted. With the widespread adoption of LLM systems by a vast number of people, an AI-driven information bombardment filled with false or morally problematic content could have long-term consequences for society.

As a result, LLM technology itself becomes a weaponized tool with significant impacts not only on military systems but also on broader defense-related areas, such as societal perceptions and beliefs.

While it is true that AI development is not limited to China, other major powers are also advancing their own LLMs adapted for defense environments. However, whereas some nations have opted for strict validation and regulation strategies before deployment, China’s approach appears to be moving forward rapidly without fully considering the inherent risks.

You may also like: Teledyne FLIR will deliver “hundreds” of Black Hornet 4 nano drones to Germany

DEJA UNA RESPUESTA

Por favor deje su comentario
Ingrese su nombre aquí

This site uses Akismet to reduce spam. Learn how your comment data is processed.