LLMs

Intentional Manipulation Attacks Aimed at Corrupting AI Model Decisions

The widespread integration of artificial intelligence models into corporate systems has introduced a new risk vector: the possibility of manipulating their decisions without directly compromising the system that hosts them. These attacks do not rely on traditional exploitation techniques; instead, they use specially crafted inputs designed to induce failures in the model’s behavior. […]


The Cybersecurity Risks of Generative AI Tools

Systems based on foundation models such as ChatGPT, GitHub Copilot, Gemini, or Claude introduce a new set of cybersecurity risks that cannot be treated as a simple evolution of traditional applications. Their probabilistic nature, dependence on massive datasets, and increasing autonomy in sensitive tasks demand a critical review of their implications for information security. […]


Grok 4: Technical Excellence Amid Ethical and Security Controversies

On July 9, 2025, xAI released Grok 4 alongside its extended version, Grok 4 Heavy, with claims of advanced reasoning capabilities, state-of-the-art benchmark performance, and premium subscription plans such as “SuperGrok” at $300/month. Marketed as “the smartest AI,” Grok 4 promised reasoning power capable of solving PhD-level problems, outperforming many competing models in mathematical, logical, […]

