Category: Security

  • The $1 SUV: How Prompt Injection Can Hijack Your AI Systems

    The $1 SUV: How Prompt Injection Can Hijack Your AI Systems

    Chatbots powered by Large Language Models (LLMs) are becoming increasingly common, offering convenient and engaging ways to interact with technology. However, as IBM Distinguished Engineer Jeff Crume explains in a recent video, these systems are vulnerable to a unique type of cyberattack called prompt injection. This post delves into the details of prompt injection, its potential…

  • Safeguard Your Chatbots with Garak: Identifying LLM Vulnerabilities

    Safeguard Your Chatbots with Garak: Identifying LLM Vulnerabilities

    LLMs can be vulnerable to various attacks, including prompt injection, data leakage, and even generating malicious code. But how do you proactively test your LLM-powered applications for these weaknesses? Enter Garak, an open-source LLM vulnerability scanner. In this blog post, I’ll break down the key takeaways from the video and show you how to use…