Is 2026 the Year AI Security Breaks? The Coming Crisis. The AI Threat Landscape for 2026. An investigative briefing for CISOs and security leaders: Inside the 2026 AI Threat Landscape: How AI agents, identity, prompt-injection, data poisoning, and nation-state competition reshape enterprise cybersecurity and geopolitical risk. This documentary-style analysis explores autonomous risk, agent identities, compromised service accounts, covert attack infrastructure, and governance strategies—real-world command centers, data centers, and executive briefings set the tone. Learn why identity is the new battlefield, how linguistic attacks bypass firewalls, and what controls and frameworks (NIST AI RMF) matter most for defending autonomy. Like and share if this helps your security planning.
What is Model Context Protocol (MCP) and Why Attackers Love It. MCP is rapidly becoming the standard interface for connecting AI agents to real-world systems—but it is also creating one of the largest new attack surfaces in the AI ecosystem. How it works, and why security researchers are increasingly concerned about its rapid adoption across enterprise environments. Originally introduced to solve the integration problem between large language models and external tools, MCP acts like a universal adapter that allows AI systems to access services such as Gmail, databases, cloud infrastructure, and internal APIs. While this capability unlocks powerful new automation and “agentic AI” workflows, it also concentrates authentication tokens, permissions, and operational access into centralized MCP servers.
Is your multi-million dollar LLM firewall actually protecting you, or is it just a false sense of security? In this episode of Security of AI, Bobby dives into the devastating reality of the "SockPuppet Attack," a shockingly simple technique that made billion-dollar security systems obsolete overnight. While traditional firewalls scan for malformed packets, they are often blind to semantic attacks—perfectly valid English sentences that can hijack an AI’s logic. Even advanced AI-powered firewalls designed for jailbreak detection are failing. We reveal how attackers use a one-line attack to inject a compliant-sounding prefix into a model's response, tricking the AI into executing malicious instructions before the firewall even realizes what happened.
"Superintelligence": The AI Book that Predicts the Future… But Missed the Real Dangers. This episode investigates one of the most influential books in artificial intelligence: Superintelligence. For over a decade, this book has shaped how governments, researchers, and tech leaders think about the future of AI—warning of a world where machines surpass human intelligence and become impossible to control. But while the focus is on long-term existential risk, a critical question remains: What about the risks already here? In this investigation, we break down what the book gets right—and more importantly, what it misses. Today’s AI systems are already deployed in military, financial, and autonomous environments. They don’t need to become superintelligent to create catastrophic outcomes. They only need to be manipulated.
NIST AI Risk Management Framework Overview. A concise walkthrough of the NIST AI Risk Management Framework (AI-RMF) for leaders in government, finance, and healthcare. Learn why AI risk differs from traditional risk, the seven trustworthiness characteristics (validity, safety, security, accountability, explainability, privacy, fairness), and the four core functions: Govern, Map, Measure, Manage. Practical insights on profiles, implementation, and building organizational processes to manage AI risk across procurement, development, operations, and impacted communities. Perfect for executives, risk officers, and policy teams seeking a clear roadmap to trustworthy AI in real-world contexts.
Artificial Intelligence is being deployed faster than organizations can control it—and that’s where the real risk begins. In this episode of Security of AI, we break down the GOVERN function of the NIST AI Risk Management Framework—the foundation that determines whether your AI system operates as a trusted asset or becomes a liability. Most organizations focus on models, metrics, and deployment. But the real failure point isn’t technical—it’s governance. Who is accountable? Who has authority to shut systems down? What happens when AI behaves unpredictably? Through real-world scenarios across healthcare, finance, and defense, this video exposes how weak governance leads to bias, system failures, and unintended consequences—and what effective
Everyone claims to follow Responsible AI principles. Many point to the NIST AI Risk Management Framework as proof. But when AI systems fail—through bias, hallucination, or operational breakdown—one critical question remains unanswered: who is actually accountable? This video exposes the gap between AI governance theory and real-world execution. The NIST AI RMF provides a thoughtful and practical structure for managing AI risk, but without accountability, transparency, training, and measurable evidence, it can be reduced to compliance theater. We break down why organizations often stop at surface-level adoption—policies, frameworks, and presentations—without implementing the deeper discipline required for real
Who's accountable for AI, when things go wrong. Is it government, industry or the end user? This is AI Governance. In this episode, we discuss an article titled "Ethics Is Not the Hard Part. Accountability Is" by Dennis Monagle, aka. "Fester". Here we confront a critical misconception at the heart of modern artificial intelligence: the belief that ethics is the primary challenge. In reality, the harder—and far more consequential—problem is accountability. This video examines how organizations across defense, government, and industry have leaned heavily on “ethical AI” language, often as a proxy for deeper concerns about uncertainty, control, and responsibility. It clarifies a fundamental truth—AI systems are not moral actors. They are probabilistic tools.
Your AI is Under Attack - 8 Steps to Lock it Down. AI is transforming everything—but it’s also introducing a new class of security risks most organizations are not prepared for. In this episode of Security of AI, we break down the essential steps to protect AI systems from real-world threats that are already happening today. From data poisoning and adversarial attacks to model theft and privacy leaks, AI systems face vulnerabilities that traditional cybersecurity approaches were never designed to handle. This video provides a clear, structured walkthrough of AI security fundamentals—covering risk assessment, data protection, model hardening, adversarial defense, governance, and incident response.
How to Govern and Secure Agentic AI Systems with Zero Trust | Security of AI. Imagine waking up to find your AI system executed a billion-dollar transaction—without human approval. This isn't science fiction. It's the emerging reality of agentic AI. AI agents can move money, access sensitive systems, make operational decisions, and even spawn other agents. While this unlocks unprecedented efficiency, every new capability creates a new attack surface. Most organizations are deploying these systems without the security controls needed to prevent catastrophic failures. In this video, I break down the single most important framework for controlling autonomous AI: Zero Trust for AI Agents.
How Agentic AI Has Caused an AI Security Shift | Security of AI. Agentic AI isn’t just smarter — it acts. In this urgent Security of AI briefing, Bobby explains why agentic AI represents a security shift: autonomy, persistent memory, tool and API calls, and multi-agent delegation become new attack surfaces. Learn about real threats — goal hijacking, memory poisoning, privilege escalation, prompt injection — and why governance frameworks (NIST, MITRE ATLAS, OWASP) must evolve. Practical defenses: minimal footprint, zero standing privilege, explicit confirmation gates, agent identity/provenance, and immutable audit trails. A must-watch for AI security leaders, risk managers, and governance teams preparing for the agentic era. If this helped, please like and share.
How to Build Secure AI Agents | Security of AI. Your AI Agent Could Delete Everything—Here’s How To Stop It — An executive framework for building secure AI agents. This 4-minute briefing walks CISOs and security leaders through six core pillars: threat landscape, new risk domain, identity & access, security controls & gateways, governance & lifecycle, and competitive imperative. Learn practical defenses against prompt injection, excessive privileges, data exfiltration, and compliance drift—aligned with OWASP/NIST guidance and modern agent frameworks. Real-world scenario shows why autonomy without governance is catastrophic and how to design agents that are powerful and secure.
"Your data and privacy is well respected". No data is shared with anyone!
Bobby K. Jenkins Patuxent River, Md. 20670 bobby.jenkins@ai-rmf.com <<www.linkedin.com/in/bobby-jenkins-navair-492267239<<
Mon | By Appointment | |
Tue | By Appointment | |
Wed | By Appointment | |
Thu | By Appointment | |
Fri | By Appointment | |
Sat | Closed | |
Sun | Closed |