Vibe Coding, Slopsquatting, and the Future of AI in Software Development
In this episode, we welcome back Guy Podjarny, founder of Snyk and Tessl, to explore the evolution of AI-assisted coding. We dive deep into the three chapters of AI's impact on software development, from coding assistants to the rise of "vibe coding" and agentic development.Guy explains what "vibe coding" truly is, a term coined by Andrej Karpathy where developers delegate more control to AI, sometimes without even reviewing the code. We discuss how this opens the door for non-coders to create real applications but also introduces significant risks.Caleb, Ashish and Guy discuss:The Three Chapters of AI-Assisted Coding: The journey from simple code completion to full AI agent-driven development.Vibe Coding Explained: What is it, who is using it, and why it's best for "disposable apps" like prototypes or weekend projects.A New Security Threat - Slopsquatting: Discover how LLMs can invent fake library names that attackers can exploit, a risk potentially greater than typosquatting.The Future of Development: Why the focus is shifting from the code itself—which may become disposable—to the importance of detailed requirements and rigorous testing.The Developer as a Manager: How the role of an engineer is evolving into managing AI labor, defining specifications, and overseeing workflowsQuestions asked:(00:00) The Evolution of AI Coding Assistants(05:55) What is Vibe Coding?(08:45) The Dangers & Opportunities of Vibe Coding(11:50) From Vibe Coding to Enterprise-Ready AI Agents(16:25) Security Risk: What is "Slopsquatting"?(22:20) Are Old Security Problems Just Getting Bigger?(25:45) Cloud Sprawl vs. App Sprawl: The New Enterprise Challenge(33:50) The Future: Disposable Code, Permanent Requirements(40:20) Why AI Models Are Getting So Good at Understanding Your Codebase(44:50) The New Role of the AI-Native Developer: Spec & Workflow Manager(46:55) Final Thoughts & Favorite Coding ToolsResources spoken about during the episode:AI Native Dev CommunityTesslCursorBoltBASE44Vercel
--------
49:09
AI in Cybersecurity: Phil Venables (Formerly Google Cloud CISO) on Agentic AI & CISO Strategy
Dive deep into the evolving landscape of AI in Cybersecurity with Phil Venables, former Chief Information Security Officer at Google Cloud and a cybersecurity veteran with over 30 years of experience. Recorded at RSA, this episode explores the critical shifts and future trends shaping our industry.Caleb, Ashish and Phil speak aboutThe journey from predictive AI to the forefront of Agentic AI in enterprise environments.How organizations are transitioning AI from experimental prototypes to impactful production applications.The three essential pillars of AI control for CISOs: software lifecycle risk, data governance, and operational risk management.Current adversarial uses of AI and the surprising realities versus the hype.Leveraging AI to combat workforce skill shortages and boost productivity within security teams.The rise of "Vibe Coding" and how AI is transforming software development and security.The expanding role of the CISO towards becoming a Chief Digital Risk Officer.Practical advice for security teams on adopting AI for security operations automation and beyond.Questions asked:(00:00) - Intro: AI's Future in Cybersecurity with Phil Venables(00:55) - Meet Phil Venables: Ex-Google Cloud CISO & Cyber Veteran(02:59) - AI Security Now: Navigating Predictive, Generative & Agentic AI(04:44) - AI: Beyond the Hype? Real Enterprise Adoption & Value(05:49) - Top CISO Concerns: Securing AI in Production Environments(07:02) - AI Security for All: Advice for Smaller Organizations (Hint: Platforms!)(09:04) - CISOs' AI Worries: Data Leakage, Prompt Injection & Deepfakes?(12:53) - AI Maturity: Beyond Terminator Fears to Practical Guardrails(14:45) - Agentic AI in Action: Real-World Enterprise Deployments & Use Cases(15:56) - Securing Agentic AI: Building Guardrails & Control Planes (Early Days)(22:57) - Future-Proof Your Security Program for AI: Key Considerations(25:13) - LLM Strategy: Single vs. Multiple Models for AI Applications(28:26) - "Vibe Coding": How AI is Revolutionizing Software Development for Leaders(32:21) - Security Implications of AI-Generated Code & "Shift Downward"(37:22) - Frontier Models & Shared Responsibility: Who Secures What?(39:07) - AI Adoption Hotbeds: Which Security Teams Are Leading the Way? (SecOps First!)(40:20) - AI App Sprawl: Managing Risk in a World of Custom, AI-Generated Apps
--------
44:55
Is Your Browser the Biggest AI Security Risk?
Are you overlooking the most critical piece of real estate in your enterprise security strategy, especially with the rise of AI? With 90% or more of employee work happening inside a browser, it's becoming the new operating system and the primary entry point for AI agents.In this episode, Ashish and Caleb dive deep into the world of Enterprise Browsers. They explore why this often-underestimated technology is set to disrupt how AI agents operate and why it should be top-of-mind for every security leader.Join us as we cover:What are Enterprise Browsers? Understanding these Chromium-based, standalone browsers.Who are the Key Players? A look at companies like Island Security and Talon Security (now Palo Alto).Why Now? How browsers became the de facto OS and the prime spot for AI integration.The Power of Control: Exploring benefits like built-in DLP (Data Loss Prevention), Zero Trust capabilities, policy enforcement, and BYOD enablement.Beyond Security: How enterprise browsers can inject features and modify permissions without backend dev work.AI Agents in Action: How AI will leverage browsers for automation and the security challenges this presents.The Future Outlook: Predictions for AI-enabled browsers and the coming wave of browser-focused AI security startups.Whether you're skeptical or already exploring browser security, this conversation offers valuable insights into managing AI agents and securing your organization in an increasingly browser-centric, AI-driven world.Questions asked:(00:00) Intro: Why Enterprise Browsers are Crucial for AI Agents(01:50) Why Discuss Enterprise Browsers on an AI Cybersecurity Podcast?(02:20) The Browser is the New OS: 99% of Time Spent (03:00) AI Agents' Easiest Entry Point: The Browser (03:30) Example: How an AI Agent Automates Tasks via Browser (04:30) The Scope: Intranet, SaaS, and 60% of Employee Activity (06:50) OpenAI's Operator Demo & Browser Emulation (07:45) Overview: What are Enterprise Browsers? (Vendors & Purpose) (08:50) Key Players: Talon (Palo Alto) & Island Security (09:30) Benefit 1: Built-in DLP & Visibility (10:10) Benefit 2: Zero Trust Capabilities (10:40) Benefit 3: Policy, Compliance & Password Management (11:00) Use Case: BYOD & Contractors (Replacing Virtual Desktops?) (13:10) Why Not Firefox or Edge? The Power of Chromium (16:00) Budgeting Challenge: Why Browser Security is Often Overlooked (17:00) The Rise of AI Browser Plugins & Startups (19:30) The Hidden Risk: Existing Chrome Plugin Dangers (23:45) Why Did OpenAI Want to Buy Chrome? (25:00) Devil's Advocate: Can Enterprise Browsers Stop OWASP Top 10? (27:06) Example: AI Agent Ordering Flowers via Browser Extension (29:00) How AI Agents Gain Power via Browser Extensions (30:15) Prediction: What AI Browser Security Startups will look like at RSA 2026? (31:30) Skepticism: Will Enterprises Really Fund Browser Security? (SSPM Lessons) (34:00) The #1 Benefit You Don't Know: Injecting Features Without Code! (34:45) Example: Masking PII & Adding 2FA via Enterprise Browser (38:15) Monitoring AI Agents: Browser as a "Man-in-the-Middle" (40:00) The "AI Version of Chrome": A Future Consumer Product? (42:15) Personal vs. Professional: The Blurring Lines in Browser Use (44:15) Final Predictions & The Cybersecurity Gap (45:00) Final Thoughts & Wrap Up
--------
46:00
AI Red Teaming & Securing Enterprise AI
As AI systems become more integrated into enterprise operations, understanding how to test their security effectively is paramount.In this episode, we're joined by Leonard Tang, Co-founder and CEO of Haize Labs, to explore how AI red teaming is changing.Leonard discusses the fundamental shifts in red teaming methodologies brought about by AI, common vulnerabilities he's observing in enterprise AI applications, and the emerging risks associated with multimodal AI (like voice and image processing systems). We delve into the intricacies of achieving precise output control for crafting sophisticated AI exploits, the challenges enterprises face in ensuring AI safety and reliability, and practical mitigation strategies they can implement.Leonard shares his perspective on the future of AI red teaming, including the critical skills cybersecurity professionals will need to develop, the potential for fingerprinting AI models, and the ongoing discussion around protocols like MCP.Questions asked:00:00 Intro: AI Red Teaming's Evolution01:50 Leonard Tang: Haize Labs & AI Expertise05:06 AI vs. Traditional Red Teaming (Enterprise View)06:18 AI Quality Assurance: The Haize Labs Perspective08:50 AI Red Teaming: Real-World Application Examples10:43 Major AI Risk: Multimodal Vulnerabilities Explained11:50 AI Exploit Example: Voice Injections via Background Noise15:41 AI Vulnerabilities & Early XSS: A Cybersecurity Analogy20:10 Expert AI Hacking: Precisely Controlling AI Output for Exploits21:45 The AI Fingerprinting Challenge: Identifying Chained Models25:48 Fingerprinting LLMs: The Reality & Detection Difficulty29:50 Top Enterprise AI Security Concerns: Reputation & Policy34:08 Enterprise AI: Model Choices (Frontier Labs vs. Open Source)34:55 Future of LLMs: Specialized Models & "Hot Swap" AI37:43 MCP for AI: Enterprise Ready or Still Too Early?44:50 AI Security: Mitigation with Precise Input/Output Classifiers49:50 Future Skills for AI Red Teamers: Discrete OptimizationResources discussed during the episode:Baselines for Watermarking Large Language ModelsHaize Labs
Caleb and Ashish cut through the Agentic AI hype, expose real MCP (Multi-Cloud Platform) risks, and discuss the future of AI in cybersecurity. If you're trying to understand what really happened at RSA and what it means for the industry, you would want to hear this.In this episode, Caleb Sima and Ashish Rajan dissect the biggest themes from RSA, including:Agentic AI Unpacked: What is Agentic AI really, beyond the marketing buzz?MCP & A2A Deployment Dangers: MCPs are exploding, but how do you deploy them safely across an enterprise without slowing down business?AI & Identity/Access Management: The complexities AI introduces to identity, authenticity, and authorization.RSA Innovation Sandbox InsightsGetting Noticed at RSA: What marketing strategies actually work to capture attention from CISOs and executives at a massive conference like RSA?The Current State of AI Security KnowledgeQuestions asked:(00:00) Introduction(02:44) RSA's Big Theme: The Rise of Agentic AI(09:07) Defining Agentic AI: Beyond Basic Automation(12:56) AI Agents vs. API Calls: Clarifying the Confusion(17:54) AI Terms Explained: Inference vs. User Inference(21:18) MCP Deployment Dangers: Identifying Real Enterprise Risks(25:59) Managing MCP Risk: Practical Steps for CISOs(29:13) MCP Architecture: Understanding Server vs. Client Risks(32:18) AI's Impact on Browser Security: The New OS?(36:03) AI & Access Management: The Identity & Authorization Challenge(47:48) RSA Innovation Sandbox 2025: Top Startups & Winner Insights(51:40) Marketing That Cuts Through: How to REALLY Get Noticed at RSA
The #1 source for AI Security insights for CISOs and cybersecurity leaders.
Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in Cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise.
These conversations are helping cybersecurity leaders make informed decisions and lead with confidence in the age of AI.