
I suspect we have all watched a demo where an AI agent autonomously discovered and fixed a security vulnerability in under three minutes. It’s impressive, right up until the engineer mentions they have no idea what dependencies the agent pulled in to make the fix, or whether those libraries introduced new risks. We’re automating security remediation while simultaneously creating massive new attack surfaces.
The convergence of AI development and software supply chain security isn’t just adding complexity; it’s fundamentally changing what we need to secure. Traditional Software Composition Analysis (SCA) tools were built to track libraries and packages. Now we’re dealing with model weights, training datasets (and their provenance), AI agents that write code, and frameworks that evolve faster than our approval processes.
You may have heard a story like this: a team deploys an LLM-powered code assistant, and within days it has introduced 47 new Python packages into their codebase, none of which have been vetted and several of which carry known CVEs.
The assistant is just trying to be helpful, selecting whatever libraries seem most popular for the task at hand. This is the new reality: non-human contributors that don’t attend security training, don’t read your contribution guidelines, and don’t really know your architecture or the plan behind it.
The problem compounds when these AI tools become interconnected. Multi-agent systems, in which specialized AI agents collaborate on complex tasks, can create dependency chains that shift with every execution. One agent pulls in a library, another uses that output to generate more code with different dependencies, and suddenly your software bill of materials looks less like a manifest and more like a probability cloud (a tree of despair).
The teams navigating this successfully aren’t trying to slow down AI adoption; they’re instrumenting it. That means treating AI-generated code with the same scrutiny as open-source contributions: scanning for vulnerabilities, tracking provenance, and maintaining automated inventories that update in real time.
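As a minimal sketch of the vulnerability-scanning piece, the check below asks the public OSV.dev index whether an exact package version has any published advisories. The package names and versions here are illustrative stand-ins for whatever an assistant just added; a real setup would run this over the full diff of introduced dependencies.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Ask the public OSV.dev index whether this exact package version
    has any published advisories. Returns the raw vulnerability records."""
    payload = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    # Illustrative only: pretend these are packages an AI assistant just added.
    for name, version in [("requests", "2.19.0"), ("jinja2", "2.4.1")]:
        vulns = known_vulnerabilities(name, version)
        ids = ", ".join(v["id"] for v in vulns) or "none found"
        print(f"{name}=={version}: {ids}")
```

Using the public OSV index keeps the sketch free of vendor tooling; a team already running a commercial SCA scanner would wire the same check into that instead.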
The practical breakthrough I’m seeing is standardized AI artifact tracking. Just as we use SBOMs to document traditional dependencies, we need machine-readable formats that capture AI-specific components: model versions, training data lineage, inference-time libraries, and agent decision logs. Without this, every AI integration is a black box that auditors rightfully fear.
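What that format looks like in practice is still settling out (emerging work such as CycloneDX’s ML-BOM extensions points in this direction), but the shape of the record matters less than the discipline of producing it. The sketch below is an illustrative, hypothetical record, not an existing standard; every field name is an assumption about what your environment would capture.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Illustrative field names only -- not an existing standard. The point is that
# every AI-specific component becomes an explicit, machine-readable entry.

@dataclass
class AIArtifactRecord:
    model_name: str
    model_version: str
    model_registry: str                  # where the weights came from
    training_data_sources: list[str]     # provenance of the training datasets
    inference_libraries: dict[str, str]  # package -> pinned version
    agent_decision_log: str              # pointer to the agent's action trace
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical values for a single deployment.
record = AIArtifactRecord(
    model_name="example-code-assistant",
    model_version="2025.01",
    model_registry="registry.internal.example/models",
    training_data_sources=["s3://example-bucket/corpus-v3/"],
    inference_libraries={"transformers": "4.44.2", "torch": "2.4.0"},
    agent_decision_log="s3://example-bucket/agent-logs/run-1234.jsonl",
)

# Emit the record alongside the traditional SBOM so there is one inventory to query.
print(json.dumps(asdict(record), indent=2))
```

Producing something like this next to the conventional SBOM gives auditors one inventory to query instead of a black box to fear.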
Start with three concrete steps:
1. If your team is using AI coding assistants or deploying models, map every dependency they introduce. Treat model registries like package repositories: something to inventory and monitor, not to trust blindly. A minimal sketch of that inventory check appears after this list.
2. Put tooling in place that identifies when AI agents or automated systems are making changes to your codebase, then applies appropriate security controls automatically; a sketch of one detection signal also appears below.
3. The best time to catch AI-introduced vulnerabilities is in the pipeline, not in production. That means integrating security scanning wherever AI tools operate (IDE plugins, CI/CD workflows, or agent orchestration platforms).
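For the first step, the sketch below compares the packages installed in the current environment against a team-maintained allowlist of vetted packages. The allowlist file name and format are assumptions; the point is that anything an assistant pulls in that isn’t on the vetted list surfaces immediately rather than months later.

```python
import json
from importlib import metadata
from pathlib import Path

# Hypothetical allowlist maintained by the security team: package -> vetted versions.
ALLOWLIST_PATH = Path("vetted-packages.json")

def installed_packages() -> dict[str, str]:
    """Snapshot every distribution installed in the current environment."""
    return {dist.metadata["Name"].lower(): dist.version
            for dist in metadata.distributions()}

def unvetted_packages() -> dict[str, str]:
    """Return packages present in the environment but absent from the allowlist --
    exactly the kind of dependency an AI assistant tends to introduce quietly."""
    allowlist = {name.lower(): versions
                 for name, versions in json.loads(ALLOWLIST_PATH.read_text()).items()}
    return {
        name: version
        for name, version in installed_packages().items()
        if name not in allowlist or version not in allowlist[name]
    }

if __name__ == "__main__":
    for name, version in sorted(unvetted_packages().items()):
        print(f"UNVETTED: {name}=={version}")
```

Run it in the same environment the assistant works in (or in the CI job that builds from its output) so the inventory reflects what actually ships.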
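For the second step, one lightweight signal is commit metadata. The sketch below flags commits in a review range whose author line or trailers look machine-generated; the marker strings and the default branch name are assumptions, since the real signals depend on which assistants and bots your team runs. Executed in CI, a check like this is also where the third step lands: flagged changes can be gated on the inventory and vulnerability scans above before they merge.

```python
import subprocess

# Assumed markers of AI involvement; real signals vary by tool and workflow.
AI_MARKERS = ("copilot", "[bot]", "ai-assistant", "co-authored-by: github copilot")

def commits_in_range(rev_range: str = "origin/main..HEAD") -> list[str]:
    """List commit hashes in the range under review (default range assumes a main branch)."""
    out = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def looks_ai_authored(commit: str) -> bool:
    """Check a commit's author line and trailers for any of the assumed AI markers."""
    out = subprocess.run(
        ["git", "show", "--no-patch", "--format=%an <%ae>%n%(trailers)", commit],
        capture_output=True, text=True, check=True,
    )
    text = out.stdout.lower()
    return any(marker in text for marker in AI_MARKERS)

if __name__ == "__main__":
    flagged = [c for c in commits_in_range() if looks_ai_authored(c)]
    if flagged:
        # In a real pipeline this would gate the merge: require the dependency
        # inventory and vulnerability scans above before the change can land.
        print(f"AI-authored commits detected: {', '.join(c[:8] for c in flagged)}")
```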
The velocity gains from AI development are real. But velocity without visibility is just speed toward unknown risks.
JD Burke is a seasoned technology professional with over 20 years of experience in product management and application security, currently serving as Director of Security Products at Sembi. His deep expertise in application security testing spans SAST, SCA, and DevOps integration, demonstrated through senior technical roles at leading cybersecurity companies including Snyk, CyberRes/Fortify, and Kiuwan. His technical foundation includes systems architecture experience. He combines strong product management skills with hands-on application security knowledge, having successfully led cross-functional teams through strategic planning, feature development, and market positioning while maintaining expertise in vulnerability assessment, compliance frameworks, and security tool integration.