Agentic AI Security - built for developers

Ship AI applications your security team can trust

  • AI-SBOM — map every agent, model, tool, and datastore from source code
  • Static Analysis — AI Stack analysis, MITRE ATLAS, CVEs, IaC misconfigurations
  • Cognitive Policy — enforce behavioral guardrails, assess OWASP LLM compliance
  • Red-Team — context-based testing that exercises your sub-agents and MCP tools
Star on GitHub
$ pip install nuguard
End-to-end AI application security pipeline
Source Code
Python · TypeScript
Jupyter · SQL
Dockerfiles · K8s
YAML · Nginx
Policy
Cognitive Policy
AI-SBOM
Generator
20+ AI frameworks
Agents · Models · Tools
Prompts · Datastores
Guardrails · Auth · IaC
PII/PHI classification
nuguard sbom generate
Static Analysis
+ Policy Engine
Deep AI Stack Analysis
MITRE ATLAS v2 mapping
OSV · Grype · Trivy CVEs
Checkov · Semgrep IaC
OWASP LLM · NIST · EU AI Act
nuguard analyze · policy check
Red-Team
Dynamic Testing
Context-based Attacks
Guided multi-turn conversations
Canary exfiltration detection
Prompt injection · Tool abuse
MCP toxic flows · Data exfil
nuguard redteam
Outputs
JSON AI-SBOM
Markdown
SARIF 2.1
SPDX 3.0
CycloneDX 1.6
Cognitive Policy feeds into Analysis & Red-Team
4
Integrated AI security capabilities
20+
AI frameworks supported
7
Scanners integrated
100%
Offline core — no API key needed

One shared evidence graph.
Four AI security capabilities.

Scan your source code once, and let AI-SBOM power every step of your security workflow.

  • Limit third-party tool exposure to your source code
  • Use LLMs over your SBOM graph only when necessary
1

AI-SBOM Generation

Extract every AI component — agents, models, tools, prompts, datastores, guardrails, auth nodes, IaC. Relationships between agents, tools, and guardrails are captured in a structured graph.

2

AI Risk Analysis

Deep security analysis of your AI stack against AI security best practices and known vulnerabilities. Perfect during development.

3

Cognitive Policy

Define your app's AI behavior in plain English. Check for drift between intent and implementation, and score it against OWASP LLM, NIST, and other frameworks.
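
A cognitive policy might look like the following — a hypothetical sketch to show the idea of plain-English behavioral intent; the exact file format NuGuard expects may differ:

```markdown
<!-- cognitive_policy.md — illustrative sketch, not NuGuard's official schema -->
# Cognitive Policy: Billing Support Assistant

## Intent
The assistant answers billing questions for authenticated customers only.

## Must
- Refuse to reveal another customer's account data.
- Route refund requests above $500 to a human agent.

## Must Not
- Execute SQL directly; all database access goes through the billing lookup tool.
- Send PII to any third-party LLM API.
```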

4

Context-based Red-Team (dynamic tests)

Attack scenarios derived from the SBOM — not generic payloads. Prompt injection, data exfiltration, tool abuse, MCP toxic flows, privilege escalation.

Built differently from every other AI security tool

Most tools scan for known CVEs or send generic payloads at an LLM endpoint. NuGuard starts from the source code, understands your AI architecture, and derives security evidence — and attacks — from what your application actually does.

🎯

Context-driven Red-Team Attacks

Red-team scenarios are derived from your SBOM graph, not a generic payload library. SQL injection scenarios only fire when the SBOM shows a SQL-injectable tool.

🔬

Cross-component risk reasoning

NuGuard reasons across component boundaries: "this agent has write access to a HIPAA-classified datastore via a tool with no auth boundary and no guardrail in the graph."

🧬

AI SBOM extraction

Supports 20+ AI framework APIs, relationship graphs, package dependencies, and more.

📋

Cognitive Policy

Write AI behavioral intent in plain English. NuGuard checks for drift between intent and implementation, and scores it against security frameworks.

🎣

Canary exfiltration detection

Plant unique test data in your target app's database. The CanaryScanner checks for that data in model responses — definitive proof of data leakage, with no LLM-judge uncertainty.
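
The core idea can be sketched in a few lines of Python — an illustrative sketch of the technique, not NuGuard's actual CanaryScanner implementation (function names are assumed):

```python
import secrets

def make_canary(prefix: str = "NUGUARD-CANARY") -> str:
    """Generate a unique, high-entropy token to plant in the target's datastore."""
    return f"{prefix}-{secrets.token_hex(8)}"

def leaked_canaries(planted: list[str], response_text: str) -> list[str]:
    """Return every planted canary that appears verbatim in a model response.

    A non-empty result is definitive proof of exfiltration -- no LLM judge needed.
    """
    return [c for c in planted if c in response_text]
```

Because detection is an exact string match on a value that cannot occur by chance, a hit is unambiguous evidence that data flowed from the datastore into the response.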

💬

Guided multi-turn conversations

Crescendo-style multi-turn conversations — rapport → normalise → bridge → escalate → inject — adapting in real time based on each response.

✈️

Fully offline core

Safe for air-gapped environments. Fast enough for pre-commit hooks and CI gates.
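
Because the core runs offline and fast, it can sit in a pre-commit hook — a sketch using a generic local hook; the hook id and entry command are illustrative, not an official NuGuard integration:

```yaml
# .pre-commit-config.yaml — illustrative local hook
repos:
  - repo: local
    hooks:
      - id: nuguard-static-scan
        name: NuGuard AI-SBOM + static analysis
        entry: bash -c 'nuguard sbom generate --source . --output app.sbom.json && nuguard analyze --sbom app.sbom.json --min-severity high'
        language: system
        pass_filenames: false
```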

🔗

Standards-native output

A wide range of output formats is supported. Feed findings into GitHub Code Scanning, or your own toolchain.

Every AI component

Filename and line-number evidence for every component, plus relationship graphs.

🤖

Agents

Frameworks like LangGraph, CrewAI, AutoGen, OpenAI Agents SDK, Google ADK, AWS BedrockAgentCore, Azure Semantic Kernel, Agno, and more.

🧠

Models

LLM and embedding model references with provider, version, and model cards.

🔧

Tools & MCP Servers

Function tools, MCP tools, and decorated callables wired to agents — including privilege scope (db_write, code_execution, shell).

📝

Prompts

System instructions and prompt templates — full content preserved, template variables identified, sensitive values redacted.
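
The kind of processing described above can be sketched as follows — an illustrative approximation, not NuGuard's actual extractor; the regex patterns are assumptions:

```python
import re

# Assumed patterns for illustration: {var}-style template variables and
# obvious "key: value" / "key=value" secret assignments.
TEMPLATE_VAR = re.compile(r"\{(\w+)\}")
SENSITIVE = re.compile(r"(?i)(api[_-]?key|secret|password|token)(\s*[:=]\s*)\S+")

def summarize_prompt(text: str) -> tuple[list[str], str]:
    """Return the template variables found and a copy with secret values redacted."""
    variables = TEMPLATE_VAR.findall(text)
    redacted = SENSITIVE.sub(r"\1\2[REDACTED]", text)
    return variables, redacted
```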

🗄️

Datastores

Vector stores, databases, and caches with PII/PHI classification.

🛡️

Guardrails

Content filters and safety validators.

🔑

Auth & IAM

OAuth2, API key, Bearer, JWT, and MCP auth nodes. IAM roles, privilege scopes.

☁️

Deployment & IaC

Kubernetes manifests, Terraform, CloudFormation, Azure Bicep, etc. Cloud region, HA mode, secret stores, IAM roles, encryption settings.

Built for everyone who needs to trust AI in production

NuGuard answers the questions that matter most to each stakeholder — from source code evidence, not manual attestation.

🧑‍💻

AI / Platform Engineers

  • Reduce security review overhead — generate AI-SBOM artifacts automatically in CI
  • Fix findings before AppSec review — structural rules fire on your actual code graph
  • Extensible and customizable — adapt rules to your specific AI frameworks and workflows
👔

CTO / VP Engineering

  • Know your AI attack surface before shipping to production
  • Integrates into existing CI/CD and security tooling
  • Improve developer experience — no manual questionnaires
🔒

CISO / Security Director

  • Self-serve & automation — less manual effort
  • Audit-ready — detailed evidence, documented AI policy enforcement
  • Identify PII/PHI flowing to external LLM APIs — the #1 AI data-risk concern
📋

Compliance & Risk Teams

  • Self-serve compliance assessments — no need to wait for engineering time
  • Human-readable Markdown reports for audit evidence packages
  • Version-controlled policy files — track compliance drift over time in git

Adversarial testing that knows your application

Most red-team tools fire generic prompt libraries at an LLM endpoint. NuGuard knows your application context and tailors attacks accordingly: the attacks that matter for a healthcare app differ from those for a code-generation tool.

Context-Based Attacks

Attack scenarios are derived from the AI-SBOM graph and your Cognitive Policy — not a generic payload library.
A PHI exfiltration scenario only fires when the SBOM contains a PII-classified datastore.

  • Real-world scenarios
  • Attack Efficiency — no wasted API calls
  • Canary gives definitive proof of data exfiltration

Developer-Friendly

Minimal input required. Point NuGuard at your source repo — it builds the attack model automatically.

  • One command: nuguard redteam
  • SARIF output for GitHub Code Scanning
  • LLM Optional — no LLM key needed for core scenarios

Full Agentic App Coverage

Attacks traverse the full agentic stack — not just the chat endpoint. NuGuard tests sub-agents, MCP tool servers, and guardrails.

  • MCP toxic flow injection via untrusted tool output
  • Tool-chain privilege escalation to admin actions
  • Guardrail bypass via multi-turn Crescendo prompts

From install to security posture in minutes

The offline core — AI-SBOM generation, Deep AI Stack Analysis, Cognitive Policy Enforcement — requires no API key and no external network access. Run it anywhere.

Add --llm for LLM enrichment. Add --target for dynamic red-teaming. Both are opt-in.

Full guide → CLI reference →
# Install
$ pip install nuguard

# Generate AI-SBOM from source
$ nuguard sbom generate --source ./my-ai-app --output app.sbom.json

# Static analysis (7 scanners, no running app needed)
$ nuguard analyze --sbom app.sbom.json --format markdown

# Policy linting and compliance assessment
$ nuguard policy check --policy cognitive_policy.md \
    --sbom app.sbom.json --framework owasp-llm-top10

# Dynamic red-team scan (requires running app)
$ nuguard redteam --sbom app.sbom.json \
    --target http://localhost:8000 --profile ci

from pathlib import Path
from nuguard.sbom import AiSbomConfig, AiSbomExtractor, AiSbomSerializer
from nuguard.sbom.toolbox.plugins.vulnerability import VulnerabilityScannerPlugin

# Generate AI-SBOM
doc = AiSbomExtractor().extract_from_path(
    path=Path("./my-ai-app"),
    config=AiSbomConfig(),
)
print(f"nodes={len(doc.nodes)}, edges={len(doc.edges)}")

# Serialize
json_str = AiSbomSerializer.to_json(doc)

# Run structural vulnerability scan
sbom   = doc.model_dump(mode="json")
result = VulnerabilityScannerPlugin().run(sbom, {"provider": "all"})
for f in result.details["findings"]:
    print(f["severity"], f["rule_id"], f["title"])
# .github/workflows/ai-security.yml
name: AI Security Scan
on: [push, pull_request]

jobs:
  nuguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install NuGuard
        run: pip install nuguard
      - name: Generate AI-SBOM
        run: nuguard sbom generate --source . --output app.sbom.json
      - name: Static analysis — fail on HIGH+
        run: |
          nuguard analyze --sbom app.sbom.json \
            --format sarif --output analysis.sarif \
            --min-severity high
      - name: Policy compliance check
        run: |
          nuguard policy check --sbom app.sbom.json \
            --framework owasp-llm-top10
      - name: Upload SARIF to GitHub Code Scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: analysis.sarif

Analyse, export, and integrate

Run any plugin with nuguard sbom plugin run <name> --sbom app.sbom.json. All offline plugins work with zero network access and no API key.

  • SARIF — Exports vulnerability findings as SARIF 2.1.0, compatible with GitHub Code Scanning. Offline
  • CycloneDX — CycloneDX 1.6 BOM. Optionally attaches VEX vulnerabilities from OSV for a combined BOM + VEX document. Offline
  • CycloneDX Ext — CycloneDX 1.6 extended with the NuGuard AI SBOM. Offline
  • SPDX — SPDX 3.0.1 JSON-LD export with the NuGuard AI SBOM. Offline
  • Markdown — Human-readable Markdown report for audit evidence. Offline

Deep support across the AI ecosystem

Python

LangChain · LangGraph · OpenAI Agents SDK · CrewAI (code + YAML) · AutoGen (code + YAML) · Google ADK · LlamaIndex · Agno · AWS BedrockAgentCore · Azure AI Agent Service · Guardrails AI · MCP Server (FastMCP) · MCP Server (low-level) · Azure Semantic Kernel

TypeScript / JavaScript

LangChain.js · LangGraph.js · OpenAI Agents (TS) · Azure AI Agents (TS) · Agno (TS) · MCP Server (TS)

Infrastructure & Configuration

Terraform · CloudFormation · Azure Bicep · Kubernetes manifests · GCP Deployment Manager · GitHub Actions · Dockerfiles · Nginx configs

Data & Storage

SQL schemas (PHI/PII classification) · SQLAlchemy models · Django models · Pydantic models · Prompt files (.txt / .md / .jinja)

Know your AI attack surface before your adversaries do

Install NuGuard, run your first scan, and get a complete security picture of every AI component in your codebase — in minutes, from source code, no API key required.

$ pip install nuguard