Improve the safety & security of your AI Application

  • Deep visibility into agents, models, MCP tools, prompts, and more
  • Vulnerability scanning with PII/PHI detection and security analysis
  • Offline operation with optional LLM modes
  • Integration with your security toolchain via plugins
$ pip install xelo
See it in action
xelo-report · openai-cs-demo.md
$ xelo scan https://github.com/NuGuardAI/openai-cs-agents-demo \
    --format unified --output sbom.json \
    --plugin markdown --plugin-output report.md
  29 nodes · 28 deps · 36 edges → sbom.json
  ok: Markdown report generated (29 node(s)) → report.md

$ code report.md
# Opening Markdown Preview...
Generated: 2026-03-14T22:08:05Z
Schema version: 1.3.0

Summary

| Field | Value |
| --- | --- |
| AI nodes | 29 |
| Dependencies | 28 |
| Data classification | PII |
| Classified tables | AirlineAgentContext, GuardrailCheck |
| Use case | This application implements an agentic AI workflow with 5 agent(s), 6 tool integration(s), and 2 guardrail control(s). Detected use cases include FAQ question answering, request triage and routing. |
| Frameworks | openai_agents |
| Modalities | TEXT |

AI Components

| Name | Type | Confidence | Details |
| --- | --- | --- | --- |
| Cancellation Agent | AGENT | 92% | openai_agents |
| FAQ Agent | AGENT | 92% | openai_agents |
| Triage Agent | AGENT | 92% | openai_agents |
| Deploy to Azure | DEPLOYMENT | 95% | github-actions |
| Jailbreak Guardrail | GUARDRAIL | 92% | openai_agents |
| Relevance Guardrail | GUARDRAIL | 92% | openai_agents |
| gpt-4.1-mini | MODEL | 90% | openai |
| Triage Agent Instructions | PROMPT | 92% | "You are a helpful triaging agent…" · role=system |
  • 20+ supported AI frameworks
  • 13 component types detected
  • 21 structural security rules
  • 11 toolbox plugins
  • 100% offline — no API key needed

A Bill of Materials built for AI

Software Bill of Materials tools were designed for packages and libraries.
AI applications are different — they have agents, models, prompts, tools, and datastores that package managers cannot see.

Xelo fills that gap. It analyses your source code and configs to produce an AI SBOM complete with evidence, confidence scores, and relationship edges.

Export the AI SBOM output to your own tooling, and answer the questions security teams actually ask: Which AI agents touch PHI? Which datastores are connected to which agents, and with what privileges?
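As a rough illustration of what a graph-shaped SBOM makes possible, here is a minimal sketch in plain Python. The node and edge fields (classification, relation, and so on) are hypothetical stand-ins, not Xelo's actual schema:

```python
# Illustrative only: a tiny AI-SBOM-like graph and the kind of
# question it lets you answer mechanically.
sbom = {
    "nodes": [
        {"id": "triage_agent", "type": "AGENT", "confidence": 0.92},
        {"id": "patients_db", "type": "DATASTORE", "classification": "PHI"},
        {"id": "gpt-4.1-mini", "type": "MODEL", "external": True},
    ],
    "edges": [
        {"source": "triage_agent", "target": "patients_db", "relation": "reads"},
        {"source": "triage_agent", "target": "gpt-4.1-mini", "relation": "invokes"},
    ],
}

def agents_touching_phi(sbom):
    """Return ids of agents with an edge to a PHI-classified datastore."""
    nodes = {n["id"]: n for n in sbom["nodes"]}
    return sorted(
        e["source"]
        for e in sbom["edges"]
        if nodes[e["source"]]["type"] == "AGENT"
        and nodes[e["target"]].get("classification") == "PHI"
    )

print(agents_touching_phi(sbom))  # ['triage_agent']
```

Once components and their relationships are explicit nodes and edges, questions like "which agents touch PHI?" become simple graph queries instead of manual code review.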

1. AST-aware adapters

Language-specific parsers for Python (ast) and TypeScript (tree-sitter) extract framework-specific components with high precision — no generic string matching.

2. Regex fallbacks

Model names, auth keywords, secret patterns, datastores, and IaC signals are caught across all file types — SQL, YAML, Dockerfiles, Nginx configs, prompt files.

3. LLM enrichment (optional)

Verifies uncertain detections, enriches node descriptions, generates a use-case summary, and produces a security briefing of all IaC findings. Token budget controlled.
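To make the regex-fallback phase concrete, here is a minimal sketch with hypothetical patterns; the real rule set is far larger and is not shown here:

```python
import re

# Hypothetical signal patterns, in the spirit of a regex fallback layer
# that runs over any file type regardless of language.
PATTERNS = {
    "model_name": re.compile(r"\b(gpt-[\w.-]+|claude-[\w.-]+|llama-[\w.-]+)\b", re.I),
    "secret": re.compile(r"\b(api[_-]?key|secret|token)\b\s*[:=]", re.I),
    "datastore": re.compile(r"\b(postgres(?:ql)?|redis|pinecone|chroma)\b", re.I),
}

def scan_text(text):
    """Return {signal_kind: [matches]} for one file's contents."""
    hits = {}
    for kind, rx in PATTERNS.items():
        found = rx.findall(text)
        if found:
            hits[kind] = found
    return hits

config = 'model = "gpt-4.1-mini"\napi_key: "sk-..."\ncache = "redis://localhost"'
print(scan_text(config))
```

A pass like this catches signals that an AST parser never sees, such as model names inside YAML values or connection strings in a Dockerfile.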

Every AI component, fully mapped

🤖

Agents

Agentic orchestrators — LangGraph, CrewAI, AutoGen, OpenAI Agents SDK, Google ADK, AWS BedrockAgentCore, Azure Semantic Kernel, and more.

🧠

Models

LLM and embedding model references with provider, version, and whether they are external services. Flags PHI/PII flowing to third-party APIs.

🔧

Tools

Function tools, MCP tools, and decorated callables wired to agents. Includes privilege scope — db_write, code_execution, …

📝

Prompts

System instructions and prompt templates — full content preserved, template variables identified.

🗄️

Datastores

Vector stores, databases, and caches with PII/PHI classification from SQL schema and Python ORM analysis. Datastore type and transport captured.

🛡️

Guardrails

Content filters and safety validators. Used by the vulnerability scanner to identify AI components operating without any output protection.

🔑

Auth & Privileges

Authentication nodes (OAuth2, API key, Bearer, JWT, MCP providers) and capability grants. Flags API endpoints with insufficient auth coverage.

☁️

Deployment & IaC

Kubernetes manifests, Terraform, CloudFormation, GitHub Actions, Dockerfiles, Nginx configs. Cloud region, HA mode, secret stores, IAM roles.

AI-native vulnerability scanning

21 structural XELO rules fire purely on the AI SBOM — no manual attestation, no LLM call. Rules are derived from OWASP AI Top 10, NIST AI RMF, and data protection frameworks, adapted for the AI application layer.

21 rules total · 17 offline rules · 4 CRITICAL severity

$ xelo plugin run vulnerability sbom.json
🤖 XELO-001 – 009

AI graph rules

Missing guardrails, PHI to external LLMs, voice + PHI exposure, prompt injection risk, auth coverage gaps.

🔒 XELO-010 – 013

IaC / Encryption

PHI workloads without encryption at rest, secrets in env vars, missing secret management service.

🪪 XELO-014 – 017

IAM & Identity

Overly permissive IAM with PHI, roles without a permission boundary, GitHub Actions workflows without strong authentication.

♻️ XELO-018 – 020

Resilience

Single-AZ deployment with PHI, AI workloads without health checks, containers without resource limits.

🐳 XELO-021

Container security

Containers running as root — detected from Dockerfile USER instructions and K8s pod security context.
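As a sketch of how a structural rule can fire on the SBOM graph alone, here is a hypothetical missing-guardrail check in plain Python; the field names and rule logic are illustrative, not Xelo's implementation:

```python
# Illustrative structural rule: flag any AGENT node with no edge to a
# GUARDRAIL node. No source access, no LLM call, just the graph.
def missing_guardrail_findings(sbom):
    nodes = {n["id"]: n for n in sbom["nodes"]}
    guarded = {
        e["source"]
        for e in sbom["edges"]
        if nodes[e["target"]]["type"] == "GUARDRAIL"
    }
    return [
        {"rule": "missing-guardrail", "severity": "HIGH", "node": n["id"]}
        for n in sbom["nodes"]
        if n["type"] == "AGENT" and n["id"] not in guarded
    ]

sbom = {
    "nodes": [
        {"id": "faq_agent", "type": "AGENT"},
        {"id": "triage_agent", "type": "AGENT"},
        {"id": "relevance_guardrail", "type": "GUARDRAIL"},
    ],
    "edges": [
        {"source": "triage_agent", "target": "relevance_guardrail"},
    ],
}
print(missing_guardrail_findings(sbom))  # only faq_agent is flagged
```

Because the input is the SBOM itself, a rule like this needs no attestation questionnaire and no network access.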

ℹ️ Framework integrations: findings export as SARIF 2.1 (GitHub Code Scanning / GHAS), AWS Security Hub (ASFF), and MITRE ATLAS v2 technique mappings via the atlas plugin. See the Vulnerability Scanning guide for details, and a healthcare voice agent example.

Built for the people securing AI

Xelo is for anyone who needs to answer "what AI is operating on our systems and data?" from a security, risk, or governance perspective.

🧑‍💻

AI Engineers

  • Reduce the time you spend answering security and compliance questions
  • Provide metadata and artifacts automatically through your CI pipeline
  • Leverage third-party security tools without giving them repo access
🔍

AppSec Engineers

  • Find unguarded models and over-privileged agents in code review
  • Run 21 structural XELO rules with no network access
  • Export findings into GitHub Code Scanning / GHAS
  • Map detections to MITRE ATLAS techniques automatically
⚙️

DevSecOps Engineers

  • Publish the AI SBOM and scan artifacts to downstream platforms
  • Scan IaC for encryption gaps, root containers, and single-AZ exposures
  • Push findings to AWS Security Hub, JFrog Xray, or GitHub GHAS
📋

Compliance & Risk Teams

  • Self-serve compliance and risk assessments without taking up AI engineers' time
  • Generate human-readable Markdown reports for audit evidence
  • Identify PII and PHI data flows to external LLM providers

From install to AI SBOM in 60 seconds

Xelo's first two phases are fully deterministic and require no LLM key. Xelo can be used in fully offline environments — no external API calls, no data leaves your network.

JSON output can be validated with the bundled schema, exported as CycloneDX 1.6, or analysed with any built-in toolbox plugin.
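For illustration, here is a stand-in for that validation step using only the standard library. The required fields below are assumptions about the document's top-level shape; the real check is performed by xelo validate against the bundled schema:

```python
import json

# Assumed top-level shape of an AI SBOM document (illustrative only).
REQUIRED = {"schema_version", "nodes", "edges"}

def quick_check(doc):
    """Cheap structural sanity check before handing the file downstream."""
    missing = REQUIRED - doc.keys()
    if missing:
        return f"missing fields: {sorted(missing)}"
    if not all("id" in n and "type" in n for n in doc["nodes"]):
        return "every node needs an id and a type"
    return "OK"

doc = json.loads(
    '{"schema_version": "1.3.0", '
    '"nodes": [{"id": "a", "type": "AGENT"}], "edges": []}'
)
print(quick_check(doc))  # OK
```

A check like this is useful in CI pipelines that consume the JSON with their own tooling rather than the bundled validator.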

# Install
$ pip install xelo

# Scan a local repository
$ xelo scan ./my-ai-app --output sbom.json
  14 nodes, 18 edges → sbom.json

# Scan a remote repository
$ xelo scan https://github.com/org/repo --ref main

# Validate the output
$ xelo validate sbom.json
  OK — document is valid

# Run structural security rules
$ xelo plugin run vulnerability sbom.json

# Export to CycloneDX 1.6
$ xelo scan ./my-ai-app --format cyclonedx --output bom.cdx.json
from pathlib import Path
from xelo import AiSbomConfig, AiSbomExtractor, AiSbomSerializer
from xelo.toolbox.plugins.vulnerability import VulnerabilityScannerPlugin

# Extract
config = AiSbomConfig()
doc    = AiSbomExtractor().extract_from_path(
    path=Path("./my-ai-app"),
    config=config,
)
print(f"nodes={len(doc.nodes)}, edges={len(doc.edges)}")

# Serialize
json_str = AiSbomSerializer.to_json(doc)

# Vulnerability scan
sbom   = doc.model_dump(mode="json")
result = VulnerabilityScannerPlugin().run(sbom, {})
print(result.status, result.message)
# .github/workflows/sbom.yml
name: AI SBOM scan
on: [push, pull_request]

jobs:
  xelo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install xelo
        run: pip install xelo
      - name: Scan
        run: xelo scan . --output sbom.json
      - name: Security check
        run: |
          xelo plugin run vulnerability sbom.json \
            --output findings.json
          python3 -c "
          import json, sys
          d = json.load(open('findings.json'))
          s = d['details']['summary']
          sys.exit(1 if s['critical']+s['high'] > 0 else 0)
          "
      - name: Upload SARIF
        run: |
          xelo plugin run sarif sbom.json --output results.sarif
          xelo plugin run ghas  sbom.json \
            --config token=${{ secrets.GITHUB_TOKEN }} \
            --config github_repo=${{ github.repository }}

Analyse, export, and integrate

Run any plugin with xelo plugin run <name> sbom.json. All offline plugins work with no network access and no API key.

vulnerability 21 structural XELO rules — missing guardrails, over-privileged agents, regulated data (PII/PHI) flowing to external LLMs, plus IaC, encryption, IAM, resilience, and container security findings. Derived from OWASP AI Top 10, NIST AI RMF, and data protection frameworks. Fully offline — no CVE feed, no internet, suitable for air-gapped environments. Offline
atlas Maps every XELO finding to MITRE ATLAS v2 techniques and recommended mitigations. Offline
sarif Exports all findings as SARIF 2.1.0 — compatible with GitHub Code Scanning and any SARIF viewer. Offline
cyclonedx Exports AI nodes as a standards-compliant CycloneDX 1.6 BOM for downstream supply-chain tooling. Offline
markdown Human-readable Markdown report covering all detected components, findings, and remediation guidance. Offline
license Checks all detected package dependencies against SPDX licence identifiers for compliance. Offline
dependency Scores dependency freshness and flags outdated or unmaintained AI packages in the graph. Offline
ghas Uploads SARIF findings directly to GitHub Advanced Security via the GitHub API. Network
aws-security-hub Pushes findings to AWS Security Hub as Security Finding Format (ASFF) records. Network
xray Submits the SBOM to JFrog Xray for vulnerability enrichment and policy evaluation. Network

Deep support across the AI ecosystem

Python

LangChain LangGraph OpenAI Agents SDK CrewAI (code + YAML) AutoGen (code + YAML) Google ADK LlamaIndex Agno AWS BedrockAgentCore Azure AI Agent Service Guardrails AI MCP Server (FastMCP) MCP Server (low-level) Azure Semantic Kernel

TypeScript / JavaScript

LangChain.js LangGraph.js OpenAI Agents (TS) Azure AI Agents (TS) Agno (TS) MCP Server (TS)

Infrastructure & Configuration

Terraform CloudFormation Azure Bicep Kubernetes manifests GCP Deployment Manager GitHub Actions Dockerfiles Nginx configs

Data & Storage

SQL schemas (PHI/PII classification) SQLAlchemy models Django models Pydantic models Prompt files (.txt / .md / .jinja)
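As a toy sketch of name-based PII/PHI column classification over a SQL schema, the snippet below uses a couple of hypothetical hint lists; Xelo's actual classifier and heuristics are not shown here:

```python
import re

# Illustrative hint lists only; a real classifier would be far richer.
HINTS = {
    "PII": re.compile(r"(email|phone|ssn|address|birth|passport)", re.I),
    "PHI": re.compile(r"(diagnosis|medication|mrn|insurance|allergy)", re.I),
}

def classify_columns(ddl):
    """Map column names in a CREATE TABLE body to a data class by name."""
    cols = re.findall(r"^\s*(\w+)\s+\w+", ddl, re.M)  # crude "name type" lines
    out = {}
    for col in cols:
        for label, rx in HINTS.items():
            if rx.search(col):
                out[col] = label
    return out

ddl = """
CREATE TABLE patients (
  id SERIAL,
  email TEXT,
  diagnosis_code TEXT
);
"""
print(classify_columns(ddl))  # {'email': 'PII', 'diagnosis_code': 'PHI'}
```

Classifications like these are what let downstream rules flag, say, a PHI-bearing table wired to an external LLM.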

Start scanning your AI applications

Install Xelo, run your first scan, and get a machine-readable picture of every AI component in your codebase — in under two minutes.

$ pip install xelo