# @repo/ai-oneapp

Enterprise AI agent orchestration with compliance-first guardrails. Block SQL injection, PII leaks, and prompt attacks automatically. Multi-model research pipeline with citations. Risk-based execution limits for sensitive operations.

Add enterprise AI agents in 10 minutes:

```bash
pnpm add @repo/ai-oneapp
```

Automatic guardrails, multi-model research, and proof verification.
## Why @repo/ai-oneapp?
AI agents in production need compliance controls. Without guardrails, agents can leak PII, execute SQL injection, or expose sensitive data. Manual compliance checks are error-prone and don't scale. Multi-model research improves accuracy but requires complex orchestration.
@repo/ai-oneapp solves this with risk-aware orchestration, automatic guardrails, and multi-model research pipelines for enterprise AI.
Production-ready (Grade: B+ / 88%) with 75% test coverage, PII detection, SQL injection blocking, and compliance audit trails.
## Use cases
- Customer support AI — Risk-aware agents with automatic PII sanitization and audit logs
- Research assistance — Multi-model pipeline (Perplexity → Claude → GPT) with citations
- Compliance automation — Verify claims with proof-map framework and confidence scoring
- Internal tools — Risk-tiered budgets (high-risk operations limited to 2 steps)
- Financial services — HIPAA/SOC 2 compliant AI with structured compliance events
## How it works
@repo/ai-oneapp provides risk-aware agent orchestration with automatic guardrails:
```typescript
import { PlatformOrchestrator } from "@repo/ai-oneapp/agents/orchestrator";
import { anthropic } from "@ai-sdk/anthropic";

const orchestrator = new PlatformOrchestrator({
  model: anthropic("claude-sonnet-4"),
  riskLevel: "medium", // 4-step limit, auto guardrails
  enableGuardrails: true
});

const result = await orchestrator.execute({
  prompt: "Analyze customer billing data",
  userId: "user-123"
});
// Automatic: SQL injection blocking, PII detection, compliance events
```

Agents automatically apply risk-based limits, content filtering, and compliance logging.
## Key features

- Risk-based orchestration — Low (6 steps), medium (4 steps), high (2 steps, escalation required)
- Automatic guardrails — Block SQL injection and prompt injection, detect PII in real time
- Multi-model research — Perplexity (citations) → Claude (reasoning) → GPT (verification)
- Proof-map verification — Hierarchical claim verification with 2-iteration fix loops
- Compliance audit trail — Structured events for SOC 2, HIPAA, GDPR compliance
- MCP integration — Model Context Protocol tools for extended capabilities
## Quick Start

### 1. Install dependencies

```bash
pnpm add @repo/ai-oneapp @repo/ai ai
```

### 2. Configure API keys

```bash
ANTHROPIC_API_KEY=sk-ant-...
PERPLEXITY_API_KEY=pplx-...
OPENAI_API_KEY=sk-...
```

### 3. Create a risk-aware agent
```typescript
import { PlatformOrchestrator } from "@repo/ai-oneapp/agents/orchestrator";
import { anthropic } from "@ai-sdk/anthropic";

export const agent = new PlatformOrchestrator({
  model: anthropic("claude-sonnet-4"),
  riskLevel: "medium",
  enableGuardrails: true,
  enableTelemetry: true
});
```

### 4. Execute with automatic guardrails
```typescript
import { agent } from "#/lib/agent";

export async function POST(req: Request) {
  const { prompt, userId } = await req.json();

  const result = await agent.execute({
    prompt,
    userId
  });

  // Automatic: PII detection, SQL injection blocking, audit logs
  return Response.json({ response: result.response });
}
```

That's it! You now have enterprise AI with automatic compliance guardrails.
## Multi-model research

Run a three-stage research pipeline with citations and confidence scoring:

```typescript
import { ResearchPipeline } from "@repo/ai-oneapp/research/pipeline";

const pipeline = new ResearchPipeline({
  perplexity: { apiKey: process.env.PERPLEXITY_API_KEY },
  anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
  openai: { apiKey: process.env.OPENAI_API_KEY }
});

const research = await pipeline.research({
  query: "GDPR requirements for AI systems",
  depth: "comprehensive"
});

console.log(`Confidence: ${research.confidence}%`);
console.log(`Citations: ${research.citations.length}`);
```

## Technical Details
For Developers: Technical implementation details
Enterprise-grade AI agent orchestration library with compliance-first guardrails, multi-model research pipelines, and proof-map verification. Built on Vercel AI SDK v6 for production-critical applications.
## Installation

```bash
pnpm add @repo/ai-oneapp
```

## Production-Ready

Status: 🟢 Production-Ready (Grade: B+ / 88%)
## Overview
| Property | Value |
|---|---|
| Location | packages/ai-oneapp |
| Dependencies | @repo/ai, ai, zod |
| Purpose | AI agent orchestration & compliance |
| Test Coverage | ~75% average |
## Features

### 🤖 AI Agent Orchestration
- PlatformOrchestrator: Risk-aware agent factory with intelligent caching
- Risk-Based Selection: Dynamic agent behavior based on input analysis
- Compliance Signals: Structured audit events throughout execution
- MCP Integration: Model Context Protocol tool support
### 🛡️ Guardrails & Policies
- Multi-Layer Defense: Pattern blocking + PII sanitization + stream monitoring
- Risk-Tiered Budgets:
  - Low: 6 steps, auto tool selection
  - Medium: 4 steps, auto selection
  - High: 2 steps, escalation required
- Content Filtering: SQL injection, prompt injection, PII detection
- Stream Monitoring: Real-time output validation
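The risk-tiered budgets above can be sketched as a simple lookup. Note this is an illustrative sketch only: the `RiskBudget` shape and field names are assumptions, not the package's actual internals.

```typescript
// Illustrative sketch of risk tier → execution budget mapping.
// Field names are assumptions, not the package's real types.
type RiskLevel = "low" | "medium" | "high";

interface RiskBudget {
  maxSteps: number;
  toolSelection: "auto" | "restricted";
  requiresEscalation: boolean;
}

const RISK_BUDGETS: Record<RiskLevel, RiskBudget> = {
  low: { maxSteps: 6, toolSelection: "auto", requiresEscalation: false },
  medium: { maxSteps: 4, toolSelection: "auto", requiresEscalation: false },
  high: { maxSteps: 2, toolSelection: "restricted", requiresEscalation: true }
};

function budgetFor(level: RiskLevel): RiskBudget {
  return RISK_BUDGETS[level];
}
```

Keeping the budgets in one table like this makes the tier limits auditable at a glance, which matters for the compliance story.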
### 📚 Multi-Model Research Pipeline

Three-stage pipeline for high-quality research:

1. Perplexity: Context expansion with real citations
2. Sonnet: Structured reasoning (claims, assumptions, gaps)
3. GPT-5: Verification with confidence scoring
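The three-stage hand-off can be pictured as plain async composition, where each stage consumes the previous stage's output. The `Stage` type and function names here are hypothetical; the real implementation lives in `research/pipeline.ts`.

```typescript
// Hypothetical sketch: each research stage consumes the previous
// stage's output, so the pipeline is just async function composition.
type Stage<I, O> = (input: I) => Promise<O>;

function composeResearch<A, B, C, D>(
  expand: Stage<A, B>, // Perplexity: context expansion
  reason: Stage<B, C>, // Sonnet: structured reasoning
  verify: Stage<C, D>  // GPT-5: verification
): Stage<A, D> {
  return async (input) => verify(await reason(await expand(input)));
}
```

A benefit of this shape is that any stage can be swapped for a mock in tests without changing the pipeline structure.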
### ✅ Proof-Map Framework
- Hierarchical Verification: chains → steps → claims
- Gated Verification: Safe-edit policy with constraints
- 2-Iteration Fix Loop: Conservative error correction
- Comprehensive Testing: 10+ test cases
## Export Paths

| Path | Description |
|---|---|
| `@repo/ai-oneapp` | Main exports |
| `@repo/ai-oneapp/agents` | Agent orchestration |
| `@repo/ai-oneapp/policies` | Compliance policies |
| `@repo/ai-oneapp/research` | Multi-model research |
| `@repo/ai-oneapp/proof-map` | Verification framework |
| `@repo/ai-oneapp/shared` | Shared utilities |
| `@repo/ai-oneapp/superpowers` | Advanced agent capabilities |
## Quick Start

### Basic Agent Orchestration
```typescript
import { PlatformOrchestrator } from "@repo/ai-oneapp/agents/orchestrator";
import { anthropic } from "@ai-sdk/anthropic";

// highlight-start
const orchestrator = new PlatformOrchestrator({
  model: anthropic("claude-sonnet-4"),
  riskLevel: "medium",
  enableGuardrails: true
});
// highlight-end

const result = await orchestrator.execute({
  prompt: "Analyze this customer query",
  userId: "user-123"
});

console.log(result.response);
```

### With Guardrails
**Security First:** Always enable guardrails in production to prevent SQL injection, prompt injection, and PII leakage.
```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { createAgent } from "@repo/ai-oneapp/agents";
import { guardrailPolicy } from "@repo/ai-oneapp/policies/guardrails";
import { loopControlPolicy } from "@repo/ai-oneapp/policies/loop-controls";

// highlight-start
const agent = createAgent({
  model: anthropic("claude-sonnet-4"),
  policies: [
    guardrailPolicy({ level: "strict" }),
    loopControlPolicy({ maxIterations: 5 })
  ]
});
// highlight-end

const stream = await agent.streamText({
  prompt: userInput
});

for await (const chunk of stream) {
  console.log(chunk.text);
}
```

### Research Pipeline
```typescript
import { ResearchPipeline } from "@repo/ai-oneapp/research/pipeline";

// highlight-start
const pipeline = new ResearchPipeline({
  perplexity: { apiKey: process.env.PERPLEXITY_API_KEY },
  anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
  openai: { apiKey: process.env.OPENAI_API_KEY }
});
// highlight-end

const research = await pipeline.research({
  query: "What are the latest GDPR requirements for AI systems?",
  depth: "comprehensive"
});

console.log(research.findings);
console.log(`Confidence: ${research.confidence}%`);
console.log(`Citations: ${research.citations.length}`);
```

### Proof-Map Verification
```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { ProofMap } from "@repo/ai-oneapp/proof-map";
import { verifyChain } from "@repo/ai-oneapp/proof-map/loop";

// highlight-start
const proofMap = new ProofMap({
  claim: "System meets HIPAA compliance requirements",
  evidence: [
    /* ... */
  ]
});
// highlight-end

const verified = await verifyChain(proofMap, {
  model: anthropic("claude-sonnet-4"),
  maxIterations: 2
});

console.log(verified.result); // 'verified' | 'rejected' | 'uncertain'
console.log(verified.reasoning);
```

## API Reference
### PlatformOrchestrator

Constructor Options:

```typescript
interface OrchestratorOptions {
  model: LanguageModel; // AI SDK model instance
  riskLevel?: "low" | "medium" | "high"; // Default: 'medium'
  enableGuardrails?: boolean; // Default: true
  enableTelemetry?: boolean; // Default: true
  maxIterations?: number; // Default: varies by risk
  tools?: Record<string, Tool>; // Custom tools
}
```

Methods:

- `execute(options)` — Execute agent with prompt
- `stream(options)` — Stream agent response
- `getRiskAssessment(input)` — Analyze input risk level
### Guardrail Policy

Configuration:

```typescript
interface GuardrailConfig {
  level: "permissive" | "balanced" | "strict";
  customPatterns?: Array<{ pattern: RegExp; reason: string }>;
  enablePII?: boolean; // Default: true
  enableSQLInjection?: boolean; // Default: true
  enablePromptInjection?: boolean; // Default: true
}
```

Blocked Patterns:

- SQL injection: `DROP`, `TRUNCATE`, etc.
- Prompt injection: Role manipulation, system overrides
- PII: Email addresses, SSNs (extendable)
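A minimal sketch of the pattern-blocking idea follows. The patterns here are illustrative stand-ins; the package's actual rule set in `policies/guardrails.ts` is broader.

```typescript
// Illustrative patterns only; the real guardrail rule set is broader
// and configurable via customPatterns.
const BLOCKED: Array<{ pattern: RegExp; reason: string }> = [
  { pattern: /\b(DROP|TRUNCATE)\s+TABLE\b/i, reason: "SQL injection" },
  { pattern: /ignore\s+(all\s+)?previous\s+instructions/i, reason: "Prompt injection" },
  { pattern: /\b\d{3}-\d{2}-\d{4}\b/, reason: "PII: SSN" },
  { pattern: /[\w.+-]+@[\w-]+\.[\w.-]+/, reason: "PII: email" }
];

// Screen an input against every pattern; first match wins.
function screen(input: string): { blocked: boolean; reason?: string } {
  for (const { pattern, reason } of BLOCKED) {
    if (pattern.test(input)) return { blocked: true, reason };
  }
  return { blocked: false };
}
```

The `customPatterns` option in `GuardrailConfig` takes entries of the same `{ pattern, reason }` shape.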
### Research Pipeline

Configuration:

```typescript
interface ResearchConfig {
  perplexity: { apiKey: string; model?: string };
  anthropic: { apiKey: string; model?: string };
  openai: { apiKey: string; model?: string };
  enableCitations?: boolean; // Default: true
  enableTracing?: boolean; // Default: true
}
```

Result Structure:

```typescript
interface ResearchResult {
  findings: string; // Consolidated findings
  confidence: number; // 0-100 confidence score
  citations: Array<{
    source: string;
    url: string;
    relevance: number;
  }>;
  reasoning: {
    claims: string[];
    assumptions: string[];
    gaps: string[];
  };
  metadata: {
    tokensUsed: number;
    duration: number;
    models: string[];
  };
}
```

## Architecture
### Component Structure

```text
@repo/ai-oneapp/
├── agents/              # Agent orchestration
│   ├── orchestrator.ts  # Risk-aware agent factory
│   ├── agent.ts         # Base agent implementation
│   └── tools/           # MCP and custom tools
├── policies/            # Compliance policies
│   ├── guardrails.ts    # Content filtering
│   ├── loop-controls.ts # Iteration limits
│   └── telemetry.ts     # Audit logging
├── research/            # Multi-model research
│   ├── pipeline.ts      # 3-stage pipeline
│   ├── tool.ts          # Research tool wrapper
│   ├── trace.ts         # Execution tracing
│   └── types.ts         # Type definitions
├── proof-map/           # Verification framework
│   ├── schema.ts        # Proof structure
│   ├── loop.ts          # Verification loops
│   └── applyPatch.ts    # Safe edits
└── shared/              # Shared utilities
    ├── schemas.ts       # Zod schemas
    └── signals.ts       # Event bus
```

### Data Flow
```text
User Input
    ↓
[Guardrails] → Input Validation
    ↓
[Orchestrator] → Risk Assessment → Agent Selection
    ↓
[Agent] → Model Execution → [Loop Controls]
    ↓
[Guardrails] → Output Validation
    ↓
[Telemetry] → Compliance Events
    ↓
Response
```

## Risk-Based Agent Selection
**Automatic Risk Assessment:** The orchestrator automatically analyzes input to determine risk level and applies appropriate constraints.

```typescript
import { PlatformOrchestrator } from "@repo/ai-oneapp/agents/orchestrator";
import { anthropic } from "@ai-sdk/anthropic";

const orchestrator = new PlatformOrchestrator({
  model: anthropic("claude-sonnet-4"),
  // highlight-next-line
  riskLevel: "high" // 2-step limit, requires escalation
});

// For customer data queries
const result = await orchestrator.execute({
  prompt: "Show me customer billing history",
  userId: "user-123"
});

// Automatically applies:
// - Max 2 iterations
// - Strict guardrails
// - Escalation signals
```

## Environment Variables
**Required for Production:** All three API keys are required for the full research pipeline. Missing keys will cause pipeline failures.

```bash
# Research Pipeline (Required)
PERPLEXITY_API_KEY="pplx-..."
ANTHROPIC_API_KEY="sk-ant-..."
OPENAI_API_KEY="sk-..."

# Optional
AI_TELEMETRY_ENDPOINT="https://your-telemetry-endpoint.com"
AI_TRACE_STORAGE_PATH="./traces"
```

## Testing
### Run Tests

```bash
# All tests
pnpm --filter @repo/ai-oneapp test

# Specific suite
pnpm --filter @repo/ai-oneapp test orchestrator

# Watch mode
pnpm --filter @repo/ai-oneapp test --watch

# Coverage
pnpm --filter @repo/ai-oneapp test --coverage
```

### Test Coverage
| Component | Coverage | Files |
|---|---|---|
| Agents | ~75% | 4 test files |
| Policies | ~80% | 3 test files |
| Research | ~70% | 3 test files |
| Proof-Map | ~85% | 2 test files |
### Writing Tests
```typescript
import { describe, it, expect, vi } from "vitest";
import { PlatformOrchestrator } from "@repo/ai-oneapp/agents/orchestrator";

// highlight-start
describe("PlatformOrchestrator", () => {
  it("selects agent based on risk level", async () => {
    const orchestrator = new PlatformOrchestrator({
      model: mockModel, // a stubbed LanguageModel instance
      riskLevel: "high"
    });

    const result = await orchestrator.execute({
      prompt: "Test prompt"
    });

    expect(result.iterations).toBeLessThanOrEqual(2);
  });
});
// highlight-end
```

## Known Issues
### 🔴 Critical: Token Usage Tracking

Location: `src/research/pipeline.ts`

```typescript
// ❌ WRONG: Both fields use totalTokens
inputTokens: result.usage.totalTokens ?? 0,
outputTokens: result.usage.totalTokens ?? 0,

// ✅ FIX: Use the correct fields
inputTokens: result.usage.inputTokens ?? 0,
outputTokens: result.usage.outputTokens ?? 0,
```

**Impact:** Cost calculations are incorrect and usage analytics are unreliable.
**Priority:** P0 · **Effort:** 15 minutes
### 🟡 Medium Priority

1. No Production Persistence (`research/trace.ts`)
   - Currently uses file-based storage
   - Needs PostgreSQL + Redis for production
   - Effort: 1 week
2. Limited PII Patterns (`policies/guardrails.ts`)
   - Only detects 2 PII types (email, SSN)
   - Missing: credit cards, phone numbers, addresses
   - Effort: 4 hours
3. JSON Parsing Fragility (multiple files)
   - Uses regex-based extraction
   - Should use AI SDK structured outputs
   - Effort: 2 days
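To illustrate the parsing fragility, a balanced-parse helper is one regex-free interim alternative. The helper below is hypothetical (the recommended long-term fix remains the AI SDK's structured outputs):

```typescript
// Hypothetical helper: find the first parseable JSON object in model
// output by shrinking the candidate window, instead of regex-matching
// braces (which breaks on nested objects and braces inside strings).
function extractFirstJson(text: string): unknown | null {
  const start = text.indexOf("{");
  if (start === -1) return null;
  for (let end = text.length; end > start; end--) {
    try {
      return JSON.parse(text.slice(start, end));
    } catch {
      // Not valid JSON yet; shrink the window and retry.
    }
  }
  return null;
}
```

Structured outputs make even this unnecessary, since the model response is validated against a schema before it reaches application code.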
## Performance
- Agent Caching: Intelligent caching reduces redundant API calls
- Streaming: Full streaming support for real-time responses
- Resource Cleanup: Proper cleanup of MCP connections
- Graceful Fallbacks: Perplexity → GPT-4 on API failures
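The graceful-fallback behavior can be pictured as a small wrapper: try the primary provider, and on any failure run the fallback. The helper is a hypothetical sketch; the package wires this internally.

```typescript
// Hypothetical sketch of the fallback shape: primary provider first,
// fallback provider on any thrown error.
async function withFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>
): Promise<T> {
  try {
    return await primary();
  } catch {
    return fallback();
  }
}
```

In the pipeline's case, `primary` would be the Perplexity call and `fallback` the GPT-4 call.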
## Security

### Guardrail Protection
- ✅ SQL injection prevention
- ✅ Prompt injection detection
- ✅ PII sanitization (basic)
- ✅ Stream monitoring
- ⚠️ Rate limiting (needs implementation)
### Compliance Features
- Structured compliance events
- Audit trail via telemetry policy
- Risk-based execution limits
- User action tracking
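A structured compliance event might look roughly like this. The field names are illustrative assumptions; the actual schema lives in the telemetry policy (`policies/telemetry.ts`).

```typescript
// Illustrative event shape; field names are assumptions, not the
// package's actual telemetry schema.
interface ComplianceEvent {
  timestamp: string; // ISO 8601
  userId: string;
  action: string;
  riskLevel: "low" | "medium" | "high";
  guardrailsTriggered: string[];
}

function complianceEvent(
  userId: string,
  action: string,
  riskLevel: ComplianceEvent["riskLevel"],
  guardrailsTriggered: string[] = []
): ComplianceEvent {
  return {
    timestamp: new Date().toISOString(),
    userId,
    action,
    riskLevel,
    guardrailsTriggered
  };
}
```

Keeping events structured (rather than free-form log lines) is what makes the SOC 2 / HIPAA / GDPR audit trail queryable.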
## Related Packages
- @repo/ai - Base AI SDK wrapper
- @repo/observability - AI observability and tracing
## External Resources
- Vercel AI SDK - Base framework
- Model Context Protocol - MCP specification
- Anthropic Claude - Primary LLM provider