Applied AI Systems
Runtime monitoring, control layers, and governance systems for real-world AI deployments. Production-oriented AI systems for runtime monitoring, governance, agentic workflows, and enterprise risk control.
Overview
I build applied AI systems that focus on how models and agents behave in real environments, not only how they perform in isolated demos.
My work sits at the intersection of:
- AI safety and runtime monitoring
- governance and auditability
- agentic systems and control layers
- enterprise deployment and operational risk
This portfolio highlights systems designed to make AI more observable, controllable, and deployment-ready.
Featured System
LLM Runtime Risk Monitoring & Attack Detection System
π View on GitHub
A monitoring and triage system for detecting high-risk behavior in LLM applications and agentic workflows.
What it does
- detects prompt injection, jailbreak, and policy bypass attempts
- flags sensitive data exfiltration patterns
- identifies suspicious tool usage and memory access behavior
- assigns severity, confidence, and analyst-oriented triage fields
- produces audit-ready evidence for runtime governance workflows
Why it matters Traditional AI governance often focuses on policy and documentation. This system focuses on runtime behavior, where many AI failures actually emerge.
Core capabilities
- rule-based detection engine
- SOC-style incident model
- analyst triage workflow
- governance control mapping
- exportable incident evidence
AI Safety & Governance Systems
AI Verify on AWS
π View on GitHub
Deployed AI Verify in a working AWS environment to evaluate ML systems against fairness, transparency, and responsible AI principles.
Focus
- governance-oriented model evaluation
- validation workflow setup
- audit and evidence generation
Dev Guardian β AI Assurance & Control Engine
π View on GitHub
A governance-oriented control and audit engine that translates AI system behavior into structured risk signals, control mappings, and audit-ready evidence.
Focus
- runtime risk summarization
- control mapping (EU AI Act / NIST AI RMF)
- structured audit outputs (risk_summary.md)
- governance signal generation for AI systems
Role in Architecture Acts as the audit and assurance layer, transforming runtime events into governance evidence and compliance-ready artifacts.
GenAI Multi-Agent Security Scanner
π View on GitHub
A multi-agent application security workflow using LLM reasoning to detect, classify, and report code risk.
Focus
- AI-assisted security analysis
- risk classification
- workflow traceability
- structured reporting
Agentic & Applied AI Systems
Agentic Workflow Systems
π View on GitHub
Exploration of LLM-driven assistants and AutoGPT-style orchestration for autonomous task execution.
Focus
- agent control challenges
- tool usage behavior
- reliability and oversight
- external API orchestration
Agentic AI Trip Planner
π View on GitHub
Designed an agentic AI assistant that builds travel itineraries using LLM-driven reasoning and external data APIs.
PDF to Audiobook Pipeline
π View on GitHub
A document-processing CLI pipeline that converts PDFs into structured audiobook outputs.
Focus
- practical automation
- content transformation pipeline
- metadata handling
- operational usability
Quant / Experimental AI Systems
Minervini Predicted Stock Trading
π View on GitHub
Implemented momentum-based screening strategy inspired by Mark Minerviniβs breakout and base-building logic.
PowerX Predicted Stock Trading
π View on GitHub
Built PowerX-inspired stock screener using indicators like MACD, RSI, and Stochastics. Extended with LSTM prediction.
Focus
- feature engineering
- model experimentation
- classification pipelines
Note These projects are part of my applied modeling background, but my current focus is increasingly on AI runtime risk, monitoring, and control systems.
How I Think About AI Systems
I am most interested in systems that answer questions like:
- How do we detect AI failures in production?
- How do we observe unsafe agent behavior?
- How do we translate runtime events into governance evidence?
- How do we move from static compliance to operational assurance?
That is the direction of this portfolio.
Current Direction
My current build focus is on:
- AI safety systems
- runtime monitoring layers
- governance control mappings
- agentic risk detection
- practical assurance tooling for real deployments
Selected Background
This work builds on experience across:
- 24+ years of enterprise digital transformation
- regulated operational environments
- MLOps and governance alignment
- AI audit and responsible AI controls
Letβs Talk
If you are working on:
- AI governance that needs operational evidence
- agentic systems that need control layers
- LLM deployments that need monitoring and triage
- runtime risk detection for AI systems
Iβd be glad to connect.