AI Agent Platform
Designing the first AI-native Manufacturing Execution System for electric vehicle production — a composable mesh of AI agents that act as trusted collaborators, delivering measurable business outcomes through transparent, resilient, and ethical intelligence.
AI Agent Platform · Industrial Decision Intelligence
0→1
Product launched
50+
Components in design system
4
AI agents orchestrated
7
AI Constitution principles
Context
Modern EV production depends on MES to coordinate everything from parts procurement to final assembly. The industry standards, SAP and Siemens, were built for a different era. Their data orchestration has grown so complex that the systems meant to accelerate production have become bottlenecks. When a parts shortage hits or a quality issue surfaces, decision-makers can't reach consensus in near real time; they're trapped navigating layers of dashboards and cross-referencing siloed data sources. A single hour of line downtime can cost hundreds of thousands of dollars.
The conviction behind this platform: the next generation of industrial software wouldn't be built by adding more dashboards to legacy systems. It would be built AI-native from the ground up. I joined as the founding designer, the first and only designer on the team; every design decision was mine to make.
Legacy MES systems present data. AI agents present decisions.
Legacy Dashboard vs. Decision-First AI Interface
The Work
Research began with domain immersion: extensive sessions with our supply chain SME, mapping the decision-making workflows of each persona. I studied work from OpenAI, Anthropic's Constitutional AI, and Google PAIR, along with emerging agentic AI patterns, to inform the interaction model. Competitive analysis of SAP MES and Siemens Opcenter confirmed the pattern: these systems treat data as something to be navigated, not understood.
I designed a four-agent system built on open metadata and ontology mapping: a Sourcing Agent (detects shortages, recommends alternatives), Risk Agent (monitors signals, calculates risk scores), Planning Agent (tracks demand, models scenarios), and Design Agent (scans BOM for vulnerabilities, finds alternatives).
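The four-agent roster above can be sketched as a typed registry. This is a minimal illustration, not the production ontology schema; the field names and signal sources are assumptions.

```typescript
// Hypothetical sketch of the four-agent roster; names and fields are
// illustrative, not the platform's actual metadata model.
type AgentId = "sourcing" | "risk" | "planning" | "design";

interface AgentSpec {
  id: AgentId;
  watches: string[];   // signal sources the agent monitors
  produces: string[];  // decision artifacts it emits for human review
}

const agents: AgentSpec[] = [
  { id: "sourcing", watches: ["inventory", "supplier-feeds"],     produces: ["shortage-alert", "alternative-part"] },
  { id: "risk",     watches: ["external-signals"],                produces: ["risk-score"] },
  { id: "planning", watches: ["demand-forecast"],                 produces: ["scenario-model"] },
  { id: "design",   watches: ["bom"],                             produces: ["vulnerability-report", "alternative-part"] },
];

// Composability: any consumer can look agents up by the artifacts they produce.
const producersOf = (artifact: string): AgentId[] =>
  agents.filter((a) => a.produces.includes(artifact)).map((a) => a.id);
```

Keeping agents as data rather than hard-wired features is what makes the mesh composable: new agents register their inputs and outputs instead of being bolted onto the UI.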
The three-tier Executive Swim Lane Framework maps decisions between humans and AI: COO tier (reviews, approves via explainability panels), Human-in-the-Loop tier (category manager, risk manager, supply planner, design engineer), and AI Agent tier (four agents in parallel with human approval gates).
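The tier routing can be expressed as a small escalation function. The tier names follow the framework; the confidence and cost thresholds below are invented for illustration only.

```typescript
// Sketch of Executive Swim Lane routing. The 0.85 confidence threshold and
// $1M cost threshold are illustrative assumptions, not the real policy.
type Tier = "ai-agent" | "human-in-the-loop" | "coo";

interface Decision {
  confidence: number;     // agent's calibrated confidence, 0..1
  costImpactUsd: number;  // estimated financial exposure of the decision
}

// High-stakes decisions escalate to the COO tier; uncertain ones go to a
// human-in-the-loop reviewer; the rest stay in the (still gated) agent lane.
function routeTier(d: Decision): Tier {
  if (d.costImpactUsd >= 1_000_000) return "coo";
  if (d.confidence < 0.85) return "human-in-the-loop";
  return "ai-agent";
}
```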
Trust calibration was the most critical challenge. Every AI action exposes four elements: Intent, Confidence, Provenance, and Alternatives. Progressive disclosure offers a one-line summary for the 90% of users who need nothing more, key factors for the 30% who expand, and the full reasoning trace for the 5% who drill deepest.
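The four-element contract and its disclosure levels can be sketched as a type plus a renderer; the shapes and strings here are illustrative, not the shipped component API.

```typescript
// Sketch of the four-element transparency contract every agent action carries.
interface ExplainableAction {
  intent: string;                        // what the agent is trying to achieve
  confidence: number;                    // calibrated score, 0..1
  provenance: string[];                  // data sources behind the recommendation
  alternatives: { option: string; tradeoff: string }[]; // paths not taken, and why
}

// Progressive disclosure: one line for most users, key factors for those who
// expand, and the full trace for the few who drill all the way down.
type DisclosureLevel = "summary" | "key-factors" | "full-trace";

function render(action: ExplainableAction, level: DisclosureLevel): string {
  if (level === "summary") return action.intent;
  if (level === "key-factors")
    return `${action.intent} (confidence ${action.confidence}; sources: ${action.provenance.join(", ")})`;
  return JSON.stringify(action, null, 2); // full reasoning trace
}
```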
I authored an AI Constitution for the platform — 7 principles: Human Welfare First, Transparency Over Opacity, User Control Over Automation, Fairness Over Bias, Privacy Over Convenience, Accuracy Over Speed, Auditability Over Secrecy.
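Because the Constitution functions as auditable requirements rather than guidelines, it can be modeled as typed data: each principle maps to pass/fail evidence, and an agent behavior ships only if every principle passes. A minimal sketch (the audit shape is an assumption):

```typescript
// The seven Constitution principles as auditable requirements (sketch).
const CONSTITUTION = [
  "Human Welfare First",
  "Transparency Over Opacity",
  "User Control Over Automation",
  "Fairness Over Bias",
  "Privacy Over Convenience",
  "Accuracy Over Speed",
  "Auditability Over Secrecy",
] as const;

type Principle = (typeof CONSTITUTION)[number];

// An audit records a verdict for every principle; one failure blocks release.
type Audit = Record<Principle, boolean>;

const passes = (a: Audit): boolean => CONSTITUTION.every((p) => a[p]);
```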
I built an end-to-end token pipeline: Figma Variables → Tokens Plugin → Style Dictionary → Tailwind Config → Storybook → CI/CD.
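The Style Dictionary stage of that pipeline might look like the configuration below. This is a minimal sketch in the spirit of the pipeline; the file paths, platform name, and build targets are assumptions, not the production setup.

```typescript
// Illustrative Style Dictionary config: JSON tokens exported from Figma
// Variables are transformed into a flat JS module that the Tailwind config
// and Storybook consume, keeping design and code on one source of truth.
export default {
  source: ["tokens/**/*.json"],  // assumed export location from the Tokens plugin
  platforms: {
    tailwind: {
      transformGroup: "js",
      buildPath: "build/",
      files: [
        { destination: "tokens.js", format: "javascript/module-flat" },
      ],
    },
  },
};
```

Because every downstream consumer reads the same generated module, a token edit in Figma propagates to Tailwind, Storybook, and CI without manual translation, which is what makes 1:1 design-to-code fidelity enforceable.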
AI-State Color Semantics · Component Design System
Impact
First AI-native MES for electric vehicle production: a 0→1 product launched from concept to production.
Comprehensive UX Design Guide v2.0: a 10-chapter document.
Complete component library: 50+ components with dark/light mode, AI states, and data visualization patterns.
Multi-agent interaction framework with HITL patterns.
AI Constitution with an audit checklist governing all agent behavior.
Token pipeline achieving 1:1 design-to-code fidelity.
WCAG 2.2 AA+ compliance across all components.
Before
Legacy MES — navigate dashboards, cross-reference siloed data, wait for reports
After
AI agents surface contextualized decisions with confidence scores and alternatives
Before
No standard for AI behavior, transparency, or human override
After
7-principle AI Constitution with auditable checklist embedded in UX
Reflection
Designing for AI agents is fundamentally different from traditional software. The interface isn't just a window into data — it's a collaboration layer between human expertise and machine intelligence.
Intent-adaptive, not feature-driven. Transparent reasoning over black-box recommendations. Composable intelligence. Progressive autonomy with an ethical floor.
Three capabilities that scale: platform thinking (designing for multi-agent systems forces composable architecture), trust-as-design-material (in manufacturing, wrong decisions cost millions — trust is earned through transparency, not accuracy scores), and AI ethics as UX requirement (the Constitution isn't guidelines — it's auditable requirements embedded in the interface).
Recognition