AI-Integrator
I build AI systems that make military planning faster, clearer, and executable under pressure.
My background is in military training and simulation — 20 years as a Field Artillery NCO, followed by work at CGSC, West Point, and now JMSC/MTC-G. That's the domain where I build AI systems.
My job is to make AI operationally useful under real constraints: short timelines, language barriers, high stakes, and no second chances. I design the workflows, build the tools, and hold the process together so the team can move faster without losing judgment.
Available for remote roles beginning Fall 2026.
The AI-Integrator role sits at the intersection of three capabilities: each is necessary, and none is sufficient on its own.
I know when AI output is plausible but wrong. I constrain, validate, and govern outputs at each stage so speed doesn't displace judgment. AI accelerates the work — I determine what's usable.
I see AI-assisted workflows as pipelines, not isolated tasks. Each product depends on the one before it. I design the chain, hold it together through iteration, and make sure nothing downstream breaks when something upstream changes.
AI cannot assess what an organization truly needs or manage the risk of a first attempt. I can. Operational literacy, requirements translation, and the ability to maintain institutional confidence under a one-shot timeline — that's the work AI can't do.
A consequence-based staff decision exercise built from scratch for Ukrainian officer training — designed, pipelined, and validated in under 8 weeks.
AI-powered briefing generator that turns Markdown slide scripts into mission-ready PPTX and HTML briefings.
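To make the pipeline concrete, here is a minimal sketch of the Markdown-to-HTML leg of that conversion. The slide-script format shown (slides separated by "---" lines, a "# " title, "- " bullets) is illustrative, not the tool's actual spec, and the function name is hypothetical:

```python
import html
import re

def md_script_to_html(script: str) -> str:
    """Render a Markdown slide script as a minimal HTML briefing.

    Assumed (illustrative) format: slides separated by '---' lines,
    first '# ' line is the slide title, '- ' lines are bullets.
    """
    slides_html = []
    for block in re.split(r"^---$", script, flags=re.MULTILINE):
        title, bullets = "", []
        for line in block.strip().splitlines():
            if line.startswith("# "):
                title = html.escape(line[2:].strip())
            elif line.startswith("- "):
                bullets.append(f"<li>{html.escape(line[2:].strip())}</li>")
        items = "".join(bullets)
        slides_html.append(f"<section><h1>{title}</h1><ul>{items}</ul></section>")
    return "<html><body>" + "".join(slides_html) + "</body></html>"
```

The PPTX leg follows the same parse step, handing each section to a slide-layout writer instead of an HTML template.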
A Streamlit GUI backed by the Claude API for drafting military planning documents grounded in US Army and NATO doctrine.
Active research and experiments — work that isn't shipped yet but shapes where I'm headed.
Exploring Andrej Karpathy's WikiLLM architecture as a locally hosted knowledge base for military exercise design — a purpose-built alternative to general-purpose retrieval.
An LLM-driven adjudication engine for staff exercises — replacing fixed outcome rules with a model that evaluates staff decisions against doctrine and context. Designed to link with the Exercise WikiLLM as its knowledge base.
An ongoing personal library of skill.md files that encode repeatable workflows for military planning, exercise design, and document production — reusable across Claude Code, Gemini, and Codex.
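As an illustration of the pattern (the fields and names below are representative, not a published spec), each skill file pairs a trigger, inputs, steps, and guardrails so any assistant can execute the workflow the same way:

```markdown
# skill: opord-drafting
## When to use
Drafting an OPORD from a mission statement and commander's intent.
## Inputs
- Mission statement, task organization, operational graphics
## Steps
1. Extract task and purpose from the mission statement.
2. Draft paragraphs against the doctrinal template.
3. Flag every assumption for human review.
## Guardrails
- Never invent unit designations or grid coordinates.
```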
A domain expert agent that can plan, advise, execute, and educate across warfighting functions. Built around per-function LLM wikis and an experience capture layer that pairs tacit operational knowledge with official Army doctrine — a complete operational brain, not just a document retriever.