Inspect the proof
Three flagship projects each map claims to demos, case studies, repos, and evidence packs.
John Kwan
I build and audit AI systems with reproducible evidence. This site is designed for inspection, not just browsing: flagship projects, stable evidence packs, and AI-assisted evaluation grounded in published artifacts only.
How to evaluate fast
Three flagship projects each map claims to demos, case studies, repos, and evidence packs.
Ask AI for comparisons, strengths, and limitations. It should stay inside the published corpus.
Use Fit Check for role alignment, honest gaps, positioning advice, and a 30/60/90 plan.
Flagship Projects
The homepage leads with current evidence-backed work so a reviewer can evaluate fit before reading long narrative copy.
An assurance layer over AI telemetry, evaluation, and review workflows, focused on evidence management rather than generic observability.
What it is
An assurance layer for telemetry, findings, review, retention, and audit workflows.
Why it matters
Makes governance and oversight inspectable through productized evidence and decision flow design.
What it proves
Capability in AI assurance, review systems, governance workflows, and policy-to-controls thinking.
Interactive periodic-table style navigator for AI security learning, design recommendations, and safe simulations with a constrained execution model.
What it is
An interactive security knowledge model with safe simulation-oriented exploration.
Why it matters
Turns security concepts and relationships into an inspectable system rather than a static document.
What it proves
Capability in agent security framing, simulation design, and technically legible product UX.
Deterministic regression harness for comparing baseline and hardened LLM application behavior with stable reports and explicit limits.
What it is
A deterministic offensive/defensive LLM security evaluation harness.
Why it matters
Shows how model-integrated systems can be tested with reproducible comparisons instead of vague safety claims.
What it proves
Depth in AI application security testing, eval design, and evidence-backed reporting.
Selected Experience
Situation
Kaiser needed policy and strategy work that could absorb changing security, privacy, interoperability, and regulatory requirements without losing operational clarity.
Actions / approach
Synthesized state and national policy changes into enterprise positions, implementation guidance, governance direction, and executive-facing recommendations.
Technical work
Evaluated emerging technologies and standards for impact on information protection, data exchange, operational risk, and adoption planning.
Cross-functional work
Represented the organization across Government Relations, Legal, Compliance, Risk Management, business, clinical, and IT groups, including standards bodies and policy forums.
Lessons / outcomes
This role sharpened policy-to-control translation, governance design, and executive communication under regulatory and operational constraints.
Situation
Operational pharmacy systems carried regulatory, workflow, and control risk that needed disciplined assessment and remediation across complex environments.
Actions / approach
Used assessments, investigations, audits, and mitigation planning to identify control gaps and drive process and system improvements with stakeholders.
Technical work
Developed workflow guidance, system narratives, and risk analyses covering compliance exposure, control design, and implementation options.
Cross-functional work
Worked with operational, compliance, and technical teams to translate requirements into feasible remediations and durable process changes.
Lessons / outcomes
This role built the audit, remediation, and evidence mindset that now carries directly into AI assurance and control-evaluation work.
Situation
Earlier Kaiser roles required strong execution across systems, controls, upgrades, access coordination, data integrity, and remediation in a regulated enterprise environment.
Actions / approach
Combined business analysis, compliance review, systems work, and technical delivery to keep operational platforms aligned with regulatory and business requirements.
Technical work
Led configuration and build work, resolved interface and data-integrity issues, and supported system lifecycle efforts tied to compliance and operational reliability.
Cross-functional work
Worked across regions and stakeholder groups to align upgrades, remediation plans, access/security coordination, and executive documentation.
Lessons / outcomes
The long arc of these roles created the systems, risk, and implementation foundation behind today's AI security and governance positioning.
Strong
Transferable / Moderate
Gaps / Not my center of gravity
AI Explainer
Ask AI is the fast conversational path for project questions and evidence lookups. Fit Check is the structured workflow for role analysis, honest gaps, and suggested positioning.
Suggested questions
Preview
Which project best shows AI security depth?
The strongest security-depth signal is the harness because it demonstrates adversarial evaluation, deterministic comparisons, known-gap disclosure, and stable artifact snapshots.
Cited to published evidence only
Writing
A short note on why the site emphasizes stable artifacts, limitations, and cited claims over broad branding copy.
Contact
If you want a fast read, start with Ask AI. If you want role alignment, run Fit Check. If the role is in the target wedge, reach out directly.