John Kwan

AI Security / Agent Security / AI Assurance

I build and audit AI systems with reproducible evidence. This site is designed for inspection, not just browsing: flagship projects, stable evidence packs, and AI-assisted evaluation grounded in published artifacts only.

Run Fit CheckView Projects
AI SecurityAgent SecurityAI AssuranceEval HarnessesEvidence-backed

How to evaluate fast

Inspect the proof

Three flagship projects each map claims to demos, case studies, repos, and evidence packs.

Ask grounded questions

Ask AI for comparisons, strengths, and limitations. It should stay inside the published corpus.

Run a structured fit read

Use Fit Check for role alignment, honest gaps, positioning advice, and a 30/60/90 plan.

Flagship Projects

Proof surfaces first

The homepage leads with current evidence-backed work so a reviewer can evaluate fit before reading long narrative copy.

AI AssuranceGovernanceEvidenceRules Engine

AI Assurance Control Plane

An assurance layer over AI telemetry, evaluation, and review workflows, focused on evidence management rather than generic observability.

What it is

An assurance layer for telemetry, findings, review, retention, and audit workflows.

Why it matters

Makes governance and oversight inspectable through productized evidence and decision flow design.

What it proves

Capability in AI assurance, review systems, governance workflows, and policy-to-controls thinking.

Agent SecuritySimulationSupabaseEducation

AI Security Navigator

Interactive periodic-table style navigator for AI security learning, design recommendations, and safe simulations with a constrained execution model.

What it is

An interactive security knowledge model with safe simulation-oriented exploration.

Why it matters

Turns security concepts and relationships into an inspectable system rather than a static document.

What it proves

Capability in agent security framing, simulation design, and technically legible product UX.

LLM AppSecRegressionEvidenceHardening

LLM AppSec Harness

Deterministic regression harness for comparing baseline and hardened LLM application behavior with stable reports and explicit limits.

What it is

A deterministic offensive/defensive LLM security evaluation harness.

Why it matters

Shows how model-integrated systems can be tested with reproducible comparisons instead of vague safety claims.

What it proves

Depth in AI application security testing, eval design, and evidence-backed reporting.

Selected Experience

Context and evidence, not a resume dump

Kaiser Permanente2017 - Present

Governance and Policy Leadership

  • Led enterprise health IT policy positions across security, identity, privacy, interoperability, compliance, and risk domains.
  • Translated legislation, regulation, and standards into policy recommendations, governance requirements, technical guidance, and implementation strategy.
  • Advised senior executives and worked across Legal, Compliance, Risk Management, clinical, research, business, and IT organizations.

Situation

Kaiser needed policy and strategy work that could absorb changing security, privacy, interoperability, and regulatory requirements without losing operational clarity.

Actions / approach

Synthesized state and national policy changes into enterprise positions, implementation guidance, governance direction, and executive-facing recommendations.

Technical work

Evaluated emerging technologies and standards for impact on information protection, data exchange, operational risk, and adoption planning.

Cross-functional work

Represented the organization across Government Relations, Legal, Compliance, Risk Management, business, clinical, and IT groups, including standards bodies and policy forums.

Lessons / outcomes

This role sharpened policy-to-control translation, governance design, and executive communication under regulatory and operational constraints.

Run Fit CheckAsk AI
Kaiser Permanente2013 - 2017

Audit, Risk, and Control Work

  • Reviewed pharmacy systems for regulatory compliance and partnered with stakeholders on process and control improvements.
  • Performed compliance assessments, investigations, audits, root-cause analysis, and mitigation planning for operational and system issues.
  • Provided risk assessments and recommendations for initiatives including controlled-substance e-prescribing, record retention, downtime procedures, and return-to-stock controls.

Situation

Operational pharmacy systems carried regulatory, workflow, and control risk that needed disciplined assessment and remediation across complex environments.

Actions / approach

Used assessments, investigations, audits, and mitigation planning to identify control gaps and drive process and system improvements with stakeholders.

Technical work

Developed workflow guidance, system narratives, and risk analyses covering compliance exposure, control design, and implementation options.

Cross-functional work

Worked with operational, compliance, and technical teams to translate requirements into feasible remediations and durable process changes.

Lessons / outcomes

This role built the audit, remediation, and evidence mindset that now carries directly into AI assurance and control-evaluation work.

Kaiser Permanente2001 - 2013

Systems and Implementation Foundation

  • Assessed system and process risks using flowcharts, narratives, audit conclusions, and corrective-action recommendations.
  • Maintained data integrity and led cross-regional enhancement efforts for regulated health IT systems.
  • Managed system configuration, administration, maintenance, access and security coordination, and technical delivery across enterprise programs.

Situation

Earlier Kaiser roles required strong execution across systems, controls, upgrades, access coordination, data integrity, and remediation in a regulated enterprise environment.

Actions / approach

Combined business analysis, compliance review, systems work, and technical delivery to keep operational platforms aligned with regulatory and business requirements.

Technical work

Led configuration and build work, resolved interface and data-integrity issues, and supported system lifecycle efforts tied to compliance and operational reliability.

Cross-functional work

Worked across regions and stakeholder groups to align upgrades, remediation plans, access/security coordination, and executive documentation.

Lessons / outcomes

The long arc of these roles created the systems, risk, and implementation foundation behind today's AI security and governance positioning.

AI Security NavigatorAsk AI

Strong

  • AI Security
  • Agent Security
  • AI Assurance
  • Governance and evidence workflows
  • Evals, testing, and controls

Transferable / Moderate

  • Platform and infrastructure thinking
  • Developer tooling
  • Policy-to-controls translation
  • Review and operations workflows

Gaps / Not my center of gravity

  • Consumer mobile leadership
  • Growth experimentation ownership
  • Role families far from AI security and assurance

AI Explainer

Use AI to inspect the work, then use Fit Check to inspect the match

Ask AI is the fast conversational path for project questions and evidence lookups. Fit Check is the structured workflow for role analysis, honest gaps, and suggested positioning.

Run Fit Check

Suggested questions

Preview

Which project best shows AI security depth?

The strongest security-depth signal is the harness because it demonstrates adversarial evaluation, deterministic comparisons, known-gap disclosure, and stable artifact snapshots.

Cited to published evidence only

Writing

Short notes that explain the evidence standard

Why this portfolio is evidence-first

A short note on why the site emphasizes stable artifacts, limitations, and cited claims over broad branding copy.

Contact

Recruiter-ready path to evaluate and follow up

If you want a fast read, start with Ask AI. If you want role alignment, run Fit Check. If the role is in the target wedge, reach out directly.