MedSimAI Trust & Policy Center

Policies that protect our learners, partners, and research.

Explore how MedSimAI handles privacy, security, accessibility, and AI governance today, plus the ongoing work that keeps partner and learner data safeguarded.

Last updated: March 2026
IRB-aligned research program
Data Protection & Privacy
Operational

Personal data is encrypted in transit and at rest, and access is limited to IRB-approved research staff operating under institutional agreements.

  • Fernet-encrypted PII stored in PostgreSQL with hashed identifiers for lookups.
  • TLS 1.2+ enforcement through CloudFront and the application load balancer.
  • Only non-sensitive media is exposed via the public CDN path (medsimai/images); user data stays in private stores while automated retention tooling is finalized.
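The pattern above (encrypting PII with Fernet while keeping a deterministic hashed identifier for lookups) can be sketched as follows. This is a minimal illustration, not MedSimAI's implementation: the key sources, the HMAC-based lookup hash, and the function names are assumptions for the example.

```python
import hashlib
import hmac
import os

from cryptography.fernet import Fernet

# Hypothetical key material; a real deployment would load these from a
# secrets manager, never generate them at import time.
ENCRYPTION_KEY = Fernet.generate_key()
LOOKUP_HMAC_KEY = os.urandom(32)

fernet = Fernet(ENCRYPTION_KEY)

def encrypt_pii(value: str) -> bytes:
    """Encrypt a PII field before writing it to the database."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_pii(token: bytes) -> str:
    """Decrypt a stored PII field for an authorized reader."""
    return fernet.decrypt(token).decode("utf-8")

def lookup_hash(value: str) -> str:
    """Keyed, deterministic hash so a row can be found without decrypting it."""
    digest = hmac.new(LOOKUP_HMAC_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()
```

Because Fernet ciphertexts are randomized, equality queries cannot run against the encrypted column directly; the keyed hash column is what makes exact-match lookups possible without exposing plaintext.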
Data Lifecycle & Stewardship
Operational

Partners govern how long institutional and learner records stay in MedSimAI, and verified purge requests cascade through every storage layer.

  • Institution admins can trigger permanent deletion by removing a cohort or submitting a validated ticket; engineering confirms the request and purges PostgreSQL rows, S3 objects, and backups inside a 30-day SLA (typical turnaround under a week).
  • Learner transcripts, feedback, and generated artifacts persist only while the owning cohort is active; once a cohort is archived or deleted, the records leave hot storage within seven days and age out of disaster-recovery snapshots within 30 days. CloudWatch operational logs already auto-expire after 30 days.
  • MedSimAI uses OpenAI enterprise models strictly for inference under no-training guarantees, so partner prompts and outputs are never repurposed for other customers.
  • No stakeholder data is sold or broadly shared. The only subprocessors with scoped access are AWS (hosting/storage) and OpenAI (LLM inference), both under signed DPAs and IRB oversight; contractors operate under NDAs with least-privilege RBAC.
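The cascading purge described above can be sketched as a pass over every storage layer that holds cohort-tagged records. This is a simplified in-memory model, not production code: real tooling would issue PostgreSQL deletes and S3/backup deletions via their respective clients, and the store layout here is assumed for illustration.

```python
def purge_cohort(cohort_id: str, stores: dict[str, list[dict]]) -> dict[str, int]:
    """Remove every record tied to a cohort from each storage layer.

    `stores` maps a layer name (e.g. "postgres", "s3", "backups") to its
    records; each record carries a "cohort" tag. Returns a per-layer count
    of purged records so the deletion can be confirmed back to the partner.
    """
    report = {}
    for layer, records in stores.items():
        matching = [r for r in records if r.get("cohort") == cohort_id]
        for record in matching:
            records.remove(record)  # stand-in for the layer-specific delete call
        report[layer] = len(matching)
    return report
```

Returning a per-layer count mirrors the confirmation step in the policy: engineering verifies the request, purges each layer, and can report exactly what was removed within the 30-day SLA.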
AI Model Governance
Operational

MedSimAI orchestrates OpenAI-hosted models for inference only. Conversation prompts and scoring rubrics are version-controlled and peer reviewed before release.

  • Model usage is limited to inference under OpenAI's no-training guarantees.
  • Scenario prompts undergo review and change tracking before deployment.
  • Prompt and scoring updates are version-controlled so partner reviews can trace every change.
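One common way to make prompt and rubric changes traceable, as the bullets above describe, is a content-addressed version id: any edit to the prompt or rubric yields a new id that reviews can reference. This sketch and its function name are illustrative assumptions, not MedSimAI's actual tooling.

```python
import hashlib
import json

def prompt_version(prompt_text: str, rubric: dict) -> str:
    """Derive a stable version id from the prompt and scoring rubric.

    Serializing with sorted keys makes the hash deterministic, so the same
    content always maps to the same id and any change produces a new one.
    """
    payload = json.dumps(
        {"prompt": prompt_text, "rubric": rubric},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]
```

A partner reviewing a scenario can then pin the exact prompt/rubric pair they approved by its id and detect any drift before deployment.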
Access Controls & Account Management
Operational

Role-based access separates student, instructor, researcher, and admin workflows with server-enforced session policies and consent tracking.

  • Strict RBAC gates dashboards and API access.
  • 24-hour session lifetime with secure cookies and CSRF protections.
  • Institution-specific SAML SSO with metadata retrieved from InCommon MDQ.
Incident Response & Support
In Progress

Incidents are currently managed directly by the core engineering team while the 2025 incident response program and external partner support are finalized.

  • Alerts in the shared support channels are escalated to an on-call engineer.
  • Documented response playbooks are being drafted with partner feedback.
  • Q4 2025 roadmap includes contracting an incident-response retainer and publishing notification SLAs.
Accessibility & Inclusion
In Progress

The team is actively closing WCAG 2.1 AA gaps with recurring audits and inclusive testing baked into the release checklist.

  • Keyboard-only workflows validated across major pathways.
  • Quarterly accessibility reviews capture remediation tasks with shared tracking for partner institutions.
  • Inclusive language and assistive-technology testing required before releases.
Need more detail?

We can share supporting documentation—IRB approvals, runbooks, and architecture diagrams—under the terms of your institutional agreement.

Email: contact@medsimai.com

Response target: typically within one business day
