AI / SaMD (Software as a Medical Device) — Verus FDA

AI / SaMD

Software lifecycle, cybersecurity, and FDA AI expectations, packaged for review.

Software and AI submissions don’t fail because “the code is bad.” They fail because documentation is incomplete, inconsistent, or not aligned to FDA expectations. We build submission-ready software artifacts (including IEC 62304 alignment), define a controlled software lifecycle, implement an ML change protocol, and package a defensible cybersecurity story, so your eSTAR submission and reviewer experience are clean and coherent.

IEC 62304 · Software Lifecycle · ML Change Protocol · Cybersecurity · FDA AI Expectations · eSTAR Packaging

Traceability

Requirements → risks → architecture → verification evidence.

Change control

Release governance and controlled updates for regulated software.

Cybersecurity

Threats, controls, SBOM planning, and reviewer-readable evidence.

Reviewer clarity

Clean eSTAR mapping and consistency across exhibits and labeling.

What you get

Submission-ready artifacts that reduce review friction.

These are the core topics we build into your regulatory strategy and submission artifacts. Each item is designed to reduce review friction and prevent the common “software documentation gap” that triggers additional information requests.


IEC 62304 alignment

Lifecycle documentation scaled to your risk class

  • Software safety classification + rationale
  • Lifecycle deliverables map (what to build, when)
  • Traceability structure (requirements → design → code → tests)
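As an illustrative sketch of that traceability structure (all requirement, design, and test IDs below are hypothetical), a minimal gap check might look like:

```python
# Minimal traceability check: every requirement must link to a design
# element and at least one test. All identifiers are hypothetical.
requirements = {"REQ-001": "Alarm on low SpO2",
                "REQ-002": "Log all model inferences"}
design = {"REQ-001": "DES-ALARM", "REQ-002": "DES-AUDIT"}
tests = {"REQ-001": ["TST-014"], "REQ-002": []}  # REQ-002 has no test yet

def trace_gaps(reqs, design_map, test_map):
    """Return requirement IDs missing a design link or test coverage."""
    gaps = []
    for req_id in reqs:
        if req_id not in design_map or not test_map.get(req_id):
            gaps.append(req_id)
    return gaps

print(trace_gaps(requirements, design, tests))  # ['REQ-002']
```

In practice this lives in a requirements-management tool rather than ad hoc code, but the invariant being enforced is the same one reviewers probe.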

Software lifecycle

Design controls for software teams

  • Requirements + architecture baselining
  • Verification/validation strategy + evidence packaging
  • Release management + change control for regulated updates

Machine learning change protocol

A controlled plan for model updates

  • Define “what can change” vs “what can’t” without new review
  • Update triggers, monitoring, and rollback criteria
  • Data boundaries + validation approach
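The “what can change vs. what can’t” boundary above can be expressed as a pre-specified envelope. A toy sketch (model fields and thresholds are hypothetical, not regulatory values):

```python
# Illustrative ML change-protocol gate: a retrained model may ship without
# new review only if architecture and intended use are unchanged and
# performance stays inside a pre-specified bound. Thresholds are hypothetical.
def update_allowed(old, new, min_auc=0.90, max_drop=0.02):
    """Decide whether a model update stays inside the pre-approved envelope."""
    same_scope = (old["architecture"] == new["architecture"]
                  and old["intended_use"] == new["intended_use"])
    performance_ok = (new["auc"] >= min_auc
                      and old["auc"] - new["auc"] <= max_drop)
    return same_scope and performance_ok

old_model = {"architecture": "resnet18", "intended_use": "triage", "auc": 0.93}
new_model = {"architecture": "resnet18", "intended_use": "triage", "auc": 0.92}
print(update_allowed(old_model, new_model))  # True
```

The real protocol also pre-specifies the validation dataset, monitoring cadence, and rollback criteria; the point is that the envelope is written down before the update, not after.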

Cybersecurity

Threats, controls, and evidence

  • Threat modeling + security risk management alignment
  • SBOM planning, vulnerability handling, update strategy
  • Evidence packaging for reviewer readability
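SBOM-driven vulnerability handling reduces to a repeatable cross-check. A minimal sketch (component versions and the CVE ID are placeholders; real SBOMs use a standard format such as CycloneDX or SPDX):

```python
# Illustrative SBOM triage: cross-check declared components against a
# vulnerability list so each hit gets a documented response.
# Names, versions, and the CVE ID below are hypothetical placeholders.
sbom = [
    {"name": "openssl", "version": "1.1.1"},
    {"name": "numpy", "version": "1.26.4"},
]
known_vulns = {("openssl", "1.1.1"): ["CVE-XXXX-0001"]}  # placeholder ID

def flag_components(components, vulns):
    """Return (component, CVE list) pairs needing a documented response."""
    return [(c, vulns[(c["name"], c["version"])])
            for c in components if (c["name"], c["version"]) in vulns]

for comp, cves in flag_components(sbom, known_vulns):
    print(comp["name"], cves)
```

The reviewer-facing artifact is not the script but the record it feeds: each flagged component paired with an assessed impact and a remediation or justification.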

FDA AI expectations

Defensibility and transparency

  • Performance framing and dataset considerations
  • Clinical workflow, bias considerations, labeling boundaries
  • Post-market monitoring plan (drift, complaints, CAPA linkage)
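One common way to quantify the drift mentioned above is the population stability index (PSI) between the score distribution at clearance and the post-market distribution. A sketch (the bin fractions and the ~0.2 alert threshold are conventional illustrations, not FDA requirements):

```python
import math

# Illustrative drift check via the population stability index (PSI).
# Bin fractions and the ~0.2 alert threshold are hypothetical conventions.
def psi(expected, observed):
    """PSI = sum((o - e) * ln(o / e)) over matched bins (fractions sum to 1)."""
    return sum((o - e) * math.log(o / e) for e, o in zip(expected, observed))

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at clearance
current  = [0.10, 0.20, 0.30, 0.40]   # post-market score distribution
score = psi(baseline, current)
print(round(score, 3))  # commonly flagged for review when PSI exceeds ~0.2
```

Whatever metric is chosen, the monitoring plan should pre-specify it, link alerts to the complaint-handling and CAPA processes, and define who reviews an excursion.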

eSTAR / submission packaging

Make software review easy

  • Section mapping + exhibit structure and naming
  • Consistency checks across software, risk, labeling, performance
  • Final QA to reduce avoidable Additional Information (AI) requests

Why submissions stall

Most delays are preventable.

Delays typically stem from missing traceability, unclear lifecycle controls, a weak cybersecurity narrative, or performance evidence that doesn’t align with claims. We connect artifacts across risk, requirements, verification, labeling, and post-market plans.

Common failure modes

  • Traceability gaps: requirements and hazards don’t map cleanly to tests and releases.
  • Uncontrolled updates: no defined change protocol, especially for ML models.
  • Cybersecurity omissions: threats/controls/evidence not packaged reviewer-first.
  • Claims outpace evidence: performance doesn’t support intended use + labeling language.

How we prevent it

  • Artifact map: what FDA expects for your software risk profile and submission type.
  • Consistency QA: align software docs, risk file, labeling, and performance package.
  • Cyber narrative: threat model → controls → verification evidence → post-market plan.
  • Change governance: release controls and ML update protocol designed to be defensible.

Programs

Engagements scoped to maturity and risk.

Choose the level of support that matches your timeline, software maturity, and submission pathway.

01

Software readiness sprint (IEC 62304 + cyber)

Typical range: $4,000–$12,500

Fast gap assessment + artifact roadmap before writing or submitting.

  • IEC 62304 deliverables map + traceability structure
  • SDLC + change control recommendations
  • Cyber strategy outline (threats/controls/evidence)
02

AI evidence & ML change protocol package

Typical range: $7,500–$25,000+

Defensible update controls + performance framing for ML-enabled products.

  • ML change protocol (scope, triggers, validation, rollback)
  • Performance evaluation plan aligned to intended use
  • Post-market monitoring plan for drift + safety signals
03

Submission support (510(k) / De Novo / PMA scope)

Typical range: $10,000–$45,000+

Drafting + packaging of software/cyber sections with eSTAR alignment and QA.

  • Software documentation drafting/co-authoring
  • Cyber exhibits packaging + reviewer-readability QA
  • Deficiency response readiness support
04

Pre-Sub for AI / SaMD (when expectations are unclear)

Typical range: $8,000–$25,000+

Use FDA feedback to lock evidence expectations before you over- or under-build.

  • Question strategy + meeting objectives
  • Briefing package drafting + exhibits
  • Meeting prep + minutes support

What we need to start

Intended use/claims, software description + architecture overview, current SDLC practices, release history, any threat modeling work, and any performance validation results (including datasets/metrics if ML is involved).

Start async

Share your product details and we’ll respond with a scoped plan and a prioritized artifact list (what to build now vs later), mapped to your likely submission type and risk profile.

Process

Engineering-friendly workflow. Regulator-friendly outputs.

A practical flow that produces consistent documentation and reviewer-ready packaging.

01 — Align

Claims + risk

Stabilize intended use/claims and define risk boundaries that drive documentation depth.

02 — Build

Lifecycle + traceability

Set IEC 62304-aligned artifacts and traceability from requirements to verification evidence.

03 — Secure

Cyber + updates

Implement cybersecurity documentation and define controlled change protocols (including ML updates).

04 — Package

eSTAR + QA

Assemble exhibits for reviewer readability and run consistency QA across software, risk, labeling, and performance.

Trace: end-to-end · Cyber: defensible · ML: controlled change · eSTAR: review-ready

FAQs

Clear answers for software teams.

What FDA tends to care about, and how to package it cleanly.

Do we need IEC 62304 even if we’re “just software”?

If your product is regulated as a device (including SaMD), you should expect lifecycle documentation and traceability consistent with FDA software expectations. We scale depth to your risk profile and product type.

What is an ML change protocol?

A controlled plan defining how a model can change over time: what can update, how performance is validated, what triggers review, and how drift is monitored, so that post-market updates remain defensible.

Will cybersecurity be required?

If your device connects to networks, exchanges data, uses third-party libraries, or could impact clinical decisions, cybersecurity documentation is typically expected. We build the narrative and evidence package appropriate to your design and risk.

Practical next step: If you’re unsure what to build (and what not to build), start with the readiness sprint to lock the artifact map, traceability approach, and cybersecurity packaging plan before you draft.