Use case · Medical devices · EU MDR

CERs that actually defend the claim.

Clinical Evaluation Reports under MDR/MEDDEV 2.7/1 Rev 4 — built from your literature corpus, equivalence rationale, and post-market surveillance data. Asthra structures the appraisal, drafts the narrative, and cites every inclusion decision.

Literature appraisal
CER · Stage 2 screening
PMID 38291004 · 2024 · Include · A
Long-term outcomes of [device class] in moderate-risk patients: a multicenter cohort
Eur Heart J · 2024;45(8):842–854 · n=1,204 · 5y follow-up
Pivotal evidence for §6.4 effectiveness
PMID 37402188 · 2023 · Include · B
Adverse-event profile in real-world registry data
JACC · 2023;81(15):1402 · registry n=8,420
Supporting evidence for §6.5 safety
PMID 36190455 · 2022 · Exclude
Comparative analysis of competitor device generations
Cardiovasc Interv · n=82 · pediatric only
Out of indication scope · §3.2
100%
Of inclusion/exclusion decisions traced to MEDDEV 2.7/1 Rev 4 criteria.
~3×
Faster cycle from literature lock to draft in pilot device teams.
PMS-ready
Post-market surveillance data integrated alongside the clinical-evidence base.
Inputs

The evidence package, indexed.

CERs draw on a wider source set than CSRs. Asthra accepts and tags every input, then keeps the appraisal traceable.

XLS

Literature search results

Embase, Medline, Cochrane output with PRISMA-style flow. Asthra captures the search strategy and supports duplicate screening.

XLS

Appraisal worksheets

Quality grading per MEDDEV 2.7/1 Rev 4 Appendix A6/A7 — pivotal, supportive, or excluded. (See the sketch after this list.)

DOC

Equivalence rationale

Technical, biological, and clinical comparison vs. equivalent device(s). Asthra structures §3 against the regulation's three pillars.

DOC

Risk Management File

ISO 14971 hazards, risk-control measures, and benefit-risk reasoning, all of which flow through to §7 of the CER.

XLS

PMS / PMCF data

Post-market surveillance reports, vigilance data, registry output. Integrated into §10 conclusions and updates.

DOC

Prior CER

For continuity of the clinical-evaluation history and traceable changes between cycles.
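
Once indexed, a graded appraisal row can be pictured as a small structured record. The sketch below is illustrative only; the field names and Grade values are assumptions, not Asthra's actual schema, but it shows how a graded study stays joined to its PMID and its target CER section.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Grade(Enum):
    PIVOTAL = "pivotal"        # weight-bearing evidence for a CER claim
    SUPPORTIVE = "supportive"  # corroborating evidence
    EXCLUDED = "excluded"      # out of scope, with the reason recorded

@dataclass
class AppraisalRecord:
    """One appraisal worksheet row, keyed by PMID (hypothetical schema)."""
    pmid: str                   # join key across search export, grid, and draft
    year: int
    grade: Grade                # MEDDEV 2.7/1 Rev 4 A6/A7-style grading
    rationale: str              # free text the CER narrative can cite
    cer_section: Optional[str]  # e.g. "§6.4"; None for excluded studies

row = AppraisalRecord(
    pmid="38291004",
    year=2024,
    grade=Grade.PIVOTAL,
    rationale="Pivotal evidence for §6.4 effectiveness",
    cer_section="§6.4",
)
```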

Workflow

From search lock to MDR-aligned draft.

01

Lock the search and load the corpus

Drop your literature search export, appraisal grid, equivalence rationale, RMF, and PMS data. Asthra reconciles PMIDs, dedupes, and flags any items not yet appraised.
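
A rough sketch of the reconciliation this step describes, assuming search rows keyed by PMID. The function and field names here are hypothetical, not Asthra's API:

```python
def reconcile(search_rows, appraised_pmids):
    """Dedupe literature-search rows by PMID and flag unappraised items.

    Hypothetical sketch: assumes each row is a dict with a 'pmid' key.
    """
    seen = set()
    unique_rows = []
    for row in search_rows:
        pmid = row["pmid"]
        if pmid in seen:
            continue  # same record exported by Embase, Medline, and Cochrane
        seen.add(pmid)
        unique_rows.append(row)
    # Anything in the deduped corpus but not in the appraisal grid gets flagged.
    not_yet_appraised = sorted(seen - appraised_pmids)
    return unique_rows, not_yet_appraised

rows, flagged = reconcile(
    [{"pmid": "38291004"}, {"pmid": "38291004"}, {"pmid": "37402188"}],
    appraised_pmids={"38291004"},
)
print(flagged)  # ['37402188']
```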

02

Approve the appraisal map

Asthra produces a section-by-section evidence map: which study supports which CER claim. Reviewers swap, re-grade, or exclude — every change recorded.
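
One plausible shape for that evidence map, with reviewer actions appended to a change log instead of overwriting the grid. Everything below is an illustrative assumption, not Asthra's data model:

```python
from datetime import datetime, timezone

# Claims point at the PMIDs that support them (illustrative content).
evidence_map = {
    "§6.4 effectiveness": ["38291004"],
    "§6.5 safety": ["37402188"],
}

change_log = []

def record_change(action, pmid, reviewer, reason, **details):
    """Append a reviewer decision (swap, re-grade, exclude) to the log."""
    change_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "pmid": pmid,
        "reviewer": reviewer,
        "reason": reason,
        **details,
    })

record_change("exclude", "36190455", "j.doe", "Out of indication scope (§3.2)")
```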

03

Draft the CER body

§3 equivalence, §6 clinical evidence, §7 risk-benefit, §10 conclusions — drafted to MEDDEV 2.7/1 Rev 4 structure. Each claim cites a graded study with PMID and excerpt.

04

Clinical evaluator review

The qualified evaluator reviews in Word — challenges grading, refines benefit-risk language, integrates PMS findings — with Asthra surfacing the exact evidence each time.

05

Notified-Body submission

Final CER ships with the audit ledger embedded — including the appraisal trail. Notified-body reviewers can verify any claim back to its source in one click.
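
To make "verify any claim back to its source" concrete, a ledger entry might pair each claim with its PMID, the cited excerpt, and a digest of that excerpt. This is a hypothetical illustration of the idea, not the embedded ledger's real format:

```python
import hashlib

def ledger_entry(claim_id, pmid, excerpt):
    """Bind a CER claim to its source: PMID, quoted excerpt, and a digest
    that lets a reviewer confirm the excerpt is unchanged (hypothetical)."""
    return {
        "claim_id": claim_id,
        "pmid": pmid,
        "excerpt": excerpt,
        "excerpt_sha256": hashlib.sha256(excerpt.encode("utf-8")).hexdigest(),
    }

entry = ledger_entry(
    claim_id="CER-6.4-claim-01",   # illustrative identifier
    pmid="38291004",
    excerpt="placeholder excerpt from the cited study",
)
```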

Manual vs. Asthra

What changes for the clinical-evaluation team.

Manual CER drafting

  • 10–16 weeks per CER cycle, longer for high-risk classes
  • Appraisal grid maintained separately from the narrative
  • Equivalence rationale rewritten cycle after cycle
  • PMS data retrofitted into the document at the end
  • Notified-body queries answered via email and tracked separately

CER drafting with Asthra

  • Draft cycle compressed to weeks, not months
  • Appraisal grid and narrative are one connected artifact
  • Equivalence reasoning regenerates against new evidence
  • PMS data integrated from the first draft
  • Notified-body queries answer themselves — every claim has its citation

Pilot Asthra on a real CER.

30-day pilot. One device, one indication. We benchmark Asthra's draft against your existing process — search to NB submission.