Challenge
A mid-size public biotech company needed to determine whether AI could reduce the time and burden of generating patient narratives. For an open-label Phase 1/2 oncology study, 56 patient narratives had already been authored in-house. The company asked Peer AI to author Draft 1 of the same narratives to directly compare efficiency, quality, and overall workload against its manual process.
Solution
The client used Peer AI to accelerate patient narrative authoring at scale.
Generated first drafts of all 56 narratives entirely with Peer AI (100% AI-authored)
Applied the same source data (TFLs, patient profiles, MedWatch forms) that was used in manual authoring
Tracked medical writing hours to quantify efficiency gains
Enabled independent reviewers to score Peer Draft 1 vs. customer final narratives on accuracy, completeness, and readability on a 1-5 scale
Results & Impact
Efficiency:
| 56 Narratives | Traditional | Peer AI Platform |
|---|---|---|
| Total medical writer hours to 1st draft | 336 hours | 16 hours (+4 hours by LLM) |
| Per-narrative time | 6 hours | 21 minutes (0.36 hours) |
The Peer AI platform was roughly 17x more efficient than manual authoring in completing the 56 narratives (336 hours vs. 20 hours total).
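As a quick arithmetic check, the sketch below recomputes the headline figures from the table; it assumes the Peer AI total is the 16 medical writer hours plus the 4 LLM hours (20 hours combined), and the variable names are purely illustrative.

```python
# Back-of-the-envelope check of the efficiency figures in the table above.
# Assumption: the Peer AI total combines 16 medical writer hours + 4 LLM hours.

NARRATIVES = 56
MANUAL_HOURS = 336          # traditional total to 1st draft
PEER_WRITER_HOURS = 16      # medical writer hours with Peer AI
PEER_LLM_HOURS = 4          # additional LLM processing time
peer_total_hours = PEER_WRITER_HOURS + PEER_LLM_HOURS   # 20 hours

speedup = MANUAL_HOURS / peer_total_hours                # 336 / 20 = 16.8 -> ~17x
manual_per_narrative_h = MANUAL_HOURS / NARRATIVES       # 6.0 hours
peer_per_narrative_min = peer_total_hours * 60 / NARRATIVES  # ~21.4 minutes

print(f"Speed-up: {speedup:.1f}x")                                 # 16.8x
print(f"Manual per narrative: {manual_per_narrative_h:.1f} h")     # 6.0 h
print(f"Peer AI per narrative: {peer_per_narrative_min:.0f} min")  # 21 min
```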
Quality:
Peer Draft 1 exceeded the narratives produced with the traditional methodology on reviewer scores:
Accuracy: Comparable (+1% vs. manual)
Completeness: Comparable (+1% vs. manual)
Readability: Significantly higher (+7% vs. manual)
Customer assessment: Higher first-draft quality reduced the review cycles needed to reach the final draft
Overall workload:
Reduced customer workload: Overall authoring effort required of the customer decreased significantly.
Consistent data sources: The same files used by the human medical writers, ensuring no workflow disruption.
Seamless integration: Outputs delivered in DOCX for direct use in QC and review workflows.