Peer AI + Biogen: Reimagining CSR Authoring with Purpose-Built AI Agents

Overview

How can biotech and pharma teams accelerate regulatory documentation while maintaining trusted, human-level quality? In this joint presentation from CNS Summit 2025, Peer AI CEO Anita Modi and Biogen’s Xinjiang Wu share real-world insights from a year-long collaboration applying agentic AI to clinical study report (CSR) authoring.

You’ll learn how data-driven AI agents reduce drafting timelines, improve document quality and consistency, and meaningfully reduce downstream review burden — all while keeping medical writers firmly in control. The talk also explores adoption strategies, quality frameworks, cross-functional change management, and the growing leadership role of medical writers in an AI-enabled future.

Speakers

  • Xinjiang Wu - Global Medical Writing Lead, Biogen

  • Anita Modi - CEO and Co-Founder, Peer AI

Key Takeaways

  • Agentic AI significantly accelerates clinical and regulatory document creation while keeping quality high

  • Human verification remains essential; AI augments and accelerates writers

  • Higher-quality Draft 1 reduces downstream review cycles and cost

  • Medical writers are emerging as AI navigators and cross-functional leaders

  • Biogen has deployed Peer AI across Phase 1 CSRs and is expanding to more complex documents

Timestamps

00:00 – Introductions
00:16 – Why documentation is the bottleneck of drug development
01:00 – What “agentic AI” means in regulatory authoring
02:15 – How Peer AI works for CSR creation (ingestion → mapping → drafting → QC)
05:33 – Biogen’s journey: POC → evaluation → deployment
07:10 – Compounding value: faster drafts, higher quality
08:20 – Measuring quality in AI-generated documents
09:38 – Adoption lessons from Biogen: role clarity, augmentation, co-creation
11:45 – Expanding to Phase 3 CSRs, IBs, and Module 2
13:01 – Audience Q&A begins
13:16 – Managing hallucinations in a regulated industry
14:46 – Submission confidence and template consistency
15:29 – Addressing automation bias
16:29 – Beyond time to draft: metrics and edit distance

Transcript

[00:00:00] Anita Modi: Hi everybody. My name is Anita Modi. I'm the CEO and founder here at Peer AI. Do you want to introduce yourself?

[00:00:06] Xinjiang Wu: This is Xinjiang Wu. I'm from Biogen. I'm the sponsor for the initiative of generative AI in regulatory writing. So glad to be here.

[00:00:16] Anita Modi: Thank you. And we're excited to present today on how we're re-imagining CSR authoring with purpose-built AI agents.

To orient everybody a little bit about this problem space: all of us working in this industry know that documentation is the lifeblood of drug development. If you think about a drug's journey from CMC through preclinical, clinical, and ultimately through commercial, this entire journey is really gated by documentation.

Documentation is the basis on which regulatory bodies ultimately make decisions about the progression of a drug. On average, there are 100,000 to 200,000 pages of documentation across 1,500 unique documents. Over a third of submissions have quality problems, and over 75% face rejection.

On average, this results in 426 days of delay. Every day that a treatment is not reaching a patient matters. Peer AI is addressing this problem space. We are a team of AI engineers and medical writers, and we've built an agentic AI platform for biotech and pharma companies to author different types of documents — from IND modules and startup documents like protocols and ICFs, to clinical study reports, plain language summaries, safety narratives, IBs, and more.

What does agentic AI mean? Agents are autonomous capabilities. Think of a team of agents working hand-in-hand with medical writers — agents that ingest data and contextualize it, agents that map data to documents, agents that author prose, graphs, figures, tables, and agents that perform QC.

We have a platform of these agents that work with medical writers to accelerate authoring. Our medical writing team designed the interface based on their experience and best practices for leveraging AI responsibly in regulated documentation.

Our medical writers also help customers with training, onboarding, and a peer-to-peer deployment model. Core to our belief is that to unlock true time savings, you need the right balance between AI automation and human verification.

AI is powerful, but regulatory documentation is nuanced. It must be kept on a short leash. It’s not just about having a human in the loop, but knowing where the human is in the loop and how they are incorporated. Humans are essential for verification, validation, and thoughtful application.

If you get that loop right, you can scale regulatory workflows and get to a better end product faster.

Let’s walk through a CSR example.

In our workflow today, humans load all required source context: the protocol, SAP, TFLs, templates, and customer style guides. Agents then ingest the data, structure it appropriately, and document-specific agents step in.

For example, the CSR agent knows what makes a strong CSR. It understands which data from the protocol, SAP, and TFLs should be mapped where. After mapping, a human reviews and confirms or modifies the structure.

Next, authoring agents generate in-text tables, figures, graphs, prose, abbreviations, and citations. The medical writer interacts with this through chat or direct editing — no prompt engineering is required. QC agents review, and humans can trace all content back to source data.
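The staged workflow described above (ingest sources, map them to sections, author with human review, then QC with traceability back to source) can be sketched in miniature. This is purely an illustration: every function, class, and mapping below is hypothetical and not Peer AI's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """Hypothetical container for a document in progress."""
    sections: dict = field(default_factory=dict)  # section name -> generated text
    audit: list = field(default_factory=list)     # trace: (section, source docs used)

def ingest(sources: dict) -> dict:
    """Structure raw source documents (protocol, SAP, TFLs) for downstream agents."""
    return {name: text.strip() for name, text in sources.items()}

def map_sources(structured: dict) -> dict:
    """Propose a source-to-section mapping; in the real workflow a human
    confirms or modifies this structure before drafting begins."""
    return {"Study Design": ["protocol", "SAP"], "Efficacy Results": ["TFLs"]}

def author(mapping: dict, structured: dict) -> Draft:
    """Draft each section only from its mapped sources, recording an audit trail."""
    draft = Draft()
    for section, srcs in mapping.items():
        draft.sections[section] = " / ".join(structured[s] for s in srcs)
        draft.audit.append((section, srcs))
    return draft

def qc(draft: Draft) -> list:
    """Flag sections whose mapped sources produced no content (a hallucination risk)."""
    return [s for s, text in draft.sections.items() if not text]

sources = {"protocol": "Phase 1 design...", "SAP": "Analysis plan...",
           "TFLs": "Table 14.1..."}
structured = ingest(sources)
draft = author(map_sources(structured), structured)
print(qc(draft))  # an empty list means every section traces to source data
```

The key design point the sketch captures is that drafting only ever draws from the confirmed mapping, so every sentence in the draft is traceable to a named source document.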

We are continuing to extend this workflow with AI-powered reviewing and commenting tools to complement human review cycles.

[00:05:33] Xinjiang Wu: Thank you, Anita.

Here is our journey implementing Peer AI for CSR authoring. We began with a proof of concept using a straightforward Phase 1 study and a more complex Phase 3 pivotal study. We invited medical writers and editors to evaluate the outputs using predefined metrics. Based on the pilot, we saw significantly reduced turnaround time and satisfactory document quality.

Next, we conducted a side-by-side evaluation. We used generative AI to author CSRs for ongoing Phase 1 studies, and medical writers compared the Peer AI output to human-authored CSRs.

Following evaluation, we moved into implementation. We deployed Peer AI’s SaaS system within Biogen and began using it for additional Phase 1 studies. Our goal is to empower medical writers to draft CSRs quickly and with high quality.

Over to you.

[00:07:10] Anita Modi: It’s exciting to be at a stage where we can show value document over document. In each CSR, we measure the time to a high-quality first draft and the quality of the output.

We’ve seen the time to first draft continue to decrease while quality increases — true compounding value. This comes from the agentic architecture. Our CSR agent and supporting agents learn from each document and input, including feedback from Xinjiang’s team, and apply that learning to the next iteration. Between the first and third CSR, we cut time by an additional 50%.

Quality for AI-generated content is critical. When I started Peer, I interviewed dozens of medical writers to define what “good” means. From that, we developed a smart scoring rubric: data accuracy, consistency, completeness, readability, and adherence to templates and style guides.
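A rubric like the one described, covering data accuracy, consistency, completeness, readability, and template adherence, could be combined into a single score as follows. The weights and the 0-5 rating scale here are illustrative assumptions, not Peer AI's actual scoring model.

```python
# Hypothetical weights over the quality dimensions named in the talk.
RUBRIC_WEIGHTS = {
    "data_accuracy": 0.35,
    "consistency": 0.20,
    "completeness": 0.20,
    "readability": 0.10,
    "template_adherence": 0.15,
}

def quality_score(ratings: dict) -> float:
    """Combine per-dimension ratings (0-5) into one weighted score (0-5)."""
    missing = set(RUBRIC_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(RUBRIC_WEIGHTS[d] * ratings[d] for d in RUBRIC_WEIGHTS)

draft1_ratings = {"data_accuracy": 5, "consistency": 4, "completeness": 4,
                  "readability": 4, "template_adherence": 5}
print(round(quality_score(draft1_ratings), 2))  # 4.5
```

Weighting data accuracy most heavily reflects the emphasis in the talk: in regulated documents, factual fidelity to source data matters more than polish.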

Quality comes from continuous improvement. Each iteration feeds back into the next. One thing Xinjiang said early on was that in this space, quality is time. If you reach high quality faster, you save time downstream in editing and reviewing.

Early results show reduced cycle time overall, which leads to cost savings.

[00:09:38] Xinjiang Wu: Thank you, Anita. After almost a year of collaboration, we’ve learned several things.

Role clarity matters. When medical writers understand the evaluation process and their responsibilities, adoption accelerates. Writers respond better when AI is framed as augmentation — enhancing expert judgment — rather than automation.

Co-creating workflows increases engagement. Through this process, medical writers have become AI strategic leaders, bridging clinical expertise and technology.

Looking ahead, we plan to expand into more document types: more complex Phase 3 CSRs, IBs, and Module 2 documents. We also aim to integrate AI across SOPs and job aids, involving more cross-functional teams such as clinical development, PV, and safety. Cross-functional alignment is essential for successful AI adoption.

Eventually, we expect AI-powered document generation to accelerate major submissions like NDAs and BLAs. Building a future-ready workforce is important — medical writers will increasingly serve as AI navigators.

That concludes our presentation.

[00:12:11] Anita Modi: Before we end, I want to highlight the importance of partnership. Trust and transparency are essential. We share openly what AI is and isn’t good at, and Biogen shares internal processes so we can support effectively. Aligning incentives from the beginning has been crucial.

We’re grateful for Biogen’s partnership as we reimagine CSR and additional authoring.

[00:12:53] Moderator: All right. Any questions?

[00:13:01] Audience member: I have a question about hallucinations. In a regulated industry, how are you addressing hallucination risks?

[00:13:16] Anita Modi: Early in Peer’s journey, we saw more hallucinations and quickly identified root causes — for example, missing source data. So we built mapping systems that flag missing data. Since then, hallucinations have dropped drastically.

We also keep humans firmly in the loop with risk-adjusted QC and tools to trace content back to source. Humans make errors too, so balanced QC is essential. Our goal is to accelerate the first draft and provide tools to get to a high-quality final.

[00:14:13] Xinjiang Wu: Based on our experience, hallucination is rare. For CSRs, all source documents, such as the protocol, SAP, and TFLs, are available, and prompts are tied section by section to source data, so there is little opportunity for the AI to invent information.

[00:14:46] Audience member: How close are you to being comfortable submitting AI-generated content to regulatory bodies?

[00:15:14] Xinjiang Wu: We have already used Peer AI in live business use for our Phase 1 study, so in that sense the output is submission-ready.

[00:15:29] Audience member: How do you measure or mitigate automation bias as users gain confidence in AI tools?

[00:16:02] Anita Modi: Great question. It’s early, and there is still a lot of human review. Our role is to make risk-adjusted QC easier and build tools that help spot issues. As AI becomes more common, this will be an industry-wide challenge to manage thoughtfully.

[00:16:29] Audience member: The primary metric you mentioned was days to Draft 1. Are you considering other metrics like comments, quality, etc.?

[00:17:10] Xinjiang Wu: Yes. We have a set of predefined metrics, including turnaround time, document quality, format and style compliance with Biogen’s style guide, ICH compliance, and scientific integrity.

[00:17:44] Anita Modi: We also measure edit distance — how much editing is required after Draft 1. It’s more meaningful than counting comments, since it reflects the degree and nature of edits. Reducing edit distance is a guiding metric for us.

[00:18:09] Moderator: Thank you so much. Thank you to the Peer AI and Biogen team.

Ready to accelerate document creation?

See why biotechs and pharmas trust Peer AI to deliver high-quality, inspection-ready documents.
