Pioneering Generative AI in Pharma:
Launching Lilly’s First Patient-Facing LLM Chatbot
Background
Healthcare expectations have fundamentally shifted. Patients accustomed to ChatGPT's instant, personalized responses now expect the same from their healthcare interactions. Traditional static content delivery is no longer sufficient, especially in high-demand areas like obesity treatment, where patients and providers need rapid, conversational access to trusted medical information.
The strategic question: How might we deliver on-demand, conversational access to complex product information using generative AI — while ensuring regulatory compliance and safeguarding patient safety — in a way that positions Lilly as an innovation leader?
My Role
As Strategic Product Lead, I drove the end-to-end effort to bring Lilly's first generative AI chatbot from concept to launch — with no industry precedent to follow.
Key Responsibilities:
Problem Definition: Balanced patient experience goals with stringent regulatory and medical risk considerations
Cross-functional Leadership: Partnered with engineering, medical, legal, and compliance teams to shape an entirely new approach to conversational AI in pharma
Architecture Strategy: Influenced architectural decisions supporting a novel hybrid model for scalable, safe conversational experiences
Process Innovation: Designed continuous refinement and risk monitoring processes with embedded governance and feedback loops
Executive Alignment: Led stakeholder alignment and championed the product vision across senior leadership
💡 Innovation & Process Design
With no pharma precedent for generative AI deployment, we built foundational processes from the ground up.
Multi-tiered Continuous Refinement
Implemented monitoring and feedback loops featuring:
Human-in-the-loop reviews for quality assurance
Automated safeguards for early deviation detection (see the sketch after this list)
Adaptive learning from evolving usage patterns
Real-time risk assessment and mitigation
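To make the automated-safeguard idea concrete, here is a minimal TypeScript sketch of a screening gate of the kind such a loop could use; the watchlist, grounding threshold, and `screenReply` helper are hypothetical illustrations, not the production logic.

```typescript
// Illustrative safeguard gate: risky replies are held for human review.
// The watchlist, threshold, and names below are hypothetical.

interface ScreenResult {
  release: boolean;   // true if the reply can go to the patient
  reasons: string[];  // why it was held back, for the review queue
}

const BLOCKED_TERMS = ["off-label", "guarantee", "cure"]; // hypothetical watchlist

function screenReply(reply: string, groundingScore: number): ScreenResult {
  const reasons: string[] = [];
  for (const term of BLOCKED_TERMS) {
    if (reply.toLowerCase().includes(term)) reasons.push(`blocked term: ${term}`);
  }
  if (groundingScore < 0.8) reasons.push("weak grounding in approved content");
  return { release: reasons.length === 0, reasons };
}

// A failing reply is routed to the human-in-the-loop queue instead of the user.
console.log(screenReply("This treatment is a guaranteed cure.", 0.65));
```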
Agile Compliance Framework
Created entirely new review workflows that:
Aligned medical, legal, and regulatory teams around iterative approvals
Kept pace with AI model learning cycles
Departed radically from traditional static content sign-offs
Enabled continuous improvement while maintaining safety standards
Hybrid Conversational Architecture
Designed a novel blend of retrieval and generation (illustrated in the sketch after this list) that:
Balanced responsiveness with control
Reduced hallucination likelihood
Ensured outputs stayed within approved content parameters
Enabled scalable yet safe conversational experiences
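As an illustration of the pattern, not the actual implementation, the sketch below answers only from an approved-content store and falls back to a referral otherwise; the passages, the keyword scorer, and the `retrieveApproved` helper are hypothetical, and real retrieval would use semantic search rather than keyword overlap.

```typescript
// Hypothetical approved-content store; production would use a vector index.
interface ApprovedPassage {
  id: string;
  text: string;
}

const APPROVED: ApprovedPassage[] = [
  { id: "dosing-01", text: "Take one dose weekly, as prescribed by your provider." },
  { id: "storage-01", text: "Store the pen refrigerated until first use." },
];

// Naive keyword-overlap scoring stands in for semantic retrieval.
function retrieveApproved(query: string, k = 2): ApprovedPassage[] {
  const terms = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return APPROVED
    .map((p) => ({
      p,
      score: p.text.toLowerCase().split(/\W+/).filter((w) => w && terms.has(w)).length,
    }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.p);
}

// Generation is bounded by retrieval: no approved passage, no generated answer.
function answer(query: string): string {
  const passages = retrieveApproved(query);
  if (passages.length === 0) {
    return "I can't answer that here; please contact your healthcare provider.";
  }
  // An LLM call would rephrase `passages` conversationally, with a prompt that
  // restricts it to this approved text (the call is omitted in this sketch).
  return passages.map((p) => p.text).join(" ");
}

console.log(answer("How often do I take a dose?"));
```

The key property is that the generative step never sees, and can never output, content outside the retrieved approved passages, which is what keeps responses inside approved content parameters.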
Results & Validation
Test Scenario: Sales team meeting with 6 stakeholder requests + existing Q2/Q3 roadmap
Output Quality:
✅ Captured all explicit requests with supporting quotes
✅ Identified implicit performance and UX needs
✅ Correctly prioritized features based on deal impact
✅ Flagged realistic technical risks for the React/Node.js stack
✅ Identified roadmap conflicts (mobile priority mismatch)
Time Savings: Reduces 2-3 hour manual analysis to 5-minute structured output
🔗 Live Demo
Built on Amazon Bedrock PartyRock; an AWS account is required for access
Future Vision: Multi-App Product Suite
While PartyRock served as an excellent prototyping platform, the real potential lies in orchestrated multi-step workflows:
Planned App Ecosystem:
1. Visual Roadmap Parser
Upload roadmap images/PDFs → Extract structured feature data
Integration: Feeds into current Meeting Analyzer
2. Competitive Intelligence Analyzer
Input: Competitor feature lists, pricing pages
Output: Gap analysis vs. your roadmap
3. User Feedback Synthesizer
Input: Support tickets, user interviews, NPS comments
Output: Prioritized user pain points
4. Technical Feasibility Scorer
Input: Feature requirements + current architecture
Output: Detailed effort estimates and implementation paths
Technical Implementation for Multi-App Orchestration:
Since PartyRock doesn't support multi-app workflows, I'd build this using:
LangChain/LangGraph Pipeline:
Visual Parser → Feature Extractor → Meeting Analyzer → Competitive Intel → Final Roadmap Recommendation
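Assuming the JavaScript @langchain/langgraph API, a minimal wiring of that pipeline could look like the following; the node bodies are stubs standing in for the LLM calls each app would make, and the seed features are invented for the example.

```typescript
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Shared state flowing through the graph.
const PipelineState = Annotation.Root({
  features: Annotation<string[]>(),      // structured feature data
  recommendation: Annotation<string>(),  // final roadmap output
});

const pipeline = new StateGraph(PipelineState)
  .addNode("visualParser", async () => ({
    features: ["dark mode", "SSO"], // stub: would parse uploaded roadmap images/PDFs
  }))
  .addNode("featureExtractor", async (s: typeof PipelineState.State) => ({
    features: s.features, // stub: would normalize into structured feature data
  }))
  .addNode("meetingAnalyzer", async (s: typeof PipelineState.State) => ({
    features: [...s.features, "bulk export"], // stub: would merge meeting requests
  }))
  .addNode("competitiveIntel", async (s: typeof PipelineState.State) => ({
    features: s.features, // stub: would run gap analysis vs. competitors
  }))
  .addNode("roadmapRecommendation", async (s: typeof PipelineState.State) => ({
    recommendation: `Prioritize: ${s.features.join(", ")}`,
  }))
  .addEdge(START, "visualParser")
  .addEdge("visualParser", "featureExtractor")
  .addEdge("featureExtractor", "meetingAnalyzer")
  .addEdge("meetingAnalyzer", "competitiveIntel")
  .addEdge("competitiveIntel", "roadmapRecommendation")
  .addEdge("roadmapRecommendation", END)
  .compile();

// Kick off the workflow with an empty state (run as an ES module).
const result = await pipeline.invoke({ features: [], recommendation: "" });
console.log(result.recommendation);
```

Each node returns a partial state update, which keeps individual apps swappable without touching the graph wiring.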
Architecture:
Frontend: React dashboard for file uploads and workflow management
Backend: Node.js orchestrator calling different LLM models
Data Flow: Structured JSON between each processing step (shape sketched after this list)
Storage: PostgreSQL for feature tracking and historical analysis
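The structured JSON between steps is the contract that makes the orchestration work. A hypothetical shape for it, with illustrative field names rather than a finalized schema:

```typescript
// Hypothetical payload handed between pipeline steps and persisted to
// PostgreSQL for historical analysis; field names are illustrative only.

interface ExtractedFeature {
  name: string;
  source: "roadmap" | "meeting" | "competitor"; // which app surfaced it
  dealImpact: "high" | "medium" | "low";
  technicalRisk?: string; // e.g., "conflicts with current Node.js job queue"
}

interface StepPayload {
  step: string;                 // name of the app that produced this payload
  producedAt: string;           // ISO timestamp for historical tracking
  features: ExtractedFeature[];
  notes: string[];              // free-form analyst commentary
}

const example: StepPayload = {
  step: "meetingAnalyzer",
  producedAt: new Date().toISOString(),
  features: [{ name: "SSO support", source: "meeting", dealImpact: "high" }],
  notes: ["Two enterprise deals blocked on SSO."],
};
```

Persisting each StepPayload row in PostgreSQL directly supports the feature tracking and historical analysis noted above.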
🎓 Key Learnings
Prompt Engineering:
Specificity in input formats dramatically improves output quality
Grounding analysis in multiple contexts (roadmap + tech stack) increases relevance
Table structures need explicit formatting instructions for consistency (see the example after this list)
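As an example of what those explicit instructions look like, here is a reconstructed fragment (illustrative wording, not the exact project prompt):

```typescript
// Illustrative prompt fragment pinning down the table layout; the wording is
// reconstructed for this example, not copied from the project.
const TABLE_FORMAT_INSTRUCTIONS = `
Return the analysis as a markdown table with exactly these columns:
| Feature | Requesting Stakeholder | Deal Impact (High/Med/Low) | Technical Risk |
- One row per feature; no prose outside the table.
- If a value is unknown, write "TBD" instead of guessing.
`;
```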
Platform Limitations:
PartyRock is excellent for single-step analysis but limited for complex workflows
File upload capabilities enable real-world PM workflows
Need to balance comprehensive analysis with concise, scannable outputs
Product Insights:
Real PM workflows require orchestration between multiple analysis types
Context awareness (existing roadmap + tech constraints) is crucial for actionable output
Speed vs. depth tradeoff: 5-minute analysis vs. 3-hour manual deep dive
💭 Reflection
This project demonstrated the power of AI for structured analysis while highlighting the need for thoughtful prompt engineering and workflow orchestration. The single-app PartyRock version proves the concept, but the real value lies in building interconnected analysis tools that can handle the full complexity of modern product management workflows.