Activity 2: Institutional Complexities in AI Deployment
Introduction
Your laboratory has decided to implement a machine learning solution for detecting IV fluid contamination in basic metabolic panel (BMP) results. Doing so will require engagement and alignment among several stakeholders across the organization: The Lab, IT/IS, Data Science, Compliance, and Front-line Providers. Each of these stakeholders has its own priorities, incentives, and responsibilities.
For this activity, we will simulate the discussions that would unfold over the course of the implementation effort. Each participant will be assigned a stakeholder group to represent and will provide that group's role-specific guidance to the overall effort. Your group leader will be available for hints and suggestions at any time.

Step 1: Validation
1) How should this model be validated?
2) What metrics should be used to gauge performance?
3) What predefined criteria will constitute “acceptable performance”? (A worked metrics example follows below.)
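
To ground the metrics discussion, here is a minimal sketch of how a held-out validation set might be scored. The placeholder data, the 0.5 decision threshold, and the 0.90 AUROC acceptance criterion are illustrative assumptions, not institutional requirements. Example sketch (Python):

    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Placeholder data standing in for a held-out validation set
    # (y_true: 1 = contaminated specimen, y_score: model probability).
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_score = np.clip(0.6 * y_true + 0.5 * rng.random(1000), 0.0, 1.0)

    y_pred = (y_score >= 0.5).astype(int)  # assumed decision threshold
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

    sensitivity = tp / (tp + fn)   # fraction of contaminated specimens caught
    specificity = tn / (tn + fp)   # fraction of clean specimens left alone
    ppv = tp / (tp + fp)           # chance a flagged specimen is truly contaminated
    auroc = roc_auc_score(y_true, y_score)

    print(f"Sensitivity {sensitivity:.3f}  Specificity {specificity:.3f}  "
          f"PPV {ppv:.3f}  AUROC {auroc:.3f}")

    # Example predefined acceptance criterion (assumption, not a standard):
    assert auroc >= 0.90, "Model fails the predefined acceptance criterion"

In practice, sensitivity and PPV drive the discussion here: a missed contamination releases an erroneous result, while each false flag adds review workload at the bench.
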
Step 2: Deployment
1) How will the model be integrated into existing laboratory and EHR workflows?
2) Who will see the model's output, and how will flagged results be routed and acted on?
3) What is the rollback plan if deployment disrupts laboratory operations? (An integration sketch follows below.)
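
To make the workflow-integration questions concrete, here is a minimal sketch of an inference hook that an autoverification step might call before releasing a BMP. The analyte fields, the rule-based stand-in for the validated model, and the hold-for-review behavior are all illustrative assumptions; a real deployment would go through the institution's LIS middleware and interface engine. Example sketch (Python):

    from dataclasses import dataclass

    @dataclass
    class BMPResult:
        specimen_id: str
        values: dict  # analyte -> measured value

    def toy_contamination_score(values: dict) -> float:
        # Placeholder for the validated ML model: normal-saline contamination
        # classically raises Na/Cl while diluting K, Ca, and glucose.
        return 0.9 if values["Cl"] > 115 and values["K"] < 3.0 else 0.05

    def hold_for_review(result: BMPResult, threshold: float = 0.5) -> bool:
        """Return True if the specimen should be routed to a technologist
        instead of autoverifying (threshold is an assumed operating point)."""
        return toy_contamination_score(result.values) >= threshold

    suspect = BMPResult("S001", {"Na": 148, "K": 2.4, "Cl": 120, "CO2": 20,
                                 "BUN": 6, "Cr": 0.4, "Glu": 60, "Ca": 7.1})
    print(hold_for_review(suspect))  # True -> do not autoverify

Holding the specimen for technologist review, rather than silently commenting the result, keeps a human in the loop, which matters later for provider trust.
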
Step 3: Monitoring
1) What performance and utilization metrics will be tracked after go-live?
2) How often will performance be reviewed, and who is responsible for the review?
3) What findings would trigger retraining, recalibration, or retirement of the model? (A drift-monitoring sketch follows below.)
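
One simple way to frame the monitoring discussion is a Levey-Jennings-style check on the model's daily flag rate, sketched below. The 2% baseline rate and the 3-sigma control band are illustrative assumptions; actual limits would be derived from the validation study. Example sketch (Python):

    def day_out_of_control(n_flagged: int, n_total: int,
                           baseline: float = 0.02, n_sigma: float = 3.0) -> bool:
        """Return True if today's flag rate falls outside the assumed
        binomial control band around the baseline rate."""
        rate = n_flagged / n_total
        se = (baseline * (1 - baseline) / n_total) ** 0.5
        return abs(rate - baseline) > n_sigma * se

    # Example: 12 of 180 BMPs flagged today (6.7%) vs. a 2% baseline.
    print(day_out_of_control(n_flagged=12, n_total=180))  # True -> investigate

A rising flag rate could mean true contamination is up, an instrument or fluid-protocol change, or model drift; the monitoring plan should specify who investigates which possibility.
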
Stakeholder Roles

Each group represents one of the stakeholders named in the introduction; the mission, responsibilities, and key concerns for each role are summarized below.

IT/IS
Mission: Ensure secure, reliable, and scalable deployment, integration, and ongoing maintenance of AI systems within the existing institutional IT infrastructure.
Responsibilities:
- Maintain cybersecurity and data privacy.
- Provide technical support and incident response.
- Oversee interoperability with laboratory and hospital information systems (see the HL7 sketch below).
Key concerns:
- Security vulnerabilities or non-compliance with institutional IT policies.
- Lack of interoperability or inability to integrate with existing systems.
- Excessive resource requirements or unsustainable maintenance burden.
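
To give the interoperability concern something concrete to react to, below is a sketch of how a contamination flag might ride along in an HL7 v2 ORU^R01 result message as an NTE comment segment. The segment layout, field contents, and the NTE convention are illustrative assumptions, not a real interface specification. Example sketch (Python):

    SEP = "\r"  # HL7 v2 segments are separated by carriage returns

    def build_oru_with_flag(specimen_id: str, sodium: float, flagged: bool) -> str:
        segments = [
            "MSH|^~\\&|LIS|LAB|EHR|HOSP|202401011200||ORU^R01|MSG0001|P|2.5.1",
            f"OBR|1|{specimen_id}||BMP^Basic Metabolic Panel",
            f"OBX|1|NM|NA^Sodium||{sodium}|mmol/L|136-145||||F",
        ]
        if flagged:
            # Assumed convention: the ML screen travels as a result comment.
            segments.append("NTE|1||Possible IV fluid contamination; "
                            "model score above threshold")
        return SEP.join(segments)

    print(build_oru_with_flag("S001", 152, flagged=True).replace("\r", "\n"))
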

The Lab
Mission: Ensure that AI applications improve laboratory quality, efficiency, and patient outcomes without compromising clinical standards.
Responsibilities:
- Serve as liaison between laboratory staff, clinicians, and technical teams.
- Provide clinical input on algorithm development and use case prioritization.
Key concerns:
- Insufficient validation or failure to meet clinical performance benchmarks.
- Disruption to laboratory operations or patient care.
- Lack of clinician trust or engagement.

Compliance
Mission: Ensure full compliance with all applicable regulatory, legal, and accreditation requirements (e.g., CLIA, CAP, HIPAA, FDA).
Responsibilities:
- Guide risk assessments and documentation practices.
- Monitor for ongoing regulatory changes impacting AI use.
- Oversee management of adverse events and reporting obligations.
Key concerns:
- Non-compliance with regulatory or accreditation standards.
- Insufficient documentation or inability to audit system performance and decision-making (see the audit-log sketch below).
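
To make the auditability concern tangible, here is a sketch of an append-only decision log that Compliance could review after the fact. The field set and file format (JSON lines) are assumptions about what an auditor might need, not a regulatory schema. Example sketch (Python):

    import json
    from datetime import datetime, timezone

    def log_decision(path: str, specimen_id: str, model_version: str,
                     score: float, action: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "specimen_id": specimen_id,
            "model_version": model_version,  # ties each decision to a validated version
            "score": round(score, 4),
            "action": action,  # e.g., "held_for_review" or "autoverified"
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    log_decision("ai_audit.jsonl", "S001", "contam-v1.2", 0.87, "held_for_review")
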

Data Science
Mission: Develop, validate, and monitor AI models to ensure accuracy, fairness, and relevance to clinical needs.
Responsibilities:
- Collaborate with clinicians to define use cases and performance metrics.
- Oversee validation, verification, and ongoing clinical performance monitoring of AI systems.
- Provide transparency regarding model limitations and potential biases (see the subgroup-audit sketch below).
Key concerns:
- Insufficient, biased, or unrepresentative data.
- Lack of stakeholder engagement or infrastructure support.
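
Since this card calls out potential biases, here is a sketch of a subgroup performance audit. The subgroups (patient-care units, chosen because IV line draws concentrate in ICUs) and the demo data are illustrative assumptions. Example sketch (Python):

    from collections import defaultdict

    def sensitivity_by_subgroup(records):
        """records: iterable of (subgroup, y_true, y_pred) tuples.
        Returns subgroup -> sensitivity on contaminated specimens."""
        tp = defaultdict(int)
        fn = defaultdict(int)
        for group, truth, pred in records:
            if truth == 1:
                if pred == 1:
                    tp[group] += 1
                else:
                    fn[group] += 1
        return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

    demo = [("ICU", 1, 1), ("ICU", 1, 1), ("ICU", 1, 0),
            ("ED", 1, 1), ("ED", 1, 0), ("ward", 1, 1)]
    print(sensitivity_by_subgroup(demo))

A persistent sensitivity gap between units would be documented as a model limitation rather than averaged away in a single headline metric.
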

Front-line Providers
Mission: Ensure that AI tools support and enhance patient care and improve provider experience.
Responsibilities:
- Participate in user feedback and acceptability assessments.
- Identify unintended consequences or workflow disruptions.
- Provide input on clinical relevance and user experience.
Key concerns:
- Loss of provider autonomy or increased workload.
- Lack of trust or understanding regarding AI system recommendations.