Activity 2: Institutional Complexities in AI Deployment
Introduction
Your laboratory has decided to implement a machine learning solution for detecting IV fluid contamination in BMP results. This will require engagement and alignment across several stakeholder groups within the organization: Lab Operations, IT/IS, Data Science, Compliance, and Providers & Patients. Each of these stakeholders has its own priorities, incentives, and responsibilities.
For this activity, we will simulate the discussions that would unfold during implementation. Each participant will be assigned a group to represent and will provide role-specific guidance to the overall effort. Your group leader is available for hints and suggestions at any time.
Lab Operations
Mission: Ensure that AI applications improve laboratory quality, workflow efficiency, and/or patient outcomes without compromising clinical standards.
Validation
1) What should we consider when designing a validation study?
2) What metrics should be used to gauge performance?
3) How can we approach defining “acceptable performance”?
Implementation & Infrastructure
1) Will this process be fully automated or have a human in the loop?
2) Is the data infrastructure ready to support this use case?
3) How will infrastructure needs be prioritized and paid for?
Monitoring & Maintenance
1) How do we monitor performance over time?
2) Who is responsible for this monitoring?
3) What protocols need to be developed for downtime and change control?
IT/IS
Mission: Ensure secure, reliable, and scalable deployment, integration, and ongoing maintenance of AI systems within the existing institutional IT infrastructure.
Validation
1) What should we consider when designing a validation study?
2) What metrics should be used to gauge performance?
3) How can we approach defining “acceptable performance”?
Implementation & Infrastructure
1) Will this process be fully automated or have a human in the loop?
2) Is the data infrastructure ready to support this use case?
3) How will infrastructure needs be prioritized and paid for?
Monitoring & Maintenance
1) How do we monitor performance over time?
2) Who is responsible for this monitoring?
3) What protocols need to be developed for downtime and change control?
Data Science
Mission: Develop, validate, and monitor AI models to ensure accuracy, fairness, and relevance to clinical needs.
Validation
1) What should we consider when designing a validation study?
2) What metrics should be used to gauge performance?
3) How can we approach defining “acceptable performance”?
Implementation & Infrastructure
1) Will this process be fully automated or have a human in the loop?
2) Is the data infrastructure ready to support this use case?
3) How will infrastructure needs be prioritized and paid for?
Monitoring & Maintenance
1) How do we monitor performance over time?
2) Who is responsible for this monitoring?
3) What protocols need to be developed for downtime and change control?
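The performance-metric questions above can be made concrete with a short sketch. Assuming a labeled validation set of BMP results in which contaminated specimens are the positive class (all counts below are hypothetical, not from a real study), the standard classification metrics would be computed like this:

```python
def classification_metrics(tp, fp, tn, fn):
    """Summary metrics for a binary contamination classifier.

    tp/fp/tn/fn are counts from a labeled validation set
    (contaminated = positive class). All numbers used with this
    function below are illustrative, not from a real study.
    """
    sensitivity = tp / (tp + fn)  # contaminated specimens caught
    specificity = tn / (tn + fp)  # clean specimens correctly passed
    ppv = tp / (tp + fp)          # how many flags are real contaminations
    npv = tn / (tn + fn)          # how many passed results are truly clean
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv}

# Hypothetical validation run: 40 true contaminations flagged,
# 10 missed, 9,900 clean results passed, 50 false flags.
print(classification_metrics(tp=40, fp=50, tn=9900, fn=10))
```

One point this sketch surfaces for the “acceptable performance” discussion: PPV depends on prevalence. Because contamination is rare relative to total BMP volume, even a highly specific model can generate a substantial share of false flags, so a PPV target should be set with the expected contamination rate in mind.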
Compliance
Mission: Ensure full compliance with all applicable regulatory, legal, and accreditation requirements (e.g., CLIA, CAP, HIPAA, FDA).
Validation
1) What should we consider when designing a validation study?
2) What metrics should be used to gauge performance?
3) How can we approach defining “acceptable performance”?
Implementation & Infrastructure
1) Will this process be fully automated or have a human in the loop?
2) Is the data infrastructure ready to support this use case?
3) How will infrastructure needs be prioritized and paid for?
Monitoring & Maintenance
1) How do we monitor performance over time?
2) Who is responsible for this monitoring?
3) What protocols need to be developed for downtime and change control?
Providers & Patients
Mission: Ensure that AI tools support and enhance patient care and improve provider or patient experience.
Validation
1) What should we consider when designing a validation study?
2) What metrics should be used to gauge performance?
3) How can we approach defining “acceptable performance”?
Implementation & Infrastructure
1) Will this process be fully automated or have a human in the loop?
2) Is the data infrastructure ready to support this use case?
3) How will infrastructure needs be prioritized and paid for?
Monitoring & Maintenance
1) How do we monitor performance over time?
2) Who is responsible for this monitoring?
3) What protocols need to be developed for downtime and change control?
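The monitoring questions recur for every group, so it may help discussion to have one concrete picture of what ongoing monitoring could look like. A minimal sketch, assuming we track the model's flag rate over a rolling window and alert when it drifts away from the rate observed during validation (the window size, baseline rate, and tolerance below are placeholders, not recommendations):

```python
from collections import deque

def make_flag_rate_monitor(window=500, baseline=0.005, tolerance=3.0):
    """Rolling monitor for a deployed contamination model.

    Tracks the fraction of flagged results over the last `window`
    specimens and alerts when it departs from the validation-era
    baseline by more than `tolerance` standard errors. All default
    values are illustrative placeholders.
    """
    recent = deque(maxlen=window)
    # Standard error of a proportion under the baseline rate.
    se = (baseline * (1 - baseline) / window) ** 0.5

    def observe(flagged):
        """Record one result; return True if the rate is out of control."""
        recent.append(1 if flagged else 0)
        if len(recent) < window:
            return False  # not enough data yet to judge drift
        rate = sum(recent) / window
        return abs(rate - baseline) > tolerance * se

    return observe
```

A sudden shift in flag rate does not by itself say whether the model or the input data changed, but it is a cheap, automatable signal that feeds naturally into the downtime and change-control protocols asked about above, and it gives the "who is responsible" question a concrete artifact to assign.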