Sporo Health AI Literacy Policy

Last modified: Jan 30, 2025

Purpose and Scope

At Sporo Health, we are dedicated to leveraging artificial intelligence (AI) to enhance clinical workflows, reduce administrative burdens, and improve patient care. Our AI systems, including AI scribes, chart summarization tools, and clinical co-pilots, are designed to assist clinicians, not replace them or their clinical judgment. This policy outlines our approach to ensuring AI literacy among users, stakeholders, and team members, fostering responsible, informed, and transparent use of our technologies.

Purpose and Functionality

What Our Systems Do

Sporo Health’s AI systems serve the following functions:

  1. AI Scribe: Automates clinical documentation by transcribing and organizing patient-clinician interactions.
  2. Chart Summarization: Highlights key information in patient charts to support clinicians.
  3. Clinical Co-Pilot: Provides real-time assistance during workflows, suggesting next steps and flagging potential issues.

Human Oversight

These tools are intended to augment clinical workflows and reduce administrative burdens. Clinicians must:

  • Exercise professional judgment.
  • Review and verify AI outputs.
  • Use AI-generated information as a support tool, not as a replacement for their expertise.

Limitations

Understanding the Boundaries of AI

Sporo Health’s AI tools have the following limitations:

  1. Accuracy: Outputs may include errors, particularly in ambiguous or edge-case scenarios.
  2. Hallucinations: The AI may generate incorrect or fabricated information, especially in complex cases.
  3. Bias: Models trained on biased data may inadvertently reflect or amplify these biases.
  4. Dependency on Input Quality: Accurate inputs (e.g., clear audio or complete charts) are essential for reliable outputs.

Users must critically evaluate AI outputs and ensure alignment with clinical observations and records.

Risks

Potential Risks of AI Systems

The following risks are inherent in AI use and may arise when using Sporo Health tools:

  1. Fabricated Information: Incorrect summaries or hallucinations can lead to inaccurate records.
  2. Missing Critical Details: AI outputs might overlook or misinterpret key clinical information.
  3. Bias and Inequity: Inequities in training data may result in disparate outcomes across patient demographics.
  4. Overreliance: Excessive dependence on AI may erode clinicians’ independent judgment.

Mitigation Measures:

  • Data quality controls such as routine bias audits and diverse dataset testing.
  • Rigorous validation processes for all outputs.
  • Continuous monitoring and system updates.
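For illustration only, a routine bias audit of the kind listed above could compare verified output accuracy across patient subgroups. The sketch below is a simplified, hypothetical example (the record fields and tolerance threshold are assumptions, not our production auditing pipeline):

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute verified-output accuracy per demographic subgroup.

    `records` is a list of dicts with hypothetical keys:
      - "group":   demographic label
      - "correct": True if the clinician verified the AI output as accurate
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for r in records:
        totals[r["group"]][0] += int(r["correct"])
        totals[r["group"]][1] += 1
    return {g: correct / total for g, (correct, total) in totals.items()}

def flag_disparities(rates, tolerance=0.05):
    """Flag subgroups whose accuracy trails the best-performing group
    by more than a chosen tolerance (here 5 percentage points)."""
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if best - rate > tolerance}
```

Flagged subgroups would then trigger the root cause analysis and mitigation steps described elsewhere in this policy.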


Transparency Practices

For Non-Technical Users

We provide simplified documentation, such as public-facing policies, explaining system functions, risks, and intended use, along with an informed consent process for agreeing to terms and conditions.

For Technical Users

Detailed documentation on:

  • Training Data: Comprehensive documentation is available detailing the sources, diversity, and limitations of our training datasets. We are transparent about gaps in data coverage, language availability, and any potential implications for specific clinical scenarios.
  • Performance Metrics: System performance is evaluated using clinically relevant metrics, including accuracy rates, false positive and false negative rates, BLEU and ROUGE scores, and overall model validation results. These metrics are reported with appropriate confidence intervals to provide context. The results are also available in our case studies.
  • Version Histories: Logs detailing model updates and testing outcomes are meticulously maintained. Each update includes a summary of changes, impact assessments on performance, and alignment with compliance standards. These logs are available for review and regulatory audits upon request.

This information is available for review and shared with regulators upon request.
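To illustrate the kind of metric reporting described above (a generic example, not our exact evaluation pipeline), an accuracy rate and its confidence interval can be computed with a standard Wilson score interval:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a proportion,
    e.g. an accuracy rate or a false-positive rate."""
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z**2 / (4 * trials**2)
    )
    return center - margin, center + margin

# Hypothetical example: 460 verified-correct outputs out of 500 reviewed
# gives a 92% accuracy rate with an interval of roughly (0.893, 0.941).
low, high = wilson_interval(460, 500)
```

Reporting the interval alongside the point estimate makes clear how much the measured rate could vary with a different evaluation sample.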

End-User Communication

Training for Users

We provide training to ensure clinicians understand the safe and effective use of AI systems:

  1. Onboarding Sessions: We provide comprehensive, role-specific training during onboarding to ensure clinicians are equipped to use our AI systems effectively and safely. This training includes interactive sessions, system simulations, and best practices for integrating AI into clinical workflows.
  2. Ongoing Education: Our website and tools host a wide range of educational materials to support AI literacy, including videos, tutorials, case studies, and system updates. These resources are updated frequently to reflect the latest developments in AI technology and healthcare regulations.
  3. Resource Access: To support our users, customer support is available 24/7 through website channels and email. Escalation protocols ensure timely resolution of complex issues, with direct access to our technical and clinical teams.

Legal and Ethical Implications

Users are educated on:

  1. Data Privacy and Compliance: Adhering to HIPAA, GDPR, and state-specific privacy laws.
  2. Patient Transparency: Clinicians are expected to communicate with patients about AI’s role in their care and to incorporate this disclosure into their consent process.
  3. Accountability: Clinicians are ultimately responsible for patient outcomes and must validate all AI outputs.

User Expectations

Human Oversight

Users must:

  • Review and confirm all AI-generated outputs before integrating them into clinical records.
  • Report errors, discrepancies, or concerns immediately.
  • Maintain independent clinical judgment at all times.

Feedback and Reporting

A feedback mechanism allows users to report issues directly. Errors are escalated to an Incident Response Team, including the Chief Technology Officer (CTO), Chief Strategy Officer (CSO), and Chief Product and Quality Officer (CQPO), for root cause analysis and mitigation.

Monitoring and Continuous Improvement

  1. Bias and Risk Mitigation: To mitigate bias, we conduct reviews of our AI systems, utilizing fairness tests and subgroup analyses to identify and address potential inequities. Feedback from users and evolving regulatory requirements are incorporated into these reviews to ensure ongoing improvement.
  2. Performance Monitoring: Real-world performance metrics based on use cases, including task completion rates and user satisfaction scores, are continuously tracked alongside model-related metrics. This data informs iterative updates, ensuring our tools remain effective and aligned with user needs.
  3. Regulatory Alignment: Our systems are updated regularly to reflect changes in healthcare regulations, such as GDPR, HIPAA, and the EU AI Act. Comprehensive audit trails and compliance reports are maintained to meet industry and regulatory standards.
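As a simplified illustration of the continuous performance tracking described in point 2 (a hypothetical sketch, not our monitoring system), a rolling completion rate over recent tasks helps surface regressions quickly after a model update:

```python
from collections import deque

class RollingCompletionRate:
    """Track the task completion rate over the most recent N tasks,
    so a drop in real-world performance surfaces quickly."""

    def __init__(self, window=1000):
        # Only the latest `window` outcomes are retained.
        self.outcomes = deque(maxlen=window)

    def record(self, completed: bool):
        """Record whether a single task was completed successfully."""
        self.outcomes.append(bool(completed))

    def rate(self):
        """Current completion rate, or None if nothing recorded yet."""
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)
```

A sustained decline in the rolling rate after a release would feed into the iterative update process described above.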


Conclusion

Sporo Health is committed to fostering AI literacy and ensuring the responsible use of AI systems. By prioritizing transparency, education, and human oversight, we empower clinicians to integrate AI into their practices safely and effectively.

For questions or feedback, contact:
contact@sporohealth.com