by Atiya Hasan, MD, MBA www.atiyahasan.com
Over the past few years, artificial intelligence (AI) has become a buzzword in the digital health industry, especially since the launch of ChatGPT and Google Bard. Products claiming to use AI to improve patient care, streamline administrative tasks, and make healthcare more affordable are now ubiquitous. However, some companies use "AI" purely as a marketing label, while others apply it only in limited ways, such as marketing or customer service. So how can companies prove that their solutions actually use AI? To help digital health companies distinguish themselves, I've put together a checklist of six essential steps they can take to substantiate their use of AI.
1. Provide technical details
Companies should be able to provide technical details about their AI system, such as the type of algorithms or models used, the data sources, and the training and validation methods. For example, if a company claims to use AI for medical diagnosis, they should provide details on the type of AI algorithm they use, the data used to train the algorithm, and the accuracy of the algorithm.
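One lightweight way to publish these technical details is a "model card" that summarizes the algorithm, data, and validation in a structured, disclosable form. The sketch below is purely illustrative; every field value is a hypothetical placeholder, not a real product's data.

```python
# Hedged sketch of a "model card" capturing the technical details a
# company should be able to disclose. All values are hypothetical.
model_card = {
    "model_type": "gradient-boosted decision trees",
    "intended_use": "triage support for diabetic retinopathy screening",
    "training_data": "120,000 de-identified retinal images (hypothetical)",
    "validation": "5-fold cross-validation plus an external hold-out site",
    "reported_metrics": {"sensitivity": 0.91, "specificity": 0.88},
    "limitations": "not validated on pediatric patients",
}

# Print the card as a simple disclosure document
for field, value in model_card.items():
    print(f"{field}: {value}")
```

Keeping this information in one structured artifact makes it easy to hand to reviewers, partners, or regulators on request.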
2. Demonstrate the system's capabilities
Companies can conduct randomized controlled trials or other studies to validate the system's effectiveness and publish their results in peer-reviewed journals. This can include data about accuracy, precision, and speed, as well as how the system compares to traditional non-AI methods.
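For a diagnostic system, the headline numbers from such a study usually reduce to a handful of metrics derived from a confusion matrix. The function below is a minimal sketch of that calculation; the patient counts in the example call are hypothetical, not results from any real trial.

```python
# Illustrative sketch: the core metrics a validation study should report.
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute common diagnostic performance metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # diseased cases correctly flagged
        "specificity": tn / (tn + fp),            # healthy cases correctly cleared
        "precision": tp / (tp + fp),              # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical results from a hold-out test set of 1,000 patients
print(diagnostic_metrics(tp=85, fp=30, tn=870, fn=15))
```

Reporting all four numbers, rather than accuracy alone, matters: with rare conditions a model can score high accuracy while missing most true cases.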
3. Involve clinicians in the development and use of the AI
Involving clinicians in the development and use of AI helps ensure that the system is clinically relevant and used safely and effectively. It also demonstrates that the company takes a human-centered design approach.
4. Explain how the system is integrated
Companies should be able to explain how their AI system is integrated into their product or service and how it benefits users. They should also provide evidence that the AI system is making decisions or providing recommendations based on input data rather than just following pre-programmed rules.
5. Provide transparency and explainability
Companies should be transparent about how their AI system works and explain how it reaches its decisions. They should also be upfront about the AI's limitations, for example by specifying which conditions the system can and cannot diagnose.
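For simple models, explainability can be as concrete as showing how much each input contributed to a given prediction. The sketch below does this for a linear risk model with a logistic link; the feature names, weights, and patient values are hypothetical, chosen only to illustrate the idea.

```python
import math

# Minimal explainability sketch for a linear risk model.
# Weights and features are hypothetical, for illustration only.
WEIGHTS = {"age": 0.04, "bmi": 0.06, "systolic_bp": 0.02}
BIAS = -7.0

def explain(patient):
    """Return the model's risk score plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))     # logistic link maps logit to a probability
    return risk, contributions

risk, contribs = explain({"age": 62, "bmi": 31, "systolic_bp": 148})
# Sorting contributions shows which inputs drove this particular prediction
for feature, c in sorted(contribs.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {c:+.2f}")
```

More complex models need dedicated tools (e.g., feature-attribution methods), but the principle is the same: for each prediction, the system should be able to say which inputs mattered and by how much.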
6. Use recognized AI frameworks
Companies can build confidence that their AI solutions are ethical by ensuring they meet recognized standards and guidelines, such as the European Union's General Data Protection Regulation (GDPR) and the FDA's Digital Health Software Precertification (Pre-Cert) Program.