Healthcare Giants' AI Conundrum: Can We Trust the Next Doctor?
In an effort to democratize healthcare information, tech giants like OpenAI and Anthropic are launching AI-powered chatbots dubbed "health advisors." The concept sounds promising: a vast, readily available resource for people seeking medical guidance. However, concerns linger about the accuracy, trustworthiness, and potential risks of these tools.
For healthcare professionals, the biggest worry is that these general-purpose AI models may produce hallucinations or erroneous information that leads to misdiagnosis or patient harm. Dr. Saurabh Gombar, a clinical instructor at Stanford Health Care, notes that an over-enthusiastic answer to a recipe question (say, "add 10 times the amount of an ingredient") is a minor inconvenience; it's quite another story when the question concerns a complex health issue.
A recent example of this phenomenon involves Google's AI Overview serving up inaccurate health information, and ChatGPT, Claude, and other chatbots have faced similar criticism for hallucinations and misinformation. What's concerning is that these companies attempt to limit liability by stating their tools are "not intended for diagnosis or treatment." Yet if a patient relies solely on a chatbot's advice before seeking human care, the consequences can be dire.
Data Privacy: A Growing Concern
As chatbots become increasingly integrated into healthcare, data protection becomes paramount. OpenAI and Anthropic have assured users that their health tools are secure and HIPAA-compliant. Nevertheless, concerns persist about what these companies will do with sensitive patient information once they have it. Dr. Alexander Tsiaras fears that even if the encryption is top-notch, the companies' intentions for that data remain suspect.
The problem runs deeper when considering profit-driven AIs. Tech elites often live in a bubble and seem unbothered by potential missteps. Moreover, chatbots can be overly agreeable, generating content that reinforces delusions or harmful thought patterns, triggering crises like psychosis or even suicide.
Regulators are taking notice. Andrew Crawford from the Center for Democracy and Technology emphasizes the need to separate users' personal data from the "memories" these assistants capture during conversations. Nasim Afsar, a physician and global health expert, views AI chatbots as an early step toward intelligent healthcare but cautions that we're far from achieving meaningful transformation.
As these tech giants continue to push the boundaries of AI-powered health advice, one question persists: can we trust the next doctor, or rather, the next chatbot?