US Private Company Runs Clinics for Unhoused Patients Using AI to Assist Doctors. But Is This The Right Approach?
A company called Akido Labs is running clinics in Southern California, where rates of homelessness are among the highest in the nation, using artificial intelligence (AI) to assist doctors during patient visits. However, critics argue that this approach puts patients who already struggle to access healthcare at risk.
The company's goal is to "pull the doctor out of the visit" by providing AI-generated diagnoses and treatment plans, which are then reviewed by a doctor. While AI can be useful in assisting medical professionals, its use in low-income clinics raises serious concerns about diagnostic accuracy and exacerbating existing health inequities.
Studies have shown that AI algorithms trained on large datasets often produce inaccurate diagnoses, particularly for patients from marginalized communities. A 2021 study found that AI algorithms under-diagnosed Black and Latinx patients more frequently than white patients, while another study published in 2024 found that AI misread breast cancer screenings of Black patients at a higher rate.
Patients may not even be aware that their healthcare provider is using AI to assist with diagnoses. Medical assistants have stated that they tell patients about the AI system listening during consultations but do not inform them of its diagnostic recommendations, a practice that echoes an era of exploitative medical racism in which Black people were experimented on without consent.
The potential impact of AI in low-income clinics goes beyond diagnostic accuracy. Advocacy groups estimate that 92 million Americans with low incomes have basic aspects of their lives decided by AI, including eligibility for Medicaid and Social Security disability insurance. Recently, federal courts have seen cases filed against large healthcare companies like UnitedHealthcare and Humana, alleging that AI systems used to decide medical coverage resulted in patients being denied care and even death.
The use of AI in healthcare disproportionately affects unhoused individuals, who already face significant barriers to accessing quality care. Those who are financially stable can access high-quality healthcare; for those struggling to get by, AI may stand between them and the care they need.
Instead of relying on AI systems that take the lead, patients and their communities should be at the forefront of healthcare decisions, ensuring that technological innovations like AI serve as tools to support human-centered care rather than replace it.