We must not let AI 'pull the doctor out of the visit' for low-income patients | Leah Goodridge and Oni Blackstock

A private US company runs clinics for unhoused patients, using AI to assist doctors. But is this the right approach?

A company called Akido Labs is running clinics in southern California, where rates of homelessness are among the highest in the nation, and using artificial intelligence (AI) to assist doctors during patient visits. Critics argue that this approach puts patients who already struggle to access healthcare at risk.

The company's goal is to "pull the doctor out of the visit" by providing AI-generated diagnoses and treatment plans, which are then reviewed by a doctor. While AI can be useful in assisting medical professionals, its use in low-income clinics raises serious concerns about diagnostic accuracy and exacerbating existing health inequities.

Studies have shown that AI algorithms trained on large datasets often produce inaccurate diagnoses, particularly for patients from marginalized communities. A 2021 study found that AI algorithms under-diagnosed Black and Latinx patients more often than white patients, while another study, published in 2024, found that AI misread breast cancer screenings of Black patients at a higher rate.
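To make concrete what "under-diagnosed at a higher rate" means in these studies, here is a minimal sketch of how such a disparity is typically measured: the false negative rate (the share of patients who have a condition but whom the model flags as healthy) is computed separately for each demographic group and then compared. The data, group labels, and function name below are illustrative assumptions, not taken from any of the cited studies.

```python
# Minimal sketch of a group-wise fairness audit: measure how often a model
# misses a diagnosis (false negatives) within each patient group.
# All data here is illustrative, not from any real study.

from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, y_true, y_pred), where 1 = condition present."""
    misses = defaultdict(int)     # condition present, but model predicted absent
    positives = defaultdict(int)  # condition actually present
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Hypothetical data in which the model misses more cases in group "B":
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 1), ("B", 1, 0), ("B", 1, 0),
]
rates = false_negative_rates(data)
# Group A misses 1 of 4 cases (0.25); group B misses 3 of 4 (0.75).
```

A gap like the one in this toy example (0.25 vs 0.75) is the kind of disparity the cited studies report: the model is not merely inaccurate overall, it fails more often for one group than another.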

Patients may not even be aware that their healthcare provider is using AI to assist with diagnoses. Medical assistants have stated that they tell their patients about the AI system listening during consultations but do not inform them of its diagnostic recommendations, which echoes an era of exploitative medical racism where Black people were experimented on without consent.

The potential impact of AI in low-income clinics goes beyond diagnostic accuracy. Advocacy groups estimate that 92 million Americans with low incomes have basic aspects of their lives decided by AI, including eligibility for Medicaid and Social Security disability insurance. Recently, federal courts have seen cases filed against large healthcare companies such as UnitedHealthcare and Humana, alleging that AI systems used to make coverage decisions led to patients being denied care and, in some cases, to their deaths.

The use of AI in healthcare disproportionately affects unhoused people, who already face significant barriers to quality care. Those who are financially stable can access high-quality healthcare; for those struggling to get by, AI may become one more barrier to receiving the care they need.

Instead of relying on AI systems that take the lead, patients and their communities should be at the forefront of healthcare decisions, ensuring that technological innovations like AI serve as tools to support human-centered care rather than replace it.
 
 
 