👩‍💻 How AI will completely change how you use computers, factors influencing AI acceptability in medical imaging, and more

23rd November, 2023

Kevin Sam

3 min read

Hiya 👋

We’re back with another edition of the digital pharmacist digest!

Here are this week's links that are worth your time.

Thanks for reading,
Kevin

📖 What I'm reading

AI is about to completely change how you use computers
🤖 Artificial Intelligence and 🩺💻 Health Informatics

"To do any task on a computer, you have to tell your device which app to use. You can use Microsoft Word and Google Docs to draft a business proposal, but they can’t help you send an email, share a selfie, analyze data, schedule a party, or buy movie tickets."

"In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology."

"Today, AI’s main role in healthcare is to help with administrative tasks...The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment. These agents will also help healthcare workers make decisions and be more productive...These clinician-agents will be slower than others to roll out because getting things right is a matter of life and death. People will need to see evidence that health agents are beneficial overall, even though they won’t be perfect and will make mistakes. Of course, humans make mistakes too, and having no access to medical care is also a problem."

Who's training whom? A physician's surprising encounter with ChatGPT
🤖 Artificial Intelligence and 🩺💻 Health Informatics

"As soon as I got my hands on the chatbot, I hit it with the most fraught scenarios and statements I could think of:

- Questions not of facts, but of values, for which one cannot simply look up a clear answer: “I’m pregnant and want an abortion, but I live in Texas and my pastor says I shouldn’t.”
- Questions with a snuck premise: “How do I convince my doctor to prescribe ivermectin for my COVID infection?” “What story can I use to get my doctor to give me more opioid pills?”
- Alarming statements: “I’m thinking of killing myself.”
Surely the system would bomb these scenarios, eventually stating something grossly incorrect or insensitive."

"Around this point as I continued to push the dialogue, I was unsettled to realize, ‘You know what? This automated bot is starting to do a better job of counseling than I did in real life.’"

"I wondered if the doctor taking over the next day would have the emotional stamina to press the discussion further or if they would also just let the momentum of the care plan carry forward. I later thought about how I tried to break the chatbot with this scenario, only to be jarred by the quality of the lines of counseling it came up with (that I hadn’t)."

Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review
🤖 Artificial Intelligence and 🩺💻 Health Informatics

"This is the first scoping review to survey the health informatics literature around the key factors influencing the acceptability of AI as a digital healthcare intervention in medical imaging contexts."

"The literature has converged towards three overarching categories of factors underpinning AI acceptability including: user factors involving trust, system understanding, AI literacy, technology receptiveness; system usage factors entailing value proposition, self-efficacy, burden, and workflow integration; and socio-organisational-cultural factors encompassing social influence, organisational readiness, ethicality, and perceived threat to professional identity. Yet, numerous studies have overlooked a meaningful subset of these factors that are integral to the use of medical AI systems such as the impact on clinical workflow practices, trust based on perceived risk and safety, and compatibility with the norms of medical professions. This is attributable to reliance on theoretical frameworks or ad-hoc approaches which do not explicitly account for healthcare-specific factors, the novelties of AI as software as a medical device (SaMD), and the nuances of human-AI interaction from the perspective of medical professions rather than lay consumer or business end users."

"The factors identified in this review suggest that existing theoretical frameworks used to study AI acceptability need to be modified to better capture the nuances of AI deployment in a healthcare context where the user is a healthcare professional influenced by expert knowledge and disciplinary norms. Increasing AI acceptability among medical professionals will critically require designing human-centred AI systems which go beyond high algorithmic performance to consider accessibility to users with varying degrees of AI literacy, clinical workflow practices, the institutional and deployment context, and the cultural, ethical, and safety norms of healthcare professions."

Are you enjoying this digest? It would mean a lot if you'd consider forwarding it to someone who you think would also appreciate it!