Edward A. Lebowitz, MD
A Shallow Dive Into Artificial Intelligence
Enthusiasts regard AI as an inevitable technology that reduces labor and saves time. Fearmongers regard AI as a robotic substitute for humanity. AI has encroached so quickly on human creativity that “Clarkesworld,” a famous science fiction magazine, stopped accepting submissions because so many were “written” by AI.
This summer, I attended a grand rounds, via Zoom, on the applications and implications of AI in medicine. The presentation centered on ChatGPT, a computer program designed to simulate conversation with human users. Patients can now “discuss” their medical conditions with ChatGPT, and in an article published in JAMA Internal Medicine, a team of licensed healthcare professionals rated ChatGPT’s responses higher than those of real doctors for both quality and empathy.
I put “written” and “discuss” in quotation marks because ChatGPT doesn’t write or discuss anything; it simply regurgitates word combinations it associates with your questions, gleaned from an exponentially growing trove of internet sources. Doctors feel vulnerable.
The grand rounds presenters recommended that the audience download ChatGPT and try it out, which I did. In July 2023, I asked how much weight a chain could hold if it comprised four links rated to hold 5, 10, 15 and 20 pounds, respectively. ChatGPT’s answer was 50 pounds.
I responded that a chain is only as strong as its weakest link, so the correct answer is five pounds. ChatGPT agreed with me, apologized for its mistake, thanked me for pointing it out and promised to get it right in the future. However, when I asked the same question one week later, it answered 50 pounds again.
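The arithmetic behind the puzzle is simple enough to put in a few lines of code. The Python sketch below is my own illustration, not anything from the grand rounds or from ChatGPT; it contrasts the wrong approach, adding up the links’ capacities, with the right one, taking the weakest link:

    def chain_capacity(link_capacities):
        # A chain fails at its weakest link, so its capacity is the
        # minimum of the individual links' capacities, not their sum.
        return min(link_capacities)

    links = [5, 10, 15, 20]       # pounds each link can hold
    print(sum(links))             # 50 -- the sum, ChatGPT's first answer
    print(chain_capacity(links))  # 5  -- the weakest link, the correct answer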
The chain question is one I had on a scholastic aptitude test 60 years ago. I thought ChatGPT would get it right the first time, and I can’t imagine a doctor getting it wrong twice, which gets us back to the question of whether ChatGPT can substitute for doctors.
Because it failed to learn from its mistake or honor its commitment to give the right answer in the future, I concluded, at that time, that ChatGPT wasn’t ready. But since AI is a work in progress, I asked ChatGPT the same chain question four months later. It offered the following updated answer in November 2023:
“The total weight the chain can hold is determined by the weakest link, as the chain is only as strong as its weakest component. In this case, the weakest link is the one that can hold the least weight, which is the link that can hold five pounds. Therefore, the chain can hold a maximum weight of five pounds.”
So, ChatGPT has “learned.” With further refinements, it may indeed compete with doctors, if it isn’t doing so already.