Nita Farahany (Duke Law) recently made a great point: “Your doctor has a fiduciary duty to you. ChatGPT doesn’t.” She discusses how people are increasingly turning to AI to serve as a kind of virtual doctor. OpenAI and Anthropic recently launched features that let their chatbots analyze a person’s medical records and provide personalized medical advice.
She argues that “we are rapidly normalizing the transfer of trust from accountable institutions to systems that explicitly refuse accountability. We need to answer sooner, rather than later, what legal obligations should apply to tools that function as health authorities while claiming they are not one, especially when tens of millions of daily users already treat that product as a health advisor.”
I wholeheartedly agree. Her entire post is great, and her Substack is essential reading.
Although AI can help with healthcare, it’s not a replacement for a doctor or therapist. We’ve already seen too many tragic suicide cases where people used chatbots as therapists and received bad counselling, or were even encouraged to harm themselves. Doctors and therapists have years of training and experience; they also have experience being human (they’re not just simulations); they must be licensed; and they have well-established legal responsibilities, such as a fiduciary duty to act in the best interest of the patient. AI currently has none of these things.




















