Should you ask an AI chatbot for a second opinion on your diagnosis before consulting your doctor? For millions of people, this is no longer a hypothetical—it's their daily reality.
Over the past three months, Microsoft, Amazon, and OpenAI have collectively pushed health AI into the mainstream, creating a landscape where consumers can discuss symptoms, review lab results, and explore treatment options through conversational interfaces. OpenAI's ChatGPT Health launched in January. Microsoft followed in late March with Copilot Health, a dedicated space within its Copilot app where users connect their medical records and ask targeted questions about their health. Amazon then expanded its Health AI tool, which previously required a One Medical subscription, to the general public. Anthropic's Claude, meanwhile, can access user health records if granted permission.
The scale of adoption is staggering. Microsoft reports receiving 50 million health questions per day across its platforms, with health consistently ranking as the most popular discussion topic on the Copilot mobile app. "We were seeing just a rapid, rapid increase in the rate of people using ChatGPT for health-related questions," said Karan Singhal, who leads OpenAI's Health AI team.
Companies are meeting that demand at scale, but the scientific community remains divided on whether the technology is ready for prime time. Some peer-reviewed studies suggest current large language models can provide safe, accurate medical advice in controlled settings. Researchers at Oxford, however, argue that independent evaluation is essential before mass deployment in a domain where errors can be life-threatening. The concern: companies currently evaluate their own products, and those assessments, where they exist at all, aren't subject to external expert review.
Dominic King, Microsoft's vice president of health and a former surgeon, frames the products as a response to both technological capability and urgent demand. "We've seen this enormous progress in the capabilities of generative AI to be able to answer health questions," he said. But on the question of independent validation, the picture grows murkier. Microsoft's Copilot Health does not cite published clinical trials on its landing page. Amazon is similarly vague about the evidence behind its Health AI. OpenAI has published research, but it covers only specific use cases.
The regulatory landscape hasn't caught up. The FDA has signaled interest in AI oversight but hasn't issued binding rules for consumer health chatbots—a gap that allows companies to ship products while the evidence base catches up. Andrew Bean, a doctoral candidate at the Oxford Internet Institute who studies digital health, offers a measured view: "These models have reached a point where they're actually worth rolling out. But the evidence base really needs to be there."
For now, consumers are navigating this uncharted territory with their own judgment. The tools are free or bundled into existing subscriptions, and no clinical credential appears on any splash screen. Whether 50 million daily health questions represent progress or a public health gamble depends on whom you ask. That uncertainty is itself the story.