What ChatGPT Health means for the wellness industry

In the near term, this means wellness brands should prioritise AI optimisation to improve their chances of becoming the chatbot’s recommended product, healthcare marketing experts say. “Some of the most impactful work here is unglamorous, such as metadata, content consistency, and structured libraries,” Julie O’Donnell, global head of digital at healthcare consultancy Inizio Evoke, says. “Being honest, these are areas where many health and wellness brands have historically been weak. In an AI-driven landscape, that has to change. You don’t need to be the loudest or the biggest spender — brands that work smarter and build credibility systematically can make meaningful gains.”
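To make that “unglamorous” work concrete: one common form of structured metadata is schema.org JSON-LD embedded in product pages, which crawlers and AI models can parse directly. The Python sketch below builds a minimal record of this kind; the brand, product, and URL are invented for illustration, and this is one widely used convention rather than anything O’Donnell or OpenAI prescribes.

```python
import json

# Hypothetical wellness product; every field here is illustrative.
# Schema.org "Product" JSON-LD is one widely used way to publish the kind of
# consistent, structured metadata that crawlers and AI models can parse.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Restful Nights Magnesium Blend",  # hypothetical product
    "brand": {"@type": "Brand", "name": "Example Wellness Co."},
    "description": "Magnesium glycinate supplement intended to support sleep.",
    "sku": "RNMB-001",
    "url": "https://example.com/products/restful-nights",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "GBP",
        "price": "24.00",
        "availability": "https://schema.org/InStock",
    },
}

# Embedding this in a <script type="application/ld+json"> tag keeps a page's
# human-readable copy and its machine-readable metadata in sync.
print(json.dumps(product_jsonld, indent=2))
```

Publishing the same names, claims, and figures in both the visible copy and the embedded metadata is one practical reading of the “content consistency” O’Donnell describes.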

Experts are divided, however, on what this shift to AI-driven health means for traditional trackers. The first generation of health-tracking devices, such as Whoop and the Oura ring, gave consumers their data; some warn that if ChatGPT Health becomes the go-to platform for interpreting that data and delivering personalised recommendations, wearables and their apps could be reduced to mere suppliers of sensor data.

But Oura, the Finnish maker of a health-tracking ring, which offers wearers an AI-powered “Oura Advisor” within its app, tells Vogue Business that it views the new feature as validation of user demand and as a “complement” to its own Advisor.

“Oura Advisor is unique in that it’s built on each member’s continuous biometric data and long‑term baselines inside the Oura app, so it can turn real patterns and behaviors into contextual, actionable recommendations with clear guardrails around clinical adjacency,” says a company spokesperson. “In that sense, general‑purpose AI tools and Oura Advisor are complementary — one offers broad information, while the other translates your personal data, measured directly from a wearable, into tailored guidance.”

And as consumers increasingly feed their wearable data into general-purpose AI models, experts say, the onus grows on wearables brands to improve the quality and provenance of the data they collect.

“Many wearable metrics are proxies or estimates of underlying physiological processes, and as conversational interfaces take center stage, the accuracy and stability of those inputs become critical,” says Billie Whitehouse, CEO of Wearable X. “Generalized models like GPT can provide broad reasoning and synthesis, but they do not always replace the domain-specific insights that some wearable platforms have spent years refining.”
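As a rough illustration of what feeding wearable data into a general-purpose model can look like in practice, the sketch below condenses a few nights of hypothetical readings into a baseline-aware prompt; the field names and figures are invented, and the resulting text could be pasted into any general-purpose chatbot. In line with Whitehouse’s point, it labels the numbers as device estimates rather than ground truth.

```python
import statistics

# Hypothetical nightly readings exported from a wearable; real export formats
# vary by brand. Wearable HRV and sleep figures are proxies/estimates, so this
# sketch summarises trends against a personal baseline rather than presenting
# any single night as ground truth.
READINGS = [
    {"date": "2024-05-01", "sleep_hours": 7.2, "hrv_ms": 48},
    {"date": "2024-05-02", "sleep_hours": 6.1, "hrv_ms": 41},
    {"date": "2024-05-03", "sleep_hours": 7.8, "hrv_ms": 52},
    {"date": "2024-05-04", "sleep_hours": 5.4, "hrv_ms": 38},
]

def build_prompt(readings):
    """Condense raw readings into a baseline-aware summary for a chatbot."""
    sleep = [r["sleep_hours"] for r in readings]
    hrv = [r["hrv_ms"] for r in readings]
    latest = readings[-1]
    return (
        f"Over {len(readings)} nights my average sleep was "
        f"{statistics.mean(sleep):.1f} h (last night: {latest['sleep_hours']:.1f} h) "
        f"and my average wearable-estimated HRV was {statistics.mean(hrv):.0f} ms "
        f"(last night: {latest['hrv_ms']} ms). These are consumer-device "
        "estimates, not clinical measurements. What general, non-diagnostic "
        "factors might explain the recent dip?"
    )

print(build_prompt(READINGS))
```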

Accuracy concerns

OpenAI says it has worked with more than 260 physicians over the past two years to help shape ChatGPT Health’s responses. But while AI software already deployed in clinical settings is regulated and tested for healthcare use, general-purpose chatbots are not required to meet the same standards. Large language models (LLMs) like ChatGPT draw on thousands of internet sources, some more reliable than others.

Doctors, on the other hand, consult the latest peer-reviewed, empirical research, which often resides behind paywalls in medical journals. They’re also trained to be empathetic — something AI chatbots have historically struggled with, and a crucial factor in managing the mental health risks associated with obsessively tracking our health.

Healthcare experts, like those in other areas of AI development, say it’s crucial that human practitioners stay in the loop. While ChatGPT Health could save doctors time, a trained, licensed physician still needs to sign off on its output, they say.

“Misleading statements about medicine have been a concern for a while. On one hand, ChatGPT could be more accurate than social media. On the other hand, patients should absolutely not self-diagnose because a vast amount of nuance is incorporated by physicians to calibrate a diagnosis,” says Dr. Charlie Cox, a consultant at Reborne Longevity. “It’s important that there are very clear guardrails in place — particularly for more vulnerable individuals — to reduce risk of misdiagnosis, but on balance the net benefit should be positive.”


