ChatGPT has rolled out a feature where you can give it access to your medical information, but the analyses aren’t exactly real medical advice.
AI can do a lot of things. It can edit your text and polish up messages to friends and family, making them more professional or even slightly longer than “l8r”. It can create audio by finding the patterns in learned data, and do much the same for imagery, though the learnings may or may not be as legitimate depending on how the model was trained.
And depending on what it learned from, it may even be able to provide answers to complex questions, or find the pattern linking a question to the answers it has seen before.
A large language model isn’t an absorbed encyclopaedia of everything, as many people think. Rather, it’s a model of the patterns running through a lot of information, the questions and the answers that tend to follow them. Ask it the right question and it will likely give the right answer, but ask it a question it doesn’t really understand, and it may give you totally wrong information. It may even be made up.
That false information isn’t technically “made up” out of nothing, though. What we’ve come to call an AI hallucination is a guess: something tangible married to something pattern-based, producing an answer that sounds plausible but isn’t true. And that can be a problem for any information you rely on.
Travel information about real-life destinations? That’s important, especially when AI hallucinations can mess it up. And information about your health? Yep, you better believe that’s important, especially if there’s a risk of a hallucination there.
Possibly the most important information you could ever talk to someone about, except for maybe your finances.
And interestingly, that’s exactly what ChatGPT is now offering to analyse.
To kick off the year, OpenAI’s ChatGPT is adding ChatGPT Health, a feature that can look at your medical records, health information, and data from apps, and hold conversations with you about your health.

OpenAI says it was developed in collaboration with physicians, with more than 260 of them reportedly working with the system over roughly two years to help it understand health that little bit better.
“This collaboration has shaped not just what Health can do, but how it responds: how urgently to encourage follow-ups with a clinician, how to communicate clearly without oversimplifying, and how to prioritise safety in moments that matter,” wrote OpenAI in its press release about the launch of ChatGPT Health.
“This physician-led approach is built directly into the model that powers Health, which is evaluated against clinical standards using HealthBench, an assessment framework we created with input from our network of practicing physicians.”
It all sounds interesting, and OpenAI says the feature is coming soon, but it’s one we’d think very carefully about before sharing any data with.
While OpenAI says health information won’t be used to train ChatGPT’s models, and will apparently be kept separate from everything else inside ChatGPT, the risks of sharing your data aren’t small.
It’s not approved by the TGA
In Australia, health-related technology has to undergo a process to ensure it works, does what it says, and doesn’t pose serious risks to consumers. It’s not just technology the TGA looks at, but medicines in general. The TGA is the Therapeutic Goods Administration, and it covers lots of different things, technology included.
Australians have quite a few consumer-focused departments working for them, and much like the ACCC protects consumers when it comes to retailers, guarantees, and scams, the TGA is one of the lines of defence on health tech.
It’s why some smart devices take longer to clear local checks, and may be why your smartwatch gets a feature a little later than the US did. Australia saw the rollout of wrist-based ECGs a little later for that reason, and the same was true for the recent sleep apnoea addition.
Someone at a government department needs to check it all out, rather than leaving you to rely on it solely in the hope that it works.
But when it comes to AI, things are a little bit different.
Instead of analysing everything, the TGA has guidance on the compliance of AI and software, using a risk-based approach to determine what’s exempt.
Dive in a little deeper, and you may find that guidance, which hasn’t been updated in well over a year, could well define OpenAI’s ChatGPT Health as something to be excluded, largely because it’s intended for general consumer use and isn’t meant to be used in a practice or for diagnosis.
In fact, OpenAI directly states that its service isn’t for diagnosis.
“Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment,” reads OpenAI’s press release.
“Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations.”

Asking a question
While it’s somewhat reassuring that ChatGPT Health isn’t going to be used for diagnoses, and we seriously hope your GP doesn’t have it running in their practice, the question that remains is this:
If ChatGPT Health needs your health data but doesn’t offer advice, what is the point?
Having conversations about what it can understand is still very risky, because it just might provide the wrong information, and it will be nowhere near as useful as your actual doctor or GP team, who can see your records and have the expertise to understand your situation.
ChatGPT Health is clearly not a diagnostician, and no amount of playing doctor will turn it into Dr House.
If anything, the risks are perhaps more severe when you’re talking to a chatbot about your health instead of a specialist who has studied the area and practises it in real life.
Chatbots have already proven problematic for mental health, with at least one individual taking his own life, not to mention the risks of AI psychosis, a new term describing people who become dependent on chatbots for emotional support to a dangerous degree.
A previous study by the CSIRO even showed that earlier versions of ChatGPT could be confused by evidence about health. While the technology and models have changed numerous times since then, there’s still a clear risk that the system will get something wrong, giving you inaccurate or incorrect information.

Be cautious about handing over health data to another source
It’s perhaps beside the point that ChatGPT’s models will have improved in the time since these studies were run, because the risks are still there. The models are only ever going to be so good, and they aren’t trained on every possibility.
They’re not doctors. They don’t have experience, just a collection of knowledge in an order we’re not privy to.
As it is, handing any personal and confidential data to someone who doesn’t need it is already risky enough. But to simply hand it over to a chat system in the hope it can talk to you about your medical situation, when that’s precisely what a doctor is for, well, it boggles the mind.
We’re not advocating for being a Luddite, either. Technological progress is a good thing, and can lead to genuine advances.
But your medical information is private. That’s as simple as it gets. No one has the right to peer into the soul of your personal health outside of those who treat you.
And while it’s entirely possible that ChatGPT’s health system could be useful for preparing people for conversations or even understanding health insurance requirements, and may even find a pattern medical specialists have missed, it’s also entirely possible that the health data will be misread, misleading, breached, or accidentally seen by third parties who shouldn’t have access.
With at least one Australian medical body dealing with the fallout from a breach right now, that’s the last thing anyone else will want to see.
The simple reality is that your doctor probably isn’t leaning on ChatGPT to answer medical questions just yet, and neither should you.
That’s exactly why you have a doctor and a GP, and what they go to university for. Until the technology is ironclad (and even then), we’d think twice before handing an AI service your health data, and really just leave it to the pros.
You can use ChatGPT for a bunch of other things if you want, but your doctor and medical experts are the ones you should be talking to about your health. That is literally what they’re there for.