ChatGPT and health: clinical opportunity or risk for the Doctor–Patient relationship?
With the arrival of ChatGPT Health, patients are becoming increasingly “proactive.” But who really governs the clinical decision? On January 7, 2026, OpenAI announced the launch of ChatGPT Health, a new area within the popular chatbot dedicated to health conversations, where users will be able to upload medical records and connect them with wellness apps (including Apple Health and MyFitnessPal) to receive personalised answers. The leap beyond “Dr. Google” is clear.
Discussing this is Professor Franco Bassetto, president of SICPRE and director of the Plastic Surgery Clinic at the University of Padua, who analyses the limits and potential of AI in medicine, between decision support and risk of misuse, drawing a clear line: artificial intelligence can help doctors, but it cannot replace their judgment.
From self-diagnosis on Google to the era of ChatGPT Health
For years it was called “Dr. Google diagnosis.” Today, self-diagnosis has a more sophisticated face: more structured answers and more reassuring language. The advent of ChatGPT Health marks a paradigm shift: patients no longer just seek information, but confirmation.
It is in this new, still unstable balance that Professor Bassetto’s reflection fits. «Undoubtedly, at any of our conferences now we talk about artificial intelligence». Not as a trend, but as infrastructure that is here to stay.
In complex clinical settings, such as breast units, the impact of artificial intelligence varies considerably. «I have personally seen very different contributions that AI can give to different specialties», Bassetto explains, highlighting how highly technology-intensive disciplines benefit more than those based mainly on manual experience and clinical evaluation.
The inviolable boundary of responsibility
With the arrival of increasingly “intelligent” tools, the temptation to shift the decision-making center of gravity is real. «The critical boundary is that in the doctor–patient relationship, the perception of responsibility is important», says Bassetto.
The risk is not technological, but cultural. «It is absolutely a great risk to think that a doctor could in some way delegate clinical judgment about a patient to an algorithm», he clarifies, reiterating that AI must remain a tool, not the decision-maker.
If patients become more proactive, doctors risk being squeezed between bureaucracy and expectations. For this reason, Bassetto warns against distorted uses of technology. «Artificial intelligence must not serve to reduce the time in the doctor–patient relationship; on the contrary, it can be an opportunity to dedicate more time to the relationship», he stresses.
The phenomenon of patient self-diagnosis is not new. «Among ourselves we talk precisely about Dr. Google diagnoses», Bassetto recounts. The difference today is that AI does not just list symptoms. «Artificial intelligence is very fascinating because it gives articulated, coherent, structured answers», making it harder for patients to distinguish between information and clinical truth, with the risk of underestimating situations or falling into paranoia. AI is accommodating, and if not “well trained,” it risks inventing diagnoses or downplaying symptoms by emulating the user’s sentiment.
The dark side: invented citations and false authority
Often the machine’s imagination takes over: publications that never existed, unverified sources. The biggest risk for a patient is facing data that go unchallenged and carry a false air of authority. «There are enormous risks with artificial intelligence because it can even invent scientific publications», Bassetto warns, recalling concrete cases of nonexistent citations.
From this critical issue comes a clear reflection on skills. «I am convinced that artificial intelligence will create a further gap between the incompetent, the average, and the skilled», because only those with training and experience can detect errors, biases, and shortcuts.
There is no demonisation, however. If well governed, AI can be a driver of progress. «It can carry out a detailed analysis of the literature and provide this support to the surgeon – the professor explains – It saves me a lot of bureaucratic time». AI, he notes, can free up space for care and prevent missteps. «Improving efficiency does not mean doing more visits within the hour; it means trying to return to the right timing».
He continues with an example: «In my career I have served as a court-appointed expert witness in legal cases where a patient died of melanoma metastases because they never went to collect the histology report», highlighting how ineffective management of clinical pathways, where a patient misses follow-ups or the reading of a histology exam, can have irreversible consequences.
«The surgeon must remain in the operating room, but today I have almost 300 emails to read per day, so either I answer emails or I am in the operating room. But my job is surgery, so I must be with the patient, use the scalpel, and delegate the bureaucratic load to technology».
AI: another perspective in scientific research
In the debate on AI applied to health, there is an area where technology can find a less conflictual role: second opinions. «The final decision in any diagnostic process always belongs to the doctor», clarifies Professor Bassetto, bringing the issue back to clear responsibility. But, as the SICPRE president notes, second opinions are not an exception; they are already common practice in medicine.

He points to established practice in fields such as pathology. «I have a melanoma diagnosis made in pathology in Padua», he explains, describing cases where a patient or doctor decides to compare the report with another center. «It happens that I am from Bologna and my general practitioner says, let’s also ask our pathologists».
In this framework, AI can fit in as an additional layer of analysis. «This second opinion can be provided by artificial intelligence, perhaps yes», Bassetto observes.
«I have seen it, for example, among radiologists», where AI can be used to navigate the many hypotheses a diagnostic image can generate. «If I have a nodule inside a lung, AI gives you a whole series of characteristics of that nodule», helping to narrow the field.
In this balance, the growing patient proactivity fueled by tools like ChatGPT Health finds a natural limit: interpretation. The algorithm can analyse, compare, suggest. But it is the doctor who reads the data in light of clinical history, comorbidities, context, and the real consequences of a choice.

In short, AI is and must remain a tool in the professional’s hands, like a scalpel. «A scalpel is dangerous in the hands of an inexperienced person – says the professor – Artificial intelligence is the same. It is welcome in the national health system, in cutting bureaucracy, in simplifying pathways, but certainly not in diagnostics», Bassetto concludes. «However, I am an optimist. I hope artificial intelligence will not serve the mediocre to justify their mediocrity, but will be used by the excellent, the expert, the authoritative, to give further authority to their thinking».