Using generative AI tools for self-treatment a matter of concern, says health expert

NewsDrum Desk

New Delhi, Feb 27 (PTI) A growing tendency to use generative AI tools for self-diagnosis and self-treatment is a matter of major concern, a senior health expert has said, stressing that such tools are not substitutes for human clinical judgement.

Dr Jitender Nagpal, the deputy medical director of Sitaram Bhartia Institute of Science and Research here, said artificial intelligence is increasingly becoming an enabling layer in healthcare delivery rather than a replacement for human decision-making.

In India, where healthcare systems operate under constraints of time, workforce, and scale, AI has the potential to improve efficiency, consistency, and safety across the continuum of care, he said.

One of the most visible transformations has been in the speed and accuracy of administrative and clinical processes, such as record keeping, documentation, audits, and reporting, which have traditionally consumed a substantial proportion of clinicians' time.

By assisting with structured documentation, summarisation, and review, AI allows healthcare professionals to focus more on direct patient care.

At a systems level, AI is also supporting better standardisation of processes through assistance in drafting and updating clinical protocols, standard operating procedures, and care pathways, thereby reducing unwarranted variation in care delivery.

On areas where caution is needed, Dr Nagpal said that while AI has shown significant promise, its adoption must be cautious, ethical, and context-appropriate.

"A major concern is the growing tendency toward self-diagnosis and self-treatment based on AI-generated outputs, which can be unsafe and misleading. AI tools are not substitutes for clinical judgement, physical examination, or contextual understanding of a patient's social and medical background," he said.

Overconfidence in AI outputs, by both patients and healthcare professionals, can lead to errors if limitations are not clearly understood, he cautioned.

Another important risk is the use of AI without adequate training or without a clear understanding of what a specific tool is designed to do, and equally, what it is not designed to do, Dr Nagpal pointed out.

AI outputs are also highly dependent on the quality and completeness of information provided, he said, adding that using these tools without sufficient clinical context can result in inaccurate or biased outputs.

"Finally, issues related to data privacy, confidentiality, and governance must remain central, particularly in healthcare systems that handle sensitive personal health information," he stated.

When used responsibly, AI can significantly enhance both patient care and research, the doctor said. In clinical settings, it can improve patient safety by supporting healthcare professionals in carrying out routine administrative and clinical tasks more efficiently and accurately.

This includes assisting with clinical documentation, discharge summaries, audit preparation, and quality improvement reviews, thereby reducing errors associated with fatigue and time pressure, he added. PTI PLB NSD NSD