The publication of the Viewpoint "The First AI Drug Prescriber" by Daniel G. Aaron and Christopher Robertson in the Journal of the American Medical Association (JAMA) marks an undeniable turning point in the development of AI in healthcare. Aaron and Robertson discuss Utah's 2026 partnership with an AI vendor, an arrangement that permits drugs to be prescribed independently of human intervention. This is more than a step forward in technology; it is a major change in how we conceive of clinical accountability, regulation, and patient agency.
By its nature, AI-prescribed medication merges diagnostics and prescribing. Whereas AI previously served as an auxiliary tool in imaging, triage, and documentation, the Utah project makes it a full-fledged participant in the medical process. Indeed, the Utah software "replicates the clinical decision-making process a licensed physician would follow," beginning with prescription renewals and then broadening to nearly 200 medications, including statins, antidepressants, and anticoagulants (Aaron & Robertson). From an advocacy standpoint, such innovations promise improvements in health access, wait times, and equitable care delivery, benefits Andrew Pavia has suggested in his analysis of the American healthcare system. In a country of such diverse geography and vast rural areas as India, these benefits would be especially welcome.
However, the use of AI in clinical practice also poses risks. Medicine comprises processes and skills that go beyond algorithms. As the authors note, the quality of AI tools must be evaluated using the same criteria applied to other diagnostics: sensitivity, specificity, positive and negative predictive value, and cost-effectiveness. Regulatory agencies such as the US Food and Drug Administration must therefore oversee the deployment of these products. Yet concerns about regulators' competence and shifting evaluation parameters raise the question of whether today's frameworks are sufficient.
Amarlal Dave highlights one of the crucial weaknesses of AI tools: input manipulation. Unlike physicians, who can detect contradictions through physical examination and clinical insight, an AI algorithm relies solely on the information users provide, creating a risk of inaccuracy. The absence of physical examination remains one of the main limitations for now, though it may eventually be offset by integrating wearable diagnostic devices and remote examinations.
Moreover, from an ethical standpoint, there is the question of accountability for adverse outcomes caused by AI error. Indeed, Aaron and Robertson underscore the need to distinguish medical malpractice from product liability: the former implies a patient's injury due to a physician's negligence, while the latter refers to a defective product. This question is even more pressing in the Indian setting, where the legal regulation of medico-legal aspects of new technologies is still under development.
Another aspect to consider is the potential risk of overmedicalization and misuse. Given the wide prevalence of self-medication practices, AI-prescribing systems may contribute to inappropriate medication and overmedication, a pattern already typical of many countries. Real-world monitoring of AI algorithms during their implementation is critical to avoiding such negative effects.
Nonetheless, the seemingly inevitable arrival of AI-prescribing systems must also be considered in light of patient behaviour. In particular, as Andrew Pavia notes, current social tendencies lean toward consumerism, with a preference for self-care over professional assistance. The multi-billion-dollar market for OTC medications offers further confirmation of this trend. AI-prescribing appears to respond to, rather than contradict, these tendencies.
Dr. Prahlada N.B.
MBBS (JJMMC), MS (PGIMER, Chandigarh),
MBA in Healthcare & Hospital Management (BITS, Pilani),
Postgraduate Certificate in Technology Leadership and Innovation (MIT, USA),
Executive Programme in Strategic Management (IIM, Lucknow),
Senior Management Programme in Healthcare Management (IIM, Kozhikode),
Advanced Certificate in AI for Digital Health and Imaging Program (IISc, Bengaluru).
Senior Professor and former Head,
Department of ENT-Head & Neck Surgery, Skull Base Surgery, Cochlear Implant Surgery,
Basaveshwara Medical College & Hospital, Chitradurga, Karnataka, India.
My Vision: I don’t want to be a genius. I want to be a person with a bundle of experience.
My Mission: Help others achieve their life’s objectives in my presence or absence!
My Values: Creating value for others.
References:
- Aaron DG, Robertson CT. The First AI Drug Prescriber. JAMA. 2026; doi:10.1001/jama.2026.3533.
- Pavia A. AI in medicine must be evaluated for performance using tools and standards for other diagnostics. JAMA. 2026.
- Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019.
- US Food and Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. FDA; 2021.