
Daniel Buckland, MD, PhD, contributed to an American College of Emergency Physicians task force outlining risks, safeguards and best practices for artificial intelligence in emergency departments.
Artificial intelligence is rapidly becoming integrated into health care, raising new questions about safety, bias and oversight. A UW–Madison physician is helping address those questions at the national level, contributing to guidance on how AI should be used responsibly in emergency medicine.
In June 2024, Dr. Daniel Buckland, an associate professor of emergency medicine at the University of Wisconsin School of Medicine and Public Health and an emergency physician with UW Health, was appointed to an Artificial Intelligence Task Force convened by the president of the American College of Emergency Physicians. ACEP is the nation’s largest professional organization for emergency physicians, representing more than 38,000 members.
The task force, composed of emergency physician leaders and AI experts, was charged with studying how artificial intelligence is being used in emergency medicine, identifying potential benefits and risks, and developing guidance to help physicians and health systems adopt the technology safely and responsibly.
Buckland, a physician-scientist whose work focuses on systems risk mitigation and automation in clinical environments, said the rapid rise of generative AI is creating a pivotal moment for emergency medicine.
“AI can bring significant change to emergency departments, where our patient population is diverse and the stakes are high,” Buckland said. “But innovation without safeguards carries risks. The work done by the ACEP AI Task Force was needed for our specialty to lead change responsibly.”
Assessing the growing role of AI in emergency medicine
In 2025, the task force published three peer-reviewed reports in JACEP Open examining how emergency physicians are using AI and identifying emerging risks related to medical practice, workforce demands, patient data privacy and potential bias. The reports are among the first professional association–level analyses of AI use in emergency medicine.
- One report presents findings from a national survey of emergency physicians conducted between September 2024 and February 2025. Most respondents said they believe AI can improve efficiency and quality of care. Many also cited concerns about output bias, data privacy and limited regulatory oversight. The report calls for clearer AI use guidelines and structured training to improve AI literacy among physicians.
- A second report examines legal and ethical risks associated with generative AI platforms, including the potential entry of protected health information into third-party tools without adequate safeguards. The authors recommend formal use agreements and clear institutional policies to protect both patients and providers.
- A third report provides a framework for identifying and addressing bias in AI tools used in emergency departments. It describes how bias can arise in training data and in the interpretation of AI-generated recommendations. The authors call for local validation, transparency and ongoing monitoring as AI applications advance.
Together, the reports highlight several priorities for responsible adoption of AI in emergency medicine: maintaining physician involvement in clinical decision-making, establishing clear regulatory standards, protecting patient data and ensuring transparency when AI is used in patient care.
After completing its initial work, the task force transitioned into a permanent Artificial Intelligence Committee within ACEP to guide policy, monitor AI’s impact on care and data privacy, develop educational resources and advocate for equity in AI modeling. Buckland serves on the committee.
Responsible AI use in practice at UW and beyond
According to Buckland, AI tools have the potential to reduce administrative burden, support faster diagnoses, enhance imaging interpretation and help physicians manage complex streams of clinical data, among other applications. Used thoughtfully, he said, these technologies could strengthen both personalized care and the physician-patient relationship.
“AI will impact the future of emergency medicine,” Buckland said. “Our responsibility as emergency physicians and health care advocates is to ensure it does so safely, fairly and transparently. Modernization and accountability must move together.”
He added that because generative AI in emergency medicine is evolving rapidly, both clinicians and health systems should carefully evaluate potential risks alongside potential benefits when adopting new technologies.
According to Dr. Brian Patterson, AI integration at UW Health and the University of Wisconsin School of Medicine and Public Health brings together experts from across fields, follows evidence-based standards, and carefully considers how AI systems function in real clinical settings.
Patterson is the physician administrative director of clinical artificial intelligence for UW Health, and an associate professor of emergency medicine at the UW School of Medicine and Public Health. He leads clinical oversight of AI initiatives across the health system, including efforts involving generative AI and large language models. Patterson also serves on ACEP’s Artificial Intelligence Committee.
“AI-driven tools should be deployed in ways that protect patients and support clinicians,” he said. “That means rigorous validation, ongoing monitoring, and active clinician oversight to safely integrate these technologies into clinical workflows, ensuring they are reliable and practical in real-world care.”
Patterson added that as these tools become more common and central to medical practice, it is critical they are implemented in close partnership with clinical care teams.
Artificial intelligence is being introduced into clinical workflows, including ambient documentation systems and predictive analytics designed to support clinical decision-making. Researchers are exploring AI applications in areas such as medical imaging and health equity, including tools that help clinicians detect disease earlier and standardize screening and referral to addiction services. Patterson said the work underway at UW ensures that these technologies are used to advance care responsibly.
For Buckland, contributing to national guidance on AI is one way to extend that work beyond UW and help steward how health systems across the country adopt the technology while protecting the high standards patient care requires.
The content of this story is solely the responsibility of the authors and does not necessarily represent the official views of the American College of Emergency Physicians.