Maybe you’ve used “artificial intelligence,” or maybe you can’t stand the idea of it. Either way, there are plenty of pros and cons to consider before logging into a chatbot, especially when it comes to your health.

What is AI? 

When we talk about “artificial intelligence” in today’s applications, a number of different technologies fall under this umbrella term. Much of the excitement over the past few years has focused on a technology called a large language model, or LLM (think ChatGPT by OpenAI or Claude by Anthropic). These are computer systems trained on vast amounts of data that use patterns in that data to generate responses.

Think of an LLM this way. Imagine going into a library, reading all of the books, listening to all of the audio discs, and watching all of the movies in the collection, and then being asked questions about any topic. Your answers would pull from what you had read, heard, or watched, summarizing a great deal of knowledge into shorter responses.

Now imagine doing the same thing with most of the libraries in the entire world. LLM (or AI) programs have been given a vast amount of data to reference anytime someone types a prompt into one of the many AI systems currently available. People can, and do, use these systems for a wide range of applications, and technology companies continue to look for ways to add AI to their existing products. Microsoft, for example, has introduced its AI assistant, Copilot, across the Microsoft 365 suite of programs, including Word, Excel, PowerPoint, and Outlook.

There are many pros and cons to this technology and to the ways people are using it. AI is evolving quickly, and it’s hard to predict how helpful or harmful it may be in the long run.

What are some ways AI is being used in healthcare? 

AI has made its way into healthcare. One review of early studies evaluating AI in healthcare points out that it has been used to improve surgery, support better data-driven decisions within the healthcare system, empower patients, and save money through earlier diagnosis. In one early test, researchers found that a trained AI program could help clinicians diagnose elderly patients presenting with Alzheimer’s disease or moderate to severe depression. In practice, that might mean patients with cognitive decline receive a diagnosis and treatment faster, leading to better outcomes.

One example of AI in healthcare has shown up recently on television. At the beginning of season two of HBO Max’s “The Pitt,” a doctor suggests using an AI tool called an “ambient scribe,” which listens to the conversation between doctor and patient during a visit and automatically generates a clinical note. The doctor must still thoroughly review the resulting documentation, but during the visit they can focus on the patient instead of typing notes into a computer. This scene is based on technology that is already available in some healthcare offices.

How Brown University Health is using AI to improve care for patients 

“As an organization, we’re embracing safe, responsible use of AI to make it easier for patients to access care and improve care team workflows,” says Matthew Butler, director of Artificial Intelligence at Brown University Health. “We’ve created robust, cross-functional work groups to scrutinize any AI use case, ensuring the right guard rails and protections are in place to protect patient and staff safety.” 

One of the first ways patients may benefit from AI is ambient scribe technology, which they may soon encounter at a Brown Health Medical Group Primary Care visit. This technology allows providers to be more present: rather than typing documentation into a computer while interacting with the patient, a provider can focus on the person in front of them. After the visit, the provider reviews the documentation generated by the ambient scribe tool to ensure that it is accurate, keeping clinical and professional judgment in the loop while allowing a more personal connection between a patient and their healthcare provider.

AI may also help patients receive cutting-edge treatments. It’s still early days, but some team members are exploring ways to incorporate AI into clinical trial processes. These programs could match patients with clinical trials, quickly confirming that they meet the necessary criteria and potentially shortening the time between diagnosis and treatment, which could dramatically improve outcomes for patients receiving novel treatments.

In the future, patients using MyChart may find an AI virtual assistant, similar to the help-chat features on many websites, that can help them easily find information about upcoming appointments, refill prescriptions, or get real-time responses to simple questions. For patients in the hospital, an AI help-chat could provide information about parking, visitation hours, meals, and more.

Increasing patient access and understanding 

One of the earliest and most widespread uses of AI at Brown University Health has directly benefited patients. Every patient, not just at Brown University Health but at every clinic in the United States, has to sign a consent form outlining the treatment they are receiving, its risks and benefits, and alternative options. These consent forms can be challenging to read. Dr. Curtis Doberstein, Dr. Rohaid Ali, and Dr. Fatima Mirza led a group of Brown University Health providers that used AI to analyze more than a dozen consent forms from different medical centers and create a simpler form, making it easier for patients to understand what is being asked and to provide informed consent to treatment. The new forms are written at an eighth-grade reading level, instead of the college level of the old forms, and they are significantly shorter while still containing all the information a patient needs.

Enabling patients to use their voice – literally 

Dr. Ali and Dr. Mirza also collaborated with colleagues to restore a young woman’s voice after she lost it. Alexis Bogan had a life-threatening tumor near the back of her brain; after surgery to remove the tumor, she underwent months of rehabilitation therapy to regain her ability to swallow, but her voice never fully returned.

Using a short clip of Alexis’s voice from a video and an AI program, the team was able to generate a similar-sounding voice that Alexis can control with her phone. This technology enables Alexis, and other patients who lose their voice to disease, stroke, or other causes, to communicate in a way that feels connected to their sense of self, as opposed to the robotic-sounding voice they might otherwise use.

How AI can be bad for your health 

For all of the many positive uses of AI technologies, there are also many reasons to tread cautiously.

Hallucinations and wrong diagnoses 

One of the main criticisms of AI in general, not just in healthcare, is its tendency to “hallucinate,” or provide answers that are not based on facts or even reality. These programs are often designed to keep users engaged, and in doing so they will sometimes invent the answers most likely to win a user’s trust, even when those answers are badly wrong.

One researcher at Duke University School of Medicine is studying how AI chatbots give responses that are technically correct but dangerously lacking in context. The team has found that chatbots, which are programmed to be “people pleasers,” can provide dangerous responses: one user asked how to perform a medical procedure at home, and the chatbot explained how to do so, even after warning that it should only be done by trained professionals. In other cases, chatbots provided the wrong diagnosis or medication information. By analyzing real-world conversations patients had with chatbots, the researchers found that “the way patients ask questions looks nothing like the way [the chatbots are tested]—real patients ask questions that can be emotional, leading, and sometimes risky.”

Lack of regulation and large-scale representation 

AI chatbots are trained on the data provided to them, and every conversation may feed them more. If a user shares sensitive data, such as their health history, the chatbot and its parent company may now have access to that information, and there is no regulation on how that data is secured, how long it is stored, or how the company may use it in the future.

There’s also no telling what the parent companies may change. A few years ago, the National Eating Disorders Association (NEDA) worked with a company to develop a rules-based chatbot intended only to give users pre-programmed answers. Without NEDA’s knowledge, the company pushed out an upgrade that added generative AI functionality, so a chatbot designed to help and support people with eating disorders began to do the exact opposite, generating answers that emphasized weight loss and extreme dieting to people who were already struggling.

There are understandable reasons why people may turn to chatbots for medical concerns: a lack of affordable healthcare options, or, in more rural areas, a lack of healthcare options, period. But the people most likely to have difficulty accessing care are also the people most likely to be underrepresented in the chatbots’ training data. A general chatbot’s knowledge depends on the information it was given, and historically, medical research has focused primarily on white men. That has been changing, with more studies designed to include people of all backgrounds, but the sheer volume of older data still far outweighs the newer, more inclusive data. One report estimates that billions of people are essentially “invisible in diagnostic models, risk assessments, and treatment algorithms.”

AI and mental health 

One of the most well-known negative effects of AI use so far may be its impact on mental health. According to a National Alliance on Mental Illness survey, 12 percent of adults say they are likely to use AI chatbots for mental health care in the next six months, and one percent say they already do. Many AI chatbots, however, have no training in providing mental health care.

This can have dangerous consequences. A team at Brown University studied how large language models respond to mental health prompts and found several ethical risks, including poor safety and crisis management. The research team also noted that the chatbots often exacerbated users’ feelings of isolation or depression. Without guardrails, an AI chatbot can deepen a person’s mental health crisis without providing any appropriate support.

How to use AI safely 

AI programs can be helpful in many ways, and they can be harmful in many others. Using AI to automate repetitive tasks or analyze large amounts of data can save us time and let us spend our lives doing things we enjoy with people we care about. If you choose to use an AI chatbot to discuss your health, be mindful of the drawbacks, risks, and limitations, and always call your primary care provider or visit an urgent care center first for trusted medical care.

Content reviewed by Dr. Adam Landman, chief digital officer for Brown University Health, and Matthew Butler, director of Artificial Intelligence at Brown University Health.

Brown University Health Blog Team

The Brown University Health Blog Team is working to provide you with timely and pertinent information that will help keep you and your family happy and healthy.