AI “Hallucinations” When Attempting to Research Medicare Compliance Issues
In recent months, generative AI has become all the rage, with OpenAI’s ChatGPT launching for public use in November 2022 and several competitors following, such as Google Bard and Microsoft Bing Chat. Generative AI programs such as these are trained on massive amounts of information and learn to generate outputs that are statistically likely responses to a user’s prompt. With a program like ChatGPT, one can ask it to create a story in the style of the Bronte sisters, write a traditional 12-bar blues song, or provide the most recent copy of a Medicare manual. Moreover, the prompts can be “natural language” prompts. Instead of learning specific search terms or ways to filter out results, one can simply write naturally and ask, “What is the definition of PHI under the HIPAA Privacy Rule?” and the software should produce the proper result.

We say “should” because there are instances in which the software “hallucinates” results; in other words, the software uses its knowledge base to create an answer built from elements commonly found in the material the user is asking about, but which the program has invented out of whole cloth. Consider, for example, the recent case in New York in which lawyers used ChatGPT to conduct caselaw research, only to have ChatGPT produce cases that did not actually exist.

Our own clients have fallen prey to such AI “hallucinations” when attempting to research Medicare compliance issues. In one instance, a client used two different AI chat programs to look up a Medicare rule, only to receive a different answer from each program, and both answers were entirely fabricated by the AI and wrong. (We know. We checked. Neither the cited chapter and section numbers nor the content presented by the chatbot actually existed.)

Despite these hurdles, generative AI will likely improve over time and become more reliable. Nevertheless, we strongly encourage our clients not to rely solely on such resources and to consult with us when presented with information from such software. In the future, AI may prove to be a powerful tool within healthcare, but given the risks of relying on a “hallucinated” result, we advise consulting with legal counsel first. We will not be relying on these resources, either.