ETHICAL AND LEGAL CHALLENGES OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE

STEVEN MALEN, PharmD, MBA

Dr. Steven Malen graduated with a dual degree, a Doctor of Pharmacy (PharmD) and a Master of Business Administration (MBA), from the University of Rhode Island. Over his career, he has worked as a clinical pharmacist in the retail, specialty, and compounding sectors. He has specialized in and taught topics ranging from vaccines to veterinary compounding. Dr. Malen has also written a science fiction novel and co-founded and taught the concept of Patient Empowered Blockchain (P.E.B.). Currently, Dr. Malen continues to write, teach, and consult for various companies in the healthcare sector.

 

Topic Overview

Artificial intelligence (AI) has applications across many fields. It uses computer technologies to learn and solve problems. Artificial intelligence entered the field of medicine decades ago. As with other disciplines, healthcare AI is data-driven and uses algorithms to uncover health patterns and outcomes. Artificial intelligence is not currently being designed or developed to replace human clinicians but to repurpose their roles and improve efficiency in healthcare. Artificial intelligence models also introduce ethical and legal complexities, particularly when algorithms become sophisticated and act as "black boxes" with outcomes that are difficult to decipher. “Black-box medicine” is a healthcare term that describes a data-driven AI healthcare recommendation in which the basis for the recommendation is not fully understood. However, artificial intelligence is permanently part of healthcare. The goal is to use it to deliver safer and more effective healthcare, leading to improved patient outcomes while promoting fairness and protecting patient autonomy and privacy.

 

Accreditation Statement


RxCe.com LLC is accredited by the Accreditation Council for Pharmacy Education (ACPE) as a provider of continuing pharmacy education.

 

Universal Activity Number (UAN): The ACPE Universal Activity Numbers assigned to this activity are:

Pharmacist  0669-0000-23-191-H03-P

Pharmacy Technician  0669-0000-23-192-H03-T

Credits: 2 contact hours of continuing education credit

 

Type of Activity: Knowledge

Media: Internet/Home study

Fee Information: $6.99

 

Estimated time to complete activity: 2 contact hours, including Course Test and course evaluation

 

Release Date: November 28, 2023

Expiration Date: November 28, 2026

 

Target Audience: This educational activity is for pharmacists.

 

How to Earn Credit: From November 28, 2023, through November 28, 2026, participants must:

 

Read the “learning objectives” and “author and planning team disclosures;”

Study the section entitled “educational activity;” and

Complete the Course Test and Evaluation form. The Course Test will be graded automatically. Following successful completion of the Course Test with a score of 70% or higher, a statement of participation will be made available immediately. (No partial credit will be given.)

Credit for this course will be uploaded to CPE Monitor®.

 

Learning Objectives: Upon completion of this educational activity, participants should be able to:

 

Identify how artificial intelligence systems are used in healthcare

Describe the ethical challenges to using artificial intelligence in healthcare

Describe the legal challenges to using artificial intelligence in healthcare

Review the role of artificial intelligence in the pharmacy setting

 

Disclosures

The following individuals were involved in developing this activity: Steven Malen, PharmD, MBA, and Pamela Sardo, PharmD, BS. Pamela Sardo was an employee of Rhythm Pharmaceuticals until March 2022 and has no conflicts of interest or relationships regarding the subject matter discussed. There are no financial relationships relevant to this activity to report or disclose by any of the individuals involved in the development of this activity.

 

© RxCe.com LLC 2023: All rights reserved. No reproduction of all or part of any content herein is allowed without the prior, written permission of RxCe.com LLC.

Introduction

 

Artificial intelligence in healthcare has outgrown the current ethical and legal structures governing the delivery of healthcare in the United States and worldwide, potentially eroding the protection of patient autonomy and privacy. As artificial intelligence technologies become more complex, they give rise to “black box medicine,” making it difficult, if not impossible, to understand how these technologies work. This threatens patient autonomy since patients cannot be informed sufficiently to make and consent to their own healthcare decisions. Protecting patients’ privacy is also problematic since artificial intelligence relies heavily on data collection and sharing. Nevertheless, artificial intelligence is permanently part of healthcare. The goal is to use it to deliver safer and more effective healthcare, leading to improved patient outcomes while promoting fairness, protecting patient autonomy and privacy, and mitigating potential conflicts of interest.

 

A Brief Overview of Artificial Intelligence

 

Artificial intelligence (AI) has applications across many fields. It uses computer technologies to learn and solve problems.1,2 These computer technologies use algorithms to analyze data with the goal of uncovering patterns that may be imperceptible or difficult to see using the human senses.1,2

 

Artificial intelligence techniques are grouped into four categories: machine learning, representation learning, deep learning, and natural language processing.1 Machine learning enables computational systems to learn from vast amounts of input data without explicit programming.1 An advanced form of machine learning, deep learning, utilizes complex artificial neural networks to analyze and interpret large datasets. In other words, deep learning models can process many features or variables that may predict outcomes.3 Thousands of variables may be input and processed quickly.3 As the number of variables or features rises and the algorithms become deeper and more complex, AI models act within a "black box," meaning that users of the technology, and even the human developers, do not fully understand how the program reaches its outcomes or conclusions.4 This has significant ramifications in the healthcare setting, which will be discussed below.4

 

Artificial Intelligence and Its Use in Healthcare

 

Artificial intelligence entered the field of medicine decades ago.3 As with other disciplines, healthcare AI is data-driven and uses algorithms to uncover health patterns and outcomes. This aspect of AI is particularly exciting since it can help predict patient diseases, create personalized treatment plans, and streamline diagnostic processes.1,5 It may also unlock new avenues for treatment.1,5,6

 

In the clinical setting, AI can uncover patterns that have significant implications for patient care and assist in decision-making.1 For example, electronic healthcare data related to patient histories may be gathered and input into a computer with the algorithm focusing on type 2 diabetes.7 The program can then be used to try to predict the clinical risk of diabetes for patients.7 The potential benefits of this technology in healthcare are broad and profound.
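As an illustration of the workflow above, the sketch below trains a toy logistic-regression risk model on synthetic patient records. The features (age, BMI, fasting glucose), coefficients, and data are all invented for this sketch; a real clinical model would be built and validated on curated datasets with dedicated tooling.

```python
import math
import random

random.seed(0)

# Synthetic "ground truth": in this toy world, diabetes risk is driven
# mainly by fasting glucose and BMI. All numbers here are invented.
def make_patient():
    age = random.uniform(30, 70)
    bmi = random.uniform(20, 40)
    glucose = random.uniform(80, 160)
    logit = 0.05 * (glucose - 110) + 0.1 * (bmi - 27) + 0.02 * (age - 50)
    return [age, bmi, glucose], 1 if logit > 0 else 0

data = [make_patient() for _ in range(500)]

# Center and scale features so gradient descent behaves.
means = [sum(x[i] for x, _ in data) / len(data) for i in range(3)]
def norm(x):
    return [(x[i] - means[i]) / 10.0 for i in range(3)]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow
    return 1 / (1 + math.exp(-z))

# Fit logistic regression by stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        xn = norm(x)
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, xn)) + b) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, xn)]
        b -= lr * err

def risk(x):
    """Predicted probability of type 2 diabetes for [age, bmi, glucose]."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, norm(x))) + b)

print(risk([55, 34, 150]))  # high-glucose, high-BMI profile
print(risk([40, 22, 85]))   # low-risk profile
```

The point is only the workflow: historical records train a model, which then scores new patients for clinical risk.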

 

The power and potential of AI in healthcare have led some, e.g., Kiener (2021), to say that AI may already perform better than trained medical professionals practicing without it.8 This raises a question: Will AI replace human clinicians?9 In one sense, this may be an academic question since AI is not currently being designed or developed to replace human clinicians but to repurpose their roles and improve efficiency in healthcare.10 Moreover, AI has not performed to a level that it can replace humans or all traditional laboratory tests.11,12 While some scientists speculate about AI gaining artificial general intelligence or reaching human cognitive abilities in the future,10 the consensus appears to be that in healthcare, AI is not replacing human clinicians; instead, it augments and complements the clinician’s role.3,12 Specifically, in the realm of pharmacy, AI has the potential to augment the expertise of pharmacists, leading to more informed medication-use decisions and improved patient outcomes.4

Ethical and Legal Considerations

 

Although AI in healthcare has great potential, it also creates complex clinical problems and questions that must be addressed.4,12,13 One difficulty that arises is that algorithms may not be sufficiently robust. Returning to the example above (the algorithm used to predict type 2 diabetes), if the algorithm is weak or biased, it will miss or over-diagnose type 2 diabetes cases.7

 

Artificial intelligence models also introduce ethical and legal complexities, particularly when algorithms become sophisticated and act as "black boxes" with outcomes that are difficult for clinicians to decipher fully.4,14,15 “Black-box medicine” is a healthcare term that describes a data-driven AI healthcare recommendation in which the basis for the recommendation is not fully understood.4 Gerke et al. describe an AI software program called Corti that alerts emergency dispatchers if someone is having a sudden cardiac arrest. Corti’s algorithms are considered a “black box” because even Corti’s developer does not know how the software works.4

 

How can the clinician inform the patient of the benefits and risks of a healthcare recommendation when the clinician does not fully understand how the recommendation was reached? The opacity of these models can make it challenging for healthcare providers to trust and integrate AI findings into their decision-making processes.14,15

 

As stated, healthcare AI is designed to complement the skills of clinicians. It does not replace human clinicians. But what should this healthcare collaboration between humans and AI look like? Gerke et al. (2020) state that the ethos underpinning AI in healthcare should be "Health AIs for All of Us."4 This ethos describes an AI-driven healthcare program that is designed to preserve and protect informed consent, transparency, fairness, and privacy and provide compensation when the system causes injury to patients. Patient data must also be protected from theft or cyber-attack.9 Addressing these ethical considerations leads to patient trust, which is crucial for the successful integration of AI into clinical practice. Patients must be adequately informed about how their data are processed, and there should be an open dialogue to foster trust.4 This is part of integrating patients into their healthcare decision-making process. Finally, if AI is jointly owned by a developer and healthcare provider, questions could arise regarding problem-solving and conflicts of interest.

 

With all this in mind, pharmacists, pharmacy staff, and other healthcare leaders are presented with a dual task: leveraging AI to enhance healthcare delivery while preserving the patient’s autonomy (informed consent) and privacy. Healthcare providers should also ensure that these technologies are transparent, fair, safe, and not subject to a conflict of interest. Finally, issues of liability must be addressed so that there is accountability when a patient is injured.4

 

Informed Consent

 

The ethical principle of autonomy requires that patients are aware of and consent to their diagnosis and treatment.4 The integration of AI into healthcare settings challenges this foundational principle of informed consent.4 Artificial intelligence applications in healthcare, particularly those with “black-box” algorithms, introduce a layer of complexity that may not be fully comprehensible even to the clinicians themselves.4,14,15 When the "black box" effect makes the decision-making process opaque, how can the clinician fully inform the patient about a diagnostic tool that the clinician does not even understand? In these cases, “black-box medicine” may conflict with important principles of patient-centered care, such as being able to explain to a patient how certain outcomes or recommendations were derived.14,15 This raises ethical concerns about the depth of understanding and transparency required to ethically justify the use of AI in patient care.4

 

This highlights the need for an evolved informed consent process that addresses these novel issues. Additionally, the potential for AI to use sensitive data like genetic information exacerbates concerns about patient autonomy, as patients might not fully comprehend or agree to the extent of their data's use.4

Legally, informed consent is required at the state and federal levels in the United States, but these rules vary.16 These laws have not been updated to deal fully with AI.4,14,15

 


 

Europe has a more integrated framework of patient rights and data protection laws called the General Data Protection Regulation (GDPR).4 The GDPR's "right to explanation" mandates that individuals have the right to understand the logic behind automated decisions that affect them.4 As in the U.S., this becomes problematic with non-interpretable (“black box”) AI systems. Clinicians and healthcare providers must navigate how to disclose the use of AI, especially when its decision-making process is opaque.

 

Furthermore, AI health apps and chatbots, which often operate under user agreements rather than traditional informed consent, present additional legal challenges. These agreements must be scrutinized to ensure they meet the standards of informed consent, a complex task given the dynamic nature of software updates and the typically cursory attention users pay to such agreements. The legal implications of these user agreements become increasingly significant as the data collected by these AI tools are integrated into clinical decision-making, necessitating a legal framework that ensures user understanding and consent are as robust as they are for traditional healthcare interventions.4

 

Clinicians may struggle with the extent to which they must educate patients about the AI's mechanisms, potential biases, and the data it uses, especially when they themselves may not fully grasp these aspects.4 A legal framework needs to be developed to guide what constitutes informed consent when AI tools are integrated into clinical decision-making.

 

Transparency

 

From an ethical standpoint, transparency in healthcare AI is paramount. It helps a clinician inform the patient and overcome the black box effect.4,9 Transparency also maintains patient safety and trust.4,9

 

Transparency requires that AI developers be open about their AI tools.9 For example, AI developers should disclose the kinds of data used and any limitations of the software, such as data biases.4 Ethically, it is essential that AI systems used in healthcare are not only safe and effective but also that their workings and limitations are made clear to all stakeholders. The ethical principle of nonmaleficence demands that healthcare providers do no harm, which includes ensuring that the AI systems they rely on for treatment recommendations are based on robust, real-world data. When AI systems are trained on inadequate or "synthetic" data, the risk of harm increases. Ethically, there is also an obligation to rectify and disclose any such shortcomings immediately, not merely for the sake of transparency but also to uphold the ethical standards of honesty and accountability in patient care.4

 

Legally, the need for transparency in healthcare AI is intertwined with the duty to ensure patient safety.4 Artificial intelligence developers and healthcare providers must navigate a complex legal landscape where the failure to provide safe and accurate health services can lead to serious legal consequences. In the interest of safety, legal frameworks might require that the datasets used to train AI systems be scrutinized for reliability and validity.4 Moreover, legal mechanisms such as third-party or governmental audits could serve to balance the need for transparency with the protection of intellectual property and cybersecurity concerns.4 While full disclosure of AI algorithms and data may not be feasible or necessary from a legal perspective, the law may still require sufficient transparency to allow for the verification of AI systems' safety and efficacy. This could include mandatory reporting of AI's performance in clinical trials or other forms of testing, ensuring that the systems in use have been validated through rigorous, empirical evidence.4

 

Privacy

 

The legal responsibility to ensure that patients are informed about the AI technologies influencing their care is compounded by the need to protect patient data privacy.4 The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule is the federal law that protects an individual’s “protected health information,” consisting of the individual’s medical records and other health information that can be tied to that individual.17 The Rule requires that “covered entities” protect an individual’s health information by providing appropriate safeguards. It sets limits and conditions on the use and disclosure of an individual’s health information without first obtaining the individual’s authorization. The Rule also gives individuals rights over their protected health information, including the right to examine and obtain copies of their health records, direct disclosures, and request corrections.17

 


 

However, HIPAA has significant gaps in today’s healthcare environment. As stated above, HIPAA only applies to “covered entities.” It does not apply to a patient’s non-health information that may be used commercially when it is collected by a company that is not a “covered entity.” Gerke et al. (2020) give the example of a pregnancy test kit purchased on Amazon.4 This data can be used to infer health information about the customer. A great deal of health information is collected by large technology companies, e.g., Amazon, Google, IBM, Facebook, and Apple. These companies are making significant investments in AI healthcare technologies and are collecting important health data from people worldwide, yet they are not “covered entities.” As such, none of the protections or rights of HIPAA apply.4

 

Another shortcoming of HIPAA in the context of AI is that protected information may be uncovered through data triangulation. Data triangulation is a process whereby a patient may be identified even though the patient’s data was de-identified under HIPAA.4 Therefore, HIPAA is inadequate in protecting patients’ health privacy.4
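The triangulation risk described above can be sketched in a few lines: a record stripped of direct identifiers can still be linked to a named individual through the quasi-identifiers it retains. All records below are invented for illustration.

```python
# Hypothetical illustration of data triangulation: a de-identified
# health record is joined against an outside, non-health dataset on its
# remaining quasi-identifiers (ZIP prefix, birth year, sex). A unique
# match re-identifies the patient and exposes the diagnosis.

deidentified_claims = [
    {"zip3": "021", "birth_year": 1954, "sex": "F", "diagnosis": "type 2 diabetes"},
    {"zip3": "945", "birth_year": 1988, "sex": "M", "diagnosis": "asthma"},
]

# A public roster (e.g., voter-roll-style data) with names attached.
public_roster = [
    {"name": "A. Rivera", "zip3": "021", "birth_year": 1954, "sex": "F"},
    {"name": "B. Chen",   "zip3": "945", "birth_year": 1990, "sex": "M"},
]

def reidentify(claims, roster):
    """Link de-identified claims to named people via quasi-identifiers."""
    hits = []
    for claim in claims:
        matches = [p for p in roster
                   if (p["zip3"], p["birth_year"], p["sex"]) ==
                      (claim["zip3"], claim["birth_year"], claim["sex"])]
        if len(matches) == 1:  # a unique match re-identifies the patient
            hits.append((matches[0]["name"], claim["diagnosis"]))
    return hits

print(reidentify(deidentified_claims, public_roster))
```

Here the first claim matches exactly one person in the roster, so the "de-identified" diagnosis becomes attributable to a named individual, which is precisely the gap HIPAA de-identification does not fully close.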

 

The ethical concerns surrounding privacy in the use of AI in healthcare are significant, as illustrated by the Royal Free NHS Foundation Trust sharing patient data with Google DeepMind without adequate patient consent.4 This incident underscores the ethical imperative of respecting patient privacy and the need for transparency in how patient data is used.

 

In response to these privacy shortcomings, a number of states have passed data privacy laws, e.g., the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), the Virginia Consumer Data Protection Act (VCDPA), and the Colorado Privacy Act (CPA), to name a few.18-21 The California law, the CCPA, added broader privacy rules, effective on and after January 1, 2020.4,18 The CCPA granted various rights to California residents with regard to personal information that is held by businesses.4 Under the law, California residents may ask a business (Amazon, Google, etc.) to disclose personal information the business has about them and what they do with that information. The consumer may ask the business to correct inaccuracies in their personal information, delete their personal information, direct the business not to sell or share their personal information, and limit the business’s use and disclosure of their sensitive personal information.22

 

The issue of data ownership raises further ethical questions. The value of health data is immense, and there is public discomfort with the idea of companies or governments profiting from patient data.4 However, there could be mutual benefit, e.g., what if patients are able to receive value in exchange for their data use? Patient data could be provided in exchange for an AI app that is made available for free for a period of time, thereby adding value to patient health and well-being. The AI app could still have commercial value thereafter.4

 

Legally, the handling of patient data in AI applications raises complex issues. The breach of the UK Data Protection Act by the Royal Free NHS Foundation Trust highlights the legal responsibilities organizations have in safeguarding patient data. Legal frameworks must adapt to address the ownership and usage rights of patient data, especially considering the high monetary value of this data. Additionally, legal protections must extend beyond traditional doctor-patient confidentiality to consider how AI health apps share data with non-medical entities like family members and friends. The absence of legally enforceable confidentiality obligations in these cases poses a challenge.

 

Another critical legal issue is the patient’s right to withdraw their data, especially once it has been aggregated and analyzed.4 This involves balancing patients' rights to control their personal information with the practicalities and implications of removing data from large datasets.

 

On October 30, 2023, the White House issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.23 This Executive Order asks developers of AI and other stakeholders to address important gaps and needs with the use of AI, such as privacy.23 Despite the recommendations and efforts worldwide to resolve the privacy issues surrounding healthcare data, no country has succeeded in addressing them fully.13

 

Fairness

 

When creating AI systems, the technology uses a high number of data points to develop algorithms. These algorithms should be developed, to the best extent possible, in a manner that is fair and inclusive of all population groups.8,9,15,24 Without this focus, AI systems could inadvertently perpetuate discrimination or exacerbate existing healthcare disparities.

Biases in AI arise when selected populations make up a majority of the clinical trial research databases, creating a bias in favor of the included group. Patient groups that are underrepresented may be disadvantaged. Groups may be excluded from databases by race, gender, age, disease state, health status, or condition. If an algorithm is biased, it will likely fail when applied to the underrepresented group.25 An example of this would be an algorithm used to predict a patient’s risk of breast cancer.24 If the algorithm is trained on data from health records that are predominantly from White patients, the algorithm may fail when applied to Black patients.24 Furthermore, AI may be biased by regional data: an algorithm trained on hospital data from German patients might not perform well in the US since populations, treatment strategies, or medications can differ.24 Additionally, underlying social inequalities in healthcare access and expenditures likely affect how a model might be trained to predict risk.
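A minimal simulation of this training-data bias: a screening threshold is fit only to Group A, in which disease shifts a biomarker upward, and then applied to Group B, whose baseline biomarker runs higher. The groups, biomarker, and numbers are all hypothetical.

```python
import random

random.seed(1)

# Hypothetical populations: Group B has a higher healthy baseline for
# the biomarker than Group A, but the screening rule is trained on
# Group A data only.
def sample(group, diseased, n):
    base = 100 if group == "A" else 120  # Group B baseline runs higher
    shift = 15 if diseased else 0        # disease raises the biomarker
    return [(random.gauss(base + shift, 5), diseased) for _ in range(n)]

train = sample("A", False, 500) + sample("A", True, 500)

# Pick the cutoff that best separates the training (Group A) data.
best_t, best_acc = None, 0.0
for t in range(80, 160):
    acc = sum((x >= t) == y for x, y in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

def accuracy(data):
    return sum((x >= best_t) == y for x, y in data) / len(data)

test_a = sample("A", False, 500) + sample("A", True, 500)
test_b = sample("B", False, 500) + sample("B", True, 500)
print(best_t, accuracy(test_a), accuracy(test_b))
```

With these synthetic distributions, the learned cutoff separates Group A well but flags nearly all healthy Group B patients as diseased, mirroring how an algorithm trained on one population can fail when applied to another.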

 

Another example is selection bias in facial recognition programs and databases, which leads to lower accuracy in recognizing darker-skinned individuals, particularly women.25 Because machine learning requires a high number of data points,4 and selected populations make up a majority of clinical trial research databases, certain racial, gender, or age groups are underrepresented, and the resulting algorithms will likely produce output based on skewed data.25

 

Ethical considerations demand that AI be trained on diverse and representative datasets to ensure that all patient populations benefit from AI advancements and have equal access to healthcare. Authors have reported that remaining biases to be addressed include demographic factors such as ethnicity, gender, age, or socioeconomic status.24

 

As AI systems are deployed in varied global contexts, they must be adaptable and sensitive to the different healthcare needs and resources of those settings to avoid reinforcing health disparities. From an ethical standpoint, if AI systems are to be part of the solution in healthcare, they must be inclusive and designed with an awareness of the diverse patient populations they will serve.4,25 Thorough clinical validation of AI data and algorithms can help reduce bias; however, it is likely not possible to generate AI tools without any trace of bias.15 This is an opportunity for pharmacy professionals, based on their interest in technology and clinical expertise, to contribute to developing solutions.

 

Legally, the deployment of fair AI in healthcare could be subject to regulations that ensure nondiscrimination and equal treatment.4 This might involve legal requirements for AI developers to validate their systems across diverse populations and disclose any limitations regarding the applicability of their AI tools to certain groups. Moreover, legal frameworks may need to balance the protection of intellectual property and trade secrets against the fact that these protections can impede the ability to assess biases caused by programming and data input in AI systems. With the growing recognition of the potential for algorithmic bias, legal mechanisms such as third-party audits and regulatory oversight could play an important role in safeguarding against discriminatory outcomes.

 

The development of strong anti-discrimination laws is also essential to protect patients from adverse effects outside the medical sphere, such as impacts on insurance premiums, employment opportunities, and personal relationships.4 Additionally, there may be a legal imperative to ensure that AI healthcare solutions are accessible and effective.4 The October 30, 2023, Executive Order described above calls for equity and civil rights with the implementation of AI.23

 

The Executive Order will be enforced, in part, by the Federal Trade Commission (FTC).26 On November 21, 2023, the FTC announced that it had adopted a civil investigative process to review AI products and services as part of its consumer protection process.26 The FTC’s investigations will be nonpublic and will use civil investigative demands, which are akin to subpoenas, to monitor and address unlawful activities in this sector.26

Safety

 

Ethical considerations of patient safety in the realm of AI-enhanced healthcare pivot around the principle of “do no harm.”4 The ethical imperative is to ensure that AI applications in primary care are developed and implemented in a way that prioritizes patient safety above all else. This involves rigorous scientific review and the ethical sharing of data and methodologies to allow for reproducibility and validation of AI algorithms. Moreover, ethical frameworks must address how AI systems are to be held accountable when their "black-box" nature precludes easy interpretation or when their outputs are biased.4 Ethical practices also demand that the introduction of AI into primary care augments the patient-physician relationship rather than undermining it. Discrimination-aware algorithms represent an ethical advancement, aiming to ensure fairness and prevent the perpetuation of biases against vulnerable groups. At every stage, from development to deployment, the ethics of AI in healthcare must be scrutinized to ensure that these systems benefit patients and uphold the highest safety standards.15

 

Legally, patient safety in AI-driven healthcare is a matter of regulatory compliance and risk management.27 The absence of comprehensive legislation in the US to regulate AI contrasts with the EU's approach, which insists on explicit processes to manage high-risk AI systems, including human oversight and the potential for testing and certification.27 Legal processes would need to evolve to reflect the necessity for transparency in AI applications, ensuring safety and quality.27 The legal principle of the "learned intermediary" suggests that clinicians act as a bridge between complex AI systems and patient care, but this raises questions about the legal implications of their interpretations and decisions informed by AI.26 Data governance is thus not just an ethical consideration but also a legal one, where oversight committees would legally enforce the ethical processing of data throughout its lifecycle in AI systems.27 The October 30, 2023, Executive Order required AI safety and security, as well as consumer and worker protection.23 What will this look like? The future legal framework for AI in healthcare may include advanced technological solutions, like quantum encryption, to safeguard against potential security breaches that could compromise patient safety.27

 

Liability

 

The ethical considerations surrounding the use of AI in healthcare, particularly in clinical decision support (CDS), revolve around the responsibility and accountability of healthcare professionals.4 When an AI-based CDS system provides a treatment recommendation that differs from what a human clinician might suggest, an ethical dilemma can arise if a patient is harmed.4 For example, a physician may use AI to triage a patient complaining of rib cage pain.28 An AI-based system may help determine the diagnosis, distinguishing indigestion from angina. However, clinicians are expected to provide care based on their expertise. They are traditionally seen as decision-makers who are responsible for patient outcomes that result from their recommendations.4 Ethical concerns also emerge if the standard care practices shift toward AI as the expected standard of care. The balance between embracing AI for its potential benefits in patient health and the ethical responsibility of clinicians for patient outcomes remains a complex issue.4

 

Legally, the liability associated with AI in healthcare is multifaceted.27,29,30 Currently, healthcare professionals who use AI-based technologies may be liable for medical malpractice if the AI's advice falls below an accepted standard of care and leads to patient harm. This is because healthcare professionals are viewed as retaining ultimate control and decision-making authority, even when using AI tools. Proposals for shifting liability to AI manufacturers under product liability laws face challenges, as courts have been reluctant to view healthcare software as more than a support tool for clinicians. Liability could also extend to hospitals for their role in purchasing and implementing AI systems under theories of corporate negligence, vicarious liability, or negligent credentialing. Another approach could involve a pre-approval arrangement, offering liability protection to healthcare professionals and manufacturers for AI systems vetted by regulatory bodies, weighing the merits of litigation versus regulatory oversight.

Determining who is liable for harm caused by AI treatments remains an unresolved question. This is complicated by instances where AI demonstrates bias against marginalized groups, which could invoke anti-discrimination and human rights laws.30

 

Additionally, AI has the potential to operate autonomously, learning and operating on its own. This presents challenges in assigning responsibility for patient harm. The traditional moral and legal accountability frameworks may not fit this paradigm since it may not make legal or ethical sense to say that AI is liable for injury. Assigning liability to the builders of AI, the users, or both should be considered.25,30 This complex legal landscape underscores the need for a carefully considered liability regime that addresses the unique challenges posed by AI in healthcare.4,27

 

Conflicts of Interest

 

Conflicts of interest may arise in healthcare when healthcare providers are involved in developing AI tools.31 As stated above, clinicians are expected to rely on their expertise when treating patients. However, healthcare providers involved in AI development may find themselves in a conflict of interest when commercial decisions are made regarding the purchase or use of an AI tool in which the provider is also invested.31 This is similar to the conflicts that may arise when a healthcare provider is involved in the development of a drug or medical device. The conflict can be mitigated by full public disclosure, institutional oversight, divestment of ownership in the AI tool, or recusal from commercial decision-making.31

 

Artificial Intelligence in the Pharmacy Setting

 

Artificial intelligence is revolutionizing the field of pharmacy by enhancing customer interactions and optimizing inventory management. Personalization of services has seen a significant boost with AI, particularly through the use of machine learning models and chatbots. These technologies enable pharmacies to provide tailored customer service efficiently, with complex queries being transferred to human staff. For instance, chatbots can be programmed to replicate pharmacist-patient interactions, improving service delivery efficiency.

 

A notable example of AI application in pharmacy is the use of video chat technology, which facilitates patient interaction with the pharmacist.32 This enables a patient to interact with a care provider through telehealth.32 Artificial intelligence can make taking patient histories easier and save the clinician time by providing prompts during the process and clues to the diagnosis.33 For example, a person suffering from chronic, dull, aching pain in the upper abdomen that does not interfere with sleep is likely suffering from gastritis.33 A telehealth application can be presented to the patient through a mobile app. The application can present a series of questions in a logical sequence that the patient answers, efficiently guiding the intake and diagnosis.
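The scripted intake described above can be pictured as a simple rule-based question flow. The sketch below is purely illustrative: the question wording, the branching rules, and the gastritis pattern are hypothetical simplifications of the example in the text, not a clinical tool or any vendor's actual product.

```python
# Hypothetical sketch of a rule-based telehealth intake flow: scripted
# questions gather a history in a fixed order, and a matching rule flags
# a pattern for the clinician to review. All rules here are illustrative.

INTAKE_QUESTIONS = [
    ("location", "Where is the pain located? (upper abdomen/chest/other)"),
    ("quality", "Is the pain dull and aching, or sharp? (dull/sharp)"),
    ("duration", "Has it lasted more than two weeks? (yes/no)"),
    ("sleep", "Does the pain wake you from sleep? (yes/no)"),
]

def triage_hint(answers):
    """Return a non-diagnostic hint for the clinician based on scripted answers."""
    if (answers.get("location") == "upper abdomen"
            and answers.get("quality") == "dull"
            and answers.get("sleep") == "no"):
        return "Pattern consistent with gastritis; flag for clinician review."
    return "No scripted pattern matched; route to clinician for full intake."

# A patient's answers collected through the mobile app's question sequence.
answers = {"location": "upper abdomen", "quality": "dull",
           "duration": "yes", "sleep": "no"}
print(triage_hint(answers))
```

Note that the output is only a prompt for the clinician; the final diagnostic decision remains with the human provider, consistent with the ethical framework discussed earlier.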

 

This exemplifies the growing trend of integrating AI into customer care. Furthermore, AI's role in inventory management is becoming increasingly crucial. Retail pharmacies can now predict future patient needs more accurately and maintain appropriate stock levels, thanks to AI-powered data analytics. This capability not only ensures the availability of necessary medications but also assists in targeted patient communication.32
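To make the inventory idea concrete, the fragment below sketches the simplest possible demand forecast: a moving average of recent dispensing volumes plus a safety buffer. Real pharmacy analytics platforms use far richer models (seasonality, prescriber trends, supply constraints); the figures and parameter names here are invented for illustration only.

```python
# Minimal sketch of demand forecasting for pharmacy inventory management.
# A 3-month moving average stands in for the predictive model; production
# systems would use substantially more sophisticated methods.

def moving_average_forecast(monthly_fills, window=3):
    """Forecast next month's demand as the mean of the last `window` months."""
    recent = monthly_fills[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(monthly_fills, on_hand, safety_stock=20):
    """Order enough to cover forecast demand plus a safety buffer."""
    forecast = moving_average_forecast(monthly_fills)
    needed = forecast + safety_stock - on_hand
    return max(0, round(needed))

fills = [110, 120, 130, 126, 134]            # units dispensed per month (invented)
print(reorder_quantity(fills, on_hand=60))   # forecast 130 + buffer 20 - on hand 60 = 90
```

The safety-stock buffer reflects the goal stated above: ensuring necessary medications stay available even when demand runs above the forecast.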

 

Innovations extend to the operational aspects of pharmacies as well. For example, the University of California San Francisco (UCSF) Medical Center employs robotic technology to prepare and track medication doses, demonstrating higher accuracy and efficiency compared to human efforts. This technology, which includes the handling of oral and injectable medications, allows pharmacists and nurses to focus more on direct patient care, enhancing the overall efficiency of healthcare delivery.32

 

Artificial intelligence's influence in healthcare extends beyond individual pharmacies to encompass various aspects of medical practice. It plays a critical role in diagnosing diseases, developing treatment protocols, drug development, and patient care. Artificial intelligence applications help in predicting patient outcomes, personalizing medicine, and monitoring patient health, thereby improving the quality of care. In the physician space, AI assists doctors in choosing appropriate treatments, particularly in complex cases like cancer. It supports the decision-making processes in pharma research and clinical trials, even predicting epidemic outbreaks.32

 

In hospital settings, AI contributes to reducing medical errors and readmissions by analyzing vast amounts of patient data. It offers prospective care guidance and diagnostic support, optimizing workflow and reducing redundant healthcare costs. For pharmacies, AI systems already in place, such as pharmacy management systems, provide valuable patient and drug data. The next step is integrating technology-based expert systems to identify drug-related problems more efficiently, thereby reducing the workload on pharmacists.32

 

With all of these changes, pharmacy staff need training in the use of AI, in compliance with existing laws, and in preparation for new state laws and regulations. Pharmacy teams should also receive guidance from legal counsel on how to address errors that may arise from AI.

 

Implications for Pharmacists and Future Directions

 

The integration of AI in pharmacy practice has profound implications. It enables pharmacists to expand their focus from medication dispensing to providing a wider range of patient-care services. Artificial intelligence tools can aid pharmacists in offering more effective medication guidance and improving patient health outcomes. This technological advancement fosters greater collaboration across various healthcare entities.32

 

Preparing for this shift involves acquiring relevant skills and understanding AI's role in pharmacy. Pharmacy education is evolving to include data science and AI fundamentals, ensuring that pharmacists are well-equipped to work with these technologies. Continuous education and hands-on involvement in AI development and governance are crucial for pharmacists to stay abreast of rapid advancements in the field.32

Artificial intelligence in pharmacy represents a blend of human expertise and technological innovation. It is essential for pharmacists to embrace AI, not only to enhance their practice but also to ensure that the pharmacy profession remains at the forefront of healthcare transformation.32

 

Summary

 

Artificial intelligence has applications across many fields. It uses computer technologies to learn and solve problems. These computer technologies use algorithms to analyze data with the goal of uncovering patterns that may be imperceptible or difficult to see using the human senses.

 

In the clinical setting, AI can uncover patterns that have significant implications for patient care and assist in decision-making. However, AI models also introduce ethical and legal complexities, particularly when algorithms become sophisticated and act as "black boxes" with outcomes that are difficult for clinicians to decipher fully. “Black-box medicine” is a healthcare term that describes a data-driven AI healthcare recommendation in which the basis for the recommendation is not fully understood.

 

The ethical principle of autonomy requires that patients are aware of and consent to their diagnosis and treatment. The integration of AI into healthcare settings challenges this foundational principle of informed consent.

 

Artificial intelligence developers should be transparent about their AI tools. For example, AI developers should disclose the kinds of data used and any limitations of the software, such as data biases.

 

The legal responsibility to ensure that patients are informed about the AI technologies influencing their care is compounded by the need to protect patient data privacy.

 

When creating AI systems, the technology must be developed in a manner that is fair and inclusive of all population groups. Without this focus, AI systems could inadvertently perpetuate discrimination or exacerbate existing healthcare disparities.

 

Ethical considerations of patient safety in the realm of AI-enhanced healthcare pivot around the principle of “do no harm.” The ethical imperative is to ensure that AI applications in primary care are developed and implemented in a way that prioritizes patient safety above all else.

 

Artificial intelligence has the potential to operate autonomously, learning and operating on its own. This presents challenges in assigning responsibility for patient harm.

 

Artificial intelligence is revolutionizing the field of pharmacy by enhancing customer interactions and optimizing inventory management. Personalization of services has seen a significant boost with AI, particularly through the use of machine learning models and chatbots. It enables pharmacists to expand their focus from medication dispensing to providing a wider range of patient-care services.

 

Pharmacy staff need training in the use of AI, in compliance with existing laws, and in preparation for new state laws and regulations. Pharmacy teams should also receive guidance from legal counsel on how to address errors that may arise from AI.

Course Test

 

The primary role of artificial intelligence (AI) in healthcare is

 

to replace human problem-solving and learning completely.

to complement and enhance the capabilities of healthcare professionals.

to act as an independent decision-maker in patient care.

to create artificial general intelligence that reaches human cognitive abilities.

 

"Black-box medicine" is a healthcare term that describes a data-driven AI healthcare recommendation

 

in which the basis for the recommendation is simple and transparent.

that is required by the Food and Drug Administration.

in which the basis for the recommendation is not fully understood.

that enhances and integrates easily into clinical decision-making.

 

How does the presence of "black-box" algorithms in healthcare AI affect the principle of informed consent?

 

It simplifies the consent process by making outcomes more predictable

It raises ethical concerns due to the lack of transparency and understanding of AI processes

It eliminates the need for informed consent due to AI’s efficiency

It ensures that patients are fully aware of all aspects of their diagnosis and treatment

 

Preserving “transparency” is important in the development and use of AI in healthcare because

 

by disclosing the kind of data used, clinicians can uncover potential biases.

it ensures that AI systems are only based on synthetic data.

it eliminates the need for informed consent.

it protects the patent rights of AI algorithm developers.

Which of the following patient health information is NOT protected by HIPAA?

 

Records of a patient’s prescription filled by a pharmacist who used AI to access the patient’s records

Records of a patient held by a “covered entity”

A pregnancy test kit purchased on Amazon

Privacy for an individual’s medical records

 

When creating AI systems, bias can occur within an algorithm when

 

patent protection for AI developers is not enforced.

certain races, genders, or age groups are excluded from databases used to develop AI.

reciprocity (benefit to the user in exchange for their data use) is not implemented.

developers are not liable when their product causes harm.

 

Ethical considerations of patient safety in the realm of AI-enhanced healthcare pivot around the principle of

 

the use of “black box” warnings.

do no harm.

reciprocity.

strict liability.

 

In the context of AI in healthcare, particularly with clinical decision support (CDS) systems, what is a major ethical and legal challenge?

 

Determining the balance of responsibility and liability between healthcare professionals and AI developers

Ensuring that AI systems exclusively make healthcare decisions without human input

Completely replacing traditional medical practices with AI-based recommendations

Holding patients responsible for outcomes resulting from AI-based treatments

Bias in AI systems may be reduced

 

by bridging the knowledge gap between complex AI systems and the clinician by making the patient the "learned intermediary."

by removing the profits companies or governments make from patient data.

by eliminating genetic data from algorithm development.

with a thorough clinical validation of AI data and algorithms.

 

How is AI impacting the field of pharmacy?

 

By reducing the role of pharmacists in customer services and focusing more on inventory management

By enhancing customer interactions (e.g., telehealth), improving inventory management, and supporting direct patient care

By completely replacing human staff with robotic technology in all pharmacy operations

By returning and limiting pharmacy practice to its original purpose: dispensing drugs.

References

 

He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

Zheng T, Xie W, Xu L, et al. A machine learning-based framework to identify type 2 diabetes through electronic health records. Int J Med Inform. 2017;97:120-127. doi:10.1016/j.ijmedinf.2016.09.014

Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-98. doi:10.7861/futurehosp.6-2-94

Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare. 2020;295-336. doi:10.1016/B978-0-12-818438-7.00012-5

Crigger E, Reinbold K, Hanson C, Kao A, Blake K, Irons M. Trustworthy Augmented Intelligence in Health Care. J Med Syst. 2022;46(2):12.

Johnson KB, Wei WQ, Weeraratne D, et al. Precision Medicine, AI, and the Future of Personalized Health Care. Clin Transl Sci. 2021;14(1):86-93. doi:10.1111/cts.12884

Habehh H, Gohel S. Machine Learning in Healthcare. Curr Genomics. 2021;22(4):291-300. doi:10.2174/1389202922666210705124359

Kiener M. Artificial intelligence in medicine and the disclosure of risks. AI Soc. 2021;36(3):705-713. doi:10.1007/s00146-020-01085-w

Macri R, Roberts SL. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Curr Oncol. 2023;30(2):2178-2186. Published 2023 Feb 9. doi:10.3390/curroncol30020168

Sezgin E. Artificial intelligence in healthcare: Complementing, not replacing, doctors and healthcare providers. Digit Health. 2023;9:20552076231186520. Published 2023 Jul 2. doi:10.1177/20552076231186520

Gedefaw L, Liu CF, Ip RKL, et al. Artificial Intelligence-Assisted Diagnostic Cytology and Genomic Testing for Hematologic Disorders. Cells. 2023;12(13):1755. Published 2023 Jun 30. doi:10.3390/cells12131755

Flynn A. Using artificial intelligence in health-system pharmacy practice: Finding new patterns that matter. Am J Health Syst Pharm. 2019;76(9):622-627. doi:10.1093/ajhp/zxz018

Park CW, Seo SW, Kang N, et al. Artificial Intelligence in Health Care: Current Applications and Issues [published correction appears in J Korean Med Sci. 2020 Dec 14;35(48):e425]. J Korean Med Sci. 2020;35(42):e379. Published 2020 Nov 2. doi:10.3346/jkms.2020.35.e379

Bjerring JC, Busch J. Artificial intelligence and patient-centered decision-making. Philos. Technol. 2021;34:349–371.

Amann J, Blasimme A, Vayena E, Frey D, Madai VI; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20(1):310. Published 2020 Nov 30. doi:10.1186/s12911-020-01332-6

Hall DE, Prochazka AV, Fink AS. Informed consent for clinical treatment. CMAJ. 2012;184(5):533-540. doi:10.1503/cmaj.112120

US Department of Health and Human Services. The HIPAA Privacy Rule. HHS. 2022. https://www.hhs.gov/hipaa/for-professionals/privacy/index.html. Accessed November 18, 2023.

California Consumer Privacy Act (CCPA)

California Privacy Rights Act (CPRA)

Virginia Consumer Data Protection Act (VCDPA)

Colorado Privacy Act (CPA)

Office of Attorney General. State of California. Department of Justice. California Consumer Privacy Act (CCPA). OAG. 2023. https://oag.ca.gov/privacy/ccpa#sectiona. Accessed November 18, 2023.

The White House. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. whitehouse.gov. October 30, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/. Accessed November 13, 2023.

Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med. 2023;6(1):113. Published 2023 Jun 14. doi:10.1038/s41746-023-00858-z

Naik N, Hameed BMZ, Shetty DK, et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility?. Front Surg. 2022;9:862322. Published 2022 Mar 14. doi:10.3389/fsurg.2022.862322

Federal Trade Commission. FTC Authorizes Compulsory Process for AI-related Products and Services. FTC. November 21, 2023. https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services. Accessed November 28, 2023.

Liaw ST, Liyanage H, Kuziemsky C, et al. Ethical Use of Electronic Health Record Data and Artificial Intelligence: Recommendations of the Primary Care Informatics Working Group of the International Medical Informatics Association. Yearb Med Inform. 2020;29(1):51-57. doi:10.1055/s-0040-1701980

Kolossváry M, Raghu VK, Nagurney JT, Hoffmann U, Lu MT. Deep Learning Analysis of Chest Radiographs to Triage Patients with Acute Chest Pain Syndrome. Radiology. 2023;306(2):e221926. doi:10.1148/radiol.221926

Tigard DW. There is no techno-responsibility gap. Philos Technol. 2020;34:589–607. doi:10.1007/s13347-020-00414-7

Melarkode N, Srinivasan K, Qaisar SM, Plawiak P. AI-Powered Diagnosis of Skin Cancer: A Contemporary Review, Open Challenges and Future Research Directions. Cancers (Basel). 2023;15(4):1183. Published 2023 Feb 13. doi:10.3390/cancers15041183

Brady AP, Neri E. Artificial Intelligence in Radiology-Ethical Considerations. Diagnostics (Basel). 2020;10(4):231. Published 2020 Apr 17. doi:10.3390/diagnostics10040231

Raza MA, Aziz S, Noreen M, et al. Artificial Intelligence (AI) in Pharmacy: An Overview of Innovations. Innov Pharm. 2022;13(2):4839. Published 2022 Dec 12. doi:10.24926/iip.v13i2.4839

Kuziemsky C, Maeder AJ, John O, et al. Role of Artificial Intelligence within the Telehealth Domain. Yearb Med Inform. 2019;28(1):35-40. doi: 10.1055/s-0039-1677897

 

DISCLAIMER

 

The information provided in this course is general in nature, and it is solely designed to provide participants with continuing education credit(s). This course and materials are not meant to substitute for the independent, professional judgment of any participant regarding that participant’s professional practice, including but not limited to patient assessment, diagnosis, treatment, and/or health management. Medical and pharmacy practices, rules, and laws vary from state to state, and this course does not cover the laws of each state; therefore, participants must consult the laws of their state as they relate to their professional practice.

 

Healthcare professionals, including pharmacists and pharmacy technicians, must consult with their employer, healthcare facility, hospital, or other organization for guidelines, protocols, and procedures they are to follow. The information provided in this course does not replace those guidelines, protocols, and procedures but is for academic purposes only, and this course’s limited purpose is for the completion of continuing education credits.

 

Participants are advised and acknowledge that information related to medications, their administration, dosing, contraindications, adverse reactions, interactions, warnings, precautions, or accepted uses are constantly changing, and any person taking this course understands that such person must make an independent review of medication information prior to any patient assessment, diagnosis, treatment and/or health management. Any discussion of off-label use of any medication, device, or procedure is informational only, and such uses are not endorsed hereby.

 

Nothing contained in this course represents the opinions, views, judgments, or conclusions of RxCe.com LLC. RxCe.com LLC is not liable or responsible to any person for any inaccuracy, error, or omission with respect to this course, or course material.

© RxCe.com LLC 2023: All rights reserved. No reproduction of all or part of any content herein is allowed without the prior, written permission of RxCe.com LLC.