
 

Am I My Brother's Keeper? Clinical Counselor's Ethical Responsibilities with Psychological Artificial Intelligence.

A Look Into Accountability of AI in Psychology

 

(Photo by Rita Kurtz. Stena Line on the Irish Sea, January 5, 2022.)

By Rita Kurtz

November 11, 2022

 

Abstract

This study critically examines the tension between ethical views in behavioral science's psychological counseling and machine intelligence. The research base for Psychological Artificial Intelligence (AI) has grown but tends to disregard ethical issues. The aptly titled research article by Fulmer et al. (2021), The Ethics of Psychological Artificial Intelligence: Clinical Considerations, presents a need for effective awareness of AI's ethical practices in clinical counseling. The research paper focused on six ethical problems, in no order of urgency, regarding the accountability of Psychological AI use, leaving open questions about AI's efficacy, reliability, and the accountability surrounding its use. The questions that arise in this research area begin with why we should care about Psychological AI and whether this research offers merit. Fulmer et al. (2021) believed that the lack of material highlighted the need to add more literature to the research database specifically on the ethics of Psychological AI in clinical counseling; the study claimed no other documentation currently existed (Fulmer et al., 2021). If that claim holds true, then who becomes the responsible party for Psychological AI's ethical dilemmas? Cain's arrogant answer to God's question, "Where is Abel your brother?" after slaying his brother, as stated in Genesis 4:1-9, shuns the responsibility of accountability: Cain replies to the Lord, "Am I my brother's keeper?" (New American Standard Bible, 1960/2020). In comparison, if AI supplies erroneous information to patients, who is the responsible party to take ownership of the fall? From the beginning of humanity with the fall of man, it has remained unethical to intentionally deceive. But sometimes the intent may not be purposeful; the harm may stem from human error, or in this case machine error, due to faulty human input. Today, undoubtedly, we struggle with ethics and accountability, not only with humanity, but also with technology and its responsibility to humankind.

Am I My Brother's Keeper? Clinical Counselor's Ethical Responsibilities with Psychological Artificial Intelligence.

I. A Look Into Accountability of AI in Psychology

        The study argues that there is a paucity of information regarding AI in medicine and mental health, and its main purpose was to identify ethical problems and offer recommended solutions. For instance, one of the issues pertained to boundaries of competence, where violation of the American Counseling Association (ACA) Code of Ethics could jeopardize a clinician (Fulmer et al., 2021). To alleviate the chance of receiving a violation, the study suggested that "Clinical counselors are encouraged to learn the essentials of AI relevant to clinical practices and should have at least a rudimentary understanding of the field of AI, especially machine learning" (Fulmer et al., 2021). The study goes on to suggest that clinical counseling's direct connection with AI revealed inadequacies in ethical considerations. The researchers' main theory developed new ethical standards and mandates to mitigate problems with Psychological AI. Theoretical conclusions considered establishing new frameworks, regulations, and updated policies for all AI.

II. Methods

            Fulmer et al.'s (2021) theories began with the opposite focus of the original theory. They began with the successes of AI in mental and behavioral health. Inquiry into the surge of published papers on AI from Stanford University led them to a deeper study of psychological AI. Because AI has increased in mental and behavioral health, the researchers hoped to extend theorists' knowledge by adding more useful example pools and productive reports on AI.

            For example, the researchers studied two successful randomized text-messaging cellphone trials, published within two years, which delivered affective cognitive behavior therapy (CBT) using psychological AI (Fulmer et al., 2021). The researchers then studied the results of a web-based app which produced a reduction in symptoms of anxiety and depression among college students aged 18 to 28 (Fulmer et al., 2021). No further details were presented in this study, leaving questions about how the variables of interest were measured and whether this presented an adequate measurement of the app's effectiveness. Another method used to buttress the positive claim in their study included an additional journal article on the success of AI in different psychological therapies. "Tess, an integrative AI agent, used elements of CBT, psychoeducation, emotion-focused therapy, solution-focused brief therapy, and motivational interviewing, delivered through a text-based messenger system" (Fulmer et al., 2021). The results claimed a reduction in depression and anxiety, as evidenced by the Patient Health Questionnaire–9 and Generalized Anxiety Disorder–7 (Fulmer et al., 2021). Although the results are shown, the true success rates remain hard to determine without further investigation into the accuracy of the questions used in the app's messaging system.

            Although all of this may be true, after critically analyzing the study, the Fulmer et al. (2021) report included irrelevant information about other web-based technologies regarding comorbid depression treated with internet-based CBT. Was this information included in a failed attempt to increase the number of AI-supportive examples and falsely boost the claim? Comparing internet-based CBT with AI-based CBT does not support their original AI claim, provides only erroneous information, and should have been omitted.

            They arrived at their hypotheses by gathering information from existing trials. Due to the limited information available regarding psychological artificial intelligence, past investigation of the problem scarcely exists, according to the researchers (Fulmer et al., 2021). The researchers also claimed there exists a lack of duties and obligations to objectively lay a firm gauging system for ethical decisions made by AI in clinical counseling. The researchers' questions hoped to investigate and open the platform for better ways to combat the ethical issues of AI and counseling.

            The positive side of the study proved useful because of the foundation of ethical obligations and responsibilities it lays out for counselors who utilize AI. According to the study, clinicians should invest in AI training and keep it as a part of their daily practice. Another recommendation was for clinicians to stay current with the never-ending flow of societal shifts. Without staying current, clinicians run the risk that AI may misconstrue information if left unmonitored or unattended (Fulmer et al., 2021).

            Providing AI training to clinicians prepares them to develop better relations with their patients; therefore, this effort should be supported. One last useful aspect and important note of the report expressed that AI needs to be just as ethical, reliable, predictable, and accurate in its answers to patients as human clinicians. The most powerful and poignant point is that clinician-patient relations and AI-patient relations both stand on relational connection, with every effort made to avoid disaffection.

            The negative side of psychological AI in the Fulmer et al. (2021) report included a variable-interest study from Google. The study mostly demonstrated the damning side of AI. To explain, the now-defunct AI bot, whose job at the time entailed conversing with Twitter users, unleashed offensive jargon to the Twitterverse, causing unethical practices on Google's behalf (Fulmer et al., 2021). The AI bot used machine learning to educate itself, an effort that led to racist, inflammatory remarks being posted publicly on Twitter. Adding this faux pas to the study built a strong argument to support their original hypotheses.

            Another downfall for clinicians discussed in the report entailed how the negative repercussions of deep fakes can manipulate data, leading to identity-related concerns that disrupt privacy and introduce biases into practice because of faulty AI. After reading the concerns of the report, further complex details need to be examined within the local clinician's practice. The study described deep fake concerns at a national level in the media and Big Data but did not cover the counseling clinician's concerns in enough depth.

III. Results

            A major limitation for such a complex study is that the disappointing discussion of their findings offered no empirical data illustrations, nor did it cover the deeper aspects of AI and ethics. Alternative explanations provide leverage for a better argument. In view of this, the lack of information regarding the emotional side of the patient-doctor relationship, and how it may be negatively impacted, would establish a better dispute. Why didn't they conduct their own survey of clinicians who currently use AI in their practices? The sample size of the gathered past data sufficed to bring surface awareness to the issue, but more in-depth methodological studies could have been performed. The researchers did not include charts or graphs in their research, thus making their argument weaker. Evidence that supported their claim could have been implemented, since AI plays a major role in medicine. If the topic did not have data available, paralleling the study to any medical field could undoubtedly have offered enough information to be considered reasonable. In fact, more external study information could have been implemented. Additionally, since empirical scientific data is needed to help validate most scientific journals, even if information was lacking in their field of mental health, the available cross-sector data in any field of medicine could bring similar supportive material. As thought leaders in the field of behavioral science, seeking knowledge from all angles solidifies thought initiatives, and collaborative efforts help.

            As an illustration, I researched another journal article from 2019, Calibration: The Achilles Heel of Predictive Analytics (Van Calster et al., 2019), which discussed the importance of algorithm updating. Artificial Intelligence uses algorithms to function. Without proper updates and supervision, major bias problems arise in Artificial Intelligence and the process of Machine Learning (ML). This newer article could be deemed vital to backing their study, as it solidifies the effectiveness of the argument. The journal provided illustrations, graphs, and histograms to show risk parameters and calibrations.

            Under those circumstances, risk parameters and calibrations become necessary when gathering, categorizing, and outputting data. The major findings in this study highlighted that the risk of poor calibration could lead to patients being given false data, making the use of algorithms less clinically productive (Van Calster et al., 2019). Correspondingly, over- or underestimation from poorly calibrated risk predictions can send false hope or sway patients' personal decisions.
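            To make the calibration concern concrete, the short sketch below shows one way predicted risks can be compared against observed event rates. It is only a minimal illustration, assuming Python with NumPy and scikit-learn; the predicted probabilities and outcomes are hypothetical and are not data from Van Calster et al. (2019) or Fulmer et al. (2021).

# Minimal calibration check: compare mean predicted risk with the observed
# event rate in each bin. All numbers below are hypothetical.
import numpy as np
from sklearn.calibration import calibration_curve

# Hypothetical predicted probabilities from a risk model and observed outcomes
# (1 = event occurred, 0 = event did not occur).
y_pred = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.65, 0.80, 0.90, 0.95, 0.30])
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 0])

# A well-calibrated model keeps predicted risk and observed rate close together
# within each bin; large gaps signal the over- or underestimation described above.
obs_rate, mean_pred = calibration_curve(y_true, y_pred, n_bins=5, strategy="quantile")
for pred, obs in zip(mean_pred, obs_rate):
    print(f"mean predicted risk: {pred:.2f}   observed event rate: {obs:.2f}")

            In practice, a clinician would not run such a check personally; the point is that someone accountable for the model must routinely perform monitoring of this kind, which is precisely the oversight relationship the Fulmer et al. (2021) recommendations imply.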

            For instance, consider a patient hoping for a live birth from an in vitro fertilization (IVF) treatment: if the patient's prognosis was favorable, an over- or underestimation from an ill-calibrated algorithm could pose a harmful side effect, e.g., ovarian hyperstimulation syndrome, making the data outcome clinically unacceptable (Van Calster et al., 2019). The illustrations found in the Van Calster et al. (2019) report strengthened the argument of the Fulmer et al. (2021) article. In this case, supporting charts and illustrations do more to support a hypothesis than the exclusion of figures or illustrations. Not to mention, knowing this information and stressing the importance of the AI-practitioner relationship places more emphasis on the need for training. In clinical care, Machine Learning systems should be trained with both the end result (e.g., malignant or benign) and the potential missed diagnoses (false negatives) and over-diagnoses (false positives) in mind (Challen et al., 2019; Megler & Gregoire, 2018). Henceforth, the mental health and behavioral health disciplines would still need to be evaluated, and clinicians must be trained on the use of AI in reinforced systems to mitigate ethics risks. One note not to miss: the conclusion of the Van Calster et al. (2019) article leaned toward positive output and stated, "the ultimate aim is to optimize the utility of predictive analytics for shared decision-making and patient counseling" (Van Calster et al., 2019).
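            As a rough sketch of how unequal error costs can be expressed during training, in the spirit of Megler and Gregoire (2018) though not their actual Amazon SageMaker configuration, the example below weights missed diagnoses more heavily than false alarms. It assumes Python with scikit-learn, and the dataset and weights are hypothetical, chosen only for illustration.

# Illustrative only: penalize missed diagnoses (false negatives) more heavily
# than over-diagnoses (false positives) by weighting the positive class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical screening data: class 1 = condition present, class 0 = absent.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# Weighting class 1 ten times more than class 0 tells the model that missing a
# true case is treated as far costlier than raising a false alarm.
model = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000)
model.fit(X, y)
print("Flagged as positive:", int(model.predict(X).sum()), "of", len(y), "cases")

            The specific weights would need to be set by clinicians and ethicists together, which again places the accountability question squarely on human decision-makers rather than on the algorithm.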

IV. Who is my brother's keeper?

            From the standpoint of noteworthy ethical patient care, and from the words of the moral story of Cain and Abel, who is my brother's keeper, or the one responsible for keeping AI accountable when something goes wrong? Did Fulmer et al. (2021) provide enough information for preventive care? The article did offer a list of some of the risks of using autonomous decision making. The study created concern for ethical breach, exposing that the clinician is the responsible party. The solutions offered need further investigation but spur the need for new investigations into the ethical practices of Psychology and AI.

            To summarize, since the study reported that biases exist and that government mandates regarding ethical standards, such as the National Research Act of 1974, must be upheld by clinicians, this form of distance counseling needs further investigation because of the rapid advancement of AI, including the increased use of mental health psychotherapy apps (Blease et al., 2021).

            Where does AI's accountability come into play with the requirements of the National Research Act (1974) in Psychology? As seen in the Figure 1 illustration, the clinician seeks training in AI to become the authority over AI and AI bots as the bots relay information to patients during the patient care process.

(Graphic made by Rita Kurtz in Canva, 2022)

            Researchers need to consider solid AI training models for practitioners to decrease the amount of liability on clinicians. Clearly, the relationship between clinician and AI needs better measurement of resources and tracking to help mitigate problems in practices concerning patient care. The six ethical issues of AI mentioned in the study included boundaries of competence, limited ethical codes, transparency, cultural diversity, predictability and accuracy, and cybersecurity; all of them touched on practical implications for the study. New technologies like the computerization of Psychology using AI come with security concerns and risks. The results fed into the narrative that AI can be useful, but with precautions and monitoring. Past research findings showed both agreement and contradictions. AI, being a newer technology, attempts to take on challenges in the medical field. There are successes and failures. The ethical issues brought to light in this journal did not offer new problems but did recommend possible solutions.

 Keywords: critical thinking, patient care, machine learning, morality

 

Rita Kurtz (PhD student) is a recent graduate of Harvard University with a master's degree from the Faculty of Arts & Sciences. While at Harvard University, Rita took part in several projects, including a research study at the Langer Mindfulness Lab in the Department of Psychology, which delved into the psychological effects of how news mediums impact the consumption and conveyance of news to the public. In writing in the sciences, she researched and wrote two research papers and presented them in front of fellow scientists. The first paper, Nutraceutical Skin Therapy: Anti-Inflammatory Effects of Ganoderma lucidum, studied how mushrooms may support youthful skin and aid patients suffering from the autoimmune disease sarcoidosis. The second delved into extensive research on Meat Analogues: Are We Making a Positive Political Advancement to Save the Planet? Or A Personal Health Choice that Barely Sustains Ourselves?, which uncovered the unnatural ingredients masked in meatless burgers from Beyond Meat and Impossible Burger.

Her interdisciplinary studies in law, anthropology, and philosophy make her a well-rounded candidate. Her past undergraduate academic studies covered a gamut of disciplines, including writing legal briefs and law courses in Constitutional Law, Business Law I & II, Torts, Corporate Finance, Business Policy, Economics, Chemistry, Chemistry Lab, and Consumer Behavior. She became a published nonfiction writer and a certified digital storyteller while at Harvard.

Rita was formerly a Government Account Executive supplying computer networks to the U.S. military around the globe, creating relationships between the civilian sector and the government. She has also worked as a record-breaking Technical Recruiter, placing C-level executives in major tech companies and start-ups. Her well-roundedness and entrepreneurial mindset led her to run a successful bakery at the Department of Defense (DoD) Air Force Exchange.

Rita is a digital creator with some experience in the Python programming language. She stays current on mainstream topics as a blogger, social media influencer, and actress/entertainer. She divides her time between speaking, performing, and engaging in television, radio, and stage productions. She has covered tech news and innovations as a repeat spokesperson at the Consumer Electronics Show (CES), MacWorld, and for Belkin Components. Her acting appearances aired on Lifetime, History Channel, Fox, and the Paramount Network, landing her on an Emmy-nominated show. Her experience in media led to a career in television, radio, movies, and writing. As a former executive producer and TV and radio host of a variety show on the positive side of sports, life, and entertainment, her co-hosts included NFL players and professionals. The show was broadcast on Warner Brothers Television and Fox. Her position led to interviews with billionaires, millionaires, celebrities, professional athletes, NASCAR drivers, professional medical staff, professional attorneys, musicians, and business owners. As a headline lead singer, she has toured with Grammy Award-winning musicians and performed the national anthem for several professional sports teams around the country. Rita is a strong writer, researcher, listener, and articulate speaker, and she takes direction well. She is most recognized for the TV commercial in which she belted opera on a bus with a guy dressed like a Scandinavian Viking (877-CASHNOW).

Currently pursuing her Doctor of Philosophy degree in Division 1 General Psychology with a Christian lens, she hones her past skill set as a Christian Youth Group Counselor and a contracted DoD Choir Director. Her current research interests include artificial intelligence (AI), virtual reality (VR), law, ethics, morals, bioethics, aviation, military affairs, divinity, and diversity. Her postgraduate studies at Liberty University allow her to research, analyze, test, generate new data, and apply statistical and analytical methods. Setting academic theories in psychology within a Christian worldview opens deeper theories into professional values, morals, ethics, behaviors, attitudes, justices, theoretical modeling, evidence-based modeling, cultural diversity standardization, and leadership in trends, concepts, and methods. She is currently studying neuroscience, cognitive psychology, social-personality psychology, neurotheology, law, and statistics. Her main focus lies in self-regulation in the discipline of Health Psychology from a holistic mind, body, spirit, and soul approach.

She is a current member of the American Psychological Association (APA), American Psychology-Law Society (AP-LS), National Association of Black Journalists (NABJ), American Federation of Musicians (AFM), Christian Association for Psychological Studies (CAPS), Harvard Black Alumni Society (HBAS), Harvard Club of NY, Harvard Club of Southern California, and the Harvard Alumni Association. She currently resides in Beverly Hills, California. Her faith in Jesus Christ is the foundation for her life.

Awards: 

 


References

Blease, C., Kharko, A., Annoni, M., Gaab, J., & Locher, C. (2021). Machine learning in clinical psychology and psychotherapy education: A mixed methods pilot survey of postgraduate students at a Swiss university. Frontiers in Public Health. https://doi.org/10.3389/fpubh.2021.623088

Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231–237. https://doi.org/10.1136/bmjqs-2018-008370

Fulmer, R., Davis, T., Costello, C., & Joerin, A. (2021). The ethics of psychological artificial intelligence: clinical considerations. Counseling and Values, 66(2), 131-144. https://doi-org.ezproxy.liberty.edu/10.1002/cvj.12153

Megler, V., & Gregoire, S. (2018). Training models with unequal economic error costs using Amazon SageMaker. AWS Machine Learning Blog. https://aws.amazon.com/blogs/machine-learning/training-models-with-unequal-economic-error-costs-using-amazon-sagemaker/

New American Standard Bible. (2022). Zondervan. (Original work published 1960)

Van Calster, B., McLernon, D. J., van Smeden, M., Wynants, L., Steyerberg, E. W., & Topic Group 'Evaluating diagnostic tests and prediction models' of the STRATOS initiative. (2019). Calibration: The Achilles heel of predictive analytics. BMC Medicine, 17(1), 230. https://doi.org/10.1186/s12916-019-1466-7

National Research Act of 1974, Pub. L. No. 93-348 (1974). https://www.govinfo.gov/content/pkg/STATUTE-88/pdf/STATUTE-88-Pg342.pdf

 

Author Note

Rita L. Kurtz- https://orcid.org/0000-0002-4456-7784

No conflict of interest to disclose.

Correspondence concerning this article should be addressed to harvarduniversity.ritakurtz@gmail.com

###