(Knowledge of the Trinitarian God. Photo: Rita Kurtz, 2020) 

Welcome to my Blog Post!

 

The information here draws on multidisciplinary research in anthropology, psychology, philosophy, law, science, divinity, and diversity. This blog offers insight, thoughts, and suggestions, and, hopefully, a platform to discuss and help our hurting world. As a Trust & Safety expert who speaks in seminars, talks, and on platforms about the ongoing, ever-present relationship between AI and humanity, I intend this blog to open a Socratic dialogue for researching the best solutions in truth, while protecting the essence of humanness.

The articles below contain additional information such as images, graphics, etc. 

If you would like access to any of the Scientific Articles or Abstracts below, please send all correspondence to: rita@preferredtalent.com. A fee will be charged for each Scientific Article or Abstract if you choose to download. Thank you for the support.

 

Godly Living

"but sanctify Christ as Lord in your hearts, always being ready to make a defense to everyone who asks you to give an account for the hope that is in you, but with gentleness and respect;" 1 Peter 3:15

ΑΩ


Social Media Presence:

Psychology Blog: RitaKurtz.com          Instagram: @iamritaritarita          TikTok: @_rita_rita_rita_                      IMDb: Rita Kurtz 

Entertainment: RitaRitaRita.com         Twitter: @RitaKurtz                     Facebook: Rita Kurtz, Director/Producer     LinkedIn: Rita Kurtz

Store: RitaRitaRitaOnline.com

 

 

 

 

Click here for CES Video>>>>>: Robot Sex

Video and photo: Harmony the Sex Robot, CES 2018, Rita Kurtz

(I've added this photo for illustrative purposes. This is a companion robot used for intimate relations. I had the opportunity to "meet" and "chat with" Sophia, the Citizen Robot as well.) 


Virtual Healing or Digital Harm? Interrogating AI Chatbots, VR Therapies, and Social Robots through the Lenses of AI, Psychology, Theology, Ethics, and Law (APTEL)

Rita L. Kurtz

 

Research Proposal

 

Author Note

 

Rita L. Kurtz- https://orcid.org/0000-0002-4456-7784 

 

Correspondence concerning this article should be addressed to 

 

Email: RitaKurtz@alumni.Harvard.edu

September 3, 2025

  

Virtual Healing or Digital Harm? Interrogating AI Chatbots, VR Therapies, and Social Robots through the Lenses of AI, Psychology, Theology, Ethics, and Law (APTEL)

 

Abstract

 

The interrogation of Artificial Intelligence (AI) is vital for human safety. This PhD research proposal aims to fill a significant gap in understanding AI's perceived moral authority and ethical constraints across various interactive mediums in mental health. Through an interdisciplinary lens spanning AI, Psychology, Theology, Ethics, and Law (APTEL), this study investigates human interactions with AI, particularly in ethically sensitive situations such as healthcare, necessitating awareness and consideration of legal reform frameworks to protect humans. Across the three modalities researched, the metrics are succinct and recurring: for chatbots, trust, empathy, and dependency; for virtual reality, distress, control, and trust (with safety stop-rules); and for robots, loneliness, well-being, and usage.

 

Psychology provides tools for assessing human responses. Theology, in contrast, emphasizes personhood and human dignity, comprehending the genesis of humanity through many worldviews and, in the case of AI, the moral limits of delegation, reminding us that people are more than machines. Law and ethics determine permissible safeguards. We lack a cohesive, empirically supported framework; this study intends to (1) amalgamate these various perspectives into a testable model, (2) analyze public perceptions and behavioral outcomes, and (3) assess tangible design, Judeo-Christian moral, and legal solutions aimed at protecting dignity and mitigating harm in the symbiosis of AI and humans.

 

This proposal examines the impact of interactions with AI-driven solutions, namely chatbots, virtual reality (VR) therapies, and artificial social agents (ASAs), i.e., robotics. We will conduct three short qualitative studies with intentionally small sample sizes, given the emerging nature of the technology and the limited infrastructural support currently available. The first is a between-subjects experimental chatbot session with pretest/posttest measures (n = 10-15). The second is a session-level matched-groups design comparing DepictVR simulation/training with VR psychotherapy (or a multiple-baseline single-case design if needed; n = 3-6 participants). The third is a waitlist/stepped-wedge study of a home companion robot (an artificially intelligent system engineered to interact socially with individuals in ways that replicate elements of human companionship), with n = 3-5 participants starting at randomized times. We will use qualitative analysis and interrupted time-series models (with permutation tests for single-case designs) to estimate condition effects. The measures are short and repeated: trust, empathy, and dependence for the chatbot; distress, control, and trust with safety stop-rules for VR; and loneliness, wellbeing, and usage for the robot. Upon concluding the three studies, the final analyses will clarify which type of AI represents the preferred mode of interaction.

 

AI and human mental health interaction is expanding at an exponential rate, yet there is a paucity of understanding about how humans interact with AI—via chatbots, AI-powered VR, and ASA robotic therapies—in mental health. This APTEL study will fuel more research in this emerging discipline.

 

            #moralagency #ethicaluse #AI #ASA #VRET #theology #artificialintelligence

Introduction

 

Interrogating artificial Intelligence (AI) is vital for human safety. This study addresses a major gap in understanding AI-perceived moral authority and ethical constraints in mental health across multiple interaction mediums. Using an interdisciplinary APTEL (Artificial Intelligence, Psychology, Theology, Ethics, and Law) lens, it examines human-AI interactions.

Heinz et al. (2025) note that fine-tuned generative AI chatbots can deliver personalized, effective mental health therapies, though challenges with engagement and retention remain. Conversely, Bakir and McStay (2025) highlight global risks, citing a Texas (U.S.) case in which a bot allegedly encouraged a teen to kill his parents, prompting authorities to warn of “a race against time” to protect children (A.F. v. Character Technologies, Inc., 2024). Additionally, the Wellness and Oversight for Psychological Resources Act (2025) prohibits licensed professionals from using AI to make independent therapeutic decisions. It bars AI from making autonomous determinations or generating plans without professional human review, while requiring transparency in its use within mental health services.

Algorithms increasingly mediate decisions and social interactions, raising issues of trust, moral agency, and responsibility (Mittelstadt et al., 2016). This evidence challenges the discourse around the ethics of trust in AI and its impact on human dignity (Jobin et al., 2019). While attending one of the world's largest technology conferences, I reflected on the ethical implications of AI in robotics during an interaction with Sophia, the Robot Citizen developed by Hanson Robotics, whose human-like appearance increasingly approaches the uncanny valley (Kurtz, 2025).

These Mixed-Reality (MR) models may detach individuals from reality, supplanting God’s created order. Scripture proclaims that humans are made in the imago Dei, the image of God, “God created man in His own image… male and female,” (New American Standard Bible, 1960/2020, Genesis 1:27). Only humans are made in the image of God. No machine, algorithm, or synthetic intelligence, regardless of complexity, shares in this divine identity. Therefore, AI cannot possess intrinsic moral worth, dignity, or spiritual capacity– “algorithms are incoherent as moral agents because they lack the necessary moral understanding” to engage in moral responsibility (Véliz, 2021, p. 487).

Although the algorithms–the foundational building blocks of AI–may include learning and adaptation capabilities, AI cannot be truly autonomous or accountable because it lacks reason and sentience. To comprehend complex non-tangibles like joy, love, importance, and completeness–and to steer our actions by addressing the moral claims of others in a way that qualifies as moral conduct (accountability)–we must be aware of others' potential for pain, experience of injury, and the impact our actions can have on their minds and bodies (Véliz, 2021, p. 487). As these “imitation humans” spread rapidly in mental health—via chatbots, VR therapies, and social robots—urgent investigation is needed to understand the dynamic symbiotic relationship.

The APTEL study will attempt to provide insight into veritas by supplementing empirical data with guidance on questions such as whether participants believe AI-powered interventions are assisting them in achieving their personal goals for positive outcomes. Do they feel a connection, and will this reduce their likelihood of interacting with humans? Is the AI-powered therapy making them feel better–an inner peace–or are they still struggling internally? Do they feel strongly connected to the AI? Do they feel cared about by the AI? Do they trust AI-delivered therapies as much as those delivered by a human therapist? The APTEL study seeks to identify gaps in AI-human relations by examining their alignment with the imago Dei, exploring the disconnection between AI and the sentience of humans, and analyzing the legal frameworks and ethical considerations in the MR space.

Literature Review

According to research, patients may have a better experience with mental health care if they can speak with a non-judgmental AI agent instead of a human when they require mental health treatment, or if they can receive care at any time of day rather than just during office hours (Hipgrave et al., 2025; Siddals et al., 2024). All of these effects are likely to influence the rate of patient recovery, either directly or indirectly (Rollwage et al., 2024). It can be concluded that AI could be useful in a variety of capacities in healthcare settings. Digital mental health interventions (DMHIs), including those that employ artificial conversational agents and virtual/augmented reality, can enhance access to care by augmenting traditional services or serving as alternatives to in-person treatment. These innovations demonstrate how technology, including AI, can effectively deliver psychological strategies and interventions across diverse contexts of care (Mohr, Weingardt, Reddy, & Schueller, 2017, p. 427). Newer technologies like DepictVR use AI-empowered Virtual Reality Exposure Therapy (VRET), and research suggests it can help young people who experience auditory hallucinations share their experiences with others, alleviating distressing situations (Depict VR, 2025).

 

Karkosz et al. (2024) investigate the use of AI-powered chatbots, such as Woebot, in web-based mobile therapies to help teenagers and young adults deal with anxiety. In their randomized controlled trial of Fido, a cognitive behavioral therapy-based chatbot, with subclinical young adults, both the chatbot group and the control group (a self-help guidebook) exhibited significant reductions in anxiety and depression after two weeks, which were sustained at the one-month follow-up. Although overall outcomes were comparable, regular chatbot users reported reduced feelings of loneliness, implying an additional advantage. The chatbot produced similar improvements in a shorter duration than the book, underscoring its efficacy as a digital mental health resource (Karkosz et al., 2024).

 

Even though Woebot-type chatbots are designed to deliver structured, cognition-based trust and cognitive behavioral therapy (CBT)-based guidance, users sometimes interpret their responses as carrying moral or value-laden judgements (e.g., what one should or should not feel, think, or even do) (Seitz et al., 2022). Seitz, Bekmeier-Feuerhahn, and Gohil (2022) explored how trust emerges in interaction with diagnostic chatbots, highlighting concerns about whether users may ascribe to them authority akin to physicians. While Woebot may enhance mood temporarily, findings from Seitz et al. (2022) indicate inherent constraints in the establishment of trust by such systems: trust in chatbots is based on cognitive rather than affective factors, and artificial empathy may diminish credibility. These findings, together with an understanding of the insentience of AI, raise significant concerns regarding the delegation of moral or therapeutic authority to chatbots instead of human judgment.

 

Furthermore, AI can assist clinicians in sorting paperwork (Sadeh-Sharvit et al., 2023); however, there is a significant difference between AI machine learning tasks, such as training Large Language Models (LLMs) or Natural Language Processing (NLP), and the complex, esoteric human experience. Human mental health and the body are far more than administrative categories; they embody profound complexity. Scripture proclaims that the Trinitarian God–Father, Son, and Holy Spirit–is uniquely reflected in humanity, which mirrors the Trinity's nature through relationship, love, and creativity (Waruwu et al., 2025). Likewise, humans are described as a triunity of mind, body, and soul (New American Standard Bible, 1960/2020, 1 Thessalonians 5:23), revealing a depth of mystery that transcends simple explanation. Furthermore, anthropocentric creation theology functions as a universal framework, shaping both religious and secular worldviews by envisioning a hierarchy that elevates human intellect above other beings and the environment (Kaunda, 2024), encouraging the protection of humans and setting moral standards that support Isaac Asimov's Three Laws of Robotics (Asimov, 1950).

 

In the history of examining the body, specifically in mental health, Hippocrates emphasized the brain as the seat of the mind, and thought leaders such as Descartes, anatomists, and philosophers posited that the soul, or atman, energized the body and was physically located within it, in structures like the pineal gland or heart (Pandya, 2011). Modern science suggests that there is something deeper and more complex. Rather than invoking the supernatural for elucidation, physics, biology, and chemistry may refer only to the spatiotemporal realm of matter and energy–or, more succinctly, nature–which in turn defends the principle that scientific explanations may appeal only to natural phenomena. However, if supernatural entities exist, they "can play no role in scientific theorizing" (Donahue, 2024), placing an immaterial soul outside the scope of empirical science.

 

The understanding is that sentience belongs to the supernatural, and AI lacks the capacity, or "grey area," to understand the complexity of humans or their true needs, leading to questions such as: Can AI mimic moral wisdom or spiritual empathy? How close can simulation come to human connection before it becomes ethically wrong? How does AI "see" the soul? How can we experiment on the brain using AI while respecting spiritual identity?

 

The interaction between AI and human mental health is rapidly increasing, revealing a significant gap and lack of comprehension regarding human engagement with AI through chatbots, AI-driven virtual reality, and ASA robotics therapies in mental health. This APTEL study will stimulate further research in this burgeoning field.

 

Research Objectives

 

The APTEL study aims to identify deficiencies in AI-human relations by assessing their congruence with the imago Dei, investigating the disconnection between AI and human sentience, and evaluating legal frameworks and ethical implications within the mixed-reality (MR) domain. The project demonstrates that the progression of review, measurement, and intervention is feasible within a 3-year doctoral schedule and in line with proposal expectations for aims, methods, ethics, and timetable. The research objectives are:

 

  1. To evaluate public perception of AI’s moral authority across three distinct modalities:
  • AI-powered mental-health chatbot
  • Virtual Reality (VR) therapy for auditory hallucinations
  • Humanoid robot interacting with adults or the elderly

 

  2. To determine whether human vs. AI delivery impacts participants’ perception of trust, comfort, authority, and ethical concern.
  3. To explore how theological vs. secular justification modulates these perceptions across all mediums.
  4. To assess the implications of these interactions for policy, ethical frameworks, and legal regulation, informing both UK Parliament and US government discussions on concerns with AI integration.

 

Table 1
Comparison chart for different mediums

Medium                       | Comparison                                                                                               | Justification Type
Chatbot                      | Human vs. AI                                                                                             | Empathic (Theological) vs. Neutral (Secular)
VR Moral Guidance            | Psychotherapy for Auditory Hallucination vs. VR Therapy for Auditory Hallucination (Simulation Training) | Empathic (Theological) vs. Neutral (Secular)
Humanoid Robot Interaction   | Human caregiver vs. Humanoid robot                                                                       | Empathic (Theological) vs. Neutral (Secular)

Methods

 

A randomized between-subjects experimental design with 3 different modalities:

 

Each modality runs 2-5 weeks in a quasi-experimental (matched-groups) design, asking participants whether they feel differently about therapy delivered by an AI system compared with in-person therapy. Field studies are conducted within university, healthcare, online, or other approved locations. The proposal comprises three studies:

 

  • AI-powered mental health chatbots like NHS Limba, Replika, Qwell, or Woebot.
  • Simulation training with participants who present with persistent auditory hallucinations (no personal data collected in the interview), using simulation/training with feedback as insight into the DepictVR device.
  • Older adults, or adults reporting loneliness (screening out severe cognitive impairment or crisis), who use an ASA home robot such as the Energize Lab Eilik, Vector Robot by Anki, or Unitree G1 Humanoid Robot for companionship.

 

MODALITY 1–CHATBOT

For the chatbot study, measuring trust, empathy, and dependence, we will conduct a between-subjects experiment:

Design
Type of Therapy

  • Group A: Chatbot-based therapy (digital conversational agent delivering CBT-style sessions).
  • Group B: In-person therapy (licensed therapist delivering CBT sessions).

Participants
Target n = 15 (minimum 10): adults and teens (ages 13-60, with consent), recruited via an online survey.

Procedure
Pretest/posttest questionnaires regarding trust, empathy, and dependency.

Possible baseline Heart Rate Variability (HRV) and a self-report mood questionnaire administered. Participants then interact with the AI-powered chatbot, with HRV and the mood questionnaire administered pre- and post-intervention.

Measures

  • Mood: Questionnaire, pretest/posttest, scored 0-5 (satisfaction) on a 5-point Likert scale.
  • Possible HRV: Collected via smart device (e.g., Apple iPhone, Watch).

Analysis
Paired-samples t-tests comparing pre- and post-intervention HRV and mood scores.
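The paired-samples analysis can be sketched in a few lines of Python. This is a hedged illustration only: the mood scores below are hypothetical placeholders (0-5 Likert ratings), not study data, and the t statistic is computed by hand with the standard library for transparency.

```python
# Minimal sketch of the paired-samples t-test, standard library only.
# The mood scores are hypothetical placeholders (0-5 Likert), NOT study data.
from math import sqrt
from statistics import mean, stdev

pre_mood = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2]   # pretest mood ratings (n = 10)
post_mood = [3, 4, 2, 3, 4, 3, 2, 4, 2, 3]  # posttest mood ratings

# Paired design: analyze the within-person change scores.
diffs = [post - pre for pre, post in zip(pre_mood, post_mood)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic, df = n - 1

print(f"mean change = {mean(diffs):.2f}, t({n - 1}) = {t_stat:.2f}")
# |t| is then compared against the critical value for df = 9
# (roughly 2.26 at alpha = .05, two-tailed).
```

In practice a statistics package would also report the exact p-value; with n as small as 10-15, effect sizes and confidence intervals should be reported alongside the test.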

 

 

MODALITY 2–VR GOGGLES

Session-level matched-groups design (A = in-person therapy; B = VR simulation/training), with a multiple-baseline single-case option using a staggered VR start.

 

Participants
Target N = 3–6 (min. 3). Adults/teens with a subclinical level of auditory hallucinations (such as those with paranormal experiences), high anthropomorphism/emotional dependency, or a suitable proxy population.

Procedure

Independent Variable

  • Group A (In-Person Therapy):
    Standard therapist-led sessions (baseline condition).
  • Group B (VR Simulation/Training):
    Gradual introduction of VR-based simulation/voice-hearing training.
    • Staggered multiple-baseline start: Each participant begins VR at a different time point to strengthen internal validity (controlling for time and expectancy effects).

Measures

Primary:

  • Frequency/intensity of auditory hallucinations (daily/weekly log).
  • Session satisfaction & perceived therapeutic alliance (adapted WAI).

Secondary:

  • Emotional response to modality (self-report Likert scales).
  • Dropout/engagement rates.

Analysis
Between-group comparison analysis:

            Compare VR vs. in-person outcomes.

Matched groups: Participants are paired on age group, severity (frequency) of hallucinations, and degree of anthropomorphism/emotional dependency. Outcomes are mapped on a graph session-by-session, or via pretest/posttest questionnaires, to measure possible shifts, looking for improvements that coincide with the introduction of VR.
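For the single-case option, the permutation test mentioned in the abstract can be sketched as follows. The session ratings are hypothetical placeholders, not study data, and a one-sided test of the observed drop in intensity is assumed.

```python
# Hedged sketch: randomization (permutation) test for one participant's
# session-by-session hallucination-intensity log. Values are hypothetical.
import random

baseline = [6, 5, 6, 7, 5]  # intensity ratings during in-person (baseline) sessions
vr_phase = [4, 3, 4, 3, 2]  # ratings after the staggered VR introduction

# Observed effect: drop in mean intensity from baseline to VR phase.
observed = sum(baseline) / len(baseline) - sum(vr_phase) / len(vr_phase)

random.seed(0)  # reproducible reshuffles
pooled = baseline + vr_phase
n_iter = 10_000
count = 0
for _ in range(n_iter):
    random.shuffle(pooled)  # break the phase labels
    perm_diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if perm_diff >= observed:  # one-sided: drops at least this large
        count += 1

p_value = count / n_iter
print(f"observed drop = {observed:.2f}, one-sided p = {p_value:.4f}")
```

Because the test conditions only on the observed ratings, it makes no normality assumption, which suits the very small n of this modality.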

 

 

MODALITY 3–COMPANION ROBOT
Waitlist/stepped-wedge within-person design; start times randomized. Timeline: 1 week baseline → 1 week robot use → optional 2-day withdrawal. Target N = 5 (min. 3).

Participants
Target n = 5 (minimum 3). Adults or households who report loneliness, living at home or in a care home, and willing to allow a social robot intervention.

Procedure
2-3 interventions. A 2-minute check-in (3-item loneliness, 3–5-item wellbeing, and trust/empathy items), followed by 20 minutes with the robot. Robot usage (min/day) is logged automatically.

  • Start times randomized: Each participant begins the robot phase in a different (staggered) week.
  • Timeline per participant:
    • Week 1: Baseline (no robot).
    • Week 2: Robot use (20 minutes per day).
    • Optional: 2-day withdrawal (robot removed) to test reversibility.

Measures

  • Loneliness: 3-item scale.
  • Wellbeing: 3–5-item scale.
  • Trust/Empathy: Short self-report.
  • Usage: Device logs.

Analysis
Interrupted time-series and qualitative models; baseline vs. intervention as main predictor; report slope and level changes. Within-person comparisons strengthen causal inference with a very small sample. 
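The interrupted time-series logic for a single participant can be sketched as below. The daily loneliness totals are hypothetical placeholders, and fitting a separate least-squares line per phase is a simplified stand-in for a full segmented-regression model.

```python
# Hedged sketch of the interrupted time-series idea for one participant,
# using hypothetical daily loneliness totals (3-item scale, range 3-15).

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one phase."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days = list(range(1, 15))  # days 1-7 baseline, days 8-14 robot phase
scores = [12, 12, 11, 12, 13, 12, 12, 11, 10, 9, 9, 8, 8, 7]

base_slope, base_int = fit_line(days[:7], scores[:7])
robot_slope, robot_int = fit_line(days[7:], scores[7:])

t0 = 8  # first intervention day
# Level change: gap between the two fitted lines at the intervention point.
level_change = (robot_slope * t0 + robot_int) - (base_slope * t0 + base_int)
# Slope change: difference in the daily trend before vs. after.
slope_change = robot_slope - base_slope
print(f"level change = {level_change:.2f}, slope change = {slope_change:.2f}")
```

Reporting both the level and slope changes, as the analysis plan specifies, distinguishes an immediate drop at robot introduction from a gradually accelerating improvement.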

 

 

Discussion

 

Digital psychiatry and psychology have unresolved issues that require multi-stakeholder examination to maximize benefits and minimize harm (Burr et al., 2020). Current stakeholders have no universal AI regulatory law governing the implications of AI ethics, most poignantly concerning mental health. The UK government stresses the importance of ethical standards but has no binding laws governing AI ethics, only guidance: “AI ethics is a set of values, principles, and techniques that employ widely accepted standards to guide moral conduct in the development and use of AI systems- The Data Ethics Framework is a tool that should be used in any project” (Department for Science, Innovation and Technology [DSIT], 2025). It also admits AI uncertainties,

“These risks could include anything from physical harm, an undermining of national security, as well as risks to mental health. The development and deployment of AI can also present ethical challenges which do not always have clear answers.”

 

Similarly to the UK, AI in the US is regulated only through blended mechanisms rather than a nationwide AI regulatory law or set of definitive rules. The National Artificial Intelligence Initiative Act of 2020 (part of the 2021 National Defense Authorization Act) does not specifically address mental health; its scope covers broader AI efforts, including research programs, standards development, interagency coordination, and workforce training (H.R. 6216, 116th Cong., 2020). Amid the rapid expansion of AI usage, “We must exercise caution in formulating laws and standardizing AI and related technologies in design and exploitation for all users” (Afroogh et al., 2024).

 

This APTEL research hopes to highlight the interconnectedness, yet differences, of human life through the exploration of AI versus human. The interaction between AI and human health is rapidly increasing, yet there exists a significant gap in comprehension regarding human engagement with AI—through chatbots, AI-driven VR, and ASA robotic therapies—in mental health. Nor do we understand the foundations of morality, which historical, Biblical teaching locates in humans being made in the imago Dei. This study spanning artificial intelligence, psychology, theology, ethics, and law (APTEL) will stimulate further research in this burgeoning field, since we are made in God's image and are sentient, and confidence in AI is crucial for technological growth. While these findings are promising, further research is needed to build a comprehensive understanding, and further investigations should be explored.

 

Ethics

 

The study ensures participant safety, informed consent, data confidentiality, and safeguards against psychological harm when engaging with AI technologies.

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

References:

 

A.F. v. Character Technologies, Inc., No. 2:24-cv-01014 (E.D. Tex. Dec. 9, 2024) (complaint). https://www.washingtonpost.com/documents/028582a9-7e6d-4e60-8692-a061f4f4e745.pdf

 

Afroogh, S., Akbari, A., Malone, E., Kiani, M., & Lee, R. (2024). Trust in AI: Progress, challenges, and future directions. Humanities and Social Sciences Communications, 11, 1568. https://doi.org/10.1057/s41599-024-04044-8

 

Asimov, I. (1950). I, Robot. Gnome Press.

 

Bakir, V., & McStay, A. (2025). Move fast and break people? Ethics, companion apps, and the case of Character.ai. AI & Society. https://doi.org/10.1007/s00146-025-02408-5

 

Barreda-Ángeles, M., & Hartmann, T. (2023). Experiences of depersonalization/derealization among users of virtual reality applications: A cross-sectional survey. Cyberpsychology, Behavior, and Social Networking, 26(1), 22–27. https://doi.org/10.1089/cyber.2022.0152

 

Burr, C., Morley, J., Taddeo, M., & Floridi, L. (2020). Digital psychiatry: Risks and opportunities for public health and wellbeing. IEEE Transactions on Technology and Society, 1(1), 21–33. https://doi.org/10.1109/TTS.2020.2977059

 

Department for Science, Innovation and Technology. (2025, January 29). Understanding artificial intelligence ethics and safety. GOV.UK.

 

Depict VR. (n.d.). Stronger bonds, healthier minds. Retrieved August 13, 2025, from https://depictvr.com

 

Donahue, M. K. (2025). Methodological naturalism, analyzed. Erkenntnis, 90, 1981–2002. https://doi.org/10.1007/s10670-024-00790-y

 

Gardiner, H., & Mutebi, N. (2025, January 31). AI and mental healthcare: Ethical and regulatory considerations (POSTnote 738). Parliamentary Office of Science and Technology. https://doi.org/10.58248/PN738

 

Heinz, M. V., Mackin, D. M., Trudeau, B. M., Bhattacharya, S., Wang, Y., Banta, H. A., Jewett, A. D., Salzhauer, A. J., Griffin, T. Z., & Jacobson, N. C. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI, 2(4). https://doi.org/10.1056/AIoa2400802

 

Hipgrave, L., Goldie, J., Dennis, S., & Coleman, A. (2025). Balancing risks and benefits: Clinicians’ perspectives on the use of generative AI chatbots in mental healthcare. Frontiers in Digital Health, 7, 1606291. https://doi.org/10.3389/fdgth.2025.1606291

 

H.R. 6216, National Artificial Intelligence Initiative Act of 2020, 116th Cong. (2020). Retrieved from Congress.gov. https://www.congress.gov/bill/116th-congress/house-bill/6216

 

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

 

Karkosz, S., Szymański, R., Sanna, K., & Michałowski, J. (2024). Effectiveness of a web-based and mobile therapy chatbot on anxiety and depressive symptoms in subclinical young adults: Randomized controlled trial. JMIR Formative Research, 8, e47960. https://doi.org/10.2196/47960

 

Kaunda, C. J. (2024). “Always‑already‑created”: Theology of creation in the context of artificial intelligence. Theology and Science, 22(2), 407–424. https://doi.org/10.1080/14746700.2024.2351649

 

Kurtz, R. [@iamritaritarita]. (2023, May 18). January 2023 marked five years since Sophia the Robot chatted with audience members in Las Vegas, Nevada, at the Consumer [Video short]. https://www.instagram.com/p/CsXMSw3Po39/?hl=en

 

Kurtz, R. (2018). CES 2018 Harmony the Sex Robot. [Photo]. Las Vegas, Nevada.

Kurtz, R. (2018). CES 2018 Harmony the Sex Robot. [Screenshots]. https://www.realdoll.com/

Kurtz, R. (2023). CES 2018 Harmony the Sex Robot. [Video]. Las Vegas, Nevada. https://youtube.com/shorts/0HoV4W7WzW8?si=9b_tUrZos_5gCbkE

 

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. https://doi.org/10.1177/2053951716679679

 

Mohr, D. C., Weingardt, K. R., Reddy, M., & Schueller, S. M. (2017). Three problems with current digital mental health research… and three things we can do about them. Psychiatric Services, 68(5), 427–429. https://doi.org/10.1176/appi.ps.201600541

 

New American Standard Bible. (2020). Zondervan. (Original work published 1971)

 

Pandya, S. K. (2011). Understanding brain, mind and soul: Contributions from neurology and neurosurgery. Mens Sana Monographs, 9(1), 129–149. https://doi.org/10.4103/0973-1229.77431

 

Rollwage, M., Habicht, J., Juchems, K., Carrington, B., Stylianou, M., Hauser, T. U., & Harper, R. (2024). Conversational AI facilitates mental health assessments and is associated with improved recovery rates. BMJ Innovations, 10(1–2), 4–12. https://doi.org/10.1136/bmjinnov-2023-001110

 

Sadeh-Sharvit, S., Del Camp, T., Horton, S. E., Hefner, J. D., Berry, J. M., Grossman, E., & Hollon, S. D. (2023). Effects of an artificial intelligence platform for behavioral interventions on depression and anxiety symptoms: Randomized clinical trial. Journal of Medical Internet Research, 25, e46781. https://doi.org/10.2196/46781

 

Seitz, L., Bekmeier-Feuerhahn, S., & Gohil, K. (2022). Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots. International Journal of Human-Computer Studies, 165, 102848. https://doi.org/10.1016/j.ijhcs.2022.102848

 

Siddals, S., Torous, J., & Coxon, A. (2024). “It happened to be the perfect thing”: Experiences of generative AI chatbots for mental health. npj Mental Health Research, 3, 48. https://doi.org/10.1038/s44184-024-00097-4

 

Véliz, C. (2021). Moral zombies: Why algorithms are not moral agents. AI & Society, 36(2), 487–497. https://doi.org/10.1007/s00146-021-01189-x

  

Waruwu, S., Duha, T., Damanik, D., & Sitopu, E. (2025). Theological study of the doctrine of the Trinity of God. Socio‑Economic and Humanistic Aspects for Township and Industry, 3(1), 66–82. https://doi.org/10.59535/sehati.v3i1.388

 

Wellness and Oversight for Psychological Resources Act, H.B. 1806, 104th Gen. Assembly (Ill. 2025).

 

 

Assisted Dying Through a Christian Lens: A Call to Uphold Life Video

(Click photo above for video) 

 

On June 23 in the United Kingdom, the House of Commons passed the initial stage of legislation permitting terminally ill individuals to request and receive assisted suicide under specific safeguards and protections, along with related provisions. The bill will now proceed to the House of Lords, followed by the final stages of the decision-making process. Diverse perspectives from Christian doctrine regarding the peril of departing from the Mosaic commandment "Thou shalt not murder," juxtaposed with a humanistic approach to honoring a loved one's dying wish, facilitate comprehensive discussion of the morality and ethical implications of this recently passed first-round bill.

Invited to the Houses of Parliament as a special guest, Rita discussed a Call-to-Action with Baroness Elizabeth Berridge and MP Florence Eshalomi on the following subject matter:

  1. Assisted Dying: A Call to Uphold Life
    A discussion of the ethical and moral questions examined through a Christian lens.
  2. Cross-Parliamentary Working Group on AI Ethics and Human Dignity
    A cross-party group of Lords, MPs, and expert advisors to address moral issues emerging from AI technologies, including algorithmic bias, digital surveillance, and the impact on vulnerable populations.
  3. Parliamentary Inquiry into Faith and Ethics in AI Policy
    An official inquiry to explore how AI law and policy can be informed by ethical, faith-based, and philosophical perspectives, ensuring a holistic view of human and societal well-being.
  4. Inclusion of Faith-Based Ethical Voices in Policy Development
    Support a call for Parliament to engage with theological ethicists and faith-led institutions during consultations on AI regulation.
  5. National AI Moral Literacy Campaign
    Encourage collaborative educational efforts in schools, churches, and public forums to enhance public understanding of AI through ethical and faith-based lenses.
  6. UK Charter on Human Dignity in AI
    Champion a foundational Charter that affirms AI must serve human dignity, justice, and stewardship, drawing inspiration from Christian values while remaining inclusive.

 

📄 Assisted Dying Through a Christian Lens: A Call to Uphold Life

 

Title: Assisted Dying Through a Christian Lens: A Call to Uphold Life
Author: Rita Kurtz
Affiliation: Personal study inspired by a visit to the Houses of Parliament
Date: June 27, 2025
Author Note:
Correspondence concerning this article should be addressed to rita@PreferredTalent.com

WHITE PAPER

Abstract

This paper examines the ethical, theological, and social implications of the United Kingdom's proposed assisted dying legislation through a Christian worldview. While proponents argue for personal autonomy and dignity, this analysis contends that legalizing assisted suicide undermines the sanctity of life and places vulnerable populations, including the elderly, disabled, mentally ill, and ethnic minorities, at increased risk. Drawing from biblical principles and empirical research, including a thematic review by Paschke-Winnel et al. (2023), the argument emphasizes that emotional despair is often transient and should not serve as grounds for permanent, state-sanctioned death. Scripture teaches that all human life is sacred, even in suffering, and calls believers to bear one another’s burdens, not to end them (New American Standard Bible, 1960/2020, Galatians 6:2). The paper critiques the bill’s reliance on fluctuating emotional states to determine eligibility, arguing that such a foundation invites irreversible moral and societal consequences. Ultimately, the paper calls for compassionate alternatives rooted in presence, care, and hope, affirming that even amidst pain, life remains a divine gift worth preserving.

#endoflife #hospice #acallforaction #christianity #assisteddying

Assisted Dying Through a Christian Lens: A Call to Uphold Life

The current “assisted dying” bill in the United Kingdom may soon become law. But what does this legislation mean when viewed through the lens of Christian faith and moral responsibility?

Numerous scientific studies confirm what many people have experienced firsthand: individuals across society are silently enduring deep emotional pain—battling suicidal thoughts, depression, despair, and suffering they believe to be unbearable (Baryshnikov & Isometsä, 2022). In their darkest moments, many genuinely believe that their lives are no longer worth living. Some even come dangerously close to ending their lives.

This is precisely why the increasing campaign to legalize assisted dying is so concerning. To speak plainly, “assisted dying” is a sanitized phrase for what is, in truth, assisted suicide (Jones, 2024). This proposal is not about dignity—it is about giving up on the very people who most need support, presence, and hope. It suggests that when someone reaches the end of their rope, the best society can offer is an exit.

Most disturbing is the emotional and spiritual disconnection embedded in the legislation. It fundamentally misinterprets the nature of mental and emotional suffering. People in despair do not need help to die—they need help to live. They do not need affirmation of their pain—they need companionship and care. The Bible calls Christians to “Bear one another’s burdens” (New American Standard Bible, 1960/2020, Galatians 6:2), not to eliminate the burden by eliminating the person.

Scripture is unequivocal: life is sacred—even when it is hard, even when it hurts. From beginning to end, life is a gift from God. The psalmist declares, “You formed my inward parts; You wove me in my mother’s womb” (New American Standard Bible, 1960/2020, Psalm 139:13). The commandment, “You shall not murder” (New American Standard Bible, 1960/2020, Exodus 20:13), does not rest on how useful or independent a person is. It rests on the eternal truth that human life is “God-breathed” (New American Standard Bible, 1960/2020, Genesis 2:7) and inherently holy.

Legalizing assisted suicide sends a dangerous message: that in moments of weakness, loss, or vulnerability, life becomes negotiable. That if someone feels like a burden, society will sanction their death rather than affirm their worth. But this is not mercy—it is abandonment. True mercy “does not rejoice in unrighteousness, but rejoices with the truth” (New American Standard Bible, 1960/2020, 1 Corinthians 13:6). Mercy leans into suffering; it does not walk away from it.

The passage of such a bill puts society’s most vulnerable at the greatest risk: the elderly, people with disabilities, ethnic minorities, and those who already struggle to access adequate mental health care. In many ethnic and immigrant communities, suffering is compounded by shame, silence, and isolation. If this bill becomes law, individuals who already feel unsupported may experience increased internal or external pressure to view death as their only option, now endorsed by law.

Scripture calls Christians to a radically different response: “Rescue those who are being taken away to death, and those who are staggering to the slaughter, Oh hold them back!” (New American Standard Bible, 1960/2020, Proverbs 24:11). People of faith are called to stand in the gap, not pave the path toward death. Believers are to trust not in fleeting feelings, but in the steadfastness of God: “Weeping may last for the night, but a shout of joy comes in the morning” (New American Standard Bible, 1960/2020, Psalm 30:5).

What makes this bill most dangerous is its foundation: emotion. But feelings are changeable, shaped by the “affect heuristic” (Finucane et al., 2000). Despair is not permanent. When society legislates based on emotional states, it risks irreversible consequences. Even secular scholarship highlights this danger. Paschke-Winnel et al. (2023) found that eligibility assessments for assisted dying raise widespread ethical concerns, particularly regarding impaired decision-making capacity and increased risk of suicide in vulnerable populations.

This debate is more than a legal matter—it is a moral and spiritual one. What kind of society do we aspire to be? One that values people only when they are strong and independent? Or one that affirms the dignity of every person, especially the weak, weary, and broken?

To lawmakers and fellow citizens: do not confuse compassion with surrender. True compassion stays. It suffers alongside. It speaks life when others only see death. Jesus Himself did not turn away from pain—He entered into it. He wept with those who mourned (New American Standard Bible, 1960/2020, John 11:35) and taught, “Blessed are those who mourn, for they will be comforted” (New American Standard Bible, 1960/2020, Matthew 5:4).

Let us not clothe cruelty in the language of care. Let us not make death more accessible than dignity. Let us walk with people through the valley, not abandon them in it.

Reject this bill—not because we oppose those who suffer, but because we stand for them. Because even in the struggle, life remains a sacred gift (Guerrero‑Torrellas et al., 2017).

 

References

Baryshnikov, I., & Isometsä, E. (2022). Psychological pain and suicidal behavior: A review. Frontiers in Psychiatry, 13, 981353. https://doi.org/10.3389/fpsyt.2022.981353

Jones, D. A. (2024). Defining the terms of the debate: Euthanasia and euphemism [PDF report]. Journal of Medical Ethics, Institute of Medical Ethics.

Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgment of risks and benefits. Journal of Behavioral Decision Making, 13(1), 1–17. https://doi.org/10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S

Guerrero‑Torrellas, M., Monforte‑Royo, C., Rodríguez‑Prat, A., Porta‑Sales, J., & Balaguer, A. (2017). Understanding meaning in life interventions in patients with advanced disease: A systematic review and realist synthesis. Palliative Medicine, 31(2), 104–121. https://doi.org/10.1177/0269216316672205

New American Standard Bible. (2020). Zondervan. (Original work published 1971)

Paschke-Winnel, M., Munday, M. E., & Shaheed, M. J. (2023). Medical assistance in dying (MAiD) and mental illness: A qualitative thematic review of ethical concerns. Journal of Bioethical Inquiry, 20(3), 431–445. https://doi.org/10.1007/s11673-023-10217-y

 

 

 

Rita Kurtz (Ph.D. in progress) is a Harvard University Scholar, Thought Leader, and Lawyer specializing in multidisciplinary and cross-sector concentrations including Artificial Intelligence (AI), technology, writing, law, psychology, neuroscience, neurotheology, ethics, morals, divinity, diversity, anthropology, and the sciences. 

Rita is an interdisciplinary researcher and recent graduate of Harvard University, where she earned a master's degree from the Faculty of Arts & Sciences. Studying under esteemed Harvard Law School Professor Roberto Mangabeira Unger (SJD), Harvard University Law Director of Intellectual Property Allan Ryan, Dr. Cornel West (2024 presidential candidate), and Dr. Arthur Kleinman (Harvard Department of Anthropology and Psychiatry) gave her a well-rounded, interdisciplinary grounding in law, anthropology, philosophy, ethics, morals, media, religiosity, and politics, preparing her well for Ph.D. candidacy. Through a prestigious Cross-Registration Academic Scholarship from Harvard, she gained a broader academic perspective and cultivated a profound curiosity about contributing scholarship to the discourse surrounding difficult existential inquiries. Professor Unger's discussions of the ethical, moral, and legal ramifications of artificial intelligence and the effects of the Knowledge Economy led her to investigate these topics further, including the Tocquevillian perspective on how technology's role has historically been underestimated. The intersections of history, religiosity, and technology piqued her interest in these emerging issues and in offering deeper research and discussion to grapple with the existential questions AI raises.

While at Harvard University, Rita combined her professional skills in television and film with several projects, including being selected from a pool of candidates to participate in a research study at the Langer Mindfulness Lab in the Department of Psychology, which examined the psychological effects of the news medium on how news is consumed and conveyed to the public. As a researcher at Harvard, she wrote two research papers and successfully presented them to a panel of fellow Ph.D. scientists. The first, Nutraceutical Skin Therapy: Anti-Inflammatory Effects of Ganoderma lucidum, studied how mushrooms may support youthful skin and aid patients suffering from the autoimmune disease sarcoidosis. The second, Meat Analogues: Are We Making a Positive Political Advancement to Save the Planet? Or a Personal Health Choice that Barely Sustains Ourselves?, uncovered the unnatural ingredients masked in meatless burgers from Beyond Meat and Impossible Burger. She became a published nonfiction writer and a certified digital storyteller while at Harvard.

Her undergraduate interdisciplinary studies in law, anthropology, and philosophy make her a well-rounded research candidate. Her BBA coursework covered a gamut of disciplines, including writing legal briefs and courses in Constitutional Law, Business Law I & II, Torts, Corporate Finance, Accounting 1 & 2, Human Resources, Business Policy, Political Science, Operations Management, Programming, Economics, Chemistry, Chemistry Lab, and Consumer Behavior. Her studies in computer programming, economics, anthropology, and philosophy broadened her technical mindset for business.

Rita runs an online e-commerce store and is a digital content creator with some experience in the Python programming language. She stays current on mainstream topics as a blogger, social media influencer, and actress/entertainer. As a world traveler, she divides her time between speaking, performing, and engaging in television, radio, and stage productions. She has covered tech news and innovations as a repeat spokesperson at the Consumer Electronics Show (CES), Macworld, and for Belkin Components, hence the nickname “Gadget Girl.” Her past acting appearances aired on Lifetime, History Channel, Fox, and the Paramount Network, landing her on an Emmy-nominated show. Her experience in media led to a career in television, radio, movies, stage, and writing, earning her the branding RitaRitaRita.

As a former executive producer and TV and radio host of a variety show on the positive side of sports, life, and entertainment, she worked alongside co-hosts including pro NFL players and industry professionals. The show was broadcast on Warner Brothers Television and Fox, and her position led to interviews with billionaires, millionaires, celebrities, professional athletes, NASCAR drivers, medical professionals, attorneys, musicians, and business owners. As a headlining lead singer, she has toured with Grammy Award-winning musicians and performed the national anthem for several professional sports teams around the United States. Rita is a strong writer, researcher, listener, and articulate speaker, and she takes direction well. She is most recognized for the national TV commercial in which she belted opera on a bus next to a guy dressed like a Scandinavian Viking (JG Wentworth, 877-CASHNOW).

Rita formerly worked for a private company as a Government Account Executive, supplying computer networks to the U.S. military around the globe and building relationships between the civilian sector and the government. She has also worked as a record-breaking Executive Technical Recruiter, earning "Recruiter of the Month" and "Recruiter of the Year" for the highest commission ever received by the company, achieved by placing a CEO into a Fortune 500 tech company. In that role she placed C-level executives into major tech companies and start-ups. Her well-roundedness and entrepreneurial mindset also led her to run a successful bakery at the Department of Defense (DoD) Air Force Exchange.

Currently pursuing her Doctor of Philosophy degree in Psychology and Law, with a Christian lens on ethics and morals, she holds research interests in artificial intelligence (AI), virtual reality (VR), law, ethics, morals, neuroscience, bioethics, aviation, military affairs, divinity, and diversity. Her postgraduate studies at Liberty University allow her to research, analyze, test, and generate new data, and to apply statistical and analytical methods. By situating academic theories in psychology within a Christian worldview, she aims to open deeper inquiry into professional values, morals, ethics, behaviors, attitudes, justice, theoretical and evidence-based modeling, culturally diverse standardization, and leadership in trends, concepts, and methods. She is currently studying neuroscience, cognitive psychology, social-personality psychology, neurotheology, law, and statistics. Her main focus is self-regulation within Health Psychology, approached holistically through mind, body, spirit, and soul.

She is a current member of the American Psychological Association (APA), American Psychology-Law Society (AP-LS), National Association of Black Journalists (NABJ), Harvard Club of the United Kingdom, American Federation of Musicians (AFM), Christian Association for Psychological Studies (CAPS), Harvard Black Alumni Society (HBAS), and the Harvard Alumni Association, and a former member of the Harvard Club of New York and the Harvard Club of Southern California. She currently resides in Beverly Hills, California. Her faith in Jesus Christ is the foundation for her life.

 

 

Awards: 

  • High Potential Individual (HPI) Visa holder in the United Kingdom, 2024–2026. The HPI Visa provides preferential treatment to elite academic students holding a professional degree from a top-ranked Ivy League university within the past five years, expanding horizons and international business in the United Kingdom. In 2022, only 1,342 applicants were accepted globally.
  • SMART Scholarship semi-finalist, 2023
  • Harvard Academic Cross-Registration Scholarship Award 2021
  • Published author: Top 20 List on Talking Writer 2020
  • Record-breaking "Recruiter of the Month" as an executive C-level recruiter, earning the company's highest single-placement commission by placing a CEO into a Fortune 500 tech company
  • SEFMD Science and Engineering Award in Microbiology (first place, Level 1; second place, Level 2)

                                                                                                                               ###

 

Author Note

Rita L. Kurtz- https://orcid.org/0000-0002-4456-7784

No conflict of interest to disclose.

Correspondence concerning this article should be addressed to RitaKurtz@alumni.Harvard.edu

 

 SPECIAL BLOG POST! A DOWNLOADABLE FOR YOU!


 

 


 

Open Books of Disorganization

How to Get Your Life on Track With Biblical Wisdom

By Rita Kurtz (Ph.D. in progress)
RitaKurtz.com


This downloadable guide explores the often overlooked topic of executive dysfunction—a cluster of symptoms that affect our ability to plan, organize, and manage time. Grounded in scripture and practical tools, this booklet offers hope for anyone overwhelmed by disorganization.

  • Understand the symptoms of executive dysfunction
  • Learn practical tools to regain control
  • Apply biblical wisdom for lasting change
📥 Download PDF Booklet:  

Published: May 10, 2025

 

 

 
