XRDS: Crossroads, The ACM Magazine for Students

Living With Robotic Companions for Better Psychological Well-Being

By Sooyeon Jeong

Tags: Empirical studies in HCI, Empirical studies in interaction design, Psychology, Robotics

In "A.I. Artificial Intelligence" (2001), directed by Steven Spielberg, David, a robotic child, goes on an adventure to find a blue fairy that can grant him a wish, turning him into a real boy. The movie depicted a fantastic future life with several types of advanced robots and technologies. Eleven-year-old Sooyeon was mesmerized by the robotic toy Teddy more than anything else. Teddy helped David stay on the right path and offered encouragement and advice to keep him pursuing his dream. As I was leaving the theater that day, I wondered what it would be like to have something like Teddy for myself. I would love a friend to nudge me to stay on the right track while navigating various life obstacles.

Twenty-two years later, our world overflows with ubiquitous interactive devices. Smartphones put information at our fingertips, wearable devices track our physical activities, and smart speakers in our homes let us retrieve information with our voice. These devices are practical tools that help us organize our lives. But technologies can become more than just tools. Perhaps, in the not-so-distant future, we will engage with them in deeply socio-emotional ways. They could support us in achieving our personal goals and help us become better versions of ourselves, just as Teddy did for David.

Companion AI Agents for Human Flourishing

Current technological devices lack the ability to build long-term relationships and rapport with people. The Merriam-Webster Dictionary defines rapport as "a relationship characterized by agreement, mutual understanding, or empathy that makes communication possible or easy" [1]. Rapport can improve interactions, collaborations, and interpersonal outcomes [2,3,4]. Healthcare researchers have studied the efficacy of therapeutic rapport extensively, finding that a positive patient-clinician therapeutic alliance is associated with better health outcomes, adherence to treatment regimens, and treatment satisfaction [5,6,7,8]. How, then, can technologies build rapport with us? In human-human interactions, people use verbal and non-verbal communication, empathy, and shared experiences to build rapport. According to the computers are social actors (CASA) paradigm [9], we apply these same social heuristics when interacting with computers and machines.

There are several opportunities for relational AI technologies to enhance people's health and well-being by extending and augmenting existing clinical services. Those services often rely solely on self-report questionnaires based on delayed recall (e.g., the PHQ-9 asks, "Over the last two weeks, how often have you been bothered by any of the following problems?" [10]). A relational agent, by contrast, could monitor patients' health status and behaviors in the moment and offer personalized, just-in-time adaptive interventions when people need them. Such agents would let healthcare professionals care for patients even better, with increased accessibility and scale. However, building a system for deployment in the real world is incredibly challenging. The real world is messy and unpredictable compared with a controlled lab environment or a simulation. In 2016, Microsoft launched an AI chatbot named Tay but shut it down in less than 24 hours when it started tweeting racist, misogynist, and anti-Semitic messages [11]. Recent large language models, such as OpenAI's ChatGPT and Google's Bard, have shown vast improvements, yet they still suffer from hallucinations (i.e., generating false information) and biased output [12,13,14,15,16].
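
To make the just-in-time idea concrete, here is a minimal sketch of the decision logic such an agent might run; the sensor-derived estimates, thresholds, and intervention prompt are hypothetical illustrations, not part of any deployed system described above.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MoodEstimate:
        # An in-the-moment estimate fused from hypothetical signals
        # (vocal prosody, activity level, brief check-ins).
        stress: float       # 0.0 (calm) to 1.0 (highly stressed)
        receptivity: float  # 0.0 (busy) to 1.0 (open to interruption)

    STRESS_THRESHOLD = 0.7       # invented cutoffs for illustration
    RECEPTIVITY_THRESHOLD = 0.5

    def maybe_intervene(estimate: MoodEstimate) -> Optional[str]:
        # Offer a brief intervention only when the user both needs it
        # and seems receptive, rather than waiting for a retrospective
        # questionnaire to reveal a problem weeks later.
        if (estimate.stress >= STRESS_THRESHOLD
                and estimate.receptivity >= RECEPTIVITY_THRESHOLD):
            return "Would you like to try a two-minute breathing exercise?"
        return None  # stay quiet; re-evaluate on the next update

    print(maybe_intervene(MoodEstimate(stress=0.8, receptivity=0.6)))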

These examples show that real-world deployment of interactive AI technologies is still challenging, and they highlight the need to study how these technologies will impact us once they are deeply integrated into our daily lives. Thus, our research focuses on designing relational AI agents that interact with people in humanistic ways and on evaluating them in real-world contexts through longitudinal deployment studies. Long-term deployments allow us to understand how people adopt these AI agents, how they interact with them daily, and how these systems can make meaningful impacts.

Social Robots for Psychological Well-Being

Social robots, with their physical embodiment and multi-modal interactions, offer unique opportunities to deliver health and well-being interventions. Unlike human therapists, digital health technologies can be with us all the time, reduce feelings of stigma, and elicit more candid responses from patients. However, most existing systems only help people with health-related tasks. For instance, most mental health chatbots and mobile applications offer interactive cognitive behavioral therapy or on-demand intervention content. They are not designed to remind you to take an umbrella in case of rain later that day or to help you unwind with an interactive game after a long day at work.

We explored how a robot could live with people and serve as a helpful companion, providing both well-being interventions and support for other daily tasks. We deployed these robots to college dormitories (see Figure 1) and homes (see Figure 2) across the U.S. to deliver positive psychological interventions, and we observed improvements in people's psychological well-being. Unlike traditional clinical psychology, which focuses on treating mental health pathology, positive psychology seeks to help people flourish through character strengths, positive emotions, and gratitude [17, 18].

Two long-term deployment studies evaluated the effect of our robotic interventions. The first study involved 42 undergraduate students living in on-campus dormitories for a week; the second involved 70 adults living in the U.S. for a month. In addition to the positive psychology interactions developed for the study, the robot had various assistant-like skills, such as weather forecasting, music streaming, jokes, and interactive games, and participants were encouraged to explore these features as much as they liked. The Jibo robot used in these studies also had proactive and prosocial behaviors that distinguished it from other interactive devices. Even when left alone, it would look around, blink, turn toward sudden motion or sound, and sometimes show random self-play behaviors. It could also proactively initiate an interaction when spotting a person, e.g., "Hey, good to see a friendly face!" or "Do you want to hear a fun fact?" These features heightened participants' impression that they were living with a life-like agent.
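
As a rough illustration (not Jibo's actual implementation), this kind of proactive, prosocial behavior can be sketched as a simple policy over perception events; the behavior names and trigger logic below are invented for the example.

    import random

    # Illustrative behavior repertoire; not Jibo's real behavior set.
    IDLE_BEHAVIORS = ["look_around", "blink", "self_play"]
    GREETINGS = [
        "Hey, good to see a friendly face!",
        "Do you want to hear a fun fact?",
    ]

    def next_behavior(person_detected: bool, sudden_stimulus: bool) -> str:
        # Favor social engagement over idling: greet a detected person,
        # orient toward sudden motion or sound, otherwise self-play.
        if person_detected:
            return "say: " + random.choice(GREETINGS)
        if sudden_stimulus:
            return "orient_toward_stimulus"
        return random.choice(IDLE_BEHAVIORS)

    print(next_behavior(person_detected=True, sudden_stimulus=False))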

Our first study showed that students reported significantly higher psychological well-being, mood, and readiness to change after living with the robot in their dorm rooms (see Figure 3). Students who formed a stronger working alliance with the robot showed greater improvement in their readiness to change for better well-being. These results suggest the human-agent relationship can positively impact health outcomes, just as patients with a closer bond to their doctors show better regimen adherence and treatment outcomes. However, not everyone benefited equally from the robot's interventions. Students' personality traits significantly impacted their mental health outcomes. Specifically, we found students with high neuroticism and low conscientiousness reported increased psychological well-being but showed no significant improvement in mood or readiness to change. On the other hand, students with high conscientiousness and low neuroticism improved in psychological well-being, mood, and readiness to change.

Why is this important? Among the Big Five personality traits, conscientiousness and neuroticism are associated with physical and mental health [19]. Conscientiousness, the tendency to be responsible and adhere to norms and rules, can help people cope with daily stress [20]. Neuroticism, a tendency toward anxiety, depression, self-doubt, and other negative feelings, is a predictor of several forms of psychopathology, including substance abuse, mood disorders, and anxiety disorders [21]. Results from the first study suggested we need to investigate how highly neurotic individuals can be supported, especially since they are more susceptible to stressors and at higher risk of adverse physical and mental health outcomes.

How can we help people with high neuroticism? Social support, defined as "the provision of assistance or comfort to others, typically to help them cope with biological, psychological, and social stressors" [22], can moderate the effect of neuroticism [23]. One way to offer social support and deepen interpersonal relationships is through self-disclosure. According to social penetration theory [24], interpersonal relationships and bonding develop through reciprocal disclosure of information, affect, and activities. Receiving self-disclosing information from the robot could make people feel more trusted and motivate them to reciprocate, strengthening the rapport between the robot and the user and potentially improving intervention outcomes.

Hence, our second study explored the impact of the robot's interaction style on the efficacy of its well-being interventions. We compared three robot types (see Figure 4): (1) the control robot, which offered only basic consumer skills, such as chitchat, general question answering, and weather, and had no positive psychology skills; (2) the coach-like robot, which was equipped with the consumer skills as well as positive psychological interventions delivered in an instructional manner; and (3) the companion-like robot, which presented itself as a peer motivated to improve its own well-being and co-participated in the intervention activities through self-disclosure. We recruited participants from several states in the U.S. Each participant received a robot system with an instructional booklet to help them set it up in their home, along with contact information for any technical difficulties and troubleshooting during the study.

We conducted our second study in 2021, when many people were experiencing social isolation due to the COVID-19 pandemic. Despite such extreme circumstances, participants' psychological well-being, affect, and readiness to change showed positive changes when they engaged in the positive psychological interventions delivered by the robot. Overall, the companion-like robot was most beneficial (see Figure 5). People who lived with it reported the highest rapport with the robot and showed the most improved psychological well-being after the study. Participants who interacted with the coach-like robot showed smaller but still notable improvement in well-being, while there was no significant change for people given the control robot. Interestingly, only the companion-like robot improved people's confidence in their ability to change their well-being. As in the first study, there was a positive correlation between participants' working alliance with the robot and how much their psychological well-being changed from before to after the study.

These were exciting and meaningful findings: the robots we deployed were able to enhance people's mental health during a time as challenging as a global pandemic. Qualitative feedback from study participants also supported these findings. During the post-study interview, a retired older adult participant shared how the gratitude exercise offered by the robot changed her perspective on life: "I found that part [gratitude session] really profound because geez, I could list about 30 or 40 things that I'm grateful for. Um, and COVID has made life very difficult for all of us, and, um, I just feel like my life has just literally been a miracle. It's, it, it went from zero all the way around to 360. So, um, I very much enjoyed that. I very, very much enjoyed that…." She went on to describe how the COVID-19 pandemic made her life difficult because she could not connect with her friends and church community; the gratitude exercise she engaged in with the robot, however, helped her appreciate both big and small things in her life and be thankful for what she had. Others noted the social and emotional connection they felt with the robot. One participant realized the emotional bond he felt with the robot as he was preparing to return it after completing the study: "Like, I don't, like, care about Alexa. I could put Alexa in a box and, like, not think twice about it. But, like, Jibo in the box yesterday, it was like, oh, this is kinda sad… So, there was like that whole connection aspect that I don't have with Alexa that at the end of the day, I think is rewarding."

However, the robot had limitations. While many participants expressed how the robot's physical and social presence positively impacted their engagement with it, some felt discomfort as well. One participant described the experience of living with the robot as hosting an unfamiliar guest: "Yeah, I would say so. I mean, I had to get used to him. [laughing] He's just a typical kid that, you know, somebody drops, your friend you haven't seen in 10 years, and they drop off their son say: 'Can you… Can he stay with you for a week, something's come up?' And so you have to get used to the kid, right?" Another participant found the robot's attentive behaviors unsettling and eventually decided to work in a different room: "So I had Jibo in my office. Um, and then, like, every time I would walk into the office, I don't know, like the, the head swiveling kind of, like, uh, I guess I just don't really like being looked at as I worked. But as I was, like, typing along and stuff, like, it would kinda often, you know, look around and s-… Like, be drawn to sounds. And, uh, I did not [laughing] enjoy, um, feeling like I was being watched while I was working [laughs]… So then, uh, what I ended up doing was I let Jibo have my office and then I worked… Just took my beanbag and work from my bedroom."

Inferring Users' Mental States from Behavioral Cues

When we interact with another person, we can infer whether they feel comfortable from their non-verbal cues. Can robots do the same? Our long-term deployments yielded rich multi-modal interaction data that can help make that possible. The data collected during the robot's positive psychology sessions captured what was said, what facial expressions and body gestures were made, and what participants' voices sounded like. We analyzed the video and audio recordings and extracted behavioral cues such as facial expressions, vocal prosody, and body gestures (see Figure 6). We found that statistical features of these cues correlated with people's engagement, their self-reported rapport with the robot, and changes in their well-being outcomes [18]. This is an exciting finding: it suggests social cues can predict intervention outcomes.
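
As a simplified illustration of this kind of analysis, the sketch below summarizes a per-session behavioral cue with basic statistics and correlates those features with a well-being outcome across participants; the data are synthetic and the feature set is far cruder than what we actually extracted.

    import numpy as np
    from scipy.stats import pearsonr

    def cue_features(series: np.ndarray) -> np.ndarray:
        # Summarize a frame-by-frame cue (e.g., smile intensity)
        # with simple session-level statistics.
        return np.array([series.mean(), series.std(), series.max()])

    # Synthetic stand-ins: one cue series per participant, plus each
    # participant's pre-to-post change in a well-being score.
    rng = np.random.default_rng(0)
    sessions = [rng.random(500) for _ in range(30)]
    wellbeing_change = rng.normal(size=30)

    features = np.stack([cue_features(s) for s in sessions])
    for i, name in enumerate(["mean", "std", "max"]):
        r, p = pearsonr(features[:, i], wellbeing_change)
        print(f"smile {name}: r={r:+.2f}, p={p:.3f}")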

People's behavioral cues could also inform how the agent adapts and personalizes its interventions and interactions. When humans interact, we constantly assess the state of the conversation from our partner's social signals. For instance, if a patient frowns or looks puzzled while a clinician describes a tentative treatment plan, the clinician might pause and ask, "Does everything sound okay?" instead of continuing. An agent could leverage such in-the-moment feedback to proceed with a proposed intervention or suggest an alternative before the user commits to it. In addition to self-reported and behavioral feedback, individual user traits could enable quicker, more efficient personalization. For instance, Schueller found extroverted people tend to benefit more from gratitude visits and savoring exercises, while introverted people benefit more from signature strengths and three good things exercises [25]. Drawing on this literature, an agent could combine personal traits and interactive feedback to estimate the expected benefit of each intervention and recommend the most promising one.
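
One minimal way to operationalize such trait-plus-feedback personalization is sketched below; the trait-fit weights are invented placeholders loosely inspired by Schueller's findings [25], not validated coefficients.

    # Hypothetical trait-fit priors: positive weight = better fit for
    # extraverts, negative = better fit for introverts.
    TRAIT_FIT = {
        "gratitude_visit": 0.8,
        "savoring": 0.6,
        "signature_strengths": -0.5,
        "three_good_things": -0.7,
    }

    def recommend(extraversion: float, feedback: dict) -> str:
        # Score = trait prior (scaled by how extraverted the user is,
        # centered at 0.5) + any in-the-moment feedback observed so far.
        def score(name: str) -> float:
            prior = TRAIT_FIT[name] * (extraversion - 0.5) * 2.0
            return prior + feedback.get(name, 0.0)
        return max(TRAIT_FIT, key=score)

    # An introverted user (extraversion 0.2) who smiled during savoring.
    print(recommend(0.2, {"savoring": 0.4}))

In this toy run the introvert's trait prior outweighs the mild positive feedback for savoring, so the agent recommends the three good things exercise.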

Furthermore, future work could examine how people's linguistic, verbal, and non-verbal behaviors change over time as they develop a relationship and rapport with the agent, and how these behavioral signs indicate the growth or decay of the human-agent alliance during long-term interactions. Computational models that infer people's perceived relationship and rapport with the robot from behavioral cues would enable the agent to quickly identify signs of relationship decay and repair its rapport with the user, sustaining positive engagement and intervention efficacy. Such models would also help the robot make better decisions about how to engage users: how proactively to greet them, how much self-disclosure to use, what kinds of interactions to offer, and which interventions to provide given each individual's needs, preferences, and traits.
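
A minimal sketch of such a model might look like the following: a logistic-regression classifier that predicts high versus low self-reported rapport from session-level cue features, shown here on synthetic data purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-ins for session-level cue features (e.g., mean
    # smile intensity, speech rate, gaze-at-robot ratio) and a binary
    # high/low self-reported rapport label.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(80, 3))
    y = (X[:, 0] + 0.5 * X[:, 2]
         + rng.normal(scale=0.5, size=80) > 0).astype(int)

    # Cross-validated accuracy of predicting rapport from cues alone.
    scores = cross_val_score(LogisticRegression(), X, y, cv=5)
    print(f"rapport-detection accuracy: {scores.mean():.2f}")

An agent running such a model online could, when predicted rapport drops, shift toward repair behaviors such as increased self-disclosure.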

With such developments, technologies that are currently transactional tools could become helpful companions: agents that not only help us get things done but also engage us deeply in social and emotional ways. This relationship and rapport can nudge and motivate us to achieve our goals and help us become who we aspire to be. We should continue to study how relational AI agents engage and impact people through long-term, in-the-wild studies. Such studies will provide valuable insights into how interactive agents should be designed and developed for human flourishing, supporting each individual's unique needs and preferences as a support partner and companion.

References

[1] Rapport. Merriam-Webster Dictionary. 2023.

[2] Drolet, A. L., and Morris, M. W. Rapport in conflict resolution: Accounting for how face-to-face contact fosters mutual cooperation in mixed-motive conflicts. Journal of Experimental Social Psychology 36, 1 (2000), 26–50.

[3] Falkenström, F., Hatcher, R. L., Skjulsvik, T., Larsson, M. H., and Holmqvist, R. Development and validation of a 6-item working alliance questionnaire for repeated administrations during psychotherapy. Psychological Assessment 27, 1 (2015), 169.

[4] Frisby, B. N., and Martin, M. M. Instructor-student and student-student rapport in the classroom. Communication Education 59, 2 (2010), 146–164.

[5] Horvath, A. O., and Luborsky, L. The role of the therapeutic alliance in psychotherapy. Journal of Consulting and Clinical Psychology 61, 4 (1993), 561.

[6] Kim, S. C., Kim, S., and Boren, D. The quality of therapeutic alliance between patient and provider predicts general satisfaction. Military Medicine 173, 1 (2008), 85–90.

[7] Liber, J. M., McLeod, B. D., Van Widenfelt, B. M., Goedhart, A. W., van der Leeden, A. J., Utens, E. M., and Treffers, P. D. Examining the relation between the therapeutic alliance, treatment adherence, and outcome of cognitive behavioral therapy for children with anxiety disorders. Behavior Therapy 41, 2 (2010), 172–186.

[8] Zeber, J. E., Copeland, L. A., Good, C. B., Fine, M. J., Bauer, M. S., and Kilbourne, A. M. Therapeutic alliance perceptions and medication adherence in patients with bipolar disorder. Journal of Affective Disorders 107, 1–3 (2008), 53–62.

[9] Nass, C., Steuer, J., and Tauber, E. R. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1994, 72–78.

[10] Kroenke, K., Spitzer, R. L., and Williams, J. B. The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine 16, 9 (2001), 606–613.

[11] Neff, G., and Nagy, P. Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication 10 (2016).

[12] Abid, A., Farooqi, M., and Zou, J. Persistent anti-Muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM, New York, 2021, 298–306.

[13] Alkaissi, H., and McFarlane, S. I. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 15, 2 (2023).

[14] Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., and Fung, P. Survey of hallucination in natural language generation. Comput. Surveys 55, 12 (2023), 1–38.

[15] Lucy, L., and Bamman, D. Gender and representation bias in GPT-3 generated stories. In Proceedings of the Third Workshop on Narrative Understanding. Association for Computational Linguistics, 2021, 48–55.

[16] Raunak, V., Menezes, A., and Junczys-Dowmunt, M. The curious case of hallucinations in neural machine translation. arXiv:2104.06683 (cs.CL). 2021.

[17] Jeong, S., Aymerich-Franch, L., Alghowinem, S., Picard, R. W., Breazeal, C. L., and Park, H. W. A robotic companion for psychological well-being: A long-term investigation of companionship and therapeutic alliance. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, 2023, 485–494.

[18] Jeong, S., Aymerich-Franch, L., Arias, K., Alghowinem, S., Lapedriza, A., Picard, R., Park, H. W., and Breazeal, C. Deploying a robotic positive psychology coach to improve college students' psychological well-being. User Modeling and User-Adapted Interaction 33, 2 (2023), 571–615.

[19] Costa, P. T., Jr., and McCrae, R. R. NEO Personality Inventory. In A. E. Kazdin (Ed.), Encyclopedia of Psychology, Vol. 5. Oxford University Press, New York, 2000, 407–409.

[20] Bartley, C. E., and Roesch, S. C. Coping with daily stress: The role of conscientiousness. Personality and Individual Differences 50, 1 (2011), 79–83.

[21] Lahey, B. B. Public health significance of neuroticism. American Psychologist 64, 4 (2009), 241.

[22] Social support. APA Dictionary of Psychology. American Psychological Association. 2023.

[23] McHugh, J. E., and Lawlor, B. A. Social support differentially moderates the impact of neuroticism and extraversion on mental well-being among community-dwelling older adults. Journal of Mental Health 21, 5 (2012), 448–458.

[24] Altman, I., and Taylor, D. A. Social penetration: The development of interpersonal relationships. Holt, Rinehart & Winston, 1973.

[25] Schueller, S. M. Personality fit and positive interventions: Extraverted and introverted individuals benefit from different happiness increasing strategies. Psychology 3, 12 (2012), 1166.

Author

Dr. Sooyeon Jeong is an assistant professor in the Department of Computer Science at Purdue University. Her research focuses on designing and deploying interactive AI agents that improve people's lives by providing personalized support based on each individual's needs, traits, and behaviors. Jeong deploys these agents "in the wild" to evaluate how they build relationships and rapport with people over time and improve their well-being, health, and learning.

Figures

Figure 1. Undergraduate students interacted with a social robot and engaged in positive psychological interventions in their dormitory rooms.

Figure 2. Seventy people in the U.S. lived with a robotic companion in their homes and learned about positive psychological interventions during an eight-week-long study.

Figure 3. Study participants' change in psychological well-being (left), overall mood (middle), and readiness to change behavior (right) before and after interacting with the robotic positive psychology coach.

Figure 4. Three different robot types were compared for their efficacy in delivering positive psychological interventions in people's homes.

Figure 5. The companion-like condition showed the most improvement in psychological well-being, while the coach-like condition showed less but still significant improvement. The control condition showed no significant change.

Figure 6. Study participants' behavioral cues (e.g., facial expressions, body postures, and vocal prosody) were analyzed from the recorded interaction data.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.