
Investigating Work Companion Robot Interactions to Enhance Work-from-Home Productivity and Experience
  • Heeji Kim : Underwood International College, Student, Yonsei University, Seoul, Korea
  • Bokyung Lee : Underwood International College, Assistant Professor, Yonsei University, Seoul, Korea

Background Working from home (WFH) provides flexibility but often at the expense of productivity. In traditional office settings, the presence of colleagues positively influences productivity, and the absence of this social monitoring is a key factor affecting WFH workers’ concentration. To address this challenge, this study proposes a desktop robot companion, Roby, that monitors workers’ behaviors and provides real-time interactive feedback to increase productivity and enhance the WFH experience.

Methods After generating our initial interaction design framework for Roby, we used a Wizard-of-Oz experiment to evaluate Roby’s suggested interactions and its form factor using our research prototype. The participants were asked to work at home with Roby on their tabletops. The collected data included observed behaviors, participant feedback on Roby’s design through questionnaires and interviews, and co-design session findings. Then, using a thematic analysis, we identified recurring themes to derive design guidelines for an optimized WFH companion robot.

Results The study showed that Roby’s physical presence and interactive behaviors positively impact WFH productivity. Contributing factors included its supervisory presence; clear interactive cues marking work/break transitions; subtle, non-intrusive motion reminders to stay on task; and a simple, functional, minimalist form factor. Future WFH companion robots should balance supervision with non-disruption, provide natural communication, deliver unobtrusive multi-sensory cues, and adapt to diverse home setups to optimize productivity and user experience.

Conclusions This study provides insights into how a tabletop robot companion influences the productivity of knowledge workers in WFH settings. By evaluating Roby in real-world contexts, we identify key factors that we hope can inform the design of future intelligent robot companions to optimize remote work experiences across various professional domains.

Keywords:
Human–Robot Interaction, Work from Home, Companion Robot, Productivity.
pISSN: 1226-8046
eISSN: 2288-2987
Publisher: Korean Society of Design Science (한국디자인학회)
Received: 11 Apr, 2024
Revised: 30 Jul, 2024
Accepted: 12 Aug, 2024
Printed: 31 Aug, 2024
Volume: 37 Issue: 4
Page: 43 ~ 63
DOI: https://doi.org/10.15187/adr.2024.08.37.4.43
Corresponding Author: Bokyung Lee (bo.lee@yonsei.ac.kr)

Citation: Kim, H., & Lee, B. (2024). Investigating Work Companion Robot Interactions to Enhance Work-from-Home Productivity and Experience. Archives of Design Research, 37(4), 43-63.

Copyright : This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted educational and non-commercial use, provided the original work is properly cited.

1. Introduction

Working from home (WFH) has gained popularity among knowledge professionals such as software developers, writers, and designers since the start of the pandemic (Bick et al., 2023). While WFH has enhanced employees’ flexibility in working hours and job satisfaction, it has also been associated with potential downsides, such as decreased productivity (Wu & Chen, 2020).

In the field of design and human–computer interaction (HCI), there have been several attempts to support productivity in work settings. However, most existing works remain limited to office environments (Figure 1a) and/or exist as software in virtual environments (Figure 1d), and thus cannot capture the full scope of workers’ dynamics in diverse physical environments.


Figure 1 Research landscape of work productivity solutions (assistive, worker-employed)

Most virtual solutions (Figure 1d) predominantly rely on screen-based data within a 2D space. The most common virtual approach is to apply self-tracking technologies that monitor digital activity and screen time (Kim et al., 2019) and return the monitored information to users with minimal interruption (Winikoff et al., 2021). Additional virtual solutions explored for both office and WFH environments include chat- and voice-based conversational agents (Kocielnik et al., 2018) along with various employee surveillance technologies (Cousineau et al., 2023). These technologies aim primarily to enhance communication and to monitor work activities digitally, focusing on micro-level digital activities. Additionally, they rely on users’ self-reflection, using monitored activity data as a trigger.

Conversely, physical and embodied solutions (Figure 1c) typically take into account more tangible or interactive components that incorporate physical elements. There have been some solutions proposed for office environments, such as break management robots (Šabanović et al., 2014), socially assistive robots (Zhang et al., 2021), and other types of robots (Jacobs et al., 2019; Ogasawara & Gouko, 2017; Tan et al., 2023). However, there is still a lack of solutions tailored to the needs of remote work setups. Consequently, the domain of embodied WFH practices remains largely unexplored.

The presence of physical forms and embodied elements in agents has been found to improve a variety of user experiences. For example, Rogge (2023) suggests that embodiment fosters the creation of social and emotional attachments, highlighting the unique capabilities of physical agents in establishing meaningful connections with users. Similarly, Ventre-Dominey et al. (2019) found that physical agents are generally perceived as more likable and sociable than their virtual counterparts, underscoring the potential for enhanced user engagement and satisfaction. Moreover, the benefits of physical embodiment may extend beyond mere form, indicating that other aspects of embodiment, such as the agent’s behavior or the context of its interactions, could also play a significant role in its effectiveness (Sasser et al., 2024). These studies emphasize the need for further exploration of physically embodied solutions that utilize these dimensions to enhance productivity in WFH settings, potentially leveraging social factors and work productivity simultaneously.

In this study, we shift our focus from “micro-monitoring workers’ behaviors” to “leveraging the impact of social presence” on productivity (Conrad et al., 2023) as a way to enhance WFH workers’ experiences. In real-world office environments, the presence of colleagues working in the same space indirectly influences workers’ productivity in a positive manner; for example, workers may focus more if their colleagues seem to concentrate on their work. Integrating these ambient social factors can leverage the tangible dynamics of the workspace, providing a more engaging and interactive work experience (Shiomi et al., 2006).

To employ physical presence in a home office, we propose the concept of a tabletop robot, Roby, that acts as an interactive companion and transcends the limitations of digital tracking tools, not just providing reminders or alerts but also serving as a physical embodiment of work discipline (Figure 1b). Roby monitors workers’ presence at the desk and their working behaviors throughout their work journey and provides real-time interactive feedback in an ambient manner. As domestic spaces transform into home offices, introducing robotic objects can engineer new realities that positively impact the WFH environment. Along with projections indicating surging growth and adoption of robotics and smart home technologies, especially across the Asia Pacific (ReportLinker, 2022), tabletop robots present an innovative solution to enhance productivity for WFH professionals.

After proposing our initial design for Roby based on the literature, we suggest four types of interactions: i) a greeting interaction that welcomes users and indicates the start of the supervisory phase by moving the camera to the front-facing position; ii) a supervisory interaction that establishes a sense of observation through responsive movements during work periods to sustain focus; iii) a mirroring interaction that offers physical cues like a downward lens tilt to facilitate work-break transitions; and iv) a farewell interaction that signals the end of the work session by returning the camera to its rear-facing position, granting personal time. Then, we evaluate the proposed interactions in real-world settings using a Wizard-of-Oz (WoZ) approach with six participants. The results highlight the impact of Roby’s assistive approach on increasing productivity by collaborating with workers to improve their focus and ability to do more work rather than delegating tasks directly to the robot.

This study’s key contributions are twofold. First, the empirical findings from our WoZ study offer insights into how a companion robot affects the productivity of remote knowledge workers in terms of its presence, interactions, and form. Second, leveraging these insights, we delve into the design implications that can inform future decision-making regarding the design of such robots. By understanding how different shapes and motions impact concentration, we can inform the future design of robotic objects that optimize productive focus for knowledge workers.

2. Design Proposal

We introduce the initial design concept of a work companion robot, Roby, to support the productivity and emotional comfort of WFH workers. Roby is a desk-based robot designed to monitor workers’ activity using an embedded camera sensor and provide corresponding interactions. Guided by an understanding of the typical workflow adopted by knowledge workers, we outline four types of interactions that we created for Roby and discuss our initial design decisions.

2.1. Interaction Framework

The robot’s dynamic interactions are designed to enhance the remote working experience by reflecting the natural workday rhythms of WFH workers. Four different interactions are introduced to reflect the common cyclic phases: 1) work initiation, 2) work progression, 3) break time, and 4) work termination. By employing context-sensitive interactive feedback, the robot not only keeps users aware of their work phase but also subtly boosts productivity by blending into their environment as an ambient, non-intrusive presence. The ultimate goal of the suggested interactions is to empower the robot with social qualities, establishing a distinctive social connection with users.


Figure 2 Interaction framework: (a) greeting interaction, (b) supervisory interaction, (c) mirroring interaction, (d) farewell interaction
2.1.1. Work Initiation: Greeting Interaction

Inspired by Heenan et al. (2014), we designed the greeting interaction to explicitly indicate to users that Roby is activated while offering a warm welcome. As Roby comes equipped with a camera sensor, which may cause privacy concerns for users, this interaction explicitly indicates the beginning of the supervisory phase with camera monitoring. Upon the user sitting down to work, Roby performs a “welcoming” gesture by moving its camera 180 degrees from the rear to the front-facing position (Figure 2a).

2.1.2. Work Progression: Supervisory Interaction

Following the initial greeting, the supervisory interaction is activated once users sit down and start working. This interaction establishes a sense of supervision by tracking the participant’s embodied movements and providing social presence through physical reactions. When the system detects the user’s attention (eye gaze) drifting from their workstation, Roby responds with slight head movements, following the direction of the distraction (left/right), or with upward movements in the case of vertical movement (Figure 2b). While the robot’s gaze and corresponding body manipulation have been found to provide social cues for navigating robots (Friebe et al., 2022; He et al., 2023), their impact on social presence in desktop work contexts has not yet been examined. We expect this supervisory interaction could also promote sustained concentration and productivity without being intrusive.
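
As a rough illustration only (the study used a Wizard-of-Oz operator rather than autonomous sensing), the supervisory response described above could be sketched as a mapping from a detected gaze offset to a small lens nudge. The function, units, and thresholds below are hypothetical:

```python
def supervisory_response(gaze_dx, gaze_dy, threshold=0.2):
    """Map a detected gaze offset (hypothetical normalized units) to a lens nudge.

    Positive gaze_dx = gaze drifting right; positive gaze_dy = drifting upward.
    Returns a (pan, tilt) command in degrees, or (0, 0) while the user is on task.
    """
    pan = tilt = 0
    if abs(gaze_dx) > threshold:
        # Follow horizontal distraction left/right with a subtle movement.
        pan = 5 if gaze_dx > 0 else -5
    if gaze_dy > threshold:
        # Vertical drift (e.g., looking up from the screen) gets an upward tilt.
        tilt = 5
    return pan, tilt
```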

2.1.3. Break Time: Mirroring Interaction

The mirroring interaction is designed to assist users in transitioning between work and rest periods by offering physical cues that mirror their verbal cue for rest. Inspired by the fact that people tend to take a rest in places with less visual exposure in a physical office (Aries et al., 2010), we aim to provide an interaction that explicitly gives people a sense of privacy from Roby. When the verbal cue for resting is given (Table 1), Roby indicates a break by tilting the lens downward, simulating a gesture of nodding off (Figure 2c). This subtle motion aims to respect users’ personal space while suggesting a pause.

Table 1
Interaction Classification

Interaction Phase | User Action (Input) | Roby Reaction (Output)
Greeting Interaction | Posture change (seated for work, as the greeting trigger) | Pivoting its lens 180 degrees to the front-facing position
Supervisory Interaction | Working duration | Gazing (lens facing toward the user)
Supervisory Interaction | Occasional movement / signals of distraction | Subtly moving its lens in the direction of the distraction
Mirroring Interaction | Verbal cue (“I’ll take a break for 5 minutes”) | Tilting its lens downward (averting its gaze away from the user)
Farewell Interaction | End-of-work signal (end of working hours) | Pivoting its lens 180 degrees to the rear-facing position (shifting its focus away)
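
Taken together, Table 1 amounts to a simple input-to-output mapping. A minimal sketch of that mapping follows; the action names and reaction strings are hypothetical, and the study itself relied on a human wizard rather than autonomous detection:

```python
# Hypothetical encoding of Table 1: each detected user action maps to
# an interaction phase and a lens reaction for Roby.
INTERACTIONS = {
    "seated_for_work":  ("greeting",    "pivot lens 180 degrees to front-facing"),
    "working":          ("supervisory", "gaze at user; follow signs of distraction"),
    "verbal_break_cue": ("mirroring",   "tilt lens downward"),
    "end_of_work":      ("farewell",    "pivot lens 180 degrees to rear-facing"),
}

def react(user_action):
    """Return Roby's (phase, reaction) for a detected user action.

    Unrecognized actions default to the supervisory phase, since that is
    the ongoing state between the greeting and farewell interactions.
    """
    return INTERACTIONS.get(user_action, ("supervisory", "hold gaze"))
```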

2.1.4. Work Termination: Farewell Interaction

When users are done with work for the day, Roby performs a farewell interaction to communicate its detachment from them. This interaction signifies the end of the focused work session, suspending all interactions and granting users their personal time. To do so, Roby rotates its camera 180 degrees, returning to its initial rear-facing position.

2.2. Form Factor

Influenced by the design philosophy of service robots (Kawamura et al., 1996), Roby adopted a simple structure and a gentle design that blends seamlessly with its environment. Roby’s design features a simple, non-anthropomorphic shape with dimensions of 120 x 90 x 90 mm. This design choice stems from the aim to avoid the potential pitfalls of anthropomorphic robots, such as high component costs and the risk of eliciting an uncanny valley effect, which is activated by the mirror-neuron system, as noted by Hoenen et al. (2014). The compact design is mechanically simpler, more cost-effective, and more reliable compared to its anthropomorphic counterparts (Erel et al., 2018). The robot’s minimalistic yet distinct form, combined with its physical motions, helps it establish a socially interactive presence, which is a key advantage over purely functional objects that lack such social qualities.

Moreover, Roby is designed with functionality and privacy in mind, featuring two parts that facilitate rotational movement. This capability not only enhances interaction by allowing the robot to simulate head movements similar to nodding or turning but also addresses privacy concerns by managing what the robot can “see” in each interactive phase.

2.3. Research Prototype

For the experiments, we prepared a research prototype (Odom et al., 2016) to evaluate our early design considerations for Roby in the wild (Figure 3). Similar to the method employed by Lee et al. (2019) in creating their robot interaction prototype, we utilized a hacking approach. Hacking was an efficient approach, as our focus was to conduct early explorative studies to test our initial design considerations rather than finalize a product intended for commercial release. We hacked a home security camera device (ASC10) from ABKO, as shown in Figure 3a, left. We chose this device as a base structure for three reasons. First, the size of this device is within our desired range, as described in Section 2.2. Second, it has built-in mechanisms for physical movement (360-degree horizontal and 90-degree vertical rotation), which enable simulating the interactions described in Section 2.1. Lastly, it comes with a mobile application that supports remote camera manipulation, which could benefit our WoZ experiment (Figure 3a, right).


Figure 3 (a) Homecam device by ABKO and its mobile application (b) 3D model of research prototype casing (c) 3D printed prototype

We designed and fabricated an outer shell for the ABKO device to conceal the obvious appearance of a typical home camera device, adopting a minimalistic, hemispherical dome shape (Figure 3b). The casing, designed with a 5-mm wall thickness, was divided into two segments to facilitate the essential movements, including the lateral rotation during the greeting, supervisory, and farewell interactions. To develop a 3D model for the casing, the dimensions of the Homecam device (height, width, and circumference) were measured using a tape measure. Based on these measurements, the casing was designed to encase the device snugly, incorporating a 1-mm clearance on all sides to accommodate the base motor, the bulkiest part. The prototype was modeled in Fusion 360, and the Single Plus 3D printer was utilized to fabricate the casing in PLA filament. After printing, the casing was fitted over the device to complete the assembly.

3. Study Method

Inspired by experience-based research methodologies, our study leveraged research prototypes (Odom et al., 2016) to assess designed interactions within real-life contexts. This approach is rooted in the principles of integrating experiential insights directly into the research process, similar to the strategies discussed by Binder and Brandt (2008) in their exploration of design solutions in complex scenarios. Accordingly, we conducted a study to evaluate the suggested interactions for Roby in the wild.

The study examined how Roby’s initial design decisions (such as concept, interactions, and form factors) affected people’s productivity and overall working experience, ultimately developing design guidelines for future home companion robots. In this study, productivity refers to the level of output achieved by an individual within a given timeframe. Higher productivity, in turn, indicates achieving more tasks in less time or with reduced effort. In addition to collecting qualitative feedback from the participants following their in-the-wild experience session, we conducted a co-design session (Steen, 2013) to prompt participants to contemplate their experiences and convey their desired solutions.

The high-level research question driving the study was as follows:

  • RQ: Does the presence of a monitoring robot (Roby) enhance concentration for knowledge workers in a WFH environment?

Consequently, if the presence of a monitoring robot was found to be effective in boosting productivity and concentration, two sub-research questions would be posed to gain a deeper understanding:

  • RQ-1: What interactions could the robotic object embrace to influence productivity favorably?
  • RQ-2: What physical shape should the robot take to optimize its impact?
3.1. Participants

For this study, we recruited six participants (four females and two males) who had previously worked remotely, either temporarily or as a default condition, including during the COVID-19 period. Their average age was 30 (SD = 10.94). The length of their WFH experience varied, as shown in Table 2. Targeting individuals familiar with WFH environments allowed us to collect experimental data grounded in the practical pain points faced by remote workers.

Table 2
Participant profile

Code | Age | Gender | WFH Period
P1 | 25 | Female | 1 Year (Every Friday)
P2 | 24 | Female | 1.5 Years (Occasional WFH)
P3 | 52 | Male | 15 Years (Occasional WFH)
P4 | 29 | Female | 6 Months (Flextime/Remote)
P5 | 26 | Male | 1 Month
P6 | 24 | Female | 3 Months (Every Friday)

None of our participants had used desktop or household robots before, although some (P3, P6) had experience with robot vacuum cleaners and other smart devices, such as AI speakers. Despite their lack of prior knowledge of robots or understanding of their basic mechanisms, they had relatively high knowledge of technology and smart devices. This familiarity facilitated their engagement with the study and provided some context for understanding why camera sensors were needed for the analysis.

3.2. Procedure

Inspired by the study design of Lee et al. (2019), the study process included four phases: an introduction phase (Phase A), an in-the-wild experiment where participants worked alongside Roby (Phase B), a post-interview phase capturing initial reactions and feedback (Phase C), and a co-design phase for ideating potential design considerations (Phase D). This procedure allowed us to combine empirical data from the experimental tasks with rich qualitative insights, including participants’ motivations, preferences, and creative ideas. At the end of the experiment, the participants were remunerated with a 10,000-KRW gift card for their time and contribution to the study.

3.2.1. Phase A: Introduction

We initiated the study by briefly introducing the study goal and procedure. Each participant was provided with an introductory briefing before the experiment commenced. This briefing outlined the nature and aims of the research, specifically informing them of the robot’s ability to detect and react to physical movements to record concentration levels. To further aid participants’ understanding of our prototype, we conducted an introductory demo where they could quickly test the robot’s functionalities. This hands-on experience allowed participants to become familiar with the robot’s capabilities and the interaction mechanisms.

Based on the prior survey results, which asked about their work contexts and interests, we introduced a personalized task for each participant. For instance, P1, who was engaged in life science research, was tasked with writing a document concerning cancer cachexia, while P2 wrote about the domestic vehicle-to-grid (V2G) policy. Similar approaches have been employed in other research to accommodate the variability in professional tasks (Choe et al., 2017; Kocielnik et al., 2018). This approach ensured that the tasks were relevant to and engaging for each individual based on their professional focus and areas of interest. Moreover, it enabled an accurate reflection of each participant’s typical working context, thereby enhancing the ecological validity of our findings.

3.2.2. Phase B: Experiencing the Robot (WoZ)

With the robotic prototype placed on the desk, participants worked on the assigned task for approximately 45 minutes in an environment resembling an actual WFH setting. During their work, they were observed through the camera installed in the prototype, and the corresponding work-phase motions were activated when their trigger actions were spotted. The experimental component allowed us to test how participants reacted to the robot’s presence and its interactive modes, while the observational elements provided insights into participants’ initial physical reactions.

3.2.3. Phase C: Post-Survey and Interview

After completing the task, participants filled out a questionnaire capturing their initial responses, focused on their perceptions of the robot’s physical shape, interactions, motions, and overall effectiveness in enhancing concentration, using a 1-to-5 Likert scale and binary-choice questions. We then conducted a semi-structured interview probing their responses to the preceding questionnaire. For instance, participants who indicated that the robot’s motions positively or negatively influenced their productivity were asked follow-up questions to explore the rationale behind their experiences and perceptions.

3.2.4. Phase D: Co-Design Session

Participants then engaged in a co-creation session, brainstorming additional forms and interaction concepts that could improve Roby’s ability to enhance productivity and comfort. This phase enabled participants to freely generate novel and desired ideas for the initial design decisions based on their real-world experience with the robot.

3.3. Experimental Setup

The study used a WoZ prototyping approach, in which the researcher covertly operates and mimics the intended system functionality while participants interact with what they believe to be a fully autonomous system (Bernsen et al., 1994). The WoZ approach, widely used in HCI research, was instrumental in this study for exploring the novel user experience of the WFH companion robot before investing considerable development efforts into a fully autonomous prototype (Dow et al., 2005). The experimenter in an adjacent room (Figure 6) observed participants through the camera pre-installed on the prototype (Figure 3) and controlled the robot’s reactions using a complementary app. The experiment room was set up to resemble a home office with a desk, computer, and minimal decorations to create a realistic simulated WFH environment.


Figure 4 Flowchart of study procedure

Figure 5 Photo of interview sessions

Figure 6 WoZ environment for user experiment

The specific moments for activating the robot’s interactions were standardized based on participants’ actions. For instance, the greeting interaction was triggered as soon as participants sat at their workstation to begin the assigned task. The supervisory interaction was actuated when participants’ attention seemed to drift from their work, such as when they reached for objects off their desk or visibly disengaged from their computer screens. Lastly, the mirroring interaction occurred when they actively decided to take a break by verbally saying, “I will take a break for x minutes.”


Figure 7 Experimental session snapshot during “Phase B: Experimenting with Robot”
3.4. Analysis

The analysis draws upon a blend of observed behaviors, qualitative interview feedback, and creative suggestions from participants. These diverse data sources provided a holistic view of the interaction between users and Roby, capturing both the measurable impact on productivity and the nuanced user experiences. The observed behaviors offered insights into how participants reacted to the robot’s presence, interactions, and form, laying the groundwork for in-depth analysis. In addition to qualitative data, we collected participants’ responses to the questionnaire to capture their initial reactions to Roby. The purpose of the questionnaire was not to support inferential statistics but to serve as a prompt for participants to initiate the interview and to enrich its content.

Utilizing this rich dataset, thematic analysis (Braun et al., 2012) was applied to enable a comprehensive understanding of the interactions and form elements. This process aimed to establish design guidelines for a WFH companion robot identified through recurring themes and feedback patterns. For the collected quantitative responses, we applied descriptive statistics (mean, standard deviation) to support the findings from our qualitative study.
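
As an illustration of the descriptive summary applied to the quantitative responses, the sketch below uses hypothetical Likert ratings (not the study's actual data) and assumes population-style standard deviation:

```python
import statistics

def summarize(ratings):
    """Mean and population standard deviation of Likert ratings, to 2 decimals."""
    return round(statistics.fmean(ratings), 2), round(statistics.pstdev(ratings), 2)

# Hypothetical example: six 1-to-5 Likert ratings for one questionnaire item.
mean, sd = summarize([4, 4, 4, 4, 4, 5])
```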

4. Results
4.1. General Feedback

All participants perceived the presence of the monitoring robot in their work environments positively, as it contributed to an increased sense of concentration and productivity. The participants attributed this effect to the robot’s ability to create a sense of being observed or monitored, which encouraged a more disciplined work ethic, similar to an office environment. The robot’s presence fostered a sense of surveillance that was seen as motivating rather than intrusive. This positive reception was further supported by the questionnaire responses: on a scale of 1 to 5, participants rated their likelihood of recommending Roby to other WFH colleagues or workers at a mean of 4.0 (SD = 0.58), suggesting a strong endorsement of the robot’s utility in remote work settings.

P1, P2, and P5 highlighted the importance of the robot’s presence and surveillance-like qualities in improving their focus and work output. The feeling of being “recorded” or “monitored” helped them maintain a work-appropriate posture and environment, preventing them from idling or getting distracted. P3 also expressed similar perceptions, especially regarding its companionship aspects, noting, “... thinking that I’m with someone gives me a bit of pressure that I need to focus more on work …” emphasizing the robot’s role as a companion. P4 appreciated the familiarity of the robot’s design, which resembled recognizable technological devices. The familiarity and the subtle psychological effect of being observed contributed to improved focus and work efficiency in a preferable WFH setting.

Participants generally did not view the smoothness and speed of the robot’s motions as crucial to their effectiveness. Rather, the robot’s responsiveness to their actions and commands was what truly mattered. P2 and P3 specifically preferred the “whirring” sound of the robot’s movements to quieter, smoother operations, emphasizing the significance of auditory cues. These sounds enhanced the interaction’s effectiveness by making the robot’s presence more pronounced.

4.2. Greeting Interaction

Roby’s ability to signal the start of work through its physical presence and front-facing motion was well received by participants. The robot’s camera movement to simulate looking at the user was mentioned as a positive feature, suggesting a form of greeting interaction that helps the user transition to work mode. P1 and P2 found the robot’s greeting interaction beneficial, as it contributed to a feeling of being observed and marked the onset of the work phase. P5 also appreciated the robot’s ability to signal the start of work, indicating that an initial interaction or “start signal” helps set the tone for the workday.

When Roby’s camera pivoted 180 degrees toward the participants, they made brief eye contact with the device and began their tasks. During the discussion on the timing and frequency of Roby’s movements, P1 emphasized the significance of this pivotal movement, as it helped signal its presence and the attention cue. She commented, “I get the feeling that it’s turning around to look at me, which prevents me from procrastinating my work.”

Participants also suggested that verbal or sound cues could enhance this interaction, making it more direct and intuitive. P3 specifically mentioned verbalizing the start of work, such as saying, “Work begins now” or “Please start your work,” as a potentially more direct and intuitive approach.

4.3. Supervisory Interaction

The supervisory role of the robot, evidenced by its occasional movements following the user, significantly contributed to productivity enhancement. The effectiveness of Roby’s different motions in influencing productivity was highly regarded, with an average rating of 4.16 out of 5 (SD = 0.37), indicating that these interactions were perceived as highly effective. P1, P2, and P3 highlighted how its responsive behavior demonstrated its intelligence and manifested its presence as a robot. This sense of observation motivated participants to maintain a work-appropriate posture and attitude, similar to being in an office setting. Furthermore, P3 appreciated the robot’s subtle movements during work as gentle reminders of its presence, fostering a sense of companionship. This acted as a non-intrusive nudge to remain focused, like having colleagues around at work, thereby subtly enhancing productivity.

The movement of Roby’s lens (monitoring source) and its gazing interaction were emphasized as a significant influence on self-consciousness regarding posture and attire (P1 and P2). By keeping a focused gaze on the user, Roby discouraged less productive behaviors, such as working from bed, thus fostering a more disciplined work environment. P2 even noted, “At first, Roby staring at me all the time kind of reminded me of those CCTV cameras, but this familiarity actually made me feel better about the robot.” P1 further commented on the ease of accepting Roby’s presence, linking it to the ubiquitous nature of CCTV cameras. This comparison influenced her behavior during the experiment, mirroring her conduct in monitored spaces such as elevators, suggesting that common surveillance experiences can shape interactions with robotic systems in similar environments.

Still, some participants expressed a desire for the robot’s movements to be moderate and not too frequent or pronounced, as excessive movement could become a distraction. The motion should serve as a gentle reminder or nudge toward productivity rather than a constant distraction or source of stress. P4 noted, “...keeping an eye on the robot’s movements itself seems to deviate from the essence of working from home, so I think it kind of broke my concentration.” During the experiment, we observed that P4 frequently engaged in what could be described as eye contact with Roby during the supervisory interaction, a behavior that suggested a more pronounced awareness and interaction level than observed in other participants, who engaged less frequently.

4.4. Mirroring Interaction

Participants highly appreciated the robot’s ability to mark the beginning and end of break times with its lens movement. This interaction facilitated smooth transitions between work and rest, significantly benefiting time management and establishing a healthy work-break rhythm. P4 reported difficulties sustaining a consistent work pattern, noting fluctuations in concentration ranging from being highly fragmented at intervals of 10 minutes to periods of deep focus lasting 3–4 hours. This variability often resulted in a disruption of her work cycle. Thus, a physical cue, provided by the mirroring interaction, was anticipated to aid in establishing a more consistent work pattern. In addition, P1, P2, and P5 highlighted how the robot’s indication to resume work helped them swiftly refocus on tasks, enhancing overall productivity.

Roby’s motion of tilting its lens downward especially facilitated an optimal rest period, allowing the participants to recharge fully (P5). This motion was interpreted as Roby averting its gaze away from the participants, giving them a sense of privacy and permission to disengage momentarily from the interaction. This non-intrusive motion fostered a comfortable atmosphere for the rest periods between work sessions.

While appreciating the existing motion, some participants recommended enhancing the movements and signals during the interaction through more active motions, sounds, or lighting cues. For instance, P3 highlighted the effectiveness of having more active and engaging motions beyond simple rotation for break time interactions, stating, “I think it would be more fun and, how should I put it, give more of a feeling of it being a real robot.” Moreover, P1 and P2 suggested enhancing the visibility of the break’s end signal. While the beginning of break times was noticeable through the mirroring interaction, the conclusion was less apparent. They recommended incorporating distinct cues, such as sound or lighting, to clearly indicate the end of break times.

4.5. Farewell Interaction

Similar to the greeting interaction, participants appreciated the clear end-of-work signals from Roby’s lateral movement, as it shifted its focus away by turning its lens toward the wall. P2 was satisfied with the motion itself, as it appeared to symbolize turning away to signal approval for the participants to conclude their work for the day. P4 mentioned the challenge of adhering to a structured work schedule in a home environment, where the natural cues present in an office setting—such as the rhythms and patterns of colleagues’ work hours—are absent, making it difficult to distinguish work from personal time. A motion that signals the end of the workday can significantly contribute to fostering a healthier and more productive WFH environment by psychologically marking a clear boundary between work and personal time.

Akin to the initial welcoming gesture, in which the robot’s camera rotated 180 degrees horizontally from rear-facing to front-facing, the robot appeared to “say goodbye” by resuming its initial rear-facing orientation. The participants perceived this particular motion as a clear stop signal, indicating the end of their work session.

4.6. Form Factors for Work Companion Robots
4.6.1. Minimal Shape with Non-anthropomorphic Form

Participants generally preferred simplicity and minimalism in design for work companion robots, with a slight inclination toward customizable aspects such as color variations to match the user’s personal space and taste. They unanimously preferred a robot with organic, rounded shapes that can seamlessly integrate into workspaces. P2 also recommended that the design be versatile, expressing, “it should be sleek and simple, and needs to have that object vibe to it. It should still look cool sitting there even when you’re not using it.” P4 emphasized that while esthetics are important, the robot’s design should not be overly decorative or eye-catching to the point of distraction. She argued that the robot’s primary function is to enhance focus among workers, not divert attention away from their tasks.

Most participants preferred the robot not to have an anthropomorphic design, valuing its non-human-like appearance for better aligning with its intended function. This preference is supported by the quantitative data from our study, where 5 out of 6 respondents affirmed that the non-anthropomorphic shape positively influenced their experience. P1 appreciated the non-anthropomorphic nature of our prototype, finding its design appropriately aligned with its purpose. P2 highlighted the advantage of its current design in reducing discomfort or fear, suggesting that its simple, inanimate appearance fosters a more comfortable interaction by being less intrusive than a lifelike model, thereby avoiding any potential unease associated with its counterparts.

4.6.2. Customization Feature

A few participants expressed their desire for Roby to have custom features, such as personalized lighting or custom shapes. For example, P1 suggested integrating lighting variations similar to smart lighting systems that can vary in color and intensity. Further enhancing its integration within the workspace, she suggested Roby incorporate connectivity features that enable synchronization with various desk lighting systems. This feature could enable Roby to adjust its brightness and color based on specific interaction modes. For instance, it could be engineered to emit brighter light following a greeting interaction, thereby promoting an adaptive interaction experience within the user’s WFH environment.

In addition, P3, unlike most participants, suggested that a shape resembling surveillance equipment might invoke discomfort. Therefore, he suggested that the robot have a customizable appearance, possibly mimicking favorite characters or animals to foster a more comfortable and friendly environment. Moreover, P3 emphasized that it is possible to maintain its functional integrity while enhancing user comfort and acceptance through a customized appearance, noting, “even if it has a more familiar shape, I don’t think I’ll just ignore it and become lazy as if the robot wasn’t there. Anyway, I’m aware that it is a robot.”

4.6.3. Size

Participants also highlighted the importance of a design that facilitates easy integration into various home office setups. This included considerations of how the robot interacts with other devices and fits onto small or crowded desks without requiring significant rearrangement of existing setups. While all participants were relatively satisfied with the robot’s current dimensions (120 x 90 x 90 mm), three individuals proposed slightly enlarging it (up to 1.5 times the current dimensions) to boost its presence.

In addition to the general satisfaction with the robot’s size, P1’s feedback highlights the need for the robot to be mobile, reflecting the “nomadic” nature of WFH environments often influenced by the presence and activities of family members. This context also raises the need for the robot’s design to consider not only its physical dimensions for stationary placement but also its portability and ease of adaptation to various workspaces within a home.

5. Discussion

Our work took a step toward understanding interaction qualities and corresponding user experiences for tabletop work companion robots in WFH settings. Based on our findings from the in-the-wild WoZ experiments followed by in-depth interviews, we have gained insights on various design considerations for creating work companion robots that enhance WFH workers’ productivity and emotional comfort.

We clarify that the interaction approach taken for the work companion robot in this paper is assistive rather than delegative. In the assistive approach, the robot aims to help workers improve their focus and ability to do more work. In contrast, the delegative approach relies on delegating tasks directly to the robot to process work faster (Lubars & Tan, 2019). Our focus is to understand the potency of robots that strive to positively and ambiently influence workers’ concentration, helping them achieve higher productivity through enhanced focus rather than automating tasks.

In the following sections, we discuss our reflections and implications for creating practical conditions to enhance productivity and the WFH environment by allowing workers to immerse themselves in their tasks.

5.1. Design Implications
5.1.1. Supervisory Presence and Work Distraction Trade-off

To design a work companion robot, it is essential to find a range of motion that conveys a supervisory presence without distracting from work. Our study revealed that the participants responded positively to the sense of supervision and gentle pressure generated by Roby’s responsive movements. These adaptive motions, which communicated the robot’s attentive presence and identity as an intelligent entity rather than a passive object, were valued by participants in their work environment. However, the participants stressed the importance of moderation, suggesting that the robot’s movements during supervisory interaction should be occasional and unobtrusive, serving as gentle reminders to stay focused without distraction or stress. Regarding size, participants preferred dimensions ranging from 120 x 90 x 90 mm to 180 x 135 x 135 mm, as they would allow the robot to maintain a noticeable presence without dominating the workspace or causing frustration due to its physical footprint.

5.1.2. Multi-modal Interaction with Non-Verbal Auditory Cues

Beyond traditional voice assistance, the robot should leverage a range of auditory cues, including subtle sounds synchronized with its movements. Although the sound signal was beyond the scope of our initial design considerations, many participants emphasized the significance of distinct “whirring” noises accompanying the robot’s motions, as these auditory cues enhanced the perception of its active presence. Thus, strategically incorporating auditory cues can make the robot’s actions and gestures more perceptible. This multi-sensory approach reinforces the robot’s supervisory role, encouraging heightened focus and productivity among WFH workers.

5.1.3. Explicit Signaling for Transition Points

Because a work companion robot relies on vision sensors to monitor human activities, its design should incorporate distinct cues and social signals that indicate transition points between work and break periods, thereby increasing privacy and trust. Participants emphasized the significance of noticeable cues through motion, sound, or lighting alterations, highlighting that such clear indicators facilitate smoother and quicker transitions into or out of work mode. Furthermore, tilting its lens during breaks was perceived as creating a sense of personal space and privacy, much as a person averts their gaze to give somebody else room. By providing these distinct social and transition point cues, the robot can better reflect the natural rhythms and boundaries experienced in a traditional office environment, allowing users to establish structured work patterns and maintain a healthy work–life balance in their WFH settings.
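The multi-channel transition signaling described above can be sketched as a small state machine that fires cues on several channels at once, so the signal stays noticeable even if one channel is missed. This is a minimal illustrative sketch, not the study’s implementation; the mode names, cue strings, and the `TransitionSignaler` class are all hypothetical.

```python
from enum import Enum

class Mode(Enum):
    WORK = "work"
    BREAK = "break"

# Illustrative cue table: each transition emits cues on multiple channels
# (motion, sound, light), mirroring the participants' request for
# redundant, clearly noticeable transition indicators.
CUES = {
    (Mode.WORK, Mode.BREAK): ["tilt_lens_down", "soft_chime"],
    (Mode.BREAK, Mode.WORK): ["tilt_lens_up", "bright_light_pulse"],
}

class TransitionSignaler:
    def __init__(self):
        self.mode = Mode.WORK
        self.log = []  # cues emitted so far (stand-in for actuator calls)

    def switch(self, new_mode: Mode):
        cues = CUES.get((self.mode, new_mode), [])
        self.log.extend(cues)  # a real robot would drive motors/LEDs/speaker here
        self.mode = new_mode

sig = TransitionSignaler()
sig.switch(Mode.BREAK)  # work -> break: lens tilts down, soft chime plays
```

Keeping the cue table declarative makes it easy to extend with the distinct end-of-break cues participants asked for, without touching the transition logic.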

5.1.4. Types of Behaviors that Need to be Monitored

Key findings highlighted the effectiveness of robotic intervention in scenarios of smartphone use and gazing off into space during the supervisory interaction stage. This suggests that future robots should possess nuanced capabilities to differentiate between work-related smartphone activities and potential distractions. Moreover, instances of participants gazing off into space elicited a robotic response aimed at redirecting their attention back to the task. This made participants feel more engaged with the robot, viewing it as an active presence. This revealed the importance of robots recognizing and reacting to subtle indicators of lost focus, such as gazing into space, as these moments can signal inattention even without significant movement.

Challenges emerged in interpreting fidgeting and contemplative gazing as definitive signs of distraction, highlighting the complexity of human behavior. This complexity suggests that monitoring a person’s movements or posture is insufficient for accurately determining their level of focus. Thus, we could consider incorporating temporal pattern recognition to analyze behavioral patterns over time (Povel & Essens, 1985), adding another layer of insight to help distinguish ambiguous signals. Furthermore, we can more accurately assess focus levels and better understand the nuances of human behavior and attention by adopting a multi-modal approach, incorporating methods such as eye tracking, sound detection, and behavioral biometrics.
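One simple form of the temporal pattern recognition suggested above is to smooth noisy frame-level distraction signals over a sliding window, so that a single ambiguous frame of fidgeting or off-screen gaze does not trigger an intervention. The sketch below is illustrative only; the window size, threshold, and `FocusEstimator` class are our assumptions, not values or code from the study.

```python
from collections import deque

class FocusEstimator:
    """Smooths noisy per-frame distraction signals over a time window.

    A single frame of fidgeting or contemplative gazing is ambiguous;
    only a sustained pattern within the window is treated as lost focus.
    All parameters here are illustrative, not values from the study.
    """

    def __init__(self, window_size=30, distraction_ratio=0.6):
        self.window = deque(maxlen=window_size)  # recent binary observations
        self.distraction_ratio = distraction_ratio

    def update(self, distracted: bool) -> bool:
        """Add one frame-level observation; return True only if the recent
        pattern suggests the worker has genuinely lost focus."""
        self.window.append(distracted)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history to judge yet
        return sum(self.window) / len(self.window) >= self.distraction_ratio

est = FocusEstimator(window_size=5, distraction_ratio=0.6)
signals = [True, False, True, True, True]  # mostly-distracted sequence
states = [est.update(s) for s in signals]
```

A multi-modal system could feed this estimator with a fused signal (gaze, sound, posture) rather than a single detector’s output, which is exactly where the additional sensing channels discussed above would plug in.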

Determining the most appropriate trigger for the greeting interaction is crucial for deciding which action needs to be monitored. The greeting interaction in our study was activated when the user “sits down,” whereas those in other related works (Lee et al., 2019) initiate when the user “approaches” the prototype. Several factors could be considered when deciding which trigger is most suitable.

First, we need to identify the specific goals of the interaction. Since the intention of the interaction is to mentally prepare the user for work, a sitting trigger might be more effective than an approaching trigger. Additionally, the physical layout and constraints of the workspace can influence the choice of trigger. For example, in smaller workspaces, an approaching trigger might be too sensitive and unintentionally activate the interaction, increasing the likelihood of false positives. Likewise, the trigger should be aligned with the interaction goals and workspace characteristics to optimize the effectiveness and user experience of the greeting interaction.
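The workspace-dependent trigger choice above can be expressed as a small selection heuristic. This is a hypothetical sketch under assumed criteria (free desk area and whether the room is shared); the `Workspace` fields and the 4 m² cutoff are illustrative, not findings from the study.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Trigger(Enum):
    APPROACH = auto()  # fire when the user nears the desk
    SIT_DOWN = auto()  # fire when the user is seated (as in our study)

@dataclass
class Workspace:
    desk_area_m2: float  # rough free space around the desk
    shared_room: bool    # other people regularly pass through the space

def choose_greeting_trigger(ws: Workspace, min_area_m2: float = 4.0) -> Trigger:
    """Heuristic trigger selection: in small or shared spaces an approach
    trigger is prone to false positives, so fall back to a sit-down trigger."""
    if ws.desk_area_m2 < min_area_m2 or ws.shared_room:
        return Trigger.SIT_DOWN
    return Trigger.APPROACH

small = choose_greeting_trigger(Workspace(desk_area_m2=2.0, shared_room=False))
large = choose_greeting_trigger(Workspace(desk_area_m2=6.0, shared_room=False))
```

In practice such a rule could be set once during device setup, keeping the trigger aligned with the interaction goal and workspace characteristics.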

During the mirroring interaction, participants were asked to verbally command the start of their break. Participants expressed positive feedback about the verbal input system, as it enabled them to take conscious and intentional breaks. However, there are potential alternatives that could also be effective. For instance, options such as pressing a button, gesture recognition, or engaging in physical interaction through contact, such as patting the robot’s head, could provide additional ways to initiate breaks. Considering the solitary working condition of this study, and in many WFH working environments, maintaining silence is often unnecessary, making verbal commands a practical and user-friendly option. Nonetheless, exploring these alternative methods could enhance the flexibility and accessibility of the system, making it suitable for a wider range of user preferences and situations.
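Supporting the alternative break-initiation methods mentioned above amounts to treating each modality (voice, button, gesture, touch) as a pluggable input handler. The sketch below is an assumed design, not the study’s system; the `BreakController` class and the matcher functions are hypothetical.

```python
class BreakController:
    """Routes events from different input modalities to one break state."""

    def __init__(self):
        self.on_break = False
        self._matchers = {}  # modality name -> predicate over raw events

    def register(self, modality: str, matcher):
        """matcher(event) -> bool decides whether this event starts a break."""
        self._matchers[modality] = matcher

    def handle(self, modality: str, event) -> bool:
        matcher = self._matchers.get(modality)
        if matcher and matcher(event):
            self.on_break = True
        return self.on_break

ctrl = BreakController()
# Verbal command, as used in the study's mirroring interaction:
ctrl.register("voice", lambda phrase: "break" in phrase.lower())
# Physical contact alternative suggested by participants (patting the head):
ctrl.register("touch", lambda zone: zone == "head")

ignored = ctrl.handle("touch", "arm")          # touching elsewhere does nothing
started = ctrl.handle("voice", "Break time!")  # verbal command starts the break
```

Because modalities are registered independently, silent or accessibility-constrained setups could simply register different matchers without changing the break logic.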

5.1.5. Non-human Forms with Human Gestures

Participants preferred a non-anthropomorphic appearance, valuing the robot’s non-human nature for better aligning with its functional purpose as a work companion. However, the robot could selectively borrow familiar human gestures and movements, such as pivoting or tilting its lens to simulate eye and head movements. These natural gestures, often observed in human interactions, can enhance the intuitive flow of communication and enrich the overall user experience. The participants readily accepted and responded to these anthropomorphic cues from the robot, as they mirrored typical motions and behaviors that humans tend to exhibit.

5.1.6. Adaptive Design for Personalized Workspaces

The robot’s design should prioritize adaptability and customization to seamlessly blend into diverse personal workspaces. Customizable aspects such as color variations and lighting adjustments can cater to individual preferences and environments. Furthermore, the robot should be designed with portability, reflecting the dynamic nature of working at home, often influenced by external factors like family activities. By offering an adaptive design that allows for personalization and easy relocation, the robot can foster a sense of ownership and familiarity, enhancing user comfort and acceptance within the constraints of each workspace.

5.2. Limitations & Future Work

Our study has several limitations that call for further investigations. First, we applied the same set of interactions regardless of the different types of work our participants were performing. In the future, we need to consider various supervisory interactions based on the different types of work that accompany different working postures (e.g., physical drawing, writing notes, affinity diagramming).

Also, the prototype we used for the experiment was created by hacking existing commercial products, so its form factor and motions were constrained by the source object. While our approach was sufficient to deliver the desired experience to the participants and pave the way for future steps, future prototypes should demonstrate the full desired range of motion to assess the actual impact of interactions.

Moreover, while this study is primarily focused on solitary working conditions, there are also contexts where it involves interactions with others. In communal remote work environments, the presence and behavior of a companion robot like Roby could be perceived differently compared to solitary work settings. For example, the dynamics of social presence and monitoring might differ when multiple users are present. In this case, Roby’s interactions could be adapted to account for the presence of additional individuals, tailoring to a shared living space. This could require modifications such as recognizing and responding to multiple members, adjusting interaction cues to be less intrusive, and providing response mechanisms that consider the presence of other workers or non-working individuals.

Nonetheless, our in-the-wild experiment with research prototypes provided rich findings and insights into how people perceive work companion robots in WFH settings, together with their experience-based feedback, thoughts, and desires.

6. Conclusion

This research demonstrates the potential of a tabletop robot to enhance the productivity and work habits of knowledge workers in a WFH environment. Through an experimental study employing a WoZ setup, participants experienced the robot’s four key interactions mirroring common work phases. The physical presence and interactions of Roby were perceived as helpful in fostering a focused, disciplined approach to remote work, thereby increasing productivity. Moreover, an analysis of participant behaviors and feedback revealed several key findings:

1. Roby’s creation of a feeling of being observed encouraged participants to maintain a work-appropriate attitude and environment similar to that of an office setting.

2. Clear transitions between work and break periods, marked by Roby’s interactions, helped participants establish a healthy work-break rhythm, seamlessly refocus after breaks, and increase the level of trust in using monitoring robots with vision sensors.

3. Participants appreciated the robot’s subtle, non-intrusive presence. Its occasional moderate movements served as gentle reminders to stay on task without causing excessive distraction.

4. Simplicity, minimalism, and potential customization options in the robot’s form factor were preferred by most participants, allowing seamless integration into diverse home office setups and reducing potential discomfort or intrusion.

This study suggests that the design should harmonize supervision, natural communication, unobtrusive multi-sensory cues, and environmental adaptability to enhance WFH workers’ productivity and overall experience.

Acknowledgments

This research was supported by the Yonsei University Research Fund of 2023 (2023-22-0454).

References
  1. Aries, M. B. C., Veitch, J. A., & Newsham, G. R. (2010). Windows, view, and office characteristics predict physical and psychological discomfort. Journal of Environmental Psychology, 30(4), 533-541. [https://doi.org/10.1016/j.jenvp.2009.12.004]
  2. Bernsen, N. O., Dybkjær, H., & Dybkjær, L. (1994). Wizard of Oz prototyping: How and when. Proc. CCI Working Papers Cognit. Sci./HCI, Roskilde, Denmark.
  3. Bick, A., Blandin, A., & Mertens, K. (2023). Work from home before and after the COVID-19 outbreak. American Economic Journal: Macroeconomics, 15(4), 1-39. [https://doi.org/10.1257/mac.20210061]
  4. Binder, T., & Brandt, E. (2008). The Design:Lab as platform in participatory design research. CoDesign, 4(2), 115-129. [https://doi.org/10.1080/15710880802117113]
  5. Braun, V., Clarke, V., & Terry, G. (2012). Chapter 4: Thematic analysis. APA Handbook of Research Methods in Psychology, 2, 57-71. [https://doi.org/10.1037/13620-004]
  6. Choe, E. K., Abdullah, S., Rabbi, M., Thomaz, E., Epstein, D. A., Cordeiro, F., Kay, M., Abowd, G. D., Choudhury, T., Fogarty, J., Lee, B., Matthews, M., & Kientz, J. A. (2017). Semi-automated tracking: A balanced approach for self-monitoring applications. IEEE Pervasive Computing, 16(1), 74-84. [https://doi.org/10.1109/MPRV.2017.18]
  7. Conrad, C., Klesel, M., Oschinsky, F., Mayhew, K., O'Neil, K., & Usai, F. (2023). Quality is more important than quantity: Social presence and workplace ergonomics control predict perceived remote work performance. Hawaii International Conference on System Sciences. [https://doi.org/10.24251/HICSS.2023.082]
  8. Cousineau, L., Ollier-Malaterre, A., & Parent-Rocheleau, X. (2023). Employee surveillance technologies: Prevalence, classification, and invasiveness. Surveillance & Society, 21(4), 447-468. [https://doi.org/10.24908/ss.v21i4.15763]
  9. Dow, S., MacIntyre, B., Lee, J., Oezbek, C., Bolter, J. D., & Gandy, M. (2005). Wizard of Oz support throughout an iterative design process. IEEE Pervasive Computing, 4(4), 18-26. [https://doi.org/10.1109/MPRV.2005.93]
  10. Erel, H., Hoffman, G., & Zuckerman, O. (2018). Interpreting non-anthropomorphic robots' social gestures. Proceedings of the HRI'2018 Workshop on Explainable Robotic Systems.
  11. Friebe, K., Samporová, S., Malinovská, K., & Hoffmann, M. (2022). Gaze cueing and the role of presence in human-robot interaction. In F. Cavallo, J.-J. Cabibihan, L. Fiorini, A. Sorrentino, H. He, X. Liu, Y. Matsumoto, & S. S. Ge (Eds.), Social robotics (pp. 402-414). Springer Nature Switzerland. [https://doi.org/10.1007/978-3-031-24667-8_36]
  12. Glassman, J., Prosch, M., & Shao, B. B. M. (2015). To monitor or not to monitor: Effectiveness of a cyberloafing countermeasure. Information & Management, 52(2), 170-182. [https://doi.org/10.1016/j.im.2014.08.001]
  13. Grover, T., Rowan, K., Suh, J., McDuff, D., & Czerwinski, M. (2020). Design and evaluation of intelligent agent prototypes for assistance with focus and productivity at work. Proceedings of the 25th International Conference on Intelligent User Interfaces, 390-400. [https://doi.org/10.1145/3377325.3377507]
  14. He, K., Chan, W. P., Cosgun, A., Joy, A., & Croft, E. A. (2023). Robot gaze during autonomous navigation and its effect on social presence. International Journal of Social Robotics, 16(5), 879-897. [https://doi.org/10.1007/s12369-023-01023-y]
  15. Heenan, B., Greenberg, S., Aghel-Manesh, S., & Sharlin, E. (2014). Designing social greetings in human robot interaction. Proceedings of the 2014 Conference on Designing Interactive Systems, 855-864. [https://doi.org/10.1145/2598510.2598513]
  16. Hoenen, M., Lübke, K. T., & Pause, B. M. (2016). Non-anthropomorphic robots as social entities on a neurophysiological level. Computers in Human Behavior, 57(C), 182-186. [https://doi.org/10.1016/j.chb.2015.12.034]
  17. Jacobs, J. V., Hettinger, L. J., Huang, Y.-H., Jeffries, S., Lesch, M. F., Simmons, L. A., Verma, S. K., & Willetts, J. L. (2019). Employee acceptance of wearable technology in the workplace. Applied Ergonomics, 78, 148-156. [https://doi.org/10.1016/j.apergo.2019.03.003]
  18. Kocielnik, R., Avrahami, D., Marlow, J., Lu, D., & Hsieh, G. (2018). Designing for workplace reflection: A chat and voice-based conversational agent. Proceedings of the 2018 Designing Interactive Systems Conference, 881-894. [https://doi.org/10.1145/3196709.3196784]
  19. Kawamura, K., Pack, R. T., Bishay, M., & Iskarous, M. (1996). Design philosophy for service robots. Robotics and Autonomous Systems, 18(1), 109-116. [https://doi.org/10.1016/0921-8890(96)00005-X]
  20. Kim, Y.-H., Choe, E. K., Lee, B., & Seo, J. (2019). Understanding personal productivity: How knowledge workers define, evaluate, and reflect on their productivity. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-12. [https://doi.org/10.1145/3290605.3300845]
  21. Lee, B., Wu, S., Reyes, M. J., & Saakes, D. (2019). The effects of interruption timings on autonomous height-adjustable desks that respond to task changes. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-10. [https://doi.org/10.1145/3290605.3300558]
  22. Lubars, B., & Tan, C. (2019). Ask not what AI can do, but what AI should do: Towards a framework of task delegability. Advances in Neural Information Processing Systems, 32.
  23. Odom, W., Wakkary, R., Lim, Y., Desjardins, A., Hengeveld, B., & Banks, R. (2016). From research prototype to research product. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2549-2561. [https://doi.org/10.1145/2858036.2858447]
  24. Ogasawara, A., & Gouko, M. (2017). Stationery holder robot that encourages office workers to tidy their desks. Proceedings of the 5th International Conference on Human Agent Interaction, 439-441. [https://doi.org/10.1145/3125739.3132581]
  25. Povel, D.-J., & Essens, P. (1985). Perception of temporal patterns. Music Perception, 2(4), 411-440. [https://doi.org/10.2307/40285311]
  26. ReportLinker. (2022). Household robots market-Growth, trends, COVID-19 impact, and forecasts (2022-2027). GlobeNewswire Newsroom. https://www.globenewswire.com/news-release/2022/03/02/2395266/0/en/Household-Robots-Market-Growth-Trends-COVID-19-Impact-and-Forecasts-2022-2027.html
  27. Rooksby, J., Asadzadeh, P., Rost, M., Morrison, A., & Chalmers, M. (2016). Personal tracking of screen time on digital devices. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 284-296. [https://doi.org/10.1145/2858036.2858055]
  28. Rogge, A. (2023). Defining, designing and distinguishing artificial companions: A systematic literature review. International Journal of Social Robotics, 15(9), 1557-1579. [https://doi.org/10.1007/s12369-023-01031-y]
  29. Šabanović, S., Reeder, S. M., & Kechavarzi, B. (2014). Designing robots in the wild: In situ prototype evaluation for a break management robot. Journal of Human-Robot Interaction, 3(1), 70-88. [https://doi.org/10.5898/JHRI.3.1.Sabanovic]
  30. Sasser, J. A., McConnell, D. S., & Smither, J. A. (2024). Investigation of relationships between embodiment perceptions and perceived social presence in human-robot interactions. International Journal of Social Robotics, 1-16. [https://doi.org/10.1007/s12369-024-01138-w]
  31. Shiomi, M., Kanda, T., Ishiguro, H., & Hagita, N. (2006). Interactive humanoid robots for a science museum. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 305-312. [https://doi.org/10.1145/1121241.1121293]
  32. Steen, M. (2013). Co-design as a process of joint inquiry and imagination. Design Issues, 29(2), 16-28. [https://doi.org/10.1162/DESI_a_00207]
  33. Tan, Z., Liu, Z., Guo, Z., & Gong, S. (2023). Designing a robot for enhancing attention of office workers with the heavily use of screen. In A. Marcus, E. Rosenzweig, & M. M. Soares (Eds.), Design, user experience, and usability (pp. 246-261). Springer Nature Switzerland. [https://doi.org/10.1007/978-3-031-35696-4_18]
  34. Ventre-Dominey, J., Gibert, G., Bosse-Platiere, M., Farnè, A., Dominey, P. F., & Pavani, F. (2019). Embodiment into a robot increases its acceptability. Scientific Reports, 9(1), 10083. [https://doi.org/10.1038/s41598-019-46528-7]
  35. Winikoff, M., Cranefield, J., Li, J., Doyle, C., & Richter, A. (2021, January). The advent of digital productivity assistants: The case of Microsoft MyAnalytics. In HICSS (pp. 1-10). [https://doi.org/10.26686/wgtn.13513929.v1]
  36. Wu, H., & Chen, Y. (2020). The impact of work from home (WFH) on workload and productivity in terms of different tasks and occupations. In C. Stephanidis, G. Salvendy, J. Wei, S. Yamamoto, H. Mori, G. Meiselwitz, F. F.-H. Nah, & K. Siau (Eds.), HCI International 2020 - Late Breaking Papers: Interaction, Knowledge and Social Media (pp. 693-706). Springer International Publishing. [https://doi.org/10.1007/978-3-030-60152-2_52]
  37. Zhang, B. J., Quick, R., Helmi, A., & Fitter, N. T. (2021). Socially assistive robots at work: Making break-taking interventions more pleasant, enjoyable, and engaging. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 11292-11299. [https://doi.org/10.1109/IROS45743.2020.9341291]
  38. Zimmerman, J., Forlizzi, J., & Evenson, S. (2007). Research through design as a method for interaction design research in HCI. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 493-502. [https://doi.org/10.1145/1240624.1240704]