Archives of Design Research
[ Article ]
Archives of Design Research - Vol. 29, No. 3, pp.5-23
ISSN: 1226-8046 (Print) 2288-2987 (Online)
Print publication date Aug 2016
Received 09 Mar 2016 Revised 24 May 2016 Accepted 27 May 2016

Effects of Head-Mounted Display (HMD) Position on Procedure Following Tasks and Usability

Young Mi Choi; Tao Yang
School of Industrial Design, Georgia Institute of Technology, Atlanta, GA, USA

Correspondence to: Young Mi Choi

Background Head-mounted displays (HMDs) have found value in industrial applications as aids in performing long and/or complicated procedural tasks. Currently, there are gaps in understanding the impact of various design characteristics on the performance of tasks aided by HMDs. This study investigated how different positioning of head mounted displays affected the performance of workers during procedure following tasks.

Methods Eight car maintenance tasks were performed by 20 participants with task guidance presented under four display conditions: above-eye, eye-centered, below-eye, and a traditional paper manual. Task times and errors were measured, and user experience measures were collected.

Results None of the display conditions had a significant effect on overall completion times. However, the below-eye display outperformed the above-eye display for tasks requiring participants to perform assessments. User experience ratings showed that the eye-centered condition was the most preferred display position among the three HMD conditions.

Conclusions A non-transparent monocular HMD that partially blocks the user's field of view (FoV) was not observed to significantly decrease overall task performance time. In support of previous findings, an HMD positioned below the eye outperformed the above-eye position for assess actions. User overreliance on the instructions provided via the HMD was also observed. In a slight departure from previous studies, participants indicated that the eye-centered HMD position provided the best overall experience and was the most preferred.


Human-centered Computing, HCI Design and Evaluation Methods, Head Mounted Display, Usability, Human Factors


A head-mounted display (HMD) is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one eye (monocular HMD) or both eyes (binocular HMD). Today, HMDs are usually segmented into two categories: helmet-mounted displays and wearable glasses.

HMDs are known for state-of-the-art display capabilities. In the consumer market, users use HMDs to enjoy high-quality image presentation and an immersive experience. HMDs can also provide additional functions such as Internet access, smart phone access, GPS, and navigation. Some market research predicts that the global HMD market will reach $12.28 billion by 2020 (HMD, 2015).

HMDs can also seamlessly provide workers with real-time contextual information while performing tasks and allow companies to integrate with existing back-end systems. The hands-free nature of HMDs provides advantages over many traditional technologies such as paper checklists, whose pages can be inconvenient to turn if workers must wear gloves.

Consulting and research groups believe that HMDs will have a great impact on heavy industries such as manufacturing and oil and gas, where they can enable on-the-job training in how to fix equipment and perform manufacturing tasks (Lee et al., 2014). They may also have significant impact on mixed industries such as retail, consumer goods, and healthcare, where users look for information via visual search. Other features such as voice commands and video calling also promise easy access to information and convenient remote collaboration.

Despite the expected benefits provided by HMDs, it is not known how the individual characteristics of an HMD contribute to these benefits. Even if an HMD system is shown to be better than current technologies, other HMD systems employing different design characteristics may also perform similarly. Without understanding if or how individual design attributes affect task outcomes, designers and developers cannot identify how best to customize an HMD system for a specific task scenario.

This study explores some of these variables. Common car maintenance tasks were used and performed in a realistic environment with procedures and preparations that are low-cost and easy to replicate. The goal was to better understand the implications of the attributes that are essential to Head-Mounted Displays, in particular the position of the display.


HMDs have found applications in a wide range of industries. Among all the benefits that they are believed to provide, aid in performing procedural tasks is one of the most valuable. Procedure following is commonly used in industries like oil, manufacturing, health care, aviation and retail. It is used to assist the workers in many types of tasks, including picking, assembly, operation, inspection, and maintenance.

Procedure following prevents machine components from missing certain inspections (Hart, 2006). It helps improve consistency of workflow by presenting a list of easily understood instructions, which helps workers work in a safer, more efficient, and more consistent way. A stand-alone device or system to aid workers in following a procedure is a natural solution in cases where there are a large number of steps and/or complex steps that must be followed. Procedure following is a good way to provide adequate knowledge to less experienced workers to perform a task, as long as they are physically capable of doing the job. Even if the steps for a particular task are already well known to workers, interruptions may occur, causing a worker to skip steps or forget where he or she is in a procedure (Ockerman, 2000).

Written documents are the traditional media used as procedural aids for workers, but they come with a number of issues. Written documents and checklists can be bulky and heavy, and pages can be hard to turn while wearing gloves or using tools. When procedures change or are updated, the information in written documents must be updated and all copies of the old documents replaced. Navigating through written pages can also be difficult, especially if task steps are not in a linear sequence. Studies have shown that paper checklists may lead to certain types of errors, such as skipping steps due to interruptions, distractions, or even intention (Palmer & Degani, 1991). There is also the possibility of repeating steps because a worker forgot which steps were already performed.

Task guidance systems have been developed to address some of these issues. The term “task guidance systems” was first defined by Ockerman (2000), referring to systems made up of inexpensive electronics designed to help workers take advantage of the benefits of procedures. These systems only provide pre-loaded procedure information about a task (usually general information about the task along with how to complete it step by step). They are not capable of presenting information related to the current state of the environment, the worker, or the particular object being inspected, maintained, or assembled.

Systems with more sophisticated technologies can sense the surrounding environment and intelligently contribute to a worker's situational awareness. For example, Reif et al. developed an HMD system using Augmented Reality (AR) technology to support an order picking system in a real storage environment (Reif et al., 2009). These intelligent HMD systems can process information collected from the environment and provide display-enhanced instructions to the operator. Often the resulting instructions contain extra real-time information and/or eliminate information that is not related to the current task situation.

Though these systems appear more intelligent to users, they often require special design and configuration for different tasks.

Some literature on the use of HMDs in procedure following exists. Smailagic & Siewiorek (2002) documented the results of U.S. Marine engineers doing a Limited Technical Inspection (LTI) with VuMan 3, a wearable computer designed at Carnegie Mellon University. They found a decrease of up to 40% in inspection time compared to traditional paper handling and a reduction of total inspection/data entry time by up to 70%. The VuMan 3 system was text based, so there was no image of the equipment or other visual aid; the study therefore could not show whether the HMD actually helped the engineers in performing and completing the task itself. In later work, Siegel & Bauer conducted a field study comparing a wearable system with a printed technical procedure on two aircraft maintenance tasks. This time the wearable system was able to give task guidance and allowed more manipulation, but the specialists took on average 50% more time to perform the tasks using the wearable system.

Henderson & Feiner (2007) incorporated augmented reality (AR) into a maintenance job aid. Task aid information and tracking data of the work area received from an inertial-optical tracker was processed by the Valve Source game engine SDK. The stereoscopic content was then rendered on an Inner Optic Vidsee video see-through HMD. In the study, the user followed instructions to remove a Dart 510 oil pressure transducer from a Rolls-Royce Dart 510 prototype component. The highlight of this study was the implementation of AR in a relatively complex task guidance system. However, the study did not include a comparison with other systems of the time to show whether the system provided any significant advantages.

Ockerman & Pritchett (1998) investigated the capabilities of wearable computers using the procedural task of preflight aircraft inspection. They compared three different methods: a text-based HMD system, a picture-based HMD system, and the traditional memory-recall method. The results showed no statistically significant effect on fault detection rate, while video recordings showed that those who used the HMD systems tended to overlook items that were intentionally omitted from the checklist.

Weaver et al. (2010) found that an HMD with task guidance information led to significantly faster completion times and fewer errors than audio, text-based, or graphical paper methods. Similar work by Guo et al. (2014) found that an HMD was better than an LED-indicating system. Both of these studies were conducted in a layout optimized for the specific task. Because the complexity of the tasks was relatively low, the observed effects may not translate to other task-guidance applications.

There are potential problems along with benefits to the use of HMDs. In one study (Peli, 1990), a monocular HMD with a configurable display location was used to evaluate various visual phenomena such as binocular rivalry, image motion, and motion sickness. It found that a peripheral display position could effectively reduce binocular rivalry and was preferred by the subjects. In the system studied, only text was displayed on the HMD; whether the findings would hold in an image-based task guidance system was not clear.

Katsuyama et al. (1989) evaluated the effects of various display positions on task performance and user comfort. They designed a study in which subjects performed a primary task by focusing on a monitor located 170 cm away along with a secondary task displayed on a miniature cathode ray tube (CRT) attached to the head through an adjustable chin/head rest. The viewing angle of the secondary CRT relative to the primary monitor was manipulated across 12 treatment conditions (three levels of elevation, +15°, 0°, and −15°, and four levels of azimuth, 0°, 20°, 35°, 45°). The study found that secondary task displays located 15° below a primary viewing area resulted in better performance and decreased discomfort compared to an identical display located 15° above the primary viewing area.

In a previous study, Zheng et al. (2015) investigated the effects of multiple eye-wearable technology characteristics on machine maintenance. A series of car maintenance tasks involving Locate, Manipulate, and Compare actions was tested with four different technologies: a peripheral eye-wearable display, a central eye-wearable display, a tablet, and a paper manual. It showed that the peripheral eye-wearable display yielded longer completion times than the central display. However, the eye-peripheral condition in that study was a monocular HMD while the eye-central condition was a binocular HMD. It is unknown whether the result would remain true if both conditions were monocular or binocular.


This study aimed to investigate the effects of different display positions on procedure following. Car maintenance tasks with sufficient complexity were used because they were easily accessible and frequently performed by regular people. They are also similar to many types of industrial mechanical inspection tasks. The experimental procedure was consistent with Zheng et al.'s (2015) previous work, but instead of using an existing product from the market, a device was specifically designed and built to allow only the display position to be varied.

Though many different types of HMDs exist, each device varies greatly from the others and most support only one particular viewing angle. The HMD used in this study was composed of an NTSC/PAL (television) video glass (320x240 pixels) for the display, a Raspberry Pi single-board computer, power supplies, and 3D-printed housings for the other parts. A close-up of the display is shown in Figure 1. A modem was used to provide an internal network connection so that task instructions could be sent from a laptop to the near-eye display. The battery for the HMD and the case for the Raspberry Pi were held in a waist pack worn by the user.

Figure 1

Close-up of the display

A mounting system (Figure 2) was designed to hold the display on the user's head. It was made up of an adjustable elastic headband and a 3D-printed panel to which the display device was attached. The core display device was mounted onto the headband using 3M fastener material (twice as strong as Velcro) to ensure stability and ease of reconfiguration. The headset (headband and display device) was adjustable (Figure 3) to enable use with the user's right eye with the display located above, below, or directly in front of the wearer's line of sight.

Figure 2

Adjustable headband and the display device

Figure 3

The test HMD worn in each of the three test configurations

Four different conditions were investigated. Three used the HMD system and the fourth used a paper manual as a baseline for comparison:

1. Above-eye. In this condition, the display was positioned above the participant's line of sight. Participants had to look upward at a slight angle (15° above the line of sight) to read the information.

2. Eye-centered. In this condition, the display was centered on the participant's line of sight. Participants looked straight ahead to read the information.

3. Below-eye. In this condition, the display was positioned below the participant's line of sight. Participants had to look downward at a slight angle (15° below the line of sight) to read the information.

4. Paper. In this condition, the instructions were printed in a custom-made paper manual, one page per instruction. The size of each image was calculated based on an assumed average reading distance of 40 cm (Bababekova et al., 2011).
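The paper-manual image sizing described above is a simple visual-angle calculation: an image that should appear the same size as the HMD image must subtend the same visual angle at the 40 cm reading distance. A minimal sketch (the 40 cm distance is from the study; the function name and the example angle are illustrative assumptions):

```python
import math

def image_size_cm(visual_angle_deg, viewing_distance_cm=40.0):
    """Physical size (cm) that subtends a given visual angle at the
    assumed average reading distance of 40 cm (Bababekova et al., 2011)."""
    return 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg) / 2)

# e.g., an image meant to subtend 10 degrees at 40 cm would be printed
# at roughly 7 cm across
print(round(image_size_cm(10.0), 2))
```

The same formula, run in reverse, gives the visual angle subtended by a printed image of known size, which is how equivalence between the paper and HMD conditions can be checked.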

Twenty participants (7 female, 13 male), aged 21 to 32, were recruited. All participants had driving experience, 5 years on average. Most participants (13 of 20) had not performed any maintenance tasks themselves within the prior 12 months. All participants had normal (20/20) or corrected-to-normal vision. During the tests, 5 participants wore eyeglasses while the rest did not.

Participants were instructed to finish the tasks as quickly and correctly as possible. All tasks were conducted outdoors in a realistic setting. Participants received compensation in the form of a $10.00 Amazon Gift Card.

Each participant performed eight instruction-guided car maintenance tasks. Each task was decomposed into individual action steps. Each step consisted of an actual photo taken of the test car and a simple instruction that could be understood by a novice user. The instructions were screened and validated against the official car manual and online resources. The eight tasks are shown in Table 1.

The maintenance tasks performed by the study participants.

Each participant performed a training task (opening the car's hood) before beginning study tasks in order to familiarize themselves with the instruction interface and how to interact with the system.

Based on task analysis and a review of previous research (Neumann & Majoros, 1998; Arthur, 2000), all of the steps in each task were classified into one of four action types: Read, Locate, Manipulate, or Assess. Figure 4 shows an example of the interface design for each of the four action types. Read involved reading instruction text presented on the display. Locate involved visual search, typically performed to find a specific car component; the part to look for was highlighted by a bright blue outline. Manipulate involved physical manipulation such as unscrewing, lifting, or removing. Assess involved visually comparing what was seen in the real world with what was displayed or described on the screen, and answering a question aloud related to the task step (such as the condition of a component).

Figure 4

Instruction examples of the four action types: Read, Locate, Manipulate, and Assess

The eight tasks were grouped into four trials. Based on estimated complexity, one relatively easy task was paired with one relatively harder task. The relatively easy tasks were the Coolant Level, Engine Oil Level, Washer Fluid Level, and Fuse checks. The relatively harder tasks were the Cabin Air Filter, Center Brake Light, Air Filter, and Headlight checks. Tasks that required disassembly or opening of compartments were considered harder, and tasks that did not were considered easier. The specific task pairings were randomized for each participant.

The study was conducted during the day at an outdoor parking deck. All the tests were conducted either on a cloudy day or in the morning or late afternoon of a sunny day. This ensured that the lighting conditions were similar for all participants and avoided the influence of bright sunlight. The car used for the experiment was a 2007 Toyota Corolla LE. The tools necessary to complete all the tasks were handed to the participant when needed and consisted of paper towels, a screwdriver, a pair of pliers, and a bottle of washer fluid. Participants were also asked to put on a pair of gloves before performing the tasks.

Two researchers were involved in each experimental session, as shown in Figure 5. The first, a facilitator, introduced the procedure to the participant and oversaw the participant's performance. The facilitator also initiated the computer responses during the tests when participants gave voice commands. The second, a cameraman, followed the participant and recorded the process.

Figure 5

A participant performing a task while the facilitator oversaw the process and switched screens

The 20 participants were randomly assigned to one of four groups. Every group performed the same sequence of trials but received a different sequence of experimental conditions (Table 2). Twenty participants ensured five in each condition sequence, counterbalancing potential order effects.

Test groups and corresponding conditions for different Trials.
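Assignments like those in Table 2 are typically generated so that each condition appears once in every trial position. One common construction is a balanced Latin square; a minimal sketch (the condition names are from this study, but the construction shown is a generic Williams design, not necessarily the exact scheme the authors used):

```python
def balanced_latin_square(conditions):
    """Build a balanced Latin square: each condition appears exactly once
    in each trial position across groups (fully balanced for even n)."""
    n = len(conditions)
    # first-row index pattern: 0, 1, n-1, 2, n-2, ...
    pattern, lo, hi = [0], 1, n - 1
    use_lo = True
    while len(pattern) < n:
        if use_lo:
            pattern.append(lo)
            lo += 1
        else:
            pattern.append(hi)
            hi -= 1
        use_lo = not use_lo
    # each subsequent row shifts every index by one (mod n)
    return [[conditions[(p + r) % n] for p in pattern] for r in range(n)]

orders = balanced_latin_square(["Above-eye", "Eye-central", "Below-eye", "Paper"])
for group, order in enumerate(orders, start=1):
    print(f"Group {group}: {order}")
```

With 20 participants and 4 orders, each order is used by 5 participants, matching the study's 5-per-sequence design.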

The laptop allowed the researchers to remotely control the image displayed on the HMD, representing movement between steps. It also logged the exact time spent on each step to a file for analysis. To navigate through the instructions, participants spoke voice commands: “Next” to go one step forward and “Previous” to go one step back. The instruction slides displayed on the HMD were manually changed from the connected laptop when the participant voiced a command. In the paper condition, the same instructions were printed single-sided and stapled into a booklet, one step per page. Participants flipped the pages manually to navigate. After finishing the current step but before turning the page, the participant still voiced “Next” or “Previous” so that the task time could be recorded.

An experimental session for each subject lasted 40 to 60 minutes and consisted of three phases. In the first phase, a description of the study was given to the participant. Informed consent was obtained, and a demographics questionnaire was administered covering basic information and experience with the tasks conducted in the experiment. In the second phase, four tests were performed, each with a different experimental condition. Each test consisted of an introduction to the experimental condition, a practice task, a trial, and a post-trial questionnaire. Subjects could take a short break between tests. In the third phase, the participant was asked to rank the four conditions just tested from most favorite to least favorite and to justify the rankings.

Performance measures included completion time and errors. Completion time was measured per step rather than per task. It was obtained by subtracting an instruction's arrival time (when the participant arrived at the instruction) from its leave time (when the participant left the instruction). Errors were obtained by comparing the participant's answers about the condition of the car components with the actual condition.
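The per-step timing just described can be sketched as a small reduction over the logged navigation events. The log format and function below are hypothetical illustrations (the study does not describe its logging tool at this level of detail); each "Next"/"Previous" command produces one arrival record:

```python
def completion_times(event_log):
    """event_log: chronological (timestamp_seconds, step_id) pairs, one per
    instruction arrival. A step's completion time is its leave time minus
    its arrival time, accumulated across revisits."""
    times = {}
    for (t_arrive, step), (t_leave, _next_step) in zip(event_log, event_log[1:]):
        times[step] = times.get(step, 0.0) + (t_leave - t_arrive)
    return times

# A participant visits step 1, moves to step 2, backs up to step 1, then moves on:
log = [(0.0, 1), (10.0, 2), (25.0, 1), (30.0, 3)]
print(completion_times(log))  # step 1 accumulates 10 s + 5 s
```

Accumulating over revisits matters because participants could voice "Previous" to return to an earlier instruction, so a step could be visited more than once.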

User experience measures included overall preference ranking, task load, and system usability. Overall preference was obtained by asking participants to rank the four experimental conditions at the end of the session, from most favorite (1) to least favorite (4). Task load was measured by asking participants to fill in the NASA-TLX questionnaire (Lee et al., 2006; Hart & Lowell, 1998) (one questionnaire per task, eight total). System usability was measured by asking participants to answer the six most relevant questions of the System Usability Scale (SUS) questionnaire (Brooke, 1996) (one questionnaire per trial, four total).
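For reference, the canonical 10-item SUS is scored as follows (Brooke, 1996). Note this study administered only six selected items, and its exact rescaling for the shortened form is not reported; the sketch below shows only the standard full-scale formula:

```python
def sus_score(responses):
    """Standard 10-item SUS scoring: odd-numbered items contribute
    (rating - 1), even-numbered items contribute (5 - rating); the sum
    is multiplied by 2.5 to map onto a 0-100 scale."""
    assert len(responses) == 10, "standard SUS has ten 1-5 Likert items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([3] * 10))  # all-neutral responses score 50.0
```

The alternating polarity (odd items positively worded, even items negatively worded) is why the two item groups are scored differently.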


Among the 20 participants, only one committed an error in the Oil Level Check Task. The rest finished the tasks without error.

A 3-way ANOVA (Display Condition * Task * Action Type) was applied to completion time. The results showed that Task (F = 2.820, p = 0.006, power = 0.922) and Action Type (F = 86.329, p < 0.001, power = 1.000) had significant effects on completion time. There were also significant two-way interaction effects for every combination of the three independent variables: Task * Action Type (F = 4.608, p < 0.001, power = 1.000), Task * Display Condition (F = 2.078, p = 0.003, power = 0.993), and Action Type * Display Condition (F = 1.893, p = 0.049, power = 0.836). There was no significant three-way interaction effect. The results are reported in Figure 6.

Figure 6

Three way ANOVA (Display Condition*Action Type*Task) on completion time

Post-hoc pair-wise comparisons with Bonferroni corrections on Task showed that Cabin Air Filter, Fuse Box, and Brake Light yielded significantly longer completion times than Coolant, Oil Level, and Washer Fluid (p < 0.030). No significant difference was found for Air Filter and Headlight, as shown in Figure 7.

Figure 7

Completion time for different Tasks (with S.E. as the error bar)

The four Action Type conditions yielded different completion times (Figure 8). Post-hoc pair-wise comparisons with Bonferroni adjustments on Action Type showed that Manipulate had the longest completion time (p < 0.001) and Read had the shortest (p < 0.001). There was no significant difference between Locate and Assess (p = 0.370). No significant difference in completion time (p > 0.862) was observed among the four Display Conditions.
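Bonferroni-corrected pairwise comparisons of the kind reported above can be sketched with SciPy. This is a simplified illustration using independent-samples t-tests with a manual Bonferroni adjustment; the study's actual post-hoc tests were run within its repeated-measures ANOVA framework, and the completion-time samples below are invented:

```python
from itertools import combinations

from scipy.stats import ttest_ind

def bonferroni_pairwise(groups):
    """groups: dict mapping condition name -> list of completion times.
    Returns Bonferroni-corrected p-values for every pair of conditions."""
    pairs = list(combinations(sorted(groups), 2))
    corrected = {}
    for a, b in pairs:
        _stat, p = ttest_ind(groups[a], groups[b])
        corrected[(a, b)] = min(1.0, p * len(pairs))  # Bonferroni adjustment
    return corrected

# Invented completion-time samples (seconds) for three action types:
samples = {
    "Read": [1.0, 1.1, 0.9, 1.0, 1.05],
    "Manipulate": [5.0, 5.2, 4.8, 5.1, 4.9],
    "Locate": [2.0, 2.1, 1.9, 2.05, 1.95],
}
for pair, p in bonferroni_pairwise(samples).items():
    print(pair, round(p, 4))
```

Multiplying each raw p-value by the number of comparisons (capped at 1.0) is the standard Bonferroni correction for controlling the family-wise error rate.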

Figure 8

Completion time for different Action Types (with S.E. as the error bar)

A 2-way ANOVA (Display Location * Task) applied to the NASA-TLX ratings showed neither significant main effects nor interaction effects on overall workload.

A 2-way ANOVA (Display Location * Trial) applied to the SUS ratings showed that Display Condition had a significant main effect on SUS ratings (F = 3.476, p = 0.021, power = 0.750). As shown in Figure 9, the Eye-central condition received the highest score among the three HMD conditions.

Figure 9

SUS ratings of different Display Conditions

A one-way ANOVA showed a significant effect of Display Condition on the overall preference rankings (the lower the score, the higher the rank). Paper was the most preferred of the four conditions, followed by Eye-central (Figure 10).

Figure 10

Overall preference rankings of different Display Conditions (the lower the score, the more preferred)


The analysis did not reveal any significant differences in performance (overall task completion time) between the paper condition (control) and the three HMD conditions. This study's main objective was to investigate whether this particular HMD attribute (display position) contributes to a measurable difference in task performance. The evidence from this study indicates that the studied HMD conditions performed comparably, with no significant positive or negative effect on task performance.

This may be a useful result for guiding future HMD design. However, the findings may also be influenced by other factors. First, the display used in this study was monocular, and visual information may have been less accurate than is possible with a binocular display. This is due to a phenomenon called binocular rivalry: when two different images are presented to the eyes simultaneously, perception alternates between the two. This interference can make it harder to focus on an image presented to one eye when a different image (the ambient scene) is viewed by the other (Zheng et al., 2015). This explains why many participants closed their left eye when reading the instructions, which were displayed in front of the right eye. Second, the experiments were conducted outdoors on March days in Atlanta without shielding the display. Although bright sunny weather was intentionally avoided, the ambient lighting was still brighter than the display. This could make instructions difficult to read on a relatively low-contrast screen, even with the other eye covered (Peli, 1990). Together, these two conditions may have increased the time participants needed to process the instructions in the HMD conditions.

There was no significant difference in overall task completion time between the Above-eye, Eye-central, and Below-eye HMD conditions. There was considerable variability in some tasks. For example, the total completion time for the cabin air filter check varied from 172.72 seconds to 394.24 seconds. The majority of this difference was caused by Manipulate actions, indicating that participants spent very different amounts of time simply operating the mechanisms regardless of the Display Condition. This variability is too large to observe any effect of display position on total task completion time.

The Eye-central condition was expected to result in worse performance on some tasks than the other positions because the display was not transparent and blocked part of the user's field of view (FoV). The analysis revealed no such decrease in performance; participants seemed to adapt well to the limitation. Only five participants mentioned the display blocking their FoV: three raised the issue for the Eye-central condition and two for the Below-eye condition. Since task performance in the HMD conditions was not significantly different from the paper (control) condition, this may indicate that even a non-transparent display that blocks part of the FoV does not have a strong enough effect to significantly decrease overall worker task performance.

The findings, participant comments, and data in this study support aspects of previous work. The below-eye position outperformed the above-eye position for Assess actions. A previous empirical study reported “better performance and decreased discomfort in the bifocular position (15° below the line of sight) in comparison to the bioptic position (15° above the line of sight)” (Katsuyama, Monk & Rolek, 1989). In this study, the statistical data and the participants' feedback again supported this finding. As some participants reported, “It just feels weird to look up” and “It's harder to get used to the higher angle compared to the lower angle.” The question is why this effect appeared only during Assess actions. The answer might be that Assess required more focus changes between the different images perceived by each eye.

Unlike Read and Manipulate, where most participants could look at the display once and proceed to the operation or the next action, Assess required participants to look back and forth at the screen multiple times to compare the actual part with the reference image.

Comparing the measured times for the specific action types (Read, Locate, Manipulate, Assess), Manipulate actions had the longest completion time, and there were no significant differences between Locate and Assess. The Manipulate steps required participants to take time to actually use tools and operate mechanisms. Since participants were novices (i.e., not car mechanics), this was a more demanding and unfamiliar task than reading instructions and assessing what to do. This further indicates that including Manipulate action time (actually working on the car) could mask effects of the display position. If all participants were experts who performed the tasks similarly, it might be possible to observe such effects. Even if a statistically significant effect could then be observed, the practical effect might be too small to be useful.

The data collected from the SUS survey do reveal particular user preferences. Eye-central outperformed the other two HMD conditions in overall experience and was the most preferred position for the HMD. This finding differs from some other study results: Zheng et al. (2015) and Peli (1990) found that an eye-peripheral position was preferred over an eye-central position. There are several possible reasons why the eye-central display in this study had the highest user experience rating.

Although the central monocular HMD used in the present study partially blocked the lateral peripheral field, it was not totally occluding. In fact, as some participants described, they could “still see things around pretty well.” The literature suggests that such a peripheral field “may be sufficient to maintain binocular fusion and serve alignment of the eyes” (Peli, 1990; Burian, 1939; Winkelman, 1951). In Zheng et al.'s study, the eye-central HMD was binocular and composed of thick lens frames and wide-FoV lenses; participants could see the world only through the transparent screen. In other words, they had to filter out the instruction images overlaid on the ambient environment, which caused extra effort and discomfort. The binocular HMD itself also reduced the accuracy of depth perception, whereas in the present study peripheral awareness was sufficient to judge the spatial relationship between the participants and other objects.

In Peli’s (1990) study, the primary monitor faced the subject from 170 cm away, so subjects essentially looked straight ahead when viewing it. In the present study, however, all of the car components were located below the subjects’ head level. When Locate and Assess actions were required, subjects tended to turn their eyes downward to look at the components rather than crouching down to bring a component into their straight-ahead line of sight. The eye-central display was therefore not actually in the way. Likewise, when manipulating components, participants did not feel that a display directly in front of the right eye interfered with their performance. As one participant stated, “It felt the same when you are actually working on something” (referring to all four conditions).

These characteristics of the test device and tasks, together with the well-established fact that human visual acuity is best in the fovea (the central pit of closely packed cones in the retina), could explain why the eye-central display condition received the highest experience ratings.

The phenomenon of over-reliance on HMD technology for task guidance (Ockerman, 2000) seen in previous work (Zheng et al., 2015) was also observed in some tasks in the present study. For example, some participants went to the passenger’s side of the cabin to look for the hood release. This was surprising because anyone with at least one year of driving experience should know that the hood release is located on the driver’s side; people with driving experience can normally locate it without any visual hint. The task guidance presented via the HMD seemed to reduce participants’ inclination to think for themselves and make decisions based on previously acquired experience and knowledge. Similar over-reliance was observed in the Headlight Check task, where some participants searched for the bulb assembly around the coolant reservoir and the belts. The instruction image showed the bulb connected to a wire, and both of those areas contained many wires. Most experienced drivers would instead naturally look first at the back of the headlight housing.

This study contains some limitations. Ideally, in the above-eye condition the HMD should be located 15° above the primary viewing area, and 15° below it in the below-eye condition. However, it was difficult to keep the angle of the HMD exactly the same across all subjects because of individual differences: users varied in anthropometric measurements such as head circumference, ear-eye distance and ear height. These variations made it difficult to adjust the HMD so that it remained in the proper position, without shifting, for a wide range of users. Because the monocular display was relatively small and contained a convex lens with a fixed focal distance, even small individual differences were amplified. This issue is common in mainstream HMD products as well. The device built for this study was highly adjustable, but slight inconsistency was inevitable and may have influenced some results.

Another limitation was the field of view. Most participants had no problem reading the instructions on the display, but some complained that the screen was too small to perceive the information easily. Although the literature reports that FoV has no effect on binocular rivalry (Peli, 1990), a small FoV appeared to reduce the efficiency with which participants perceived the procedural instructions.


This study evaluated the effect of HMD display positions on task performance and user experience. No significant effects of the studied display positions on overall task performance were found, but some particular user preferences were observed. Results that may help inform future designs of HMD systems and studies of the effect of HMD attributes on performance and satisfaction were:

• The HMD conditions compared in this study were equivalent and had no significant positive or negative effect on task completion time. Even if a statistically significant effect exists, the practical effect is likely too small to be useful.

• A non-transparent, monocular HMD that partially blocks a user's FoV was not observed to significantly decrease overall task performance time.

• The below-eye position outperformed the above-eye position for assess actions in this study. This supports previous findings.

• Participants indicated that the eye-central HMD position provided the best overall experience, and it was also the most preferred position in this study. This result differed from some previous findings, most likely because of a design feature of the HMD used here: it did not fully block the eye and allowed a peripheral view of the environment.

• Participant over-reliance on the instructions provided via HMD was observed during some of the maintenance tasks performed in this study, supporting earlier findings.


Citation : Choi, Y., & Yang, T. (2016). Effects of Head-Mounted Display (HMD) Position on Procedure Following Tasks and Usability. Archives of Design Research, 29 (3), 5-23.

Copyright : This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted educational and non-commercial use, provided the original work is properly cited.


  • Arthur, K. W. (2000). Effects of field of view on performance with head-mounted displays (Doctoral dissertation). University of North Carolina at Chapel Hill.
  • Bababekova, Y., Rosenfield, M., Hue, J. E., & Huang, R. R. (2011). Font size and viewing distance of handheld smart phones. Optometry & Vision Science, 88(7), 795-797.
  • Burian, H. M. (1939). Role of peripheral retinal stimuli. Archives of Ophthalmology, 21, 486-491.
  • Guo, A., Raghu, S., Xie, X., Ismail, S., Luo, X., Simoneau, J., ... & Starner, T. (2014, September). A comparison of order picking assisted by head-up display (HUD), cart-mounted display (CMD), light, and paper pick list. In Proceedings of the 2014 International Symposium on Wearable Computers (pp. 71-78). ACM.
  • Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology, 52, 139-183.
  • Hart, S. G. (2006, October). NASA-task load index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 9, pp. 904-908). Sage Publications.
  • Henderson, S. J., & Feiner, S. K. (2007). Augmented reality for maintenance and repair (ARMAR). Columbia Univ New York Dept of Computer Science.
  • HMD. (2015). Head-Mounted Display Market - Global Trend and Forecast to 2020. Retrieved November 2015.
  • Jordan, P. W., Thomas, B., McClelland, I. L., & Weerdmeester, B. (Eds.). (1996). Usability evaluation in industry. CRC Press.
  • Katsuyama, R. M., Monk, D. L., & Rolek, E. P. (1989, May). Effects of visual display separation upon primary and secondary task performances. In Aerospace and Electronics Conference, NAECON 1989, Proceedings of the IEEE 1989 National (pp. 758-764). IEEE.
  • Lee, P., Stewart, D., & Calugar-Pop, C. (2013). Technology, media & telecommunications predictions 2011. Deloitte.
  • Neumann, U., & Majoros, A. (1998, March). Cognitive, performance, and systems issues for augmented reality applications in manufacturing and maintenance. In Virtual Reality Annual International Symposium, 1998. Proceedings., IEEE 1998 (pp. 4-11). IEEE.
  • Ockerman, J. J., & Pritchett, A. R. (1998, October). Preliminary investigation of wearable computers for task guidance in aircraft inspection. In Wearable Computers, 1998. Digest of Papers. Second International Symposium on (pp. 33-40). IEEE.
  • Ockerman, J. J. (2000). Task guidance and procedure context: aiding workers in appropriate procedure following.
  • Palmer, E., & Degani, A. (1991, April). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the Sixth Symposium on Aviation Psychology (pp. 178-183).
  • Peli, E. (1990). Visual issues in the use of a head-mounted monocular display. Optical Engineering, 29(8), 883-892.
  • Reif, R., Günthner, W. A., Schwerdtfeger, B., & Klinker, G. (2009, February). Pick-by-vision comes on age: evaluation of an augmented reality supported picking system in a real storage environment. In Proceedings of the 6th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (pp. 23-31). ACM.
  • Smailagic, A., & Siewiorek, D. (2002). Application design for wearable and context-aware computers. Pervasive Computing, IEEE, 1(4), 20-29.
  • Weaver, K. A., Baumann, H., Starner, T., Iben, H., & Lawo, M. (2010, April). An empirical task analysis of warehouse order picking using head-mounted displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1695-1704). ACM.
  • Winkelman, J. E. (1951). Peripheral fusion. Archives of Ophthalmology, 45(4), 425.
  • Zheng, X. S., Foucault, C., Matos da Silva, P., Dasari, S., Yang, T., & Goose, S. (2015, April). Eyewearable technology for machine maintenance: Effects of display position and hands-free operation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 2125-2134). ACM.

Figure 1
Close-up of the display

Figure 2
Adjustable headband and the display device

Figure 3
The test HMD worn in each of the three test configurations

Figure 4
Instruction examples of four action types: Read, Locate, Manipulate and Assess

Figure 5
A participant performing a task while the facilitator oversaw the process and switched screens

Figure 6
Three-way ANOVA (Display Condition × Action Type × Task) on completion time

Figure 7
Completion time for different Tasks (with S.E. as the error bar)

Figure 8
Completion time for different Action Types (with S.E. as the error bar)

Figure 9
SUS ratings of different Display Conditions

Figure 10
Overall preference rankings of different Display Conditions (the lower the score, the more preferred)

Table 1

The maintenance tasks performed by the study participants.

Task Description
Coolant Level Check Check coolant level and add coolant
Cabin Air Filter Check Check condition of the cabin air filter
Engine Oil Level Check Check engine oil level using the oil dipstick
Center Brake Light Check Remove the middle brake light assembly and check if light is burned out
Fuse Check Check a specific fuse from the exterior fuse box to see if it is blown
Washer Fluid Level Check Check the washer fluid level and add fluid if needed
Air Filter Check Check the air filter condition and replace if needed
Headlight Check Remove the front right light assembly and check if light is burned out

Table 2

Test groups and corresponding conditions for different Trials.

Trial 1 Trial 2 Trial 3 Trial 4
Group 1 Above-eye Eye-central Below-eye Paper
Group 2 Paper Above-eye Eye-central Below-eye
Group 3 Below-eye Paper Above-eye Eye-central
Group 4 Eye-central Below-eye Paper Above-eye
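The assignment in Table 2 is a cyclic Latin square: each group's order is the previous group's order rotated by one trial, so every condition appears exactly once in each trial position across the four groups. A minimal sketch reproducing the rotation (the function name is illustrative, not from the paper):

```python
def cyclic_latin_square(conditions):
    """Return one condition order per group: group k sees condition
    (j - k) mod n in trial j, so each condition occupies every trial
    position exactly once across the groups."""
    n = len(conditions)
    return [[conditions[(j - k) % n] for j in range(n)] for k in range(n)]

orders = cyclic_latin_square(["Above-eye", "Eye-central", "Below-eye", "Paper"])
# orders[0] -> ['Above-eye', 'Eye-central', 'Below-eye', 'Paper']  (Group 1)
# orders[1] -> ['Paper', 'Above-eye', 'Eye-central', 'Below-eye']  (Group 2)
```

Note that this rotation counterbalances order position but not immediate carryover between specific condition pairs, which a balanced Latin square would additionally address.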