This entry is a short guide on how to run field experiments for usability and user experience. The HCI literature reports that field experiments have many advantages over laboratory experiments: they capture the complexity of real contexts, which cannot be reproduced in a laboratory. As the literature also notes, these experiments cannot be replaced by expert evaluations, because field experiments focus on the participants and the context, that is, real users in a real context: the weather, user profiles, the effectiveness of location-based systems, screen resolutions, keyboards… The only way to see how the user and the system perform is to go out and try. As Nielsen and Brewster point out, field experiments are always difficult to run, because sufficient data must be collected without interfering with the experiment or conditioning the participants. For mobile devices in particular, usability is a special concern because of the contexts and environments in which the devices may be used. Many services and features depend on the context, such as location-based services and outdoor applications, and these are difficult to simulate in a laboratory. Usability testing in the laboratory is therefore very limited and will never fully reproduce what testing with real users in a real context reveals.
To start a simple field experiment, not many things are needed: something to test, participants, a way of documenting the experiment, and some measures with which to study the results and draw conclusions.
Regarding participants, as Nielsen said, only 5 are needed to uncover about 80% of your system’s problems. Recruiting participants for an experiment is always difficult, so try to offer some kind of reward: a dinner ticket, supermarket discounts… give something back, and compensate people for their time. If all else fails, recruit your friends.
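Nielsen’s “5 users” claim comes from a well-known problem-discovery model. As a quick illustration (assuming the commonly cited average per-user discovery rate of about 0.31, which is an assumption here, not a figure from this entry), a short sketch shows why returns diminish after roughly five participants:

```python
def proportion_found(n_users, discovery_rate=0.31):
    """Problem-discovery model: 1 - (1 - L)^n, where L is the
    probability that a single participant uncovers a given problem.
    L = 0.31 is the commonly cited average (an assumption here)."""
    return 1 - (1 - discovery_rate) ** n_users

# With 5 participants the model predicts roughly 84% of problems found,
# close to the 80% figure quoted above.
for n in (1, 3, 5, 10):
    print(n, round(proportion_found(n), 2))
```

The curve flattens quickly: adding a sixth or seventh participant uncovers few new problems, which is why small panels are often considered enough for a first field test.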
The documentation of the experiment requires special attention: all the data the participants generate must be collected and saved for later analysis. Depending on your test, the documentation may include screen captures, audio recordings, video recordings… Use these channels so that no information is lost and conclusions can be drawn in the final stage of the experiment. Cheap cameras such as a GoPro and electronic forms (on tablets or smartphones) help save time.
The most important part of the experiment is the method used to obtain the data from which conclusions will be drawn. In usability and user experience work it is common to use scales such as SUS, QUIS, or Likert items. These methods present the participant with a list of questions whose answers yield a quantified picture of what the participant thinks. But there are many other ways of gathering quantitative and qualitative data on which to build conclusions. The following table summarizes what Goodman and Nielsen proposed.
| Measure | What it captures | How it is collected |
| --- | --- | --- |
| Errors | Productivity, key points and error points | % of completed tasks |
| Perceived workload | Satisfaction and effort perceived by the participant | Questionnaires and interviews |
| Comfort | Satisfaction and acceptance of the device or software system | Interviews |
| Comments and preferences | Satisfaction of the participant; key points on tasks and design | Questionnaires and open interviews |
| Experimenter observations | Different aspects, depending on the observation | Observation and note-taking by the observer |
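To show how a scale like SUS turns answers into a number, here is a minimal sketch of the standard SUS scoring rule (odd-numbered items contribute score − 1, even-numbered items contribute 5 − score, and the sum is scaled by 2.5 to a 0–100 range); the sample responses are made up for illustration:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten 1-5 Likert responses.
    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical answers from one participant, items 1..10:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```

Averaging these scores across participants gives a single comparable figure per design, which is what makes such scales convenient for small field studies.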
Good real-world examples of field experiments can be found in Goodman’s article and in Kjeldskov’s articles. Both contain practical information about what the authors did in their experiments and can serve as templates for field experiments.
References to dig deeper
Kjeldskov, J., & Graham, C. (2003). A Review of Mobile HCI Research Methods. In Human-Computer Interaction with Mobile Devices and Services (pp. 317–335). doi:10.1007/978-3-540-45233-1_23
Brewster, S. (2002). Overcoming the Lack of Screen Space on Mobile Computers. Personal and Ubiquitous Computing, 6, 188–205.
Nielsen, C. (1998). Testing in the Field. In Proceedings of the Third Asia Pacific Computer Human Interaction Conference (pp. 285–290).
Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In Jordan, P.W., Thomas, B., Weerdmeester, B.A., & McClelland, I.L. (Eds.), Usability Evaluation in Industry (pp. 189–194).
Shneiderman, B. (1998). Designing the User Interface (3rd ed.). Addison-Wesley.
Rubin, J. (1994). Handbook of Usability Testing. Wiley.
Goodman, J., Brewster, S., & Gray, P. (2004). Using field experiments to evaluate mobile guides. In Proceedings of HCI in Mobile Guides, Workshop at Mobile HCI 2004.
Likert, R. (1932). A Technique for the Measurement of Attitudes. Archives of Psychology, 140, 1–55.
Kjeldskov, J., Skov, M.B., Als, B.S., & Høegh, R.T. (2004). Is it Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field. In Proceedings of Mobile HCI ’04 (pp. 61–73). Springer.
Kjeldskov, J., & Skov, M.B. (2014). Was it Worth the Hassle? Ten Years of Mobile HCI Research Discussions on Lab and Field Evaluations. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (MobileHCI ’14) (pp. 43–52). ACM. doi:10.1145/2628363.2628398