I came across this line of research several years ago when I was working on a related issue. A recent post on Yangqiao Lu’s (my wife’s) blog brought it to mind again. She described a device called the “interrotron” that Errol Morris uses for documentary interviews. Essentially, Morris conducts the interview from a separate room while his face is projected onto the camera lens facing the interviewee. He claimed people opened up more easily in this setup.
In the field of Human-Computer Interaction (HCI), it is important to understand how a human will interact with a given computer setup and how that interaction differs from one with another human. A fascinating study from Youngme Moon investigated the role of distance in deception. A group of undergraduate students—the white mice of the human psychology world—were asked to fill out a survey on a computer. Each student was told the computer collecting the responses was either in the same room, in the same city, or across the country.
Moon found that the respondents were more likely to exaggerate their qualifications and responses as the distance increased. In reality, the computer in the room was always the one collecting the information; the subjects were acting only on the perceived differences. In a related study, Bradner and Mark found this phenomenon was not limited to humans interacting with computers: subjects were less likely to believe, and more likely to deceive, a human from a different city than one they believed lived in the same city.
These results probably shouldn’t be surprising. When interacting with an entity near us, we likely assume there will be an increased level of accountability. A more interesting question is what this means for things like Mechanical Turk, online courses, and interviewers hiding behind their interrotrons.
It is amazing how small changes to an interaction can affect a user’s perception. A few years ago I worked on an experiment involving humans interacting with a computer agent; the human subjects had to follow the directions given by a computer-generated voice. Some were given instructions mostly by a female voice, while others mainly interacted with a male voice. After each test, I would collect feedback from the users. Even though both voices said the exact same words, the subjects almost universally hated the female voice. She was rude and impolite, while the man was friendly and helpful. Changing one parameter in the study vastly changed the way the users experienced the task.
I believe this is one of the reasons studying human subjects is so difficult. At least with a machine you generally have some amount of reproducibility. Assuming a deterministic algorithm, you should receive the same result given the same input. A human may give you very different responses depending on whether they skipped breakfast or whether the interview is done in person or on the phone.
I imagine there is very much an art to the interview. You want to distance yourself enough that you don’t influence the interviewee’s responses. However, distance yourself too much and the subject may be more likely to lie and exaggerate their answers. It is a delicate balance. I wonder if the interrotron strikes that balance.