Two themes arising from the analysis are discussed here. The first, ‘privacy’, encompasses smart-home project researchers’ discussions about the desirability of personal privacy, the need to protect the privacy of end-users of the technology, and the reasons for this. The second theme, ‘choice’, focuses on the paradox between solving ethical dilemmas in research by allowing end-users the freedom to choose, and the need ultimately to impose ethical judgments that limit end-users’ choices.
Privacy
Privacy is a “volatile and controversial concept” [30], and is thus difficult to define in a way that satisfies all reasonable objections. Nevertheless, there are clear societal norms protecting personal privacy, even if their boundaries are debated. One taxonomy of invasions of privacy includes:
the collection, storage, and computerization of information; the dissemination of information about individuals; peeping, following, watching, and photographing individuals; intruding or entering “private” places; eavesdropping, wiretapping, reading of letters; drawing attention to individuals; and forced disclosure of information [31].
When researchers were encouraged to speculate on the problems and possibilities of smart-home technologies, issues germane to privacy arose frequently in the interviews. In particular, smart-home project researchers predominantly characterise privacy as the unbidden sharing of data, rather than as an invasion of personal space.
Researchers vary in their concern about privacy: several indicate they are relaxed about their own privacy. For example, Julian thinks that the loss of privacy is a ‘done deal’, and therefore, not an important worry:
Julian: if you’ve got a mobile phone, I think people can track you if they really want to. Indeed, I think there are apps where you can, in fact, track your child’s or your partner’s phone, so you do know where they are. Well, that’s “not a big problem”, so I don’t know what is? I’m not too worried about that.
Similarly, Brian suggests he is relaxed about large-scale commercial collection of personal data:
Brian: Actually, Google products are really good. If you use them you get a lot of benefit from having - and actually do I really care about what they’re doing with my data? It’s not like that they’re looking at my data personally.
These responses conceptualise privacy as access to data, and suggest that the participants may not regard relatively distant, person-neutral, corporate information collection as problematic in privacy terms. Even where more personal notes are struck, privacy is often considered in terms of data, rather than in more abstract terms, such as being watched. For instance, Margaret and Sally, both researchers who specialise in video technologies, contend that researchers in their field are less sensitive about being filmed in the course of their research than others:
Margaret: To be honest I think maybe [video researchers] are less sensitive than other people. For video we really want to get that data and we think it is really okay, so if you put it in my home I am okay with it. … I would be okay with [other sensors too].
Sally offers a similar reflection, which seems to corroborate this suggestion:
Sally: I’m a [video researcher] and for years we’ve been videoing each other hopping and walking up and down and I truthfully don’t care if there’s a picture of me doing that in a conference paper at all. There needs to be some balance I think. It’s frustrating as well because I know other research groups don’t do that [gain research approval].
Although Margaret and Sally’s willingness to share images of themselves in research contexts could, on the face of it, speak of a relaxed approach to being watched, their aims regarding sharing suggest that imaging is again linked to data privacy. Margaret links video to the need for data, while Sally is relaxed about sharing video of herself moving at a conference, as this takes place in a research context. Even when Sally and Margaret express strong resistance to sharing other types of information about themselves, there is no departure from this data-centred conception of privacy: the focus remains on the sharing of data, rather than on more emotive responses to being observed (such as feeling exposed). Discussing her resistance to sharing location data, Sally says:
Sally: A lot of people [share location data] all the time but then you can look at people, you can tell what time they leave work, if they’ve left early, you can see where they go afterwards. I think they’re potty to leave that thing running. … I suppose I just don’t want people knowing exactly where I am at what time of the day. …the journey to and from there and the journey from home to work, I don’t want those patterns of my behaviour being public.
Similarly, when asked to talk about the types of sensors that might make her uneasy if used outside the project, Margaret expresses concern about sensors that record the voice:
Margaret: For me I probably don’t like voice sensors in there to record the environmental voice in it.
Interviewer: So recording speech and so on.
Margaret: Yes, the speech things. It is just a personal [thing] for me. I think somehow it is also difficult to process that data. Sometimes people say something and you don’t really want to process that data. Maybe it is better than having the voice recognition system at home or those constrained environment is probably better for users otherwise they have this and they probably think, “Every single word I say will be recorded.” It will always make them think, “That data has to be recorded should I say something?” They always need to think about this so it will probably affect their life.
Margaret and Sally’s reflections are interesting because their desire not to be located or audio-recorded relates to the unbidden sharing or storage of data. Sharing video is tolerated because it is innocuous data; location is unacceptable if it is shared with the public. These concerns about data privacy are visible in a range of discussions of specific technologies, including less overtly intrusive smart-home technologies, such as door sensors, noise detectors or, as in the following examples, accelerometers and electricity usage monitors:
Norman: if someone has access to, let’s say, that data, and he sees you made 2,000 steps, it means if you were home, you’re somewhere else now, because you cannot walk 2,000 steps in your home at a single time.
Derek: The reason I don’t like all this data collection is just because I’m a data mining researcher. I know what they can do with the data. … [for example] I can simply use the electricity metering to detect whether someone has played their PlayStation. … [to detect that] this guy is playing video games for the whole night.
Nevertheless, despite the ubiquity of concerns about keeping data private, some researchers express unease about privacy at a more visceral level. For example, Gabriel articulates resistance to the deployment of any smart-home technology in his own home, stating that it would invade his privacy since the home should be a space where
Gabriel: I can escape from the entire world, and also from the invigilation.
Such statements are more consistent with a fundamental desire for physical privacy, and this attitude seems to be replicated by other researchers, especially those who express reservations about using the technologies in their own homes:
Dexter: I don’t want a camera in my house … I don’t like the wearable as well. I don’t like it at all. No, I don’t.
Interviewer: For what reason don’t you like it?
Dexter: Because, in the home, I just feel like it’s a constraint, too many constraints.
Interviewer: Right. So it’s aesthetic for you?
Dexter: Yes. I don’t feel comfortable wearing one thing all day. Once you participate you feel that you have to use it.
Despite Dexter’s agreement with the suggestion that his dislike of a wristband is aesthetic, his final response clearly relies on a notion of physical privacy. There is no suggestion of discomfort at sharing information; the discomfort is more visceral, an intrusion of the object into Dexter’s physical space and routines. Similarly, Aiden avers that cameras are problematic because:
Aiden: I guess is deeply rooted in the fact that someone has an idea that there is someone on the other side just watching at all times.
These extracts contain the sort of intuitive expressions of discomfort that may arise from being watched or followed, but privacy is most often discussed in terms of information. When invited to speculate beyond their immediate project, researchers clearly see privacy as a major issue in smart-home research, and are alert to the negative consequences of unauthorised sharing of information. This is unsurprising in one sense, because ensuring informational privacy has the appearance of a tractable problem. On the other hand, physical privacy is much more difficult to address. If potential users of smart-home technologies ‘just feel’ uncomfortable with being monitored by the technology, it is unclear if and how a focus on informational privacy could address the problem. This issue is explored further in the discussion section.
Choice
Privacy is a concept that appears to be linked to notions of consent, so it is unsurprising that ‘choice’ emerges as a second major theme in the interviews. Researchers often appeal to end-user choice to solve the ethical dilemmas inherent in ensuring privacy (and beyond). However, a commitment to end-user choice does not sit comfortably with researchers’ role in defining the boundaries of those choices. Researchers’ discussions of their own choices, which imply certain moral standpoints, and of how these might influence—or even constrain—the choices end-users make, sometimes sharply contradict that commitment to end-user choice.
End-user choices
Researchers frequently invoke the provision of choice to end-users in order to circumvent ethically challenging issues, which are, in the majority of cases, germane to privacy. These might be general questions of what data to share; for instance, Angela describes a model where end-users ‘opt in’ to particular smart-home technologies:
Angela: We provide this, and the end-users should choose about their home, what data they would like to provide. Like GPS is available on my phone, but I can turn it off. When I want some service based on that, I turn it on.
In a similar way, Frank appeals to the concept of choice to resolve a personal privacy issue raised by the question of whether to store personal data within the end-user’s home or online:
Frank: [how to solve data storage issues] depends on the people. Some people would prefer to send [data] online and don’t want to see anybody coming into their house and other people would prefer to keep it there and let people in to check it. If people can have a choice, that will be the best plan.
A further contentious issue within the project was the acceptability of recording video images. As we have seen in the first theme, collecting video was controversial among some researchers because of privacy concerns, but those in favour argued that end-user choice could justify the collection of this type of data:
Elsie: Some users might not even want the camera in their home, so we can’t put the whole thing in their homes. I think that really depends on what the user wants. If they said okay then we are happy to put it in.
The quotes in this section exemplify a significant sentiment: ethical problems, especially of privacy, could potentially be dissolved if an end-user chose to share their data. Indeed, some researchers take choice to imply responsibility: for example, Oliver suggests the dilemma of gaining consent for (inadvertent) data collection from third parties can be left with the end-user:
Oliver: If you have third party children coming into the house obviously their guardian would have to say if this is okay or not. It is very difficult. It will probably have to be then on the [end-user]’s shoulders to bear that burden to tell people and inform people. I think it is really difficult.
On the other hand, some researchers state that personal choice can only go so far. For instance, Godfrey avers that, because a smart-home system could, in principle, indiscriminately monitor anyone present in the home, the nature of smart-home projects exposes the notion of end-user choice to wider problems:
Godfrey: [end-users] can say, “I am going to record my own self doing anything I happen to feel like. And I can do that for my own personal ends; whatever those personal ends might happen to me. And that is completely my choice.” And that is legally, if not ethically uncomplicated. The problem becomes when [end-users] have a family and things like that and then you blur the boundaries of what is personal.
Godfrey indicates that the special problems of dealing with multiple occupants and visitors, who might include children or non-consenting adults, mean that choice is much more dynamic than simply allowing individual end-users to choose their preference. Firstly, since choices need to be made before the implementation of the system, research choices are potentially a major conduit for researchers’ own value judgements. Secondly, even where there is clear evidence of particular end-user choices, these cannot be unrestricted. As detailed in the first theme (‘privacy’), many researchers are concerned that a smart-home environment has the potential to broadcast data that would damage the end-user if shared in an unrestricted way; as such, it is logical that such sharing is prevented, even if that prevention brings the choices of end-users and researchers into conflict.
Making research choices
Researchers voice near-unanimous support for end-user choice as the arbiter of ethical dilemmas. But researchers also have to make choices about the selection, design and implementation of technologies, and these by their very nature imply that ethical choices are being made. While such choices might be framed as purely technical, or as driven by others, at some stage the researchers themselves have to make basic choices that plausibly affect the ultimate direction of the project. Blake summarises this succinctly:
Blake: Well, in research, you make your own choices. You decide whether you use this technique or something else, and so on.
While such an explanation does not explicitly address ethical issues, certain aspects of design clearly take a (very standard) ethical position. For instance, consider this explanation by Gwen of the way the home network was designed:
Gwen: …if you have a physical network, you don’t really need to encrypt because you just have a cable between device A and device B. Whereas, if you’re using a wireless link, then you definitely have to encrypt data. That basically slows things down, so an advantage of using a wired link is that you can possibly avoid having to encrypt anything. Therefore, you can increase the speed of your storage system.
Gwen assumes data security—either based on encryption or on the inherent features of a closed circuit—to be embedded in the design. An ethical choice is made that (for example) there is a duty to end-users to keep data secure. While this choice is clearly bulwarked by societal mores, regulatory norms, the law, and research ethics, it should nevertheless not be overlooked that the opposite choice can be made: it is open to researchers to decide that data is not personally identifiable, and that data security can therefore be relaxed. Indeed, it is not uncommon for researchers to report taking such a position, for instance:
Phoebe: The wristband is basically just accelerometer counts of my hands swinging around as I move around. Maybe it’s got location data that coarsely says which room I’m in in my own home, which is pretty meaningless.
Carter: I know for a fact that there’s absolutely, well, nothing they can know about me that I really don’t want them to know. The microphone is not able to record sound or send a sound anywhere else. The accelerometer data is literally just accelerometer data and at the end of the day there’s really only so much you can find out from that. You can’t really find out if someone is doing any specified task.
Yet, while researchers tend to defer to research guidelines in spite of such value judgements, some ethical questions require a decision to be taken at the design stage. For instance, should the project focus on technologies with a high risk of personal identifiability (like video)? Where is the balance between gathering valuable data and user acceptability? What is practical? What is relevant? Some of these questions about what should be monitored had been put to clinicians, but Florence highlights the ultimate need for researchers themselves to discover what information to gather:
Interviewer: Presumably you’re collecting this data because clinicians have said that this type of data would be useful?
Florence: Yes. That’s a question for us, as well. In the very beginning, we didn’t know, even though we had a literature review. Many literature said that’s a wearable sensor, so you can use it to detect that kind of activity. Daily activity. But that’s not a strong conclusion to say that we need these sensors, these sensors, and that this list of sensors can give you this list of activity recognition.
Interviewer: So you’re almost collecting data to see what you can find out.
Florence: That’s one of the tasks we need to do. We deploy a list of sensors, to see which one, what list of sensors is actually useful to detect the daily living activity of the user.
Florence’s comments point toward a paradox of experimental research: to decide what to monitor, the researcher must first start monitoring. While discussions with clinicians and the public can help guide these decisions, the researchers explain that some guesswork is always needed. For example, Connor indicates the need for researchers to anticipate what end-users would accept:
Connor: Basically, we are basing our choices on the fact that we think that the [end-users] that volunteer are happy to do that.
Connor’s comment contains an echo of the earlier appeal to end-user choice. Indeed, design choices are not framed as explicitly restrictive of ethical choices, but simply as technical decisions. Nonetheless, as stated earlier, allowing end-users to follow their preferences in unrestricted ways is ultimately problematic for some researchers, and leads to discussion of controls on end-user behaviour, although such controls are rarely, if ever, made explicit to end-users.
Influencing end-user choice
In some instances, researchers describe decisions that had been explicitly made to prevent end-users from making ‘bad’ choices. Sometimes these were framed, as above, as technical decisions, but these technical considerations merge with clear intentions to prevent users from acting in certain ways. At a whole-project level, the smart-home project would not store ‘raw’ data as collected by sensors. Instead, such data would be processed immediately into higher-level data by the system (a server) within the home, and that system would not be connected to the wider internet. In the following extract, Seth justifies restrictions on end-user access to processed data, on the grounds that access may cause them to reflect negatively on their actions:
Seth: [end-users] shouldn’t have access [to their data]. Because when you look at your own data, you feel like you didn’t do very well.
Other researchers suggest that end-user data should be shared with health professionals, but that the decision about what to share should rest with researchers, rather than with the end-users themselves:
Austin: But I think the user isn’t necessarily to be the one to decide what data they will share.
As with Elsie’s earlier statement about allowing users to choose whether video is recorded, such questions could still conceivably be framed as technical choices, and thus rightly within the ambit of researchers to make. However, several researchers assert that end-users may use the technology for ethically suspect purposes. Roger is among several researchers who find the hypothetical possibility that parents could use smart-home technology to monitor their children’s behaviour ethically troubling:
Roger: [a smart-home system] would really have to be well thought out to avoid misuse … let’s say that you’re living with your children in such a house that you can monitor their every step. I think that’s not appropriate, or that’s not ethical. You would not want your children to behave good because you’re monitoring them, but you want them to behave in a good way because that’s the right thing to do.
These sentiments are clearly at odds with the views noted previously, which proposed end-user choice—and the wider principle of end-user consent—as a justification for the collection and sharing of personal data. Yet allowing end-user choice remains important for most researchers. For one thing, researchers perceive difficulties in preventing end-users from using the system as they choose. As Leon says:
Leon: I think [poor end-user choices are] the concern of the project, yes. I think it’s very difficult to force people. Not only that it might be ethically questionable, but I think it’s technically difficult to force them not to use the system in this way, or that way.
Thus, with regard to the dilemma of parents monitoring their own children, Elliot states that end-users, armed with the correct information, would make the right choice:
Elliot: I mean, we should still want these people to choose what the right thing is for them on their own free will.
Interviewer: Right, so you want people to make the right choice?
Elliot: We should give the information so they are able to make the right choice.
Elliot’s appeal, framed in the language of information, choice and free will, closely traces the conceptual language of consent, and emphasises researchers’ reluctance to impose choices overtly on end-users, even where a need to limit end-users’ choices to ethically acceptable options is asserted.
Choice was a major area of discussion in the interviews. Many researchers suggest that end-user choice might solve ethical dilemmas; indeed, the preponderance of researchers favouring this approach may suggest a libertarian slant to their thinking. Nevertheless, researchers also indicate that end-user choices can be restricted. Such restriction is almost always effected covertly, through design, rather than in ways that might overtly challenge the notion of end-user choice. The values of researchers may thus be channelled into the basic design choices they make, yet the ethical dimension of design is rarely acknowledged. The implications of this and the previous theme are discussed below.