Abstract
Providers of new information and communication technologies collect increasing amounts of personal data, much of which is user generated. Unless use policies are privacy-friendly, this leaves users vulnerable to privacy risks such as exposure through public data visibility or intrusive commercialisation of their data through secondary data use. Due to complex privacy policies, many users of online services unwillingly agree to privacy-intruding practices. To give users more control over their privacy, scholars and regulators have pushed for short, simple, and prominent privacy policies. The premise has been that users will see and comprehend such policies and then rationally adjust their disclosure behaviour. In this paper, using the case of a social network site, we show that this premise does not hold. We invited 214 regular Facebook users to join a new fictitious social network. We experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy. Half of our participants miscomprehended even this transparent privacy policy. When privacy threats of secondary data use were present, users remembered the policies as more privacy-friendly than they actually were and unwittingly uploaded more data. To mitigate such behavioural pitfalls, we present design recommendations to improve the quality of informed consent.
| Original language | English |
|---|---|
| Title of host publication | Twenty-Eighth European Conference on Information Systems (ECIS2020) |
| Editors | Association for Information Systems |
| Place of Publication | An Online AIS Conference |
| Pages | 1 - 17 |
| Publication status | Published - 2020 |
Austrian Classification of Fields of Science and Technology (ÖFOS)
- 502050 Business informatics
- 102024 Usability research
- 102013 Human-computer interaction
- 508
- 505002 Data protection
- 303029 Addiction research
- 305909 Stress research
- 501003 Occupational psychology
- 501015 Organisational psychology
- 501011 Cognitive psychology
- 211912 Product design