The privacy paradox states that users' behavior is not in line with their stated privacy preferences.
For example, users might say that they are very concerned about their privacy; however, they willingly give away their personal data in exchange for perceived benefits.
In this blog post I will go through the history of the paradox and the studies that have confirmed it, and finally present counterarguments to its validity.
1.) History of the privacy paradox
The term was first coined in 2001 by Barry Brown, who was working for HP at the time and who passed away in February 2022, in a paper titled Studying the Internet Experience.
In this paper he studied the online shopping behaviour of 13 participants from the South West of England. He found that although participants were wary about giving websites their personal information, such as credit card details, they still liked to shop online for convenience or monetary reasons. The author gives one possible explanation:
"[P]articipants feel that they should be concerned about these issues – in terms of appearing as reasonable individuals – although in practice these issues may not actually influence their practice"
In other words, participants gave the answers they thought the researcher wanted to hear, but those answers did not actually reflect their own behaviour. It is worth noting that a study with only 13 participants from one particular region is not at all generalizable.
Nonetheless, the privacy paradox was confirmed in subsequent studies, and the term was firmly established by Norberg et al. in 2007 in their paper The Privacy Paradox: Personal Information Disclosure Intentions versus Behaviors.
2.) Explanations of the privacy paradox
Alessandro Acquisti and Jens Grossklags presented a study in 2005 - Privacy and Rationality in Individual Decision Making - that confirmed the privacy paradox. One example of their findings: 87.5% of their 119 US-based participants who claimed to be very concerned about the offline collection of their personal information had and used supermarket loyalty cards. There are three reasons for this (and similar) behavior.
(a) Incomplete information
Incomplete information means underestimating or being unaware of privacy risks, such as profiling, identity theft, etc. People also underestimate the government's potential to spy on them.
(b) Bounded rationality
Bounded rationality means the difficulty of processing all relevant information, such as reading privacy policies or considering all the actors involved in, e.g., a credit card transaction. Another example is the misconception that deleting browser cookies enables private browsing. However, a 2019 study found that even people who are well informed about privacy risks are not willing to put any effort into protecting their privacy.
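To get a feel for why processing all of this information is unrealistic, here is a small back-of-the-envelope calculation. All numbers (policies per year, policy length, reading speed) are my own illustrative assumptions, not measured data:

```python
# Back-of-the-envelope estimate of the effort needed to actually read
# the privacy policies one agrees to. All numbers are illustrative assumptions.

policies_per_year = 50    # assumed number of new or updated policies encountered per year
words_per_policy = 4000   # assumed average length of a privacy policy
reading_speed_wpm = 200   # assumed reading speed for dense legal text (words per minute)

minutes_total = policies_per_year * words_per_policy / reading_speed_wpm
hours_total = minutes_total / 60

print(f"Estimated reading time: {hours_total:.0f} hours per year")
# With these assumptions: 50 * 4000 / 200 = 1000 minutes, i.e. roughly 17 hours
# per year, before even trying to reason about third-party sharing or retention.
```

Even with these modest assumptions, "just read the privacy policy" already amounts to a couple of full working days per year.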
(c) Psychological deviations from rationality
Lastly, even if humans have all the information and are able to process it fully, they still behave irrationally. One of these irrational behaviors is that humans tend to overvalue immediate rewards over future negative effects. An example would be not paying for a VPN now, even though it would make one's internet surfing more privacy-preserving in the future.
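Behavioural economics often describes this tendency with a present-biased ("beta-delta") discounting model. The following toy sketch uses made-up parameter values and monetary amounts, purely to show how a larger future benefit can feel smaller than a modest immediate cost:

```python
# Toy sketch of present-biased ("beta-delta" / quasi-hyperbolic) discounting.
# All parameter values and monetary amounts are made up for illustration.

def perceived_value(amount, months_ahead, beta=0.4, delta=0.99):
    """How much a payoff arriving `months_ahead` months from now feels worth today."""
    if months_ahead == 0:
        return amount          # immediate payoffs are felt at full value
    return beta * (delta ** months_ahead) * amount

vpn_cost_now = 5.0             # assumed: VPN subscription paid immediately
privacy_benefit_later = 10.0   # assumed: value of avoided privacy harm, one year out

print(perceived_value(vpn_cost_now, 0))             # 5.0  -> cost is felt in full
print(perceived_value(privacy_benefit_later, 12))   # ~3.5 -> larger benefit feels smaller
# Although the undiscounted benefit (10.0) exceeds the cost (5.0), the present-biased
# perception (~3.5) is below 5.0, so skipping the VPN can feel like the sensible choice.
```

With these assumed numbers, the future benefit is objectively twice the cost, yet the discounted perception makes forgoing the protection feel reasonable today.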
Nowadays, research tries to explain the alleged paradox with models of the gap between stated preferences and observed behaviour. I will not go into detail about these models here, but if you're interested, Spyros Kokolakis has written a great paper on the subject: Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon.
[Image: paper by Gerber et al.]
3.) Rebuttal of the privacy paradox
In his paper aptly titled The Myth of the Privacy Paradox, Daniel J. Solove argues that
"the privacy paradox is a myth created by faulty logic".
He claims that research focusing on the privacy paradox falsely generalizes from people's behavior in a specific context to their privacy attitudes in general. As privacy is a very personal concept, this argument seems reasonable. According to Solove, rather than a paradox, the difference between behavior and stated preference is the result of differing perceptions of value and risk. Sharing personal data might be harmful for one person but could be seen as completely risk-free by another, depending on their respective levels of trust. This trust can, e.g., be higher in the EU, where citizens are protected by the GDPR, than in other parts of the world. So if an EU citizen is willing to share data, that does not mean that they care less about privacy than, e.g., a US citizen who might not be protected by similar laws. Solove concludes (rightly, in my opinion) that individuals should not be responsible for regulating their own privacy, as the subject is too complex. Instead, privacy regulations and technical tools to enhance individuals' privacy should be put in place.
-PK, 18.08.2022