Rina Steenkamp - Privacy and technology
[The Open Internet rules and order | Against jawboning | Manila principles on intermediary liability | Americans' privacy strategies post-Snowden | A precautionary approach to big data privacy | European cyber security perspectives 2015 | How polymorphic warnings reduce habituation in the brain - Insights from an fMRI study | Data protection broken badly | Consumer Privacy Bill of Rights Act | Surreptitiously weakening cryptographic systems | Geotagging one hundred million Twitter accounts with total variation minimization | Privacy and cyber security - Emphasizing privacy protection in cyber security activities | Virtual currency schemes - a further analysis | PowerSpy - Location tracking using mobile device power analysis | Schrödinger's cybersecurity | From social media service to advertising network - A critical analysis of Facebook's revised policies and terms | Privacy implications of health information seeking on the web | Cookie sweep combined analysis - Report | Threat landscape and good practice guide for smart home and converged media | Internet of Things security study - Home security systems report | Careless whispers - How speech is policed by outdated communications legislation | Out of balance - Defamation law in the European Union | State of privacy report 2015]
A publication by the FCC.
From the 'Open Internet' overview page:
"Bright Line Rules:
- No Blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.
- No Throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
- No Paid Prioritization: broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration of any kind - in other words, no 'fast lanes.' This rule also bans ISPs from prioritizing content and services of their affiliates."
Read more:
See also:
An article by Derek E. Bambauer.
Abstract:
"Despite the trend towards strong protection of speech in U.S. Internet regulation, federal and state governments still seek to regulate on-line content. They do so increasingly through informal enforcement measures, such as threats, at the edge of or outside their authority – a practice this Article calls 'jawboning.' The Article argues that jawboning is both pervasive and normatively problematic. It uses a set of case studies to illustrate the practice's prevalence. Next, it explores why Internet intermediaries are structurally vulnerable to jawboning. It then offers a taxonomy of government pressures based on varying levels of compulsion and specifications of authority. To assess jawboning's legitimacy, the Article employs two methodologies, one grounded in constitutional structure and norms, and the second driven by process-based governance theory. It finds the practice troubling on both accounts. To remediate, the Article considers four interventions: implementing limits through law, imposing reputational consequences, encouraging transparency, and labeling jawboning as normatively illegitimate. In closing, it extends the jawboning analysis to other fundamental constraints on government action, including the Second Amendment. The Article concludes that the legitimacy of informal regulatory efforts should vary based on the extent to which deeper structural limits constrain government's regulatory power."
Read more:
An initiative by a coalition of digital rights organisations.
From the 'Manila principles' website:
"All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association and the right to privacy. With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks."
Read more:
See also:
A report by PewResearchCenter.
From the overview page:
"Overall, nearly nine-in-ten respondents say they have heard at least a bit about the government surveillance programs to monitor phone use and internet use. [...] 34% of those who are aware of the surveillance programs (30% of all adults) have taken at least one step to hide or shield their information from the government. For instance, 17% changed their privacy settings on social media; 15% use social media less often; 15% have avoided certain apps and 13% have uninstalled apps; 14% say they speak more in person instead of communicating online or on the phone; and 13% have avoided using certain terms in online communications. [...] One potential reason some have not changed their behaviors is that 54% believe it would be 'somewhat' or 'very' difficult to find tools and strategies that would help them be more private online and in using their cell phones."
Read more:
See also:
A report by the Internet Watch Foundation in partnership with Microsoft.
From 'Conclusions':
"The finding that 667 (17.5%) of the images and videos assessed depicted children aged 15 years and younger, with 286 (42.9%) of these depicting children assessed as being 10 years and younger indicates a disturbing trend for younger children to be producing sexually explicit content which is being distributed online. [...] The finding that 85.9% of the content depicting children aged 15 years and younger was created using a webcam in a home environment, most commonly a bedroom or bathroom, is surprising as it challenges the traditional notion that youth-produced sexual content is created and distributed via mobile phone or other mobile device. [...] Of particular concern is that the young people depicted took no steps to conceal their identity or location, even in many cases using their real names. In some instances, it is apparent that this content is being knowingly created to appear on public websites, however as 100% of the content depicting children aged 15 years or younger had been harvested from its original upload location and further distributed via third party websites, control over its removal or onward distribution has been lost. "
Read more:
A paper by Arvind Narayanan, Joanna Huey and Edward W. Felten.
From the introduction to the paper:
"Once released to the public, data cannot be taken back. As time passes, data analytic techniques improve and additional datasets become public that can reveal information about the original data. It follows that released data will get increasingly vulnerable to re-identification - unless methods with provable privacy properties are used for the data release."
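The best-known example of a release method with provable privacy properties is differential privacy. As a minimal illustrative sketch (not code from the paper), the standard Laplace mechanism answers a counting query by adding noise calibrated to the query's sensitivity:

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two
    i.i.d. exponential variates with rate 1/scale."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person's record changes the count by at most 1), so Laplace
    noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of `epsilon` mean stronger privacy and noisier answers; the guarantee holds regardless of what auxiliary datasets are published later, which is exactly the failure mode the quoted passage describes.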
Read more:
A publication by KPN, NCSC, TNO and National Police.
From 'Help, the app ate my password!':
"On a gloomy day in January with the office still buzzing of fresh new year's resolutions we received an e-mail from a journalist containing a link to a website. While expecting an interesting article or a scandalous new story my eyebrows went up half a meter when I looked upon the contents displayed on my screen. Before my eyes were more than a thousand credentials of users of the Telfort website. And even worse, a test of some of these credentials actually proved them correct as well. Now this is odd… why would a hacker keep stolen credentials on a publicly accessible website indexed by Google and easy to find for anyone? And even more important, how did he manage to get these passwords? In this article I would like to give you an insight as to how we tried to answer these questions, but also to point out the risk involved in the millions of easy smartphone apps available."
Read more:
A paper by Bonnie Brinton Anderson, C. Brock Kirwan, Jeffrey L. Jenkins, David Eargle, Seth Howard, and Anthony Vance.
Abstract:
"Research on security warnings consistently points to habituation as a key reason why users ignore security warnings. However, because habituation as a mental state is difficult to observe, previous research has examined habituation indirectly by observing its influence on security behaviors. This study addresses this gap by using functional magnetic resonance imaging (fMRI) to open the 'black box' of the brain to observe habituation as it develops in response to security warnings. Our results show a dramatic drop in the visual processing centers of the brain after only the second exposure to a warning, with further decreases with subsequent exposures. To combat the problem of habituation, we designed a polymorphic warning that changes its appearance. We show in two separate experiments using fMRI and mouse cursor tracking that our polymorphic warning is substantially more resistant to habituation than conventional warnings. Together, our neurophysiological findings illustrate the considerable influence of human biology on users' habituation to security warnings."
Read more:
A publication by EDRi, Access, Panoptykon Foundation and Privacy International.
From the introduction:
"The European Union is trying to establish a data protection framework that will put citizens back in control of their personal data and ensure a high standard of protection for their fundamental rights to privacy and data protection. This reform would also bring harmonisation to the EU data protection framework and update the rules in place that dates back from 1995, where technology was very different from today. Since 2012, the European Commission and the European Parliament both have produced a text that, while not being perfect, would greatly benefit citizens and businesses, establishing a common set of rules for the whole EU and guaranteeing high standards for personal data protection. Unfortunately, within the Council of the EU, Member State governments are working to undermine this reform process. For more than three years, the Council has not only failed to show support for this reform and negotiations, but is now proposing modifications to the text that would lower down the existing level of data protection in Europe guaranteed by the Directive 95/46 and even below the standards required to be in line with the EU treaties."
Read more:
See also:
An Administration Discussion Draft published at Whitehouse.gov.
The purpose of the bill:
"To establish baseline protections for individual privacy in the commercial arena and to foster timely, flexible implementations of these protections through enforceable codes of conduct developed by diverse stakeholders."
Read more:
See also:
A paper by Bruce Schneier, Matthew Fredrikson, Tadayoshi Kohno and Thomas Ristenpart.
Abstract:
"Revelations over the past couple of years highlight the importance of understanding malicious and surreptitious weakening of cryptographic systems. We provide an overview of this domain, using a number of historical examples to drive development of a weaknesses taxonomy. This allows comparing different approaches to sabotage. We categorize a broader set of potential avenues for weakening systems using this taxonomy, and discuss what future research is needed to provide sabotage-resilient cryptography."
Read more:
See also:
A paper by Ryan Compton, David Jurgens and David Allen.
From the Abstract:
"Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data."
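The "total variation minimization" of the title is, roughly, an L1 objective over the social graph: an unknown user is placed at the point minimizing total distance to their connections' locations. A heavily simplified sketch of that idea — iterative label propagation assigning each unknown user the coordinate-wise median of their neighbours' current estimates (all names and coordinates below are made up) — might look like:

```python
from statistics import median

def infer_locations(friends, known, iterations=5):
    """Propagate home locations over a friendship graph.

    friends: dict user -> list of connected users
    known:   dict user -> (lat, lon) for users whose location is known
    Unknown users get the coordinate-wise median of their friends'
    estimates, the per-axis minimizer of the L1 (total variation) cost.
    """
    est = dict(known)
    for _ in range(iterations):
        updates = {}
        for user, nbrs in friends.items():
            if user in known:
                continue  # ground-truth locations stay fixed
            pts = [est[f] for f in nbrs if f in est]
            if pts:
                updates[user] = (median(p[0] for p in pts),
                                 median(p[1] for p in pts))
        est.update(updates)
    return est
```

The sketch shows why "location sharing preferences" offer so little protection: a user who never geotags anything is still located by the handful of friends who do.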
Read more:
A report by the Office of the Privacy Commissioner of Canada.
Abstract:
"This research report examines the common interests and tensions between privacy and cyber security. It explores how challenges for cyber security are also challenges for privacy and data protection, considers how cyber security policy can affect privacy, and notes how cyberspace governance and security is a global issue. Finally, it sets out key policy directions with a view to generating dialogue on cyber security as an important element of online privacy protection."
Read more:
See also:
A publication by the European Central Bank.
From the Executive Summary:
"The [Virtual currency schemes (VCS)] 'ecosystem' consists mainly of specific, new categories of actors which were not present in the payments environment before. Moreover, emerging business models are built around obtaining, storing, accessing and transferring units of virtual currency. Many schemes have appeared and some have already disappeared again, but around 500 exist at the time of writing. This is in stark contrast to the situation of two years ago when it was only really Bitcoin that was known about. [...] VCS present several drawbacks and disadvantages for users, i.e. lack of transparency, clarity and continuity; high dependency on IT and on networks; anonymity of the actors involved; and high volatility. In addition, users face payment system-like risks owing to their direct participation in the VCS, as well as risks associated with certain intrinsic characteristics of VCS, i.e. the counterparty risk associated with the anonymity of the payee, the exchange rate risk associated with high volatility and the risk of investment fraud associated, inter alia, with the lack of transparency. There are currently no safeguards to protect users against these risks. Nevertheless, VCS present some advantages as perceived by users. They could pose a challenge to retail payment instruments and innovative payment solutions as regards costs, global reach, anonymity of the payer and speed of settlement. A new or improved VCS, if it overcame the current barriers to widespread use, might be more successful than the existing ones, specifically for payments within 'virtual communities'/closed-loop environments (e.g. internet platforms) and for cross-border payments."
Read more:
See also:
An article by Yan Michalevsky, Dan Boneh, Aaron Schulman and Gabi Nakibly.
Abstract:
"Modern mobile platforms like Android enable applications to read aggregate power usage on the phone. This information is considered harmless and reading it requires no user permission or notification. We show that by simply reading the phone's aggregate power consumption over a period of a few minutes an application can learn information about the user’s location. Aggregate phone power consumption data is extremely noisy due to the multitude of components and applications that simultaneously consume power. Nevertheless, by using machine learning algorithms we are able to successfully infer the phone’s location. We discuss several ways in which this privacy leak can be remedied."
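The attack as described is a supervised learning problem: match a measured power trace against traces previously recorded while driving known routes. A toy sketch of that matching step — nearest-neighbour comparison of normalized traces, far simpler than the paper's machine-learning pipeline, with made-up route names — conveys the idea:

```python
def normalize(trace):
    """Zero-mean, unit-scale a power trace, so that the trace's
    shape matters more than absolute draw (screen brightness,
    background apps)."""
    mean = sum(trace) / len(trace)
    centered = [x - mean for x in trace]
    scale = max(abs(x) for x in centered) or 1.0
    return [x / scale for x in centered]

def classify_route(measured, reference_traces):
    """Return the label of the reference trace (route name -> power
    samples) closest to the measured trace in squared distance."""
    m = normalize(measured)
    best_route, best_dist = None, float("inf")
    for route, ref in reference_traces.items():
        r = normalize(ref)
        d = sum((a - b) ** 2 for a, b in zip(m, r))
        if d < best_dist:
            best_route, best_dist = route, d
    return best_route
```

The normalization step is the key intuition: even though the absolute power level is swamped by noise from other components, the shape of consumption over a drive — climbs, descents, cell-tower handoffs — is characteristic of the route.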
Read more:
See also:
An article by Derek E. Bambauer.
Abstract:
"Both law and cybersecurity prize accuracy. Cyberattacks, such as Stuxnet, demonstrate the risks of inaccurate data. An attack can trick computer programs into making changes to information that are technically authorized but incorrect. While computer science treats accuracy as an inherent quality of data, law recognizes that accuracy is fundamentally a socially constructed attribute. This Article argues that law has much to teach cybersecurity about accuracy. In particular, law's procedural mechanisms and contextual analysis can define concepts such as authorization and correctness that are exogenous to code. The Article assesses why accuracy matters, and explores methods law and cybersecurity deploy to attain it. It argues both law and cybersecurity have but two paths to determining accuracy: hierarchy and consensus. Then, it defends the controversial proposition that accuracy is constructed through social processes, rather than emerging from information itself. Finally, it offers a proposal styled on the common law to evaluate when accuracy matters, and suggests that regulation should bolster technological mechanisms through a combination of mandates and funding. Similar to the cat of Schrödinger's famous thought experiment, information is neither accurate nor inaccurate until observed in social context."
Read more:
A report by Brendan van Alsenoy, Valerie Verdoodt, Rob Heyman, Jef Ausloos and Ellen Wauters (iMinds-SMIT).
From '1. Introduction':
"Facebook's revised Data Use Policy (DUP) is an extension of existing practices. This nevertheless raises concerns because Facebook's data processing capabilities have increased both horizontally and vertically. By horizontal we refer to the increase of data gathered from different sources. Vertical refers to the deeper and more detailed view Facebook has on its users. Both are leveraged to create a vast advertising network which uses data from inside and outside Facebook to target both users and non-users of Facebook."
Read more:
See also:
An article by Timothy Libert.
Abstract:
"This article investigates privacy risks to those visiting health-related web pages. The population of pages analyzed is derived from the 50 top search results for 1,986 common diseases. This yielded a total population of 80,124 unique pages which were analyzed for the presence of third-party HTTP requests. 91% of pages were found to make requests to third parties. Investigation of URIs revealed that 70% of HTTP Referer strings contained information exposing specific conditions, treatments, and diseases. This presents a risk to users in the form of personal identification and blind discrimination. An examination of extant government and corporate policies reveals that users are insufficiently protected from such risks."
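The leakage mechanism here is the HTTP Referer header: when a health page requests a third-party resource (an ad, an analytics beacon), the page's own URL — which often names the condition — is sent to that third party. A minimal sketch of checking a Referer string for sensitive terms (the URL and term list below are illustrative, not from the study):

```python
from urllib.parse import urlparse, parse_qs

def referer_leaks(referer_url, sensitive_terms):
    """Return the sensitive terms exposed by a Referer string.

    Checks both the URL path and the query-string values, since
    health sites commonly encode the condition in either place.
    """
    parsed = urlparse(referer_url.lower())
    query_values = " ".join(
        v for vals in parse_qs(parsed.query).values() for v in vals)
    haystack = parsed.path + " " + query_values
    return [t for t in sensitive_terms if t in haystack]
```

Run against a hypothetical URL such as `https://example.org/conditions/hiv-treatment?q=hiv+symptoms`, the function reports the condition exposed to any third party receiving that Referer — which is why the study can recover "specific conditions, treatments, and diseases" from 70% of Referer strings.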
Read more:
A report by the Article 29 Data Protection Working Party (WP29).
From the Executive summary:
"The Article 29 Working Party in partnership with national regulators with responsibility for enforcing Article 5(3) of the ePrivacy Directive 2002/58/EC, as amended by 2009/136/EC, conducted a sweep of up to 478 websites in the e-commerce, media and public sectors across 8 member states. The sweep highlighted differences in the use of cookies across different target sectors and between the individual member states. The sweep also highlighted areas for improvement including a few cookies with duration periods of up to nearly 8000 years. This is in contrast to an average duration of 1 to 2 years. The sweep however also showed that 70% of the 16555 cookies recorded were third-party cookies. It was also shown that more than half of the third-party cookies were set by just 25 third-party domains."
Read more:
A study by David Barnard-Wills, Louis Marinos and Silvia Portesi (ENISA).
From the Executive Summary:
"Highlights of this study are:[...]
- Not all smart homes are created equally. There are multiple design pathways that lead to functional smart homes, ranging between localised and integrated home-automation systems. These pathways have their own security and privacy peculiarities, but also have shared issues and vulnerabilities.
- Smart homes will have significant privacy and data protection impacts. The increased number of interlinked sensors and activity logs present and active in the smart home will be a source of close, granular and intimate data on the activities and behaviour of inhabitants and visitors.
- Several economic factors may lead to poor security in smart home devices. Companies involved in the smart home market include home appliance companies, small start-up companies, and even crowd-funded efforts. These groups are likely to lack security expertise, security budgets and access to security research networks and communities."
Read more:
A report by HP.
From the related blog post:
"The Internet of Things is worse than just a new insecure space: it's a Frankenbeast of technology that links network, application, mobile, and cloud technologies together into a single ecosystem, and it unfortunately seems to be taking on the worst security characteristics of each."
Read more:
See also:
A report by Big Brother Watch.
From the Executive Summary:
"The social media revolution has changed the way people communicate with each other. Yet, whilst our communications have evolved the way crimes are dealt with has not and so we find ourselves using archaic legislation to police modern day crimes. Without exception, the laws that regulate what is said on social media platforms were passed before companies such as Facebook, Twitter and Ask FM became widely used. The laws used to police our communications are woefully out of date. As this report shows, there has been an increase in charges and convictions and cases involving the use of social media. It is therefore important that the legislation which is used by police and prosecutors is examined to ensure it doesn’t become obsolete in light of new technology."
Read more:
See also:
A report by Scott Griffen, Barbara Trionfi and Steven M. Ellis (IPI).
From the Introduction:
"Unbalanced defamation law and practice — too much protection for reputation, too little regard for free expression and the need for a free press — does not just affect those directly involved: defamation proceedings have a cascading impact on the overall media culture and the public's right to know. Negative experiences in such proceedings may lead to a chilling effect among media colleagues at the institutional, local and national level, causing journalists or critical commentators to be overly cautious in publishing news in the public interest, if they continue to work at all. Reporters may be wary of covering certain topics, some of which — especially investigations into the actions of public officials — will be seen as off-limits altogether, perceived as not being worth the legal costs, long and distracting proceedings, the infamy of being sued or charged with a crime, expensive damages, criminal fines, the threat of imprisonment, and the possibility of losing one's job and having one's reputation tarnished. But the ultimate losers in this situation are EU citizens who depend upon the free flow of information to make informed decisions about issues that matter in their daily lives, including the identities of the officials who make important policy decisions that impact society. Moreover, without this information, power risks becoming concentrated in the hands of political and business elites and serious abuses go unnoticed or unpunished. Good defamation policy is not just about making life easier for journalists: it is essential to the foundation of the democratic process."
Read more:
A report by Symantec.
From the introduction:
"Symantec's State of Privacy report offers a snapshot of current perceptions on data privacy, and reveals that people don't believe businesses and governments are doing enough to keep their information safe. It also provides guidance on how businesses can address consumer concerns. We surveyed over 7,000 respondents across seven European markets: UK, France, Germany, Denmark, Spain, Netherlands and Italy, and learned nearly 60 per cent have experienced a data protection issue in the past. Our findings suggest that businesses and governments have an opportunity to strengthen trust with Europeans by implementing aggressive security measures and assuring people that their data is gathered, stored and used responsibly."
Read more: