By Jeremiah Posedel


Big data derives from “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[1] Apple’s just-released iOS 8 software development kit (“iOS 8 SDK”) highlights this growth.[2] The iOS 8 SDK touts over 4,000 application programming interfaces (“APIs”), along with “greater extensibility” and “new frameworks.”[3] For example, HomeKit and HealthKit, two of these new frameworks, serve as hubs for data generated by other applications and provide user interfaces to manage that data and related functionality.[4] HealthKit’s APIs “provide the ability for health and fitness apps to communicate with each other … to provide a more comprehensive way to manage your health and fitness.”[5] HomeKit integrates home automation functions in a central location within the iOS device, allowing users to lock/unlock doors, turn cameras on/off, change or view thermostat settings, turn lights on/off, open garage doors, and more – all from a single app.[6] The iOS 8 SDK will inevitably lead to the development of countless apps and other technologies that “capture, aggregate, and process an ever-greater volume, velocity, and variety of data,” contributing immense volumes of data to the already-gargantuan big data ecosystem.

In the context of our health and wellbeing, big data – which includes, but is definitely not limited to, data generated by future iOS 8-related technologies – has boundless potential and can have a momentous impact on biomedical research, leading to new therapies and improved health outcomes. The big data reports recently issued by the White House and the President’s Council of Advisors on Science and Technology (“PCAST”) echo this fact. However, these reports also emphasize the challenges posed by applying the current approach to privacy to big data, including the focus on notice and consent.

After providing some background, this article examines the impact of big data on medical research. It then explores the privacy challenges posed by focusing on notice and consent with respect to big data. Finally, this article describes an alternative approach to privacy suggested by the big data reports and its application to biomedical research.


On May 1, 2014, the White House released its report on big data, “Big Data: Seizing Opportunities, Preserving Values” (“WH Report”). The WH Report was supported by a separate effort and report produced by PCAST, “Big Data and Privacy: A Technological Perspective” (“PCAST Report”).[7] The privacy implications of the reports on biomedical research – an area where big data can arguably have the greatest impact – are significant.

Notice and consent provide the foundation upon which privacy laws are built. Accordingly, it can be difficult to envision a situation where these conceptual underpinnings, while still important, begin to yield to a new approach. However, that is exactly what the reports suggest in the context of big data. As HealthKit and iOS 8 SDK demonstrate, we live in a world where health data is generated in numerous ways, both inside and outside of the traditional patient-doctor relationship. If given access to all this data, researchers can better analyze the effectiveness of existing therapies, develop new therapies faster, and more accurately predict and suggest measures to avoid the onset of disease, all leading to improved health outcomes. However, existing privacy laws often restrict researchers’ access to such data without first soliciting and obtaining proof of appropriate notice and consent.[8] Focusing on individual notice and consent in some instances can be unnecessarily restrictive and can stall the discovery and development of new therapies. This is exacerbated by the fact that de-identification (or pseudonymization) – a process typically relied upon to alleviate some of these obstacles – is losing its effectiveness or would require stripping data of much meaningful value. Recognizing these flaws, the WH Report suggests a new approach where the focus is taken off of the collection of data and turned to the ways in which parties, including biomedical researchers, use data – an approach that allows researchers to maximize the possibilities of big data, while protecting individual privacy and ensuring that data is processed in a reasonable way.

The Benefits of Big Data to Biomedical Research

Before discussing why a new approach to privacy in the context of big data and biomedical research may be necessary, it is first important to understand the role of big data in research. As noted, the concept of big data encompasses “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[9] The word “growing” is essential here, as the sources of data contributing to the big data ecosystem are extensive and will continue to expand, especially as Internet-enabled devices such as those contemplated by HomeKit continue to develop.[10] These sources include not only the traditional doctor-patient relationship, but also consumer-generated and other non-traditional sources of health data such as those contemplated by HealthKit, including wearable technologies (e.g., Fitbit), patient-support sites, wellness programs, and electronic/personal health records. These sources expand even further when health data is combined with non-health data, such as lifestyle and financial data.[11]

The WH Report recognizes that these new abilities to collect and process information have the potential to bring about “unexpected … advancements in our quality of life.”[12] The ability of researchers to analyze this vast amount of data can help “identify clinical treatments, prescription drugs, and public health interventions that may not appear to be effective in smaller samples, across broad populations, or using traditional research methods.”[13] In some instances, big data can in fact be the necessary component of a life-changing discovery.[14]

Further, the WH Report finds that big data holds the key to fully realizing the promise of predictive medicine, whereby doctors and researchers can fully analyze an individual’s health status and genetic information to better predict the onset of disease and/or how an individual might respond to specific therapies.[15] These findings have the ability to affect not only particular patients but also family members and others with a similar genetic makeup.[16] It is worth noting that the WH Report highlights bio-banks and their role in “confronting important questions about personal privacy in the context of health research and treatment.”[17]

In summary, big data has a profound impact on biomedical research and, as a necessary result, on those that benefit from the fruits of researchers’ labor. The key to its realization is a privacy regime that can unlock for researchers vast amounts of different types of data obtained from diverse sources.

Problems With the Current Approach

Where the use of information is not directly regulated by the existing privacy framework, providing consumers with notice and choice regarding the processing of their personal information has become the de facto rule. Where the collection and use of information is specifically regulated (e.g., HIPAA, FCRA, etc.), notice and consent is required whenever information is used or shared in a way not permitted under the relevant statute. For example, under HIPAA, a doctor can disclose a patient’s personal health information for treatment purposes (permissible use) but would need to provide the patient with notice and obtain consent before disclosing the same information for marketing purposes (impermissible use). To avoid this obligation, entities seeking to share data in a way not described in the privacy notice and/or permitted under applicable law can de-identify the data, to purportedly make the data anonymous (for example, John Smith drives a white Honda and makes $55,000/year (identified) v. Person X drives a white Honda and makes $55,000/year (de-identified)).[18] Except under very limited circumstances (e.g., HIPAA limited data sets), the requirements regarding notice and consent apply equally to biomedical research as to more commercial uses.
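The John Smith example above can be sketched in a few lines of code. The following is a minimal, purely illustrative pseudonymization routine (the record fields and the key-map design are this author's assumptions, not drawn from HIPAA or either report): direct identifiers are swapped for random pseudonyms, and a separately held key map preserves the ability to re-identify. Destroying that key map would, in the terminology of footnote 18, turn the pseudonymized data into anonymized data.

```python
import secrets

# Hypothetical records; the field names are illustrative only.
records = [
    {"name": "John Smith", "car": "white Honda", "salary": 55000},
]

def pseudonymize(records):
    """Replace the direct identifier ("name") with a random pseudonym.

    Returns the de-identified records plus a key map. Whoever holds
    the key map can re-identify the data; destroying it would leave
    the records anonymized.
    """
    key_map = {}  # pseudonym -> original identifier; keep secret
    output = []
    for rec in records:
        pseudonym = "Person-" + secrets.token_hex(4)
        key_map[pseudonym] = rec["name"]
        deidentified = {k: v for k, v in rec.items() if k != "name"}
        deidentified["id"] = pseudonym
        output.append(deidentified)
    return output, key_map

deidentified, key_map = pseudonymize(records)
# The de-identified record keeps car and salary but drops the name;
# only key_map links the pseudonym back to "John Smith".
```

Note that, as the article goes on to explain, the quasi-identifiers left behind (car type, salary) are exactly what makes re-identification possible.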

In the context of big data, the first problem with notice and consent is that it places an enormous burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data. In other words, it requires individuals to analyze each and every privacy notice applicable to them (which could be hundreds, if not more), determine whether those data collectors share information and with whom, and then attempt to track that information down as necessary. As the PCAST Report not-so-delicately states, “[i]n some fantasy world, users actually read these notices, understand their legal implications (consulting their attorneys if necessary), negotiate with other providers of similar services to get better privacy treatment, and only then click to indicate their consent. Reality is different.”[19] This is aggravated by the fact that relevant privacy terms are often buried in privacy notices using legalese and provided on a take-it-or-leave-it basis.[20] Although notice and consent may still play an important role where there is a direct connection between data collectors and individuals, it is evident why such a model loses its meaning when information is collected from a number of varied sources and those analyzing the data have no direct relationship with individuals.

Second, even where specific privacy regulations apply to the collection and use of personal information, such rules rarely consider or routinely allow for the disclosure of that information to researchers for biomedical research purposes, thus requiring researchers to independently provide notice and obtain consent. As the WH Report points out, “[t]he privacy frameworks that currently cover information now used in health may not be well suited to … facilitate the research that drives them.”[21] And as previously noted, biomedical researchers often require non-health information, including lifestyle and financial data, if they want to maximize the benefits of big data. “These types of data are subjected to different and sometimes conflicting federal and state regulation,” if any regulation at all.[22]

Lastly, the ability to overcome de-identification is becoming easier due to “effective techniques … to pull the pieces back together through ‘re-identification’.”[23] In fact, the very techniques used to analyze big data for legitimate purposes are the same advanced algorithms and technologies that allow re-identification of otherwise anonymous data.[24] Moreover, “meaningful de-identification may strip the data of both its usefulness and the ability to ensure its provenance and accountability.”[25] In other words, de-identification is not as useful as it once was and further stripping data in an effort to overcome this fact could well extinguish any value the data may have (using the example above, car type and salary may still provide marketers with meaningful information (e.g., individuals with a similar salary may be interested in that car type), but the information “white Honda” alone is worthless). [26]
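The re-identification risk described above can be illustrated with a toy linkage attack (every record and dataset here is invented for illustration; real attacks use far richer auxiliary data and probabilistic matching): an attacker joins a "de-identified" dataset against an outside dataset on the quasi-identifiers both share, such as the car type and salary from the article's example.

```python
# "De-identified" records, as in the article's Person X example.
deidentified = [
    {"id": "Person-X", "car": "white Honda", "salary": 55000},
    {"id": "Person-Y", "car": "blue Ford", "salary": 72000},
]

# Auxiliary data an attacker might hold (e.g., purchased or scraped);
# entirely hypothetical here.
auxiliary = [
    {"name": "John Smith", "car": "white Honda", "salary": 55000},
]

def reidentify(deidentified, auxiliary):
    """Match records whose quasi-identifiers (car, salary) coincide."""
    matches = {}
    for rec in deidentified:
        for aux in auxiliary:
            if rec["car"] == aux["car"] and rec["salary"] == aux["salary"]:
                matches[rec["id"]] = aux["name"]
    return matches

# reidentify(deidentified, auxiliary) -> {"Person-X": "John Smith"}
```

The same joins and pattern-matching that make big data analytically valuable are what make this attack cheap, which is the point the WH and PCAST reports make: stripping enough quasi-identifiers to defeat linkage (here, dropping car and salary) also strips the data of research value.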

The consequences of all this are either 1) biomedical researchers are deprived of valuable data or provided meaningless de-identified data, or 2) individuals have no idea that their information is being processed for research purposes. Both the benefits and obstacles relating to big data and biomedical research led to the WH Report’s recognition that we may need “to look closely at the notice and consent framework” because “focusing on controlling the collection and retention of personal data, while important, may no longer be sufficient to protect personal privacy.”[27] Further, as the PCAST Report points out, and as reflected in the WH Report, “notice and consent is defeated by exactly the positive benefits that big data enables: new, non-obvious, unexpectedly powerful uses of data.”[28] So what does this new approach look like?

Alternative Approach to Big Data: Focus on Use, Not Collection [29]

The WH Report does not provide specific proposals. Rather, it suggests a framework for a new approach to big data that focuses on the type of use of such data and associated security controls, as opposed to whether notice was provided and consent obtained at the point of its collection. Re-focusing attention to the context and ways big data is used (including the ways in which results generated from big data analysis are used) could have many advantages for individuals and biomedical researchers. For example, as noted above, the notice and consent model places the burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data and provides no backstop when those efforts fail or no attempt to manage notice provisions is made. Where the attention focuses on the context and uses of data, it shifts the burden of managing privacy expectations to the data collector and it holds entities that utilize big data (e.g., researchers) accountable for how data is used and any negative consequences it yields. [30]

The following are some specific considerations drawn from the reports regarding how a potential use framework might work:

  • Provide that all information used by researchers, regardless of the source, is subject to reasonable privacy protections similar to those prescribed under HIPAA. [31] For example, any data relied upon by researchers can only be used and shared for biomedical research purposes.
  • Create special authorities or bodies to determine reasonable uses for big data utilized by researchers so as to realize the potential of big data while preserving individual privacy expectations. [32] This would include recognizing and controlling harmful uses of data, including any actions that would lead to an adverse consequence to an individual. [33]
  • Develop a central research database for big data accessible to all biomedical researchers, with universal standards and architecture to facilitate controlled access to the data contained therein. [34]
  • Provide individuals with notice and choice whenever big data is used to make a decision regarding a particular individual. [35]
  • Where individuals may not want certain data to enter the big data ecosystem, allow them to create standardized data use profiles that must be honored by data collectors. Such profiles could prohibit the data collector from sharing any information associated with such individuals or their devices.
  • Require reasonable security measures to protect data and any findings derived from big data, including encryption requirements. [36]
  • Regulate inappropriate uses or disclosures of research information, and make parties liable for any adverse consequences of privacy violations. [37]

By offering these suggestions for public debate, the WH and PCAST reports have only initiated the discussion of a new approach to privacy, big data and biomedical research. Plainly, these proposals bring with them numerous questions and issues that must be answered and resolved before any transition can be contemplated (notably, what are appropriate uses and who determines this?).


Technologies utilizing the iOS 8 SDK, including HealthKit and HomeKit, illustrate the technological growth contributing to the big data environment. The WH and PCAST reports exemplify the endless possibilities that can be derived from this environment, as well as some of the important privacy issues affecting our ability to harness these possibilities. The reports constitute their authors’ consensus view that the existing approach to big data and biomedical research restricts the true potential big data can have on research, while providing individuals with little-to-no meaningful privacy protections. Whether the suggestions contained in the WH and PCAST reports will be – or should be – further developed is an open question that will undoubtedly lead to a healthy debate. Yet, in the case of the PCAST Report, the sheer diversity of players recognizing big data’s potential and associated privacy implications – including, but not limited to, leading representatives and academics from the Broad Institute of Harvard and MIT, UC-Berkeley, Microsoft, Google, National Academy of Engineering, University of Texas at Austin, University of Michigan, Princeton University, Zetta Venture Partners, National Quality Forum and others – provides hope that this potential will one day be realized – in a way that appropriately protects our privacy. [38]

WH Report Summary: click here.

PCAST Report Summary: click here.


[1] WH Report, p. 2.

[2] See Apple’s June 2, 2014, press release, Apple Releases iOS 8 SDK With Over 4,000 New APIs.

[3] Id.

[4] Id.

[5] Id.

[6] Id.

[7] The White House and PCAST issued summaries of their respective reports, including their policy recommendations, which can be easily found at the links following this article.

[8] WH Report, p. 7.

[9] WH Report, p. 2.

[10] WH Report, p. 5.

[11] WH Report, p. 23.

[12] WH Report, p. 3.

[13] WH Report, p. 23.

[14] WH Report, p. 6 (the WH Report includes two research-related examples of the impact of big data on research, including a study whereby the large number of data sets made “the critical difference in identifying the meaningful genetic variant for a disease.”).

[15] WH Report, p. 23.

[16] WH Report, p. 23.

[17] WH Report, p. 23.

[18] In privacy law, “anonymous” data is often considered a subset of “de-identified” data. “Anonymized” data means the data has been de-identified and is incapable of being re-identified by anyone. “Pseudonymized” data, the other primary subset of “de-identified” data, replaces identifying data elements with a pseudonym (e.g., random id number), but can be re-identified by anyone holding the key. If the key was destroyed, “pseudonymized” data would become “anonymized” data.

[19] PCAST Report, p. 38.

[20] PCAST Report, p. 38.

[21] WH Report, p. 23.

[22] WH Report, p. 23.

[23] WH Report, p. 8.

[24] WH Report, p. 54; PCAST Report, pp. 38-39.

[25] WH Report, p. 8.

[26] The PCAST Report does recognize that de-identification can be “useful as an added safeguard.” See PCAST Report, p. 39. Further, other leading regulators and academics consider de-identification a key part of protecting privacy, as it “drastically reduces the risk that personal information will be used or disclosed for unauthorized or malicious purposes.” Dispelling the Myths Surrounding De-identification: Anonymization Remains a Strong Tool for Protecting Privacy, Ann Cavoukian, Ph.D. and Khaled El Emam, Ph.D. (2011). Drs. Cavoukian and El Emam argue that “[w]hile it is clearly not foolproof, it remains a valuable and important mechanism in protecting personal data, and must not be abandoned.” Id.

[27] WH Report, p. 54.

[28] PCAST Report, p. 38; WH Report, p. 54.

[29] This approach is not one of the official policy recommendations contained in the WH Report. However, as discussed above, the WH Report discusses the impact of big data on biomedical research, as well as this new approach, extensively. Further, to the extent order has any meaning, the first recommendation made in the PCAST Report is that “[p]olicy attention should focus more on the actual uses of big data and less on its collection and analysis.” PCAST Report, pp. 49-50.

[30] WH Report, p. 56.

[31] WH Report, p. 24.

[32] WH Report, p. 23.

[33] PCAST Report, p. 44.

[34] WH Report, p. 24.

[35] PCAST Report, pp. 48-49.

[36] PCAST Report, p. 49.

[37] PCAST Report, pp. 49-50.

[38] It must be noted that many leading regulators and academics have a different view on the importance and role of notice and consent, and argue that these principles in fact deserve more focus. See, e.g., The Unintended Consequences of Privacy Paternalism, Ann Cavoukian, Ph.D., Dr. Alexander Dix, LLM, and Khaled El Emam, Ph.D. (2014).

Source: Client Alert