Access to Data from Clinical Trials

Michelle Mello, Professor of Law and of Health Research and Policy at Stanford University, recently published an interesting paper in the New England Journal of Medicine called “Preparing for Responsible Sharing of Clinical Trial Data.” The paper advances the idea of a “learned intermediary” who could vet requests for access and prevent abuses.

A link to the paper was posted in a Robert Wood Johnson Foundation group on LinkedIn to which Ernesto and I both belong, and I commented there, but I want to share my comment here as well and see if any of you have thoughts about it. Ernesto and I have been talking quite a bit about how to advocate more effectively for access to data, so we’re reading more widely and talking to more people as we try to understand the issues.

For context, the paper argues that learned intermediaries:

"…would be well situated to ensure that data users comply with the conditions of data release. Their staff could manage the execution and monitoring of legally binding data-use agreements and enforce adherence. As needed, they could issue cease-and-desist letters, post public notices of violates and report them to data users’ institutions and journals, demand return of the data, deny future data requests, and seek injunctions or money damages in court."

Here’s my comment from the RWJF site:

[i]I very much appreciate Professor Mello summarizing her paper and providing a link. I’ve read it with interest, as conversations about data access are intensifying in the Quantified Self community. Our focus is on “self-collected data,” which today appears to be quite distinct from the clinical trial data discussed; however, this distinction is already beginning to blur, and it will become increasingly hard to maintain. Two areas where the overlap between self-collected data and clinical trial data is easy to see are the tracking of adverse events and the measurement of treatment effectiveness in chronic disease. Where self-collected data gathered with common consumer tools contains important signals, investigators will want access to it. While today access to self-collected data may be obtained through typical enrollment and follow-up methods, the origin of this data in the course of an individual participant’s everyday life changes the context of access. Simply put, individual participants become “data generators.”

In Professor Mello’s paper, the term “data generators” refers to clinical researchers, which is appropriate given her focus. However, I think it is worth doing a thought experiment and re-reading the paper with an image of research participants as data generators in mind. It quickly becomes apparent in this thought experiment that the role of the learned intermediary cannot be primarily legal and regulatory, at least as we normally understand these words. With data generators numbering in the thousands and tens of thousands, and eventually in the millions and tens of millions, it will be impossible to police data requests at the necessary level of detail. In this scenario we might hope that legal and regulatory authorities could punish egregious abuses; but even this minimal requirement would rely on substantial progress in public understanding, and in expert understanding too, of what standards to apply.

I offer this thought experiment as a forecast about the future of access. Since I’m skeptical that fine-grained regulatory governance of data access will be possible, I prefer to advocate for the broadest possible access. In particular, research participants themselves deserve access to clinical data. I very much like the idea of a learned intermediary, but the way I can imagine a learned intermediary operating in a context of millions of data generators is as part of our essential cultural infrastructure, rather than as a small set of government agencies or NGOs. There are many examples of learned intermediation embodied in cultural practices, but perhaps the closest analog is the press, which takes upon itself the responsibility of handling complex and voluminous information, much of it in the form of data, and transforming it into knowledge for the public. The press is regulated, of course. Anybody speaking to the public has to take account of laws designed to protect individuals and companies against abuse. However, most of the rules used by editors and writers are rules of thumb, based on experience and transmitted through formal and informal vocational training.

We have a long way to go before learned intermediation of this sort is effective, which I take as all the more reason to start soon.[/i]

I’d like to have control over who can use my data. But I also recognize that if I’m “donating” data for public research, any data that is required to replicate published results has to be made available to anyone–even if it’s just a curious statistics undergraduate without a proper justification for accessing the data.

Having an organization that writes angry letters to people who misuse the data wouldn’t increase my trust as much as an organization that certifies that a group of researchers follows best practices in keeping private data secure and as anonymous as possible.

Interesting suggestion - a “vetted” group of researchers trusted to handle our data!