Should QS toolmakers disclose actual usage and results?

I’m a QS organizer who’s relatively new to the movement (or whatever we are) and I’ve got a kind of big-picture concern I wanted to float here.

Most of the criticisms I’ve read of QS don’t bother me. Crazy nerds, narcissists, gadget freaks – bring it on : ). The only one that really stings is (I’ll paraphrase) “These people are just falling for a series of gimmicky products that they don’t stick with, either because the products don’t work as well as advertised or they just don’t have the discipline. Like most self-improvement fads, it’s resulting in very little actual self-improvement.”

From this perspective, the median QS member is someone who says “OK, I’m getting a Zeo and I’m gonna optimize my sleep quality!” After a month they realize what common sense could have told them: they sleep better when they avoid alcohol or caffeine, don’t drastically shift their sleep hours every weekend, turn off the computer in the evenings, etc. But actually changing those habits is too hard, and continuing to use the Zeo is just a depressing reminder of their failure, and so they move on: “I’m gonna start using Evernote and optimize my productivity!” Similar story, and a month later it’s emotion tracking, and so on.

At the end of a year they’ve spent a lot of money and time and have all these gadgets sitting on the shelf, and all these unused apps on their phone, and they’re no better off than when they started.

I realize that many QS members, especially those on these forums, are not in this category, and have made positive and permanent changes to their lives with the help of self-tracking. I also realize that some of us are interested in QS on a philosophical, aesthetic or self-discovery level, not just as a motivational or self-improvement tool. But if the median QSer is more like the above, then to me that’s pretty disturbing.

And how would we even know? Because to an external observer, that person is probably a huge QS adherent, who’s online gushing about every app and gadget during the first week they’re using it, and never comes back to say “oh yeah, it never did much for me in the end.” But what’s really disturbing is that, to many QS companies, they’re also a pretty good customer. Not many will ever ask for a refund on the gadgets, and some will let paid app subscriptions run long after they’ve stopped using them. They may even continue to recommend the products, since they blame their own willpower rather than the product itself.

I think that pattern is why many people get a sleazy vibe from the self-help industry in general, and I would hate for them to start getting that same vibe from us.

I realize it’s inevitable that as QS grows, it will lose some of its quirky amateur character and become partly a marketing channel for QS-related startups. But we can at least try to impose some ethical standards on the companies that are benefiting from all our volunteer organizing and promotion. Demands for data portability and privacy are a great example of that.

So I would propose another positive norm to encourage: aggregate disclosure, where applicable, of actual use and actual results. I realize this doesn’t apply to every product, but any kind of gadget or app that collects your data, stores it in the cloud, and gives it back to you in custom graphs and charts could very easily disclose things like:

  1. What percentage of new users are still using the product a month later? Three months later? How frequently are they using it? (See the sketch after this list for one way a toolmaker could compute this.)

  2. When the relevant measurements have an unambiguous standard of improvement, what percentage of those users are improving? For example, on a wifi scale, what’s happening to the average user’s BMI over time?
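
To make the first metric concrete, here’s a rough sketch in Python of how a toolmaker sitting on a log of per-user usage events could compute its own retention numbers. Everything here is hypothetical: the event schema, the 30-day month, and the toy data are my assumptions, not any vendor’s actual format.

```python
from datetime import date, timedelta

def retention(events, months):
    """Share of users whose last recorded activity falls at least
    `months` months (approximated as 30-day blocks) after their first.
    `events` is an iterable of (user_id, date) usage events."""
    first_seen, last_seen = {}, {}
    for user, day in events:
        if user not in first_seen or day < first_seen[user]:
            first_seen[user] = day
        if user not in last_seen or day > last_seen[user]:
            last_seen[user] = day
    window = timedelta(days=30 * months)
    still_using = sum(1 for u in first_seen
                      if last_seen[u] - first_seen[u] >= window)
    return still_using / len(first_seen) if first_seen else 0.0

# Toy data: alice keeps using the gadget, bob quits within a week.
events = [("alice", date(2013, 1, 1)), ("alice", date(2013, 4, 15)),
          ("bob", date(2013, 1, 5)), ("bob", date(2013, 1, 9))]
print(retention(events, 1))  # 0.5 -> half the cohort lasted one month
print(retention(events, 3))  # 0.5 -> alice is still active at three months
```

A real disclosure would need a stricter definition of “active” (say, a minimum number of sessions per week rather than any single event), but even this crude version would answer the gadget-on-the-shelf question.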

I realize this flies in the face of both the culture of secrecy around startups and the general practice of marketing. But so what? Why can’t we have a norm of acclaim for companies that answer these questions and suspicion for companies that don’t, just like the norm that’s already developing around companies that let you download your data as a CSV vs. those that hold it hostage?

Anyway this is really still just a half-formed thought, but it’s been bugging me for a while and I would love to get some other people’s thoughts on it.

I think the QS movement is a good thing because it encourages personal science – people gathering data and doing experiments to help themselves. Learning useful stuff this way requires more than just measuring yourself, but measuring yourself quantitatively is often a necessary part of it. Say there are three things you need to do: QS encourages people to do one of the three. If most QSers don’t understand this, the median QSer isn’t going to learn much. But QS is still a step in the right direction.

I agree completely, but personal science is only one facet of QS. A lot of people are interested in self-tracking for purely motivational reasons.

To me, “experimental” QS starts with “I know my problem/goal but I’m not sure what to do about it.” Whereas “motivational” QS starts with “I already know what I have to do, but I need some help in actually doing it.”

No question there are many of us in each camp (and many in both) but I think the most widespread QS products are still in the second category, right? Most people who buy a Nike+ Fuelband are not trying to learn anything about their ideal level or pattern of physical activity; they’re just trying to motivate themselves to be more physically active. In that case there really isn’t another step between the tracking and the goal.

Anyway, in either case, wouldn’t it be helpful to know in advance what the average user’s experience has been with a particular tracking product? So to the extent we have any influence over company norms in this sector, shouldn’t we be pushing for more disclosure of that aggregate data?

The “percentage of new users still using the product a month later” might reflect as much on the demographics of a service as on its effectiveness.

Such information could of course be useful when shopping around. Many people enjoy reading “Fifty Shades of Grey”, but would I? The Atkins diet works great for some people, but is it the best diet for someone with my genetics? How well does the Nike+ Fuelband work for people who prefer hiking over running?

I don’t think there is such a thing as an “average user” – even if some users are more average than others :)

Welcome to Quantified-Quantified Self!

"…to an external observer, that person is probably a huge QS adherent, who’s online gushing about every app and gadget during the first week they’re using it, and never comes back to say “oh yeah, it never did much for me in the end.”

Thanks Peter for sharing your idea in its current “half-formed” state. That’s one of the things this forum is for: to expose new ideas and develop them further. I want to pick out one aspect of your complex post that caught my attention: the statement quoted above about the “external observer.” I think you’ve accurately captured a stereotype of the QS participant (and I think of myself more as a participant than an adherent), but it is a stereotype that doesn’t match very closely with reality. I would challenge such an external observer to find somebody online gushing about every app and gadget, and then failing to update their thoughts. We tend to hear from people after they’ve had a range of experiences, and the reports we get are as often negative as positive. This makes the QS scene much different from merely a forum for tech marketing & evangelism, at least as I’ve seen it.

I think your post also asks, at least implicitly, “where does this stereotype come from?” That’s a question that could be answered, and yet I always find myself hesitating to spend much time in conversation about stereotypes… these have a tendency to turn into polemics super quickly. But maybe there is a way to do it. I’m thinking!

Well, “complex” is putting it nicely : ). Re-reading my post now, I’d say it’s kind of rambling. But let me restate the main point more directly: it seems like the QS community already wants it to be standard practice for toolmakers to make individual data available to users in raw form; I think it should also be standard practice for them to make aggregate data about all their users (not some selected subset) available to the public.

I described a stereotype of the typical customer of self-tracking products, a much bigger group than QS participants, and that’s a distinction I should have made explicit. But regardless: if that stereotype is incorrect, better disclosure of aggregate data would be a good way to refute it, right? For example, when this critic writes:

“They believe the maxim that only the things that are measured can be improved. But I see a lot of measuring, but not much improvement. […] In my personal experience, I know people who are obsessed by “quantified self” gadgets. I know people who eat well and exercise regularly, and as a result are physically fit. And these two groups don’t overlap.”

…I suppose it would be possible to dispute those assertions on a purely anecdotal level, or to ask, as you suggest, “where does this stereotype come from?” and somehow show that it’s unfounded. But surely the strongest response would be to say “Well, let’s look at the aggregate data for everyone who’s bought a Fitbit tracker. As you can see, __% are wearing it at least __% of the time for at least __ months, and on average they increase their activity by __%. And now here’s the data from a popular wifi scale, and a popular emotion tracking app, and so on with all the most widely-used self-tracking products. And as you can clearly see, the majority of users are indeed improving, so the ones you’ve met are not representative.”

But as far as I know, we can’t really do that, because most toolmakers don’t disclose this kind of aggregate data. Which ones, if any, do? And for those that don’t, why don’t we ask for it? Even in the absence of outside critics, shouldn’t we want this data in order to make better decisions ourselves?

Eric’s point above is certainly true: in the case of some tools, a simple average might not tell the whole story, and we might have to dig further. And with a few, the makers might simply not collect enough info to answer every question. But let’s start by getting the data and then we’ll see.
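
To make that caveat concrete, here’s a toy example (all numbers invented) of how survivorship can skew a naive average: if a wifi scale maker averages only over users who are still weighing in, the dropouts, exactly the people the stereotype is about, vanish from the denominator.

```python
# Invented numbers: (bmi_at_signup, bmi_at_3_months, still_using_at_3_months)
cohort = [
    (31.0, 29.0, True),   # stuck with it and improved
    (29.0, 28.0, True),   # stuck with it and improved
    (32.0, 32.0, False),  # quit in week two, no change
    (30.0, 31.0, False),  # quit early, drifted upward
]

def avg_change(rows):
    """Mean BMI change (negative = improvement) over the given users."""
    return sum(after - before for before, after, _ in rows) / len(rows)

active_only = [r for r in cohort if r[2]]
print(f"still-active users: {avg_change(active_only):+.2f} BMI")  # -1.50
print(f"whole cohort:       {avg_change(cohort):+.2f} BMI")       # -0.50
```

So “what percentage of users are improving” only means something if the denominator is everyone who started, not just everyone who stayed.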

I know there are a lot of toolmakers on these forums, and I hope some of them will weigh in too. And to be clear, I don’t mean this as a “gotcha” question at all. I think lots of self-tracking products work very well and are giving their users a lot of value. I just think there are so many of them now (and more all the time) that we need more real data to accurately compare them.