Well, “complex” is putting it nicely : ). Re-reading my post now, I’d say it’s kind of rambling. But let me restate the main point more directly: it seems like the QS community already wants it to be standard practice for toolmakers to make individual data available to users in raw form; I think it should also be standard practice for them to make aggregate data about all their users (not some selected subset) available to the public.
I described a stereotype of the typical customer of self-tracking products, and those customers are a much bigger group than the participants in QS; that's a distinction I should have made clearer. But regardless: if that stereotype is incorrect, better disclosure of aggregate data would be a good way to refute it, right? For example, when this critic writes:
“They believe the maxim that only the things that are measured can be improved. But I see a lot of measuring, but not much improvement. […] In my personal experience, I know people who are obsessed by “quantified self” gadgets. I know people who eat well and exercise regularly, and as a result are physically fit. And these two groups don’t overlap.”
…I suppose it would be possible to dispute those assertions on a purely anecdotal level, or to ask, as you suggest, “where does this stereotype come from?” and somehow show that it’s unfounded. But surely the strongest response would be to say “Well, let’s look at the aggregate data for everyone who’s bought a Fitbit tracker. As you can see, __% are wearing it at least __% of the time for at least __ months, and on average they increase their activity by __%. And now here’s the data from a popular wifi scale, and a popular emotion tracking app, and so on with all the most widely-used self-tracking products. And as you can clearly see, the majority of users are indeed improving, so the ones you’ve met are not representative.”
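Just to spell out what that kind of roll-up could look like: the sketch below is purely illustrative, and every field name and threshold in it is invented (no vendor actually publishes data in this shape today), but the computation itself is trivial once the raw per-user records exist.

```python
# Hypothetical sketch only: UserRecord, its fields, and the thresholds are all made up
# for illustration, not any toolmaker's real data format or API.
from dataclasses import dataclass
from statistics import mean

@dataclass
class UserRecord:
    months_owned: float      # months since the device was purchased
    wear_fraction: float     # fraction of days the device was actually worn
    baseline_steps: float    # average daily steps in the first month of ownership
    recent_steps: float      # average daily steps in the most recent month

def aggregate_report(users, min_months=6, min_wear=0.5):
    """Roll up adherence and improvement across ALL purchasers, not a chosen subset."""
    adherent = [u for u in users
                if u.months_owned >= min_months and u.wear_fraction >= min_wear]
    changes = [(u.recent_steps - u.baseline_steps) / u.baseline_steps
               for u in users if u.baseline_steps > 0]
    return {
        "pct_still_wearing": 100 * len(adherent) / len(users),
        "avg_activity_change_pct": 100 * mean(changes),
    }
```

The point isn't this particular code, of course; it's that the numbers I filled with blanks above are a few lines of arithmetic away from data the toolmakers already have.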
But as far as I know, we can’t really do that, because most toolmakers don’t disclose this kind of aggregate raw data. Which ones, if any, do? And for those that don’t, why don’t we ask for it? Even in the absence of outside critics, shouldn’t we want this data in order to make better decisions ourselves?
Eric’s point above is certainly true: for some tools, a simple average might not tell the whole story, and we might have to dig further. And for a few, the makers might simply not collect enough information to answer every question. But let’s start by getting the data, and then we’ll see.
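To illustrate Eric's point about averages with some toy numbers (entirely invented): if a small group of users improves a lot while most barely change, the mean can look healthy even though the typical user got almost nothing out of the tool.

```python
# Toy numbers, invented purely to illustrate why a bare average can mislead.
from statistics import mean, median, quantiles

# Three users improve a lot; the other 27 barely change.
changes = [0.9, 0.8, 0.7] + [0.02] * 27

print(f"mean change:   {mean(changes):+.0%}")    # roughly +10%, looks healthy
print(f"median change: {median(changes):+.0%}")  # +2%: the typical user barely moved
print("quartiles:", [f"{q:+.0%}" for q in quantiles(changes, n=4)])
```

So yes, we'd want distributions (or at least medians and quartiles) alongside the averages; but that's an argument for asking for more of the data, not less.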
I know there are a lot of toolmakers on these forums, and I hope some of them will weigh in too. And to be clear, I don’t mean this as a “gotcha” question at all. I think lots of self-tracking products work very well and are giving their users a lot of value. I just think there are so many of them now (and more all the time) that we need more real data to accurately compare them.