I don’t think that’s a valid criticism, and even if it were, I don’t think that would be a valid response to it.
I am not running an n = 1 study. I am not the sample; I am the population. I’m doing a P = 1 study. I take samples from myself, such as my mood and what I’m doing, and compare them with population data such as the food I’m taking in. Even if those “quants” who criticize us still think I’m doing an n = 1 study, what’s the problem with that? The problem is that the results of n = 1 studies don’t transfer to other n’s. But I’m not trying to transfer my results to other n’s; I’m just trying to understand myself. Problem solved.
Let’s say we did develop a standard data format and used it to combine QS data across users. There’s a name for that kind of study: it’s called a meta-analysis. Meta-analyses are very popular, and they are highly suspect. They’re popular because they’re cheap: you don’t have to run your own study and deal with all of the human-testing issues. They’re suspect because they combine data from different studies that used different methodologies. Not having a standard methodology for your data collection can introduce bias into your analysis.
Often these meta-analyses combine data from studies with different treatments. They have to, because you don’t get published repeating someone else’s study (which is an odd state of affairs if you consider the scientific method). That’s an even bigger problem in QS, because everyone is doing a different treatment, often a completely different one. It wouldn’t be like combining five studies on aspirin and heart attacks; it would be like combining 500 studies on weight loss, weight gain, sleep quality, cholesterol, blood pressure, mood, …
The first lesson to take away from this is that just because data are in the same format doesn’t mean they’re comparable. The second lesson is that we should be defining what QS is, not the outside quants. They want papers they can publish in peer-reviewed journals. We don’t have to give them that. If we are getting what we want out of the process, it is not our problem that they are not getting what they want.
I think there is a lack of rigor in QS. (For that matter, I think there is a lack of rigor in peer-reviewed journals, and maybe the outside quants should clean up their own backyard before looking over the fence.) I’ve seen QSers be happy with an increase from 95% to 97% without addressing whether that change is statistically significant, much less practically significant. I’ve seen QSers be happy with a decrease in cholesterol when they were only looking at LDL and not HDL, and they’d actually made their situation worse. But neither of these is a data format issue.
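To make the significance point concrete: whether a jump from 95% to 97% means anything depends entirely on how many observations sit behind those percentages. Here is a minimal sketch of a two-proportion z-test, using hypothetical numbers (100 tracked days before and after; the original post gives no sample sizes), built only on the standard library:

```python
from math import sqrt, erfc

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions,
    using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p from the normal tail
    return z, p_value

# Hypothetical: 95 "good" days out of 100 before, 97 out of 100 after.
z, p = two_proportion_z_test(95, 100, 97, 100)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With these numbers the p-value comes out well above 0.05, so a 95%-to-97% improvement over 100 days is entirely consistent with chance. And even if a larger sample did make it significant, the practical-significance question (is two percentage points worth the intervention?) still has to be asked separately.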
I think that making sure people understand what they are studying, and helping them learn best practices for analyzing the data they collect, will do far more for our community than worrying about data format issues and trying to appease people outside our community (who perhaps don’t understand what we are doing).