What Next For Quantified Self?

Hi Gary,
Very interesting! Reading your pitch through the lens of the platform business model you are proposing, I observe the following:

  • There are two sides to this platform. On one side are the everyday “scientists” in search of discoveries. On the other, clinical researchers looking for alternative datasets to anonymized randomized trials.

  • We know platforms sustain and grow themselves through network effects. Here you are suggesting the network effect is based on “each person making it easier for the next”. I take this as: the more discoveries are made, the more people join the platform in search of their own discoveries; and with more discoveries, the more clinical researchers come looking for datasets, and round and round it grows.

  • “10 million discoveries in 20 years” - I asked myself how many everyday “scientists” need to be active on the platform to achieve this goal. If by “discovery” you mean, as @ejain suggests in his point #3, “actual discoveries: something new (maybe once in a lifetime)”, then you’ll need 10 million users over 20 years! I agree with Eric that discovery covers any one of those three points. In my case of casual experimentation, it is about five discoveries in three years. Of course you’ll have a range of discoveries per user, but it is safe to say we are on the order of 300K - 1M+ active everyday scientists over the 20 years to meet the goal.

  • You are suggesting the platform’s value proposition is based on innovative tools/education/community to enable entirely new experiences in the everyday scientist’s journey to self-discovery. In this regard, I would highlight examples of when this journey goes awry, and how the platform will enable users to overcome the challenges and reach the final phase of discovery. You may also want to highlight innovative experiences for the clinical researcher. For example, the clinical researcher who doesn’t find the empirical discoveries she is looking for, but can use the platform’s levers to incentivize new discoveries in this area.

  • In terms of the platform’s economics, I think you should mention the marginal costs of acquiring the platform’s participants, i.e. everyday scientists as well as clinical researchers, and how these tend toward zero - which, along with the network effects, would be the platform’s sustainability element.
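A quick back-of-the-envelope sketch of the arithmetic behind the 300K figure above (the ~5 discoveries in 3 years rate is the figure cited in the comment; everything else is simple division):

```python
# Rough check of the "300K - 1M+ active everyday scientists" estimate.
goal_discoveries = 10_000_000
years = 20
needed_per_year = goal_discoveries / years  # 500,000 discoveries per year

# Casual-experimenter rate cited above: ~5 discoveries in 3 years.
rate_per_user = 5 / 3  # discoveries per user per year

active_users = needed_per_year / rate_per_user
print(int(active_users))  # 300000 - the low end of the quoted range
```

At lower per-user rates (closer to once-in-a-lifetime discoveries), the same division pushes the required user base toward the 1M+ end.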

Sergio

1 Like

Very good and interesting feedback. Thank you @madprime, @dreeds, and @Sergio.

One thing that this exercise is showing very clearly is that the current slides point toward a technology platform/product - and that’s not what we meant to pitch. That’s an issue with the slides, not the commenters, and it’s very useful for us to have this exposed and to adjust course.

We’re definitely NOT proposing that creating a technology platform is the work we need to do. We are working much more directly to grow the number of personal discoveries using everyday science.

Reading the pitch now, I think we’re conveying that we want to build a GitHub for Quantified Self. But GitHub already exists. The lack of GitHub is not what’s in our way. Our work is at the level of:

  • Support real people who face barriers in their projects of everyday science by growing our community and sharing their work.

  • Educate and “train the trainers” in health and allied professions so they can support everyday science.

  • Create tools that address some of the most common and pressing points of failure, such as lack of instrumentation and data access.

  • Build organizational capacity.

We’ll need funders to get this, or they may expect a technology platform and be surprised when they don’t get it.

I think this work is peer production (it certainly isn’t an expert-based system), but it doesn’t come through properly yet in the slides.

@dreeds A program description would help clear up the confusion, but it moves in the opposite direction from the elevator pitch - into details rather than away from them. Probably work in both directions is needed.

@Sergio: This paragraph in your comments really helps clarify:

“10 million discoveries in 20 years” - I asked myself how many everyday “scientists” need to be active on the platform to achieve this goal. If by “discovery” you mean, as @ejain suggests in his point #3, “actual discoveries: something new (maybe once in a lifetime)”, then you’ll need 10 million users over 20 years! I agree with Eric that discovery covers any one of those three points. In my case of casual experimentation, it is about five discoveries in three years. Of course you’ll have a range of discoveries per user, but it is safe to say we are on the order of 300K - 1M+ active everyday scientists over the 20 years to meet the goal.

I think you and @ejain are very much right - the goal is to grow the # of people who make their own discoveries, based on our knowledge and experience of how challenging this is today. We can currently help about 100 people make discoveries every year. Some multiple of this is also occurring “naturally” (that is, people doing everyday science without our help, but we’d like to document their work and make it easier to share). Can we get to 200 next year, and 400 the year after that? This seems realistic, and gives us time to evolve the governance and institutional capacity we need.

(Perhaps one of our challenges is that this approach - building and supporting a community and helping scale the tools and methods so that all can benefit - isn’t as zeitgeisty as a technology/platform effort; but if we were doing what everybody else is doing, then why do it?)

2 Likes

I’m afraid I don’t have time to write my comments succinctly enough to fit in this forum thread, but let me throw out a few thoughts.

You’ve identified a core problem: how most health-related research today is driven top-down, by a priesthood of “experts”, in contrast to the dynamism in other industries, where technology is moving us to a software-driven, bottom-up world that is participatory and personalized.

The Article 27 solution, if I’m interpreting you correctly, is to develop a set of easy-to-use templates that can kickstart a motivated community that can scale to millions of individual experiments. To me, this sounds like https://www.instructables.com/, part of the bigger “Maker” movement. And maybe the Maker movement is an apt analogy in several senses: what started as a non-profit “movement” to encourage more individual involvement in hardware ended up stalling (Make Magazine stopped publishing in June) when the bigger electronics industry co-opted much of the reason people were doing their own projects in the first place. The Maker movement isn’t dead, any more than open source is, but most of the serious work happens now in for-profit companies (Instructables is now part of Autodesk).

The Instructables/Article27 idea reminds me of what I call the “Write More Cookbooks” model: we think healthcare today is like a big Mess Hall, where everybody eats the same dull food prepared by chefs for the masses, and we think that cookbooks will kickstart an infrastructure of restaurants, cafes, grocery stores, and Williams-Sonoma. But the analogy doesn’t hold as well as we’d think; healthcare already is full of individual, personalized innovation (the healthcare equivalent of grocery stores already exists for those motivated to try) — but it requires effort and discipline, just like home cooking requires more effort than stopping at the mess hall.

So how can Article 27 help? One thought is to focus less on experiments and more on hypotheses: it’s hard to do an actual experiment, but most people have smart hypotheses — they just need to get written down (“registered”). (I wrote some ideas about this a couple of years ago.)

Another idea is to focus on standards and conventions: somebody should work to make data comparable across experiments. (Imagine trying to do science if there were no accurate timekeeping, for example.) There are a few efforts in the Mess Hall / healthcare world to promote interoperability and standards (e.g. FHIR, CARIN, etc.) — I wish somebody would help drive that from the personal/everyday science perspective.

You could also work on something like a certification process, not so much for the people doing the experiments, but to kickstart an industry of people who want to help others do experiments. Think registered dietitian. Giving an air of professionalism — something to put on a resume — helps inspire some people to focus on this more than if they were just hobbyists.

Maybe I’ll try to write up the rest of my thoughts when I get more time, but meanwhile kudos for making this effort: the world definitely needs what you’re proposing — and you with the QS community are in the best position to do this.

7 Likes

It’s nice seeing many of the ideas that I have around QS written down and articulated so well.

The pitch explains the method for scaling the sharing of discoveries made from everyday science very well. But more emphasis should be placed on the how of making discoveries from one’s observations and the value of increasing self-knowledge. There’s a glancing reference to this in that methods will be shared, so it’s assumed that if you adopt these methods, you too, will learn from your observations.

But people collecting data through their tools and not knowing what to do with it is a central problem. It’s crucial to establish that we have an answer to that, and that what we are scaling is solid and impactful on the individual level.

The QS community is a proof-of-concept that people can make these discoveries. But it’s a mistake to make it seem that we have it all figured out. There are people in the community who stopped all tracking activity because they weren’t learning from their data (or perceived that they weren’t learning).

What are the methods and principles of everyday science? How does it differ (and need to differ) from clinical research? The message might be that everyday science isn’t randomized controlled experimentation applied to the individual. The methods that are most useful in generating self-knowledge may be antithetical to the best practices of clinical research.

People using empirical observations to understand themselves and their environment is not new, but it’s never been valued and, as such, a vocabulary and articulation of principles has not been developed. Part of the pitch may be that we will develop the process of how to help people learn from their observations. We’re going to identify, codify, and organize these methods so it’s easier for people to learn from one another’s experiences and apply them to their own lives.

I don’t think that ten million discoveries can happen if we don’t establish this new vocabulary and way to share not only the story, but the methods of these discoveries in a way that they can be abstracted from their particular project and applied to another.

For me, as an individual, it needs to be clear what the methods for discovery are and how I can use them. These methods need to have names so that they can be easily discussed and applied. A weakness of the Show&Tell talks is that the method is often not laid out clearly enough so that it can be applied by another person.

Everyday science is a concept that needs to exist because there is something of value that the traditional gatekeepers either missed or dismissed. We will take a leading role in recognizing and communicating that ordinary people can learn valuable things by applying empirical observation to their lives. And it may not be that kind of value that appeals to a research journal, but it’s incredibly valuable to the individual and the people around them, and it demands to be understood and cultivated.

To make this idea come home, the pitch may need a concrete example of what everyday science looks like and demonstrates the value that increasing one’s self-knowledge can have in bettering one’s experience of life (one could be drawn from the community). With that example firmly established, you give the person being pitched something solid to imagine being multiplied by 10 million.

3 Likes

Gary, great job putting this together. Great feedback so far, and most of mine has already been covered by others.

Your “train the trainers” line really resonated - I feel this could be played up much more. I realize this would require a ton of work, but what if there were a QS-branded paid certification/course that taught the basics of experimental design, tools, data collection, analysis, case studies, etc.? Then these “certified” folks (organizational thought leaders, doctors/clinicians, etc.) could bring these concepts to their clients. While the average person isn’t curious enough, nor wants to put in the effort, they will gladly collect the data and pay someone else for interpretation (sleep, diet, etc.). This would greatly expand education/reach.

In terms of the pitch deck, you may want to clearly lay out how much you are raising/will need to support these efforts over the coming years, where those funds are coming from (corporate partners, public donations, etc.), and how those funds will be used.

:100:

2 Likes

These are excellent and useful comments. @sprague you are asking: what is most useful to share? I think the answer to this question isn’t entirely obvious in advance, but at the same time we’re not starting from scratch. Doing the conferences has kept us very close to people who are actively doing projects, and the barriers they face are at multiple levels: instrumentation, analysis, design of the project, domain knowledge. Amazingly, within the QS community there is (sometimes) knowledge at all these levels to help get people onward toward their discovery, but a lot of serendipity is required.

For an example, see the project I’m currently trying to make progress on. It seems logical that I ought to be able to measure my tremor using a simple method, and in fact I got a great suggestion in the forum that led me to a free app that did exactly what I needed, and then I hit a barrier around analyzing the data. More suggestions followed, and they are very plausible, but they require familiarity with Matlab and/or Python. I think with more time these suggestions will evolve into an approach that I can manage, and I predict I’m going to learn something from my project - but I have, let’s say, “above average” access to community expertise.

One of my goals in developing the pitch in such close consultation with people who have a history and a stake in QS is to remain true to what we already know people require, rather than to jump into technology solutions. That’s our greatest asset: our experience doing this. Figuring out what can be “templated” in some way for sharing is part of our collective job. To follow Richard’s line of thinking: recipes, standards, and certification are certainly parts of the kit we could deploy, but which parts are most crucial, and in precisely what sequence to work on them, is part of what we’re figuring out.

@Steven_Jonas says:

This is so well stated, and seems of central importance to conveying the essence of our program. If our pitch is understood as delivering clinical knowledge (primarily) or as supporting individuals to become “mini-clinical researchers,” then we’ve gotten onto the wrong track.

Just a very brief comment from me: Thinking about how to ensure that potential funders realise early on that we’re not suggesting a technology platform… Traditionally a place for creating, supporting and disseminating knowledge can be called an institute/university/academy. Maybe there’s a suitable way to phrase that?

1 Like

I love QuantifiedBob’s suggestion of a certification/course on basic research concepts. Gary - this also came up during our CSA webinar. Bob T - do you think there is demand for a certification and a train-the-trainer model? DReed’s format for a concise pitch is very useful. I have trouble with the 10 million discoveries in 20 years - how would that be measured? A few use cases would be very important to include in this pitch - real QS stories, with images, to make the story real and personal. Thanks!

1 Like

Among the threads of feedback here are specific suggestions about the program work Article 27 could and should do to support people making discoveries. I want to capture a couple of points in a summary, with some comment. It’s still early in the discussion, and these aren’t “the answers” but there’s so much of value in these suggestions that they deserve an efficient recap.

10 Million Discoveries: How Will We Know?

Camille asks how we can measure our success. And this is a good point: it’s not just that we want 10 million discoveries, we want 10 million discoveries shared. In the academic and medical health discovery system, this occurs through publication. We also have a form of publication: the Show&Tell talk. The Show&Tell talk has some key virtues: it is a first-person account that focuses directly on what’s been learned by the individual, answering three questions: What did you do? How did you do it? What did you learn? If the discoveries we facilitate are shared in a format that conforms to this template, we can count them.

However, the specific Show&Tell format we currently use has some features that prevent it from scaling: the talks are given at live events, documentation is sparse except at the international conferences, and the people who do the projects don’t have an easy way to update them publicly (after they share them) or to get help along the way (before they share them). We need a “unit of production” that scales more easily than the live show&tell talk given at a meetup.

As we work on developing this, we have an advantage: the non-scalable, handcrafted version is already working. And we can definitely grow it. We can go from 100 to 200, and in fact quite a bit further, until we are absolutely maxed out on show&tell talks. Along the way, we can experiment with other forms. This is definitely a 2-3 year process that we should approach with sparse assumptions. It would be fatal to just think “YouTube for Everyday Science” or something like that and charge ahead, with money flying out of our pockets in all directions. The opposite approach is actually more exciting and promising: go from 100 to 200, and then double again, and learn, learn, learn.

QS Institute/Train the Trainers/Certification

@Sara mentions the form of an institute. This is such a different approach from a typical startup strategy that it deserves to be underlined. We actually have some experience with this model: the QS Institute at Hanze Technical University in Groningen, founded by Martijn de Groot. Martijn, whom many of you know, managed to fund and develop a very successful group at QSI that launched an undergraduate major in “Quantified Self and Global Health” and also a summer continuing-education program in Quantified Self for health care allied professionals. (These were mainly nurses.) I visited the program several times and met students, who were working on their own self-tracking projects as a way to more deeply understand how to help patients. They selected instrumentation, formed their own questions, analyzed the data, and did a “show&tell” poster. Martijn is now the director of the REshape health innovation center at Radboud University Medical Center, where he has, among other responsibilities, a specific charge to develop approaches to teaching Quantified Self to medical professionals. This collaboration gives us a chance to develop curricula that could be shared, sold, or licensed broadly within health care, including the kinds of certifications that @sprague and @QuantifiedBob point to as a powerful component of influencing professional activity in health.

Please keep your comments coming. They are very important and will condition our fundraising and program development, which we hope will feed directly back into the community to spur the kind of work that’s needed.

4 Likes

<3 to everyone that’s weighed in!

@sprague I think you see so many of the same issues, and this potential for citizen science (and not the crowdsourced, exploitative version). I hope we keep hearing your thoughts. Like @Agaricus, I’ve found attempting my own personal project to be very instructive on where I stumble. But the dream is to say: “we don’t know exactly how, but the internet makes it possible for people to create and share and expand ideas – it decentralizes knowledge production – and we think it can do that in this area.”

I would expect “10 Million Discoveries” to be a long tail: the vast majority of things will just matter to the individual, not be a big finding. But (1) some become more important (re-used components, larger groups, etc.). Rather than trying to predict those, I think we would want to help millions exist – and see what grows naturally. That is, to grow innovation, grow the whole distribution. (2) The long tail has its own value: each person has done something worthwhile for themselves, and there is value in simply making this easier. (Per @Agaricus, the goal is also “sharing”; this is key for the serendipity of #2 becoming #1.)

@Steven_Jonas fully agree with the goal of being impactful on the individual level! That is the seed from which anything else grows, and has value in its own right. I think you may also have made the point: We don’t know what’s needed to expand personal research / everyday science. We have some ideas. I think the internet has demonstrated a transformative ability to enable collaboration & decentralized production – that’s the potential big win. But the platforms or tools are strategies, in service to the community and mission. I think what’s done should be flexible, it should expect to explore, learn, and iterate.

Along those lines, I have caution about our analogies – references to other platforms, communities, and models. Analogies are good because they make an idea familiar, and that’s vital for everything (community, funding, etc). Analogies are bad because they may go too far, they encourage imitative isomorphism – which sometimes works, but often doesn’t. I suppose the goal is: be inspired without being imitative.

Along those lines, I’ve been going back and forth on @Sara’s idea regarding an academy/institute/university. I think it really resonates with some cultural values…

I had started thinking along these lines too – is there an idea like an academy, institute, university? It touches on ideas of learning and research.

I’m cautious about leaning too much on it (does it convey hierarchical learning? We don’t want to reproduce the current model; does it miscommunicate that?).

But it might say some important things about the social norms we associate with institutions and research.

Merton described four “norms of science” – communism, universalism, disinterestedness, and organized skepticism. (Later thinkers have added “originality”.) I feel like there’s a sense that some of these norms exist – maybe in a translated or extended sense – in the aims of everyday science / self research.

Universalism: participation is valued from everyone, not from a particular group – it’s taken outside a traditional institution and democratized. Organized skepticism: I think we might hope for a community that would voice skepticism in the pursuit of empiricism? (i.e. not sympathetic to pseudoscience.) Communism: I think we’d like to see sharing of knowledge/learning, approaches, resources, and solutions so that others can participate and do self-research.

To me, these are positive things I take from the idea of an “institute”… a community collectively engaged in knowledge production, where each researcher has their own project. I’m curious how others feel, if they see resonance with the ethos/norms of “traditional” science.

1 Like

New draft, substantially revised and heavily influenced by the comments here, is online and open for comment:

I’ll resist the temptation to explain all the changes. Go at it, I’m listening hard.

6 Likes

@tblomseth has been advocating that we have a bolder vision for the next two decades. How about 100 million discoveries? On an operational level, if we average slightly more than doubling each year, starting with 100 in the first year, we can get on track toward this goal. I think the early years of this doubling are easy to plan (using our current methods). A single conference delivers more than 80 talks and presentations, and in the past we’ve done two per year - and that doesn’t include any of the show&tell talks from local meetups - all produced on a shoestring budget. So I don’t think the early phase of the hockey stick is at all difficult to understand in tactical detail.

HOWEVER, later years require that we master the use of (existing) participatory technologies and educational channels. We don’t have to reinvent participatory culture, though our approach will necessarily have some twists. And our nonprofit structure means that we are not trying to capture all of the value we create and support; we simply want this value to exist. So that’s all to the good. However, it is still uncomfortable (for me at least) to use the exponential arguments, which have in the past been used to support utterly absurd propositions from utterly unprepared startups. I haven’t quite figured out whether this discomfort is wise or unwise, so I’m just dropping this here for feedback.
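For what it’s worth, the exponential arithmetic checks out in a few lines (a sanity check only; the annual doubling rate is the assumption discussed above, not a plan):

```python
# Cumulative shared discoveries if output doubles each year, starting at 100.
total = 0
yearly = 100
for year in range(1, 21):  # 20 years
    total += yearly
    yearly *= 2
print(total)  # 104,857,500 - exact doubling for 20 years lands near 100 million
```

So "slightly more than doubling" is, if anything, conservative relative to the 100 million target; the discomfort is about whether a 20-year exponential is believable at all, not about the arithmetic.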

Some added second (or third) thoughts - I’ve been reading the Wikipedia article on modeling Wikipedia growth, and feeling more and more like putting a numerical target on “discoveries” by 2040 is perilously close to bullshit. I think it’s important to be clear about our ambition, which is “everyday science for everybody.” This is utopian, perhaps, in the way that the Universal Declaration of Human Rights is itself utopian, but it is not bullshit. However, saying that we will produce (or even “catalyze”) 100 million discoveries by 2040 - that’s a different matter. I think there is ambiguity in the concept of discovery that, while giving us wiggle room, actually gives us too much wiggle room. The seemingly “hard” number is really just a stand-in for “really, really a lot.” One of @tblomseth’s alternate suggestions was to eliminate the numerical claim altogether, and that’s where I’m leaning.

You can always water down your definition of “discoveries” until it corresponds to something that is being done 100 million times :slight_smile:

What the large number does tell us is that the goal is to have more than just over-educated enthusiasts with too much spare time involved – but how?

Thanks Eric - I admit I’m probably obsessing a bit too much about this, but on the other hand I think that if we state a quantitative goal, it should be something that we believe in and can achieve; even if highly ambitious, it should be a genuine goal we can rally around. Otherwise it’s just a distraction.

@tblomseth asked some months ago: if our goal is to scale everyday science, what exactly are we scaling? That is, what is the “unit of production”? A good question!

Out of the discussion this question produced came the idea that a discovery is a “shared project.” It’s basically a “show&tell,” but not necessarily one that is presented live at a meeting and shared as a video. Maybe a project log also counts. But some requirements are:

  1. It has an author. This doesn’t have to be a real name (it can be anonymous), but part of the definition of a project is that it is carried out by an individual who is both the subject and the investigator, and who takes responsibility for it.

  2. It involves making self-observations. Usually this would mean “generates data,” but we’ve seen some minimal/edge cases where observations were made but no data was collected or preserved. I think we can make room for these, and saying “observations” rather than “data” keeps it user-centric rather than data-centric.

  3. It is publicly shared. It’s partly up to us to define how the sharing takes place, and creating the right context and framework for sharing is partly what QS has done and what Article 27 should support.
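Purely as an illustration, the three requirements could be captured in a minimal data model. All names here are hypothetical sketches, not an existing schema:

```python
from dataclasses import dataclass, field

@dataclass
class SharedProject:
    """A 'discovery' counted as a shared project (hypothetical sketch)."""
    author: str                # pseudonyms allowed, but someone takes responsibility
    observations: list = field(default_factory=list)  # self-observations, not necessarily numeric data
    public_url: str = ""       # where the project is publicly shared

    def counts_as_discovery(self) -> bool:
        # All three requirements from the list above must hold.
        return bool(self.author) and bool(self.observations) and bool(self.public_url)
```

Whether something like this lives in the forum, a database, or a spreadsheet is exactly the kind of tooling question left open here; the point is only that the definition is concrete enough to count against.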

I think it is easy to see how we go from 100 to 200. Even doing this will teach us a lot. Going from 200 to 400 is also something we can plan. It may cause us to do some new things, like creating user accounts for people sharing projects so they can update them without our intervention (if we do this outside the forum). If we think it is needed, we can also rally the community around different topics where there is already a lot of activity: blood pressure, cholesterol, ovulatory cycling, blood sugar, sleep, etc. If we focused on it for a few months, we could support 20-30 projects in any of these areas, I think.

So I see a path to maybe 500 discoveries, where discoveries are defined as shared projects. That is, they are not the “few times in a lifetime” level of discoveries, but more basic bits of learning. For instance, even my learning that my hand tremor is made worse when I stiffen the digit counts as a discovery, and I learned this from just trying to refine my measurement protocol, not from doing an experiment. But it was really interesting, and gives me some useful ideas. (Somebody might appreciate this idea for tracking their own tremor, but we don’t really have enough density and ease of sharing to make this plausible yet.)

Going from 500 to 1000 or 8000 is another phase shift, and I think by year two we would be ready to think about this.

And from 8,000 to 128,000 is another shift… and so on.

I’m resistant to pretending I know what happens at this point, since we’ll have a much better chance of knowing after we get to 1000.

I gave a talk today at the Research track of the Wikimania conference in Stockholm that made use of some of the ideas being thrown around here, and of discussions I had in the past with @Agaricus & @madprime, and it might be useful for the discussion here too!

The talk was titled Peer production of community science with personal data and introduced the general idea of citizen science as a potential implementation of Article 27 and of the Mertonian norms of communism & universalism. It also showed how most citizen science projects these days actually fall short of those ideals, as many of them are reframed crowdsourcing and not true participation.

Based on this, I highlighted how people who are doing QS are actually turning themselves into scientists, as they are actively participating in every part of the research cycle (from building a hypothesis & collecting the necessary data all the way to analyzing it and drawing conclusions). At the same time, the mindset of Quantified Self differs from doing research in the sense that our lack of shared data flows or shareable tooling makes it hard for others to reproduce our data collections and scale from the Quantified Self to the “Quantified Us” that allows communities of interest to emerge and enables people to work together on hypotheses.

I went on to give some examples of how the shared data flows/tooling and project-based sharing of Open Humans can help enable peer-produced knowledge production that scales beyond n=1 to larger groups.

The full slides can be found here: https://zenodo.org/record/3370474

1 Like

@ejain Totally! We can always default to some eroding goals :sweat_smile: I like how you point out that the magnitude of a quantitative goal has some important implications about reach and impact beyond the current QS community. And there’s some important signaling to potential funders and the surrounding world, too.

@Agaricus I still have issues with thinking in a 20-year timeframe. To me it seems difficult to have an idea of the kind of societal, technological, etc. environment the program would be immersed in 10-20 years from now. 10 years feels more manageable to me.

Some ballparking has led me to a slightly different way of stating a quantitative goal. Instead of focusing on the static inventory of discoveries, i.e. 10M discoveries in 20 yrs, how about highlighting the production of discoveries instead:

1 million discoveries a year in 10 years

If we pull that off, it doesn’t actually seem unlikely that it could be scaled to 100M discoveries in 20 years. However, I’d prefer limiting us to talking about the path of the first 10 yrs, and then crossing that next bridge of years 10+ when we get (closer) to it.

2 Likes

The human capacity to process numbers is dismal at these scales, and the appeal of the goal is essentially metaphoric. This is an “as many X as there are stars in the sky” type of statement.

And I’ve seen the Wikipedia comparison receive positive reactions! I think it’s very good – I think it has resonance, calling to a constellation of related ideas, analogies, similarities, and lessons.

I don’t think exact numbers on the path matter much beyond the trajectory for the next couple years. And I think 20+ years is a good number because it’s generational in timespan.

Maybe instead I’d suggest a more vague statement, like: “Our dream: in 20 years, to see people share as many personal projects with each other as there are pages on Wikipedia.”

1 Like

At the suggestion of @madprime, I’m applying for a Shuttleworth Fellowship. Their support has been important for Open Humans, and Mad, Bastian, and Open Humans generally have been key allies of QS work, so the connection makes sense. In the spirit of the fellowship, I’m posting my draft application for public comment. Anybody with the link can comment in the doc:

The headers of each section are supplied by the Shuttleworth Foundation in their application form, and there is a 1500 character limit for each section. So it has its own format, but the links to the Article 27 pitch we discussed over the summer should be obvious.

Any suggestions welcome!

5 Likes

I gave it a read - it’s strong. I added some comments and a couple of line edits.