What’s so special about Networks?

This is a long overdue blog post, so it’s also a bit long.

In the latter half of last year, as part of the work on the Mozilla Foundation 2020 Strategy and 2016 Business Plan, I was thinking about KPIs (Key Performance Indicators).

I think a lot about KPIs, and how the numbers you choose to care about either do or don’t influence the people who make day-to-day decisions about where to invest time, energy and money.

Many people have written much about KPIs, and the intersection of people and numbers is a fascinating place to work.

And in many cases it’s pretty easy to find a good KPI. Most of the significant ‘business’ decisions made around the world today map as directly as possible to some concept of Shareholder Value, or Gross Domestic Product.

But when your goal isn’t making money, or moving money around, it gets more complex (and usually more interesting).

Which brings us to Networks. In particular, the trickiest kind of network to measure – the ones made up of people.


A significant part of the work we do in the Foundation is investing in communities of practice that are linked in various ways to our mission. These are amazing programmes, and the volunteers around the world who get involved are a huge part of our capacity to influence the shape of the world.

If you embed yourself in one of these communities, you see magical things happen. But as these networks get bigger and are distributed far and wide geographically, it’s hard to keep an accurate intuition of where things are going well, and where things need help.

To care for the network at scale, we need ways to track its ‘Vital Signs’.

Coming from a web analytics and fundraising optimisation background, this started out feeling somewhere between ‘very messy’ and ‘outright impossible’ to me. But the more I dug into existing work in the field, the more I realised we could do something useful here.

The question I’ve been asking myself is ‘how can we build a nervous system for our network?’ Something lightweight, but capable of firing feedback signals and generating reflex actions. This frame has been helpful as I’ve talked to people about this work. We’re not at that point yet, but that’s where I’d like to see this work go.

So, onto what we’re doing right now.

Evaluating the impact of network building programmes isn’t new, but compared to other evaluation processes used by non-profits, it’s a younger field of work. We’re going to start by running network mapping surveys and applying some Social Network Analysis (SNA) methods to analyse the results.

Where we might be doing something new is in trying to turn the ongoing process of network evaluation into an organisational KPI – by boiling down the constituent parts of the analysis to generate a single number we can track over time – allowing us to compare networks of different shapes and styles on a standard scale. I say ‘might be’ because it’s possible we’ve just not found the other people who’ve done this yet. Searching for content on Network KPIs usually turns up the ‘ICT type of network’ rather than the ‘people type of network’.
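To make the ‘single number’ idea concrete, here’s a minimal sketch of the kind of calculation involved. The ingredients (network density, and the share of members in the largest connected group) are standard SNA measures, but the 50/50 weighting and the `vital_signs` name are illustrative assumptions, not our actual formula:

```python
from collections import defaultdict

def vital_signs(people, edges):
    """Boil a people-network down to one trackable number between 0 and 1.

    people: list of member ids; edges: list of (a, b) collaboration pairs,
    e.g. reported working relationships from a network mapping survey.
    """
    n = len(people)
    if n < 2:
        return 0.0
    # Density: actual connections as a share of all possible connections.
    unique = {frozenset(e) for e in edges if e[0] != e[1]}
    density = len(unique) / (n * (n - 1) / 2)

    # Size of the largest connected group, found by graph traversal.
    neighbours = defaultdict(set)
    for pair in unique:
        a, b = tuple(pair)
        neighbours[a].add(b)
        neighbours[b].add(a)
    seen, largest = set(), 0
    for p in people:
        if p in seen:
            continue
        stack, component = [p], {p}
        while stack:
            node = stack.pop()
            for nxt in neighbours[node] - component:
                component.add(nxt)
                stack.append(nxt)
        seen |= component
        largest = max(largest, len(component))

    # Illustrative 50/50 blend of the two ingredients.
    return 0.5 * density + 0.5 * (largest / n)
```

The value of a score like this isn’t its absolute level, but whether it moves up or down as the network grows, which is what makes it usable as a KPI.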

So, I have more to write here but for now I’ll share a couple of working documents:


Featured Photo Credit: Matt Gibson

Software as means of changing the world

This is a thought piece as part of the Mozilla Learning strategy project. In particular it’s a contribution to the Advocacy Working Group who are looking at how we design a programme to have impact at a significant global scale as one part of an organisational mission to create Universal Web Literacy.

To be clear, this is an input into the strategy project, not an output. This may or may not be something Mozilla Learning chooses for its strategy, but I put my name down to write this piece as it’s something I’ve been thinking about.

I agreed to write ‘a paragraph or two’. It turns out I have more thoughts on this.

So I’ll start with the summary:

Point 1. We could choose to build software as a deliberate means of creating change
Mozilla is known for changing the world by building software. We have precedent for taking this approach and exceptionally talented engineers around the world want to play a part in this story (both as staff and volunteers). Building new software products is hard, but if successful can scale to have a disproportionate impact in relation to investment.

Point 2. We can create change via software without writing any code
Through partnership with existing consumer software products, we could deliver web literacy content/training/engagement independently of any software we build. This approach can also have a disproportionate impact relative to investment. Can we be there to greet the next billion as they come online and create their first accounts / profiles / digital identities? What would that look like? Is that something we want to work on?

Both approaches are valid options for a strategy, but require very different execution. They are not mutually exclusive.

Longer Brain Dump

The rest of this isn’t entirely coherent, but I can’t spend longer on it right now…

Software as a means of changing the world

What is the role of software in an Advocacy strategy?

I’m going to argue that choosing software as a method for creating deliberate change in the world is potentially the most cost effective route to large scale impact.

Assumption: Software already changes the world

I’ve been thinking about this topic for a while, and want to start by directing you to a short post I wrote three years ago – Interwoven with bits – it distills a topic I could write about for days on end. We shape software, and software shapes us.

The software that changes the world takes many forms. A list of examples could technically include all software that continues to be used by any human being (directly and indirectly). But I’ll call out a few categories:

  1. Software as a direct and targeted solution to a specific problem
  2. Software as a delivery mechanism for ‘traditional’ advocacy and communications
    • Like this game from WWF whose pirate distribution through file-sharing networks actually generated much greater reach in target markets than a paid communications channel could have hoped for – while paid downloads in other markets generated funds
  3. Software designed as a ‘neutral’ utility but which has unplanned side-effects (good and bad)
    • This is pretty much all commercial software. Those that reach the biggest percentage of the global population likely have the most impact on the world. These are the systems we are addicted to, which change our behaviour through the creation of new habits and interactions with technology, society, and even with ourselves.
  4. Software as an ethical, political, or other deliberate consumer choice
    • Like choosing a freely editable wiki to power a website
    • Or choosing Linux over Windows
    • Or using software to hide your identity online
  5. Building software as an act of advocacy
    • By building its products in the way that it did, Mozilla changed the world directly and indirectly as an advocate for the open web. While the success of Firefox as a consumer product gave Mozilla a louder voice in many important forums, even the ‘smaller’ stories like the organising principles of community participation had an impact by changing public understanding of how an organisation could be run in a networked world.

Mass adoption

In general, when I’m thinking about how software changes the world, I’m thinking about how the mass adoption of a piece of software creates a shift in behavioural norms. The more successful and/or popular a piece of software is, the more likely it is to have more impact on the world.

For people who want to fix the world with software, it’s easy to look at ‘Silicon Valley’ success stories and get excited about building the next big thing. But in a sensible planning process these inspiring stories need the context of survivor bias.

Most software will go to market and die. The same is true of most inventions and artistic creations. The majority who try to build something and ‘fail’ are known as the ‘optimistic martyrs’. Most software dies in the market because the formula for winning is elusive, and even a well-defined target market is actually a collection of complex, busy human beings.

People who build software with purely good intentions about making the world a better place face an even bigger challenge – they don’t have the forcing function of the market to keep them focused. That’s not to say it can’t be done (see Firefox), but as someone who uses numbers to influence decision making, I find that numbers with currency symbols next to them always get a more direct response.

Is software too big of a bet?

As an advocacy strategy, choosing software as a means of changing the world would be a gamble. Even with the best people working on it.

But software is interesting because of its relatively low cost of development and distribution. These costs don’t need to scale with success – they become negligible if a product reaches a mass consumer market. That is, the cost to impact an extra person beyond a critical mass approaches zero.
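To make that concrete, here’s a toy amortisation sketch (all the numbers are invented):

```python
def cost_per_person(fixed_cost, marginal_cost, people_reached):
    # Total cost spread across everyone reached: the fixed development
    # cost dominates at small scale, the (tiny) marginal cost at large scale.
    return (fixed_cost + marginal_cost * people_reached) / people_reached

# With an invented $2M build cost and $0.001 per-person distribution cost:
small = cost_per_person(2_000_000, 0.001, 10_000)       # ~$200 per person
large = cost_per_person(2_000_000, 0.001, 100_000_000)  # ~$0.02 per person
```

Past a critical mass the fixed cost washes out, and the cost to impact one extra person approaches the marginal cost, which for software distributed over the internet is close to zero.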

Therefore, the potential ROI on software is enormous (thanks Internet!). But potential is a word that optimistically wraps up unknown odds.

There’s a good reason that successful start-ups that scale are called ‘unicorns’.

A Mozilla bias

At this point I want to reflect on a Mozilla bias.

I was drawn into the org through experiencing this bias at my first Mozfest – hacking on software in my free time as a way to try and change the world.

Many people work at Mozilla because they believe building great software makes the world a better place; myself included. This is Mozilla’s heritage and origin story. But it might not always be the best or only solution to a given problem.

Changing the world through software, without building software

This next piece is not to exclude the option of building software. But I’d like to approach this section as though Mozilla didn’t have its history of building products, and as though it were a new org setting out to advocate for Universal Web Literacy using software as a means of creating change.

If you pick a particular goal, like helping the next billion new users of the internet who come online in a mobile-first capacity to understand the full scope of the web, what is the most effective route to success?

We’ve already been exploring how to build learning opportunities into Firefox, but what about partnering with the organisations who own many of the other front doors to ‘the web’? For example, if you create a Twitter/WhatsApp/Telegram/Other account in Bangladesh, can we work with the owners of those products to offer a locally relevant and meaningful crash course to the web – covering everything from privacy to economic opportunity? Something that delivers on our mission, but also adds real value to the end-user and the product owner. How much effort would be required to make that happen? How many people could we reach?

Though I noted the example of the WWF Rhino Raid game in the list above, many well-intentioned orgs have tried to create software like apps and games as a way to change the world without understanding the real nuances of product (games are possibly the hardest to get right). As an example of the partnership approach, WWF have also run campaigns in existing successful consumer products via partnerships, like these in-game experiences.

When I talked about this concept briefly on our last working group call, Laura flagged up that Tumblr directs users who explore their theme editor to online training courses run by General Assembly. This is the kind of partnership that adds value in both directions. Tumblr want users to build higher quality sites, and General Assembly want to educate people who have an active interest in learning about these topics.

Investment to Impact Ratio

After thinking through the points above, I’d advocate for ‘advocating’ with software; some of which we build, some of which we don’t.

The potential investment to impact ratio for a software product that scales is immense. But it requires a long-term product strategy.

I’d argue that good products typically last longer than good campaigns, and my view is that the most meaningful change comes from what we help people do repeatedly.

Software also offers a degree of measurability and accountability that is hard to match in any other line of work. If I were an impact investor, that’s where I’d put my money, for that reason. Though I definitely have a bias here.

This isn’t to say what software we should build.

Image source.

Thoughts about CRM, from Whistler

…and in the context of the Mozilla Learning Strategy.

Obligatory photo of Whistler mountains
Another obligatory photo of the mountains

Mozilla recently held a coincidental work week in Whistler Village.

I knew this would be a busy week when the primary task for MoFo across teams was to ‘Dig into our relationships goal’.

  1. ‘Relationships’ meant a lot of talk about relationship management (aka CRM).
  2. ‘Goal’ meant a lot of talk about metrics.

Between these two topics, my calendar was fully booked long before I landed in Canada.

My original plan was to avoid booking any meetings, and be more flexible about navigating the week. But that wasn’t possible, which marks a noticeable change in how people want to talk about metrics and data. A year ago, I’d be moving around the teams at a work-week nudging others with ‘do you want to talk about metrics?’. This week, in contrast, was back-to-back sessions requested by many others.

We also spent a lot of time together talking about the Mozilla Learning Strategy. And this evolving conversation is feeding back into my own thinking about delivering CRM to the org.

Where I had been thinking about how to design a central service that copes with the differences between all the teams, what I actually need to focus on is the things that are the same across teams.

I’m not designing many CRMs for many teams, but instead a MoFo CRM that brings together many teams.

I’m not actually giving this post enough writing time to add the context for some readers, but that small change in framing is important. And I think very positive.

Lastly, one other important lesson learned: Pay attention to time-zones and connecting flights when booking your travel, or you’ll end up sleeping in the airport.


The importance of retention rates, explained by @bbalfour

In my last post I shared a tool for playing with the numbers that matter for growing a product or service (i.e. conversion, retention and referral rates).

This video of a talk by Brian Balfour is a perfect introduction / guide to watch if you’re also playing with that tool. In particular, the graphs from 1:46 onwards.

Optimizing for Growth

In my last post I spent some time talking about why we care about measuring retention rates, and tried to make the case that retention rate works as a meaningful measure of quality.

In this post I want to look at how a few key metrics for a product, business or service stack up when you combine them. This is an exercise for people who haven’t spent time thinking about these numbers before.

  • Traffic
  • Conversion
  • Retention
  • Referrals

If you’re used to thinking about product metrics, this won’t be new to you.

I built a simple tool to support this exercise. It’s not perfect, but in the spirit of ‘perfect is the enemy of good’ I’ll share it in its current state.

>> Follow this link, and play with the numbers.

Optimizing for growth isn’t just ‘pouring’ bigger numbers into the top of the ‘funnel’. You need to get the right mix of results across all of these variables. And if your results for any of these measurable things are too low, your product will have a ‘ceiling’ for how many active users you can have at a single time.

However, if you succeed in optimizing your product or service against all four of these points you can find the kind of growth curve that the start-up world chases after every day. The referrals part in particular is important if you want to turn the ‘funnel’ into a ‘loop’.

Depending on your situation, improving each of these things has varying degrees of difficulty. But importantly, they can all be measured, and as you make changes to the thing you are building you can see how your changes impact each of these metrics. These are things you can optimize for.
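The way these four numbers interact can be sketched in a few lines. This is a toy model with invented rates, not the tool linked above:

```python
def simulate_growth(traffic, conversion, retention, referral, periods):
    """Toy model of active users over time.

    traffic: new visitors per period; conversion: share who become active;
    retention: share of actives who stay for another period;
    referral: new actives generated per existing active per period.
    """
    active, history = 0.0, []
    for _ in range(periods):
        active = active * retention + traffic * conversion + active * referral
        history.append(active)
    return history

# While retention + referral < 1, growth flattens at a ceiling of
# traffic * conversion / (1 - retention - referral).
curve = simulate_growth(traffic=10_000, conversion=0.05,
                        retention=0.80, referral=0.10, periods=200)
```

With these invented rates the curve climbs and then flattens near 5,000 active users. Raising any of the four rates lifts that ceiling, and pushing retention + referral past 1 is what turns the ‘funnel’ into a compounding ‘loop’.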

But while you can optimize for these things, that doesn’t make it easy.

It still comes down to building things of real value and quality, and helping the right people find those things. And while there are tactics to tweak performance rates against each of these goals, the tactics alone won’t matter without the product being good too.

As an example, Dropbox increased their referral rate by rewarding users with extra storage space for referring their friends. But that tactic only works if people like Dropbox enough to (a) want extra storage space and (b) feel happy recommending the product to their friends.

In summary:

  • Build things of quality
  • Optimize them against these measurable goals

Measuring Quality

At the end of last year, Cassie raised the question of ‘how to measure quality?’ on our metrics mailing list, which is an excellent question. And like the best questions, I come back to it often. So, I figured it needed a blog post.

There are a bunch of tactical opportunities to measure quality in various processes, like the QA data you might extract from a production line for example. And while those details interest me, this thought process always bubbles up to the aggregate concept: what’s a consistent measure of quality across any product or service?

I have a short answer, but while you’re here I’ll walk you through how I get there. Including some examples of things I think are of high quality.

One of the reasons this question is interesting, is that it’s quite common to divide up data into quantitative and qualitative buckets. Often splitting the crisp metrics we use as our KPIs from the things we think indicate real quality. But, if you care about quality, and you operate at ‘scale’, you need a quantitative measure of quality.

On that note, in a small business or on a small project, the quality feedback loop is often direct to the people making design decisions that affect quality. You can look at the customers in your bakery and get a feel for the quality of your business and products. This is why small initiatives are sometimes immensely high in quality but then deteriorate as they attempt to replicate and scale what they do.

What I’m thinking about here is how to measure quality at scale.

Some things of quality, IMHO:

This axe is wonderful. As my office is also my workshop, this axe is usually near to hand. It will soon be hung on the wall. Not because I am preparing for the zombie apocalypse, but because it is both useful as a tool, and a visual reminder of what it means to build quality products. If this ramble of mine isn’t enough of a distraction, watch Why Values are Important to understand how this axe relates to measures of quality, especially in product design.

This toaster is also wonderful. We’ve had this toaster more than 10 years now, and it works perfectly. If it were to break, I can get the parts locally and service it myself (it’s deliberately built to last and be repaired). It was an expensive initial purchase, but works out cheap in the long run. If it broke today, I would fix it. If I couldn’t fix it for some extreme reason, I would buy the same toaster in a blink. It is a high quality product.

This is the espresso coffee I drink every day. Not the tin in the photo; mine is another brand that comes in a bag. It has been consistently good for a couple of years, until the last two weeks when the grind has been finer than usual and it keeps blocking the machine. It was a high-quality product in my mind, until recently. I’ll let another batch pass through the supermarket shelves and try it again. Otherwise I’ll switch.

This spatula looks like a novelty product and typically I don’t think very much of novelty products in place of useful tools, but it’s actually a high quality product. It was a gift, and we use it a lot and it just works really well. If it went missing today, I’d want to get another one the same. Saying that, it’s surprisingly expensive for a spatula. I’ve only just looked at the price, as a result of writing this. I think I’d pay that price though.

All of those examples are relatively expensive products within their respective categories, but price is not the measure of quality, even if price sometimes correlates with quality. I’ll get on to this.

How about things of quality that are not expensive in this way?

What is quality music, or art, or literature to you? Is it something new you enjoy today? Or something you enjoyed several years ago? I personally think it’s the combination of those two things. And I posit that you can’t know the real quality of something until enough time has passed. Though ‘enough time’ varies by product.

Ten years ago, I thought all the music I listened to was of high quality. Re-listening today, I think some of it was high-quality. As an exercise, listen to some music you haven’t for a while, and think about which tracks you enjoy for the nostalgia and which you enjoy for the music itself.

In the past, we had to rely on sales as a measure of the popularity of music. But like price, sales doesn’t always relate to quality. Initial popularity indicates potential quality, but not quality in itself (or it indicates manipulation of the audience via effective marketing). Though there are debates around streaming music services and artist payment, we do now have data points about the ongoing value of music beyond the initial parting of listener from cash. I think this can do interesting things for the quality of music overall. And in particular that the future is bleak for album filler tracks when you’re paid per stream.

Another question I enjoy thinking about is why over the centuries, some art has lasting value, and other art doesn’t. But I think I’ve taken enough tangents for now.

So, to join this up.

My view is that quality is reflected by loyalty. And for most products and services, end-user loyalty is something you can measure and optimize for.

Loyalty comes from building things that both last, and continue to be used.

Every other measurable detail about quality adds up to that.

Reducing the defect rate of component X by 10% doesn’t matter unless it impacts end-user loyalty.

It’s harder to measure, but this is true even for things which are specifically designed not to last. In particular, “experiences”; a once-in-a-lifetime trip, a festival, a learning experience, etc, etc. If these experiences are of high quality, the memory lasts and you re-live them and re-use them many times over. You tell stories of the experience and you refer your friends. You are loyal to the experience.

Bringing this back to work.

For MoFo colleagues reading this, our organization goals this year already point us towards Quality. We use the industry term ‘Retention’. We have targets for Retention Rates and Ongoing Teaching Activity (i.e. retained teachers). And while the word ‘retention’ sounds a bit cold and business like, it’s really the same thing as measuring ‘loyalty’. I like the word loyalty but people have different views about it (in particular whether it’s earned or expected).

This overarching theme also aligns nicely with the overall Mozilla goal of increasing the ‘number of long term relationships’ we hold with our users.

Language is interesting though. Thinking about a ‘20% user loyalty rate’ 7 days after sign-up focuses my mind slightly differently than a ‘20% retention rate’. ‘Retention’ can sound a bit too much like ‘detention’, which might explain why so many businesses strive for consumer ‘lock-in’ as part of their business model.
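For what it’s worth, the calculation behind a ‘20% retention rate 7 days after sign-up’ can be as simple as this sketch (the data shapes and the ‘active on day 7 or later’ rule are illustrative choices; real definitions vary, so pick one and apply it consistently):

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n=7):
    """Share of a cohort still active n days after signing up.

    signups: {user: signup date}; activity: {user: set of active dates}.
    """
    retained = sum(
        1 for user, signed in signups.items()
        if any(d >= signed + timedelta(days=n)
               for d in activity.get(user, ()))
    )
    return retained / len(signups)
```

Run over a week’s sign-up cohort, this returns the number you could equally report as a ‘loyalty rate’.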

Talking to OpenMatt about this recently he put a better MoFo frame on it than loyalty; Retention is a measure of how much people love what we’re doing. When we set goals for increasing retention rate, we are committing to building things people love so much that they keep coming back for more.

In summary:

  • You can measure quality by measuring loyalty
  • I’m happy retention rates are one of our KPIs this year

My next post will look more specifically about the numbers and how retention rates factor into product growth.

And I’ll try not to make it another essay. 😉

Webmaking in the UK, and face-to-face events

One of this week’s conversations was with Nesta, about Webmaker usage within the UK and whether or not we have data to support the theory that face-to-face events have an impact on getting people involved in making on the web. These are two topics that interest me greatly.

I’m basically copying some of my notes into blog form so that the conversation isn’t confined to a few in-boxes.

And the TL;DR is our data represents what we’ve done, rather than any universal truth.

Our current data would support the hypothesis that face-to-face time is important for learning, but that would simply be because that’s how our program has been designed to date. In other words, our Webmaker tools were designed primarily for use in face-to-face events, which means that adoption by ‘self-learners’ online is low because there is little guidance or motivation to play with our tools on their own.

This year we’re making a stronger push on developing tools that can be used remotely, alongside our work on volunteer-led face-to-face events. This will lead to a less biased overall data set in the future, where we can begin to properly explore the impact on making and learning for people who do or don’t attend face-to-face events at various stages in their learning experience. In particular, I’m keen to understand what factors help people transition from learners to mentoring and supporting their peers.

I also took a quick look at the aggregate Google Analytics location data for the UK audience, which I hadn’t done before and which reinforces the point above.

[Screenshot: Google Analytics map of Webmaker traffic across the UK]

Above: Traffic to Webmaker (loosely indicating an interest in the topic) is roughly distributed like a population map of the UK. This is what I expect to see of most location data.

[Screenshot: Google Analytics map of UK visitors who made something with Webmaker]

Above: However, if you look at the locations of visitors who make something, there are lots of clusters around the UK and London is equaled by many other cities.

To-date, usage of the Webmaker tools has been driven by those who are using the tools to teach the web (i.e. Webmaker Mentors). But we also know there are large numbers of people who find Webmaker outside of the face-to-face event scenarios who need a better route into Webmaker’s offering.

The good news is that this year’s plans look after both sets of potential learners.

Fundraising testing update

I wrote a post over on fundraising.mozilla.org about our latest round of optimization work for our End of Year Fundraising campaign.

We’ve been sprinting on this during the Mozilla all-hands workweek in Portland, and it’s been a lot of fun working face-to-face with the awesome team making this happen.

You can follow along with the campaign, and see how we’re doing at fundraising.mozilla.org

And of course, we’d be over the moon if you wanted to make a donation.

These amazing people are working hard to build the web the world needs.