What’s so special about Networks?

This is a long overdue blog post, so it’s also a bit long.

In the latter half of last year, as part of the work on the Mozilla Foundation 2020 Strategy and 2016 Business Plan, I was thinking about KPIs (Key Performance Indicators).

I think a lot about KPIs, and how the numbers you choose to care about either do or don’t influence the people who make day-to-day decisions about where to invest time, energy and money.

Many people have written much about KPIs, and the intersection of people and numbers is a fascinating place to work.

And in many cases it’s pretty easy to find a good KPI. Most of the significant ‘business’ decisions made around the world today map as directly as possible to some concept of Shareholder Value, or Gross Domestic Product.

But when your goal isn’t making money, or moving money around, it gets more complex (and usually more interesting).

Which brings us to Networks. In particular, the trickiest kind of network to measure – the ones made up of people.


A significant part of the work we do in the Foundation is investing in communities of practice that are linked in various ways to our mission. These are amazing programmes, and the volunteers around the world who get involved are a huge part of our capacity to influence the shape of the world.

If you embed yourself in one of these communities, you see magical things happen. But as these networks get bigger and are distributed far and wide geographically, it’s hard to keep an accurate intuition of where things are going well, and where things need help.

To care for the network at scale, we need ways to track its ‘Vital Signs’.

Coming from a web analytics and fundraising optimisation background, this started out feeling somewhere between ‘very messy’ and ‘outright impossible’ to me. But the more I dug into existing work in the field, the more I realised we could do something useful here.

The question I’ve been asking myself is ‘how can we build a nervous system for our network?’ Something lightweight, but capable of firing feedback signals and generating reflex actions. This frame has been helpful as I’ve talked to people about this work. We’re not at that point yet, but that’s where I’d like to see this work go.

So, onto what we’re doing right now.

Evaluating the impact of network building programmes isn’t new, but compared to other evaluation processes used by non-profits, it’s a younger field of work. We’re going to start by running network mapping surveys, and apply some Social Network Analysis (SNA) methods to analyse the results.

Where we might be doing something new is trying to turn the ongoing process of network evaluation into an organisational KPI – by boiling down the constituent parts of the analysis to generate a single number we can track over time – allowing us to compare networks of different shapes and styles on a standard scale. I say ‘might be’, because it’s possible we’ve just not found the other people who’ve done this yet. Searching for content on Network KPIs usually ends up finding the ‘ICT type of network’ rather than the ‘people type of network’.
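For a rough sense of what ‘boiling down’ could look like, here’s a minimal sketch in Python using networkx. The measures, weights and the ‘vitality_score’ name are illustrative assumptions for this post, not the method we’ve settled on.

```python
# A minimal sketch of combining a few standard SNA measures into one
# trackable number. Measures, weights and names here are illustrative.
import networkx as nx

def vitality_score(G: nx.Graph) -> float:
    """Combine a few network health measures into a single 0-1 score."""
    n = G.number_of_nodes()
    if n < 2:
        return 0.0

    density = nx.density(G)  # share of possible ties that actually exist
    largest = max(nx.connected_components(G), key=len)
    connectedness = len(largest) / n  # share of people in the main cluster
    decentralisation = 1 - max(nx.degree_centrality(G).values())  # 1 = no single hub dominates

    # Illustrative weights - these would need calibrating against networks
    # we already believe are healthy.
    return 0.4 * connectedness + 0.3 * density + 0.3 * decentralisation

# Toy example: four people, four relationships reported in a mapping survey.
G = nx.Graph([("ana", "bo"), ("bo", "caro"), ("caro", "ana"), ("dev", "bo")])
print(round(vitality_score(G), 3))
```

The single number hides a lot, which is exactly why the stories behind it matter – but it gives us something comparable to watch over time.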

So, I have more to write here but for now I’ll share a couple of working documents:


 

Featured Photo Credit: Matt Gibson

The importance of retention rates, explained by @bbalfour

In my last post I shared a tool for playing with the numbers that matter for growing a product or service (i.e. conversion, retention and referral rates).

This video of a talk by Brian Balfour is a perfect introduction / guide to watch if you’re also playing with that tool. In particular, the graphs from 1:46 onwards.

Optimizing for Growth

In my last post I spent some time talking about why we care about measuring retention rates, and tried to make the case that retention rate works as a meaningful measure of quality.

In this post I want to look at how a few key metrics for a product, business or service stack up when you combine them. This is an exercise for people who haven’t spent time thinking about these numbers before.

  • Traffic
  • Conversion
  • Retention
  • Referrals

If you’re used to thinking about product metrics, this won’t be new to you.

I built a simple tool to support this exercise. It’s not perfect, but in the spirit of ‘perfect is the enemy of good’ I’ll share it in its current state.

>> Follow this link, and play with the numbers.

Optimizing for growth isn’t just ‘pouring’ bigger numbers into the top of the ‘funnel‘. You need to get the right mix of results across all of these variables. And if your results for any of these measurable things are too low, your product will have a ‘ceiling’ on how many active users you can have at any one time.

However, if you succeed in optimizing your product or service against all four of these points you can find the kind of growth curve that the start-up world chases after every day. The referrals part in particular is important if you want to turn the ‘funnel’ into a ‘loop’.
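If you want to see why the ‘ceiling’ appears, here’s a minimal sketch of the kind of arithmetic a funnel-plus-loop model does. The rates and the function are made-up assumptions for illustration, not the actual internals of the tool linked above.

```python
# A rough sketch of a funnel-plus-loop growth model: traffic, conversion,
# retention and referral combined month by month. All numbers are made up -
# change them and watch where the curve flattens out.

def simulate(months=24, traffic=10_000, conversion=0.05,
             retention=0.70, referral=0.10):
    """Return active-user counts per month."""
    active = 0.0
    history = []
    for _ in range(months):
        new_from_traffic = traffic * conversion
        new_from_referrals = active * referral * conversion  # the 'loop'
        active = active * retention + new_from_traffic + new_from_referrals
        history.append(round(active))
    return history

print(simulate())                # growth levels off: that's the 'ceiling'
print(simulate(retention=0.85))  # better retention raises the ceiling
```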

Depending on your situation, improving each of these things has varying degrees of difficulty. But importantly they can all be measured, and as you make changes to the thing you are building you can see how your changes affect each of these metrics. These are things you can optimize for.

But while you can optimize for these things, that doesn’t make it easy.

It still comes down to building things of real value and quality, and helping the right people find those things. And while there are tactics to tweak performance rates against each of these goals, the tactics alone won’t matter without the product being good too.

As an example, Dropbox increased their referral rate by rewarding users with extra storage space for referring their friends. But that tactic only works if people like Dropbox enough to (a) want extra storage space and (b) feel happy recommending the product to their friends.

In summary:

  • Build things of quality
  • Optimize them against these measurable goals

2015 Mozilla Foundation Metrics Strategy(ish) & Roadmap(ish)

I wrote a version of this strategy in January but hadn’t published it as I was trying to remove those ‘ish‘s from the title. But the ‘ish’ is actually a big part of my day-to-day work, so this version embraces the ‘ish’.

MoFo Metrics Measures of Success:

These are, ironically, more qualitative than quantitative.

  1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.
  2. We consider metrics (i.e. measures of success) before, during and after each project.
  3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.
  4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.

1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.

  • “Every” is ambitious, but it sets the right tone.
  • This includes:
    • Public dashboards, like those at https://metrics.webmaker.org
    • Updates and storytelling throughout the year
    • Building feedback loops between the process, the work and the results (the impact)

2. We consider metrics (i.e. measures of success) before, during and after each piece of work.

  • This requires close integration into our organizational planning process
  • This work is underway, but it will take time (and many repetitions) before it becomes habit

3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.

  • The numbers should be for navigation, rather than fuel

4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.

  • This is the growth hacking part of the plan
  • We’ve had some successes (e.g. Webmaker and Fundraising)
  • This needs to become a continuous process

Those are my goals.

In many cases, the ultimate measure of success is when this work is done by the team rather than by me for the team.

We’re working on Process AND Culture

Process and culture feed off of and influence each other. Processes must suit the culture being cultivated. A data-driven culture can blinker creativity – it doesn’t have to, but it can. And a culture that doesn’t care for data won’t care for processes related to data. This strategy aims to balance the needs of both.

A roadmap?

I tried to write one, but basically this strategy will respond to the roadmaps of each of the MoFo teams.

So, what does Metrics work look like in 2015?

  • Building the tools and dashboards to provide the organisational visibility we need for our KPIs
  • ‘Instrumenting’ our products so that we can accurately measure how they are being used
  • Running Optimization experiments against high profile campaigns
  • Running training and support for Google Analytics, Optimizely, and other tools
  • Running project level reporting and analysis to support iterative development
  • Consulting to the Community Development Team to plan experimental initiatives

Plus: supporting teams to implement our data practices, and of course, the unknown unknowns.

…ish

Fundraising testing update

I wrote a post over on fundraising.mozilla.org about our latest round of optimization work for our End of Year Fundraising campaign.

We’ve been sprinting on this during the Mozilla all-hands workweek in Portland, which has been a lot of fun working face-to-face with the awesome team making this happen.

You can follow along with the campaign, and see how we’re doing at fundraising.mozilla.org

And of course, we’d be over the moon if you wanted to make a donation.

These amazing people are working hard to build the web the world needs.

Learning about Learning Analytics @ #Mozfest

If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon: Learning Analytics for Good in the Age of Big Data.

We had an hour, no idea if anyone else would be interested, or what angle people would come to the session from. And given that, I think it worked out pretty well.


We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):

  1. Practical solutions to measuring learning as it happens online
  2. The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
  3. The research opportunities for publishing and connecting learning data

But, did anyone learn anything in our Learning Analytics session?

Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?

I spoke to people later in the day who told me they learned things. Is that good enough?

As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?

How much did people learn?

This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…


As a meta-exercise, everyone who attended the session had a question to answer at the start and end. We also gave them a place to write their email address, to link their ‘learning data’ to them in an identifiable way. It was a little bit silly, but it was something to think about.

This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.

Response rate:

  • We had about 20 participants
  • 10 returned the survey (i.e. opted in to ‘tracking’), by answering question 1
  • 5 of those answered question 2
  • 5 gave their email address (not exactly the same 5 who answered both questions)

Here is our Learning Analytics data from our session


Is that demonstrable impact?

Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…

What and how much they learned, and whether it will be useful later in their lives, is another matter.

Even with a question deliberately chosen so that it was almost impossible not to show improvement from the start to the end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).

Post-it notes and scribbles

If you were at the session and want to jog your memory about what we talked about, I’ve kind-of documented the various things we captured on paper.


Into 2015

I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.

And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.

Mozilla Contributor Analysis Project (Joint MoCo & MoFo)

I’m back at the screen after a week of paternity leave, and I’ll be working part-time for the next two weeks while we settle in to the new family routine at home.

In the meantime, I wanted to mention a Mozilla contributor analysis project in case people would like to get involved.

We have a wiki page now, which means it’s a real thing. And here are some words my sleep-deprived brain prepared for you earlier today:

The goal and scope of the work:

Explore existing contribution datasets to look for possible insights and metrics that would be useful to monitor on an ongoing basis, before the co-incident workweek in Portland at the beginning of December.

We will:

  • Stress-test our current capacity to use existing contribution data
  • Look for actionable insights to support Mozilla-wide community building efforts
  • Run ad-hoc analysis before building any ‘tools’
  • If useful, prototype tools that can be re-used for ongoing insights into community health
  • Build processes so that contributors can get involved in this metrics work
  • Document gaps in our existing data / knowledge
  • Document ideas for future analysis and exploration

Find out more about the project here.

I’m very excited that three members of the community have already offered to support the project and we’ve barely even started.

In the end, these numbers we’re looking at are about the community, and for the benefit of the community, so the more community involvement there is in this process, the better.

If you’re interested in data analysis, or know someone who is, send them the link.

This project is one of my priorities over the next 4-8 weeks. On that note, this looks quite appealing right now.

So I’m going to make more tea and eat more biscuits.

One month of Webmaker Growth Hacking

This post is an attempt to capture some of the things we’ve learned from a few busy and exciting weeks working on the Webmaker new user funnel.

I will forget some things, there will be other stories to tell, and this will be biased towards my recurring message of “yay metrics”.

How did this happen?


As Dave pointed out in a recent email to the Webmaker Dev list, “That’s a comma, not a decimal.”

What happened to increase new user sign-ups by 1,024% compared to the previous month?

Is there one weird trick to…?

No.

Sorry, I know you’d like an easy answer…

This growth is the result of a month of focused work and many many incremental improvements to the first-run experience for visitors arriving on webmaker.org from the promotion we’ve seen on the Firefox snippet. I’ll try to recount some of it here.

While the answer here isn’t easy, the good news is it’s repeatable.

Props

While I get the fun job of talking about data and optimization (at least it’s fun when it’s good news), the work behind these numbers was a cross-team effort.

Aki, Andrea, Hannah and I formed the working group. Brett and Geoffrey oversaw the group, sanity checked our decisions and enabled us to act quickly. And others got roped in along the way.

I think this model worked really well.

Where are these new Webmaker users coming from?

We can attribute ~60k of those new users directly to:

  • Traffic coming from the snippet
  • Visitors who converted into users via our new Webmaker landing pages

Data-driven iterations

I’ve tried to go back over our meeting notes for the month and capture the variations on the funnel as we’ve iterated through them. This was tricky as things changed so fast.

The image below gives you an idea, but also hides many more detailed experiments within each of these pages.

Testing Iterations

With 8 snippets tested so far, 5 funnel variations and at least 5 content variables within each funnel, we’ve iterated through over 200 variations of this new user flow in a month.

We’ve been able to do this and get results quickly because of the volume of traffic coming from the snippet, which is fantastic. And in some cases this volume of traffic meant we were learning new things quicker than we were able to ship our next iteration.
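As a rough illustration of how traffic volume sets that pace, here’s a back-of-the-envelope sketch using a common rule-of-thumb sample size formula. The baseline rate, lift and daily traffic figures are made-up numbers, not our real ones.

```python
# Back-of-the-envelope sketch of why traffic volume matters for test speed.
# Rule of thumb: ~16 * p * (1 - p) / delta^2 visitors per variation gives
# roughly 95% confidence / 80% power for detecting a difference of delta.

def days_to_decision(baseline=0.04, relative_lift=0.10,
                     visitors_per_day=5_000, variations=2):
    delta = baseline * relative_lift  # absolute difference we want to detect
    n_per_variation = 16 * baseline * (1 - baseline) / delta ** 2
    return (n_per_variation * variations) / visitors_per_day

print(round(days_to_decision(), 1))                          # modest traffic: ~2 weeks
print(round(days_to_decision(visitors_per_day=100_000), 1))  # snippet-scale traffic: under a day
```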

What’s the impact?

If we’d run with our first snippet design and our first call to action, we would have had about 1,000 new Webmaker users from the snippet, instead of 60,000 (the remainder are from other channels and activities). Total new user accounts are up by ~1,000%, but new users from the snippet specifically increased by around six times that.

One not-very-weird trick to growth hacking:

I said there wasn’t one weird trick, but I think the success of this work boils down to one piece of advice:

  • Prioritize time and permission for testing, with a clear shared objective, and get just enough people together who can make the work happen.

It’s not weird, and it sounds obvious, but it’s a story that gets overlooked often because it doesn’t have the simple causation-based hook we humans look for in our answers.

It’s much more appealing when someone tells you something like “Orange buttons increase conversion rate”. We love the stories of simple tweaks that have remarkable impact, but really it’s always about process.

More Growth hacking tips:

  • Learn to kill your darlings, and stay happy while doing it
    • We worked overtime to ship things that got replaced within a week
    • It can be hard to see that happen to your work when you’re invested in the product
      • My personal approach is to invest my emotion in the impact of the thing being made rather than the thing itself
      • But I had to lose a lot of A/B tests to realize that
  • Your current page is your control
    • Test ideas you think will beat it
    • If you beat it, that new page is your new control
    • Rinse and repeat
    • Optimize with small changes (content polishing)
    • Challenge with big changes (disruptive ideas)
  • Focus on areas with the most scope for impact
    • Use data to choose where to use data to make choices
    • Don’t stretch yourself too thin

What happens next?

  • We have some further snippet coverage for the next couple of weeks, but not at the same level we’ve had recently, so we’ll see this growth rate drop off
  • We can start testing the funnel we’ve built for other sources of traffic to see how it performs
  • We have infrastructure for spinning up and testing landing pages for many future asks
  • This work is never done, but with any optimization you see declining returns on investment
    • We need to keep reassessing the most effective place to spend our time
    • We have a solid account sign-up flow now, but there’s a whole user journey to think about after that
    • We need to gather up and share the results of the tests we ran within this process

Testing doesn’t have to be scary, but sometimes you want it to be.

Overlapping types of contribution

TL;DR: Check out this graph!

Ever wondered how many Mozfest volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out.

The reason the MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard is that it combines numbers from multiple data sources, but the number of contributors is not de-duped across systems.

So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.
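For a sense of what the de-duping involves, here’s a minimal sketch. The system names and email addresses are made up; the real work happens in the integration database described below.

```python
# A minimal sketch of de-duping contributors across systems by a shared
# identifier (email here). Data and system names are illustrative only.
from itertools import combinations

systems = {
    "webmaker_event_hosts":  {"ana@example.org", "bo@example.org", "caro@example.org"},
    "bugzilla_contributors": {"bo@example.org", "dev@example.org"},
    "mozfest_volunteers":    {"caro@example.org", "dev@example.org", "eli@example.org"},
}

naive_total = sum(len(people) for people in systems.values())  # double counts
deduped_total = len(set.union(*systems.values()))              # each person once
print(naive_total, deduped_total)                              # 8 vs 5

# The overlaps are where the interesting stories are
for (name_a, a), (name_b, b) in combinations(systems.items(), 2):
    print(f"{name_a} & {name_b}: {len(a & b)}")
```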

This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard-working MoCo friends Pierros and Sheeri).

To help with prepping MoFo data for inclusion in Baloo, and by generally being awesome, JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).

We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.

The downside to this is we will soon have a de-duped number for our dashboard, which will be smaller than the current number. That will feel like a bit of a downer, because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.

But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.

We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing might be them switching focus to another team, for example. There are lots of exciting possibilities here.

And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.