The importance of retention rates, explained by @bbalfour

In my last post I shared a tool for playing with the numbers that matter for growing a product or service (i.e. conversion, retention and referral rates).

This video of a talk by Brian Balfour is a perfect introduction / guide to watch if you’re also playing with that tool. In particular, the graphs from 1:46 onwards.
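If you want a rough feel for how those three rates interact before (or after) watching the talk, here’s a minimal sketch of the kind of compounding model that tool lets you play with. The structure and all the numbers below are illustrative assumptions, not the tool itself.

```python
def project_users(months, new_visitors_per_month, conversion_rate,
                  monthly_retention_rate, referral_rate):
    """Toy model: each month some visitors convert to users, a fraction of
    existing users stick around, and existing users refer new visitors."""
    users = 0.0
    for _ in range(months):
        referred_visitors = users * referral_rate
        new_users = (new_visitors_per_month + referred_visitors) * conversion_rate
        users = users * monthly_retention_rate + new_users
    return round(users)

# The same funnel with two different retention rates, everything else equal:
print(project_users(12, new_visitors_per_month=10_000, conversion_rate=0.05,
                    monthly_retention_rate=0.60, referral_rate=0.10))
print(project_users(12, new_visitors_per_month=10_000, conversion_rate=0.05,
                    monthly_retention_rate=0.90, referral_rate=0.10))
```

Even in a toy model like this, retention dominates the long-run numbers, which is the point of the graphs in the talk.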

Measuring Quality

At the end of last year, Cassie raised the question of ‘how to measure quality?’ on our metrics mailing list, which is an excellent question. And like the best questions, I come back to it often. So, I figured it needed a blog post.

There are a bunch of tactical opportunities to measure quality in various processes, like the QA data you might extract from a production line for example. And while those details interest me, this thought process always bubbles up to the aggregate concept: what’s a consistent measure of quality across any product or service?

I have a short answer, but while you’re here I’ll walk you through how I get there, including some examples of things I think are of high quality.

One of the reasons this question is interesting is that it’s quite common to divide data into quantitative and qualitative buckets, often splitting the crisp metrics we use as our KPIs from the things we think indicate real quality. But if you care about quality, and you operate at ‘scale’, you need a quantitative measure of quality.

On that note, in a small business or on a small project, the quality feedback loop is often direct to the people making design decisions that affect quality. You can look at the customers in your bakery and get a feel for the quality of your business and products. This is why small initiatives are sometimes immensely high in quality but then deteriorate as they attempt to replicate and scale what they do.

What I’m thinking about here is how to measure quality at scale.

Some things of quality, IMHO:

This axe is wonderful. As my office is also my workshop, this axe is usually near to hand. It will soon be hung on the wall. Not because I am preparing for the zombie apocalypse, but because it is useful both as a tool and as a visual reminder of what it means to build quality products. If this ramble of mine isn’t enough of a distraction, watch Why Values are Important to understand how this axe relates to measures of quality, especially in product design.

This toaster is also wonderful. We’ve had this toaster for more than 10 years now, and it works perfectly. If it were to break, I can get the parts locally and service it myself (it’s deliberately built to last and be repaired). It was an expensive initial purchase, but works out cheap in the long run. If it broke today, I would fix it. If I couldn’t fix it for some extreme reason, I would buy the same toaster in a blink. It is a high-quality product.

This is the espresso coffee I drink every day. Not the tin, it’s another brand that comes in a bag. It had been consistently good for a couple of years, until the last two weeks, when the grind has been finer than usual and keeps blocking the machine. It was a high-quality product in my mind, until recently. I’ll let another batch pass through the supermarket shelves and try it again. Otherwise I’ll switch.

This spatula looks like a novelty product, and typically I don’t think much of novelty products in place of useful tools, but it’s actually a high-quality product. It was a gift, we use it a lot, and it just works really well. If it went missing today, I’d want to get another one the same. Saying that, it’s surprisingly expensive for a spatula. I’ve only just looked at the price as a result of writing this. I think I’d pay that price though.

All of those examples are relatively expensive products within their respective categories, but price is not the measure of quality, even if price sometimes correlates with quality. I’ll get on to this.

How about things of quality that are not expensive in this way?

What is quality music, or art, or literature to you? Is it something new you enjoy today? Or something you enjoyed several years ago? I personally think it’s the combination of those two things. And I posit that you can’t know the real quality of something until enough time has passed. Though ‘enough time’ varies by product.

Ten years ago, I thought all the music I listened to was of high quality. Re-listening today, I think some of it was. As an exercise, listen to some music you haven’t listened to for a while, and think about which tracks you enjoy for the nostalgia and which you enjoy for the music itself.

In the past, we had to rely on sales as a measure of the popularity of music. But like price, sales doesn’t always relate to quality. Initial popularity indicates potential quality, but not quality in itself (or it indicates manipulation of the audience via effective marketing). Though there are debates around streaming music services and artist payment, we do now have data points about the ongoing value of music beyond the initial parting of listener from cash. I think this can do interesting things for the quality of music overall. And in particular that the future is bleak for album filler tracks when you’re paid per stream.

Another question I enjoy thinking about is why over the centuries, some art has lasting value, and other art doesn’t. But I think I’ve taken enough tangents for now.

So, to join this up.

My view is that quality is reflected by loyalty. And for most products and services, end-user loyalty is something you can measure and optimize for.

Loyalty comes from building things that both last, and continue to be used.

Every other measurable detail about quality adds up to that.

Reducing the defect rate of component X by 10% doesn’t matter unless it impacts end-user loyalty.

It’s harder to measure, but this is true even for things which are specifically designed not to last. In particular, “experiences”; a once-in-a-lifetime trip, a festival, a learning experience, etc, etc. If these experiences are of high quality, the memory lasts and you re-live them and re-use them many times over. You tell stories of the experience and you refer your friends. You are loyal to the experience.

Bringing this back to work.

For MoFo colleagues reading this, our organization goals this year already point us towards Quality. We use the industry term ‘Retention’. We have targets for Retention Rates and Ongoing Teaching Activity (i.e. retained teachers). And while the word ‘retention’ sounds a bit cold and business-like, it’s really the same thing as measuring ‘loyalty’. I like the word loyalty, but people have different views about it (in particular whether it’s earned or expected).

This overarching theme also aligns nicely with the overall Mozilla goal of increasing the ‘number of long term relationships’ we hold with our users.

Language is interesting though. Thinking about a ‘20% user loyalty rate’ 7 days after sign-up focuses my mind slightly differently than a ‘20% retention rate’. ‘Retention’ can sound a bit too much like ‘detention’, which might explain why so many businesses strive for consumer ‘lock-in’ as part of their business model.
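To make the ‘20% retention rate, 7 days after sign-up’ framing concrete, here’s a minimal sketch of how that kind of number could be computed from sign-up and activity timestamps. The user records and the exact definition (active at any point from day 7 onwards) are assumptions for illustration; real products define the window in slightly different ways.

```python
from datetime import datetime, timedelta

# Hypothetical records: when each user signed up, and when they were active.
signups = {
    "user_a": datetime(2015, 1, 1),
    "user_b": datetime(2015, 1, 1),
    "user_c": datetime(2015, 1, 2),
}
activity = {
    "user_a": [datetime(2015, 1, 9)],   # came back after a week
    "user_b": [],                        # never returned
    "user_c": [datetime(2015, 1, 3)],    # returned, but only within the first week
}

def day7_retention(signups, activity, window=timedelta(days=7)):
    """Share of users active on or after day 7 from their sign-up."""
    retained = 0
    for user, signed_up in signups.items():
        if any(ts >= signed_up + window for ts in activity.get(user, [])):
            retained += 1
    return retained / len(signups)

print(f"Day-7 retention: {day7_retention(signups, activity):.0%}")  # 33%
```

Whatever you call the metric, the mechanics are the same: a cohort, a window, and the share of people who come back.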

Talking to OpenMatt about this recently, he put a better MoFo frame on it than loyalty: Retention is a measure of how much people love what we’re doing. When we set goals for increasing retention rate, we are committing to building things people love so much that they keep coming back for more.

In summary:

  • You can measure quality by measuring loyalty
  • I’m happy retention rates are one of our KPIs this year

My next post will look more specifically at the numbers and how retention rates factor into product growth.

And I’ll try not to make it another essay. 😉

Webmaking in the UK, and face-to-face events

One of this week’s conversations was with Nesta, about Webmaker usage within the UK and whether or not we have data to support the theory that face-to-face events have an impact on getting people involved in making on the web. These are two topics that interest me greatly.

I’m basically copying some of my notes into blog form so that the conversation isn’t confined to a few in-boxes.

And the TL;DR is our data represents what we’ve done, rather than any universal truth.

Our current data would support the hypothesis that face-to-face time is important for learning, but that would simply be because that’s how our program has been designed to date. In other words, our Webmaker tools were designed primarily for use in face-to-face events, which means that adoption by ‘self-learners’ online is low because there is little guidance or motivation to play with our tools on their own.

This year we’re making a stronger push on developing tools that can be used remotely, alongside our work on volunteer-led face-to-face events. This will lead to a less biased overall data set in the future, where we can begin to properly explore the impact on making and learning for people who do or don’t attend face-to-face events at various stages in their learning experience. In particular, I’m keen to understand what factors help people transition from learners to mentoring and supporting their peers.

I also took a quick look at the aggregate Google Analytics location data for the UK audience, which I hadn’t done before, and which reinforces the point above.

[Image: Screen Shot 2015-01-30 at 11.14.29]

Above: Traffic to Webmaker (loosely indicating an interest in the topic) is roughly distributed like a population map of the UK. This is what I’d expect to see for most location data.

[Image: Screen Shot 2015-01-30 at 11.17.25]

Above: However, if you look at the locations of visitors who make something, there are lots of clusters around the UK and London is equaled by many other cities.

To date, usage of the Webmaker tools has been driven by those who are using the tools to teach the web (i.e. Webmaker Mentors). But we also know there are large numbers of people who find Webmaker outside of face-to-face events and who need a better route into Webmaker’s offering.

The good news is that this year’s plans look after both sets of potential learners.

Fundraising testing update

I wrote a post over on fundraising.mozilla.org about our latest round of optimization work for our End of Year Fundraising campaign.

We’ve been sprinting on this during the Mozilla all-hands workweek in Portland, and it has been a lot of fun working face-to-face with the awesome team making this happen.

You can follow along with the campaign, and see how we’re doing, at fundraising.mozilla.org.

And of course, we’d be over the moon if you wanted to make a donation.

[Image: IMG_0373]
These amazing people are working hard to build the web the world needs.

Learning about Learning Analytics @ #Mozfest

If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon: Learning Analytics for Good in the Age of Big Data.

We had an hour, no idea if anyone else would be interested, or what angle people would come to the session from. And given that, I think it worked out pretty well.

[Image: la_session]

We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):

  1. Practical solutions to measuring learning as it happens online
  2. The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
  3. The research opportunities for publishing and connecting learning data

But, did anyone learn anything in our Learning Analytics session?

Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?

I spoke to people later in the day who told me they learned things. Is that good enough?

As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?

How much did people learn?

This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…

[Image: IMG_0184]

As a meta-exercise, everyone who attended the session had a question to answer at the start and end. We also gave them a place to write their email address, so we could link their ‘learning data’ to them in an identifiable way. It was a little bit silly, but it was something to think about.

This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.

Response rate:

  • We had about 20 participants
  • 10 returned the survey (i.e. opted in to ‘tracking’) by answering question 1
  • 5 of those answered question 2
  • 5 gave their email address (not exactly the same 5 who answered both questions)

Here is our Learning Analytics data from our session

[Image: Screen Shot 2014-10-30 at 13.53.26]

Is that demonstrable impact?

Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…

What they learned, how much, and whether it will be useful later in their lives is another matter.

Even with a deliberately chosen question that made it almost impossible not to show improvement from the start to the end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).

Post-it notes and scribbles

If you were at the session and want to jog your memory about what we talked about, I’ve kind-of documented the various things we captured on paper.

[Image: Screen Shot 2014-10-30 at 14.40.57]
Click for gallery of bigger images

Into 2015

I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.

And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.

Mozilla Contributor Analysis Project (Joint MoCo & MoFo)

I’m back at the screen after a week of paternity leave, and I’ll be working part-time for the next two weeks while we settle into the new family routine at home.

In the meantime, I wanted to mention a Mozilla contributor analysis project in case people would like to get involved.

We have a wiki page now, which means it’s a real thing. And here are some words my sleep-deprived brain prepared for you earlier today:

The goal and scope of the work:

Explore existing contribution datasets to look for possible insights and metrics that would be useful to monitor on an ongoing basis, before the co-incident workweek in Portland at the beginning of December.

We will:

  • Stress-test our current capacity to use existing contribution data
  • Look for actionable insights to support Mozilla-wide community building efforts
  • Run ad-hoc analysis before building any ‘tools’
  • If useful, prototype tools that can be re-used for ongoing insights into community health
  • Build processes so that contributors can get involved in this metrics work
  • Document gaps in our existing data / knowledge
  • Document ideas for future analysis and exploration

Find out more about the project here.

I’m very excited that three members of the community have already offered to support the project and we’ve barely even started.

In the end, these numbers we’re looking at are about the community, and for the benefit of the community, so the more community involvement there is in this process, the better.

If you’re interested in data analysis, or know someone who is, send them the link.

This project is one of my priorities over the next 4-8 weeks. On that note, this looks quite appealing right now.

So I’m going to make more tea and eat more biscuits.

One month of Webmaker Growth Hacking

This post is an attempt to capture some of the things we’ve learned from a few busy and exciting weeks working on the Webmaker new user funnel.

I will forget some things, there will be other stories to tell, and this will be biased towards my recurring message of “yay metrics”.

How did this happen?

[Image: Screen Shot 2014-09-01 at 14.25.29]

As Dave pointed out in a recent email to Webmaker Dev list, “That’s a comma, not a decimal.”

What happened to increase new user sign-ups by 1,024% compared to the previous month?

Is there one weird trick to…?

No.

Sorry, I know you’d like an easy answer…

This growth is the result of a month of focused work and many many incremental improvements to the first-run experience for visitors arriving on webmaker.org from the promotion we’ve seen on the Firefox snippet. I’ll try to recount some of it here.

While the answer here isn’t easy, the good news is it’s repeatable.

Props

While I get the fun job of talking about data and optimization (at least it’s fun when it’s good news), the work behind these numbers was a cross-team effort.

Aki, Andrea, Hannah and I formed the working group. Brett and Geoffrey oversaw the group, sanity checked our decisions and enabled us to act quickly. And others got roped in along the way.

I think this model worked really well.

Where are these new Webmaker users coming from?

We can attribute ~60k of those new users directly to:

  • Traffic coming from the snippet
  • Who converted into users via our new Webmaker Landing pages

Data-driven iterations

I’ve tried to go back over our meeting notes for the month and capture the variations on the funnel as we’ve iterated through them. This was tricky as things changed so fast.

This image below gives you an idea, but also hides many more detailed experiments within each of these pages.

[Image: Testing Iterations]

With 8 snippets tested so far, 5 funnel variations, and at least 5 content variables within each funnel, we’ve iterated through over 200 variations (8 × 5 × 5) of this new user flow in a month.

We’ve been able to do this and get results quickly because of the volume of traffic coming from the snippet, which is fantastic. And in some cases this volume of traffic meant we were learning new things quicker than we were able to ship our next iteration.

What’s the impact?

If we’d run with our first snippet design and our first call to action, we would have had about 1,000 new Webmaker users from the snippet instead of 60,000 (the remaining new users came from other channels and activities). Total new user accounts are up by ~1,000%, but new users from the snippet specifically increased by around six times that.

One not-very-weird trick to growth hacking:

I said there wasn’t one weird trick, but I think the success of this work boils down to one piece of advice:

  • Prioritize time and permission for testing, with a clear shared objective, and get just enough people together who can make the work happen.

It’s not weird, and it sounds obvious, but it’s a story that gets overlooked often because it doesn’t have the simple causation-based hook we humans look for in our answers.

It’s much more appealing when someone tells you something like “Orange buttons increase conversion rate”. We love the stories of simple tweaks that have remarkable impact, but really it’s always about process.

More Growth hacking tips:

  • Learn to kill your darlings, and stay happy while doing it
    • We worked overtime to ship things that got replaced within a week
    • It can be hard to see that happen to your work when you’re invested in the product
      • My personal approach is to invest my emotion in the impact of the thing being made rather than the thing itself
      • But I had to lose a lot of A/B tests to realize that
  • Your current page is your control (see the sketch after this list)
    • Test ideas you think will beat it
    • If you beat it, that new page is your new control
    • Rinse and repeat
    • Optimize with small changes (content polishing)
    • Challenge with big changes (disruptive ideas)
  • Focus on areas with the most scope for impact
    • Use data to choose where to use data to make choices
    • Don’t stretch yourself too thin
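As a sketch of the control/challenger loop above: once a challenger page has seen enough traffic, you compare its conversion rate against the control’s and only promote it if the difference is unlikely to be noise. The traffic and conversion numbers below are invented, and a two-proportion z-test is just one reasonable way to make that call, not the specific method we used.

```python
import math

def two_proportion_z(control_conversions, control_visitors,
                     challenger_conversions, challenger_visitors):
    """Z-score for the difference between two conversion rates."""
    p1 = control_conversions / control_visitors
    p2 = challenger_conversions / challenger_visitors
    pooled = (control_conversions + challenger_conversions) / (control_visitors + challenger_visitors)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / challenger_visitors))
    return (p2 - p1) / se

# Hypothetical snippet traffic split evenly between control and challenger pages.
z = two_proportion_z(control_conversions=400, control_visitors=10_000,
                     challenger_conversions=480, challenger_visitors=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly a 95% confidence threshold
if z > 1.96:
    print("Challenger beats control: it becomes the new control.")
```

With snippet-scale traffic you reach that threshold quickly, which is exactly why we could run so many iterations in a month.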

What happens next?

  • We have some further snippet coverage for the next couple of weeks, but not at the same level we’ve had recently, so we’ll see this growth rate drop off
  • We can start testing the funnel we’ve built for other sources of traffic to see how it performs
  • We have infrastructure for spinning up and testing landing pages for many future asks
  • This work is never done, but with any optimization you see declining returns on investment
    • We need to keep reassessing the most effective place to spend our time
    • We have a solid account sign-up flow now, but there’s a whole user journey to think about after that
    • We need to gather up and share the results of the tests we ran within this process

Testing doesn’t have to be scary, but sometimes you want it to be.

Overlapping types of contribution

[Image: Screen Shot 2014-08-21 at 14.02.27]

TL;DR: Check out this graph!

Ever wondered how many Mozfest Volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out.

The reason the MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard is that it combines numbers from multiple data sources, but the number of contributors is not de-duped across systems.

So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.
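To make the double-counting problem concrete, here’s a minimal sketch of de-duping contributor counts across systems, assuming each source can export a shared identifier such as an email address. The source names and records are made up, not our real data.

```python
# Hypothetical exports from separate systems, keyed by a shared identifier (email).
webmaker_event_hosts = {"ada@example.org", "grace@example.org"}
bugzilla_contributors = {"grace@example.org", "linus@example.org"}
mozfest_volunteers = {"ada@example.org", "margaret@example.org"}

sources = [webmaker_event_hosts, bugzilla_contributors, mozfest_volunteers]

# The interim-dashboard approach double-counts people active in several systems.
naive_total = sum(len(source) for source in sources)

# The de-duped total counts each person once, however many ways they contribute.
deduped = set().union(*sources)

print(naive_total)                                          # 6
print(len(deduped))                                         # 4
print(len(webmaker_event_hosts & bugzilla_contributors))    # overlap between two systems
```

The interesting part isn’t the smaller total, it’s the intersections: that’s what tells us how people contribute in more than one way.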

This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard working MoCo friends Pierros and Sheeri).

To help with prepping MoFo data for inclusion in Baloo, and by generally being awesome, JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).

We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.

The downside to this is that we will soon have a de-duped number for our dashboard, which will be smaller than the current number. That will feel like a bit of a downer, because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.

But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.

We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing, might be them switching focus to another team, for example. There are lots of exciting possibilities here.

And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.

2014 Contributor Goals: Half-time check-in

We’re a little over halfway through the year, and our dashboard is now good enough to tell us how we’re doing.

TL;DR:

  • The existing trend lines won’t get us to our 2014 goals
    • but knowing this is helpful
    • and getting there is possible
  • Ask less: How do we count our contributors?
  • Ask more: What are we doing to grow the contributor community? And, are we on track?

Changing the question

Our dashboard now needs to move from being a project to being a tool that helps us do better. After all, Mozilla’s unique strength is that we’re a community of contributors and this dashboard, and the 2014 contributor goal, exist to help us focus our workflows, decisions and investments in ways that empower the community. Not just for the fun of counting things.

The first half of the year focused us on the question “How do we count contributors?”. By and large, this has now been answered.

We need to switch our focus to:

  1. Are we on track?
  2. What are we doing to grow the contributor community?

Then we repeat these two questions regularly throughout the year, adjusting our strategy as we go.

Are we on track?

Wearing my cold-dispassionate-metrics hat, and not my “I know how hard you’re all working already” hat, I have to say no (or, not yet).

I’m going to look at this team by team and then look at the All Mozilla Foundation view at the end.

Your task, for each graph below, is to take an imaginary marker pen and draw the line for the rest of the year based on the data you can see to date. And only on the data you can see to date.

  • What does your trend line look like?
  • Is it going to cross the dotted target line in 2014?

OpenNews

[Image: Screen Shot 2014-07-18 at 19.48.44]

Based on the data to date, I’d draw a flat line here. Although there are new contributors joining pretty regularly, the overall trend is flat. In marketing terms there is ‘churn’: not a nice term, but a useful one for talking about the data. To use other crass marketing terms, ‘retention’ is as important as ‘acquisition’ in changing the shape of this graph.

Science Lab

[Image: Screen Shot 2014-07-18 at 19.49.55]

Dispassionately here, I’d have to draw a trend line that’s pointing slightly down. One thing to note in this view is that the Science Lab team have good historic data, so what we’re seeing here is the result of the size of the community in early 2013, and some drop-off from those people.

Appmaker

[Image: Screen Shot 2014-07-18 at 19.50.57]

This graph is closest to what we want to see generally, i.e. pointing up. But I’ll caveat that with a couple of points. First, taking the imaginary marker pen, this isn’t going to cross the 2014 target line at the current rate. Second, unlike the Science Lab and OpenNews data above, much of this Appmaker counting is new. And when you count things for the first time, a 12-month rolling active total has a cumulative effect in the first year, which increases the appearance of growth but might not be a long-term trend. This is because Appmaker community churn won’t be visible until next year, when people can first drop out of the twelve-month active time-frame.
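That cumulative effect is easier to see with a tiny sketch of a 12-month rolling active count. The contributors and dates below are invented; the point is that when counting starts fresh, nobody can drop out of the window for the first twelve months, so the line can only go up.

```python
# Hypothetical contributions, recorded as (year, month) per contributor.
contributions = {
    "ada":   [(2014, 1)],
    "grace": [(2014, 1), (2014, 6)],
    "linus": [(2014, 3)],
}

def rolling_year_active(contributions, year, month):
    """Contributors with at least one contribution in the 12 months ending (year, month)."""
    end = year * 12 + month
    window = range(end - 11, end + 1)
    return {
        person for person, dates in contributions.items()
        if any(y * 12 + m in window for (y, m) in dates)
    }

# Counting starts in Jan 2014, so the total can only grow during 2014.
for month in range(1, 13):
    print(f"2014-{month:02d}: {len(rolling_year_active(contributions, 2014, month))}")

# 'ada' only falls out of the window in Jan 2015, twelve months after her single contribution.
print(f"2015-01: {len(rolling_year_active(contributions, 2015, 1))}")
```

So a rising line in the first year of counting tells you people were counted; only the second year tells you whether they stayed.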

Webmaker

[Image: Screen Shot 2014-07-18 at 19.51.47]

This graph is the hardest to extend with our imaginary marker pen, especially with the positive incline we can see as Maker Party kicks off. The Webmaker plan expects much of the contributor community growth to come from the Maker Party campaign, so a steady incline was not the expectation across the year. But, we can still play with the imaginary marker pen.

I’d do the following exercise: In the first six months, active contributors grew by ~800 (~130 per month), so assuming that’s a general trend (big assumption) and you work back from 10k in December you would need to be at ~9,500 by the end of September. Mark a point at 9,500 contributors above the October tick and look at the angle of growth required throughout Maker Party to get there. That’s not impossible, but it’s a big challenge and I don’t have any historic data to make an informed call here.
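Here’s the same back-of-the-envelope exercise as a few lines of code, so it’s easy to swap in your own assumptions. The ~800-in-six-months and 10k figures come from the paragraph above; treating that growth rate as constant, and the end-of-June total used below, are placeholder assumptions rather than real dashboard numbers.

```python
# Figures from the exercise above.
growth_first_half = 800        # new active contributors, roughly Jan-Jun
months_elapsed = 6
december_target = 10_000

monthly_rate = growth_first_half / months_elapsed      # ~133 per month

# If Oct-Dec only grow at the historical rate, where must we be by end of September?
needed_by_end_september = december_target - 3 * monthly_rate
print(round(needed_by_end_september))                   # ~9,600, the same ballpark as ~9,500 above

# The gap Maker Party (Jul-Sep) would have to close, assuming a placeholder
# total of ~800 active contributors at the end of June:
current_total = 800
print(round(needed_by_end_september - current_total))    # contributors needed in three months
```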

Note: the Appmaker/Webmaker separation here is a legacy thing from the beginning of the year when we started this project. The de-duped datastore we’re working on next will allow us to graph Webmaker Total > Webmaker Tools > Appmaker as separate graphs with separate goals, which get de-duped and roll up into the total numbers above, and in turn roll up into the Mozilla-wide total at areweamillionyet.org – this will better reflect the actual overlaps.

Metrics

[ 0 contributors ]

The MoFo metrics team currently has zero active volunteer contributors, and based on the data available to date is trending absolutely flat. Action is required here, or this isn’t going to change. I also need to set a target. Growing 0 by 10X doesn’t really work. So I’ll aim for 10 volunteer contributors in 2014.

All Mozilla Foundation

[Image: Screen Shot 2014-07-18 at 19.52.40]

Here we’re adding up the other graphs and also adding in ~900 people who contributed to MozFest in October 2013. That MozFest number isn’t counted towards a particular team and simply lifts the total for the year. There is no trend for the MozFest data because all the activity happened at once, but if there wasn’t a MozFest in October this year (don’t worry, there is!), the total line would drop by 900 in a single week. Beyond that, the shape of this line is the cumulative result of the team graphs above.

In Q3, we’ll be able to de-dupe this combined number, as there are certainly contributors working across MoFo teams. In a good way, our total will be less than the sum of our parts.

Where do we go from here?

First, don’t panic. Influencing these trend lines is not like trying to shift a nation’s voting trends in the next election. Much of this is directly under our control, or if not ‘control’, then it’s something we can strongly influence. So long as we work on it.

Next, it’s important to note that this is the first time we’ve been able to see these trends, and the first time we can measure the impact of decisions we make around community building. Growing a community beyond a certain scale is not a passive thing. I’ve found David Boswell’s use of the term ‘intentional’ community building really helpful here. And much more tasteful than my marketing vocabulary!

These graphs show where we’re heading based on what we’re currently doing, and until now we didn’t know if we were doing well, or even improving at all. We didn’t have any feedback mechanism on decisions we’d make relating to community growth. Now we do.

Trend setting

Here are some initial steps that can help with the ‘measuring’ part of this community building task.

Going back to the marker pen exercise, take another imaginary color and, rather than extrapolating the current trend, draw a positive line that gets you to your target by the end of the year. This doesn’t have to be a straight line; allow your planned activity to shape the growth you want to see. Then ask:

  • Where do you need to be in Aug, Sep, Oct, Nov, Dec?
  • How are you going to reach each of these smaller steps?

Schedule a regular check-in that focuses on growing your contributor community and check your dashboard:

  • Are your current actions getting you to your goals?
  • What are the next actions you’re going to take?

The first rule of fundraising is ‘Ask for money’. People often overlook this. By the same measure, are you asking for contributions?

  • How many people are you asking this week or month to get involved?
  • What percentage of them do you expect to say yes and do something?

Multiply those numbers together and see if that prediction can get you to your next step towards your goal.
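A tiny worked version of that multiplication, with made-up numbers:

```python
# Hypothetical outreach plan for one month.
people_asked = 500
expected_yes_rate = 0.05      # share you expect to say yes and actually do something

predicted_new_contributors = people_asked * expected_yes_rate
print(predicted_new_contributors)   # 25

# If the step you set yourself for this month needs, say, 130 new contributors,
# this plan falls well short: ask more people, improve the ask, or both.
```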

Asking these questions alone won’t get us to our goals, but it helps us to know if our current approach has the capacity to get there. If it doesn’t we need to adjust the approach.

Those are just the numbers

I could probably wrap up this check-in from a metrics point of view here, but this is not a numbers game. The Total Active Contributor number is a tool to help us understand scale beyond the face-to-face relationships we can store in our personal memories.

We’re lucky at Mozilla that so many people already care about the mission and want to get involved, but sitting and waiting for contributors to show up is not going to get us to our goals in 2014. Community building is an intentional act.

Here’s to setting new trends.