In my last post I spent some time talking about why we care about measuring retention rates, and tried to make the case that retention rate works as a meaningful measure of quality.
In this post I want to look at how a few key metrics for a product, business or service stack up when you combine them. This is an exercise for people who haven’t spent time thinking about these numbers before.
If you’re used to thinking about product metrics, this won’t be new to you.
I built a simple tool to support this exercise. It’s not perfect, but in the spirit of ‘perfect is the enemy of good’ I’ll share it in its current state.
Optimizing for growth isn’t just ‘pouring’ bigger numbers into the top of the ‘funnel’. You need the right mix of results across all of these variables, and if your result for any one of them is too low, your product will hit a ‘ceiling’ on how many active users you can have at any one time.
However, if you succeed in optimizing your product or service against all four of these points you can find the kind of growth curve that the start-up world chases after every day. The referrals part in particular is important if you want to turn the ‘funnel’ into a ‘loop’.
Depending on your situation, improving each of these things has varying degrees of difficulty. But importantly they can all be measured, and as you make changes to the thing you are building you can see how your changes impact on each of these metrics. These are things you can optimize for.
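To make the ‘ceiling’ idea concrete, here is a toy simulation of how these variables combine. All the numbers and variable names are made up for illustration; this is a sketch of the dynamic, not a model of any real product.

```python
# A toy model of how acquisition, retention and referral combine.
# Every number here is hypothetical, chosen only to show the 'ceiling'.

def simulate_active_users(visitors, signup_rate, retention_rate,
                          referral_rate, months=120):
    """Simulate monthly cohorts and return active users per month."""
    active = 0.0
    history = []
    for _ in range(months):
        # New users come from the funnel plus referrals by active users.
        new_users = visitors * signup_rate + active * referral_rate
        # Last month's users are retained at the retention rate.
        active = active * retention_rate + new_users
        history.append(active)
    return history

history = simulate_active_users(
    visitors=10_000, signup_rate=0.05,
    retention_rate=0.80, referral_rate=0.02)

# Growth flattens out: the 'ceiling' in action.
print(round(history[11]), round(history[-1]))
```

With these made-up rates the curve converges to roughly 500 / (1 − 0.82) ≈ 2,778 active users, no matter how long you run it. Raising retention or referral lifts the ceiling; only a referral rate strong enough to turn the funnel into a loop removes it.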
But while you can optimize for these things, that doesn’t make it easy.
It still comes down to building things of real value and quality, and helping the right people find those things. And while there are tactics to tweak performance rates against each of these goals, the tactics alone won’t matter without the product being good too.
As an example, Dropbox increased their referral rate by rewarding users with extra storage space for referring their friends. But that tactic only works if people like Dropbox enough to (a) want extra storage space and (b) feel happy recommending the product to their friends.
- Build things of quality
- Optimize them against these measurable goals
I wrote a version of this strategy in January but hadn’t published it as I was trying to remove those ‘ish‘s from the title. But the ‘ish’ is actually a big part of my day-to-day work, so this version embraces the ‘ish’.
MoFo Metrics Measures of Success:
These are, ironically, more qualitative than quantitative.
- Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.
- We consider metrics (i.e. measures of success) before, during and after each project.
- We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.
- A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.
1. Every contributor (paid or volunteer) knows at any given time what number they (or we) are trying to move, where that number is right now, and how they hope to influence it.
- “Every” is ambitious, but it sets the right tone.
- This includes:
- Public dashboards, like those at https://metrics.webmaker.org
- Updates and storytelling throughout the year
- Building feedback loops between the process, the work and the results (the impact)
2. We consider metrics (i.e. measures of success) before, during and after each piece of work.
- This requires close integration into our organizational planning process
- This work is underway, but it will take time (and many repetitions) before it becomes habit
3. We articulate the stories behind the metrics we aim for, so their relevance isn’t lost in the numbers.
- The numbers should be for navigation, rather than fuel
4. A/B style testing practice has a significant impact on the performance of our ‘mass audience’ products and campaigns.
- This is the growth hacking part of the plan
- We’ve had some successes (e.g. Webmaker and Fundraising)
- This needs to become a continuous process
Those are my goals.
In many cases, the ultimate measure of success is when this work is done by the team rather than by me for the team.
We’re working on Process AND Culture
Process and culture feed off of and influence each other. Processes must suit the culture being cultivated. A data-driven culture can blinker creativity – it doesn’t have to, but it can. And a culture that doesn’t care for data won’t care for processes related to data. This strategy aims to balance the needs of both.
I tried to write one, but basically this strategy will respond to the roadmaps of each of the MoFo teams.
So, what does Metrics work look like in 2015?
- Building the tools and dashboards to provide the organisational visibility we need for our KPIs
- ‘Instrumenting’ our products so that we can accurately measure how they are being used
- Running Optimization experiments against high profile campaigns
- Running training and support for Google Analytics, Optimizely, and other tools
- Running project level reporting and analysis to support iterative development
- Consulting to the Community Development Team to plan experimental initiatives
Plus: supporting teams to implement our data practices, and of course, the unknown unknowns.
I wrote a post over on fundraising.mozilla.org about our latest round of optimization work for our End of Year Fundraising campaign.
We’ve been sprinting on this during the Mozilla all-hands workweek in Portland, which has been a lot of fun working face-to-face with the awesome team making this happen.
You can follow along with the campaign, and see how we’re doing, at fundraising.mozilla.org
And of course, we’d be over the moon if you wanted to make a donation.
If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon: Learning Analytics for Good in the Age of Big Data.
We had an hour, no idea if anyone else would be interested, or what angle people would come to the session from. And given that, I think it worked out pretty well.
We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):
- Practical solutions to measuring learning as it happens online
- The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
- The research opportunities for publishing and connecting learning data
But, did anyone learn anything in our Learning Analytics session?
Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?
I spoke to people later in the day who told me they learned things. Is that good enough?
As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?
How much did people learn?
This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…
As a meta-exercise, everyone who attended the session had a question to answer at the start and at the end. We also gave them a place to write their email address, so their ‘learning data’ could be linked to them in an identifiable way. It was a little bit silly, but it gave people something to think about.
This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.
- We had about 20 participants
- 10 returned the survey (i.e. opted in to ‘tracking’), by answering question 1
- 5 of those answered question 2
- 5 gave their email address (not exactly the same 5 who answered both questions)
Here is our Learning Analytics data from our session
Is that demonstrable impact?
Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…
What, and how much they learned, and if it will be useful later in their life is another matter.
Even with a deliberately chosen question that made it almost impossible not to show improvement between the start and end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).
Post-it notes and scribbles
If you were at the session and want to jog your memory about what we talked about, I kind-of documented the various things we captured on paper.
I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.
And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.
I’m back at the screen after a week of paternity leave, and I’ll be working part-time for the next two weeks while we settle in to the new family routine at home.
In the meantime, I wanted to mention a Mozilla contributor analysis project in case people would like to get involved.
We have a wiki page now, which means it’s a real thing. And here are some words my sleep-deprived brain prepared for you earlier today:
The goal and scope of the work:
Explore existing contribution datasets to look for possible insights and metrics that would be useful to monitor on an ongoing basis, before the co-incident workweek in Portland at the beginning of December.
- Stress-test our current capacity to use existing contribution data
- Look for actionable insights to support Mozilla-wide community building efforts
- Run ad-hoc analysis before building any ‘tools’
- If useful, prototype tools that can be re-used for ongoing insights into community health
- Build processes so that contributors can get involved in this metrics work
- Document gaps in our existing data / knowledge
- Document ideas for future analysis and exploration
I’m very excited that three members of the community have already offered to support the project and we’ve barely even started.
In the end, these numbers we’re looking at are about the community, and for the benefit of the community, so the more community involvement there is in this process, the better.
If you’re interested in data analysis, or know someone who is, send them the link.
This project is one of my priorities over the following 4-8 weeks. On that note, this looks quite appealing right now.
So I’m going make more tea and eat more biscuits.
This post is an attempt to capture some of the things we’ve learned from a few busy and exciting weeks working on the Webmaker new user funnel.
I will forget some things, there will be other stories to tell, and this will be biased towards my recurring message of “yay metrics”.
How did this happen?
As Dave pointed out in a recent email to Webmaker Dev list, “That’s a comma, not a decimal.”
What happened to increase new user sign-ups by 1,024% compared to the previous month?
Is there one weird trick to…?
Sorry, I know you’d like an easy answer…
This growth is the result of a month of focused work and many, many incremental improvements to the first-run experience for visitors arriving on webmaker.org from the promotion we’ve seen on the Firefox snippet. I’ll try to recount some of it here.
While the answer here isn’t easy, the good news is it’s repeatable.
While I get the fun job of talking about data and optimization (at least it’s fun when it’s good news), the work behind these numbers was a cross-team effort.
I think this model worked really well.
Where are these new Webmaker users coming from?
We can attribute ~60k of those new users directly to:
- Traffic coming from the snippet
- Who converted into users via our new Webmaker Landing pages
I’ve tried to go back over our meeting notes for the month and capture the variations on the funnel as we’ve iterated through them. This was tricky as things changed so fast.
This image below gives you an idea, but also hides many more detailed experiments within each of these pages.
With 8 snippets tested so far, 5 funnel variations, and at least 5 content variables within each funnel, we’ve iterated through over 200 variations of this new user flow in a month.
We’ve been able to do this and get results quickly because of the volume of traffic coming from the snippet, which is fantastic. And in some cases this volume of traffic meant we were learning new things quicker than we were able to ship our next iteration.
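That 200-variation figure is just the combinatorics of the test dimensions multiplying together. A quick sketch (the variant labels here are placeholders, not the real snippet or funnel names):

```python
from itertools import product

# Illustrative labels only - the real variants had descriptive names.
snippets = [f"snippet-{i}" for i in range(1, 9)]   # 8 snippet designs
funnels = [f"funnel-{i}" for i in range(1, 6)]     # 5 funnel variations
content = [f"content-{i}" for i in range(1, 6)]    # 5 content variables

# Every combination of the three dimensions is a distinct user flow.
variations = list(product(snippets, funnels, content))
print(len(variations))  # 8 * 5 * 5 = 200
```

This is also why test dimensions are powerful: adding one more option to any dimension multiplies, rather than adds to, the space you can explore.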
What’s the impact?
If we’d run with our first snippet design and our first call to action, we would have had about 1,000 new Webmaker users from the snippet instead of 60,000 (the remainder are from other channels and activities). Total new user accounts are up by ~1,000%, but new users from the snippet specifically increased by around six times that.
One not-very-weird trick to growth hacking:
I said there wasn’t one weird trick, but I think the success of this work boils down to one piece of advice:
- Prioritize time and permission for testing, with a clear shared objective, and get just enough people together who can make the work happen.
It’s not weird, and it sounds obvious, but it’s a story that often gets overlooked because it doesn’t have the simple causation-based hook we humans look for in our answers.
It’s much more appealing when someone tells you something like “Orange buttons increase conversion rate”. We love the stories of simple tweaks that have remarkable impact, but really it’s always about process.
More Growth hacking tips:
- Learn to kill your darlings, and stay happy while doing it
- We worked overtime to ship things that got replaced within a week
- It can be hard to see that happen to your work when you’re invested in the product
- My personal approach is to invest my emotion in the impact of the thing being made rather than the thing itself
- But I had to lose a lot of A/B tests to realize that
- Your current page is your control
- Test ideas you think will beat it
- If you beat it, that new page is your new control
- Rinse and repeat
- Optimize with small changes (content polishing)
- Challenge with big changes (disruptive ideas)
- Focus on areas with the most scope for impact
- Use data to choose where to use data to make choices
- Don’t stretch yourself too thin
What happens next?
- We have some further snippet coverage for the next couple of weeks, but not at the same level we’ve had recently, so we’ll see this growth rate drop off
- We can start testing the funnel we’ve built for other sources of traffic to see how it performs
- We have infrastructure for spinning up and testing landing pages for many future asks
- This work is never done, but with any optimization you see declining returns on investment
- We need to keep reassessing the most effective place to spend our time
- We have a solid account sign-up flow now, but there’s a whole user journey to think about after that
- We need to gather up and share the results of the tests we ran within this process
Testing doesn’t have to be scary, but sometimes you want it to be.
TL;DR: Check out this graph!
Ever wondered how many Mozfest Volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out…
The MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard because it combines numbers from multiple data sources without de-duping contributors across systems.
So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.
This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard working MoCo friends Pierros and Sheeri).
To help with prepping MoFo data for inclusion in Baloo, and by generally being awesome, JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).
We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.
The downside to this is we will soon have a de-duped number for our dashboard, which will be smaller than the current number. Which will feel like a bit of a downer because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.
But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.
We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing, might be them switching focus to another team, for example. There are lots of exciting possibilities here.
And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.
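At its core, de-duping comes down to keying each contribution record on a shared identifier and taking the set union rather than summing per-system counts. A toy sketch with made-up data; the real integration database handles messier identity matching than this:

```python
# Toy example: the same person can appear in several contribution
# systems. De-duping means counting people, not records.
# (All email addresses are invented for illustration.)
webmaker_hosts = {"ada@example.com", "grace@example.com"}
bugzilla_contributors = {"grace@example.com", "linus@example.com"}
mozfest_volunteers = {"ada@example.com", "margaret@example.com"}

total_records = (len(webmaker_hosts) + len(bugzilla_contributors)
                 + len(mozfest_volunteers))
unique_contributors = (webmaker_hosts | bugzilla_contributors
                       | mozfest_volunteers)

print(total_records)             # 6 records across three systems...
print(len(unique_contributors))  # ...but only 4 people

# The overlaps are interesting in their own right:
print(webmaker_hosts & mozfest_volunteers)  # hosts who also volunteered
```

This is also why the de-duped dashboard number will be smaller: the per-system sum double-counts everyone in an intersection.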
- Our MoFo dashboards now have trendlines based on known activity to date
- The recent uptick in activity is partly new contributors, and partly new recognition of existing contributors (all of which is good, but some of which is misleading for the trendline in the short term)
- Below is a rambling analogy for thinking about our contributor goals and how we answer the question ‘are we on track for 2014?’
- Plus, if you haven’t seen it, OpenMatt has crisply summarized a tonne of the data and insights that we’ve unpicked during Maker Party
I was stacking logs over the weekend, and wondering if I had enough for winter, when it struck me that this might be a useful analogy for a post I was planning to write. So bear with me, I hope this works…
To be clear, this is an analogy about predicting and planning, not a metaphor for contributors* 😀
So the trendline looks good, but…
Trendlines can be misleading.
What if our task was gathering and splitting logs?
We’re halfway through the year, and the log store is half full. The important question is, ‘will it be full when the snow starts falling?’
Well, it depends.
It depends how quickly we add new logs to the store, and it depends how many get used.
So let’s push this analogy a bit.
Before this year, we had scattered stacks of logs here and there, in teams and projects. Some we knew about, some we didn’t. Some we thought were big stacks of logs but were actually stacked on top of something else.
Setting a target was like building a log store and deciding to fill it. We built ours to hold 10,000 logs. There was a bit of guesswork in that.
It took a while to gather up our existing logs (build our databases and counting tools). But the good news is, we had more logs than we thought.
Now we need to start finding and splitting more logs*.
Switching from analogy to reality for a minute…
This week we added trendlines to our dashboard. These are two linear regression lines: one based on all activity for the year to date, and one based on the most recent 4 weeks. This gives a quick feedback mechanism on whether recent actions are helping us towards our targets and whether we’re improving over the year to date.
These are interesting, but can be misleading given our current working practices. The trendline implies some form of destiny. You do a load of work recruiting new contributors, see the trendline is on target, and relax. But relaxing isn’t an option because of the way we’re currently recruiting contributors.
Switching back to the analogy…
We’re mostly splitting logs by hand.
Things happen because we go out and make them happen.
Hard work is the reason we have 1,800 Maker Party events on the map this year and we’re only half-way through the campaign.
There’s a lot to be said for this way of making things happen, and I think there’s enough time left in the year to fill the log store this way.
But this is not mathematical or automated, which makes trendlines based on this activity a bit misleading.
In this mode of working, the answer to ‘Are we on track for 2014?‘ is: ‘the log store will be filled… if we fill it‘.
Systems can be tested, tuned, modified and multiplied. In a world of ‘systems’ we can apply trendlines to our graphs that are much better predictors of future growth.
We should be experimenting with systems now (and we are a little bit). But we don’t yet know what the contributor growth system looks like that works as well as the analogous log splitting machines of the forestry industry. These are things to be invented, tested and iterated on, but I wouldn’t bet on them as the solution for 2014 as this could take a while to solve.
I should also state explicitly that systems are not necessarily software (or hardware). Technology is a relatively small part of the systems of movement building. For an interesting but time consuming distraction, this talk on Social Machines from last week’s Wikimania conference is worth a ponder:
Predicting 2014 today?
Even if you’re splitting logs by hand, you can schedule time to do it. Plan each month, check in on targets and spend more or less time as required to stay on track for the year.
This boils down to a planning exercise, with a little bit of guess work to get started.
In simple terms, you list all the things you plan to do this year that could recruit contributors, and how many contributors you think each will recruit. As you complete some of these activities, you reflect on your predictions, modify the plans, and update estimates for the rest of the year.
Geoffrey has put together a training workshop for this, along with a spreadsheet structure to make this simple for teams to implement. It’s not scary, and it helps you get a grip on the future.
From there, we can start to feed our planned activity and forecast recruitment numbers into our dashboard as a trendline rather than relying solely on past activity.
The manual nature of the splitting-wood-like-activity means what we plan to do is a much more important predictor of the future than extrapolating what we have done in the past, and that changing the future is something you can go out and do.
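The planning exercise above can be sketched as a simple forecast: list planned activities with estimated recruitment, sum them against the target, and revise as activities complete. The activity names and every number below are hypothetical, not our actual plan:

```python
# Planned recruitment activities for the rest of the year
# (activity names and estimates are entirely made up).
plan = {
    "Maker Party events": 3000,
    "Snippet campaigns": 1500,
    "Conference outreach": 400,
    "Volunteer onboarding sprints": 600,
}

target = 10_000        # the log store holds 10,000 logs
recruited_so_far = 5_200   # the store is roughly half full

forecast = recruited_so_far + sum(plan.values())
shortfall = max(0, target - forecast)

print(f"forecast year-end total: {forecast}")
print(f"shortfall vs target:     {shortfall}")
```

Unlike the extrapolated trendline, this number changes when your plans change, which is the point: in splitting-wood mode, the forecast is a to-do list, not a destiny.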
*Contributors are not logs. Do not swing axes at them, and do not under any circumstances put them in your fireplace or wood burning stove.