Working on CRM

I’m writing a couple of blog posts today. The first is a belated note about my work on CRM for MoFo, and how I ended up doing it.

Slides from my presentation to MoFo on our All Staff call in June.

In the second quarter of the year, my Metrics work was pretty quiet while we were prototyping the new Webmaker Android App, and the Learning Networks team was in planning mode for Mozilla Clubs. There was some strategic work to do, but at this stage in the product life-cycle, data-driven decision making isn’t a useful tool. I never actually ran out of things to do, but was keen to spend my time on things that had the most impact.

So I was looking around for projects to help with. Talking to David Ascher, I explained that the projects that engaged me most were the complex ones that combined the needs and views of many different teams. This was also a moment of realisation for me that this was true of every job I’ve held. I like connecting things, including differing points of view.

The MoFo CRM project has been on the table(s) for a while now, but it never gained momentum for legitimate organisational reasons. All our teams needed some form of CRM, but even those with the biggest requirements didn’t have spare capacity to supply CRM tools to the rest of the teams. The more a team tried to coordinate with others, the more complex it was to solve for their own use case. It was everyone’s problem, and no-one’s problem.

So my proposal was to have a ‘product manager’ to look after CRM as an internal service to the org: centralise ownership of the complexity rather than making it everyone’s problem. That way teams can think about the benefits of using the CRM rather than the complexity of building it. And after reviewing the plan with our Ops leadership, I picked up this task.

It’s been a couple of months since then, and I’ve had hundreds of hours of conversations with people across Mozilla about CRM. The project is living up to my request of ‘complex’, but I’m also pleased we’ve started doing the work. Although CRM includes more than its fair share of ‘Enterprise IT’, we’re keeping our workflow in line with the agile methods we apply to our own products and projects.

But it’s a difficult project to track, with many plates that need to keep spinning. I noticed this most after being offline with my family for two weeks then coming back to work. It took me a few days to get up to speed on each of the CRM pieces. So this week I’ve been working on documentation that’s easier to follow.

The project is now split into seven sub-projects, and the current status of each, the next steps, and links to GitHub issues for tracking and discussion can all be found in one place. Building on Matt Thomson’s hard work organizing many Mozilla things, I’m using this wiki/etherpad combo as my working doc: CRM Plan of Record.

The importance of retention rates, explained by @bbalfour

In my last post I shared a tool for playing with the numbers that matter for growing a product or service (i.e. conversion, retention and referral rates).

This video of a talk by Brian Balfour is a perfect introduction / guide to watch if you’re also playing with that tool. In particular, the graphs from 1:46 onwards.

Optimizing for Growth

In my last post I spent some time talking about why we care about measuring retention rates, and tried to make the case that retention rate works as a meaningful measure of quality.

In this post I want to look at how a few key metrics for a product, business or service stack up when you combine them. This is an exercise for people who haven’t spent time thinking about these numbers before.

  • Traffic
  • Conversion
  • Retention
  • Referrals

If you’re used to thinking about product metrics, this won’t be new to you.

I built a simple tool to support this exercise. It’s not perfect, but in the spirit of ‘perfect is the enemy of good’ I’ll share it in its current state.

>> Follow this link, and play with the numbers.

Optimizing for growth isn’t just ‘pouring’ bigger numbers into the top of the ‘funnel’. You need to get the right mix of results across all of these variables. And if your results for any of these measurable things are too low, your product will have a ‘ceiling’ on how many active users you can have at any one time.

However, if you succeed in optimizing your product or service against all four of these points you can find the kind of growth curve that the start-up world chases after every day. The referrals part in particular is important if you want to turn the ‘funnel’ into a ‘loop’.
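To make the ‘ceiling’ and ‘loop’ ideas concrete, here’s a minimal sketch of the kind of model involved. This is not the actual tool behind the link above, and every number is a placeholder: fixed monthly traffic, a conversion rate, a monthly retention rate and a referral rate, combined month by month.

```python
# Toy growth model: fixed traffic, conversion, retention and referral rates.
# None of these numbers come from the real tool; they are placeholders to
# show how a 'ceiling' on active users emerges.

TRAFFIC_PER_MONTH = 100_000  # visitors at the top of the funnel each month
CONVERSION_RATE = 0.02       # share of visitors who become users
RETENTION_RATE = 0.70        # share of last month's users still active
REFERRAL_RATE = 0.10         # new users each active user brings per month


def simulate(months):
    active = 0.0
    curve = []
    for _ in range(months):
        new_from_traffic = TRAFFIC_PER_MONTH * CONVERSION_RATE
        new_from_referrals = active * REFERRAL_RATE
        # existing users churn, then this month's arrivals join
        active = active * RETENTION_RATE + new_from_traffic + new_from_referrals
        curve.append(active)
    return curve


if __name__ == "__main__":
    curve = simulate(36)
    for month in (1, 6, 12, 24, 36):
        print(f"month {month:>2}: ~{curve[month - 1]:,.0f} active users")

    # While retention + referral stays below 1.0, the curve flattens towards
    # new_users_per_month / (1 - retention - referral):
    ceiling = (TRAFFIC_PER_MONTH * CONVERSION_RATE) / (1 - RETENTION_RATE - REFERRAL_RATE)
    print(f"ceiling: ~{ceiling:,.0f} active users")
```

With these placeholder numbers the curve levels off around 10,000 active users however long you run it. Push retention or referral up and the ceiling rises; once retention plus referral crosses 1.0 the funnel becomes the compounding ‘loop’.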

Depending on your situation, improving each of these things has varying degrees of difficulty. But importantly they can all be measured, and as you make changes to the thing you are building you can see how your changes impact on each of these metrics. These are things you can optimize for.

But while you can optimize for these things, that doesn’t make it easy.

It still comes down to building things of real value and quality, and helping the right people find those things. And while there are tactics to tweak performance rates against each of these goals, the tactics alone won’t matter without the product being good too.

As an example, Dropbox increased their referral rate by rewarding users with extra storage space for referring their friends. But that tactic only works if people like Dropbox enough to (a) want extra storage space and (b) feel happy recommending the product to their friends.

In summary:

  • Build things of quality
  • Optimize them against these measurable goals

Measuring Quality

At the end of last year, Cassie raised the question of ‘how to measure quality?’ on our metrics mailing list, which is an excellent question. And like the best questions, I come back to it often. So, I figured it needed a blog post.

There are a bunch of tactical opportunities to measure quality in various processes, like the QA data you might extract from a production line for example. And while those details interest me, this thought process always bubbles up to the aggregate concept: what’s a consistent measure of quality across any product or service?

I have a short answer, but while you’re here I’ll walk you through how I get there. Including some examples of things I think are of high quality.

One of the reasons this question is interesting is that it’s quite common to divide up data into quantitative and qualitative buckets, often splitting the crisp metrics we use as our KPIs from the things we think indicate real quality. But if you care about quality, and you operate at ‘scale’, you need a quantitative measure of quality.

On that note, in a small business or on a small project, the quality feedback loop is often direct to the people making design decisions that affect quality. You can look at the customers in your bakery and get a feel for the quality of your business and products. This is why small initiatives are sometimes immensely high in quality but then deteriorate as they attempt to replicate and scale what they do.

What I’m thinking about here is how to measure quality at scale.

Some things of quality, IMHO:

This axe is wonderful. As my office is also my workshop, this axe is usually near to hand. It will soon be hung on the wall. Not because I am preparing for the zombie apocalypse, but because it is useful both as a tool and as a visual reminder of what it means to build quality products. If this ramble of mine isn’t enough of a distraction, watch Why Values are Important to understand how this axe relates to measures of quality, especially in product design.

This toaster is also wonderful. We’ve had this toaster for more than 10 years now, and it works perfectly. If it were to break, I can get the parts locally and service it myself (it’s deliberately built to last and be repaired). It was an expensive initial purchase, but works out cheap in the long run. If it broke today, I would fix it. If I couldn’t fix it for some extreme reason, I would buy the same toaster in a blink. It is a high-quality product.

This is the espresso coffee I drink every day. Not the tin; it’s another brand that comes in a bag. It had been consistently good for a couple of years, until the last two weeks when the grind has been finer than usual and it keeps blocking the machine. It was a high-quality product in my mind, until recently. I’ll let another batch pass through the supermarket shelves and try it again. Otherwise I’ll switch.

This spatula looks like a novelty product, and typically I don’t think much of novelty products in place of useful tools, but it’s actually a high-quality product. It was a gift, we use it a lot, and it just works really well. If it went missing today, I’d want to get another one the same. Saying that, it’s surprisingly expensive for a spatula. I’ve only just looked at the price, as a result of writing this. I think I’d pay that price though.

All of those examples are relatively expensive products within their respective categories, but price is not the measure of quality, even if price sometimes correlates with quality. I’ll get on to this.

How about things of quality that are not expensive in this way?

What is quality music, or art, or literature to you? Is it something new you enjoy today? Or something you enjoyed several years ago? I personally think it’s the combination of those two things. And I posit that you can’t know the real quality of something until enough time has passed. Though ‘enough time’ varies by product.

Ten years ago, I thought all the music I listened to was of high quality. Re-listening today, I think some of it was. As an exercise, listen to some music you haven’t listened to in a while, and think about which tracks you enjoy for the nostalgia and which you enjoy for the music itself.

In the past, we had to rely on sales as a measure of the popularity of music. But like price, sales doesn’t always relate to quality. Initial popularity indicates potential quality, but not quality in itself (or it indicates manipulation of the audience via effective marketing). Though there are debates around streaming music services and artist payment, we do now have data points about the ongoing value of music beyond the initial parting of listener from cash. I think this can do interesting things for the quality of music overall. And in particular that the future is bleak for album filler tracks when you’re paid per stream.

Another question I enjoy thinking about is why over the centuries, some art has lasting value, and other art doesn’t. But I think I’ve taken enough tangents for now.

So, to join this up.

My view is that quality is reflected by loyalty. And for most products and services, end-user loyalty is something you can measure and optimize for.

Loyalty comes from building things that both last, and continue to be used.

Every other measurable detail about quality adds up to that.

Reducing the defect rate of component X by 10% doesn’t matter unless it impacts on the end-user loyalty.

It’s harder to measure, but this is true even for things which are specifically designed not to last. In particular, ‘experiences’: a once-in-a-lifetime trip, a festival, a learning experience, and so on. If these experiences are of high quality, the memory lasts and you re-live them and re-use them many times over. You tell stories of the experience and you refer your friends. You are loyal to the experience.

Bringing this back to work.

For MoFo colleagues reading this, our organization goals this year already point us towards Quality. We use the industry term ‘Retention’. We have targets for Retention Rates and Ongoing Teaching Activity (i.e. retained teachers). And while the word ‘retention’ sounds a bit cold and business-like, it’s really the same thing as measuring ‘loyalty’. I like the word loyalty, but people have different views about it (in particular whether it’s earned or expected).

This overarching theme also aligns nicely with the overall Mozilla goal of increasing the ‘number of long term relationships’ we hold with our users.

Language is interesting though. Thinking about a ‘20% user loyalty rate’ 7 days after sign-up focuses my mind slightly differently than a ‘20% retention rate’. ‘Retention’ can sound a bit too much like ‘detention’, which might explain why so many businesses strive for consumer ‘lock-in’ as part of their business model.
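Since a ‘20% user loyalty rate’ 7 days after sign-up is a nicely concrete example, here’s a minimal sketch of what that calculation might look like, assuming you have a sign-up time and subsequent activity times per user. The data and field names here are hypothetical, not Webmaker’s actual schema.

```python
from datetime import datetime, timedelta

# Hypothetical records: when each user signed up, and when they came back.
users = [
    {"signed_up": datetime(2015, 1, 1), "active_at": [datetime(2015, 1, 5)]},
    {"signed_up": datetime(2015, 1, 1), "active_at": []},
    {"signed_up": datetime(2015, 1, 2), "active_at": [datetime(2015, 1, 20)]},
]


def retention_rate(users, window=timedelta(days=7)):
    """Share of users who were active again within `window` of signing up."""
    retained = sum(
        1
        for u in users
        if any(u["signed_up"] < t <= u["signed_up"] + window for t in u["active_at"])
    )
    return retained / len(users) if users else 0.0


print(f"7-day retention (loyalty) rate: {retention_rate(users):.0%}")  # 33% for this toy data
```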

Talking to OpenMatt about this recently, he put a better MoFo frame on it than loyalty: retention is a measure of how much people love what we’re doing. When we set goals for increasing retention rate, we are committing to building things people love so much that they keep coming back for more.

In summary:

  • You can measure quality by measuring loyalty
  • I’m happy retention rates are one of our KPIs this year

My next post will look more specifically at the numbers, and at how retention rates factor into product growth.

And I’ll try not to make it another essay. 😉

A ‘free’ online learning experience

I’ve blogged about various experiences of online learning I’ve taken part in over the years, and wanted to reflect on the most recent one: Coursera’s three-week Introduction to Ableton Live.

Learning more about learning is one of my personal goals this year. And I find writing out loud to be a useful tool for thinking. So that’s mostly the point of this.

I take these courses mostly because I like learning new things, but also because I’m interested in online learning more generally. How do you most effectively transfer knowledge, skills and motivation via the web, and/or about the web? That question is often on my mind.

Almost all of the projects I work on at Mozilla are somewhere in the education space; directly with Webmaker or Mozilla Learning Networks and tangentially in the topic of volunteer contribution. Contributing to an open source project as complex and distributed as Mozilla is a learning experience in itself, and sometimes requires specific training to even make it possible.

To further frame this particular brain dump, I’m also interested generally in the economics of the web and how this shapes user experiences, and I have strong feelings about the impact of advertising’s underlying messaging and what this does over time when it dominates a person’s daily content intake. I’m generally wary of the word “Free”. This all gets complex when you work on the web, and even directly on advertising at times. Most of my paycheques have had some pretty direct link to the advertising world, except maybe when I was serving school dinners to very rich children – but that wasn’t my favourite job, despite its lack of direct societal quandaries.

Now, to the content…

If you’re like me, you will tend to read notes about a topic like ‘commerce in education’ and react negatively to some of these observations because there are many cases where those two things should be kept as far apart as possible. But I’m actually not trying to say anything negative here. These are just observations.

Observations

All roads lead to… $

$ Coursera

My online experience within the Coursera site was regularly interrupted with a modal (think popup) screen asking if I wanted to pay to enrol in the ‘Signature Track’, and get a more official certification. This is Coursera’s business model and understandably their interest. It wasn’t at all relevant to me in my life situation, as I was taking a course about how to play with fun music software in my free time. I don’t often check my own qualifications before I let myself hobby. Not that anyone checked my qualifications before they let me work either, but I digress. Coursera’s tagline says ‘free’, but they want you to pay.

$ Blend.io

All assignments for the course had to be published to Blend for peer-evaluation. Blend is like GitHub, but for raw audio production tracks rather than source code. I didn’t know about Blend before the course, and really like it as a concept, how it’s executed, and what it could do for collaborative music making. But I note, it is a business. This course funnels tens of thousands of new users into that business over the course of a few days. There might not be any direct financial trade here (between companies, for example), but users are capital in start-up land. And I now receive emails from Blend with advertisements for commercial audio production tools. My eyeballs, like yours, have a value.

$ Berklee College of Music

While hosted on Coursera, the content of this course is by Berklee College of Music. The content they ‘give away’ would traditionally only have been available to paying students. Berklee’s business is selling seats in classes. This course isn’t given away as an act of kindness; it’s marketing. Three weeks is short and therefore the content is ‘light’. Lighter than I was expecting (not that I’m entitled). But halfway through, we received a promotional email about Berklee’s own online education platform, where you could create an account to get access to further ‘free’ videos supplementing the Coursera materials. I found these supplementary videos more useful, and they led to offers to sign up for extended paid courses with Berklee Online. For Berklee, this whole exercise is a marketing funnel. Quite possibly it’s the most fun and least offensive marketing funnel you can be dropped into, but it exists to do that job.

$ Erin Barra – Course professor and artist

Now, I write this with genuine sympathy, as I’ve walked the floor at countless venues trying to sell enough music and merch to cover the petrol costs of playing a gig. But this is a commercial element of this learning experience, so I will note it. At many points throughout the three weeks, we had opportunities to buy Erin’s music, t-shirts, and audio production stems (these are like the layer files of an original recording) for consumption and/or remixing. I know you have to hustle if you’re making music for a living, but the observation here is that the students of this course are also a marketable audience. Perhaps only because they arrive en masse and end up slightly faceless. I’m sure it would be weird for most teachers to sell t-shirts in a classroom. It wasn’t particularly weird online, where we’re desensitised to being constantly sold things. And I may have only noticed this because I’m interested in how all these things fit together.

$ Ableton

The course was about learning Ableton Live. A commercial audio production tool. So at some point, the cost of Ableton had to be considered. Ableton offers a free 30 day trial, which works for this course and they kindly (or sensibly) agreed to let people taking the course start a new trial even if they’d used their 30 days already. Good manners like those are good for business. Anyway, I already owned Live 9 Intro (aka the cheap version), and for a three week intro course it does more than enough to learn the basics (I guess that’s why it’s called Intro?). But the course taught and encouraged the use of Live 9 Suite (the EUR599 rather than the EUR79 version). Until some people complained, the use of features in Suite was required to complete the final assignment. Reading between the lines, I doubt there was any deliberate commercial discussion around this planning, but the planning definitely didn’t stem from the question: ‘how can we keep the cost down for these beginners?’. At the end of the course there were discount codes to get 15% off purchasing anything from Ableton. I didn’t use Suite during the course, but I’m playing with it now on my own time and terms, and may end up spending money on it soon.

Reflections

It’s wonderful, but it’s not Wikipedia. The course opened a lot of doors, but mostly into places where I could spend money, which I am cautious about as a model for learning. It was valuable to me and prompted me to learn more about Ableton Live than I would have done in those three weeks without it. So I’m grateful for it. But I can’t in my heart think of this as a ‘shared public resource’.

For my own learning, I like deadlines. Preferably arbitrary ones. The fact that these Coursera courses are only available at certain times during the year really works for me. But I struggle with the logic of this when I think about how best to provide learning material online to as many people as possible. The only MOOC-style courses I have finished have been time-bound. I don’t know how many people this is true for though.

People will learn X to earn Y. For me this course was a form of hobby or entertainment, but much learning has a direct commercial interest for students as well as educators. Whether it’s for professional skills development, or building some perceived CV value.

There is no ‘free’ education, even if it says “free” on the homepage. There is always a cost, financial or otherwise. Sometimes the cost is borne by the educator, and sometimes the student. Both models have a place, but I get uncomfortable when one tries to look like the other. And if the world could only have one of these models for all of education I know which one I’d choose. Marketing fills enough of our daily content and claims enough brainprint as it is.

Conclusion

I thought I might find some conclusions in writing this, but that doesn’t always happen. There are a lot of interesting threads here.

So instead of a conclusion, you can have the song I submitted for my course assignment. It was fun to make. And I have this free-but-not-free course to thank for getting it done.

The week ahead: 19 Jan 2015

January

If all goes to plan, I will:

  • Write a daily working process
  • Use a public todo list, and make it work
  • Catch up on more email from time off
  • Ship V1 of Webmaker Metrics retention dashboard
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Not let the immediate todo list get in the way of planning long-term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely
  • Survive a 5 day week (it’s been a while)
  • Write up final testing blog posts from EOY before those tests are forgotten
  • Book data practices kick-off meetings with all teams

To try and solve some of the process challenges, I’ve gone back to a tool I built a couple of years ago (Done by When) and I’m breaking it a little bit to make it useful to me again. This might end up being an evening time project to learn about some of the new tech the Webmaker team are using this year (particularly rewriting the front end with React). I find it useful to have a side-project to use as a playground for learning new things.

Anyway, have a great week. I’ll try and write up some more notes at the end.

From 2014 to 2015

I’ve had a couple of weeks off work and it’s been a good time to reflect on the year past, and the one ahead. And before I dive back into things on Friday morning, I wanted to get this post published. It’s a long one, and writing it was more for my benefit than yours 😉

On 2014

Cassie’s post on 2014 has claimed the perfect title already, but I’ll ditto that it was a hell of a year: New job, second baby, two house moves, one house purchase, one trip to Toronto, two to San Francisco, Mozfest in London, Mozlandia in Portland, finally finishing my degree and graduating, and continually adjusting our home-life around the amazing speed at which a two year old and a newborn can change in any 24 hour period.

Some reflections:

My job title might focus on Metrics, but I don’t just work with numbers, I work with people. And that makes me happy because people are the best. While I love finding ways to measure things, what’s infinitely more interesting is the process of connecting that measurement back to decisions other people make. I’m one year into this role and it’s turning out more like I imagined it from the job description than I thought would be possible, but perhaps that’s because I’ve been given the space to make it that way.

On that note, public speaking is still one of the best ways to learn something (though ‘public writing’ may be almost as useful). I’ve learned to not-hate giving presentations but I don’t rush into them either. I gave a talk in September on ‘Working with numbers and People / Human Beings’, which was a useful exercise in reflecting on my own work, and how I go about doing my day job (slides). It forced me to think about many things that will help me plan for 2015.

I’m in many teams, even though I’m in a team of one. I make a conscious effort to understand the culture, process and motivations of all the teams and people I work with, and I take it as a huge compliment when people forget I’m not just on their team, but rather I’m working with all the teams. This cross-team working is something I’ve gravitated to in my roles prior to Mozilla too, but Metrics is a topic that lends itself especially well to this kind of cross-team work.

Being on many teams can be exhausting. Especially if it’s a ‘planning workweek’, in say Portland, and you’re working with all the teams who are doing their 2015 planning while also working in the team that is specifically not planning, because it’s the End of Year fundraising campaign (which could more than fill the week’s working hours on its own). That looked a lot like working from 04:00 to 22:00 for a few days, though I reclaimed my downtime at the party at the end of the week. That was a good night.

On that note, I exceeded my working capacity by the end of the year. There will always be more work than can be done, but it takes a while for a new role to settle, and for the organization to know what to ask of and expect from a new role like mine. And by the end of the year, I’d overstretched what I had the capacity to deliver. That’s normal, but I need to build working processes this year that expect more work requests than can be delivered rather than just saying yes to everything like I was able to do for most of 2014.

‘Janky’ solutions were the right thing in 2014 for most of my projects (and I absorbed the word Janky into my vocabulary). This year I built a lot of ‘temporary’ and ‘interim’ things, which were useful at the time and helped various people make decisions quickly but which weren’t designed for long term maintenance. For most of these projects that was the right decision, as so much strategy is changing in 2015. But moving forward, I want to build some more reusable infrastructure around this work.

I had an (almost) clean slate, technology wise. The role was new so I didn’t inherit a bunch of systems to maintain, but I also didn’t build up much infrastructure. As I think about technology solutions in 2015, I need to keep maintenance time for projects to a minimum, because it could quickly eat up my capacity to get new things done. This year I want to shift to longer term solutions though for some of the work that was ‘janky’ in 2014.

I learned (much more) JavaScript. Previously I’d dabbled in front-end JS, leaning heavily on jQuery. But working on Webmaker.org projects, and building other apps and services around our contribution metrics projects, I learned a fair amount about node.js development and also a chunk of D3, both of which I’ve come to like a lot. I also upped my git skills. So +1 for learning by doing.

I have loved and hated Bugzilla this year. It took me most of the year to get a Bugzilla based work management system in place and use this as my central record for all metrics work, but with a bunch of MoFo dev and planning work shifting to Github for issue management, my work and collaboration process is now in multiple systems. This is a headache for me, but I’ll find a way to make this work.

Working remotely is excellent, given enough face-to-face time. I’m fine with managing my own time at home, having completed my degree this way and having worked as a freelancer too, but keeping the work ‘real’ with video meetings is something I need. And while the face-to-face workweeks throughout the year are tough for taking me away from home and our young family, they are essential for building the relationships with the team. If we didn’t have the workweeks, the remote work would be much harder. And when it all adds up, I think I get more time with my kids than most working parents who never have to travel. This year, I was really happy with the balance.

Working across timezones is hard, sometimes. I love having the morning to tackle complex problems knowing I won’t get a phone call or email as everyone else is still fast asleep, but by the afternoon I hit a point where my calendar is quickly full of calls, and the people waking up are full of questions and I switch from ‘doing’ to ‘answering’ mode. This is usually fine, but it means my calls are back-to-back until 6pm my time, when I instantly switch from a bunch of unresolved threads of work conversation in my head to stepping out of my office to play with my son for an hour before he goes to bed. It’s amazing to get that time with my son at this age, but that context switch is really tough and I need a better process of recording those open threads while on the calls, so that I can relax until the following morning knowing things won’t be lost. When we hit busier periods across the org, I know I’ll lose more of my evenings to late video meetings, but I try to keep this in check with the rest of life’s needs.

I need to take more time off throughout the year. Otherwise it all piles up at the end of the year like it did this year (though the time off has been useful for settling in after our recent house move). Taking time off was hard in year one, when I was still finding my place, and working out what’s expected and required in a new role, but it’s important for long term productivity. It’s the time when you have room to breathe and think further ahead, and put the work into the context of personal life and the world we live in. I’m lucky to do work that I am personally interested in and committed to, but that can make switching off hard at times. When your work is online, it’s hard to avoid work without avoiding the internet.

I completed my degree this year, after a six-year-long year-off. I got myself a ‘BSc (Open)’. The ‘Open’ is because it was a ‘choose your own adventure’ type of qualification via the Open University. I’ve written about that story elsewhere, but the final qualification is a little bit of maths, a fair amount of computer science and finished off with some storytelling in a creative writing course. This was a combination which I thought was funny at the time when I registered, but now turns out to be what I do for a living.

In 2015

I want to learn more about learning. I am a compulsive learner but I know that doesn’t apply to everyone, and with Webmaker’s educational mission, I want to better understand how learning works. In part that connects to the metrics angle of my role via the scientific field of Learning Analytics, but also I just want to know more about this for myself.

I might try and write at least some code every day (how’s that for commitment?!). Though I’m not one for New Year’s resolutions, because I generally like to set myself new challenges all year round, on New Year’s Eve I recalled John Resig’s post about writing code every day and told my wife I would do this in 2015. Then on 2nd January I got a vomiting bug and took to bed for the best part of 3 days, failing my goal pretty quickly. Since then, however, I have been hacking on Done by When in my evenings and really enjoying it. I’ll see how this goes. Overall I’d like to be coding more regularly, rather than in short intense bursts.

I’ll need to say no to some things. As noted above, I’ve reached capacity so need to think about what does and doesn’t get done throughout the year.

I need to manage myself as though I was a team. I can’t just attack whatever is on top of the todo list and keep letting the list grow. I will adopt some of the heartbeat principles used by Webmaker, and try to be strict with myself about allocating separate time for planning, doing, documenting, and communicating. It can’t all be doing. I’ll write more about my processes as I work them out.

I’ll leave this blog post at that, as it’s grown to quite a length. I’m back to work from tomorrow morning, and this afternoon I need to finish making the spare room into a suitable working environment.

Catch you online soon.

One month of Webmaker Growth Hacking

This post is an attempt to capture some of the things we’ve learned from a few busy and exciting weeks working on the Webmaker new user funnel.

I will forget some things, there will be other stories to tell, and this will be biased towards my recurring message of “yay metrics”.

How did this happen?

[Screenshot: new user sign-up numbers for the month]

As Dave pointed out in a recent email to Webmaker Dev list, “That’s a comma, not a decimal.”

What happened to increase new user sign-ups by 1,024% compared to the previous month?

Is there one weird trick to…?

No.

Sorry, I know you’d like an easy answer…

This growth is the result of a month of focused work and many many incremental improvements to the first-run experience for visitors arriving on webmaker.org from the promotion we’ve seen on the Firefox snippet. I’ll try to recount some of it here.

While the answer here isn’t easy, the good news is it’s repeatable.

Props

While I get the fun job of talking about data and optimization (at least it’s fun when it’s good news), the work behind these numbers was a cross-team effort.

Aki, Andrea, Hannah and I formed the working group. Brett and Geoffrey oversaw the group, sanity checked our decisions and enabled us to act quickly. And others got roped in along the way.

I think this model worked really well.

Where are these new Webmaker users coming from?

We can attribute ~60k of those new users directly to:

  • Traffic coming from the snippet
  • Visitors who converted into users via our new Webmaker Landing pages

Data-driven iterations

I’ve tried to go back over our meeting notes for the month and capture the variations on the funnel as we’ve iterated through them. This was tricky as things changed so fast.

The image below gives you an idea, but it also hides many more detailed experiments within each of these pages.

Testing Iterations

With 8 snippets tested so far, 5 funnel variations, and at least 5 content variables within each funnel, we’ve iterated through over 200 variations of this new user flow in a month.

We’ve been able to do this and get results quickly because of the volume of traffic coming from the snippet, which is fantastic. And in some cases this volume of traffic meant we were learning new things quicker than we were able to ship our next iteration.

What’s the impact?

If we’d run with our first snippet design and our first call to action, we would have had about 1,000 new Webmaker users from the snippet, instead of 60,000 (the remainder are from other channels and activities). Total new user accounts are up by ~1,000%, but new users from the snippet specifically increased by around 6 times that.

One not-very-weird trick to growth hacking:

I said there wasn’t one weird trick, but I think the success of this work boils down to one piece of advice:

  • Prioritize time and permission for testing, with a clear shared objective, and get just enough people together who can make the work happen.

It’s not weird, and it sounds obvious, but it’s a story that gets overlooked often because it doesn’t have the simple causation-based hook we humans look for in our answers.

It’s much more appealing when someone tells you something like “Orange buttons increase conversion rate”. We love the stories of simple tweaks that have remarkable impact, but really it’s always about process.
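As a rough illustration of what that process looks like in numbers, this sketch compares a challenger page against the control with a simple two-proportion z-test. The figures are invented, and this isn’t the actual tooling we used; it just shows the shape of the decision you make on every iteration.

```python
import math


def two_proportion_z(control_conversions, control_visitors,
                     challenger_conversions, challenger_visitors):
    """Z-score for 'does the challenger convert better than the control?'"""
    p_control = control_conversions / control_visitors
    p_challenger = challenger_conversions / challenger_visitors
    pooled = (control_conversions + challenger_conversions) / (
        control_visitors + challenger_visitors
    )
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / challenger_visitors))
    return p_control, p_challenger, (p_challenger - p_control) / se


# Invented numbers: control converted 300 of 20,000 visitors, challenger 390 of 20,000.
p_control, p_challenger, z = two_proportion_z(300, 20_000, 390, 20_000)
print(f"control {p_control:.2%}, challenger {p_challenger:.2%}, z = {z:.2f}")
# z above ~1.96 is the usual bar for calling a win at 95% confidence; below
# that, keep the control and move on to testing the next idea.
```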

More Growth hacking tips:

  • Learn to kill your darlings, and stay happy while doing it
    • We worked overtime to ship things that got replaced within a week
    • It can be hard to see that happen to your work when you’re invested in the product
      • My personal approach is to invest my emotion in the impact of the thing being made rather than the thing itself
      • But I had to lose a lot of A/B tests to realize that
  • Your current page is your control
    • Test ideas you think will beat it
    • If you beat it, that new page is your new control
    • Rinse and repeat
    • Optimize with small changes (content polishing)
    • Challenge with big changes (disruptive ideas)
  • Focus on areas with the most scope for impact
    • Use data to choose where to use data to make choices
    • Don’t stretch yourself too thin

What happens next?

  • We have some further snippet coverage for the next couple of weeks, but not at the same level we’ve had recently, so we’ll see this growth rate drop off
  • We can start testing the funnel we’ve built for other sources of traffic to see how it performs
  • We have infrastructure for spinning up and testing landing pages for many future asks
  • This work is never done, but with any optimization you see declining returns on investment
    • We need to keep reassessing the most effective place to spend our time
    • We have a solid account sign-up flow now, but there’s a whole user journey to think about after that
    • We need to gather up and share the results of the tests we ran within this process

Testing doesn’t have to be scary, but sometimes you want it to be.

Overlapping types of contribution

TL;DR: Check out this graph!

Ever wondered how many Mozfest volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out.

The MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard because it combines numbers from multiple data sources, but the number of contributors is not de-duped across systems.

So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.
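The de-duping itself is conceptually simple once every system can hand you a set of contributor identifiers. Here’s a minimal sketch with made-up identifiers and source names; the real work lives in the integration database mentioned below, not in this snippet.

```python
# Made-up contributor identifiers from two systems; in practice these would
# be something like normalised email addresses pulled from each data source.
webmaker_event_hosts = {"ana@example.org", "ben@example.org", "caro@example.org"}
bugzilla_contributors = {"ben@example.org", "caro@example.org", "dee@example.org"}

# The interim dashboard's number double-counts anyone in both sets:
naive_total = len(webmaker_event_hosts) + len(bugzilla_contributors)

# De-duped total, plus the overlap we couldn't see before:
deduped_total = len(webmaker_event_hosts | bugzilla_contributors)
overlap = webmaker_event_hosts & bugzilla_contributors

print(f"naive: {naive_total}, de-duped: {deduped_total}, overlap: {sorted(overlap)}")
# naive: 6, de-duped: 4, overlap: ['ben@example.org', 'caro@example.org']
```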

This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard working MoCo friends Pierros and Sheeri).

To help with prepping MoFo data for inclusion in Baloo, and by generally being awesome, JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).

We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.

The downside to this is we will soon have a de-duped number for our dashboard, which will be smaller than the current number. Which will feel like a bit of a downer because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.

But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.

We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing, might be them switching focus to another team, for example. There are lots of exciting possibilities here.

And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.

Trendlines and Stacking Logs

TL;DR

  • Our MoFo dashboards now have trendlines based on known activity to date
  • The recent uptick in activity is partly new contributors, and partly new recognition of existing contributors (all of which is good, but some of which is misleading for the trendline in the short term)
  • Below is a rambling analogy for thinking about our contributor goals and how we answer the question ‘are we on track for 2014?’
  • + if you haven’t seen it, OpenMatt has crisply summarized a tonne of the data and insights that we’ve unpicked during Maker Party

Stacking Logs

I was stacking logs over the weekend, and wondering if I had enough for winter, when it struck me that this might be a useful analogy for a post I was planning to write. So bear with me, I hope this works…

To be clear, this is an analogy about predicting and planning, not a metaphor for contributors* 😀

So the trendline looks good, but…

[Chart: the contributor dashboard trendline]

Trendlines can be misleading.

What if our task was gathering and splitting logs?

[Photo: Vedstapel, by Johannes Jansson]

We’re halfway through the year, and the log store is half full. The important question is: ‘will it be full when the snow starts falling?’

Well, it depends.

It depends how quickly we add new logs to the store, and it depends how many get used.

So let’s push this analogy a bit.

[Photo: firewood in the snow]

Before this year, we had scattered stacks of logs here and there, in teams and projects. Some we knew about, some we didn’t. Some we thought were big stacks of logs but were actually stacked on top of something else.

[Photo: Vedstapel, by Johannes Jansson]

Setting a target was like building a log store and deciding to fill it. We built ours to hold 10,000 logs. There was a bit of guesswork in that.

It took a while to gather up our existing logs (build our databases and counting tools). But the good news is, we had more logs than we thought.

Now we need to start finding and splitting more logs*.

Switching from analogy to reality for a minute…

This week we added trendlines to our dashboard. These are two linear regression lines: one based on all activity for the year to date, and one based on the most recent 4 weeks. They give us a quick feedback mechanism on whether recent actions are helping us towards our targets and whether we’re improving over the year to date.
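For anyone curious what those trendlines amount to under the hood, here’s a minimal sketch: a least-squares line fitted to weekly cumulative contributor counts, once for the whole year to date and once for the last 4 weeks. The weekly totals are invented, and this isn’t the dashboard’s actual code.

```python
def fit_line(points):
    """Least-squares slope and intercept for a list of (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / sum(
        (x - mean_x) ** 2 for x, _ in points
    )
    return slope, mean_y - slope * mean_x


# Invented data: cumulative contributors at the end of each week so far.
weeks = list(range(1, 13))
totals = [400, 520, 600, 700, 790, 900, 980, 1100, 1300, 1600, 1900, 2300]
points = list(zip(weeks, totals))

ytd_slope, ytd_intercept = fit_line(points)             # all activity year to date
recent_slope, recent_intercept = fit_line(points[-4:])  # most recent 4 weeks

for label, slope, intercept in (
    ("year-to-date", ytd_slope, ytd_intercept),
    ("last 4 weeks", recent_slope, recent_intercept),
):
    projected = slope * 52 + intercept
    print(f"{label} trend projects ~{projected:,.0f} contributors by week 52")
```

The two lines disagreeing is exactly the point: the recent-weeks line reacts quickly to a burst of recruiting, which is useful feedback but not destiny.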

These are interesting, but can be misleading given our current working practices. The trendline implies some form of destiny. You do a load of work recruiting new contributors, see the trendline is on target, and relax. But relaxing isn’t an option because of the way we’re currently recruiting contributors.

Switching back to the analogy…

We’re mostly splitting logs by hand.

[Photo: a log-splitting block (Špalek na štípání)]

Things happen because we go out and make them happen.

Hard work is the reason we have 1,800 Maker Party events on the map this year and we’re only half-way through the campaign.

There’s a lot to be said for this way of making things happen, and I think there’s enough time left in the year to fill the log store this way.

But this is not mathematical or automated, which makes trendlines based on this activity a bit misleading.

In this mode of working, the answer to ‘Are we on track for 2014?‘ is: ‘the log store will be filled… if we fill it‘.

Scaling

[Photo: a log splitter (Holzspalter)]

As we move forward, and think about scale… say a hundred-thousand logs (or even better, a Million Mozillians). We need to think about log splitting machines (or ‘systems’).

Systems can be tested, tuned, modified and multiplied. In a world of ‘systems’ we can apply trendlines to our graphs that are much better predictors of future growth.

We should be experimenting with systems now (and we are a little bit). But we don’t yet know what the contributor growth system looks like that works as well as the analogous log splitting machines of the forestry industry. These are things to be invented, tested and iterated on, but I wouldn’t bet on them as the solution for 2014 as this could take a while to solve.

I should also state explicitly that systems are not necessarily software (or hardware). Technology is a relatively small part of the systems of movement building. For an interesting but time-consuming distraction, this talk on Social Machines from last week’s Wikimania conference is worth a ponder.

Predicting 2014 today?

Even if you’re splitting logs by hand, you can schedule time to do it. Plan each month, check in on targets and spend more or less time as required to stay on track for the year.

This boils down to a planning exercise, with a little bit of guess work to get started.

In simple terms, you list all the things you plan to do this year that could recruit contributors, and how many contributors you think each will recruit. As you complete some of these activities you reflect on your predictions, and modify the plans and update estimates for the rest of the year.

Geoffrey has put together a training workshop for this, along with a spreadsheet structure to make this simple for teams to implement. It’s not scary, and it helps you get a grip on the future.

From there, we can start to feed our planned activity and forecast recruitment numbers into our dashboard as a trendline rather than relying solely on past activity.
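In code terms, that forecast trendline is little more than a rollup of the plan. A minimal sketch with invented activities and estimates (the real version is the spreadsheet structure from Geoffrey’s workshop, not this):

```python
# Hypothetical plan: (month, activity, estimated new contributors recruited).
planned_activities = [
    (9, "Maker Party wrap-up events", 300),
    (10, "Mozfest volunteer drive", 500),
    (11, "Contribution sprint", 150),
    (12, "End of year campaign", 400),
]

contributors_so_far = 4200  # made-up year-to-date total

running_total = contributors_so_far
for month, activity, estimate in planned_activities:
    running_total += estimate
    print(f"end of month {month:>2} ({activity}): ~{running_total:,} contributors forecast")

# As each activity completes, swap its estimate for the actual number and
# re-plot; the forward trendline reflects the plan, not an extrapolation.
```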

The manual nature of this splitting-wood-like activity means that what we plan to do is a much better predictor of the future than extrapolating what we have done in the past, and that changing the future is something you can go out and do.

*Contributors are not logs. Do not swing axes at them, and do not under any circumstances put them in your fireplace or wood burning stove.