What’s happening this week?
My number one goal (P1) for this week is solving offline-friendly mobile analytics for Webmaker App, while keeping other projects ticking along adequately.
Here’s to a productive week.
I wrote a post over on fundraising.mozilla.org about our latest round of optimization work for our End of Year Fundraising campaign.
We’ve been sprinting on this during the Mozilla all-hands workweek in Portland, and it’s been a lot of fun working face-to-face with the awesome team making this happen.
You can follow along with the campaign, and see how we’re doing, at fundraising.mozilla.org.
And of course, we’d be over the moon if you wanted to make a donation.
I post this hesitantly: I’m spending the evening looking at DALMOOC and hope to take part, but I know I’m short on free time right now (what with a new baby and trying to buy a house) and I’m starting the course late.
This is either the first in a series of blog posts about this course, or, we shall never talk about this again.
The course encourages open and distributed publishing of work and assessments, which makes answering this first ‘warm-up’ task feel like more of a commitment to the course than I can really make. But here goes…
Competency 0.1: Describe and navigate the distributed structure of DALMOOC, different ways to engage with peers and various technologies to manage and share personal learning.
DALMOOC offers and encourages learning experiences that span many online products from many providers but which all connect back to a core curriculum hosted on the edX platform. This ranges from learning to use 3rd party tools and software to interacting with peers on commercial social media platforms like Twitter and Facebook. Learners can pick the tools and engagement best suited to them, including an option to follow just the core curriculum within edX if they prefer to do so.
It actually feels a lot like how we work at Mozilla, which is overwhelming and disorientating at first but empowering in the long run.
Writing this publicly, however lazily, has forced me to engage with the task much more actively than I might have just sitting back, watching a lecture and answering a quiz.
But I suspect that a fear of the web, and a lack of experience ‘working open’ would make this a terrifying experience for many people. The DALMOOC topic probably pre-selects for people with a higher than average disposition to work this way though, which helps.
If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon: Learning Analytics for Good in the Age of Big Data.
We had an hour, no idea whether anyone else would be interested, and no sense of what angle people would come to the session from. Given that, I think it worked out pretty well.
We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):
- Practical solutions to measuring learning as it happens online
- The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
- The research opportunities for publishing and connecting learning data
But, did anyone learn anything in our Learning Analytics session?
Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?
I spoke to people later in the day who told me they learned things. Is that good enough?
As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?
How much did people learn?
This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…
As a meta-exercise, everyone who attended the session had a question to answer at the start and end. We also gave them a place to write their email address, linking their ‘learning data’ to them in an identifiable way. It was a little bit silly, but it was something to think about.
This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.
Here is our Learning Analytics data from our session:
- We had about 20 participants
- 10 returned the survey (i.e. opted in to ‘tracking’) by answering question 1
- 5 of those answered question 2
- 5 gave their email address (not exactly the same 5 who answered both questions)
Is that demonstrable impact?
Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…
What, and how much they learned, and if it will be useful later in their life is another matter.
Even with a deliberate choice of question that made it almost impossible not to show improvement between the start and end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).
Post-it notes and scribbles
If you were at the session and want to jog your memory about what we talked about, I kind-of documented the various things we captured on paper.
I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.
And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.
I’m back at the screen after a week of paternity leave, and I’ll be working part-time for the next two weeks while we settle into the new family routine at home.
In the meantime, I wanted to mention a Mozilla contributor analysis project in case people would like to get involved.
We have a wiki page now, which means it’s a real thing. And here are some words my sleep-deprived brain prepared for you earlier today:
The goal and scope of the work:
Explore existing contribution datasets to look for possible insights and metrics that would be useful to monitor on an ongoing basis, before the coincident workweek in Portland at the beginning of December.
- Stress-test our current capacity to use existing contribution data
- Look for actionable insights to support Mozilla-wide community building efforts
- Run ad-hoc analysis before building any ‘tools’
- If useful, prototype tools that can be re-used for ongoing insights into community health
- Build processes so that contributors can get involved in this metrics work
- Document gaps in our existing data / knowledge
- Document ideas for future analysis and exploration
I’m very excited that three members of the community have already offered to support the project and we’ve barely even started.
In the end, these numbers we’re looking at are about the community, and for the benefit of the community, so the more community involvement there is in this process, the better.
If you’re interested in data analysis, or know someone who is, send them the link.
This project is one of my priorities over the following 4-8 weeks. On that note, this looks quite appealing right now.
So I’m going make more tea and eat more biscuits.
- Removing the second sentence increases conversion rate (hypothesis = simplicity is good).
- The button text ‘Go!’ increased the conversion rate.
- Both variations on the headline increased conversion rate, but ‘Welcome to Webmaker’ performed the best.
- We should remove the bullet points on this landing page.
- The log-in option is useful on the page, even for a cold audience who we assume do not have accounts already.
- Repeating the ask ‘Sign-up for Webmaker’ at the end of the copy, even when it duplicates the heading immediately above, is useful, even at the expense of making the copy longer.
- The button text ‘Create an account’ works better than ‘Sign up for Webmaker’ even when the headline and CTA in the copy are ‘Sign up for Webmaker’.
- These two headlines are equivalent. In the absence of other data we should keep the version which includes the brand name, as it adds one further ‘brand impression’ to the user journey.
- The existing blue background color is the best variant, given the rest of the page right now.
The Webmaker Testing Hub
If any of those “conclusions” sound interesting to you, you’ll probably want to read more about them on the Webmaker Testing Hub (it’s a fancy name for a list on a wiki).
This is where we’ll try and share the results of any test we run, and document the tests currently running.
And why that image for this blog post?
Because blog posts need an image, and this song came on as I was writing it. And I’m sure it’s a song about statistical significance, or counting, or something…
Who’s getting involved?
Every day people visit the webmaker.org website. They come from many places, for many reasons. Sometimes they know about Webmaker, but most of the time it’s new to them. Some of those people take an action; they sign up to find out more, to make something with our tools, or even to throw a Maker Party. But most of the people who visit webmaker.org don’t.
The percentage of people who do take action is our conversion rate. Our conversion rate is an important number that can help us to be more effective. And being more effective is key to winning.
If you’re new to thinking about our conversion rate, it can seem complex at first, but it is something we can influence. And I choose the word influence deliberately, as a conversion rate is not typically something you can control.
The good thing about a conversion rate is that you can monitor what happens to it when you change your website, or your marketing, or your product. In all product design, marketing and copywriting we’re communicating with busy human beings. And human beings are brilliant and irrational (despite our best objections). The things that inspire us to take action are often hard to believe.
For the Webmaker story to cut through and resonate with someone as they’re skimming links on their phone, while eating breakfast and trying to convince a toddler to eat breakfast too, is really difficult.
How we present Webmaker, the words we use to ask people to get involved, and how easy we make it for them to sign up, all combine to determine what percentage of people who visit webmaker.org today will sign up and get involved.
- Conversion rate is a number that matters.
- It’s a number we can accurately track.
- And it’s a number we can improve.
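As a minimal sketch (with made-up numbers, not real Webmaker figures), a conversion rate is just actions divided by visits:

```javascript
// Conversion rate = people who took the action / people who visited.
function conversionRate(signups, visits) {
  if (visits === 0) return 0; // no visits, no rate to speak of
  return signups / visits;
}

// Hypothetical day: 150 sign-ups from 5000 visits.
const rate = conversionRate(150, 5000);
console.log((rate * 100).toFixed(1) + '%'); // prints "3.0%"
```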
It gets more complex though
The people who visit webmaker.org today are not all equally likely to take an action.
How people hear about Webmaker, and their existing level of knowledge affects their ‘predisposition to convert’.
- If my friend is hosting a Maker Party, I’ve volunteered to help, and I’m checking out the site before the event, odds are I’ll sign up.
- If I clicked a link shared on Twitter that sounded funny but didn’t really explain what Webmaker was, I’m much less likely to sign up.
Often, the traffic sources that drive the biggest numbers of visitors bring the people with the least existing knowledge about Webmaker, who are less likely to convert. This is true of most ‘markets’, where an increase in traffic often results in a decrease in overall conversion rate.
Enter, The Snippet
Mozilla will be promoting Maker Party on The Snippet. The snippet reaches such a vast audience that we just ran an early test to make sure everything is working OK and to establish some baseline metrics. The expectation for the snippet is high visits, a low conversion rate, and overall a load of new people who hear about Webmaker.
By all accounts, a large volume of traffic from a hugely diverse audience whose only knowledge of Webmaker is a line of text and a small flashing icon should result in a very low conversion rate. And when you add this traffic into the overall webmaker.org mix, our overall average conversion rate should plummet (though this would be an artifact of the stats rather than any decline in effectiveness elsewhere).
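That ‘artifact of the stats’ is just a weighted average at work. A small sketch with hypothetical numbers (not our real traffic figures) shows how adding a large volume of low-converting traffic drags the overall rate down even though the existing pages haven’t changed at all:

```javascript
// Overall conversion rate is conversions / visits across all segments,
// i.e. a visit-weighted average of each segment's rate.
function blendedRate(segments) {
  const visits = segments.reduce((sum, s) => sum + s.visits, 0);
  const conversions = segments.reduce((sum, s) => sum + s.visits * s.rate, 0);
  return conversions / visits;
}

const before = blendedRate([{ visits: 10000, rate: 0.03 }]);
const after = blendedRate([
  { visits: 10000, rate: 0.03 },  // existing traffic, unchanged
  { visits: 50000, rate: 0.005 }, // snippet-style traffic: huge volume, low rate
]);
// `after` is well below `before`, with no decline in effectiveness anywhere.
```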
However, after a few days of testing the snippet, our conversion rate overall is up. This is quite frankly astounding, and a great endorsement for the work that is going into our new landing pages. This is really something to celebrate.
So, how did this happen?
Well, mostly by design. Though the actual results are even better than I was personally expecting and hoping for.
You could say we’re cheating, because we chose a new type of conversion for this audience. Rather than ‘creating a webmaker.org account’, we only asked them to ‘join a mailing list’. It’s a lower-bar call to action, but what really matters is that it’s an appropriate call to action for the level of existing knowledge we expect this audience to have. Appropriate is a really important part of the design.
Traffic from the snippet goes to a really simple introduction to webmaker.org, with an immediate call to action to join the mailing list, and then it ‘routes’ you into the Maker Party website to explore more. That way, even if you’re busy now, you can tell us you’re interested and we can keep in touch and remind you in a week’s time (when you have a quiet moment, perhaps) about “that awesome Mozilla educational programme you saw last week but didn’t have time to look at properly”.
It’s built on the idea that many of the people who currently visit webmaker.org but don’t take action are genuinely interested; they just didn’t make it in the door. We have to give them an easy enough way to let us know they’re interested. And then we have to hold their hands and welcome them into this crazy world of remixed education.
A good landing page is like a good concierge.
The results so far:
Even with this lower-bar call to action, I was expecting lower conversion rates for visitors who come from the snippet. Our usual ‘new account’ conversion rate for Webmaker.org is in the region of 3%, depending on traffic source. The snippet landing page is currently converting at around 8%, for significant volumes of traffic. And this is before any testing of alternative page designs, content tweaks, and other optimization that can usually uncover even higher conversion rates.
Our very first audience specific landing page is already having a significant impact.
So, here’s to many more webmaker.org landing pages that welcome users into our world, and in turn make that world even bigger.
On keeping the balance
Measuring conversion rate is a game of statistics, but improving conversion rate is about serving individual human beings well by meeting them where they are. Our challenge is to jump back and forth between these two ways of thinking without turning people into numbers, or forgetting that numbers help us work with people at scale.
Learning more about conversion rates?
- This talk I gave a little while ago covers most of the theory: Things they don’t tell you about conversion rates
Over the last week or so I’ve been building a thing: Gitribution. It’s an attempt to understand contributions to Mozilla Foundation work that happen on Github. It’s not perfect yet, but it’s in a state to get feedback on now.
Why did I build this?
For these reasons (in this order):
- Counting: To extract counts of contributor numbers from Github across Foundation projects on an automated ongoing basis
- Testing: To demo the API format we need for other sources of data to power our interim contributor dashboard
- Learning: To learn a bit about node.js so I can support metrics work on other projects more directly when it’s helpful to (i.e. submitting pull-requests rather than just opening bugs)
The data in this tool is all public data from the Github API, but it’s been restructured so it can be queried in ways that answer questions specific to our goals this year, and has some additional categorization of repositories so we can query against individual teams. The Github API on its own couldn’t answer our questions directly.
This also gives me data in a format that can be explored visually in Tableau (I’ll share this in a follow up blog post). We can now count Github contributors, and also analyze contributions.
Part of our interim dashboard plans include a standard format for reporting on numbers of active and new contributors for a given activity. Building this tool was a way to test if that format makes sense. The output is an API that you can ping with a date and see:
- The number of unique usernames to contribute in the 12 months prior (excluding those users who are members of the Github organization that owns the repositories – i.e. Mozilla or openNews)
- The number of those who only contributed in the 7 days prior (i.e. new contributors)
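The counting behind those two numbers can be sketched roughly like this (the field names and data shape are my assumptions for illustration, not Gitribution’s actual schema):

```javascript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// contributions: [{ username, date }], asOf: Date, orgMembers: Set of usernames.
function countContributors(contributions, asOf, orgMembers) {
  const yearAgo = new Date(asOf.getTime() - 365 * MS_PER_DAY);
  const weekAgo = new Date(asOf.getTime() - 7 * MS_PER_DAY);

  const active = new Set();  // unique contributors in the 12 months prior
  const recent = new Set();  // contributors seen in the 7 days prior
  const earlier = new Set(); // contributors seen before the 7 days prior

  for (const c of contributions) {
    if (orgMembers.has(c.username)) continue; // exclude org members (staff)
    const when = new Date(c.date);
    if (when < yearAgo || when > asOf) continue; // outside the 12-month window
    active.add(c.username);
    (when >= weekAgo ? recent : earlier).add(c.username);
  }

  // "New" contributors only appear within the 7 days prior.
  const fresh = [...recent].filter((u) => !earlier.has(u));
  return { active: active.size, fresh: fresh.length };
}
```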
You can test the API here (change the date, or the team names – currently webmaker, openbadge, openNews)
We can use this in the dashboard soon.
I know a lot more about node.js than I did last week. So that’s something 🙂
I descended into what I later found out is called callback hell and felt much better when I learned that callback hell is a shared experience!
I tried an extreme escape from callback hell by re-building the app as a fire-and-forget process that kicked off several thousand pings to the Github API and paid no attention to whether or not they succeeded (clearly not a winning solution).
And I’ve ended up with something that isn’t too hellish but uses callbacks to manage the process flow. The current process is pretty linear, so I was able to sense-check what it’s doing, but it also works mostly on one task at a time, so it isn’t getting the potential value out of node’s non-blocking model.
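That linear, callback-driven flow looks roughly like this (illustrative step names, not the actual Gitribution code): each step only starts when the previous callback fires, which is easy to sense-check but leaves node’s concurrency mostly unused.

```javascript
// Stand-ins for the real Github API calls, made async with setImmediate.
function fetchRepos(callback) {
  setImmediate(() => callback(null, ['repo-a', 'repo-b']));
}

function fetchContributions(repo, callback) {
  setImmediate(() => callback(null, [{ repo, user: 'someone' }]));
}

// Process repos one at a time: linear and easy to follow,
// but only one request is ever in flight.
function run(done) {
  fetchRepos((err, repos) => {
    if (err) return done(err);
    const results = [];
    function next(i) {
      if (i >= repos.length) return done(null, results);
      fetchContributions(repos[i], (err, contributions) => {
        if (err) return done(err);
        results.push(...contributions);
        next(i + 1); // move to the next repo only after this one finishes
      });
    }
    next(0);
  });
}
```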
- Tweaks to the categorization of ‘members=staff’
- See the attached image of contributions by username. There are some members of staff with many contributions who are not members of Mozilla on Github. This is not material when counting the number of contributors in relation to targets, but when we analyze contribution activity, those users with a lot of contributions skew the data significantly.
- Check and correct the list of repos assigned to each team
- Currently a best guess based on my limited knowledge and some time trawling through all the repos on the main Mozilla Github page
- Work out how to use this with Science Lab projects
- As Software Carpentry use Github as part of their training (which I love), the data in their account doesn’t represent the same kinds of activities as the other repos. I need to think about this.
- Pick the brains of my knowledgeable colleagues and get a review of this code
What else is this good for?
- This might be useful as one of the ways we get data into the upcoming project Baloo.
Tomorrow is the first day in my new role at the Mozilla Foundation, and I’m getting the new job nerves and excitement now.
Between wrapping up at WWF, preparing for Christmas, house hunting, and finishing off my next study assignment (a screenplay involving time-travel and a bus), I’ve been squeezing in a little bit of prep for the new job too.
This post is basically a note about some of the things I’ve been looking at in the last couple of weeks.
I thought it would be useful to jump through various bits of tech used in a number of MoFo projects, some of which I’d been wanting to play with anyway. This is not deep learning by any means, but it’s enough hands-on experience to speed up some things in the future.
I set up a little node.js app locally and had a look at Express. That’s all very nice, and I liked the basic app structures. Working with a package manager is a lovely part of the workflow, even if it sometimes feels a bit too magic to be real. I also had a look at MongoDB, Mongoose and MERS as a potential data model/store solution for another little app thing I want to build at some point. I didn’t take this far, but got the basic data model working over the MERS API.
I’d used Git a little bit already, but wanted a better grasp of the process for contributing ‘nicely’ to bigger projects (where you’re not also talking directly to the other people involved). Reading the Pro Git book was great for that, and a lighter read than I expected. It falls into the ‘why didn’t I do that months ago?’ category.
Sysadmin-esque work is one of my weak points, so the next project was more of a stretch. I set up an Amazon EC2 instance and installed a copy of Graphite. The documentation for Graphite is on the sparse side for people who don’t know their way around a Linux command prompt well, but that probably taught me more than if I’d just been following a series of commands from a tutorial. I think I’ll be spending a lot more time on the metrics end of Graphite, so getting a grasp of the underlying architecture will hopefully help.
Then, for the last couple of days I’ve been working through Data Analysis With Open Source Tools at a slightly superficial level (i.e. skipping some of the Maths), but it’s been a good warm-up for the work ahead.
And that’s it for now.
I’m really looking forward to meeting many new people, problems and possibilities in 2014.
Happy New Year!