Mile long string of balloons

  • Removing the second sentence increases conversion rate (hypothesis = simplicity is good).
  • The button text ‘Go!’ increased the conversion rate.
  • Both variations on the headline increased conversion rate, but ‘Welcome to Webmaker’ performed the best.
  • We should remove the bullet points on this landing page.
  • The log-in option is useful on the page, even for a cold audience who we assume do not have accounts already.
  • Repeating the ask ‘Sign-up for Webmaker’ at the end of the copy, even when it duplicates the heading immediately above, is useful, even at the expense of making the copy longer.
  • The button text ‘Create an account’ works better than ‘Sign up for Webmaker’ even when the headline and CTA in the copy are ‘Sign up for Webmaker’.
  • These two headlines are equivalent. In the absence of other data we should keep the version which includes the brand name, as it adds one further ‘brand impression’ to the user journey.
  • The existing blue background color is the best variant, given the rest of the page right now.

The Webmaker Testing Hub

If any of those “conclusions” sound interesting to you, you’ll probably want to read more about them on the Webmaker Testing Hub (it’s a fancy name for a list on a wiki).

This is where we’ll try and share the results of any test we run, and document the tests currently running.

And why that image for this blog post?

Because blog posts need an image, and this song came on as I was writing it. And I’m sure it’s a song about statistical significance, or counting, or something…

The Power of Webmaker Landing Pages

We just started using our first webmaker.org landing page, and I thought I’d write about why this is so important and how it’s working out so far.

Who’s getting involved?

Every day people visit the webmaker.org website. They come from many places, for many reasons. Sometimes they know about Webmaker, but most of the time it’s new to them. Some of those people take an action; they sign up to find out more, to make something with our tools, or even to throw a Maker Party. But most of the people who visit webmaker.org don’t.

The percentage of people who do take action is our conversion rate. Our conversion rate is an important number that can help us to be more effective. And being more effective is key to winning.

If you’re new to thinking about our conversion rate, it can seem complex at first, but it is something we can influence. And I choose the word influence deliberately, as a conversion rate is not typically something you can control.

The good thing about a conversion rate is that you can monitor what happens to it when you change your website, or your marketing, or your product. In all product design, marketing and copywriting we’re communicating with busy human beings. And human beings are brilliant and irrational (despite our best objections). The things that inspire us to take action are often hard to believe.

For the Webmaker story to cut through and resonate with someone as they’re skimming links on their phone while eating breakfast and trying to convince a toddler to eat breakfast too is really difficult.

How we present Webmaker, the words we use to ask people to get involved, and how easy we make it for them to sign up all combine to determine what percentage of people who visit webmaker.org today will sign up and get involved.

  1. Conversion rate is a number that matters.
  2. It’s a number we can accurately track.
  3. And it’s a number we can improve.
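The calculation itself is as simple as it sounds. A quick sketch, with made-up numbers:

```python
# Conversion rate is just conversions divided by visitors.
# All numbers here are invented for illustration.
visitors = 12_000      # people who visited webmaker.org this week
conversions = 360      # people who signed up

conversion_rate = conversions / visitors
print(f"Conversion rate: {conversion_rate:.1%}")  # → 3.0%
```

The hard part isn’t computing the number; it’s tracking it consistently over time so changes in it mean something.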

It gets more complex though

The people who visit webmaker.org today are not all equally likely to take an action.

How people hear about Webmaker, and their existing level of knowledge affects their ‘predisposition to convert’.

  • If my friend is hosting a Maker Party and I’ve volunteered to help and I’m checking out the site before the event, odds are I’ll sign up.
  • If I clicked a link shared on Twitter that sounded funny but didn’t really explain what Webmaker was, I’m much less likely to sign up.

Often, the traffic sources that drive the biggest number of visitors bring the people with the least existing knowledge about Webmaker, who are the least likely to convert. This is true of most ‘markets’, where an increase in traffic often results in a decrease in overall conversion rate.

Enter, The Snippet

Mozilla will be promoting Maker Party on The Snippet, which reaches such a vast audience that we just ran an early test to make sure everything is working OK and to establish some baseline metrics. The expectation for the snippet is high visits, a low conversion rate, and overall a load of new people who hear about Webmaker.

By all accounts, a large volume of traffic from a hugely diverse audience whose only knowledge of Webmaker is a line of text and a small flashing icon should result in a very low conversion rate. And when you add this traffic into the overall webmaker.org mix, our overall average conversion rate should plummet (though this would be an artifact of the stats rather than any decline in effectiveness elsewhere).
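To see why the overall average can plummet without anything actually getting worse, here’s a tiny sketch with invented numbers:

```python
# Invented numbers: existing traffic converts well, snippet traffic poorly.
existing_visitors, existing_conversions = 10_000, 300   # 3.0% conversion rate
snippet_visitors, snippet_conversions = 90_000, 450     # 0.5% conversion rate

# The blended average is dominated by the high-volume, low-converting source.
overall = (existing_conversions + snippet_conversions) / (existing_visitors + snippet_visitors)
print(f"Overall: {overall:.2%}")  # → 0.75%, even though neither source changed
```

Each traffic source converts exactly as well as before; only the mix changed. That’s the statistical artifact in question.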

However, after a few days of testing the snippet, our conversion rate overall is up. This is quite frankly astounding, and a great endorsement for the work that is going into our new landing pages. This is really something to celebrate.

So, how did this happen?

Well, mostly by design. Though the actual results are even better than I was personally expecting and hoping for.

You could say we’re cheating, because we chose a new type of conversion for this audience. Rather than ‘creating a webmaker.org account’, we only asked them to ‘join a mailing list’. It’s a lower-bar call to action, but what really matters is that it’s an appropriate call to action for the level of existing knowledge we expect this audience to have. Appropriate is a really important part of the design.

Traffic from the snippet goes to a really simple introduction to webmaker.org page with an immediate call to action to join the mailing list, then it ‘routes’ you into the Maker Party website to explore more. That way, even if you’re busy now, you can tell us you’re interested and we can keep in touch and remind you in a week’s time (when you have a quiet moment, perhaps) about “that awesome Mozilla educational programme you saw last week but didn’t have time to look at properly”.

It’s built on the idea that many of the people who currently visit webmaker.org but don’t take action are genuinely interested; they just didn’t make it in the door. We have to give them an easy enough way to let us know they’re interested. And then, we have to hold their hands and welcome them into this crazy world of remixed education.

A good landing page is like a good concierge.

The results so far:

Even with this lower-bar call to action, I was expecting lower conversion rates for visitors who come from the snippet. Our usual ‘new account’ conversion rate for webmaker.org is in the region of 3%, depending on traffic source. The snippet landing page is currently converting at around 8%, on significant volumes of traffic. And this is before any testing of alternative page designs, content tweaking, and other optimization that usually uncovers even higher conversion rates.
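For anyone wondering whether a gap like 3% vs. 8% could just be noise: a rough two-proportion z-test sketch (the visitor counts below are hypothetical stand-ins, since I’m not publishing raw numbers here) shows that at any serious volume the difference is decisive:

```python
from statistics import NormalDist

# Hypothetical counts consistent with the rates mentioned above.
n1, x1 = 5000, 150   # baseline pages: 3.0% conversion
n2, x2 = 5000, 400   # snippet landing page: 8.0% conversion

p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)                      # pooled conversion rate
se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5  # pooled standard error
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
# With these invented counts, z comes out around 11 — far beyond any
# usual significance threshold.
print(f"z = {z:.1f}, p = {p_value:.3g}")
```

In other words, a difference this large doesn’t need fancy statistics to believe; it needs them to quantify.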

Our very first audience specific landing page is already having a significant impact.

So, here’s to many more webmaker.org landing pages that welcome users into our world, and in turn make that world even bigger.

On keeping the balance

Measuring conversion rate is a game of statistics, but improving conversion rate is about serving individual human beings well by meeting them where they are. Our challenge is to jump back and forth between these two ways of thinking without turning people into numbers, or forgetting that numbers help us work with people at scale.

Learning more about conversion rates?

Who’s teaching this thing anyway?

This is an idea for Webmaker teacher dashboards, and some thoughts on metadata related to learning analytics.

This post stems from a few conversations around metrics for Webmaker and learning analytics and it proposes some potential product features which need to be challenged and considered. I’m sharing the idea here as it’s easy to pass this around, but this is very much just an idea right now.

For context, I’m approaching this from a metrics perspective, but I’m trying to solve the data gathering challenge by adding value for our users rather than asking them to do any extra work.

These are the kinds of questions I want us to be able to answer, and that can inform future decision making in a positive way…

  • How many people using Webmaker tools are mentors, students, or others?
  • Do mentors teach many times?
  • How many learners go on to become mentors?
  • What size groups do mentors typically work with?
  • How many mentors teach once, and then never again? (their feedback would be very useful)
  • How many learners come back to Webmaker tools several days after a lesson?
  • Which partnership programme reached the greatest number of learners?

And the particularly tricky area…

  • What data points show developing competencies in Web Literacy?

Flexible and organic data points to suit the Webmaker ecosystem

The Webmaker suite of tools is very open and flexible, and as a result gets used by people for many different things. Which, personally, I like a lot. However, this also makes understanding our users more difficult.

When looking at the data, how can we tell if a new Thimble Make has come from a teacher, a student, or even an experienced web developer who works at Mozilla and is using the tool to publish their birthday wishes to the web? The waters here are muddy.

We need a few additional indicators in the data to analyze it in a meaningful way, but these indicators have to work with the informal teaching models and practices that exist in the Webmaker ecosystem.

On the grounds that everyone has both something to teach and something to learn, and that we want trainers to train trainers and so on, I propose that asking people to self-identify as mentors via a survey/check-box/preferences/etc. will not yield accurate flags in the data.

The journey to identifying yourself as a mentor is personal and complex, and though that process is immensely interesting, there are simpler things we can measure.

The simplest measure is that someone who teaches something is a teacher. That sounds obvious, but it’s very slightly different from someone who thinks of themselves as a teacher.

If we build a really useful tool for teaching (I’m suggesting one idea below) and its use identifies Webmaker accounts as teacher(s) and/or learner(s), then we’d have useful metadata to answer almost all of the questions asked above.

When we know who the learners are we can better understand what learning looks like in terms of data (a crucial step in conversations about learning analytics).

If anyone can use this proposed tool as part of their teaching process, and students can engage with it as students, then anyone can teach or attend a lesson in any order without having to update their account records to say “I first attended a Maker Party, then I taught a session on remixing for the web, and now I’m learning about CSS and next I want to teach about Privacy”.

A solution like this doesn’t need 100% use by all teachers and learners to be useful (which helps the solution remain flexible if it doesn’t suit). It just needs enough people to use it that we have a meaningful sample of Webmaker teachers and learners flagged in the database.

With a decent sample we can see what teaching with Webmaker looks like at scale. And with this kind of data, continually improve the offering.

An idea: ‘Teacher Lesson Dashboards’

I think Teacher Lesson Dashboards would catch the metadata we need, and I’ll sketch the idea out here. Don’t get stuck on any naming I’ve made up right now; the general process for the teacher and the learner is the main thing to consider.

1. Starting with a teacher/mentor

User logs in to Webmaker.org

Clicks an option to “Create a new Lesson”

Gets an interface to ‘build-up’ a Lesson (a curation exercise)

Adds starter makes to the lesson (by searching for their own and/or others’ makes)

e.g. A ‘Lesson’ might include:

  • A teaching kit with discussion points, and a link to X-ray goggles demo
  • A thimble make for students to remix
  • A (deliberately) broken thimble make for students to try and debug
  • A popcorn make to remix and report back what they have learned

They give their lesson a name

Add optional text and an image for the lesson

Save their new Lesson, and get a friendly short URL

Then point students to this at the beginning of the teaching session

2. The learner(s) then…

Go to the URL the mentor provides

Optionally, check-in to the lesson (and create a Webmaker account at the same time if required)

Have all the makes and activities they need in one place to get started

One click to view or remix any make in the Lesson

Can reference any written text to support the lesson

3. Then, going back to the mentor

Each ‘Lesson’ also has a dashboard showing:

  • Who has checked-in to the lesson
    • with quick links to their most recent makes
    • links to their public profile pages
    • Perhaps integrating together.js functionality if you’re running a lesson remotely?
  • Metrics that help with teaching (this is a whole other conversation, but it depends first on being able to identify who is teaching who)
  • Feedback on future makes created after the lesson (i.e. look what your session led to further down the line)

4. And to note…

‘Lessons’, as a kind of curated make, can also be remixed and shared in some way.
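To make the metadata idea concrete, here’s a rough sketch of what a ‘Lesson’ record could look like. Every name here is invented; it’s just one way the teacher and learner flags could fall out of normal use of the tool, rather than anyone self-identifying:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical data model — all names invented for illustration.
@dataclass
class Lesson:
    title: str
    owner: str                       # the mentor's Webmaker account: the "teacher" flag
    makes: list[str] = field(default_factory=list)   # URLs of starter makes in the lesson
    check_ins: dict[str, datetime] = field(default_factory=dict)  # learner account -> check-in time

    def check_in(self, account: str) -> None:
        """Checking in flags this account as a learner for this lesson."""
        self.check_ins[account] = datetime.now()

lesson = Lesson("Remixing for the web", owner="mentor_account")
lesson.check_in("learner_1")
# From records like this we could later count how many mentors teach repeatedly,
# typical group sizes, and which learners return — the questions listed earlier.
```

The point is that the flags are a by-product of using something useful, not an extra form to fill in.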


I’m not on the front-lines using the tools right now, so this is a proposal very much from a person who wants flags in a database :)

  • Does this feel like it adds value to mentors and/or learners?
  • Do you think this is a good way to identify who’s teaching and who’s learning? (And who’s doing both, of course.)


User Testing vs. A/B Testing

So you have a page, or a system, or a form or an app or anything, and you know you want to make it ‘better’.

And the question is…

Should we use User Testing or A/B Testing?

TL;DR: Both can be valuable, and you can do both. So long as you have an underlying indicator of the ‘better’ that you can track over the long-term.

A/B Testing

A/B testing is good because:

  • It shows you ‘for real’, at scale, what actually prompts your users to do the things you most want them to do
  • It can further polish processes that have been through thorough user testing; to a degree that’s just not possible with a small group of people
  • It can self-optimize content in real-time to maximize the impact of short spikes of campaign activity

A/B testing is bad because:

  • You need a lot of traffic to get significant results, which can prevent testing of new or niche products
  • You can get distracted by polishing the details, and overlook major UX changes that would have significant impact
  • It can slow down the evolution of your project if you try to test *everything* before you ship

User Testing

User testing is good because:

  • You can quickly get feedback on new products even before they are live to the public; you don’t need any real traffic
  • It flags up big issues that the original designers might miss because of over familiarity with their offering (the ‘obvious with hindsight’ things)
  • Watching someone struggle to use the thing you’ve built inspires action (it does for me at least!)

User testing is bad because:

  • Without real traffic, you’re not measuring how this works at scale
  • What people say they will do is not the same as what they do (this can be mitigated, but it’s a risk)
  • If you don’t connect this to some metrics somewhere, you don’t know if you’re actually making things better or you’re stuck in an infinite loop of iterative design

An ideal testing process combines both

  1. For the thing you want to improve, say a webpage, decide on your measure of success, e.g. number of sign-ups
  2. Ensure this measure of success is tracked, reported on and analyzed over time and understand if there are underlying seasonal trends
  3. Start with User Testing to look for immediate problems/opportunities
  4. If you have enough traffic, A/B Test continually to polish your content and design features
  5. Keep checking the impact of your changes (A/B and User Testing) against your underlying measure of success
  6. Repeat User Testing
  7. A/B Testing can include ideas that come from User Testing sessions
  8. Expect to see diminishing returns on each round of testing and optimization
  9. Regularly ask yourself: are the potential gains of another round of optimization worth the effort? If they’re not, move on to something else you can optimize.

That’s simplified in parts, but is it useful for thinking about when to run user testing vs. A/B testing?

Specific thoughts on the @Coursera experience

First, I’d like to say a massive thank you. I really value the chance to study this excellent material at zero financial cost, and more importantly I love the opportunity you provide to people all around the world who don’t have the finances or the circumstances to otherwise consider such an education.

I also know what it’s like to maintain and develop a complex online system while supporting active users, so this feedback is by no means an accusation of negligence. You will have thought about much of this already I’m sure, and if it’s already on a project roadmap somewhere then please excuse me.

In short, this is not a letter from a grumpy customer; I just thought it may be useful to hear some specific feedback and ideas that could help with the online experience:

When viewing and submitting assignments

  • Include some visual indicator as to which ones you’ve already submitted. A tick would be plenty.
  • Likewise for showing which assignments you have completed your peer-reviews for. If you forget, you have to click into each item to check what you have and haven’t done, and even then it’s hard to keep track.
  • The general visual hierarchy on this page is confusing. Those blue buttons jump out way more than the text you really want people to read (i.e. the assignment titles)
  • Indicate the assignments where the submission deadlines have closed (I’m only in week 2, so maybe this happens after the week 1 evaluation is done, but currently it’s an effort to digest what my next steps are and how much I have to do before Sunday night)

Class Homepage

  • Bubbling up some top level stats on assignments due/completed to the homepage would be useful

Syllabus page

  • Ability to mark off each item you have watched/completed would be nice. Like the assignments, if you’re doing this in the evenings after work, and you’re already tired, every little helps. I found myself relying on visited link colour, and that’s not a very cross-platform solution :)

In summary, the simpler you can explain what’s expected of people (and by when), the more enjoyable the learning experience will become. Let them focus on the learning, rather than the admin (unless of course you’re secretly trying to teach personal admin skills).

That’s it for now, as I have homework to do!

I hope that’s useful in some way, and thanks again.