Trendlines and Stacking Logs

TL;DR

  • Our MoFo dashboards now have trendlines based on known activity to date
  • The recent uptick in activity is partly new contributors, and partly new recognition of existing contributors (all of which is good, but some of which is misleading for the trendline in the short term)
  • Below is a rambling analogy for thinking about our contributor goals and how we answer the question ‘are we on track for 2014?’
  • + if you haven’t seen it, OpenMatt has crisply summarized a tonne of the data and insights that we’ve unpicked during Maker Party

Stacking Logs

I was stacking logs over the weekend, and wondering if I had enough for winter, when it struck me that this might be a useful analogy for a post I was planning to write. So bear with me, I hope this works…

To be clear, this is an analogy about predicting and planning, not a metaphor for contributors* 😀

So the trendline looks good, but…

[Screenshot: MoFo dashboard with the new trendlines]

Trendlines can be misleading.

What if our task was gathering and splitting logs?

[Image: stacked firewood (‘Vedstapel’), photo by Johannes Jansson]

We’re halfway through the year, and the log store is half full. The important question is: ‘will it be full when the snow starts falling?’

Well, it depends.

It depends how quickly we add new logs to the store, and it depends how many get used.

So let’s push this analogy a bit.

[Image: firewood in the snow]

Before this year, we had scattered stacks of logs here and there, in teams and projects. Some we knew about, some we didn’t. Some we thought were big stacks of logs but were actually stacked on top of something else.

[Image: stacked firewood (‘Vedstapel’), photo by Johannes Jansson]

Setting a target was like building a log store and deciding to fill it. We built ours to hold 10,000 logs. There was a bit of guesswork in that.

It took a while to gather up our existing logs (build our databases and counting tools). But the good news is, we had more logs than we thought.

Now we need to start finding and splitting more logs*.

Switching from analogy to reality for a minute…

This week we added trendlines to our dashboard. These are two linear regression lines: one based on all activity for the year to date, and one based on the most recent four weeks. They give us a quick feedback mechanism on whether recent actions are helping us towards our targets, and whether we’re improving over the year to date.
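For the curious, here’s a minimal sketch of how two such trendlines could be computed, assuming weekly contributor totals in a plain list (the numbers, and the use of numpy, are illustrative rather than our actual dashboard code):

import numpy as np

# Hypothetical weekly 'total active contributors' counts for the year to date
weekly_totals = [3200, 3250, 3310, 3305, 3400, 3520, 3600, 3680]
weeks = np.arange(len(weekly_totals))

# Trendline 1: linear regression over all activity for the year to date
slope_all, intercept_all = np.polyfit(weeks, weekly_totals, 1)

# Trendline 2: linear regression over the most recent 4 weeks only
slope_recent, intercept_recent = np.polyfit(weeks[-4:], weekly_totals[-4:], 1)

print("Year-to-date trend: %+.1f contributors/week" % slope_all)
print("Recent 4-week trend: %+.1f contributors/week" % slope_recent)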

These are interesting, but can be misleading given our current working practices. The trendline implies some form of destiny. You do a load of work recruiting new contributors, see the trendline is on target, and relax. But relaxing isn’t an option because of the way we’re currently recruiting contributors.

Switching back to the analogy…

We’re mostly splitting logs by hand.

[Image: chopping block (‘Špalek na štípání’)]

Things happen because we go out and make them happen.

Hard work is the reason we have 1,800 Maker Party events on the map this year, and we’re only halfway through the campaign.

There’s a lot to be said for this way of making things happen, and I think there’s enough time left in the year to fill the log store this way.

But this is not mathematical or automated, which makes trendlines based on this activity a bit misleading.

In this mode of working, the answer to ‘Are we on track for 2014?‘ is: ‘the log store will be filled… if we fill it‘.

Scaling

[Image: log splitter (‘Holzspalter’)]

As we move forward and think about scale, say a hundred thousand logs (or even better, a Million Mozillians), we need to think about log splitting machines (or ‘systems’).

Systems can be tested, tuned, modified and multiplied. In a world of ‘systems’ we can apply trendlines to our graphs that are much better predictors of future growth.

We should be experimenting with systems now (and we are, a little bit). But we don’t yet know what a contributor growth system that works as well as the forestry industry’s log splitting machines would look like. These are things to be invented, tested and iterated on, but I wouldn’t bet on them as the solution for 2014, as this could take a while to solve.

I should also state explicitly that systems are not necessarily software (or hardware). Technology is a relatively small part of the systems of movement building. For an interesting but time-consuming distraction, this talk on Social Machines from last week’s Wikimania conference is worth a ponder.

Predicting 2014 today?

Even if you’re splitting logs by hand, you can schedule time to do it. Plan each month, check in on targets and spend more or less time as required to stay on track for the year.

This boils down to a planning exercise, with a little bit of guess work to get started.

In simple terms, you list all the things you plan to do this year that could recruit contributors, and how many contributors you think each will recruit. As you complete some of these activities, you reflect on your predictions, modify the plans, and update the estimates for the rest of the year.
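A toy version of that exercise, with entirely made-up activities and numbers:

# Planned activities and guessed recruitment numbers (all hypothetical)
planned = {
    "Maker Party events": 3000,
    "MozFest": 900,
    "Snippet campaign": 1500,
    "Ongoing community calls": 600,
}

target = 10000
counted_so_far = 5000  # contributors already counted this year

forecast = counted_so_far + sum(planned.values())
shortfall = target - forecast  # negative means ahead of target
print("Forecast: %d, target: %d, shortfall: %d" % (forecast, target, shortfall))
# As activities complete, swap the guesses for actuals and re-run the numbers.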

Geoffrey has put together a training workshop for this, along with a spreadsheet structure to make this simple for teams to implement. It’s not scary, and it helps you get a grip on the future.

From there, we can start to feed our planned activity and forecast recruitment numbers into our dashboard as a trendline rather than relying solely on past activity.

The manual nature of the splitting-wood-like activity means that what we plan to do is a much better predictor of the future than extrapolating what we have done in the past, and that changing the future is something you can go out and do.

*Contributors are not logs. Do not swing axes at them, and do not under any circumstances put them in your fireplace or wood burning stove.

2014 Contributor Goals: Half-time check-in

We’re a little over halfway through the year, and our dashboard is now good enough to tell us how we’re doing.

TL;DR:

  • The existing trend lines won’t get us to our 2014 goals
    • but knowing this is helpful
    • and getting there is possible
  • Ask less: How do we count our contributors?
  • Ask more: What are we doing to grow the contributor community? And, are we on track?

Changing the question

Our dashboard now needs to move from being a project to being a tool that helps us do better. After all, Mozilla’s unique strength is that we’re a community of contributors, and this dashboard, and the 2014 contributor goal, exist to help us focus our workflows, decisions and investments in ways that empower the community, not just for the fun of counting things.

The first half of the year focused us on the question “How do we count contributors?”. By and large, this has now been answered.

We need to switch our focus to:

  1. Are we on track?
  2. What are we doing to grow the contributor community?

Then we repeat these two questions regularly throughout the year, adjusting our strategy as we go.

Are we on track?

Wearing my cold-dispassionate-metrics hat, and not my “I know how hard you’re all working already” hat, I have to say no (or, not yet).

I’m going to look at this team by team and then look at the All Mozilla Foundation view at the end.

Your task, for each graph below, is to take an imaginary marker pen and draw the line for the rest of the year based on the data you can see to date. And only on the data you can see to date.

  • What does your trend line look like?
  • Is it going to cross the dotted target line in 2014?

OpenNews

[Screenshot: OpenNews contributor dashboard graph]

Based on the data to date, I’d draw a flat line here. Although there are new contributors joining pretty regularly, the overall trend is flat. In marketing terms, there is ‘churn’: not a nice term, but a useful one for talking about the data. To use other crass marketing terms, ‘retention’ is as important as ‘acquisition’ in changing the shape of this graph.

Science Lab

[Screenshot: Science Lab contributor dashboard graph]

Dispassionately here, I’d have to draw a trend line that’s pointing slightly down. One thing to note in this view is that the Science Lab team have good historic data, so what we’re seeing here is the result of the size of the community in early 2013, and some drop-off from those people.

Appmaker

[Screenshot: Appmaker contributor dashboard graph]

This graph is closest to what we want to see generally, i.e. pointing up. But I’ll caveat that with a couple of points. First, taking the imaginary marker pen, this isn’t going to cross the 2014 target line at the current rate. Second, unlike the Science Lab and OpenNews data above, much of this Appmaker counting is new. And when you count things for the first time, a 12-month rolling active total has a cumulative effect in the first year, which increases the appearance of growth but might not be a long-term trend. This is because Appmaker community churn won’t be visible until next year, when people can first drop out of the twelve-month active timeframe.
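Here’s a small sketch of that effect with invented numbers: 100 distinct people counted as active each month, and no real growth at all. The rolling 12-month total still climbs all through year one, then flattens once people can start dropping out of the window:

# 100 distinct active contributors counted each month, no actual growth
monthly_active = [100] * 24  # two years of identical months

rolling_12_month = []
for month in range(len(monthly_active)):
    window = monthly_active[max(0, month - 11):month + 1]
    rolling_12_month.append(sum(window))

print(rolling_12_month[:12])  # year one: 100, 200, ..., 1200 -- looks like growth
print(rolling_12_month[12:])  # year two: 1200, 1200, ... -- flat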

Webmaker

[Screenshot: Webmaker contributor dashboard graph]

This graph is the hardest to extend with our imaginary marker pen, especially with the positive incline we can see as Maker Party kicks off. The Webmaker plan expects much of the contributor community growth to come from the Maker Party campaign, so a steady incline was not the expectation across the year. But, we can still play with the imaginary marker pen.

I’d do the following exercise: In the first six months, active contributors grew by ~800 (~130 per month), so assuming that’s a general trend (big assumption) and you work back from 10k in December, you would need to be at ~9,500 by the end of September. Mark a point at 9,500 contributors above the October tick and look at the angle of growth required throughout Maker Party to get there. That’s not impossible, but it’s a big challenge and I don’t have any historic data to make an informed call here.
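The same back-of-envelope calculation in code, with the ~130/month baseline as the big assumption:

target_december = 10000
baseline_growth_per_month = 130  # assumed non-campaign growth rate for Oct-Dec

# Whatever we don't gain in Oct/Nov/Dec must already exist by the end of September
needed_by_end_september = target_december - 3 * baseline_growth_per_month
print(needed_by_end_september)  # 9610 -- call it roughly 9,500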

Note: the Appmaker/Webmaker separation here is a legacy thing from the beginning of the year when we started this project. The de-duped datastore we’re working on next will allow us to graph Webmaker Total > Webmaker Tools > Appmaker as separate graphs with separate goals, which get de-duped and roll up into the total numbers above, and in turn roll up into the Mozilla-wide total at areweamillionyet.org. This will better reflect the actual overlaps.

Metrics

[ 0 contributors ]

The MoFo metrics team currently has zero active volunteer contributors, and based on the data available to date is trending absolutely flat. Action is required here, or this isn’t going to change. I also need to set a target. Growing 0 by 10X doesn’t really work. So I’ll aim for 10 volunteer contributors in 2014.

All Mozilla Foundation

[Screenshot: All Mozilla Foundation contributor dashboard graph]

Here we’re adding up the other graphs and also adding in ~900 people who contributed to MozFest in October 2013. That MozFest number isn’t counted towards a particular team and simply lifts the total for the year. There is no trend for the MozFest data because all the activity happened at once, but if there weren’t a MozFest this year (don’t worry, there is!), the total line would drop by 900 in a single week come October. Beyond that, the shape of this line is the cumulative result of the team graphs above.

In Q3, we’ll be able to de-dupe this combined number, as there are certainly contributors working across MoFo teams. In a good way, our total will be less than the sum of our parts.

Where do we go from here?

First, don’t panic. Influencing these trend lines is not like trying to shift a nation’s voting trends in the next election. Much of this is directly under our control, or if not ‘control’, then it’s something we can strongly influence. So long as we work on it.

Next, it’s important to note that this is the first time we’ve been able to see these trends, and the first time we can measure the impact of decisions we make around community building. Growing a community beyond a certain scale is not a passive thing. I’ve found David Boswell’s use of the term ‘intentional’ community building really helpful here. And much more tasteful than my marketing vocabulary!

These graphs show where we’re heading based on what we’re currently doing, and until now we didn’t know if we were doing well, or even improving at all. We didn’t have any feedback mechanism on decisions we’d make relating to community growth. Now we do.

Trend setting

Here are some initial steps that can help with the ‘measuring’ part of this community building task.

Going back to the marker pen exercise, take another imaginary color and rather than extrapolate the current trend, draw a positive line that gets you to your target by the end of the year. This doesn’t have to be a straight line; allow your planned activity to shape the growth you want to see. Then ask:

  • Where do you need to be in Aug, Sep, Oct, Nov, Dec?
  • How are you going to reach each of these smaller steps?

Schedule a regular check-in that focuses on growing your contributor community and check your dashboard:

  • Are your current actions getting you to your goals?
  • What are the next actions you’re going to take?

The first rule of fundraising is ‘Ask for money’. People often overlook this. By the same measure, are you asking for contributions?

  • How many people are you asking this week or month to get involved?
  • What percentage of them do you expect to say yes and do something?

Multiply those numbers together and see if that prediction can get you to your next step towards your goal.
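For example, with made-up numbers:

asks_this_month = 400     # people we will directly ask to get involved
expected_yes_rate = 0.05  # guess: 5% say yes and actually do something

expected_new_contributors = asks_this_month * expected_yes_rate
print(expected_new_contributors)  # 20 -- is that enough for this month's step?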

Asking these questions alone won’t get us to our goals, but it helps us to know if our current approach has the capacity to get there. If it doesn’t, we need to adjust the approach.

Those are just the numbers

I could probably wrap up this check-in from a metrics point of view here, but this is not a numbers game. The Total Active Contributor number is a tool to help us understand scale beyond the face-to-face relationships we can store in our personal memories.

We’re lucky at Mozilla that so many people already care about the mission and want to get involved, but sitting and waiting for contributors to show up is not going to get us to our goals in 2014. Community building is an intentional act.

Here’s to setting new trends.

The Power of Webmaker Landing Pages

[Image: Welcome]

We just started using our first webmaker.org landing page, and I thought I’d write about why this is so important and how it’s working out so far.

Who’s getting involved?

Every day people visit the webmaker.org website. They come from many places, for many reasons. Sometimes they know about Webmaker, but most of the time it’s new to them. Some of those people take an action; they sign up to find out more, to make something with our tools, or even to throw a Maker Party. But most of the people who visit webmaker.org don’t.

The percentage of people who do take action is our conversion rate. Our conversion rate is an important number that can help us to be more effective. And being more effective is key to winning.

If you’re new to thinking about our conversion rate, it can seem complex at first, but it is something we can influence. And I choose the word influence deliberately, as a conversion rate is not typically something you can control.

The good thing about a conversion rate is that you can monitor what happens to it when you change your website, or your marketing, or your product. In all product design, marketing and copywriting, we’re communicating with busy human beings. And human beings are brilliant and irrational (despite our best objections). The things that inspire us to take action are often hard to believe.

For the Webmaker story to cut through and resonate with someone as they’re skimming links on their phone while eating breakfast and trying to convince a toddler to eat breakfast too is really difficult.

How we present Webmaker, the words we use to ask people to get involved, and how easy we make it for them to sign up all combine to determine what percentage of people who visit webmaker.org today will sign up and get involved.

  1. Conversion rate is a number that matters.
  2. It’s a number we can accurately track.
  3. And it’s a number we can improve.
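At its simplest, the number we’re tracking is just one division (a sketch with hypothetical figures; real tracking involves analytics tooling and de-duplication):

visits = 12000  # people who landed on webmaker.org this week (hypothetical)
signups = 360   # people who took the action we asked for

conversion_rate = signups / float(visits)
print("%.1f%%" % (conversion_rate * 100))  # 3.0%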

It gets more complex though

The people who visit webmaker.org today are not all equally likely to take an action.

How people hear about Webmaker, and their existing level of knowledge affects their ‘predisposition to convert’.

  • If my friend is hosting a Maker Party and I’ve volunteered to help and I’m checking out the site before the event, odds are I’ll sign up.
  • If I clicked a link shared on Twitter that sounded funny but didn’t really explain what Webmaker was, I’m much less likely to sign up.

Often, the traffic sources that drive the biggest number of visitors bring the people with the least existing knowledge about Webmaker, who are less likely to convert. This is true of most ‘markets’, where an increase in traffic often results in a decrease in overall conversion rate.

Enter, The Snippet

Mozilla will be promoting Maker Party on The Snippet, and the snippet reaches such a vast audience that we just ran an early test to make sure everything is working OK and to establish some baseline metrics. The expectation for the snippet is high visits, a low conversion rate, and overall a load of new people who hear about Webmaker.

By all accounts, a large volume of traffic from a hugely diverse audience whose only knowledge of Webmaker is a line of text and a small flashing icon should result in a very low conversion rate. And when you add this traffic into the overall webmaker.org mix, our overall average conversion rate should plummet (though this would be an artifact of the stats rather than any decline in effectiveness elsewhere).
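A quick sketch of why the blended average should drop, using invented figures:

# Hypothetical mix: existing traffic converts at 3%, snippet traffic at 0.5%
existing_visits, existing_rate = 10000, 0.03
snippet_visits, snippet_rate = 100000, 0.005

conversions = existing_visits * existing_rate + snippet_visits * snippet_rate
blended_rate = conversions / float(existing_visits + snippet_visits)
print("%.2f%%" % (blended_rate * 100))  # 0.73% -- way down, though nothing got worse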

However, after a few days of testing the snippet, our conversion rate overall is up. This is quite frankly astounding, and a great endorsement for the work that is going into our new landing pages. This is really something to celebrate.

So, how did this happen?

Well, mostly by design. Though the actual results are even better than I was personally expecting and hoping for.

You could say we’re cheating, because we chose a new type of conversion for this audience. Rather than ‘creating a webmaker.org account‘, we only asked them to ‘join a mailing list‘. It’s a lower-bar call to action, but what really matters is that it’s an appropriate call to action for the level of existing knowledge we expect this audience to have. Appropriate is a really important part of the design.

Traffic from the snippet goes to a really simple page introducing webmaker.org, with an immediate call to action to join the mailing list, then it ‘routes’ you into the Maker Party website to explore more. That way, even if you’re busy now, you can tell us you’re interested and we can keep in touch and remind you in a week’s time (when you have a quiet moment, perhaps) about “that awesome Mozilla educational programme you saw last week but didn’t have time to look at properly”.

It’s built on the idea that many of the people who currently visit webmaker.org but don’t take action are genuinely interested; they just didn’t make it in the door. We just have to give them an easy enough way to let us know they’re interested. And then we have to hold their hands and welcome them into this crazy world of remixed education.

A good landing page is like a good concierge.

The results so far:

Even with this lower-bar call to action, I was expecting lower conversion rates for visitors who come from the snippet. Our usual ‘new account’ conversion rate for webmaker.org is in the region of 3%, depending on traffic source. The snippet landing page is currently converting around 8%, for significant volumes of traffic. And this is before any testing of alternative page designs, content tweaking, and other optimization that can usually uncover even higher conversion rates.

Our very first audience specific landing page is already having a significant impact.

So, here’s to many more webmaker.org landing pages that welcome users into our world, and in turn make that world even bigger.

On keeping the balance

Measuring conversion rate is a game of statistics, but improving conversion rate is about serving individual human beings well by meeting them where they are. Our challenge is to jump back and forth between these two ways of thinking without turning people into numbers, or forgetting that numbers help us work with people at scale.

Learning more about conversion rates?

Getting Bicho Running as a process on Heroku with a Scheduler

[Image: “Ou la lecture du grimoire” (“Or, reading the grimoire”), by Félicien Victor Joseph Rops (Belgium, Namur, 1833-1898), public domain, via Wikimedia Commons]
For our almost-complete MoFo Interim Dashboard, I’m planning to use an issue tracker parsing tool called Bicho to work out how many people are involved in the Webmaker project in Bugzilla. Bicho is part of a suite of tools called Metrics Grimoire, which I’ll explore in more detail in the near future. When combined with vizGrimoire, you can generate interesting things like this, which are very closely related to (though not exactly solving the same challenge as) our own contribution tracking efforts.

I recently installed a local copy of Bicho, and ran this against some products on Bugzilla to test it out. It generates a nicely structured relational database including the things I want to count and feed into our contributor numbers.

This morning I got this running on Heroku, which means it can run periodically and update a hosted DB, which can then feed numbers into our dashboard.

This was a bit of trial and error for me, as all the work I’ve done with Python was within Google App Engine’s setup, and my use of Heroku has been for Node apps, so these notes are to help me out some time in the future when I look back at this.

Getting this working on Heroku

$ pip freeze

generates a list of the requirements from your working local env, e.g.

BeautifulSoup==3.2.1
Bicho==0.9
MySQL-python==1.2.5
feedparser==5.1.3
python-dateutil==2.2
six==1.6.1
storm==0.20
wsgiref==0.1.2

Copy this into a requirements.txt file in the root of your project, but remove the line Bicho==0.9 (otherwise Heroku tries to install Bicho itself via pip, which fails).

Heroku’s notes on specifying dependencies.

You can now push this to Heroku (with a standard git push heroku master).

Then, I ran:

$ heroku run python setup.py

But I’m actually not sure if that was required.

Then you can run Bicho remotely via ‘heroku run’ commands:

$ heroku run python bin/bicho --db-user-out=yourdbusername --db-password-out=yourdbuserpassword --db-database-out=yourdbdatabase --db-hostname-out=yourdbhostname -d 5 -b bg --backend-user 'abugzilla@exampleuser.com' --backend-password 'bugzillapasswordexample' -u 'https://bugzillaurl.com?etc'

As a general precaution for anything like this, don’t use a user account that has any special privileges. I create duplicate logins that have the same level of access available to any member of the public.

Once you’ve got a command that works here, cancel the running script as it might have thousands of issues left to process.

Then set up a scheduler: https://devcenter.heroku.com/articles/scheduler

$ heroku addons:add scheduler:standard
$ heroku addons:open scheduler

Copy your working command into the scheduler, just without the ‘heroku run’ part:

python bin/bicho --db-user-out=yourdbusername --db-password-out=yourdbuserpassword --db-database-out=yourdbdatabase --db-hostname-out=yourdbhostname -d 5 -b bg --backend-user 'abugzilla@exampleuser.com' --backend-password 'bugzillapasswordexample' -u 'https://bugzillaurl.com?etc'

If you set this to run every 10 mins, the process will cycle and get killed periodically, but the logs usefully show you how the import is progressing.

I’m generally happy with this as a solution for counting contributors in Webmaker’s issue tracking history, but would need to work on some speed issues if this was of interest across Mozilla projects.

Currently, this is importing about 400 issues an hour, which would be problematic for processing the 1,000,000+ bugs in bugzilla.mozilla.org (at that rate, a full import would take roughly 2,500 hours, or over three months of continuous running). But that’s not a problem to solve right now, and not necessarily the way you’d want to do it either.

Who’s teaching this thing anyway?

This is an idea for Webmaker teacher dashboards, and some thoughts on metadata related to learning analytics

This post stems from a few conversations around metrics for Webmaker and learning analytics and it proposes some potential product features which need to be challenged and considered. I’m sharing the idea here as it’s easy to pass this around, but this is very much just an idea right now.

For context, I’m approaching this from a metrics perspective, but I’m trying to solve the data gathering challenge by adding value for our users rather than asking them to do any extra work.

These are the kind of questions I want us to be able to answer

and that can inform future decision making in a positive way…

  • How many people using Webmaker tools are mentors, students, or others?
  • Do mentors teach many times?
  • How many learners go on to become mentors?
  • What size groups do mentors typically work with?
  • How many mentors teach once, and then never again? (their feedback would be very useful)
  • How many learners come back to Webmaker tools several days after a lesson?
  • Which partnership programme reached the greatest number of learners?

And the particularly tricky area…

  • What data points show developing competencies in Web Literacy?

Flexible and organic data points to suit the Webmaker ecosystem

The Webmaker suite of tools is very open and flexible, and as a result gets used by people for many different things. Personally, I like that a lot. However, this also makes understanding our users more difficult.

When looking at the data, how can we tell if a new Thimble Make has come from a teacher, a student, or even an experienced web developer who works at Mozilla and is using the tool to publish their birthday wishes to the web? The waters here are muddy.

We need a few additional indicators in the data to analyze it in a meaningful way, but these indicators have to work with the informal teaching models and practices that exist in the Webmaker ecosystem.

On the grounds that everyone has both something to teach and to learn, and that we want trainers to train trainers and so on, I propose that asking people to self-identify as mentors via a survey/check-box/preferences/etc will not yield accurate flags in the data.

The journey to identifying yourself as a mentor is personal and complex, and though that process is immensely interesting, there are simpler things we can measure.

The simplest measure is that someone who teaches something is a teacher. That sounds obvious, but it’s very slightly different from someone who thinks of themselves as a teacher.

If we build a really useful tool for teaching (I’m suggesting one idea below) and its use identifies Webmaker accounts as teacher(s) and/or learner(s), then we’d have useful metadata to answer almost all of the questions asked above.
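To make ‘useful metadata’ concrete, the flags could be as simple as records like these (a purely hypothetical shape, not a real Webmaker schema):

# Hypothetical event records generated by lesson creation and check-ins
lesson_created = {"account": "alice", "role": "teacher",
                  "lesson": "css-basics", "date": "2014-03-01"}
lesson_checkin = {"account": "bob", "role": "learner",
                  "lesson": "css-basics", "date": "2014-03-01"}

# With enough of these, questions like 'how many learners become mentors?'
# reduce to queries: accounts appearing first as 'learner', later as 'teacher'.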

When we know who the learners are we can better understand what learning looks like in terms of data (a crucial step in conversations about learning analytics).

If anyone can use this proposed tool as part of their teaching process, and students can engage with it as students, then anyone can teach or attend a lesson in any order, without having to update their account records to say “I first attended a Maker Party, then I taught a session on remixing for the web, and now I’m learning about CSS and next I want to teach about Privacy”.

A solution like this doesn’t need 100% use by all teachers and learners to be useful (which helps the solution remain flexible if it doesn’t suit). It just needs enough people to use it that we have a meaningful sample of Webmaker teachers and learners flagged in the database.

With a decent sample we can see what teaching with Webmaker looks like at scale. And with this kind of data, continually improve the offering.

An idea: ‘Teacher Lesson Dashboards’

I think Teacher Lesson Dashboards would catch the metadata we need, and I’ll sketch this out here. Don’t get stuck on any naming I’ve made up right now; the general process for the teacher and the learner is the main thing to consider.

1. Starting with a teacher/mentor

User logs in to Webmaker.org

Clicks an option to “Create a new Lesson”

Gets an interface to ‘build up’ a Lesson (a curation exercise)

Adds starter makes to the lesson (by searching for their own and/or others’ makes)

e.g. A ‘Lesson’ might include:

  • A teaching kit with discussion points, and a link to X-ray goggles demo
  • A thimble make for students to remix
  • A (deliberately) broken thimble make for students to try and debug
  • A popcorn make to remix and report back what they have learned

They give their lesson a name

Add optional text and an image for the lesson

Save their new Lesson, and get a friendly short URL

Then point students to this at the beginning of the teaching session

2. The learner(s) then…

Go to the URL the mentor provides

Optionally, check in to the lesson (and create a Webmaker account at the same time if required)

Have all the makes and activities they need in one place to get started

One click to view or remix any make in the Lesson

Can reference any written text to support the lesson

3. Then, going back to the mentor

Each ‘Lesson’ also has a dashboard showing:

  • Who has checked in to the lesson
    • with quick links to their most recent makes
    • links to their public profile pages
    • Perhaps integrating together.js functionality if you’re running a lesson remotely?
  • Metrics that help with teaching (this is a whole other conversation, but it depends first on being able to identify who is teaching who)
  • Feedback on future makes created after the lesson (i.e. look what your session led to further down the line)

4. And to note…

‘Lessons’, as a kind of curated make, can also be remixed and shared in some way.

Useful?

I’m not on the front-lines using the tools right now, so this is a proposal very much from a person who wants flags in a database 🙂

  • Does this feel like it adds value to mentors and/or learners?
  • Do you think this is a good way to identify who’s teaching and who’s learning? (and who’s doing both, of course)


Weeknotes 5 – Webmaker Workweek

[Image: view from Mozilla Toronto]

As I’m halfway into the following week, I’m writing these notes quickly rather than losing them completely. I apologize in advance 🙂

Week 5 was spent in Toronto with the Webmaker team, and it will be hard for a quick write-up to do this week justice. I got to hack and hang out with about half of the total Mozilla Foundation staff, which is hugely valuable four weeks into a job where you mostly work remotely. IRC handles turned into real people, and the people turned out to be very special. So first, thanks to this amazing team for welcoming me so kindly. I think we crammed a year’s worth of social activity into a week’s worth of evenings, and across the whole week I almost got a whole night’s worth of sleep.

[Image: I signed more than one waiver in the name of fun this week]

There’s a test that goes something along the lines of “people you wouldn’t mind getting stuck at an airport with”, and everyone I met last week would pass that test. Genuinely.

I thought this week might have been a lot of talking and leaving with too many ideas to implement, but from the start it was structured to create measurable output.

Sunday in the office, the Bugzilla tickets were transformed into a physical scrum board:

[Image: Webmaker scrum board]

And during the week, these discrete tasks moved from To Make > Making > Made

In the metrics track, I was lucky to work closely with Scott Downe, who taught me a tonne of useful things, and we shipped some stuff too, including a brand new process to make continual testing and optimisation of the Webmaker tools practical.

You can see the new testing process here, and our tests and the results will be open for you to follow along as we learn more about our tools and how people use them.

Onwards…

[Image: I like this Canadian alternative to the UK’s “One Way”]

And I would be negligent not to include the gif of the week: