A ‘free’ online learning experience

I’ve blogged about various experiences of online learning I’ve taken part in over the years, and wanted to reflect on the most recent one: Coursera’s three-week Introduction to Ableton Live.

Learning more about learning is one of my personal goals this year. And I find writing out loud to be a useful tool in thinking. So that’s mostly the point of this.

I take these courses mostly because I like learning new things, but also because I’m interested in online learning more generally. How do you most effectively transfer knowledge, skills and motivation via the web, and/or about the web? That question is often on my mind.

Almost all of the projects I work on at Mozilla are somewhere in the education space; directly with Webmaker or Mozilla Learning Networks and tangentially in the topic of volunteer contribution. Contributing to an open source project as complex and distributed as Mozilla is a learning experience in itself, and sometimes requires specific training to even make it possible.

To further frame this particular brain dump: I’m also interested generally in the economics of the web and how this shapes user experiences, and I have strong feelings about the impact of advertising’s underlying messaging and what it does over time when it dominates a person’s daily content intake. I’m generally wary of the word “Free”. This all gets complex when you work on the web, and even directly on advertising at times. Most of my paycheques have had some pretty direct link to the advertising world, except maybe when I was serving school dinners to very rich children – but that wasn’t my favourite job, despite its lack of direct societal quandaries.

Now, to the content…

If you’re like me, you will tend to read notes about a topic like ‘commerce in education’ and react negatively to some of these observations because there are many cases where those two things should be kept as far apart as possible. But I’m actually not trying to say anything negative here. These are just observations.

Observations

All roads lead to… $

$ Coursera

My online experience within the Coursera site was regularly interrupted with a modal (think popup) screen asking if I wanted to pay to enrol in the ‘Signature Track’, and get a more official certification. This is Coursera’s business model and understandably their interest. It wasn’t at all relevant to me in my life situation, as I was taking a course about how to play with fun music software in my free time. I don’t often check my own qualifications before I let myself hobby. Not that anyone checked my qualifications before they let me work either, but I digress. Coursera’s tagline says ‘free’, but they want you to pay.

$ Blend.io

All assignments for the course had to be published to Blend for peer evaluation. Blend is like GitHub, but for raw audio production tracks rather than source code. I didn’t know about Blend before the course, and I really like it as a concept, how it’s executed, and what it could do for collaborative music making. But I note: it is a business. This course funnels tens of thousands of new users into that business over the course of a few days. There might not be any direct financial trade here (between the companies, for example), but users are capital in start-up land. And I now receive emails from Blend with advertisements for commercial audio production tools. My eyeballs, like yours, have a value.

$ Berklee College of Music

While hosted on Coursera, the content of this course is by Berklee College of Music. The content they ‘give away’ would traditionally only have been available to paying students. Berklee’s business is selling seats in classes. This course isn’t given away as an act of kindness; it’s marketing. Three weeks is short, and therefore the content is ‘light’. Lighter than I was expecting (not that I’m entitled). But halfway through, we received a promotional email about Berklee’s own online education platform, where you could create an account to get access to further ‘free’ videos to supplement the Coursera materials. I found these supplementary videos more useful, and they led to offers to sign up for extended paid courses with Berklee Online. For Berklee, this whole exercise is a marketing funnel. Quite possibly it’s the most fun and least offensive marketing funnel you can be dropped into, but it exists to do that job.

$ Erin Barra – Course professor and artist

Now, I write this with genuine sympathy, as I’ve walked the floor at countless venues trying to sell enough music and merch to cover the petrol costs of playing a gig. But this is a commercial element of this learning experience, so I will note it. At many points throughout the three weeks, we had opportunities to buy Erin’s music, t-shirts, and audio production stems (these are like the layer files of an original recording) for consumption and/or remixing. I know you have to hustle if you’re making music for a living, but the observation here is that the students of this course are also a marketable audience. Perhaps only because they arrive en masse and end up slightly faceless. I’m sure it would be weird for most teachers to sell t-shirts in a classroom. It wasn’t particularly weird online, where we’re desensitised to being constantly sold things. And I may have only noticed this because I’m interested in how all these things fit together.

$ Ableton

The course was about learning Ableton Live, a commercial audio production tool, so at some point the cost of Ableton had to be considered. Ableton offers a free 30-day trial, which works for this course, and they kindly (or sensibly) agreed to let people taking the course start a new trial even if they’d used their 30 days already. Good manners like those are good for business. Anyway, I already owned Live 9 Intro (aka the cheap version), and for a three-week intro course it does more than enough to learn the basics (I guess that’s why it’s called Intro?). But the course taught and encouraged the use of Live 9 Suite (the EUR599 rather than the EUR79 version). Until some people complained, the use of features in Suite was required to complete the final assignment. Reading between the lines, I doubt there was any deliberate commercial discussion around this planning, but the planning definitely didn’t stem from the question: ‘how can we keep the cost down for these beginners?’. At the end of the course there were discount codes for 15% off anything from Ableton. I didn’t use Suite during the course, but I’m playing with it now on my own time and terms, and may end up spending money on it soon.

Reflections

It’s wonderful, but it’s not Wikipedia. The course opened a lot of doors, but mostly into places where I could spend money, which I am cautious about as a model for learning. It was valuable to me and prompted me to learn more about Ableton Live than I would have done in those three weeks without it. So I’m grateful for it. But I can’t in my heart think of this as a ‘shared public resource’.

For my own learning, I like deadlines. Preferably arbitrary ones. The fact that these Coursera courses are only available at certain times during the year really works for me. But I struggle with the logic of this when I think about how best to provide learning material online to as many people as possible. The only MOOC-style courses I have finished have been time-bound. I don’t know how many people this is true for, though.

People will learn X to earn Y. For me this course was a form of hobby or entertainment, but much learning has a direct commercial interest for students as well as educators, whether that’s professional skills development or building some perceived CV value.

There is no ‘free’ education, even if it says “free” on the homepage. There is always a cost, financial or otherwise. Sometimes the cost is borne by the educator, and sometimes by the student. Both models have a place, but I get uncomfortable when one tries to look like the other. And if the world could only have one of these models for all of education, I know which one I’d choose. Marketing fills enough of our daily content and claims enough brainprint as it is.

Conclusion

I thought I might find some conclusions in writing this, but that doesn’t always happen. There are a lot of interesting threads here.

So instead of a conclusion, you can have the song I submitted for my course assignment. It was fun to make. And I have this free-but-not-free course to thank for getting it done.

The week ahead 23 Feb 2015

First, I’ll note that writing these short ‘note to self’ blog posts each week takes time and is harder to do than I expected. Like so many priorities, the long-term important things often battle with the short-term urgent things. And that’s in a culture where working open is more than just acceptable, it’s encouraged.

Anyway, I have some time this morning sitting in an airport to write this, and I have some time on a plane to catch up on some other reading and writing that hasn’t made it to the top of the todo list for a few weeks. I may even get to post a blog post or two in the near future.

This week, I have face-to-face time with lots of colleagues in Toronto. Which means a combination of planning, meetings, running some training sessions, and working on tasks where timezone parity is helpful. It’s also the design team work week, and though I’m too far gone from design work to contribute anything pretty, I’m looking forward to seeing their work and getting glimpses of the future Webmaker product. Most importantly maybe, for a week like this, I expect unexpected opportunities to arise.

One of my objectives this week is working with Ops to decide where my time is best spent this year to have the most impact, and to set my goals for the year. That should get us closer to a metrics strategy for this year that improves on last year’s ‘reactive’ style of work.

If you’re following along for the exciting stories of my shed>to>office upgrades: I don’t have much to show today, but I’m building a new desk next, and insulation is my new favourite thing. This photo shows the visible difference in heat loss after fitting my first square of insulation material to the roof.

Weeknotes: 30 Jan 2015

This works, but needs some work

I don’t want these weeknotes to be a complete ‘done list’, as we have enough of those internally. This is just a quick reflection on the extra objectives set for the week.

  • I moved outside into the MVP garden office (it’s still looking mostly like a shed).
    • This was thanks to the magical powers of powerline adapters, which I only recently heard about. I still do not understand the sorcery of transferring a high-speed network through the existing electrical circuit, but it’s working without me needing to run any cabling.
  • I spent enough time chipping away at my processes and on ‘working open’ that I’m feeling good about it, and still enough time getting things done.
  • I cleaned out my ‘mofo-metrics’ Bugzilla backlog from 2014, and killed lots of tickets that weren’t relevant any more.
  • Set up my first Data Practices Review ticket as part of the new ‘Data Steward’ work I have acquired this year.

I feel like I’m getting into the flow of 2015 a bit now.

The week ahead: 26 Jan 2015


I should have started the week by writing this, but I’ll do it quickly now anyway.

My current todo list.
List status: Pretty good. Mostly organized near the top. Less so further down. Fine for now.

Objectives to call out for this week.

  • Bugzilla and Github clean-out / triage
  • Move my home office out to the shed (depending on a few things)

+ some things that carry over from last week

  • Write a daily working process
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Don’t let the immediate todo list get in the way of planning long term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely

Weeknotes: 23 Jan 2015

unrelated photo

I managed to complete roughly five of my eleven goals for the week.

  • Made progress on (but have not cracked) daily task management for the newly evolving systems
  • Caught up on some email from time off, but still a chunk left to work through
  • Spent more time writing code than expected
  • Illness this week slowed me down
  • These aren’t very good weeknotes, but perhaps better than none.


One month of Webmaker Growth Hacking

This post is an attempt to capture some of the things we’ve learned from a few busy and exciting weeks working on the Webmaker new user funnel.

I will forget some things, there will be other stories to tell, and this will be biased towards my recurring message of “yay metrics”.

How did this happen?


As Dave pointed out in a recent email to the Webmaker Dev list, “That’s a comma, not a decimal.”

What happened to increase new user sign-ups by 1,024% compared to the previous month?

Is there one weird trick to…?

No.

Sorry, I know you’d like an easy answer…

This growth is the result of a month of focused work and many many incremental improvements to the first-run experience for visitors arriving on webmaker.org from the promotion we’ve seen on the Firefox snippet. I’ll try to recount some of it here.

While the answer here isn’t easy, the good news is it’s repeatable.

Props

While I get the fun job of talking about data and optimization (at least it’s fun when it’s good news), the work behind these numbers was a cross-team effort.

Aki, Andrea, Hannah and I formed the working group. Brett and Geoffrey oversaw the group, sanity checked our decisions and enabled us to act quickly. And others got roped in along the way.

I think this model worked really well.

Where are these new Webmaker users coming from?

We can attribute ~60k of those new users directly to:

  • Traffic coming from the snippet
  • Who converted into users via our new Webmaker Landing pages

Data-driven iterations

I’ve tried to go back over our meeting notes for the month and capture the variations on the funnel as we’ve iterated through them. This was tricky as things changed so fast.

The image below gives you an idea, but it also hides many more detailed experiments within each of these pages.

Testing Iterations

With 8 snippets tested so far, 5 funnel variations, and at least 5 content variables within each funnel, we’ve iterated through over 200 variations of this new user flow in a month (8 × 5 × 5 = 200).

We’ve been able to do this and get results quickly because of the volume of traffic coming from the snippet, which is fantastic. And in some cases this volume of traffic meant we were learning new things quicker than we were able to ship our next iteration.

What’s the impact?

If we’d run with our first snippet design and our first call to action, we would have had about 1,000 new Webmaker users from the snippet, instead of 60,000 (the remainder are from other channels and activities). Total new user accounts are up by ~1,000%, but new users from the snippet specifically increased by around six times that.

One not-very-weird trick to growth hacking:

I said there wasn’t one weird trick, but I think the success of this work boils down to one piece of advice:

  • Prioritize time and permission for testing, with a clear shared objective, and get just enough people together who can make the work happen.

It’s not weird, and it sounds obvious, but it’s a story that often gets overlooked because it doesn’t have the simple causation-based hook we humans look for in our answers.

It’s much more appealing when someone tells you something like “Orange buttons increase conversion rate”. We love the stories of simple tweaks that have remarkable impact, but really it’s always about process.

More Growth hacking tips:

  • Learn to kill your darlings, and stay happy while doing it
    • We worked overtime to ship things that got replaced within a week
    • It can be hard to see that happen to your work when you’re invested in the product
      • My personal approach is to invest my emotion in the impact of the thing being made rather than the thing itself
      • But I had to lose a lot of A/B tests to realize that
  • Your current page is your control
    • Test ideas you think will beat it
    • If you beat it, that new page is your new control
    • Rinse and repeat (there’s a sketch of this loop after the list)
    • Optimize with small changes (content polishing)
    • Challenge with big changes (disruptive ideas)
  • Focus on areas with the most scope for impact
    • Use data to choose where to use data to make choices
    • Don’t stretch yourself too thin
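To make the ‘control vs. challenger’ loop concrete, here’s a minimal sketch in Python of how you might check whether a challenger page beats the control. The visitor and sign-up numbers are invented for illustration, and this isn’t the tooling we actually used:

```python
# A minimal sketch of the control vs. challenger comparison described
# above. All numbers are invented; not our actual tooling.
from math import sqrt

def z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: how confidently does the challenger (B)
    beat the control (A)?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Control page:  300 sign-ups from 10,000 visitors (3.0%)
# Challenger:    380 sign-ups from 10,000 visitors (3.8%)
z = z_score(300, 10_000, 380, 10_000)
print(f"z = {z:.2f}")  # ~3.1; |z| > 1.96 is significant at the 95% level

# If the challenger wins, it becomes the new control. Rinse and repeat.
```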

What happens next?

  • We have some further snippet coverage for the next couple of weeks, but not at the same level we’ve had recently, so we’ll see this growth rate drop off
  • We can start testing the funnel we’ve built for other sources of traffic to see how it performs
  • We have infrastructure for spinning up and testing landing pages for many future asks
  • This work is never done, but with any optimization you see declining returns on investment
    • We need to keep reassessing the most effective place to spend our time
    • We have a solid account sign-up flow now, but there’s a whole user journey to think about after that
    • We need to gather up and share the results of the tests we ran within this process

Testing doesn’t have to be scary, but sometimes you want it to be.

When ‘less than the sum of our parts’ is a good thing

Here’s a happy update about our combined Mozilla Foundation (MoFo) and Mozilla Corporation (MoCo) contributor dashboards.

TL;DR: There’s a demo All Mozilla Contributor Dashboard you can see at areweamillionyet.org

It’s a demo, but it’s also real, and to explain why this is exciting might need a little context.

Since January, I’ve been working on MoFo specific metrics. Mostly because that’s my job, but also because this/these organisations/projects/communities take a little while to understand, and getting to know MoFo was enough to keep me busy.

We also wanted to ship something quickly so we know where we stand against our MoFo goals, even if the data isn’t perfect. That’s what we’ve built in our *interim* dashboard. It’s a non de-duped aggregation of the numbers we could get out of our current systems without building a full integration database. It gives us a sense of scale and shows us trends. While not precise and high resolution yet, this has still been helpful to us. Data can sometimes look obvious once you see it, but before this we were a little blind.

So naturally, we want to make this dashboard as accurate as possible, and the next step is integrating and de-duping the data so we can know if the people who run Webmaker events are the people who commit code, are the people who file bugs, are the people who write articles for Source, are the people who teach Software Carpentry Bootcamps, etc, etc.

The reason we didn’t start this project by building a MoFo integration database is that MoCo were already working on that. And in less than typical Mozilla style (as I’m coming to understand it), we didn’t just build our own version of this. 😉 (Though there might be some scope and value in integrating some of this data within the Foundation anyway, but that’s a separate thought.)

The integration database in question is MoCo’s Project Baloo, which Pierros and many people on the MoCo side have been working on. It’s a complex project influenced by more than just technical requirements. Scoping the system is the point at which many teams are first looking at their contributor data in detail and working out what their indicators of contribution look like.

Our plan is that our MoFo interim dashboard data-source can eventually be swapped out for the de-duped ‘single source of truth’ system, at which point it goes from being a fuzzy-interim thing to a finalized precise thing.

While MoCo and ‘Fo have been taking two different approaches to solving this problem, we’ve not worked in isolation. We meet regularly, follow each other’s progress and have been trying to find the point where these approaches can merge into a unified cross Mozilla solution.

The demo we shipped yesterday was the first point where we’ve joined up this work.

Dial-up modems

I want to throw in an internet based analogy here, for those who remember dial-up modems.

Let’s imagine an image that shows us *all* the awesome Mozilla contributors and what they are doing. We want there to be 20k of them in 2014.


It’s not that we don’t know if we have contributors. We’ve seen individual contributors, and we’ve seen groups of contributors, but we haven’t seen them all in one place yet.

So to continue the dial-up modem analogy, let’s think of this big-picture view of contribution as a large uncompressed JPEG, which has been loading slowly for a few months.

The MoFo interim dashboard has been getting us part of this picture. Our approach has revealed the MoFo half of this picture with slowly increasing resolution and accuracy. It’s like an interlaced JPEG, and is about this accurate so far:

(Illustration: a partially loaded interlaced JPEG.)

The Project Baloo approach is precise and can show MoCo and MoFo data, but it adds one data source at a time. It’s rolling out like a progressive JPEG. The areweamillionyet.org dashboard demo isn’t using Baloo yet, but the data it’s using is a good representation of how Baloo can work. What you can see in the demo dashboard is a picture like this:

(Illustration: a partially loaded progressive JPEG.)

(Original Photo Credit: Gen Kanai)

About areweamillionyet.org

This is commit data extracted from cross-team/org/project repositories via GitHub. Even though code contribution is only one part of the big picture, seeing this much of the image tells us things we didn’t know before. It gives us scale, trends, and ways to ask questions about how to effectively and intentionally grow the community.

The ‘commit history’ over time is also a fascinating data set, and I’ll follow up with a blog post on that soon.

Less than the sum of our parts? When 5 + 5 = 8

With the goal of 20k active contributors this year, shared between MoCo and MoFo, we’re thinking about 10k active contributors each. If we counted each org in isolation, we could both say “here’s 10k active contributors”, and this would be a significant achievement. But if we de-dupe these two sets, it would be really worrying if there wasn’t an overlap between the people who contribute to MoCo and the people who contribute to MoFo projects.

Though we want to engage many, many individual contributors, I think a good measure of our combined community-building effectiveness will be how much these ‘pots’ of contributors overlap. When 10k MoFo contributors + 10k MoCo contributors = 15k combined Mozilla contributors, we should definitely celebrate.
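If it helps, the arithmetic in the heading is just set union. Here’s a toy sketch in Python, with invented contributor names:

```python
# A toy illustration of '5 + 5 = 8'. All names are invented.
moco = {"ada", "ben", "cal", "dee", "eli"}   # 5 MoCo contributors
mofo = {"cal", "dee", "eli", "fay", "gus"}   # 5 MoFo contributors

combined = moco | mofo  # set union de-dupes the overlapping people

print(len(moco) + len(mofo))  # 10 -- counting each org in isolation
print(len(combined))          # 8  -- the de-duped, combined community
```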

That’s the thing I’m most excited about with regards to joining up the data: understanding how contributors connect across projects, how they volunteer their time, energy, and skills in many different ways, and understanding what ‘Many Voices, One Mozilla’ looks like. When we understand this, we can improve it, for the benefit of the project and of the individuals who care about the mission and want to find ways into the project so they can make a difference.

While legal processes define ‘Corporations’ and ‘Foundations’, the people who volunteer and contribute rarely give a **** about which org ‘owns’ the project they’re contributing to; they just want to build a better internet. Mozilla is bigger than the legal entities. And the legal entities are not what Mozilla is about; they are just one of the necessities to making it work.

So the org dashboards, and team dashboards we’re building can help us day-to-day with tactical and strategic decisions, but we always need to keep them in the context of the bigger picture. Even if the big picture takes a while to download.

Here’s to more cross org collaboration.


New Google Sheets: publishing a single worksheet to the web as CSV

With the switch to the new version of Google Sheets, the option to publish a specific worksheet and then access that as a CSV file has disappeared (hopefully just temporarily).

In the new Google Sheets, I managed to publish a worksheet as a CSV by piecing together answers from here and here.

This is how to do it:

  1. Share the Google Doc so anyone with the link can view (sadly this loses the granularity of only sharing specific sheets that used to exist in the old version).
  2. Publish the document (File > Publish to the Web) and look for the document ID in the URL
  3. Add that document ID into this URL in place of KEY:
    • https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv&id=<KEY>
  4. While editing your Google Doc, open the worksheet you want to export and look in the URL for the GID parameter
  5. Copy this GID parameter and append it to your URL:
    • https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv&id=<KEY>&gid=<GID>
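
And if you want to pull that CSV into a script, here’s a minimal sketch in Python. KEY and GID are placeholders for the values you found in steps 2 and 4:

```python
# A minimal sketch: fetch the published worksheet as CSV and parse it.
# KEY and GID are placeholders -- substitute your own values.
import csv
import urllib.request

KEY = "your-document-id"  # from the document URL (step 2)
GID = "0"                 # from the worksheet URL (step 4)

url = (
    f"https://docs.google.com/spreadsheets/d/{KEY}"
    f"/export?format=csv&id={KEY}&gid={GID}"
)

with urllib.request.urlopen(url) as response:
    text = response.read().decode("utf-8")

rows = list(csv.reader(text.splitlines()))
print(rows[0])  # the header row
```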

Done

The immediate value of working in the open

I’m both excited and a tiny bit nervous about how “open” Mozilla are about the way they work.

As I’m getting to know the Foundation, and the projects and priorities, and to make sense of what exactly I’ll be doing here I’ve been reading lots of Etherpads. If you don’t know what an Etherpad is, it’s a bit like a Google Doc (the ‘word doc’ variety) but less slick and more open. If you give someone a link to an Etherpad, the barrier to them contributing to the document is almost non-existent.

Anyway, the value of this open working process somewhat blew my mind today. While lots of these docs have been useful in a general sense, today I read the documents from the initial planning around MoFo metrics that led to recruiting for my role (so it was pretty relevant!). The final document is a fine and very useful thing, like most summary documents, but what was really useful was the option to ‘replay’ the creation of the doc.

As I watched it being outlined, revised, and then shared for comment, I was able to see many of my new colleagues jump in to add the points they care most about and to challenge and contribute to the rest of the document. Better than just seeing the final document is seeing how it started, where it changed direction, and who was involved.

Even watching the hesitations and re-phrasing of particular sentences tells you something about where the subtleties and complications in the process exist.

I probably spent 15–20 minutes watching the replay of the writing of this document, and I think I got more indirect information than I would otherwise pick up in even two weeks of introduction meetings.

There is so much information in the history of that document that would be lost in any closed system. If, for example, that document was a PDF on an intranet behind a password, the value I would have gotten from it as a new employee today would have been greatly reduced.

As ready as I’m going to be

Tomorrow is the first day in my new role at the Mozilla Foundation, and I’m getting the new job nerves and excitement now.

Between wrapping up at WWF, preparing for Christmas, house hunting, and finishing off my next study assignment (a screenplay involving time-travel and a bus), I’ve been squeezing in a little bit of prep for the new job too.

This post is basically a note about some of the things I’ve been looking at in the last couple of weeks.

I thought it would be useful to jump through various bits of tech used in a number of MoFo projects, some of which I’d been wanting to play with anyway. This is not deep learning by any means, but it’s enough hands-on experience to speed up some things in the future.

I set up a little node.js app locally and had a look at Express. That’s all very nice, and I liked the basic app structures. Working with a package manager is a lovely part of the workflow, even if it sometimes feels a bit too magic to be real. I also had a look at MongoDB, Mongoose, and MERS as a potential data model/store solution for another little app thing I want to build at some point. I didn’t take this far, but got the basic data model working over the MERS API.

I’d used Git a little bit already, but wanted a better grasp of the process for contributing ‘nicely’ to bigger projects (where you’re not also talking directly to the other people involved). Reading the Pro Git book was great for that, and a lighter read than I expected. It falls into the ‘why didn’t I do that months ago?’ category.

Sysadmin-esque work is one of my weak points, so the next project was more of a stretch. I set up an Amazon EC2 instance and installed a copy of Graphite. The documentation for Graphite is on the sparse side for people who don’t know their way around a Linux command prompt well, but that probably taught me more than if I’d just been following a series of commands from a tutorial. I think I’ll be spending a lot more time on the metrics end of Graphite, so getting a grasp of the underlying architecture will hopefully help.

Then, for the last couple of days I’ve been working through Data Analysis With Open Source Tools at a slightly superficial level (i.e. skipping some of the Maths), but it’s been a good warm-up for the work ahead.

And that’s it for now.

I’m really looking forward to meeting many new people, problems and possibilities in 2014.

Happy New Year!

Took my son to an “Alien Invasion” exhibition and got to play a few minutes of Space Invaders