A ‘free’ online learning experience

I’ve blogged about various experiences of online learning I’ve taken part in over the years, and wanted to reflect on the most recent one: Coursera’s three-week Introduction to Ableton Live.

Learning more about learning is one of my personal goals this year. And I find writing out loud to be a useful tool in thinking. So that’s mostly the point of this.

I take these courses mostly because I like learning new things, but also because I’m interested in online learning more generally. How do you most effectively transfer knowledge, skills and motivation via the web, and/or about the web? That question is often on my mind.

Almost all of the projects I work on at Mozilla are somewhere in the education space; directly with Webmaker or Mozilla Learning Networks and tangentially in the topic of volunteer contribution. Contributing to an open source project as complex and distributed as Mozilla is a learning experience in itself, and sometimes requires specific training to even make it possible.

To further frame this particular brain dump, I’m also interested generally in the economics of the web and how this shapes user experiences, and I have strong feelings about the impact of advertising’s underlying messaging and what this does over time when it dominates a person’s daily content intake. I’m generally wary of the word “Free”. This all gets complex when you work on the web, and even directly on advertising at times. Most of my paycheques have had some pretty direct link to the advertising world, except maybe when I was serving school dinners to very rich children – but that wasn’t my favourite job, despite its lack of direct societal quandaries.

Now, to the content…

If you’re like me, you will tend to read notes about a topic like ‘commerce in education’ and react negatively to some of these observations because there are many cases where those two things should be kept as far apart as possible. But I’m actually not trying to say anything negative here. These are just observations.

Observations

All roads lead to… $

$ Coursera

My online experience within the Coursera site was regularly interrupted with a modal (think popup) screen asking if I wanted to pay to enrol in the ‘Signature Track’, and get a more official certification. This is Coursera’s business model and understandably their interest. It wasn’t at all relevant to me in my life situation, as I was taking a course about how to play with fun music software in my free time. I don’t often check my own qualifications before I let myself hobby. Not that anyone checked my qualifications before they let me work either, but I digress. Coursera’s tagline says ‘free’, but they want you to pay.

$ Blend.io

All assignments for the course had to be published to Blend for peer-evaluation. Blend is like GitHub, but for raw audio production tracks rather than source code. I didn’t know about Blend before the course, and I really like it as a concept, how it’s executed, and what it could do for collaborative music making. But I note: it is a business. This course funnels tens of thousands of new users into that business over the course of a few days. There might not be any direct financial trade here (between companies, for example), but users are capital in start-up land. And I now receive emails from Blend with advertisements for commercial audio production tools. My eyeballs, like yours, have a value.

$ Berklee College of Music

While hosted on Coursera, the content of this course is by Berklee College of Music. The content they ‘give away’ would traditionally only have been available to paying students. Berklee’s business is selling seats in classes. This course isn’t given away as an act of kindness; it’s marketing. Three weeks is short, and therefore the content is ‘light’. Lighter than I was expecting (not that I’m entitled). But halfway through, we received a promotional email about Berklee’s own online education platform, where you could create an account to get access to further ‘free’ videos to supplement the Coursera materials. I found these supplementary videos more useful, and they led to offers to sign up for extended paid courses with Berklee Online. For Berklee, this whole exercise is a marketing funnel. Quite possibly it’s the most fun and least offensive marketing funnel you can be dropped into, but it exists to do that job.

$ Erin Barra – Course professor and artist

Now, I write this with genuine sympathy, as I’ve walked the floor at countless venues trying to sell enough music and merch to cover the petrol costs of playing a gig. But this is a commercial element of this learning experience, so I will note it. At many points throughout the three weeks, we had opportunities to buy Erin’s music, t-shirts, and audio production stems (these are like the layer files of an original recording) for consumption and/or remixing. I know you have to hustle if you’re making music for a living, but the observation here is that the students of this course are also a marketable audience. Perhaps only because they arrive en masse and end up slightly faceless. I’m sure it would be weird for most teachers to sell t-shirts in a classroom. It wasn’t particularly weird online, where we’re desensitised to being constantly sold things. And I may have only noticed this because I’m interested in how all these things fit together.

$ Ableton

The course was about learning Ableton Live, a commercial audio production tool. So at some point, the cost of Ableton had to be considered. Ableton offers a free 30-day trial, which works for this course, and they kindly (or sensibly) agreed to let people taking the course start a new trial even if they’d used their 30 days already. Good manners like those are good for business. Anyway, I already owned Live 9 Intro (aka the cheap version), and for a three-week intro course it does more than enough to learn the basics (I guess that’s why it’s called Intro?). But the course taught and encouraged the use of Live 9 Suite (the EUR599 rather than the EUR79 version). Until some people complained, the use of features in Suite was required to complete the final assignment. Reading between the lines, I doubt there was any deliberate commercial discussion around this planning, but the planning definitely didn’t stem from the question: ‘how can we keep the cost down for these beginners?’. At the end of the course there were discount codes for 15% off anything from Ableton. I didn’t use Suite during the course, but I’m playing with it now, on my own time and terms, and may end up spending money on it soon.

Reflections

It’s wonderful, but it’s not Wikipedia. The course opened a lot of doors, but mostly into places where I could spend money, which I am cautious about as a model for learning. It was valuable to me and prompted me to learn more about Ableton Live than I would have done in those three weeks without it. So I’m grateful for it. But I can’t in my heart think of this as a ‘shared public resource’.

For my own learning, I like deadlines. Preferably arbitrary ones. The fact that these Coursera courses are only available at certain times during the year really works for me. But I struggle with the logic of this when I think about how best to provide learning material online to as many people as possible. The only MOOC-style courses I have finished have been time-bound. I don’t know how many people this is true for, though.

People will learn X to earn Y. For me this course was a form of hobby or entertainment, but much learning has a direct commercial interest for students as well as educators, whether it’s for professional skills development or building some perceived CV value.

There is no ‘free’ education, even if it says “free” on the homepage. There is always a cost, financial or otherwise. Sometimes the cost is borne by the educator, and sometimes the student. Both models have a place, but I get uncomfortable when one tries to look like the other. And if the world could only have one of these models for all of education I know which one I’d choose. Marketing fills enough of our daily content and claims enough brainprint as it is.

Conclusion

I thought I might find some conclusions in writing this, but that doesn’t always happen. There are a lot of interesting threads here.

So instead of a conclusion, you can have the song I submitted for my course assignment. It was fun to make. And I have this free-but-not-free course to thank for getting it done.

Mozilla Contributor Analysis Project (Joint MoCo & MoFo)

I’m back at the screen after a week of paternity leave, and I’ll be working part-time for the next two weeks while we settle into the new family routine at home.

In the meantime, I wanted to mention a Mozilla contributor analysis project in case people would like to get involved.

We have a wiki page now, which means it’s a real thing. And here are some words my sleep-deprived brain prepared for you earlier today:

The goal and scope of the work:

Explore existing contribution datasets to look for possible insights and metrics that would be useful to monitor on an ongoing basis, before the coincident workweek in Portland at the beginning of December.

We will:

  • Stress-test our current capacity to use existing contribution data
  • Look for actionable insights to support Mozilla-wide community building efforts
  • Run ad-hoc analysis before building any ‘tools’
  • If useful, prototype tools that can be re-used for ongoing insights into community health
  • Build processes so that contributors can get involved in this metrics work
  • Document gaps in our existing data / knowledge
  • Document ideas for future analysis and exploration

Find out more about the project here.

I’m very excited that three members of the community have already offered to support the project and we’ve barely even started.

In the end, these numbers we’re looking at are about the community, and for the benefit of the community, so the more community involvement there is in this process, the better.

If you’re interested in data analysis, or know someone who is, send them the link.

This project is one of my priorities over the next 4-8 weeks. On that note, this looks quite appealing right now.

So I’m going to make more tea and eat more biscuits.

Something special within ‘Hack the snippet’

Here are a couple of notes about ‘Hack the snippet‘ that I wanted to make sure got documented.

  1. It significantly changed people’s predisposition to Webmaker before they arrived on the site
  2. Its ‘post-interaction’ click-through rate was equivalent to most one-click snippets

Behind these observations, something special was happening in ‘Hack the snippet’. I can’t tell you exactly what it was that had the end-effect, but it’s worth remembering the effect.

1. It ‘warmed people up’ to Webmaker

  • The ‘Hack the snippet’ snippet
    • was shown to the same audience (Firefox users) as eight other snippet variations we ran during the campaign
    • had the same % of users click through to the landing page
    • had the same on-site experience on webmaker.org as all the other snippet variations we tested (the same landing page, sign-up ask, etc.)
  • But when people who had interacted with ‘Hack the snippet’ landed on the website, they were more than three times as likely to sign up for a Webmaker account

Same audience, same engagement rate, same ask… but triple the conversion rate (most regular snippet traffic converted ~2%, ‘Hack the snippet’ traffic converted ~7%).
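
For anyone who wants to poke at numbers like these, here’s a minimal sketch of the arithmetic in JavaScript. The traffic figures are made up, chosen only to match the ~2% and ~7% rates above:

```javascript
// Hypothetical traffic numbers for illustration only — the post
// reports the rates (~2% and ~7%), not the raw counts.
const variants = [
  { name: 'regular snippet',  landed: 10000, signups: 200 }, // ~2%
  { name: 'Hack the snippet', landed: 10000, signups: 700 }, // ~7%
];

for (const v of variants) {
  const rate = (v.signups / v.landed) * 100;
  console.log(`${v.name}: ${rate.toFixed(1)}% conversion`);
}
```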

Something within that experience (and likely the overall quality of it) makes the Webmaker proposition more appealing to people who ‘hacked the snippet’. It could be one of many things: the simplicity, the guided learning, the feeling of power from editing the Firefox start page, the particular phrasing of the copy or many of the subtle design decisions. But whatever it was, it worked.

We need to keep looking for ways to recreate this.

Not everything we do going forwards needs to be a ‘Hack the snippet’ snippet (you can see how much time and effort went into that in the bug).

But when we think about these new-user experiences, we have a benchmark to compare things to. We know how much impact these things can have when all the parts align.

2. The ‘post-interaction’ CTR was as good as most one-click snippets

This is a quicker note:

  • Despite the steps involved in completing the ‘Hack the snippet’ on-page activity, the same total number of people clicked through when compared to a standard ‘one-click’ snippet.
  • We got the same % of the audience to engage with a learning activity and then click through to the Webmaker site as we usually get just giving them a link directly to Webmaker
    • This defies most “best practice” about minimizing number of clicks

Again, this doesn’t give us an immediate thing we can repeat, but it gives us a benchmark to build on.

Overlapping types of contribution

TL;DR: Check out this graph!

Ever wondered how many Mozfest volunteers also host events for Webmaker? Or how many code contributors have a Webmaker contributor badge? Now you can find out.

The reason the MoFo Contributor dashboard we’re working from at the moment is called our interim dashboard is that it combines numbers from multiple data sources, but the number of contributors is not de-duped across systems.

So if you’re counted as a contributor because you host an event for Webmaker, you will be double counted if you also file bugs in Bugzilla. And until now, we haven’t known what those overlaps look like.
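
To make the double-counting problem concrete, here’s a small JavaScript sketch of de-duping contributors across two systems. It assumes each system can export a list of contributor identifiers and that email works as a join key; the real matching rules in our systems may well differ:

```javascript
// Sketch: de-duping contributors across systems. All names are invented.
const sources = {
  webmaker: ['ada@example.org', 'grace@example.org'],
  bugzilla: ['grace@example.org', 'linus@example.org'],
};

// Build a map: contributor -> set of systems they appear in.
const contributors = new Map();
for (const [system, people] of Object.entries(sources)) {
  for (const person of people) {
    if (!contributors.has(person)) contributors.set(person, new Set());
    contributors.get(person).add(system);
  }
}

const total = contributors.size; // the de-duped count
const overlap = [...contributors.values()].filter(s => s.size > 1).length;
console.log({ total, overlap }); // { total: 3, overlap: 1 }
```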

This interim solution wasn’t perfect, but it’s given us something to work with while we’re building out Baloo and the cross-org areweamillionyet.org (and by ‘we’, the vast credit for Baloo is due to our hard-working MoCo friends Pierros and Sheeri).

To help with prepping MoFo data for inclusion in Baloo (and by generally being awesome), JP wired up an integration database for our MoFo projects (skipping a night of sleep to ship V1!).

We’ve tweaked and tuned this in the last few weeks and we’re now extracting all sorts of useful insights we didn’t have before. For example, this integration database is behind quite a few of the stats in OpenMatt’s recent Webmaker update.

The downside to this is that we will soon have a de-duped number for our dashboard, which will be smaller than the current number. That will feel like a bit of a downer, because we’ve been enthusiastically watching that number go up as we’ve built out contribution tracking systems throughout the year.

But, a smaller more accurate number is a good thing in the long run, and we will also gain new understanding about the multiple ways people contribute over time.

We will be able to see how people move around the project, and find that what looks like someone ‘stopping’ contributing might be them switching focus to another team, for example. There are lots of exciting possibilities here.

And while I’m looking at this from a metrics point of view today, the same data allows us to make sure we say hello and thanks to any new contributors who joined this week, or to reach out and talk to long running active contributors who have recently stopped, and so on.

Trendlines and Stacking Logs

TL;DR

  • Our MoFo dashboards now have trendlines based on known activity to date
  • The recent uptick in activity is partly new contributors, and partly new recognition of existing contributors (all of which is good, but some of which is misleading for the trendline in the short term)
  • Below is a rambling analogy for thinking about our contributor goals and how we answer the question ‘are we on track for 2014?’
  • + if you haven’t seen it, OpenMatt has crisply summarized a tonne of the data and insights that we’ve unpicked during Maker Party

Stacking Logs

I was stacking logs over the weekend, and wondering if I had enough for winter, when it struck me that this might be a useful analogy for a post I was planning to write. So bear with me, I hope this works…

To be clear, this is an analogy about predicting and planning, not a metaphor for contributors* :D

So the trendline looks good, but…

[Screenshot: dashboard trendlines]

Trendlines can be misleading.

What if our task was gathering and splitting logs?

[Image: Vedstapel (woodpile), by Johannes Jansson]

We’re halfway through the year, and the log store is half full. The important question is: ‘will it be full when the snow starts falling?’

Well, it depends.

It depends how quickly we add new logs to the store, and it depends how many get used.

So let’s push this analogy a bit.

[Image: firewood in the snow]

Before this year, we had scattered stacks of logs here and there, in teams and projects. Some we knew about, some we didn’t. Some we thought were big stacks of logs but were actually stacked on top of something else.

[Image: Vedstapel (woodpile), by Johannes Jansson]

Setting a target was like building a log store and deciding to fill it. We built ours to hold 10,000 logs. There was a bit of guesswork in that.

It took a while to gather up our existing logs (build our databases and counting tools). But the good news is, we had more logs than we thought.

Now we need to start finding and splitting more logs*.

Switching from analogy to reality for a minute…

This week we added trendlines to our dashboard. These are two linear regression lines: one based on all activity for the year to date, and one based on the most recent 4 weeks. It gives a quick feedback mechanism on whether recent actions are helping us towards our targets and whether we’re improving over the year to date.
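
For the curious, a linear regression trendline is just an ordinary least-squares fit over the activity counts. Here’s a hedged JavaScript sketch with invented weekly numbers; the dashboard’s actual implementation may differ:

```javascript
// Sketch: ordinary least-squares trendline over weekly contributor
// counts (made-up numbers). The dashboard fits one line to the whole
// year-to-date and another to just the most recent 4 weeks.
function linearRegression(points) {
  const n = points.length;
  const sumX  = points.reduce((s, [x]) => s + x, 0);
  const sumY  = points.reduce((s, [, y]) => s + y, 0);
  const sumXY = points.reduce((s, [x, y]) => s + x * y, 0);
  const sumXX = points.reduce((s, [x]) => s + x * x, 0);
  const slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
  const intercept = (sumY - slope * sumX) / n;
  return { slope, intercept };
}

// [week number, cumulative contributors] — illustrative data only.
const yearToDate = [[1, 120], [2, 180], [3, 260], [4, 310], [5, 420]];

console.log(linearRegression(yearToDate));        // year-to-date trend
console.log(linearRegression(yearToDate.slice(-4))); // recent trend
```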

These are interesting, but can be misleading given our current working practices. The trendline implies some form of destiny. You do a load of work recruiting new contributors, see the trendline is on target, and relax. But relaxing isn’t an option because of the way we’re currently recruiting contributors.

Switching back to the analogy…

We’re mostly splitting logs by hand.

[Image: Špalek na štípání (a log-splitting block)]

Things happen because we go out and make them happen.

Hard work is the reason we have 1,800 Maker Party events on the map this year and we’re only half-way through the campaign.

There’s a lot to be said for this way of making things happen, and I think there’s enough time left in the year to fill the log store this way.

But this is not mathematical or automated, which makes trendlines based on this activity a bit misleading.

In this mode of working, the answer to ‘Are we on track for 2014?‘ is: ‘the log store will be filled… if we fill it‘.

Scaling

[Image: Holzspalter (a log-splitting machine)]

As we move forward and think about scale… say a hundred thousand logs (or even better, a Million Mozillians)… we need to think about log-splitting machines (or ‘systems’).

Systems can be tested, tuned, modified and multiplied. In a world of ‘systems’ we can apply trendlines to our graphs that are much better predictors of future growth.

We should be experimenting with systems now (and we are, a little bit). But we don’t yet know what the contributor growth system looks like that works as well as the analogous log-splitting machines of the forestry industry. These are things to be invented, tested and iterated on, but I wouldn’t bet on them as the solution for 2014, as this could take a while to solve.

I should also state explicitly that systems are not necessarily software (or hardware). Technology is a relatively small part of the systems of movement building. For an interesting but time-consuming distraction, this talk on Social Machines from last week’s Wikimania conference is worth a ponder:

Predicting 2014 today?

Even if you’re splitting logs by hand, you can schedule time to do it. Plan each month, check in on targets and spend more or less time as required to stay on track for the year.

This boils down to a planning exercise, with a little bit of guess work to get started.

In simple terms, you list all the things you plan to do this year that could recruit contributors, and how many contributors you think each will recruit. As you complete some of these activities, you reflect on your predictions, modify the plans, and update the estimates for the rest of the year.
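
As a sketch of that exercise (with entirely hypothetical activities and numbers), the spreadsheet logic boils down to something like this:

```javascript
// Sketch: the planning exercise as code — list planned activities with
// estimated recruits, then replace estimates with actuals as they land.
// All activity names and numbers here are hypothetical.
const plan = [
  { activity: 'Maker Party events', estimate: 900, actual: 1100 },
  { activity: 'Snippet campaign',   estimate: 400, actual: 350  },
  { activity: 'Fall campaign',      estimate: 600, actual: null }, // not run yet
];

// Forecast = actuals where we have them, estimates where we don't.
const forecast = plan.reduce((sum, a) => sum + (a.actual ?? a.estimate), 0);
console.log(`Forecast recruits for the year: ${forecast}`);
```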

Geoffrey has put together a training workshop for this, along with a spreadsheet structure to make this simple for teams to implement. It’s not scary, and it helps you get a grip on the future.

From there, we can start to feed our planned activity and forecast recruitment numbers into our dashboard as a trendline rather than relying solely on past activity.

The manual nature of the splitting-wood-like-activity means what we plan to do is a much more important predictor of the future than extrapolating what we have done in the past, and that changing the future is something you can go out and do.

*Contributors are not logs. Do not swing axes at them, and do not under any circumstances put them in your fireplace or wood burning stove.

When ‘less than the sum of our parts’ is a good thing

Here’s a happy update about our combined Mozilla Foundation (MoFo) and Mozilla Corporation (MoCo) contributor dashboards.

TL;DR: There’s a demo All Mozilla Contributor Dashboard you can see at areweamillionyet.org

It’s a demo, but it’s also real, and to explain why this is exciting might need a little context.

Since January, I’ve been working on MoFo-specific metrics. Mostly because that’s my job, but also because this/these organisations/projects/communities take a little while to understand, and getting to know MoFo was enough to keep me busy.

We also wanted to ship something quickly so we know where we stand against our MoFo goals, even if the data isn’t perfect. That’s what we’ve built in our *interim* dashboard. It’s a non de-duped aggregation of the numbers we could get out of our current systems without building a full integration database. It gives us a sense of scale and shows us trends. While not precise and high resolution yet, this has still been helpful to us. Data can sometimes look obvious once you see it, but before this we were a little blind.

So naturally, we want to make this dashboard as accurate as possible, and the next step is integrating and de-duping the data so we can know if the people who run Webmaker events are the people who commit code, are the people who file bugs, are the people who write articles for Source, are the people who teach Software Carpentry Bootcamps, etc, etc.

The reason we didn’t start this project by building a MoFo integration database is that MoCo were already working on that. And in less-than-typical Mozilla style (as I’m coming to understand it), we didn’t just build our own version of this. ;) (There might be some scope and value in integrating some of this data within the Foundation anyway, but that’s a separate thought.)

The integration database in question is MoCo’s project Baloo, which Pierros, and many people on the MoCo side, have been working on. It’s a complex project, influenced by more than just technical requirements. Scoping the system is the point at which many teams are first looking at their contributor data in detail and working out what their indicators of contribution look like.

Our plan is that our MoFo interim dashboard data-source can eventually be swapped out for the de-duped ‘single source of truth’ system, at which point it goes from being a fuzzy-interim thing to a finalized precise thing.

While MoCo and ‘Fo have been taking two different approaches to solving this problem, we’ve not worked in isolation. We meet regularly, follow each other’s progress and have been trying to find the point where these approaches can merge into a unified cross Mozilla solution.

The demo we shipped yesterday was the first point where we’ve joined up this work.

Dial-up modems

I want to throw in an internet based analogy here, for those who remember dial-up modems.

Let’s imagine this image shows us *all* the awesome Mozilla contributors and what they are doing. We want there to be 20k of them in 2014.

[Image: the full picture of Mozilla contributors]

It’s not that we don’t know if we have contributors. We’ve seen individual contributors, and we’ve seen groups of contributors, but we haven’t seen them all in one place yet.

So to continue the dial-up modem analogy, let’s think of this big-picture view of contribution as a large uncompressed JPEG, which has been loading slowly for a few months.

The MoFo interim dashboard has been getting us part of this picture. Our approach has revealed the MoFo half of this picture with slowly increasing resolution and accuracy. It’s like an interlaced JPEG, and is about this accurate so far:

[Image: an interlaced JPEG, partially loaded]

The Project Baloo approach is precise and can show MoCo and MoFo data, but adds one data source at a time. It’s rolling out like a progressive JPEG. The areweamillionyet.org dashboard demo isn’t using Baloo yet, but the data it’s using is a good representation of how Baloo can work. What you can see in the demo dashboard is a picture like this:

[Image: a progressive JPEG, partially loaded]

(Original Photo Credit: Gen Kanai)

About areweamillionyet.org

This is commit data extracted from cross-team/org/project repositories via GitHub. Even though code contribution is only one part of the big picture, seeing this much of the image tells us things we didn’t know before. It gives us scale, trends and ways to ask questions about how to effectively and intentionally grow the community.
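
As an aside, if you want to play with this kind of data yourself, the public GitHub API is an easy starting point. A sketch below — this is not the actual areweamillionyet.org pipeline, the repo choice is just an example, and it assumes Node 18+ for the global fetch (unauthenticated requests are heavily rate-limited):

```javascript
// Sketch: counting unique commit author emails for one repo via the
// public GitHub API (v3). Illustrative only; no pagination or auth.
async function commitAuthors(owner, repo) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/commits?per_page=100`,
    { headers: { Accept: 'application/vnd.github+json' } }
  );
  const commits = await res.json();
  if (!Array.isArray(commits)) throw new Error('API error or rate limit');
  return new Set(
    commits.map(c => c.commit?.author?.email).filter(Boolean)
  );
}

commitAuthors('mozilla', 'kitsune').then(authors =>
  console.log(`${authors.size} unique authors in the last 100 commits`));
```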

The ‘commit history’ over time is also a fascinating data set, and I’ll follow up with a blog post on that soon.

Less than the sum of our parts? When 5 + 5 = 8

With the goal of 20k active contributors this year, shared between MoCo and MoFo, we’re thinking about 10k active contributors to each. If we counted each org in isolation, we could both say “here’s 10k active contributors”, and this would be a significant achievement. But if we de-dupe these two sets, it would be really worrying if there wasn’t an overlap between the people who contribute to MoCo and the people who contribute to MoFo projects.

Though we want to engage many, many individual contributors, I think a good measure of our combined community-building effectiveness will be how much these ‘pots’ of contributors overlap. When 10k MoFo contributors + 10k MoCo contributors = 15k combined Mozilla contributors, we should definitely celebrate.
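
The underlying arithmetic is just inclusion-exclusion over two sets. A toy sketch:

```javascript
// Sketch: why 10k + 10k can equal 15k once you de-dupe.
// Inclusion-exclusion: |A ∪ B| = |A| + |B| − |A ∩ B|.
const moco = new Set(['a', 'b', 'c', 'd', 'e']);
const mofo = new Set(['d', 'e', 'f', 'g', 'h']);

const union = new Set([...moco, ...mofo]);
const overlap = [...moco].filter(x => mofo.has(x)).length;

console.log(`MoCo: ${moco.size}, MoFo: ${mofo.size}`);
console.log(`Combined (de-duped): ${union.size}, overlap: ${overlap}`);
// 5 + 5 = 8, because 2 people contribute to both.
```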

That’s the thing I’m most excited about with regards to joining up the data: understanding how contributors connect across projects, how they volunteer their time, energy and skills in many different ways, and what ‘Many Voices, One Mozilla’ looks like. When we understand this, we can improve it, for the benefit of the project and of the individuals who care about the mission and want to find ways into the project so they can make a difference.

While legal processes define ‘Corporations’ and ‘Foundations’, the people who volunteer and contribute rarely give a **** about which org ‘owns’ the project they’re contributing to; they just want to build a better internet. Mozilla is bigger than the legal entities. And the legal entities are not what Mozilla is about; they are just one of the necessities of making it work.

So the org dashboards, and team dashboards we’re building can help us day-to-day with tactical and strategic decisions, but we always need to keep them in the context of the bigger picture. Even if the big picture takes a while to download.

Here’s to more cross org collaboration.

Want to read more?

Week 4 at Mozilla

I gathered up the output from my many discussions with our teams so far, and I’m proposing a plan for shipping a Mozilla Foundation Contributors Dashboard as quickly as we realistically can. I’ll be presenting this next week, and once I’ve had feedback on it, this can be turned into a proper plan of action and shared more widely.

Next week I’m in Toronto with the Webmaker team for a work-week (a pretty focused gathering on getting things done), which I’ve been busily preparing for.

You can see what we’ll be up to here (I’m space-wrangling the Metrics track):
https://wiki.mozilla.org/Webmaker/Workweek

P.S. ‘Space-wrangling’ is official Mozilla terminology, and animated GIFs are our primary means of communication.

Because we work in the open, you can follow live updates on how well we’re shipping our planned output during the work-week:
https://wiki.mozilla.org/Webmaker/Scrumboard

Getting ready for this week involved opening a lot of Bugzilla tickets so we can track progress during the week. Bugzilla is a bit of a monster (I think it looks like this), but it’s also a good way of getting things done. By the time I’d looked at 20+ tickets, my brain was starting to filter out the noise in the interface from fields that don’t get used. I’m sure it will get easier to use with time.

I also learnt this week that there are close to 1 million bugs now logged in that system – which is a pretty amazing record of the amount of work done by many, many Mozillians over many, many years. I reckon that as you’re using the internet right now (I hope you haven’t printed this out!), your online experience is better as a direct result of at least one of those million bugs.

To wrap this up, I’d like to be about 10 times more prepared for next week, but I think that’s largely the result of not knowing what to expect.

I’m very excited to meet more of my new teammates IRL, but will also miss my wife and our little lunatic, so this picture is my new wallpaper for my travels:

Go work-week!

As ready as I’m going to be

Tomorrow is the first day in my new role at the Mozilla Foundation, and I’m getting the new job nerves and excitement now.

Between wrapping up at WWF, preparing for Christmas, house hunting, and finishing off my next study assignment (a screenplay involving time-travel and a bus), I’ve been squeezing in a little bit of prep for the new job too.

This post is basically a note about some of the things I’ve been looking at in the last couple of weeks.

I thought it would be useful to jump through various bits of tech used in a number of MoFo projects, some of which I’d been wanting to play with anyway. This is not deep learning by any means, but it’s enough hands-on experience to speed up some things in the future.

I set up a little node.js app locally and had a look at Express. That’s all very nice, and I liked the basic app structure. Working with a package manager is a lovely part of the workflow, even if it sometimes feels a bit too magic to be real. I also had a look at MongoDB, Mongoose and MERS as a potential data model/store solution for another little app thing I want to build at some point. I didn’t take this far, but I got the basic data model working over the MERS API.
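
For flavour, here’s roughly the shape of the kind of toy app I was poking at. This is a minimal sketch, not my actual code; it assumes express and mongoose are installed via npm and a MongoDB instance is running locally:

```javascript
// Minimal Express + Mongoose sketch: a tiny JSON API over one model.
const express = require('express');
const mongoose = require('mongoose');

mongoose.connect('mongodb://127.0.0.1/scratch-app');

// One throwaway model to exercise the data layer.
const Note = mongoose.model('Note', new mongoose.Schema({
  text: String,
  created: { type: Date, default: Date.now },
}));

const app = express();
app.use(express.json());

// List notes, newest first.
app.get('/notes', async (req, res) => {
  res.json(await Note.find().sort('-created'));
});

// Create a note from the request body.
app.post('/notes', async (req, res) => {
  res.status(201).json(await Note.create({ text: req.body.text }));
});

app.listen(3000, () => console.log('listening on :3000'));
```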

I’d used Git a little bit already, but wanted a better grasp of the process for contributing ‘nicely’ to bigger projects (where you’re not also talking directly to the other people involved). Reading the Pro Git book was great for that, and a lighter read than I expected. It falls into the ‘why didn’t I do that months ago?’ category.

Sysadmin-esque work is one of my weak points, so the next project was more of a stretch. I set up an Amazon EC2 instance and installed a copy of Graphite. The documentation for Graphite is on the sparse side for people who don’t know their way around a Linux command prompt well, but that probably taught me more than if I’d just been following a series of commands from a tutorial. I think I’ll be spending a lot more time on the metrics end of Graphite, so getting a grasp of the underlying architecture will hopefully help.
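
One bit of the architecture that helped it click for me: Graphite’s Carbon daemon accepts data points over a dead-simple plaintext protocol, one “metric.path value timestamp” line per point, on port 2003 by default. A sketch in Node (the host and metric name are invented):

```javascript
// Sketch: sending one data point to Graphite's Carbon daemon over its
// plaintext protocol. Hostname and metric path below are made up.
const net = require('net');

function sendMetric(host, path, value) {
  const ts = Math.floor(Date.now() / 1000); // Carbon expects unix seconds
  const socket = net.createConnection(2003, host, () => {
    socket.end(`${path} ${value} ${ts}\n`); // write one line, then close
  });
  socket.on('error', err => console.error('carbon error:', err.message));
}

sendMetric('graphite.example.org', 'mofo.contributors.active', 4200);
```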

Then, for the last couple of days I’ve been working through Data Analysis With Open Source Tools at a slightly superficial level (i.e. skipping some of the Maths), but it’s been a good warm-up for the work ahead.

And that’s it for now.

I’m really looking forward to meeting many new people, problems and possibilities in 2014.

Happy New Year!

Took my son to an “Alien Invasion” exhibition and got to play a few minutes of Space Invaders

Remembering maps from memory

Today, I found this awesome post on Uncertain Cartographies (via Flowing Data), and it immediately took me back to something I made when I was in college and studying fine art.

So check out that link first, as this post will make more sense in relation to it, and it’s pretty fascinating anyway.

Then I’ll continue my reminiscence… :)

I used to have a framed print of a map I’d drawn on the wall at home, though when I say “print” it was ~10 bits of A4 photo paper I had carefully cut and glued together. It lasted about 6 years on the wall before the ink faded and the paper peeled and I had to take it down. And I hadn’t thought about this map again until today.

Then after reading this paragraph in particular, I really wanted to find my old map:

“Places where I once lived are deeply etched in my mind. Given a blank sheet of paper, and a little lenience, I can draw a respectable map of Murrays Bay, Mount Eden, Kingsland, Longburn, Summer Hill or Mana from memory. Yet most New Zealand localities are at once familiar and largely unknown to me.”

I went searching for my old map.

Amazingly (to me at least) I managed to find a copy of my original Photoshop artwork file on an old archive CD-R that’s followed me from desk to desk, between many house moves and a couple of countries and that I’ve not had any cause to look at until today.

I’ve posted the map below, but first here’s some context:

This was something I created 10 years ago when I was 18 and I’d been driving for less than a year. I didn’t really use maps for directions, but I drove quite a lot to play gigs with my band, and to visit my girlfriend (now wife) who was studying in Exeter. I knew my way around by which junctions led to where, but I’d never had to ‘map’ these places in a proper geographic context in my mind. This is pre-Google Maps/Earth (hence the AA roadmap styling) and it’s interesting now to think how the ability to browse maps so quickly and fluidly online shapes the way we visualize the abstraction of location in our minds. Even more so when you factor in navigation aids.

To create the map, I sat down with a pen and a large piece of paper and drew the roads leading out around me to all the places I had traveled in recent months. I didn’t allow edits or corrections, and there was no advance sketching of where places were located. I drew the roads and made the links between places in one go. I did this until I ran out of roads I could meaningfully label. Then I scanned the whole thing in and traced the lines in Photoshop (no fixes allowed). I recall this “giant” file crashing my computer on a regular basis, though looking at it now, it’s only 50MB.

Anyway, that’s getting a bit nostalgic and I don’t mean it to. What I meant to do was post this map as a response to the thought provoking piece I read today.

A little insight into my earlier years

My First #Mozfest

I have an hour free this morning, so I wanted to quickly write up my thoughts on Mozfest before my memory fades too much. This will be rough, but f*** it, ship it, as they say at Mozfest.

I bought a Mozfest ticket in July with next to no expectations and just a little hope that meeting some new people might trigger some new ideas. It’s fair to say that this was a massive under-prediction on my part.

A couple of months later, with about a month to go until Mozfest, my boss (@ade) mentioned some sessions that might be interesting for WWF and my work in fundraising. A couple of introductory emails and a Skype call later and I’d put my name down for a yet-to-be-confirmed session called ‘Pass the App’.

We were going to use a new tool called Appmaker to build a donation app in a three-hour session. At this point in time, Appmaker didn’t do a lot. It was before the version that would be called pre-alpha. I looked at Appmaker for a few minutes and worried I’d just agreed to waste the first quarter of Mozfest.

I had some time off, and a couple of weeks went by. With two weeks to go, I wanted to get set up with Appmaker in a dev environment before the day, so I didn’t waste people’s time with silly questions about configuration when we could be building things. Work was a bit crazier than usual, and another week went by before I finally sat down to look at some code.

It was quite astounding how much Appmaker had evolved in those few weeks. The team working on this are incredible. From thinking the morning would be wasted, it now looked like a tool with enough components that with a little imagination you could hook up all sorts of awesome apps. My goal was to add some kind of payment to that.

The components in Appmaker are built with HTML, CSS and JavaScript, and looking at a few examples, I was happy I could build something by copying and adapting the work that had already been done. But getting a development environment set up to work with these technologies I know pretty well required diving into a number of technologies that were completely new to me.

The deadline and motivation drove me through some of the initial hurdles and learning, and jumping into the IRC room for Appmaker I received great help and support from the team. I was worried about hassling them for their time while they were busy getting ready for Mozfest, but I was welcomed very warmly. It was a really great experience working with new people.

I guess the lesson here is: If you try and make something new, you cannot help but learn something new. And also that deadlines are amazing, as we’ve discussed before.

There were ten tracks at Mozfest, and at any given time I wanted to be in about eight of them. After the Saturday morning Pass the App session, I was planning to alternate between the Open Data and Privacy tracks for lots of interesting things, but it didn’t work out that way. I didn’t actually make it to any other sessions. I got hooked into (and hooked on) making things in our scramble to build a working Pass the App demo, which we did. Here’s a link to the write-up. I won’t re-tell that story. I got to work with kind and intelligent people, making something valuable and learning a tonne. You can’t ask for more than that from any conference-esque event.

My hour of free time is up now, so I’m going to ship this despite the vast amount of things I was grateful for and wanted to talk about.

And I’ll say a quick hi to the people from the Pass the App session,

And the many other lovely people I got to meet for the first time.