How to Keep Learning for Life: Learning After School and University

When we talk about learning, we’re often talking about school or university or a workplace training seminar. But to be a lifelong learner is to understand how to learn by yourself, outside of an institution, after that stage of life is over, without any teacher to show you the way.

Learning for life is about learning how to teach ourselves.

Let me start by teaching you something.

What do you think this picture shows? (I’ll give you a hint: you’ll find it somewhere in your house, and you probably won’t like it when you find it.)

[Image: electron-microscope photograph of household dust, magnified 22 million times]

This is a picture of dust, magnified 22 million times.

The reason I asked you was not to test your knowledge of microscopic photography (unless there’s an expert in the room) but to teach you something about how we learn.

Three things, actually. First, to learn something new we need a question (something like: what does this picture show us?). Second, we need the curiosity to care about the answer. Third, we need to fail in answering the question. We need to get it wrong. (Not exactly what you were taught in high school.)

Let’s start with curiosity.

When Albert Einstein talked about learning, he said:

“The important thing is not to stop questioning. Curiosity has its own reason for existing. One cannot help but be in awe when he contemplates the mysteries of eternity, of life, of the marvellous structure of reality.”

To continue learning after school or university we need to constantly remind ourselves to be curious about the world around us, to renew that childish habit of asking questions at the wrong time, in awkward moments. Children tend to always ask: “Why?” These trivial questions are really important, because they build their perception of the world around them.

But as we get older, asking “why” or “how” about everything seems childish – when in reality something as simple as dust has a lot to teach us, if we look at it in the right way (under a microscope, for example).

The author Susan Sontag puts it beautifully when she says:

“We’re always [kind of] shrivelling and becoming narrower. This is what age does to us. I don’t mean age in terms of becoming older, just having your life. [Over time] you become, if you don’t work on it [this project of the self]… you more and more start relying on rote and routine responses to things. You build up habits and preferences and responses.”

If I had asked you a couple of hours ago what dust looked like, you’d have had an automatic reply: it’s that grey, tiny, fluffy stuff. This is what happens to us when we lose the ability to question things and stop being curious about the world around us. We stop seeing what’s right in front of us, and instead impose rote-learnt ideas on everything we see.

The author C.S. Lewis says it simply:

“When I became a man I put away childish things, including the fear of being childish.”

In Lewis Carroll’s Alice in Wonderland, Alice goes down the rabbit hole into a new world, but she doesn’t just sit in the corner looking at her shoes. She explores her new environment with an insatiable childish curiosity. She tries to understand where she is, the people she meets, and ultimately, herself.

Alice is an autodidact, a self-learner: someone who uses their curiosity about their environment to learn new things, without necessarily relying on others for help; someone who teaches themselves by following their curiosity down each new rabbit hole. At one of the most central moments of the book, Lewis Carroll summarizes the thought perfectly:

“Curiouser and curiouser!” cried Alice (she was so much surprised, that for the moment she quite forgot how to speak good English).

She gets so captivated with learning that she completely forgets the basics! That’s the irony Carroll is striking at.

Her adventure is filled with learning new things – about caterpillars and the Queen of Hearts and so on – and each person she meets tests her knowledge with riddles and rhymes. Part of the joy of the book for most people is following along on that journey, and as a kid, the journey is enjoyable because it is so similar to our own: being exposed to new knowledge, creatures and animals on a daily basis.

But it is not only Alice’s curiosity that leads her to learn new things; it is also her self-confidence – her belief that everything will be okay, however curious she gets. Alice has the self-confidence to enter an entirely new world and not be afraid of it, but curious about it.

As the author Milan Kundera puts it:

“The difference between the university graduate and the autodidact lies not so much in the extent of knowledge as in the extent of vitality and self-confidence.”

It is this self-confidence to chase down rabbit holes that is so interesting, and so worth pursuing.

The key here, however, is to have self-confidence without an inflated self-esteem. Research by the psychology professor Roy F. Baumeister found that people with high self-esteem, or high self-regard, were actually less effective learners, because they thought they had nothing to learn. They already felt like experts in a topic before they had learnt anything about it.

Alice is the perfect autodidact because she is confident, yet she doubts herself. This allows her to be open to new knowledge rather than presuming to know everything about Wonderland.

When finding ourselves in a new situation it is tempting to pretend we know everything, but to learn, it is better to confidently admit that we don’t know what we are doing, that we are curious about it and that we want to know more.

Today, we encourage the opposite kind of thought. We encourage the lowest performers to view themselves as equal to the top performers. Instead of getting a gold star no matter how well you do, what matters is having the self-confidence to admit that you don’t always get it right, that there are things you don’t know and that you have a lot to learn.

So long as you’re curious about discovering new things, that’s okay. This is the real ‘gold star’: being happy to chase down the rabbit hole regardless of the consequences.

Let’s try another picture. The first one might help you out here. This is a similar material and the colour might also help. Feel free to call out a guess. (Do you have any ideas?)

[Image: magnified photograph of sand]

This one is a picture of sand, magnified 250 times, with bits of coral in it as well.

The reason I’m showing you so many magnified pictures is my own discovery of magnified photography, something which I’m slowly learning more about. A colleague recently gave me a book of photographs all along the scale, from tiny microscopic pictures of dust and sand to gigantic pictures of the whole universe.

The book is incredible because of that exact variation – from the smallest of things to entire galaxies – and it really puts things in perspective when, round about the first third, the photographer gets to humans and human life: something so small, almost microscopic, compared to the shots of the universe at the end.

As the astrophysicist Carl Sagan once put it:

“Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people.”

That alone is awe-inspiring.

So that’s a little bit from my own learning discoveries lately, and something I’ve been paying attention to – how small we are in the scheme of things, and again, how much we have to learn about our place in that giant map of stars.

So why keep learning after school or university?

You might be in a stable job, you might feel comfortable with what you already know, or you might not see the point of knowledge as a goal in and of itself.

But there are two good reasons to learn more as we get older:

One is practical. The author Richard Susskind tells us that with the rise of automation and robotics:

“There will be a [new] emphasis on being able to learn, develop and adapt rapidly as new roles and tasks arise.”

The BBC estimates that 40% of white-collar work will be automated in the next 20 years – and so there’s a need to learn quickly when things change, to pick up new skills.

Without the capacity to learn, automation might make us redundant. So there’s a need for a renewed focus on transferable skills, and the most transferable skill of all is the ability to teach yourself whatever skill you need to transfer.

Even without the risk from robotics, certain industries can still collapse over time, move overseas, change and so on. In uncertain economic times, the best thing to do is to practice learning new things.

The second reason is more personal.

There is something deeply fulfilling about becoming well-rounded. The more we know, the easier it is to embark on the project that Susan Sontag called “building a self”: becoming all that we can be and growing to our full potential.

Part of that process is learning to learn for ourselves. Someone who can learn new things about the world without having to rely on others is well-placed to develop their own character, their own identity and their own ideas.

But amid the day-to-day trappings of the twenty-first century, it is hard to imagine a moment where one can stand both inside and outside oneself at the same time – to see the world as it is, the beauty, the majesty, the entire course of civilization – without somehow getting distracted or bothered, or without being too specialized to see why anything outside one’s particular industry should matter.

Enter the flaneur – a word for the 19th-century wanderer who reflected, from within the major cities, amidst the bustle and mayhem, on what it is to be human. Walter Benjamin, quoting Poe, called the flaneur the “man of the crowd”: a person able to see the structure of society and learn about a great variety of fields even whilst caught up in daily life. Wandering through the streets, the flaneur attempts a partial “transcendence” of the moment they’re in, being at once inside the moment and outside observing it.

Basically, being a flaneur is the opposite of the modern idea of mindfulness. Instead of being in the moment, you take yourself outside of the moment, objectively looking at life and analysing it. Instead of meditating in your office, you think about your office: why you are there, and what you are doing. You explore the world, like Alice, and learn what you can by asking questions about what you find. You use curiosity to give rise to new questions, and you follow those questions down each rabbit hole no matter the consequence.

Being apart and observing in this way creates a form of artistic self-discovery. Instead of rushing with the crowd, the flaneur slows down, considers and thinks, contemplates, alleviates boredom of technology with a stroll through the park, a considered thought on the world, a beautiful psychological distance from the present moment reminiscent of the Romantics watching a sunset and contemplating the meaning of life. In some ways, the flaneur is a rebel against the industrial age: protesting noise through silence, crowded streets through contemplation, speed and greed through gradual movement combined with a rigorous indifference to what we now call mindfulness.

We have a lot to gain from this process, from analysing the world.

A second process involves generalizing.

The world we live in today is made almost entirely of specialists: people trained in their narrow industries, without a wider appreciation for a more diverse range of knowledge. Our education system all but compels people to specialize through degrees and various other qualifications. And this is quite a recent development.

From the 1970s to now there has been a huge increase in what has been called “academic inflation,” or the number of degrees required to do a job.

Think supply and demand. If an employer can hire someone with a degree or someone without, they’ll hire the person with the degree. This puts pressure on everyone to get degrees. But once everyone has one, the value of having a degree goes down.

A couple of decades ago, a high school diploma was sufficient to get a job in journalism or business. Now a bachelor’s degree is required.

Where a bachelor’s degree was sufficient to get a job in research, now a master’s degree is required. Where a master’s degree was required to get a job in university tutoring, now a PhD is required. The number of people gaining master’s degrees has doubled from the early 1980s to the late 2000s. The PhD, once a niche qualification, has become the definitive qualification of what it means to be an expert today.

The more degrees people have, the more they specialize: and so today we live in a world full of specialists armed with all of these degrees. (Myself included).

The loss we face from this, however, is the ability to ask the big questions of the world around us. The Ivy League educator William Deresiewicz puts it perfectly when he says:

“When students go to college, they hear a couple speeches telling them to ask the big questions, and when they graduate, they hear a couple more speeches telling them to ask the big questions. And in between, they spend four years taking courses that train them to ask the little questions – specialized courses, taught by specialized professors, aimed at specialized students.

“What we don’t have, in other words, are thinkers. People who can think for themselves. People who can formulate a new direction: for the country, for a corporation or a college [or themselves]… a new way of doing things, a new way of looking at things. People, in other words, with vision.”

But this idea is not very popular today; in fact, people are often frowned upon just for mentioning a broad-minded education.

In A Portrait of the Artist as a Young Man, James Joyce makes the point: “When the soul of a man is born in this country,” the protagonist Stephen Dedalus says of Ireland, “there are nets flung at it to hold it back from flight. You talk to me of nationality, language, religion. I shall try to fly by those nets.”

Today, we have other nets. “What are you going to do with that?” is a net. “Instead of finding yourself, how about finding a job?” is a net.

So is the term “self-indulgent”.

“Isn’t it self-indulgent to try to live the life of the mind when there are so many other things I could be doing with my degree?” “I want to travel for a while after I graduate, but wouldn’t that be self-indulgent?” These are the kinds of questions that young people find themselves being asked today if they even think about doing something a little different – or, even worse, the kinds they are made to feel compelled to ask themselves.

You’re told you’re supposed to go to college, but you’re also told that you are being self-indulgent if you actually want to get an education. As opposed to what? Going into finance isn’t self-indulgent? It’s not okay to study history because what good does that really do anyone, but it is okay to work for a hedge fund. It’s selfish to pursue your passion, unless it’s also going to make you a lot of money, in which case it isn’t selfish at all.

I think Deresiewicz hits on a lot of interesting points in his writing on this. Is it, in some way, selfish to pursue learning outside of a professional setting? Should education only be used for vocational purposes?

All of these are really interesting questions worth discussing.

Let’s look at my second point about learning how to question.

This is a scene from the 1973 film The Paper Chase. It’s a film about Harvard Law School, and the lecturer here is based on a real Harvard professor, Edward H. Warren.

What you notice straight away about this scene is that the lecturer keeps asking questions and that the student’s job is just to stand there and answer them. The student is not allowed to be inquisitive or explore the topic on their own. They must simply recite, by memory, what they have already learnt.

We’ve all heard about the detriments of rote learning and memorization – in schools, at university and in the workplace. Memorized knowledge has a tendency to go in one ear and out the other. It tends to actually get in the way of learning, despite everything your high school maths teacher might have told you. Ben Orlin, a high school maths teacher in the US, found this out the hard way recently. He writes that every year he would ask his students the same question:

       “What’s the sine of pi over 2?”

“One!” they all yelled out in unison.

Having heard the answer, Ben moved on to other maths content. Later that year, he realized that his students had simply memorized the answer by rote, and that they didn’t even know what sine was.
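For anyone who, like Ben’s students, only ever memorized the answer: sine is the height (the y-coordinate) of a point on the unit circle, and π/2 radians is a quarter-turn – 90 degrees – which puts that point straight up at (0, 1). Hence:

\[
\sin\left(\frac{\pi}{2}\right) = \sin(90^\circ) = 1
\]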

In Ben’s view, there are cleverer ways of memorizing facts than simple repetition. Repetition is slow and tends to make you forget the facts a few weeks later. The more advanced version of repetition is mnemonics and other artificial tricks – songs, acronyms and so on – that help you remember a certain fact. This is better than simple repetition, but not by much. As soon as you forget the mnemonic you’ll forget the fact.

Another method is analysis – looking at something so closely that you’ll remember a lot more about it. This works better than either repetition or mnemonics – and tends to stay in your mind for a lot longer.

But the best technique is simply building on your existing memory – seeing new knowledge as building blocks and connecting pieces of information together with stuff you already know. This lasts the longest in your mind, but is perhaps the most difficult.

In any case, in an age where we’re overloaded with information on a daily basis, the ability to memorize large tracts of information isn’t as useful as it used to be. There’s just too much information to memorize. With the flaws I’ve mentioned, memorization isn’t even necessarily the best way of learning something – there are other, better methods that help you develop a particular skill or ability rather than just relying on your memory.

The video I showed you earlier (of Harvard Law School) is of a method of teaching called the Socratic Method. But it’s not entirely what Socrates had in mind. In reality, Socrates would have wanted the students themselves to constantly ask questions of everyone else: of the lecturer, of other students and of society.

Socrates was famously put on trial and executed for being too “curious”. He asked too many questions and the Greek authorities killed him for it.

Here’s why he did it.

One day the Oracle of Delphi (basically a Greek prophet who spoke prophecy and truth on various topics) proclaimed Socrates the wisest of all men. Socrates recounts how he took this news with great puzzlement: he knew the oracle could not lie, and yet he was only too aware that he had no particular wisdom or specialized knowledge at all. In order to test the oracle, or to prove it wrong, Socrates sought out and questioned Athenian men who were highly esteemed for wisdom.

First, he interrogated the politicians, then the poets, and then the skilled craftsmen. In questioning the politicians, he found that though they thought they were very wise, they did not in fact know much of anything at all. The poets, though they wrote great works of genius, seemed incapable of explaining them, and Socrates concluded that their genius came not from wisdom but from some sort of instinct or inspiration which was in no way connected to their intellect. In the craftsmen, Socrates found men who truly did have great wisdom in their craft, but invariably, they seemed to think that their expertise in one field allowed them to speak authoritatively in many other fields.

After questioning people of great importance, Socrates finally understood why the Oracle had called him wise: because he was aware of his own ignorance. Only in knowing the extent of one’s ignorance, Socrates says, can we truly be wise. Or in other words, a wise man is someone who knows he knows nothing.

From this story I think we can take two things. Firstly, we should ask questions about the world in order to learn. Secondly, we should know the extent of our own ignorance, in order to be able to fill in the gaps of our knowledge. Thinking that you know everything is one of the surest ways of becoming incapable of learning anything new.

Or as Confucius said:

“To know is to know that you know nothing. That is the meaning of true knowledge.”

But what kind of questions should we ask to gain new knowledge?

One of the best and simplest questions to ask is “why?”

When Socrates interrogated people about their authority, he would frequently ask why something was the way it was: why they thought they were an authority figure, or why a particular poem was so profound.

We too can ask why things are the way they are, and in doing so, provoke new thoughts and inquiries about the things in life we take for granted.

Probing questions, such as: why is education the way it is? Why do we work for a particular number of hours a week? Why are our politicians so frustrating? These are very basic, but in asking them we can either learn something about how the world already works, or come to some new conclusion about how it should work in the future.

It is ironic, then, that the most basic of questions – drilled out of us at age five, when we’re told to stop asking it every five seconds – is one of the most profound and most meaningful in allowing us to learn something new.

But you have to have the guts to ask it.

Speaking of questioning things, let’s take a look at another picture.

Take a look at this picture on the screen…

[Image: the Thatcher illusion – two inverted portraits of Margaret Thatcher]

This experiment is fairly famous, so you might know of it. Here are two pictures of Margaret Thatcher, both upside down, with a few slight differences between them. The one on the right looks slightly more angry than the one on the left.

Let’s see what happens when we turn them the right way up.

[Image: the same two portraits, the right way up]

Our entire perception changes when we see the faces the right way up. Suddenly, the image on the right looks horrifying, and completely different from the one on the left.

This is known as the Thatcher illusion. It was first created by Peter Thompson, a psychology professor at the University of York, in 1980. What it shows is that we find it difficult to read or understand facial expressions when faces are turned upside down.

The second thing it shows is about frames, and in some ways, this is similar to the magnified images earlier. When we’re used to seeing something, we tend to categorise its features. When we look at a face, we’re looking for groupings of objects: noses, eyes, mouths and so on. It’s how we learn what a face is when we’re a kid.

But when we see something upside down, our brains are incapable of processing whether the grouping is correct or not. So we can do what Thompson did and Picasso a politician’s face, but so long as we turn it upside down no one will really notice.

The important thing to learn here is that our perception often tricks us into seeing something differently, and we have to concentrate really hard to set this right.

The best way to learn, as you might have guessed by now, is to fail. But the interesting thing is how you should fail.

A study at Harvard has shown that if we fail the first time we try to do something, and succeed the second time, we will be more likely to learn something than if we had initially succeeded. What’s important is that first setback. Sometimes we have to artificially challenge ourselves to begin with – ask a very difficult question or perform a very difficult task, before doing something less challenging.

I can assure you that you won’t see microscopic photography the same way after today, if you failed to understand the first picture, and succeeded with the second.

So the idea is to fail while trying to do something, and it is closely related to a subject called deliberate practice.

In his book Outliers, Malcolm Gladwell famously said it takes 10,000 hours to master a skill. He was famously wrong. He based his ideas on a well-known study of violinists who had practiced 10,000 hours to become masters in their field. But the original authors of that study said it wasn’t just blind practicing; it was what we call deliberate practice. The researchers found that those who specifically practiced their weakest areas, and set higher and higher challenges for themselves each time they practiced, learnt more. The key is to push ourselves beyond our comfort level. We learn the least in our comfort zone – which is also sometimes why we don’t learn much in office jobs. Because let’s face it, they can be pretty comfortable.

Once a solution does eventually arise, we feel satisfied.

  • A really good program which shows this is the Harvard Leadership Mount Everest Team Simulation. I did this several years ago, and the idea is that, in a team, you and a few others will digitally climb Mount Everest.
  • Everyone on the team is given a role – leader, photographer, weatherman and so on. Together, your team has to decide when to climb the mountain and when to stop for the night, using the information you’ve been given – while trying not to get killed.
  • So you might be told that it’s raining, and decide to stay in a camp until it stops. That sort of thing.
  • The simulation tests you in two ways: firstly, as an individual. Secondly, as a team. You get a top score as a team if you all make it to the top of the mountain and no one dies. You get a top score individually by completing your own job: the photographer gets a top score by getting all the right photos.
  • But it’s not so easy. There’s a twist. Each round, the team gets given a set of information to make decisions. Unbeknownst to you, everyone on the team gets given slightly different information each time. The weatherman gets told more information about the weather than anyone else, for instance.
  • Teams that don’t talk to each other will never find this out. They’ll climb the mountain thinking they’ve all been told the same thing, often relying on one person to make all the decisions. In other words, teams that don’t use teamwork will not reach the top.
  • To succeed, you need to find out that everyone has different information by talking it out, by communicating.
  • The second trick is that you can’t reach the top if everyone gets full marks individually. For the group to succeed (and to get the highest possible score), some people on your team need to make personal sacrifices on their individual scores (there’s a rough sketch of this trade-off just after the list).
  • This is the most interesting part. The simulation recognizes that failure is a key to success in a team or a project.
  • I had friends who did this challenge and didn’t do well, because they were so obsessed with succeeding that they didn’t realize that failure was crucial to success.
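To make the information-pooling point concrete, here is a tiny sketch in Python – my own toy illustration, not the actual Harvard simulation; the roles, messages and decision rule are all invented – showing how a team acting on only one member’s private information reaches a different (and riskier) decision than a team that shares what it knows:

    from dataclasses import dataclass

    @dataclass
    class Member:
        role: str
        private_info: str      # only this member sees it during the round
        individual_goal: str   # what earns this member a full personal score

    team = [
        Member("leader", "the summit window closes tomorrow", "reach the summit"),
        Member("weatherman", "a storm front arrives tonight", "reach the summit"),
        Member("photographer", "the best light is at dawn", "photograph the summit"),
    ]

    def decide(shared_info):
        """The team can only act on information that has actually been shared."""
        if any("storm" in info for info in shared_info):
            return "wait at camp"       # the safer call, but some personal goals slip
        return "push for the summit"    # looks right on partial information

    # A silent team: the leader decides on their own slice of information alone.
    print(decide([team[0].private_info]))          # -> push for the summit

    # A communicating team: pooling everyone's information changes the decision,
    # at the cost of the photographer's dawn shot (an individual sacrifice).
    print(decide([m.private_info for m in team]))  # -> wait at camp

The details are made up; the point is simply that a team which never talks never discovers what it collectively knows, and the best collective outcome can cost someone their individual goal.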

As Winston Churchill says:

“Success is going from failure to failure without loss of enthusiasm.”

Or as Henry Ford put it:

“Failure is the opportunity to begin again, only this time more wisely.”

In a letter to F. Scott Fitzgerald, the American writer Ernest Hemingway said that he wrote ninety-one pages of garbage for every one page of genius, and that without the ninety-one pages, there would be no genius page. He spoke of writing as acrobatics: the ability to make great jumps in thought, logic and skill – but an ability premised on the very failure of being unable to make the jump in the first place.

It is this counter-intuitive logic that we need to practice today.

So now we’re going to practice answering a really difficult question. This is a philosophical thought experiment called Mary’s Room. A thought experiment sets up a hypothetical situation that helps us learn something.

Mary lives her entire life in a room devoid of colour—she has never directly experienced colour in her entire life, though she is capable of it. Through black-and-white books and other media, she is educated on neuroscience to the point where she becomes an expert on the subject. Mary learns everything there is to know about the perception of colour in the brain, as well as the physical facts about how light works in order to create the different colour wavelengths. It can be said that Mary is aware of all physical facts about colour and colour perception.

After Mary’s studies on colour perception in the brain are complete, she exits the room and experiences, for the very first time, direct colour perception. She sees the colour red for the very first time. Does she learn something new about it – namely, what red looks like?

This one remains unsolved. So try to get to an answer.

There is no definitive answer to this problem – and that will frustrate you (if you’re anything like me). But it will also do something else. It will make you curious about the answer. And that’s where you start to learn. Your failure in answering the question brings out your curiosity, makes you ask new questions, and when you eventually get to an answer, you’ll be more likely to remember that answer.

Let’s take a look at failure and learning another way, through video games. Video games are really interesting because they’re controlled environments where you constantly fail and die while trying to do something.

Unlike real life, in a video game if you fail or die you get an immediate chance to try again. There is no direct consequence of failure. And as a result, a video game is a simulation of immortality: the prospect of being able to learn anything without any real consequence.

Real life obviously isn’t like that. If you fail at something and die, that’s the end of you. But small setbacks and failures still help us to learn – and the immortality effect of video games shows us just how useful failure really is.

The video game researcher (best job ever) Mark Griffiths says:

By failing at the same task repeatedly, something starts happening: you start learning to see things differently. You start trying to achieve the same task in different ways. Eventually, you start succeeding at that task (after multiple failures or deaths in the game world), and once you start winning you gain something: a new skill. You know how to ‘beat’ that level; you’ve mastered a very particular task by failing to do it. In other words, video games use failure as a means of teaching players new skills, and then reward them for learning those skills.

This is the key method of learning something for yourself: failure, success, reward. In that order.

When Mario runs and falls off a cliff, you instantly learn that falling off a cliff is a bad idea: a negative sound plays every time you die – indicating that you’ve done something wrong. Graphics pop up. You are teleported back to the start where you have to repeat everything over again (one of the worst types of punishment).

Some games are more explicit about this than others.

One you might know of is The Legend of Zelda. It’s a game where you solve puzzles using new items you find along the way. As you play, you’re given a new skill or tool to use, and are then immediately tested on the use of that tool (for instance, you might be given a boomerang and tested on whether you can throw it). In some ways, the game challenges players to fail in the use of this new tool – by posing challenges that are too difficult to complete on the first try.

Zelda works in the same way as that Harvard study I discussed earlier. When you first receive an item, you fail in using it, and then instantly have a few successes. The process of that initial failure followed by instant success is what allows players to learn the skill so quickly.

The creator of both Mario and Zelda is a Japanese game designer named Shigeru Miyamoto, and in his opinion, the key is for a game to have a

“sense of accomplishment… You have to have a sense that you have done something, so that you get that sense of satisfaction of completing something.”

That sense of reward is only possible if you understand the parameters in play: what will it take to win and what will it take to fail?

There are some games that are particularly bad at this. A few years ago, some American game developers were experimenting with putting AI into video games. Their idea was to create an interactive story where you can type (or “say”) anything, and the computer will respond in some way.

I don’t have a slide, so you’ll have to imagine this one. The game, Façade, puts you in the role of a friend of a couple, Trip and Grace, who have invited you to their New York City apartment for cocktails. There is some tension between Trip and Grace as you arrive, and from there the game leaves you to talk to the couple.

You can type anything to them, and they will react in some way. They have emotional reactions and so on, which are largely generated via AI, rather than coded into the game. If you start insulting them for instance, they will throw you out of their apartment.

The interesting part of the game is trying to work out what they will respond to, and there are natural limitations.

The problem with the game is that it is unclear how you can succeed or fail; the difficulty varies unpredictably with what you say, and sometimes the couple will throw you out of the house for no apparent reason. Without a clear sense of reward or failure, it is difficult to learn how to interact in the game: everything appears to happen for no distinct reason, and you are left confused and unwilling to continue.

Other games have artificial difficulty levels, where you can play on easy, medium or hard. This is, in some ways, even more unrealistic, because there is no fixed world that you enter into: real life does not have an artificial difficulty level.

The lesson we can take from video games is the idea of using failure and rewards to guide us in our learning. We can also create our own artificial difficulty levels for the activities that we do. Perhaps we create an activity that is particularly difficult the first time we do it, and easy the second, so as to encourage our learning.

Failure is difficult, however, and often the only way not to lose enthusiasm for learning about something is to set up some reward for all your hard work. At school we get gold stars and participation certificates, but on our own there are no gold stars. (Terrible, I know.) In many ways, we have to create our own rewards.

Again, games have something to teach us here. A group of software developers recently came up with a product called Habitica, and it’s possibly the strangest thing I’ve ever seen. Basically, you set tasks for yourself to complete – practice guitar, study physics and so on – and then you get rewarded for completing these tasks in classic video game style. You level up, get digital hats and clothes for your character, get weapons and so on – for some reason.

You progress in the game by improving your life and mastering your real-life habits – and should you slip up in life, you’ll start to lose some health points in the game. If you catch yourself in time and improve your habits you can get your health back. If you don’t, your online ‘virtual’ character will die and you will lose the game.
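To show how simple that loop really is, here is a minimal sketch in Python – my own toy version of the mechanic, not Habitica’s actual code, with made-up numbers for experience, health and level thresholds:

    class Character:
        """A toy, Habitica-style character: rewards for completed tasks, penalties for missed ones."""

        def __init__(self, name):
            self.name = name
            self.level = 1
            self.xp = 0
            self.hp = 50

        def complete_task(self, difficulty):
            """Reward: harder tasks earn more experience, and enough experience levels you up."""
            self.xp += 10 * difficulty
            while self.xp >= 100 * self.level:   # the threshold grows with each level
                self.xp -= 100 * self.level
                self.level += 1
                print(f"{self.name} levelled up to {self.level}!")

        def miss_task(self, difficulty):
            """Failure: slipping up costs health; run out of health and the character dies."""
            self.hp -= 5 * difficulty
            if self.hp <= 0:
                print(f"{self.name} has died. Game over.")

    me = Character("Learner")
    me.complete_task(difficulty=3)   # e.g. an hour of French practice
    me.miss_task(difficulty=2)       # e.g. skipped the physics reading
    print(me.level, me.xp, me.hp)    # -> 1 30 40

The exact numbers don’t matter; what matters is the pairing – a visible reward for every bit of effort, and a visible cost for letting a habit slip.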

So the idea behind this is to game-ify life: to use some of the most basic principles of game design – failure and reward – in our daily life to push us to achieve things we otherwise wouldn’t do. It works mainly because we affix some end goal to all of our effort. Sometimes it’s difficult to keep reading up on a topic without having this kind of tangible end goal. And so it’s worth thinking up ways of rewarding ourselves in a unique and interesting way whenever we learn something new.

Habitica does this with:

  • Ordinary achievements (trophies, digital rewards)
  • Advancement and level ups
  • Quests to complete
  • And common interest groups and challenges.

Part of what they have found is that by sharing a goal with someone else, we are more likely to complete that goal. So if someone else is also learning a new skill, or keeping to a set habit or routine, then they can inspire us to do the same.

But the most important aspect is the reward for all the hard work.

In real life, we have to create our own rewards for the learning we do. Maybe learning French wins us a trip to New Caledonia, and so on. We can establish our own reward systems in our own lives and compete with the friends around us – and by doing so, we can push ourselves to go beyond our comfort zones and learn about areas we otherwise would not have considered learning about.

As Malcolm Gladwell suggests –

“Autonomy, complexity, and a connection between effort and reward. These are, most people will agree, the three qualities that work has to have if it is to be satisfying.”

And we can say the same about learning.

To conclude, there are three points for you to take away with you today:

  1. Be curious. (PAUSE)
  2. Question things. (PAUSE)
  3. Fail often, and reward yourself for your successes. (PAUSE)

If you apply these three things to any area of life you wish to learn about, you’ll learn more than you otherwise would, and you’ll start on the pathway to learning for life.