Try this, it works! Written Error Correction


I've come across a few posts on written error correction recently. ELT Research Bites took on the topic in a two-part post, and earlier in the year Gianfranco Conti (PhD applied linguistics, MA TEFL, MA English lit, PGCE modern languages and P.E.) wrote one. Conti claims that marking students' books should be the 'least of a language teacher's priorities', but is he right?


Conti’s post starts with a reference to Hattie who has suggested that feedback is very effective. Conti notes that giving corrective feedback on writing has now been given top priority in many state schools. He then goes on to write that his post is a response to the numerous teachers who have written to him asking whether the time and effort they put into marking is justified. Conti states:

I intend to answer this question by drawing on thirty years of error-correction research, my personal experience as a learner of 14 languages and teacher of five and, more importantly, neuroscience and common sense.

Impressive stuff. 14 languages! 30 years of error correction research! AND neuroscience! However when we get to the research we run into a problem. 


What jumps out initially is the age of the references. Conti promises 'thirty years of error correction research' but sadly those 30 years seem to be 1974-2004. The most recent reference, Conti 2004, is to his own writing. In fact, the only post-2000 references are to his own writing. I would have liked to read the works in question to evaluate the claims made, but as Conti doesn't provide a reference list or hyperlinks to the texts referenced in the post, this wasn't possible.

Now, references don't have best-before dates, and to this day E still equals mc squared. That said, the age of Conti's references does present an issue in this case. For instance, Dana Ferris, possibly the world's leading expert on written corrective feedback (WCF), is only mentioned in relation to a 1999 paper. She has since written extensively on the subject, including three books (Response to Student Writing, 2003; Treatment of Error in Second Language Student Writing, 2002 and 2011; and, with Bitchener, Written Corrective Feedback, 2012). None of these are mentioned in the section called "What L2 error-correction research says".

What’s more, the research findings show a distinct change in the period Conti leaves out. For instance, Ellis and Shintani note that whereas in 1996 it was possible for Truscott to argue that the effectiveness of WCF could not be supported, this position is no longer tenable (2013:271). And as if spookily preempting Conti,  Ferris, in a ‘state of the art’ paper from 2004 notes that ‘since 1999, I have done a considerable amount of both primary and secondary research work on the issues surrounding error correction in L2 writing‘ (2004:50). 

A lot is missed if we leave out the last 15 years of research. In a recent meta-analysis looking at WCF, of the 21 studies that met the inclusion criteria, only four were published before 2004. Conti's post does not include any of the 17 remaining studies. This is important, as 'early (pre-Truscott, 1996) studies' suffered from design and execution flaws (Bitchener and Ferris 2012:50), perhaps indicating why 'studies published after the year 2000 showed a significantly higher effect size…than that of the studies published before 2000' (Kang and Han 2015:99).

So what does the research say about corrective feedback? 


Research tends to suggest that error correction is effective. Ellis and Shintani state that 'both oral and written CF can assist L2 acquisition' (2014:268). It has a positive effect on student writing (Ferris 2011, Bitchener and Ferris 2012). Kang and Han conducted a meta-analysis of current research and concluded that "written corrective feedback appears to have a moderate to large effect on the grammatical accuracy of L2 students" (2015:7). Research by Van Beuningen et al. (2012) also points to the efficacy of WCF, noting that it can improve subsequent pieces of writing. This contrasts with Conti's claim that 'both direct and indirect correction do not impact students' accuracy more effectively than no correction at all' (though it is perhaps possible that the bold font cancels out the research evidence).

It isn't clear from his post, but Conti may be talking about lower-level students. As Schmidt notes on the ELT Research Bites webpage, the Kang and Han meta-analysis found that '[WCF's] efficacy is mediated by a host of variables, including learners' proficiency, the setting, and the genre of the writing task' (2015). Notably, Kang and Han's analysis suggests WCF is less beneficial among lower-level learners.

And what type of feedback is best? 


Conti claims that direct correction is 'pretty much a waste of time' and that 'Indirect correction, on the other hand, is not likely to contribute much to acquisition as the learner will not be able to correct what s/he does not know' (section 2). But what does the research say about types of correction?

Direct or indirect? 

[Image: example of direct correction*]


Direct correction, that is telling the students exactly what is wrong and what they ought to write, 'is more effective than indirect' and direct feedback alone 'resulted in gains in grammatical accuracy' (Ellis and Shintani 2014:271). According to Shintani and Ellis, 'Bitchener and Knoch (2010), Hashemnezhad and Mohammadnejad (2012) and Frear (2012) all reported direct feedback to be more effective than indirect' (2015:111). In older studies no difference was detected, or indirect CF appeared superior (Ferris 2011:32), but recent studies report 'a clear advantage for direct forms of feedback' (Bitchener and Ferris 2012:74). As an interesting side note, teaching guides tend to promote indirect feedback (Ellis and Shintani 2014:279).

In conclusion, we can say fairly confidently that feedback of some kind is, in most cases, better than no feedback. Research suggests that even a ‘single treatment’, particularly if focused on a grammar point with a clear rule, is effective. (Ellis and Shintani 2014:271). 

[Image: example of indirect coded correction]

Coded or uncoded? 

Coded feedback is using some kind of code like ‘V’ for verb or ‘S/V’ for subject verb problems. These are usually accompanied by some kind of meta-linguistic explanation. Uncoded feedback, on the other hand, would just be highlighting that an error had occurred but not providing an indication as to what it was. The theory behind correction codes is that students will have to work a bit harder to work out what their errors are. 

[Image: example of indirect uncoded correction]

Interestingly, there is no evidence that coded feedback is superior to uncoded (Ferris 2011:34). Both teachers and students, however, believe that coded feedback is more effective (Bitchener and Ferris 2012:93), and there is some research supporting the idea that metalinguistic explanations make feedback more effective (Ferris 2011:100).

Focused or unfocused?

Focused just means concentrating on one type of error, verb forms or articles for example, rather than picking up different types of errors. The research is not that clear here. According to Ferris most researchers now believe focused feedback is more effective than unfocused (Ferris 2011:51, 2010:182). Shintani and Ellis (2015:111) are more cautious, noting that research has shown focused feedback to be effective ‘in the sense that it results in gains in grammatical accuracy in new pieces of writing‘ and adding that it is more effective than unfocused feedback ‘in some cases‘. 

So the jury is seemingly out on focused vs unfocused WCF. However, a study that compared focused and unfocused feedback found no difference between the two (Ellis et al. 2008), but both were superior to the 'no feedback' group, a finding which seems to contradict Conti's bold statement.

Doesn’t error correction demotivate students? 


Finally, a common complaint is that error correction demotivates or humiliates students. This is certainly possible. Conti quotes research from 1998 noting that 'an excessive concern with error treatment may affect students' motivation negatively'. Well yes, it may, but (ready the bold font) it also may not. Ellis and Shintani argue that the case for this is perhaps overstated, pointing to the fact that 'learners typically state that they want to be corrected' (2014:275), a point Ferris (2011:51) and Conti himself (see point 1) concur with. In my context (academic English writing) a study by Weaver (2006, N=44) suggests, like much research on this subject, that when students are asked, they say they like and want feedback. In fact, 96% of business students surveyed by Weaver agreed that 'tutors don't provide enough feedback'. Unless they actively enjoy humiliation (a hypothesis I'm sure someone could investigate), it seems unlikely that students mind WCF.

Conclusion 

Conti has written a great deal on this subject. His blog includes posts explaining how current essay feedback practices are questionable, '7 reasons why traditional error correction doesn't work', why asking students to self-correct is a waste of time and why teachers should not bother correcting errors in their students' writing. Clearly, there is a theme here (and no, it's not starting blog posts with the word 'why'). Conti doesn't think error correction is all that worthwhile. To be clear, he doesn't think it is worthless either, just that it shouldn't be given as much importance as it currently is. It would be really useful though, when making statements like "There is no conclusive empirical evidence that EC can be effective" (2.7), if he could explain why he chooses to only discuss evidence that is 15 or more years old. I don't know Conti's teaching context so can't comment on whether or not there is an overemphasis on WCF there. What I can say is that, on my reading of the evidence at least, 'there is a clear case for correcting learners' written errors' (Ellis and Shintani 2014:276).




*I realise ‘I like dogs and I like cats’ isn’t a great sentence. 

Review of ELT podcasts part 2

In my previous review of podcasts I wrote “2014 was a great year for EFL podcasts with several sprouting up like veritable fungi”. Well not only had I missed some, but also more soon sprouted up like…more fungi?

1. Lives of teachers

When I first wrote about podcasts Darren Elliott commented that I'd left his podcast out. I had! I was shocked to discover a TEFL podcast that had existed since 2010 and which started with an interview of Paul Nation as its first episode! Elliott has interviewed EFL luminaries like Mike Swan, Scott Thornbury and Jennifer Jenkins. The interviews are great and Darren is an excellent host. My only criticism of this podcast (apart from its irregularity) is that the sound quality is poor at times. It has improved recently, but the early episodes in particular were very quiet.

2. The TEFL Show

This show started in July and hosts Marek Kiczkowiak and Robert McCaul have already managed to pump out 16 episodes. They've covered a wide variety of topics such as 'Chinese v Western education systems' and 'product v process approaches to teaching writing'. It's quite 'loose' in style and of the 'two dudes talking' school of podcasting (Marek tells me he doesn't worry much about editing). At times the sound quality isn't great (the 'live from the Language Show' episode sounded like it was recorded in a submarine) but I'd still say it's well worth a listen. I'm a little biased, however, since they invited me onto one of their recent episodes and let me ramble on for about half an hour. It'll be interesting to see what happens with this podcast.



3. The TEFL Commute

'The Commute', hosted by Shaun Wilden and Lindsay Clandfield (and at times James Taylor*), is a rare beast: a TEFL podcast that isn't about teaching. Instead they deal with peripheral issues such as 'photocopiers' and 'translation'. My favourite episode so far was their examination of the movie 'Dead Poets Society' from a teaching perspective. I really enjoyed that one.

I would say that this podcast has far and away the best production values of the bunch. It has clear sections, good art, good editing and (usually) great sound quality. They generally avoid teaching but do say in their blurb that it "might crop up". A recent interview with Scott Thornbury which touched on 'example sentences' got me wondering if this podcast would be even better if it did actually deal with teaching issues.

4. SAGE language and linguistics (language testing bytes)

Glenn Fulcher started the 'Language Testing Bytes' podcast in 2010 and has so far produced around 20 episodes, so it's pretty infrequent. The episodes are also very short, with 26 minutes being the longest and 8 the shortest. What it lacks in quantity it makes up for in quality. Glenn is a leading expert in language testing and has guests like Alan Davies and Stephen Box discussing issues like 'aviation English testing' and 'rater bias in speaking assessment'. The podcast has been combined with one of SAGE's other podcasts, so the language testing is interspersed with 'child language teaching', which seems like a rather odd combination to me.


 
5. EdTechConcerns

Another podcast I really enjoyed was EdTechConcerns. It was also hosted by Shaun Wilden and Lindsay Clandfield (with Philip Kerr) and ran for 7 episodes. It focused on the use of tech in education and the potential problems associated with that. It was packed with interesting interviews and was a high quality production. I'm not sure that you can listen to it now as it doesn't seem to be available. Was it perhaps a trial run for the TEFL Commute?

  
So there's been a huge expansion in ELT podcasts, but a few seem to have died off. The Minimal Pair, which I talked about last time, and the KKCL podcast both now seem defunct. I still think there is room for more, so here are a few ideas:


1. A TEFL podcast that focuses on actually getting jobs in various countries. So each episode would be about a certain country/sector including an interview with someone there.

2. Similar to the above but getting a local teacher from different countries to talk about the particular language issues that students they teach have.

3. An applied linguistics podcast. There's a lot of good stuff in TEFLology and Language Testing Bytes, but it would perhaps be good to have a podcast about more academic issues with more in-depth discussion, though not so complex as to turn off listeners.

4. Academic reading circle. A podcast that discusses important/interesting ELT articles. One per episode. Even better if they could interview the authors.

5. A TEFL podcast with a female host.*

 Here’s looking forward to a 2016 of great podcasting! 

*As Shaun Wilden notes in the comments, the TEFL commute does in fact have a female host  Ceri Jones. So apologies Ceri!   

EBEFL asks part 2: The evidence strikes back…

One odd thing that happened after IATEFL was people suddenly assuming I was an EFL expert. I started getting questions about the efficacy of this or that method or the merits of vocabulary versus grammar. To be honest I generally have no idea, and while it may be expedient for me to cultivate an image of being a knowledgeable so-and-so, that's not the case. I'm not an expert in very much and, more importantly, other 'experts' are probably not as expert as we may think.

How do I know this? Maths. 
 
According to Fred Perry there are around 100 journals relating to SLA and language teaching at present. Each of these puts out around 3 or 4 issues a year (3×100=300) and each one has, let's say, about five articles apiece, which is about 1,500 articles a year. There is no way anyone could reasonably be expected to keep up with these and all the articles/books that have gone before them. Rod Ellis may be an expert on SLA but how would he fare in discussions of ELF, testing or corpus linguistics?
 
So in short I don’t know that much and nobody knows everything. These two points bring me to two requests:

No. 1. I'd like to try to help spread the 'ask for evidence' meme created by Sense about Science. If anything came out of the talk at IATEFL for me, it's the need for teachers to be less afraid of asking questions and challenging the status quo. I had a large number of emails from people thanking me and saying they'd always thought something was not quite right but never felt they could say anything. Some had even got into trouble for questioning 'established practice'. There is nothing wrong with asking the question 'how do you know that?' In fact, it's sad that educators should feel they can't. As long as you are not rude or patronising, it's reasonable to expect an answer.

So the next time someone claims that 'teacher talking time should be reduced' or 'grammar McNuggets are bad for students' or that 'students have nine different types of intelligence', politely enquire on what grounds the speaker makes those claims and be cautious of accepting 'my experience' or 'it's obvious' as answers. There may be very good reasons for the claims; then again there may not. Either way, you'll learn something. I've always been pleasantly surprised that people, who are probably far busier than me, have taken the time to respond to my emails. And that brings me to…

No. 2 I’d like to ask anyone who is an expert/knowledgeable in a particular field, be it motivation or vocab to get in touch. As I said earlier, it’s impossible for anyone to know everything and with that in mind I’d really like to start having some guest bloggers, particularly those who can offer teachers practical advice based on research. Ideally you’d be highlighting the research evidence that a certain practice or set of practices ‘work’ or conversely, don’t.
 
Let me know at rm190@le.ac.uk
 
 

  

Learner styles revisited: VAK-uous teaching

If you had to teach a lesson in which you were required to discover students' blood types or star signs in order to tailor lessons according to the results, you might feel that this was both inappropriate and a waste of time. You may even argue that knowing whether your student was a Pisces or O negative couldn't possibly help her to learn English because star signs, like blood types, have no evidence of validity. However, TEFL teachers all over the world routinely and enthusiastically engage in this kind of testing. What is more, this kind of 'vacuous nonsense' is promoted by leading TEFL authorities, is the subject of talks at IATEFL, is considered an essential part of CELTA training and is promoted in journals and on the websites of universities.

Despite having as little credibility as astrology, various brain-based myths exist in education. Perhaps the most widely believed myth is the idea that students will learn better when information is presented to them in their preferred learning styles. This myth was believed by 93% of teachers surveyed in one study (Dekker et al 2012), which is a remarkable number when it's noted that the idea of learning styles has never been shown to be valid.

What happened to OG?
A further problem with the popular VAK model is the choice of senses it opts for. VAK, sometimes known as VAKOG, stands for visual, auditory, kinaesthetic, olfactory and gustatory. These would seem to map onto the 'traditional senses' humans are supposed to have, but this is not as clear as it first seems. Firstly, there is the question of why the numerous other human senses, such as the sense of balance, pain, time and temperature, are missing. If we are happy to stick with the 'traditional senses' then it seems odd that 'touch' is substituted by the 'kinaesthetic' sense, which is the sense of motion. Further, why, in discussions of learner styles, are the final OG so often absent? It is perhaps unkind to suggest that the whole concept starts to unravel when we imagine catering for students whose 'dominant modality' is 'smell' or 'taste'. This idea has been lampooned by the satirical newspaper The Onion with an article entitled 'Parents of Nasal Learners Demand Odour-Based Curriculum'. The ludicrousness of this should be enough to stop VAK on its own but no, it trundles on, seemingly oblivious to its own internal contradictions.

One dominant style?
Just how a teacher can separate out a student who learns visually and one who learns kinaesthetically is very unclear to me. Websites suggest that kinaesthetic students like to move things around and touch them, but they are still going to have to use their eyes in order to do this. Another classic is to advise them to take notes (note: that is a University web site). The only problem is that anyone taking notes must also be listening and looking at what they’re writing, so how is this kinaesthetic?

Why the VAK love
Coffield et al. identified 80 different learning styles paradigms, of which VAK(OG) is only one. Others include:

· convergers versus divergers

· verbalisers versus imagers

· holists versus serialists

· deep versus surface learning

· activists versus reflectors

· pragmatists versus theorists

· adaptors versus innovators

· assimilators versus explorers

· field dependent versus field independent

· globalists versus analysts

· assimilators versus accommodators

· imaginative versus analytic learners

Now, are all these valid, or only some? If they're all valid then don't we have an ethical duty to find out our students' 'total' learning styles and test for all 80? If some are more valid, then which ones, who chose them, and how did they know? There is a clear problem here. Simply put, they can't all be correct. These criticisms raise the question: why are learning styles, particularly the VAK model, so popular?

Personalisation: the Forer effect
Whenever I get talking to another teacher about learning styles, which happens probably a bit too often for their liking, I invariably have a conversation that goes something like this.

Me: …and that’s why learning styles isn’t a particularly useful concept.

Teacher: hmmm yeah I see (pause)…I’m a really visual person, me.

This is all too reminiscent of commenting to a friend, with incredulity, on the popularity of horoscopes, only to have them nod and say 'well, a Sagittarius would say that.' Horoscopes might actually give us some insight into the popularity of learning styles. How true would you say the following is about you?

You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused capacity which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them.

Bertram Forer's students were told that this was an evaluation of their personalities but actually they all got exactly the same results. Despite this, his students on average rated the feedback as being very accurate (4.26 out of 5). In short, in the same way some people can see the face of the Virgin Mary in a piece of toast (how does anyone know what she looked like anyway?), many people can see something relating to themselves in something which could be true of just about anyone. Compare this with a learning styles questionnaire:

1. When I operate new equipment I generally:
a) read the instructions first
b) listen to an explanation from someone who has used it before
c) go ahead and have a go, I can figure it out as I use it

2. When I need directions for travelling I usually:
a) look at a map
b) ask for spoken directions
c) follow my nose and maybe use a compass

3. When I cook a new dish, I like to:
a) follow a written recipe
b) call a friend for an explanation
c) follow my instincts, testing as I cook

4. If I am teaching someone something new, I tend to:
a) write instructions down for them
b) give them a verbal explanation
c) demonstrate first and then let them have a go

5. I tend to say:
a) watch how I do it
b) listen to me explain
c) you have a go

Learning styles questionnaires are similar to horoscopes (and personality tests) because they seem to have been specifically designed for you. We are so fascinated with ourselves that things like this can bypass our critical faculties and head straight to our emotions. "I can't read maps! I always just follow my nose! OMG, this is totally me! I'm totally kinaesthetic!" As Pashler et al. put it, 'the idea of finding out "what kind of person one is" has some eternal and deep appeal' (2008:117).

You may also have noticed something missing from this list. It is reminiscent of the famous loaded question "when was the last time you beat your wife?" The questions presuppose you actually have a learning style. There are stubborn folk who won't play along and choose (above) something like A, B, C, C, B, which seems to render the whole thing redundant, but don't fear, they are labelled multimodal! I love the quote on that site, "Multimodal learners can take in information by using more than one method". Ah! You mean, like all normal human beings! I see.


The problem is basically that if you believe in, and accept, something, no stubborn facts are going to change your mind. If your back was cured after you went to a chiropractor or had acupuncture, then neither explanations of the placebo effect nor the mass of tests that have shown these two things to be ineffective is going to change your mind. Even something as ridiculous as horoscopes, which are clearly and demonstrably unfeasible, still has millions of believers and may even affect people's lives in serious ways. Astrology is in most newspapers daily and its 'experts' are rich and famous. Astronomers, on the other hand, have Brian Cox and Neil deGrasse Tyson. True believers will just dismiss all of this with a wave of the hand, and the common refrain, 'Well, I think it's useful.'

Back to Front
Trying to publish an article on learning styles is easy; trying to publish one saying they are not real is much harder. I dunno, call me old fashioned but when you're suggesting that something exists, isn't it up to you to provide the evidence? If I tell you I saw ghosts or aliens, you're going to want to see some convincing evidence. In the world of publishing, however, unlikely but interesting-sounding ideas (like precognition) will get you published ten times faster than something pointing out that that probably isn't true.

This is evidenced by the huge number of articles on learning styles out there. Here is a tiny sample of some of the articles I found relating to EFL and learning styles:

· The learning style preferences of ESL students

· Learning Styles in the ESL/EFL Classroom

· Match or mismatch? Learning styles and teaching styles in EFL

· The Relationship between Gender and Learning Styles in Internet-Based Teaching-A Study From Kuwait

· A cross‐cultural study of Taiwanese and Kuwaiti EFL students’ learning styles and multiple intelligences

· The learning styles of Japanese students

· Learning styles of American Indian/Alaska Native students: A review of the literature and implications for practice

· Bridging the cultural gap: A study of Chinese students’ learning style preferences

· Assessment of language learning strategies used by Palestinian EFL learners

Not only is it widely accepted, it also seems to be under some kind of magical protection. People write articles, like one recently in ELTJ, listing everything that is wrong with the idea, and then note "but we should continue to use them as they are a useful concept" (Hatami 2012). Harmer, among others, says pretty much the same thing. Call me old fashioned, but if we have no evidence something exists, despite decades of testing, we might want to think carefully about what that tells us. If we are to accept their conclusion then horoscopes and blood types should surely also be part of our teaching arsenal as, 'it is clear that they […] address self-evident truths' (Harmer 2007:93) and 'facilitate appreciation for the divergent approaches to thinking and learning' (Hatami 2012:2). Whatever that means.

The least worst solution

“It has been said that democracy is the worst form of government except all the others that have been tried.”
Churchill

Scott Thornbury usually comes off quite well on EBEFL. He writes (somewhat) critically about things like learning styles, reading skills and NLP. However, there is one quote of his which bothers me. When writing about the image problem TEFL suffers from in 'The Unbearable Lightness of EFL', he divides the world into the barefoot, 'sandals and candles' type of EFLer and the more academic type. He rejects both and offers us a "third way".

When Clemente wrote to ELTJ to criticise his article, he shot back with another article in which he wrote, "the fact is that ELT is at risk of being hi-jacked by men in white coats". But just who are these 'men in white coats'?

Thornbury is propagating the "mad scientist" myth common to much pseudo-science writing. Rather than a person, we have a uniformed symbol of something sinister. Shadowy, sinister 'experts' are putting mind control drugs in vaccines. Fluoride will give you cancer (if you believe this kind of thing, this is probably the wrong blog for you). But Thornbury doesn't ever explain why EFL researchers would necessarily be male, nor why applied linguists would need white coats.


Historically, and unfortunately, there has always been an odd artificial divide between the TEFL world and the applied linguistics world. There is a notion that researchers are off writing books and know nothing about the hard realities of classroom life, the 'chalk-face' of ELT, when they come out with their highfalutin theories on language acquisition. This couldn't be further from the truth.

The vast majority of lecturers and researchers started life as teachers and most continue to teach. My dissertation tutor Julie Norton worked in France, teaching business English, and in Japan. Another of my tutors, Glenn Fulcher, taught in Greece for years. Sure, these people went on to publish and become lecturers, but PhDs don't cause amnesia, do they?
Who are the white coat brigade?

But there is, it seems, not only antipathy towards researchers but also, at times, an antipathy towards research. A large number of teachers not only seem to distrust research, but consider personal experience to be far superior. Now, in the absence of evidence, experience is perhaps our only guide, but is it right to spurn research in favour of experience?

Evidence comes in varying degrees of reliability and so it needs to be looked at carefully. A study of 5 students over 1 week is going to yield less useful results than a study of 400 students over years. However, if we think "the only thing that matters is experience" then we find ourselves with a number of problems.
If you accept this argument then you basically give up the right to discuss anything. Or rather, discussing anything becomes pointless because the teacher with the most experience will de facto be the 'rightest', regardless of his/her opinion. If another person's equally long experience differs from yours, then who is right? This isn't education, or critical thinking, it's just demanding acquiescence.

The "I have more experience than you" card is basically a variant of the argument from authority. As such, all teachers would have to defer to older, more experienced teachers, regardless of how crap they might be. It is not an unfair position, in my opinion, that if someone has been teaching crap lessons for 30 years, this should count against them, rather than in their favour. Of course, we wouldn't know the lessons were crap because the experienced teacher would say that "in their experience" the lessons were great, and that would be the end of that.
Experience absolutely should not be discounted and it is often a vital tool in checking the validity of an idea. For example, I learnt a foreign language pretty fluently, as an adult, without ever knowing what kind of learning style I had, and this experience made me sceptical of the claims being made about learning styles (though it doesn't mean I was right, mind!). But this idea that experience is a reliable measure of something is a deeply flawed concept that can easily be shown to be wrong. At this moment in time we know there are teachers, good teachers, all over the world teaching using different and contradictory methods who are convinced, by what they see every day, that their chosen method really is working. Their 'experience' is telling them that their method is effective. Often, though, these approaches contradict each other (textbook or no textbook, grammar or no grammar, correction or no correction); simply put, they can't all be right.

At this point we may be tempted to turn to relativistic platitudes. We often hear that "it all depends on context" and to an extent that's true. Things we do in a kids' classroom will differ from those in an EAP setting. But this also opens us up to an uncritical free-for-all, and we shouldn't lose sight of the fact that all of our students are humans, using the same biological material (their ears, their eyes, their brains) to try to learn. Some things will work everywhere and others will work nowhere. Research can show us this, and call me an old cynic, but when I get sick and am admitted to hospital, I'll take 'tried and tested' medicine from men (or women) in white coats over something the local witch doctor knows, from his long experience, to be super effective.

Tim Harford, writing about Ben Goldacre’s recent push for evidence-based teaching, notes:

“Trust me, I’m a doctor” was never an excuse for not collecting evidence. And “trust me, I’m a teacher” is not an excuse today. But being a teacher is a superb vantage point for building an evidence-based education system. It is an opportunity that teachers need to seize

I would hate to think the antipathy towards research and the caricaturing of researchers is an attempt to retain authoritative power. Evidence, like democracy, might not be a panacea but it’s better than the other options.


The council of woo

'The Council' likes to promote itself as a rigorous and serious organisation, doing very serious testing and accreditation, but it can be quite partial to the odd bit of 'woo'. For example, in order to get a CELTA, trainees have to be well versed in "learning styles". This predilection for a bit of magical thinking is most evident on its web page.

Their article on NLP is littered with embarrassing factoids about my favourite TEFL pseudo-science. The article starts by telling us that NLP has "its roots in psychology and neurology", which is slightly misleading as its creators were studying maths and linguistics at the time. It has nothing to do with neurology and has been soundly rejected by psychology, which classes it as a pseudo-science. Not to fear though; ever the great shape-shifter, NLP has found a good home in management and education, two rich breeding grounds for 'woo'.

Writer Steve Darn goes on to tell us that NLP is "about the way the brain works" (which it most certainly isn't) and that it can help to train the brain (which it can't, because it doesn't work). Next he tells us it "is related to 'left / right brain' functions" (also known as the "left brain, right brain" myth) and that it shares something with…yes, you guessed it, "learning styles, multiple intelligence and other areas of research"! BINGO!

Hang on a sec though; let’s look at that last sentence again. “Learning styles, multiple intelligence and other areas of research”…one of these three is not the same; one of these three is different. Ah yes, research. Because research is where you have a theory and then you test it, which is the opposite of what learning styles and multiple intelligences do. They tend to subscribe to the “have a theory and then sell loads of books” method.

Darn then notes "NLP and related subjects have their sceptics, particularly in terms of general classroom applicability and how NLP is commercially marketed as a method of self-improvement." And as a creepy method of mind control?
"NLP has been labelled a 'quasi science' and criticised on the grounds of lack of empirical studies." That's the spirit, Steve, don't spoil it now…

“but there are sound reasons why NLP is compatible with current classroom practice”

This is what I like to call the 'nod to scepticism'. You list a load of criticisms and details as to why something has been rejected by science and then, with a wave of your hand, you dismiss all those problems. Fantastic! Perhaps we can try this when we teach?

"Well, this essay has numerous grammar problems, it's half plagiarised, it's not related to the topic and is 100 words too short. But don't worry about that stuff, this essay is compatible with an A grade."

 

I could go on and on about NLP but to be honest I can’t be bothered. The true believers will just retire to their familiar “well I know it works, I saw it with my eyes.” If you’re at all curious, don’t believe me, I advise you to go and check the literature. See if you can find any credible sources recommending NLP be taken seriously for anything.

If you can then you’ve done more than I managed in months of research. In short NLP either works and our knowledge of how the human brain works and how human languages evolved is wrong, or (and the safe money is here) teachers are signing up for expensive courses and wasting students’ (valuable) time with something which has the same credibility as Ouija boards and tarot cards.

Happy Birthday EBEFL! (about)

On the 19th of March 2012 I tentatively started this blog with a post about the word 'literally', not really expecting much of a reaction. One year, 41 posts, 120 comments and 2,400 hits later (mostly from Swedish spam bots), I'm constantly surprised and incredibly grateful for the overwhelmingly positive reaction this blog has received. I want to take the opportunity to talk about why I set up EBEFL. Firstly, I should say that I'm massively influenced by Ben Goldacre, and if you haven't read his blog or his book Bad Science, then I can't recommend it highly enough. Recently, he's been writing about evidence in education and it's well worth a read.

Why Evidence-based EFL?

Life is short. The older I get, the more I realize time is running out at a breath-taking pace. A common theme in my life is investing effort into something which turns out, in the end, to be a waste of time. An example of this is martial arts. I have always loved martial arts and have done them since I was a kid. I used to love martial arts movies like (based on a true story!) Bloodsport. Finally, when I moved to Japan, I had the chance to do the "real" thing and took up a jiu-jitsu class. Every week I went along and practised, and eventually got my black belt. My family were in awe, thinking I was some kind of dangerous killer. This was complete tosh, and a strong gust of wind could probably have knocked me over, but the idea that I was an "expert" was enough to convince them and I certainly wasn't about to deny it. I had almost completely convinced myself that this bit of coloured fabric had some actual meaning. It didn't.


The problem was that the martial art, like many martial arts, was misguided. It had a fixed method and it bent reality to fit with that. For example, if someone grabs your arm like X, then you twist it like so and hey presto! Or if someone punches you like Y, then you side-step and perform some killer move on them. Of course, in truth, and if you ever see a real violent confrontation, no one will ever grab you like X or try to punch you like Y. By and large fights are messy affairs, and if someone is intent on doing you harm, they'll probably do it before you know what's happened. People don't hold knives out as they approach, nor do they telegraph punches. (Incidentally, I recently found out that the 'true story' Bloodsport was based on was complete tosh.)

Martial arts may seem unrelated to TEFL but exactly the same problems exist. Experts are made with qualifications (DELTA black belts!) and are often believed unquestioningly. Techniques and methods are designed and then reality is forced to fit them. In TEFL, like in martial arts (and in health care, public policy, science and pretty much any aspect of human life) a healthy dose of scepticism will almost certainly end up leaving us all better off.   

I recently read a blog post that insisted people are naturally sceptical but this isn’t quite right. People can be naturally sceptical about some things, some of the time. Sagan gives the example of buying a car:

When we buy a used car, if we are the least bit wise we will exert some residual skeptical powers — whatever our education has left to us. You could say, “Here’s an honest-looking fellow. I’ll just take whatever he offers me.” Or you might say, “Well, I’ve heard that occasionally there are small deceptions involved in the sale of a used car, perhaps inadvertent on the part of the salesperson,” and then you do something. You kick the tires, you open the doors, you look under the hood. (You might go through the motions even if you don’t know what is supposed to be under the hood, or you might bring a mechanically inclined friend.) You know that some skepticism is required, and you understand why. It’s upsetting that you might have to disagree with the used-car salesman or ask him questions that he is reluctant to answer. There is at least a small degree of interpersonal confrontation involved in the purchase of a used car and nobody claims it is especially pleasant. But there is a good reason for it — because if you don’t exercise some minimal skepticism, if you have an absolutely untrammeled credulity, there is probably some price you will have to pay later. Then you’ll wish you had made a small investment of skepticism early.(read more of this excellent piece here)

So we can be sceptical, but often little tricks in our brains stop us from kicking the tires. The most powerful are perhaps confirmation bias and the argument from authority. People can be fooled by "experts" or can fool themselves because they really want to believe their new method is producing great results. This self-deception is often the hardest to overcome. Scepticism is not just for debunking those things you think are wrong; it is far more important for challenging those things you're sure about.

When people read this blog and come across something lacking evidence which they believe in, they usually all have a similar reaction. They tend to shrug and say either "well, evidence or not, I still believe this is useful and I'm going to continue to use it" or "well, teaching isn't science, it's art!" or something like that. When people see something they think works, cognitive dissonance kicks in and the rationalisations start: "I'm a good teacher so what I do in class must be good", or the more common one I encounter, "well, sure, this method might not work but I'm going to keep doing it because students like it/I have no alternative/it's good for tests", etc.

I hope that this blog will be a home for critical thinking. I hope it will stop teachers and students wasting time and money on things which don't or can't work. I hope it will challenge authority and, more than anything, get people thinking. If you don't agree with what I write, that's fine, but at least think about what you're doing and don't just accept what your CELTA tutor, the British Council or a famous, good-looking, tanned TEFL expert tells you. But don't believe yourself either, and certainly don't take my word for anything. Ask to see the evidence and, if there isn't any, why not try to make some?



Is guessing from context a load of XXXXXX?


Look at the following sentence. What do you think the missing word is?

Juan’s teacher is always angry because Juan never does his XXXXXX

I’m sure many of us, myself included, have seen or taught a sentence like the one above to introduce the skill of “guessing from context”, or trying to infer the meaning of unknown words by looking at clues such as the grammar or context. The logic of this approach seems to be that students spend far too much time looking at dictionaries and not enough time XXXXXX listening. They shouldn’t try to understand every word but should just “guess” at the meanings. 

But is it even possible to guess the meaning of a word? and if it is, how good are students at guessing? Just for fun, here are two sentences to try. I’ve been reading “Going Clear” this week and luckily, I actually came across two words I didn’t know. I’ve included both sentences because you might know one of the words but probably not both, and so one can be a ‘cloze’ and one a real word. Let’s see how XXXXXX well you do at guessing:

Holmes was an XXXXXX* with almond-shaped brown eyes.

The racist white cop who molests a tony upper-class black woman.

I decided to read the literature on "guessing" (so that you don't have to!) and quickly discovered two things. Firstly, there's a XXXXXX lot of it, and secondly, it's XXXXXX messy. But before I get down to the details, it's important to understand a key problem with this area. There is essentially a tension between the fact that humans must learn a huge amount of vocabulary by guessing/inferring (after all, no one actively taught you most of the words you know) and the fact that in a large number of situations (like the two examples above) guessing seems almost impossible. The type of word being guessed is clearly a factor: guessing the word "hammer" is probably a lot easier than guessing the meaning of the word "acknowledge". But what do the experts think about 'guessing' and, more importantly, what does the evidence say?

Paul Nation thinks it is "a very powerful and useful strategy" and is "worth spending a few minutes on" every week (2009:55). Unfortunately, he offers no evidence to back this up. Walter and Swan aren't so keen, describing it as "the alleged 'skill' of guessing unknown words from context" and adding that "research has shown, and it can easily be demonstrated practically that unknown vocabulary can rarely be successfully processed in this way" (2008:71). However, a different Walters (note the S) has carried out a meta-analysis of inferring in which she concludes that "it seems clear from research that…drawing students' attention to context when attempting to infer meaning of unknown words is worth the time effort in the language classroom" (2004:250).

There were two other meta-analyses carried out on this subject, (though it should be said, mostly relating to L1 learners) which completely disagreed with each other. Kuhn and Stahl reviewing the literature conclude that “if these studies represent where the field is now, then we cannot recommend instruction in context clues.” (1998:135) but Fukkink and de Glopper disagree noting that “even a small improvement of the ability to infer the meaning of unknown words would result in a sizable number of words learned.” (1998:451)

One major issue is that the research seems a bit XXXXXX sketchy at times, with a lot of it coming from L1. Add to that small sample sizes, lack of control groups, differences in testing procedure, etc., and it's not surprising that Kuhn and Stahl note "given the frequent recommendations that children be taught the use of context clues, the paucity of research evidence is quite disappointing" (1998:129). Almost 20 years earlier, Nation described it as being "widely acknowledged as a useful skill" while pointing out that there was very little evidence to back that sentiment up. In 1994 Knight reiterated this, noting "although in recent years, many researchers, teachers, and textbook authors have encouraged students to guess, to use inference as the strategy of first choice (30; 48; 49; 64), this advice appears to be based more on conjecture than on empirical finding" (Knight 1994:286).

But lack of evidence aside, are students any good at it? Well, they certainly like it. Schmitt refers to one study which "found that their university ESL students used inferencing in about 78% of all cases where they actively tried to identify the meanings of unknown words" (Paribakht and Wesche 1999 in Schmitt) and another study found students used it with 58% of unknown words (Fraser 1999 in Schmitt). However, despite their enthusiasm for guessing, students are not very XXXXXX good at it. For example, Grabe states:

Guessing meanings of words is not an efficient way of learning new words explicitly when it is used as a textbook or class exercises…In four studies Gough and Wren (1999) showed that when L1 students guess words from context they are accurate only 14 to 45 percent of the time. (Grabe 2009:73)

However, he doesn't write it off completely as a strategy, but notes a dictionary would yield better results. Incidentally, the results for dictionary use are clearer: "subjects who used the dictionary not only learned more words but also achieved higher reading comprehension scores than those who guessed from context" (Knight 1994:295). There was no evidence in other papers that guessing improved reading scores or that students were even able to remember words they guessed correctly.

The low rate of success for guessing is a common finding:

Nassaji (2003) found that of 199 guesses, learners only made 51 (25.6%) that were successful, and another 37 (18.6%) that were partially successful. This low success rate is similar to the 24% rate that Bensoussan and Laufer’s (1984) learners achieved. (Schmitt 2008:350)

Frantzen's results (N=11!) show students were only successful in about 30% of cases. She also reports Kelly's (1990) findings that, even when just one word was unknown in a text, "Contextual guessing alone seldom allows the reader to arrive at the correct meaning" (Kelly 1990 in Frantzen 2003:169).

Another problem is that it isn't always clear what exactly people are talking about when they talk about this. When teachers "teach" this skill, what exactly is it that they are doing? Likewise, what is actually being tested? Knight (1994) suggests that the huge disparity in results could be due to a disparity in researchers' testing methods. For example, some use XXXXXX (cloze) tests, while others use made-up words and others use real words. Fukkink and de Glopper suggest that giving advice on best practice with regard to this method is hard because "the empirical evidence is not unequivocal and the theoretical foundations of instruction are sparse or even absent" (1998:462). All of this adds to the confusion.


But, generally speaking, it seems slightly effective. Experimental groups almost always seem to outperform those not taught anything. However, these are largely the results of L1 studies, and "research in L2 contexts however, does not provide such strong support for lexical inferencing" (Nassaji 2006:397). Another caveat is that many L1 studies also show that mere "practice may be equally effective as instruction" (Fukkink and de Glopper 1998:452); that is, where a "practice only" group was included, they did just as well as those who were taught. Kuhn and Stahl suggest that "merely practising deriving words from context would be enough to make students better at deriving words from context" (1998:129).

There is, however, a very strong correlation between language ability and the ability to guess word meanings (Frantzen 2003). Nassaji (2006:394) adds that being able to understand the text "as a whole and most of the words in it" is a good indicator of success in inferencing, and this fits with Nation's finding that students should know 94% of the words in a text to be able to understand it. It's perhaps not XXXXXX rocket science to suggest that guessing from context is tough when you have no idea what the context is.


Kelly (1990) and Laufer (1997) question the value of guessing on the grounds that texts do not always provide adequate information to facilitate correct guessing of words. Stein (1993:204) suggests that "part of the problem is that the contextual clues themselves are largely insufficient to narrow in on a word's meaning. The language itself allows for many unavoidable possibilities in interpretation, often many more than wanted." This view is supported by Grabe and Carrell, who note that "what may appear to be transparent 'guessable' contexts to native speakers are often incomprehensible contexts to non-native speakers" (in Schmitt 2002:240). So you might think it's easy for your students, but it XXXXXX isn't.
Don’t believe me? Try it for yourself:

A biomic approach by integrating three independent methods, DNA microarray, proteomics and bioinformatics, is used to study the differentiation of human myeloid leukemia cell line HL-60 into macrophages when induced by 12-O-tetradecanoyl-phorbol-13-acetate (TPA). (Juan H-F et al 2002)

Guess away!


What is slightly odd, however, is that students who presumably know how to do this in their L1 mysteriously forget how to do it in their L2. This should be something we don't need to teach them. As Nagy (in Kuhn and Stahl 1998:133) notes, "Learning from context is a natural process, as well as the way in which we have learned most of what we know." Swan agrees:

Why should language students need training in making intelligent guesses? Are they less intelligent people, less good at guessing, than other groups in the population? Than language teachers, for instance? Is there any reason at all to suppose that they do not already possess this skill? And if they possess it, do we have any real evidence that they cannot in general apply it to learning a foreign language? And if we do not have such evidence, what are we doing setting out to teach people something they can do already?" (Swan 1985:8)

One suggestion is that lower-level students’ “processing power” is entirely taken up with trying to understand the language to the extent that when they improve they will be able to use this skill. “The ability to apply the skill is inversely proportional to the user’s linguistic competence. This, and all the other sub-skills and strategies, are things that the learners have mostly already got in their L1, but they can only progressively apply them to L2 as their linguistic abilities improve” (Stranks 2010)

So, in short: students like it, but they XXXXXX suck at it. They can be taught it, but the results aren't much better than just practising, and some words are just unguessable. The more words they know and the better their English is, the better chance they have of guessing correctly. All of which leads me to the XXXXXX astonishing conclusion that working on their English, rather than teaching them how to 'guess', might be a pretty XXXXXX good idea.



*Ingenue: A naive, innocent girl or young woman
tony: marked by an aristocratic or high-toned manner or style


How to create your own TEFL method

Disclaimer: All methods appearing in this work are fictitious. Any resemblance to real methods, living or dead, is purely coincidental.

1. Come up with a new theory

It doesn't really have to be new; it can be a rehash of old stuff with a new name if you like. Ideally it would involve doing the opposite of whatever it is teachers are currently doing. For example, if teachers are using textbooks, then your method should be textbook-free. And if teachers generally like to correct students' grammar, then your method should avoid that altogether. In fact it should expressly prohibit correction.

Teachers are constantly disappointed with the results they achieve. Like the overweight person making yet another doomed set of New Year's resolutions, teachers' sense of hope is strong. They believe that if they can just find the right method, it will unlock the secrets of English for their students. Whip up some interest, the thrill of the new, claim that your method is "revolutionary" and make extravagant claims about its effectiveness.

2. Give it an interesting name

Call your method something ear-catching and cool. If you can't do that, then come up with an approach which ideally can be reduced to a three-letter acronym like TPR, NLP, CLL, ELF, PPP or TBI. If you only have two letters, just toss in a meaningless word, like 'total' in Total Physical Response. Could we have HHPR (half-hearted physical response) or NMPR (not much physical response)?

The more complex the name, the better. Make it sound complex and scientific if possible; don't worry if you don't know the first thing about science, it doesn't matter! Just grab some sciencey-sounding words and paste them together. The more obscure the better. Take neuro-linguistic programming (NLP!), for example: even the practitioners state, with no apparent shame, that it has nothing to do with neuroscience or linguistics!

3. Don't really describe what it is

That is, tell people it’s a new ‘system’ or ‘approach’ (don’t call it a method!) that is concerned with the approach to humanistic and holistic autonomous learning spheres which takes account of students’ multiple intelligences and promotes student-centric learning. Or something like that. Alternatively just define it as whatever anyone says it is, like this:


A: It seems to me this is related to motivation?
B: Yes, motivation plays a part in it.

or

A: Is it related to teacher identity in the technological classroom?
B: If you want it to be.

3.5 Be a man

No method has ever been invented by a woman. 

4. Tell people it works
 
Nothing succeeds like success. In the same way, nothing works like things that people say work! Just keep telling people that your idea "really works", that the students "love it" and that you have seen great improvements, and eventually someone will become your follower and start saying all this stuff for you. After a few years you'll have a book out and be running training courses in your approach.


5. In case of emergencies

By this time your method will be getting quite popular. This is when the backlash begins. Don't worry about those spoilsports pointing out that your theory is meaningless; just carry on and be even more vague than you were before. Tell your critics that what you do is not measurable by their methods, but only by whole body and mind convergence and the nourishment of the soul! Let's see them try to measure that.


6. If that doesn’t work

Weird theories are oddly resistant, so don't worry. Even if some bright spark shows you to be a complete fraud, just nod sagely and say that "it's not for everyone" and that "teachers and, more importantly, students can decide for themselves what works and what doesn't". Another well-worn trick is to throw out some of the troublesome bits of the theory and keep the popular bits. Strangely, in EFL, when something doesn't work teachers are very reluctant to throw it out and would rather keep using bits of it, so you'll still be able to sell books and appear at conferences. Also, if you wait about 30 years your method will no doubt come back into fashion.


7. Sit back and count the cash

Now you can relax and let your followers do all the work for you. If you're as successful as someone like Chomsky, you can move out of the field altogether, reappearing with a book every now and then! Don't worry about being found out; the academic world is slow to process things and weighted towards the ones with the ideas, not those who point out they don't work.


So what are you waiting for? Get cracking with your new theory, and good luck!




Skimming and scanning

For those of you who are firm believers in teaching skimming and scanning, feel free to skim this post and answer the questions at the end…you have 1 minute…go! For those of you, like me, who are more sceptical…read on.

This is the second in my "reading skills" series, following up the piece on prediction. Like prediction, skimming and scanning are very attractive to teachers as they make the rather mysterious process of reading eminently teachable. Without "reading skills", teaching reading would resemble teaching the 'Cinderella skill', listening. But should we be teaching skimming and scanning at all? I will argue 'no' for two reasons: firstly, because skimming and scanning don't accurately reflect the way people usually read, and secondly, because most students already know how to do them.
 

Skimming and scanning are pretty popular in EFL, with hundreds of web pages offering lesson plans for skimming and scanning classes. St Martins University are keen on them, as is the 'Teaching English' website, and Harmer includes lesson plans with these skills as targets. Textbooks like Oxford's "Well Read" and "Headway" include these activities, and Grellet's book, which as Paran notes is probably responsible for the popularity of these skills in the TEFL world, has a whole section on "from scanning to skimming". Tellingly, though, Grabe doesn't mention them once in his book on reading in a foreign language, something which Kerr describes as "eloquent commentary" (2009:29).

Skimming and scanning are so pervasive that a large number of teachers (like the one pictured above, and me, for the longest time) have managed to convince themselves that this is actually how people read. But it isn't. At least, not usually. Usually we read one word at a time, as you're probably reading now.

Skimming and scanning are classed as "expeditious reading" (Nation 2009:70). Skimming is reading quickly for the general or "gist" meaning; scanning is trying to identify specific information in a text. The classic example was always a "name in a phone book", until phone books went the way of tape cassettes and chalk. Nowadays "bus timetable" is the most likely example. Not only is this a reading skill that doesn't need to be taught, it's a basic human skill that doesn't need to be taught. People who disagree should read "Where's Wally".
 

Gist, in layman's terms, means a general understanding devoid of specifics, as in "I wasn't really paying attention but I got the gist of what he was saying". But is this a teachable skill? Or even one that we should be teaching?

We may do reading activities like setting time limits for our students while reassuring them that they "only have to get the gist", but is this teaching them anything, or merely expecting them to apply a skill we assume they already have? Is a teacher who says "skimming is just trying to get the general meaning" teaching, or explaining a concept we expect students to already know? If it's the former, we have failed, as we haven't 'taught' them how to do it; we've just explained what it is. If it's the latter, why do we assume they don't know how to do this? After all, plenty of monolingual EFL teachers seem to be able to manage skimming without prior instruction; hell, they're so good they can even teach it!

Secondly, what exactly is reading for gist? If it were possible for me to read faster than I do now, then I would do it. But sadly I can't (so the pile of unread books and papers grows ever larger, staring accusingly at me). If a person reads for gist then they are necessarily losing something; otherwise they are just reading. If I read faster than normal, then I ignore parts of the text; I miss bits out. These bits may be important, they may not. I just take my chances.

Often with skimming, students are told to read the first and last sentences of a paragraph; or the first sentence; or the first and second sentences. Sometimes they are told to "run their eyes over the text", whatever that means. This advice might work at times, but at other times it may not. Would it work with the paragraph directly before this one? I think it possibly could for a test question like "what is this paragraph about?", but probably not for understanding the text.
 
I have heard it argued that these techniques could be useful for EAP students looking through texts and trying to find useful ones in a hurry, or trying to locate relevant sections in a book, but students will almost certainly not be doing these things under timed conditions. They'll probably while away many pointless hours in libraries reading the wrong books, much like native speakers do. It's also quite likely that once the "don't use a dictionary, just get the gist" bullies are out of the picture and the students successfully make it onto their courses, they'll sit there (sensibly, in my opinion) with a text in one hand and a dictionary in the other, slowly trying to make sense of whatever tortuously dull and impenetrable academic text they are unlucky enough to find themselves having to read.

 
In fact, and rather ironically, these skills seem to be most useful for doing English reading tests. That is, we, the EFL community, design tests which require students to employ reading skills they probably already know, and then 'teach' them these skills in order for them to pass the tests we wrote! Genius! Perhaps we should also invent upside-down writing tests and tests of underwater listening.

Don’t teach grandma to suck eggs

Skimming and scanning are, at times, very useful; so useful, in fact, that every person who comes from a culture with a written language already knows how to do them. Arguably, though, they are more useful to teachers than to students, as they give us something to teach. Thornbury notes:

 


Very quickly, skimming/scanning became an end in itself, and teachers were misled into thinking that, by having students skim or scan texts, they were developing the skill of reading. How often do you see this expressed as an aim in examined lessons: “To develop the sub-skill of skimming a text for its gist.”

The point he goes on to make, and one also made by Swan, is that students likely already have reading skills in their L1. "Much of the teaching of reading skills is predicated on the assumption that learners do not already possess them" (Swan 2008:266), but they almost certainly do, and we almost certainly don't need to spend time teaching them. Swan and Walter, in a piece called "Teaching reading skills: mostly a waste of time", refer to research which indicates that students will be able to use these reading skills automatically once their language reaches a high enough level of proficiency.

 
In defence of Skimming and Scanning
 

There aren't many defenders of skimming and scanning these days. One article, written by Phillip Kerr, could possibly be described as a "defence", though that wouldn't really be accurate, as Kerr lists the criticisms and then suggests some reasons why it might be OK to use them anyway:

1. They aren't very difficult and they don't take much time, and so they might motivate students by letting them feel they have achieved something.

2. Well-designed skimming and scanning activities can help students to decode and create meaning in a text.

3. The activities are short and, though perhaps not helping students learn to read, may give them some impression of the text.

4. Good for tests

 
Number four has already been discussed. Number two is the idea that these skills belong to the psycholinguistic model of reading, criticised by Paran and Grabe: sampling a text is not how most people read, most of the time.

 
But let me take a minute to talk about the other reasons. If you read the article, you'll notice Kerr wraps up his reasons in such apologetic language that you almost feel sorry for skimming and scanning and want to teach them just so they don't get thrown in a bag with some Cuisenaire rods and drowned. Kerr seems to be saying, "Well, look, we all know we don't need to teach these skills, but they're awfully quick and they might make the students feel good about themselves, and oh please! It's awfully cold outside; these skills have no place to go!"

But don’t feel sorry for these skills. Feel sorry instead for the poor students who are forced to do them, and the poor teachers filling up their DELTA lesson plans with skimming and scanning targets. Isn’t it time we stopped teaching students to do things they can already do?