Understanding Effect Size Through A Study On Homework

December 8th, 2015

Part 2

A continuation of Part 1. As I study John Hattie’s comparison of the effect size of direct instruction versus inquiry learning and problem-based learning, I first need to understand how to interpret his effect size calculations. Today we take a quick look at what his effect size for homework is and what it means.

To further clarify how to interpret effect size, Hattie describes his examination of meta-analyses of how homework affects achievement. He studied five meta-analyses, from 1984, 1989, 1994, 1994, and 2006, which together covered 161 studies and more than 100,000 students, all analyzing the effect of assigning homework on achievement. After synthesizing them, Hattie arrived at an effect size of 0.29 for homework.
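As a rough sketch of what “synthesizing and calculating” involves, an overall effect size like this is essentially an average of the effect sizes reported by the individual meta-analyses, typically weighted by how much evidence sits behind each one (this sketch weights by study count; Hattie’s exact procedure may differ). The d-values and the split of studies below are invented placeholders, not the actual figures from the five meta-analyses; only the arithmetic is the point.

```python
# Illustrative only: the d-values and per-analysis study counts are invented
# placeholders, NOT the figures from the five meta-analyses Hattie synthesized.
# The study counts merely sum to the 161 studies mentioned above.
meta_analyses = [
    {"d": 0.21, "n_studies": 20},
    {"d": 0.32, "n_studies": 45},
    {"d": 0.28, "n_studies": 30},
    {"d": 0.31, "n_studies": 40},
    {"d": 0.28, "n_studies": 26},
]

total_studies = sum(m["n_studies"] for m in meta_analyses)
weighted_d = sum(m["d"] * m["n_studies"] for m in meta_analyses) / total_studies
print(f"{weighted_d:.2f}")  # ~0.29 with these placeholder numbers
```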

This means that homework has a positive effect on student achievement, because the value is greater than 0. The question is: how much of a positive effect? Hattie attempts to explain it in the following ways:

1. Compared to classes without homework, the use of homework was associated with advancing children’s achievement by about one year.
2. Homework improved a child’s learning rate by 15%.
3. 65% of the effects were positive, and 35% of the effects were zero or negative.
4. The average achievement level of students in classes that were given homework exceeded the achievement of 62% of the students in classes where homework was not given.
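The fourth interpretation, at least, can be reproduced from a standard statistical assumption: if achievement in both groups is roughly normally distributed with equal spread, the average student in the homework group sits at the Φ(d) percentile of the no-homework group (sometimes called Cohen’s U3). The short Python sketch below is my illustration of that conversion, not a calculation taken from Hattie’s book.

```python
from statistics import NormalDist

def u3(d: float) -> float:
    """Cohen's U3: the share of the comparison group that the average
    'treated' student exceeds, assuming normal distributions with equal spread."""
    return NormalDist().cdf(d)

print(f"{u3(0.29):.0%}")  # 61%, close to the ~62% figure Hattie cites
```

The other readings (about a year of growth, a 15% faster learning rate) presumably rest on further assumptions about how much progress a typical year of schooling represents, which is part of where the ambiguity discussed below comes from.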

Once again, these do not all appear to be the same interpretation of the data, yet apparently they are. Overall, this sounds positive to me: according to this synthesis of meta-analyses, giving homework raises achievement. However, Hattie advises that this is actually a very small, barely noticeable improvement.

Hattie quotes a statistician who helped to originally craft the idea of effect size for the social sciences: Jacob Cohen. Cohen describes an effect size of 1.0 as being like the height difference between a person who is 5’3″ and someone else who is 6’0″. He is obviously illustrating that the difference is drastic and easy to see. An effect size of 0.29, however, would be akin to comparing someone who is 5’11″ with someone who is 6’0″. Thus, he is attempting to illustrate that although there is a difference for students who experience a 0.29 effect size, it is barely noticeable.

This takes us to the ambiguous nature of effect size. How can 0.29 be both such a minor change that it “would not be perceptible” (like the difference between 5’11″ and 6’0″) and also reflect advancing a child’s achievement by one year, improving a child’s learning rate by 15%, and having achievement levels exceed those of 62% of peers who had no homework?

So what are we to make of this? The effect size of homework is positive, so we should assign homework? The effect size is negligible, so homework is not worth the effort? Does a positive effect size mean a strategy should be used?

In the final installment on understanding effect size we will look at what Hattie deems the optimum value to aim for and why, and hopefully gain an understanding of which strategies he thinks we should use. However, if the interpretation of a 0.29 effect size is this ambiguous, then I do not hold much hope for moving forward.

Understanding Effect Size In John Hattie’s Research

December 7th, 2015

Part 1

As I begin to research and study John Hattie’s claim that Direct Instruction has a far greater effect size than Inquiry-Based Learning or Project-Based Learning, I thought it was best to first make sense of what is meant by “effect size.”

To understand effect size as John Hattie uses it, it is best to think of a sliding scale from -1 to 1 (although values above or below this range are possible). An effect size of 0 means that a particular method has no effect on achievement, a negative value means it actually reduces achievement, and a positive value means it increases achievement. To get specific, an effect size of 1.0 would mean that a child’s achievement advances 2-3 years, or that their learning rate increases by 50%.
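For concreteness, here is a minimal sketch of the standard calculation behind an effect size of this kind (Cohen’s d, the standardized mean difference): the gap between the two group means divided by a pooled standard deviation. The scores below are hypothetical and the code is my illustration, not code or data from Hattie.

```python
from statistics import mean, stdev

def cohens_d(treated: list[float], control: list[float]) -> float:
    """Standardized mean difference: (mean_treated - mean_control) / pooled SD."""
    n_t, n_c = len(treated), len(control)
    pooled_var = ((n_t - 1) * stdev(treated) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)
    return (mean(treated) - mean(control)) / pooled_var ** 0.5

# Hypothetical test scores, purely to show the arithmetic:
with_method = [78, 85, 81, 90, 76, 88]
without_method = [72, 80, 75, 84, 70, 79]
print(round(cohens_d(with_method, without_method), 2))  # 1.17 for these made-up scores
```

On this scale, 0 means the two group averages are identical, and the sign simply shows which group came out ahead.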

This is hard to believe, as an advancement of 2-3 years seems implausible considering Hattie lists several methods with an effect size of 1.0 or higher – like self-reported grades. “Self-report grades” is actually listed with an effect size of 1.44. Does this mean that employing this method would nearly double a student’s achievement, or raise student achievement by more than three years?

It quickly becomes clear that Hattie’s effect size calculations are open to too wide a range of interpretations. He goes on to say that an effect size of 1.0 can be interpreted in four different ways:

  1. A child’s achievement would advance 2-3 years.
  2. Their learning rate would increase by 50%.
  3. The correlation between a variable (like the amount of homework) and achievement would be approximately .5, or 50%.
  4. Students would exceed 84% of students not receiving the “treatment” or method.
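Interpretations 3 and 4 do follow from standard conversions, assuming normally distributed achievement and equal-sized groups. The sketch below is mine, not Hattie’s, but it shows where the 84% figure and the roughly .5 correlation come from.

```python
from statistics import NormalDist

d = 1.0

# Interpretation 4: the average student receiving the "treatment" sits at the
# Phi(d) percentile of the untreated group (normal distributions, equal spread).
print(f"{NormalDist().cdf(d):.0%}")     # 84%

# Interpretation 3: the usual d-to-correlation conversion for equal group sizes,
# r = d / sqrt(d^2 + 4).
print(round(d / (d**2 + 4) ** 0.5, 2))  # 0.45, i.e. roughly .5
```

By the same percentile conversion, the 1.44 effect size listed for self-report grades would place the average student at roughly the 93rd percentile of the comparison group.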

These interpretations do not appear to be the same, yet all of them could apparently be used to interpret a 1.0 effect size. Hattie attempts to make it more understandable to a layperson by paraphrasing Jacob Cohen, a statistician who helped to originally craft the idea of effect size for the social sciences. Cohen describes an effect size of 1.0 as being like the height difference between a person who is 5’3″ and someone else who is 6’0″. He is obviously illustrating that the difference is drastic and easy to see.

Confused yet? Hattie’s breakdown of effect size leaves a lot to be desired. The sad part so far is that his research attempts to quantify a large number of achievement differences and to compare the results of a long list of strategies and methods, yet teachers and administrators everywhere simply show the list of methods ranked by effect size without actually defining what those effect sizes mean.

I think if we look at the effect size of homework we can use that to further understand what Hattie is trying to get at. That is coming in Part 2 later this week.

All paraphrasing and quotations come from Visible Learning by John Hattie, 2009, pages 7-8.