# Book Reviews

## Statistics Done Wrong: The Woefully Complete Guide

**ISBN-13:** 978-1593276201

**Publisher:** No Starch Press

**Pages:** 176

Statistics underpin much of modern scientific research. They're our mathematical wonder tool, helping us identify groundbreaking findings and separate real effects from noise. That is, as long as we use the tool the way it's intended. And even then there's no guarantee that our results aren't biased. Statistics Done Wrong is a marvelous journey through the major statistical methods and their pitfalls and misuses. It's the kind of book I wish I'd read when I did my first research.

Over the past two decades I've taken three different courses in statistics. My first course was during my engineering studies. It was heavy on theory and gave me some understanding of the field. My two other courses were during my psychology studies. These courses focused on applied statistics: we were given the tools we needed to design our own research. The course literature reflected that and was tailored toward research. Even though the books did cover the basic math, our courses more or less skipped those parts. Sure, as a psychologist your prime interest probably isn't math. But it's my firm belief that it's impossible to make good use of a tool you don't understand.

Statistics Done Wrong would fit perfectly within a psychology
course. It kicks off with a good discussion of statistical
significance. Within most fields, statistical significance is
reported using p values, and p values are often misunderstood,
perhaps because they're counterintuitive. A p value starts
from the assumption that there isn't any true difference
between the groups you measure. The p value you finally get
is the probability of seeing data at least as extreme as
yours if that assumption holds. Note that it is *not* the
probability that there was no true effect of your treatment.
This is where research reports start to get interesting.
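As a quick sketch of my own (not from the book), here's what a p value actually measures in a simple two-sided z-test, built from the Python standard library only. The sample data, hypothesized mean, and standard deviation are all made up for illustration:

```python
import math
import statistics

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p value: the probability of a sample mean at least
    this far from mu0, *assuming* the null hypothesis (true mean
    equals mu0) and a known population standard deviation sigma."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(n))
    # Survival function of the standard normal, doubled for two sides.
    return math.erfc(abs(z) / math.sqrt(2))

sample = [5.1, 4.9, 5.6, 5.3, 5.2, 5.4, 5.0, 5.5]  # made-up measurements
p = z_test_p_value(sample, mu0=5.0, sigma=0.5)
# p says how surprising this sample would be *under* the null
# hypothesis -- not the probability that the null hypothesis is true.
print(round(p, 3))
```

The distinction lives in that last comment: the null hypothesis is a premise of the calculation, never its conclusion.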

First of all, we consider a result statistically significant
when p < 0.05. That limit is the ravine that may divide years
of work on one side from getting published in a respectable
journal on the other. What fascinates me the most is that
this border of statistical significance is completely
arbitrary! The cut-off value isn't based on deep empirical
tests or some mathematical property. No, it's just a rule of
thumb suggested by its father, R. A. Fisher, almost a century
ago. The second fascinating aspect, and one that Statistics
Done Wrong discusses in detail, is that a significant p value
may mean nothing. It doesn't say anything about the *size* of
the treatment's effect. So even if you've designed an
experiment that shows statistically significant results, a
small effect may leave your findings with no practical
significance.
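To see why a significant p value says nothing about practical importance, here's a small simulation of my own (not from the book): with a large enough sample, a trivially small difference between two groups still clears p < 0.05.

```python
import math
import random
import statistics

random.seed(42)
n = 200_000
# Two groups whose true means differ by a mere 0.02 standard
# deviations -- a difference that would rarely matter in practice.
control = [random.gauss(0.00, 1.0) for _ in range(n)]
treated = [random.gauss(0.02, 1.0) for _ in range(n)]

diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.pvariance(control) / n +
               statistics.pvariance(treated) / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p value

# The effect stays tiny, yet the sheer sample size drives p far
# below the 0.05 threshold.
print(f"effect = {diff:.3f} sd, p = {p:.2e}")
```

Statistical significance here measures our ability to detect the effect, not whether the effect is worth caring about.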

My favorite part of Statistics Done Wrong is the discussion of statistical power. Alex writes that much modern research is underpowered. The consequences are severe, as you risk missing real effects in your research. And it's not just about having enough participants; the research design itself impacts power. One mistake I've made myself is dividing participants into different groups. In my case, I did a study on working memory and chose to categorize participants into low, middle, and high groups before performing the real experiment. I based my results on an analysis of variance. What I didn't realize at the time was that I was throwing away statistical power; the working memory scores of the participants varied even within the same group. By categorizing, I threw away interesting differences in the data. That could have been avoided by using a regression method instead, taking advantage of the true variation in the scores.
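Here's a sketch of my own (not from the book or my original study) illustrating the power thrown away by categorizing a continuous predictor. Splitting made-up working-memory-like scores into high/low halves shrinks the observed correlation with the outcome, which is exactly that lost power:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation, standard library only."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

random.seed(1)
n = 20_000
score = [random.gauss(0, 1) for _ in range(n)]           # continuous predictor
outcome = [0.5 * s + random.gauss(0, 1) for s in score]  # true linear effect

median = statistics.median(score)
grouped = [1.0 if s > median else 0.0 for s in score]    # high/low split

r_full = corr(score, outcome)    # regression keeps the full variation
r_split = corr(grouped, outcome) # the split discards within-group variation
print(f"continuous r = {r_full:.2f}, dichotomized r = {r_split:.2f}")
```

The split predictor always correlates more weakly, so detecting the same true effect demands a larger sample, which is another way of saying the design lost power.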

Statistics Done Wrong is a quick read, yet it manages to go
deep and provide valuable insights into the fascinating world
of statistics. To me, the most amazing feat of the book is
that it kept me entertained all the way through. Alex pulls
this off with a blend of humor and real-world case studies of
misused statistics from published research. My favorite
example was the study of an open-ended mentalizing task that
turned out to obtain statistically significant results. Its
test subject? A dead salmon. Yes, it's brilliant writing and
guaranteed to keep you awake for the important lessons. This
is the best book I've read on the subject. Highly recommended!

*Reviewed May 2015*