Publication Bias
April 17th, 2012
science
Let's say I start out with 32 people. I pick a stock. I tell half of them it will go up and the other half that it will go down. Tomorrow 16 people will have seen me be right and 16 will have seen me be wrong. I give up on the second half and pick a second stock for day two. Of the 16 who have never seen me be wrong, I tell 8 it will go up and 8 that it will go down. I repeat this until day six, when I have one person who has seen me be right the past five days.
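The arithmetic of the con is just repeated halving; a minimal sketch:

```python
# The stock-picking con: halve the audience each day, keeping only
# the people who have seen every "prediction" come true.
audience = 32
for day in range(1, 6):
    audience //= 2  # the half I guessed wrong for is dropped
    print(f"after day {day}: {audience} people have only seen me be right")
# After five resolved predictions, exactly one person remains who has
# watched me call the market correctly five days in a row.
```

From that one person's point of view the track record looks like evidence of skill, because the 31 discarded predictions are invisible.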
This is an illegal con, but something similar happens in studies. I claim "Foobaritol lowers blood pressure in tall people", but maybe I ran studies on several different drugs and only published the one with the strongest results. How do you know I didn't run the numbers on several different ways of dividing up my study participants (age, gender, location, race, height, ...) and can make my claim about tall people only because I sneakily neglected to make a Bonferroni correction?
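You can see how easy this is with a simulation. The sketch below is hypothetical (the group sizes and the number of subgroups are made up, and it uses a simple two-sided binomial test rather than whatever analysis a real trial would run): it gives a drug no effect at all, slices the participants into 20 subgroups, and counts how often at least one subgroup comes out "significant" at p < 0.05, with and without a Bonferroni correction.

```python
import math
import random

random.seed(0)

def binom_two_sided_p(k, n, p=0.5):
    """Two-sided binomial p-value: total probability of all outcomes
    no more likely than the observed count k under the null."""
    pk = math.comb(n, k) * p**k * (1 - p)**(n - k)
    total = 0.0
    for i in range(n + 1):
        pi = math.comb(n, i) * p**i * (1 - p)**(n - i)
        if pi <= pk + 1e-12:
            total += pi
    return min(total, 1.0)

def null_study(subgroups=20, group_size=30, alpha=0.05):
    """One study of a drug with NO effect: each participant 'improves'
    by coin flip. Returns whether any subgroup looks significant,
    naively and after a Bonferroni correction (alpha / subgroups)."""
    naive = corrected = False
    for _ in range(subgroups):
        improved = sum(random.random() < 0.5 for _ in range(group_size))
        p = binom_two_sided_p(improved, group_size)
        if p < alpha:
            naive = True
        if p < alpha / subgroups:
            corrected = True
    return naive, corrected

trials = 500
naive_hits = corrected_hits = 0
for _ in range(trials):
    n_hit, c_hit = null_study()
    naive_hits += n_hit
    corrected_hits += c_hit

print(f"uncorrected: {naive_hits / trials:.0%} of no-effect studies "
      f"find a 'significant' subgroup")
print(f"Bonferroni:  {corrected_hits / trials:.0%}")
```

Uncorrected, well over a third of these completely null studies turn up a publishable-looking subgroup; with the correction, almost none do. That gap is exactly the room the sneaky researcher is exploiting.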
What if we required preregistration as a condition for IRB approval?