Be Reasonable II - Arming Yourself

It sure appears that last week’s blog post about being a reasonable reviewer struck a chord! Thanks for all the attention you’ve given it, and for the positive feedback I’ve received. Since the methodological issues seemed to attract the strongest responses, in this week’s post I provide some resources you can use to develop a more nuanced understanding of the methods issues (and others) I noted, and to arm yourself for battle with unreasonable reviewers. This week’s post is short (the end of the semester sure is fun), but I hope you find it helpful.

First, I strongly recommend you read Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences (2009) and More Statistical and Methodological Myths and Urban Legends (2015), both edited by Charles E. Lance and Robert J. Vandenberg. I have spent more time with the first volume than the second, and used several chapters from it in my research design doctoral seminar, but both are really valuable. I think at least selections from them should be required reading for all doctoral students. Most people teaching statistics and methods need to read them, too, since they’re the ones who continue to perpetuate many of these myths. Beyond the myths about sample size, effect size, omitted variables, and self-report data, the books also take on issues such as interpreting moderating effects, assessing mediation, handling missing data, and cross-level analysis. One of the nice things about these chapters is that they share a common structure: each summarizes the issue, identifies the myth, discusses the kernel of truth in the myth, explains why it is nonetheless a myth, and then provides recommendations for addressing it. Between these two books you should have plenty of ammo to deal with many of the unreasonable reviewers you encounter, and judicious use of their arguments in your initial submissions may even help you head them off at the pass.

My second recommendation can help you specifically address concerns about endogeneity. At some level endogeneity is inherent in all social research and cannot be completely eradicated. At the same time, I would argue that more often than not it does not pose a significant problem. However, as I mentioned last week, because it can potentially be a problem, many reviewers assume it must be a significant problem in assessing your findings. One tool that has recently come into wider use for pushing back on these knee-jerk comments is impact threshold of a confounding variable (ITCV) analysis (Frank, 2000; for recent applications and more discussion, see also Busenbark, Marshall, Miller, & Pfarrer, 2019; Harrison, Boivie, Sharp, & Gentry, 2018; Hubbard, Christensen, & Graffin, 2017). ITCV analysis lets you determine how strong the effect of a hypothetical confounding variable would have to be to overturn your current findings. It calculates both how strongly an omitted confound would have to correlate with the IV and DV in question for endogeneity to be an issue, and what percentage of observations for each IV would have to be biased for endogeneity to be a concern. If you are a Stata user, the konfound command calculates these quantities for you. If endogeneity actually is a potential problem, the analysis will alert you to it, so you can search for additional control variables or take other steps to mitigate it. In most cases, however, it serves its most useful purpose: shutting reviewers up about endogeneity.
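If you want a feel for what is happening under the hood (or you don’t work in Stata), here is a rough Python sketch of the basic threshold calculation from Frank (2000). To be clear, the itcv function and its arguments are my own illustration, not part of any package, and it only handles the simple case of a positive correlation with the confound’s impact split evenly between the IV and DV sides; the konfound command does considerably more.

```python
from scipy import stats

def itcv(r_xy, n, n_covariates=0, alpha=0.05):
    """Rough Impact Threshold of a Confounding Variable (after Frank, 2000).

    r_xy: (partial) correlation between the focal IV and the DV
    n: sample size; n_covariates: number of other predictors in the model
    Returns the minimum 'impact' (r_confound_x * r_confound_y) an omitted
    confound would need to overturn significance, plus the implied component
    correlation if the confound is equally correlated with the IV and the DV.
    """
    df = n - n_covariates - 2                   # residual degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical t
    r_crit = t_crit / (t_crit**2 + df) ** 0.5   # correlation just at significance
    impact = (r_xy - r_crit) / (1 - r_crit)     # threshold impact for positive r_xy
    component = impact ** 0.5 if impact > 0 else 0.0
    return impact, component

# Illustrative numbers (not from any real study): r = .30, n = 250, three controls
impact, component = itcv(0.30, n=250, n_covariates=3)
print(f"ITCV (impact threshold): {impact:.3f}")
print(f"Each confound correlation would need to be at least ~{component:.3f}")
```

With those made-up inputs, the sketch suggests an omitted confound would need correlations of very roughly .45 with both the IV and the DV to overturn significance, and that is exactly the kind of number that takes the wind out of a vague endogeneity objection.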

I hope these tools help you avoid falling prey to blind statistical and methodological zealotry, and that they aid you in dealing with reviewers who have.

References:

Busenbark, J. R., Marshall, N. T., Miller, B. P., & Pfarrer, M. D. 2019. How the severity gap influences the effect of top actor performance on outcomes following a violation. Strategic Management Journal, 40: 2078-2104.

Frank, K. A. 2000. Impact of a confounding variable on a regression coefficient. Sociological Methods & Research, 29: 147-194.

Harrison, J. S., Boivie, S., Sharp, N. Y., & Gentry, R. J. 2018. Saving face: How exit in response to negative press and star analyst downgrades reflects reputation maintenance by directors. Academy of Management Journal, 61: 1131-1157.

Hubbard, T. D., Christensen, D. M., & Graffin, S. D. 2017. Higher highs and lower lows: The role of corporate social responsibility in CEO dismissal. Strategic Management Journal, 38: 2255-2265.
