Are we in a 1914 scenario in East Asia? How often do guerrillas succeed? Did counterterrorism law erode national sovereignty? These are just a few of the important questions that political science has some bearing on. Yet barely a couple of months go by without an op-ed decrying political science's alleged lack of relevance to the outside world.
Political scientists are frequently told their research is too arcane, mathematical, and self-involved to be of possible value to anyone in Washington dealing with real-world policy problems. There's a grain of truth here. As international political economy whiz Kindred Winecoff observes, political scientists need to make a better “elevator pitch.” But here's the problem: there is a fundamental difference between what Max Weber dubbed science as a vocation and the subjective policy lessons we can draw from our study. Part of that gap is reflected in the difficulties that people with purely policy interests inevitably encounter in PhD programs.
From my own (minor) experience so far, it is grueling, necessitates the assimilation of difficult methodologies, and involves having to think about intellectual questions that many people would regard as hopelessly arcane. Even a good PhD program that directly tackles policy questions will likely demand the student grapple with questions of esoteric theory and method. And not all research that tackles highly abstract questions is policy-irrelevant. Highly technical analysis of game theory and economics generated useful policy applications, from the World War II convoy system to nuclear strategy and wargaming.
All of these advances began from the desire to grapple with difficult questions to produce knowledge, something many critics of political science research do not acknowledge. Take Greg Ferenstein, who penned an article supporting Eric Cantor's call to defund the NSF. His gripe is familiar. Political science is obscurantist, hyper-mathematical, and disconnected from the policy world. Political scientists don't do enough to make their research accessible to policymakers. Ferenstein wants a political science that his mother-in-law can understand, and he thinks starving academia of resources will motivate hungry researchers to do better. So is modern political science irrelevant to policy needs?
Contra Ferenstein, policymakers have thrown substantial money at the kind of research he regards as navel-gazing arcana. The RAND Corporation got a lot of mileage using what Ferenstein derides as "clever mathematical models" during the Cold War. I'm not sure that Jay Ulfelder, who worked for the intelligence community-funded Political Instability Task Force, would agree that his quantitative forecasting methodologies must pass a mother-in-law test to be valuable. And when New York University's game theory guru Bruce Bueno de Mesquita speaks, the CIA listens. Drew Conway, a man who could teach a computer programming course just as well as poli-sci 101, gives invited talks at West Point on analyzing terrorist networks. I don't think Ulfelder, Bueno de Mesquita, or Conway have sleepless nights pondering the relevance of their research to the government!
Ferenstein also laments the decline of grand theory that policy-makers could comprehend and the rise of empirical research. Perhaps the most ironic thing about Ferenstein's citing this trend as a bad thing is that the rise of empirical research actually does away with the worst tendencies of the "old" international relations. In the old days, people proclaimed their allegiance to warring theoretical tribes. Now the enterprising researcher will take whatever gets them from point A to point B. That, and political scientists are actually interested in whether or not their theories are empirically correct! It is difficult to see why this is bad for policy while endless theological debates over theories first advanced in the middle of the 20th century are good.
Of course method isn’t everything. The aforementioned Ulfelder is conversant in both important theories of the state and rigorous methods. My friend Aaron Frank has built rigorous computer simulations based on deep thought about the philosophy of science and human cognition. Dare I say that Frank, before generating his quantitative simulations, has to deal with policy-unfriendly concepts like "ontology" and "epistemology?" But Ulfelder and Frank aren’t hung up about whether their theories confirm realism, liberalism, or constructivism. And they are not afraid to use the best methods available to pursue their research, no matter how challenging those may be. On the more qualitative side, someone like Ryan Evans uses field work to produce fine-grained analyses of civil wars. Ryan's work, though leaning towards qualitative political sociology, is every bit as demanding as the most rigorous quant work. But Ryan is also informed by useful theory.
Ferenstein's "mother-in-law" test for policy relevance is also ridiculous. Economists looking to be policy-relevant don’t lose sleep over whether their mother-in-law gets their theories. Whether Timothy Geithner can pick it up matters. Much of what military historian (full disclosure: my military history instructor in Fall 2010) David Johnson writes about the future of land warfare is too esoteric for a mother-in-law without grounding in military history. But Johnson now heads a group that will likely decide the future of the Army. That's some policy relevance most academics can only dream of. And Ulfelder's mother-in-law would have to be down with Bayesian stats to get most of his work. Does the IC care?
I can't speak for Ferenstein, but I can't help but ask: when critics claim that political science research is too esoteric, mathematical, and self-involved, are they really unhappy that it has become more rigorous and empirical? There was once a time when a policy thinker could converse about the social sciences without making an effort to grapple with the methodological tools and theories that underpin those disciplines. That era is over and won’t be coming back anytime soon. The policy-academic bridge certainly does need fixing. But this requires effort and understanding from both partners.
First, the blunt truth is that all of the policy-relevant research in the world won’t persuade a policymaker to deviate from something they ideologically believe in. And why should research alone dictate fundamentally political decisions? Politicians are not engineers or technocrats. Who believes political science research alone should decide the question of abortion? For supporters and detractors alike, it’s a question that touches on the most basic questions of human life and women’s autonomy.
Finally, op-eds like the piece Ferenstein penned offer no constructive advice for better academia-policy harmony. Political scientists already invest considerable effort and intellectual energy trying to “bridge the gap.” It’s time for the policy world to reciprocate. Yes, state of the art political science methods and theories take time and effort to learn. But couldn't a policymaker hire a political science grad to boil down research of interest to a few bullet points? There’s an army of Hill staffers already at work helping their bosses get smart on policy areas that Senator John or Jane Doe have to vote on. There really are many easy, commonsense solutions to the problem if one seriously thinks about it.
The academia-policy divide isn’t unbridgeable. Both sides just have to respect each other’s needs and culture. Policy enthusiasts should acknowledge and respect the inevitably arcane rigor needed to make good political science research instead of bashing it for not being immediately comprehensible to one’s mother-in-law. Academics must understand that policy makers have unique needs, don’t have tenure to insulate them from the consequences of getting an issue wrong, and make choices about fundamentally normative questions that science cannot conclusively answer.
More people get this than alarmist op-eds may lead us to believe. After all, love of Middle East studies and political violence research spurred an American University of Beirut and King's College London grad (who is now at the Pentagon on a one-year International Affairs Fellowship) to found this very blog. From late high school to the beginning of my PhD program, reading Andrew Exum's blogging on relevant political science research motivated me to view policy and academia as complementary. And surely I'm not the only one. Maybe the "policy relevant" crowd can take a hint too.
Yesterday, I watched some folks describe the United States as a "police state" because of some allegations of police brutality in Chicago. Without either defending the Chicago police department or agreeing with its critics, I tweeted that those who describe the United States as a "police state" have never lived in or visited an actual police state. I then watched as leftists went berserk in response.
As regular readers of this blog know, I believe language matters -- as does the precision with which we use it.
So let's first explore the term "police state." Political science literature has a lot to say about authoritarianism and police states, but here is the plain vanilla definition from Merriam-Webster:
a political unit characterized by repressive governmental control of political, economic, and social life usually by an arbitrary exercise of power by police and especially secret police in place of regular operation of administrative and judicial organs of the government according to publicly known legal procedures.
Now, by that definition, I think most observers of U.S. politics and comparative politics would be hard pressed to classify the system by which we govern the United States as a police state. But let's look at the United States in comparison to other nations using the Freedom House and Polity IV surveys. The 2012 Freedom House survey (.pdf) ranks the United States as among the most free countries on earth with respect to both political rights and civil liberties. And here is the 2010 Polity IV country report for the United States (.pdf), which raises questions about some post-9/11 legislation passed in the United States (and also this crazy thing called the Electoral College) but otherwise gives the United States a clean bill of democratic health.
None of this is to say that the United States is perfect or that violations of civil liberties do not occur too often for any of us to be comfortable with. And yes, I realize that a white guy such as myself shouldn't take his largely positive interactions with law enforcement authorities as being representative of, say, the experiences of African-Americans who live in my neighborhood.
At the same time, though, when polemicists and activists on both the left and the right so carelessly throw around pejorative terms like "police state" and "fascism" and "totalitarian," the only thing they accomplish is to strip these terms of any real meaning so that when we really do need them, they are rendered useless.
After all, if the United States is a police state, can Syria really be that much worse?
Longtime contributor to the blog Erin "Charlie" Simpson is back with a guest post for the ages...
Instead of going line by line through MAJ Thiel’s SWJ paper (which I characterized on the Twitters as “horrible, terrible stats work”), I’d like to offer some general guidelines for policy-relevant, conflict research. As Ex will tell you, I am not an Iraq expert. But I know a little bit about COIN and another bit about quantitative research.
1) Big Claims require Big Methods. I’m not one to argue that sophisticated statistics can answer all of our research and policy questions. But if you want to wade in on one of the biggest (conflict) policy debates of the last 10 years, you best bring a lot of stats firepower. Correlations among yearly, national data won’t cut it. There are people who do this for a living: Ivy League professors, Army ORSAs, DIA analysts, DARPA geeks, think-tank types. And they do it with care and sophistication. Learn from them, understand the data and model choices they make, and realize the complexity and contingency of the problem at hand. We cannot adjudicate these complicated causal claims with descriptive statistics.
2) Avoid Sigacts. Sigacts suck. I’m sorry. But they do. They are a function of our presence. More troops (outside of more bases) leads to more sigacts? <sarcasm>You don’t say!</sarcasm> Sigacts are as much a measure of our presence as they are of violence.* (There are also a ton of non-violent sigacts reported. So make sure you knock out those key leader engagements and non-battle injuries before you run your analysis.)
*And as we know, COIN isn’t just about violence (if you’re a Kalyvas person, you know violence has a non-monotonic relationship with control such that low-violence doesn’t always mean good things). So, sigacts are a bad measure of violence and violence is an unreliable measure of stability or “progress” or whatever. But that’s a slightly different debate.
What I'm trying to say here is: Moneyball that shit and find the COIN version of on-base percentage or WHIP.
3) Correlation is not causation. We all know this. But did you also know that low correlation does not preclude findings of causation? Two variables may appear to have a low correlation – until you control for various background conditions. Sometimes this can be tested with jury-rigged chi-square analysis (stratifying one of the variables of interest into various segments -- for example, divvying up Iraqi provinces by #’s of battalions present in 2006 and seeing if there are statistically different levels of violence in 2007). But the only real way to determine which variable among many has a causal effect is with something like regression analysis – correlation won’t cut it.
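Charlie's stratification idea can be sketched in a few lines. This is a toy illustration with invented province counts, not real Iraq data: bucket provinces by battalion presence in 2006, then test whether next-year violence levels differ across the buckets.

```python
# Hypothetical illustration of the stratified chi-square approach above.
# Rows: troop-presence strata (low / medium / high battalion counts).
# Columns: counts of provinces with (low, high) violence the next year.
# All numbers are invented.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [12, 4],   # low presence
    [7, 9],    # medium presence
    [3, 13],   # high presence
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

A low p-value here only says the strata differ in their violence levels; as the text notes, sorting out which variable among many actually drives the difference takes something like regression.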
4) Model specification matters. Ok, so now you want to run some regressions? Which kind? For most conflict data, you won’t want ordinary least squares (OLS). In the parlance of our time, you’ll need to consider the underlying “data generating process.” How do the data come to be observed, and which models’ statistical assumptions best match that process? In general conflict researchers should evaluate various time series, time series-cross sectional, and count models (ie, Poisson) for their work.
5) Level of analysis matters more. How do you plan to aggregate your data? In many instances conflict researchers will want to look at how violence changes across time and space. Global investigations of violence (think Correlates of War or Fearon-Laitin style research) will look at the country-year. That is, annual, national-level data. This data is usually pre-collected and easy to work with. But if you’re focusing on Iraq or Afghanistan, you need subnational data. And while these wars are long, 5-10 years doesn’t generate enough data points for a useful time series. The more dynamic the conflict, the more detailed you want the data. So you need to dig down to province-month or district-week. (In Afghanistan, sigacts are relatively stable at the district-week level. If you’ve got some data or computing horsepower, you can even carve up the whole country into a 10 km x 10 km grid and go from there.) Unfortunately, that means your other variables need to be measured at the same level, which can be tricky. But them’s the rules.
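The aggregation step itself is mundane but worth getting right. A sketch of rolling hypothetical event-level records up to district-week counts with pandas (the district names and dates are invented):

```python
# Roll event-level records up to a district-week panel. Each row of
# `events` is one incident; the output counts incidents per district
# per week, which is the unit of analysis discussed above.
import pandas as pd

events = pd.DataFrame({
    "district": ["Marjah", "Marjah", "Nawa", "Nawa", "Nawa"],
    "date": pd.to_datetime(
        ["2009-07-01", "2009-07-03", "2009-07-02",
         "2009-07-09", "2009-07-10"]),
})

events["week"] = events["date"].dt.to_period("W")
panel = (events.groupby(["district", "week"])
               .size()
               .rename("incidents")
               .reset_index())
print(panel)
```

In real work you would then merge your covariates onto this panel at the same district-week level, which is exactly where the "tricky" part comes in.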
6) Regression has limitations, too. If you’re doing some sort of “policy evaluation” chances are we didn’t randomly assign the policy “treatment.” What does that mean? That means we probably spent development money in the most violent areas. Or established joint-security stations in safe areas first. Or otherwise implemented a policy based on the very thing you’re trying to study. From a causal inference perspective, that’s a humdinger. One set of solutions is to “match” or pair districts based on their “propensity for treatment,” which can deal with some of the non-random assignment problems. (See Gary King’s paper on health policy evaluation in Mexico for a good example.) There is a lot of good work that needs to be done in the realm of conflict research. Let’s figure out how to do it well.
(Those interested on the academic side may want to get involved in the Minerva-grant funded Empirical Studies of Conflict project run by Jake Shapiro, Eli Berman, Joe Felter and Radha Iyengar. Otherwise, talk to me about cool kids at Caerus Associates.)
From Abu Muqawama: check out Mike Few in SWJ while you're at it. Also, there is a good conversation on Twitter between @drewconway, @charlie_simpson, @abumuqawama, @chrisalbon, @jay_ulfelder and others on this post.
Andrew Exum touches on an academic issue here worth mentioning: that the events in Egypt have been poorly predicted by North American academia, perhaps because political science departments largely focus on quantitative analysis. Andrew, as ever (and I blame living in Washington as well as his southern roots for this), is very polite about not bashing the "quants", as he calls them.
Personally, I would be more blunt. Quantitative analysis and the behaviouralist approach of most American PoliSci academics is a big steaming turd of horseshit when applied in the Middle East. Statistics are useful, yes, when you are in a country that has relevant statistics or where polling is allowed. But things like electoral statistics tell you very little about the political reality of dictatorships, because the data sets are inherently flawed, since they're either unavailable, fraudulent, or irrelevant.
This is not a new problem, right? Garbage in equals garbage out. If the data you are plugging into your analysis is unreliable, your conclusions are not going to pass muster -- not with the political scientists using "soak and poke" methods or, for that matter, any dude you happen to pass on the street. A buddy of mine commented this is less about the divide between quantitative methods and qualitative methods as it is an epistemological debate. But any debate over methods is ultimately a debate over epistemology: how does the researcher "know" what he or she knows? If he or she is relying on laughably poor data harvested from a semi-closed police state, Issandr points out, he or she can't claim to know much at all. All of this has direct relevance to the study of conflict, of course. Conflict zones are really difficult places to gather reliable data. On the one hand, the U.S. military harvests all kinds of data from its wars. But on the other hand, studying the war in Afghanistan, I have come to trust the data less and less over time and the more I have asked questions about how the data was collected. The numbers look neat on a PowerPoint slide, sure, but when you start asking hard questions, they are less impressive.
(This all reminds me of that quote/warning about how all government statistics are ultimately generated by a civil servant somewhere writing down whatever the hell he pleases on a sheet of paper. Help me out with the exact quote, readers.)
I have been greatly entertained by the debate between Daniel Drezner and Apoorva Shah over the question of whether the situation in Egypt says anything about the strength of political science in the United States. I encourage you all to read what the two of them have written, but there is something going on here that neither Drezner nor Shah deal with. I was standing in line a few hours ago, waiting on a sandwich at Potbelly's, when I read this, from Greg Gause, in a volume of the International Journal of Middle Eastern Studies last year:
Over the past five years, from volume 37, number 1 (February 2005) to volume 41, number 3 (August 2009), IJMES published thirty-seven articles that deal with politics in the contemporary Middle East, broadly understood. This is my count, of course, and others might add or drop some articles. I define contemporary as post World War II and have a relatively expansive definition of politics. My count does not include short features, only full articles.
Eighteen of the authors of these articles are identified as having academic appointments in political science departments, fewer than 50 percent of the total (some of the articles are co-authored, so there are more than thirty-seven authors involved). The other authors are concentrated in the discipline of anthropology (with one sociologist and one historian) or have appointments in religious studies or Middle East studies departments. Of the eighteen political scientists who have published in IJMES during this period, only eight were employed in North American universities. The majority of the political scientists appearing in IJMES during this period have appointments in European or Israeli universities; one political scientist working at an Arab university appeared in the pages.
Although those North American political scientists who did publish in IJMES during this period did some very good work, and it was my pleasure to review many of their articles, these numbers lead me to the troubling conclusion that there is a growing gap between the professional requirements for disciplinary success in political science in North America and the standards and forms expected of the best Middle East studies work. Increasingly, particularly at the best research universities, advancement in political science requires work concentrated in formal and statistical methods. There are, of course, exceptions. Some political scientists working on the Middle East who use postpositivist methods have secured leading jobs at top research universities. There is a refreshing recent trend toward encouraging mixed-methods research in dissertations, with large-n statistical and/or rational-choice formal mathematical components supplemented by case studies based on field work and more classic discursive and qualitative approaches. However, professional advancement in the field is driven by publication in journals that are heavily weighted toward quantitative and formal methods. In the subfield of comparative politics, where most Middle East work is done in the discipline, there are also strong currents arguing that cross-regional work, not intense concentration on a single region, is preferred. In promotion and tenure decisions, publication in regional-studies journals, although not actively discouraged, is not credited as highly as publication in disciplinary journals. The sad fact is that, for ambitious political scientists looking to get the best North American jobs, publication in IJMES is not a great career move. ...
The professional situation of political scientists outside of North America is not as constrained. Good area-studies work that is informed by the epistemology of social science but relies on “old-fashioned” area-studies methods of qualitative analysis and considerable field work is more highly respected in the discipline in Europe and elsewhere. One can advance professionally at the best universities in Europe and the Middle East doing such work. Because of these different incentives, and different financial-support systems, graduate students at European universities who are interested in the Middle East tend to spend more time in the field and produce work that is more accessible to cross-disciplinary Middle East studies audiences. The significant representation of European-trained political scientists in the pages of IJMES over the last five years is testament to this different set of career structures and incentives.
I am not trying to demonize quantitative methods here. Although I tease "Quants" because I myself am an area studies geek, let's be honest: the more "tools" you can bring to bear on a question, the better. And I am not trying to say -- and neither is Gause -- that one cannot publish smart scholarly work on the Arabic-speaking world outside of IJMES (which is the flagship journal of Middle Eastern Studies). But I am trying to say that American political scientists are, by and large, rewarded for doing work that does not immediately lend itself to relevance in situations such as the one in which we currently find ourselves.
There are some excellent American political scientists working on the Arabic-speaking world. Greg Gause is one of them. So is Marc Lynch, whose writing during this most recent crisis has been excellent and necessary. So too is Josh Stacher, who published a great essay in Foreign Affairs over the weekend. So it is unfair, Drezner is correct to point out, to start bashing political science. I actually think American political scientists -- from Samer Shehata to Nathan Brown -- have been quite prominent in offering informed commentary during this crisis. But that's not a reason not to fret that political scientists trained in America might not be doing the kind of field work necessary for both top-flight area studies as well as providing policy-relevant insights into events on the ground when crises arise. I spoke at THE Ohio State University last week, and one of the professors there similarly worried to me that students trained in the American academy would not be able to "keep up" with their European peers on regional expertise. That, to me, might be worth American political scientists thinking about.
Work cited (emphasis mine):
This relatively easy-to-understand paragraph is from that article I mentioned below:
Civil wars are military contests where each side's military capacity shapes the type of military interaction and, therefore, the nature of the conflict. Both insurgent and counterinsurgent strategies vary accordingly, and yet their "lessons" are conditional on the prevailing technology of rebellion. For example, the combined experience of Iraq and Afghanistan has led the U.S. military to focus single mindedly on irregular war. However, the lessons of Afghanistan are not necessarily transferable to [a symmetric nonconventional] conflict such as the Somali one. Our analysis also implies that, as they consider peacekeeping and peace building operations, policy makers must be aware of the variation in technologies of rebellion, as well as the transformation of internal conflict after the end of the Cold War. For instance, neither conventional nor [symmetric nonconventional] civil wars correspond to the popular image of a quagmire associated with irregular wars, which have deterred international intervention in the past.
The question I would have for Kalyvas and Balcells would be the following: Yeah, this is all well and good, but speaking in plain English, if the United States were to intervene in a conflict, might that external intervention change the conflict in unpredictable ways? Maybe it boosts the capacity of one party, and maybe a rival party (say, Iran) jumps in and boosts the capacity of another party. Maybe, before we know it, the conflict has morphed into a robust insurgency in which one actor is employing irregular means. And maybe policy-makers should internalize the lessons of Iraq and Afghanistan lest they lead the U.S. military into another, ahem, quagmire. Because the war in Afghanistan, to use one example, is not one conflict that you can code just once in some database but rather a series of conflicts that has been fought using a variety of "technologies" from 1979 to 2010. I myself saw a very different war in 2002 than the one I saw in 2004. And I saw an altogether new war in 2009 in large part because an external actor (Pakistan, in this case) jumped into the conflict in the five years between my second and third stints in Afghanistan and boosted the heck out of the capacity of the insurgent actors.
Another thing: if you really think something is important for policy makers to understand, why the hell write about it using highly specialized vernacular in a journal no actual policy maker reads? Please tell me this APSR article will be followed up by an article in International Security or, even better, in a paper for a humble policy-oriented think tank.
None of these questions, by the way, should detract from a really excellent article with potentially important implications.
This is really, really, really funny. (h/t Sullivan)
Regular readers of this blog know how much I enjoyed my friend Mike Horowitz's ground-breaking new book on military innovation and diffusion, a field of inquiry in which I have a lot of interest. Mike is a professor at my alma mater and one of the brightest young American thinkers in security studies. When he visited CNAS a few weeks ago to walk the staff through his new book, I asked him if he would mind sitting down to discuss the book, political science in the United States, and the future of warfare with the blog. Since I once managed to get the two of us into the Red Sox dugout to chat with Terry Francona for an hour before a game against the Orioles, Mike, a Massachusetts native, agreed.
1. Okay, briefly, explain your adoption-capacity theory. What is it, and what does it explain?
Adoption capacity theory is the term I use to explain the way that financial and organizational constraints shape the realm of the possible for both national militaries and non-state actors, thus influencing the strategies they choose when facing a new military innovation. Drawing on research from the business world, economics, and political science, I argue that you can use the relative financial and organizational requirements for adopting new innovations to explain both the way a particular innovation is likely to spread throughout the international system and the way individual states will respond. So what’s the takeaway for the real world? New military innovations that require high levels of financial investments to adopt tend to help the rich get richer – if adoption means integrating new, expensive capital platforms, pre-existing powerful actors will do very well. In contrast, innovations requiring a large degree of organizational change can be profoundly disruptive to existing powers. The organizational routines they’ve developed to help them master previous technologies or methods of force employment can become a virtual albatross that holds them back while newer and more nimble actors take advantage. These are the types of innovations more likely to usher in dangerous power transitions or devastating military campaigns (think blitzkrieg and the Battle of France).
2. Talk us through your methodology (because we are nerds). You use a variety of methods across a number of case studies. How did you test your theory?
I used what political scientists call a “multi-method” approach. I did research on specific militaries and non-state actors, sometimes including archival work. I also used regression analysis when there were enough observations that I could look for patterns of behavior that could shed light on my argument. For example, when studying which groups adopted suicide terrorism – a military innovation for non-state actors – you have a large enough universe of terrorist groups and adopters of suicide terrorism that you can usefully employ statistical analysis (though of course you also have to do the research). On the other hand, the organizational practices associated with using aircraft carriers to project power only spread to a very small number of countries over time. Thus, for that chapter I focused on case studies and simple descriptive statistics. For me, the key is trying to ask an interesting question and then figuring out which method or methods will work best to answer that question, rather than picking the method (quantitative, game theory, qualitative, etc.) first.
3. You argue at the end of the book that your theory explains the behavior of non-state actors as well. A few questions related to that conclusion and motivated by my own curiosity and interests:
a. Violent non-state actors are necessarily secretive. They do not publish a QDR or a budget, much less a task organization chart. So how can we describe them in terms of your theory if we cannot answer basic questions about their finances and organizational dynamics?
b. You argue that ties between violent non-state actors help determine the spread of suicide tactics. But how do we explain groups that have contact with non-state actors employing suicide tactics yet do NOT themselves adopt those tactics? So a connection between Hamas and Hizballah helps explain the migration of suicide tactics to the Palestinian territories -- I understand that. But how do we explain why other groups that have had contact with Hizballah -- the PFLP, Amal, FARC, etc. -- have in large part NOT adopted suicide tactics?
c. Individuals rarely serve in multiple armies of nation-states these days. So a guy in the U.S. Army is unlikely to have served in, say, the French Army as well. But that's not the case with non-state actors. Imad Mughniyeh got his start in Fatah. Hassan Nasrallah got his start in Amal. Aren't the divisions between violent non-state actors in a place like southern Lebanon less clear than the divisions between state militaries? And does that then complicate the effect of "ties" between groups?
Hey, great question(s) – and you bring up a lot of key issues I try to think about. One of the goals of my book is to take topics that are often studied in isolation – nuclear weapons, naval warfare, and suicide terrorism – and explain how some common processes actually govern the way new military innovations spread (or don’t spread) and what that means. Terrorist groups, like national militaries, face budgetary pressures and have organizational hierarchies. They have ways of doing business that invest prestige in particular members and create organizational veto points if someone wants to change things up. Thus, at a conceptual level, adoption capacity constraints influence how terrorist groups behave. Whether we can get enough evidence to actually observe that, which your first question gets at, is a different story. Some factors, such as whether a group uses suicide terrorism or how long it has been in existence (organizational age), are observable. There are also some groups, such as the PIRA, where we have a lot of information about their organizational dynamics. In other cases, it’s harder to make a definitive ruling about whether the theory holds. I’m OK with that, though, since my theory seems to work pretty well for the cases where we do have enough information.
I argue that two factors primarily explain who adopted and who failed to adopt suicide terrorism. First, those groups that lacked established operational profiles prior to the beginning of the suicide terror era found adopting suicide terror much easier than more experienced groups. “Younger” groups did not have pre-set critical tasks and organizational veto points that would have made adoption more organizationally challenging. Second, those groups that were plugged into what amounted to a religiously motivated network of terrorist groups were also significantly more likely to adopt suicide terror. Clearly, other factors matter as well, which is why some of the groups exposed to Hezbollah did not adopt suicide bombings (though even Amal did at one point). In my case studies and statistical analysis, I try to control for some of the other ideological, geographic, and contextual factors that explain why some groups decided to use suicide bombings but others did not. Essentially, being plugged into groups like Hezbollah that have adopted suicide terror makes a group significantly more likely to adopt, but that doesn’t mean it’s determinative. By the way, the FARC is fascinating in this regard. Kalyvas, whom you have been known to reference, and Sánchez-Cuenca argue that the FARC did, in fact, use suicide terror once. Others are not so sure.
You make a great point about the possibility for individuals to serve in several different violent non-state groups. Tracking individuals like that is one way to evaluate ties between groups – or evidence of splintering within a group. That raises the bar for doing research on links between terrorist groups. There is a lot of uncertainty out there, so the best you can do is be honest when describing the limitations of your work and places where others can build on it to do a better job.
4. What does your research say, if anything, about the future of war? It's going to be all counterinsurgency, all the time, isn't it?
Absolutely. Nothing to see here. All COIN all the time. Right up until the time when an adversary UCAV shoots an F-22 out of the sky. Adoption capacity theory actually suggests that the United States military may face some serious challenges over the next generation. If innovations come about that undermine the importance of capital-intensive platforms such as carriers, fighters, and bombers, the United States will have its work cut out for it. The organizational expertise the US built up over time to fight based on those platforms could make it harder to shift towards, for example, UCAVs (unmanned combat aerial vehicles), war in the cyber realm, or other new areas. The trick is maintaining a high level of organizational capital, through steps such as funding basic research & development and encouraging experimentation, so that the US military is able to adapt rapidly when necessary. Fundamentally, I’m optimistic about the ability of the United States to do what is necessary to maintain its conventional military edge; I just think we can’t take it for granted.
5. You're one of the leading young lights in the field of security studies. How do you feel about the way in which your academic field is interacting with the policy community? Is your relevance increasing or decreasing in terms of policy?
Aww, shucks. In all seriousness, many people worry a lot about the irrelevance of political science to the policy community. I tend to be reasonably bullish about it in the medium-term, actually. I think there is a great deal of interest among the rising generation of scholars in doing methodologically sound social science on international security topics with policy relevance. The more that occurs—and I think it will occur in greater numbers over time—the more “relevant” international relations scholarship will become. On the other side, there is the question of the willingness of the policy community to listen when scholars do more policy-relevant work, but I’ll leave that one to you.
6. Born on the gritty streets of Lexington, Massachusetts, you now live in my second American hometown of Philadelphia. What are the five best bars in Center City and in West Philadelphia?
I’m a proud son of the birthplace of American liberty, but Philadelphia is a pretty awesome place to live. There are so many good bars and restaurants that it is hard to choose, but my personal favorite is run by some bartenders who got sick of taking orders and decided to hang out their own shingle. It’s called Jose Pistolas and it’s on 15th between Locust and Spruce. It has solid food, a great micro-brew beer selection, and terrific bartenders—ask for Casey. My favorite bar for cocktails is Southwark, down at 4th and Bainbridge. It’s the best place I’ve found for classic cocktails in Philly (think Aviation or Old Fashioned, not Appletini). Smith’s, which is on 19th between Market and Chestnut, has to be on the list. It’s one of the only places in Philly where degenerate New England Patriots fans like myself can get together on Sundays to cheer on the Pats. The Resurrection Ale House, in my new neighborhood (Graduate Hospital), offers a great beer selection and tremendous fried chicken (don’t believe me? Ask Bon Appétit). I’ll wrap up the list with Monk’s, a Philly institution at 16th and Spruce featuring an enormous selection of Belgian beers. And now I’m hungry.
Thanks for all of the questions and the opportunity to get the word out about my book!
Thank you, Mike. DC readers can see Mike talk about his new book on Monday at CSIS. Details can be found via Mike's website or by following the hyperlinks. And you can buy the book itself here in paperback and here on Kindle.
So the quants, not content with mucking up the financial world, have turned their attention to the dynamics of irregular war. I may be a PECOTA guy when it comes to baseball, but I am wary of many quantitative efforts made to "explain" the dynamics of war. Strategic studies scholars I admire like Steve Biddle show the utility of quantitative analysis in their own work, and Steve in particular makes a strong case for why policy papers and academic research backed up by quantitative analysis have more of an impact than do papers based on strictly qualitative or theoretical work. But I think the pressure PhD students and junior professors in political science and international relations feel to check the three magic boxes -- qualitative, quantitative and theoretical -- when writing their dissertations and papers has contributed to the growing irrelevance of their fields in policy discussions. You shouldn't need two semesters of statistics to understand a policy paper on strategy or military operations. Acquisitions or budgeting, fine, but neither this book nor this book nor this book nor this book -- all enduring classics in the field of strategic studies -- rely on quantitative analysis. (This favorite of the blog, yes, but the key observations in the first half are all based on historical evidence.)
Anyway, you guys probably couldn't care less why I never read the APSR. But based upon my limited personal experience in high-intensity and low-intensity conflict as well as my academic research -- to include field research in several active combat zones -- human or "moral" factors often explain war far better than number-crunching. (At its worst, the number-crunching you see in scholarly journals is just qualitative judgments assigned numerical value, e.g. 10 = "good" and 1 = "bad".) And all methods of analysis are inherently limited in their explanatory value.
That said, there is an article in Nature on the "unified model of insurgency". And Josh Foust and his gang of hired assassins have posted a critique of it on Registan.net worth checking out. Josh & Co. laud the authors of the Nature article for the way in which they have approached their subject. (And yeah, actually, they should be lauded, because honestly, God bless them for tackling a complicated issue with such methodological rigor.) What I get from the critique, though, is that the model the authors have constructed is -- surprise! -- too simple to reflect the realities of insurgencies and counterinsurgencies. And it reminds me of the way in which quants on Wall Street discovered that all of their complicated computer models had failed to reflect the actual behavior of markets and indeed hastened the destruction of the very funds they had been built to generate income for. (Honestly, didn't we learn our lesson with LTCM in 1998?)
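For readers wondering what this sort of modeling mechanically involves: one recurring claim in this literature is that insurgent attack sizes approximate a power-law distribution, and estimating the exponent takes only a few lines. The sketch below uses purely synthetic data (not the Nature paper's dataset) and the standard maximum-likelihood estimator for a continuous power law:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x_min, true_alpha = 1.0, 2.5  # hypothetical cutoff and exponent

# Draw synthetic "attack sizes" from a continuous power law via
# inverse-CDF sampling: F(x) = 1 - (x / x_min)**-(alpha - 1).
u = rng.random(n)
sizes = x_min * (1 - u) ** (-1 / (true_alpha - 1))

# Maximum-likelihood estimate of the exponent:
# alpha_hat = 1 + n / sum(log(x_i / x_min)).
alpha_hat = 1 + n / np.log(sizes / x_min).sum()
print(alpha_hat)  # should land near the true value of 2.5
```

The estimation itself is trivial, which is rather the point of the critique: the hard question is not fitting a clean distribution but whether that fit tells you anything useful about how a particular insurgency actually works.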
I'm not trying to come off as one of those analytical dinosaurs the gang at Fire Joe Morgan used to poke fun at (you know, the guys who value "grit" and "hustle" over OPS), but we have to admit that in certain chaotic "systems" involving real live humans acting both rationally and irrationally -- such as international finance, or war -- the explanatory value of quantitative analysis might have its limits.