What does it mean for research in and around software, and software engineering, to be (deemed) successful?
This is something I have thought about a lot over the last three years of doing software research, reading research in software engineering, and keeping track of innovations in the world of software practice. I am by no means an expert in the field of software or software research, nor have I gathered many years of experience in it. However, I do think that opinions matter, no matter how naive, and the following remarks are my early impressions on this question. Impressions, of course, change with time.
To me, three basic notions define research success: a) impact on the community or on practice; b) the degree to which the research provokes new ideas and concepts (sooner or later); and c) how well the research idea is evaluated, irrespective of whether the results are positive or negative. Successful research should satisfy at least one of these three criteria.
Impact on practice or on other research matters to me simply because it directly answers the question: “how did it change the world?” It does not matter how long the change took; what matters is that it happened at all. What is the point of a research idea that had no tangible effect on either the way we develop software, say by improving it or changing it for the better, or the way we look at research itself? The latter brings me to my second point.
Successful research should provoke new thoughts and ideas. This is important because it lets us look at things differently and makes us question our own assumptions about the status quo. While not all such ideas make it into actual software practice and industry, they at least keep us grounded in the simple reality that nothing can be taken for granted, and that what was dismissed as not useful a while back may find a use now.
Finally, for any research to be successful, it should include a detailed, methodical, and possibly exhaustive evaluation of its ideas, irrespective of whether the results are positive or negative. A thorough evaluation ensures that everything that was to be analyzed was indeed analyzed, no matter how insignificant it might feel at the time. If that small, seemingly insignificant detail becomes important tomorrow in a different context, we are spared the effort of going back and re-evaluating our results.
Beyond those guiding notions of success, I want to point out what, to me, does not matter in defining successful research. First, the time to technology transfer, i.e. from research to practice, is not an adequate measure of success. Even though an idea might not have had much impact on practice, it could still have inspired new ways of thinking for other researchers. System models for software configuration management, as noted in [ELC+05], have not had a great impact on practice, but they make us question whether a simple file-and-directory structure is sufficient for modelling complex software engineering artifacts.
Second, I welcome negative results in research, as they show us the roads we should not take. For instance, countless iterations and improvements on the Tarantula formula (as in [JH05]) have been made over the years, yet no one talks about what did not work when trying to improve the formula itself. Wouldn't that be important as well, so that we do not waste time figuring it out again and again on our own?
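As an aside for readers who have not come across it, the core of Tarantula is a simple suspiciousness score: a program element is considered more suspicious the larger the share of failing, as opposed to passing, tests that execute it. The sketch below is a minimal, illustrative rendering of that score in Python; the function name and parameters are my own, not the authors', and this is not their implementation.

```python
# A rough sketch of the Tarantula suspiciousness score described in [JH05].
# Illustrative reconstruction only; names and signature are hypothetical.
def tarantula_suspiciousness(failed_e, passed_e, total_failed, total_passed):
    """Score a program element e by how predominantly failing tests cover it."""
    fail_ratio = failed_e / total_failed if total_failed else 0.0
    pass_ratio = passed_e / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0:
        return 0.0  # e is executed by no tests; nothing to suspect
    # Values close to 1 mean e is executed mostly by failing tests.
    return fail_ratio / (fail_ratio + pass_ratio)

# Example: an element covered by 3 of 4 failing tests and 1 of 10 passing tests
# scores 0.75 / (0.75 + 0.1) ≈ 0.88, i.e. highly suspicious.
```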
These are some of my initial thoughts on the matter. Over time, I expect that these ideas will take new shape and form, or wither away entirely to give way to newer ideas. In any event, this is where I stand — for now.
References:
[ELC+05] Jacky Estublier, David B. Leblang, Geoffrey Clemm, Reidar Conradi, Andre van der Hoek, Walter Tichy, and Darcy Wiborg-Weber. Impact of the Research Community on the Field of Software Configuration Management. ACM Transactions on Software Engineering and Methodology (TOSEM) 14(4):383–430, 2005.
[JH05] James A. Jones and Mary Jean Harrold. Empirical Evaluation of the Tarantula Automatic Fault-Localization Technique. In Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering (ASE 2005), Long Beach, California, USA, November 2005, pp. 273–282.
