ISR

Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects.

Today’s software architects, however, have few techniques to help them plan such evolution. In particular, they have little assistance in planning alternatives, making trade-offs among those alternatives, or applying best practices for particular domains.

To address this, we have developed an approach for assisting architects in planning and reasoning about software architecture evolution. Our approach is based on modeling and analyzing potential evolution paths that represent different ways of evolving the system. We represent an evolution path as a sequence of transitional architectural states leading from the initial architecture to the target architecture, along with evolution operators that characterize the transitions among these states. We support analysis of evolution paths through the definition and application of constraints that express rules governing the evolution of the system and evaluation functions that assess path quality. Finally, a set of these modeling elements may be grouped together into an evolution style that encapsulates a body of knowledge relevant to a particular domain of architecture evolution.
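To make the shape of this model concrete, below is a minimal Python sketch of the concepts named above. All names here are hypothetical illustrations, not the thesis's actual formalism: an evolution path is a sequence of architectural states, operators transform one state into the next, constraints are rules over whole paths, evaluation functions score paths, and an evolution style bundles these for a domain.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(frozen=True)
class ArchState:
    """One transitional snapshot of the architecture along an evolution path."""
    name: str
    components: frozenset

# An evolution operator transforms one architectural state into the next.
Operator = Callable[[ArchState], ArchState]

@dataclass
class EvolutionPath:
    """A sequence of states leading from the initial to the target
    architecture, with the operators characterizing each transition."""
    states: List[ArchState]
    operators: List[Operator] = field(default_factory=list)

# A constraint expresses a rule that every legal evolution path must satisfy;
# an evaluation function assesses the quality of a path.
Constraint = Callable[[EvolutionPath], bool]
Evaluation = Callable[[EvolutionPath], float]

@dataclass
class EvolutionStyle:
    """Encapsulates a body of domain knowledge: the permitted operators,
    path constraints, and evaluation functions for one domain."""
    operators: List[Operator]
    constraints: List[Constraint]
    evaluations: List[Evaluation]

    def is_legal(self, path: EvolutionPath) -> bool:
        return all(check(path) for check in self.constraints)

    def score(self, path: EvolutionPath) -> float:
        return sum(measure(path) for measure in self.evaluations)
```

For example, a (hypothetical) constraint might require that every transitional state retain a "database" component, and an evaluation function might sum the estimated cost of each transition; candidate paths that pass all constraints can then be ranked by score.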

We evaluate this approach in three ways. First, we evaluate its applicability to real-world architecture evolution projects. This is accomplished through case studies of two very different software organizations. Second, we undertake a formal evaluation of the computational complexity of verifying evolution constraints. Finally, we evaluate the implementability of the approach based on our experiences developing prototype tools for software architecture evolution.

Thesis Committee

David Garlan (Chair)
Travis Breaux
Ipek Ozkaya (Software Engineering Institute)
Kevin Sullivan (University of Virginia)


Measuring scientific output has a long tradition and is fraught with controversy. My presentation will introduce the audience to relevant historical fragments of computer-assisted scientometric analysis, such as the development of citation indices and metrics like the Journal Impact Factor. I will also address their disputed applications.
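For reference, the Journal Impact Factor of a journal for a year Y is conventionally defined as a two-year citation ratio:

```latex
\mathrm{JIF}_Y =
  \frac{\text{citations received in year } Y \text{ by items the journal published in years } Y-1 \text{ and } Y-2}
       {\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
```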

Today, the landscape has changed: metrics once developed as descriptive methods have become “social technologies” and powerful tools for decision making. We now deal with a multitude of theoretical and empirical approaches to intervening in academic decision making. (Evidence-based) policy and administration increasingly seek to evaluate the effects of science and research on innovation and competitiveness in standardized ways, even though epistemic cultures vary considerably. Individual scholars must constantly consider their position within certain information markets, in both scientific and social media realms. This leads to optimization strategies that reduce the complexity of scientific productivity, or to specific publication behaviors, with citation cartels as an extreme example. However, strategies like these conform to a regime's inner logic to the point of subverting it. The recent critique of the Journal Impact Factor (the San Francisco Declaration) may mark a turning point in the application of bibliometric measures and may call for new forms of evaluating and, even more importantly, objectifying science and research activities. The presentation will end with a discussion of potential new perspectives.

***

Katja Mayer works as a post-doctoral research associate to the President of the European Research Council, Prof. Helga Nowotny. In addition, Katja is a lecturer at several universities in the field of Science-Technology-Society (STS).


Faculty Host: Juergen Pfeffer

Most contemporary programs are customizable. They provide many features, which give rise to millions of program variants. Determining which feature selection yields optimal performance is challenging because of the exponential number of variants. In my talk, I will present different approaches for determining the performance contributions of features and feature interactions, so that we can predict the performance of not-yet-measured variants. Furthermore, I will outline novel ideas for further reducing the measurement effort by using variability encoding.
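As a rough illustration of the kind of model involved (a sketch under my own assumptions, not necessarily the approach presented in the talk), one can treat a variant's performance as an additive combination of per-feature and per-interaction contributions, fit those contributions from a small measured sample of variants, and then predict the performance of variants that were never measured:

```python
import itertools
import numpy as np

FEATURES = ["cache", "compression", "encryption"]  # hypothetical feature names

def encode(variant: dict) -> np.ndarray:
    """Encode a variant as [1, each feature, each pairwise interaction]."""
    singles = [float(variant[f]) for f in FEATURES]
    pairs = [float(variant[a] and variant[b])
             for a, b in itertools.combinations(FEATURES, 2)]
    return np.array([1.0] + singles + pairs)

def fit(measured):
    """Least-squares fit of a base cost plus per-feature and per-interaction
    performance contributions. `measured` is a list of
    (variant, measured_performance) pairs."""
    X = np.stack([encode(variant) for variant, _ in measured])
    y = np.array([perf for _, perf in measured])
    coefficients, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefficients

def predict(coefficients, variant) -> float:
    """Predict the performance of a variant that was never measured."""
    return float(encode(variant) @ coefficients)
```

Because the number of model terms grows with the number of features and interactions rather than with the exponential number of variants, measuring a modest sample of variants suffices to fit such a model; ideas like variability encoding aim to shrink that measurement effort further.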

***

Norbert Siegmund is a post-doc at the University of Passau, Germany. His research focuses on measuring and predicting non-functional properties of customizable programs. He received his PhD in computer science from the University of Magdeburg in 2012. He serves on a number of program committees and is currently program chair of the workshop on feature-oriented software development. He was awarded the best Ph.D. thesis 2012/2013 and the best scientific publication in 2011 by the Faculty of Computer Science at the University of Magdeburg. He received the best paper award at the Software Product Line Conference. Furthermore, he received the award for the most innovative student-teaching concept at the University of Magdeburg.

Host: Christian Kästner
