Change-based code review is used widely in industrial software development. Thus, research on tools that help the reviewer to achieve better review performance can have a high impact. We analyze one possibility to provide cognitive support for the reviewer: determining the importance of change parts for review, specifically determining which parts of the code change can be left out from the review without harm.

Researchers often analyze several revisions of a software project to obtain historical data about its evolution. For example, they statically analyze the source code and monitor the evolution of certain metrics over multiple revisions. The time and resource requirements for running these analyses often make it necessary to limit the number of analyzed revisions, e.g., by only selecting major revisions or by using a coarse-grained sampling strategy, which could remove significant details of the evolution. Most existing analysis techniques are not designed for the analysis of multi-revision artifacts, and they treat each revision individually. However, the actual difference between two subsequent revisions is typically very small. Thus, tools tailored for the analysis of multiple revisions should only analyze these differences, thereby preventing re-computation and storage of redundant data, improving scalability and enabling the study of a larger number of revisions. In this work, we propose the Lean Language-Independent Software Analyzer (LISA), a generic framework for representing and analyzing multi-revisioned software artifacts. It employs a redundancy-free, multi-revision representation for artifacts and avoids re-computation by only analyzing changed artifact fragments across thousands of revisions. The evaluation of our approach consists of measuring the effect of each individual technique incorporated, an in-depth study of LISA's resource requirements, and a large-scale analysis over 7 million program revisions of 4,000 software projects written in four languages. We show that the time and space requirements for multi-revision analyses can be reduced by multiple orders of magnitude when compared to traditional, sequential approaches.

Model transformations play a fundamental role in model-driven software development. They can be used to solve or support central tasks, such as creating models, handling model co-evolution, and model merging. In the past, various (semi-)automatic approaches have been proposed to derive model transformations from meta-models or from examples. These approaches require time-consuming handcrafting or recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful edit operations will be the ones that compress the model differences. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We find that our approach is able to discover frequent edit operations that have actually been applied. Furthermore, Ockham is able to extract edit operations in an industrial setting that are meaningful to practitioners.
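The idea of determining which change parts can be left out of a review can be made concrete with a deliberately simple heuristic: a change part whose old and new code are identical after stripping whitespace and comments carries no behavioral difference and is a candidate for skipping. This is a minimal sketch of our own, not the importance model discussed above; the function name and the comment syntax assumed are illustrative.

```python
import re

def is_ignorable(old_lines, new_lines):
    """Heuristically decide whether a change part can be skipped in review.

    A change part is treated as ignorable here when old and new code are
    identical after stripping whitespace and '#' line comments -- a simple
    stand-in for a learned change-part importance model.
    """
    def normalize(lines):
        out = []
        for line in lines:
            line = re.sub(r"#.*", "", line)   # drop line comments
            line = re.sub(r"\s+", "", line)   # drop all whitespace
            if line:
                out.append(line)
        return out
    return normalize(old_lines) == normalize(new_lines)

# A whitespace/comment-only edit is flagged as skippable:
print(is_ignorable(["x = 1  # init"], ["x = 1   # initialise counter"]))  # True
# A semantic edit still needs review:
print(is_ignorable(["x = 1"], ["x = 2"]))  # False
```

Real importance models would of course combine many such signals rather than a single syntactic rule.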
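LISA's core idea, a redundancy-free multi-revision representation in which each unchanged artifact fragment is stored and analyzed only once, can be sketched as follows. The data layout and names below are our own illustration, not LISA's actual API: each fragment carries a metric value and the set of revisions in which it exists, so a per-revision total needs only one pass over the fragments instead of a full re-analysis per revision.

```python
from collections import defaultdict

revisions = ["r1", "r2", "r3"]

# fragment name -> (metric value, revisions containing this exact fragment)
fragments = {
    "func_parse": (12, {"r1", "r2", "r3"}),   # unchanged across the history
    "func_render_v1": (30, {"r1", "r2"}),
    "func_render_v2": (25, {"r3"}),           # only the changed fragment is new
}

# One pass over all fragments yields per-revision totals; each fragment
# is analyzed once, however many revisions it appears in.
totals = defaultdict(int)
for value, revs in fragments.values():
    for r in revs:
        totals[r] += value

print({r: totals[r] for r in revisions})  # {'r1': 42, 'r2': 42, 'r3': 37}
```

A sequential approach would analyze three fragments per revision (nine analyses); the shared representation needs only three, which is where the orders-of-magnitude savings over thousands of revisions come from.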
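Ockham's compression criterion, that meaningful edit operations are the ones that compress the model differences, can be illustrated with a toy scoring function. The difference encoding and the gain formula below are our own simplification: abbreviating a recurring group of atomic changes to a single symbol saves tokens in every difference where the whole group occurs.

```python
# Each recorded model difference is a list of atomic change labels.
differences = [
    ["add State", "add Transition", "set target"],
    ["add State", "add Transition", "set target"],
    ["rename Element"],
    ["add State", "add Transition", "set target", "rename Element"],
]

def compression_gain(pattern, diffs):
    """Tokens saved by abbreviating `pattern` to one symbol in every
    difference that contains all of its atomic changes."""
    occurrences = sum(1 for d in diffs if all(p in d for p in pattern))
    return occurrences * (len(pattern) - 1)

candidates = [
    ("add State", "add Transition", "set target"),  # plausible edit operation
    ("rename Element",),                            # single change: no gain
]
for pattern in candidates:
    print(pattern, compression_gain(pattern, differences))
```

The three-change candidate occurs in three differences and saves two tokens each time (gain 6), while a singleton pattern compresses nothing (gain 0), capturing why recurring multi-change groups are the edit operations worth learning.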