
Scales: Rule-checking

Well-known open-source tools such as PMD and FindBugs help projects find bad patterns in their code, e.g. violations of naming conventions, non-idiomatic usage, and common errors. These rules can be considered coding practices of the project, and they are classified according to the quality attribute(s) they impact.

We retrieve two important pieces of information from the output of these tools: first, the number of violations for each category (analysability, changeability, etc.), which indicates the effort needed to get rid of a bad practice or to acquire a new good practice; second, the percentage of acquired practices, i.e. the proportion of rules that are never violated in the project.
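To make these two measures concrete, the following sketch counts violations per quality attribute and computes the percentage of acquired practices from a flat list of violations. The rule-to-attribute mapping and function names are hypothetical illustrations; the actual classification lives in the project's rule set definitions and analysis script, which are not shown here.

```python
from collections import Counter

# Hypothetical rule -> quality attribute mapping (illustrative only).
RULE_ATTRIBUTES = {
    "UnusedImports":        ["Analysability"],
    "CyclomaticComplexity": ["Analysability", "Changeability"],
    "EmptyCatchBlock":      ["Reliability"],
}

def summarise(violations):
    """violations: list of rule names, one entry per violation reported by PMD/FindBugs."""
    per_rule = Counter(violations)

    # Measure 1: number of violations per quality attribute.
    per_attribute = Counter()
    for rule, count in per_rule.items():
        for attribute in RULE_ATTRIBUTES.get(rule, []):
            per_attribute[attribute] += count

    # Measure 2: percentage of acquired practices (rules never violated).
    acquired = [rule for rule in RULE_ATTRIBUTES if per_rule[rule] == 0]
    acquired_pct = 100.0 * len(acquired) / len(RULE_ATTRIBUTES)

    return per_attribute, acquired_pct

per_attribute, acquired_pct = summarise(
    ["UnusedImports", "UnusedImports", "EmptyCatchBlock"])
print(per_attribute)                       # Counter({'Analysability': 2, 'Reliability': 1})
print(f"{acquired_pct:.0f}% of practices acquired")   # 33% of practices acquired
```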

One may want to address first the rules that are violated only a couple of times, since such a practice can be acquired simply by removing the few remaining violations (see the sketch below). Through such small steps the project can progressively learn and improve its practices.
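A simple way to surface these quick wins is to sort the per-rule counts and keep only the rules with a handful of violations. The cut-off value below is an arbitrary illustration, not a value prescribed by the quality model.

```python
def quick_wins(per_rule, max_violations=3):
    """Rules violated only a few times: the cheapest practices to acquire next.
    The max_violations cut-off is arbitrary and used here only for illustration."""
    candidates = {rule: n for rule, n in per_rule.items() if 0 < n <= max_violations}
    return sorted(candidates.items(), key=lambda item: item[1])

# quick_wins(Counter({"UnusedImports": 2, "GodClass": 240, "EmptyCatchBlock": 1}))
# -> [('EmptyCatchBlock', 1), ('UnusedImports', 2)]
```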

The evaluation of rules is manual, since it implies compiling the code or retrieving binaries (class files) for FindBugs. To set up the scales, we selected a list of projects and applied our analysis script to the resulting set.
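The page does not detail how the scale thresholds are derived from the benchmark results. A common approach, shown below purely as an assumption and not as the project's actual method, is to split the benchmark distribution of each measure into quartiles and use the quartile boundaries as the scale thresholds.

```python
import statistics

def scale_thresholds(benchmark_values):
    """Derive three thresholds (quartiles) splitting the benchmark into four scale levels.
    This quartile-based scheme is an assumption for illustration; the actual
    values are listed in the Thresholds section below."""
    q1, q2, q3 = statistics.quantiles(benchmark_values, n=4)
    return q1, q2, q3

# Violation densities measured on the benchmark projects (made-up numbers).
print(scale_thresholds([0.2, 0.5, 0.8, 1.1, 1.9, 2.4, 3.0, 4.7]))
```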


Thresholds

The initial set of thresholds for rule-checking metrics impacting Analysability is the following:

The initial set of thresholds for rule-checking metrics impacting Changeability is the following:

The initial set of thresholds for rule-checking metrics impacting Reliability is the following:

The initial set of thresholds for rule-checking metrics impacting Reusability is the following:


Discussion

The set of metrics used for the benchmark can be downloaded here.

People can select any number of projects as their favourites. This data is then displayed on the project's badge (in the UI) and made available through the Marketplace REST API. It reflects people's interest in the projects. To improve their count, some projects ask their users to vote for them on the Marketplace.


List of projects used for the benchmark