Papers 2017
Evaluation of Verification Tools Continued: More Tools, More Software, More Aspects
| Date | 30 May 2017 - 1 Jun 2017 |
| --- | --- |
| Event | DASIA 2017 |
| Location | Gothenburg, Sweden |
In a previous study, six software verification tools were applied to a representative space software package, and the findings reported by each tool were compared in order to derive footprints of their fault-identification capabilities. In a continuation of that work, three further tools were applied to the previously selected application software and to another application, together with two of the previously used tools, in order to broaden the evaluation base. Additional aspects were considered in the evaluation of the results: a further evaluation criterion was introduced, and the reported defects were compared with the outcome of unit tests. Owing to a higher degree of formalization and automation, the manual evaluation effort could be reduced even though the number of considered reports and the number of tools were extended. The evaluation and verification issues encountered are discussed in detail. Taken together, the results are intended to give a detailed view of the defect-identification capabilities of the considered tools with respect to the current software base. However, the high report quality observed in the previous study was not reproduced: with a different set of tools and another (object-oriented) language, many trivial reports were encountered.
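The comparison of reported defects with unit-test outcomes can be pictured as a simple cross-referencing step. The sketch below is purely illustrative and not taken from the paper; the report and test-result structures are assumptions.

```python
# Illustrative sketch (not from the study): cross-referencing defect reports
# produced by verification tools with the outcome of unit tests.
# The data structures below are assumptions, not the study's actual format.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolReport:
    tool: str       # name of the verification tool
    file: str       # source file the report refers to
    line: int       # reported line number
    message: str    # defect description


@dataclass(frozen=True)
class UnitTestResult:
    test: str       # unit test identifier
    file: str       # source file exercised by the test
    passed: bool    # outcome of the test


def correlate(reports, test_results):
    """Group defect reports by source file and pair each file with the
    unit tests that exercised it, so that findings confirmed by failing
    tests can be separated from the rest during manual evaluation."""
    by_file = {}
    for r in reports:
        by_file.setdefault(r.file, {"reports": [], "tests": []})["reports"].append(r)
    for t in test_results:
        if t.file in by_file:
            by_file[t.file]["tests"].append(t)
    return by_file


if __name__ == "__main__":
    reports = [ToolReport("toolA", "ctrl.cpp", 42, "possible null dereference")]
    tests = [UnitTestResult("test_ctrl_init", "ctrl.cpp", passed=False)]
    for f, entry in correlate(reports, tests).items():
        failed = [t.test for t in entry["tests"] if not t.passed]
        print(f, "reports:", len(entry["reports"]), "failing tests:", failed)
```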
Challenges Regarding Automation of Requirements-based Testing
| Date | 30 May 2017 - 1 Jun 2017 |
| --- | --- |
| Event | DASIA 2017 |
| Location | Gothenburg, Sweden |
Testing as a method of software verification is limited in that it can only prove the presence of defects, not their absence. To be useful, a large number of test cases may be needed, which often conflicts with project constraints such as available time and funds. Test automation is an interesting approach to alleviating this conflict. However, it requires accurate and computer-accessible information about the system to be tested: both the interfaces through which the system is to be stimulated and the desired properties of those interfaces. Within the FASTII activity (FAST = Flow-optimised Automated Source-code based Testing), the possibility of deriving this information from the available requirements and design documents is being investigated. This paper presents preliminary results of this investigation as well as suggestions for future changes to the process.
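As a minimal sketch of what "computer-accessible" interface and property information might look like, and how it could drive test-case generation, consider the example below. The description format, field names, and boundary-value heuristic are assumptions for illustration, not the FASTII representation.

```python
# Illustrative sketch (not the FASTII approach): a machine-readable interface
# description with a desired property, used to generate simple test stimuli.
import itertools

# Interface of the system under test: one command with two parameters,
# each annotated with its valid range (hypothetically derived from the
# requirements and design documents).
interface = {
    "command": "SET_MODE",
    "parameters": {
        "mode":    {"type": "int", "min": 0, "max": 3},
        "timeout": {"type": "int", "min": 1, "max": 100},
    },
    # Desired property: the system shall acknowledge every valid command.
    "property": "response == 'ACK'",
}


def boundary_values(spec):
    """Pick boundary values for a parameter, a common stimulus heuristic."""
    return [spec["min"], spec["max"]]


def generate_test_cases(iface):
    """Enumerate stimulus combinations from the interface description,
    attaching the expected property to each generated test case."""
    names = list(iface["parameters"])
    value_sets = [boundary_values(iface["parameters"][n]) for n in names]
    for combo in itertools.product(*value_sets):
        yield {
            "command": iface["command"],
            "arguments": dict(zip(names, combo)),
            "expected": iface["property"],
        }


if __name__ == "__main__":
    for case in generate_test_cases(interface):
        print(case)
```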