Software Testing

Software testing is an integral part of the software development life cycle. Inadequate software testing leads to major risks and consequences (Garousi and Zhi, 2013). A 2002 report by the United States National Institute of Standards and Technology (NIST) stated that an inadequate software testing infrastructure alone costs the United States $62 billion per year (Tassey, 2002). The main goal of software testing is to find defects and faults at every stage (the earlier the better) of the development of a software product. This is a necessary process, as human mistakes are inevitable and the cost of fixing faults once the software is in maintenance is high. With technology nowadays playing a pivotal role in human life (transport, health, etc.), a defect in a software system may lead to disaster. For example, an information system failure in the London Ambulance Service software resulted in a death (Finkelstein and Dowell, 1996). In this context, software testing tools are deployed to automate the testing process (Emami et al., 2011) in order to improve the quality and efficiency of the product and to reduce testing-related costs.

Definition of Software Testing

In the literature, several definitions have been given to software testing. The Guide to the Software Engineering Body of Knowledge (SWEBOK v3) defines software testing as the 'dynamic verification' that a program provides 'expected behaviours' on a 'finite' set of test cases (Bourque et al., 2014). These test cases are suitably selected from the usually 'infinite execution domain'.

According to the ANSI/IEEE 1059 standard, testing can be defined as a process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item (IEEE, 1994).

The Art of Software Testing – Second Edition defines software testing [as] 'a process, or a series of processes, designed to make sure computer code does what it was designed to do and that it does not do anything unintended' (Reid, 2005).

Finally, The Art of Software Testing – Third Edition defines testing [as] 'the process of executing a program with the intent of finding errors' (Myers et al., 2011).

Software Testing Tools

The IEEE Standard Glossary of Software Engineering Terminology defines a software tool as 'a computer program used in the testing, development, maintenance, or analysis of a program or its documentation' (Radatz et al., 1990). These include cross-reference generators, time analyzers, flow-charters, decompilers, test case generators, etc. There is a wide variety of tools available today to provide assistance in every phase of the testing process. There is no universal testing tool that would cater to all testing needs of all levels and phases of a software development cycle. Rather, testing tools can be categorized in a number of ways:

a. The testing project or activity in which they are employed (e.g., code verification, test planning, test execution);
b. The descriptive keyword, i.e. the specific function performed by the tool (e.g., capture/replay, logic coverage, compactor);
c. A major area of classification going beyond testing only (e.g., test management, static analysis, simulator).

As an example, we can cite the categorization of Abran et al. (2001):
a. Test generators
b. Test execution frameworks
c. Test evaluation tools
d. Test management tools
e. Performance analysis tools

While many tools are useful mainly for keeping track of and managing tests that are scheduled or done, some testing tools provide automation of core testing activities. This reduces manual testing, which is costly, time-consuming, and error-prone (Bajaj, 2014). Test automation helps in finding issues that are overlooked or not re-verified (regression) (Bajaj, 2014). However, a survey in (Garousi and Zhi, 2013) reveals that about 80 percent of Canadian firms use manual testing, due to the acquisition costs of good-quality test automation tools and/or a lack of training.
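As a minimal illustration of the kind of check such tools automate, the sketch below uses Python's built-in unittest framework; the function under test (normalize_phone) is a hypothetical example, not drawn from any tool discussed above. Once written, these assertions re-run unchanged on every build, which is exactly the regression re-verification that manual testing tends to skip:

```python
import unittest


def normalize_phone(raw: str) -> str:
    """Hypothetical function under test: keep only the digits of a phone number."""
    return "".join(ch for ch in raw if ch.isdigit())


class NormalizePhoneTest(unittest.TestCase):
    """Automated checks that re-run on every build, catching regressions early."""

    def test_strips_separators(self):
        # Formatting characters (parentheses, spaces, dashes) are removed.
        self.assertEqual(normalize_phone("(514) 555-0199"), "5145550199")

    def test_already_normalized_input_is_unchanged(self):
        # A digits-only input passes through untouched.
        self.assertEqual(normalize_phone("5145550199"), "5145550199")


if __name__ == "__main__":
    unittest.main()
```

A test execution framework (category b in the list of Abran et al. above) discovers and runs such test classes automatically and reports any failure, so a fault reintroduced by a later change is flagged without anyone re-testing by hand.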

Academic work on software testing and software testing tools

Several academic works have been carried out on testing tools. A 2016 study (Garousi et al., 2016) finds that the level of joint industry-academic collaboration in software testing is very low. Researchers have few insights into the problems that are important to practitioners, while practitioners fail to learn what researchers have already discovered that might be useful to them. In fact, industrial problems are often devoid of scientific challenges, while at the same time more complex than the mostly small projects for which academia develops rigorous, but hardly scalable, solutions. A survey of 3,000 Microsoft employees suggests that top-cited research articles in Software Engineering (SE) are not relevant or useful to their everyday challenges (Lo et al., 2015).

A 2013 survey on software testing practices in Canada (Garousi and Zhi, 2013) identified current trends and challenges in testing, in an effort to provide a window into industry practices and encourage more academia–industry collaborations. Some of the notable findings of this survey are as follows:

• Canadian firms are giving more importance to testing related training;
• More effort and attention is spent on unit testing and functional testing;
• When it comes to testing tools, web application and NUnit tools overtook IBM Rational tools and JUnit;
• Many companies still use Test-Last Development (TLD), and only a small number are trying to implement newer development approaches such as Test-Driven Development (TDD) and Behaviour-Driven Development (BDD);
• Canadian firms are considering using new techniques, such as mutation testing;
• A majority of Canadian firms use a combination of two coverage metrics: condition and decision coverage;
• In most Canadian companies, testers are outnumbered by developers, with ratios ranging from 1:2 to 1:5.

An older survey (Emami et al., 2011) of 152 open source software testing tools finds that the majority of the tools available were meant for performance testing (22%) and unit testing (21%). On the other hand, only 3% were useful for test management and database testing. Based on that survey, Java is the programming platform best supported by open source software testing tools: almost 39% of the tools support it. It is especially notable that no other programming platform is supported by even 10% of the tools. On the other hand, Visual Basic and database are the least supported languages/concepts among the tools surveyed. When it comes to open source software testing tools, concerns about their maintenance are real, due to the non-profit and community-driven nature of these initiatives. The survey found that 77% of the surveyed open source tools had had an update (version release) within the preceding six months.

Table of Contents

INTRODUCTION
CHAPTER 1 LITERATURE REVIEW
1.1 Software Testing
1.1.1 Definition of Software Testing
1.1.2 Software Testing Tools
1.1.3 Academic work on software testing and software testing tools
1.1.4 Software testing terminology
1.2 Release documentation (change logs and release notes)
1.2.1 change logs
1.2.2 Release Notes
1.2.3 Research on release documentation
CHAPTER 2 METHODOLOGY
2.1 The learning resources: SWEBOK v3, ISTQB standard glossary v3.1
2.2 Tools and release documentation
2.2.1 Standard format for the logs
2.2.2 'Trimming' the logs from their noise
2.3 Answering our research questions
CHAPTER 3 RESULTS AND DISCUSSION
3.1 RQ1: To what extent are the terms from established testing learning resources present in change logs and release notes of testing tools?
3.1.1 Quantitative results: distribution of terms from the learning resources in the logs
3.1.2 Qualitative analysis: most frequent terms from the learning resources
3.1.2.1 Most frequent terms from ISTQB
3.1.2.2 Most frequent terms from SWEBOK
3.1.3 A deeper look at the terms from the learning resources
3.1.3.1 Most frequent terms from ISTQB and SWEBOK
3.1.3.2 Significant terms from ISTQB
3.1.3.3 Significant terms from SWEBOK
3.2 RQ2: Which would be the dominant terms in a terminology extracted from change logs and release notes?
3.2.1 Distribution of log terminology
3.2.2 Most frequent terms from log based terminology
3.2.3 Generic Testing terms in the logs
3.2.4 Technologies mentioned in logs
3.2.5 Programming languages from Logs
3.2.6 Tools mentioned in the logs
3.3 RQ3: Comparing ISTQB, SWEBOK and Log based terminologies
3.4 Threats to validity
CONCLUSION
