Software testing is like pizza: it comes in many different flavors, and you'll want many different types if you're aiming to please a crowd with diverse requirements.
Now, we're not here to tell you which toppings to select the next time you order pizza. But we are here to discuss the different types of software testing, explain the requirements that each one addresses, and discuss how the various forms of software testing fit together to meet software quality needs.
So, keep reading for a deep dive into different types of software testing and a breakdown of each main type of software test that developers and quality assurance engineers use today.
We'll look at specific types of software testing later in this article. But at a high level, suffice it to say that you can divide the types of software tests into three main categories:
Tests that assess technical application quality: Test types such as unit tests and integration tests help ensure that code works in the way it's supposed to. These tests are typically performed early in the software development life cycle.
Performance and usability tests: Tests that evaluate whether an application meets user experience requirements are a second key type of software test. Performance testing, accessibility testing, and visual/UI testing are examples of types of tests that fall into this category. These tests usually take place later in the software development life cycle – after applications have been built but before they are deployed into production.
Security tests: Security tests, which check applications for security risks and vulnerabilities, are another distinct category of software tests. There are multiple ways to perform security testing, such as analyzing source code and scanning binaries. At organizations that practice DevSecOps, security tests are usually integrated into the software delivery process, but they may also be performed separately by some teams.
Now that we've discussed broad categories of software tests, let's look at specific types of tests, the purposes each one serves, who performs the tests, and when the tests happen within the software delivery life cycle.
Unit testing allows developers to evaluate whether newly written units of source code, such as individual functions or methods, behave as intended. Unit tests are usually the first type of test performed during the software development life cycle. Whenever developers write a new unit of code, they test it.
Because unit tests evaluate relatively basic aspects of code, they are one of the easiest tests to automate using unit testing frameworks like JUnit or Jest. And because unit tests need to happen before development workflows can proceed, taking advantage of test automation is crucial for ensuring that unit tests don't delay overall development operations.
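As a minimal sketch of what this looks like in practice (the function under test and its behavior are hypothetical, and the pytest-style layout is just one common convention), a unit test pairs a small piece of code with assertions about its expected behavior:

```python
# Hypothetical unit under test: trim whitespace and lowercase a username.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# pytest-style unit tests: one small, focused behavior per test function.
def test_strips_whitespace():
    assert normalize_username("  Alice ") == "alice"

def test_lowercases():
    assert normalize_username("BoB") == "bob"

# Called directly here for illustration; normally a test runner collects
# and executes these automatically as part of the development workflow.
test_strips_whitespace()
test_lowercases()
```

Frameworks like JUnit and Jest follow the same basic shape: small, fast tests that a runner can discover and execute automatically on every code change.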
Integration testing is the assessment of how multiple units or modules of code function as a group. The primary purpose of integration testing is to ensure that newly developed code units can integrate effectively into an existing codebase without introducing compatibility or dependency issues.
As such, integration testing is another type of test that usually happens quite early in the software development life cycle – usually right after unit testing is complete. Integration tests can also be automated in most cases. Some test automation frameworks that support unit testing, such as Jest, also enable integration tests. In addition, so-called end-to-end testing frameworks, which are designed to support a broad set of testing types, also usually facilitate integration tests. Selenium is an example of this type of framework.
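To illustrate the difference from a unit test, here is a hedged sketch (both modules are hypothetical) in which two pieces of code that pass their own unit tests are exercised together, verifying that the output of one is a valid input for the other:

```python
# Hypothetical module A: parse a line of comma-separated values.
def parse_csv_line(line: str) -> list:
    return [field.strip() for field in line.split(",")]

# Hypothetical module B: build a report row from parsed fields.
def build_report_row(fields: list) -> dict:
    return {"name": fields[0], "score": int(fields[1])}

def test_parser_feeds_report_builder():
    # Integration check: module A's output works as module B's input,
    # including details like whitespace handling and type conversion.
    row = build_report_row(parse_csv_line("alice, 42"))
    assert row == {"name": "alice", "score": 42}

test_parser_feeds_report_builder()
```

Each module could pass its own unit tests in isolation; the integration test catches mismatches at the boundary between them, such as one module emitting padded strings that the other fails to convert.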
Functional testing ensures that an application meets its key functionality requirements. For example, a functional test could assess whether a certain button is available within a new version of an application.
Functional testing falls under the umbrella of performance and usability testing because its goal is to validate that an application is capable of delivering the end-user experience that developers intend.
Most functional tests can be automated using end-to-end testing frameworks such as Selenium. However, complex types of functional tests as well as tests that involve application features or components (such as those that depend on biometric input) that are difficult to control using software may need to happen manually.
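As a simplified, self-contained sketch of the button example above (the page fragment and button id are hypothetical, and the standard-library HTML parser stands in for a full end-to-end framework like Selenium), a functional test might verify that a required UI element is present in the application's output:

```python
from html.parser import HTMLParser

# Hypothetical page fragment; in a real functional test, this HTML would
# come from the running application, typically fetched by an end-to-end
# framework such as Selenium driving an actual browser.
PAGE = '<form><button id="submit-order">Place order</button></form>'

class ButtonFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.button_ids = []

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self.button_ids.append(dict(attrs).get("id"))

def test_submit_button_is_present():
    finder = ButtonFinder()
    finder.feed(PAGE)
    # Functional requirement: the order form must expose a submit button.
    assert "submit-order" in finder.button_ids

test_submit_button_is_present()
```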
Performance testing is a type of software testing that assesses whether an application meets performance requirements. It allows teams to test, for example, how long an application (or different parts of an application) takes to load and whether an application continues to operate normally when it receives high volumes of requests. This is important because even if an application provides the functionality developers intend, it may not meet user requirements if the application is too slow or unreliable to deliver the functionality in the way users expect.
As with functional testing, most types of performance tests can be automated via frameworks like Selenium. But complex performance tests may require a manual approach.
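A minimal sketch of a performance assertion might look like the following (the operation under test and the latency budget are both illustrative; real budgets come from user requirements, and real performance tests measure many runs under realistic load):

```python
import time

# Hypothetical operation whose latency we want to keep within budget.
def search(items, target):
    return target in items

def test_search_meets_latency_budget():
    items = list(range(100_000))
    start = time.perf_counter()
    search(items, 99_999)
    elapsed = time.perf_counter() - start
    # Illustrative threshold: fail the test if the operation is too slow.
    assert elapsed < 0.5, f"search took {elapsed:.3f}s, over budget"

test_search_meets_latency_budget()
```

Dedicated load-testing tools extend this idea by generating high volumes of concurrent requests and measuring latency distributions rather than a single timing.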
Security testing is any type of test that checks software for vulnerabilities, configuration mistakes, or other issues that could expose an application to attack. There are multiple types of security tests, such as:
Software Composition Analysis (SCA), which assesses applications for insecure dependencies or components.
Static Application Security Testing (SAST), which scans source code for problems like injection vulnerabilities.
Dynamic Application Security Testing (DAST), which simulates malicious interactions with running applications to detect potential vulnerabilities.
Because each type of security test reveals different types of risks, teams typically perform multiple types of security testing.
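To make the SAST idea concrete, here is a deliberately toy sketch of a static rule that flags string-formatted SQL queries, a common injection risk. Real SAST tools parse code fully and apply hundreds of rules; this single regex is purely illustrative:

```python
import re

# Toy static-analysis rule: flag SQL built via string formatting
# (e.g. "%s" interpolation or f-strings), which risks SQL injection.
RISKY_SQL = re.compile(r'execute\(\s*["\'].*%s|execute\(\s*f["\']')

def scan_source(source: str) -> list:
    """Return the line numbers that match the risky pattern."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if RISKY_SQL.search(line)]

code = '''cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))'''

# Only line 1 (string formatting) is flagged; line 2 uses a
# parameterized query, which is the safe pattern.
assert scan_source(code) == [1]
```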
API testing is a type of test designed to evaluate whether the Application Programming Interfaces (or APIs) used by an application work as intended. API testing can evaluate all aspects of API behavior, including reliability, performance, security, and more. API testing can also cover both internal APIs (meaning those that an application uses to manage data and integrate internal services) and external APIs (which are APIs that external services can use to connect to an application).
Because actual API servers can be complex to deploy in a testing environment, API testing often relies on a method called API mocking. Under this approach, developers and test engineers generate simulated API calls and responses, then evaluate how the application interacts with them.
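A minimal sketch of this approach, using Python's standard-library `unittest.mock` (the application function and API endpoint shown are hypothetical), replaces the real API client with a scripted stand-in:

```python
from unittest.mock import Mock

# Hypothetical application code that depends on an API client.
def get_username(client, user_id):
    response = client.get(f"/users/{user_id}")
    return response["name"]

def test_get_username_with_mocked_api():
    # The mock stands in for the real API server: we script its
    # response, then verify the application handles it correctly.
    client = Mock()
    client.get.return_value = {"id": 7, "name": "alice"}
    assert get_username(client, 7) == "alice"
    # Also verify the application called the API as expected.
    client.get.assert_called_once_with("/users/7")

test_get_username_with_mocked_api()
```

This lets the test exercise the application's API-handling logic without deploying an API server, which is exactly what mocking is for.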
Visual/UI testing is a testing method designed to detect issues related to the appearance and layout of an application. For example, if an image inside a Web application appears in a distorted fashion on devices with small screens, visual/UI tests should surface the issue.
It's important to understand that visual/UI tests don't typically collect feedback from actual users on whether they like an application's visual interface. This is instead a type of automated test that validates whether the application UI renders in the way developers intended. Determining whether users actually like the UI or not is a separate issue (which brings us to usability testing, discussed below).
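One common automated approach here is snapshot testing: render the UI, then compare the result against a previously approved baseline. The sketch below is heavily simplified (the "rendering" is a plain string standing in for a real screenshot or serialized DOM tree), but it shows the shape of the technique:

```python
# Hypothetical widget renderer; in real visual testing this would be a
# screenshot or a serialized component tree, not a plain string.
def render_widget(label: str) -> str:
    return f"[button] {label}"

# Baseline previously reviewed and approved by a human.
APPROVED_SNAPSHOT = "[button] Checkout"

def test_widget_matches_approved_snapshot():
    current = render_widget("Checkout")
    # A mismatch means the UI changed; a human then approves the new
    # rendering as the baseline or treats the change as a defect.
    assert current == APPROVED_SNAPSHOT

test_widget_matches_approved_snapshot()
```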
Usability testing evaluates the user-friendliness of software. Typically, usability tests involve giving humans who are representative of an application's target user base access to an application, then collecting feedback about what they like or don't like. Usability testing can identify a wide range of problems, such as poorly designed or extraneous features, confusing interfaces, and instances where a software feature passed automated performance tests but still performs too slowly to keep users happy.
Because usability is highly subjective, developers and quality assurance teams must often take a nuanced approach to evaluating usability testing feedback. They should also ensure that their approach to usability testing is as consistent as possible. Unlike most other types of tests, usability tests are difficult to automate (because they depend on manual human interaction with an application), so you can't run the same script to ensure consistency between tests. But you can establish a firm and consistent set of processes to guide users through usability testing and collect their feedback.
Regression testing ensures that changes to applications don't introduce problems (or "regressions"). Regression testing doesn't really represent a specific type of software test; instead, it's a testing goal that teams usually pursue by running several different types of particular tests, such as functional and performance tests.
If you integrate a broad set of different types of tests into your software development life cycle, performing explicit regression testing is often not necessary because any regressions introduced by application updates should be caught by the integration tests, functional tests, performance tests, and other tests you're already performing. However, for teams whose testing routines are less automated, comprehensive, or consistent, performing deliberate regression tests may be necessary.
Accessibility testing is a type of testing that ensures that an application works well for different types of users, including those who face challenges that don't apply to a majority of users. For example, accessibility tests can validate whether an application performs well for users with hearing or visual impairments.
Accessibility tests can be performed using a combination of automated and manual testing techniques. Some accessibility features, such as tools within an app that allow users to increase the size of fonts, can be assessed automatically, but collecting manual feedback from users with particular accessibility needs helps provide additional context and detect accessibility issues that automated tests may overlook.
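As an example of the automated side, here is a hedged sketch of one widely used accessibility rule: every image should carry alt text so screen readers can describe it. The standard-library HTML parser stands in for a real accessibility scanner, and the sample markup is hypothetical:

```python
from html.parser import HTMLParser

# Illustrative automated accessibility rule: every <img> needs alt text
# so that screen readers can describe the image to visually impaired users.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing_alt += 1

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
# The chart image lacks alt text, so one violation is reported.
assert checker.missing_alt == 1
```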
End-to-end testing is a term that refers to all of the types of software tests that teams would normally perform during the software development life cycle. It's not a specific type of test as much as it's a catch-all phrase that encompasses all of the testing requirements of modern applications.
The goal of end-to-end testing is to ensure that all aspects of an application meet all requirements to which they are subject. If you perform each type of software test described above, you're performing end-to-end testing.
You can perform most types of software tests on an ad hoc basis, or via processes that are not linked to other aspects of software development. But in most cases, software testing is faster and more efficient when it is integrated into the software development life cycle (SDLC) such that tests happen routinely and (to the extent possible) automatically.
For example, running unit tests as part of the SDLC means that whenever developers write a new unit of code, a unit testing routine automatically begins. Likewise, performance testing as part of the SDLC would mean that as soon as a new application release candidate is compiled and deployed into a testing environment, automated performance tests occur.
As noted above, most types of software tests can be performed automatically using frameworks that let engineers define what they want to test using code, then execute tests automatically.
In most cases, automated testing saves substantial amounts of time. It also keeps tests more consistent because tests that are based on the same code will be identical.
However, as we saw above, certain types of tests, such as usability tests, are difficult to perform automatically. And even when you can automate tests, you may not have the development resources available to write testing code for every single test that you want to run. For these reasons, even highly efficient teams should expect to perform a certain number of tests manually.
Automated testing is also sometimes contrasted with comprehensive testing, but this comparison can be a bit misleading. Comprehensive testing refers to all types of tests that must take place to ensure an application works as required for all users – and also that it's secure.
Thus, comprehensive testing is not the opposite of automated testing. On the contrary, automated tests are usually an important component of comprehensive testing, since running every type of test is easier when you automate them. That said, just because you automate some or most tests doesn't mean you're performing comprehensive testing. You need to be covering every key type of test for your testing routine to be comprehensive.
Continuous testing is another buzzword you may encounter in discussions about types of software tests. Continuous testing means running tests routinely and automatically as part of the software development process. It implies that tests are well integrated into the SDLC – often with the help of cloud-based test infrastructure that makes it possible to run a wide variety of tests on demand – and its goal is to ensure that tests are as efficient and comprehensive as possible.
That said, continuous testing doesn't mean that testing happens on a literally continuous basis. There may be moments when no testing is taking place if you're waiting for new code to test. But as long as you test regularly, and provided that each step in the SDLC triggers relevant types of tests, you can say you're performing continuous testing.
It's common to talk about software testing as if it's a single thing. But in fact, software testing is a broad and dynamic category that extends to many different types of tests which can be performed in many different ways. And although there's no one-size-fits-all approach to testing, your chief goal should always be to ensure that your tests are as automated, comprehensive, and scalable as possible – while also recognizing that certain types of tests will always be more difficult and time-consuming than others.