What is Automation Testing?
Automation testing is a software testing technique that compares the actual outcome of a test with the expected outcome. This can be achieved by writing test scripts or by using an automation testing tool. Test automation is used to automate repetitive tasks and other testing tasks that are difficult to perform manually.
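The idea above can be shown with a minimal test script. The function under test and the test name here are hypothetical; a real project would let a runner such as pytest or unittest discover and run the test.

```python
def add(a, b):
    # Hypothetical function under test.
    return a + b

def test_add():
    # The core of automation testing: compare the actual
    # outcome with the expected outcome.
    actual = add(2, 3)
    expected = 5
    assert actual == expected, f"expected {expected}, got {actual}"

# A test runner would normally discover and call this automatically.
test_add()
```

Because the comparison is scripted, it can be repeated on every build with no manual effort.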
Automated hardware testing validates or verifies a product’s performance before it leaves the factory, using special automated test hardware and software. The product being tested is generally called the UUT (Unit Under Test), or sometimes DUT (Device Under Test).
The testing may be semi-automated, where a human is involved during some part of the testing process (maybe probing a specific point, or moving a connector), or it may be fully-automated, where the operator may place the UUT in a fixture and press a “go” button, but then it’s hands off until the testing is complete.
If you are creating a complex electronic system with many different circuits, boards, and components, you need a way to test its parts in isolation. Similar to unit testing in software, you create an electronic rig that simulates the inputs to the device being tested and measures its outputs. The rig can send a large number of different signals, measure the results, and compare the values against expectations. This is much easier than applying the signals manually and recording the output voltages on paper. You will eventually test the devices in the real world, but testing them in a lab environment with an automated setup reduces cost and improves quality.
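The rig's stimulus-and-measure loop can be sketched as follows. All names are hypothetical, and the UUT is simulated as a simple voltage divider; a real rig would drive instruments (for example over SCPI) instead of calling a Python function.

```python
def uut_output(v_in):
    # Simulated unit under test: a voltage divider that halves the input.
    return v_in / 2.0

def run_sweep(inputs, tolerance=0.05):
    # Apply each stimulus, measure the output, and compare
    # the measured value against the expected value.
    results = []
    for v_in in inputs:
        measured = uut_output(v_in)
        expected = v_in / 2.0
        passed = abs(measured - expected) <= tolerance
        results.append((v_in, measured, passed))
    return results

report = run_sweep([1.0, 3.3, 5.0, 12.0])
all_passed = all(passed for _, _, passed in report)
```

The sweep produces a pass/fail record for every stimulus, which is exactly the kind of log a factory test station archives for each serial number.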
Automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests that are often avoided during manual testing can be run unattended, and they can even be run on multiple computers with different configurations. Automated software testing can look inside an application and see memory contents, data tables, file contents, and internal program states to determine whether the product is behaving as expected. Test automation can easily execute thousands of complex test cases during every test run, providing coverage that is impossible with manual testing.
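One way automation reaches that kind of case count is by generating inputs. As a hedged sketch (the `clamp` function is a hypothetical example), `itertools` can enumerate combinations a manual tester could never cover exhaustively:

```python
import itertools

def clamp(x, lo, hi):
    # Hypothetical function under test: constrain x to [lo, hi].
    return max(lo, min(x, hi))

# Generate every combination of input value and bounds.
cases = list(itertools.product(range(-5, 6), [0], [3]))

# Collect any case where the result falls outside the bounds.
failures = [(x, lo, hi) for x, lo, hi in cases
            if not (lo <= clamp(x, lo, hi) <= hi)]
```

Scaling the input ranges scales the case count for free, which is where automated runs outpace manual testing.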
For more detail on API testing and UI testing, see:
- UI Testing Information
- API Testing Information
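As a small taste of API testing, the check below validates the shape of a response. The endpoint URL and payload are hypothetical, and the HTTP call is stubbed so the checking logic itself can run anywhere; a real test would use an HTTP client such as `requests` against a live or mocked server.

```python
import json

def fake_get(url):
    # Stand-in for an HTTP client call, e.g. requests.get(url).json().
    # Returns a canned response so the example is self-contained.
    return json.loads('{"status": "ok", "items": [1, 2, 3]}')

def test_list_endpoint():
    # Assert on the response the same way a real API test would.
    body = fake_get("https://api.example.com/items")
    assert body["status"] == "ok"
    assert len(body["items"]) == 3

test_list_endpoint()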
The trick to automating performance testing in a meaningful manner is to take a level-based approach. Level-based performance testing is a process in which automated performance tests are executed on components at various levels of the technology stack. Performance testing, particularly automated performance testing, is best done in an isolated manner at each level of the stack. Each level refers to a different part of the application: its components/modules, APIs, web services, and database-specific tests.
Running short automated performance test scripts against various levels of the technology stack is a more realistic approach than a top-level assault on the system overall. There are just too many parts in play to be adequately accommodated by a single, high-level approach to performance test automation. A level-based approach to performance testing lends itself well to automation.
An end-to-end business flow specific Performance Test Strategy that runs a sequence of these component level tests is still critical to test the overall response of the application, but testing components at the early stages (shift-left approach) can reduce the testing time and help in early detection of performance issues.
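A single component-level (shift-left) performance check can be as simple as timing one function against a budget. The function and its 50 ms budget here are hypothetical examples:

```python
import time

def build_index(rows):
    # Component under test: index rows by their id.
    return {row["id"]: row for row in rows}

rows = [{"id": i, "value": i * 2} for i in range(10_000)]

# Time just this component, in isolation from the rest of the stack.
start = time.perf_counter()
index = build_index(rows)
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 0.05  # hypothetical performance budget for this level
within_budget = elapsed < BUDGET_SECONDS
```

Run on every commit, a check like this flags a performance regression in one component long before an end-to-end load test would surface it.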
Automated penetration testing tools have multiple key benefits for an organization. To start with, automated scans can be performed more quickly than manual scans, so new vulnerabilities are detected sooner.
Sometimes the testing looks for known vulnerable versions of systems (e.g. old versions of a web server), probes for specific attack vectors (stored XSS, CSRF), or even attempts to overload a system to see if it will reveal information (DDoS, brute force).
In addition, tools may use more active methods that actually attempt to break into a system, rather than just scanning for vulnerabilities passively. For example, testing password forms to see whether they can be broken by brute-force attacks, dictionary attacks, and so on.
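A dictionary attack against your own login handler can be sketched as below. The stored password and wordlist are hypothetical, and this tests a local function; real tools automate the same idea at scale against live authentication endpoints.

```python
import hashlib

# A deliberately weak password, stored as a hash (hypothetical example).
STORED_HASH = hashlib.sha256(b"sunshine").hexdigest()

def login(password):
    # Stand-in for the system's password check.
    return hashlib.sha256(password.encode()).hexdigest() == STORED_HASH

# Try each candidate from a wordlist until one succeeds.
wordlist = ["123456", "password", "sunshine", "qwerty"]
cracked = next((word for word in wordlist if login(word)), None)
```

If `cracked` is not `None`, the automated test has demonstrated that the password falls to a common wordlist, which is exactly the finding a penetration test report would raise.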
Finally, another type of automated testing is compatibility testing. In the software world, you may need to do cross-browser testing, to test that the same web page or application works on different web browsers.
You may also have to test the same application on different mobile devices (iOS, Android), or a hardware system may need to work on different voltages (230 V for Europe, 115 V for North America), different USB versions, etc. This kind of testing is called compatibility testing, and it can be complex and expensive to perform because you have to maintain so many different types of devices.
To automate this kind of testing you may want to use simulators that can imitate different devices, browsers, or operating systems. In the hardware world it gets trickier, but you can develop emulators and test labs that cover the different possible environments.
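Compatibility testing across simulated profiles can be sketched like this. The device profiles and the `render_layout` function are hypothetical; a real suite would drive actual browsers or devices, for example via Selenium or a device lab.

```python
# Hypothetical device profiles standing in for real browsers/devices.
PROFILES = [
    {"name": "desktop-chrome", "width": 1920},
    {"name": "tablet", "width": 768},
    {"name": "phone", "width": 375},
]

def render_layout(width):
    # Component under test: choose a layout for the viewport width.
    if width >= 1024:
        return "three-column"
    if width >= 600:
        return "two-column"
    return "single-column"

# Run the same check once per profile, just as a cross-browser
# suite runs the same test once per browser.
results = {p["name"]: render_layout(p["width"]) for p in PROFILES}
```

Adding a new device to the matrix is then one more profile entry rather than a new manual test pass.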