
QA Talk: Test automation procedures.

While some of you are on summer vacation, enjoying ice cream and sunbathing, the Quality Assurance (QA) team has been working all summer to automate test procedures. Read the QA department's thoughts on Zisson's test automation future, and on what we are doing at the moment.

Test automation uses software to control the execution of tests and to compare actual outcomes with predicted outcomes. It can automate repetitive but necessary tasks in a formalized testing process, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.

We asked Kristian and Arthi in Zisson's QA department to give us an update on where Zisson stands regarding test automation and manual testing, and on where they think we are headed in the next few years.

Kristian, where are we now regarding automated testing?

When talking about test automation, we cannot avoid saying something about test levels and test approach. Tests can be divided into static and dynamic tests, where dynamic tests are those in which we execute code. Then there are functional and non-functional tests. Functional tests verify functionality; in general, cases where we can measure a predicted behavior as the result of a defined input. Non-functional tests cover performance, response time and so on. Functional tests can further be divided into unit, integration and system level. Unit tests run against a single component, and automated unit tests are essentially code that tests parts of the code, such as functions and classes. Integration tests are test code that verifies the interplay between two or more components.

System-level tests are where the complete system is tested, either as a black box or as a white box. Black-box testing means that the tester knows nothing about how the system works beyond its known interfaces (e.g. the contact center solution's web UI combined with a cell phone, in our case), while white-box testing combines the normal black-box approach with some known APIs, webhooks or similar.

System-level tests exercise most components in combination and can give a better sanity check of the system as a whole, and a single test case can cover functionality across all the components. This comes at the cost of pinpointing failures precisely, and of execution time. A system-level test can take minutes to execute a single scenario, and the root cause of a failure can be very difficult to diagnose. Unit tests, by contrast, run very quickly and make the location of a failure easy to diagnose, but they do not test the complexity of the complete system. Integration tests sit in between on both execution time and test coverage. All the levels are therefore important for assuring quality and securing functionality with an acceptable feedback rate on failures; it is not possible to prioritize only one test level and expect to discover every kind of defect. In fact, a good test regime is no guarantee of zero defects, but it will have a huge impact on the tally.


Besides the different test levels, there are other aspects of test automation. The tests themselves are of course the first step in the toolchain, but they lose value without a regime for test execution, test metrics and follow-up actions. All actions taken in response to test failures depend on the test metrics being both correct and intuitive; test automation without good test metrics loses much of its value.

During the spring, a new framework for running system-level tests has been built, using TestCafe.

TestCafe is a pure Node.js end-to-end solution for testing web apps. It takes care of all the stages: starting browsers, running tests, gathering test results and generating reports.
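To give a feel for what such a system-level test looks like, here is a minimal TestCafe sketch. The page URL, selectors and credentials are hypothetical placeholders, not Zisson's actual UI; a file like this is run by the TestCafe CLI (e.g. `npx testcafe chrome login.test.js`), not by Node directly:

```javascript
import { Selector } from 'testcafe';

fixture('Agent login')
    .page('https://example.com/login'); // hypothetical URL

test('agent can log in and reach the dashboard', async t => {
    await t
        .typeText('#username', 'test-agent')   // hypothetical selectors
        .typeText('#password', 'secret')
        .click('#login-button')
        // Compare the actual outcome with the predicted one.
        .expect(Selector('h1').innerText).eql('Dashboard');
});
```

TestCafe starts the browser, drives the page through its real UI, and reports the result, which is exactly the black-box system-level testing described above.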

Is manual testing dead? Arthi explains why we still need to do manual unit testing, and how we do it today.

What is manual unit testing?

Manual unit testing is the process of manually testing a product and its features, where a human tester plays the role of an end user to ensure that product features work as intended. In many cases, test cases that evolved from manual testing have been the basis for test automation.

Is manual testing dead?

Not at all!

1. Any newly developed feature has to be tested manually to ensure that it has been implemented according to specification. Acceptance testing helps us verify that every line of code works as intended from the end user's point of view.

2. Even though test automation will be a major part of the testing framework in the future, manual testing is still indispensable when it comes to picking up intricate issues (either found by a tester or reported as customer-identified bugs) that were not covered by the automation framework.

3. In addition, we need manual testing to document previously unidentified test case scenarios, which can become input for test automation.

4. Once a test automation framework is developed, good and holistic manual acceptance testing of the framework is needed to determine its effectiveness.

With all the reasoning above, we can confidently say that manual testing is not dead, and that it is still required to ensure the quality standards of Zisson's products.

How do we do acceptance testing in Zisson?

“Given, When and Then”

The complexity of testing any utility function can be handled by answering these three critical statements:

Given: an event or product feature functioning as per the product specification (Call button feature and call trigger: by pressing the call button, a call should be triggered to a customer).

When: when, or upon what condition, the given feature must be triggered (Call button feature and call trigger: the call is triggered only when a specified trigger happens, such as the agent pressing the call button).

Then: what follows the specified event of the product feature (Call button feature and call trigger: following the call trigger, what happens if the customer picks up, or if the call is hung up?).

All of the critical statements above are applied extensively in manual acceptance testing of new features.

Manual unit testing is base-level testing of individual units, of either newly developed code or a bug fix, to ensure that it works as intended and does not create problems for other dependent features.

How do we ensure good manual testing in Zisson?

Every piece of code is pulled from the repository and tested on our local machines before being merged to master. This greatly improves the quality of the code and eliminates bugs at an early stage. While testing, we also compare against the product owner's specification and make sure the code works as intended. Once all these points are cleared, QA approves the pull request sent by the developers, upon which the code is merged to master. If a bug is identified at the code or branch level, the QA department will not approve the pull request, and the bugs are communicated to the developers. Once the code improvements are done, the developers send a new pull request to the QA team for approval.


Want to read how other companies conduct their testing?

//ptgmedia.pearsoncmg.com/images/9780321803023/samplepages/0321803027.pdf

//smartbear.de/blog/test-and-monitor/how-spotify-does-test-automation

Source: Zisson, Wikipedia, PTG Media, Smartbear, Google