An RPA project involves building bots to perform certain tasks. Two phases of an effective RPA build, development and testing, go hand in hand. Testing the developed bot is just as important as the bot design and build itself.
A key to success in RPA testing is the test data. Test data should be gathered, and test scenario discussions should be conducted before the development process begins, ideally during the design phase. The business sponsor should provide these while the RPA team writes the solution design. It’s important to understand how to treat test data so you get the most value from the testing process and from the bot over its lifecycle.
Understanding the Process
The discovery and assessment stage of an RPA build should include a deep understanding of the business process that is being automated. For the developer, who might or might not be involved in those initial stages, it can be difficult to gain that deep understanding. It is best practice to run through a few test scenarios manually (i.e., a human with a keyboard) within the applications, using test data, before a single line of code is written. This will help flush out the unknowns that were not discussed in the discovery stage, scenarios that do not occur frequently, and scenarios that come up only in the development environment. It will also highlight environmental nuances that may exist between what humans do today in the production environment and the application behavior and configuration in a non-production environment.
Defining Test Data
Wrong or incomplete test data can lead to inaccurate development and result in defects during user acceptance testing (UAT) and deployment. This can negatively impact the perception of the bot and can even necessitate its wholesale re-design and re-development.
Verifying the accuracy of the test data in the initial phases will increase the efficacy of the automation. Relying on unverified test data during the development or testing phases can lead to inaccurate results. Therefore, the test data, and the purpose of each data record, must be checked and confirmed beforehand.
Two types of test data must be generated for RPA:
- Happy path – This data covers test cases with a clean path that are part of the automation’s “main job.” Happy path data typically covers the largest percentage of the type of work being automated and includes data that the bot can execute on from beginning to end.
- Exception path – This is the data that requires exception handling. These are scenarios that are less expected but known to the business and require a different route. They should be reported as exceptions on completion of their path. Using this kind of data helps test system errors such as application errors, invalid input data, etc.
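The split between the two data types can be illustrated with a small sketch. The record fields, validation rules and outcome labels below are hypothetical assumptions for illustration only; real fields depend on the process being automated. The idea is simply that each test record is either executable end to end (happy path) or routed to a known exception outcome:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    """A hypothetical unit of test data the bot would process."""
    record_id: str
    amount: float         # assumed field: transaction amount
    account_valid: bool   # assumed field: result of an input validation

def route(record: TestRecord) -> str:
    """Classify a record: happy path, or a known business exception."""
    if not record.account_valid:
        return "exception: invalid input data"
    if record.amount <= 0:
        return "exception: invalid amount"
    return "happy"   # bot can execute this record from beginning to end

# A balanced test data set covers both paths:
records = [
    TestRecord("R1", 120.0, True),   # happy path
    TestRecord("R2", 50.0, False),   # exception path: bad account
    TestRecord("R3", -5.0, True),    # exception path: bad amount
]
for r in records:
    print(r.record_id, "->", route(r))
```

Records that land in an exception outcome should appear in the bot's exception report, not as failures, which is exactly what the exception-path data is there to prove.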
Two types of test data inputs impact how you consume the data:
- External input/trigger – This determines how the test data is generated. In many organizations, the lower environments already contain data that replicates production; in that case it is easier and often quicker to obtain test data, since it is relatively current, and if it is not, a refresh from the production environment does the trick. In other cases, where the data is not readily available, the business must create it, which can be a painstaking process. This must be done carefully to ensure that all test scenarios are covered and there is a sufficient volume of test data.
- Consumed data – Be sure you have a plan for running out of data. Is the data re-usable? Is there an easy way to retrieve it once it is consumed? In some cases the data is a refresh of the production environment, and retrieving it is straightforward. In other cases, where the data has been specifically created, the developer needs to be mindful of how the data is used and be frugal with it, stopping just short of exhausting the supply.
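One simple way to be frugal with created test data is to track consumption explicitly. The sketch below is a hypothetical helper (the class name, reserve threshold and record identifiers are assumptions, not part of any RPA tool) that refuses to hand out the last records so the team is warned before the pool is exhausted:

```python
class TestDataPool:
    """Tracks consumable test records so the pool is not silently exhausted."""

    def __init__(self, records, reserve=1):
        self._available = list(records)
        self._consumed = []
        self._reserve = reserve  # assumed policy: always keep this many unused

    def take(self):
        """Hand out the next record, or fail loudly when near exhaustion."""
        if len(self._available) <= self._reserve:
            raise RuntimeError(
                "Test data nearly exhausted - refresh or recreate data first"
            )
        record = self._available.pop(0)
        self._consumed.append(record)
        return record

    @property
    def remaining(self):
        return len(self._available)

pool = TestDataPool(["case-001", "case-002", "case-003"], reserve=1)
print(pool.take())     # case-001
print(pool.remaining)  # 2
```

Whether the guard raises an error, logs a warning or triggers a data refresh is a team decision; the point is that consumption is measured rather than discovered mid-test.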
Creating Test Scenarios
When creating test scenarios, make sure every scenario in scope for the project has a corresponding test scenario. Also make a clear connection between each test scenario, its test data and the expected output. Test scenarios must be as detailed as possible and focus on expected outcomes, so each run is clearly either a pass or a fail.
It is ideal to have the test scenarios ready for use before the development work starts. Availability of test scenarios during solution design will help ensure that the most common scenarios are considered throughout the design. These test scenarios can then be used as a guide for solutioning.
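The scenario-to-data-to-expected-output connection can be made concrete as a simple catalog. The scenario IDs, descriptions and outcome strings below are hypothetical examples; the structure is what matters: each entry names exactly one expected outcome, so comparing it to the bot's actual outcome yields an unambiguous pass or fail:

```python
# Hypothetical test scenario catalog: each scenario ties together its
# test data record and a single, unambiguous expected outcome.
scenarios = [
    {"id": "TS-01",
     "description": "Standard request processed end to end",
     "data": "case-001",
     "expected": "completed"},
    {"id": "TS-02",
     "description": "Request with invalid account number",
     "data": "case-002",
     "expected": "business exception"},
]

def evaluate(scenario, actual_outcome):
    """Compare the bot's actual outcome to the expected one: pass or fail."""
    return "pass" if actual_outcome == scenario["expected"] else "fail"

print(evaluate(scenarios[0], "completed"))     # pass
print(evaluate(scenarios[1], "system error"))  # fail
```

A catalog like this doubles as the guide for solutioning mentioned above: the developer can see at design time which outcomes the bot must produce and report.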
The importance of test data and test scenarios cannot be stressed enough in the overall success of a bot build. The lack of a plan and insufficient volume of good test data and appropriate test scenarios are among the most common reasons bot builds take longer than they should. Providing valid and complete test data for all key scenarios enables business stakeholders to meet performance, budget and timeline expectations for the development, testing and production release of an automation.
ISG helps companies navigate the rapidly evolving automation market and build effective bots in their environment. Contact us to find out how we can help you get started.