The Rise And Rise Of Automated Testing For Fintech Products  

 

Testing of fintech products and platforms is increasingly a mix of human and automated work, thanks to advances in technology, machine learning and artificial intelligence. The movement away from manual work can allow financial services organisations to boost product development, improve efficiencies, enhance quality and mitigate risk.

But are we ready to hand this process over for full automation?

Future of jobs

A glance at the skills outlook for 2022 in the World Economic Forum's Future of Jobs Report 2018 highlights growth across a number of very people-centric skills, while manual dexterity, memory, quality control and technology maintenance are on the decline.

Image source: Future of Jobs Report 2018, World Economic Forum


The skills required to test technology products across all stages of development include analytical thinking, systems analysis, problem solving and programming, all of which remain in demand. Testers who can formulate test plans that efficiently and correctly capture the required checks to a high standard are critical in any tech team. But for repetitious, high-volume testing work, automated testing is far more efficient.

The fundamental questions to ask

When considering the automation of testing on new fintech products, existing products or platform upgrades, systems integrations, or day-to-day operational checks, there are three fundamental questions to ask:

  1. Are there repetitive and programmable components in the product delivery?

  2. Does the system architecture support automated processes?

  3. Is the data quality good enough to shift fully from manual to automated processes?

 

1. Are there repetitive and programmable components in the product delivery?

A repetitive task that is programmable can be automated. Automation in this case can reduce the risk of manual error and oversight, as well as the time and work effort involved.

An example of a repetitive and programmable task within product development, and one that is already commonly automated, is load or stress testing of systems.

For this type of testing, both the quantity and quality of test cases are important, because the purpose of the test is to ensure the system performs as intended. For example, a customer website might be stress tested to ensure that during peak visit times the site will not break and become unavailable. The load test ensures the site can handle the required number of concurrent users.

For this to be effective, the test should be automated by a program, which produces better and more reliable results. Running the test manually, with individual users accessing the site, would not be effective.
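A minimal sketch of this idea in Python follows. The `visit_site` function is a stand-in for a real HTTP request to the site under test; a real load test would use a dedicated tool and check status codes and response times against agreed thresholds.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def visit_site(user_id: int) -> bool:
    """Stand-in for a real HTTP request to the site under test."""
    time.sleep(0.01)  # simulate network and server latency
    return True       # a real check would verify status code and body

def run_load_test(concurrent_users: int = 50) -> float:
    """Fire requests from many simulated users at once and
    return the fraction of requests that succeeded."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(visit_site, range(concurrent_users)))
    elapsed = time.time() - start
    print(f"{concurrent_users} users served in {elapsed:.2f}s")
    return sum(results) / len(results)

if __name__ == "__main__":
    success_rate = run_load_test(50)
    assert success_rate >= 0.99, "site failed under load"
```

Because the concurrent users are simulated in code, the same scenario can be re-run on every release at no extra effort, which is exactly what makes this class of testing a strong automation candidate.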

Regression testing is another area that could be considered repetitive or repeatable. Regression testing is a suite of cases that are re-executed each time a change is implemented, or perhaps periodically for internal auditing purposes. Here the system is tested to ensure it is functioning as expected after a recent change has been implemented.

For example, a digital customer website (“site”) that provides customers with access to view and update their personal details, or make changes that impact their finances, such as changes to investments or insurance, will require regression testing. A suite of regression test cases will be required to ensure the site works as expected when a change to the site is implemented. A potentially different suite of regression test cases will be required if a change is made to an integrated system. The site may be integrated with a system that stores and maintains the customer data. That system will likely undertake a release or change that does not require an update to the site, but the site must be tested to ensure that it continues to work as expected with the integrated system.

These tests can expand over time as new product enhancements are developed and rolled out, to ensure the product continues to perform as expected. How often regression testing is required will depend on the frequency and scope of enhancements. Whether the test cases can be programmed for automation will have similar dependencies.
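A regression suite in code is essentially a table of known-good inputs and outputs that is re-run after every change. The sketch below uses a hypothetical fee calculation as the function under test; in practice the suite would call the deployed site or its API rather than a local function.

```python
def calculate_annual_fee(balance: float) -> float:
    """Stand-in for site logic: a 0.5% admin fee, capped at $500."""
    return min(balance * 0.005, 500.0)

# Each case is (input, expected). The suite is re-executed after every
# release, including releases of integrated systems that feed the site.
REGRESSION_CASES = [
    (0.0, 0.0),
    (10_000.0, 50.0),
    (200_000.0, 500.0),    # fee cap applies
    (1_000_000.0, 500.0),  # cap still applies at large balances
]

def run_regression_suite() -> bool:
    """Re-run every known-good case; fail loudly on any regression."""
    for balance, expected in REGRESSION_CASES:
        actual = calculate_annual_fee(balance)
        assert actual == expected, f"regression: {balance} -> {actual}"
    return True
```

Adding a new case after each enhancement is how the suite "expands over time": the table grows, but the cost of re-running it stays close to zero once automated.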

Other testing areas of the product development cycle include unit, integration and system testing. These tests are more specific, targeting the development of new requirements. Some might be programmable but are not necessarily repetitive or repeatable. After an enhancement is implemented, regression test plans may need updating with new cases to ensure the enhancement continues to perform as expected.

 

2. Does the system architecture support automated processes?

Some systems are more complex than others. Complexity might be high for a mature organisation that utilises legacy systems at the core of their business. Often these legacy systems do not perform as fast, as well, or as required to keep up with new demands. For other organisations, complexity might exist because of the sheer number of integrated systems. The ecosystem may not be as efficient as it could be.

More broadly, the automation of complex business processes using digital technology to deliver superior customer experience and operational efficiency is a cornerstone of digital transformation.

There are supporting methods to leverage in this space, such as Robotic Process Automation (RPA), a notable trend in fintech, and Test-Driven Development (TDD). RPA is software that automates repetitive human processes actioned via an application user interface. Claims and forms processing, or transferring data from one system to another, are the kinds of areas that could benefit from RPA software.
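Real RPA tools drive the target application's user interface, but the underlying task is usually a mechanical transfer of records. The sketch below uses two in-memory dictionaries as hypothetical source and destination systems to show the shape of the work being automated.

```python
# A sketch of the repetitive data transfer that RPA tools automate.
# Two in-memory dicts stand in for the source and destination systems;
# a real RPA bot would re-key these records through the target app's UI.

source_system = {
    "C001": {"name": "A. Smith", "postcode": "3000"},
    "C002": {"name": "B. Jones", "postcode": "2000"},
}
destination_system: dict = {}

def transfer_records(source: dict, destination: dict) -> int:
    """Copy each customer record across, as an operator would re-key it."""
    moved = 0
    for customer_id, record in source.items():
        destination[customer_id] = dict(record)  # copy, don't share state
        moved += 1
    return moved

count = transfer_records(source_system, destination_system)
print(f"transferred {count} records")
```

The value of RPA in this scenario is not the copy itself but removing the re-keying step, which is slow and error-prone when done by hand.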

TDD is essentially a software development process that starts with writing an automated test case, then writing the smallest amount of code needed to make that test pass, before refactoring into efficient, good quality code that can be easily enhanced. This method forces an up-front assessment of the requirements before the code is written, and it allows for smaller and more frequent development cycles.
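The red-green-refactor rhythm of TDD can be illustrated with a small example. The rounding rule and function name below are hypothetical, chosen only to show the order of steps: the test exists before the implementation.

```python
import unittest

# Step 1 (red): the test is written first, before any implementation,
# for a hypothetical half-up rounding rule on customer transactions.
class TestRoundToCents(unittest.TestCase):
    def test_rounds_half_up_to_nearest_cent(self):
        self.assertEqual(round_to_cents(10.005), 10.01)
        self.assertEqual(round_to_cents(10.004), 10.00)

# Step 2 (green): the smallest implementation that makes the test pass.
def round_to_cents(amount: float) -> float:
    from decimal import Decimal, ROUND_HALF_UP
    quantized = Decimal(str(amount)).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)
    return float(quantized)

# Step 3 (refactor): with the test as a safety net, the code can now be
# cleaned up or optimised without changing its observable behaviour.

if __name__ == "__main__":
    unittest.main()
```

Writing the test first is what delivers the "up-front assessment" the article describes: the requirement (half-up rounding to the cent) must be stated precisely before any production code exists.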

A cost-benefit analysis can help determine whether automation is feasible and what methods may work. Where full testing automation may not be feasible, there might be opportunities to introduce part-automation or streamlined processes.

With the right frameworks in place, automation of testing can help with improving efficiencies by freeing up resources from the repetitive manual tasks and speeding up delivery times. Risks can be managed by reducing human error along with increased transparency through reporting and monitoring. Improvement in overall quality should be a by-product.

However, not all processes can be automated, and perhaps some processes should not be. Investing in best practice frameworks and streamlined processes, resulting in the formation of strong teams, can be a good start, particularly where there are repeatable, repetitive and potentially programmable tasks involved.

 

3. Is the data quality good enough to shift fully from manual to automated processes?

Underlying any attempt at testing automation is the quality of the data. If the data is questionable, the investment in automation quite simply will not be realised. Where there is less confidence in data quality, additional manual processes will more likely be required to ensure risks are mitigated for product delivery. The introduction of a data integrity solution like Investigate can very quickly and effectively resolve any known or latent data quality issues. This confidence naturally provides a higher level of comfort when considering the feasibility and return of automation. Plus, truly reliable and ongoing monitoring of data quality also provides the transparency required to mitigate risks.

While there will always be a place for human testing and intervention, automation is the way forward. Aiming to strike the right balance between technology and people is really what it’s about.

Regards

Katerina

Katerina Langas - Lead Consultant


If your organisation needs assistance with automated testing or data quality management, QMV can help. Please reach out to QMV for further information on p +61 3 9620 0707 or submit an online form.  


ABOUT QMV

QMV provides independent advisory, consulting and technology to superannuation, wealth management, banking and insurance organisations. 

  

Like what you see? Please subscribe to receive original QMV content! 

You may also benefit from our free monthly pensions and superannuation regulatory updates