-- EricKessler - 15 Oct 2020

Before starting an automation project, it is sometimes a good idea (or required) to do a Proof of Concept. Primarily, a proof of concept is done to show that moving forward with a project is possible and/or desirable.

Goals

When doing a proof of concept for an automation project, there are two main questions that should be answered:
  1. "Can it be done?"
  2. "Should it be done?"

Question 1: Can it be done?

Prep work

While it is true that most types of applications can be automated, there is always the possibility that the application that you are working on is not Just Another Web App, so it is still important to consider the feasibility of automation. Some questions to keep in mind while determining if automation is possible:
  • What tools and code libraries are available for automating this type of application? E.g. Selenium, Appium, Watir, etc. (A minimal feasibility sketch follows this list.)
  • If some already exist, how easy to use are they? Do they cost money? Are there good documentation and usage examples? What is the online community/support like?
  • If none exist, are you able to create some yourself?
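If a candidate tool is identified, a few lines of throwaway code are often enough to confirm that the application can actually be driven by it. A minimal sketch, assuming a web application and Selenium's Python bindings; the URL and element IDs are placeholders, not real application details:

    # Minimal feasibility check using Selenium's Python bindings (assumed tooling).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")          # placeholder URL
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_pass")
        driver.find_element(By.ID, "submit").click()
        # If the page can be driven this far, basic automation is feasible.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()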

Additionally, knowing what will happen to the project if it goes forward can influence some of your tool/design choices:
  • Who will be maintaining the automation code once the project is live?
    • Do they need the automation to be done in a particular programming language or with a particular test framework?
    • Will the automation need to integrate with other systems (e.g. CI or reporting systems)?
  • Will the application development team be involved with the testing at all? If not, the ability to add or change important automation needs (e.g. element IDs) may be lacking.
  • What platforms will the application be run on (IE/Chrome/Safari, Windows/OSX, iOS/Android, phone/tablet/desktop, etc.) and do the automated tests need to run on all of them as well? (A small cross-browser sketch follows this list.)
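If the tests do need to run on multiple platforms, it is worth confirming early that the chosen framework can parameterize over them. A rough sketch, assuming pytest and Selenium's Python bindings were the chosen tools; the browser list and URL are illustrative only:

    # Run the same test across browsers via a parameterized pytest fixture.
    import pytest
    from selenium import webdriver

    @pytest.fixture(params=["chrome", "firefox"])
    def driver(request):
        drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
        yield drv
        drv.quit()

    def test_home_page_loads(driver):
        driver.get("https://example.com")   # placeholder URL
        assert driver.title                 # the page rendered with some title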

Getting it done

Remember that the proof of concept is there to prove a point and investigate predictable technical challenges before doing a full test suite. Therefore, only write as many test cases (or automate pre-existing ones) as are needed to demonstrate the feasibility of automation. Once you have a test that involves manipulating text fields, buttons, and whatever else on one page, a different test that does essentially the same thing on a different page of the application doesn't really show anything new. While you will want to have multiple tests (if only to have a test 'suite' instead of just a test 'script'), your motivation should be to choose a small but sufficient number of tests that are representative of the possible user interactions with the application. Once you have a codebase that can simulate a user performing clicks, swipes, data entry, or whatever else a user does via the UI/API, building any given combination of those interactions into the shape of a test is straightforward, almost mechanical work (a short sketch of this idea follows). Save the "just one more test..." adventure for when the project moves beyond the proof of concept stage and gets the green light for more time and resource investment.
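To illustrate the point, the sketch below (with hypothetical helper names, URL, and element IDs, again assuming Selenium's Python bindings) shows how, once a few interaction primitives exist, any particular test is just a new arrangement of them:

    # Once basic interactions are automated, composing them into tests is routine.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def fill_field(driver, field_id, value):
        field = driver.find_element(By.ID, field_id)
        field.clear()
        field.send_keys(value)

    def click_button(driver, button_id):
        driver.find_element(By.ID, button_id).click()

    def test_search_returns_results():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/search")   # placeholder URL
            fill_field(driver, "query", "widgets")
            click_button(driver, "search-button")
            assert "results" in driver.page_source.lower()
        finally:
            driver.quit()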

Question 2: Should it be done?

Although it is true that almost any application can be automated, it is not always the case that testing should be automated. The analysis portion of the proof of concept is what helps determine if there is likely to be a good Return on Investment when adding automation to the project. Some questions to have in mind when trying to arrive at your answer:
  • What level of automation coverage is the target for the project to be considered a success? 75%? 95%?
    • What is the current level of effort involved in maintaining and executing any manual testing, and will building out and maintaining an automation suite cost less than that?
  • How much application behavior is there left to test that is not already covered with other existing testing (e.g. developer maintained unit/integration tests)?
    • What is the test pyramid likely to look like if additional automation is introduced to the project?
  • How frequently will the tests be run? Automating tests that only need to run a few times a year is not nearly as valuable as automating tests that run every week or day.
  • Are there any tests that should not or cannot be automated? E.g. tests that require the passage of long periods of time, invoke costly third party services, or create lots of data noise (like creating and deleting accounts).
    • What kind of test data and environment management is in place? Can tests be properly isolated and provided with fake endpoints/accounts in order to avoid "side-effect" issues? (A small isolation sketch follows this list.)
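One common way to keep tests isolated is to have each test create and clean up its own disposable data. A rough sketch, assuming pytest and the requests library, with entirely hypothetical service endpoints:

    # Each test gets its own throwaway account, removed afterwards to avoid data noise.
    import pytest
    import requests

    @pytest.fixture
    def temp_account():
        # create a disposable account before the test... (hypothetical endpoint)
        resp = requests.post("https://test-env.example.com/api/accounts",
                             json={"name": "poc-temp-account"})
        account_id = resp.json()["id"]
        yield account_id
        # ...and remove it afterwards so no side effects are left behind
        requests.delete(f"https://test-env.example.com/api/accounts/{account_id}")

    def test_account_can_be_fetched(temp_account):
        resp = requests.get(f"https://test-env.example.com/api/accounts/{temp_account}")
        assert resp.status_code == 200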

Deliverables

The 'proof' portion of your Proof of Concept should include at least the following:
  • A small test suite that runs a few demo tests
    • Suggested format: a Git repo. It's simple and lots of places already use it for source control.
    • The more that this looks like a real, albeit small and abbreviated, test suite (e.g. test data management, reporting, usage of page models, keeping scalability in mind, etc.), the more convincing it will be as proof. (A simple page model sketch follows this list.)
    • If there are known systems that a test suite will need to tie into (e.g. continuous integration or reporting platforms), the demo suite should demonstrate integration capability as well, if possible.
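As one example of keeping scalability in mind, a simple page model keeps element lookups out of the tests themselves. A minimal sketch, assuming Selenium's Python bindings, with placeholder URL and element IDs:

    # A small page model: tests talk in terms of user intent, not raw element lookups.
    from selenium.webdriver.common.by import By

    class LoginPage:
        URL = "https://example.com/login"   # placeholder URL

        def __init__(self, driver):
            self.driver = driver

        def load(self):
            self.driver.get(self.URL)

        def log_in(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()

    # A test then reads as:
    #   page = LoginPage(driver)
    #   page.load()
    #   page.log_in("demo_user", "demo_pass")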
  • An analysis document that contains the expected test automation potential
    • Suggested format: varies. A spreadsheet, a slide deck, a freeform email, etc. Whatever helps you convey your findings to your target audience. Go nuts.
    • If there are existing test cases, include the automation potential of each one. Fun fact: managers love spreadsheets with color-categorized lists and quantifiable things like "95% automatable".
    • If there are application/architecture limitations, or changes that might be needed, that impact the ability to automate testing of the application, they should be called out. E.g. a web app using "<div>" elements for everything instead of more standard HTML elements may severely restrict how automation code can interact with the application. (An illustration follows.)
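As an illustration of the "<div>" problem, compare the locators a suite is forced into when semantic elements and IDs are missing. The markup and element names here are hypothetical:

    # Why "<div>"-for-everything markup restricts automation options.
    from selenium.webdriver.common.by import By

    def click_save(driver):
        # With semantic markup (<button id="save">), the locator is stable and readable:
        driver.find_element(By.ID, "save").click()

    def click_save_div_only(driver):
        # With anonymous <div> elements and no IDs, the suite falls back on brittle,
        # layout-dependent locators that break whenever styling or structure changes:
        driver.find_element(By.XPATH, "//div[@class='toolbar']/div[3]").click()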