Automating tests vs. test-automation
Wednesday, October 24, 2007
Posted by Markus Clermont, Test Engineering Manager, Zurich
In the last couple of years the practice of testing has undergone more than superficial changes. We have turned our art into engineering, introduced process models, come up with best practices, and developed tools to support our daily work and make each test engineer more productive. Some tools target test execution. They aim to automate the repetitive steps that a tester would take to exercise functions through the user interface of a system in order to verify its functionality. I am sure you have all seen tools like Selenium, WebDriver, Eggplant, or other proprietary solutions, and that you have learned to love them.
On the downside, we observe problems when we employ these tools:
Scripting your manual tests this way takes far longer than just executing them manually.
The UI is one of the least stable interfaces of any system, so we can only start automating quite late in the development phase.
Maintenance of the tests takes a significant amount of time.
Execution is slow, and sometimes cumbersome.
Tests become flaky.
Tests break for the wrong reasons.
Of course, we can argue that none of these problems is particularly bad, and the advantages of automation still outweigh the cost. This might well be true. We learned to accept some of these problems as 'the price of automation', whereas others are met by some common-sense workarounds:
It takes a long time to automate a test? Well, let's automate only the tests that are important and will be executed again and again in regression testing.
Execution might be slow, but it is still faster than manual testing.
Tests cannot break for the wrong reason: when they break, we have found a bug.
In the rest of this post I'd like to summarize some experiences I had when I tried to overcome these problems, not by working around them, but by eliminating their causes.
Most of these problems are rooted in the fact that we are just automating manual tests. By doing so we are not taking into account whether the added computational power, access to different interfaces, and faster execution speed should make us change the way we test systems.
Considering the fact that a system exposes different interfaces to the environment (e.g., the user interface, an interface between front-end and back-end, an interface to a data store, and interfaces to other systems), it is obvious that we need to look at each and every interface and test it. More than that, we should not only take each interface into account but also avoid testing the same functionality in too many different places.
Let me introduce the example of a store-administration system which allows you to add items to the store, see the current inventory, and remove items. One straightforward manual test case for adding an item would be to go to the 'Add' dialogue, enter a new item with quantity 1, and then go to the 'Display' dialogue to check that it is there. To automate this test case, you would script exactly these steps through the user interface.
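To make this concrete, here is a minimal sketch of what such a UI-scripted test might look like with WebDriver (one of the tools mentioned above) and JUnit. The URL, element names, and page layout are invented for illustration; the point is only that every step, including the verification, goes through the user interface.

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AddItemUiTest {

    @Test
    public void addedItemShowsUpInDisplayDialogue() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Walk through the (hypothetical) 'Add' dialogue exactly as a manual tester would.
            driver.get("http://store.example.com/add");
            driver.findElement(By.name("item")).sendKeys("Widget");
            driver.findElement(By.name("quantity")).sendKeys("1");
            driver.findElement(By.name("submit")).click();

            // Verify through the 'Display' dialogue, again via the UI.
            driver.get("http://store.example.com/display");
            String inventory = driver.findElement(By.id("inventory")).getText();
            assertTrue("newly added item should appear in the inventory",
                    inventory.contains("Widget"));
        } finally {
            driver.quit();
        }
    }
}
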
Probably most of the problems I listed above will apply. One way to avoid them in the first place would have been to figure out what this system looks like on the inside:
Is there a database? If so, the verification should probably not be performed against the UI but against the database.
Do we need to interface with a supplier? If so, how should this interaction look?
Is the same functionality available via an API? If so, it should be tested through the API, and the UI should just be checked to interact with the API correctly.
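As a sketch of the alternative, assume the back-end of the store exposes some service API; StoreService and Item below are hypothetical names invented for this illustration, not part of any real system described in the post. Most of the functional checks then move behind the UI:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Before;
import org.junit.Test;

public class AddItemApiTest {

    // StoreService and Item are hypothetical stand-ins for the system's back-end API.
    private StoreService store;

    @Before
    public void setUp() {
        store = StoreService.connectToTestInstance();
    }

    @Test
    public void addingAnItemMakesItVisibleInTheInventory() {
        store.addItem(new Item("Widget"), 1);
        // Verify against the API (or the database behind it), not against the UI.
        assertEquals(1, store.quantityOf("Widget"));
    }

    @Test
    public void addingANegativeQuantityIsRejected() {
        // Boundary conditions like this are cheap to cover at this level.
        try {
            store.addItem(new Item("Widget"), -1);
            fail("expected addItem to reject a negative quantity");
        } catch (IllegalArgumentException expected) {
            // The service is assumed to reject invalid quantities.
        }
    }
}

The UI would then only need a thin test verifying that the 'Add' dialogue really calls into this API, rather than re-testing the adding logic itself.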
This will probably yield a higher number of tests, some of them much 'smaller' in their resource requirements and executing far faster than the full end-to-end tests. Applying these simple questions will allow us to:
write many more tests through the API, e.g., to cover many boundary conditions,
execute multiple threads of tests on the same machine, giving us a chance to spot race-conditions (see the sketch after this list),
start testing the system earlier, as we can test each interface when it becomes 'quasi-stable',
make maintenance of tests and debugging easier, as the tests break closer to the source of the problem,
require fewer machine resources, and still execute in reasonable time.
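For example, the race-condition point above could be exercised by firing the same (hypothetical) API call from several threads at once, which is hardly practical through the UI. A rough sketch, again using the invented StoreService from the previous example:

import static org.junit.Assert.assertEquals;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

public class ConcurrentAddItemTest {

    @Test
    public void concurrentAddsAreNotLost() throws InterruptedException {
        // Same hypothetical back-end API as in the earlier sketch.
        final StoreService store = StoreService.connectToTestInstance();
        ExecutorService pool = Executors.newFixedThreadPool(10);

        // Fire 100 concurrent 'add one Widget' requests at the service.
        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    store.addItem(new Item("Widget"), 1);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // If updates race each other, some increments are lost and this fails.
        assertEquals(100, store.quantityOf("Widget"));
    }
}
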
I am not advocating the total absence of UI tests here. The user interface is just another interface, and so it deserves attention too. However, I do think that we are currently focusing most of our testing efforts on the UI. The common attitude, that the UI deserves most attention because it is what the user sees, is flawed. Even a perfect UI will not satisfy a user if the underlying functionality is corrupt.
Neither should we abandon our end-to-end tests. They are valuable, and no system can be considered tested without them. Again, the question we need to ask ourselves is what the right ratio between full end-to-end tests and smaller integration tests should be.
Unfortunately, there is no free lunch. In order to change the style of test-automation we will also need to change our approach to testing. Successful test-automation needs to:
start early in the development cycle,
take the internal structure of the system into account,
have a feedback loop to developers to influence the system-design.
Some of these points require quite a change in the way we approach testing. They are only achievable if we work as a single team with our developers. It is crucial that there is a completely free flow of information between the different roles in this team.
In previous projects we were able to achieve this by
removing any spatial separation between the test engineers and the development engineers. Sitting at the next desk is probably the best way to promote information exchange,
using the same tools and methods as the developers,
getting involved in daily stand-ups and design discussions.
This not only helps us get involved really early (there are projects where test development starts at the same time as development), but it is also a great way to give continuous feedback. Some of the items in the list call for very development-oriented test engineers, as it is easier for them to be recognized as peers by the development teams.
To summarize, I have found that a successful automation project needs:
to take the internal details and exposed interface of the system under test into account,
to have many fast tests for each interface (including the UI),
to verify the functionality at the lowest possible level,
to have a set of end-to-end tests,
to start at the same time as development,
to overcome traditional boundaries between development and testing (spatial, organizational and process boundaries), and
to use the same tools as the development team.