Testing Blog
An Ingredients List for Testing - Part Two
Friday, August 27, 2010
By James Whittaker
When are you finished testing? It's the age-old quality question and one that has never been adequately answered (other than the unhelpful answer of "never"). I argue it never will be answered until we have a definition of the size of the testing problem. How can you know you are finished if you don't fully understand the task at hand?
Answers that deal with coverage of inputs or coverage of code are unhelpful. Testers can apply every input and cover every line of code in test cases and still the software can have very serious bugs. In fact, it’s actually likely to have serious bugs because inputs and code cannot be easily associated with what’s important in the software. What we need is a way to identify what parts of the product can be tested, a bill of materials if you will, and then map our actual testing back to each part so that we can measure progress against the overall testing goal.
This bill of materials represents everything that can be tested. We need it in a format that can be compared with actual testing so we know which parts have received enough testing and which parts are suspect.
We have a candidate format for this bill of materials that we are experimenting with at Google, and we will be unveiling it at GTAC this year.
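To make the idea concrete, here is a minimal sketch of what mapping actual testing back to such a bill of materials could look like. It is purely illustrative: the capability names and test counts are invented, and this is not the candidate format mentioned above.

```java
// Illustrative sketch only: a bill of materials as a map from testable parts of
// the product to the amount of testing each part has actually received.
// Capability names and numbers are made up for this example.
import java.util.LinkedHashMap;
import java.util.Map;

public class TestingBillOfMaterials {
  public static void main(String[] args) {
    // Everything that can be tested, paired with how many tests touch it so far.
    Map<String, Integer> testsPerCapability = new LinkedHashMap<>();
    testsPerCapability.put("installation", 12);
    testsPerCapability.put("configuration", 4);
    testsPerCapability.put("extensions", 0);   // untested part: suspect
    testsPerCapability.put("rendering", 25);

    int covered = 0;
    for (Map.Entry<String, Integer> entry : testsPerCapability.entrySet()) {
      boolean tested = entry.getValue() > 0;
      if (tested) covered++;
      System.out.printf("%-14s %3d tests %s%n",
          entry.getKey(), entry.getValue(), tested ? "" : "<-- no coverage");
    }
    System.out.printf("Progress: %d of %d parts have at least one test%n",
        covered, testsPerCapability.size());
  }
}
```

Even this toy version shows the point: progress is measured against everything that can be tested, so the untested parts stand out instead of hiding behind input or code coverage numbers.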
An Ingredients List for Testing - Part One
Friday, August 20, 2010
By James Whittaker
Each year, about this time, we say goodbye to our summer interns and bid them success in the upcoming school year. Every year they come knowing very little about testing and leave, hopefully, knowing much more. This is not yet-another-plea to universities to teach more testing; instead, it is a reflection on how we teach ourselves.
I like to experiment with metaphors that help people "get it." From attacks to tools to tours to the apocalypse, I've seen my fair share. This summer, I got a lot of aha moments from various interns and new hires likening testing to cooking. We're chefs with no recipes, just a list of ingredients. We may all end up making a different version of Testing Cake, but we better at least be using the same set of ingredients.
What are the ingredients? I'll list them here over the next couple of weeks. Please feel free to add your own, and I hope you don't steal my thunder by getting them in faster than I do. Right now I have a list of 7.
Ingredient 1: Product expertise
Developers grow trees, testers manage forests. The focus of an individual developer should be on the low-level concerns of building reliable and secure components. Developers must maintain intellectual mastery of the features they code, from the UI down to low-level APIs and memory usage. We don't need them distracted and overwhelmed with system-wide product expertise duties as well.
Testers manage system-wide issues and rarely have deep component knowledge. As managers of the forest, we can treat any individual tree abstractly. Testers should know the entire landscape, understanding the technologies and components involved but not actually taking part in their construction. This breadth of knowledge and independence of insight is a crucial complement to the developer's low-level insights, because testers must work across components and tie together the work of many developers when they assess overall system quality.
Another way to think about this is that developers are the domain experts who understand the problem the software is solving and how it is being solved. Testers are the product experts who focus on the breadth of technologies used across the entire product.
Testers should develop this product expertise to the extent that they cannot be stumped when asked questions like "how would I do this?" about their product. If I asked one of my Chrome testers any question about how to do anything with Chrome concerning installation, configuration, extensions, performance, rendering ... anything at all ... I expect an answer right away. An immediate, authoritative and correct answer. I would not expect the same of a developer. If I can stump a tester with such a question, then I have cause for concern. If there is a feature none of us knows about, or knows only incompletely, then we have a feature that might escape testing scrutiny. No, not on our watch!
Product expertise is one ingredient that must be liberally used when mixing Testing Cake.
Test Driven Code Review
Monday, August 2, 2010
By Philip Zembrod
In my quest to explore TDD, I recently found another property of TDD-written code that I hadn't expected: when reviewing or just reading such code, it's often best to first read the tests.
When I look at new code or a code change, I ask: What is this about? What is it supposed to do? Questions that tests often have a good answer for. They expose interfaces and state use cases. This is cool, I thought, and decided to establish test-first reading as my code-reviewing routine. Of course this just applies the specification aspect of tests: reading the specs before reading the code.
Only it didn't always work. From some tests I just failed to learn the point and intention of the tested code. Often, though not always, these were tests that were heavy with mocks and mock expectations.
Mocks aren't always a helpful tool, was my first conclusion. The phrase "good mocks, bad mocks" popped up in my mind. I began to appreciate fakes again - and the people who write them. But soon I realized that this was about more than mocks vs. fakes vs. dummies vs. other Friends You Can Depend On. I was really looking at how well tests fulfill their role as specification.
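The difference is easiest to see side by side. Below is a small hypothetical example (JUnit 4 plus Mockito; the WelcomeService and Mailer types are invented for illustration, not taken from any real product): the first test restates the implementation as mock expectations, while the second asserts observable behavior against a tiny fake and reads like a requirement.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class WelcomeMailTest {
  // Hypothetical production types, kept inline so the example is self-contained.
  interface Mailer { void send(String address, String body); }

  static class WelcomeService {
    private final Mailer mailer;
    WelcomeService(Mailer mailer) { this.mailer = mailer; }
    void register(String address) { mailer.send(address, "Welcome!"); }
  }

  // Poorly-specifying: the test restates the implementation as mock expectations.
  // It tells a reviewer how the code calls its collaborator, not what the user gets.
  @Test public void register_callsMailer() {
    Mailer mailer = mock(Mailer.class);
    new WelcomeService(mailer).register("bob@example.com");
    verify(mailer).send(eq("bob@example.com"), anyString());
  }

  // Well-specifying: a tiny fake records observable state, and the assertions read
  // like a requirement ("a newly registered user receives a welcome mail").
  static class FakeMailer implements Mailer {
    String lastAddress, lastBody;
    @Override public void send(String address, String body) {
      lastAddress = address;
      lastBody = body;
    }
  }

  @Test public void newUserReceivesWelcomeMail() {
    FakeMailer mailer = new FakeMailer();
    new WelcomeService(mailer).register("bob@example.com");
    assertEquals("bob@example.com", mailer.lastAddress);
    assertEquals("Welcome!", mailer.lastBody);
  }
}
```

A reviewer who reads the second test first learns what registering a user is supposed to do; the first test mostly reveals how the code happens to talk to its collaborator.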
TDD teaches that tests are a better specification than prose. Tests are automatically enforced, and get stale less easily. But not all tests work equally well as specification! That's what test driven code reviewing taught me.
I began to call them well-specifying tests and poorly-specifying tests. And the specification aspect isn't just some additional benefit; it's a really crucial property of tests. The more I thought about it, the more I saw that it is connected to a lot of things that weren't obvious to me at first:
If tests are poorly-specifying, then possibly the tested product is poorly specified or documented. After all, it's the tests that really pin down how a product behaves. If they don't clearly state what they test, then it's less clear how the product works. That's a problem.
Well-specifying tests are more robust. If a test only does and verifies things about which the architect or product manager will readily say "yes, we need that," then the test will survive refactorings and new features, simply because "yes, we need that." The test's use case is needed and its conditions must hold. The test needn't be adapted to new code; new code must pass it. False positives are less likely.
Corollary: Well-specifying tests have higher authority. If a test fails, a natural reaction is to ask "is this serious?" If a test is poorly-specifying, if you don't really understand what it is testing, then you may say "well, maybe it's nothing." And you may even be right! If a test is well-specifying, you'll easily see that its failing is serious. And you'll make sure the code gets fixed.
I'm now thinking about an authority rank between 0 and 1 as a property of tests. It could be used to augment test coverage metrics. Code that is just covered by poorly-specifying tests would have poor authority coverage, even if the coverage is high.
Quantifying an authority rank would be a conceptual challenge, of course, but part of it could be how well test driven code reviewing works with a given test.
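As a rough illustration of how an authority rank might augment coverage, here is a hedged sketch. The weighting scheme (take the highest rank among the tests covering a code unit) and all the numbers are my own assumptions for the example, not something defined above: the point is only that code covered solely by poorly-specifying tests scores low on "authority coverage" even when plain coverage is high.

```java
import java.util.List;
import java.util.Map;

public class AuthorityCoverage {
  /**
   * @param coveringTestAuthority for each code unit (e.g. a method or line), the
   *        authority ranks (0..1) of the tests covering it; an empty list means
   *        the unit is uncovered.
   * @return plain coverage and authority-weighted coverage, both in [0, 1].
   */
  static double[] compute(Map<String, List<Double>> coveringTestAuthority) {
    double covered = 0, authority = 0;
    for (List<Double> ranks : coveringTestAuthority.values()) {
      if (!ranks.isEmpty()) {
        covered++;
        // Weight each covered unit by the most authoritative test that covers it.
        authority += ranks.stream().mapToDouble(Double::doubleValue).max().orElse(0);
      }
    }
    int units = coveringTestAuthority.size();
    return new double[] {covered / units, authority / units};
  }

  public static void main(String[] args) {
    // Hypothetical data: one unit covered by a well-specifying test (0.9), one only
    // by a poorly-specifying test (0.2), one not covered at all.
    double[] result = compute(Map.of(
        "Parser.parse", List.of(0.9),
        "Parser.recover", List.of(0.2),
        "Parser.reset", List.<Double>of()));
    System.out.printf("coverage=%.2f authorityCoverage=%.2f%n", result[0], result[1]);
  }
}
```

In this toy run, two of three units are covered, but the authority-weighted figure is noticeably lower, which is exactly the signal the plain coverage number hides.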
P.S. If anyone suspects that I'm having some fun inventing terms beginning with "test driven," I'll plead guilty as charged. :-)