Testing Blog
Interviewing Insights and Test Frameworks
Tuesday, January 5, 2010
By James A. Whittaker
Google is hiring. We have openings for security testers, test tool developers, automation experts and manual testers. That's right, I said manual testers.
As a result of all this interviewing, I've been reading a lot of interview feedback and wanted to pass along some insights about how these applicants approach solving the testing problems we pose in our interviews. I think the patterns I note in this post are interesting insights into the minds of software testers, at least the ones who want to work for Google.
One of the things our interviewers like to ask is 'how would you test product xyz?' The answers help us judge a tester's instincts, but after reading many hundreds of these interviews I have noticed marked patterns in how testers approach solving such problems. It's as though testers have a default testing framework built into their thinking that guides them in choosing test cases and defines the way they approach test design.
In fact, these built-in frameworks seem to drive a tester's thinking to the extent that when I manage to identify the framework a tester is using, I can predict with a high degree of accuracy how they will answer the interviewers' questions. The framework defines what kind of tester they are. I find this intriguing and wonder if others have similar examples or counterexamples to cite.
Here are the frameworks I have seen just in the last two weeks:
The Input Domain Framework treats software as an input-output mechanism. Subscribers of this framework think in terms of sets of inputs, rules about which inputs are more important and relationships between inputs, input sequences and outputs. This is a common model in random testing, model-based testing and the testing of protocols and APIs. An applicant who uses this framework will talk about which inputs they would use to test a specific application and try to justify why those inputs are important.
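As a rough sketch (not something any applicant actually wrote), an Input Domain answer written as a test might enumerate input classes and pick representatives from each; the parse_date function and its input classes here are invented purely for illustration:

```python
import unittest
from datetime import date

def parse_date(text):
    """Hypothetical function under test: parses 'YYYY-MM-DD' into a date."""
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)

class InputDomainTest(unittest.TestCase):
    """Tests organized around classes of input rather than around features."""

    def test_valid_inputs(self):
        # Representative members of the 'well-formed' input class.
        self.assertEqual(parse_date("2010-01-05"), date(2010, 1, 5))
        self.assertEqual(parse_date("2000-02-29"), date(2000, 2, 29))  # leap day

    def test_invalid_inputs(self):
        # Representative members of the 'malformed' input class.
        for bad in ["", "2010/01/05", "2010-13-01", "not a date"]:
            with self.assertRaises(ValueError):
                parse_date(bad)

if __name__ == "__main__":
    unittest.main()
```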
The Divide and Conquer Framework treats software as a set of features. Subscribers begin by decomposing an app into its features, prioritizing them and then working through that list in order. Often the decomposition is multi-layered, creating a bunch of small testing problems out of one very large one. You don't test the feature so much as you test its constituent parts. An applicant who uses this framework is less concerned with actual test cases and more concerned with reducing the size of the problem to something manageable.
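To make the decomposition idea concrete, here is one possible sketch; the application, its features and the priorities are all made up, and the point is only that the answer produces many small problems rather than test cases:

```python
# Decompose a hypothetical mail application into features and sub-features,
# then work through them in priority order.
FEATURES = {
    "compose": {"priority": 1, "parts": ["rich text", "attachments", "drafts"]},
    "search":  {"priority": 2, "parts": ["by sender", "by date", "full text"]},
    "labels":  {"priority": 3, "parts": ["create", "rename", "nested labels"]},
}

def test_plan(features):
    """Yield small testing problems, highest-priority feature first."""
    for name, info in sorted(features.items(), key=lambda item: item[1]["priority"]):
        for part in info["parts"]:
            yield f"{name} > {part}"

for problem in test_plan(FEATURES):
    print(problem)
```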
The Fishbowl Framework is a big picture approach to testing in which we manipulate the application while watching and comparing the results. Put the app in a fishbowl, swirl it around in the water and watch what happens. The emphasis is more on the watching and analyzing than it is on exactly how we manipulate the features. An applicant who uses this framework chooses tests that cause visible output and large state changes.
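A loose code analogue of this mindset, again with an invented application (a counter that should never go negative), is a test that stirs the app with a sequence of operations and checks the visible state after every step:

```python
import unittest

class Counter:
    """Hypothetical application code: a counter that should never go negative."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
    def decrement(self):
        self.value = max(0, self.value - 1)

class FishbowlTest(unittest.TestCase):
    """Swirl the app around; the real checks are on what we observe."""

    def test_observe_state_while_stirring(self):
        counter = Counter()
        operations = [counter.increment, counter.decrement, counter.decrement,
                      counter.increment, counter.increment, counter.decrement]
        for operation in operations:
            operation()
            # After every manipulation, the visible state must still hold.
            self.assertGreaterEqual(counter.value, 0)

if __name__ == "__main__":
    unittest.main()
```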
The Storybook Framework consists of developing specific scenarios and making sure the software does what it is supposed to do when presented with those scenarios. Stories start with the expected path and work outward. They don't always get beyond the expected. This framework tests coherence of behavior more than subtle errors. Applicants who employ this framework often take a user's point of view and talk about using the application to get real work done.
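A Storybook answer, sketched as code with a made-up shopping cart standing in for the application, would read like one scenario told start to finish from the user's point of view:

```python
import unittest

class ShoppingCart:
    """Hypothetical application code, included only so the scenario runs."""
    def __init__(self):
        self.items = {}
    def add(self, name, price):
        self.items[name] = price
    def remove(self, name):
        del self.items[name]
    def total(self):
        return sum(self.items.values())

class StorybookTest(unittest.TestCase):
    """One scenario, following the expected path from beginning to end."""

    def test_user_buys_two_books_then_changes_their_mind(self):
        cart = ShoppingCart()
        cart.add("novel", 12.99)
        cart.add("cookbook", 24.50)
        self.assertAlmostEqual(cart.total(), 37.49)
        cart.remove("cookbook")  # the user reconsiders
        self.assertAlmostEqual(cart.total(), 12.99)

if __name__ == "__main__":
    unittest.main()
```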
The Pessimists Framework starts with edge cases. Subscribers test erroneous input, bad data, misconfigured environments and so on. This is a common strategy on mature products where the main paths are well trodden. Applicants who use this framework like to assume that the main paths will get tested naturally as part of normal dev use and dog-fooding and that the testing challenge is concentrated on lower probability scenarios. They are quick to take credit for prior testing, assume its rationality and pound on problematic scenarios.
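As one more invented sketch, a Pessimist's answer skips the happy path entirely; the average function below is hypothetical, and the tests go straight to the inputs most likely to break it:

```python
import unittest

def average(values):
    """Hypothetical function under test: arithmetic mean of a list of numbers."""
    if not values:
        raise ValueError("average() of an empty sequence")
    return sum(values) / len(values)

class PessimistTest(unittest.TestCase):
    """No happy path here; start where things are most likely to break."""

    def test_empty_input(self):
        with self.assertRaises(ValueError):
            average([])

    def test_extreme_values(self):
        # Very large magnitudes of opposite sign in the same call.
        self.assertAlmostEqual(average([1e308, -1e308]), 0.0)

    def test_wrong_types(self):
        with self.assertRaises(TypeError):
            average(["not", "numbers"])

if __name__ == "__main__":
    unittest.main()
```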
There are more, and I am taking furious notes to try to make sense of them all. As I get to know the testers who work in my organization, it doesn't take long to see which frameworks they employ and in what order (many are driven by multiple frameworks). Indeed, after studying an applicant's first interview, I can almost always identify the framework they use to answer testing questions and can often predict how they are going to answer the questions other interviewers ask even before I read that far.
Now some interesting questions come out of this that I am still looking into. Which of these frameworks is best? Which is best suited to certain types of functionality? Which is better for getting a job at Google? Already patterns are emerging.
One thing is for sure: we're interviewing at a rate that will provide me with lots of data on this subject. Contact me if you'd like to participate in this little study!