Testing Blog
Update! GTAC 2010 Keynote Speakers
Tuesday, June 29, 2010
We are thrilled to announce that the GTAC 2010 keynotes are finalized. We are very fortunate to have three world-renowned software testing icons: Robert Binder, Dr. Jeff Offutt, and Dr. James Whittaker. Robert, Jeff, and James bring to GTAC a powerful combination of practical and theoretical experience to help us address key aspects of this year's theme: Test to Testability. Robert will kick off Day 1 with a keynote on the critical role of testability. On Day 2, Jeff will share his experiences with automatic test generation from source code and with Bypass Testing, a technique he invented for black-box testing of web applications. James will close the conference with a keynote on the testing challenges ahead and how to get in front of them. More details on the specifics of their talks are coming soon!
Here are brief bios of our esteemed speakers:
Robert Binder
Robert V. Binder is a software entrepreneur and technologist with over 34 years of systems engineering experience. His 1994 analysis of software testability (http://portal.acm.org/citation.cfm?id=184077) has had a continuing influence on research and practice in this area. As principal of System Verification Associates, he leads teams that deliver advanced IT assurance solutions. He was recently awarded a U.S. patent for a unique approach to model-based testing of mobile systems. He is a member of the editorial board of Software Testing, Verification, and Review and is internationally recognized as the author of the definitive Testing Object-Oriented Systems: Models, Patterns, and Tools. Robert holds an MS in Electrical Engineering and Computer Science from the University of Illinois at Chicago and an MBA from the University of Chicago. He is an IEEE Senior Member.
Dr. Jeff Offutt
Dr. Jeff Offutt is Professor of Software Engineering at George Mason University. He has invented numerous test strategies, published over 120 refereed research papers, and is co-author of the textbook Introduction to Software Testing. He is editor-in-chief of Wiley's journal Software Testing, Verification and Reliability; steering committee chair for the IEEE International Conference on Software Testing, Verification, and Validation; and was program chair for ICST 2009. He has consulted with numerous companies on software testing, usability, and software intellectual property issues. Offutt is on the web at http://www.cs.gmu.edu/~offutt/
Dr. James Whittaker
Dr. Whittaker is currently the Engineering Director over engineering tools and testing for Google's Seattle and Kirkland offices. He holds a PhD in computer science from the University of Tennessee and is the author or coauthor of four acclaimed textbooks: How to Break Software, How to Break Software Security (with Hugh Thompson), How to Break Web Software (with Mike Andrews), and his latest, Exploratory Software Testing: Tips, Tricks, Tours and Techniques to Guide Test Design. He has authored over fifty peer-reviewed papers on software development and computer security. He holds patents on various inventions in software testing and defensive security applications and attracted millions in funding, sponsorship, and license agreements while a professor at Florida Tech. He has also served as a testing and security consultant for dozens of companies and spent three years as an architect at Microsoft.
Reminder: Call for proposals
If you would like to present at this year's GTAC, please remember to submit your proposal by the July 9th deadline. Please visit http://www.gtac.biz/call-for-proposals for details.
Sujay Sahni for the GTAC 2010 Committee
Test Driven Integration
Tuesday, June 22, 2010
By Philip Zembrod
In an earlier post on trying out TDD, I wrote about how my mindset while coding changed from fear of bugs in the new code to eager anticipation of seeing the new code run through and eventually pass the already-written tests. Today I want to talk about integrating components by writing integration tests first.
In a new project we decided to follow TDD from the start. We happily created components, “testing feature after feature into existence” (a phrase I love; I picked it up from a colleague), reaching test coverage of around 90% from the start. When it came to integrating the components into a product, the obvious choice was to do that test-driven, too. So how did that go?
What I would have done traditionally was select a large enough set of components that, once integrated, would make up something I could play with. Since at least a minimal UI would be needed, plus something that does visible or useful things (preferably both), this something would likely have been largish, integrating quite a few components. Then, while playing around and trying it out, I'd start debugging, because of course it wouldn't work on the first attempt. The not-too-small number of integrated components would make tracking down the cause of failures hard, and anticipating all this while coding, I'd have met the well-known fearful mindset again, slowing me down, as I described in my initial TDD post.
How did TDI change this game for me? I realized that with my unit test toolbox, which can test any single component, I can also test an integration of two components, regardless of whether they have a UI or do anything visible. That was the key to a truly incremental process of small steps.
First, write the test for the first two components, run it, and see it fail, to make sure the integration code I'm about to write is actually executed by the test. Then write that bit of code, run the test, and see it succeed. If it still fails, fix what's broken and repeat. Finding what's broken in this mode is usually easy enough because the increments are small. If the test failure doesn't make obvious what's wrong, adding some verifications or some logging does the trick. A debugger should never be needed; automated tests are, after all, a bit like recorded debugging sessions that you can replay any time in the future.
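To make this concrete, here is a minimal sketch of the pattern in Python. The component names (Tokenizer, Counter, WordStats) are hypothetical stand-ins, not from the project described above: two small components, each presumably covered by its own unit tests, plus the thin integration glue that the test-first step targets.

```python
import unittest


# Two hypothetical components, each assumed to have its own unit tests.
class Tokenizer:
    def tokenize(self, text):
        """Split free text into a list of word tokens."""
        return text.split()


class Counter:
    def count(self, tokens):
        """Map each token to the number of times it occurs."""
        freq = {}
        for token in tokens:
            freq[token] = freq.get(token, 0) + 1
        return freq


# The integration glue under test: wiring Tokenizer into Counter.
# This class is written only after the test below has been seen to fail.
class WordStats:
    def __init__(self, tokenizer, counter):
        self.tokenizer = tokenizer
        self.counter = counter

    def frequencies(self, text):
        return self.counter.count(self.tokenizer.tokenize(text))


class WordStatsIntegrationTest(unittest.TestCase):
    def test_two_component_pipeline(self):
        stats = WordStats(Tokenizer(), Counter())
        self.assertEqual({"a": 2, "b": 1}, stats.frequencies("a b a"))
```

The same test fixture then grows with the product: adding a third component means adding a test that exercises the three-component pipeline before the new glue code exists.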
Repeating this for the 3rd component added, the 4th, etc., I could watch my product grow, with new passing tests every day. Small steps, low risks in each, no fear of debugging; instead, continuous progress. Every day this roaring thrill: It works, it works! Something’s running that didn’t run yesterday. And tomorrow morning I’ll start another test that will run tomorrow evening, most likely. Or already at lunchtime. Imagine what kind of motivation and acceleration this would give you. Better, try it out for yourself. I hope you’ll be as amazed and excited as I am.
What are the benefits? As with plain TDD, the most striking effect of TDI for me is the fun factor: dread of debugging is replaced by eagerness to write the next test, so as to be able to write and run the next piece of code.
The process is also much more systematic. Once you have specified your expectations at each level of integration, you’ll verify them continuously in the future, just by running the tests. Compare that to how reproducible, thorough and lasting your verification of your integration would be if you’d done it manually.
And if you wrote an integration test for every function or feature that you cared about during integration, then you can make sure each of them is in shape at any time just by running the tests. I suspect one can't appreciate the level of confidence in the code that this creates until one has experienced it. I find it amazing. I dare you to try it yourself!
P.S. On top of this come all the other usual benefits of well-tested code that would probably be redundant to enumerate here, so I won’t. ;-)