Testing Smarter with Mike Bland (Full Interview)

This interview with Mike Bland is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Mike Bland aims to produce a culture of transparency, autonomy, and collaboration wherever he goes, in which “Instigators” are inspired and encouraged to make creative use of existing systems to drive improvement throughout an organization. The ultimate goal of such efforts is to make the right thing the easy thing. He’s followed this path since 2005, when he helped drive adoption of automated testing throughout Google as part of the Testing Grouplet, the Test Mercenaries, and the Fixit Grouplet.


Mike Bland

Personal Background

Hexawise: If you could write a letter and send it back in time to yourself when you were first getting into software testing, what advice would you include in it?

Mike: When I first started practicing automated testing and had a lot of success with it, I couldn’t understand why people on my team wouldn’t adopt it despite its “obvious” benefits. One of the biggest things that experience, reading, and reflection have afforded me is the perspective to realize now that different people adopt change differently, at different rates and for different reasons, and that you’ve got to create the space for everyone to adapt accordingly.

As I say in my most recent presentation, “The Rainbow of Death”, metrics and arguments are far from sufficient to inspire action in either the skeptical or the powerless, and the greater challenge is to create the cultural space necessary for lasting change.

Oh, and you’ve got to repeat yourself and say the same thing different ways multiple times—a lot.

Hexawise: What change management lessons did you learn while driving adoption of test automation methods at Google between 2005 and 2010? Which of those lessons were applicable when you were involved in the recent U.S. federal government effort to recruit talented tech people to bring new ways of working with technology into the government? Which of those lessons were not?

Mike: The top objective is to make the right thing the easy thing. Once people have the knowledge and power to do the right thing the right way, they won’t require regulation, manipulation, or coercion—doing things any other way will cease to make any sense.

Most of what I learned in terms of specific approaches to supplying the necessary knowledge and power has come from trying different things and seeing what sticks—and I’m still working to make sense of why certain things stuck, years after the fact. The most important insight, as mentioned earlier, is that different people adopt change differently, for different reasons, and as a result of different stimuli. Geoffrey A. Moore’s Crossing the Chasm was the biggest eye-opener in this regard.

Then, years later, when I saw fellow ex-Googler Albert Wong present his “Framework for Helping” to describe his first experience in the U.S. Digital Service, I instantly saw it snapping in place across the chasm, describing how the Innovators and Early Adopters from Moore’s model—who I like to call “Instigators”—need to fulfill an array of functions in order to connect with and empower the Early Majority on the other side of the chasm. Of course, through the filter of my own twisted sense of humor, I thought “Rainbow of Death” might make the model stick in people’s brains a little better.


So these models helped provide context for why the specific things the Testing Grouplet did worked; and how, despite the fact that there were many scattered, parallel efforts underway, they ultimately served to reinforce one another, rather than creating confusion and chaos. Of course, Google’s open communication channels and the Testing Grouplet’s shared vision—which emerged two years into our five-year run—helped keep everything aligned. The point being, don’t wait for the clear vision and perfect plan up-front—start doing things and pay attention to what’s working, and why, and develop your plans as you go. That’s just good Agile practice, isn’t it?

And did I mention that you have to reiterate things you’ve already said in different language over and over—like, a lot?

With regard to how this shaped my experience in the U.S. federal government as part of the “tech surge” following the healthcare.gov recovery, the very first thing I realized was that my team had absolutely no discipline around knowledge sharing other than “ask the same question to everyone in Slack that countless others have already asked in Slack chats past, and hope someone remembers which doc in which Google Drive folder will give you a hint of what you really need to do to find out how to ask the right question, and who the right person might be to ask”. I realized that what the Testing Grouplet did was built upon a foundation of company-wide knowledge sharing practices that existed long before we started, and that that was the first problem that needed solving—to get the team’s own act in order—before trying to preach the Agile gospel to the rest of the government.

As a result, I started writing documentation publishing tools—based on Jekyll and other tools already in use by the team on client products—and formed a Documentation Working Group (effectively the “Documentation Grouplet”). Eventually I wrote a lightweight GitHub Pages knock-off server, created a template for practice guides, and let people have at it. After that, I formed a Testing Grouplet and produced the Automated Testing Playbook, followed by the Working Group Working Group and the Grouplet Playbook. Without having to sell these tools at all, really, suddenly all sorts of grouplets across the team were producing guides, distilling their shared knowledge into a tangible artifact (what I like to call a “MacGuffin”) that was accessible to the rest of the team, as well as the broader public. Eventually this led to a public-facing team handbook, based on the same underlying tools, that solved the problem of cultivating team-wide organizational knowledge and disseminating it to prospective, new, and long-time team members. I even got talked into writing a Slack-to-GitHub issues Hubot plugin to facilitate content cultivation, from which I developed a tutorial on unit testing in Node.js to illustrate how to design, implement, and test a small distributed system server.

In short, though not every single one of my experiments worked, I largely succeeded in making the right thing the easy thing, using the context of the team’s existing tools and practices to guide my efforts. They’re still using most of these tools to share knowledge with their colleagues and the public to this day, and I love that everything we did was open source and public domain so I can keep improving the tools and applying them to new projects.

Every lesson applied, in that the real lessons were about human nature, not technology. Google disabused me of the notion that one metric, one tool, or one method of persuasion would suffice to change an entire population’s behavior. In other words, there’s no silver bullet.

The top objective is to make the right thing the easy thing. Once people have the knowledge and power to do the right thing the right way, they won’t require regulation, manipulation, or coercion—doing things any other way will cease to make any sense.

Hexawise: Describe a testing experience you are especially proud of. What discovery did you make while testing and how did you share this information so improvements could be made to the software?

Mike: Probably like many folks, I remember my first time the most vividly. Immediately on the heels of a death march—when my team barely got a steaming pile of other people’s code to meet a critical spec by a harsh deadline and very nearly would’ve killed one another were it not for Strongbad’s Emails to keep us one hair’s breadth away from going completely insane—we got some time and freedom to try to make the program faster.

I’d gotten the idea that we needed to rewrite a particular subsystem to take advantage of data we weren’t even using, and at about the same time, I happened to read an issue of the C/C++ Users Journal that had an article on using CppUnitLite, I believe. Unit testing sounded like a neat idea, so I practiced it at the same time I started rewriting this subsystem from scratch.

In the end, my new subsystem was rock-solid and improved performance by a factor of 18. When a couple bugs came up, I diagnosed and fixed them very, very quickly, when the norm was on the order of weeks or months. It totally transformed our relationship with our client—and I got a minor promotion and, like, a $250 bonus, plus admission to half of a conference in 2004.

More recently, I’ve gotten into developing a Bash scripting framework, mbland/go-script-bash. The README in that repo explains the project and my motivation, so I won’t belabor it here; but I had a blast using the Bats framework to write automated Bash tests!

The only catch was, the test suite got noticeably slower over time as I wrote more complex code, tests, and test helpers; and everything ran roughly O(10x) slower on Windows (i.e. O(6-8min) on UNIX, O(50-60min) on Windows), depending on whether I was running the Bash that comes with Git for Windows, Cygwin Bash, MSYS2 Bash, or the Windows Subsystem for Linux.

After a few months of getting experience writing and testing Bash, and studying the Bats internals, I realized the slowness came from the combination of two things: the DEBUG trap that Bats registered to capture stack traces for every single command (so that when a test failed, it could point exactly to the failing command), and the fact that launching new processes (i.e. subshells, command substitutions, commands, and pipelines) is about 50x more expensive on Windows than UNIX. After refactoring my test helpers, and then refactoring Bats to eliminate subshells, I got the UNIX run times down from O(6-8min) to O(1min) or less (O(~45secs) on some flavors), and got the Windows run times down to O(3-6min).

So in the process, I’ve learned a lot of interesting, deep Bash stuff; pulled off a crazy 10x-20x optimization; and written hundreds of tests for Bash code that run on Travis CI and can report coverage to Coveralls (thanks to kcov)! Who knew? (The only bummer being, I’ve offered to assume maintainership of Bats, but nothing much seems to be happening with the project at all.)

Hexawise: In watching your videos and reading your content online, we noticed that your ideas resonate with those of W. Edwards Deming, Russell Ackoff, and Peter Senge from management, culture change, and systems thinking perspectives. Who are your greatest influences in this area?

Mike: I’m a little ashamed to admit I haven’t read any of their stuff, or at least not much. Certainly what little I’ve gleaned of Deming resonates with my experience. I’ve begun reading Senge’s The Fifth Discipline, and while the introduction resonated very clearly, I’ve not yet read further. Ackoff is a new name for me (and thanks for the tip!). That said, it is gratifying when I do read an established author and find that, yep, more learned minds than mine have clearly articulated widely-accepted concepts that I’ve only figured out due to trial, error, and intuition.

In fact, one of the things I’m trying to do moving forward is to go back through the literature and connect it to the experiences I’ve had—not just for my own validation, but to reassure my audience and clients that the things I’ve done and the things I recommend aren’t all crazy talk. I’ve got Geoffrey A. Moore’s Crossing the Chasm model combined with Albert Wong’s model to form the Rainbow of Death, which comprises the core of my narrative now; and I’ve also recently added a very high-level view of Kurt Lewin’s theory of social change, which someone only recently suggested to me.

Still, the core of my modus operandi—something that I tried to emphasize especially in The Rainbow of Death as the “trick” to making change happen, if there is one—is to enter an environment, discover what the pain is, what the needs are, and get to work on those within the context of the culture, with the resources at-hand. What we can learn from case studies and other forms of analysis are the essential principles for change, broad outlines of reasons why certain specific actions worked in a specific environment, but those actions must be tailored to each specific environment every time. It’s the why, not the what, that people should focus on; getting people to make that perceptual shift is the windmill I’ve chosen to tilt at hardest lately.

Views on Software Testing

Hexawise: In your online presentation, Making the Right Thing the Easy Thing, you note: [Use] “amplifying feedback loops to make sure knowledge is shared where needed as quickly and clearly as possible.” How do you suggest this idea be applied by those involved with software testing?

Mike: Heh, that’s a paraphrase of the Second Way of DevOps (out of Three), originally articulated by Gene Kim. Clearly, the more testable cases you can automate, making testing easy and fast to do, the better. People need to know that there are different kinds of automated tests for different levels of the software—they need to learn how to do the right thing the right way. Once developers in particular have gotten some traction with writing automated tests, then folks performing manual or system-level automated testing won’t waste their time catching (and re-catching!) bugs the developers could’ve easily caught, and can focus on truly pushing the limits of the software—reporting not just on whether it meets functional and nonfunctional requirements, but on the overall quality of the product.

In other words, a healthy balance of automated testing and manual testing plays to the strengths of all the humans and machines involved. When you’ve got optimal resource utilization happening, you eliminate a lot of both physical (in terms of slowness) and human friction, and a feeling of true partnership can take hold. Testers aren’t just the people reminding you that your code isn’t perfect—you’ve already reminded yourself of that through your own automated tests!—they’re the ones helping you make it even better.

With that feeling of teamwork comes the information flow that is the hallmark of a generative culture (hat tip to Jeff Gallimore) that tends to be more nimble and innovative.

A lot of the automated testing burden rests on the developers, so getting them on the bus is critical. At Google, the Testing Grouplet’s Test Certified program went a long way towards facilitating this partnership. I’m not saying that a TC-like program is required in every situation, but it’s an idea that worked well for us at the time. The key principle—the why—is that we found a means of communicating the value of automated testing to both developers and testers, and of how to go about implementing an effective automated testing regimen from scratch, that in turn maximized utilization and value for everyone.

It’s not about defects; it’s about feedback and collaboration. If you arrange incentives to produce an adversarial relationship between team members, e.g. if developers are incentivized to minimize defects and testers are incentivized to report defects, then that’s a house divided against itself.

Hexawise: What do you wish more developers, business analysts, and project managers understood about software testing?

Mike: Oh my. For one, it’s not about defects; it’s about feedback and collaboration. If you arrange incentives to produce an adversarial relationship between team members, e.g. if developers are incentivized to minimize defects and testers are incentivized to report defects, then that’s a house divided against itself. Some people think a degree of competition and/or adversarialism is a good thing, but when it comes to producing a product as a team—i.e. achieving a mission—you should keep it to a minimum in favor of fostering a spirit of collaboration.

Collaboration doesn’t mean blind consensus; it means communicating honestly in an environment in which we feel safe to do so, in which we share criticism in a spirit of mutual self-interest, not cutthroat competition.

One test type does not fit all. First, in terms of automated tests, unit testing can find a truly large number of errors, very quickly and cheaply, and tends to encourage better code quality (i.e. readability, maintainability, extensibility) overall. Integration tests can shake out errors and ambiguities between component contracts. High-level, developer-written system tests (as opposed to more extensive system tests developed by a dedicated tester) can quickly affirm that the entire product is in a buildable, runnable state. All of this “white box” testing by the developers is essential to giving the testers as high-quality a product as possible, so they can apply their “black box” techniques to push the product to its limits, rather than waste time alerting developers to defects they could’ve much more quickly, easily, and cheaply discovered themselves.

To this last point, I like to point to the examples of goto fail and Heartbleed. So many Internet “experts” threw up their hands and claimed that bugs like these were “too hard to test”. In both cases, after 2.5 years out of the industry (another story), I spent an evening diving into code I’d never seen before and wrote a test to reproduce each bug and validate its fix. After that, some liked to say, “Oh well, lots of other tools and techniques could’ve found these bugs.”

My claim isn’t that automated testing would’ve been the only way; my claim is that the discipline of automated testing likely would’ve prevented these bugs from ever existing even before writing a single test. With goto fail, the offending block of code was copied and pasted throughout the file six times! It was just that one of the six contained the errant “goto fail” line. But as I demonstrated with my version of the “fix”, extracting a common function and testing that six ways from Sunday likely would’ve avoided the problem entirely. In the case of Heartbleed, it was a failure to validate that an input buffer was actually as long as the user-supplied length indicated. Testing that kind of corner condition is unit testing 101, and the kind of thing you become more sensitive to every time you write a line of code once you’re in the habit of testing.

Hence, as difficult as it would’ve been for manual testing to discover these errors, and as long as it took for them to get shaken out months or years after their widespread deployment—Heartbleed via third-party fuzz testing, goto fail who knows how—both very, very likely could’ve been stopped dead in their tracks (or never would’ve existed!) if the developers were in the everyday habit of unit testing their code.

Finally, considering the potential for a miscopied block of code or an untested corner condition to compromise the privacy—and consequently in some cases, physical security—of millions of users, testing isn’t “nice to have”, and we can’t afford to play games regarding who tests what. For the benefit and well-being of our users and society as a whole, it’s our responsibility to bring as many testing practices as we can to bear on catching as many defects as possible. And if your shareholders can’t appreciate that, maybe you need new shareholders.

Bonus: I just discovered the article “Simple testing can prevent most critical failures” (use of the word “simple” should be illegal for any software practitioner) via Kode Vicious’s (aka George Neville-Neil’s) article “Forced Exception-Handling” for ACM Queue. It’s a summary of a research paper diagnosing 198 critical failures in well-known distributed systems. One of the big take-aways? “A majority of the production failures (77%) can be reproduced by a unit test.”

That’s one of the few research studies I’ve seen to provide data for the potential efficacy of unit testing in particular. But knowing people as I’ve come to know them over my lifetime, I expect most will hear that and say, “Cool story, bro! But that won’t happen to me, ’cause I’m a guru rockstar ninja!” (Yet more words that should be illegal…) Such studies are important and immensely validating; but they’re not sufficient to change minds, worldviews, and behaviors in one fell swoop—there’s no silver bullet!

Hexawise: Our CTO, Sean Johnson, shared your memorably-named “Rainbow of Death” presentation with our management team. We absolutely loved it. In your presentation, you describe a series of concrete, practical steps you and your colleagues at Google took over the course of 5+ years to overcome resistance to change, educate teams, and successfully achieve broad adoption of automated testing efforts at Google across many teams, including lots of teams that were initially very change resistant. Can you please describe for our readers 2 or 3 noteworthy aspects of that change management journey?

Mike: What I hope the Rainbow of Death model, in combination with Geoffrey A. Moore’s Crossing the Chasm model, makes apparent is that different people adopt change differently. There are many needs that must be met by and for many different people, and the chances of figuring out the perfect plan to execute before taking any action are practically zero. After all, don’t the Agile and DevOps models that are all the rage comprise tools and practices for adapting to change, for performing experiments and adjusting course based on feedback? Organizational change is no different, yet many people remain conditioned to expect waterfall-like solutions to their social problems.

Also, I mention in the talk that “The problem you want to solve may not be the problem you have to solve first.” In our case, we wanted to solve the problem of developers not writing enough automated tests. But first, we had to solve two other problems: people back then had very little exposure to or experience with automated testing, leading to the “My code is too hard to test” excuse, because they had no idea how to test it, or how to write testable code to begin with.

The second problem was that the tools at the time couldn’t keep up with the growth of the company, its products, and its code base. It was growing ever more painful to write any code to begin with, yet delivery pressure was intense and Imposter Syndrome was rampant—on top of the fear of admitting your code might contain flaws, how could you make any time to learn how to write automated tests to begin with? Hence the “I don’t have time to test” excuse.

So we couldn’t just say “Testing is good! Yay testing! Please write moar testz!”

I think this mix of perspective, empathy, creativity, collaboration, tenacity, and patience is crucial to changing not only tech organizations, but society at large. I hope to put this notion, and the Rainbow of Death model in particular, to the test continuously throughout the remainder of my career.

My advice to both developers and testers is to identify the priorities, the social structures and dynamics at play in the organization. How can you work with these structures and dynamics instead of against them—or do you need to create a culture of open communication and collaboration in parallel with (or even before) communicating the testing message?

Hexawise: Can you describe a view or opinion about software testing that you have changed your mind about in the last few years?  What caused you to change your mind?

Mike: Not really. Perhaps I was fortunate that as part of my first experience with unit testing, my relationship with our team’s manual tester changed completely. It was no longer a matter of her telling us that the latest build of our software was still taking forever to load and render nautical charts, and was still crashing in the same place, and that there were the same holes in the charts we were rendering. (There was one in particular that we even called “Virginia”, since that’s what it looked like—and was ironic considering that we mostly examined charts of Hampton Roads, Virginia, where we were from.)

After I rewrote one of the subsystems from the ground-up while unit testing all the way, the system rendered the charts quickly, hardly ever crashed, and I could even hear her musing one time “Oh, I didn’t realize there were islands over there!” Another time she told me there was a bug in which a line was getting rendered across the mouth of a river, and I could definitively prove to her—in under a minute!—that, yes, the data says to draw exactly that line, exactly there. And yes, “Virginia” went away.

As a result of that experience, I think I internalized the symbiosis between automated and manual testing. Because I could validate functionality and catch bugs so effectively as I was writing the code, she could drive the program to new limits and provide better feedback. Consequently, I’ve always been interested in how developers can do a better job with their own automated testing, while maintaining a deep respect for the work of dedicated testers who have their own special skills and role to play in ensuring overall product quality. Nothing in my experience has ever challenged that fundamental perspective, but nearly everything has reinforced it.

Industry Observations / Industry Trends

Hexawise: Large companies often discount the importance of software testing. What advice do you have for software testers to help their organizations understand the importance of expecting more from the software testing efforts in the organization?

Mike: Sadly, there’s no one message that works for every company, every culture, everywhere. It’s up to the Instigators in each environment to take the timeless principles I believe are essential—that testing is about feedback and collaboration, that different types of tests all catch different and important bugs, that developers and testers have different and mutually-reinforcing roles to play—and find the right cultural hooks to hang those messages on. In the case of Google, it took the Testing Grouplet five years to figure out and successfully implement, and it took an array of parallel efforts across multiple groups to saturate the culture with the message, not just one magical tool or technique or team to bind them all.

[Graphic: efforts helping to cross the chasm]

So my advice to both developers and testers is to identify the priorities, the social structures and dynamics at play in the organization. How can you work with these structures and dynamics instead of against them—or do you need to create a culture of open communication and collaboration in parallel with (or even before) communicating the testing message?

This is the punchline of my Rainbow of Death presentation: The problem you want to solve may not be the problem you have to solve first, and the Standard Narrative from which all the problems emerged will not produce any solutions—though it may provide the keys necessary to unlock effective solutions.  The Rainbow of Death model itself can give you a lens through which to view your environment and begin thinking of how to tailor the right solution, but the specific plans and actions necessary are up to you to figure out—eventually.

At Google, it was the Testing Grouplet’s Test Certified program and all the other education and tooling efforts supporting it that provided the right hook—after two years of experimentation and reflection! But don’t focus on what Test Certified comprised: focus on why that approach worked for us, and see if that reflection inspires an approach that will work for your company.

In addition to that, it probably wouldn’t hurt to remind anyone who’ll listen of goto fail and Heartbleed, and how basic unit testing practices and the coding habits they encourage could’ve prevented these potentially catastrophic defects from even being written in the first place.

Also, the DevOps Research and Assessment (DORA) team just released a white paper that provides a framework for estimating ROI for an investment in DevOps, for orgs of different sizes and maturity levels, based on their own research. Since testing (especially automated testing) is a core DevOps practice—specifically one that helps avoid unnecessary rework, supports rapid feature development, and helps prevent downtime—that framework may prove useful for having a conversation with the more money-minded influencers in an organization. I’m decidedly not money-minded—arguably to a fault—and even I found it enlightening and easy to follow!

Back to the Rainbow across the Chasm, testing (and DevOps) adoption is more a human problem than a technical one, and many different solutions are required for many different people and many different parts of the problem. Just as we must be free to experiment and adapt when it comes to developing features and deploying releases, we must apply the same mindset to hacking our organizations.

Hexawise: The story of your team’s journey is fantastic. We highly recommend it to IT organizations embarking on any large improvement effort. Thank you for sharing it. We’ve recently started using elements of your approach to help our clients successfully adopt test optimization approaches at scale in their organizations.

Mike: That’s incredibly gratifying to hear! Please keep me in the loop about how well it’s working for you and your clients. Just as the impact of the Testing Grouplet’s efforts was far greater than the sum of any individual part, and as my Rainbow of Death presentation benefited enormously from the input of trusted fellow Instigators to help illustrate it, I’m sure there are many more insights and improvements waiting to emerge from the model once more people have applied it farther and wider than I ever could on my own!

Hexawise: Do you believe the DevOps movement is resulting in better software testing within organizations? Do you see any other trends that software testers could leverage to promote improved application of software testing practices?

Mike: When I attended my first DevOps conference, DevOps Enterprise Summit 2015, I had the luxury of being one of the first speakers. For the rest of the conference, I was dumbfounded how everyone seemed to take for granted that of course—of course!—you’re doing automated testing, because you can’t have DevOps without it! It was such a pleasant surprise to see such a radical change in the industry compared to ten years prior, when my fellow Instigators from the Testing Grouplet started our uphill battle to drive automated testing adoption throughout Google. So in the sense that the DevOps movement considers automated testing to be a core practice, it seems it can only help raise its profile and drive adoption; and as developers have more positive experiences with their own automated testing, hopefully it will lead them to embrace a partnership with testers as well.

Also, when I first heard the term “DevOps”, my first thought was, “Oh, that’s what we were doing with the SREs on websearch!” Some people may split hairs over the semantics of “DevOps” vs. “SRE”—which I often suspect is symptomatic of The Scarlet G syndrome—but the Three Ways of systems thinking, feedback loops, and continuous experimentation and learning were all there. And the “trick” of those Three Ways of DevOps, to me, are that they’re in no way exclusive to DevOps at all. DevOps is the gateway drug, the buzzword of the day—the new “cloud!” if you will—that will hopefully get people addicted to transparency, autonomy, and collaboration across the organization. That’s good not just for dev and ops, or for dev and test, but for the business as a whole—society, too, if you wanna take a trip to the moon with me!

Other trends may exist, but DevOps is as good a star to hitch the testing wagon to as any right now. Just be thoughtful about it; apply the Rainbow of Death model to analyze your environment and your audience, and shape your efforts appropriately. I mention this because I recently had a chat with a potential client about how they want to drive DevOps and testing throughout their org, but they’ve already “moved to the cloud”. It wasn’t until that moment that I realized that many folks hear “cloud!” and just think “decommission all our dedicated machines!” without taking the next logical step towards continuous integration and deployment, infrastructure as code, etc.—i.e. all the powerful practices that the cloud makes practical. So it’ll take time and care to get people to appreciate the full potential of a mature, collaborative developer-tester relationship without getting hung up on all the trappings and buzzwords.

Staying Current / Learning

Hexawise: What software testing-related books would you recommend should be on a tester’s bookshelf?

Mike: Sadly, I’m not current enough to make any solid recommendations. In my career, I’ve moved more into the culture change space than being a 100% in-the-trenches practitioner. That said, I certainly spent years of quality time with my old “library” of programming and algorithms books, and was fortunate to be part of a culture that itself generated a broad swath of automated testing knowledge. Though I don’t keep up with the details of the latest developments, that time spent internalizing the core principles has served me very well throughout my career.

That said, I’m sure that great books exist, and people dedicated to the craft would do themselves a great service by discovering them and spending years plumbing their depths, rather than trying to read every book on the subject forevermore. That’s the model that worked for me, at least; but perhaps a more voracious reading regimen suits you better. Everybody’s different.

Hexawise: How do you stay current on improvements in software testing practices; or how would you suggest testers stay current?

Mike: I don’t have any special wisdom in this area—and I doubt anyone else does. Fred Brooks beat us all to the punch thirty years ago with “No Silver Bullet”, and Malcolm Gladwell popularized the 10,000 hours concept in Outliers. Seek information and tools, seek collaborators, seek feedback, and practice, practice, practice! Hard work and incremental progress are what make a significant difference in your capabilities in the long run, not any particular tool, technique, or information source.

The one thing I’d caution against is thinking there are things you should be doing or studying, else you’re not a good tester, developer, or human being for that matter. There are a lot of options in terms of books, blogs, videos, conferences, tools, etc. While it’s good to cast a wide net and learn for learning’s sake to expand your awareness, I think applying the tools at hand to a concrete problem you’re trying to solve in the here and now is the best way to gain the experience and insight you’ll need to make the most of whatever new information and ideas you’ll encounter in the future. Feed your brain slowly, and keep your hands busy! In my experience, the right balance of curiosity, adaptability, and tenacity applied to immediate tasks in pursuit of a larger outcome is the most effective means of expanding your capabilities and maximizing your impact, as opposed to having everything figured out before you even think about trying to start—or, even worse, waiting to be told exactly what to do, and how.

Hexawise: What management/culture-change/systems thinking related books would you recommend?

Mike: To recommend only one: Becoming Madison by Michael Signer. It focuses on the formative years and experiences of the diminutive James “Jemmy” Madison, from his early shame at being too physically frail to serve in the military, to his education at Princeton whereby he became deeply impressed by the concept of checks and balances, to his immense boredom at studying law shortly afterwards (as required per the Standard Narrative), to developing his methodical approach to researching and arguing a subject (not a person—key point!), to the cat-herding of the Constitutional Convention in which he emerged as the main driving force that shaped the United States Constitution, to publishing numerous Federalist papers to educate the public on the Constitution’s virtues and necessity, and finally to his epic oratorical showdown against Patrick Henry to persuade just enough delegates to get Virginia to ratify the Constitution—which ensured that the primary architect of said Constitution (and the fourth President), as well as the primary author of the Declaration of Independence (and the third President), and the man who led the war for independence and was expected to be the new country’s first President, would actually be Americans!

This probably seems an odd choice, but after diving into reading a lot of Revolutionary-era history starting about a year ago, I was struck by how difficult and fragile the dynamics were between the Founders, and how nasty the business of starting a new country really was. It wasn’t the product of pure, unquestionable genius from a band of brothers aligned in lockstep with one another, guaranteed to become one of the greatest nations in history. (Of course I think it’s “the” greatest despite its deep flaws and divisions, but I’m biased.) It took a lot of different personalities, approaching the issue from different angles, playing different roles, over the course of four decades before the whole enterprise really found its legs. And this is what breaking from the Standard Narrative and changing the world is really like!

While driving testing (or DevOps) adoption isn’t on as grand a scale, the basic principles and outline of action certainly apply; I haven’t done it yet, but I bet fitting the story of Madison and his fellow Founders into the Rainbow of Death would make a lot of sense. I find a great deal of comfort and inspiration in discovering that the whole business wasn’t the result of perfect people executing a perfect plan perfectly. We could all use that sort of insight, inspiration, and comfort these days.

There are a few more books that I’d recommend, but I’ll shut up for now. Perhaps I’ll make a case for a few in my blog.

Hexawise: Have you incorporated a new testing idea into your testing practices in the last few years?  Will you continue using it? Why? / Why not?

Mike: Am I a bad person if I say no? Again, while I’m still an active practitioner of the practices I’m already familiar with, I’m moving into a more leadership-focused phase of my career, so that I can create the space for practitioners to innovate and bring the best tools and practices to bear on the problem of creating reliable software that provides maximum value to society.

That, and I find there’s still, to this day, no shortage of people that could stand to benefit from learning the more fundamental coding and developer testing concepts and techniques in which I’m thoroughly versed. I’m happy to keep covering that ground, to make more and more people ready to receive even more advanced knowledge and wisdom—to play my role in creating that space where great software and lasting change can be made.

Profile

Mike aims to produce a culture of transparency, autonomy, and collaboration, in which “Instigators” are inspired and encouraged to make creative use of existing systems to drive improvement throughout an organization. The ultimate goal of such efforts is to make the right thing the easy thing. He’s followed this path since 2005, when he helped drive adoption of automated testing throughout Google as part of the Testing Grouplet, the Test Mercenaries, and the Fixit Grouplet. He was instrumental in the execution of Test Certified and Testing on the Toilet, and the four company-wide Fixits he organized led to the development and rollout of the Test Automation Platform. His account of Google’s automated testing adoption also appears as a case study in The DevOps Handbook by Gene Kim, et al.

He also served as a member of the Websearch Infrastructure team, which practiced DevOps before he was aware it had a name. Frequently working in concert with other indexing infrastructure teams, he also worked closely with Release Engineers and Site Reliability Engineers to package, release, deploy, and monitor multiple indexing services.

Most recently he served as Practice Director at 18F, a technology team within the U.S. General Services Administration, where he personally launched and drove several initiatives to increase 18F’s capability as a learning organization, including the Pages platform, the Guides series, and the Handbook.

