- 1 Conventional Testers on Agile Projects - Intro
- 2 Conventional Testers on Agile Projects - Agile Methods
- 3 Conventional Testers on Agile Projects - Values
- 4 Conventional Testers on Agile Projects - Getting Started
- 5 Conventional Testers on Agile Projects - Getting Started Continued
Conventional Testers on Agile Projects - Intro
For the past year or so, I've been posting questions on my blog and to the development community about the role of testers on agile projects. This summer, I decided that the ship has sailed on the question "should there be testers on agile projects?" No matter how much pundits on either side of the "no tester role" vs. "tester role" debate argue it, the market will ultimately decide. What is more interesting to me is the fact that there are conventional testers on agile projects, and they face unique challenges.
How do Conventional testers end up on Agile projects?
I have experienced or been approached by people seeking help in situations like this:
- the customer tells an agile team they have to have "QA" people on their project
- an agile pilot project has such good results, a development lead from that team is put in charge of a QA group
- a software company with an existing QA department tries some agile methods
- developers who want to learn more about testing request conventional testers to be on an agile project
Moving Forward
I will put a stake in the ground and answer some of the questions I've raised from what I have learned so far. This is the first in a blog series on "what are conventional testers doing on agile projects?" Now that conventional testers are here, what do we do?
I've been involved in agile projects, or projects trying out some agile methods, as a conventional tester over the past five years. I feel woefully inadequate to really answer these questions, but demand is growing from people who read my ramblings. They are asking me to provide my own thoughts, so here we go. I'll continue to post my thoughts in this series, which are very much a work in progress.
What I seek the most is feedback [2] from you, the reader. Are you a conventional tester on an agile project? If so, what did you do? What worked, and what didn't? Are you a developer doing agile testing who has worked with conventional testers? What worked well? What didn't work so well?
As a development community, I challenge each of us to help each other and work together by sharing knowledge and experience. We can leave the debating to the pundits, or for our own pursuit of knowledge. I hope we can get the ball rolling and move towards building better software products, gaining knowledge and skills and serving the customer. After all, the process isn't what counts to the consumer, it's the product.
Who Should Read This Blog Series
If you work in Quality Assurance, or are a professional Software Tester, I consider you to be (like me) a "conventional tester". You tend to do more testing than contributing production code. If you are a conventional tester or a Business Analyst who is joining an agile project as a tester, this blog series is for you. If you are a Testing Manager, or Project Manager on an agile team, you too may find this series helpful. If you are a developer who isn't sure what to do when a customer asks you to work with people who are full-time testers or Quality Assurance folks, you may also find this series useful.
Conventional Testers on Agile Projects - Agile Methods
With the rise in popularity of agile development, professionals from the Quality Assurance or software testing world are finding themselves on agile teams. Customers are becoming more conscious of how they spend money on the software they rely on, and as such are starting to demand that dedicated software testers or Quality Assurance teams be involved with development.
Professionals who work in software testing bring a special set of skills to a project. These are people who are thinking about testing all the time, and they work on projects on behalf of the development team as well as on behalf of the customer. They can help the team have more confidence in the product they have developed, and help the customer have more confidence in the product that has been delivered. Some customers realize the importance of testing, and often demand that testing professionals be on agile projects. Some developers also value these skills, and want to work with people to learn more about testing. How does a conventional tester use these skills to add value in a new and unique project environment?
What is Agile anyway?
I hear this quite often: "Our development shop is pretty chaotic, and doesn't do much documentation. I guess we're agile, right?" Wrong. "Agile" does not mean "undisciplined" or "no documentation". Agile development refers to a group of very disciplined methodologies that share similar characteristics. Agile processes tend to be part of an iterative lifecycle, rely on rapid feedback, and hold that software development cannot be predictive, but must be adaptive. Many practitioners and methodology founders came out of chaotic [3] or waterfall methodologies. They figured out what had worked for them and others, and published their findings.
Agilists tend to believe that it's pretty much impossible to have all the information up front, so have developed systems that cope with uncertainty, and rely on evolutionary designs. The Agile Manifesto has some good descriptions of overall agile values. The Agile Alliance has some great information to look into to find out more.
But if the developers are testing now, won't I be out of a job?
Nope. Not necessarily. In my experience, I've yet to see a project where I and other conventional testers didn't find important bugs. This includes agile projects. The difference is that on an agile project, we find the important bugs faster. We are more involved with testing throughout development. Now that the developers are doing rigorous work themselves with solid automated unit tests, the products I test are much more robust. This gives me more time to focus on important testing tasks instead of dealing with lots of broken builds and unreliable software at the beginning of a testing phase.
It's also important to realize that developer testing such as Test-Driven Development can be very different from the kind of testing that conventional testers are used to doing. Testers don't tend to unit test the production code. Some testing experts (notably Cem Kaner and Brian Marick) describe TDD as "example driven development". It can be viewed as design work with examples written to evolve a program until it meets requirements. The automated unit tests then provide a safety net for refactoring. If the tests fail when new code is integrated, they are fixed immediately. Chronic broken builds, hours of debugging and a lot of time-consuming troubleshooting are eliminated or greatly reduced this way.
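To make the "example driven" rhythm concrete, here is a minimal sketch in Python's unittest. The function and its behavior are invented for illustration; in TDD the tests would be written first, and the production code evolved until they pass:

```python
import unittest


def parse_version(text):
    """Split a version string like '1.2.3' into a tuple of ints.

    Toy production code, invented for this sketch. In TDD, the tests
    below would exist first, as executable examples of the behavior
    we want, and this function would be written to satisfy them.
    """
    return tuple(int(part) for part in text.split("."))


class ParseVersionTest(unittest.TestCase):
    # Each test is an example of required behavior. Once green, the
    # suite stays in place as a safety net: a refactoring that breaks
    # parse_version fails here at the next run, not days later.
    def test_splits_dotted_string_into_ints(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_single_component(self):
        self.assertEqual(parse_version("7"), (7,))
```

Run with `python -m unittest` (or under a continuous integration build) so failures surface as soon as new code is integrated.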
Conventional testing and agile testing are complementary tasks. One does not necessarily replace the other. I was recently the testing lead on a project that was using Scrum with some elements of XP. I had a couple of conventional testers working with me, doing manual testing (especially exploratory testing), automation at several layers in the application with Open Source testing tools, and working with the customer. The customer was doing acceptance tests, the developers were doing unit tests, and the developers and testers were working together on load testing. One day, the Project Manager asked me if he could get involved with testing during some of his spare cycles. A few days later, the Business Analyst asked the same thing. For several weeks, at any given time 95% of the project team were testing. It didn't put anyone out of a job - each person had something unique to add. In fact, the conventional testers were able to do much more testing in this environment than in others I have seen.
"OK" says the conventional tester, "I'm convinced. Where do I start?"
As a software tester, it is important to not get too caught up in "the right development methodology". After all, it isn't the process that is important to the end user, it's the product. We need to be open-minded to different methods of delivering the right product on time with a reasonable level of quality.
In agile methods, the values behind the development methodology are important to understand. We'll look at values in the next post.
Conventional Testers on Agile Projects - Values
Values Are Key
Good conventional software testers can potentially offer a lot to a team provided their working attitude is aligned with that of the team. One of the most important aspects of agile development is the values that many agile methods encourage. A team focus rather than an adversarial relationship is important, and testers on agile teams tend to agree with the principles guiding the development process. A conventional tester should at least understand them and follow them when they are on an agile project. Understanding the values goes a long way to understanding the other activities, and why agile teams are motivated to do the things they do. What is important to an agile team? For one, working software. It isn't the process that is important, it's the product we deliver in the end. Agile methods are often pragmatic approaches to that end.
A good place to start to learn about values on agile projects is by looking at the values for Extreme Programming. These are the values that I personally identify the most with.
Many times I hear that testing teams should remain separate from development teams so that they can retain their independence. Even agilists have different opinions on this. This might be due to a misunderstanding of what "independence" can mean on a project. Testers must be independent thinkers, and sometimes need to stick to their guns to get important bugs fixed. To be an independent thinker who advocates for the customer does not necessitate being in a physically independent, separate testing department. Agile projects tend to favor integrated teams, and there are a lot of reasons why having separate teams can cause problems. It can slow down development processes, discourage collaboration, encourage disparate team goals, and impede team communication.
Testers who are integrated with a development team need not sacrifice their independent thinking just because they are sitting with and working closely with developers. The pros of integration can far outweigh the cons. If your project needs an independent audit, hire an auditing team to do just that. Then you should be guaranteed an independent, outside opinion. In other industries, an audit is generally done by an outside team. Any team that does a formal audit of itself wouldn't be taken seriously. That doesn't mean the team can't do a good job of auditing itself, or doing work to prepare for a formal audit isn't worthwhile. What it means is that a formal audit from an outsider overcomes a conflict of interest. If your team needs independent auditing, prepare for it by testing yourselves, and hire an outsider to do the audit.
I personally would rather be influenced by the development team and collaborate with them to do more testing activities. I get far more testing work done the more I collaborate. If I become biased towards the product in the process, I will trade that for the knowledge and better testing I am able to do by collaborating.
Do What Needs to be Done
A talented professional who cares about the quality of the product they work on and believes in the values of agile methods should be able to add value to any team they are part of. This isn't limited to those who do software development or software testing; it extends to technical writers, business analysts and project managers. If the team values are aligned, roles will emerge, and come and go, as needs arise and change. "That's not my job!" should not be in an agile team member's vocabulary.
On an agile project it is important to not stick slavishly to a job title, but to pitch in and do whatever it takes to get the job done. This is something that agilists value. If you can work with them, they can work with you, provided your values are aligned.
Understand the Motivations
The motivations behind the values of agile methods are important. Read Kent Beck's Extreme Programming Explained for more on values. Read Ken Schwaber and Mike Beedle's Agile Software Development with Scrum to get insight into how an agile methodology came about. The first couple of chapters of the Scrum book really paint a picture of why agile methods can work.
My favourite line in the Schwaber/Beedle Scrum book is:
"They inspected the systems development processes that I brought them. I have rarely provided a group with so much laughter. They were amazed and appalled that my industry, systems development, was trying to do its work using a completely inappropriate process control model." -- p. 24, Agile Software Development with Scrum, 2002, Prentice Hall.
This book provides a lot of insight into what motivated people to try something new in software development, and the rationale behind an agile methodology.
The values behind agile methods really flow from these early motivations and discoveries of pragmatic practitioners, and are well worth reading. When you understand where the knowledge of delivering working systems was drawn from, the values and activities really start to make sense.
Conventional Testers on Agile Projects - Getting Started
At this point, the conventional tester says that they can really identify with the values, understand some of the motivations behind agile methods and are ready to jump in. "How do I get started? What do I do?"
Testers Provide Feedback
I've talked about this before in the Testers Provide Feedback blog post.
A conventional tester starting out on an agile team should engage in testing activities that provide relevant feedback. It's as simple as that.
I'm hard pressed to think of any activity that doesn't tie into the tester-as-service role, ultimately helping the tester provide feedback. Which activity that is depends on what the needs are on a project, right now.
Testing is what I do to provide good feedback on any development project. What is relevant depends on what your goals are, and what the team needs. This can be risk assessments, bug reports, a thumbs up on a new story, all sorts of things.
To have confidence in that feedback, we can engage in many activities to gather information. Exploratory testing is one effective way to do this; another is to use automated tests. There are lots of ways that we can gather information by inquiring, observing, and reporting useful information. What is key to me is to figure out what information the team needs at a particular time. What are some things that have worked well for you? Please let me know.
Personally, a testing activity is useful to the extent that it helps me get the information I need to provide useful feedback to the rest of the team. Sometimes it involves working with a customer and helping identify risks. Other times it's a status report on automated tests that I give to the team. It may involve manual testing on a bug hunt, or another useful testing mission where I need to go beyond automated tests. It can be real-time feedback when pair testing with a developer, or working with a customer to help them develop tests. The kind of feedback needed on a project guides what kind of testing activities I need to do. Providing information is central. As James Bach says: "testing lights the way". If I am not able to provide more feedback than the automated tests and customer are already providing, then I need to evaluate whether I should be on that agile team or not. If a particular area is not being addressed well and the team needs more information, then I should focus activities on that area, not focus slavishly on what role I think I should be filling.
Doing what needs to be done to help the team and the customer have confidence in the product is central. That means stepping out of comfort zones, learning new things and pitching in to help. This can be intimidating at first, but it helps me gather more information and learn what kinds of feedback the team needs. It's a challenge, and those who enjoy challenges might identify with this way of thinking. Doing what needs to be done helps testers gather different kinds of information that can be used to provide the right kind of feedback.
More Information for Testers
In The Ongoing Revolution in Software Testing, Cem Kaner describes the kind of thinking that I am trying to get across. Testers who identify and agree with what Cem Kaner has said should have few problems adjusting to agile teams. This article is worth reading for anyone who is thinking about software testing.
Conventional Testers on Agile Projects - Getting Started Continued
Some of what you find out about agile methods may sound familiar. In fact, many development projects have adopted solutions that some agile methods employ. You may have already adjusted to some agile practices as a conventional tester without realizing it. For example, before the term "agile" was formally adopted, I was on more traditional projects that had some "agile" elements:
- when I started as a tester, I spent a lot of my first year pair testing with developers
- during the dot com bubble, we adopted an iterative life cycle with rapid releases at least every two weeks
- one project required quick builds, so the team developed something very similar to a continuous integration build system with heavy test automation
- developers I worked with had been doing refactoring since the early '80s. They didn't call it by that name, and used checkpoints in their code instead of the xUnit tests that would be used now
- in a formal waterfall project, we had a customer representative on the team, and did quick iterations in between the formal signoffs from phase to phase
- one project adapted Open Source-inspired practices and rapid prototyping
These things were done by pragmatic, product-focused companies who needed to get something done to please the customer. Many of these projects would not have considered themselves "agile" - they were just getting the job done. The difference is that agile methods are complete methodologies driven towards a certain goal, rather than a set of practices a team has adjusted to improve what it is doing.
Other conventional testers tell me about projects they were on that were not agile, but did agile-like things. This shouldn't be surprising. The iterative lifecycle has been around for many years (at least back to the 1940s). There are a lot of methodologies that people have used within the iterative lifecycle, but not necessarily codified into a formal method as some agile champions have. A lot of what agile methods talk about isn't new. Jerry Weinberg has said that the methods employed on the Mercury project team he was on in the early '60s look to be indistinguishable from what is now known as Extreme Programming [4].
Another familiar aspect of agile methods is the way projects are managed. Much of the agile management theory draws very heavily from the quality movement, lean manufacturing, and what some might call Theory Y management. Like the quality pundits of past, many agile management writers are once again educating workers about the problems of Taylorism or Theory X management.
What is new with agile methods are comprehensive methodology descriptions that are driven from experience. From these practices, disciplined design and development methodologies such as Test-Driven Development have improved rapidly. Most importantly, a shared language has emerged for practices like "unit testing", "refactoring", "continuous integration" and others - many of which might have been widely practiced but called different things. This shared language helps a community of practice share and improve ideas much more efficiently. Common goals are much more easily identified when everyone involved is using the same terminology. As a result, the needs of the community have been quickly addressed by tool makers, authors, consultants and practitioners.
This has several implications for conventional testers that require some adjustments:
- a new vocabulary of practices, rituals, tools and roles
- getting involved in testing from day one
- testing in iterations which are often 2-4 weeks long
- an absence of detailed, formalized requirements documents developed up front
- requirements done by iteration in backlogs or on 3x5 story cards
- often, a lack of a formal bug-tracking system
- working knowledge of tools such as refactoring and TDD-based IDEs, xUnit automation and continuous integration build tools
- a team focus over individual performance
- developers who are obsessed with testing
- working closely with the entire team in the same work area
- not focusing on individual bug counts or lines of code
- less emphasis on detailed test plans and scripted test cases
- heavy emphasis on test automation using Open Source tools
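For a flavor of what "xUnit automation" in that list looks like in practice, here is a small sketch in Python's unittest. The class under test is invented for illustration; the point is the shape of the tests, not the class:

```python
import unittest


class ShoppingCart:
    """Toy class under test, invented for this sketch."""

    def __init__(self):
        self._items = {}

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())


class ShoppingCartTest(unittest.TestCase):
    # xUnit-style tests: small, independent, and fast, so they can be
    # run on every build without slowing the team down.
    def test_empty_cart_totals_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("book", 30)
        cart.add("pen", 2)
        self.assertEqual(cart.total(), 32)
```

A continuous integration build tool would run suites like this after every check-in, so a failing test surfaces within minutes of the change that caused it rather than at the start of a testing phase.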
Some of these changes sound shocking to a conventional tester. Without a detailed requirements document, how can we test? Why would a team not have a bug tracking database? What about my comprehensive test plans and detailed manual regression test suites? Where are the expensive capture/replay GUI automation tools? How can we keep up with testing when the project is moving so quickly?
A good place to address some of these questions is Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord [5] (2001, ISBN-13 9780471081128).
We'll address some of these challenges in this series, as well as examples of testing activities that conventional testers can engage in on agile projects.
[1] Originally at http://www.kohl.ca/blog/archives/000079.html
[2] jonathan at kohl.ca
[3] Possibly see the Capability Maturity Model.
[4] p. 48, Iterative and Incremental Development: A Brief History, Larman and Basili, 2003. See "Iterative and incremental development" on Wikipedia.
[5] Lessons Learned in Software Testing: A Context-Driven Approach, Cem Kaner, James Bach and Bret Pettichord, 2001.