No large-scale Agile transformation is ever easy, and Valpak is no different. Day in and day out I see things that scare the hell out of me, so here are my top 10. Feel free to offer up some of your own, too.
Top 10 Things That Scare The Hell Out Of Me
1. Everything takes 8 hours … Estimating tasks during planning can be scary for some team members, but what scares me even more is when some of them estimate every task at 8 hours. Seriously! From a word change on a screen to a new workflow, everything takes 8 hours?
2. Waiting on requirements … Or really, waiting on anything for that matter. Instead of waiting, keep moving forward. After all, you are sprinting. Or even better, go collaborate with the person you’re waiting on and get the task done together. Now that is the Agile way!
3. Over-confident teams convinced they are high-performing … Confidence is a powerful asset to a team, but too much can be a bad thing. Over-confident teams believe that they are already high-performing, so no further improvements can make them any better. Cap the confidence and strive to be more and more Agile each day.
4. Stand-ups that linger … The daily 15-minute stand-up can be terrifying when it’s allowed to linger on and on and on. ScrumMasters can do us all a favor by keeping the stand-up moving along and calling it “done” once everyone has given their status. It’s inevitable that there will be conversations that follow the stand-up (with the Product Owner, with stakeholders, between team members), but the ScrumMaster can release the rest of the team to get back to work.
5. Demos of screen shots instead of working software … This is probably the most controversial one. I know, I know, sometimes screen shots are the most practical approach to demonstrating a given feature, especially when a series of time-consuming steps is involved. But whenever a feature can be successfully demonstrated within the time box of the demo, I prefer to see working software. Maybe that’s just me, but it gives me a warm-fuzzy.
6. Everything needs a spike … Spike to think, spike to meet, spike to talk to another team, spike to say “Good Morning”; everything needs a spike! Don’t get me wrong. There is certainly a time and place for a spike. However, spikes are susceptible to abuse if we’re not careful. No matter what your role, don’t be afraid to challenge the need for a spike.
7. Throwing in the towel too soon … I’m shocked when, on day 2 of an 8-day sprint, team members start tossing out stories (all other things unchanged, of course). Really?! What makes this so impossible just 2 days after we planned and committed to doing it? I’m a firm believer in encouraging teams that the impossible is possible. This is easier with some than with others. A good reality check is important in the last half of the sprint, but meanwhile, let’s give it the ol’ college try!
8. ScrumMasters waiting for impediments to be served up on a platter … Only in fiction do team members serve up impediments to their ScrumMasters like a nicely wrapped gift. In reality, team members are constantly impeded by one thing or another, but they rarely bring it up in the stand-up or mention it to their ScrumMaster. It’s the ScrumMaster’s job to sniff out impediments like a drug-sniffing German Shepherd and attack them like a Pit Bull.
9. Laptops in retrospectives … I realize that we are in the Digital Age, but laptops are not needed in retrospectives. The whole team should be fully engaged in the retrospective. If some team members are on their laptops (focused on something else), it really ruins the collaborative learning mood of the whole retrospective.
10. Stories with words like “analysis”, “development”, and “testing” … I spent 14+ years applying Waterfall methods and I know a Waterfall phase when I see one. Words like “analysis”, “development”, and “testing” are best used with tasks, not stories. When applied to stories, they reek of Waterfall and make my skin crawl.
October 27th, 2012 at 2:29 pm
Hi Stephanie,
I don’t want to be glib about making suggestions when I haven’t seen the environment in person, but the things you mention do sound very familiar, and it’s possible you are experiencing issues that others have already experienced. Take the following with a grain of salt, and hopefully some of it will resonate.
1. Everything takes 8 hours.
The first thing that comes to mind when I read this is that the behavior is pointing to possible systemic issues. People can only do the things that are “permitted” by the system (organization) of which they are a part. They may sincerely want to do the right things and they may talk about and agree with the idea of doing the right things, but ultimately they can only act within the constraints of the system.
What I might do in this situation is apply some systems thinking and/or root cause analysis tools to try to find out what forces are acting on the people and driving them to behave in this way. I would not expect to find one single root cause that operates in a linear cause-and-effect fashion. Maybe people are “punished” administratively for getting estimates “wrong.” Maybe team members just don’t know the techniques for relative story sizing. Maybe business stakeholders have Old School assumptions about pressuring development teams into delivering faster. Maybe a combination of different, seemingly unrelated forces is driving the observed behavior. There is no way to guess, but there are ways to discover the answers.
The second thing that comes to mind is that you say you are using “agile” methods, but you also say “estimates” are made in terms of clock time. Those two things don’t go together.
2. Waiting on requirements (or anything).
I agree with you that team members ought not wait passively when they have a dependency on someone else. This might be another behavior pattern that is driven by systemic issues. By this time they probably understand that in theory they are supposed to step up and resolve impediments. What is stopping them? Something about the “system,” possibly.
3. Over-confident teams convinced they are high-performing.
I disagree that we would want to discourage people from feeling confident. What you describe sounds familiar to me. Of course, I don’t know if this is the case in your situation, but this behavior often occurs when a team is not using measurements to drive continuous improvement. They feel good about their work, they are keeping up with demand, and they are meeting their commitments; therefore, they have every reason to believe they are high-performing. But if they aren’t measuring performance objectively, they may not be able to see opportunities for improvement.
Instead of “capping” their confidence, try building on it to help them craft their own continuous improvement process. Congratulate them on their good performance and then challenge them to take it one step further. If you “cap” their confidence, they won’t think of anything; they will surrender and wait to be told what to do. At that point, only a single brain is operating – yours. If you /use/ their confidence, they will think of more ideas than you could possibly come up with on your own. In that case, all the brains are operating. If you have been effective at hiring and developing staff, then their brains are formidable. Consider making this the topic of a retrospective.
Whatever you decide, just be sure the team “owns” it, and not you; but /do/ give them guidance and support, as they may not know how to use measurements to drive improvement. Technical people usually have had no training in that area.
Another suggestion: Don’t strive to “be more and more Agile.” Strive to be more and more effective. Use ideas from the “agile” school of thought as well as from others. When teams strive to “be Agile,” they often end up focusing on following the rules instead of on the goal of improving delivery effectiveness. Teams can be perfectly “agile” as per the rules, and still be completely ineffective at delivering value to their customers. There are countless stories about situations when “agile” didn’t “work.”
4. Stand-ups that linger.
With due respect, there are several red flags in your description. It sounds as if:
* “ScrumMaster” is just another word for “manager”
* The ScrumMaster “runs” the stand-up
* Team members report their status to the manager; er, that is, the ScrumMaster
* People start discussing the details of issues and no one stops them
* The fifteen-minute guideline is treated as a “rule”
* The ScrumMaster has the authority (really?) to “release” people so…
* …they can “get back to work” (implying the stand-up is not part of “work”)
That is completely different from how a daily stand-up is meant to work. There isn’t space here to go into details. I may be mistaken, and I hope I am, but your description sounds deeply dysfunctional. Each of the red flags might be the beginning of a trail of dysfunction that leads deep into the organization. Actually, it might be interesting to explore this on the ground.
5. Demos of screen shots instead of working software
If you are trying to use a time-boxed iterative process such as Scrum or XP, then part of the model is that the team delivers a vertical slice of functionality in each iteration. If they are doing so, then by definition there cannot be a time when all they can do is show a screen shot. Therefore, if they are “demonstrating” screen shots, they are not really “doing” the time-box thing; they are just cargo-culting the terminology.
If a given feature is complicated and requires multiple iterations to complete, then it might comprise several demonstrable slices of functionality. It is certainly possible a team would demonstrate one or more slices per iteration, but this will not be reduced to a “screen shot” unless the team does not know how to slice stories properly, or does not know how to squeeze value out of a demo of a subset of a feature. A progressive series of demos could look something like this (with a rough code sketch of one slice after the list):
Iteration 15: Demonstrate that the screen correctly displays an empty input form. (Note this is not a “screen shot” – an image – it is the actual “screen”.)
Value for stakeholders: Concrete evidence of real progress, rather than just “estimates” and “promises” and “screen shots”.
Value for team: Feedback about the form layout, sequence of data entry fields, and labels. Feedback about fields whose visibility is based on user authorization.
Iteration 16: Demonstrate that the form performs appropriate input validation.
Value for stakeholders: More evidence of progress, feeling of participation and contribution, visible effort to align solution with needs.
Value for team: Feedback about possible combinations of input values and about edge cases that weren’t apparent from the original story.
Iteration 17: Demonstrate that the form invokes back-end services for a happy-path “add a new Thing” function.
Value for stakeholders: Now the application is starting to exhibit real, useful behavior.
Value for team: Feedback about how well the functionality fits with the desired user experience. Additional details about input validations may fall out from this demo, too.
Iteration 18: Demonstrate that the form invokes back-end services for happy-path “modify existing Thing” and “delete Thing” functions.
Value for stakeholders: Steady progress toward the goal.
Value for team: Opportunity to fine-tune the design before heavy investment in back-end programming.
Iteration 19: Demonstrate that the form handles exceptions gracefully.
Value for stakeholders: Assurance that the application functions well under all realistic conditions. Possible early promotion to production.
Value for team: Pride in their work, confidence boost.
Iteration 20: Demonstrate internationalization/localization functionality.
Value for stakeholders: Refine localized labels, messages, and content. Possible promotion to production of the localized version.
Value for team: Positive reinforcement from stakeholders.
Iteration 21: Demonstrate Web 2.0 client-side enhancements and look-and-feel details.
Value for stakeholders: Improved user experience.
Value for team: Feeling of closure for completing a complicated feature, boost of energy and confidence carried forward into their next piece of work; what you might call “momentum,” speaking loosely.
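As a purely illustrative aside, here is roughly what the Iteration 16 slice above (“the form performs appropriate input validation”) might look like as running code that can be exercised live in a demo rather than shown as a screen shot. This is a minimal sketch in Python; the “Thing” form, its field names, and its rules are invented assumptions of mine, not anything from the original post.

    # Hypothetical sketch only: the "Thing" form and its validation rules are
    # invented for illustration; they are not from the original discussion.

    def validate_new_thing(form):
        """Return a list of validation errors for an 'add a new Thing' form."""
        errors = []
        name = form.get("name", "").strip()
        if not name:
            errors.append("Name is required.")
        elif len(name) > 50:
            errors.append("Name must be 50 characters or fewer.")
        quantity = form.get("quantity")
        if quantity is not None and quantity < 0:
            errors.append("Quantity cannot be negative.")
        return errors

    if __name__ == "__main__":
        # In the demo, stakeholders suggest inputs and watch the real rules respond.
        print(validate_new_thing({"name": "", "quantity": -1}))
        # -> ['Name is required.', 'Quantity cannot be negative.']
        print(validate_new_thing({"name": "Widget", "quantity": 3}))
        # -> []

The point is not the code itself; it is that even a slice this small is demonstrable behavior rather than an image, which is exactly what keeps a demo from degenerating into screen shots.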
6. Everything needs a spike.
This sounds as if it might be nothing more than the casual use of a term that is supposed to have a more specific meaning. Clearly, every time we figure something out, it isn’t technically a “spike.” Similarly, when we clean off our desk we aren’t “refactoring” our desk, although I sometimes hear people say that. Just because a team makes a tick mark on a calendar every week doesn’t mean they’re running “iterations,” although many teams say so. There are lots of words people tend to use in a non-specific way, and that can cause the words to lose their distinctive meaning. The word “agile” itself is probably the worst offender of all in that regard.
On the other hand, if people really are afraid to take on a task without doing a spike first, it might reflect systemic problems. In particular, it could be a result of the same causes as item 1 regarding estimates. I’ve seen teams do this when they are afraid of giving estimates that later turn out to be “wrong.” It can reflect an emphasis on estimates over empirical discovery. You might want to find out whether it’s just loose terminology or whether there’s really a problem. If the behavior reflects a systemic problem, you won’t solve it by “challenging the need for a spike.” You’ll fix it by eliminating the systemic causes.
7. Throwing in the towel too soon.
This is another red flag about the process. Two issues in particular leap off the page:
(a) Team members don’t toss out stories. Only the Product Owner or equivalent role has the authority to do that. The PO should do so only if business priorities change in mid-iteration, and not because people are afraid they won’t finish something in time.
(b) One purpose of the time-box model is to drive improvement. We have to let things run their course in order to have the data we need to do that. If we remove stories from a sprint because we fear we won’t have time to finish them, then we won’t get the measurements we need to make genuine improvements. We will be stuck in the same rut iteration after iteration. Delivery projections at the release level will also be unreliable. This will have a detrimental effect on strategic planning and portfolio management, especially if many teams are operating in the same way. A key to success with this approach is to treat every sprint commitment as a projection and not a hard-and-fast promise, and to understand that there is no such thing as “failure,” only outcomes from which we can learn.
8. ScrumMasters waiting for impediments to be served up on a platter.
The reason the ScrumMaster role exists is to help novice teams get started with the idea of removing organizational impediments proactively. On a novice team, it is quite true that no one will bring issues to the ScrumMaster’s attention. A person who is prepared for the ScrumMaster role must be alert to opportunities and ready to act on them. He/she must also act as a sort of mentor or coach for the team, with the goal of eliminating the need for the ScrumMaster role altogether.
However, if the goal is to lock in the ScrumMaster role permanently, then it’s possible someone has missed the point. Ultimately, we want all stakeholders (including technical team members) to deal with impediments proactively and to feel as if it is perfectly normal to do so. The ScrumMaster role itself is a bridge from the Old Ways to the New Ways. We will know we have turned the corner when we wake up one morning unable to remember why we used to have such a thing as a ScrumMaster, and unable to imagine what possible use there might be in it.
9. Laptops in retrospectives. Fully agree. Also be on the lookout for cell phones and other small personal electronic devices.
10. Stories with words like “analysis”, etc.
Right. Those aren’t stories. If you want to use the canonical “agile” model, then your work items need to comprise demonstrable slices of functionality your stakeholders can relate to in a demo. Whether you call them “stories” or some other name isn’t so important, but they have to be the real deal. They don’t have to be complete features, but they do have to be meaningful at least for demo purposes. Analysis, programming, and testing are all part of “development”, and all three are carried out concurrently and continuously in a lightweight approach such as “agile”. If they are being carried out sequentially by individuals in functional silos, then the team isn’t using the model they think they’re using.
On the other hand, “stories” aren’t the only way to slice up the work. If your teams can’t handle “stories,” maybe there’s another sort of work item structure they can deal with more effectively.
Just food for thought.
Cheers,
Dave