I started a new job a few months ago as a scrum master.
The team was and still is having problems completing their tasks during the sprint.
Initially the problems seemed to be tasks that were too big, plus some blockers early on in the development process. As these problems were solved, the percentage of tasks in the Testing column at the end of the sprint kept going up and up, while the Done column barely moved at all.
There are two developers for each tester, which is a higher ratio of testers than on my previous team, but I am not sure how it compares to the industry average.
I asked how the team could help the testers. I was told the developers weren't testing the changes before passing them to the testers. So we tightened up the pretesting quality gates. Now changes must not only pass code review by another developer; the developers also have to demo the code to the testers before the testers test it.
The percentage of tasks in the testing column went up again. Now it is more than 80% of tasks by the end of the sprint.
I suggested that the developers and testers pair-test the tasks, combining the pretest demo and the testing. But the testers don't trust any testing done in the development environment, and they won't let changes into the testing environment without a pretest demo. And the suggestion that the developers and testers test in both environments in quick succession is popular with no-one.
I have been talking to the lead tester, and the challenges they are experiencing seem significant; however, I keep getting the feeling I'm missing something. I feel as if I am asking the wrong questions.
I think I need to talk to someone lower on the totem pole, someone less invested in the status quo, and ask smaller, more concrete, subtler questions, maybe during one of those pretesting demos. I'll do that next week.
Also the complaints about lack of developer testing are louder.
My feeling is that only a few tasks are getting bounced back, but those few are causing disproportionate pain to both developers and testers. Also, much of the time the problem is not the task bounced back, but some other task in the testing queue. My feeling is the problems will get worse the longer the testing queue gets.
But that is my feeling. It would be nice to have some concrete numbers. With my previous team I would open up JIRA reports and get some idea of what might be causing the problem. With this team the JIRA reports are giving me garbage: they say we are getting no work done at all, which is not quite true. I'd like to get the percentage of tasks reopened after testing and the percentage of time spent in testing; it looks like I will have to dig into JQL, as the standard reports are giving me nothing.
What am I doing wrong? What am I missing?
My previous team was more cross functional. With this team I am not sure how to even begin to move them in that direction. Any suggestions in that direction are shot down immediately.
A developer perspective
Delivering and deploying features throughout the sprint cycle means QA testing will be done throughout the sprint cycle, developers will respond to QA feedback early on and will have time to work on items from the next sprint, and so on!
This is actually the Agile Scrum rule most ignored among semi-pro Scrum Masters! The rule is to under-estimate and
wait for it!
always take on a load of work per developer that can be done in LESS THAN the sprint cycle's days, to ensure the work gets completed WHILE leaving room for testing!
I wrote a full article on how I beat my Agile testing woes and solved the Agile testing bottleneck problem!
Answered by Samer on January 4, 2022
From a developer's point of view, the other answers concentrate too much on theoretical practices. To give any concrete answer, it is essential to know what kind of tasks the developers are dealing with.
Also much of the time the problem is not the task bounced back, but some other task in the testing queue.
That sentence suggests there is a mismatch between what the developers think they should be doing and which tasks the sprint planning says belong in this sprint.
For example, I am currently working on a project that is largely refactoring. My tasks are all at the architectural level. All my changes are huge, affecting nearly 100 files per feature. The bugs that the tester files are either very small, or obscure use cases, or cross-platform inconsistencies.
The thing is, from where I stand, the project is not at a stage where I can afford to grind away at some random corner-case issue for hours. The code is so spaghetti-like that if I start tracing where the bug is located, I will find that a particular data element is part of an object, which creates another object and passes the data along under another name, which modifies the data and passes it to another object under a third name, which passes it along modified, which passes it along... It would take hours and hours to fix small bugs this way. I have to do the architectural work first.
Our ticket practices are very flexible, so we figured it out: 1) if there are any small bugs I can fix alongside my big tasks in an hour or so, I'll do those; 2) if the problem is in code I haven't refactored yet, I'll move the bug to the backlog; 3) another developer, who is not responsible for the architecture, takes any bugs that are doable but can't be prioritized over architectural changes.
If your project is anything like ours, I don't think the fault is either with developers or testers. Testers are doing a great job finding the bugs, but not all of those are relevant to developers. Developers are doing a great job with the code, but they can't avoid overlooking details. In this case it sounds like the problem is the process being too inflexible, and the sprint has tasks that don't belong there.
Answered by Boat on January 4, 2022
I really like some of the other answers, but I just wanted to remind you of another tool in the toolbox: the Work In Progress limit.
Set a cap on the 'ready for test' column, as well as the 'testing' and 'developing' columns. This will mean that some developers will be unable to pick up new tasks, and therefore may be more motivated to help the testers get their current tasks done. Or they might just use the extra time to beef up the unit tests etc. for their tasks waiting for testers.
Combine this with not adding more tasks to the sprint than the team can reliably get done-done, and regular retrospectives to figure out other blockers.
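To make the mechanics concrete, here is a toy model (all numbers invented, not taken from the question) of how an uncapped 'ready for test' queue grows whenever developer throughput exceeds test capacity, and how a WIP limit on that column caps it:

```python
def queue_growth(sprints, dev_done_per_sprint, test_done_per_sprint, wip_limit=None):
    """Track the size of the 'ready for test' queue over several sprints.

    With no WIP limit, developers feed the queue at full rate, so it grows
    without bound whenever dev output exceeds test capacity. With a WIP
    limit, developers stop feeding the queue once it is full (and, ideally,
    spend that time helping the testers instead).
    """
    queue = 0
    history = []
    for _ in range(sprints):
        incoming = dev_done_per_sprint
        if wip_limit is not None:
            # Developers may only add tasks while the column has room.
            incoming = min(incoming, max(0, wip_limit - queue))
        # Testers clear as much of the queue as their capacity allows.
        queue = queue + incoming - min(queue + incoming, test_done_per_sprint)
        history.append(queue)
    return history
```

With 10 dev tasks and 7 tested tasks per sprint, the uncapped queue grows by 3 every sprint, while a WIP limit of 12 makes it plateau at 5: `queue_growth(5, 10, 7)` gives `[3, 6, 9, 12, 15]` versus `queue_growth(5, 10, 7, wip_limit=12)` giving `[3, 5, 5, 5, 5]`.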
Answered by Paula on January 4, 2022
The first question I would want to consider in your position is:
Are the issues being seen in test because the code is unreliable, or because the requirements have not been understood?
The developers are presumably getting bogged down in ensuring the correctness of the code, but of the following two scenarios:
...there are very different causes and remediations.
Of course if you're very unlucky you may have an abundance of both.
For the first point, root causes may be problems such as:
Whereas the second could be:
A team with no vision of how these problems can be improved probably won't give you these answers, since for the most part they are a matter of degree. For the technical points, you may be in a situation where the team members have given up trying to explain their point of view to one another, or possibly to one particular person. This promotes an environment where technical debt accumulates because the team doesn't see eye to eye on how to maintain good code quality.
Similarly, for those points related to requirements, your description sounds as though you may have a product owner who is refusing to adapt their way of working to the needs of the team. You could definitely scrutinise the wording, granularity and specificity of the stories your product owner is producing, and your role as scrum master gives you a strong position to insist that these are improved if they are lacking.
Answered by Tom W on January 4, 2022
Other comments here all ring true: too waterfall-y, not enough team responsibility, etc. but I'd like to emphasize a point made just once in other answers: you're absolutely setting goals too high. You HAVE to set bite-sized goals that are achievable, no matter how slow that is. If that doesn't fit the business schedule, the business schedule cannot be met with existing staff and the sooner that's realized the better. Knocking out goals on schedule is fun and addictive and builds camaraderie and esprit de corps. Failing goals time after time makes everyone losers unnecessarily, and losers aren't motivated, productive or cooperative. Take the best developer or team in the world, give them goals they can't meet, and you will have made them into losers.
Now a new point: some code takes twice as long (or more) to test as to write. Any chance that's the case here? Are the testers correct that the work takes so long to test? Are you able to literally embed and do some testing yourself and see it from the inside?
If so perhaps you could have your testers off-load anything possible. For instance as a developer I write the unit tests and fill them up with all the tests I can think of. Since the original author always has blind spots, it would then make sense to hand the unit tests off to fresh staff who can find fresh problems, but at that point they're just adding test cases to an existing script.
In fact, if the customer of the project request is technical ("I need an API that does XYZ"), arguably the customer could write the initial test harness and test cases as a concrete expression of what he requires. Developers work on the code until it passes that test script, and only then hand it to QA to study for things overlooked. Like my previous point, this leaves the testers a lot less work to do, but additionally gives the developers a concrete initial goal before they submit candidate code for independent testing.
(As a variation, that doesn't offload work from testers but does prevent developers from submitting blatantly unusable code: have the testers write the test scripts first, and require developers to pass that before handing back to test...)
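As a sketch of what such a shared test script could look like, here is a hypothetical example using Python's unittest: the developer seeds the script with the cases they thought of, and the tester appends fresh cases to the same file (the function under test is invented purely for illustration):

```python
import unittest

# Hypothetical function under test (invented for this example).
def normalize_username(raw):
    """Lower-case a username and strip surrounding whitespace."""
    return raw.strip().lower()

class DeveloperTests(unittest.TestCase):
    """Cases the original author thought of."""

    def test_lowercases(self):
        self.assertEqual(normalize_username("Alice"), "alice")

    def test_strips_whitespace(self):
        self.assertEqual(normalize_username("  bob "), "bob")

class TesterTests(DeveloperTests):
    """Fresh eyes add the blind-spot cases; subclassing reruns the
    developer's cases alongside the new ones."""

    def test_empty_input(self):
        self.assertEqual(normalize_username(""), "")

    def test_inner_whitespace_kept(self):
        self.assertEqual(normalize_username("mary ann"), "mary ann")
```

Run it with `python -m unittest` in the directory containing the script; the point is that both roles contribute cases to one artifact instead of testing in separate worlds.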
Answered by Swiss Frank on January 4, 2022
You are doing a lot of good things already, but I would also recommend the following:
Focus a lot of your effort on the retrospectives. There is going to be a temptation to get into blame ("it's the testers' fault", "it's the developers' fault"). Avoid this and continually focus on getting the team to work in a collaborative fashion.
You don't need to solve this problem, but you do need to help the team to solve it. They won't do that until they start thinking and acting as a unified team.
If the team is reluctant to make changes then suggest they do things as experiments:
"Why don't we try doing X for the next sprint? If it doesn't work out, we can revert to our old way of working."
Answered by Barnaby Golden on January 4, 2022
Your problem is not that you have developers and non-developers (as you call the business analyst/product owner, the designer and the testers). Your problem is that these people have individual ownership of their slice of the cake and not of the entire cake.
Here are a few things from the Scrum Guide (emphasis mine):
- Development Teams are cross-functional, with all the skills as a team necessary to create a product Increment;
- Scrum recognizes no sub-teams in the Development Team, regardless of domains that need to be addressed like testing, architecture, operations, or business analysis; and,
- Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
Ideally, every member of the development team in Scrum could be a full stack developer but if in reality you have developers and testers then that's no problem at all. I've worked with such teams where developers wrote code and testers were testing it, and in some teams it worked and in others it didn't. What was the difference?
In the teams that worked well together, people collaborated. They worked together to move each story through the sprint to "Done". Developers finished their work and did a handover to testers. They explained what was going on, how the thing worked, where to look for things in the database, how to create test data, etc. Developers and testers had a good understanding of what needed to be built after interactions with the Product Owner. Testers worked closely with developers to prepare their test scenarios before they received a handover of the development. If someone needed help from someone else, they got it. They owned all of the work, even though they were taking care of different stages of the work (design, development, testing, etc).
Care to find out how things unfolded in the teams that didn't collaborate? Everyone was doing their own thing. Developers developed. Testers tested. Business analyst wrote requirements. They only cared about "their part" and once that was over they threw it over the fence to others to deal with it next. "I've done my part, now it's your turn". Instead of all pulling together to move the ball from one side of the court to the other, they just bounced the ball back and forth between each other until someone eventually said "good enough".
Your problem is not that people have different skills and are not cross functional. Your problem is that their skills don't complement each other. Their skills don't mix, they stay layered.
If you put developers to test more and testers to develop more, people will start to hate it. Find ways to make them work together. How exactly, I can't say. It depends on the team's dynamics. You might need to experiment with a few things. Test out some other things. Look at the entire picture and figure out what's going on. You might need to track each story in the sprint individually and determine from that where the friction points are between people. Then think how to work on that.
And keep in mind that it might take some while to improve the way people work together. You said you started as a Scrum Master a few months ago. How much time have these people worked the way they do now? That's how they do things. They might be so immersed in their way of doing things that they don't notice that there might be better approaches. You, on the other hand, are new and you see the problems. Work with them to improve communication and collaboration first, and later you can all look for ways to improve the process.
Answered by Bogdan on January 4, 2022
Your team appears to do mini-waterfall development within each sprint, which is a known anti-pattern, as you don't get the collaboration within the team that make agile methods successful.
Also, Scrum only has 3 "job titles": Product Owner, Scrum Master and member of the Development Team. There are no separate developers and testers, they are all equally members of the Development Team, although individual members may have more focus on implementation or testing.
The Development Team as a whole is responsible for delivering functionality according to their quality standards, which is typically represented by getting tickets to "Done". If there is a problem in getting tickets done, then the whole Development Team should be held accountable for that. If there is a backlog in testing functionality, then the people who wrote the code are equally responsible for resolving that backlog as the people doing the testing are.
As a final thought, I am assuming that at the end of a sprint all unfinished work automatically rolls over into the next sprint, with some new work added to fill the remaining capacity. I wonder how the team would react if the Product Owner were to start the planning of a new sprint with:
I have had a change in priorities. All the unfinished work we couldn't deliver last sprint is not important enough for me anymore, so we will stop working on it and those stories will go back to the backlog until they become relevant again. Now, this new work is what I want us to start focusing on from today.
You could discuss with the Product Owner trying this as an experiment, to give the team a jolt that unfinished work can really feel like wasted effort (where unfinished means the work isn't in the Done column; if implementation is complete but testing is not, then it is not Done). The point where those unfinished stories become relevant again could be the sprint after the one where you run this experiment, but that is up to the PO.
Answered by Bart van Ingen Schenau on January 4, 2022