Note: I should point out that in the following piece, I will follow the Danish tendency of not using people’s titles. For people not living in Denmark, this might seem disrespectful, and if it is perceived as such, I apologize, but the habit is too ingrained in me for me to change it now.
I was considering calling this piece “through the looking glass”, but that would have connotations of weirdness which I found inappropriate, since what I wanted was to indicate that I had experienced the “other side” of the divide for once.
What divide you ask?
The gender divide. The gender divide in technology to be more specific.
People who have followed my other blog and twitter stream are probably aware that I am an out-and-open feminist, and that I regularly criticize my field (programming and IT consulting) for how women are marginalized, e.g. by the male dominance when speakers are picked for conferences.
This year I participated in such a conference; the GOTO conference in Aarhus, Denmark (the conference was formerly known as JAOO). Here the lineup of speakers was also heavily tilting towards men, but it is one of the conferences which actively tries to get female speakers, and they had managed to get some really great ones, including Linda Rising, Rebecca Parsons, and Telle Whitney.
Telle Whitney gave a talk on women in IT, and all three of them participated in a meeting with the Ada Aarhus group, held after the talks on the second day of the conference.
I went to the talk, and participated in the Ada Aarhus meeting, and both of these things introduced me to the concept of being the outsider - something which I understood, or at least thought I did, yet which I hadn’t really experienced before. I can’t say I enjoyed the experience, but it was certainly enlightening, and it forced me to re-evaluate what I thought I understood on this subject.
Before going into how this happened, I want to step back a bit, and give a brief introduction to myself and the part of my background that is relevant.
First of all, as the sidebar says, I am a Danish IT consultant in my thirties. For those interested in the details, I am a .NET consultant, working mostly with large financial or public systems.
What the sidebar doesn’t mention, but which many people know, is that while I am Danish, I am also Australian. My mother was Australian, and while I grew up in Denmark, my childhood was a mixture of cultures - not only Danish and Australian, but also several others, since my childhood friends were mostly of mixed backgrounds as well (though all with Western backgrounds).
This upbringing has left me unable to entirely relate to a typical Danish upbringing.
It is the small things that usually trip me up - the children’s stories and songs that I haven’t heard, and the ones that I grew up with instead (would you believe that most Danish children grow up with neither The Wizard of Oz nor Snugglepot and Cuddlepie?) - but it is also the inability of many to look beyond the borders and think globally, and the distrust of foreign things and multiculturalism that people hold, thinking that anything foreign must be dangerous or inferior.
This means that I am the outsider in some cases. But given the fact that I’ve grown up in Denmark, not entirely so, and since I look Danish, I can always act in ways which allow me to fit in.
Going back to the women in IT talk, Whitney talked about what companies and individuals could do to ensure that women could advance in IT - a subject I feel strongly about. Yet while listening to the talk, I kept feeling that I was left out - that Whitney was talking neither to nor about me. The reason was that I am not in a position to make company decisions, and that the individuals Whitney was addressing, about what they could do, were the women. Not the men. None of the recommendations related to me and my daily life.
You know why? Because it wasn’t about me!
I knew this at an intellectual level. Yet I hadn’t realized the full impact until I experienced being left out. It bothered me more than I thought it would. My privilege kicked in, and I felt a bit of resentment at the gut level, while knowing full well, at the intellectual level, that this was how it ought to be.
If this was how I felt during a 50-minute talk, how must it feel for people who experience it day in and day out - e.g. women whose wishes and needs are ignored, or LGBT people living in a heteronormative society?
I cannot in any way pretend that I can relate to how they feel. But I can say that I understand it a little better now.
The Ada Aarhus group meeting, where both Linda Rising and Rebecca Parsons gave brilliant talks, only strengthened my understanding of this, and my realization of how little I can relate to how it would feel to experience this every day.
Hotfix Hell
I am a firm believer in many agile processes and tools, including iterative development, where you work with many deliveries. This gives early, frequent feedback, and lets you find errors early, before the project turns into an error-driven development project.
Unfortunately, this is not always possible.
A type of project I often work on is the large project which spans several years, where customer involvement is minimal except at the start and at the end. This sort of project is pretty much doomed to go over time, go over budget, have many errors etc. Such projects are exactly the reason why agile development has become so popular. Even if the project is developed in an agile way, the lack of customer involvement means that the project can go in the wrong direction without anyone finding out before the end.
Typical projects where this happens are public projects subject to tenders. Here the scope of the functionality etc. is determined before the contractors bid on the contract (though often with a clarification phase at the start), and once the bid has been accepted (and the initial clarification phase has passed), the scope, deadline, and price are fixed. The customer will then often not be involved again until the final acceptance test phase, where the solution is often found to be lacking (to put it mildly).
As this approach has obvious flaws, there have been attempts to fix it by introducing sub deliveries, each of which has to pass acceptance testing. In my experience, there are typically two sub deliveries before the final delivery at the end.
This approach might seem somewhat agile, and since it gives earlier feedback, you’d think that it would help. Unfortunately, in my experience, it actually worsens the problem.
The problem is in the acceptance test part.
Picture the typical two-year project with three deliveries (two sub deliveries and one final). Given that there is a lot of scaffolding etc. at the start, the first sub delivery will fall after one year, with the second sub delivery half a year later, and the final delivery half a year after that (i.e. one year after the first sub delivery).
This is, in itself, not an unreasonable schedule.
Unfortunately, the programmers involved will often have to acquire domain knowledge while doing the scaffolding and early development of functionality, increasing the likelihood of wrong decisions and/or errors in the implementation. Some of this will become apparent as the first deadline approaches, and might be corrected in time - unfortunately it won’t be possible to correct all of it, and some misunderstandings will only become clear during acceptance testing.
Since the project is on a tight schedule, work on sub delivery two starts right away - often beginning with fixing the flaws that were found before the deadline but which there wasn’t time to fix, while also starting on new functionality etc.
Unfortunately, the errors will still be in the code submitted to the acceptance test.
Since the errors are found, the acceptance test fails, and the customer rejects the sub delivery as it stands.
What happens then? Well, this is where the hotfix hell starts.
Since the code submitted as sub delivery one has failed the acceptance test, the developers have to fix the errors in the submitted code, which is now out of date compared to the current code base. This is done by making a patch or hotfix to the submitted code.
The patched/hotfixed code is then re-submitted to acceptance testing.
If this passes, then all is well. Unfortunately that’s rarely the case. Instead, new errors will be found (perhaps introduced by the fix), which will need to be fixed, re-submitted, tested etc. This will take up considerable amounts of time, calendar-wise but also resource-wise - meaning that programmers, testers, customer testers etc. will spend a lot of time fixing problems in the code, time they could have spent on other things. Other things, such as sub delivery two.
Just because sub delivery one has failed the acceptance test doesn’t mean that work on sub delivery two has stopped - it is, after all, to be delivered six months down the line.
Unfortunately, the plan didn’t take into account the hours needed to work on sub delivery one after its delivery date and/or the date of the expected acceptance test. This means that sub delivery two is in trouble, since the developers won’t have time to do all the work required for it to pass its acceptance test.
Meanwhile, sub delivery one and sub delivery two drift further and further apart, resulting in developers having to fix problems in obsolete code - this is frustrating for the programmers, and introduces the risk of errors only being fixed in the old delivery instead of in both, since porting the fixes is difficult.
At some stage, sub delivery one will pass the acceptance test, or (more commonly in my experience) it will be dropped, as the next delivery is either about to be delivered or has already been delivered.
Due to the work on sub delivery one, the second sub delivery is unfortunately either late or in such a mess that it cannot pass the acceptance test (or, more likely, both).
This means that when sub delivery two is handed in, hotfix hell starts all over again.
So, how can this be fixed?
Well, one way is to do iterations the agile way. Unfortunately, that’s not particularly likely to happen.
Another way is to base deliveries on the date when the acceptance test of the earlier sub delivery has passed. So in the above example, the second delivery will be handed in six months after the first delivery has passed the acceptance test.
Given the nature of the projects using sub deliveries, this is also unlikely to happen. Often the last deadline is defined by a new law or regulation, and is firm (until it becomes completely apparent that it cannot be met).
A more likely solution would be to take the hotfix overhead into account when planning. This would mean that the time spent on hotfixes for sub delivery one wouldn’t eat into the time set aside for sub delivery two. The problem with this approach is that it would make the price higher when bidding, since more people would be needed to finish the work on time than if one assumes that no hotfixes are necessary. On top of that, it is also hard to estimate just how much time is needed for this (in my experience, everybody vastly underestimates it).
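To illustrate the kind of back-of-the-envelope calculation I have in mind, here is a small sketch with entirely made-up numbers - the team size, the length of the sub delivery, and the share of time lost to hotfixes are assumptions for illustration, not figures from any real project:

    # Sketch of how hotfix overhead on sub delivery one eats into the
    # capacity planned for sub delivery two. All numbers are made up.

    def capacity_shortfall(team_size, months, hotfix_fraction):
        """Return planned and effective person-months, plus the extra
        people needed, when part of the team's time goes to hotfixing
        the previous delivery."""
        planned = team_size * months       # budgeted for sub delivery two
        lost = planned * hotfix_fraction   # spent patching sub delivery one
        effective = planned - lost
        extra_people = lost / months       # people needed to close the gap
        return planned, effective, extra_people

    planned, effective, extra = capacity_shortfall(team_size=6, months=6,
                                                   hotfix_fraction=0.3)
    print(f"Planned: {planned} person-months, effective: {effective}, "
          f"extra people needed to stay on schedule: {extra:.1f}")

With these (invented) numbers, roughly two extra people - or a correspondingly smaller scope - are needed just to stay on schedule, and that assumes the 30% overhead estimate is anywhere near correct, which, as noted above, it usually isn’t.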
My suggestion would be something simpler: timebox the acceptance test - and call it something else.
Before the project starts, the customer and the contractor agree on how much time will be spent testing each sub delivery - enough to make sure that the fundamentals are sound, but not so much that the developers end up fixing problems in obsolete code.
When the timebox ends, the customer will either accept or reject - a rejection meaning that the customer thinks the code is so fundamentally flawed that it cannot be used. If that’s the case, the customer and the contractor will need to sit down together and figure out how to move on from there. Perhaps the final deadline will have to be moved, the contractor will have to add more people to the project to get it back on track, or the customer will have to become more involved in the development process (e.g. by providing people who can help with testing during the development of sub delivery two).
I am well aware that my suggestion breaks with the concept of sub deliveries, but I would claim that the concept of sub deliveries is fundamentally flawed, and instead of helping with the problem, it actually makes it worse. Since this is the case, I think we have to rethink how sub deliveries are used, if at all.