Buckle up, settle down, and get ready for something exquisite. I am inspired tonight and what I present here should take a while to really sink in.
The primary use case of a computer is to augment human intelligence. We know that much. I feel, however, that in order to build an intelligence augmentation device you really need to understand how intelligence works.
People can only handle a certain amount of information at one time. No matter how smart you are, the amount of simultaneous details you can keep in your head isn't really that high.
You have to build abstractions, and a given abstraction of some concept I will call a symbol.
It is funny how, when we are learning new things, we fight building a given symbol. A lot of times, after something becomes clear, I am amazed at how concise it turns out in my head. Many times I have worked hard to figure out why I fought a given concept so hard. Math is usually like this; when we are alert but relaxed we can pay enough attention to really learn it and enjoy the feeling of new concepts bouncing around in our heads.
Given time you get comfortable with a symbol and you start to incorporate it into your view such that you don't even notice it. Then the symbol gets semi-randomly combined with other symbols, and if the new symbol is interesting or enjoyable you remember it. This process continues indefinitely in all of us, all the time.
You should take time to learn new concepts that are interesting and different from what you know. As you build layers of abstractions and symbols to adapt to the new experience, you will make associations and new connections between things you have known for a long time but been unable to see from the new perspective. Thus learning N brand new and different things really teaches you on the order of N^2 (N-squared) new things; learning compounds at least quadratically (this relies on the assumption that the things really are very new)!
In any case, we all think by making symbols and manipulating them in some sort of systematic fashion.
The symbols you are allowed to make in a programming language, btw, directly relate to how exact and how sophisticated a concept you can express with it. Higher level programming forces you to make clearer, higher level abstractions. The type system provides the rules for combining the symbols you make with the programming language.
Now let's talk about programs and get completely specific and practical. Most programs are very limited in the forms of symbols they allow you to make. Here are a few examples of building symbols:
The first is an abstraction over a collection of objects. I have all these paragraphs organized into a chapter. Now I can rearrange these chapters and form a book. That is a literal hierarchy of symbols. You can bet that in order to have a book laid out in a sophisticated manner, the author has built a lot of abstractions about what the chapters mean and exactly how the flow of the story wanders through the literature.
Another example: I have groups of formulas in a single Excel spreadsheet. In the user's mind, they are going to build some symbol out of that spreadsheet that allows them to reason about its capabilities without knowing every single line of code in it.
Second is the ability to extend a given symbol with new information. This form of abstraction is akin to a master-instance relationship, where you are in effect saying "this item is just like that one, but it differs in a few aspects here". Artists love to use this form to create crazy interesting concepts. DJs, Hollywood, and video game makers all use this form of abstraction to some greater, but mostly lesser, benefit.
It is very common on the web, where a templating system gives a site web pages with standard styles, headers, and footers while overriding a lot of other things. Most systems use templates incorrectly, however. For one, they aren't supposed to be something you copy and start from. If you can't change the final product by changing the template, then you haven't built an abstraction; you have just added more information. In the common case of Word documents, if I have a bunch of documents based on a template, I should be able to change the headers in all the documents by changing the header style of the template. This doesn't happen. (You might argue that this is safer, and that is fine; but I am talking about being able to build complex abstractions. That requires re-evaluating concepts you might already know, with the risk inherent in changing existing symbols. If you insist on ensuring that you *can't* damage existing goods, then real learning or abstracting isn't happening.)
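To make this concrete, here is a minimal sketch in C++ of a master-instance relationship done right; the names and structure are my own invention, not any real product's API. The instance stores only its differences, so editing the master really does flow through to every document that didn't override it:

```cpp
#include <iostream>
#include <memory>
#include <optional>
#include <string>
#include <vector>

// The master holds the shared definition.
struct HeaderStyle {
    std::string font = "Garamond";
};

// An instance refers to its master and stores only its overrides.
struct Document {
    std::shared_ptr<HeaderStyle> master;
    std::optional<std::string> fontOverride; // empty means "inherit"

    std::string headerFont() const {
        return fontOverride ? *fontOverride : master->font;
    }
};

int main() {
    auto tmpl = std::make_shared<HeaderStyle>();
    std::vector<Document> docs{{tmpl, std::nullopt}, {tmpl, "Courier"}};

    tmpl->font = "Helvetica"; // change the template once...
    std::cout << docs[0].headerFont() << "\n"; // ...this doc follows: Helvetica
    std::cout << docs[1].headerFont() << "\n"; // this one kept its override: Courier
}
```

The point is that the abstraction lives in the reference, not in a copy; the moment you copy-and-start-from a template, that link is severed and nothing can flow through it again.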
You see this in programs a lot, where you would like a set of programs to have a standard look and feel. You get used to these programs quickly because you aren't focusing on building new symbols from low level concepts; instead you are understanding the differences between what you are doing and what the other program was doing. Ctrl-C in Windows is copy in most programs. This means that while its meaning may be context specific, the general mental symbol that stands for copy is consistent across programs. You need fewer symbols to use the new system.
Let's look at how someone learns a song. You get the main feel of the piece; this is the first symbol you build. Then, given this rough, broad symbol, you continue adding detail and forming new symbols until you build the song out to a tolerable richness or accuracy of reproduction.
I can't immediately think of a third way of learning anything that doesn't fall into a combination of the two ways I stated before.
This is where orthogonality of concepts really comes in. Clear, orthogonal symbols are composable in ways that rough, non-orthogonal concepts are not. Rough, non-orthogonal symbols hurt you precisely because creating new symbols, either through grouping or through templating, is what learning really is.
Earlier tonight I realized that my picking technique was too strict. Playing guitar, doing some random drill I found in a book to warm up my hands and clear my mind, I realized that I could pick this drill several ways; I didn't need to develop some strict strategy and pick it exactly the same every time. I then tried many techniques to see which ones felt best. The drill had a lot of jumping from one string to another, and to get it to work well I tried a hybrid alternate-picking style where I arpeggiated the jumps. This meant that if the jump was down several strings, I used a down-stroke on both strings, and then used alternate up-down picking where I was playing a line of notes on a single string. If the jump was up several strings, then I used an up-stroke for both the last string before the jump and the first string after.
Then I realized that I could also pick it with a strict alternate picking style, where I could pick down on the top string (thus moving the pick towards a lower string), *move completely over the lower string*, and then pick up on the lower string. This meant that I was technically doing more than the minimal amount of work to pick the drill, but it allowed my wrist to make a more natural and better timed stroke, and since I had the control to do the strict style, I liked it better because the timing felt easier.
This was an example where the symbol I had built for picking -- that it had to be the minimal-cost route to getting things done -- was way too strict. The truth of the matter is that wrists are damn quick and in most situations easily move the pick fast enough. Doing things in the most comfortable way that still lets you play the notes is more important than mechanical efficiency. Knowing this gave me a lot more confidence with songs, because I started noticing all the places where my picking was somewhat inconsistent, but because I have worked on control so much, it doesn't affect my ability to play the song.
This form of abstraction, templatization, becomes too strict when the template has too much information: it eliminates possibilities that really should be valid.
So what is the point of all this? Just for once I am going to ask for some reader participation. I gave up a lot to come up with all of the above; it has taken me years to figure out. So please, if you enjoyed the explanation, let's take it to the next step.
You have two distinct forms of building symbols: aggregation and templatization (or master-instance if you like).
1. Are there other distinct forms of building symbols?
2. How could you apply these ways of building symbols to the programs that we use? For example, given a Word document, what are the different ways you can aggregate and templatize it? We are familiar with document templates; can you have a meta-template, a common template shared across templates? What about aggregations of templates; what would that imply? Aggregations of pages make a chapter, but under what scenarios would you want to use a page as a template or master of another (revision control comes to mind)?
Tuesday, November 18, 2008
Friday, November 14, 2008
Graphics Abstractions
I am not really sure how to start this, but let's talk tonight about useful graphics abstractions; meaning, let's run over lots of different ways you could group items (a code sketch of the hierarchy follows the list):
- Geometry buffers and index sets into a geometry buffer.
- A geometry buffer mixed with some sort of material system would give you something you could see; let's call it a geometry object.
- Anything with some 3d transform information you could call a node.
- Out of these node objects you can produce a scene graph.
- Take a subgraph of these meant to represent a single entity (like a person) and you have a model.
- For a given model there will be a set of animations that are meant to be applied to it, a model w/ possible animations you would call...what? Perhaps a character?
- A large scene graph possibly containing groups of models you could call a new model.
- For a given model it may make sense to have several states the model could be in. Perhaps with armor or without; perhaps glowing, perhaps not.
- This includes the larger scene graph, thus you have groups of models, each given model has several states and most likely lots of animations you can apply to it.
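Here is a minimal sketch of how these layers might nest, with invented type names; it illustrates the hierarchy above, not any particular engine's API:

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// A geometry buffer, and index sets that select pieces of it.
struct GeometryBuffer { std::vector<float> vertices; };
struct IndexSet       { std::vector<uint32_t> indices; };

// Geometry mixed with a material: something you can actually see.
struct Material       { std::string shader; };
struct GeometryObject { GeometryBuffer geometry; IndexSet indices; Material material; };

// Anything with 3d transform information is a node; nodes form a scene graph.
struct Node {
    float transform[16];                         // local 3d transform
    std::vector<GeometryObject> geometry;        // what this node draws, if anything
    std::vector<std::shared_ptr<Node>> children; // subgraphs
};

// A subgraph meant to represent a single entity, plus the animations meant
// for it: a model (or a character, if you prefer).
struct Animation { std::string name; };
struct Model {
    std::shared_ptr<Node> root;
    std::vector<Animation> animations;
};

// A large scene: groups of models, each with states and animations of its own.
struct Scene { std::vector<Model> models; };

int main() {
    Scene scene;
    scene.models.push_back(Model{std::make_shared<Node>(), {{"run"}, {"jump"}}});
}
```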
This is all talking about immediate mode things, or I guess you could call them instance level things. But really any given model would perhaps have a canonical form along with an instance level form. You can really have several levels of instancing, perhaps with a master state that the other states reference and just change bits of.
Oh, by the way, the actual set of properties on a given object is not well defined. They can change. At runtime.
Now let's look at this from a different perspective...
Let's say you have a given state of an object. Normally in a 3d environment there is relatively high repetition of assets; thus you have a bunch of chairs that all look identical or very similar.
So you introduce several different levels of master-instance relationships. Really what is happening is that you have different representations of a given object that you can write things to and read from. This may get too vague to follow quickly, but in a product I am currently working on we have:
Schema -> defines default properties.
Library -> defines some changes to the defaults.
State -> defines some changes to the library; contains animation systems.
Scene graph -> appends some properties to some objects (like a global transform).
Scripting -> changes the final result and sets properties.
Render engine -> renders the result of the pipeline.
Thus the processing pipeline for an arbitrary property on an object looks like:
schema -> library -> state -> animation -> scene graph -> scripting -> render engine.
Capiche? Lots going on here. Now, to get a given property you need to know which stage you care about reading it at.
By the way, the state->library section can be repeated; thus you have multiple states both in series and as siblings in some sort of 3d object state graph.
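As a rough sketch of what resolving one property through such a pipeline could look like (the stage names match the list above; the property-bag representation is an assumption for illustration, not our actual system):

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Each stage is a sparse bag of overrides; an absent key means "inherit".
using PropertyBag = std::map<std::string, std::string>;

// Walk the pipeline from schema to scripting, letting each later stage
// override whatever the earlier stages decided.
std::optional<std::string> resolve(const std::vector<const PropertyBag*>& pipeline,
                                   const std::string& name) {
    std::optional<std::string> value;
    for (const PropertyBag* stage : pipeline)
        if (auto it = stage->find(name); it != stage->end())
            value = it->second;
    return value; // what the render engine finally sees
}

int main() {
    PropertyBag schema{{"color", "grey"}, {"visible", "true"}};
    PropertyBag library{{"color", "red"}};       // the library recolors the default
    PropertyBag state;                           // this state changes nothing
    PropertyBag scripting{{"visible", "false"}}; // a script hides the object

    std::vector<const PropertyBag*> pipeline{&schema, &library, &state, &scripting};
    std::cout << *resolve(pipeline, "color") << "\n";   // red
    std::cout << *resolve(pipeline, "visible") << "\n"; // false
}
```

Because each stage is just a sparse map, properties can appear at runtime, and asking "what is this property at the library stage?" is simply resolving a shorter pipeline.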
The point is that we are thinking about writing a giant object abstraction that takes all of these details into account. This will be a behemoth of an object database system, completely custom to what we are doing. We will be making huge abstractions and bundling them up into simple, minimal interfaces.
Now comes the good part; why is this important?
Because people think in abstractions and symbols. They like to design things in abstractions and symbols. Plus they like to take an object, use it but change it slightly. So there are two large abstractions we are supporting generically.
The first is to allow arbitrary groupings of objects and to name these groupings. Then you should be able to use them as a distinct unit. A set of animations bundled into an animation group could define a running movement where you are animating a lot of things.
Sets of these groupings could be used to mark the set of animations that are explicitly used for a given model, which is itself a grouping mechanism for various other details.
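As a minimal sketch (with invented names) of naming a grouping and then using it as a distinct unit; groups nest, so one name can stand for a lot of animation:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// A named grouping: leaf animation clips plus the names of nested groups.
struct AnimGroup {
    std::vector<std::string> clips;
    std::vector<std::string> subgroups;
};
using GroupTable = std::map<std::string, AnimGroup>;

// Playing a group plays everything it transitively contains.
void play(const GroupTable& table, const std::string& name) {
    const AnimGroup& g = table.at(name);
    for (const auto& clip : g.clips) std::cout << "play clip: " << clip << "\n";
    for (const auto& sub : g.subgroups) play(table, sub);
}

int main() {
    GroupTable table;
    table["legs_run"] = {{"hip_swing", "knee_lift"}, {}};
    table["arms_run"] = {{"arm_pump"}, {}};
    table["run"]      = {{}, {"legs_run", "arms_run"}}; // "run" is now one unit

    play(table, "run"); // animating a lot of things through a single name
}
```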
Given a model (ignoring animation for a moment), you may want it in different states where it is blue or red, or perhaps armored as I said before, or otherwise.
Building these large hierarchies of abstractions is what allows us as feeble humans to actually achieve very large things; it is important that our software recognizes and reflects this paradigm.
Next, we like templates, or prototypes that we can use and change without changing the source data. We like instancing things from master relationships. I am not yet certain in my head how all of these things should work out, but I am just now beginning to see what the next step in 3d graphics composition and application design really is.
Chris
Saturday, November 8, 2008
Creating software 101
Let's take a look at what is required to manage the development of medium to large scale software products.
Someone has a big idea. From my perspective, someone identifies a need and a customer base.
This boils down to a set of vague features. You can group features in several different ways, but you need to decide on a hierarchy of them; each level could be a release.
This is where the first set of problems comes in: picking the features. You need to, as constructively and objectively as possible, figure out which features are going to hit the sweet spot of not a lot of work for a shitload of cash.
As a small aside, what it boils down to for me is the most functionality for the fewest lines of code. I look at every single line in the system as a liability; something that needs to be tested, verified, refactored, and all other manner of maintenance.
In the real world, however, not in Chris' world, what it boils down to is the amount of cash made per line, which I guess has a direct correlation to hourly rate. What the entire company should be interested in is making the most money with the smallest, tightest code base. You *don't* want your developers writing code as fast as they can every day. Ideally you want them refactoring old code, redesigning modules to take in new information, and shaking the last few bugs out of old features.
Anyway, we have features. Let's say some miracle happened and your expert marketing department did their job and picked a set that would be dynamite.
So far we have:
Idea -> Customer Research -> Master Feature Set -> Badass telepathic marketing research -> Beta, Alpha, and Release Feature set.
OK, we just got that far. There is another factor in the equation, however: how much work each feature requires. The design and dev teams need to figure out some kind of rough map and communicate it back to marketing, so that the feature set hits the sweet spot of the least effort for the most money.
We haven't talked about the design or dev team yet, but there is an iterative sequence and feedback loop that happens throughout the process, and one iteration looks like this:
Feature -> Design -> Dev -> Cost Analysis -> Badass telepathic marketing research -> New Better Simpler Feature
The cheapest feature to implement is the one you don't do. Never, ever forget this. It is far *far* cheaper to remove details at this level, the highest level, than at the design or the pump-code level. Nothing comes for free, and each new feature has an n^2 effect on complexity because it will interact with existing features and make adding the next feature harder. In addition, it makes the testing matrix larger and gives your sales team another detail to get tripped up on while they are trying to figure out where a potential customer is coming from.
Each piece of implemented system brings, along with the promise of cold, hard cash, the threat of carrying heavy chains of senseless complexity and pointless detail into each and every design decision later on. So think long and very very hard about exactly what you are going to do before you begin the process of doing it.
OK, let's say we have a feature set we are confident in. Now comes the fun part: an iterative process between a design and customer advocacy team and the development team, where the look, feel, and functionality of the product is hammered out to meet each largish feature. This should involve mock-ups, prototypes, using existing programs to see what the customer base is used to, artistic talent, and a good, pragmatic eye towards a minimal cost route.
We then break features down into stories, stories into lists of requirements, and requirements into lists of tasks. The more thorough you are with this breakdown the better, as it allows a clearer picture of the work required to move forward.
There isn't any software that does this well, but really what you want to build here is a large graph of dependencies. This is because a given set of stories may generate interleaving requirements on any given software module.
What is useful is to be able to ask a question like "If we eliminate X, how much less work is it?". Along with this you have the converse: "If we add Y, what is the impact on the system?".
So, in a hierarchy of generators, we have:
Feature -> Stories -> Requirements -> Tasks.
This maps reasonably well to the actual work required to do anything. Each of these arrows is a 1 -> N relationship, although multiple stories may generate the same requirement from the software's perspective. For example, if you have a good serialization system, then you can save/load the system as well as cut/paste between applications.
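To show why a graph rather than a strict tree pays off, here is a toy sketch (invented names and costs) where two stories share one requirement; answering "if we eliminate X, how much less work is it?" becomes a reachability query that counts shared work only once:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Work items form a graph; each item lists the items it generates.
struct Item { int cost = 0; std::vector<std::string> children; };
using Graph = std::map<std::string, Item>;

// Total cost of everything reachable from the given roots, counted once.
int totalCost(const Graph& g, const std::vector<std::string>& roots) {
    std::set<std::string> seen;
    std::vector<std::string> stack(roots.begin(), roots.end());
    int cost = 0;
    while (!stack.empty()) {
        std::string name = stack.back(); stack.pop_back();
        if (!seen.insert(name).second) continue; // already counted
        const Item& item = g.at(name);
        cost += item.cost;
        stack.insert(stack.end(), item.children.begin(), item.children.end());
    }
    return cost;
}

int main() {
    Graph g;
    g["feature:documents"] = {0, {"story:save_load", "story:cut_paste"}};
    g["story:save_load"]   = {3, {"req:serialization"}};
    g["story:cut_paste"]   = {2, {"req:serialization"}};
    g["req:serialization"] = {5, {}}; // shared: built once, used by both stories

    int all   = totalCost(g, {"feature:documents"});
    int fewer = totalCost(g, {"story:save_load"});
    std::cout << "work saved by cutting cut/paste: " << (all - fewer) << "\n"; // 2
}
```

Cutting the cut/paste story saves only its own two units of work, because the serialization requirement is still needed by save/load; a naive tree-shaped breakdown would over-report the savings.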
In any case, this should highlight just how important it is that you come up with a minimal feature set. Then the design must be very good but also very smart, so each feature generates the minimal set of stories. Finally, the dev team needs to be careful with how they break down stories, to end up with minimal requirements and minimal tasks.
Now we get into the details of what happens when you stop thinking about doing something and start doing it.
People, when testing the software, will come up with all sorts of things. Some things they come up with are additional features or new stories. Others will be actual defects, where a given piece of software does not meet the specifications. Finally, there will be details that are annoying but outside the scope of what was specified; they should still be tracked.
The design that I like is to have an issue database. This database is filtered by the design and product development teams to produce a defect database, additional story features, and requests to redesign sections of the product.
The reason I have an issue database is that bugs are things that should require fairly immediate developer attention. Bugs are defects in the system, and a release with a large number of them indicates a faulty process for creating software. They are like development gold: you rush to them, fix them to the best of your ability, and think about how they happened, as they teach you a lot about how you are developing software.
Good design, both at the product level and at the engineering level is key to minimizing everything in the issue database. A good design at the product level makes certain kinds of problems impossible. A perfect design from the product design point of view means the customer *can't* make a mistake. A perfect design from an engineering point of view means that bugs can't happen.
It isn't that you have such a badass developer that they *do not* make mistakes; it's that they think about their code so much that they implement a design that *cannot* fail. There is a large difference between don't and can't. One requires discipline and one requires genius.
This is, btw, my problem with a lot of software. It is written with too much discipline and not enough genius but I digress...
Now let's think about what implementing something actually means. Let's start with a simple but perhaps non-obvious assumption.
Every time you add a line of code to the system, you destabilize it to some extent. Thus when you are writing a bunch of code and adding capabilities, you *will* have bugs; there is no way around it. You will break old pieces that were working, even with unit tests and all manner of other stabilization devices. This is a fact of life; change means both moving forward and, in some senses, moving backwards, and of course no one wants to move back.
Also, let's make another assumption: the number of bugs, issues, and various other forms of feedback a feature or story will generate is directly dependent upon its complexity. This should be obvious; with two engines you are roughly twice as likely to have an engine failure as with one.
This, btw, is why a lot of twin engine airplanes are less safe than their single engine counterparts. It took people a long time to come up with a twin engine design that could fly capably on only one engine. This meant that for a while there were a lot of aircraft designs roughly twice as likely to crash, when the initial idea was to have redundancy and thus greater safety.
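To make the arithmetic explicit: assume each engine fails independently on a given flight with some small probability $p$. Then

$$P(\text{at least one engine failure}) = 1 - (1 - p)^2 = 2p - p^2 \approx 2p.$$

So if the aircraft cannot stay aloft on one engine, doubling the engines roughly doubles the chance of a crash; redundancy only pays off once a single engine is sufficient to keep flying.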
In order to have a stable beta, alpha, and release, you need to *not* be changing the software that much. Let's take a graph of the capabilities added to a software system over time. You want it to look like a bell curve, where the hump in the middle is the area of highest activity, and you ramp into such development as well as ramp out of it.
This is because you want a stable product in the end. Thus you ramp up slowly, thinking a lot about design and how to accomplish what you are doing. Next you pump code and your QA team starts ripping you a new arse hole. Then you start switching resources to fixing QA issues rather than adding new capabilities. As the release gets closer, you really focus on bugs and capabilities take a back seat.
The graph of bugs that people find will mirror your graph of adding capabilities to the system, just later in time. How much later depends on your ability to test the software effectively and on your ability to fix bugs whose solutions will reveal or cause new bugs. You want your release to coincide with the point of largely diminishing returns, where the new bugs being found are by and large not worth fixing: they will have minimal customer impact or will be fixed by stories that are scheduled for after release.
You really want a good QA team. You want a rare combination of smart and disciplined for QA more than anything else. If your QA team isn't smart, then your best and brightest customers will find your worst bugs, and you will have lost some of your greatest advocates. If they aren't disciplined, then they will not test all the combinations of features they could, and your average customers will run into random issues just messing around with the product in an interesting way.
The QA team and the dev team don't have to be drinking buddies, but there shouldn't be animosity either. During a long release cycle, however, I know that I start to get aggravated, and so does our QA dept, and we stop speaking nearly as much.
In any case, a new story or capability is an issue generator, but it doesn't generate all its issues right away. Fixed bugs will reveal new bugs, and you will get chains of bugs that are very difficult to fix quickly.
Finally, there is a point when you want to show the world what you have done. You have faith in your marketing dept, and their research is solid and smart. You have confidence in your customer research system, and of course in your big idea. Your product design team has been creative and done a great job of delineating a clear vision of how the product will look and feel from a customer perspective. The dev team is a team of patient, smart, tough geniuses who have produced smart, tight software design from day 1. Your QA team doesn't take bullshit from anyone, and while they can break most pieces of software just by looking at them, they can't touch your current hotness.
It is time for the demo. It is time for everyone to work together and think out a set of scripts that will shock and awe, amaze and delight all potential customers, and it is time to mobilize the sales force. These people shouldn't think about anything but cash. They need to be cutthroat; they need to be able to really get into what they are selling but also be capable of reading each new prospect like a book. They are the front line, the marines so to speak, and now the fate of the entire operation rests on their shoulders. They need to take ground, and what they do will ultimately make you all the money in the world or provide an excuse in your next job interview.
They will feed ideas back into the feature and story databases and will provide another source of information about how the product is working in the real world.
In any case, get a great idea and bring all of this complex machinery together and you are a long way above most companies in terms of your ability to bring great software to market. If any of these pieces are weak then your software, regardless of the vision or idea behind it, will not stand the test of time or customers.