You need details of what the development team has provided, how it works, and how the operations team will run it. Even a basic account of how it works, and what to do when it doesn't, will save some poor sod on the night shift from struggling with a simple problem like a full disk.
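A runbook entry for that full-disk case can be as simple as a scripted check. Here is a minimal sketch - the 90% threshold and the output wording are illustrative, not a standard:

```shell
#!/bin/sh
# Runbook sketch: warn when any filesystem is over a usage threshold.
# Reads `df -P`-style output on stdin; threshold (default 90) is illustrative.
check_disk_usage() {
    awk -v t="${1:-90}" 'NR > 1 {
        gsub(/%/, "", $5)   # strip the % sign from the "Capacity" column
        if ($5 + 0 >= t)
            printf "WARNING: %s is %s%% full (mounted on %s)\n", $1, $5, $6
    }'
}

# Typical use on the night shift:
df -P | check_disk_usage 90
```

The point is less the script than the fact that it is written down: the operator on shift does not need to know the application to act on a WARNING line.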
Whether it's code, operating instructions or incident management, any team larger than zero will benefit from writing it down. And the benefits grow with the size of the team - which is why any large team lives or dies by its processes and documentation.
A developer once told me that his program didn't need documenting as "my code is perfect". But even that "perfect" code did not exist in a vacuum.
The program specification was based on a third-party data format that was not formally defined (at least not to us), so the data was processed based on assumptions and empirical evidence. When an unforeseen data case arose (in this instance a stock market ticker symbol was altered when a company demerged) the program had no way to deal with it. The "perfect" program could only be reset by restarting it - which meant it lost all current status.
Thursday, 22 November 2007
Wednesday, 14 November 2007
We don't need a test environment until after launch
If you use your production environment for testing before launch you have two problems to manage.
Firstly, as soon as you go live you no longer have a test environment - arguably when you need it the most.
Secondly, when you try to retrofit your test environment you are building it to match a moving target in production - which has a tendency to have already diverged from what was defined.
If you build your test environment first, you can discover the problems of building at a smaller scale, and use it as a dry run for production.
It's easier to establish standards and methods for system builds on a smaller scale.
I've seen a live environment where, of 7 application servers, all 7 were different builds ... and to add an eighth the operators would pick the "best" one to clone, as that was the only way to create a new one. But of somewhat unknown provenance.
Deployment methods and processes affect system stability, security and disaster recovery or continuity plans. It's as essential to develop and test those as it is to test the application itself ...
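A lightweight guard against that kind of drift is to record a build manifest per server and diff them, rather than cloning the "best" box. A minimal sketch, assuming Unix-like servers (the filename pattern and manifest contents here are illustrative, not a complete build specification):

```shell
#!/bin/sh
# Sketch: capture a per-server build manifest so servers can be
# compared for divergence instead of cloned from an unknown baseline.
manifest() {
    host="$1"
    {
        echo "hostname: $host"
        echo "kernel: $(uname -r)"
        # A fuller manifest would also record installed packages, e.g. on a
        # Debian-style system: dpkg-query -W -f '${Package} ${Version}\n' | sort
    } > "manifest-$host.txt"
}

manifest "$(hostname)"
# Then compare two servers with:  diff manifest-app1.txt manifest-app2.txt
```

Even a manifest this crude makes "all 7 are different builds" visible early, while the fix is still cheap.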
Meaning of Beta
Beta does not just mean that it's fast and free, nor even that it has bugs.
Some teams use alpha and gamma tests (I am sure deltas exist somewhere as well).
Here we usually mean Public Beta - meaning one used by some real customers.
I think the most useful concept is that a Public Beta defines a stage where the software is good enough to use (perhaps with restrictions), but where the supplier - rather than a purchaser or user - decides which issues should be enhanced or fixed, while real user feedback and experience is gathered.
In a fully released product (whether commercial or open source) there is a formal or implied commitment to support the advertised functionality, and this places some restrictions on the development team in how they alter and maintain the existing behaviour.
So in a Beta an API might be altered with little palaver, but after Beta we'd expect more warning and backwards compatibility.
Of course all of this is relative to market or community understanding - Google's Gmail is now almost three years old and still describes itself as Beta in its logo. And while we expect reliability it may not be promised - our expectations are still set by the description and the price.
Sunday, 21 October 2007
Non-English Idioms #1
You cannot carry frogs in a bucket.
From Dutch.
Particularly applicable to development, marketing and all teams associated with delivery of a large project.
The little blighters will keep jumping out of the plans and deadlines you try to impose (or that they agreed to!) - unless you put a lid on the bucket.
This kind of restriction can also be called timeboxing, or tightly controlling scope.
Monday, 15 October 2007
Strategic investment - I know what that means...
... it means without a business case.
So just because some techie and/or business sponsor believes it's essential, it ain't necessarily so.
Strategic investments of effort or money deserve a more careful decision-making process, and a business case includes a peer review from a financial perspective.
Testing is not another term for contingency
Many naive projects budget a week or two for overall testing, but as deadlines slip and business pressures mount you may find that golden period of stabilisation and final checks turns into the last window for getting the actual development or operational integration finished.
Copyright 2007, 2008 Paul Davey
The author asserts his moral right to be identified as the creator of these works.
So if you think one of these phrases is pithy/worthy/funny I'd appreciate some credit!