Since these pages went up several years ago I've gotten many comments (mostly good) and questions from people who are interested in understanding how sponsored project development actually works. Here's how we do things at one of my larger clients. In my opinion it's fairly typical of how sponsored project management should be done. Their System Development Methodology differs from mine in many respects, but you'll see in this narrative that the big picture is the same.
Typically, the project starts with a request from the Business Unit (BU), in which they list all of their requirements for the system. IS then adds the technical requirements needed to make this happen. These are bulleted in an executive summary and elaborated upon in the body of the request and design documents. Note that this encompasses Phase 1, Phase 2, and Phase 3 of my SDM.
Here's an important refinement this client added to their version of the Business Requirements document: a clause stating that any change in requirements requires amendments to the design documentation, and the agreement of the BU to delay the implementation date by at least one week, with associated costs. No agreement, no change. This is to stifle frivolous requirements and curb "scope creep". With the adoption of a newer SDM for Rapid Application Development (RAD) they've unfortunately dropped this clause. In a RAD environment it's even more important to keep the original requirements in check: with an iterative development model it's far too easy to sneak new requirements into the system, losing sight of the original target. This is one of the biggest weaknesses of RAD.
Back to business: we use the bulleted requirements I mentioned above to build a checklist that will be used both for user acceptance and for post-implementation review. There is and isn't a pro-forma for post-implementation review... meaning that it's custom-built for each project at the beginning of the project and refined during design. Some criteria are added as a matter of form, but these typically deal with infrastructure issues that apply to any project (got enough bandwidth? Too much? etc.)
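A dual-purpose checklist like this is easy to picture as a small data structure. The sketch below is purely illustrative (the class names, fields, and sample requirements are my own inventions, not this client's actual forms): each bulleted requirement carries one flag for user acceptance and one for the post-implementation review, so the same list serves both gates.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One bulleted requirement from the Business Requirements document."""
    requirement: str
    accepted_by_user: bool = False   # checked at User Acceptance
    met_in_production: bool = False  # checked at Post-Implementation Review

@dataclass
class ProjectChecklist:
    items: list = field(default_factory=list)

    def add(self, requirement: str) -> None:
        self.items.append(ChecklistItem(requirement))

    def outstanding(self, phase: str) -> list:
        """Requirements not yet satisfied at the given gate ('uat' or 'pir')."""
        attr = "accepted_by_user" if phase == "uat" else "met_in_production"
        return [i.requirement for i in self.items if not getattr(i, attr)]

# Hypothetical sample requirements for illustration only.
checklist = ProjectChecklist()
checklist.add("Sales reps can enter orders from the field")
checklist.add("Orders sync to the mainframe nightly")
checklist.items[0].accepted_by_user = True
print(checklist.outstanding("uat"))  # ['Orders sync to the mainframe nightly']
```

The point of the structure is that nothing is marked done twice: a requirement can pass user acceptance and still fail in production, which is exactly what the post-implementation review is meant to catch.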
At this point the system is designed. Since I typically act as the system designer, this starts with me. Using my understanding of the business process, the existing systems with which I'll have to interface, and some experience in creating a broad range of systems, I "sketch out" a high-level overview of what I want to accomplish and how I think it should be done, as well as how this will meet the requirements. I may actually draw up several alternative plans from which to choose. Then I typically get my team together and we look at it and ask the following questions:
Out of this review (and brainstorming) session we decide on a rough design. I come up with a cost/benefit analysis, polish the design up for the BU (and list several alternatives), and work with the Business Analyst to present it for the BU's approval. In presenting our solution, we honestly list the risks and whether all of the requirements are practical. The design at this point is specifically targeted at the BU. It's phrased in what I like to call Business Unit Markup Language (BUML). It's vitally important that the BU is not bogged down in implementation details. In reality this is a sales presentation. It's important for the BU to understand what's going to happen on their behalf and to have "mindshare" invested in the project. They must feel good about its funding... after all, they're paying for this stuff.
There are several possible outcomes from this meeting:
Armed with BU approval, we begin the design in earnest. I start with the Functional System Design and flesh it out to become a Technical System Design. This defines the actual classes to be used and their interfaces. If I'm blessed with experienced programmer/analysts, I can hand the interface specifications to them and they can do their own module designs, which I approve. Otherwise I might have to do the module design as well. In either event we review the design before actual programming begins. I act as moderator and arbiter in negotiating the final design of the interfaces. This is much more effective and less prone to errors of omission than simply dictating an interface.
The programmer/analysts can now start programming, and they're responsible for testing their modules before submitting them as complete. During construction I'm busy generating test plans for System Integration, Load Testing, and User Acceptance.
The primary question asked during user acceptance is, "Were our requirements met?" Simple enough. The system is also tested to make sure it works as designed, and "negative tested" to make sure it responds well to unspecified input (you coded a dialog box to respond to "Y" or "N"; what happens when the user presses "Q"?)
Also during the programming phase I create a plan for implementation. This involves working with Technical Services and other project managers to schedule the implementation. I create UML Deployment Diagrams, and work with the Help Desk to prepare them to support the new system. Included in the Implementation Plan is a plan to have developers on-hand (or on-call) in the event that problems are encountered, as well as a plan to roll back the changes in the event of unresolvable problems.
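The implement-or-roll-back decision can be sketched as a simple wrapper. This is a toy illustration under my own assumptions (the callables and the dictionary "state" are hypothetical stand-ins, not any real deployment tool): apply the change, verify it, and undo it if verification fails.

```python
def deploy(apply_change, verify, roll_back):
    """Run an implementation step; undo it if verification fails.

    apply_change, verify, and roll_back are callables supplied by the
    implementation plan (hypothetical names for illustration).
    """
    apply_change()
    if verify():
        return "implemented"
    roll_back()
    return "rolled back"

# Toy example: the "change" bumps a version, and we simulate an
# unresolvable problem by making verification fail.
state = {"version": 1}

def apply_change(): state["version"] = 2
def verify(): return False          # simulated failure
def roll_back(): state["version"] = 1

result = deploy(apply_change, verify, roll_back)
print(result, state)  # rolled back {'version': 1}
```

The discipline matters more than the mechanism: writing the rollback path before implementation night means nobody is improvising at 2 a.m. when the problem turns out to be unresolvable.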
Now that the system is constructed, tested, and approved, it can be implemented according to the implementation plan. Post implementation review isn't scheduled until at least a month after implementation, but it could be as long as a fiscal quarter. In the meantime the system is closely monitored, and statistics are gathered regarding usage, performance, errors and bugs, help desk tickets, and the like.
At the post implementation review all of this information is trotted out, along with the checklist of requirements. The question is asked, "Does this meet our requirements in production as well as we thought it would prior to implementation?" If so, cool. If not, we have a few options:
An example of 3 and 4 would be a Sales Force Automation package I recently replaced. The original was underpowered and kludged up to work with 5 times the number of users it was designed to accommodate. We knew it sucked, but it was better than nothing, and we were willing to live with it until something better was designed commercially. Basically, we gambled development time on the wager that "somebody" would come up with a solution. Eventually the wait paid off and we were able to replace it with the commercial product that we modified to our specifications without spending a lot of development money fixing the old system.
Now, if we come up with anything but success we put together a list of "Lessons Learned" so that we don't make the same mistakes again. These are put in the project document folder in a shared directory; in an issues database shared by the IS department; and are talked about in one of our monthly departmental meetings (with the entire department in attendance.)
As to who conducts the reviews... here it's done by the project requestors, the IS developers, and by a Business Analyst who is tasked to be the intermediary (he translates "Geekspeak" into "Business Unit Markup Language" and vice versa). There is no outside agency involved (except perhaps in testing), because from experience we've learned that it's better to encourage honest assessment from those intimately involved than it is to bring in someone who is impartial, but will simply miss key factors due to unfamiliarity with the project.
Phase 9. Post-Implementation Review Swim Lane Diagram