Monitoring Project Progress
We project managers need to be constantly on our toes to ensure that our project is on the right track. Monitoring includes all the efforts and activities to collect and analyse project performance data. Accurate and effective monitoring helps us stick to our timeline and identify and resolve problems early, helping to ensure our project's success. To help achieve this, we typically use the following three-step controlling process:
- Measure: Keep a strict vigil on progress against the project plan.
- Evaluate: Determine the root causes of deviations from the plan.
- Correct: Make appropriate corrections to address deviations.
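The three-step controlling process above can be sketched in code. This is a minimal illustration only, assuming we track each task's planned and actual percent complete; the field names, tolerance value, and sample tasks are all hypothetical.

```python
def measure(tasks):
    """Measure: compute each task's deviation from plan (planned minus actual % complete)."""
    return {name: t["planned_pct"] - t["actual_pct"] for name, t in tasks.items()}

def evaluate(deviations, tolerance=5.0):
    """Evaluate: keep only tasks whose deviation exceeds the tolerance limit."""
    return {name: d for name, d in deviations.items() if abs(d) > tolerance}

def correct(flagged):
    """Correct: list flagged tasks, worst first, for root-cause analysis and corrective action."""
    return sorted(flagged, key=flagged.get, reverse=True)

# Illustrative progress data against the project plan.
tasks = {
    "design":  {"planned_pct": 100, "actual_pct": 100},
    "build":   {"planned_pct": 60,  "actual_pct": 45},
    "testing": {"planned_pct": 20,  "actual_pct": 18},
}

deviations = measure(tasks)
flagged = evaluate(deviations, tolerance=5.0)
print(correct(flagged))  # only "build" (15% behind plan) exceeds the 5% tolerance
```

In practice the "correct" step is a management activity (root-cause analysis, re-planning), not a calculation; the sketch simply shows how the three steps feed into one another.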
Finding a good balance between monitoring too much and too little is often a challenge, especially if we’re new to project management. If we’re constantly checking our team or expecting them to report to us every day, it can feel like we’re micromanaging and don’t trust our team. If we go too long between check-ins, we may find out too late that the project has problems. Some projects may work best with bi-weekly or monthly check-ins. Others may need monitoring more frequently. Also, there is a trade-off between the cost to monitor progress and the value of the resultant information, since project control should be cost-effective. The need to monitor project performance will typically be influenced by:
- Project priority.
- Size of resource commitment.
- Cross-project dependencies.
- Allowable tolerance limits.
- Project team members’ expertise and experience.
- Stakeholder expectations.
- Risk assessment.
- Consequences of failure.
- Project complexity.
- Project novelty.
- Progress to date.
Tools and techniques for tracking progress might include any of the following:
- risk log
- issues log
- lessons learned log
- change log
- accident register
- progress report
- status report
- exception report
- site visit
- variance report
- milestone slip chart
- personal contact
- review and audit
- prototype and trial
- telephone conference
- video conference
- structured walk-through (see an explanation below)
Here are four questions that we should regularly ask our team members. These aren’t stilted performance-review questions. They are useful questions that can dramatically improve a team member’s morale, output and the quality of their work:
- “What’s your biggest accomplishment this week?”
- “What’s your biggest challenge right now?”
- “What should we do differently?”
- “Is there anything I can help you with?”
Reports that are common for most projects include status reports, which compare project performance as at a specific date with what was planned; progress reports, which describe what has been accomplished over the reporting period; forecasts, which predict future status or performance; and variance reports, which document the difference between actual project results and anticipated results. Periodic reviews and audits are also appropriate, where:
- A review is a structured opportunity for reflection to compare the project against good-practice PM, assess performance to date against the original baseline, identify key issues and risks, and make timely and informed decisions for ongoing effective project implementation. Reviews are usually an internal assessment. While monitoring is ongoing, reviews are periodic throughout the project lifecycle. It’s important to bring in someone with project experience for such reviews to give our project an objective health check. Sometimes we PMs and our project teams can be too close to the project to see lurking problems. Such reviews are best built in at the beginning, rather than waiting for a crisis to occur.
- An audit is an assessment to verify compliance with established legislation, policies, protocols, rules, regulations, procedures or mandates. The emphasis is on assurance and compliance – a check to confirm that we are doing the right things and doing those things right. Audit frequency will usually be determined by the novelty, size and complexity of our project, and progress to date. Auditors should be provided with the project charter, PM plan, recent status reports, and an up-to-date issues log, risk log, and change register. Based on the ISO 19011 standard for auditing, our project auditors should adhere to the following practices: demonstrate integrity and professionalism, comply with all applicable legal requirements, withstand pressures that may affect their professional judgment, present complete, independent and impartial assessments, respect confidentiality and information security, and use an evidence-based approach to reach reliable conclusions.
Should project variance look likely to move beyond predetermined and acceptable tolerance limits, corrective action is needed. Re-planning or corrections that are likely to exceed the parameters and any contingency provisions in our current charter first need our sponsor’s approval, followed by the issue of an updated charter.
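The escalation rule above can be sketched as a simple decision function. This is an illustrative sketch only; the percentage thresholds and response labels are assumptions, not prescribed values.

```python
def corrective_action(variance_pct, tolerance_pct, contingency_pct):
    """Decide the response level for a given variance (all values in percent).

    Within tolerance: keep monitoring. Beyond tolerance but within the
    charter's contingency provision: team-level corrective action. Beyond
    both: sponsor approval and an updated charter are needed.
    """
    if abs(variance_pct) <= tolerance_pct:
        return "monitor"
    if abs(variance_pct) <= tolerance_pct + contingency_pct:
        return "corrective action"
    return "sponsor approval required"

# Illustrative limits: 5% tolerance, 10% contingency.
print(corrective_action(3, 5, 10))   # -> monitor
print(corrective_action(8, 5, 10))   # -> corrective action
print(corrective_action(20, 5, 10))  # -> sponsor approval required
```

The point of the sketch is the layered thresholds: day-to-day deviations stay with the team, and only variances that would exhaust the charter's provisions get escalated to the sponsor.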
What’s a Structured Walk-through?
The structured walk-through (SWT) is a gentle yet effective audit developed by IBM in the late 1960s as a relatively friendly approach to periodically evaluating project performance. One significant problem with any evaluation is that its negative aspects can discourage project staff from participating forthrightly and readily in the evaluation process. The structured walk-through deals with this issue by giving the people who are being evaluated greater control over the evaluation process. The structured walk-through might also be used to evaluate a contractor’s performance where a partnering relationship exists. In this instance the contractor controls the evaluation process.
The key principles for conducting a structured walk-through are explained in the following paragraphs. These rules reflect the original rules developed by IBM, as well as some modifications to them that have evolved over recent years, as summarised by Professor J D Frame (‘The New Project Management’):
- Those being evaluated choose the evaluators. This reduces the sense of threat sometimes felt by those being evaluated. If they choose the evaluators, they are less likely to complain that the evaluation team was selected in a vindictive or arbitrary fashion. They can also be assured that the team was not chosen in accordance with some hidden political agenda. And they can select an evaluation team made up of people who are already familiar with the organisation and the type of work involved. By doing so, they reduce the amount of time they must dedicate to bringing the evaluation team up to speed.
Obviously, there is some concern that those being evaluated will rig the jury. That is, they will choose evaluators who are close associates and who may be reluctant to criticise them too harshly. In practice, this potential abuse of privilege does not appear to be a serious problem. The properly conducted structured walk-through usually creates a sense of trust, and the people being evaluated are reluctant to violate this trust. They recognise that it is a greater crime to distort the walk-through process than to avoid adverse feedback. In addition, in some organisations, the project team being evaluated is not given full latitude in choosing its evaluators. Rather, it is given the opportunity to select its evaluators from a list of candidates who have been pre-approved by the project sponsor or PMO.
- Those being evaluated determine the rules. The people being evaluated continue to control the process by establishing the rules of the game. They identify the evaluation scope and criteria. They send terms of reference to the evaluation team members. They set the agenda for the evaluation. There is the possibility that this rule can be abused. Specifically, those being evaluated could create rules that steer the evaluation team away from problem areas. However, they usually realise that by avoiding problems, they are defeating the purpose of the evaluation. In addition, in some organisations, the rules the team establishes are governed by a set of guidelines provided by the project sponsor or project office.
- Those being evaluated run the evaluation meetings. The evaluation review usually occurs through one or more meetings. This constitutes the actual walk-through. The people being evaluated run the meetings. They determine who talks, when they talk, and how long they talk. Typical concerns are that the people running the meetings are not experienced facilitators and conduct the meetings in an amateurish fashion; that the meetings do not stick to the agenda (technical evaluators, for example, tend to go off on a technical tangent); and that the evaluation meetings can provide a forum for different groups to grind their political axes. Thus, in selecting evaluators, it is important that the people selected are open-minded and willing to cooperate.
These complaints are not directed at the structured walk-through process itself. Rather, they reflect the lack of skill some of us may have in running a meeting.