Successful Project Completion - Part Two
29 January 2020 by Chris Woodhams
Just as a recap, I started my data acquisition for insight into successfully completing projects back in October 2019 at an MLUG meeting. I followed that up with a post covering my findings on the design element of project delivery. To reiterate, these are the elements of project delivery I have defined.
The aim of this post is to summarise the discussion from the October MLUG meeting on the Risk and Development elements of a project. There will be follow-up blog posts on the other elements as I acquire more data.
Risk varies in probability and severity, but it needs to be managed as part of any project. Within software projects, especially those interacting with hardware, it is important to identify risks and manage them, whether by prioritising high-risk tasks or by considering in advance how to resolve certain risks should they arise.
This topic was covered relatively briefly in the discussion, so I will try to go into more detail when I present this again. We covered risk registers, a common way to manage risks within a project, but what do they do and how do they help? They enable you to consider and document risk within a project; having to write something down always makes you consider it more thoroughly. Typically a register also records what action should be taken if the risk arises.
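To make the idea concrete, here is a minimal sketch of a risk register in Python (used here purely for illustration, since LabVIEW is graphical). The field names, scoring scale, and example risks are all my own assumptions, not from any specific tool:

```python
from dataclasses import dataclass

# Hypothetical register entry; fields and 1-5 scales are illustrative.
@dataclass
class Risk:
    description: str
    probability: int   # 1 (rare) to 5 (almost certain)
    severity: int      # 1 (negligible) to 5 (critical)
    mitigation: str    # action to take if the risk arises

    @property
    def score(self) -> int:
        # Simple probability x severity ranking
        return self.probability * self.severity

register = [
    Risk("Hardware delivery slips", 3, 4, "Order long-lead items early"),
    Risk("Driver API changes", 2, 3, "Pin the driver version; retest on upgrade"),
]

# Review the highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}: {risk.mitigation}")
```

Even a plain spreadsheet gives you the same benefit; the point is that writing the mitigation down forces you to think it through before the risk materialises.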
There was some discussion around Failure Mode and Effects Analysis (FMEA). This form of risk assessment is more formal and includes the ability to score risks according to a number of factors.
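In classic FMEA those factors are typically severity, occurrence, and detection, each scored 1-10 and multiplied into a Risk Priority Number (RPN). A small sketch, with purely illustrative scores:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: each factor is scored 1-10 in classic FMEA.
    A higher detection score means the failure is harder to detect."""
    return severity * occurrence * detection

# Example: severe but infrequent failure that is moderately hard to spot
print(rpn(severity=8, occurrence=3, detection=6))  # 144
```

Failure modes with the highest RPNs are then addressed first, which gives the prioritisation that an informal risk register lacks.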
Development is where you put the meat on the bones; we talked about tracking the progress of development and how testing is incorporated.
We went straight into discussing tools that people had used successfully to manage development tasks. Team Foundation Server (TFS) was used in one instance to log the user stories related to project development. Each story can be allocated a status to show how far it has progressed, and commits can be linked to a story, which helps to identify the changes associated with it.
Redmine was discussed as a really good tool for tracking time against tasks, with the ability to enter an estimated time and record the actual time against each task. JIRA was also mentioned in this context, as it has a feature where you can start and stop a stopwatch as you begin and finish working on a task. The summary was that whichever tool you use, it is about integrating it into your development process so that it becomes second nature; you almost don't know you are doing it. The captured time data was being used to understand whether projects are on schedule, and the detail of which tasks are running over can then be analysed to determine whether additional resources are required.
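The estimate-versus-actual analysis described above is simple once the data is captured. A sketch in Python, with hypothetical task records standing in for a tool export:

```python
# Hypothetical per-task records; 'estimate' and 'actual' are in hours.
tasks = [
    {"name": "Parse config file", "estimate": 4, "actual": 3.5},
    {"name": "Serial driver", "estimate": 8, "actual": 13.0},
    {"name": "UI layout", "estimate": 6, "actual": 6.5},
]

# Flag tasks that have run over their estimate
overruns = [t for t in tasks if t["actual"] > t["estimate"]]
total_slip = sum(t["actual"] - t["estimate"] for t in overruns)

for t in overruns:
    print(f"{t['name']}: {t['actual'] - t['estimate']:+.1f} h over estimate")
print(f"Total overrun: {total_slip:.1f} h")
```

A report like this, run regularly, is what turns second-nature time logging into an early warning that the schedule needs attention.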
We finally touched on an interesting concept called the automated testing triangle, which lays out the split of different types of tests within a project. This article (it refers to text-based languages, but the theory carries across to LabVIEW projects) gives a really good breakdown of the four layers of tests, with unit tests at the bottom and manual tests at the top of the triangle. The manual tests section of the triangle should be really small, meaning you don't have many of them to perform. This makes an application easier to maintain, requiring less effort to test future changes. If the manual test section of the triangle is large, every future change will mean a time-consuming process to ensure it is ready for release.
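For readers unfamiliar with the base of the triangle, a unit test is just a small, fast, automated check of one piece of logic in isolation. A sketch in Python (the function and values are made up for illustration; in LabVIEW the equivalent would be a unit-test framework VI):

```python
def scale_reading(raw: int, gain: float = 0.25) -> float:
    """Convert a raw ADC count to engineering units.
    Hypothetical example; gain value is illustrative."""
    return raw * gain

def test_scale_reading():
    # Checks that run in milliseconds, so they can run on every change
    assert scale_reading(0) == 0.0
    assert scale_reading(10) == 2.5

test_scale_reading()
print("unit tests passed")
```

Because tests like this cost almost nothing to run, you can afford hundreds of them, which is exactly why they form the wide base of the triangle while slow manual tests stay at the narrow top.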
Keep an eye out for a follow-up blog after the next MLUG, where I will be discussing the last four elements of project delivery. Also, if you have some input or feedback, please get in touch!