Remember 15-20 years ago (yes, this flashback is more for the PMs who have been doing this for a while) when the choices for technology were few and far between?  It was pretty clear when the project was conceived what technology would be used and how the project would be implemented. Requirements didn’t really depend on the what and the how…just really the what.  There was less chance for things to go wrong – simply because there were fewer decisions that needed to be made.  My project teams were coding in COBOL on a mainframe when I first started managing projects.  All we really cared about was what the software needed to do – what outcome was needed.  The question of how to get to that outcome was already answered. The technology was known…there really wasn’t a choice.

Project Technology

Flash forward from the late ’80s to 2012. Now we have to consider cloud-based applications versus desktop apps.  We have many different database options and third-party reporting tools.  We even have many choices of tools with which to manage our projects.  We also have nice add-on tools like Seavus’ Project Viewer that allow project managers to share MS Project schedules with customers and team members who aren’t using MS Project.

From my perspective, what all this choice has done is increase the number of variables that sneak in and put our projects at risk.  It has extended our planning time and forced us to question the chosen technology on nearly every project, adding dollars and time to the requirements phase.  And it has definitely added risk to the whole project that didn’t previously exist, simply because there are more choices – and with each potential choice comes the chance that you didn’t choose correctly.