Guest Blog: Enterprise Software - a broader analysis
This piece was sent to me by my colleague Chris Randle. It is very thoughtful and results from his extensive experience as a CIO for major corporates. I thought it really should be a guest blog rather than being confined to the comments. profserious
I was cheering when I read the title of your blog, "Why is so much enterprise software so bad?". Without question, using enterprise systems is generally a dispiriting and dismaying experience - and implementing large-scale ERP projects is even more so. Worse, this tendency gets no better year after year, in stark contrast to the tools available, often at no cost, to individuals and workgroups, which offer effective, intuitive and elegant solutions that work with a minimum of fuss.
But is the problem as simple as software development companies that cannot deliver quality solutions, kept in place by the herd buying instincts of risk-averse and detached CIOs? I think that is to some extent true, although it is in itself worthy of more discussion. However, the continued and repeated failure of ERP solutions to deliver their expected benefits, and the often poor reputation of the IT community as a result, demands a broader analysis.
Is Package Software code unreliable and buggy?
In my experience this is perhaps the least of our problems - at least in the narrow sense of a program simply failing to perform a task correctly or falling over completely. Very often problems with a specific task are the result of combinations of parameter and data settings which are permitted by the package but not anticipated by the developers.
With a large ERP package the number of permutations of possible parameter settings quickly makes a deterministic appraisal of their consequences so difficult that it is rarely undertaken. Large-scale testing, in situ, as part of the implementation project is the common way to deal with this. However testing, especially in the latter stages, tends to focus on capacity and performance, not on data events outside normal ranges.
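The scale of the problem, and one common mitigation, can be sketched in a few lines of Python. The parameter names and values below are entirely hypothetical; the point is that even four parameters with three settings each produce 81 permutations, while a greedy "all-pairs" reduction - a standard combinatorial-testing technique, not something any particular ERP vendor ships - covers every pairwise interaction with far fewer test cases:

```python
from itertools import combinations, product

# Hypothetical ERP parameter space; names and values are illustrative only.
params = {
    "currency_rounding": ["half_up", "banker", "truncate"],
    "tax_regime": ["standard", "reduced", "exempt"],
    "posting_period": ["open", "closed", "adjustment"],
    "approval_flow": ["none", "single", "dual"],
}
names = list(params)

# Exhaustive testing: every permutation of settings.
exhaustive = list(product(*params.values()))
print(len(exhaustive))  # 3**4 = 81 cases for just four parameters

# Every pairwise (parameter, value) interaction that must be exercised.
needed = set()
for i, j in combinations(range(len(names)), 2):
    for va in params[names[i]]:
        for vb in params[names[j]]:
            needed.add((i, va, j, vb))

# Greedy all-pairs reduction: keep a combination only if it covers
# at least one parameter-pair interaction not yet seen.
suite, remaining = [], set(needed)
for combo in exhaustive:
    covered = {(i, combo[i], j, combo[j])
               for i, j in combinations(range(len(names)), 2)}
    new = covered & remaining
    if new:
        suite.append(combo)
        remaining -= new

print(len(suite))  # far fewer cases, yet every pairwise interaction covered
```

With dozens of parameters, each with dozens of values, the exhaustive number explodes, which is why in-situ testing samples rather than proves.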
Part of the problem here, as has been stated, is the degree of flexibility offered, for the most part, in back-office areas that afford little obvious competitive advantage. If processes and data were more uniform - and the package defined the solution - the problem would be far smaller; yet that seems a rather utopian answer.

A good (fictitious) example of the kind of issue might be UCL's own accounts payable and purchase ordering. Much of this is bog-standard and should be treated as such. Fridge management (rotation and stock reordering) of perishable biological items needs to be handled as part of the implementation. It is not a standard function of the modules being implemented but is supported in another module of the system (say warehouse management). There appear to be two or three different ways of implementing a solution. The project evaluates the options and decides against a full additional module implementation, as it was not budgeted and is felt to be overkill for the problem. A workaround is therefore found which accesses the necessary routines while not using the route envisaged by the developer, and deliberately bypasses some of the parameters normally used in that routine.
This is tested and implemented successfully but fails in a subsequent release of the product when the workaround is invalidated.
Arising from this example, some actions could be taken:
Software vendors could take a reference copy of parameter settings, code modifications and datasets during a project - and warrant their software to operate correctly against these. It is interesting to note that, almost without exception, software is not warranted against defects except for an initial post-implementation period. By maintaining a change process in which changes to the initial set-up were reconfirmed by the vendor, and in which all new releases were tested against it, a much more rigorous relationship could be put in place. There would doubtless be a cost to this, but the savings gained across even a modest customer base should easily outweigh the additional charges a vendor may make. It would also, to some extent, make developers consider these aspects before they start programming rather than afterwards.
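As a minimal sketch of what such a reference copy might look like in practice - all the setting names below are invented for illustration - the parameter set could be fingerprinted at go-live, and any drift from the warranted baseline detected before a new release is accepted:

```python
import hashlib
import json

def settings_fingerprint(settings: dict) -> str:
    """Canonical hash of an implementation's parameter settings."""
    canonical = json.dumps(settings, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical reference copy taken at the end of the project.
baseline = {"ap.match_mode": "3way", "fridge.reorder": "workaround_v1"}
fingerprint = settings_fingerprint(baseline)

# Later, before accepting a new release, confirm the live settings still
# match the warranted reference copy.
live = dict(baseline)
live["ap.match_mode"] = "2way"   # an unreconfirmed local change

changed = settings_fingerprint(live) != fingerprint
drift = sorted(k for k in baseline if baseline[k] != live.get(k))
print(changed, drift)  # True ['ap.match_mode']
```

The fingerprint gives both sides a cheap, unambiguous definition of "the configuration we warranted", which is the precondition for the vendor-reconfirmed change process suggested above.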
The resistance to changes in process and the desire to automate everything has to be challenged at every step of the project, but especially at the outset. Whilst the CIO has a part to play in this, it is crucial that the line management of the functions are completely on board and strongly supportive of such an approach.
Is Package Software unstable, unreliable and poorly performing?
On a reference implementation, no - and yet most IT departments struggle to maintain a high level of service on large enterprise package systems. Why is this?
Echoing the flexibility issue above, vendors see a commercial advantage in offering their product on the widest possible range of platforms, operating systems and configurations. Capacity planning data to ensure reliable and performant processing tends to be a black art, with little in the way of guarantees. Again, vendors should be encouraged to warrant their product on a specific set of platforms and throughputs, and to carry out load testing as part of their release cycle.
Package upgrades tend to be the most troublesome element of an ERP after the initial implementation. They tend to flush out latent issues with any localisation of the package, or exploitations of features not intentionally present. The only real answer is full-scale retesting of the system in situ. This tends to be problematic:
End users tend not to have the time to conduct full scale retesting
Most enterprises do not have a complete mirror of their production environments (including all the upstream and downstream systems) for testing purposes
Windows of availability in the business cycle for testing and implementation tend to be few and crowded
Automated testing would seem to answer many of these issues, but the products on the market today are complex and difficult to set up and maintain.
Perhaps application systems should record every single keystroke and state change in the package. This would be a huge benefit in problem diagnosis and in regression testing of upgrades. It would also, in extremis, be a way of forward-recovering in the event of a disaster.
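What is being suggested here is essentially an append-only event log with deterministic replay. A toy sketch in Python (the record shape and actions are assumptions, not any vendor's API) shows why the same log serves diagnosis, regression testing and forward recovery: replaying the events reconstructs the state at any point:

```python
import time

class EventLog:
    """Append-only record of every state change, replayable at will."""

    def __init__(self):
        self.events = []

    def record(self, user, action, payload):
        # Each entry is timestamped and never mutated afterwards.
        self.events.append({"ts": time.time(), "user": user,
                            "action": action, "payload": payload})

    def replay(self, state=None):
        # Rebuild the application state by re-applying every event in order.
        state = {} if state is None else dict(state)
        for e in self.events:
            if e["action"] == "set":
                state[e["payload"]["key"]] = e["payload"]["value"]
            elif e["action"] == "delete":
                state.pop(e["payload"]["key"], None)
        return state

log = EventLog()
log.record("pa1", "set", {"key": "invoice_42.status", "value": "approved"})
log.record("pa1", "set", {"key": "invoice_42.amount", "value": 120})
log.record("pa1", "delete", {"key": "invoice_42.amount"})
print(log.replay())  # {'invoice_42.status': 'approved'}
```

For regression testing an upgrade, the same captured log would be replayed against the new release and the resulting states compared; for disaster recovery, replay from the last good backup forward-recovers the lost transactions.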
Are package systems difficult to use, and so much worse than web apps ?
In a word, yes - certainly compared with the UI-driven systems encountered on the web. There is an area of irritation here that could, and should, be removed by the vendors. Interestingly - and close to home - I talked to some of the PAs following a much-reviled implementation of an expenses management system. They found it helpful and logical, and they were happy using the system - it saved them time. However they typically used the system frequently and had quickly forgotten any initial difficulties in becoming acquainted with it. The infrequent users, on the other hand, found it baffling, illogical and time-wasting.
There is a certain dogma (and perhaps suspension of disbelief) in process-reengineering projects that says it is always better to move data entry to the source, ie the end user who raises a requisition, travel request or expense claim. However this puts extreme emphasis on the ease of use of the system, which is often lacking. It also leads to a rather segmented user base, with more senior (and perhaps most influential) members delegating tasks to assistants, while more junior members have to use the systems themselves. It may be more cost-effective to recognise these limitations and, when reengineering, not to remove whole administrative functions but rather to make them more efficient.
There is nothing inherent in web-apps that makes them better - there are plenty that are awful - but ease of transference, no compulsion to use and low barriers to entry mean that the laws of natural selection apply.
What are the prospects ?
From a software engineering point of view, a package developed without any nod to backwards compatibility and embracing modern, well-accepted design principles would be a major leap forward. Microsoft, late into the game with its Dynamics product, has to some extent managed this, and it is certainly much better received by end users. Inevitably, though, anyone wanting to break into the very large enterprise solutions market needs to offer the same variety and flexibility (unless enterprises are willing to change), and so may well end up with the same kinds of problems that this brings.
The blurred responsibilities and conflicting priorities between software vendor, local IT team and business management in the package-software ecosystem, however, tend to make any kind of meaningful improvement look quite remote.
The way forward ?
After many false dawns, software delivered as a service, whether over a public or private cloud, seems finally to offer a solution to a great many of the problems described above.
Whereas with a locally deployed package there is little contractual clarity over quality and performance, in a SaaS arrangement these become the sole focus of the relationship between the vendor and the enterprise. Crucially, it puts the vendor much more in control of both the software and the infrastructure environment. With the economies of scale generated, the vendor is far more likely to be able to create a highly robust and efficient hardware environment. It would also, with suitable safeguards, have access to the settings and data being processed in each customer environment, making problem avoidance and resolution orders of magnitude better than in a conventional "in-house" deployment.
This, coupled with the ability to write - and have the vendor certify - a satellite system of add-ons and front-ending software with its own market dynamics, would be in many respects an ideal solution, and would parallel the successful on-line services already developing.
Over a five-year period I would expect the costs of a SaaS solution versus in-house to be roughly equivalent. Some studies show the latter to be more cost-effective after that, but they typically ignore secondary costs (eg the datacentre) and the inevitable need for at least one major upgrade during the period.
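A back-of-the-envelope calculation makes the shape of the comparison concrete. Every figure below is an assumption invented purely to illustrate the arithmetic - real numbers vary enormously by enterprise - but it shows how adding the commonly ignored datacentre and upgrade costs narrows the gap:

```python
# Illustrative five-year cost comparison; all figures are assumptions.
years = 5

# SaaS: subscription assumed to cover hosting and upgrades.
saas_annual = 200_000
saas_total = saas_annual * years

# In-house: licence plus running costs that studies often omit.
inhouse_licence = 350_000        # one-off licence fee
inhouse_annual_support = 70_000  # vendor maintenance, per year
inhouse_datacentre = 40_000      # secondary datacentre cost, per year
inhouse_major_upgrade = 150_000  # at least one major upgrade in the period

inhouse_total = (inhouse_licence
                 + years * (inhouse_annual_support + inhouse_datacentre)
                 + inhouse_major_upgrade)

print(saas_total, inhouse_total)  # 1000000 1050000, roughly equivalent
```

Drop the datacentre and upgrade lines, as many comparisons do, and the in-house figure falls to 700,000 - which is exactly how the in-house option is made to look more cost-effective.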
Perhaps Academia could set a lead in rigorously specifying a set of design principles and standards for package solutions covering, eg:
Usability
Extensibility
Ease of maintenance
Testability
Robustness
The current market could then be rated against these. With sufficient care this might give hard-pressed selectors a more structured means of rating packages, and give vendors and new entrants a target to work towards?
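Such a rating scheme could be as simple as a weighted scorecard over the criteria listed above. The weights, package names and scores below are all hypothetical, chosen only to show the mechanism:

```python
# Hypothetical weights over the criteria listed above (must sum to 1).
criteria = {"usability": 0.30, "extensibility": 0.20,
            "ease_of_maintenance": 0.20, "testability": 0.15,
            "robustness": 0.15}

# Illustrative 1-5 ratings for two fictitious packages.
ratings = {
    "PackageA": {"usability": 2, "extensibility": 5,
                 "ease_of_maintenance": 3, "testability": 2, "robustness": 4},
    "PackageB": {"usability": 4, "extensibility": 3,
                 "ease_of_maintenance": 4, "testability": 4, "robustness": 4},
}

def score(pkg: str) -> float:
    """Weighted sum of a package's ratings across all criteria."""
    return sum(criteria[c] * ratings[pkg][c] for c in criteria)

for pkg in sorted(ratings, key=score, reverse=True):
    print(pkg, round(score(pkg), 2))
```

The value is less in the arithmetic than in forcing the weights into the open: a selector who cares most about usability, and a vendor chasing extensibility, can at least see that they are optimising for different things.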