Lessons from the Millennium Bug
The Millennium Bug, or Y2K issue, arose from fears that computers would fail at midnight on 1 January 2000. The concern was the formatting and storage of dates from the year 2000 onwards: many computer programs stored four-digit years using only the final two digits, making the year 2000 indistinguishable from 1900.
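To make the two-digit ambiguity concrete, the following Python sketch (illustrative only; the function names are invented for this example) shows how a naive 1900-based decoding turns the year 2000 into 1900, and how the widely used "windowing" remediation interpreted two-digit years relative to a pivot:

```python
def year_from_two_digits(yy: str) -> int:
    """Naive 1900-based decoding, as many legacy systems stored years."""
    return 1900 + int(yy)

# "99" decodes correctly, but "00" collapses to 1900 rather than 2000:
assert year_from_two_digits("99") == 1999
assert year_from_two_digits("00") == 1900  # the Y2K defect

# Derived quantities such as ages or interest periods then go negative:
opened = year_from_two_digits("85")                # 1985
assert year_from_two_digits("00") - opened == -85  # should have been 15

def windowed_year(yy: str, pivot: int = 50) -> int:
    """Windowing fix: years below the pivot map to the 2000s,
    the rest to the 1900s."""
    y = int(yy)
    return 2000 + y if y < pivot else 1900 + y

assert windowed_year("00") == 2000
assert windowed_year("85") == 1985
```

Note that windowing deferred rather than eliminated the ambiguity: the pivot simply moves the point at which two-digit years become indistinguishable again.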
This paper explains why Y2K was potentially so serious and how the risks were addressed by co-ordinated and timely action to mitigate them. It also points to important lessons from that experience to ensure future resilience.
The principal lessons of the Y2K experience that are relevant for the future are:
- Go beyond testing. Even though testing is still the primary way in which programmers ensure that software is fit for purpose, testing can never find all the errors in IT systems. It is necessary to move away from the idea that cost and time-to-market take priority over modularity, robustness, security, ease of modification and other software engineering principles.
- Reduce cascading effects. Shared points of failure are still introduced without considering the possible consequences of cascading effects. Elevating potential risks to Board level, as auditors did for Y2K, can help with prioritisation and resourcing.
- Restore supply-chain redundancy. In the interests of efficiency, supply chains have become far more tightly coupled and redundancy has been removed with little thought about the impact on resilience, again making cascade failures more likely. This needs to be addressed when introducing major IT systems.
- Better regulation. There is no government appetite for using regulation to encourage private companies to write better software. Since the 1980s, industry has successfully argued that software is not a product but a service and that it should therefore be exempt from product safety and consumer protection laws. Such exemption should be reviewed.
The full report can be read below or downloaded here: NPC-LessonsFromTheMillenniumBug-Feb21