None of us could have known it at the time, but in retrospect, the year 2000 computer bug scare was just a fire drill. The real test of the technological underpinnings of the U.S. banking system came on September 11, 2001.
You remember the Y2K bug? Banks and other financial services companies spent an estimated $50 billion retrofitting computer programs to accommodate the century date change. Then everyone put their New Year’s Eve plans aside, just in case, and watched anxiously as the clock struck midnight and the year 2000 began. There were no system hiccups; everything worked just the way it was supposed to, if you trusted in technology.
But it wasn’t trust that got banks and other financial services firms through the Y2K computer scare. In the three years leading up to the century date change, financial institutions (with some gentle prodding from regulators) devoted significant resources to disaster recovery and contingency planning. For many of these organizations, the payoff came 21 months after the century date change.
While millions of people watched in horror as a terrorist attack took out a huge swath of New York’s financial district on September 11, the banking system never missed a beat. You might say the technology underpinnings of the system withstood the shock of the disaster without losing a byte.
“I would say the financial sector was better prepared than anyone else in or around the [World] Trade Center for the disaster that unfolded on September 11,” says Leslie M. Muma, president and chief executive officer, Fiserv Inc., a Brookfield, Wisconsin-based firm that provides data and payments processing services to thousands of U.S. financial institutions.
Two of Fiserv’s customer institutions were affected directly by the attacks, according to Muma. (Muma, like others interviewed for this story, declined to name affected institutions. He described them as small credit unions.) The computer systems housed there, destroyed along with the institutions’ offices when the twin towers collapsed, were up and running at contingency locations within about two days, Muma notes.
The New York Clearing House, the nation’s oldest interbank clearinghouse, owned by 11 of the largest banks in the country, managed to successfully clear and settle its various payments workloads, despite the fact that the central telephone switching center near the World Trade Center buildings, taken out by falling rubble, severely hampered data communications between the Clearing House and member institutions. The Clearing House oversees systems, including CHIPS (the international funds transfer system), an automated clearing house network, and a check-clearing system, that process more than $1.4 trillion a day in payments, making it the largest processor of interbank payments outside of the Federal Reserve. Its owner members include J.P. Morgan Chase, Citibank, Bank of America, and Deutsche Bank.
“All of the payment systems operated by the New York Clearing House remained operational,” says Jeffrey P. Neubert, Clearing House president and CEO.
Although several owner-banks sustained damage on September 11, Neubert says that CHIPS was able to settle all transactions that entered the system. Daily settlement routines were delayed only a few hours. ACH exchanges went off without a hitch, as did local check exchanges. Long-distance exchanges of checks, however, were halted for several days when the federal government grounded commercial air flights, he notes.
“It’s a credit to the entire system that each day all the work was processed and cleared, and we were able to close each member’s position each night,” says Neubert.
Clifford A. Wilke, director of technology at the Office of the Comptroller of the Currency (OCC), says banks generally have good plans in place for disaster recovery. As the OCC’s technology watchdog, Wilke deserves some of the credit for this. The OCC, in collaboration with other federal financial regulators, has issued several missives on the topic in the last few years, including warnings to bank executives concerning the need to plan for threats against the nation’s technology infrastructure.
“This has been and will continue to be a major focus for us,” says Wilke.
Wilke says regulators expect bank directors to take an active role in the oversight of an institution’s disaster recovery and contingency planning. “They should be thinking in terms of what needs to be done to maintain the position of trust the bank has with customers,” he counsels.
This is not a one-time exercise, experts note. Contingency planning is an evolving process, demanding continual testing and evaluations.
“The security officer or risk assessment officer should report to the board quarterly or, at the very least, annually,” says Ken Proctor, a senior consultant with Alex Sheshunoff Management Services, Austin, Texas.
“I would ask the board of directors to evaluate the entire business plan. Break it down into key elements or departments, and test and evaluate each,” says Wilke.
Experts say they expect bank directors to be more disposed to the task today than might have been the case a year ago. “Directors certainly have to be more aware of the risks now,” says Muma. “Up until now it has been pretty much a series of ‘what ifs?’.”
“Until there’s an incident like what happened on 9/11, there’s not a lot of attention paid to these things,” remarks Proctor.
Proctor believes banks do a good job of planning for data backup and restoration, but warns that they sometimes neglect the human resources component of contingency planning.
An event like the attack on the World Trade Center drives home the point: What does a bank do if a natural or manmade disaster threatens (or worse, destroys) company offices and personnel?
“Typically, banks don’t have detailed contingency plans for each of the business units,” says Proctor. “It’s important to have some idea of where the business units go and how to connect those business units to the appropriate voice and data networks.”
Wilke says banks also need to consider business succession planning.
It’s not necessary to plan for everything, however. “If you don’t make a loan for 72 hours, it’s not going to be a problem, under the circumstances. But if you don’t process any checks for 72 hours, it’s going to cause some problems,” explains Proctor.
As more banks turn to outside service bureaus for support for technology-rich services (like payments and Internet banking) regulators are warning banks to be vigilant also in evaluating these vendors’ disaster recovery plans. In fact, it is not unusual these days to find bank examiners at bank vendor locations.
“Regardless of whether technology solutions are managed internally or outsourced, the board of directors and management need to understand the inherent risks to the institution and implement appropriate controls,” the OCC warned in an advisory letter to banks last year.
For example, what happens if one of the bank’s vendors (say, an Internet services provider) is destroyed to the point that the bank wants to exit its contract? “Is it part of the disaster planning effort to build into the contract an out in the event the vendor is hacked?” asks David Furnace, a director at Alex Sheshunoff.
“When you think about contingency planning, you need to think of it in terms of an overall program,” explains Wilke. He says directors need to ensure that every aspect of a bank’s contingency planning process is evaluated and enhanced regularly. “It’s about the overall program that’s in place to ensure safety and soundness,” Wilke says.