Why Asset Size Does Not Matter To Regulators In ERM


Conventional wisdom in banking has been that asset size matters in terms of regulatory expectations around enterprise risk management (ERM).

But that traditional school of thought might be changing. A new question has emerged: is it the institution’s asset size that matters, or is the complexity of the risk profile more important?

A common question among peer roundtables: what is a bank expected to do for ERM as it approaches the $10 billion asset threshold of a regional banking organization (RBO)? The Federal Reserve considers an RBO to have total consolidated assets between $10 billion and $50 billion.

The next question typically is whether regulatory expectations around Comprehensive Capital Analysis and Review (CCAR) or Dodd-Frank Act Stress Test (DFAST) requirements have lessened because of recent reforms in Congress.

These are hot topics, especially for banks below the $10 billion asset threshold, known to the Fed as community bank organizations (CBOs), because the cost of ERM implementation remains high.

Specific to CBOs between $2 billion and $5 billion in assets, regulatory agencies have been providing more prescriptive guidance and recommendations to upgrade and enhance ERM and model risk management frameworks consistent with existing regulatory guidance aimed at RBOs.

Examinations are more detailed, covering policies and procedures, personnel, risk appetite, risk assessment activities and board reporting. Examiners are pushing smaller banks to recognize the ERM value proposition because a keen risk awareness will inspire more informed decisions.

An effective ERM program starts with the risk culture necessary for appropriate governance of policies and procedures, risk awareness training, tone from the top and credible challenge. The culture should start with the CEO and the board establishing a proactive risk strategy and aligning the risk appetite of the bank with strategic planning.

Implementing an effective risk management program means understanding your bank’s risk profile and addressing matters proactively, with the discipline to identify emerging risks and mitigate them before a risk event or loss occurs.

As banks approach $10 billion in assets, they are expected to increase the rigor around risk identification and assess risks for their likelihood and impact before identifying risk-mitigating controls.
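
To illustrate, the short sketch below scores a hypothetical risk register entry by likelihood and impact and then nets the effect of mitigating controls. The 1-to-5 scales, example risks and control-effectiveness factor are illustrative assumptions, not a regulatory formula.

```python
# Illustrative only: a minimal risk-register scoring sketch, assuming a
# hypothetical 1-5 scale for likelihood and impact and a simple
# inherent-vs-residual view once mitigating controls are considered.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int               # 1 (remote) to 5 (almost certain)
    impact: int                   # 1 (minimal) to 5 (severe)
    control_effectiveness: float  # 0.0 (no mitigation) to 1.0 (fully mitigated)

    @property
    def inherent_score(self) -> int:
        # Inherent risk before controls: likelihood x impact
        return self.likelihood * self.impact

    @property
    def residual_score(self) -> float:
        # Residual risk after applying the identified controls
        return self.inherent_score * (1.0 - self.control_effectiveness)

register = [
    Risk("Third-party vendor outage", likelihood=3, impact=4, control_effectiveness=0.5),
    Risk("CRE concentration limit breach", likelihood=2, impact=5, control_effectiveness=0.4),
]

# Rank risks for board reporting, highest residual exposure first
for r in sorted(register, key=lambda r: r.residual_score, reverse=True):
    print(f"{r.name}: inherent={r.inherent_score}, residual={r.residual_score:.1f}")
```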

A CBO should have a champion who effects change strategically throughout the organization, rather than treating ERM as a regulatory or audit check-the-box exercise. The risk management champion can be compared to an orchestra conductor, who does not need to do everyone else’s job but should be able to hear when someone is out of tune. Breaking down silos is key because risk management should be a continuous, collaborative process involving all stakeholders.

Regulatory expectations are converging as examiners push smaller banks to show a safe and sound risk management framework. This should encompass a separate board risk committee, or, at a minimum, a subcommittee responsible for ERM.

All banks have traditionally been expected to maintain appropriate risk management processes commensurate with their size and complexity and operate in a safe and sound manner.

The level of formality and documentation now required is a new, evolving trend. Board and senior management oversight is important, as is risk monitoring and information system reporting. Board support is critical to understand risk areas, develop training programs and establish accountability among leadership and risk management team members.

Regulatory scrutiny for banks below $10 billion in assets has increased for ERM sub-processes, including model risk management, new products and services, and third-party risk management.

We live in a post-CCAR world trending toward deregulation; however, the regulatory burden of risk management expectations for the smaller CBOs is increasing. Essentially, asset size does not matter anymore.

Well Conceived and Executed Bank Acquisitions Drive Shareholder Value


Recent takeovers among U.S.-based banks generally have resulted in above-market returns for acquiring banks, compared to their non-acquiring peers, according to KPMG research. This finding held true for all banks analyzed except those with greater than $10 billion in assets, for which findings were not statistically significant.

Our analysis focused on 394 U.S.-domiciled bank transactions announced between January 2012 and October 2016. Our study focused on whole-bank acquisitions and excluded thrifts, acquisitions of failed banks and government-assisted transactions. The analysis yielded the following conclusions:

  • The market rewards banks for conducting successful acquisitions, as evidenced by higher market valuations post-announcement.
  • Acquiring banks’ outperformance, where observable, increased linearly throughout our measurement period, from 90 days post-announcement to two years post-announcement.
  • The positive effect was experienced throughout the date range examined.
  • Banks with less than $10 billion in assets experienced a positive market reaction.
  • Among banks with more than $10 billion in assets, acquirers did not demonstrate statistically significant differences in market returns when compared to banks that did not conduct an acquisition.

Factors Driving Value

Bank size. Acquiring banks with total assets of between $5 billion and $10 billion at the time of announcement performed the strongest in comparison with their peers during the period observed. Acquiring banks in this asset range outperformed their non-acquiring peers by 15 percentage points at two years after the transaction announcement date, representing the best improvement when compared to peers of any asset grouping and at any of the timeframes measured post-announcement.

[Figure: Performance chart]
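
For readers who want to see the mechanics behind a peer comparison like this, the sketch below shows one plausible way to measure an acquirer’s cumulative excess return against a non-acquiring peer group at a fixed horizon after announcement. The peer-median benchmark, horizon and function names are assumptions for illustration and are not KPMG’s published methodology.

```python
# Illustrative only: one plausible way to measure an acquirer's cumulative
# excess return versus a non-acquiring peer group at a fixed horizon after
# a deal announcement. The peer-median benchmark and horizon are assumptions.
import pandas as pd

def excess_return(acquirer_prices: pd.Series,
                  peer_prices: pd.DataFrame,
                  announce_date: str,
                  horizon_days: int) -> float:
    """Cumulative return of the acquirer minus the median peer return,
    measured from the announcement date to announcement + horizon."""
    start = pd.Timestamp(announce_date)
    end = start + pd.Timedelta(days=horizon_days)
    acq = acquirer_prices.loc[start:end]
    peers = peer_prices.loc[start:end]
    acq_ret = acq.iloc[-1] / acq.iloc[0] - 1.0
    peer_ret = (peers.iloc[-1] / peers.iloc[0] - 1.0).median()
    return acq_ret - peer_ret

# Usage (with hypothetical daily price data indexed by date):
# excess_return(acq_px, peer_px, "2014-06-30", horizon_days=730)
```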

Acquisitions by banks in the $5 billion to $10 billion asset range tend to result in customer expansion within the acquirer’s market or a contiguous market, without significant increases in operational costs.

We believe this finding is a significant factor driving the value of these acquisitions. Furthermore, banks that acquire and remain in the $10 billion or less asset category do not bear the expense burden associated with Dodd-Frank Act stress testing (DFAST) compliance.

Conversely, banks with nearly $10 billion in assets may decide to exceed the regulatory threshold “with a bang” in anticipation that the increased scale of a larger acquisition may serve to partially offset the higher DFAST compliance costs.

The smaller acquiring banks in our study—less than $1 billion in assets and $1 billion to $5 billion in assets—also outperformed their peers in all periods post-transaction (where statistically meaningful). Banks in these asset ranges benefited from some of the same advantages mentioned above, although they may not have received the benefits of scale and product diversification of larger banks.

As mentioned earlier, acquirers with greater than $10 billion in assets did not yield statistically meaningful results in terms of performance against peers. We believe acquisitions by larger banks were less accretive due to the relatively smaller target size, resulting in a less significant impact.

Additionally, we find that larger bank transactions can be complicated by a number of other factors. Larger banks typically have a more diverse product set, client base and geography than their smaller peers, requiring greater sophistication during due diligence. There is no substitute for thorough planning, detailed due diligence and an early and organized integration approach to mitigate the risks of a transaction. Furthermore, alignment of overall business strategy with a bank’s M&A strategy is a critical first step to executing a successful acquisition (or divestiture, for that matter).

Time since acquisition. All three acquirer groups that yielded statistically significant results demonstrated a trend of increasing returns as time elapsed from transaction announcement date. The increase in acquirers’ values compared to their peers, from the deal announcement date until two years after announcement, suggests that increases in profitability from income uplift, cost reduction and market expansion become even more accretive with time.

Positive performance pre-deal may predict future success. Our research revealed a positive correlation between the acquirer’s history of profitability and excess performance against peers post-acquisition. We noted this trend in banks with assets of less than $1 billion, and between $1 billion and $5 billion, at the time of announcement.

This correlation suggests that banks that were more profitable before a deal were increasingly likely to achieve incremental shareholder value through an acquisition.

Bank executives should feel comfortable pursuing deals knowing that the current marketplace rewards M&A in this sector. However, our experience indicates that in order to be successful, acquirers should approach transactions with a thoughtful alignment of M&A strategy with business strategy, an organized and vigilant approach to due diligence and integration, and trusted advisers to complement internal teams and ensure seamless transaction execution.

Using Big Data to Assess Credit Quality for CECL


The new Financial Accounting Standards Board (FASB) rules for estimating expected credit losses present banks with a wide variety of challenges as they work toward compliance.

New Calculation Methods Require New Data
The new FASB standard replaces the incurred loss model for estimating credit losses with the new current expected credit loss (CECL) model. Although the new model will apply to many types of financial assets that are measured at amortized cost, the largest impact for many lenders will be on the allowance for loan and lease losses (ALLL).

Under the CECL model, reporting organizations will make adjustments to their historical loss picture to highlight differences between the risk characteristics of their current portfolio and the risk characteristics of the assets on which their historical losses are based. The information considered includes prior portfolio composition, past events that affected historical losses, management’s assessment of current conditions and current portfolio composition, and forecast information that the FASB describes as reasonable and supportable.

To develop and support the expected credit losses and any adjustments to historical loss data, banks will need to access a wider array of more forward-looking data than the simpler incurred loss model required.
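
As a simplified illustration of those adjustment mechanics, the sketch below applies a historical loss rate to each pool and layers on adjustments for current conditions and forecasts. The pool names, balances, rates and adjustment factors are hypothetical, and actual CECL methodologies (vintage, PD/LGD, discounted cash flow and others) differ in detail.

```python
# Illustrative only: a simplified pool-level loss-rate calculation in the
# spirit of CECL. Historical loss rates are adjusted for differences in
# current conditions and for reasonable and supportable forecasts.
# All pools, balances, rates and adjustment factors below are hypothetical.
pools = {
    #  pool name            amortized     historical   current-conditions   forecast
    #                       cost basis    loss rate    adjustment           adjustment
    "CRE - owner occupied": (120_000_000, 0.0085,      +0.0010,             +0.0015),
    "C&I":                  ( 95_000_000, 0.0120,      -0.0005,             +0.0020),
    "1-4 family mortgage":  ( 60_000_000, 0.0030,       0.0000,             +0.0005),
}

allowance = 0.0
for name, (balance, hist_rate, q_adj, fcst_adj) in pools.items():
    expected_rate = hist_rate + q_adj + fcst_adj   # adjusted lifetime loss rate
    expected_loss = balance * expected_rate
    allowance += expected_loss
    print(f"{name}: adjusted rate {expected_rate:.2%}, expected loss ${expected_loss:,.0f}")

print(f"Total estimated allowance for credit losses: ${allowance:,.0f}")
```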

Internal Data Inventory: The Clock is Running
Although most of the data needed to perform these various pooling, disclosure and expected credit loss calculations can be found somewhere, in some form, within most banks’ systems, these disparate systems generally are not well integrated. In addition, many data points such as customer financial ratios and other credit loss characteristics are regularly updated and replaced, which can make it impossible to track the historical data needed for determining trends and calculating adjustments. Other customer-specific credit loss characteristics that may be used in loan origination today might not be updated to enable use in expected credit loss models in the future.

Regardless of the specific deadlines that apply to each type of entity, all organizations should start capturing and retaining certain types of financial asset and credit data. These data fields must be captured and maintained permanently over the life of each asset in order to enable appropriate pooling and disclosure and to establish the historical performance trends and loss patterns that will be needed to perform the new expected loss calculations. Internal data elements should focus on risks identified in the portfolio and modeling techniques the organization finds best suited for measuring the risks.
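
As a rough illustration of the kind of loan-level record involved, the sketch below defines a hypothetical snapshot structure that is retained for every reporting period rather than overwritten. The field list is illustrative only, not an authoritative CECL data dictionary.

```python
# Illustrative only: a hypothetical loan-level snapshot record showing the
# kinds of fields a bank might capture and retain over the life of each
# asset to support pooling, disclosure and expected-loss modeling.
from dataclasses import dataclass
from datetime import date

@dataclass
class LoanSnapshot:
    loan_id: str
    as_of_date: date           # snapshot date; retained for every period, not overwritten
    origination_date: date
    maturity_date: date
    product_type: str          # e.g., "CRE", "C&I", "1-4 family"
    amortized_cost: float
    risk_rating: str
    collateral_value: float
    ltv: float                 # loan-to-value at the snapshot date
    dscr: float                # debt-service coverage ratio
    days_past_due: int
    charge_off_amount: float
    recovery_amount: float
```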

External Economic Data
In addition to locating, capturing, and retaining internal loan portfolio data, banks also must make adjustments to reflect how current conditions and reasonable and supportable forecasts differ from the conditions that existed when the historical loss information was evaluated.

A variety of external macroeconomic conditions can affect expected portfolio performance. Although a few of the largest national banking organizations engage in sophisticated economic forecasting, the vast majority of banks will need to access reliable information from external sources that meet the definition of “reasonable and supportable.”

A good place to start is by reviewing the baseline domestic macroeconomic variables provided by the Office of the Comptroller of the Currency (OCC) for Comprehensive Capital Analysis and Review (CCAR) and Dodd-Frank stress testing (DFAST) purposes. Because regulators use these variables to develop economic scenarios, these variables would seem to provide a reasonable starting point for obtaining potentially relevant historic economic variables and considerations from the regulatory perspective of baseline future economic conditions.

Broad national metrics—such as disposable income growth, unemployment, and housing prices—need to be augmented by comparable local and regional indexes. Data from sources such as the Federal Deposit Insurance Corporation’s quarterly Consolidated Report of Condition and Income (otherwise known as the call report) and Federal Reserve Economic Data (FRED), maintained by the Federal Reserve Bank of St. Louis, also can be useful.
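
As an illustration, the sketch below pulls a few national series from FRED using the pandas_datareader library. The series codes shown are common examples and should be verified against FRED; regional or metro-level series would be retrieved the same way.

```python
# Illustrative only: retrieving a few national macroeconomic series from
# FRED with pandas_datareader. The series codes (unemployment rate, real
# disposable personal income, Case-Shiller home prices) are examples and
# should be confirmed on the FRED site before use.
import datetime
from pandas_datareader import data as pdr

start = datetime.datetime(2000, 1, 1)
end = datetime.datetime.today()

series = {
    "UNRATE": "Civilian unemployment rate",
    "DSPIC96": "Real disposable personal income",
    "CSUSHPINSA": "Case-Shiller U.S. national home price index",
}

macro = pdr.DataReader(list(series.keys()), "fred", start, end)
print(macro.tail())
```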

Data List for CECL Compliance

[Figure: Critical internal data elements]

Looking Beyond Compliance
The new FASB reporting standard for credit losses will require banks to present expected losses in a timelier manner, which in turn will provide investors with better information about expected losses. While this new standard presents organizations of all sizes with some significant initial compliance challenges, it also can be viewed as an opportunity to improve performance and upgrade management capabilities.

By understanding the current availability and limitations of portfolio data and by improving the reliability and accuracy of various data elements, banks can be prepared to manage their portfolios in a way that improves income and maximizes capital efficiency.

Does Your Bank Have the Stress Testing Data You Need?


The next several years will increase the need for better data management at banks. Banks that have experienced the Dodd-Frank Act’s required stress tests (DFAST) already have encountered that need. With the Basel III international accord phasing in and the new current expected credit loss impairment standard (CECL) eventually taking effect, all U.S. financial institutions are facing ever more demanding regulatory requirements driving the need for enhanced data and analytics capabilities.

Credit data is becoming increasingly integral to stress tests, as well as capital planning and management and credit loss forecasts. To meet regulatory expectations in these areas, though, some banks need to improve the quality of their data and the control they have over it. Effective data management can bring valuable support and efficiencies to a range of compliance activities.

Expanding Data Requirements
DFAST, which is required of banks above $10 billion in assets, is highly dependent on data quality. The DFAST process—including scenarios, analytics, and reporting—requires banks to maintain a vast array of reliable and detailed portfolio data, including data related to assets and liabilities; to customers, creditors and counterparties; to collateral; and to customer defaults.

Under Basel III, banks will need to gather even more data. The requirements call for consistent data sourcing and reconciliation, liquidity management and the capture of data for historical purposes, among other things.

The Financial Accounting Standards Board’s new CECL model for GAAP reporting applies to all banks and will bring implications for data management. Banks and financial services companies will need borrower and economic data, exposure level data, historical balances, risk ratings and data on charge-offs and recoveries. Failure to capture quality data in these and other areas could result in tougher examinations, reliance on peer or industry data, questions about safety and soundness and drops in capital and profits.

Data Management Challenges
Small banks generally have a handful of credit data sources, while large banks can have 15 or more—and the number of sources is expected to grow in coming years as new products are released. In addition, the data often is stored in different formats and might not be subject to any governance or control. It’s no wonder that banks can find it difficult to get a handle on their data, let alone produce a “single source of truth” that can withstand examiner scrutiny.

One solution to this dilemma is a credit data warehouse. A data warehouse can provide a vehicle for controlling and governing an immense amount of data. It allows a bank to easily show an examiner the data that was used for its models, the data sources and how the data reconciles with the bank’s financial statements.
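
As a simple illustration of that reconciliation, the sketch below ties warehouse balances back to general-ledger control totals by portfolio segment. The table and column names are hypothetical; the point is that model input data should tie out to the financial statements within a defined tolerance.

```python
# Illustrative only: a minimal reconciliation check between a credit data
# warehouse and general-ledger control totals, by portfolio segment.
# Column names and the tolerance are hypothetical assumptions.
import pandas as pd

def reconcile(warehouse: pd.DataFrame, gl_totals: pd.DataFrame,
              tolerance: float = 1_000.0) -> pd.DataFrame:
    # Sum loan-level warehouse balances by segment
    wh = warehouse.groupby("segment")["amortized_cost"].sum().rename("warehouse_total")
    # Line them up against the GL control totals
    recon = pd.concat([wh, gl_totals.set_index("segment")["gl_balance"]], axis=1)
    recon["difference"] = recon["warehouse_total"] - recon["gl_balance"]
    recon["within_tolerance"] = recon["difference"].abs() <= tolerance
    return recon

# Usage with hypothetical extracts:
# reconcile(warehouse_df, gl_df)  -> one row per segment, flagged if it doesn't tie out
```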

Banks might encounter some obstacles on their way to effective warehousing, though, including the sheer volume of data to be stored. Quality assurance is another common issue. For example, information might be missing or not in a standardized format. Data availability also can pose problems. A bank might have the required information but not in an accessible format.
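
The sketch below illustrates the kinds of automated completeness and standardization checks a governance process might run on each warehouse load; the column names and rules are hypothetical.

```python
# Illustrative only: simple data quality checks for completeness,
# standardization and duplication, of the kind a governance process might
# run on each warehouse load. Column names and rules are hypothetical.
import pandas as pd

def quality_report(loans: pd.DataFrame) -> dict:
    return {
        # Completeness: required fields should not be null
        "missing_risk_rating": int(loans["risk_rating"].isna().sum()),
        "missing_collateral_value": int(loans["collateral_value"].isna().sum()),
        # Standardization: dates should parse; ratings should come from a known scale
        "unparseable_dates": int(pd.to_datetime(loans["origination_date"],
                                                errors="coerce").isna().sum()),
        "invalid_ratings": int((~loans["risk_rating"].isin(
            ["1", "2", "3", "4", "5", "6", "7", "8"])).sum()),
        # Duplication: one record per loan per as-of date
        "duplicate_keys": int(loans.duplicated(subset=["loan_id", "as_of_date"]).sum()),
    }
```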

Best Practices
Overcoming these problems comes down to data governance—how a bank manages its data over time to establish and maintain the data’s trustworthiness. Data management without active governance that is targeted toward achieving a single source of truth isn’t sustainable.

In the case of DFAST, it’s important to resist the temptation to take a short-term perspective that considers only the data required for stress testing. Banks that take a more global view, bearing in mind that the data is used throughout the organization, will fare much better. Such banks build a framework that can handle the new data requirements (including those related to historical data) that will surely continue to come in the future.

Banks also should remember that data management is not a one-off task. A bank might have clean data today, but that data will degrade over time if not managed on an ongoing basis.

Finally, banks should not overlook the human factor. Success isn’t brought about by a database but by the people who are stewards for the data and the processes put in place to audit, balance, and control the data. The people and processes will, of course, be enabled with technology, but the people and processes will make or break a data management program.

Time to Take Control
Effective data management is an essential component of any stress testing endeavor, but data management also has implications that extend to CECL and Basel III compliance and likely will aid banks in coping with many forthcoming regulations and requirements. Banks that don’t yet have control of their data should take steps now to establish the governance, framework and people and processes necessary to ensure the completeness, accuracy, availability and auditability of a single source of truth for both the short- and long-term.

Lessons Learned From the Stress Tests


In the wake of the implementation of the Dodd-Frank Wall Street Reform and Consumer Protection Act stress test (DFAST) regulations, the term “stress test” has become a familiar part of the banking lexicon. The DFAST regulations require midsize banks—those with assets between $10 billion and $50 billion—to project the expected impact of three economic scenarios—baseline, adverse, and severely adverse—on their financial statements and capital ratios. Midsize financial institutions were required to report this year’s stress test results to their regulators by March 31, 2015, the second round of stress tests required for these banks.
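
As a stylized illustration of what such a projection involves, the sketch below rolls a hypothetical tier 1 capital ratio forward under three scenarios. Actual DFAST submissions rely on detailed loss, revenue and balance-sheet models and regulatory reporting templates; every figure and assumption below is made up for illustration.

```python
# Illustrative only: a stylized projection of a capital ratio under the
# three DFAST scenarios. All figures, the flat risk-weighted assets, the
# two-year horizon and the no-dividend assumption are hypothetical.
scenarios = {
    # scenario            annual pre-provision   annual credit
    #                     net revenue ($M)       losses ($M)
    "baseline":          (450.0,                 120.0),
    "adverse":           (380.0,                 260.0),
    "severely adverse":  (300.0,                 430.0),
}

capital_start = 2_400.0   # starting tier 1 capital ($ millions)
rwa = 20_000.0            # risk-weighted assets, held flat for simplicity
tax_rate = 0.21
years = 2                 # stylized planning horizon

for name, (ppnr, losses) in scenarios.items():
    capital = capital_start
    for _ in range(years):
        pre_tax_income = ppnr - losses
        capital += pre_tax_income * (1 - tax_rate)   # retained earnings, no dividends
    print(f"{name}: projected tier 1 ratio {capital / rwa:.1%}")
```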

Although the submission that was due in March was round two, most banks felt that it demanded just as much effort as the first round of stress tests. Regulators focused more on process than results in round one and clearly stated that what was acceptable in the first submission would be insufficient for subsequent examinations. Little formal feedback has come in so far, but what we have heard indicates that continuous improvement was definitely expected.

Model Mechanics
In the first round, most banks either used simplistic models or projections that did not capture their risks fully. Banks now are expected to develop enhanced models, and more significant portfolios are being modeled using bottom-up rather than top-down approaches. In assessing models, regulators are questioning assumptions and methodologies and looking for well documented, sound conceptual bases for the modeling choices made. Overly manual modeling processes also are being flagged as impractical for ad hoc use. The message is loud and clear: stress testing models are expected to be integrated into risk management practices.

Documentation
One common area for continued attention appears to be documentation. Whether it’s better organizing information to make it easier to follow the bank’s processes, improving validation documentation, writing user procedures, or better documenting the effective challenge process, the feedback received thus far reinforces that DFAST truly is a formal process. The documentation has to be sufficient for banks to manage, monitor and maintain the overall stress testing program. It also needs to be detailed enough to allow other users, including validators and regulators, to clearly understand the process.

Validation
Validation continues to be a big area of focus, and attention is being paid to both the timing and extent of validation activities. Timing is a critical review point, as the models are expected to be validated prior to the final stress test exercise. Validations have been criticized for having incomplete documentation, for failing to assess data lineage and quality, and for not being comprehensive. As modeling systems become more sophisticated, validations need to provide broader coverage. Validators—whether internal or third-party resources—must be experienced and competent, and they must deliver a sound validation in accordance with the agreed scope.

Sustainability
Banks have been encouraged to shore up organizational structures and procedures to keep their stress testing programs up-to-date and intact. With competition for quantitative resources at an all-time high, many are making choices about hiring statistical specialists and using contractors to keep on track. Banks are focusing on more automated processes, broader business participation, and more detailed user procedures to make sure the loss of one or two employees does not cause a program to fall apart completely.

Life in the DFAST Lane
As with most important business processes, effective DFAST risk management requires significant input from business management, risk management, and internal audit. A collaborative relationship among these three lines of defense results in the strongest DFAST processes. With reporting deadlines for the next cycle in 2016 being delayed from March 31 to July 31, banks have a bit of breathing room to assess the effectiveness and efficiency of their DFAST programs. Banks should use this extra time to further develop documentation, address highest priority issues, and continue to integrate stress testing into routine risk management practices.