Simulating Mega Risks of Tomorrow
What do you do when even the hypothetical doomsday scenario falls short?
By Julia Chong
As our reality is shaken by a virulent virus, banking should consider a shake-up of its own.
For over a decade, overhauls to risk models under the Basel regime have undoubtedly strengthened the financial sector’s resilience; yet no one was prepared for the onslaught of Covid-19. As the impact of the pandemic pushes into unexplored territory, the only rule for overhauling banks’ risk models is this: no assumption is sacrosanct.
For most financial institutions, the sensitivity analyses in their risk models consider only the most likely outcomes – upside, downside, and scenarios near the baseline. There are also more extreme stress tests, which push the limits of the financial system by simulating one-in-a-thousand events, a.k.a. Armageddon scenarios, but even these have come up short in predicting the financial fallout.
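To make the mechanics concrete, below is a minimal sketch of scenario-based sensitivity analysis on a toy loan book, running baseline, upside, downside, and an extreme ‘Armageddon’ case side by side. The exposures, default probabilities, losses given default, and scenario multipliers are all hypothetical.

```python
# Minimal sketch of scenario-based sensitivity analysis on a toy loan book.
# All exposures, probabilities of default (PD), losses given default (LGD),
# and scenario multipliers below are hypothetical.

portfolio = [
    # (exposure at default, baseline PD, baseline LGD)
    (10_000_000, 0.02, 0.40),
    (25_000_000, 0.05, 0.55),
    (5_000_000,  0.01, 0.30),
]

# Each scenario scales baseline PDs and LGDs; the "armageddon" row mimics
# a one-in-a-thousand tail event far beyond the usual up/downside cases.
scenarios = {
    "upside":     (0.8, 0.9),
    "baseline":   (1.0, 1.0),
    "downside":   (1.5, 1.1),
    "armageddon": (4.0, 1.3),
}

def expected_loss(pd_mult, lgd_mult):
    """Expected loss = sum of EAD * stressed PD * stressed LGD (capped at 1)."""
    return sum(
        ead * min(pd * pd_mult, 1.0) * min(lgd * lgd_mult, 1.0)
        for ead, pd, lgd in portfolio
    )

for name, (pd_mult, lgd_mult) in scenarios.items():
    print(f"{name:>10}: expected loss = {expected_loss(pd_mult, lgd_mult):,.0f}")
```

Real stress tests layer macroeconomic scenario paths, portfolio correlations, and capital impacts on top of this, but the basic shape is the same: shock the inputs, recompute the losses.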
The Wall Street Journal quotes Randal Quarles, US Federal Reserve Vice-Chairman for Supervision, as stating that the regulator is “adjusting its annual ‘stress tests’ for banks to incorporate lenders’ performance during the coronavirus-triggered downturn, which is worse than the hypothetical scenarios that the central bank previously planned to use”.
In Quarles’ own words: “The right thing to do is for us to continue our stress tests but as part of them to analyse how banks’ portfolios are responding to real, current events, not just to the hypothetical event that we announced earlier this year.”
Forecasting models are populated with reasonable assumptions and historical relationships in the data. However, there is no precedent in recent history that mirrors this pandemic or resembles the forces at play. Even machine learning and artificial intelligence models fail here, as their predictive power is highly dependent on historical precedent, data quality, and the sets of assumptions used.
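As a stylised illustration of that dependence on precedent, the sketch below fits a simple default-rate relationship to synthetic ‘pre-crisis’ data and then extrapolates to a shock far outside the range it was trained on; both the data and the relationship are invented for the example.

```python
# Stylised illustration: a model fitted on pre-crisis history extrapolating
# to an unprecedented shock. The data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# "Historical" unemployment rates (4%-8%) and corresponding default rates.
unemployment = rng.uniform(4.0, 8.0, size=200)
default_rate = 0.5 + 0.3 * unemployment + rng.normal(0, 0.2, size=200)

# Fit a simple linear relationship on the historical range.
slope, intercept = np.polyfit(unemployment, default_rate, deg=1)

# A pandemic-style shock sits far outside anything in the training sample,
# so the linear extrapolation is a guess, not a precedent-backed forecast.
shock = 15.0
print(f"Fitted default rate at {shock}% unemployment: "
      f"{intercept + slope * shock:.2f}% (extrapolated well beyond the data)")
```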
In light of this, banks must question whether their long-held assumptions in current risk models still hold.
The Basel Committee on Banking Supervision’s (BCBS) final revisions to Basel III were to be implemented on 1 January 2022. These revisions, commonly referred to as Basel IV, include improvements to the market risk framework. Since Covid-19, the BCBS has announced a 12-month delay, allowing banks to focus on navigating the pandemic and, perhaps, giving them an opportunity to retune their data collection processes in this ‘new normal’.
Data provider CRISIL, an S&P Global company, in its April 2020 paper, The Hour to Rejuvenate Credit-risk Data Models, highlights three broad risk-model areas for lenders to zone in on:
The first is the treatment of payment moratoriums. To mitigate the impact of the pandemic, institutions have allowed financially strapped borrowers to postpone payments. In the UK, for instance, banks can extend zero-penalty payment holidays of up to three months on mortgages with no impact on the borrower’s creditworthiness. Despite the new repayment schedule, the exposure would not be reclassified as forborne or distressed. Overrides – manual or system-wide – are to be expected and it is imperative that banks do two things:
+ Factor in the impact on credit-risk data collection, processes, and governance. Reset systems and controls to monitor the timing of payment postponements and pandemic-related overrides, ensuring that borrowers are not penalised. Control failures will likely have conduct risk implications for banks and may lead to regulatory penalties.
+ Foolproof distressed restructuring. Continue to assess all indicators of borrowers’ unlikeliness to pay during and after the moratorium, accurately presenting the quality of the bank’s portfolio to market participants and ensuring it remains adequately capitalised. Institutions should continue to capture data on distressed restructurings for a robust assessment of unlikeliness to pay, enabling timely and accurate identification and measurement of credit risk (a simplified data-capture sketch follows this list).
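The sketch below gives one possible shape for such data capture, assuming a hypothetical exposure record with an explicit override log so that the timing and reason for relief remain traceable while the classification stays unchanged. The field names, the three-month window, and the relief reason code are illustrative only.

```python
# Minimal sketch of capturing pandemic-related payment holidays as explicit
# overrides without reclassifying the exposure as forborne. Field names,
# the three-month window, and the records are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Exposure:
    borrower_id: str
    classification: str = "performing"      # stays unchanged by the holiday
    overrides: list = field(default_factory=list)

def grant_payment_holiday(exposure, start, months, reason="covid19_relief"):
    """Record the relief as an override so timing and cause stay traceable."""
    exposure.overrides.append({
        "type": "payment_holiday",
        "reason": reason,
        "start": start,
        "months": months,
    })
    # Note: classification is deliberately NOT set to "forborne"; the
    # unlikeliness-to-pay assessment continues separately.
    return exposure

loan = grant_payment_holiday(Exposure("B-001"), date(2020, 4, 1), months=3)
print(loan.classification, loan.overrides)
```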
The second area relates to the probability of default (PD) and loss given default (LGD) models.
+ Poorly performing PD models. With the expected surge in defaults and credit migration, PD models, which are largely driven by annual or quarterly financials, are unlikely to produce ratings that reflect the Covid-19 crisis, so overrides are expected. However, determining the specific overrides for each borrower is challenging and will greatly burden model monitoring teams as they assess the performance of the overrides.
+ Conduct impact analysis due to the rise in defaults and LGDs. As new data emerge, the central tendency of the default rate is likely to shift and a recalibration of current PDs may be required (a simplified recalibration sketch follows this list). For LGDs, institutions need to monitor the pool of collateral used to mitigate credit risk, as it will have a significant bearing on the recovery rate.
+ Improve data capture for risk-weighted assets (RWA) optimisation and pricing. While the full impact of the outbreak and related relief measures may take some time to manifest, it is imperative that institutions conduct an impact analysis of their RWAs and use this knowledge in the pricing of new loans.
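As a rough illustration of the recalibration point above, the sketch below scales hypothetical grade-level PDs so that the exposure-weighted average matches a new observed central tendency. The grades, PDs, weights, and the proportional scaling are simplifications; institutions often recalibrate in score or log-odds space instead.

```python
# Simplified sketch of recalibrating rating-grade PDs to a shifted central
# tendency (portfolio-average default rate). The grades, PDs, and weights
# are hypothetical, and proportional scaling is only one of several
# recalibration approaches.
grades = {
    # grade: (current PD, share of portfolio exposure)
    "A": (0.005, 0.30),
    "B": (0.020, 0.45),
    "C": (0.080, 0.25),
}

current_ct = sum(pd * w for pd, w in grades.values())  # current central tendency
new_ct = 0.045                                         # illustrative post-shock default rate

scale = new_ct / current_ct
recalibrated = {g: min(pd * scale, 1.0) for g, (pd, w) in grades.items()}

print(f"central tendency: {current_ct:.3%} -> {new_ct:.3%}")
for g, pd in recalibrated.items():
    print(f"grade {g}: PD {grades[g][0]:.2%} -> {pd:.2%}")
```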
The third covers economic support and relief measures affecting International Financial Reporting Standard 9 (IFRS 9) provisioning.
The goal is to capture defaulting loans correctly. It is up to each institution to define what constitutes a significant increase in credit risk, capture the expected credit loss effectively, and document this in accordance with its credit impairment policies and procedures.
Under IFRS 9, credit obligations are allocated to three stages: performing (Stage 1), underperforming (Stage 2), and impaired (Stage 3). During the period of pandemic relief programmes, credit obligations that would normally move to Stage 2 (a transition state before default) will now remain in Stage 1 for longer.
From a data governance perspective, this counts as a system override. Banks should apply the stage allocation correctly throughout the forbearance period and revert to the original classification once this period lapses. Where there is a significant deterioration in credit risk, the transfer to Stage 2 needs to be assessed relative to the credit risk at the time the loan was granted.
For the purposes of IFRS 9, forward-looking information must be reasonable and supportable. Additionally, the impact of overrides should be carefully monitored, with visibility for senior management. This will, in turn, improve traceability and enable a better understanding of the key drivers of credit impairment during this time of pronounced uncertainty.
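A minimal sketch of how stage allocation drives the provisioning horizon: Stage 1 attracts a 12-month expected credit loss, while Stages 2 and 3 attract a lifetime one. The significant-increase-in-credit-risk test, thresholds, PDs, and exposure figures below are invented; real assessments draw on far richer inputs and forward-looking scenarios.

```python
# Minimal sketch of IFRS 9 stage allocation and the provisioning horizon it
# drives: Stage 1 -> 12-month ECL, Stage 2/3 -> lifetime ECL. Thresholds,
# PDs and exposures are hypothetical; real SICR tests use far more inputs.

def allocate_stage(days_past_due, pd_now, pd_at_origination, impaired=False):
    if impaired:
        return 3
    # Crude "significant increase in credit risk" (SICR) test: relative PD
    # rise since origination, or a 30-days-past-due backstop.
    if days_past_due > 30 or pd_now > 2.0 * pd_at_origination:
        return 2
    return 1

def expected_credit_loss(stage, ead, lgd, pd_12m, pd_lifetime):
    """Stage 1 uses 12-month PD; Stage 2 lifetime PD; Stage 3 is credit-impaired."""
    if stage == 3:
        pd = 1.0           # already defaulted, loss driven by LGD
    elif stage == 2:
        pd = pd_lifetime
    else:
        pd = pd_12m
    return ead * pd * lgd

stage = allocate_stage(days_past_due=0, pd_now=0.03, pd_at_origination=0.02)
ecl = expected_credit_loss(stage, ead=1_000_000, lgd=0.45,
                           pd_12m=0.03, pd_lifetime=0.09)
print(f"stage {stage}, ECL = {ecl:,.0f}")
```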
Haresh Sapra, Professor of Accounting at Chicago Booth, is one of the many voices advocating against banks being temporarily relieved from post-crisis regulations such as IFRS 9 and the current expected credit losses (CECL – America’s new credit-loss standard), which were designed to curb excessive risk-taking.
Drawing from current research, Prof Sapra writes in the April edition of Chicago Booth Review: “Some argue that due to the economic uncertainty brought about by Covid-19, banks may face higher-than-anticipated increases in credit-loss allowances. But my research demonstrates that the role of expected-loss models such as the CECL is to reveal timely information about credit losses so that a bank’s stakeholders can be more nimble and make informed, sound decisions.”
“Ignoring such information,” he warns, “would not discipline risk-taking – and could potentially exacerbate it.”
“As the crisis evolves, many companies and consumers will unfortunately default on their loans and banks will incur credit losses. But banks that are timely about recognising these losses will emerge from this crisis in better shape and with more credibility.”
On the flip side, economists such as Dr Jon Danielsson, Director of the Systemic Risk Centre at the London School of Economics, caution against preaching that risk models are the be-all and end-all of financial stability.
“Statistical pricing and risk-forecasting models played a significant role in the build-up to the crisis. For example, they gave wrong signals, underestimated risk, and mispriced collateralised debt obligations,” writes Dr Danielsson in a February 2011 opinion piece analysing the post-2008 crash.
“I am therefore surprised by the frequent proposals for increasing the use of such models in post-crisis reforms – and I am not alone. If the models performed so badly, why aren’t we questioning their increased prominence?
“This may be because of the view that we can somehow identify the dynamics of financial markets during crisis by studying pre-crisis data. That we can get from the failure process in normal times to the failure process in crisis times. That all the pre-crisis models were missing was the presence of a crisis in the data sample.
“This is not true. The models are not up to the task. While statistical risk and pricing models may do a good job when markets are calm, they lay the seeds for their own destruction – it is inevitable that such models be proven wrong. The riskometer is a myth.”
How then can we forestall finance’s next big crunch?
“The next crisis will not come from where we are looking,” Dr Danielsson hints. “After all, bankers seeking to assume risk will look for it where nobody else is looking.”
Vigilance – a watchful eye for possible dangers on the financial horizon – might just be the critical determinant in this risk-model puzzle.
Risk officers should diversify their sources of information, keeping abreast of research on the fringe, which may unlock critical components in refining risk models.
For instance, Nature journal featured Arthur Turrell, a physicist-turned-economist at the Bank of England’s data science unit, whose current work fuses economics and epidemiology (the branch of medicine that studies how diseases spread and what drives them), merging life science with data analytics to track the financial fallout of the pandemic.
He shares how the virus has “galvanised a change that was already under way” to support policymakers at the central bank.
“Typical macroeconomic data points, such as figures on gross domestic product, come out every quarter. That’s fine in normal times when changes are gradual – but now, changes are happening week by week. And with policies such as lockdown, it’s as though whole sectors of the economy have been turned off overnight. So we’ve had to think differently. We’ve been using tools from data science and computer science to automatically collect and analyse data as soon as they come out, to create a report for policymakers at the bank. We use a little bit of machine learning and more sophisticated tools for text analysis.”
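The sketch below gestures at the kind of pipeline Turrell describes, scoring a stream of incoming text against simple keyword lists to produce a fast indicator. The headlines, keyword lists, and scoring rule are invented for the example; the Bank of England’s actual tooling is, of course, far more sophisticated.

```python
# Toy illustration of turning incoming text into a fast indicator, in the
# spirit of the text-analysis work described above. Headlines, keyword
# lists, and the scoring rule are all invented for the example.
from collections import Counter

NEGATIVE = {"lockdown", "closure", "layoffs", "default", "shutdown"}
POSITIVE = {"reopening", "hiring", "recovery", "growth"}

headlines = [
    "Retail chains announce closure of high-street stores",
    "Factories begin reopening as restrictions ease",
    "Airlines warn of further layoffs amid lockdown",
]

def weekly_sentiment(texts):
    """Crude score: +1 per positive keyword, -1 per negative, summed over texts."""
    counts = Counter()
    for text in texts:
        words = {w.strip(",.").lower() for w in text.split()}
        counts["positive"] += len(words & POSITIVE)
        counts["negative"] += len(words & NEGATIVE)
    return counts["positive"] - counts["negative"]

print("weekly sentiment score:", weekly_sentiment(headlines))
```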
Julia Chong is a Singapore-based writer with Akasaa. She specialises in compliance and risk management issues in finance.