

May 3, 2021

Is There a Global Factor in U.S. Bond Yields?

The answer to this question seems obvious simply from observing the secular comovement of nominal yields across several advanced economies, plotted in chart 1.

Chart 1: 10-Year Bond Yields, 1992–2021

This observation raises the possibility that domestic bond yields, including those in the large U.S. Treasury market, may be anchored by global economic developments (see, for example, here and here), the provision of global liquidity, and arbitrage across international markets. The synchronized dynamics in global yields during the last few months serve as a stark reminder of the powerful role that global bond markets play in the transmission of country-specific shocks as well as of monetary and fiscal impulses.

Yet the standard term structure models (see, for example, here) that policymakers and market participants use to form their expectations about the future path of the policy rate are typically estimated only with information embedded in domestic yields. Global influences enter only via the term premia—that is, the extra returns that investors demand to hold long-term bonds—which are shaped by flight to safety and arbitrage across international markets. But because the term premia are obtained as a residual component in the model, any misspecification of the factor structure that drives equilibrium interest rates—by omitting a common global factor, for example—may result in erroneously attributing some fundamental movements to the term premia.
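To fix ideas, these models decompose an n-period yield into the average expected short rate over the bond's life plus a term premium. In generic notation (not specific to any particular model cited here):

y_t^{(n)} = \frac{1}{n} \sum_{i=0}^{n-1} E_t\left[ r_{t+i} \right] + TP_t^{(n)}

The term premium TP_t^{(n)} is whatever part of the observed yield the expectations component does not explain, so any factor omitted from that component ends up in the estimated term premia.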

Chart 2 illustrates this point, presenting a less-noticed and even overlooked empirical regularity between the term premia on the 10-year U.S. bond and the spread between the 10-year and 2-year German bonds, the benchmark for the Eurozone government bond market. This comovement has proved remarkably strong since 2014.1

Chart 2: German Bond Spread and U.S. Term Premia, 2010–21

Take, for example, the pronounced decline in the term premia and the accompanying slide in the German bond spread between 2014 and 2019. Although technical factors might be behind the downward trend in the German bond spread—for example, large Eurozone bond outflows triggered by the euro-area crisis and the introduction of negative interest rates—the slope of the yield curve could also convey important information about the fundamentals of the economy. If the term premia on the 10-year U.S. bond reflect an exogenous "distortion" in the U.S. yield curve due to a flight to safety or an elevated demand for global safe assets, yields are likely to return to normal levels when the uncertainty shock dissipates. In contrast, if investors interpret the yield curve's decline as an endogenous "risk-off" response (a switch to less risky assets) to a deteriorating global environment that can spill over to the U.S. economy, the term structure model would require a "global" factor whose omission may otherwise contaminate the estimate of the term premia.

So how sensitive is the estimate of the future path of the policy rate to model specification? I next illustrate this sensitivity by augmenting the factor space in a standard (five-factor) term structure model with incremental information from an additional global factor that is not contained in the other factors. Given the reasonably tight correlation between the term premia on the 10-year U.S. bond and the 10-year/2-year German bond spread, it seems natural to use the latter as an observed proxy for a global factor, although other statistical approaches for extracting one or more common global factors are certainly possible.
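As a rough illustration of the mechanics, and not a reconstruction of the model estimated in this post, one could extract principal-component factors from the domestic yield curve, append the German 10-year/2-year spread as an observed global factor (orthogonalized so it contributes only incremental information), and iterate a first-order VAR on the augmented factor vector to trace an expected short-rate path. The file name, column labels, and dimensions in this Python sketch are hypothetical.

import numpy as np
import pandas as pd

# Hypothetical quarterly data: U.S. zero-coupon yields in columns "y1q"..."y80q"
# and a "de_spread" column holding the German 10-year/2-year spread.
data = pd.read_csv("yields.csv", index_col=0, parse_dates=True)
yields = data.filter(like="y")            # domestic yield curve
global_proxy = data["de_spread"]          # observed proxy for the global factor

# Principal components of the domestic curve (level, slope, curvature, ...).
demeaned = yields - yields.mean()
_, _, vt = np.linalg.svd(demeaned, full_matrices=False)
pcs = demeaned @ vt[:5].T                 # five domestic factors
pcs.columns = [f"pc{i + 1}" for i in range(5)]

# Orthogonalize the global proxy against the domestic factors so it carries
# only information not already contained in them.
X = np.column_stack([np.ones(len(pcs)), pcs])
beta, *_ = np.linalg.lstsq(X, global_proxy, rcond=None)
factors = pcs.assign(global_factor=global_proxy - X @ beta)

# First-order VAR on the augmented factor vector; iterate it forward to
# trace the model-implied expected path of the short rate.
F = factors.values
A, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(F) - 1), F[:-1]]),
                        F[1:], rcond=None)
c, Phi = A[0], A[1:].T

loading, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(F)), F]),
                              yields["y1q"], rcond=None)

f, path = F[-1], []
for _ in range(20):                       # 20 quarters ahead
    f = c + Phi @ f
    path.append(loading[0] + loading[1:] @ f)
print(np.round(path, 2))

The risk-neutral pricing and no-arbitrage restrictions that a full affine term structure model imposes are omitted here; the sketch only shows where the extra factor enters.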

To quantify the potential effect of the global factor, I focus on yield curve dynamics seen in 2019, a period characterized by elevated economic, trade, and geopolitical uncertainty that led to a material decline in observed yields. But did a fundamental shift in the expected path of the policy rate, or lower term premia, drive this decline? In the left panel of chart 3, I plot the expected policy rate paths for the second quarter (or midpoint) of 2019, obtained from models with and without a global factor. (Recall that in the second quarter of 2019, the target range for the federal funds rate was 2.25 percent to 2.50 percent.)

Chart 3: Model-Implied Paths of Policy Rate

The difference in the shape of the expected policy rate paths implied by the two models is striking. (The models' estimates use unsmoothed yield data at quarterly frequency, with continuous bond maturities from one to 80 quarters.) Although the expected policy rate path for the standard model is fairly flat, the rate path for the model with a global factor is deeply inverted up to five-year maturities, suggesting that over this horizon one could have expected rate cuts of almost 100 basis points. These expectations occurred against the backdrop of a stable growth and inflation outlook in the United States but deteriorating global economic and trade conditions. The right panel of chart 3 displays the evolution of the expected rate path, estimated from the global factor model, for the two quarters before and the two quarters after the second quarter of 2019, as the Federal Reserve started to adjust its policy rate lower. It is worth noting that the strong effect of the German 10-year/2-year spread in the term structure model with a global factor is a relatively recent phenomenon. (Additional results suggest that this factor has only a muted impact on the model estimates prior to 2014.)

The policy implications of these findings warrant several remarks. One direct implication is that the common global determinants of the neutral rate of interest, as well as inflationary dynamics, could constrain the potency of domestic monetary policy. A prime example of these constraints was the policy rate normalization phase undertaken by the Fed during the 2016–18 period, which was characterized by global disinflationary pressures, underwhelming economic performance in Europe and Japan, slowing economic growth in China, and escalating trade tensions. These forces were potentially counteracting the Fed's policy efforts and exerting downward pressure on the global neutral rate of interest. The recent economic and financial developments resulting from the COVID-19 pandemic (such as the global nature of the shock, synchronized monetary and fiscal response across countries, and international financial market comovements) and the ongoing recovery appear to only strengthen the case for the importance of incorporating global information in bond-pricing models.




1 [go back] I should note that the correlation between the two series increased from 52.9 percent before 2014 to 76.2 percent after 2014. Interestingly, the beginning of 2014 marks another important shift in financial markets: a sharp and persistent compression in the breakeven inflation forward curve, as a Liberty Street Economics blog post recently discussed. A similar flattening is present in the forward term premia of nominal bonds. This is consistent with the interpretation that such flattening—starting in 2014—is likely the result of a new regime, characterized by the compression of inflation risk across maturities.
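For readers who want to reproduce this kind of subsample comparison with their own data, a minimal sketch follows; the file and column names are hypothetical.

import pandas as pd

# Hypothetical monthly series: "us_tp_10y" is the 10-year U.S. term premium,
# "de_spread_10y2y" is the German 10-year/2-year spread.
df = pd.read_csv("series.csv", index_col=0, parse_dates=True)

pre = df.loc[:"2013-12"]    # subsample before 2014
post = df.loc["2014-01":]   # subsample from 2014 onward

print("pre-2014 :", round(pre["us_tp_10y"].corr(pre["de_spread_10y2y"]), 3))
print("post-2014:", round(post["us_tp_10y"].corr(post["de_spread_10y2y"]), 3))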

November 9, 2020

The Importance of Digital Payments to Financial Inclusion

Editor's note: In December, macroblog will become part of the Atlanta Fed's Policy Hub publication.

A recent Atlanta Fed white paper titled "Shifting the Focus: Digital Payments and the Path to Financial Inclusion" calls for a concerted effort to bring underbanked consumers into the digital payments economy. The paper—by Atlanta Fed president Raphael Bostic, payments experts Shari Bower and Jessica Washington, and economists Oz Shy and Larry Wall—acknowledges the importance of longstanding efforts to bring the full range of banking services to unbanked and underbanked consumers. (For another take on the white paper and its relationship to the Atlanta Fed's mission, you can read here.) However, the white paper observes, progress toward this goal has been slow. It further notes the growing importance of digital payments for a wide variety of economic activities. It concludes by highlighting a number of potential policies for policymakers to consider that could expand inclusion in the digital payments economy.

The 2017 Federal Deposit Insurance Corporation (FDIC) National Survey of Unbanked and Underbanked Households found that 6.5 percent of U.S. households are unbanked and an additional 18.7 percent underbanked. In this survey, a household is considered underbanked if it has a bank account but has obtained some financial services from higher-cost alternative service providers such as payday lenders. The proportions are even higher in some minority communities, with an unbanked rate for Black households at 16.9 percent. These figures were down modestly from earlier FDIC surveys, but progress remains inadequate.

The white paper retains full inclusion as the ultimate goal but argues we should not let the difficulties of achieving full inclusion deter us from moving aggressively to spread the benefits of digital payments. Such digital payments in the United States are typically made using (or funded by) a debit or credit card. Yet a recent paper by Oz Shy (one of the coauthors of this post) finds that over 4.8 percent of adults in a recent survey lack access to either card. Moreover, those lacking a card tend to be disproportionately concentrated in low-income households, with almost 20 percent of households earning under $10,000 annually and over 14 percent of those earning under $20,000 a year having neither card. These numbers also vary across ethnic groups: 4.8 percent of white surveyed consumers lack either card, compared with 10.2 percent of Black surveyed consumers.

The lack of access to digital payments has long been a costly inconvenience, but recent developments are moving digital payments from the "nice-to-have" category toward the "must-have" category. In recent years, card payments have been increasing at an annual rate of 8.9 percent by number. While cash remains popular, debit cards have overtaken cash as the most popular type of in-person payment. Moreover, the use of cards in remote payments, where cash is not an option, nearly equals their use for in-person transactions. Most recently, COVID-19 has accelerated this move toward cards, with a 44.4 percent year-over-year increase in e-commerce sales in the second quarter of 2020.

These trends in card usage relative to cash usage pose several problems for consumers who lack access to digital payments. First, some retailers are starting to adopt a policy of refusing cash. Second, many governments are deploying no-cash parking meters, along with highway toll readers and mass transit fare machines that do not accept cash. Third, the growth of online shopping is being accompanied by a decrease in the number of physical stores, resulting in reduced access for those lacking cards.

The last part of the white paper discusses a number of not mutually exclusive ways of keeping the shift from paper-based payments (cash and checks) to digital payments from adversely affecting those lacking a bank account. A simple, short-term fix is to preserve an individual's ability to obtain cash and use it at physical stores. No federal law currently prevents businesses from going cashless, but some states and localities have mandated the acceptance of cash.

However, merely forcing businesses to accept cash does not solve the e-commerce problem, nor does it promote the development of faster, cheaper, safer, and more convenient payment systems, so considering alternatives takes on greater importance. One option the paper discusses is that of cash-in/cash-out networks that allow consumers to convert their physical cash to digital money (and vice versa). Examples of this in the United States include ATMs and prepaid debit cards, as well as prepaid services such as mass transit cards that can be purchased for cash in physical locations.

Another option is public banking. One version of this that has been proposed is a postal banking system like the ones operating in 51 countries outside the United States and the one that was once available here. Another public banking possibility would provide consumers with basic transaction accounts that allow digital payments services. The government or private firms (such as banks, credit unions, or some types of fintech firms) could administer such services.

The paper concludes with a discussion of some important challenges inherent in moving toward a completely cashless economy accessible to everyone. One such consideration is access to mobile and broadband service. This issue has a financial dimension, that of being able to afford internet access. It also has a geographic dimension in that many rural areas lack both high-speed internet and fast cellphone networks. Another challenge is providing a faster payment service that would give people earlier access to their incoming funds and make bank balances more accurately reflect outgoing payments. Finally, the white paper raises the potential for central bank digital currency to expand access to digital payments. However, central bank digital currency raises a large number of issues that the federal government and Federal Reserve would need to work through before it could be a viable option.

November 21, 2019

Private and Central Bank Digital Currencies

The Atlanta Fed recently hosted a workshop, "Financial System of the Future," which was cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. This macroblog post covers the workshop's discussion of digital currency, including Bitcoin, Libra, and central bank digital currency (CBDC). A companion Notes from the Vault post provides some highlights from the rest of the workshop.

Bitcoin has sparked considerable interest in cryptocurrencies since its introduction in the 2008 paper "Bitcoin: A Peer-to-Peer Electronic Cash System" by Satoshi Nakamoto. However, for all its success, Bitcoin is not close to becoming a widely accepted electronic cash system. Why it has yet to achieve its original goals is the topic of a paper by New York University professors Franz Hinzen and Kose John, along with McGill University professor Fahad Saleh, titled "Bitcoin's Fatal Flaw: The Limited Adoption Problem."

Their paper suggests that the inability of Bitcoin to achieve wider adoption is the result of the interaction of three features: the need for agreement on ledger contents (in blockchain terminology, "consensus"), free entry for creating new blocks (permissionless or decentralized), and an artificial supply constraint. The supply constraint means that an increase in demand leads to higher Bitcoin prices. Such a valuation increase expands the network seeking to create new blocks (that is, increases the number of Bitcoin "miners"). But an increase in the network size slows the consensus process as it takes time for newly created blocks to reach all of the miners across the internet. The end result is an increase in the time needed to make a payment, reducing the value of Bitcoin as a means of payment—a significant consideration, obviously, for any type of currency.

As an alternative to the Bitcoin consensus protocol, they suggest a public, permissioned blockchain that results in faster transactions because it imposes limits on who can create new blocks. In their system, new blocks would be selected by a vote weighted by the amount of the blockchain's cryptocurrency held by validators (in other words, approved block creators). If validators were to approve new and malicious blocks, that would erode the value of their existing cryptocurrency holdings and thus provide an incentive to behave honestly.
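One simple way to read that weighted vote is as stake-weighted approval: a block is accepted only if validators holding more than some share of the total cryptocurrency vote for it. The Python sketch below is illustrative and not taken from the authors' protocol; the validator names, stakes, and 50 percent threshold are assumptions.

# Hypothetical stakes: amount of the chain's cryptocurrency held by each approved validator.
stakes = {"validator_a": 500.0, "validator_b": 300.0, "validator_c": 200.0}

def block_approved(votes: dict[str, bool], stakes: dict[str, float],
                   threshold: float = 0.5) -> bool:
    """Accept a block if validators holding more than `threshold` of the total
    stake vote in favor. Because voting weight is proportional to holdings, a
    validator that approves malicious blocks puts the value of its own stake at risk."""
    total = sum(stakes.values())
    in_favor = sum(stakes[v] for v, yes in votes.items() if yes)
    return in_favor / total > threshold

# Example: the two largest holders approve, the smallest rejects (80 percent of stake in favor).
print(block_approved({"validator_a": True, "validator_b": True, "validator_c": False}, stakes))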

Federal Reserve Bank of Atlanta visiting economist Warren Weber presented some work with me on Libra, the new digital coin proposed by Facebook. Weber began by pointing to another problem with using Bitcoin in payments: the cryptocurrency's volatile value. Libra solves this problem by proposing to hold a portfolio of assets denominated in sovereign currencies, such as the U.S. dollar, that will provide one-for-one backing of the value of Libra. This approach is similar to that taken by some other "stablecoins," with the exception that Libra proposes to be stable relative to an index of several currencies whereas other stablecoins are designed to be stable with respect to only one sovereign currency.

Drawing on his background in economic history, Weber observes that introducing a new private currency is hard, but not impossible. For example, he pointed to the Stockholm Bank notes issued in Sweden in the 1660s. These notes worked because they were more convenient than the alternatives used in that country. The fact that other U.S. payments systems are heavily bank-based might afford an advantage to Libra.

Although no one is certain of the public's interest in using Libra, policymakers around the world have taken considerable interest in the potential implications of Libra for monetary policy and financial regulation. Could Libra significantly reduce the use of the domestic sovereign currencies in some countries, thus reducing the effectiveness of monetary policy? How might financial institutions providing Libra-based services be regulated?

Another possible policy response to Libra is for central banks to introduce their own digital currencies. Economists Itai Agur, Anil Ari, and Giovanni Dell'Ariccia from the International Monetary Fund consider some of the issues in developing a CBDC in their paper "Designing Central Bank Digital Currencies." They start by observing some important differences between cash and bank deposits. Cash is completely anonymous in that it reveals nothing about the identity of the payer. However, lost or stolen cash can't be recovered, so it lacks security. Deposits have the opposite properties—they are not anonymous, but there is a mechanism to recover lost or stolen funds.

The paper develops a model in which CBDC can be designed to operate at multiple points on a continuum between deposits and cash. The key concern from a public policy perspective is that the more CBDC operates like bank deposits, the more it will depress bank credit and output. However, if the CBDC operates too much like paper currency, then it could supplant paper currency and eliminate a payments method that some individuals prefer. The paper proposes that CBDC be designed to look more like currency to minimize the extent to which CBDC replaces bank deposits. The problem then becomes how to avoid CBDC reducing the usage of cash to the point where cash is no longer viable. (For example, merchants could decide to stop accepting cash because they find that the few transactions using cash do not justify the costs of accepting it.) The way the paper proposes to keep CBDC from being too attractive relative to cash is to apply a negative interest rate to the CBDC. The result would be that those who most highly value CBDC will use it, but the negative rate will likely deter enough people so that cash remains a viable payments mechanism.

January 4, 2018

Financial Regulation: Fit for New Technologies?

In a recent interview, the computer scientist Andrew Ng said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI [artificial intelligence] will transform in the next several years." Whether AI effects such widespread change so soon remains to be seen, but the financial services industry is clearly in the early stages of being transformed—with implications not only for market participants but also for financial supervision.

Some of the implications of this transformation were discussed in a panel at a recent workshop titled "Financial Regulation: Fit for the Future?" The event was hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University (you can see more on the workshop here and here). The presentations included an overview of some of AI's implications for financial supervision and regulation, a discussion of some AI-related issues from a supervisory perspective, and some discussion of the application of AI to loan evaluation.

As a part of the panel titled "Financial Regulation: Fit for New Technologies?," I gave a presentation based on a paper I wrote that explains AI and discusses some of its implications for bank supervision and regulation. In the paper, I point out that AI is capable of very good pattern recognition—one of its major strengths. The ability to recognize patterns has a variety of applications including credit risk measurement, fraud detection, investment decisions and order execution, and regulatory compliance.

Conversely, I observed that machine learning (ML), the more popular part of AI, has some important weaknesses. In particular, ML can be considered a form of statistics and thus suffers from the same limitations as statistics. For example, ML can provide information only about phenomena already present in the data. Another limitation is that although machine learning can identify correlations in the data, it cannot prove the existence of causality.
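As a toy illustration of that second limitation, two simulated series that merely share a time trend, with no causal connection, can show a sample correlation near one, and an ML model would happily exploit that pattern. The data below are simulated, not drawn from any financial application.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

# Two independent series that both happen to trend upward over time.
a = 0.5 * t + rng.normal(scale=5.0, size=t.size)
b = 0.3 * t + rng.normal(scale=5.0, size=t.size)

print(np.corrcoef(a, b)[0, 1])  # close to 1 despite no causal relationship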

This combination of strengths and weaknesses implies that ML might provide new insights about the working of the financial system to supervisors, who can use other information to evaluate these insights. However, ML's inability to attribute causality suggests that machine learning cannot be naively applied to the writing of binding regulations.

John O'Keefe from the Federal Deposit Insurance Corporation (FDIC) focused on some particular challenges and opportunities raised by AI for banking supervision. Among the challenges O'Keefe discussed is how supervisors should give guidance on and evaluate the application of ML models by banks, given the speed of developments in this area.

On the other hand, O'Keefe observed that ML could assist supervisors in performing certain tasks, such as off-site identification of insider abuse and bank fraud, a topic he explores in a paper with Chiwon Yom, also at the FDIC. The paper explores two ML techniques: neural networks and Benford's Digit Analysis. The premise underlying Benford's Digit Analysis is that the leading digits of naturally occurring financial figures follow a predictable frequency distribution, so digits resulting from nonrandom number selection may differ from those expected frequencies. Thus, if a bank is committing fraud, the accounting numbers it reports may deviate significantly from what would otherwise be expected. Their preliminary analysis found that Benford's Digit Analysis could help bank supervisors identify fraudulent banks.
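As a minimal sketch of the first-digit version of this kind of analysis, and not the authors' implementation, one can compare the leading-digit frequencies of a set of reported figures with the Benford distribution and summarize the deviation with a chi-square statistic. The input numbers below are placeholders, not bank data.

import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the first significant digit of a nonzero number."""
    return int(f"{abs(x):.15e}"[0])   # scientific notation, e.g. '3.14...e+02'

def benford_chi_square(values: list[float]) -> float:
    """Chi-square statistic comparing observed first-digit frequencies with
    the Benford distribution P(d) = log10(1 + 1/d)."""
    observed = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(observed.values())
    chi_sq = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi_sq += (observed.get(d, 0) - expected) ** 2 / expected
    return chi_sq

# Placeholder "reported" figures; a large statistic relative to a chi-square
# distribution with 8 degrees of freedom suggests the digits deviate from
# what Benford's law would predict.
reported = [1234.5, 1876.0, 2210.9, 3045.2, 118.7, 402.3, 951.1, 7302.8]
print(round(benford_chi_square(reported), 2))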

Financial firms have been increasingly employing ML in their business areas, including consumer lending, according to the third participant in the panel, Julapa Jagtiani from the Philadelphia Fed. One consequence of this use of ML is that it has allowed both traditional banks and nonbank fintech firms to become important providers of loans to both consumers and small businesses in markets in which they do not have a physical presence.

Potentially, ML also more effectively measures a borrower's credit risk than a consumer credit rating (such as a FICO score) alone allows. In a paper with Catharine Lemieux from the Chicago Fed, Jagtiani explores the credit ratings produced by the Lending Club, an online lender that has become the largest provider of personal unsecured installment loans in the United States. They find that the correlation between FICO scores and Lending Club rating grades has steadily declined from around 80 percent in 2007 to a little over 35 percent in 2015.

It appears that the Lending Club is increasingly taking advantage of alternative data sources and ML algorithms to evaluate credit risk. As a result, the Lending Club can more accurately price a loan's risk than a simple FICO score-based model would allow. Taken together, the presenters made clear that AI is likely to also transform many aspects of the financial sector.