Consumer Compliance Outlook: Second Issue 2017

Keeping Fintech Fair: Thinking About Fair Lending and UDAP Risks

By Carol A. Evans, Associate Director, Division of Consumer and Community Affairs, The Board of Governors of the Federal Reserve System1

Fintech is the latest wave in the continuing technological evolution of financial services. Fintech has already produced real benefits to consumers, including increased speed, convenience, and new product offerings that make it easier for consumers to manage their financial lives. Fintech may also offer ways to bring banking and new financial products to underserved communities, including products and accounts that help the underbanked manage their finances more easily, budget, and save.

Additionally, many firms are exploring ways to leverage new data and analytic techniques to extend credit to more consumers. It may be possible to extend responsible and fair access to credit to more consumers who do not have a traditional credit history and who would otherwise be denied access to prime credit. The Consumer Financial Protection Bureau (CFPB) has found that approximately 26 million Americans are credit invisible, which means that they do not have a credit record, and another 19.4 million do not have sufficient recent credit data to generate a credit score.2

Some in the fintech world see an enormous opportunity to improve access to credit on fair terms but are frustrated that the complexities of consumer compliance laws may thwart progress, especially in the areas of fair lending and unfair or deceptive acts or practices (UDAP). On the other hand, some stakeholders, including consumer advocates, are alarmed that some firms are jumping headfirst into new data and products without adequately evaluating the risks. They believe that some fintech trends may not only be unfair to certain consumers but could serve to exacerbate existing inequities in financial access and result in the digital equivalent of redlining.

The purpose of this article is to offer some general guideposts for evaluating UDAP and fair lending risk related to fintech, with a focus on alternative data. Increasing fluency with fair lending and UDAP concepts can help integrate consumer protection considerations into the early phases of business development, which can ensure effective compliance and save everyone time in the long run. In fact, we often hear consumer compliance professionals express frustration that they are brought into the process late when it is harder to course correct. We encourage business executives to view their compliance colleagues as key partners who can provide valuable advice at every stage of the business development process. Of course, both fair lending and UDAP are broad areas of the law where sound legal analysis depends on the specific facts and circumstances. Thus, the summary that follows is intended to offer general questions to help guide thinking early on in the business development process. It is not a substitute for the careful legal review that should be part of any effective consumer compliance program.3


Before delving into the possibilities of fintech, it is helpful to first review the basics of fair lending and UDAP.

Fair Lending: The Equal Credit Opportunity Act and the Fair Housing Act

The Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA) are the two key federal fair lending laws. ECOA prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, receipt of income from any public assistance program, or because a person has exercised certain legal rights under ECOA and other financial statutes. ECOA applies to both consumer and commercial credit. The FHA applies to credit related to housing and prohibits discrimination on the basis of race or color, national origin, religion, sex, familial status, and handicap.

The fair lending laws broadly prohibit two kinds of discrimination: disparate treatment and disparate impact. In some instances, both theories may apply. Disparate treatment occurs when a lender treats a consumer differently because of a protected characteristic. Disparate treatment ranges from overt discrimination to more subtle differences in treatment that can harm consumers and does not need to be motivated by prejudice or a conscious intent to discriminate. The Federal Reserve has made numerous referrals to the U.S. Department of Justice (DOJ) involving disparate treatment in pricing where bank employees charged higher fees or interest rates on loans to minorities than to comparably qualified nonminority consumers. These referrals have resulted in several DOJ enforcement actions. These cases typically involve situations in which bank employees had broad discretion to set interest rates and fees and could increase their own compensation by charging borrowers more.4

Disparate impact occurs when a lender’s policy or practice has a disproportionately negative impact on a prohibited basis, even though the lender may have no intent to discriminate and the practice appears neutral.5 A policy or practice that has a disparate impact may violate the law, unless the policy or practice meets a legitimate business necessity that cannot reasonably be achieved by a means that has less impact on protected classes.6 Factors that may be relevant to business necessity could include cost and profitability.7 For example, the CFPB and DOJ brought a discrimination enforcement action against a wholesale lender in 2015.8 In that case, the CFPB and DOJ alleged that the lender’s policies with respect to broker fees and its pricing practices resulted in minorities paying more for loans than nonminority borrowers and that the policies could not be justified by legitimate business necessity. In many cases, it is possible to frame an issue of possible discrimination as either disparate impact or disparate treatment. In fact, many enforcement actions do not indicate which theory was used. So, it is helpful to be familiar with both theories.

As we will explore further, fintech may raise the same types of fair lending risks present in traditional banking, including underwriting discrimination, pricing discrimination, redlining, and steering. Although some fintech trends may decrease certain fair lending risks, other trends could amplify old problems or create new risks.


Unfair or Deceptive Acts or Practices

Section 5 of the Federal Trade Commission Act prohibits unfair or deceptive acts or practices.9 The Dodd–Frank Wall Street Reform and Consumer Protection Act prohibits unfair, deceptive, or abusive acts or practices.10 Many states also have their own UDAP laws. Deceptive acts or practices are representations, omissions, or practices that are likely to mislead a consumer acting reasonably under the circumstances and are material (i.e., are likely to affect the consumer’s conduct or decision with respect to a product or service). Unfair acts or practices are those that cause or are likely to cause substantial injury to consumers that consumers cannot reasonably avoid. Additionally, the substantial injury must not be outweighed by countervailing benefits to consumers or competition.

Deception in the financial services industry often involves misrepresenting the terms or costs of financial products or services. For example, in 2015, the Federal Reserve announced a public enforcement action against a provider of financial aid and reimbursement services to colleges and universities and demand deposit account services to students.11 The Federal Reserve alleged, among other things, that the company failed to provide information about the fees, features, and limitations of its product before requiring students to decide how to receive their financial aid disbursement. Another example is the enforcement action of the Federal Trade Commission (FTC) and the Federal Deposit Insurance Corporation (FDIC) against CompuCredit,12 which advertised credit cards to consumers with poor credit histories. The FTC alleged that CompuCredit violated the UDAP prohibition when it misrepresented the amount of credit that would be available to consumers when they received the card, failed to disclose upfront fees, failed to disclose that purchases that triggered the company’s risk algorithm could reduce a consumer’s credit limit, and misrepresented a debt collection program as a credit card offer.

The unfairness prohibition is also relevant to financial services. In another FTC case, a website operator gathered extensive personal information from consumers for purported payday loan applications and purchased applications from other websites.13 Consumers believed that they were applying for loans, but the operator sold their application information, including Social Security numbers and bank account information, to companies that fraudulently debited their bank accounts.


Many fintech firms and banks are exploring new data sources as well as new analytical techniques, an approach sometimes referred to as big data. Big data does not have a uniform definition, but it generally refers to the analysis of large, complex data sets that are collected over time from different sources. These data sets, combined with developments in analytics, such as machine learning, can open up new approaches to data modeling. Instead of formulating a hypothesis and collecting data to test it, data sets can be analyzed to find patterns that may emerge.

Much has been written about the potential positive uses of big data to help businesses better serve consumers and to help policymakers solve social problems, as well as about potential concerns, such as fairness and accuracy.14 These concerns are not limited to financial services but extend broadly to both commercial and governmental uses of big data.15 In the criminal justice system, a model used by courts to predict recidivism has been criticized for potentially overpredicting the chance that black defendants would commit another crime.16 In the world of Internet advertising, researchers found that women were less likely to be shown ads for high-paying jobs.17 And, when Amazon initially launched same-day delivery, its algorithms excluded many minority neighborhoods from the service.18

So much depends on exactly which data are used, whether the data are accurate and representative, and how the data are used. A jarring reminder of the importance of representative data involves photo recognition software. Some photo software misclassified images of African Americans and Asian Americans, presumably because the data used to develop the software did not include sufficient diversity.19 Data also may reflect past biases. By way of illustration, if a hiring model for engineers is based on historical data, which may consist mostly of men, it may not adequately consider traits associated with successful engineers who are women.20 Thus, while statistical models have the potential to increase consistency in decision-making and to ensure that results are empirically sound, depending on the data analyzed and underlying assumptions, models also may reflect and perpetuate existing social inequalities. In short, big data should not be viewed as monolithically good or bad, and the fact that an algorithm is data driven does not ensure that it is fair or objective.
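One concrete way to surface the fairness concern described above is to compare a model's outcomes across groups. The sketch below computes a simple adverse impact ratio, the ratio of one group's approval rate to another's. All group labels, decisions, and numbers here are hypothetical, and a real fair lending analysis would be far more rigorous than this first-pass screen.

```python
# Illustrative only: a simple adverse impact ratio check on model approvals.
# The data are hypothetical; real fair lending testing is far more involved
# and should be designed with compliance and legal counsel.

def approval_rate(outcomes):
    """Share of applicants approved (outcomes are 1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected_outcomes, control_outcomes):
    """Ratio of the protected group's approval rate to the control group's.
    Values well below 1.0 can flag a disparity worth investigating."""
    return approval_rate(protected_outcomes) / approval_rate(control_outcomes)

# Hypothetical model decisions for two applicant groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # control group: 70% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # protected group: 40% approved

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.70 ≈ 0.57
```

A ratio meaningfully below 1.0 does not by itself establish a violation, but it is the kind of signal that should prompt the careful legal and statistical review the article describes.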

To help evaluate alternative data in fintech, we suggest asking some questions early in the process. Before going further, it is important to underscore that institutions should conduct a thorough analysis to ensure compliance with consumer protection laws before implementing new data and modeling methods. The questions and discussion that follow are not offered to replace that careful analysis but may be helpful for institutions early in the business development process.

What Is the Basis for Considering the Data?

Is there a nexus with creditworthiness?

The first question to ask before using new data is the basis for considering the data. If the data are used in the credit decision-making process, what is the nexus with creditworthiness? Some data have an obvious link to creditworthiness and are logical extensions of current underwriting practices, while others are less obvious. For example, for small business lending, some creditors are developing new underwriting models based on financial and business records.21 These models consider many of the same types of data used in traditional underwriting methods but in an empirically derived way based on analyzing thousands of transactions.22 Some models may be expressly developed for certain businesses, such as dry cleaners or doctors’ offices. In essence, these models are expanding automated underwriting — long used for mortgages and other consumer lending products — to small business loans. Similarly, for consumer loans, some firms consider more detailed financial information from consumers’ bank accounts — especially for “thin file” consumers who may lack extensive traditional credit histories — to evaluate their creditworthiness.

Using data with an obvious nexus to credit risk — often data that have long been used, but in a less structured way — can make good sense for lenders and borrowers. Better calibrated models can help creditors make better decisions at a lower cost, enabling them to expand responsible and fair credit access for consumers. Additionally, these models may decrease fair lending risk by ensuring that all applicants are evaluated by the same standards.

On the other hand, some data may lack an obvious nexus to creditworthiness. These data may be viewed as proxies or signals of potential creditworthiness or future income. Generally, the more speculative the nexus with creditworthiness, the higher the fair lending risk.23 It is easy to find examples of correlations between variables that are not meaningfully related.24 Even if the data have some predictive foundation, to the extent the data are correlated with race or other prohibited bases under the fair lending laws, careful analysis is critical. For example, we understand that some lenders consider where an applicant went to school or an applicant’s level of education. These data should be carefully evaluated for legal compliance before being used. This approach is reflected in the CFPB staff’s recent no-action letter to a firm that considers educational data, in addition to traditional factors such as income and credit score, in underwriting and pricing loans. The CFPB recognized that the alternative data may benefit consumers who are credit invisible or lack sufficient credit history but conditioned the no-action letter on extensive fair lending testing and data reporting.25

Careful analysis is particularly warranted when data may not only be correlated with race or national origin but may also closely reflect the effects of historical discrimination, such as redlining and segregation. For example, it’s been reported that some lenders consider whether a consumer’s online social network includes people with poor credit histories,26 which can raise concerns about discrimination against those living in disadvantaged areas. Instead of expanding access to responsible credit, the use of data correlated with race or national origin could serve to entrench or even worsen existing inequities in financial access. Finally, it is important to consider that some data may not appear correlated with race or national origin when used alone but may be highly correlated with prohibited characteristics when evaluated in conjunction with other fields.
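One early screen that analysts sometimes run on a candidate variable is to measure how strongly it correlates with a protected characteristic. The sketch below, using entirely hypothetical data, computes a Pearson correlation as a first-pass proxy check; correlation alone is not a legal analysis, and real fair lending testing involves much richer methods.

```python
# Illustrative only: checking whether a candidate underwriting variable is
# correlated with a protected characteristic. All data here are hypothetical,
# and correlation is only a first screen, not a full fair lending analysis.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

protected = [1, 1, 1, 0, 0, 0, 0, 1]                   # 1 = protected class member
feature = [2.0, 1.5, 1.0, 4.0, 3.5, 4.5, 3.0, 1.8]     # hypothetical candidate variable

r = pearson(feature, protected)
print(f"correlation with protected class: {r:.2f}")
# a strong correlation (positive or negative) flags potential proxy risk
```

Note also the article's point that a field may look uncorrelated on its own yet act as a strong proxy in combination with other fields, so single-variable checks like this one are only a starting point.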

Are the data accurate, reliable, and representative of all consumers?

Next, it is important to consider whether the data are accurate, reliable, and representative of a broad range of consumers. Inaccurate data can inappropriately penalize consumers and impair their access to credit. Inaccurate data can also prevent banks from making loans available to creditworthy borrowers. In recent years, for example, concerns have been raised about the accuracy and reliability of medical debt data. Federal Reserve and FTC studies have found widespread errors in public record data on consumers’ credit reports, much of which related to medical debt.27 Recent CFPB complaint data have underscored continuing concerns from consumers, including credit reports listing medical debt that was already paid, was for the wrong amount, or was not properly verified.28 As a result of concerns with these data, both FICO29 and VantageScore30 modified their scoring models to limit the weight placed on these debts. These changes followed a series of 2015 agreements between the three largest consumer reporting agencies and the attorneys general of over 30 states.31

In addition to accuracy and reliability, it is important to consider whether the data are representative of all consumers or only a subset. Although the previous examples involving photo recognition and hiring may seem extreme, it is easy to see that many data sets may not be fully representative of the population for which the resulting model will be used. For example, data used for behavioral modeling — such as browsing and social media data — may be skewed toward certain populations.

While noting this risk, it is worthwhile to pause and emphasize that new research on alternative data may in fact improve data availability and representation for the millions of consumers who are credit invisible.32 Lenders currently lack good tools to evaluate these consumers’ creditworthiness. Alternative data may result in new data sources that are accurate, representative, and predictive.33 Such data can increase access to credit for this population and permit lenders to more effectively evaluate their creditworthiness.


Will the predictive relationship be ephemeral or stable over time?

Finally, it is important to consider whether the predictive potential of the data is likely to be stable over time or ephemeral. For example, if a model uses online data from social media sites, such as Yelp or Facebook, what happens to the reliability of those data as consumers’ online habits evolve?

How Are You Using the Data?

Are you using the data for the purpose for which they have been validated?

Are the data being used for marketing, fraud detection, underwriting, pricing, or debt collection? Validating a data field for one use — such as fraud detection — does not mean it is also appropriate for another use, such as underwriting or pricing. Thus, it is important to ask whether the data have been validated and tested for each specific use. Fair lending risk can arise in many aspects of a credit transaction. Depending on how the data are used, relevant fair lending risks could include steering, underwriting, pricing, or redlining.

Do consumers know how you are using the data?

Although consumers generally understand how their financial behavior affects their traditional credit scores, alternative credit scoring methods could raise questions of fairness and transparency. ECOA, as implemented by Regulation B,34 and the Fair Credit Reporting Act (FCRA)35 require that consumers who are denied credit must be provided with adverse action notices specifying the top factors used to make that decision. The FCRA and its implementing regulations also require that consumers receive risk-based pricing notices if they are provided credit on worse terms than others.36 These notices help consumers understand how to improve their credit standing. However, consumers and even lenders may not know what specific information is used by certain alternative credit scoring systems, how the data impact consumers’ scores, and what steps consumers might take to improve their alternative scores. It is, therefore, important that fintech firms, and any banks with which they partner, ensure that the information conveyed in adverse action notices and risk-based pricing notices complies with the legal requirements for these notices.
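To illustrate how a lender using a points-based scorecard might derive the "top factors" for an adverse action notice, the sketch below uses a points-below-maximum approach, one common convention for generating reason codes. The factor names and point values are hypothetical, and this sketch is not a statement of what the law requires.

```python
# Illustrative only: generating adverse action reasons from a points-based
# scorecard via the "points below maximum" method: for each factor, compare
# the points the applicant earned with the maximum available, and report the
# factors with the largest shortfalls. Factor names and points are hypothetical.

MAX_POINTS = {"payment_history": 100, "utilization": 80, "account_age": 60}

def top_adverse_action_factors(applicant_points, n=2):
    """Return the n factors where the applicant lost the most points."""
    shortfalls = {
        factor: MAX_POINTS[factor] - pts
        for factor, pts in applicant_points.items()
    }
    return sorted(shortfalls, key=shortfalls.get, reverse=True)[:n]

applicant = {"payment_history": 40, "utilization": 70, "account_age": 30}
# shortfalls: payment_history 60, account_age 30, utilization 10
print(top_adverse_action_factors(applicant))  # ['payment_history', 'account_age']
```

The transparency concern the article raises is precisely that for some alternative scoring systems, neither consumers nor lenders can map the score back to understandable factors the way this simple scorecard can.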

Certain behavioral data may raise particular concerns about fairness and transparency. For example, in FTC v. CompuCredit, mentioned earlier, the FTC alleged that the lender failed to disclose to consumers that their credit limits could be reduced based on a behavioral scoring model.37 The model penalized consumers for using their cards for certain types of transactions, such as paying for marriage counseling, therapy, or tire-repair services. Similarly, commenters reported to the FTC that some credit card companies have lowered consumers’ credit limits based on the analysis of the payment history of other consumers that had shopped at the same stores.38 In addition to UDAP concerns, penalizing consumers based on shopping behavior may negatively affect a lender’s reputation with consumers.

UDAP issues could also arise if a firm misrepresents how consumer data will be used. In a recent FTC action, the FTC alleged that websites asked consumers for personal information under the pretense that the data would be used to match the consumers with lenders offering the best terms.39 Instead, the FTC claimed that the firm simply sold the consumers’ data.

Are you using data about consumers to determine what content they are shown?

Technology can make it easier to use data to target marketing and advertising to consumers most likely to be interested in specific products, but doing so may amplify redlining and steering risks. On the one hand, the ability to use data for marketing and advertising may make it much easier and less expensive to reach consumers, including those who may be currently underserved. On the other hand, it could amplify the risk of steering or digital redlining by enabling fintech firms to curate information for consumers based on detailed data about them, including habits, preferences, financial patterns, and where they live. Thus, without thoughtful monitoring, technology could result in minority consumers or consumers in minority neighborhoods being presented with different information and potentially even different offers of credit than other consumers. For example, a DOJ and CFPB enforcement action involved a lender that excluded consumers with a Spanish-language preference from certain credit card promotions, even if the consumer met the promotion’s qualifications.40 Several fintech and big data reports have highlighted these risks. Some relate directly to credit, and others illustrate the broader risks of discrimination through big data.

The core concern is that, rather than increasing access to credit, these sophisticated marketing efforts could exacerbate existing inequities in access to financial services. Thus, these efforts should be carefully reviewed. Some well-established best practices to mitigate steering risk could help. For example, lenders can ensure that when a consumer applies for credit, he or she is offered the best terms he or she qualifies for, regardless of the marketing channel used.

Which consumers are evaluated with the data?

Are algorithms using nontraditional data applied to all consumers or only those who lack conventional credit histories? Alternative data fields may offer the potential to expand access to credit to traditionally underserved consumers, but it is possible that some consumers could be negatively impacted. For example, some consumer advocates have expressed concern that the use of utility payment data could unfairly penalize low-income consumers and undermine state consumer protections.47 Particularly in cold weather states, some low-income consumers may fall behind on their utility bills in winter months when costs are highest but catch up during lower-cost months.

Applying alternative algorithms only to those consumers who would otherwise be denied based on traditional criteria could help ensure that the algorithms expand access to credit. While such “second chance” algorithms still must comply with fair lending and other laws, they may raise fewer concerns about unfairly penalizing consumers than algorithms that are applied to all applicants. FICO uses this approach in its FICO XD score that relies on data from sources other than the three largest credit bureaus. This alternative score is applied only to consumers who do not have enough information in their credit files to generate a traditional FICO score to provide a second chance for access to credit.48
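A "second chance" waterfall of the kind described above can be sketched as follows; the cutoffs, field names, and scores are hypothetical and purely illustrative of the sequencing, not of any particular scoring product.

```python
# Illustrative only: a "second chance" waterfall, in which an alternative-data
# score is consulted only for applicants who are too thin-file to receive a
# traditional score. Thresholds, scores, and field names are hypothetical.

TRADITIONAL_CUTOFF = 660
ALTERNATIVE_CUTOFF = 600

def decide(applicant):
    """Approve or deny, using the traditional score first and falling back to
    the alternative score only when no traditional score can be generated."""
    if applicant.get("traditional_score") is not None:
        return applicant["traditional_score"] >= TRADITIONAL_CUTOFF
    # Thin-file applicant: fall back to the alternative-data score
    return applicant.get("alternative_score", 0) >= ALTERNATIVE_CUTOFF

thick_file = {"traditional_score": 700}
thin_file = {"traditional_score": None, "alternative_score": 640}
print(decide(thick_file), decide(thin_file))  # True True
```

Because the alternative score is reached only by applicants who would otherwise have no score at all, the waterfall can only expand, never restrict, the pool of approvable applicants relative to traditional underwriting, which is why this design may raise fewer concerns, though it still must comply with fair lending and other laws.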

Finally, the approach of applying alternative algorithms only to consumers who would otherwise be denied credit may receive positive consideration under the Community Reinvestment Act (CRA). Recent interagency CRA guidance includes the use of alternative credit histories as an example of an innovative or flexible lending practice. Specifically, the guidance addresses using alternative credit histories, such as utility or rent payments, to evaluate low- or moderate-income individuals who would otherwise be denied credit under the institution’s traditional underwriting standards because of the lack of conventional credit histories.49


Fintech can bring great benefits to consumers, including convenience and speed. It also may expand responsible and fair access to credit. Yet, fintech is not immune to the consumer protection risks that exist in brick-and-mortar financial services and could potentially amplify certain risks such as redlining and steering. While fast-paced innovation and experimentation may be standard operating procedure in the tech world, when it comes to consumer financial services, the stakes are high for the long-term financial health of consumers.

Thus, it is up to all of us — regulators, enforcement agencies, industry, and advocates — to ensure that fintech trends and products promote a fair and transparent financial marketplace and that the potential fintech benefits are realized and shared by as many consumers as possible.


1 The author gratefully acknowledges the research assistance of Katrina Blodgett.

2 Kenneth P. Brevoort et al., “Data Point: Credit Invisibles,” CFPB Office of Research at 12 (May 2015).

3 There are many important consumer protection topics that are outside the scope of this article, including the Fair Credit Reporting Act and data security. For general information on these issues, see Federal Trade Commission, “Big Data: A Tool for Inclusion or Exclusion?” (January 2016). For example, as noted on page 22 of this report, failure to reasonably secure consumers’ data could be considered an unfair practice under the Federal Trade Commission Act.

4 See United States v. SunTrust Mortgage, Inc., No. 3:12‐cv‐00397‐REP (E.D. Va., Consent order filed September 14, 2012); United States v. Countrywide Home Loans, Inc., No. 11-cv-10540 (PSG) (C.D. Cal., Consent order filed December 28, 2011); United States v. PrimeLending, No. 3:10‐cv‐02494‐P (N.D. Tex., Consent order filed January 11, 2011). For a discussion about the risks of discretion in pricing, see Outlook Live 2015 Interagency Fair Lending Hot Topics (October 15, 2015); Outlook Live 2013 Interagency Fair Lending Hot Topics (October 24, 2013).

5 The Supreme Court recently affirmed the availability of a disparate impact theory under the Fair Housing Act in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., 135 S. Ct. 2507 (2015). It ruled that the Fair Housing Act permits liability under a disparate impact theory, and the act prohibits policies that seem to be neutral on their face but have a disparate impact on a protected class.

6 Regulation B, 12 C.F.R. Part 1002, Comment 1002.6(a)-2; CFPB Bulletin 2012-04 (Fair Lending) (April 18, 2012).

7 “Interagency Policy Statement on Discrimination in Lending,” 59 FR 18266, 18269 (April 15, 1994). The CFPB adopted the Policy Statement in 2012. CFPB, Bulletin 2012-04 at 2-3 (April 18, 2012).

8 See United States & CFPB v. Provident Funding Assocs., LP, No. 3:15‐cv‐02373 (N.D. Cal., Consent order filed May 28, 2015).

9 15 U.S.C. §45(a)(1).

10 Pub. L. 111–203, title X, § 1031 (July 21, 2010), codified at 12 U.S.C. § 5531.

11 Board of Governors of the Federal Reserve System, “Federal Reserve Board Announces Civil Money Penalty and Issues Cease and Desist Order Against Higher One, Inc.” (December 23, 2015).

12 FTC v. CompuCredit Corp., No. 1:08-CV-1976-BBM-RGV (N.D. Ga. 2008); In re CompuCredit Corp., Columbus Bank & Trust Co., First Bank of Delaware, First Bank & Trust, FDIC-08-139b (December 19, 2008).

13 FTC v. Sequoia One, Case No. 2:15-cv-01512 (D. Nev. 2015).

14 Executive Office of the President, “Big Data: Seizing Opportunities, Preserving Values” (May 2014), docs/big_data_privacy_report_may_1_2014.pdf; Federal Trade Commission, “Big Data: A Tool for Inclusion or Exclusion?” (January 2016).

15 See, e.g., Kate Crawford, “Artificial Intelligence’s White Guy Problem,” New York Times (June 26, 2016).

16 Alexandra Chouldechova, “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments” (Updated February 2017); Julia Angwin et al., “Machine Bias,” ProPublica (May 23, 2016). See also Kate Crawford, “Artificial Intelligence’s White Guy Problem,” New York Times (June 26, 2016).

17 Amit Datta, Michael Carl Tschantz, and Anupam Datta, “Automated Experiments on Ad Privacy Settings,” Proceedings on Privacy Enhancing Technologies 2015; Kate Crawford, “Artificial Intelligence’s White Guy Problem,” New York Times (June 26, 2016).

18 David Ingold and Spencer Soper, “Amazon Doesn’t Consider the Race of Its Customers. Should It?” Bloomberg (April 21, 2016), com/graphics/2016-amazon-same-day/; Kate Crawford, “Artificial Intelligence’s White Guy Problem,” New York Times (June 26, 2016).

19 Kate Crawford, “Artificial Intelligence’s White Guy Problem,” New York Times (June 26, 2016). See also Steven Gaines and Sara Williams, “The Perpetual Line-up: Unregulated Police Face Recognition in America,” Georgetown Law Center on Privacy & Technology (October 18, 2016), last visited October 19, 2017.

20 NPR, “Weapons of Math Destruction Outlines Dangers of Relying on Data Analytics” (September 12, 2016).

21 Michael Erman, “Automated Lenders Threaten to Eat Banks’ Lunch,” Reuters (June 11, 2015).

22 Although social media data may be considered nontraditional data for consumer lending, assessing a firm’s online presence, including visibility and reviews, may be viewed as a logical extension of data used in the manual underwriting of small business loans.

23 Cf. 1994 Policy Statement, 59 Fed. Reg. at 18269; OCC Bulletin 97-24, Credit Scoring Models, Appendix at 1-2, 10-11 (May 20, 1997).

24 See, e.g., Tyler Vigen, “Spurious Correlations,” last visited October 19, 2017.

25 The no-action letter states that the CFPB does not currently plan to bring a supervisory or enforcement action against Upstart with respect to ECOA. The firm’s application describes its underwriting model, which considers educational attainment in addition to traditional credit factors, particularly for consumers with little credit history. CFPB staff specified that the no-action letter was issued based on the facts and circumstances of the applicant, and that the no-action letter does not endorse the use of any particular credit variables or credit modeling. CFPB staff conditioned the no-action letter on the firm regularly reporting lending and compliance information to the CFPB to mitigate risk to consumers and to assist the Bureau in understanding the impact of alternative data on lending. CFPB, “CFPB Announces First No-Action Letter to Upstart Network” (September 14, 2017).

26 Matt Vasilogambros, “Will Your Facebook Friends Make You a Credit Risk?” The Atlantic (August 7, 2015).

27 Robert B. Avery et al., “An Overview of Consumer Data and Credit Reporting,” Federal Reserve Bulletin at 68, 71 (February 2003) (noting that there may be errors and inconsistencies in public record reporting, including medical collections lawsuits, for several reasons: many individuals have multiple public record items that appear to be part of a single episode, there are inconsistencies in public record reporting among geographic areas, and there is a lack of data regarding the type of plaintiff in a public record action, such as a collector of medical debt); id. at 69 (finding that about 52 percent of collection accounts are based on medical debt); Federal Trade Commission, “Report to Congress Under Section 319 of the Fair and Accurate Credit Transactions Act of 2003” (Accuracy Study) 50-51 (December 2012) (finding 44 potential and 25 corrected errors relating to public records out of 1,001 study participants); id. at 3, 50-51 (noting that collection accounts can include medical bills and finding 502 possible errors relating to collection accounts with 267 corrected out of a sample of 1,001 consumers).

28 Illustrating the potential impact of errors, a 2014 CFPB report found that about 19.5 percent of consumers have medical collection tradelines on their consumer reports. Of consumers who have any collection tradelines on their reports, 22 percent have only a medical debt tradeline, and 50 percent of those consumers have a credit history with no other major blemishes. CFPB, “Consumer Credit Reports: A Study of Medical and Non-medical Collections” (December 2014).

29 FICO, Press Release, “FICO® Score 9 Now Available to Consumers at myFICO.com” (March 8, 2016).

30 VantageScore, “Introducing VantageScore 4.0,” last visited October 19, 2017.

31 See 3.8.15.pdf. Under these agreements, the consumer reporting agencies will not report medical debt that is less than 180 days from the date of first delinquency. In addition, the agencies will remove medical collection accounts that are paid by insurance where the consumer had no responsibility. The settlement includes a host of other provisions, including stricter requirements for reporting the classification of the source of debt and enhancements to the consumer dispute process.

32 Kenneth P. Brevoort et al., “Data Point: Credit Invisibles,” CFPB Office of Research at 12 (May 2015).

33 CFPB, “Request for Information Regarding Use of Alternative Data and Modeling Techniques in the Credit Process” (February 14, 2017) (describing the CFPB’s request for information on the use of alternative data and alternative models in credit scoring).

34 12 C.F.R. §1002.9.

35 15 U.S.C. §1681m.

36 15 U.S.C. §1681m(h); 12 C.F.R. Part 1022, subpart H; 16 C.F.R. Part 640.

37 FTC v. CompuCredit Corp., No. 1:08-CV-1976-BBM-RGV (N.D. Ga. 2008), Complaint at 34-35, 39.

38 FTC, Big Data Report at 9; Ron Lieber, “American Express Kept a (Very) Watchful Eye on Charges,” New York Times (January 30, 2009) (describing letters sent to customers by American Express stating that it was reducing their credit line based on the businesses the user patronized).

39 FTC v. Blue Global, LLC, No. 2:17-CV-021177-ESW (D. Ariz. 2017).

40 CFPB, “CFPB Orders GE Capital to Pay $225 Million in Consumer Relief for Deceptive and Discriminatory Credit Card Practices” (June 19, 2014). The promotions allowed credit card customers with delinquent accounts to settle their balances. See also In re American Express Centurion Bank and American Express Bank, FSB, File No. 2017-CFPB-0016 (Consent Order August 23, 2017) (CFPB action against American Express alleging discrimination for offering credit and charge card terms that had higher interest rates, stricter credit cutoffs, or less debt forgiveness to consumers in Puerto Rico and other U.S. territories and to some consumers with Spanish-language preferences).

41 Facebook does not have race as a data field on its users. Instead, it uses racial affinity determined by an algorithm. See Annalee Newitz, “Facebook’s Ad Platform Now Guesses at Your Race Based on Your Behavior,” Ars Technica (March 18, 2016).

42 Julia Angwin and Terry Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica (October 28, 2016).

43 Emily Steel and Julia Angwin, “On the Web’s Cutting Edge, Anonymity in Name Only,” Wall Street Journal (August 4, 2010); Jennifer Valentino-Devries and Jeremy Singer-Vine, “Websites Vary Prices, Deals Based on Users’ Information,” Wall Street Journal (December 24, 2012).

44 Jennifer Valentino-Devries and Jeremy Singer-Vine, “Websites Vary Prices, Deals Based on Users’ Information,” Wall Street Journal (December 24, 2012).

45 Julia Angwin and Jeff Larson, “The Tiger Mom Tax: Asians Are Nearly Twice as Likely to Get a Higher Price from Princeton Review,” ProPublica (September 1, 2015).

46 Aniko Hannak et al., “Measuring Price Discrimination and Steering on E-commerce Web Sites,” Association for Computing Machinery (2014).

47 See, e.g., National Consumer Law Center, “Full Utility Credit Reporting: Risks to Low-Income Consumers” (July 2012) (discussing risks to consumers of utility credit reporting and high rates of delinquency for low-income households in cold-weather states); Massachusetts Department of Energy and Environmental Affairs, “Help with Your Winter Utility Bill,” last visited October 19, 2017 (describing several options for low-income consumers, including protection from energy shut-off during winter and payment plans).

48 FICO, “FICO Score XD: Expanding Credit Opportunities,” last visited October 19, 2017.

49 Interagency Questions and Answers Regarding Community Reinvestment, 81 Fed. Reg. 48,506, 48,538-39 (July 15, 2016).