Consumer Compliance Outlook: Third Issue 2019

From Catalogs to Clicks: The Fair Lending Implications of Targeted, Internet Marketing

By Carol A. Evans, Associate Director, and Westra Miller, Counsel, Division of Consumer and Community Affairs, Federal Reserve Board

When introduced in the late 1880s, the Sears catalog became a powerful tool that allowed African Americans, who suffered under Jim Crow and other forms of discrimination and segregation, to have the same shopping experience as whites.1 During this time, African Americans routinely faced discrimination in retail stores, such as higher prices and a limited selection of goods. The social disruption created by the Sears catalog prompted some white storeowners to encourage their customers to burn the catalog in the streets in protest.2 Recognizing the challenges that its African American customers faced, Sears included instructions on how to place an order through the post office and provided other ways for rural African Americans, non-English speakers, and others who had been systemically excluded from American civil society to order from the catalogs.3 The anonymity of the catalog offered shoppers of all backgrounds a level of retail inclusion that would take decades to achieve in physical stores. The catalog offered other benefits as well: Sears extended credit that allowed African American farmers to buy the same items as their white peers, without the markup imposed when buying on credit at a local general store. The catalog’s prices were also lower than those offered in the rural towns or countryside where many African Americans lived.4

[Image: Sears Consumers Guide catalog]

These benefits of the Sears catalog provide important lessons about financial inclusion. The anonymity offered by ordering from a catalog leveled the playing field for African Americans and other disadvantaged groups. By ensuring that everyone had access to the same products, Sears played a role in opening the marketplace for marginalized consumers. Ostensibly, the Internet could play this same role for modern consumers. However, today this broad-based approach appears to have been largely eclipsed by targeted marketing strategies designed to reach specific categories of consumers and to undermine consumers’ anonymity.

Online advertising platforms, such as those offered by Facebook, allow companies to use vast amounts of consumer data to target marketing in a highly individualized manner by using sophisticated algorithms that will only display advertisements to audiences or Internet users with desired characteristics. Although the anonymity of a catalog may have been an antidote to discrimination in face-to-face shopping encounters, today’s Internet leaves consumers more — not less — identifiable as companies become more efficient at targeting certain demographics.

The results of this targeted marketing may be discriminatory in contexts in which consumer protection and civil rights laws apply, such as marketing credit. While the use of technology in consumer financial services, or fintech, has created many innovations that benefit consumers, the ability to filter the reach of marketing so narrowly can raise a range of consumer protection and financial inclusion concerns, including the fair lending risks of steering and redlining. This article focuses on the increased use of Internet-based marketing practices to target audiences by personal characteristics, geography, or even hobbies. These practices may explicitly or implicitly classify users by characteristics protected under fair lending laws — such as race, national origin, or sex — and risk putting financial inclusion out of reach for millions of consumers.

TARGETED MARKETING: CROSS-SITE TRACKING, LEAD GENERATION, AND E-SCORES

To a great and perhaps unanticipated extent, the combination of sophisticated analytic techniques and big data has unmasked the anonymity of the Internet. In 1993, the New Yorker published its famous cartoon captioned, “On the Internet, nobody knows you’re a dog.” However, more than 25 years after the New Yorker’s observation, not only can web analytics recognize that you are a dog, they also know your favorite toy, whether you chase squirrels, and the last time you wagged your tail. For humans on the Internet, the wealth of data includes your current location, your neighborhood and its characteristics, your browsing and shopping habits, and the companies with which you do business.

This treasure trove of data about consumers can help enrich consumers’ experiences and provide financial benefits tailored to their situation. For example, some investment companies and financial institutions use robo-advisors to provide customers with portfolios based on their financial and risk profiles.5 Both bank and nonbank financial service providers are exploring whether the use of alternative data sources in credit scoring can expand access to credit for creditworthy consumers with limited or no credit histories.6

However, consumer data may also be used in ways that consumers have not intended or anticipated, often to fuel increasingly sophisticated marketing strategies that aim to target certain consumer groups. Many consumers have experienced the feeling of being tracked on the Internet when the item they had been browsing on one website is now being advertised to them on a second and then a third site. But companies now rely on consumers’ browsing histories in less obvious ways as well. Through the use of sophisticated cross-site tracking, lead generation, and other techniques described more in this section, an immense amount of consumers’ personal data is now used to determine the types of products advertised to individual consumers, eliminating any possibility of a universal experience on the Internet.

Cross-Site Tracking

The advertisements a consumer sees while browsing the Internet are the result of a complex interaction of several invisible activities. Websites track users and their browsing behaviors, with the goal of creating detailed profiles that can be used for marketing purposes. Companies sell consumer data to third parties, which compile these data from many sources.7 Tracking methods often include cookies, small data files that a website places in a user’s web browser. These are used by website owners to identify the user and personalize that user’s experience on the site.8 Cookies can then be used to track users across websites.9

Methods may also include fingerprinting, in which each computer or device is given a unique identifier, allowing website owners to track when the same device visits that webpage again. This can be a powerful tool at a time when many devices are used by only one individual.10 These methods also allow website owners (or others) to share the information they have collected and link together multiple profiles across different sites to yield a more finely detailed view of a single consumer.11 Indeed, companies are able to determine if multiple devices belong to a single consumer,12 and this information can be combined with offline data on consumers, such as data available from retailers and credit card companies.13 Taken together, these techniques, as well as others, allow companies to build ever more detailed profiles on individual consumers to target the marketing those consumers see.
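
To make the mechanics concrete, the following sketch shows, in deliberately simplified form, how a hypothetical tracker might derive a stable device fingerprint from browser attributes and use it to link a single consumer’s activity across unrelated websites. The attribute names, sites, and code are illustrative assumptions only and do not reflect any particular company’s implementation.

```python
# Illustrative sketch only: how a hypothetical tracker might compute a device
# "fingerprint" from browser attributes and link visits across sites.
import hashlib

def device_fingerprint(attributes):
    """Hash a canonical ordering of browser/device attributes."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The same (hypothetical) device observed on two unrelated sites.
visit_site_a = {"user_agent": "Mozilla/5.0 (example)", "screen": "1920x1080",
                "timezone": "America/Chicago", "fonts": "Arial;Calibri;Georgia"}
visit_site_b = dict(visit_site_a)

profile_store = {}  # fingerprint -> list of sites where the device was seen
for site, attrs in [("site-a.example", visit_site_a), ("site-b.example", visit_site_b)]:
    fingerprint = device_fingerprint(attrs)
    profile_store.setdefault(fingerprint, []).append(site)

print(profile_store)  # one fingerprint now ties together activity from both sites
```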

Lead Generators

In addition to the consumer data available from tracking techniques, online lead generators collect data about consumers by encouraging website users to volunteer personal information about themselves, often when users submit personal details to receive more information about a product or service. For example, a prospective homebuyer might submit personal information when using a mortgage rate calculator. This information can then be sold to mortgage brokers, credit card issuers, or others seeking details on prospective customers.14 Often, consumers may not realize that the information they just entered on a website will be sold, at least until they start receiving unsolicited phone calls and text messages from companies they never contacted.15

Lead generators may charge more for leads on consumers seeking credit, such as potential mortgage borrowers.16 Lead generation also can raise concerns about bias and exploitation. For example, at one time, the College Board’s website used personal information, such as whether prospective students expected to need financial aid, to immediately filter the results presented by its search tool and direct those individuals to search results highlighting for-profit colleges over private and public nonprofit colleges and universities.17

E-Scores

Companies buying leads in bulk may seek even more data on consumers to better distinguish potentially profitable leads from those unlikely to result in a future customer. One method of predicting a consumer’s possible future activity is to use online consumer scores, or e-scores, which are calculated using complex algorithms and data mined from both online and offline sources.18 These privately calculated scores may factor in details such as occupation, salary, home value, and spending on certain consumer goods to predict a consumer’s future spending and to allow companies to rank a consumer’s estimated future profitability.19 A company might submit data sets containing the names of both leads and existing customers to an e-scoring service. From those data sets, the e-scoring system would extract thousands of variables, identify predictive factors, and score the prospective customer leads based on how closely they resemble the company’s existing customers.20

E-scores are not new to the financial services sector. For example, a multinational credit card issuer used such scores to determine instantly what type of credit card to offer a customer calling into its call center. The scores also flagged customers thought to be “high-value”; those callers were immediately routed to live agents, while callers considered less attractive prospects were routed to an overflow call center.21
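
A rough sense of how the lookalike scoring described above works can be conveyed with a short sketch. The example below is a hypothetical simplification: it trains a basic model to distinguish existing customers from non-customers using a handful of made-up attributes, then scores prospective leads by how closely they resemble current customers. Real e-scoring systems reportedly draw on thousands of variables from online and offline sources; every feature, value, and threshold here is an assumption for illustration.

```python
# Hypothetical sketch of "lookalike" lead scoring: rank prospective leads by how
# closely their attributes resemble existing customers. All data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [estimated_income_k, home_value_k, monthly_card_spend]
existing_customers = np.array([[95, 410, 2200], [120, 520, 3100], [88, 390, 1900]])
non_customers      = np.array([[35, 140,  300], [42, 160,  450], [50, 180,  600]])

X = np.vstack([existing_customers, non_customers])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = existing customer, 0 = not

model = LogisticRegression().fit(X, y)

# Prospective leads purchased from a lead generator (hypothetical values).
leads = np.array([[100, 450, 2500], [38, 150, 350]])
e_scores = model.predict_proba(leads)[:, 1]  # resemblance to current customers
for lead, score in zip(leads, e_scores):
    print(lead, round(float(score), 2))  # higher score = lead deemed more "profitable"
```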

In today’s world of targeted marketing, advertisements are built for individual consumers, with advertisers able to target their audience by a vast range of increasingly specific characteristics, such as location, political affiliation, or occupation.22 Companies rely on data on consumers’ past activity and on predictions of future activity. As with e-scores, marketing data companies predict consumers’ future activities by comparing specific consumers with other consumers deemed to be suitably similar. The idea behind many of these predictive models is that “birds of a feather flock together.”23 Although some consumers may appreciate receiving targeted online advertisements instead of general ones, some stakeholders are raising concerns about these practices, and privacy researchers question how consumer data are collected and used, with some comparing the practice to the exploitation of natural resources.24

FAIR LENDING BASICS

The use of these and other targeted Internet-based marketing practices presents unique challenges, but it raises the same core fair lending risks present in the traditional, offline marketing of credit products. Although such data-driven practices may offer new benefits, this type of marketing is not beyond the reach of fair lending laws. The Federal Reserve, along with other federal agencies, enforces two primary federal laws that ensure fairness in lending and apply to certain marketing activities: the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA).

ECOA and FHA

The ECOA prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of income from any public assistance program, or because a person has in good faith exercised certain legal rights under the ECOA or other provisions of the Consumer Credit Protection Act.25 The ECOA and its implementing regulation, Regulation B, apply to both consumer and commercial credit. Through Regulation B, the ECOA prohibits creditors from making oral or written representations in advertising or other formats that would discourage, on a prohibited basis, a reasonable person from making or pursuing a credit application.26

The FHA applies to credit related to housing and prohibits discrimination on the basis of race or color, national origin, religion, sex, familial status, and handicap.27 The FHA prohibits discrimination in advertising regarding the sale or rental of a dwelling, which includes mortgage credit discrimination.28

These fair lending laws prohibit two kinds of discrimination: disparate treatment and disparate impact. Disparate treatment occurs when a lender treats a consumer differently because of a protected characteristic (e.g., race or age). Disparate treatment includes overt discrimination as well as less obvious differences in treatment, and it does not need to be motivated by prejudice or a conscious intent to discriminate. The Federal Reserve has made a number of referrals to the U.S. Department of Justice (DOJ) involving discrimination in pricing and underwriting, as well as redlining. Many of these referrals have resulted in DOJ enforcement actions.29

Disparate impact occurs when a lender’s policy or practice has a disproportionately negative impact on a prohibited basis (i.e., protected characteristics), even though the lender may have no intent to discriminate and the practice appears neutral.30 A policy or practice that has a disparate impact may violate the law, unless the policy or practice meets a “legitimate business need” that cannot reasonably be achieved by a means that has less impact on protected classes.31 It is often possible to view issues raising fair lending concerns under both disparate treatment theory and disparate impact theory.

While the ECOA’s Regulation B and the FHA both provide specific prohibitions against discrimination or discouragement in the marketing of credit and/or mortgage credit, these laws also more broadly prohibit redlining and steering. Redlining is a form of illegal discrimination in which an institution provides unequal access to credit, or unequal terms of credit, based on the race, color, or national origin of a neighborhood.32 Likewise, steering is a form of illegal discrimination in which applicants or prospective applicants for credit are guided toward or away from a specific loan product or feature because of their race, sex, or other prohibited characteristic, rather than based on the applicant’s needs or other legitimate factors.33 Steering occurs when a bank’s actions are taken on a prohibited basis, even when those who have been steered are not measurably harmed.34 These and the other protections of the ECOA and the FHA apply to credit marketing in the online world, just as they do in the offline one.35

HOW TARGETED MARKETING MAY RAISE FAIR LENDING CONCERNS

Technology has made it easier for businesses to use consumer data for direct marketing and advertising to consumers who are predicted to be most interested in specific products. The ability to use such data for marketing and advertising may make it less expensive to reach consumers, resulting in a marketing strategy that may appear more effective to the advertiser. However, when such strategies are used to market credit, they may raise fair lending risks. When advertisers (or the technology companies they rely on) curate the information consumers see based on detailed data about them, including habits, preferences, financial patterns, and where they live, there is a risk that this curation may result in digital redlining or steering. Likewise, when Internet-based marketing relies on artificial intelligence (AI) and machine learning (ML) technologies, the potential for discrimination may increase.

Facebook’s settlement in March 2019 with several civil rights organizations and the related discrimination charge issued by the U.S. Department of Housing and Urban Development (HUD) put a spotlight on these concerns.36 Facebook’s advertising practices initially drew attention when it was revealed that the company permitted advertisers to exclude groups of Facebook users with selected personal characteristics from viewing particular advertisements on the social media site.37 Facebook’s technology effectively allowed advertisers to show advertisements to certain users while excluding others based on sex or age, or on interests, behaviors, demographics, or geography that related to or were associated with race, national origin, sex, age, or family status.38

The advertising platform also permitted advertisers to create custom audiences of Facebook users who shared common characteristics with the advertiser’s current customers or other desired groups.39 By permitting these features on its website, Facebook was alleged to have facilitated advertisers’ discrimination on multiple bases protected under the FHA because wide swaths of users were not able to view certain advertisements solely because of their personal characteristics.

Facebook’s March 2019 settlement promised significant changes: The company agreed to retool its advertising platform and appeared to acknowledge the risk of digital redlining in its decisions to limit the filtering options available to advertisers, restrict geographic targeting to a minimum radius of 15 miles from a specific address or from the center of a city, and disallow targeting by zip code. The settlement also seemed to address the harm caused when advertisements are not broadly accessible; Facebook agreed to build a tool that would allow any Facebook user to view any advertisement for housing or credit placed on the platform anywhere in the United States, regardless of the audience originally targeted for that advertisement or where the viewer lives.40

In the days after the Facebook settlement, HUD also charged the company with housing discrimination because of these practices.41 HUD’s charge was the result of a formal, fact-finding investigation of the social media company by that agency, with HUD officials earlier noting that “[t]he Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse,” and that “[w]hen Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face.”42

The March 2019 discrimination charge provided significant detail regarding the company’s activities and alleged that Facebook not only facilitated discrimination by advertisers using its platform but that the social media giant also engaged in discrimination itself in how it delivered advertisements to users.

Notably, HUD alleged that Facebook’s algorithms are applied to every advertisement on its platform, regardless of the advertisers’ intent. That is, Facebook’s advertising algorithms allegedly operate independently of advertisers to determine which users will view advertisements based on the users’ predicted response.

As a result, these algorithms may raise fair lending risks and render some advertisements invisible to certain users, disproportionately impacting users based on protected characteristics, such as race and sex. Indeed, an academic study of Facebook’s advertisement delivery practices demonstrated just that, finding “previously unknown mechanisms that can lead to potentially discriminatory advertisement delivery, even when advertisers set their targeting parameters to be highly inclusive.”44 The study’s authors published groups of advertisements on Facebook, varying advertisement features to observe how changing a feature would affect the demographics of the audience of a particular advertisement. They found that the delivery of a particular advertisement may be skewed for reasons including the content of the advertisement itself, the images contained in the advertisement, and how advertisement images are classified by Facebook. Indeed, according to their results, an advertisement “can deliver vastly different racial and gender audiences” based solely on the advertisement’s creative content.45

For example, they created a suite of advertisements for both rental housing and real estate for purchase. They varied the type of property advertised and its implied cost (signaled by text referencing a “fixer upper” or luxury features). The delivery of these advertisements was noticeably skewed based on race; some advertisements were delivered to Facebook audiences that were over 72 percent African American, while others were delivered to audiences that were as little as 51 percent African American.46
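
The kind of skew the researchers documented can be summarized with a simple calculation: for each advertisement variant, compare the demographic composition of the audience the platform actually delivered it to, even though every variant targeted the same broad audience. The sketch below uses entirely made-up delivery counts and generic group labels solely to show the arithmetic; it is not the study’s data or methodology.

```python
# Hypothetical delivery report: identical targeting, different ad creative.
# The counts and group labels are invented for illustration only.
delivered = {
    "luxury_listing":      {"group_a": 1450, "group_b": 3550},
    "fixer_upper_listing": {"group_a": 3620, "group_b": 1380},
}

for ad, counts in delivered.items():
    total = sum(counts.values())
    shares = {group: round(n / total, 2) for group, n in counts.items()}
    print(ad, shares)  # same targeted audience, very different delivered audiences
```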

Facebook’s advertisement delivery policies appear to be driven by both profit and efficiency: It appears that it may be most efficient to show advertisements to consumers who are the most likely to want a certain product or job because revenue is generated when consumers click on advertisements. But efficiency in this context may be at cross purposes with bedrock principles of nondiscrimination. Even though more men than women, for example, may arguably be interested in certain jobs, both the law and social goals of diversity and inclusion require that the advertisements be shown to both men and women.

The HUD discrimination charge demonstrates the risks of relying on decision-making processes that are based on ML models that lack appropriate controls. With large volumes of consumer data now available from both online and offline sources, a wide array of industries are looking to AI and ML to automate decision-making processes and improve predictions of future outcomes because these technologies can find patterns or correlations in massive data sets that humans could not. However, ML algorithms are only as good as the data sets on which they are “trained.” It is this training data that teaches the algorithms what the outcomes may be for certain people or objects.

Incomplete or unrepresentative training data, or training data that reflect real-world historical inequities or unconscious bias, may lead to ML models that generate discriminatory results.47 For example, it was widely reported in 2018 that Amazon had invested several years in developing an experimental hiring tool that relied on AI to rate candidates for employment opportunities. However, the tool did not make gender-neutral hiring recommendations as expected.

The tool had been trained using resumes that had been submitted to the company over a 10-year period, the majority of which had come from male applicants.48 As a result, it appears that the tool had learned to replicate the long-standing underrepresentation of women in the technology industry and reinforced this as the norm by downgrading resumes with references to the word “women’s” and to all-female colleges.49 Indeed, the concept of unconscious bias in AI and ML models has received increased attention in recent years.50 Unfortunately, algorithms do not remove human bias; even automated processes cannot escape the weight of data that have been tainted by such bias.51
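
The underlying dynamic is easy to reproduce in miniature. The sketch below is not Amazon’s system or any real model; it simply trains a basic classifier on synthetic “hiring” data in which one group was historically favored regardless of qualifications, and shows that the resulting model scores two equally qualified applicants very differently based on a proxy attribute. All variables and values are assumptions for illustration.

```python
# Illustrative sketch: a model trained on historically skewed outcomes learns
# to reproduce that skew. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)    # 0/1 proxy attribute (e.g., an inferred demographic)
skill = rng.normal(0, 1, n)      # the qualification we actually care about

# Historical outcomes favored group 0 regardless of skill.
hired = ((skill > 0.5) | (group == 0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants who differ only in the proxy attribute.
applicants = np.array([[0.4, 0], [0.4, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 0 scores markedly higher
```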

The use of data-driven technology in marketing also raises additional risks for discriminatory outcomes. One concern is that consumers will be misidentified and not offered the full range of products for which they might be qualified. A news article reported that a bank used predictive analytics to instantaneously decide which credit card offer to show to first-time visitors to its website: a card for those with “average” credit or a card for those with better credit.52 This practice and others like it raise the possibility that a consumer might be digitally steered to a subprime product based on behavioral analytics, even though the consumer could qualify for a prime product.

Another concern is that the intense curation of the information available to each consumer, caused in part by targeted marketing techniques, turns traditional notions of financial literacy and inclusion on their head. For years, consumers have been encouraged to seek information on financial products and to comparison shop. But those directives are undermined by targeted marketing; if the content that consumers see is determined by what a firm knows about them, it is not possible for them to select from among the full range of products and/or prices available online. Thus, even consumers who seek out information to make informed decisions may be thwarted from making the best choices for themselves or their families and instead may be subject to digital redlining or steering.

The growing prevalence of AI-based technologies and the vast amounts of available consumer data raise the risk that technology could effectively turbocharge or automate bias. In doing so, we risk further entrenching past discrimination into future decision-making. In other words, whereas in the past an individual’s conscious or unconscious bias may have resulted in discrimination, in the future these biases may be carried out by algorithms, in effect automating discrimination. Although AI and ML have promise, the potential to use increasingly detailed data about consumers to either purposefully or unwittingly automate forms of discrimination is very real. Given these risks, targeted marketing efforts used to advertise credit products should be carefully reviewed, as discussed in the next section.

IMPLICATIONS FOR FINANCIAL INSTITUTIONS

Although our knowledge of targeted Internet-based marketing practices, as well as the technology animating the practices themselves, is evolving, financial institutions nonetheless can address some of the risks of redlining and steering that such marketing may raise. For example, lenders can ensure that they understand how they are employing targeted, Internet-based marketing and whether any vendors use such marketing on their behalf. As the HUD discrimination charge against Facebook illustrates, advertising filters that exclude predominantly minority neighborhoods or groups of individuals based on a prohibited characteristic or another trait that is correlated with a prohibited characteristic raise fair lending concerns and could result in legal violations.

Lenders that use online advertising services or platforms can take steps to ensure that they monitor the terms used for any filters, as well as any reports they receive documenting the audience(s) that were reached by the advertising. It is also important to understand whether a platform employs algorithms — such as the ones HUD alleges in its charge against Facebook — that could result in advertisements being targeted based on prohibited characteristics or proxies for these characteristics, even if that is not what the lender intends.
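
As one illustration of what such monitoring could look like in practice, the sketch below compares the geographic reach reported for a hypothetical credit campaign against the lender’s overall market area and flags large gaps for review. The tract labels, delivery report, and the 0.8 review threshold are all assumptions made for illustration; they are not a prescribed methodology or a legal standard.

```python
# Hypothetical monitoring check: did a credit campaign's delivered reach cover
# majority-minority areas of the market at a rate comparable to other areas?
market_area_tracts = {"tract_01": "majority_minority", "tract_02": "majority_minority",
                      "tract_03": "other", "tract_04": "other", "tract_05": "other"}
tracts_reached = {"tract_03", "tract_04", "tract_05"}  # from the platform's delivery report

def reach_rate(group):
    tracts = [t for t, g in market_area_tracts.items() if g == group]
    return sum(t in tracts_reached for t in tracts) / len(tracts)

mm_rate, other_rate = reach_rate("majority_minority"), reach_rate("other")
ratio = mm_rate / other_rate if other_rate else float("nan")
print(mm_rate, other_rate, ratio)
if ratio < 0.8:  # illustrative review threshold, not a legal standard
    print("Flag campaign for fair lending review: minority-area reach is disproportionately low")
```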

Despite how new the technology may be, many of the tools to address fair lending risks in the offline world may be modified to mitigate risks in the evolving online world. For example, to mitigate redlining risk, lenders can closely review any geographic filters in use and include the monitoring of all marketing and outreach activities as part of their larger fair lending risk management programs.53

To mitigate steering risks, practices developed by brick-and-mortar lenders offering prime and subprime products through different channels may be helpful for lenders employing complex online marketing strategies. For example, lenders can ensure that, when a consumer applies for credit, she is offered the best terms she qualifies for, regardless of what marketing channel or platform was used to target marketing to the consumer or collect her application. By taking these and other steps, lenders and others who advertise credit products can work to ensure that technology is deployed in consumer financial services in ways that are consistent with a commitment to fair lending.
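
The pricing control described above can be expressed as a simple rule: once an applicant is approved, select the best product she qualifies for from the full product set, without regard to the marketing channel that sourced the application. The sketch below uses hypothetical products, criteria, and rates purely to illustrate the idea.

```python
# Hypothetical control: offer the best qualifying terms regardless of the
# marketing channel that sourced the application. Products and criteria are
# illustrative assumptions, not real underwriting standards.
PRODUCTS = [  # (name, minimum credit score, APR), best terms first
    ("prime_card", 720, 0.149),
    ("standard_card", 660, 0.199),
    ("subprime_card", 580, 0.269),
]

def best_offer(credit_score, marketing_channel):
    # marketing_channel is deliberately ignored in the product decision.
    for name, min_score, apr in PRODUCTS:
        if credit_score >= min_score:
            return f"{name} at {apr:.1%} APR"
    return "no offer"

print(best_offer(735, marketing_channel="subprime_lead_list"))  # -> prime_card at 14.9% APR
```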

CONCLUSION

Unlike the democratizing effect of the Sears catalog, targeted marketing may constrain consumers’ access to the broad range of products and services available today. By making assumptions about what products might be the right fit for consumers, targeted marketing has an increasingly significant, though largely invisible, impact on the advertisements shown to consumers online. In the context of credit, without careful implementation and monitoring, Internet-based targeted marketing may undermine financial inclusion if a consumer is not shown the full range of financial products and services for which she could qualify.

Technological innovation has played an important role in expanding access to consumer credit in the past and can continue to do so. Yet, the well-documented and persistent gaps in wealth and income between people of different races and ethnicities are a reminder of the high stakes that fair access to credit opportunities has for many consumers, especially minorities.54 Thus, thoughtful design and monitoring of technologies that rely on consumer data are critical to guard against the risk that the volume and granularity of these data will lead to uses that automate human biases and calcify the legacy of past discrimination. The marketing of housing and credit products in particular carries obligations under the ECOA and the FHA. As a result, the use of technology reliant on consumer data for this type of marketing should be approached with an awareness of the risks that any selected technologies bring.

While the manner in which consumers access financial products and services has changed dramatically since the days of post office orders from the Sears catalog, financial institutions and their regulators need to ensure the underlying bedrock principles of consumer inclusion and fairness remain timeless.

ENDNOTES

1 Louis Hyman, “Opinion: How Sears Helped Oppose Jim Crow,” New York Times (October 20, 2018).

2 See Endnote 1.

3 Gaby Del Valle, “How the Sears Catalog Transformed Shopping Under Jim Crow, Explained by a Historian,” Vox (October 19, 2018).

4 See Endnote 1.

5 Lisa Kramer and Scott Smith, “Can Robo Advisors Replace Human Financial Advisors?” Wall Street Journal (February 28, 2016).

6 See, e.g., “The Use of Cash-Flow Data in Underwriting Credit: Empirical Research Findings,” FinRegLab (July 2019); “Leveraging Alternative Data to Extend Credit to More Borrowers,” FICO Blog (May 22, 2019) (accessed July 12, 2019); “Alternative Data: The Key to Expanding the Credit Universe,” Experian (July 1, 2019). In December 2019, the Federal Reserve, along with other federal agencies, released an interagency statement on the use of alternative data in credit underwriting. Interagency Statement on the Use of Alternative Data in Credit Underwriting (December 3, 2019).

7 Jason Morris and Ed Lavandera, “Why Big Companies Buy, Sell Your Data,” CNN (August 23, 2012).

8 See “Cross-Site Tracking: Let’s Unpack That,” The Firefox Frontier (April 12, 2018).

9 See Endnote 8.

10 Simon Hill, “Computing: How Much Do Online Advertisers Really Know About You? We Asked an Expert,” Digital Trends (June 27, 2015).

11 See Endnote 10.

12 See Endnote 10.

13 Stuart A. Thompson, “These Ads Think They Know You,” New York Times (April 30, 2019).

14 Natasha Singer, “Secret E-Scores Chart Consumers’ Buying Power,” New York Times (August 18, 2012); see also “Led Astray: Online Lead Generation and Payday Loans,” Upturn (October 2015).

15 See “Led Astray,” Endnote 14, p. 5.

16 See Singer, Endnote 14.

17 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown Random House, 2016).

18 See Singer, Endnote 14.

19 See Singer, Endnote 14.

20 See Singer, Endnote 14.

21 See Singer, Endnote 14.

22 See Endnote 13.

23 See Endnote 13 (quoting Chat Engelgau of Acxiom).

24 See Thompson, Endnote 13.

25 See 15 U.S.C. §§1691-1691f.

26 See 12 C.F.R. §1002.4(b) (2019). The question of whether targeted marketing would violate the ECOA has been raised before. For example, a January 2016 report issued by the Federal Trade Commission raised the question of whether targeted marketing of credit products, such as credit cards, using data and algorithms would violate the ECOA. The report advised caution. FTC, Big Data: A Tool for Inclusion or Exclusion? (2016).

27 See 42 U.S.C. §§3601-3619.

28 Under the FHA, it is unlawful to “make, print, or publish or cause to be made, printed or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin, or an intention to make any such preference, limitation, or discrimination.” 42 U.S.C. §3604(c); see also 24 C.F.R. §100.75(c)(3) (2019) (noting, for example, that discriminatory advertisements may include selecting media that make advertisements unavailable on the basis of a prohibited characteristic). Moreover, the Interagency Fair Lending Examination Procedures further highlight marketing-related factors that may raise fair lending risk. See Interagency Fair Lending Examination Procedures (Interagency Procedures) (2009), pp. 11–12.

29 See United States v. SunTrust Mortgage, Inc., No. 3:12-cv-00397-REP (E.D. Va., Consent order filed September 14, 2012); United States v. Countrywide Home Loans, Inc., No. 11-cv-10540 (PSG) (C.D. Cal., Consent order filed December 28, 2011); United States. v. PrimeLending, No. 3:10-cv-02494-P (N.D. Texas, Consent order filed January 11, 2011).

30 The Supreme Court affirmed the availability of a disparate impact theory under the Fair Housing Act in Texas Dept. of Housing & Community Affairs v. Inclusive Communities Project, Inc., 135 S.Ct. 2507 (2015). It ruled that the Fair Housing Act permits liability under a disparate impact theory, prohibiting policies that seem to be neutral on their face but have a disparate impact on a protected class.

31 See Regulation B, 12 C.F.R. §1002.6, Comment 1002.6(a)-2; CFPB Bulletin 2012-04 (Fair Lending) (April 18, 2012). Notably, the Department of Housing and Urban Development (HUD) has proposed a rule regarding the standard for a discrimination claim alleging disparate impact. This rule is subject to the formal rulemaking process but, if finalized, would affect the requirements to prove a disparate impact claim.

32 Both the ECOA and the Fair Housing Act prohibit redlining. The ECOA and Regulation B prohibit discrimination in any aspect of a credit transaction. 15 U.S.C. §1691; 12 C.F.R. §1002.4(a). The Fair Housing Act also prohibits discrimination in making available or in the terms or conditions of the sale of a dwelling on the basis of race or national origin, and further prohibits businesses engaged in residential real estate-related transactions from discriminating against any person in making available or in the terms or conditions of such a transaction on a prohibited basis. 42 U.S.C. §3604(a) and (b), 3605(a); 24 C.F.R. §100.120(b). See Interagency Fair Lending Examination Procedures, pp. 29–30.

33 The broad protections of the ECOA and the Fair Housing Act also prohibit steering. See Endnote 32; see also Interagency Fair Lending Examination Procedures, p. 24.

34 See Interagency Procedures, Endnote 28, p. 24.

35 For example, discouraging prospective applicants because of a protected characteristic is also prohibited. Regulation B provides that “[a] creditor shall not make any oral or written statement, in advertising or otherwise, to applicants or prospective applicants that would discourage on a prohibited basis a reasonable person from making or pursuing an application.” 12 C.F.R. §1002.4(b); see also Comment 1002.4(b)-1.

36 Joint statement from Facebook, NFHA, CWA, ECBA, O&G, and ACLU, Summary of Settlements Between Civil Rights Advocates and Facebook (March 19, 2019). The New York Department of Financial Services is also investigating Facebook’s advertising platform. See press release, N.Y. Department of Financial Services, Governor Cuomo Calls on DFS to Investigate Claims that Advertisers Use Facebook Platform to Engage in Discrimination (July 1, 2019). But even before HUD’s charge against the social media company, several other fintech and big data reports also highlighted these risks. See, e.g., Jennifer Valentino-DeVries and Jeremy Singer-Vine, “Websites Vary Prices, Deals Based on Users’ Information,” Wall Street Journal (December 24, 2012); and Aniko Hannak et al., “Measuring Price Discrimination and Steering on E-Commerce Web Sites” (November 2017).

37 Julia Angwin and Terry Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica (October 28, 2016); Julia Angwin, Ariana Tobin, and Madeleine Varner, “Facebook (Still) Letting Housing Advertisers Exclude Users by Race,” ProPublica (November 21, 2017).

38 In July 2019, in separate cases, the U.S. Equal Employment Opportunity Commission found “reasonable cause” to believe that seven employers violated civil rights laws by excluding women or older workers or both from seeing advertisements for employment that those employers posted on Facebook. See Ariana Tobin, “Employers Used Facebook to Keep Women and Older Workers from Seeing Job Ads. The Federal Government Thinks That’s Illegal,” ProPublica (September 24, 2019).

39 An academic study of Facebook’s advertising platform found multiple mechanisms by which the social media company permitted advertisers to publish “highly discriminatory ads.” Till Speicher et al., Potential for Discrimination in Online Targeted Advertising, Proceedings of Machine Learning Research 81:1–15 (2018).

40 See Joint Statement, Endnote 36.

41 HUD Charge of Discrimination, March 28, 2019.

42 Press Release, HUD, HUD Files Housing Discrimination Complaint Against Facebook (August 17, 2018).

43 See Endnote 42.

44 See Muhammed Ali et al., Cornell University, Discrimination Through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes (April 3, 2019).

45 See Endnote 44.

46 See Endnote 44.

47 See, e.g., Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (2018); Nicol Turner Lee et al., Brookings Institution, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harm (May 22, 2019); Testimony of Nicol Turner Lee, Brookings Institution, before the U.S. House Committee on Financial Services, June 26, 2019. In her testimony, Turner Lee defined bias as “outcomes that are systematically less favorable to individuals within a particular group and where there is no relevant difference between groups that justifies such harms.”

48 Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters (October 9, 2018).

49 See Endnote 48.

50 See, e.g., AI Now Institute, New York University, AI Now Report 2018, pp. 24–27; Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (2019).

51 See, e.g., Robert Bartlett et al., “Consumer-Lending Discrimination in the FinTech Era” (May 2019) (generally concluding that algorithms, such as those used by fintech lenders, do not remove discrimination from lending decisions).

52 Emily Steel and Julia Angwin, “On the Web’s Cutting Edge, Anonymity in Name Only,” Wall Street Journal (August 4, 2010).

53 For a general discussion of how institutions may manage redlining risk, see Consumer Compliance Supervision Bulletin (July 2018), pp. 2–4.

54 See Urban Institute, “Nine Charts about Wealth Inequality in America” (October 5, 2017).