WASHINGTON – A new nationally representative Consumer Reports survey explores Americans’ attitudes toward artificial intelligence (AI) and algorithmic decision-making. The survey found that a majority of Americans are uncomfortable with the use of AI and algorithmic decision-making technology in major life moments related to housing, employment, and healthcare.
Grace Gedye, policy analyst at CR, said, “Companies are using AI and algorithms behind the scenes to help determine everything from decisions about your health insurance coverage, to your prospects of landing your dream job, to who is going to get that perfect apartment you found on Zillow. We conducted this survey to get a better understanding of how consumers feel about the role AI and algorithms play in these high stakes decisions.”
“The survey shows that a majority of Americans are uncomfortable with the use of AI in high-stakes decisions about their lives. We also found that in certain circumstances, Americans really want to understand what information an AI system is using to assess them and want the chance to correct any incorrect information. Consumer Reports advocates for regulations that require AI systems to be more transparent, and that give consumers agency.”
Key findings of the survey include:
Attitudes toward AI/algorithms making marketing decisions
When American consumers were asked how they would view it if a company used AI or algorithms to make decisions about what products or services to offer, about a third of Americans said “Unsure,” making it the most common overall response. Roughly equal percentages said it would be at least somewhat good (34%) and at least somewhat bad (32%), although a higher percentage said “mostly bad” (12%) than “mostly good” (7%).
Personalized pricing
CR asked Americans how they feel about price discrimination, or “personalized pricing.” Nearly half of Americans (47%) said they strongly oppose this practice, making it by far the most common response. Another 19% somewhat opposed it, and 26% neither supported nor opposed it. Hardly any (7%) said they actively support it.
White Americans were more likely than Black and Hispanic Americans to say they strongly oppose this practice (53% compared to 37% and 36%, respectively). The percentage who strongly oppose it also goes up with age.
Discomfort with AI programs making decisions that affect people’s lives
Nearly half (45%) of US adults said they would be “very uncomfortable” with a scenario in which an AI program played a role in a job interview process, and fewer than 20% said they would be at all comfortable with it (12% “somewhat comfortable” and just 5% “very comfortable”).
About 4 in 10 (39%) Americans would be “very uncomfortable” allowing banks to use such programs to determine if they qualified for a loan, and the same percentage said so for using programs to screen them as a potential tenant.
About a third said they would be “very uncomfortable” with video surveillance systems using facial recognition to identify them, and a similar percentage said so for hospitals using AI or algorithms to help with diagnosis and treatment planning.
In the hospital scenario, women were more likely to be uncomfortable with AI or algorithms being used to make a diagnosis or treatment plan than men (72% versus 56%).
Transparency and recourse
CR asked Americans to imagine that an AI or algorithm had been used to determine whether they would be interviewed for a job they applied for, and whether they would want to know specifically what information the program used to make that decision. Most Americans (83%) said they would want to know.
When asked if they would want to be able to correct any incorrect information that an AI system or algorithm used to make a decision about interviewing them, 91% of Americans said they would want to have a way to correct the data.
Consumer Reports advocates for legislation at the state and federal level to protect consumers from algorithmic discrimination and other AI-related harms. Clear standards are needed regarding the responsible use of AI across multiple industries so that companies are accountable for ensuring their products are safe, effective, and do not manipulate, deceive, or discriminate. Consumers should be informed when AI tools make high-stakes decisions about them, and should have agency, such as the opportunity to correct any incorrect information and the opportunity to appeal a decision.
Consumer Reports has been pushing for state legislation that would require companies to disclose more information about how the AI tools they use in consequential decisions assess consumers. Colorado’s new law, which goes into effect in 2026, will require companies to provide consumers and workers with an explanation after a company uses AI to make an adverse, high-stakes decision about them. California lawmakers are considering a similar bill, currently backed by CR, that would provide consumers with even more detailed information about how a consumer or worker will be assessed by an AI tool, as well as actionable post-decision explanations.
2024 States News Service