According to the researchers, these findings show how difficult it is for laypeople to weigh the trade-offs between the benefits of being served by AI and the risks of being harmed or exploited.
The insights are presented in the article “AI on the street: Context-dependent responses to artificial intelligence”, authored by Associate Professor Matilda Dorotic and Professor Luk Warlop from BI, together with former BI PhD student Emanuela Stagno, now Associate Professor at the University of Sussex. The paper was recently awarded the 2024 Best Paper Award by the International Journal of Research in Marketing, a highly respected journal in the field of marketing.
“This is a great honour—not only because it represents meaningful recognition of our work by our peers, but also because it reflects the community’s acknowledgment of the importance of understanding the delicate balance we currently face between the benefits and risks of implementing AI,” says Dorotic.
Public trust depends on context
The study shows that people evaluate AI differently depending on where and how it is used. Even when the technology is the same, people weigh personal costs and perceived benefits in context-specific ways, judging the same AI as acceptable in one setting and objectionable in another.
Commercial AI—like facial recognition on smartphones—is often accepted because it offers clear personal benefits. In contrast, public-sector AI, particularly in public safety applications, raises more concern due to fears around privacy and control over surveillance. However, infrastructure-related public AI, such as traffic or water management, is seen as less intrusive and far more acceptable, even though it poses the same surveillance risks.
“We find that people trust the government to provide for 'AI on the street' but limit their support to applications that provide personal benefits to them as individuals,” says Warlop.
The article also offers guidance for policymakers and AI practitioners based on how consumers trade off solutions that differ in their benefits, costs, data transparency, and privacy enhancements. But as Dorotic notes, responsible adoption also depends on broader awareness and regulation:
“Future regulations must be not only reactive but also anticipatory. By fostering a culture of responsibility and openness, we can better align innovation with societal values and long-term trust.”
A career boost
Warlop highlights how meaningful the award is for his co-authors.
“I’m tremendously happy for my young colleagues Matilda and Emanuela. For them, such an award is a major career boost, and it is well deserved, not just for this paper, but for the quality of all their work.”
Dorotic adds: “I wish to thank BI and the Department of Marketing for their support of this research and my choice to pursue more unconventional, yet deeply societally impactful, lines of inquiry. I’m grateful to be part of a community that values both courage and curiosity in the pursuit of knowledge that truly matters.”
Read more about the study here: AI in the public space: How do we evaluate if it is good or bad?