By Mariano-Florentino Cuéllar, Benjamin Larsen, Yong Suk Lee, and Michael Webb

Artificial intelligence (AI) technologies have become increasingly widespread over the last decade. As the use of AI has become more common and the performance of AI systems has improved, policymakers, scholars, and advocates have raised concerns. Policy and ethical issues such as algorithmic bias, data privacy, and transparency have gained increasing attention, prompting calls for policy and regulatory changes to address the potential consequences of AI (Acemoglu 2021).
As AI continues to improve and diffuse, it will likely have significant long-term implications for jobs, inequality, organizations, and competition. Premature deployment of AI products can also aggravate existing biases and discrimination or violate data privacy and protection practices. Because of AI technologies’ wide-ranging impact, stakeholders are increasingly interested in whether firms are likely to embrace measures of self-regulation based on ethical or policy considerations and how decisions of policymakers or courts affect the use of AI systems.
Where policymakers or courts step in and regulatory changes affect the use of AI systems, how are managers likely to respond to new or proposed regulations?

AI-related regulation

In the United States, the use of AI is implicitly governed by a variety of common law doctrines and statutory provisions, such as tort law, contract law, and employment discrimination law (Cuéllar 2019). This implies that judges' rulings on common law-type claims already play an important role in how society governs AI. While common law often involves decisionmaking that builds on precedent, federal agencies also engage in important governance and regulatory tasks that may affect AI across various sectors of the economy (Barfield & Pagallo 2018).
Federal autonomous vehicle legislation, for instance, carves out a robust domain for states to make common law decisions about autonomous vehicles through the court system. Through tort, property, contract, and related legal domains, society shapes how people utilize AI while gradually defining what it means to misuse AI technologies (Cuéllar 2019). Existing law (e.g., tort law) may, for instance, require that a company avoid any negligent use of AI to make decisions or provide information that could result in harm to the public (Galasso & Luo 2019). Likewise, current employment, labor, and civil rights laws imply that a company using AI to make hiring or termination decisions could face liability for its decisions involving human resources.
Policymakers and the public also consider new legal and regulatory approaches when faced with potentially transformative technologies, as these may challenge existing legislation (Barfield & Pagallo 2018). The Algorithmic Accountability Act of 2022 is one proposal to deal with such perceived gaps. First introduced in 2019, the Act would regulate large firms through mandatory self-assessment of their AI systems, including disclosure of firm usage of AI systems, their development process, system design, and training, as well as the data gathered and used.
While statutes imposing new regulatory requirements such as the Algorithmic Accountability Act are still under debate, data privacy regulation is already being implemented. The state of California enacted the California Consumer Privacy Act (CCPA), which went into effect in January 2020. The CCPA affects all businesses that buy, sell, or otherwise trade the “personal information” of California residents, including companies using online-generated data from California residents in their products.
The CCPA thus adds another layer of oversight to data handling and privacy, on which many AI applications depend. Domain-specific regulators such as the Food and Drug Administration (FDA), the National Highway Traffic Safety Administration (NHTSA), and the Federal Trade Commission (FTC) have also been active in devising their own approaches to regulating AI. In short, AI regulation is emerging rapidly and is likely to materialize more substantively along several directions simultaneously: from existing laws, new general regulations, and evolving domain-specific regulations.
The main goal of regulators is to ensure opportunity in the application and innovation of AI-based tools, products, and services while limiting negative externalities in the areas of competition, privacy, safety, and accountability. Little is known, however, about how the proposed Algorithmic Accountability Act, the CCPA, and the regulatory approaches of the FDA, NHTSA, and the FTC will affect managerial preferences and the likely rate of AI adoption and innovation across different firms and industries.

Manager response to AI regulation

In a newly published paper (Cuéllar et al. 2022), we sought to address how different kinds of AI-related regulation, or even the prospect of regulation, might affect firm behavior, including firm responses to ethical concerns. In particular, we examined the impact of information about actual and potential AI-related regulations on business managers. We did so through an online survey, observing the degree to which managers changed their perceptions of the importance of various AI-related ethical issues (labor, bias, safety, privacy, and transparency) and their intent to adopt AI technologies.
In our study, we assessed managerial perception of ethical and policy concerns by asking managers about the importance (measured on a standard Likert scale ranging from not important to very important) attached to (1) layoffs or labor-related issues due to AI adoption; (2) racial and gender bias/discrimination from AI algorithms; (3) safety and accidents related to AI technologies; (4) privacy and data security issues related to AI adoption; and (5) transparency and explainability of AI algorithms. AI-driven digital transformation has been widely documented to have important implications for job displacement (Gruetzemacher, Paradice, and Lee 2020), and algorithmic racial and gender bias has been reported across sectors and industries (Lambrecht and Tucker 2019). Safety-related concerns are also present across algorithmic use cases, from autonomous driving to AI in healthcare, while issues associated with data privacy and security arise in most forms of algorithmic adoption.
Finally, neural networks have at times been described as “black boxes,” where algorithmic decisionmaking processes may lack explanatory transparency in how and why a certain decision was reached. In combination, these five areas constitute some of the most pressing problems that managers face when adopting new AI technologies into their organizations. To assess manager intent to adopt AI technologies, we asked managers in how many business processes they would adopt AI technologies (i.e., machine learning, computer vision, and natural language processing) in the following year. To clarify what business processes are, we gave several examples when introducing each technology in the survey. Respondents were allowed to choose from 0 to 10 or more (i.e., top-coded at 10).
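As a rough illustration of how these two outcome measures could be encoded, the sketch below maps Likert responses to a 1-5 scale and top-codes the adoption count at 10. The intermediate Likert labels, variable names, and example responses are assumptions for illustration, not the authors' actual instrument.

```python
# Hypothetical encoding of the two survey outcomes described above; the intermediate
# Likert labels and all names are illustrative assumptions, not the authors' instrument.
LIKERT = {
    "not important": 1,        # the scale endpoints come from the survey description;
    "slightly important": 2,   # the intermediate labels are assumed
    "moderately important": 3,
    "important": 4,
    "very important": 5,
}

ETHICAL_ISSUES = ["labor", "bias", "safety", "privacy", "transparency"]


def encode_importance(responses: dict) -> dict:
    """Map each ethical-issue rating to its 1-5 Likert score."""
    return {issue: LIKERT[responses[issue]] for issue in ETHICAL_ISSUES}


def encode_adoption(num_processes: int) -> int:
    """Number of business processes slated for AI adoption, top-coded at 10."""
    return min(max(num_processes, 0), 10)


# Example respondent
ratings = {"labor": "important", "bias": "very important", "safety": "very important",
           "privacy": "moderately important", "transparency": "slightly important"}
print(encode_importance(ratings))
print(encode_adoption(14))  # -> 10, since responses of "10 or more" are top-coded
```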
On average, managers in our sample said that they would adopt AI in about 3.4 business processes. To assess managerial responses to different kinds of AI regulation and their associated impact on ethical concerns, we conducted a randomized online survey experiment in which we randomly exposed managers to one of the following treatments: (1) a general AI regulation treatment that invokes the prospect of statutory changes imposing legislation like the Algorithmic Accountability Act; (2) agency-specific regulatory treatments that involve the relevant agencies, i.e., the FDA (for healthcare, pharmaceutical, and biotech), NHTSA (for automobile, transportation, and distribution), and the FTC (for retail and wholesale); (3) a treatment that reminds managers that AI adoption in businesses is subject to existing common law and statutory requirements, including tort law, labor law, and civil rights law; and (4) a data privacy regulation treatment that invokes legislation like the California Consumer Privacy Act.

Our results (Cuéllar et al. 2022) indicate that exposure to information about AI regulation increases the importance managers assign to various ethical issues when adopting AI, though the results are not statistically significant in all cases. Figure 1 plots the coefficient estimates from the regressions that examine each outcome variable (i.e., the heading of each coefficient plot) against the different AI regulation information treatments. The dots represent the coefficient estimates from the regressions and the bars represent the 95% confidence intervals. Each coefficient estimate represents the difference between a treatment group and the control group.
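The paper's exact specification is not reproduced here, but a minimal sketch of the comparison behind such a coefficient plot might look as follows: regress the outcome on treatment indicators with the control group as the omitted category, so each coefficient is a treatment-control difference and its 95% confidence interval gives the bar. The simulated data, variable names, and use of statsmodels are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code): treatment-vs-control differences with 95% CIs,
# the quantities behind a coefficient plot like Figure 1. Data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
treatments = ["control", "general", "agency", "existing_law", "privacy"]
n = 1000
df = pd.DataFrame({"treatment": rng.choice(treatments, size=n)})
# Simulated outcome: importance assigned to AI safety on a 1-5 Likert-style scale.
df["safety_importance"] = (
    3 + 0.4 * (df["treatment"] != "control") + rng.normal(0, 1, n)
).clip(1, 5)

# OLS on treatment dummies; the control group is the omitted category, so each
# coefficient is the treatment-control difference in the outcome's mean.
model = smf.ols(
    "safety_importance ~ C(treatment, Treatment(reference='control'))", data=df
).fit()
print(model.params)      # coefficient estimates (the dots in a coefficient plot)
print(model.conf_int())  # 95% confidence intervals (the bars)
```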
Overall, Figure 1 visually illustrates the trade-off between the heightened perception of ethical issues related to AI and the decreased intent to adopt AI technologies. Notably, all four regulation treatments increase the importance managers put on safety related to AI technologies, whereas none of the four regulation treatments appears to increase the importance managers put on labor issues related to AI technologies. Moreover, there appears to be a trade-off: Increases in manager awareness of ethical issues are offset by a decrease in manager intent to adopt AI technologies.
All four regulation treatments decrease managers’ intent to adopt AI. The trade-off between AI ethics and adoption is more pronounced in smaller firms, which are generally more resource-constrained than larger firms. Recent industry reports often discuss successful AI transformation in terms of strategy/organization, data, technology, workforce, and training (McKinsey 2017).
Similarly, we identified six expense categories as key AI-related business activities and asked managers to consider the trade-offs they would have to make when planning a hypothetical AI budget. Then we examined how regulation information affects how managers plan to allocate AI-related budget across the six expense categories (Figure 2). Specifically, we asked managers to fill out the percent of the total budget they would allocate to each expense category, namely (1) developing AI strategy that is compatible with the company’s overall business strategy (labeled “Strategy” in Figure 2); (2) R&D related to creating new AI products or processes (labeled “R&D”); (3) hiring managers, technicians, and programmers, excluding R&D workers, to operate and maintain AI systems (labeled “Hiring”); (4) AI training for current employees (labeled “Training”); (5) purchasing AI packages from external vendors (labeled “Purchase”); and (6) computers and data centers, including purchasing or gathering data (labeled “Data/Computing”).
On average, we found that managers allocated approximately 15% to developing AI strategy, 19% to hiring, 16% to training, 15% to purchasing AI packages, 13% to computing and data resources, and 22% to R&D. As Figure 2 illustrates, information on AI regulation significantly increases managers’ expenditure intent for developing AI strategy (“Strategy”). For the general AI regulation, agency-specific AI regulation, and existing AI-related regulation treatments, we find that managers increase their allocation to AI strategy by two to three percentage points.
However, the increase in developing AI business strategy is primarily offset by a decrease in training current employees on how to code and use AI technology (“Training”), as well as in purchasing AI packages from external vendors (“Purchase”). Figure 2 visually illustrates these trade-offs by plotting the coefficient estimates for each regulation treatment. We also examined how information about AI regulation affected managers’ hiring plans across six different occupation categories.
The occupation categories are: managers, technical workers, office workers, service workers, sales workers, and production workers. We found that information about AI regulation increases intent to hire more managers. We found no effect on the other occupation categories.
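Returning to the budget-allocation results, the comparison behind Figure 2 can be thought of as group means of compositional budget shares measured against the control group; the same logic applies to the hiring-plan outcomes. The sketch below uses simulated allocations and assumed group labels, not the study's data.

```python
# Hedged sketch with simulated data: average budget shares by treatment group and the
# percentage-point difference from the control group. Not the paper's code or data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
categories = ["Strategy", "R&D", "Hiring", "Training", "Purchase", "Data/Computing"]
groups = ["control", "general", "agency", "existing_law", "privacy"]

# Each simulated row is one manager's allocation; the six shares sum to 100%.
rows = []
for group in groups:
    for _ in range(200):
        shares = rng.dirichlet(np.ones(len(categories))) * 100
        rows.append({"treatment": group, **dict(zip(categories, shares))})
df = pd.DataFrame(rows)

means = df.groupby("treatment")[categories].mean()       # average share per category
effects = means.subtract(means.loc["control"], axis=1)   # percentage-point gaps vs. control
print(means.round(1))
print(effects.round(1))
```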
This finding is consistent with the intent to invest more in strategy development, since managers tend to be the ones responsible for establishing strategic goals and directions at their companies. When comparing the healthcare, automotive, and retail industries, we found that managers at times respond differently to the same regulatory treatments. In particular, we found a trade-off between the perception of ethical issues and adoption intent in healthcare and retail but not in the automotive sector.
Firms operating in the automotive, transportation, and distribution industries generally seem to maintain a positive outlook on how AI will affect the future of their operations despite existing laws and potential new regulations. This positive sentiment may reflect NHTSA’s current regulatory approach of removing unintended barriers to AI adoption and innovation. Overall, our findings imply that AI regulation may slow innovation by lowering adoption, but at the same time it may improve consumer welfare through increased safety and heightened attention to issues such as bias and discrimination.
The diverse responses across ethical issues and firm characteristics suggest that managers are more likely to respond to concrete ethical guidelines, especially when these can be quantified or measured. Ethical areas such as safety, for example, involve concrete and measurable instances that may be easier for managers to assess and quantify should an AI system cause harm. Managers across treatments display greater awareness of safety-related issues, which may reflect managers being more attuned to what constitutes an improvement or a deterioration in safety than in other ethical areas.
Ethical issues related to bias and discrimination or transparency and explainability, on the other hand, can be thornier for managers to find broad solutions for, which shows in our sample: Managers across treatments respond less favorably to such issues. Therefore, the concreteness of an ethical issue and manager perception of how regulation will be enforced are likely to induce heterogeneous responses to AI regulation. Though our findings concern manager intent and not actual behavior, to the best of our knowledge our research is the first to examine the potential impact of new and proposed AI regulation on AI adoption and on the ethical and legal concerns related to AI.
Policy Implications

Our findings offer several potential implications for the design and analysis of AI-related regulation. First, though AI regulation may conceivably slow innovation by temporarily lowering adoption, instituting regulation at the early stages of AI diffusion may improve consumer welfare through increased safety and by better addressing bias and discrimination issues. At the same time, there is an inherent need to distinguish between innovation at the level of the firm consuming AI technology and at the level of the firm producing such technology.
Even if regulation indeed slows innovation in the former, it can still spur innovation in the latter by encouraging firms to invest in otherwise neglected fields. This would be consistent with theoretical observations such as the Porter hypothesis, which argues that (environmental) regulation can enhance firms’ competitiveness and bolster their innovative behaviors (Porter & Van der Linde 1995). The approach of regulating early, however, contrasts with the common approach, at least in the U.S., of relying on competitive markets to generate the best technology so that government only needs to regulate anticompetitive behavior to maximize social welfare (Aghion et al. 2018; Shapiro 2019).
Second, although policymakers sometimes find justifications for adopting broad-based regulatory responses to major problems such as environmental protection and occupational safety, cross-cutting AI regulations such as the proposed Algorithmic Accountability Act may have complex effects and make it harder to take important sector characteristics into account. Given our findings of heterogeneous responses across sectors and firm size, policymakers would do well to take a meticulous approach to AI regulation across different technological and industry-specific use cases. While certain legal requirements and policy goals, such as reducing impermissible bias in algorithms and enhancing data privacy and security, may apply across sectors, specific features of particular sectors may nonetheless require distinctive responses.
For example, the use of AI-related technologies in autonomous driving systems must be responsive to a diverse set of parameters that are likely to differ from those relevant to AI deployments in drug discovery or online advertising. Our findings also hold several implications for managers and businesses that either develop or deploy AI solutions or intend to do so. Our survey experiment suggests that managers are not always fully aware of how a given product or technology complies with regulations.
Information pertaining to AI regulation needs to be factored in by managers both when developing and when adopting AI solutions. If managerial views change systematically after understanding (or being exposed to) regulation, as in our experiment, this suggests that potential regulatory discrepancies should preferably be handled at a very early stage of the investment planning process. In most actual scenarios, however, regulation evolves at a much slower pace than technology, a phenomenon described as the “pacing problem” (Hagemann, Huddleston, and Thierer 2018), which makes it hard for managers to ensure that a technology developed today stays compliant in the future.
We find that when managers are presented with information on AI-related regulations, they tend to respond reactively, rethinking how they allocate their budgets. This is consistent with reevaluating potential issues in a product’s or technology’s development or adoption process. Managers and businesses that have developed more standardized ways of doing this can therefore be expected to be better equipped to handle potential regulatory shocks in the future.
Concrete managerial recommendations include documenting the lineage of AI products or services, as well as their behaviors during operation (Madzou & Firth-Butterfield 2020). Documentation could include information about the purpose of the product, the datasets used for training and while running the application, and ethics-oriented results on safety and fairness, for example. Managers can also work to establish cross-functional teams consisting of risk and compliance officers, product managers, and data scientists, enabled to perform internal audits to assess ongoing compliance with existing and emerging regulatory demands (Madzou & Firth-Butterfield 2020).
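As one hedged illustration of the documentation practice described above (not a standard and not the authors' prescription), a minimal lineage record might capture the product's purpose, its training and runtime data sources, and ethics-oriented test results. All field names and example values below are assumptions.

```python
# Illustrative sketch of a lineage/behavior record for an AI product; field names,
# metrics, and example values are assumptions, not a standard or the authors' method.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelDocumentation:
    product_name: str
    purpose: str                          # what decisions the system supports
    training_datasets: List[str]          # datasets used to train the model
    runtime_datasets: List[str]           # data gathered or used while the application runs
    safety_results: Dict[str, float]      # e.g., failure or override rates from testing
    fairness_results: Dict[str, float]    # e.g., outcome gaps across groups
    regulatory_notes: str = ""            # applicable statutes, agency guidance, audit dates
    changelog: List[str] = field(default_factory=list)


doc = ModelDocumentation(
    product_name="resume-screener-v2",
    purpose="Rank job applications for recruiter review.",
    training_datasets=["internal-hiring-records-2015-2020"],
    runtime_datasets=["live-applicant-submissions"],
    safety_results={"manual_override_rate": 0.03},
    fairness_results={"selection_rate_gap_by_gender": 0.02},
)
doc.changelog.append("2022-06: retrained after an internal bias audit")
print(doc)
```

A record like this could be reviewed by the kind of cross-functional audit team mentioned above whenever the product or the applicable rules change.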
While our findings confirm that conveying information about AI-related regulations generally entails a slower rate of reported AI adoption, we also find that even emphasizing existing laws relevant to AI can exacerbate uncertainty for managers in terms of implementing new AI-based solutions. For businesses that develop or deploy AI products or services, this implies the need to embrace a new set of managerial standards and practices detailing AI liability under varying circumstances. As many of these practices are yet to emerge, more robust internal audits and third-party examinations would provide more information for managers, which could help some managers overcome specific present-biased preferences.
This could reduce managerial uncertainty and aid the development of AI products and services that are subject to higher ethical as well as legal and policy standards. Because AI technologies remain at an early stage of adoption, AI implementation is likely to continue trending upward as companies will increasingly be required to adopt new AI tools and technologies in order to stay competitive. As the potential costs of varying forms of AI regulation are likely to differ across industries, the adoption of clearer rules and regulations at the sectoral level could benefit firms that are already engaged in developing and adopting a range of novel AI technologies.
Re-engineering existing AI solutions can be both costly and time-consuming, while removing regulatory and legal uncertainties could encourage would-be adopters by providing a clearer set of rules and compliance costs from the outset of adoption. Our study takes the cost side of the equation into consideration; further studies could provide valuable insights into the actual and perceived benefits that may come with new forms of AI regulation.

References

Acemoglu, Daron. “Harms of AI.” NBER Working Paper 29247 (September 2021). https://doi.org/10.3386/w29247.

Aghion, Philippe, Stefan Bechtold, Lea Cassar, and Holger Herz. “The Causal Effects of Competition on Innovation: Experimental Evidence.” Journal of Law, Economics, and Organization 34, no. 2 (2018): 162-195. https://doi.org/10.1093/jleo/ewy004.

Barfield, Woodrow, and Ugo Pagallo. Research Handbook on the Law of Artificial Intelligence. Northampton, Massachusetts: Edward Elgar Publishing, 2018.

Cuéllar, Mariano-Florentino. “A Common Law for the Age of Artificial Intelligence: Incremental Adjudication, Institutions, and Relational Non-Arbitrariness.” Columbia Law Review 119, no. 7 (2019).

Cuéllar, Mariano-Florentino, Benjamin Larsen, Yong Suk Lee, and Michael Webb. “Does Information About AI Regulation Change Manager Evaluation of Ethical Concern and Intent to Adopt AI?” Journal of Law, Economics, and Organization (2022). https://doi.org/10.1093/jleo/ewac004.

Galasso, Alberto, and Hong Luo. “Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence.” In The Economics of Artificial Intelligence: An Agenda (2019).

Gruetzemacher, Ross, David Paradice, and Kang Bok Lee. “Forecasting extreme labor displacement: A survey of AI practitioners.” Technological Forecasting and Social Change 161 (2020). https://doi.org/10.1016/j.techfore.2020.120323.

Hagemann, Ryan, Jennifer Huddleston, and Adam D. Thierer. “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future.” Colorado Technology Law Journal 17 (2018).

Lambrecht, Anja, and Catherine Tucker. “Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads.” Management Science 65, no. 7 (2019). https://doi.org/10.1287/mnsc.2018.3093.

Madzou, Lofred, and Kay Firth-Butterfield. “Regulation could transform the AI industry. Here’s how companies can prepare.” World Economic Forum, October 23, 2020. https://www.weforum.org/agenda/2020/10/ai-ec-regulation-could-transform-how-companies-can-prepare/.

McKinsey Global Institute. “Artificial Intelligence: The Next Digital Frontier?” June 2017. https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx.

Porter, Michael E., and Claas van der Linde. “Toward a New Conception of the Environment-Competitiveness Relationship.” Journal of Economic Perspectives 9, no. 4 (1995): 97-118.

Shapiro, Carl. “Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets.” Journal of Economic Perspectives 33, no. 3 (2019): 69-93. https://doi.org/10.1257/jep.33.3.69.