
AI Thought Leadership Series | Responsible Business Center: Artificial Intelligence — The AI Power Struggle: Governance, Innovation, and Regulation in the Digital Age

Faculty, Responsible Business Center | Sep 18, 2024

This article is the third of a four-part series on AI and responsible business brought to you by Gabelli School of Business faculty members and Responsible Business Center staff. Stay tuned for future editions of this series!

The USA is widely recognized as the global leader in AI development. According to the Stanford 2024 AI Index Report, American tech giants lead in AI investment, innovation, and implementation. A 2022 analysis by Tortoise Intelligence found that 50% of the world’s top AI research institutions were based in the USA. Some of the most prominent players in the generative AI space today are US-based companies such as Google, Microsoft, and OpenAI, which attract top engineers and developers. The USA also dominates the semiconductor design market, with an 85% global share. AI chips, mostly designed by California-based Nvidia, provide the computational power essential for training AI models and running other AI applications.

Another often-overlooked factor behind American dominance is the country’s legal framework. US-based companies remain conspicuously free of comprehensive AI regulation. This regulatory void contrasts sharply with the structured approaches of the EU, China, and other G7 nations. Are we witnessing a replay of past technological revolutions, such as the early days of the automobile, when safety regulations lagged behind innovation, or the dawn of social media, when platforms grew unchecked before privacy concerns came to the fore? What can these past technological shifts tell us about the future of AI governance?

Under both the Trump and Biden administrations, the US government’s approach has largely been one of encouragement rather than constraint, as evidenced by the American AI Initiative, which focuses on promoting AI research while reducing regulatory “barriers to innovation.”

This laissez-faire attitude stands in stark contrast to that of the European Union, which has taken a leading role with its Artificial Intelligence Act, proposed in 2021, which came into force a few days ago. The AI Act classifies AI systems by risk level, from “unacceptable risk” (to be banned) to “minimal risk.” The Act also imposes strict requirements on high-risk AI applications, including mandatory risk assessments and human oversight.

China has also moved toward regulation with its Global AI Governance Initiative, emphasizing fairness, transparency, and privacy protection. And the G7 nations, through the Hiroshima Process, have established International Guiding Principles for Organizations Developing Advanced AI Systems. These principles, agreed upon in 2023, focus on responsible AI development, emphasizing safety, security, and trust. They represent a collaborative effort among leading democracies to create a common approach to AI governance. Together, these regulatory initiatives underscore just how permissive the US approach remains.

At the forefront of resistance to AI regulation stand tech entrepreneurs. The “Musk algorithm,” described in Walter Isaacson’s 2023 biography of Elon Musk, encapsulates this approach. When faced with established norms, Musk reportedly instructs his employees to ask, “Who wrote that rule? And do they have the authority to do so?” This mindset reflects a broader trend among tech leaders who view government oversight as an impediment to innovation rather than a necessary safeguard.

Ultimately, the “Musk algorithm” is a manifesto about who possesses the legitimate authority to organize production and social practices, which raises critical questions about the balance of power between tech companies and democratic institutions. The tension between AI companies and regulatory bodies is not a novel phenomenon but rather the latest chapter in a long-standing struggle between corporate interests and government oversight. Businesses have often resisted regulation throughout history, from antitrust laws to financial regulations to environmental protections, claiming it stifles innovation and economic growth.

In the late 19th century, the rise of industrial monopolies led to antitrust laws such as the Sherman Antitrust Act of 1890. Industrialists like John D. Rockefeller vehemently opposed these regulations, arguing they would hinder economic progress. Similarly, the Securities and Exchange Commission, created in 1934 to oversee financial markets in response to the stock market crash of 1929, was initially resisted, echoing today’s tech entrepreneurs who argue for self-regulation in AI development.* Likewise, when the Environmental Protection Agency was created in 1970, many industries claimed the new rules would be economically devastating. Yet these regulations have proved crucial in improving markets, increasing social trust, and addressing pollution and climate change.

In these historical instances, as with AI today, corporations argued that self-regulation is more effective than government intervention. Yet history has repeatedly shown that without external oversight, the pursuit of profit can overshadow public welfare. Mark Zuckerberg put it bluntly in a lecture to college students, saying it was “more useful to make things happen and apologize later than it is to make sure you dot all your i’s and cross your t’s.”**

In this context, California has been debating bill SB 1047, which would regulate AI development by requiring safety testing for AI systems and allowing legal action against companies whose technologies cause serious harm. Supporters argue it will prevent disasters, while critics claim it could stifle innovation and push AI development out of California. The bill would affect major AI companies and create a new state agency for oversight. The tech industry’s opposition to the bill contrasts with its earlier public acknowledgments of AI risks and calls for regulation. The bill has cleared the California legislature and awaits the governor’s decision; if enacted, it could set a precedent for other states and national governments.

 The AI regulation debate follows a well-worn path of tension between corporate freedom and public interest, raising questions about how society can balance innovation with responsible development and use of transformative technologies: Can we trust profit-driven entities to manage AI’s societal impact without oversight? How will we address algorithmic bias, privacy concerns, and job displacement while prioritizing public interest? Who will assess the risks and externalities that companies might overlook? Can coordinated regulation establish global standards and prevent an ethical race to the bottom in AI development?

The venue for this debate is as crucial as the questions themselves. It is essential to critically examine the claims made by both industry leaders and regulatory advocates, ensuring that AI governance is grounded in empirical evidence and historical context. While it might be tempting to frame the challenge as simply crafting clever regulations that safeguard public interests without stifling innovation, this perspective may be overly optimistic. Harnessing AI’s potential while mitigating its risks may not yield a win-win scenario. When faced with situations where mutual benefits are unattainable, societies must make difficult tradeoffs. This is where ethics, my field of study, becomes indispensable. We do not need ethics to celebrate universally beneficial outcomes; we need it to navigate and resolve complex value conflicts. The ethical dimension of AI governance is not about finding perfect solutions but about making principled choices in the face of competing priorities and uncertain consequences.

* De Bedts, R. F. (1964). The First Chairmen of the Securities and Exchange Commission: Successful Ambassadors of the New Deal to Wall Street. The American Journal of Economics and Sociology, 23(2), 165-178.

** Zuckerberg: King of the Metaverse, documentary directed by Nick Green (2024).

Written by: Miguel Alzola, Ph.D., associate professor of ethics, Fordham University Gabelli School of Business.

 
