
Following OpenAI’s lead, Google has put forward its own AI policy proposal in response to the Trump administration’s call for a national “AI Action Plan.” In the document, Google recommends relaxed copyright rules for AI training and calls for “balanced” export controls that protect national security while still allowing U.S. businesses to flourish internationally.
Google’s Policy on AI Copyright & Fair Use
One of the most discussed elements of Google’s proposal is the use of copyrighted materials for AI training. The company argues that “fair use and text-and-data mining exceptions” are important for AI innovation and scientific advancement. Like OpenAI, Google wants the legal right to train AI models on publicly available data, even if it is copyrighted, without facing major restrictions.
According to Google, these exceptions spare AI developers from “highly unpredictable, imbalanced, and lengthy negotiations” with data holders, making AI development smoother. This stance has already led to several lawsuits from content creators and data owners who accuse Google of using their copyrighted material without permission. U.S. courts have yet to rule on whether fair use protections apply in such cases.
AI Export Controls: Google vs. Microsoft
Google also takes issue with certain export regulations introduced under the Biden administration. The company warns that these rules could hurt the U.S. economy by making it harder for cloud service providers to operate globally. In contrast, Microsoft has publicly stated that it is “confident” in complying with the regulations.
While the government has restricted the sale of advanced AI chips to certain countries, it has made exceptions for “trusted” businesses needing large-scale chip clusters. Google wants more leniency in these restrictions to keep the U.S. competitive in AI.
Investment in AI R&D and Open Data
Google is also urging the government to increase funding for AI research. The company recommends that the U.S. invest in early-stage AI development, release datasets that are beneficial for AI training, and ensure that computing resources are widely accessible to scientists and institutions.
Google also criticizes inconsistent state-level AI regulations, arguing that a federal law would provide more clarity and stability. As of early 2025, more than 780 AI-related bills were pending in the U.S., making AI compliance a growing challenge for tech companies.
AI Liability & Transparency: Google’s Pushback
Another concern in Google’s proposal is liability for AI-related harms. The company argues that AI developers shouldn’t be held responsible for how their models are used, as they usually have little control over end-user applications. Google has previously opposed strict liability laws, such as California’s failed SB 1047, which sought to define clear responsibilities for AI developers.
Google has also opposed strict AI transparency rules, such as those being introduced in the EU and California, which would require companies to disclose details about their training datasets and model risks. According to Google, such disclosures could expose trade secrets, give competitors an unfair advantage, and even create national security risks by helping adversaries manipulate AI models.
As AI regulations continue to evolve, Google is making a strong case for fewer restrictions to support innovation, while governments worldwide struggle to balance oversight with technological progress.