In response to the growing prominence of generative AI and an increasingly competitive landscape, Google has sharpened its focus on advancing artificial intelligence. The company, long a pioneer in AI development, is currently working on an array of AI-based products, including Bard, Duet AI, Assistant with Bard, and an upcoming large language model dubbed “Gemini.”
Amid this AI surge, Google is also prioritizing user safety. The company recently announced that it will stand by users if AI-generated content lands them in legal trouble. There is, however, a crucial caveat to this pledge.
The rise of generative AI has brought companies like Google under heightened scrutiny, covering both the source material used to train AI models and the content these tools generate for end users. Google’s use of publicly available internet information to train its AI models has drawn particular attention and criticism, raising questions about user privacy, consent to data access, and potential copyright infringement.
In response to these concerns, Google has announced that it will shoulder responsibility in cases where AI-generated content inadvertently infringes copyright, as reported by Reuters. The commitment includes extending legal assistance to affected users, akin to the protections Microsoft and Adobe have pledged to their customers. Notably, Google’s support applies only to accidental copyright infringement; intentional breaches fall outside its purview.
It’s important to note that this provision applies only to commercial generative AI services accessible through Google Cloud and Workspace. Services such as Vertex AI and Duet AI are covered, while Bard is conspicuously absent from the list. The move aligns with Google’s business interests: legal support is offered exclusively to paying subscribers, not to users on free accounts. End users should therefore remain cautious when creating and publicly sharing AI-generated content to avoid potential legal entanglements.
Alongside these user safeguards, Google is taking proactive steps to reduce bias in its models. The company is leveraging red teaming, in which ethical hackers simulate real-world attacks, to uncover vulnerabilities and better understand the social implications of AI. Google is also implementing tools to protect children and young adults from age-inappropriate content.
Google’s proactive stance on legal support and ethical development reflects its ongoing commitment to shaping a responsible and secure AI landscape for its users. As generative AI technology continues to evolve, Google’s dedication to user safety remains steadfast.