Google releases new ‘open’ AI models with a focus on safety

Google has introduced three new “open” generative AI models, which it claims are “safer,” “smaller,” and “more transparent” than most existing models. They join Google’s Gemma 2 family, which launched in May. Named Gemma 2 2B, ShieldGemma, and Gemma Scope, each caters to a different use case but shares a focus on safety.

Unlike Google’s Gemini models, the Gemma series is designed to be more accessible and to foster goodwill within the developer community, much like Meta’s Llama models. Gemini remains closed and is used primarily inside Google’s own products, whereas Gemma is distributed openly for developers to build on.

Gemma 2 2B is a lightweight model designed for text generation and analysis, compatible with various hardware, including laptops and edge devices. It is available for certain research and commercial applications and can be downloaded from platforms like Google’s Vertex AI model library, Kaggle, and Google’s AI Studio toolkit.
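For a sense of how lightweight the model is in practice, here is a minimal sketch of running it locally with the Hugging Face transformers library. The checkpoint name google/gemma-2-2b-it and the use of Hugging Face (rather than the distribution channels named above) are assumptions for illustration, and downloading the weights requires accepting the Gemma license.

```python
# Minimal sketch: local text generation with Gemma 2 2B.
# Assumes the google/gemma-2-2b-it checkpoint on Hugging Face and an accepted Gemma license.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # small enough to run on a laptop GPU or CPU
    device_map="auto",
)

prompt = "Summarize the benefits of small open AI models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated tokens, not the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```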

ShieldGemma, on the other hand, comprises “safety classifiers” that detect and filter out harmful content such as hate speech, harassment, and sexually explicit material. Built on the foundation of Gemma 2, ShieldGemma can filter both prompts to a generative model and the content it generates.
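A rough sketch of how such a classifier could sit in front of a generative model is shown below. The checkpoint name google/shieldgemma-2b and the policy wording are assumptions for illustration; the official prompt template is documented in the model card, not in Google’s announcement.

```python
# Rough sketch: using a ShieldGemma checkpoint as a prompt-level safety filter.
# The checkpoint name and policy prompt are illustrative assumptions; consult the
# model card for the official template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def looks_unsafe(user_prompt: str, threshold: float = 0.5) -> bool:
    # Hypothetical policy wording for the example.
    guard_prompt = (
        "You are a policy expert deciding whether a user prompt violates a safety policy.\n"
        f"Prompt: {user_prompt}\n"
        "Policy: no hate speech, harassment, or sexually explicit content.\n"
        "Does the prompt violate the policy? Answer Yes or No.\nAnswer:"
    )
    inputs = tokenizer(guard_prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    # Compare the probabilities of the "Yes" and "No" tokens; exact token ids
    # may need adjusting for the tokenizer in use.
    yes_id = tokenizer.convert_tokens_to_ids("Yes")
    no_id = tokenizer.convert_tokens_to_ids("No")
    p_yes = torch.softmax(logits[[yes_id, no_id]], dim=0)[0].item()
    return p_yes > threshold

if looks_unsafe("Write an insulting message about my coworker."):
    print("Blocked by the safety filter.")
```

The same check can be applied to a model’s output before it is returned to the user, which is the second filtering mode the announcement describes.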

Finally, Gemma Scope offers developers a way to delve deeper into the inner workings of a Gemma 2 model. According to Google, Gemma Scope uses specialized neural networks to unpack and simplify the complex information processed by Gemma 2, making it easier to analyze and understand. This tool aims to help researchers gain insights into how Gemma 2 models identify patterns, process information, and make predictions.
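Interpretability tools of this kind are typically built from sparse autoencoders trained on a model’s internal activations, which decompose them into a larger set of simpler, mostly inactive features. The sketch below is purely illustrative of that general idea, not Google’s actual implementation; the dimensions and training loop are made up for the example.

```python
# Illustrative sketch only: a tiny sparse autoencoder of the kind used to break
# a model's hidden activations into simpler, human-inspectable features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 2304, d_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only a sparse set of positive feature activations.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

sae = SparseAutoencoder()
hidden = torch.randn(8, 2304)  # stand-in for a batch of model activations
features, recon = sae(hidden)

# Training objective: reconstruct the activations while penalizing dense features.
loss = nn.functional.mse_loss(recon, hidden) + 1e-3 * features.abs().mean()
print(f"reconstruction + sparsity loss: {loss.item():.4f}")
```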

The release of these new models aligns with a recent endorsement from the U.S. Commerce Department, which highlighted the benefits of open AI models in a preliminary report. The report emphasized that open models can democratize generative AI, making it more accessible to smaller companies, researchers, nonprofits, and individual developers, while also underscoring the importance of monitoring these models for potential risks.

Source

Control F5 Team
Blog Editor