Google has announced its latest AI model, Gemini, which was built to be multimodal, meaning it can interpret information in multiple formats, spanning text, code, audio, image, and video.
According to Google, the typical approach to creating a multimodal model involves training components for different information formats separately and then stitching them together. What sets Gemini apart is that it was pre-trained from the start on different modalities and then fine-tuned with additional multimodal data.
“This helps Gemini seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models — and its capabilities are state of the art in nearly every domain,” Sundar Pichai, CEO of Google and Alphabet, and Demis Hassabis, CEO and co-founder of Google DeepMind, wrote in a blog post.
Google also explained that the new model has sophisticated reasoning capabilities that allow it to understand complex written and visual information, making it “uniquely skilled at uncovering knowledge that can be difficult to discern amid vast amounts of data.”
For example, it can read through hundreds of thousands of documents and extract insights that could lead to new breakthroughs in certain fields.
Its multimodal nature also makes it particularly suited to understanding and answering questions in complex fields like math and physics.
Gemini 1.0 comes in three different versions, each tailored to a different size requirement. In order from largest to smallest, Gemini is available in Ultra, Pro, and Nano versions.
According to Google’s initial benchmarking, Gemini Ultra exceeds current state-of-the-art results on 30 of the 32 academic benchmarks widely used in model development and research. Gemini Ultra is also the first model to outperform human experts on MMLU (massive multitask language understanding), a benchmark that spans 57 subjects, including math, physics, history, law, medicine, and ethics.
Gemini Pro is now integrated into Bard, making it the biggest update to Bard since its initial release. The Pixel 8 Pro has also been engineered to make use of Gemini Nano to power features like Summarize in the Recorder app and Smart Reply in Gboard, Google’s keyboard app.
In the coming months, Gemini will also be added to more Google products, such as Search, Ads, Chrome, and Duet AI.
Developers will be able to access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI starting on December 13.
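For developers who want to try the API once access opens, the call pattern is short. The following is a minimal sketch using Google’s google-generativeai Python SDK; the model name ("gemini-pro") and the placeholder API key are assumptions based on the launch materials, so check the official documentation for exact details.

```python
# Minimal sketch: calling Gemini Pro through the Google AI Studio
# Python SDK (pip install google-generativeai). Assumes an API key
# created in Google AI Studio; names may differ from the docs.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key from AI Studio

# "gemini-pro" is the text model name exposed by the API at launch.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Explain the difference between a process and a thread in two sentences."
)
print(response.text)
```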
The first release of Gemini understands many popular programming languages, including Python, Java, C++, and Go. “Its ability to work across languages and reason about complex information makes it one of the leading foundation models for coding in the world,” Pichai and Hassabis wrote.
The company also used Gemini to create an advanced code generation system called AlphaCode 2 (an evolution of the first version Google released two years ago). It can solve competitive programming problems that involve complex math and theoretical computer science.
Along with the announcement of Gemini, Google is also announcing a new TPU system called Cloud TPU v5p, which is designed for “training cutting-edge AI models.”
“This next generation TPU will accelerate Gemini’s development and help developers and enterprise customers train large-scale generative AI models faster, allowing new products and capabilities to reach customers sooner,” Pichai and Hassabis wrote.
Google also highlighted how it followed its AI Principles when developing Gemini. It says it conducted new research into areas of potential risk, including cyber-offense, persuasion, and autonomy. The company also built safety classifiers to identify, label, and sort out content containing violence or negative stereotypes.
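Google’s internal safety classifiers are not exposed directly, but the Gemini API does surface adjustable safety settings that serve a related purpose. Below is a hedged sketch, again assuming the google-generativeai Python SDK; the category and threshold strings mirror the SDK’s documented enums and may change, so treat this as illustrative rather than definitive.

```python
# Sketch: requesting stricter content filtering from the Gemini API.
# Assumes the google-generativeai SDK; category/threshold strings
# follow the SDK's documented safety enums and may evolve.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Block anything the service rates medium-or-higher risk in these
# harm categories when generating a response.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
]

model = genai.GenerativeModel("gemini-pro", safety_settings=safety_settings)
response = model.generate_content("Summarize today's announcement in one sentence.")
print(response.text)
```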
“This is a significant milestone in the development of AI, and the start of a new era for us at Google as we continue to rapidly innovate and responsibly advance the capabilities of our models. We’ve made great progress on Gemini so far and we’re working hard to further extend its capabilities for future versions, including advances in planning and memory, and increasing the context window for processing even more information to give better responses,” Pichai and Hassabis wrote.