Google is announcing that the Gemini API and Google AI Studio now both offer the ability to ground models using Google Search, which will improve the accuracy and reliability of Gemini’s responses. By grounding responses in Google Search results, the model can produce fewer hallucinations, more up-to-date answers, and richer information. Grounded responses also include … continue reading
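As a sketch of what enabling grounding might look like: at the time of this announcement, grounding with Google Search was exposed in the Gemini REST API as a tool in the request body. The endpoint, model name, and field names below are assumptions drawn from the public API documentation, not from this article:

```python
import json

# Hypothetical sketch of a generateContent request body with Google Search
# grounding enabled; field names follow the public Gemini REST API at the
# time of writing and may change.
GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"  # assumed model name
)

def build_grounded_request(prompt: str) -> dict:
    """Build a request body that asks the model to ground its answer
    in Google Search results."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        # Adding the Google Search retrieval tool is what turns
        # grounding on for this particular request.
        "tools": [{"google_search_retrieval": {}}],
    }

body = build_grounded_request("Who won the most recent Nobel Prize in Physics?")
print(json.dumps(body, indent=2))
```

Sending this body (with an API key) to the `generateContent` endpoint would return a response with grounding metadata attached; the exact response shape is documented in the Gemini API reference.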
Gemini users can now more easily select the model that fits their requirements by using Google AI Studio’s new Compare Mode. “As a developer, you understand the critical tradeoffs involved in model selection, such as cost, latency, token limits, and response quality. Compare Mode simplifies this process by allowing you to evaluate … continue reading
Google is trying to make its AI assistant Gemini more useful by adding a conversation mode called Gemini Live, similar to how conversations in ChatGPT work. Gemini Live has a voice mode, so that users can speak their questions out loud rather than typing. This voice mode works even when the app is in the … continue reading
Google has announced that developers now have access to a 2 million token context window for Gemini 1.5 Pro. For comparison, GPT-4o has a 128k token context window. This context window length was first announced at Google I/O and was initially accessible only through a waitlist, but it is now available to everyone. Longer context windows can lead to higher costs, … continue reading
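To make the gap between the two window sizes concrete, here is a rough back-of-the-envelope check of whether a document fits in each window. The ~4 characters-per-token ratio is a common English-text heuristic, not the Gemini tokenizer; the API's token-counting endpoint gives real numbers:

```python
# Rough sketch: estimate whether a document fits in a model's context
# window. The ratio below is an assumed heuristic for English text,
# not an official tokenizer.
CHARS_PER_TOKEN = 4  # assumed heuristic

def estimated_tokens(text: str) -> int:
    """Crudely estimate the token count of a piece of text."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(text: str, window_tokens: int) -> bool:
    """Check the estimate against a model's context window size."""
    return estimated_tokens(text) <= window_tokens

# Under this heuristic, a 2 million token window holds roughly 8 MB of
# English text, versus roughly 512 KB for a 128k token window.
doc = "x" * 1_000_000  # a ~1 MB document, ~250k estimated tokens
print(fits_in_window(doc, 128_000))    # exceeds a 128k window
print(fits_in_window(doc, 2_000_000))  # fits comfortably in 2M
```

The same kind of estimate also hints at the cost tradeoff the article mentions: since input pricing is typically per token, filling a 2M window costs proportionally more than filling a 128k one.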
Google Cloud Next was this week, and the company unveiled a lot of innovations related to AI, such as two new Gemma models for code generation and inference. Google announced that Gemini 1.5 Pro will be entering public preview for Google Cloud customers, and it’s available through Vertex AI. This version of the model was … continue reading
Stack Overflow and Google Cloud have announced a new partnership aimed at better serving information to developers. Google Cloud will be integrating the AI model Gemini into Stack Overflow to surface relevant content in response to searches, and Google Cloud will also begin pulling in information directly from Stack Overflow so that developers don’t have … continue reading
After announcing its new multimodal AI model Gemini last week, Google is making several announcements today to enable developers to build with it. When first announced, Google said that Gemini would come in three different versions, each tailored to different size and complexity requirements. In order from largest to smallest, Gemini is available in … continue reading
Google has announced its latest AI model, Gemini, which was built from the start to be multimodal so that it could interpret information in multiple formats, spanning text, code, audio, image, and video. According to Google, the typical approach for creating a multimodal model involves training components for different information formats separately and then combining … continue reading