
Guide to Integrating Google Gemini Models with OpenClaw


Introduction

Gemini is Google's family of next-generation multimodal AI models, with strong text, image, audio, and video understanding. OpenClaw fully supports the Gemini model family, and Google's remarkably generous free quotas make it an excellent choice for individual users and small teams. This tutorial walks you through the complete Gemini integration process.

Prerequisites

| Requirement | Details |
|---|---|
| OpenClaw installed and running | openclaw doctor passes all checks |
| Google account | For logging into Google AI Studio |
| Network access | Ability to reach Google services |

Step 1: Obtain a Gemini API Key

1.1 Via Google AI Studio (Recommended)

This is the simplest method, ideal for individual developers:

  1. Visit aistudio.google.com
  2. Sign in with your Google account
  3. Click Get API key in the left menu
  4. Click Create API key
  5. Select an existing Google Cloud project or create a new one
  6. Copy the generated API Key

API Key format example:

AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
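Before pasting the key anywhere, a quick format-only sanity check can catch copy/paste mistakes. This is a sketch resting on one assumption: AI Studio keys start with AIza and are 39 characters long. Passing it does not prove the key is active; only a real API call can do that.

```shell
# Format-only sanity check (assumption: AI Studio keys start with "AIza"
# and are 39 characters long; this does not prove the key is active)
KEY="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder copied from AI Studio
case "$KEY" in
  AIza*) echo "prefix OK, length: ${#KEY}" ;;
  *)     echo "unexpected prefix - re-copy the key from AI Studio" ;;
esac
```

A truncated paste is by far the most common failure mode, so checking the printed length against 39 is usually enough to spot it.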

1.2 Confirm Regional Availability

The Gemini API may not be available in all regions. As of 2026, the following regions are supported:

  • United States, Canada
  • Most European countries
  • Japan, South Korea, Singapore, India
  • Australia

If your region is not yet supported, consider using the Vertex AI approach (covered below).

Step 2: Configure OpenClaw

2.1 Basic Configuration

Edit the OpenClaw configuration file:

nano ~/.config/openclaw/openclaw.json5

Add the Gemini model configuration:

{
  models: {
    gemini: {
      provider: "google",
      apiKey: "AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      defaultModel: "gemini-2.5-pro",
    }
  }
}

2.2 Store the Key Using Environment Variables

# Add to ~/.bashrc or ~/.zshrc
export GOOGLE_AI_API_KEY="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
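After reloading your shell, it is worth confirming the variable actually resolves, since an unset or unexported variable is a common cause of authentication failures later. A minimal check that avoids echoing the full secret into your scrollback (the key value below is a placeholder; in practice it comes from your ~/.bashrc):

```shell
# Confirm the variable resolves in the current shell, printing only the
# first six characters so the full secret stays out of terminal history
export GOOGLE_AI_API_KEY="AIzaSy-placeholder-value"  # normally set in ~/.bashrc
PREFIX=$(printf '%s' "$GOOGLE_AI_API_KEY" | cut -c1-6)
if [ -n "$GOOGLE_AI_API_KEY" ]; then
  echo "GOOGLE_AI_API_KEY is set, prefix: $PREFIX"
else
  echo "GOOGLE_AI_API_KEY is NOT set - check your shell profile"
fi
```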

Update the configuration file to reference the environment variable:

{
  models: {
    gemini: {
      provider: "google",
      apiKey: "${GOOGLE_AI_API_KEY}",
      defaultModel: "gemini-2.5-pro",
    }
  }
}

2.3 Restart the Service

openclaw restart
openclaw doctor

Seeing ✓ Google Gemini connection OK confirms a successful setup.

Step 3: Gemini Model Selection

Google currently offers several Gemini models, each with different strengths:

| Model | Context Window | Highlights | Recommended Use Case |
|---|---|---|---|
| gemini-2.5-pro | 1M tokens | Strongest reasoning, deep thinking | Complex analysis, code generation, long documents |
| gemini-2.5-flash | 1M tokens | Fast responses, great value | Daily conversation, quick Q&A |
| gemini-2.0-flash | 1M tokens | Previous-gen fast model | Latency-sensitive scenarios |
| gemini-2.0-flash-lite | 1M tokens | Most lightweight model | Simple tasks, high concurrency |

3.1 Recommended Configuration

For most users, the following combination is recommended:

{
  models: {
    "gemini-main": {
      provider: "google",
      apiKey: "${GOOGLE_AI_API_KEY}",
      defaultModel: "gemini-2.5-pro",
      parameters: {
        temperature: 0.7,
        maxTokens: 8192,
      }
    },
    "gemini-fast": {
      provider: "google",
      apiKey: "${GOOGLE_AI_API_KEY}",
      defaultModel: "gemini-2.5-flash",
      parameters: {
        temperature: 0.7,
        maxTokens: 4096,
      }
    }
  }
}

Step 4: Free Quota Details

Google AI Studio offers very generous free quotas:

| Model | Free RPM (Requests/Min) | Free TPM (Tokens/Min) | Free RPD (Requests/Day) |
|---|---|---|---|
| gemini-2.5-pro | 5 | 250,000 | 25 |
| gemini-2.5-flash | 15 | 1,000,000 | 500 |
| gemini-2.0-flash | 15 | 1,000,000 | 1,500 |
| gemini-2.0-flash-lite | 30 | 1,000,000 | 3,000 |

For individual users, the free quota of gemini-2.5-flash is generally sufficient. If you exceed the free tier, you can enable a paid plan in the Google Cloud Console.
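When setting client-side limits against these caps, it helps to stay slightly below the published numbers, since the client's and server's minute windows rarely line up exactly. A quick back-of-envelope for gemini-2.5-flash (15 RPM / 500 RPD per the table above), keeping roughly 5% headroom:

```shell
# Derive conservative client-side limits from the published free-tier caps
# for gemini-2.5-flash, leaving headroom so a skewed minute window does
# not trip a 429
FREE_RPM=15
FREE_RPD=500
SAFE_RPM=$((FREE_RPM - 1))          # one request per minute of slack
SAFE_RPD=$((FREE_RPD * 96 / 100))   # ~4% daily headroom
echo "maxRequestsPerMinute: $SAFE_RPM"
echo "maxRequestsPerDay: $SAFE_RPD"
```

This yields 14 requests per minute and 480 per day, which is where the values in the rate-limit configuration below come from.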

4.1 Tips for Maximizing Free Quota Usage

{
  models: {
    gemini: {
      provider: "google",
      apiKey: "${GOOGLE_AI_API_KEY}",
      defaultModel: "gemini-2.5-flash",  // Use flash by default for larger free quota
      parameters: {
        maxTokens: 2048,                  // Limit output length to save tokens
      }
    }
  },
  rateLimit: {
    gemini: {
      maxRequestsPerMinute: 14,           // Leave some headroom to avoid hitting limits
      maxRequestsPerDay: 480,
    }
  }
}

Step 5: Vertex AI Approach

If you need enterprise-grade service or your region does not support AI Studio, you can use Google Cloud's Vertex AI:

5.1 Enable Vertex AI

  1. Log in to the Google Cloud Console
  2. Create or select a project
  3. Enable the Vertex AI API
  4. Create a service account and download the JSON key file
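These console steps can also be scripted with the gcloud CLI. A sketch, assuming an authenticated gcloud session; the project ID and service-account name are placeholders, and depending on your setup you may additionally need to grant the account a Vertex AI role such as roles/aiplatform.user:

```shell
# CLI sketch of the console steps above (PROJECT_ID and SA_NAME are
# placeholders; requires an authenticated gcloud session)
PROJECT_ID="your-gcp-project-id"
SA_NAME="openclaw-vertex"

if command -v gcloud >/dev/null 2>&1; then
  # Step 3: enable the Vertex AI API
  gcloud services enable aiplatform.googleapis.com --project "$PROJECT_ID"
  # Step 4: create the service account and download its JSON key
  gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
  gcloud iam service-accounts keys create service-account-key.json \
    --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
else
  echo "gcloud CLI not found - install the Google Cloud SDK first"
fi
```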

5.2 Configure OpenClaw with Vertex AI

{
  models: {
    "gemini-vertex": {
      provider: "google-vertex",
      serviceAccountKeyFile: "/path/to/service-account-key.json",
      projectId: "your-gcp-project-id",
      region: "us-central1",            // Choose the region closest to you
      defaultModel: "gemini-2.5-pro",
      parameters: {
        temperature: 0.7,
        maxTokens: 8192,
      }
    }
  }
}

5.3 Vertex AI vs. AI Studio Comparison

| Criteria | AI Studio | Vertex AI |
|---|---|---|
| Ideal for | Individuals, small teams | Enterprises, production environments |
| Authentication | API Key | Service account |
| Free quota | Yes | Limited free trial |
| SLA guarantee | No | Yes |
| Data residency | No guarantee | Configurable by region |
| Billing | Pay-as-you-go | Pay-as-you-go with reserved discounts |

Step 6: Multimodal Capabilities Configuration

One of Gemini's major highlights is native multimodal support. You can send images, PDFs, and other content through chat channels, and Gemini can understand and respond to them.

6.1 Enable Image Understanding

{
  models: {
    gemini: {
      provider: "google",
      apiKey: "${GOOGLE_AI_API_KEY}",
      defaultModel: "gemini-2.5-flash",
      capabilities: {
        vision: true,        // Enable image understanding
        pdf: true,           // Enable PDF parsing
        audio: true,         // Enable audio understanding
      }
    }
  }
}

6.2 Supported File Types

| File Type | Supported Formats | Notes |
|---|---|---|
| Images | PNG, JPEG, WebP, GIF | Supports multiple images per input |
| Documents | PDF | Can extract and understand PDF content |
| Audio | MP3, WAV, FLAC | Supports speech-to-text and comprehension |
| Video | MP4, WEBM | Can understand video content (select models) |
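For a sense of what a multimodal request looks like on the wire, here is a sketch of a payload for the public Gemini REST API's generateContent endpoint: an inline base64 image next to a text prompt. The field names follow the public API; the image bytes here are a stand-in (in practice you would base64-encode a real file), and the curl invocation is shown commented out since it needs a valid key and network access:

```shell
# Sketch of a multimodal generateContent payload (field names per the
# public Gemini REST API; the base64 content below is a stand-in)
MODEL="gemini-2.5-flash"
IMG_B64=$(printf 'fakebytes' | base64)   # in practice: base64 < your-image.png
BODY=$(cat <<EOF
{
  "contents": [{
    "parts": [
      {"text": "Describe this image."},
      {"inline_data": {"mime_type": "image/png", "data": "${IMG_B64}"}}
    ]
  }]
}
EOF
)
echo "$BODY"
# With a valid key, send it like so:
# curl -s -X POST \
#   "https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${GOOGLE_AI_API_KEY}" \
#   -H "Content-Type: application/json" -d "$BODY"
```

OpenClaw handles this encoding for you when a chat channel delivers an attachment; the sketch is only to show the shape of the underlying request.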

Step 7: Troubleshooting

Invalid API Key

Error: API key not valid. Please pass a valid API key.

Verify the API Key was copied correctly and starts with AIza. If the error persists, try regenerating the key in AI Studio.
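To rule OpenClaw out entirely, you can exercise the key directly against the API's model-listing endpoint (part of the public Gemini REST surface). A JSON list of models means the key itself is fine and the problem lies in the OpenClaw configuration; an error body means the key is being rejected:

```shell
# Direct key check that bypasses OpenClaw (needs network access and
# GOOGLE_AI_API_KEY exported in the environment)
URL="https://generativelanguage.googleapis.com/v1beta/models?key=${GOOGLE_AI_API_KEY}"
if [ -n "${GOOGLE_AI_API_KEY}" ]; then
  curl -s "$URL"
else
  echo "GOOGLE_AI_API_KEY is not set - export it first"
fi
```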

Regional Restrictions

Error: User location is not supported for the API use.

Your region may not support the Gemini API. Solutions:

  • Use the Vertex AI approach
  • Access Gemini indirectly through OpenRouter (see the OpenRouter tutorial)

Free Quota Exhausted

Error: 429 Resource has been exhausted

Solutions:

  • Switch to gemini-2.0-flash-lite for a higher free quota
  • Enable a Google Cloud paid plan
  • Configure rate limiting in OpenClaw to avoid frequent triggers

Truncated Responses

If responses are frequently cut off, check the maxTokens setting:

parameters: {
  maxTokens: 8192,  // Increase the output limit
}

Summary

Gemini models are an outstanding value choice within the OpenClaw ecosystem. Thanks to generous free quotas and powerful multimodal capabilities, individual users can run a feature-rich AI assistant at zero cost. We recommend using gemini-2.5-flash for daily use to balance speed and quality, and switching to gemini-2.5-pro for complex tasks. If your team has enterprise-level requirements, Vertex AI is well worth considering.
