MeshWorld

Gemini API Cheat Sheet: 2.5 Pro, Vision & Grounding

By Vishnu Damwala

Quick reference tables

Models

| Model ID | Context | Best for |
|---|---|---|
| gemini-2.5-pro | 1M tokens | Complex reasoning, long documents, coding |
| gemini-2.5-flash | 1M tokens | Fast, cost-efficient, everyday tasks |
| gemini-2.0-flash | 1M tokens | Speed-optimized, multimodal |
| gemini-2.0-flash-lite | 1M tokens | Lightest, cheapest, high-volume |
| gemini-1.5-pro | 2M tokens | Largest context window available |
| text-embedding-004 | 2048 tokens | Text embeddings |
| imagen-3.0-generate-002 | n/a | Image generation |

generateContent — key parameters

| Parameter | Type | What it does |
|---|---|---|
| model | string | Which Gemini model to use |
| contents | array | [{role, parts}] — user/model turns |
| systemInstruction | object | System prompt {parts: [{text}]} |
| generationConfig | object | Temperature, tokens, format settings |
| safetySettings | array | Content filtering thresholds |
| tools | array | Function declarations or built-in tools |
| toolConfig | object | {functionCallingConfig: {mode}} |
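Put together, a raw generateContent request body combines these fields as plain JSON. A minimal sketch (the field names match the table above; the prompt text and values are illustrative):

```javascript
// Sketch of a generateContent request body using the parameters above.
// Values are illustrative, not recommendations.
const requestBody = {
  contents: [
    { role: "user", parts: [{ text: "Summarize this repo's README." }] },
  ],
  systemInstruction: { parts: [{ text: "You are a concise assistant." }] },
  generationConfig: { temperature: 0.4, maxOutputTokens: 512 },
  safetySettings: [
    { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
  ],
};

console.log(JSON.stringify(requestBody, null, 2));
```

The Node.js SDK builds this payload for you, but knowing the raw shape helps when debugging with `--debug` or calling the REST endpoint directly.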

generationConfig options

| Setting | Type | What it does |
|---|---|---|
| temperature | float 0–2 | Randomness |
| topP | float 0–1 | Nucleus sampling |
| topK | int | Token pool size |
| maxOutputTokens | int | Max response length |
| stopSequences | array | Strings that stop generation |
| responseMimeType | string | "application/json" for JSON mode |
| responseSchema | object | JSON schema for structured output |
| candidateCount | int | Number of responses to generate |
| thinkingConfig | object | {thinkingBudget: N} for reasoning |
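An example config object combining several of these settings (values are illustrative; thinkingConfig applies to the 2.5 reasoning models, per the table):

```javascript
// Example generationConfig combining settings from the table above.
const generationConfig = {
  temperature: 0.2,                     // low randomness for repeatable output
  topP: 0.95,
  maxOutputTokens: 1024,
  stopSequences: ["\n\n---"],           // stop at a divider
  responseMimeType: "application/json", // JSON mode
  thinkingConfig: { thinkingBudget: 1024 }, // reasoning-token budget (2.5 models)
};

console.log(Object.keys(generationConfig).join(", "));
```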

Built-in tools

| Tool | What it does |
|---|---|
| googleSearch | Grounds responses in live Google Search results |
| codeExecution | Runs Python code, returns output + charts |
| urlContext | Fetches and includes content from URLs |

Gemini CLI — commands

| Command | What it does |
|---|---|
| gemini | Start interactive REPL |
| gemini -p "prompt" | Non-interactive single prompt |
| gemini --model gemini-2.5-pro | Use a specific model |
| gemini --yolo | Auto-accept all tool actions (no confirmation) |
| gemini --sandbox | Run code execution in sandboxed environment |
| gemini --debug | Show full API request/response details |
| /help | Show slash commands in REPL |
| /clear | Clear conversation history |
| /stats | Show token usage for this session |
| /tools | List available tools |

Safety settings — harm categories

| Category | HarmCategory constant |
|---|---|
| Dangerous content | HARM_CATEGORY_DANGEROUS_CONTENT |
| Harassment | HARM_CATEGORY_HARASSMENT |
| Hate speech | HARM_CATEGORY_HATE_SPEECH |
| Sexually explicit | HARM_CATEGORY_SEXUALLY_EXPLICIT |

Threshold values: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
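A safetySettings array pairs a category constant with a threshold and is passed alongside the model config. A minimal sketch using the constants above:

```javascript
// Example safetySettings: one {category, threshold} entry per category
// you want to override. Thresholds here are illustrative choices.
const safetySettings = [
  { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
  { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_LOW_AND_ABOVE" },
];

// Passed to the SDK as:
// genAI.getGenerativeModel({ model: "gemini-2.0-flash", safetySettings });
console.log(safetySettings.length, "overrides configured");
```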


Detailed sections

Basic text generation (Node.js)

import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

const result = await model.generateContent("Explain recursion simply.");
console.log(result.response.text());

System instruction + chat

const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash",
  systemInstruction: "You are a senior DevOps engineer. Give concise, practical answers.",
});

const chat = model.startChat();

const r1 = await chat.sendMessage("What is a Kubernetes pod?");
console.log(r1.response.text());

const r2 = await chat.sendMessage("How is it different from a deployment?");
console.log(r2.response.text());

Streaming response

const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

const result = await model.generateContentStream(
  "Write a step-by-step guide to setting up CI/CD."
);

for await (const chunk of result.stream) {
  process.stdout.write(chunk.text());
}

Vision — image input

import fs from "fs";

const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });

const imageData = fs.readFileSync("diagram.png");
const base64 = imageData.toString("base64");

const result = await model.generateContent([
  { inlineData: { mimeType: "image/png", data: base64 } },
  "Describe what's in this architecture diagram.",
]);

console.log(result.response.text());

JSON / structured output

const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash",
  generationConfig: {
    responseMimeType: "application/json",
    responseSchema: {
      type: "object",
      properties: {
        name: { type: "string" },
        language: { type: "string" },
        stars: { type: "integer" },
      },
      required: ["name", "language", "stars"],
    },
  },
});

const result = await model.generateContent(
  "Extract repo info from: react/react - JavaScript - 230k stars"
);

const data = JSON.parse(result.response.text());
console.log(data); // { name: 'react/react', language: 'JavaScript', stars: 230000 }

Function calling (tool use)

const tools = [
  {
    functionDeclarations: [
      {
        name: "get_stock_price",
        description: "Get the current stock price for a ticker symbol",
        parameters: {
          type: "object",
          properties: {
            ticker: {
              type: "string",
              description: "Stock ticker symbol, e.g. GOOG",
            },
          },
          required: ["ticker"],
        },
      },
    ],
  },
];

const model = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  tools,
});

const result = await model.generateContent("What's Google's stock price?");
const response = result.response;

// Check if model wants to call a function
const call = response.candidates[0].content.parts[0].functionCall;
if (call) {
  console.log(call.name, call.args); // get_stock_price { ticker: 'GOOG' }
}
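The model doesn't execute the function itself: you run it, then return the result as a functionResponse part in the next turn so the model can compose the final answer. A sketch of that second step (the price value is made up; in a real flow you would use a chat session started with model.startChat so the earlier turns are retained):

```javascript
// Wrap your function's actual return value as a functionResponse part.
const functionResponsePart = {
  functionResponse: {
    name: "get_stock_price",
    response: { price: 178.25, currency: "USD" }, // your function's output
  },
};

// Then send it back on the same conversation, e.g.:
// const followUp = await chat.sendMessage([functionResponsePart]);
// console.log(followUp.response.text()); // natural-language answer using the price
console.log(functionResponsePart.functionResponse.name);
```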
Grounding with Google Search

const model = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  tools: [{ googleSearch: {} }], // enable live search grounding
});

const result = await model.generateContent(
  "What happened in AI news this week?"
);

console.log(result.response.text());

// Check grounding metadata
const groundingMeta = result.response.candidates[0].groundingMetadata;
console.log(groundingMeta?.webSearchQueries); // queries used
console.log(groundingMeta?.groundingChunks); // sources cited

Code execution

const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash",
  tools: [{ codeExecution: {} }],
});

const result = await model.generateContent(
  "Calculate the first 20 Fibonacci numbers and plot them."
);

// Response includes code written, execution output, and optionally a chart
const parts = result.response.candidates[0].content.parts;
for (const part of parts) {
  if (part.executableCode) console.log("Code:", part.executableCode.code);
  if (part.codeExecutionResult) console.log("Output:", part.codeExecutionResult.output);
}

Embeddings

const embModel = genAI.getGenerativeModel({ model: "text-embedding-004" });

const result = await embModel.embedContent("How do I deploy to Kubernetes?");
const vector = result.embedding.values; // float array (768 dims)

// Batch embeddings
const batchResult = await embModel.batchEmbedContents({
  requests: [
    { content: { parts: [{ text: "First document" }] } },
    { content: { parts: [{ text: "Second document" }] } },
  ],
});
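Embedding vectors are typically compared with cosine similarity (higher means more semantically similar). A minimal helper in plain JavaScript, no SDK required:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Use it on `result.embedding.values` from two embedContent calls to rank documents against a query vector.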

Long document — file upload (Files API)

import { GoogleAIFileManager } from "@google/generative-ai/server";

const fileManager = new GoogleAIFileManager(process.env.GEMINI_API_KEY);

// Upload a large PDF
const uploadResult = await fileManager.uploadFile("report.pdf", {
  mimeType: "application/pdf",
  displayName: "Q4 Report",
});

const file = uploadResult.file;
console.log(`Uploaded: ${file.uri}`);

// Use the uploaded file in a prompt
const model = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });

const result = await model.generateContent([
  { fileData: { fileUri: file.uri, mimeType: "application/pdf" } },
  "Summarize the key financial highlights from this report.",
]);

console.log(result.response.text());

Environment setup

# Install SDK
npm install @google/generative-ai

# Set API key
export GEMINI_API_KEY=AIza...

# Get a key: aistudio.google.com

# Python SDK
pip install google-generativeai

python3 -c "
import google.generativeai as genai
genai.configure(api_key='YOUR_KEY')
model = genai.GenerativeModel('gemini-2.0-flash')
r = model.generate_content('Hello!')
print(r.text)
"

Gemini CLI setup

# Install
npm install -g @google/gemini-cli

# Authenticate (opens browser)
gemini auth login

# Or set API key directly
export GEMINI_API_KEY=AIza...

# Start interactive session
gemini

# One-shot with file context
gemini -p "Review this code for bugs" < src/main.ts

See how Gemini compares in practice: Claude vs Gemini 2.5 for Coding.