Function calling enables Gemini to request execution of external tools and APIs during generation. The model intelligently determines when to call functions, extracts parameters from natural language, and integrates results into responses.
Let the SDK handle function execution automatically:
```python
from google import genai
from google.genai.types import GenerateContentConfig

client = genai.Client()


def get_weather(city: str) -> str:
    """Gets the weather in a city."""
    if "london" in city.lower():
        return "Rainy"
    if "new york" in city.lower():
        return "Sunny"
    return "Cloudy"


response = client.models.generate_content(
    model="gemini-3.1-pro-preview",
    contents="What's the weather in London and New York?",
    config=GenerateContentConfig(
        tools=[get_weather],  # Pass the Python function directly
    ),
)

# The SDK automatically executes the functions and returns the final text
print(response.text)

# View the execution history
for turn in response.automatic_function_calling_history:
    for part in turn.parts:
        if part.function_call:
            print(f"Called: {part.function_call.name}")
```
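The chat example below references a `retail_tool` that is not defined in this section. As a sketch, it could be declared in the plain-dict form the SDK accepts, with one function declaration per callable (the names and schemas here are assumptions matching the calls shown later; a `weather_tool` would follow the same shape):

```python
# Hypothetical declaration for the `retail_tool` referenced below,
# written as a plain dict with two function declarations.
retail_tool = {
    "function_declarations": [
        {
            "name": "get_product_info",
            "description": "Get stock status and SKU for a product.",
            "parameters": {
                "type": "OBJECT",
                "properties": {
                    "product_name": {
                        "type": "STRING",
                        "description": "Product name, e.g. 'Pixel 9'.",
                    },
                },
                "required": ["product_name"],
            },
        },
        {
            "name": "get_store_location",
            "description": "Find a store near a location.",
            "parameters": {
                "type": "OBJECT",
                "properties": {
                    "location": {
                        "type": "STRING",
                        "description": "City and state, e.g. 'Mountain View, CA'.",
                    },
                },
                "required": ["location"],
            },
        },
    ]
}
```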
To handle execution manually, inspect `response.function_calls`, run the function yourself, and send the result back with `Part.from_function_response`:

```python
from google.genai.types import GenerateContentConfig, Part

chat = client.chats.create(
    model="gemini-3-flash-preview",
    config=GenerateContentConfig(
        temperature=0,
        tools=[retail_tool],
    ),
)

# User asks about a product
response = chat.send_message("Do you have the Pixel 9 in stock?")
print(f"Function: {response.function_calls[0].name}")
# Output: Function: get_product_info

# Simulate the API response
api_response = {"sku": "GA04834-US", "in_stock": "yes"}
response = chat.send_message(
    Part.from_function_response(
        name="get_product_info",
        response={"content": api_response},
    )
)
print(response.text)
# Output: Yes, the Pixel 9 is in stock (SKU: GA04834-US).
```
Gemini can request multiple function calls simultaneously:
```python
response = chat.send_message(
    "What about the Pixel 9 Pro XL? Is there a store in Mountain View, CA?"
)

# Model requests both functions
print(f"Number of function calls: {len(response.function_calls)}")
for fc in response.function_calls:
    print(f"  - {fc.name}: {fc.args}")
```
Output:
```
Number of function calls: 2
  - get_product_info: {'product_name': 'Pixel 9 Pro XL'}
  - get_store_location: {'location': 'Mountain View, CA'}
```
To force the model to call a specific function, set the function calling mode to `ANY` and restrict the allowed function names:

```python
from google.genai.types import (
    ToolConfig,
    FunctionCallingConfig,
    FunctionCallingConfigMode,
)

response = client.models.generate_content(
    model="gemini-3-flash-preview",
    contents="Tell me about the weather",
    config=GenerateContentConfig(
        tools=[weather_tool],
        tool_config=ToolConfig(
            function_calling_config=FunctionCallingConfig(
                mode=FunctionCallingConfigMode.ANY,
                allowed_function_names=["get_weather"],
            )
        ),
    ),
)

# Model is forced to call get_weather
assert response.function_calls[0].name == "get_weather"
```
Stream function call arguments as they’re generated:
```python
for chunk in client.models.generate_content_stream(
    model="gemini-3-flash-preview",
    contents="What's the weather in London and New York?",
    config=GenerateContentConfig(
        tools=[weather_tool],
        tool_config=ToolConfig(
            function_calling_config=FunctionCallingConfig(
                mode=FunctionCallingConfigMode.AUTO,
                stream_function_call_arguments=True,
            )
        ),
    ),
):
    if chunk.function_calls:
        function_call = chunk.function_calls[0]
        if function_call.name:
            print(f"Function: {function_call.name}")
        print(f"Will continue: {function_call.will_continue}")
```
For thinking models such as Gemini 3.1 Pro, the SDK automatically manages thought signatures during function calling:
```python
from google.genai.types import GenerateContentConfig, ThinkingConfig

# Automatic handling with the SDK
response = client.models.generate_content(
    model="gemini-3.1-pro-preview",
    contents="What's the weather in London?",
    config=GenerateContentConfig(
        tools=[get_weather],
        thinking_config=ThinkingConfig(include_thoughts=True),
    ),
)

# Thought signatures are preserved automatically in response.candidates[0].content
```
When using manual function calling with Gemini 3.1 Pro, always append the full `response.candidates[0].content` to maintain thought signatures across turns.
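That append step can be sketched with a hypothetical helper (`extend_history` is not an SDK function, just an illustration); the key point is that the model's entire content object goes back into the history, not only the `function_call` part:

```python
def extend_history(history, model_content, tool_part):
    """Build the contents list for the next request: prior turns, then the
    model's FULL content (function_call parts AND thought signatures),
    then the tool's response part."""
    return [*history, model_content, tool_part]
```

Used with the real SDK, this would look like `contents = extend_history(contents, response.candidates[0].content, Part.from_function_response(name=fc.name, response={"result": result}))` before the follow-up `generate_content` call.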
Validate the model's arguments before executing the function, and wrap execution in error handling:

```python
try:
    if response.function_calls:
        function_call = response.function_calls[0]

        # Validate arguments
        required_args = ["location"]
        for arg in required_args:
            if arg not in function_call.args:
                raise ValueError(f"Missing required argument: {arg}")

        # Execute the function
        result = get_weather(**function_call.args)
except ValueError as e:
    print(f"Invalid function call: {e}")
except Exception as e:
    print(f"Function execution error: {e}")
```