gpt-5.2

Display Name: GPT-5.2
Provider: OpenAI
Released: December 14, 2025

GPT-5.2 is OpenAI's best model for coding and agentic tasks across industries.

Specifications

Context Window: 400,000 tokens
Maximum Output: 128,000 tokens
Input: text, image
Output: text, json

Pricing (per million tokens)

Standard
Input: $1.93
Output: $15.40
Cached Input: $0.19

Flex
Input: $0.96
Output: $7.70
Cached Input: $0.10

Batch
Input: $0.96
Output: $7.70
Cached Input: $0.10
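As a quick sanity check on these rates, a request's cost can be estimated from its token counts. The sketch below uses the standard-tier prices listed above (per million tokens); the helper and its names are illustrative, not part of any SDK:

```python
# Standard-tier rates from the pricing table, in USD per million tokens
RATES = {"input": 1.93, "output": 15.40, "cached_input": 0.19}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request at standard-tier pricing."""
    billable_input = input_tokens - cached_tokens  # cached tokens bill at the lower rate
    cost = (
        billable_input * RATES["input"]
        + cached_tokens * RATES["cached_input"]
        + output_tokens * RATES["output"]
    ) / 1_000_000
    return round(cost, 6)

# e.g. a 100K-token prompt with a 2K-token reply
print(estimate_cost(100_000, 2_000))  # 0.2238
```

Note how output tokens dominate the bill: at these rates they cost roughly 8x as much as fresh input tokens.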

Documentation

GPT-5.2

GPT-5.2 is OpenAI's most capable model series, released in December 2025. It features a massive 400K token context window, enhanced reasoning capabilities, and improved performance on coding, math, and scientific tasks.

Basic Usage

python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ohmygpt.com/v1",
    api_key="your-api-key",
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
)

print(response.choices[0].message.content)

Using GPT-5.2 Thinking

For complex reasoning tasks like coding and planning, use the thinking variant:

python
response = client.chat.completions.create(
    model="gpt-5.2-thinking",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function to find the longest palindromic substring."
        }
    ],
)

Vision Example

GPT-5.2 has enhanced vision capabilities for analyzing images, charts, and documents:

python
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Analyze this chart and summarize the key trends."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"}
                }
            ]
        }
    ],
)
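For local files, the same `image_url` field also accepts a base64 data URL instead of a remote link (standard behavior of the OpenAI-compatible chat API). A minimal helper, with placeholder bytes standing in for a real file read:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL for an image_url content part."""
    return f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")

# Same message shape as above, with the remote URL swapped for inline data.
# In practice: to_data_url(open("chart.png", "rb").read())
content = [
    {"type": "text", "text": "Analyze this chart and summarize the key trends."},
    {"type": "image_url", "image_url": {"url": to_data_url(b"<raw png bytes>")}},
]
```

Pass `content` as the user message's `content` in the same `chat.completions.create` call shown above.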

Long Context Example

Take advantage of the 400K context window for processing large documents:

python
# Read a large document or codebase
with open("large_document.txt", "r") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant that analyzes documents."
        },
        {
            "role": "user",
            "content": f"Summarize the key points from this document:\n\n{document}"
        }
    ],
)
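Before sending a very large document, it helps to check that it will actually fit. The sketch below uses a rough 4-characters-per-token heuristic for English text (exact counts require a real tokenizer such as tiktoken), and reserves room for the 128K maximum output:

```python
# Rough pre-flight budget check before sending a large document.
# ~4 characters per token is a heuristic for English text, not an exact count.
CONTEXT_WINDOW = 400_000   # model context window, in tokens
RESERVED_OUTPUT = 128_000  # leave headroom for the maximum completion

def fits_context(text: str, chars_per_token: float = 4.0) -> bool:
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_WINDOW - RESERVED_OUTPUT

print(fits_context("a" * 1_000_000))   # ~250K tokens: fits
print(fits_context("a" * 2_000_000))   # ~500K tokens: does not fit
```

If the check fails, trim or summarize the document first rather than relying on the API to truncate it for you.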

Best Practices

  1. Choose the Right Variant: Use gpt-5.2 for fast responses, gpt-5.2-thinking for complex reasoning, and gpt-5.2-pro for the highest accuracy
  2. Leverage Long Context: Process entire codebases or document collections without chunking
  3. Use System Messages: Set clear context and behavior expectations
  4. Monitor Token Usage: The 400K context is powerful but costs more—use it strategically