

Open Responses API Examples

Below are practical examples showing how to use the Julep Open Responses API for various use cases.
  • The Open Responses API requires self-hosting. See the installation guide below.
  • Being in Alpha, the API is subject to change. Check back frequently for updates.
  • For more context, see the OpenAI Responses API documentation.

API Key Configuration

  • RESPONSE_API_KEY is the API key you set in the .env file of your self-hosted deployment.

Model Selection

  • When using models from providers other than OpenAI, you may need to add the provider/ prefix to the model name.
  • For supported providers, see the LiteLLM Providers documentation.
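The prefix convention above can be sketched as follows. This is a minimal illustration; the helper function and the model name are hypothetical examples, not part of the Julep API, and supported models depend on your provider configuration:

```python
# Hypothetical helper: join a LiteLLM provider prefix with a model name.
def prefixed_model(provider: str, model: str) -> str:
    return f"{provider}/{model}"

# Pass the result as the `model` argument to client.responses.create(...)
model_name = prefixed_model("anthropic", "claude-3-5-sonnet")
print(model_name)  # anthropic/claude-3-5-sonnet
```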

Environment Setup

  • Add the relevant provider keys to the .env file to use their respective models.
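A .env file following this setup might look like the sketch below. The variable values are placeholders, and the exact provider key names are assumptions based on common LiteLLM conventions; check the LiteLLM Providers documentation for the names your providers expect:

```shell
# API key for the self-hosted Open Responses API (choose your own value)
RESPONSE_API_KEY=your-chosen-key

# Provider keys (placeholders; add only the providers you use)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```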

Setup

First, set up your environment and create a client:
from openai import OpenAI

# Create an OpenAI client pointing to Julep's Open Responses API
client = OpenAI(base_url="http://localhost:8080/", api_key="RESPONSE_API_KEY")

Using Reasoning Features

Enhance your model’s reasoning capabilities for solving complex problems:
# Create a response with explicit reasoning
reasoning_response = client.responses.create(
    model="o1",
    input="If Sarah has 3 apples and John has 5, and they combine their apples, then how many apples do they have in total? Explain your approach.",
    reasoning={
        "effort": "medium"  # Control reasoning depth with "low", "medium", or "high"
    }
)

# Access the final answer
print(reasoning_response.output_text)
# Output: They would have 8 apples in total. The approach is straightforward: you simply add the number of apples Sarah has (3) to the number of apples John has (5), giving 3 + 5 = 8.

Using Web Search Tool

Let the model search the web for up-to-date information:
# Use the built-in web search tool
web_search_response = client.responses.create(
    model="gpt-4o-mini",
    tools=[{"type": "web_search_preview"}],
    input="What was a positive news story from today?",
)

# The output includes both the text response and any tool calls that were made
print(web_search_response.output_text)

Maintaining Conversation History

Create a continuous conversation by referencing previous responses:
# Reference a previous response to continue a conversation
follow_up_response = client.responses.create(
    model="gpt-4o-mini",
    input="What was the final answer?",
    previous_response_id=reasoning_response.id
)

print(follow_up_response.output_text)

Retrieving Past Responses

Access previously created responses by their ID:
# Retrieve a response by ID
retrieved_response = client.responses.retrieve(response_id="your-response-id")

Next Steps

You’ve got Open Responses running – here’s what to explore next: