OpenAI
This quickstart guide will help you get started using SeekrFlow via the OpenAI SDK.
The Compatibility API allows developers to use SeekrFlow through OpenAI's SDK.
It makes it easy to switch existing OpenAI-based applications over to Seekr's models while continuing to use the OpenAI SDK, with no major refactoring needed.
Installation
First, install the OpenAI SDK (`pip install openai`) and import the package.
Then, create a client and configure it with the Compatibility API base URL and your Seekr API key.
import os
import openai

# Set the API key (use your Seekr API key)
os.environ["OPENAI_API_KEY"] = "Your Seekr API key here"

# Create the OpenAI client configured with the Compatibility API base URL
client = openai.OpenAI(
    base_url="https://flow.seekr.com/v1/inference",
    api_key=os.environ.get("OPENAI_API_KEY"),
)
Basic chat completions
Here’s a basic example using the Chat Completions API:
from openai import OpenAI

client = OpenAI(
    base_url="https://flow.seekr.com/v1/inference",
    api_key="SEEKR_API_KEY",
)

completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B",
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about eating cake at the gym.",
        },
    ],
)

print(completion.choices[0].message)
Chat with streaming
To stream the response, set the stream parameter to True.
from openai import OpenAI

client = OpenAI(
    base_url="https://flow.seekr.com/v1/inference",
    api_key="SEEKR_API_KEY",
)

stream = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B",
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about eating cake at the gym.",
        },
    ],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
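If you also need the complete reply once streaming finishes, the deltas can be accumulated into a single string. The sketch below fakes the stream with SimpleNamespace objects shaped like the SDK's chunks so the pattern runs offline; the chunk contents are illustrative, not real model output.

```python
from types import SimpleNamespace

# Fake chunks shaped like the SDK's streaming deltas (illustrative content).
def fake_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

# The final chunk of a real stream often carries delta.content=None,
# which is why the loop uses `or ""`.
stream = [fake_chunk("Frosting "), fake_chunk("and reps"), fake_chunk(None)]

# Same accumulation pattern as the loop above, collecting the full reply.
reply = "".join(chunk.choices[0].delta.content or "" for chunk in stream)
print(reply)  # Frosting and reps
```

With a real stream from the client, the same join over `chunk.choices[0].delta.content or ""` produces the full completion text.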
State management
For state management, use the messages parameter to build the conversation history.
You can include a system-style instruction along with multiple chat turns between the user and assistant.
from openai import OpenAI

client = OpenAI(
    base_url="https://flow.seekr.com/v1/inference",
    api_key="SEEKR_API_KEY",
)

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Respond in the style of Russ Hanneman.",
        },
        {
            "role": "user",
            "content": "What does ROI mean?",
        },
        {
            "role": "assistant",
            "content": "Radio. On. Internet.",
        },
        {
            "role": "user",
            "content": "How did you make your first billion?",
        },
    ],
    model="meta-llama/Meta-Llama-3-8B",
)

print(completion.choices[0].message)
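Because the API itself is stateless, the caller carries the history forward between requests. A minimal sketch of that bookkeeping (the `append_turn` helper is illustrative, not part of the SDK):

```python
# The API holds no conversation state: keep the full turn list client-side
# and append each new message before every request.
def append_turn(messages, role, content):
    messages.append({"role": role, "content": content})
    return messages

history = []
append_turn(history, "user", "What does ROI mean?")
append_turn(history, "assistant", "Radio. On. Internet.")
append_turn(history, "user", "How did you make your first billion?")

# `history` is then passed as the `messages` argument of the next
# client.chat.completions.create(...) call.
print(len(history))  # 3
```

After each response, append the assistant's reply to the same list so the next request sees the whole conversation.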
Tool use (function calling)
You can use tool calling by passing a list of tools to the tools parameter in the API call. This example defines a custom unit conversion tool that can be configured dynamically.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://flow.seekr.com/v1/inference",
    api_key=os.environ.get("OPENAI_API_KEY"),
)
# Send a request to the Compatibility API so the specified Llama model can
# call the unit conversion tool.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    stream=False,
    messages=[{
        "role": "user",
        "content": "Convert from 5 kilometers to miles"
    }],
    max_tokens=100,
    tools=[{
        "type": "function",
        "function": {
            "name": "convert_units",
            "description": "Convert between different units of measurement",
            "parameters": {
                "type": "object",
                "properties": {
                    "value": {"type": "number"},
                    "from_unit": {"type": "string"},
                    "to_unit": {"type": "string"}
                },
                "required": ["value", "from_unit", "to_unit"]
            }
        }
    }]
)
Next, register the function from JSON and run the unit conversion tool:
# Parse the JSON spec and register the function it describes
def register_from_json(json_obj):
    code = f"def {json_obj['name']}({', '.join(json_obj['args'])}):\n{json_obj['docstring']}\n{json_obj['code']}"
    print(code)
    namespace = {}
    exec(code, namespace)
    # Register globally so the dispatcher below can find it by name
    globals()[json_obj["name"]] = namespace[json_obj["name"]]
    return namespace[json_obj["name"]]
# Execute our tool
def execute_tool_call(resp):
    tool_call = resp.choices[0].message.tool_calls[0]
    func_name = tool_call.function.name
    args = tool_call.function.arguments
    func = globals().get(func_name)
    if not func:
        raise ValueError(f"Function {func_name} not found")
    if isinstance(args, str):
        import json
        args = json.loads(args)
    return func(**args)

execute_tool_call(response)
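The two helpers above can be exercised offline as a runnable sketch. The JSON spec, conversion factor, and fake response object below are illustrative stand-ins, not actual Seekr API output; the helpers are copied here so the block is self-contained.

```python
import json
from types import SimpleNamespace

# Copies of the helpers above so the sketch is self-contained.
def register_from_json(json_obj):
    code = f"def {json_obj['name']}({', '.join(json_obj['args'])}):\n{json_obj['docstring']}\n{json_obj['code']}"
    namespace = {}
    exec(code, namespace)
    globals()[json_obj["name"]] = namespace[json_obj["name"]]
    return namespace[json_obj["name"]]

def execute_tool_call(resp):
    tool_call = resp.choices[0].message.tool_calls[0]
    func = globals().get(tool_call.function.name)
    if not func:
        raise ValueError(f"Function {tool_call.function.name} not found")
    args = tool_call.function.arguments
    if isinstance(args, str):
        args = json.loads(args)
    return func(**args)

# Hypothetical JSON spec for the conversion tool (values illustrative).
spec = {
    "name": "convert_units",
    "args": ["value", "from_unit", "to_unit"],
    "docstring": '    """Convert value between two units."""',
    "code": "    factors = {('kilometers', 'miles'): 0.621371}\n    return value * factors[(from_unit, to_unit)]",
}
register_from_json(spec)

# Fake tool-call response shaped like the SDK object the model would return.
call = SimpleNamespace(function=SimpleNamespace(
    name="convert_units",
    arguments=json.dumps({"value": 5, "from_unit": "kilometers", "to_unit": "miles"}),
))
resp = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(tool_calls=[call]))])

print(round(execute_tool_call(resp), 4))  # 3.1069
```

With a real response, the model decides whether to emit `tool_calls`, and the dispatcher routes the call to whichever registered function matches the name.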
Supported parameters
The following is the list of parameters supported by the Compatibility API, including some that are not explicitly demonstrated in the examples above:
- model
- messages
- stream
- temperature
- logprobs
- top_logprobs
- max_tokens
- stop
- top_p
- frequency_penalty
- presence_penalty
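All of these are passed as keyword arguments to client.chat.completions.create. As a sketch, the request body below exercises each supported parameter; the values are illustrative defaults, not recommendations.

```python
# Request body exercising the supported parameters (illustrative values).
request = {
    "model": "meta-llama/Meta-Llama-3-8B",
    "messages": [{"role": "user", "content": "Write a haiku about leg day."}],
    "stream": False,
    "temperature": 0.7,
    "logprobs": True,
    "top_logprobs": 3,
    "max_tokens": 64,
    "stop": ["\n\n"],
    "top_p": 0.9,
    "frequency_penalty": 0.2,
    "presence_penalty": 0.1,
}

# Usage: completion = client.chat.completions.create(**request)
print(sorted(request))
```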
Unsupported parameters
The following parameters are not supported in the Compatibility API:
- tool_choice
- store
- reasoning_effort
- metadata
- logit_bias
- max_completion_tokens
- n
- modalities
- prediction
- audio
- service_tier
- stream_options
- parallel_tool_calls
- user