Quickstart
Setting up SeekrFlow API and SDK
The Seekr Python Library is the official Python client for SeekrFlow's API platform. It provides a seamless way to interact with the SeekrFlow REST API, with both synchronous and asynchronous support, and integrates easily into Python 3.9+ applications.
This guide will walk you through every step of setting up SeekrFlow, from installation to running your first API call. Let’s get started!
Installation
To use the Seekr Python Library, you first need to install it via PyPI. Follow these steps:
- Open your terminal or command prompt.
- Run the following command to install the library:
pip install --upgrade seekrai
This command ensures that you have the latest version of the library.
Tip: If you encounter any errors, ensure you have Python 3.9 or higher installed. You can verify your Python version with:
python --version
If your Python version is outdated, update it from Python.org.
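You can also confirm the installation from Python itself by querying the package metadata. A small sketch using only the standard library; `seekrai` is the PyPI package name used in the install command above, and `installed_version` is an illustrative helper, not part of the SDK:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version of `package`, or None if it isn't installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("seekrai"))  # prints the installed version string, or None
```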
Setting up your SeekrFlow API key
To use the SeekrFlow platform, you'll first need to create a Seekr account.
Step 1: Create an Account
- Go to Seekr.com and create an account.
- Seekr will generate a new API key, which can be found by navigating to the User Profile section of the SeekrFlow dashboard.
Step 2: Store Your API Key
For security, store your API key as an environment variable. Here’s how:
- Open Terminal on your computer.
- Run the following command, replacing your_api_key with the key you generated:
export SEEKR_API_KEY=your_api_key
**Windows Users:** If you are on Windows, use the set command instead:
set SEEKR_API_KEY=your_api_key
- Confirm the variable is set by running:
echo $SEEKR_API_KEY
(or echo %SEEKR_API_KEY% on Windows).
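In your own scripts, it helps to fail fast with a clear message when the variable is missing. A minimal sketch; the helper name `require_api_key` is illustrative, not part of the SDK:

```python
import os

def require_api_key(var="SEEKR_API_KEY"):
    """Read the API key from the environment, raising a helpful error if unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; see 'Store Your API Key' above.")
    return key
```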
Using the Seekr Python client
Once your API key is set, you’re ready to use the Python client.
Step 1: Import the Library
Start by importing the SeekrFlow client in your Python script:
from seekrai import SeekrFlow
client = SeekrFlow(api_key="your_api_key")
Or, if you’ve set the environment variable:
import os
from seekrai import SeekrFlow
client = SeekrFlow(api_key=os.environ.get("SEEKR_API_KEY"))
Step 2: Make Your First API Call
Let’s make a simple request to the chat.completions endpoint:
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "What's the difference between ethics and responsibility?"}],
)
print(response.choices[0].message.content)
This will return a response containing the AI’s answer.
Streaming responses
Streaming allows you to receive responses in chunks, useful for real-time applications.
stream = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "In what ways has the concept of responsibility evolved in the digital age?"}],
    stream=True,  # Enables streaming
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
Tip: Use stream=True to ensure the output is displayed in real time.
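If you also need the complete text after streaming finishes, collect the deltas as they arrive. A minimal sketch; `assemble_stream` is an illustrative helper, and the list below stands in for the `chunk.choices[0].delta.content` values a live stream would yield:

```python
def assemble_stream(deltas):
    """Join streamed content deltas into the complete reply, skipping None chunks."""
    # The final chunk's delta.content is typically None, so filter falsy values.
    return "".join(d for d in deltas if d)

# Stand-in for delta.content values from a real stream:
print(assemble_stream(["Respons", "ibility ", None, "evolves."]))  # Responsibility evolves.
```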
Asynchronous usage
The Seekr Python Library also supports asynchronous usage for handling multiple requests efficiently.
Step 1: Import the Asynchronous Client
import os, asyncio
from seekrai import AsyncSeekrFlow
async_client = AsyncSeekrFlow(api_key=os.environ.get("SEEKR_API_KEY"))
Step 2: Run Multiple Requests
Here’s an example of running multiple chat completions concurrently:
messages = [
    "How do we determine responsibility in complex systems where many actors contribute to outcomes?",
    "How do organizations balance ethical frameworks with practical responsibilities?",
]

async def async_chat_completion(messages):
    tasks = [
        async_client.chat.completions.create(
            model="meta-llama/Meta-Llama-3-8B-Instruct",
            messages=[{"role": "user", "content": message}],
        )
        for message in messages
    ]
    responses = await asyncio.gather(*tasks)
    for response in responses:
        print(response.choices[0].message.content)

asyncio.run(async_chat_completion(messages))
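When fanning out many requests at once, you may want to cap concurrency to stay within rate limits. A sketch using only the standard library's asyncio; the helper name `gather_limited` and the default `limit` are illustrative:

```python
import asyncio

async def gather_limited(coros, limit=4):
    """Await all coroutines, running at most `limit` of them at a time."""
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:  # each coroutine starts only when a slot is free
            return await coro

    # gather preserves input order in the returned results.
    return await asyncio.gather(*(run(c) for c in coros))
```

You would pass it the same list comprehension used for `tasks` above, e.g. `await gather_limited([...], limit=4)` instead of a bare `asyncio.gather`.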
Enhanced OpenAI compatibility
Our inference engine seamlessly integrates with OpenAI’s ecosystem, expanding workflow capabilities and enhancing usability.
Log Probabilities: Support for log_probs and top_logprobs, providing insights into model decision-making, aiding debugging, and improving output accuracy.
Dynamic Tool Calling: Custom functions can now be automatically invoked by the model based on context.
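As a quick illustration of the log-probability support, these are the extra keyword arguments you would add to a `chat.completions.create` call, shown here as a plain dict. The parameter names follow OpenAI's chat completions API (`logprobs`, `top_logprobs`); verify the exact spelling against the SeekrFlow API reference:

```python
# Extra OpenAI-compatible parameters requesting per-token log probabilities.
logprob_kwargs = {
    "logprobs": True,   # return the log probability of each sampled token
    "top_logprobs": 3,  # also return the 3 most likely alternatives per position
}
```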
The example below shows you how to leverage the OpenAI client and SeekrFlow's inference engine to create a custom unit conversion tool that can be configured dynamically.
Step 1: Create the client and make an API request:
import os
import openai

# Set the API key
os.environ["OPENAI_API_KEY"] = "Paste your API key here"

# Create the OpenAI client, pointed at SeekrFlow's inference endpoint.
client = openai.OpenAI(
    base_url="https://flow.seekr.com/v1/inference",
    api_key=os.environ.get("OPENAI_API_KEY"),
)
# Send a chat request that exposes a unit conversion tool the Llama model can call.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    stream=False,
    messages=[{
        "role": "user",
        "content": "Convert from 5 kilometers to miles"
    }],
    max_tokens=100,
    tools=[{
        "type": "function",
        "function": {
            "name": "convert_units",
            "description": "Convert between different units of measurement",
            "parameters": {
                "type": "object",
                "properties": {
                    "value": {"type": "number"},
                    "from_unit": {"type": "string"},
                    "to_unit": {"type": "string"}
                },
                "required": ["value", "from_unit", "to_unit"]
            }
        }
    }]
)
Step 2: Register a function from JSON:
# Parse json and register
def register_from_json(json_obj):
code = f"def {json_obj['name']}({', '.join(json_obj['args'])}):\n{json_obj['docstring']}\n{json_obj['code']}"
print(code)
namespace = {}
exec(code, namespace)
return namespace[json_obj["name"]]
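To try the flow end to end without model-generated code, you can also define `convert_units` directly, so the tool executor in the next step can find it by name in `globals()`. This is a hypothetical implementation matching the tool schema above; the conversion table and factors are illustrative:

```python
# Illustrative conversion factors, keyed by (from_unit, to_unit).
CONVERSIONS = {
    ("kilometers", "miles"): 0.621371,
    ("miles", "kilometers"): 1.609344,
}

def convert_units(value, from_unit, to_unit):
    """Convert `value` between two supported units of measurement."""
    factor = CONVERSIONS.get((from_unit, to_unit))
    if factor is None:
        raise ValueError(f"Unsupported conversion: {from_unit} -> {to_unit}")
    return value * factor

print(convert_units(5, "kilometers", "miles"))  # 3.106855
```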
Step 3: Run the unit conversion tool:
This function executes the tool call, given an LLM response object.
# Execute our tool
import json

def execute_tool_call(resp):
    tool_call = resp.choices[0].message.tool_calls[0]
    func_name = tool_call.function.name
    args = tool_call.function.arguments
    func = globals().get(func_name)
    if not func:
        raise ValueError(f"Function {func_name} not found")
    if isinstance(args, str):
        args = json.loads(args)  # arguments arrive as a JSON string
    return func(**args)
execute_tool_call(response)
Using the CLI
The Seekr Python Library also includes a command-line interface (CLI) for quick testing and automation.
Step 1: Ensure Installation
Ensure you have installed the library as described earlier.
Step 2: Run CLI Commands
For example, to test a simple chat completion:
seekr chat-completions create --model "meta-llama/Meta-Llama-3-8B-Instruct" --message "Tell me about SeekrFlow"
Troubleshooting tips
Invalid API Key: Ensure your API key is correctly set in the environment variable.
Connection Errors: Check your internet connection and firewall settings.
Python Version Issues: Ensure Python 3.9+ is installed and set as the default version.
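The first and third checks can be automated with a short standard-library snippet (the messages are illustrative):

```python
import os
import sys

# Verify interpreter version and that the API key environment variable is set.
assert sys.version_info >= (3, 9), "SeekrFlow requires Python 3.9 or newer"
if not os.environ.get("SEEKR_API_KEY"):
    print("SEEKR_API_KEY is not set; export it as shown in the setup steps.")
else:
    print("Environment looks good.")
```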
By following these detailed steps, you’ll be able to set up and start using SeekrFlow with ease. For further assistance, refer to the full documentation or contact support.