Model Library

Explore all the models supported by SeekrFlow's platform.

Available Models

SeekrFlow supports fine-tuning on the following open-source models:

Llama 2
Text models ranging in scale from 7B to 70B parameters. Suited to basic tasks and simple applications.

  • meta-llama/Llama-2-7b-chat-hf
  • meta-llama/Llama-2-7b-hf
  • meta-llama/Llama-2-13b-chat-hf
  • meta-llama/Llama-2-13b-hf
  • meta-llama/Llama-2-70b-chat-hf
  • meta-llama/Llama-2-70b-hf

Llama 3
Text models in 8B and 70B sizes, optimized for speed and efficiency in standard tasks.

  • meta-llama/Meta-Llama-3-8B
  • meta-llama/Meta-Llama-3-8B-Instruct
  • meta-llama/Meta-Llama-3-70B
  • meta-llama/Meta-Llama-3-70B-Instruct

Llama 3.1
Text models with enhanced reasoning and math abilities, in sizes ranging from 8B to 405B parameters. Ideal for tasks like long-form content generation, document analysis, and extended conversation.

  • meta-llama/Llama-3.1-8B
  • meta-llama/Llama-3.1-8B-Instruct
  • meta-llama/Llama-3.1-70B
  • meta-llama/Llama-3.1-70B-Instruct

Mistral
An instruction-tuned version of Mistral-7B-v0.2, for following instructions and answering questions naturally.

  • mistralai/Mistral-7B-Instruct-v0.2
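The model IDs above follow the Hugging Face `org/name` convention, with the parameter count and variant encoded in the name. As a sketch, a small helper can split an ID into its parts; the parsing rules below are assumptions based on the naming patterns in this list, and the helper is not part of the SeekrFlow API:

```python
def parse_model_id(model_id: str) -> dict:
    """Split a Hugging Face-style model ID into org, name, size, and variant.

    Illustrative helper only -- not part of the SeekrFlow API. It assumes
    the naming patterns used in the list above (e.g. "-7b-", "-chat",
    "-Instruct").
    """
    org, name = model_id.split("/", 1)
    parts = name.split("-")
    # The parameter count appears as a token like "7b", "8B", "70B", "1.1B".
    size = next(
        (p for p in parts
         if p and p[-1].lower() == "b"
         and p[:-1].replace(".", "").isdigit()),
        None,
    )
    if "chat" in name.lower():
        variant = "chat"
    elif "instruct" in name.lower():
        variant = "instruct"
    else:
        variant = "base"
    return {"org": org, "name": name, "size": size, "variant": variant}
```

For example, `parse_model_id("meta-llama/Llama-2-7b-chat-hf")` yields the org `meta-llama`, size `7b`, and variant `chat`.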

Which one should I use?

Generally, Chat models are designed for user/system conversations ("What happened at Enron in 2001?"), while Instruct models are fine-tuned to follow instructions to complete a task ("Write a summary of the latest AI research papers dealing with context attribution in model output.").
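The distinction shows up concretely in how prompts are formatted. The sketch below uses the `[INST]`/`<<SYS>>` markup Meta published for the Llama 2 chat models; the helper functions themselves are illustrative and not SeekrFlow functions:

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user turn in Llama 2's chat markup.

    Illustrative only; follows the [INST]/<<SYS>> template Meta
    documented for the Llama-2-*-chat-hf models.
    """
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"


def instruct_prompt(task: str) -> str:
    """Wrap a single task instruction for an instruct-tuned model.

    Instruct models such as Mistral-7B-Instruct-v0.2 also use [INST]
    tags, but their base template has no separate system block.
    """
    return f"[INST] {task} [/INST]"
```

For instance, `llama2_chat_prompt("You are a helpful assistant.", "What happened at Enron in 2001?")` produces a conversation-style prompt, while `instruct_prompt("Summarize the latest AI research papers on context attribution.")` is a single task instruction.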

Smaller models generally cost less and train faster, while larger models will generally have better results. Consider your task complexity and performance requirements when making a model selection for fine-tuning.
If your goal is efficiency and targeted performance, consider using a smaller model.

  • Example: A bank builds a customer service chatbot for handling routine inquiries about account services. Trained on banking-specific terminology and common customer questions, the chatbot can provide precise, accurate responses. In this case, a small-model approach minimizes computational costs and speeds deployment.

If your goal is to perform complex tasks, like advanced natural language inference or code generation, consider using a larger model for higher accuracy and versatility.

  • Example: A specialized system that assists radiologists by detecting abnormalities like tumors, fractures, or other conditions is better suited to a large model, which can identify subtle details that smaller models, or even human experts, might miss.
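The guidance above can be condensed into a rough decision rule. The function below is a minimal sketch: the decision thresholds and the returned model IDs are illustrative assumptions, not official SeekrFlow recommendations:

```python
def suggest_model(task_complexity: str, budget_constrained: bool) -> str:
    """Rough heuristic mirroring the selection guidance above.

    task_complexity: "routine" (e.g. a narrow-domain FAQ chatbot) or
    "complex" (e.g. advanced inference, code generation). Both the
    rules and the returned IDs are illustrative examples.
    """
    if task_complexity == "routine":
        # Smaller models: lower cost, faster training and deployment.
        return "meta-llama/Llama-3.1-8B-Instruct"
    if budget_constrained:
        # Competitive quality at lower compute cost, 32k context.
        return "mistralai/Mistral-7B-Instruct-v0.2"
    # Larger models: higher accuracy and versatility on hard tasks.
    return "meta-llama/Llama-3.1-70B-Instruct"
```

A routine, budget-sensitive task maps to a small model; a complex task on a constrained budget maps to Mistral; a complex task without budget pressure maps to a larger Llama.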

Note: If your goal is complex and your budget constrained, consider Mistral-7B-Instruct-v0.2, which delivers competitive performance at a lower computational cost than the larger Llama models, along with a comparatively large 32k context window and a permissive Apache 2.0 license.

Specialized base models

These models are also available for specific use cases:

Llama Guard
An 8B Llama 3 safeguard model for classifying LLM inputs and responses.

  • meta-llama/Llama-Guard-3-8B

TinyLlama
A compact 1.1B text model for lightweight experimentation.

  • TinyLlama/TinyLlama-1.1B-Chat-v1.0