Low-rank adaptation

Apply low-rank adaptation to fine-tuning jobs with the SeekrFlow Python SDK.

For conceptual background on LoRA, including when to use it and how it compares to full fine-tuning, see Low-rank adaptation (LoRA).

Add LoRA to a fine-tuning job

To use LoRA, set lora_config in your TrainingConfig using LoRAConfig. LoRA works with all fine-tuning methods: instruction, reinforcement, preference, and vision-language tuning.

```python
from seekrai import SeekrFlow
from seekrai.types import TrainingConfig, InfrastructureConfig
from seekrai.types.finetune import LoRAConfig

client = SeekrFlow()

training_config = TrainingConfig(
    training_files=["<your-training-file-id>"],
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    n_epochs=1,
    n_checkpoints=1,
    batch_size=4,
    learning_rate=1e-5,
    experiment_name="helperbot_lora_v1",
    lora_config=LoRAConfig(
        r=16,
        alpha=32,
    ),
)

infrastructure_config = InfrastructureConfig(
    n_accel=8,
    accel_type="GAUDI2",
)

fine_tune = client.fine_tuning.create(
    training_config=training_config,
    infrastructure_config=infrastructure_config,
    project_id=123,
)

print(fine_tune.id)
```

LoRAConfig parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `r` | integer | `8` | Adapter rank. Controls adapter capacity and the complexity of patterns the adapter can learn. |
| `alpha` | float | `32` | Scaling factor for adapter influence. Controls how strongly learned patterns affect base model behavior. |
| `dropout` | float | `0.1` | Dropout probability applied to adapter layers during training. |
| `bias` | string | `"none"` | Which bias parameters to train. Accepted values: `"none"`, `"all"`, `"lora_only"`. |
| `extras` | object | | Optional dictionary for additional LoRA parameters. |
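As an illustration of the full parameter set, a LoRAConfig specifying every documented field might look like the following. The values shown are illustrative defaults, not tuned recommendations:

```python
from seekrai.types.finetune import LoRAConfig

# Illustrative values only; tune r and alpha for your task.
lora_config = LoRAConfig(
    r=8,           # adapter rank
    alpha=32,      # scaling factor for adapter influence
    dropout=0.1,   # dropout applied to adapter layers
    bias="none",   # one of "none", "all", "lora_only"
)
```

Pass the resulting object as the lora_config field of your TrainingConfig, as shown in the example above.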

For guidance on choosing rank and alpha values, see Low-rank adaptation (LoRA).
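To build intuition for how r and alpha interact, the standard LoRA formulation adds a scaled low-rank update to a frozen weight matrix: the adapter learns small matrices A (r x d_in) and B (d_out x r), and the effective update is (alpha / r) * B @ A. The sketch below is conceptual and not part of the SeekrFlow SDK; it uses tiny pure-Python matrices purely for illustration:

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(B, A, r, alpha):
    """Scaled low-rank weight update: (alpha / r) * B @ A."""
    scale = alpha / r
    return [[scale * v for v in row] for row in matmul(B, A)]

# Rank-1 adapter for a 2x3 weight matrix (d_out=2, d_in=3).
A = [[1.0, 0.0, 2.0]]   # r x d_in
B = [[0.5], [1.0]]      # d_out x r
delta = lora_update(B, A, r=1, alpha=2)
# delta == [[1.0, 0.0, 2.0], [2.0, 0.0, 4.0]]
```

Because the update is scaled by alpha / r, doubling alpha (or halving r) doubles the adapter's influence on the base model, which is why alpha is commonly chosen as a multiple of r.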

See Create a fine-tuning job for the full workflow including project setup, file retrieval, and monitoring.