Supervised Fine Tuning (SFT)

What is Supervised Fine Tuning?

Supervised Fine Tuning (SFT) is a training approach that uses labeled input-output pairs to teach the model specific behaviors and response patterns. The model learns by observing examples of correct inputs and their corresponding desired outputs, gradually adjusting its parameters to minimize the difference between its predictions and the target responses. This technique is particularly effective when you have clear examples of how the model should behave and provides predictable, measurable improvements in task-specific performance.

Prerequisites

Before installing, make sure you have:

  1. pip

  2. gcloud CLI - required because gllm-training is a private library hosted in a private Google Cloud Artifact Registry

After installing the gcloud CLI, run

gcloud auth login

to authorize gcloud to access Google Cloud with your user credentials.

Our internal gllm-training package is hosted in a secure Google Cloud Artifact Registry. You need to authenticate via gcloud CLI to access and download the package during installation.

Minimum requirements:

  1. CUDA-compatible GPU. Recommended GPUs:

    1. RTX A5000

    2. RTX 40/50 series

  2. Windows or Linux (macOS is currently not supported)

Installation

pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" "gllm-training"

Quickstart

Let's jump into a basic fine-tuning example using SFTTrainer. The trainer cannot be created empty: at minimum, it needs the model name and the path to the CSV datasets.

from gllm_training import SFTTrainer

# Minimal setup: the base model to fine-tune and the directory
# containing the CSV training data.
finetuner = SFTTrainer(
    model_name="Qwen/Qwen3-1.7b",
    datasets_path="examples/csv"
)

# Run fine-tuning and collect the training results.
results = finetuner.train()

Fine-tuning a model using a YAML file

We can run experiments in a more structured way by using a YAML file. The current fine-tuning SDK supports both online data from Google Spreadsheets and local data in CSV format.

Example 1: Fine tuning using online data.

We can prepare the experiment using a YAML file, with the training and validation data coming from a Google Spreadsheet.

1. Configure environment variables (.env)

Fill in the GOOGLE_SHEETS_CLIENT_EMAIL and GOOGLE_SHEETS_PRIVATE_KEY fields. If you don’t have these keys, please contact the infrastructure team.
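A minimal .env sketch, using the variable names from this guide with placeholder values:

GOOGLE_SHEETS_CLIENT_EMAIL="service-account@your-project.iam.gserviceaccount.com"
GOOGLE_SHEETS_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"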

2. Share the spreadsheet

Share your Google Spreadsheet containing the training and validation data with the GOOGLE_SHEETS_CLIENT_EMAIL.

3. Experiment configuration (sft_experiment_config.yml)

You can use a YAML file to plan your fine-tuning experiments. To fine-tune with YAML, define the required variables in the file.
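The exact schema is defined by gllm-training, so treat the following as an illustrative sketch: the experiment ID, model name, and datasets mirror this guide, while the key names themselves are hypothetical.

experiments:
  exp_1:                                            # experiment ID selected when loading
    model_name: "Qwen/Qwen3-1.7b"
    datasets:
      train_spreadsheet_id: "<spreadsheet-id>"      # hypothetical key
      validation_spreadsheet_id: "<spreadsheet-id>" # hypothetical key
    hyperparameters:                                # hypothetical names
      learning_rate: 2e-5
      num_train_epochs: 3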

4. Fine-tuning

To run your fine-tuning, load the YAML data using the YamlConfigLoader function and select the experiment ID when executing the load function.
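A sketch of what this might look like, assuming YamlConfigLoader is importable from gllm_training and that its load function accepts the experiment ID (verify the exact API against the SDK):

from gllm_training import SFTTrainer, YamlConfigLoader

# Load the YAML configuration and select the experiment to run.
config = YamlConfigLoader("sft_experiment_config.yml").load("exp_1")  # hypothetical signature

# Assumes the loaded config maps onto SFTTrainer's arguments.
finetuner = SFTTrainer(**config)
results = finetuner.train()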

5. (Notes) Output format

Our SDK supports dictionary or string output formats for fine-tuned models.

  1. Dictionary output: configured in the YAML file; the fine-tuned model returns a dictionary.

  2. String output: configured in the YAML file; the fine-tuned model returns a plain string.

Example 2: Fine tuning using local data.

The remaining hyperparameter configuration is the same as for online data. Below is an example YAML configuration that uses local data for training and validation.
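As above, the key names are hypothetical; datasets_path is taken from the quickstart, and the rest is illustrative.

experiments:
  exp_2:
    model_name: "Qwen/Qwen3-1.7b"
    datasets_path: "examples/csv"   # local CSV directory, as in the quickstart
    hyperparameters:                # hypothetical names
      learning_rate: 2e-5
      num_train_epochs: 3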

Logging and Monitoring

During the fine-tuning process, the SDK automatically generates comprehensive logs to help you monitor training progress and debug issues. These logs are stored in two formats:

JSONL Logs (Structured Training Metrics)

The SDK generates structured JSONL logs that capture detailed training metrics at each step. These logs are stored in:

Example path: data/sft/model/exp_1/Qwen3-1.7b/logs/sft_train_steps.jsonl

Each line in the JSONL file contains a JSON object with training metrics such as:

  • step: Training step number

  • loss: Training loss at that step

  • learning_rate: Current learning rate

  • epoch: Current epoch number

  • And other relevant metrics
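An illustrative training-step line (values invented for the example):

{"step": 100, "loss": 1.52, "learning_rate": 1.8e-05, "epoch": 0.42}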

You can parse these logs programmatically or use tools like jq to analyze the training progression:
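For example, using the example path above, jq's select() separates training steps from evaluation steps:

# Training loss per step
jq 'select(.loss != null) | {step, loss}' data/sft/model/exp_1/Qwen3-1.7b/logs/sft_train_steps.jsonl

# Evaluation loss per step
jq 'select(.eval_loss != null) | {step, eval_loss}' data/sft/model/exp_1/Qwen3-1.7b/logs/sft_train_steps.jsonl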

Note: The JSONL file contains both training steps (with loss field) and evaluation steps (with eval_loss field). Use select() to filter for the specific type of data you need.

Tensorboard Logs (Visual Monitoring)

For visual monitoring and analysis, the SDK also generates TensorBoard-compatible logs stored in:

Example path: data/sft/model/exp_1/Qwen3-0.6b/logs_tensorboard

To visualize your training progress:

  1. Launch TensorBoard, pointing it at the log directory (example path above):

tensorboard --logdir data/sft/model/exp_1/Qwen3-0.6b/logs_tensorboard

  2. Open your browser and navigate to http://localhost:6006

  3. Monitor key metrics:

    • Training/Validation loss curves

    • Learning rate scheduling

    • Step-by-step progress

    • Custom metrics (if configured)

Log Configuration

You can customize logging behavior through the hyperparameters configuration:
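The exact keys are defined by the SDK; the sketch below uses hypothetical names only to show the idea:

hyperparameters:
  logging_steps: 10         # hypothetical: how often to write a JSONL metrics line
  eval_steps: 50            # hypothetical: how often to log evaluation metrics
  report_to: tensorboard    # hypothetical: enable the TensorBoard log writer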

Best Practices

  1. Monitor training stability: Check that your loss decreases smoothly without sudden jumps. Spikes or irregular patterns suggest that your learning rate is too high or that there are data quality issues.

  2. Track convergence: Use TensorBoard to see when your model stops improving. If training loss keeps decreasing while evaluation loss plateaus or rises, your model is overfitting.

  3. Debug issues: JSONL logs show detailed metrics for each training step. Look for NaN values, wild loss swings (a learning rate problem), or a training loss far below the evaluation loss (overfitting).

  4. Monitor evaluation metrics: In healthy training, training and evaluation losses decrease together, with evaluation loss slightly higher. If the gap between them grows too large, something is wrong.

Upload model to cloud storage

When running experiments, we don’t always save the model directly to the cloud. Instead, we may first evaluate its performance before uploading it to cloud storage. To support this workflow, we provide a save_model function that allows you to upload the model as a separate step after fine tuning.

1. Configure environment variables (.env)

Fill in the AWS_ACCESS_KEY, AWS_SECRET_KEY and AWS_REGION fields. If you don’t have these keys, please contact the infrastructure team.
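A minimal .env sketch with placeholder values:

AWS_ACCESS_KEY="<your-access-key>"
AWS_SECRET_KEY="<your-secret-key>"
AWS_REGION="<your-bucket-region>"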

2. Upload model

To upload the model, provide the storage configuration and pass the model path to the save_model function. The model path should point to the directory containing your best adapter model.
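A hedged sketch, assuming save_model is exposed on the trainer and takes the model path (the directory name below is a hypothetical example; check the SDK for the exact signature):

from gllm_training import SFTTrainer

finetuner = SFTTrainer(
    model_name="Qwen/Qwen3-1.7b",
    datasets_path="examples/csv"
)
finetuner.train()

# Evaluate the run first, then upload the best adapter as a separate step.
# Storage credentials are read from the .env variables configured in step 1.
finetuner.save_model(
    model_path="data/sft/model/exp_1/Qwen3-1.7b/best_adapter"  # hypothetical path
)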
