Direct Preference Optimization (DPO)
What is Direct Preference Optimization (DPO)?
Direct Preference Optimization (DPO) is a preference-based fine-tuning technique that aligns a model using paired comparisons between responses, rather than relying on reinforcement learning or reward models. For each input prompt, DPO uses a chosen response (preferred) and a rejected response (less preferred) to directly increase the likelihood of generating the chosen output while decreasing the likelihood of the rejected one. This is achieved through a closed-form optimization objective that simplifies training while still capturing preference signals effectively. DPO is particularly useful when you have datasets that express relative human preferences, and it typically produces stable, efficient, and preference-aligned model behaviors.
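To make the objective concrete, the sketch below shows a generic PyTorch implementation of the standard DPO loss. It is an illustration of the usual formulation, not the internal implementation used by this SDK; the function name and arguments are our own.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor, policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor, ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective over per-example summed log-probabilities."""
    # Implicit rewards: how strongly the policy prefers each response
    # relative to the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the margin: pushes the chosen response up and the
    # rejected response down, without training a separate reward model.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```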
Installation
Linux/macOS:

pip install --extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" "gllm-training"

Windows (CMD):

FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" "gllm-training"

Quickstart
Let's move on to a basic example of fine-tuning with DPOTrainer. To run DPO fine-tuning, you need to specify a model name, a dpo_column_mapping, and a dataset path. Make sure your dataset contains a prompt column, a chosen column for the preferred response, and a rejected column for the less preferred response.
# Main Code
from gllm_training import DPOTrainer

# Initialize the trainer with the base model and the local preference dataset
# (CSV files containing prompt, chosen, and rejected columns).
dpo_trainer = DPOTrainer(
    model_name="Qwen/Qwen3-0.6B",
    datasets_path="examples/dpo_csv",
)

# Run DPO fine-tuning.
dpo_trainer.train()
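The quickstart above points datasets_path at a folder of CSV files. As a minimal illustration of the expected layout, each row pairs a prompt with a chosen (preferred) and a rejected (less preferred) response; the example rows below are purely illustrative, and only the column names follow the convention described above.

```csv
prompt,chosen,rejected
"What is the capital of France?","The capital of France is Paris.","France is in Europe."
"Translate 'good morning' to Spanish.","Buenos días.","Good morning is an English greeting."
```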
Fine-tuning a model using a YAML file
We can run experiments in a more structured way by using a YAML file. The current DPO fine-tuning SDK supports both online data from Google Spreadsheets and local data in CSV format.
Example 1: Fine-tuning using online data
We can prepare our experiment using a YAML file with training and validation data loaded from a Google Spreadsheet.
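A hedged sketch of what such a configuration might look like is shown below. The key names (datasets, spreadsheet_url, dpo_column_mapping, training, and so on) are assumptions for illustration only; consult the SDK's configuration reference for the exact schema.

```yaml
# Hypothetical schema: key names and values are illustrative, not the
# authoritative gllm-training configuration format.
model_name: Qwen/Qwen3-0.6B
datasets:
  source: google_spreadsheet                 # assumed flag for online data
  spreadsheet_url: https://docs.google.com/spreadsheets/d/<your-sheet-id>
  train_worksheet: train
  validation_worksheet: validation
dpo_column_mapping:
  prompt: prompt
  chosen: chosen
  rejected: rejected
training:
  learning_rate: 5.0e-6
  num_train_epochs: 1
  per_device_train_batch_size: 2
  beta: 0.1                                  # DPO temperature
output_dir: outputs/dpo-qwen3-0.6b
```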
Example 2: Fine-tuning using local data
The remaining hyperparameter configurations for fine-tuning are the same as when using online data. Below is an example YAML configuration for using local data for training and validation.
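As with the online example, the sketch below uses a hypothetical schema; only the data section changes to point at local CSV files, and the actual key names may differ in the SDK.

```yaml
# Hypothetical schema: key names and values are illustrative.
model_name: Qwen/Qwen3-0.6B
datasets:
  source: csv                                # assumed flag for local data
  train_path: examples/dpo_csv/train.csv
  validation_path: examples/dpo_csv/validation.csv
dpo_column_mapping:
  prompt: prompt
  chosen: chosen
  rejected: rejected
training:
  learning_rate: 5.0e-6
  num_train_epochs: 1
  per_device_train_batch_size: 2
  beta: 0.1
output_dir: outputs/dpo-qwen3-0.6b
```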
Upload model to cloud storage
When running experiments, we don’t always save the model directly to the cloud. Instead, we may first evaluate its performance before uploading it to cloud storage. To support this workflow, we provide a save_model function that lets you upload the model as a separate step after fine-tuning.
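A hedged usage sketch is shown below. The save_model name comes from the description above, but its exact signature (here, a single destination path) and the bucket path are assumptions.

```python
from gllm_training import DPOTrainer

dpo_trainer = DPOTrainer(
    model_name="Qwen/Qwen3-0.6B",
    datasets_path="examples/dpo_csv",
)
dpo_trainer.train()

# Evaluate the fine-tuned model locally first, then upload it as a separate step.
# The destination below is illustrative; the actual save_model signature may differ.
dpo_trainer.save_model("gs://your-bucket/dpo-qwen3-0.6b")
```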