# Fine Tuning

[gllm-training](https://github.com/GDP-ADMIN/gl-sdk/tree/main/libs/gllm-training) | [Fine-Tuning Guidelines](https://docs.google.com/document/d/1qpngyPcHpB039jwio8kgAk1j7aa8pInmNcEzNFBy-Qs/edit?usp=sharing)

## What is Fine-Tuning?

Fine-tuning adapts a pre-trained model to perform better on specific tasks or domains by training it on a smaller, specialized dataset. This makes the model's responses more accurate, relevant, and tailored to particular use cases or requirements. The fine-tuning techniques supported by our SDK include:

1. [Supervised Fine-Tuning (SFT)](https://gdplabs.gitbook.io/sdk/~/revisions/beykCxz0UanaEX0sPJJu/tutorials/fine-tuning/supervised-fine-tuning-sft) - trains the model on labeled input-output pairs to improve task-specific performance.
2. [Group Relative Policy Optimization (GRPO)](https://gdplabs.gitbook.io/sdk/~/revisions/beykCxz0UanaEX0sPJJu/tutorials/fine-tuning/group-relative-policy-optimization-grpo) - a reinforcement learning-based method that trains the model to maximize reward functions across groups of candidate responses. This lets the model learn preference-aligned behavior directly from reward signals instead of relying on explicit input-output pairs.
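To make the GRPO bullet concrete: the "group relative" part refers to scoring each candidate response against the statistics of its own group, so the model is rewarded for answers that are better than its other samples for the same prompt. The sketch below is illustrative only (the function name and reward values are hypothetical, not part of the gllm-training API) and shows the group-relative advantage normalization at the heart of the method.

```python
# Hypothetical sketch: compute GRPO-style group-relative advantages.
# For one prompt, the model samples a group of candidate responses; each
# gets a scalar reward, and the advantage is the reward normalized by the
# group's mean and standard deviation.

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each candidate's reward against its group's statistics."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0:
        # All candidates scored identically: no relative signal to learn from.
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Example: four candidate responses to the same prompt (rewards are made up).
rewards = [0.2, 0.8, 0.5, 0.5]
advantages = group_relative_advantages(rewards)
```

Candidates above the group mean get positive advantages (their behavior is reinforced); those below get negative ones. This replaces the per-example labels that SFT requires with a purely relative reward signal.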
