# Metric

Metrics are the core evaluation components in the gllm-evals framework. They define specific ways to measure and assess the performance of language model generation, retrieval systems, and agent behaviors.

Metrics work in conjunction with evaluators to provide comprehensive evaluation capabilities. Evaluators can run multiple metrics in parallel or sequentially and combine their results into a single evaluation report; a sketch of the parallel pattern follows the basic example below.

## Example Usage

```python
import asyncio
import os

from gllm_evals import load_simple_qa_dataset
from gllm_evals.metrics.generation.langchain_helpfulness import LangChainHelpfulnessMetric


async def main():
    # Configure the metric with the judge model and its API credentials.
    metric = LangChainHelpfulnessMetric(
        model="openai/gpt-4.1",
        credentials=os.getenv("OPENAI_API_KEY"),
    )

    # Evaluate a single datum from the bundled sample dataset.
    data = load_simple_qa_dataset()
    result = await metric.evaluate(data.dataset[0])
    print(result)


asyncio.run(main())
```
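
The same `evaluate` interface makes it straightforward to run several metrics over one datum concurrently. The sketch below is a minimal illustration using `asyncio.gather`; it assumes `LangChainConcisenessMetric` accepts the same `model`/`credentials` constructor arguments as `LangChainHelpfulnessMetric`, which may not match the actual signature. An evaluator would typically wrap this pattern and aggregate the results into a report.

```python
import asyncio
import os

from gllm_evals import load_simple_qa_dataset
from gllm_evals.metrics.generation.langchain_conciseness import LangChainConcisenessMetric
from gllm_evals.metrics.generation.langchain_helpfulness import LangChainHelpfulnessMetric


async def main():
    credentials = os.getenv("OPENAI_API_KEY")
    # Assumption: both metrics share the model/credentials constructor shown above.
    metrics = [
        LangChainHelpfulnessMetric(model="openai/gpt-4.1", credentials=credentials),
        LangChainConcisenessMetric(model="openai/gpt-4.1", credentials=credentials),
    ]

    datum = load_simple_qa_dataset().dataset[0]
    # Run all metric evaluations concurrently, then pair each result with its metric.
    results = await asyncio.gather(*(metric.evaluate(datum) for metric in metrics))
    for metric, result in zip(metrics, results):
        print(type(metric).__name__, result)


asyncio.run(main())
```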

***

## Available Metrics

Below are several examples of existing metrics. To view the full list, see the [Metrics](https://github.com/GDP-ADMIN/gl-sdk/tree/main/libs/gllm-evals/gllm_evals/metrics) directory. A hedged usage sketch for a retrieval metric follows the list.

1. **Generation Evaluation Metrics**
   1. [GEvalCompletenessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/geval_completeness.py)
   2. [GEvalGroundednessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/geval_groundedness.py)
   3. [GEvalRedundancyMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/geval_redundancy.py)
   4. [CompletenessMetric](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/completeness.py)
   5. [GroundednessMetric](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/groundedness.py)
   6. [RedundancyMetric](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/redundancy.py)
   7. [RagasFactualCorrectness](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/ragas_factual_correctness.py)
   8. [DeepEvalAnswerRelevancyMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/deepeval_answer_relevancy.py)
   9. [DeepEvalFaithfulnessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/deepeval_faithfulness.py)
   10. [LangChainConcisenessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/langchain_conciseness.py)
   11. [LangChainCorrectnessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/langchain_correctness.py)
   12. [LangChainGroundednessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/langchain_groundedness.py)
   13. [LangChainHallucinationMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/langchain_hallucination.py)
   14. [LangChainHelpfulnessMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/generation/langchain_helpfulness.py)
2. **Retrieval Evaluation Metrics**
   1. [PyTrecMetric](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/pytrec_metric.py)
   2. [TopKAccuracy](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/top_k_accuracy.py)
   3. [RagasContextPrecisionWithoutReference](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/ragas_context_precision.py)
   4. [RagasContextRecall](https://github.com/GDP-ADMIN/gen-ai-internal/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/ragas_context_recall.py)
   5. [DeepEvalContextualPrecisionMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/deepeval_contextual_precision.py)
   6. [DeepEvalContextualRecallMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/deepeval_contextual_recall.py)
   7. [DeepEvalContextualRelevancyMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/retrieval/deepeval_contextual_relevancy.py)
3. **Agent Evaluation Metrics**
   1. [LangChainAgentTrajectoryAccuracyMetric](https://github.com/GDP-ADMIN/gl-sdk/blob/main/libs/gllm-evals/gllm_evals/metrics/agent/langchain_agent_trajectory_accuracy.py)
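
Retrieval metrics can be exercised in the same way, provided they follow the async `evaluate` interface used above. The sketch below is a hypothetical illustration only: `TopKAccuracy`'s constructor arguments and the input fields (`retrieved_ids`, `relevant_ids`) are assumptions, not the library's confirmed signature; consult the linked source file for the real inputs.

```python
import asyncio

from gllm_evals.metrics.retrieval.top_k_accuracy import TopKAccuracy


async def main():
    # Assumption: default construction works; the real class may require
    # parameters such as a cutoff k. Check the linked source file.
    metric = TopKAccuracy()

    # Hypothetical field names for a retrieval datum; the actual schema
    # is defined by the library's dataset format.
    datum = {
        "retrieved_ids": ["doc-3", "doc-1", "doc-7"],
        "relevant_ids": ["doc-1"],
    }
    result = await metric.evaluate(datum)
    print(result)


asyncio.run(main())
```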
