🔢 Metric
Example Usage
```python
import os

from gllm_evals import load_simple_qa_dataset
from gllm_evals.metrics.generation.langchain_helpfulness import LangChainHelpfulnessMetric

# Configure the metric with the judge model and the API key it should use.
metric = LangChainHelpfulnessMetric(
    model="openai/gpt-4.1",
    credentials=os.getenv("OPENAI_API_KEY"),
)

# Load the bundled simple QA dataset and evaluate a single example.
data = load_simple_qa_dataset()
result = await metric.evaluate(data.dataset[0])
print(result)
```
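Because `metric.evaluate` is a coroutine, the snippet above works as-is only in an environment that allows top-level `await`, such as a notebook. If you are running it as a plain script, one option is to wrap the calls in a coroutine and drive it with `asyncio.run`. The sketch below assumes `data.dataset` is an iterable of examples (the snippet above only shows indexing into it), and the `main` helper name is purely illustrative.

```python
import asyncio
import os

from gllm_evals import load_simple_qa_dataset
from gllm_evals.metrics.generation.langchain_helpfulness import LangChainHelpfulnessMetric


async def main() -> None:
    # Build the metric once and reuse it for every example.
    metric = LangChainHelpfulnessMetric(
        model="openai/gpt-4.1",
        credentials=os.getenv("OPENAI_API_KEY"),
    )

    data = load_simple_qa_dataset()

    # Evaluate each example in the dataset and print its result.
    for example in data.dataset:
        result = await metric.evaluate(example)
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```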