LM-Only Pipeline

Let's build a simple pipeline that consists only of a language model.

Prerequisites

This example specifically requires the setup described below.

Installation

# you can use a Conda environment
pip install gllm-rag -i https://glsdk.gdplabs.id/gen-ai-internal/simple/

Environment Setup

Set a valid language model credential as an environment variable.

  • In this example, let's use an OpenAI API key.

Get an OpenAI API key from the OpenAI Console.

export OPENAI_API_KEY="sk-..."
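Before running the example, it can help to fail fast if the key is missing. This has_credential helper is a hypothetical convenience sketch, not part of gllm-rag:

```python
import os

def has_credential(name: str = "OPENAI_API_KEY") -> bool:
    """Return True if the given environment variable is set and non-empty."""
    return bool(os.getenv(name))
```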

Running the Pipeline

1. Create a script called lm.py:

import asyncio
import os

from gllm_rag.preset import LM

async def main():
    lm = LM(
        language_model_id="openai/gpt-4.1-nano",
        language_model_credentials=os.getenv("OPENAI_API_KEY")
    )
    response = await lm("Name an animal that starts with the letter 'A'")
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
2. Run the script:

python lm.py
3. The language model will generate a response for the given query, e.g.:

An animal that starts with the letter 'A' is an **Alligator**.

Switching Between Language Models

With the script above, you can switch to any of the available language model options by changing the language_model_id and language_model_credentials parameters.

The format for each available option is shown below:
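As a minimal sketch (not part of gllm-rag), the provider-to-configuration mapping can be kept in one place; the model IDs and environment variable names below mirror the examples in this section:

```python
import os

# Map each provider name to its (model ID, credential env var) pair.
MODEL_OPTIONS = {
    "openai": ("openai/gpt-4.1-nano", "OPENAI_API_KEY"),
    "anthropic": ("anthropic/claude-sonnet-4-20250514", "ANTHROPIC_API_KEY"),
    "google": ("google/gemini-2.5-flash-preview-05-20", "GEMINI_API_KEY"),
}

def lm_config(provider: str):
    """Return the (language_model_id, language_model_credentials) pair."""
    model_id, env_var = MODEL_OPTIONS[provider]
    return model_id, os.getenv(env_var)
```

The returned pair can then be passed straight to the LM preset's language_model_id and language_model_credentials parameters.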

Using Anthropic

Setup

  1. Model ID Format: anthropic/<model_name>.

Example

lm = LM(
    language_model_id="anthropic/claude-sonnet-4-20250514",
    language_model_credentials=os.getenv("ANTHROPIC_API_KEY")
)

Using Google Gen AI

Setup

  1. Model ID Format: google/<model_name>.

Example

lm = LM(
    language_model_id="google/gemini-2.5-flash-preview-05-20",
    language_model_credentials=os.getenv("GEMINI_API_KEY")
)

Using Google Vertex AI

Setup

  1. Model ID Format: google/<model_name>.

Example

lm = LM(
    language_model_id="google/gemini-2.5-flash-preview-05-20",
    language_model_credentials=os.getenv("CREDENTIALS_PATH")
)

Using OpenAI

Setup

  1. Model ID Format: openai/<model_name>.

Example

lm = LM(
    language_model_id="openai/gpt-4.1-nano",
    language_model_credentials=os.getenv("OPENAI_API_KEY")
)

Using Azure OpenAI

Setup

  1. Model ID Format: azure-openai/<endpoint>:<deployment>.

  2. Requires an Azure OpenAI resource and deployment.

Example

lm = LM(
    language_model_id="azure-openai/https://<endpoint>.openai.azure.com:<deployment>",
    language_model_credentials=os.getenv("AZURE_OPENAI_API_KEY")
)
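Since the Azure OpenAI model ID embeds both the endpoint and the deployment, a small string helper can reduce mistakes. This azure_openai_model_id function is a hypothetical sketch, not part of gllm-rag:

```python
def azure_openai_model_id(resource: str, deployment: str) -> str:
    # Assembles the azure-openai/<endpoint>:<deployment> format shown above
    # from an Azure resource name and a deployment name.
    return f"azure-openai/https://{resource}.openai.azure.com:{deployment}"
```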

Using OpenAI Compatible Endpoints

OpenAI-compatible endpoints include, but are not limited to, providers such as Groq.

Setup

  1. Model ID Format: openai-compatible/<base_url>:<model_name>.

  2. Credentials vary depending on the endpoint:

    1. In this example, let's use Groq, which requires a Groq API key. Create one from the Groq Console.

Example

lm = LM(
    language_model_id="openai-compatible/https://api.groq.com/openai/v1:llama-3.1-8b-instant",
    language_model_credentials=os.getenv("GROQ_API_KEY")
)
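The openai-compatible model ID is a simple concatenation of the base URL and the model name. This openai_compatible_model_id helper is a hypothetical sketch, not part of gllm-rag:

```python
def openai_compatible_model_id(base_url: str, model_name: str) -> str:
    # Joins a base URL and a model name in the
    # openai-compatible/<base_url>:<model_name> format described above.
    return f"openai-compatible/{base_url}:{model_name}"
```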

Using LangChain

The available LangChain packages and classes are listed in the LangChain documentation.

Setup

  1. Model ID Format: langchain/<package>.<class>:<model_name>.

  2. Credentials vary depending on the package and class:

    1. In this example, let's use ChatOpenAI, which requires an OpenAI API key. Create one from the OpenAI Console.

Example

lm = LM(
    language_model_id="langchain/langchain_openai.ChatOpenAI:gpt-4.1-nano",
    language_model_credentials={"api_key": os.getenv("OPENAI_API_KEY")}
)
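The LangChain model ID combines a package, a class, and a model name. This langchain_model_id helper is a hypothetical sketch, not part of gllm-rag:

```python
def langchain_model_id(package: str, class_name: str, model_name: str) -> str:
    # Builds the langchain/<package>.<class>:<model_name> format described above.
    return f"langchain/{package}.{class_name}:{model_name}"
```

Note that, unlike the other options, the LangChain example above passes language_model_credentials as a dictionary of keyword arguments (e.g. api_key) rather than a bare string.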

Using LiteLLM