LM-Only Pipeline
Let's build a simple pipeline that consists only of a language model.
Installation
# you can use a Conda environment
pip install gllm-rag -i https://glsdk.gdplabs.id/gen-ai-internal/simple/
Environment Setup
Set a valid language model credential as an environment variable.
In this example, let's use an OpenAI API key.
Get an OpenAI API key from OpenAI Console.
export OPENAI_API_KEY="sk-..."
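If you want to fail fast when the key is missing, a small check like the one below can run before building the pipeline. This is a sketch, not part of gllm-rag; the `require_env` helper name is made up for illustration.

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, raising early if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running the pipeline.")
    return value

# Usage: api_key = require_env("OPENAI_API_KEY")
```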
Running the Pipeline
Create a script called lm.py:
import asyncio
import os

from gllm_rag.preset import LM

async def main():
    lm = LM(
        language_model_id="openai/gpt-4.1-nano",
        language_model_credentials=os.getenv("OPENAI_API_KEY"),
    )
    result = await lm("Name an animal that starts with the letter 'A'")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
Run the script:
python lm.py
The language model will generate a response to the given query, e.g.:
An animal that starts with the letter 'A' is an **Alligator**.
Congratulations! You have successfully run your first pipeline!
Switching Between Language Models
Using the above script, you can easily switch to any of the available language model options by changing the language_model_id and language_model_credentials parameters. The format for each available option is shown below:
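To make switching even easier, the provider-specific details can be kept in one place. The table below is a hypothetical convenience helper, not part of gllm-rag; the model IDs and credential environment variables follow the formats shown in this guide, so adjust them to your setup.

```python
import os

# Hypothetical lookup table: provider name -> (model ID, credential env var).
PROVIDERS = {
    "openai": ("openai/gpt-4.1-nano", "OPENAI_API_KEY"),
    "anthropic": ("anthropic/claude-sonnet-4-20250514", "ANTHROPIC_API_KEY"),
    "google": ("google/gemini-2.5-flash-preview-05-20", "GEMINI_API_KEY"),
}

def lm_kwargs(provider: str) -> dict:
    """Build the LM() keyword arguments for a given provider name."""
    model_id, env_var = PROVIDERS[provider]
    return {
        "language_model_id": model_id,
        "language_model_credentials": os.getenv(env_var),
    }

# Usage: lm = LM(**lm_kwargs("anthropic"))
```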
Using Anthropic
Setup
Model ID Format: anthropic/<model_name>
Anthropic API key: Create one from Anthropic Console.
Example
lm = LM(
language_model_id="anthropic/claude-sonnet-4-20250514",
language_model_credentials=os.getenv("ANTHROPIC_API_KEY")
)
Using Google Gen AI
Setup
Model ID Format: google/<model_name>
Gemini API key: Create one from Google AI Studio.
Example
lm = LM(
language_model_id="google/gemini-2.5-flash-preview-05-20",
language_model_credentials=os.getenv("GEMINI_API_KEY")
)
Using Google Vertex AI
Setup
Model ID Format: google/<model_name>
Google service account JSON credential path: provide the path to your service account JSON file.
Example
lm = LM(
language_model_id="google/gemini-2.5-flash-preview-05-20",
language_model_credentials=os.getenv("CREDENTIALS_PATH")
)
Using OpenAI
Setup
Model ID Format: openai/<model_name>
OpenAI API key: Create one from OpenAI Console.
Example
lm = LM(
language_model_id="openai/gpt-4.1-nano",
language_model_credentials=os.getenv("OPENAI_API_KEY")
)
Using Azure OpenAI
Setup
Model ID Format: azure-openai/<endpoint>:<deployment>
Azure OpenAI resource and deployment: create them in the Azure Portal.
Example
lm = LM(
language_model_id="azure-openai/https://<endpoint>.openai.azure.com:<deployment>",
language_model_credentials=os.getenv("AZURE_OPENAI_API_KEY")
)
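Because the Azure model ID embeds a full URL, it can be easier to build it from parts. The endpoint and deployment values below are placeholders for illustration:

```python
# Placeholder values; substitute your own resource endpoint and deployment name.
endpoint = "https://my-resource.openai.azure.com"
deployment = "my-gpt-deployment"

# Follows the documented format: azure-openai/<endpoint>:<deployment>
model_id = f"azure-openai/{endpoint}:{deployment}"
```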
Using OpenAI Compatible Endpoints
OpenAI-compatible endpoints include, but are not limited to, providers such as Groq.
Setup
Model ID Format: openai-compatible/<base_url>:<model_name>
Credentials vary depending on the endpoint. In this example, let's use Groq, which requires a Groq API key: Create one from Groq Console.
Example
lm = LM(
language_model_id="openai-compatible/https://api.groq.com/openai/v1:llama-3.1-8b-instant",
language_model_credentials=os.getenv("GROQ_API_KEY")
)
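As with Azure, the base URL can be kept separate and combined into the model ID. The Groq values below come from the example above; any OpenAI-compatible endpoint works the same way:

```python
# Groq values from the example above; substitute your own endpoint and model.
base_url = "https://api.groq.com/openai/v1"
model_name = "llama-3.1-8b-instant"

# Follows the documented format: openai-compatible/<base_url>:<model_name>
model_id = f"openai-compatible/{base_url}:{model_name}"
```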
Using LangChain
The available LangChain packages and classes can be found in the LangChain documentation.
Setup
Model ID Format: langchain/<package>.<class>:<model_name>
Credentials vary depending on the package and class. In this example, let's use ChatOpenAI, which requires an OpenAI API key: Create one from OpenAI Console.
Example
lm = LM(
language_model_id="langchain/langchain_openai.ChatOpenAI:gpt-4.1-nano",
language_model_credentials={"api_key": os.getenv("OPENAI_API_KEY")}
)
Using LiteLLM
Coming soon!