Output Parser
What's an Output Parser?
The output parser is a module designed to parse the text output of a language model into a specific format, such as JSON. In this tutorial, you'll learn how to use the JSONOutputParser in just a few lines of code. You can explore the available types of output parsers here.
Installation
For macOS/Linux:
# you can use a Conda environment
pip install --extra-index-url https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/ gllm-inference
For Windows (Command Prompt):
REM you can use a Conda environment
FOR /F "tokens=*" %T IN ('gcloud auth print-access-token') DO pip install --extra-index-url "https://oauth2accesstoken:%T@glsdk.gdplabs.id/gen-ai-internal/simple/" gllm-inference
Quickstart
Let’s jump into a basic example using JSONOutputParser, which extracts a JSON string contained in the text output of a language model.
from gllm_inference.output_parser import JSONOutputParser
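# Example text output from a language model that contains a JSON object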
text = """
Here's the extracted JSON:
{
"name": "John Doe",
"age": 30,
"city": "New York"
}
Please let me know if you need any more information!
"""
output_parser = JSONOutputParser()
result = output_parser.parse(text)
print(result)
Expected Output
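Assuming JSONOutputParser returns the extracted JSON as a Python dictionary, running the snippet above should print something like:
{'name': 'John Doe', 'age': 30, 'city': 'New York'}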
And that's it! Using an output parser is as straightforward as it can get!