Using the LLM client¶
The LLM Client library in YARF provides Robot Framework keywords for interacting with Large Language Model servers that support the OpenAI Chat Completions API format. This enables you to integrate AI capabilities into your test automation workflows.
In this guide, we will cover:

- Setting up an LLM server
- Basic text prompting
- Using images in prompts
- Configuring the client
Setting up an LLM server¶
The LLM Client is designed to work with any server that implements the OpenAI Chat Completions API. The most common setup is using Ollama:
Installing Ollama¶
curl -fsSL https://ollama.com/install.sh | sh
Starting a model¶
ollama run qwen3-vl:2b-instruct
The model is downloaded automatically the first time it is run.
By default, Ollama serves on http://localhost:11434, with the Chat Completions API available at /v1/chat/completions.
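If you want your suite to state these values explicitly (for example, to make a shared setup self-documenting), the same defaults can be passed to the Configure Llm Client keyword described later in this guide. A minimal sketch, using Ollama's out-of-the-box values:

*** Test Cases ***
Match Ollama Defaults
    # These values mirror Ollama's defaults; adjust if your server listens elsewhere
    Configure Llm Client
    ...    server_url=http://localhost:11434
    ...    endpoint=/v1/chat/completions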
Basic text prompting¶
To use the LLM Client in your Robot Framework tests, first import the library:
*** Settings ***
Library    yarf.rf_libraries.libraries.llm_client.LlmClient
Simple text prompt¶
*** Test Cases ***
Ask LLM a Question
    ${response}=    Prompt Llm    What is the capital of France?
    Log    ${response}
    Should Contain    ${response}    Paris
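The prompt can also be passed as a named argument, as the later examples do, which reads more clearly once other options are involved:

*** Test Cases ***
Ask LLM a Question With Named Argument
    ${response}=    Prompt Llm    prompt=What is the capital of France?
    Log    ${response}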
Using system prompts¶
System prompts help guide the LLM’s behavior and responses:
*** Test Cases ***
Structured Response
    ${response}=    Prompt Llm
    ...    prompt=Analyze this test result
    ...    system_prompt=You are a test automation expert. Provide concise, actionable feedback.
    Log    ${response}
Using images in prompts¶
The LLM Client supports multi-modal prompts that include both text and images. This is particularly useful for visual testing scenarios.
Image from file path¶
*** Test Cases ***
Analyze Interface Screenshot
    # Take a screenshot first (example using YARF's screenshot capabilities)
    ${image}=    Grab Screenshot
    # Prompt the LLM to analyze the image
    ${analysis}=    Prompt Llm
    ...    prompt=Describe what you see in this user interface
    ...    image=${image}
    Log    ${analysis}
    # This assertion assumes the screenshot shows a simple calculator
    Should Contain    ${analysis}    calculator
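Model output can vary in capitalization, so a case-insensitive assertion is often more robust. BuiltIn's Should Contain accepts an ignore_case argument (Robot Framework 3.0.1 and later), so the final check above could instead be written as:

    Should Contain    ${analysis}    calculator    ignore_case=True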
Image validation workflows¶
*** Test Cases ***
Validate Installation Screen
    ${image}=    Grab Screenshot
    ${validation}=    Prompt Llm
    ...    prompt=Does this screen show the Ubuntu installation on the "choose your language" step? Answer with YES or NO.
    ...    image=${image}
    ...    system_prompt=You are a UI testing assistant. Be very precise in your answers.
    Should Start With    ${validation}    YES
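Free-form answers do not always start with a bare YES. A more defensive variant (a sketch using Robot Framework's standard String library; the test name and system prompt wording here are illustrative) normalizes the answer before asserting:

*** Settings ***
Library    String
Library    yarf.rf_libraries.libraries.llm_client.LlmClient

*** Test Cases ***
Validate Installation Screen Defensively
    ${image}=    Grab Screenshot
    ${validation}=    Prompt Llm
    ...    prompt=Does this screen show the Ubuntu installation on the "choose your language" step? Answer with YES or NO.
    ...    image=${image}
    ...    system_prompt=You are a UI testing assistant. Answer with exactly YES or NO.
    # Normalize before comparing, since the model may add whitespace or vary case
    ${answer}=    Strip String    ${validation}
    ${answer}=    Convert To Upper Case    ${answer}
    Should Start With    ${answer}    YES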
Configuring the client¶
The LLM Client can be configured to work with different servers, models, and parameters.
Changing the model¶
*** Test Cases ***
Use Different Model
    Configure Llm Client    model=phi4-mini:3.8b
    ${response}=    Prompt Llm    Hello, what model are you?
    Log    ${response}
Using a different server¶
*** Test Cases ***
Remote Server Setup
    Configure Llm Client
    ...    server_url=http://192.168.1.100:11434
    ...    model=llama3.2-vision:11b
    ${response}=    Prompt Llm    Test connection
    Log    ${response}
Adjusting token limits¶
*** Test Cases ***
Short Response
    Configure Llm Client    max_tokens=100
    ${response}=    Prompt Llm    Write a brief summary of automated testing
    Log    ${response}
Complete configuration example¶
*** Test Cases ***
Custom Configuration
    Configure Llm Client
    ...    model=qwen3-vl:7b-instruct
    ...    server_url=http://llm-server:11434
    ...    endpoint=/v1/chat/completions
    ...    max_tokens=2048
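    # Illustrative follow-up: once configured, subsequent Prompt Llm calls
    # in the suite use these settings (the prompt text here is hypothetical)
    ${response}=    Prompt Llm    Confirm you can respond
    Log    ${response}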