Talking to Large Language Models
Golem provides a library called golem-llm, a WebAssembly component that allows Golem components written in any of the supported languages to use a set of supported LLM providers.
The currently supported LLM providers are the following:
- Anthropic (Claude)
- xAI (Grok)
- OpenAI
- OpenRouter
The library contains a WIT interface providing a unified API for all these providers.
Starting with the LLM templates
For some languages, golem component new provides an example template called example-gitpulse-agent. To start a new application from this example, use the following sequence of golem CLI commands:
golem app new my-agent python
cd my-agent
golem component new python:example-gitpulse-agent
Adding LLM support to an existing component
To add the golem-llm dependency to an existing Golem component, follow these steps:
Select one of the released WASM binaries
Go to the releases page of golem-llm and copy the link to the latest WASM binary of the desired LLM provider. The -portable versions are meant to be used outside of Golem, so choose the non-portable ones.
Add a new dependency to golem.yaml
In the golem.yaml of the component, add the following new dependency:
dependencies:
  my:component: # <-- the name of the component you add the golem-llm dependency to
    - type: wasm
      url: https://github.com/golemcloud/golem-llm/releases/download/v0.1.2/golem_llm_openai.wasm
Import golem-llm in WIT
Add the following import statement to your component's WIT file:
import golem:llm/llm@1.0.0;
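The import goes inside your component's world. As a minimal sketch, assuming a package and world named as below (use your component's actual names), it could look like this:
package my:component;

world my-component {
  // the LLM interface imported from golem-llm
  import golem:llm/llm@1.0.0;

  // your component's own exported API (assumed name, for illustration only)
  export run: func();
}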
Rebuild the application
The next build will download the WASM component and regenerate the language-specific bindings for the new import:
golem app build
Using the library
There are three top-level functions exported from golem-llm:
send: func(messages: list<message>, config: config) -> chat-event;
continue: func(messages: list<message>, tool-results: list<tuple<tool-call, tool-result>>, config: config) -> chat-event;
%stream: func(messages: list<message>, config: config) -> chat-stream;
- send sends a prompt with a configuration to the LLM and returns the whole response
- continue can be used after one send and zero or more continue calls, if the returned result contained a tool call. After performing the call, the results can be passed back to the LLM with the tool-results parameter.
- stream returns a chat-stream resource which can be used to process the LLM response incrementally as it arrives (see the sketch after this list)
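The following is a minimal sketch of consuming a stream from Rust. It assumes the generated bindings expose the chat-stream resource with a blocking-get-next method and a stream-event variant with delta, finish and error cases, and it treats an empty batch of events as the end of the stream; these names and that convention are assumptions, so verify them against your generated bindings.
use crate::bindings::golem::llm::llm;

let config = llm::Config {
    model: "gpt-3.5-turbo".to_string(),
    temperature: None,
    max_tokens: None,
    stop_sequences: None,
    tools: vec![],
    tool_choice: None,
    provider_options: vec![],
};

let messages = vec![llm::Message {
    role: llm::Role::User,
    name: None,
    content: vec![llm::ContentPart::Text(
        "Tell me a short story.".to_string(),
    )],
}];

let stream = llm::stream(&messages, &config);
'events: loop {
    // blocking-get-next waits for the next batch of stream events;
    // an empty batch is treated here as the end of the stream (assumption)
    let events = stream.blocking_get_next();
    if events.is_empty() {
        break;
    }
    for event in events {
        match event {
            llm::StreamEvent::Delta(delta) => {
                // print the text parts of each delta as they arrive
                for part in delta.content.unwrap_or_default() {
                    if let llm::ContentPart::Text(text) = part {
                        print!("{text}");
                    }
                }
            }
            llm::StreamEvent::Finish(_) => break 'events,
            llm::StreamEvent::Error(error) => {
                eprintln!("LLM error: {}", error.message);
                break 'events;
            }
        }
    }
}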
For general information about how to construct prompts and how tool usage works, read the chosen LLM's official documentation.
Images are supported in the input as messages referring to external images via a URL.
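For example, a user message combining text with an externally hosted image could be built like this. The sketch assumes the bindings expose an image content part wrapping a URL record; the Image and ImageUrl names and the detail field are assumptions, so check the generated bindings for the exact shape.
let message = llm::Message {
    role: llm::Role::User,
    name: None,
    content: vec![
        llm::ContentPart::Text("What is shown on this picture?".to_string()),
        // referring to an externally hosted image by its URL
        llm::ContentPart::Image(llm::ImageUrl {
            url: "https://example.com/picture.png".to_string(),
            detail: None,
        }),
    ],
};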
Simple text chat example
The following example demonstrates the usage of send:
use crate::bindings::golem::llm::llm;
let config = llm::Config {
    model: "gpt-3.5-turbo".to_string(),
    temperature: Some(0.2),
    max_tokens: None,
    stop_sequences: None,
    tools: vec![],
    tool_choice: None,
    provider_options: vec![],
};

let response = llm::send(
    &[llm::Message {
        role: llm::Role::User,
        name: Some("user-name".to_string()),
        content: vec![llm::ContentPart::Text(
            "What is the usual weather on the Vršič pass in the beginning of May?".to_string(),
        )],
    }],
    &config,
);
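The returned value is a chat-event. A minimal way to act on it, assuming the generated bindings expose it as a ChatEvent variant with Message, ToolRequest and Error cases (verify the exact names in your generated bindings), is to match on it and print the text parts of the answer:
match response {
    llm::ChatEvent::Message(message) => {
        // the complete response; print its text content parts
        for part in message.content {
            if let llm::ContentPart::Text(text) = part {
                println!("{text}");
            }
        }
    }
    llm::ChatEvent::ToolRequest(calls) => {
        // the model asked for tool calls; execute them and pass the
        // results back via the continue function described above
        println!("the model requested {} tool call(s)", calls.len());
    }
    llm::ChatEvent::Error(error) => {
        eprintln!("LLM error: {}", error.message);
    }
}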