Quickstart: Langchain-hs

Prerequisites

  1. Install GHC and Stack via GHCup.
  2. For Ollama, download and install Ollama and make sure the model you want to use is available locally. You can list installed models with the ollama list command or install one with the ollama pull <model-name> command (the examples below use llama3.2, so ollama pull llama3.2).

Steps

  • Add langchain-hs to your project
package.yaml
dependencies:
- base < 5
- langchain-hs
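
If you maintain your .cabal file by hand instead of using hpack's package.yaml, the equivalent is a build-depends entry in the relevant stanza (a sketch; the file and stanza layout come from your own project):

build-depends:
    base <5
  , langchain-hs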

Example of generating a response from a single prompt

{-# LANGUAGE OverloadedStrings #-}

module LangchainLib (runApp) where

import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T

runApp :: IO ()
runApp = do
  let ollamaLLM = Ollama "llama3.2" []
  genResult <- generate ollamaLLM "Explain Haskell in simple terms." Nothing
  case genResult of
    Left err -> putStrLn $ "Generate error: " ++ err
    Right text -> putStrLn $ "Generated Text:\n" ++ T.unpack text

In the above code:

  1. Set up the Ollama LLM with the model name and an optional list of callback functions.
  2. Call the generate function with the prompt and optional parameters (Nothing here).
  3. Handle the result, which is either an error String or the generated Text.
  4. On success, generate returns a Text response, which you can print or use as needed.
warning

For Ollama, make sure the model you want to use is installed on your local machine; otherwise the call will fail with an error.
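
The snippets in this guide define runApp in a LangchainLib module; to actually run one, point your executable's Main at it (module and function names are just the ones used above):

module Main where

import LangchainLib (runApp)

main :: IO ()
main = runApp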

Example of generating a response from a chat history

{-# LANGUAGE OverloadedStrings #-}

module LangchainLib (runApp) where

import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
import Data.List.NonEmpty (fromList)

runApp :: IO ()
runApp = do
  let ollamaLLM = Ollama "llama3.2" []
  let chatHistory = fromList
        [ Message System "Explain everything with a Texas accent." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
  chatResult <- chat ollamaLLM chatHistory Nothing
  case chatResult of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:

  1. Set up the Ollama LLM with the model name and an optional list of callback functions.
  2. Create a chatHistory using fromList with the Message constructor.
  3. Call the chat function with the chatHistory and optional parameters.
  4. Handle the result, which is either an error String or the generated Text.
  5. On success, chat returns a Text response, which you can print or use as needed.
warning

For Ollama, make sure the model you want to use is installed on your local machine; otherwise the call will fail with an error.

note

The Message constructor takes 3 parameters:

  1. role: The role of the message sender (System, User, Assistant).
  2. content: The content of the message (Text).
  3. metadata: Optional metadata for the message (a type containing an optional name and an optional list of tool names; currently unstable).

defaultMessageData provides a default value for the metadata, which you can use when no specific metadata is needed.

chat takes a NonEmpty list of Message values as input. The NonEmpty type guarantees at the type level that the chat history is never empty.
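
Two practical consequences of NonEmpty (standard Data.List.NonEmpty, nothing langchain-specific): fromList is partial and errors on an empty list, so prefer nonEmpty when the history comes from dynamic data, and the Semigroup instance lets you append follow-up turns. A sketch:

import Data.List.NonEmpty (NonEmpty, fromList, nonEmpty)

-- nonEmpty :: [a] -> Maybe (NonEmpty a)
-- Returns Nothing instead of crashing on an empty list.
mkHistory :: [Message] -> Maybe (NonEmpty Message)
mkHistory = nonEmpty

-- Appending a follow-up turn to an existing history via (<>).
withFollowUp :: NonEmpty Message -> NonEmpty Message
withFollowUp history =
  history <> fromList [Message User "Give me an example." defaultMessageData]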

Example of streaming a response

{-# LANGUAGE OverloadedStrings #-}

module Main where

import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
import qualified Data.Text.IO as T
import Data.List.NonEmpty (fromList)

main :: IO ()
main = do
  let ollamaLLM = Ollama "llama3.2" []
  let chatHistory = fromList
        [ Message System "You are an AI assistant." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
  let handler = StreamHandler T.putStr (putStrLn "Response complete")
  eRes <- stream ollamaLLM chatHistory handler Nothing
  case eRes of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right _ -> pure ()

In the above code:

  1. Set up the Ollama LLM with the model name and an optional list of callback functions.
  2. Create a chatHistory using fromList with the Message constructor.
  3. Create a StreamHandler with onToken and onComplete functions.
  4. Call the stream function with the chatHistory, the StreamHandler, and optional parameters.
  5. Handle the result, which is either an error String or unit.
  6. On success, stream returns unit; the onToken function is called for each token as it is generated.
note

The StreamHandler takes two fields:

  1. onToken: A function that takes a Text and returns IO (). It is called for each token generated.
  2. onComplete: An IO () action (putStrLn "Response complete" in the example above) that runs when streaming is complete.
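
As an illustration (a sketch, not part of the library's API), a handler that buffers the streamed tokens in an IORef and prints the full reply only in onComplete, reusing the qualified Data.Text imports from the example above:

import Data.IORef (modifyIORef', newIORef, readIORef)

-- Collect tokens as they arrive; print everything once streaming ends.
bufferedHandler :: IO StreamHandler
bufferedHandler = do
  buf <- newIORef T.empty
  pure $
    StreamHandler
      (\tok -> modifyIORef' buf (<> tok)) -- onToken: append each chunk
      (readIORef buf >>= T.putStrLn)      -- onComplete: print the whole reply

You would then bind handler <- bufferedHandler and pass handler to stream as before.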
note

The Message constructor takes 3 parameters:

  1. role: The role of the message sender (System, User, Assistant).
  2. content: The content of the message (Text).
  3. metadata: Optional metadata for the message (a type containing an optional name and an optional list of tool names; currently unstable).

defaultMessageData provides a default value for the metadata, which you can use when no specific metadata is needed.

Like chat, stream takes a NonEmpty list of Message values as input, so the chat history is guaranteed to be non-empty.
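
Putting the pieces together, a hypothetical multi-turn loop (chatLoop is this sketch's own helper, not a library function) that keeps appending turns to the NonEmpty history so the model sees the full conversation:

{-# LANGUAGE OverloadedStrings #-}

module Main where

import Langchain.LLM.Core
import Langchain.LLM.Ollama (Ollama (..))
import qualified Data.Text.IO as T
import Data.List.NonEmpty (NonEmpty, fromList)
import System.IO (hFlush, stdout)

-- Read a user line, ask the model with the accumulated history,
-- print the reply, and recurse with the assistant's turn appended.
chatLoop :: Ollama -> NonEmpty Message -> IO ()
chatLoop llm history = do
  putStr "You: " >> hFlush stdout
  line <- T.getLine
  let history' = history <> fromList [Message User line defaultMessageData]
  res <- chat llm history' Nothing
  case res of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right reply -> do
      T.putStrLn $ "Assistant: " <> reply
      chatLoop llm (history' <> fromList [Message Assistant reply defaultMessageData])

main :: IO ()
main =
  chatLoop
    (Ollama "llama3.2" [])
    (fromList [Message System "You are an AI assistant." defaultMessageData])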