Quickstart: Langchain-hs
Prerequisites
- Install GHC and Stack via GHCup.
- For Ollama, download and install Ollama and make sure the model you want to use is available locally. You can list installed models with the ollama list command, or install a model with ollama pull <model-name>.
Steps
- Add langchain-hs to your project's dependencies (shown here for a Stack package.yaml):
dependencies:
- base < 5
- langchain-hs
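If your project uses a .cabal file instead of package.yaml, the equivalent stanza would look roughly like this (a sketch; adjust the version bounds to your project):

build-depends:
    base <5
  , langchain-hs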
Example of generating a response from a single prompt
Ollama:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
runApp :: IO ()
runApp = do
let ollamaLLM = Ollama "llama3.2" []
genResult <- generate ollamaLLM "Explain Haskell in simple terms." Nothing
case genResult of
Left err -> putStrLn $ "Generate error: " ++ err
Right text -> putStrLn $ "Generated Text:\n" ++ T.unpack text
In the above code:
- Set up the Ollama LLM with the model name and an optional list of callbacks.
- Call the generate function with the prompt and optional parameters.
- Handle the result, which is either an error or the generated text.
- The generate function returns a Text response, which you can print or use as needed.
For Ollama, make sure the model you want to use is installed on your local machine; otherwise the call will return an error.
OpenAI:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import qualified Data.Text as T
import qualified Langchain.LLM.Core as LLM
import Langchain.LLM.OpenAI (OpenAI(..))
main :: IO ()
main = do
  let openAI = OpenAI
        { apiKey = "your-api-key"
        , openAIModelName = "gpt-4.1-nano"
        , callbacks = []
        }
  result <- LLM.generate openAI "Tell me a joke" Nothing
  case result of
    Left err -> putStrLn $ "Error: " ++ err
    Right response -> putStrLn $ T.unpack response -- response is Text, so unpack before printing
In the above code:
- Set up the OpenAI LLM with the API key, model name, and an optional list of callbacks.
- Call the generate function with the prompt and optional parameters.
- Handle the result, which is either an error or the generated text.
- The generate function returns a Text response, which you can print or use as needed.
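Hardcoding API keys is best avoided. Here is a minimal sketch of reading the key from an environment variable instead; the variable name OPENAI_API_KEY and the mkOpenAI helper are illustrative, not part of langchain-hs:

{-# LANGUAGE OverloadedStrings #-}
import Data.String (fromString)
import Langchain.LLM.OpenAI (OpenAI(..))
import System.Environment (lookupEnv)

-- Build an OpenAI config from the OPENAI_API_KEY environment variable,
-- returning Nothing when the variable is unset.
mkOpenAI :: IO (Maybe OpenAI)
mkOpenAI = do
  mKey <- lookupEnv "OPENAI_API_KEY"
  pure $ fmap
    (\k -> OpenAI
      { apiKey = fromString k -- fromString stays agnostic about the field's string type
      , openAIModelName = "gpt-4.1-nano"
      , callbacks = []
      })
    mKey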
Huggingface:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import qualified Data.Text as T
import Langchain.LLM.Core
import Langchain.LLM.Huggingface
runApp :: IO ()
runApp = do
  let huggingface =
        Huggingface
          { provider = Cerebras
          , apiKey = "your-api-key"
          , modelName = "llama-3.3-70b"
          , callbacks = []
          }
  eRes <- generate huggingface "Explain Monads in Haskell" Nothing
  case eRes of
    Left err -> putStrLn $ "Generate error: " ++ err
    Right response -> putStrLn $ "Response:\n" ++ T.unpack response
In the above code:
- Set up the Huggingface LLM with the provider, API key, model name, and an optional list of callbacks.
- Call the generate function with the prompt and optional parameters.
- Handle the result, which is either an error or the generated text.
- The generate function returns a Text response, which you can print or use as needed.
Example of generating a response from a chat history
Ollama:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
import Data.List.NonEmpty (fromList)
runApp :: IO ()
runApp = do
let ollamaLLM = Ollama "llama3.2" []
  let chatHistory = fromList
        [ Message System "Explain everything with a Texas accent." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
chatResult <- chat ollamaLLM chatHistory Nothing
case chatResult of
Left err -> putStrLn $ "Chat error: " ++ err
Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:
- Set up the Ollama LLM with the model name and an optional list of callbacks.
- Create a chatHistory using fromList and the Message constructor.
- Call the chat function with the chatHistory and optional parameters.
- Handle the result, which is either an error or the generated text.
- The chat function returns a Text response, which you can print or use as needed.
For Ollama, make sure the model you want to use is installed on your local machine; otherwise the call will return an error.
The Message constructor takes 3 parameters:
- role: The role of the message sender (System, User, or Assistant).
- content: The content of the message (Text).
- metadata: Optional metadata for the message, a type containing an optional name and an optional list of tool names (currently unstable).
defaultMessageData is a default value for the metadata, which you can use when no specific metadata is needed.
chat takes a NonEmpty list of Message as input; the NonEmpty type guarantees that the chat history is never empty.
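Note that fromList is partial: it throws an error on an empty list. If you prefer a total construction, the NonEmpty constructor (:|) avoids that failure mode. A minimal sketch:

{-# LANGUAGE OverloadedStrings #-}
import Data.List.NonEmpty (NonEmpty(..))
import Langchain.LLM.Core

-- Build the history with (:|), which cannot fail:
-- the first message is required, the rest are an ordinary list.
chatHistory :: NonEmpty Message
chatHistory =
  Message System "You are an AI assistant." defaultMessageData
    :| [ Message User "What is functional programming?" defaultMessageData ]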
OpenAI:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Data.List.NonEmpty (fromList)
import qualified Data.Text as T
import Langchain.LLM.Core as LLM
import Langchain.LLM.OpenAI (OpenAI(..))
main :: IO ()
main = do
  let openAI = OpenAI
        { apiKey = "your-api-key"
        , openAIModelName = "gpt-4.1-nano"
        , callbacks = []
        }
  let chatHistory = fromList
        [ Message System "You are an AI assistant." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
  chatResult <- LLM.chat openAI chatHistory Nothing
  case chatResult of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:
- Set up the OpenAI LLM with the API key, model name, and an optional list of callbacks.
- Create a chatHistory using fromList and the Message constructor.
- Call the chat function with the chatHistory and optional parameters.
- Handle the result, which is either an error or the generated text.
- The chat function returns a Text response, which you can print or use as needed.
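Because the history is just a NonEmpty Message, continuing a conversation is a matter of appending the assistant's reply and the next user turn. A sketch, assuming chat returns IO (Either String Text) as in the examples above; followUp and its follow-up question are illustrative, not part of langchain-hs:

{-# LANGUAGE OverloadedStrings #-}
import qualified Data.List.NonEmpty as NE
import qualified Data.Text as T
import Langchain.LLM.Core as LLM
import Langchain.LLM.OpenAI (OpenAI(..))

-- Append the assistant's reply and a follow-up question, then ask again.
followUp :: OpenAI -> NE.NonEmpty Message -> T.Text -> IO (Either String T.Text)
followUp llm history assistantReply = LLM.chat llm history' Nothing
  where
    history' =
      history
        <> NE.fromList
          [ Message Assistant assistantReply defaultMessageData
          , Message User "Can you give an example in Haskell?" defaultMessageData
          ]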
Huggingface:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Data.List.NonEmpty (fromList)
import qualified Data.Text as T
import Langchain.LLM.Core
import Langchain.LLM.Huggingface
runApp :: IO ()
runApp = do
  let huggingface =
        Huggingface
          { provider = Cerebras
          , apiKey = "your-api-key"
          , modelName = "llama-3.3-70b"
          , callbacks = []
          }
  let chatHistory = fromList
        [ Message System "You are an AI assistant." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
  eRes <- chat huggingface chatHistory Nothing
  case eRes of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:
- Set up the Huggingface LLM with the provider, API key, model name, and an optional list of callbacks.
- Create a chatHistory using fromList and the Message constructor.
- Call the chat function with the chatHistory and optional parameters.
- Handle the result, which is either an error or the generated text.
- The chat function returns a Text response, which you can print or use as needed.
Example of streaming a response
Ollama:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text.IO as T
import Data.List.NonEmpty (fromList)
main :: IO ()
main = do
let ollamaLLM = Ollama "llama3.2" []
let chatHistory = fromList
[ Message System "You are an AI assistant." defaultMessageData
, Message User "What is functional programming?" defaultMessageData
]
let handler = StreamHandler T.putStr (putStrLn "Response complete")
eRes <- stream ollamaLLM chatHistory handler Nothing
case eRes of
Left err -> putStrLn $ "Chat error: " ++ err
Right _ -> pure ()
In the above code:
- Set up the Ollama LLM with the model name and an optional list of callbacks.
- Create a chatHistory using fromList and the Message constructor.
- Create a StreamHandler with onToken and onComplete.
- Call the stream function with the chatHistory, the StreamHandler, and optional parameters.
- Handle the result, which is either an error or unit.
- The stream function returns unit; the onToken function is called for each token as it is generated.
The StreamHandler record has two fields:
- onToken: A function that takes a Text and returns IO (); it is called for each generated token.
- onComplete: An IO () action that is run when streaming is complete.
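A handler can do more than print. Below is a minimal sketch of one that prints each token and also accumulates the full response in an IORef; mkCollectingHandler is an illustrative helper, assuming onComplete :: IO () as in the examples here:

import Data.IORef
import qualified Data.Text as T
import qualified Data.Text.IO as TIO
import Langchain.LLM.Core

-- Print each token as it arrives and collect them all,
-- so the complete response can be read back after streaming.
mkCollectingHandler :: IO (StreamHandler, IORef [T.Text])
mkCollectingHandler = do
  ref <- newIORef []
  let handler = StreamHandler
        { onToken = \tok -> modifyIORef' ref (tok :) >> TIO.putStr tok
        , onComplete = putStrLn "\n[stream finished]"
        }
  pure (handler, ref)

-- The accumulated tokens are in reverse order; rebuild the full text with:
-- fullText <- T.concat . reverse <$> readIORef ref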
Like chat, stream takes a NonEmpty list of Message as input, so the chat history can never be empty.
OpenAI:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Data.List.NonEmpty (fromList)
import qualified Data.Text.IO as T
import Langchain.LLM.Core as LLM
import Langchain.LLM.OpenAI
runApp :: IO ()
runApp = do
let openAI =
OpenAI
{ apiKey = "your-api-key"
, openAIModelName = "gpt-4.1-nano"
, callbacks = []
}
let chatHistory =
fromList
[ Message System "You are an AI assistant." defaultMessageData
, Message User "What is functional programming?" defaultMessageData
]
let streamHandler = StreamHandler {
onToken = T.putStr,
onComplete = pure ()
}
chatResult <- LLM.stream openAI chatHistory streamHandler Nothing
case chatResult of
Left err -> putStrLn $ "Chat error: " ++ err
Right _ -> pure ()
In the above code:
- Set up the OpenAI LLM with the API key, model name, and an optional list of callbacks.
- Create a chatHistory using fromList and the Message constructor.
- Create a StreamHandler with onToken and onComplete.
- Call the stream function with the chatHistory, the StreamHandler, and optional parameters.
- Handle the result, which is either an error or unit.
- The stream function returns unit; the onToken function is called for each token as it is generated.
Huggingface:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Data.List.NonEmpty (fromList)
import qualified Data.Text.IO as T
import Langchain.LLM.Core as LLM
import Langchain.LLM.Huggingface
runApp :: IO ()
runApp = do
let huggingface =
Huggingface
{ provider = Cerebras
, apiKey = "your-api-key"
, modelName = "llama-3.3-70b"
, callbacks = []
}
  let chatHistory = fromList
        [ Message System "You are an AI assistant." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
let streamHandler =
StreamHandler
{ onToken = T.putStr
, onComplete = pure ()
}
eRes <- stream huggingface chatHistory streamHandler Nothing
case eRes of
Left err -> putStrLn $ "Chat error: " ++ err
Right _ -> pure ()
In the above code:
- Set up the Huggingface LLM with the provider, API key, model name, and an optional list of callbacks.
- Create a chatHistory using fromList and the Message constructor.
- Create a StreamHandler with onToken and onComplete.
- Call the stream function with the chatHistory, the StreamHandler, and optional parameters.
- Handle the result, which is either an error or unit.
- The stream function returns unit; the onToken function is called for each token as it is generated.