Quickstart: Langchain-hs
Prerequisites
- Install GHC and Stack via GHCup
- For Ollama, download and install Ollama and make sure the model you want to use is installed. You can list installed models with the `ollama list` command, or install one with `ollama pull <model-name>` (an example follows this list).
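For example, to fetch the model used throughout the snippets below:

ollama pull llama3.2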
Steps
- Add `langchain-hs` to your project's dependencies (for example, in `package.yaml`):
dependencies:
- base < 5
- langchain-hs
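If your project uses a `.cabal` file directly instead of `package.yaml`, the equivalent is a `build-depends` entry (a sketch; merge it into your existing library or executable stanza):

build-depends:
    base <5
  , langchain-hs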
Example of generating a response from a single prompt
Examples are shown for Ollama, OpenAI, and Huggingface in turn.

Ollama:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
runApp :: IO ()
runApp = do
let ollamaLLM = Ollama "llama3.2" []
genResult <- generate ollamaLLM "Explain Haskell in simple terms." Nothing
case genResult of
Left err -> putStrLn $ "Generate error: " ++ err
Right text -> putStrLn $ "Generated Text:\n" ++ T.unpack text
In the above code:
- Set up the `Ollama` LLM with the model name and an optional list of callbacks.
- Call the `generate` function with the prompt and optional parameters.
- Handle the result, which is either an error or the generated text.
- The `generate` function returns a `Text` response, which you can print or use as needed.

For Ollama, make sure the model you want to use is installed on your local machine; otherwise an error is returned.

OpenAI:
{-# LANGUAGE OverloadedStrings #-}
module Main where

import qualified Data.Text as T
import qualified Langchain.LLM.Core as LLM
import Langchain.LLM.OpenAI (OpenAI(..))

main :: IO ()
main = do
  let openAI = OpenAI
        { apiKey = "your-api-key"
        , callbacks = []
        , baseUrl = Nothing
        }
  result <- LLM.generate openAI "Tell me a joke" Nothing
  case result of
    Left err -> putStrLn $ "Error: " ++ err
    -- generate returns Text, so unpack before printing
    Right response -> putStrLn $ T.unpack response
In the above code:
- Set up the `OpenAI` LLM with the API key and an optional list of callbacks. You can specify a custom base URL if needed.
- Call the `generate` function with the prompt and optional parameters.
- Handle the result, which is either an error or the generated text.
- The `generate` function returns a `Text` response, which you can print or use as needed.
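The `baseUrl` field is useful when targeting an OpenAI-compatible server. A minimal sketch, assuming such a server is running locally (the URL is illustrative, not part of langchain-hs):

-- Hypothetical OpenAI-compatible endpoint; substitute your own URL.
let localAI = OpenAI
      { apiKey = "your-api-key"
      , callbacks = []
      , baseUrl = Just "http://localhost:8080/v1"
      }

Huggingface: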
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where

import qualified Data.Text as T
import Langchain.LLM.Core
import Langchain.LLM.Huggingface

runApp :: IO ()
runApp = do
  let huggingface =
        Huggingface
          { provider = Cerebras
          , apiKey = "your-api-key"
          , modelName = "llama-3.3-70b"
          , callbacks = []
          }
  eRes <- generate huggingface "Explain Monads in Haskell" Nothing
  case eRes of
    Left err -> putStrLn $ "Generate error: " ++ err
    Right response -> putStrLn $ "Generated Text:\n" ++ T.unpack response
In the above code:
- Set up the `Huggingface` LLM with the provider, API key, model name, and an optional list of callbacks.
- Call the `generate` function with the prompt and optional parameters.
- Handle the result, which is either an error or the generated text.
- The `generate` function returns a `Text` response, which you can print or use as needed.
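Because `generate` (along with `chat` and `stream`) is a method of the LLM typeclass in `Langchain.LLM.Core`, the same calling code can serve any of the providers above. A minimal sketch, assuming the typeclass is exported as `LLM` and reusing the imports from the examples:

-- Works with Ollama, OpenAI, or Huggingface alike.
askModel :: LLM llm => llm -> T.Text -> IO ()
askModel llm prompt = do
  res <- generate llm prompt Nothing
  case res of
    Left err -> putStrLn $ "Error: " ++ err
    Right answer -> putStrLn $ T.unpack answer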
Example of generating a response from a chat history
Again, examples follow for Ollama, OpenAI, and Huggingface.

Ollama:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
import Data.List.NonEmpty (fromList)
runApp :: IO ()
runApp = do
let ollamaLLM = Ollama "llama3.2" []
let chatHistory = fromList
[ Message System "Explain everything with a texas accent." defaultMessageData
, Message User "What is functional programming?" defaultMessageData
]
chatResult <- chat ollamaLLM chatHistory Nothing
case chatResult of
Left err -> putStrLn $ "Chat error: " ++ err
Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:
- Set up the `Ollama` LLM with the model name and an optional list of callbacks.
- Create a `chatHistory` using `fromList` with the `Message` constructor.
- Call the `chat` function with the `chatHistory` and optional parameters.
- Handle the result, which is either an error or the generated text.
- The `chat` function returns a `Text` response, which you can print or use as needed.

For Ollama, make sure the model you want to use is installed on your local machine; otherwise an error is returned.

The `Message` constructor takes three parameters:
- `role`: The role of the message sender (`System`, `User`, or `Assistant`).
- `content`: The content of the message (`Text`).
- `messageData`: Optional metadata for the message (a type containing an optional name and an optional list of tool calls). `defaultMessageData` is a default value you can use when no specific metadata is needed.

`chat` takes a `NonEmpty` list of `Message` values as input; the `NonEmpty` type guarantees that the chat history is never empty.
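Note that `fromList` from `Data.List.NonEmpty` is partial: it throws on an empty list. When the history comes from a dynamic source, the total `nonEmpty` function is safer. A minimal sketch, reusing the imports from the example above:

import Data.List.NonEmpty (nonEmpty)

-- nonEmpty returns Maybe (NonEmpty a) instead of throwing on [].
sendHistory :: Ollama -> [Message] -> IO ()
sendHistory llm msgs =
  case nonEmpty msgs of
    Nothing -> putStrLn "No messages to send."
    Just history -> do
      res <- chat llm history Nothing
      either (putStrLn . ("Chat error: " ++)) (putStrLn . T.unpack) res

OpenAI: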
{-# LANGUAGE OverloadedStrings #-}
module Main where

import qualified Data.Text as T
import Data.List.NonEmpty (fromList)
import Langchain.LLM.Core
import Langchain.LLM.OpenAI (OpenAI(..))

main :: IO ()
main = do
  let openAI = OpenAI
        { apiKey = "your-api-key"
        , callbacks = []
        , baseUrl = Nothing
        }
  let chatHistory = fromList
        [ Message System "You are an AI assistant." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
  chatResult <- chat openAI chatHistory Nothing
  case chatResult of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:
- Set up the `OpenAI` LLM with the API key and an optional list of callbacks. You can specify a custom base URL if needed.
- Create a `chatHistory` using `fromList` with the `Message` constructor.
- Call the `chat` function with the `chatHistory` and optional parameters.
- Handle the result, which is either an error or the generated text.
- The `chat` function returns a `Text` response, which you can print or use as needed.

The notes on the `Message` constructor (whose metadata is currently unstable) and the `NonEmpty` chat history from the Ollama example above apply here unchanged.

Huggingface:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where

import qualified Data.Text as T
import Langchain.LLM.Core
import Langchain.LLM.Huggingface
import Data.List.NonEmpty (fromList)

runApp :: IO ()
runApp = do
  let huggingface =
        Huggingface
          { provider = Cerebras
          , apiKey = "your-api-key"
          , modelName = "llama-3.3-70b"
          , callbacks = []
          }
  let chatHistory = fromList
        [ Message System "You are an AI assistant." defaultMessageData
        , Message User "What is functional programming?" defaultMessageData
        ]
  eRes <- chat huggingface chatHistory Nothing
  case eRes of
    Left err -> putStrLn $ "Chat error: " ++ err
    Right response -> putStrLn $ "Chat Response:\n" ++ T.unpack response
In the above code:
- Set up the `Huggingface` LLM with the provider, API key, model name, and an optional list of callbacks.
- Create a `chatHistory` using `fromList` with the `Message` constructor.
- Call the `chat` function with the `chatHistory` and optional parameters.
- Handle the result, which is either an error or the generated text.
- The `chat` function returns a `Text` response, which you can print or use as needed.

The notes on the `Message` constructor and the `NonEmpty` chat history from the Ollama example above apply here unchanged.
Example of streaming a response
Once more, examples follow for Ollama, OpenAI, and Huggingface.

Ollama:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Langchain.LLM.Ollama (Ollama(..))
import Langchain.LLM.Core
import qualified Data.Text as T
import qualified Data.Text.IO as T
import Data.List.NonEmpty (fromList)
main :: IO ()
main = do
let ollamaLLM = Ollama "llama3.2" []
let chatHistory = fromList
[ Message System "You are an AI assistant." defaultMessageData
, Message User "What is functional programming?" defaultMessageData
]
let handler = StreamHandler T.putStr (putStrLn "Response complete")
eRes <- stream ollamaLLM chatHistory handler Nothing
case eRes of
Left err -> putStrLn $ "Chat error: " ++ err
Right _ -> pure ()
In the above code:
- Set up the `Ollama` LLM with the model name and an optional list of callbacks.
- Create a `chatHistory` using `fromList` with the `Message` constructor.
- Create a `StreamHandler` with `onToken` and `onComplete` actions.
- Call the `stream` function with the `chatHistory`, the `StreamHandler`, and optional parameters.
- Handle the result, which is either an error or unit.
- The `stream` function returns unit; the `onToken` action is called for each token as it is generated.

The `StreamHandler` has two fields:
- `onToken`: A function of type `Text -> IO ()` that is called for each generated token.
- `onComplete`: An `IO ()` action that is run when streaming is complete.
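`onToken` is not limited to printing; any `IO` action works. A minimal sketch (a hypothetical helper reusing the imports above plus `Data.IORef`) that collects tokens so the full response can be reassembled after streaming finishes:

import Data.IORef (newIORef, modifyIORef', readIORef)

-- Collect streamed tokens instead of printing them.
collectingHandler :: IO (StreamHandler, IO T.Text)
collectingHandler = do
  ref <- newIORef []
  let handler = StreamHandler
        { onToken = \t -> modifyIORef' ref (t :)
        , onComplete = putStrLn "Response complete"
        }
  -- Tokens were prepended as they arrived, so reverse before concatenating.
  pure (handler, T.concat . reverse <$> readIORef ref)

Pass the handler to `stream`; once it returns `Right ()`, run the second action to obtain the concatenated response.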
The `Message` constructor and the `NonEmpty` chat-history requirement are the same as described in the chat examples above; `stream`, like `chat`, takes a `NonEmpty` list of `Message` values.

OpenAI:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Data.List.NonEmpty (fromList)
import qualified Data.Text.IO as T
import Langchain.LLM.Core as LLM
import Langchain.LLM.OpenAI
runApp :: IO ()
runApp = do
let openAI =
OpenAI
{ apiKey = "your-api-key"
, callbacks = []
, baseUrl = Nothing
}
let chatHistory =
fromList
[ Message System "You are an AI assistant." defaultMessageData
, Message User "What is functional programming?" defaultMessageData
]
let streamHandler = StreamHandler {
onToken = T.putStr,
onComplete = pure ()
}
chatResult <- LLM.stream openAI chatHistory streamHandler Nothing
case chatResult of
Left err -> putStrLn $ "Chat error: " ++ err
Right _ -> pure ()
In the above code:
- Set up the `OpenAI` LLM with the API key and an optional list of callbacks. You can specify a custom base URL if needed.
- Create a `chatHistory` using `fromList` with the `Message` constructor.
- Create a `StreamHandler` with `onToken` and `onComplete` actions.
- Call the `stream` function with the `chatHistory`, the `StreamHandler`, and optional parameters.
- Handle the result, which is either an error or unit.
- The `stream` function returns unit; the `onToken` action is called for each token as it is generated.

The `StreamHandler` and `Message` notes from the Ollama streaming example above apply here unchanged.

Huggingface:
{-# LANGUAGE OverloadedStrings #-}
module LangchainLib (runApp) where
import Data.List.NonEmpty (fromList)
import qualified Data.Text.IO as T
import Langchain.LLM.Core as LLM
import Langchain.LLM.Huggingface
runApp :: IO ()
runApp = do
let huggingface =
Huggingface
{ provider = Cerebras
, apiKey = "your-api-key"
, modelName = "llama-3.3-70b"
, callbacks = []
}
  let chatHistory =
        fromList
          [ Message System "You are an AI assistant." defaultMessageData
          , Message User "What is functional programming?" defaultMessageData
          ]
let streamHandler =
StreamHandler
{ onToken = T.putStr
, onComplete = pure ()
}
eRes <- stream huggingface chatHistory streamHandler Nothing
case eRes of
Left err -> putStrLn $ "Chat error: " ++ err
Right _ -> pure ()
In the above code:
- Set up the `Huggingface` LLM with the provider, API key, model name, and an optional list of callbacks.
- Create a `chatHistory` using `fromList` with the `Message` constructor.
- Create a `StreamHandler` with `onToken` and `onComplete` actions.
- Call the `stream` function with the `chatHistory`, the `StreamHandler`, and optional parameters.
- Handle the result, which is either an error or unit.
- The `stream` function returns unit; the `onToken` action is called for each token as it is generated.

The `StreamHandler` and `Message` notes from the Ollama streaming example above apply here unchanged.
role: The role of the message sender (System, User, Assistant).content: The content of the message (Text).metadata: Optional metadata for the message (A type containing a optional name and optional list of toolnames) (Currently unstable).defaultMessageData: A default value for the metadata, which can be used if no specific metadata is provided.streamtakes aNonEmptylist ofMessageas input. TheNonEmptytype ensures that the list is not empty, which is important for chat history.