@hackage ollama-holes-plugin 0.1.1.0

A typed-hole plugin that uses LLMs to generate valid hole-fits

This package provides a GHC plugin that uses LLMs to generate valid hole-fits. It supports multiple backends including Ollama, OpenAI, and Gemini.

The following flags are available:

To specify the model to use:

-fplugin-opt=GHC.Plugin.OllamaHoles:model=<model_name>

To specify the backend to use (ollama, openai, or gemini):

-fplugin-opt=GHC.Plugin.OllamaHoles:backend=<backend_name>

When using the openai backend, you can specify a custom base_url, e.g.

-fplugin-opt=GHC.Plugin.OllamaHoles:openai_base_url=api.groq.com/api 

You can also specify the name of the environment variable to read the API key from:

-fplugin-opt=GHC.Plugin.OllamaHoles:openai_key_name=GROQ_API_KEY 

To specify how many fits to generate (passed to the model):

-fplugin-opt=GHC.Plugin.OllamaHoles:n=5

To enable debug output:

-fplugin-opt=GHC.Plugin.OllamaHoles:debug=True
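Putting the flags together, here is a minimal sketch of a module that enables the plugin via `OPTIONS_GHC` pragmas (the module name `Example` and the function are illustrative; the plugin module name `GHC.Plugin.OllamaHoles` is the one the flags above target):

```haskell
{-# OPTIONS_GHC -fplugin=GHC.Plugin.OllamaHoles #-}
{-# OPTIONS_GHC -fplugin-opt=GHC.Plugin.OllamaHoles:model=gemma3:27b #-}
{-# OPTIONS_GHC -fplugin-opt=GHC.Plugin.OllamaHoles:n=5 #-}
module Example where

-- Compiling this module produces a typed-hole error for `_`,
-- and the plugin asks the model for candidate fits to include
-- among the suggestions in GHC's error message.
firstChars :: [String] -> String
firstChars = _
```

The same flags can equally be passed on the GHC command line or placed under `ghc-options` in a `.cabal` file if you want the plugin enabled for a whole component.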

For the Ollama backend, make sure you have the Ollama CLI installed and the model you want to use is available. You can install the Ollama CLI by following the instructions at https://ollama.com/download, and you can install the default model (gemma3:27b) by running `ollama pull gemma3:27b`.

For the OpenAI backend, you'll need to set the OPENAI_API_KEY environment variable with your API key.

For the Gemini backend, you'll need to set the GEMINI_API_KEY environment variable with your API key.

Note that the speed and quality of the hole-fits generated by the plugin depend on the model you use, and the default model requires a GPU to run efficiently. For a smaller model, we suggest `gemma3:4b-it-qat` or `deepcoder:1.5b`.