Large Language Models (LLMs) have the potential to provide personalized support to learners and transform the programming experience. GuiPy provides built-in support for LLM-assisted programming in two forms: an LLM Assistant in the editor and an LLM Chat Window.
Both cloud-based (OpenAI, Gemini) and local (Ollama) LLMs are supported:
To use OpenAI, you need to register and create an API key. The API key can be either a project key or a legacy user key. Please note that this is a paid service, so you will need to buy usage credits or set up a payment method with OpenAI.
A $10 credit can be very helpful when using OpenAI with GuiPy. See also the OpenAI pricing. Note that the older GPT-3.5 Turbo model is 10 times cheaper than the newer GPT-4o model, which in turn is much cheaper than its predecessors gpt-4-turbo and gpt-4.
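Before entering the key in the GuiPy configuration, you may want to confirm that it works. Here is a minimal sketch using the official openai Python package (the package and the test prompt are illustrative assumptions; GuiPy itself only needs the key):

from openai import OpenAI

# Replace the placeholder with your project or legacy user key.
client = OpenAI(api_key="sk-...")

# One tiny request against the inexpensive gpt-3.5-turbo model.
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Reply with the single word OK."}],
)
print(answer.choices[0].message.content)

If the key is invalid or the account has no credit, the request raises an error with a descriptive message.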
To use Gemini, you will also need an API key. Although Gemini is a commercial service, the Text Embedding 004 model is free to use, making it a great fit for GuiPy.
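As with OpenAI, GuiPy only needs the key in its configuration, but you can verify it independently. Here is a minimal sketch using the google-generativeai package and the free Text Embedding 004 model mentioned above (the package choice is an assumption; GuiPy does not require it):

import google.generativeai as genai

# Replace the placeholder with your Gemini API key.
genai.configure(api_key="AIza...")

# Text Embedding 004 is the free model mentioned above.
result = genai.embed_content(model="models/text-embedding-004",
                             content="Hello from GuiPy")
print(len(result["embedding"]))   # size of the returned embedding vector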
Unlike the cloud-based providers OpenAI and Gemini, Ollama models are run locally.
Why would you want to run LLMs locally? Your code and prompts never leave your machine, there are no usage fees, and the models remain available offline.
To use Ollama you need a reasonably powerful GPU with enough memory; otherwise, it can be frustratingly slow to use.
You must first download and run the Ollama installer. Note that after installation, an Ollama Windows service starts automatically every time you start Windows.
Ollama provides access to a large number of LLM models, such as Codegemma from Google and Codellama from Meta. To use a specific model, you must first install it locally. You can do this from a command prompt with the command:
ollama pull model_name
After that, you can use the local model with GuiPy.
The wizard currently works with the codellama:code model and its variants. With the chat you can use models such as codellama, codegemma, starcoder2, and deepseek-coder-v2. For each model, choose the largest variant that fits comfortably in your GPU memory. For example, codellama:13b-code-q3_K_M is a codellama:code variant with a size of 6.3 GB that can be used on a GPU with 8 GB or more of memory. Keep in mind that the larger the model, the slower the response, but the higher the quality of the answers.
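Before using a freshly pulled model in GuiPy, you can check that the Ollama service answers. Here is a minimal sketch against Ollama's local REST API using the requests package (the model name and prompt are just examples):

import requests

# Ollama listens on http://localhost:11434 by default.
reply = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",    # any model you have pulled
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,         # ask for one complete JSON answer
    },
)
print(reply.json()["response"])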
You can access the LLM Assistant from the editor's context menu.
Suggest
Request a suggestion from the assistant for code completion. This is only available when there is no selection in the editor.
The following three commands are only available when a portion of code is selected.
Explain
Ask the assistant to explain the selected section of code. The explanation is in the form of comments.
Fix Bugs
Ask the assistant to fix the bugs in the selected section of code. The assistant should also explain the nature of the original problems and their solution using comments in the code.
Optimize
Ask the assistant to optimize the selected section of code. The optimizations should retain the original functionality of the code.
Cancel
Cancel a pending request.

In all the above cases, an activity indicator is displayed in the GuiPy status bar. You can also cancel a request by clicking on the activity indicator.
The settings for the LLM Assistant, in particular the API keys, are entered in the GuiPy configuration.
All LLM Assistant answers are displayed in a pop-up suggestion window:
You can edit the answer before executing any of the following commands:
Accept (Tab)
Inserts the contents of the suggestion at the cursor position.
Accept Word (Ctrl+Right)
Accepts only the first word of the suggestion.
Accept Line (Ctrl+Enter)
Accepts only the first line of the suggestion.
Cancel (Esc)
Ignores the suggestion and returns to the editor.
The LLM Chat Window is designed for interacting with Large Language Models (LLMs) without leaving GuiPy. You open it via the View menu.
It offers multiple chat topics, a persistent chat history, and commands for copying code directly into the editor.
Toolbar commands:
New Topic
Adds a new chat topic.
Remove Current Topic
Removes the current chat topic.
Save Chat History
Saves the chat topics in a JSON file called "Chat history.json" in the same directory as GuiPy.ini. The chat history is automatically restored when GuiPy is restarted.
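Since the history is stored as plain JSON, you can also inspect it outside GuiPy. Here is a minimal sketch (the file path depends on the location of your GuiPy.ini, and the internal schema is not documented here, so only the top-level structure is shown):

import json
from pathlib import Path

# Adjust this path to the directory containing your GuiPy.ini.
history = Path("Chat history.json")

with history.open(encoding="utf-8") as f:
    topics = json.load(f)

print(type(topics).__name__)   # top-level JSON type, e.g. list or dict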
Title
Gives the current topic a title. The topic title is displayed in the window title.
Next/Previous Topic
Displays the next/previous topic.
Settings for the LLM Chat Window are entered in the GuiPy configuration.
Copy
Copies the question or answer under the cursor to the clipboard.
Copy Code
Copies the Python code under the cursor to the clipboard.
Copy Code to New Editor
Copies the Python code under the cursor to a new editor so you can easily test it.
When entering a question or prompt, you can use parameters and modifiers. For example, instead of copying and pasting code from the editor, you can use the following prompt:
Please explain the following code: $[SelText]
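The same pattern works for other requests on the current selection, for example:

Please suggest optimizations for the following code: $[SelText]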
You can invoke parameter completion with Shift+Ctrl+P and modifier completion with Shift+Ctrl+M.
To send your request, press Ctrl+Enter or click the chat icon to the right of the question.