The evolution of human-computer interaction has always focused on making complex computing tasks more approachable.

One promising advancement in this area is the natural language terminal. This tool lets users type commands in everyday language, which are then translated into commands the computer can understand, making the terminal's powerful features accessible to people who aren't familiar with the usual command-line syntax.

In this guide, we'll look at how to build a natural language terminal using Python. We'll use MistralAI's Codestral Mamba model to translate natural language commands into shell commands and execute them.

If you prefer, you can follow along with the video version of this guide.


Why Make a Natural Language Terminal?

One of the main reasons for making a natural language terminal is to make it easier to use and more accessible. Traditional command-line interfaces (CLIs) can be scary and hard for people who aren't familiar with command syntax. By letting users type commands in everyday language, a natural language terminal makes it easier for more people to use the computer's powerful terminal features.

For experienced users, a natural language terminal can also make them more productive. Instead of remembering and typing out complicated command sequences, users can just describe what they want to do in natural language. This can save time and reduce mistakes, leading to a more efficient workflow.

For beginners, a natural language terminal can be a helpful learning tool. As users type commands in natural language and see the corresponding computer commands, they can learn the syntax and structure of command-line operations. This gradual learning approach can encourage more people to use command-line tools and improve their technical skills over time.

A natural language terminal can be customized to fit specific needs and uses. By defining custom commands and responses, developers can adjust the terminal to fit particular workflows or automation tasks. This flexibility makes it a useful tool for a wide range of applications, from system administration to software development and more.
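As a hedged illustration of this kind of customization, a simple lookup table of user-defined phrases could be checked before any model call. The phrases and commands below are hypothetical examples, not part of the project's code:

```python
# A minimal sketch of custom-command support: a lookup table of
# user-defined phrases checked before falling back to the model.
# The phrases and commands below are hypothetical examples.
CUSTOM_COMMANDS = {
    "deploy staging": "git push staging main",
    "clean builds": "rm -rf build/ dist/",
}


def resolve_custom_command(user_input):
    """Return a predefined shell command for known phrases, else None."""
    return CUSTOM_COMMANDS.get(user_input.strip().lower())


print(resolve_custom_command("Deploy staging"))  # -> git push staging main
print(resolve_custom_command("list files"))      # -> None
```

A lookup like this keeps frequent workflows fast and deterministic, while everything unrecognized still goes through the model.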


What is Codestral Mamba?

Codestral Mamba is an advanced language model created by Mistral AI, designed specifically for code generation.

It's part of the larger Codestral family, which focuses on helping developers by making code generation more efficient and accessible.

Key Features of Codestral Mamba

  • Made for Code Generation: Codestral Mamba is a Mamba2 language model that's great at generating code. It works with many programming languages, making it a useful tool for developers working in different environments.
  • High Performance: The model is built to perform very well in code generation tasks. It's excellent at turning natural language descriptions into working code, providing high accuracy and efficiency.
  • Supports Many Languages: Trained on a diverse dataset that includes over 80 programming languages, Codestral Mamba can handle both popular languages like Python, Java, and JavaScript, and less common ones like Fortran and Swift.
  • Advanced Capabilities: With a larger context window of 32k tokens, it performs better than many existing models in handling long-range dependencies in code. This makes it particularly effective for tasks that require understanding and generating large codebases.
  • Open-Weight Model: Released under the Apache 2.0 license, Codestral Mamba is available for research and testing purposes. This open-weight model allows for wide usage and experimentation within the developer community.
  • Integrated Tools and APIs: The model is designed to be easily integrated into various development environments, including IDEs like VSCode and JetBrains, through dedicated endpoints and plugins. This integration makes it easy to use for code generation and completion tasks.


Project Overview and Implementation

This project consists of the following files:

  • requirements.txt: Lists the dependencies required for the project.
  • codestral_wrapper.py: Contains the functions for interacting with the MistralAI API and the necessary prompts.
  • main.py: The main script that uses codestral_wrapper to convert the natural language commands into shell commands and execute them.

Setting Up the Environment

First, ensure you have the necessary libraries installed, for example with pip install -r requirements.txt. The requirements.txt file specifies the required packages:

rich
mistralai
python-decouple

We will use the rich library to build an interactive terminal with nicely formatted output.

Environment Variables

Since we will be using the cloud version of Codestral Mamba, you will need an account and an API key from MistralAI, which you can generate in your MistralAI account console.

Then you can create a .env file to safely store the API key:

MISTRALAI_API_KEY=<YOUR_MISTRALAI_API_KEY>
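If you'd rather avoid the python-decouple dependency, a standard-library sketch for reading the key is shown below. Note that os.environ does not parse .env files by itself, so with this approach the variable must be exported in your shell:

```python
import os


def load_api_key(name="MISTRALAI_API_KEY"):
    """Read the API key from the process environment.

    A stdlib alternative to python-decouple's config(); unlike decouple,
    os.environ does not read .env files, so the variable must already be
    set in the environment.
    """
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"{name} is not set")
    return key


# Usage: api_key = load_api_key()
```

python-decouple is still the more convenient choice here, since it reads the .env file directly.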

Understanding codestral_wrapper.py

The codestral_wrapper.py file encapsulates the functionality provided by the MistralAI library.

import json

from decouple import config
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

# Get API key from environment variable
api_key = config("MISTRALAI_API_KEY")

# Initialize the Mistral client
client = MistralClient(api_key=api_key)

# Set the model to use
model = "codestral-mamba-latest"


# Define the function to get a natural language command
def get_natural_language_command(prompt):
    # Define the messages to send to the chat API
    messages = [
        ChatMessage(role="system",
                    content="""
                            - You are a knowledgeable AI assistant that can translate natural language commands 
                            into shell commands. You are knowledgeable in various programming languages and tools.
                            - Your role is to receive a natural language command and translate it into a shell command.
                            - Your response should be a JSON with the following format:
                                {
                                    "command": "",
                                    "description": ""
                                }
                            - Make the commands useful and safe to execute."""),
        ChatMessage(role="user",
                    content=f"Translate the following natural language command into a shell command: {prompt}. "
                            f"Your response should be a JSON object with the translated command and a description.")
    ]
    # Call the chat API
    chat_response = client.chat(
        model=model,
        messages=messages
    )
    # Extract the shell command from the response
    json_command = json.loads(chat_response.choices[0].message.content)
    return json_command

Here's a breakdown of what the code does:

  • Import necessary libraries: json for working with JSON data, decouple for handling environment variables, and Mistral AI's client and ChatMessage for interacting with the Mistral AI API.
  • Retrieve the Mistral AI API key from the environment variable MISTRALAI_API_KEY.
  • Initialize the Mistral AI client using the API key.
  • Set the model to use for translation tasks. In this case, it's "codestral-mamba-latest".
  • Define a function called get_natural_language_command that takes a natural language prompt as input.
  • Inside the function, create a list of ChatMessage objects. The first message is a system prompt that instructs the model on its role and the expected JSON response format. The second message is a user prompt that contains the natural language command to be translated.
  • Call the Mistral AI chat API using the client's chat method, passing the model and messages as arguments.
  • Extract the shell command from the API response, which is in JSON format. The JSON object contains the translated command and a description.
  • Return the extracted JSON command.
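One practical caveat: the bare json.loads call will fail if the model wraps its JSON in markdown fences, which chat models sometimes do. A defensive helper (not part of the article's wrapper, just a sketch) could extract the JSON object before parsing:

```python
import json
import re


def extract_json(raw_content):
    """Parse a JSON object from a model response, tolerating markdown fences.

    A defensive sketch: it finds the outermost {...} span in the text,
    so responses wrapped in ```json ... ``` fences still parse cleanly.
    """
    match = re.search(r"\{.*\}", raw_content, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model response")
    return json.loads(match.group(0))


sample = '```json\n{"command": "ls -la", "description": "List files"}\n```'
print(extract_json(sample)["command"])  # -> ls -la
```

Swapping this in for the json.loads call in get_natural_language_command would make the wrapper more robust to formatting variations in the model's output.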

Utilizing codestral_wrapper in main.py

The main.py file demonstrates how to use the codestral_wrapper functions in a practical scenario:

import subprocess
from rich.console import Console
from rich.prompt import Prompt, Confirm
from codestral_wrapper import get_natural_language_command


# Initialize the console for rich
console = Console()


# Define the function to execute a shell command
def execute_command(command):
    try:
        result = subprocess.run(command, shell=True, check=True, capture_output=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.stderr


# Define the main function
def main():
    # Print the welcome message
    console.print("[bold green]Enhanced Terminal Tool[/bold green]")
    console.print("Type your command below and press Enter. Type 'exit' to quit.\n")

    # Start the main loop
    while True:
        # Get user input
        user_input = Prompt.ask("[bold yellow]You[/bold yellow]")

        # Check if the user wants to exit
        if user_input.lower() == 'exit':
            break

        # Check if the user input is a shell command
        if user_input.startswith('!'):
            # Execute as a shell command
            command = user_input[1:]
        else:
            # Interpret as a natural language command
            json_command = get_natural_language_command(user_input)
            command = json_command['command']
            # Show the proposed command and description
            console.print(f"[bold blue]Proposed Command:[/bold blue] {json_command['command']}")
            console.print(f"[bold blue]Description:[/bold blue] {json_command['description']}")
            # Confirm execution
            if not Confirm.ask("[bold red]Do you want to execute this command?[/bold red]"):
                console.print("[bold red]Command execution cancelled.[/bold red]\n")
                continue

        # Execute the command
        console.print(f"[bold blue]Executing:[/bold blue] {command}")
        output = execute_command(command)
        console.print(output)


# Entry point of the script
if __name__ == "__main__":
    main()

Here's a breakdown of what the code does:

  • Import necessary libraries: subprocess for executing shell commands, rich for creating a more user-friendly console interface, and get_natural_language_command from the codestral_wrapper module for translating natural language commands into shell commands.
  • Initialize the rich console.
  • Define a function called execute_command that takes a shell command as input, runs it using subprocess.run, and returns the command's output.
  • Define the main function called main.
  • Inside the main function, print a welcome message and start an infinite loop for user input.
  • In each loop iteration, get user input using Prompt.ask.
  • Check if the user wants to exit by typing 'exit'. If so, break the loop.
  • Check if the user input is a shell command by checking if it starts with '!'. If it does, execute it as a shell command. Otherwise, interpret it as a natural language command.
  • If the input is a natural language command, use the get_natural_language_command function to translate it into a shell command and get a description. Show the proposed command and description to the user, and ask for confirmation before executing the command.
  • Execute the command (either the provided shell command or the translated one) using the execute_command function, and print the output.
  • Set the script's entry point to be the main function if the script is run directly.
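One caveat with execute_command as written is that a blocking command will hang the terminal indefinitely. A hedged variant adds a timeout safeguard; the 10-second default is an arbitrary illustrative choice, not part of the original code:

```python
import subprocess


def execute_command_with_timeout(command, timeout=10):
    """Run a shell command and capture its output, with a timeout safeguard.

    A variant of the article's execute_command; the 10-second default
    timeout is an illustrative choice.
    """
    try:
        result = subprocess.run(
            command, shell=True, check=True,
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.stderr
    except subprocess.TimeoutExpired:
        return f"Command timed out after {timeout} seconds."


print(execute_command_with_timeout("echo hello"))  # -> hello
```

This matters more for a natural language terminal than a regular one, since the user may not anticipate that a generated command will block.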

Running Examples

Now that we have our terminal code complete, let's run some examples to see the Natural Language Terminal in action.

Run it with:

python main.py

Let's try some commands:

In this example, we asked to create a directory called 'terminal', and as we can see it returns the correct command and asks for confirmation before executing.

You might also notice that we need to specify the operating system; otherwise, the model might return commands for the wrong platform. This can be improved further.

Another example:

Here we asked to list the current directory contents. You can see that even with a typo, the request was converted to the right command.


Enhancing this Project

There are several enhancements that can be done to this project to increase its usefulness.

Some of them are:

  • Detecting OS: The user's Operating System can be automatically detected and included in the system prompt. This removes the need for the user to specify it.
  • Chat History: The chat of the previous user's requests and resulting commands can be stored and included in the system prompt. This enables the AI to have more context of the previously executed commands and also enables the user to specify less information.
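The OS-detection idea can be sketched with the standard library's platform module; the exact prompt wording below is an illustrative assumption:

```python
import platform


def build_system_context():
    """Gather OS details to prepend to the system prompt.

    A sketch of the 'Detecting OS' enhancement; the prompt wording is an
    illustrative assumption, not the downloadable version's exact text.
    """
    return (
        f"The user's operating system is {platform.system()} "
        f"{platform.release()}. Generate commands for this platform."
    )


# Usage: include build_system_context() in the system message of
# get_natural_language_command so users no longer need to state their OS.
print(build_system_context())
```

Appending this string to the system message removes the ambiguity that caused the wrong-platform commands seen earlier.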

This enhanced version is available to download; the source code download includes both the enhanced version and the version that we created in this article.

Examples of the enhanced version:

As you can see, it detected the operating system as shown with Windows 11 10.0.22631.

Another example:

This example shows the chat history functionality where after creating the new directory 'term', we simply asked to delete the previously created directory and it returned the correct command.


Conclusion

Building a natural language terminal greatly improves the ease of use and accessibility of command-line interfaces, allowing people to interact with their computers in a more natural way. By using the MistralAI library, especially the Codestral Mamba model, developers can easily translate natural language commands into executable shell commands. This not only makes work more efficient but also makes advanced terminal features available to people with different levels of technical skill.

Adding Codestral Mamba into a Python project, as explained in this guide, offers a practical and efficient way to use natural language processing for generating code. Its high performance, support for many languages, and simple integration make it a useful tool for increasing productivity and reducing mistakes in coding tasks.

A natural language terminal connects complex command-line operations with user-friendly interfaces, making advanced coding tools more intuitive and accessible. This development has the potential to change how developers and users work with their computers, leading to a more inclusive and efficient technology environment.