📌 I fine-tuned my model on a new programming language. You can do it too! 🚀


I have been using OpenAI ChatGPT-4 for a while now.
I don't have a lot of bad stuff to say about it.
But sometimes, it's not enough.

In Winglang, we wanted to use OpenAI and ChatGPT-4 to answer people's questions based on our documentation.

Your options are:

  • Use the OpenAI Assistants API (or any other vector-database-backed setup) with retrieval-augmented generation (RAG). It worked reasonably well, since Wing looks like JS, but there were still many mistakes.
  • Pass the entire documentation into the context window, which is super expensive.
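
To see why the context-window option hurts, here is a rough back-of-envelope estimate. Every number below (documentation size, token price, traffic) is a made-up assumption for illustration, not Wing's actual usage:

```python
# Back-of-envelope cost of stuffing the full docs into every request.
# Assumed (hypothetical) numbers: 500k tokens of documentation,
# GPT-4-class pricing of $10 per 1M input tokens, 1,000 questions/day.
DOC_TOKENS = 500_000
PRICE_PER_INPUT_TOKEN = 10 / 1_000_000
QUESTIONS_PER_DAY = 1_000

cost_per_question = DOC_TOKENS * PRICE_PER_INPUT_TOKEN
cost_per_day = cost_per_question * QUESTIONS_PER_DAY
print(f"${cost_per_question:.2f} per question, ${cost_per_day:,.0f} per day")
```

Even with generous prompt caching, paying for the whole documentation on every question adds up fast.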

Soon enough, we realized that was not going to work.
It's time to host our own LLM.


Your LLM dataset

Before we train our model, we need to create the data it will be trained on: in our case, the Winglang documentation. I will do something pretty simple.

  1. Extract all the URLs from the sitemap, send a GET request to each one, and collect the content.
  2. Parse it; we want to convert all the HTML into readable content.
  3. Run it through ChatGPT-4 to convert the content into a CSV that will serve as the dataset.
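
Steps 1 and 2 can be sketched with nothing but the Python standard library. The sitemap and HTML strings below are tiny stand-ins; in a real run you would fetch each page with a GET request (e.g. `urllib.request.urlopen`) instead:

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# Step 1: pull every <loc> URL out of the sitemap XML.
def urls_from_sitemap(xml_text: str) -> list[str]:
    root = ET.fromstring(xml_text)
    # Sitemaps carry a namespace, so match any tag ending with "loc".
    return [el.text for el in root.iter() if el.tag.endswith("loc")]

# Step 2: strip tags so the HTML becomes readable text.
class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.winglang.io/docs/</loc></url>
</urlset>"""
print(urls_from_sitemap(sitemap))
print(html_to_text("<h1>Wing</h1><p>let a = 'Hello';</p>"))
```

For real documentation pages you would likely reach for `requests` and `beautifulsoup4` instead, but the idea is the same.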

Once you finish, save the CSV with a single column named text that holds the question and the answer. We will use it later. It should look something like this:

<s>[INST]How to define a variable in Winglang[/INST] let a = 'Hello';</s>
<s>[INST]How to create a new lambda[/INST] bring cloud; let func = new cloud.Function(inflight () => { log('Hello from the cloud!'); });</s>

Save it on your computer in a new folder called data.
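
Writing that file is a few lines with Python's csv module. A minimal sketch; the two question/answer pairs are the examples from above, and in practice they would come from the ChatGPT-4 step:

```python
import csv
import os

# Hypothetical question/answer pairs; in practice these come from step 3.
pairs = [
    ("How to define a variable in Winglang", "let a = 'Hello';"),
    ("How to create a new lambda",
     "bring cloud; let func = new cloud.Function(inflight () => { log('Hello from the cloud!'); });"),
]

os.makedirs("data", exist_ok=True)
with open("data/train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])  # single column named "text"
    for question, answer in pairs:
        writer.writerow([f"<s>[INST]{question}[/INST] {answer}</s>"])
```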

Autotrain your model

My computer is pretty weak, so I decided to go with a smaller model - 7B parameters: mistralai/Mistral-7B-v0.1

There are millions of ways to train a model. We will use Hugging Face AutoTrain, via its CLI, without writing any Python code 🚀

With AutoTrain, you can train on your own computer (my approach here) or on Hugging Face's servers (for a fee), which lets you train larger models.

I have no NVIDIA GPU on my old MacBook Pro M1 (2021). Thank you, Apple 🍎.

Let's install autotrain.

pip install -U autotrain-advanced
autotrain setup > setup_logs.txt

Then, all we need to do is run the autotrain command:

autotrain llm \
--train \
--model "mistralai/Mistral-7B-Instruct-v0.2" \
--project-name "autotrain-wing" \
--data-path data/ \
--text-column text \
--lr "0.0002" \
--batch-size "1" \
--epochs "3" \
--block-size "1024" \
--warmup-ratio "0.1" \
--lora-r "16" \
--lora-alpha "32" \
--lora-dropout "0.05" \
--weight-decay "0.01" \
--gradient-accumulation "4" \
--quantization "int4" \
--mixed-precision "fp16"

Once finished, you will have a new directory called "autotrain-wing" containing the fine-tuned model 🚀

Playing with the model

To play with the model, start by running:

pip install transformers torch

Once completed, create a new Python file with the following code:

from transformers import pipeline

# Path to your local fine-tuned model directory
model_path = "./autotrain-wing"

# Load the model and tokenizer from the local directory;
# a fine-tuned causal LM is used through the text-generation pipeline
generator = pipeline("text-generation", model=model_path, tokenizer=model_path)

# Prompt in the same [INST] format we used for the dataset
prompt = "<s>[INST]How to define a variable in Winglang[/INST]"
result = generator(prompt, max_new_tokens=64)
print(result[0]["generated_text"])

Then run the file with Python from your terminal.
And you are done 🚀

Keep on working on your LLMs

I am still learning about LLMs.
One thing I realized is that it's not so easy to track changes with your models.

You can't really use Git for this: a model can easily exceed 100 GB, and Git doesn't handle files that large nicely.

A better way to do this is with a tool called KitOps.

I think it will soon become a standard in the LLM world, so make sure you star the repository so you can find it later.

  1. Download the latest KitOps release and install it.

  2. Go to the model folder and run the command to pack your LLM:

kit pack .

  3. You can also push it to Docker Hub by running:

kit pack . -t [your registry address]/[your repository name]/mymodelkit:latest

💡 To learn how to use Docker Hub, check this


⭐️ Star KitOps so you can find it again later ⭐️


I started a new YouTube channel mostly about open-source marketing :)

(Like how to get Stars, Forks, and Clients)

If that's something that interests you, feel free to subscribe to it here:

