AST Fine-Tuning from Setup to Evaluation and Next Steps


Fine-tuning an AST (Audio Spectrogram Transformer) model can be a game-changer for your project, but it requires careful setup to ensure you're getting the most out of it.

The first step is to choose the right dataset for fine-tuning, which should be closely related to your project's specific task. This dataset should be large enough to provide accurate results, but not so large that it overwhelms your resources.

With the right dataset in place, you can start fine-tuning your AST model using a variety of techniques, including data augmentation and transfer learning. This process can take some time, but it's essential for achieving the level of accuracy you need.

As you fine-tune your model, keep an eye on its performance using metrics like accuracy and loss. This will help you identify areas where the model needs improvement and make adjustments accordingly.

Configure and Initialize AST

To adapt the AST model to our specific audio classification task, we need to adjust the model's configuration. This is because our dataset has a different number of classes than the pre-trained model, and these classes correspond to different categories.


We'll replace the pre-trained classifier head with a new one for our multi-class problem. The weights for the new classifier head will be randomly initialized, while the rest of the model's weights will be loaded from the pre-trained version.

We can load the configuration from the pre-trained model using `ASTConfig.from_pretrained(pretrained_model)`. Then, we update the configuration with the number of labels in our dataset using `config.num_labels = num_labels` and `config.label2id = label2id`.

Finally, we initialize the model with the updated configuration using `model = ASTForAudioClassification.from_pretrained(pretrained_model, config=config, ignore_mismatched_sizes=True)`. This will result in warnings indicating that some weights, especially those in the classifier layers, are being reinitialized.
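Putting these steps together, a minimal sketch looks like this (the checkpoint name is only an example; `num_labels` and `label2id` are assumed to come from the dataset, as described in the preprocessing section below):

```python
from transformers import ASTConfig, ASTForAudioClassification

pretrained_model = "MIT/ast-finetuned-audioset-10-10-0.4593"  # example checkpoint

# Load the pre-trained configuration and adapt it to our label set
config = ASTConfig.from_pretrained(pretrained_model)
config.num_labels = num_labels
config.label2id = label2id
config.id2label = {i: label for label, i in label2id.items()}

# Initialize from pre-trained weights; the classifier head is reinitialized,
# which triggers the size-mismatch warnings mentioned above
model = ASTForAudioClassification.from_pretrained(
    pretrained_model,
    config=config,
    ignore_mismatched_sizes=True,
)
```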

Fine-Tuning Basics

Fine-tuning a model is no easy task, but using the right combination of libraries can make it easier. The script we used to produce the results in this blog post can be found here.

Performing full-parameter fine-tuning on models of this scale is demanding, but it is the standard technique used for all three tasks: the model is fine-tuned for next-token prediction, and all of its parameters are updated.



Here are some key points to consider when fine-tuning a model (the sketch after this list makes the second point concrete):

  1. Models are fine-tuned for next-token prediction.
  2. All parameters in the model are subject to gradient updates.
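A minimal PyTorch sketch of a full-parameter setup, assuming `model` is the `torch.nn.Module` being trained:

```python
# Make every parameter trainable (full fine-tuning, no frozen layers)
for param in model.parameters():
    param.requires_grad = True

# Report how many parameters will receive gradient updates
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```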

The script we used is built on top of Ray Train, Ray Data, Deepspeed, and Accelerate, and allows you to easily run any of the Llama-2 7B, 13B, or 70B models.


Fine-tuning is a promising approach, but it's not a one-size-fits-all solution. To determine whether fine-tuning is suitable for your use case, consider the following questions:

  1. New Concepts: Has the base model encountered the concepts within this task in its pre-training data, or is this an entirely new concept?
  2. Promising few-shot: Do you observe improvements when you employ few-shot prompting?
  3. Token budget: Can you provide lengthy prompts as input for every request, or will it quickly consume your token budget?

If you're dealing with a task that requires pattern recognition and has grounded facts, fine-tuning a smaller Llama-2 model could significantly enhance performance.

Preprocess Audio

To preprocess the audio data, we cast the audio and label columns to the correct feature types. This involves importing the necessary libraries and building the mapping between target values and class names from the dataset.

The Audio feature handles loading and processing audio files, resampling them to the desired sampling rate, which in this case is 16kHz. The ClassLabel feature maps integers to labels and vice versa.
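A sketch of the casting step, assuming a 🤗 Datasets dataset with `audio` and `labels` columns and a `class_names` list built from the dataset:

```python
from datasets import Audio, ClassLabel

# Decode and resample audio files to 16 kHz on access
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

# Map integer targets to class names and back
dataset = dataset.cast_column("labels", ClassLabel(names=class_names))
```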

We define which pretrained model we want to use and instantiate a feature extractor, a crucial step in preparing inputs for the AST model. The feature extractor encodes our waveforms into a format the model can process.

Credit: youtube.com, Audio Data Processing in Python

The feature extractor requires setting the mean and std values for normalization, which we can calculate using the transformation without augmentation on the training dataset. This step is essential to ensure that the model is trained on the correct data distribution.
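Instantiating the extractor and applying the dataset statistics might look like this (a sketch; `calculated_mean` and `calculated_std` are assumed to come from a pass over the un-augmented training set):

```python
from transformers import ASTFeatureExtractor

feature_extractor = ASTFeatureExtractor.from_pretrained(pretrained_model)

# Override the default AudioSet statistics with our dataset's own
feature_extractor.mean = calculated_mean
feature_extractor.std = calculated_std
feature_extractor.do_normalize = True
```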

We create a function to preprocess the audio data by encoding the audio arrays into the input_values format expected by the model. This function is set up to be applied dynamically, meaning it processes the data on-the-fly as each sample is loaded from the dataset.

The transformed data is yielded as input_values, which are tensors representing the spectrograms of the audio files.
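A sketch of that on-the-fly transform, using the feature extractor set up above:

```python
def preprocess_audio(batch):
    # Encode raw waveforms into the input_values the AST model expects
    wavs = [audio["array"] for audio in batch["audio"]]
    inputs = feature_extractor(wavs, sampling_rate=16_000, return_tensors="pt")
    return {"input_values": inputs.input_values, "labels": batch["labels"]}

# set_transform applies the function lazily, as each sample is loaded
dataset["train"].set_transform(preprocess_audio)
```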

LoRA Fine-Tuning

LoRA fine-tuning can be done by modifying an existing fine-tuning script; here, that meant forking the alpaca-lora repository and modifying the finetune_wizard_react.py file.
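Under the hood, alpaca-lora uses the 🤗 peft library. A minimal LoRA setup with peft looks roughly like this (illustrative values, not the exact settings from the forked script; `base_model` is the loaded base LLM):

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model; only the small adapter matrices are trained
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```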

To fine-tune the WizardLM itself, you need to inject the ReAct prelude that langchain adds to all prompts, which can be done by concatenating the prompt with the output.
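As a sketch, each training example could be assembled like this (`REACT_PRELUDE` stands in for the actual text langchain prepends, abridged here; `build_example` is a hypothetical helper):

```python
REACT_PRELUDE = "Answer the following questions as best you can..."  # abridged

def build_example(prompt: str, output: str) -> str:
    # Inject the ReAct prelude, then concatenate prompt and expected output
    return f"{REACT_PRELUDE}\n{prompt}\n{output}"
```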



In short, the training setup fine-tunes WizardLM itself, injects the ReAct prelude, and concatenates each prompt with its expected output.

Be sure to note where the fine-tuning script is saving the output, and install the requirements to execute it.

Saving the model did not work correctly out of the box, so the binary checkpoint had to be copied manually; the workaround was found here.

Setup and Training

In the final step of fine-tuning your model, you'll configure the training process with the 🤗 Transformers library and use the 🤗 Evaluate library to define the evaluation metrics to assess the model's performance.

The TrainingArguments class helps set up various parameters for the training process, such as learning rate, batch size, and number of epochs.
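For example (illustrative hyperparameters, not necessarily the values used here):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./runs/ast-classifier",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    num_train_epochs=10,
    eval_strategy="epoch",               # "evaluation_strategy" in older versions
    logging_strategy="steps",
    logging_steps=20,
)
```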

To evaluate the model's performance, you'll define metrics like accuracy, precision, recall, and F1 score, which the compute_metrics function will handle during training.
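A sketch of such a function built on the 🤗 Evaluate library:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
precision = evaluate.load("precision")
recall = evaluate.load("recall")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for each evaluation pass
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=1)
    return {
        **accuracy.compute(predictions=preds, references=labels),
        **precision.compute(predictions=preds, references=labels, average="macro"),
        **recall.compute(predictions=preds, references=labels, average="macro"),
        **f1.compute(predictions=preds, references=labels, average="macro"),
    }
```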

You can now use the Trainer class from Hugging Face to handle the training process, which integrates the model, training arguments, datasets, and metrics.

With everything set up, you can start training your model.
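Putting it together, assuming the model, arguments, datasets, and metrics defined above:

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    compute_metrics=compute_metrics,
)

trainer.train()
```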


Fine-Tuning Results


Fine-tuning results can be impressive, especially on tasks that require structured output: fine-tuned models consistently achieve a >90% success rate under both evaluation methods.

The ViGGO dataset is a good example, showing that fine-tuning provides a reliable and efficient way to accomplish such tasks. On math reasoning, fine-tuning with just the GSM8k data yields a 10% improvement.

Fine-tuning can also be more cost-effective in the long run than lengthy input prompts that quickly eat into your token budget. Fine-tuning in two stages, on MathQA and then GSM8k, results in a cumulative 10% improvement.

One notable example is the Llama-13b model, which improved 20% over the base model after fine-tuning on the MathQA dataset and then the GSM8k dataset. Fine-tuning the 7B and 13B models is also significantly cheaper than GPT-4 endpoint calls.

Here are some key findings from the experiments:

  1. Fine-tuned models reach a >90% success rate on ViGGO under both evaluation methods.
  2. GSM8k fine-tuning alone yields a 10% improvement, and the two-stage MathQA-then-GSM8k schedule brought Llama-13b 20% above its base model.
  3. Fine-tuned 7B and 13B models are significantly cheaper to run than GPT-4 endpoint calls.

Results will vary with the specific task and dataset, but in many cases fine-tuning leads to significant improvements in model performance.

Customizing LLM for Specific Use Cases

Credit: youtube.com, "okay, but I want GPT to perform 10x for my specific use case" - Here is how

Using a more capable model like WizardLM 7B can help with specific use cases; here, the model is used with Langchain's Zero-Shot ReAct tooling.

We want to improve the efficiency of local LLMs running with Langchain tools. Unfortunately, most models are not good at using these tools, so we need to fine-tune them.

To fine-tune WizardLM, we'll generate a dataset using prompts with an LLM. This dataset can be used to fine-tune any language model to understand how to use the Langchain Python REPL tool.

We'll reuse our previous code to use Vicuna, a model with the same architecture, by changing only the weights. This code can be found here.

The process involves several steps:

  1. Generate a list of tasks.
  2. Log prompts and outputs.
  3. Execute the tasks.
  4. Consolidate the dataset.
  5. Fine-tune the LoRA adapter.
  6. Consolidate the result.

Next Steps

We've made some great progress with fine-tuning our model, but there's still work to be done to get the best results.


Here are the specific next steps we need to take:

  1. Switch to a model with a more permissive license, as the current one is causing issues.
  2. Find a way to let the model install packages on its own, which will likely mean writing a custom tool for Langchain, so the training dataset doesn't fill up with failed installation attempts.
  3. Clean the dataset a bit more before fine-tuning.

By taking these next steps, we can improve the quality of our fine-tuned model and get better results.

SQL Generation

SQL generation is another task that benefits from fine-tuning. Here, the goal is to generate SQL queries from your data schema, which can then be used for tasks such as data analysis and machine learning model training.

The generated SQL queries are based on the schema of your data, which is automatically detected by the tool. This means you don't have to manually write SQL queries, saving you time and effort.

For example, if you have a table with columns for user ID, name, and email, the tool can generate a SQL query to retrieve all users with a specific email address.
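As a toy illustration of that users example (hypothetical schema and query, not the output of any particular tool):

```python
import sqlite3

# Hypothetical schema: users(id, name, email)
query = "SELECT id, name, email FROM users WHERE email = ?"

# Run the generated query against an example database
conn = sqlite3.connect("example.db")
rows = conn.execute(query, ("alice@example.com",)).fetchall()
```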

Evaluation


To evaluate the performance of your finetuned model, you need to understand its results on both train and test data. This is crucial for identifying potential areas for improvement.

Metrics such as accuracy, precision, recall, and F1 score are logged during training to TensorBoard, allowing you to inspect the model's progress and performance over time.

Starting TensorBoard is as simple as running `tensorboard --logdir ./logs` in your terminal, which provides a graphical representation of the model's learning curve and metric improvements.

This helps you identify potential overfitting or underperformance early in the training process, saving you time and effort in the long run.

For more detailed insights, you can use Renumics' open-source tool, Spotlight, to explore and visualize the predictions alongside the data, helping you identify patterns, potential biases, and misclassifications at the level of single data points.

To get started with Spotlight, you can install it and load the ESC50 dataset with audio embeddings and model predictions for interactive exploration with just one line of code.
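A sketch of that one-liner, assuming a pandas DataFrame `df` that already holds the audio, embeddings, and model predictions:

```python
from renumics import spotlight

# Launches an interactive browser view of the enriched dataset
spotlight.show(df)
```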


Frequently Asked Questions

What does AST stand for in audio?

AST stands for Audio Spectrogram Transformer, a model that analyzes audio data. It's a cutting-edge technology that captures long-range context in audio.

How to finetune an AI?

To fine-tune an AI model, start by selecting a pre-trained model and preparing your sample data, then iterate on the model, refining it until it suits your specific task.

How to use audio spectrogram transformer?

To use the Audio Spectrogram Transformer, normalize your audio input to have a mean of 0 and standard deviation of 0.5, which can be handled by the ASTFeatureExtractor. This extractor uses the AudioSet mean and standard deviation by default.
