AutoML: A Guide to Automated Machine Learning

Posted Nov 8, 2024

Automated machine learning, or AutoML, is a game-changer for data scientists and non-experts alike. It can automate the entire machine learning workflow, from data preparation to model selection and hyperparameter tuning.

With AutoML, you can reduce the time and effort required to build and deploy machine learning models. Industry studies claim it can accelerate the development process by as much as 90%, though results vary by project.

By automating the machine learning process, you can focus on higher-level tasks, such as data analysis and model interpretation. This is especially useful for non-experts who may not have the technical expertise to build and train machine learning models from scratch.

What Is AutoML?

AutoML, short for automated machine learning, is a game-changer in the world of machine learning. It refers to the automated end-to-end process of applying machine learning in real and practical scenarios.

AutoML focuses on the two endpoints of the workflow, data collection and prediction, and can automate any of the intermediate steps. The result is a model that has already been optimized and is ready for prediction, saving time and effort.

Currently, AutoML mainly falls into three categories: parameter tuning, non-deep learning, and deep learning/neural networks. AutoML for non-deep learning is applied in data pre-processing, automated feature analysis, automated feature detection, automated feature selection, and automated model selection.

AutoML aims to automate as many steps as possible in ML pipelines and retain good model performance with minimum manpower. This is especially important for enterprises that struggle to implement ML model deployment.

The three major advantages of AutoML are: it improves efficiency by automatically running repetitive tasks, it helps avoid potential errors caused by manual work, and it's a big step toward the democratization of machine learning, allowing everyone to use ML features.

Here are the three categories of AutoML:

  • Parameter tuning
  • Non-deep learning (e.g. AutoSKlearn)
  • Deep learning/neural networks (e.g. NAS, ENAS, Auto-Keras)

AutoML Tools

AutoML tools are designed to automate the machine learning process, allowing you to create models with minimal coding. They're perfect for complex tasks like identifying specific actions in soccer games, where there's too much variation to be captured by simple rules.

TPOT is a tree-based pipeline optimization tool that uses genetic algorithms to optimize machine learning pipelines, exploring thousands of possible pipelines to find the best fit for your data. It's built on top of scikit-learn and uses its own regressor and classifier methods.
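
To make that concrete, here's a minimal sketch of a TPOT run; the dataset and the generation/population settings are illustrative, not recommendations.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each generation evolves a population of candidate scikit-learn pipelines.
tpot = TPOTClassifier(generations=5, population_size=20,
                      random_state=42, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))

# Export the best pipeline found as plain scikit-learn code.
tpot.export("best_pipeline.py")
```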

H2O AutoML is an open-source platform that automates complex data science and machine learning tasks, including feature engineering, model validation, and model deployment. It searches over feature engineering methods and model hyper-parameters to optimize pipelines.
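
A minimal H2O AutoML sketch looks like the following; the file path and the response column name are placeholders you'd replace with your own.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")  # placeholder path

y = "response"                        # placeholder target column
x = [c for c in train.columns if c != y]

aml = H2OAutoML(max_models=20, seed=1)
aml.train(x=x, y=y, training_frame=train)

# Models from the run, ranked by the default metric for the problem type.
print(aml.leaderboard.head())
```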

Common Frameworks (2019)

AutoML has a history stretching back many years, and as of May 2019 a number of excellent frameworks had emerged. This article only briefly describes several common ones; subsequent articles will cover their use and performance in more detail.

Auto-Sklearn

Auto-Sklearn is an automated machine learning software package built on scikit-learn. It frees a machine learning user from algorithm selection and hyper-parameter tuning.

Auto-Sklearn includes feature engineering methods such as one-hot encoding, numeric feature standardization, and PCA. The model uses scikit-learn estimators to handle classification and regression problems.

Auto-Sklearn creates a pipeline and uses Bayesian search to optimize it. On top of Bayesian hyperparameter optimization, it adds two components: meta-learning to warm-start the search and ensemble construction from the models evaluated along the way.

Auto-Sklearn performs well on small and medium-sized datasets, but it cannot produce the state-of-the-art deep learning systems needed for very large datasets.

Here are some key features of Auto-Sklearn:

  • Automated machine learning pipeline creation
  • Bayesian search for pipeline optimization
  • Hyperparameter tuning via Bayesian optimization and meta-learning
  • Support for classification and regression problems

You can find the source code for Auto-Sklearn on GitHub: https://github.com/automl/auto-sklearn.
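
As a quick orientation, here's a minimal Auto-Sklearn sketch; the dataset and time budgets are illustrative.

```python
import autosklearn.classification
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,  # total search budget, in seconds
    per_run_time_limit=30,        # budget per candidate pipeline
)
automl.fit(X_train, y_train)
print(accuracy_score(y_test, automl.predict(X_test)))
```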

AutoML tools are designed to make machine learning more accessible to everyone. They can help you quickly meet your needs for data-driven operations.

How It Works

Automated machine learning, or AutoML, works by creating many pipelines in parallel that try different algorithms and parameters for you. These pipelines iterate through ML algorithms paired with feature selections, producing a model with a training score after each iteration.

The better the score for the metric you want to optimize for, the better the model is considered to "fit" your data. Azure Machine Learning stops the training process once it hits the exit criteria defined in the experiment.

To design and run your automated ML training experiments, you can follow these steps:

  1. Identify the ML problem to be solved: classification, forecasting, regression, computer vision, or NLP.
  2. Choose between a code-first experience, using the Azure Machine Learning SDK v2 or CLI v2, and a no-code studio web experience.
  3. Specify the source of the labeled training data.
  4. Configure the automated machine learning parameters, including the number of iterations, hyperparameter settings, and the metrics to evaluate.
  5. Submit the training job.
  6. Review the results.
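
For the code-first route, here is a hedged sketch of submitting a classification job with the Azure Machine Learning SDK v2; the subscription, workspace, compute, data asset, and column names are all placeholders.

```python
from azure.ai.ml import Input, MLClient, automl
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",     # placeholder
    resource_group_name="<resource-group>",  # placeholder
    workspace_name="<workspace>",            # placeholder
)

classification_job = automl.classification(
    compute="cpu-cluster",                   # placeholder compute cluster
    experiment_name="automl-demo",
    training_data=Input(type="mltable", path="azureml:train-data:1"),
    target_column_name="label",              # placeholder
    primary_metric="accuracy",
    n_cross_validations=5,
)
classification_job.set_limits(timeout_minutes=60, max_trials=20)

returned_job = ml_client.jobs.create_or_update(classification_job)
```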

Auto-Keras, an open-source library, simplifies machine learning by automatically searching for architecture and hyperparameters of deep learning models. It uses automatic Neural Architecture Search (NAS) algorithms to adjust models.
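
In code, an Auto-Keras run can be as short as the sketch below; max_trials and epochs are illustrative, and MNIST stands in for your own image data.

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# ImageClassifier runs a neural architecture search over candidate models.
clf = ak.ImageClassifier(max_trials=3)
clf.fit(x_train, y_train, epochs=10)
print(clf.evaluate(x_test, y_test))
```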

The training job produces a Python serialized object (.pkl file) containing the model and data preprocessing information. You can also inspect the logged job information to review the metrics collected during the job.

Creating a Custom Model

Your dataset is the foundation of a custom model, and it's essential to understand how Vertex AI uses it. By default, Vertex AI splits your dataset into 80% for training, 10% for validating, and 10% for testing.

The training set is where your model learns the parameters. It's the data your model "sees" during training, and it's used to learn the weights of the connections between nodes of the neural network.

The validation set is used to tune the model's hyperparameters. This is crucial because if you use the training set to tune hyperparameters, your model will likely end up overly focused on your training data and struggle to generalize.

The test set is not involved in the training process. It's an entirely new challenge for your model, and its performance on the test set gives you a good idea of how your model will perform on real-world data.

You can manually split your dataset if you want more control over the process. This is a good choice if you have specific examples that you're sure you want included in a certain part of your model training lifecycle.
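
With the Vertex AI Python SDK (google-cloud-aiplatform), the fraction split can be set explicitly, as in this hedged sketch; the project, dataset resource name, and target column are placeholders.

```python
from google.cloud import aiplatform

aiplatform.init(project="<project-id>", location="us-central1")  # placeholders

dataset = aiplatform.TabularDataset("projects/.../datasets/...")  # placeholder

job = aiplatform.AutoMLTabularTrainingJob(
    display_name="custom-model",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="label",          # placeholder
    training_fraction_split=0.8,    # mirrors the default 80/10/10 split
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
)
```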

To train a model, you'll need to select feature columns. Try to choose as many as possible, but review each to make sure it's appropriate for training. Be aware of feature columns that will create noise, like randomly assigned identifier columns.

Here are some key considerations for feature selection:

  • Don't select feature columns that will create noise, like randomly assigned identifier columns with a unique value for each row.
  • Make sure you understand each feature column and its values.
  • If you're creating multiple models from one dataset, remove target columns that aren't part of the current prediction problem.
  • Recall the fairness principles: Are you training your model with a feature that could lead to biased or unfair decision-making for marginalized groups?

Once your dataset is imported, you can create a machine learning model. Vertex AI will generate a reliable model with default parameters, but you may need to adjust parameters depending on your data quality and the outcome you're looking for.

For example, if you're working with video data, you'll need to consider the prediction type, frame rate, and resolution. The default pipeline uses 256x256 for regular training or 512x512 if there are too many small objects in your data.

Available Analysis Options

You can perform various types of analysis in Vertex AI to evaluate your model's performance.

Model output is one of the key things you can analyze to understand how well your model is doing.

The score threshold determines when a raw confidence score is converted into a positive prediction.

True positives, true negatives, false positives, and false negatives break down exactly where your model's predictions are right and wrong.

Precision and recall are closely related metrics that typically trade off against each other.

Average precision summarizes that trade-off as the area under the precision-recall curve.

Here are some of the key analysis options available in Vertex AI:

  • The model output
  • The score threshold
  • True positives, true negatives, false positives, and false negatives
  • Precision and recall
  • Average precision

Choosing the Right Tool

Vertex AI is the right tool for complex problems that require generalization, such as identifying specific actions in soccer games or categorizing customer comments.

Machine learning can solve these problems by learning from examples, rather than relying on a sequence of specific rules that can expand exponentially.

AutoML automates repetitive tasks like pipeline creation and hyper-parameter tuning, allowing data scientists to focus on business problems and accelerating ML development.

To choose the right tool, start with your problem and ask yourself what outcome you want to achieve, what categories or objects you need to recognize, and whether humans can recognize those categories.

For example, if you're trying to detect action moments in a video, use the action recognition objective, while classification objective is suitable for categorizing TV shots.

Here's a summary of the model objectives covered above:

  • Action recognition: detect specific action moments within a video.
  • Classification: categorize shots or segments of video.

Preprocessing and Feature Engineering

Preprocessing and feature engineering are crucial steps in automating machine learning (AutoML). Feature engineering is the process of using domain knowledge to create features that help ML algorithms learn better.

In Azure Machine Learning, scaling and normalization techniques are applied to facilitate feature engineering, which is collectively referred to as featurization. Featurization can be applied automatically or customized based on your data.

Featurization steps, such as feature normalization, handling missing data, and converting text to numeric, become part of the underlying model. This means that when using the model for predictions, the same featurization steps applied during training are applied to your input data automatically.

Additional feature engineering techniques, like encoding and transforms, are also available for customization. To enable this setting, you can use the Azure Machine Learning studio or Python SDK.

Here are some ways to customize featurization:

  • Azure Machine Learning studio: Enable Automatic featurization in the View additional configuration section.
  • Python SDK: Specify featurization in your AutoML Job object.

In H2O, AutoML now has a preprocessing option with minimal support for automated target encoding of high-cardinality categorical variables: passing preprocessing=["target_encoding"] automatically tunes a Target Encoder model and applies it to columns that meet certain cardinality requirements.
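
As a sketch, enabling that option looks like this; the model budget and seed are illustrative, and training then proceeds as in any other H2O AutoML run.

```python
from h2o.automl import H2OAutoML

aml = H2OAutoML(
    max_models=10,
    preprocessing=["target_encoding"],  # auto-tunes a Target Encoder for
                                        # high-cardinality categorical columns
    seed=1,
)
# aml.train(x=x, y=y, training_frame=train)  # train as usual
```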

Gather Your Data

You need to start by identifying the data required to create your model. This data should be relevant to your use case.

Establish what data you need, and then consider whether your organization is already collecting it. You may be surprised to find that you're already collecting the relevant data.

If not, you can obtain it manually or outsource it to a third-party provider.

Include Labeled Examples in Each Category

Including labeled examples in each category is crucial for building a robust model. The bare minimum required is 100 image examples per category/label for classification.

The more high-quality examples you have, the better your model will be. Target at least 1000 examples per label.

Having a balanced distribution of examples is also essential. Distribute them equally across categories to avoid overfitting to a single label.

Imagine a model trained on mostly modern single-family homes, and you'll see why an unbalanced distribution can lead to poor results. Your model will learn to recognize the most common label and ignore the others.

It's not always possible to source an equal number of examples for each label. In those cases, follow this rule of thumb: the label with the fewest examples should have at least 10% as many examples as the label with the most.

Preprocessing

Preprocessing is a crucial step in preparing your data for machine learning models. It's the process of cleaning and transforming your data to make it more suitable for analysis.

Automated machine learning tools like Azure Machine Learning and H2O can help with preprocessing, but it's essential to understand the process to get the best results. In Azure Machine Learning, featurization is applied automatically, but can also be customized based on your data.

Featurization includes techniques like feature normalization, handling missing data, and converting text to numeric. These steps become part of the underlying model, so the same featurization applied during training is automatically applied to your input data at prediction time.

H2O's AutoML also has a preprocessing option with minimal support for automated Target Encoding of high cardinality categorical variables. This means that the tool can automatically tune a Target Encoder model and apply it to columns that meet certain cardinality requirements.

To clean up missing and incomplete data, it's essential to review and improve your data quality before using it for training. Check your data for missing values and correct them if possible, or leave the value blank if the column is set to be nullable.

Here are some general tips for preprocessing:

  • Check your data for missing values and correct them if possible.
  • For forecasting, check that the interval between training rows is consistent.
  • Clean your data by correcting or deleting data errors or noise.

By following these steps, you can ensure that your data is clean, consistent, and ready for analysis. Remember, the more missing values, the less useful your data will be for training a machine learning model.
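
A minimal pandas sketch of those checks follows; the file path and the median fill strategy are illustrative choices, not requirements.

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # placeholder path

# Count missing values per column to gauge data quality.
print(df.isna().sum())

# Correct what you can: fill numeric gaps with the column median,
# then drop any rows that remain incomplete.
df = df.fillna(df.median(numeric_only=True)).dropna()
```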

Model Training and Evaluation

Model training is a crucial step in the automated machine learning (AutoML) process. With AutoML, you can specify the training data and let the algorithm handle the rest.

Your data is divided into training, validation, and testing sets. If you don't specify the splits, AutoML will automatically use 80% of your data for training, 10% for validating, and 10% for testing. This is a good default, but you can manually split your data if you want more control.

The validation set is used to tune the model's hyperparameters, which are variables that specify the model's structure. The test set, on the other hand, is not involved in the training process and is used to evaluate the model's performance on new, unseen data.

To evaluate your model's performance, you can use metrics such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Root Mean Squared Log Error (RMSLE). These metrics give you an idea of how well your model is performing and can help you identify areas for improvement.

By understanding these metrics, described in more detail below, and using them to evaluate your model's performance, you can fine-tune your AutoML process and create more accurate models.

Training

Training is a crucial step in the machine learning process, and it's essential to get it right. You'll need to decide how to split your dataset into training, validation, and test sets, with the vast majority of your data in the training set.

A good rule of thumb is to use 80% of your data for training, 10% for validation, and 10% for testing. This will give you a reliable model that generalizes well to new data. You can also manually split your data if you want more control over the process.

The validation set is used to tune the model's hyperparameters, which are variables that specify the model's structure. If you use the training set for this, your model may become overly focused on the training data and struggle to generalize to new examples.

The test set, on the other hand, is not involved in the training process at all. It's used to evaluate the final model's performance on new, unseen data. This will give you a good idea of how your model will perform in real-world scenarios.
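
If you're splitting manually, an 80/10/10 split takes two steps with scikit-learn, as in this minimal sketch; X and y are assumed to be your features and labels.

```python
from sklearn.model_selection import train_test_split

# First carve off 20% of the data for validation + test...
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.2, random_state=42)

# ...then split that holdout evenly into validation and test sets.
X_val, X_test, y_val, y_test = train_test_split(
    X_hold, y_hold, test_size=0.5, random_state=42)
```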

Here are some key considerations for training your model:

  • Use a diverse set of examples to train your model, including a variety of camera angles, day and night times, and player movements.
  • Try to provide a similar number of training examples for each class, aiming for a 1:10 ratio between the largest and smallest classes.
  • Avoid using feature columns that will create noise, such as randomly assigned identifiers.
  • Make sure you understand each feature column and its values.

By following these guidelines, you'll be well on your way to training a reliable and accurate machine learning model.

Grid Search Parameters

Grid Search Parameters can be a complex and time-consuming process, but AutoML simplifies it by performing a hyperparameter search over various H2O algorithms.

AutoML doesn't run a standard grid search for GLM, instead it builds a single model with lambda_search enabled and passes a list of alpha values to find the best alpha-lambda combination.

In AutoML's random grid search, hyperparameters are chosen randomly from a list of potential values, with some models having non-default values already set for certain hyperparameters.

The H2O documentation lists the hyperparameters and their potential values for the algorithms that are grid searched; Random Forest and Extremely Randomized Trees are not grid searched in the current version of AutoML.

AutoML returns only the model with the best alpha-lambda combination for GLM, rather than one model for each alpha-lambda combination.

Prediction

Prediction is a crucial step in the machine learning process, where your model generates outputs based on the inputs it receives.

With AutoML, calling the predict() function generates predictions from the leader model of the run. The order of the rows in the results is the same as the order in which the data was loaded, even if some rows fail.
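
In code, that's a one-liner; `aml` and `test` are assumed from the earlier H2O examples.

```python
preds = aml.predict(test)  # scores the leader model from the run
print(preds.head())
```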

Prediction outcomes can be categorized into four main types: true positive, false positive, true negative, and false negative. A true positive occurs when the model correctly predicts the positive class, while a false positive occurs when the model incorrectly predicts the positive class.

The main goal of classification models is to predict which categories new data fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection.

Here are the four prediction outcomes:

  • True positive: the model correctly predicts the positive class.
  • False positive: the model incorrectly predicts the positive class.
  • True negative: the model correctly predicts the negative class.
  • False negative: the model incorrectly predicts the negative class.

You can use batch prediction to make many prediction requests at once, which is asynchronous, meaning the model will wait until it processes all of the prediction requests before returning a JSON Lines file with prediction values.

Log

The log in AutoML provides valuable information about the training process. This includes events generated during training, which can be accessed through the event_log property.

You can access the event_log property using Python or R clients. This will give you an H2OFrame with the selected AutoML backend events.

The event_log property is a powerful tool for post-analysis. It contains data that could be useful for further investigation.

To get training and prediction times for each model, you can explore the extended leaderboard using the h2o.get_leaderboard() function. This is often easier than digging through the training_info dictionary.

The training_info dictionary exposes data useful for post-analysis, including various timings, but it's not always the most convenient way to get the information you need.
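
A short sketch of both routes, assuming `aml` from the earlier examples:

```python
import h2o

# Extended leaderboard with per-model training and prediction timings.
lb = h2o.automl.get_leaderboard(aml, extra_columns="ALL")
print(lb.head())

# Raw backend events, as an H2OFrame, for deeper post-analysis.
events = aml.event_log
```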

Classification Metrics

Classification metrics are crucial in evaluating the performance of a machine learning model. They help you understand how well your model is doing in making predictions.

A confidence score is a numeric assessment of the model's certainty that the predicted class is correct. This score determines when a prediction is converted into a yes or no decision.

If your score threshold is low, your model will run the risk of misclassification. This is because a low threshold will result in more false positives.

In a jacket binary classification model, predictions fall into four categories: true positive, false positive, true negative, and false negative. A true positive occurs when the model correctly predicts the positive class, while a false positive occurs when the model incorrectly predicts the positive class.

Precision and recall metrics are useful in understanding how well your model is capturing information and what it's leaving out. Precision is the fraction of positive predictions that were correct, while recall is the fraction of rows with the positive label that the model correctly predicted.

You may need to optimize for either precision or recall, depending on your use case. For example, if you're predicting customer purchases, you may want to prioritize recall to ensure that you don't miss any actual purchases.

Here are some common classification metrics:

  • AUC PR: The area under the precision-recall curve, ranging from zero to one, where a higher value indicates a higher-quality model.
  • AUC ROC: The area under the receiver operating characteristic curve, also ranging from zero to one, where a higher value indicates a higher-quality model.
  • Accuracy: The fraction of classification predictions produced by the model that were correct.
  • Log loss: The cross-entropy between the model predictions and the target values, ranging from zero to infinity, where a lower value indicates a higher-quality model.
  • F1 score: The harmonic mean of precision and recall, useful if you're looking for a balance between precision and recall and there's an uneven class distribution.
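
All of these metrics are easy to compute with scikit-learn, as in this minimal sketch; y_true and y_score are assumed to be NumPy arrays of labels and predicted positive-class probabilities, and 0.5 is an illustrative score threshold.

```python
from sklearn.metrics import (accuracy_score, average_precision_score,
                             f1_score, log_loss, precision_score,
                             recall_score, roc_auc_score)

y_pred = (y_score >= 0.5).astype(int)  # apply the score threshold

print("AUC PR:   ", average_precision_score(y_true, y_score))
print("AUC ROC:  ", roc_auc_score(y_true, y_score))
print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Log loss: ", log_loss(y_true, y_score))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))
```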

Top Machine Learning Frameworks

As of May 2019, many excellent AutoML frameworks had emerged, and knowing the surrounding ecosystem of machine learning frameworks is helpful for model training and evaluation.

TensorFlow, a popular open-source framework, has been around for a while and remains widely used for its flexibility. H2O offers a range of machine learning algorithms and is particularly useful for large-scale data analysis. PyTorch is a newer framework that has gained popularity for its dynamic computation graph and rapid prototyping capabilities.

Model Interpretation and Explainability

AutoML objects are fully supported through the H2O Model Explainability interface.

You can generate a large number of multi-model comparison and single model plots with a single call to h2o.explain().

This feature allows for automatic generation of plots, making it easier to understand and visualize your AutoML results.

To get the most out of explainability, it's essential to have relevant data. If your data is not relevant to the questions you're trying to answer, your model's performance will suffer. Ideally, your training examples are real-world data drawn from the same dataset you're planning to use the model to classify.

By using the H2O Model Explainability interface, you can gain a deeper understanding of your models' behavior and make more informed decisions. This is particularly useful when you need to compare multiple models or understand the performance of a single model.
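
A minimal sketch of those calls, assuming `aml` and a `test` H2OFrame from the earlier examples:

```python
# Multi-model comparison plots for the whole AutoML run.
aml.explain(test)

# Single-model plots for the best model on the leaderboard.
aml.leader.explain(test)
```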

Forecasting and Regression Metrics

Forecasting and regression metrics are crucial for evaluating the performance of your model. A smaller value in metrics like MAE (mean absolute error), RMSE (root mean squared error), and RMSLE (root mean squared logarithmic error) indicates a higher-quality model.

MAE measures the average magnitude of errors between target and predicted values. A smaller MAE value indicates a better model. For example, an MAE of 0 represents a perfect predictor.

RMSE is more sensitive to outliers than MAE, making it a better choice when large errors are a concern. A smaller RMSE value also indicates a better model.

RMSLE is a logarithmic version of RMSE, making it more sensitive to relative errors. It's a good choice when you want to emphasize underperformance over overperformance.

The observed quantile shows how far or close the model is to the target quantile. A smaller difference between the two values indicates a better model.

Here's a summary of common metrics used in forecasting and regression:

  • MAE (mean absolute error): the average magnitude of errors; smaller is better.
  • RMSE (root mean squared error): like MAE, but more sensitive to outliers; smaller is better.
  • RMSLE (root mean squared logarithmic error): emphasizes relative errors and underperformance; smaller is better.
  • Observed quantile: how close the model comes to the target quantile; a smaller gap is better.
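
These error metrics are straightforward to compute with scikit-learn, as in this minimal sketch; y_true and y_pred are assumed arrays of targets and predictions.

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             mean_squared_log_error)

mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))  # values must be >= 0

print(f"MAE {mae:.3f}  RMSE {rmse:.3f}  RMSLE {rmsle:.3f}")
```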

Precision and Recall

Precision and recall are two essential metrics that help you understand how well your model is capturing information and what it's leaving out. Precision is the fraction of the positive predictions that were correct.

Imagine you're building software that detects sensitive information in a video and blurs it out. In this use case, it's critical to optimize for recall to ensure that the model finds all relevant cases.

Recall is the fraction of actual positive examples that the model correctly identified. A model optimized for recall is more likely to label marginally relevant examples, but also likelier to label incorrect ones.

You may need to optimize for either precision or recall, depending on your use case. Here are some examples:

A false positive identifies something that doesn't need to be censored, but gets censored anyway. This might be annoying, but not detrimental. A false negative fails to identify information that needs to be censored, like a credit card number. This would release private information and is the worst case scenario.

Keith Marchal

Senior Writer

Keith Marchal is a passionate writer who has been sharing his thoughts and experiences on his personal blog for more than a decade. He is known for his engaging storytelling style and insightful commentary on a wide range of topics, including travel, food, technology, and culture. With a keen eye for detail and a deep appreciation for the power of words, Keith's writing has captivated readers all around the world.
