Published Date: December 20, 2022
A machine learning model is the output of a machine learning algorithm. Machine learning algorithms take in data, process it, and produce a machine learning model, which is a representation of what the algorithm learned from that data. This model can then be applied to future problems. Machine learning models are “trained” on data to find patterns in a dataset or make decisions based on that data. When a machine learning model is tasked with predicting the price of a house based on a set of information about it, for example, it applies the patterns it learned during the training process to that new information. The overall goal of any machine learning model is to give an organization the ability to extract information from its data (or from external data) that would otherwise go undiscovered.
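As a brief, hedged illustration of that house-price example, here is a minimal sketch using scikit-learn’s linear regression. The library choice, feature names (square footage, bedrooms, age), and prices are all hypothetical stand-ins, not a prescription for how such a model must be built:

```python
# Minimal sketch of the house-price example above, using scikit-learn.
# The features (square feet, bedrooms, age in years) and prices are hypothetical.
from sklearn.linear_model import LinearRegression

# Training data: each row is [square_feet, bedrooms, age_in_years]
X_train = [[1400, 3, 20], [2000, 4, 5], [900, 2, 40], [1700, 3, 12]]
y_train = [250_000, 410_000, 150_000, 320_000]  # known sale prices

model = LinearRegression().fit(X_train, y_train)  # training produces the model

# The trained model can now estimate a price for a house it has never seen.
predicted_price = model.predict([[1600, 3, 15]])
print(round(predicted_price[0]))
```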
While there are two primary types of machine learning models — supervised and unsupervised — there are literally hundreds of different sub-types of models and special types of machine learning systems that serve different functions and are ideally suited to different types of data. Models can be unique or customized, and they can be pre-trained or trained by the user. Improving the accuracy and performance of a model over time is a major goal of machine learning scientists.
In this article we’ll examine the many types of machine learning models, investigate the various use cases for which each is best suited, and talk about the basics of creating and training a machine learning model.
What is machine learning?
Machine learning (ML) is a subset of artificial intelligence that allows computer algorithms to approximate the way that humans learn, with the goal of improving accuracy over time. Machine learning algorithms are tasked with building a machine learning model based on training data. After the machine learning model is trained, it can then be used to make predictions about additional data points it is given — without human intervention.
Machine learning has evolved dramatically since it was first conceived in 1959 by Arthur Samuel. Growing from its computer science roots in simple pattern recognition, today’s ML tools are sophisticated platforms that enable everything from the detection of fraudulent credit card transactions to cars that can drive themselves. Modern ML tools have been used to master games like chess and checkers, which they do by playing millions and millions of games to find the optimal strategy for any given situation.
What are machine learning model use cases?
Machine learning has countless use cases, and more are emerging every day. Some of the major ways in which machine learning models are now being used in the real world include:
- Automated stock trading: ML models learn from billions of historical data points and economic trends to predict stock price movement and make purchases or sales ahead of expected market shifts.
- Cancer analysis and diagnosis: ML models are trained on photographs and x-rays to determine whether a mole or x-rayed mass is likely to be cancerous.
- Fraud detection: By analyzing millions of financial transactions, machine learning models help stop fraudulent credit card transactions before they are processed.
- IT anomaly detection: An ML model is trained on data representing malicious network attacks or known vulnerabilities so that it can notify IT administrators when an unusual event is occurring.
- Online chatbots: Asking for help from an online chatbot – or a phone-based voice response system – involves an ML model that was trained to look for keywords that suggest optimal responses.
- Product recommendations: Ever watch a movie or buy something from an online store, then have a related product recommended to you? This is a machine learning model at work as it endeavors to understand your personal preferences.
- Spam filtering: Feed thousands of known good and known spam emails into an algorithm, and the resulting ML model determines whether any given message received thereafter should make it to your inbox (a toy version of this appears in the sketch after this list).
- Self-driving vehicles: Autonomous driving would not be possible without the real-time decision-making power of a well-trained machine learning model.
- Voice recognition: When you speak to Alexa or Siri, the assistant uses machine learning to figure out exactly what you’re saying.
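To make the spam-filtering example above concrete, here is a toy sketch using a Naive Bayes classifier from scikit-learn. The algorithm and library choice are illustrative assumptions; a real filter would be trained on thousands or millions of labeled messages:

```python
# Toy spam-filter sketch: a Naive Bayes classifier trained on a handful of
# labeled messages. Real systems use far larger labeled corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now", "Lowest price on meds, click here",
    "Lunch at noon tomorrow?", "Here are the meeting notes",
]
labels = ["spam", "spam", "ham", "ham"]  # known spam and known good ("ham")

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Click here to claim your free prize"]))  # likely ['spam']
```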
How does deep learning compare to machine learning?
Deep learning is a subset of machine learning in which a neural network learns from the data it is given, then uses those learnings to improve its own decision-making process without human intervention. The “deep” in deep learning refers to the depth of the neural network required to make this possible. Any neural network with more than three layers of algorithmic neurons is considered a deep learning model. As with a simpler neural network, deep learning is designed to mimic the brain, only on a more sophisticated and complex level.
Unlike supervised machine learning, deep learning does not require highly structured data and can take in (and output) unstructured data, though the datasets it learns from generally need to be much larger in size and scale. A deep learning algorithm can train itself to resolve a problem on its own, though doing so takes far more time and computing power than traditional machine learning techniques. While the output of a traditional machine learning system is generally an answer to a question (“this picture is of an apple” or “this transaction is not fraudulent”), a deep learning use case often involves a more open-ended, generative result, such as an artificial but photorealistic picture of a person’s face, or an artificial voice patterned after someone else’s. When Val Kilmer’s voice was recreated for Top Gun: Maverick, deep learning technologies were used.
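As a rough illustration only, the sketch below builds a small network with several hidden layers using scikit-learn’s MLPClassifier on a toy dataset. This is an assumption-laden stand-in: production deep learning typically relies on frameworks such as TensorFlow or PyTorch and far larger, often unstructured, datasets:

```python
# Illustrative sketch of a small "deep" network (several hidden layers)
# trained on a two-dimensional toy dataset.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Four hidden layers puts this past the three-layer threshold mentioned above.
deep_net = MLPClassifier(hidden_layer_sizes=(32, 32, 32, 32),
                         max_iter=2000, random_state=0)
deep_net.fit(X, y)
print(deep_net.score(X, y))  # accuracy on the toy training data
```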
What is the difference between a model and an algorithm?
In machine learning and data science, the terms model and algorithm are often used interchangeably, but they represent different aspects of the ML discipline. As in traditional development, an ML algorithm is software that is run against data. When you write code that explains how to separate photos into categories, that code is an algorithm. Some machine learning algorithms include:
- Random forest
- Linear regression (where we try to predict one output variable using one or more input variables)
- Logistic regression
- Nearest neighbor
- Gradient Boosting algorithms (XGBoost or Light GBM, for example)
- Support vector machines (SVM)
- Decision trees
- Neural networks (often built with frameworks such as Google’s TensorFlow)
The results of an algorithm represent the machine learning model. The model contains the collected learnings from the training data that is given to it, which can then be applied to future problems. When a new photo is fed to the system, the trained ML model makes a prediction about what it is (say, an apple or an orange). At this point, the model is abstracted from the algorithm that was used to create it.
The difference between these two can be confusing because both algorithms and models are computer programs in the broadest sense, but an ML model represents what the algorithm learned from the training data it was run against.
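A short sketch can make the distinction concrete. Using scikit-learn purely as an illustrative example, the decision tree class plays the role of the algorithm, while the fitted estimator produced by training is the model, which can be saved and reused on its own:

```python
# Sketch of the algorithm-versus-model distinction using scikit-learn.
import pickle
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

algorithm = DecisionTreeClassifier(max_depth=3)  # the algorithm, configured but untrained
model = algorithm.fit(X, y)                      # training yields the fitted model

# The trained model is a standalone artifact: it can be serialized, shipped,
# and used for predictions without re-running the training step.
saved = pickle.dumps(model)
restored = pickle.loads(saved)
print(restored.predict(X[:3]))
```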
What are the different types of machine learning models?
ML models and learning techniques can be broadly categorized into one of four types:
- Supervised learning
- Unsupervised learning
- Semi-supervised learning
- Reinforcement learning
We’ll cover each of these in greater detail in the next four sections.
Within those broad ML model types, there are dozens, if not hundreds, of more specific machine learning systems, with names like Naïve Bayes and K-Means. These can in turn be placed in various sub-categories. Some of the most common include:
- Binary classification model (supervised): This is a simple classifier model that predicts an outcome with two choices. Spam filters, machine failure prediction systems, and fraud detection tools are all good examples of this type of model, with the model tasked with making a yes or no decision.
- Multiclass classification model (supervised): This model is more complex, tasked with choosing from more than two possibilities. For example, a recommendation system must choose from thousands or millions of products to find the ideal one for the customer at a particular moment in time. Image recognition systems that must determine what is pictured among a large library of possibilities are another good example.
- Regression model (supervised): Regression models are tasked with making numerical analyses, such as predicting prices or temperatures based on past trends and other available information.
- Clustering model (unsupervised): This type of model takes a large dataset and categorizes or segments it into several related groups with similar characteristics. Clusters are used to refine a large dataset into more meaningful subsets.
- Dimensionality reduction model (unsupervised): Given a dataset with a large number of variables, this model works to determine which variables are relevant to the problem at hand and which are not. Principal component analysis (PCA) is an example of this type of model; see the sketch after this list.
- Artificial neural network model (supervised, unsupervised, or reinforcement): The neural network is inspired by the brain and is designed to uncover hidden relationships among data in a dataset by breaking a problem down into smaller and smaller pieces. Handwriting recognition is a commonly cited example of neural networks in action, where a written character is split into hundreds of pixels to determine what it most likely represents.
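As a concrete illustration of the dimensionality reduction entry above, here is a minimal PCA sketch. scikit-learn and its built-in digits dataset are used here only because they are convenient, not because they are required:

```python
# Dimensionality reduction with principal component analysis (PCA):
# 64-dimensional digit images are reduced to two components.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)      # 1,797 samples x 64 features
pca = PCA(n_components=2).fit(X)

X_reduced = pca.transform(X)             # now 1,797 samples x 2 features
print(X_reduced.shape)
print(pca.explained_variance_ratio_)     # share of variance each component keeps
```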

What are supervised machine learning models?
Supervised machine learning models use tagged or labeled data as part of their training process. These labels are known outputs for a certain function and are generated from prior experience. If you are training an algorithm to recognize images of cars, it is most effective if the input data consists of photographs labeled by subject matter (namely, car or not car). The data labels do the so-called supervising: after training on the labeled data, the model can then be tasked with looking at new photographs and deciding whether or not they show cars.
This is called classification (deciding which group an input should be assigned to), and it is one of the primary use cases for supervised machine learning. The other major use case is regression (predicting a numerical outcome based on certain inputs), which often involves financial analysis or other predictive problems with complex math, such as weather forecasting.
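A minimal supervised-learning sketch, assuming hypothetical features (weight and top speed) and a car/not-car labeling scheme, might look like the following. The k-nearest neighbors classifier from scikit-learn is used purely for illustration; the labels list is what does the “supervising”:

```python
# Minimal supervised-learning sketch: the labels are the supervision.
# The features (weight in kg, top speed in km/h) are hypothetical stand-ins.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[1500, 180], [1200, 160], [90, 25], [110, 30], [1800, 200], [75, 20]]
y_train = ["car", "car", "not car", "not car", "car", "not car"]  # the labels

classifier = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# After training on labeled examples, the model can categorize new inputs.
print(classifier.predict([[1400, 170], [95, 28]]))  # likely ['car', 'not car']
```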
What are unsupervised machine learning models?
In contrast to supervised machine learning models, unsupervised machine learning models do not use labeled data. The model is fed a large amount of data and then, essentially, is asked to figure out what to do with it. This may involve clustering the data into groups or determining how certain values within the data relate to others. Unsupervised machine learning is often used for recommendation systems or other “soft” applications, like analyzing the effectiveness of a marketing program.
Unsupervised learning is inappropriate for many applications; you wouldn’t get ideal results training a model to find all the cars in a random collection of photographs without telling the model what a car looks like. Rather, it is best suited for applications where a user doesn’t fully know what they’re looking for. However, since unlabeled data is easier to come by and doesn’t require the labeling effort, which can be substantial for large and complex datasets, it can be the catalyst for some compelling and unique use cases.
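A minimal unsupervised sketch might look like the following, where k-means clustering segments unlabeled, synthetically generated customer-like data into groups. The feature framing (annual spend, visits per month) is a hypothetical assumption:

```python
# Unsupervised sketch: k-means groups unlabeled data into segments without
# being told what the segments mean.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
low_spend  = rng.normal(loc=[200, 2],   scale=[50, 1],  size=(50, 2))
high_spend = rng.normal(loc=[2000, 12], scale=[300, 2], size=(50, 2))
X = np.vstack([low_spend, high_spend])   # no labels anywhere

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)           # the two segments the model discovered
```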
What are semi-supervised machine learning models?
Semi-supervised machine learning models emerged as a response to the significant cost involved with labeling data for a supervised machine learning model. In a semi-supervised machine learning model, some of the data is labeled and some is not. (In fact, most of the data is usually unlabeled.) The idea is to use the already labeled data to automate the labeling of the remaining, unlabeled data by learning the rules from that previously labeled portion of the data. The resulting semi-supervised machine learning dataset may be less perfect than a traditionally labeled supervised machine learning dataset, but for situations where 100 percent accuracy is not critical – such as speech recognition, where some level of error is expected — the concept can work well while saving a substantial amount of expense.
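As an illustrative sketch of this idea, scikit-learn’s SelfTrainingClassifier accepts a dataset in which unlabeled samples are marked with -1 and uses the labeled minority to progressively label the rest during training. The dataset and base classifier below are arbitrary choices for demonstration:

```python
# Semi-supervised sketch: most labels are hidden (-1 means "unlabeled"),
# and the model learns from the labeled minority to label the rest.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

rng = np.random.default_rng(0)
y_partial = np.copy(y)
mask_unlabeled = rng.random(len(y)) < 0.8   # hide roughly 80% of the labels
y_partial[mask_unlabeled] = -1              # -1 marks "unlabeled" for scikit-learn

model = SelfTrainingClassifier(SVC(probability=True))
model.fit(X, y_partial)
print(model.score(X, y))                    # accuracy against the true labels
```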
What are reinforcement learning models?
Reinforcement learning models are a special case of machine learning, outside of supervised and unsupervised learning, that involves rewarding the ML system for making good decisions and punishing it for making bad ones. The goal is for the model to learn from its mistakes and make increasingly better decisions over time. Reinforcement learning has been heavily studied, but most of its applications remain theoretical or research-driven rather than practical in nature. For example, reinforcement learning has been used to teach machines how to master games like checkers and backgammon, but it isn’t wholly appropriate for most industrial fields because an organization can rarely afford the mountain of mistakes the ML system necessarily makes while it slowly learns which behaviors are good and which are not. Many attempts to train reinforcement learning models have produced unexpected results, wherein the model finds workarounds that maximize intermediate rewards while failing to achieve the overall goal it was set. As a discipline, training these types of models can be considerably more complex than training other types of machine learning models.
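For a flavor of how the reward-and-punish loop looks in code, here is a toy tabular Q-learning sketch on a hypothetical five-cell corridor, where the agent earns a reward only for reaching the final cell. It is a minimal teaching example, not a production reinforcement learning setup:

```python
# Toy Q-learning sketch: an agent in a five-cell corridor learns to move right.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # actions: move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:
            a = random.randrange(2)               # explore a random action
        else:
            a = Q[state].index(max(Q[state]))     # exploit the best-known action
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: reward now plus discounted best future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([q.index(max(q)) for q in Q[:-1]])  # learned policy: typically 1 ("right") everywhere
```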
How do you build a machine learning model?
A machine learning model is ideally built by following these steps and answering some key questions:
- Analyze the problem: Machine learning begins with an analysis of the business problem and a determination of whether ML is the right way to solve it. Is machine learning an appropriate solution for the problem at all? Do you have a sufficient grasp on how to build that solution? Don’t be afraid to access a tutorial to help you get started.
- Analyze the data: Is training data available? How much? Is it labeled or unlabeled? Structured or unstructured? Is it of high enough quality to ensure the ML model can learn appropriately from it?
- Collect and clean the data: This step involves ensuring labels are appropriate, data is free from bias, and otherwise fits the parameters of the problem.
- Select an algorithm and train the model: Many standardized algorithms are available for ML problems; training a model often comes down to selecting the right one and testing it against your training and validation data (see the end-to-end sketch after this list). Some trial and error can be involved here, particularly if you are less experienced with ML development.
- Test the model on new data: Validate the model with another test data set. Is it performing as expected? Is accuracy within expected tolerance levels? At this stage it’s also important to test for bias and ensure the model is not skewing results.
- Put the model into production: With the model validated, it’s time to start using it. The work isn’t done, however. Models must continually be tested and benchmarked to mitigate issues such as overfitting or bias and to confirm that performance remains as expected.
- Adjust as needed: Has incoming data shifted since you trained the model? It may need to be retrained on new data in order to ensure accuracy. Have new algorithms that improve accuracy or performance emerged? These should be investigated and, if appropriate, put into effect by developing a new model and starting the process again.
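Here is a compact end-to-end sketch of the training-and-validation portion of that workflow. The library, dataset, and algorithm below are arbitrary illustrative choices, not a recommended recipe:

```python
# End-to-end sketch: split the data, train a candidate algorithm, check it on
# held-out validation data, then confirm on a final test set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set, then carve a validation set out of what remains.
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:      ", accuracy_score(y_test, model.predict(X_test)))
```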
How will the machine learning market continue to grow?
Machine learning has become a massive market with myriad applications, many of which are embedded in our daily lives. A $15 billion market in 2021, it’s expected to hit $209 billion by 2029 as ML continues to find its way into nearly all aspects of technology, business, and consumer life. New technologies related to machine learning are driving much of this growth. Automated machine learning (AutoML) is making it easier for users to train ML models without an advanced degree, while the increase in the amount of high-quality data on a global scale is making models more accurate and, as a result, more useful in the field.
What industry will benefit the most from machine learning models?
Machine learning models are disrupting an array of industries ranging from entertainment to manufacturing, but as the market report cited above notes, the healthcare industry is likely to benefit the most from ML, as it has shown incredible utility with health metrics: diagnosing diseases, examining medical imaging, developing therapies, and helping to develop appropriate responses to epidemic and pandemic situations.
Today, consumers use ML models without realizing it, likely dozens of times each day. This is in part because ML is so varied and versatile that it can be used just as effectively by your doctor as it can by your smartphone. As ML becomes easier to develop and more accessible to the masses, the applications for the technology are bound to increase, likely in ways we have yet to even anticipate.
