Published Date: November 29, 2022
Artificial intelligence (AI) is machines’ ability to observe, think and react like human beings. It’s grounded in the idea that human intelligence can be broken down into precise abilities, which computers can be programmed to mimic. AI is an umbrella term that encompasses a wide range of concepts and technologies, including machine learning (ML).
AI consists of many subfields that use techniques to mimic specific behaviors we associate with natural human intelligence. For example, humans can speak, hear, read and write language and glean meaning from it. The fields of speech recognition and natural language processing mimic these abilities by converting audio signals into text and processing that text to extract meaning from it. Other subfields are building intelligent systems that replicate human behaviors, such as:
- Robotics: Replicates the human ability to move through our physical environment
- Computer vision: Mimics our ability to see and process visual information
- Pattern recognition: Identifies and categorizes objects
AI algorithms have a variety of uses in the world today, with countless research projects exploring new ones all the time. You know that “Top Picks” section on your Netflix homepage? That’s an AI algorithm. In this article, we’ll discuss the basics of — and differences between — AI and machine learning, the business applications of AI and ML and how to get started.
Who invented artificial intelligence?
Concepts of artificial intelligence had been floating around in science fiction from at least the beginning of the 20th century. It wasn’t until the first stored-program computers became operational in 1949 that conditions were established for AI to become a reality. Within a few years, scientists and academics were theorizing that computers might be able to go beyond processing based on logical rules and actually become “thinking machines.”
Computer scientist John McCarthy is considered the father of artificial intelligence. He coined the term in 1955 and wrote one of the first AI programming languages, LISP, while at the Massachusetts Institute of Technology in 1958. But he wasn’t the first to propose the idea of artificial intelligence.

John McCarthy is considered the father of AI. Photo via Wikimedia Commons.
Who were other important figures in early AI history?
One of the most prominent figures in early AI history was English mathematician Alan Turing who, in his 1950 paper “Computing Machinery and Intelligence,” proposed a method for testing machine intelligence that became known as the Turing Test. Five years later, Herbert Simon, Allen Newell and John Shaw created Logic Theorist, the first program written to mimic a human’s problem-solving skills.

Alan Turing proposed a method for testing machine intelligence. Photo via Wikimedia Commons.
Still, what we now know as AI was an undefined field until McCarthy dropped the term “artificial intelligence” into a proposal for a summer research conference on the subject. McCarthy wrote in the proposal, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
What is artificial intelligence?
Artificial intelligence (AI) comprises algorithms designed to mimic a human brain’s neural network, allowing machines to use massive amounts of data to learn from their own actions and improve future outcomes. AI can be further subdivided into “Weak/Narrow AI” and “Strong/True AI,” which we discuss in further detail below.

AI is an umbrella term for the ability of machines to mimic aspects of the human neural network.
What is weak AI?
Weak AI, also called narrow AI, is a subset of AI that produces human-like responses to inputs by relying on programmed algorithms. Weak AI tools are not actually doing any “thinking”; they just seem like they are. Voice-activated apps like Siri, Cortana and Alexa are common examples of weak AI. When you ask them a question or give them a command, they listen for sound cues in your speech, then follow a series of programmed steps to produce the appropriate response. They have no real understanding of the words you speak or the meaning behind them.
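The “follow a series of programmed steps” behavior can be sketched in a few lines. This is a deliberately crude toy, and every keyword and response in it is invented for illustration; real assistants like Siri use far more sophisticated speech and language models, but the underlying point stands: the program matches cues and follows rules without understanding anything.

```python
# Toy "weak AI" assistant: it has no understanding of language.
# It just matches keywords and follows programmed rules.
# All keywords and responses here are invented for illustration.

def respond(command: str) -> str:
    text = command.lower()
    if "weather" in text:
        return "Here is today's forecast."
    if "timer" in text:
        return "Starting a timer."
    if "music" in text or "play" in text:
        return "Playing your playlist."
    return "Sorry, I didn't understand that."

print(respond("What's the weather like?"))  # Here is today's forecast.
print(respond("Play some music"))           # Playing your playlist.
```

Ask it anything outside its keyword list and it fails, which is exactly what makes it “narrow.”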
What is strong AI?
Strong AI, or “true AI,” refers to any system that can think on its own. These AI systems can reason, learn, plan, communicate, make judgments and have some level of self-awareness. In essence, they don’t simulate the human mind, they are minds — at least in theory. If we can replicate the architecture and function of the human brain, experts believe we can build machines with genuine cognitive ability. In the AI field of deep learning, scientists are using neural networks to teach computers to be more autonomous, but we’re still far from the types of independent AI depicted in science fiction. While change is coming rapidly, at this point, truly strong AI is still closer to a philosophy than a reality.
What is machine learning vs. AI?
Machine learning is a subfield of AI: an application of AI that enables computers to learn from experience and improve their performance on specific tasks. Machine learning systems analyze the data they receive and use statistical techniques to get better at a given task over time, without being explicitly reprogrammed. The term “machine learning” is often used interchangeably with “artificial intelligence,” but machine learning is the narrower of the two.
Machine learning algorithms are often classified as “supervised” or “unsupervised.”
What is supervised machine learning?
In supervised machine learning, a data scientist guides an AI algorithm through the learning process. The scientist provides the algorithm with training data that includes examples as well as specific target outcomes for each example. The scientist then decides which variables should be analyzed and provides feedback on the accuracy of the computer’s predictions. After sufficient training (or supervision), the computer is able to use the training data to predict the outcome of new data it receives.
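The idea of labeled examples guiding prediction can be shown with one of the simplest supervised algorithms, nearest-neighbor classification. This is a minimal sketch, not a production technique, and the (height, weight) numbers and class labels are invented for illustration: each training example carries a known target label, and a new point is assigned the label of the closest example.

```python
# Minimal supervised learning sketch: 1-nearest-neighbor classification.
# Each training example pairs features with a known target label, the
# "specific target outcomes" a data scientist supplies during training.
# All data values here are invented for illustration.

def predict(training_data, point):
    """Label a new point with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_data, key=lambda example: distance(example[0], point))
    return nearest[1]

training_data = [
    ((170, 60), "A"),
    ((175, 65), "A"),
    ((150, 45), "B"),
    ((155, 50), "B"),
]

print(predict(training_data, (172, 62)))  # A
```

After “training” (here, just storing labeled examples), the model predicts the outcome for new data it has never seen, which is the essence of the supervised setup described above.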
What is unsupervised machine learning?
In unsupervised machine learning, algorithms are provided with training data, but don’t have known outcomes to use for comparison. Instead, they analyze data to identify previously unknown patterns. Unsupervised learning algorithms can cluster similar data together, detect anomalies within a data set and find patterns that correlate various data points.
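Clustering is the canonical unsupervised example. The sketch below implements a bare-bones one-dimensional k-means with k=2; the data points are invented for illustration. Note that no labels are ever provided: the algorithm discovers the two groups on its own.

```python
# Minimal unsupervised learning sketch: k-means clustering with k=2.
# No labels or known outcomes are given; the algorithm finds the
# groups itself. The 1-D data points are invented for illustration.

def kmeans_1d(points, iterations=10):
    centroids = [min(points), max(points)]  # naive initialization
    for _ in range(iterations):
        clusters = [[], []]
        for p in points:
            # Assign each point to its nearest centroid.
            i = 0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) for c in clusters]
    return clusters

points = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
print(kmeans_1d(points))  # [[1.0, 1.5, 2.0], [9.0, 9.5, 10.0]]
```

Real k-means works in many dimensions and uses smarter initialization, but the assign-then-update loop is the same.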
Semi-supervised machine learning algorithms, as the name suggests, combine both labeled and unlabeled training data. The use of a small amount of labeled training data significantly improves prediction accuracy while mitigating the time and cost of labeling huge amounts of data.
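One common semi-supervised strategy is self-training: a handful of labeled points seed the process, and the model then pseudo-labels the unlabeled points itself. The sketch below uses nearest-neighbor pseudo-labeling on invented 1-D data; real self-training uses a proper classifier and confidence thresholds, but the labeled-seeds-plus-unlabeled-bulk pattern is the same.

```python
# Semi-supervised sketch: self-training with nearest-neighbor
# pseudo-labeling. A small labeled set seeds the process; the rest of
# the data gets labeled automatically. Data values are invented.

def self_train(labeled, unlabeled):
    labeled = dict(labeled)  # point -> label
    for point in sorted(unlabeled):
        nearest = min(labeled, key=lambda p: abs(p - point))
        labeled[point] = labeled[nearest]  # pseudo-label from nearest neighbor
    return labeled

seeds = [(1.0, "low"), (10.0, "high")]
unlabeled = [2.0, 3.0, 8.0, 9.0]
print(self_train(seeds, unlabeled))
```

Two labels were enough to label six points, which is why semi-supervised learning can sharply cut labeling cost.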
What is deep learning?
Deep learning is a branch of machine learning that mimics the brain as closely as possible. It typically uses a model based on the brain’s structure, called a deep neural network, to emulate a system of human neurons. The particulars of deep learning are complex, but in essence deep learning models analyze data iteratively and draw conclusions in a way much closer to how a human would. When a standard machine learning algorithm makes an incorrect prediction, a human has to let it know so it can make the necessary alterations. That human-level intervention helps the algorithm more accurately predict outcomes. In contrast, deep neural networks can assess the accuracy of their predictions on their own. Because of this, deep learning is better suited to very complex tasks than standard machine learning models tend to be.
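Deep neural networks are built from layers of simple artificial neurons. The toy below is not a deep network: it trains a single neuron (a perceptron) to learn the logical AND function. But it shows the error-driven weight-update loop that, repeated across many stacked layers, is the core mechanic of training neural networks. The training data and learning setup are a standard textbook exercise, not anything from this article.

```python
# A single artificial neuron (perceptron) learning logical AND.
# When a prediction is wrong, the error nudges the weights toward
# the correct answer. Deep learning stacks many layers of such units.

def train_perceptron(samples, epochs=20):
    w1, w2, bias = 0, 0, 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output  # 0 when the guess was right
            w1 += error * x1         # adjust weights in proportion
            w2 += error * x2         # to the error and the inputs
            bias += error
    return w1, w2, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

A single neuron can only learn simple patterns; the practical power of deep learning comes from composing many such units into layers and training them jointly.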
What are the business applications of AI and machine learning?
Machine learning is already driving many of the real-world applications you use every day, such as Facebook, which uses machine learning to make sure you see more posts from people whose content you’ve engaged with positively and fewer posts from people you haven’t interacted with as much. Your GPS navigation service uses machine learning to analyze traffic data and predict high-congestion areas on your commute. Even your email spam filter is using machine learning when it routes unwanted messages away from your inbox.
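The spam-filter example is worth unpacking, because it is machine learning at its most approachable. The sketch below scores a message by how often its words appeared in known spam versus known legitimate mail, a heavily simplified cousin of the naive Bayes classifiers real filters grew out of. The tiny “training” messages are invented for illustration.

```python
# Toy spam filter in the spirit of naive Bayes: compare how often a
# message's words appeared in known spam vs. known legitimate mail.
# The training messages below are invented for illustration.

spam_messages = ["win free prize now", "free money click now"]
ham_messages = ["meeting agenda for monday", "lunch plans this week"]

def word_counts(messages):
    counts = {}
    for msg in messages:
        for word in msg.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

spam_counts = word_counts(spam_messages)
ham_counts = word_counts(ham_messages)

def is_spam(message):
    words = message.split()
    spam_score = sum(spam_counts.get(w, 0) for w in words)
    ham_score = sum(ham_counts.get(w, 0) for w in words)
    return spam_score > ham_score

print(is_spam("free prize now"))           # True
print(is_spam("agenda for lunch monday"))  # False
```

The filter “learns” simply by counting: feed it more labeled mail and its word statistics, and therefore its decisions, improve, which is the learn-from-data loop this section describes.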
There’s a wealth of applications for machine learning in the enterprise, as well. Machine learning can help pull insights from large amounts of customer data so companies can deliver personalized services and targeted products based on individual needs. In regulated industries like healthcare and financial services, machine learning can strengthen security and compliance by analyzing activity records to identify suspicious behavior, uncover fraud and improve risk management. In general, machine learning and other AI techniques can provide an organization with greater real-time transparency so the company can make better decisions.
A snapshot of some of the ways companies use AI to improve all aspects of their business:
Customer service:
- Answer customer questions using AI-powered chatbots.
- Improve credit card fraud detection.
- Analyze customer feedback and surveys.
Sales and marketing:
- Create more accurate forecasts using historical and market data.
- Update customer contact information, generate new leads and optimize lead scoring.
- Personalize messages and create curated content streams.
- Use virtual assistants to help with customer support.
- Use recommendation engines to recommend a purchase to someone based on a past purchase.
- Optimize pricing in real time based on competitive and market factors.
- Enhance supply-chain management by comparing market demand to inventory.
- Create better risk-management models.
- Automatically review contractor proposals.
- Reduce equipment maintenance by identifying abnormal behavior.
IT:
- Enhance IT operations and network security.
- Protect against cyber attacks by finding exploitable software bugs and malware.
- Automate root-cause analysis.
What are the risks of AI?
Some people fear that AI will create intelligent machines that will take jobs away from humans. Others fear that as machines become better able to act on their own without human guidance, they could make potentially harmful decisions. In a 2014 interview with the BBC, the late scientist Stephen Hawking said the “development of full artificial intelligence could spell the end of the human race.” Others predict that AI will improve human life by automating repetitive and simple tasks, giving people time for more rewarding activities.
McKinsey estimates that by 2030, 100 million workers will need to “find a different occupation” because AI has displaced them. But some studies predict AI will create at least as many jobs as it destroys. Although the World Economic Forum Future of Jobs report estimates that 85 million jobs will be replaced by machines with AI by the year 2025, the report also states that 97 million new jobs will be created by 2025 due to AI.
How do you know if you should use AI and machine learning?
- Is your task complex enough? First you should ask if the task you need to tackle is complex enough to justify an investment in machine learning. The range of AI applications in the enterprise is vast, and the best way to determine whether you should adopt AI is to look for similar use cases at other companies.
- Do other companies use AI for similar tasks? One of the more popular uses of machine learning relevant to many businesses and industries is to parse customer data to learn an individual’s preferences, purchasing habits and other behaviors when interacting with a company. This provides the information necessary to tailor highly personalized messages, services and products on a customer-by-customer basis.
- Do you have a definite outcome you want to achieve? In the financial services industry, machine learning is used to improve credit card fraud detection. In light of rapidly evolving methods of fraud and other financial crime, machine learning allows systems to adapt in real time to detect new types of fraud faster and more accurately than any human could. Machine learning is able to tackle a host of security and risk management challenges, with easy-to-find examples in financial, healthcare and other industries.
- Is your data recent enough to provide usable outputs? Once you’ve determined machine learning to be a worthwhile investment, it’s time to consider your data. Machine learning requires data — and a lot of it — to work successfully. But even more important than the quantity of data is its quality — and age. New data is a requirement for AI. Given the rapid changes in most industries, data even a few years old may hold no relevance to your business and probably won’t provide you with any predictive value.
- Is your data organized in a way that you can use it? “Clean data is better than big data” is a popular belief in data science. Data that’s unstructured or disorganized won’t provide the necessary business insights no matter how much of it you have.
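The fraud-detection use case mentioned above often boils down to anomaly detection: flag transactions that deviate sharply from a customer’s normal behavior. The sketch below flags any amount more than two standard deviations from the mean; the transaction amounts and the threshold are invented for illustration, and production systems use far more robust statistics and models.

```python
# Toy anomaly detection for fraud-style use cases: flag amounts that
# fall far outside a customer's normal spending pattern. The amounts
# and the 2-standard-deviation threshold are invented for illustration.

def find_anomalies(amounts, threshold=2.0):
    mean = sum(amounts) / len(amounts)
    variance = sum((a - mean) ** 2 for a in amounts) / len(amounts)
    std = variance ** 0.5
    return [a for a in amounts if abs(a - mean) > threshold * std]

transactions = [20.0, 35.0, 25.0, 30.0, 22.0, 28.0, 950.0]
print(find_anomalies(transactions))  # [950.0]
```

Note that the model needs enough recent, well-organized history to define “normal,” which is exactly why the data-quality questions above matter.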
Asking yourself a series of simple questions can get you started with AI.
How do you get started with AI?
The best way for a business to get started using AI is to use an existing AI platform. While it’s true that building artificial intelligence from scratch is incredibly expensive and complicated, it’s not the only — or even the preferred — way to bring AI to your organization. A better and simpler option for many companies is to implement existing AI platforms within your business.
You can also consider supervised learning applications, which offer a more straightforward, guided training process, and subsequently, a more manageable pilot AI project. As noted, supervised machine learning requires data with existing labels to make predictions. Using the credit card fraud example above, a bank could use data labeled “fraud” in conjunction with other transaction data to predict future fraudulent transactions. Without that labeling to jump-start the process, the machine learning application will be considerably more complex and slower to show results.
Start with a small amount of data and a short time frame for the project — say two months. Define a question related to a specific business problem for the AI to answer, then gather feedback on the results. This will allow you to decide what value machine learning has for your business and determine how it might influence decision making.
You may be surprised how straightforward it is to get started. Your business is already using sophisticated technology every day without you ever giving a thought to what’s under the hood. The email clients, word processors, spreadsheets, project management software and cloud platforms that are the backbone of your daily operations all rely on complex source code, but you’ve probably been able to successfully use them without ever taking a peek at a single line. AI can be implemented in a similar way now, thanks to the proliferation of easily accessible tools.
It’s not just analytics either. Popular cloud providers, including Google, Amazon and Microsoft, are making AI a far less arduous prospect by providing tools that allow laypeople to build their own machine learning models. They simplify the undertaking by providing ready-made algorithms and easy-to-use interfaces that allow those with minimal development experience to get up and running quickly.
Can small businesses use AI?
Even small businesses can become data-driven companies with the help of AI. With AI-enabled customer resource management (CRM), a business as small as a single-owner operation can parse customer reviews, social media posts, email and other written feedback to tailor its services and product offerings. A small business user can automate repetitive customer service tasks like answering queries and classifying tickets using an AI platform such as Digital Genius. Small businesses can even extract actionable data from existing tools like Google Sheets and ZenDesk by integrating them with an AI tool like Monkey Learn.
Can you use AI if you don’t have a lot of data?
Small companies can use AI even if they don’t have a lot of in-house data. Social media data can be collected directly from its sources and analyzed on the fly. Similarly, an AI system that tracks and analyzes housing prices, a popular AI application in real estate, usually culls this data from publicly available sources.
What is AI programming?
AI programming is a form of software programming that allows developers to bring AI capabilities to an application. These can be as basic as creating a smarter search engine or as complex as enabling a self-driving car.
Which programming language is best for artificial intelligence?
The most common programming languages for AI are Python, Java, C++, LISP and Prolog.
- Python: Python typically ranks as developers’ favorite programming language for AI, thanks to its adaptability and the simplicity of its syntax. It supports object-oriented, functional and procedural styles of programming. It’s also extremely portable, capable of running on Linux, Windows, macOS and UNIX with minimal changes to the code. It offers an extensive and convenient set of libraries for AI programmers, including NLTK and spaCy for natural language processing, NumPy for scientific computation, scikit-learn for machine learning, and TensorFlow, PyTorch and Apache MXNet, among others, for deep learning applications.
- Java: Java is one of the most popular languages in general software development, and is favored for AI projects. It’s also very portable and facilitates easy coding of algorithms — critical for AI programming — and is easy to debug. Built-in libraries include CoreNLP for natural language processing (NLP), ND4J for complex mathematics and DL4J for deep learning.
- C++: C++ is prized for its blazing speed and is widely considered one of the fastest programming languages, which is great for calculation-heavy AI applications. It’s not the easiest language to learn, so it’s not usually developers’ first choice, but it’s often recommended for applications involving machine learning and neural network building.
- LISP: Developed by John McCarthy in 1958, LISP is the second-oldest programming language after Fortran. Despite its age, it’s still a popular option due to its great prototyping capabilities and its ability to process symbolic expressions effectively. Many of its features are now offered by younger languages, so it’s hardly unique anymore, but it’s still a common choice for machine learning projects.
- Prolog: Prolog is another “classic” programming language, though it doesn’t have LISP’s range of applications. This rule-based, declarative language allows developers to easily answer a variety of queries, making it ideal for straightforward problem-solution types of AI applications. Prolog also supports backtracking, which makes algorithm management easier.
As an industry, AI is growing rapidly. Gartner projected that worldwide AI sales would reach $62 billion in 2022. A 2022 report from Grand View Research valued the global AI market at $93.5 billion in 2021, with a projected compound annual growth rate of 38.1% from 2022 to 2030. Artificial intelligence and machine learning are more than esoteric computer science research projects at Stanford and MIT. AI algorithms are doing more than unseating world chess champions or powering virtual personal assistants; cognitive computing is doing everything from transforming healthcare to powering the development of autonomous vehicles. If you’re hesitant to experiment with artificial intelligence, don’t fret. AI technology is more affordable and easier to use than ever before, and both of those factors continue to improve every day.
