Text mining is the practice of extracting and transforming unstructured text data into structured text information. Mining typically relies on a combination of machine learning, statistics and linguistics. This mined information can then be used to:
- Answer specific questions
- Perform analysis
- Discover meaningful patterns
A subset of data mining, text mining is particularly focused on documents, materials and information resources that contain unstructured text data. So, in this article, let’s take a look at how text mining works, use cases for it — and how it can uncover meanings and patterns that traditional approaches cannot.
How text mining works
The goal of text mining is to discover meaningful insights and patterns, as well as previously unknown information, based on contextual knowledge. Text mining is similar in concept to data mining, except that it focuses only on text that can be interpreted as natural language, such as the unstructured text held in documents, materials and other information resources.
Other names for this practice include text data mining and text analytics.
Research suggests that 80% of business data consists of unstructured text data. In order to transform text-based big data into meaningful information and — eventually — actionable knowledge, text mining procedures may include:
- Information Retrieval (IR)
- Information Extraction (IE)
- Natural Language Processing (NLP)
- Other machine learning algorithms
A key element of text mining is producing knowledge from distributed and isolated data sources across structured, semi-structured and unstructured formats.
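As a toy illustration of one of the procedures listed above, the sketch below ranks a tiny corpus against a query using hand-rolled TF-IDF scoring, a classic Information Retrieval technique. The corpus and query are invented examples, and a production system would use a purpose-built search engine rather than this scorer.

```python
import math
from collections import Counter

# Toy Information Retrieval: rank documents against a query with TF-IDF.
# Corpus and query are invented for illustration.
docs = [
    "the server logs show repeated login failures",
    "quarterly revenue grew across all regions",
    "login errors traced to an expired certificate",
]
tokenized = [d.split() for d in docs]

def idf(term):
    # Inverse document frequency: rarer terms carry more weight.
    n = sum(1 for d in tokenized if term in d)
    return math.log(len(tokenized) / n) if n else 0.0

def score(query, doc):
    # Sum of term-frequency * IDF for each query term.
    tf = Counter(doc)
    return sum(tf[t] * idf(t) for t in query.split())

query = "login failures"
best = max(range(len(docs)), key=lambda i: score(query, tokenized[i]))
print(docs[best])  # the server logs show repeated login failures
```

The first document wins because it contains both query terms, while the third matches only the common term "login", which IDF down-weights.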
Use cases for text mining
Most traditional data platforms built on data warehouse systems require preprocessing of information to fit an established schema structure. Modern data platforms, such as data lake and data lakehouse technologies, instead apply a schema based on tooling specifications at the analysis stage (schema-on-read).
With that context, we can confidently say that an automated and intelligent mechanism for transforming natural text data into a standardized format has plenty of applications, no matter your business function or your industry. These applications include:
- Sentiment analysis. Analyzing language nuances to understand customer sentiment on product performance, functionality and features. Relevant data may be available in the form of social media posts, emails, surveys and transcriptions that follow diverse linguistic semantics and nuances.
- Document categorization. Documents may be clustered and classified based on information themes, language style, metadata and attributes, and consumption trends among end users.
- Text summarization. Reviewing large documents and text assets and extracting the most relevant or meaningful knowledge from the data.
- Named entity recognition. Extracting identifiable information pertaining to entities such as users, partners, service providers, businesses, locations and other objects.
- Pattern recognition & knowledge discovery. Identifying knowledge and insights collectively from multiple sources of natural language. Machine learning algorithms are used to automatically identify patterns in the text and use existing knowledge or the generalization capability of the machine learning model to classify text as an insight, anomaly or pattern.
- Feedback analysis & recommender engines. Understanding customer preferences from natural language interactions in the form of social media posts, tickets submitted to the IT Service Desk, search queries and reviews published online. This knowledge is then used to improve product functions and features, recommend products and services that are most likely to engage a customer or close a purchase.
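To make the first use case concrete, here is a minimal lexicon-based sketch of sentiment analysis. The word lists are tiny illustrative assumptions; real systems use trained models that handle negation, sarcasm and the linguistic nuances mentioned above.

```python
# Toy lexicon-based sentiment analysis. The word lists are invented
# assumptions; production systems use trained NLP models.
POSITIVE = {"great", "love", "fast", "reliable"}
NEGATIVE = {"slow", "broken", "hate", "crash"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if balance > 0:
        return "positive"
    if balance < 0:
        return "negative"
    return "neutral"

print(sentiment("Love the new dashboard, search is fast"))  # positive
print(sentiment("App is slow and the export is broken"))    # negative
```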
As the application of text mining becomes more complex, traditional statistical techniques for information retrieval and text classification do not suffice for two key reasons.
- Large volumes of text data must be processed and analyzed efficiently. Traditional statistical techniques may produce accurate results, but at the expense of processing speed, rendering them ineffective in a business setting.
- The target keywords may not be recorded in a natural language or text format. This requires integration with complementing data mining techniques that process audio, images, video and log data streams.
Text mining: a two-phase process
Text mining has high commercial value: imagine all the knowledge available in corporate databases! But extracting any non-trivial pattern from text-based big data requires tedious manual effort.
A simplified text mining process can be described in two phases: refining the text and distilling the knowledge contained therein.
Phase 1: Text refining
This is an intermediate step that processes unstructured text from resources such as emails, documents, images or other sources of text data into structured information. AI techniques including Information Retrieval and Information Extraction are employed at this phase. The raw unstructured data may not conform to the unified standard an NLP tool requires for knowledge discovery.
Deviations, including differences in language nuances and semantics, make it challenging to assign a consistent structure to the available text data.
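A minimal sketch of this refining step might pull structured fields out of free text with pattern matching. The field names, patterns and sample ticket below are illustrative assumptions, not a standard schema; real Information Extraction pipelines are far more robust.

```python
import re

# Toy text refining: turn an unstructured snippet into a structured
# record. Fields and regex patterns are illustrative assumptions.
def extract_record(text: str) -> dict:
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)
    return {
        "email": email.group(0) if email else None,
        "date": date.group(0) if date else None,
        "tokens": re.findall(r"[a-z']+", text.lower()),  # bag of words
    }

ticket = "Customer jane.doe@example.com reported an outage on 2023-08-14."
record = extract_record(ticket)
print(record["email"])  # jane.doe@example.com
print(record["date"])   # 2023-08-14
```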
Phase 2: Knowledge distillation & discovery
A refined text requires further analysis in order to discover patterns, extract knowledge, obtain contextual insights and answer specific questions.
Knowledge distillation employs advanced machine learning techniques, including NLP, to discover knowledge from structured text efficiently and automatically. This knowledge may include non-trivial patterns that can only be deduced from refined text after exhaustive search, AI model training and learning.
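One simple way to picture this step is classifying refined text against labeled examples. The sketch below uses cosine similarity over word counts; the labels and training snippets are invented, and in practice trained NLP models would replace this hand-rolled classifier.

```python
import math
from collections import Counter

# Toy knowledge distillation: classify refined text by cosine similarity
# to labeled examples. Labels and snippets are invented assumptions.
TRAINING = {
    "incident": ["service outage reported by users", "database crash during backup"],
    "feedback": ["customers love the new interface", "users request a dark mode"],
}

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text):
    vec = vectorize(text)
    return max(
        TRAINING,
        key=lambda label: max(cosine(vec, vectorize(ex)) for ex in TRAINING[label]),
    )

print(classify("another outage reported this morning"))  # incident
```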
Valuable application: Text mining in biology
Some of the most impactful applications of text mining are observed in the bioinformatics domain. For instance, researchers studying protein interactions can use text mining to analyze how language is used around specific sets of proteins across the existing biosciences literature.
Two protein structures may never be discussed together in the same document, so a simple “bag of words” search may not return any meaningful result. However, the language and terminology that occur in separate documents around the keywords of interest may point to a relationship between the protein structures.
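The idea can be sketched with two invented document snippets: neither mentions both proteins, yet their surrounding vocabulary overlaps, hinting at a relationship. The protein names, texts and stopword list are illustrative assumptions only.

```python
# Toy context-overlap comparison: two documents never co-mention the
# proteins, but their surrounding terms overlap. Texts are invented.
STOPWORDS = {"the", "a", "of", "in", "and", "is", "to", "through"}

def context_terms(text, keyword):
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return {w for w in words if w != keyword}

doc_a = "BRCA1 regulates DNA repair and cell cycle checkpoint control"
doc_b = "RAD51 mediates DNA repair through homologous recombination"

ctx_a = context_terms(doc_a, "brca1")
ctx_b = context_terms(doc_b, "rad51")

shared = ctx_a & ctx_b
jaccard = len(shared) / len(ctx_a | ctx_b)
print(shared)   # overlapping context terms, e.g. 'dna' and 'repair'
print(jaccard)  # nonzero overlap despite no co-mention
```

A nonzero overlap score flags the document pair for closer review, even though no single document links the two proteins directly.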
This posting does not necessarily represent Splunk's position, strategies or opinion.