Published Date: February 13, 2023
A data structure is a format for organizing data so that it can be stored, retrieved, processed and used productively, making those operations faster and more efficient. In order for computers to use data, they need to be able to understand, find, organize and send it to the right places. The key to building the algorithms and applications that drive the value of modern computing lies in how the data is structured. Data structures start with the simplest forms of data — data types — and progress all the way to the most complicated constructs that drive computing power — algorithms and applications.
There are multiple types of data structures designed to facilitate different computing activities, all with the goal of storing data or making it easier for operators to use their data. Data structures can also allow data to be used for data science, with computational and sorting algorithms, in order to accomplish a specific function.
To help you learn data structures, we’ll cover the fundamentals of data structures, the main types and how they interact with one another, why the structure of data is important, and some examples and best practices.
What are the main types of data structures?

Data structures can be defined as primitive, non-primitive, linear or nonlinear.
There are many different data structures; the most common are as follows:
- Primitive data structures are the basic structures that other data structures are built on. Integer, character, float, double and pointer are primitive data structures that hold a single value. Integer data types, for example, represent a single whole number.
- Non-primitive data structures are complex data structures built from primitive data structures. Non-primitive data types are divided into linear and nonlinear.
- Linear data structures arrange elements in sequence: every element is connected to the elements before and after it. Examples include the list, queue, array and stack.
- Nonlinear data structures are not connected in sequence: an element can attach to other elements in multiple ways. Examples include the tree, graph, trie and hash table.
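As a rough sketch in Python, the linear/nonlinear distinction can be seen with built-in types (a plain dict of lists stands in for a graph here):

```python
from collections import deque

# Linear structures: every element follows the one before it.
array = [10, 20, 30]          # array/list: elements accessed by index
stack = [1, 2, 3]             # stack: last in, first out
queue = deque(["a", "b"])     # queue: first in, first out

# Nonlinear structure: an element may connect to many others.
# A graph represented as an adjacency list (node -> neighbors).
graph = {"A": ["B", "C"], "B": ["C"], "C": []}

print(stack.pop())            # 3: the most recently added element
print(queue.popleft())        # 'a': the earliest added element
print(graph["A"])             # ['B', 'C']: node A attaches to two nodes
```

Lists, deques and dicts are Python's idiomatic stand-ins; lower-level languages would expose arrays and pointers more directly.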
Some properties that make up common data structures include the following:
- Array Data Structure: A finite grouping of elements stored in contiguous (adjacent) memory locations, where each element can be accessed via an index key that is typically numerical and zero-based.
- Linked List Data Structure: Here, the order of the elements is not determined by contiguous memory allocation; instead, each node in the list holds both its data and a pointer to the next node. Variants include singly linked lists, doubly linked lists and circular linked lists.
- Tree Data Structure: Represented by a set of linked nodes, a tree is a hierarchy that contains a root node, parent nodes and subtrees (or children). Variations of the tree data structure include the binary search tree, red-black tree, B-tree, AVL tree and weight-balanced tree.
- Hash Table Data Structure: Often abstracted and embellished by many programming languages, a hash table maps keys to values. Each key’s storage location is determined by a hash function, which must handle hash collisions, cases where the function maps two distinct keys to the same location.
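To make the linked-list idea concrete, here is a minimal sketch of a singly linked list in Python (the class and method names are illustrative, not from any particular library):

```python
class Node:
    """One element of a singly linked list: data plus a pointer."""
    def __init__(self, data):
        self.data = data
        self.next = None   # pointer to the next node, not contiguous memory

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        """Insert at the head; no other elements need to move."""
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):
        """Walk the chain of pointers in order."""
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

ll = LinkedList()
for value in (3, 2, 1):
    ll.push_front(value)
print(ll.to_list())   # [1, 2, 3]
```

Insertion at the head is constant time, which is the classic trade-off against an array’s constant-time indexed access.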
What are data types?
Data structures are the building blocks of algorithms (including sorting algorithms), computer programs and programming languages such as JavaScript and Python. In software engineering, data types are the building blocks of data structures and provide the foundation for basic operations. A data type specifies what values a variable can hold and identifies the mathematical, relational or logical operations that can be applied to it. In other words, it is an attribute associated with a piece of data that tells a computer how to interpret it.
The data type tells the computer which operations can be performed on the data and how to use it. For example, an integer data type tells the computer that the value of that data is a whole number. A character type tells the computer that the data contains characters, such as text. Defining data types prevents errors by making sure that, for example, the computer doesn’t try to multiply an integer by a character (25 x Bob).
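Python raises exactly this kind of error at runtime. (Note that Python happens to define `int * str` as repetition, so the sketch below uses addition to show the mismatch):

```python
# A type tells the interpreter which operations make sense.
count = 25          # int: arithmetic is allowed
name = "Bob"        # str: text, not a number

print(count * 2)    # 50: multiplying two numbers is fine

try:
    count + name    # adding an integer to text is undefined
except TypeError as err:
    print(err)      # e.g. unsupported operand type(s) for +: 'int' and 'str'
```

Statically typed languages catch the same mistake at compile time instead of at runtime.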
The typical base data items include:
- Boolean data types are based on logical values that are either true or false.
- Integer data types store a range of whole numbers.
- Floating-point numbers store a formulaic representation of real numbers.
- Fixed-point numbers hold real values but are managed as digits to the left and right of the decimal point.
- Character data types store character data in a fixed-length field.
- Pointers are reference values that point to other values.
- String is an array of letters, numbers, punctuation or other characters.
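Most of these base types map directly onto Python, with two caveats worth flagging: Python has no separate character type (a character is a one-element string), and it has no raw pointers (every name is a reference to an object). Fixed-point values can be approximated with the standard-library `decimal` module:

```python
from decimal import Decimal

flag = True                  # boolean: true or false
count = 42                   # integer: whole numbers
ratio = 3.14                 # floating point: approximate real numbers
price = Decimal("19.99")     # fixed point: exact decimal digits
letter = "A"                 # character: a one-element string in Python
text = "hello"               # string: a sequence of characters
ref = text                   # names act as references ("pointers") to objects

print(price + Decimal("0.01"))   # 20.00, with no floating-point drift
```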

What are data models?
Data models are visual representations of data elements and the connections among them. They help define the structure of data inside an organization based on how it is used, in order to support the creation of information systems. Data models can be seen as a blueprint for how an organization stores, shares, manages and uses its data.
There are three primary types of data models that form a progression from the abstract to the detailed: conceptual, logical and physical.
- Conceptual data models are both the simplest and most abstract. They are often used in the discovery stages of a project to create the overall layout and rules of data relationships.
- Logical data models go a step beyond the basic outline created by the conceptual model and consider more relational factors. Logical data models are often used in creating data warehousing plans.
- Physical data models are the most complex and detailed models and are used in the final stage of modeling a database. The physical data model will include all the components necessary to complete a database build.
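A physical model ultimately resolves into concrete storage definitions. As a minimal illustration using Python’s built-in SQLite driver (the table and column names here are invented for the example):

```python
import sqlite3

# The physical model fixes concrete tables, columns and column types.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        first TEXT NOT NULL,
        last  TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO customer (first, last) VALUES (?, ?)",
             ("Julia", "Lopez"))
row = conn.execute("SELECT first, last FROM customer").fetchone()
print(row)   # ('Julia', 'Lopez')
conn.close()
```

The conceptual model would only say “a customer has a name”; the physical model commits to column names, types and constraints.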
Many data models will have specific sequential approaches to handling data structures, which can include:
- FIFO: First in, first out. The first element added is processed first, and the newest element is processed last.
- LIFO: Last in, first out. The first element added is processed last, and the most recently added element is processed first.
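Both orderings are easy to sketch in Python: a `collections.deque` works as a FIFO queue, and a plain list works as a LIFO stack:

```python
from collections import deque

items = deque()
for x in (1, 2, 3):
    items.append(x)          # elements arrive in order 1, 2, 3

fifo_first = items.popleft() # FIFO: the oldest element comes out first

stack = [1, 2, 3]
lifo_first = stack.pop()     # LIFO: the newest element comes out first

print(fifo_first, lifo_first)   # 1 3
```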

What are data hierarchies?
In data and database theory, a data hierarchy is a method of organizing data according to an established order. Data is made up of characters, fields, records, files and other components, and in a hierarchical structure these elements are ordered so that they have a logical relationship to one another.
If we use a customer database as an example, then each customer record would have information including first name, last name, company, title, department and so on. A hierarchical relationship would show which of these terms is a smaller or larger element of the hierarchy, so that the database would understand that Julia Lopez is a senior manager in the accounting department of XYZ Corporation.
A typical data hierarchy starts with the smallest type of data (a data field) and builds into more complex combinations.
- Data field: A single fact or attribute, such as a date.
- Record: A collection of related fields, such as name, address, title and phone number making up a customer record.
- File: A collection of related records. All customer records together would make up a customer file.
- Database: A collection of related files. All customer files together would constitute the customer database.
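The field → record → file → database progression can be sketched with plain Python containers (the field names reuse the customer example above and are purely illustrative):

```python
# field -> record -> file -> database, modeled with nested containers
record = {                       # a record: a group of related fields
    "first": "Julia",
    "last": "Lopez",
    "department": "accounting",
}
customer_file = [record]         # a file: a collection of related records
database = {                     # a database: a collection of related files
    "customers": customer_file,
}

print(database["customers"][0]["last"])   # Lopez
```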
What is the difference between structured and unstructured data?
Unstructured data is any data that doesn’t conform to a conventional data model, meaning it cannot be stored and managed by a relational database. Unstructured data does not follow a data model or schema.
Structured data usually consists of letters or numbers. Unstructured data may be any type of data and may be stored in its original file format. It can contain data such as numbers, words, dates and other types of information.
In addition to the format, the primary difference between structured and unstructured data is the way it can be used. Data structures were created so that the data contained in them could be used by machine learning algorithms or computer programs and applications. The structure is designed specifically so that it is understandable and usable. Unstructured data is not uniform, doesn’t conform to specified data structures and typically can only be used by applications specifically designed to understand and parse unstructured data.
What is a data log and how does it relate to data structure?
A data log is a record of activities, events or actions that is captured and stored in files or datasets with the aim of uncovering patterns, analyzing activities and predicting events or trends. Log data — which can record events such as size of data, type of modifications and who is making them — can in turn be used by system administrators to monitor and analyze how a system performs over time.
Log data also enables cybersecurity and compliance professionals to determine who has had access to the system and analyze audit trails to detect malware and locate and monitor suspicious activity in the network.
Examples of log types include perimeter device logs, Windows event logs, endpoint logs, application logs, proxy logs and IoT logs. Common log formats include CSV, JSON, key-value pairs and Common Event Format (e.g., Syslog).
Because a log is a historical record of activity, it is naturally filed and categorized into some kind of data structure or index, often a priority queue, so that entries can be prioritized and then accessed based on priority level and frequency of use. A log also serves as a source of authority and truth for restoring data and related structures in the event of a disruption or crash.
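A priority queue of log entries can be sketched with Python’s `heapq` module (the entries and priority levels are made up; lower numbers mean higher priority, and a sequence number breaks ties among equal priorities):

```python
import heapq

# Each entry is (priority, sequence, message).
log_queue = []
entries = [(2, "user login"), (0, "disk failure"), (1, "config change")]
for seq, (priority, message) in enumerate(entries):
    heapq.heappush(log_queue, (priority, seq, message))

# Pop entries in priority order, most urgent first.
processed = []
while log_queue:
    priority, _, message = heapq.heappop(log_queue)
    processed.append(message)

print(processed)   # ['disk failure', 'config change', 'user login']
```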
Why are data structures important?
Data structures are important for designing algorithms that perform specific functions by arranging basic data types into usable groupings of data. Data structures save time and allow computers to more efficiently perform operations such as storage, retrieval and processing of data.
Data structures bring abstract data elements together in a way that conveys meaning to an algorithm or application. For example, the label “customer name” is an abstract data type made up of the character strings for “first name” and “last name.” Without the data structure to define “customer name,” the data would not be usable in an application.
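The “customer name” example can be sketched as a small Python dataclass (the class and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class CustomerName:
    """Groups two character strings into one meaningful unit."""
    first_name: str
    last_name: str

    @property
    def full(self) -> str:
        return f"{self.first_name} {self.last_name}"

name = CustomerName("Julia", "Lopez")
print(name.full)   # Julia Lopez
```

On their own, the two strings are just character data; the structure is what gives an application the concept of a customer name.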
Data structures also make it significantly easier to write algorithms and applications, because programmers can build programs using data structures to create functions rather than smaller data types.
What use cases are there for unstructured data?
According to a 2019 survey by Deloitte, only 18% of organizations reported that they were able to take advantage of unstructured data. Unfortunately for them, unstructured data can contain insights of significant business value. Some real-world use cases for unstructured data include:
- Medical records: The healthcare industry creates significant amounts of unstructured data, including machine-generated data collected from medical imaging, data from wearable devices and even the insights contained in the conversations medical professionals have with their patients.
- Sentiment analysis: Organizations pay professionals large sums of money to help them understand what consumers think about them. Much of that data can be found, freely available, in social media and other public formats. It is often contained in unstructured data — in the form of tweets, posts, updates, comments on surveys and the like. The ability to understand that data is extremely valuable to companies hoping to understand how they are perceived.
- Information contained in text documents: Text documents created by an average business contain data as text, numbers, images and other unstructured formats. They can contain useful information but are not searchable. There are numerous tutorials available highlighting tools that can be used to extract the information from these kinds of documents, including pattern recognition, text mining and natural language processing (NLP).
- Business communications: Email, live chat, text messaging and other similar communication technologies are widely used in business but are generally made up of unstructured data that cannot be indexed and searched by traditional means. NLP tools can be used to identify key topics and search terms and help to make the information contained in these communications useful.
- Recorded telephone conversations: Sales calls, customer service calls, inquiries from the public and 911 calls to emergency responders are some of the types of recorded conversations that can contain useful information if it can be extracted. Speech-to-text processing can be employed to convert the conversations to a machine-readable text format, and then NLP can help to identify keywords and categorize the transcripts.
- Survey responses: Surveys and questionnaires usually include open-ended questions that respondents can answer in text, representing unstructured data. Using text analytics tools, the answers can be rendered into a format that can be read and interpreted by a computer.
- Digital publications and web content: Everything posted on the web in text is potentially valuable to someone for market research, competitive intelligence, consumer research, sales forecasting and any number of other business purposes. Understanding digital text can be accomplished with a combination of NLP and artificial intelligence (AI), which helps open up the data for productive use.
Data structures are one of the fundamental building blocks of data science and computer science. They take the smallest, simplest data types with the least amount of context and group them together so that they can be used in programming, and thereafter drive algorithms, applications and operating systems to perform specific, useful tasks. Understanding the way data structures work is a vital part of programming, and using them to drive computing operations is the key to creating effective applications. Data structures are the keys to good code, not only because of their practical applications but also because they help programmers better understand the problems they are trying to solve and the goals they are trying to achieve.
