

What Is Edge Computing?

Edge computing is a computing model in which processing occurs at or near the source of data. The “edge” in “edge computing” doesn’t refer to any sort of physical edge. Traditionally, most data is processed using centralized computing, through major cloud vendors such as Amazon Web Services (AWS), Microsoft Azure, IBM and Google. If you’re not at the center, you’re at the edge of the network: hence “edge computing.” When computing happens closer to data sources, services become faster and more reliable. Organizations also gain flexibility, since edge computing lets them use and distribute their resources across multiple locations.

Edge computing serves a particularly critical role in today’s cloud computing environment. Cloud infrastructure is often pushed to its limits by the abundance of cloud services and applications it supports, and can’t process data from connected devices fast enough to support new technologies such as AR and VR, much less generate insights and action in near real time in response. Cloud computing struggles to keep pace with this explosion of services and applications due to latency, often caused by network distance from the data source. The resulting inefficiency, and the customer experience degradation that follows, is not an option for applications that need near-instant analysis and response.

That’s where edge computing comes in, and organizations have already begun to realize its benefits. While most data processing today still happens at centralized data centers, it’s estimated that by 2025, 75% of data will be created and processed outside of a traditional data center or cloud. What’s more, 90% of all data that enterprises collect today is never actually used. However, thanks to its low latency and high connectivity, edge computing is better positioned to transform that data into actionable insights.


What are some examples of edge computing?

Typically, organizations that rely on edge computing struggle with latency when transmitting data to a data center and need to process data locally in real time. Modern manufacturing plants are one example. Because a modern plant with 2,000 pieces of equipment can generate around 2,200 terabytes of data per month, it’s faster and cheaper to process that data close to the equipment rather than first transmitting it to a remote data center.
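Some back-of-the-envelope arithmetic with the article’s figures shows why transmitting everything to a remote data center is impractical. This sketch uses only the numbers above (2,000 machines, roughly 2,200 TB per month); the 30-day month is an assumption for illustration.

```python
# Rough bandwidth math using the article's figures: 2,000 machines
# generating ~2,200 TB of data per month. Illustrative only.

TB = 10**12                       # decimal terabyte, in bytes
machines = 2_000
monthly_volume_bytes = 2_200 * TB
seconds_per_month = 30 * 24 * 3600  # assumed 30-day month

per_machine_tb = monthly_volume_bytes / machines / TB
sustained_gbps = monthly_volume_bytes * 8 / seconds_per_month / 10**9

print(f"Per machine: {per_machine_tb:.1f} TB/month")
print(f"Sustained uplink needed: {sustained_gbps:.1f} Gbit/s")
```

Shipping that volume off-site would require a sustained uplink of several gigabits per second, around the clock, which is the kind of load local processing avoids.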

Autonomous vehicles (i.e., self-driving cars) represent another example, as they can process sensor data on board to decrease latency while still being connected to a central location for over-the-air software updates.

In retail, edge computing works with sensors and cameras to help organizations boost inventory accuracy, improve supply chain and product development efficiency through automation, and analyze customer behavior in near real time to improve the shopping experience. A real-world example is the Sensormatic video-based artificial intelligence (AI) solution, which tracked occupancy and monitored social distancing within stores, ultimately helping retailers safely reopen during the COVID-19 pandemic.


Self-driving cars have the ability to process sensor data on board to combat latency in real time.

How does edge computing fit in with 5G?

Though 5G is still in its early days globally — in terms of coverage as well as the availability of 5G-enabled devices — the relationship between 5G and edge computing already exists. With mobile devices, for example, the closest mobile edge is the cell tower. Thus, if organizations can enable edge processing at the tower, they can dramatically improve performance for the end user.

Despite current limitations to 5G, analysts have been touting the combined power of 5G and edge computing for several years now. And looking ahead, 5G and edge will be complementary: 5G will likely connect the next wave of smart devices, resulting in an exponential growth in data at the edge within the next five years. The anticipated growth will be an opportunity for enterprises to improve operations and create new customer experiences. But because of latency, a centralized data approach won’t cut it — in light of its efficiency and cost-effectiveness, edge computing is what will enable organizations to take advantage of new 5G-connected data sources.

How does edge computing work with blockchain?

Thanks to its higher processing power and lower latency, edge computing alleviates many of blockchain’s challenges and roadblocks, providing viable infrastructure for blockchain nodes to store and verify transactions. As it stands, blockchain algorithms and transactions require a hefty amount of processing power, and general-purpose servers and processors are insufficient for the task. Edge computing infrastructure addresses these challenges by providing graphics processing units (GPUs) and high-performance processors that can meet blockchain’s extensive processing requirements.

Another blockchain pain point that edge computing addresses is latency. In blockchain, data has to travel through the entire network and back whenever one blockchain node communicates with another. But edge prevents data from traveling through the core network, instead directly facilitating server-to-server communication.

How does edge computing work with IoT (internet of things)? 

Edge computing can drastically reduce latency in communication between IoT-enabled devices and the central IT networks that connect them. IoT, or the internet of things, refers to the process of connecting everyday physical objects to the internet, from light bulbs within homes to medical devices at hospitals to personal wearable smart devices.

The ever-growing surge of IoT devices, along with the enormous volume of data they collectively generate, further drives the need for edge computing. The exponential rise of data has created challenges for organizations attempting to manage, analyze and store it, especially as networks become increasingly overburdened. Edge computing helps address these challenges by allowing organizations to analyze data closer to where it’s collected, rather than after it’s sent to the cloud.
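The idea of analyzing data where it’s collected can be sketched in a few lines: instead of forwarding every raw IoT reading to the cloud, an edge node summarizes a window of readings locally and ships only the compact summary upstream. This is a minimal, hypothetical sketch; the function and field names are illustrative, not part of any particular product.

```python
# Edge-side aggregation sketch: collapse a window of raw sensor
# readings into one compact record before anything leaves the edge.
from statistics import mean

def summarize_window(readings):
    """Reduce a list of raw readings to a small summary record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# One minute of per-second temperature readings from a single sensor.
raw = [20.0 + 0.01 * i for i in range(60)]
summary = summarize_window(raw)

# 60 raw values reduced to a 4-field summary for the cloud.
print(summary["count"], round(summary["mean"], 3))
```

Sixty raw values shrink to four fields, which is the bandwidth-saving pattern behind the “analyze locally, send summaries” approach described above.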

What is the difference between edge, cloud and fog computing?

Edge, cloud and fog computing are all forms of distributed computing. The difference between the three lies in where compute and storage resources are physically deployed relative to the edge locations where data is produced.

For edge computing, organizations deploy computing and storage resources where the data is produced (for example, edge servers and storage installed on a wind turbine to collect and process data produced by sensors inside it).

In the case of cloud computing, compute and storage resources are deployed across a handful of large regional data centers. The closest cloud facilities tend to be hundreds of miles from the site of data collection, and connection quality depends on internet connectivity.

Fog computing offers a third option when cloud data centers are too far away and edge deployment resources are too limited or too scattered to be viable. When sensor and IoT datasets are too large for a single edge deployment, fog computing places compute and storage within the network, between the data sources and the cloud. Smart cities are a good example of fog computing environments, as they generate too much data for a single edge deployment to handle; instead, they rely on fog node deployments to collect, process and analyze data within the environment. Some use the terms “fog computing” and “edge computing” synonymously, but the two operate at different scales.


Cloud computing, fog computing and edge computing are various distributed computing systems indicating where compute and storage resources are in relation to edge locations.

What are the requirements of edge computing?

Edge computing entails an array of computer hardware and monitoring requirements. On the hardware side, it’s imperative that edge computers be rugged and fanless. The hardware needs to function in harsh environments with dust, debris, vibration and extreme temperatures. A fanless design matters because it eliminates the need for vents: a fully sealed enclosure prevents dust, dirt and other debris from entering and damaging the system.

Whatever software platform an organization chooses for an edge computing environment must provide complete visibility and control over remote infrastructure, making comprehensive monitoring capabilities crucial for operations. These deployments should also be constructed with resilience, self-healing capabilities and fault tolerance in mind. The monitoring tools should enable easy provisioning and configuration, as well as be equipped with comprehensive alerting and reporting.
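The monitoring requirement above boils down to a simple loop: poll every remote site, record its health, and alert on anything that stops responding. Here is a hypothetical sketch of that loop; the site names and the status-fetching callback are illustrative, not from any real platform.

```python
# Hypothetical edge-fleet health check: poll each remote site once
# and return the names of sites that need attention.

def check_site(site, fetch_status):
    """Return True if the site reports healthy, False otherwise."""
    try:
        return fetch_status(site) == "ok"
    except Exception:
        # Treat any transport error as an unhealthy site.
        return False

def poll_fleet(sites, fetch_status):
    """Poll every site and collect those that failed the check."""
    return [s for s in sites if not check_site(s, fetch_status)]

# Simulated fleet in which one site is down.
statuses = {"plant-01": "ok", "store-17": "ok", "turbine-09": "down"}
alerts = poll_fleet(statuses, lambda s: statuses[s])
print(alerts)  # ['turbine-09']
```

A production tool layers provisioning, self-healing and reporting on top of this basic visibility loop, which is why the article calls comprehensive monitoring crucial.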

What are the benefits and challenges of edge computing?

The key benefits of edge computing include faster, more stable services, lower latency and real-time decision making. The key challenges include increased overhead, limited onsite technical expertise and physical security risks.

Benefits include:

  • Faster, more stable services: Users get faster, more consistent experiences, while enterprises and service providers benefit from real-time monitoring and apps with low latency and high availability.
  • Faster response time: This is especially necessary for cutting-edge applications such as augmented reality (AR) and virtual reality (VR). VR as a technology has use cases for gaming, collaboration and more that need ultra-low latency for near-real-time responsiveness.
  • Real-time decision making: Edge computing makes this possible with its ability to conduct onsite analytics and aggregation.
  • Enhanced security: Edge computing lets organizations keep computing power local, thus reducing the likelihood of exposing sensitive data. Keeping computing power local also makes enforcing security practices and adhering to regulatory policies easier (some customers and governments actually require that data remain in the jurisdiction where it was created). For the healthcare industry, HIPAA requirements as well as local and regional compliance mandates limit the storage and transmission of personal healthcare data.
  • Improved connectivity: While cloud computing relies on consistent internet connectivity, edge-to-cloud computing is made feasible by multiple kinds of connectivity. 5G, for one, provides a high-bandwidth and low-latency connection for data transfers and service delivery from the edge.
  • Improved traffic management: Edge computing reduces the amount of data sent over the network and onto the cloud, thus reducing the affiliated bandwidth and costs of transmission and storage.
  • Faster data analysis and insights: Having AI at the data source allows for quicker processing and mining of insights.

On the other hand, edge infrastructure poses several challenges, including:

  • Increased overhead: Edge computing requires servers at multiple small sites, a more complicated endeavor than adding the equivalent capacity to a single data center. Edge computing also adds more overhead in physical locations, especially for smaller organizations.
  • Lack of technical expertise: Small organizations and distributed offices often have limited or no onsite technical expertise. When troubleshooting is needed, sites often have to rely on non-technical and/or remote staff to address serious issues.
  • Lack of standardized operations: Site management across edge sites needs to be more or less standardized and highly reproducible to simplify management and streamline troubleshooting. Because edge deployments are not linked to a centralized data platform, receiving standardized software updates and sharing data that could improve processes across other locations becomes harder, which in turn complicates management.
  • Reduced physical security: Physical security at edge sites is often lower than at core sites.

How do you implement edge computing?

Edge computing puts storage and servers where the data is, and so can require as little as a partial rack of gear to operate on a remote LAN to aggregate and process the data locally. The computing gear would likely need a shielded or hardened enclosure to be protected from environmental conditions such as extreme temperatures or moisture.

Edge computing requires a common horizontal infrastructure that spreads across the edge sites and IT infrastructure as a whole in order to effectively manage all the distributed data sources and means of data storage. Some organizations can effectively manage infrastructure that spans multiple geographical locations, but edge computing poses additional challenges.

Organizations need an edge computing solution that:

  • Can be managed with the same tools and processes as centralized infrastructure, enabling automated provisioning, management and orchestration of hundreds (or even thousands) of sites with minimal IT staff, if any.
  • Can address the functional needs of different edge tiers that have varying requirements, such as the size of hardware footprint and cost.
  • Offers flexibility that enables use of hybrid workloads.
  • Is interoperable with components from numerous vendors.

What are the cost considerations for edge computing?

The costs associated with edge computing depend on the size and scale of the deployment, the amount of data collected and processed, and the geographic location of the edge computing deployment.

Edge computing tends to cost more than cloud computing: one recent estimate puts on-premises edge computing at roughly 35 to 55% more than cloud computing at core regions, taking into account upfront costs as well as three years of operating costs.
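Applying that 35 to 55% premium to a baseline makes the range concrete. The three-year cloud baseline figure here is a hypothetical number chosen purely for illustration; only the premium percentages come from the estimate above.

```python
# Apply the article's 35-55% on-premises edge premium to a
# hypothetical three-year cloud cost baseline.
cloud_3yr_cost = 100_000  # assumed baseline, in dollars

edge_low = cloud_3yr_cost * 1.35
edge_high = cloud_3yr_cost * 1.55
print(f"Edge estimate: ${edge_low:,.0f} - ${edge_high:,.0f}")
```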

Additional costs come from real estate, cooling and maintenance personnel, among other things. Organizations also have a choice between building their own edge clouds or using a pre-existing platform such as AWS Outposts to set up a cloud instance locally. The former can be up to 90% more expensive, since a customized solution adds costs beyond the server hardware and software licenses.

What is the future of edge computing?

It’s likely that edge will very soon become a natural extension of enterprise environments. We’ve seen major cloud providers making strategic bets on edge infrastructure already: AWS now has edge services, while Google announced its Google Distributed Cloud that can run at the edge. The rise of 5G and growing compute power will collectively enable edge to gain substantial traction. Some experts are even looking to IoT devices and edge computing to augment remote monitoring capabilities of carbon-generating sources and prevent unwarranted carbon emissions.

In many ways, mainstream edge is already here: our phones, wearables, laptops and more are all edge devices. People more often mean industrial use cases and enterprise initiatives when they talk about “edge computing,” but it will be consumer devices such as digital assistants that drive the technology forward. Once tech companies prove edge viable at scale in the consumer space, where users demand less in the way of low latency and reliability than they would in industrial contexts, it’s likely that edge will be used widely in corporate environments.

The Bottom Line: The edge is here for the long run

The edge has made its way into our pockets, homes and cars. However, it’ll likely take several more years for edge to go beyond consumer applications and into infrastructure across all industries. This evolution will occur as other cutting-edge technologies such as 5G and blockchain continue to evolve and mature.

Edge is the only way for organizations to keep up with the massive explosion of data that’s around us. As cloud computing is pushed to its limits by the exponential growth of data, adopting edge will be the logical next step for enterprises and other organizations that can’t afford latency. For that and many other reasons, edge is here to stay. And it will be the key we need to not just unlock value from data, but also stay afloat during this epoch.