Edge computing refers to a distributed IT architecture that processes client data at the edge of the network, as close as possible to the data source.
Data is vital to modern businesses. It provides valuable business insights and supports real-time control of critical business processes. Huge amounts of data can be collected routinely from distributed sensors and IoT devices operating in remote locations and inhospitable environments around the globe.
This is changing how businesses use computing. Traditional computing models that rely on the internet and a central data center are not well suited to moving ever-increasing amounts of real-time data. Centralized cloud systems can suffer from bandwidth limitations, latency issues, and unpredictable network disruptions. Businesses are addressing these data challenges with a distributed edge computing architecture.
Edge computing is simply a way to move some storage and compute resources closer to the source of the data. Instead of sending raw data to a central data center for processing and analysis, that work is performed where the data is actually generated, whether that is a retail store, a factory floor, or a smart community. Only the results of computing at the edge, such as real-time business insights, equipment maintenance predictions, or other actionable answers, are sent back to a centralized management instance for review and human interaction.
Edge computing is changing IT and business computing. In this post, we take a detailed look at what edge computing is, its impact, and its tradeoffs.
What is edge computing?
It is all about location. In traditional enterprise computing, data is produced at a client endpoint (e.g., a user’s laptop), moved over the corporate LAN, and stored and processed by an enterprise application. The client is then informed of the results. This client-server approach to computing works well for most business applications.
However, the number of connected devices, and the volume of data those devices produce, is growing far faster than traditional data center infrastructures can handle. Gartner has forecast that 75% of enterprise-generated data will be created outside of centralized data centers by 2025. Moving so much distributed data across the global internet places a huge burden on it and can cause disruptions and delays.
IT architects have shifted their focus from the central data center to the logical edges of the infrastructure: taking compute and storage resources out of the data center and moving them to where the data is generated. The principle is simple: if you cannot move the data closer to the data center, move the data center closer to the data. Take retail, for example, where video surveillance can be combined with sales data to determine the most effective product configuration or consumer demand. Predictive analytics can guide equipment maintenance and repair before actual failures or defects occur. In these cases, the data stream is processed at the edge and normalized, and the results are sent back to the principal data center for further analysis.
Edge computing is not a new concept. It is rooted in decades-old distributed computing ideas, such as branch offices and remote offices, where it was more reliable to place computing resources at the desired location than to rely on a single central location. In one survey, only 27% of respondents had implemented edge computing technology, but 54% found the idea interesting.
Edge vs. Cloud vs. Fog Computing
Edge computing is closely related to the concepts of cloud computing and fog computing. These concepts overlap, but they are not the same thing and should not be used interchangeably, so it is helpful to compare them and understand their differences.
All three concepts relate to distributed computing, and all three concern the physical placement of compute and storage resources relative to the data being produced. It is all about where those resources are located.
Edge. Edge computing places compute and storage resources where the data is created, putting them at the same location as the data source, at the network edge. For example, a small enclosure with a few servers and some storage might sit atop a wind turbine, collecting and processing data from the turbine’s sensors. Similarly, a railway station might house a small amount of compute and storage to process data from its many track and rail traffic sensors. The results of any such processing can then be sent on to another data center for human review, archiving, and combination with other results for broader analytics.
Cloud. Cloud computing is a large, highly scalable deployment of compute and storage resources at one or more global locations (regions). The cloud also offers a variety of prepackaged services for IoT operations, which makes it a popular centralized platform for IoT deployments. Cloud computing can handle complex analytics, but even the nearest regional facility can be far from the data source, and connections to the cloud rely on the same unstable internet connectivity as traditional data centers. The cloud can serve as an alternative or a complement to a traditional data center: it brings central computing closer to the data source, but not to the network edge.
Fog. The choice of distributed compute and storage deployment does not have to be limited to the cloud or the edge. A cloud data center may be too far away, while an edge deployment may be too resource-limited, or too physically dispersed, to be practical. Fog computing addresses this situation by taking a step back and placing compute and storage resources “within” the data, but not necessarily “at” the data. Fog computing suits environments that generate staggering amounts of sensor and IoT data across physical areas too vast to be called an edge. Smart buildings, smart cities, and smart utility grids are just a few examples: a smart city uses data to monitor, analyze, and optimize public transit, municipal utilities, and city services, and to guide long-term urban planning. A single edge deployment cannot handle such a load, so fog computing runs a series of fog node deployments within the environment to collect, process, and analyze the data.
Note that edge computing and fog computing have almost identical architectures and definitions, and the terms are often used interchangeably, even among technology experts.
What is the importance of edge computing?
Selecting the right architecture is essential for computing. However, not every architecture is suitable for every computing task, and you have to consider several issues before deciding whether cloud, fog, or edge is the optimal choice for your business. Moving from a centralized to a distributed architecture, in particular, is not easy: it requires levels of control and monitoring that are easily overlooked when leaving a centralized computing model. Take self-driving vehicles, for example. Intelligent traffic control signals will be crucial to their success, and traffic controls and cars will have to exchange, analyze, and produce data in real time. Multiply this requirement by large numbers of autonomous cars, and it demands a fast, responsive network. Here, fog and edge computing address three main network limitations: bandwidth, latency, and congestion.
Bandwidth. Bandwidth is the amount of data a network can transmit over time, usually expressed in bits per second. All networks have a finite bandwidth, and wireless communication is more constrained still. This limits the data transfer rate, or the number of devices, that a network can handle. While it is possible to increase network bandwidth to accommodate more devices, doing so can be costly and does not solve the other problems.
Latency. Latency is the delay in sending data between two points on a network. Although communication ideally occurs at the speed of light, large distances and network congestion can slow data movement. This delays analytics and decision-making and reduces a system’s ability to respond in real time. In the case of an autonomous vehicle, it can even cost lives.
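To make the latency point concrete, here is a small back-of-the-envelope sketch (not from the original article) estimating best-case one-way propagation delay over optical fiber, where signals travel at roughly two-thirds of the speed of light in vacuum. The distances are illustrative assumptions:

```python
# Rough one-way propagation delay over optical fiber.
# Light in fiber travels at roughly 2/3 of its speed in vacuum.
SPEED_OF_LIGHT_KM_S = 300_000                     # km/s in vacuum (approx.)
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # ~200,000 km/s

def propagation_delay_ms(distance_km: float) -> float:
    """Best-case one-way delay in milliseconds, ignoring queuing and routing."""
    return distance_km / FIBER_SPEED_KM_S * 1000

# A sensor 5 km from an edge node vs. 5,000 km from a distant cloud region:
print(f"edge:  {propagation_delay_ms(5):.3f} ms")    # ~0.025 ms
print(f"cloud: {propagation_delay_ms(5000):.1f} ms") # ~25 ms each way
```

Real-world round trips are considerably worse once routing, queuing, and retransmissions are added, which is exactly the gap that moving compute to the edge closes.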
Congestion. The internet is, in essence, a global network of networks. It has evolved to provide good general-purpose data interchange for most computing tasks, such as file exchanges and basic streaming. However, the internet can be overwhelmed by the sheer volume of data produced by tens of billions of devices. Heavy congestion forces time-consuming data retransmissions and can cause excessive data transfer delays; network outages can deepen congestion or even cut off communication entirely, rendering the internet of things all but useless.
Generally speaking, edge computing allows many devices to operate over a much smaller, more efficient local network. Because only local data-generating devices consume its bandwidth, latency and congestion are virtually eliminated. Local storage holds and protects the raw data, while local servers perform edge analytics (or at least reduce and pre-process the data), enabling real-time decisions before the results are sent on to the cloud or a central data center.
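The "reduce and pre-process" step can be sketched in a few lines. In this illustrative example (the function name and window format are assumptions, not from the article), an edge node collapses a window of raw sensor samples into a compact summary, and only that summary crosses the network:

```python
from statistics import mean

def summarize_window(readings: list) -> dict:
    """Reduce a window of raw sensor readings to a compact summary.

    Only this summary (a handful of numbers) is sent upstream instead
    of every raw sample, saving bandwidth and round trips.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# 1,000 raw temperature samples collapse to a 4-field summary:
raw = [20 + (i % 10) for i in range(1000)]
summary = summarize_window(raw)
print(summary)  # {'count': 1000, 'min': 20, 'max': 29, 'mean': 24.5}
```

A real deployment would also keep (or spool) the raw window locally, in case the central site later requests the detail.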
Examples of distributed edge computing applications
Edge computing gathers, filters, processes, and analyzes data “in place” at the network edge. This is a powerful way to work with data in parallel, in a distributed manner, when the data cannot be moved to a central location, usually because its sheer volume would make such moves costly or technically impractical, or because moving it would violate compliance obligations such as data sovereignty. This definition has spawned many real-world examples and use cases.
Manufacturing. An industrial manufacturer used edge computing to monitor manufacturing, with real-time analytics and machine learning at the edge identifying production issues and improving product quality. Edge computing supported the installation of environmental sensors throughout the plant, providing insight into how components are assembled and stored and how long they remain in stock. The manufacturer can now make more informed business decisions about the factory facility and manufacturing operations.
Network optimization. Edge computing can optimize network performance by measuring performance for users across the internet and then using analytics to determine the most reliable, lowest-latency path for each user’s traffic. In effect, edge computing “steers” traffic across the network for optimal performance of time-sensitive traffic.
Workplace safety. Edge computing can combine data from employee safety devices, on-site cameras, and other sensors to assist businesses in monitoring workplace conditions and ensuring that employees follow safety protocols, especially when the work environment is remote or dangerous.
Healthcare. The volume of data collected from medical devices, sensors, and other equipment has grown dramatically in the healthcare industry. Edge computing applies machine learning and automation to this data to ignore “normal” readings and identify problem data, so that clinicians can take immediate action to prevent health incidents.
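The "ignore normal, flag problematic" pattern can be sketched as a simple edge-side filter. This is an illustration only: the thresholds are hypothetical, not a clinical algorithm.

```python
# Edge-side triage: drop readings inside a normal range, forward only
# out-of-range readings for clinician attention. Thresholds are illustrative.
NORMAL_HEART_RATE = range(50, 111)   # beats per minute, 50..110 inclusive

def triage(readings: list) -> list:
    """Return only the readings worth sending upstream."""
    return [bpm for bpm in readings if bpm not in NORMAL_HEART_RATE]

stream = [72, 75, 74, 140, 73, 71, 38, 76]
alerts = triage(stream)
print(alerts)  # [140, 38]
```

Here eight readings reduce to two alerts; over days of monitoring, the reduction is what makes real-time upstream review feasible.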
Transportation. Self-driving vehicles can produce between 5 TB and 20 TB of data per day, gathering information about speed, location, road conditions, and the condition of other vehicles. The data must be gathered and analyzed in real time, even while the vehicle is moving, which requires substantial onboard computing: each autonomous vehicle becomes an “edge.” The data can also help authorities and businesses manage vehicle fleets based on actual conditions on the ground.
Retail. Retail businesses also generate huge data volumes from stock tracking, surveillance, and sales data. Edge computing can analyze all this data to find business opportunities, such as identifying an effective endcap or campaign, predicting sales, and optimizing vendor ordering. Since conditions can vary considerably from store to store, edge computing is an effective solution for local processing at each location.
The main benefits of edge computing
As stated above, edge computing solves critical infrastructure issues such as bandwidth limitations, excessive latency, and network congestion. But it also offers several additional benefits that can make it appealing in other situations.
Autonomy. Edge computing is useful where connectivity is unreliable or bandwidth is restricted by a site’s environmental characteristics, such as ships at sea, oil rigs, and remote farms. Edge computing performs the computation on site, sometimes on the device itself, such as water quality sensors on water purifiers in remote villages, and can save data to transmit to a central location only when connectivity is available. By processing data locally, the amount that must be sent can be vastly reduced, requiring far less bandwidth and connectivity time than would otherwise be necessary. Edge devices encompass a broad range of equipment, including sensors, actuators, and other endpoints, as well as IoT gateways.
Data sovereignty. Moving large amounts of data is not just a technical issue; the data’s movement across national and international borders can create additional legal problems around privacy, security, and other concerns. Edge computing keeps data close to its source and within the limits of data sovereignty laws such as the GDPR, the European Union’s regulation on how data may be processed and disclosed. This allows raw data to be processed locally, and any sensitive data to be obscured or secured before anything is sent to the cloud or the primary data center.
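The "obscure sensitive data before it leaves the edge" step might look like the sketch below. The field names are illustrative, and simple unsalted hashing is a simplification; real deployments would typically use salted hashing or tokenization.

```python
import hashlib

SENSITIVE_FIELDS = {"patient_name", "national_id"}  # illustrative field names

def pseudonymize(record: dict) -> dict:
    """Replace sensitive fields with a one-way hash before upload.

    Raw values never leave the site; the cloud sees only stable
    pseudonyms, helping keep processing within sovereignty limits.
    """
    out = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

record = {"patient_name": "Alice", "heart_rate": 74, "site": "clinic-7"}
safe = pseudonymize(record)
print(safe["heart_rate"], safe["site"])   # 74 clinic-7
```

Because the pseudonym is deterministic, central analytics can still correlate records belonging to the same person without ever receiving the raw identifier.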
Edge security. Edge computing offers an additional opportunity to implement and ensure data security. Although cloud providers offer IoT services that specialize in complex analysis, enterprises remain concerned about the safety of data once it leaves the edge and travels back to the cloud. Any data traversing the network to the cloud or data center can be encrypted, and the edge deployment itself can be hardened against hackers and other malicious activity, even when security on the IoT devices themselves remains limited.
Challenges and Limitations
Edge computing can provide compelling benefits across a variety of applications, but the technology is far from foolproof. Beyond the network limitations discussed above, several key factors will affect the adoption of edge computing.
Limited capabilities. Part of the appeal cloud computing holds over edge (or fog) computing is the sheer scale and variety of its resources and services. Deploying infrastructure at the edge can be effective, but the scope and purpose of the deployment must be clearly defined: even a large-scale edge deployment serves a specific purpose with limited resources and services.
Connectivity. Edge computing can overcome typical network limitations, but it still requires some minimum level of connectivity. It is important to consider what happens when connectivity is lost at the edge and to design a deployment that tolerates poor or intermittent connectivity. Edge computing is only viable with the right combination of autonomy and graceful failure planning.
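Graceful failure at the edge often comes down to store-and-forward: buffer data locally while the link is down, and drain the backlog once it returns. A minimal sketch with a pluggable send function, where all names and the buffer size are illustrative:

```python
from collections import deque

class StoreAndForward:
    """Buffer readings locally; flush upstream only when the link is up."""

    def __init__(self, send, max_buffer=10_000):
        self.send = send                        # callable(item) -> True on success
        self.buffer = deque(maxlen=max_buffer)  # oldest items dropped if full

    def record(self, item):
        self.buffer.append(item)

    def flush(self):
        """Try to drain the buffer; stop at the first failed send."""
        sent = 0
        while self.buffer:
            if not self.send(self.buffer[0]):
                break                           # link is down; keep item buffered
            self.buffer.popleft()
            sent += 1
        return sent

# Simulate a flaky link that comes back up:
link = {"up": False}
uploaded = []

def send(item):
    if link["up"]:
        uploaded.append(item)
        return True
    return False

saf = StoreAndForward(send)
saf.record("t=20.1")
saf.record("t=20.3")
saf.flush()          # link down: nothing sent, data stays buffered
link["up"] = True
saf.flush()          # link restored: backlog drains in order
print(uploaded)      # ['t=20.1', 't=20.3']
```

The bounded `deque` is a deliberate choice: with intermittent links, an unbounded buffer would eventually exhaust the limited storage of an edge device, so the oldest data is sacrificed first.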
Security. Internet of Things devices are notoriously insecure, so it is vital to design an edge computing environment that emphasizes security. This includes policy-driven configuration enforcement, security in the compute and storage resources themselves, regular software patching and updates, and encryption of data at rest and in flight. IoT services from major cloud providers include secure communications, but this is not automatic when building an edge site from scratch.
Data lifecycles. The perennial problem with today’s data glut is that so much of the data is not needed. Consider a medical monitoring device: not all of the data is critical; it is the problem data that matters, and there is little point in keeping days of normal patient readings. Most of the data involved in real-time analytics is short-term data that is not kept over the long term. A business must decide which data to keep and which to discard once the analysis is complete, and the data that is retained must be protected in accordance with business and regulatory policies.
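The keep-or-discard decision above can be expressed as a simple retention policy. This is a sketch under assumed rules (the record classes and retention windows are illustrative, not derived from any regulation):

```python
from datetime import datetime, timedelta

# Illustrative policy: alerts are kept a year, routine data a week.
RETENTION = {
    "alert": timedelta(days=365),
    "normal": timedelta(days=7),
}
DEFAULT_RETENTION = timedelta(days=30)

def expired(record: dict, now: datetime) -> bool:
    """True if a record has outlived its retention window and can be discarded."""
    keep_for = RETENTION.get(record["class"], DEFAULT_RETENTION)
    return now - record["created"] > keep_for

now = datetime(2024, 6, 1)
records = [
    {"class": "normal", "created": datetime(2024, 5, 1)},  # 31 days old: discard
    {"class": "alert",  "created": datetime(2024, 5, 1)},  # alerts kept a year
]
kept = [r for r in records if not expired(r, now)]
print([r["class"] for r in kept])  # ['alert']
```

In practice, the same policy table would also encode the protection requirements (encryption, access controls) each class of retained data must meet.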
Implementation of distributed edge computing
Although edge computing may seem simple on paper, developing a cohesive strategy and implementing a sound deployment at the edge can be challenging. The first vital element of any successful technology deployment is a meaningful business and technical edge strategy. Such a strategy is not about picking vendors or gear; it considers why edge computing is needed and what the organization is trying to accomplish with it, in terms of both the business and the technical problems to be solved, such as overcoming network constraints or observing data sovereignty.
Such strategies might start with a discussion of what the edge means, where it exists for the business, and how it should benefit the organization. Edge strategies should also align with existing technology roadmaps and business plans. For example, if a business seeks to reduce its centralized data center footprint, edge and other distributed computing technologies may be a good fit.
As implementation draws closer, it is important to evaluate hardware and software options carefully. There are many vendors in the edge computing space, including Cisco, Amazon, Dell, and HPE. Each offering must be evaluated for cost, performance, features, and interoperability, and the software tools must provide comprehensive visibility and control over the remote edge environment.
The actual deployment of an edge computing initiative can vary dramatically in scope and scale, from a small computing unit in a rugged enclosure atop a utility pole to a vast array of sensors feeding a high-bandwidth, low-latency network connection to the cloud. No two edge deployments are the same, and it is these variations that make edge strategy and planning so critical to the success of edge projects.
An edge deployment demands comprehensive monitoring. Since it may be difficult or even impossible to send IT personnel to a physical edge site, edge deployments should be designed for resilience, fault tolerance, and self-healing. Monitoring tools must offer a clear overview of the remote deployment, enable easy provisioning and configuration, provide detailed alerting and reporting, and maintain the security of the installation and its data. Monitoring of distributed edge sites often covers an array of metrics and KPIs, such as site availability and uptime, network performance, storage capacity and utilization, and data security.
Security. Both physical and logical security measures are essential. They should include tools that emphasize vulnerability management and intrusion detection and prevention. Security must extend to the sensor and IoT devices themselves, since each device can be accessed, hacked, or both, presenting a wide range of possible attack points.
Connectivity. Connection is another issue. Access to control and reporting must be possible even when the actual data connectivity is not available. Edge deployments may use a secondary connection to provide backup connectivity and control.
Management. The remote and often inhospitable locations of edge deployments make remote provisioning and management essential. IT managers must be able to see what is happening at the edge and control the deployments when needed.
Maintenance. Maintenance requirements cannot be ignored. IoT devices have limited life expectancies and require regular battery and device replacements, and gear eventually fails and must be replaced. Maintenance planning must include practical site logistics.
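The monitoring metrics and KPIs described above can be rolled into a compact health report that each edge site pushes to central monitoring. The sketch below uses illustrative field names, not any standard schema:

```python
import json

def site_health_snapshot(ts: int, uptime_s: int, storage_used_gb: float,
                         storage_total_gb: float, link_ok: bool) -> str:
    """Assemble a compact KPI report an edge site pushes upstream.

    Covers the kinds of metrics central monitoring typically wants:
    availability (uptime), storage utilization, and connectivity.
    """
    return json.dumps({
        "ts": ts,                     # report timestamp (epoch seconds)
        "uptime_s": uptime_s,         # seconds since last restart
        "storage_pct": round(100 * storage_used_gb / storage_total_gb, 1),
        "link_ok": link_ok,           # primary data link status
    })

print(site_health_snapshot(1700000000, 86_400, 120, 512, True))
# {"ts": 1700000000, "uptime_s": 86400, "storage_pct": 23.4, "link_ok": true}
```

Keeping the report small matters: it may have to travel over the same constrained or secondary link used for control and reporting when the primary data connection is down.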
IoT and 5G options
Edge computing continues to evolve, using new technologies and practices to enhance its capabilities and performance. The most notable trend is edge availability, with edge services expected to be available worldwide by 2028. Where edge computing is often situation-specific today, the technology is set to become more ubiquitous and transform the way the internet is used, bringing more abstraction and more potential uses for edge technology.
This is evident in the growing availability of compute, storage, and network products designed specifically for edge computing. More multivendor partnerships will enable greater product interoperability at the edge; an example is a partnership between AWS and Verizon to bring better connectivity to the edge.
Wireless communication technologies such as 5G and Wi-Fi 6 will also affect edge deployments and utilization in the coming years, enabling virtualization and automation capabilities that have yet to be explored, such as better vehicle autonomy, workload migrations to the edge, and more cost-effective wireless networks.
This diagram demonstrates in detail how 5G provides significant advances for edge computing and core networks over 4G and LTE capabilities.
Edge computing gained attention with the explosion of IoT devices and the sudden glut of data those devices produce. Because IoT technologies are still in their relative infancy, their evolution will also shape the future development of edge computing. One example of such future alternatives is the development of micro modular data centers (MMDCs). The MMDC is essentially a data center in a box: it places computing closer to the data within a small, mobile system.