Explained: edge computing

Credit to Author: Pieter Arntz | Date: Mon, 30 Dec 2019 18:41:23 +0000

Edge computing may seem like a foreign and future-facing term. Yet its applications are widespread and diverse, with the ability to transform the way we store, use, and share data and programs online. The implications of edge computing are far-reaching, trickling down from software development and business applications to everyday computing—even to gameplay.

Recently, I followed a discussion about whether online gaming’s performance and graphics could ever compare to that of consoles. Gaming consoles typically provide users with faster action, more detailed environments and precise movements, quicker reaction times, and higher resolution than online games.

While some stated that online games could never measure up, others noted that the gaming industry keeps moving its focus toward online play. Redbox, for example, has stopped offering physical video games for rent. So will gamers have to forever trade game performance and quality for ease-of-use? Or can they have both? If game developers want to make this dream a reality, it will certainly involve edge computing.

What is edge computing?

Edge computing is a method by which data storage and computation happen closer to the location where they are needed, reducing latency and improving bandwidth when using cloud-based applications. This can be a huge benefit for those streaming videos, opening large files, or playing online games. To accomplish this, we might see Content Delivery Networks (CDNs), or networks of proxy servers set up in different locations, combined with cloud functionality to deliver the requested data almost directly to the user.

As more and more applications move to the cloud, shared bandwidth becomes increasingly problematic. Edge computing, then, is being hailed as the next movement in software development and data storage. Let’s explain a few of the basic principles behind edge computing to understand how it will apply across the technologies we know and use every day.

Latency in computing

Latency is the time interval between a stimulus and its response, or, to simplify even further, the delay between the cause and the effect of some physical change in the system being observed. Latency can happen in the human nervous system, in mechanical engineering, and, of course, in computing. Whenever you watch a streaming service buffer, a pinwheel rotate around and around, or a web page load slowly, that’s latency in a nutshell.

In that context, network latency describes the delay that takes place during communication over a network (including the Internet). Latency mostly depends on the type of connection and the proximity of the nearest server.

For a regular Internet connection, a latency of 100 ms (milliseconds) is considered acceptable today, though users are arguably becoming less and less patient. For a good gaming experience, you would want latency of 30 ms or less. In virtual reality applications, any latency above 7 ms can produce motion sickness.
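To put those numbers in context, here is a minimal sketch that estimates round-trip latency by timing a TCP handshake. The host, port, and number of attempts are arbitrary choices for illustration, not a recommendation.

```python
import socket
import time

def measure_latency(host: str, port: int = 443, attempts: int = 5) -> float:
    """Estimate average round-trip time (in ms) by timing TCP handshakes."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # completing the handshake is enough to estimate latency
        total += (time.perf_counter() - start) * 1000
    return total / attempts

# Anything above ~30 ms would already be noticeable in fast-paced gaming.
print(f"{measure_latency('example.com'):.1f} ms")
```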

Content delivery networks

Content delivery networks (CDNs) are systems of distributed servers (networks) that deliver web content to a user based on their geographic location, the origin of the webpage, and the content delivery server itself.

In layman’s terms, this means the information is copied to servers around the globe, and a user gets it from the closest server that has the requested information available. This also allows geo-specific content to be distributed for optimal usage. After all, having a Dutch EULA on a server in Japan doesn’t make a lot of sense.

CDNs, as mentioned above, will provide a critical pathway from data stored in the cloud to the user, essentially bouncing the information from a single massive server to the servers closest to the exchange of data (the web content and the user).
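As a rough illustration of the “closest server” idea, the sketch below picks the edge node with the shortest great-circle distance to the user. The node list and coordinates are hypothetical; real CDNs typically solve this with DNS or anycast routing rather than explicit coordinates, but the goal is the same.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge nodes: name -> (latitude, longitude)
EDGE_NODES = {
    "amsterdam": (52.37, 4.90),
    "tokyo": (35.68, 139.69),
    "new-york": (40.71, -74.01),
}

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_node(user_location):
    """Return the edge node closest to the user's coordinates."""
    return min(EDGE_NODES, key=lambda name: distance_km(user_location, EDGE_NODES[name]))

# A user in Utrecht should be routed to the Amsterdam node.
print(nearest_node((52.09, 5.12)))
```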

The cloud

This is where the equation for edge computing comes together: at the edge of the cloud. CDNs alone can’t deliver all the necessary data, solve for latency, and allow easier access. Cloud computing, then, or the delivery of on-demand computing resources such as applications and data centers over the Internet, completes the formula.

Cloud resources are often split up in three ways:

  • Public: Cloud services are delivered over the Internet and sold on demand, which provides customers with a great amount of flexibility. You only pay for what you need.
  • Private: Cloud services are delivered over the business network from the owner’s data center. You have control over the hardware, as well as the management and related costs.
  • Hybrid: A mix of the above. Businesses can choose to have control over the most sensitive data, and use public services to cover the rest of their needs.

Edge computing would likely employ the hybrid solution with a distributed cloud platform, which means that cloud resources are placed strategically so that the locations with the highest demand get the highest level of resources.
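In practice, a hybrid setup often comes down to a simple routing rule: keep the most sensitive data on the private side and push everything else to public resources. Below is a minimal sketch of that decision; the classification labels and destinations are hypothetical and not tied to any specific cloud provider.

```python
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    sensitivity: str  # "public", "internal", or "confidential" (illustrative labels)

def pick_destination(record: Record) -> str:
    """Route confidential data to the private cloud, everything else to public resources."""
    if record.sensitivity == "confidential":
        return "private-datacenter"
    return "public-cloud"

for rec in [Record("marketing-assets", "public"), Record("customer-pii", "confidential")]:
    print(rec.name, "->", pick_destination(rec))
```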

Netflix: a special case

Considering edge computing’s applications, you might be inclined to envision streaming video services as beneficiaries. That might be so one day, but right now, the biggest name in the game, Netflix, has achieved fast-loading streams for millions of viewers at once without going to the edge.

Netflix has grown to serve over 50 million subscribers in 40 countries. To optimize the user experience, Netflix has taken online video streaming to the next level by building their own CDN, partnering with ISPs in the serviced countries, and developing a system that adapts the quality (resolution) of the content to the users’ connection. They are not employing edge computing techniques because they have built their own infrastructure instead.

What this means in practice is that Netflix works directly with the ISPs by installing boxes called Open Connect Appliances either at exchange points or within the ISPs. These boxes can hold up to 280 terabytes of video, enough for everything Netflix has to offer in your neck of the woods.

This means that in most cases, you are connecting to Netflix through your own ISP, provided they are one of Netflix’s partners, which results in maximum speed and low latency. As an extra method of avoiding noticeable buffering, Netflix can lower the image quality, which results in fewer pixels being sent.
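That quality adjustment is adaptive streaming in a nutshell: the player picks the highest resolution the connection can sustain and steps down before the viewer sees a buffering wheel. A simplified sketch with a hypothetical quality ladder and made-up throughput figures follows; Netflix’s real algorithm is far more sophisticated.

```python
# Quality ladder: resolution -> required throughput in Mbit/s (illustrative values)
QUALITY_LADDER = [("2160p", 15.0), ("1080p", 5.0), ("720p", 3.0), ("480p", 1.5)]

def pick_quality(measured_mbps: float) -> str:
    """Choose the highest resolution the measured throughput can sustain."""
    for resolution, required in QUALITY_LADDER:
        if measured_mbps >= required:
            return resolution
    return "240p"  # fall back to the lowest tier instead of buffering

print(pick_quality(4.2))  # -> "720p"
```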

Edge computing

The goal of edge computing is to achieve the resolution, speed, and ease of access that Netflix offers us, but without having to make the huge investments in infrastructure. The trick is to create a mix of hardware solutions and distributed cloud resources that can be deployed so that every endpoint user has the impression they are working locally.

When looking for an edge computing solution, it is imperative to know whether demand will remain distributed in roughly the same way, or whether we need more flexibility to handle peak usage that shifts between locations.

To keep the mix of resources in sync with shifts in demand, both in size and in location, we will need a software solution that tracks demand and adjusts the settings to meet the set parameters, preferably one that warns us in advance when it is approaching the limits of what we have defined as acceptable.

This way, we can make informed decisions about whether we need to expand our hardware, shell out more for cloud services, or start looking for better management software.
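As one way to picture such a watchdog, here is a hypothetical sketch, with made-up thresholds and capacity figures, that compares current demand against provisioned capacity per location and raises a warning well before the hard limit is reached.

```python
# Hypothetical capacity watchdog: warn before demand exceeds what we have provisioned.
WARN_RATIO = 0.8  # warn at 80% of capacity, well before the hard limit

def check_capacity(region: str, current_load: float, capacity: float) -> str:
    """Compare current demand against provisioned capacity for one location."""
    ratio = current_load / capacity
    if ratio >= 1.0:
        return f"{region}: OVER capacity ({ratio:.0%}), expand hardware or cloud resources"
    if ratio >= WARN_RATIO:
        return f"{region}: approaching limit ({ratio:.0%}), plan for more resources"
    return f"{region}: healthy ({ratio:.0%})"

for region, load, cap in [("eu-west", 420, 500), ("us-east", 510, 500)]:
    print(check_capacity(region, load, cap))
```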

Security in edge computing

As is usual in quickly evolving fields, edge computing runs the risk of deploying security measures as an afterthought. It’s nice when our employees in remote offices can participate as if they were next door, but not at the cost of leaking business information along the lines of communication or at the edges of our corporate network.

Assuming you have security within your own network perimeter in order, the next logical step is to lock down the pathway of information to and from the cloud, as well as the data stored in the cloud, all without a noticeable impact on latency.

Since increased speed of communication was the goal we set out to achieve, it is tempting to forget that we still need to check what goes out and what comes back in. But neglecting these checks might turn the devices at the edge into open doors into your infrastructure. On the plus side, in edge computing, the devices at the perimeter only get what they need by design, which limits the chance of any threat actor retrieving a complete set of data from one device.
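One simple way to check what goes out is an allowlist at the edge: each device may only talk to the endpoints it actually needs. The sketch below uses hypothetical device IDs and destination names purely for illustration.

```python
# Hypothetical per-device allowlist: each edge device only reaches what it needs.
ALLOWED_DESTINATIONS = {
    "edge-cam-01": {"storage.internal.example", "updates.example.com"},
    "edge-pos-02": {"payments.example.com"},
}

def egress_allowed(device_id: str, destination: str) -> bool:
    """Permit outbound traffic only to destinations on the device's allowlist."""
    return destination in ALLOWED_DESTINATIONS.get(device_id, set())

print(egress_allowed("edge-cam-01", "updates.example.com"))       # True
print(egress_allowed("edge-pos-02", "storage.internal.example"))  # False
```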

The future of edge computing

With 5G on the horizon and artificial intelligence ready to orchestrate resources, we see a bright future for edge computing. Latency might even become low enough to conquer the gaming industry. Looking at the pile of plastic in my closet, it occurs to me that this will be better for the planet as well.
