Cisco Systems, which coined the term “fog computing,” and IBM, which prefers the term “edge computing,” both have ongoing initiatives to push computing back out to the edge of the network, to the devices, routers and sensors where the Internet ends and the real world begins, blanketing our world with a fog of devices that have the computing power to handle much of the data and processing we ask of them on their own. Fog computing, also known as fog networking or fogging, is a decentralized computing infrastructure in which data, compute, storage and applications are distributed in the most logical, efficient place between the data source and the cloud. It essentially extends cloud computing and services to the edge of the network, bringing the advantages and power of the cloud closer to where data is created and acted upon. Fog computing, also sometimes called edge computing, addresses a key limitation of cloud computing by keeping data closer “to the ground,” so to speak, in local computers and devices, rather than routing everything through a central data center in the cloud. The goal of fogging is to reduce the amount of data transported to the cloud for processing, analysis and storage. This is usually done to improve efficiency, though it may also be done for security and compliance reasons.
In our modern, western world, we are surrounded by a huge amount of computing power at all times. How many devices do you have at home at any given moment? A phone for every member of the family, a tablet or two, and probably a laptop as well? What if the laptop could download software updates and then share them with the phones and tablets? Instead of each device spending precious (and slow) bandwidth to download the updates from the cloud individually, they could use the computing power already around them and communicate internally. That is fog computing. It allows data to be processed and accessed more rapidly, efficiently and reliably from the most logical location, which reduces the risk of data latency.
How Does Fog Computing Work?
In a fog environment, processing takes place in a data hub on a smart device, or in a smart router or gateway, thus reducing the amount of data sent to the cloud. It is important to note that fog networking complements, rather than replaces, cloud computing: fogging allows for short-term analytics at the edge, while the cloud performs resource-intensive, longer-term analytics. Fog computing relies on fog nodes located at the edge of the network, or “in the fog.” Fog nodes are typically storage servers that sit between end users and the cloud, closer to the users. Because the distance between end-user devices and fog nodes is shorter, end users benefit from lower latency when transmitting and retrieving data. Owners of fog nodes can use them as local cache storage, serving clients better by keeping frequently accessed data close to the client instead of on a cloud-based server. Data that is not accessed frequently can remain on the cloud-based server and be retrieved on request.
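As a rough illustration of the caching role described above, here is a minimal sketch in Python. The names (FogNode, fetch_from_cloud) are purely illustrative assumptions, not part of any real product or API: the fog node answers repeat requests from its local store and falls back to the cloud only on a miss.

```python
# Minimal sketch of a fog node acting as a local cache in front of the cloud.
# All names (FogNode, fetch_from_cloud) are illustrative, not from any real API.

import time

def fetch_from_cloud(key):
    """Stand-in for a slow, bandwidth-consuming request to a cloud data center."""
    time.sleep(0.5)                    # simulate wide-area network latency
    return f"cloud-data-for-{key}"

class FogNode:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.cache = {}                # frequently accessed data kept near the user

    def get(self, key):
        if key in self.cache:          # served locally: low latency, no cloud traffic
            return self.cache[key]
        value = fetch_from_cloud(key)  # infrequent data still comes from the cloud
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict the oldest entry
        self.cache[key] = value
        return value

node = FogNode()
node.get("traffic-map")   # first request goes all the way to the cloud
node.get("traffic-map")   # repeat request is answered by the fog node itself
```

The design choice is the same one the paragraph describes: popular data lives at the edge, rarely used data stays in the cloud and is fetched on demand.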
Benefits of Fog Computing:
Extending the cloud closer to the things that generate and act on data benefits the business in the following ways:
Greater business agility: With the right tools, developers can quickly develop fog applications and deploy them where needed.
Better security: Protect your fog nodes using the same policy, controls, and procedures you use in other parts of your IT environment. Use the same physical security and cybersecurity solutions.
Deeper insights, with privacy control: Analyze sensitive data locally instead of sending it to the cloud for analysis.
Lower operating expense: Conserve network bandwidth by processing selected data locally instead of sending it all to the cloud for analysis (see the sketch after this list).
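To make the bandwidth point concrete, here is a minimal sketch, again with hypothetical names (send_to_cloud, summarise_and_forward): the fog node aggregates raw sensor readings locally and forwards only a small summary to the cloud, rather than every individual reading.

```python
# Sketch: aggregate readings at the fog node and upload only a summary.
# send_to_cloud and the reading values are illustrative placeholders.

from statistics import mean

def send_to_cloud(payload):
    print("uploading to cloud:", payload)   # stand-in for an HTTPS upload

def summarise_and_forward(readings):
    # One small summary replaces hundreds of raw data points on the network.
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }
    send_to_cloud(summary)

# 1,000 raw temperature readings stay on the local network; only four numbers leave it.
raw_readings = [20 + (i % 7) * 0.1 for i in range(1000)]
summarise_and_forward(raw_readings)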
Demerits of Fog Computing:
Depending on one’s perspective, some of the advantages of Fog Computing function as disadvantages:
Physical locality: Some would argue that the whole point of using the Cloud is to access data and resources from anywhere, regardless of physical location. Although Fog Computing merely provides a more selective way of deciding which data is centralized and which stays local, some see the limits on access to locally held data as a disadvantage.
Security: Security has long been regarded as the Achilles heel of the Cloud, but with a number of developments in this space over the past several years, questions of security largely come down to a matter of trust. Certain organizations feel more comfortable having their data in a centralized location rather than in remote, disparate ones, although the former option can complicate data governance when considered on a global scale.
Confusion: There is also the view that Fog Computing merely adds to the number of Cloud options (public, private, hybrid, cloudlets, etc.) and needlessly complicates an architecture that is already complex enough.
Applications of Fog Computing:
Connected Cars: The advent of semi-autonomous and self-driving cars will only increase the already large amount of data vehicles create. Operating cars independently requires the capability to analyze certain data locally in real time, such as surroundings, driving conditions and directions. A fog computing environment would enable communications for all of these data sources both at the edge (in the car) and to its end point (the manufacturer).
Smart cities and smart grids: Like connected cars, utility systems increasingly use real-time data to run more efficiently. Sometimes this data comes from remote areas, so processing it close to where it is created is essential. At other times the data needs to be aggregated from a large number of sensors. Fog computing architectures can be devised to address both of these issues.
Real-time analytics: A host of use cases call for real-time analytics, from manufacturing systems that need to react to events as they happen to financial institutions that use real-time data to inform trading decisions or monitor for fraud. Fog computing deployments can help facilitate the transfer of data between where it is created and the variety of places it needs to go, as illustrated in the sketch below.
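A rough sketch of the real-time case follows, assuming hypothetical names (trip_breaker, forward_to_cloud): the fog node acts on an event immediately and reports it to the cloud afterwards, so the reaction does not wait on a round trip to a distant data center.

```python
# Sketch: react locally in real time, then report to the cloud for long-term analysis.
# trip_breaker and forward_to_cloud are illustrative stand-ins.

THRESHOLD = 90.0   # e.g. a temperature or vibration limit on a production line

def trip_breaker(sensor_id):
    print(f"local action: shutting down {sensor_id}")    # immediate, no cloud round trip

def forward_to_cloud(event):
    print("queued for cloud analytics:", event)           # asynchronous, best effort

def handle_reading(sensor_id, value):
    if value > THRESHOLD:
        trip_breaker(sensor_id)                  # decision made at the edge, in real time
    forward_to_cloud({"sensor": sensor_id, "value": value})  # cloud keeps the full history

handle_reading("press-7", 93.4)   # triggers a local shutdown and a cloud report
handle_reading("press-8", 71.2)   # only reported for longer-term analytics
```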
Fog computing gives the cloud a companion to handle the two exabytes of data generated daily by the Internet of Things. Processing data closer to where it is produced and needed solves the challenges of exploding data volume, variety and velocity. Fog computing accelerates awareness of, and response to, events by eliminating a round trip to the cloud for analysis. It avoids the need for costly bandwidth additions by offloading gigabytes of network traffic from the core network. It also protects sensitive IoT data by analyzing it inside company walls. Ultimately, organizations that adopt fog computing gain deeper and faster insights, leading to increased business agility, higher service levels and improved safety.
