Part 10: Cloud Computing and Fog Computing


Cloud Computing and Fog Computing

Cloud Computing: The delivery of on-demand computing services, from applications to storage and processing power, over the internet is known as cloud computing. It is a pay-as-you-go service: without owning any computing infrastructure or data centers, anyone can rent access to anything from applications to storage from a cloud service provider. By using cloud computing services, we avoid the complexity of owning and maintaining infrastructure and pay only for what we use. In turn, cloud service providers benefit from significant economies of scale by delivering the same services to a wide range of customers.

Fog Computing:

Fog computing is a decentralized computing infrastructure in which computing resources are located between the data source and the cloud or another data center. It is a paradigm that serves user requests at the edge of the network. The devices at the fog layer are typically networking equipment such as routers, gateways, bridges, and hubs; researchers envision these devices being capable of performing computational and networking operations simultaneously. Although these devices are resource-constrained compared to cloud servers, their geographical spread and decentralized nature help in offering reliable services with coverage over a wide area. What defines fog computing is the physical location of the devices, which are much closer to users than cloud servers are.

Many people use the terms fog computing and edge computing interchangeably because both involve bringing intelligence and processing closer to where the data is created. This is often done to improve efficiency, though it might also be done for security and compliance reasons.

The fog metaphor comes from the meteorological term for a cloud close to the ground: just as fog sits near the ground, fog computing concentrates at the edge of the network. The term is often associated with Cisco; the company's product line manager, Ginny Nichols, is believed to have coined it. Cisco Fog Computing is a registered name; fog computing is open to the community at large.

History of fog computing

In 2015, Cisco partnered with Microsoft, Dell, Intel, Arm and Princeton University to form the OpenFog Consortium. Other organizations, including General Electric (GE), Foxconn and Hitachi, also contributed to this consortium. The consortium’s primary goals were to both promote and standardize fog computing. The consortium merged with the Industrial Internet Consortium (IIC) in 2019.

Fog computing vs. edge computing

According to the OpenFog Consortium started by Cisco, the key difference between edge and fog computing is where the intelligence and compute power are placed. In a strictly fog-based environment, intelligence sits at the local area network (LAN): data is transmitted from endpoints to a fog gateway, where it is then sent on to the appropriate resources for processing before the results are transmitted back.

In edge computing, intelligence and power can be in either the endpoint or a gateway. Proponents of edge computing praise its reduction of points of failure, because each device operates independently and determines which data to store locally and which data to send to a gateway or the cloud for further analysis. Proponents of fog computing say it is more scalable and gives a better big-picture view of the network, as multiple data points feed into it. It should be noted, however, that some network engineers consider fog computing to be simply a Cisco brand for one approach to edge computing.

How fog computing works

Fog networking complements, rather than replaces, cloud computing: fogging enables short-term analytics at the edge, while the cloud performs resource-intensive, longer-term analytics. Although edge devices and sensors are where data is generated and collected, they sometimes lack the compute and storage resources to perform advanced analytics and machine learning tasks. Cloud servers have the power to do this, but they are often too far away to process the data and respond in a timely manner. In addition, having all endpoints connect to the cloud and send raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data subject to regulations in different countries. Popular fog computing applications include smart grids, smart cities, smart buildings, vehicle networks and software-defined networks.
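To make this division of labor concrete, the minimal Python sketch below models a fog gateway loop: it reacts to a threshold crossing locally, within milliseconds, and forwards only a compact per-minute summary to the cloud. The sensor source, the alert threshold, and the cloud endpoint (https://cloud.example.com/ingest) are hypothetical placeholders chosen for illustration, not part of any specific product or API.

```python
import json
import random
import statistics
import time
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # hypothetical cloud API
ALERT_THRESHOLD = 75.0      # e.g. degrees Celsius; assumed value
SUMMARY_INTERVAL = 60.0     # upload one summary per minute instead of raw readings


def read_sensor() -> float:
    """Stand-in for a real sensor read (temperature in Celsius)."""
    return random.gauss(60.0, 8.0)


def act_locally(value: float) -> None:
    """Immediate, low-latency action taken at the edge, with no cloud round trip."""
    print(f"ALERT: {value:.1f} exceeds {ALERT_THRESHOLD}, taking local action")


def upload_summary(window: list[float]) -> None:
    """Send only an aggregate to the cloud for longer-term, resource-intensive analytics."""
    summary = {
        "count": len(window),
        "mean": statistics.fmean(window),
        "max": max(window),
        "min": min(window),
    }
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


def fog_gateway_loop() -> None:
    window: list[float] = []
    last_upload = time.monotonic()
    while True:
        value = read_sensor()
        window.append(value)
        if value > ALERT_THRESHOLD:       # short-term analytics handled at the edge
            act_locally(value)
        if time.monotonic() - last_upload >= SUMMARY_INTERVAL:
            upload_summary(window)        # only the summary crosses the WAN
            window.clear()
            last_upload = time.monotonic()
        time.sleep(1.0)                   # 1 Hz sampling, assumed


if __name__ == "__main__":
    fog_gateway_loop()
```

In a real deployment the summary would more likely travel over an IoT protocol such as MQTT, but plain HTTPS keeps the sketch self-contained; the point is only the pattern of local reaction plus periodic aggregation.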

Fog computing benefits and drawbacks

Like any other technology, fog computing has its pros and cons. Some of the advantages of fog computing include the following:

Bandwidth conservation. Fog computing reduces the volume of data that is sent to the cloud, thereby reducing bandwidth consumption and related costs (a rough back-of-the-envelope estimate follows this list).

Improved response time. Because the initial data processing occurs near the data, latency is reduced, and overall responsiveness is improved. The goal is to provide millisecond-level responsiveness, enabling data to be processed in near-real time.

Network-agnostic. Although fog computing generally places compute resources at the LAN level — as opposed to the device level, which is the case with edge computing — the network could be considered part of the fog computing architecture. At the same time, though, fog computing is network-agnostic in the sense that the network can be wired, Wi-Fi or even 5G.
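To put the bandwidth-conservation point in rough numbers, the short calculation below compares streaming raw readings against uploading per-minute summaries. All figures (100 sensors, 1 Hz sampling, 200-byte readings, 300-byte summaries) are assumptions chosen only to illustrate the order of magnitude, not measurements from any particular deployment.

```python
SENSORS = 100            # assumed fleet size
SAMPLE_HZ = 1            # one reading per sensor per second
READING_BYTES = 200      # assumed size of one raw reading
SUMMARY_BYTES = 300      # assumed size of one per-minute aggregate per sensor

SECONDS_PER_DAY = 24 * 60 * 60
MINUTES_PER_DAY = 24 * 60

raw_per_day = SENSORS * SAMPLE_HZ * SECONDS_PER_DAY * READING_BYTES
summarized_per_day = SENSORS * MINUTES_PER_DAY * SUMMARY_BYTES

print(f"raw to cloud:       {raw_per_day / 1e9:.2f} GB/day")
print(f"summaries to cloud: {summarized_per_day / 1e6:.2f} MB/day")
print(f"reduction factor:   {raw_per_day / summarized_per_day:.0f}x")
```

With these assumed numbers, the fog layer cuts WAN traffic from about 1.73 GB/day to about 43 MB/day, roughly a 40x reduction; the exact factor depends entirely on how aggressively data is aggregated at the edge.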

Of course, fog computing also has its disadvantages, some of which include the following:

Physical location. Because fog computing is tied to a physical location, it undermines some of the “anytime/anywhere” benefits associated with cloud computing.
Potential security issues. Under certain circumstances, fog computing can be subject to security issues, such as Internet Protocol (IP) address spoofing or man-in-the-middle (MitM) attacks.
Startup costs. Fog computing is a solution that utilizes both edge and cloud resources, which means that there are associated hardware costs.
Ambiguous concept. Even though fog computing has been around for several years, there is still some ambiguity around its definition, with various vendors defining fog computing differently.

Below is a table of differences between Cloud Computing and Fog Computing:

| Feature | Cloud Computing | Fog Computing |
| --- | --- | --- |
| Latency | High latency compared to fog computing | Low latency |
| Data reduction | Does not reduce data while sending or transforming it | Reduces the amount of data sent to the cloud |
| Responsiveness | Lower; responses take longer | Higher; responses arrive faster |
| Security | Less secure compared to fog computing | Higher security |
| Speed | Access speed depends on VM connectivity | Higher than cloud computing, since processing happens close to the data source |
| Data integration | Multiple data sources can be integrated | Multiple data sources and devices can be integrated |
| Mobility | Limited | Supported |
| Location awareness | Partially supported | Supported |
| Number of server nodes | Few | Large |
| Geographical distribution | Centralized | Decentralized and distributed |
| Location of service | Services provided within the internet | Services provided at the edge of the local network |
| Working environment | Dedicated data center buildings with air conditioning | Outdoor (streets, base stations, etc.) or indoor (houses, cafes, etc.) |
| Communication mode | IP network | Wireless (WLAN, Wi-Fi, 3G, 4G, ZigBee, etc.) or wired (part of the IP network) |
| Dependence on core network quality | Requires a strong core network | Can also work with a weak core network |
