Workload Classification: For Better Resource Management in Fog-Cloud Environments

Zahid Raza, Nupur Jangu
Copyright: © 2022 | Pages: 14
DOI: 10.4018/IJSSOE.297135

Abstract

The traditional cloud-only architecture struggles to cope with the ever-increasing number of IoT devices, which calls for a hybrid cloud computing and fog/IoT architecture to handle workloads and requests more effectively. Where a user workload should be assigned depends on the workload itself, so workload classification plays a pivotal role. This placement and offloading decision affects both users and providers. This work describes various cloud-fog workloads and relates them to their most suitable place of execution in such a hybrid environment. The workloads are classified on the basis of their parameters and characteristics in order to identify appropriate resources for efficient resource provisioning. Workload classification and characterization promise a significant role in resource management through efficient capacity planning, prediction of future resource requirements, workload offloading, and improved Quality of Service (QoS), leading to better overall system performance.

Introduction

Cloud computing is a paradigm based on a utility model for on-demand service provisioning. It follows a pay-as-you-go model, which depends on how the cloud service provider makes its resources available to users and how cloud users consume these resources. The Internet of Things (IoT) and fog computing have also made their presence felt with the rising need for smart computing and smart environments. However, these paradigms differ in the characteristics of their participating constituents and in the objectives they aim to realize.

Resource provisioning, defined as the appropriate selection, deployment, and management of available resources subject to the optimization needs, is a genuinely demanding problem. Efficient resource provisioning can be realized only when the workload characteristics are properly matched to the computing environment. Resource management becomes even more complex and challenging when fog and IoT systems come into the picture, as the resources provided by these nodes are heterogeneous, unlike the cloud, where resources are largely homogeneous. The objective of this work is to address these challenges, which call for explicit knowledge and understanding of the demands of workloads generated by cloud, fog, and IoT devices, and of their suitability, before assigning them to resources and dispatching them for actual execution.
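To make the matching problem concrete, the following minimal sketch shows how a few workload characteristics could drive a fog-versus-cloud placement decision. The attribute names and thresholds (e.g., latency_sensitive, data_volume_mb, the 50 MB cutoff) are hypothetical illustrations, not the classification criteria proposed in this article.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """A few illustrative workload characteristics (hypothetical attributes)."""
    latency_sensitive: bool   # e.g., a real-time IoT control loop
    data_volume_mb: float     # amount of data the task must process
    compute_heavy: bool       # e.g., analytics or model training

def place_workload(w: Workload) -> str:
    """Toy placement rule: latency-critical, lightweight jobs go to the fog;
    compute- or data-heavy jobs go to the cloud."""
    if w.latency_sensitive and w.data_volume_mb < 50 and not w.compute_heavy:
        return "fog"
    return "cloud"

# Example: a sensor-driven control task versus a batch analytics job.
print(place_workload(Workload(True, 2.0, False)))     # -> fog
print(place_workload(Workload(False, 4000.0, True)))  # -> cloud
```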

In an age of blurring boundaries between cloud, IoT, fog, and a possible mix of other computing paradigms, it becomes especially important that user workloads are scheduled, as per user requirements, onto appropriate resources provisioned by the service providers so as to optimize the scheduling objectives. These objectives could be one or a combination of various Quality of Service (QoS) parameters, such as turnaround time, throughput, response time, security, and reliability. The performance of the entire gamut of computing depends on resource management. Thus, resource management, which includes resource provisioning, also referred to as job scheduling, becomes the core of the cloud and its complementary technologies, IoT and fog.

Job scheduling for the cloud and its peers has been classified as NP-hard because of the sizable solution space and the time required to find an optimal solution (Kalra & Singh, 2015). In a computing environment like the cloud, it is therefore preferable to find sub-optimal solutions, but in a reasonably short time. Researchers have suggested a variety of scheduling algorithms, each trying to use resources efficiently; these may be heuristic or metaheuristic based, and include multi-objective task scheduling, multilevel priority-based scheduling, load balancing, energy-efficient scheduling, and fault-tolerant workflow scheduling. Resource provisioning mechanisms can be static or dynamic, each with its own advantages and challenges. Efficient resource provisioning ensures that the Service Level Agreement (SLA) is met, an essential factor for business applications like the cloud (Pereira, Araujo & Maciel, 2019; Kothari & Mahalkari, 2017; Singh & Chana, 2016; Bhavani & Guruprasad, 2014).
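Since exhaustive search is intractable at this scale, simple heuristics that yield good-enough assignments quickly are widely used. The sketch below is a generic greedy minimum-completion-time heuristic, offered only as an illustration of such sub-optimal-but-fast scheduling; it is not an algorithm from the cited works, and the task lengths and node speeds in the example are made up.

```python
def greedy_schedule(task_lengths, node_speeds):
    """Assign each task to the node that would finish it earliest (greedy heuristic).

    task_lengths: list of task sizes in abstract work units.
    node_speeds:  list of node processing rates (work units per second).
    Returns a list of (task_index, node_index) assignments.
    """
    finish_times = [0.0] * len(node_speeds)
    assignments = []
    # Placing the longest tasks first tends to balance load better.
    for t in sorted(range(len(task_lengths)), key=lambda i: -task_lengths[i]):
        # Pick the node with the earliest completion time for this task.
        best = min(range(len(node_speeds)),
                   key=lambda n: finish_times[n] + task_lengths[t] / node_speeds[n])
        finish_times[best] += task_lengths[t] / node_speeds[best]
        assignments.append((t, best))
    return assignments

# Example: five tasks spread over a fast cloud node and a slower fog node.
print(greedy_schedule([10, 4, 7, 2, 9], [5.0, 2.0]))
```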

Job scheduling aims to provide appropriate resources to the incoming jobs while seeking to optimize specific objectives, also referred to as QoS parameters, e.g., minimizing turnaround time, minimizing response time, maximizing throughput, and maximizing security, to name a few. These QoS parameters, in turn, help define the Service Level Agreement (SLA). Overall, the scheduler may aim to optimize one or more parameters depending on the job requirements, the availability and nature of the resources, and the cost of execution in terms of both computation and financial cost. Table 1 gives a glimpse of some Resource Provisioning Mechanisms (RPM) and their aims in large-scale distributed systems like cloud, grid, IoT, and fog (Malik, Anwar, Ilyas, Jafar, Iftikhar, Malik & Deen, 2018).
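When a scheduler must trade off several QoS parameters at once, one common approach is a weighted score over normalized metrics. The sketch below assumes metrics already normalized to [0, 1] with 1 being best; the metric names and weights are illustrative assumptions, not the model used in this article.

```python
def qos_score(metrics, weights):
    """Weighted sum of normalized QoS metrics (higher is better).

    metrics: dict of metric name -> value in [0, 1], where 1 is already
             'best' (e.g., 1 - latency / latency_max).
    weights: dict of metric name -> relative importance, summing to 1.
    """
    return sum(weights[name] * metrics[name] for name in weights)

# Example: compare two candidate resources on response time, throughput, and cost.
fog_node   = {"response": 0.9, "throughput": 0.5, "cost": 0.8}
cloud_node = {"response": 0.6, "throughput": 0.9, "cost": 0.5}
w = {"response": 0.5, "throughput": 0.3, "cost": 0.2}
print(qos_score(fog_node, w), qos_score(cloud_node, w))
```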
