Cost–benefit analysis

Before offloading to the cloud, it is important to analyze its costs, such as time, energy, and money, and weigh them against local (monolithic) execution and storage.
Walker et al. [41] discuss when a consumer should decide to offload storage to a remote cloud such as Amazon EC2. Their model for evaluating the benefits of leasing storage from a cloud service, as opposed to buying hard drives, takes into account factors such as the cost of electricity, the cost of hard disks, disk power consumption, cloud storage price per GB, expected storage requirement, and human operator salary.
Li et al. [42] propose a model with a suite of metrics to calculate cloud cost. They consider two main costs of cloud computing: Total Cost of Ownership (TCO) and Utilization Cost. TCO is generally used as a financial estimate of the costs attributable to owning and managing an IT infrastructure. With respect to cloud computing, TCO is deemed an appropriate basis for estimating the commercial value of a cloud investment, and takes into account server cost, software cost, network cost, support and maintenance cost, power cost, cooling cost, facilities cost, and real-estate cost. Utilization Cost refers to the actual resources consumed by a particular user or application according to dynamic demand. Because of the ‘elasticity’ of a cloud, resources such as servers, software, power, and facilities (including UPS and battery systems) can be dynamically added to or removed from the resource pool as demand requires. Therefore, rather than statically summarizing the cash outlays, a cloud cost analysis must consider the impact of elastic utilization. Furthermore, since virtualization is widely adopted in cloud computing, Li et al. argue that virtual machines are the unit of resource in the cloud. VMs are therefore taken as the inputs to a three-layer derivation method that also takes cloud amortization into account.
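The distinction Li et al. draw between a static cash-outlay summary and an elasticity-aware utilization cost can be sketched as follows. This is an illustrative sketch of the idea only, not their three-layer derivation method, and all prices and the VM-hour trace are invented:

```python
# Hypothetical numbers illustrating Li et al.'s two cost notions; this is a
# sketch of the concept, not the paper's actual derivation method.

def static_tco(server, software, network, support, power, cooling,
               facilities, real_estate):
    """Static TCO: a one-off sum of the cash outlays for owning and
    managing the infrastructure."""
    return (server + software + network + support + power + cooling +
            facilities + real_estate)

def utilization_cost(vm_hours_per_period, cost_per_vm_hour):
    """Elasticity-aware cost: charge only for the VM-hours actually
    consumed as VMs are added to or removed from the resource pool."""
    return sum(hours * cost_per_vm_hour for hours in vm_hours_per_period)

tco = static_tco(50_000, 8_000, 5_000, 6_000, 4_000, 3_000, 2_000, 7_000)
# Demand fluctuates across four billing periods: 10, 40, 25, 5 VM-hours.
elastic = utilization_cost([10, 40, 25, 5], cost_per_vm_hour=0.12)
```

Summing fixed outlays, as in `static_tco`, ignores demand fluctuation; the second function captures the elastic utilization that the authors argue a cloud cost analysis must consider.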
Although some of these issues such as power consumption and cloud pricing are applicable to mobile cloud computing as well, additional issues and perspectives need to be considered.
The importance of cost–benefit analysis. In a mobile cloud, because of its inherently mobile, and hence dynamic, nature, the available resources are likely to change at any given moment. Therefore, a cost–benefit analysis is essential to weigh the predicted cost of offloading against the potential gain, evaluated with respect to user-specific requirements, as illustrated in Fig. 6.
Fig. 6. Cost–benefit analysis and decision making in a mobile cloud.
Cost models using resource monitoring and profiling. Spectra [27] and Chroma [28] are two of the earliest cyber-foraging systems for mobile devices that employ methods to weigh the cost of offloading to surrogate servers against its benefit. In Spectra, competing goals such as performance, energy conservation, and quality are evaluated when considering whether an application should be offloaded, and if so, onto which surrogate device. The performance, energy consumption, and quality are predicted for each potential surrogate, and the selection is made by balancing the competing goals against each other so as to give the best possible trade-off. Since a pervasive environment is made up of mobile devices whose resource availability changes constantly, Spectra continuously monitors resources such as CPU, network, battery, file cache state, remote CPU, and remote file cache state. Changes in these resources are taken into consideration when estimating the best placement. A ‘self-tuning’ approach is adopted to match the available resources with the application’s resource demands, since it is not practical for applications to define their exact resource requirements a priori. Rather, the self-tuning method observes application execution and maintains profile histories for each known surrogate. These profiles are continuously updated and consulted for future decisions.
To estimate the best trade-off, the gain also needs to be predicted. For this, Spectra builds on previous work by Narayanan et al. [43], using a prediction model based on the assumption that the resource consumption of an operation is similar to that of recent executions of similar operations. When a new operation is encountered for which no history data exists, the model employs linear regression to give an estimate. Chroma uses a somewhat similar approach called ‘tactics’, specified in a declarative language, while building on ideas from the same earlier work, Odyssey [44]. Chroma also employs resource monitoring and history-based prediction to weigh the outcome of each tactic plan against its estimated cost. History data for applications is logged offline, and the predictors update the logs online and use machine learning to optimize the predictions. To reach a decision, Chroma performs a trade-off using utility functions that compare attributes such as power consumption and speed. However, unlike in Spectra, the adaptive policies are separated from the decision making and policy enforcement at runtime.
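The history-based prediction scheme described above, with a linear-regression fallback for unseen operations, can be sketched roughly as follows. This is a minimal illustration in the spirit of Narayanan et al. [43], not Spectra's or Chroma's actual code; the class and the input-size feature are invented:

```python
# Minimal sketch of history-based resource prediction: assume an operation's
# cost resembles recent executions of the same operation, and fall back to a
# least-squares fit over an input-size feature when no history exists.
from collections import defaultdict

class HistoryPredictor:
    def __init__(self):
        self.history = defaultdict(list)  # operation name -> recent observed costs
        self.samples = []                 # (input_size, cost) pairs, all operations

    def record(self, op, input_size, cost):
        self.history[op].append(cost)
        self.samples.append((input_size, cost))

    def predict(self, op, input_size):
        if self.history[op]:
            # Known operation: average its most recent executions.
            recent = self.history[op][-5:]
            return sum(recent) / len(recent)
        # Unknown operation: least-squares line cost ~ a * input_size + b.
        n = len(self.samples)
        sx = sum(x for x, _ in self.samples)
        sy = sum(y for _, y in self.samples)
        sxx = sum(x * x for x, _ in self.samples)
        sxy = sum(x * y for x, y in self.samples)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a * input_size + b

p = HistoryPredictor()
p.record("resize", 1, 2.0)
p.record("resize", 2, 4.0)
p.record("filter", 3, 6.0)
known = p.predict("filter", 3)     # from history: 6.0
unknown = p.predict("render", 10)  # regression fallback
```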
The Scavenger [39] framework also employs a cost assessment method to decide whether or not to offload. This is carried out by the ‘scheduler’ component, which considers the following factors:
1. Relative speed and current utilization of the surrogates: A CPU benchmarking method is employed to evaluate the performance of potential surrogate devices. Scavenger does not use the CPU clock speeds of devices, since these make little sense for comparison purposes given the vastly heterogeneous nature of the device set. The benchmarking method uses the NBench suite to obtain a score that assesses the strength of a device. The number of tasks running on a potential surrogate is used as a measure of its current resource utilization.
2. Network bandwidth and latency to the surrogates: Network connection information is statically configured to avoid unnecessary traffic, and using this information on expected bandwidth, Scavenger estimates the cost of remote execution.
3. Task complexity: This does not refer to asymptotic (Big O) time complexity; rather, it is an estimate of how much time the task would take to execute on the surrogate device. As in Spectra and Chroma, history data recorded whenever a task is performed is used for this estimation. However, unlike Spectra and Chroma, Scavenger employs a ‘dual profiling’ technique in which not one but two profiles are recorded per task: a task-centric profile using the global task weight, and a peer-centric profile for each device–task pair. The peer-centric profile is consulted first, and if the peer is a hitherto unknown device, the scheduler falls back to the task-centric profile. Eq. (1) shows the calculation of the global task weight w, where t is the time taken to complete the task, s is the NBench score of the device, and n is the number of other tasks being run on the device at the time:

w = (t × s) / (n + 1)    (1)
4. Input and output size: The input size is available at runtime, but the programmer needs to specify information on the output size.
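The role of the global task weight is to make recorded durations comparable across heterogeneous surrogates. A small sketch of this normalization, assuming Eq. (1) as reconstructed above (function names are illustrative, not Scavenger's API):

```python
# Sketch of Scavenger-style task weighting: normalize an observed duration
# by the surrogate's NBench score and concurrent load so that task-centric
# profiles transfer between heterogeneous devices.

def global_task_weight(t, s, n):
    """Eq. (1): t = observed duration (s), s = NBench score,
    n = number of other tasks running on the device at the time."""
    return t * s / (n + 1)

def estimate_duration(weight, s, n):
    """Invert Eq. (1) to predict the duration on a (possibly new) surrogate."""
    return weight * (n + 1) / s

# A task took 1 s on an idle surrogate with NBench rating 40 ...
w = global_task_weight(1.0, 40, 0)
# ... so on an idle surrogate rated 20 it should take about 2 s.
predicted = estimate_duration(w, 20, 0)
```

This matches the proportionality assumption listed for Scavenger in Table 1: halving the NBench rating roughly doubles the expected task duration.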
MAUI [24] carries out a cost–benefit analysis by profiling each method in an application through serialization. Measurements of network bandwidth and latency are also taken and incorporated into the cost. Specifically, MAUI’s profiler takes three issues into consideration:
1. The device’s energy use: An energy profile of the mobile phone is prepared by first measuring battery usage with a hardware power meter. A JouleMeter [45] style benchmarking system is used to measure CPU utilization. Initially, the profiler is ‘trained’ on the device by collecting data on CPU utilization and energy consumption. These values are used to construct a linear model that can estimate how much energy a method will consume, as a function of the number of CPU cycles it requires to execute. They report that the model’s mean error is less than 6% with a standard deviation of 4.6%.
2. Application characteristics: Since profiling an application over each of its possible paths would be expensive, MAUI uses past invocations of methods to predict future invocations.
3. Network characteristics: A simple procedure is followed to measure network throughput: 10 kB of data is sent to a MAUI server, and the transfer is observed to estimate network bandwidth and latency. The profiler also records network quality statistics every time a method is offloaded, and this history data is used for future estimates.
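A MAUI-style linear energy model can be sketched as calibrating a joules-per-cycle rate from training measurements and scaling it by a method's cycle count. This is a deliberately simplified stand-in (a line through the origin) with invented numbers, not MAUI's actual model or its reported error figures:

```python
# Hedged sketch of energy profiling: learn joules-per-CPU-cycle from
# training runs (power-meter readings), then estimate a method's energy
# from the cycles it needs. Training values below are invented.

def train(cycle_samples, energy_samples):
    """Average the observed joules-per-cycle rate over the training runs."""
    rates = [e / c for c, e in zip(cycle_samples, energy_samples)]
    return sum(rates) / len(rates)

# (CPU cycles, joules) pairs collected while 'training' the profiler.
jpc = train([1e6, 2e6, 4e6], [0.4, 0.8, 1.6])
predicted_energy = jpc * 3e6  # estimate for a method needing 3e6 cycles
```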
The data from the profiler is then fed into the MAUI ‘Solver’ to decide whether a method should be executed locally or remotely. The Solver tries to find the partitioning strategy that consumes the least amount of phone battery.
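At its core, the local-vs-remote decision compares the energy of executing a method on the phone against the energy of shipping its state to the server. The following is a greatly simplified per-method stand-in for the Solver (which in reality solves a global optimization over the call graph); all parameter names and values are hypothetical:

```python
# Toy per-method offloading rule: offload when transferring the method's
# serialized state costs less energy than executing it locally.
# This is an illustration of the trade-off, not MAUI's Solver.

def should_offload(e_local_joules, state_bytes, bandwidth_bps, tx_power_watts):
    """Compare local execution energy against radio energy for the transfer."""
    transfer_seconds = state_bytes * 8 / bandwidth_bps
    e_transfer = transfer_seconds * tx_power_watts
    return e_transfer < e_local_joules
```

A compute-heavy method with little state (high `e_local_joules`, small `state_bytes`) is offloaded; a cheap method with a large serialized state stays on the phone.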
CloneCloud [36] employs a ‘Dynamic Profiler’ to collect the data used in its cost–benefit analysis, which is then fed into the ‘Optimization Solver’ to decide which methods need to be migrated so that the cost of migration and execution is minimized. Here, cost can refer to execution time, energy consumption, or resource footprint.
In [46], Zhang et al. take four attributes into consideration when calculating the cost of migrating mobile apps to the cloud: power consumption, monetary cost, performance attributes, and security and privacy. These are inferred from various sensing modules in the mobile device and the cloud, which monitor data such as battery, network, device load, cloud load, and latency. After processing these inputs, the cost model decides on a suitable course of action, such as migrating apps to the cloud or the mobile device, switching between different networks, or allocating cloud resources.
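One plausible way to combine such heterogeneous attributes is a weighted cost score per placement; the sketch below is an invented illustration of that idea, not Zhang et al.'s actual model, and the weights and normalized attribute values are made up:

```python
# Illustrative multi-attribute cost aggregation: combine power, money,
# performance, and privacy costs (each normalized to [0, 1]) into one
# score per placement; lower is better. Weights are hypothetical.

def migration_cost(power, money, performance, privacy_risk,
                   weights=(0.4, 0.2, 0.3, 0.1)):
    attrs = (power, money, performance, privacy_risk)
    return sum(w * a for w, a in zip(weights, attrs))

def decide(cost_cloud, cost_local):
    return "migrate to cloud" if cost_cloud < cost_local else "stay on device"
```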
Cost models using parametric analysis. In [47], Kumar and Lu provide an analytical model for comparing energy usage in the cloud and on the mobile device. The model takes the following parameters into consideration: the speeds of the mobile device (M) and the remote cloud (S), the number of instructions of the computation (C) (assuming the mobile and cloud versions have the same number of instructions), the number of bytes to be transferred (D), the network bandwidth (B), and the power consumed by the mobile device in the idle (P_i), computing (P_c), and communicating (P_tr) states. Assuming the cloud is F times faster than the mobile device, they infer the energy saving given by Formula (2).
E_saved = (P_c × C) / M − (P_i × C) / (F × M) − (P_tr × D) / B    (2)
When the formula gives a value greater than zero, an energy saving is possible; i.e. D/B should be sufficiently small compared with C/M, and F should be sufficiently large. Thus, according to this model, offloading is beneficial in cases where heavy computation is needed with comparatively little communication.
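Formula (2) can be transcribed directly; the example values below are invented, chosen to represent the heavy-computation, light-communication case where offloading pays off:

```python
# Direct transcription of Formula (2) (Kumar and Lu); example values invented.

def energy_saved(C, M, D, B, F, p_c, p_i, p_tr):
    """E_saved = p_c*C/M - p_i*C/(F*M) - p_tr*D/B.
    C: instructions; M: mobile speed (instructions/s); D: bytes transferred;
    B: bandwidth (bytes/s); F: cloud speedup; p_c, p_i, p_tr: power (W)
    while computing, idling, and communicating."""
    return p_c * C / M - p_i * C / (F * M) - p_tr * D / B

# 1e9 instructions, only 100 kB to transfer, cloud 10x faster: saving > 0.
saving = energy_saved(C=1e9, M=1e8, D=1e5, B=1e6, F=10,
                      p_c=0.9, p_i=0.3, p_tr=1.3)
beneficial = saving > 0
```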
Analyzing the conditions for optimal computation offloading, Wang and Li [48] identify four kinds of cost factors: computation cost, data communication cost, task scheduling cost, and data registration cost. These costs are expressed as functions of run-time parameters such as buffer size, input size, and command-line options, and are then fed into their partitioning algorithm to determine an efficient partitioning for the given parameters.
Cost models using stochastic methods. For a mobile cloud service following the model given in MobiCloud [37], Liang et al. [49] propose an economic mobile cloud computing model for resource allocation based on a Semi-Markov Decision Process (SMDP) [50]. MobiCloud describes a system in which mobile devices use application components named ‘weblets’, which can either be migrated to the cloud or run on the mobile device itself. The SMDP model is based on three states in the mobile cloud: a new weblet request or an inter-domain request, an intra-domain weblet transfer request, and a weblet leaving the domain. When the cloud receives a migration request from a mobile device, it accepts it only if there is an overall system gain. The overall system gain is based on maximizing the cloud’s profit and reducing the expense of the mobile user, where the latter depends on the trade-off between energy consumption on the mobile device and the monetary cost of offloading to the cloud. They argue that an intra-domain weblet transfer from one service node to another would usually generate more profit than a new weblet migration from a mobile device, or an inter-domain transfer from another cloud service provisioning domain. Besides the monetary gain, the overall system gain also takes into account the CPU cost in the cloud server due to virtual image occupation. A ‘reward model’ is used to calculate the costs based on the system state and the corresponding action.
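The admission rule at the heart of this scheme, accept a migration only when the overall system gain is positive, can be abstracted as below. This toy sketch stands in for the paper's reward model; the three gain components and their values are invented placeholders, and the actual SMDP optimizes over states and actions rather than a single inequality:

```python
# Toy admission rule abstracting the SMDP reward idea of Liang et al. [49]:
# accept a weblet migration only when cloud income outweighs the VM image
# occupation cost plus the mobile user's expense. Quantities are invented.

def system_gain(cloud_income, vm_occupation_cost, user_expense):
    return cloud_income - vm_occupation_cost - user_expense

def accept_request(cloud_income, vm_occupation_cost, user_expense):
    return system_gain(cloud_income, vm_occupation_cost, user_expense) > 0
```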
Discussion. Existing cost models in current mobile cloud computing systems mainly fall into three categories: history-based profiling, parametric, and stochastic. An overview of these is given in Table 1.
Table 1.
Overview of cost models in mobile clouds.
Name and type | Objective | Resource monitoring | Benchmarking | Assumptions
Spectra [27], Chroma [28]: History-based profiling | User specifiable; reduction in execution time and energy usage, and increased fidelity | Yes: CPU, network, battery, file cache state, remote CPU, memory usage, and remote file cache state | No | Resource usage of an operation will be similar to the amount used by recent operations of similar type.
Scavenger [39]: History-based profiling | Improve performance and save energy | No | Yes: uses the NBench suite. | Task duration is proportional to the NBench rating, such that a task that takes 1 s on an idle surrogate with an NBench rating of 40 should take about 2 s on an idle surrogate with a rating of 20.
MAUI [24]: History-based profiling | Primary goal is saving energy; speed is also considered. | Yes: CPU, network bandwidth and latency | Yes: uses a method similar to JouleMeter. | Past invocations of a method can be used as a predictor of future invocations concerning energy usage.
CloneCloud [36]: History-based profiling | User specifiable; reduction in execution time and energy usage. | Yes: CPU, network, storage | No | All objects have the same cost per byte.
Analytical model by Kumar and Lu [47]: Parametric model | Saving energy | Yes: network bandwidth, energy consumption | No | The number of instructions in the task is similar in the cloud and mobile versions.
Analytical model by Wang and Li [48]: Parametric model | Improve performance and save energy | No | Yes | —
MobiCloud [37]: Stochastic method | Saving energy and minimizing monetary cost | Yes: CPU, battery state, network | No | —
Spectra and Chroma have two of the oldest history-based profiling cost models and are quite similar, Chroma being the result of lessons learned from Spectra. Several later works build on concepts from these two lines of research while adding new methods to address their shortcomings. For example, the cost models of the aforementioned work and of MAUI rely on the assumption that past invocations of the same, or a similar, operation are a good indicator of its current resource usage. However, the energy consumption of an operation in MAUI is expressed as a function of the number of CPU cycles it requires, while in Spectra energy measurements are taken directly from the mobile device’s battery. Scavenger also records history data and maintains profiles similar to Spectra’s, but refines the concept by recording dual profiles per task.
While a thorough cost–benefit analysis is crucial for optimal performance, the cost of the cost analysis itself should not be overlooked. This issue is discussed in MAUI, where the overhead of frequent profiling is balanced against the accuracy of estimates based on the latest data so as to give a positive outcome.