Motivation: the need for a mobile cloud



The case for mobile cloud computing can be argued by considering the unique advantages of empowered mobile computing, and a wide range of potential mobile cloud applications has been recognized in the literature. These applications fall into areas such as image processing, natural language processing, GPS sharing, Internet access sharing, sensor data applications, querying, crowd computing and multimedia search. However, as explained in [11], applications suited to distributed computation share certain characteristics: their data has easily detectable segment boundaries, and the time to recombine partial results into a complete result is small. Examples include string matching and manipulation tasks such as grep and word-frequency counters. The applications and scenarios presented in recent literature are described in detail below:
1.
Image processing: In [11], the authors have experimented with running GOCR,1 an optical character recognition (OCR) program, on a collection of mobile devices. In a real-life scenario, this would be useful for a foreign traveler who takes an image of a street sign, performs OCR to extract the words, and translates the words into a known language. A similar scenario is given in [12], where a foreign tourist, Peter, is visiting a museum in South Korea. He sees an interesting exhibit but cannot understand the description, since it is in Korean. He takes a picture of the text and starts an OCR app on his phone. Unfortunately, his phone lacks the resources to process the whole text. Although he could connect to a remote server via the Internet, that would mean using roaming data, which is too expensive. Instead, his device scans for nearby users/devices who are also interested in reading the description, and requests that they share their mobile resources to perform the task collaboratively. Those who are interested in this common processing task create an ad hoc network with Peter, and together their mobile cloud is able to extract the text and then translate it into English. This can be applied to many situations in which a group is involved in an activity together. Another example is a group performing archaeological expeditions in a desert.
2.
Natural language processing: As mentioned above, language translation is one possible application, and this is mentioned in [11] as a useful tool for foreign travelers to communicate with locals. Translation is a viable candidate since different sentences and paragraphs can be translated independently; this is experimentally explored in [11] using Pangloss-Lite [13]. Text-to-speech is also mentioned in [11]: a mobile user may prefer having a file read aloud, especially in the case of visually impaired users.
3.
Crowd computing: Video recordings from multiple mobile devices can be spliced to construct a single video that covers an entire event from different angles and perspectives [14]. In [15], two scenarios of this nature are described in detail: ‘Lost child’ and ‘Disaster relief’.
The ‘Lost child’ scenario takes place at a parade in Manhattan. John, a five-year-old child attending the parade with his parents, goes missing in the crowd, and his parents only notice his absence after some time. Fortunately, a police officer sends out an alert via text message to all mobile phones within a two-mile radius, requesting that they upload all photographs taken at the parade during the past hour to a server that only the police can access. With John’s parents, the police officer searches through these photographs via an app on his phone. After looking through some pictures, they are able to spot John in one of the images, which they identify as having been taken at a nearby location. Soon, the relieved parents are reunited with their child.
In the ‘Disaster relief’ scenario, a massive earthquake measuring 9.1 on the Richter scale has occurred in Northern California, resulting in great loss of life and destruction of infrastructure and property. Disaster relief teams face an uphill task because of limited manpower, lack of transportation, and poor communication. Internet infrastructure has been destroyed. Previous maps of terrain and buildings are suddenly rendered obsolete, contributing to slow disaster relief. Data on Google Earth and Google Maps for this area is now useless, since highways, bridges, landmarks and buildings have all collapsed. To conduct efficient search and rescue operations, new data must be gathered and a clear picture of the state of the terrain and buildings must be constructed. To do this, the relief teams use camera-based GigaPan sensing.2 Local citizens are asked to use their mobile phones to photograph disaster sites, and the photographs are collected at a central server. The collected images are then stitched together to create a whole, panoramic image. The new face of the area emerges, and relief teams can now conduct their work with accurate maps and information on inaccessible areas.
4.
Sharing GPS/Internet data: It is more efficient to share data among a group of mobile devices that are near each other through local-area or peer-to-peer networks; it is not only cheaper but also faster [14]. Rodriguez et al. [16] present a case study of a hiking party at Padjelanta National Park, a deserted area within the Arctic Circle lacking power access points and network coverage. The data set contains Bluetooth device-discovery scans and GPS readings from 17 persons. The paper reports up to 11% energy savings from sharing GPS readings. However, co-location of most participants was low, so the savings should be much higher in a conventional hiking party, or in other social situations such as pubs, restaurants, and stadiums, where energy savings could reach 40%. A similar scenario is a mobile device requesting access to a peer-to-peer file that has been, or is currently being, downloaded by another mobile device in the vicinity [12].
5.
Sensor data applications: Since most mobile phones today are equipped with sensors, readings from sensors such as the GPS, accelerometer, light sensor, microphone, thermometer, clock, and compass can be timestamped and linked with readings from other phones. Queries can then be executed on such data to gather valuable information, for example: “What is the average temperature of nodes within a mile of my location?” or “What is the distribution of velocities of all nodes within half a mile of the next highway on my current route?” Sample applications include traffic reporting, sensor maps, and network availability monitoring [14].
6.
Multimedia search: Mobile devices store many types of multimedia content, such as videos, photos, and music. For example, Shazam is a music identification service for mobile phones that searches for matching songs in a central database. In the context of the mobile cloud, the search could be executed on the contents of nearby phones [14].
7.
Social networking: Since sharing user content is a popular way of interacting with friends on social networks such as Facebook, integrating a mobile cloud into social networking infrastructure could enable automatic sharing and peer-to-peer multimedia access, which would also reduce the need to back up and serve all of this data on huge servers [14].
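The segment-and-recombine property noted earlier for distributed computation (easily detectable segment boundaries and cheap recombination of partial results) can be illustrated with a word-frequency counter. The sketch below is illustrative only and is not taken from the cited experiments: text splits cleanly on line boundaries, each device counts its chunk independently, and the partial counts merge cheaply.

```python
from collections import Counter

def split_on_boundaries(text, n_devices):
    """Split text into roughly equal chunks along line boundaries."""
    lines = text.splitlines()
    size = max(1, len(lines) // n_devices)
    return [lines[i:i + size] for i in range(0, len(lines), size)]

def count_words(chunk_lines):
    """Work done independently on one device: count words in its chunk."""
    counts = Counter()
    for line in chunk_lines:
        counts.update(line.lower().split())
    return counts

def recombine(partials):
    """The cheap recombination step: merge the partial counts."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

text = "the quick brown fox\nthe lazy dog\nthe fox"
partials = [count_words(c) for c in split_on_boundaries(text, 3)]
print(recombine(partials)["the"])  # 3
```

Because each chunk is processed independently and merging is a simple sum, the task parallelizes across however many devices happen to be available.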

2.1. Example scenario: using mobile cloud with distributed computation, and collective sensing

Now let us consider the following detailed scenario. In the aftermath of a natural disaster such as the Indian Ocean tsunami of 2004, the immediate provisioning of emergency services becomes critically important. Among these services, searching for missing persons is one of the most critical yet arduous tasks. In this kind of chaotic situation, infrastructure is destroyed, limiting access to computers and data and making such a search even more difficult. Often, missing person reports are filed, but the persons in question may be injured with no means of communication, or even deceased. One way of dealing with this is to photograph every person found, gather all images at a central location, and perform search and match operations against images of missing persons. However, this approach is not very realistic considering the limited human and machine resources in such a situation. Several questions arise in this scenario:
1.
Who would capture the necessary images, and how?
2.
How would the captured images be collected?
3.
How would the collected images be processed?
The first question is easily answered: anyone with a camera phone of decent quality could contribute. However, the second and third questions, data collection and processing, are trickier. Acquired data could be uploaded to a remote server, but as is often the case at disaster sites, connectivity would be a problem. This method could also take considerable time, especially if a centralized server node is not already set up. Images could be processed locally, but individual mobile devices are typically not equipped with enough resources to carry out such operations.
Let us now consider the possibility of employing a local mobile cloud for the aforementioned scenario. In this case, photographs taken by various individuals would constitute the data against which the missing persons will be matched. Relief workers and communities working together at the disaster site could collaboratively ‘lend’ their mobile devices’ storage and processing resources to a ‘local mobile cloud’, that could effectively carry out the image processing needed to identify the missing persons.
A key challenge here is that the number and type of available resources cannot be known or predicted beforehand. How, then, can the work be efficiently distributed and load balanced? Furthermore, in such situations it is most likely that devices will encounter unknown nodes rather than familiar devices. Therefore, it is important that the mobile cloud be able to provide a performance gain even without prior information.
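One common answer to the load-balancing question is pull-based distribution: rather than pre-assigning work, tasks sit in a shared queue and each device pulls more work whenever it finishes, so faster or newly arrived devices naturally take on a larger share without any prior knowledge of the resource pool. The following is a minimal simulation sketch; the device names and per-round speeds are illustrative assumptions, not figures from any cited system.

```python
from collections import deque

def pull_based_rounds(tasks, device_speeds):
    """Simulate pull-based load balancing.

    Each round, every device pulls as many tasks from the shared queue
    as its (not-known-in-advance) speed allows; device_speeds maps an
    illustrative device name to tasks processed per round."""
    q = deque(tasks)
    done = {name: 0 for name in device_speeds}
    while q:
        for name, speed in device_speeds.items():
            for _ in range(speed):
                if not q:
                    break
                q.popleft()          # 'process' one task
                done[name] += 1
    return done

# A device three times as fast ends up doing three times the work,
# with no up-front assignment needed (speeds are hypothetical).
result = pull_based_rounds(range(40), {"fast_phone": 3, "slow_phone": 1})
print(result)  # {'fast_phone': 30, 'slow_phone': 10}
```

The same pull-based idea degrades gracefully when a device leaves mid-computation: its unclaimed tasks simply remain in the queue for others.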
The aforementioned scenario is only one example demonstrating the need for a mobile cloud computing framework. In wearable computing, two major challenges are reducing the bulkiness of systems for everyday use and coping with insufficient battery power [17]. These could be addressed by offloading/sharing computational jobs to the local ‘mobile cloud’, while sensors and peripherals facilitate the pervasive experience for the user. In the area of augmented reality, it has been suggested [18] that cloud resources can solve similar problems. In biomedical engineering, wearable medical devices forming Body Area Networks (BANs) can enable real-time collection and analysis of patients’ medical data [19].

2.2. Remote proxy versus local resources

Today we do have mobile applications connected to the cloud, such as Apple iCloud,3 Google’s Gmail for Mobile,4 and Google Goggles.5 Using mobile devices in disaster situations has also been explored in work such as [20], [21] and [22]. However, current mobile cloud applications, or apps, connect to a remote server where the brunt of the computation is performed. The mobile devices act purely as thin clients that connect to a remote proxy providing complex services. Although these apps are becoming popular, they perform well only under high-speed connectivity, and it is not practical to assume speedy connections, affordable data access fees and good response times in most parts of the world; outside city areas, this holds true even in most developed countries. In contrast, short-range communication consumes less energy, a key factor since mobile devices usually operate on a limited energy source. Connecting to local resources would also be cheaper and promise faster connectivity and better availability. As explained by Satyanarayanan in [23], compared to WiFi LAN bandwidths of 400 Mbps, mobile wireless Internet operates at a bandwidth of 2 Mbps. Depending on user interaction, latency can vary significantly; for example, it is 80 ms versus 16 ms for a 4 MB image, which would greatly hinder the execution and usefulness of the app, as well as the user experience. Satyanarayanan [23] predicts that, considering the current trajectory of Internet evolution, although bandwidth is likely to improve, latency is not.
Therefore, considering data access fees, issues with latency and bandwidth [24], and the high energy demands of 3G connectivity, the local cloud would be a better alternative to the remote cloud [23]. Furthermore, using local mobile resources is an efficient way of exploiting computation power that would otherwise sit idle [25]. Since mobile devices are typically equipped with sensing capabilities, a cloud made up of mobile devices will also be able to provide users with context- and location-aware services, leading to a more personalized experience.
By building the local cloud from other mobile devices as opposed to local servers, we are able to support mobility without needing additional infrastructure. Considering the trends for smartphones, which show that they are getting more powerful each year, a local mobile cloud will be able to provide sufficient resources for intensive mobile apps. It is feasible to envision future mobile clouds as hybrids, in which the users themselves act as cloud resources but retain the ability to connect to remote servers when conditions such as connectivity, access fees, available battery, and response time permit. This would require the mobile cloud architecture to be proactive, self-adaptive and equipped with cost–benefit analysis capabilities.
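In its simplest form, such cost–benefit analysis could score each execution option (local device, local mobile cloud, remote cloud) against the conditions just listed and pick the cheapest. The function below is purely a hypothetical sketch: the cost model, thresholds and weights are invented for illustration, and a real framework would measure or learn them.

```python
def choose_execution_site(latency_ms, data_fee, battery_pct, peers_nearby):
    """Pick where to run a job: on-device, local mobile cloud, or remote cloud.

    All weights and thresholds are illustrative assumptions, not values
    from any measured system; lower estimated 'cost' wins."""
    costs = {
        "local_device": 100 - battery_pct,           # drains own battery fastest
        "local_cloud": 60 if peers_nearby else 1e9,  # only viable with peers
        "remote_cloud": latency_ms / 10 + data_fee,  # pays in latency and fees
    }
    return min(costs, key=costs.get)

# In a remote area, high latency and fees plus a low battery push the
# decision toward the local mobile cloud when peers are available.
print(choose_execution_site(latency_ms=800, data_fee=50,
                            battery_pct=20, peers_nearby=True))  # local_cloud
```

A self-adaptive architecture would re-evaluate such a decision as conditions change, for example when battery drains or a remote link improves.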
To summarize, the reasons for sharing/offloading work from a mobile device are: limited computational capability, limited battery power, limited connectivity, the opportunity to gather more sensing data (such as by encountering other mobile devices with different sensing abilities), access to different content/data sets, and the chance to make use of idle processing power.
The advantages of sharing work with nearby local resources rather than a remote proxy stem from: limited connectivity to remote servers (such as in remote areas and dead zones), limited battery power inhibiting long-range communication, data access fees, and the high availability of local resources.
However, concerns about privacy and security are a major issue when sharing work. Would users be comfortable sharing their resources with unfamiliar people? Would mobile clouds consisting of ‘known groups’ such as co-workers, friends and family be more feasible? What incentives can be provided to entice people to share their resources, and what security and privacy measures can be taken to ensure safety? Even if a user is trusted, his or her mobile device may be unfamiliar. Furthermore, mobile environments are typically dynamic and unpredictable. In such cases, how can a mobile cloud function opportunistically to ensure maximum gain? These are valid challenges concerning the future of mobile cloud computing, and we shall discuss them in detail in later sections.