When Machine Learning Meets Media Cloud: Architecture, Application and Outlook

  Nowadays, media cloud and machine learning have become two hot research domains. On the one hand, the increasing user demand for multimedia services has triggered the emergence of media cloud, which uses cloud computing to better host media services. On the other hand, machine learning techniques have been successfully applied in a variety of multimedia applications as well as a list of infrastructure and platform services. In this article, we present a tutorial survey on using machine learning techniques to address the emerging challenges in the infrastructure and platform layers of media cloud. Specifically, we begin with a review of the basic concepts of various machine learning techniques. Then, we examine the system architecture of media cloud, focusing on the functionalities in the infrastructure and platform layers. For each of these functionalities and its corresponding challenges, we further illustrate the adoptable machine learning based approaches. Finally, we present an outlook on the open issues in this intersectional domain. The objective of this article is to provide a quick reference to inspire researchers from either the machine learning or the media cloud area.
  machine learning; media cloud
  1 Introduction
  Recently, the increasing user demand for rich media experience has triggered an exponential growth of media services worldwide. According to the Cisco Visual Networking Index (VNI) report [1], Internet video traffic would increase 3-fold from 2016 to 2021, contributing up to 82% of all Internet traffic by 2021. This trend may bring tremendous opportunities for all the stakeholders in the media service chain. Application developers can attract more customers by providing novel media experiences, such as video-on-demand, multi-screen interactions, and face/expression recognition. Platform service providers can host more applications and get more revenue. Content service providers can generate more contents and have them viewed by billions of users. Network service operators can expect to deliver significantly more network traffic. Nevertheless, such a trend also calls for novel paradigms to properly fulfil all the requirements.
  Media cloud [2]-[5], inheriting the advances from cloud computing, has emerged as a promising computing paradigm to provide novel multimedia services with satisfactory Quality of Service (QoS) and reduced cost. Specifically, media cloud adds media-related functions to each cloud computing layer (i.e., Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS)), following the cloud computing paradigm. In the infrastructure layer, media cloud schedules more virtual resources in a more dynamic style. In the platform layer, it integrates a list of media-specific functions, such as media adaptation, media streaming, and media traffic analysis, to meet various QoS requirements from different media services. In the software layer, media cloud is able to host novel media services with higher complexity than traditional web services with only text and images.
  The uniqueness of media cloud poses a list of new challenges, especially in the infrastructure and platform layers. First, the processing, storage, and transmission of multimedia contents need more resources, leading to more power consumption and a higher failure ratio of physical and virtual resources. Second, most media services need to be delivered with low latency and high volume, thus requiring precise workload prediction and careful resource scheduling accordingly. Third, media distribution and adaptation are more resource-intensive and thus more complicated than traditional web services. Last but not least, different media functions must be orchestrated properly to better serve the media users with optimized cost.
  Machine learning, which has been intensively applied in various multimedia applications, provides a natural solution to address these challenges in media cloud. In particular, machine learning represents the set of algorithms that can progressively improve the performance of a specific task without being explicitly programmed. As a result, the adoption of machine learning makes the development of new media services and the optimization of existing media systems much easier than ever before. For example, machine learning has already been widely used in image and video processing, such as face recognition, image classification, and video surveillance. However, machine learning research in the infrastructure and platform layers of media cloud has not been as active as in the upper-layer media applications.
  In this article, we present a survey of how machine learning addresses the challenges in media cloud, from the infrastructure and platform perspectives. In particular, we start with a tutorial study of different machine learning strategies, as well as the concept and the challenges of media cloud. Then, we substantiate the ways of applying these machine learning techniques to media cloud via a literature review. The mapping between machine learning techniques and the challenges in the infrastructure and platform layers of media cloud is illustrated respectively. As a result, this allows researchers from either the machine learning or the media cloud domain to quickly grasp the state-of-the-art knowledge in the overlap of these two domains.
  The rest of this paper is organized as follows. In Section 2, we introduce the basic ideas of machine learning as well as a layered media cloud framework and the functionalities in each layer. In Section 3, we review the machine learning efforts towards the challenges in the infrastructure layer of media cloud. In Section 4, we investigate the machine learning solutions to address the issues in the platform layer of media cloud. In Section 5, we highlight a list of open research issues in media cloud that could be addressed by machine learning techniques in the near future. Finally, in Section 6, we conclude this article.
  2 Overview of Machine Learning and Media Cloud
  In this section, we first introduce the basic machine learning algorithms to provide the necessary background knowledge, which will be referred to in the later sections. Then, we illustrate the media cloud framework and decompose it into a layered model. This model will serve as the blueprint to survey existing research efforts.
  2.1 Machine Learning Algorithms
  Existing machine learning algorithms can be generally categorized into three types [6]: supervised learning, unsupervised learning, and reinforcement learning. Fig. 1 depicts this categorization, where each category further consists of one or more sub-categories. The basic concepts of these techniques are presented as follows.
  2.1.1 Supervised Learning
  Supervised learning aims to build a model that maps an input to an output based on pre-labelled input-output pairs. Typically, the input is a high-dimensional vector, the output is a low-dimensional or even one-dimensional decision, and the objective is to minimize the difference between the labels and the output of the model. Regression, decision trees, Bayesian networks, and deep neural networks/deep learning are supervised learning algorithms.
  Regression tries to find a single function with proper parameters to represent the relationship between the input and output. There are a number of different regression models with different function types to deal with different data. For example, linear regression uses a linear function to deal with continuous output, logistic regression uses a logistic function to deal with categorical output, and non-linear regression uses non-linear functions (e.g., polynomial, logarithmic).
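  As a minimal illustration of the regression idea, the following sketch fits a linear model on a continuous target and a logistic model on a binary label; the data and the use of scikit-learn are purely illustrative assumptions, not part of the original works.
```python
# Minimal regression sketch on hypothetical data: a linear model for a
# continuous target and a logistic model for a binary (categorical) label.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((100, 3))                                   # 100 samples, 3 features
y_cont = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 + 0.05 * rng.standard_normal(100)
y_bin = (y_cont > y_cont.mean()).astype(int)               # derived binary label

lin = LinearRegression().fit(X, y_cont)                    # continuous output
logit = LogisticRegression().fit(X, y_bin)                 # categorical output

print("linear coefficients:", lin.coef_)
print("P(class=1) for first sample:", logit.predict_proba(X[:1])[0, 1])
```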
  A decision tree uses a tree-like graph to deduce consequences from the input. In a decision tree, each internal node refers to a test on an attribute, each branch refers to the outcome of that test, each leaf node refers to one final output, and the paths from the root to each leaf node represent the decision rules.
  A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph. It calculates an estimate of the class probability from the training set based on Bayes’ theorem, and uses this estimate to map the input to the output.
  Deep neural network/deep learning is generally based on artificial neural networks, which consist of a collection of multiple layers of connected units (i.e., neurons). The weights between each pair of neurons are tunable to optimize the objective function. It can be used as a supervised learning approach for classification tasks.
  2.1.2 Unsupervised Learning
  In contrast to supervised learning, unsupervised learning algorithms, such as K-means clustering, principal component analysis (PCA), and deep learning, focus on inferring a function to describe the hidden structure of unlabeled data.
  K-means clustering aims to partition n observations into k clusters, where each observation belongs to exactly one cluster. The criterion is to minimize the overall distance between the observations and the centroids of their assigned clusters.
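  A small sketch of this clustering criterion is given below; the observations are synthetic and scikit-learn's KMeans is assumed only for brevity.
```python
# Minimal K-means sketch: partition hypothetical 2-D observations into
# k clusters by minimizing within-cluster distance to the centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical observations drawn around three centers.
obs = np.vstack([rng.normal(c, 0.2, size=(50, 2)) for c in ([0, 0], [2, 2], [0, 3])])

km = KMeans(n_clusters=3, n_init=10).fit(obs)
print("centroids:\n", km.cluster_centers_)
print("cluster of first observation:", km.labels_[0])
```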
  Principal component analysis uses an orthogonal transformation to convert given observations into a set of values of linearly uncorrelated variables in a lower dimension. The generated variables are often called principal components. They serve as a projection of the original higher-dimensional input from its most informative perspective.
  Deep learning can also be used in an unsupervised manner. Thanks to its multi-layer structure of fully connected neurons, deep learning can represent complex non-linear relationships well. As a result, it is able to compress a higher-dimensional input into an informative output of much lower dimension. The deep auto-encoder is one example in this category.
  2.1.3 Reinforcement Learning
  Reinforcement learning trains the model by interacting with the environment: it takes different actions and receives the incurred rewards iteratively. Specifically, it relies on two operations, the exploration of uncharted territory and the exploitation of current knowledge, to maximize the received rewards. On the one hand, the exploration operation enables the algorithm to keep trying different decisions so that it can evolve without explicitly given labelled data. On the other hand, exploitation allows the algorithm to be aware of the explored points and move closer to the optimal decision strategy. Q-learning is a typical reinforcement learning algorithm.
  Q-learning is the most representative reinforcement learning technique. Specifically, it uses a Q-value to represent the quality of a state-action combination, and iteratively updates this Q-value for improvement. Q-learning can compare the expected utility of the available actions without requiring a model of the environment. Moreover, it has been proven that Q-learning eventually obtains the optimal action-selection policy for any finite Markov decision process.
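  The core of Q-learning is the iterative value update. The toy sketch below runs the update on a hypothetical 5-state, 2-action environment; the environment, reward, and hyper-parameters are illustrative assumptions only.
```python
# Toy Q-learning sketch: tabular update on a hypothetical 5-state,
# 2-action environment. The update rule is
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration rate

def step(state, action):
    """Hypothetical environment: moving 'right' towards the last state pays off."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    action = np.random.randint(n_actions) if np.random.rand() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state if next_state != n_states - 1 else 0   # restart after reaching the goal

print("learned Q-table:\n", Q)
```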
  Fig. 2 illustrates a comparison of supervised learning, unsupervised learning, and reinforcement learning. In particular, supervised learning relies on pre-defined labelled input-output pairs as the target. Unsupervised learning does not need labelled data; it uses the internal features of the dataset instead of any labels as the objective. Reinforcement learning does not have labelled data in advance either; it has to sense the results by performing different actions, and it uses the previous outputs as the objective.
  We will illustrate how the machine learning techniques from these different categories can be applied in media cloud in Sections 3 and 4.
  2.2 Media Cloud Framework
  Media cloud aims to leverage the cloud computing paradigm together with a list of media-related functions to enhance the media experience. From a cloud-centric view [2], [5], it can be described as a layered model as shown in Fig. 3. Each layer consists of traditional cloud services (e.g., virtualization and resource management) and corresponding media services (e.g., media adaptation and media analytics). This conceptual hierarchy provides a clear clue for us to characterize the technical challenges and existing works in different layers. Note that this paper mainly focuses on machine learning works for the infrastructure and platform layers, whereas the efforts towards multimedia software applications have been intensively reviewed in many other works [7]-[11].
  The infrastructure layer aggregates all the physical ICT resources together via virtualization technology, with the objective of allocating them in a fine-grained, on-demand, and fault-free manner. According to the different functionalities, we can further classify this layer into the following three sub-layers.
  ·Supporting Infrastructure: This sub-layer refers to the power supply, air conditioning, and other cooling systems, which support the smooth operation of datacenters [12] as well as the cloud services on top of them. It focuses on bottom-level schemes, for instance datacenter layout planning, power consumption optimization, and cooling system management [13].
  ·Physical Resources: This sub-layer consists of servers (including CPUs, hard disks, memory, network interfaces, etc.) and switches/routers, which provide the networking, computation, and storage capacity. These resources need to be monitored in real time and repaired promptly once there is any fault [14], so that the hosted cloud services are not affected.
  ·Virtualization Management: This sub-layer virtualizes the underlying physical resources into a virtual resource pool in terms of virtual machines [15], [16]. These resources are then exposed to the cloud platform services to meet the specific Service Level Agreement (SLA) at the lowest possible cost. To achieve this target, resources need to be provisioned elastically and dynamically via virtual machine configuration and migration.
  Following this hierarchy, machine learning algorithms can be developed and applied to each of these sub-layers to address the corresponding challenges. This will be the main focus of Section 3.
  The platform layer encapsulates various fundamental media services into a layer of middleware, by utilizing the virtual resources provided by the infrastructure layer. This middleware is then exposed to the software layer via a set of APIs. According to their functionalities, we cast these media services into the following types.
  ·Media Analytics: This service refers to the data mining schemes that focus on the nature of media contents as well as the user request patterns. Typical examples include media content popularity prediction [17] and content recommendation [18].
  ·Media Distribution: This service is in charge of acquiring media contents from the origin servers, and delivering them to end users throughout the media cloud. The objective is to improve the delivery efficiency. Content caching [19] and pre-fetching [20] are two representative examples under this category.
  ·Media Adaptation: This service modifies the original media contents into target versions along different dimensions (e.g., format, rate, resolution, and annotation). Typical examples of such services are content encoding/transcoding [21], content quality estimation/assessment [22], and media mashup [23].
  Similarly, intelligent mechanisms powered by machine learning algorithms can be adopted by these services to improve their quality. The in-depth survey of adopting machine learning in cloud-based media platform services will be covered in Section 4.
  3 Machine Learning in Infrastructure Layer
  In this section, we present three typical scenarios in the infrastructure layer of media cloud (Fig. 3) that can benefit from machine learning techniques: datacenter power consumption prediction and control, cloud resource failure prediction and operation, and virtual resource configuration and consolidation.
  3.1 Power Consumption Prediction and Control
  Datacenters nowadays have become large energy consumers, so that even modest improvements can yield significant cost cuts and avert substantial carbon emissions globally. In particular, power consumption from datacenters comprises around 1.4% of global energy usage and 2% of global carbon emissions [24]. Among all datacenters in the world, the majority have a power usage effectiveness (PUE) of 1.6-2.0, while the ideal efficiency should be around 1.1-1.2 [24]. As a result, there is ample room for improvement, and even a small improvement can bring great impact.
  Regression is one of the classic ways to predict power consumption. Choi et al. [25] proposed a three-dimensional regression model to predict the power usage from work intensity and CPU utilization. This model achieved a 9% error margin on average compared with the real observed usage. Similarly, Lewis et al. [26] developed a linear regression model to correlate processor power, bus activity, and system ambient temperatures with real-time server power consumption. Their model gave an error of 4% as verified on a set of benchmarks. In addition, further studies [27] indicated that a linear regression model based on CPU usage and workload is only able to provide reasonable prediction accuracy for CPU-intensive jobs, while a Gaussian mixture regression model can perform consistently well with different workloads (e.g., IO- or memory-intensive jobs).
  Neural networks/deep learning are another powerful tool for predicting datacenter power consumption, with the ability to take in many more input parameters. Gao [28] from Google presented a simple three-layer neural network model that considers 19 different factors as the input, including the IT load, weather conditions, number of chillers and cooling towers running, equipment set points, and so on. This already generated a promising prediction performance, with a mean absolute error of 0.4% and a standard deviation of 0.005. Li et al. [29] further deepened the neural network with more layers. Specifically, they used a linear recursive auto-encoder to process the input, and added an additional layer before the final output to correct the prediction results of the auto-encoder. This model was fed with an 11-dimensional input, including CPU/memory/disk usage, network traffic, and file system workload. The results pushed the performance a bit further, reducing the prediction error by around 40% compared with a widely-used regression based time-series prediction model.
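  The sketch below gives a minimal flavour of this kind of predictor; the five features and the data are synthetic stand-ins for the operational metrics used in [28], [29], and scikit-learn's MLPRegressor is assumed merely as a convenient small neural network, not the models used in those works.
```python
# Sketch: predict datacenter power efficiency from operational features
# with a small neural network. Features and data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = rng.random((n, 5))             # e.g., IT load, outside temp, #chillers, set point, humidity
power = 1.1 + 0.5 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2] + 0.02 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("mean absolute error:", np.mean(np.abs(pred - y_te)))
```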
  3.2 Cloud Failure Prediction and Operation
  Given the scale and complexity of cloud infrastructure, failure prediction and operation demand a significantly high level of automation. In particular, such failures include physical hardware failures such as disk/CPU/memory/network errors and virtual job failures due to software or configuration issues. It is challenging but important to properly identify these failures and take actions accordingly on time, if not in advance, so that a high-standard Service Level Agreement can be maintained.
  Decision trees have become a popular method for failure prediction and detection. Pelleg et al. [30] collected system metrics, including execution count, CPU usage, waiting time, blocked time, and IO count, on top of the Xen virtual machine, and fed them into a decision tree classifier. By using this classifier, they were able to detect potential system problems with an area under the receiver operating characteristic (ROC) curve of 0.94. Fu [31] proposed a framework that combines the decision tree model with the principal component analysis algorithm. Specifically, principal component analysis is first used to reduce the feature dimensions of the collected cloud infrastructure parameters; then only the principal components are input into a decision tree classifier to identify anomalies in the cloud. A follow-up work [32] further integrated a Bayesian model with the decision tree. It first reduced more than 50 system metrics, including CPU statistics, memory swap statistics, IO requests, and network traffic, into principal components. Then this solution fed the generated components into both a Bayesian predictor and a decision tree, and ensembled their outputs. This generated a promising result with an ROC area of 0.99.
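  A minimal sketch of the PCA-then-decision-tree pipeline described above is given below; the 50 "system metrics" and the failure labels are synthetic stand-ins for real cloud telemetry, so the numbers it prints carry no meaning beyond illustrating the workflow.
```python
# Sketch: reduce high-dimensional system metrics with PCA, then feed the
# principal components to a decision tree that flags failure-prone samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, d = 1000, 50                                    # 1000 samples of 50 synthetic metrics
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * X[:, 3] > 1.5).astype(int)    # synthetic "failure" label

clf = make_pipeline(PCA(n_components=10), DecisionTreeClassifier(max_depth=5))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated ROC AUC:", scores.mean())
```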
  At the same time, there is an increasing popularity of applying neural networks or deep learning to the cloud failure prediction task. Prevost et al. [33] presented a neural network model to predict the cloud datacenter workload. Specifically, the model takes historical data points as input to predict the future trend, with the objective of minimizing the Root Mean Squared Error (RMSE) over the sample data. Chen et al. [34] developed a recurrent neural network (RNN) based model to learn the temporal characteristics of resource usage metrics, including CPU and memory usage, which are in turn used to calculate the failure possibility of a running job in the cloud. They then verified the model using real Google cluster workload traces. The results indicated a reasonable accuracy with a false positive rate of around 6%, and the follow-up operations based on the prediction were able to achieve 6% to 10% cost savings by early killing and restarting jobs with a high failure possibility. Zhu et al. [35] also explored the performance of a back propagation based neural network combined with a boosting approach for drive failure prediction in large-scale storage systems. Moreover, they compared the results with a traditional Support Vector Machine (SVM) model on a real-world dataset. The evaluation showed that the proposed neural network model achieves over 95% detection accuracy, which is much better than the 68% achieved by the SVM.
  3.3 Virtual Resource Configuration and Consolidation
  Cloud infrastructure virtualizes the physical resources into a virtual machine pool and operates them in a fine-grained manner, thus providing significant flexibility to host different services. In particular, virtual machines can be dynamically turned on/off and migrated from one physical machine to another. As a result, there is a chance to significantly increase the cost efficiency by properly orchestrating the virtual machines to consolidate the workload in an on-demand manner.
  Bayesian networks have become a popular tool for consolidating virtual resources in cloud environments. Sohrabi et al. [36] proposed a virtual machine migration heuristic based on Bayesian networks. In particular, this solution evaluates the probability of a physical host becoming overloaded, and then migrates virtual machines away from those servers. As a result, not only can energy consumption be saved by consolidating virtual machines, but the performance is also improved by balancing the workload onto multiple hosts. Li et al. [37] discussed a very similar Bayesian approach to estimate the resource utilization of physical machines, which is then used to predict the migration probability of virtual machines. Shyam et al. [38] presented a Bayesian model to determine both short- and long-term virtual resource requirements for CPU- or memory-intensive applications running in a cloud environment. They built the Bayesian model on a list of parameters, including day of week, time interval of application access, workload, benchmarks, and availability of virtual machines. All of these works were able to generate better performance in terms of either lower energy consumption of the cloud infrastructure or higher accuracy in predicting virtual resource utilization, compared with a few other non-machine-learning methods.
  Reinforcement learning has also been applied to this task. Masoumzadeh et al. [39] presented a Q-learning based model, which takes multiple virtual machine metrics (including CPU performance, disk storage, memory usage, and network bandwidth) as the input, the migration action as the output, and the energy consumption combined with an SLA score as the reward function. The trained model outperforms virtual machine selection policies using fixed criteria for decision making. Jin et al. [40], [41] built a virtual machine migration model specifically for the cloud media scenario using the same technique. In particular, this work used the user interactive behaviors in multi-screen applications as the input, the backend virtual machine migration decision as the output, and the total monetary cost of operating the cloud resources as the reward. The result revealed a significant cost saving compared with some heuristics. The model also showed a performance very close to an offline optimal solution. Liu et al. [42] introduced deep reinforcement learning into the virtual machine allocation and consolidation problem. Specifically, deep reinforcement learning integrates deep neural networks with reinforcement learning, enabling the algorithm to deal with a larger state space while keeping a fast convergence speed. Thus, this work is able to take the real-time metrics for each job and virtual machine pair as the input, the job and virtual machine matching decision as the output, and the combined job latency and energy consumption as the reward. Similarly, this approach also achieves cost savings while at the same time improving latency.
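  The toy sketch below casts a single migration decision as a Q-learning problem, loosely in the spirit of [39]-[41]; the state (a discretized host load level), the two actions, and the reward weights are simplified placeholders rather than the formulations used in those papers.
```python
# Toy sketch: Q-learning for a keep-or-migrate decision. The reward
# penalizes SLA violations (keeping a VM on an overloaded host), migration
# overhead, and energy cost. All numbers are illustrative placeholders.
import numpy as np

n_load_levels, n_actions = 10, 2               # load buckets 0..9; 0 = keep, 1 = migrate
Q = np.zeros((n_load_levels, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(load_level, action):
    sla_penalty = 2.0 if (load_level >= 8 and action == 0) else 0.0   # overloaded host kept
    migration_cost = 0.5 if action == 1 else 0.0
    energy_cost = 0.1 * load_level
    return -(sla_penalty + migration_cost + energy_cost)

rng = np.random.default_rng(4)
for _ in range(20000):
    s = rng.integers(n_load_levels)            # observed host load level
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    r = reward(s, a)
    s_next = rng.integers(n_load_levels)       # next observation (simplified, action-independent)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

migrate_preferred = Q[:, 1] > Q[:, 0]
print("migrate when load level >=", int(np.argmax(migrate_preferred)) if migrate_preferred.any() else None)
```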
  3.4 Summary
  In this section, we demonstrate a list of works that use different machine learning techniques to tackle three major infrastructure challenges. Table 1 matches the specific machine learning approaches with the topic domains of each work, so that interested readers can quickly see how machine learning can be applied in the infrastructure layer of media cloud.
  4 Machine Learning in Platform Layer
  In this section, we discuss the machine learning applications in three major media platform services (Fig. 3). Specifically, this section covers content popularity prediction and recommendation in the media analytics domain, content caching and pre-fetching in the media distribution domain, and content transcoding in the media adaptation domain.
  4.1 Content Popularity Prediction and Recommendation
  The tremendous growth of multimedia content generation has changed not only the users’ content consumption behaviors, but also the way of operating media services. Millions of hours of video are generated and uploaded to YouTube every day [43]. As opposed to traditional TV programs, where all the audiences watched the same content at the same time, multimedia content users have many more options for spending their video watching time. As a result, given such a large amount of available user generated content, its popularity is much more difficult to predict. Moreover, personalized video recommendation becomes increasingly important for a better user experience.
  Regression is the simplest yet feasible machine learning tool for the content popularity prediction task. Szabo et al. [44] found that the long-term content popularity on YouTube had a strong correlation with the early popularity. Such correlation can be represented by a linear regression model to predict the long-term content popularity. Borgho et al. [45] confirmed the efficiency of using the linear model to predict the popularity, and further derived a multi-linear regression model by taking into account more factors such as video quality, number of keywords, uploader view count, uploader followers, and uploader video count. Chu et al. [46] adopted a similar approach by using a bilinear regression framework to achieve a personalized content recommendation system. They used this regression model to associate the attributes in user profiles with the potential content that might interest the user.
  Unsupervised learning tools provide another angle on the content popularity task. Szabo et al. [44] used the k-means algorithm to separate video contents into two clusters, where the content popularity in one group grew faster than the average and in the other grew slower. Borgho et al. [45] applied PCA to characterize the relationships between different content/user profiles and the content popularity. In this way, they were able to identify the groups of variables which were responsible for the variation of popularity prediction. Ahmed et al. [47] introduced another clustering algorithm, known as affinity propagation, to the content popularity prediction task. This method does not require a predefined number of clusters, which differs from the k-means algorithm. By properly clustering the similarity scores for content popularity, this approach is able to outperform the traditional k-means and linear regression models.
  There are also a few works making use of deep learning for content recommendation. Ma et al. [48] developed an auto-encoder model backed by the unsupervised deep learning technique to cluster the similarity among different videos. They could then recommend different videos to different users according to these categories. Covington et al. [49] designed a YouTube recommendation system based on a fully-connected deep neural network. It first embeds the video profile, video watch history, search tokens, previous impressions, and user profile into high-dimensional vectors, and uses their concatenation as the input to the neural network. The output can be directly used as ranked recommendations for each individual user.
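  Returning to the regression-based approach discussed above, the sketch below fits a log-domain linear model that maps early view counts to long-term view counts, in the spirit of the correlation reported in [44]; the view counts and the 7-day/30-day horizons are synthetic placeholders.
```python
# Sketch: predict long-term view counts from early view counts with a
# log-domain linear regression. All view counts are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_videos = 500
views_day7 = rng.lognormal(mean=6.0, sigma=1.0, size=n_videos)                 # early popularity
views_day30 = views_day7 * rng.lognormal(mean=1.2, sigma=0.3, size=n_videos)   # long-term popularity

X = np.log(views_day7).reshape(-1, 1)
y = np.log(views_day30)
model = LinearRegression().fit(X, y)

new_video_day7 = 1500.0
predicted_day30 = np.exp(model.predict(np.log([[new_video_day7]]))[0])
print("predicted 30-day views:", round(predicted_day30))
```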
  4.2 Media Content Caching and Pre-fetching
  It is a common practice nowadays to cache multimedia content data in intermediate nodes between users and the host servers, to improve the user experience as well as to reduce the media service operational cost. In particular, because the sizes of multimedia contents are much larger than those of traditional text or images, it takes more time to transmit them from the host to the end users. To relieve this pain, content delivery networks have been proposed to cache contents at some intermediate places. However, it is not efficient to cache everything or to just blindly choose a few contents all the time. Therefore, the key factor in this task is how to choose the right content to be cached at the right time.
  The Bayesian network is a promising tool for the content personalization and prefetching task, identifying the right content to be cached in the content delivery network [50]. Venketesh et al. [51] introduced the naive Bayesian classifier to calculate the probability of viewing a potential content based on the browsing pattern exhibited by end users in sessions. This approach helps to increase the cache hit rate and minimize access latency, especially when users have long browsing sessions. Ali et al. [52] used the naive Bayesian classifier for the same task but in a different way. Specifically, they incorporated the Bayesian classifier with classical caching strategies (e.g., Least-Recently-Used and Greedy-Based), by learning the dependency probability between the content access log and the content re-visit event. As a result, when doing cache eviction, the content with a higher probability of re-visit will be kept.
  Clustering is another promising way for the content personalization and prefetching task [50]. Yan et al. [53] used K-means to cluster users based on their geo-location and temporal access patterns. In this way, the contents for different mobile applications can be prefetched onto the mobile device, thus reducing the app launching delay and improving the user experience. Hu et al. [54] applied the affinity propagation clustering algorithm to group users into different communities, based on their social relationships, geo-locations, and video watching interests. As a result, the content caching decision can be made specifically for different communities, thus improving the caching efficiency.
  It is also possible to use reinforcement learning to optimize this content prefetching process. Hu et al. [55] formulated the content prefetching problem as a Markov Decision Process (MDP), with the objective of striking a balance between the increased content caching cost from incorrect prediction and the reduced content download delay from correct prediction. A Q-learning based approach was then proposed to address this problem.
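  A minimal sketch of scoring cached objects by their re-visit probability with a naive Bayes classifier, loosely following the eviction idea in [52], is given below; the features (recency, frequency, size) and the training log are hypothetical and not taken from that work.
```python
# Sketch: score cached objects by their probability of being re-visited,
# then evict the lowest-scoring one. Features and the access log are
# hypothetical placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(6)
n = 5000
# Hypothetical access-log features: [seconds since last access, access count, size in MB]
X = np.column_stack([rng.exponential(600, n), rng.poisson(5, n), rng.uniform(1, 500, n)])
revisited = ((X[:, 1] > 4) & (X[:, 0] < 900)).astype(int)   # synthetic re-visit label

nb = GaussianNB().fit(X, revisited)

# Candidate objects currently in the cache.
cache = np.array([[120.0, 9, 50.0],     # recent and popular
                  [3000.0, 1, 400.0],   # stale and rarely accessed
                  [700.0, 6, 20.0]])
p_revisit = nb.predict_proba(cache)[:, 1]
print("re-visit probabilities:", p_revisit)
print("evict object index:", int(p_revisit.argmin()))
```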
  4.3 Media Content Adaptation
  There is an increasing trend to consume online video via mobile phones rather than via fixed terminals like TVs and PCs. This means the video contents must adapt to the terminals by providing different resolutions, bitrates, and quality versions for different screen sizes under different network environments. Such video adaptation tasks can be computation-intensive, but at the same time, they also provide an opportunity to improve the user experience on different devices.
  Deep learning is the most widely used machine learning tool for such tasks. Covell et al. [56] explored a neural network based framework to predict the parameters of a model that relates the bitrate to various video properties. Specifically, in video transcoding, the desired perceptual video quality for a given bandwidth constraint can be achieved by controlling the quantization levels. In this context, they used a deep neural network to correlate the quantization level with the bitrate, and achieved a much higher accuracy than the traditional alternative. Dash et al. [57] proposed to use a deep neural network to assess the quality of images after encoding/decoding or transcoding, and the model is able to achieve as high as 98% image-level accuracy in the assessment. Zhang et al. [58] further extended the quality of experience assessment from images to video by using even deeper neural networks with more hidden layers and unique structures.
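  As a very rough illustration of learned quality assessment, the sketch below regresses a quality score from a decoded image patch with a tiny convolutional network; it is a shallow stand-in for the much deeper models in [57], [58], and the patches, labels, and network shape are all assumptions for illustration only.
```python
# Sketch: regress a quality score from a decoded image patch with a tiny
# convolutional network. Images and quality labels are random placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
patches = rng.random((256, 32, 32, 3)).astype("float32")   # decoded 32x32 RGB patches
quality = rng.random(256).astype("float32")                # e.g., normalized quality score in [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),         # predicted quality score
])
model.compile(optimizer="adam", loss="mse")
model.fit(patches, quality, epochs=3, batch_size=32, verbose=0)

print("predicted quality of first patch:", float(model.predict(patches[:1], verbose=0)[0, 0]))
```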
  4.4 Summary
  In this section, we demonstrate the way of using machine learning techniques for three important media platform services. Table 2 maps each work to the adopted machine learning technique as well as the specific platform service that it focuses on. As a result, interested readers can quickly see how machine learning can be used in the platform layer services of media cloud.
  5 Open Research Issues
  The research on applying machine learning to media cloud is still in its infancy, and there are many open challenges. In this section, we present a brief outlook on these open issues, aiming to provide insights for researchers from either the machine learning or the media cloud area.
  5.1 Media Traffic Classification and Flow Control
  Real-time media traffic classification and flow information are important for network management and for optimizing the service operational cost. The traditional way of classifying Internet traffic is based on the network protocols (e.g., TCP or UDP). However, such static methods are not sufficient for media contents, as they roughly use the same protocols for network transmission.
  Machine learning is able to either learn from historical media traffic data with fine-grained categories defined by different metric sets in a supervised manner, or cluster the real-time media traffic into different groups based on its internal features in an unsupervised manner. For example, for the former, it is possible to make use of distributed SVM [59] and deep learning [60], while for the latter, K-means [61] and deep auto-encoders [62] can be the right tools.
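  The sketch below illustrates the unsupervised route: clustering flows by a few coarse statistical features with K-means. The three features (mean packet size, mean inter-arrival time, flow bytes) and the flow records are illustrative assumptions; real systems such as [61] use richer flow statistics.
```python
# Sketch: group media flows by simple statistical features with K-means,
# as an unsupervised alternative to protocol-based classification.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
# Synthetic flow records: bulk video-like flows vs. short interactive flows.
video_like = np.column_stack([rng.normal(1200, 50, 300),    # mean packet size (bytes)
                              rng.normal(0.02, 0.005, 300), # mean inter-arrival time (s)
                              rng.normal(5e7, 1e7, 300)])   # total flow bytes
interactive = np.column_stack([rng.normal(200, 40, 300),
                               rng.normal(0.2, 0.05, 300),
                               rng.normal(1e5, 3e4, 300)])
flows = np.vstack([video_like, interactive])

pipeline = make_pipeline(StandardScaler(), KMeans(n_clusters=2, n_init=10))
labels = pipeline.fit_predict(flows)
print("cluster sizes:", np.bincount(labels))
```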
  5.2 Media Service Chain Orchestration
  Recently, network function virtualization (NFV) has emerged to transform the way of operating communication networks. Specifically, NFV implements network functions in software, orchestrating various services dynamically instead of following the pre-defined workflows of hardware. As a result, it provides an opportunity to dramatically increase the infrastructure flexibility, simplify the resource management process, and reduce both hardware and operational costs.
  The emergence of the NFV-enabled media cloud framework [63] offers the opportunity to further improve the performance of media services running on top of the media cloud, and machine learning can be one of the best candidates to achieve this target. In particular, media services are not standalone; most media services require a list of functions to be orchestrated in a chain. For example, the consumption of online video streaming via a mobile phone involves content caching, prefetching, adaptation, and personalization. Machine learning can be used to learn the most efficient patterns for orchestrating these services at large scale.
  5.3 Media Security
  Nowadays, it is much easier to access, download, and upload multimedia contents via the Internet, making Digital Rights Management (DRM) much more difficult and complicated than before. Previously, audio and video DRM was usually achieved by physical subscription and rental, but this method does not work well today, because any subscriber can simply upload copyrighted contents as user-generated contents to popular video distribution platforms like YouTube. It is hard to restrict such behavior given the huge amount of content uploaded every day.
  Machine learning can be applied in this field too. In particular, it can be used to classify or identify whether an uploaded audio or video has a copyright issue, by learning from a set of labelled contents from their commercial owners. It is also possible to use machine learning to improve the performance of DRM techniques such as digital watermarking by learning from failed cases. In fact, such operation has been introduced to music and audio DRM systems [64], and it is on the way to be extended to video contents.
  6 Conclusions
  In this article, we presented a tutorial survey on applying machine learning techniques to address challenges in the infrastructure and platform layers of media cloud. In particular, we first reviewed the basic concepts of different machine learning techniques. Then, we examined the system architecture of the media cloud framework, focusing on the functionalities in the infrastructure and platform layers. For each functionality and its corresponding challenge, we further illustrated the adopted machine learning techniques. Finally, we presented an outlook on a few open issues in this domain, aiming to inspire researchers from either the machine learning or the media cloud area.