2nd IFIP/IEEE International Workshop on Analytics for Network and Service Management

AnNet 2017

May 8, 2017 in Lisbon, Portugal

IFIP/IEEE International Symposium on Integrated Network Management
                   Lisbon, Portugal, 8-12 May 2017

10:30 - 11:00 Short Paper Session 1
  On Extracting User-centric Knowledge for Personalised Quality of Service in 5G Networks
  Abstract: This paper aims to improve user Quality of Service (QoS) in 5G networks by introducing a user-centric view that exploits the predictability of users' daily motifs. Agglomerative clustering is used to identify these motifs according to the cells in which the user camps during the day. Then, a technique to extract the personalised QoS observed by the user is proposed. The methodology is illustrated with an example that makes use of real measurements obtained from a specific customer of a 3G/4G operator. The presented results illustrate that the proposed user-centric approach is able to identify situations with poor user-perceived QoS that could not be identified by a classical network-centric approach.
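  Illustrative sketch (not from the paper): the abstract names agglomerative clustering over the cells a user camps on during the day. The Python fragment below shows that step in its simplest form, assuming each day has already been summarised as a hypothetical vector of time spent per cell and using scikit-learn's AgglomerativeClustering.

    # Minimal sketch, not the authors' implementation: group days into mobility
    # motifs from the cells a user camps on, as the abstract describes.
    # Assumes each day is summarised as a vector of time spent per cell.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    # Hypothetical input: rows = days, columns = fraction of the day spent in each cell.
    daily_cell_occupancy = np.array([
        [0.6, 0.3, 0.1, 0.0],   # weekday-like day
        [0.7, 0.2, 0.1, 0.0],
        [0.1, 0.1, 0.2, 0.6],   # weekend-like day
        [0.0, 0.2, 0.2, 0.6],
    ])

    # Days with similar cell-occupancy profiles end up in the same cluster;
    # each cluster is treated as one recurring daily motif.
    motifs = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(
        daily_cell_occupancy)
    print(motifs)   # e.g. [0 0 1 1]: two recurring daily motifs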
  Comprehensive Vulnerability Assessment and Optimization Method for Smart Grid Transmission Systems
  Abstract: Smart grid transmission systems are critical information infrastructure that carries communication services. Vulnerability assessment and optimization for these systems can enhance the robustness and sustainability of the network, thus reducing negative effects on power system operation. However, current vulnerability assessment methods lack service and availability indicators, and the corresponding optimization methods likewise ignore the dynamic behaviour of temporal factors. To address these problems, a novel and comprehensive vulnerability assessment and optimization method is proposed. First, for assessment, the influencing factors are analyzed from static and dynamic perspectives. Integrating these factors, a comprehensive vulnerability indicator is designed to assess the vulnerability of nodes and edges in the network. The reasonableness of the indicator is then verified against different attack models. Next, to relieve the unbalanced vulnerability distribution in the network, a routing optimization method is proposed that reconfigures service routes on edges with high vulnerability. Finally, a simulation is carried out on a real network topology. Vulnerability assessments for nodes and edges with the defined indicator are executed and their correctness is verified, and the optimization method is shown to balance the network vulnerability, which is of both theoretical and practical significance.
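  Illustrative sketch (the paper's exact indicator is not given in the abstract): one simple way to combine a static, topology-based factor with a dynamic, service-load factor into a per-node vulnerability score, using networkx betweenness centrality; the graph, weights and load values below are assumptions.

    # Hypothetical composite vulnerability indicator: weighted mix of a static
    # factor (betweenness centrality) and a dynamic factor (normalised service
    # load). Weights and load values are assumed for illustration only.
    import networkx as nx

    G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")])
    service_load = {"A": 0.2, "B": 0.9, "C": 0.5, "D": 0.4}    # hypothetical loads

    static = nx.betweenness_centrality(G)                       # topology-based factor
    max_load = max(service_load.values())
    w_static, w_dynamic = 0.5, 0.5                              # assumed weights

    vulnerability = {n: w_static * static[n] + w_dynamic * service_load[n] / max_load
                     for n in G.nodes}

    # Nodes with the highest score are candidates for rerouting services away
    # from them, which is the balancing idea the abstract describes.
    print(sorted(vulnerability, key=vulnerability.get, reverse=True))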
  Assembling VoLTE CDRs based on Network Monitoring - Challenges with Fragmented Information
  Abstract: Providing proper technical solutions to cover all requirements of the Voice over LTE (VoLTE) service is still a great challenge for operators. Network monitoring is one of the important methods to support service verification, deployment and operations. The VoLTE service utilizes both the LTE Evolved Packet Core (EPC) and the IP Multimedia Subsystem (IMS). These architectures are built on different principles, using protocols designed with different mindsets. Furthermore, they utilize subscriber- and call-related parameters in both redundant and fragmented manners. On one hand, the same data is stored in functional elements of both architectures, which leads to partial data redundancy. On the other hand, Call Data Records (CDRs) cannot be assembled by simply capturing signaling on a few given links. Information is fragmented, hence on-the-fly cross-correlation of key parameters is required. In order to effectively utilize the network and service monitoring system, operators need new methods to correlate the information of various interfaces and protocols. There are many obstacles to overcome here, including information fragmentation across links, ciphered control messages, and global identifiers hidden behind temporary ones. This paper presents the challenges of VoLTE Call Data Record assembly and shows how to extract key parameters in order to enable expert analysis. The deciphering mechanism is especially important here, hence we discuss how its success affects analysis results. We also present Call Data Record assembly methods for some complex scenarios.
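  Illustrative sketch (not the paper's system): the core of the assembly problem described above is correlating records from different interfaces that carry different, often temporary, identifiers. The fragment below shows the idea with hypothetical field names, mapping a temporary identity to a global subscriber key before merging records into one CDR.

    # Conceptual sketch of cross-correlating fragmented records into a CDR.
    # Interface names, identifiers and events are hypothetical placeholders.
    from collections import defaultdict

    temp_to_global = {"TMSI-4711": "IMSI-901700000000001"}    # mapping learned on the fly

    records = [
        {"iface": "S1-MME", "id": "TMSI-4711", "event": "attach"},
        {"iface": "Gm",     "id": "IMSI-901700000000001", "event": "SIP INVITE"},
        {"iface": "S1-MME", "id": "TMSI-4711", "event": "bearer setup"},
    ]

    cdrs = defaultdict(list)
    for rec in records:
        # Resolve temporary identifiers to the global subscriber key, if known.
        key = temp_to_global.get(rec["id"], rec["id"])
        cdrs[key].append((rec["iface"], rec["event"]))

    print(dict(cdrs))   # one partial CDR per subscriber, merged across interfaces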
15:30 - 16:00 Short Paper Session 2
  Monitoring the Network Monitoring System: Anomaly Detection using Pattern Recognition
  Abstract: For successful and efficient network supervision, an Anomaly Detection System is essential. In this paper, our goal is to develop a simple, practical, and application-domain-specific approach to identify anomalies in the input/output data of network probes. Since the data are periodic and continuously evolving, threshold-based approaches cannot be used. We propose an algorithm based on pattern recognition to help mobile operators detect anomalies in real time. The algorithm is unsupervised (with the possibility of integrating user feedback, if available) and easily configurable with a small number of tuning parameters. After weeks of deployment in a production network monitoring system, we obtain satisfactory results: we detect major anomalies with a low error rate.
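  Illustrative sketch (not the authors' algorithm, which is not detailed in the abstract): for periodic probe data where a fixed threshold fails, one simple pattern-based check is to compare each new sample against the history of the same position within the period using a robust median/MAD band; the period and the tolerance k are the only tuning knobs here.

    # Minimal sketch of anomaly detection on periodic counter data using a
    # robust per-phase band instead of a fixed threshold. An assumed approach,
    # not the algorithm from the paper.
    import numpy as np

    def is_anomalous(history, value, period, slot, k=5.0):
        """history: past samples; slot: position of the new sample within the period."""
        same_slot = history[slot::period]                  # past samples at the same phase
        median = np.median(same_slot)
        mad = np.median(np.abs(same_slot - median)) or 1e-9
        return abs(value - median) > k * 1.4826 * mad      # 1.4826 * MAD ~ robust std dev

    # Hypothetical probe throughput with a daily period of 24 samples.
    rng = np.random.default_rng(0)
    day = np.sin(np.linspace(0, 2 * np.pi, 24)) + 2
    history = np.tile(day, 14) + rng.normal(0, 0.05, 24 * 14)
    print(is_anomalous(history, value=0.0, period=24, slot=12))   # sudden drop -> True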
  Quantifying Cloud Workload Burstiness: New Measures and Models
  Abstract: Diverse cloud applications deployed on demand make for workload burstiness. Burstiness is quantified statistically through different variance measures. These, however, only capture features specific to individual workloads and lack a unified approach applicable to the diversity of applications present in the cloud. This paper focuses on the statistical measures used to quantify cloud workload burstiness. Using a diverse set of real and synthetic workload traces, it identifies different statistical models that uniquely capture workload-specific burstiness. These features are described as standard when traditional variance measures are adequate and nonstandard when such measures become inadequate. The methods employ recent econometric models known as Auto-regressive Conditional Score (ACS) models, motivated by their ability to model time-varying parameters that capture burstiness more accurately than existing methods. This approach has also inspired a novel measure of burstiness, the Normalized Score Index (NSI). Compared to existing measures, the NSI captures burstiness specific to the statistical features of each workload. When standard variance features are observed, the NSI reverts to traditional measures, and when nonstandard features are present, it models them accordingly. The NSI has been applied to a diverse workload set and yields both a static metric and a means by which to track burstiness over a workload's lifecycle.
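  Note: the NSI itself is not defined in the abstract, so the sketch below computes a classical variance-based baseline instead, the burstiness coefficient B = (sigma - mu) / (sigma + mu), simply to make the notion of a normalized burstiness measure concrete; it is explicitly not the paper's NSI.

    # Classical burstiness coefficient B = (sigma - mu) / (sigma + mu):
    # -1 for a perfectly regular series, near 0 for Poisson-like traffic,
    # approaching 1 for heavily bursty series. Shown only as a baseline; it is
    # not the Normalized Score Index proposed in the paper.
    import numpy as np

    def burstiness(samples):
        samples = np.asarray(samples, dtype=float)
        mu, sigma = samples.mean(), samples.std()
        return (sigma - mu) / (sigma + mu)

    steady = np.full(1000, 10.0)                            # constant workload
    bursty = np.r_[np.zeros(990), np.full(10, 1000.0)]      # rare, huge spikes
    print(burstiness(steady), burstiness(bursty))           # -1.0 vs roughly 0.8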
  A Clustering-based Analysis of DPI-labeled Video Flow Characteristics in Cellular Networks
  Abstract: Using a specially instrumented deep packet inspection (DPI) appliance placed inside the core network of a commercial cellular operator, we collect data from almost four million flows produced by a 'heavy-hitter' subset of the customer base. The data contains per-packet information for the first 100 packets in each flow, along with the classification done by the DPI engine. The data is used for unsupervised learning to obtain clusters of typical video flow behaviors, with the intent to quantify the number of such clusters. Among the flows identified as belonging to video applications by the DPI engine, a subset may actually be video application signaling rather than flows carrying actual transfers of video data. Given that DPI-labeled data is used to train supervised machine learning models to identify video flows in encrypted traffic, which is becoming more and more common, the potential presence and structure of such 'noise' signaling flows in the ground truth is important to examine. Here, K-means and DBSCAN clustering is used to cluster the collected data, and several metrics are employed to determine the appropriate cluster count. The results show that there are several typical video application clusters, and that video signaling flows are indeed present in flows labeled as video by the DPI engine.
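  Illustrative sketch (the operator data is not public, so the feature values below are synthetic placeholders): the clustering step named in the abstract, i.e. K-means and DBSCAN over per-flow features derived from the first packets, with the silhouette score as one possible metric for choosing the K-means cluster count.

    # K-means and DBSCAN over synthetic per-flow features (mean packet size,
    # mean inter-arrival time), with the silhouette score used to compare
    # candidate cluster counts. Feature values are placeholders, not real data.
    import numpy as np
    from sklearn.cluster import KMeans, DBSCAN
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    video_like = rng.normal([1200.0, 0.02], [120.0, 0.002], (200, 2))
    signaling_like = rng.normal([200.0, 0.5], [20.0, 0.05], (50, 2))
    flows = StandardScaler().fit_transform(np.vstack([video_like, signaling_like]))

    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(flows)
        print(k, silhouette_score(flows, labels))   # pick the k with the best score

    db_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(flows)   # label -1 marks noise flows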
  Application Switch using DPN for Improving TCP Based Data Center Applications
  Abstract: Current network switches cannot be programmed and flexibly controlled. As a result, the developers of a data center application system, which is composed of software and computers connected by a network, are not able to optimize the behavior of the network switches on which the application runs. Deeply Programmable Network (DPN) switches, on the other hand, can fully analyze packet payloads and be programmed in depth. In our previous work, we introduced an application switch based on DPN. The switch could be deeply programmed, and developers could implement part of the functions of a data center application in the switch. The switch deeply analyzed packets, which is called Deep Packet Inspection (DPI), and provided some functions of the application within the switch. However, the switch did not manage connections and did not support TCP-based communication. In this paper, we propose a method for constructing an application switch that supports TCP-based communication. The method analyzes the IP headers, TCP headers, and payloads of packets. When the switch detects a request it supports, it replies within the corresponding TCP session. We then introduce our implementation and evaluate the performance of our application switch. Our evaluation demonstrates that the switch is able to improve the performance of data center applications.
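  Conceptual stand-in (not the authors' DPN implementation): the behaviour described above, terminating the TCP session, inspecting the payload, and answering supported requests directly from the switch, can be illustrated with an ordinary user-space TCP server; the request format and port below are hypothetical.

    # Toy stand-in for a TCP-aware application switch: inspect the payload of
    # an established TCP session and answer supported requests locally instead
    # of forwarding them. Request format and port are hypothetical.
    import socketserver

    class ApplicationSwitchHandler(socketserver.StreamRequestHandler):
        def handle(self):
            request = self.rfile.readline().strip()
            if request.startswith(b"GET /cached"):
                # Supported request: reply directly within this TCP session.
                self.wfile.write(b"HIT: served by the switch\n")
            else:
                # Unsupported request: a real switch would forward it upstream.
                self.wfile.write(b"MISS: would be forwarded to the origin server\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("127.0.0.1", 9090), ApplicationSwitchHandler) as srv:
            srv.serve_forever()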