Xiaowei Huang (U Liverpool)
Deep learning techniques have proven successful in a number of tasks, such as image classification, robotic control, and natural language processing. This motivates their application to broader industrial sectors, including safety-critical sectors, such as the automotive, healthcare, and avionics sectors, and business-critical sectors, such as financial services.
There is an urgent need to certify the safety of learning-enabled systems, i.e., systems with (deep) learning components, as such systems are increasingly deployed and interact with human operators. This talk will review recent progress on formal verification and coverage-guided testing, and discuss whether and how they can be utilised to support the certification of deep learning. Existing formal verification techniques, including layer-wise refinement, reduction to global optimisation, and reduction to constraint solving, are able to provide provable guarantees on their results, but may suffer from scalability problems. Coverage-guided testing, on the other hand, is able to intensively test deep learning models with a large number of test cases generated under the guidance of coverage metrics, but research is needed to determine the relation between coverage metrics and the safety risks of deep learning. While both verification and testing can provide evidence for low-level claims, such as the robustness of a neural network for a given input, it is desirable to know how to utilise this low-level evidence to support high-level safety claims of deep learning, such as the rate of failure over a number of new inputs.
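To make the idea of a coverage metric concrete, here is a minimal sketch of neuron coverage, one of the simplest metrics used in coverage-guided testing of neural networks. The two-layer ReLU network, its random weights, and the activation threshold are all illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer ReLU network with random weights (purely illustrative).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def activations(x):
    """Hidden-layer activations of the toy network for input x."""
    return np.maximum(0.0, x @ W1 + b1)

def neuron_coverage(test_inputs, threshold=0.0):
    """Fraction of hidden neurons activated above `threshold`
    by at least one test input (a simple coverage metric)."""
    covered = np.zeros(W1.shape[1], dtype=bool)
    for x in test_inputs:
        covered |= activations(x) > threshold
    return covered.mean()

tests = [rng.normal(size=4) for _ in range(100)]
print(f"neuron coverage: {neuron_coverage(tests):.2f}")
```

A coverage-guided tester would then generate new inputs that try to raise this fraction; the open question raised in the abstract is how such a fraction relates to actual safety risk.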
Wolfgang Kellerer (TU Munich)
The trend towards softwarized networks provides ample opportunities and hence a high degree of flexibility in planning and operating communication networks. Such flexibility supports the realization of novel applications with widely varying requirements. Many emerging applications pose stringent dependability requirements, such as industrial communication, autonomous vehicles, and telepresence and teleoperation in healthcare. In particular, ongoing research on 5G networks focuses on predictable low-latency communication. This raises the question of to what extent softwarized networks also enable such predictability. As network performance relies on the underlying switches, we take a closer look at Software-Defined Networking infrastructure, and SDN switches in particular, as enablers of softwarized networks. This work presents an empirical study of the predictability of SDN switches, with a focus on latency, and addresses the question of modeling network latency with SDN switches based on Network Calculus. To this end, we first benchmark seven hardware OpenFlow switches. Our measurement results reveal several unexpected and unpredictable behaviors and performance characteristics. In particular, we observe unpredictable behaviors related to flow management and buffer management. We further uncover unexpected overhead introduced by conventional quality-of-service mechanisms such as priority queueing, which can lead to violations of latency guarantees. In a second step, we extend our empirical investigations to small networks with a comprehensive measurement campaign of low-cost, low-capacity SDN hardware switches. We propose a novel measurement-based methodology that uses deterministic network calculus to derive a reliable performance model of a given switch. Our experiments with the Zodiac FX switch show that the derived models are accurate enough to actually provide deterministic end-to-end guarantees with low-cost softwarized network devices.
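As a minimal sketch of the kind of bound deterministic network calculus provides, the following computes the classical worst-case delay for a token-bucket arrival curve crossing a rate-latency service curve. The curve shapes are the standard textbook choices, and the numbers are hypothetical, not the paper's measured Zodiac FX parameters.

```python
def dnc_delay_bound(burst_b, rate_r, service_R, latency_T):
    """Worst-case delay for a token-bucket arrival curve
    alpha(t) = b + r*t crossing a rate-latency service curve
    beta(t) = R * max(t - T, 0). Valid when r <= R; the bound is
    the maximal horizontal distance between the curves: T + b/R."""
    assert rate_r <= service_R, "stability requires r <= R"
    return latency_T + burst_b / service_R

# Hypothetical numbers: 10 kbit burst, 1 Mbit/s arrival rate,
# a 10 Mbit/s switch with 50 microseconds of processing latency.
d = dnc_delay_bound(burst_b=10e3, rate_r=1e6, service_R=10e6, latency_T=50e-6)
print(f"delay bound: {d * 1e3:.3f} ms")
```

In the measurement-based methodology described above, the service-curve parameters (R and T here) would be fitted from benchmarks of the actual switch rather than assumed.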
Francois Baccelli (U of Texas at Austin)
This talk features networks of coupled processor sharing queues in the Euclidean space, where customers arrive according to independent Poisson point processes at every queue, are served, and then leave the network. The coupling is through the service rates. In any given queue, this rate is inversely proportional to the interference seen by this queue, which is determined by the load in neighboring queues, attenuated by some distance-based path-loss function.
The model is a discrete version of a spatial birth and death process where customers arrive in the Euclidean space according to Poisson rain and leave it when they have transferred an exponential file, assuming that the instantaneous rate of each transfer is determined through information theory by the signal-to-interference-and-noise ratio experienced by the user.
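The two coupling mechanisms above can be illustrated with a toy computation: each queue's service rate is a Shannon-type rate whose interference term sums the loads of the other queues attenuated by a power-law path loss. The one-dimensional layout, the path-loss exponent, and all numeric values are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def shannon_rate(signal, interference, noise, bandwidth=1.0):
    """Information-theoretic rate B * log2(1 + SINR)."""
    return bandwidth * np.log2(1.0 + signal / (interference + noise))

def service_rates(positions, loads, signal=1.0, noise=0.1, alpha=4.0):
    """Service rate at each queue, where the interference is the load of
    the other queues attenuated by a power-law path loss |x - y|^(-alpha)
    (an illustrative choice of distance-based path-loss function)."""
    rates = []
    for i, xi in enumerate(positions):
        interf = sum(loads[j] * abs(xi - positions[j]) ** (-alpha)
                     for j in range(len(positions)) if j != i)
        rates.append(shannon_rate(signal, interf, noise))
    return rates

# Three queues on a line with equal loads: the middle queue sees
# more interference, so it is served more slowly.
rates = service_rates(positions=[0.0, 1.0, 2.0], loads=[1.0, 1.0, 1.0])
print([round(r, 3) for r in rates])
```

This captures the feedback loop behind the stability question studied in the talk: a heavily loaded queue slows down its neighbors, which in turn increases their loads.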
The discrete and the continuous models will be discussed, both in finite and infinite domains. The stability condition is identified. The minimal stationary regime is built using coupling from the past techniques.
The mean queue size of this minimal stationary regime is determined in closed form using the rate conservation principle of Palm calculus. Some bounds on the tail of latency will be discussed.
In infinite domains, when the stability condition holds, for all bounded initial conditions, there is weak convergence to this minimal stationary regime; however, there exist initial conditions for which all queue sizes converge to infinity.
Joint work with S. Foss and A. Sankararaman.