Presentations


20 November 2017

From Neural Networks to Deep Learning, Patrick Gallinari, UPMC

After a brief historical perspective on the field, we will introduce the challenges and the main concepts behind Deep Learning techniques. This presentation will offer a global view of the field and will also introduce the main technical concepts that will be further developed and illustrated through a series of applications and challenges in different domains during the day.

AI & Big Data: Challenges and Opportunities in the Automotive Industry, Jean-Marc David, Renault

The talk will address the main challenges and opportunities related to big data & AI in the automotive industry. We will focus on two main aspects:

  • how the flood of data can be used to enhance our internal processes, better understand our customers, and propose new types of services;
  • while the performance of autonomous vehicles has improved tremendously in recent years by ‘learning from data’, this approach still raises issues that must be resolved before we can deliver safe and reliable autonomous vehicles on the road.

BNP Paribas Data & AI Lab: AI Rethinking Banking: BNP Paribas Use Case, Edouard d’Archimbaud, BNP Paribas CIB

80% of the world’s data is unstructured, and it is the same in the bank. The majority of the data we handle is in news, contracts, emails, voice records, … We will explain what goals we pursue with AI in the bank. We will present the platform, powered by deep learning, that we are developing to better handle unstructured data and its challenges.

Radiology in the Era of Deep Learning: What is at Stake?, Pierre Fillard, Therapixel

The advent of deep learning is often perceived as both a blessing and a threat to radiologists. But what is it really about? Where are we today, and where will we be in 5 years? We propose to illustrate these issues through two concrete applications: the prediction of lung cancer from thoracic CT scans and the prediction of breast cancer from mammograms. These two applications have recently been the subject of international challenges in which Therapixel reached 5th and 1st place, respectively.

Deep Learning at UPMC (LIP6, UPMC: Matthieu Cord, Ludovic Denoyer and Patrick Gallinari)

The talk will present recent works from UPMC illustrating some of the most successful concepts of Deep Learning: Convolutional Neural Networks for vision, Recurrent Neural Networks and Deep Reinforcement Learning. This is a series of three short talks, each one focusing on one of these topics, providing a brief introduction to the concept and illustrating its use through a use case.

Visual Question Answering, Matthieu Cord
Multimodal representation learning for text and image has been extensively studied in recent years. Visual Question Answering (VQA) is a complex multimodal task, which aims at answering a question about an image. Large scale datasets have been recently collected, enabling the development of powerful models. In this talk, I will introduce the main ideas to solve this problem. Precise image and text models are required and, most importantly, high level interactions between these two modalities have to be carefully encoded into the model in order to provide the correct answer.
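
As a hedged illustration of how such cross-modal interactions can be encoded (the dimensions, layer choices and element-wise fusion below are assumptions for the sketch, not the specific model of the talk), one simple scheme projects the image feature and the question embedding into a common space and combines them multiplicatively before predicting the answer:

    # Minimal VQA fusion sketch (assumed dimensions and layer choices).
    import torch
    import torch.nn as nn

    class SimpleVQAFusion(nn.Module):
        def __init__(self, img_dim=2048, q_dim=300, joint_dim=512, n_answers=1000):
            super().__init__()
            # Project each modality into a common joint space.
            self.img_proj = nn.Linear(img_dim, joint_dim)
            self.q_proj = nn.Linear(q_dim, joint_dim)
            # Answer prediction from the fused representation.
            self.classifier = nn.Linear(joint_dim, n_answers)

        def forward(self, img_feat, q_feat):
            # The element-wise (Hadamard) product encodes multiplicative
            # interactions between the two modalities.
            joint = torch.tanh(self.img_proj(img_feat)) * torch.tanh(self.q_proj(q_feat))
            return self.classifier(joint)  # scores over candidate answers

    # Usage with random tensors standing in for CNN image features and a question embedding.
    model = SimpleVQAFusion()
    scores = model(torch.randn(4, 2048), torch.randn(4, 300))
    print(scores.shape)  # torch.Size([4, 1000])

Richer fusion schemes (bilinear or attention-based) follow the same pattern, with a more expressive interaction term in place of the element-wise product.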

Budgeted Deep Reinforcement Learning, Ludovic Denoyer
I will explain how reinforcement learning can be used for data processing. My talk will focus on budgeted sequential models able to learn which information to acquire and which computations to apply under budget constraints. Different applications to text, images and recommender systems will be presented.
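
As a minimal sketch of the kind of objective behind such models (the notation, cost scale and discount below are assumptions), the return of a trajectory can be written as the task reward minus a scaled cost for each acquisition or computation the agent decides to perform:

    # Budgeted return sketch: task reward minus a scaled acquisition/computation cost.
    def budgeted_return(rewards, costs, lam=0.1, gamma=0.99):
        """Discounted return with a penalty lam per unit of budget spent."""
        ret = 0.0
        for t, (r, c) in enumerate(zip(rewards, costs)):
            ret += (gamma ** t) * (r - lam * c)
        return ret

    # Example: the agent pays to acquire information at steps 0 and 2 and is rewarded at step 2.
    print(budgeted_return(rewards=[0.0, 0.0, 1.0], costs=[1.0, 0.0, 1.0]))

The agent then maximizes this penalized return, which drives it to acquire only the information that is worth its cost.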

Deep Learning for Spatio-Temporal Data, Patrick Gallinari
Data with spatio-temporal dependencies are present in many fields (ecology, meteorology, satellite imagery, biology, medicine, economics, etc.). Deep Learning has developed sequence models for a set of generic tasks that represent the state of the art in different application domains. However, the spatial dimension is rarely considered explicitly. How can we incorporate this dimension into neural networks? Are current neural models capable of modeling complex spatio-temporal phenomena? The presentation will be organized around these issues. It will introduce recent recurrent neural models for spatio-temporal prediction, and physico-statistical models where prior knowledge of a complex phenomenon is used to guide the development of a deep architecture.
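
One common way to make the recurrence spatially aware, sketched below under assumptions (a generic convolutional recurrent cell, not the specific models of the talk), is to replace the dense transformations of a recurrent cell with convolutions so that the hidden state keeps its spatial layout:

    # Minimal convolutional recurrent cell for spatio-temporal grids (generic sketch).
    import torch
    import torch.nn as nn

    class ConvRNNCell(nn.Module):
        def __init__(self, in_channels, hidden_channels, kernel_size=3):
            super().__init__()
            pad = kernel_size // 2
            # Convolutions replace the dense matrices of a vanilla RNN cell,
            # so the hidden state remains a spatial map.
            self.input_conv = nn.Conv2d(in_channels, hidden_channels, kernel_size, padding=pad)
            self.hidden_conv = nn.Conv2d(hidden_channels, hidden_channels, kernel_size, padding=pad)

        def forward(self, x, h):
            return torch.tanh(self.input_conv(x) + self.hidden_conv(h))

    # Roll the cell over a sequence of 2D frames: (batch, time, channels, height, width).
    cell = ConvRNNCell(in_channels=1, hidden_channels=8)
    frames = torch.randn(2, 5, 1, 32, 32)
    h = torch.zeros(2, 8, 32, 32)
    for t in range(frames.size(1)):
        h = cell(frames[:, t], h)
    print(h.shape)  # torch.Size([2, 8, 32, 32])

Gated variants (ConvLSTM, ConvGRU) follow the same idea with more elaborate update rules.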

Deep Learning in the Movie Industry, Patrick Perez, Technicolor

Computer vision and computer graphics empower the tools used by artists in visual effects and post-production pipelines. Having already transformed the former, deep learning is now rapidly changing the face of the latter. This presentation will survey some of the opportunities that modern AI is opening up for the creation of high-end visual content, with illustrations from research projects conducted in Technicolor research labs.

Scaling up ML, Antoine Bordes, Facebook

This talk will present a series of works from Facebook AI Research aimed at making various training algorithms and models more efficient, compact and fast. We will illustrate this with applications such as image recognition, similarity search, machine translation and language modeling.

End-to-End Deep Learning to Manage Intelligent Data for AI Cities, Serge Palaric, NVIDIA

Data is the lifeblood of the modern city. Today, it is being captured by over 500 million cameras and billions of sensors worldwide, and that number is growing exponentially. This is creating a tsunami of information that is impossible for humans to analyze. AI is the key to turning this information into insight. It is transforming how we capture, inspect, and analyze data to impact everything from public safety, traffic, and parking management to law enforcement and city services.

Large Scale Deep Learning in Practice, Benedikt Wilbertz, Trendiction S. A.

Deep Learning and AI have lately found their way into many commercial products. Nevertheless, transforming the latest research results into a working product can be a painful experience. In this talk, we illustrate a few examples of deep learning for social media analytics and discuss the challenges of running and training models for millions of predictions per day.


22 November 2017

GPU Nested Monte Carlo Techniques for XVA Computations, Stéphane Crépey, LaMME, Université d’Evry Val d’Essonne

Since the 2008 financial crisis, investment banks charge their clients, in the form of rebates with respect to the counterparty-risk-free value of financial derivatives, various add-ons meant to account for counterparty risk and its capital and funding implications. These add-ons are dubbed XVAs, where VA stands for valuation adjustment and X is a catch-all letter to be replaced by C for credit, D for debt, F for funding, M for margin, K for capital (!), and so on. XVAs deeply affect the derivative pricing task by making it global, nonlinear and entity-dependent. They are best computed by nested Monte Carlo simulation run on high-performance concurrent computing architectures. However, although it may seem naturally suited to parallel architectures such as GPUs, nested Monte Carlo applied to XVAs requires various optimizations. In this talk, after a brief survey of the XVA field, we explain some of the essential improvements that GPU nested Monte Carlo brings to their computation: SDE and BSDE simulation, sorting for risk measure computations that may be embedded in MVA and KVA computations, and an efficient simulation of expressions involving indicator functions of (financial counterparty) default times.
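
To make the nested structure concrete, here is a hedged, textbook-style sketch of a nested Monte Carlo estimator for one such adjustment, the CVA (the notation is an assumption, not the talk’s exact formulation):

    \widehat{\mathrm{CVA}}
    \;=\;
    \mathrm{LGD} \cdot \frac{1}{M} \sum_{m=1}^{M}
    \mathbf{1}_{\{\tau^{(m)} \le T\}} \, D\!\big(0, \tau^{(m)}\big)
    \left( \frac{1}{N} \sum_{n=1}^{N} P^{(m,n)}\big(\tau^{(m)}\big) \right)^{\!+}

Here the M outer paths simulate the risk factors and the counterparty default time \tau, while for each outer path the N inner paths estimate the conditional portfolio value P at default; D(0, t) is a discount factor and \mathrm{LGD} the loss given default. The outer and inner simulations offer two natural levels of parallelism on a GPU.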

Clustering in Longitudinal Networks and Exploration of GPU Parallelization, Tabea Rebafka & Fanny Villers, LPMA, UPMC

For the statistical analysis of interaction data that evolve over time, we propose an extension of the stochastic block model in which interactions are modelled by inhomogeneous Poisson processes. Clustering of individuals and parameter estimation are based on a semiparametric variational expectation-maximization algorithm. The utility of our approach is illustrated on several real data examples. As computation is very time-consuming for large networks, we explore possibilities for speeding it up by parallelizing our algorithm on GPU and compare with classical CPU parallelization.
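
To fix ideas, here is a hedged sketch of this kind of model (the notation is an assumption): each individual i belongs to a latent group Z_i, and the interactions between individuals i and j form an inhomogeneous Poisson process whose intensity depends only on their pair of groups:

    Z_i \overset{\text{i.i.d.}}{\sim} \mathcal{M}(1; \pi_1, \dots, \pi_K),
    \qquad
    \big( N_{ij}(t) \big)_{t \in [0,T]} \,\big|\, \{ Z_i = k,\, Z_j = \ell \}
    \;\sim\; \text{Poisson process with intensity } \lambda_{k\ell}(\cdot),

so that the number of interactions between i and j over an interval [s, t] is Poisson with mean \int_s^t \lambda_{k\ell}(u)\,du. The variational EM then alternates between updating an approximate posterior over the group memberships and estimating the proportions \pi_k and the intensities \lambda_{k\ell}.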

Efficient Deployments of Scientific Applications on GPUs, Pierre Fortin, LIP6, UPMC

GPUs perform best for computations that are massively parallel, regular and fine-grained. However, many applications may not present all of these features. Through examples of applications in scientific computing, we will show how to adapt algorithms and implementations in order to make the best use of the GPU compute power.

A High Performance Computing Platform for New Generation Molecular Simulations, Jean-Philip Piquemal, LCT, UPMC

In this talk, I will present the new possibilities of the Tinker molecular modeling package on HPC platforms. A dual strategy targeting both large petascale HPC systems and GPU platforms has been implemented. First, I will introduce the Tinker-HP code, which is a CPU-based, double-precision, massively parallel version of Tinker dedicated to long polarizable molecular dynamics simulations. Tinker-HP provides a high-performance, scalable computing environment for polarizable force fields, giving access to large systems of up to millions of atoms with high-resolution methods. Then I will detail new results of Tinker-OpenMM, the GPU extension of Tinker, with a focus on pharmaceutical applications. Finally, I will conclude with the ongoing efforts towards the use of hybrid CPU/GPU strategies.

Estimation of Numerical Reliability on GPUs, Fabienne Jézéquel, LIP6, UPMC

GPUs offer an interesting computing power for numerical simulations. However, floating-point operations lead to round-off errors that may accumulate and invalidate results. The CADNA library (http://lip6.fr/cadna) enables one to estimate round-off errors in simulation codes on CPU and GPU. It can, for instance, be used to determine whether the numerical quality of results computed in single precision on GPU is satisfactory. We show how to execute CUDA codes with CADNA to estimate their numerical reliability.
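
As a simple illustration of the kind of single-precision round-off accumulation such a tool is designed to detect (a plain NumPy sketch on CPU, deliberately not using the CADNA API), naive sequential summation in float32 drifts away from the exact result:

    # Accumulating 0.1 in float32: round-off errors add up and the sum drifts.
    import numpy as np

    n = 100_000
    naive32 = np.float32(0.0)
    for _ in range(n):              # sequential single-precision accumulation
        naive32 += np.float32(0.1)

    exact = 0.1 * n                 # double-precision reference, 10000.0
    print(naive32, exact)           # the float32 sum ends up around 9998.6, not 10000

Tools such as CADNA estimate, for each computed value, how many digits can actually be trusted, which is exactly the question one needs to answer before settling for single precision on GPU.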