Monday, April 22, 2019

Videos: New Deep Learning Techniques, February 5 - 9, 2018, IPAM Workshop

Overview

In recent years, artificial neural networks (a.k.a. deep learning) have significantly advanced the fields of computer vision, speech recognition, and natural language processing. This success relies on the availability of large-scale datasets, the development of affordable high-performance computing, and basic deep learning operations that are sound and fast because they assume that data lie on Euclidean grids. However, not all data live on regular lattices. 3D shapes in computer graphics are Riemannian manifolds. In neuroscience, brain activity (fMRI) is encoded on the structural connectivity network (sMRI). In genomics, the human body's functionality is expressed through DNA, RNA, and proteins that form the gene regulatory network (GRN). In social sciences, people interact through networks. Finally, data in communication networks are structured by graphs such as the Internet or road traffic networks.

Deep learning, which was originally developed for computer vision, cannot be directly applied to these highly irregular domains, and new classes of deep learning techniques must be designed. This is highly challenging, as most standard data analysis tools cannot be used on heterogeneous data domains. The workshop will bring together experts in mathematics (statistics, harmonic analysis, optimization, graph theory, sparsity, topology), machine learning (deep learning, supervised & unsupervised learning, metric learning) and specific application domains (neuroscience, genetics, social science, computer vision) to establish the current state of these emerging techniques and discuss the next directions.
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.
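
As a concrete illustration of the non-Euclidean issue raised in the overview, here is a minimal sketch (mine, not taken from any of the talks) of a graph convolutional layer in the style of Kipf & Welling: instead of sliding a fixed filter over a regular grid, node features are propagated through a symmetrically normalized adjacency matrix. All variable names and the toy graph below are illustrative assumptions.

import numpy as np

def gcn_layer(A, X, W):
    # One graph-convolution layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)    # aggregate neighbors, transform, ReLU

# Toy example: a 4-node path graph with 3-dimensional node features,
# mapped to 2-dimensional output features.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, X, W).shape)   # -> (4, 2)

The point of the sketch is only that the "filter" here is defined by the graph's connectivity rather than by a fixed grid neighborhood, which is the gap several of the talks below (Bresson, Monti, Bruna, Leskovec, Bronstein) address.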
Here are the videos (and some slides):
Samuel Bowman (New York University), Toward natural language semantics in learned representations
Emily Fox (University of Washington), Interpretable and Sparse Neural Network Time Series Models for Granger Causality Discovery
Ellie Pavlick (University of Pennsylvania), Should we care about linguistics?
Leonidas Guibas (Stanford University), Knowledge Transport Over Visual Data
Yann LeCun (New York University), Public Lecture: Deep Learning and the Future of Artificial Intelligence
Alán Aspuru-Guzik (Harvard University), Generative models for the inverse design of molecules and materials
Daniel Rueckert (Imperial College), Deep learning in medical imaging: Techniques for image reconstruction, super-resolution and segmentation
Kyle Cranmer (New York University), Deep Learning in the Physical Sciences
Stéphane Mallat (École Normale Supérieure), Deep Generative Networks as Inverse Problems
Michael Elad (Technion - Israel Institute of Technology), Sparse Modeling in Image Processing and Deep Learning
Yann LeCun (New York University), Public Lecture: AI Breakthroughs & Obstacles to Progress, Mathematical and Otherwise
Xavier Bresson (Nanyang Technological University, Singapore), Convolutional Neural Networks on Graphs
Federico Monti (Università della Svizzera italiana), Deep Geometric Matrix Completion: a Geometric Deep Learning approach to Recommender Systems
Joan Bruna (New York University), On Computational Hardness with Graph Neural Networks
Jure Leskovec (Stanford University), Large-scale Graph Representation Learning
Arthur Szlam (Facebook), Composable planning with attributes
Yann LeCun (New York University), A Few (More) Approaches to Unsupervised Learning
Sanja Fidler (University of Toronto), Teaching Machines with Humans in the Loop
Raquel Urtasun (University of Toronto), Deep Learning for Self-Driving Cars
Pratik Chaudhari (University of California, Los Angeles (UCLA)), Unraveling the mysteries of stochastic gradient descent on deep networks
Stefano Soatto (University of California, Los Angeles (UCLA)), Emergence Theory of Deep Learning
Tom Goldstein (University of Maryland), What do neural net loss functions look like?
Stanley Osher (University of California, Los Angeles (UCLA)), New Techniques in Optimization and Their Applications to Deep Learning and Related Inverse Problems
Michael Bronstein (USI Lugano, Switzerland), Deep functional maps: intrinsic structured prediction for dense shape correspondence
Sainbayar Sukhbaatar (New York University), Deep Architecture for Sets and Its Application to Multi-agent Communication
Zuowei Shen (National University of Singapore), Deep Learning: Approximation of functions by composition
Wei Zhu (Duke University), LDMnet: low dimensional manifold regularized neural networks

Join the CompressiveSensing subreddit or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
