2016 Workshop on Visualization for Deep Learning

[Teaser image. Credit: Deep Dream Generator and Google Inceptionism]

The workshop will be held in the Astor room (7th floor) of the Marriott on Thursday. The schedule is available now!

Abstract

Deep networks have had a profound impact across machine learning research and in many application areas. Yet DNNs are complex to design and train: they are non-linear systems that almost always have many local optima and are often sensitive to training parameter settings and initial state. Systematic optimization of structure and hyperparameters is possible, e.g., with Bayesian optimization, but it is hampered by the expense of training each candidate design on realistic datasets. The search for the best design principles is still ongoing. We argue that visualization can play an essential role in understanding DNNs and in developing new design principles. With rich tools for visual exploration of networks during training and inference, one should be able to form closer ties between theory and practice: validating expected behaviors, and exposing the unexpected in ways that can lead to new insights.

View accepted papers for more details. Also, here is a list of past reference papers related to these topics. Let us know if you can recommend more!

Important Dates

Submission deadline: 9AM PDT, May 6, 2016

Acceptance notification: May 10, 2016

Workshop: June 23, 2016 (Astor at Marriott Marquis)

Invited Speakers

Martin Wattenberg (Google)
Christopher Olah (Google)
Luke Yeager (NVIDIA)
Junyan Zhu (University of California, Berkeley)

Organizers

John Canny (UC Berkeley, canny@berkeley.edu)
Polo Chau (Georgia Tech, polo@gatech.edu)
Biye Jiang (UC Berkeley, bjiang@berkeley.edu)
Aditya Khosla (MIT, khosla@csail.mit.edu)

Program Committee

Byron Boots, Georgia Tech
Jeff Clune, University of Wyoming
Shixia Liu, Tsinghua University
Wojciech Samek, Heinrich Hertz Institute
Le Song, Georgia Tech
Yingnian Wu, University of California, Los Angeles
Jason Yosinski, Cornell University
Junyan Zhu, University of California, Berkeley


Topics

Likely topics for the workshop include, but are not limited to:

• Directly visualizing the activations and parameters in intuitive aggregates (see the sketch after this list)

• Visualizing weights as features

• Visualizing gradient aggregates during training

• Improving interpretability of networks:

  • Localizing “responsibility” in the network for particular outputs

  • Sensitivity/stability of network behavior

• Visualizing loss function geometry and the trajectory of the gradient descent process

• Visual representation of the input-output mapping of the network

• Visualizing alternative structures and their performance

• Monitoring/debugging the training process, e.g., to detect saddle points, local optima, or saturated units

• Visualizing distributed training methods across a cluster

• Using animation in network visualization

• Interactive visualizations for exploration or parameter tuning

• Software architectures for effective visualization

• Visualization and interaction user interfaces
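To make the first two topics concrete, here is a minimal, framework-agnostic sketch in Python (NumPy and Matplotlib) of visualizing weights as features and aggregating activations. The arrays `W` and `acts` are hypothetical stand-ins for tensors exported from any trained network; the synthetic data at the bottom exists only so the script runs end to end.

```python
import numpy as np
import matplotlib.pyplot as plt

def normalize(x):
    """Rescale an array to [0, 1] for display."""
    x = x - x.min()
    return x / (x.max() + 1e-8)

def show_filters(W, cols=8):
    """Tile first-layer conv filters of shape (num_filters, 3, k, k) as RGB patches."""
    n = W.shape[0]
    rows = int(np.ceil(n / cols))
    fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
    for i, ax in enumerate(np.atleast_1d(axes).flat):
        ax.axis('off')
        if i < n:
            # Channels-last so imshow interprets each filter as an RGB image.
            ax.imshow(normalize(W[i].transpose(1, 2, 0)))
    fig.suptitle('First-layer filters as features')
    plt.show()

def show_activation_histogram(acts, name='layer'):
    """Aggregate a layer's activations into a single histogram."""
    plt.hist(acts.ravel(), bins=100)
    plt.title('Activation distribution: ' + name)
    plt.xlabel('activation value')
    plt.ylabel('count')
    plt.show()

# Synthetic stand-ins for tensors exported from a trained network:
show_filters(np.random.randn(32, 3, 7, 7))                                   # hypothetical conv1 weights
show_activation_histogram(np.maximum(0.0, np.random.randn(256, 4096)), 'fc1')  # hypothetical ReLU outputs
```

In practice one would export real weights and activations from the training framework (e.g., as NumPy arrays) and inspect the plots for dead filters, saturated units, or activation distributions that shift across training epochs.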

Call for Papers

We accept many kinds of papers, including (but not limited to) extended abstracts, short papers, work-in-progress papers, and demo papers. You may also submit previously published work as a 2-page summary.

We encourage submissions of relevant work that has been previously published or that will be presented at the main conference. Accepted papers will be posted on the workshop website and will not appear in the official ICML proceedings.

Submissions must be in PDF, written in English, no more than 8 pages long (shorter papers are welcome), and formatted according to the standard ICML 2016 guidelines. Submissions do not have to be anonymized.

For accepted papers, at least one author must attend the workshop to present the work.

Submission website: https://cmt3.research.microsoft.com/VDL2016