Supported by
(sponsor logos)
Additional cooperation from
(cooperating-organization logos)

Fourth International Workshop on Symbolic-Neural Learning (SNL-2020)

June 30-July 1, 2020
Osaka International Convention Center 12F (Osaka, Japan)

Symbolic-neural learning involves deep learning methods in combination with symbolic structures. A "deep learning method" is taken to be a learning process based on gradient descent on real-valued model parameters. A "symbolic structure" is a data structure involving symbols drawn from a large vocabulary; for example, sentences of natural language, parse trees over such sentences, databases (with entities viewed as symbols), and the symbolic expressions of mathematical logic or computer programs.
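The combination described above can be illustrated with a minimal sketch (not from the workshop itself; the vocabulary, data, and loss are invented for illustration): symbols from a small vocabulary are mapped to real-valued embedding vectors, and those vectors are trained by gradient descent on a logistic loss over symbolic co-occurrence pairs.

```python
import numpy as np

# Hypothetical toy example: discrete symbols ("cat", "sat", ...) are attached
# to real-valued embedding vectors, which are the model parameters trained
# by gradient descent -- the pairing the workshop calls symbolic-neural learning.
rng = np.random.default_rng(0)
vocab = ["cat", "sat", "mat", "dog", "ran"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 4
E = rng.normal(scale=0.1, size=(len(vocab), dim))  # symbol embeddings

# Symbolic training data: pairs that co-occur (label 1) or do not (label 0).
pairs = [("cat", "sat", 1.0), ("sat", "mat", 1.0),
         ("cat", "ran", 0.0), ("dog", "mat", 0.0)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(200):
    for a, b, y in pairs:
        i, j = idx[a], idx[b]
        p = sigmoid(E[i] @ E[j])   # predicted co-occurrence probability
        g = p - y                  # gradient of the logistic loss w.r.t. the score
        gi, gj = g * E[j], g * E[i]
        E[i] -= lr * gi            # gradient-descent update on real-valued
        E[j] -= lr * gj            # parameters attached to discrete symbols

def score(a, b):
    return sigmoid(E[idx[a]] @ E[idx[b]])
```

After training, co-occurring symbol pairs such as ("cat", "sat") score near 1, while non-co-occurring pairs such as ("dog", "mat") score near 0, even though the symbols themselves are discrete.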
Symbolic-neural learning makes it possible to model interactions between different modalities: speech, vision, and language. Such multimodal information processing is crucial for deploying research outcomes in the real world.
In response to the growing needs of and attention to multimodal research, this year's SNL workshop features research on "Beyond modality: Research across speech, vision, and language boundaries."
Topics of interests include, but are not limited to, the following areas:

  • Speech, vision, and natural language interactions in robotics
  • Multimodal and grounded language processing
  • Multimodal QA and translation
  • Dialogue systems
  • Language as a mechanism to structure and reason about visual perception
  • Image caption generation and image generation from text
  • General knowledge question answering
  • Reading comprehension
  • Textual entailment
Deep learning systems across these areas share various architectural ideas, including word and phrase embeddings, self-attention networks, recurrent neural networks (LSTMs and GRUs), and various memory mechanisms. Certain linguistic and semantic resources may also be relevant across these applications, for example dictionaries, thesauri, WordNet, FrameNet, Freebase, DBpedia, parsers, named entity recognizers, coreference systems, knowledge graphs, and encyclopedias.

The workshop consists of invited oral presentations and contributed poster presentations.

Organizing Committee:

Yasushi Yagi (Chair) Osaka University, Osaka, Japan
Yuki Arase Osaka University, Osaka, Japan
Sadaoki Furui Toyota Technological Institute at Chicago, Chicago, USA
Tomoko Matsui The Institute of Statistical Mathematics, Tokyo, Japan
David McAllester Toyota Technological Institute at Chicago, Chicago, USA
Yutaka Sasaki (Treasurer) Toyota Technological Institute, Nagoya, Japan
Koichi Shinoda Tokyo Institute of Technology, Tokyo, Japan
Masashi Sugiyama RIKEN Center for AIP and the University of Tokyo, Tokyo, Japan
Jun'ichi Tsujii AIST AI Research Center, Tokyo, Japan and
the University of Manchester, Manchester, UK

Program Committee:

Yuki Arase (Chair) Osaka University, Osaka, Japan
Nakamasa Inoue Tokyo Institute of Technology, Tokyo, Japan
Daichi Mochihashi The Institute of Statistical Mathematics, Tokyo, Japan
Greg Shakhnarovich Toyota Technological Institute at Chicago, Chicago, USA
Hiroya Takamura AIST AI Research Center, Tokyo, Japan and Tokyo Institute of Technology, Tokyo, Japan
Norimichi Ukita Toyota Technological Institute, Nagoya, Japan
Kazuyoshi Yoshii RIKEN Center for AIP and Kyoto University, Kyoto, Japan

Local Arrangements Committee:

Tomoyuki Kajiwara (Local Chair) Osaka University, Osaka, Japan
Chenhui Chu (Web Chair) Osaka University, Osaka, Japan

Previous Workshops: