Discrete Structures in Machine Learning 2017

Traditionally, machine learning has focused on methods in which objects reside in continuous domains. The goal of this workshop is to advance state-of-the-art methods in machine learning that involve discrete structures.

Models with ultimately discrete solutions play an important role in machine learning. At its core, statistical machine learning is concerned with making inferences from data, and when the underlying variables of the data are discrete, both inferring the model and making predictions with the inferred model are inherently discrete algorithmic problems. Many of these problems are notoriously hard, and even those that are theoretically tractable become intractable in practice as the amount of data grows. As a result, standard theoretical models and off-the-shelf algorithms become impractical, intractable, or both.

While many problems are hard in the worst case, the problems of practical interest are often much better behaved and can be modeled in ways that make them tractable. Indeed, many discrete problems in machine learning possess beneficial structure, and such structure has been an important ingredient in many successful (approximate) solution strategies. Examples include submodularity, marginal polytopes, symmetries, and exchangeability.
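
As a brief illustration of the first of these, submodularity formalizes a diminishing-returns property: a set function f defined on subsets of a ground set V is submodular if

    f(A \cup \{v\}) - f(A) \;\ge\; f(B \cup \{v\}) - f(B) \quad \text{for all } A \subseteq B \subseteq V,\ v \in V \setminus B.

This kind of structure is one reason approximate solution strategies can succeed: for monotone submodular objectives under a cardinality constraint, for example, the simple greedy algorithm is guaranteed a (1 - 1/e)-approximation of the optimum (Nemhauser, Wolsey, and Fisher, 1978).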

Machine learning, algorithms, discrete mathematics, and combinatorics, as well as applications in computer vision, speech, NLP, biology, and network analysis, are all active areas of research, each with an increasingly large body of foundational knowledge. The workshop aims to ask questions that enable communication across these fields. In particular, this year we aim to investigate how combinatorial structures allow us to capture complex, high-order dependencies in discrete learning problems prevalent in deep learning, social networks, and related areas. Emphasis will be placed on uncertainty and on structure that arises when problem instances are estimated from data.

Invited Talks

Call for Papers

Discrete optimization problems and combinatorial structures are ubiquitous in machine learning. They arise with discrete labels that have complex dependencies, in structured estimators, in learning with graphs, partitions, and permutations, and when selecting informative subsets of data or features.

What are efficient algorithms for handling such problems? Can we robustly solve them in the presence of noise? What about streaming or distributed settings? Which models are computationally tractable and rich enough for applications? What theoretical worst-case bounds can we show? What explains good performance in practice?

Such questions are the theme of the DISCML workshop. It aims to bring together theorists and practitioners to explore new applications, models and algorithms, and mathematical properties and concepts that can help learning with complex interactions and discrete structures.

We invite high-quality submissions that present recent results related to discrete and combinatorial problems in machine learning, as well as submissions that discuss open problems or controversial questions and observations, e.g., missing theory to explain why algorithms work well on certain instances but not in general, or illuminating worst-case examples. We also welcome descriptions of well-tested software and benchmarks.

Areas of interest include, but are not restricted to:

  • discrete optimization in the context of deep learning
  • graph algorithms
  • continuous relaxations
  • learning and inference in discrete probabilistic models
  • algorithms for large data (streaming, sketching, distributed)
  • online learning
  • new applications

Submissions

Please submit contributions in NIPS 2017 format (max. 6 pages, non-anonymous) via EasyChair.

Submission deadline: November 1, 2017.

Organizers