Large-scale Distributed Machine Learning
Abstract

Machine learning frequently deals with data whose volume exceeds the capacity of a single machine. To keep the training data in memory for fast access, multiple machines are often used together to train machine learning models in a distributed manner. In this setting the per-machine computational burden is reduced, but expensive inter-machine communication becomes the bottleneck for further accelerating the training of machine learning models.
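As a rough illustration of where this communication cost arises, the sketch below shows a minimal data-parallel gradient step. It is a generic example, not the algorithms presented in the talk; mpi4py, the least-squares objective, and all variable names are assumptions made for the illustration.

    # Minimal data-parallel gradient step (illustrative sketch only).
    # Assumes mpi4py and NumPy; each machine (MPI rank) holds one shard
    # of the training data, X_local and y_local.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    def distributed_gradient_step(w, X_local, y_local, lr=0.1, lam=1e-3):
        # Cheap local work: gradient of the squared loss on this shard.
        g_local = X_local.T @ (X_local @ w - y_local)
        n_local = len(y_local)
        # Expensive part: one round of inter-machine communication per
        # iteration to sum the local gradients and sample counts.
        g_total = comm.allreduce(g_local, op=MPI.SUM)
        n_total = comm.allreduce(n_local, op=MPI.SUM)
        # Gradient of the L2-regularized empirical risk, then one step.
        grad = g_total / n_total + lam * w
        return w - lr * grad

Every iteration of such a method incurs a communication round whose cost grows with the model dimension, which is why the communication cost, rather than the local computation, limits further speedup.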

In this talk, I will introduce state-of-the-art distributed training algorithms for various machine learning tasks broadly covered by the regularized empirical risk minimization problem, including but not limited to binary and multi-class classification, regression, feature selection, and structure learning.
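For reference, regularized empirical risk minimization in its standard form (notation mine, not necessarily that used in the talk) solves

    \min_{w} \; \frac{1}{n} \sum_{i=1}^{n} \ell(w; x_i, y_i) + \lambda\, R(w),

where \ell is a task-specific loss (e.g., the logistic loss for binary classification or the squared loss for regression), R is a regularizer (e.g., \|w\|_1 when feature selection is desired), and \lambda > 0 balances the two terms.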

Speaker: Mr Lee Ching-pei
Date: 14 February 2019 (Thursday)
Time: 15:00 - 16:00

Biography

Mr LEE Ching-pei is a doctoral candidate at the University of Wisconsin-Madison, majoring in Computer Sciences with a minor in Mathematics. Ching-pei is also affiliated with the Wisconsin Institute for Discovery and is the main developer of the distributed machine learning package Distributed LIBLINEAR. Prior to his PhD studies, Ching-pei received an MS degree from the Department of Computer Science and Information Engineering of National Taiwan University. Ching-pei's research interests include nonlinear optimization, distributed and parallel machine learning, and convex analysis.