Seminar announcement – ECE

Title: Hyper-parameter Tuning for ML Models: A Monte-Carlo Tree Search (MCTS) Approach

Speaker: Sanjay Shakkottai (The University of Texas at Austin, USA)

Date/time: Thursday 2 Jan 2020, 4pm (coffee at 3:45pm)

Venue: MP building auditorium, Dept. of ECE (behind the ECE main building)

Abstract: We study the application of online learning techniques to hyper-parameter tuning, a problem of growing importance in machine learning. Modern neural networks have many tunable hyper-parameters, and training even a single configuration can take hours to days. We first cast hyper-parameter tuning as optimizing a multi-fidelity black-box function (which is noiseless) and propose a multi-fidelity tree search algorithm for this problem. We then present extensions of our model and algorithm so that they can function even in the presence of noise. We show that our tree-search-based algorithms can outperform state-of-the-art hyper-parameter tuning algorithms on several benchmark datasets.
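The speaker's algorithm is not specified in this announcement; as a rough illustration of the framing only, the sketch below runs an optimistic tree search over a single hyper-parameter in [0, 1], using cheap low-fidelity evaluations near the root and more expensive high-fidelity ones deeper in the tree. The toy objective, the fidelity schedule, and the width-based exploration bonus are all illustrative assumptions, not the method presented in the talk.

```python
def blackbox(x, fidelity):
    """Toy noiseless multi-fidelity objective (assumption): stands in for
    validation accuracy after `fidelity` training epochs; higher fidelity
    reduces the bias toward the true objective -(x - 0.7)^2 + 1."""
    return -(x - 0.7) ** 2 + 1.0 - 0.5 / (1 + fidelity)

def tree_search(num_expansions=60, max_depth=8):
    """Optimistic tree search over [0, 1]: repeatedly split the leaf whose
    observed value plus an interval-width exploration bonus is largest,
    evaluating each new midpoint at a fidelity that grows with depth."""
    def make_leaf(lo, hi, depth):
        mid = 0.5 * (lo + hi)
        # cheap evaluations near the root, expensive ones deep in the tree
        return (lo, hi, depth, mid, blackbox(mid, fidelity=2 ** depth))

    leaves = [make_leaf(0.0, 1.0, 0)]
    for _ in range(num_expansions):
        expandable = [n for n in leaves if n[2] < max_depth]
        if not expandable:
            break
        # optimistic score: observed value + width of the interval
        node = max(expandable, key=lambda n: n[4] + (n[1] - n[0]))
        leaves.remove(node)
        lo, hi, depth, _, _ = node
        leaves.append(make_leaf(lo, 0.5 * (lo + hi), depth + 1))
        leaves.append(make_leaf(0.5 * (lo + hi), hi, depth + 1))

    best = max(leaves, key=lambda n: n[4])
    return best[3], best[4]  # best hyper-parameter value, best score
```

With the toy objective above, the search concentrates its expensive high-fidelity evaluations around the true optimum at x = 0.7, which is the essential economy of the multi-fidelity formulation.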

Bio: Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is currently the Temple Foundation Endowed Professor No. 3, and a Professor in the Department of Electrical and Computer Engineering. He received the NSF CAREER award in 2004, and was elected as an IEEE Fellow in 2014. His research interests lie at the intersection of algorithms for resource allocation, statistical learning and networks, with applications to wireless communication networks and online platforms.

All are welcome.
