Peer-reviewed publications and pre-prints.

  • Better schedules for low precision training of deep neural networks

    Venue: published in Machine Learning Journal 2024

    Keywords: low precision training, hyperparameter schedules, cyclic precision training

  • Cold Start Streaming Learning for Deep Networks

    Venue: currently under review

    Keywords: online learning, streaming learning, neural networks, deep learning

  • Current progress and open challenges for applying deep learning across the biosciences

    Venue: published in Nature Communications, Volume 13 (2022)

    Keywords: deep learning, computational biology, perspective and review

  • i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery

    Venue: oral presentation at L4DC 2022

    Keywords: neural network pruning, sparse signal recovery, non-convex optimization

  • PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

    Venue: poster at ICLR 2022

    Keywords: graph convolutional networks (GCNs), distributed training, pipelined communication

  • How much pre-training is enough to discover a good subnetwork?

    Venue: published in TMLR 2024

    Keywords: lottery ticket hypothesis, neural network pruning, conditional gradient method, overparameterization

  • Exceeding the Limits of Visual-Linguistic Multi-Task Learning

    Venue: pre-print (internship project at Salesforce)

    Keywords: multi-modal learning, transformers, multi-task learning, transfer learning

  • REX: Revisiting Budgeted Training with an Improved Schedule

    Venue: conference paper at MLSys 2022

    Keywords: learning rate decay, hyperparameter schedules, efficient training

  • ResIST: Layer-Wise Decomposition of ResNets for Distributed Training

    Venue: poster at UAI 2022

    Keywords: residual networks, distributed training, efficient training

  • GIST: Distributed Training for Large-Scale Graph Convolutional Networks

    Venue: published in Journal of Applied and Computational Topology 2023

    Keywords: graph convolutional networks (GCNs), distributed training, overparameterization, efficient training

  • Distributed Learning of Deep Neural Networks using Independent Subnet Training

    Venue: published in PVLDB, Volume 15 (2022)

    Keywords: fully-connected neural networks, distributed training, efficient training

  • Demon: Momentum Decay for Improved Neural Network Training

    Venue: conference paper at ICASSP 2022

    Keywords: non-convex optimization, momentum, hyperparameter schedules

  • E-Stitchup: Data Augmentation for Pre-Trained Embeddings

    Venue: undergraduate honors thesis, UT Austin, 2020

    Keywords: transfer learning, mixup, confidence calibration, out-of-distribution detection

  • Functional Generative Design of Mechanisms with RNNs and Novelty Search

    Venue: conference paper at GECCO 2019

    Keywords: genetic algorithms, novelty search, recurrent neural networks (RNNs), generative design