no code implementations • 21 Mar 2024 • Alice Baird, Rachel Manzelli, Panagiotis Tzirakis, Chris Gagne, Haoqi Li, Sadie Allen, Sander Dieleman, Brian Kulis, Shrikanth S. Narayanan, Alan Cowen
In this short white paper, to encourage researchers with limited access to large datasets, the organizers first outline several open-source datasets that are available to the community, and are making several proprietary datasets available for the duration of the workshop.
1 code implementation • 6 Feb 2024 • Christopher Liao, Christian So, Theodoros Tsiligkaridis, Brian Kulis
However, most DG methods assume access to abundant source data in the target label space, a requirement that proves overly stringent for numerous real-world applications, where acquiring source data with the same label space as the target task is prohibitively expensive.
1 code implementation • 5 Feb 2024 • Eric Yang Yu, Christopher Liao, Sathvik Ravi, Theodoros Tsiligkaridis, Brian Kulis
We first show that when an OOD data point is misclassified, the correct class can typically be found among the Top-K predicted classes.
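This Top-K observation is easy to make concrete. Below is a minimal sketch (not the paper's code) with a hypothetical 6-class score vector, showing how one checks whether the correct class lands in the Top-K predictions; `in_top_k` is an illustrative helper, not an API from the paper.

```python
import numpy as np

def in_top_k(logits, label, k=5):
    """Return True if `label` is among the k highest-scoring classes."""
    topk = np.argsort(logits)[-k:]
    return bool(label in topk)

# Hypothetical 6-class scores: the true class (index 2) is not ranked
# first, but it does appear in the Top-5 predictions.
logits = np.array([0.1, 0.3, 0.25, 0.05, 0.2, 0.1])
print(in_top_k(logits, label=2, k=5))  # True
print(in_top_k(logits, label=2, k=1))  # False
```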
no code implementations • 12 Jan 2024 • Zuzhao Ye, Gregory Ciccarelli, Brian Kulis
Data augmentation is a key tool for improving the performance of deep networks, particularly when there is limited labeled data.
1 code implementation • 21 Nov 2023 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
A recent study, WaffleCLIP, demonstrated that similar zero-shot accuracy can be achieved with an ensemble of random descriptors.
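The descriptor-ensembling idea can be sketched without any CLIP dependency: score each class by its mean similarity to a set of descriptor embeddings and take the argmax. The random 8-d vectors below are stand-ins for CLIP text embeddings of random descriptors, and `ensemble_zero_shot` is a hypothetical helper, not code from WaffleCLIP or this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_zero_shot(image_emb, class_descriptor_embs):
    """Score each class by its mean similarity to that class's ensemble of
    descriptor embeddings, then predict the highest-scoring class."""
    scores = [float(np.mean(descs @ image_emb)) for descs in class_descriptor_embs]
    return int(np.argmax(scores))

# Toy setup: 3 classes, 5 descriptor embeddings each, in a fake 8-d space.
# Class 0's descriptors are built to lie near the image embedding.
image_emb = rng.normal(size=8)
class_embs = [image_emb + 0.1 * rng.normal(size=(5, 8)),
              rng.normal(size=(5, 8)),
              rng.normal(size=(5, 8))]
print(ensemble_zero_shot(image_emb, class_embs))
```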
1 code implementation • 4 Oct 2022 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
There is extensive interest in metric learning methods for image retrieval.
no code implementations • 15 Jun 2022 • Christin Jose, Joseph Wang, Grant P. Strimel, Mohammad Omar Khursheed, Yuriy Mishchenko, Brian Kulis
We also show that when our approach is used in conjunction with a max-pooling loss, we reduce false accepts by 25% relative at a fixed latency when compared to a cross-entropy loss.
1 code implementation • 26 May 2022 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
Domain Adaptation (DA) has received widespread attention from deep learning researchers in recent years because of its potential to improve test accuracy with out-of-distribution labeled data.
no code implementations • 2 Nov 2021 • Ali Siahkamari, Durmus Alp Emre Acar, Christopher Liao, Kelly Geyer, Venkatesh Saligrama, Brian Kulis
For the task of convex Lipschitz regression, we establish that our proposed algorithm converges with iteration complexity of $O(n\sqrt{d}/\epsilon)$ for a dataset $\bm X \in \mathbb R^{n\times d}$ and $\epsilon > 0$.
no code implementations • 29 Sep 2021 • Mohammad Omar Khursheed, Christin Jose, Rajath Kumar, GengShen Fu, Brian Kulis, Santosh Kumar Cheekatmalla
In this work, we propose Tiny-CRNN (Tiny Convolutional Recurrent Neural Network) models applied to the problem of wakeword detection, and augment them with scaled dot product attention.
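Scaled dot-product attention itself is standard, so a minimal NumPy sketch of the mechanism the paper adds is straightforward; the shapes below (4 query frames over 6 key/value frames) are illustrative, not the paper's configuration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights

# 4 query frames attending over 6 key/value frames of dimension 16.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 16)), rng.normal(size=(6, 16)), rng.normal(size=(6, 16))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 16)
```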
no code implementations • 20 Jul 2021 • Sivaramakrishnan Sankarapandian, Brian Kulis
Gravitational wave detectors such as LIGO and Virgo are susceptible to various types of instrumental and environmental disturbances known as glitches which can mask and mimic gravitational waves.
no code implementations • 16 May 2021 • Xiao Wang, Wei Jiang, Wei Wang, Shan Liu, Brian Kulis, Peter Chin
The key idea is to replace the image to be compressed with a substitutional one that outperforms the original one in a desired way.
no code implementations • 25 Nov 2020 • Mohammad Omar Khursheed, Christin Jose, Rajath Kumar, GengShen Fu, Brian Kulis, Santosh Kumar Cheekatmalla
In this work, we propose small footprint Convolutional Recurrent Neural Network models applied to the problem of wakeword detection and augment them with scaled dot product attention.
no code implementations • 20 Oct 2020 • Xide Xia, Tianfan Xue, Wei-Sheng Lai, Zheng Sun, Abby Chang, Brian Kulis, Jiawen Chen
We present a novel algorithm for transferring artistic styles of semantically meaningful local regions of an image onto local regions of a target video while preserving its photorealism.
2 code implementations • ICML 2020 • Ali Siahkamari, Aditya Gangrade, Brian Kulis, Venkatesh Saligrama
We present a new piecewise linear regression methodology that fits a difference of convex functions (DC functions) to the data.
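The representation behind this is that any piecewise linear function can be written as a difference of two convex max-affine functions. A minimal evaluation sketch (not the paper's fitting algorithm) with hypothetical coefficients:

```python
import numpy as np

def dc_eval(x, a, b, c, d):
    """f(x) = max_i(a_i x + b_i) - max_j(c_j x + d_j): a difference of two
    convex max-affine pieces, which can represent any piecewise linear f."""
    return float(np.max(a * x + b) - np.max(c * x + d))

# The non-convex "hat" min(x, 2 - x) equals 0 - max(-x, x - 2):
a, b = np.array([0.0]), np.array([0.0])
c, d = np.array([-1.0, 1.0]), np.array([0.0, -2.0])
print(dc_eval(1.0, a, b, c, d))  # 1.0 (the peak of the hat)
print(dc_eval(0.0, a, b, c, d))  # 0.0
```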
no code implementations • ICML 2020 • Kubra Cilingir, Rachel Manzelli, Brian Kulis
Classical linear metric learning methods have recently been extended along two distinct lines: deep metric learning methods, which learn embeddings of the data using neural networks, and Bregman divergence learning approaches, which extend learning from Euclidean distances to more general divergence measures such as divergences over distributions.
3 code implementations • ECCV 2020 • Xide Xia, Meng Zhang, Tianfan Xue, Zheng Sun, Hui Fang, Brian Kulis, Jiawen Chen
Photorealistic style transfer is the task of transferring the artistic style of an image onto a content target, producing a result that is plausibly taken with a camera.
1 code implementation • 20 Aug 2019 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin
However, one critical drawback of current defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy.
2 code implementations • NeurIPS 2020 • Ali Siahkamari, Xide Xia, Venkatesh Saligrama, David Castanon, Brian Kulis
Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning.
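The defining formula is $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y), x - y\rangle$ for a strictly convex $\phi$. A minimal sketch showing how the two named special cases fall out (illustrative code, not the paper's learned divergences):

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return float(phi(x) - phi(y) - grad_phi(y) @ (x - y))

# phi(x) = ||x||^2 recovers the squared Euclidean distance:
sq, grad_sq = (lambda v: v @ v), (lambda v: 2.0 * v)
x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(bregman_divergence(sq, grad_sq, x, y))  # 5.0 == ||x - y||^2

# phi(p) = sum_i p_i log p_i (negative entropy) recovers the KL divergence
# on probability vectors:
negent = lambda p: np.sum(p * np.log(p))
grad_negent = lambda p: np.log(p) + 1.0
p, q = np.array([0.5, 0.5]), np.array([0.25, 0.75])
print(bregman_divergence(negent, grad_negent, p, q))
```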
no code implementations • NIPS Workshop CDNNRIA 2018 • Sivaramakrishnan Sankarapandian, Anil Kag, Rachel Manzelli, Brian Kulis
We describe a training strategy that grows the number of units during training, and show on several benchmark datasets that our model yields architectures that are smaller than those obtained when tuning the number of hidden units on a standard fixed architecture.
no code implementations • 26 Jun 2018 • Rachel Manzelli, Vijay Thakkar, Ali Siahkamari, Brian Kulis
Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models.
11 code implementations • 22 Nov 2017 • Xide Xia, Brian Kulis
While significant attention has been recently focused on designing supervised deep semantic segmentation algorithms for vision tasks, there are many domains in which sufficient supervised pixel-level labels are difficult to obtain.
no code implementations • 26 Jul 2017 • Trevor Campbell, Brian Kulis, Jonathan How
Bayesian nonparametrics are a class of probabilistic models in which the model size is inferred from data.
no code implementations • ICLR 2018 • Ben Usman, Kate Saenko, Brian Kulis
Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions.
no code implementations • 7 Apr 2016 • Ke Jiang, Suvrit Sra, Brian Kulis
Topic models have emerged as fundamental tools in unsupervised machine learning.
no code implementations • 10 Jan 2016 • Robert Finn, Brian Kulis
Second, we bridge the divide between the discrete and continuous likelihoods by illustrating a canonical construction for stochastic processes whose Lévy measure densities are from positive exponential families, and then demonstrate that these processes in fact form the prior, likelihood, and posterior in a conjugate family.
no code implementations • CVPR 2015 • Ke Jiang, Qichao Que, Brian Kulis
We present a simple but powerful reinterpretation of kernelized locality-sensitive hashing (KLSH), a general and popular method developed in the vision community for performing approximate nearest-neighbor searches in an arbitrary reproducing kernel Hilbert space (RKHS).
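For context, the classical (non-kernelized) random-hyperplane LSH that KLSH builds on can be sketched in a few lines; the kernelized version replaces these explicit projections with projections computed in the RKHS. The dimensions and helper name below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_bits(x, hyperplanes):
    """Random-hyperplane LSH: one bit per projection sign.
    Vectors at a small angle tend to agree on most bits."""
    return (hyperplanes @ x > 0).astype(int)

H = rng.normal(size=(16, 8))            # 16 random hyperplanes in R^8
x = rng.normal(size=8)
x_near = x + 0.01 * rng.normal(size=8)  # small perturbation of x
x_far = -x                              # antipodal point

print((hash_bits(x, H) == hash_bits(x_near, H)).mean())  # typically near 1.0
print((hash_bits(x, H) == hash_bits(x_far, H)).mean())   # 0.0: every sign flips
```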
no code implementations • 29 Oct 2014 • Xiangyang Zhou, Jiaxin Zhang, Brian Kulis
Despite strong performance on a number of clustering tasks, spectral graph cut algorithms still suffer from several limitations: first, they require the number of clusters to be specified in advance, though this is often unknown a priori; second, they tend to produce clusters with uniform sizes.
no code implementations • 4 Oct 2014 • Anirban Roychowdhury, Brian Kulis
In this paper, we present a variational inference framework for models involving gamma process priors.
no code implementations • NeurIPS 2013 • Anirban Roychowdhury, Ke Jiang, Brian Kulis
Starting with the standard HMM, we first derive a “hard” inference algorithm analogous to k-means that arises when particular variances in the model tend to zero.
1 code implementation • NeurIPS 2013 • Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence Carin
This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters.
no code implementations • NeurIPS 2012 • Ke Jiang, Brian Kulis, Michael I. Jordan
Links between probabilistic and non-probabilistic learning algorithms can arise by performing small-variance asymptotics, i.e., letting the variance of particular distributions in a graphical model go to zero.
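As a hedged sketch of this argument in the familiar Gaussian mixture case (rather than the specific models treated in the paper), the E-step responsibilities collapse to hard assignments as the variance vanishes:

```latex
r_{ik} \;=\; \frac{\pi_k \exp\!\left(-\|x_i - \mu_k\|^2 / 2\sigma^2\right)}
                  {\sum_j \pi_j \exp\!\left(-\|x_i - \mu_j\|^2 / 2\sigma^2\right)}
\;\xrightarrow[\;\sigma \to 0\;]{}\;
\begin{cases}
  1 & k = \arg\min_j \|x_i - \mu_j\|^2,\\
  0 & \text{otherwise},
\end{cases}
```

so EM reduces to the hard assignment and mean-update steps of k-means.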
no code implementations • NeurIPS 2010 • Prateek Jain, Brian Kulis, Inderjit S. Dhillon
Our result shows that the learned kernel matrices parameterize a linear transformation kernel function and can be applied inductively to new data points.
no code implementations • NeurIPS 2009 • Brian Kulis, Trevor Darrell
Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches.
no code implementations • NeurIPS 2008 • Prateek Jain, Brian Kulis, Inderjit S. Dhillon, Kristen Grauman
Metric learning algorithms can provide useful distance functions for a variety of domains, and recent work has shown good accuracy for problems where the learner can access all distance constraints at once.