Search Results for author: David Jacobs

Found 33 papers, 16 papers with code

GaNI: Global and Near Field Illumination Aware Neural Inverse Rendering

no code implementations 22 Mar 2024 Jiaye Wu, Saeed Hadadan, Geng Lin, Matthias Zwicker, David Jacobs, Roni Sengupta

In this paper, we present GaNI, a Global and Near-field Illumination-aware neural inverse rendering technique that can reconstruct geometry, albedo, and roughness parameters from images of a scene captured with co-located light and camera.

Inverse Rendering

Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation

no code implementations 27 Jun 2023 Jiaye Wu, Sanjoy Chowdhury, Hariharmano Shanmugaraja, David Jacobs, Soumyadip Sengupta

We then finetune different algorithms on our MAW dataset to significantly improve the quality of the reconstructed albedo both quantitatively and qualitatively.

Intrinsic Image Decomposition Inverse Rendering

Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models

no code implementations ICCV 2023 Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, Yogesh Balaji

Despite tremendous progress in generating high-quality images using diffusion models, synthesizing a sequence of animated frames that are both photorealistic and temporally coherent is still in its infancy.

Image Generation Text-to-Video Generation +1

LD-ZNet: A Latent Diffusion Approach for Text-Based Image Segmentation

no code implementations ICCV 2023 Koutilya PNVR, Bharat Singh, Pallabi Ghosh, Behjat Siddiquie, David Jacobs

First, we show that the latent space of LDMs (z-space) is a better input representation compared to other feature representations like RGB images or CLIP encodings for text-based image segmentation.

Image Classification Image Segmentation +2
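
A minimal sketch of the idea above, using an LDM's latent (z-space) encoding of the image, rather than raw RGB or CLIP features, as the input to a text-conditioned segmentation head. Here `vae_encoder`, `text_encoder`, and `seg_head` are hypothetical stand-ins, not the paper's architecture.

```python
# Hedged sketch: segment with latent (z-space) features instead of raw pixels.
# vae_encoder, text_encoder, and seg_head are hypothetical callables.
import torch

def segment_with_latents(image, prompt, vae_encoder, text_encoder, seg_head):
    with torch.no_grad():
        z = vae_encoder(image)          # LDM latent, e.g. (B, C_z, H/8, W/8), instead of RGB
        t = text_encoder(prompt)        # embedding of the text query
    mask_logits = seg_head(z, t)        # segmentation head conditioned on the text embedding
    return torch.sigmoid(mask_logits)   # per-pixel probability of the queried object
```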

Hyperbolic Contrastive Learning for Visual Representations beyond Objects

1 code implementation CVPR 2023 Songwei Ge, Shlok Mishra, Simon Kornblith, Chun-Liang Li, David Jacobs

To exploit such a structure, we propose a contrastive learning framework where a Euclidean loss is used to learn object representations and a hyperbolic loss is used to encourage representations of scenes to lie close to representations of their constituent objects in a hyperbolic space.

Contrastive Learning Image Classification +5
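
A minimal sketch of the objective described above, assuming object and scene embeddings have already been projected into the Poincare ball (e.g. by norm clipping); the loss weighting and the authors' exact formulation may differ.

```python
# Sketch: Euclidean InfoNCE on object crops plus a hyperbolic term that pulls
# scene embeddings toward their constituent objects. Function names are illustrative.
import torch
import torch.nn.functional as F

def poincare_distance(u, v, eps=1e-5):
    """Geodesic distance on the Poincare ball for row-wise pairs u, v (inside the unit ball)."""
    sq = torch.sum((u - v) ** 2, dim=-1)
    nu = torch.clamp(torch.sum(u ** 2, dim=-1), max=1 - eps)
    nv = torch.clamp(torch.sum(v ** 2, dim=-1), max=1 - eps)
    arg = 1 + 2 * sq / ((1 - nu) * (1 - nv))
    return torch.acosh(torch.clamp(arg, min=1 + eps))

def euclidean_info_nce(z1, z2, tau=0.1):
    """Standard contrastive (InfoNCE) loss between two views of object crops."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def hyperbolic_scene_object_loss(scene_emb, object_emb):
    """Encourage scene embeddings to lie close to their objects in hyperbolic space."""
    return poincare_distance(scene_emb, object_emb).mean()

# total = euclidean_info_nce(obj_v1, obj_v2) + lam * hyperbolic_scene_object_loss(scene, obj)
```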

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

1 code implementation 30 Oct 2022 Shlok Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan

Our framework is a minimal and conceptually clean synthesis of (C) contrastive learning, (A) masked autoencoders, and (N) the noise prediction approach used in diffusion models.

Contrastive Learning Self-Supervised Learning +1
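
An illustrative sketch combining the three ingredients named in the abstract into one training loss; patch masking is omitted for brevity, and `encoder`/`decoder` are hypothetical stand-ins (with a two-headed decoder) rather than the paper's architecture or loss weighting.

```python
# Sketch of a (C)ontrastive + masked (A)utoencoding-style reconstruction + (N)oise-prediction loss.
import torch
import torch.nn.functional as F

def can_style_loss(encoder, decoder, x1, x2, sigma=0.1, tau=0.1):
    # (N) inject Gaussian noise into both augmented views
    n1, n2 = sigma * torch.randn_like(x1), sigma * torch.randn_like(x2)
    z1, z2 = encoder(x1 + n1), encoder(x2 + n2)      # assumed pooled embeddings, shape (B, D)

    # (C) InfoNCE between the two views
    p1, p2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = p1 @ p2.t() / tau
    labels = torch.arange(x1.size(0), device=x1.device)
    loss_c = F.cross_entropy(logits, labels)

    # (A) reconstruct the clean input and (N) the injected noise from the latent
    recon, noise_pred = decoder(z1)                   # hypothetical two-headed decoder
    loss_a = F.mse_loss(recon, x1)
    loss_n = F.mse_loss(noise_pred, n1)
    return loss_c + loss_a + loss_n
```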

On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels

no code implementations17 Mar 2022 Amnon Geifman, Meirav Galun, David Jacobs, Ronen Basri

We study the properties of various over-parametrized convolutional neural architectures through their respective Gaussian process and neural tangent kernels.

Object-Aware Cropping for Self-Supervised Learning

1 code implementation 1 Dec 2021 Shlok Mishra, Anshul Shah, Ankan Bansal, Abhyuday Jagannatha, Janit Anjaria, Abhishek Sharma, David Jacobs, Dilip Krishnan

This assumption is mostly satisfied in datasets such as ImageNet where there is a large, centered object, which is highly likely to be present in random crops of the full image.

Data Augmentation Object +3

Maneuver Identification Challenge

no code implementations 25 Aug 2021 Kaira Samuel, Vijay Gadepally, David Jacobs, Michael Jones, Kyle McAlpin, Kyle Palko, Ben Paulk, Sid Samsi, Ho Chit Siu, Charles Yee, Jeremy Kepner

The Maneuver Identification Challenge hosted at maneuver-id.mit.edu provides thousands of trajectories collected from pilots practicing in flight simulators, descriptions of maneuvers, and examples of these maneuvers performed by experienced pilots.

Shift Invariance Can Reduce Adversarial Robustness

1 code implementation NeurIPS 2021 Songwei Ge, Vasu Singla, Ronen Basri, David Jacobs

Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background.

Adversarial Robustness

Low Curvature Activations Reduce Overfitting in Adversarial Training

1 code implementation ICCV 2021 Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi

In particular, we show that using activation functions with low (exact or approximate) curvature values has a regularization effect that significantly reduces both the standard and robust generalization gaps in adversarial training.
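
A minimal sketch of swapping a high-curvature activation (ReLU) for a smooth, bounded-curvature one; softplus is used here as an illustrative low-curvature choice (its second derivative is at most beta/4) and may not match the exact activation families studied in the paper.

```python
# Sketch: the same small CNN with a high-curvature vs. a low-curvature activation.
import torch.nn as nn

def small_cnn(activation: nn.Module) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), activation,
        nn.Conv2d(32, 64, 3, padding=1, stride=2), activation,
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 10),
    )

relu_model = small_cnn(nn.ReLU())                  # baseline with a kink at 0
low_curv_model = small_cnn(nn.Softplus(beta=1.0))  # smooth, low-curvature variant
```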

Learning Visual Representations for Transfer Learning by Suppressing Texture

1 code implementation 3 Nov 2020 Shlok Mishra, Anshul Shah, Ankan Bansal, Janit Anjaria, Jonghyun Choi, Abhinav Shrivastava, Abhishek Sharma, David Jacobs

Recent literature has shown that features obtained from supervised training of CNNs may over-emphasize texture rather than encoding high-level information.

Image Classification Object Detection +3

Frequency Bias in Neural Networks for Input of Non-Uniform Density

no code implementations ICML 2020 Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, Shira Kritchman

Recent works have partly attributed the generalization ability of over-parameterized neural networks to frequency bias -- networks trained with gradient descent on data drawn from a uniform distribution find a low frequency fit before high frequency ones.

The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies

1 code implementation NeurIPS 2019 Ronen Basri, David Jacobs, Yoni Kasten, Shira Kritchman

We study the relationship between the frequency of a function and the speed at which a neural network learns it.
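
A toy experiment in the spirit of the result above: a small MLP is trained on a two-frequency target, and the residual's projection onto each frequency shows the low-frequency component being fit first. Widths, frequencies, and learning rate are illustrative choices, not the paper's setup.

```python
# Sketch: track how quickly each frequency component of a target is learned.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(0, 1, 256).unsqueeze(1)
y = torch.sin(2 * torch.pi * x) + torch.sin(2 * torch.pi * 8 * x)  # low + high frequency target

model = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def component_error(residual, k):
    """Magnitude of the residual's projection onto frequency k (sine basis)."""
    basis = torch.sin(2 * torch.pi * k * x)
    return (residual * basis).mean().abs().item()

for step in range(2001):
    pred = model(x)
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        r = (y - pred).detach()
        print(step, "low-freq err:", component_error(r, 1),
              "high-freq err:", component_error(r, 8))
```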

Adversarially robust transfer learning

1 code implementation ICLR 2020 Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, Tom Goldstein

By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks.

Transfer Learning
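
A minimal sketch of the transfer recipe described above: freeze an adversarially trained feature extractor and train only a new classifier head on the target task. `robust_backbone` and `feat_dim` are placeholders for the parent network and its feature dimension; checkpoint loading is omitted.

```python
# Sketch: reuse a robust backbone as a frozen feature extractor for a new task.
import torch
import torch.nn as nn

def build_transfer_model(robust_backbone: nn.Module, feat_dim: int, num_classes: int):
    for p in robust_backbone.parameters():
        p.requires_grad = False              # keep the parent network's (robust) features fixed
    head = nn.Linear(feat_dim, num_classes)  # only the head is trained on the target task
    model = nn.Sequential(robust_backbone, head)
    optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    return model, optimizer
```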

Understanding the (un)interpretability of natural image distributions using generative models

no code implementations 6 Jan 2019 Ryen Krusinga, Sohil Shah, Matthias Zwicker, Tom Goldstein, David Jacobs

Probability density estimation is a classical and well studied problem, but standard density estimation methods have historically lacked the power to model complex and high-dimensional image distributions.

Density Estimation

Stabilizing Adversarial Nets With Prediction Methods

1 code implementation ICLR 2018 Abhay Yadav, Sohil Shah, Zheng Xu, David Jacobs, Tom Goldstein

Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train.
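
A hedged sketch of a prediction (lookahead) step of the kind the title refers to: one player is updated against an extrapolated copy of the other's parameters rather than its current ones. Which player is predicted and the exact schedule are simplifications here, not a faithful reproduction of the paper's algorithm.

```python
# Sketch: build a one-step-ahead "predicted" copy of one player's parameters.
import copy
import torch

def predicted_copy(module, prev_params):
    """Return a copy of `module` with parameters extrapolated one step ahead."""
    lookahead = copy.deepcopy(module)
    with torch.no_grad():
        for p_look, p_cur, p_prev in zip(lookahead.parameters(),
                                         module.parameters(), prev_params):
            p_look.copy_(p_cur + (p_cur - p_prev))   # theta_bar = theta + (theta - theta_prev)
    return lookahead
```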

3D Menagerie: Modeling the 3D shape and pose of animals

no code implementations CVPR 2017 Silvia Zuffi, Angjoo Kanazawa, David Jacobs, Michael J. Black

The best human body models are learned from thousands of 3D scans of people in specific poses, which is infeasible with live animals.

Big Batch SGD: Automated Inference using Adaptive Batch Sizes

no code implementations 18 Oct 2016 Soham De, Abhay Yadav, David Jacobs, Tom Goldstein

The high fidelity gradients enable automated learning rate selection and do not require stepsize decay.
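
A hedged sketch, assuming a variance-based acceptance test for the current batch size in the spirit of the abstract: the batch is grown until the sample variance of per-example gradients is small relative to the squared norm of the mean gradient, so the estimate is a reliable descent direction. The paper's exact statistic and thresholds may differ from this simplification.

```python
# Sketch: decide whether the current batch is large enough from per-example gradients.
import torch

def batch_is_big_enough(per_example_grads: torch.Tensor, theta: float = 1.0) -> bool:
    """per_example_grads: (B, P) flattened gradient of each example in the batch."""
    B = per_example_grads.shape[0]
    mean_grad = per_example_grads.mean(dim=0)
    sample_var = per_example_grads.var(dim=0, unbiased=True).sum()
    # accept when the variance of the mean gradient is small relative to its norm
    return bool((sample_var / B) <= theta * mean_grad.pow(2).sum())
```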

Biconvex Relaxation for Semidefinite Programming in Computer Vision

1 code implementation 31 May 2016 Sohil Shah, Abhay Kumar, Carlos Castillo, David Jacobs, Christoph Studer, Tom Goldstein

We propose a general framework to approximately solve large-scale semidefinite problems (SDPs) at low complexity.

Metric Learning

Efficient Representation of Low-Dimensional Manifolds using Deep Networks

no code implementations 15 Feb 2016 Ronen Basri, David Jacobs

We consider the ability of deep neural networks to represent data that lies near a low-dimensional manifold in a high-dimensional space.

A Hyperelastic Two-Scale Optimization Model for Shape Matching

no code implementations 28 Jul 2015 Konrad Simon, Sameer Sheorey, David Jacobs, Ronen Basri

We suggest a novel shape matching algorithm for three-dimensional surface meshes of disk or sphere topology.


Locally Scale-Invariant Convolutional Neural Networks

no code implementations 16 Dec 2014 Angjoo Kanazawa, Abhishek Sharma, David Jacobs

We show on a modified MNIST dataset that when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.
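
A sketch of one way to build in local scale invariance as described above: apply shared convolution filters to several rescaled copies of the input, resize the responses to a common resolution, and max-pool over scales. The scale set and interpolation choices are illustrative rather than the paper's exact configuration.

```python
# Sketch: a convolution layer made approximately scale invariant by pooling over scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleInvariantConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, scales=(0.75, 1.0, 1.25)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # weights shared across scales
        self.scales = scales

    def forward(self, x):
        h, w = x.shape[-2:]
        responses = []
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            ys = self.conv(xs)
            responses.append(F.interpolate(ys, size=(h, w), mode="bilinear",
                                           align_corners=False))
        return torch.stack(responses, dim=0).max(dim=0).values  # max-pool over scales
```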

Comparing apples to apples in the evaluation of binary coding methods

no code implementations 5 May 2014 Mohammad Rastegari, Shobeir Fakhraei, Jonghyun Choi, David Jacobs, Larry S. Davis

We discuss methodological issues related to the evaluation of unsupervised binary code construction methods for nearest neighbor search.
