no code implementations • 15 Feb 2024 • Harbir Antil
It illustrates that foundational mathematical advances, different from traditional approaches, are required for Digital Twins (DTs).
no code implementations • 23 Aug 2023 • Aaron Mahler, Tyrus Berry, Tom Stephens, Harbir Antil, Michael Merritt, Jeanie Schreiber, Ioannis Kevrekidis
We use these tools to obtain adversarial examples that reside on a class manifold, yet fool a classifier.
no code implementations • 5 Jul 2023 • Harbir Antil, David Sayre
This paper explores the application of event-based cameras in the domains of image segmentation and motion estimation.
no code implementations • 16 May 2023 • Harbir Antil, Madhu Gupta, Randy Price
Again, these networks can be trained in parallel, for each EIM point.
no code implementations • 30 Nov 2022 • Rainald Löhner, Harbir Antil
Deep neural network (DNN) architectures are constructed that are the exact equivalent of explicit Runge-Kutta schemes for numerical time integration.
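The correspondence can be sketched in a few lines of numpy: a residual block is one explicit (forward) Euler step of x' = f(x), and a block with the right skip connections reproduces the classical four-stage Runge-Kutta scheme. The layer map `f` and the weight `W` below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def f(x, W):
    # generic layer map: nonlinearity after a linear transform (assumed form)
    return np.tanh(W @ x)

def euler_block(x, W, h=1.0):
    # ResNet-style residual block = one forward-Euler step of x' = f(x)
    return x + h * f(x, W)

def rk4_block(x, W, h=1.0):
    # block whose skip connections mirror the classical 4-stage explicit RK scheme
    k1 = f(x, W)
    k2 = f(x + 0.5 * h * k1, W)
    k3 = f(x + 0.5 * h * k2, W)
    k4 = f(x + h * k3, W)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y_euler = euler_block(x, W, h=0.1)
y_rk4 = rk4_block(x, W, h=0.1)
```

For a small step size the two blocks agree to leading order; the RK4 block trades extra stage evaluations for higher-order accuracy in the underlying time integration.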
1 code implementation • 18 Apr 2022 • Harbir Antil, Hugo Díaz, Evelyn Herberg
The proposed framework can be applied to any of the existing networks such as ResNet, DenseNet or Fractional-DNN.
no code implementations • 15 Mar 2022 • Harbir Antil, Rainald Löhner, Randy Price
The NINNs framework can be applied to almost all pre-existing DNNs, with forward propagation, with costs comparable to existing DNNs.
no code implementations • 1 Apr 2021 • Thomas S. Brown, Harbir Antil, Rainald Löhner, Fumiya Togashi, Deepanshu Verma
Chemically reacting flows are common in engineering, such as hypersonic flow, combustion, explosions, manufacturing processes and environmental assessments.
no code implementations • 8 Feb 2021 • Harbir Antil, Howard C Elman, Akwum Onwunta, Deepanshu Verma
We consider the simulation of Bayesian statistical inverse problems governed by large-scale linear and nonlinear partial differential equations (PDEs).
no code implementations • 1 Apr 2020 • Harbir Antil, Ratna Khatri, Rainald Löhner, Deepanshu Verma
This paper introduces a novel algorithmic framework for a deep neural network (DNN) which, in a mathematically rigorous manner, allows us to incorporate history (or memory) into the network: it ensures all layers are connected to one another.
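One way to picture such dense connectivity is a forward pass in which each new state combines the current layer's output with a weighted sum over all earlier states. The layer map, the power-law memory weights, and the 0.1 mixing factor below are illustrative assumptions, not the paper's exact discretization.

```python
import numpy as np

def layer(x, W):
    # illustrative layer map (assumed form)
    return np.tanh(W @ x)

def memory_forward(x0, Ws, gamma=0.5):
    # forward pass with memory: every layer sees a weighted combination of
    # ALL previous states, with weights that decay into the past
    # (fractional-derivative-inspired; illustrative only)
    states = [x0]
    for k, W in enumerate(Ws):
        hist = sum((k - j + 1) ** (-gamma) * s for j, s in enumerate(states))
        states.append(layer(states[-1], W) + 0.1 * hist)
    return states[-1]

rng = np.random.default_rng(0)
Ws = [0.1 * rng.standard_normal((4, 4)) for _ in range(3)]
out = memory_forward(rng.standard_normal(4), Ws)
```

A plain ResNet block depends only on the immediately preceding state; here the history term makes the k-th layer's output a function of states 0 through k.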
no code implementations • 22 Jul 2019 • Harbir Antil, Zichao Di, Ratna Khatri
As an example, we consider tomographic reconstruction as a model problem and show an improvement in reconstruction quality, especially for limited data, via fractional Laplacian regularization.
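A minimal sketch of a fractional Laplacian penalty in a least-squares reconstruction, using the spectral definition on a 1D grid with Dirichlet conditions (the 1D setting, random forward operator, and parameter values are illustrative assumptions, not the tomography setup of the paper):

```python
import numpy as np

def fractional_laplacian_1d(n, s, h=1.0):
    # spectral fractional Laplacian: (-Delta)^s = V diag(lambda^s) V^T,
    # built from the known eigenpairs of the discrete 1D Dirichlet Laplacian
    k = np.arange(1, n + 1)
    lam = (2.0 / h**2) * (1.0 - np.cos(np.pi * k / (n + 1)))
    j = np.arange(1, n + 1)
    V = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(j, k) / (n + 1))
    return V @ np.diag(lam**s) @ V.T

def regularized_recon(A, b, alpha, s):
    # Tikhonov-type reconstruction with a fractional Laplacian penalty:
    # minimize ||A u - b||^2 + alpha * u^T (-Delta)^s u
    L = fractional_laplacian_1d(A.shape[1], s)
    return np.linalg.solve(A.T @ A + alpha * L, A.T @ b)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))
u_true = np.sin(np.linspace(0, np.pi, 20))
b = A @ u_true
u = regularized_recon(A, b, alpha=1e-3, s=0.5)
```

The exponent s interpolates between an identity-like penalty (s near 0) and the classical Laplacian (s = 1), which is what gives the extra tuning freedom over standard smoothness regularization.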
no code implementations • 2 Jul 2019 • Zilong Zou, Sayan Mukherjee, Harbir Antil, Wilkins Aquino
To manage the computational cost of propagating increasing numbers of particles through the loss function, we employ a recently developed local reduced basis method to build an efficient surrogate loss function that is used in the Gibbs update formula in place of the true loss.
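The substitution can be pictured with a generic sampler. The loop below is a random-walk Metropolis stand-in for the Gibbs update described above, and the loss functions, step size, and Gaussian proposal are illustrative assumptions; the point is only that the sampler calls `loss` once per proposal, so a cheap surrogate can replace the expensive PDE-based loss without changing the sampler.

```python
import numpy as np

def sample(loss, x0, n_steps=500, step=0.3, seed=0):
    # generic random-walk Metropolis sampler; `loss` plays the role of a
    # negative log-posterior, so an expensive true loss can be swapped for
    # a cheap surrogate while the update formula stays the same
    rng = np.random.default_rng(seed)
    x, lx = np.asarray(x0, float), loss(x0)
    chain = []
    for _ in range(n_steps):
        y = x + step * rng.standard_normal(x.shape)  # propose
        ly = loss(y)
        if np.log(rng.uniform()) < lx - ly:          # accept/reject
            x, lx = y, ly
        chain.append(x.copy())
    return np.array(chain)

# expensive "true" loss vs. a cheap surrogate (both illustrative)
true_loss = lambda x: 0.5 * np.sum(x**2)
surrogate = lambda x: 0.5 * np.sum(x**2) + 1e-3 * np.sum(np.abs(x))
chain = sample(surrogate, np.zeros(2))
```

When each true-loss evaluation means a full forward solve, and thousands of particles must be propagated, the surrogate is where essentially all of the savings come from.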
1 code implementation • 16 May 2018 • Harbir Antil, Dangxing Chen, Scott E. Field
While the proper orthogonal decomposition (POD) is optimal under certain norms, it is also expensive to compute.
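For reference, the POD basis of a snapshot matrix is its leading left singular vectors, which is where the cost comes from: the SVD of an n-by-m snapshot matrix costs O(n·m·min(n, m)). A minimal sketch, with random snapshots standing in for simulation data:

```python
import numpy as np

def pod_basis(snapshots, r):
    # snapshots: (n, m) matrix whose columns are solution snapshots;
    # the rank-r POD basis = the r leading left singular vectors
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # stand-in for simulation snapshots
basis, svals = pod_basis(X, r=10)

# projection error is governed by the discarded singular values
err = np.linalg.norm(X - basis @ (basis.T @ X))
```

The Frobenius-norm projection error equals the root-sum-square of the discarded singular values, which is exactly the optimality property the sentence above refers to.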
Distributed, Parallel, and Cluster Computing • General Relativity and Quantum Cosmology • Numerical Analysis