no code implementations • NeurIPS 2012 • Yi Wu, David P. Wipf
In contrast, for analyses of update rules and sparsity properties of local and global solutions, as well as extensions to more general likelihood models, we can leverage coefficient-space techniques developed for Type I and apply them to Type II.
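As a concrete illustration of the coefficient-space view this excerpt alludes to, the Type II (evidence-maximization) objective of sparse Bayesian learning can be rewritten as a Type I-style penalized regression with a non-separable penalty obtained by minimizing over the hyperparameters. The notation below (Gaussian likelihood with noise variance $\lambda$, dictionary $\Phi$, hyperparameters $\gamma$) is a standard formulation assumed here for illustration, not quoted from the paper:

$$
\hat{x} \;=\; \arg\min_{x}\; \|y - \Phi x\|_2^2 \;+\; \lambda\, g_{\mathrm{II}}(x),
\qquad
g_{\mathrm{II}}(x) \;=\; \min_{\gamma \succeq 0}\; \sum_i \frac{x_i^2}{\gamma_i} \;+\; \log\bigl|\lambda I + \Phi \Gamma \Phi^\top\bigr|,
$$

where $\Gamma = \mathrm{diag}(\gamma)$. A Type I method instead uses a separable penalty $\sum_i g(x_i)$; the reformulation lets tools built for the latter be carried over to Type II.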
no code implementations • NeurIPS 2011 • David P. Wipf
In the vast majority of recent work on sparse estimation algorithms, performance has been evaluated using ideal or quasi-ideal dictionaries (e.g., random Gaussian or Fourier) characterized by unit-$\ell_2$-norm, incoherent columns or features.
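For reference, a minimal sketch of how such a quasi-ideal dictionary is typically built and its incoherence measured; the dimensions and seed are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 256                       # signal dimension, dictionary size (hypothetical)
Phi = rng.standard_normal((n, m))    # random Gaussian dictionary
Phi /= np.linalg.norm(Phi, axis=0)   # normalize columns to unit l2 norm

G = np.abs(Phi.T @ Phi)              # magnitudes of pairwise column inner products
np.fill_diagonal(G, 0.0)             # ignore self-correlations
mu = G.max()                         # mutual coherence: smaller means more "ideal"
print(f"mutual coherence: {mu:.3f}")
```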
no code implementations • NeurIPS 2009 • David P. Wipf, Srikantan S. Nagarajan
Finding maximally sparse representations from overcomplete feature dictionaries frequently involves minimizing a cost function composed of a likelihood (or data fit) term and a prior (or penalty function) that favors sparsity.
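A minimal sketch of this likelihood-plus-penalty setup, using the common squared-error data fit with an $\ell_1$ sparsity penalty and plain iterative soft-thresholding (ISTA); this is a generic illustration of the cost structure, not the algorithm proposed in the paper:

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Minimize 0.5*||y - Phi @ x||_2^2 + lam*||x||_1 via soft-thresholding."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L     # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
    return x
```

The first term plays the role of the likelihood (data fit) and the $\ell_1$ term the sparsity-favoring prior; other penalties simply swap out the proximal step.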
no code implementations • NeurIPS 2008 • Julia Owen, Hagai T. Attias, Kensuke Sekihara, Srikantan S. Nagarajan, David P. Wipf
In a restricted setting, the proposed method is shown to have theoretically zero bias in estimating both the location and orientation of multi-component dipoles, even in the presence of correlations, unlike a variety of existing Bayesian localization methods and common signal-processing techniques such as beamforming and sLORETA.
no code implementations • NeurIPS 2007 • David P. Wipf, Srikantan S. Nagarajan
The result is an efficient algorithm that can be implemented using standard convex programming toolboxes and, unlike existing methods, is guaranteed to converge to a stationary point.
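To make the convex-toolbox remark concrete, here is a hedged sketch of the reweighted-$\ell_1$ pattern such methods follow, where each inner step is an off-the-shelf convex program (cvxpy here); the $1/(|x_i| + \epsilon)$ update is the generic textbook reweighting rule, assumed for illustration rather than the paper's specific weights:

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(Phi, y, lam=0.1, n_outer=5, eps=1e-6):
    """Solve a sequence of weighted l1 problems, each a standard convex program."""
    m = Phi.shape[1]
    w = np.ones(m)                            # initial weights: plain l1
    for _ in range(n_outer):
        x = cp.Variable(m)
        cost = cp.sum_squares(y - Phi @ x) + lam * cp.sum(cp.multiply(w, cp.abs(x)))
        cp.Problem(cp.Minimize(cost)).solve()
        w = 1.0 / (np.abs(x.value) + eps)     # reweight toward sparser solutions
    return x.value
```

Each outer iteration is convex, so any standard solver handles it; the reweighting drives the iterates toward a stationary point of the underlying non-convex sparse objective.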