Search Results for author: Kazuho Watanabe

Found 7 papers, 0 papers with code

Unbiased Estimating Equation on Inverse Divergence and Its Conditions

no code implementations 25 Apr 2024 Masahiro Kobayashi, Kazuho Watanabe

For the loss function defined by a monotonically increasing function $f$ and the inverse divergence, the conditions on the statistical model and on $f$ under which the estimating equation is unbiased are clarified.
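
As a rough sketch of the setup (using the general $f$-separable form from the related Bregman-divergence paper listed below; the exact formulation with the inverse divergence may differ), the estimator minimizes a sum of $f$-transformed divergences, and the estimating equation obtained by differentiation is called unbiased when it has zero expectation at the true parameter:

$$
\hat{\theta} = \operatorname*{arg\,min}_{\theta} \sum_{i=1}^{n} f\bigl(d(x_i, \theta)\bigr),
\qquad
\mathbb{E}_{\theta^{*}}\!\left[ \frac{\partial}{\partial \theta} f\bigl(d(X, \theta)\bigr) \right]_{\theta = \theta^{*}} = 0 .
$$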

Unified Likelihood Ratio Estimation for High- to Zero-frequency N-grams

no code implementations 3 Oct 2021 Masato Kikuchi, Kento Kawakami, Kazuho Watanabe, Mitsuo Yoshida, Kyoji Umemura

A naive estimation approach that uses only $N$-gram frequencies is sensitive to low-frequency (rare) $N$-grams and not applicable to zero-frequency (unobserved) $N$-grams; these are known as the low- and zero-frequency problems, respectively.

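As a toy illustration of the naive approach criticized here (not the paper's unified estimator; the function and corpus names are made up), the likelihood ratio is estimated directly from relative frequencies and breaks down for rare or unobserved $N$-grams:

from collections import Counter

def naive_likelihood_ratio(ngram, corpus_a, corpus_b):
    # Relative-frequency estimate in each corpus (each corpus is a list of N-grams).
    freq_a = Counter(corpus_a)[ngram] / len(corpus_a)
    freq_b = Counter(corpus_b)[ngram] / len(corpus_b)
    if freq_b == 0:
        # Zero-frequency problem: the ratio is undefined for unobserved N-grams.
        raise ZeroDivisionError(f"{ngram!r} is unobserved in corpus_b")
    # Also unstable when the counts are small (low-frequency problem).
    return freq_a / freq_b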

Unbiased Estimation Equation under $f$-Separable Bregman Distortion Measures

no code implementations 23 Oct 2020 Masahiro Kobayashi, Kazuho Watanabe

We discuss unbiased estimation equations for a class of objective functions built from a monotonically increasing function $f$ and a Bregman divergence.
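
As a sketch of this class of objectives (standard Bregman-divergence notation; the paper's exact parametrization may differ), with a strictly convex, differentiable generator $\phi$ the Bregman divergence and the $f$-separable objective read

$$
d_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle,
\qquad
L(\theta) = \sum_{i=1}^{n} f\bigl(d_{\phi}(x_i, \theta)\bigr),
$$

so $f(u) = u$ recovers the usual average Bregman distortion, while other choices of $f$ reweight large and small distortions.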

Multi-Decoder RNN Autoencoder Based on Variational Bayes Method

no code implementations 29 Apr 2020 Daisuke Kaji, Kazuho Watanabe, Masahiro Kobayashi

Clustering algorithms have wide applications and play an important role in data analysis, including the analysis of time series data.

Clustering, Decoder, +2
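
The snippet above gives only the motivation; as a rough, deterministic sketch of the multi-decoder autoencoder idea (not the paper's variational Bayes formulation; the class name, dimensions, and hard assignment rule are illustrative assumptions), one shared RNN encoder feeds several RNN decoders and each sequence is assigned to the decoder that reconstructs it best:

import torch
import torch.nn as nn

class MultiDecoderRNNAE(nn.Module):
    # One shared GRU encoder and K GRU decoders; each decoder models one cluster.
    def __init__(self, input_dim, hidden_dim, n_decoders):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.decoders = nn.ModuleList(
            nn.GRU(input_dim, hidden_dim, batch_first=True) for _ in range(n_decoders)
        )
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        _, h = self.encoder(x)                  # shared latent state
        recons = []
        for dec in self.decoders:
            y, _ = dec(torch.zeros_like(x), h)  # decode from the shared state
            recons.append(self.out(y))
        return torch.stack(recons)              # (K, batch, seq_len, input_dim)

model = MultiDecoderRNNAE(input_dim=3, hidden_dim=16, n_decoders=4)
x = torch.randn(8, 20, 3)
recons = model(x)
errors = ((recons - x.unsqueeze(0)) ** 2).mean(dim=(2, 3))  # (K, batch)
cluster = errors.argmin(dim=0)                               # hard assignment per sequence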

Generalized Dirichlet-process-means for $f$-separable distortion measures

no code implementations 31 Jan 2019 Masahiro Kobayashi, Kazuho Watanabe

Conventional DP-means, being based on the average squared-error distortion, is vulnerable to outliers in the data and can cause a large maximum distortion within clusters.

Clustering
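
For context, a minimal sketch of the standard DP-means algorithm (Kulis and Jordan) with the squared-error distortion that the paper generalizes; the squared-error criterion and the single penalty parameter lam are what make it sensitive to outliers:

import numpy as np

def dp_means(X, lam, n_iter=20):
    # Start from a single cluster at the global mean.
    centers = [X.mean(axis=0)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            d2 = np.array([np.sum((x - c) ** 2) for c in centers])
            if d2.min() > lam:
                # Distortion exceeds the penalty: open a new cluster at this point.
                centers.append(x.copy())
                labels[i] = len(centers) - 1
            else:
                labels[i] = int(d2.argmin())
        # Update each centroid; keep the old one if its cluster became empty.
        centers = [X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
                   for k in range(len(centers))]
    return np.array(centers), labels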

Analysis of Variational Bayesian Latent Dirichlet Allocation: Weaker Sparsity Than MAP

no code implementations NeurIPS 2014 Shinichi Nakajima, Issei Sato, Masashi Sugiyama, Kazuho Watanabe, Hiroko Kobayashi

Latent Dirichlet allocation (LDA) is a popular generative model of various objects such as texts and images, where an object is expressed as a mixture of latent topics.
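
For reference, the standard (smoothed) LDA generative process for document $d$, with topic proportions $\theta_d$ and topics $\beta_k$ (notation may differ from the paper's):

$$
\beta_k \sim \mathrm{Dirichlet}(\eta), \quad
\theta_d \sim \mathrm{Dirichlet}(\alpha), \quad
z_{d,i} \mid \theta_d \sim \mathrm{Categorical}(\theta_d), \quad
w_{d,i} \mid z_{d,i} \sim \mathrm{Categorical}\bigl(\beta_{z_{d,i}}\bigr).
$$

Variational Bayesian inference approximates the posterior over $\{\theta_d, z_{d,i}, \beta_k\}$, and the paper compares the sparsity this induces with that of MAP estimation.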

Bayesian Properties of Normalized Maximum Likelihood and its Fast Computation

no code implementations 28 Jan 2014 Andrew Barron, Teemu Roos, Kazuho Watanabe

The normalized maximum likelihood (NML) provides the minimax regret solution in universal data compression, gambling, and prediction, and it plays an essential role in the minimum description length (MDL) method of statistical modeling and estimation.

Data Compression
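
For reference, the NML distribution for a model class $\{p(\cdot \mid \theta)\}$ with maximum-likelihood estimator $\hat{\theta}(\cdot)$, written for discrete data (the sum becomes an integral in the continuous case):

$$
p_{\mathrm{NML}}(x^n) = \frac{p\bigl(x^n \mid \hat{\theta}(x^n)\bigr)}{\sum_{y^n} p\bigl(y^n \mid \hat{\theta}(y^n)\bigr)} .
$$

Its regret $\log p\bigl(x^n \mid \hat{\theta}(x^n)\bigr) - \log p_{\mathrm{NML}}(x^n)$ equals the logarithm of the normalizing (Shtarkov) sum for every sequence $x^n$, which is what makes NML the minimax-regret solution.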
