Classification Using Marginalized Maximum Likelihood Estimation and Black-Box Variational Inference
Based on variational inference (VI), a new family of classification algorithms has recently emerged. These algorithms aim (A) to increase generalization power and (B) to decrease computational complexity. However, the complexity of deriving and implementing model-specific VI updates has led to the development of black-box variational inference (BBVI) methods. Building on these principles, we assume that a set of latent variables underlies the generation of each data point. We marginalize the conventional maximum likelihood objective over these latent variables and then apply black-box variational inference to estimate the model's parameters. We evaluate the proposed method on real-world and synthetic data sets, comparing its results with those of baseline and state-of-the-art classification algorithms. We further examine how the performance of the algorithms under consideration is affected by (1) the presence of non-informative features at various dimensionalities, (2) imbalanced class representation, (3) non-linearly separable data sets, and (4) data set size. The results obtained are encouraging and demonstrate the effectiveness of the proposed method.
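To make the general idea concrete, the following is a minimal, self-contained sketch of black-box variational inference with the score-function (REINFORCE) gradient estimator of the ELBO. It is an illustration in the spirit of the approach described above, not the paper's actual implementation: it assumes a Bayesian logistic regression model in which the weight vector plays the role of the latent variables, a mean-field Gaussian variational family, a toy synthetic data set, and a running baseline for variance reduction; all names and hyperparameters are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, roughly linearly separable data (hypothetical stand-in for a real data set).
n, d = 200, 2
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -3.0])
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

def log_joint(z):
    """log p(y | X, z) + log p(z): Bernoulli likelihood with an N(0, I) prior on z."""
    logits = X @ z
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * z @ z - 0.5 * d * np.log(2.0 * np.pi)
    return log_lik + log_prior

def log_q(z, mu, log_sigma):
    """log of the mean-field Gaussian variational density q(z; mu, sigma)."""
    return np.sum(-0.5 * np.log(2.0 * np.pi) - log_sigma
                  - 0.5 * (z - mu) ** 2 / np.exp(2.0 * log_sigma))

mu, log_sigma = np.zeros(d), np.zeros(d)   # variational parameters
n_samples, lr, baseline = 32, 0.01, 0.0    # Monte Carlo samples, step size, running baseline

for step in range(2000):
    grad_mu, grad_ls, fs = np.zeros(d), np.zeros(d), []
    for _ in range(n_samples):
        z = mu + np.exp(log_sigma) * rng.normal(size=d)     # draw z ~ q(z; mu, sigma)
        f = log_joint(z) - log_q(z, mu, log_sigma)          # per-sample ELBO term
        # Analytic score functions of the diagonal Gaussian q.
        score_mu = (z - mu) / np.exp(2.0 * log_sigma)
        score_ls = (z - mu) ** 2 / np.exp(2.0 * log_sigma) - 1.0
        grad_mu += score_mu * (f - baseline)
        grad_ls += score_ls * (f - baseline)
        fs.append(f)
    mu += lr * grad_mu / n_samples                          # stochastic gradient ascent on the ELBO
    log_sigma += lr * grad_ls / n_samples
    baseline = 0.9 * baseline + 0.1 * np.mean(fs)           # variance-reduction baseline

# Classify with the variational mean of the latent weights.
print("training accuracy:", ((X @ mu > 0).astype(float) == y).mean())
```

The only model-specific inputs to the estimator are evaluations of the log joint and of log q, which is what makes the procedure "black box": no conjugacy or analytic expectations are required, and the same update rule applies to any latent-variable classifier whose log joint can be evaluated pointwise.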