Computer Science / Artificial Intelligence
Computer Science / Machine Learning
Statistics / Machine Learning
Asymptotic Model Selection for Directed Networks with Hidden Variables
Reading time: 2 minutes
📝 Original Info
- Title: Asymptotic Model Selection for Directed Networks with Hidden Variables
- ArXiv ID: 1302.3580
- Date: 2015-05-19
- Authors: Dan Geiger, David Heckerman, Christopher Meek
📝 Abstract
We extend the Bayesian Information Criterion (BIC), an asymptotic approximation for the marginal likelihood, to Bayesian networks with hidden variables. This approximation can be used to select models given large samples of data. The standard BIC as well as our extension punishes the complexity of a model according to the dimension of its parameters. We argue that the dimension of a Bayesian network with hidden variables is the rank of the Jacobian matrix of the transformation between the parameters of the network and the parameters of the observable variables. We compute the dimensions of several networks including the naive Bayes model with a hidden root node.
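The rank-of-the-Jacobian idea can be checked numerically. The sketch below (not from the paper; all function names are illustrative) builds the naive Bayes model with a binary hidden root H and n binary observables, maps its 2n + 1 network parameters to the joint distribution over the observables, and takes the numerical rank of a finite-difference Jacobian at a generic parameter point:

```python
import itertools

import numpy as np


def joint_from_params(theta, n):
    """Map naive Bayes parameters (binary hidden root H, n binary
    observables X_1..X_n) to the joint distribution over the observables.

    theta = [P(H=1),
             P(X_1=1|H=0), ..., P(X_n=1|H=0),
             P(X_1=1|H=1), ..., P(X_n=1|H=1)]
    """
    p_h1 = theta[0]
    p_x = theta[1:].reshape(2, n)  # row h: P(X_i=1 | H=h)
    joint = np.zeros(2 ** n)
    for idx, x in enumerate(itertools.product([0, 1], repeat=n)):
        x = np.asarray(x)
        for h, w in ((0, 1.0 - p_h1), (1, p_h1)):
            joint[idx] += w * np.prod(np.where(x == 1, p_x[h], 1.0 - p_x[h]))
    return joint


def effective_dimension(n, eps=1e-6, seed=0):
    """Numerical rank of the Jacobian of the parameter-to-distribution
    map at a generic interior parameter point (central differences)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.2, 0.8, size=1 + 2 * n)  # generic point
    jac = np.empty((2 ** n, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        jac[:, j] = (joint_from_params(theta + step, n)
                     - joint_from_params(theta - step, n)) / (2.0 * eps)
    # Tolerance chosen well above finite-difference noise and well below
    # the true nonzero singular values at a generic point.
    return np.linalg.matrix_rank(jac, tol=1e-7)


# With n = 2 observables the network has 2n + 1 = 5 parameters, but the
# Jacobian rank is only 3 (the observable joint has 2^2 - 1 = 3 free
# parameters), so the BIC penalty should use 3, not 5.  With n = 3 the
# rank equals the full parameter count 2n + 1 = 7.
print(effective_dimension(2), effective_dimension(3))
```

This illustrates why the raw parameter count overstates model complexity: for n = 2 the map from network parameters to observable distributions collapses two dimensions, and the rank, not the parameter count, is the quantity the extended BIC penalizes.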
Reference
This content is AI-processed based on ArXiv data.