Introduction

Bayesian network classifiers (Bielza and Larrañaga 2014; Friedman, Geiger, and Goldszmidt 1997) offer competitive predictive performance (e.g., Nayyar A. Zaidi et al. 2013) with the added benefit of interpretability. Their simplest member, the naive Bayes (NB) (Minsky 1961), is well-known (Hand and Yu 2001). More elaborate models exist, taking advantage of the Bayesian network (Pearl 1988; Koller and Friedman 2009) formalism for representing complex probability distributions. The tree augmented naive Bayes (Friedman, Geiger, and Goldszmidt 1997) and the averaged one-dependence estimators (AODE) (Webb, Boughton, and Wang 2005) are among the most prominent.

A Bayesian network classifier is simply a Bayesian network applied to classification, that is, to the prediction of the probability \(P(c \mid \mathbf{x})\) of some discrete (class) variable \(C\) given some features \(\mathbf{X}\). The bnlearn (Scutari and Ness 2018; Scutari 2010) package already provides state-of-the-art algorithms for learning Bayesian networks from data. Yet, learning classifiers is specific, as the implicit goal is to estimate \(P(c \mid \mathbf{x})\) rather than the joint probability \(P(\mathbf{x}, c)\). Thus, specific search algorithms, network scores, parameter estimation, and inference methods have been devised for this setting. In particular, many search algorithms consider a restricted space of structures, such as that of augmented naive Bayes (Friedman, Geiger, and Goldszmidt 1997) models. Unlike with general Bayesian networks, it makes sense to omit a feature \(X_i\) from the model as long as the estimation of \(P(c \mid \mathbf{x})\) is no better than that of \(P(c \mid \mathbf{x} \setminus x_i)\). Discriminative scores, related to the estimation of \(P(c \mid \mathbf{x})\) rather than \(P(\mathbf{x}, c)\), are used to learn both structure (Keogh and Pazzani 2002; Grossman and Domingos 2004; F. Pernkopf and Bilmes 2010; Carvalho et al. 2011) and parameters (Nayyar A. Zaidi et al. 2013; Nayyar A. Zaidi et al. 2017). Some of the prominent classifiers (Webb, Boughton, and Wang 2005) are ensembles of networks, and there are even heuristics applied at inference time, such as the lazy elimination technique (Zheng and Webb 2006). Many of these methods (e.g., Dash and Cooper 2002; Nayyar A. Zaidi et al. 2013; Keogh and Pazzani 2002; Pazzani 1996) are, at best, only available in standalone implementations published alongside the original papers.

The bnclassify package implements state-of-the-art algorithms for learning structure and parameters. The implementation is efficient enough to allow for time-consuming discriminative scores on relatively large data sets. It provides utility functions for prediction and inference, model evaluation with network scores and cross-validated estimation of predictive performance, and model analysis, such as querying structure type or graph plotting via the Rgraphviz package (Hansen et al. 2017). It integrates with the caret (Kuhn et al. 2017; Kuhn 2008) and mlr (Bischl et al. 2017) packages for straightforward use in machine learning pipelines. Currently it supports only discrete variables. The functionalities are illustrated in an introductory vignette, while an additional vignette provides details on the implemented methods. It includes over 200 unit and integration tests that give a code coverage of 94 percent (see https://codecov.io/github/bmihaljevic/bnclassify?branch=master).

The rest of this paper is structured as follows. We begin by providing background on Bayesian network classifiers and describing the implemented functionalities. We then illustrate usage with a synthetic data set and compare the methods’ running time, predictive performance and complexity over several data sets. Finally, we discuss implementation, briefly survey related software, and conclude by outlining future work.

Background

Bayesian network classifiers

A Bayesian network classifier is a Bayesian network used for predicting a discrete class variable \(C\). It assigns \(\mathbf{x}\), an observation of \(n\) predictor variables (features) \(\mathbf{X} = (X_1, \ldots, X_n)\), to the most probable class:

\[c^* = \mathop{\mathrm{arg\,max}}_c P(c \mid \mathbf{x}) = \mathop{\mathrm{arg\,max}}_c P(\mathbf{x}, c).\]

The classifier factorizes \(P(\mathbf{x}, c)\) according to a Bayesian network \(\mathcal{B} = \langle \mathcal{G}, \boldsymbol{ \theta } \rangle\). \(\mathcal{G}\) is a directed acyclic graph with a node for each variable in \((\mathbf{X}, C)\), encoding conditional independencies: a variable \(X\) is independent of its nondescendants in \(\mathcal{G}\) given the values \(\mathbf{pa}(x)\) of its parents. \(\mathcal{G}\) thus factorizes the joint into local (conditional) distributions over subsets of variables:

\[P(\mathbf{x}, c) = P(c \mid \mathbf{pa}(c)) \prod_{i=1}^{n} P(x_i \mid \mathbf{pa}(x_i)).\]

Local distributions \(P(C \mid \mathbf{pa}(c))\) and \(P(X_i \mid \mathbf{pa}(x_i))\) are specified by parameters \(\boldsymbol{ \theta }_{(C,\mathbf{pa}(c))}\) and \(\boldsymbol{ \theta }_{(X_i,\mathbf{pa}(x_i))}\), with \(\boldsymbol{ \theta } = \{ \boldsymbol{ \theta }_{(C,\mathbf{pa}(c))}, \boldsymbol{ \theta }_{(X_1,\mathbf{pa}(x_1))}, \ldots, \boldsymbol{ \theta }_{(X_n,\mathbf{pa}(x_n))}\}\). It is common to assume each local distribution has a parametric form, such as the multinomial, for discrete variables, and the Gaussian for real-valued variables.

Learning structure

We learn \(\mathcal{B}\) from a data set \(\mathcal{D} = \{ (\mathbf{x}^{1}, c^{1}), \ldots, (\mathbf{x}^{N}, c^{N}) \}\) of \(N\) observations of \(\mathbf{X}\) and \(C\). There are two main approaches to learning the structure from \(\mathcal{D}\): (a) testing for conditional independence among triplets of sets of variables and (b) searching a space of possible structures in order to optimize a network quality score. Under assumptions such as a limited number of parents per variable, approach (a) can produce the correct network in polynomial time (Cheng et al. 2002; Tsamardinos, Aliferis, and Statnikov 2003). On the other hand, finding the optimal structure, even with at most two parents per variable, is NP-hard (Chickering, Heckerman, and Meek 2004). Thus, heuristic search algorithms, such as greedy hill-climbing, are commonly used (see e.g., Koller and Friedman 2009). Ways to reduce model complexity, in order to avoid overfitting the training data \(\mathcal{D}\), include searching in restricted structure spaces and penalizing dense structures with appropriate scores.

Common scores in structure learning are the penalized log-likelihood scores, such as the Akaike information criterion (AIC) (Akaike 1974) and the Bayesian information criterion (BIC) (Schwarz 1978). They measure the model’s fit to the empirical distribution of \(P(\mathbf{x}, c)\), adding a penalty term that is a function of structure complexity. They are decomposable with respect to \(\mathcal{G}\), allowing for efficient search algorithms. Yet, with limited \(N\) and a large \(n\), discriminative scores based on \(P(c \mid \mathbf{x})\), such as conditional log-likelihood and classification accuracy, are more suitable to the classification task (Friedman, Geiger, and Goldszmidt 1997). These, however, are not decomposable according to \(\mathcal{G}\). While one can add a complexity penalty to discriminative scores (e.g., Grossman and Domingos 2004), they are instead often cross-validated to induce a preference towards structures that generalize better, making their computation even more time-demanding.

For Bayesian network classifiers, a common (see Bielza and Larrañaga 2014) structure space is that of augmented naive Bayes (Friedman, Geiger, and Goldszmidt 1997) models (see Figure 1), factorizing \(P(\mathbf{X}, C)\) as

\[P(\mathbf{X}, C) = P(C) \prod_{i=1}^{n} P(X_i \mid \mathbf{Pa}(X_i)), (\#eq:augnb)\]

with \(C \in \mathbf{Pa}(X_i)\) for all \(X_i\) and \(\mathbf{Pa}(C) = \emptyset\). Models of different complexity arise by extending or shrinking the parent sets \(\mathbf{Pa}(X_i)\), ranging from the NB (Minsky 1961) with \(\mathbf{Pa}(X_i) = \{C \}\) for all \(X_i\), to those with a limited-size \(\mathbf{Pa}(X_i)\) (Friedman, Geiger, and Goldszmidt 1997; Sahami 1996), to those with unbounded \(\mathbf{Pa}(X_i)\) (Franz Pernkopf and O’Leary 2003). While the NB can only represent linearly separable classes (Jaeger 2003), more complex models are more expressive (Varando, Bielza, and Larrañaga 2015). Simpler models, with sparser \(\mathbf{Pa}(X_i)\), may perform better with less training data, due to their lower variance, yet worse with more data as the bias due to wrong independence assumptions will tend to dominate the error.

The algorithms that produce the above structures are generally instances of greedy hill-climbing (Keogh and Pazzani 2002; Sahami 1996), with arc inclusion and removal as their search operators. Some (e.g., Pazzani 1996) add node inclusion or removal, thus embedding feature selection (Guyon and Elisseeff 2003) within structure learning. Alternatives include the adaptation (Friedman, Geiger, and Goldszmidt 1997) of the Chow-Liu (Chow and Liu 1968) algorithm to find the optimal one-dependence estimator (ODE) with respect to decomposable penalized log-likelihood scores in time quadratic in \(n\). Some structures, such as NB or AODE, are fixed and thus require no search.
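The Chow-Liu adaptation above can be sketched compactly. The following Python sketch (illustrative only; the function names are ours, and bnclassify's actual implementation is in R and C++) scores each feature pair by its empirical conditional mutual information \(I(X_i; X_j \mid C)\) and then builds a maximum spanning tree over the features, the core of CL-ODE:

```python
import itertools
import numpy as np

def cond_mutual_info(x, y, c):
    # Empirical I(X; Y | C) from three integer-coded discrete columns
    cmi = 0.0
    for cv in set(c):
        mask = c == cv
        p_c = mask.mean()
        xs, ys = x[mask], y[mask]
        for xv in set(xs):
            for yv in set(ys):
                p_xy = np.mean((xs == xv) & (ys == yv))
                p_x, p_y = np.mean(xs == xv), np.mean(ys == yv)
                if p_xy > 0:
                    cmi += p_c * p_xy * np.log(p_xy / (p_x * p_y))
    return cmi

def chow_liu_tree(X, c):
    # Maximum spanning tree (Prim's algorithm) over features,
    # with edge (i, j) weighted by I(X_i; X_j | C)
    n_feat = X.shape[1]
    w = {(i, j): cond_mutual_info(X[:, i], X[:, j], c)
         for i, j in itertools.combinations(range(n_feat), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < n_feat:
        i, j = max(((i, j) for i, j in w
                    if (i in in_tree) != (j in in_tree)),
                   key=lambda e: w[e])
        edges.append((i, j))
        in_tree |= {i, j}
    return edges

# Tiny demo: feature 1 duplicates feature 0, feature 2 is independent noise
rng = np.random.default_rng(42)
c = rng.integers(0, 2, 200)
x0 = rng.integers(0, 2, 200)
X = np.column_stack([x0, x0, rng.integers(0, 2, 200)])
print(chow_liu_tree(X, c))  # the tree links features 0 and 1
```

To obtain an ODE, the tree edges are then directed away from a chosen root and \(C\) is added as a parent of every feature; substituting a penalized score for the mutual information can drop edges, yielding a forest (FAN) rather than a tree.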

Learning parameters

Given \(\mathcal{G}\), learning \(\boldsymbol{\theta}\) in order to best approximate the underlying \(P(\mathbf{X}, C)\) is straightforward. For discrete variables \(X_i\) and \(\mathbf{Pa}(X_i)\), Bayesian estimation can be obtained in closed form by assuming a Dirichlet prior over \(\boldsymbol{\theta}\). With all Dirichlet hyper-parameters equal to \(\alpha\),

\[\theta_{ijk} = \frac{N_{ijk} + \alpha}{N_{ \cdot j \cdot } + r_i \alpha}, (\#eq:disparams)\]

where \(N_{ijk}\) is the number of instances in \(\mathcal{D}\) such that \(X_i = k\) and \(\mathbf{pa}(x_i) = j\), corresponding to the \(j\)-th possible instantiation of \(\mathbf{pa}(x_i)\), \(N_{\cdot j \cdot}\) is the number of instances in which \(\mathbf{pa}(x_i) = j\), while \(r_i\) is the cardinality of \(X_i\). \(\alpha = 0\) in Equation @ref(eq:disparams) yields the maximum likelihood estimate of \(\theta_{ijk}\). With incomplete data, the parameters of local distributions are no longer independent and we cannot separately maximize the likelihood for each \(X_i\) as in Equation @ref(eq:disparams). Optimizing the likelihood requires a time-consuming algorithm like expectation maximization (Dempster, Laird, and Rubin 1977) which only guarantees convergence to a local optimum.
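To make Equation @ref(eq:disparams) concrete, here is a small numeric sketch (in Python rather than R, with made-up counts; `dirichlet_estimate` is our own illustrative name, not a package function):

```python
import numpy as np

def dirichlet_estimate(counts, alpha):
    # counts[j, k] holds N_ijk: instances with pa(x_i) = j and X_i = k
    counts = np.asarray(counts, dtype=float)
    r_i = counts.shape[1]                      # cardinality of X_i
    N_j = counts.sum(axis=1, keepdims=True)    # N_.j. for each parent configuration j
    return (counts + alpha) / (N_j + r_i * alpha)

counts = np.array([[8, 2],    # pa(x_i) = 1
                   [0, 5]])   # pa(x_i) = 2
print(dirichlet_estimate(counts, alpha=0))  # maximum likelihood: zero count stays zero
print(dirichlet_estimate(counts, alpha=1))  # Bayesian estimate smooths the zero away
```

Each row is one conditional distribution and sums to one; with \(\alpha = 0\) unseen configurations receive probability zero, which the Dirichlet prior avoids.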

While the NB can separate any two linearly separable classes given the appropriate \(\boldsymbol{\theta}\), learning \(\boldsymbol{\theta}\) by approximating \(P(\mathbf{X}, C)\) cannot recover the optimal \(\boldsymbol{\theta}\) in some cases (Jaeger 2003). Several methods (M. Hall 2007; Nayyar A. Zaidi et al. 2013; Nayyar A. Zaidi et al. 2017) learn a weight \(w_i \in [0,1]\) for each feature and then update \(\boldsymbol{\theta}\) as

\[\theta_{ijk}^{weighted} = \frac{(\theta_{ijk})^{w_i}}{\sum_{k'=1}^{r_i} (\theta_{ijk'})^{w_i}}.\]

A \(w_i < 1\) reduces the effect of \(X_i\) on the class posterior, with \(w_i = 0\) omitting \(X_i\) from the model, making weighting more general than feature selection. The weights can be found by maximizing a discriminative score (Nayyar A. Zaidi et al. 2013) or computing the usefulness of a feature in a decision tree (M. Hall 2007). Mainly applied to naive Bayes models, a generalization for augmented naive Bayes classifiers has been recently developed (Nayyar A. Zaidi et al. 2017).
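The effect of a weight on a single CPT column can be sketched as follows (a Python illustration of the formula above; `weight_cpt_column` is our own name). A weight of one leaves the distribution untouched, while a weight of zero flattens it to uniform, so that \(X_i\) cancels out of the posterior:

```python
import numpy as np

def weight_cpt_column(theta, w):
    # theta: one CPT column P(X_i = k | j) over the r_i values of X_i
    # w in [0, 1]: the weight w_i learned for feature X_i
    powered = theta ** w
    return powered / powered.sum()

theta = np.array([0.7, 0.2, 0.1])
print(weight_cpt_column(theta, 1.0))  # w = 1: distribution unchanged
print(weight_cpt_column(theta, 0.0))  # w = 0: uniform, X_i drops out of the posterior
print(weight_cpt_column(theta, 0.5))  # intermediate weights flatten the distribution
```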

Another parameter estimation method for the naive Bayes is by means of Bayesian model averaging over the \(2^n\) possible naive Bayes structures with up to \(n\) features (Dash and Cooper 2002). It is computed in time linear in \(n\) and provides the posterior probability of an arc from \(C\) to \(X_i\).

Inference

Computing \(P(c \mid \mathbf{x})\) for a fully observed \(\mathbf{x}\) means multiplying the corresponding entries of \(\boldsymbol{\theta}\). With an incomplete \(\mathbf{x}\), however, exact inference requires summing over the values of the unobserved variables and is NP-hard in the general case (Cooper 1990), yet can be tractable with limited-complexity structures. The AODE ensemble computes \(P(c \mid \mathbf{x})\) as the average of the \(P_i (c\mid\mathbf{x})\) of the \(n\) base models. A special case is the lazy elimination (Zheng and Webb 2006) heuristic, which omits \(x_i\) from Equation @ref(eq:augnb) if \(P(x_i \mid x_j) = 1\) for some \(x_j\).
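For a fully observed \(\mathbf{x}\), this amounts to taking one entry per local distribution, multiplying, and normalizing. A toy naive Bayes sketch in Python (the CPT values are hypothetical, not taken from any data set):

```python
import numpy as np

# Hypothetical NB with two classes and two binary features
prior = np.array([0.6, 0.4])              # P(c)
cpt1 = np.array([[0.9, 0.3],              # P(x1 | c): rows index x1, columns index c
                 [0.1, 0.7]])
cpt2 = np.array([[0.5, 0.2],              # P(x2 | c)
                 [0.5, 0.8]])

def posterior(x1, x2):
    joint = prior * cpt1[x1] * cpt2[x2]   # P(x, c): one theta entry per local distribution
    return joint / joint.sum()            # normalize to obtain P(c | x)

print(posterior(0, 1))                    # class posterior for the observation x1 = 0, x2 = 1
```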

Functionalities

The package has four groups of functionalities:

  1. Learning network structure and parameters

  2. Analyzing the model

  3. Evaluating the model

  4. Predicting with the model

Learning is split into two steps: the first is structure learning and the second, optional, step is parameter learning. The obtained models can be evaluated, used for prediction, or analyzed. The following provides a brief overview of this workflow. For details on some of the underlying methods, please see the “methods” vignette.

Structures

The learning algorithms produce the following network structures:

  • Naive Bayes (NB) (Figure 1a) (Minsky 1961)
  • One-dependence estimators (ODE)
    • Tree-augmented naive Bayes (TAN) (Figure 1b) (Friedman, Geiger, and Goldszmidt 1997)
    • Forest-augmented naive Bayes (FAN) (Figure 1c)
  • k-dependence Bayesian classifier (k-DB) (Sahami 1996; F. Pernkopf and Bilmes 2010)
  • Semi-naive Bayes (SNB) (Figure 1d) (Pazzani 1996)
  • Averaged one-dependence estimators (AODE) (Webb, Boughton, and Wang 2005)

Figure 1 shows some of these structures and their factorizations of \(P(\mathbf{x}, c)\). We use k-DB in the sense meant by F. Pernkopf and Bilmes (2010) rather than that by Sahami (1996), as we impose no minimum on the number of augmenting arcs. SNB is the only structure whose complexity is not a priori bounded: the feature subgraph might be complete in the extreme case.

(a) NB: \(p(c, \mathbf{x}) = p(c)\,p(x_1 \mid c)\,p(x_2 \mid c)\,p(x_3 \mid c)\,p(x_4 \mid c)\,p(x_5 \mid c)\,p(x_6 \mid c)\)
(b) TAN: \(p(c, \mathbf{x}) = p(c)\,p(x_1 \mid c, x_2)\,p(x_2 \mid c, x_3)\,p(x_3 \mid c, x_4)\,p(x_4 \mid c)\,p(x_5 \mid c, x_4)\,p(x_6 \mid c, x_5)\)
(c) FAN: \(p(c, \mathbf{x}) = p(c)\,p(x_1 \mid c, x_2)\,p(x_2 \mid c)\,p(x_3 \mid c)\,p(x_4 \mid c)\,p(x_5 \mid c, x_4)\,p(x_6 \mid c, x_5)\)
(d) SNB: \(p(c, \mathbf{x}) = p(c)\,p(x_1 \mid c, x_2)\,p(x_2 \mid c)\,p(x_4 \mid c)\,p(x_5 \mid c, x_4)\,p(x_6 \mid c, x_4, x_5)\)
Figure 1: Augmented naive Bayes models produced by the bnclassify package. (a) NB; (b) TAN; (c) FAN; (d) SNB. k-DB and AODE are not shown. The NB assumes that the features are independent given the class. An ODE allows each predictor to depend on at most one other predictor: the TAN is a special case with exactly n − 1 augmenting arcs (i.e., inter-feature arcs), while a FAN may have fewer than n − 1. The k-DB allows for up to k parent features per feature Xi, with NB and ODE as its special cases with k = 0 and k = 1, respectively. The SNB does not restrict the number of parents but requires that connected feature subgraphs be complete (the connected subgraphs, after removing C, in (d) are {X1, X2} and {X4, X5, X6}); it also allows the removal of features (X3 is omitted in (d)). The AODE is not a single structure but an ensemble of n ODE models in which one feature is the parent of all others (a super-parent).

Algorithms

Each structure learning algorithm is implemented by a single R function. Table 1 lists these algorithms along with the corresponding structures that they produce, the scores they can be combined with, and their R functions. Below we provide their abbreviations, references, brief comments, and illustrate function calls.

Fixed structure

We implement two algorithms:

  • NB
  • AODE

The NB and AODE structures are fixed given the number of variables, and thus no search is required to estimate them from data. For example, we can get a NB structure with

n <- nb('class', dataset = car)

where class is the name of the class variable \(C\) and car is the dataset containing observations of \(C\) and \(\mathbf{X}\).

Optimal ODEs with decomposable scores

We implement one algorithm:

  • Chow-Liu for ODEs (CL-ODE; (Friedman, Geiger, and Goldszmidt 1997))

Maximizing log-likelihood will always produce a TAN, while maximizing penalized log-likelihood may produce a FAN, since including some arcs can degrade such a score. With incomplete data, our implementation does not guarantee the optimal ODE, as that would require computing maximum likelihood parameters. The arguments of the tan_cl() function are the network score to use and, optionally, the root of the features’ subgraph:

n <- tan_cl('class', car, score = 'AIC', root = 'buying')

Greedy hill-climbing with global scores

The bnclassify package implements five algorithms:

  • Hill-climbing tree augmented naive Bayes (HC-TAN) (Keogh and Pazzani 2002)
  • Hill-climbing super-parent tree augmented naive Bayes (HC-SP-TAN) (Keogh and Pazzani 2002)
  • Backward sequential elimination and joining (BSEJ) (Pazzani 1996)
  • Forward sequential selection and joining (FSSJ) (Pazzani 1996)
  • Hill-climbing k-dependence Bayesian classifier (k-DB)

These algorithms use the cross-validated estimate of predictive accuracy as a score. Only the FSSJ and BSEJ perform feature selection. The arguments of the corresponding functions include the number of cross-validation folds, k, and the minimal absolute score improvement, epsilon, required for continuing the search:

fssj <- fssj('class', car, k = 5, epsilon = 0)
Table 1: Implemented structure learning algorithms.
Structure Search algorithm Score Feature selection Function
NB - - - nb
TAN/FAN CL-ODE log-lik, AIC, BIC - tan_cl
TAN TAN-HC accuracy - tan_hc
TAN TAN-HCSP accuracy - tan_hcsp
SNB FSSJ accuracy forward fssj
SNB BSEJ accuracy backward bsej
AODE - - - aode
kDB kDB accuracy - kdb

Parameters

The bnclassify package only handles discrete features. With fully observed data, it estimates the parameters with maximum likelihood or Bayesian estimation, according to Equation @ref(eq:disparams), with a single \(\alpha\) for all local distributions. With incomplete data it uses available case analysis and substitutes \(N_{\cdot j \cdot}\) in Equation @ref(eq:disparams) with \(N_{i j \cdot} = \sum_{k = 1}^{r_i} N_{i j k}\), i.e., with the count of instances in which \(\mathbf{Pa}(X_i) = j\) and \(X_i\) is observed.

We implement two methods for weighted naive Bayes parameter estimation:

  • Weighting attributes to alleviate naive Bayes’ independence assumption (WANBIA) (Nayyar A. Zaidi et al. 2013)
  • Attribute-weighted naive Bayes (AWNB) (M. Hall 2007)

We implement one method for estimation by means of Bayesian model averaging over all NB structures with up to \(n\) features:

  • Model averaged naive Bayes (MANB) (Dash and Cooper 2002)

It makes little sense to apply WANBIA, MANB, and AWNB to structures other than NB. WANBIA, for example, learns the weights by optimizing the conditional log-likelihood of the NB. Parameter learning is done with the lp() function. For example,

a <- lp(n, car, smooth = 1, manb_prior = 0.5)

computes Bayesian parameter estimates with \(\alpha = 1\) (the smooth argument) for all local distributions, and updates them with the MANB estimation obtained with a 0.5 prior probability for each class-to-feature arc.

Utilities

Single-structure-learning functions, as opposed to those that learn an ensemble of structures, return an S3 object of class "bnc_dag". The following functions can be invoked on such objects:

  • Plot the network: plot()
  • Query model type: is_tan(), is_ode(), is_nb(), is_aode(), …
  • Query model properties: narcs(), families(), features(), …
  • Convert to a gRain object: as_grain()

Ensembles are of type "bnc_aode" and only print() and model type queries can be applied to such objects. Fitting the parameters (by calling lp()) of a "bnc_dag" produces a "bnc_bn" object. In addition to all "bnc_dag" functions, the following are meaningful:

  • Predict class labels and class posterior probability: predict()
  • Predict joint distribution: compute_joint()
  • Network scores: AIC(), BIC(), logLik(), clogLik()
  • Cross-validated accuracy: cv()
  • Query model properties: nparams()
  • Parameter weights: manb_arc_posterior(), weights()

The above functions for "bnc_bn" can also be applied to an ensemble with fitted parameters.

Documentation

This vignette provides an overview of the package and background on the implemented methods. Calling ?bnclassify gives a more concise overview of the functionalities, with pointers to relevant functions and their documentation. The “usage” vignette presents more detailed usage examples and shows how to combine the functions. The “methods” vignette provides details on the underlying methods and documents implementation specifics, especially where they differ from or are undocumented in the original paper.

Usage example

The available functionalities can be split into four groups:

  1. Learning network structure and parameters

  2. Analyzing the model

  3. Evaluating the model

  4. Predicting with the model

We illustrate these functionalities with the synthetic car data set with six features. We begin with a simple example for each functionality group and then elaborate on the options in the following sections. We first load the package and the dataset:

library(bnclassify)
data(car)

Then we learn a naive Bayes structure and its parameters:

nb <- nb('class', car)
nb <- lp(nb, car, smooth = 0.01)

Then we get the number of arcs in the network:

narcs(nb)
[1] 6

Then we get the 10-fold cross-validation estimate of accuracy:

cv(nb, car, k = 10)
[1] 0.8628258

Finally, we classify the entire data set:

p <- predict(nb, car)
head(p)
[1] unacc unacc unacc unacc unacc unacc
Levels: unacc acc good vgood

Learning

The functions for structure learning, shown in Table 1, correspond to the different algorithms. They all receive the name of the class variable and the data set as their first two arguments, which are then followed by optional arguments. The following runs the CL-ODE algorithm with the AIC score, followed by the FSSJ algorithm to learn another model:

ode_cl_aic <- tan_cl('class', car, score = 'aic')
set.seed(3)
fssj <- fssj('class', car, k = 5, epsilon = 0)

The bnc() function is a shorthand for learning structure and parameters in a single step,

ode_cl_aic <- bnc('tan_cl', 'class', car, smooth = 1, dag_args = list(score = 'aic'))

where the first argument is the name of the structure learning function, while optional arguments go in dag_args.

Analyzing

Printing the model, such as the above ode_cl_aic object, provides basic information about it.

ode_cl_aic

  Bayesian network classifier

  class variable:        class
  num. features:   6
  num. arcs:   9
  free parameters:   131
  learning algorithm:    tan_cl

While plotting the network is especially useful for small networks, printing the structure in the deal (Bottcher and Dethlefsen 2013) and bnlearn format may be more useful for larger ones:

ms <- modelstring(ode_cl_aic)
strwrap(ms, width = 60)
[1] "[class] [buying|class] [doors|class] [persons|class]"
[2] "[maint|buying:class] [safety|persons:class]"
[3] "[lug_boot|safety:class]"

We can query the type of structure with functions such as is_ode(); params() lets us access the conditional probability tables (CPTs), while features() lists the features:

is_ode(ode_cl_aic)
[1] TRUE
params(nb)$buying
       class
buying         unacc          acc         good        vgood
  low   0.2132243562 0.2317727320 0.6664252607 0.5997847478
  med   0.2214885458 0.2994740131 0.3332850521 0.3999077491
  high  0.2677680077 0.2812467451 0.0001448436 0.0001537515
  vhigh 0.2975190903 0.1875065097 0.0001448436 0.0001537515
length(features(fssj))
[1] 5

For example, fssj() has selected five out of six features.

The manb_arc_posterior() function provides the MANB posterior probabilities for arcs from the class to each of the features:

manb <- lp(nb, car, smooth = 0.01, manb_prior = 0.5)
round(manb_arc_posterior(manb))
  buying    maint    doors  persons lug_boot   safety
       1        1        0        1        1        1

With a posterior probability of 0% for the arc from the class to doors, and of 100% for all others, MANB renders doors independent of the class while leaving the other features’ parameters unaltered. We can verify this by printing out the CPTs:

params(manb)$doors
       class
doors   unacc  acc good vgood
  2      0.25 0.25 0.25  0.25
  3      0.25 0.25 0.25  0.25
  4      0.25 0.25 0.25  0.25
  5more  0.25 0.25 0.25  0.25
all.equal(params(manb)$buying, params(nb)$buying)
[1] TRUE

For more functions for querying a structure with parameters ("bnc_bn") see ?inspect_bnc_bn. For a structure without parameters ("bnc_dag"), see ?inspect_bnc_dag.

Evaluating

Several scores can be computed:

logLik(ode_cl_aic, car)
'log Lik.' -13307.59 (df=131)
AIC(ode_cl_aic, car)
[1] -13438.59

The cv() function estimates the predictive accuracy of one or more models with a single run of stratified cross-validation. In the following we assess the above models produced by NB and CL-ODE algorithms:

set.seed(0)
cv(list(nb = nb, ode_cl_aic = ode_cl_aic), car, k = 5, dag = TRUE)
        nb ode_cl_aic
 0.8582303  0.9345913

Above, k is the desired number of folds, and dag = TRUE evaluates structure and parameter learning, while dag = FALSE keeps the structure fixed and evaluates just the parameter learning. The output gives 86% and 93% accuracy estimates for NB and CL-ODE, respectively. The mlr and caret packages provide additional options for evaluating predictive performance, such as different metrics, and bnclassify is integrated with both (see the “usage” vignette).

Predicting

As shown above, we can predict class labels with predict(). We can also get the class posterior probabilities:

pp <- predict(nb, car, prob = TRUE)
# Show class posterior distributions for the first six instances of car
head(pp)
  unacc          acc         good        vgood
[1,] 1.0000000 2.171346e-10 8.267214e-16 3.536615e-19
[2,] 0.9999937 6.306269e-06 5.203338e-12 5.706038e-19
[3,] 0.9999908 9.211090e-06 5.158884e-12 4.780777e-15
[4,] 1.0000000 3.204714e-10 1.084552e-15 1.015375e-15
[5,] 0.9999907 9.307467e-06 6.826088e-12 1.638219e-15
[6,] 0.9999864 1.359469e-05 6.767760e-12 1.372573e-11

Properties

We illustrate the algorithms’ running times, resulting structure complexity and predictive performance on the datasets listed in Table 2. We only used complete data sets as time-consuming inference with incomplete data makes cross-validated scores costly for medium-sized or large data sets. The structure and parameter learning methods are listed in the legends of Figure 2, Figure 3, and Figure 4.

Table 2: Data sets used, from the UCI repository (Lichman 2013). Incomplete rows have been removed. The number of classes (i.e., distinct class labels) is \(r_c\).
\(N\) \(n\) \(r_c\) Dataset
1728 7 4 car
958 10 2 tic-tac-toe
435 17 2 voting
351 35 2 ionosphere
562 36 19 soybean
3196 37 2 kr-vs-kp
3190 61 3 splice

Figure 2 shows that the algorithms with cross-validated scores, followed by WANBIA, are the most time-consuming. Running time is still not prohibitive: TAN-HC ran for 139 seconds on kr-vs-kp and 282 seconds on splice, adding 27 augmenting arcs on the former and 7 on the latter (\(a\) added arcs mean \(a\) iterations of the search algorithm). Note that their running time is linear in the number of cross-validation folds k; using k = 10 instead of k = 5 would have roughly doubled the time.

graphic without alt text
Figure 2: Running times of the algorithms on a Ubuntu 16.04 machine with 8 GB of RAM and a 2.5 GHz processor, on a \(\log_{10}\) scale. We used the default options for all algorithms and k = 5 and epsilon = 0 for the wrappers. CL-ODE-AIC is CL-ODE with the AIC rather than the log-likelihood score. The lines have been horizontally and vertically jittered to avoid overlap where identical.

CL-ODE tended to produce the most complex structures (see Figure 3), with FSSJ learning complex models on car, soybean and splice, yet simple ones, due to feature selection, on voting and tic-tac-toe. The NB models with alternative parameters, WANBIA and MANB, have as many parameters as the NB, because we do not count the length-\(n\) weights vector but only the parameters of the resulting NB (the weights simply produce an alternative parameterization of the NB).

graphic without alt text
Figure 3: The number of Bayesian network parameters of the resulting structures, on a \(\log_{10}\) scale. The lines have been horizontally and vertically jittered to avoid overlap where identical.

In terms of accuracy, NB and MANB performed comparatively poorly on car, voting, tic-tac-toe, and kr-vs-kp, possibly because of many wrong independence assumptions (see Figure 4). WANBIA may have accounted for some of these violations on voting and kr-vs-kp, as it outperformed NB and MANB on these datasets, showing that a simple model can perform well on them when adequately parameterized. More complex models, such as CL-ODE and AODE, performed better on car.

graphic without alt text
Figure 4: Accuracy of the algorithms estimated with stratified 10-fold cross-validation. The lines have been horizontally and vertically jittered to avoid overlap where identical.

Implementation

With complete data, bnclassify implements prediction for augmented naive Bayes models as well as for ensembles of such models. It multiplies the corresponding entries of \(\boldsymbol{\theta}\) in logarithmic space, applying the log-sum-exp trick before normalizing, to reduce the chance of underflow. On instances with missing entries, it uses the gRain package (Højsgaard 2016, 2012) to perform exact inference, which is noticeably slower. Network plotting is implemented by the Rgraphviz package. Some functions are implemented in C++ with Rcpp for efficiency. The package is extensively tested, with over 200 unit and integration tests that give a 94% code coverage.
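The log-space computation can be sketched as follows (a Python illustration of the trick, not of bnclassify's C++ code; the function name and toy parameters are ours):

```python
import numpy as np

def predict_log_space(log_prior, log_cpts, x):
    # Joint in log space: sum log-CPT entries instead of multiplying probabilities
    log_joint = log_prior + sum(cpt[xi] for cpt, xi in zip(log_cpts, x))
    # log-sum-exp: subtract the max before exponentiating to avoid underflow
    m = log_joint.max()
    log_norm = m + np.log(np.exp(log_joint - m).sum())
    return np.exp(log_joint - log_norm)  # normalized P(c | x)

# Toy model: 2 classes, 2 binary features (hypothetical parameters)
log_prior = np.log(np.array([0.6, 0.4]))
log_cpts = [np.log(np.array([[0.9, 0.3], [0.1, 0.7]])),   # log P(x1 | c)
            np.log(np.array([[0.5, 0.2], [0.5, 0.8]]))]   # log P(x2 | c)
print(predict_log_space(log_prior, log_cpts, (0, 1)))
```

With many features, the log of the unnormalized joint can fall far below the smallest representable double; subtracting the maximum before exponentiating keeps the normalization numerically safe.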

Conclusion

The bnclassify package implements several state-of-the-art algorithms for learning Bayesian network classifiers. It also provides features such as model analysis and evaluation. It is reasonably efficient and can handle large data sets. We hope that bnclassify will be useful to practitioners as well as researchers wishing to compare their methods to existing ones.

Future work includes handling real-valued features via conditional Gaussian models. Straightforward extensions include adding flexibility to the hill-climbing algorithm, such as restarts to avoid local minima.

Acknowledgements

This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 785907 (HBP SGA2), the Spanish Ministry of Economy and Competitiveness through the Cajal Blue Brain (C080020-09; the Spanish partner of the EPFL Blue Brain initiative) and TIN2016-79684-P projects, from the Regional Government of Madrid through the S2013/ICE-2845-CASI-CAM-CM project, and from Fundación BBVA grants to Scientific Research Teams in Big Data 2016.

Akaike, Hirotugu. 1974. “A New Look at the Statistical Model Identification.” IEEE Transactions on Automatic Control 19 (6): 716–23.
Bielza, Concha, and Pedro Larrañaga. 2014. “Discrete Bayesian Network Classifiers: A Survey.” ACM Computing Surveys 47 (1).
Bischl, Bernd, Michel Lang, Lars Kotthoff, Julia Schiffner, Jakob Richter, Zachary Jones, Giuseppe Casalicchio, and Mason Gallo. 2017. mlr: Machine Learning in R. https://CRAN.R-project.org/package=mlr.
Bottcher, Susanne Gammelgaard, and Claus Dethlefsen. 2013. deal: Learning Bayesian Networks with Mixed Variables. https://CRAN.R-project.org/package=deal.
Bouckaert, Remco. 2004. “Bayesian Network Classifiers in Weka.” 14/2004. Department of Computer Science, University of Waikato.
Bouckaert, Remco Ronaldus. 1995. “Bayesian Belief Networks: From Construction to Inference.” PhD thesis, Universiteit Utrecht.
Carvalho, A. M., T. Roos, A. L. Oliveira, and P. Myllymäki. 2011. “Discriminative Learning of Bayesian Networks via Factorized Conditional Log-Likelihood.” Journal of Machine Learning Research 12: 2181–2210.
Cheng, J., R. Greiner, J. Kelly, D. A. Bell, and W. Liu. 2002. “Learning Bayesian Networks from Data: An Information-Theory Based Approach.” Artificial Intelligence 137: 43–90.
Chickering, David Maxwell, David Heckerman, and Christopher Meek. 2004. “Large-Sample Learning of Bayesian Networks Is NP-Hard.” Journal of Machine Learning Research 5: 1287–1330.
Chow, CK, and CN Liu. 1968. “Approximating Discrete Probability Distributions with Dependence Trees.” IEEE Transactions on Information Theory 14 (3): 462–67.
Cooper, Gregory F. 1990. “The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks.” Artificial Intelligence 42 (2-3): 393–405.
Cooper, Gregory F, and Edward Herskovits. 1992. “A Bayesian Method for the Induction of Probabilistic Networks from Data.” Machine Learning 9 (4): 309–47.
Dash, Denver, and Gregory F Cooper. 2002. “Exact Model Averaging with Naive Bayesian Classifiers.” In 19th International Conference on Machine Learning (ICML-2002), 91–98.
Dempster, Arthur P, Nan M Laird, and Donald B Rubin. 1977. “Maximum Likelihood from Incomplete Data via the EM Algorithm.” Journal of the Royal Statistical Society. Series B (Methodological) 39 (1): 1–38.
Friedman, N., D. Geiger, and M. Goldszmidt. 1997. “Bayesian Network Classifiers.” Machine Learning 29: 131–63.
Glover, Fred, and Manuel Laguna. 2013. “Tabu Search.” In Handbook of Combinatorial Optimization, edited by Panos M. Pardalos, Ding-Zhu Du, and Ronald L. Graham, 3261–3362. New York, NY: Springer-Verlag.
Grossman, Daniel, and Pedro Domingos. 2004. “Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood.” In Proceedings of the Twenty-First International Conference on Machine Learning, 361–68.
Guyon, Isabelle, and André Elisseeff. 2003. “An Introduction to Variable and Feature Selection.” Journal of Machine Learning Research 3: 1157–82.
Hall, M. 2007. “A Decision Tree-Based Attribute Weighting Filter for Naive Bayes.” Knowledge-Based Systems 20 (2): 120–26.
Hall, Mark, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. “The WEKA Data Mining Software: An Update.” SIGKDD Explorations Newsletter 11 (1): 10–18.
Hand, D. J., and K. Yu. 2001. “Idiot’s Bayes - Not so Stupid after All?” International Statistical Review 69 (3): 385–98.
Hansen, Kasper Daniel, Jeff Gentry, Li Long, Robert Gentleman, Seth Falcon, Florian Hahne, and Deepayan Sarkar. 2017. Rgraphviz: Provides Plotting Capabilities for R Graph Objects. https://doi.org/10.18129/B9.bioc.Rgraphviz.
Højsgaard, Søren. 2012. “Graphical Independence Networks with the gRain Package for R.” Journal of Statistical Software 46 (10): 1–26.
———. 2016. gRain: Graphical Independence Networks. https://CRAN.R-project.org/package=gRain.
Jaeger, M. 2003. “Probabilistic Classifiers and the Concept They Recognize.” In Proceedings of the 20th International Conference on Machine Learning (ICML-2003), 266–73.
Keogh, Eamonn J, and Michael J Pazzani. 2002. “Learning the Structure of Augmented Bayesian Classifiers.” International Journal on Artificial Intelligence Tools 11 (4): 587–601.
Kirkpatrick, Scott, C Daniel Gelatt, and Mario P Vecchi. 1983. “Optimization by Simulated Annealing.” Science 220 (4598): 671–80.
Koller, Daphne, and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. Cambridge, MA, USA: MIT press.
Kuhn, Max. 2008. “Building Predictive Models in R Using the caret Package.” Journal of Statistical Software 28 (5): 1–26.
Kuhn, Max, Jed Wing, Steve Weston, Andre Williams, Chris Keefer, Allan Engelhardt, Tony Cooper, et al. 2017. caret: Classification and Regression Training. https://CRAN.R-project.org/package=caret.
Lauritzen, S. L., and N. Wermuth. 1989. “Graphical Models for Associations Between Variables, Some of Which Are Qualitative and Some Quantitative.” The Annals of Statistics 17 (1): 31–57.
Lichman, M. 2013. “UCI Machine Learning Repository.” University of California, Irvine, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml.
Margaritis, Dimitris, and Sebastian Thrun. 2000. “Bayesian Network Induction via Local Neighborhoods.” In Advances in Neural Information Processing Systems 12, 505–11. MIT Press.
McGeachie, Michael J, Hsun-Hsien Chang, and Scott T Weiss. 2014. “CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data.” PLoS Computational Biology 10 (6): e1003676.
Minsky, M. 1961. “Steps Toward Artificial Intelligence.” Transactions on Institute of Radio Engineers 49: 8–30.
Pazzani, M. 1996. “Constructive Induction of Cartesian Product Attributes.” In Proceedings of the Information, Statistics and Induction in Science Conference, 66–77.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems. San Francisco, CA, USA: Morgan Kaufmann.
Pernkopf, F., and J. A. Bilmes. 2010. “Efficient Heuristics for Discriminative Structure Learning of Bayesian Network Classifiers.” Journal of Machine Learning Research 11: 2323–60.
Pernkopf, Franz, and Paul O’Leary. 2003. “Floating Search Algorithm for Structure Learning of Bayesian Network Classifiers.” Pattern Recognition Letters 24 (15): 2839–48.
Sacha, Jarosław P, Lucy S Goodenday, and Krzysztof J Cios. 2002. “Bayesian Learning for Cardiac SPECT Image Interpretation.” Artificial Intelligence in Medicine 26 (1): 109–43.
Sahami, Mehran. 1996. “Learning Limited Dependence Bayesian Classifiers.” In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-1996), 96:335–38.
Schwarz, Gideon. 1978. “Estimating the Dimension of a Model.” The Annals of Statistics 6 (2): 461–64.
Scutari, Marco. 2010. “Learning Bayesian Networks with the bnlearn R Package.” Journal of Statistical Software 35 (3): 1–22.
Scutari, Marco, and Robert Ness. 2018. Bnlearn: Bayesian Network Structure Learning, Parameter Learning and Inference. https://CRAN.R-project.org/package=bnlearn.
Tsamardinos, Ioannis, and Constantin F Aliferis. 2003. “Towards Principled Feature Selection: Relevancy, Filters and Wrappers.” In Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics. Morgan Kaufmann Publishers: Key West, FL, USA.
Tsamardinos, Ioannis, Constantin F Aliferis, and Alexander Statnikov. 2003. “Algorithms for Large Scale Markov Blanket Discovery.” In Proceedings of the 16th International Florida Artificial Intelligence Research Society Conference (FLAIRS-2003), 376–81. AAAI Press.
Varando, G., C. Bielza, and P. Larrañaga. 2015. “Decision Boundary for Discrete Bayesian Network Classifiers.” Journal of Machine Learning Research 16: 2725–49.
Webb, Geoffrey I, Janice R Boughton, and Zhihai Wang. 2005. “Not so Naive Bayes: Aggregating One-Dependence Estimators.” Machine Learning 58 (1): 5–24.
Zaidi, Nayyar A, Jesus Cerquides, Mark J Carman, and Geoffrey I Webb. 2013. “Alleviating Naive Bayes Attribute Independence Assumption by Attribute Weighting.” Journal of Machine Learning Research 14: 1947–88.
Zaidi, Nayyar A., Geoffrey I. Webb, Mark J. Carman, François Petitjean, Wray Buntine, Mike Hynes, and Hans De Sterck. 2017. “Efficient Parameter Learning of Bayesian Network Classifiers.” Machine Learning 106 (9): 1289–329.
Zheng, Fei, and Geoffrey I Webb. 2006. “Efficient Lazy Elimination for Averaged One-Dependence Estimators.” In Proceedings of the 23rd International Conference on Machine Learning, 148:1113–20. ACM.