Journal of Artificial Intelligence Research 10 (1999) 457-477. Submitted 11/98; published 6/99.

# Research Note: The Good Old Davis-Putnam Procedure Helps Counting Models
Elazar Birnbaum
Eliezer L. Lozinskii
Institute of Computer Science, The Hebrew University, 91904 Jerusalem, Israel
## Abstract

As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in the worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F; then the average running time of CDP is shown to be O(m^d n), where d = ⌈−1/log₂(1−p)⌉. The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas.
## 1. Introduction

Given a propositional formula F in CNF or DNF, one may want to know the number #(F) of its models, that is, assignments of truth values to its variables that satisfy F. This problem of counting models has numerous applications (Roth, 1996). Many important AI problems require counting models of propositional formulas or are reducible to this problem. Among these problems are:

(a) Computing the degree of belief in a propositional statement s with respect to a propositional knowledge base KB as a conditional probability of s given KB. In the absence of sufficient statistics this probability can be estimated by #(KB ∪ {s}) / #(KB).

(b) Inference in Bayesian belief networks. As Roth (1996) showed, the problem of counting models of a propositional formula can be reduced to the problem of computing the probability that a node in a Bayesian belief network is true.

(c) Computing the number of solutions of a constraint satisfaction problem. A CNF formula F on n variables x₁, ..., xₙ may be regarded as a constraint satisfaction problem in which the domain of each variable is {0, 1}, and each clause C is a relation R ⊆ {0, 1}ⁿ of all n-tuples for which at least one literal of C assumes the value 1. Then the problem of computing the number of solutions of such problems is the one of computing #(F).

(d) Estimating the utility of reasoning with an approximate theory AT instead of the original theory T. This utility depends on the size of |#(AT) − #(T)| (Roth, 1996).

Another important application is to reasoning with incomplete information (Grove et al., 1994; Lozinskii, 1989, 1995). Indeed, if F describes a real world W faithfully, then given a formula φ such that neither φ nor ¬φ is a logical consequence of F, a reasonable assumption is that the more models of F assert φ, the more likely it is that φ is true in W.

© 1999 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
Email: elazar@cs.huji.ac.il, lozinski@cs.huji.ac.il
The problem of counting models is #P-complete in general (Valiant, 1979), and so, according to the present state of the art in the field, computationally intractable in the worst case. Dubois (1991) introduced an algorithm that for any formula F of n variables and m clauses, each clause containing r literals, counts models of F in time O(m·α_r^n), α_r being the positive root of the polynomial y^r − y^{r−1} − ⋯ − 1 (α₂ ≈ 1.62, α₃ ≈ 1.84, α₄ ≈ 1.93, ..., and lim_{r→∞} α_r = 2). Recently, Zhang (1996) presented an algorithm based on similar ideas and with a similar time complexity. The time complexity achieved by these algorithms is an important improvement over that of a naive algorithm which checks all assignments of truth values to the variables of a formula F in time O(m·2ⁿ). To satisfy practical applications, one has either to resort to a good approximation of #(F), or to use an algorithm for computing #(F) exactly with a reasonable average time complexity.

Regarding the first alternative, Luby and Veličković (1991) presented a deterministic algorithm for approximating the proportion of truth assignments that satisfy a DNF formula F. Let Pr[F] denote this proportion (Pr[F] = #(F)/2ⁿ, where n is the number of variables of F). Given small numbers ε, δ > 0, the algorithm computes an estimate Y of Pr[F] which satisfies

$$(1-\varepsilon)\,\Pr[F] - \delta \;\le\; Y \;\le\; \Pr[F] + \delta.$$

Let m and n be the number of clauses and variables of F, respectively. The running time of the algorithm is estimated by a polynomial in n and m multiplied by

$$n^{O\left(\frac{1}{\varepsilon}\left(\log_{2} m + \log_{2}\frac{1}{\delta}\right)\left(\log\log m + \log\frac{1}{\delta} + \log\frac{1}{\varepsilon}\right)\right)}.$$
So, the running time of the algorithm grows fast with the precision required.

Linial and Nisan (1990) studied the problem of computing the size of a union of a given family of sets S₁, ..., S_m through the Inclusion-Exclusion formula

$$\Bigl|\,\bigcup_{i=1}^{m} S_{i}\Bigr| = \sum_{i=1}^{m}|S_{i}| - \sum_{i<j}|S_{i}\cap S_{j}| + \sum_{i<j<k}|S_{i}\cap S_{j}\cap S_{k}| - \cdots + (-1)^{m+1}|S_{1}\cap\cdots\cap S_{m}|. \qquad (1)$$

They showed that the size of the union can be approximated accurately when the sizes of only part of the set intersections are known. Suppose the intersection sizes are known for all subfamilies containing at most k sets. If k ≥ Ω(√m), then the union size can be approximated with a relative error of O(e^{−2k/√m}). As Linial and Nisan (1990) pointed out, this result can be applied to approximating the number of models of a DNF formula F: each clause of F has its set of models, and #(F) is equal to the size of the union of all those sets. Thus, only the first k terms of formula (1) are needed for a good approximation, where √m ≤ k ≤ m. However, for a high precision, k approaches m. For instance, for m = 100 and an error of 10⁻³, k ≥ 34.

Iwama (1989) introduced an algorithm for testing satisfiability of a CNF formula F over n variables by counting the truth assignments falsifying F and checking whether their number is less than 2ⁿ. The counting is accomplished through an Inclusion-Exclusion process, where the size of the set of truth assignments falsifying F is computed with the aid of the observation that this set is the union of the sets of truth assignments falsifying each clause
of F. Iwama analyzed the average running time of his algorithm in the same framework as we do for the algorithm presented in the sequel. He showed that for a CNF formula satisfying the condition that ln c ≥ ln m − 2pn for a constant c, where n is the number of variables, m is the number of clauses, and p is the probability that a given literal appears in a clause (the same for all literals), the average running time of the algorithm is O(m^{c+1}).

Lozinskii (1992) employed an algorithm similar to that presented by Iwama (1989) for counting models. He showed that under reasonable assumptions, the average running time of computing the exact number of models of a formula with m clauses on n variables is O(m^c · n), where c = O(log m).

In the sequel we present an algorithm, which we call CDP (Counting by Davis-Putnam), for computing #(F) precisely. Algorithm CDP is based on the Davis-Putnam procedure (DPP) (Davis & Putnam, 1960) for solving the satisfiability problem (SAT). We study features of DPP, and show how its rules can be modified, resulting in an algorithm for counting models. An analysis shows that for a formula F with m clauses on n variables, the average running time of CDP is O(m^d · n), where d = ⌈−1/log₂(1−p)⌉ and p is the probability that any literal occurs in a given clause. Section 5 describes numerous experiments with CDP showing that its actual average time complexity is significantly lower than the upper bounds provided by the mathematical analysis in Section 3 and in previous works (Dubois, 1991; Iwama, 1989; Lozinskii, 1992; Zhang, 1996). We also point out advantages of this new algorithm over the ones presented in these previous works. Finally, we estimated CDP performance on formulas with 3-literal clauses, and compared the results to those of checking the satisfiability of such formulas by DPP presented by Mitchell et al. (1992).
## 2. Algorithm for Counting Models

In the sequel we consider propositional formulas over a set V = {x₁, ..., xₙ} of variables; l denotes a literal (l ∈ {x₁, ..., xₙ, ¬x₁, ..., ¬xₙ}), and ¬l stands for the negation of l.

### 2.1 The Davis-Putnam Procedure

The Davis-Putnam procedure (DPP) (Davis & Putnam, 1960) is intended for deciding satisfiability of a propositional CNF formula F (regarded as a set of clauses), and can be presented as a recursive boolean function (assuming that no clause of F is a tautology):
```
function DPP (F: propositional CNF formula): boolean;
1. if F is empty then
     return true;
2. if F contains an empty clause then
     return false;
3. (* The pure literal rule *)
   if there exists a pure literal l in F (* such that ¬l does not appear in F *) then
     F1 := {C | C ∈ F, l ∉ C};  (* delete from F all clauses containing l *)
     return DPP(F1);            (* F is satisfiable iff F1 is so *)
```
```
4. (* The unit clause rule *)
   if F contains a unit clause {l} then
     F1 := {C − {¬l} | C ∈ F, l ∉ C};  (* delete clauses containing l, remove ¬l from the rest *)
     return DPP(F1);                   (* F is satisfiable iff F1 is so *)
5. (* The splitting rule *)
   choose a variable x of F;
   F1 := {C − {¬x} | C ∈ F, x ∉ C};   (* the case x = true *)
   F2 := {C − {x} | C ∈ F, ¬x ∉ C};   (* the case x = false *)
   return (DPP(F1) ∨ DPP(F2));        (* F is satisfiable iff (F1 ∨ F2) is so *)
```
It can be seen that the pure literal rule and the unit clause rule are actually particular cases of the splitting rule. If there exists a pure literal l in F, and the splitting rule chooses l, then F1 ⊆ F2, and hence (F1 ∨ F2) is satisfiable iff F1 is so. According to the splitting rule, F is satisfiable iff (F1 ∨ F2) is so, and hence, F is satisfiable iff F1 is so. This is exactly the pure literal rule. If F contains a unit clause {l}, and the splitting rule chooses l, then F2 contains an empty clause (the unit clause without its only literal), and so, is unsatisfiable. So, F is satisfiable iff F1 is so. This is the unit clause rule. (Incidentally, the splitting rule treats tautological clauses well too. If a clause C of F contains a literal l and its complement ¬l, and the splitting rule is applied to l, then C will be erased from both F1 and F2.)

Thus, we may consider DPP as being based on the splitting rule only, while the two other rules serve as guides for choosing a literal efficiently. Indeed, if F contains a unit clause {l} or a pure literal l, then applying the splitting rule to l yields only one formula, but choosing a non-pure and non-unit-clause literal leaves two formulas for further processing. If F contains neither a unit clause nor a pure literal, then the literal for the splitting rule should be chosen in such a way that produces sets F1 and F2 that are as small as possible. We address this issue in the sequel.
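For concreteness, the recursion can be sketched in Python. The encoding is ours, not the paper's: a clause is a frozenset of nonzero integers, with v standing for x_v and −v for its negation, and all function names are illustrative.

```python
from typing import FrozenSet, Set

Clause = FrozenSet[int]  # a literal is a nonzero int: v encodes x_v, -v encodes ¬x_v

def dpp(clauses: Set[Clause]) -> bool:
    """Satisfiability test following the structure of DPP (steps 1-5)."""
    if not clauses:                       # step 1: the empty formula is satisfiable
        return True
    if frozenset() in clauses:            # step 2: an empty clause is unsatisfiable
        return False
    lits = {l for c in clauses for l in c}
    pure = next((l for l in lits if -l not in lits), None)
    if pure is not None:                  # step 3: pure literal rule
        return dpp({c for c in clauses if pure not in c})
    unit = next((c for c in clauses if len(c) == 1), None)
    if unit is not None:                  # step 4: unit clause rule
        (l,) = unit
        return dpp({c - {-l} for c in clauses if l not in c})
    x = abs(next(iter(lits)))             # step 5: split on some variable x
    f1 = {c - {-x} for c in clauses if x not in c}    # x := true
    f2 = {c - {x} for c in clauses if -x not in c}    # x := false
    return dpp(f1) or dpp(f2)
```

For example, dpp({frozenset({1}), frozenset({-1})}) returns False, since the unit clause rule produces an empty clause.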
### 2.2 Counting Models

Thinking of counting models of a propositional formula F, we notice that by choosing a variable x, the splitting rule actually splits the set of models of F into two disjoint subsets: one in which x is true (namely, the models of F1), and the other (the models of F2) in which x is false. The chosen variable x appears neither in F1 nor in F2, and there may be other variables that appear in F but not in F1 (or not in F2). These are the variables which belong only to the clauses that contain the literal x (or ¬x, respectively). Hence, every model of F1 (F2) can be completed to a model of F by assigning true (false, respectively) to x, and true or false arbitrarily to every variable of F that does not appear in F1 (F2, respectively). Let #(F) stand for the number of models of F, and let n1 (n2) denote the number of variables (other than x) which occur in F but not in F1 (F2); then #(F) = 2^{n1}·#(F1) + 2^{n2}·#(F2).

As has been shown in Section 2.1, if checking satisfiability is the goal, and a pure literal (which is not a unit clause) is chosen for splitting, then only F1 should be processed further, while F2 may be disregarded. However, for counting models both F1 and F2 matter. Note that in this case F2 contains the same number of clauses as F. So, in contrast to checking satisfiability, the pure literal rule turns out to be inefficient for counting models.
Based on these observations, the following function CDP counts models of a formula F over n variables.
```
function CDP (F: propositional CNF formula; n: integer): integer;
1. if F is empty then
     return 2^n;
2. if F contains an empty clause then
     return 0;
3. if F contains a unit clause {l} then
     F1 := {C − {¬l} | C ∈ F, l ∉ C};
     return CDP(F1, n−1);
4. choose a variable x of F;
   F1 := {C − {¬x} | C ∈ F, x ∉ C};
   F2 := {C − {x} | C ∈ F, ¬x ∉ C};
   return CDP(F1, n−1) + CDP(F2, n−1);
```
Any iteration of CDP deletes the chosen variable, and so, reduces the number of variables of F by at least one. This guarantees that CDP terminates (either on step 1 or on step 2).
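Under the same assumed encoding as before (clauses as frozensets of signed integers, names ours), CDP can be transcribed directly. Note that passing n−1 to each recursive call makes the base case 2^n absorb the factors 2^{n1} and 2^{n2}: variables that silently disappear from F1 or F2 are still counted when the formula becomes empty.

```python
from typing import FrozenSet, Set

Clause = FrozenSet[int]  # v encodes x_v, -v encodes ¬x_v

def cdp(clauses: Set[Clause], n: int) -> int:
    """Count models of a CNF over n variables, following steps 1-4 of CDP."""
    if not clauses:                        # step 1: every assignment of the
        return 2 ** n                      #         n remaining variables is a model
    if frozenset() in clauses:             # step 2: no model
        return 0
    unit = next((c for c in clauses if len(c) == 1), None)
    if unit is not None:                   # step 3: unit clause rule
        (l,) = unit
        return cdp({c - {-l} for c in clauses if l not in c}, n - 1)
    x = abs(next(iter(next(iter(clauses)))))          # step 4: choose a variable
    f1 = {c - {-x} for c in clauses if x not in c}    # models with x = true
    f2 = {c - {x} for c in clauses if -x not in c}    # models with x = false
    return cdp(f1, n - 1) + cdp(f2, n - 1)
```

For instance, the single clause x₁ ∨ x₂ over two variables has three models: cdp({frozenset({1, 2})}, 2) returns 3.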
## 3. Average Running Time

In the analysis of the average running time of the function CDP we follow Goldberg (1979) and Goldberg et al. (1982), who studied the average running time of DPP.

### 3.1 The Recurrence Equation

Let m and n denote the number of clauses and variables of F, respectively. Assume that all the literals have the same probability p of appearing in each clause C of F. (To simplify the analysis we allow F to contain duplicate clauses and tautologies.) Assume also that there is no dependency among the occurrences of different variables and among the occurrences of the same variable in different clauses. For a given distribution of p, let T(m, n) denote the average running time of CDP(F, n) with F containing m clauses.

If F contains m clauses on n variables, then mn bounds the size of F. Each step of function CDP can be accomplished during at most two passes over F.

Lemma 1 If F contains m clauses on n variables, then there is a constant c such that cmn bounds the time of executing one iteration of function CDP.

It might be thought that in order to obtain a good bound on the average running time of function CDP, one must show that the unit clause rule is applied frequently, since the splitting rule is the one that introduces the possibility of exponentially long computations. We show that even if only the splitting rule is applied, low bounds are obtained. Therefore, we assume that no use of the unit clause rule is made, and analyze a variant of CDP in which step 3 is omitted. Obviously, this change does not affect Lemma 1. We also assume that the literal for the splitting rule is chosen randomly.

Let l be the literal that has been chosen for the splitting rule. Since the probability that l occurs in a clause is p, the probability that k out of m clauses contain l is $\binom{m}{k}p^{k}(1-p)^{m-k}$.
The same holds for ¬l. The number of clauses of the formulas F1, F2 produced by splitting F is equal to the number of all clauses of F minus the number of clauses of F which contain the literal l, ¬l, respectively.

Lemma 2 Each of the formulas F1, F2 produced by splitting F contains m − k clauses with probability $\binom{m}{k}p^{k}(1-p)^{m-k}$.

Since each of F1, F2 contains at most n − 1 variables, the following recurrence inequality holds:

$$T(m,n) \le \begin{cases} cmn + \sum_{k=1}^{m}\binom{m}{k}p^{k}(1-p)^{m-k}\,T(m-k,\,n-1) + \sum_{k=0}^{m}\binom{m}{k}p^{k}(1-p)^{m-k}\,T(m-k,\,n-1), & m \ge 1 \text{ and } n \ge 1 \\ c_{1}, & n = 0 \text{ or } m = 0 \end{cases} \qquad (2)$$

The right-hand side of (2) overestimates T(m, n) since, first, the number of variables of F1, F2 is at most n − 1, and, second, function CDP may stop when F contains an empty clause, even before m or n becomes 0. The first term (cmn) is the time needed for one iteration of CDP. The second and third terms are the expected times needed for computing the number of models of F1 and F2, respectively. Since the literal l appears in F, it is impossible that the number of clauses in which l occurs is 0, and so, the summation of the second term starts at 1. The literal ¬l may not appear at all in F (if l is pure); consequently, the summation of the third term begins at 0.

To estimate T(m, n) we show, first, that for p = 1/3, T(m, n) = O(m²n), and, second, that for any p, T(m, n) = O(m^d n), where d = ⌈−1/log₂(1−p)⌉.

### 3.2 The Average Running Time for p = 1/3

Following Goldberg (1979), we start with the analysis of the average running time for p = 1/3. In this case, each variable occurs in each clause positively, negatively, or not at all, each with probability 1/3.

Theorem 1 For p = 1/3, T(m, n) = O(m²n).

Proof is given in Appendix A.
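The recurrence inequality (2) can also be evaluated numerically. The sketch below sets the constants c and c₁ to 1 (an assumption made purely for illustration; the function name is ours):

```python
from functools import lru_cache
from math import comb

def t_bound(m: int, n: int, p: float) -> float:
    """Evaluate the right-hand side of recurrence (2) with c = c1 = 1."""
    @lru_cache(maxsize=None)
    def t(m: int, n: int) -> float:
        if m == 0 or n == 0:
            return 1.0
        w = [comb(m, k) * p**k * (1 - p) ** (m - k) for k in range(m + 1)]
        return (m * n                                                    # one iteration of CDP
                + sum(w[k] * t(m - k, n - 1) for k in range(1, m + 1))   # F1: l occurs in >= 1 clause
                + sum(w[k] * t(m - k, n - 1) for k in range(0, m + 1)))  # F2: ¬l may be absent
    return t(m, n)
```

For example, t_bound(1, 1, 0.5) evaluates to 2.5: one unit of iteration cost plus the expected cost of the two recursive branches.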
### 3.3 The Average Running Time for Any p

The assumption of p = 1/3 is commonly adopted in probabilistic analysis of algorithms handling CNF or DNF formulas (Franco & Paull, 1983; Goldberg, 1979; Goldberg et al., 1982), which means that for each variable, its occurrence in a clause with or without negation, or its non-occurrence, all have the same probability. However, in real cases this may not hold, so we analyze the average running time of CDP for any probability p of a literal occurrence in a clause of F.

Before analyzing the recurrence equation let us make the following estimation. The function CDP (without the unit clause rule) may be regarded as an algorithm which scans a binary tree. The root of the tree represents the input formula F; the children of each internal node represent the formulas obtained as a result of applying the splitting rule to the formula at that internal node, and the leaves of the tree represent empty formulas and formulas which contain an empty clause. If an internal node represents a formula with k clauses, the expected number of clauses in each of its children is k(1−p). The expected height h of the tree is therefore about log_b m, where b = 1/(1−p). The number of nodes in a complete binary tree of height h is about 2·2^h, and this is the expected number ν of iterations of the function CDP. So, ν = 2·2^h = 2m^d, where d = −1/log₂(1−p). The running time of each iteration is O(m̃ñ), where m̃ is the number of clauses and ñ is the number of variables of the formula that is treated at that iteration.

Theorem 2 T(m, n) = O(m^d n), where d = ⌈−1/log₂(1−p)⌉.

Proof is given in Appendix A.

A few remarks are in order. First, we assumed that the probability of occurring in a clause is the same for all literals. If this is not the case, then Theorem 2 holds for p being the lowest occurrence probability among all literals of F. Second, if p is assumed constant, then the size of clauses, which is of order 2pn, is proportional to the number of variables, and hence, is supposed to change with n.
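The exponent d of Theorem 2 is easy to tabulate (the function name below is ours):

```python
import math

def cdp_exponent(p: float) -> int:
    """d = ceil(-1 / log2(1 - p)), the exponent in the O(m^d * n) bound of Theorem 2."""
    return math.ceil(-1.0 / math.log2(1.0 - p))
```

For instance, cdp_exponent(1/3) returns 2, agreeing with the O(m²n) bound of Theorem 1; cdp_exponent(0.1) returns 7; and cdp_exponent(0.5) returns 1, so denser formulas yield a smaller exponent.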
## 4. Refining the CDP

This section describes some refinements to CDP improving its performance.

### 4.1 Choosing a Variable for the Splitting Rule

The function CDP, like the Davis-Putnam procedure, gives no recommendation how to choose a variable for a split. However, a good choice of this variable may reduce the running time of the function. By 'good choice' we mean one that causes the formulas F1 and F2 to be as small as possible. The size of these formulas is determined by both m and n. As minimizing the number of variables in F1, F2 causes an unjustified computation overhead, we concentrated on reducing the numbers of clauses m1, m2 in F1, F2, either by minimizing m1 + m2 (by means of choosing a variable appearing in a maximal number of clauses of F), or by minimizing max(m1, m2). The latter can be achieved by maximizing over x the quantity min(pos(x), neg(x)), where pos(x) (neg(x)) denotes the number of clauses of F in which x occurs unnegated (negated, respectively). These two approaches may be combined. Indeed, experiments reported in Section 5 showed best performance when the split variable was chosen to minimize m1 + m2, and, if more than one such variable existed, a further choice among the competing variables was made by minimizing max(m1, m2).
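The combined choice can be sketched as follows (representation and names are ours). Since m1 = m − pos(x) and m2 = m − neg(x), minimizing m1 + m2 amounts to maximizing pos(x) + neg(x), and minimizing max(m1, m2) amounts to maximizing min(pos(x), neg(x)):

```python
from collections import Counter
from typing import Iterable, List

def choose_split_variable(clauses: Iterable[List[int]]) -> int:
    """Pick the split variable maximizing pos(x) + neg(x) (i.e., minimizing m1 + m2),
    breaking ties by maximizing min(pos(x), neg(x)) (i.e., minimizing max(m1, m2))."""
    pos, neg = Counter(), Counter()
    for clause in clauses:
        for lit in set(clause):
            (pos if lit > 0 else neg)[abs(lit)] += 1
    variables = set(pos) | set(neg)
    return max(variables, key=lambda x: (pos[x] + neg[x], min(pos[x], neg[x])))
```

For example, on the clauses {x₁, x₂}, {¬x₁, x₂}, {¬x₁, x₃}, the function returns variable 1, which occurs in all three clauses.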
As has been suggested by an anonymous reviewer, by using proper data structures the computation of pos(x) and neg(x) for all variables x can be done efficiently, causing only a small computation overhead.
### 4.2 Handling Small Formulas

The function CDP stops when F contains an empty clause or when F is empty. What if F consists of only one clause containing k literals? The splitting rule will be applied to F, creating F1 and F2 such that one of them may be empty, but the other still consists of one clause involving just one variable less than F. So, splitting of the single clause of F will be repeated k times. A similar process of inefficient splitting takes place also on formulas consisting of relatively few clauses. It turns out that the algorithm presented by Lozinskii (1992) is more efficient for handling small sets of clauses. Experiments on randomly chosen formulas showed that for formulas with fewer than 6 clauses the algorithm of Lozinskii (1992) performs better than function CDP in its original form. Therefore, when the number of clauses of a formula being processed drops below 6, CDP runs the algorithm of Lozinskii (1992).
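The algorithm of Lozinskii (1992) itself is not reproduced in this note, but the following Inclusion-Exclusion counter (our own illustrative sketch, exponential in the number of clauses and hence suitable only for small m) conveys why a direct method wins on formulas with few clauses: it counts in one pass over clause subsets, with no repeated splitting.

```python
from itertools import combinations
from typing import List

def count_models_ie(clauses: List[List[int]], n: int) -> int:
    """#(F) = 2^n - |union of per-clause falsifying sets|, via Inclusion-Exclusion."""
    total = 0
    for r in range(len(clauses) + 1):
        for subset in combinations(clauses, r):
            forced = {-l for clause in subset for l in clause}  # literals forced true
            if any(-l in forced for l in forced):
                continue            # contradictory requirements: empty intersection
            total += (-1) ** r * 2 ** (n - len(forced))
    return total
```

For a single clause with k literals over n variables this immediately gives (2^k − 1)·2^{n−k} models, with no splitting at all.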
## 5. Experiments

Algorithm CDP presented in the previous sections has been programmed in Turbo-Pascal and run on a PC, with the purpose of determining an efficient way of choosing a variable for the splitting rule, finding an appropriate number of clauses in a formula for switching the function CDP to the algorithm of Lozinskii (1992), and assessing the actual running time of CDP.

In the experiments, CDP has been applied to randomly produced sets of clauses with the following parameters: 10 ≤ m ≤ 200, 10 ≤ n ≤ 80, and p1, p2 ∈ {0.1, 0.2, 0.3}, where p1, p2 denote the probability of a variable occurring in a clause unnegated or negated, respectively (p1 and p2 are not necessarily the same). For each combination of the parameters, 100 random instances of F have been processed. Performance of CDP has been measured by its running time and by the number of recursive calls to the main function.

We have observed that for each pair of probabilities p1, p2, there is a threshold number of clauses m̂ such that for m ≥ m̂ the hard cases lie around a corresponding number of variables. For instance, the hard cases for p1 = p2 = 0.1 and m̂ = 60 are located around n = 50. Figures 1 and 2 present the average number of recursive calls performed by CDP in the hard cases. More data are given in Tables 1-6 of Appendix B. Following a suggestion of one of the reviewers, we checked the standard deviation in all cases (see Table 7 in Appendix B). All the experiments ran on a Pentium based PC (266 MHz). The actual running time of CDP ranged from less than one second to 5 hours.

Following the discovery of hard cases for SAT (Mitchell et al., 1992), we ran CDP on a series of random sets of 3-literal clauses with the ratio m/n from 0.2 to 8.0. For each combination of m and n, 200 instances were considered. Figure 3 gives the median number of recursive calls performed by CDP for a sample of n = 20, 40, 50.
More data are given in Table 8.
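The experimental formula distribution can be reproduced roughly as follows. This is a sketch under the simplifying assumption that a variable's unnegated and negated occurrences in a clause are mutually exclusive (the paper's analysis also permits tautological clauses), and the function name is ours:

```python
import random
from typing import List, Optional

def random_cnf(n: int, m: int, p1: float, p2: float,
               seed: Optional[int] = None) -> List[List[int]]:
    """m random clauses over variables 1..n: each variable enters a clause
    unnegated with probability p1, negated with probability p2,
    independently for every (variable, clause) pair."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        clause = []
        for v in range(1, n + 1):
            r = rng.random()
            if r < p1:
                clause.append(v)       # x_v occurs unnegated
            elif r < p1 + p2:
                clause.append(-v)      # x_v occurs negated
        clauses.append(clause)
    return clauses
```

A generator of this kind, looped over the parameter grid above, suffices to rerun the experiment at any m, n, p1, p2 combination.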
Figure 1: Average number of recursive calls for different values of p1, p2. (Line chart: log₂ of the number of calls vs. the number of clauses in the formula, for 50 variables with p1 = p2 = 0.1, 30 variables with p1 = p2 = 0.2, and 20 variables with p1 = p2 = 0.3; growth is steeper for more variables and for lower p1, p2.)
Figure 2: Average number of recursive calls for mixed values of p1, p2. (Line chart: log₂ of the number of calls vs. the number of clauses in the formula, for 40 variables with p1 = 0.1, p2 = 0.2; 30 variables with p1 = 0.1, p2 = 0.3; and 20 variables with p1 = 0.2, p2 = 0.3.)
3. **Parameter Influence:** While variable count is the dominant factor, the probability parameters (p1, p2) also differ between series. The series with the highest variable count (40) has the lowest p1 (0.1) and a mid-range p2 (0.2). Isolating the effect of p1/p2 is difficult without controlled comparisons, but the chart suggests their impact is secondary to the number of variables.
### Interpretation
This chart visualizes the scaling behavior of an algorithm or process whose cost is measured in the number of calls, plotted on a logarithmic scale. The key insight is that the number of calls grows steeply with the number of clauses (if log₂(calls) increased linearly with clauses, that would imply calls = 2^(k·clauses)), but the growth rate itself depends heavily on the number of variables in the problem.
The data suggests that for this specific process, **problem size (number of variables) is a more critical determinant of computational cost than the number of logical clauses** within the formula, especially as the clause count becomes large. The concave shape indicates that while cost always increases with more clauses, the *rate* of exponential growth slows down. This could be due to algorithmic optimizations, problem structure, or the nature of the search space being explored. The chart effectively communicates that scaling up the number of variables in a problem leads to a dramatically steeper increase in resource requirements compared to simply adding more clauses to a formula with a fixed, smaller number of variables.
</details>
Figure 2: Average number of recursive calls for different values of p1, p2 (continued)
Figure 3: (Median number is shown to enable comparison with the results presented by Mitchell et al., 1992; exact values are given in Appendix B.)
Birnbaum /& Lozinskii
<details>
<summary>Image 3 Details</summary>

### Visual Description
## Line Chart: Computational Complexity vs. Clause-to-Variable Ratio
### Overview
The image is a line chart plotting the logarithmic measure of computational effort against a problem density metric. It displays three data series representing formulas with different numbers of variables (20, 40, and 50). The chart illustrates a characteristic "phase transition" pattern common in computational problems like SAT solving, where difficulty peaks at a specific ratio before declining.
### Components/Axes
* **Y-Axis (Vertical):**
* **Label:** `log₂ of number of calls`
* **Scale:** Linear scale from 0 to 24, with major tick marks every 2 units (0, 2, 4, ..., 24).
* **Interpretation:** Represents the base-2 logarithm of the number of computational calls (e.g., solver steps). A value of 10 corresponds to 2¹⁰ = 1024 calls; a value of 20 corresponds to 2²⁰ ≈ 1,048,576 calls.
* **X-Axis (Horizontal):**
* **Label:** `clauses-to-variables ratio`
* **Scale:** Linear scale from 0 to 8, with major tick marks every 1 unit (0, 1, 2, ..., 8).
* **Interpretation:** Represents the ratio of clauses to variables in a formula, a measure of problem constraint density.
* **Legend:**
* **Position:** Top-right quadrant of the chart area.
* **Entries:**
1. `20-variables formulas` - Represented by a line with diamond (♦) markers.
2. `40-variables formulas` - Represented by a line with circle (●) markers.
3. `50-variables formulas` - Represented by a line with triangle (▲) markers.
### Detailed Analysis
**Trend Verification & Data Points:**
All three series follow the same fundamental trend: a sharp, near-vertical increase from the origin (0,0) to a distinct peak, followed by a gradual, monotonic decline as the clauses-to-variables ratio increases.
1. **50-variables formulas (▲ line):**
* **Trend:** Rises most steeply to the highest peak, then declines, remaining the topmost line throughout.
* **Key Data Points (Approximate):**
* At ratio ≈ 0.1: y ≈ 12
* **Peak:** At ratio ≈ 1.5, y ≈ 23.5 (the global maximum of the chart).
* At ratio = 4: y ≈ 12
* At ratio = 8: y ≈ 8
2. **40-variables formulas (● line):**
* **Trend:** Rises to a peak lower than the 50-variable series but higher than the 20-variable series. It sits between the other two lines.
* **Key Data Points (Approximate):**
* At ratio ≈ 0.1: y ≈ 10.5
* **Peak:** At ratio ≈ 1.2, y ≈ 19
* At ratio = 4: y ≈ 10
* At ratio = 8: y ≈ 7
3. **20-variables formulas (♦ line):**
* **Trend:** Has the lowest peak and remains the bottommost line. Its decline is the shallowest.
* **Key Data Points (Approximate):**
* At ratio ≈ 0.1: y ≈ 6
* **Peak:** At ratio ≈ 1.0, y ≈ 10.5
* At ratio = 4: y ≈ 7
* At ratio = 8: y ≈ 5.5
**Spatial Relationships:** The vertical ordering of the lines (50-var on top, 40-var middle, 20-var bottom) is consistent across the entire x-axis range shown. The horizontal spacing between the peaks is notable: the peak for 20-var formulas occurs at the lowest ratio (~1.0), followed by 40-var (~1.2), and then 50-var (~1.5).
### Key Observations
1. **Phase Transition Peak:** All series exhibit a clear maximum difficulty (peak number of calls) at a clauses-to-variables ratio between 1.0 and 1.5. This is the "hard" region.
2. **Scaling with Problem Size:** The peak height (log₂ calls) increases significantly with the number of variables. The jump from 20 to 40 variables nearly doubles the peak log value (from ~10.5 to ~19), implying an exponential increase in actual calls.
3. **Asymptotic Behavior:** For high ratios (x > 4), the curves flatten and converge slightly, but maintain their strict ordering. The computational cost decreases and becomes less sensitive to further increases in constraint density.
4. **Initial Growth:** The initial ascent from ratio=0 is extremely steep for all series, indicating that adding the first few constraints dramatically increases problem difficulty.
### Interpretation
This chart visualizes a fundamental phenomenon in computational complexity, likely for Boolean Satisfiability (SAT) or similar constraint satisfaction problems.
* **What the data suggests:** Problem difficulty is not monotonic with constraint density. There is a critical region (ratio ≈ 1-1.5) where problems are maximally difficult to solve, requiring an exponential number of computational steps (as shown by the high log₂ values). This is the well-known "phase transition" between under-constrained (easily satisfiable) and over-constrained (easily proven unsatisfiable) regions.
* **How elements relate:** The number of variables acts as a scaling factor. Larger problems (more variables) are exponentially harder at the peak, as evidenced by the large vertical gaps between the lines. The shift in the peak's position to the right for larger problems suggests that the critical density ratio itself may scale slightly with problem size.
* **Notable implications:** The chart provides empirical evidence for the "easy-hard-easy" pattern. For practitioners, it highlights that the most challenging instances to solve or benchmark are not those with the most clauses, but those with a clause-to-variable ratio near 1. The consistent ordering of the lines confirms that, for a fixed ratio, more variables invariably lead to harder problems. The use of a log₂ scale is crucial, as it compresses the enormous range of computational effort (from hundreds to millions of calls) into a readable, linear trend.
</details>
Median number of recursive calls for 3-literal formulas.
In all experiments with counting models we have observed a phenomenon akin to that characteristic of checking satisfiability, namely, for a given length of clauses r, the running time of algorithm CDP reaches its maximum at a certain value of the clauses-to-variables ratio. For r = 3 (Figure 3) this "hard" ratio is 1.2, while it has been reported in many works (e.g., Mitchell et al., 1992) that hard cases of 3-SAT lie around m/n = 4.3. This shift of the maximum running time to the left (from m/n = 4.3 to m/n = 1.2) can be explained by the fact that when a program for SAT discovers a satisfying assignment, it stops processing, while counting models has to go on over all satisfying assignments. So, the higher the probability that a formula is satisfiable, and the more models it is likely to have (and this is what happens as the ratio m/n decreases), the longer model counting takes compared to checking satisfiability.
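The effect described above can be made concrete with a toy splitting procedure. The sketch below is ours, not the paper's CDP implementation: with `count_all=True` both branches are explored everywhere (model counting), while `count_all=False` returns as soon as one model is found (satisfiability checking), which is why the two tasks peak at different clauses-to-variables ratios.

```python
def dp_count(clauses, n_vars, count_all=True):
    """Davis-Putnam-style splitting over CNF clauses (tuples of signed ints).

    With count_all=True the full split tree is explored and the exact
    number of models is returned; with count_all=False the search stops
    at the first model, mimicking a SAT checker.
    """
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:            # clause satisfied by lit: drop it
                continue
            c2 = c - {-lit}         # remove the falsified literal
            if not c2:
                return None         # empty clause: this branch has no models
            out.append(c2)
        return out

    def rec(cls, free_vars):
        if cls is None:
            return 0
        if not cls:
            return 2 ** len(free_vars)   # every completion is a model
        v = min(free_vars)
        rest = free_vars - {v}
        pos = rec(simplify(cls, v), rest)
        if not count_all and pos:
            return pos                   # SAT mode: stop after one model
        return pos + rec(simplify(cls, -v), rest)

    return rec([set(c) for c in clauses], set(range(1, n_vars + 1)))
```

For example, `dp_count([(1, 2)], 2)` returns 3: of the four assignments to x1, x2, only x1 = x2 = 0 falsifies the clause.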
Davis-Putnam Procedure Helps Counting Models

6. Concluding Remarks

We have presented an algorithm (CDP) for counting models of a propositional formula F. The previous sections considered CNF formulas; however, it is easy to construct a dual algorithm for DNF formulas. Comparing algorithm CDP with that of Iwama (1989) and Lozinskii (1992), we note that both have a common central feature: the more variables have both negated and unnegated occurrences in the clauses of a given formula, the better the algorithms perform. This feature is also common to the algorithms presented by Dubois (1991) and by Zhang (1996). In contrast to checking satisfiability, pure literals slow down counting models by these algorithms.

The experiments made with algorithm CDP have shown that it performs better than that of Lozinskii (1992). However, the main advantage of CDP is in the amount of storage it requires. In the experiments performed by Lozinskii (1992) for m ≤ 100 and n ≤ 30, there were cases with thousands of clauses stored during the run of the algorithm. Since algorithm CDP performs a depth-first search of the split tree, and at each split at least one variable is deleted, the maximal number of stored clauses is bounded by mn. During the experiments we also checked the maximal number M of stored clauses. The results showed that M hardly depends on n; rather, M ≈ s·m^t, where t ≈ 1.3 does not depend on p1 and p2, while s does depend on p1 and p2. Table 1 displays s as a function of p1 and p2. Figure 1 in Appendix B shows average values of M for n = 50 and p1 = p2 = 0.1, 0.2, 0.3. The maximum of M in all our experiments was 1491 (for n = 50, m = 200).
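As a sanity check on the empirical law M ≈ s·m^t (the function name below is ours): with s = 1.54, the value Table 1 gives for p1 = p2 = 0.1, and t = 1.3, the predicted storage at m = 200 lands close to the observed overall maximum of 1491 clauses.

```python
def predicted_storage(m, s, t=1.3):
    """Empirical storage law from the experiments: M ~ s * m**t."""
    return s * m ** t

# s = 1.54 is the Table 1 entry for p1 = p2 = 0.1; the observed maximum
# over all experiments was M = 1491 at n = 50, m = 200.
estimate = predicted_storage(200, s=1.54)
print(f"{estimate:.0f}")
```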
| p1  | p2  | s    |
|-----|-----|------|
| 0.1 | 0.1 | 1.54 |
| 0.1 | 0.2 | 0.82 |
| 0.1 | 0.3 | 0.57 |
| 0.2 | 0.2 | 0.74 |
| 0.2 | 0.3 | 0.50 |
| 0.3 | 0.3 | 0.46 |

Table 1: s as a function of p1 and p2

The analysis of the running time of CDP follows the method developed by Goldberg (1979) and by Goldberg et al. (1982). This method was also used by Iwama (1989). The well-known criticisms by Franco and Paull (1983) and by Mitchell et al. (1992) point out that the instances generated by this method are mostly satisfiable, and that is indeed the reason for the good performance of satisfiability-checking algorithms: the algorithms rapidly find one model of the formula. This point does not apply when we seek not only one model but all the models.

Finally, it should be noted that the algorithms for counting models developed so far are designed for propositional formulas. As mentioned in the introduction, one application of counting models is in reasoning with incomplete information in logic systems, which are usually expressed in predicate calculus. Therefore, developing counting algorithms for predicate calculus formulas is a next challenge of great importance.
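The instance distribution used in the experiments can be reproduced along the following lines; our reading (an assumption, since the generator is not spelled out in this section) is that each variable enters a clause positively with probability p1 and negatively with probability p2, independently for every variable-clause pair.

```python
import random

def random_cnf(m, n, p1, p2, seed=0):
    """Generate m random clauses over variables 1..n.

    Assumed model: for each clause, every variable independently occurs
    positively with probability p1, negatively with probability p2, and
    not at all otherwise (so empty clauses are possible).
    """
    rng = random.Random(seed)
    formula = []
    for _ in range(m):
        clause = []
        for v in range(1, n + 1):
            r = rng.random()
            if r < p1:
                clause.append(v)
            elif r < p1 + p2:
                clause.append(-v)
        formula.append(tuple(clause))
    return formula

f = random_cnf(m=100, n=30, p1=0.1, p2=0.2)
```

Seeding the generator makes runs reproducible, which matters when comparing the average and median call counts reported in Appendix B.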
Acknowledgments

We are grateful to the anonymous referees for their most apt and inspiring comments. This research has been partly supported by the Israel Science Foundation, administered by the Israel Academy of Sciences and Humanities.
Appendix A/. Proofs
Theorem 1 For p = 1/3, T(m,n) = O(m²n).

Proof. For p = 1/3 the recurrence for the average running time T(m,n) of CDP is

$$T(m,n)=\begin{cases}1, & n=0\ \text{or}\ m=0\\ cmn+\left(\frac{2}{3}\right)^{m}T(m,n-1)+2\sum_{k=1}^{m}\binom{m}{k}\left(\frac{1}{3}\right)^{k}\left(\frac{2}{3}\right)^{m-k}T(m-k,n-1), & \text{otherwise}\end{cases}\quad(1)$$

and Goldberg's recurrence for the average running time S(m,n) of checking satisfiability is

$$S(m,n)=\begin{cases}1, & n=0\ \text{or}\ m=0\\ cmn+2\sum_{k=1}^{m}\binom{m}{k}\left(\frac{1}{3}\right)^{k}\left(\frac{2}{3}\right)^{m-k}S(m-k,n-1), & \text{otherwise.}\end{cases}\quad(2)$$

The difference between (2) and (1), the recurrence for S(m,n) and the recurrence for T(m,n), is the extra term (2/3)^m T(m,n−1) in the latter. To compare T(m,n) and S(m,n) we first need the following result.

Lemma 1 For all m ≥ 0 and all n ≥ 0, S(m,n) ≤ (3/2)^{m−1}cmn.

Proof. Goldberg (1979) showed that S(m,n) ≤ cm²n. Since m < (3/2)^{m−1} for m ≥ 5, this proves the lemma for all m ≥ 5 and all n ≥ 0. Direct computation of S(m,n), due to (2), for m = 0, 1, 2, 3, 4 completes the proof for all m ≥ 0 and all n ≥ 0.

Now T(m,n) can be estimated as follows.

Lemma 2 For all m and all n, T(m,n) ≤ 3S(m,n).

Proof. By induction on n. The assertion is true for n ≤ 1. According to (1) and the induction hypothesis,

$$\begin{align*}
T(m,n) &\leq cmn+\left(\frac{2}{3}\right)^{m}3S(m,n-1)+2\sum_{k=1}^{m}\binom{m}{k}\left(\frac{1}{3}\right)^{k}\left(\frac{2}{3}\right)^{m-k}3S(m-k,n-1)\\
&= cmn+3\left(\frac{2}{3}\right)^{m}S(m,n-1)+3\bigl(S(m,n)-cmn\bigr),
\end{align*}$$

where the last step uses (2); so, T(m,n) ≤ 3S(m,n) + 3(2/3)^m S(m,n−1) − 2cmn. For all n > 0, S(m,n−1) ≤ S(m,n), and by Lemma 1, for all m and all n, 3(2/3)^m S(m,n) ≤ 2cmn. This completes the proof.

Since S(m,n) = O(m²n) and T(m,n) ≤ 3S(m,n), we get for p = 1/3, T(m,n) = O(m²n).
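Lemma 2 can be probed numerically. The sketch below encodes recurrences (1) and (2) directly, including the extra (2/3)^m T(m,n−1) term in T, with the constant c set to 1 for illustration (our choice), and checks T(m,n) ≤ 3S(m,n) on a small grid.

```python
from functools import lru_cache
from math import comb

C = 1.0  # stands in for Goldberg's constant c; chosen here for illustration

@lru_cache(maxsize=None)
def S(m, n):
    """Recurrence (2): average running time of checking satisfiability."""
    if n == 0 or m == 0:
        return 1.0
    total = C * m * n
    for k in range(1, m + 1):
        total += 2 * comb(m, k) * (1/3)**k * (2/3)**(m - k) * S(m - k, n - 1)
    return total

@lru_cache(maxsize=None)
def T(m, n):
    """Recurrence (1): the same plus the extra (2/3)^m T(m, n-1) term."""
    if n == 0 or m == 0:
        return 1.0
    total = C * m * n + (2/3)**m * T(m, n - 1)
    for k in range(1, m + 1):
        total += 2 * comb(m, k) * (1/3)**k * (2/3)**(m - k) * T(m - k, n - 1)
    return total

# Lemma 2 on a small grid: T(m, n) <= 3 * S(m, n).
assert all(T(m, n) <= 3 * S(m, n) for m in range(9) for n in range(9))
```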
Theorem 2 T(m,n) = O(m^d n), where d = ⌈−1/log₂(1−p)⌉.

Proof. We analyze the recurrence inequality for any p, using considerations similar to those of Goldberg et al. (1982):

$$T(m,n)\leq\begin{cases}1, & n=0\ \text{or}\ m=0\\ cmn+(1-p)^{m}T(m,n-1)+2\sum_{k=1}^{m}\binom{m}{k}p^{k}(1-p)^{m-k}T(m-k,n-1), & \text{otherwise.}\end{cases}\quad(4)$$

It is easy to show, by induction on n, that a solution of (4) is linear in n, since this recurrence is linear in the T's. Now for any value m₀ let us choose a constant a such that for all m ≤ m₀,

$$T(m,n)\leq am^{d}n.\quad(5)$$

For m > m₀ we substitute (5) into (4), using the estimate

$$\sum_{k=1}^{m}\binom{m}{k}p^{k}(1-p)^{m-k}\left(1-\frac{k}{m}\right)^{d}=(1-p)^{d}-(1-p)^{m}+O(m^{-1}),\quad(6)$$

and, since cmn = O(mn), obtain

$$T(m,n)\leq\frac{2am^{d}n(1-p)^{d}}{1-(1-p)^{m}}+an\cdot O\left(m^{\max(1,d-1)}\right).$$

We are now going to find out conditions under which

$$\frac{2am^{d}n(1-p)^{d}}{1-(1-p)^{m}}+an\cdot O\left(m^{\max(1,d-1)}\right)\leq am^{d}n,$$

that is,

$$O\left(m^{\max(1,d-1)}\right)\leq m^{d}\left[1-\frac{2(1-p)^{d}}{1-(1-p)^{m}}\right].\quad(7)$$

First, let g denote the subexpression of (7) in square brackets. For all d, g is less than 1; hence, d must be greater than 1 to satisfy O(m^{max(1,d−1)}) ≤ m^d g:

$$d>1.\quad(8)$$

Second, for 0 < p < 1, (1−p)^m diminishes monotonically and tends to 0 as m → ∞; hence, to make g positive we need 2(1−p)^d < 1, that is,

$$d>\frac{-1}{\log_{2}(1-p)}.\quad(9)$$

Thus, choosing m (and hence, m₀) sufficiently large (depending on the value of the O(m^{−1}) expression in (6)), and determining the values of a and d satisfying conditions (5), (8), (9), establishes T(m,n) = O(m^d n), where d = ⌈−1/log₂(1−p)⌉.
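For concreteness, the exponent of Theorem 2 is easy to tabulate (the helper name below is ours). Note that p = 1/3 gives d = 2, in agreement with the O(m²n) bound of Theorem 1.

```python
from math import ceil, log2

def exponent_d(p):
    """The exponent of Theorem 2: d = ceil(-1 / log2(1 - p))."""
    return ceil(-1 / log2(1 - p))

print(exponent_d(1 / 3))  # -> 2, matching Theorem 1
print(exponent_d(0.1))    # -> 7: sparser formulas give a weaker bound
```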
Appendix B/. More Experimental Data
| Number of clauses | 40 variables | 50 variables | 60 variables |
|-------------------|--------------|--------------|--------------|
| 10  | 16      | 15       | 13       |
| 20  | 234     | 191      | 153      |
| 30  | 1600    | 1203     | 868      |
| 40  | 6294    | 5076     | 3699     |
| 50  | 19856   | 16384    | 11370    |
| 60  | 46645   | 53296    | 28467    |
| 70  | 101607  | 109327   | 75472    |
| 80  | 186199  | 244248   | 153384   |
| 90  | 299220  | 426169   | 311668   |
| 100 | 507888  | 807669   | 635712   |
| 120 | 1093383 | 2222499  | 1687147  |
| 140 | 1945874 | 5507340  | 4289713  |
| 160 | 3095848 | 11824325 | 10812931 |
| 180 | 4441760 | 22444086 | 20230352 |
| 200 | 6377501 | 39119045 | 37381207 |

Table 1: Average number of recursive calls for p1 = p2 = 0.1
| Number of clauses | 20 variables | 30 variables | 40 variables |
|-------------------|--------------|--------------|--------------|
| 10  | 8    | 7     | 7     |
| 20  | 41   | 34    | 29    |
| 30  | 110  | 88    | 76    |
| 40  | 234  | 185   | 150   |
| 50  | 433  | 332   | 266   |
| 60  | 648  | 545   | 424   |
| 70  | 973  | 835   | 636   |
| 80  | 1302 | 1240  | 919   |
| 90  | 1683 | 1731  | 1225  |
| 100 | 2089 | 2349  | 1647  |
| 120 | 3128 | 3947  | 2791  |
| 140 | 3934 | 6461  | 4339  |
| 160 | 5209 | 9821  | 6446  |
| 180 | 6051 | 13661 | 9002  |
| 200 | 6513 | 18696 | 12193 |

Table 2: Average number of recursive calls for p1 = p2 = 0.2
| Number of clauses | 20 variables | 30 variables |
|-------------------|--------------|--------------|
| 10  | 5   | 5   |
| 20  | 17  | 15  |
| 30  | 32  | 29  |
| 40  | 52  | 45  |
| 50  | 78  | 65  |
| 60  | 104 | 89  |
| 70  | 134 | 114 |
| 80  | 176 | 146 |
| 90  | 218 | 178 |
| 100 | 265 | 215 |
| 120 | 363 | 295 |
| 140 | 495 | 384 |
| 160 | 623 | 489 |
| 180 | 807 | 607 |
| 200 | 966 | 724 |

Table 3: Average number of recursive calls for p1 = p2 = 0.3
| Number of clauses | 30 variables | 40 variables | 50 variables |
|-------------------|--------------|--------------|--------------|
| 10  | 11     | 10     | 9      |
| 20  | 89     | 75     | 60     |
| 30  | 343    | 290    | 211    |
| 40  | 1025   | 752    | 572    |
| 50  | 2179   | 1709   | 1252   |
| 60  | 4195   | 3438   | 2473   |
| 70  | 7309   | 6250   | 4311   |
| 80  | 11740  | 10919  | 7622   |
| 90  | 19224  | 17619  | 10999  |
| 100 | 25866  | 27847  | 16734  |
| 120 | 44729  | 57548  | 37051  |
| 140 | 74280  | 109469 | 68354  |
| 160 | 109134 | 185568 | 117175 |
| 180 | 144698 | 304983 | 201822 |
| 200 | 182516 | 463503 | 307851 |

Table 4: Average number of recursive calls for p1 = 0.1, p2 = 0.2
| Number of clauses | 30 variables | 40 variables | 50 variables |
|-------------------|--------------|--------------|--------------|
| 10  | 9     | 9     | 8     |
| 20  | 51    | 41    | 36    |
| 30  | 141   | 114   | 97    |
| 40  | 333   | 239   | 206   |
| 50  | 635   | 440   | 361   |
| 60  | 1048  | 782   | 596   |
| 70  | 1638  | 1117  | 903   |
| 80  | 2603  | 1706  | 1317  |
| 90  | 3801  | 2493  | 1915  |
| 100 | 4813  | 3523  | 2568  |
| 120 | 8591  | 6087  | 4401  |
| 140 | 14043 | 10221 | 7254  |
| 160 | 21237 | 15087 | 10731 |
| 180 | 29782 | 22832 | 15271 |
| 200 | 40550 | 31882 | 20760 |

Table 5: Average number of recursive calls for p1 = 0.1, p2 = 0.3
| Number of clauses | 20 variables | 30 variables |
|-------------------|--------------|--------------|
| 10  | 6    | 6    |
| 20  | 24   | 21   |
| 30  | 56   | 45   |
| 40  | 101  | 80   |
| 50  | 164  | 124  |
| 60  | 234  | 189  |
| 70  | 337  | 255  |
| 80  | 456  | 352  |
| 90  | 606  | 447  |
| 100 | 762  | 567  |
| 120 | 1183 | 851  |
| 140 | 1608 | 1225 |
| 160 | 2184 | 1621 |
| 180 | 2887 | 2159 |
| 200 | 3533 | 2743 |

Table 6: Average number of recursive calls for p1 = 0.2, p2 = 0.3
| Probabilities | Variables | Clauses | Average | Standard deviation |
|---------------|-----------|---------|---------|--------------------|
| p1 = p2 = 0.1 | 40 | 120 | 1093383 | 412246 |
| | 40 | 200 | 6377501 | 2921029 |
| | 50 | 120 | 2222499 | 1141909 |
| | 50 | 200 | 39119045 | 14587096 |
| p1 = p2 = 0.2 | 30 | 120 | 3947 | 906 |
| | 30 | 200 | 18696 | 3387 |
| | 40 | 120 | 2791 | 473 |
| | 40 | 200 | 12193 | 2097 |
| p1 = p2 = 0.3 | 20 | 120 | 363 | 43 |
| | 20 | 200 | 966 | 106 |
| | 30 | 120 | 295 | 25 |
| | 30 | 200 | 724 | 63 |
| p1 = 0.1, p2 = 0.2 | 40 | 120 | 57548 | 21946 |
| | 40 | 200 | 463503 | 135171 |
| | 50 | 120 | 37051 | 11081 |
| | 50 | 200 | 307851 | 90897 |
| p1 = 0.1, p2 = 0.3 | 30 | 120 | 8591 | 2244 |
| | 30 | 200 | 40550 | 7003 |
| | 40 | 120 | 6087 | 1349 |
| | 40 | 200 | 31882 | 6264 |
| p1 = 0.2, p2 = 0.3 | 20 | 120 | 1183 | 183 |
| | 20 | 200 | 3533 | 514 |
| | 30 | 120 | 851 | 126 |
| | 30 | 200 | 2743 | 392 |

Table 7: Samples of standard deviation of number of recursive calls
| Clauses-to-variables ratio | 20 variables | 40 variables | 50 variables |
|----------------------------|--------------|--------------|--------------|
| 0.2 | 59   | 1056   | 32062    |
| 0.4 | 272  | 17715  | 2500987  |
| 0.6 | 625  | 113185 | 11634670 |
| 0.8 | 993  | 252044 | 31376686 |
| 1.0 | 1301 | 502669 | 34843025 |
| 1.2 | 1521 | 538579 | 59443580 |
| 1.4 | 1445 | 529520 | 48805440 |
| 1.6 | 1328 | 412563 | 26416134 |
| 1.8 | 1283 | 307939 | 29494057 |
| 2.0 | 1050 | 194754 | 11792351 |
| 2.2 | 789  | 105089 | 8277529  |
| 2.4 | 664  | 60548  | 4037986  |
| 2.6 | 558  | 34057  | 2208147  |
| 2.8 | 387  | 18875  | 923228   |
| 3.0 | 341  | 9574   | 633407   |
| 4.0 | 135  | 816    | 21339    |
| 5.0 | 86   | 335    | 1443     |
| 6.0 | 65   | 225    | 862      |
| 7.0 | 55   | 160    | 558      |
| 8.0 | 48   | 138    | 476      |

Table 8: Median number of recursive calls for 3-literal clauses, n = 20, 40, 50
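The "hard" ratio discussed in the text can be read off Table 8 directly; for instance, in the 20-variable column the median peaks at m/n = 1.2:

```python
# Table 8, 20-variable column: (clauses-to-variables ratio, median calls).
table8_n20 = [
    (0.2, 59), (0.4, 272), (0.6, 625), (0.8, 993), (1.0, 1301),
    (1.2, 1521), (1.4, 1445), (1.6, 1328), (1.8, 1283), (2.0, 1050),
    (2.2, 789), (2.4, 664), (2.6, 558), (2.8, 387), (3.0, 341),
    (4.0, 135), (5.0, 86), (6.0, 65), (7.0, 55), (8.0, 48),
]

hard_ratio = max(table8_n20, key=lambda rc: rc[1])[0]
print(hard_ratio)  # -> 1.2
```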
<details>
<summary>Image 4 Details</summary>

### Visual Description
## Line Chart: Clause Storage Scaling
### Overview
This is a line chart illustrating the relationship between the number of clauses in a formula (input) and the resulting number of clauses stored (output) under three different probability parameter settings. The chart demonstrates how storage requirements scale as the size of the input formula increases.
### Components/Axes
* **X-Axis (Horizontal):** Labeled "Number of clauses in the formula". The scale runs from 0 to 200, with major tick marks and numerical labels every 20 units (0, 20, 40, ..., 200).
* **Y-Axis (Vertical):** Labeled "Number of clauses stored". The scale runs from 0 to 1300, with major tick marks and numerical labels every 100 units (0, 100, 200, ..., 1300).
* **Legend:** Positioned in the top-left quadrant of the chart area. It contains three entries, each associating a line style and marker with a parameter setting:
* `p1=p2=0.1`: Represented by a solid line with diamond (♦) markers.
* `p1=p2=0.2`: Represented by a solid line with circle (●) markers.
* `p1=p2=0.3`: Represented by a solid line with triangle (▲) markers.
### Detailed Analysis
The chart plots three distinct data series, each showing a positive, non-linear correlation between the x and y variables. All three lines originate at (0,0).
**1. Series: p1=p2=0.1 (Diamond Markers)**
* **Trend:** This line exhibits the steepest, near-linear upward slope. It represents the highest storage requirement for any given formula size.
* **Data Points (Approximate):**
* (20, ~60)
* (40, ~180)
* (60, ~290)
* (80, ~420)
* (100, ~560)
* (120, ~710)
* (140, ~820)
* (160, ~960)
* (180, ~1120)
* (200, ~1280)
**2. Series: p1=p2=0.2 (Circle Markers)**
* **Trend:** This line shows a moderate upward slope, less steep than the first series. The growth appears slightly sub-linear or gently curving.
* **Data Points (Approximate):**
* (20, ~20)
* (40, ~70)
* (60, ~130)
* (80, ~190)
* (100, ~260)
* (120, ~330)
* (140, ~390)
* (160, ~460)
* (180, ~520)
* (200, ~580)
**3. Series: p1=p2=0.3 (Triangle Markers)**
* **Trend:** This line has the shallowest upward slope, indicating the most efficient storage scaling. The curve is more pronounced than the middle series.
* **Data Points (Approximate):**
* (20, ~10)
* (40, ~40)
* (60, ~80)
* (80, ~120)
* (100, ~160)
* (120, ~200)
* (140, ~240)
* (160, ~280)
* (180, ~320)
* (200, ~350)
### Key Observations
1. **Clear Parameter Impact:** There is a strong, inverse relationship between the probability parameter (p1=p2) and the number of clauses stored. Lower probability values (0.1) result in significantly higher storage counts compared to higher values (0.3) for the same input formula size.
2. **Divergence with Scale:** The difference in stored clauses between the three series grows dramatically as the number of clauses in the formula increases. At x=200, the storage for p=0.1 (~1280) is over 3.6 times greater than for p=0.3 (~350).
3. **Growth Pattern:** All series show super-linear growth (output grows faster than input), but the degree of super-linearity decreases as the probability parameter increases. The p=0.1 series is nearly linear, while the p=0.3 series shows a more noticeable curve.
### Interpretation
This chart likely originates from a field involving computational logic, database theory, or automated reasoning, where "clauses" are fundamental units of information (e.g., in logic programming or SAT solving). The parameters `p1` and `p2` probably represent probabilities related to clause generation, redundancy, or filtering.
The data suggests a fundamental trade-off: **systems configured with lower probability thresholds (p=0.1) incur a much higher storage cost as problem size scales.** This could be because lower probabilities allow more clauses to be retained or generated. Conversely, higher probability thresholds (p=0.3) act as a stronger filter, leading to more compact storage but potentially at the cost of information loss or reduced solution completeness.
The near-linear scaling of the p=0.1 series is particularly noteworthy. It implies that for this configuration, the storage overhead per added clause in the formula is roughly constant, which could be a critical design consideration for large-scale systems. The chart provides a clear visual argument for carefully tuning such probability parameters to balance resource consumption (storage) against other system requirements.
</details>
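The dependence on p discussed above can be made concrete with the exponent formula from the abstract, d = ⌈−1 / log₂(1 − p)⌉, which governs the O(mᵈn) average running time of CDP. A minimal sketch (the helper name `cdp_exponent` is hypothetical, not from the paper):

```python
import math

def cdp_exponent(p: float) -> int:
    """Exponent d = ceil(-1 / log2(1 - p)) from the average-time
    bound O(m^d * n) for CDP; p is the probability that a given
    literal of F occurs in a given clause."""
    return math.ceil(-1.0 / math.log2(1.0 - p))

# Smaller p yields a larger exponent, hence steeper average-case growth:
for p in (0.1, 0.2, 0.3):
    print(f"p = {p}: d = {cdp_exponent(p)}")
# p = 0.1: d = 7
# p = 0.2: d = 4
# p = 0.3: d = 2
```

The jump from d = 2 at p = 0.3 to d = 7 at p = 0.1 matches the much steeper growth of the p = 0.1 series in the figure.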
Figure 1: Average of maximal number of clauses stored during the running of CDP for 50 variables (x-axis: number of clauses in the formula).
## References

Davis, M., & Putnam, H. (1960). A computing procedure for quantification theory. Journal of the ACM, 7, 201–215.

Dubois, O. (1991). Counting the number of solutions for instances of satisfiability. Theoretical Computer Science, 81(1), 49–64.

Franco, J., & Paull, M. (1983). Probabilistic analysis of the Davis-Putnam procedure for solving the satisfiability problem. Discrete Applied Mathematics, 5, 77–87.

Goldberg, A. (1979). Average case complexity of the satisfiability problem. In Proceedings of the 4th Workshop on Automated Deduction, Austin, Texas.

Goldberg, A., Purdom, P., & Brown, C. (1982). Average time analysis of simplified Davis-Putnam procedures. Information Processing Letters, 15(2), 72–75.

Grove, A., Halpern, J., & Koller, D. (1994). Random worlds and maximum entropy. Journal of Artificial Intelligence Research, 2, 33–88.

Iwama, K. (1989). CNF satisfiability test by counting and polynomial average time. SIAM Journal on Computing, 18(2), 385–391.

Linial, N., & Nisan, N. (1990). Approximate inclusion-exclusion. Combinatorica, 10(4), 349–365.

Lozinskii, E. (1989). Answering atomic queries in indefinite deductive databases. International Journal of Intelligent Systems, 4(4), 403–429.

Lozinskii, E. (1992). Counting propositional models. Information Processing Letters, 41(6), 327–332.

Lozinskii, E. (1995). Is there an alternative to parsimonious semantics? Journal of Experimental and Theoretical Artificial Intelligence, 7, 361–378.

Luby, M., & Veličković, B. (1991). On deterministic approximation of DNF. In Proceedings of STOC'91.

Mitchell, D., Selman, B., & Levesque, H. (1992). Hard and easy distributions of SAT problems. In Proceedings of AAAI-92.

Roth, D. (1996). On the hardness of approximate reasoning. Artificial Intelligence, 82, 273–302.

Valiant, L. (1979). The complexity of computing the permanent. Theoretical Computer Science, 8, 189–201.

Zhang, W. (1996). Number of models and satisfiability of sets of clauses. Theoretical Computer Science, 155(1), 277–288.