IUP Publications Online
The IUP Journal of Computer Sciences
The Application of the Inclusion-Exclusion Principle in Learning Monotonic Boolean Functions
In this paper, we consider the inference problem for monotone Boolean structure functions (for example, Torvik and Triantaphyllou, 2002 and 2005; or Judson et al., 2005). We follow Judson’s algorithm (Judson, 1999; Judson et al., 2005), with two possible changes. First, when choosing a vector to test, we simply evaluate the “value” of a given number of random vectors, instead of using Judson’s “neighbor” algorithm to find test vectors. Second, we consider a new way of calculating the value of a vector, which makes use of the inclusion-exclusion principle from combinatorics. In tests on 10-component systems, we find that the “random” approach outperforms the “neighbor” approach, and that the inclusion-exclusion method is an improvement whenever the boundary of the “unknown vector set” is small.
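The inclusion-exclusion principle invoked here can be illustrated independently of the paper's particular value calculation. The sketch below (in Python, with an up-set construction and example vectors chosen purely for illustration, not taken from the paper) counts the vectors covered by a union of up-sets and checks the count against a direct enumeration:

```python
from itertools import combinations, product

def dominates(y, x):
    """True if y >= x componentwise."""
    return all(yi >= xi for yi, xi in zip(y, x))

def up_set(x, universe):
    """All vectors in the universe that are componentwise >= x."""
    return {y for y in universe if dominates(y, x)}

def union_size_inclusion_exclusion(sets):
    """|A1 U ... U Ak| computed by inclusion-exclusion:
    add single-set sizes, subtract pairwise intersections, add triples, ..."""
    total = 0
    for k in range(1, len(sets) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(sets, k):
            total += sign * len(set.intersection(*combo))
    return total

universe = list(product((0, 1), repeat=4))
ups = [up_set(x, universe) for x in [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)]]

# Inclusion-exclusion agrees with direct enumeration of the union.
assert union_size_inclusion_exclusion(ups) == len(set().union(*ups))
```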

 
 
 

The structural inference problem is an offshoot of the basic problem of Reliability Theory. We are presented with an n-component reliability system whose structure function is completely unknown to us, except that we know it is monotonic. We can test individual Boolean vectors and find out whether the system functions or fails. The goal is to develop an efficient way to choose vectors for testing, so that we learn the entire structure function quickly. The many practical applications and areas of study in which reliability systems arise, for example, game theory (Ramamurthy, 1990), logical analysis (Kovalerchuk et al., 1996), and data mining (Torvik and Triantaphyllou, 2005), have kept focus on this problem.1
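The key consequence of monotonicity is that a single test classifies an entire cone of vectors, not just the vector tested. A minimal Python sketch (the 3-component structure function below is a hypothetical example, not one from the paper):

```python
from itertools import product

def dominates(y, x):
    """True if y >= x componentwise, i.e. y has every working component of x."""
    return all(yi >= xi for yi, xi in zip(y, x))

# Hypothetical monotone structure function: the system functions iff
# component 0 works and at least one of components 1, 2 works.
def phi(x):
    return int(x[0] and (x[1] or x[2]))

vectors = list(product((0, 1), repeat=3))

# Monotonicity: phi(x) = 1 and y >= x implies phi(y) = 1.
assert all(phi(y) >= phi(x) for x in vectors for y in vectors if dominates(y, x))

# One test classifies a cone for free: if phi(x) = 1, every y >= x also
# functions; if phi(x) = 0, every y <= x also fails.
x = (1, 1, 0)
learned = ([y for y in vectors if dominates(y, x)] if phi(x)
           else [y for y in vectors if dominates(x, y)])
print(len(learned))  # → 2: the test of (1, 1, 0) also classifies (1, 1, 1)
```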

Previous investigation has focused on learning the structure incrementally (for example, Triantaphyllou, 1996; and Sanchez et al., 2002). This approach maintains an estimate of the structure function and updates it whenever a tested vector's result disagrees with the current estimate.
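A minimal sketch of this incremental idea, assuming the estimate is stored as a list of minimal known-functioning vectors (a representation chosen here for illustration, not taken from the cited papers):

```python
def estimate(x, min_functioning):
    """Current guess: x functions iff it dominates some known minimal vector."""
    return any(all(xi >= mi for xi, mi in zip(x, m)) for m in min_functioning)

def update(x, result, min_functioning):
    """Refine the estimate only when a test result disagrees with it."""
    if result and not estimate(x, min_functioning):
        # x is a newly found functioning vector below the current frontier:
        # drop any stored vector that dominates x, then record x as minimal.
        min_functioning[:] = [m for m in min_functioning
                              if not all(mi >= xi for mi, xi in zip(m, x))]
        min_functioning.append(x)

mins = [(1, 1, 1)]             # start: only the all-working vector is known to function
update((1, 0, 1), True, mins)  # surprising positive test forces an update
print(mins)                    # → [(1, 0, 1)]
```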

Torvik and Triantaphyllou (2002)—see also, Torvik and Triantaphyllou (2005)—focus on learning a structure function while minimizing the number of vectors actually tested, noting that the cost of testing a vector is often much higher than the cost of a slowly running algorithm. They completely solve this problem in the case of small n. For larger problems, they determine vectors to be tested by picking a vector x from the set of unknown vectors that minimizes the difference between the number of vectors that would be learned if x functions and the number that would be learned if x fails. We emphasize that since only the number of actually tested vectors matters to them, they evaluate this difference for all vectors in the unknown set on every iteration.
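This selection rule can be sketched in Python. The helper names and the bit-tuple representation below are assumptions made for illustration; the gap being minimized is the imbalance between a vector's up-set (learned if it functions) and down-set (learned if it fails) within the unknown set:

```python
from itertools import product

def dominates(y, x):
    """True if y >= x componentwise."""
    return all(yi >= xi for yi, xi in zip(y, x))

def gap(x, unknown):
    """Imbalance between the two possible outcomes of testing x."""
    up = sum(1 for y in unknown if dominates(y, x))    # learned if x functions
    down = sum(1 for y in unknown if dominates(x, y))  # learned if x fails
    return abs(up - down)

def pick_vector(unknown):
    """Choose the unknown vector minimizing the gap, in the spirit of
    Torvik and Triantaphyllou's selection rule."""
    return min(unknown, key=lambda x: gap(x, unknown))

# All 3-bit vectors except the two extremes, which monotonicity often
# classifies trivially in practice.
unknown = [v for v in product((0, 1), repeat=3)
           if v not in ((0, 0, 0), (1, 1, 1))]
x = pick_vector(unknown)
print(x)
```

Note that, as the introduction emphasizes, this rule evaluates the gap for every vector in the unknown set on each iteration, which is exactly the cost the present paper's random-sampling variant tries to avoid.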

 
 
 
