### Table 27: Proof of Proposition 5.1

2002

"... In PAGE 27: ... Hence, we obtain the table. Table 27: It is known that determining whether the conjunction of two FBDD formulas φ1 and φ2 is consistent is NP-complete (Gergov & Meinel, 1994b). Moreover, FBDD satisfies ¬C. Since φ1 ∧ φ2 is inconsistent iff φ1 ⊨ ¬φ2, we can reduce the consistency test to an entailment test.... ..."

Cited by 59
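The snippet's reduction — the conjunction φ1 ∧ φ2 is consistent iff φ1 does not entail ¬φ2 — can be illustrated with a brute-force truth-table sketch. This is only a model of the logical equivalence, not an FBDD algorithm; the variable set, function names, and formulas are ours, chosen for illustration.

```python
from itertools import product

# Minimal sketch, assuming formulas are Python predicates over a fixed
# variable list. Brute-force model enumeration stands in for the (much
# more structured) FBDD operations the paper discusses.

VARS = ["x", "y"]

def models(f):
    """All assignments (as dicts) that satisfy predicate f."""
    return [dict(zip(VARS, bits))
            for bits in product([False, True], repeat=len(VARS))
            if f(dict(zip(VARS, bits)))]

def entails(f1, f2):
    """f1 |= f2: every model of f1 also satisfies f2."""
    return all(f2(m) for m in models(f1))

def conjunction_consistent(f1, f2):
    """f1 AND f2 has a model iff f1 does NOT entail the negation of f2."""
    return not entails(f1, lambda m: not f2(m))

f1 = lambda m: m["x"] or m["y"]
f2 = lambda m: not m["x"]
print(conjunction_consistent(f1, f2))  # True: x=False, y=True satisfies both
print(conjunction_consistent(lambda m: m["x"], lambda m: not m["x"]))  # False
```

Because FBDDs support polynomial-time negation (the ¬C property the snippet names), the hardness of the consistency test transfers directly to the entailment test via this equivalence.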

### Table 1. The size of circuits, proofs and point images

2006

"... In PAGE 12: ...) To form a circuit test set from T we randomly picked a subset of the set inp(T) (where inp(T) consists of the input parts of the points from T). Table 1 shows experimental results for four circuits of the MCNC benchmark set. All circuits consist of two-input AND and OR gates whose inputs may be negated.... In PAGE 13: ...1.6 (10) 82.4 (80) Table 2. Fault testing for the circuits of Table 1. In every experiment we generated 100 testable faults (i.... In PAGE 13: ... Namely, we randomly extracted a particular number of tests from inp(T). The corresponding sizes of T are given in Table 1. In every experiment we also generated 10 test sets of a particular size, and we give the average value and the worst result out of 10.... ..."

Cited by 1

### Table 1. The fault efficiency shows the ratio of the number of detected faults to the number of total faults. The ATPG time shows the CPU time used by HITEC-PROOFS. The last column in the table shows the fault coverage of the circuit by applying random test vectors. This parameter indicates the ease of testing the circuit.

"... In PAGE 4: ...Table 1... In PAGE 6: ...Table 1. Experimental Results. Column headers: Circuit, ATPG Allocation Method (Module Allocation, Register Allocation), Run Time (sec), Bus Width, Area (#gates), Delay (ns), Fault Coverage, Fault Efficiency, ATPG Time (sec), Fault Coverage (RTP). 2 444 3.... ..."
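The caption defines fault efficiency as the ratio of detected faults to total faults. A one-line sketch of that ratio, with invented illustrative counts (not values from the paper):

```python
# Hedged sketch of the caption's definition:
#   fault efficiency = detected faults / total faults.
# The counts below are made up for illustration.

def fault_efficiency(detected_faults, total_faults):
    """Fraction of the circuit's faults detected by the ATPG test set."""
    return detected_faults / total_faults

detected, total = 447, 470          # hypothetical fault counts
print(f"fault efficiency = {fault_efficiency(detected, total):.1%}")
```

The last column of the paper's table reports the analogous coverage achieved by random test vectors, which the authors use as a proxy for how easy the circuit is to test.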

### Table 1 and Table 2 give the measurements of the performance of our new version of SVC on the examples we constructed. All tests were conducted on a 200MHz Pentium Pro with 128M of main memory. For the examples test m n.svc, the new implementation with proof production enabled is consistently just over 9 times slower than with proof production disabled. The new implementation with proof production disabled is, however, over 3.5 times as fast on average as the old implementation. The new implementation with proofs is thus around 2.5 times slower than the old one on those examples. Note that the times for the new implementation with proofs do not include the time to write the proofs to disk. Also, the dramatic improvement in the last example is explained by the fact that chains.svc was constructed to highlight the previous implementation's unbalanced trees in the union-find algorithm. We do not see as large an improvement in the first examples, which we conjecture are more typical. For the new implementation, we observe that the memory used when producing proofs is around 4.7 times greater on average than that used when not producing proofs.

1999

"... In PAGE 10: ... Table 1: Run-times in seconds. 4.... ..."

Cited by 16
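The remark about chains.svc exposing unbalanced trees can be made concrete with a minimal union-find sketch. This is our own illustration, not SVC's code: naive union (always pointing the first root at the second) turns a chain of unions into a deep linked list, while union by rank keeps the trees shallow.

```python
# Minimal sketch, assuming the pathology the snippet describes: a long
# chain of union operations. find() deliberately omits path compression
# so the tree depth stays visible.

class UnionFind:
    def __init__(self, n, by_rank=True):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.by_rank = by_rank

    def find(self, x):
        """Return (root, depth walked) -- depth exposes tree balance."""
        depth = 0
        while self.parent[x] != x:
            x = self.parent[x]
            depth += 1
        return x, depth

    def union(self, a, b):
        ra, _ = self.find(a)
        rb, _ = self.find(b)
        if ra == rb:
            return
        if self.by_rank and self.rank[ra] > self.rank[rb]:
            ra, rb = rb, ra          # attach the shallower tree below
        self.parent[ra] = rb
        if self.by_rank and self.rank[ra] == self.rank[rb]:
            self.rank[rb] += 1

n = 1024
naive, ranked = UnionFind(n, by_rank=False), UnionFind(n, by_rank=True)
for i in range(n - 1):               # chain of unions: 0-1, 1-2, 2-3, ...
    naive.union(i, i + 1)
    ranked.union(i, i + 1)

print(naive.find(0)[1], ranked.find(0)[1])  # 1023 vs. a constant-depth tree
```

On a chain workload the naive variant walks O(n) parent pointers per find, which is consistent with the dramatic speedup the authors report on chains.svc after balancing.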

### Table 1: Benchmark of recording the multiplier proof. Column headers: FILE, No. of, DISABLED, ENABLED, SIZE

1993

"... In PAGE 48: ... The tests were run on a SUN Sparc 10 Server. The results, including the run time, garbage collection, and proof file sizes, are listed in Table 1. The time is measured in seconds, and the file sizes are in bytes.... ..."

Cited by 10

### Table 1. Sample proofs whose solution requires meta-reasoning about failures.

"... In PAGE 13: ... Therefore, we tested the benefit in three domains: the ε-δ-proofs from the analysis textbook [1], the residue class domain, and inductive proofs. Table 1 gives sample problems from all three domains and the failure-reasoning they require. The numbered columns denote (i) case split introduction, (ii) unblock constraint solving, (iii) unblock by lemma speculation, (iv) analyze variable dependencies.... In PAGE 13: ... Note that x → a− and x → a+ denote the left-hand limit and the right-hand limit, respectively. The relevance of failure reasoning is not only demonstrated by Table 1. Its figures alone are an underestimate, because many similar problems can be formulated.... In PAGE 14: ...experiments). Some representative examples occur in Table 1. Inductive Proofs: So far, we did not apply Multi to inductive proofs. The inductive theorems in Table 1 are taken from [9], which describes failure reasoning by so-called critics in the proof planner CLaM. Since the critics employed in CLaM are a special case bound to a particular method (see related work in Section 7), our general failure-reasoning rules for case-split introduction and lemma speculation are applicable to inductive proofs as well.... ..."

### Table 11. Responses to question: "What type of archiving strategies do you use or plan to use?"

2006

"... In PAGE 55: ... Most of the programs have only done small-scale testing or proof-of-concept exercises, particularly with regard to migration and emulation. Table 11 summarizes the programs' responses about the archiving strategies they use now ... ..."

### lable. Proof:

2000

Cited by 1

### Table 1: Selected results from LPO experiments. Results for the KBO are very similar. For the second experiment, the network typically achieved more than 99% correctness on the test set. 4.3 Experiments in Fact Classification: These experiments used only the simple labels from Sec. 4.1. The data was generated from PCL listings of DISCOUNT proof runs (compare [DS96b]). For each of 29 successful proof attempts, the generated equations were marked as either contributing to the proof or as direct derivatives of contributing facts. All other equations were discarded. We performed two different experiments.

"... In PAGE 7: ... The choice of the labeling scheme had only very minor influence. Table 1 shows selected results and the percentage of examples in the largest class of the test set, i.e.... ..."
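The labeling step the snippet describes — marking generated equations as contributing to the proof — amounts to an ancestor computation over the proof record. A small sketch of that idea; the parent map, fact names, and function are invented for illustration and are not the PCL/DISCOUNT format:

```python
# Hedged sketch: model the proof record as a map from each fact to the
# facts it was derived from. The contributing facts are exactly the
# ancestors of the final goal; everything else would be discarded.

derived_from = {                 # hypothetical tiny proof record
    "goal": ["e3", "e5"],
    "e5": ["e2"],
    "e3": ["e1", "e2"],
    "e4": ["e1"],                # generated during the run but unused
    "e1": [], "e2": [],
}

def contributing(record, goal="goal"):
    """All facts reachable from the goal via derivation parents."""
    seen, stack = set(), [goal]
    while stack:
        fact = stack.pop()
        if fact in seen:
            continue
        seen.add(fact)
        stack.extend(record.get(fact, []))
    return seen - {goal}

print(sorted(contributing(derived_from)))  # ['e1', 'e2', 'e3', 'e5']
```

Here e4 is the kind of equation the authors discard: generated during the proof attempt but not an ancestor of the final proof.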

### Table 3: A c-use, p-use, and all-uses adequate, but not mutation adequate, test set for P2. Column headers: Test case number, (a, b) values

1992

"... In PAGE 14: ... A program for which an all-uses adequate test set is not mutation adequate appears in Figure 4. A test set T with four test cases listed in Table 3 is all-uses, but not mutation, adequate for P2. It fails to distinguish the mutant obtained by mutating if(a + b + 5 > 1) to if(a + b + 5 > 0).... In PAGE 16: ...Proof: Since the test set in Table 3 is also c-use adequate, the proof follows from the arguments used in the proof of Theorem 5. Corollary 6: PU does not subsume MR.... In PAGE 16: ... Corollary 6: PU does not subsume MR. Proof: Since the test set in Table 3 is also p-use adequate, the proof follows from the arguments used in the proof of Theorem 5. 4.... ..."

Cited by 2
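The surviving mutant can be checked directly. We do not have program P2 or the paper's actual test cases, only the mutated predicate quoted in the snippet; the test values below are hypothetical. The two predicates differ only when 0 < a + b + 5 ≤ 1, i.e. (for integer inputs) when a + b = −4, so any test set avoiding that region leaves the mutant alive:

```python
# Hedged sketch: the original predicate and the mutant quoted in the
# snippet. Test inputs are invented; the point is that they agree on
# every input with a + b != -4.

def original(a, b):
    return a + b + 5 > 1

def mutant(a, b):
    return a + b + 5 > 0

tests = [(0, 0), (1, -1), (-3, -3), (2, 5)]   # hypothetical test set
print(all(original(a, b) == mutant(a, b) for a, b in tests))  # True: mutant survives

# A killing test must hit a + b == -4:
print(original(-2, -2), mutant(-2, -2))  # False True: mutant distinguished
```

This is exactly the gap the paper exploits: data-flow criteria such as all-uses constrain which definitions and uses are exercised, but place no constraint on hitting the single boundary value that separates the mutant from the original.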