Fuzzy Matching Algorithms
Q. Discuss how fuzzy matching can be accomplished between
objects. (Dec. 99)
Ans. Fuzzy matching is accomplished by computing a fuzzy
distance or similarity measure between two objects. A similarity
score of 1 corresponds to an identical match, while a score
near 0 corresponds to maximal dissimilarity. For example,
suppose two objects, say o1 and o2, are each described by
the same set of k attributes Ai, i = 1, ..., k. Each attribute
may be regarded as a fuzzy set. A measure of fuzzy similarity
between the two objects, call it d, can then be defined in
terms of how closely the membership values of o1 and o2 agree
on each attribute Ai. For an accurate match, the quantity d
should be computed for several different values of each
linguistic variable.
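Since the defining equation is not reproduced above, the sketch
below assumes one plausible form of the measure (an assumption,
not necessarily the exact formula intended in the text): d is
taken as the mean absolute difference between the membership
values of o1 and o2 over the k attributes, and the similarity
score is 1 - d. A short Python illustration:

    # Hypothetical sketch of a fuzzy similarity measure between two objects.
    # Each object is described by membership values (0..1) for the same k
    # attributes; the exact formula from the text is assumed, not quoted.
    def fuzzy_similarity(mu_o1, mu_o2):
        """Return a similarity score in [0, 1] from two equal-length lists
        of membership values. 1 means identical membership on every attribute."""
        if len(mu_o1) != len(mu_o2):
            raise ValueError("objects must be described by the same k attributes")
        k = len(mu_o1)
        # d: mean absolute difference of membership values (one possible distance)
        d = sum(abs(a - b) for a, b in zip(mu_o1, mu_o2)) / k
        return 1.0 - d

    # Example: two objects described by three fuzzy attributes
    # (e.g. "tall", "heavy", "old"), evaluated at one setting of each
    # linguistic variable.
    o1 = [0.8, 0.4, 0.6]
    o2 = [0.7, 0.5, 0.6]
    print(fuzzy_similarity(o1, o2))   # -> 0.9333...

In line with the answer above, the computation would be repeated
for several different values of each linguistic variable before
declaring a match.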
The RETE Matching Algorithm
Q. State the basic principles of the RETE matching algorithm
because of which the algorithm is efficient. (Dec. 99)
Ans. The RETE matching algorithm was developed as
part of the OPS family of programming languages. It was designed
to eliminate the need to perform thousands of matches per
cycle. This algorithm uses several novel features, including
methods to avoid repetitive matching on successive cycles.
The main time-saving features of RETE are as follows:
- In most expert systems, the contents of working memory
change very little from cycle to cycle. There is a persistence
in the data known as temporal redundancy. This makes exhaustive
matching on every cycle unnecessary. Instead, by saving
match information, it is only necessary to compare working
memory changes on each cycle. In RETE, additions to, removals
from, and changes to working memory are translated directly
into changes to the conflict set. Then, when a rule from
the conflict set has been selected to fire, it is removed
from the set and the remaining entries are saved for the
next cycle. Consequently, repetitive matching of all rules
against working memory is avoided. Furthermore, by indexing
rules with the condition terms appearing in their LHS, only
those rules that could match working memory changes need
to be examined. This greatly reduces the number of comparisons
required on each cycle.
- Many rules in a knowledge base will have the same conditions
occurring in their LHS. This is just another way in which
unnecessary matching can arise. Repeated testing of the
same conditions in those rules could be avoided by grouping
rules that share the same conditions and linking them
to their common terms. It would then be possible to perform
a single set of tests for all the applicable rules. (A
simplified sketch illustrating both of these time-saving
ideas follows this list.)
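The following is a greatly simplified Python sketch of these two
ideas (an illustration only, not the actual RETE network used in
the OPS languages): rules are indexed by the condition terms
appearing in their LHS, the conflict set is retained between
cycles, and only rules whose conditions mention a changed working
memory element are re-examined.

    # Greatly simplified sketch (not the full RETE network): rules are
    # indexed by the condition terms in their LHS, the conflict set
    # persists between cycles, and only rules whose conditions mention
    # a changed working-memory element are re-examined.
    class TinyMatcher:
        def __init__(self, rules):
            # rules: {rule_name: set of condition terms in its LHS}
            self.rules = rules
            self.index = {}            # condition term -> rules sharing that term
            for name, conds in rules.items():
                for term in conds:
                    self.index.setdefault(term, set()).add(name)
            self.wm = set()            # working memory (facts)
            self.conflict_set = set()  # rules whose conditions are all satisfied

        def _recheck(self, terms):
            # Re-examine only the rules indexed under the changed terms.
            for name in set().union(*(self.index.get(t, set()) for t in terms)):
                if self.rules[name] <= self.wm:
                    self.conflict_set.add(name)
                else:
                    self.conflict_set.discard(name)

        def add(self, fact):
            self.wm.add(fact)
            self._recheck({fact})

        def remove(self, fact):
            self.wm.discard(fact)
            self._recheck({fact})

    rules = {"R1": {"a", "b"}, "R2": {"b", "c"}, "R3": {"d"}}
    m = TinyMatcher(rules)
    m.add("a"); m.add("b")      # only R1 and R2 are re-examined, never R3
    print(m.conflict_set)       # -> {'R1'}
    m.add("c")
    print(m.conflict_set)       # -> {'R1', 'R2'}
    m.remove("b")
    print(m.conflict_set)       # -> set()

Because the conflict set is carried over from cycle to cycle and
rules are reached only through the terms they mention, adding or
removing a single fact touches only the few rules indexed under
that fact, which is the source of the time savings described above.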