Tags: algorithm, artificial-intelligence, probability, constraint-satisfaction

Constraint Satisfaction with Uncertainty


I'm trying to solve a problem in which the satisfaction of constraints cannot always be verified. I can find lots of papers on flexible constraint satisfaction, but that's not quite what I want. Here's an example:

P(Jim likes Cheese) = 0.8
P(Joe likes Cheese) = 0.5
P(Sam likes Cheese) = 0.2
P(Jim and Sam are friends) = 0.9
P(Jim and Joe are friends) = 0.5
P(Joe and Sam are friends) = 0.7

Charlie is talking about two cheese-liking friends. Who is he most likely talking about?

I'm currently viewing this as a constraint satisfaction problem:

[likes cheese]   [likes cheese]
 |                           |
 | /-------[alldiff]-------\ |
 |/                         \|
[X]--------[friends]--------[Y]

  ?            ?             ?
  |            |             |
(Sam)        (Joe)         (Jim)
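
Concretely, if I naively assume all six facts are independent, and that Charlie picks the pair in proportion to how likely it is that both members like cheese and are friends, I can brute-force this toy instance by scoring each distinct pair (sketch below) — but I don't know whether that is the right model:

    from itertools import combinations

    # Naive baseline: assume all six facts are independent and score each distinct
    # pair {X, Y} by P(X likes cheese) * P(Y likes cheese) * P(X and Y are friends).
    likes_cheese = {"Jim": 0.8, "Joe": 0.5, "Sam": 0.2}
    friends = {frozenset({"Jim", "Sam"}): 0.9,
               frozenset({"Jim", "Joe"}): 0.5,
               frozenset({"Joe", "Sam"}): 0.7}

    scores = {
        (x, y): likes_cheese[x] * likes_cheese[y] * friends[frozenset({x, y})]
        for x, y in combinations(likes_cheese, 2)
    }

    total = sum(scores.values())
    for pair, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(pair, f"score={score:.3f}", f"normalised={score / total:.3f}")
    # ('Jim', 'Joe') score=0.200 normalised=0.483   <- most likely pair
    # ('Jim', 'Sam') score=0.144 normalised=0.348
    # ('Joe', 'Sam') score=0.070 normalised=0.169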

Are there existing techniques for dealing with this type of CSP?

Is a CSP even the right way to frame the problem?


Solution

  • For a propositional model (where every variable is named explicitly), have a look at probabilistic graphical models, in particular Markov networks. They are very closely related to SAT and CSP: they essentially generalize both, and exact inference in them falls into the same counting complexity class, #P, as model counting for SAT/CSP. A brute-force worked example of this formulation follows below.

    If you are interested in a concise, first-order representation of these models, look into statistical relational learning, also called first-order probabilistic models. Here the model is expressed in a "lifted" form, e.g. as (possibly probabilistic) constraints over variables that range over some object domain:

    on(?x,?y) => largerThan(?y,?x)
    

    Inference with these models that does not rely on generating the ground model is studied under the name lifted probabilistic inference; the grounding sketch at the end illustrates the blow-up such methods avoid.
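
To make the Markov-network suggestion concrete, here is a deliberately brute-force formulation of the cheese example in plain Python. It is only a sketch under the same independence assumptions as in the question, and not the only way to set the model up: the given probabilities become unary factors over binary fact variables, Charlie's utterance becomes a hard compatibility factor on an extra "which pair does he mean?" variable, and all the fact variables are summed out. No PGM library is assumed.

    from itertools import product

    people = ["Jim", "Joe", "Sam"]
    p_cheese = {"Jim": 0.8, "Joe": 0.5, "Sam": 0.2}
    p_friends = {("Jim", "Sam"): 0.9, ("Jim", "Joe"): 0.5, ("Joe", "Sam"): 0.7}
    pairs = list(p_friends)  # values of the "which pair does Charlie mean?" variable

    def bernoulli_factor(p, value):
        """Unary factor for a probabilistic fact: p if the fact holds, 1 - p otherwise."""
        return p if value else 1.0 - p

    posterior = dict.fromkeys(pairs, 0.0)

    # Sum out all six binary fact variables (3 cheese facts + 3 friendship facts).
    for cheese_bits in product([False, True], repeat=len(people)):
        cheese = dict(zip(people, cheese_bits))
        for friend_bits in product([False, True], repeat=len(pairs)):
            friend = dict(zip(pairs, friend_bits))

            weight = 1.0
            for person in people:
                weight *= bernoulli_factor(p_cheese[person], cheese[person])
            for pair in pairs:
                weight *= bernoulli_factor(p_friends[pair], friend[pair])

            # Hard compatibility factor: the pair Charlie means must consist of two
            # cheese-liking friends; incompatible (pair, world) combinations get weight 0.
            for (x, y) in pairs:
                if friend[(x, y)] and cheese[x] and cheese[y]:
                    posterior[(x, y)] += weight

    z = sum(posterior.values())
    for pair, mass in sorted(posterior.items(), key=lambda kv: kv[1], reverse=True):
        print(pair, round(mass / z, 3))
    # ('Jim', 'Joe') 0.483, ('Jim', 'Sam') 0.348, ('Joe', 'Sam') 0.169

This reproduces the direct pairwise scores from the question; the point of the graphical-model view is that algorithms such as variable elimination exploit the graph structure instead of enumerating every possible world, which this sketch does exponentially.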
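
To connect the lifted view back to the same example: a first-order rule compiles ("grounds") into one propositional constraint per instantiation of its logical variables, and that blow-up is exactly what lifted inference tries to sidestep. The snippet below is purely illustrative — the rule and the predicate name candidate are made up for this example, not taken from any particular SRL system:

    from itertools import combinations

    # Illustrative lifted rule (informal syntax, names invented for this example):
    #   candidate(?x, ?y) <= friends(?x, ?y) AND likesCheese(?x) AND likesCheese(?y)
    domain = ["jim", "joe", "sam"]

    # Grounding: instantiate the rule for every way of binding ?x and ?y to domain objects.
    ground_rules = [
        f"candidate({x},{y}) <= friends({x},{y}), likesCheese({x}), likesCheese({y})"
        for x, y in combinations(domain, 2)
    ]
    for rule in ground_rules:
        print(rule)

    # Only 3 ground rules here, but a rule with k logical variables grounds to
    # O(|domain|**k) propositional constraints; lifted probabilistic inference
    # answers queries without materialising this ground model.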