
Optimization #19

Open
benbrastmckie opened this issue Apr 5, 2024 · 15 comments

Comments

@benbrastmckie
Owner

I've added benchmarking to test_complete.py in order to begin comparing results. Here is how the proposition constraints were originally defined:

    prop_const = ForAll(X, proposition(X))
    solv.add(prop_const)
    print(f"Proposition constraint: {prop_const}")

This ran with an execution time of 0.71 seconds. I then replaced this constraint with:

def prop_const(atom):
    sent_to_prop = [
        ForAll(
            [x, y],
            Implies(And(verify(x, atom), verify(y, atom)), verify(fusion(x, y), atom)),
        ),
        ForAll(
            [x, y],
            Implies(And(falsify(x, atom), falsify(y, atom)), falsify(fusion(x, y), atom)),
        ),
        ForAll(
            [x, y],
            Implies(And(verify(x, atom), falsify(y, atom)), Not(compatible(x, y))),
        ),
    ]
    return sent_to_prop

for sent_letter in input_sentence_letters:
    print(f"\nSentence {sent_letter} yields the general constraints:\n")
    for const in prop_const(sent_letter):
        solv.add(const)
        print(f"{const}\n")

This ran with an execution time of 0.18 seconds. This makes me curious what improvements might be gained by removing all occurrences of Exists from the Z3 constraints generated by the functions in semantics.py, replacing them with unique variables instead.

@benbrastmckie
Owner Author

I added documentation instructions to General.md regarding the use of MIT's compute resources. It is very easy to log in. I tried adding the exhaustive constraint, but the process was killed, which suggests that something deeper is going on.

@benbrastmckie
Owner Author

I'm going to use this issue to store various loose ends pertaining to Z3 optimization to come back to, the first of which is from when I was experimenting with different non-trivial proposition constraints.

# NOTE: if null verifiers are permitted, then null state verifies A
# but possible state c does not?
premises = ['A','B']
conclusions = ['(A boxright B)']

Here are some examples where the unsat_core looks satisfiable:

# # NOTE: unsat_core seems satisfiable
# premises = []
# conclusions = ['(A vee neg A)']

# # NOTE: unsat_core seems satisfiable
# premises = []
# conclusions = ['neg (A wedge neg A)']

@mbuit82
Collaborator

mbuit82 commented Jul 7, 2024

Coming here after raising the no-quantifiers issue for optimization. I think removing Exists is a good idea.

@mbuit82
Collaborator

mbuit82 commented Jul 7, 2024

It'll be easy to remove Exists when no universal quantifiers appear in the formula: you just drop it, so Exists(x, formula_with_x) simply becomes formula_with_x. However, when universal quantifiers are present, removing the existential will collapse differences in quantifier scope, making the existential always scope over the universal (and sadly, when we have both, we need the universal scoping over the existential). I need to do more thinking on how to get universal > existential formulas.
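Over a finite domain the scope difference is easy to see in plain Python by unfolding the quantifiers as all/any (a toy illustration, not code from the repo):

```python
# Compare ∀y ∃x. x == y with ∃x ∀y. x == y over a 4-element domain.
dom = range(4)

# Universal over existential: a different witness x may be chosen per y.
forall_exists = all(any(x == y for x in dom) for y in dom)

# Existential over universal: a single x must work for every y.
exists_forall = any(all(x == y for y in dom) for x in dom)

print(forall_exists)  # True
print(exists_forall)  # False
```

Naively dropping the Exists from the first formula leaves x free under the ForAll, which is exactly the collapse of scope described above.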

@mbuit82
Collaborator

mbuit82 commented Jul 7, 2024

A potential workaround is to remove existential quantifiers except where a universal needs to scope over an existential; the universal > existential constraints only arise for the hyperintensional operators other than boxright and for one frame constraint, so getting rid of Exists in all other contexts would likely still speed things up. So it's not really a workaround, but removing them in the other contexts is already a win; it would be cool to figure out a real workaround anyway.

@benbrastmckie
Owner Author

Hey that's a good shout! Graham also suggested eliminating quantifiers (defining them in python) since the models are all finite and small. Once this has been done, I was thinking of adding a flag which overrides N = n and instead steps through values of N until it finds a model.
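The flag described above might look something like the following sketch, where find_model is a hypothetical stand-in for whatever builds the Z3 constraints and checks them at a given bit-width:

```python
def search_over_N(find_model, max_N=10):
    """Step through N = 1, 2, ... until a model is found.

    `find_model(n)` is assumed to return a model object, or None when
    the constraints are unsatisfiable at bit-width n. (Hypothetical
    interface; the actual entry point in the repo differs.)
    """
    for n in range(1, max_N + 1):
        model = find_model(n)
        if model is not None:
            return n, model
    return None  # no model found up to max_N

# Toy stand-in that only succeeds once N reaches 3:
result = search_over_N(lambda n: "model" if n >= 3 else None)
print(result)  # (3, 'model')
```

Since small values of N are cheap to check, starting from N = 1 and stepping up should usually cost little compared to checking a single large N.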

One reason to avoid removing Exists from definitions is that it makes those definitions hard to compose since other quantifiers could end up scoping over them. I think defining Exists and ForAll in python will also be the easiest thing to do, and can easily be toggled on and off to compare run times (I was thinking of adding a flag for this). Honestly shouldn't take much to write, I just haven't gotten to it yet. Feel free to open a new feature branch if you want to give it a go.

I'm close to finishing a new draft of the paper for this project (same content, though the narrative has changed a lot), but will return to some optimization (raising issues on the Z3 GitHub) once I do.

Hope that your summer is going well!

@mbuit82
Collaborator

mbuit82 commented Jul 7, 2024

Sounds good! I'm working on the python definitions now, so far looking alright—harder than I thought but I may just be overthinking it. Looking forward to seeing the new draft, and hope your summer's going well too :)

@mbuit82
Collaborator

mbuit82 commented Jul 8, 2024

Made a new version of ForAll and Exists (FiniteForAll and FiniteExists respectively)—these are both in the semantics.py file of the no_quantifiers branch.

I tested it with test_CL_5 from the test file, which should find a model and according to the comments would previously take 347 seconds to run on the MIT servers:

premises = ['(A \\boxright C)', '(B \\boxright C)']
conclusions = ['((A \\wedge B) \\boxright C)']

Turning on both FiniteForAll and FiniteExists at the same time made it fail to find a model. I think this is because the universal > existential problem isn't resolved when Exists is a function call: since Python evaluates arguments before passing them, an Exists call is always instantiated before being passed to a ForAll. However, using FiniteForAll together with Z3's Exists returned a model with a runtime of less than a second (!). Here's the output:


Premises:
1. (A \boxright C)
2. (B \boxright C)

Conclusion:
3. ((A \wedge B) \boxright C)

State Space:
  #b0000 = □
  #b0001 = a
  #b0010 = b
  #b0100 = c
  #b0101 = a.c (world)
  #b0110 = b.c (world)
  #b1000 = d
  #b1001 = a.d (world)
  #b1100 = c.d (world)

The evaluation world is: b.c
  A, B  (True in b.c)
  C  (False in b.c)

INTERPRETED PREMISES:

1.  |(A \boxright C)| = < ∅, {□} >  (False in b.c)
    |A| = < {a, a.c, b, b.c, c, c.d}, ∅ >  (True in b.c)
    (A)-alternatives to b.c = {a.c, b.c, c.d}
      |C| = < ∅, {b.c, c, c.d, d} >  (False in a.c)
      |C| = < ∅, {b.c, c, c.d, d} >  (False in b.c)
      |C| = < ∅, {b.c, c, c.d, d} >  (False in c.d)

2.  |(B \boxright C)| = < ∅, {□} >  (False in b.c)
    |B| = < {b, b.c, c, c.d, d}, ∅ >  (True in b.c)
    (B)-alternatives to b.c = {b.c, c.d}
      |C| = < ∅, {b.c, c, c.d, d} >  (False in b.c)
      |C| = < ∅, {b.c, c, c.d, d} >  (False in c.d)

INTERPRETED CONCLUSION:

3.  |((A \wedge B) \boxright C)| = < ∅, {□} >  (False in b.c)
    |(A \wedge B)| = < {a.c, a.d, b, b.c, c, c.d}, ∅ >  (True in b.c)
      |A| = < {a, a.c, b, b.c, c, c.d}, ∅ >  (True in b.c)
      |B| = < {b, b.c, c, c.d, d}, ∅ >  (True in b.c)
    ((A \wedge B))-alternatives to b.c = {a.c, a.d, b.c, c.d}
      |C| = < ∅, {b.c, c, c.d, d} >  (False in c.d)
      |C| = < ∅, {b.c, c, c.d, d} >  (False in a.d)
      |C| = < ∅, {b.c, c, c.d, d} >  (False in b.c)
      |C| = < ∅, {b.c, c, c.d, d} >  (False in a.c)

0.2844

Hopefully the model is right and FiniteForAll is doing the right thing. The Python function actually turned out to be pretty simple (I was indeed overthinking things): Z3 has a Lambda quantifier that basically made everything work. There was no need to use exec() or eval() in this case; they have some quirks about which frames they can access for variables, which makes them difficult to work with here since everything involving frames in semantics.py is all over the place.
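For intuition, FiniteForAll and FiniteExists amount to quantifier unfolding over the 2**N bitvector values. A plain-Python analogue (a sketch only; the actual versions in semantics.py build Z3 expressions rather than Python booleans):

```python
def finite_forall(n_bits, pred):
    """Unfold ForAll x. P(x) into a conjunction over all 2**n_bits values."""
    return all(pred(i) for i in range(2 ** n_bits))

def finite_exists(n_bits, pred):
    """Unfold Exists x. P(x) into a disjunction over all 2**n_bits values."""
    return any(pred(i) for i in range(2 ** n_bits))

print(finite_forall(3, lambda x: x < 8))   # True: every 3-bit value is < 8
print(finite_exists(3, lambda x: x == 5))  # True: 5 is a 3-bit value
```

The Z3 versions do the same unfolding but return And(...) / Or(...) of instantiated formulas, so the solver never sees a quantifier.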

@mbuit82
Collaborator

mbuit82 commented Jul 8, 2024

I ran every test in test_sat.py on the new quantifiers. To do this easily, you can run the no_quant_playground.py file, which prints all the successful tests and any models found. I'm really bad with imports and forgot how we had pytest set up, so I copied the functions from that file into no_quant_playground.py and made some small changes to print on success. Only two tests don't pass, test_R2 and test_R10, for which a model is erroneously found; both are commented out in no_quant_playground.py. The rest apparently all pass in less than half a second each (with N=3 or N=4), most in less than 0.1 seconds.

The tests that should not find models and that work with N=4 also all pass with N=5, each in about 5 seconds. At N=6, they take about 1-2 minutes each.

@benbrastmckie
Owner Author

That's great! I'll take a look at the code later today. Looking at the example you included above, note that the premises are both false. I ran into this issue while adding other operators to the language. I'm not totally sure what causes it, but my guess is that the constraints generated from the input sentences aren't quite right: the semantics generate something, just not the right constraints for the premises to be true and the conclusion false (I labeled these false premise models in the notes). I have also run into cases where the premises are true and the conclusion is also true (calling these true conclusion models). By improving the semantics for the new operators, I was mostly able to eliminate these issues. This might be another case where looking at all the generated constraints would help to see what is going on. I'll try to take a look at this soon.

As for the imports, I have yet to find someone who knows how to set this up correctly, and have found lots of conflicting information online, no combination of which has worked so far. But I hope to straighten that out soon and will document it accordingly.

@benbrastmckie
Owner Author

I tried running tests on the new branch but couldn't get it to work. Setting use_z3_quantifiers = True made it work, where otherwise I was getting the error:

AttributeError: 'QuantifierRef' object has no attribute 'domain_n'

I was wondering if Lambda is missing a value for the domain. See the attached image:

Screenshot from 2024-07-08 17-47-18

@mbuit82
Collaborator

mbuit82 commented Jul 8, 2024

Huh that's interesting, can you show what the entire error output is?

@benbrastmckie
Owner Author

benbrastmckie commented Jul 9, 2024

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/__main__.py", line 381, in <module>
    main()
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/__main__.py", line 365, in main
    model_setup = make_model_for(module.N, module.premises, module.conclusions)
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/model_structure.py", line 679, in make_model_for
    model_setup = ModelSetup(N, backslash_premises, backslash_conclusions)
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/model_structure.py", line 155, in __init__
    frame_cons, prop_cons, premise_cons, conclusion_cons = find_constraints_func(
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/semantics.py", line 961, in find_all_constraints
    frame_constraints = find_frame_constraints(main_world)
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/semantics.py", line 874, in find_frame_constraints
    ForAll([x, y], Implies(And(possible(y), is_part_of(x, y)), possible(x))),
  File "/home/benjamin/Documents/Philosophy/Projects/ModelChecker/Code/src/model_checker/semantics.py", line 44, in FiniteForAll
    cons_list.append(lambda_formula[BitVecVal(i,temp_N),BitVecVal(j,temp_N)])
  File "/nix/store/ayipcfk7d4xs7kfhkqvd4m700li4zlf3-python3-3.12.3-env/lib/python3.12/site-packages/z3/z3.py", line 2035, in __getitem__
    return _array_select(self, arg)
  File "/nix/store/ayipcfk7d4xs7kfhkqvd4m700li4zlf3-python3-3.12.3-env/lib/python3.12/site-packages/z3/z3.py", line 4585, in _array_select
    args = [ar.domain_n(i).cast(arg[i]) for i in range(len(arg))]
AttributeError: 'QuantifierRef' object has no attribute 'domain_n'

@benbrastmckie
Owner Author

Ran into the same error running no_quant_playground.py.

@benbrastmckie
Owner Author

benbrastmckie commented Jul 15, 2024

I've been trying to fix the problem but can't seem to sort it out. Seems to be an issue with FiniteForAll. I tried redefining it recursively as follows:

def ForAllFinite(bvs, formula):
    constraints = []
    if not isinstance(bvs, list):
        bvs = [bvs]
    bv_test = bvs[0]
    temp_N = bv_test.size()
    num_bvs = 2 ** temp_N
    if len(bvs) == 1:
        lambda_formula = Lambda(bvs[0], formula)
        for i in range(num_bvs):
            constraints.append(lambda_formula[BitVecVal(i,temp_N)])
    else:
        bv = bvs[0]
        remaining_bvs = bvs[1:]
        lambda_formula = Lambda(bv, ForAllFinite(remaining_bvs, formula))
        for i in range(num_bvs):
            constraints.append(lambda_formula[BitVecVal(i, temp_N)])
    return And(constraints)

It seems this has the same issue, where the problem seems to be with lambda_formula[BitVecVal(i,temp_N)]. From what I have seen, the syntax seems to be right. In any case, I'm curious how you got it to work before.
