Predicting Satisfiability of Benchmark Instances

This repository contains the code and text of the suspended paper

"Predicting Satisfiablity of Benchmark Instances"

This research project was discontinued ☹️, but the existing code should work. This document describes the setup and the steps to reproduce the experiments.

Setup

Before running the scripts to reproduce the experiments, you should

  1. Set up an environment (optional, but recommended).
  2. Install all necessary dependencies.

Our code is implemented in Python (version 3.8; other versions, including lower ones, might work as well).
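You can check which version your python command points to with

python --version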

Option 1: conda Environment

If you use conda, you can directly install the correct Python version into a new conda environment and activate the environment as follows:

conda create --name <conda-env-name> python=3.8
conda activate <conda-env-name>

Choose <conda-env-name> as you like.

To leave the environment, run

conda deactivate
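For example, with the (freely chosen, here purely illustrative) environment name predicting-sat, the full round trip looks like this:

# create an environment with Python 3.8, activate it, leave it again
conda create --name predicting-sat python=3.8
conda activate predicting-sat
conda deactivate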

Option 2: virtualenv Environment

We used virtualenv (version 20.4.7; other versions might work as well) to create an environment for our experiments. First, you need to install the correct Python version yourself. Let's assume the Python executable is located at <path/to/python>. Next, you install virtualenv with

python -m pip install virtualenv==20.4.7

To set up an environment with virtualenv, run

python -m virtualenv -p <path/to/python> <path/to/env/destination>

Choose <path/to/env/destination> as you like.

Activate the environment in Linux with

source <path/to/env/destination>/bin/activate

Activate the environment in Windows (note the backslashes) with

<path\to\env\destination>\Scripts\activate

To leave the environment, run

deactivate
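As a minimal sketch, assuming a Linux system where Python 3.8 is installed at /usr/bin/python3.8 and using the hypothetical environment directory sat-env (adjust both paths to your setup):

python -m pip install virtualenv==20.4.7  # install virtualenv itself
python -m virtualenv -p /usr/bin/python3.8 sat-env  # create the environment
source sat-env/bin/activate  # activate it (Linux)
deactivate  # leave it again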

Dependency Management

After activating the environment, you can use python and pip as usual. To install all necessary dependencies for this repo, switch to the directory code/ and run

python -m pip install -r requirements.txt

If you make changes to the environment and you want to persist them, run

python -m pip freeze > requirements.txt

Reproducing the Experiments

After setting up and activating an environment, you are ready to run the code. From the directory code/, run

python -m prepare_datasets

to download the input data for the experiments from the GBD website and pre-process it. Next, start the experimental pipeline with

python -m run_experiments

Depending on your hardware, this might take some time. To print statistics and create the plots for the paper, run

python -m run_evaluation

All scripts have a few command-line options, which you can list by running a script with the --help flag, e.g.,

python -m prepare_datasets --help
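Putting it all together, a full reproduction run from an activated environment consists of the three steps above (runtimes depend on your hardware):

cd code  # all scripts live in the code/ directory
python -m prepare_datasets  # download and pre-process input data from GBD
python -m run_experiments  # run the experimental pipeline
python -m run_evaluation  # print statistics and create the plots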
