
Does ALIGNNTL work with ALIGNN 2024.2.4? #2

Open
antonf-ekb opened this issue May 11, 2024 · 0 comments

Comments

@antonf-ekb

Dear developers!
I have an installed and properly working ALIGNN v2024.2.4. I cloned the ALIGNNTL repository and tried to reproduce the FineTuning example. First of all, I found that the train_folder.py script (which the instructions suggest running) does not contain the all_models = {...} dictionary required for transfer learning. I then found that the train.py script does contain this code, so I tried to run

```
python alignn/train.py --root_dir "../examples" --config "../examples/config_example.json" --id_prop_file "id_prop.csv" --output_dir=model
```

but got the following error:

```
    from .named_optimizer import _NamedOptimizer
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/optim/named_optimizer.py", line 11, in <module>
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/__init__.py", line 1, in <module>
    from ._flat_param import FlatParameter as FlatParameter
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 30, in <module>
    from torch.distributed.fsdp._common_utils import (
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/_common_utils.py", line 35, in <module>
    from torch.distributed.fsdp._fsdp_extensions import FSDPExtensions
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/fsdp/_fsdp_extensions.py", line 8, in <module>
    from torch.distributed._tensor import DeviceMesh, DTensor
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_tensor/__init__.py", line 6, in <module>
    import torch.distributed._tensor.ops
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_tensor/ops/__init__.py", line 2, in <module>
    from .embedding_ops import *  # noqa: F403
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_tensor/ops/embedding_ops.py", line 8, in <module>
    import torch.distributed._functional_collectives as funcol
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_functional_collectives.py", line 12, in <module>
    from . import _functional_collectives_impl as fun_col_impl
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/distributed/_functional_collectives_impl.py", line 36, in <module>
    from torch._dynamo import assume_constant_result
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/__init__.py", line 2, in <module>
    from . import convert_frame, eval_frame, resume_execution
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 40, in <module>
    from . import config, exc, trace_rules
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 11, in <module>
    from .utils import counters
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 5, in <module>
    import cProfile
  File "/home/anton/miniconda3/envs/alignn/lib/python3.10/cProfile.py", line 23, in <module>
    run.__doc__ = _pyprofile.run.__doc__
AttributeError: module 'profile' has no attribute 'run'
```
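As an aside (this is my guess, not something from the ALIGNNTL docs): the final `AttributeError: module 'profile' has no attribute 'run'` is typically raised when a file named `profile.py` somewhere on `sys.path` (for example in the current working directory) shadows Python's standard-library `profile` module, which `cProfile` imports at startup. A minimal check, independent of ALIGNN itself:

```python
import profile  # the stdlib module that cProfile wraps

# If this prints a path inside your project rather than inside the Python
# installation, a local profile.py is shadowing the standard library, and
# cProfile (and therefore torch._dynamo) will fail to import.
print(profile.__file__)

# The stdlib module exposes run(); a shadowing file usually does not.
print(hasattr(profile, "run"))
```

If the path points into your project, renaming that local `profile.py` (or running from a different directory) may resolve this particular error, independent of the ALIGNN version question below.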

From setup.py I see that the expected ALIGNN version is 2021.11.16. Is that version mandatory, and is the version mismatch the cause of these problems? If so, do you plan to update the ALIGNNTL code to support the current version of ALIGNN?

Best regards,
Anton.
