wfc extrapolation / MD #1003

Open. Wants to merge 37 commits into base: develop
Commits (37)
2075434 nlcglib/md-extrapolation changes (simonpintarelli, May 15, 2024)
c3fdfb9 format (simonpintarelli, May 15, 2024)
8f31474 sirius_api (simonpintarelli, May 15, 2024)
869154f format (simonpintarelli, May 15, 2024)
7c05aaa format (simonpintarelli, May 15, 2024)
f6dfbf3 format (simonpintarelli, May 15, 2024)
efcf696 format (simonpintarelli, May 16, 2024)
717cb3d missing ifdef guard (simonpintarelli, May 16, 2024)
784771c fix python api (simonpintarelli, May 16, 2024)
153c5c2 add nlcglib spack recipe (simonpintarelli, Jun 6, 2024)
adb024c ci: use nlcglib@develop (simonpintarelli, Jun 7, 2024)
c38b008 fix recipe (simonpintarelli, Jun 7, 2024)
2d91751 sirius.f90 (simonpintarelli, Jun 7, 2024)
b1044d7 python fixes (simonpintarelli, Jun 9, 2024)
529de08 format (simonpintarelli, Jun 9, 2024)
89b4d18 wfc extrapolation (simonpintarelli, May 15, 2024)
3a94494 use density projector (simonpintarelli, May 15, 2024)
21af9cb fix magnetic case (simonpintarelli, Jun 10, 2024)
20dc1bb cleanup (simonpintarelli, Jun 11, 2024)
01aa19e more python fixes (simonpintarelli, Jun 11, 2024)
746ae72 wfc extrapolation: always re-orthogonalize (simonpintarelli, Jun 12, 2024)
970a460 add (simple) copy (duplication) routine for wave functions (simonpintarelli, Jun 12, 2024)
9c76a40 md extrapolation: fix SKIP_WFC_EXTRAPOLATION, refactor (simonpintarelli, Jun 12, 2024)
33161ba SKIP_WFC_EXTRAPOALTION, check value (simonpintarelli, Jun 13, 2024)
550f0f7 call_sirius -> dump core if `SIRIUS_COREDUMP=1` env is defined (simonpintarelli, Jun 13, 2024)
08f3dd3 format (simonpintarelli, Jun 13, 2024)
eb50b72 format (simonpintarelli, Jun 26, 2024)
45b835a indent pragma (simonpintarelli, Jun 27, 2024)
9a76d02 increase container build timeout (simonpintarelli, Jul 2, 2024)
2601de5 cosmetics (simonpintarelli, Jul 5, 2024)
4ca2961 profile symmetrize_occupation_matrix (simonpintarelli, Jul 5, 2024)
4c55550 profile Hubbard find_orbital_index (simonpintarelli, Jul 7, 2024)
bebc2b7 compute rotations matrices before for loop (simonpintarelli, Jul 8, 2024)
5f02678 format (simonpintarelli, Jul 8, 2024)
4b7f237 add openmp pragma (simonpintarelli, Jul 8, 2024)
f7ce24a add missing line (simonpintarelli, Jul 8, 2024)
46f6c19 remove timer in find_orbital_index (simonpintarelli, Jul 11, 2024)
2 changes: 1 addition & 1 deletion ci/baseimage.cuda.Dockerfile
@@ -67,7 +67,7 @@ RUN spack install openblas %gcc +fortran

RUN spack install magma %gcc +cuda +fortran ^openblas

-RUN spack install nlcglib@master %gcc +cuda
+RUN spack install nlcglib@develop %gcc +cuda

# for the MPI hook
RUN echo $(spack find --format='{prefix.lib}' mpich) > /etc/ld.so.conf.d/mpich.conf
10 changes: 10 additions & 0 deletions ci/cscs-daint.yml
@@ -31,6 +31,7 @@ build cuda image mkl:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -41,6 +42,7 @@ build cuda image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
CSCS_REBUILD_POLICY: always
DOCKERFILE: ci/build.Dockerfile
@@ -52,6 +54,7 @@ build rocm image:
extends: .container-builder
needs: ["build base rocm image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -62,6 +65,7 @@ build elpa cuda image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -72,6 +76,7 @@ build sequential eigen-solver cuda image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -82,6 +87,7 @@ build fp32 cuda image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -92,6 +98,7 @@ build vdwxc cuda image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -103,6 +110,7 @@ build nlcg image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -113,6 +121,7 @@ build clang image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
@@ -124,6 +133,7 @@ build openmpi image:
extends: .container-builder
needs: ["build base cuda image"]
stage: build
+  timeout: 4h
variables:
DOCKERFILE: ci/build.Dockerfile
PERSIST_IMAGE_NAME: discard
10 changes: 6 additions & 4 deletions python_module/py_sirius.cpp
@@ -6,7 +6,8 @@
* SPDX-License-Identifier: BSD-3-Clause
*/

-#include "k_point/k_point_set.hpp"
+#include "context/simulation_context.hpp"
+#include "k_point/k_point.hpp"
#include "python_module_includes.hpp"
#include <mpi.h>

@@ -216,6 +217,7 @@ PYBIND11_MODULE(py_sirius, m)
.def("update", &Simulation_context::update)
.def("use_symmetry", py::overload_cast<>(&Simulation_context::use_symmetry, py::const_))
.def("processing_unit_memory_t", &Simulation_context::processing_unit_memory_t)
+.def("spla_context", &Simulation_context::spla_context_ptr)
.def(
"comm", [](Simulation_context& obj) { return make_pycomm(obj.comm()); },
py::return_value_policy::reference_internal)
@@ -403,8 +405,7 @@ PYBIND11_MODULE(py_sirius, m)
.def("fv_states", &K_point<double>::fv_states, py::return_value_policy::reference_internal)
.def("ctx", &K_point<double>::ctx, py::return_value_policy::reference_internal)
.def("weight", &K_point<double>::weight)
-.def("beta_projectors", py::overload_cast<>(&K_point<double>::beta_projectors, py::const_),
-py::return_value_policy::reference_internal)
+.def("beta_projectors", py::overload_cast<>(&K_point<double>::beta_projectors_ptr, py::const_))
.def("spinor_wave_functions", py::overload_cast<>(&K_point<double>::spinor_wave_functions),
py::return_value_policy::reference_internal);

@@ -415,7 +416,8 @@ PYBIND11_MODULE(py_sirius, m)
.def(py::init<Simulation_context&, r3::vector<int>, r3::vector<int>, bool>(), py::keep_alive<1, 2>())
.def(py::init<Simulation_context&, std::vector<int>, std::vector<int>, bool>(), py::keep_alive<1, 2>())
.def("initialize", &K_point_set::initialize, py::arg("counts") = std::vector<int>{})
-.def("ctx", &K_point_set::ctx, py::return_value_policy::reference_internal)
+.def("ctx", static_cast<Simulation_context const& (K_point_set::*)() const>(&K_point_set::ctx),
+py::return_value_policy::reference_internal)
.def("unit_cell", &K_point_set::unit_cell, py::return_value_policy::reference_internal)
.def("_num_kpoints", &K_point_set::num_kpoints)
.def("size", [](K_point_set& ks) -> int { return ks.spl_num_kpoints().local_size(); })
33 changes: 19 additions & 14 deletions python_module/py_sirius_operators.cpp
@@ -7,6 +7,7 @@
*/

#include "python_module_includes.hpp"
+#include <memory>
#include <string>
#include <iomanip>
#include <complex>
@@ -25,16 +26,16 @@ using PT = double;
void
init_operators(py::module& m)
{
-py::class_<Non_local_operator<PT>>(m, "Non_local_operator")
+py::class_<Non_local_operator<PT>, std::shared_ptr<Non_local_operator<PT>>>(m, "Non_local_operator")
.def("get_matrix", &Non_local_operator<PT>::get_matrix<std::complex<double>>);
-py::class_<D_operator<PT>, Non_local_operator<PT>>(m, "D_operator");
-py::class_<Q_operator<PT>, Non_local_operator<PT>>(m, "Q_operator");
+py::class_<D_operator<PT>, Non_local_operator<PT>, std::shared_ptr<D_operator<PT>>>(m, "D_operator");
+py::class_<Q_operator<PT>, Non_local_operator<PT>, std::shared_ptr<Q_operator<PT>>>(m, "Q_operator");

py::class_<Hamiltonian0<PT>>(m, "Hamiltonian0")
.def(py::init<Potential&, bool>(), py::keep_alive<1, 2>(), "Potential"_a,
py::arg("precompute_lapw") = false)
-.def("Q", &Hamiltonian0<PT>::Q, py::return_value_policy::reference_internal)
-.def("D", &Hamiltonian0<PT>::D, py::return_value_policy::reference_internal)
+.def("Q", &Hamiltonian0<PT>::Q_ptr)
+.def("D", &Hamiltonian0<PT>::D_ptr)
.def("Hk", &Hamiltonian0<PT>::operator(), py::keep_alive<0, 1>())
.def("potential", &Hamiltonian0<PT>::potential, py::return_value_policy::reference_internal);

@@ -46,8 +47,9 @@ init_operators(py::module& m)
.def_property_readonly("U", &Hamiltonian_k<PT>::U, py::return_value_policy::reference_internal);

py::class_<S_k<complex_double>>(m, "S_k")
-.def(py::init<Simulation_context&, const Q_operator<PT>&, const Beta_projectors_base<PT>&, int>(),
-py::keep_alive<1, 2>(), py::keep_alive<1, 3>(), py::keep_alive<1, 4>())
+.def(py::init<memory_t, std::shared_ptr<spla::Context>, std::shared_ptr<Q_operator<PT> const>,
+std::shared_ptr<Beta_projectors_base<PT> const>, int>(),
+py::keep_alive<1, 2>())
.def_property_readonly("size", &S_k<complex_double>::size)
.def("apply", [](py::object& obj, py::array_t<complex_double>& X) {
using class_t = S_k<complex_double>;
@@ -71,9 +73,11 @@ init_operators(py::module& m)
return sk.apply(array, memory_t::host);
});

+py::class_<spla::Context, std::shared_ptr<spla::Context>>(m, "splaContext");

py::class_<InverseS_k<complex_double>>(m, "InverseS_k")
-.def(py::init<Simulation_context&, const Q_operator<PT>&, const Beta_projectors_base<PT>&, int>(),
-py::keep_alive<1, 2>(), py::keep_alive<1, 3>(), py::keep_alive<1, 4>())
+.def(py::init<memory_t, std::shared_ptr<spla::Context>, std::shared_ptr<Q_operator<PT> const>,
+std::shared_ptr<Beta_projectors_base<PT> const>, int>())
.def_property_readonly("size", &InverseS_k<complex_double>::size)
.def("apply", [](py::object& obj, py::array_t<complex_double>& X) {
using class_t = InverseS_k<complex_double>;
@@ -98,9 +102,9 @@
});

py::class_<Ultrasoft_preconditioner<complex_double>>(m, "Precond_us")
-.def(py::init<Simulation_context&, const Q_operator<PT>&, int, const Beta_projectors_base<PT>&,
-const fft::Gvec&>(),
-py::keep_alive<1, 2>(), py::keep_alive<1, 3>(), py::keep_alive<1, 5>(), py::keep_alive<1, 6>())
+.def(py::init<Simulation_context&, std::shared_ptr<Q_operator<PT> const>, int,
+std::shared_ptr<Beta_projectors_base<PT> const>, const fft::Gvec&>(),
+py::keep_alive<1, 2>())
.def_property_readonly("size", &Ultrasoft_preconditioner<complex_double>::size)
.def("apply", [](py::object& obj, py::array_t<complex_double>& X) {
using class_t = Ultrasoft_preconditioner<complex_double>;
@@ -163,12 +167,13 @@ init_operators(py::module& m)
.def("generate_j", py::overload_cast<beta_projectors_coeffs_t<PT>&, int, int>(
&Beta_projector_generator<PT>::generate, py::const_));

-py::class_<Beta_projectors_base<PT>>(m, "Beta_projectors_base")
+py::class_<Beta_projectors_base<PT>, std::shared_ptr<Beta_projectors_base<PT>>>(m, "Beta_projectors_base")
.def_property_readonly("num_chunks", &Beta_projectors_base<PT>::num_chunks)
.def("make_generator", py::overload_cast<device_t>(&Beta_projectors_base<PT>::make_generator, py::const_),
py::keep_alive<1, 0>());

-py::class_<Beta_projectors<PT>, Beta_projectors_base<PT>>(m, "Beta_projectors");
+py::class_<Beta_projectors<PT>, Beta_projectors_base<PT>, std::shared_ptr<Beta_projectors<PT>>>(m,
+"Beta_projectors");
m.def("apply_U_operator", &apply_U_operator<double>, py::arg("ctx"), py::arg("spin_range"), py::arg("band_range"),
py::arg("hub_wf"), py::arg("phi"), py::arg("u_op"), py::arg("hphi"));

4 changes: 3 additions & 1 deletion python_module/sirius/coefficient_array/coefficient_array.py
@@ -231,7 +231,9 @@ def sum(self, **kwargs):
""" """
loc_sum = np.array(sum([np.sum(v, **kwargs) for _, v in self.items()]))
reduced = MPI.COMM_WORLD.allreduce(loc_sum, op=MPI.SUM)
-        return np.ndarray.item(reduced)
+        if isinstance(reduced, np.ndarray):
+            return np.ndarray.item(reduced)
+        return reduced

def log(self, **kwargs):
out = type(self)()
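The hunk above guards against `MPI.COMM_WORLD.allreduce` handing back either a 0-d `ndarray` or an already-unwrapped scalar, where the old unconditional `np.ndarray.item(reduced)` raised for plain scalars. A standalone sketch of the same defensive conversion, with no MPI dependency (`to_python_scalar` is an illustrative name, not part of the module):

```python
import numpy as np

def to_python_scalar(reduced):
    """Mirror the guard in CoefficientArray.sum: the reduced value may be
    a 0-d ndarray (unwrap it) or already a plain scalar (pass it through)."""
    if isinstance(reduced, np.ndarray):
        return reduced.item()  # equivalent to np.ndarray.item(reduced)
    return reduced
```

Calling `.item()` on a non-array scalar would fail for plain Python floats, which is exactly the case the `isinstance` check sidesteps.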
14 changes: 3 additions & 11 deletions python_module/sirius/edft/neugebauer.py
@@ -6,7 +6,7 @@
import numpy as np
from scipy.constants import physical_constants

-from ..coefficient_array import diag, inner, l2norm
+from ..coefficient_array import diag, inner, l2norm, identity_like
from ..operators import US_Precond, Sinv_operator, S_operator
from ..logger import Logger
from ..py_sirius import sprint_magnetization
@@ -229,7 +229,6 @@ def run(
tol=1e-10,
kappa=0.3,
tau=0.5,
-        cgtype="FR",
K=IdentityPreconditioner(),
callback=lambda *args, **kwargs: None,
error_callback=lambda *args, **kwargs: None,
@@ -242,15 +241,6 @@
is_converged -- bool
"""

-        if cgtype == "PR":
-            cg_update = polak_ribiere
-        elif cgtype == "FR":
-            cg_update = fletcher_reeves
-        elif cgtype == "SD":
-            cg_update = steepest_descent
-        else:
-            raise ValueError("wrong type")

if self.is_ultrasoft:
K = self.K
# TODO cleanup signature of run and don't pass the preconditioner anymore
@@ -265,6 +255,7 @@
X = X @ U
# set occupation numbers from band energies
fn, mu = self.M.smearing.fn(ek)

# compute initial free energy
FE, Hx = M(X, fn=fn, mu=mu, ek=ek)
logger("initial F: %.13f" % FE)
@@ -278,6 +269,7 @@
SX = self.S @ X
else:
SX = X

XhKHX = SX.H @ (K @ HX)
XhKX = SX.H @ (K @ SX)
LL = _solve(XhKX, XhKHX)
8 changes: 3 additions & 5 deletions python_module/sirius/edft/pseudo_hamiltonian.py
@@ -83,17 +83,15 @@ def __call__(self, t):
X_new = self.X + t * self.G_X
eta_new = self.eta + t * self.G_eta
ek, Ul = eta_new.eigh()
-        X = loewdin(X_new, self.overlap) @ Ul
+        X_new = loewdin(X_new, self.overlap) @ Ul
fn, mu = self.M.smearing.fn(ek)

-        # check fn
-
-        FE, Hx = self.M(X, fn=fn, mu=mu, ek=ek)
+        FE, Hx = self.M(X_new, fn=fn, mu=mu, ek=ek)
return SimpleNamespace(
**{
"free_energy": FE,
"Hx": Hx,
-                "X": X,
+                "X": X_new,
"fn": fn,
"mu": mu,
"ek": ek,
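The `loewdin(X_new, self.overlap)` call above re-orthonormalizes the trial wave functions after the line-search step, matching the "wfc extrapolation: always re-orthogonalize" commit. A minimal NumPy sketch of symmetric (Löwdin) orthonormalization, assuming an identity overlap when `S` is omitted; this is an illustrative stand-in, not the SIRIUS helper itself:

```python
import numpy as np

def loewdin_orthonormalize(X, S=None):
    """Symmetric orthonormalization: X -> X (X^H S X)^(-1/2).

    X: (n, m) array of m column vectors; S: optional (n, n) overlap matrix.
    """
    SX = X if S is None else S @ X
    O = X.conj().T @ SX                       # m x m overlap of the columns
    w, V = np.linalg.eigh(O)                  # O is Hermitian positive definite
    O_inv_sqrt = (V * w**-0.5) @ V.conj().T   # O^(-1/2) from the eigendecomposition
    return X @ O_inv_sqrt
```

Unlike Gram-Schmidt, this treats all columns symmetrically, which keeps the extrapolated wave functions as close as possible (in least-squares sense) to the unorthogonalized input.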
30 changes: 24 additions & 6 deletions python_module/sirius/operators.py
@@ -1,4 +1,5 @@
-from .py_sirius import InverseS_k, S_k, Precond_us, Hamiltonian0
+from .py_sirius import InverseS_k, S_k, Precond_us, Hamiltonian0, MemoryEnum

# from .coefficient_array import threaded
from scipy.sparse.linalg import LinearOperator
import numpy as np
@@ -24,7 +25,11 @@ def __init__(self, ctx, potential, kpointset):
for ik, kp in enumerate(kpointset):
for ispn in range(ctx.num_spins()):
self.ops[ik, ispn] = S_k(
-                    ctx, hamiltonian0.Q(), kp.beta_projectors(), ispn
+                    MemoryEnum.host,
+                    ctx.spla_context(),
+                    hamiltonian0.Q(),
+                    kp.beta_projectors(),
+                    ispn,
)

def apply(self, cn):
@@ -37,8 +42,11 @@ def apply(self, cn):
def __getitem__(self, key):
def _matvec(X):
return np.array(self.ops[key].apply(np.asfortranarray(X)))

n = self.ops[key].size
return LinearOperator(dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec)
return LinearOperator(
dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec
)

def __matmul__(self, cn):
return self.apply(cn)
@@ -53,7 +61,11 @@ def __init__(self, ctx, potential, kpointset):
for ik, kp in enumerate(kpointset):
for ispn in range(ctx.num_spins()):
self.ops[ik, ispn] = InverseS_k(
-                    ctx, hamiltonian0.Q(), kp.beta_projectors(), ispn
+                    MemoryEnum.host,
+                    ctx.spla_context(),
+                    hamiltonian0.Q(),
+                    kp.beta_projectors(),
+                    ispn,
)

def apply(self, cn):
@@ -65,8 +77,11 @@ def apply(self, cn):
def __getitem__(self, key):
def _matvec(X):
return np.array(self.ops[key].apply(np.asfortranarray(X)))

n = self.ops[key].size
-        return LinearOperator(dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec)
+        return LinearOperator(
+            dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec
+        )

def __matmul__(self, cn):
return self.apply(cn)
@@ -99,8 +114,11 @@ def apply(self, cn):
def __getitem__(self, key):
def _matvec(X):
return np.array(self.ops[key].apply(np.asfortranarray(X)))

n = self.ops[key].size
-        return LinearOperator(dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec)
+        return LinearOperator(
+            dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec
+        )

def __matmul__(self, cn):
return self.apply(cn)
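Each `__getitem__` above wraps one per-k-point, per-spin operator as a SciPy `LinearOperator`, so iterative solvers can apply it without ever materializing a matrix. A self-contained sketch of the same pattern, with a hypothetical dense Hermitian matrix standing in for the C++ `S_k.apply`:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

# Stand-in for the C++ operator: a small dense Hermitian matrix.
A = np.diag([1.0, 2.0, 3.0]).astype(np.complex128)

def _matvec(X):
    # Mirror the wrapper above: force Fortran order, return a plain ndarray.
    return np.array(A @ np.asfortranarray(X))

n = A.shape[0]
op = LinearOperator(dtype=np.complex128, shape=(n, n), matvec=_matvec, rmatvec=_matvec)

# The resulting object supports @ like a matrix, e.g. inside CG/GMRES.
y = op @ np.ones(n, dtype=np.complex128)
```

Passing `rmatvec=_matvec` is valid here only because the operator is Hermitian, which also holds for the overlap operators being wrapped in this file.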
2 changes: 1 addition & 1 deletion python_module/test/test_mvp2_gradient.py
@@ -26,7 +26,7 @@ def make_new_ctx(pw_cutoff, gk_cutoff):
"pw_cutoff": pw_cutoff,
"gk_cutoff": gk_cutoff,
},
-        "control": {"verbosity": 0},
+        "control": {"verbosity": 0, "processing_unit": "cpu"},
}
# create simulation context
ctx = Simulation_context(json.dumps(inp))
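The test change above pins `processing_unit` to `cpu`, so the gradient test no longer depends on GPU availability. A sketch of the resulting input construction (the cutoff values are placeholders, not the test's actual numbers):

```python
import json

def make_input(pw_cutoff, gk_cutoff):
    # Same shape as the test's config dict, with the pinned processing unit.
    return {
        "parameters": {"pw_cutoff": pw_cutoff, "gk_cutoff": gk_cutoff},
        "control": {"verbosity": 0, "processing_unit": "cpu"},
    }

# The context constructor in the test takes the config as a JSON string.
inp_json = json.dumps(make_input(20, 10))
```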