issue with trying to build cholmod_cuda #764

Open
NessieCanCode opened this issue Feb 16, 2024 · 3 comments

NessieCanCode commented Feb 16, 2024

I'm trying to update the spack package for SuiteSparse, as the current one is highly outdated, but I'm not able to get the cholmod_cuda libs to build, which are among the libs expected by Julia 1.10.x.

When I tried adding cholmod_cuda as a project, CMake errored out, saying it is not a known project.

Here are the CMake args currently being used:

'-DCUDA=NO' '-DCUDA_PATH=/opt/spack/opt/spack/linux-rhel8-icelake/gcc-12.2.0/cuda-12.3.0-dxo3hgtqe5knk6uctpru4f35hi6qg5lp' '-DSUITESPARSE_USE_CUDA=ON' '-DCHOLMOD_USE_CUDA=ON'
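
Based on that error, my (untested) understanding is that cholmod itself has to appear in SUITESPARSE_ENABLE_PROJECTS and the CUDA kernels are pulled in via the *_USE_CUDA switches, rather than adding cholmod_cuda as a project; the -DCUDA=NO entry also looks like a leftover from the old Makefile-based recipe. Roughly:

# Sketch only (untested): 'cholmod' is the project name, CUDA is a toggle.
cuda_prefix = '/path/to/cuda'  # placeholder for the spack-provided CUDA prefix
cmake_args = [
    '-DSUITESPARSE_ENABLE_PROJECTS=suitesparse_config;amd;camd;colamd;ccolamd;cholmod',
    '-DSUITESPARSE_USE_CUDA=ON',
    '-DCHOLMOD_USE_CUDA=ON',
    '-DCUDA_PATH={0}'.format(cuda_prefix),  # kept from the original invocation
]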

Here is what is in the Julia Makefile:
JL_PRIVATE_LIBS-$(USE_SYSTEM_LIBSUITESPARSE) += libamd libbtf libcamd libccolamd libcholmod libcholmod_cuda libcolamd libklu libldl librbio libspqr libspqr_cuda libsuitesparseconfig libumfpack

The publicly available spack package can be found here: https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/suite-sparse/package.py

And here is my version of the spack package that I'm currently working on:

# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack import *


class SuiteSparse(CMakePackage):
    """SuiteSparse is a suite of sparse matrix algorithms"""

    homepage = "https://people.engr.tamu.edu/davis/suitesparse.html"
    url = "https://github.com/DrTimothyAldenDavis/SuiteSparse/archive/v7.6.0.tar.gz"
    git = "https://github.com/DrTimothyAldenDavis/SuiteSparse.git"

    maintainers = ["nessiecancode"]

    version('7.6.0', sha256='765bafd9645826a7502e69d666481840604c0073')

    # Variant definitions based on SuiteSparse CMake options
    variant('camd', default=True, description='Enable CAMD')
    variant('ccolamd', default=True, description='Enable CCOLAMD')
    variant('colamd', default=True, description='Enable COLAMD')
    variant('cholmod', default=True, description='Enable CHOLMOD')
    variant('amd', default=True, description='Enable AMD')
    variant('btf', default=True, description='Enable BTF')
    variant('cxsparse', default=True, description='Enable CXSparse')
    variant('ldl', default=True, description='Enable LDL')
    variant('klu', default=True, description='Enable KLU')
    variant('umfpack', default=True, description='Enable UMFPACK')
    variant('paru', default=True, description='Enable ParU')
    variant('rbio', default=True, description='Enable RBio')
    variant('spqr', default=True, description='Enable SPQR')
    variant('spex', default=True, description='Enable SPEX')
    variant('graphblas', default=True, description='Enable GraphBLAS')
    variant('lagraph', default=True, description='Enable LAGraph')
    variant('cuda', default=False, description='Enable CUDA acceleration for CHOLMOD and SPQR')
    variant('openmp', default=True, description='Enable OpenMP usage')
    variant('pic', default=True, description='required to link with shared libraries')

    # Additional CMake options
    variant('build_shared_libs', default=True, description='Build shared libraries')
    variant('build_static_libs', default=True, description='Build static libraries')
    variant('cuda_arch', default='52;75;80', description='CUDA architectures for SuiteSparse')
    variant('suitesparse_enable_projects', default='all', description='Semicolon-separated list of projects to be built or `all`')
    variant('cmake_build_type', default='Release', description='Build type: `Release` or `Debug`')
    variant('suitesparse_use_strict', default=False, description='Treat all *_USE_* settings strictly')

    depends_on('blas')
    depends_on('lapack')
    depends_on('cuda', when='+cuda')
    depends_on('gmp')
    depends_on('mpfr')
    depends_on('[email protected]:', type='build')
    depends_on('openmpi')

    def flag_handler(self, name, flags):
        if name in ("cflags", "cxxflags"):
            if self.spec.satisfies("^openblas ~shared threads=openmp"):
                flags.append(self.compiler.openmp_flag)
        return (flags, None, None)

    def symbol_suffix_blas(self, spec, args):
        """When using BLAS with a special symbol suffix we use defines to
        replace blas symbols, e.g. dgemm_ becomes dgemm_64_ when
        symbol_suffix=64_."""
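        # NOTE: this helper looks carried over from the old Makefile-based
        # recipe -- it still appends Make-style "CFLAGS+=..." strings and is
        # not called from cmake_args() below yet.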

        # Currently only OpenBLAS does this.
        if not spec.satisfies("^openblas"):
            return

        suffix = spec["openblas"].variants["symbol_suffix"].value
        if suffix == "none":
            return

        symbols = (
            "dtrsv_", "dgemv_", "dtrsm_", "dgemm_", "dsyrk_",
            "dger_", "dscal_", "dpotrf_", "ztrsv_", "zgemv_",
            "ztrsm_", "zgemm_", "zherk_", "zgeru_", "zscal_",
            "zpotrf_", "dnrm2_", "dlarf_", "dlarfg_", "dlarft_",
            "dlarfb_", "dznrm2_", "zlarf_", "zlarfg_", "zlarft_",
            "zlarfb_"
        )

        for symbol in symbols:
            args.append("CFLAGS+=-D{0}={1}{2}".format(symbol, symbol, suffix))

    def cmake_args(self):
        cc_pic_flag = self.compiler.cc_pic_flag if "+pic" in self.spec else ""
        f77_pic_flag = self.compiler.f77_pic_flag if "+pic" in self.spec else ""
        # Add projects corresponding to enabled variants.
        # NOTE: suitesparse_config and mongoose have no matching variant
        # defined above, so they are currently never enabled.
        project_variants = (
            'suitesparse_config', 'mongoose', 'amd', 'btf', 'camd',
            'ccolamd', 'colamd', 'cholmod', 'cxsparse', 'ldl', 'klu',
            'umfpack', 'paru', 'rbio', 'spqr', 'spex', 'graphblas',
            'lagraph',
        )
        enabled_projects = [
            name for name in project_variants
            if '+{0}'.format(name) in self.spec
        ]

        # Construct the CMake arguments
        args = [
            '-DSUITESPARSE_ENABLE_PROJECTS={0}'.format(';'.join(enabled_projects)),
            '-DCMAKE_BUILD_TYPE={0}'.format(self.spec.variants['cmake_build_type'].value),
            '-DCMAKE_INSTALL_PREFIX={0}'.format(self.prefix),
            '-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE'  # Adjust if necessary
            # Add other arguments as needed
        ]

        # Add CUDA-related arguments if CUDA is enabled
        if '+cuda' in self.spec:
            args.extend([
                '-DCUDA=NO',
                '-DCUDA_PATH={0}'.format(self.spec["cuda"].prefix),
                '-DSUITESPARSE_USE_CUDA=ON',
                '-DCHOLMOD_USE_CUDA=ON',
                '-DSPQR_USE_CUDA=ON'
            ])
        else:
            args.extend([
                '-DSUITESPARSE_USE_CUDA=OFF',
                '-DCHOLMOD_USE_CUDA=OFF',
                '-DSPQR_USE_CUDA=OFF'
            ])

        # Add strict mode if enabled
        if '+suitesparse_use_strict' in self.spec:
            args.append('-DSUITESPARSE_USE_STRICT=ON')

        # Add OpenMP-related arguments if enabled
        if '+openmp' in self.spec:
            args.extend([
                '-DSUITESPARSE_USE_OPENMP=ON',
                '-DCHOLMOD_USE_OPENMP=ON',
                '-DGRAPHBLAS_USE_OPENMP=ON',
                '-DLAGRAPH_USE_OPENMP=ON',
                '-DPARU_USE_OPENMP=ON'
            ])
        else:
            args.extend([
                '-DSUITESPARSE_USE_OPENMP=OFF',
                '-DCHOLMOD_USE_OPENMP=OFF',
                '-DGRAPHBLAS_USE_OPENMP=OFF',
                '-DLAGRAPH_USE_OPENMP=OFF',
                '-DPARU_USE_OPENMP=OFF'
            ])
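        # TODO (assumption to verify): the pic, build_shared_libs and
        # build_static_libs variants are not forwarded yet, and the
        # cc_pic_flag / f77_pic_flag values computed above are unused;
        # BUILD_SHARED_LIBS / BUILD_STATIC_LIBS and CMAKE_C_FLAGS would
        # presumably be the knobs to pass them through.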
        return args

    def setup_build_environment(self, env):
        # CUDACXX is read at configure time, so set it in the build
        # environment (setup_environment has been replaced by
        # setup_build_environment/setup_run_environment in current Spack).
        if '+cuda' in self.spec:
            env.set('CUDACXX', self.spec['cuda'].prefix.bin.nvcc)
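
One more gap I'm aware of: the cuda_arch variant is never passed down to CMake. A minimal sketch of how it could be forwarded and then merged into cmake_args() (untested, and assuming SuiteSparse respects the standard CMAKE_CUDA_ARCHITECTURES cache variable):

# Sketch only (untested): forward the cuda_arch variant when +cuda is set.
def cuda_arch_args(spec):
    args = []
    if '+cuda' in spec:
        arch = spec.variants['cuda_arch'].value  # e.g. "52;75;80"
        # CMAKE_CUDA_ARCHITECTURES is the standard CMake setting; whether
        # SuiteSparse's own CMake overrides it is an assumption to verify.
        args.append('-DCMAKE_CUDA_ARCHITECTURES={0}'.format(arch))
    return args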

@DrTimothyAldenDavis (Owner) commented:

I had a separate libcholmod_cuda.so library at one point, only because I wasn't able to figure out how to get cmake to build a single libcholmod.so library with the CUDA functions included. We figured that out, and so now the CUDA functions are all inside the single libcholmod.so. There's no longer a need to link against cholmod_cuda.

@DrTimothyAldenDavis (Owner) commented:

The same is true for the spqr_cuda library.

Thanks for taking a look at updating spack -- that's great to hear.

@DrTimothyAldenDavis (Owner) commented:

Is it possible to revise the libraries that julia is asking for, so it doesn't try to link against cholmod_cuda and spqr_cuda? Perhaps depending on the SuiteSparse version?

I could look through my SuiteSparse versions to see which ones have cholmod_cuda and spqr_cuda, if that would help.
