ModuleNotFoundError: No module named 'MultiScaleDeformableAttention' #176

Open
Githia opened this issue Jun 10, 2024 · 0 comments

Githia commented Jun 10, 2024

  1. Hi, my machine runs Windows 11, and the Python environment and mmseg were set up according to the README. When compiling 'MultiScaleDeformableAttention' by running bash make.sh, I get:

running build_ext
E:\mmseg\ops\torch\utils\cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
E:\mmseg\ops\torch\utils\cpp_extension.py:305: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'MultiScaleDeformableAttention' extension
E:\VS\1\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DWITH_CUDA -IE:\mmseg\ops\src -IE:\mmseg\ops\torch\include -IE:\mmseg\ops\torch\include\torch\csrc\api\include -IE:\mmseg\ops\torch\include\TH -IE:\mmseg\ops\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include" -IE:\Miniconda3lj\envs\openmmlab\include -IE:\Miniconda3lj\envs\openmmlab\Include -IE:\VS\1\VC\Tools\MSVC\14.29.30133\ATLMFC\include -IE:\VS\1\VC\Tools\MSVC\14.29.30133\include "-IE:\Windows Kits\10\include\10.0.19041.0\ucrt" "-IE:\Windows Kits\10\include\10.0.19041.0\shared" "-IE:\Windows Kits\10\include\10.0.19041.0\um" "-IE:\Windows Kits\10\include\10.0.19041.0\winrt" "-IE:\Windows Kits\10\include\10.0.19041.0\cppwinrt" -IE:\VS\1\VC\Tools\MSVC\14.29.30133\include "-IE:\Windows Kits\10\Include\10.0.19041.0\ucrt" "-IE:\Windows Kits\10\Include\10.0.19041.0\um" "-IE:\Windows Kits\10\Include\10.0.19041.0\cppwinrt" "-IE:\Windows Kits\10\Include\10.0.19041.0\shared" "-IE:\Windows Kits\10\Include\10.0.19041.0\winrt" /EHsc /TpE:\mmseg\ops\src\cpu\ms_deform_attn_cpu.cpp /Fobuild\temp.win-amd64-cpython-38\Release\mmseg\ops\src\cpu\ms_deform_attn_cpu.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0
ms_deform_attn_cpu.cpp
E:\mmseg\ops\torch\include\c10/util/Optional.h(183): warning C4624: 'c10::constexpr_storage_t<T>': destructor was implicitly defined as deleted
with
[
T=at::Tensor
]
E:\mmseg\ops\torch\include\c10/util/Optional.h(367): note: see reference to class template instantiation 'c10::constexpr_storage_t<T>' being compiled
with
[
T=at::Tensor
]
E:\mmseg\ops\torch\include\c10/util/Optional.h(427): note: see reference to class template instantiation 'c10::trivially_copyable_optimization_optional_base<T>' being compiled
with
[
T=at::Tensor
]
E:\mmseg\ops\torch\include\c10/util/Optional.h(427): note: see reference to alias template instantiation 'c10::OptionalBase<at::Tensor>' being compiled
E:\mmseg\ops\torch\include\ATen/core/TensorBody.h(734): note: see reference to class template instantiation 'c10::optional<at::Tensor>' being compiled
E:\mmseg\ops\torch\include\c10/util/Optional.h(395): warning C4624: 'c10::trivially_copyable_optimization_optional_base<T>': destructor was implicitly defined as deleted
with
[
T=at::Tensor
]
[the same C4624 warning/note group repeats for T=at::Generator, c10::impl::InlineDeviceGuard<c10::impl::VirtualGuardImpl>, std::string, std::vector<c10::ShapeSymbol>, std::vector<c10::optional<c10::Stride>>, std::vector<c10::optional<int64_t>>, std::vector<int64_t>, c10::QualifiedName, c10::impl::InlineStreamGuard<c10::impl::VirtualGuardImpl>, c10::impl::VirtualGuardImpl, std::vector<std::reference_wrapper<...>>, c10::OperatorName, and at::DimVector]
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin\nvcc" -c E:\mmseg\ops\src\cuda\ms_deform_attn_cuda.cu -o build\temp.win-amd64-cpython-38\Release\mmseg\ops\src\cuda\ms_de
form_attn_cuda.obj -IE:\mmseg\ops\src -IE:\mmseg\ops\torch\include -IE:\mmseg\ops\torch\include\torch\csrc\api\include -IE:\mmseg\ops\torch\include\TH -IE:\mmseg\ops\torch\include\THC
"-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include" -IE:\Miniconda3lj\envs\openmmlab\include -IE:\Miniconda3lj\envs\openmmlab\Include -IE:\VS\1\VC\Tools\MSVC\14.29.3
0133\ATLMFC\include -IE:\VS\1\VC\Tools\MSVC\14.29.30133\include "-IE:\Windows Kits\10\include\10.0.19041.0\ucrt" "-IE:\Windows Kits\10\include\10.0.19041.0\shared" "-IE:\Windows Kits
10\include\10.0.19041.0\um" "-IE:\Windows Kits\10\include\10.0.19041.0\winrt" "-IE:\Windows Kits\10\include\10.0.19041.0\cppwinrt" -IE:\VS\1\VC\Tools\MSVC\14.29.30133\include "-IE:\Wi
ndows Kits\10\Include\10.0.19041.0\ucrt" "-IE:\Windows Kits\10\Include\10.0.19041.0\um" "-IE:\Windows Kits\10\Include\10.0.19041.0\cppwinrt" "-IE:\Windows Kits\10\Include\10.0.19041.0
\shared" "-IE:\Windows Kits\10\Include\10.0.19041.0\winrt" -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assum
ed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompil
er /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_
CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORC
H_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=MultiScaleDeformableAttention -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --use-local-env
ms_deform_attn_cuda.cu
E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(127): error: identifier "grad_output_n" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: type name is not allowed

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: expected an expression

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "per_sample_loc_size" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "per_attn_weight_size" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "grad_sampling_loc" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: type name is not allowed

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: expected an expression

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "grad_attn_weight" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: type name is not allowed

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: expected an expression

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: no instance of function template "ms_deformable_col2im_cuda" matches the argument list
argument types are: (c10::cuda::CUDAStream, , double *, int64_t *, int64_t *, , , const int, const int, const int, const int, const int, const int, const int, double *, , )

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: type name is not allowed

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: expected an expression

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "per_sample_loc_size" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "per_attn_weight_size" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "grad_sampling_loc" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: type name is not allowed

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: expected an expression

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: identifier "grad_attn_weight" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: type name is not allowed

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: expected an expression

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(128): error: no instance of function template "ms_deformable_col2im_cuda" matches the argument list
argument types are: (c10::cuda::CUDAStream, , float *, int64_t *, int64_t *, , , const int, const int, const int, const int, const int, const int, const int, float *, , )

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(145): error: identifier "grad_sampling_loc" is undefined

E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(145): error: identifier "grad_attn_weight" is undefined

E:/mmseg/ops/src\cuda/ms_deform_im2col_cuda.cuh(258): warning: variable "q_col" was declared but never referenced
detected during:
instantiation of "void ms_deformable_im2col_gpu_kernel(int, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *) [with scalar_t=double]"
(943): here
instantiation of "void ms_deformable_im2col_cuda(cudaStream_t, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *) [with scalar_t=double]"
E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(64): here

E:/mmseg/ops/src\cuda/ms_deform_im2col_cuda.cuh(258): warning: variable "q_col" was declared but never referenced
detected during:
instantiation of "void ms_deformable_im2col_gpu_kernel(int, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *) [with scalar_t=float]"
(943): here
instantiation of "void ms_deformable_im2col_cuda(cudaStream_t, const scalar_t *, const int64_t *, const int64_t *, const scalar_t *, const scalar_t *, int, int, int, int, int, int, int, scalar_t *) [with scalar_t=float]"
E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu(64): here

25 errors detected in the compilation of "E:/mmseg/ops/src/cuda/ms_deform_attn_cuda.cu".
error: command 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin\nvcc.exe' failed with exit code 1

Is this because 'MultiScaleDeformableAttention' cannot be compiled on Windows?
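(For context: the C4624 warnings from c10/util/Optional.h are harmless MSVC noise, and the first UserWarning only means ninja is not installed, so the slow distutils backend is used; pip install ninja silences it. The build actually fails on the 25 nvcc errors in ms_deform_attn_cuda.cu.) A quick way to confirm whether the extension ever got built, before touching any training code, is a bare import (a minimal sketch, independent of this repo's layout):

    # Minimal import check: if the nvcc step above failed, setup.py never
    # produced the extension, so this raises the ModuleNotFoundError from
    # the issue title.
    try:
        import MultiScaleDeformableAttention as MSDA  # built by bash make.sh
        print('compiled extension found:', MSDA.__file__)
    except ModuleNotFoundError as exc:
        print('compiled extension missing:', exc)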

  2. I later searched online and tried replacing import MultiScaleDeformableAttention as MSDA in ops\functions\ms_deform_attn_func.py with mmcv's built-in from mmcv.ops.multi_scale_deform_attn import ext_module as MSDA. When running the training script, I get the following error (a possible fix is sketched after the traceback below):

Traceback (most recent call last):
File "E:/mmsegmentation-0.20.2/train.py", line 217, in
main()
File "E:/mmsegmentation-0.20.2/train.py", line 206, in main
train_segmentor(
File "E:\mmsegmentation-0.20.2\mmseg\apis\train.py", line 167, in train_segmentor
runner.run(data_loaders, cfg.workflow)
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\mmcv\runner\iter_based_runner.py", line 134, in run
iter_runner(iter_loaders[i], **kwargs)
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\mmcv\runner\iter_based_runner.py", line 67, in train
self.call_hook('after_train_iter')
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\mmcv\runner\base_runner.py", line 309, in call_hook
getattr(hook, fn_name)(self)
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\mmcv\runner\hooks\optimizer.py", line 56, in after_train_iter
runner.outputs['loss'].backward()
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\torch_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\torch\autograd_init_.py", line 147, in backward
Variable._execution_engine.run_backward(
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\torch\autograd\function.py", line 87, in apply
return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\torch\autograd\function.py", line 204, in wrapper
outputs = fn(ctx, *args)
File "E:\Miniconda3lj\envs\openmmlab\lib\site-packages\torch\cuda\amp\autocast_mode.py", line 236, in decorate_bwd
return bwd(*args, **kwargs)
File "E:\mmseg\ops\functions\ms_deform_attn_func.py", line 42, in backward
MSDA.ms_deform_attn_backward(
TypeError: ms_deform_attn_backward(): incompatible function arguments. The following argument types are supported:
1. (value: at::Tensor, value_spatial_shapes: at::Tensor, value_level_start_index: at::Tensor, sampling_locations: at::Tensor, attention_weights: at::Tensor, grad_output: at::Tensor, grad_value: at::Tensor, grad_sampling_loc: at::Tensor, grad_attn_weight: at::Tensor, im2col_step: int) -> None

Invoked with: tensor([[[[ 1.0454e+00,  2.3037e+00,  7.8711e-02,  ...]]]], device='cuda:0', grad_fn=<ViewBackward>), tensor([[32, 32]], device='cuda:0'), tensor([0], device='cuda:0'), tensor([[[[[[ 0.0391,  0.0078], ...]]]]]], device='cuda:0', grad_fn=<AddBackward0>), tensor([[[[[0.2500, 0.2500, 0.2500, 0.2500]], ...]]]], device='cuda:0', grad_fn=<ViewBackward>), tensor([[[ 1.4704e-07,  2.0442e-07, ...]]], device='cuda:0'), 64
[full tensor printouts elided: six positional tensors plus the integer 64]

Process finished with exit code 1

This problem has been bothering me for a long time. Can this only be compiled on Linux, or is there some other way to make it work on Windows? Many thanks!!!
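A note on the second error: the TypeError is a signature mismatch, not a Windows problem. The standalone extension's ms_deform_attn_backward takes the five forward inputs plus grad_output and im2col_step and returns the three gradients, which is how ops\functions\ms_deform_attn_func.py calls it (six tensors and the integer 64, as the "Invoked with" dump shows). mmcv's ext_module.ms_deform_attn_backward, per the supported signature in the error message, additionally expects pre-allocated grad_value, grad_sampling_loc, and grad_attn_weight tensors, fills them in place, and returns None. So swapping only the import is not enough; the backward call has to be adapted too. Below is a hedged sketch of a Function written against mmcv's convention, modeled on mmcv.ops.multi_scale_deform_attn.MultiScaleDeformableAttnFunction (the class and variable names here are assumptions; adjust to the local code):

    import torch
    from torch.autograd.function import Function, once_differentiable
    from mmcv.ops.multi_scale_deform_attn import ext_module as MSDA

    class MSDeformAttnFunction(Function):
        """Sketch of a replacement for the Function in ms_deform_attn_func.py."""

        @staticmethod
        def forward(ctx, value, value_spatial_shapes, value_level_start_index,
                    sampling_locations, attention_weights, im2col_step):
            ctx.im2col_step = im2col_step
            output = MSDA.ms_deform_attn_forward(
                value, value_spatial_shapes, value_level_start_index,
                sampling_locations, attention_weights,
                im2col_step=ctx.im2col_step)
            ctx.save_for_backward(value, value_spatial_shapes,
                                  value_level_start_index, sampling_locations,
                                  attention_weights)
            return output

        @staticmethod
        @once_differentiable
        def backward(ctx, grad_output):
            (value, value_spatial_shapes, value_level_start_index,
             sampling_locations, attention_weights) = ctx.saved_tensors
            # mmcv's op writes the gradients into pre-allocated tensors
            grad_value = torch.zeros_like(value)
            grad_sampling_loc = torch.zeros_like(sampling_locations)
            grad_attn_weight = torch.zeros_like(attention_weights)
            MSDA.ms_deform_attn_backward(
                value, value_spatial_shapes, value_level_start_index,
                sampling_locations, attention_weights,
                grad_output.contiguous(),
                grad_value, grad_sampling_loc, grad_attn_weight,
                im2col_step=ctx.im2col_step)
            # one gradient slot per forward() argument
            return (grad_value, None, None,
                    grad_sampling_loc, grad_attn_weight, None)

Simpler still, mmcv already ships the matching forward/backward pair, so the patched Function can be dropped entirely in favor of:

    from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttnFunction
    output = MultiScaleDeformableAttnFunction.apply(
        value, value_spatial_shapes, value_level_start_index,
        sampling_locations, attention_weights, im2col_step)

Either route sidesteps compiling the standalone extension on Windows.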
