The way the GPU id is obtained needs to be improved: when paddleocr is launched on multiple cards via paddle.distributed.launch, every process runs the model on the first card only.
@LDOUBLEV, could you find some time to fix this? It seems inference has only ever been able to use a single GPU.
Also, looking at the code, on Windows it directly returns gpu_id=0:
gpu_id=0
Could you fix this?
PaddleOCR/tools/infer/utility.py, line 250 (commit 4336771)
@LDOUBLEV Weiwei, the PaddleOCR project is currently maintained mostly by volunteers in their spare time. GreatV is not a Baidu employee either, so there is no way to reach the Paddle Inference team.
https://www.paddlepaddle.org.cn/inference/master/api_reference/cxx_api_doc/Config/GPUConfig.html#gpu — according to this, gpu_id must be passed in explicitly.
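A minimal sketch of what a fix could look like: derive the device id for each worker from the launcher's per-process environment instead of hard-coding 0. The environment variable names FLAGS_selected_gpus and PADDLE_TRAINER_ID are assumptions about what paddle.distributed.launch exports per worker, not confirmed behavior, and the function name is hypothetical.

```python
import os

def get_infer_gpu_id():
    """Pick the GPU id for this process instead of hard-coding 0.

    Hypothetical sketch: the env var names below are assumptions about
    what paddle.distributed.launch exports for each worker process.
    """
    # The launcher may export the card(s) assigned to this worker directly.
    selected = os.environ.get("FLAGS_selected_gpus", "")
    if selected:
        # May be a comma-separated list; use the first entry.
        return int(selected.split(",")[0])
    # Fall back to the worker's local rank, then to card 0.
    return int(os.environ.get("PADDLE_TRAINER_ID", "0"))
```

The returned id would then be passed to the inference Config, which (per the doc linked above) requires an explicit gpu_id.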
Problem Description
The way the GPU id is obtained needs to be improved: when paddleocr is launched on multiple cards via paddle.distributed.launch, every process runs the model on the first card only.
[image: screenshot attached to the original issue]
Runtime Environment
Reproduction Code
Complete Error Message
Possible Solutions
Appendix