
.onnx model file is int64_t. Can it support int16_t? #529

Open
High-calcium-tiger opened this issue Jun 27, 2024 · 0 comments

Comments

@High-calcium-tiger

I noticed that CPU usage is very high when running Piper TTS on my Linux device.
I want to change int64_t to int16_t to reduce CPU usage.
code:
std::vector<Ort::Value> inputTensors;
std::vector<int64_t> phonemeIdsShape{1, (int64_t)phonemeIds.size()};
inputTensors.push_back(Ort::Value::CreateTensor<int64_t>(
    memoryInfo, phonemeIds.data(), phonemeIds.size(), phonemeIdsShape.data(),
    phonemeIdsShape.size()));

I changed CreateTensor<int64_t> to CreateTensor<int16_t>, but an error occurred: "Expected int64_t."
I guess the .onnx model file uses int64_t.
I want to know: can it support int16_t?
