
Why am I getting this error? #625

Open
MG219 opened this issue Mar 18, 2024 · 0 comments

Comments

MG219 commented Mar 18, 2024

python train.py --output_directory=outdir --log_directory=logdir -c tacotron2_statedict.pt --warm_start

FP16 Run: False
Dynamic Loss Scaling: True
Distributed Run: False
cuDNN Enabled: True
cuDNN Benchmark: False
Warm starting model from checkpoint 'tacotron2_statedict.pt'
Epoch: 0
2024-03-18 22:08:46.261985: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
WARNING:tensorflow:From D:\ANACONDA\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

Traceback (most recent call last):
  File "D:\sr\tacotron2\train.py", line 289, in <module>
    train(args.output_directory, args.log_directory, args.checkpoint_path,
  File "D:\sr\tacotron2\train.py", line 208, in train
    for i, batch in enumerate(train_loader):
  File "D:\ANACONDA\lib\site-packages\torch\utils\data\dataloader.py", line 633, in __next__
    data = self._next_data()
  File "D:\ANACONDA\lib\site-packages\torch\utils\data\dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "D:\ANACONDA\lib\site-packages\torch\utils\data\dataloader.py", line 1371, in _process_data
    data.reraise()
  File "D:\ANACONDA\lib\site-packages\torch\_utils.py", line 644, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "D:\ANACONDA\lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "D:\ANACONDA\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\ANACONDA\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\sr\tacotron2\data_utils.py", line 63, in __getitem__
    return self.get_mel_text_pair(self.audiopaths_and_text[index])
  File "D:\sr\tacotron2\data_utils.py", line 34, in get_mel_text_pair
    mel = self.get_mel(audiopath)
  File "D:\sr\tacotron2\data_utils.py", line 51, in get_mel
    melspec = torch.from_numpy(np.load(filename))
  File "D:\ANACONDA\lib\site-packages\numpy\lib\npyio.py", line 438, in load
    raise ValueError("Cannot load file containing pickled data "
ValueError: Cannot load file containing pickled data when allow_pickle=False
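For reference, a minimal sketch of how this ValueError can arise, assuming the usual tacotron2 layout in which data_utils.py's get_mel() calls np.load(filename) only when mels are loaded from disk (the hparam name and file names below are assumptions for illustration, not taken from the log): np.load defaults to allow_pickle=False, and any file that is not a plain .npy array, for example a .wav path still listed in the training filelist, makes it fall back to its pickle loader and raise exactly this error.

```python
import numpy as np
import torch

# Case 1: a mel saved as a plain float .npy array loads fine with the
# default allow_pickle=False (what get_mel() expects on disk).
mel = np.random.randn(80, 123).astype(np.float32)   # hypothetical 80-bin mel
np.save("mel_ok.npy", mel)
print(torch.from_numpy(np.load("mel_ok.npy")).shape)  # torch.Size([80, 123])

# Case 2: a file that is not in .npy format at all (e.g. a .wav path left in
# the filelist) makes np.load fall back to its pickle loader, which raises
# the same error as in the traceback above.
with open("not_a_mel.wav", "wb") as f:
    f.write(b"RIFF....WAVEfmt ")  # stand-in bytes for a real audio file
try:
    np.load("not_a_mel.wav")      # allow_pickle defaults to False
except ValueError as e:
    print(e)  # Cannot load file containing pickled data when allow_pickle=False
```

Under that assumption, the usual remedies would be either to keep load_mel_from_disk disabled so the .wav files are converted to mels on the fly, or to precompute the mel spectrograms and point the filelist at the resulting .npy files.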
