Answer from Sean on Stack Overflow
It literally means that the tuple class in Python doesn't have a method called to. Since you're trying to move your labels onto your device, just do labels = torch.tensor(labels).to(device).
If you don't want to do this, you can change the way the DataLoader works by making it return your labels as a PyTorch tensor rather than a tuple.
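As a sketch of that second option: a custom collate_fn can encode the string labels into a tensor as each batch is assembled. The CLASS_TO_IDX mapping and the (tensor, string) sample layout below are assumptions for illustration, not part of the original question:

```python
import torch
from torch.utils.data import DataLoader

# Hypothetical mapping from string labels to class indices
CLASS_TO_IDX = {"cat": 0, "dog": 1}

def collate_with_tensor_labels(batch):
    # batch is a list of (feature_tensor, label_string) pairs
    features = torch.stack([item[0] for item in batch])
    labels = torch.tensor([CLASS_TO_IDX[item[1]] for item in batch])
    return features, labels

# loader = DataLoader(dataset, batch_size=4, collate_fn=collate_with_tensor_labels)
```

With this, each batch's labels arrive as a LongTensor that .to(device) accepts directly.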
Edit
Since the labels seem to be strings, I would convert them to one-hot encoded vectors first:
>>> import torch
>>> labels_unique = set(labels)
>>> keys = {key: value for key, value in zip(labels_unique, range(len(labels_unique)))}
>>> labels_onehot = torch.zeros(size=(len(labels), len(keys)))
>>> for idx, label in enumerate(labels):
...     labels_onehot[idx][keys[label]] = 1
...
>>> labels_onehot = labels_onehot.to(device)
I'm shooting a bit in the dark here because I don't know the exact details, but strings won't work with tensors.
I also got the same error when training an image classification model with a reference-style image dataset. I had implemented a custom dataset class by extending torch.utils.data.Dataset. Like the asker, I hadn't encoded my target labels; they were just a tuple of class names (strings). Since PyTorch tensors don't accept string data directly, I had to convert these labels into integer-encoded tensors before using them in model training.
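A minimal sketch of that integer encoding, using made-up label values (the actual class names would come from your dataset):

```python
import torch

# Hypothetical batch of string labels, e.g. as returned by a custom Dataset
labels = ("cat", "dog", "cat", "bird")

# Build a stable string-to-index mapping, then encode each label as its index
class_to_idx = {name: idx for idx, name in enumerate(sorted(set(labels)))}
label_tensor = torch.tensor([class_to_idx[name] for name in labels],
                            dtype=torch.long)
# label_tensor can now be moved with .to(device) and fed to a loss
# such as nn.CrossEntropyLoss, which expects integer class indices
```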
The pytorch LSTM returns a tuple.
So you get this error as your linear layer self.hidden2tag can not handle this tuple.
So change:
out = self.lstm(x)
to
out, states = self.lstm(x)
This will fix your error, by splitting up the tuple so that out is just your output tensor.
out then stores the hidden states, while states is another tuple that contains the last hidden and cell state.
You can also take a look here:
https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM
You will get another error for the last line, as max() returns a tuple as well. But that should be easy to fix, and it is a different error :)
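Putting both points together in a small self-contained sketch (the sizes below are arbitrary, not taken from the question's model):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 10, 8)      # (batch, seq_len, features)

# nn.LSTM returns a tuple: unpack it instead of passing it on
out, (h_n, c_n) = lstm(x)
# out: hidden states for every timestep, shape (4, 10, 16)
# h_n, c_n: last hidden state and cell state, shape (1, 4, 16)

# torch.max along a dim also returns a tuple of (values, indices)
values, indices = torch.max(out, dim=2)
```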
Transform your state into a numpy array first:
state = np.array(state)
PyTorch is probably missing an np.asarray call in its API.
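For example (the state values below are placeholders for whatever your environment returns as a tuple):

```python
import numpy as np
import torch

# Hypothetical environment state returned as a tuple of floats
state = (0.0, 1.5, -0.3)

state_arr = np.array(state, dtype=np.float32)  # tuple -> numpy array
state_t = torch.from_numpy(state_arr)          # numpy array -> tensor
```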