You return four variables s1, s2, s3, s4 and receive them in a single variable obj. This is what is called a tuple: obj is associated with the four values of s1, s2, s3, s4. So index it, as you would a list, to get the value you want, in order.
obj = list_benefits()
print(obj[0] + " is a benefit of functions!")
print(obj[1] + " is a benefit of functions!")
print(obj[2] + " is a benefit of functions!")
print(obj[3] + " is a benefit of functions!")
Answer from Aswin Murugesh on Stack Overflow
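An alternative not shown in the answer above: since the function returns four values, you can also unpack them into four names in one step instead of indexing. A minimal sketch, using a hypothetical list_benefits that returns four strings:

```python
def list_benefits():
    # hypothetical stand-in for the function in the question
    return ("More organized code", "More readable code",
            "Easier code reuse", "Allowing programmers to share code")

# unpack the returned tuple into four variables in one step
s1, s2, s3, s4 = list_benefits()
print(s1 + " is a benefit of functions!")
```

This works because the right-hand side is a 4-tuple and the left-hand side has exactly four targets; a mismatch in count raises ValueError.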
Do not select via index; access the attribute directly:
sequence_output = outputs.last_hidden_state
outputs is a LongformerBaseModelOutputWithPooling object with the following properties:
print(outputs.keys())
Output:
odict_keys(['last_hidden_state', 'pooler_output', 'hidden_states'])
Calling outputs[0] or outputs.last_hidden_state will both give you the same tensor, but this tensor does not have a property called last_hidden_state.
Answer to this particular case
outputs is a transformers.models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling. You can print its attributes with the command:
outputs.keys()
To access the last hidden state for the first data point in the batch, use the command:
outputs.last_hidden_state[0]
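To see why outputs[0].last_hidden_state fails while outputs.last_hidden_state works, here is a minimal stand-in class (not the real transformers ModelOutput) mimicking the dual index/attribute access:

```python
# Hypothetical stand-in mimicking how transformers model outputs support
# both attribute access and integer indexing.
class FakeModelOutput:
    def __init__(self, last_hidden_state, pooler_output):
        self.last_hidden_state = last_hidden_state
        self.pooler_output = pooler_output

    def __getitem__(self, i):
        # index 0 returns the hidden-state tensor itself
        return (self.last_hidden_state, self.pooler_output)[i]

outputs = FakeModelOutput(last_hidden_state=[[0.1, 0.2]], pooler_output=[0.3])

# Both forms return the very same object...
assert outputs[0] is outputs.last_hidden_state

# ...but that object is a plain tensor-like value, so chaining the
# attribute again would raise AttributeError:
# outputs[0].last_hidden_state  # AttributeError on a list/tensor
```

The point is that indexing already unwraps the container, so there is no last_hidden_state attribute left to read on the result.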
For those who didn't find an answer to their problem here
There are other cases in which this type of error can appear. If you are using a transformer that bundles both an encoder and a decoder (for example T5), the output of a plain call does not expose the encoder's last hidden state directly. To get the encoder's embeddings in that situation, use:
output = model.encoder(...)
emb = output.last_hidden_state
I am trying to make a Discord bot that would turn a message into lowercase. I am encountering an error, as the title suggests: "AttributeError: 'tuple' object has no attribute 'lower'".
Here is my code if anyone can help.
https://hastebin.com/ibareyilax.py
If this is the wrong subreddit I don't mind taking down my post and posting it elsewhere.
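The linked code isn't reproduced here, so this is only a guess, but a very common cause of 'tuple' object has no attribute 'lower' is an accidental trailing comma that silently turns a string into a one-element tuple:

```python
# A trailing comma makes this a 1-element tuple, not a string:
msg = "Hello World",
# msg.lower() here would raise:
# AttributeError: 'tuple' object has no attribute 'lower'
assert isinstance(msg, tuple)

# Without the comma it is a plain string and .lower() works:
msg = "Hello World"
print(msg.lower())  # hello world
```

If that's not it, check whether the value comes from an API that returns a tuple (e.g. a (content, metadata) pair) and index out the string first.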
The issue comes from the fact that hidden (in the forward definition) isn't a torch.Tensor. Therefore, r_output, hidden = self.gru(nn_input, hidden) raises a rather confusing error without specifying exactly what's wrong with the arguments, although you can see it's raised inside an nn.RNN function named check_hidden_size()...
I was confused at first, thinking that the second argument of nn.RNN, h0, was a tuple containing (hidden_state, cell_state). The same can be said of the second element returned by that call, hn. That's not the case: h0 and hn are both torch.Tensors. Interestingly enough though, you are able to unpack stacked tensors:
>>> z = torch.stack([torch.Tensor([1,2,3]), torch.Tensor([4,5,6])])
>>> a, b = z
>>> a, b
(tensor([1., 2., 3.]), tensor([4., 5., 6.]))
You are supposed to provide a tensor as the second argument of an nn.GRU __call__.
Edit - After further inspection of your code, I found out that you are converting hidden back into a tuple... In cell [14] you have hidden = tuple([each.data for each in hidden]), which basically overwrites the modification you made in init_hidden with torch.stack.
Take a step back and look at the source code for RNNBase, the base class for RNN modules. If the hidden state is not given to forward, it will default to:
if hx is None:
    num_directions = 2 if self.bidirectional else 1
    hx = torch.zeros(self.num_layers * num_directions,
                     max_batch_size, self.hidden_size,
                     dtype=input.dtype, device=input.device)
This is essentially the same init as the one you are trying to implement. Granted, you only want to reset the hidden states on every epoch (I don't see why...). Anyhow, a basic alternative would be to set hidden to None at the start of an epoch, pass it as-is to self.forward_back_prop, then to rnn, then to self.rnn, which will in turn default-initialize it for you. Then overwrite hidden with the hidden state returned by that RNN forward call.
To summarize, I've only kept the relevant parts of the code. Remove the init_hidden function from AssetGRU and make those modifications:
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    ...
    if hidden is not None:
        hidden = hidden.detach()
    ...
    output, hidden = rnn(inp, hidden)
    ...
    return loss.item(), hidden

def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches):
    ...
    for epoch_i in range(1, n_epochs + 1):
        hidden = None
        for batch_i, (inputs, labels) in enumerate(train_loader, 1):
            loss, hidden = forward_back_prop(rnn, optimizer, criterion,
                                             inputs, labels, hidden)
            ...
        ...
There should be [] brackets instead of () around 0.
def forward(self, nn_input, hidden):
    ''' Forward pass through the network.
    These inputs are x, and the hidden/cell state `hidden`. '''
    # batch_size equals the input's first dimension
    batch_size = nn_input.size(0)
Hi folks,
I am trying to write data to an Excel file. When using a for loop to assign the values to cells, it gives me the error 'tuple' object has no attribute 'value' at line 6.
import openpyxl
wb = openpyxl.Workbook()
ws = wb.active
for i in range(10):
    index = "A" + str(i)
    ws[index] = i
Please help me with this error.
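The full traceback isn't shown, but one plausible cause, assuming openpyxl's 1-based cell addressing: range(10) starts at 0, so the first iteration builds the reference "A0", which is not a valid cell (worksheet rows start at 1). A sketch of the loop with 1-based rows:

```python
import openpyxl

wb = openpyxl.Workbook()
ws = wb.active
# Worksheet rows start at 1, so "A0" is not a valid cell reference;
# start the loop at 1 instead of 0.
for i in range(1, 11):
    ws["A" + str(i)] = i
```

With valid references like "A1" through "A10", ws[index] resolves to a single Cell rather than a tuple, and the assignment succeeds.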