From your definition, population is a tuple. I'd suggest two options. The first is converting it to an array, i.e.:
population = np.asarray(population)
Alternatively, you can use the DataFrame column's .values attribute, which is essentially a numpy array:
X = np.concatenate((np.ones(len(population)).reshape(len(population), 1), df['Population'].values.reshape(len(population),1)), axis=1)
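Putting the two pieces together, here is a minimal runnable sketch of the fix. The concrete values in `population` are hypothetical stand-ins for the question's data:

```python
import numpy as np

# 'population' as a tuple, mirroring the question's setup (values are made up)
population = (3.5, 5.1, 8.0, 6.2)

# Option 1: convert the tuple to an ndarray so .reshape/.shape work
pop = np.asarray(population)

# Build the design matrix: a column of ones plus the population column
X = np.concatenate(
    (np.ones(len(pop)).reshape(len(pop), 1), pop.reshape(len(pop), 1)),
    axis=1,
)
print(X.shape)  # (4, 2)
```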
Answer from Tarifazo on Stack Overflow
According to the error you posted, Data is of type tuple, and there is no attribute shape defined for tuples. You could try casting Data to a NumPy array when you call your preprocess function, e.g.:
preprocess(numpy.array(Data))
.shape is an attribute of numpy ndarrays; tuples don't have such an attribute, but it's possible to call numpy.shape on a tuple to get its "shape".
import numpy as np
sh = np.shape(Data)
In general (though not in the OP's case), it's often more useful to get the length of a tuple:
len(Data)
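A quick sketch of the difference, using a nested tuple as a stand-in for the OP's Data:

```python
import numpy as np

Data = ((1, 2, 3), (4, 5, 6))  # a tuple of tuples, standing in for the OP's Data

# Data.shape would raise AttributeError, but np.shape accepts any array-like
print(np.shape(Data))  # (2, 3)

# len only reports the outermost dimension
print(len(Data))       # 2
```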
I'm working on an assignment and I've been getting this error a lot with the code our professor is giving us. Here is an example:
a = np.random.normal(0, 1, 9).reshape(3, 3)
and it returns:
AttributeError                            Traceback (most recent call last)
<ipython-input-10-d5ba530a56da> in <module>
----> 1 a = np.random.normal(0, 1, 9).reshape(3, 3)

AttributeError: 'tuple' object has no attribute 'random'
so I'm not sure what I'm doing wrong. Any ideas?
*RESOLVED* thank you u/FunkyDoktor
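The resolution itself isn't quoted above, but the traceback shows what went wrong: np is no longer the numpy module, it is a tuple, which means the name was almost certainly rebound in an earlier notebook cell. A minimal sketch reproducing and fixing that (the rebinding is hypothetical, to mimic the accident):

```python
import numpy as np

np_backup = np
np = (1, 2, 3)  # accidentally shadowing the numpy alias, e.g. in an earlier cell
try:
    a = np.random.normal(0, 1, 9)
except AttributeError as e:
    print(e)    # 'tuple' object has no attribute 'random'

np = np_backup  # rebind (or simply re-import) numpy to fix it
a = np.random.normal(0, 1, 9).reshape(3, 3)
print(a.shape)  # (3, 3)
```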
The selected answer is inaccurate. The code is failing not because the tuples are of the form ((input1, output1), (input2, output2), ...), but because they are of the form (((input1, class1), (input2, class2), ...), ((output1, class1), (output2, class2), ...)).
You could have fixed your problem by simply adding class_mode=None to your flow_from_directory calls.
So, since your model has only one output, you cannot join two generators like that.
- A generator must yield batches as an (input, output) tuple.
- Yours is yielding ((input1, output1), (input2, output2)): tuples inside a tuple.
When your model gets a batch from the generator, it's trying to get the shape of what it thinks is the input, but it finds (input,output) instead.
Making the generator work:
You can probably create your own generator like this:
def myGenerator(train_generator, train_generator1):
    while True:
        xy = next(train_generator)    # or train_generator.next() in older Keras
        xy1 = next(train_generator1)
        yield (xy[0], xy1[0])         # pair the two inputs as (input, target)
Instantiate it with:
train_generator2 = myGenerator(train_generator,train_generator1)
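To see the pairing in isolation, here is a self-contained sketch where fake_flow is a hypothetical stand-in for a Keras directory iterator (it yields (batch, labels) tuples forever, like flow_from_directory does):

```python
import numpy as np

def fake_flow(images):
    # Stand-in for a Keras directory iterator: yields (batch, labels) forever
    while True:
        yield images, np.zeros(len(images))

def myGenerator(gen_a, gen_b):
    # Pair the *inputs* of the two generators as (input, target) batches
    while True:
        xy = next(gen_a)
        xy1 = next(gen_b)
        yield (xy[0], xy1[0])

inputs = np.ones((2, 4, 4, 3))    # hypothetical input image batch
targets = np.zeros((2, 4, 4, 3))  # hypothetical target image batch
paired = myGenerator(fake_flow(inputs), fake_flow(targets))
x, y = next(paired)
print(x.shape, y.shape)  # (2, 4, 4, 3) (2, 4, 4, 3)
```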
Now, you're going to have real trouble with the output shapes. If you're working from image to image, I recommend you work with a purely convolutional model.
A convolutional layer outputs (Batch, Side1, Side2, channels), which is the shape you are working with in your images.
But a dense layer outputs (Batch, size). This can only work if you reshape it later with Reshape((200,150,3)) to match your "true images".
Hint: a Dense 20 in the middle of the model may be too little to represent an entire image. (But of course it depends on your task).
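The shape arithmetic behind that Reshape can be sketched with plain numpy (the batch size of 2 is arbitrary): a Dense layer would have to emit 200*150*3 units per sample before it can be reshaped into image form.

```python
import numpy as np

batch = 2
# What a Dense(200 * 150 * 3) layer would emit: one flat vector per sample
dense_out = np.zeros((batch, 200 * 150 * 3))

# Equivalent of Keras Reshape((200, 150, 3)) applied to that output
images = dense_out.reshape(batch, 200, 150, 3)
print(images.shape)  # (2, 200, 150, 3)
```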
A possible model for this task is:
Conv
... maybe more convs
MaxPooling
Conv
... maybe more convs
MaxPooling
Conv
...
UpSampling
Conv
...
UpSampling
Conv
...
Use padding='same' for every convolution to make your life easier. (But since one of your dimensions is 150, you will have to manage padding at some point: once you reach 75, MaxPooling will drop a pixel, because 75 is not divisible by two.)
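The 75-pixel problem is easy to see with a shape walk-through. This sketch assumes two MaxPooling steps and two UpSampling steps of size 2, matching the outline above:

```python
# Shape walk-through of the pooling/upsampling path for a 200x150 image
h, w = 200, 150
down = []
for _ in range(2):            # two MaxPooling(2) steps
    h, w = h // 2, w // 2     # 150 -> 75 -> 37: the odd 75 loses a pixel
    down.append((h, w))
for _ in range(2):            # two UpSampling(2) steps
    h, w = h * 2, w * 2

print(down)    # [(100, 75), (50, 37)]
print((h, w))  # (200, 148) != (200, 150): the lost pixel must be padded back
```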