You could preallocate the array before assigning the respective values:
import numpy as np

a = np.empty(shape=(25, 2), dtype=int)
for x in range(1, 6):
    for y in range(1, 6):
        index = (x - 1) * 5 + (y - 1)
        a[index] = x, y
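The same 25×2 grid can also be built without an explicit loop; a minimal sketch using np.meshgrid:

```python
import numpy as np

# Build all (x, y) pairs at once; indexing="ij" makes x vary slowest,
# matching the nested-loop order above.
x, y = np.meshgrid(np.arange(1, 6), np.arange(1, 6), indexing="ij")
a = np.column_stack([x.ravel(), y.ravel()])
print(a[0], a[-1])  # [1 1] [5 5]
```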
(Answer from aasoo on Stack Overflow.)
Did you have a look at numpy.ndindex? This could do the trick:

a = np.ndindex(5, 5)

(Note that ndindex yields index tuples starting at 0, so the (1, 1)..(5, 5) grid from the question is obtained by adding 1 to each tuple.)

You can find more information in "Is there a Python equivalent of range(n) for multidimensional ranges?"
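To actually materialize the question's 25×2 array from ndindex, a sketch (remember the offset of 1):

```python
import numpy as np

# ndindex(5, 5) yields (0, 0) .. (4, 4); adding 1 shifts to the 1..5 range.
a = np.array(list(np.ndindex(5, 5))) + 1
print(a[:2])  # first two rows: (1, 1) and (1, 2)
```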
Can someone help me build a numpy array with a for loop? It would sit inside another for loop I'm running, which outputs something into each row.
Basically I'd like to build a numpy array that will output into a csv file like this:
1st iteration:
a,b,c,d
2nd iteration:
a,b,c,d
e,f,g,h
3rd iteration:
a,b,c,d
e,f,g,h
x,y,z,q
Thanks a lot! Sorry if my description isn't clear.
NumPy provides a 'fromiter' method:

import numpy as np

def myfunc(n):
    for i in range(n):
        yield i**2

np.fromiter(myfunc(5), dtype=int)

which yields

array([ 0,  1,  4,  9, 16])
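fromiter also takes an optional count argument; when the length of the iterator is known in advance, passing it lets NumPy allocate the result once instead of growing it. A small sketch:

```python
import numpy as np

def squares(n):
    for i in range(n):
        yield i ** 2

# count=5 lets NumPy preallocate exactly 5 slots up front.
a = np.fromiter(squares(5), dtype=int, count=5)
print(a)  # [ 0  1  4  9 16]
```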
The recommended way to do this is to preallocate before the loop and use slicing and indexing to insert:

import numpy
my_array = numpy.zeros((1000, 1000))
for i in range(1000):
    # for a 1D array
    my_array[i] = functionToGetValue(i)
    # OR to fill an entire row
    my_array[i, :] = functionToGetValue(i)
    # or to fill an entire column
    my_array[:, i] = functionToGetValue(i)
numpy does provide an array.resize() method, but this will be far slower due to the cost of reallocating memory inside a loop. If you must have flexibility, then I'm afraid the only way is to create an array from a list.
EDIT: If you are worried that you're allocating too much memory for your data, I'd use the method above to over-allocate and then when the loop is done, lop off the unused bits of the array using array.resize(). This will be far, far faster than constantly reallocating the array inside the loop.
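A minimal sketch of that over-allocate-then-trim pattern (the generator and the capacity of 100 are made-up stand-ins for data of unknown length):

```python
import numpy as np

capacity = 100           # guess: allocate more than we expect to need
buf = np.zeros(capacity)
n = 0
for value in (x * x for x in range(37)):  # 37 items, unknown to us up front
    buf[n] = value
    n += 1
buf.resize(n, refcheck=False)  # one trim at the end, not many reallocations
print(buf.shape)  # (37,)
```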
EDIT: In response to @user248237's comment, assuming you know any one dimension of the array (for simplicity's sake):
my_array = numpy.zeros((10000, SOMECONSTANT))
for i in xrange(someVariable):
    if i >= my_array.shape[0]:
        my_array.resize((my_array.shape[0] * 2, SOMECONSTANT))
    my_array[i, :] = someFunction()
# lop off extra bits with resize() here
The general principle is "allocate more than you think you'll need, and if things change, resize the array as few times as possible". Doubling the size could be thought of as excessive, but in fact this is the method used by several data structures in the standard libraries of other languages (java.util.Vector does this by default, for example, and I think several implementations of std::vector in C++ do as well).
I have a project where I need to do something very similar. You can implement dynamic resizing in your loop, but Python's list type is actually implemented as a dynamic array, so you might as well take advantage of the tools already available to you. You can do something like this:
delta_Array = np.array([0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10])
theta_Matrix = []
for i in range(N):
    t = Ridge(Xtrain, ytrain, .3)
    theta_Matrix.append(t)
theta_Matrix = np.array(theta_Matrix)
I should mention that if you already know the size you expect for theta_Matrix, you'll get the best performance by doing something like:
delta_Array = np.array([0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10])
theta_Matrix = np.zeros((N, 8))
for i in range(N):
    t = Ridge(Xtrain, ytrain, .3)
    theta_Matrix[i] = t
np.append returns the concatenated array, and you're ignoring its return value. You should also consider using np.vstack instead, since that stacks row vectors into matrices (append can do it but it takes extra arguments).
However, running np.append or np.vstack in a loop is still not a very good idea as constructing the matrix will take quadratic time. It's better to preallocate an array and then fill it row by row using slicing. If you don't know how large it will need to be, consider iteratively doubling its size (see Wikipedia, Dynamic array) with np.resize.
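A small sketch contrasting the two approaches; collecting rows in a Python list and converting once avoids the quadratic cost of concatenating inside the loop:

```python
import numpy as np

rows = []
for i in range(4):
    rows.append(np.array([i, i + 1, i + 2]))  # cheap list append

m = np.array(rows)    # one conversion at the end
m2 = np.vstack(rows)  # equivalent: stack the row vectors into a matrix
print(m.shape)  # (4, 3)
```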
There is no need to use a loop. You can use numpy.arange([start, ]stop, [step, ]) to generate a range of numbers.
In your case:
predicted_value = np.arange(9, 33)  # note the 33: the stop value is excluded, so this gives 9..32
If you really want to use a loop, there is the option of using a list comprehension:
predicted_value = np.array([i for i in range(9, 33)])
Or an explicit loop, which would be most horrible:
predicted_value = np.empty(33 - 9)
for k, i in enumerate(range(9, 33)):
    predicted_value[k] = i
Do you want this?
a = [1, 2, 3]
b = [4, 5, 6]
c = [a, b]
c[1][1]  # gives you 5
To do it in a loop:
c = []
for z in [a, b]:
    c.append(z)
# continue as usual......
Also, you don't really need numpy to do this. If you do, follow @Taha's answer above.
c[1] would give [[4 5 6]]; to access the 5 you should use c[1][0][1]:

import numpy as np
b = [4, 5, 6]
a = [1, 2, 3]
c = np.array([[a], [b]])
print(c[1][0][1])
UPDATE
It's easier to do it this way:

import numpy as np
b = [4, 5, 6]
a = [1, 2, 3, 9]  # I added an element to clarify how to manage indexes in case you have different sizes
c = np.array([a, b], dtype=object)  # lists of different lengths give an object array
# j = sum(1 for x in c if isinstance(x, np.ndarray))  # another way to count the items in the list
for j in range(len(c)):  # selecting the list (a, b, ...)
    i = 0
    while i <= len(c[j]) - 1:  # looping over the list
        print("index (", i, ",", j, "):", c[j][i])
        i += 1
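With equal-length lists the same double loop reads more idiomatically with enumerate; a sketch:

```python
import numpy as np

a = [1, 2, 3]
b = [4, 5, 6]
c = np.array([a, b])  # equal lengths give a regular 2-D array

for j, row in enumerate(c):
    for i, value in enumerate(row):
        print(f"index ({i},{j}): {value}")
```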