You could do something ugly like

for i in range(len(your_array)):
    for j in range(len(your_array[i])):
        print(your_array[i][j])
Answer from Shintlor on Stack Overflow
Have a look at itertools, especially itertools.product. You can compress the three loops into one with

import itertools

for x, y, z in itertools.product(*map(range, (x_dim, y_dim, z_dim))):
    ...
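To see that the single product loop really is equivalent, here is a small self-contained check (the dimensions 2, 3, 4 are arbitrary example values, not from the question):

```python
import itertools

x_dim, y_dim, z_dim = 2, 3, 4  # arbitrary small example dimensions

# One loop over the Cartesian product of the three ranges...
product_order = list(itertools.product(range(x_dim), range(y_dim), range(z_dim)))

# ...visits exactly the same (x, y, z) triples, in the same order,
# as three nested loops.
nested_order = [(x, y, z)
                for x in range(x_dim)
                for y in range(y_dim)
                for z in range(z_dim)]

assert product_order == nested_order
print(len(product_order))  # 2 * 3 * 4 = 24 index triples
```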
You can also create the cube this way:
cube = numpy.array(list(itertools.product((0,1), (0,1), (0,1))))
print(cube)
array([[0, 0, 0],
       [0, 0, 1],
       [0, 1, 0],
       [0, 1, 1],
       [1, 0, 0],
       [1, 0, 1],
       [1, 1, 0],
       [1, 1, 1]])
and add the offsets by a simple addition
print(cube + (10, 100, 1000))
array([[  10,  100, 1000],
       [  10,  100, 1001],
       [  10,  101, 1000],
       [  10,  101, 1001],
       [  11,  100, 1000],
       [  11,  100, 1001],
       [  11,  101, 1000],
       [  11,  101, 1001]])
which would translate to cube + (x, y, z) in your case. The very compact version of your code would be
import itertools
import numpy

cube = numpy.array(list(itertools.product((0, 1), (0, 1), (0, 1))))
x_dim = y_dim = z_dim = 10
for offset in itertools.product(*map(range, (x_dim, y_dim, z_dim))):
    work_with_cube(cube + offset)
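If whatever work_with_cube does is itself vectorizable, the Python loop can disappear entirely: NumPy broadcasting can add every offset to the cube in one shot. A sketch (work_with_cube is just the placeholder from the question, so this only shows how to build all shifted cubes at once):

```python
import itertools
import numpy

cube = numpy.array(list(itertools.product((0, 1), (0, 1), (0, 1))))  # shape (8, 3)
x_dim = y_dim = z_dim = 10

# All 1000 offsets as a (1000, 3) array.
offsets = numpy.array(list(itertools.product(range(x_dim), range(y_dim), range(z_dim))))

# Broadcasting: (1000, 1, 3) + (1, 8, 3) -> (1000, 8, 3),
# i.e. one shifted copy of the cube per offset.
all_cubes = offsets[:, None, :] + cube[None, :, :]

print(all_cubes.shape)  # (1000, 8, 3)
```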
Edit: itertools.product takes the iterables as separate arguments, i.e. itertools.product(a, b, c), so the result of map(range, ...) has to be unpacked as *map(...)
import itertools

for x, y, z in itertools.product(range(x_size),
                                 range(y_size),
                                 range(z_size)):
    work_with_cube(array[x, y, z])
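NumPy also ships a helper for exactly this pattern: numpy.ndindex yields the same index triples as the product loop above, without building the ranges yourself. A small self-contained illustration:

```python
import numpy

x_size, y_size, z_size = 2, 2, 2
array = numpy.arange(x_size * y_size * z_size).reshape(x_size, y_size, z_size)

total = 0
for x, y, z in numpy.ndindex(x_size, y_size, z_size):
    total += array[x, y, z]  # visits every element exactly once, in C order

print(total)  # sum of 0..7 = 28
```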
I have a 3D array that I realized I can't efficiently and cleanly turn into three, or even two, parallel arrays.
There aren't too many items in this 3D array. Is it good practice in this case to just use a triple nested loop? I have never used one before, so it seems freaky, but I'm not sure what else to do.
I have never used a binary tree to iterate through a multidimensional array either. I'm not sure if that's what it is used for.
So what would be the best approach, given that there are probably at most 200 items in each array?
m = [[[i*j*k for k in range(10)] for j in range(6)] for i in range(4)]
Since m is not defined when you start your loop, Python does not know how to access the [i][j][k]-th element.
m = []  # init the first level
for i in range(4):
    m.append([])  # init m[i]
    for j in range(6):
        m[i].append([])  # init m[i][j]
        for k in range(10):
            m[i][j].append(i * j * k)  # append the k-th element to m[i][j]
print(m)
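A quick sanity check that the one-line comprehension and the explicit append loops build the same 4×6×10 nested list (this just restates the two snippets above side by side):

```python
# List-comprehension version.
m_comp = [[[i * j * k for k in range(10)] for j in range(6)] for i in range(4)]

# Explicit append version.
m_loop = []
for i in range(4):
    m_loop.append([])
    for j in range(6):
        m_loop[i].append([])
        for k in range(10):
            m_loop[i][j].append(i * j * k)

assert m_comp == m_loop
print(m_comp[3][5][9])  # 3 * 5 * 9 = 135
```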
itertools.product is nicer than nested loops, aesthetically speaking. But I don't think it will make your code that much faster. My testing suggests that iteration is not your bottleneck.
>>> bigdata = numpy.arange(256 * 256 * 256 * 3 * 3).reshape(256, 256, 256, 3, 3)
>>> %timeit numpy.linalg.eigvals(bigdata[100, 100, 100, :, :])
10000 loops, best of 3: 52.6 us per loop
So, as a lower-bound estimate:
>>> .000052 * 256 * 256 * 256 / 60
14.540253866666665
That's 14 minutes minimum on my computer, which is pretty new. Let's see how long the loops take...
>>> def just_loops(N):
...     for i in range(N):
...         for j in range(N):
...             for k in range(N):
...                 pass
...
>>> %timeit just_loops(256)
1 loops, best of 3: 350 ms per loop
Orders of magnitude smaller, as DSM said. Even the work of slicing the array alone is more substantial:
>>> def slice_loops(N, data):
...     for i in range(N):
...         for j in range(N):
...             for k in range(N):
...                 data[i, j, k, :, :]
...
>>> %timeit slice_loops(256, bigdata)
1 loops, best of 3: 33.5 s per loop
I'm sure there is a good way to do this in NumPy, but in general, itertools.product is faster than nested loops over ranges.
from itertools import product
import numpy as np

for i, j, k in product(range(N), range(N), range(N)):
    a = np.linalg.eigvals(data[i, j, k, :, :])
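The "good way to do this in NumPy" hinted at above does exist: numpy.linalg.eigvals broadcasts over leading dimensions, treating the input as a stack of matrices on the last two axes. A sketch with a much smaller array than the 256³ one above, so it runs instantly:

```python
import numpy

# A small stack of 3x3 matrices: shape (4, 4, 4, 3, 3).
data = numpy.arange(4 * 4 * 4 * 3 * 3, dtype=float).reshape(4, 4, 4, 3, 3)

# eigvals treats the trailing (3, 3) axes as the matrix and loops over the
# leading axes in C, so no Python-level triple loop is needed.
all_eigs = numpy.linalg.eigvals(data)

print(all_eigs.shape)  # (4, 4, 4, 3): three eigenvalues per 3x3 matrix
```

Note that the result may be complex even for real input, since a general real matrix can have complex eigenvalues.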