You can multiply numpy arrays by scalars and it just works.
>>> import numpy as np
>>> np.array([1, 2, 3]) * 2
array([2, 4, 6])
>>> np.array([[1, 2, 3], [4, 5, 6]]) * 2
array([[ 2, 4, 6],
[ 8, 10, 12]])
This is also a very fast and efficient operation. With your example:
>>> a_1 = np.array([1.0, 2.0, 3.0])
>>> a_2 = np.array([[1., 2.], [3., 4.]])
>>> b = 2.0
>>> a_1 * b
array([2., 4., 6.])
>>> a_2 * b
array([[2., 4.],
[6., 8.]])
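As a quick aside (not from the answer above): the scalar's type affects the result's dtype. Multiplying an integer array by a float scalar promotes the result to floating point, which is easy to miss when the values happen to be whole numbers. A minimal sketch:

```python
import numpy as np

# an aside: the scalar's type matters for the result dtype --
# an integer array times a float scalar is promoted to float
a = np.array([1, 2, 3])
int_result = a * 2        # stays an integer dtype
float_result = a * 2.0    # promoted to float64
print(int_result.dtype, float_result.dtype)
```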
Answer from iz_ on Stack Overflow
Using np.multiply() (the multiply ufunc)
a_1 = np.array([1.0, 2.0, 3.0])
a_2 = np.array([[1., 2.], [3., 4.]])
b = 2.0
np.multiply(a_1, b)
# array([2., 4., 6.])
np.multiply(a_2, b)
# array([[2., 4.],
#        [6., 8.]])
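Because np.multiply is a ufunc, it also accepts an `out=` argument to write the result into a preallocated array instead of allocating a new one. A small sketch (this detail is an addition, not part of the answer above):

```python
import numpy as np

# np.multiply is a ufunc, so out= writes into an existing array
a_2 = np.array([[1., 2.], [3., 4.]])
out = np.empty_like(a_2)
np.multiply(a_2, 2.0, out=out)
print(out)
```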
You can do this in two simple steps using NumPy:
>>> # multiply column 2 of the 2D array, A, by 5.2
>>> A[:,1] *= 5.2
>>> # assuming by 'cumulative sum' you meant the 'reduced' sum:
>>> A[:,1].sum()
>>> # if in fact you want the cumulative sum (ie, returns a new column)
>>> # then do this for the second step instead:
>>> NP.cumsum(A[:,1])
with some mocked data:
>>> A = NP.random.rand(8, 5)
>>> A
array([[ 0.893, 0.824, 0.438, 0.284, 0.892],
[ 0.534, 0.11 , 0.409, 0.555, 0.96 ],
[ 0.671, 0.817, 0.636, 0.522, 0.867],
[ 0.752, 0.688, 0.142, 0.793, 0.716],
[ 0.276, 0.818, 0.904, 0.767, 0.443],
[ 0.57 , 0.159, 0.144, 0.439, 0.747],
[ 0.705, 0.793, 0.575, 0.507, 0.956],
[ 0.322, 0.713, 0.963, 0.037, 0.509]])
>>> A[:,1] *= 5.2
>>> A
array([[ 0.893, 4.287, 0.438, 0.284, 0.892],
[ 0.534, 0.571, 0.409, 0.555, 0.96 ],
[ 0.671, 4.25 , 0.636, 0.522, 0.867],
[ 0.752, 3.576, 0.142, 0.793, 0.716],
[ 0.276, 4.255, 0.904, 0.767, 0.443],
[ 0.57 , 0.827, 0.144, 0.439, 0.747],
[ 0.705, 4.122, 0.575, 0.507, 0.956],
[ 0.322, 3.71 , 0.963, 0.037, 0.509]])
>>> A[:,1].sum()
25.596156138451427
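For the cumulative-sum variant mentioned above, here is a small sketch with deterministic data (chosen here, rather than the random array above) so the running totals are easy to check:

```python
import numpy as np

# deterministic stand-in for the random data above
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])
A[:, 1] *= 5.2
reduced = A[:, 1].sum()        # a single number
running = np.cumsum(A[:, 1])   # one running total per row
print(reduced)
print(running)
```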
Just a few simple rules are required to grok element selection (indexing) in NumPy:
NumPy, like Python, is 0-based, so e.g. the "1" below refers to the second column
commas separate the dimensions inside the brackets, so [rows, columns]; e.g., A[2,3] means the item ("cell") at row three, column four
a colon means all of the elements along that dimension; e.g., A[:,1] creates a view of A's second column, and A[3,:] refers to the fourth row
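The rules above can be sketched on a known 4x5 array (example values chosen here for illustration):

```python
import numpy as np

# a 4x5 array with predictable contents: row r holds 5*r .. 5*r+4
A = np.arange(20).reshape(4, 5)
cell = A[2, 3]     # row three, column four (0-based indices 2, 3)
col2 = A[:, 1]     # all rows, second column (a view)
row4 = A[3, :]     # the fourth row
print(cell, col2, row4)
```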
Sure:
import numpy as np
# Let a be some 2d array; here we just use dummy data
# to illustrate the method
a = np.ones((10,5))
# Multiply just the 2nd column by 5.2 in-place
a[:,1] *= 5.2
# Now get the cumulative sum of just that column
csum = np.cumsum(a[:,1])
If you don't want to do this in-place you would need a slightly different strategy:
b = 5.2*a[:,1]
csum = np.cumsum(b)
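To make the in-place vs. out-of-place distinction concrete, the out-of-place version leaves the original array untouched (a quick sketch of the code above):

```python
import numpy as np

a = np.ones((10, 5))
# out-of-place: b is a new array; a's second column is unchanged
b = 5.2 * a[:, 1]
csum = np.cumsum(b)
print(a[:, 1])   # still all ones
print(csum[-1])  # the total of ten 5.2s
```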
You could use None (or np.newaxis) to expand A to match B:
>>> A = np.arange(10)
>>> B = np.random.random((10,3,5))
>>> C0 = np.array([A[i]*B[i,:,:] for i in range(len(A))])
>>> C1 = A[:,None,None] * B
>>> np.allclose(C0, C1)
True
But this will only work when B is 3-dimensional, since A[:,None,None] adds exactly two trailing axes. Borrowing from @ajcr, with enough transposes we can get implicit broadcasting to work for the general case:
>>> C3 = (A * B.T).T
>>> np.allclose(C0, C3)
True
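The transpose trick works because broadcasting aligns trailing axes: transposing B puts its length-10 axis last, where it lines up with A. A minimal sketch:

```python
import numpy as np

A = np.arange(10)
B = np.random.random((10, 3, 5))
# B.T has shape (5, 3, 10); A broadcasts over that last axis,
# and the outer transpose restores the original axis order
C = (A * B.T).T
expected = A[:, None, None] * B
same = np.allclose(C, expected)
print(same)
```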
Alternatively, you could use einsum to provide the generality. In retrospect it's probably overkill here compared with the transpose route, but it's handy when the multiplications are more complicated.
>>> C2 = np.einsum('i,i...->i...', A, B)
>>> np.allclose(C0, C2)
True
and
>>> B = np.random.random((10,4))
>>> D0 = np.array([A[i]*B[i,:] for i in range(len(A))])
>>> D2 = np.einsum('i,i...->i...', A, B)
>>> np.allclose(D0, D2)
True
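To illustrate the generality claimed above: the same 'i,i...->i...' signature covers B of any trailing shape, with no code change (a sketch using all-ones arrays so the results are easy to verify):

```python
import numpy as np

A = np.array([1., 2., 3.])
# the '...' soaks up however many trailing axes B has
C2 = np.einsum('i,i...->i...', A, np.ones((3, 4)))
C3 = np.einsum('i,i...->i...', A, np.ones((3, 4, 5)))
print(C2.shape, C3.shape)
print(C2[1, 0], C3[2, 0, 0])  # 2.0 3.0
```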
Although I like the einsum notation, I'll add a little variety to the mix.
You can add enough extra dimensions to a so that it will broadcast across b.
>>> a.shape
(3,)
>>> b.shape
(3,2)
b has more dimensions than a
extra_dims = b.ndim - a.ndim
Add the extra dimension(s) to a
new_shape = a.shape + (1,)*extra_dims # (3,1)
new_a = a.reshape(new_shape)
Multiply
new_a * b
As a function:
def f(a, b):
    '''Product across the first dimension of b.

    Assumes a is 1-dimensional.
    Raises AssertionError if a.ndim > b.ndim or
    the first dimensions are different.
    '''
    assert a.shape[0] == b.shape[0], 'First dimension is different'
    assert b.ndim >= a.ndim, 'a has more dimensions than b'
    # add extra dimensions so that a will broadcast
    extra_dims = b.ndim - a.ndim
    newshape = a.shape + (1,) * extra_dims
    new_a = a.reshape(newshape)
    return new_a * b
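The same reshape logic extends unchanged to a higher-dimensional b; a quick sketch (example shapes chosen here) showing the trailing length-1 axes doing the broadcasting:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.ones((3, 4, 5))
# pad a with enough trailing length-1 axes to match b's ndim
extra_dims = b.ndim - a.ndim                     # 2
new_a = a.reshape(a.shape + (1,) * extra_dims)   # shape (3, 1, 1)
result = new_a * b
print(result.shape, result[2, 0, 0])
```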