Considering that you create the array A before timing it, both solutions should be roughly equally fast, because each is just iterating over the array. I am actually not sure why the pure Python solution is quicker; maybe collection-based iterators (enumerate) are better suited to primitive Python types?

Looking at the example with one row, you want to take a range of elements from the row and wrap around the out-of-bounds indices. For this I would suggest doing:

import numpy as np

row = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
start = 8
L = 4
np.take(row, np.arange(start, start+L), mode='wrap')

output:

array([ 9, 10,  1,  2])

This behavior can then be extended to two dimensions by specifying the axis keyword. Uneven lengths in L do make it trickier, though: with non-homogeneous arrays you lose most of the benefits of using numpy. The work-around is to partition L so that equally sized lengths are grouped together.

If I understand the whole task correctly, you are given some start value and you want to extract each corresponding strip length along the second axis of A.

A = np.arange(5*8000).reshape(5, 8000)  # using arange makes it easier to verify output
L = (4, 3, 3, 3, 4)                     # length for each of the 5 strips
parts = ((0, 4), (1, 2, 3))             # partition of L (too lazy to implement this myself atm)
start = 7998                            # arbitrary start position

for part in parts:
    ranges = np.arange(start, start + L[part[0]])
    out = np.take(A[part, :], ranges, axis=-1, mode='wrap')
    print(f'Output for rows {part} with length {L[part[0]]}:\n\n{out}\n')

Output:

Output for rows (0, 4) with length 4:

[[ 7998  7999     0     1]
 [39998 39999 32000 32001]]

Output for rows (1, 2, 3) with length 3:

[[15998 15999  8000]
 [23998 23999 16000]
 [31998 31999 24000]]
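The hardcoded `parts` tuple above can be derived from `L` by grouping row indices that share a strip length; a minimal sketch (the helper name `partition_by_length` is my own):

```python
from collections import defaultdict

def partition_by_length(L):
    """Group row indices by strip length so each group can be
    processed as one homogeneous (rectangular) take."""
    groups = defaultdict(list)
    for row, length in enumerate(L):
        groups[length].append(row)
    return {length: tuple(rows) for length, rows in groups.items()}

L = (4, 3, 3, 3, 4)
print(partition_by_length(L))  # {4: (0, 4), 3: (1, 2, 3)}
```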

Although it looks like you want a random starting position for each row?
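If each row does get its own (random) start, an equal-length group can still be extracted in one vectorized step with fancy indexing and a modulo, which wraps just like mode='wrap'; a sketch, assuming all rows in the group share the same length:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.arange(5 * 8000).reshape(5, 8000)
rows = np.array([0, 4])   # one group of rows with equal strip length
L = 4
starts = rng.integers(0, A.shape[1], size=rows.size)  # one start per row

# column indices per row, wrapped with modulo instead of np.take(mode='wrap')
cols = (starts[:, None] + np.arange(L)) % A.shape[1]
out = A[rows[:, None], cols]
print(out.shape)  # (2, 4)
```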

Answer from Kevin on Stack Overflow

Two comments.

First, about implementing periodic boundary conditions. There are several ways to implement such boundary conditions. In my opinion, the easiest is to directly incorporate them into your approximation. Say you have points numbered $0$, $1$, ..., $N$. What periodicity means is that the solutions at points $0$ and $N$ are identical, so you don't need to store the solution twice. Furthermore, periodicity also means that if you need the solution to the left of point $0$, i.e., at the fictitious point $-1$, you should actually look to the left of point $N$, i.e., at the point $N-1$. The same goes if you need the solution to the right of point $N$: look to the right of point $0$, i.e., at the point $1$.

So, your method at point $0$ (which is the same as at point $N$),

$$u_0^{n+1} = u_0^n - \frac{c\,\Delta t}{2\,\Delta x}\left(u_1^n - u_{-1}^n\right),$$

should actually be interpreted as

$$u_0^{n+1} = u_0^n - \frac{c\,\Delta t}{2\,\Delta x}\left(u_1^n - u_{N-1}^n\right).$$

Similarly, your method at point $N$,

$$u_N^{n+1} = u_N^n - \frac{c\,\Delta t}{2\,\Delta x}\left(u_{N+1}^n - u_{N-1}^n\right),$$

should actually be interpreted as

$$u_N^{n+1} = u_N^n - \frac{c\,\Delta t}{2\,\Delta x}\left(u_1^n - u_{N-1}^n\right).$$

At all the other (interior) points, the method is unchanged. Periodic boundaries can be confusing at first. In my experience, it helps to draw this (which I can't do right now) and think through it step-by-step.

The second and more important point is that the FTCS method is unconditionally unstable. What this means is that if you were to implement it, you would never obtain an approximate solution because it would always diverge (grow without bound) irrespective of how small you make your time step. You can easily establish the unconditional instability by applying the von Neumann method, see Section 2.1 in these notes. To get a simple, stable method, consider the FTBS method, see Section 2.2 in the notes. (By the way, these notes also discuss the implementation of periodic boundary conditions for the FTBS method.)
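As an illustration, one FTBS step for the linear advection equation $u_t + c u_x = 0$ with periodic boundaries can be written with np.roll, which performs exactly the index wrapping described above; the grid and parameter values here are placeholders, not taken from the question:

```python
import numpy as np

def ftbs_step(u, c, dt, dx):
    """One forward-time backward-space (upwind, c > 0) step for u_t + c*u_x = 0.
    np.roll(u, 1) supplies u[j-1], with u[-1] wrapping around to u[N-1],
    so the periodic boundary needs no special-casing."""
    return u - c * dt / dx * (u - np.roll(u, 1))

N = 100
x = np.linspace(0.0, 1.0, N, endpoint=False)  # point N is identified with point 0
u = np.exp(-100.0 * (x - 0.5) ** 2)           # smooth initial bump
c = 1.0
dx = x[1] - x[0]
dt = 0.5 * dx / c                             # CFL number 0.5 <= 1: stable

for _ in range(200):
    u = ftbs_step(u, c, dt, dx)
```

Because the scheme only shifts and averages values, the solution stays bounded, unlike what you would observe with FTCS.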


Wrap function

A simple function can be written with the mod operator, `%`, in basic Python, and generalised to operate on an n-dimensional tuple for a given lattice shape.
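Note that in Python (unlike C), `%` already returns a result with the sign of the divisor, so both out-of-range and negative indices wrap into `[0, n)` with a single modulo:

```python
n = 4
print(5 % n)    # 1
print(-1 % n)   # 3
```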

def latticeWrapIdx(index, lattice_shape):
    """Returns a periodic lattice index
    for a given iterable index

    Required Inputs:
        index :: iterable :: one integer for each axis
        lattice_shape :: the shape of the lattice to index into
    """
    if not hasattr(index, '__iter__'): return index         # pass single integers through
    if len(index) != len(lattice_shape): return index       # only wrap full scalar references
    if any(type(i) == slice for i in index): return index   # slices not supported
    if len(index) == len(lattice_shape):                    # periodic indexing of scalars
        mod_index = tuple((i % s + s) % s for i, s in zip(index, lattice_shape))
        return mod_index
    raise ValueError('Unexpected index: {}'.format(index))

This is tested as:

arr = np.array([[ 11.,  12.,  13.,  14.],
                [ 21.,  22.,  23.,  24.],
                [ 31.,  32.,  33.,  34.],
                [ 41.,  42.,  43.,  44.]])
test_vals = [[(1,1), 22.], [(3,3), 44.], [( 4, 4), 11.], # [index, expected value]
             [(3,4), 41.], [(4,3), 14.], [(10,10), 33.]]

passed = all(arr[latticeWrapIdx(idx, (4, 4))] == act for idx, act in test_vals)
print("Iterating test values. Result: {}".format(passed))

and gives the output of,

Iterating test values. Result: True

Subclassing Numpy

The wrapping function can be incorporated into a subclassed np.ndarray as described here:

class Periodic_Lattice(np.ndarray):
    """Creates an n-dimensional ring that joins on boundaries w/ numpy
    
    Required Inputs
        array :: np.array :: n-dim numpy array to use wrap with
    
    Only currently supports single point selections wrapped around the boundary
    """
    def __new__(cls, input_array, lattice_spacing=None):
        """__new__ is called by numpy when and explicit constructor is used:
        obj = MySubClass(params) otherwise we must rely on __array_finalize
         """
        # Input array is an already formed ndarray instance
        # We first cast to be our class type
        obj = np.asarray(input_array).view(cls)
        
        # add the new attribute to the created instance
        obj.lattice_shape = input_array.shape
        obj.lattice_dim = len(input_array.shape)
        obj.lattice_spacing = lattice_spacing
        
        # Finally, we must return the newly created object:
        return obj
    
    def __getitem__(self, index):
        index = self.latticeWrapIdx(index)
        return super(Periodic_Lattice, self).__getitem__(index)
    
    def __setitem__(self, index, item):
        index = self.latticeWrapIdx(index)
        return super(Periodic_Lattice, self).__setitem__(index, item)
    
    def __array_finalize__(self, obj):
        """ ndarray.__new__ passes __array_finalize__ the new object, 
        of our own class (self) as well as the object from which the view has been taken (obj). 
        See
        http://docs.scipy.org/doc/numpy/user/basics.subclassing.html#simple-example-adding-an-extra-attribute-to-ndarray
        for more info
        """
        # ``self`` is a new object resulting from
        # ndarray.__new__(Periodic_Lattice, ...), therefore it only has
        # attributes that the ndarray.__new__ constructor gave it -
        # i.e. those of a standard ndarray.
        #
        # We could have got to the ndarray.__new__ call in 3 ways:
        # From an explicit constructor - e.g. Periodic_Lattice():
        #   1. obj is None
        #       (we're in the middle of the Periodic_Lattice.__new__
        #       constructor, and self.info will be set when we return to
        #       Periodic_Lattice.__new__)
        if obj is None: return
        #   2. From view casting - e.g arr.view(Periodic_Lattice):
        #       obj is arr
        #       (type(obj) can be Periodic_Lattice)
        #   3. From new-from-template - e.g lattice[:3]
        #       type(obj) is Periodic_Lattice
        # 
        # Note that it is here, rather than in the __new__ method,
        # that we set the default value for 'spacing', because this
        # method sees all creation of default objects - with the
        # Periodic_Lattice.__new__ constructor, but also with
        # arr.view(Periodic_Lattice).
        #
        # These are in effect the default values from these operations
        self.lattice_shape = getattr(obj, 'lattice_shape', obj.shape)
        self.lattice_dim = getattr(obj, 'lattice_dim', len(obj.shape))
        self.lattice_spacing = getattr(obj, 'lattice_spacing', None)
    
    def latticeWrapIdx(self, index):
        """returns periodic lattice index 
        for a given iterable index
        
        Required Inputs:
            index :: iterable :: one integer for each axis
        
        This is NOT compatible with slicing
        """
        if not hasattr(index, '__iter__'): return index         # pass single integers through
        if len(index) != len(self.lattice_shape): return index  # only wrap full scalar references
        if any(type(i) == slice for i in index): return index   # slices not supported
        if len(index) == len(self.lattice_shape):               # periodic indexing of scalars
            mod_index = tuple((i % s + s) % s for i, s in zip(index, self.lattice_shape))
            return mod_index
        raise ValueError('Unexpected index: {}'.format(index))

Testing demonstrates that the lattice overloads indexing correctly,

arr = np.array([[ 11.,  12.,  13.,  14.],
                [ 21.,  22.,  23.,  24.],
                [ 31.,  32.,  33.,  34.],
                [ 41.,  42.,  43.,  44.]])
test_vals = [[(1,1), 22.], [(3,3), 44.], [( 4, 4), 11.], # [index, expected value]
             [(3,4), 41.], [(4,3), 14.], [(10,10), 33.]]

periodic_arr  = Periodic_Lattice(arr)
passed = (periodic_arr == arr).all()
passed = passed and all(periodic_arr[idx] == act for idx, act in test_vals)
print("Iterating test values. Result: {}".format(passed))

and gives the output of,

Iterating test values. Result: True
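Because __setitem__ is overloaded as well, assignments through out-of-range indices also wrap. A condensed re-statement of the class (stripped down to the wrapping logic, so the snippet is self-contained) to show just that:

```python
import numpy as np

class Periodic_Lattice(np.ndarray):
    """Condensed version of the class above: wrap full tuple indices with %."""
    def __new__(cls, input_array):
        obj = np.asarray(input_array).view(cls)
        obj.lattice_shape = input_array.shape
        return obj
    def __array_finalize__(self, obj):
        if obj is None: return
        self.lattice_shape = getattr(obj, 'lattice_shape', obj.shape)
    def _wrap(self, index):
        # only wrap iterable indices with one integer per axis
        if not hasattr(index, '__iter__'): return index
        return tuple(i % s for i, s in zip(index, self.lattice_shape))
    def __getitem__(self, index):
        return super().__getitem__(self._wrap(index))
    def __setitem__(self, index, item):
        return super().__setitem__(self._wrap(index), item)

lat = Periodic_Lattice(np.array([[11., 12.], [21., 22.]]))
lat[2, 3] = 99.           # wraps to lat[0, 1]
print(lat[0, 1])          # 99.0
print(lat[-1, -1])        # 22.0 (negative indices wrap too)
```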

Finally, using the code provided in the initial problem we obtain:

True