pyDive.arrays.gpu_ndarray module

Note

This module has a shortcut: pyDive.gpu.

class pyDive.arrays.gpu_ndarray.gpu_ndarray(shape, dtype=<type 'float'>, distaxes='all', target_offsets=None, target_ranks=None, no_allocation=False, **kwargs)

Represents a cluster-wide, multidimensional, homogeneous array of fixed-size elements. Cluster-wide means that its elements are distributed across IPython.parallel engines. The distribution is done in one or multiple dimensions along user-specified axes. The user can optionally specify which engine maps to which index range, or leave the default, which pursues a uniform distribution across all engines.

This gpu_ndarray class is auto-generated from its local counterpart: pyDive.arrays.local.gpu_ndarray.gpu_ndarray.

The implementation is based on IPython.parallel and on local pyDive.arrays.local.gpu_ndarray.gpu_ndarray arrays. Every special operation that pyDive.arrays.local.gpu_ndarray.gpu_ndarray implements (“__add__”, “__le__”, ...) is also available for gpu_ndarray.

Note that array slicing is a cheap operation since no memory is copied. However, this can easily lead to a situation where you end up with two arrays of the same size but with distinct element distributions. Therefore, call dist_like() before doing any manual work on their local arrays. Since every cluster-wide array operation first equalizes the distribution of all involved arrays, an explicit call to dist_like() is rarely needed in most use cases.

If you try to access an attribute that is only available for the local array, the request is forwarded to an internal local copy of the whole distributed array (see: gather()). This internal copy is only created on first access and is held until __setitem__ is called, i.e. until the array’s content is manipulated.
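The default uniform distribution mentioned above can be illustrated with a small, self-contained sketch. The helper below is hypothetical (it is not part of pyDive); it only demonstrates how an axis of a given length might be split as evenly as possible into per-engine index ranges:

```python
# Hypothetical helper, NOT part of pyDive: computes the starting offset of
# each engine's local slice for a roughly uniform split of one axis.
def uniform_offsets(axis_len, n_engines):
    base, rest = divmod(axis_len, n_engines)
    offsets, pos = [], 0
    for i in range(n_engines):
        offsets.append(pos)
        # the first `rest` engines receive one extra element
        pos += base + (1 if i < rest else 0)
    return offsets

print(uniform_offsets(10, 4))  # [0, 3, 6, 8]
```

Engine i then holds the index range from `offsets[i]` up to `offsets[i+1]` (or the axis end for the last engine).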

__init__(shape, dtype=<type 'float'>, distaxes='all', target_offsets=None, target_ranks=None, no_allocation=False, **kwargs)

Creates an instance of gpu_ndarray. This is a low-level method of instantiating an array; it should rather be constructed using factory functions (“empty”, “zeros”, “open”, ...).

Parameters:
  • shape (ints) – shape of array
  • dtype – datatype of a single element
  • distaxes (ints) – distributed axes. Accepts a single integer too. Defaults to ‘all’ meaning each axis is distributed.
  • target_offsets (list of lists) – For each distributed axis there is an (inner) list in the outer list. The inner list contains the offsets of the local arrays.
  • target_ranks (ints) – linear list of engine ranks holding the local arrays. The last distributed axis is iterated over first.
  • no_allocation (bool) – if True, no instance of pyDive.arrays.local.gpu_ndarray.gpu_ndarray will be created on the engines. Useful for manual instantiation of the local array.
  • kwargs – additional keyword arguments are forwarded to the constructor of the local array.
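To make the target_offsets and target_ranks parameters concrete, here is an illustrative sketch (the concrete values are an assumption consistent with the parameter descriptions above, not output of pyDive): an 8×8 array distributed along both axes over a 2×2 engine grid.

```python
# Illustrative values only, not produced by pyDive itself.
shape = (8, 8)
distaxes = (0, 1)          # both axes are distributed

# One inner list per distributed axis, holding the local arrays' offsets:
target_offsets = [[0, 4],  # offsets along axis 0
                  [0, 4]]  # offsets along axis 1

# Linear list of engine ranks; the last distributed axis is iterated first:
#   rank 0 -> block [0:4, 0:4],  rank 1 -> block [0:4, 4:8],
#   rank 2 -> block [4:8, 0:4],  rank 3 -> block [4:8, 4:8]
target_ranks = [0, 1, 2, 3]

print(len(target_offsets) == len(distaxes))  # True
```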
to_cpu()

Copy the array data to CPU main memory.

Returns:
  pyDive.ndarray – distributed CPU array.
pyDive.arrays.gpu_ndarray.array(array_like, distaxes='all')[source]

Create a pyDive.gpu_ndarray instance from an array-like object.

Parameters:
  • array_like – Any object exposing the array interface, e.g. numpy-array, python sequence, ...
  • distaxes (ints) – distributed axes. Defaults to ‘all’ meaning each axis is distributed.
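Conceptually, this factory scatters the array-like object across the engines along the distributed axes. The following sketch mimics that partitioning locally with numpy (two hypothetical "engines"; this is not pyDive's actual implementation):

```python
import numpy as np

# Sketch of the scattering concept behind array(): split a (4, 6) array
# along axis 0 into one local chunk per "engine".
data = np.arange(24).reshape(4, 6)
n_engines = 2
chunks = np.array_split(data, n_engines, axis=0)  # one local array per engine

print([c.shape for c in chunks])  # [(2, 6), (2, 6)]
```

In pyDive, each chunk would live on a separate IPython.parallel engine as a local gpu array instead of in one process.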
pyDive.arrays.gpu_ndarray.hollow(shape, dtype=<type 'float'>, distaxes='all')[source]

Create a pyDive.gpu_ndarray instance distributed across all engines without allocating a local gpu-array.

Parameters:
  • shape (ints) – shape of array
  • dtype – datatype of a single element
  • distaxes (ints) – distributed axes. Defaults to ‘all’ meaning each axis is distributed.
pyDive.arrays.gpu_ndarray.hollow_like(other)[source]

Create a pyDive.gpu_ndarray instance with the same shape, distribution and type as other without allocating a local gpu-array.