
Structured arrays

Introduction

Structured arrays are ndarrays whose datatype is a composition of simpler datatypes organized as a sequence of named fields. For example,

>>> x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
...              dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
>>> x
array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
      dtype=[('name', '<U10'), ('age', '<i4'), ('weight', '<f4')])

Here x is a one-dimensional array of length two whose datatype is a structure with three fields: 1. A string of length 10 or less named ‘name’, 2. a 32-bit integer named ‘age’, and 3. a 32-bit float named ‘weight’.

If you index x at position 1 you get a structure:

>>> x[1]
('Fido', 3, 27.0)

You can access and modify individual fields of a structured array by indexing with the field name:

>>> x['age']
array([9, 3], dtype=int32)
>>> x['age'] = 5
>>> x
array([('Rex', 5, 81.0), ('Fido', 5, 27.0)],
      dtype=[('name', '<U10'), ('age', '<i4'), ('weight', '<f4')])

Structured arrays are designed for low-level manipulation of structured data, for example, for interpreting binary blobs. Structured datatypes are designed to mimic ‘structs’ in the C language, making them also useful for interfacing with C code. For these purposes, numpy supports specialized features such as subarrays and nested datatypes, and allows manual control over the memory layout of the structure.
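
For example, a binary blob laid out as a 4-byte little-endian integer followed by a 4-byte float can be interpreted directly with numpy.frombuffer. The following is a minimal sketch (the field names are illustrative and the exact repr spacing may vary with the numpy version):

>>> buf = b'\x01\x00\x00\x00\x00\x00\x80?'   # one record: int32 1 followed by float32 1.0
>>> np.frombuffer(buf, dtype=[('id', '<i4'), ('val', '<f4')])
array([(1, 1.)], dtype=[('id', '<i4'), ('val', '<f4')])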

For simple manipulation of tabular data other pydata projects, such as pandas, xarray, or DataArray, provide higher-level interfaces that may be more suitable. These projects may also give better performance for tabular data analysis because the C-struct-like memory layout of structured arrays can lead to poor cache behavior.

Structured Datatypes

To use structured arrays one first needs to define a structured datatype.

A structured datatype can be thought of as a sequence of bytes of a certain length (the structure’s itemsize) which is interpreted as a collection of fields. Each field has a name, a datatype, and a byte offset within the structure. The datatype of a field may be any numpy datatype including other structured datatypes, and it may also be a sub-array which behaves like an ndarray of a specified shape. The offsets of the fields are arbitrary, and fields may even overlap. These offsets are usually determined automatically by numpy, but can also be specified.

Structured Datatype Creation

Structured datatypes may be created using the function numpy.dtype. There are 4 alternative forms of specification which vary in flexibility and conciseness. These are further documented in the Data Type Objects reference page, and in summary they are:

  1. A list of tuples, one tuple per field

    Each tuple has the form (fieldname, datatype, shape) where shape is optional. fieldname is a string (or tuple if titles are used, see Field Titles below), datatype may be any object convertible to a datatype, and shape is a tuple of integers specifying subarray shape.

    >>> np.dtype([('x', 'f4'), ('y', np.float32), ('z', 'f4', (2,2))])
    dtype([('x', '<f4'), ('y', '<f4'), ('z', '<f4', (2, 2))])
    

    If fieldname is the empty string '', the field will be given a default name of the form f#, where # is the integer index of the field, counting from 0 from the left:

    >>> np.dtype([('x', 'f4'),('', 'i4'),('z', 'i8')])
    dtype([('x', '<f4'), ('f1', '<i4'), ('z', '<i8')])
    

    The byte offsets of the fields within the structure and the total structure itemsize are determined automatically.

  2. A string of comma-separated dtype specifications

    In this shorthand notation any of the string dtype specifications may be used in a string and separated by commas. The itemsize and byte offsets of the fields are determined automatically, and the field names are given the default names f0, f1, etc.

    >>> np.dtype('i8,f4,S3')
    dtype([('f0', '<i8'), ('f1', '<f4'), ('f2', 'S3')])
    >>> np.dtype('3int8, float32, (2,3)float64')
    dtype([('f0', 'i1', (3,)), ('f1', '<f4'), ('f2', '<f8', (2, 3))])
    
  3. A dictionary of field parameter arrays

    This is the most flexible form of specification since it allows control over the byte-offsets of the fields and the itemsize of the structure.

    The dictionary has two required keys, ‘names’ and ‘formats’, and four optional keys, ‘offsets’, ‘itemsize’, ‘aligned’ and ‘titles’. The values for ‘names’ and ‘formats’ should respectively be a list of field names and a list of dtype specifications, of the same length. The optional ‘offsets’ value should be a list of integer byte-offsets, one for each field within the structure. If ‘offsets’ is not given the offsets are determined automatically. The optional ‘itemsize’ value should be an integer describing the total size in bytes of the dtype, which must be large enough to contain all the fields.

    >>> np.dtype({'names': ['col1', 'col2'], 'formats': ['i4','f4']})
    dtype([('col1', '<i4'), ('col2', '<f4')])
    >>> np.dtype({'names': ['col1', 'col2'],
    ...           'formats': ['i4','f4'],
    ...           'offsets': [0, 4],
    ...           'itemsize': 12})
    dtype({'names':['col1','col2'], 'formats':['<i4','<f4'], 'offsets':[0,4], 'itemsize':12})
    

    Offsets may be chosen such that the fields overlap, though this will mean that assigning to one field may clobber any overlapping field’s data. As an exception, fields of numpy.object type cannot overlap with other fields, because of the risk of clobbering the internal object pointer and then dereferencing it.

    The optional ‘aligned’ value can be set to True to make the automatic offset computation use aligned offsets (see Automatic Byte Offsets and Alignment), as if the ‘align’ keyword argument of numpy.dtype had been set to True.

    The optional ‘titles’ value should be a list of titles of the same length as ‘names’, see Field Titles below.

  4. A dictionary of field names

    The use of this form of specification is discouraged, but documented here because older numpy code may use it. The keys of the dictionary are the field names and the values are tuples specifying type and offset:

    >>> np.dtype({'col1': ('i1', 0), 'col2': ('f4', 1)})
    dtype([('col1', 'i1'), ('col2', '<f4')])
    

    This form is discouraged because Python dictionaries do not preserve order in Python versions before Python 3.6, and the order of the fields in a structured dtype has meaning. Field Titles may be specified by using a 3-tuple, see below.

Manipulating and Displaying Structured Datatypes

The list of field names of a structured datatype can be found in the names attribute of the dtype object:

>>> d = np.dtype([('x', 'i8'), ('y', 'f4')])
>>> d.names
('x', 'y')

The field names may be modified by assigning to the names attribute using a sequence of strings of the same length.
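
For instance, the following sketch renames the fields of a separate dtype (built the same way as d above, so that d keeps its original names for the examples below):

>>> d2 = np.dtype([('x', 'i8'), ('y', 'f4')])
>>> d2.names = ('a', 'b')
>>> d2
dtype([('a', '<i8'), ('b', '<f4')])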

The dtype object also has a dictionary-like attribute, fields, whose keys are the field names (and Field Titles, see below) and whose values are tuples containing the dtype and byte offset of each field.

>>> d.fields
mappingproxy({'x': (dtype('int64'), 0), 'y': (dtype('float32'), 8)})

Both the names and fields attributes will equal None for unstructured arrays.

The string representation of a structured datatype is shown in the “list of tuples” form if possible, otherwise numpy falls back to using the more general dictionary form.

Automatic Byte Offsets and Alignment

Numpy uses one of two methods to automatically determine the field byte offsets and the overall itemsize of a structured datatype, depending on whether align=True was specified as a keyword argument to numpy.dtype.

By default (align=False), numpy packs the fields together so that each field starts at the byte offset where the previous field ended, and the fields are contiguous in memory.

>>> def print_offsets(d):
...     print("offsets:", [d.fields[name][1] for name in d.names])
...     print("itemsize:", d.itemsize)
>>> print_offsets(np.dtype('u1,u1,i4,u1,i8,u2'))
offsets: [0, 1, 2, 6, 7, 15]
itemsize: 17

If align=True is set, numpy will pad the structure in the same way many C compilers would pad a C-struct. Aligned structures can give a performance improvement in some cases, at the cost of increased datatype size. Padding bytes are inserted between fields such that each field’s byte offset will be a multiple of that field’s alignment, which is usually equal to the field’s size in bytes for simple datatypes, see PyArray_Descr.alignment. The structure will also have trailing padding added so that its itemsize is a multiple of the largest field’s alignment.

>>> print_offsets(np.dtype('u1,u1,i4,u1,i8,u2', align=True))
offsets: [0, 1, 4, 8, 16, 24]
itemsize: 32

Note that although almost all modern C compilers pad in this way by default, padding in C structs is C-implementation-dependent so this memory layout is not guaranteed to exactly match that of a corresponding struct in a C program. Some work may be needed, either on the numpy side or the C side, to obtain exact correspondence.

If offsets were specified using the optional offsets key in the dictionary-based dtype specification, setting align=True will check that each field’s offset is a multiple of its size and that the itemsize is a multiple of the largest field size, and raise an exception if not.

If the offsets of the fields and itemsize of a structured array satisfy the alignment conditions, the array will have the ALIGNED flag set.

A convenience function numpy.lib.recfunctions.repack_fields converts an aligned dtype or array to a packed one and vice versa. It takes either a dtype or structured ndarray as an argument, and returns a copy with fields re-packed, with or without padding bytes.
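
A short sketch of the round trip, reusing the print_offsets helper defined above (the packed layout shown assumes the default field names f0/f1):

>>> from numpy.lib.recfunctions import repack_fields
>>> dt = np.dtype('u1, i4', align=True)    # aligned: 3 padding bytes after the u1 field
>>> print_offsets(dt)
offsets: [0, 4]
itemsize: 8
>>> print_offsets(repack_fields(dt))       # packed copy with the padding removed
offsets: [0, 1]
itemsize: 5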

Field Titles

In addition to field names, fields may also have an associated title, an alternate name, which is sometimes used as an additional description or alias for the field. The title may be used to index an array, just like a field name.

To add titles when using the list-of-tuples form of dtype specification, the field name may be specified as a tuple of two strings instead of a single string, which will be the field’s title and field name respectively. For example:

>>> np.dtype([(('my title', 'name'), 'f4')])

When using the first form of dictionary-based specification, the titles may be supplied as an extra 'titles' key as described above. When using the second (discouraged) dictionary-based specification, the title can be supplied by providing a 3-element tuple (datatype, offset, title) instead of the usual 2-element tuple:

>>> np.dtype({'name': ('i4', 0, 'my title')})

The dtype.fields dictionary will contain titles as keys, if any titles are used. This means effectively that a field with a title will be represented twice in the fields dictionary. The tuple values for these fields will also have a third element, the field title. Because of this, and because the names attribute preserves the field order while the fields attribute may not, it is recommended to iterate through the fields of a dtype using the names attribute of the dtype, which will not list titles, as in:

>>> for name in d.names:
...     print(d.fields[name][:2])
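
As noted above, a title can be used to index an array just like the corresponding field name. A minimal sketch (repr spacing may differ between numpy versions):

>>> dt = np.dtype([(('my title', 'name'), 'f4')])
>>> arr = np.zeros(2, dtype=dt)
>>> arr['my title']          # equivalent to arr['name']
array([0., 0.], dtype=float32)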

Union types

Structured datatypes are implemented in numpy to have base type numpy.void by default, but it is possible to interpret other numpy types as structured types using the (base_dtype, dtype) form of dtype specification described in Data Type Objects. Here, base_dtype is the desired underlying dtype, and fields and flags will be copied from dtype. This dtype is similar to a ‘union’ in C.
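
A small sketch of such a union-like dtype, overlaying four one-byte fields on a 32-bit integer (the byte positions assume a little-endian machine):

>>> dt = np.dtype(('i4', [('r', 'u1'), ('g', 'u1'), ('b', 'u1'), ('a', 'u1')]))
>>> x = np.arange(1, 3, dtype='i4').view(dt)   # reinterpret plain int32 data
>>> x['r']                                     # lowest byte of each element
array([1, 2], dtype=uint8)
>>> x['a']                                     # highest byte of each element
array([0, 0], dtype=uint8)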

Indexing and Assignment to Structured arrays

Assigning data to a Structured Array

There are a number of ways to assign values to a structured array: Using python tuples, using scalar values, or using other structured arrays.

Assignment from Python Native Types (Tuples)

The simplest way to assign values to a structured array is using python tuples. Each assigned value should be a tuple of length equal to the number of fields in the array, and not a list or array as these will trigger numpy’s broadcasting rules. The tuple’s elements are assigned to the successive fields of the array, from left to right:

>>> x = np.array([(1,2,3),(4,5,6)], dtype='i8,f4,f8')
>>> x[1] = (7,8,9)
>>> x
array([(1, 2., 3.), (7, 8., 9.)],
     dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '<f8')])

Assignment from Scalars

A scalar assigned to a structured element will be assigned to all fields. This happens when a scalar is assigned to a structured array, or when an unstructured array is assigned to a structured array:

>>> x = np.zeros(2, dtype='i8,f4,?,S1')
>>> x[:] = 3
>>> x
array([(3, 3.0, True, b'3'), (3, 3.0, True, b'3')],
      dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')])
>>> x[:] = np.arange(2)
>>> x
array([(0, 0.0, False, b'0'), (1, 1.0, True, b'1')],
      dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')])

Structured arrays can also be assigned to unstructured arrays, but only if the structured datatype has just a single field:

>>> twofield = np.zeros(2, dtype=[('A', 'i4'), ('B', 'i4')])
>>> onefield = np.zeros(2, dtype=[('A', 'i4')])
>>> nostruct = np.zeros(2, dtype='i4')
>>> nostruct[:] = twofield
ValueError: Can't cast from structure to non-structure, except if the structure only has a single field.
>>> nostruct[:] = onefield
>>> nostruct
array([0, 0], dtype=int32)

Assignment from other Structured Arrays

Assignment between two structured arrays occurs as if the source elements had been converted to tuples and then assigned to the destination elements. That is, the first field of the source array is assigned to the first field of the destination array, and the second field likewise, and so on, regardless of field names. Structured arrays with a different number of fields cannot be assigned to each other. Bytes of the destination structure which are not included in any of the fields are unaffected.

>>> a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f4'), ('c', 'S3')])
>>> b = np.ones(3, dtype=[('x', 'f4'), ('y', 'S3'), ('z', 'O')])
>>> b[:] = a
>>> b
array([(0.0, b'0.0', b''), (0.0, b'0.0', b''), (0.0, b'0.0', b'')],
      dtype=[('x', '<f4'), ('y', 'S3'), ('z', 'O')])

Assignment involving subarrays

When assigning to fields which are subarrays, the assigned value will first be broadcast to the shape of the subarray.
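
For example, in the following sketch a (2, 2) value is broadcast across the length-2 array dimension, so every element receives the same subarray (repr details may vary with the numpy version):

>>> x = np.zeros(2, dtype=[('a', 'i4', (2, 2))])
>>> x['a'] = [[1, 2], [3, 4]]
>>> x
array([([[1, 2], [3, 4]],), ([[1, 2], [3, 4]],)],
      dtype=[('a', '<i4', (2, 2))])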

Indexing Structured Arrays

Accessing Individual Fields

Individual fields of a structured array may be accessed and modified by indexing the array with the field name.

>>> x = np.array([(1,2),(3,4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
>>> x['foo']
array([1, 3])
>>> x['foo'] = 10
>>> x
array([(10, 2.), (10, 4.)],
      dtype=[('foo', '<i8'), ('bar', '<f4')])

The resulting array is a view into the original array. It shares the same memory locations and writing to the view will modify the original array.

>>> y = x['bar']
>>> y[:] = 5
>>> x
array([(10, 5.), (10, 5.)],
      dtype=[('foo', '<i8'), ('bar', '<f4')])

This view has the same dtype and itemsize as the indexed field, so it is typically a non-structured array, except in the case of nested structures.

>>> y.dtype, y.shape, y.strides
(dtype('float32'), (2,), (12,))

Accessing Multiple Fields

One can index and assign to a structured array with a multi-field index, where the index is a list of field names.

Warning

The behavior of multi-field indexes will change from Numpy 1.15 to Numpy 1.16.

In Numpy 1.16, the result of indexing with a multi-field index will be a view into the original array, as follows:

>>> a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')])
>>> a[['a', 'c']]
array([(0, 0.), (0, 0.), (0, 0.)],
     dtype={'names':['a','c'], 'formats':['<i4','<f4'], 'offsets':[0,8], 'itemsize':12})

Assignment to the view modifies the original array. The view’s fields will be in the order they were indexed. Note that unlike for single-field indexing, the view’s dtype has the same itemsize as the original array, and has fields at the same offsets as in the original array, and unindexed fields are merely missing.

In Numpy 1.15, indexing an array with a multi-field index returns a copy of the result above for 1.16, but with fields packed together in memory as if passed through numpy.lib.recfunctions.repack_fields. This is the behavior since Numpy 1.7.

Warning

The new behavior in Numpy 1.16 leads to extra “padding” bytes at the location of unindexed fields. You will need to update any code which depends on the data having a “packed” layout. For instance code such as:

>>> a[['a','c']].view('i8')  # will fail in Numpy 1.16
ValueError: When changing to a smaller dtype, its size must be a divisor of the size of original dtype

will need to be changed. This code has raised a FutureWarning since Numpy 1.12.

The following is a recommended fix, which will behave identically in Numpy 1.15 and Numpy 1.16:

>>> from numpy.lib.recfunctions import repack_fields
>>> repack_fields(a[['a','c']]).view('i8')  # supported 1.15 and 1.16
array([0, 0, 0])

Assigning to an array with a multi-field index will behave the same in Numpy 1.15 and Numpy 1.16. In both versions the assignment will modify the original array:

>>> a[['a', 'c']] = (2, 3)
>>> a
array([(2, 0, 3.), (2, 0, 3.), (2, 0, 3.)],
      dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<f4')])

This obeys the structured array assignment rules described above. For example, this means that one can swap the values of two fields using appropriate multi-field indexes:

>>> a[['a', 'c']] = a[['c', 'a']]
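
Continuing the example above, the two fields end up with each other's values (expected result; repr spacing may differ between numpy versions):

>>> a
array([(3, 0, 2.), (3, 0, 2.), (3, 0, 2.)],
      dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<f4')])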

Indexing with an Integer to get a Structured Scalar

Indexing a single element of a structured array (with an integer index) returns a structured scalar:

>>> x = np.array([(1, 2., 3.)], dtype='i,f,f')
>>> scalar = x[0]
>>> scalar
(1, 2., 3.)
>>> type(scalar)
numpy.void

Unlike other numpy scalars, structured scalars are mutable and act like views into the original array, such that modifying the scalar will modify the original array. Structured scalars also support access and assignment by field name:

>>> x = np.array([(1,2),(3,4)], dtype=[('foo', 'i8'), ('bar', 'f4')])
>>> s = x[0]
>>> s['bar'] = 100
>>> x
array([(1, 100.), (3, 4.)],
      dtype=[('foo', '<i8'), ('bar', '<f4')])

Similarly to tuples, structured scalars can also be indexed with an integer:

>>> scalar = np.array([(1, 2., 3.)], dtype='i,f,f')[0]
>>> scalar[0]
1
>>> scalar[1] = 4

Thus, tuples might be thought of as the native Python equivalent to numpy’s structured types, much like native python integers are the equivalent to numpy’s integer types. Structured scalars may be converted to a tuple by calling ndarray.item:

>>> scalar.item(), type(scalar.item())
((1, 4.0, 3.0), tuple)

Viewing Structured Arrays Containing Objects

In order to prevent clobbering object pointers in fields of numpy.object type, numpy currently does not allow views of structured arrays containing objects.

Structure Comparison

If the dtypes of two void structured arrays are equal, testing the equality of the arrays will result in a boolean array with the dimensions of the original arrays, with elements set to True where all fields of the corresponding structures are equal. Structured dtypes are equal if the field names, dtypes and titles are the same, ignoring endianness, and the fields are in the same order:

>>> a = np.zeros(2, dtype=[('a', 'i4'), ('b', 'i4')])
>>> b = np.ones(2, dtype=[('a', 'i4'), ('b', 'i4')])
>>> a == b
array([False, False])

Currently, if the dtypes of two void structured arrays are not equivalent the comparison fails, returning the scalar value False. This behavior is deprecated as of numpy 1.10 and will raise an error or perform elementwise comparison in the future.

The < and > operators always return False when comparing void structured arrays, and arithmetic and bitwise operations are not supported.

Record Arrays

As an optional convenience numpy provides an ndarray subclass, numpy.recarray, and associated helper functions in the numpy.rec submodule, that allows access to fields of structured arrays by attribute instead of only by index. Record arrays also use a special datatype, numpy.record, that allows field access by attribute on the structured scalars obtained from the array.

The simplest way to create a record array is with numpy.rec.array:

>>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")],
...                    dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> recordarr.bar
array([ 2.,  3.], dtype=float32)
>>> recordarr[1:2]
rec.array([(2, 3.0, b'World')],
      dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
>>> recordarr[1:2].foo
array([2], dtype=int32)
>>> recordarr.foo[1:2]
array([2], dtype=int32)
>>> recordarr[1].baz
b'World'

numpy.rec.array can convert a wide variety of arguments into record arrays, including structured arrays:

>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
...             dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
>>> recordarr = np.rec.array(arr)

The numpy.rec module provides a number of other convenience functions for creating record arrays, see record array creation routines.

A record array representation of a structured array can be obtained using the appropriate view:

>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
...                dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
>>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
...                      type=np.recarray)

For convenience, viewing an ndarray as type np.recarray will automatically convert to np.record datatype, so the dtype can be left out of the view:

>>> recordarr = arr.view(np.recarray)
>>> recordarr.dtype
dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))

To get back to a plain ndarray both the dtype and type must be reset. The following view does so, taking into account the unusual case that the recordarr was not a structured type:

>>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)

Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise.

>>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))],
...                 dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
>>> type(recordarr.foo)
<class 'numpy.ndarray'>
>>> type(recordarr.bar)
<class 'numpy.core.records.recarray'>

Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but will still be accessible by index.
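
For instance, a field named shape is shadowed by the ndarray attribute of the same name (a small sketch; the field names and values are illustrative):

>>> recordarr = np.rec.array([(1, 2)], dtype=[('shape', 'i4'), ('val', 'i4')])
>>> recordarr.shape        # the ndarray attribute wins
(1,)
>>> recordarr['shape']     # the field is still reachable by index
array([1], dtype=int32)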

Recarray Helper Functions

Collection of utilities to manipulate structured arrays.

Most of these functions were initially implemented by John Hunter for matplotlib. They have been rewritten and extended for convenience.

numpy.lib.recfunctions.append_fields(base, names, data, dtypes=None, fill_value=-1, usemask=True, asrecarray=False)[source]

Add new fields to an existing array.

The names of the fields are given with the names arguments, the corresponding values with the data arguments. If a single field is appended, names, data and dtypes do not have to be lists but just values.

Parameters:
base : array

Input array to extend.

names : string, sequence

String or sequence of strings corresponding to the names of the new fields.

data : array or sequence of arrays

Array or sequence of arrays storing the fields to add to the base.

dtypes : sequence of datatypes, optional

Datatype or sequence of datatypes. If None, the datatypes are estimated from the data.

fill_value : {float}, optional

Filling value used to pad missing data on the shorter arrays.

usemask : {False, True}, optional

Whether to return a masked array or not.

asrecarray : {False, True}, optional

Whether to return a recarray (MaskedRecords) or not.
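
A short usage sketch (the array contents and field names are illustrative; exact repr formatting depends on the numpy version):

>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', 'i4'), ('B', 'f8')])
>>> rfn.append_fields(a, 'C', data=[100, 200], dtypes='i4', usemask=False)
array([(1, 10., 100), (2, 20., 200)],
      dtype=[('A', '<i4'), ('B', '<f8'), ('C', '<i4')])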

numpy.lib.recfunctions.drop_fields(base, drop_names, usemask=True, asrecarray=False)[source]

Return a new array with fields in drop_names dropped.

Nested fields are supported.

Parameters:
base : array

Input array

drop_names : string or sequence

String or sequence of strings corresponding to the names of the fields to drop.

usemask : {False, True}, optional

Whether to return a masked array or not.

asrecarray : {False, True}, optional

Whether to return a recarray or a mrecarray (asrecarray=True) or a plain ndarray or masked array with flexible dtype. The default is False.

Examples

>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))],
...   dtype=[('a', int), ('b', [('ba', float), ('bb', int)])])
>>> rfn.drop_fields(a, 'a')
array([((2.0, 3),), ((5.0, 6),)],
      dtype=[('b', [('ba', '<f8'), ('bb', '<i4')])])
>>> rfn.drop_fields(a, 'ba')
array([(1, (3,)), (4, (6,))],
      dtype=[('a', '<i4'), ('b', [('bb', '<i4')])])
>>> rfn.drop_fields(a, ['ba', 'bb'])
array([(1,), (4,)],
      dtype=[('a', '<i4')])
numpy.lib.recfunctions.find_duplicates(a, key=None, ignoremask=True, return_index=False)[source]

Find the duplicates in a structured array along a given key

Parameters:
a : array-like

Input array

key : {string, None}, optional

Name of the fields along which to check the duplicates. If None, the search is performed by records

ignoremask : {True, False}, optional

Whether masked data should be discarded or considered as duplicates.

return_index : {False, True}, optional

Whether to return the indices of the duplicated values.

Examples

>>> from numpy.lib import recfunctions as rfn
>>> ndtype = [('a', int)]
>>> a = np.ma.array([1, 1, 1, 2, 2, 3, 3],
...         mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype)
>>> rfn.find_duplicates(a, ignoremask=True, return_index=True)
... # XXX: judging by the output, the ignoremask flag has no effect
numpy.lib.recfunctions.get_fieldstructure(adtype, lastname=None, parents=None)[source]

Returns a dictionary with fields indexing lists of their parent fields.

This function is used to simplify access to fields nested in other fields.

Parameters:
adtype : np.dtype

Input datatype

lastname : optional

Last processed field name (used internally during recursion).

parents : dictionary

Dictionary of parent fields (used internally during recursion).

Examples

>>> from numpy.lib import recfunctions as rfn
>>> ndtype =  np.dtype([('A', int),
...                     ('B', [('BA', int),
...                            ('BB', [('BBA', int), ('BBB', int)])])])
>>> rfn.get_fieldstructure(ndtype)
... # XXX: possible regression, order of BBA and BBB is swapped
{'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']}
numpy.lib.recfunctions.join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2', defaults=None, usemask=True, asrecarray=False)[source]

Join arrays r1 and r2 on key key.

The key should be either a string or a sequence of strings corresponding to the fields used to join the arrays. An exception is raised if the key field cannot be found in the two input arrays. Neither r1 nor r2 should have any duplicates along key: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm.

Parameters:
key : {string, sequence}

A string or a sequence of strings corresponding to the fields used for comparison.

r1, r2 : arrays

Structured arrays.

jointype : {‘inner’, ‘outer’, ‘leftouter’}, optional

If ‘inner’, returns the elements common to both r1 and r2. If ‘outer’, returns the common elements as well as the elements of r1 not in r2 and the elements of r2 not in r1. If ‘leftouter’, returns the common elements and the elements of r1 not in r2.

r1postfix : string, optional

String appended to the names of the fields of r1 that are present in r2 but absent from the key.

r2postfix : string, optional

String appended to the names of the fields of r2 that are present in r1 but absent from the key.

defaults : {dictionary}, optional

Dictionary mapping field names to the corresponding default values.

usemask : {True, False}, optional

Whether to return a MaskedArray (or MaskedRecords if asrecarray==True) or an ndarray.

asrecarray : {False, True}, optional

Whether to return a recarray (or MaskedRecords if usemask==True) or just a flexible-type ndarray.

Notes

  • The output is sorted along the key.
  • A temporary array is formed by dropping the fields not in the key for the two arrays and concatenating the result. This array is then sorted, and the common entries selected. The output is constructed by filling the fields with the selected entries. Matching is not preserved if there are some duplicates…
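
A short usage sketch of an inner join (the field names and values are illustrative; exact repr formatting depends on the numpy version):

>>> from numpy.lib import recfunctions as rfn
>>> r1 = np.array([(1, 10.), (2, 20.)], dtype=[('key', 'i4'), ('x', 'f8')])
>>> r2 = np.array([(2, 200), (3, 300)], dtype=[('key', 'i4'), ('y', 'i4')])
>>> rfn.join_by('key', r1, r2, jointype='inner', usemask=False)
array([(2, 20., 200)],
      dtype=[('key', '<i4'), ('x', '<f8'), ('y', '<i4')])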
numpy.lib.recfunctions.merge_arrays(seqarrays, fill_value=-1, flatten=False, usemask=False, asrecarray=False)[source]

Merge arrays field by field.

Parameters:
seqarrays : sequence of ndarrays

Sequence of arrays

fill_value : {float}, optional

Filling value used to pad missing data on the shorter arrays.

flatten : {False, True}, optional

Whether to collapse nested fields.

usemask : {False, True}, optional

Whether to return a masked array or not.

asrecarray : {False, True}, optional

Whether to return a recarray (MaskedRecords) or not.

Notes

  • Without a mask, the missing value will be filled with something, depending on its corresponding type:
    • -1 for integers
    • -1.0 for floating point numbers
    • '-' for characters
    • '-1' for strings
    • True for boolean values
  • XXX: I just obtained these values empirically

Examples

>>> from numpy.lib import recfunctions as rfn
>>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])))
masked_array(data = [(1, 10.0) (2, 20.0) (--, 30.0)],
             mask = [(False, False) (False, False) (True, False)],
       fill_value = (999999, 1e+20),
            dtype = [('f0', '<i4'), ('f1', '<f8')])
>>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])),
...              usemask=False)
array([(1, 10.0), (2, 20.0), (-1, 30.0)],
      dtype=[('f0', '<i4'), ('f1', '<f8')])
>>> rfn.merge_arrays((np.array([1, 2]).view([('a', int)]),
...               np.array([10., 20., 30.])),
...              usemask=False, asrecarray=True)
rec.array([(1, 10.0), (2, 20.0), (-1, 30.0)],
          dtype=[('a', '<i4'), ('f1', '<f8')])
numpy.lib.recfunctions.rec_append_fields(base, names, data, dtypes=None)[source]

Add new fields to an existing array.

The names of the fields are given with the names arguments, the corresponding values with the data arguments. If a single field is appended, names, data and dtypes do not have to be lists but just values.

Parameters:
base : array

Input array to extend.

names : string, sequence

String or sequence of strings corresponding to the names of the new fields.

data : array or sequence of arrays

Array or sequence of arrays storing the fields to add to the base.

dtypes : sequence of datatypes, optional

Datatype or sequence of datatypes. If None, the datatypes are estimated from the data.

Returns:
appended_array : np.recarray

See also

append_fields
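
A short usage sketch (field names and values are illustrative; exact repr formatting depends on the numpy version):

>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', 'i4'), ('B', 'f8')])
>>> rfn.rec_append_fields(a, 'C', [100, 200], dtypes='i4').C
array([100, 200], dtype=int32)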

numpy.lib.recfunctions.rec_drop_fields(base, drop_names)[source]

Returns a new numpy.recarray with fields in drop_names dropped.

numpy.lib.recfunctions.rec_join(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2', defaults=None)[source]

Join arrays r1 and r2 on keys. Alternative to join_by, that always returns a np.recarray.

See also

join_by
equivalent function
numpy.lib.recfunctions.recursive_fill_fields(input, output)[source]

Fills fields from output with fields from input, with support for nested structures.

Parameters:
input : ndarray

Input array.

output : ndarray

Output array.

Notes

  • output should be at least the same size as input

Examples

>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)])
>>> b = np.zeros((3,), dtype=a.dtype)
>>> rfn.recursive_fill_fields(a, b)
array([(1, 10.0), (2, 20.0), (0, 0.0)],
      dtype=[('A', '<i4'), ('B', '<f8')])
numpy.lib.recfunctions.rename_fields(base, namemapper)[source]

Rename the fields from a flexible-datatype ndarray or recarray.

Nested fields are supported.

Parameters:
base : ndarray

Input array whose fields must be modified.

namemapper : dictionary

Dictionary mapping old field names to their new version.

Examples

>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))],
...   dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])])
>>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'})
array([(1, (2.0, [3.0, 30.0])), (4, (5.0, [6.0, 60.0]))],
      dtype=[('A', '<i4'), ('b', [('ba', '<f8'), ('BB', '<f8', 2)])])
numpy.lib.recfunctions.stack_arrays(arrays, defaults=None, usemask=True, asrecarray=False, autoconvert=False)[source]

Superposes arrays field by field.

Parameters:
arrays : array or sequence

Sequence of input arrays.

defaults : dictionary, optional

Dictionary mapping field names to the corresponding default values.

usemask : {True, False}, optional

Whether to return a MaskedArray (or MaskedRecords if asrecarray==True) or an ndarray.

asrecarray : {False, True}, optional

Whether to return a recarray (or MaskedRecords if usemask==True) or just a flexible-type ndarray.

autoconvert : {False, True}, optional

Whether to automatically cast the type of the field to the maximum.

Examples

>>> from numpy.lib import recfunctions as rfn
>>> x = np.array([1, 2,])
>>> rfn.stack_arrays(x) is x
True
>>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)])
>>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)],
...   dtype=[('A', '|S3'), ('B', float), ('C', float)])
>>> test = rfn.stack_arrays((z,zz))
>>> test
masked_array(data = [('A', 1.0, --) ('B', 2.0, --) ('a', 10.0, 100.0) ('b', 20.0, 200.0)
 ('c', 30.0, 300.0)],
             mask = [(False, False, True) (False, False, True) (False, False, False)
 (False, False, False) (False, False, False)],
       fill_value = ('N/A', 1e+20, 1e+20),
            dtype = [('A', '|S3'), ('B', '<f8'), ('C', '<f8')])