First off, the code you're learning from is flawed. It almost certainly doesn't do what the original author thought it did based on the comments in the code.

What the author probably meant was this:

def to_1d(array):
    """prepares an array into a 1d real vector"""
    return array.astype(np.float64).ravel()

However, if array is always going to be an array of complex numbers, then the original code makes some sense.

The only cases where viewing the array (setting a.dtype = 'float64' is equivalent to doing a = a.view('float64')) would double its size are a complex array (numpy.complex128) or a 128-bit floating-point array. For any other dtype, it doesn't make much sense.

For the specific case of a complex array, the original code would convert something like np.array([0.5+1j, 9.0+1.33j]) into np.array([0.5, 1.0, 9.0, 1.33]).

A cleaner way to write that would be:

def complex_to_interleaved_real(array):
    """prepares a complex array into an "interleaved" 1d real vector"""
    return array.copy().view('float64').ravel()

(I'm ignoring the part about returning the original dtype and shape, for the moment.)
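As a quick sanity check, here's the interleaving behavior end to end (a self-contained sketch; the function is repeated so the snippet runs on its own):

```python
import numpy as np

def complex_to_interleaved_real(array):
    """prepares a complex array into an "interleaved" 1d real vector"""
    return array.copy().view('float64').ravel()

x = np.array([0.5 + 1j, 9.0 + 1.33j])
print(complex_to_interleaved_real(x))  # [0.5  1.    9.    1.33]
```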


Background on numpy arrays

To explain what's going on here, you need to understand a bit about what numpy arrays are.

A numpy array consists of a "raw" memory buffer that is interpreted as an array through "views". You can think of all numpy arrays as views.

Views, in the numpy sense, are just a different way of slicing and dicing the same memory buffer without making a copy.

A view has a shape, a data type (dtype), an offset, and strides. Where possible, indexing/reshaping operations on a numpy array will just return a view of the original memory buffer.

This means that things like y = x.T or y = x[::2] don't use any extra memory, and don't make copies of x.
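A small experiment makes the no-copy behavior visible; np.shares_memory confirms that a slice view aliases the original buffer:

```python
import numpy as np

x = np.arange(10)
y = x[::2]            # a view: no data is copied
y[0] = 99             # writing through the view...
print(x[0])           # ...modifies the original array: 99
print(np.shares_memory(x, y))  # True
```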

So, if we have an array similar to this:

import numpy as np
x = np.array([1,2,3,4,5,6,7,8,9,10])

We could reshape it by doing either:

x = x.reshape((2, 5))

or

x.shape = (2, 5)

For readability, the first option is better. They're (almost) exactly equivalent, though. Neither one will make a copy that uses up more memory (the first returns a new Python object wrapping the same buffer, but that's beside the point at the moment).
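If you want to verify the no-copy claim yourself, the base attribute of the reshaped array points back at the original:

```python
import numpy as np

x = np.arange(10)
y = x.reshape((2, 5))
print(y.base is x)   # True: the reshaped array is a view of x's buffer
y[0, 0] = -1
print(x[0])          # -1: both names see the same memory
```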


Dtypes and views

The same thing applies to the dtype. We can view an array as a different dtype by either setting x.dtype or by calling x.view(...).

So we can do things like this:

import numpy as np
x = np.array([1, 2, 3], dtype=np.int64)

print('The original array')
print(x)

print('\n...Viewed as unsigned 8-bit integers (notice the length change!)')
y = x.view(np.uint8)
print(y)

print('\n...Doing the same thing by setting the dtype')
x.dtype = np.uint8
print(x)

print('\n...And we can set the dtype again and go back to the original.')
x.dtype = np.int64
print(x)

Which yields:

The original array
[1 2 3]

...Viewed as unsigned 8-bit integers (notice the length change!)
[1 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0]

...Doing the same thing by setting the dtype
[1 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0]

...And we can set the dtype again and go back to the original.
[1 2 3]

Keep in mind, though, that this is giving you low-level control over the way that the memory buffer is interpreted.

For example:

import numpy as np
x = np.arange(10, dtype=np.int64)

print('An integer array:', x)
print('But if we view it as a float:', x.view(np.float64))
print("...It's probably not what we expected...")

This yields:

An integer array: [0 1 2 3 4 5 6 7 8 9]
But if we view it as a float: [  0.00000000e+000   4.94065646e-324   
   9.88131292e-324   1.48219694e-323   1.97626258e-323   
   2.47032823e-323   2.96439388e-323   3.45845952e-323
   3.95252517e-323   4.44659081e-323]
...It's probably not what we expected...

So, we're interpreting the underlying bits of the original memory buffer as floats, in this case.

If we wanted to make a new copy with the ints recast as floats, we'd use x.astype(np.float64).
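To see the difference in practice: astype allocates a fresh buffer and converts each value, so the result owns its own memory (a minimal sketch):

```python
import numpy as np

x = np.arange(5, dtype=np.int64)
f = x.astype(np.float64)   # new buffer, values converted (not reinterpreted)
print(f)                   # [0. 1. 2. 3. 4.]
print(f.base is None)      # True: astype made a copy, not a view
```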


Complex Numbers

Complex numbers are stored (in C, Python, and NumPy alike) as two floats. The first is the real part and the second is the imaginary part.

So, if we do:

import numpy as np
x = np.array([0.5+1j, 1.0+2j, 3.0+0j])

We can see the real (x.real) and imaginary (x.imag) parts. If we convert this to a float, we'll get a warning about discarding the imaginary part, and we'll get an array with just the real part.

print(x.real)
print(x.astype(float))

astype makes a copy and converts the values to the new type.

However, if we view this array as a float, we'll get a sequence of item1.real, item1.imag, item2.real, item2.imag, ....

print(x)
print(x.view(float))

yields:

[ 0.5+1.j  1.0+2.j  3.0+0.j]
[ 0.5  1.   1.   2.   3.   0. ]

Each complex number is essentially two floats, so if we change how numpy interprets the underlying memory buffer, we get an array of twice the length.
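The reinterpretation is lossless, so viewing the interleaved floats back as complex128 recovers the original array (a quick round-trip sketch):

```python
import numpy as np

x = np.array([0.5 + 1j, 1.0 + 2j, 3.0 + 0j])
interleaved = x.view(np.float64)
print(interleaved.size, x.size)         # 6 3 : twice the length
# Viewing back as complex recovers the original values:
print(interleaved.view(np.complex128))
```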

Hopefully that helps clear things up a bit...

Answer from Joe Kington on Stack Overflow
๐ŸŒ
NumPy
numpy.org โ€บ doc โ€บ stable โ€บ reference โ€บ arrays.dtypes.html
Data type objects (dtype) โ€” NumPy v2.4 Manual
A data type object (an instance of numpy.dtype class) describes how the bytes in the fixed-size block of memory corresponding to an array item should be interpreted. It describes the following aspects of the data: Type of the data (integer, float, Python object, etc.)
Top answer
1 of 4
45

First off, the code you're learning from is flawed. It almost certainly doesn't do what the original author thought it did based on the comments in the code.

What the author probably meant was this:

def to_1d(array):
    """prepares an array into a 1d real vector"""
    return array.astype(np.float64).ravel()

However, if array is always going to be an array of complex numbers, then the original code makes some sense.

The only cases where viewing the array (a.dtype = 'float64' is equivalent to doing a = a.view('float64')) would double its size is if it's a complex array (numpy.complex128) or a 128-bit floating point array. For any other dtype, it doesn't make much sense.

For the specific case of a complex array, the original code would convert something like np.array([0.5+1j, 9.0+1.33j]) into np.array([0.5, 1.0, 9.0, 1.33]).

A cleaner way to write that would be:

def complex_to_iterleaved_real(array):
     """prepares a complex array into an "interleaved" 1d real vector"""
    return array.copy().view('float64').ravel()

(I'm ignoring the part about returning the original dtype and shape, for the moment.)


Background on numpy arrays

To explain what's going on here, you need to understand a bit about what numpy arrays are.

A numpy array consists of a "raw" memory buffer that is interpreted as an array through "views". You can think of all numpy arrays as views.

Views, in the numpy sense, are just a different way of slicing and dicing the same memory buffer without making a copy.

A view has a shape, a data type (dtype), an offset, and strides. Where possible, indexing/reshaping operations on a numpy array will just return a view of the original memory buffer.

This means that things like y = x.T or y = x[::2] don't use any extra memory, and don't make copies of x.

So, if we have an array similar to this:

import numpy as np
x = np.array([1,2,3,4,5,6,7,8,9,10])

We could reshape it by doing either:

x = x.reshape((2, 5))

or

x.shape = (2, 5)

For readability, the first option is better. They're (almost) exactly equivalent, though. Neither one will make a copy that will use up more memory (the first will result in a new python object, but that's beside the point, at the moment.).


Dtypes and views

The same thing applies to the dtype. We can view an array as a different dtype by either setting x.dtype or by calling x.view(...).

So we can do things like this:

import numpy as np
x = np.array([1,2,3], dtype=np.int)

print 'The original array'
print x

print '\n...Viewed as unsigned 8-bit integers (notice the length change!)'
y = x.view(np.uint8)
print y

print '\n...Doing the same thing by setting the dtype'
x.dtype = np.uint8
print x

print '\n...And we can set the dtype again and go back to the original.'
x.dtype = np.int
print x

Which yields:

The original array
[1 2 3]

...Viewed as unsigned 8-bit integers (notice the length change!)
[1 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0]

...Doing the same thing by setting the dtype
[1 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0]

...And we can set the dtype again and go back to the original.
[1 2 3]

Keep in mind, though, that this is giving you low-level control over the way that the memory buffer is interpreted.

For example:

import numpy as np
x = np.arange(10, dtype=np.int)

print 'An integer array:', x
print 'But if we view it as a float:', x.view(np.float)
print "...It's probably not what we expected..."

This yields:

An integer array: [0 1 2 3 4 5 6 7 8 9]
But if we view it as a float: [  0.00000000e+000   4.94065646e-324   
   9.88131292e-324   1.48219694e-323   1.97626258e-323   
   2.47032823e-323   2.96439388e-323   3.45845952e-323
   3.95252517e-323   4.44659081e-323]
...It's probably not what we expected...

So, we're interpreting the underlying bits of the original memory buffer as floats, in this case.

If we wanted to make a new copy with the ints recasted as floats, we'd use x.astype(np.float).


Complex Numbers

Complex numbers are stored (in both C, python, and numpy) as two floats. The first is the real part and the second is the imaginary part.

So, if we do:

import numpy as np
x = np.array([0.5+1j, 1.0+2j, 3.0+0j])

We can see the real (x.real) and imaginary (x.imag) parts. If we convert this to a float, we'll get a warning about discarding the imaginary part, and we'll get an array with just the real part.

print x.real
print x.astype(float)

astype makes a copy and converts the values to the new type.

However, if we view this array as a float, we'll get a sequence of item1.real, item1.imag, item2.real, item2.imag, ....

print x
print x.view(float)

yields:

[ 0.5+1.j  1.0+2.j  3.0+0.j]
[ 0.5  1.   1.   2.   3.   0. ]

Each complex number is essentially two floats, so if we change how numpy interprets the underlying memory buffer, we get an array of twice the length.

Hopefully that helps clear things up a bit...

Another answer from the same Stack Overflow question:

By changing the dtype in this way, you are changing the way a fixed block of memory is being interpreted.

Example:

>>> import numpy as np
>>> a=np.array([1,0,0,0,0,0,0,0],dtype='int8')
>>> a
array([1, 0, 0, 0, 0, 0, 0, 0], dtype=int8)
>>> a.dtype='int64'
>>> a
array([1])

Note how the change from int8 to int64 turned an 8-element, 8-bit integer array into a 1-element, 64-bit array. It is the same 8-byte block, however. On my i7 machine with native (little-endian) byte order, that byte pattern is the same as 1 in int64 format.

Change the position of the 1:

>>> a=np.array([0,0,0,1,0,0,0,0],dtype='int8')
>>> a.dtype='int64'
>>> a
array([16777216])

Another example:

>>> a=np.array([0,0,0,0,0,0,1,0],dtype='int32')
>>> a.dtype='int64'
>>> a
array([0, 0, 0, 1])

Change the position of the 1 in the 8-element, 32-bit array (32 bytes in total):

>>> a=np.array([0,0,0,1,0,0,0,0],dtype='int32')
>>> a.dtype='int64'
>>> a
array([         0, 4294967296,          0,          0]) 

It is the same block of bits reinterpreted.
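One way to convince yourself it's the same bytes is to compare the raw buffers directly (the [1] result assumes a little-endian machine):

```python
import numpy as np

a = np.array([1, 0, 0, 0, 0, 0, 0, 0], dtype='int8')
b = a.view('int64')                # same 8 bytes, reinterpreted
print(a.tobytes() == b.tobytes())  # True: identical underlying bytes
print(b)                           # [1] on a little-endian machine
```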

๐ŸŒ
Python Course
python-course.eu โ€บ numerical-programming โ€บ numpy-data-objects-dtype.php
3. Numpy Data Objects, dtype | Numerical Programming
We offer live Python training courses covering the content of this site. ... To demonstrate how structured NumPy arrays with the same shape and fields can be compared, we will create a new array containing the population data from 1995: import numpy as np # Define the structured dtype dt = np.dtype([ ('country', 'U20'), ('density', 'i4'), ('area', 'i4'), ('population', 'i4') ]) # 1995 population data population_table_1995 = np.array([ ('Netherlands', 462, 33720, 15_565_032), ('Belgium', 332, 30510, 10_137_265), ('United Kingdom', 239, 243610, 58_154_634), ('Germany', 235, 348560, 82_019_890),
๐ŸŒ
GeeksforGeeks
geeksforgeeks.org โ€บ python โ€บ data-type-object-dtype-numpy-python
Data type Object (dtype) in NumPy Python - GeeksforGeeks
January 19, 2026 - DSA Python ยท Data Science ยท NumPy ยท Pandas ยท Practice ยท Django ยท Flask ยท Last Updated : 19 Jan, 2026 ยท In NumPy, dtype defines the type of data stored in an array and how much memory each value uses.
๐ŸŒ
W3Schools
w3schools.com โ€บ python โ€บ numpy โ€บ numpy_data_types.asp
NumPy Data Types
import numpy as np arr = np.array([1, 2, 3, 4], dtype='i4') print(arr) print(arr.dtype) Try it Yourself ยป ยท If a type is given in which elements can't be casted then NumPy will raise a ValueError. ValueError: In Python ValueError is raised when the type of passed argument to a function is unexpected/incorrect.
๐ŸŒ
Pandas
pandas.pydata.org โ€บ docs โ€บ reference โ€บ api โ€บ pandas.DataFrame.dtypes.html
pandas.DataFrame.dtypes โ€” pandas 3.0.2 documentation
>>> df = pd.DataFrame( ... { ... "float": [1.0], ... "int": [1], ... "datetime": [pd.Timestamp("20180310")], ... "string": ["foo"], ... } ... ) >>> df.dtypes float float64 int int64 datetime datetime64[us] string str dtype: object
๐ŸŒ
Python for Data Science
python4data.science โ€บ en โ€บ latest โ€บ workspace โ€บ numpy โ€บ dtype.html
dtype - Python for Data Science
ndarray is a container for homogeneous data, i.e. all elements must be of the same type. Each array has a dtype, an object that describes the data type of the array: NumPy data types:,,, Type, Type...
๐ŸŒ
Practical Business Python
pbpython.com โ€บ pandas_dtypes.html
Overview of Pandas Data Types - Practical Business Python
Customer Number float64 Customer Name object 2016 object 2017 object Percent Growth object Jan Units object Month int64 Day int64 Year int64 Active bool dtype: object ยท Whether you choose to use a lambda function, create a more standard python function or use another approach like np.where() , these approaches are very flexible and can be customized for your own unique data needs.
Find elsewhere
๐ŸŒ
NumPy
numpy.org โ€บ doc โ€บ 2.1 โ€บ reference โ€บ arrays.dtypes.html
Data type objects (dtype) โ€” NumPy v2.1 Manual
A data type object (an instance of numpy.dtype class) describes how the bytes in the fixed-size block of memory corresponding to an array item should be interpreted. It describes the following aspects of the data: Type of the data (integer, float, Python object, etc.)
Top answer to a related Stack Overflow question, about NumPy's type vs. dtype:

The simple, high-level answer is that NumPy layers a second type system atop Python's type system.

When you ask for the type of a NumPy object, you get the type of the container--something like numpy.ndarray. But when you ask for the dtype, you get the (NumPy-managed) type of the elements.

>>> from numpy import *
>>> arr = array([1.0, 4.0, 3.14])
>>> type(arr)
<class 'numpy.ndarray'>
>>> arr.dtype
dtype('float64')

Sometimes, as when using the default float type, the element data type (dtype) is equivalent to a Python type. But that's equivalent, not identical:

>>> arr.dtype == float
True
>>> arr.dtype is float
False

In other cases, there is no equivalent Python type--for example, when you specified uint8. Such values can still be managed by Python, but unlike in C, Rust, and other "systems languages," working with values that map directly to machine data types (the way uint8 maps to "unsigned byte" computations) is not a common use case in Python.

So the big story is that NumPy provides containers like arrays and matrices that operate under its own type system. And it provides a bunch of highly useful, well-optimized routines to operate on those containers (and their elements). You can mix-and-match NumPy and normal Python computations, if you use care.

There is no Python type uint8. There is a constructor function named uint8, which when called returns a NumPy type:

>>> u = uint8(44)
>>> u
44
>>> u.dtype
dtype('uint8')
>>> type(u)
<class 'numpy.uint8'>

So "can I create an array of type (not dtype) uint8...?" No. You can't. There is no such animal. You can do computations constrained to uint8 rules without using NumPy arrays (a.k.a. NumPy scalar values). E.g.:

>>> uint8(44 + 1000)
20
>>> uint8(44) + uint8(1000)
20

(Note that NumPy 2.0 tightened this: converting an out-of-range Python int, as in uint8(1000), now raises an OverflowError instead of wrapping.) But if you want to compute values mod 256, it's probably easier to use Python's mod operator:

>>> (44 + 1000) % 256
20

Driving data values larger than 255 into uint8 data types and then doing arithmetic is a rather backdoor way to get mod-256 arithmetic. If you're not careful, you'll either cause Python to "upgrade" your values to full integers (killing your mod-256 scheme), or trigger overflow exceptions (because tricks that work great in C and machine language are often flagged by higher level languages).
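To make the comparison concrete, here is mod-256 arithmetic done both ways; array arithmetic on uint8 wraps modulo 256 silently:

```python
import numpy as np

# Mod-256 arithmetic with plain Python ints is explicit:
print((44 + 1000) % 256)           # 20

# uint8 array arithmetic wraps modulo 256 silently:
a = np.array([44], dtype=np.uint8)
print((a + np.uint8(232))[0])      # 20 (44 + 232 = 276 wraps to 20)
```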

Another answer:

The type of a NumPy array is numpy.ndarray; this is just the type of Python object it is (similar to how type("hello") is str for example).

dtype just defines how bytes in memory will be interpreted by a scalar (i.e. a single number) or an array and the way in which the bytes will be treated (e.g. int/float). For that reason you don't change the type of an array or scalar, just its dtype.

As you observe, if you multiply two scalars, the resulting datatype is the smallest "safe" type to which both values can be cast. However, multiplying an array and a scalar will simply return an array of the same datatype. The documentation for np.result_type describes when a particular scalar or array object's dtype is changed:

Type promotion in NumPy works similarly to the rules in languages like C++, with some slight differences. When both scalars and arrays are used, the array's type takes precedence and the actual value of the scalar is taken into account.

The documentation continues:

If there are only scalars or the maximum category of the scalars is higher than the maximum category of the arrays, the data types are combined with promote_types to produce the return value.

So for np.uint8(200) * 2, two scalars, the resulting datatype will be the type returned by np.promote_types:

>>> np.promote_types(np.uint8, int)
dtype('int32')

For np.array([200], dtype=np.uint8) * 2 the array's datatype takes precedence over the scalar int and a np.uint8 datatype is returned. (NumPy 2.0 adopted NEP 50, which replaced these value-based rules: a Python int scalar now follows the other operand's dtype, so np.uint8(200) * 2 also stays uint8 and wraps on overflow.)

To address your final question about retaining the dtype of a scalar during operations, you'll have to restrict the datatypes of any other scalars you use to avoid NumPy's automatic dtype promotion:

>>> np.array([200], dtype=np.uint8) * np.uint8(2)
array([144], dtype=uint8)

The alternative, of course, is to simply wrap the single value in a NumPy array (and then NumPy won't cast it in operations with scalars of different dtype).

To promote the type of an array during an operation, you could wrap any scalars in an array first:

>>> np.array([200], dtype=np.uint8) * np.array([2])
array([400])
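The promotion rules themselves can be queried directly with np.result_type and np.promote_types, which is handy for checking what an operation will return before running it:

```python
import numpy as np

# result_type / promote_types apply the promotion rules to dtypes:
print(np.result_type(np.uint8, np.int16))      # int16 (holds all uint8 values)
print(np.promote_types(np.uint8, np.float32))  # float32
```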
๐ŸŒ
GeeksforGeeks
geeksforgeeks.org โ€บ python โ€บ numpy-data-type-objects
NumPy - Data type Objects(dtype) - GeeksforGeeks
July 23, 2025 - Every ndarray has an associated data type (dtype) object. This data type object (dtype) informs us about the layout of the array. This means it gives us information about : Type of the data (integer, float, Python object etc.)
๐ŸŒ
pandas
pandas.pydata.org โ€บ pdeps โ€บ 0014-string-dtype.html
pandas - Python Data Analysis Library
May 3, 2024 - Second: this is not efficient (all string methods on a Series are eventually calling Python methods on the individual string objects). To solve the first issue, a dedicated extension dtype for string data has already been added in pandas 1.0. This has always been opt-in for now, requiring users to explicitly request the dtype (with dtype="string" or dtype=pd.StringDtype()).
Top answer to a related Stack Overflow question, about string dtypes in NumPy arrays:

NumPy arrays are stored as contiguous blocks of memory. They usually have a single datatype (e.g. integers, floats or fixed-length strings) and then the bits in memory are interpreted as values with that datatype.

Creating an array with dtype=object is different. The memory taken by the array now is filled with pointers to Python objects which are being stored elsewhere in memory (much like a Python list is really just a list of pointers to objects, not the objects themselves).

Arithmetic operators such as * don't work with arrays such as ar1 which have a string_ datatype (there are special functions instead - see below). NumPy is just treating the bits in memory as characters and the * operator doesn't make sense here. However, the line

np.array(['avinash','jay'], dtype=object) * 2

works because now the array is an array of (pointers to) Python strings. The * operator is well defined for these Python string objects. New Python strings are created in memory and a new object array with references to the new strings is returned.


If you have an array with string_ or unicode_ dtype and want to repeat each string, you can use np.char.multiply:

In [52]: np.char.multiply(ar1, 2)
Out[52]: array(['avinashavinash', 'jayjay'], 
      dtype='<U14')

NumPy has many other vectorised string methods too.
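For instance, a couple of those vectorized helpers in action on a fixed-width string array (a small illustrative sketch):

```python
import numpy as np

ar = np.array(['avinash', 'jay'])   # fixed-width unicode dtype, here '<U7'
print(np.char.upper(ar))            # ['AVINASH' 'JAY']
print(np.char.add(ar, '!'))         # ['avinash!' 'jay!']
```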

Another answer:

There are 3 main dtypes to store strings in numpy:

  • object: Stores pointers to Python objects
  • str: Stores fixed-width strings
  • numpy.dtypes.StringDType: New in numpy 2.0 and stores variable-width strings
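The three storage strategies can be seen from the dtypes themselves (the StringDType line is commented out since it needs numpy >= 2.0):

```python
import numpy as np

a_obj = np.array(['hi', 'world'], dtype=object)  # pointers to Python str objects
a_fix = np.array(['hi', 'world'], dtype=str)     # fixed-width unicode
print(a_obj.dtype)   # object
print(a_fix.dtype)   # <U5  (wide enough for the longest string)
# Variable-width strings, numpy >= 2.0 only:
# a_var = np.array(['hi', 'world'], dtype=np.dtypes.StringDType())
```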

str consumes more memory than object; StringDType is better

Depending on the length of the fixed-length string and the size of the array, the ratio differs but as long as the longest string in the array is longer than 2 characters, str consumes more memory (they are equal when the longest string in the array is 2 characters long). For example, in the following example, str consumes almost 8 times more memory.

On the other hand, the new (in numpy>=2.0) numpy.dtypes.StringDType stores variable width strings, so consumes much less memory.

import numpy as np
from pympler.asizeof import asizeof

ar1 = np.array(['this is a string', 'string']*1000, dtype=object)
ar2 = np.array(['this is a string', 'string']*1000, dtype=str)
ar3 = np.array(['this is a string', 'string']*1000, dtype=np.dtypes.StringDType())

asizeof(ar2) / asizeof(ar1)  # 7.944444444444445
asizeof(ar3) / asizeof(ar1)  # 1.992063492063492

For numpy 1.x, str is slower than object

For numpy>=2.0.0, str is faster than object

Numpy 2.0 has introduced a new numpy.strings API that has much more performant ufuncs for string operations. A simple test (on numpy 2.2.0) below shows that vectorized string operations on an array of str or StringDType dtype is much faster than the same operations on an object dtype array.

import timeit

t1 = min(timeit.repeat(lambda: ar1*2, number=1000))
t2a = min(timeit.repeat(lambda: np.strings.multiply(ar2, 2), number=1000))
t2b = min(timeit.repeat(lambda: np.strings.multiply(ar3, 2), number=1000))
print(t2a / t1)   # 0.8786601958427778
print(t2b / t1)   # 0.7311586933668037

t3 = min(timeit.repeat(lambda: np.array([s.count('i') for s in ar1]), number=1000))
t4a = min(timeit.repeat(lambda: np.strings.count(ar2, 'i'), number=1000))
t4b = min(timeit.repeat(lambda: np.strings.count(ar3, 'i'), number=1000))

print(t4a / t3)   # 0.13328748153237377
print(t4b / t3)   # 0.3365874412749679

For numpy<2.0.0 (tested on numpy 1.26.0)

Numpy 1.x's vectorized string methods are not optimized, so operating on the object array is often faster. For example, in the example in the OP where each character is repeated, a simple * (aka multiply()) is not only more concise but also over 10 times faster than char.multiply().

import timeit
setup = "import numpy as np; from __main__ import ar1, ar2"
t1 = min(timeit.repeat("ar1*2", setup, number=1000))
t2 = min(timeit.repeat("np.char.multiply(ar2, 2)", setup, number=1000))
t2 / t1   # 10.650433758517027

Even for functions that cannot readily be applied to the whole array, instead of using the vectorized char method on the str array, it is often faster to loop over the object array and work on the Python strings.

For example, iterating over the object array and calling str.count() on each Python string is over 3 times faster than the vectorized char.count() on the str array.

f1 = lambda: np.array([s.count('i') for s in ar1])
f2 = lambda: np.char.count(ar2, 'i')

setup = "import numpy as np; from __main__ import ar1, ar2, f1, f2"
t3 = min(timeit.repeat("f1()", setup, number=1000))
t4 = min(timeit.repeat("f2()", setup, number=1000))

t4 / t3   # 3.251369161574832

On a side note, when it comes to an explicit loop, iterating over a list is faster than iterating over a numpy array. So in the previous example, a further performance gain can be made by iterating over a list:

f3 = lambda: np.array([s.count('i') for s in ar1.tolist()])
#                                               ^^^^^^^^^  <--- convert to list here
t5 = min(timeit.repeat("f3()", setup, number=1000))
t3 / t5   # 1.2623498005294627
๐ŸŒ
w3resource
w3resource.com โ€บ numpy โ€บ data-type-routines โ€บ dtype.php
NumPy Data type: dtype() function - w3resource
August 19, 2022 - dtype : dtype or Python type - The data type of rep. Example: numpy.dtype() function ยท
๐ŸŒ
SciPy
docs.scipy.org โ€บ doc โ€บ numpy-1.4.x โ€บ reference โ€บ arrays.dtypes.html
Data type objects (dtype) - Numpy and Scipy Documentation
July 21, 2010 - A data type object (an instance of numpy.dtype class) describes how the bytes in the fixed-size block of memory corresponding to an array item should be interpreted. It describes the following aspects of the data: Type of the data (integer, float, Python object, etc.)
๐ŸŒ
Note.nkmk.me
note.nkmk.me โ€บ home โ€บ python โ€บ numpy
NumPy: astype() to change dtype of an array | note.nkmk.me
February 4, 2024 - NumPy arrays (ndarray) hold a data type (dtype). You can set this through various operations, such as when creating an ndarray with np.array(), or change it later with astype(). Data type objects (dty ...
๐ŸŒ
DataCamp
datacamp.com โ€บ doc โ€บ numpy โ€บ data-types
NumPy Data Types
The `dtype` in NumPy is used to specify the desired data type for the elements of an array. This can be crucial for optimizing performance and ensuring compatibility with other data processing operations and interoperability with other systems and libraries. python import numpy as np array ...
๐ŸŒ
Ufkapano
ufkapano.github.io โ€บ scicomppy โ€บ week08 โ€บ np_dtype.html
Python
# Using array-scalar types. dt_int = np.dtype(int) # Python-compatible integer # "b" (byte), "i1", 'int8'; # "h", "i2", "int16"; # "i", "i4", "int32"; "i8", "int64" ('int' 64-bit); # unsigned int: "B", "u1", 'uint8'; # "H", "u2", 'uint16'; # "u4", "uint32"; # "u8", "uint64"; dt_float = np.dtype(float) # Python-compatible floating-point number # "f2", 'float16'; # "f", "f4", 'float32' ('float' in 32-bit); # "d", "f8", 'float64' ('double', 'float' in 64-bit); # 'float128'; dt_bool = np.dtype(bool) dt_complex = np.dtype(complex) # 'complex64', 'complex128', 'complex256' dt_object = np.dtype(object) # Python object