## 03 - 02 Ufuncs

Python's default implementation performs some operations slowly. This is in part due to the dynamic and interpreted nature of the language. It is this feature that allows types to be flexible, but since the type has to be checked at every operation, sequences of operations cannot be compiled down to efficient machine code as in languages like C.
Let's take a look at a native Python implementation of this:

```
from __future__ import print_function
import numpy as np

def get_sin(arr):
    # Create an empty output array of the same size as the input
    output = np.empty_like(arr)
    for i in range(len(output)):
        output[i] = np.sin(arr[i])
    return output
```

```
input_arr = np.random.uniform(-np.pi, np.pi, 10000000)
%time get_sin(input_arr)
```

- IPython adds some commands that further enhance its interactivity. These commands begin with `%` and are known as magic commands. `%time` gives information about the time taken to execute a Python statement.
- There are many built-in magic commands, and since they all start with `%`, you can simply type `%` in one of the code blocks and press `?` or `Shift + <TAB>` after it to get the docstring.
- Remember that these magic commands are specific to IPython (and Jupyter notebooks). They cannot be used in native Python code.

Even though the above implementation is correct and might look optimized to people who are familiar with languages like C and Java, the loop takes a significant amount of time (check the `total` CPU times) and is horribly inefficient for the reasons mentioned above.

This is where NumPy's `ufunc`s come to save the day. NumPy provides a convenient interface into these kinds of statically typed, compiled routines. This is known as a `vectorized operation`. It can be used by simply performing an operation on the array, which will then be applied to each element.

The vectorized approach is designed to push the loop part of the code into the compiled layer that underlies NumPy, leading to much faster execution.

Let's take a look at a NumPy ufunc-based solution for the same example:

```
input_arr = np.random.uniform(-np.pi, np.pi, 10000000)
%time np.sin(input_arr)
```

That's much faster, right?

You can also use these ufuncs on multi-dimensional arrays.

```
arr = np.random.randint(1, 100, (3, 4))
# take reciprocal
print("Original Array: \n{}".format(arr), end="\n\n")
print("Reciprocal: \n{}".format(1/arr), end="\n\n")
```

```
x = np.arange(-5, 5)
print("x =", x)
print("x + 10 =", x + 10) # wrapper for np.add
print("x - 10 =", x - 10) # wrapper for np.subtract
print("x * 4 =", x * 4) # wrapper for np.multiply
print("x / 4 =", x / 4) # wrapper for np.divide
print("x % 4 =", x % 4) # wrapper for np.mod
print("x // 4 =", x // 4) # wrapper for np.floor_divide
print("x ** 2 =", x ** 2) # wrapper for np.power
print("abs(x) =", abs(x)) # wrapper for np.abs
```
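The operators above are convenience wrappers: calling the underlying ufunc directly gives exactly the same result, and also exposes extras such as the `out=` parameter for writing into a pre-allocated array. A small sketch:

```
import numpy as np

x = np.arange(-5, 5)

# The operator and the ufunc it wraps produce identical arrays
print(np.array_equal(x + 10, np.add(x, 10)))   # True
print(np.array_equal(x ** 2, np.power(x, 2)))  # True

# Calling the ufunc directly lets you reuse an existing output array
out = np.empty_like(x)
np.multiply(x, 4, out=out)  # writes the result into `out`, no new array
print(out)
```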

The above operations have been performed on an array of a particular datatype, so the result will have the same datatype as the array being operated on. However, when you perform an operation on an array that results in a different datatype, or an operation on multiple arrays of different datatypes, the type of the resulting array will correspond to the more precise one. This is also known as `upcast`ing. In the above example, check the output of division (`/`). Can you find the type of that array?
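To check, you can inspect the `dtype` attribute of the results. True division (`/`) upcasts an integer array to floats, while floor division (`//`) keeps the integer type:

```
import numpy as np

x = np.arange(-5, 5)
print("x.dtype:       ", x.dtype)          # an integer type
print("(x / 4).dtype: ", (x / 4).dtype)    # float64 -- `/` always upcasts
print("(x // 4).dtype:", (x // 4).dtype)   # floor division stays integer
```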

When standard mathematical operations are used with NumPy arrays, they are applied on an element-by-element basis and a new array is created and filled with the result. This means that the arrays should be of the same size when any mathematical operation is performed on them.

```
arr1 = np.array([1., 2., 3., 4.])
arr2 = np.linspace(4, 16, num=4)
print("Array1: \n{}".format(arr1), end="\n\n")
print("Array2: \n{}".format(arr2), end="\n\n")
print("\n Array2 - Array1: \n {}".format(arr2-arr1), end="\n\n")
```

However, if there were a size mismatch, we would receive a `ValueError`:

```
arr2 = np.linspace(4, 16, num=3)
print("\n Array2 - Array1: \n {}".format(arr2-arr1), end="\n\n")
```

Well, you might wonder why we did not get a similar error when we added a single number to an array. We shall look at this in the module on Broadcasting.
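As a small side-by-side sketch of the two behaviours: a scalar combines with an array of any shape without complaint, while two arrays of mismatched sizes raise a `ValueError` as above.

```
import numpy as np

arr1 = np.array([1., 2., 3., 4.])

# A scalar is "stretched" to match the array's shape -- no error
print(arr1 + 10)

# But two arrays with incompatible shapes raise a ValueError
try:
    arr1 + np.linspace(4, 16, num=3)
except ValueError as e:
    print("ValueError:", e)
```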

#### 02.01.02 Trigonometric Functions

Just like the arithmetic operations, NumPy provides a bunch of trigonometric `ufuncs`. Let's take a look at some:

```
input_arr = np.random.uniform(-1, 1, 5)
print("Input Array: \n{}".format(input_arr), end="\n\n")
print("sin: \n{}".format(np.sin(input_arr)), end="\n\n")
print("cos: \n{}".format(np.cos(input_arr)), end="\n\n")
print("tan: \n{}".format(np.tan(input_arr)), end="\n\n")
print("arcsin: \n{}".format(np.arcsin(input_arr)), end="\n\n")
print("arccos: \n{}".format(np.arccos(input_arr)), end="\n\n")
print("arctan: \n{}".format(np.arctan(input_arr)), end="\n\n")
```
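One quick way to sanity-check these functions is with the usual trigonometric identities, which hold element-wise (up to floating-point rounding). For instance, `np.arcsin` inverts `np.sin` on the interval [-π/2, π/2]:

```
import numpy as np

x = np.random.uniform(-np.pi / 2, np.pi / 2, 5)

# arcsin undoes sin on [-pi/2, pi/2]
print(np.allclose(np.arcsin(np.sin(x)), x))               # True

# sin^2 + cos^2 == 1, element-wise
print(np.allclose(np.sin(x) ** 2 + np.cos(x) ** 2, 1.0))  # True
```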

#### 02.01.03 Logarithmic Functions

NumPy provides logarithmic ufuncs for different bases:

```
input_arr = np.random.randint(1, 7, 5)
print("x =", input_arr)
print("ln(x) =", np.log(input_arr))
print("log2(x) =", np.log2(input_arr))
print("log10(x) =", np.log10(input_arr))
```
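The three bases are related by the change-of-base rule, log_b(x) = ln(x) / ln(b), so each of the specialized ufuncs can be reproduced from `np.log` alone. A quick check:

```
import numpy as np

x = np.random.randint(1, 7, 5).astype(float)

# Change of base: log_b(x) == ln(x) / ln(b)
print(np.allclose(np.log2(x), np.log(x) / np.log(2)))    # True
print(np.allclose(np.log10(x), np.log(x) / np.log(10)))  # True
```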

As a counterpart to the logs, we also have exponential ufuncs:

```
input_arr = np.random.randint(1, 7, 5)
print("x =", input_arr)
print("e^x =", np.exp(input_arr))
print("2^x =", np.exp2(input_arr))
print("10^x =", np.power(10, input_arr))
```
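The exponentials and logarithms are inverses of each other, which gives another easy element-wise sanity check:

```
import numpy as np

x = np.random.randint(1, 7, 5)

# log undoes exp: log(e^x) == x
print(np.allclose(np.log(np.exp(x)), x))  # True

# exp2 agrees with the ** operator
print(np.allclose(np.exp2(x), 2.0 ** x))  # True
```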