SciPy
SciPy is an open-source library for mathematics, science, and engineering that builds on the capabilities of the NumPy library. It provides many efficient and user-friendly interfaces for tasks such as numerical integration, optimization, interpolation, signal and image processing, linear algebra, statistics, and more.
SciPy is a Python library of mathematical routines. Many of the SciPy routines are Python "wrappers," that is, Python routines that provide a Python interface to numerical libraries and routines originally written in Fortran, C, or C++. Thus, SciPy lets you take advantage of the decades of work that has gone into creating and optimizing numerical routines for science and engineering. Because the Fortran, C, or C++ code that Python accesses is compiled, these routines typically run very fast. Therefore, there is no real downside—no speed penalty—for using Python in these cases.
SciPy is organized into sub-packages, each specializing in different aspects of scientific computing. Here are some key sub-packages and their functionalities:
scipy.cluster: Clustering algorithms.
scipy.constants: Physical and mathematical constants.
scipy.fftpack: Fast Fourier Transform routines.
scipy.integrate: Integration and ordinary differential equation solvers.
- quad: Numerical integration.
- odeint: Solves ordinary differential equations.
scipy.interpolate: Interpolation.
- interp1d: 1-dimensional interpolation.
- griddata: Interpolation on a grid.
- splrep, splev: B-spline interpolation.
scipy.io: Input and output routines.
- loadmat: Reads MATLAB data files.
- savemat: Saves MATLAB data files.
- loadtxt: Reads text files (in current versions this routine is provided by NumPy as np.loadtxt).
scipy.linalg: Linear algebra routines.
- inv: Matrix inverse.
- det: Determinant of a matrix.
- eig: Eigenvalues and eigenvectors.
scipy.ndimage: Multi-dimensional image processing.
- filters: Various image filters.
- morphology: Morphological operations.
scipy.optimize: Optimization and root-finding.
- minimize: General optimization.
- fsolve: Roots of non-linear algebraic equations.
scipy.signal: Signal processing.
- convolve: Convolution of two sequences.
- spectrogram: Compute a spectrogram.
scipy.sparse: Sparse matrices and related algorithms.
- csr_matrix, csc_matrix: Compressed Sparse Row/Column matrices.
- spdiags: Create a sparse matrix from diagonals.
scipy.spatial: Spatial data structures and algorithms.
- distance: Distance computations.
- KDTree: k-dimensional tree for fast nearest-neighbour search.
scipy.special: Special functions.
- Bessel functions, gamma functions, etc.
scipy.stats: Statistical functions.
- norm: Normal continuous random variable.
- ttest_ind: Independent samples t-test.
These are just some of the key functionalities provided by SciPy. It's a powerful and versatile library that is widely used in scientific and engineering applications for numerical and statistical computing. To use SciPy, you typically need to install it separately, as it is not included in the standard Python distribution. You can install it using: pip install scipy
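As a quick illustration of how these sub-packages are imported and used, here is a minimal sketch (the particular constant and special function are just examples) combining scipy.constants and scipy.special:
from scipy import constants, special
# speed of light in vacuum (m/s) and the gamma function evaluated at 5 (gamma(5) = 4! = 24)
print(constants.c)
print(special.gamma(5))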
Linear Algebra
Python’s mathematical libraries, NumPy and SciPy, have extensive tools for numerically solving problems in linear algebra. Here we focus on two problems that arise commonly in scientific and engineering settings: (1) solving a system of linear equations and (2) eigenvalue problems. In addition, we show how to perform a number of other basic computations, such as finding the determinant of a matrix, matrix inversion, and LU decomposition. The SciPy package for linear algebra is called scipy.linalg.
Determinant
import scipy.linalg
import numpy as np
A=np.array([[-2,3],[4,5]])
det=scipy.linalg.det(A)
print(det)
output
-22.0
Inverse
import scipy.linalg
import numpy as np
A=np.array([[-2,3],[4,5]])
inv=scipy.linalg.inv(A)
print(inv)
print(np.dot(A,inv))
output
[[-0.22727273  0.13636364]
 [ 0.18181818  0.09090909]]
[[1. 0.]
 [0. 1.]]
Solving systems of linear equations
Solving systems of equations is nearly as simple as constructing a coefficient
matrix and a column vector. Suppose you have the following
system of linear equations to solve:
$2x_1 + 4x_2 + 6x_3 = 4$
$x_1 - 3x_2 - 9x_3 = -11$
$8x_1 + 5x_2 - 7x_3 = 1$
In matrix form, this is $Ax=b$, which can be solved with scipy.linalg.solve:
A=np.array([[2,4,6],[1,-3,-9],[8,5,-7]])
b=np.array([4,-11,1])
x1,x2,x3=scipy.linalg.solve(A,b)
print(x1,x2,x3)
output
-8.52173913043478 9.695652173913041 -2.9565217391304337
Note that the matrix $A$ must be non-singular.
Write a Python program to solve the following system of equations (university question):
$x_1 - 2x_2 + 9x_3 + 13x_4 = 1$
$-5x_1 + x_2 + 6x_3 - 7x_4 = -3$
$4x_1 + 8x_2 - 4x_3 - 2x_4 = -2$
$8x_1 + 5x_2 - 7x_3 + x_4 = 5$
import numpy as np
import scipy.linalg
A=np.array([[1,-2,9,13],[-5,1,6,-7],[4,8,-4,-2],[8,5,-7,1]])
b=np.array([1,-3,-2,5])
x1,x2,x3,x4=scipy.linalg.solve(A,b)
print(x1,x2,x3,x4)
output
1.7683923705722076 -1.02724795640327 0.49318801089918274 -0.5585831062670301
Another example:
$4x_1 + 3x_2 - 5x_3 = 2$
$-2x_1 - 4x_2 + 5x_3 = 5$
$8x_1 + 8x_2 = -3$
import numpy as np
import scipy.linalg
A=np.array([[4,3,-5],[-2,-4,5],[8,8,0]])
b=np.array([2,5,-3])
x1,x2,x3=scipy.linalg.solve(A,b)
print(x1,x2,x3)
Output
2.2083333333333335 -2.5833333333333335 -0.18333333333333332
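The introduction to this section also mentions LU decomposition, which scipy.linalg provides through the lu function. Below is a minimal sketch (reusing the coefficient matrix from the earlier example) of factoring a matrix as $A = PLU$:
import numpy as np
import scipy.linalg
A = np.array([[2, 4, 6], [1, -3, -9], [8, 5, -7]])
P, L, U = scipy.linalg.lu(A) # permutation, lower- and upper-triangular factors
print(L)
print(U)
print(np.allclose(A, P @ L @ U)) # should print True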
Eigenvalue problems
One of the most common problems in science and engineering is the eigenvalue problem, which in matrix form is written as
$Ax = \lambda x $
where $A$ is a square matrix, $x$ is a column vector, and $\lambda$ is a scalar (number). Given the matrix $A$, the problem is to find the set of eigenvectors $x$ and their corresponding eigenvalues $\lambda$ that solve this equation. We can solve eigenvalue equations like this using the SciPy routine scipy.linalg.eig. The output of this function is an array whose entries are the eigenvalues and a matrix whose columns are the eigenvectors. Let's return to the matrix we were using previously and find its eigenvalues and eigenvectors.
A=np.array([[2,4,6],[1,-3,-9],[8,5,-7]])
evl,eve=scipy.linalg.eig(A)
print(evl,eve)
print(evl[0])
print(eve[:,0])
print(np.dot(A,eve[:,0]))
print(np.dot(evl[0],eve[:,0]))
output
[ 2.40995356+0.j -8.03416016+0.j -2.3757934 +0.j]
[[-0.77167559 -0.52633654  0.57513303]
 [ 0.50360249  0.76565448 -0.80920669]
 [-0.38846018  0.36978786  0.12002724]]
(2.4099535647625494+0j)
[-0.77167559  0.50360249 -0.38846018]
[-1.85970234  1.21365861 -0.93617101]
[-1.85970234+0.j  1.21365861+0.j -0.93617101+0.j]
Generalized eigenvalue problem
The scipy.linalg.eig function can also solve the generalized eigenvalue problem
$Ax = \lambda Bx$
where $B$ is a square matrix with the same size as $A$.
import numpy as np
import scipy.linalg
A=np.array([[2, 4, 6], [1, -3, -9], [8, 5, -7]])
B=np.array([[5, 9, 1], [-3, 1, 6], [4, 2, 8]])
eval, evec = scipy.linalg.eig(A, B)
print(eval)
print(evec)
output
[ 0.83252442+0.j -1.36087907+0.j -0.10099858+0.j] [[ 0.94464645 0.03854296 -0.65072239] [-0.16768705 0.3949523 0.69941952] [ 0.28200025 -0.91789276 -0.29558875]]
The solutions are returned in the same fashion as before: an array eval whose entries are the eigenvalues and a matrix evec whose columns are the eigenvectors.
Hermitian and banded matrices
SciPy has a specialized routine for solving eigenvalue problems for Hermitian (or real symmetric) matrices. The routine for Hermitian matrices is scipy.linalg.eigh. It is more efficient (faster and uses less memory) than scipy.linalg.eig. The basic syntax of the two routines is the same, although some of the optional arguments are different. Both routines can solve generalized as well as standard eigenvalue problems. SciPy also has a specialized routine scipy.linalg.eig_banded for solving eigenvalue problems for real symmetric or complex Hermitian banded matrices. When there is a specialized routine for handling a particular kind of matrix, you should use it; it is almost certain to run faster, use less memory, and give more accurate results.
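As a minimal sketch of scipy.linalg.eigh (the small real symmetric matrix below is just an example), note that it returns the eigenvalues in ascending order and the eigenvectors as columns:
import numpy as np
import scipy.linalg
S = np.array([[2, 1, 0], [1, 3, 1], [0, 1, 2]]) # real symmetric matrix
w, v = scipy.linalg.eigh(S) # eigenvalues (ascending), eigenvectors in columns
print(w)
print(np.allclose(S @ v[:, 0], w[0] * v[:, 0])) # should print True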
Numerical Integration
When a function cannot be integrated analytically, or is very difficult to integrate analytically, one generally turns to numerical integration methods. SciPy has a number of routines for performing numerical integration. Most of them are found in the same scipy.integrate library where the ODE (ordinary differential equation) solvers are located.
Single integrals
The function quad is the workhorse of SciPy's integration functions. Numerical integration is sometimes called quadrature, hence the name. The quad function uses adaptive Gauss-Kronrod quadrature (from the Fortran QUADPACK library) to numerically integrate a function over a specified range. The function quad is the default choice for performing single integrals of a function $f(x)$ over a given fixed range from $a$ to $b$:
$\int_{a}^{b} f(x)dx$
The general form of quad is scipy.integrate.quad(f, a, b), where f is the name of the function to be integrated and a and b are the lower and upper limits, respectively.
As an example, let’s integrate a Gaussian function over the range from 0 to 1:
$\int_{0}^{1} e^{-x^2}dx$
from scipy.integrate import quad
from numpy import exp
f=lambda x : exp(-x**2)
quad(f, 0, 1)
output
(0.7468241328124271, 8.291413475940725e-15)
Example: integrate $x^2$
from scipy import integrate
import numpy as np
# Define the function to be integrated
def integrand(x):
    return x**2
# Set the integration limits
lower_limit = 0
upper_limit = 1
# Use quad to perform the integration
result, error = integrate.quad(integrand, lower_limit, upper_limit)
# Print the result and estimated error
print("Result of integration:", result)
print("Estimated error:", error)
Output:
Result of integration: 0.33333333333333337
Estimated error: 3.700743415417189e-15
Example: $\int_0^\infty \frac{\sin(x)}{x}dx$
from scipy import integrate
import numpy as np
# Define a more complex function to be integrated
def complex_integrand(x):
    return np.sin(x) / x
# Set the integration limits
lower_limit = 0
upper_limit = np.inf # Use np.inf for infinite upper limit
# Use quad to perform the integration
result, error = integrate.quad(complex_integrand, lower_limit, upper_limit)
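# Note: for this oscillatory, slowly decaying integrand over an infinite range,
# quad may emit an IntegrationWarning and the returned value can be inaccurate;
# the exact value of this integral is pi/2.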
# Print the result and estimated error
print("Result of integration:", result)
print("Estimated error:", error)
The quad function can integrate standard predefined NumPy functions of a single variable, like exp, sin, and cos.
from scipy.integrate import quad
from numpy import exp,sin,cos
print(quad(exp,0,1))
print(quad(sin, -0.5, 0.5))
print(quad(cos, -0.5, 0.5))
output
(1.7182818284590453, 1.9076760487502457e-14) (0.0, 2.707864644566304e-15) (0.9588510772084061, 1.0645385431034061e-14)
Suppose we want to integrate a function such as $Ae^{-cx^2}$ defined as a normal Python function:
def gauss(x, A, c):
    return A * np.exp(-c*x**2)
Of course we will need to pass the values of $A$ and $c$ to gauss via quad in order to numerically perform the integral. This can be done using args, one of the optional keyword arguments of quad. The code below shows how to do this
A,c=2.0,0.5
intgrl1= quad(gauss, 0.0, 5.0, args=(A, c))
print(intgrl1)
output
(2.5066268375731307, 2.1728257867977207e-09)
Note that the order of the additional parameters in args=(A, c) must be the same as the order in which they appear in the function definition of gauss. (As a quick check, for these parameter values the integral over $[0,\infty)$ is $\frac{A}{2}\sqrt{\pi/c} = \sqrt{2\pi} \approx 2.5066$, and since the integrand is negligible beyond $x = 5$, this agrees with the numerical result.)
Integrating polynomials
Working in concert with the NumPy poly1d class, the NumPy function polyint takes the nth antiderivative of a polynomial and can be used to evaluate definite integrals. The function poly1d creates a polynomial. Suppose we want to make the polynomial function $p(x)=2x^2+5x+1$:
p = np.poly1d([2, 5, 1])
print(p)
$p(x)=2x^2+5x+1$
The polynomial $p(x) = 2x^2 + 5x + 1$ is evaluated using the syntax $p(x)$. Below, we evaluate the polynomial at $x = 1$, $x = 2$, and $x = 3.5$:
p(1), p(2), p(3.5)
output
(8, 19, 43.0)
Now the antiderivative of $p(x)=2x^2+5x+1$ is $P(x)=\frac{2}{3}x^3 + \frac{5}{2}x^2 + x + C$, where $C$ is the integration constant. The NumPy function polyint, which takes the $n$th antiderivative of a polynomial, works as follows:
np.polyint(p)
poly1d([0.66666667, 2.5       , 1.        , 0.        ])
When polyint has a single input, $p$ in this case, polyint returns the coefficients of the antiderivative with the integration constant set to zero.
It is then an easy matter to determine any definite integral of the polynomial $p(x) = 2x^2 + 5x + 1$ since
$\int_a^b p(x)dx = P (b) - P (a)$
For example, if $a = 1$ and $b = 5$,
P = np.polyint(p)
q = P(5) - P(1)
print(q)
146.66666666666666
or
$\int_1^5 (2x^2 + 5x + 1)dx=146 \frac{2}{3}$
example:
import numpy as np
# Define the coefficients of the polynomial (in descending order)
coefficients = [1, 2, 3] # Coefficients in descending order: represents x^2 + 2x + 3
poly=np.poly1d(coefficients)
# Use polyint to integrate the polynomial
integral_coefficients = np.polyint(coefficients)
integral_poly=np.poly1d(integral_coefficients)
# Print the result
print("Original polynomial coefficients:", coefficients)
print("original Polynomial")
print(poly)
print("Integral polynomial coefficients:", integral_coefficients)
print("IntegralPolynomial")
print(integral_poly)
example:
import numpy as np
# Define the coefficients of the polynomial (in descending order)
coefficients = [1, 2, 3] # Coefficients in descending order: represents x^2 + 2x + 3
# Define the integration limits
lower_limit = 0
upper_limit = 2
# Use polyint to integrate the polynomial
integral_coefficients = np.polyint(coefficients)
# Use polyval to evaluate the antiderivative at the limits
integral_at_upper_limit = np.polyval(integral_coefficients, upper_limit)
integral_at_lower_limit = np.polyval(integral_coefficients, lower_limit)
# Calculate the definite integral
definite_integral = integral_at_upper_limit - integral_at_lower_limit
# Print the result
print("Definite integral of the polynomial:", definite_integral)
Double integrals
The scipy.integrate function dblquad can be used to numerically evaluate double integrals of the form
$\int_{y=a}^{y=b} dy \int_{x=g(y)}^{x=h(y)} f(x,y)\, dx$
The general form of dblquad is
scipy.integrate.dblquad(func, a, b, gfun, hfun)
where func is the name of the function to be integrated, $a$ and $b$ are the lower and upper limits of the $y$ variable, respectively, and gfun and hfun are the names of the functions that define the lower and upper limits of the $x$ variable.
As an example, let’s perform the double integral
$\int_{ 0}^{1/2} dy \int_{0}^{\sqrt{1-4y^2}} 16xy dx $
import numpy as np
import scipy.integrate
f = lambda x, y: 16*x*y
g = lambda y: 0
h = lambda y: np.sqrt(1 - 4*y**2)
scipy.integrate.dblquad(f, 0, 0.5, g, h)
output
(0.5, 1.7092350012594845e-14)
example:
from scipy import integrate
import numpy as np
# Define the integrand function for double integration
def integrand(y, x):
    return x**2 + y**2
# Define the integration limits for x and y
x_lower_limit = 0
x_upper_limit = 1
y_lower_limit = 0
y_upper_limit = 1
# Use dblquad to perform the double integration
result, error = integrate.dblquad(integrand, x_lower_limit, x_upper_limit, y_lower_limit, y_upper_limit)
# Print the result and estimated error
print("Result of double integration:", result)
print("Estimated error:", error)
Other integration routines
In addition to the routines described above, scipy.integrate has a number of other integration routines, including nquad, which performs n-fold multiple integration, as well as other routines that implement other integration algorithms. You will find, however, that quad and dblquad meet most of your needs for numerical integration.
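For completeness, here is a minimal sketch of nquad applied to the same $x^2 + y^2$ integrand over the unit square (the integrand and limits are just an example); nquad takes the function and a list of (lower, upper) ranges, one per variable:
from scipy import integrate
def f(x, y):
    return x**2 + y**2
result, error = integrate.nquad(f, [[0, 1], [0, 1]])
print(result) # exact value is 2/3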
Solving ODEs
An Ordinary Differential Equation (ODE) can be used to describe a dynamic system. To some extent, we are living in a dynamic system: the weather outside the window changes from dawn to dusk, and the metabolism in our bodies is also a dynamic system, because thousands of reactions occur and molecules are synthesized and degraded as time goes by.
More formally, if we define a set of variables, like the temperature during a day or the amount of a molecule X at a certain time point, that change with an independent variable (in a dynamic system, usually time t), then an ODE offers us a way to mathematically describe the dynamic changes of the defined variables. The opposite of a dynamic system is a static system: think of taking a photo of the outside world; this snapshot doesn't contain any dynamics, in other words, it is static.
The scipy.integrate library has two powerful routines, ode and odeint, for numerically solving systems of coupled first-order ordinary differential equations (ODEs). While ode is more versatile, odeint (ODE integrator) has a simpler Python interface that works very well for most problems, and it can handle both stiff and non-stiff problems. Here we provide an introduction to odeint.
The syntax of odeint functions is as follows:
odeint(model, y0, t, ...)
Parameters :
model – the function defining the differential equation (returns dy/dt)
$y0$ – the initial value of $y$
$t$ – the array of time points at which we want the solution (basically the range of the independent variable)
Example:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
# Define the system of ordinary differential equations
def model(y, t):
    dydt = -2 * y # Example: dy/dt = -2y
    return dydt
# Set the initial condition
initial_condition = 1.0
# Set the time points where you want to evaluate the solution
time_points = np.linspace(0, 5, 101) # From 0 to 5 with 101 points
# Use odeint to solve the ODE
solution = odeint(model, initial_condition, time_points)
# Plot the solution
plt.plot(time_points, solution, label='y(t)')
plt.xlabel('Time')
plt.ylabel('y(t)')
plt.legend()
plt.show()
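Since $\frac{dy}{dt} = -2y$ with $y(0) = 1$ has the closed-form solution $y(t) = e^{-2t}$, a quick check (continuing the code above; this check is an addition, not part of the original example) is to compare the numerical and analytic solutions:
# Compare odeint's solution against the analytic solution y(t) = exp(-2t)
analytic = np.exp(-2 * time_points)
print(np.max(np.abs(solution[:, 0] - analytic))) # should be a very small number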
Example:
$\frac{dy}{dt}=-yt+13$
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def model(y,t):
    dydt = -y * t + 13
    return dydt
# initial condition
y0 = 1
# values of time
t = np.linspace(0,5)
# solving ODE
y = odeint(model, y0, t)
# plot results
plt.plot(t,y)
plt.xlabel("Time")
plt.ylabel("Y")
plt.show()
Example 2
$\frac{dy}{dt}=13e^t+y$
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def model(y,t):
    dydt = 13 * np.exp(t) + y
    return dydt
# initial condition
y0 = 1
# values of time
t = np.linspace(0,5)
# solving ODE
y = odeint(model, y0, t)
# plot results
plt.plot(t,y)
plt.xlabel("Time")
plt.ylabel("Y")
plt.show()
A typical problem is to solve a second- or higher-order ODE for a given set of initial conditions. Here we illustrate using odeint to solve the equation for a driven damped pendulum. The equation of motion for the angle $\theta$ that the pendulum makes with the vertical is given by
$\frac{\mathrm{d}^2 \theta}{\mathrm{d} t^2}=-\frac{1}{Q}\frac{\mathrm{d} \theta}{\mathrm{d} t}+ \sin\theta + d \cos\Omega t$
We start by transforming our second-order ODE into two coupled first-order ODEs. The transformation is easily accomplished by defining a new variable $\omega=\frac{\mathrm{d} \theta}{\mathrm{d} t}$.With this definition, we can rewrite our second-order ODE as two coupled first-order ODEs:
$\frac{\mathrm{d} \theta}{\mathrm{d} t}=\omega$
$\frac{\mathrm{d} \omega}{\mathrm{d} t}=-\frac{1}{Q}\omega+\sin\theta+d\cos\Omega t$
In this case the functions on the right-hand side of the equations are
$f_1(t,\theta,\omega)=\omega$
$f_2(t,\theta,\omega)=-\frac{1}{Q}\omega+\sin\theta+d\cos\Omega t$
The initial conditions specify the values of $\theta$ and $\omega$ at $t = 0$. SciPy’s ODE solver scipy.integrate.odeint has three required arguments and many optional keyword arguments, of which we only need one, args, for this example. So in this case, odeint has the form
odeint(func, y0, t, args=())
The first argument func is the name of a Python function that returns a list of values of the $n$ functions $f_i(t, y_1, \ldots, y_n)$ at a given time $t$. The second argument $y_0$ is an array (or list) of the values of the initial conditions of $(y_1, \ldots, y_n)$. The third argument is the array of times at which you want odeint to return the values of $(y_1, \ldots, y_n)$. The keyword argument args is a tuple that is used to pass parameters (besides $y_0$ and $t$) that are needed to evaluate func. Our example should make all of this clear.
Having written the nth-order ODE as a system of n first-order ODEs, the next task is to write the function func. The function func should have three arguments: (1) the list (or array) of current $y$ values, (2) the current time $t$, and (3) a list of any other parameters params needed to evaluate func. The function func returns the values of the derivatives $dy_i/dt = f_i(t, y_1, \ldots, y_n)$ in a list (or array).
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def f(y, t, params):
    theta, omega = y # unpack current values
    Q, d, Omega = params # unpack parameters
    derivs = [omega, -omega/Q + np.sin(theta) + d*np.cos(Omega*t)]
    return derivs
# Parameters
Q = 2.0 # quality factor (inverse damping)
d = 1.5 # forcing amplitude
Omega = 0.65 # drive frequency
# Initial values
theta0 = 0.0 # initial angular displacement
omega0 = 0.0 # initial angular velocity
# Bundle parameters for ODE solver
params = [Q, d, Omega]
# Bundle initial conditions for ODE solver
y0 = [theta0, omega0]
# Make time array for solution
tStop = 200.
tInc = 0.05
t = np.arange(0., tStop, tInc)
# Call the ODE solver
psoln = odeint(f, y0, t, args=(params,))
# Plot results
fig = plt.figure(figsize=(9.5, 6.5))
# Plot theta as a function of time
ax1 = fig.add_subplot(221)
ax1.plot(t, psoln[:, 0], color='black')
ax1.set_xlabel('time')
ax1.set_ylabel(r'$\theta$', fontsize=14)
Example: a system of two coupled first-order ODEs solved with odeint.
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
# Define the system of ordinary differential equations
def model(y, t):
    dy1dt = -2 * y[0] # Example: dy1/dt = -2y1
    dy2dt = y[0] - y[1] # Example: dy2/dt = y1 - y2
    return [dy1dt, dy2dt]
# Set the initial conditions
initial_conditions = [1.0, 0.0]
# Set the time points where you want to evaluate the solution
time_points = np.linspace(0, 5, 101) # From 0 to 5 with 101 points
# Use odeint to solve the system of ODEs
solution = odeint(model, initial_conditions, time_points)
# Plot the solution
plt.plot(time_points, solution[:, 0], label='y1(t)')
plt.plot(time_points, solution[:, 1], label='y2(t)')
plt.xlabel('Time')
plt.ylabel('State Variables')
plt.legend()
plt.show()
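Newer SciPy code typically uses scipy.integrate.solve_ivp rather than odeint. Below is a minimal sketch solving the same two-variable system as above; note that solve_ivp expects the signature fun(t, y), with the arguments in the opposite order from odeint.
from scipy.integrate import solve_ivp
import numpy as np
def model_ivp(t, y):
    # same system as above: dy1/dt = -2*y1, dy2/dt = y1 - y2
    return [-2 * y[0], y[0] - y[1]]
sol = solve_ivp(model_ivp, (0, 5), [1.0, 0.0], t_eval=np.linspace(0, 5, 101))
print(sol.y.shape) # (2, 101): one row per state variable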
Discrete (Fast) Fourier Transforms
The SciPy library has a number of routines for performing discrete Fourier transforms. Before delving into them, we provide a brief review of Fourier transforms and discrete Fourier transforms.
Continuous and discrete Fourier transforms
The Fourier transform of a function $g(t)$ is given by
$G(f) =\int_{-\infty}^{\infty} g(t)\, e^{-i2\pi f t}\,dt$
where $f$ is the Fourier transform variable; if $t$ is time, then $f$ is frequency.
The inverse transform is given by
$g(t) =\int_{-\infty}^{\infty} G(f)\, e^{i2\pi f t}\,df$
Here we define the Fourier transform in terms of the frequency $f$ rather than the angular frequency $\omega = 2 \pi f $
The conventional Fourier transform is defined for continuous functions, or at least for functions that are dense and thus have an infinite number of data points. When doing numerical analysis, however, you work with discrete data sets, that is, data sets defined for a finite number of points. The discrete Fourier transform (DFT) is defined for a function $g_n$ consisting of a set of $N$ discrete data points.
Those $N$ data points must be defined at equally spaced times $t_n = n\Delta t$, where $\Delta t$ is the time between successive data points and $n$ runs from 0 to $N - 1$. The discrete Fourier transform (DFT) of $g_n$ is defined as
$G_l =\sum_{n=0}^{N-1} g_n\, e^{-i(2\pi/N)\, l n}$
where $l$ runs from 0 to $N - 1$. The inverse discrete Fourier transform
(iDFT) is defined as
$g_n =\frac{1}{N} \sum_{l=0}^{N-1} G_l\, e^{i(2\pi/N)\, l n}$
The DFT is usually implemented on computers using the well-known Fast Fourier Transform (FFT) algorithm, generally credited to Cooley and Tukey, who published it in 1965. But their algorithm is essentially one of many independent rediscoveries of the basic algorithm, dating back to Gauss, who described it as early as 1805.
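To connect the DFT formula above to the library routines, the sketch below evaluates the sum for $G_l$ directly (an $O(N^2)$ loop on a small random array, purely for illustration) and compares it with the FFT; the two should agree to machine precision:
import numpy as np
from scipy.fft import fft
N = 8
g = np.random.random(N)
n = np.arange(N)
# Direct evaluation of G_l = sum_n g_n exp(-i 2 pi l n / N)
G_direct = np.array([np.sum(g * np.exp(-1j * 2 * np.pi * l * n / N)) for l in range(N)])
print(np.allclose(G_direct, fft(g))) # should print True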
The SciPy FFT library
The SciPy library scipy.fftpack has routines that implement a souped-up version of the FFT algorithm along with many ancillary routines that support working with DFTs. (In recent SciPy releases, scipy.fftpack is kept as a legacy interface; the newer scipy.fft module, used in a later example below, provides the same functionality.) The basic FFT routine in scipy.fftpack is appropriately named fft. The program below illustrates its use, along with the plots that follow.
import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt
width = 2.0
freq = 0.5
t = np.linspace(-10, 10, 128)
g = np.exp(-np.abs(t)/width) * np.sin(2.0*np.pi*freq*t)
dt = t[1]-t[0] # increment between times in time array
G = fftpack.fft(g) # FFT of g
f = fftpack.fftfreq(g.size, d=dt) # FFT frequencies
f = fftpack.fftshift(f) # shift freqs from min to max
G = fftpack.fftshift(G) # shift G order to match f
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(9, 6))
ax1.plot(t, g)
ax1.set_xlabel(r'$t$')
ax1.set_ylabel(r'$g(t)$')
ax1.set_ylim(-1, 1)
ax2.plot(f, np.real(G), color='dodgerblue',
label='real part')
ax2.plot(f, np.imag(G), color='coral',
label='imaginary part')
ax2.legend()
ax2.set_xlabel(r'$f$')
ax2.set_ylabel(r'$G(f)$')
plt.tight_layout()
plt.savefig('fftExample.pdf')
plt.show()
The DFT has real and imaginary parts, both of which are plotted in the figure above. The fft function returns the $N$ Fourier components $G_n$, starting with the zero-frequency component $G_0$ and progressing to the maximum positive frequency component $G_{N/2-1}$ (or $G_{(N-1)/2}$ if $N$ is odd). From there, fft returns the maximum negative component $G_{N/2}$ (or $G_{(N+1)/2}$ if $N$ is odd) and continues upward in frequency until it reaches the minimum negative frequency component $G_{N-1}$.
This is the standard way that DFTs are ordered by most numerical DFT packages. The scipy.fftpack function fftfreq creates the array of frequencies in this non-intuitive order such that $f[n]$ in the above routine is the correct frequency for the Fourier component $G[n]$. The arguments of fftfreq are the size of the original array $g$ and the keyword argument $d$ that is the spacing between the (equally spaced) elements of the time array (d=1 if left unspecified). The package scipy.fftpack provides the convenience function fftshift that reorders the frequency array so that the zero-frequency occurs at the middle of the array, that is, so the frequencies proceed monotonically from smallest (most negative) to largest (most positive). Applying fftshift to both f and G puts the frequencies f in ascending order and shifts G so that the frequency of $G[n]$ is given by the shifted $f[n]$.
The scipy.fftpack module also contains routines for performing 2-dimensional and $n$-dimensional DFTs, named fft2 and fftn, respectively, using the FFT algorithm.
As for most FFT routines, the scipy.fftpack FFT routines are most efficient if $N$ is a power of 2. Nevertheless, the FFT routines are able to handle data sets where $N$ is not a power of 2. scipy.fftpack also supplies an inverse DFT function, ifft. It is written to act on the unshifted FFT, so take care! Note also that ifft returns a complex array. Because of machine roundoff error, the imaginary part of the function returned by ifft will, in general, be very near zero but not exactly zero, even when the original function is a purely real function.
Example:
import numpy as np
from scipy.fft import fft, ifft
import matplotlib.pyplot as plt
# Create a simple signal
Fs = 1000 # Sampling frequency
T = 1/Fs # Sampling interval
t = np.arange(0, 1, T) # Time vector from 0 to 1 with step T
# Create a signal composed of two sine waves
f1 = 5 # Frequency of the first sine wave
f2 = 20 # Frequency of the second sine wave
signal = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
# Plot the original signal
plt.figure(figsize=(10, 4))
plt.subplot(2, 1, 1)
plt.plot(t, signal)
plt.title('Original Signal')
plt.xlabel('Time')
plt.ylabel('Amplitude')
# Compute the FFT
fft_result = fft(signal)
# Compute the frequencies corresponding to the FFT result
frequencies = np.fft.fftfreq(len(fft_result), T)
# Plot the FFT result
plt.subplot(2, 1, 2)
plt.plot(frequencies, np.abs(fft_result))
plt.title('FFT Result')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Amplitude')
plt.tight_layout()
plt.show()
In this example:
We create a simple signal composed of two sine waves with frequencies of 5 Hz and 20 Hz.
We use the fft function from scipy.fft to compute the FFT of the signal.
The frequencies corresponding to the FFT result are obtained using np.fft.fftfreq.
We plot the original signal and the FFT result.
You can also use ifft from scipy.fft to compute the inverse FFT if you want to transform back to the time domain.
# Compute the inverse FFT to transform back to the time domain
reconstructed_signal = ifft(fft_result)
# Plot the reconstructed signal
plt.figure(figsize=(10, 4))
plt.plot(t, reconstructed_signal.real) # ifft returns a complex array; plot its real part
plt.title('Reconstructed Signal (Inverse FFT)')
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.show()
This is a simple example, and you can adapt it to your specific use case. FFT is commonly used for analyzing signals in various fields such as signal processing, communications, and audio/image processing. The resulting FFT provides information about the frequency components present in the original signal.
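As a small follow-up sketch (continuing with the frequencies and fft_result computed above; this is an addition, not part of the original example), the dominant frequency components can be read off from the magnitude spectrum; for this signal they should come out near 5 Hz and 20 Hz:
# Pick out the two strongest positive-frequency components
positive = frequencies > 0
mags = np.abs(fft_result)[positive]
freqs_pos = frequencies[positive]
top_two = np.sort(freqs_pos[np.argsort(mags)[-2:]])
print(top_two) # expected to be approximately [ 5. 20.]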