I am pretty new to signal processing and I want to calculate the signal-to-noise ratio (SNR) of a signal in the frequency domain.
import matplotlib.pyplot as plt
import numpy as np
# create signal in time domain
N = 1000
T = 0.05
x = np.linspace(0, N*T, N)
y = 10 * np.sin(0.1 * np.pi * x)
noise = 2 * np.random.normal(size=len(x))
y += noise
# signal in frequency domain
N = len(y)
x_f = np.fft.rfftfreq(N, d=T)
y_f = np.fft.rfft(y)
# normalize
x_f = x_f / max(x_f)
y_f = np.abs(y_f) / max(np.abs(y_f))
The idea is to use a cutoff frequency (e.g. 0.3) to divide the spectrum into a useful signal (left side) and a noise signal (right side) and calculate the SNR. But how can I accomplish this, and is this the right approach?
This can be done in different ways, depending on the characteristics of the signal and the accuracy required. In your case, you can see a clear peak of the useful signal in the spectrum; for real signals, the useful signal might not be so clearly distinguishable from the noise in its spectrum.
By analysing the normalised spectrum of the signal, you can perform a simple filtering based on an experimentally chosen amplitude threshold; you can take it as 0.2, for example. Note that your normalised y_f contains only magnitudes, so the threshold should be used as a mask on the complex spectrum; otherwise the phase information is lost and the inverse FFT cannot reconstruct the filtered signal.
threshold = 0.2
y_f_complex = np.fft.rfft(y)       # complex spectrum; the normalised y_f above only holds magnitudes
y_f_filtered = y_f_complex.copy()
y_f_filtered[y_f < threshold] = 0  # use the normalised magnitude y_f as the mask
After returning to the time domain, you can calculate the signal-to-noise ratio as SNR_dB = 10 * log10(P_signal / P_noise), where P_signal is the power of the filtered signal and P_noise is the power of the part the filter removed:
y_filtered = np.fft.irfft(y_f_filtered, n=N)   # back to the time domain
signal_power = np.sum(y_filtered ** 2)         # power of the filtered (useful) signal
noise_power = np.sum((y - y_filtered) ** 2)    # power of the residual, treated as noise
snr_db = 10 * np.log10(signal_power / noise_power)
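And to answer the other part of your question: yes, the cutoff-frequency idea also works; you just build the mask from the frequency axis instead of the amplitude. Here is a minimal sketch, assuming a cutoff of 0.3 on the normalised frequency axis (the cutoff value and the variable names are only placeholders). The last lines compare the estimate with the true SNR, which is known in this synthetic example because the clean signal and the noise were generated explicitly:
cutoff = 0.3  # assumed cutoff on the normalised frequency axis
spectrum = np.fft.rfft(y)                   # complex spectrum
freqs = np.fft.rfftfreq(len(y), d=T)
freqs_norm = freqs / freqs.max()            # normalise frequencies to 0..1
spectrum_lp = np.where(freqs_norm <= cutoff, spectrum, 0)  # keep left side, zero right side
y_lp = np.fft.irfft(spectrum_lp, n=len(y))  # back to the time domain
snr_db_cutoff = 10 * np.log10(np.sum(y_lp ** 2) / np.sum((y - y_lp) ** 2))
# sanity check: the clean signal and the noise are known here
y_clean = 10 * np.sin(0.1 * np.pi * x)
snr_db_true = 10 * np.log10(np.sum(y_clean ** 2) / np.sum(noise ** 2))
print(snr_db_cutoff, snr_db_true)
Both estimates should land in the same range; the closer your mask isolates the real peak, the closer the estimated SNR is to the true one.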