General Theory of Self-Reciprocal Functions in the Fourier Transform

Fixed Points of the Fourier Transform

In two earlier articles, examples of functions that are self-reciprocal under the Fourier transform were given. The two examples are:

$$ \exp{[-\frac{t^2}{2}]} $$

$$ \frac{1}{\sqrt{|t|}} $$

Each of these functions is mapped to itself (up to a constant multiple) by the Fourier transform.

Such functions are considered fixed points of the Fourier transform, and are referred to as self-reciprocal functions in the context of the Fourier transform.

There are various other such functions.

In this article, I will discuss the general theory related to such functions.

Unlike in other articles, the following definition of the Fourier transform will be used here:

$$ \mathcal{F}[f](\omega) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{\infty}{f(t)e^{-i\omega t}dt} $$

This is the third definition from Multiple Definitions of the Fourier Transform, used to ensure the Fourier transform is a unitary transform (avoiding constant multiples).

It will also be written as:

$$ \hat{f}(\omega) = \mathcal{F}[f](\omega) $$
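As a quick numerical sanity check of this unitary convention, the integral can be discretized by a Riemann sum on a symmetric grid (the grid size and quadrature are my own choices, not part of the article); the Gaussian should then map to itself up to discretization error:

```python
import numpy as np

# Discretize the unitary Fourier transform
#   F[f](w) = (1/sqrt(2*pi)) * integral f(t) exp(-i*w*t) dt
# by a Riemann sum on a symmetric grid (accurate for rapidly decaying f).
t = np.linspace(-10, 10, 2001)
dt = t[1] - t[0]

def ftrans(f_vals):
    # evaluate F[f] on the same grid of frequencies
    return (np.exp(-1j * np.outer(t, t)) @ f_vals) * dt / np.sqrt(2 * np.pi)

gauss = np.exp(-t**2 / 2)
err = np.max(np.abs(ftrans(gauss) - gauss))
print(err)  # tiny: the Gaussian is (numerically) a fixed point
```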

Repeated Application of the Fourier Transform

Let’s consider repeatedly applying the Fourier transform to a function $f(t)$.

After one Fourier transform, $f(t)$ is mapped as follows:

$$ \mathcal{F}: f(t) \mapsto \hat{f}(\omega) $$

Here,

$$ \mathcal{F}[f](\omega) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{\infty}{f(t)e^{-i\omega t}dt} $$

By substituting $s = -t$ in the integral, we get:

$$ \mathcal{F}[f](\omega) = \frac{1}{\sqrt{2 \pi}}\int_{s=\infty}^{-\infty}{f(-s)e^{i\omega s} (-ds)} = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{\infty}{f(-s)e^{i\omega s} ds} = \mathcal{F}^{-1}[f(-t)](\omega) $$

The last expression is the inverse Fourier transform applied to $f(-t)$.

Applying $\mathcal{F}$ to both sides, we can see that:

$$ \mathcal{F}^2[f](t) = f(-t) $$

Continuing further:

$$ \mathcal{F}^3[f](\omega) = \hat{f}(-\omega) $$

and

$$ \mathcal{F}^4[f](t) = f(-(-t)) = f(t) $$

This type of operation, where something returns to its original form after four iterations, appears often in mathematics: multiplying by the imaginary unit $i$, for example, or differentiating trigonometric functions.

To summarize, repeated application of $\mathcal{F}$ cycles through four functions:

$$ f(t) \xrightarrow{\ \mathcal{F}\ } \hat{f}(\omega) \xrightarrow{\ \mathcal{F}\ } f(-t) \xrightarrow{\ \mathcal{F}\ } \hat{f}(-\omega) \xrightarrow{\ \mathcal{F}\ } f(t) $$

Returning to the main point, the important equation obtained here is:

$$ \mathcal{F}^4 = 1 $$

Restricting to even functions, for which $f(-t) = f(t)$, we even have:

$$ \mathcal{F}^2 = 1 $$
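Both identities can be checked numerically. The sketch below uses a Riemann-sum discretization of the unitary transform (the grid and the asymmetric test function are my own choices) and applies the transform four times:

```python
import numpy as np

# Numerically check F^2[f](t) = f(-t) and F^4[f] = f.
t = np.linspace(-10, 10, 2001)
dt = t[1] - t[0]

def ftrans(f_vals):
    # unitary-convention transform evaluated on the same grid
    return (np.exp(-1j * np.outer(t, t)) @ f_vals) * dt / np.sqrt(2 * np.pi)

f = (t + 0.5) * np.exp(-(t - 1)**2 / 2)   # an arbitrary asymmetric test function
f2 = ftrans(ftrans(f))
f4 = ftrans(ftrans(f2))

# the grid is symmetric, so f[::-1] samples f(-t)
print(np.max(np.abs(f2 - f[::-1])))  # near 0: F^2 is reflection
print(np.max(np.abs(f4 - f)))        # near 0: F^4 is the identity
```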

Eigenvalues of the Fourier Transform

Since the Fourier transform is a linear transformation, it is natural to consider its eigenvalues.

Let $\lambda$ be an eigenvalue of the Fourier transform $\mathcal{F}$, and $f$ be the eigenvector corresponding to $\lambda$ (in this case, it’s a function, so it’s called an eigenfunction).

By definition, we have:

$$ \mathcal{F}[f] = \lambda f $$

Repeating this four times gives:

$$ \mathcal{F}^4[f] = \lambda^4 f $$

On the other hand, from the earlier analysis:

$$ \mathcal{F}^4[f] = 1[f] = f $$

Thus:

$$ \lambda^4 = 1 $$

Therefore:

$$ \lambda = 1, i, -1, -i $$

Later, I will show examples, but there are indeed non-zero functions $f$ that satisfy:

$$ \mathcal{F}[f] = \lambda f $$

for $\lambda = 1, i, -1, -i$, meaning that these are indeed eigenvalues of $\mathcal{F}$.

Let the eigenspaces corresponding to these eigenvalues be $W_1, W_i, W_{-1}, W_{-i}$, respectively.

These eigenspaces are orthogonal to each other.

In fact, for $f \in W_a$ and $g \in W_b$, Parseval’s theorem gives:

$$ f \cdot g = \mathcal{F}[f] \cdot \mathcal{F}[g] = (af) \cdot (bg) = a\bar{b} (f \cdot g) $$

Since the eigenvalues lie on the unit circle, $\bar{b} = b^{-1}$, so when $a \ne b$ we have $a\bar{b} = a/b \ne 1$, and we deduce that $f \cdot g = 0$.

Thus, any function $f \in L^2(\mathbb{R})$ can be uniquely decomposed into these eigenspaces as:

$$ f = f_1 + f_i + f_{-1} + f_{-i} $$

Applying the Fourier transform repeatedly to this decomposition gives:

$$ \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix} \begin{pmatrix} f_1 \\ f_i \\ f_{-1} \\ f_{-i} \end{pmatrix} = \begin{pmatrix} f \\ \mathcal{F} [f] \\ \mathcal{F}^2 [f] \\ \mathcal{F}^3 [f] \end{pmatrix} $$

Solving this, we can express $f_1, f_i, f_{-1}, f_{-i}$ as:

$$ \begin{pmatrix} f_1 \\ f_i \\ f_{-1} \\ f_{-i} \end{pmatrix} = \frac{1}{4} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix} \begin{pmatrix} f \\ \mathcal{F} [f] \\ \mathcal{F}^2 [f] \\ \mathcal{F}^3 [f] \end{pmatrix} $$
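The coefficient matrix in the previous system is the $4 \times 4$ DFT matrix with entries $i^{jk}$, and the inverse used here is its complex conjugate divided by $4$; this is easy to confirm numerically (a sketch):

```python
import numpy as np

# V is the 4x4 matrix from the system above; Vinv is the claimed inverse.
V = np.array([[1, 1, 1, 1],
              [1, 1j, -1, -1j],
              [1, -1, 1, -1],
              [1, -1j, -1, 1j]])
Vinv = np.array([[1, 1, 1, 1],
                 [1, -1j, -1, 1j],
                 [1, -1, 1, -1],
                 [1, 1j, -1, -1j]]) / 4

print(np.max(np.abs(V @ Vinv - np.eye(4))))  # 0.0
```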

Here, the $f_1$ component is the projection onto the eigenspace $W_1$, which is a self-reciprocal function.
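This projection can be demonstrated numerically: starting from a function that is not self-reciprocal, averaging $f$, $\mathcal{F}[f]$, $\mathcal{F}^2[f]$, $\mathcal{F}^3[f]$ produces a fixed point of $\mathcal{F}$. (A sketch; the discretization and the test function are my own choices.)

```python
import numpy as np

# Project onto W_1 via f_1 = (f + F[f] + F^2[f] + F^3[f]) / 4
# and check that f_1 is self-reciprocal.
t = np.linspace(-10, 10, 2001)
dt = t[1] - t[0]

def ftrans(f_vals):
    # unitary-convention transform by Riemann sum
    return (np.exp(-1j * np.outer(t, t)) @ f_vals) * dt / np.sqrt(2 * np.pi)

f = np.exp(-(t - 1)**2 / 2)            # a shifted Gaussian: not self-reciprocal
Ff = ftrans(f)
F2f = ftrans(Ff)
F3f = ftrans(F2f)
f1 = (f + Ff + F2f + F3f) / 4

print(np.max(np.abs(ftrans(f1) - f1)))  # near 0: f_1 is a fixed point
```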

Examples of Eigenfunctions Corresponding to Eigenvalues $1, i, -1, -i$

Example of an Eigenfunction for the Eigenvalue $1$

This is precisely a self-reciprocal function, and a typical example is the Gaussian function mentioned at the beginning:

$$ \exp{[-\frac{t^2}{2}]} $$

Differentiation in the Frequency Domain

One of the important properties of the Fourier transform is the frequency differentiation formula:

$$ \mathcal{F}[t f(t)] = i \frac{d \hat{f}(\omega)}{d\omega} $$

This asserts that multiplying by $t$ in the time domain corresponds to differentiation in the frequency domain.

This can be derived by calculating as follows:

$$ \begin{aligned} \frac{d \hat{f}(\omega)}{d\omega} &= \frac{d}{d \omega} \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}{f(t) e^{-i \omega t}dt} \\ &= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}{f(t) \left( \frac{d}{d \omega} e^{-i \omega t} \right) dt} \\ &= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}{f(t) \left( -it e^{-i \omega t} \right) dt} \\ &= -i \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty}{tf(t) e^{-i \omega t} dt} \\ &= -i \mathcal{F}[tf(t)] \end{aligned} $$
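A numerical spot-check of the formula (the discretization and the test function are my assumptions): compare the transform of $t f(t)$ with $i$ times a finite-difference derivative of $\hat{f}$.

```python
import numpy as np

# Check F[t f(t)] = i * d/dw F[f](w) numerically.
t = np.linspace(-10, 10, 2001)
dt = t[1] - t[0]

def ftrans(f_vals):
    # unitary-convention transform by Riemann sum
    return (np.exp(-1j * np.outer(t, t)) @ f_vals) * dt / np.sqrt(2 * np.pi)

f = np.exp(-(t - 1)**2 / 2)
lhs = ftrans(t * f)                      # F[t f(t)]
rhs = 1j * np.gradient(ftrans(f), dt)    # i * d/dw f-hat (central differences)

print(np.max(np.abs(lhs - rhs)))  # small, limited by the finite difference
```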

Below, we calculate the Fourier transform of $t^n \exp{[-\frac{t^2}{2}]}$ using this formula; these calculations will supply examples of eigenfunctions for the remaining eigenvalues.

Fourier Transform of $t^n \exp{[-\frac{t^2}{2}]}$

$n=1$

$$ \mathcal{F}\left[t \exp{[-\frac{t^2}{2}]}\right] = i\frac{d}{d\omega} \exp{[-\frac{\omega^2}{2}]} = -i\omega \exp{[-\frac{\omega^2}{2}]} $$

$n=2$

$$ \mathcal{F}\left[t^2 \exp{[-\frac{t^2}{2}]}\right] = i\frac{d}{d\omega} \left( -i\omega \exp{[-\frac{\omega^2}{2}]} \right) = (1 - \omega^2) \exp{[-\frac{\omega^2}{2}]} $$

$n=3$

$$ \mathcal{F}\left[t^3 \exp{[-\frac{t^2}{2}]}\right] = i\frac{d}{d\omega} \left( (1 - \omega^2) \exp{[-\frac{\omega^2}{2}]} \right) = i(-3\omega + \omega^3) \exp{[-\frac{\omega^2}{2}]} $$
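The three transforms above can be reproduced symbolically by iterating the frequency-differentiation formula, $\mathcal{F}[t^{n+1} e^{-t^2/2}] = i \frac{d}{d\omega} \mathcal{F}[t^n e^{-t^2/2}]$, starting from the self-reciprocal Gaussian (a sketch using sympy):

```python
import sympy as sp

w = sp.symbols('omega')

# g_n = F[t^n exp(-t^2/2)] satisfies g_{n+1} = i * d/dw g_n,
# with g_0 = exp(-w^2/2) since the Gaussian is self-reciprocal.
g = sp.exp(-w**2 / 2)
for n in range(1, 4):
    g = sp.simplify(sp.I * sp.diff(g, w))
    print(f"n={n}:", g)
```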

Example of an Eigenfunction for the Eigenvalue $-i$

From the result for $n=1$ above, we can see that

$$ t\exp{[-\frac{t^2}{2}]} $$

is an eigenfunction corresponding to the eigenvalue $−i$.

Example of an Eigenfunction for the Eigenvalue $-1$

The result for $n=2$ is not itself an eigenfunction, but it is close to one.

Considering the Fourier transform of

$$ f(t) = (t^2 + \alpha) \exp{[-\frac{t^2}{2}]} $$

we obtain

$$ \hat{f}(\omega) = -(\omega^2 - (1+\alpha)) \exp{[-\frac{\omega^2}{2}]} $$

Therefore, if we choose $\alpha$ such that $\alpha = -(1 + \alpha)$, then $\hat{f} = -f$.

Solving this equation gives $\alpha = -\frac{1}{2}$. Hence,

$$ f(t) = (t^2 - \frac{1}{2}) \exp{[-\frac{t^2}{2}]} $$

is an eigenfunction corresponding to the eigenvalue $−1$.

Example of an Eigenfunction for the Eigenvalue $i$

The result for $n=3$ is not itself an eigenfunction, but it is close to one.

Considering the Fourier transform of

$$ f(t) = (t^3 + \beta t) \exp{[-\frac{t^2}{2}]} $$

we obtain

$$ \hat{f}(\omega) = i(\omega^3 - (3+\beta) \omega) \exp{[-\frac{\omega^2}{2}]} $$

Therefore, if we choose $\beta$ such that $\beta = -(3 + \beta)$, then $\hat{f} = if$.

Solving this equation gives $\beta = -\frac{3}{2}$. Hence,

$$ f(t) = (t^3 - \frac{3}{2}t) \exp{[-\frac{t^2}{2}]} $$

is an eigenfunction corresponding to the eigenvalue $i$.
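All four example eigenfunctions, and the orthogonality of the eigenspaces shown earlier, can be verified numerically (a sketch; the quadrature and grid are my own choices):

```python
import numpy as np

# Verify the four example eigenfunctions and pairwise orthogonality.
t = np.linspace(-10, 10, 2001)
dt = t[1] - t[0]

def ftrans(f_vals):
    # unitary-convention transform by Riemann sum
    return (np.exp(-1j * np.outer(t, t)) @ f_vals) * dt / np.sqrt(2 * np.pi)

g = np.exp(-t**2 / 2)
eigs = {1: g,                      # eigenvalue  1
        -1j: t * g,                # eigenvalue -i
        -1: (t**2 - 0.5) * g,      # eigenvalue -1
        1j: (t**3 - 1.5 * t) * g}  # eigenvalue  i

for lam, f in eigs.items():
    print(lam, np.max(np.abs(ftrans(f) - lam * f)))  # each near 0

# pairwise inner products  f . g = integral f(t) conj(g(t)) dt  vanish
fs = list(eigs.values())
for i in range(4):
    for j in range(i + 1, 4):
        print(abs(np.sum(fs[i] * np.conj(fs[j])) * dt))  # each near 0
```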