Bernoulli numbers and a related integral

Consider the sequence \{B_r(x)\}_{r=0}^{\infty} of polynomials defined using the recursion:

\begin{aligned} B_0(x) &= 1 \\ B_r^\prime(x) &= r B_{r-1}(x) \quad \forall r\geq 1 \\ \int_0^1 B_r(x) dx &= 0 \quad \forall r\geq 1 \end{aligned}

The first few Bernoulli polynomials are: \begin{aligned} B_0(x) & =1 \\ B_1(x) & =x-\frac{1}{2} \\ B_2(x) & =x^2-x+\frac{1}{6} \\ B_3(x) & =x^3-\frac{3}{2}x^2+\frac{1}{2}x \\ B_4(x) & =x^4-2x^3+x^2-\frac{1}{30} \\ \end{aligned}
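The recursion above translates directly into a short computation. Below is a sketch in Python that builds B_0 through B_4 with exact rational arithmetic; the function name `bernoulli_polys` and the ascending-coefficient-list representation are my own choices, not anything from the text.

```python
from fractions import Fraction

def bernoulli_polys(n):
    """Return [B_0, ..., B_n] as coefficient lists in ascending powers of x."""
    polys = [[Fraction(1)]]                      # B_0(x) = 1
    for r in range(1, n + 1):
        prev = polys[-1]
        # Antidifferentiate r*B_{r-1}: the x^{k+1} coefficient is r*c_k/(k+1).
        p = [Fraction(0)] + [Fraction(r) * c / (k + 1) for k, c in enumerate(prev)]
        # Choose the constant term so that the integral over [0,1] vanishes.
        p[0] -= sum(c / (k + 1) for k, c in enumerate(p))
        polys.append(p)
    return polys

B = bernoulli_polys(4)
# B[2] is [1/6, -1, 1], i.e. x^2 - x + 1/6, matching the table above.
```

Running this reproduces the table above exactly, since all arithmetic is done over the rationals.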

The numbers B_n = B_n(0) are called the Bernoulli numbers. Integrating the relation B_r^\prime(x)=rB_{r-1}(x) between 0 and 1 gives: B_r(1)-B_r(0) = \int_0^1 B_r^\prime (x) dx = r\int_0^1 B_{r-1}(x) dx = 0 \quad \forall r\geq 2 This motivates us to define the periodic Bernoulli polynomials by \tilde{B}_r(x) = B_r(\langle x\rangle), \quad x\in\mathbb{R}, \; r\geq 0 where \langle x\rangle denotes the fractional part of x; by the computation above, \tilde{B}_r is continuous for r\geq 2. We will now compute the Fourier series of \tilde{B}_r(x), where r\geq 2. The n-th Fourier coefficient is given by: \begin{aligned} a_n &= \int_0^1 \tilde{B}_r(x) e^{-2\pi i n x} dx= \int_0^1 B_r(x) e^{-2\pi i n x} dx \end{aligned} Let’s first consider the case when n\neq 0. Integration by parts gives us: \begin{aligned} a_n &= -\frac{e^{-2\pi i n x}}{2\pi i n}B_r(x)\Big|_0^1 + \frac{1}{2\pi i n}\int_0^1 B_r^\prime (x) e^{-2\pi i n x} dx \\ &= \frac{1}{2\pi i n}\int_0^1 B_r^\prime (x) e^{-2\pi i n x} dx \\ &= \frac{r}{2\pi i n}\int_0^1 B_{r-1}(x) e^{-2\pi i n x} dx \quad \quad (1) \end{aligned} Repeated use of equation (1) gives: \begin{aligned} a_n &= \frac{r!}{(2\pi i n)^{r-1}} \int_0^1 B_1(x) e^{-2\pi i nx} dx \\ &= \frac{r!}{(2\pi i n)^{r-1}} \int_0^1 \left(x-\frac{1}{2} \right) e^{-2\pi i nx} dx \\ &= \frac{r!}{(2\pi i n)^{r-1}}\int_0^1 x e^{-2\pi i n x} dx \quad \left(\text{since } \int_0^1 e^{-2\pi i n x} dx = 0\right) \\ &= -\frac{r!}{(2\pi i n)^r} \end{aligned} When n=0, we have a_0 = \int_0^1 B_r(x)dx = 0. Note that the Fourier series -r! \sum_{\substack{n=-\infty \\ n\neq 0}}^{\infty} \frac{e^{2\pi i n x}}{(2\pi i n)^r} converges absolutely for all r\geq 2. Therefore, it converges uniformly to \tilde{B}_r(x) for all r\geq 2. This leads to the following bound: |\tilde{B}_r(x)| \leq \frac{2r!}{(2\pi)^r}\sum_{n=1}^\infty \frac{1}{n^r} < \frac{4r!}{(2\pi)^r} \;\; \forall r\geq 2\quad\quad (2)
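As a numerical sanity check of the Fourier expansion, one can compare B_2(x) with a truncated series: for r=2 the terms for n and -n combine into the cosine series \tilde{B}_2(x) = \sum_{n\geq 1} \cos(2\pi n x)/(\pi^2 n^2). A sketch (the function names and the truncation point are my own choices):

```python
import math

def B2(x):
    """B_2 on [0,1): x^2 - x + 1/6, extended periodically."""
    x = x % 1.0
    return x * x - x + 1.0 / 6.0

def fourier_B2(x, terms=200000):
    """Truncation of -2! * sum_{n != 0} e^{2 pi i n x}/(2 pi i n)^2,
    with the n and -n terms paired into cosines."""
    s = sum(math.cos(2 * math.pi * n * x) / (n * n) for n in range(1, terms + 1))
    return s / math.pi**2
```

At x = 0 the series reduces to \zeta(2)/\pi^2 = 1/6 = B_2, which is a special case of the zeta relation derived later in the post.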

Note that the above inequality also remains valid for r=0 and r=1. Now, let’s consider the generating function: F(x,t) = \sum_{n=0}^\infty \frac{\tilde{B}_n(x) t^n}{n!} The inequality (2) implies that: |F(x,t)| < 4 \sum_{n=0}^\infty \left(\frac{|t|}{2\pi}\right)^n Therefore, the series converges absolutely for |t| < 2\pi, and uniformly in x for each such t. The termwise differentiated series satisfies a similar geometric bound, so we may differentiate term by term (for non-integer x) to obtain: \frac{\partial F(x,t)}{\partial x} = \sum_{n=1}^\infty \frac{\tilde{B}_{n-1}(x)}{(n-1)!}t^n = t F(x,t) Solving the above differential equation, we get F(x,t) = G(t) e^{xt} for x\in(0,1), where G is a function of t alone. Next, we integrate F(x,t) between 0 and 1: \begin{aligned} \int_0^1 F(x,t) dx &= G(t) \int_0^1 e^{xt} dx \\ &= G(t) \frac{e^t-1}{t} \end{aligned} On the other hand, note that: \begin{aligned} \int_0^1 F(x,t) dx &= \int_0^1 \sum_{n=0}^\infty \frac{\tilde{B}_n(x) t^n}{n!} dx \\ &= 1 + \sum_{n=1}^\infty \frac{t^n}{n!}\int_0^1 B_n(x) dx \\ &= 1 \end{aligned} Therefore, we obtain G(t) = \frac{t}{e^t - 1} and \boxed{F(x,t) = \frac{t e^{xt}}{e^t-1}}
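The boxed generating function can be checked against a partial sum of the series, using the polynomials tabulated earlier. A small sketch; the truncation at n = 4 and the sample points are arbitrary choices of mine, and the bound (2) makes the tail negligible for small t:

```python
import math

# B_0(x), ..., B_4(x), copied from the table of Bernoulli polynomials above.
polys = [
    lambda x: 1.0,
    lambda x: x - 0.5,
    lambda x: x**2 - x + 1.0 / 6.0,
    lambda x: x**3 - 1.5 * x**2 + 0.5 * x,
    lambda x: x**4 - 2 * x**3 + x**2 - 1.0 / 30.0,
]

def F_series(x, t):
    """Partial sum of sum_n B_n(x) t^n / n!, truncated at n = 4."""
    return sum(p(x) * t**n / math.factorial(n) for n, p in enumerate(polys))

def F_closed(x, t):
    """The boxed closed form t e^{xt} / (e^t - 1)."""
    return t * math.exp(x * t) / math.expm1(t)
```

By (2), the omitted terms are bounded by a geometric tail 4(|t|/2\pi)^n, so for t well inside (0, 2\pi) the two functions should agree to several digits.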

An interesting property of the Bernoulli numbers is that B_{2n+1}=0 for all n\geq 1. To see this, consider: \frac{t}{e^t -1} + \frac{t}{2}= 1+\sum_{n=2}^\infty \frac{B_n t^n}{n!} The left hand side equals \frac{t}{2}\cdot\frac{e^t+1}{e^t-1} = \frac{t}{2}\coth\frac{t}{2}, which is an even function of t. Therefore, the coefficients of the odd powers of t on the right hand side are equal to 0. Setting x=0 in the Fourier series expansion of \tilde{B}_{2n}(x) and combining the terms for \pm k, we can express the even-indexed Bernoulli numbers in terms of the Riemann zeta function: B_{2n} = \frac{2 (-1)^{n-1} (2n)!}{(2\pi)^{2n}} \zeta(2n)
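The relation between B_{2n} and \zeta(2n) is easy to test numerically. In this sketch the Bernoulli numbers are hard-coded known values, and \zeta is approximated by a plain partial sum:

```python
import math
from fractions import Fraction

# Known Bernoulli numbers B_2, B_4, B_6.
B = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}

def zeta(s, terms=200000):
    """Crude partial-sum approximation of the Riemann zeta function."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

def bernoulli_from_zeta(n):
    """B_{2n} = 2 (-1)^(n-1) (2n)! / (2 pi)^(2n) * zeta(2n)."""
    return (2 * (-1)**(n - 1) * math.factorial(2 * n)
            / (2 * math.pi)**(2 * n) * zeta(2 * n))
```

For n = 1 this is the classical \zeta(2) = \pi^2/6 rearranged, since B_2 = 1/6.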

The Bernoulli polynomials satisfy the following identity: {B}_n(x) = \sum_{k=0}^n \binom{n}{k} B_{n-k} x^k This can be proved by noting that: \begin{aligned} \sum_{n=0}^\infty \frac{{B}_n(x) t^n}{n!} &= \frac{te^{xt}}{e^t-1} \\ &= e^{xt} \sum_{n=0}^\infty \frac{B_n t^n}{n!} \\ &= \sum_{m=0}^\infty \frac{(xt)^m}{m!} \sum_{n=0}^\infty \frac{B_n t^n}{n!} \\ &= \sum_{m=0}^\infty \sum_{n=0}^\infty \frac{B_n x^m t^{n+m}}{n! m!} \\ &= \sum_{n=0}^\infty \sum_{k=0}^n \frac{B_k t^n x^{n-k}}{k! (n-k)!} \\ &= \sum_{n=0}^\infty \frac{t^n}{n!}\sum_{k=0}^n \binom{n}{k} B_k x^{n-k} \end{aligned} where x\in [0,1); since both sides of the resulting identity are polynomials in x, it extends to all x. Now, compare the coefficients of t^n to get the desired result. Plugging in x=1 and using B_n(1)=B_n(0)=B_n for n\geq 2 gives the identity: \sum_{k=0}^{n-1} \binom{n}{k} B_k = 0, \quad n\geq 2
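Both the expansion of B_n(x) in terms of Bernoulli numbers and the identity at x = 1 can be verified with exact arithmetic. A sketch using the first few Bernoulli numbers read off from the polynomials above:

```python
from fractions import Fraction
from math import comb

# B_0, ..., B_5; B_3 = B_5 = 0 since the odd-indexed numbers vanish from B_3 on.
B = [Fraction(1), Fraction(-1, 2), Fraction(1, 6), Fraction(0),
     Fraction(-1, 30), Fraction(0)]

def bernoulli_poly_coeffs(n):
    """Coefficients of B_n(x) = sum_k C(n,k) B_{n-k} x^k, ascending in x."""
    return [comb(n, k) * B[n - k] for k in range(n + 1)]

# Coefficients of B_4(x): expect x^4 - 2x^3 + x^2 - 1/30.
coeffs4 = bernoulli_poly_coeffs(4)

# The identity sum_{k<n} C(n,k) B_k = 0, valid for n >= 2, checked at n = 5.
check5 = sum(comb(5, k) * B[k] for k in range(5))
```

The coefficient list for B_4(x) agrees with the table near the top of the post.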

Now, let’s turn our attention to the integral: I=\int_0^{\frac{\pi}{2}}\frac{\sin(2nx)}{\sin^{2n+2}(x)}\cdot \frac{1}{e^{2\pi \cot x}-1} dx where n\in\mathbb{N}. We will use the following trigonometric identity: \frac{\sin(2nx)}{\sin^{2n}(x)} =(-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r}\cot^{2r-1}(x) Substituting the above into the integral and then setting t=\cot x (so that dt = -\csc^2(x)\,dx) gives: \begin{aligned} I &= (-1)^n \int_0^{\pi\over 2}\left( \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r}\cot^{2r-1}(x)\right)\frac{\csc^2(x)}{e^{2\pi \cot x}-1}dx \\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r} \int_0^{\pi \over 2}\cot^{2r-1}(x)\frac{\csc^2(x)}{e^{2\pi \cot x}-1}dx \\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r} \int_0^\infty \frac{t^{2r-1}}{e^{2\pi t}-1}dt \\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r}\frac{(2r-1)! \, \zeta(2r)}{(2\pi)^{2r} }\\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r} (-1)^{r-1} \frac{B_{2r}}{4r}\\ &= \frac{(-1)^{n-1}}{4}\sum_{r=1}^{n}\binom{2n}{2r-1}\frac{B_{2r}}{r} \\ &= \frac{(-1)^{n-1}}{2(2n+1)}\sum_{r=1}^n \binom{2n+1}{2r} B_{2r} \\ &= \frac{(-1)^{n-1}}{2(2n+1)} \left[\sum_{r=0}^{2n} \binom{2n+1}{r} B_r - \binom{2n+1}{0}B_0 - \binom{2n+1}{1} B_1\right] \\ &= \frac{(-1)^{n-1}}{2(2n+1)} \left[-\binom{2n+1}{0}B_0 - \binom{2n+1}{1} B_1\right] \\ &= \frac{(-1)^{n-1}}{4}\cdot \frac{2n-1}{2n+1} \end{aligned} Here we used \int_0^\infty \frac{t^{s-1}}{e^{2\pi t}-1}dt = \frac{\Gamma(s)\zeta(s)}{(2\pi)^s}, the formula B_{2r} = \frac{2(-1)^{r-1}(2r)!}{(2\pi)^{2r}}\zeta(2r) derived earlier, the relation \frac{1}{r}\binom{2n}{2r-1} = \frac{2}{2n+1}\binom{2n+1}{2r}, the vanishing of the odd-indexed Bernoulli numbers B_3, B_5, \ldots (which lets us extend the sum over all indices), and the identity \sum_{r=0}^{2n}\binom{2n+1}{r}B_r = 0. The last line follows from B_0=1 and B_1=-\frac{1}{2}.
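The closed form for I can be checked by direct numerical integration. Following the substitution t = \cot x used above, the sketch below integrates over t with the midpoint rule; the truncation point and step count are ad-hoc choices of mine:

```python
import math

def integrand(t, n):
    """Integrand of I after substituting t = cot(x):
    sin(2n x) / sin^{2n}(x) * 1/(e^{2 pi t} - 1),
    with x = arccot(t) and sin(x) = 1/sqrt(1 + t^2)."""
    x = math.atan2(1.0, t)                     # arccot(t) in (0, pi/2)
    return math.sin(2 * n * x) * (1 + t * t)**n / math.expm1(2 * math.pi * t)

def I_numeric(n, upper=20.0, steps=200000):
    """Midpoint rule on [0, upper]; the tail beyond is crushed by e^{2 pi t}."""
    h = upper / steps
    return h * sum(integrand((k + 0.5) * h, n) for k in range(steps))

def I_closed(n):
    """The derived value (-1)^{n-1}/4 * (2n-1)/(2n+1)."""
    return (-1)**(n - 1) * (2 * n - 1) / (4.0 * (2 * n + 1))
```

For n = 1 the integrand reduces to 2t/(e^{2\pi t}-1) and the closed form gives 1/12, which the quadrature reproduces to several digits.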

Weyl’s Equidistribution Theorem

In this post, we will prove Weyl’s equidistribution theorem. A sequence of real numbers x_1, x_2, \cdots is said to be equidistributed (mod 1) if for every sub-interval (a,b)\subset [0,1], we have \lim_{N\to \infty}\frac{|\{1\leq n\leq N:\; \langle x_n \rangle\in (a,b)\}|}{N} = b-a where \langle x \rangle denotes the fractional part of x. Weyl’s equidistribution criterion states that the following statements are equivalent:

  1. x_1, x_2, \cdots are equidistributed (mod 1).
  2. For each non-zero integer k, we have \lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N e^{2\pi i k x_n}=0
  3. For each Riemann integrable function f:[0,1]\to\mathbb{C}, we have \lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N f(\langle x_n \rangle) = \int_0^1 f(x) dx

Proof: (1) ⇒ (3)

Let I=[a,b)\subseteq [0,1] and note that \frac{|\{1 \leq n \leq N: \langle x_n \rangle\in [a,b) \}|}{N}=\frac{1}{N}\sum_{n=1}^N \chi_{[a,b)}(\langle x_n \rangle) where \chi_{[a,b)}(x) equals 1 if x\in [a,b) and 0 otherwise. By (1), the left hand side tends to b-a = \int_0^1 \chi_{[a,b)}(x) dx, so (3) holds when f is the characteristic function of a subinterval. Now, let \lambda_1, \lambda_2\in \mathbb{R} and f_1, f_2 be functions for which (3) holds. Then, \begin{aligned}\lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N (\lambda_1 f_1 + \lambda_2 f_2)(\langle x_n \rangle) &= \lim_{N\to \infty} \frac{\lambda_1}{N}\sum_{n=1}^N f_1(\langle x_n\rangle) + \lim_{N\to \infty}\frac{\lambda_2}{N}\sum_{n=1}^Nf_2(\langle x_n\rangle) \\ &= \lambda_1\int_0^1 f_1(x) dx + \lambda_2 \int_0^1 f_2(x) dx \\ &= \int_0^1 (\lambda_1 f_1 + \lambda_2 f_2)(x) dx\end{aligned} Thus, (3) holds for all linear combinations of characteristic functions of subintervals of [0,1], i.e., for all step functions.

Now, let f:[0,1]\to \mathbb{R} be an integrable function, and let \epsilon >0. Choose step functions f_1 and f_2 such that:

  • f_1\leq f\leq f_2 pointwise
  • \int_0^1 (f_2(x)-f_1(x))dx < \frac{\epsilon}{2}
  • There exists N_0 such that \left|\int_0^1 f_1(x)dx - \frac{1}{N}\sum_{n=1}^N f_1(\langle x_n\rangle) \right| < \frac{\epsilon}{2} and \left|\int_0^1 f_2(x)dx - \frac{1}{N}\sum_{n=1}^N f_2(\langle x_n\rangle) \right| < \frac{\epsilon}{2} for all N\geq N_0
It follows that for N\geq N_0, \begin{aligned} \int_0^1 f(x) dx - \frac{1}{N}\sum_{n=1}^N f(\langle x_n\rangle) &\leq \int_0^1 f(x) dx - \frac{1}{N}\sum_{n=1}^N f_1(\langle x_n\rangle) \\ &< \int_0^1 f(x) dx -\int_0^1 f_1(x) dx +\frac{\epsilon}{2} \\ &\leq \int_0^1 (f_2(x)-f_1(x)) dx + \frac{\epsilon}{2} \\ &< \epsilon \end{aligned} In a similar way, we can prove that \int_0^1 f(x) dx - \frac{1}{N}\sum_{n=1}^N f(\langle x_n\rangle) > -\epsilon \quad \forall\; N\geq N_0 Therefore, we have \left|\int_0^1 f(x) dx - \frac{1}{N}\sum_{n=1}^N f(\langle x_n\rangle) \right| < \epsilon \quad \forall\; N\geq N_0 To see that (3) holds when f is complex valued, we need only consider the real and imaginary parts separately.

(2) ⇒ (3)

Let f:[0,1]\to \mathbb{R} be continuous with f(0)=f(1), and let \epsilon > 0. The Stone-Weierstrass Theorem allows us to choose a trigonometric polynomial p such that: \sup_{x\in [0,1]} |f(x) - p(x)| < \frac{\epsilon}{3} Also, applying (2) to each non-constant term of p gives an N_0 such that for N\geq N_0, we have \left|\frac{1}{N}\sum_{n=1}^N p(\langle x_n \rangle)-\int_0^1 p(x) dx \right| < \frac{\epsilon}{3} Now, \begin{aligned} &\; \left|\frac{1}{N}\sum_{n=1}^N f(\langle x_n\rangle) - \int_0^1 f(x) dx\right| \\ &= \left|\frac{1}{N}\sum_{n=1}^N (f(\langle x_n \rangle) - p(\langle x_n \rangle)) + \int_0^1 (p(x)-f(x))dx + \frac{1}{N}\sum_{n=1}^N p(\langle x_n \rangle) - \int_0^1 p(x) dx\right| \\ &< \frac{1}{N}\sum_{n=1}^N\left|f(\langle x_n \rangle) - p(\langle x_n \rangle) \right| + \int_0^1 \left|p(x)-f(x) \right| dx + \left|\frac{1}{N}\sum_{n=1}^N p(\langle x_n \rangle) - \int_0^1 p(x) dx \right| \\ &< \frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3} \\ &= \epsilon \end{aligned} for all N\geq N_0. Thus, (3) holds for continuous functions on [0,1] with f(0)=f(1). By the proof of (1) ⇒ (3), it is sufficient to show that (3) holds for all step functions on [0,1]. If g is a step function on [0,1], we can find continuous functions g_1, g_2 with g_i(0)=g_i(1) such that g_1\leq g\leq g_2 and \int_0^1 (g_2(x)-g_1(x))dx < \epsilon. The sandwich argument from the proof of (1) ⇒ (3) then shows that (3) holds for g.

The implications (3) ⇒ (1) and (3) ⇒ (2) are immediate: apply (3) to the characteristic function of (a,b), which is Riemann integrable, and to f(x) = e^{2\pi i k x}, respectively.
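As a concrete illustration, the sequence x_n = n\sqrt{2} is equidistributed mod 1 (the classical example of a sequence satisfying the criterion; the specific N, the test interval, and the frequencies checked below are arbitrary choices of mine):

```python
import math

alpha = math.sqrt(2)                      # an irrational number
N = 10000
xs = [(n * alpha) % 1.0 for n in range(1, N + 1)]

# Criterion (2): the averaged exponential sums should be small for k != 0.
exp_averages = []
for k in (1, 2, 3):
    s = sum(complex(math.cos(2 * math.pi * k * x), math.sin(2 * math.pi * k * x))
            for x in xs)
    exp_averages.append(abs(s) / N)

# Criterion (1): the fraction of points in [0.2, 0.5) should approach 0.3.
frac = sum(0.2 <= x < 0.5 for x in xs) / N
```

Here the exponential sums are geometric series, so their averages are O(1/N), which is why the convergence is visible already at modest N.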

References

  • Hannigan-Daley, Brad. Equidistribution and Weyl’s criterion. Retrieved from http://individual.utoronto.ca/hannigandaley/equidistribution.pdf. Accessed 5 Feb. 2020.
  • Stein, Elias M. and Shakarchi, Rami. Fourier Analysis: An Introduction. Princeton University Press, 2003

Eisenstein’s Proof of Quadratic Reciprocity

In this post, we take a look at an interesting proof of the Quadratic Reciprocity theorem by Gotthold Eisenstein.

Definition: The Legendre symbol is a function \left(\frac{a}{p}\right), defined for an odd prime p, which takes the values 0 and \pm 1 as follows. \left(\frac{a}{p}\right) = \begin{cases}0 \quad \text{if }p|a \\ 1 \quad \text{if }a\text{ is a quadratic residue modulo }p \\ -1 \quad \text{if }a\text{ is a quadratic non-residue modulo }p\end{cases} Theorem (Quadratic Reciprocity Law): If p and q are distinct odd primes, then the congruences \begin{aligned} x^2 \equiv q \quad (\text{mod }p) \\ x^2 \equiv p \quad (\text{mod }q) \end{aligned} are both solvable or both unsolvable unless both p and q leave the remainder 3 when divided by 4. Written symbolically, \left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{(p-1)(q-1)/4}

We will start by proving two important results about quadratic residues that will be useful later on.

Lemma 1: Let n\not\equiv 0 \;(\text{mod }p). Then n is a quadratic residue modulo p iff n^{\frac{p-1}{2}}\equiv 1 \; (\text{mod }p).

Proof: Fermat’s little theorem tells us that n^{p-1}\equiv 1 \;(\text{mod }p) whenever p \not| n. Since n^{p-1}-1\equiv (n^{\frac{p-1}{2}}-1)(n^{\frac{p-1}{2}}+1)\equiv 0 \; (\text{mod }p) we have n^{\frac{p-1}{2}}\equiv \pm 1\; (\text{mod }p). It therefore suffices to show that the value 1 occurs precisely when n is a quadratic residue modulo p.

Suppose that \left(\frac{n}{p} \right)=1. Then, there is an integer x such that n\equiv x^2 \; (\text{mod }p). By Fermat’s little theorem, n^{\frac{p-1}{2}}\equiv x^{p-1} \equiv 1 \; (\text{mod }p)

Conversely, assume that n^{\frac{p-1}{2}}\equiv 1 \; (\text{mod }p). Let g be a primitive root of p. Then, we have n\equiv g^k \; (\text{mod }p) for some integer k. Since n^{\frac{p-1}{2}}\equiv g^{\frac{k(p-1)}{2}} \equiv 1 \; (\text{mod }p), the order of g, which is p-1, must divide the exponent \frac{k(p-1)}{2}. Therefore, p-1 | \frac{k(p-1)}{2} and thus k is an even integer. Let’s say k=2j for some integer j. Then, we have n \equiv g^k \equiv g^{2j} \equiv (g^j)^2 \; (\text{mod }p) This proves that n is a quadratic residue modulo p.
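Lemma 1 (Euler's criterion) is easy to spot-check by brute force. A sketch; the helper names are my own:

```python
def is_qr(n, p):
    """Brute force: is n congruent to a nonzero square mod p?"""
    return any((x * x - n) % p == 0 for x in range(1, p))

def euler_power(n, p):
    """n^((p-1)/2) mod p, via fast modular exponentiation."""
    return pow(n, (p - 1) // 2, p)

# For every n coprime to p, the power should be 1 exactly for quadratic
# residues, and p - 1 (i.e. -1 mod p) otherwise.
checks = [(euler_power(n, p) == 1) == is_qr(n, p)
          for p in (5, 7, 11, 13, 17)
          for n in range(1, p)]
```

The list `checks` should contain only `True`, confirming the lemma for these small primes.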

Lemma 2 (Gauss’s Lemma): For any odd prime p, let a be an integer that is co-prime to p. Consider the integers S = \left\{ a,\ 2a,\ 3a,\cdots, \frac{p-1}{2}a \right\} and their least positive residues modulo p. Let n be the number of these residues that are greater than p/2. Then \left(\frac{a}{p}\right)=(-1)^n

Proof: Since p\not | a, none of the integers in S are congruent to 0 and no two of them are congruent to each other modulo p. Let r_1, \cdots, r_m be the residues modulo p smaller than \frac{p}{2}, and let s_1, \cdots, s_n be the residues modulo p greater than \frac{p}{2}. Then m+n=\frac{p-1}{2} and the integers r_1, \cdots, r_m, p-s_1, \cdots , p-s_n are all positive and less than \frac{p}{2}. Now, we will prove that no two of these integers are equal. Suppose that for some choice of i and j we have p-s_i = r_j. We can choose integers u and v, with 1 \leq u,v \leq \frac{p-1}{2}, satisfying \begin{aligned} s_i &\equiv u a \; (\text{mod }p) \\ r_j &\equiv v a \; (\text{mod }p) \end{aligned} Now, we have s_i+r_j \equiv a(u+v) \equiv p \equiv 0 \; (\text{mod }p) Since \gcd(a,p)=1, this implies that u+v \equiv 0 \; (\text{mod }p). However, this is not possible because 2\leq u+v \leq p-1. Thus, we have proven that the numbers r_1,\cdots, r_m, p-s_1, \cdots, p-s_n are simply a rearrangement of the integers 1,2,\cdots, \frac{p-1}{2}. Their product is equal to \left(\frac{p-1}{2} \right)!. Therefore, \begin{aligned} \left(\frac{p-1}{2}\right)! &= r_1 \cdots r_m (p-s_1)\cdots (p-s_n) \\ &\equiv (-1)^n r_1 \cdots r_m s_1\cdots s_n \; (\text{mod }p) \\ &\equiv (-1)^n a\cdot 2a\cdots \left(\frac{p-1}{2}\right)a \; (\text{mod }p) \\ &\equiv (-1)^n a^{\frac{p-1}{2}} \left( \frac{p-1}{2}\right)! \; (\text{mod }p) \end{aligned} The \left( \frac{p-1}{2}\right)! term can be cancelled from both sides as p\not| \left( \frac{p-1}{2}\right)!. In other words, we have a^{\frac{p-1}{2}}\equiv (-1)^n \; (\text{mod }p). By Lemma 1, a is a quadratic residue modulo p precisely when (-1)^n = 1; in other words, \left(\frac{a}{p}\right)=(-1)^n. This completes the proof of Gauss’s lemma.
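Gauss's lemma also lends itself to a direct check: count the residues of a, 2a, \ldots, \frac{p-1}{2}a exceeding p/2 and compare (-1)^n with the Legendre symbol computed via Euler's criterion (Lemma 1). A sketch:

```python
def gauss_sign(a, p):
    """(-1)^n, where n counts the least positive residues of
    a, 2a, ..., ((p-1)/2) a that exceed p/2."""
    n = sum((k * a) % p > p / 2 for k in range(1, (p - 1) // 2 + 1))
    return (-1)**n

def legendre(a, p):
    """Legendre symbol via Euler's criterion (Lemma 1)."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

agreement = [gauss_sign(a, p) == legendre(a, p)
             for p in (5, 7, 11, 13, 17, 19)
             for a in range(1, p)]
```

Every entry of `agreement` should be `True`, matching the statement of Gauss's lemma for these primes.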

We are now ready to prove the Quadratic reciprocity theorem.

Proof of the Quadratic Reciprocity Theorem:

Using the periodicity properties of \sin and Gauss’s lemma, it is easy to verify the following result:

Lemma: Let p and q be distinct odd primes and let A = \left\{ \alpha\in \mathbb{Z} | 1\leq \alpha \leq \frac{p-1}{2}\right\} be a half system modulo p. Then, \left(\frac{q}{p}\right) = \prod_{\alpha\in A}\frac{\sin\left(\frac{2\pi}{p}q\alpha\right)}{\sin\left(\frac{2\pi}{p}\alpha\right)} \quad\quad (1)

We start by examining the right hand side of equation (1). The addition theorem for trigonometric functions yields \sin 2\alpha = 2\sin\alpha\cos\alpha and \sin 3\alpha = \sin\alpha(3-4\sin^2\alpha). Induction shows that \sin q\alpha = \sin\alpha P(\sin\alpha) for all odd q\geq 1, where P\in \mathbb{Z}[X] is a polynomial of degree q-1 with leading coefficient (-4)^{\frac{q-1}{2}}. Thus there exist a_i \in \mathbb{Z} such that \begin{aligned}\frac{\sin qz}{\sin z} &= (-4)^{\frac{q-1}{2}} \left( (\sin z)^{q-1}+a_{q-2} (\sin z)^{q-2}+\cdots + a_0 \right) \\ &= (-4)^{\frac{q-1}{2}} \psi(X), \quad \text{where }X=\sin z\end{aligned} Since \phi(z)=\frac{\sin qz}{\sin z} is an even function of z and \sin(-z)=-\sin z, the polynomial \psi satisfies \psi(-X)=\psi(X), hence a_{q-2}=a_{q-4}=\cdots = a_1 = 0. Now \phi(z) has zeros \left\{ \pm \frac{2\pi}{q}\beta, \ 1\leq \beta \leq \frac{q-1}{2}\right\}, which give q-1 distinct roots \pm\sin\frac{2\pi\beta}{q} of \psi. Since \psi is monic of degree q-1, we may write \psi(X) = \prod_{\beta\in B}\left(X^2-\sin^2\frac{2\pi\beta}{q} \right) where B=\left\{1,\cdots,\frac{q-1}{2} \right\} is a half system modulo q. Replacing X by \sin z, we get \frac{\sin qz}{\sin z} = (-4)^{\frac{q-1}{2}} \prod_{\beta\in B }\left( \sin^2z -\sin^2\frac{2\pi\beta}{q}\right) \quad \quad (2) Put z=\frac{2\pi\alpha}{p} in equation (2) and plug the result into equation (1). \begin{aligned} \left(\frac{q}{p}\right) &= \prod_{\alpha\in A} (-4)^{\frac{q-1}{2}} \prod_{\beta\in B} \left( \sin^2\frac{2\pi\alpha}{p} -\sin^2\frac{2\pi\beta}{q}\right)\\ &= (-4)^{\frac{q-1}{2} \frac{p-1}{2}} \prod_{\alpha\in A}\prod_{\beta\in B} \left( \sin^2\frac{2\pi\alpha}{p} -\sin^2\frac{2\pi\beta}{q}\right)\quad\quad (3) \end{aligned} Exchanging p and q on the right side of (3) changes the sign of each of the \frac{(p-1)(q-1)}{4} factors, and so gives rise to a factor of (-1)^{(p-1)(q-1)/4}. Therefore, \left(\frac{q}{p}\right) = (-1)^{(p-1)(q-1)/4}\left(\frac{p}{q}\right) \quad\quad (4) which is the quadratic reciprocity law.
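Both Eisenstein's sine-product lemma (1) and the reciprocity law (4) can be verified numerically for small primes. In this sketch the floating-point product is rounded to the nearest integer, which must be \pm 1:

```python
import math

def legendre(a, p):
    """Legendre symbol via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def sine_product(q, p):
    """Right hand side of (1): product over the half system 1..(p-1)/2."""
    prod = 1.0
    for a in range(1, (p - 1) // 2 + 1):
        prod *= math.sin(2 * math.pi * q * a / p) / math.sin(2 * math.pi * a / p)
    return round(prod)

primes = [3, 5, 7, 11, 13]
lemma_ok = all(sine_product(q, p) == legendre(q, p)
               for p in primes for q in primes if q != p)
reciprocity_ok = all(legendre(p, q) * legendre(q, p)
                     == (-1)**((p - 1) * (q - 1) // 4)
                     for i, p in enumerate(primes) for q in primes[i + 1:])
```

Since the exact product is \pm 1, rounding absorbs the accumulated floating-point error for primes of this size.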

References

  • Lemmermeyer, Franz. Reciprocity Laws: From Euler to Eisenstein. New York, Springer, 2000
  • Burton, David M. Elementary Number Theory. New Delhi, Tata McGraw-Hill Publishing Company Limited, 2006