\documentclass{article}
%\usepackage[a4paper, total={6in, 8in}]{geometry}
\usepackage{mathtools}
\usepackage{mathrsfs}
\usepackage{dsfont}
\usepackage{amssymb}
\usepackage{latexsym}
\newcommand{\defeq}{\vcentcolon=}
\author{Benjamin Brady}
\title{An investigation into factorial identities}
\date{December 2019}
\begin{document}
\maketitle
\section{Introduction}
The factorial function has many uses. Its main and most obvious use is counting the number of ways to arrange objects. However, it also appears in many other definitions and admits many definitions of its own.
Unfortunately, many of those definitions limit the scope of the allowed input. In this paper I will investigate some definitions that extend the factorials and allow them to be calculated to within some error term more efficiently than a naive approach might allow.
\section{Main Definition}
At present the factorials are defined only for non-negative integer inputs:
\[\forall z \in \mathds{N}, \quad z! = \prod_{k=1}^{k=z} k\]
This is because our product multiplies every integer value taken by its running index. However, if \(z\) isn't an integer then this approach blatantly gives the wrong answer. Thus, our current definition is ill-defined for some inputs. We will address this in the next section.
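As a quick illustration, the integer product definition above can be computed directly (a minimal sketch in Python; the function name is a choice made here for illustration):

```python
from math import prod

def factorial_product(z: int) -> int:
    """z! as the product of the integers 1..z (the empty product gives 0! = 1)."""
    return prod(range(1, z + 1))

print(factorial_product(5))  # 120
```

Note that the function only accepts integers; there is no sensible way to ask it for \(1.5!\), which is exactly the limitation we want to remove.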
Conversely, we know that there is a notion of an exponential function in even some of the most restrictive number systems. However, the exponential function is defined as a summation of powers of the input, dampened by terms that are exactly the factorials. But what if the factorials were, for some reason, ill-defined for some inputs (as mentioned above)? We might need to find other ways to construct our exponential function. Ideally, we can find a definition of the factorial that is valid for all inputs. This will act as our motivation for the time being.
\section{Basic analytical continuation}
The exponential function is defined using the factorials, but it is actually extremely well-defined. So it stands to reason that it might be a good place to start our search for a new definition. The exponential function is defined for our purposes as:
\[\forall a \in \mathds{N}, e^{at} = \sum_{z \geq 0} \frac{(at)^z}{z!}\]
We can take this definition and try to move the \(z!\) from the denominator onto a well-defined numerator. Extending the function beyond its original domain in this way is called analytic continuation.
\section{Integral definition}
Moving a denominator out of an indexed sum is virtually impossible directly. So instead, can we find another indexed sum and match the two index by index? The Beta transform (also known as the two-sided Laplace transform), which converts a function into its frequency domain, can help us out here. It is defined as follows:
\[\beta\{f(t)\}(s) \defeq \int_{-\infty}^{\infty} f(t) e^{-st} dt\]
Let's find the frequency domain of the exponential function, both directly and as its infinite summation. In both cases we need to introduce a new factor into the Beta transform to help convergence. We will use the Heaviside unit step function \(u(t)\) for this purpose, defining a new transform with this factor built in:
\[\beta\{f(t) u(t)\}(s) \leadsto \int_{-\infty}^{\infty} f(t) u(t) e^{-st} dt \defeq \mathscr{L}\{f(t)\}(s)\]
This is called the Laplace transform. Its integral definition is:
\[\mathscr{L}\{f(t)\}(s) \defeq \int_{0}^{\infty} f(t) e^{-st} dt\]
We can now continue with some normal algebraic manipulation:
\[\mathscr{L}\{e^{at}\}(s) \leadsto \int_{0}^{\infty} e^{at} e^{-st} dt\]
\[= \int_{0}^{\infty} e^{-t(s-a)} dt\]
\[= \frac{-1}{s-a} e^{-t(s-a)} \Big|_{0}^{\infty}\]
\[= \frac{1}{s-a}, \Re (s) > a\]
\[= \frac{1}{s}\frac{1}{1-\frac{a}{s}}, |s| > |a|\]
\[= \frac{1}{s}\sum_{z \geq 0} (\frac{a}{s}) ^ {z}, 1 > |\frac{a}{s}|\]
\[\therefore \mathscr{L}\{e^{at}\}(s) = \sum_{z \geq 0} \frac{a^{z}}{s^{z+1}}\]
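As a numerical sanity check of this result (an illustrative sketch: the values of \(a\) and \(s\) and the truncation at 200 terms are arbitrary choices made here), the truncated series should agree with the closed form \(\frac{1}{s-a}\):

```python
# Truncate the series sum_{z >= 0} a^z / s^(z+1) and compare with 1/(s - a),
# valid here because |a/s| < 1.
a, s = 2.0, 5.0
series = sum(a**z / s**(z + 1) for z in range(200))
closed_form = 1 / (s - a)
print(series, closed_form)
```

Since \(|a/s| = 0.4\), the geometric tail vanishes well below machine precision after 200 terms.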
Now, the infinite summation:
\[\mathscr{L}\Big\{\sum_{z \geq 0} \frac{a^{z} t^{z}}{z!}\Big\}(s) \leadsto \int_{0}^{\infty} \sum_{z \geq 0} \frac{a^{z}}{z!} t^{z} e^{-st} dt\]
Substituting our answer from earlier gives:
\[\sum_{z \geq 0} \frac{a^{z}}{s^{z+1}} = \sum_{z \geq 0} \frac{a^{z}}{z!} \int_{0}^{\infty} t^{z} e^{-st} dt\]
While I don't rigorously justify equating the coefficients of these generating functions in this paper, the resulting relation can be established rigorously by other means, so we will continue on for the time being:
\[\therefore \frac{a^{z}}{s^{z+1}} = \frac{a^{z}}{z!} \int_{0}^{\infty} t^{z} e^{-st} dt\]
\[\implies z! = s^{z+1} \int_{0}^{\infty} t^{z} e^{-st} dt\]
Setting \(s = 1\) recovers the following form of the factorial:
\[z! = \int_{0}^{\infty} t^{z} e^{-t} dt\]
This is, in every sense, the integral definition that is used most of the time. However, for some inputs we have to numerically approximate the area under a complicated curve, which isn't entirely efficient. A better definition might be required.
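Before moving on, the integral definition can be checked numerically (an illustrative sketch: the midpoint rule, the cut-off at \(t = 60\), and the step count are choices made here, not part of the derivation):

```python
from math import exp

def factorial_integral(z: float, upper: float = 60.0, steps: int = 200_000) -> float:
    """Midpoint-rule approximation of the integral of t^z e^{-t} over [0, upper]."""
    h = upper / steps
    return sum(((i + 0.5) * h) ** z * exp(-(i + 0.5) * h) * h for i in range(steps))

print(factorial_integral(5))    # close to 5! = 120
print(factorial_integral(0.5))  # close to sqrt(pi)/2, approximately 0.8862
```

Note that the integral now makes sense for non-integer inputs such as \(z = 0.5\), which is exactly the extension we were after.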
For now, we will continue with our current workings. Knowing that our integral is equal to the factorial function, we can look at a new transform, called the Mellin transform:
\[\mathcal{M}\{f(t)\}(s) \defeq \int_{0}^{\infty} f(t) t^{s-1} dt\]
The Mellin transform can be constructed from the Beta transform discussed earlier. Substituting \(w = e^{-t}\) (so that \(e^{-st} = w^{s}\), \(dt = -\frac{dw}{w}\), and the limits \(t: -\infty \to \infty\) become \(w: \infty \to 0\)):
\[\beta\{f(e^{-t})\}(s) \leadsto \int_{-\infty}^{\infty} f(e^{-t}) e^{-st} dt = \int_{0}^{\infty} f(w) w^{s} \frac{dw}{w} = \int_{0}^{\infty} f(t) t^{s-1} dt\]
\[\therefore \mathcal{M}\{f(t)\}(s) = \beta\{f(e^{-t})\}(s)\]
Taking the Mellin transform of the exponential decay function:
\[\mathcal{M}\{e^{-t}\}(s) \leadsto \int_{0}^{\infty} e^{-t} t^{s-1} dt\]
We already concluded that this integral is equal to \((s-1)!\). This transform is important enough that a new function, the gamma function, is defined to be equal to \((z-1)!\). We will discuss the gamma function later.
We will now find yet another use for our integral definition. Substituting \(u = t^{p}\) (so that \(du = p t^{p-1} dt\)):
\[\int_{0}^{\infty} t^{s} e^{-t^{p}} dt = \int_{0}^{\infty} t^{s+1-p} t^{p-1} e^{-t^{p}} dt = \frac{1}{p} \int_{0}^{\infty} (t^{p})^{\frac{s+1}{p}-1} e^{-t^{p}} p t^{p-1} dt\]
\[= \frac{1}{p} \int_{0}^{\infty} u^{\frac{s+1}{p}-1} e^{-u} du\]
We recognise this last integral to be \((\frac{s+1}{p}-1)!\). We can rewrite the expression as:
\[\frac{1}{p} \frac{(\frac{s+1}{p})!}{\frac{s+1}{p}}\]
Setting \(s=0\) recovers the following form of the factorials:
\[\int_{0}^{\infty} e^{-t^{p}} dt = (\frac{1}{p})!\]
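This identity is easy to probe numerically (again an illustrative sketch with an arbitrary cut-off; the reference value \((\frac{1}{p})!\) is computed via the standard library using \((\frac{1}{p})! = \Gamma(1 + \frac{1}{p})\)):

```python
from math import exp, gamma, sqrt, pi

def stretched_exponential_integral(p: float, upper: float = 20.0, steps: int = 200_000) -> float:
    """Midpoint-rule approximation of the integral of e^{-t^p} over [0, upper]."""
    h = upper / steps
    return sum(exp(-(((i + 0.5) * h) ** p)) * h for i in range(steps))

# (1/p)! = Gamma(1 + 1/p); for p = 2 the integral is the Gaussian value sqrt(pi)/2
print(stretched_exponential_integral(2.0), sqrt(pi) / 2)
print(stretched_exponential_integral(3.0), gamma(1 + 1 / 3))
```

The \(p = 2\) case recovers the familiar Gaussian integral over the half-line.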
So far we have successfully found the most useful integral definitions of the factorials.
We will now examine the Euler definition, but for this we need to return to our original product definition on the integers.
\section{Eulerian definition}
As established at the beginning of this document:
\[z! = \prod_{k=1}^{k=z} k\]
The main problem with this is that the product runs over all elements of the ordered set of integers from 1 to \(z\). This can be problematic because:
\begin{enumerate}
\item It only considers integer entries in a set, which isn't always what we want. For example, if we want \(1.5!\) then we will only consider the element \(\{1\}\), because it is the only integer element in the range \([1, 1.5]\).
\item A product with non-integer bounds isn't fully defined.
\item We might not have an ordering on our set (for example, the set of complex numbers). This is never an actual obstacle, since we can pick off arbitrary elements until we have evaluated all of them. However, we want to approximate the product with some error term later, so we ideally want an ordering in which higher-indexed terms approach 0.
\end{enumerate}
We can resolve this by removing any potential non-integers from our running index using parameterisation. For this purpose we will multiply by a product and its own multiplicative inverse, which leaves the value unchanged:
\[z! = \prod_{k=1}^{k=z} k \quad \prod_{k=z+1}^{k=z+\tau} k \quad \prod_{k=z+1}^{k=z+\tau} k^{-1} \implies z! = \prod_{k=1}^{k=z+\tau} k \quad \prod_{k=z+1}^{k=z+\tau} k^{-1}\]
\[\implies z! = \prod_{k=1}^{k=\tau} k \quad \prod_{k=\tau+1}^{k=\tau+z} k \quad \prod_{k=1}^{k=\tau} (k+z)^{-1}\]
\[\implies z! = \prod_{k=1}^{k=\tau} k \quad \prod_{k=1}^{k=z} (k+\tau) \quad \prod_{k=1}^{k=\tau} k^{-1} \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
\[\implies z! = \prod_{k=1}^{k=z} (k+\tau) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1} \implies z! = \tau^{z} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
This last line is fairly important. It will come up again later, since we can change the form of the \(\tau^{z}\) into something more useful in a couple of different scenarios; for now, though, we will convert it into the Eulerian definition. First we need to note that for any positive integer \(\phi\), the power \(\phi^{\omega}\) can be represented via a telescoping product:
\[\phi^{\omega} = \prod_{i=1}^{i=\phi-1} (\frac{i+1}{i})^{\omega} = (\frac{\phi}{\phi+1})^{\omega} \prod_{i=1}^{i=\phi} (1 + \frac{1}{i})^{\omega}\]
\[\implies z! = (\frac{\tau}{\tau+1})^{z} \prod_{k=1}^{k=\tau} (1 + \frac{1}{k})^{z} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
\[\implies \lim_{\tau \to \infty} z! = \lim_{\tau \to \infty} \quad (\frac{1}{1+\frac{1}{\tau}})^{z} \quad \prod_{k=1}^{k=z} (1 + \frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} \frac{(1 + \frac{1}{k})^{z}}{(1 + \frac{z}{k})}\]
and finally we recover the Euler definition of the factorial function, valid for every input except the negative integers:
\[z! = \prod_{k=1}^{k=\infty} \frac{(1 + \frac{1}{k})^{z}}{(1 + \frac{z}{k})}\]
This definition is very effective for our purposes. Can we do better? An approach to that question might involve closer analysis of the limit to infinity as it acts on the \(\tau^{z}\) factor.
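The Euler product also lends itself to direct computation by truncation (an illustrative sketch; the convergence is only of order \(\frac{1}{\tau}\), hence the large number of terms chosen here):

```python
def euler_factorial(z: float, terms: int = 100_000) -> float:
    """Truncation of the Euler product (1 + 1/k)^z / (1 + z/k) over k = 1..terms."""
    result = 1.0
    for k in range(1, terms + 1):
        result *= (1 + 1 / k) ** z / (1 + z / k)
    return result

print(euler_factorial(5.0))  # slowly approaches 5! = 120
print(euler_factorial(0.5))  # approaches 0.5!, approximately 0.8862
```

Unlike the integer product, this accepts any \(z\) that is not a negative integer.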
\section{Weierstrass definition}
As hinted at, the Weierstrass definition is constructed by starting from an identity we already know to be true:
\[z! = \tau^{z} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
We will rewrite the \(\tau^{z}\) factor using the exponential function, with a view to producing another infinite product.
\[z! = \mathrm{e}^{z\ln{(\tau)}} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
The motivation might not be clear at the moment, but adding both the harmonic sum up to \(\tau\) and its additive inverse inside the exponential keeps the value of the equation the same (we will also shift some signs, without changing the result, for clarity):
\[z! = \exp{(-z(\sum_{k=1}^{k=\tau} \frac{1}{k} - \ln{(\tau)}) + z \sum_{k=1}^{k=\tau} \frac{1}{k})} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
Most of the work left is just algebraic manipulation:
\[\implies z! = \exp{(-z(\sum_{k=1}^{k=\tau} \frac{1}{k} - \ln{(\tau)}))} \exp{(z\sum_{k=1}^{k=\tau} \frac{1}{k})}\quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
\[\implies z! = \exp{(-z(\sum_{k=1}^{k=\tau} \frac{1}{k} - \ln{(\tau)}))} \prod_{k=1}^{k=\tau} \exp{(\frac{z}{k})} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau}) \quad \prod_{k=1}^{k=\tau} (1+\frac{z}{k})^{-1}\]
\[\implies z! = \exp{(-z(\sum_{k=1}^{k=\tau} \frac{1}{k} - \ln{(\tau)}))} \quad \prod_{k=1}^{k=\tau} \exp{(\frac{z}{k})} (1+\frac{z}{k})^{-1} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau})\]
If we take our limit again we get:
\[\implies \lim_{\tau \to \infty} z! = \exp{(\lim_{\tau \to \infty} -z(\sum_{k=1}^{k=\tau} \frac{1}{k} - \ln{(\tau)}))} \lim_{\tau \to \infty} \prod_{k=1}^{k=\tau} \mathrm{e}^{\frac{z}{k}} (1+\frac{z}{k})^{-1} \quad \prod_{k=1}^{k=z} (1+\frac{k}{\tau})\]
Using the fact that:
\[\lim_{\rho \to \infty} (\sum_{n=1}^{n=\rho} \frac{1}{n} - \ln{\rho}) = \gamma\]
where \(\gamma\) is the Euler-Mascheroni constant \(\approx 0.57721566...\) yields:
\[z! = \mathrm{e}^{-\gamma z} \prod_{k=1}^{k=\infty} ((1 + \frac{z}{k})^{-1} \mathrm{e}^{\frac{z}{k}})\]
This is the Weierstrass definition, and it is both widely applicable and accurate. This concludes the part of the paper dealing with direct definitions of the factorials; in the remainder we will investigate identities that involve them.
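As with the Euler product, the Weierstrass definition can be checked by truncation (an illustrative sketch; the value of \(\gamma\) is hard-coded to the precision quoted above):

```python
from math import exp, factorial

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def weierstrass_factorial(z: float, terms: int = 100_000) -> float:
    """Truncated Weierstrass product: e^{-gamma z} * prod_{k=1}^{terms} e^{z/k} / (1 + z/k)."""
    result = exp(-EULER_GAMMA * z)
    for k in range(1, terms + 1):
        result *= exp(z / k) / (1 + z / k)
    return result

print(weierstrass_factorial(5.0), factorial(5))  # the truncation approaches 120
```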
\section{Rate of change}
We will now take some time to look at the rate of change of the factorial function. Our original definition is only defined for integers, and it is the one we will work with for the moment:
\[z! = \prod_{k=1}^{k=z} k\]
For a rate of change, we can look at the finite difference quotient with step size \(h\):
\[\Delta_{h} f(x) \defeq \frac{f(x + h) - f(x)}{h}\]
Using a step size of 1 gives us a rate of change that we can always use for discrete functions defined at positive integer inputs:
\[\Delta f(x) \defeq f(x + 1) - f(x)\]
Taking the finite difference of the factorials gives:
\[\Delta z! = (z + 1)! - z! \implies \Delta z! = (z + 1) z! - z!\]
\[\implies \Delta z! = z! z\]
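This identity is exact on the integers and trivial to verify (a minimal sketch using the standard library's factorial):

```python
from math import factorial

# Forward difference with step size 1: (z + 1)! - z! should equal z * z!
for z in range(1, 8):
    assert factorial(z + 1) - factorial(z) == z * factorial(z)
print("finite difference identity holds for z = 1..7")
```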
We can take a further look at our other definitions of the factorials to find the derivative of the factorials. We will use the Weierstrass definition:
\[z! = \mathrm{e}^{-\gamma z} \prod_{k=1}^{k=\infty} ((1 + \frac{z}{k})^{-1} \mathrm{e}^{\frac{z}{k}})\]
Taking the natural log and implicitly differentiating:
\[\ln{(z!)} = \ln{(\mathrm{e}^{-\gamma z} \prod_{k=1}^{k=\infty} ((1 + \frac{z}{k})^{-1} \mathrm{e}^{\frac{z}{k}}))}\]
\[\implies \ln{(z!)} = -\gamma z + \sum_{k=1}^{k=\infty} (\frac{z}{k} + \ln{(k)} - \ln{(z + k)})\]
Taking the derivative of this implicitly gives:
\[\frac{1}{z!} (\frac{d}{dz} (z!)) = -\gamma + \sum_{k=1}^{k=\infty} (\frac{1}{k} - \frac{1}{z+k})\]
We can get the derivative of the factorials:
\[z!^{\prime} = z!(-\gamma + \sum_{k=1}^{k=\infty} (\frac{1}{k} - \frac{1}{z+k}))\]
The derivative by itself doesn't have many direct applications, but a more useful function is the quotient of the derivative of the factorial by the factorial itself (\(\frac{z!^{\prime}}{z!}\)), denoted here as \(\overset{\sim}{\psi} (z)\):
\[\overset{\sim}{\psi} (z) = \frac{z!^{\prime}}{z!} = -\gamma + \sum_{k=1}^{k=\infty} (\frac{1}{k} - \frac{1}{z+k})\]
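The series for \(\overset{\sim}{\psi}\) converges slowly but is straightforward to evaluate by truncation (an illustrative sketch; at \(z = 1\) the sum telescopes to 1, so the value should approach \(1 - \gamma\)):

```python
EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def psi_tilde(z: float, terms: int = 1_000_000) -> float:
    """Truncation of -gamma + sum_{k=1}^{terms} (1/k - 1/(z + k))."""
    return -EULER_GAMMA + sum(1 / k - 1 / (z + k) for k in range(1, terms + 1))

print(psi_tilde(1.0), 1 - EULER_GAMMA)  # the two should agree closely
```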
A question you might have is whether or not this function is analytic. To find out, we can take higher-order derivatives of it to create a family of functions. First, let's declare the function we already found as the zeroth-order term, since it is the zeroth derivative in the family we wish to study:
\[\overset{\sim}{\psi}^{(0)} (z) = -\gamma + \sum_{k=1}^{k=\infty} (\frac{1}{k} - \frac{1}{z+k}) = \overset{\sim}{\psi}_{(0)} (z) \quad \wedge \quad \overset{\sim}{\psi}_{(\mu + 1)} (z) \defeq \frac{d(\overset{\sim}{\psi}_{(\mu)} (z))}{dz}\]
\[\overset{\sim}{\psi}_{(1)} (z) = \sum_{k=1}^{k=\infty} \frac{1}{(z+k)^{2}} \quad \overset{\sim}{\psi}_{(2)} (z) = \sum_{k=1}^{k=\infty} (-2) \frac{1}{(z+k)^{3}}\]
\[\overset{\sim}{\psi}_{(3)} (z) = \sum_{k=1}^{k=\infty} (6) \frac{1}{(z+k)^{4}} \quad \overset{\sim}{\psi}_{(4)} (z) = \sum_{k=1}^{k=\infty} (-24) \frac{1}{(z+k)^{5}}\]
We can make a conjecture and then prove it using the principle of mathematical induction:
\[\forall \mu \geq 1, \quad \overset{\sim}{\psi}_{(\mu)} (z) = \sum_{k=1}^{k=\infty} (-1)^{\mu + 1} (\mu)! \frac{1}{(z+k)^{\mu + 1}}\]
For the base case \(\mu = 1\), the conjectured formula reproduces the derivative we computed directly:
\[\sum_{k=1}^{k=\infty} (-1)^{1 + 1} (1)! \frac{1}{(z+k)^{1+1}} = \sum_{k=1}^{k=\infty} \frac{1}{(z+k)^{2}} = \overset{\sim}{\psi}_{(1)} (z)\]
For the inductive step, assuming the formula for \(\mu\), differentiating term by term gives the formula for \(\mu + 1\):
\[\overset{\sim}{\psi}_{(\mu + 1)} (z) = \sum_{k=1}^{k=\infty} (-1)^{\mu + 1 + 1} (\mu + 1)! \frac{1}{(z+k)^{\mu + 1 + 1}}\]
\[= \frac{d}{dz} (\sum_{k=1}^{k=\infty} (-1)^{\mu + 1} (\mu)! \frac{1}{(z+k)^{\mu + 1}})\]
\[\defeq \frac{d(\overset{\sim}{\psi}_{(\mu)} (z))}{dz} \defeq \overset{\sim}{\psi}_{(\mu + 1)} (z)\]
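The general term can be spot-checked numerically (an illustrative sketch; at \(z = 0\) the sums reduce to the well-known values \(\sum \frac{1}{k^{2}} = \frac{\pi^{2}}{6}\) and \(\sum \frac{1}{k^{4}} = \frac{\pi^{4}}{90}\)):

```python
from math import factorial, pi

def psi_tilde_deriv(mu: int, z: float, terms: int = 200_000) -> float:
    """Truncation of the conjectured general term: sum_k (-1)^(mu+1) mu! / (z+k)^(mu+1)."""
    sign = (-1) ** (mu + 1)
    return sign * factorial(mu) * sum(1 / (z + k) ** (mu + 1) for k in range(1, terms + 1))

# mu = 1 at z = 0 gives sum 1/k^2 = pi^2/6; mu = 3 gives 3! * pi^4/90 = pi^4/15
print(psi_tilde_deriv(1, 0.0), pi**2 / 6)
print(psi_tilde_deriv(3, 0.0), pi**4 / 15)
```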
Now that we have a general term for the \(\mu\)th derivative of our quotient, we can create a Taylor series expansion of it. Recall the Taylor series formula:
\[f(x) = \sum_{\alpha=0}^{\alpha=\infty} \frac{f^{(\alpha)}(x_{0})}{\alpha!} (x - x_{0})^{\alpha}\]
Setting our \(x_{0}\) to 1:
\[f(x) = f(1) + \sum_{\alpha=1}^{\alpha=\infty} \frac{f^{(\alpha)}(1)}{\alpha!} (x - 1)^{\alpha}\]
Any function that satisfies this relation on some domain is analytic there. Before we apply it, recall the general form of our derivatives:
\[\overset{\sim}{\psi}_{(\mu)} (z) = \sum_{k=1}^{k=\infty} (-1)^{\mu + 1} (\mu)! \frac{1}{(z+k)^{\mu + 1}}\]
Now we are ready to test whether or not this function is analytic, applying the Taylor series formula evaluated at one to our function:
\[\overset{\sim}{\psi}_{(0)} (z) = \overset{\sim}{\psi}_{(0)} (1) + \sum_{\alpha=1}^{\alpha=\infty} \frac{\sum_{k=1}^{k=\infty} (-1)^{\alpha + 1} (\alpha)! \frac{1}{(1+k)^{\alpha + 1}}}{\alpha!} (z - 1)^{\alpha}\]
\[= -\gamma + 1 + \sum_{\alpha=1}^{\alpha=\infty} (-1)^{\alpha + 1} (z - 1)^{\alpha} \sum_{k=1}^{k=\infty} \frac{1}{(1+k)^{\alpha + 1}}\]
\[= 1 - \gamma - \sum_{\alpha=1}^{\alpha=\infty} (1 - z)^{\alpha} \sum_{k=1}^{k=\infty} \frac{1}{(1+k)^{\alpha + 1}}\]
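We can compare this truncated Taylor expansion against the direct series (an illustrative sketch; the expansion converges for \(|1 - z| < 2\), so the test point \(z = 1.5\) chosen here is safely inside that region):

```python
EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def psi_tilde(z: float, terms: int = 1_000_000) -> float:
    """Direct series: -gamma + sum_k (1/k - 1/(z + k))."""
    return -EULER_GAMMA + sum(1 / k - 1 / (z + k) for k in range(1, terms + 1))

def psi_tilde_taylor(z: float, orders: int = 40, terms: int = 50_000) -> float:
    """Expansion about 1: 1 - gamma - sum_alpha (1 - z)^alpha sum_k 1/(1+k)^(alpha+1)."""
    total = 1 - EULER_GAMMA
    for alpha in range(1, orders + 1):
        inner = sum(1 / (1 + k) ** (alpha + 1) for k in range(1, terms + 1))
        total -= (1 - z) ** alpha * inner
    return total

print(psi_tilde(1.5), psi_tilde_taylor(1.5))  # the two should agree closely
```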
\section{Gamma Function}
In the previous sections of this paper we have established four definitions of the factorial function:
\[z! = \prod_{k=1}^{k=z} k\]
\[z! = \int_{0}^{\infty} t^{z} e^{-t} dt\]
\[z! = \prod_{k=1}^{k=\infty} \frac{(1 + \frac{1}{k})^{z}}{(1 + \frac{z}{k})}\]
\[z! = \mathrm{e}^{-\gamma z} \prod_{k=1}^{k=\infty} ((1 + \frac{z}{k})^{-1} \mathrm{e}^{\frac{z}{k}})\]
Recall the Mellin transform from earlier, which can also be written using the Haar measure \(\frac{dt}{t}\):
\[\mathcal{M}\{f(t)\}(s) \defeq \int_{0}^{\infty} t^{s} f(t) \frac{dt}{t} = \int_{0}^{\infty} t^{s-1} f(t) dt\]
but when we take the Mellin Transform of the exponential decay function \(\mathrm{e}^{-t}\):
\[\mathcal{M}\{\mathrm{e}^{-t}\}(s) \leadsto \int_{0}^{\infty} t^{s-1} \mathrm{e}^{-t} dt\]
we get the factorial function shifted down by one. This shifted form is the one predominantly used when dealing with the definitions covered in this paper. The shifted function is called the gamma function, denoted \(\Gamma(z)\); it satisfies \(\Gamma(z) = (z-1)!\), or equivalently \(\Gamma(z) = \frac{1}{z} z!\). Taking that into account, we have four new definitions of the gamma function:
\[\Gamma(z) = \prod_{k=1}^{k=z-1} k\]
\[\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t} dt\]
\[\Gamma(z) = \frac{1}{z} \prod_{k=1}^{k=\infty} \frac{(1 + \frac{1}{k})^{z}}{(1 + \frac{z}{k})}\]
\[\Gamma(z) = \frac{1}{z} \mathrm{e}^{-\gamma z} \prod_{k=1}^{k=\infty} ((1 + \frac{z}{k})^{-1} \mathrm{e}^{\frac{z}{k}})\]
Some also opt to take the reciprocal of that last definition to put it into a form that is well-defined for all values:
\[\frac{1}{\Gamma(z)} = z \mathrm{e}^{\gamma z} \prod_{k \geq 1} ((1 + \frac{z}{k}) \mathrm{e}^{-\frac{z}{k}})\]
This is usually considered to be the best definition. It is valid for all inputs (the right-hand side defines an entire function), its terms converge relatively quickly, and it is fairly easy to calculate. You can also arrive at most conclusions reached in the paper thus far using this definition. For the most part, our definition searching is finished and we can now shift more focus to finding identities.
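Since the reciprocal form is valid everywhere, we can evaluate it by truncation even at inputs where the factorial itself misbehaves (an illustrative sketch; the standard library's gamma function is used only as a reference value):

```python
from math import exp, gamma

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def reciprocal_gamma(z: float, terms: int = 100_000) -> float:
    """Truncated product: z e^{gamma z} * prod_{k=1}^{terms} (1 + z/k) e^{-z/k}."""
    result = z * exp(EULER_GAMMA * z)
    for k in range(1, terms + 1):
        result *= (1 + z / k) * exp(-z / k)
    return result

print(reciprocal_gamma(0.5), 1 / gamma(0.5))    # 1/Gamma(1/2) = 1/sqrt(pi)
print(reciprocal_gamma(-0.5), 1 / gamma(-0.5))  # negative non-integer inputs work too
```

At a negative integer the product simply evaluates to 0, reflecting the poles of \(\Gamma\) itself.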
We can also convert the identities we found for the factorials into identities for the gamma function.
\end{document}