Calculus of the Inverse Logit Function

maths

Published August 7, 2021

I was recently doing some logistic regression, and calculated the derivative of the inverse logit function (sometimes known as expit) to understand how the impact of the coefficients changes depending on the predicted probability. It turns out it has some mathematically interesting properties that I thought would be fun to explore.

The inverse logit function is $\operatorname{logit}^{-1}(x) = \frac{\exp(x)}{1 + \exp(x)}$. A bit of calculus shows that

$$\frac{d}{dx}\operatorname{invlogit}(x) = \frac{e^x}{(1+e^x)^2} = \operatorname{invlogit}(x)\left(1 - \operatorname{invlogit}(x)\right)$$
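This identity can be checked numerically; a minimal sketch using only the standard library, comparing a central finite difference against the closed form:

```python
import math

def invlogit(x):
    """Inverse logit (expit): exp(x) / (1 + exp(x))."""
    return math.exp(x) / (1.0 + math.exp(x))

# Central finite difference approximation to the derivative,
# compared against invlogit(x) * (1 - invlogit(x)).
h = 1e-6
for x in [-2.0, 0.0, 1.5]:
    numeric = (invlogit(x + h) - invlogit(x - h)) / (2 * h)
    closed_form = invlogit(x) * (1 - invlogit(x))
    assert abs(numeric - closed_form) < 1e-8
```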

This is interesting in that if the predicted probability is $p$, then a small change in a predictor with a coefficient $a$ should change the probability by approximately $a\,p(1-p)$. This is maximised at $p = 1/2$, where the local change in probability is $a/4$, which is the source of the divide-by-four rule in interpreting coefficients in logistic regression.
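To illustrate, here is a small sketch with a hypothetical coefficient (`a = 0.8` is an arbitrary choice) at the point where the predicted probability is $1/2$, comparing the exact change in probability with the $a\,p(1-p)$ approximation:

```python
import math

def invlogit(x):
    return math.exp(x) / (1.0 + math.exp(x))

# Hypothetical coefficient a; nudge the predictor by a small delta and
# compare the exact change in probability with a * p * (1 - p) * delta.
a = 0.8
x0 = 0.0             # at x0 = 0 the predicted probability is p = 1/2
delta = 0.01
p = invlogit(x0)
exact = invlogit(x0 + a * delta) - invlogit(x0)
approx = a * p * (1 - p) * delta   # equals (a / 4) * delta at p = 1/2
assert abs(exact - approx) < 1e-5
```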

However, I find this expression interesting and wanted to find out whether it defines the inverse logit function. We want to find a function $f$ such that $f' = f(1-f)$. Using the derivative of the inverse function gives that

$$\frac{d}{dx} f^{-1}(x) = \frac{1}{x(1-x)} = \frac{1}{x} + \frac{1}{1-x}.$$

Integrating gives $f^{-1}(x) = \log(x) - \log(1-x) + c = \log\left(\frac{x}{1-x}\right) + c$. Up to an additive constant this is just the logit function. Finally, inverting this equation gives

$$f(x) = \frac{\exp(x - c)}{1 + \exp(x - c)},$$

so that this indeed does define the inverse logit up to a translation.
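As a sanity check, the ODE $f' = f(1-f)$ can be solved numerically and compared against a translated inverse logit. The sketch below (the starting probability `p0 = 0.2`, step size, and RK4 integrator are all illustrative choices) uses the fact that the solution with $f(0) = p_0$ is $\operatorname{invlogit}(x + \operatorname{logit}(p_0))$:

```python
import math

def invlogit(x):
    return math.exp(x) / (1.0 + math.exp(x))

def logit(p):
    return math.log(p / (1 - p))

# Solve f' = f(1 - f) with a simple RK4 integrator, starting from f(0) = 0.2.
# The derivation above says the solution is a translated inverse logit,
# here f(x) = invlogit(x + logit(0.2)).
p0 = 0.2
f, x, h = p0, 0.0, 0.001
deriv = lambda y: y * (1 - y)
for _ in range(2000):              # integrate out to x = 2
    k1 = deriv(f)
    k2 = deriv(f + h * k1 / 2)
    k3 = deriv(f + h * k2 / 2)
    k4 = deriv(f + h * k3)
    f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
assert abs(f - invlogit(x + logit(p0))) < 1e-8
```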

Translating it to the inverse logit, so that the maximal change in probability is at $0$, gives it one more interesting property:

$$1 - \operatorname{logit}^{-1}(x) = 1 - \frac{\exp(x)}{1+\exp(x)} = \frac{1}{1+\exp(x)} = \frac{\exp(-x)}{1+\exp(-x)} = \operatorname{logit}^{-1}(-x)$$
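A quick numerical spot-check of this symmetry at a few arbitrary points:

```python
import math

def invlogit(x):
    return math.exp(x) / (1.0 + math.exp(x))

# Verify 1 - invlogit(x) == invlogit(-x) at a handful of points.
for x in [-3.0, -0.5, 0.0, 1.0, 4.2]:
    assert abs((1 - invlogit(x)) - invlogit(-x)) < 1e-12
```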

Of course this symmetry property isn't defining, since any function defined on the positive numbers, taking values between 0 and 1, can be extended to the negative numbers to satisfy this property.