More difficult math does not necessarily mean more useful solutions
Mathematics is said to be the language of science in that most (but certainly not all) of the world’s scientific models involve mathematical equations. Pythagoras’ theorem, normal distribution, Einstein’s energy-mass equivalence, Newton’s second law of motion, Newton’s universal law of gravitation, Planck’s equation. How could any of these remarkable patterns be expressed without mathematics?
Unfortunately, it is sometimes tempting to overemphasize the math and underestimate the relevance. The brilliance of the models listed above lies not in mathematical pyrotechnics but in their breathtaking simplicity. Useful models help us understand the world and make reliable predictions. Math for math’s sake does neither. Examples of dumb math abound; I will give three very different ones.
The net present value (NPV) of an investment is calculated by discounting all costs and benefits by an interest rate that takes into account the time value of money. With an interest rate of 10%, $10 received in one year has a present value of $9.09, because $9.09 invested at 10% will be worth $10 in one year. Projects with positive NPVs are financially attractive. Projects with negative NPVs are not.
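The discounting arithmetic can be sketched in a few lines (the cash flows and the `npv` helper are mine, not from the article):

```python
def npv(rate, cash_flows):
    """Discount each cash flow t periods back to the present and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# $10 received in one year, discounted at 10%, is worth $9.09 today:
print(round(npv(0.10, [0, 10]), 2))  # 9.09

# A hypothetical project costing $100 today that pays $60 in each of
# the next two years has a positive NPV at 10%, so it is attractive:
print(round(npv(0.10, [-100, 60, 60]), 2))  # 4.13
```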
The internal rate of return (IRR) is the interest rate that makes the NPV equal to 0. Many business owners prefer to work with IRRs because rates of return seem more intuitive than NPVs. However, there are many problems with IRRs; most fundamentally, they don’t tell us whether the NPV is positive or negative at other interest rates. For example, a project with an IRR of 10% has an NPV of 0 at an interest rate of 10%, but the NPVs at interest rates of 5% or 20% can be positive (attractive) or negative (unattractive). We just don’t know without calculating the NPV. We should therefore dispense with IRRs and focus on NPVs.
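A small made-up example illustrates the point. The cash flows below have *two* IRRs, 10% and 20%, and the NPV is negative at 5%, positive at 15%, and negative at 25%; knowing only that "the IRR is 10%" tells us nothing about the sign of the NPV at other rates:

```python
def npv(rate, cash_flows):
    """NPV of a cash-flow stream at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical cash flows with two sign changes, hence two IRRs:
flows = [-100, 230, -132]
for r in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"NPV at {r:.0%}: {npv(r, flows):+.2f}")
```

The NPV crosses zero at both 10% and 20%, so the project is attractive only for discount rates between them.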
Believe it or not, an article with the hopeful title “A resolution to the NPV-IRR debate?”, published in a respected finance journal, argued that the NPV and IRR approaches can be reconciled by considering IRRs that are complex numbers, those puzzling square roots of negative numbers that baffled us in high school. However, the existence of complex IRRs does not solve any of the problems with IRRs. Specifically, complex IRRs tell us nothing about the NPVs at other interest rates. Worse still, complex IRRs are not economically meaningful. If the most important weakness of the NPV criterion is that it does not give a rate of return easily understood by businessmen, that weakness is not remedied by presenting them with meaningless complex-valued IRRs.
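To see how a complex IRR can arise, and how useless it is, consider a made-up project whose NPV equation has no real roots (the cash flows are mine, for illustration):

```python
import numpy as np

# Hypothetical project: NPV(r) = -100 + 200/(1+r) - 150/(1+r)^2.
# Setting NPV = 0 and letting x = 1 + r gives 100x^2 - 200x + 150 = 0,
# whose roots are complex, so the IRRs are complex numbers.
flows = [-100, 200, -150]
roots = np.roots([100, -200, 150])
irrs = roots - 1
print(irrs)  # roughly +0.71i and -0.71i: economically meaningless

# The NPV itself answers the question directly at any real rate:
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

print(npv(0.10, flows))  # negative: the project is unattractive
```

Here the NPV is negative at every real interest rate, which is all a decision-maker needs to know; the complex IRRs add nothing.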
A very different example is a famous paper, published in the flagship journal of the American Psychological Association, which claimed that a 2.9013 ratio of positive to negative emotions separates individuals, marriages, and business teams that flourish from those that languish. One of the authors, Marcial Losada, shamelessly named this magic ratio “the Losada line”.
It is absurd to think that a precise line of demarcation (to four decimal places!) can apply to complex human emotions. Losada and his co-author used physics and engineering equations that describe the motion of fluids to make their argument sound scientific, but their use of the language of fluid dynamics could easily be mistaken for parody: high-performing teams operate “in a floating atmosphere created by the expansive emotional space,” while low-performing teams are “stuck in a viscous atmosphere highly resistant to flow”.
A critique written by Nick Brown (who describes himself as a “self-proclaimed data police cadet”), Alan Sokal (full-time physics and math professor and part-time BS buster), and Harris Friedman (a prominent professor of psychology) concluded, unsurprisingly, that:
We find no theoretical or empirical justification for using differential equations drawn from fluid dynamics, a subfield of physics, to describe changes in human emotions over time… The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada’s assertion that they demonstrated the existence of a critical minimum positivity ratio of 2.9013 is completely unfounded.
Brown, N. J. L., Sokal, A. D., & Friedman, H. L. (2013). The complex dynamics of wishful thinking: The critical positivity ratio. American Psychologist, 68(9), 801–813. https://doi.org/10.1037/a0032850
My third example concerns the unfortunate fact that the current data deluge has supercharged the use of dumb math in data science and artificial intelligence. There are now data on so many variables that researchers are looking for ways to select a parsimonious set of explanatory variables to predict whatever they are trying to predict. Many pruning procedures seem to be chosen simply because the math is impressively dense.
For example, principal components regression – or a variant of this procedure – is often used to reduce a large number of explanatory variables to a small number of principal components. The mathematical procedure is to use a singular value decomposition (don’t ask) to create orthogonal eigenvectors, then discard the eigenvectors with the smallest eigenvalues. Wow! What does this mouthful of mathese mean?
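The mechanics can be sketched in a few lines of NumPy (the data matrix here is random, purely for illustration). Notice that the variable being predicted never appears in the component-selection step:

```python
import numpy as np

# Hypothetical standardized matrix of 5 explanatory variables:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Singular value decomposition: the columns of V (rows of Vt) are the
# orthogonal eigenvectors of X'X, ordered by eigenvalue.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the k components with the largest singular values...
k = 2
components = X @ Vt[:k].T

# ...a choice based entirely on X. The dependent variable plays no role.
print(components.shape)  # (200, 2)
```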
Obscured by the mathematical gymnastics, the procedure imposes constraints on the explanatory variables. Strikingly, these constraints depend only on the correlations among the explanatory variables, without any regard for the variable the model is trying to predict. Consequently, the final model can force a variable that has a positive effect on the dependent variable to have a negative effect (or vice versa), and force unrelated nuisance variables to be more important than the true explanatory variables.
For example, household spending may be positively related to both income and stock prices. A principal components regression that considers these two explanatory variables plus the price of tea in China could, because of the correlations among income, stock prices, and tea prices, make the estimated effect of income on spending negative and make the effect of tea prices more important than the effect of stock prices. Principal components regression not only distorts the estimated effects, it hides them in an eigenvector blizzard, so the only thing we know is that sophisticated calculations were used.
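A toy simulation along these lines (the data are invented; spending truly depends only on income, while stock and tea prices are correlated nuisance variables with no effect) shows how keeping one principal component smears income’s effect onto the nuisance variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Made-up data: spending depends only on income; stock prices track
# income, and tea prices track stock prices, but neither matters.
income = rng.standard_normal(n)
stocks = 0.9 * income + np.sqrt(1 - 0.81) * rng.standard_normal(n)
tea = 0.9 * stocks + np.sqrt(1 - 0.81) * rng.standard_normal(n)
spending = income + 0.1 * rng.standard_normal(n)

X = np.column_stack([income, stocks, tea])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components regression keeping a single component:
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]
gamma = (pc1 @ spending) / (pc1 @ pc1)  # regress spending on PC1
beta_pcr = Vt[0] * gamma                # implied coefficients on X

print(dict(zip(["income", "stocks", "tea"], beta_pcr.round(2))))
```

The true coefficients are roughly (1, 0, 0), but the pruned regression reports three nearly equal coefficients of about a third each: tea prices look as important as income, even though their true effect is zero.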
In all three examples, the researchers ignored the assumptions and implications of the mathematical tools they used. They evidently assumed that awesome math leads to awesome insights. This disconnect is most acute in black-box artificial intelligence (AI) algorithms that few understand but many trust.
One example among countless: the CEO and co-founder of the insurance company Lemonade noted that “AI crushes humans in chess, for example, because it uses algorithms that no human could create and which no one would fully understand”. He argued that, similarly, “algorithms we can’t understand can make insurance fairer.” Lemonade has been a lemon so far, with its stock price down almost 90%.
The invisibility of dumb math inside black box algorithms is a flaw, not a feature. As I have said many times, the real danger today is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust them to make decisions they shouldn’t be making.
You can also read: Move away from stepwise regression (and other data mining). Stepwise regression, which is making a comeback, is just another form of HARKing – Hypothesizing After the Results are Known. I ran a series of Monte Carlo simulations and found that stepwise regression performed disappointingly when applied to new data. (Gary Smith)