Predicting Suicide, part 1

This posting is about suicide. If you, or someone you know, is having suicidal thoughts,
please call the National Suicide Prevention Lifeline at (800) 273-8255.

I’m sorry for the demon I’ve become
— Five Finger Death Punch, "Walk Away"

Recently, the press has been awash with articles on how AI can predict … well … everything. This is unlikely to be true. Or, to be more precise, it's unlikely to be useful in critical domains. Like suicide.

Quartz surfaced a study from April 2017 that attempts to predict risk of suicide attempts using ML.

The paper is packed with ML detail, but the authors committed one of the greatest ML sins: using a leaky variable.

At the highest level, they compared three groups: people who attempted suicide, people whose self-harm was not judged to be a suicide attempt, and a random cohort of hospital patients.

Then they used a set of metrics to characterize each person over time leading up to the event, building different models for 7, 14, 30, 60, 90, 180, 365, and 720 days before the incident.

Building models with this nested approach is error-prone: the 7-day results, for example, overlap with the 14-day results, because they describe the same people. Ignoring this “repeated measures design” makes results look more significant than they actually are.

They report AUC to assess the quality of their predictive models, and also give both Precision and Recall metrics.  AUC measures the overall quality of a model. Precision measures how many of your “true” predictions were correct, while Recall measures the percentage of actual “trues” that you predicted as “true”.
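For the curious, here's a minimal sketch of how those three numbers get computed, using made-up labels and predictions rather than anything from the study:

```python
# Minimal sketch of AUC, precision, and recall on made-up data
# (illustrative only -- not the study's data or model).
from sklearn.metrics import roc_auc_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 1]                   # 1 = the outcome happened
y_score = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]  # model's predicted risk
y_pred = [1 if s >= 0.5 else 0 for s in y_score]    # threshold the risk at 0.5

print("AUC:      ", roc_auc_score(y_true, y_score))   # overall ranking quality
print("Precision:", precision_score(y_true, y_pred))  # of predicted "trues", how many were right
print("Recall:   ", recall_score(y_true, y_pred))     # of actual "trues", how many we caught
```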

Their AUCs are really high. 7 days out, their AUC is .84, with a precision of .79 and a recall of .95. These numbers are far higher than you’d expect in a social science study -- people are really complex, and finding models that predict that well is unusual.  Even weirder, 720 days out, the AUC is .8, precision .74 and recall was .95.  These are virtually the same as 7 days out; for this to be true, the people would need not to have changed at all over 2 years. This seems unlikely.

These kinds of results are usually due to “leaky variables”. Leaky variables are input signals that include some measure of the output itself. With a leaky variable, your inputs effectively contain your output measure, and your model looks really good.

When I looked at their signal importance table, the first thing that jumped out was that “self-inflicted poisoning” was a powerful signal in all their models. Given that self-inflicted poisoning is likely a suicide attempt, having it in the input signals means that you are directly including the outcome measure.  This is a classic leaky variable.

There are other issues, but leakage in the models will swamp any real findings, so it’s not worth trying to fix the others.  A note to all data scientists: check your models for leaky variables. This is easy to do in simple models, and feature-importance reporting comes built in with xgboost-based models (like this one). For other types of mathematical models, additional tooling is required.
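If you've never done that check, here's a minimal sketch of the idea. The column names are hypothetical (they are not the paper's actual features); the point is that an outcome-in-disguise will usually float straight to the top of the importance list.

```python
# Sketch: spot a leaky variable by inspecting feature importance.
# Column names here are hypothetical, not the paper's actual features.
import pandas as pd
import xgboost as xgb

df = pd.DataFrame({
    "age":                      [25, 40, 31, 55, 19, 62, 45, 33],
    "prior_er_visits":          [0, 2, 1, 4, 0, 3, 1, 2],
    "self_inflicted_poisoning": [0, 1, 0, 1, 0, 1, 1, 0],   # <- basically the outcome itself
    "attempted_suicide":        [0, 1, 0, 1, 0, 1, 1, 0],   # the label we're predicting
})

X = df.drop(columns="attempted_suicide")
y = df["attempted_suicide"]

model = xgb.XGBClassifier(n_estimators=10, max_depth=2).fit(X, y)

# If one "predictor" carries nearly all the importance, be suspicious.
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name:28s} {imp:.2f}")
```

If one "predictor" soaks up nearly all the importance and sounds suspiciously like the outcome, you've probably found your leak.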

 

(547)

No, screen time doesn’t make your kids less able to read emotions


And we would scream together
Songs unsung
— Asia, “Heat of the Moment”

For some reason, the US seems bizarrely focused on creating rules that show how we are screwing up our kids. Well, actually, rules that suggest *you* are screwing up *your* kids, and mine are much better ThankYouVeryMuch.

The problem? It’s mostly garbage. The omnipresent argument that “today’s kids are different/worse/unfocused/spoiled” has been said about every generation.

It’s not hard to poke holes in the “research” that underpins this phenomenon. To wit: NPR publicized some research out of UCLA arguing that screen time (measured as, more or less, any interaction mediated by anything silicon) yields a lower ability to recognize human emotions.

Ah, where to begin?

Using “ability to identify human emotions” as a potential weakness of screen time seems odd — I would expect some cognitive outcome.  Frankly, using emotional recognition really seems like the psychographic equivalent of venue-shopping; in fact, they point out that they selected the measure after “comprehensive piloting”…

However, let’s ignore that.  They took about 100 students, divided them into two groups and then did their study.  All the students took a test that asked them to interpret emotions on faces projected before them.  The researchers counted the errors they made.

Then, half the students were told to go home and change nothing, then come back in to retake the test in about a week.  The other half went to an outdoor-oriented camp with no electronic access and then retook the test.

The researchers decided to use “change scores” as their dependent variable — the change from the first measure score to the second. This is well-traveled ground in experimental psychology.  It has really high statistical power; however, because of the statistical power, it’s easy to get a significant — not important — result.

The camp group made 14 errors in the first test, and then ~9.4 in the second test.  That’s a big difference, of about 4.6!  The control group made about 12.2 errors on the first test and about 9.8 on the second, for a difference of 2.4. And using a straight means comparison test called an F-test (which isn’t appropriate since, as all you astute readers already know, counts are not normally distributed), that difference is significant.

Where’s your crown,
King Nothing
— Metallica, "King Nothing"

So, the camp group reduced their errors by a LOT more.

But they also started at a much higher point (14 versus ~12). Even though the difference was greater, the final error number was the same. In other words, on the silly practical measure that screen time makes you less able to identify emotions, the two groups are the same (making 9-10 errors).  

The entire difference is because the camp group made many more errors on the pretest. The groups weren’t identical.  

Oops.
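Here's the whole problem in a few lines of arithmetic, using the rough group means reported above (not the raw data):

```python
# Back-of-the-envelope with the reported group means (not the raw data).
camp_pre, camp_post = 14.0, 9.4
ctrl_pre, ctrl_post = 12.2, 9.8

print("Camp change:   ", round(camp_pre - camp_post, 1))   # ~4.6 fewer errors
print("Control change:", round(ctrl_pre - ctrl_post, 1))   # ~2.4 fewer errors
print("Final errors:  ", camp_post, "vs", ctrl_post)       # ...basically the same
```

The bigger "improvement" is almost entirely a gift from the bigger starting number.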

Maybe screen time is bad for our kids — for *my* kids — but this study certainly doesn’t demonstrate that.

(461)

Gun control math, part 1. (a.k.a., don't predict what you use as explanation...)

OK, I waded into housing controversy last time, and now I want to take a brief dip into one of my favorite, incredibly controversial, policy topics: gun control.

And a little luck,
we can clear it up
— Wings, "With A Little Luck"

There is a long running policy debate about whether increasing the number of guns in a society impacts crime rates.  There is a famous book called “More Guns, Less Crime”, written by a pro-gun activist slash econometrician named John Lott.

Lott’s thesis was that an increase in availability of guns — usually because of “right to carry” laws — causes a reduction in crime.  Lott uses regression methods to attempt to justify his conclusion; his prose often emphasizes the mathematical minutiae; it seems that he’s trying to beat the reader over the head with the idea that "math is perfect, you should simply believe the prose!" 

By the way, please never, ever, do that. Check the math. Really. And feel great doubt for any article that seems to shroud its answer in a bunch of seemingly impenetrable prose.

It will take me a bunch of (400 word) postings to cover all the math mistakes in the gun control literature.  Both sides use math creatively in order to get the answer they want. But here’s the first.

I can feel it in the air tonight
— Phil Collins, “In The Air Tonight”

 

In 2005, the National Research Council reviewed the research on right to carry laws, and more or less threw up its hands saying that neither Lott nor his primary critics had demonstrated whether the laws had any impact on crime.

The NRC study replicated a pretty basic error Lott made.  In Lott’s regression equation, he has murder rate on both sides of the equal sign. That is, murder is BOTH predicted by the equation AND in the equation that is doing the prediction.

Having the same variable on both sides of the equation makes it far more likely that the equation will predict the outcome.   In other words, the regression looks more significant than it really is. This can lead to a policy prescription that is either “throw up your hands” or, perhaps “proves that more guns equals less crime”.
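If you want to see how much this flatters a model, here's a toy simulation of the general problem. To be clear, this is not Lott's actual specification; the variable names are hypothetical, and the point is only that putting (a noisy copy of) the outcome on the right-hand side makes the fit look terrific even when the policy variable does nothing.

```python
# Toy simulation (NOT Lott's actual specification): put a noisy copy of the
# outcome among the predictors and watch the fit "improve".
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n = 500
right_to_carry = rng.integers(0, 2, n).astype(float)          # hypothetical policy variable
murder_rate = 5 + 0.0 * right_to_carry + rng.normal(0, 1, n)  # the policy truly has no effect

def r_squared(predictors, y):
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Honest model: murder rate predicted from the policy variable alone.
print("R^2, honest model:", r_squared([right_to_carry], murder_rate))

# Leaky model: a jittered copy of the murder rate also sits on the right-hand side.
murder_rate_rhs = murder_rate + rng.normal(0, 0.1, n)
print("R^2, outcome on both sides:", r_squared([right_to_carry, murder_rate_rhs], murder_rate))
```

The first number hovers near zero; the second is close to 1. Nothing about guns changed between the two lines.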

Based solely on this data, the latter is likely false. The former conclusion seems cowardly.

Please don't assume, however, that the pro-gun control side's math is as pure as the driven snow. Stay tuned for the beginning of the errors on the other side…

(386)

 

Is the Ellis Act killing San Francisco?

There is a popular narrative that Ellis Act evictions are wildly up since the tech boom really began.  This narrative is true… kind of.

Broken glass and a broken jaw
Lies are told in a Southern drawl
— Lamb of God, "Again We Rise"

For those of you who don’t read SF-ish press, there is an uproar about evictions from (usually rent-controlled) apartments, in favor of owner-occupied housing. These evictions are often done under the auspices of the Ellis Act.

It’s easy to find articles that excoriate the Act: Business Insider touts that “A 30-year-Old Law is Creating A Crisis in San Francisco”.  The article points out, among other things, that “speculation has become fairly common practice in San Francisco, exponentially driving rent up across the Bay Area.”

The article cites an SFGate article that, in turn, cites (but does not provide a link to) a city report.  Although that is sloppy, they helpfully provide a chart of the numbers from the study.

And the numbers only vaguely support their claims.

First, the analysis suffers from ignoring the law of small numbers: Ellis Act evictions have “skyrocketed” by 170% between 2009 and 2012, while all evictions have increased by “only” 38%.  True, but the Ellis Act evictions started at a base of about 43 per year, going up to a final value of about 116 per year over the three-year period.  Saying that there have been about 100 total additional Ellis Act evictions in the past 3 years is less impressive.  Especially when compared to the roughly 750 additional evictions, of all types, over that same period.
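The arithmetic, laid out with the approximate figures quoted above:

```python
# Approximate figures quoted above (per-year Ellis Act evictions).
ellis_2009, ellis_2012 = 43, 116

pct_increase = (ellis_2012 - ellis_2009) / ellis_2009 * 100
abs_increase = ellis_2012 - ellis_2009

print(f"Ellis Act evictions: up {pct_increase:.0f}%")   # the "skyrocketing" ~170%
print(f"...which is {abs_increase} more evictions in the final year than in the first")
print("Compare: roughly 750 additional evictions of all types over the same period")
```

Same data, two very different-sounding stories: a 170% surge, or about 73 more evictions a year in a city of 800,000-plus people.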

This issue is minor compared to the selection of time window.  The same SFGate article points out that, in the year 2000, there were about 400 Ellis Act evictions, at a time when housing prices were at the lowest point in the reported period.  Using 2000 as the start of the analysis would yield a headline akin to “Ellis Act evictions are down 75% in the past decade, while housing prices are up almost 50%”, with a subtext of how great landlords are in SF, since they aren’t kicking renters out even though their houses would be worth so much more on the open market.

That conclusion is equally valid, based solely on the math, but makes for less compelling press.

A population likelihood != any individual's likelihood...

Today I want to talk about a slightly different type of topic. Instead of talking about math, per se, I want to talk about a mathematical reasoning concept that people often get wrong.

Specifically, I want to talk about the difference between a population percentage and an individual likelihood.

Broken glass and a broken jaw
Lies are told in a Southern drawl
— Lamb of God, "Again We Rise"

I got my DNA sequenced by 23andMe, because I'm fascinated by the incredible technical advances in biotech over the past few years. (For a similar reason, I'm a small investor in Celmatix. Very cool tech and math...)

Anyway, my 23andMe results show me to have far lower risk (than the population average) for Celiac Disease. And yet, I have Celiac!

Love will tear us apart again
— Joy Division, "Love Will Tear Us Apart"

Clearly, 23andMe is fatally flawed, right?

Nope.  Roughly 1% of Americans have Celiac. Let's pretend that my genes suggest I am 50% less likely to have Celiac, which means that (if I'm reincarnated 199 times), out of my 200 lives, I would get Celiac one time. However, for each of those lives, I don't have a 0.5% (1-in-200) chance of getting Celiac.

I have either a 0% chance (i.e., I don't have the disease) or a 100% chance (i.e., I come down with Celiac).

As an individual, something either happens, or it doesn't.  As a population, stuff happens to some percentage of folks.

It's not the same.

Michigan seems like a dream to me now
— Simon & Garfunkel, "America"

I don't care whether -- or why -- my population risk is low for Celiac. I have it.

The next time you hear that "X% of brown-haired, left-handed men are likely to be great motorcycle racers," keep in mind that I, a brown-haired, left-handed man, am a terrible motorcycle racer.

And the population averages don't directly apply to you, either.

Annoying pseudoscience

Recently FB has reincarnated this article from last year.  It presents a breathless summary of a class project done by a handful of high school students in Denmark.

White on
white translucent capes
—Bauhaus, “Bela Lugosi’s Dead”

The idea was spawned by that omnipresent “cell phones cause <insert your favorite issue here>” meme.  In this case, the students decided that their lack of concentration in class was due to sleeping next to their cell phones.

The class experiment: the students took some seeds and divided them into two groups. They put one group in the main room and the other in an anteroom right next to a wifi router.  The hypothesis was that the wifi router — which apparently “[emit] roughly the same type of radiation as an ordinary mobile phone” — would kill the seeds.

Lo and behold, the routers killed *all* the seeds.  As in, *all* of them.  This should raise eyebrows, right there. I mean, dumping Agent Orange on the seeds probably wouldn’t kill them *all*.  

Although I suspect some fakery here, let’s just go with the idea that the routers killed the seeds and are a good analogue for cellphones.

Wifi routers run at about 2.4 GHz (2.4 billion waves per second).  Cell phones in Denmark run at about 2.6 MHz (2.6 million waves per second).  That’s about 1,000-fold slower. This is roughly the difference between infrared (i.e., heat) and X-rays.

Also, cell phones emit power at about 250 milliwatts (though this number changes with different models and uses).  A light use wifi router emits about 400 milliwatts. Another factor of about two.  By the way, your body heat is running along at about 100 watts (about 200 times greater).
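The ratios, using the ballpark figures quoted above (so, as rough as the figures themselves):

```python
# Rough ratios from the ballpark figures quoted above.
wifi_hz, cell_hz = 2.4e9, 2.6e6           # quoted frequencies
wifi_mw, cell_mw, body_w = 400, 250, 100  # quoted power figures

print("Frequency ratio (wifi / cell):", round(wifi_hz / cell_hz))         # ~1,000x
print("Power ratio (wifi / cell):    ", wifi_mw / cell_mw)                # ~2x
print("Body heat / wifi power:       ", round(body_w / (wifi_mw / 1000))) # hundreds of times more
```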

I won’t bother going into the science of radiation absorption, but the song remains the same there.

We dance on the strings
of powers we cannot perceive
—Rush, “Freewill”

Wifi and cell have nothing in common that matters.  Even if wifi killed all the seeds, it doesn’t teach us anything about cell phones.

Speaking of infrared, apropos of nothing at all, wifi routers in closed rooms do generate a boatload of heat. Heat actually does kill seeds… by cooking them. I’m just saying…

I think the students should be commended for a cool idea and doing the test. BRAVO! I hope they continue to do science — the world needs more people who try stuff.  

And, not to beat a dead horse, but the math does matter.  One of the scientists who (breathlessly) praised the study and was going to be involved in a replication may have made up her data on a similar study.  Ahh, the joy.

(402)

 

Would you like me to measure yours first?

Hi there everyone…

21 to win
—Rob Zombie, “Dragula”

One of my smartest friends sent me a note saying that she didn’t understand the difference between effect size and significance.  Even given my self-imposed 400 word limit, if I couldn’t be clear to her (who is way smarter than me), I didn’t do a good job.

So, let’s try again, shall we? Strictly speaking, I’m not going to savage someone else’s math here. I’m going to savage my own.

The textbook definition of “statistical significance” refers to the likelihood that two things would appear to differ as much as they do simply by chance (bad luck, if you will).

// Side note: If I were political, I would suggest that a great way to study surprising statistical significance would be to examine the voter results from Ohio in the first election of Bush II. But I’m not that cynical, so let’s stay grounded, shall we? //

Before going into this point, let me emphasize that statistical significance is usually wrong. Reality rarely matches statistical assumptions.  In the real world, stuff is messy. In stats-land, all is lovely and pristine, smelling not of the putrefaction of modern politics but rather of the perfume of your favorite lover.  In the real world, sadly, your statistical lover is oft the morn, not the eve.

Effect size captures how much any particular difference matters.  A big effect size means that the thing you are looking at REALLY makes a big difference in what you’re measuring.  Effect size and significance are related: The bigger an effect size, the easier it is to find a significant difference.  But the semi-converse is also true: although a small effect is harder to detect, given enough cases in your data, even a teeny-tiny effect will be statistically significant.

Let’s get practical, shall we?

I need someone to hold on to
—Nine Inch Nails, “Terrible Lie”

Imagine, if you will, that we are using 2 rulers to measure 3 pieces of 8.5x11” paper. We are wondering whether, in fact, the rulers — notionally 12” long — are the same as each other.  The first ruler measures the 3 pieces of paper as being {11.1”, 11.1”, 11”}. This sounds pretty good, yes? The second ruler measures the same pieces of paper as {10.9”, 10.9”, 11”}. Pretty close, yes?

According to the standard statistical test you’d use here, these two rulers are different from one another with a “p-value” of less than .05 (the normal standard).  So, they are significantly different, despite varying by only about 2%.  They are different, but, really, who cares — the effect size is about 2%.
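If you want to check my arithmetic, here's the example verbatim (the measurements above, nothing more; I'm using a plain two-sample t-test, and picking a different test won't change the story much):

```python
# The ruler example: a tiny difference with a "significant" p-value.
from scipy import stats

ruler_1 = [11.1, 11.1, 11.0]
ruler_2 = [10.9, 10.9, 11.0]

t, p = stats.ttest_ind(ruler_1, ruler_2)
print(f"p-value: {p:.3f}")                                   # just under .05
print(f"difference in means: {sum(ruler_1)/3 - sum(ruler_2)/3:.2f} inches")
```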

A small effect, but a significant one. I don’t think anyone will get rich on that 2%, although most papers, journalists, and reports will simply report the significance of the difference.

Check the effect size. Do you really care, at the end of the day, about 2% in rulers? Probably not.

(459)

Half a good question: Or, how big is it really?

My good friend Q just wrote a post for the New York Times. It was a profile of another company — not mine, boo — but it captured a lot of the interest in big data.  Was a great read.

There was a comment on his Facebook post about “spurious correlations” and how big data companies never have a good answer to how to avoid spurious correlations.  This is half of a good question.

So I sit at the edge of my bed
I strum my guitar and I sing an outlaw love song
--Social Distortion, "Story of my life"

Now pay attention, my seven readers, because this will be an ongoing theme.  Ready? Here we go.

Given enough cases, almost any difference will be “statistically significant”. But, honestly, who cares?  Even if something is likely not random — that is, it is significantly different — if the something has a small enough impact, it’s probably not worth paying attention to.  

This is a BIG DEAL: Instead of looking for significance, you should be asking for “effect size”.  A small, but significant, effect doesn’t tell you much about behavior; it just tells you about the vagaries of statistics.
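To see the "enough cases" part in action, here's a toy simulation (made-up data, purely illustrative): the difference between the two groups stays microscopically small, but the p-value collapses as the sample grows.

```python
# Toy simulation: a trivially small difference becomes "significant" with enough cases.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
tiny_effect = 0.02   # difference in means, in standard-deviation units

for n in (100, 10_000, 1_000_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(tiny_effect, 1.0, n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}   p = {p:.4f}   (the effect size never budged)")
```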

Most statistics that people use to measure “significant differences” place a number of assumptions on the underlying data.  The two most common — and egregious — assumptions are normality and independence.   The normality assumption requires that the underlying data fit a particular bell-shaped distribution. In interesting domains — like, most all — nothing is normally distributed.  Oops, that stat is wrong.  The independence assumption requires that there be no hidden relationships lurking between the data points. But many factors in the real world are related in underpinning ways — there is a relationship between a child's educational attainment and the *mother’s* educational attainment, but not the father’s.  Oops, if that’s true, that stat is even more wrong.

The problem is that all statistical tools will spit out a significance number.  And tools that use statistical tools often repeat those numbers, without paying attention to whether they are correct.

There are loads of problems with spurious correlations.  But they are mostly subsumed by the problems of effect size and statistical correctness.  

The entire good question? “What’s the effect size, and how did you compute significance?” Ask it at your next dinner-party policy discussion. You’ll either inspire awe or make a bunch of people think you’re a horrible geek.

// Side note: I am the second. No doubt. //

 

(401)

"Log scales" (or, we are out of birthday candles)


There's no comfort in the truth,
pain is all you'll find
--George Michael, "Careless Whisper"

We had a birthday celebration this weekend for a 65-year-old. We didn't have 65 candles for the cake (and, had we had that many, we undoubtedly would have set off the fire alarms, but that's a different story).

We did, however, have 6.  So SLM decided to use one candle per decade.  Now, we have a nicely manageable 6 candles on the (yummy!) ((gluten-free)) carrot cake.

My birthday is coming up, and I suggested the same protocol -- we can use one candle for each decade, requiring 4 candles on my cake.

Note that the difference between candles -- 6 versus 4 -- is much less than the difference in actual years -- 65 versus 43.

The "one per decade" rule is actually log(10)[age], for those of you who want an equation.

For the rest, when you want to make things look closer together, for a birthday or a chart, use a log scale like "one per decade". For example, there are ~2.25 log-words in this post.
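A quick check of both flavors of compression, just arithmetic:

```python
# Candles-per-decade versus a true log scale.
import math

for age in (65, 43):
    print(f"age {age}: {age // 10} candles, log10(age) = {math.log10(age):.2f}")
```

Six candles versus four; 1.8 versus 1.6. Either way, the gap looks a lot smaller than 22 years.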

 

Sex, drugs, and rock ’n’ roll revisited.

When I got away
I only got so far
--Slipknot, "Dead Memories"

I am a new parent — my daughter is 3 years old.  Like many new parents, I’m incredibly stressed that I’m doing it wrong — whatever "it" is — with her.  I’m talking too much, or not enough.  I’m praising too much, or not enough. I’m allowing her to fail too much, or … well, you get the picture.

And there is no lack of “research” to guide — or exacerbate — my worrying.  I read it all — books, magazines, blog posts. You name it, I’ve probably read it.

It’s not surprising, I guess, that it all seems compelling. In fact, two pieces of research suggesting the exact *opposite* of each other can be equally compelling.

Sigh, this parenting thing is hard.

So, when confronted with something hard and anxiety-inducing, I fall back on the math.

Because, my dear readers, the math does, in fact, matter.

I am a music junkie.  And I grew up during the PMRC era, when bad hair and lazy thinking created a cultural meme about the dangers of music.  Parenthetically, Dee Snider’s Congressional testimony on the topic is epic.

But, lazy thinking aside, there must be some relationship between the media we are exposed to and our views of the world and willingness to act. I mean, propaganda works, right? So music must have some impact.  Seems logical.

So when that renowned journalistic empire, USA Today, posted an article about the connection between rock music lyrics and binge drinking, I listened.

And then read.

And then sighed.

OK, the relevant paper is “Receptivity to and Recall of Alcohol Brand Appearances in U.S. Popular Music and Alcohol-Related Behaviors”.

I won’t go through the article in detail, but let’s pick on the math.  First, they created a complicated coding system for whether you liked a song, owned a song, and could identify any of the brands in the song.

They used 10 songs, scoring a song one point if you liked it and a second point if you owned it. So, there were up to 20 points to be had on the song side.  The mean score of song points was 3.7.  The standard deviation was 4.2.

Sigh. Keep in mind the normal distribution rule of thumb: roughly 68% of all responses fall between the mean minus one standard deviation and the mean plus one standard deviation.  Keep in mind that the standard deviation was 4.2, against a mean of 3.7.  The standard deviation is greater than the distance between the mean and zero.  For the normal distributional rules to be true in this case, a substantial number of respondents would need to have scores that are less than zero. Negative scores require, yup, that the subjects needed to give points back in some magic way.

That’s unlikely.  So, obviously, the distribution isn’t normal.  And, it can’t be normal, anyway, because it’s a count, and counts aren’t normal, but whatever.
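To put a number on how impossible that is, here's a quick sketch using nothing but the reported mean and standard deviation:

```python
# With the reported mean (3.7) and SD (4.2), a normal distribution would put a
# big chunk of respondents below zero -- impossible for a count of points.
from scipy.stats import norm

share_below_zero = norm.cdf(0, loc=3.7, scale=4.2)
print(f"{share_below_zero:.0%} of a Normal(3.7, 4.2) sits below zero")   # roughly 19%
```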

So, any traditional significance coefficient, or confidence interval, is largely random — it might be right, but you can’t assess how likely it is to be right.

They then took this wildly strange variable and broke it up into thirds (low, medium, high).  (Humorously, they refer to these as “tertiles”, which is correct, but incredibly pompous.)  Low was defined as 0, medium as 1-4, and high as 5 or more.  Keep in mind that a score of 5 — the “highs” — corresponds to only about 2 songs out of the 10 overall.  By breaking the results up, they are able to obfuscate the non-normality and dial up any potential relationships.

Back to the research, but this time on the alcohol brand recognition side.  Only about 8% of all respondents were able to correctly identify even a single brand across the 10 songs.  In other words, 92% of the respondents couldn’t figure out a single brand.  Naturally, when facing what is called a wildly unbalanced data set, they did what you shouldn’t do — split the results into 2 groups (yes or no), which makes it sound like you have half the results in each group.

Their behavior measures make more sense mathematically, but are just as confusing to me semantically.  Their most serious categories are “have reported bingeing at least monthly” (which is >6 drinks on one occasion — yikes!) and “reported problems such as injuries due to alcohol” (which sounds absolutely terrifying).  The “injuries” category includes a bunch of scary things, but then also includes “feeling guilty after drinking” (which seems far less serious than the category sounds). Since answering yes to any of the 7 injury questions makes "been injured" a yes, I wonder how much of that data is due to guilt rather than injuries. Don't know.  But regardless, that’s a semantic issue, not directly a math one, so I’ll move on.

...to another point they get wrong. The authors use an Odds Ratio to see if the groups have differential odds for the alcohol outcomes based on the segments from above. I think they really mean a risk ratio, since that captures the relative likelihood of something bad happening in different cases.  Odds ratios and risk ratios are approximately the same under something called the “rare disease assumption”, which basically says the outcome happens so infrequently that odds and risks are nearly interchangeable. I don't think that applies here, so I think they simply used the wrong statistic.
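To see why the distinction matters when the outcome isn't rare, here's a sketch with a made-up 2x2 table (not the paper's data):

```python
# Made-up 2x2 table (NOT the paper's data): when the outcome is common,
# the odds ratio overstates the risk ratio.
exposed   = {"outcome": 40, "no_outcome": 60}   # risk = 0.40
unexposed = {"outcome": 25, "no_outcome": 75}   # risk = 0.25

risk_exposed   = exposed["outcome"] / (exposed["outcome"] + exposed["no_outcome"])
risk_unexposed = unexposed["outcome"] / (unexposed["outcome"] + unexposed["no_outcome"])

risk_ratio = risk_exposed / risk_unexposed
odds_ratio = (exposed["outcome"] / exposed["no_outcome"]) / (unexposed["outcome"] / unexposed["no_outcome"])

print(f"risk ratio: {risk_ratio:.2f}")   # 1.6
print(f"odds ratio: {odds_ratio:.2f}")   # 2.0 -- sounds scarier than the risk ratio
```

If the outcome were rare (say 4% versus 2.5%), the two numbers would be nearly identical. It isn't, so they aren't.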

Here's where it gets interesting.  If you read the text of their results, it sounds like the results are stunningly clear — this factor has an OR of 2 (by which they mean that this is twice as likely as that), this other factor has an OR of about 3, and so on.  They also give confidence intervals of the OR (which can’t be computed, since we are in a wildly non-normal distribution that is then chopped up into dichotomous variables that look balanced, but aren’t). The results section of their paper — which drove the USA Today article — reads like a very clear indictment of alcohol in music.

However, when you look at the tables of their results, you discover that in almost all cases, the OR is driven by an alias to age — People who are 15, for example, are wildly less likely to have had a full drink than the sample, while people who are 21 are wildly more likely to have had a full drink than the sample.  Right. 21 is the legal drinking age.  It’s possible that on or about one's 21st birthday *everyone* goes out and drinks. This weird age effect, if ignored, would create the entire OR finding about binge drinking.

White on
White translucent capes
--Bauhaus, "Bela Lugosi's Dead"

In other words, their data — insofar as I can recreate it from their paper — simply shows that 21 year olds drink more than 15 year olds.  Which isn’t terribly surprising.

This kind of thing really annoys me. There is some scary data in this paper about binge drinking. There should be a policy implication there, or something we should be paying attention to. By wrapping that fact in a bodyguard of mathematical lies, they created an unworthy article (that was USA Today-worthy) that added absolutely nothing to the ongoing policy debate about music, media, and development.

We can do better.

 

Can Elves breed?

I’m often captured by random ideas. This is one that came to me the other night, when I should have been sleeping.

In the Lord of the Rings series, there are a bunch of Elves floating around.  Not literally, but, allow me a bit of artistic license here, please.

One of the main Elvish dudes is Elrond.  Elrond is supposedly about 3,000 years old.  OK, he probably has a really good 401(k). But, more than that, I’m intrigued with why there are so few Elves. I mean, that’s a long time to have kids.

If Elrond had a kid every 100 years, he’d have 30 kids.  But what if the kids breed too?

If each of them breeds 100 years after their birth, and then every 100 years thereafter, the oldest kid would have 29 kids.

But, oops, those kids all breed too.

Trade in these wings for some wheels
—Bruce Springsteen, “Thunder Road”

Hmm. Let’s make this problem simpler.

Assume there is 1 Elf, he’s 2 years old, and he breeds every year after his first year.  So, he’s got 1 kid, who hasn’t bred yet.  So, in 2 years, he’s doubled the Elf ranks.

Now, make that dude 1 year older. Now he’s 3, he’s got 2 kids (one 2 and the other 1 year old).  The 2-year-old has one kid. In 3 years, the Elves have quadrupled (from poor lonely Elrond to Elrond, older kid, younger kid, and grandkid).

// Side note: I have to introduce some nomenclature here to make this work. Kid#1 is Elrond’s first kid. Any kids that kid#1 has will be marked as kid#1-kid#<n>.  So kid#1-kid#2 is the 2nd kid of the oldest kid. //

4 years yields Elrond, kid#1, kid#2, kid#3, kid#1-kid#1, kid#1-kid#2, kid#2-kid#1, or 7 Elves.

By 5 years, the forest is getting crowded. You have:
Elrond
Kid#1 (who now has kid#1-kid#1, kid#1-kid#2, kid#1-kid#3)
Kid#2 (who now has kid#2-kid#1, kid#2-kid#2)
Kid#3 (who now has kid#3-kid#1)
Kid#4

That’s a total of 11 Elves.

Let’s do the math, a bit. For every new year you add, you get one more Elrond kid. You also create the opportunity for one more kid to have a kid.  But it takes too long to do this work by hand.

This is a series expansion that is a variant of a Triangle number.

// Side note: Talking to others is good. I exploited my brother’s big math brain a lot for this post. All screwups to me, all smartness to him. //

He's got the saints and the sinners
Coming up from behind
-Eurythmics

It works out to adding up the sum of all numbers less than the generational age and correcting for the extra generational count.  For 3, for example, it is 1 (Elrond) plus (2 + 1) = 1 + 3 = 4.  For 5, it is 1 (Elrond) plus (4 + 3 + 2 + 1).

This ultimately becomes 1/2 * (generation^2 - generation) + 1.  
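A few lines to check the formula against the hand counts above:

```python
# Check the formula against the hand counts above.
def elves(generations: int) -> int:
    # 1 (Elrond) plus the sum of all numbers below the generation count
    return (generations ** 2 - generations) // 2 + 1

for g in (3, 4, 5, 30):
    print(g, "->", elves(g))   # 4, 7, 11, 436 -- matches the counts above
```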

So, back to our buddy Elrond. If he is 3,000 years old, and gets to breed once per century, he had 30 generations.  Which, plugging 30 into our handy-dandy formula, yields 436. 

That's a lot of Elves.

And what if there are a lot of Elronds? Like, say 10,000.  That is 4.4M Elves. 

But wait! There were supposedly “great cities” full of Elves, which sounds like millions of elves to start with. Let's say there were 1M Elves; now Elrond's 30 generations would have made 400M Elves, which is more than the total population of the US.

So, my question… why aren’t Elves like cockroaches, sitting around everywhere, generally in the way?

It’s stuff like this that keeps me up nights.

A new beginning....

Tell the Machiavellian
he’s not welcome anymore
—Stone Sour, “Tired”

Hello my new friends. The time has come. I’ve decided that the snark that lives inside me has to emerge, Athena-like, into the light. Well, at least into the dim; I don’t expect to generate much traffic here.

What am I doing? What is this new entry in the world?

This, ladies and germs, is TheMathMatters. Because, you see, the math DOES matter, and so many people get it wrong.

And I find that maddening.

What did you hope to learn about here?
—Matchbox 20, “Real World”

For those of you who are, inexplicably, new to Douglas’ Writing Style(tm), let me explain the ground rules.

At my previous blog, OtherEndofSunset, I talked only about personal stuff, often quite painful personal stuff. My goal was to externalize that which lived inside me. I used it to help me heal, to move on. Thanks to any of you who helped me on that journey.

I pointedly didn’t respond to any business-related questions, nor talk about stuff related to my employer.  I did, however, seem to have some odd fascination with hats. I can’t explain that part.

I will do things a bit differently here. My posts will be shorter, and less personal.  I may talk about business stuff, although it won’t be a primary goal.  

I hope someday you’ll join us
—John Lennon, “Imagine”

I will continue to quote song lyrics. I won’t tell you why I quote a particular song. The beauty of art is that it strikes each of us differently. Perhaps you’ll have some surprising reaction to a song I quote. Perhaps not. But let’s give it a shot, shall we?

I am going to write posts about stuff — press, television, academic articles, fiction — where math is … or should be … a key component. Some of my topics will be about hard things, others trivial. Some will result in “my side” being right, but other times not.

Really, Barbie notwithstanding, math often isn’t that tough.  It’s a funny language describing more or less common sense. It does not exist in some Platonic form. It is a tool created and wielded by humans, with the associated biases, errors and, occasionally, politics.

I don’t care what you believe — I’m glad you believe in something, and I very much want you to tell everyone what you believe. But you should try to get the numbers right.

If you don’t, I’m going to try and correct it. Mostly because it makes me laugh, but, perhaps, it will be edifying.

I won’t ask for nothing
while I’m gone
—Billy Joel, “Honesty”

I expect (hope?) that I will get the math wrong at some time. And I hope you point it out, even with a giggle if that helps.

Mostly, I hope we enjoy a brief trip together, for as long as it lasts.