Charlie Munger, Dating Apps, and the Dangers of Impartiality

Artem Burachenok
4 min read · Sep 13, 2023


TL;DR: For the first time in human history, we are entering a world where matching grounded in moral values, done at a modest scale in a highly fragmented environment, is being replaced by matching done at a large scale by a handful of agents, each operating without any moral norms. To mitigate the adverse consequences of this transition, we should think about guardrails, handicaps, transparency, and competition.

***

I recently came across an article discussing a seeming paradox in modern dating: despite numerous dating apps and opportunities, many users now find themselves endlessly dating for years, without finding a lasting match.

The article identified several reasons for this, including superficial filtering (based on physical attractiveness), shallow subsequent communications, unrealistic expectations, and lack of commitment.

While all these elements seem valid, I thought it’s worth considering two questions, inspired by Charlie Munger’s approach:

  • What drives dating apps’ business models?
  • Given those drivers, what is the system’s most likely behavior?

Since the prevalent monetization model is a monthly subscription, one of the most important metrics is user lifetime: how long the user keeps paying.

Consider a thought experiment where an advanced algorithm is tasked with optimizing user lifetime using extensive profile and behavior data. Quickly finding great matches shortens the lifetime. Showing terrible matches leads to disappointment and churn. So, what should it do?

A hint can be found in one of the stories Charlie liked to share, the story of the gambling machine, quoted below:

“You own a small casino in Las Vegas. It has fifty standard slot machines. Identical in appearance, they’re identical in function. They have exactly the same payout ratios. The things that cause the payouts are exactly the same. They occur in the same percentages. But there’s one machine in this group of slot machines that, no matter where you put it among the fifty, in fairly short order, when you go to the machines at the end of the day, there will be 25% more winnings from this one machine than from any other machine. <…> What is different about that heavy winning machine? <…>

What’s different about that machine is people have used modern electronics to give a higher ratio of near misses. That machine is going bar, bar, lemon. Bar, bar, grapefruit, way more often than normal machines, and that will cause heavier play.”

If near-wins increase slot machines’ playing time, would near-matches increase the lifetime of a dating app user?
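To see how strongly the incentive points toward near-matches, here is a toy Python simulation. The churn probabilities are invented, and nothing here reflects how any real app works; the only point is that a lifetime-maximizing learner needs no malice to prefer near-matches:

```python
import random

# Toy churn model: each month the service shows one candidate, and the
# user leaves with some probability that depends on what they keep seeing.
# All probabilities below are invented for illustration.
P_LEAVE = {
    "strong_matches": 0.90,  # found a partner -> cancels the subscription
    "bad_matches":    0.40,  # disappointed -> churns
    "near_matches":   0.05,  # "so close!" -> keeps paying, keeps swiping
}

def expected_lifetime(policy: str, trials: int = 100_000) -> float:
    """Average number of paid months under a fixed matching policy."""
    total_months = 0
    for _ in range(trials):
        months = 1
        while random.random() > P_LEAVE[policy]:
            months += 1
        total_months += months
    return total_months / trials

for policy in P_LEAVE:
    print(f"{policy:>14}: ~{expected_lifetime(policy):4.1f} paid months")
```

With these made-up numbers, the near-match policy keeps a user paying for roughly twenty months, versus about one month under the quick-strong-match policy. An optimizer only has to follow its own metric to land there.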

Zooming out, I find two points in this story critical to consider: the structure of incentives, and the presence of an impartial (making no moral judgments) self-learning algorithm operating at a large scale.

It’s unlikely that the system will aim for lasting matches when its economic incentives favor prolonged user engagement. But what if we build a matching system with perfectly aligned incentives? Consider a dating service that charges for a strong match instead (you pay only if you find a match). Here a quick, strong match seems to be the algorithm’s best strategy, but is it? For example, if we define a strong match as a couple spending at least a year in a relationship, the algorithm may over time learn to match people so that they spend exactly one year together, only to break up and return to the dating service immediately after. Can we raise the bar to “multiple years”? Then the reward becomes so rare and so delayed that the algorithm will likely produce no matches at all. Aligning incentives well turns out to be extremely hard.
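Here is a sketch of why even the pay-per-match pricing is gameable, assuming (hypothetically) a flat fee paid once a couple crosses the one-year threshold:

```python
FEE = 100.0           # hypothetical one-time payment for a "strong match"
THRESHOLD_DAYS = 365  # "strong match" defined as one year together

def provider_revenue(relationship_days: int) -> float:
    """Pay-per-match pricing: the service earns the fee only if the
    couple crosses the threshold; nothing extra for lasting longer."""
    return FEE if relationship_days >= THRESHOLD_DAYS else 0.0

# Revenue per user-year for three outcomes: a miss, a break-up right
# after the threshold, and a forty-year marriage.
for days in (200, 366, 365 * 40):
    per_year = provider_revenue(days) / (days / 365)
    print(f"{days:>6} days together -> ${per_year:6.2f} in fees per user-year")
```

A forty-year marriage and a day-366 breakup pay the service exactly the same fee, but the breakup returns two paying users to the pool, so revenue per user-year peaks right at the threshold. The metric rewards crossing the bar, not staying above it.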

But is this problem novel? In other words, do we need to worry about it? I think it is, and we should. Matching decisions used to be made by people. In dating, suggestions used to be given by family members, friends, etc.

When matching is done manually, two things happen. First, human moral judgment comes into play. While there are exceptions, we are generally unlikely to recommend a genuinely bad option to someone. Second, the bandwidth of our recommendations is low: even a recommender bent on giving treacherous advice can affect only a relatively small number of people. And there is always healthy competition between recommenders, which dilutes the adverse effects of any one advisor.

Now, consider a modern internet service’s impartial self-learning algorithm. It has no moral constraints on how it pursues its optimization goal. If the most efficient strategy is to produce near-matches, what will stop it from following that strategy? As far as the machine is concerned, this is a game not unlike Go or chess, and the job is to win.

Moreover, because of its practically unlimited bandwidth and the winner-takes-most market dynamic, the algorithm’s workings are likely to affect a great many people, with little competition to dilute the effect.

To summarize: we have moved from matching grounded in moral values, done at a modest scale in a highly fragmented, competitive environment, to matching done at a large scale by a handful of agents, each operating without any moral guardrails.

What can be done about it? Coming up with a set of rules that approximates our moral judgment is extremely challenging, and we cannot simply hand the machine a golden-rule algorithm. Nevertheless, approximating our judgment with rules seems like a worthy course of action, alongside a certain level of transparency into the algorithms’ workings and a healthy competitive landscape, so that users can choose among different versions of the algorithms and their constraining rules.
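To make the rules idea concrete, here is a minimal sketch, assuming a hypothetical setup where hand-written vetoes sit outside the learned model and filter its candidates before the engagement optimizer ranks them. The rules and field names are invented for illustration:

```python
# Hypothetical guardrail layer: hand-written rules veto candidates
# before the engagement-optimizing model gets to rank them.
def violates_guardrails(user: dict, candidate: dict) -> bool:
    # Each rule approximates a piece of the judgment a friend would
    # apply before recommending someone.
    if candidate.get("reported_for_abuse"):
        return True
    if candidate.get("known_dealbreakers", set()) & user.get("hard_requirements", set()):
        return True
    return False

def recommend(user, candidates, engagement_score):
    eligible = [c for c in candidates if not violates_guardrails(user, c)]
    # The optimizer runs only inside the morally pre-filtered set.
    return max(eligible, key=lambda c: engagement_score(user, c), default=None)
```

The design point is that the vetoes are not learned: they stay fixed and inspectable, which is also where transparency into the algorithm’s workings would plug in.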

Another idea would be to make the machine less powerful by deliberately limiting the data available to it. The less powerful it is, the less opportunity it has to game the constraints. While the user will see more bad matches, they will also have a higher chance of a genuinely good one.
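A minimal sketch of that handicap, assuming (hypothetically) that profiles mix coarse user-declared fields with fine-grained behavioral ones; the matching model is simply never shown the behavioral signals it would need to engineer near-matches for each individual. All field names are made up:

```python
# Withhold the fine-grained behavioral signals that would let a model
# fine-tune "almost right" candidates for each user.
BEHAVIORAL_FIELDS = {"swipe_history", "message_sentiment", "session_times"}

def visible_profile(profile: dict) -> dict:
    """What the matching model is allowed to see: coarse, user-declared
    data only. Enough to beat random matching, too little to game."""
    return {k: v for k, v in profile.items() if k not in BEHAVIORAL_FIELDS}

user = {
    "stated_preferences": ["hiking", "books"],
    "age_range": (28, 38),
    "swipe_history": ["id123", "id456"],  # withheld from the model
    "message_sentiment": 0.42,            # withheld from the model
}
print(visible_profile(user))
# -> {'stated_preferences': ['hiking', 'books'], 'age_range': (28, 38)}
```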

Beyond dating, this reasoning applies to any decision-making algorithm whose stated goal is the ultimate good of a human (e.g., recommending content to read or watch).
