
How I (almost) replaced two machine learning models with an if-statement

By Dale

A few years ago, when I was a little greener and my illiquid stock appreciation rights weighed heavily in my pocket, I set out on the hero's journey.

My employer at the time ran a marketplace, and we were having trouble with “bad actor” vendors on one side of the marketplace spamming our customers (the side that paid us) with low-quality submissions.

I’d heard whispers that the machine learning team was working on a solution and reached out to learn more. Effectively, they would train a model to predict which customer postings were likely to be spammed, then restrict those postings to vendors that another model predicted would submit high-quality product.

This struck me as odd. We knew which vendors were spamming low-quality submissions. Why did we need machine learning when a couple of database queries and an if-statement would suffice? Further, at best the machine learning solution would approximate the if-statement solution, with added lag while it trained on new customers, new vendors, and the spammers’ new behavior once they were locked out of the best customers.

Clearly the team just needed a presentation of the simpler solution.

So I set to work on a slide deck detailing my solution and, for good measure, some of the probable drawbacks of using machine learning instead. I gave the pitch to two layers of my superiors, incorporated a few suggested improvements, then scheduled a meeting with the machine learning team.

Flying high in the knowledge that my quest was almost complete, I gave my pitch: If a customer request already has N active submissions or is newer than D days old, we restrict new submissions to vendors who have a past acceptance rate above R, where R scales with N. That’s it. Now previously spammed customer requests will only receive the highest-quality submissions, new vendors won’t get permanently locked out of high-value customer requests, and vendor effort will be spread out to more customer requests. Everybody wins.
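
For concreteness, here’s a minimal sketch of what that rule could look like. The names and thresholds are made up for illustration; the active-submission count and each vendor’s historical acceptance rate would come from the couple of database queries mentioned earlier.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical thresholds; the real N, D, and R values would be tuned against marketplace data.
MAX_ACTIVE_SUBMISSIONS = 20        # N: a request with this many submissions is "busy"
PROTECTED_AGE = timedelta(days=3)  # D: how long a brand-new request stays protected


def min_acceptance_rate(active_submissions: int) -> float:
    """R scales with N: the more submissions a request already has,
    the higher the bar for additional vendors."""
    if active_submissions >= MAX_ACTIVE_SUBMISSIONS:
        return 0.75
    # Illustrative linear ramp from a low floor up to the cap.
    return 0.25 + 0.5 * (active_submissions / MAX_ACTIVE_SUBMISSIONS)


def may_submit(vendor_acceptance_rate: float,
               active_submissions: int,
               request_created_at: datetime) -> bool:
    """The if-statement: busy or brand-new requests only accept submissions
    from vendors with a sufficiently high past acceptance rate."""
    is_busy = active_submissions >= MAX_ACTIVE_SUBMISSIONS
    is_new = datetime.now(timezone.utc) - request_created_at < PROTECTED_AGE
    if is_busy or is_new:
        return vendor_acceptance_rate >= min_acceptance_rate(active_submissions)
    return True
```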

To drive my point home, I went through the cons of the machine learning solution and detailed how my solution did not have them. At this point the machine learning team’s project manager interjected.

PM: “Are you suggesting that we throw away the 6 months of work we’ve put into the machine learning solution?”

Me: “Yeah, I think we could implement my solution in a sprint or two.”

PM: “I’m not having this conversation.”

Soon after this exchange the meeting came to an awkward end. Despite having already gotten a degree of buy-in from the PM’s manager before the meeting, and despite the ML engineers agreeing that the cons I mentioned were legitimate concerns, the PM shut down all discussion of alternative solutions. The next week I was informally reprimanded by my manager (one of the people who had previously thought my idea had enough merit to explore with the machine learning team).

In the following months I left for greener pastures, my SARs expired worthless, and last I heard the PM was promoted.

Now there’s clearly a lesson to be learned here, and it might be one of the following:

Given a similar situation in the future, I should…

  1. Continue to apply my insight and skill to try to optimize my company’s product and maximize my chance of an equity payout. But to avoid reputational risk, I should do more due diligence on how invested the stakeholders are in the current solution. Further, I should go directly to the PM so they can hear my idea without feeling the need to be defensive.

    Because…

    I was just unlucky. Next time a different PM will surely be open to suggestions from outside their team.

  2. Only suggest other ML or AI solutions to replace existing less-than-optimal ML or AI solutions.

    Because…

    If ML or AI is being used where an if-statement would suffice, it is a deliberate decision that provides a marketing boost for the company, the product, the PM, etc. Removing an ML or AI feature from a product is always a mistake, and suggesting it will always result in reputational damage.

  3. Never attempt to optimize a product feature that is not owned by me or one of my reports.

    Because…

    No individual in any hierarchy is open to criticism from outside their chain of leadership because entertaining such criticism would paint them as incompetent. Getting myself into this situation will always end poorly.

I want to believe the lesson is number 1, but my current crumb of anecdotal evidence suggests otherwise.