Businesses have grown to increasingly trust algorithms, to the point that several companies essentially exist and profit primarily based on a proprietary algorithm. Investment companies use in-house algorithms to automatically trade stocks, while government agencies are using algorithms to guide everything from criminal sentencing to housing. Many companies now have predictive algorithms doing anything from forecasting product sales to identifying potential hacks.
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)
A recent high-profile example of an “algorithm gone wrong” comes from real-estate company Zillow. Perhaps best known by consumers for its “Zestimate,” an algorithm-driven estimation of a home’s value, the company also had a business called Zillow Offers. Zillow Offers took the old idea of buying undervalued houses, making repairs and then selling them, usually called “flipping,” and added algorithmic magic.
The concept was elegant and straightforward. The algorithm would identify homes to purchase, using Zillow’s trove of real-estate data to find houses that offered a predictable, lower-risk return. Zillow technology would automate many of the steps of making an offer and completing the transaction, and the company would make a modest profit on the flip plus predictable returns from the transactional fees associated with the purchase and sale.
The idea was so compelling that in a 2019 interview, Zillow CEO Rich Barton speculated that Zillow Offers could have $20 billion in revenue in the coming three to five years.
When algorithms go wrong
If you’ve followed the business press, you’ve probably heard that Zillow has shut down the Zillow Offers business and is selling off its remaining portfolio of homes. A variety of factors contributed to the shutdown, from unanticipated difficulty sourcing materials and contractors to repair houses before resale, to the algorithm’s poor performance at predicting house prices.
Human vagaries also contributed to Zillow Offers’ demise. Given two homes with identical specifications and similar locations, an algorithm is unlikely to predict that buyers in a particular housing market might prefer an open kitchen layout to an enclosed one. Similarly, Zillow leaders attempted to correct the algorithm’s missteps by putting the digital equivalent of a “finger on the scale,” adding or subtracting percentages from its estimates.
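To make the “finger on the scale” concrete, here is a minimal sketch of what such a manual override amounts to. The function name, signature and numbers are illustrative assumptions, not Zillow’s actual code: a human-chosen percentage is simply layered on top of whatever the model produces.

```python
# Hypothetical sketch: a human-set, market-level percentage adjustment
# applied on top of a model's price estimate. All names and figures
# here are illustrative, not drawn from Zillow's systems.

def adjusted_offer(model_estimate: float, market_adjustment_pct: float) -> float:
    """Apply a manual percentage tweak to an algorithmic price estimate."""
    return model_estimate * (1 + market_adjustment_pct / 100)

# A +7% override in a hot market turns a $300,000 estimate into a $321,000 offer.
print(adjusted_offer(300_000, 7))  # 321000.0
```

The trouble with this kind of override is visible even in the sketch: the adjustment is a blunt constant, applied uniformly, while the errors it is meant to correct vary house by house and week by week.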
Competitive pressures also created conflict. Staff who warned that the algorithm was overestimating home values were ignored, according to a recent WSJ article. Ultimately, an algorithm that seemed to work well in a test market was rapidly deployed to more markets, coinciding with one of the strangest real-estate, supply chain and employment markets in nearly a century, and saddling Zillow with a portfolio of houses that were financially underwater.
Bring sanity to algorithms
There’s a lot of coverage of the wonders of algorithms, machine learning and artificial intelligence, and rightfully so. These tools have seemingly magical abilities to identify disease, optimize complex systems and even best humans at complex games. However, they are not infallible, and they often struggle with tasks and inferences that humans make so naturally we assume they’re trivial.
Your organization probably wouldn’t trust a single employee to make multimillion-dollar transactions without checks and balances, monitoring and regular evaluations. The fact that a machine performs these transactions doesn’t mean similar oversight, controls and regular reviews can be skipped.
Unlike a human, your algorithms won’t have bad days or attempt to steal, but they are still subject to imperfect information and a different set of shortcomings and foibles. Pair an algorithm with wildly uncertain economic and social conditions, and the monitoring needs become even more acute.
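One simple form of the monitoring described above is an automated tripwire: compare the model’s estimates against realized outcomes and flag deviations beyond a tolerance for human review. The sketch below is a hypothetical illustration of that idea, with assumed names and example figures, not a production monitoring system.

```python
# Hypothetical sketch of one basic control: flag transactions where the
# model's estimate deviated from the realized sale price by more than a
# tolerance, so humans review the model before deploying it further.
# Function name, tolerance and sample data are illustrative assumptions.

def flag_outliers(estimates, sale_prices, tolerance=0.10):
    """Return indices where |estimate - sale| / sale exceeds the tolerance."""
    flagged = []
    for i, (est, sale) in enumerate(zip(estimates, sale_prices)):
        if abs(est - sale) / sale > tolerance:
            flagged.append(i)
    return flagged

estimates   = [310_000, 450_000, 205_000]
sale_prices = [300_000, 380_000, 210_000]
print(flag_outliers(estimates, sale_prices))  # [1]
```

The point is not the specific threshold but the review loop: a control like this turns a quiet pattern of overestimation into a visible signal before it compounds across an entire portfolio.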
As your organization considers and deploys algorithms, you should strive to educate your peers on their capabilities and limitations. Things that might seem miraculous, like spotting tumors in an MRI image or identifying objects in a picture, are actually easier for machines because they rely on a static data set. Give a machine enough images of tumors and it will learn to identify them in other images. When applied to dynamic markets, however, algorithms suffer the same challenges as humans, best described by the warning in every investment prospectus that “past performance does not indicate future results.” Embrace their use, but understand and convey their limitations.