The Diminishing Returns of Suggested Content

Justin Sandberg
Jul 11, 2021 · 5 min read
Why not let a machine decide what is For You?

Love them or hate them, algorithm-driven suggestions are now the norm in most forms of online interaction. Ads, content, and even ideas are perpetually sorted, ranked, and then presented to us in an effort to sell, influence, or simply keep us engaged. A well-developed algorithm can launch a good platform to the forefront of the market; TikTok, for example, keeps its millions of users readily engaged through constant, effective suggestions. This is in contrast to older social media, where what the user sees tends to be curated by the user themselves. However, there appears to be a trend toward less user control over the content that shows up in their feeds, sometimes with comical results.

Twitter, either to mask an increase in promoted content or to boost engagement, has rolled out a new “suggested topics” feature. The idea is not particularly novel: a user subscribed to a specific topic will see tweets about that interest on their timeline, outside of the normal follower mechanism. Unfortunately, for someone genuinely interested in a particular topic, this feature is an inaccurate, blunt instrument that, at best, recognizes a few keywords and, at worst, can barely even handle that. However, it is an excellent feature for fans of absurdist comedy.

Men’s golf. Sesame Street.

Now, the mention of a slope could perhaps explain why the algorithm would consider golf for this particular interaction. I am not enough of a golf aficionado to know whether slopes are particular to men's golf; I doubt it. This is not an isolated instance either, as Twitter's suggestion algorithm will seemingly attach random tweets to topics.
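The failure mode above looks a lot like bare keyword matching. To be clear, this is purely an illustrative sketch, not Twitter's actual system: a tweet gets assigned every topic whose keyword list overlaps its text, with no sense of context, let alone irony.

```python
# Hypothetical keyword-based topic tagger -- an assumption for
# illustration, NOT Twitter's real implementation.
TOPIC_KEYWORDS = {
    "Men's golf": {"golf", "slope", "putt", "fairway"},
    "Sesame Street": {"elmo", "big bird", "sesame"},
}

def tag_topics(tweet: str) -> list[str]:
    """Assign every topic whose keywords appear anywhere in the tweet."""
    text = tweet.lower()
    return [
        topic
        for topic, keywords in TOPIC_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]

# A tweet mentioning a "slope" gets tagged with golf, regardless of
# whether it has anything to do with the sport.
print(tag_topics("Elmo slid down the slope"))
```

Under this toy model, the Sesame Street tweet lands in the "Men's golf" topic for exactly the reason speculated above: one stray keyword.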

Good for residents of Houston, Pennsylvania, less optimal for Texans

I’m sure a person interested in Houston-related information finds value in this news item, even if we disregard the fact that BNO News is based in the Netherlands. But fine, algorithms can be improved, given time and effort. However, it seems doubtful that an algorithm can work its way around irony. Take the following tweet as an example:

Technically, this tweet is “pop-rock” adjacent (as long as pop-rock refers to the musical genre, not the candy). Yet the main idea of the tweet, the value a reader might extract from it, is not at all connected to music. It is not even particularly pleasant content, from the poster's perspective. This is the kind of human communication that does not lend itself to machine learning and will likely continue to confuse any algorithm built to suggest content, particularly on a text-rich platform such as Twitter. The result is fairly entertaining unintentional comedy, as one can imagine a true fan of a topic being randomly exposed to such tweets.

This raises the question: why implement such features at all if they function so poorly? Obviously, rolling a feature out is a good way to start improving it, though it remains unclear how an algorithm could stay on top of such internet staples as sarcasm, irony, and memes. It is also possible to simply lie when tagging a topic for any piece of content, which, over time, drastically decreases the effectiveness of any algorithm.

Twitter is not the only platform suffering from this issue. One can browse YouTube's new “Shorts” feature, a blatant TikTok imitation; it often literally features TikToks reposted to YouTube, complete with watermarks and outros. Love it or hate it, TikTok does boast a powerful suggestion algorithm that frequently feeds a regular user new content. “Shorts”, on the other hand, will feed the user seemingly random selections of topics, ranging from dog videos to military memes and lifestyle tips for Muslims, all within one minute. The juxtaposition is interesting enough, but users will find that YouTube's algorithm tends to lock onto one of the topics they enjoy and simply serve only that. I have a soft spot for dogs, so, naturally, after a few hours of use my suggestions looked like this:

Similar to Google’s DeepDream, YouTube can’t seem to discern a cat from a dog

I am not at all mad, though the algorithm seems to believe that I am particularly interested in Rottweilers, when, in truth, I love all dogs equally. This is a perfect metaphor for how Shorts works in general: unable to actually ascertain what I am interested in, it simply spams one topic. While it is understandable why one would want to avoid TikTok for security reasons, as a source of short videos it blows YouTube out of the water. TikTok also has the added benefit of novelty, as the user never knows what to expect each time they swipe up to view another video. This can keep a user engaged for significant lengths of time. That being said, users begin to meta-game the content, disliking videos specifically to reduce the chances of seeing a particular type again. One might enjoy a single video about carpentry and not want to be suggested seven more, so one dislikes a video that was actually quite entertaining. This skews your engagement with any topic, as you feel obliged to “curate” your own feed, which, in the end, is simply what you were already doing the first time you subscribed on YouTube or followed someone on Twitter.

All in all, suggested content, in its current application, seems to produce, at best, comical results, but little actual value beyond some extra exposure for creators. As internet users in the 21st century, we roughly understand how an algorithm works and will adjust our behavior in an attempt to game the system for our own benefit. Currently, machines can only see the trees, not the forest, and their suggestions suffer accordingly.

All images in the article are screenshots taken by the author and contain publicly visible information.
