Are you even trying to fight spam bots, Twitter?

I followed the #Eurovision topic tag on Twitter after the Eurovision Song Contest finale. What I saw was a lot of jokes and political opinions, plus some comments about the actual music. What you'd expect from Eurovision, in other words. What I also noticed was the enormous amount of bot activity posting and reposting the exact same messages.

I’ve heard about Twitter’s spam bot problem on and off for years. However, I’d never had much personal experience with it beyond the usual number of attractive women liking my posts in an attempt to get me to visit their porn-filled profiles. Seeing the large number of identical posts streaming past my screen was a complete surprise.

Note: I’m posting this from way up on my high horse with no knowledge of the efforts Twitter may be undertaking to prevent spam postings. I don’t have access to the full “hose” of all messages, so this post is pure speculation based on an incomplete picture of the situation.

I won’t draw any attention to the agendas or topics the bots were trying to promote. It’s enough to say that the messages were vicious, hateful, and almost surreal, and didn’t relate to anything. Instead, I’d just like to discuss the patterns I observed after following the junk the bots posted under the #Eurovision topic tag on Twitter:

  • The bot accounts posted tweets in bursts: three–four tweets in about one minute, followed by a pause lasting anywhere from seconds to hours, before a new burst of posts over the next minute.
  • Messages included all the top two–four worldwide trending hashtags.
  • The same account posted the same messages multiple times (though only once per burst).
  • The messages were posted under several hundred accounts in what seemed to be a continuous stream. Each account posted nearly identical messages – the varying factor was the order of the trending hashtags.
  • The bots didn’t use Twitter’s “retweet” feature to amplify a single message, but rather drowned the conversation in a flood of identical messages.
  • None of the bot accounts had profile pictures. Their home page fields were set to some of the most popular domains; it looked like a random selection from Alexa Internet’s top 1000 domains. No real user sets their home page address to “www.fbcdn.net” (a Facebook CDN) or “www.googleusercontent.com” (content uploaded by Google customers).
  • Accounts would either follow two–four other spam accounts, or follow two–four verified Twitter accounts. The verified accounts looked to be the type new users are greeted with on the welcome screen when they first register for Twitter.
  • Most accounts had no followers; a few were followed by one–two other spam accounts posting the same messages as themselves.
  • The bots struggled with character encoding. The garbage characters seemed to originate from processing a UTF-8 byte stream as if it were a single-byte encoding, resulting in three–four junk characters in place of the one expected UTF-8 character.
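That last symptom is classic mojibake. Here’s a minimal Python sketch of how it happens; the Latin-1 mix-up is my assumption about the bots’ pipeline, not anything I can verify:

```python
# One UTF-8 character mis-decoded byte-by-byte as Latin-1 (a single-byte
# encoding) turns into multiple junk characters.
original = "é"                   # one character, two bytes in UTF-8
raw = original.encode("utf-8")   # b'\xc3\xa9'
garbled = raw.decode("latin-1")  # each byte becomes its own character
print(garbled)                   # "Ã©" — two characters instead of one
```

A three- or four-byte character (emoji, for instance) mis-decoded the same way produces three–four junk characters, matching what I saw in the stream.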

I don’t believe you need a sophisticated, AI-driven fake-account detection system to identify and block combinations of the above patterns. The volume of messages streaming across my screen should be more than enough for Twitter to notice the pattern and act upon it. I searched through Twitter and found the exact same messages appearing more than a month ago. I’m surprised a spam message can rack up this many exact duplicates over such a long period without the platform blocking it and any future instances of it.
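A duplicate detector that tolerates the bots’ one trick — reshuffled trending hashtags — doesn’t require anything clever. A crude Python sketch (the normalization heuristic and all names are my own illustration, not Twitter’s actual logic):

```python
import re
from collections import Counter

def fingerprint(tweet: str) -> str:
    """Collapse reorderings of hashtags to one key: lowercase the text,
    pull the hashtags out, and re-append them in sorted order."""
    lowered = tweet.lower()
    hashtags = sorted(re.findall(r"#\w+", lowered))
    body = re.sub(r"#\w+", "", lowered).split()
    return " ".join(body + hashtags)

tweets = [
    "Some spam text #Eurovision #Trend1",
    "Some spam text #Trend1 #Eurovision",  # same message, hashtags reordered
    "A genuine comment #Eurovision",
]
counts = Counter(fingerprint(t) for t in tweets)
print(counts.most_common(1))  # the spam fingerprint appears twice
```

Any fingerprint that shows up hundreds of times across hundreds of fresh, picture-less accounts seems like an easy candidate for automatic blocking.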

I understand that Twitter is built on its users reposting, or retweeting, each other’s most interesting messages. However, if Twitter were to push legitimate users toward its native retweet feature instead of copy-pasting posts (isn’t that out of fashion anyway?), their moderation job could become much easier. If spammers had to retweet the same message to spread it, a couple of spam reports against a single message could take care of it. Reposting the same message from different accounts would then more easily flag both the message and those accounts as spam.

I’m not suggesting this type of bot activity is Twitter’s only bot problem, and I’m simplifying the issue a lot here. I was only looking at one topic tag’s stream for about an hour and don’t have a clear grasp of the situation they’re dealing with over at Twitter headquarters. However, the patterns I observed don’t look all that different from what forum and comment systems already identify and block all over the web.

I don’t have much faith in Twitter’s ability to solve their fake news/bot problem if they can’t even address the pretty obvious bot activity that I observed under the #Eurovision topic tag. I’ve got my own reasons for not being an active participant on Twitter. After peeking behind the scenes at the ongoing botfest that Twitter has become, I don’t see any reason why I’d want to be more active on Twitter in the foreseeable future.