Remember the Infinite Monkey Theorem? The idea that, given enough time and space, a monkey typing at random could eventually recreate the collected works of Shakespeare? Well, it now looks like there’s a better non-human for the job. Or, at the very least, a much more efficient one.
You guessed it: robots.
Well, not really “robots” (like the ones that chased Will Smith), but computer programs that can generate text at the pace of human articulation. Dominic Basulto, an innovation blogger for the Washington Post and Big Think, points out the growing capability of bot-generated content and the literary possibilities it provides:
This is becoming the central paradox of the Information Age: the easier it is for humans to create content and information on their digital devices, the more likely it is that robots and online bots will eventually take over the job of creating content and information for those digital devices. In other words, the more we democratize the process of creating content, the more we are planting the seeds for our own future literary demise.
The notion of a machine writing as eloquently as Shakespeare certainly seems a long way off, but it’s just as much of a possibility as any of the other “the-robots-will-someday-be-smarter-than-us” theories out there.
For now, though, we’re seeing this sensibility manifest itself in more practical ways. Yahoo’s much-ballyhooed whiz-kid acquisition Summly is nothing more than a genetic algorithm – a technique that mimics natural selection – that condenses pre-existing articles. While the content is not necessarily original in thought, it is original in format. Similarly, spam Twitter feeds have been composing unique tweets for years now (and we’re not talking about a small, isolated pocket of the Twittersphere – it recently came out that over half of Justin Bieber’s followers are fake), and some – like Horse_ebooks – have parlayed their entirely random (yet oddly poetic) gibberish into hundreds of thousands of followers.
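(For the curious: Summly’s actual pipeline is proprietary, so purely as an illustration of how a genetic algorithm can “condense” text, here’s a toy sketch. It evolves a selection of sentences toward better coverage of a document’s frequent words – survival of the fittest summary. Every name and parameter below is invented for the example.)

```python
import random

def summarize_ga(sentences, max_sentences=2, generations=40, pop_size=20, seed=0):
    """Toy genetic algorithm: evolve a subset of sentences that best
    covers the document's frequent words. Illustrative only."""
    rng = random.Random(seed)

    # Word frequencies across the whole document act as the fitness signal.
    freq = {}
    for s in sentences:
        for w in s.lower().split():
            freq[w] = freq.get(w, 0) + 1

    def fitness(mask):
        chosen = [s for s, keep in zip(sentences, mask) if keep]
        if not chosen or len(chosen) > max_sentences:
            return 0  # too long or empty: unfit
        covered = {w for s in chosen for w in s.lower().split()}
        return sum(freq[w] for w in covered)

    n = len(sentences)
    # Random initial population of sentence-selection masks.
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # occasional mutation
                i = rng.randrange(n)
                child[i] = not child[i]
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [s for s, keep in zip(sentences, best) if keep]
```

Real summarizers are far more sophisticated, but the skeleton – generate, score, select, recombine – is the whole trick.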
The downsides of thoughtless machines creating thoughts are plentiful, of course. Just this week, Wall Street algorithms built to mimic human reaction to news on Twitter picked up a fake AP Twitter report of an explosion at the White House, setting off a flash crash in the Dow – a $121 billion swing in a matter of minutes. It was robot readers overreacting to robot writers – basically the entire human condition in nanoseconds. No bueno.
Similarly, there are specific pros and cons to using these new tools in the marketing world. On one hand, they create a language overlap between man and machine that permits marketing databases to better understand and serve the sensibilities of consumers (with something as simple as a smiley face icon). On the other, they have the potential to wear down the already thin layer of authenticity that any branded communication bears.
It is here, as an experiential marketer, that I lose my enthusiasm for computer-generated content. Our field relies on authentic connections – genuine, human moments – to create content that delivers branded messages in a package consumers care about. Social media users don’t want a feed populated with brand-centric messaging assembled to appeal to recent web searches; they want glimpses into the memorable moments of their friends’ lives.
Experiential marketing creates those moments in a way a brand can be inextricably – but authentically – tied in, and these moments are naturally shared for online consumption. A machine may be smart enough to type a perfect 140-character brand message, but that doesn’t mean it’s as impactful as a human happy enough to share a slightly imperfect one.