We typically think of artificial intelligence (AI) in terms of processes and calculations. Many rules-based tasks are ripe for intervention by sophisticated algorithms and machine learning systems.
The commonly held assumption is that developments such as these will only liberate us to spend more time on creative tasks, like strategy and content production.
There is a distinction drawn between man and machine here; while we handle the arts, they can handle the sciences.
If it is a form of cohabitation, it is they who are working for us, automating the work too time-consuming or cumbersome for us to do.
Those neatly drawn categories seem helpful, but are they still accurate?
As we move from rule-based automation to true AI, should we believe that creativity will remain a singularly human pursuit?
Deep learning algorithms are ushering in an era of Strong AI (also known as True AI), capable of taking abstract, unlabeled inputs and making decisions based on sensory information as well as pure, structured data. This is a hugely significant shift and could mark AI’s advance into the realm of sentience. With sentience comes, many argue, the capacity to be creative.
Depending on your stance, the encouraging (or foreboding) evidence of this is already all around us. Within the last 12 months, AI has been deployed to combat terrorism, has composed classical music, and has produced art that critics couldn’t distinguish from works by human artists.
Moreover, Gartner predicts that “By 2018, 20% of all business content will be authored by machines”; Elon Musk thinks computers will be able to do anything a human can “by 2030 to 2040”; and Google invested over $800,000 in the Press Association’s initiative to generate news stories solely through the use of AI.
Placed in this context, the inexorable march towards AI-led creative content seems certain, but the narrative is not so linear.
The challenges for artificial intelligence within the realm of content generation touch as much on the philosophical and the psychological as they do on the technological.
Therefore, within this article, we will seek to define these challenges, assess their potential rewards, and discuss what a future would look like where machines control the message as well as the medium.
First, some definitions
Before we dive into our investigation of the potential for AI to produce high-quality content, we should spell out exactly what we are trying to define and decide.
First of all, we need to ensure a solid grasp of what exactly constitutes intelligence.
Intelligence can succinctly be defined as the ability to convert experience into knowledge or new skills.
Artificial intelligence, then, is (as defined by Stanford University), “the science and engineering of making intelligent machines, especially intelligent computer programs.”
A common (but not ubiquitous) belief has been that AI is therefore an exact replica of human intelligence and, as a result, we can only achieve true AI when we fully understand how our own minds work.
Philosophers such as Julien Offray de La Mettrie, in his 18th-century work L’homme Machine (Man, a Machine), were already pondering the difficulties that remain with us today:
“Man is so complicated a machine that it is impossible to get a clear idea of the machine’s inner workings, and hence impossible to define it.”
This has long been a barrier to progress for AI; that which we cannot define, we cannot replicate.
Artificial intelligence, therefore, has been held back by cognitive science’s limited understanding of the essential nature of consciousness.
That dependence on cognitive science is dissipating, however.
The shift is highlighted later in Stanford University’s definition of AI in the phrase, “AI does not have to confine itself to methods that are biologically observable.” In other words, the limits of our understanding need not constitute the limits of artificial intelligence.
Accenture’s helpful categorization goes a little further to define artificial intelligence as systems that can “sense, comprehend, and act.” This is in keeping with most AI research up to this point, which has typically focused on a few different components of intelligence, such as reasoning, problem-solving, and perception.
These are essential points for us to understand if we truly want to discern whether AI-generated creative content is somewhere on the horizon. First, let’s briefly define the process of content generation.
‘Content generation’ can be a rather nebulous term, but we can broadly break it down into three areas: coming up with content ideas, producing the content, and promoting the content.
Content itself is an ever-shifting field, meaning anything from a tweet to a Hollywood blockbuster and a billion other things in between.
For the purposes of this article, let’s keep this restricted to a typical content marketing campaign that involves idea generation and content production in the forms of a video, some text-based articles with accompanying images, and social media promotion.
Within that scope, we require a combination of problem-solving, critical analysis, and original thinking to come up with an idea that will resonate with the target audience.
What we should really be trying to ascertain here is the level of independence an AI program can achieve.
Our questions are therefore:
To what extent can artificial intelligence undertake and combine these specialisms, and will the result be the equal or superior of a human effort?
Can a machine be creative, when we aren’t completely sure of the formula behind our own creativity?
The prevailing presumption is that the creative process is so resistant to concrete definition that it lies beyond artificial intelligence; not because of any innate failing of AI systems, but because of our own limitations.
We seem to arrive at something of an impasse at this point, but there has been remarkable progress of late that gives cause for optimism as we contemplate the autonomous generation of content by machines.
AI and content marketing today
AI is normally deployed for logistical tasks in marketing, as its computational abilities far exceed our own in both power and speed.
These rule-based systems can exponentially increase the efficiency of media buying, for example, leaving us to get on with the more strategic or creative tasks. This is really an example of automation, rather than narrow AI.
Content ideation, creation, and dissemination are tasks that tend to fall within the category of the arts and, as a result, seem less prone to mechanical disruption. The machines crunch the numbers, we come up with the ideas, and then the machines personalize the message and target the right customers.
This attitude comes through clearly in a recent survey of CMOs by eMarketer.
There is a clear trend here; marketers expect AI to work with a pre-existing message, tailoring and targeting it for each medium.
The areas expected to see an impact from developments in AI can broadly be categorized as data analysis and online experience customization. These require stimulus materials, such as landing pages on websites or videos. And these, of course, arise from the deep and mysterious wells of human creativity. Once more, we arrive at a neat distinction between the mechanical and the human.
Some efforts to introduce elements of AI into content generation have been quite effective, albeit limited in their ambitions.
Platforms like Quill and Wordsmith offer automated content generation already, which is very useful for creating declarative content or product descriptions at scale.
Undoubtedly, a machine can scan news headlines, assess page-level traffic data, and decide which headlines are most likely to generate clicks in the future.
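The shape of that kind of headline assessment can be sketched in a few lines. Everything below, the word-frequency scoring, the function names, and the sample data, is purely illustrative, not how any particular platform actually works:

```python
from collections import Counter

def train_word_scores(headlines, clicks):
    # Learn, per word, the fraction of past headlines containing it
    # that were clicked. A real system would use far richer features.
    word_clicks, word_counts = Counter(), Counter()
    for headline, clicked in zip(headlines, clicks):
        for word in headline.lower().split():
            word_counts[word] += 1
            word_clicks[word] += clicked
    return {w: word_clicks[w] / word_counts[w] for w in word_counts}

def score(headline, word_scores):
    # Average the learned scores of a new headline's words;
    # unseen words get a neutral 0.5.
    words = headline.lower().split()
    return sum(word_scores.get(w, 0.5) for w in words) / len(words)
```

With a handful of past headlines and click outcomes, the scorer ranks a clickbait-style candidate above a dry financial one, which is exactly the behavior described above, albeit in miniature.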
Whisper recently relaunched too, with an AI-driven content customization engine and the added safeguard of human editors behind the scenes.
Furthermore, as noted earlier, Gartner predicts that 20% of all business content will be AI-generated by 2018.
That sounds like a big increase, but we need to understand the concept of ‘business content’ as opposed to ‘creative content’ to gain some perspective.
Finance content, such as quarterly earnings reports, is often written by a machine, typically using software like Quill or Wordsmith. There is simply no need to inject personality into this content, so a computer is up to the job.
This is why Google’s investment in the AP’s artificial intelligence program will be intriguing to marketers. There is an opportunity to automate some news stories without people noticing the difference, but it will be fascinating to see if this develops beyond informational content.
A study (‘Enter the Robot Journalist’), featured in the New York Times in 2015, adds a bit more color to this observation.
The ‘Turing Test’ is often cited in relation to these questions, and a variation of this assessment was used in the study. In the Turing Test (named after English mathematician and AI pioneer Alan Turing), a computer passes if a human interrogator cannot distinguish its answers from those given by a human.
Turing provided the following example:
Interrogator: In the first line of your sonnet which reads ‘Shall I compare thee to a summer’s day’, would not ‘a spring day’ do as well or better?
Computer: It wouldn’t scan.
Interrogator: How about ‘a winter’s day’? That would scan all right.
Computer: Yes, but nobody wants to be compared to a winter’s day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Computer: In a way.
Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.
Computer: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.
The use of poetry as an example seems purposeful, and revealing in the context of our question about the potential for AI to generate creative content.
Analysis of underlying patterns is an ideal task for an AI system, as we will see later.
Within the feature in the New York Times, the Turing Test was re-labeled:
If an Algorithm Wrote This, How Would You Even Know?
This was significant in that it went further than the binary pass/fail of the Turing Test and asked participants which of two articles on the same event best fit a list of descriptive adjectives. Unbeknownst to the participants, one version was written by AI software, the other by a professional journalist.
The results are striking, if not altogether shocking. The software scores well on descriptors like ‘informative’, ‘trustworthy’ and ‘objective’.
It lags behind on ‘pleasant to read’, but is the runaway victor on ‘boring.’
This seems to confirm our preconceptions about what a computer can and cannot do.
Although just one study, it does hint at an underlying truth: the best creative content is emotive, and computers are not good at being emotive. Without actually having emotions, it is nigh-on impossible for machines to tap into this deep but inaccessible reservoir of inspiration.
That is not to say that artificial intelligence has hit a brick wall when it comes to content creation. News outlets can depend on AI to write accurate stories on events, but AI has also developed the ability to produce creative work through the use of neural networks.
Neural networks and creative content generation
The move to widespread use of expensive but effective deep learning neural networks has started to erode the requirement for a priori human knowledge. A machine can learn on its own, like a human can. However, what it learns is not restricted by the limits of our perceptual apparatus.
We don’t even need to understand a neural network’s processes for the outputs to be successful.
In simplified terms, a deep learning neural network passes an input layer of data through one or more ‘hidden’ layers of neurons to an output layer. This is how many such systems function, including Google’s RankBrain algorithm, which shapes the search results we see.
The number of neurons in the hidden layers can be increased significantly (as can the number of layers) to model increasingly complex relationships between inputs and outputs.
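For illustration, here is a minimal feedforward pass in plain Python. The weights are made up and nothing is learned; real systems such as RankBrain involve vastly larger networks with parameters tuned by training:

```python
import math

def sigmoid(x):
    # Non-linear activation squashing any value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron computes a weighted sum of the inputs,
    # passed through the activation function.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output layer repeats the process over the hidden activations.
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]

# Toy network: 3 inputs, 2 hidden neurons, 1 output.
out = forward([0.5, -1.0, 2.0],
              hidden_weights=[[0.1, 0.4, -0.2], [-0.3, 0.2, 0.5]],
              output_weights=[[0.7, -0.6]])
```

Adding more hidden neurons means adding rows to `hidden_weights`; adding layers means chaining further weighted sums, which is the “deep” in deep learning.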
This approach was used by Aiva, an AI music composer. The input layer for Aiva consisted of sheet music from classical composers. Within the ‘hidden layers’, Aiva learned the underlying structures and patterns that occur within the music. After mastering the theory, the AI put it into practice: Aiva’s work is available on SoundCloud.
This is hugely impressive, but some notes of caution are needed. Aiva’s music is dependent on the stimulus material and derives its composition directly from it. Also, a human orchestra is required to play the music. Nonetheless, this is evidence of a significant stride towards creativity.
Similar experiments have been undertaken to see how well an artificial intelligence system can understand the structure of literary works.
Since so much content production is driven by text, this is an important area for us to understand. The study produced visualizations mapping Cinderella’s fortunes over the course of the eponymous character’s story.
The AI system was fed almost 2,000 works as its input layer and the desired output was an analysis of any common structures it could find. It located the following six:
1. Rags to Riches (rise)
2. Riches to Rags (fall)
3. Man in a Hole (fall then rise)
4. Icarus (rise then fall)
5. Cinderella (rise then fall then rise)
6. Oedipus (fall then rise then fall)
This analysis relates to the emotional arc within each book, not just the narrative or plot-based arc.
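The essence of such an emotional-arc analysis can be sketched with a toy sentiment lexicon: split the story into segments and score each one by its balance of positive and negative words. The word lists and scoring below are invented for illustration; the actual study used far more sophisticated sentiment analysis over nearly 2,000 works:

```python
# Hypothetical mini-lexicon; real analyses use large sentiment lexicons.
POSITIVE = {"love", "joy", "ball", "happy", "prince"}
NEGATIVE = {"cruel", "cry", "ash", "midnight", "lost"}

def emotional_arc(words, segments=4):
    """Score each segment of a story: positive words minus negative words."""
    size = max(1, len(words) // segments)
    arc = []
    for i in range(0, len(words), size):
        chunk = words[i:i + size]
        score = (sum(w in POSITIVE for w in chunk)
                 - sum(w in NEGATIVE for w in chunk))
        arc.append(score)
    return arc
```

Plotting the resulting sequence of scores gives the rise-and-fall shapes listed above: a ‘Man in a Hole’ story dips and recovers, a ‘Cinderella’ story rises, falls, then rises again.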
Once more, it is impressive, but does not tell us whether the same AI could write a book of similar quality.
We don’t have definitive answers on this yet, but the signs point to a successful AI-authored book within the next decade. Just last year, a short novel written by an AI program made it past the first round of judging in the prestigious Hoshi Shinichi Literary Award in Japan. One of the judges, the science-fiction novelist Satoshi Hase, said: “it was a well-structured novel. But there are still some problems [to overcome] to win the prize, such as character descriptions.”
This chimes with what we have seen above. Well-structured, informative, accurate, but lacking in the artistic elan that marks out a human author’s work. Nonetheless, we should expect mechanical wordsmithery to continue improving as AI systems edge closer to sentience.
Perhaps the best-known studies in the field of creative AI have been in the visual arts.
Work by scientists at Rutgers University demonstrated AI’s capacity to process thousands of years of art history and almost instantly identify the similarities in different paintings. Some of these were entirely new discoveries, but art critics agreed that there was enough of a structural, thematic, or aesthetic tie between the paintings to call them interrelated.
Last year, it was revealed that art created by artificial intelligence was preferred to work by human artists in a public survey. This success was driven by the pairing of two connected neural networks, “a generator, which produces images, and a discriminator, which judges the paintings.”
This is known as a Generative Adversarial Network, or GAN.
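In outline, a GAN pits the generator against the discriminator, with each improving through the other’s feedback. The toy below captures only the shape of that adversarial loop, in one dimension with a fixed critic; a real GAN trains two neural networks against each other by gradient descent:

```python
import random

# "Real" data are samples near 4.0. The generator learns a single
# parameter (its mean) by probing which direction fools the critic more.

def discriminator(x, real_mean=4.0):
    # Higher score = looks more like real data. Fixed here for
    # simplicity; in a real GAN this critic is also being trained.
    return -abs(x - real_mean)

def train_generator(steps=200, lr=0.05):
    random.seed(0)  # deterministic for reproducibility
    gen_mean = 0.0
    for _ in range(steps):
        sample = gen_mean + random.gauss(0, 0.1)
        # Nudge the generator toward whichever direction scores higher.
        if discriminator(sample + 0.1) > discriminator(sample - 0.1):
            gen_mean += lr
        else:
            gen_mean -= lr
    return gen_mean

learned = train_generator()
```

After a few hundred steps the generator’s output distribution settles around the real data, the same dynamic that, at vastly greater scale, produced the paintings described above.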
We have here a clear example of AI producing original works of art that more than pass the Turing Test.
As a result, it seems perfectly feasible that a similar setup could produce a successful YouTube video and even create some original imagery for our AI-written articles, while the same technology that drives AI chatbots could manage a social media account.
So we are nearing the point where AI-generated creative content is very possible, but is it accessible?
There are hints at an answer here from Google.
Google is training its algorithms to teach each other through a process known as Federated Learning.
Federated Learning is Google’s latest attempt to speed up the data-sharing processes and increase the data quantities that are so central to the improvement of machine learning systems.
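The idea behind federated learning can be sketched simply: each client improves the model on its own data, and only model parameters, never the raw data, travel back to a server to be averaged. The functions and numbers below are illustrative, not Google’s implementation:

```python
def local_update(weights, client_data, lr=0.1):
    # One gradient step pulling the (scalar) model toward this client's
    # data mean, standing in for a full local training pass.
    grad = sum(weights - x for x in client_data) / len(client_data)
    return weights - lr * grad

def federated_round(global_weights, clients):
    # Each client trains locally on its own private data...
    updates = [local_update(global_weights, data) for data in clients]
    # ...and the server averages the returned parameters, never
    # seeing the underlying data itself.
    return sum(updates) / len(updates)

clients = [[1.0, 2.0], [3.0, 5.0], [4.0, 6.0]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
```

Over repeated rounds the shared model converges toward a consensus across all clients’ data, while each dataset stays on its own device.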
This is a significant development, as Google has a history of providing open access to similar programs.
Google has also launched a new initiative with the aim of adding a modicum of humanity into its artificial intelligence systems.
This is precisely the element that has been lacking so far, as so much of content marketing depends on the communication of ideas and emotions.
We are some distance from a marketing ecosystem where AI-generated content is attainable at a large scale, but recent progress should make us optimistic about the potential for this technology.
In summary: how close are we to ‘AI-generated content’?
Much of what occurs today in the arena of ‘AI-generated content’ could be defined more appropriately as content curation.
Without the initial content (created by a person), this process is stultified and becomes an inward spiral with diminishing returns.
It will be in the unification of intelligence across the sensory, the mental, the mechanical, and the interpersonal that AI will reach its apotheosis, as we wrote recently.
If AI systems attain the capacity to think creatively and independently, there is no reason why this skill would not be applied to content generation.
The most intriguing area of this discussion lies in the possibility for collaboration. The potential for ‘extended intelligence’ that maximizes the strengths of humans and machines does sound fanciful, but there are hints already that this could come to fruition.
For inspiration, we can look to the 1993 novel Just This Once, written by a Macintosh IIcx nicknamed ‘Hal’ in collaboration with its programmer, Scott French. Though much derided for its quality (or lack thereof) at the time, it now seems quite prescient in its attempts to bring together the skills of a computer and a human author.
This is significant when we bear in mind that artificial intelligence is not confined by the limits of human intelligence. AI may mimic elements of how we process information or make decisions, but it is concerned primarily with solving problems. As a result, it can discover novel solutions to problems that we could never consider.
Although progress in AI-related fields seems assured, its pace and direction are undecided. So, too, is our place within this landscape. The lines of demarcation between human and machine are constantly shifting, and that matters.
We need to know what this technology can do, what it can’t do, and where we should position ourselves to capitalize on upcoming possibilities for collaboration with AI systems.
This affects our hiring plans too, of course. As an article in the Harvard Business Review quite accurately posited:
“What matters now is not the skills you have but how you think. Can you ask the right questions? Do you know what problem you’re trying to solve in the first place?”
Perhaps it is not our creativity that will survive unchallenged in the AI revolution, but rather our innate and inexhaustible curiosity.
Original article published here