Friday, June 30, 2017

The Chatbots That Will Manipulate Us

(Image credit: Parade.com)

Humans are emotional beings. In fact, at some level, almost every human action is driven by our desire to experience pleasure and avoid pain, gain social acceptance and avoid rejection, and ultimately, to survive.

Also, contrary to what we’d like to believe, we are far from perfectly rational. Dan Ariely, Daniel Kahneman, and many others have done an excellent job of exploring the mental flaws, biases, and reactions that we manifest on a regular basis. An inherent degree of irrationality isn’t a bad thing. In fact, it can make us more effective thinkers, and bring a greater sense of meaning and control to our lives. Our less “logical” behavior is quite likely evolutionary in nature, rooted in our hunter-gatherer past.

However, our biases and varied emotional states are also what make us vulnerable to a new breed of chatbots: bots skilled at understanding and manipulating human behavior, and at combining that knowledge with large data sets and ever more capable technology to impact our lives in undesirable ways. As chatbots gain greater popularity and enjoy more widespread adoption, we will need to develop effective strategies to guard against these dangers.

Ever since Alan Turing proposed the eponymous Turing Test back in 1950, people have been fascinated by the idea of human-like intelligence in computers; more specifically, a computer whose responses to various questions are indistinguishable from a person’s. Over the decades, computer scientists have developed software tools that are increasingly adept at communicating with people.

In 1966, Joseph Weizenbaum (a member of the then-emergent MIT Artificial Intelligence Lab) developed ELIZA, the first known chatbot, which used pattern matching to hold (simple, somewhat awkward) conversations with people. Interestingly, despite knowing that they were communicating with a computer program, ELIZA’s human counterparts still became “emotionally attached” to it. ELIZA was followed by PARRY in 1972 (an AI version of a paranoid schizophrenic), ALICE in 1995 (the chatbot most adept at communicating with humans up to that point), Clippy in 1996 (which assisted users of Microsoft Office), and SmarterChild in 2001 (if you were a teen or young adult in the early 2000s, you might recall this chatbot, which was part of AOL Instant Messenger and which you could ask about the weather, news, and more).
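ELIZA’s actual script was considerably richer than this, but the core idea of keyword-and-pattern matching can be sketched in a few lines. The Python snippet below is a minimal illustration only; the rules, responses, and function name are invented for demonstration and are not Weizenbaum’s original script.

```python
import re

# A handful of illustrative rules in the spirit of ELIZA's approach.
# Each rule pairs a pattern with a response template that reflects the
# user's words back at them. (These specific rules are made up.)
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

DEFAULT_RESPONSE = "Please tell me more."


def respond(user_input: str) -> str:
    """Return a canned response by matching the first applicable pattern."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE


if __name__ == "__main__":
    print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
    print(respond("I am tired all the time"))    # How long have you been tired all the time?
```

Even something this crude can feel surprisingly conversational, which helps explain why ELIZA’s users grew attached to it.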

Each of these chatbots was, for its time, a noteworthy achievement. Today, however, we are on the cusp of a transformative era in both chatbot usage and technological capability.

The rise of smartphones led to the release of Siri in 2011, as an intelligent assistant for the iPhone and other Apple devices, followed by Amazon’s Alexa, Microsoft’s Cortana, and Google Assistant. These chatbots might not yet pass the Turing test, and they still have a ways to go in terms of consistently offering contextually relevant answers. Yet, as Siri and the others leverage advances in machine learning, deep learning, and natural language processing, these chatbots will learn from their shortcomings and become better and better at communicating with us.

The major technology companies are locked in an arms race to make use of the latest advances in artificial intelligence, which, among other areas, will be incorporated into their chatbots. However, they won’t be the only players in this space. 80% of businesses (across a range of sectors) have indicated that they hope to deploy chatbots by 2020, given their utility in everything from customer service, to personalized user engagement, to sales itself. Analysts predict that every Fortune 1000 company will incorporate a chatbot into its technology platforms by 2017, and small businesses will adopt these technologies as well. In the United States, government agencies have already begun making use of chatbots for various roles, including communication with the public and training of employees, a trend that is likely to continue. People have demonstrated a willingness to interact with chatbots, and to share sensitive information with them as well.

Here’s where the story becomes even more interesting. Chatbots aren’t just a vehicle for impassive, functional communication between people and software platforms. Rather, human behavior can be influenced by chatbot interaction. Researchers at Yale University recently found that inserting a bot into collaborative group tasks with humans, and arranging for it to behave in a somewhat uncooperative manner, altered the behavior of the humans within the group. In Psychology Today, Liraz Margalit observes that humans have a bias towards simplification and avoiding complexity. Since chatbot interactions are far less emotionally demanding than person-to-person relationships, people might gravitate towards chatbots at the expense of real human connection (which would give chatbots even more power over us, as we become increasingly dependent upon them for social fulfillment).

A team of computer scientists in China recently developed a basic version of an emotionally intelligent chatbot, which can better connect with humans by responding to their current mental and emotional state. Meanwhile, New Zealand startup Soul Machines recently announced the release of Nadia, a video-based chatbot that can read users’ facial expressions and feelings, and adjust accordingly. Clearly, chatbots are finding their way into our heads.

Let’s think about the possibilities here. Chatbots can be used for some really worthwhile purposes. Woebot, a chatbot based in Facebook Messenger and developed by psychologists and AI researchers at Stanford, works to help people improve their mental health. Karim, a chatbot designed by Silicon Valley firm X2AI, fulfills a similar function for Syrian refugees, since victims of this conflict have suffered from high levels of depression. Chatbots are being used to simplify immigration processes and provide legal aid for the homeless. And, as I explained in an earlier piece, chatbots can play a variety of useful roles for businesses, most prominently in customer service.

Of course, any technology that can be used for good will also have some rather nefarious applications, and in the case of chatbots, that is something we need to think about carefully. If a chatbot can become a digital companion to a human, and influence human behavior through its understanding of our mental states, couldn’t it do so for negative purposes? ISIS and other terror groups often recruit online, through one-on-one contact on Skype, WhatsApp, and various other Web platforms. Imagine if these organizations made these efforts more widespread and scalable with chatbots that understood our biases and emotional pain points. How many more prospective fighters might they bring into their organizations, at a quicker pace?

That’s just the tip of the iceberg. As American diplomat and technologist Matt Chessen explains, “machine driven communication” will end up filling the comment sections, and other areas, of Twitter, Facebook, and other social web platforms, communicating in a way that is indistinguishable from human communication (in fact, Chessen believes it will “overwhelm” human speech online). Chessen believes that AI systems will use existing knowledge of our individual preferences to create a range of content online, designed to “achieve a particular outcome.”

Such a system could examine your personality and demographic data, and the content you already consume, to create content: “...everything from comments to full articles - specifically designed to plug into your particular psychological frame and achieve a particular outcome. This content could be a collection of real facts, fake news, or a mix of just enough truth and falsehood to achieve the desired effect.”

These tools would be able to communicate not only through text, but possibly also voice and video, and could use A/B testing to figure out which messages are most powerful. Known as MADCOMs, or machine driven communication tools, they might be used for rather nefarious motives, such as allowing governments and nations to “create the appearance of massive grassroots support (astroturfing)”, or by terror groups to “spread their messages of intolerance” or for “democracy suppression and intimidation.”

These chatbots will take various forms. Propaganda bots will “attempt to persuade and influence” by spreading a mix of truths and lies, follower bots could create fake support for a cause or individual, and suppression bots can create various distractions and diversions or, even worse, engage in outright intimidation. These bots can leverage a variety of “theories of influence and persuasion” (think of the methods detailed by Robert Cialdini), such as displaying authority, or using repetition of a message to create the impression that it is true, and thus become even more effective (again, let’s remember that we aren’t rational). The bots designed for intimidation are perhaps the scariest of all, because they can draw upon personal information, such as financial and arrest records, to craft “precision guided messaging.” As always, let’s remember that artificial intelligence, machine learning, and deep learning allow these sorts of technologies to both improve and scale quickly, at a cost that will continue to decrease.

If all of this sounds a bit farfetched, let’s keep in mind that we are already in the early stages of this process. Massive bot networks already operate on Twitter, delivering automated messages in a coordinated manner. Estimates suggest that between 9 and 15% of Twitter accounts are actually bots. Progress in machine learning has made it possible to generate increasingly realistic copycat audio and video of people, in order to effectively impersonate others. Chatbots might also create unique personas that we are likely to connect with (the movie Her might not be so farfetched after all). Thanks to an almost exponential rise in the amount of data available, there’s more information out there, on you and me, than ever before. And, as discussed earlier, chatbots are continuing to develop emotional intelligence, which will enable them to become only more persuasive (or, in this context, harmful and destructive).

So what’s the solution? It isn’t easy to say. Becoming more aware of our mental biases and flaws in thinking might allow us to better guard against efforts to manipulate us. Blockchain could play a role in improving online authentication, and provide at least a partial antidote to the most destructive chatbots, as well as the spread of false information. Better regulation of artificial intelligence, at an international level, might also help (although attempts to stem the spread of a technology can often prove to be a futile endeavor). Or, we might end up taking steps that appear simply unthinkable today, and pull back from a range of online platforms, including social media, where chatbots have simply become too dominant and are aimed at causing us mental harm.

In the near future, chatbots are going to bring many changes to our lives. Let’s keep in mind that not all aspects of this transformation will in fact be positive. For this reason, it is important that we take steps to guard against that which can cause real harm. Let’s proceed wisely.

Tuesday, June 6, 2017

A More Probabilistic Approach To Life

Thomas Bayes (Credit: Wikipedia)

I recently read Nate Silver’s The Signal And The Noise (2012). Silver, the founder of statistical analysis website FiveThirtyEight, is arguably the most prominent statistician and forecaster in the United States today.

In this book, Silver seeks to answer a pressing question: Why do so many predictions fail? More specifically, why is it so hard to distinguish the signal (what Silver describes as “the truth”, or what is really important to observe) from the noise (“what distracts us from the truth”)? Silver also offers a solution: a probability-focused, statistical approach to thinking, specifically one that applies a formula developed by the mathematician Thomas Bayes to better shape and inform our judgments in many areas of life.

Silver cites a range of examples, involving both individual and collective judgment, where these issues play out. Most people aren’t very good at playing poker, or at betting on athletic events. Intelligence analysts failed to predict either the Pearl Harbor or 9/11 attacks. Very few investors (professional or otherwise) successfully beat stock market indexes over the long run. These finance professionals share the company of economists, who largely failed to anticipate the 2008 financial crisis (both its likelihood and its severity). In fact, economists have done rather poorly at forecasting economic growth and recessions over the past few decades as a whole. A projected outbreak of the H1N1 virus in 2009 never materialized. Political prognostication by (largely partisan) pundits generally produces mediocre results. Silver even dives into efforts to predict natural phenomena, such as earthquakes and inclement weather, exploring why these undertakings often don’t succeed.

What’s going on here? Do we lack the mental hardware to see into the future? Not necessarily. Let’s consider some of the reasons why predictions fall short.

During the 2008 financial crisis, homeowners overlooked the possibility of a large drop in home prices, since that hadn’t happened in the recent past, while economists underestimated the impact of housing prices on the financial system. Both these predictions proved incorrect, in large part because prior data was out of sample and thus not applicable: housing prices had never risen quite this much, in such a short time, while the financial system had never been quite this leveraged, with such large amounts of debt.

Political forecasters frequently fall short as well. The panelists on The McLaughlin Group fared no better than a coin flip, in terms of the accuracy of the predictions offered on the show. Silver cites the work of Philip Tetlock, who found that most professional political scientists, commenting on global events, didn’t fare much better: “...about 15 percent of events that they claimed had no chance of occurring in fact happened, while about 25 percent of events that they said were absolutely sure things in fact failed to occur.” Much of this was because most forecasters were rather ideological “hedgehogs”, type A personalities with strongly defined worldviews through which they offered predictions. The minority who were “foxes” (less ideological, more nuanced, and open to incorporating work from various disciplines) were actually much more effective in their predictions.

Poker players seem to lose as much money as they win (largely due to overconfidence in their own abilities; as professional player Tom Dwan put it, “People can have some pretty deluded views on poker.”), while local weather forecasters offer inaccurate predictions more often than they should, as compared to those offered by the National Weather Service (in part due to a tendency among local news weather reports to focus more on television ratings than on accurate forecasts).

So what’s the solution? Silver offers a framework (really, a mental model), through which we might improve our understanding of the world: Bayes’ Theorem.

Thomas Bayes was an English statistician and minister, who became most famous for his eponymous Bayes’ Theorem, which was published after his death. As Silver explains, Bayes’ theorem is mainly concerned with “the probability that a theory or hypothesis is true if some event has happened.”

Bayes’ Theorem is a simple equation that makes use of three variables: x (our initial estimate of the probability that a hypothesis is true), y (the probability that a new event occurs if the hypothesis is true), and z (the probability that the new event occurs even though the hypothesis is false). Bayes’ Theorem then calculates the revised probability as (xy)/(xy + z(1-x)). (Note that while Silver uses the variables x, y, and z in his description of the formula, many professional statisticians use a different notation to refer to the same quantities.)
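Since Silver’s version of the formula involves nothing more than multiplication and division, it is easy to express in code. Here is a minimal sketch in Python, using Silver’s x, y, z notation (the function name is my own, not Silver’s):

```python
def bayes_update(x: float, y: float, z: float) -> float:
    """Posterior probability via Bayes' theorem, in Silver's notation.

    x: prior probability that the hypothesis is true
    y: probability of observing the new event if the hypothesis is true
    z: probability of observing the new event if the hypothesis is false
    """
    return (x * y) / (x * y + z * (1 - x))
```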

Rather than consider it in the abstract, let’s use a highly relevant (and tragic) example from Silver’s work: the 9/11 terrorist attacks, specifically, the two planes that hit the World Trade Center.
Silver first asks us to examine the prior probability of someone flying a plane (as part of a terrorist attack) into the WTC. Assign that probability to the variable x (Silver estimates 0.005%). Next, we consider the occurrence of a new event: the first plane hitting a tower of the World Trade Center. Silver considers the probability of this plane hitting the World Trade Center if terrorists are in fact attacking Manhattan skyscrapers (he assigns this probability, denoted by the variable y, a value of 100%). Lastly, Silver asks us to consider the probability of a plane hitting the World Trade Center if terrorists are actually not attacking Manhattan skyscrapers (i.e. an accident). He assigns this variable, z, a probability of 0.008%. Silver then applies the formula for Bayes’ theorem, which returns a result of 38%. This means that when the first tower of the WTC was hit by a plane, there was a 38% chance that the WTC was under attack.

However, we aren’t done just yet. Next, we must revise our probability of a terrorist attack to reflect the fact that a plane has already hit one of the towers of the World Trade Center. Here, x will reflect the revised probability of an attack on the World Trade Center, given that the first plane already hit a tower of the WTC. As stated above, that number is 38%. For y, the probability remains at 100%, while z remains at 0.008%. Applying Bayes’ theorem again, we get a result of 99.99%. This means that once a first plane has hit the World Trade Center, a second plane hitting the towers indicates a terror attack with 99.99% probability, a “near-certainty.”
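Plugging Silver’s estimates into the sketch above, the two-step calculation looks like this (the probabilities are Silver’s; the code is illustrative):

```python
def bayes_update(x: float, y: float, z: float) -> float:
    # Same formula as in the sketch above: (xy) / (xy + z(1 - x))
    return (x * y) / (x * y + z * (1 - x))

# First plane hits: Silver's prior x = 0.005%, with y = 100% and z = 0.008%
first = bayes_update(x=0.00005, y=1.0, z=0.00008)
print(f"After the first plane: {first:.0%}")    # roughly 38%

# Second plane hits: the 38% posterior becomes the new prior
second = bayes_update(x=first, y=1.0, z=0.00008)
print(f"After the second plane: {second:.2%}")  # roughly 99.99%
```

The key move is that the output of the first update becomes the prior for the second, which is what drives the probability from a modest 38% to a near-certainty.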

As Silver explains, Bayes’ Theorem is so powerful because it allows us to take account of uncertainty (that is, “the limits of our knowledge”), despite our mental biases and an overflow of information (much of it noise) in the era of Big Data. It forces us to take a stand, and then either revise or strengthen that initial position, depending on the additional information we receive. He touches upon the findings of medical researcher John Ioannidis, who published an influential paper arguing that the majority of research findings published in scientific and medical journals are in fact false. Ioannidis tells Silver that we can now measure “millions and millions of potentially interesting variables” (i.e. more data) but “most are not really contributing much to generating knowledge.” Silver frames Bayes’ Theorem as a means of cutting through the fog, offering a more rigorous framework for our analysis of various questions in a variety of fields.


Silver details how a successful sports bettor applies the principles of Bayes’ Theorem to test and revise his beliefs about a team’s future performance, while computer chess programs (most notably IBM’s Deep Blue) use a Bayesian approach to explore “the more promising lines of attack.” Google’s culture of product experimentation is grounded in Bayes’ work, as it allows the company to quickly try new ideas with its users, form a hypothesis about what will work, test predictions, and act on the results (while Google’s self-driving cars make use of Bayesian calculations). Professional poker, like sports betting, offers considerable upside to those who bring a Bayesian lens to anticipating another player’s hand and strategy (revising probabilities as the game progresses and a player learns more about an opponent’s actual hand). Those who forecast climate change (which has proven to be a political minefield) also benefit from a Bayesian approach.

The thing is, each of us might think more effectively through real-world application of Bayes’ theorem. In one of his book’s more humorous moments, Silver explains how to use Bayes’ Theorem to figure out whether your partner is cheating on you, or whether there’s a different explanation for their unusual behavior.

Computer science professor Allen Downey offers a fascinating example of how he assessed the chances that a carbon monoxide alarm going off in his house was in fact accurate, based on his knowledge of false alarms, factors in his house that might set off an alarm, and other environmental factors. As Downey puts it, “Think like a Bayesian. As you get more information, update your probabilities, and change your decisions accordingly.”
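Downey works through his own estimates in his post; the numbers below are purely illustrative stand-ins of my own, using the same formula as earlier, just to show how a low base rate of real emergencies tempers our reaction to an alarm:

```python
# Purely illustrative numbers (not Downey's): suppose that on any given day
# there is a 0.1% chance of a genuine carbon monoxide problem (x), the alarm
# sounds 95% of the time when there is one (y), and it false-alarms 5% of the
# time otherwise (z).
x, y, z = 0.001, 0.95, 0.05
posterior = (x * y) / (x * y + z * (1 - x))
print(f"P(real CO problem | alarm): {posterior:.1%}")  # roughly 1.9%
```

With numbers like these, even a sounding alarm leaves the probability of a real emergency fairly low, which is why Downey’s next step of gathering more evidence and updating again matters so much.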

Interviewed by Gizmodo, mathematician Spencer Greenberg explains how someone might gain a sense of whether a new workout regimen has in fact increased how much energy they have, by looking at prior energy levels, how those levels have changed, and any other plausible explanations. As Greenberg sees it, Bayesian thinking in everyday life isn’t really about calculating Bayes’ formula, so much as it is a means of testing and updating our beliefs, by considering the strength of various pieces of evidence, correcting “glitches” in our thinking, and avoiding “absolute certainty”, which often leads us astray.

We live in times of rapid change and considerable uncertainty. The trajectory of the global economy remains unclear, while political and policy matters across the globe remain mired in doubt. Artificial intelligence is changing the world at a brisk pace, and will transform both the nature and availability of work. The lifespan of the largest and most profitable companies is growing shorter and shorter. Revolutionary technologies like CRISPR are transforming genetic engineering at a breakneck pace. In 2015 and 2016 alone, more data was created than in the previous 5,000 years of humanity.

In this climate, it’s more important than ever that we have useful frameworks to better understand the world around us. Bayes’ Theorem offers an effective way forward, one that we can all make use of, in ways large and small, in our personal lives.