
Confirmation Bias: Why You Should Seek Out Disconfirming Evidence

“What the human being is best at doing is
interpreting all new information
so that their prior conclusions remain intact.”

— Warren Buffett

***

The Basics

Confirmation bias is our tendency to cherry-pick information that confirms our pre-existing beliefs or ideas. It is also known as myside bias or confirmatory bias. Two people with opposing views on a topic can see the same evidence and still come away feeling validated by it. Confirmation bias is most pronounced in the case of ingrained, ideological, or emotionally charged views.

Failing to interpret information in an unbiased way can lead to serious misjudgements. By understanding this, we can learn to identify it in ourselves and others. We can be cautious of data which seems to immediately support our views.

When we feel as if others ‘cannot see sense’, a grasp of how confirmation bias works can enable us to understand why. Willard V. Quine and J. S. Ullian described this bias in The Web of Belief:

The desire to be right and the desire to have been right are two desires, and the sooner we separate them the better off we are. The desire to be right is the thirst for truth. On all counts, both practical and theoretical, there is nothing but good to be said for it. The desire to have been right, on the other hand, is the pride that goeth before a fall. It stands in the way of our seeing we were wrong, and thus blocks the progress of our knowledge.

Experimentation beginning in the 1960s revealed our tendency to confirm existing beliefs rather than question them or seek new ones. Other research has revealed how single-mindedly we defend the ideas we already hold.

Like many mental models, confirmation bias was first identified by the ancient Greeks. In The Peloponnesian War, Thucydides described the tendency thus:

For it is a habit of humanity to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy.

Why we use this cognitive shortcut is understandable. Evaluating evidence (especially when it is complicated or unclear) requires a great deal of mental energy, and our brains prefer to take shortcuts. This saves the time needed to make decisions, in particular when we are under pressure. As many evolutionary scientists have pointed out, our minds are ill-equipped to handle the modern world. For most of human history, people encountered very little new information during their lifetimes, and decisions tended to be survival-based. Now we constantly receive new information and have to make numerous complex choices each day. To avoid being overwhelmed, we have a natural tendency to take shortcuts.

In The Case for Motivated Reasoning, Ziva Kunda wrote, “we give special weight to information that allows us to come to the conclusion we want to reach.” Accepting information that confirms our beliefs is easy and requires little mental energy. Contradictory information, by contrast, causes us to shy away, grasping for a reason to discard it.

The confirmation bias is so fundamental to your development and your reality that you might not even realize it is happening. We look for evidence that supports our beliefs and opinions about the world and exclude evidence that runs contrary to them… In an attempt to simplify the world and make it conform to our expectations, we have been blessed with the gift of cognitive biases.

How Confirmation Bias Clouds our Judgement

“The human understanding when it has once adopted an opinion draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects.”
— Francis Bacon

***

The complexity of confirmation bias partly arises from the fact that it is impossible to overcome it without an awareness of the concept. Even when shown evidence to contradict a biased view, we may still interpret it in a manner which reinforces our current perspective.

In one Stanford study, researchers recruited participants, half of whom favored capital punishment and half of whom opposed it. Both groups read details of the same two fictional studies. Half of the participants were told that one study supported the deterrent effect of capital punishment and that the other undermined it; the other half were told the reverse. At the conclusion of the study, the majority of participants stuck to their original views, pointing to the data that supported them and discarding the data that did not.

Confirmation bias clouds our judgement. It gives us a skewed view of information, even of straight numerical figures. Understanding this cannot fail to transform a person’s worldview, or rather, their perspective on the world. Lewis Carroll stated “we are what we believe we are”, but it seems that the world is also what we believe it to be.

A poem by Shannon L. Adler illustrates this concept:

Read it with sorrow and you will feel hate.
Read it with anger and you will feel vengeful.
Read it with paranoia and you will feel confusion.
Read it with empathy and you will feel compassion.
Read it with love and you will feel flattery.
Read it with hope and you will feel positive.
Read it with humor and you will feel joy.
Read it without bias and you will feel peace.
Do not read it at all and you will not feel a thing.

Confirmation bias is somewhat linked to our memories (much like the availability bias). We have a penchant for recalling evidence that backs up our beliefs. However neutral the original information was, we fall prey to selective recall. As Leo Tolstoy wrote:

The most difficult subjects can be explained to the most slow-witted man if he has…

The Household Chemical That Might Be Killing Cats


Evolution can be a hard process to figure out. Rooting out pseudoscience and conjecture from credible science proves difficult. Understanding the biological utility of certain features might take decades to reverse engineer; speculating on direct ancestral lines is equally confounding. Family trees look more like a tangle of roots than the pretty flowers blossoming atop.

Oxford doctoral student Carlos Driscoll took a decade to work out the ancestral lines of the domesticated house cat. When his data were all in, he was shocked: every single cat today derives from one line, Felis silvestris. It appears that the agricultural revolution roughly 11,000 years ago changed not only humanity but feline life as well.

We often consider domestication a forced process, though it appears cats chose us. If the goal is continuing the genetic line, their success rate is incredible: six hundred million cats roam the earth today. More cats are born each day in the United States than there are lions remaining in the wild, writes journalist Abigail Tucker, who puts the lion count at twenty thousand.

This does not bode well for lions, or cheetahs, or panthers, or any of the remaining felines left in the few forests supporting them. House cats are another story. When humans stopped their nomadic ways, they formed large-scale farms, and cities started popping up. Cats appear to have said, well, fine, I’ll take this box here provided you also feed me and scratch me when needed, an arrangement that sums up our relationship today.

Yet for a long time humans were meat for cats. Unlike other animals that eat a variety of foods, cats are hypercarnivores. They don’t have the stomach for vegetables. They’ll die if deprived of protein, plenty of it; that’s what nature does to an animal with no predators. Your finicky cat has a genetic history of food snobbery.

As much as cats have taken over the internet with the same voracity with which they conquered our homes, we’re not always kind to them. Take feline hyperthyroidism, as reported in The New York Times last week. The disease was unheard of just forty years ago; today roughly 10 percent of senior cats suffer from it, writes Emily Anthes.

A steady drumbeat of research links the strange feline disease to a common class of flame retardants that…

3 types of artificial intelligence, but only 2 are valid

For all of the visions of robots taking over the world, stealing jobs and outpacing humans in every facet of existence, we haven’t seen many cases of AI drastically changing industries or even our day-to-day lives just yet. For this reason, media and AI deniers alike question whether true broad-scale AI even exists. Some go as far as to conclude that it doesn’t.

The answer is a bit more nuanced than that.

Current AI applications can be broken down into three loose categories: Transformative AI, DIY (Do It Yourself) AI, and Faux AI. The latter two are the most common and therefore tend to be what all AI is judged by.

The everyday AI applications we’ve seen most of so far are geared toward accessing and processing data for you, making suggestions based on it, and sometimes even executing very narrow tasks. Alexa turning on your music, telling you what’s happening in your day, and reporting the weather outside is a good example. Another is your iPhone predicting a phone number for a contact you don’t already have saved.

While these applications might not live up to the image of AI we have in our heads, it doesn’t mean they’re not AI. It just means they’re not all that life-changing.

The kind of AI that will “take over the world” — or at least, have the most dramatic effect on how people live and work — is what I think of as Transformative AI. Transformative AI turns data into insights and insights into instructions. Then, instead of simply delivering those instructions to the user so he or she can make more informed decisions, it gets to work, autonomously carrying out an entire complex process on its own, based on what it’s learned and continues to learn, along the way.

This type of AI isn’t yet ubiquitous. The most universally known manifestation of it is likely the self-driving car. Self-driving cars are an accessible example of what it looks like for a machine to take in constantly changing information, process and act on it, and thereby completely eliminate the need for human participation at any stage.

Driving is not a fixed process that is easily automated. (If it were, AI wouldn’t be necessary.) While there is indeed a finite set of actions involved in driving, the data set the AI must process shifts every single time the passenger gets into the car: road conditions, destination, route, oncoming and surrounding traffic, street lanes, street closures, proximity to neighboring vehicles, turning radii, a pedestrian stepping out in front of the car, and so on. The AI must be able to take all of this in, make a decision about it, and act on it right then and there, just like a human driver would.
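That take-it-in, decide, act cycle can be sketched in a few lines of illustrative Python. The sensor names and actions below are hypothetical placeholders, not any real autonomy stack; the point is only that the same loop produces a different decision every cycle as the inputs shift:

```python
def drive_step(sensors):
    """One sense-decide-act cycle: map the current road state to an action.

    `sensors` is a hypothetical snapshot of the constantly shifting inputs
    the article lists: pedestrians, traffic gaps, lane closures, and so on.
    """
    # Safety-critical checks run first, precisely because conditions
    # change every single cycle.
    if sensors.get("pedestrian_ahead"):
        return "brake"
    if sensors.get("gap_to_lead_car_m", 100) < 10:
        return "slow_down"
    if sensors.get("lane_closed"):
        return "change_lane"
    return "maintain_speed"

# The same car, three consecutive moments, three different decisions.
moments = [
    {"pedestrian_ahead": True},
    {"gap_to_lead_car_m": 8},
    {"lane_closed": False},
]
decisions = [drive_step(m) for m in moments]
print(decisions)  # ['brake', 'slow_down', 'maintain_speed']
```

A real system replaces these hand-written rules with learned models, but the structure — fresh inputs in, an immediate action out, repeated continuously — is the same.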

This is Transformative AI, and we know it’s real because it’s already happening.

Now, imagine the implications of this technology…

Humans May Have Accidentally Created a Radiation Shield Around Earth

NASA spends a lot of time researching the Earth and its surrounding space environment. The Van Allen belts are one feature of particular interest, so much so that NASA built special probes to study them! Those probes have now revealed a protective bubble that researchers believe has been generated by human transmissions in the VLF range.

VLF transmissions cover the 3–30 kHz range, so bandwidth is highly limited. VLF hardware is primarily used to communicate with submarines, often to remind them that, yes, everything is still fine and there’s no need to launch the nukes yet. It’s also used for navigation and for broadcasting time signals.
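To see why that bandwidth is so tight, note that the entire VLF band spans only 27 kHz, and the wavelengths involved (λ = c/f) are enormous, which is part of why the signals penetrate seawater but carry so little data. A quick back-of-envelope check in Python, using nothing beyond the band edges stated above:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency: lambda = c / f."""
    return C / freq_hz

# The VLF band runs from 3 kHz to 30 kHz.
low, high = 3e3, 30e3
print(f"Total VLF bandwidth: {(high - low) / 1e3:.0f} kHz")        # 27 kHz
print(f"Wavelength at 30 kHz: {wavelength_m(high) / 1e3:.1f} km")  # ~10 km
print(f"Wavelength at 3 kHz:  {wavelength_m(low) / 1e3:.1f} km")   # ~100 km
```

Ten- to hundred-kilometer wavelengths are why VLF transmitters need antenna installations spanning entire valleys, and why the channel can carry little more than slow telegraphy-style messages.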

It seems that this…

How ‘Alien: Covenant’ Does What the Original Refused To

Part of what made the first two films great was not knowing exactly what was going on.

[Warning: Spoilers ahead for Alien: Covenant]

Alien is widely seen — for good reason — as one of the great science-fiction films of all time, as well as one of the great horror films. Nearly 40 years later, Ridley Scott has directed the latest entry in the franchise, Alien: Covenant. So, with Covenant in theaters, it’s worth discussing one of many reasons why the original Alien succeeds, because it’s a big reason why (for me at least) Covenant doesn’t: it embraces the mystery of the situation instead of explaining everything.

Both the 2012 film Prometheus and Alien: Covenant try to answer the questions that the first Alien and James Cameron’s 1986 sequel Aliens steadfastly avoided. Where did the Xenomorphs come from? What is that mysterious ship that Ripley and the crew of the Nostromo explore? Where did those eggs come from if the planet is deserted? And how is it possible for these bloodthirsty aliens to evolve so fast? By taking place before the events of the 1979 film, Scott’s pair of Alien-adjacent films set out to resolve these burning questions without realizing that they don’t need to be answered.

Alien and Aliens are incredible examples of how science fiction, horror, and action can all blend together in a genuinely thrilling combination. Somehow, they’ve both stood the test of time without telling audiences that the Xenomorphs were created by a self-aware robot who so badly wants to create something that he chooses to create a “perfect” killing machine. They get by without the revelation that the mysterious spacecraft from Alien belonged to the Engineers, humanoid aliens responsible both for creating humanity and for intending to destroy it at a later date. There’s a clear reason why the original films don’t answer these questions: they don’t have to.

Covenant attempts to tie some of these strands together: we find out that once Michael Fassbender’s sociopathic…

3D Printed Bionic Skin Will Help Humans and Machines Merge

Article Image

A new 3D-printed “bionic skin” developed at the University of Minnesota is a stretchable electronic fabric that would allow robots to gain tactile sensation. The results of the study were published in the journal Advanced Materials. Scientists have been dreaming of artificial skin since the 1970s. Thanks to funding from a division of the National Institutes of Health, we are much closer to making it a reality.

Michael McAlpine, a mechanical engineering associate professor at the university, was the lead researcher on the study. In 2013, while at Princeton, McAlpine gained international attention for 3D printing nanomaterials to fashion a “bionic ear.” For this project, he enlisted graduate students Shuang-Zhuang Guo, Kaiyan Qiu, Fanben Meng, and Sung Hyun Park.

An amputee with a natural-looking robotic arm. This could change the calculus on options offered to amputees. Getty Images.

Dr. McAlpine and his team created a unique 3D printer unlike any other in the world. The device has four nozzles, each with a different function. To print on skin, the surface is first carefully scanned for its contours and shape; the printer can follow any curvature. Once the surface area has been mapped out precisely, printing can begin. McAlpine and colleagues were able to print a pressure sensor on a mannequin’s hand.

The base of the “skin” is silicone, which comes out of the nozzle as a gel and contains silver particles to conduct electricity. A coiled sensor was then printed in the center, and the piece was enclosed in further silicone layers. Above and below the sensor lay electrodes printed in conductive ink. Finally, a temporary layer was printed to hold everything together while it solidified. The whole structure was just 4 millimeters wide and took mere minutes to print. Once it dried, the last layer was washed away, revealing a…

Speech, Action, and the Human Condition: Hannah Arendt on How We Invent Ourselves and Reinvent the World


“An honorable human relationship — that is, one in which two people have the right to use the word ‘love’ — is a process, delicate, violent, often terrifying to both persons involved, a process of refining the truths they can tell each other,” Adrienne Rich wrote in her piercing 1975 meditation on how relationships refine our truths. But although our words may be the vehicle of our truths, their seedbed is action — we enact the truth of who and what we are as we move through the world. That’s what Anna Deavere Smith spoke to in her advice to young artists: “Start now, every day, becoming, in your actions, your regular actions, what you would like to become in the bigger scheme of things.”

That indelible relationship between speech and action in an honorable existence is what Hannah Arendt (October 14, 1906–December 4, 1975) examines throughout The Human Condition (public library) — the immensely influential 1958 book that gave us Arendt on the crucial difference between how art and science illuminate life.

Hannah Arendt by Fred Stein, 1944 (Photograph courtesy of the Fred Stein Archive)

Arendt examines the dual root of speech and action:

Human plurality, the basic condition of both action and speech, has the twofold character of equality and distinction. If men were not equal, they could neither understand each other and those who came before them nor plan for the future and foresee the needs of those who will come after them. If men were not distinct, each human being distinguished from any other who is, was, or will ever be, they would need neither speech nor action to make themselves understood.

It is useful here to remember that Arendt was living, and therefore writing, nearly half a century before Ursula K. Le Guin unsexed “he” as the universal pronoun — Arendt’s “man,” of course, speaks to and for humanity in its entirety. In fact, she examines the vital complementarity of the universal and the unique. With an eye to the difference between human distinctness and otherness, she writes:

Otherness, it is true, is an important aspect of plurality, the reason why all our definitions are distinctions, why we are unable to say what anything is without distinguishing it from something else. Otherness in its most abstract form is found only in the sheer multiplication of inorganic objects, whereas all organic life already shows variations and distinctions, even between specimens of the same species. But only man can express this distinction and distinguish himself, and only he can communicate himself and not merely something—thirst or hunger, affection or hostility or fear. In man, otherness, which he shares with everything that is, and distinctness, which he shares with everything alive, become uniqueness, and human plurality is the paradoxical plurality of unique beings.

Speech and action reveal this unique distinctness. Through them, men distinguish themselves instead of being merely distinct; they are the modes in which human beings appear to each other, not indeed as physical objects, but qua men. This appearance, as distinguished from mere bodily existence, rests on initiative, but it is an initiative from which no human being can refrain and still be human.

Art by E.B. Lewis from Preaching to the Chickens by Jabari Asim — the illustrated story of how civil rights leader John Lewis’s humble childhood shaped his heroic life at the intersection of speech and action

Not only is the interplay of speech and action our supreme mechanism of self-invention and self-reinvention, but, Arendt suggests, in inventing a self we are effectively inventing the world in which we want to live:

With word and deed we insert ourselves into the human world, and this insertion is like a second birth, in which we confirm and take upon ourselves the naked fact of our original physical appearance. This insertion is not forced upon us by necessity,…

Nuclear weapons made from guinea pigs: Is there any truth to this?

Like them or not, most people see guinea pigs as cute, small, fluffy mammals that kids usually keep as pets. However, there are reports circulating online claiming that these tiny creatures have been made into weapons of mass destruction. Is there any truth to such claims?

Why does it have to be guinea pigs? And could this type of weapon wipe out guinea pigs or, worse, our own existence?

The truth is that guinea pigs, along with rats, mice, hamsters, rabbits, cats, and dogs, have been part of atomic studies that aim to improve the well-being of the human race. But as nuclear weapons?

No, there is no such thing as deadly bombs made from these cute, tiny animals.

Cute, cuddly, peaceful guinea pigs no longer exist in the wild. March is Adopt-a-Rescued-Guinea-Pig Month.

However, animals and humans [pdf] have been used as guinea pigs – in the metaphoric sense of being test subjects – in experiments conducted to discover the harmful effects of radiation or to locate its path through the human body.

The worst part of it was that, in the early days of nuclear research, the tests were done without letting their subjects know of the dangers of the experiment.

The Nuclear Guinea Pigs

The use of nuclear guinea pigs was especially evident during the Cold War. The United States conducted hundreds of nuclear experiments from the 1940s to the 1960s, events compiled in the report “American Nuclear Guinea Pigs: Three Decades of Radiation Experiments on U.S. Citizens.” One of the most well-known test sites was in Nevada, where citizens were allegedly used as nuclear guinea pigs.

The nuclear tests were deemed important for the sake of national security, but the airbursts produced dangerous, toxic radioactive fallout. Officials thought that the particles would be contained within the test site’s 125-mile…

What businesses are failing to see about AI

Robots will wipe out 6 percent of existing U.S. jobs by 2021, according to a new report from market research firm Forrester. But that doesn’t mean unemployment lines will soon wrap around the block.

Even the most sophisticated algorithms and machine learning technologies can’t replicate human creativity and ingenuity. As machines take over rote tasks, employees will have more time for work that demands uniquely human talents. In the age of widespread artificial intelligence, the most successful businesses will be the ones that use AI to help employees make smarter, faster, more informed decisions.

Artificial intelligence can make humans vastly more productive. When machines take care of crunching data, conducting micro-analysis, and managing workflow, humans are free to focus on the bigger picture.

Imagine a marketing team huddled around a table, plotting strategy. Right now, if they have a question, they might have to ask an analyst and wait hours or days for a response.

In a few years, that team will be able to ask an AI chatbot and get an answer within seconds. That will allow them to brainstorm more productively. It’s still the humans’ job to come up with a brilliant marketing strategy — the robots just help them do it quicker.

Or consider Kensho, a financial analytics AI system. According to Harvard Business Review, the program can answer 65 million possible question combinations — even off-the-wall ones like “Which cement stocks go up the most when a Category 3 hurricane hits Florida?”

Kensho doesn’t replace human wealth managers, who still must use their reasoning and intuition to invest wisely. But the program ensures they make the most informed decisions possible.

The AI revolution will also enable companies to predict and preemptively respond to customers’ needs.

Consider cable companies. If they could detect when a customer experiences a connection problem or has a bad viewing experience,…

Machine Learning Will Help Us Fix What’s Broken Before It Breaks

Article Image

Any fan of recent sci-fi movies and TV has likely seen some genius designer model an amazing new invention virtually before building the thing in the physical world. This is not as futuristic as it may seem: high-tech industries have lately become infatuated with the idea of “digital twins.”

A digital twin is an exact virtual replica of a physical device. It’s a computer model that operates identically to the physical version. “The ultimate vision for the digital twin is to create, test and build our equipment in a virtual environment,” John Vickers, manager of NASA’s National Center for Advanced Manufacturing, tells Forbes. “Only when we get it to where it performs to our requirements do we physically manufacture it. We then want that physical build to tie back to its digital twin through sensors so that the digital twin contains all the information that we could have by inspecting the physical build.” GE’s Ganesh Bell tells Forbes, “For every physical asset in the world, we have a virtual copy running in the cloud that gets richer with every second of operational data.”


One of the reasons people are so excited about digital twins is that they can potentially detect problems virtually before they have the chance to happen in the real world. The hope is that by combining the digital twin with predictive machine learning, downtime for devices large and small — like the countless proliferating Internet of Things (IoT) devices — can become a rarity, with problems resolved before they even occur.
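At its simplest, that “fix it before it breaks” idea amounts to comparing live sensor readings against what the twin says they should be, and flagging drift. The sketch below is a toy illustration under stated assumptions — the sensor names are made up, and real systems use learned models rather than a fixed fractional tolerance:

```python
def predict_failure(twin_expected, sensor_actual, tolerance=0.1):
    """Flag sensors whose readings drift from the digital twin's prediction.

    `twin_expected` and `sensor_actual` are hypothetical readings keyed by
    sensor name. A reading more than `tolerance` (as a fraction) away from
    the twin's prediction is flagged for maintenance before outright failure.
    """
    flagged = []
    for name, expected in twin_expected.items():
        actual = sensor_actual.get(name, expected)
        if expected and abs(actual - expected) / abs(expected) > tolerance:
            flagged.append(name)
    return flagged

# The twin predicts nominal behavior; the physical device is drifting.
twin = {"bearing_temp_c": 60.0, "vibration_mm_s": 2.0}
readings = {"bearing_temp_c": 61.0, "vibration_mm_s": 3.5}
print(predict_failure(twin, readings))  # ['vibration_mm_s']
```

Here the bearing temperature is within tolerance, but the vibration reading has drifted 75 percent above the twin’s prediction, so maintenance gets scheduled before the part actually fails — the rarity-of-downtime scenario the paragraph above describes.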


Gartner identified digital twins as one of the top ten tech trends of 2017 back in October 2016. In May 2017, there’s still plenty of excitement about them, but real-world issues have emerged that impede a wholesale shift to the technology. While a digital twin can be fantastic for an individual mass-produced high-end product — Tesla keeps a digital twin of every one of the cars it sells, for example, and…