Cats have a regal bearing that seems to have fascinated human beings throughout history. Bastet, a feline goddess, was an important deity in ancient Egypt, most commonly represented by a sitting cat staring straight ahead. A bronze statue of the worshipped feline dating back to 600 BC is one of the highlights of the American Museum of…
If those incredibly masculine pickup truck commercials seem tame to you, if you work at a job with more OSHA inspectors than actual employees, and if you’ve broken more smartphone screens than human bones at your local half pipe, you might be in the market for a rugged phone. These toughened-up models come with extra protection from impacts, water damage, and other otherwise lethal threats to more mortal smartphones.
Unfortunately, picking a smartphone that’s tough-as-nails often means compromising on some other features, and they’re hard to find even among today’s huge carrier retail selections. Here are the best picks on the market right now for each US carrier.
AT&T: Galaxy S7 Active
Ma Bell seems to be the only carrier that’s really interested in offering a premium “tough” phone as part of its retail lineup—it’s been the exclusive US vendor for almost all of the Active variants of the Samsung Galaxy line for years. The latest one is from back in 2016, the Galaxy S7 Active. It’s likely to be replaced in a few months with an updated S8 model, but even now it’s an excellent overall choice.
The Galaxy S7 Active is, more or less, the same phone as the Galaxy S7, in a super-protective shell. It has the same 2560×1440 Super AMOLED screen, the same Snapdragon 820 processor, the same 32GB of storage space and 4GB of RAM, the same fingerprint sensor in the home button, the same excellent 12-megapixel rear camera. It’s even running a relatively recent version of Android, 6.0, and should be upgraded to 7.0 at some point. The beefier plastic body does let Samsung cram in a 4000mAh battery (one third larger than the normal S7). On top of that, it can withstand five feet (1.5 meters) of water pressure for up to half an hour, any amount of dust or sand, and the polymer-reinforced screen is rated for a five-foot drop onto a flat surface without cracking.
You wouldn’t call the S7 Active “pretty,” but Samsung has put a lot of work into making the case much smaller and sleeker than similar rugged designs. It’s a combination of aluminum and plastic with reinforced impact zones at the corners. As it happens, I put this phone through its paces myself over at Android Police, subjecting it to a battery of tests including a full laundry cycle and 20-foot drops onto concrete. It survived, with a few scars and a lot of bragging rights.
The Galaxy S7 Active is $695 (though you can get one with a payment plan from AT&T), easily the most expensive phone on this list. But it’s also definitely the best, especially if you don’t want to compromise on specifications or creature comforts. (Note, though, that the CAT phone mentioned at the end of this post also works on AT&T, if you prefer something non-Samsung.) There’s a newer, cheaper option from LG, the “X Venture,” that has a similar rugged MIL-STD 810 body with mid-range specs. This phone has only a 1080p screen and just a Snapdragon 435/2GB combo, but the cheaper $330 price tag will be more appealing to anyone who needs a little durability on a budget.
T-Mobile: None (Buy Unlocked)
At the time of writing, T-Mobile doesn’t offer a single “ruggedized” phone in its retail lineup. You’ll have to buy an unlocked GSM-compatible phone yourself and stick your SIM card in. Options include the Galaxy S7 Active above (though you’ll have to buy it outright from AT&T and get the carrier lock removed) or the CAT models below. Previously the carrier sold the Kyocera DuraForce XD, an older and larger version of the PRO…
“What the human being is best at doing is
interpreting all new information
so that their prior conclusions remain intact.”
— Warren Buffett
Confirmation bias is our tendency to cherry-pick information which confirms pre-existing beliefs or ideas. This is also known as myside bias or confirmatory bias. Two people with opposing views on a topic can see the same evidence and still both come away feeling validated by it. Confirmation bias is pronounced in the case of ingrained, ideological, or emotionally charged views.
Failing to interpret information in an unbiased way can lead to serious misjudgements. By understanding this, we can learn to identify it in ourselves and others. We can be cautious of data which seems to immediately support our views.
When we feel as if others ‘cannot see sense’, a grasp of how confirmation bias works can enable us to understand why. Willard V. Quine and J.S. Ullian described this bias in The Web of Belief:
The desire to be right and the desire to have been right are two desires, and the sooner we separate them the better off we are. The desire to be right is the thirst for truth. On all counts, both practical and theoretical, there is nothing but good to be said for it. The desire to have been right, on the other hand, is the pride that goeth before a fall. It stands in the way of our seeing we were wrong, and thus blocks the progress of our knowledge.
Experimentation beginning in the 1960s revealed our tendency to confirm existing beliefs, rather than questioning them or seeking new ones. Other research has revealed our single-minded need to enforce ideas.
Like many mental models, confirmation bias was first identified by the ancient Greeks. In The Peloponnesian War, Thucydides described this tendency:
For it is a habit of humanity to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy.
Why we use this cognitive shortcut is understandable. Evaluating evidence (especially when it is complicated or unclear) requires a great deal of mental energy. Our brains prefer to take shortcuts. This saves the time needed to make decisions, in particular when under pressure. As many evolutionary scientists have pointed out, our minds are unequipped to handle the modern world. For most of human history, people encountered very little new information during their lifetimes, and decisions tended to be survival based. Now, we are constantly receiving new information and have to make numerous complex choices each day. To avoid being overwhelmed, we have a natural tendency to take shortcuts.
In The Case for Motivated Reasoning, Ziva Kunda wrote “we give special weight to information that allows us to come to the conclusion we want to reach.” Accepting information which confirms our beliefs is easy and requires little mental energy. Yet contradicting information causes us to shy away, grasping for a reason to discard it.
The confirmation bias is so fundamental to your development and your reality that you might not even realize it is happening. We look for evidence that supports our beliefs and opinions about the world but excludes those that run contrary to our own… In an attempt to simplify the world and make it conform to our expectations, we have been blessed with the gift of cognitive biases.
How Confirmation Bias Clouds our Judgement
“The human understanding when it has once adopted an opinion draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects.”
— Francis Bacon
The complexity of confirmation bias partly arises from the fact that it is impossible to overcome it without an awareness of the concept. Even when shown evidence to contradict a biased view, we may still interpret it in a manner which reinforces our current perspective.
In one Stanford study, researchers recruited participants, half of whom were in favor of capital punishment and half of whom were opposed to it. Both groups read details of the same two fictional studies. Half of the participants were told that one study supported the deterrent effect of capital punishment and the other undermined it; the other participants were told the inverse. At the conclusion of the study, the majority of participants stuck to their original views, pointing to the data which supported them and discarding the data which did not.
Confirmation bias clouds our judgement. It gives us a skewed view of information, even straight numerical figures. Understanding this cannot fail to transform a person’s worldview. Or rather, our perspective on it. Lewis Carroll stated “we are what we believe we are”, but it seems that the world is also what we believe it to be.
A poem by Shannon L. Adler illustrates this concept:
Read it with sorrow and you will feel hate.
Read it with anger and you will feel vengeful.
Read it with paranoia and you will feel confusion.
Read it with empathy and you will feel compassion.
Read it with love and you will feel flattery.
Read it with hope and you will feel positive.
Read it with humor and you will feel joy.
Read it without bias and you will feel peace.
Do not read it at all and you will not feel a thing.
Confirmation bias is somewhat linked to our memories (similar to availability bias). We have a penchant for recalling evidence which backs up our beliefs. However neutral the original information was, we fall prey to selective recall. As Leo Tolstoy wrote:
The most difficult subjects can be explained to the most slow-witted man if he has…
Evolution can be a hard process to figure out. Rooting out pseudoscience and conjecture from credible science proves difficult. Understanding the biological utility of certain features might take decades to reverse engineer; speculating on direct ancestral lines is equally confounding. Family trees look more like a tangle of roots than the pretty flowers blossoming atop.
Oxford doctoral student Carlos Driscoll took a decade to figure out the ancestral lines of the domesticated house cat. When his data were all in, he was shocked: every single cat today derives from one line, Felis silvestris. It appears that the agricultural revolution roughly 11,000 years ago not only changed humanity, but feline life as well.
We often consider domestication a forced process, though it appears cats chose us. If the goal is continuing the genetic line, their success rate is incredible. Today six hundred million cats roam the earth. More cats are born each day in the United States than there are lions remaining in the wild, writes journalist Abigail Tucker, who puts the lion count at twenty thousand.
This does not bode well for lions, or cheetahs, or panthers, or any of the remaining felines left in the few forests supporting them. House cats are another story. When humans stopped their nomadic chases, they formed large-scale farms. Cities started popping up. Cats appear to have said, well, fine, I’ll take this box here provided you also feed me and scratch me when needed, an arrangement that sums up our relationship today.
Yet for a long time humans were meat for cats. Unlike other animals that eat a variety of foods, cats are hypercarnivores. They don’t have the stomach for vegetables. They’ll die if deprived of protein, plenty of it; that’s what nature does to an animal with no predators. Your finicky cat has a genetic history of food snobbery.
As much as cats have taken over the internet with the same voracity with which they conquered our homes, we’re not always kind to them. Take cat hyperthyroidism, as reported in the NY Times last week. The disease was unheard of just forty years ago; today roughly 10 percent of senior cats suffer from it, as Emily Anthes writes.
A steady drumbeat of research links the strange feline disease to a common class of flame retardants that…
For all of the visions of robots taking over the world, stealing jobs and outpacing humans in every facet of existence, we haven’t seen many cases of AI drastically changing industries or even our day-to-day lives just yet. For this reason, media and AI deniers alike question whether true broad-scale AI even exists. Some go as far as to conclude that it doesn’t.
The answer is a bit more nuanced than that.
Current AI applications can be broken down into three loose categories: Transformative AI, DIY (Do It Yourself) AI, and Faux AI. The latter two are the most common and therefore tend to be what all AI is judged by.
The everyday AI applications we’ve seen most of so far are geared toward accessing and processing data for you, making suggestions based on it, and sometimes even executing very narrow tasks. Alexa turning on your music, telling you what’s happening in your day, and reporting the weather outside is a good example. Another is your iPhone predicting a phone number for a contact you don’t already have saved.
While these applications might not live up to the image of AI we have in our heads, it doesn’t mean they’re not AI. It just means they’re not all that life-changing.
The kind of AI that will “take over the world” — or at least, have the most dramatic effect on how people live and work — is what I think of as Transformative AI. Transformative AI turns data into insights and insights into instructions. Then, instead of simply delivering those instructions to the user so he or she can make more informed decisions, it gets to work, autonomously carrying out an entire complex process on its own, based on what it’s learned and continues to learn, along the way.
This type of AI isn’t yet ubiquitous. The most universally known manifestation of it is likely the self-driving car. Self-driving cars are an accessible example of what it looks like for a machine to take in constantly changing information, process and act on it, and thereby completely eliminate the need for human participation at any stage.
Driving is not a fixed process that is easily automated. (If it were, AI wouldn’t be necessary.) While there is indeed a finite set of actions involved in driving, the data set the AI must process shifts every single time the passenger gets into the car: road conditions, destination, route, oncoming and surrounding traffic, street lanes, street closures, proximity to neighboring vehicles, turning radiuses, a pedestrian stepping out in front of the car, and so on. The AI must be able to take all of this in, make a decision about it, and act on it right then and there, just like a human driver would.
This is Transformative AI, and we know it’s real because it’s already happening.
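For readers who think in code, the sense-decide-act loop described above can be sketched in a few lines. Everything here (the observation fields, the toy decision rules) is a hypothetical illustration, not any real autonomous-driving system:

```python
# Toy sketch of the sense -> decide -> act loop described above.
# All names and rules here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Observation:
    pedestrian_ahead: bool
    distance_to_lead_car_m: float
    speed_limit_kph: float


def decide(obs: Observation) -> str:
    """Map a fresh observation to one action, every control cycle."""
    if obs.pedestrian_ahead:
        return "emergency_brake"
    if obs.distance_to_lead_car_m < 10:
        return "slow_down"
    return "cruise"


# The loop runs continuously: the input shifts on every iteration,
# so the decision must be recomputed rather than fixed in advance.
stream = [
    Observation(False, 40.0, 50.0),
    Observation(False, 8.0, 50.0),
    Observation(True, 8.0, 50.0),
]
actions = [decide(obs) for obs in stream]
print(actions)  # ['cruise', 'slow_down', 'emergency_brake']
```

The point of the sketch is only that the mapping from input to action is recomputed on every cycle; the hard part of real Transformative AI is that the rules themselves are learned and revised, not hand-written like these.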
Now, imagine the implications of this technology…
NASA spends a lot of time researching the Earth and its surrounding space environment. The Van Allen belts are of particular interest, so much so that NASA built special probes to study them! They’ve now discovered a protective bubble they believe has been generated by human transmissions in the VLF range.
VLF transmissions cover the 3-30 kHz range, and thus bandwidth is highly limited. VLF hardware is primarily used to communicate with submarines, often to remind them that, yes, everything is still fine and there’s no need to launch the nukes yet. It’s also used for navigation and broadcasting time signals.
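To get a feel for the numbers (a rough back-of-the-envelope sketch, not from the article): wavelength is the speed of light divided by frequency, which is why VLF installations are enormous and the usable bandwidth so narrow:

```python
# Back-of-the-envelope: why VLF (3-30 kHz) means huge antennas.
# Figures are illustrative, not from the article.

C = 299_792_458  # speed of light in a vacuum, m/s


def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in meters for a given frequency."""
    return C / freq_hz


for f in (3_000, 30_000):
    lam = wavelength_m(f)
    print(f"{f / 1000:.0f} kHz -> wavelength {lam / 1000:.0f} km, "
          f"quarter-wave antenna ~{lam / 4 / 1000:.1f} km")
```

At 3 kHz the wavelength is on the order of 100 km, so even a quarter-wave antenna would be tens of kilometers long; practical VLF transmitters use electrically short antennas and accept very low efficiency, and the whole band is only 27 kHz wide, hence the tiny data rates.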
It seems that this…
[Warning: Spoilers ahead for Alien: Covenant]
Alien is widely seen — for good reason — as one of the great science-fiction films of all time, as well as one of the great horror films. Nearly 40 years later, Ridley Scott has directed the latest entry in the franchise, Alien: Covenant. So, with Covenant in theaters, it’s worth discussing one of many reasons why the original Alien succeeds, because it’s a big reason why (for me at least) Covenant doesn’t: it embraces the mystery of the situation instead of explaining everything.
Both the 2012 film Prometheus and Alien: Covenant try to answer the questions that the first Alien and James Cameron’s 1986 sequel Aliens steadfastly avoided. Where did the Xenomorphs come from? What is that mysterious ship that Ripley and the crew of the Nostromo explore? Where did those eggs come from if the planet is deserted? And how is it possible for these bloodthirsty aliens to evolve so fast? By taking place before the events of the 1979 film, Scott’s pair of Alien-adjacent films set out to resolve these burning questions without realizing that they don’t need to be answered.
Alien and Aliens are incredible examples of how science fiction, horror, and action can all blend together in a genuinely thrilling combination. Somehow, they’ve both stood the test of time without telling audiences that the Xenomorphs were created by a self-aware robot who so badly wants to create something that he chooses to create a “perfect” killing machine. These films also do without the revelation that the mysterious spacecraft from Alien belonged to the Engineers, humanoid aliens who are responsible both for creating humanity and for intending to destroy it at a later date. There’s a clear reason why the original films don’t answer these questions: they don’t have to.
Covenant attempts to tie some of these strands together: we find out that once Michael Fassbender’s sociopathic…
A new 3D-printed “bionic skin” developed at the University of Minnesota is a stretchable electronic fabric that would allow robots to gain tactile sensation. The results of this study were published in the journal Advanced Materials. Scientists have been dreaming of artificial skin since the 1970s. Thanks to funding by a division of the National Institutes of Health, we are much closer to making it a reality.
Michael McAlpine was the lead researcher on this study. He’s a mechanical engineering associate professor at the university. In 2013, while at Princeton, McAlpine gained international attention for 3D printing nano-materials to fashion a “bionic ear.” For this project, Prof. McAlpine enlisted graduate students Shuang-Zhuang Guo, Kaiyan Qiu, Fanben Meng, and Sung Hyun Park.
This could change the calculus on options offered to amputees. Getty Images.
Dr. McAlpine and his team created a one-of-a-kind 3D printer. The device has four nozzles, each serving a different function. To print on the skin, the surface is first carefully scanned for its contours and shape. The printer can follow any curvature. Then, once the surface area has been mapped out precisely, printing can begin. McAlpine and colleagues were able to print a pressure sensor on a mannequin’s hand.
The base of the “skin” is silicone, which comes out of the nozzle as a gel; it contains silver particles to help conduct electricity. A coiled sensor was then printed in the center. Following that, the piece was engulfed in more silicone layers. Above and below the sensor lay electrodes in the form of a conductive ink. At last, a final, temporary layer was printed to hold everything together while it solidified. The whole thing was just 4 millimeters wide and took mere minutes to print. Once it dried, the last layer was washed away, revealing a…
Speech, Action, and the Human Condition: Hannah Arendt on How We Invent Ourselves and Reinvent the World
“An honorable human relationship — that is, one in which two people have the right to use the word ‘love’ — is a process, delicate, violent, often terrifying to both persons involved, a process of refining the truths they can tell each other,” Adrienne Rich wrote in her piercing 1975 meditation on how relationships refine our truths. But although our words may be the vehicle of our truths, their seedbed is action — we enact the truth of who and what we are as we move through the world. That’s what Anna Deavere Smith spoke to in her advice to young artists: “Start now, every day, becoming, in your actions, your regular actions, what you would like to become in the bigger scheme of things.”
That indelible relationship between speech and action in an honorable existence is what Hannah Arendt (October 14, 1906–December 4, 1975) examines throughout The Human Condition (public library) — the immensely influential 1958 book that gave us Arendt on the crucial difference between how art and science illuminate life.
Arendt examines the dual root of speech and action:
Human plurality, the basic condition of both action and speech, has the twofold character of equality and distinction. If men were not equal, they could neither understand each other and those who came before them nor plan for the future and foresee the needs of those who will come after them. If men were not distinct, each human being distinguished from any other who is, was, or will ever be, they would need neither speech nor action to make themselves understood.
It is useful here to remember that Arendt is living, and therefore writing, nearly half a century before Ursula K. Le Guin unsexed “he” as the universal pronoun — Arendt’s “man,” of course, speaks to and for humanity in its entirety. In fact, she examines the vital complementarity of the universal and the unique. With an eye to the difference between human distinctness and otherness, she writes:
Otherness, it is true, is an important aspect of plurality, the reason why all our definitions are distinctions, why we are unable to say what anything is without distinguishing it from something else. Otherness in its most abstract form is found only in the sheer multiplication of inorganic objects, whereas all organic life already shows variations and distinctions, even between specimens of the same species. But only man can express this distinction and distinguish himself, and only he can communicate himself and not merely something—thirst or hunger, affection or hostility or fear. In man, otherness, which he shares with everything that is, and distinctness, which he shares with everything alive, become uniqueness, and human plurality is the paradoxical plurality of unique beings.
Speech and action reveal this unique distinctness. Through them, men distinguish themselves instead of being merely distinct; they are the modes in which human beings appear to each other, not indeed as physical objects, but qua men. This appearance, as distinguished from mere bodily existence, rests on initiative, but it is an initiative from which no human being can refrain and still be human.
Not only is the interplay of speech and action our supreme mechanism of self-invention and self-reinvention, but, Arendt suggests, in inventing a self we are effectively inventing the world in which we want to live:
With word and deed we insert ourselves into the human world, and this insertion is like a second birth, in which we confirm and take upon ourselves the naked fact of our original physical appearance. This insertion is not forced upon us by necessity,…
Whether we like them or not, most people see guinea pigs as cute, small, fluffy mammals, which kids often keep as pets. However, there are reports circulating online saying that these tiny creatures are being made into weapons of mass destruction. Is there any truth to such claims?
Why does it have to be guinea pigs? And can this type of weapon wipe out the existence of guinea pigs or, worse, our own existence?
The truth is that guinea pigs, along with rats, mice, hamsters, rabbits, cats, and dogs, have been part of atomic studies that aim to improve the well-being of the human race. But as nuclear weapons?
No, there is no such thing as deadly bombs made from these cute, tiny animals.
However, animals and humans have been used as guinea pigs – in the metaphoric sense of being test subjects – in experiments conducted to discover the harmful effects of radiation or to trace its path through the human body.
The worst part of it was that, in the early days of nuclear research, the tests were done without letting their subjects know of the dangers of the experiment.
The Nuclear Guinea Pigs
The use of nuclear guinea pigs was highly evident during the Cold War. The United States of America conducted hundreds of nuclear experiments from the 1940s to the 1960s, and these events have been compiled in a report called “American Nuclear Guinea Pigs: Three Decades of Radiation Experiments on U.S. Citizens.” One of the most well-known test sites was Nevada, where citizens were allegedly used as nuclear guinea pigs.
The nuclear tests were deemed important for the sake of national security, but the airbursts produced dangerous and toxic radioactive fallout. Officials thought that the particles would be contained within the test site’s 125-mile…