• Infants Quickly Learn to Ignore Unreliable and Silly People

    Children learn a lot from imitating the actions of adults, with recent research suggesting that infants as young as 14 months are selective imitators — taking cues from our behaviour in order to decide which of us adults to learn from and which to ignore.

    In a study where researchers expressed delight before either presenting an infant with a toy (the reliable condition) or not presenting the infant with a toy (the unreliable condition), they discovered that infants detect “unreliable” people and choose not to learn from them, opting instead for adults who appear confident and knowledgeable — the reliable group.

    “Infants seem to perceive reliable adults as capable of rational action, whose novel, unfamiliar behaviour is worth imitating,” the researchers said. “In contrast, the same behaviour performed by a previously unreliable adult is interpreted as irrational or inefficient, thus not worthy of imitating.” […]

    The new finding adds to a growing body of research showing children’s selectivity in who they choose to learn from. For example, children prefer to learn from adults as opposed to their peers, and they prefer to learn from people they are familiar with and who appear more certain, confident and knowledgeable.

  • Ebert’s Glossary of Movie Terms

    If there’s one person I can think of who is qualified to produce a movie glossary, it has to be Roger Ebert. And you know what? He did; it was published, and I had no idea until just now.

    Inspiring frequent light giggles and the occasional guffaw, Ebert’s glossary appears to have originated as an article/chapter in Roger Ebert’s Video Companion (that link leads to a probably-not-kosher mirror of the full section). An expanded version was later published as the standalone volume Ebert’s ‘Bigger’ Little Movie Glossary, with the wondrously descriptive subtitle of “a greatly expanded and much improved compendium of movie clichés, stereotypes, obligatory scenes, hackneyed formulas, shopworn conventions, and outdated archetypes” (and that link goes to the fairly extensive Google Books preview… for those of you who don’t want to buy it for the Kindle).

    Five random terms that made me chuckle:

    • Dirt Equals Virtue: In technology movies, a small, dingy, cluttered little lab and eccentric personnel equal high principles; large, well-lighted facilities mask sinister motives.
    • First Law of Funny Names: No names are funny unless used by W.C. Fields or Groucho Marx. Funny names, in general, are a sign of desperation at the screenplay level. See “Dr. Hfuhruhurr” in The Man with Two Brains.
    • Obligatory M & M Shot: Every movie that features a scene in an Arab or Islamic country will begin the scene with a shot of a mosque tower (minaret), or the sound of the muezzin, or both.
    • Principle of Selective Lethality: The lethality of a weapon varies, depending on the situation. A single arrow will drop a stampeding bison in its tracks, but it takes five or six to kill an important character. A single bullet will always kill an extra on the spot, but it takes dozens to bring down the hero.
    • Unmotivated Close-up: A character is given a close-up in a scene where there seems to be no reason for it. This is an infallible tip-off that this character is more significant than at first appears, and is most likely the killer. See the lingering close-up of the undercover KGB agent near the beginning of The Hunt for Red October.

  • The Minds of Dogs and How Pointing Evolved

    Recent research suggests that domestic dogs seem capable of displaying a rudimentary “theory of mind” — a very human characteristic whereby you are able to attribute mental states to others that do not necessarily coincide with your own (in a nutshell). Stray domestic dogs, meanwhile, do not display this trait, suggesting that such mental attributes are developed through close contact with humans. That’s interesting, but not the main reason I’m sharing this information with you.

    This cognitive difference between stray domestic dogs and their housebound brethren was uncovered by testing whether or not they understood the very human action of pointing (y’know, with your index finger). What struck me most in this discussion was this brief theory of how the action of pointing evolved:

    Go ahead, let your wrist go limp and look at your hand from the side, or if you’re too insecure in your own sexuality, just picture Adam’s limp wrist at the moment of creation in Michelangelo’s masterpiece on the Sistine Chapel’s ceiling. See how even in this relaxed state the index finger is slightly extended? By contrast, when chimps do this […] their index finger falls naturally in line with their other fingers. Povinelli and Davis reason that this subtle evolutionary change in the morphology of our hands, which occurred after humans and chimpanzees last shared a common ancestor five million to seven million years ago, is at least partially responsible for the fact that human pointing with the index finger is so culturally ubiquitous today.

    The argument goes something like this. When young infants begin reaching for objects just out of their range, adults are most likely to respond to those reaching attempts and to retrieve the item for the baby when the latter’s index finger is more prominently extended. That is to say, initially, the adult mistakenly reads the child’s reaching attempt as a communicative gesture on the part of the child. Over time, this dynamic between the child and adult serves to further “pull out” the index finger because the child implicitly learns the behavioral association, so that it slowly becomes a genuine pointing gesture.

  • The History (and Future) of the Universe

    Starting at 10⁻²⁵ seconds after the start of the universe (inflation) and ending 10¹⁵ years later (with the ultimate fate of the universe), the timeline of the universe is an incomprehensibly long and fascinating one. To help understand the forces that led to life as we know it and to get an idea of what’s going to happen in the (distant) future, theoretical astrophysicist Ethan Siegel has broken down the details in a wonderfully accessible and enlightening complete history of the universe (with pictures!).

    Those last couple of steps on the timeline are particularly humbling:

    100 billion years: the Universe has expanded so much that our local group, having merged into a giant elliptical galaxy, is the only one left in the visible Universe!

    We’ve got a long time left of stars going through the great cosmic life-cycle, burning their fuel, exploding, triggering star formation, and burning their new fuel. But this is limited; there’s only a finite amount of hydrogen and other elements to burn via nuclear fusion. The skies will eventually go completely dark, as the last of the dim, red dwarf stars (the longest-lived ones) exhaust their fuel.

    10¹⁵ years: the last bit of hydrogen is burned up, and our entire Universe goes dark, being populated only by black holes, neutron stars, and degenerate dwarf stars, which eventually themselves cool, fade, and turn black.

    And that’s the entire Universe, from the very beginning of what we can sensibly say about it to the far distant future!
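    To get a feel for those numbers, a quick back-of-the-envelope comparison against the universe’s current age helps. A minimal sketch, assuming a current age of roughly 13.8 billion years (a standard figure, not stated in the article):

    ```python
    # Rough scale comparison for the timeline above.
    # Assumes the universe is currently ~13.8 billion years old.
    current_age_years = 13.8e9
    local_group_merger_years = 100e9   # local group merges, per the timeline
    stars_go_dark_years = 1e15         # last hydrogen burned, per the timeline

    print(f"Local-group merger: ~{local_group_merger_years / current_age_years:.1f}x "
          f"the universe's current age away")
    print(f"Stars going dark: ~{stars_go_dark_years / current_age_years:,.0f}x "
          f"the universe's current age away")
    ```

    In other words, the “last star” milestone is over seventy thousand times further into the future than the entire history of the universe so far.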

    via @Foomandoonian

  • Optimal Caffeine Consumption

    Whether caffeine serves any purpose other than removing withdrawal symptoms is a topic of study with conflicting results, but if you’re an optimist as well as a fan of caffeine in any of its many forms, you’re most likely consuming it sub-optimally.

    Why not improve your caffeine knowledge and learn about the optimal way of consuming the world’s most-used stimulant:

    • Consume in small, frequent amounts: Between 20 and 200 mg per hour may be an optimal dose for cognitive function.
    • Play to your cognitive strengths: Caffeine may increase the speed with which you work, may decrease attentional lapses, and may even benefit recall – but is less likely to benefit more complex cognitive functions, and may even hurt others. Plan accordingly.
    • Play to caffeine’s strengths: Caffeine’s effects can be maximized or minimized depending on what else is in your system at the time.
    • Know when to stop – and when to start again: Although you may not grow strongly tolerant to caffeine, you can become dependent on it and suffer withdrawal symptoms. Balance these concerns with the cognitive and health benefits associated with caffeine consumption – and appropriately timed resumption.

    So that’s one cup of regular coffee — with sugar and/or soy milk — every hour when performing relatively simple cognitive tasks.
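    The per-hour guideline above is easy to check against a given drink. A minimal sketch, assuming typical caffeine contents per serving (the figures in the dictionary are illustrative assumptions, not from the article):

    ```python
    # Check whether a drink, taken some number of times per hour, falls
    # inside the suggested 20-200 mg/hour range for cognitive function.
    CAFFEINE_MG = {          # assumed typical mg of caffeine per serving
        "brewed coffee": 95,
        "espresso shot": 63,
        "black tea": 47,
    }

    def within_optimal_range(drink, servings_per_hour=1, low=20, high=200):
        dose = CAFFEINE_MG[drink] * servings_per_hour
        return low <= dose <= high

    print(within_optimal_range("brewed coffee"))  # one cup per hour -> True
    ```

    One cup of brewed coffee per hour sits comfortably inside the range; five cups of strong tea in the same hour would overshoot it.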

  • When Uncertainty Increases Persuasiveness

    Common wisdom would suggest that the more certain a person is on a subject, the more persuasive and credible we perceive them to be. However, a study looking at how certainty affects persuasiveness and perceived credibility found that the opposite is true:

    Experts are more persuasive when they seem tentative about their conclusions […] but the opposite is true of novices, who grow more persuasive with increasing certainty.

    This result held across the three experiments described in the paper (pdf, doi), but it’s worth noting that this only applies in situations where there is no objective truth — such as in consumer situations (the experiments used restaurant reviews, and I imagine product reviews would give similar results):

    Earlier research […] had made the case that expressing certainty generally increases people’s persuasive power, because it boosts their perceived credibility. [However] those studies concerned topics such as witnesses testifying in court or stock market advisers giving stock recommendations where there is an objective truth or correct answer. In those instances […] people might rely on a person’s certainty as an indicator of his or her credibility. “In more subjective domains like consumer contexts, though, […] expressing certainty appears to have a more dynamic effect, giving a message more or less impact depending on who is expressing it.”

    via Marginal Revolution / NYT

  • First Offers and Aggressive Offers: Optimal Negotiating Tactics

    When negotiating, ensure that you make the first offer and that it’s an aggressive one: this is almost always the optimal negotiation strategy. That’s the conclusion from a study looking at negotiation tactics and the anchoring effect (from the same researchers who discovered the optimal starting prices for negotiations and auctions).

    One of the researchers gives a good overview of the study’s findings in an article for Harvard Business School’s Working Knowledge that provides succinct negotiation tactics and reasons for why you should make the first offer. Topics include: when you should not make the first offer, how to counter first offers, how to construct a reasonable—yet aggressive—offer, how to protect yourself from the effects of anchoring, and more.

    Some key points worth considering (in no particular order):

    We might expect experts to be immune to the anchoring effect. Real estate agents, for example, should be able to resist the anchoring effects of a property’s list price because of their presumed skill at estimating property values. Testing this theory, [it is clear that] anchors affect the judgment of even those who think they are immune to such influence. But why?

    Every item under negotiation (whether it’s a company or a car) has both positive and negative qualities—qualities that suggest a higher price and qualities that suggest a lower price. High anchors selectively direct our attention toward an item’s positive attributes; low anchors direct our attention to its flaws. […]

    The probability of making a first offer is related to one’s confidence and sense of control at the bargaining table. Those who lack power, either due to a negotiation’s structure or a lack of available alternatives, are less inclined to make a first offer. Power and confidence result in better outcomes because they lead negotiators to make the first offer. In addition, the amount of the first offer affects the outcome, with more aggressive or extreme first offers leading to a better outcome for the person who made the offer. Initial offers better predict final settlement prices than subsequent concessionary behaviors do.

    There is one situation in which making the first offer is not to your advantage: when the other side has much more information than you do about the item to be negotiated or about the relevant market or industry. […]

    How extreme should your first offer be? My own research suggests that first offers should be quite aggressive but not absurdly so. Many negotiators fear that an aggressive first offer will scare or annoy the other side and perhaps even cause him to walk away in disgust. However, research shows that this fear is typically exaggerated. In fact, most negotiators make first offers that are not aggressive enough.

  • Labelling Homeopathic Products

    Earlier this year the UK’s MHRA opened a consultation to help them decide how homeopathic products should be labelled when sold to the public. As expected, Ben Goldacre — devoted critic of homeopathy, pseudoscience and general quackery — suggested a label of his own and asked his readers for further suggestions.

    Some of the suggestions were truly fantastic (and proved that I couldn’t come up with an original joke, no matter how hard I tried), and so Goldacre published some of the best suggestions for homeopathic labelling in his column for The Guardian:

    On instructions, we have “take as many as you like”, since there are no ingredients. The proposed belladonna homeopathy pill ingredients label simply reads “no belladonna”, which is a convention the MHRA could adapt for all its different homeopathy labels. Other suggestions include “none”, “belief”, “false hopes”, “shattered dreams”, and “the tears of unicorns”.

    For warnings, we have: “not to be taken seriously”, “in case of overdose, consult a lifeguard”, and “contains chemicals, including dihydrogen monoxide“. This, of course, is a scary name for water, which became an internet meme after Nathan Zohner’s school science project: he successfully gathered a petition to ban this chemical on the grounds that it is fatal when inhaled, contributes to the erosion of our natural landscape, may cause electrical failures, and has been found in the excised tumours of terminal cancer patients.

    The comments on both articles are real gems for those in need of a laugh today.

    via @IrregularShed

  • The Evolutionary History of the Brain

    The development of the human brain is intricately linked with almost every moment of our evolution from sea-dwelling animals to advanced, social primates. That is the overwhelming theme from New Scientist’s brief history of the brain.

    The engaging article ends with a look at the continued evolution of the human brain (“the visual cortex has grown larger in people who migrated from Africa to northern latitudes, perhaps to help make up for the dimmer light”), and this on why our brains have stopped growing:

    So why didn’t our brains get ever bigger? It may be because we reached a point at which the advantages of bigger brains started to be outweighed by the dangers of giving birth to children with big heads. Or it might have been a case of diminishing returns.

    Our brains are pretty hungry, burning 20 per cent of our food at a rate of about 15 watts, and any further improvements would be increasingly demanding. […]

    One way to speed up our brain, for instance, would be to evolve neurons that can fire more times per second. But to support a 10-fold increase in the “clock speed” of our neurons, our brain would need to burn energy at the same rate as Usain Bolt’s legs during a 100-metre sprint. The 10,000-calorie-a-day diet of Olympic swimmer Michael Phelps would pale in comparison.

    Not only did the growth in the size of our brains cease around 200,000 years ago, in the past 10,000 to 15,000 years the average size of the human brain compared with our body has shrunk by 3 or 4 per cent. Some see this as no cause for concern. Size, after all, isn’t everything, and it’s perfectly possible that the brain has simply evolved to make better use of less grey and white matter. That would seem to fit with some genetic studies, which suggest that our brain’s wiring is more efficient now than it was in the past.

    Others, however, think this shrinkage is a sign of a slight decline in our general mental abilities.
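    As a sanity check, the “20 per cent of our food” and “15 watts” figures quoted above imply a whole-body energy budget we can compute and compare against a plausible daily intake. A quick sketch, assuming the figures refer to average continuous power draw:

    ```python
    # If the brain draws 15 W and that is 20% of the body's energy budget,
    # the implied whole-body average is 75 W. Convert that to kcal/day.
    brain_watts = 15
    brain_fraction = 0.20
    body_watts = brain_watts / brain_fraction      # 75 W total

    seconds_per_day = 24 * 60 * 60
    joules_per_kcal = 4184                         # 1 kcal = 4184 J
    kcal_per_day = body_watts * seconds_per_day / joules_per_kcal

    print(f"Implied daily energy use: ~{kcal_per_day:.0f} kcal")  # ~1549 kcal
    ```

    That lands close to a typical resting metabolic rate, so the two quoted figures are at least mutually consistent.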

    via @mocost

  • Our Amazing Senses

    As neuroscientist Bradley Voytek points out, “we’re used to thinking of our senses as being pretty shite”, and this is mostly thanks to the plethora of animals that can see, hear, smell and taste far better than we can. “We can’t see as well as eagles, we can’t hear as well as bats, and we can’t smell as well as dogs”, he concludes… and that seems to be the consensus on every nature documentary I’ve ever watched.

    However, our brain is a magnificent construction (and our senses are equally wondrous), and so Voytek tries to reverse this idea by explaining just how sensitive and amazing our senses really are:

    It turns out that humans can, in fact, detect as few as 2 photons entering the retina. Two. As in, one-plus-one. It is often said that, under ideal conditions, a young, healthy person can see a candle flame from 30 miles away. That’s like being able to see a candle in Times Square from Stamford, Connecticut. Or seeing a candle in Candlestick Park from Napa Valley.*

    Similarly, it appears that the limits to our threshold of hearing may actually be Brownian motion. That means that we can almost hear the random movements of atoms.

    We can also smell as few as 30 molecules of certain substances. […]

    These facts suggest that we all have some level of what we’d normally think of as “super human” sensory abilities already.

    But what the hell? If I can supposedly see a candle from 30 miles away, why do I still crack my frakkin’ shin on the coffee table when it’s only slightly dark in my living room?

    It may not surprise you to hear that the answer to that question is attention.

    * For the Europeans among you, that’s more than a fifth longer than the Channel Tunnel’s underwater section (or Hyde Park to Stansted Airport for the Londoners).