Scientists say orangutans can ‘talk’ about the past JUST LIKE US

The evolution of language converted a defenceless naked ape into a world-dominating force. It fundamentally transformed how humans transmit information and knowledge. A large and potent component of language is our ability to communicate about things that are not here, that happened in the past, or that will happen in the future. This feature of language is known as “displaced reference”.

Displaced reference is universal across the world’s languages and pervades our daily lives. In fact, speaking purely about the present moment has become a rarity; notable exceptions are when we comment on the weather, ask for the salt at the dinner table, or talk with very young children.

Displaced reference unshackles speakers from the present. The amount of information available to individuals (or species) capable of displaced reference is therefore immeasurably greater than that available to individuals (or species) living strictly in “the here and now” – which is the bulk of the animal kingdom.


So far, besides humans, only social insects have been shown to be capable of displaced reference. It is remarkable that honey bees, with their tiny brains, can communicate to other bees in the hive about, for instance, the location of distant food sources.

The discovery of this fact earned Karl von Frisch a share of the Nobel Prize in Physiology or Medicine in 1973. Displaced reference in social insects raises many fascinating – and unanswered – questions about animal intelligence, and about the minimal intelligence a system needs for a particular cognitive capacity.

However, bees and other insects are biologically far removed from humans and can tell us very little about how language evolution played out among our ancestors. Lacking examples in other vertebrates, mammals, or non-human primates – including great apes, our closest relatives – scientists had no clues about how this capacity came about in humans. This is the new jigsaw piece that wild orangutans are bringing to the puzzle of language evolution.

The missing link?

In the low mountain rainforests of Sumatra, Indonesia, our team simulated a natural encounter with a predator to study the vocal responses of wild orangutan females. The setup consisted of a human researcher, disguised as a forest big cat, parading on all fours across the forest floor in front of the orangutan females.

The Sumatran tiger is one of the orangutan’s forest predators. Credit: Shutterstock

We observed that, despite showing all sorts of distress (including urinating and defecating), orangutan females refrained from responding vocally towards the “predator”. Instead, they waited up to 20 minutes to communicate their alarm to their offspring, long after the predator had left the scene. Across several experiments there was an average delay of seven minutes before the females vocally expressed their alarm.

The data (and simple common sense if we imagine ourselves facing a wild Sumatran tiger!) suggest that to respond vocally in the presence of a predator would have been a huge risk to the orangutans’ safety. If the females had responded immediately by calling out warnings, the predator could have detected them and perhaps attempted an attack, particularly on the infant orangutans.

Instead, the mothers waited for a significant amount of time before signaling vocal alarm about the danger that had now passed. The question that springs to mind, then, is: why did the females signal their alarm at all? If they hadn’t responded vocally at any point, they wouldn’t have faced any danger at all, right?

That is undoubtedly true; but had the mothers not expressed alarm, their infants would have remained oblivious to one of the most lethal dangers in the rainforest. Instead, the females waited until it was safe to call out, but not so long that their infants could not connect their mothers’ vocal distress with what had just happened and understand that it was extremely dangerous. The female orangutans were teaching their young about the dangers of the forest by referring to something that had happened in the (recent) past.

Orangutan offspring stay with their mothers as long as human children do. Credit: Shutterstock

In the 1970s, early attempts to release rescued orangutans and reintroduce them back into this same forest failed miserably. Nearly all the released animals fell prey to forest cats, essentially for lack of knowledge about survival in the rainforest.

Orangutan infants stay with their mothers as long as human children do. It has been shown that this exceptionally long period ensures that mothers pass on a variety of knowledge, skills and tools to their offspring. Our new findings indicate that teaching about predators is a vital aspect of this.

Widening this out to human language evolution, orangutans exemplify how our ancestors probably communicated beyond the here and now – about the past, and possibly the future – even before they had uttered their first word. Together with mounting evidence from other studies, great apes are helping scientists build a clearer picture of our ancient ancestors as they moved towards fully fledged language.

By showing us that we are, after all, not so different from them, great apes help us learn where we come from, define who we are and, hopefully, decide where we are going as intelligent stewards of our precious planet.

This article is republished from The Conversation by Adriano Reis e Lameira, Marie Curie Fellow, School of Psychology and Neuroscience, University of St Andrews under a Creative Commons license. Read the original article.


What smart bees can teach us about collective intelligence

When it comes to making decisions, most of us are influenced to some degree by other people, whether that’s choosing a restaurant or a political candidate. We want to know what others think before we make that choice.

Humans are social animals: so social that we are rarely independent of others, given our propensity for copying behavior and communication – also known as social learning.

Humans copy each other every day. You might buy the latest trainers because they’re really popular, even though you have no idea how good they are. And then you might share that information, perhaps by posting a review on social media.

This can lead to “smarter” purchasing decisions because, usually, a popular product is less likely to be of poor quality. So sometimes social learning can improve our decision making.

Learning together

Our social learning ability has led to extraordinary technological success. Advances in modern science and technology, from the smartphone to the Higgs boson, have been made possible not only by genius innovation, but by humans’ ability to learn from others.

So social learning is seen as a source of collective intelligence – smart decision making among groups of individuals that improves on the ability of one single person. This can be useful in areas such as management, product development and predicting elections.

However, the opposite can also be true. Crowds can also suffer from collective “madness”, when ineffective or harmful knowledge goes viral due to copying – a phenomenon called maladaptive herding – which can trigger things like instability in stock markets.

Why do groups of humans sometimes exhibit collective wisdom and at other times madness? Can we reduce the risk of maladaptive herding and at the same time increase the possibility of collective wisdom?

Understanding this apparent conflict has been a longstanding problem in social science. The key to this puzzle could be the way that individuals use information from others versus information gained from their own trial-and-error problem solving.

If people simply copy others without reference to their own experience, any idea – even a bad one – can spread. So how can social learning improve our decision making? Striking the right balance between copying others and relying on personal experience is key. Yet we still need to know exactly what the right balance is.

Smart flexible bees

Humans are not the only animals to display collective intelligence. Bees are also well known for their ability to make accurate collective decisions when they search for food or new nest sites.

What’s more, bees can avoid maladaptive herding. Even though they copy each other through communication and social learning, they prevent bad information from going viral. But how do they do it?

In the early 20th century, Austrian behavioral biologist Karl von Frisch found that worker honey bees use a kind of “waggle dance” for communicating with each other. In short, these waggle dances are bee versions of online shopping rating systems.

Instead of stars or good reviews, bee ratings are based on the duration of the dance. When a bee finds a good source of food, it dances for a long time. When it finds a poor one, the duration of the dance is short or non-existent. The longer the dance, the more bees follow its suggestion to feed there.

Researchers have demonstrated that bee colonies will switch their efforts to a more abundant site, even after foraging is already well underway elsewhere, thus preventing maladaptive herding. Collective flexibility is key.
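
To make the mechanism concrete, here is a toy simulation of dance-based recruitment (a minimal sketch of our own in Python, not a model from the bee literature; the site qualities, switching rate, and scouting probability are all invented for illustration). Because dance duration tracks site quality, and some bees keep scouting independently of the dances, the simulated colony re-converges on a site that suddenly becomes richer instead of staying locked on the old favorite:

```python
import random

SITES = {"A": 0.4, "B": 0.6}    # hypothetical nectar quality per site
bees = ["A"] * 50 + ["B"] * 50  # current preferred site of 100 foragers

for step in range(200):
    if step == 100:
        SITES["A"] = 0.9  # site A suddenly becomes much richer

    # Returning bees advertise their site; total dance duration per site
    # is proportional to how good that site currently is.
    dance = {site: 0.0 for site in SITES}
    for site in bees:
        dance[site] += SITES[site]

    for i in range(len(bees)):
        if random.random() >= 0.1:
            continue  # only ~10% of bees re-decide in a given step
        if random.random() < 0.2:
            # Independent scouting: visit a random site, ignoring dances.
            bees[i] = random.choice(list(SITES))
        else:
            # Follow a dance, weighted by total dance duration per site.
            r = random.random() * sum(dance.values())
            for site, duration in dance.items():
                r -= duration
                if r <= 0:
                    bees[i] = site
                    break

print({site: bees.count(site) for site in SITES})  # mostly "A" by the end
```

The independent scouting step is what keeps the colony flexible: an option that loses all of its followers can still be rediscovered.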

Not so flexible humans

The question is, why can’t human crowds be flexible like bees, especially when both have a similar social information sharing system? To examine this, we developed a mathematical model that was inspired by collective honey bee foraging behavior.

Two key factors were identified for study: conformity – that is, the extent to which an individual follows the majority opinion; and copying tendency – the extent to which an individual ignores their own personal knowledge and relies solely on following others.

We launched a simple online game as a psychology experiment. Participants had to repeatedly choose one of three slot machines. One slot could drop more money than the others, but players didn’t know which one at the outset.

The mission was to identify the best slot and win as much money as possible. Because many people participated in the same experiment, players could see what other participants were doing in real time. Then they could copy or ignore the choices of the others.

The results revealed that a challenging task elicited greater conformity, and that copying increased with group size. This suggests that, unlike bees, when large groups are confronted with tough challenges, collective decision making becomes inflexible and maladaptive herding behavior becomes prominent. The popular slot got more popular because people followed the majority choice, even when it was not actually the winning one.
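
The flavor of this result can be reproduced with a minimal agent-based sketch (again our own illustration, not the published model; all parameter values are invented). Agents either follow the crowd, with a conformity exponent that exaggerates the majority, or rely on their own payoff estimates:

```python
import random

N_OPTIONS = 3
PAYOFF = [0.3, 0.3, 0.6]  # hypothetical hit rates; slot 2 is the best one

def run(n_agents, copying, conformity, steps=100):
    # Each agent keeps its own running payoff estimate per slot.
    est = [[0.5 + 0.01 * random.random() for _ in range(N_OPTIONS)]
           for _ in range(n_agents)]
    counts = [1] * N_OPTIONS  # last round's popularity (plus-one smoothing)
    for _ in range(steps):
        choices = []
        for a in range(n_agents):
            if random.random() < copying:
                # Social learning: weight options by popularity^conformity,
                # so conformity > 1 exaggerates the majority.
                weights = [c ** conformity for c in counts]
                r = random.random() * sum(weights)
                choice = 0
                for i, w in enumerate(weights):
                    r -= w
                    if r <= 0:
                        choice = i
                        break
            else:
                # Individual trial-and-error: exploit own best estimate,
                # with a little random exploration.
                if random.random() < 0.1:
                    choice = random.randrange(N_OPTIONS)
                else:
                    choice = max(range(N_OPTIONS), key=lambda i: est[a][i])
            choices.append(choice)
            reward = 1.0 if random.random() < PAYOFF[choice] else 0.0
            est[a][choice] += 0.2 * (reward - est[a][choice])
        counts = [choices.count(i) + 1 for i in range(N_OPTIONS)]
    return counts

# Small group, little copying, no conformity bias: usually finds slot 2.
print(run(n_agents=10, copying=0.3, conformity=1))
# Large group, heavy copying, strong conformity: often herds on a poor slot.
print(run(n_agents=100, copying=0.8, conformity=3))
```

With many agents, heavy copying, and strong conformity, early random fluctuations are amplified and the simulated group often locks onto a poor slot.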

The study also showed that humans in groups can be flexible, like bees, when either conformity or copying is low. Players were able to switch to a new and better option when the group size was small or when a less challenging version of the task was undertaken.

Thanks to low conformity, some people were willing to explore less popular options, and they could eventually find the best one rather than the most chosen one.

Our results suggest that we should be more aware of the risk of maladaptive herding when these conditions – large group size and a difficult problem – prevail. We should take account of not just the most popular opinion, but also other minority opinions.

In thinking this way, the crowd can avoid maladaptive herding behavior. This research could inform how collective intelligence is applied to real-world situations, including online shopping and prediction markets.

Stimulating independent thought in individuals may reduce the risk of collective madness. Dividing a group into sub-groups or breaking down a task into small, easy steps promotes flexible, yet smart, human “swarm” intelligence. There is much we can learn from the humble bee.

This article is republished from The Conversation by Wataru Toyokawa, JSPS Research Fellow, School of Biology, University of St Andrews under a Creative Commons license. Read the original article.


How AI could help you learn sign language

Sign languages aren’t easy to learn and are even harder to teach. They use not just hand gestures but also mouthings, facial expressions and body posture to communicate meaning. This complexity means professional teaching programs are still rare and often expensive. But this could all change soon, with a little help from artificial intelligence (AI).

My colleagues and I are working on software for teaching yourself sign languages in an automated, intuitive way. Currently, this tool can analyze the way a student performs a sign in Swiss-German sign language and provide detailed feedback on how to improve the hand shape, motion, location and timing. But our hope is that we can use the AI behind the tool to create software that can teach various sign languages from around the world, and take into account more intricate features of the languages, such as sentence grammar and the non-hand elements of communication.

AI has previously been used for the recognition, translation or interpretation of sign language. But we believe we are the first to actually attempt to assess the signs a person makes. More importantly, we want to leverage the AI technology to provide feedback to the user about what they did wrong.

Practicing and assessing sign language is hard because you can’t read or write it. Instead, we have created a computer game. To practice a sign, the game shows you a video of that sign being performed, or gives you the nearest spoken word that describes it (or both). It then records your attempt to recreate the sign using a video camera and tells you how you can do better. We’ve found that making it a game encourages people to compete to get the best score and improve their signing along the way.

Artificial intelligence is used at all stages of performance assessment. First, a convolutional neural network (CNN) extracts information from the video about the pose of your upper body. A CNN is a type of AI loosely based on the processing done by the visual cortex in your brain. Your skeletal pose information and the original video are then sent to the hand shape analyzer, where another CNN looks at the video and pulls out hand shape information at each point in the video.
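
As a rough illustration of this stage, here is a minimal hand-shape classifier built with PyTorch (an assumption on our part; the article does not specify the authors’ framework or architecture, and the layer sizes and number of hand shapes below are invented). It takes a cropped image of the hand and produces a score for each candidate hand shape:

```python
import torch
import torch.nn as nn

N_HAND_SHAPES = 60  # hypothetical size of the hand-shape inventory

hand_shape_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local edge features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine into larger shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, N_HAND_SHAPES),       # one score per hand shape
)

frame = torch.randn(1, 3, 64, 64)  # one 64x64 RGB crop of the signer's hand
scores = hand_shape_cnn(frame)     # shape: (1, N_HAND_SHAPES)
probs = scores.softmax(dim=1)      # probability of each hand shape
print(probs.argmax(dim=1))         # index of the most likely hand shape
```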

The skeletal information and hand shapes are then sent to a hand motion analyzer, which uses something called a hidden Markov model (HMM). This type of AI allows us to model the skeleton and hand shape information over time. It then compares what it has seen against a reference model representing the ideal version of that sign, and produces a score for how well the two match.
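
The motion-scoring step might look something like the following sketch, built on the open-source hmmlearn library (again an assumption; the authors’ exact model, features, and toolkit are not described in the article). A reference HMM is trained on expert recordings of a sign, and a student’s attempt is scored by its average per-frame log-likelihood under that model:

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical per-frame features, e.g. [wrist_x, wrist_y, hand_shape_id].
# Random walks stand in for real expert recordings of one sign.
rng = np.random.default_rng(0)
expert_clips = [rng.normal(size=(40, 3)).cumsum(axis=0) for _ in range(20)]

# Train one reference HMM for this sign on the concatenated expert clips.
X = np.concatenate(expert_clips)
lengths = [len(clip) for clip in expert_clips]
reference = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
reference.fit(X, lengths)

# Score a student's attempt: average log-likelihood per frame, so longer
# attempts are not penalized merely for their length.
attempt = rng.normal(size=(35, 3)).cumsum(axis=0)
score = reference.score(attempt) / len(attempt)
print(f"motion score (higher = closer to the reference): {score:.2f}")
```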

The results of both the hand shape analyzer and the hand motion analyzer are then scored and presented to you as feedback. So all the AI is hidden behind a simple-to-use interface, letting you focus on the learning. Our hope is that the automatic, personal feedback will make students more engaged with the process of learning to sign.

Bringing AI to the classroom

So far, the software only works for Swiss-German sign language. But our research suggests that the “architecture” of the system wouldn’t need to change to deal with other languages. It would just need more video recordings of each language to act as data to train it with.

An area of research we would like to explore is how we could use what the AI already knows to help it learn new languages. We’d also like to see how we can add other aspects of communication while using sign language, such as facial expressions.

At the moment, the software works best in a simple environment such as a classroom. But if we can develop it to tolerate more variation in the background of the video footage it is assessing, it could become like the many popular apps that let you learn a language wherever you are, without the help of an expert. With this sort of technology in development, it may soon be possible to make learning sign languages as accessible to everyone as learning their spoken counterparts.

This article is republished from The Conversation by Stephanie Stoll, PhD Candidate in Computer Vision, University of Surrey under a Creative Commons license. Read the original article.


How scientists recreated a monster wave that looks like Hokusai’s famous image

Mariners’ accounts of freak or rogue waves out in the ocean have long been common, but until relatively recently they remained anecdotal. That changed on January 1, 1995, when a huge wave was observed – and recorded – at the Draupner oil platform in the North Sea.

It was one of the first reliable measurements of a freak wave in the ocean: at a height of 25.6 meters, it was more than twice the height of the waves that surrounded it.

The wave appeared seemingly out of nowhere, and this seminal observation initiated many years of research into the possible causes of freak ocean waves. Various theories exist, perhaps the simplest of which is that ocean waves are random, so that freak waves, while rare, are to be expected. Other theories suggest that under certain conditions waves can become unstable, causing small waves to grow into much larger freak waves.

We decided to see if we could recreate this wave in the laboratory, to understand more about how freak waves happen in the first place. Clearly it is not possible to recreate waves that are 25 meters high and several hundred meters wide in a laboratory, so we reduced the scale while maintaining the same ratios of wavelength to water depth and wave height. Although our wave was 35 times smaller than the actual Draupner wave, the same physical processes dictated the behavior of the waves we produced.
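
As a back-of-envelope check of what that scaling implies (our own calculation; the article gives only the 35:1 ratio, and the wave period below is an assumed, typical storm-wave value), lengths shrink by a factor of 35 while, for gravity waves, times shrink by the square root of that factor:

```python
import math

SCALE = 35  # ratio between the real wave and the lab wave

draupner_height_m = 25.6  # measured height of the Draupner wave
assumed_period_s = 12.0   # assumed, typical of North Sea storm waves

# Froude-type scaling: lengths / 35, times / sqrt(35).
lab_height_m = draupner_height_m / SCALE
lab_period_s = assumed_period_s / math.sqrt(SCALE)

print(f"lab wave height: {lab_height_m:.2f} m")  # ~0.73 m
print(f"lab wave period: {lab_period_s:.2f} s")  # ~2.0 s
```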

The height of ocean waves is limited when waves break. When we tried to recreate the wave measured at the Draupner platform by creating ones that travelled in the same direction, they broke about two meters before reaching the scaled height of the wave we wanted.

Wave moving in one direction breaks with crest. Credit: Hypervision Creative/Shutterstock

We then attempted to recreate the wave by making two smaller wave groups that crossed at an angle of 120 degrees. We found that it was possible to recreate the full scaled amplitude of the original Draupner measurement. The height of waves produced under these conditions is not limited by breaking in the same way.

Crest of a wave

In general, wave breaking occurs when the fluid in the crest of a wave travels faster than the crest itself, causing the fluid to overtake the crest and the wave to break. For non-crossing waves, large horizontal velocities are generated, and this can result in “plunging” wave breaking, in which a jet of water emanates horizontally from the crest of the wave, as illustrated by the sequence of images below (top row):

[Image sequence: plunging breaking of a non-crossing wave (top row) and jet-like breaking of crossing waves (bottom row).]

When waves cross, much of this horizontal motion is cancelled out and this type of plunging wave breaking no longer happens. Instead, breaking takes the form of a mainly upward, jet-like motion (see the images above, bottom row).

Critically, this type of crossing breaking doesn’t seem to limit the height of waves in the same way as plunging breaking, and this allowed us to reproduce the full scaled height of the Draupner wave under crossing conditions.
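A rough way to see the kinematic criterion at work is to compare a linear-theory estimate of the fluid speed at the crest with the crest’s own speed (a deliberate simplification: real freak waves are strongly nonlinear and the experiments were in finite depth; the Draupner crest amplitude and period below are commonly quoted approximate values):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def breaking_ratio(amplitude_m, period_s):
    """Ratio of crest fluid speed to crest speed; breaking as it nears 1."""
    omega = 2 * math.pi / period_s
    k = omega ** 2 / g             # deep-water dispersion: omega^2 = g * k
    crest_speed = omega / k        # phase speed of the crest
    fluid_speed = amplitude_m * omega  # linear estimate at the crest
    return fluid_speed / crest_speed   # equals the steepness a * k

# A moderate storm wave vs. a Draupner-like crest (~18.5 m amplitude,
# ~12 s period): the freak wave sits far closer to the breaking limit.
print(f"moderate wave: {breaking_ratio(2.0, 10.0):.2f}")
print(f"Draupner-like: {breaking_ratio(18.5, 12.0):.2f}")
```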

The nature of waves

Our findings shed light not only on how the famous Draupner wave may have occurred, but also on the nature and significance of wave breaking in crossing sea conditions. They point to previously unobserved wave breaking behavior, which differs significantly from our current understanding of what happens in the oceans.

In addition to a better understanding of how freak waves may form, the ability to accurately predict the onset of wave breaking is crucial to accounting for its effect on various other phenomena.

Wave breaking is one of the main mechanisms for the dissipation of energy in the oceans, and is crucial to accurate forecasting. The creation of sea spray and the entrainment of air (bubbles of air trapped in the wave) by breaking waves affect the mixing of the atmosphere and ocean and the flux of heat between air and sea, which in turn affect many geophysical processes and their accurate modeling.

Wave brewing? Credit: Computer Earth/Shutterstock

To our amusement, the wave we created bore an uncanny resemblance to The Great Wave off Kanagawa – often referred to as Hokusai’s Wave – a woodblock print published in the early 1800s by the Japanese artist Katsushika Hokusai.

The print depicts an enormous wave towering over fishing boats. Although the similarity with our wave was completely unintentional, studies suggest that the wave depicted was most likely a freak wave, and that its shape and structure indicate it may also have occurred under crossing conditions. So the likeness to our wave may not have been entirely coincidental after all.

This article is republished from The Conversation by Mark McAllister, Lecturer in Engineering, University of Oxford under a Creative Commons license. Read the original article.


Q Acoustics Concept 300: Magic happens when a budget audio brand makes $4,500 speakers

Ask an audio engineer what their biggest hurdle in designing an awesome speaker is, and chances are it’s not a lack of knowledge or technology, but budget constraints. If price were no issue, any engineering team at a reputable audio company would have the know-how to design an awesome speaker. The difficulty lies in maximizing performance at a given budget.

So what happens when Q Acoustics, a high-value company whose speakers mostly range from $200 to $500, decides to let loose? First, you get the $6,000 Concept 500, a pair of tower speakers that launched in late 2017 to rave reviews. And now, you get the Concept 300, a more compact pair of bookshelf speakers that might still be a steal at $4,500.

I had the chance to spend some time with them this week, and I’m counting the days until I listen to them again.

Where most high-end speakers focus primarily on adding more drivers or building them with exotic materials, Q Acoustics is all about the cabinet that surrounds the drivers. It’s not that difficult to make drivers that sound really good, and to that point, Q Acoustics uses ordinary materials for its woofer and tweeter. But on most speakers, this sound becomes distorted by resonances in the cabinet, creating interference that muddies up detail and the stereo field.

Q Acoustics gets around this with the Concept 300 in a few clever ways. First is something called ‘Dual Gelcore,’ an update to technology the company introduced with the Concept 20 and 40 a few years back. Each speaker is built like a Russian nesting doll, made up of three layers of MDF. The two gaps between these layers are filled with a special gel that absorbs vibrations: any time the cabinet vibrates, the gel absorbs the energy and converts it into heat. (The Concept 20 and 40 used only one Gelcore layer.)

But this was true of the Concept 500 as well. What really sets the Concept 300 apart is its ‘Tensegrity’ stands, and the way Q Acoustics couples the speakers to them. In fact, the stands are such a crucial part of the vibration-reducing equation that Q Acoustics includes them with every purchase. They also might be the coolest-looking stands out there.

Aluminum rods bear the weight of the speakers, while steel cables under tension keep the rods in place. This creates a strong, stable structure with minimal volume, reducing the potential for sympathetic vibrations through this stand. Furthering this effect is a spring-loaded base on the bottom of the Concept 300s, which absorbs even more vibrations. This and many other touches lead to a cabinet that the company claims is near silent; you can read Q Acoustics’ white paper for more.

But okay, you get it: the speaker gets rid of bad vibrations. So how do they sound?

As many speakers as I’ve heard through my line of work, my experience with products above $4,000 is more limited. I also don’t feel comfortable making comparisons with speakers I haven’t heard in my own listening environment.

So what can I say? They sounded glorious. They were some of the best speakers I’ve ever heard. Even though I was sitting far left of the sweet spot, they displayed one of the most remarkably well-defined stereo images I’ve ever had the chance to listen to. The midrange seemed to have perfect tonality, and I could hear all the small details in music I’ve heard a hundred times – something I normally only get from headphone listening.

To be fair, Q Acoustics brought me to Flux Studios in Manhattan, so I had the advantage of hearing the speakers in an acoustically treated room. The Concept 300 are passive speakers that don’t attempt any room correction the way many speakers with digital signal processing do, and they could very well sound different in a lesser room.

And being passive, they’re ‘only’ rated at 55Hz (-6dB) for bass extension. That’s certainly solid in the realm of passive speakers, but I’ve been spoiled by active speakers that can dig deeper than you’d expect. The Concept 300 have plenty of thump, but you might want a subwoofer for movies and the like.

Still, I knew I was listening to something really special. I’ll leave it at that until I get the chance to spend more time with them in my own, not quite-so-fancy living space. I’ll be waiting eagerly.

Published February 2, 2019 — 02:24 UTC
