WEIRD ANTHROPOLOGY

++++++++++

Face of oldest human ancestor comes into focus with new fossil skull

A facial reconstruction of Australopithecus anamensis, based on a newly found, almost-complete skull. (Credit: Matt Crow)

A new fossil discovery means we’re finally able to look upon the face of our oldest ancestor. Paleontologists have discovered an almost-complete skull of Australopithecus anamensis, which has previously only been known from some jawbones, teeth and bits of leg bones. The new find allowed scientists to realistically recreate the hominin’s face for the first time – and it might just shake up the family tree.

The face of this long-lost ancestor is strangely familiar, not least because of the eerily human eyes. But those – along with the leathery brown skin and muttonchop beard – are the kind of best-guess embellishments you’d expect from a recreation like this. Other features, like the large flat nose, the protruding rounded jaw, and prominent brows and cheekbones, are based on the most complete skull of its kind ever found.

Discovered in Ethiopia in 2016, the skull has been dated to 3.8 million years old and attributed to an adult male specimen of Australopithecus anamensis. Interestingly, that makes it both the youngest known fossil of A. anamensis – a species previously thought to have lived between 4.2 and 3.9 million years ago – and one of the oldest cranial remains of any hominin, since such remains tend to dry up in the fossil record before about 3.5 million years ago.

This new find fills an important gap in the human origin story. A. anamensis is the oldest known species in the genus Australopithecus, a group considered to be among the earliest members of the human evolutionary tree. It’s long been accepted that A. anamensis evolved directly into another species, A. afarensis – the most famous example of which is Lucy herself.

But with this new skull, scientists have far more pieces of the puzzle and have realized that they may have previously been putting them together wrong.

The almost-complete skull of Australopithecus anamensis. (Credit: Dale Omori)

The researchers were able to determine which species the skull belonged to by comparing it to the previously-discovered teeth, jaws and other fragments. The rest of the skull showed a strange mix of primitive and advanced (or “derived”) features. Most interesting is the fact that some of the features on A. anamensis are actually more advanced than those on A. afarensis. That calls into question the long-standing idea that the former evolved directly into the latter.

The revised timeline they created says that A. anamensis lived until at least 3.8 million years ago, while A. afarensis arose earlier than previously thought – maybe as early as 3.9 million years ago. Doing the math, that suggests that the two species may have overlapped by as much as 100,000 years.

Once again, it seems like our evolutionary history needs a rewrite. A more complete fossil record can help us patch up holes and revise what we thought we knew.

The research was described in two papers published in the journal Nature. The researchers also describe the find in a video, “The Face of Lucy’s Ancestor Revealed,” available via the source link below.

(For the source of this, and many other interesting articles, and to watch a video associated with it, please visit: https://newatlas.com/science/oldest-human-ancestor-skull-face-reconstruction/)

++++++++++

Traces of two unknown archaic human species turn up in modern DNA

Evidence of two unknown, archaic human species has turned up in our DNA. (Credit: sprestiges/Depositphotos)

Fossils are the most reliable way we can piece together the history of humans, but some clues have been inside us all along. The human genome can tell us where we’ve come from, and it’s hiding more than a few surprises. Now researchers from the University of Adelaide have found evidence of two unknown, archaic human species in modern DNA.

Although we won the race to many corners of the world, modern humans weren’t necessarily the first hominins to leave Africa. It’s long been known that more archaic species like Homo heidelbergensis beat us into Asia and Europe, where they eventually split into sub-species like Neanderthals and Denisovans.

By the time Homo sapiens made it into these regions, other species already called them home. What happened next was only natural – humans bred with these other species.

“Each of us carry within ourselves the genetic traces of these past mixing events,” says Dr João Teixeira, first author of the study. “These archaic groups were widespread and genetically diverse, and they survive in each of us. Their story is an integral part of how we came to be.

“For example, all present-day populations show about two percent of Neanderthal ancestry which means that Neanderthal mixing with the ancestors of modern humans occurred soon after they left Africa, probably around 50,000 to 55,000 years ago somewhere in the Middle East.”

The team identified the islands of Southeast Asia as a particular hotbed of this interbreeding, with modern humans cozying up to at least three different archaic species. One of them is the Denisovans, which have previously been identified in the genomes of people of Asian, Melanesian and indigenous Australian descent. But the other two remain unidentified.

The researchers reconstructed migration routes and examined fossil vegetation records, and suggested the likely locations of these two mixing events. The first appears to have occurred around southern Asia, between modern humans and an unknown group the team is calling Extinct Hominin 1.

The second seems to have occurred around East Asia, the Philippines, and Indonesia, with a group dubbed Extinct Hominin 2.

“We knew the story out of Africa wasn’t a simple one, but it seems to be far more complex than we have contemplated,” says Teixeira. “The Island Southeast Asia region was clearly occupied by several archaic human groups, probably living in relative isolation from each other for hundreds of thousands of years before the ancestors of modern humans arrived. The timing also makes it look like the arrival of modern humans was followed quickly by the demise of the archaic human groups in each area.”

This isn’t the first time clues to unknown human species have turned up in our own DNA. A recent study found evidence of a “ghost” species in human saliva samples, DNA from an as-yet-unknown relative was found in the “dark hearts” of our chromosomes, and genetic studies on an Alaskan fossil revealed a previously-unknown population of Native Americans.

The research was published in the journal PNAS.

(For the source of this, and many additional important articles, please visit: https://newatlas.com/archaic-human-species-dna/60601/)

++++++++++

FaceApp Uncannily Captures These Classic Biological Signs of Aging 

A guide to what it is, exactly, that makes faces look so old.

By Emma Betuel

 

This week, celebrities ranging from the Jonas Brothers to Ludacris gave us a peek into what they might look like in old age, all with the help of artificial intelligence. But how exactly has FaceApp taken a stable of celebrities and transformed them into elderly versions of themselves? The app may be powered by A.I., but it’s informed by the biology of aging.

FaceApp was designed by the Russian company Wireless Lab, which debuted the first version of the app back in 2017. But this new round of photos is particularly detailed, which explains the app’s resurgence this week. Just check out geriatric Tom Holland, replete with graying hair and thickened brows — and strangely, a newfound tan.

 


What tweaks does FaceApp make to achieve that unforgiving effect? On the company’s website, the explanation is fairly vague: “We can certainly add some wrinkles to your face,” the team writes. But a closer look at the “FaceApp Challenge” pictures shows that it does far more than that.

FaceApp has been tight-lipped about how its software works — though we know it’s based on a neural network, a type of artificial intelligence. Inverse has reached out to FaceApp for clarification about how the company achieves its aging effects and will update this story when we hear back.

Regardless, scientists have been studying the specific markers of facial aging for decades, which give us a pretty good idea of what changes FaceApp’s neural network takes into consideration when it transports users through time.

The Original FaceApp

Before there was FaceApp, there was Rembrandt, a 17th-century Dutch painter who had a thing for highly unforgiving self portraits, about 40 of which survive today.

In 2012, scientists in Israel performed a robust facial analysis on Rembrandt’s work that was initially intended to separate the real paintings from forgeries. But their paper, published in The Israel Medical Association Journal, also incorporated “subjective and objective” measures of facial aging that they used to measure the impacts of time on the artist’s face. These measures have some applications to our modern-day FaceApp images.

Their formula focused on wrinkles that highlighted Rembrandt’s increasing age. Those included forehead and glabellar wrinkles — the wrinkles between the eyes that show up when you furrow your brow but seem to stick around later in life. They also analyzed accumulations of loose skin around the eyelids, called dermatochalasis (which creates “bags”), and nasolabial folds, the “smile lines” that emerge between the nose and mouth.

Rembrandt’s “brow index” suggested that, as he aged, his brow descended.

Fortunately, Rembrandt’s commitment to realism also gave them bigger aging-related features to work with. They quantified his “jowl formation” and the development of upper neck fat. But the most powerful metric was their “brow index,” which, over time, documented a descending brow line in the artist. Rembrandt’s eyebrows descended markedly starting in his 20s but leveled out by his 40s.

We can see some of the similar markers in these current FaceApp images. Just look at the aged Tottenham Hotspur squad, complete with furrowed brows, eye bags, and descending jowls — just like Rembrandt.


What Makes a Face Look Older?

Wrinkles notwithstanding, there is another way that FaceApp may be working its magic. There’s some evidence that perceived age is partially linked to facial color contrast.

Also in 2012, a team of scientists in France and Pennsylvania demonstrated the impact of contrast in a series of experiments on images of female Caucasian faces. Faces with high color contrast between facial features (eyes, lips, and mouth, for example) and the skin surrounding them tended to appear younger than faces with low contrast in those areas.

An image from the 2012 study showed how low color contrast in the face (right) makes people appear older.

In 2017, members of that team published another study suggesting that contrast holds information about age across ethnic groups. There, they found that color contrast of facial features decreased with age across groups, but most significantly in Caucasian and South Asian women. Contrast decreased with age in Chinese and Latin American women, too, but not as strongly.

Importantly, they also note that when you artificially enhance contrast, faces tend to look younger as well, suggesting that contrast’s relationship to age perception is strong.

“We have also found that artificially increasing those aspects of facial contrast that decrease with age in diverse races and ethnicities makes the faces look younger, independent of the ethnic origin of the face and the cultural origin of the observers,” they write.
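To make the idea concrete, here is a minimal, illustrative Python sketch of a facial contrast score: the average color difference between feature regions (eyes, lips) and nearby skin patches. The pixel boxes and the scoring itself are hypothetical – this is not FaceApp’s algorithm or the exact metric used in these studies.

    # Illustrative sketch only: a crude "facial contrast" score. Not FaceApp's
    # algorithm, and not the exact metric from the 2012/2017 studies.
    import numpy as np

    def region_mean(image, box):
        """Mean RGB value inside a (top, bottom, left, right) pixel box."""
        top, bottom, left, right = box
        return image[top:bottom, left:right].reshape(-1, 3).mean(axis=0)

    def contrast_score(image, feature_boxes, skin_boxes):
        """Average color distance between facial features and nearby skin.
        Higher values roughly correspond to 'younger-looking' contrast."""
        distances = []
        for feature, skin in zip(feature_boxes, skin_boxes):
            diff = region_mean(image, feature) - region_mean(image, skin)
            distances.append(np.linalg.norm(diff))  # Euclidean distance in RGB
        return float(np.mean(distances))

    # Hypothetical usage on a 256 x 256 face image loaded as a float array:
    # face = np.asarray(Image.open("face.jpg"), dtype=float)
    # eyes, lips = (90, 110, 70, 186), (170, 190, 100, 156)
    # eye_skin, lip_skin = (60, 80, 70, 186), (200, 220, 100, 156)
    # print(contrast_score(face, [eyes, lips], [eye_skin, lip_skin]))

A real system would locate the feature regions automatically with a face-landmark model rather than fixed boxes, but the underlying measurement – feature-versus-skin color difference – is the same idea the researchers describe.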

 


 

Let’s take a closer look now.

Now that we know all this, let’s take another look at those photos of Tom Holland. There does seem to be some kind of color manipulation going on, in addition to the obvious wrinkling of his skin, though it’s unclear whether the photo was altered after FaceApp was applied.

Still, color contrast and specific physical features (like “jowl formation”) are factors that may be contributing to FaceApp’s seemingly magical transformation of age — which, for now, has captivated the internet.

(For the source of this, and many other interesting articles, please visit: https://www.inverse.com/article/57787-faceapp-challenge-signs-of-biological-aging/)

++++++++++

MIT and Google researchers use deep learning to decipher ancient languages.


  • Researchers from MIT and Google Brain discover how to use deep learning to decipher ancient languages.
  • The technique can be used to read languages that died long ago.
  • The method builds on the ability of machines to quickly complete monotonous tasks.

There are about 6,500-7,000 languages currently spoken in the world. But that’s less than a quarter of all the languages people spoke over the course of human history. That total number is around 31,000 languages, according to some linguistic estimates. Every time a language is lost, so goes that way of thinking, of relating to the world. The relationships, the poetry of life uniquely described through that language are lost too. But what if you could figure out how to read the dead languages? Researchers from MIT and Google Brain created an AI-based system that can accomplish just that.

While languages change, many of the symbols, and the way words and characters are distributed, stay relatively constant over time. Because of that, you could attempt to decode a long-lost language if you understood its relationship to a known progenitor language. This insight is what allowed the team – which included Jiaming Luo and Regina Barzilay from MIT and Yuan Cao from Google’s AI lab – to use machine learning to decipher Linear B, a script used to write an early form of Greek around 1400 BC, and Ugaritic, a cuneiform-written Semitic language related to Hebrew that is also over 3,000 years old.

Linear B was previously cracked by a human – in 1953, it was deciphered by Michael Ventris. But this was the first time the language was figured out by a machine.

The approach by the researchers focused on four key properties related to the context and alignment of the characters to be deciphered: distributional similarity, monotonic character mapping, structural sparsity, and significant cognate overlap.

They trained the AI network to look for these traits, achieving the correct translation of 67.3% of Linear B cognates (words of common origin) into their Greek equivalents.

What AI can potentially do better in such tasks, according to MIT Technology Review, is simply take a brute-force approach that would be too exhausting for humans. A machine can attempt to translate symbols of an unknown alphabet by quickly testing them against symbols from one language after another, running them through everything that is already known.
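As a rough flavor of the cognate-matching idea – and only as a toy sketch, not the authors’ neural minimum-cost-flow model – the Python snippet below pairs each word of an unknown script with at most one word in a known related language by minimizing total character-level dissimilarity. The transliterated words are invented for the example.

    # Toy illustration of cognate alignment: match unknown-script words to a
    # known related language by minimizing character-level dissimilarity.
    # This is NOT the paper's neural minimum-cost-flow system.
    from difflib import SequenceMatcher
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def dissimilarity(a, b):
        # 1 - similarity ratio of the character sequences (0 means identical)
        a, b = a.replace("-", ""), b.replace("-", "")
        return 1.0 - SequenceMatcher(None, a, b).ratio()

    # Hypothetical transliterations, invented purely for this example
    unknown_words = ["ko-no-so", "ti-ri-po", "pa-te"]
    known_words = ["knossos", "tripos", "pater", "doulos"]

    # Build a cost matrix and solve the one-to-one assignment problem
    cost = np.array([[dissimilarity(u, k) for k in known_words]
                     for u in unknown_words])
    rows, cols = linear_sum_assignment(cost)

    for r, c in zip(rows, cols):
        print(f"{unknown_words[r]} -> {known_words[c]} (cost {cost[r, c]:.2f})")

The real system learns its character-level costs with a neural model and adds constraints like monotonic mapping and structural sparsity, but the core step – finding a minimum-cost pairing between vocabularies – is the same shape of problem.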

Next for the scientists? Perhaps the translation of Linear A – the older, still-undeciphered script of Minoan Crete that no one has succeeded in cracking so far.

You can check out their paper “Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B” here.


(For the source of this, and many other interesting articles, please visit: https://bigthink.com/technology-innovation/a-i-is-translating-messages-of-long-lost-languages/)

++++++++++

Why Do We Procrastinate? Scientists Pinpoint 2 Explanations in the Brain

Scientists find biological evidence that it’s not just about laziness.


There are an infinite number of excuses for putting off your to-do list. Letting clutter build up feels easier than Marie Kondo-ing the whole house, and filing your taxes on time can feel unnecessary when you’ll probably get an extension. The roots of this common but troubling habit run deep. In early July, a team of German researchers showed that procrastination’s root cause is not sheer laziness or lack of discipline but rather, a surprising factor deep in the brain.

In the study, published in Social Cognitive and Affective Neuroscience, scientists at Ruhr University Bochum argue that the urge to procrastinate is shaped in part by your genes.

“To my knowledge, our study is the first to investigate the genetic influences on the tendency to procrastinate,” first author and biopsychology researcher Caroline Schlüter, Ph.D., tells Inverse.

In the team’s study of 287 people, they discovered that women who carried one specific allele (a variant of a gene) were more likely to report more procrastination-like behavior than those who didn’t.

Where Does Procrastination Come From?

Procrastination may have its root cause deep in the brain.

The team, led by Erhan Genc, Ph.D., a professor in the university’s biopsychology department, has been studying how procrastination might manifest in the brain for several years. His brain-based data suggests procrastination is more about managing the way we feel about tasks, as opposed to simply managing the time we have to dedicate to them.

In 2018, Genc and his colleagues published a study that linked the amygdala, a brain structure involved in emotional processing, to the urge to put things off. People with a tendency to procrastinate, they argued, had bigger amygdalas.

“Individuals with a larger amygdala may be more anxious about the negative consequences of an action — they tend to hesitate and put off things,” he told the BBC.

In the new study, the team tried to identify a genetic pattern underlying their discovery about bigger amygdalas. They believe they’ve found one affecting women specifically. The gene they highlight affects dopamine, a neurotransmitter central to the brain’s reward system that’s implicated in drug use, sex, and other pleasurable activities.

In particular, the gene encodes an enzyme called tyrosine hydroxylase, which helps regulate dopamine production. Women who carried two copies of a variant of that gene, they showed, produced slightly more dopamine than those with an alternative version of the gene, and they also tended to be “prone to procrastination,” according to self-reported surveys.

While this is hardly a causal relationship, the authors argue that there’s a connection between the tendency to procrastinate and this gene that regulates dopamine in the brain, at least in women.
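For readers curious what testing an association like this can look like in code, here is a minimal, hypothetical Python sketch: it compares self-reported procrastination scores between two genotype groups with a rank-based test. The numbers are invented, and this is not the Bochum team’s actual analysis.

    # Hypothetical sketch of a genotype-vs-trait association check.
    # Invented data; not the Ruhr University Bochum team's analysis.
    import numpy as np
    from scipy.stats import mannwhitneyu

    # Self-reported procrastination scores (higher = more procrastination)
    carriers = np.array([4.2, 3.8, 4.5, 3.9, 4.1])      # two copies of the variant
    non_carriers = np.array([3.1, 3.6, 2.9, 3.4, 3.0])  # alternative genotype

    stat, p_value = mannwhitneyu(carriers, non_carriers, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.3f}")  # association, not causation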

They note, however, that this connection probably exists outside of the one highlighted in their earlier study on the amygdala. When they investigated whether there was a connection between the genotype of procrastinators and brain connectivity in the amygdala, they found no significant correlation.

“Thus, this study suggests that genetic, anatomical and functional differences affect trait-like procrastination independently of one another,” they write.

In other words, there is probably more than one biological process that underpins procrastination, and these researchers suggest that they may have identified two of them so far.

Why Might Dopamine Influence Procrastination?

Despite its well-known link to pleasure, dopamine’s role in procrastination may not come down to its primary role. As Genc notes, dopamine is also related to “cognitive flexibility,” which is the ability to juggle many different ideas at once or shift your thinking in an instant. While this is helpful for multitasking, the team argues that it might also make someone more prone to being distracted.

“We assume that this makes it more difficult to maintain a distinct intention to act,” says Schlüter. “Women with a higher dopamine level as a result of their genotype may tend to postpone actions because they are more distracted by environmental and other factors.”

It would be a long jump to suggest that one gene related to dopamine production affects all of the complex factors governing human procrastination. There are almost certainly a range of different factors at play that may have influenced their results, notably the hormone estrogen, seeing as the pattern was only found in women. But hormones and neurotransmitters aside, the urge to procrastinate likely comes down to more than just genetic signatures. Sometimes, life just gets in the way.


(For the source of this, as well as many other important and interesting articles, please visit: https://www.inverse.com/article/57577-why-do-we-procrastinate-biological-explanations/)

++++++++++

Advice to yourself

What advice would you give your younger self? This is the first study to ever examine it.

By SCOTTY HENDRICKS

The study asked several hundred volunteers over the age of 30 to describe what they most wanted to tell their younger selves. The majority of the answers were fairly predictable, like “Do what you love” and “Don’t smoke,” but some of the answers seemed more insightful, like “What you do twice becomes a habit; be careful of what habits you form.”

The study also found that most people had begun to follow the advice they would have given their younger selves, and that doing so had made them better people. For this reason, the researchers speculated that consulting yourself for the advice you would offer your younger self may be more useful than seeking out advice from others.

  • A new study asked hundreds of participants what advice they would give their younger selves if they could.
  • The subject matter tended to cluster around familiar areas of regret.
  • The test subjects reported that they did start following their own advice later in life, and that it changed them for the better.

Everybody regrets something; it seems to be part of the human condition. Ideas and choices that sounded good at the time can look terrible in retrospect. Almost everybody has a few words of advice for their younger selves they wish they could give.

Despite this, there has never been a serious study into what advice people would give their younger selves until now.

Let me give me a good piece of advice

The study, by Robin Kowalski and Annie McCord at Clemson University and published in The Journal of Social Psychology, asked several hundred volunteers, all of whom were over the age of 30, to answer a series of questions about themselves. One of the questions asked them what advice they would give their younger selves. Their answers give us a look into what areas of life everybody wishes they could have done better in.

Previous studies have shown that regrets tend to fall into six general categories. The answers on this test can be similarly organized into five groups:

  • Money (Save more money, younger me!)
  • Relationships (Don’t marry that money grabber! Find a nice guy to settle down with.)
  • Education (Finish school. Don’t study business because people tell you to, you’ll hate it.)
  • A sense of self (Do what you want to do. Never mind what others think.)
  • Life goals (Never give up. Set goals. Travel more.)

These pieces of advice were well represented in the survey. Scrolling through them, most of the advice people would give themselves verges on the cliché in these areas. Only the occasional weight of experience, seeping through advice that can otherwise be summed up as “don’t smoke,” “don’t waste your money,” or “do what you love,” makes it readable.

A few bits of excellent counsel do manage to slip through. Some of the better ones included:

  • “Money is a social trap.”
  • “What you do twice becomes a habit; be careful of what habits you form.”
  • “I would say do not ever base any decisions on fear.”

The study also asked whether the participants had started following the advice they wish they could have given themselves. 65.7% of them said “yes,” and that doing so had helped them become the person they want to be rather than the person society tells them they should be. Perhaps it isn’t too late for everybody to start taking their own advice.

Kowalski and McCord write:

“The results of the current studies suggest that, rather than just writing to Dear Abby, we should consult ourselves for advice we would offer to our younger selves. The data indicate that there is much to be learned that can facilitate well-being and bring us more in line with the person that we would like to be should we follow that advice.”

(For the source of this, and many other important articles, please visit: https://bigthink.com/personal-growth/advice-to-younger-self/)

++++++++++

Emotional temperament in babies associated with specific gut bacteria species

A new study is the first to investigate associations between gut microbiome composition and the development of certain behavioral characteristics. (Credit: muro/Depositphotos)

A new study from the University of Turku has uncovered interesting associations between an infant’s gut microbiome composition at the age of 10 weeks, and the development of certain temperament traits at six months of age. The research does not imply causation but instead adds to a compelling and growing body of evidence connecting gut bacteria with mood and behavior.

It is still extraordinarily early days for many scientists investigating the broader role of the gut microbiome in humans. While some studies are revealing associations between mental health conditions such as depression or schizophrenia and the microbiome, these are only general correlations. Evidence of these intertwined connections between the gut and brain certainly suggests a fascinating bi-directional relationship; however, positive mental health is not simply a matter of taking a particular probiotic supplement.

Even less research is out there examining associations between the gut microbiome and behavior in infants. One 2015 study examined this relationship in toddlers aged between 18 and 27 months, but this new study set out to investigate the association at an even younger age. The hypothesis is that if the early months of a young life are so fundamental to neurodevelopment, and our gut bacteria are fundamentally linked with the brain, then microbiome composition could be vital to the development of basic behavioral traits.

The study recruited 303 infants. A stool sample was collected and analyzed at the age of two and a half months, and then at around six months of age the mothers completed a behavior questionnaire evaluating the child’s temperament. The most general finding was that greater microbial diversity was associated with less fear reactivity and lower negative emotionality.

“It was interesting that, for example, the Bifidobacterium genus including several lactic acid bacteria was associated with higher positive emotions in infants,” says Anna Aatsinki, one of the lead authors on the study. “Positive emotionality is the tendency to experience and express happiness and delight, and it can also be a sign of an extrovert personality later in life.”

On a more granular level the study homed in on several specific associations between certain bacterial genera and infant temperaments. High abundance of Bifidobacterium and Streptococcus, and low levels of Atopobium, were associated with positive emotionality. Negative emotionality was associated with Erwinia, Rothia and Serratia bacteria. Fear reactivity in particular was found to be specifically associated with an increased abundance of Peptinophilus and Atopobium bacteria.
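For a sense of what this kind of correlational analysis involves, here is a minimal, illustrative Python sketch: it computes a Shannon diversity index from genus-level counts and checks its rank correlation with a temperament score. The data and layout are invented; this is not the Turku group’s actual pipeline.

    # Illustrative sketch of an alpha-diversity vs. temperament correlation.
    # Invented data; not the University of Turku study's actual pipeline.
    import numpy as np
    from scipy.stats import spearmanr

    def shannon_diversity(counts):
        """Shannon index H = -sum(p * ln p) over non-zero taxon proportions."""
        counts = np.asarray(counts, dtype=float)
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    # Rows = infants, columns = hypothetical bacterial genus counts
    genus_counts = np.array([
        [120, 30,  5,  0, 45],
        [200, 10,  0,  0, 15],
        [ 80, 60, 25, 10, 55],
        [150,  5,  0,  0,  5],
    ])
    fear_reactivity = np.array([3.1, 4.0, 2.2, 4.5])  # questionnaire scores

    diversity = np.array([shannon_diversity(row) for row in genus_counts])
    rho, p_value = spearmanr(diversity, fear_reactivity)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

A correlation like this, on its own, says nothing about cause and effect, which is exactly the caveat the researchers emphasize below.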

The researchers are incredibly clear these findings are merely associational observations and no causal connection is suggested. These kinds of correlational studies are simply the first step, pointing the way to future research better equipped to investigate the underlying mechanisms that could be generating these associations.

“Although we discovered connections between diversity and temperament traits, it is not certain whether early microbial diversity affects disease risk later in life,” says Aatsinki. “It is also unclear what are the exact mechanisms behind the association. This is why we need follow-up studies as well as a closer examination of metabolites produced by the microbes.”

The new study was published in the journal Brain, Behavior, and Immunity.

(For the source of this, and many other important articles, please visit: https://newatlas.com/gut-bacteria-microbiome-baby-infant-behavior-mood/60197/)

++++++++++

Released in the same year as The Wild Bunch and Butch Cassidy and the Sundance Kid, Henry Hathaway’s western was defiantly old-fashioned in comparison

Kim Darby and John Wayne in True Grit. Wayne called Marguerite Roberts’ script the best he’d ever read. Photograph: ClassicStock/Alamy Stock Photo

The year 1969 was a true inflection point for the American western, a once-dominant genre that had become a casualty of the culture, particularly when Vietnam had rendered the moral clarity of white hats and black hats obsolete. A handful of westerns were released by major studios that year, including forgettable or regrettable star vehicles for Burt Reynolds (Sam Whiskey) and Clint Eastwood (Paint Your Wagon), who were trying to revitalize the genre with a touch of whimsy. But 50 years later, three very different films have endured: Butch Cassidy and the Sundance Kid, The Wild Bunch and True Grit. Together, they represented the past, present and future of the western.

 

In the present, there was Butch Cassidy and the Sundance Kid, the year’s runaway box-office smash, grossing more than the counterculture duo of Midnight Cowboy and Easy Rider, the second- and third-place finishers, combined. George Roy Hill’s hip western-comedy, scripted by William Goldman and starring Paul Newman and Robert Redford, turned a story of outlaw bank robbers into a knowing and cheerfully sardonic entertainment that felt attuned to modern sensibilities. Sam Peckinpah’s Wild Bunch predicted a future of revisionist westerns, full of grizzled antiheroes, great spasms of stylized violence, and the messy inevitability of unhappy endings. A whiff of death from a genre in decline.

By contrast, True Grit looks like it could have been released 10, 20 or 30 years earlier, and with many of the same people working behind and in front of the camera. Its legendary producer, Hal B Wallis, was the driving force behind such Golden Age classics as Casablanca and The Adventures of Robin Hood, and his director, Henry Hathaway, cut his teeth as Cecil B DeMille’s assistant on 1923’s Ben Hur before spending decades making studio westerns, including a 1932 debut (Heritage of the Desert) that gave Randolph Scott his start and seven films with Gary Cooper. And then, of course, there’s John Wayne as Rooster Cogburn, stretching himself enough to win his only Oscar for best actor, but drawing heavily on his own pre-established iconography. It was, for him, a well-earned victory lap.

True Grit may be defiantly old-fashioned and stodgy when considered against the films of the day, but it’s also an example of how durable the genre actually was – and how it would be again in 2010, when the Coen brothers took their own crack at Charles Portis’s 1968 novel and produced the biggest hit of their careers. What would be more escapist than ducking into a movie theater in the summer of ’69 and stepping into a time machine where John Wayne is a big star, answering a call to adventure across a beautiful Technicolor expanse of mountains and prairies? The film has much more sophistication than the average throwback, but the search for justice across Indian Territory is uncomplicated and righteous, and the half-contentious/half-sentimental relationship between a plucky teenager and an irascible old coot grounds it in the tried-and-true. The defiant message here is: this can still work!


And boy does it ever. Kim Darby didn’t get much of a career boost for playing Mattie Ross, a fiercely determined and morally upstanding tomboy on the hunt for her father’s killer, but every bit of energy and urgency the film needs comes from her. When Mattie’s father is shot by Tom Chaney (Jeff Corey), a hired hand on their ranch near Fort Smith, Arkansas, she takes it upon herself to make sure he’s caught and dragged before the hanging judge. Whatever emotion she feels about the loss is set aside, limited to a brief crying jag in the privacy of a hotel bedroom, and she’s all business the rest of the time. When the Fort Smith sheriff doesn’t seem sufficiently motivated, she seeks out US marshal Cogburn (Wayne), a one-eyed whiskey guzzler who lives alone with a Chinese shopkeep and a cat he calls General Sterling Price.

 

The odd man out in their posse is a Texas ranger named La Boeuf (Glen Campbell), whose name Wayne and everyone else pronounce as “La Beef”, as part of his instinctual disrespect for Texans – and, really, anyone who fought for the Confederacy during the civil war. (La Boeuf makes a point of saying he fought for General Kirby Smith, rather than the south, which suggests a sense of shame that stands out in our current age of tiki-torch monument protests.) The chemistry between the three is terrific, despite Campbell’s limitations as an actor, because it’s constantly changing: Rooster and La Boeuf are sometimes aligned as mercenaries who see Chaney as a chance to take money from Mattie and from the family of a Texas state senator that the scoundrel also shot. Rooster comes to Mattie’s defense when La Boeuf treats her like a wayward child and whips her with a switch, but the tables turn on that, too, when Rooster’s protective side holds her back.

Wayne called Marguerite Roberts’ script the best he’d ever read – she was on the Hollywood blacklist, which made them odd political bedfellows – and True Grit has nearly as much pop in the dialogue as the showier Butch Cassidy. Mattie gets to turn her father’s oversized pistol on Chaney, but language is her weapon of choice, delivered in such an intellectual fusillade that her adversaries tend to surrender quickly. (A running joke about the lawyer she intends to sic on them has a wonderful payoff.) The three leads exchange playful barbs and colorful stories, too, with Rooster ragging on La Boeuf’s marksmanship (“This is the famous horse killer from El Paso”) or spending the downtime before an ambush sharing the troubled events from his life that have gotten him to this place.

There’s a degree to which True Grit is a victory lap for Wayne, who gets one of his last – and certainly one of his best – opportunities to pay off a career in westerns. Yet Wayne genuinely lets down his guard in key moments and allows real pain and vulnerability to seep through, enough to complicate his tough-guy persona without demolishing it altogether. It may not have the gravitas of Clint Eastwood in Unforgiven, but it’s the same type of performance, the reckoning of a western gunslinger who’s seen and done terrible things, lost the people he loved, and seems intent on living out his remaining days alone. Without the redemptive power of Mattie’s kindness and decency, True Grit is about a man left to drink himself to death.

(For the source of this, and many other quite interesting articles and features, please visit: https://www.theguardian.com/film/2019/jun/11/true-grit-john-wayne-1969-henry-hathaway/)

John Wayne – Very brief partial bio:
John Wayne was born Marion Robert Morrison in Winterset, Iowa on May 26th, 1907. He attended the University of Southern California (USC) on an athletic scholarship, but a broken collarbone ended his athletic career and, with it, his scholarship. With no funds available for school, he had to leave USC. His coach, who had been giving Tom Mix tickets to USC games, asked Mix and director John Ford to give Wayne a job as a prop boy and extra. Wayne quickly started appearing as an extra in many films. He also met Wyatt Earp, who was friends with Mix; Wayne would later credit Earp as the inspiration for his walk, talk and persona.

In 1969, Wayne won the Best Actor Oscar for his role in True Grit. It would be his second time being nominated, the first came 17 years earlier.

Wayne passed away from stomach cancer at the UCLA Medical Center on June 11, 1979.

Wayne was a member of Marion McDaniel Masonic Lodge No. 56 in Tucson, Arizona. He was a 32° Scottish Rite Mason, a member of the York Rite and a member of Al Malaikah Shrine Temple in Los Angeles.

(For a more extensive bio please visit: https://www.masonrytoday.com/index.php?new_month=5&new_day=26&new_year=2015)

++++++++++

By 2100 there could be 4.9bn dead users on Facebook. So who controls our digital legacy after we have gone? As Black Mirror returns, we delve into the issue.

Lost … Hayley Atwell in the Black Mirror episode Be Right Back. Photograph: Channel 4.

Esther Earl never meant to tweet after she died. On 25 August 2010, the 16-year-old internet vlogger died after a four-year battle with thyroid cancer. In her early teens, Esther had gained a loyal following online, where she posted about her love of Harry Potter, and her illness. Then, on 18 February 2011 – six months after her death – Esther posted a message on her Twitter account, @crazycrayon.

“It’s currently Friday, January 14 of the year 2010. just wanted to say: I seriously hope that I’m alive when this posts,” she wrote, adding an emoji of a smiling face in sunglasses. Her mother, Lori Earl from Massachusetts, tells me Esther’s online friends were “freaked out” by the tweet.

“I’d say they found her tweet jarring because it was unexpected,” she says. Earl doesn’t know which service her daughter used to schedule the tweet a year in advance, but believes it was intended for herself, not for loved ones after her death. “She hoped she would receive her own messages … [it showed] her hopes and longings to still be living, to hold on to life.”

Although Esther did not intend her tweet to be a posthumous message for her family, a host of services now encourage people to plan their online afterlives. Want to post on social media and communicate with your friends after death? There are lots of apps for that! Replika and Eternime are artificially intelligent chatbots that can imitate your speech for loved ones after you die; GoneNotGone enables you to send emails from the grave; and DeadSocial’s “goodbye tool” allows you to “tell your friends and family that you have died”. In season two, episode one of Black Mirror, a young woman recreates her dead boyfriend as an artificial intelligence – what was once the subject of a dystopian 44-minute fantasy is nearing reality.

Esther Earl at home in 2010 … before she died, she arranged for emails to be sent to her imagined future self. Photograph: Boston Globe via Getty Images.

But although Charlie Brooker portrayed the digital afterlife as something twisted, in reality online legacies can be comforting for the bereaved. Esther Earl used a service called FutureMe to send emails to herself, stating that her parents should read them if she died. Three months after Esther’s death, her mother received one of these emails. “They were seismically powerful,” she says. “That letter made us weep, but also brought us great comfort – I think because of its intentionality, the fact that she was thinking about her future, the clarity with which she accepted who she was and who she hoped to become.”

Because of the power of Esther’s messages, Earl knows that if she were dying, she would also schedule emails for her husband and children. “I think I would be very clear about how many messages I had written and when to expect them,” she adds, noting they could cause anxiety for relatives and friends otherwise.

Yet while the terminally ill ponder their digital legacies, the majority of us do not. In November 2018, a YouGov survey found that only 7% of people want their social media accounts to remain online after they die, yet it is estimated that by 2100, there could be 4.9bn dead users on Facebook alone. Planning your digital death is not really about scheduling status updates for loved ones or building an AI avatar. In practice, it is a series of unglamorous decisions about deleting your Facebook, Twitter and Netflix accounts; protecting your email against hackers; bestowing your music library to your friends; allowing your family to download photos from your cloud; and ensuring that your online secrets remain hidden in their digital alcoves.

In Be Right Back, a young woman recreates her dead boyfriend as an artificial intelligence. Photograph: Channel 4.

“We should think really carefully about anything we’re entrusting or storing on any digital platform,” says Dr Elaine Kasket, a psychologist and author of All the Ghosts in the Machine: Illusions of Immortality in the Digital Age. “If our digital stuff were like our material stuff, we would all look like extreme hoarders.” Kasket says it is naive to assume that our online lives die with us. In practice, your hoard of digital data can cause endless complications for loved ones, particularly when they don’t have access to your passwords.

“I cursed my father every step of the way,” says Richard, a 34-year-old engineer from Ontario who was made executor of his father’s estate four years ago. Although Richard’s father left him a list of passwords, not one remained valid by the time of his death. Richard couldn’t access his father’s online government accounts, his email (to inform his contacts about the funeral), or even log on to his computer. For privacy reasons, Microsoft refused to help Richard access his father’s computer. “Because of that experience I will never call Microsoft again,” he says.

Compare this with the experience of Jan-Ole Lincke, a 24-year-old pharmaceutical worker from Hamburg whose father left up-to-date passwords behind on a sheet of paper when he died two years ago. “Getting access was thankfully very easy,” says Lincke, who was able to download pictures from his father’s Google profile, shut down his email to prevent hacking, and delete credit card details from his Amazon account. “It definitely made me think about my own [digital legacy],” says Lincke, who has now written his passwords down.

Yet despite growing awareness about the data we leave behind, very few of us are doing anything about it. In 2013, a Brighton-based company called Cirrus Legacy made headlines after it began allowing people to securely leave behind passwords for a nominated loved one. Yet the Cirrus website is now defunct, and the Guardian was unable to reach its founder for comment. Clarkson Wright & Jakes Solicitors, a Kent-based law firm that offered the Cirrus service to its clients, says the option was never popular.

“We’ve been aware for quite a period now that the big issue for the next generation is digital footprints,” says Jeremy Wilson, head of the wills and estates team at CWJ. “Cirrus made sense and ticked a lot of boxes but, to be honest, not one client has taken us up on it.”

Wilson also notes that people don’t know about the laws surrounding digital assets such as the music, movies and games they have downloaded. While many of us assume we own our iTunes library or collection of PlayStation games, in fact, most digital downloads are only licensed to us, and this licence ends when we die.

What we want to do and what the law allows us to do with our digital legacy can therefore be very different things. Yet at present it is not the law that dominates our decisions about digital death. “Regulation is always really slow to keep up with technology,” says Kasket. “That means that platforms and corporations like Facebook end up writing the rules.”

Andrew Scott stars in the new Black Mirror episode Smithereens, which explores our digital dependency. Photograph: Netflix / Black Mirror.

In 2012, a 15-year-old German girl died after being hit by a subway train in Berlin. Although the girl had given her parents her online passwords, they were unable to access her Facebook account because it had been “memorialised” by the social network. Since October 2009, Facebook has allowed profiles to be transformed into “memorial pages” that exist in perpetuity. No one can then log into the account or update it, and it remains frozen as a place for loved ones to share their grief.

The girl’s parents sued Facebook for access to her account – they hoped to use it to determine whether her death was suicide. They originally lost the case, although a German court later granted the parents permission to get into her account, six years after her death.

“I find it concerning that any big tech company that hasn’t really shown itself to be the most honest, transparent or ethical organisation is writing the rulebook for how we should grieve, and making moral judgments about who should or shouldn’t have access to sensitive personal data,” says Kasket. The author is concerned with how Facebook uses the data of the dead for profit, arguing that living users keep their Facebook accounts because they don’t want to be “locked out of the cemetery” and lose access to relatives’ memorialised pages. As a psychologist, she is also concerned that Facebook is dictating our grief.

“Facebook created memorial profiles to prevent what they called ‘pain points’, like getting birthday reminders for a deceased person,” she says. “But one of the mothers I spoke to for my book was upset when her daughter’s profile was memorialised and she stopped getting these reminders. She was like, ‘This is my daughter, I gave birth to her, it’s still her birthday’.”

While Facebook users now have the option to appoint a “legacy contact” who can manage or delete their profile after death, Kasket is concerned that there are very few personalisation options when it comes to things like birthday reminders, or whether strangers can post on your wall. “The individuality and the idiosyncrasy of grief will flummox Facebook every time in its attempts to find a one-size-fits-all solution,” she says.

Pain points … should we allow loved ones to curate our legacy, or create ‘memorial pages’? Photograph: Yui Mok/PA.

Matthew Helm, a 27-year-old technical analyst from Minnesota, says his mother’s Facebook profile compounded his grief after she died four years ago. “The first year was the most difficult,” says Helm, who felt some relatives posted about their grief on his mother’s wall in order to get attention. “In the beginning I definitely wished I could just wipe it all.” Helm hoped to delete the profile but was unable to access his mother’s account. He did not ask the tech giant to delete the profile because he didn’t want to give it his mother’s death certificate.

Conversely, Stephanie Nimmo, a 50-year-old writer from Wimbledon, embraced the chance to become her husband’s legacy contact after he died of bowel cancer in December 2015. “My husband and I shared a lot of information on Facebook. It almost became a bit of an online diary,” she says. “I didn’t want to lose that.” She is pleased people continue to post on her husband’s wall, and enjoys tagging him in posts about their children’s achievements. “I’m not being maudlin or creating a shrine, just acknowledging that their dad lived and he played a role in their lives,” she explains.

Nimmo is now passionate about encouraging people to plan their digital legacies. Her husband also left her passwords for his Reddit, Twitter, Google and online banking accounts. He also deleted Facebook messages he didn’t want his wife to see. “Even in a marriage there are certain things you wouldn’t want your other half to see because it’s private,” says Nimmo. “It worries me a little that if something happened to me, there are things I wouldn’t want my kids to see.”

When it comes to the choice between allowing relatives access to your accounts or letting a social media corporation use your data indefinitely after your death, privacy is a fundamental issue. Although the former makes us sweat, the latter is arguably more dystopian. Dr Edina Harbinja is a law lecturer at Aston University, who argues that we should all legally be entitled to postmortem privacy.

“The deceased should have the right to control what happens to their personal data and online identities when they die,” she says, explaining that the Data Protection Act 2018 defines “personal data” as relating only to living people. Harbinja says this is problematic because rules such as the EU’s General Data Protection Regulation don’t apply to the dead, and because there are no provisions that allow us to pass on our online data in wills. “There can be many issues because we don’t know what would happen if someone is a legacy contact on Facebook, but the next of kin want access.” For example, if you decide you want your friend to delete your Facebook pictures after you die, your husband could legally challenge this. “There could be potential court cases.”

Kasket says people “don’t realise how much preparation they need to do in order to make plans that are actually able to be carried out”. It is clear that if we don’t start making decisions about our digital deaths, then someone else will be making them for us. “What one person craves is what another person is horrified about,” says Kasket.

Esther Earl continued to tweet for another year after her death. Automated posts from the music website Last.fm updated her followers about the music she enjoyed. There is no way to predict the problems we will leave online when we die; Lori Earl would never have thought of revoking Last.fm’s permissions to post on her daughter’s page before she died. “We would have turned off the posts if we had been able to,” she says.

Kasket says “the fundamental message” is to think about how much you store digitally. “Our devices, without us even having to try, capture so much stuff,” she says. “We don’t think about the consequences for when we’re not here any more.”

Black Mirror season 5 launches on Netflix on 5 June.

(For the source of this, and many other quite interesting articles, please visit: https://www.theguardian.com/tv-and-radio/2019/jun/02/digital-legacy-control-online-identities-when-we-die/)

++++++++++

Why people become vegans: The history, sex and science of a meatless existence

Disclosure statement

Joshua T. Beck does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


At the age of 14, a young Donald Watson watched as a terrified pig was slaughtered on his family farm. In the British boy’s eyes, the screaming pig was being murdered. Watson stopped eating meat and eventually gave up dairy as well.

Later, as an adult in 1944, Watson realized that other people shared his interest in a plant-only diet. And thus veganism – a term he coined – was born.

Flash-forward to today, and Watson’s legacy ripples through our culture. Even though only 3 percent of Americans actually identify as vegan, most people seem to have an unusually strong opinion about these fringe foodies – one way or the other.

As a behavioral scientist with a strong interest in consumer food movements, I thought November – World Vegan Month – would be a good time to explore why people become vegans, why they can inspire so much irritation and why many of us meat-eaters may soon join their ranks.

Early childhood experiences can shape how we feel about animals – and lead to veganism, as it did for Donald Watson. HQuality/Shutterstock.com

It’s an ideology not a choice

Like other alternative food movements such as locavorism, veganism arises from a belief structure that guides daily eating decisions.

Vegans aren’t simply moral high-grounders. They do believe it’s moral to avoid animal products, but they also believe it’s healthier and better for the environment.

Also, just like Donald Watson’s story, veganism is rooted in early life experiences.

Psychologists recently discovered that having a larger variety of pets as a child increases tendencies to avoid eating meat as an adult. Growing up with different sorts of pets increases concern for how animals are treated more generally.

Thus, when a friend opts for Tofurky this holiday season, rather than one of the 45 million turkeys consumed for Thanksgiving, his decision isn’t just a high-minded choice. It arises from beliefs that are deeply held and hard to change.

Sutton and Sons is a vegan fish and chip restaurant in London. Reuters/Peter Nicholls

Veganism as a symbolic threat

That doesn’t mean your faux-turkey loving friend won’t seem annoying if you’re a meat-eater.

Why do some people find vegans so irritating? In fact, it might be more about “us” than them.

Most Americans think meat is an important part of a healthy diet. The government recommends eating 2-3 portions (5-6 ounces) per day of everything from bison to sea bass. As tribal humans, we naturally form biases against individuals who challenge our way of life, and because veganism runs counter to how we typically approach food, vegans feel threatening.

Humans respond to feelings of threat by derogating out-groups. Two out of 3 vegans experience discrimination daily, 1 in 4 report losing friends after “coming out” as vegan, and 1 in 10 believe being vegan cost them a job.

Veganism can be hard on a person’s sex life, too. Recent research finds that the more someone enjoys eating meat, the less likely they are to swipe right on a vegan. Also, women find men who are vegan less attractive than those who eat meat, as meat-eating seems masculine.

The fake meat at one Fort Lauderdale restaurant supposedly tastes like real meat. AP Photo/J. Pat Carter

Crossing the vegan divide

It may be no surprise that being a vegan is tough, but meat-eaters and meat-abstainers probably have more in common than they might think.

Vegans are foremost focused on healthy eating. Six out of 10 Americans want their meals to be healthier, and research shows that plant-based diets are associated with reduced risk for heart disease, certain cancers, and Type 2 diabetes.

It may not be surprising, then, that 1 in 10 Americans are pursuing a mostly veggie diet. That number is higher among younger generations, suggesting that the long-term trend might be moving away from meat consumption.

In addition, several factors will make meat more costly in the near future.

Meat production accounts for as much as 15 percent of all greenhouse gas emissions, and clear-cutting for pasture land destroys 6.7 million acres of tropical forest per year. While some debate exists on the actual figures, it is clear that meat emits more than plants, and population growth is increasing demand for quality protein.

Seizing the opportunity, scientists have innovated new forms of plant-based meats that have proven to be appealing even to meat-eaters. The distributor of Beyond Meat’s plant-based patties says 86 percent of its customers are meat-eaters. It is rumored that this California-based vegan company will soon be publicly traded on Wall Street.

Even more astonishing, the science behind lab-grown, “cultured tissue” meat is improving. It used to cost more than $250,000 to produce a single lab-grown hamburger patty. Technological improvements by Dutch company Mosa Meat have reduced the cost to $10 per burger.

Watson’s legacy

Even during the holiday season, when meats like turkey and ham take center stage at family feasts, there’s a growing push to promote meatless eating.

London, for example, will host its first-ever “zero waste” Christmas market this year featuring vegan food vendors. Donald Watson, who was born just four hours north of London, would be proud.

Watson, who died in 2006 at the ripe old age of 95, outlived most of his critics. This may give quiet resolve to vegans as they brave our meat-loving world.

(For the source of this, and many other interesting articles, please visit: https://theconversation.com/why-people-become-vegans-the-history-sex-and-science-of-a-meatless-existence-106410/)

++++++++++

Did human ancestors split from chimps in Europe, not Africa?

An artist’s impression of the early hominin Graecopithecus freybergi in its savannah home in ancient Greece. (Credit: Veliza Simeonovski).

It’s generally accepted that humans originated in Africa and gradually spread out across the globe from there, but a pair of new studies may paint a different picture. By examining fossils of early hominins, researchers have found that humans and chimpanzees may have split from their last common ancestor earlier than previously thought, and this important event may have happened in the ancient savannahs of Europe, not Africa.

The split between humans and our closest living relatives, chimpanzees, is a murky area in our history. While the point of original divergence is thought to have been between 5 and 7 million years ago, it wasn’t a clean break, and cross breeding and hybridization may have continued until as recently as 4 million years ago.

Where the divergence took place is contentious as well, but Eastern Africa is the accepted birthplace of the earliest pre-humans. One of the best candidates for the last common ancestor is Sahelanthropus, known from a skull found in Central Africa dating back to around 7 million years ago. But according to the new studies, bones found in Greece and Bulgaria appear to belong to a hominin that’s a few hundred thousand years older.

“Our discovery outlines a new scenario for the beginning of human history – the findings allow us to move the human-chimpanzee split into the Mediterranean area,” says David Begun, co-author of one of the studies. “These research findings call into question one of the most dogmatic assertions in paleoanthropology since Charles Darwin, which is that the human lineage originated in Africa. It is critical to know where the human lineage arose so that we can reconstruct the circumstances leading to our divergence from the common ancestor we share with chimpanzees.”

The lower jaw of Graecopithecus found in Greece, which indicates that early humans may have split...

The Mediterranean bones are from a species called Graecopithecus freybergi, and it’s one of the least understood European apes. The researchers scanned a jawbone found in Greece and an upper premolar from Bulgaria, and found the roots of the teeth to be largely fused together, indicating that the species might have been an early hominin.

“While great apes typically have two or three separate and diverging roots, the roots of Graecopithecus converge and are partially fused – a feature that is characteristic of modern humans, early humans and several pre-humans including Ardipithecus and Australopithecus,” says Madelaine Böhme, co-lead investigator on the project.

To get a clearer picture, the researchers studied the sediment that the fossils were found in, and discovered that the two sites were very similar. Not only were they almost exactly the same age – 7.24 and 7.175 million years – but both areas were dry, grassy savannahs at the time, making them prime conditions for hominins.

The Graecopithecus premolar found in Bulgaria, with fused roots that suggest it belongs to the early...

The researchers found grains of dust that appeared to have blown up from the Sahara desert, which was forming around the same time. This might have contributed to the savannah-like conditions in Europe, and these environmental changes may have driven the two species to evolve differently.

“The incipient formation of a desert in North Africa more than seven million years ago and the spread of savannahs in Southern Europe may have played a central role in the splitting of the human and chimpanzee lineages,” says Böhme.

But inferring information from fossils always leaves room for error, and as New Scientist reports, there are researchers who aren’t convinced such big claims can be projected from such small features of the fossils. Still, it’s an interesting theory, and one that will warrant more study.

The research was published in two separate studies in PLOS ONE.

(For the source of this, and many other interesting articles, please visit: https://newatlas.com/fossil-human-chimp-ancestor-europe/49708/)

++++++++++

Ancient pee helps archaeologists track the rise of farming

The Aşıklı Höyük site in Turkey, where researchers have studied urine salts to estimate human and animal populations over time. (Credit: Güneş Duru).

One of the most important transitions in human history was when we stopped hunting and gathering for food and instead settled down to become farmers. Now, to reconstruct the history of one particular archaeological site in Turkey, scientists have examined a pretty unexpected source – the salts left behind from human and animal pee.

The dig site of Aşıklı Höyük in Turkey has been studied for decades, and it’s clear that humans occupied the area more than 10,000 years ago, where they started experimenting with keeping animals like sheep and goats. But just how many people and animals occupied the site at different times has been trickier to track.

For the new study, researchers from Columbia University and the universities of Tübingen, Arizona and Istanbul realized that the more humans and animals occupy a site, the higher the concentration of salts left behind in the ground. The reason? Everybody and everything pees.

The team began by collecting 113 samples from across Aşıklı Höyük, including trash piles, bricks, hearths and soil, from all different time periods. They examined the levels of sodium, nitrate and chloride salts, which are all passed in urine.

Researchers Jay Quade (left) and Jordan Abell (right) looking for salt samples in the soil

Sure enough, the fluctuating levels of urine salts revealed the history of human and animal occupation of Aşıklı Höyük. Very little salt was detected in the natural layers, before any settlement existed. Between about 10,400 and 10,000 years ago, salt levels rose slightly, as a few humans began settling. Then things really took off – between 10,000 and 9,700 years ago the salts saw a huge spike, with levels about 1,000 times higher than previously detected. That indicates a similar spike in the number of occupants. After that, concentrations go into decline again.

That large spike, the team says, suggests that domestication of animals in Aşıklı Höyük occurred faster than was previously thought.

Using this data, the researchers estimated that over the 1,000-year period of occupation, an average of 1,790 people and animals lived in the area per day. At its peak, the population density would have reached about one person or animal for every 10 sq m (108 sq ft).
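As a rough illustration of the kind of back-of-the-envelope arithmetic behind such figures – and emphatically not the model used in the study, which treats salt deposition, human versus animal urine output and site conditions in far more detail – the short Python sketch below applies the article’s peak-density figure to a purely hypothetical settled area, and scales a made-up baseline salt level by the roughly 1,000-fold spike described above.

# Toy illustration only -- not the model from the Science Advances paper.
# The settled area and baseline salt level below are hypothetical numbers;
# the density and the ~1,000x spike come from the article's description.

PEAK_DENSITY = 1 / 10            # occupants per square metre (one per 10 sq m)
assumed_area_m2 = 20_000         # hypothetical settled area of about 2 hectares

peak_headcount = assumed_area_m2 * PEAK_DENSITY
print(f"Implied occupants at peak: {peak_headcount:.0f}")      # -> 2000

baseline_salt = 1.0              # arbitrary units in a pre-settlement layer
spike_factor = 1_000             # relative rise reported for the peak layers
print(f"Peak-layer salt level: {baseline_salt * spike_factor:.0f} units")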

Reconstructed rooftops in Aşıklı Höyük

The estimated inhabitants of each time period can't all be human – the houses found on site indicate a smaller population. But the team says this is evidence that salt concentrations can be a useful tool to study the density of domesticated animals over time.

The researchers say this technique could be used in other sites, to help find new evidence of the timing and density of human settlement.

The research was published in the journal Science Advances.


(For the source of this, and many other equally interesting articles, please visit: https://newatlas.com/urine-salts-ancient-farming/59360/)

++++++++++

Japan’s ‘vanishing’ Ainu will finally be recognized as indigenous people

Oki Kano is a musician and founder of Oki Dub Ainu, a band that mixes indigenous Ainu music with reggae and other genres.

 

Growing up in Japan, musician Oki Kano never knew he was part of a “vanishing people.”

His Japanese mother was divorced and never told Kano that his birth father was an indigenous Ainu man. Kano was 20 years old when he found out.

For decades, researchers and conservative Japanese politicians described the Ainu as “vanishing,” says Jeffry Gayman, an Ainu peoples researcher at Hokkaido University.

Gayman says there might actually be tens of thousands more people of Ainu descent who have gone uncounted — due to discrimination, many Ainu chose to hide their background and assimilate years ago, leaving younger people in the dark about their heritage.


Traditional Ainu Settlement Area. 

A bill passed recently has, for the first time, officially recognized the Ainu of Hokkaido as an “indigenous” people of Japan. The bill also includes measures to make Japan a more inclusive society for the Ainu, strengthen their local economies and bring visibility to their culture.

Japanese land minister Keiichi Ishii told reporters that it was important for the Ainu to maintain their ethnic dignity and pass on their culture to create a vibrant and diverse society.

Yet some warn a new museum showcasing their culture risks turning the Ainu into a cultural exhibit and note the bill is missing one important thing — an apology.

‘Tree without roots’

Kano grew up in Kanagawa prefecture near Tokyo, where he became fascinated with Jamaican reggae. Even without being aware of his ethnic identity, the political commentary underpinning the songs made an impression on him.

“Bob Marley sang that people who forget about their ancestors are the same as a tree without roots,” says Kano, 62. “I checked the lyrics as a teenager, though they became more meaningful to me as I matured.”

After discovering his ethnic origins, Kano was determined to learn more. He traveled to northern Hokkaido to meet his father and immediately felt an affinity with the Ainu community there — the “Asahikawa,” who are known for their anti-establishment stance.

But his sense of belonging was short-lived — some Ainu rejected Kano for having grown up outside of the community, saying he would never fully understand the suffering they had endured under Japanese rule.

Ainu people occupying parts of the Japanese island of Hokkaido, Russian Kuril Islands and Sakhalin, in about 1950.


Yuji Shimizu, an Ainu elder, says he faced open discrimination while growing up in Hokkaido. He says other children called him a dog and bullied him for looking different.

Hoping to avoid prejudice, his parents never taught him traditional Ainu customs or even the language, says the 78-year-old former teacher.

“My mother told me to forget I was Ainu and become like the Japanese if I wanted to be successful,” says Shimizu.

Ainu Moshir (Land of the Ainu)

The origins of the Ainu and their language remain unclear, though many theories exist.

They were early residents of northern Japan, in what is now the Hokkaido prefecture, and the Kuril Islands and Sakhalin, off the east coast of Russia. They revered bears and wolves, and worshiped gods embodied in the natural elements like water, fire and wind.

In the 15th century, the Japanese moved into territories held by various Ainu groups to trade. But conflicts soon erupted, with many battles fought between 1457 and 1789. After the 1789 Battle of Kunasiri-Menasi, the Japanese conquered the Ainu.

Japan’s modernization in the mid-1800s was accompanied by a growing sense of nationalism and, in 1899, the government sought to assimilate the Ainu by introducing the Hokkaido Former Aborigines Protection Act.

A family of Ainu gives a meal to a Western man in a sketch.


The act implemented Japan’s compulsory national education system in Hokkaido and eliminated traditional systems of Ainu land rights and claims. Over time, the Ainu were forced to give up their land and adopt Japanese customs through a series of government initiatives.

Today, there are only two native Ainu speakers worldwide, according to the Endangered Languages Project, an organization of indigenous groups and researchers aimed at protecting endangered languages.

High levels of poverty and unemployment currently hinder the Ainu’s social progress. The percentage of Ainu who attend high school and university is far lower than the Hokkaido average.

The Ainu population also appears to have shrunk. Official figures put the number of Ainu in Hokkaido at 17,000 in 2013, accounting for around 2% of the prefecture’s population. In 2017, the latest year on record, there were only about 13,000.

However, Gayman, the Ainu researcher, says that the number of Ainu could be up to ten times higher than official surveys suggest, because many have chosen not to identify as Ainu and others have forgotten — or never known — their origins.

Finding music

Feeling neither Ainu nor Japanese, Kano left Japan in the late 1980s for New York.

While living there, he befriended several Native Americans at a time when indigenous peoples were putting pressure on governments globally to recognize their rights. He credits them with awakening his political conscience as a member of the Ainu.

“I knew I had to reconnect with my Ainu heritage,” he says. Kano made his way back to Japan and, in 1993, discovered a five-stringed instrument called the “tonkori,” once considered a symbol of Ainu culture.

“I made a few songs with the tonkori and thought I had talent,” he says, despite never having formally studied music. But finding a tonkori master to teach him was hard after years of cultural erasure.

So he used old cassette tapes of Ainu music as a reference. “It was like when you copy Jimi Hendrix while learning how to play the guitar,” he says.

His persistence paid off. In 2005, Kano created the Oki Dub Ainu group, which fuses Ainu influence with reggae, electronica and folk undertones. He also created his own record label to introduce Ainu music to the world.

Since then, Kano has performed in Australia and toured Europe. He has also taken part in the United Nations’ Working Group on Indigenous Populations to voice Ainu concerns.

UN Declaration on the Rights of Indigenous Peoples (UNDRIP)

The United Nations adopted UNDRIP on September 13, 2007, to enshrine the rights that “constitute the minimum standards for the survival, dignity and well-being of the indigenous peoples of the world.”

The UNDRIP protects collective rights that may not feature in other human rights charters, which emphasize individual rights; it also safeguards the individual rights of Indigenous people.

New law, new future?

Mark John Winchester, a Japan-based indigenous rights expert, calls the new bill a “small step forward” in terms of indigenous recognition and anti-discrimination, but says it falls short of truly empowering the Ainu people. “Self-determination, which should be the central pillar of indigenous policy-making, is not reflected in the law,” says Winchester.

Winchester and Gayman also say the government failed to consult all Ainu groups when drafting the bill.

For the Ainu elder Shimizu, the new bill is missing an important part: atonement. “Why doesn’t the government apologize? If the Japanese recognized what they did in the past, I think we could move forward,” says Shimizu.

“The Japanese forcibly colonized us and annihilated our culture. Without even admitting to this, they want to turn us into a museum exhibit,” Shimizu adds, referring to the 2019 bill’s provision to open an Ainu culture museum in Hokkaido.

Other Ainu say the museum will create jobs.

Japanese Indigenous Ainu men participate in a traditional ritual called Kamuinomi, held as part of the 2008 Indigenous Peoples Summit.

Both Shimizu and Kano say the new law grants too much power to Japan’s central government, which requires Ainu groups to seek its approval for state-sponsored cultural projects. Furthermore, they say the bill should do more to promote education.

Currently, Ainu youth are eligible for scholarships and grants to study their own language and culture at a few select private universities. But Kano says government funding should extend beyond supporting Ainu heritage, to support the Ainu people.

“We need more Ainu to enter higher education and become Ainu lawyers, film directors and professors,” he says. “If that doesn’t happen, the Japanese will always control our culture.”

++++++++++

Unrelated Languages Often Use Same Sounds for Common Objects and Ideas, Research Finds

A careful statistical examination of words from 6,000+ languages shows that humans tend to use the same sounds for common objects and ideas, no matter what language they’re speaking.

Geographic distribution of the 6,452 word lists analyzed in this study. Colors distinguish different linguistic macroareas, regions with relatively little or no contact between them (but with much internal contact between their populations). These are North America (orange), South America (dark green), Eurasia (blue), Africa (green), Papua New Guinea and the Pacific Islands (red), and Australia (fuchsia). Image credit: Damián E. Blasi et al.

The new research, led by Prof. Morten Christiansen of Cornell University, demonstrates a robust statistical relationship between certain basic concepts – from body parts to familial relationships and aspects of the natural world – and the sounds humans around the world use to describe them.

“These sound symbolic patterns show up again and again across the world, independent of the geographical dispersal of humans and independent of language lineage,” Prof. Christiansen said.

“There does seem to be something about the human condition that leads to these patterns. We don’t know what it is, but we know it’s there.”

Prof. Christiansen and his colleagues from Argentina, Germany, the Netherlands and Switzerland analyzed 40-100 basic vocabulary words in 62 percent of the world’s more than 6,000 current languages and 85 percent of its linguistic lineages.

“The dataset used for this study is drawn from version 16 of the Automated Similarity Judgment Program database,” they explained.

“The data consist of 28–40 lexical items from 6,452 word lists, with a subset of 328 word lists having up to 100 items. The word lists include both languages and dialects, spanning 62% of the world’s languages and about 85% of its lineages.”

The words included pronouns, body parts and properties (small, full), verbs that describe motion and nouns that describe natural phenomena (star, fish).

The scientists found that a considerable proportion of the 100 basic vocabulary words have a strong association with specific kinds of human speech sounds.

For instance, in most languages, the word for ‘nose’ is likely to include the sounds ‘neh’ or the ‘oo’ sound, as in ‘ooze.’

The word for ‘tongue’ is likely to have ‘l’ or ‘u.’

‘Leaf’ is likely to include the sounds ‘b,’ ‘p’ or ‘l.’

‘Sand’ will probably use the sound ‘s.’

The words for ‘red’ and ‘round’ often appear with ‘r,’ and ‘small’ with ‘i.’

“It doesn’t mean all words have these sounds, but the relationship is much stronger than we’d expect by chance. The associations were particularly strong for words that described body parts. We didn’t quite expect that,” Prof. Christiansen said.

The researchers also found certain words are likely to avoid certain sounds. This was especially true for pronouns.

For example, words for ‘I’ are unlikely to include sounds involving u, p, b, t, s, r and l.

‘You’ is unlikely to include sounds involving u, o, p, t, d, q, s, r and l.
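To make “stronger than we’d expect by chance” concrete, here is a minimal, self-contained sketch of one way such an association could be checked: a simple permutation test run on toy, made-up word lists. It is purely illustrative and is not the statistical approach used by Blasi et al., whose analysis also accounts for language lineage and geography.

# Illustrative sketch only -- not the method of Blasi et al.
# Question: does a given sound appear in the words for one concept more
# often than it does in same-sized random samples of the vocabulary?
import random

def association_pvalue(concept_words, all_words, sound, n_perm=10_000, seed=0):
    """Fraction of random samples whose rate of `sound` is at least as
    high as the rate observed in the concept's words."""
    rng = random.Random(seed)
    observed = sum(sound in w for w in concept_words)
    k = len(concept_words)
    hits = 0
    for _ in range(n_perm):
        sample = rng.sample(all_words, k)
        if sum(sound in w for w in sample) >= observed:
            hits += 1
    return hits / n_perm

# Toy, invented data: hypothetical words for 'nose' in a handful of
# languages, pooled with other invented vocabulary from the same languages.
nose_words = ["nosu", "nena", "anunu", "nariz", "neus"]
vocabulary = nose_words + ["kata", "bola", "tiru", "sapa", "mulu",
                           "rafa", "kelo", "pani", "domo", "situ"]
print(association_pvalue(nose_words, vocabulary, "n"))  # small value -> unlikely by chance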

The team’s findings, published in the Proceedings of the National Academy of Sciences, challenge one of the most basic concepts in linguistics: the century-old idea that the relationship between the sound of a word and its meaning is arbitrary.

The researchers don’t know why humans tend to use the same sounds across languages to describe basic objects and ideas.

“These concepts are important in all languages, and children are likely to learn these words early in life,” Prof. Christiansen said.

“Perhaps these signals help to nudge kids into acquiring language.”

“Likely it has something to do with the human mind or brain, our ways of interacting, or signals we use when we learn or process language. That’s a key question for future research.”

_____

Damián E. Blasi et al. Sound–meaning association biases evidenced across thousands of languages. PNAS, published online September 12, 2016; doi: 10.1073/pnas.1605782113

(For the source of this, and many other interesting articles, please visit: www.sci-news.com/othersciences/linguistics/languages-use-same-sounds-common-objects-ideas-04185.html/)

++++++++++

Neanderthals, Denisovans May Have Had Their Own Language, Suggest Scientists

A broad range of evidence from linguistics, genetics, paleontology, and archaeology suggests that Neanderthals and Denisovans shared with us something like modern speech and language, according to Dutch psycholinguistics researchers Dr Dan Dediu and Dr Stephen Levinson.

Neanderthals (University of Utah via kued.org)

Neanderthals have fascinated both the academic world and the general public ever since their discovery almost 200 years ago. Initially thought to be subhuman brutes incapable of anything but the most primitive of grunts, they were a successful form of humanity inhabiting vast swathes of western Eurasia for several hundreds of millennia, through harsh ice ages and milder interglacial periods.

Scientists knew that Neanderthals were our closest cousins, sharing a common ancestor with us, probably Homo heidelbergensis, but it was unclear what their cognitive capacities were like, or why modern humans succeeded in replacing them after thousands of years of cohabitation.

Due to new discoveries and the reassessment of older data, but especially to the availability of ancient DNA, researchers have started to realize that Neanderthals’ fate was much more intertwined with ours and that, far from being slow brutes, their cognitive capacities and culture were comparable to ours.

Dr Dediu and Dr Levinson, both from the Max Planck Institute for Psycholinguistics and the Radboud University Nijmegen, reviewed all these strands of literature, and argue that essentially modern language and speech are an ancient feature of our lineage dating back at least to the most recent ancestor we shared with the Neanderthals and the Denisovans. Their interpretation of the intrinsically ambiguous and scant evidence goes against the scenario usually assumed by most language scientists.

The study, reported in the journal Frontiers in Language Sciences, pushes back the origins of modern language by a factor of ten – from the often-cited 50,000 years ago to 500,000–1,000,000 years ago, somewhere between the origins of our genus, Homo, some 1.8 million years ago, and the emergence of Homo heidelbergensis.

This reassessment of the evidence goes against a scenario where a single catastrophic mutation in a single individual would suddenly give rise to language, and suggests that a gradual accumulation of biological and cultural innovations is much more plausible.

Interestingly, given that we know from the archaeological record and recent genetic data that the modern humans spreading out of Africa interacted both genetically and culturally with the Neanderthals and Denisovans, then just as our bodies carry around some of their genes, maybe our languages preserve traces of their languages too.

This would mean that at least some of the observed linguistic diversity is due to these ancient encounters, an idea testable by comparing the structural properties of the African and non-African languages, and by detailed computer simulations of language spread.

______

Bibliographic information: Dediu D and Levinson SC. 2013. On the antiquity of language: the reinterpretation of Neanderthal linguistic capacities and its consequences. Front. Psychol. 4: 397; doi: 10.3389/fpsyg.2013.00397

(For the source of this, and other equally important articles, please visit: http://www.sci-news.com/othersciences/linguistics/science-neanderthals-denisovans-language-01211.html/)

++++++++++

A Mysterious Third Human Species Lived Alongside Neanderthals in This Cave

It’s a “fascinating part of human history.”


Scientists digging in the mountains of southern Siberia have revealed key insights into the lives of Denisovans, a mysterious branch of the ancient human family tree. While these relatives are extinct, their legacy lives on in the modern humans who carry fragments of their DNA and in the tiny artifacts and bones they left behind. Compared to the well-known Neanderthals, there’s a lot we don’t know about the Denisovans — but a pair of papers published recently hint at their place in our shared history.

Both Neanderthals and Denisovans belong to the genus Homo, though it’s still not entirely clear whether the Denisovans are a separate species or a subspecies of modern humans — after all, we only have six fossil fragments to go on. Nevertheless, we’re one step closer to finding out. Both studies, published in Nature, describe new discoveries in the Denisova Cave of the Altai Mountains, where excavations have continued for the past 40 years. Those efforts have revealed ancient human remains carrying the DNA of both the Denisovans and Neanderthals who made the high-ceilinged cave their home — sometimes, even having children together.

For a long time, nobody knew exactly how long this cave was occupied or what the interactions between the hominins living there were like. But now, the studies collectively reveal that humans occupied the cave from approximately 200,000 years ago to 50,000 years ago.

The authors of one study focused on Denisovan fossils and artifacts to determine “aspects of their cultural and subsistence adaptions.” Katerina Douka, Ph.D., the co-author of that study and a researcher at Max Planck Institute for the Science of Human History, tells Inverse that confirming that they lived in this cave is a “fascinating part of human history.” However, she adds, we still don’t know so much about the Denisovans — not their geographic range, their location of origin, or even what they looked like.

View of the entrance to the Denisova Cave.

When they lived in the cave, and with whom, is another mystery about the Denisovans that was investigated, sediment layer by sediment layer, in the second study. Published by scientists from the University of Wollongong and the Russian Academy of Sciences, the analysis is the most comprehensive dating project ever done on the Denisova Cave deposits. The team dated 103 sediment layers and 50 items within them, mostly bits of bone, charcoal, and tools. The oldest Denisovan DNA comes from a layer between 185,000 and 217,000 years old, and the oldest Neanderthal DNA sample is from a layer that’s about 172,000 to 205,000 years old. In the more recent layers of the cave, between 55,200 and 84,100 years old, only Denisovan remains were found.

And it’s in these more recent layers that more advanced objects begin to emerge — pieces of tooth pendants and bone points, which “may be assumed” to be “associated with the Denisovan population,” write Douka and her team. Those artifacts are the oldest of their kind found in northern Eurasia and representative of something previously unexplored: Denisovan culture.

At this point, says Douka, we cannot definitively say that Denisovans created those items, though the evidence is pointing that way. More sites with Denisovan remains and material culture are needed to answer deeper questions about their culture and symbols.

Personal ornaments and bone points found in the Denisova Cave.

April Nowell, Ph.D. is a University of Victoria professor and Paleolithic archeologist who specializes in the origins of art and symbol use and wasn’t a part of these recent papers. Evaluating the pendants and bones, she tells Inverse that, assuming these artifacts were made by the Denisovans, she’s “not particularly surprised.” Human culture, very broadly, is thought to have emerged 3.3 million years ago, with the first stone tools. Other ancient humans used the natural clay ochre to paint at least 100,000 years ago, the same time period where archeologists have found the oldest beads.

So, it makes sense that a human subspecies would create cultural artifacts around this time.

What’s novel in the new studies, Nowell says, is that “we know virtually nothing about who Denisovans were, so every study like this one helps to enrich our understanding of their place in the human story.”

“Given that we have items of personal adornment associated with Neanderthals and modern humans all around the same date as the ones thought to be associated with the Denisovans,” she adds, “I would find it more surprising if they were not making similar objects.”

Human remains found in the Denisova Cave.

These particular items, Nowell explains, especially the tooth pendant, likely speak to “issues of personal identity and group belonging.” The teeth were purposefully chosen, modified, and worn — standing as jewelry that communicated something about the wearer and likely influenced how the wearer felt about themselves.

Jewelry, she says, can be powerful and laden with meaning — just think about putting on a wedding ring or holding your grandfather’s pocket watch. We can’t tell what these pendants meant to the Denisovans who created and wore them, but their very existence allows archeologists to begin to piece together an idea of the culture from which they were wrought.

(For the source of this, and many additional interesting articles, please visit: https://www.inverse.com/article/52926-denisova-cave-dating-sediment-culture/)

++++++++++

New species of human discovered in cave in Philippines

The bones of Homo luzonensis were discovered in Callao Cave, on the island of Luzon in the Philippines. (Credit: Callao Cave Archaeology Project).

A new species of human has been discovered in a cave in the Philippines. Named Homo luzonensis after the island of Luzon where it was found, the hominin appears to have lived over 50,000 years ago, painting a more complete picture of human evolution.

The new species is known from 12 bones found in Callao Cave, which are thought to be the remains of at least two adults and a juvenile. These include several finger and toe bones, some teeth and a partial femur. While that might not sound like much to work with, scientists can determine more from it than you might expect.

The upper teeth of one Homo luzonensis individual

“There are some really interesting features – for example, the teeth are really small,” says Professor Philip Piper, co-author of the study. “The size of the teeth generally, though not always, reflect the overall body-size of a mammal, so we think Homo luzonensis was probably relatively small. Exactly how small we don’t know yet. We would need to find some skeletal elements from which we could measure body-size more precisely.”

Even with those scattered bones, scientists are able to start slotting Homo luzonensis into the hominin family tree. Although it is a distinct species, it shares various traits with many of its relatives, including Neanderthals, modern humans, and most notably Homo floresiensis – the “Hobbit” humans discovered in an Indonesian cave in 2003. But perhaps the strangest family resemblance is to the Australopithecus, a far more ancient ancestor of ours.

Homo luzonensis shares different traits with many of its relatives, including Neanderthals, modern humans, and "Hobbit"...

“It’s quite incredible, the hand and feet bones are remarkably Australopithecine-like,” says Piper. “The Australopithecines last walked the Earth in Africa about 2 million years ago and are considered to be the ancestors of the Homo group, which includes modern humans. So, the question is whether some of these features evolved as adaptations to island life, or whether they are anatomical traits passed down to Homo luzonensis from their ancestors over the preceding 2 million years.”

The research was published in the journal Nature.


(For the source of this very interesting article, plus many others, please visit: https://newatlas.com/new-human-species-homo-luzonensis/59207/)

++++++++++

What Was It Really Like To Be A Cowboy In The Wild West?

In contrast to much of what Hollywood has constructed, the real cowboy lifestyle was far less glamorous and happy than you may think. Of course, there were some smiling faces among 19th century cowboys, but the gunslinging frontier hero you may picture is a Wild West myth.

Cowboys in the old American West worked cattle drives and on ranches alike, master horsemen from all walks of life who dedicated themselves to the herd. Cowboy life in the 1800s was full of hard work, danger, and monotonous tasks with a heaping helping of dust, bugs, and beans on the side.

Cowboys Didn’t Get A Lot Of Sleep

Photo:  W. Joseph Grand/WikiMedia Commons/Public Domain

A cowboy’s day and night revolved around the herd, a constant routine of guarding, wrangling, and caring for cattle. When cowboys were out with a herd or simply working on a ranch, they had to be on watch. With a typical watch lasting two to four hours, there was usually a rotation of men. This gave cowboys the chance to sleep for relatively short spurts, often getting six hours of sleep at the most.

Cowboys slept on bedrolls, an easily transportable mattress of sorts made out of feathers, canvas, or waterproof tarpaulin. Out on a drive, cowboys slept on the same bedrolls they used at the ranch. Bedrolls were likely full of lice and bedbugs wherever they were used.

Dirt Was Everywhere

Photo:  Plowboylifestyle/WikiMedia Commons/Public Domain

Cowboys out with the herd wore the same clothes day in and day out. While wrangling the herd, cowboys in the back were naturally surrounded by a giant dust cloud stirred up by the animals, but dirt was pretty inescapable from any vantage point.

When cowboys were done with a cattle drive or came to a town, they made their way to a much needed and enjoyed bath. They may have also purchased new clothes and blown off steam at the local saloon.

Life at a ranch could have been less dusty, but not always. Some ranches had elaborate mansions but cowboys spent their days and nights in bunkhouses and other outbuildings. These were modestly better than being out on the range but a lot of cowboys preferred to sleep out under the stars even when they had the option of a roof over their heads.

They Had Their Own Language

Photo:  Detroit Photographic Co./WikiMedia Commons/Public Domain

The language of cowboys was full of task-specific phrases – and a fair amount of cursing. Much of the cowboy lexicon came from the vaquero tradition, but there was a lot of slang, too.

Cowboys used metaphorical phrases like “above snakes” or “hair case” to indicate being alive and a hat, respectively. They also used Native American words as they interacted with individual tribes.

Cowboys had words for their guns, their horses, the types of work they did, and their gear. A rope could be called many things based on what it was made of and what it looked like. For example, a long black and white horse hair rope was called a “pepper-and-salt rope.”

Their Clothes Were Practical And Protective

Photo:  Detroit Publishing Co./WikiMedia Commons/Public Domain

Cowboys wore hats, chaps, boots, and other hardy clothing to keep themselves safe on the trail and in the harsh elements. Hats varied by region but generally they had brims to keep the sun out of cowboys’ eyes. The wider the hat brim, the more shade it could provide.

Chaps were worn over pants to keep cowboys’ legs safe, and American cowboys wore bandanas around their necks that they could pull over their mouths and noses to keep the dust out.

Cowboy boots were designed with narrow toes and heels so the cowboy’s foot would fit securely in a stirrup but still have the ability to move should the rider need to dismount. Made of leather, they were sturdy and had spurs attached so a cowboy could prod his horse along. Boots were tall, going up the lower leg of a cowboy for protective purposes.

Strength, Courage, And Intelligence Were Equally Essential To Survival

Photo: Unknown/WikiMedia Commons/Public Domain

Contemporaries heralded cowboys’ “courage, physical alertness, ability to endure exposure and fatigue, horsemanship, and skill in the use of the lariat.”

Cowboys needed to be physically strong to take on tasks like breaking horses, roping cattle, and riding for hours on end. Courage to chase down stampeding herds or brave the elements on a regular basis was supplemented by the knowledge required to make quick decisions, care for the cattle, and not panic in the face of a crisis.

Often this intelligence came with years of experience, but cowboys needed to be able to understand cow psychology, navigating what a cow would react to, how to get cattle to take water, and techniques to avoid unnecessary risks on the drive.

Laziness was not an option on a cattle drive and was met with harsh treatment. One man who was caught sleeping under the chuck wagon was taught a lesson by being jabbed with a dead tarantula.

A Cowboy’s Horse Was His Best Friend

Photo: John C. H. Grabill/WikiMedia Commons/Public Domain

A cowboy needed his horse to travel, guard, protect, and haul on a cattle drive. Horses had to be able to handle long hours with riders on their backs, difficult terrain, and extreme heat. Cowboys maintained their horses, caring for them along drives with the utmost tenderness, and developing bonds that unified steed and rider with Centaur-like cohesion.

A good horse meant a cowboy could keep watch at night, and only the smartest and best-trained horses were used for the task. The best horses made up the remuda, a collection of even-tempered equines thought to understand cattle as much as their riders.

The Money Was Decent But The Life Was Hard

Photo: Internet Archive Book Images/via Flickr/WikiMedia Commons

Cowboys could make anywhere from $25 to $40 a month, which was good money for single men who didn’t have to support families. They’d spend their money on luxuries when they got to town, although any ostentatious purchases would most likely result in ridicule. Some cowboys saved their wages to buy cattle and land of their own.

Cowboys made the same wage regardless of ethnic or racial background. In addition to going on cattle drives, cowboys worked on ranches or in local towns when they could find work.

Cowboys Traveled In Groups For Thousands Of Miles

Photo: Unknown/WikiMedia Commons/Public Domain

It took eight to 12 cowboys to move 3,000 head of cattle, making for cohesive groups of young men traveling across large stretches of land with a common goal. There was a hierarchy of sorts, with a trail boss leading the way. The trail boss decided how many miles the drive would tackle in a day and where the group camped at night. There was also a second in command, a segundo, alongside a cook and several wranglers.

Lone cowboys were particularly vulnerable to attacks and the elements but also evoked fear and suspicion when they were spotted out on the plain.

Most Cowboys Didn’t Carry Guns To Fight

Illustration: Unknown/WikiMedia Commons/Public Domain

Cowboys had guns, but those guns were used for protection more than in confrontations or quarrels. Cowboys might have fended off wolves and coyotes, warded off hostile Native groups, or deterred potential thieves, but for the most part guns were used in the event of a stampede.

When a stampede broke out, cowboys had no choice but to try to overtake the leaders and bring it to an end. Once they caught up with the front of the group, they would fire their guns at the cattle to get them to stop.

The myth of the cowboy who carried two six-shooters comes from Hollywood, but cowboys often carried multiple weapons. There were hundreds of kinds of guns used by cowboys over time, and most men preferred to have a short sidearm and a longer rifle at their disposal.

Cowboys Were A Diverse Lot

Photo: Unknown/WikiMedia Commons/Public Domain

The American cowboy owes its origin to the Mexican and Spanish rancher traditions. During the 1700s, vaqueros – derived from vaca, the Spanish word for “cow” – were hired by Spanish ranchers to work the land and tend to their cattle. Vaqueros were native Mexicans who had expertise in roping, herding, and riding.

By the 1800s, waves of European immigrants had made their way west and began to work as cowboys as well. No longer a vocation for just Mexicans, there was a large amount of diversity among cowboy groups. African Americans, Native Americans, and settlers from all around Europe worked with Mexican vaqueros, often picking up the skills they needed to thrive and survive along the way.

The remoteness of cowboy life led to an egalitarianism of sorts, one that transcended ethnic and racial differences. The almost-exclusively male environment also valued hard work and strength over all else, contributing to a relatively discrimination-free setting.

The Food Left A Lot To Be Desired

Photo: William Henry Jackson/WikiMedia Commons/Public Domain

There wasn’t much variety in a cowboy’s diet. Chuckwagons accompanied cattle drives and cooks, legendarily grumpy but beloved companions, served staple foods like beef, bacon, beans, bread, and coffee.

Cowboys typically ate twice a day, once in the morning and again in the evening, but sometimes a third meal occurred as well. Additionally, most cowboys weren’t gluttonous, eating enough to get full but not over-indulging for fear of an upset stomach or running out of food on a long drive.

Stampedes Were Dangerous Events

Photo: SMU Central University Library/ via Flickr/WikiMedia Commons/Public Domain

A stampede was a terrifying event, one cowboys feared and did everything they could to avoid. Various things could spook cattle – a pistol shot, a storm, a snake – but once a stampede got going, it was up to the cowboys to ride to the front of the herd and bring it under control.

After cowboys ran to their horses and tried to avoid getting trampled while on foot, they had to navigate thousands of pounds of cattle coming at them. As cowboys moved alongside the herd, they could fall or be knocked off their horse. The horse itself could be brought down by the herd, something that resulted in both rider and horse being “mangled to sausage meat,” as was the case in Idaho in 1889.

Cowboy Teddy Blue recalled a stampede in 1876 wherein a cowpuncher and his horse were killed, describing the horse’s ribs as “scraped bare of hide, and all the rest of the horse and man was mashed into the ground as flat as a pancake.”

Cowboys Talked And Sang To The Cows

Photo: William Henry Jackson/WikiMedia Commons/Public Domain

Rising with the sun, cowboys weren’t prone to staying up late but they did spend their evenings telling stories and socializing with their fellows. Around a campfire, cowboys also played fiddles or harmonicas, told jokes, or generally decompressed after a long day.

When they were on watch, cowboys talked to the cattle, telling them stories or soothing them with songs. Songs could be made up on the spot or handed down among cowboys, often perpetuating a tale or focusing on some aspect of cowboy life.

(For the source of this, and many other fascinating articles, please visit: https://www.ranker.com/list/life-of-a-wild-west-cowboy/melissa-sartore/)

++++++++++

A two-piece dress from the 1880s and other items on display at the Tennessee State Museum. Credit: William DeShazer for The New York Times

NASHVILLE — Like many girls of my generation in the rural South, I learned every form of handwork my grandmother or great-grandmother could teach me: sewing, knitting, crocheting, quilting. I even learned to tat, a kind of handwork done with a tiny shuttle that turns thread into lace. Some of my happiest memories are of sitting on the edge of my great-grandmother’s bed, our heads bent together over a difficult project, as she pulled out my mangled stitches and patiently demonstrated the proper way to do them.

But by the time I’d mastered those skills, I had also lost the heart for them. Why bother to knit when the stores were full of warm sweaters? Why take months to make a quilt when the house had central heat? Of what possible use is tatting, which my great-grandmother sewed to the edges of handmade handkerchiefs, when Kleenex comes in those little purse-size packages?

But my abandonment of the domestic arts wasn’t just pragmatic. By the time I got to college, I had come to the conclusion that handwork was incompatible with my own budding feminism. Wasn’t such work just a form of subjugation? A way to keep women too busy in the home to assert any influence in the larger world? Without even realizing it, I had internalized the message that work traditionally done by men is inherently more valuable than work traditionally done by women.

I came to this unconscious conclusion almost inevitably. When every history class I ever took featured an endless list of battles won and lost by men, of political contests won and lost by men, of technological advances achieved by men, it’s not surprising that the measure of significance seemed to be the yardstick established by men — almost exclusively white men.

Public history has the power to affect our very understanding of reality. It tells us what we should value most about the past and how we should understand our own place within that context. Just as art museums today must wrestle with an earlier aesthetic that excluded women and artists of color, local-history museums are working to recalibrate the way they present the past.

In Montgomery, Ala., the Legacy Museum and the National Memorial for Peace and Justice convey the history of systemic racism in this country. In Louisiana, the restored Whitney Plantation’s new focus is the way the enslaved people on the plantation lived. In Atlanta, the Cyclorama — a 360-degree diorama the length of a football field that depicts the Battle of Atlanta — was restored and returned to public display, this time with new interpretive materials that defy the Lost Cause myth. And in Memphis, the Pink Palace Museum has just opened an elaborate new exhibition, two years in the making, that celebrates the city’s 200-year history as a kind of web in which specific issues like race thread through seemingly unrelated categories like art and entertainment, commerce and entrepreneurialism, and heritage and identity.

Here in Nashville, the new Tennessee State Museum, which opened last October, addresses the history of the state in a new building whose very design reinforces the idea that history is the story of everyone, of all the people. Andrew Jackson has his space, of course, but so do the Native Americans whom Jackson sent on the Trail of Tears, a genocidal march out of their homeland. All the relevant wars are here, along with all the relevant weaponry, but so are the pottery shards and the bedsteads and the whiskey jugs and the children’s toys. It’s all arranged in a timeline that unfolds at a human pace and on a human scale, equally beautiful and inviting, equally informative and embracing. My people are from Alabama, not Tennessee, but this space feels as though it belongs as much to me as to any Tennessean because it tells the kinds of stories that could be the story of my people, the kinds of stories that earlier versions of public history had always deemed unworthy of celebration or scholarly attention.

As it happens, the museum’s first temporary exhibition, which opened in February and runs through July 7, is a gallery full of gorgeous quilts. That was the exhibition I most wanted to see, and it did not disappoint. The quilts were made by familiar patterns — star and flower garden and log cabin and wedding ring — if not by familiar hands. Some of my own family quilts are gorgeously complex, but others are barely more than plain rectangles sewn in a row. I once asked my mother about those serviceable but hardly beautiful quilts, and she said impatiently: “People were cold, Margaret. They were trying to stay warm.”

The quilts in the exhibition at the Tennessee State Museum would keep people warm, but they are also absolute showpieces, with carefully coordinated colors and tiny stitches so perfectly close together and so perfectly uniform that it seems impossible for them to have been made by human hands. These women were nothing less than artists, and the gallery’s informational placards elevate them to that status and place them within that context. I studied the stitches and thought again and again of the women who had taught me to sit before a table frame and push a needle through all three quilt layers, taking stitches small enough to keep the batting from wadding up in the wash.

“Between the Layers,” an exhibition of quilts at the Tennessee State Museum. Credit: William DeShazer for The New York Times.

At the foot of our bed is a cedar chest that holds my share of the family quilts. The maple-leaf quilt was made for my childhood bedroom, but some of the squares were pieced together decades before I was born. The Sunbonnet Sue was my mother’s baby blanket. The flower-garden pattern with the yellow border was the last quilt my great-grandmother pieced by hand. My grandmother made the fan quilt for my husband and me when we got married. Shot through that quilt are memories — patchwork remnants of the dresses my mother made for me as I was growing up, bits left over from the simple blouses and skirts I made for myself in middle school.

(For the balance of this article please visit: https://www.nytimes.com/2019/04/01/opinion/tennessee-state-museum-quilts.html)

++++++++++

This is the best (and simplest) world map of religions

Both panoramic and detailed, this infographic manages to show both the size and distribution of world religions.


  • At a glance, this map shows both the size and distribution of world religions.
  • See how religions mix at both national and regional level.
  • There’s one country in the Americas without a Christian majority – which?

China and India are huge religious outliers

A picture says more than a thousand words, and that goes for this world map as well. This map conveys not just the size but also the distribution of world religions, at both a global and national level.

Strictly speaking it’s an infographic rather than a map, but you get the idea. The circles represent countries, their varying sizes reflect population sizes, and the slices in each circle indicate religious affiliation.
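For readers curious how such a chart can be put together, here is a minimal matplotlib sketch of the same encoding – one pie per country, circle area scaled to population, slices colored by religious share. The positions, populations and shares below are rough placeholders, and this is not the original designer’s code.

# Minimal sketch of the encoding -- placeholder data, not the original infographic.
import math
import matplotlib.pyplot as plt

countries = {
    # name: (x, y, population in millions, {religion: share})
    "India":  ( 3,  1, 1350, {"Hindu": 0.79, "Muslim": 0.14, "Other": 0.07}),
    "Brazil": (-4, -1,  210, {"Christian": 0.88, "None": 0.08, "Other": 0.04}),
    "Japan":  ( 6, -2,  126, {"Buddhist": 0.36, "None": 0.57, "Other": 0.07}),
}
colors = {"Christian": "tab:blue", "Muslim": "tab:green", "Hindu": "tab:orange",
          "Buddhist": "moccasin", "None": "grey", "Other": "gold"}

fig, ax = plt.subplots(figsize=(8, 5))
for name, (x, y, pop, shares) in countries.items():
    radius = 0.05 * math.sqrt(pop)   # circle area proportional to population
    ax.pie(list(shares.values()),
           colors=[colors[r] for r in shares],
           center=(x, y), radius=radius, frame=True)
    ax.annotate(name, (x, y - radius - 0.3), ha="center")

ax.set_xlim(-7, 8)
ax.set_ylim(-4, 4)
ax.set_aspect("equal")
ax.axis("off")
plt.show()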

The result is both panoramic and detailed. In other words, this is the best, simplest map of world religions ever. Some quick takeaways:

  • Christianity (blue) dominates in the Americas, Europe and the southern half of Africa.
  • Islam (green) is the top religion in a string of countries from northern Africa through the Middle East to Indonesia.
  • India stands out as a huge Hindu bloc (dark orange).
  • Buddhism (light orange) is the majority religion in South East Asia and Japan.
  • China is the country with the world’s largest ‘atheist/agnostic’ population (grey) as well as worshipers of ‘other’ religions (yellow).

The Americas are (mostly) solidly Christian

Which is the least Christian country in the Americas? The answer may surprise you.

But the map – based on figures from the World Religion Database (behind a paywall) – also allows for some more detailed observations.

  • Yes, the United States is majority Christian, but the atheist/agnostic share of its population alone is bigger than the total population of most other countries, in the Americas and elsewhere. Uruguay has the highest share of atheists/agnostics in the Americas. Other countries with a lot of ‘grey’ in their pies include Canada, Cuba, Argentina and Chile.
  • All belief systems represented on the scale below are present in the US and Canada. Most other countries in the Americas are more mono-religiously Christian, with ‘other’ (often syncretic folk religions such as Candomblé in Brazil or Santería in Cuba) the only main alternative.
  • Guyana, Suriname and Trinidad & Tobago are the only American nations with significant shares of Hindus, as well as the largest share of Muslim populations – and consequently have the lowest share of Christians in the Americas (just under half in the case of Suriname).

Lots of grey area in Europe

The second-biggest religious affiliation in Europe isn’t Islam, but ‘none’.

  • Christianity is still the biggest belief system in most European countries, but the atheist/agnostic share is strong in many places, mainly in Western Europe, and especially in the Czech Republic, where it is close to half the total.
  • Islam represents a significant slice (and a large absolute number) in France, Germany and the UK, and is stronger in the Balkans: the majority in Albania, almost half in Bosnia and around a quarter in Serbia (although that figure probably includes the de facto independent province of Kosovo).

Islam in the north, Christianity in the south

The map of Africa and the Middle East is dominated by the world’s two largest religions.

  • Israel is the world’s only majority-Jewish state (75%, with 18% Muslim). The West Bank, shown separately, also has a significant Jewish presence (20%, with 80% Muslim). Counted as one country, the Jewish majority would drop to around 55%.
  • Strictly Islamic Saudi Arabia, but also some of its neighbors in the Gulf, have significant non-Muslim populations – virtually all guest workers and ex-pats.
  • Nigeria, due to its large population and even split between Islam and Christianity, has more Muslims and more Christians than most other African nations.

Different majorities across Asia

Close neighbors India, Bangladesh and Myanmar each have a different majority religion.

  • Because countries are sized for population rather than area, some are much bigger or smaller than you’d expect – with some interesting results: there are more Christians in Muslim-majority Indonesia than there are in mainly Christian Australia, for example.
  • Hindus are a minority everywhere outside India, except in Nepal.
  • North Korea is shown as three-quarters atheist/agnostic, but this is debatable, on two counts. In what is often referred to as the last Stalinist state on Earth, religious adherence is probably underreported. And the state-sponsored ideology of ‘Juche’, although in essence based on materialism, makes some supernatural claims. For instance: despite having died in 1994, Kim Il-sung was declared ‘president for eternity’ in 1998.

Of course, clarity comes at the cost of detail. The map lumps together various Christian and Islamic schools of thought that don’t necessarily accept each other as ‘true believers’. It includes Judaism (only 15 million adherents, but the older sibling of the two largest religious groups), yet groups Sikhism (27 million) and various other more numerous faiths in with ‘others’. And it doesn’t distinguish between atheism (“There is no god”) and agnosticism (“There may or may not be a god, we just don’t know”). Then there’s the whole minefield of nuance between those who practice a religion (but may do so out of social coercion rather than personally held belief) and those who believe in something (but don’t participate in the rituals of any particular faith). To be fair, that requires more nuance than even a great map like this can probably provide.

This map was found at infographic designer Carrie Osgood’s page. It is based on 2010 figures for religious affiliation.

Strange Maps #967

Got a strange map? Let me know at strangemaps@gmail.com.

++++++++++

Teen Study Illuminates the Link Between Social Media Use and ADHD

This isn’t good.

By Sarah Sloat


Whether it’s to fight FOMO or play Fortnite, teens are tethered to their phones. Smartphone addiction has become so bad that even smartphone creators want to help people get off their devices, and recent surveys show that half of American teens “feel addicted” to their mobile devices and 78 percent of them check their devices hourly. These habits, write researchers in a new JAMA study on teens, are linked to the development of the classic symptoms of attention-deficit/hyperactivity disorder.

The paper is an analysis of the social media habits and mental health of 2,587 teenagers who, crucially, did not have preexisting ADHD symptoms at the beginning of the study. Those who frequently used digital media platforms over the course of the two-year study, the researchers show, began to display ADHD symptoms, including inattention, hyperactivity, and impulsivity. It’s too early to define the nature of the link, the researchers warn, but it’s a good place to start.

“We can’t confirm the causation from the study, but this was a statistically significant association,” explains co-author Adam Leventhal, Ph.D., a University of Southern California professor of preventative medicine and psychology. “We can say with confidence that teens who were exposed to higher levels of digital media were significantly more likely to develop ADHD symptoms in the future.”

Frequent use of digital platforms was linked to the development of ADHD-like symptoms.

The study participants, who were between 15 and 16 years old, represented various demographic and socioeconomic statuses and were enrolled in public high schools in Los Angeles County. Every six months between 2014 and 2016, the researchers asked the teens how often they accessed 14 popular digital media platforms on their smartphones and examined them for symptoms of ADHD. Mobile technologies, Leventhal explains, “can provide fast, high-intensity stimulation accessible all day, which has increased digital media exposure far beyond what’s been studied before.” In the past, studies on the link between exposure to technology and mental health focused only on the effects of TV or video games.

The team’s analysis of the data showed that 9.5 percent of the 114 teens who frequently used at least 7 platforms displayed ADHD symptoms that hadn’t been present at the beginning of the study. Of the 51 teens who frequently used all 14 platforms, 10.5 percent showed new ADHD symptoms.

Roughly 10 percent of the teens who used digital media at high frequency demonstrated new ADHD symptoms.

This study “raises concern” about the ADHD risk that digital media technology poses for teens, but Leventhal emphasizes that there’s no evidence of causation and that further study is needed. Scientists know that ADHD manifests as physical differences in the brain, but they’re still not sure what causes it. There are multiple non-exclusive theories, which include an individual’s genes, low birth weight, and exposure in the womb to toxins such as cigarette smoke.

Meanwhile, smartphone use has been linked to changes in the brain as well, but none that have been associated with ADHD. More studies are needed to determine whether frequent use of digital platforms is linked to ADHD itself or to a completely different disorder that shares similar symptoms.

(For the source of this article, plus many additional important articles, please visit: https://www.inverse.com/article/47220-smartphone-digital-media-use-adhd/)

++++++++++

Advances in satellite imagery are shining a light on modern slavery.


  • Today, there are 40.3 million slaves on the planet, more than the number of people living in Canada.
  • Slavery can be hard to find, but it commonly occurs in several key industries like fishing and mining.
  • Using satellite data, researchers and activists are using crowdsourcing and artificial intelligence to identify sites where slavery is taking place.

If you turn on television news at any given moment, you’ll probably be barraged with messages of doom and gloom that assert the world’s going to hell in a handbasket. This isn’t true. In fact, many of the metrics we might use to assess whether the world is doing well or poorly lean towards the former: the number of people living in abject poverty has plummeted, literacy rates are up the world over, and violence has been decreasing steadily for centuries now.

But there is at least one way in which the world is objectively getting worse: Earth is today host to 40.3 million slaves, more than at any other time in human history. One in four of them are children, and 71 percent are female. With more slaves on the planet than there are people in the country of Canada, one would think there would be more evidence of this exploitation. But recent studies are revealing that the evidence is there — it just requires a new perspective to see.

Using an eye in the sky

Slavery takes place in the background, but its fingerprints are all over the products modern society relies on. Textiles, electronics, agriculture, and even brick-making all involve slavery to one degree or another. A growing body of research is using satellite imagery to shine a light on slavery practices. In fact, an estimated one-third of all slavery can be seen from space.

While rooting out specific instances of slavery can be difficult, we can use our knowledge of which industries include slave labor and pair it with satellite imagery and artificial intelligence to track down slavers and bring them to justice.

For example, AP’s Pulitzer-winning reporting uncovered a vast slave network onboard fishing boats off the coast of Papua New Guinea. Although many of these ships were raided and hundreds of slaves were freed, other ships managed to escape. It’s not too difficult to evade capture in the open ocean, but DigitalGlobe — a satellite company that provides Google Earth imagery — tracked down the rogue ships.

DigitalGlobe has also engaged in an effort to track slavery on fishing ships on Ghana’s Lake Volta. By inviting the public to pore through its satellite data, more than 80,000 ships, buildings, and fishing cages believed to be related to 35,000 enslaved children in the region have been tagged and mapped. As CEO Jeff Tarr stated, “You can’t hide from space.”

Satellite imagery has uncovered slave labor in the Sundarbans mangrove forest in Bangladesh, where children clear the mangrove forests — critical to the ecosystem in that part of the world — as part of their forced labor processing fish. Still other work is being undertaken to observe mining sites that use slave labor, as well as numerous other industries where slavery is commonplace.

A new approach

The region known as the “Brick Belt,” where slave labor is frequently used, is outlined in red.  Boyd et al., 2018

While these attempts are all laudable, they represent just the beginning of a new satellite-based strategy to combat slavery. One of the biggest leaps forward in the use of satellites to fight slavery is being undertaken by Doreen Boyd of the Rights Lab at Nottingham University. Her work focuses on the so-called “Brick Belt” that stretches across Pakistan, India, Nepal, and Bangladesh. This part of the world contains a large number of brick kilns. In the Indian portion of the Brick Belt alone, an estimated 70 percent of brick kilns use slave labor.

In Boyd’s previous work, she used crowdsourcing and satellite data to gain an estimate of the number of brick kilns in the region. The number she reached was 55,387 kilns, a significant portion of which utilize slave labor, if expert estimates are to be believed.

Results from an AI trained to identify brick kilns. The proposed brick kilns are surrounded by yellow boxes.  Foody et al., 2019

This is useful work: the problem of slavery can’t be tackled in the region without identifying the kilns’ locations, and one of these kilns has already been raided, resulting in the freedom of 24 slaves. But more work is needed. Her previous study didn’t identify the locations of all brick kilns, only a sample, and the region is too large to pore through manually. Crowdsourcing takes time and resources to complete and verify, and even if all brick kilns in the region were investigated, more would surely crop up in the future. Therefore, Boyd began to work on developing an A.I. that could identify brick kilns automatically from satellite data.

Machine-learning algorithms like the one Boyd used work by having humans “teach” the algorithm what it’s looking for. Humans first tagged brick kilns in a small sample of satellite imagery; these kilns are often circular or oval-shaped with a large chimney in the center. This sample was then fed to the machine-learning algorithm. Then, using the patterns identified by humans, the algorithm searched through other satellite data and pointed out places that matched the pattern. If the algorithm mistakenly selected areas that merely resembled brick kilns, those errors were used to refine the algorithm, teaching it what a brick kiln is not.

In the small slice of the Brick Belt that Boyd and colleagues analyzed, their machine-learning algorithm identified 95.08 percent of the brick kilns in the region. While missing any potential sites of slavery — even just 5 percent — is a serious issue, the algorithm can be tweaked to overestimate the number of brick kilns. The advantage of this approach is that, although it would flag many regions that are not brick kilns, it would be far less likely to miss any actual kilns.
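To make that “teach, then scan, then loosen the threshold” idea concrete, here is a minimal sketch of how such a detector could be wired up in Python. It is not the study’s actual pipeline: the tile features, labels and threshold value below are invented stand-ins, and scikit-learn’s random forest is just one of many classifiers that could play this role.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: in practice each row would be features extracted from a
# satellite image tile, and labels would come from the human "teaching" step.
X_train = rng.normal(size=(500, 64))      # 500 labeled tiles, 64 features each
y_train = rng.integers(0, 2, size=500)    # 1 = brick kiln, 0 = not a kiln
X_new = rng.normal(size=(100, 64))        # unlabeled tiles to scan

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probability that each new tile contains a kiln.
p_kiln = model.predict_proba(X_new)[:, 1]

# Lowering the decision threshold is the "overestimate" tweak: more tiles get
# flagged for human review, but real kilns are much less likely to slip through.
flagged = np.where(p_kiln > 0.3)[0]       # 0.5 would be the neutral cut-off
print(f"{len(flagged)} of {len(X_new)} tiles flagged for review")

The threshold is the dial that trades false alarms against missed kilns; in an anti-slavery setting, erring on the side of too many flags is the safer choice.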

The future of the anti-slavery fight

It’s important to note that although slave labor at brick kilns represents just a small segment of the global slave trade, using an A.I.-driven approach as Boyd and colleagues did is a generalizable strategy. Slavery can appear in many forms, but it frequently leaves a physical mark upon the earth. Mining, fishing, brick-making, and other industries that commonly use slave labor can’t be easily hidden from satellites in space. Previous efforts, such as tracking fishing ships or uncovering slave labor in Bangladeshi mangroves, have relied on human volunteers, who are subject to error and can only work so fast. It may very well be the case that future anti-slavery efforts will rely on A.I. to root out slavery regardless of how it manifests.


(For the source of this, and many other interesting articles, please visit: https://bigthink.com/technology-innovation/modern-slavery-uncovered-by-satellites/)

++++++++++

Mastodon bones push arrival of early humans in America back by 115,000 years

Artist depiction of an American mastodon.  (Credit: Charles R. Knight, via Wikimedia Commons).

When did humans arrive in America? It’s been a hot topic in scientific circles for the last 20 years or so, pegged anywhere from 13,500 to 16,500 years ago. Now new research from the Cerutti Mastodon Discovery, an archeological site in Southern California, blows those estimates away by suggesting early hominids arrived on the continent as early as 130,000 years ago. To give some perspective, it’s believed humans migrated out of Africa 125,000 years ago at the earliest.

The crux of the argument, made by scientists led by the San Diego Natural History Museum, is the sharply broken bones, tusks and molars of a mastodon found at a paleontological site first discovered in 1992 as a result of a freeway expansion. Also found buried at the site were large stones that appear to have been used as hammers and anvils. Further research showed that the bones were broken while still fresh by blows from the hammer stones, apparently aimed strategically to get at any marrow inside. With such evidence of human activity, the site suddenly became an archeological dig.

Mastodon bones and tusks found next to what are believed to be stone hammers used by early humans.

At the time of the find, dating techniques weren’t sophisticated enough to reliably assign an age to the bones and, by association, to the tool-users who acted upon them. State-of-the-art radiometric dating equipment was used in 2014, however, to determine a more reliable and definitive age for the mastodon bones: around 130,000 years, give or take 9,400 years. At the same time, experts studying microscopic damage to the bones and rock determined it was indeed consistent with human activity.

The researchers even went so far as to conduct experiments on the bones of large mammals, including elephants, to study breakage patterns and determine how such fractures could be made by early humans. They discovered that a blow from a hammer stone on a fresh elephant limb produced the same patterns of breakage as on the mastodon bones found at the site.

The results of all of this research have now been published in the journal Nature.

“This discovery is rewriting our understanding of when humans reached the New World,” said Judy Gradwohl, president and chief executive officer of the San Diego Natural History Museum. “The evidence we found at this site indicates that some hominin species was living in North America 115,000 years earlier than previously thought. This raises intriguing questions about how these early humans arrived here and who they were.”

For decades, the prevailing theory for human migration to America was via the Beringia land bridge over the Bering Strait from Siberia, dating to around 13,500 years ago. Later discoveries challenged that idea, pushing the arrival of humans back by several millennia. The discovery of the scientists at the Cerutti Mastodon site opens up more questions than it answers, starting with who these early hominins were, how they got here, and what happened to them.

“When we first discovered the site, there was strong physical evidence that placed humans alongside extinct Ice Age megafauna,” said Tom Deméré, curator of paleontology and director of PaleoServices at the San Diego Natural History Museum, as well as an author on the paper. “This was significant in and of itself and a ‘first’ in San Diego County. Since the original discovery, dating technology has advanced to enable us to confirm with further certainty that early humans were here much earlier than commonly accepted.”

Source: San Diego Natural History Museum

(For the source of this, and similarly important articles, please visit: https://newatlas.com/early-humans-arrive-america/49243/)

++++++++++

Bold study claims humans may have arrived in Australia 120,000 years ago

A bold new study suggests that Aboriginal Australians have lived on the southern continent for as long as 120,000 years – almost twice as long as previously thought. (Credit: lucidwaters/Depositphotos).

Australia’s Aboriginal population is said to be the oldest continuing civilization on Earth – but just how old is that? It’s currently believed that Aboriginal ancestors made their way to Australia as long as 65,000 years ago, but new evidence uncovered at a dig site in the continent’s southeast may push the timeline back much further. If the site does turn out to be human-made, it suggests that people have been living in Australia for as long as 120,000 years.

The place of interest, known as the Moyjil site, is located in the city of Warrnambool, Victoria. Archaeologists have been investigating the area for over a decade, and the basis for these extraordinary claims is a mound of materials including sand, seashells and stones.

That might not sound like much, but the scientists suggest this is what’s known as a midden – essentially, an ancient landfill. The remains of fish, crabs and shellfish have been found in the mound, which may be all that remains of long-eaten meals, while charcoal, blackened stones and other features may be all that’s left of ancient fireplaces.

But the really intriguing part of the site is its age. If Moyjil does turn out to be a human site, it could force us to rewrite not just the history of Australian occupation but our understanding of human migration worldwide.

“What makes the site so significant is its great age,” says John Sherwood, an author of the study. “Dating of the shells, burnt stones and surrounding cemented sands by a variety of methods has established that the deposit was formed about 120,000 years ago. That’s about twice the presently accepted age of arrival of people on the Australian continent, based on archaeological evidence. A human site of this antiquity, at the southern edge of the continent, would be of international significance because of its implications for the movement of modern humans out of Africa.”

But there are quite a few caveats to these claims. For one, there’s every chance that the mounds aren’t middens at all, but natural formations of some kind. Definitive proof of human occupation from that era, such as tools or bones, has yet to be found.

On top of that, it doesn’t quite make sense within the current narrative. Genetic studies have shown that Aboriginal people only split off from other human populations about 75,000 years ago, after their ancestors migrated out of Africa, through Southeast Asia into Australia.

The oldest known definitive proof of humans on the continent consists of artifacts dated to 65,000 years ago, found in Kakadu National Park, along Australia’s northern coast. This makes sense, given it’s close to the islands people are thought to have used to cross over.

But the Moyjil site is on the complete opposite side of the continent, and it’s hard to believe humans appeared that far south, at a time when they were otherwise believed to be more or less restricted to Africa. Humans aren’t thought to have even entered East Asia before about 100,000 years ago.

The researchers acknowledge the weight of the claims, and say they’re working to continue examining the Moyjil site for further evidence of human occupation, and hope others will do the same.

“We recognize the need for a very high level of proof for the site’s origin,” says Sherwood. “Within our own research group the extent to which members believe the current evidence supports a theory of human agency ranges from ‘weak’ to ‘strong.’ But importantly, and despite these differences, we all agree that available evidence fails to prove conclusively that the site is of natural origin. What we need now is to attract the attention of other researchers with specialist techniques which may be able to conclusively resolve the question of whether or not humans created the deposit.”

The research was published in the journal Proceedings of the Royal Society of Victoria.

Source: Deakin University

(For the source of this, and many other quite interesting articles, please visit: https://newatlas.com/dig-site-australia-humans-moyjil/58886/)

++++++++++

Amelia Earhart mystery may be solved, says scientist

++++++++++

More evidence lack of sleep drives Alzheimer’s progression

A new study solidifies the link between sleep disruption and an increase in the toxic proteins that are thought to contribute to the onset of Alzheimer’s disease. (Credit: photographee.eu/Depositphotos).

A new study from researchers at Washington University School of Medicine in St. Louis has revealed further evidence of how sleep deprivation can drive the spread of toxic Alzheimer’s-inducing proteins throughout the brain. The study bolsters the growing hypothesis suggesting sleep disruption plays a major role in the progression of neurodegenerative disease.

Over the last year or two there have been several notable studies published investigating how poor sleep seems to be fundamentally linked to neurodegenerative diseases such as Alzheimer’s. Prior work has clearly demonstrated how just one night of disrupted sleep can increase accumulations in the brain of a protein called amyloid-beta, one of the central pathological drivers of Alzheimer’s disease.

Now sleep researchers have turned their focus towards the other major toxic protein often implicated in Alzheimer’s pathology – tau. Alongside the amyloid clumps, often hypothesized to be the driver of Alzheimer’s-induced brain damage, tau proteins are also implicated as being damaging. These abnormal tau clumps, called neurofibrillary tangles, are often identified in neurodegenerative disease.

A recent study from the Washington University School of Medicine in St. Louis revealed higher levels of tau proteins were identified in human subjects who reported disrupted sleep patterns. It was unclear from that research whether the sleep disruptions preceded or followed these pathological brain changes. Now, a new study from the same team has revealed strong evidence suggesting sleep disruption does indeed directly cause tau protein levels to rise and more rapidly spread through the brain.

The new research describes several experiments, in both mice and humans, that clearly establish tau levels rising as a result of sleep deprivation. Tests in humans revealed a single sleepless night correlated with tau levels in cerebrospinal fluid rising about 50 percent. These results were also observed in mouse models subjected to extensive stretches of sleep deprivation.

The researchers also investigated whether sleep deprivation accelerates the spread of toxic tau neurofibrillary tangles. Two groups of mice were seeded with neurofibrillary tangles in their hippocampi, with one group allowed to sleep according to normal patterns, while the other group was kept awake for long periods every day.

After four weeks, the mice subjected to sleep deprivation showed significantly greater spread and growth of the tau tangles, compared to the well-rested animals. These increased neurofibrillary tangles were also found in brain areas similar to those seen in human subjects suffering from Alzheimer’s disease.

“The interesting thing about this study is that it suggests that real-life factors such as sleep might affect how fast the disease spreads through the brain,” says David Holtzman, senior author on the new study. “We’ve known that sleep problems and Alzheimer’s are associated in part via a different Alzheimer’s protein – amyloid beta – but this study shows that sleep disruption causes the damaging protein tau to increase rapidly and to spread over time.”

Despite the robust research described in the new study, there are still several limitations to how the conclusions can be interpreted. For example, it is unclear how long-lasting these tau spikes actually are. Does a good night’s sleep clear out the increased amyloid and tau load caused by a bad night’s sleep? Does this even play a major role in the slow, long-term onset of diseases such as Alzheimer’s? There is growing debate over whether tau and amyloid are even the right targets for understanding the pathogenic origins of Alzheimer’s disease.

Holtzman is open about the limitations of his research. However, he suggests that if the outcome is that people pay more attention to their sleep cycles, then that will undoubtedly be beneficial.

“Our brains need time to recover from the stresses of the day,” says Holtzman. “We don’t know yet whether getting adequate sleep as people age will protect against Alzheimer’s disease. But it can’t hurt, and this and other data suggest that it may even help delay and slow down the disease process if it has begun.”

The new study was published in the journal Science.


(For the source of this, and many other important similar articles, please visit: https://newatlas.com/sleep-deprivation-alzheimers-dementia-tau/58201/)

++++++++++

New insight into how lack of quality sleep is linked to Alzheimer’s disease

The red and orange shades illustrate the areas in the brain that display higher levels of toxic proteins aggregating in relation to reduced amounts of slow-wave sleep (Credit: Brendan Lacey).

Adding to a growing body of research associating sleep quality with the development of dementia and Alzheimer’s disease, a new study from the Washington University School of Medicine in St. Louis has homed in on the specific sleep phase that, when disrupted, can be linked to early stages of cognitive decline.

Sleep is important. That is something we know for sure. More recently a series of studies have been revealing compelling associations between disrupted sleep and neurodegenerative diseases such as Alzheimer’s. Last year it was discovered that sleep deprivation can directly lead to an increase in amyloid-beta accumulations in the brain, one of the central pathological observations seen in people with Alzheimer’s disease.

A new study is further elucidating the relationship between sleep and Alzheimer’s. The hypothesis behind the research is that decreased slow-wave sleep may correlate with increases in a brain protein called tau, which alongside amyloid-beta has been found to be significantly linked to the cognitive decline associated with Alzheimer’s disease.

The researchers examined the sleep patterns of 119 subjects over the age of 60, the majority of whom were cognitively healthy with no signs of dementia or Alzheimer’s. For a week the subjects’ sleep patterns were monitored using sensors and portable EEG monitors. Tau and amyloid levels were also tracked in all subjects using either PET scans or spinal fluid sampling.

The results revealed that those subjects suffering from lower levels of slow-wave sleep displayed higher volumes of tau protein in the brain. Slow-wave sleep is the deepest phase of non-rapid eye movement sleep and this stage of a person’s sleep cycle has been strongly linked to memory consolidation, with many researchers also suggesting slow-wave sleep is vital for maintaining general brain health.

“The key is that it wasn’t the total amount of sleep that was linked to tau, it was the slow-wave sleep, which reflects quality of sleep,” explains Brendan Lucey, first author on the new study. “The people with increased tau pathology were actually sleeping more at night and napping more in the day, but they weren’t getting as good quality sleep.”

Huge questions still remain unanswered, though, particularly when trying to discern whether bad sleep is ultimately a cause, or a consequence, of conditions such as Alzheimer’s. The study does clearly note a significant limitation of the conclusion: an inability to establish whether sleep changes precede, or follow, any pathological changes in the brain.

Age-related neurodegenerative diseases are inarguably more complicated than simply being the effect of years of bad sleep. However, the researchers do suggest sleep disruptions may be an effective early-warning tool to help doctors spot patients in the earliest, pre-clinical stages of cognitive decline.

“What’s interesting is that we saw this inverse relationship between decreased slow-wave sleep and more tau protein in people who were either cognitively normal or very mildly impaired, meaning that reduced slow-wave activity may be a marker for the transition between normal and impaired,” says Lucey. “Measuring how people sleep may be a noninvasive way to screen for Alzheimer’s disease before or just as people begin to develop problems with memory and thinking.”

The new study was published in the journal Science Translational Medicine.

Source: Washington University School of Medicine in St. Louis


(For the source of this, and other related articles, please visit: https://newatlas.com/sleep-slow-wave-alzheimers-dementia/57968/)

++++++++++

Researchers discover whether genes or social interaction shape personality.

  • Scientists looked at pairs of people who looked like each other but were not twins.
  • The results showed that genetics plays a stronger role in personality formation than how alike people were treated by others.
  • Behaving similarly is a stronger social glue than physical resemblance.

People have many misconceptions and strange theories about twins and people who look alike. One great one courtesy of the Internet claimed that Nic Cage is actually a vampire on account of a Civil-War-era photo of a man looking remarkably like Cage. Another, more down-to-earth conjecture about twins that’s been discussed considerably by researchers has been the idea that the personalities of identical twins would be similar because they get treated the same way by others on account of looking alike. It turns out that’s also a myth, finds a new study.

In an ongoing project to identify the factors that affect personality, researchers looked at just how much effect social interaction with others can have on people who look the same. The team, led by Nancy L. Segal, professor of psychology and director of the Twin Studies Center at California State University, Fullerton, started off by connecting with the Canadian photographer François Brunelle, who has spent many years photographing unrelated people who closely resemble each other.

credit: François Brunelle

By analyzing 45 pairs of look-alikes, Professor Segal wanted to understand whether their personalities were as similar as those of identical twins. If they didn’t show many traits in common, she reasoned, it would mean that genetic factors contribute more strongly to personality formation.

The study participants (called U-LAs, for “unrelated look-alikes”) had a mean age of 42.42 years, ranging between 16 and 84 years old. Each pair completed the French Questionnaire de Personnalite au Travail, which produces scores on five measures of personality: stability, openness, extraversion, agreeableness and conscientiousness. The subjects also answered items from the widely used Rosenberg Self-Esteem Scale. Social relatedness was gauged via the Social Relationship Inventory, adapted from the Twin Relationship Survey (completed by twins in the Minnesota Study of Twins Reared Apart).

Segal discovered that the participants showed very little similarity in either personality traits or self-esteem. This proves, according to Segal, that the similar personalities of identical twins arise from their shared genes. It’s nature’s doing. Genes largely shape personality and self-esteem, “rebutting the notion that personality resides in the face,” as the research paper states.

++++++++++

Progressive America would be half as big, but twice as populated as its conservative twin.

  • America’s two political tribes have consolidated into ‘red’ and ‘blue’ nations, with seemingly irreconcilable differences.
  • Perhaps the best way to stop the infighting is to go for a divorce and give the two nations a country each.
  • Based on the UN’s partition plan for Israel/Palestine, this proposal provides territorial contiguity and sea access to both ‘red’ and ‘blue’ America.

If more proof were needed that the U.S. is two nations in one, it was offered by the recent mid-term elections. Democrats swept the House, but Republicans managed to increase their Senate majority. There is less middle ground, and less appetite for compromise, than ever.

To oversimplify America’s electoral divide: Democrats win votes in urban, coastal areas; Republicans gain seats in the rural middle of the country. Those opposing blocs consolidated into ‘red’ and ‘blue’ states decades ago.

Occasionally, and often after tightly contested presidential elections, that divide is translated into a cartographic meme that reflects the state of the nation.

Jesusland vs. the U.S. of Canada

In 2004, this cartoon saw the states that had voted for Democratic presidential candidate John F. Kerry join America’s northern neighbor to form the United States of Canada. The states re-electing George W. Bush were dubbed Jesusland.

Trumpistan vs. Clintonesia

 

Trumpistan is a perforated continent, Clintonesia is a disjointed archipelago.  Image: The New York Times.

In 2016, these two maps disassembled the U.S. into Trumpistan, a vast, largely empty and heavily perforated land mass; and Clintonesia, a much smaller but more densely populated archipelago whose biggest bits of dry land were at the edges, with a huge, empty sea in the middle.

Soyland vs. the FSA

Writing in The Federalist, Jesse Kelly in April this year likened America to a couple that can’t stop fighting and should get a divorce. Literally. His proposal was to split the country into two new ones: a ‘red’ state and a ‘blue’ state.

On a map accompanying the article, he proposed a division of the U.S. into the People’s Republic of Soyland and the Federalist States of America (no prizes for guessing Mr Kelly’s politics).

It’s a fairly crude map. For example, it includes Republican-leaning states such as Montana and the Dakotas in the ‘blue’ state for seemingly no other reason than to provide a corridor between the blue zones in the west and east of the country.

Mr Kelly admitted that his demarcational talents left some room for improvement: “We can and will draw the map and argue over it a million different ways for a million different reasons but draw it we must,” he wrote. “I suspect the final draft would look similar (to mine).”

Partition, Palestine-style

“No, this map won’t do,” comments reader Dicken Schrader. “It’s too crude and would leave too many members of the ‘blue’ tribe in the ‘red’ nation, and too much ‘red’ in the ‘blue’ state.”

Agreeing with the basic premise behind Mr Kelly’s map but not with its crude execution, Mr Schrader took it upon himself to propose a better border between red and blue.

Analyzing election maps from the past 12 years, he devised his own map of America’s two nations, “inspired by the original UN partition map for Israel and Palestine from 1947.” Some notes on the map:

  • To avoid the distortions of gerrymandering, it is based on electoral majorities in counties, rather than electoral districts.
  • As with the UN partition plan for Israel/Palestine, all territories of both states are contiguous. There are no enclaves. Citizens of either state can travel around their nation without having to cross a border.
  • The intersections between both nations are placed at actual interstate overpasses, so both states have frictionless access to their own territory.
  • In order to avoid enclaves, some ‘blue’ islands had to be transferred to ‘red’, and some ‘red’ zones were granted to the ‘blue’ nation. “This exchange is fair to both sides, in terms of area and population”.
  • Both nations have access to the East, West and Gulf Coasts, and each has a portion of Alaska.

Red vs. blue

Some interesting stats on these two new nations:

Progressive America (blue)

  • Area: 1.44 million sq. mi (3.74 million km2), 38% of the total U.S.
  • Population: 210 million, 64.5% of the total U.S.
  • Pop. Density: 146 inhabitants/sq mi (56/km2), similar to Mexico
  • Capital: Washington DC
  • Ten Largest Cities: New York, Los Angeles, Chicago, Houston, Phoenix, Philadelphia, San Antonio, San Diego, San Jose, Jacksonville

Conservative America (red)

  • Area: 2.35 million sq. mi (6.08 million km2), 62% of the total
  • Population: 115.4 million, 35.5% of the total
  • Pop. Density: 49 inhabitants/sq mi (19/km2), similar to Sudan
  • Capital: Dallas
  • Ten Largest Cities: Dallas, Austin, Fort Worth, Charlotte, Nashville, Oklahoma City, Louisville, Kansas City, Omaha, Colorado Springs.

What about the nukes?

‘Blue’ America would be roughly half the size of ‘red’ America but have almost double the population.

In terms of area, ‘blue’ America would be the 13th-largest country in the world, larger than México but smaller than Saudi Arabia. ‘Red’ America would be the 6th-largest country in the world, larger than India but smaller than Australia.

In terms of population, ‘blue’ America would be the 5th-most populous country in the world, with a larger population than Brazil but a smaller one than Indonesia. ‘Red’ America would be the 12th, with more people than Ethiopia but fewer than Japan.
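For readers who want to check the arithmetic, the densities in the lists above follow directly from the stated areas and populations. A few lines of Python reproduce them; the only ingredient added here is the square-mile-to-square-kilometer conversion factor.

# Quick sanity check of the figures given above (areas in square miles,
# populations as stated in the article).
SQ_MI_TO_KM2 = 2.59

nations = {
    "Progressive America": {"area_sq_mi": 1.44e6, "population": 210e6},
    "Conservative America": {"area_sq_mi": 2.35e6, "population": 115.4e6},
}

for name, d in nations.items():
    per_sq_mi = d["population"] / d["area_sq_mi"]
    per_km2 = d["population"] / (d["area_sq_mi"] * SQ_MI_TO_KM2)
    print(f"{name}: {per_sq_mi:.0f}/sq mi ({per_km2:.0f}/km2)")
# Prints roughly 146/sq mi (56/km2) and 49/sq mi (19/km2), matching the lists.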

For those who think this divorce would end the argument between both tribes, consider that both countries would still have to live next to each other. And then there’s the question of the kids. Or, in Mr Schrader’s translation to geopolitics: “Who gets the nukes?”

Many thanks to Mr Schrader for sending in this map.

Strange Maps #948

Got a strange map? Let me know at strangemaps@gmail.com.

(For the source of this, and many other interesting articles, please visit: https://bigthink.com/strange-maps/us-red-blue-partition-plan/)

++++++++++

The world’s oldest cave paintings were probably made by Neanderthals

For a long time, we thought our species was the only one that made art.

  By Patty Hamrick, Archaeology, New York University.

 

Every single culture engages in some kind of art, whether that’s telling stories, dancing, weaving elaborate textiles, cooking, making jewelry or pottery, or painting landscapes and portraits. It’s so common that creating art, known to social scientists as “symbolic behavior,” seems to be an important part of what it means to be human.

Language is a type of symbolic behavior. For example, the sounds that make up the word “chair” don’t have any connection to an actual chair. English speakers have just agreed to share this audible symbol to refer to the objective reality, and different languages use different sounds to symbolize the same thing. But when did symbolic behavior begin? That’s a question archaeologists have been trying to answer for as long as there have been archaeologists. One of our favorite ways to study this topic is through cave art.

Lions painted in the Chauvet Cave.  Wikimedia Commons.

Cave art includes paintings, carvings, and sculptures. Perhaps you’re already familiar with the magnificent horses of Lascaux, so popular that France built an exact replica of the entire cave for tourists, or you might know the beautiful Panel of the Lions in Chauvet Cave. This European cave art isn’t the oldest evidence of symbolic behavior, but it is the best-studied and largest collection.

Most European cave art dates to between 40,000 and 10,000 years ago. Chauvet’s paintings, formerly considered the oldest cave art, are 37,000 years old. In 2012, El Castillo, Spain, took top prize for the oldest cave art in the world, with one painting dated to 40,800 years ago. During the vast majority of this time period our own species, Homo sapiens, was the only human species in Europe, so archaeologists assumed that we must have been the artists. But a new study out earlier this year means that assumption may be wrong.

Neanderthals (Homo neanderthalensis, sometimes also called Homo sapiens neanderthalensis) lived in Europe, Asia, and the Middle East from around 430,000 years ago until they died out about 40,000 years ago. Despite their unintelligent reputation, Neanderthals were quite smart. We know that they used fire, made stone tools, and were excellent hunters.

New evidence suggests that Neanderthals may have independently practiced symbolic behavior. Neanderthals painted. In February 2018, researchers published an article in Science showing that some cave art is far too old to have been made by Homo sapiens. Dirk Hoffman of the Max Planck Institute for Evolutionary Anthropology and his team examined paintings from three caves in Spain: a red geometric shape, from La Pasiega, part of the same cave complex as El Castillo, which they dated to 64,800 years ago; a red hand outline, from Maltravieso, which they dated to 66,700 years ago; and an abstract red swath at Ardales, dated to at least 65,500 years ago. The dates are shocking, and not only because they trump the El Castillo painting by more than 20,000 years. When these three pieces were painted, there were no Homo sapiens anywhere in Europe. We didn’t arrive on the continent until around 44,000 years ago. That leaves Neanderthals as the only possible artists for these Spanish caves.

Skull of a ‘Neandertal man’ from the cavern of La Chapelle-aux-Saints (Correze), France. Page 438 of The Age of Mammals in Europe, Asia and North America (1910).

Because cave art has been studied since 1880, it might seem strange that we could have our image of the artists changed so dramatically now. Part of the problem is that it isn’t easy to date cave art. Carbon dating, which archaeologists use when we need to find out the age of most human artifacts, is not ideal for cave art for three reasons. Carbon dating requires carbon in the paint; black paint is sometimes made of carbon, but red paint is not. Second, carbon dating requires removing a small sample of the paint itself; archaeologists are often reluctant to destroy even a tiny part of these ancient and rare pieces of art. Finally, carbon dating is unreliable for objects older than 50,000 years, which all three of these pieces are.

That’s why Hoffman and his team used a different method, called uranium-thorium (U-Th) dating. U-Th dating, which is reliable as far back as 500,000 years ago, does not date the paintings themselves. Instead, it works on very thin mineral layers that slowly form on cave walls over thousands of years. Sometimes these crusts form directly on top of the art, sealing it in. The paintings underneath must have been there first, so archaeologists get a minimum age for the art by dating the mineral layer.
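As a rough illustration of the principle – not the lab’s actual calculation, which also corrects for the measured uranium isotope ratio and for any detrital thorium in the crust – the minimum age of a clean calcite layer can be estimated from how far its thorium-230 content has grown toward equilibrium with its uranium. The half-life and example ratio below are assumptions for the sake of the sketch.

import math

# Commonly cited half-life of thorium-230, in years (assumption for this sketch).
TH230_HALF_LIFE_YR = 75_584.0
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR

def uth_minimum_age(activity_ratio_230_238: float) -> float:
    """Closed-system age in years from a (230Th/238U) activity ratio, assuming
    no initial thorium and 234U/238U already at secular equilibrium."""
    if not 0.0 <= activity_ratio_230_238 < 1.0:
        raise ValueError("ratio must be in [0, 1) under these assumptions")
    return -math.log(1.0 - activity_ratio_230_238) / LAMBDA_230

# A crust whose ratio has climbed to about 0.45 works out to roughly 65,000
# years old – the same ballpark as the Spanish crusts (illustrative only).
print(round(uth_minimum_age(0.45)))

Because the crust formed after the painting, whatever age comes out of the dating is a floor, not a ceiling, on the age of the art beneath it.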

To be clear, this new discovery doesn’t mean that all of the cave art was made by Neanderthals. In fact, many of the most famous caves were painted only after Neanderthals went extinct. But this discovery does mean that perhaps Neanderthals should be included along with us as creators of symbolism. If so, it would drastically change our understanding of how Neanderthals behaved. Did they use language, another type of symbolic behavior? Did they have religion? Or music? Studying their art may help us get at the answers to these questions.

Until recently, the best case for Neanderthal symbolism came from the Châtelperronian jewelry, a collection of animal teeth, shells, and ivory pieces worn as beads. However, the Châtelperronian comes from the very end of the Neanderthals’ existence. They may have seen nearby Homo sapiens wearing jewelry and just copied what we were doing. But Hoffman’s study is revising what we know about Neanderthals. Our cousin-species may well have been creative artists, just like us.

(For the source of this, and other equally interesting articles, please visit: https://massivesci.com/articles/cave-art-neanderthal-painting/)

++++++++++

The Dalai Lama is important, so important that he might decide not to come back after his next death.

  • Tibetan monks from all over the world are scheduled to visit India to discuss the issues related to the next reincarnation of the Dalai Lama.
  • Some, including the Dalai Lama himself, have questioned if the institution should be continued.
  • The final decision will have far-reaching effects, since China is unlikely to let the monks have the last word on the matter.

The world’s most famous Buddhist, the Dalai Lama, is 83 years old. Questions about his next reincarnation are becoming uncomfortably pressing. However, a planned meeting of Tibetan monks in India, initially slated for this week, may change the future of the position – and of Tibetan Buddhism – forever, if the monks agree not to have him come back at all.

Why would they do a thing like that?

The Dalai Lama is not only the spiritual leader of Tibetan Buddhism but also, historically, the leader of the Tibetan state. Given the precarious situation in Tibet – most observers maintain it was invaded by China in 1950 and remains under brutal military occupation – the question of who comes next, and how that is determined, is extremely important for the Tibetan people, the Chinese government, and Tibetan Buddhists all over the world.

The stage has been set for a legitimacy crisis following the death of the current Dalai Lama, courtesy of the Chinese government, which really doesn’t like the Dalai Lama and has gone to great lengths to reduce his influence. Two years ago, China blockaded Mongolia after that country allowed him to visit. His image is banned in Tibet, and Chinese officials have called him a “wolf in monk’s clothing” to leave no doubt as to how they feel about him.

The most serious problem was created in 1995 when the second holiest Lama in Tibetan Buddhism, the Panchen Lama, was identified by the Dalai Lama and then arrested by the Chinese government, which declared its own Panchen Lama to replace the six-year-old prisoner. Since the Panchen Lama helps to pick the next Dalai Lama, to control one is to nearly assure control of the other.

In any case, China has already declared its right to pick the next Dalai Lama and requires all high-ranking Lamas to have a permit to reincarnate. It will undoubtedly choose its own Lama while announcing that the one the Tibetans choose is a pretender who didn’t fill out the paperwork. This isn’t without precedent, as the Qing Dynasty often intervened in the process of finding the next Dalai Lama during the 1700s.

Given the importance of the Dalai Lama to the Tibetan people and the Tibetan Government in Exile, it is understandable that they might want to take steps to ensure that the next one isn’t a puppet and is chosen in a way that is theologically satisfying.

How can the Tibetans do that? How does any of this work?

The Dalai Lama you know, Tenzin Gyatso, is the 14th Dalai Lama. He is part of a line of spiritual leaders going back 700 years. Each Dalai Lama is maintained to be the living incarnation of Avalokiteśvara, a Buddhist saint of compassion, who continuously returns to Earth to live a life of service to the Tibetan people. As an enlightened soul, they are not required to reincarnate but choose to in order to help reduce human suffering. Such beings are called Bodhisattvas and are highly respected in many branches of Buddhism.

The basic details of how the next Dalai Lama is found are well known. After his death, high-ranking monks, clergymen, and the Panchen Lama gather for a series of rituals and meditation. These events often take place at holy locations in Tibet. Using the information they collect from these practices, along with clues given to them by the sacred sites, they set out to find the reincarnation of the Dalai Lama.

When they think they’ve found him, the holy men subject the child to a series of tests, including having him try to identify which personal items placed in front of him were owned by his previous incarnation, to determine if they got it right. If they did, the new Dalai Lama is enthroned shortly afterward.

As the reincarnation of a Bodhisattva, the Dalai Lama is assumed to have some control over the details of his reincarnations. He could also choose not to be reborn as a child but to incarnate in an already living person. Tenzin Gyatso has also said he might come back as a woman, and has suggested he would probably be born in India the next time around – if there is a next time.

What would they do instead?

One thing the monks will consider is whether they should alter some of the rituals involved in finding the next Dalai Lama, since getting unrestricted access to the holy sites in Tibet is unlikely. They could simply decide to change the rituals so that they don’t need the Panchen Lama or specific locations in Tibet to find the next reincarnation successfully.

They are also open to more radical options, however. One proposal that has been making the rounds is that the system of reincarnation should be replaced with one of nomination. The Dalai Lama himself endorsed the idea in an interview with Nikkei, saying: “One elder, truly popular and respected, can be chosen as Dalai Lama. I think sooner or later, we should start that kind of practice.”

He has also suggested that he might not come back at all as the institution is getting old. He once told the BBC:

There is no guarantee that some stupid Dalai Lama won’t come next, who will disgrace himself or herself…. That would be very sad. So, much better that a centuries-old tradition should cease at the time of a quite popular Dalai Lama.

If he decides to come back, the Dalai Lama will have plenty of issues to deal with right out of the gate. The question of how and if he should be the one to deal with them must be settled now. While few of the ideas above are pleasant ones, they may be the only things that can maintain the spiritual independence of Tibetan Buddhism.

(For the source of this article, and to watch some related videos, please visit: https://bigthink.com/culture-religion/dalai-lama-reincarnation/)

++++++++++

We’re studying collapsed civilizations so that ours can endure climate change

Paleoclimatologists are digging into the connections between the collapse of Maya Civilization and extreme droughts

Over 1,000 years ago, droughts plagued the Yucatán peninsula. The Yucatán was home to the Classic Lowland Maya Civilization, of pyramids and the number zero fame. Droughts occurred intermittently for centuries, from 200 to 1100 CE. This is an era of Mayan history typically split in two – the Classic (200-800 CE) and Terminal Classic (800-1100 CE) Periods. The droughts coincided with the widespread collapse of the Maya Civilization around 1100 CE.

The first scientific evidence of these droughts was discovered by Dr. David Hodell and other researchers at the University of Florida in 1995, using ancient sediments from Lake Chinchancanab. Since then, the droughts have been a popular example of how extreme climate fluctuations can impact society. However, the magnitude of the droughts has remained an elusive and difficult question to answer.

Further, archaeological research has revealed a much more complex history of the Classic Maya reorganization than originally thought, suggesting the droughts were not the only factor destabilizing the Classic Maya. Archaeologists have found evidence suggesting the Maya Civilization experienced social changes including class conflicts, warfare, invasion, and ideological change. Twenty years after the discovery of drought evidence in the Yucatán, researchers returned to Lake Chinchancanab to investigate a seemingly simple question: just how dry was it?

In the pioneering Yucatán drought research, Hodell and his team sampled sediment cores from the bottom of Lake Chinchancanab that were thousands of years old. In the cores, they found layers of gypsum, a white chalky mineral often used in plaster or cement. Because gypsum can only form in a lake setting when a large amount of evaporation has occurred, the presence of gypsum in lake sediments is evidence of periods in the past when lake levels dropped significantly — signs of past drought events.

A photo of a stepped Mayan pyramid with a weathered stone sculpture of a serpent's head in the foreground.
Photo by Marv Watson on Unsplash

Interestingly, archaeological records show that these periods of droughts coincided with sociopolitical unrest in the region, including increased warfare and internal violence of the Lowland Maya. This sediment core from Lake Chinchancanab was the first quantitative link between climate and instability of the Classic Maya.

Hodell is now based at the University of Cambridge, but he and his group are still interested in human-climate connections. In this new study, his team went back to Lake Chinchancanab. The researchers are still focusing on the lake’s gypsum, but now they are looking at the ancient lake water that has been trapped in the gypsum since the droughts. The researchers developed new chemistry and modeling techniques to assess how extreme the Classic and Terminal Classic droughts were.

By measuring the chemistry of the trapped ancient lake water, they established constraints on what the chemistry and depth of the lake would have been during the droughts. With these constraints, the researchers developed a theoretical model of a lake. They tested different climate scenarios to see how the lake chemistry would respond, until the modeled lake chemistry matched the ancient water in the gypsum. It’s the scientific equivalent of flipping light switches on and off until you figure out which one controls the light you actually want.
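In spirit, that trial-and-error matching is a simple inverse problem: run a forward model of the lake under many candidate climates and keep the one whose predicted chemistry matches the measurement. The toy sketch below shows only the shape of that approach; the model, the target value and every number in it are invented for illustration and are not the study’s actual hydrology-isotope model.

import numpy as np

# A deliberately toy forward model: lake water gets isotopically "heavier" as
# rainfall drops and evaporation dominates. All constants are made up.
def toy_lake_d18o(rainfall_fraction):
    evaporative_enrichment = 6.0 * (1.0 - rainfall_fraction)  # arbitrary scaling
    return -2.0 + evaporative_enrichment                      # arbitrary baseline

target_d18o = 1.0   # pretend value measured in water trapped in the gypsum

# "Flip the switches": scan candidate rainfall levels and keep the best match.
candidates = np.linspace(0.2, 1.0, 81)     # 20% to 100% of modern rainfall
misfit = [abs(toy_lake_d18o(f) - target_d18o) for f in candidates]
best = candidates[int(np.argmin(misfit))]
print(f"best-fit rainfall: {best:.0%} of modern (a {1 - best:.0%} reduction)")

The real study did this with a physically grounded lake model and several chemical tracers at once, but the logic – adjust the climate input until the modeled water matches the ancient water – is the same.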

The researchers ultimately found that rainfall decreased by 50 percent on average compared to today, and as much as 70 percent during the most intense drought conditions. Humidity decreased by two to seven percent. That decrease in rainfall is the equivalent of Seattle becoming as dry as Tucson. “We don’t really know what changes in relative humidity might be in that past, because we never really had a tool to constrain it before,” says Thomas Bauska, a co-author of the study and researcher at Cambridge University. “[The results] do tell us that the Yucatán was experiencing dry season conditions during a much longer period of the year.”

These new results and techniques can open many doors. Much of paleoclimate research relies on qualitatively studying past climate records rather than measuring past climate. That’s useful, but it keeps us from asking the kinds of questions that precise, quantitative data can address. It’s the difference between saying “wow, it was really dry in 1100 CE” and saying “there was a 50-70 percent decrease in rainfall compared to today.”

For example, we can learn a lot from the ancient Maya about human resilience and adaptation to climatic extremes. Previous studies in this region showed that during the first Classic Period droughts, the Maya adapted their agricultural practices by rotating their crops to maize (corn) varieties that required less water, but were unsuccessful at adapting in later droughts. But these results were based on qualitative paleoclimate records; hopefully providing more exact estimates of drought intensity will lead to a better understanding of how the Classic Maya reacted in the face of extreme climatic change.

This civilization flourished in the not too distant past: the demise of the Classic Maya occurred around 900 – 1200 CE (though this collapse doesn’t mean the Maya disappeared – the remaining population reorganized and formed new communities). Cambridge University, where this research was done, was founded in 1209 CE. But there’s still so much that scientists don’t know about Mayan history. More concretely understanding the past climate changes of this region is one monumental step toward understanding how the Maya interacted with their environment.

(For the source of this, and many other interesting articles, please visit: https://massivesci.com/articles/mayan-empire-collapse-drought-climate-change/)

++++++++++

Ancient DNA discovery reveals previously unknown population of native Americans

An artist’s impression of the camp in central Alaska where the fossil was unearthed. (Credit: Illustration by Eric S. Carlson in collaboration with Ben A. Potter).

A few years ago, the fossilized remains of a baby girl were uncovered in a harsh and isolated part of central Alaska. The remains were dated at 11,500 years old, and a new DNA study has now revealed not only an incredible insight into the origins of human migration into North America, but also the existence of a previously undiscovered population of humans that has been named the “Ancient Beringians”.

The conventional theory about how humans migrated into the Americas suggests that sometime between 15,000 and 30,000 years ago, humans wandered from Asia into North America across a land bridge called Beringia that connected the two continents.

This latest discovery reveals a distinctive and previously unknown human lineage that surprised researchers, who were expecting to find a genetic profile that matched northern Native American people. The study of this ancient child’s DNA pointed to an entirely new population of people, separate from those that ultimately spread throughout the rest of North America.

An estimated timeline showing the migration of humans into the Americas

The researchers suggest two possible theories to explain this new lineage. Either two separate groups of people crossed the land bridge into the Americas over 15,000 years ago, or one group crossed, and then split into two entirely independent populations. Closer genetic sequencing suggests the latter outcome is the most likely, but why and how this Ancient Beringian population remained so genetically isolated and distinct for so many subsequent years remains a mystery.


The study also posits that a type of “back migration” occurred, possibly around 6,000 years ago, as northern Native American populations spread back up into Alaska and either absorbed or replaced the Beringian population, resulting in a distinct Alaskan native population called the Athabascan.

“There is very limited genetic information about modern Alaska Athabascan people,” says Ben Potter, one of the lead authors on the study. “These findings create opportunities for Alaska Native people to gain new knowledge about their own connections to both the northern Native American and Ancient Beringian people.”

The new study was published in the journal Nature.

Source: University of Alaska Fairbanks

(For the source of this, and many other interesting articles, please see: https://newatlas.com/ancient-dna-native-american-migration-beringian/52831/)

++++++++++

Or how I learned to stop worrying and love my tsundoku.

  • Many readers buy books with every intention of reading them only to let them linger on the shelf.
  • Statistician Nassim Nicholas Taleb believes surrounding ourselves with unread books enriches our lives as they remind us of all we don’t know.
  • The Japanese call this practice tsundoku, and it may provide lasting benefits.

I love books. If I go to the bookstore to check a price, I walk out with three books I probably didn’t know existed beforehand. I buy second-hand books by the bagful at the Friends of the Library sale, while explaining to my wife that it’s for a good cause. Even the smell of books grips me, that faint aroma of earthy vanilla that wafts up at you when you flip a page.

The problem is that my book-buying habit outpaces my ability to read them. This leads to FOMO and occasional pangs of guilt over the unread volumes spilling across my shelves. Sound familiar?

But it’s possible this guilt is entirely misplaced. According to statistician Nassim Nicholas Taleb, these unread volumes represent what he calls an “antilibrary,” and he believes our antilibraries aren’t signs of intellectual failings. Quite the opposite.

Living with an antilibrary

Umberto Eco signs a book. You can see a portion of the author’s vast antilibrary in the background.  (Photo from Wikimedia)

Taleb laid out the concept of the antilibrary in his best-selling book The Black Swan: The Impact of the Highly Improbable. He starts with a discussion of the prolific author and scholar Umberto Eco, whose personal library housed a staggering 30,000 books.

When Eco hosted visitors, many would marvel at the size of his library and assumed it represented the host’s knowledge — which, make no mistake, was expansive. But a few savvy visitors realized the truth: Eco’s library wasn’t voluminous because he had read so much; it was voluminous because he had so much more he desired to read.

Eco stated as much. Doing a back-of-the-envelope calculation, he found he could only read about 25,200 books if he read one book a day, every day, between the ages of ten and eighty. A “trifle,” he laments, compared to the million books available at any good library.
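
The arithmetic behind that figure is easy to reproduce. The tiny Python sketch below assumes a tidy 360-day reading year, which is what makes the total come out to exactly 25,200; with full 365-day years the count would be slightly higher.

    # Eco's back-of-the-envelope estimate: one book a day from age ten to eighty.
    reading_years = 80 - 10               # seventy years of reading
    books_per_year = 360                  # one book a day, rounded to a 360-day year
    print(reading_years * books_per_year) # 25200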

Drawing from Eco’s example, Taleb deduces:

Read books are far less valuable than unread ones. [Your] library should contain as much of what you do not know as your financial means, mortgage rates, and the currently tight real-estate market allows you to put there. You will accumulate more knowledge and more books as you grow older, and the growing number of unread books on the shelves will look at you menacingly. Indeed, the more you know, the larger the rows of unread books. Let us call this collection of unread books an antilibrary. [Emphasis original]

Maria Popova, whose post at Brain Pickings summarizes Taleb’s argument beautifully, notes that our tendency is to overestimate the value of what we know, while underestimating the value of what we don’t know. Taleb’s antilibrary flips this tendency on its head.

The antilibrary’s value stems from how it challenges our self-estimation by providing a constant, niggling reminder of all we don’t know. The titles lining my own home remind me that I know little to nothing about cryptography, the evolution of feathers, Italian folklore, illicit drug use in the Third Reich, and whatever entomophagy is. (Don’t spoil it; I want to be surprised.)

“We tend to treat our knowledge as personal property to be protected and defended,” Taleb writes. “It is an ornament that allows us to rise in the pecking order. So this tendency to offend Eco’s library sensibility by focusing on the known is a human bias that extends to our mental operations.”

These shelves of unexplored ideas propel us to continue reading, continue learning, and never be comfortable that we know enough. Jessica Stillman calls this realization intellectual humility.

People who lack this intellectual humility — those without a yearning to acquire new books or visit their local library — may enjoy a sense of pride at having conquered their personal collection, but such a library provides all the use of a wall-mounted trophy. It becomes an “ego-boosting appendage” for decoration alone. Not a living, growing resource we can learn from until we are 80 — and, if we are lucky, a few years beyond.

Tsundoku

Book swap attendees will no doubt find their antilibrary/tsundoku grow.  (Photo from Flickr)

I love Taleb’s concept, but I must admit I find the label “antilibrary” a bit lacking. For me, it sounds like a plot device in a knockoff Dan Brown novel — “Quick! We have to stop the Illuminati before they use the antilibrary to erase all the books in existence.”

Writing for the New York Times, Kevin Mims also doesn’t care for Taleb’s label. Thankfully, his objection is a bit more practical: “I don’t really like Taleb’s term ‘antilibrary.’ A library is a collection of books, many of which remain unread for long periods of time. I don’t see how that differs from an antilibrary.”

His preferred label is a loanword from Japan: tsundoku. Tsundoku is the Japanese word for the stack(s) of books you’ve purchased but haven’t read. Its morphology combines tsunde-oku (letting things pile up) and dokusho (reading books).

The word originated in the late 19th century as a satirical jab at teachers who owned books but didn’t read them. While that is the opposite of Taleb’s point, today the word carries no stigma in Japanese culture. It also differs from bibliomania, which is the obsessive collecting of books for the sake of the collection, not their eventual reading.

The value of tsundoku

Granted, I’m sure there is some braggadocious bibliomaniac out there who owns a collection comparable to a small national library, yet rarely cracks a cover. Even so, studies have shown that book ownership and reading typically go hand in hand to great effect.

One such study found that children who grew up in homes with between 80 and 350 books showed improved literacy, numeracy, and information communication technology skills as adults. Exposure to books, the researchers suggested, boosts these cognitive abilities by making reading a part of life’s routines and practices.

Many other studies have shown reading habits relay a bevy of benefits. They suggest reading can reduce stress, satisfy social connection needs, bolster social skills and empathy, and boost certain cognitive skills. And that’s just fiction! Reading nonfiction is correlated with success and high achievement, helps us better understand ourselves and the world, and gives you the edge come trivia night.

In her article, Jessica Stillman ponders whether the antilibrary acts as a counter to the Dunning-Kruger effect, a cognitive bias that leads ignorant people to assume their knowledge or abilities are more proficient than they truly are. Since people are not prone to enjoying reminders of their ignorance, their unread books push them toward, if not mastery, then at least an ever-expanding understanding of competence.

“All those books you haven’t read are indeed a sign of your ignorance. But if you know how ignorant you are, you’re way ahead of the vast majority of other people,” Stillman writes.

Whether you prefer the term antilibrary, tsundoku, or something else entirely, the value of an unread book is its power to get you to read it.

(For the source of this, and many other interesting articles, please visit: https://bigthink.com/personal-growth/value-of-unread-books/)

++++++++++

Panama
An indigenous woman in Ciudad Panamá, Panamá. 

The debate over who arrived in the New World first is a contentious one. Their identities aside, nobody can quite decide how those first Americans traveled or how they dispersed once they arrived. But now, a new study published in Cell, illuminating the genetic history of some of those early travelers, reveals a unifying thread.

An international team of scientists announced recently that the majority of people in Central and South America can be linked to a single ancestral lineage of humans who journeyed across the Bering Strait at least 15,000 years ago. After their journey southward into the New World, this source population broke into at least three branches, which diversified and spread, some of them back toward the north.

Two of those branches are new to science. One is unexpectedly connected to the Clovis people — who were thought to be the first Americans until the early 2000s — whereas the other links ancient North Americans to people who lived in Southern Peru and Northern Chile at least 4,200 years ago.

“These [findings] are fascinating as they open new gateways into archeological and genetic research,” explains co-author and Harvard Ph.D. candidate Nathan Nakatsuka to Inverse. “It was previously not known that the Clovis culture extended into South America, and it is incredible that these people were able to migrate all the way through North, Central, and South America. In addition, the new migration into the Southern Andes was not previously known, and we are unsure what historical events led to this.”

The majority of Central and South American ancestry arrived from at least three different streams of people.

Nakatsuka and his colleagues analyzed DNA from 49 ancient individuals who once lived in what is now Belize, Brazil, the Central Andes, and the southernmost parts of Chile and Argentina and died between 10,900 and 8,600 years ago. The team worked with government agencies and indigenous people to identify the samples, extract powder from skeletal material, and extract the DNA necessary to create double-stranded DNA libraries.

The use of DNA is one of the most novel aspects of this research. When studying the migration of ancient peoples, scientists often have to rely on indirect evidence, such as old footprints or lice.

This broad dataset allowed the team to link genetic exchanges between people in North and South America and confirm the common origin of North, Central, and South Americans. The analysis made it clear that the original “source” population, fresh off the Bering Strait, diversified before they spread into South America.

What surprised the study authors most was the genetic connection they found between the Clovis culture and South America. About 13,000 years ago, the Clovis were distributed across North America. Though they were long thought to be the first Americans, findings of even older remains stripped them of that title. In the new paper, the team links DNA from a Clovis boy who lived in Montana about 12,800 years ago to some of the data set’s oldest individuals, who lived much farther south, in modern-day Belize, Chile, and Brazil.

“This [previously unknown gene flow event] suggests that, surprisingly, the genetic ancestry of people who produced the Clovis culture expanded further south,” explains first author and Max Planck Institute for the Science of Human History researcher Cosimo Posth, Ph.D. to Inverse. “However, this ancestry was replaced at least by 9,000 years ago from another lineage, which left a long lasting population continuity until today, in multiple South American regions.”

The second previously unknown population links ancient individuals who lived on California’s Channel Islands to individuals who lived at least 4,200 years ago in Southern Peru and Northern Chile. Posth notes that “this might be linked to a population expansion in the region seen in the archeological record around that time.”

Clovis spearheads found in Iowa. 

Nakatsuka hopes the team’s research will stimulate further investigation into these genetic bonds and emphasizes the need for researchers to work respectfully with indigenous people. While strides have been made in the past two decades, archeology has a history of cultural imperialism.

“We hope the findings will facilitate greater collaboration and engagement with indigenous communities where the communities are deeply engaged and provide their insights to help drive the science and complement the studies with their own indigenous epistemologies,” Nakatsuka says.

“We must ensure that our studies benefit indigenous people, particularly those currently living in the areas near the ancient individuals from our studies.”

(For the balance of this article, plus a video, please visit: https://www.inverse.com/article/50624-genetic-flow-north-south-america/)

++++++++++

Just Months of American Life Change the Microbiome

++++++++++

Aldous Huxley’s book warns us of the dangers of mass media and passivity, and of how even an intelligent population can be driven to gladly choose dictatorship over freedom.

  • While other dystopias get more press, Brave New World offers us a nightmare world that we’ve moved steadily towards over the last century.
  • Author Aldous Huxley’s ideas on a light-handed totalitarian dictatorship stand in marked contrast to the popular image of a dictatorship that relies on force.

When most people think of what dystopia our society is sprinting towards, they tend to think of 1984, The Handmaid’s Tale, or The Hunger Games. These top-selling, well-known, and well-written titles are excellent warnings of worlds that could come to pass, and we would all do well to read them.

However, one lesser-known dystopian novel has done a much better job at predicting the future than these three books. Brave New World, written in 1931 by author, psychonaut, and philosopher Aldous Huxley, is well known but hasn’t quite had the pop-culture breakthrough that the other three did.

This is regrettable, as it offers us a detailed image of a dystopia that our society is not only moving towards but would be happy to have.

Good Ford!

For those who haven’t read it, Brave New World is the description of a nightmare society where everybody is perfectly happy all the time. This is assured through destroying the free will of most of the population using genetic engineering and Pavlovian conditioning, keeping everybody entertained continuously with endless distractions, and offering a plentiful supply of the wonder drug Soma to keep people happy if all else fails.

The World State is a dictatorship that strives to ensure order. It is managed by ten oligarchs who rely on an extensive bureaucracy to keep the world running. The typical person is conditioned to love their subservience and either be proud of the vital work they do or be relieved that they don’t have to worry about the problems of the world.

Global stability is ensured through the Fordist religion, which is based on the teachings of Henry Ford and Sigmund Freud and involves the worship of both men. The tenets of this faith encourage mass consumerism, sexual promiscuity, and avoiding unhappiness at all costs. The assembly line is praised as though it were a gift from God.

Huxley’s dystopia is especially terrifying in that the enslaved population absolutely loves their slavery. Even the characters who are smart enough to know what is going on (and why they should be concerned) are instead content with everything that is happening. Perhaps more terrifying than other dystopian novels, in Brave New World there is truly no hope for change.

The similarities between the world of today and the world of the book are many, even if our technology hasn’t quite caught up yet.

Genetic Engineering

While the human assembly line described in the first part of the story is still a far-off fantasy, the basic concepts that make it work are already here. Today, people make choices to influence the genetic makeup of their children regularly.

Prenatal screening has given many parents the ability to decide whether or not they wish to carry a disabled fetus to term. In Iceland, this has resulted in the near eradication of new cases of Down syndrome in the country: almost 100 percent of detected cases lead to an abortion shortly after diagnosis.

Similarly, testing for a child’s sex before birth is a well-known procedure that has led to a wide gender gap in many countries. Less well known is the process of sperm sorting, which allows a couple to choose the sex of their child as part of in-vitro fertilization.

The above examples suggest we’re open to soft eugenics already. Imagine what would happen if people could determine their child’s potential IQ before birth, or how rebellious they will be as a teenager. It would be difficult to suggest that the development of such technology would not be hailed as progress by those who could afford to use it. Huxley’s visions of a genetically perfected upper caste might be available soon.

As this article suggests, some choice in baby design is already here and more will be available soon.

Endless Distractions

The characters of Brave New World enjoy endless distractions between their hours at work. Various complex games have been invented, movies now engage all five senses, and there are even televisions at the feet of death beds. Nobody ever has to worry about being bored for long. The idea of enjoying solitude is taboo, and most people go out to parties every night.

In our modern society, most people genuinely can’t go thirty minutes without wanting to check their phones. We have, just as Huxley predicted, made it possible to abolish boredom and time for spare thoughts no matter where you are. This is already having measurable effects on our mental health and our brain structure.

Huxley wasn’t warning us against watching television or going to the movies occasionally; in this interview with Mike Wallace, he allows that TV can be harmless. He was warning against a constant barrage of distraction becoming more important in our lives than facing the problems that affect us. Given how stressful people find the idea of a tech-free day, and how we take our pop culture so seriously that it was targeted for use by Russian bots, he might have been onto something.

Drugs: A gram is better than a damn!

Brave New World‘s favorite pill, Soma, is quite the drug. In small doses it causes euphoria; in moderate doses, it causes enjoyable hallucinations; and in large doses, it is a tranquilizer. It is probably a pharmacological impossibility, but Huxley’s concept of a society that pops pills to eradicate any vestige of negative feelings and escape the doldrums of the day is very real.

While it seems odd to say that we are moving towards Brave New World in an era when official policy is opposed to drug use, Huxley would suggest we consider that opposition a blessing, since a dictatorship that encouraged drug use to zonk out its population would be a powerful, if light-handed, one.

While today we have a war on drugs, it is not a war on all drugs. Anti-depressants, a powerful tool for the treatment of mental illness, are so popular that one in eight Americans is on them right now. This doesn’t include the large number of Americans on tranquilizers or anti-anxiety medications, or those who self-medicate with alcohol or increasingly legal marijuana.

These drugs aren’t quite Soma, but they bear a striking resemblance in function and use.

(For the balance of this article please visit: https://bigthink.com/culture-religion/brave-new-world-prediction-novel/)

++++++++++

Ancient civilizations may have been more connected than previously thought

Energy consumption was used to measure the extent of globalization for early civilizations (Credit: ralwel/Depositphotos)

Ancient civilizations could have benefited from, and at times suffered from, belonging to an interconnected global economy, according to evidence presented in a newly published study. The international team behind the research hopes that the work could help present-day society learn from the mistakes of early globalism.

It is a sad but unavoidable fact that flourishing civilizations use up vast amounts of raw materials, and, subsequently, produce prodigious amounts of waste. By observing the amount of waste produced by an ancient society, researchers can estimate the amount of energy used, and attempt to track periods of growth, prosperity and decline.

This was the approach used in a new study, which attempted to determine whether historical civilizations ranging back 10,000 years were connected by a global economy. If this were the case, the fortunes of contemporary societies would be observed to rise and fall in tandem. This is known as synchrony.
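
One simple way to express synchrony is as a correlation between two regions’ energy-use histories: if their booms and busts line up, the correlation is close to 1. The short Python sketch below illustrates the idea with invented per-period values; the actual study worked from radiocarbon-dated refuse, not these numbers.

    # Toy illustration of "synchrony": do two regions' energy-use curves rise and fall together?

    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    region_a = [3, 5, 8, 12, 9, 6, 10, 14]   # relative energy use per period (hypothetical)
    region_b = [2, 4, 7, 11, 10, 5, 9, 13]

    print(f"Synchrony (correlation): {pearson(region_a, region_b):.2f}")  # near 1 = boom and bust together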

Joining an interdependent global network can bring significant benefits. This could include an increase in wealth from trade goods, and other resources that allow a society to increase its carrying capacity, or maximum population, beyond the limits of an isolated people.

However, it would also render the societies involved susceptible to the maladies of their partners. For example, open trade and movement of peoples could encourage the spread of disease, and lead to detrimental changes to a nation’s ecosystem and social system.

“The more tightly connected and interdependent we become, the more vulnerable we are to a major social or ecological crisis in another country spreading to our country,” said Rick Robinson, a postdoctoral assistant research scientist at the University of Wyoming, and co-author of the new study. “The more we are synced, the more we put all our eggs in one basket, the less adaptive to unforeseen changes we become.”

In the new study, researchers tracked the energy use of civilizations spread across the world using a combination of radiocarbon dating and historical records. Energy, in this case, refers to the amount of biomass that was converted into work and waste.

To determine the amount of energy used, the team carbon-dated the trash of ancient civilizations, including animal bones, charcoal, wood, and small seeds. The scientists were able to provide energy-use estimates for a diverse range of societies spanning from roughly 10,000 years in the past, to 400 years ago.

The more recent historical records were used to provide a frame of reference for the estimates made by the radiocarbon dating technique.

It was discovered that there were significant levels of long-term synchrony regarding the booms and busts of ancient civilizations. This suggests that there was a greater level of early globalization than had previously been believed.

(For the balance of this article please visit: https://newatlas.com/ancient-civilizations-global-trade-network-globalization/56406/)

++++++++++

The healthiest end-of-day sleep is 6 to 8 hours, but not more. Or less. As for napping, it depends on how you want to wake up.


It’s obvious that being exhausted is no fun unless you’re Keith Richards. For the rest of us, it’s clear we’re not at our best when we’re too tired, and it’s not much of a leap to understand it’s not a healthy state in which to live, especially for one’s cardiovascular system—heart issues and a greater incidence of stroke have both been associated with not getting enough sleep.

But how much sleep do you need to stay healthy? Depends on how you’re getting it. For some, it’s a matter of adjusting one’s bedtime habits and schedule to get the best rest. For others—people with excessively long commutes or those whose schedules or dispositions preclude extended stretches in bed—it’s about finding the most effective way to nap. Regardless, there are right ways and wrong ways to recharge your tired self.

The sweet spot for sleeping at the end of the day

At a recent European Society of Cardiology conference, researchers at the Onassis Cardiac Surgery Centre, Athens, Greece identified the cardiovascular sweet spot for end-of-day sleep. (We’re phrasing it that way to accommodate people who work regular night shifts.) It’s between six and eight hours a night.

To arrive at their conclusion, the researchers performed a meta-analysis of 11 previous sleep studies published in the last five years, using data collected from 1,000,541 subjects. The subjects were sorted into three groups. The reference group slept six to eight hours a night. Another group slept less than six hours, and the final group slept more than eight.

It turned out that those getting either less than six hours of sleep or more than eight were at significantly higher risk of developing or dying from coronary artery disease or stroke over the course of the next decade. (In the study, the average follow-up was 9.3 years.)

  • Subjects sleeping less than 6 hours were 11% more likely to develop cardiovascular issues
  • Subjects sleeping more than 8 hours were 33% more likely to develop cardiovascular issues (see the quick arithmetic sketch below)
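
To see what those relative figures mean in absolute terms, here is a quick back-of-the-envelope calculation in Python; the 5 percent baseline risk is an invented, purely illustrative number, not one reported by the meta-analysis.

    # Translate "11% / 33% more likely" into absolute risks for a hypothetical baseline.
    baseline_risk = 0.05                   # assumed 10-year risk for 6-8 hour sleepers (illustrative)
    short_sleepers = baseline_risk * 1.11  # 11 percent higher relative risk
    long_sleepers = baseline_risk * 1.33   # 33 percent higher relative risk

    print(f"6-8 hours: {baseline_risk:.2%}, <6 hours: {short_sleepers:.2%}, >8 hours: {long_sleepers:.2%}")

The point is simply that “X percent more likely” describes a relative change; the absolute difference depends on the baseline risk.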

It’s interesting to note that getting too much sleep is more dangerous than getting too little. Lead author Epameinondas Fountas sums up the findings: “Our findings suggest that too much or too little sleep may be bad for the heart. More research is needed to clarify exactly why, but we do know that sleep influences biological processes like glucose metabolism, blood pressure, and inflammation—all of which have an impact on cardiovascular disease.”

Sweet spots, literally and figuratively, for napping


For those without the option of a full night’s (day’s?) sleep, or who need to be at their best from beginning to end of very long days, naps are often the only option. A new industry is springing up in cities around the world to provide busy people cozy places in which to catch some Zzzs.

(For the balance of this interesting article please visit: https://bigthink.com/robby-berman/the-sleep-sweet-spot-including-one-you-can-rent/)

++++++++++

Sugar pill placebos as effective as powerful pain relieving drugs – for some

A new study has shown sugar pills can be effective pain relief – for those with the right brains (Credit: kavusta/Depositphotos)

Researchers at Northwestern University have shown that sugar pill placebos are as effective as any drug on the market for relieving chronic pain in people with a certain brain anatomy and psychological characteristics. Amazingly, such patients will even experience the same reduction in pain when they are told the pill they are taking has no physiological effect.

Previous studies have found that placebos can have an effect on a number of conditions, including sleep disorders, depression and pain. The new Northwestern study, however, has shown it is possible to predict which patients suffering from chronic pain will experience relief when given a sugar pill – and it’s basically all in their heads.

“Their brain is already tuned to respond,” says senior study author A. Vania Apkarian, professor of physiology at Northwestern University Feinberg School of Medicine. “They have the appropriate psychology and biology that puts them in a cognitive state that as soon as you say, ‘this may make your pain better,’ their pain gets better.”

Additionally, there’s no need for subterfuge because those primed to respond to a placebo will do so even when they know that’s what they’re getting.

“You can tell them, ‘I’m giving you a drug that has no physiological effect but your brain will respond to it,'” Apkarian adds. “You don’t need to hide it. There is a biology behind the placebo response.”

The study involved 60 patients experiencing chronic back pain who were randomly split into two arms. Subjects in one arm were given either a real pain relief drug or a placebo – those receiving the drug weren’t studied by the researchers. Those in the other arm received neither the drug nor a placebo and served as the control group.

Patients who received the placebo and reported a reduction in pain were examined and found to share similar brain anatomy and psychological traits: the right side of their emotional brain was larger than the left, and their cortical sensory area was larger than in placebo recipients who reported no reduction in pain. The researchers say the patients who responded to the placebo were also more emotionally self-aware, sensitive to painful situations, and mindful of their environment.

The researchers say their findings have a number of potential benefits, the most obvious being the ability for doctors to prescribe a placebo rather than addictive pharmacological drugs that may have negative long-term effects, while getting the same result. Prescribing a cheap sugar pill would also result in a significant reduction in healthcare costs for the patient and the health care system as a whole.

“Clinicians who are treating chronic pain patients should seriously consider that some will get as good a response to a sugar pill as any other drug,” says Apkarian. “They should use it and see the outcome. This opens up a whole new field.”

Additionally, the findings may make it possible to eliminate the placebo effect from drug trials, meaning fewer subjects would need to be recruited and it would be easier to identify the physiological effects of the drug under examination.

The team’s research is published in Nature Communications. Source: Northwestern University.

(Source, and for additional interesting article like this one, please visit: https://newatlas.com/placebo-sugar-pill-pain-relief/56319/)

++++++++++

10 reasons why Finland’s education system is the best in the world

by Mike Colagrossi –

According to a recent European study, Finland is the country which has best school results in Europe thanks to its teaching system. AFP PHOTO OLIVIER MORIN.

Time and time again, American students rank near the middle or bottom among industrialized nations when it comes to performance in math and science. The Program for International Student Assessment (PISA), administered by the Organization for Economic Cooperation and Development (OECD), routinely releases data showing that Americans are seriously lagging behind in a number of educational performance assessments.

Despite calls for education reform and a continual lackluster performance on the international scale, not a lot is being done or changing within the educational system. Many private and public schools run on the same antiquated systems and schedules that were once conducive to an agrarian society. The mechanization and rigid assembly-line methods we use today are spitting out ill-prepared worker clones, rudderless adults and an uninformed populace.

But no amount of pontificating will change what we already know. The American education system needs to be completely revamped – from the first grade to the Ph.D. It’s going to take a lot more than a well-meaning celebrity project to do that…

Many people are familiar with the stereotype of the hard-working, rote-memorizing, tunnel-visioned Eastern Asian study and work ethic. Many of these countries, such as China, Singapore, and Japan, routinely rank in the top spots in both math and science.

Some pundits point towards this model of exhaustive brain draining as something Americans should aspire to become. Work more! Study harder! Live less. The facts and figures don’t lie – these countries are outperforming us, but there might be a better and healthier way to go about this.

Finland is the answer. A country rich in intellectual and educational reform, it has initiated over the years a number of novel and simple changes that have completely revolutionized its educational system. Finnish students outrank the United States and are gaining on Eastern Asian countries.

Are they cramming in dimly-lit rooms on robotic schedules?  Nope. Stressing over standardized tests enacted by the government? No way. Finland is leading the way because of common-sense practices and a holistic teaching environment that strives for equity over excellence. Here are 10 reasons why Finland’s education system is dominating America and the world stage.

Photo By Craig F. Walker / The Denver Post

No standardized testing

Staying in line with our print-minded sensibilities, standardized testing is the blanket way we test for subject comprehension. Filling in little bubbles on a scantron and answering pre-canned questions is somehow supposed to be a way to determine mastery or at least competence of a subject. What often happens is that students will learn to cram just to pass a test and teachers will be teaching with the sole purpose of students passing a test. Learning has been thrown out of the equation.

Finland has no standardized tests. The only exception is the National Matriculation Exam, a voluntary test for students at the end of upper-secondary school (the equivalent of an American high school). All children throughout Finland are graded on an individualized basis, using a grading system set by their teacher. Tracking overall progress is done by the Ministry of Education, which samples groups across different ranges of schools.

Accountability for teachers (not required)

A lot of the blame goes to the teachers, and sometimes rightfully so. But in Finland, the bar is set so high for teachers that there is often no reason to have a rigorous “grading” system for them. Pasi Sahlberg, director of the Finnish Ministry of Education and writer of Finnish Lessons: What Can the World Learn from Educational Change in Finland?, said the following about teachers’ accountability:

“There’s no word for accountability in Finnish… Accountability is something that is left when responsibility has been subtracted.”

All teachers are required to have a master’s degree before entering the profession. Teaching programs are the most rigorous and selective professional schools in the entire country. If a teacher isn’t performing well, it’s the individual principal’s responsibility to do something about it.

The pupil-teacher dynamic, once a master-and-apprentice relationship, cannot be distilled down to a few bureaucratic checks and standardized testing measures. It needs to be dealt with on an individual basis.

Photo By Craig F. Walker / The Denver Post

Cooperation not competition

While most Americans and other countries see the educational system as one big Darwinian competition, the Finns see it differently. Sahlberg quotes a line from the writer Samuli Paronen that says:

“Real winners do not compete.”

Ironically, this attitude has put them at the head of the international pack. Finland’s educational system doesn’t worry about artificial or arbitrary merit-based systems. There are no lists of top performing schools or teachers. It’s not an environment of competition – instead, cooperation is the norm.

Make the basics a priority

Many school systems are so concerned with increasing test scores and comprehension in math and science, they tend to forget what constitutes a happy, harmonious and healthy student and learning environment. Many years ago, the Finnish school system was in need of some serious reforms.

The program that Finland put together focused on returning to the basics. It wasn’t about dominating with excellent marks or upping the ante. Instead, they looked to make the school environment a more equitable place.

Since the 1980s, Finnish educators have focused on making these basics a priority:

  • Education should be an instrument to balance out social inequality.
  • All students receive free school meals.
  • Ease of access to health care.
  • Psychological counseling.
  • Individualized guidance.

Beginning with the individual in a collective environment of equality is Finland’s way.

Starting school at an older age

Here the Finns again start by changing very minute details. Students start school when they are seven years old. They’re given free rein in the developing childhood years, rather than being chained to compulsory education. It’s simply a way to let a kid be a kid.

There are only 9 years of compulsory school that Finnish children are required to attend. Everything past the ninth grade or at the age of 16 is optional.

Just from a psychological standpoint, this is a freeing ideal. Although it may seem anecdotal, many students really feel like they’re stuck in a prison. Finland alleviates this forced ideal and instead opts to prepare its children for the real world.

Providing professional options past a traditional college degree

The current pipeline for education in America is incredibly stagnant and immutable. Children are stuck in the K-12 circuit jumping from teacher to teacher. Each grade a preparation for the next, all ending in the grand culmination of college, which then prepares you for the next grand thing on the conveyor belt. Many students don’t need to go to college and get a worthless degree or flounder about trying to find purpose and incur massive debt.

Finland solves this dilemma by offering options that are equally advantageous for the student continuing their education. There is less focus on the dichotomy between the college-educated and the trade-school or working class. Both paths can be equally professional and fulfilling as careers.

In Finland, there is the Upper Secondary School, a three-year program that prepares students for the Matriculation Test that determines their acceptance into a university. This is usually based on the specialties they’ve acquired during their time in “high school.”

Next, there is vocational education, which is a three-year program that trains students for various careers. They have the option to take the Matriculation test if they want to then apply to University.

Finns wake up later for less strenuous schooldays

Waking up early, catching a bus or a ride, and participating in morning and after-school extracurriculars are huge time sinks for a student. Add to that the fact that some classes start anywhere from 6 am to 8 am, and you’ve got sleepy, uninspired adolescents on your hands.

Students in Finland usually start school anywhere from 9:00 – 9:45 AM. Research has shown that early start times are detrimental to students’ well-being, health, and maturation. Finnish schools start the day later and usually end by 2:00 – 2:45 PM. They have longer class periods and much longer breaks in between. The overall system isn’t there to ram and cram information into their students, but to create an environment of holistic learning.

Consistent instruction from the same teachers

There are fewer teachers and students in Finnish schools. You can’t expect to teach an auditorium of invisible faces and break through to them on an individual level. Students in Finland often have the same teacher for up to six years of their education. During this time, the teacher can take on the role of a mentor or even a family member. Over those years, mutual trust and bonding are built so that both parties know and respect each other.

Different needs and learning styles vary on an individual basis. Finnish teachers can account for this because they’ve figured out the student’s own idiosyncratic needs. They can accurately chart and care for their progress and help them reach their goals. There is no passing along to the next teacher because there isn’t one.

Levi, Finland. Photo by Christophe Pallot/Agence Zoom/Getty Images.

A more relaxed atmosphere

There is a general trend in what Finland is doing with its schools: less stress, less unneeded regimentation, and more caring. Students usually have only a couple of classes a day. They have several breaks to eat their food, enjoy recreational activities, and generally just relax. Spread throughout the day are 15-to-20-minute intervals where the kids can get up and stretch, grab some fresh air, and decompress.

This type of environment is also needed by the teachers. Teacher rooms are set up all over Finnish schools, where they can lounge about and relax, prepare for the day or just simply socialize. Teachers are people too and need to be functional so they can operate at the best of their abilities.

Less homework and outside work required

According to the OECD, students in Finland have the least amount of homework and outside work of any students in the world. They spend only half an hour a night working on stuff from school. Finnish students also don’t have tutors. Yet they’re outperforming cultures that have toxic school-to-life balances, and doing it without the unneeded and unnecessary stress.

Finnish students are getting everything they need to get done in school without the added pressures that come with excelling at a subject. Without having to worry about grades and busy-work they are able to focus on the true task at hand – learning and growing as a human being.

(Source of this article, and for a video, see: https://bigthink.com/mike-colagrossi/no-standardized-tests-no-private-schools-no-stress-10-reasons-why-finlands-education-system-in-the-best-in-the-world/)

++++++++++

Procrastinators’ brains are different from those of people who get things done


Young girl in Chicago classroom, Stanley Kubrick for LOOK Magazine, c/o Creative Commons

Daydreaming is important — studies have repeatedly said as much — but maybe you shouldn’t daydream too much, as a recent study by researchers at Ruhr-Universität Bochum has come to the conclusion that — after looking at MRI scans of 264 individuals — the brains of doers differ from those of procrastinators.

Before we explain how they came to that conclusion, it’s worth going over a few basic terms. The first is the amygdala, a pair of almond-shaped clusters of neurons buried deep within the brain. The amygdala helps you process smell, store memory, rewards your brain with dopamine, and helps you “assess different situations with regard to their respective outcomes.” If you’re trying to recognize a smell, if you beat a video game level and pleasant graphics fill the screen, if you’re unsure whether or not it will be worthwhile to go to a concert in the evening — all of this goes through your amygdala. There’s also the dorsal anterior cingulate cortex. This section of the brain currently appears to have a role in blood pressure, heart rate, attention, the anticipation of reward, impulse control, emotion, and — more broadly, though this appears to still be an area of active research — decision-making.

It’s helpful to have an understanding of these two sections of the brain when you read that “Individuals with poor action control had a larger amygdala” and that “the functional connection between the amygdala and the so-called dorsal anterior cingulate cortex (dorsal ACC) was less pronounced.” These results led Erhan Genç — a member of the research team at Ruhr-Universität Bochum — to hypothesize that “Individuals with a higher amygdala volume may be more anxious about the negative consequences of an action – they tend to hesitate and put off things.”

The study has sparked a wide-ranging conversation on Reddit, ranging from questions about neuroplasticity (with an excellent reply reminding us just how contextual neuroplasticity is), to former procrastinators chiming in with their autobiographical two cents, to teachers talking about how they might apply the gist of this research in the classroom. (“This is great supporting evidence as to why teaching kids to take risks in the classroom is so effective.”)

The study works toward finding a neural basis for some of these patterns — why, at the level of our hardware, things work the way they do. But, just as there was an active question as to the neural basis of non-canonical uses of the nervous system, it’s worth wondering what a particular neural basis actually looks like when so many different things seem to come from the same place: how a larger-than-usual amygdala seems capable of translating itself into procrastination, into a larger-than-average number of unique responses to a Rorschach test, into autism, or into the fact that “after an eight-week course of mindfulness practice, … MRI scans show that … the amygdala appears to shrink.”

From an outsider’s perspective, it may feel a little like looking at birds and dinosaurs and knowing that each comes from the same place.

But they do. And that’s the next thing to be figured out.
(For the source of this article, and for a video, please visit: https://bigthink.com/evan-fleischer/procrastinators-brains-are-different-than-those-who-get-things-done/)

++++++++++

The end of the middle class: Why prosperity is failing in America

Executive Editor of the Economic Hardship Reporting Project.

‘Middle class’ doesn’t mean what it used to. Owning a home, two cars, and having a summer vacation to look forward to is a dream that’s no longer possible for a growing percentage of American families. So what’s changed? That safe and stable class has become shaky as unions collapsed, the gig economy surged, and wealth concentrated in the hands of the top 1%, the knock-on effects of which include sky-high housing prices, people working second jobs, and a cultural shift marked by ‘one-percent’ TV shows (and presidents). Alissa Quart, executive editor of the Economic Hardship Reporting Project, explains how the American dream became a dystopia, and why it’s so hard for middle-class Americans to get by. Alissa Quart is the author of Squeezed: Why Our Families Can’t Afford America.

(See the video at: https://bigthink.com/videos/why-americas-middle-class-is-disappearing/)

++++++++++

Genome study of cave bones reveals early human hybrid

Genetic analysis on an ancient bone fragment has revealed the direct descendant of a Neanderthal and a Denisovan (Credit: James633/Depositphotos)

Although Homo sapiens won the world domination contest, we weren’t without our competitors. For thousands of years we shared the planet with other hominin species, such as the Neanderthals and Denisovans. These early humans were known to have fought, competed and even cross-bred when they crossed paths, and now the most direct evidence of those meetings has been found. By sequencing the genome of a hominin bone from a Siberian cave, anthropologists have discovered the direct descendant of a Neanderthal and a Denisovan.

Neanderthals were a lot like us, but stockier, stronger and probably hairier. They inhabited Europe long before we modern humans trekked out there, and their range stretched into southwest Asia and as far north as Siberia. Denisovans lived around the same time, ranging from Siberia to Southeast Asia, although we don’t know as much about them since all we have are a few teeth, finger and toe bones.

Genetic studies have revealed that these two species interbred with each other – and modern humans. Around two percent of the modern human genome is estimated to contain Neanderthal DNA, while some humans may be up to six percent Denisovan. But the two of them are far closer to each other than to us – possibly up to 17 percent of the Denisovan genome comes from Neanderthals.

The researchers studied the genome of this bone fragment, found in Denisova Cave, Russia

Researchers from the Max Planck Institute for Evolutionary Anthropology have now conducted genetic analysis of a small bone fragment found in Denisova Cave in Russia, where most Denisovan remains have been found so far. The team discovered that the bone belonged to a female of at least 13 years of age, but it was her parents that were most interesting to the researchers: her mother was a Neanderthal and her father a Denisovan.

“We knew from previous studies that Neanderthals and Denisovans must have occasionally had children together,” says Viviane Slon, a first author of the study. “But I never thought we would be so lucky as to find an actual offspring of the two groups.”

By studying this individual’s genome, the researchers were able to learn more about the parents. In an unexpected twist, the mother turned out to be a closer genetic match to a distant Neanderthal population in western Europe, rather than another individual that had lived earlier in Denisova Cave. On the father’s side of the family tree, the Denisovan apparently had at least one Neanderthal ancestor himself, suggesting the two species must have met in the past.

“It is striking that we find this Neanderthal/Denisovan child among the handful of ancient individuals whose genomes have been sequenced,” says Svante Pääbo, lead author of the study. “Neanderthals and Denisovans may not have had many opportunities to meet. But when they did, they must have mated frequently – much more so than we previously thought.”

The research was published in the journal Nature.

++++++++++

Can Neuroscience Predict How Likely Someone Is to Commit Another Crime?

Researchers propose using brain imaging technology to improve risk assessments—tools to help courts determine appropriate sentencing, probation, and parole. It’s controversial to say the least.

by Andrew R. Calderón –


Roy Scott/Getty Images.

(This story was published in partnership with The Marshall Project, a nonprofit newsroom covering the US criminal justice system.)

In 1978, Thomas Barefoot was convicted of killing a police officer in Texas. During the sentencing phase of his trial, the prosecution called two psychiatrists to testify about Barefoot’s “future dangerousness,” a capital-sentencing requirement that asked the jury to determine if the defendant posed a threat to society.

The psychiatrists declared Barefoot a “criminal psychopath,” and warned that whether he was inside or outside a prison, there was a “one hundred percent and absolute chance” that he would commit future acts of violence that would “constitute a continuing threat to society.” Informed by these clinical predictions, the jury sentenced Barefoot to death.

Although such psychiatric forecasting is less common now in capital cases, a battery of risk assessment tools has since been developed that aims to help courts determine appropriate sentencing, probation, and parole. Many of these risk assessments use algorithms to weigh personal, psychological, historical, and environmental factors to make predictions of future behavior. But it is an imperfect science, beset by accusations of racial bias and false positives.

Now a group of neuroscientists at the University of New Mexico propose to use brain imaging technology to improve risk assessments. Kent Kiehl, a professor of psychology, neuroscience, and the law at the University of New Mexico, says that by measuring brain structure and activity they might better predict the probability an individual will offend again.

Neuroprediction, as it has been dubbed, evokes uneasy memories of a time when phrenologists used body proportions to make pronouncements about a person’s intelligence, virtue, and—in its most extreme iteration—racial inferiority.

Yet predicting likely human behavior based on algorithms is a fact of modern life, and not just in the criminal justice system. After all, what is Facebook if not an algorithm for calculating what we will like, what we will do, and who we are?

In a recent study, Kiehl and his team set out to discover whether brain age—an index of the volume and density of gray matter in the brain—could help predict re-arrest.

Age is a key factor in standard risk assessments. On average, defendants between 18 and 25 years old are considered more likely to engage in risky behavior than their older counterparts. Even so, chronological age, the researchers wrote, may not be an accurate measure of risk.

The advantage of brain age over chronological age is its specificity. It accounts for “individual differences” in brain structure and activity over time, which have an impact on decision-making and risk-taking.

After analyzing the brain scans of 1,332 New Mexico and Wisconsin men and boys — ages 12 to 65 — in state prisons and juvenile facilities, the team found that by combining brain age and activity with psychological measures, such as impulse control and substance dependence, they could accurately predict re-arrest in most cases.
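
As a rough picture of how such a prediction could be assembled, the Python sketch below combines a brain-age gap with a couple of psychological scores in a logistic model. The features, weights, and example values are invented for illustration; they are not the study’s actual model or coefficients.

    import math

    # Hypothetical sketch: score re-arrest risk from a "brain age gap" plus psychological measures.

    def rearrest_probability(brain_age, chronological_age, impulsivity, substance_dependence):
        brain_age_gap = brain_age - chronological_age      # brain looks older than expected
        score = (-2.0                                      # intercept (illustrative)
                 + 0.10 * brain_age_gap
                 + 0.80 * impulsivity                      # e.g. a 0-1 normalized scale
                 + 0.60 * substance_dependence)            # e.g. 0 or 1
        return 1 / (1 + math.exp(-score))                  # logistic link -> probability

    # Example: a 25-year-old whose scan looks ten years older, with high impulsivity.
    print(f"{rearrest_probability(35, 25, impulsivity=0.9, substance_dependence=1):.0%}")

In the study itself, the researchers fit their models to more than a thousand scans and test them against actual re-arrest outcomes; the sketch only shows the general shape of turning several measurements into a single risk estimate.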

The brain age experiment built on the findings from research Kiehl had conducted in 2013, which demonstrated that low activity in a brain region partially responsible for inhibition seemed more predictive of re-arrest than the behavioral and personality factors used in risk assessments.

“This is the largest brain age study of its kind,” Kiehl says, and the first time that brain age was shown to be useful in the prediction of future antisocial behavior.

In the study, subjects lie inside an MRI scanner as a computer sketches the peaks and troughs of their brains to construct a profile. With hundreds of brain profiles, the researchers can train algorithms to look for unique patterns.

(For the balance of this interesting article please visit: https://tonic.vice.com/en_us/article/j5nky3/neuroscience-predict-if-someone-will-commit-another-crime/)

++++++++++

U.S. students lag far behind rest of the world in learning a second language. Here’s why that matters.

Photo: powerofforever / Getty Images

If you live in the German-speaking Community of Belgium, one of the nation’s three federal communities, you most likely speak multiple languages. Though the local language is German, three-year-olds are required to study a foreign language. As it turns out, this is the easiest time during human development to grasp multiple languages, given the plasticity of the brain. The longer you wait, the harder it becomes.

Most European countries require that their students speak foreign languages. At what age they start learning is another story, though for most of Europe, knowing at least two other languages is compulsory. Only Ireland (save Northern Ireland) and Scotland escape this fate, but even there you’ll hear many tongues spoken by every citizen:

Ireland and Scotland are two exceptions that do not have compulsory language requirements, but Irish students learn both English and Gaelic (neither is considered a foreign language); Scottish schools are still obligated to offer at least one foreign-language option to all students ages 10-18.

Then you have America, a nation in which less than half of citizens own a passport. This number, thankfully, has risen to 42 percent from 27 percent since 2007, but the data still hint at a majority uninterested in international travel. A new Pew Research poll shows that most American states have less than one-quarter of students studying a foreign language.


That’s because learning a foreign language is not nationally mandated. The state with the most students enrolled—New Jersey has 51 percent—happens to be where I grew up. In high school, you either took Spanish, German, or French; looking back, I thought it was required everywhere. Not the case, at least broadly—school districts (and even states) can require language studies, but the U.S. Department of Education has no broad requirements.

This is in stark contrast to Europe. In France, Romania, Austria, Norway, Malta, Luxembourg, and Liechtenstein, every student must learn another language. The European country with the lowest share of students enrolled is actually Belgium, at 64 percent, just behind Portugal (69 percent) and the Netherlands (70 percent). Overall, 92 percent of European students study a foreign language. In America, that number is 20 percent.

It also depends on which state you're discussing. In New Mexico, Arizona, and Arkansas, only 9 percent of students study a language other than English, an especially disturbing figure given that two of them are border states that benefit greatly from communicating with their neighbors.

The numbers don't get much better as we look at older demographics. Only 36 percent of Americans believe speaking another language is “extremely or very important” in the modern workplace. Strangely, most Americans do recognize that further training is required to stay competitive in the market:

The vast majority of U.S. workers say that new skills and training may hold the key to their future job success.

Americans spend so much time focused on bringing jobs “back,” yet we have little idea where they “go.” It's impossible to compete in a global workforce if you refuse to learn about anywhere beyond your neighborhood. Eight in ten Americans believe outsourcing is a serious problem, and seven in ten say the responsibility for staying competitive falls on the individual, yet just over one-third think that preparation should include learning another language.


Considering that English is the most studied language across Europe, it's not surprising that American citizens get lazy. We can communicate almost anywhere we travel; that is our privileged reality. During my four trips to Morocco, I was often approached in French; upon learning I'm American, the speaker immediately switched to English. That's on top of their native Moroccan Arabic, and many citizens also know Spanish and Italian.

One can argue that their economy depends on it. English is, after all, the business language of the world. Beyond staying competitive in the marketplace, however, there are many personal benefits. Early language learning boosts cognition and helps fight dementia. Being multilingual has positive effects on memory, problem-solving, verbal and spatial abilities, and intelligence. These are all important skill sets in business. They also make you a healthier citizen, physically and socially.

Still, many Americans don't recognize the value of curiosity. Instead of bristling when they hear people communicate in a language they don't understand, they could try to make sense of it. Yet we're constantly confronted with videos of Americans demanding that immigrants “learn to speak the language.” Complacency usurps curiosity, and common sense.

Within English, the more words you know, the more people you can talk with. That reach grows even further when you know multiple languages. That so many of us don't want to talk to as many people as possible says something about rampant nationalism, and that's a shame. The larger our vocabularies, the more likely we are to get along, in business and in life. Everyone's health improves.

(Source of this, and many other interesting articles: https://bigthink.com/21st-century-spirituality/us-students-lag-far-behind-rest-of-the-world-in-learning-a-second-language-heres-why-that-matters/)

++++++++++

The Role and Power of Women in Ancient Egypt

Throughout history, the status and importance of women varied by culture and period. Some groups maintained a highly matriarchal culture during certain times, while at other times they were predominantly patriarchal. Likewise, the roles of women in ancient Egypt and their ability to ascend to positions of power varied through history. Little is known about female status during the Early Dynastic Period (c. 3000 BCE). However, during the First and Second Intermediate Periods (2100 BCE–1550 BCE), the New Kingdom (1550 BCE–1200 BCE), and certainly during the Ptolemaic Period (300 BCE–30 BCE), Egyptians had a unique attitude about women.


Queen Nefertiti, ruler and mother of six, kissing one of her daughters. Limestone relief, c. 1332-1356 BCE. Image: CC 2.5.

The Rise and Fall of Women in Egypt

Not only were women in ancient Egypt responsible for the nurturance and admonition of children, but they could also work at a trade, own and operate a business, inherit property, and come out well in divorce proceedings. Some women of the working class even became prosperous. They trained in medicine as well as in other highly skilled endeavors. There were female religious leaders in the priesthood, but in this instance, they were not equal to the men. In ancient Egypt, women could buy jewelry and fine linens. At times, they ruled as revered queens or pharaohs.

The role of women in ancient Egypt diminished during the late dynastic period but reappeared within the Ptolemaic dynasty. Both Ptolemy I and II put the portraits of their wives on the coins. Cleopatra VII became a very powerful figure internationally. However, after her death, the role of women receded markedly and remained virtually subservient until the 20th century.

How the Moon Shaped the Role of Women in Ancient Egypt

Throughout history, strongly patriarchal societies tended to exist where the sun was worshiped, and matriarchal societies where the moon was worshiped. During much of Egyptian history, people worshiped both the moon and the sun, which gave rise to both matriarchal and patriarchal strains in society. For the most part, both the sun, Ra, and the moon, Khonsu, were a vital part of the religion of ancient Egypt. It may be that the main objection to Amenhotep IV was that he stressed worship of the sun disk alone, at the expense of the moon god. Much of traditional Egyptian society rejected this new concept and wanted a balance between the sun and the moon.

Examples of Powerful Egyptian Women

Hatshepsut

In the middle of the 15th century BCE, one of the most important people to appear on the Egyptian scene was a woman. Her name was Hatshepsut, and she came to power during a very critical time in Egyptian history. For many years Egypt had been ruled by the Hyksos, foreigners who conquered Egypt and attempted to destroy many important aspects of Egyptian society. In 1549 BCE, a strong leader emerged by the name of Ahmose I, founder of the 18th Dynasty. He drove out the invaders.

Egypt was once more restored to its glory by the time his successor, Amenhotep I, became pharaoh. His granddaughter, Hatshepsut, became the fifth pharaoh of the 18th Dynasty in c. 1478 BCE, after her sickly husband, the pharaoh Thutmose II, died. The female ruler was a builder: she directed expeditions, built ships, enlarged the army, and established Egypt as a major presence in the international arena. She also utilized the services of other skilled women in various governmental capacities. Interestingly, she ruled Egypt both as a queen and as a king, and her statues often portray her as a man wearing a beard. After her death, Thutmose III built upon Hatshepsut's strong foundation, which resulted in the largest Egyptian empire the world had ever seen.


Hatshepsut is depicted with a bare chest and false beard. Granite statue, c. 1479-1458 BCE. Modified, public domain.

Tiye

Amenhotep III continued to advance the cause of Egypt and to provide for its people a better life than they had ever known in the past. During this time, several women of great talent appeared and were able to make many contributions. His queen was named Tiye. She was perhaps the first in this hierarchy of counselors to the king. She presumably molded the pharaoh’s thinking in matters of state and religion and provided him with strong support.

Nefertiti

It was during this time that another famous and important woman appeared. Her name was Nefertiti, and she became the wife of the son of Amenhotep III and Queen Tiye, a man known to history as Amenhotep IV and later as Akhenaten. We are now being told that Nefertiti may have been a more powerful and influential figure than her husband.

The status of women in ancient Egyptian society was of such importance that the right to the crown itself passed through the royal women and not the men. The daughters of kings were all important.

Nefertari

During the reign of Ramesses II (c. 1279–1213 BCE), his favorite wife and queen, Nefertari, was raised to the status of Royal Wife and Royal Mother. At the Abu Simbel temple in southern Egypt, her statue is as large as the pharaoh's. Thus, we see her portrayed as an important person during the reign of the pharaoh. Often the name of his queen Auset-nefert would also appear alongside his own. Pharaohs such as Ramesses II, who esteemed their queens and gave them equal status, helped to bolster the role and stature of women in ancient Egypt.


Queen Nefertari stands alongside her husband, Ramesses II, in equal scale. Image: CC2.0 Dennis Jarvis.

It is also of interest to note that Ramesses II restored the temple of Hatshepsut at Deir el-Bahri. In so many other instances he either destroyed evidence of the very existence of his predecessors or usurped their creations, but with this famous woman he went to great lengths to acknowledge her existence and to protect her memory.

Cleopatra VII

Cleopatra VII was the seventh Cleopatra and the last of the Greek, or Ptolemaic, rulers of Egypt. Her son, Ptolemy XV, may have reigned for a few weeks after her death, but she was the last significant Egyptian ruler. She was the last of the powerful women in ancient Egypt, and after her death, Egypt fell to the Romans.

Cleopatra was schooled in science, politics, and diplomacy, and she was a proponent of merging the cultures of Greece and Egypt. She could also read and write the ancient Egyptian language.

Egypt’s Class Society

From the beginning, Egypt was a class society. There was a marked line of distinction that was maintained between the different ranks of society. Although sons tended to follow the trade or profession of their fathers, this was not always the case, and there were even some instances where people were also able to advance themselves regardless of their birth status.

Women in ancient Egypt were, like their male counterparts, subject to a rank system. The highest of them was the queen followed by the wives and daughters of the high priest. Their duties were very specific and equally as important as those of the men. Women within the royal family performed duties much like we see today in the role of ladies in waiting to the Queen of England. Additionally, the role of women as teachers and guides for their children was very prominent in ancient Egypt.

Priesthood and Non-Traditional Roles

There were holy women who possessed both dignity and importance. As to the priesthood, and perhaps other professions, only the women of a higher rank trained in these endeavors. Both male and female priests enjoyed great privileges. They were exempt from taxes, they used no part of their own income in any of the expenses related to their office, and they were permitted to own land in their own right.

Women in ancient Egypt had the authority to manage affairs in the absence of their husbands. They had traditional duties such as needlework, drawing water, spinning, weaving, attending to the animals, and a variety of domestic tasks. However, they also took on some non-traditional roles. Diodorus wrote of seeing images that depicted women making furniture and tents and engaging in other pursuits that might seem more suitable to men. It seems that women on every socioeconomic level could do pretty much what a man could do, with the possible exception of serving in the military. This was evident when a husband died: the wife would take over and attend to whatever business or trade he had been doing.

Marriage and Family

Both men and women could decide whom they would marry. However, elders helped to introduce suitable males and females to each other. After the wedding, the husband and wife registered the marriage. A woman could own property that she had inherited from her family, and if her marriage ended in divorce, she could keep her own property and the children and was free to marry again.

Women held the extremely important role of wife and mother. In fact, Egyptian society held high regard for women with many children. A man could take other women to live in his family, but the primary wife would have ultimate responsibility. Children from other wives would have equal status to those of the first wife.

The Wisdom of the Ages

The high points for women in ancient Egypt came to a screeching halt after Cleopatra. The Greek-Macedonian Ptolemies had ascended Egypt's throne in 323 BCE, after Alexander the Great died, and that marked a permanent and profound shift from a purely Egyptian culture to a Graeco-Egyptian one. As a result of non-native Egyptian sentiments, the roles of women continued to wane during this time and into the Roman period. That Cleopatra VII nonetheless became such a strong ruler is a testament to the tenacity with which native Egyptians maintained their cultural views. Additionally, her shrewd intellect, wily relationship-building skills, and desire to support the Egyptian people won them over. Today, Cleopatra is remembered as the last pharaoh and, more importantly, the last woman the Egyptians ever elevated to that stature.

You may also like:
Oxyrhynchus Papyri: Historical Treasure in Ancient Egyptian Garbage

Updated by Historic Mysteries March 7, 2018

(Source of this and other interesting articles: https://www.historicmysteries.com/role-of-women-in-ancient-egypt/)

++++++++++

Easter Island, also known as Rapa Nui, is a 63-square-mile spot of land in the Pacific Ocean. In 1995, science writer Jared Diamond popularized the “collapse theory” in a Discover magazine story about why the Easter Island population was so small when European explorers arrived in 1722. He later published Collapse, a book hypothesizing that infighting and overexploitation of resources led to a societal “ecocide.” However, a growing body of evidence contradicts this popular story of a warring, wasteful culture.

Scientists contend in a new study that the ancient Rapa Nui society was more sophisticated than previously thought, and that the best evidence lies in the island's most iconic features.

The iconic “Easter Island heads,” or moai, are actually full-bodied but often partially buried statues that cover the island. There are almost a thousand of them, and the largest is over seventy feet tall. Scientists from UCLA, the University of Queensland, and the Field Museum of Natural History in Chicago believe that, much like Stonehenge, the process by which these monoliths were created points to a collaborative society.

Their research was published recently in the Journal of Pacific Archeology.

Study co-author and director of the Easter Island Statue Project Jo Anne Van Tilburg, Ph.D., is focused on measuring the visibility, number, size, and location of the moai. She tells Inverse that “visibility, when linked to geography, tells us something about how Rapa Nui, like all other traditional Polynesian societies, is built on family identity.”

Van Tilburg and her team say that how these families interacted with the craftsmen who made the tools used to carve the giant statues reveals how different parts of Rapa Nui society interacted.

Easter Island statues, or moai.

Previous excavations led by Van Tilburg revealed that the moai were carved using basalt tools. In this study, the scientists focused on figuring out where on the island that basalt came from. Between 1455 and 1645 AD there was a series of basalt transfers from quarries to the actual locations of the statues, so the question became: which quarry did the tools come from?

Chemical analysis of the stone tools revealed that the majority of these instruments were made of basalt that was dug up from one quarry. This demonstrated to the scientists that, because everyone was using one type of stone, there had to be a certain level of collaboration in the creation of the giant statues.
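To make the sourcing logic concrete, here is a minimal sketch of how geochemical “fingerprinting” can work in principle: each tool's trace-element profile is compared against reference profiles from candidate quarries and assigned to the closest match. The quarry names, element values, and matching rule below are all hypothetical and are not the authors' actual method.

```python
# Hypothetical sketch of geochemical sourcing: assign each basalt tool to the
# nearest quarry "fingerprint" in trace-element space. Illustrative data only.
import numpy as np

# Reference fingerprints for three hypothetical quarries (ppm of three trace elements)
quarries = {
    "Quarry A": np.array([120.0, 35.0, 210.0]),
    "Quarry B": np.array([95.0, 60.0, 180.0]),
    "Quarry C": np.array([140.0, 20.0, 250.0]),
}

# Measured compositions of excavated tools (hypothetical values)
tools = np.array([
    [118.0, 37.0, 205.0],
    [122.0, 33.0, 214.0],
    [96.0, 58.0, 183.0],
    [119.0, 36.0, 208.0],
])

for i, tool in enumerate(tools):
    # Euclidean distance to each quarry fingerprint; the smallest wins
    best = min(quarries, key=lambda q: np.linalg.norm(tool - quarries[q]))
    print(f"Tool {i}: closest to {best}")

# If most tools land on a single quarry, that supports the "shared source" reading.
```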

“There was more interaction and collaboration”

“We had hypothesized that elite members of the Rapa Nui culture had controlled resources and would only use them for themselves,” lead author and University of Queensland Ph.D. candidate Dale Simpson Jr. tells Inverse. “Instead, what we found is that the whole island was using similar material, from similar quarries. This led us to believe that there was more interaction and collaboration in the past than has been noted in the collapse narrative.”

Simpson explains that the scientists intend to continue to map the quarries and perform other geochemical analyses on artifacts, so they can continue to “paint a better picture” of prehistoric Rapa Nui interactions.

After Europeans arrived on the island, slavery, disease, and colonization decimated much of Rapa Nui society — although its culture continues to exist today. Understanding exactly what happened in the past there is key to recognizing a history that became clouded by colonial interpretation.

“What makes me excited is that through my long-term relationship with the island, I’ve been able to better understand how people in the ancient past interacted and shared information — some of this interaction can be seen between thousands of Rapa Nui who still live today,” says Simpson. “In short, Rapa Nui is not a story about collapse, but about survival!”

++++++++++


What qualities define a good leader? Is it vision, the ability to understand and negotiate with people, drive, an expectation of excellence, or a stunningly brilliant intellect? A new study suggests that the last one may actually be a hindrance: those who are exceedingly intelligent, while still some of the top producers, don't necessarily make the best leaders.

Researchers at the University of Lausanne in Switzerland, led by John Antonakis, set out to test the assumption that the brightest people make the best leaders. Their results were published in the Journal of Applied Psychology. The team was building on the work of UC Davis psychology professor Dean Keith Simonton, who theorized that there's a sweet spot where peak performance is reached, when the intelligence of the leader corresponds appropriately to that of the followers.

We expect leaders to be smarter than us, but not too much smarter, according to Prof. Simonton. While the average IQ is 100-110, the optimal IQ for someone managing a team of average folks would be about 120-125, no more than 1.2 standard deviations above the group's mean. This relationship is called curvilinear; graphed, it looks like an inverted U.
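To see what that curvilinear, inverted-U relationship implies in practice, here is a small sketch that fits a quadratic curve to simulated effectiveness ratings and reads off the peak. The numbers are made up for illustration; they are not the study's data.

```python
# Hypothetical sketch of an inverted-U (curvilinear) fit between leader IQ and
# rated effectiveness. Data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
iq = rng.uniform(90, 150, 300)
# Simulate ratings that peak around IQ ~ 120 and fall off at both extremes
effectiveness = -0.002 * (iq - 120) ** 2 + 3.5 + rng.normal(0, 0.2, 300)

# Fit y = a*iq^2 + b*iq + c and locate the vertex (the estimated optimum)
a, b, c = np.polyfit(iq, effectiveness, 2)
optimum_iq = -b / (2 * a)
print(f"Estimated optimal IQ: {optimum_iq:.1f}")  # should land near 120
```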


At a certain point, high intelligence hurts leadership if it isn’t balanced by other traits. Credit: Getty Images.

In the Swiss study, 379 middle managers from companies in 30 different, mostly European, countries participated. They were followed over six years, and their leadership styles were evaluated periodically. Researchers gave participants the Wonderlic Personnel Test to measure IQ and also assessed their personalities; scores were spread across the spectrum. Antonakis and colleagues matched these with the Multifactor Leadership Questionnaire, which evaluates a manager's leadership style and how effective it is.

Subordinates and peers at each participant's job filled these out; each manager was evaluated by seven to eight people. Personality and intelligence were the key indicators of how effective a leader was. A higher IQ meant higher rated effectiveness, but only up to a leader IQ of about 120. Those with IQs beyond 128 were found to be less effective.

Bucking another stereotype, researchers uncovered that women tended to express more effective leadership styles. A little over 26% of the participants were women. Older managers scored higher too, but to a lesser extent. What these results show is that balance is important. Intelligence does benefit leadership, Antonakis says, but only if it’s balanced with other parts of one’s personality, like agreeableness and charisma.

Mostly, it comes down to good people skills. Conscientiousness, surprisingly, didn't play much of a role in effective leadership. Of course, whether one is an effective leader also depends on the IQ of the group, so there isn't a single perfect level of intelligence for a leader to have.

Why do the smartest leaders often fail to reach subordinates? Simonton and his colleagues believe that they often put forth more sophisticated plans than others do, meaning team members may fail to understand all the intricacies and thus fail to execute them well. Another problem: complex communication styles may fail to influence others. Also, if a manager comes off as too intellectual, it sets him or her apart; subordinates feel the leader is not one of them. In the words of the study's authors:

To conclude, Sheldon Cooper, the genius physicist from “The Big Bang Theory” TV series is often portrayed as being detached and distant from normal folk, particularly because of his use of complex language and arguments. However… Sheldon could still be a leader—if he can find a group of followers smart enough to appreciate his prose!

There are shortcomings to this model. It originally only looked at simulations and perceptions rather than actual work environments and performance. This latest study was the first to really put Simonton’s theory to the test.

(Emotional intelligence (EQ) is really important for leaders to have. To learn more about that, click here: https://bigthink.com/philip-perry/why-highly-intelligent-people-make-the-worst-leaders/)

++++++++++

Mysterious fossil footprints may cast doubt on human evolution timeline

A set of fossilized human-like footprints in Greece may end up rewriting the story of human evolution (Credit: Andrzej Boczarowski)
We share plenty of features with apes, but the shape of our feet isn’t one of them. So that makes the discovery of human-like footprints dating back 5.7 million years – a time when our ancestors were thought to still be getting around on ape-like feet – a surprising one. Further confounding the mystery is the fact that these prints were found in the Greek islands, implying hominins left Africa much earlier than our current narrative suggests.

Fossilized bones and footprints have helped us piece together the history of human evolution. One of the earliest hominins – ancestors of ours that are more closely related to humans than chimps – was a species called Ardipithecus ramidus, which is known from over 100 specimens. Living about 4.4 million years ago, it had an ape-like foot, with the hallux (the big toe) pointing out sideways rather than falling in line like ours. Fast-forward about 700,000 years, and a set of footprints from Laetoli in Tanzania shows that a more human foot shape had evolved by then.

Enter the newly-discovered footprints. Found in Trachilos in western Crete, they have a distinctly human-like shape, with a big toe of a similar size, shape and position to ours. They appear to have been made by a more primitive hominin than the creature that left the Laetoli prints, but there’s a problem: they also predate Ardipithecus by about 1.3 million years. That means a human-like foot had evolved much earlier than previously thought, throwing a spanner into the accepted idea that the ape-footed Ardipithecus was a direct human ancestor.

A close-up of one of the 5.7 million-year-old footprints, which shows a remarkably human-like shape from...

The footprints were fairly securely dated to the Miocene epoch, about 5.7 million years ago. According to the researchers, they lie in a layer of rock just below a distinctive layer that formed when the Mediterranean Sea dried out, about 5.6 million years ago. To further back up the dating, the team analyzed the age of marine microfossils in sections of rock above and below the prints.

But the age of the Trachilos tracks isn’t the only mysterious feature about them: where they were found is also key. Until recently, the fossil record suggested that hominins originated in Africa and didn’t expand into Europe and Asia until about 1.8 million years ago. But these prints indicate that something with remarkably humanoid feet was traipsing through Greece millions of years earlier than conventional wisdom holds.

Interestingly, this find lines up with another recent discovery that could rewrite human history. Back in May, a study described 7-million-year-old bones of a hominin species called Graecopithecus freybergi, which were discovered in Greece and Bulgaria. That find represented such a huge discrepancy from current thinking that the researchers pondered whether it meant the human and chimp branches of the family tree originally split in Europe, and not Africa. The new study might corroborate that conclusion.

“This discovery challenges the established narrative of early human evolution head-on and is likely to generate a lot of debate,” says Per Ahlberg, last author of the paper. “Whether the human origins research community will accept fossil footprints as conclusive evidence of the presence of hominins in the Miocene of Crete remains to be seen.”

The research was published in the journal Proceedings of the Geologists' Association.

++++++++++

 

Smiles have a sound, and it’s contagious

Basketball coach Frank McGuire speaks on the phone, smiling, while his wife listens, in 1956

 

The next time you catch yourself smiling during a phone conversation, just because, ask the person on the other end of the line whether they’re smiling, too. According to a small study from cognitive-science researchers in Paris, there’s a strong possibility that one person smiled, and the other “heard” it, then mimicked the gesture.

In other words, not only do smiles have a sound, but it’s contagious.

A path to empathy

Smiles, we've long known, are a universal human signal. They are understood across cultures and are “pre-programmed,” as a professor of psychology at Knox College in Illinois once explained to Scientific American. People who are born blind smile in the same way as the sighted, and for the same reasons, he said.

We've also been aware of a smile's catchiness for decades. Scientists have documented how the sight of various facial gestures, including a genuine or “Duchenne” smile, can trigger the same expression in the viewer. In fact, psychologists first theorized more than 100 years ago that facial mimicry was a key path to accessing another person's inner state, and thus to developing empathy (pdf).

In 2008, scientists in the UK found that people don’t even need to see a smile to perceive it. We can pick out the sound of different types of smiles when merely listening to someone speak.

Now, this research suggests that not only can we identify what the study authors call the “spectral signature of phonation with stretched lips” or “the smile effect” in speech, but that it seems to register on an unconscious level. And—as with the visual cue—it inspires imitation.

To conduct their experiments, the Paris researchers first recreated the smile’s auditory signature digitally, creating software that adds a smile to any recorded voice. They then outfitted 35 participants with electrodes attached to their facial muscles to see whether they could detect the sound of a smile in recorded French sentences—some of which were manipulated to include the effect, others not.

Their results showed that listeners could usually hear the enhancement, and that even when they consciously missed a smile, their zygomaticus major muscles prepared to grin in response to it.

They acknowledge that they don't know how the experiment would have turned out had its participants not been asked to listen specifically for a smiling voice. Nevertheless, they argue in the paper that “the cognition of smiles is not as deeply rooted in visual processing as previously believed.”

(For the balance of this article please visit: https://qz.com/1342753/smiles-have-a-sound-and-its-contagious-a-study-says/)


++++++++++

Ancient stone tools found in China shake up human ancestor timeline (again)

The discovery of two-million-year-old stone tools in China may rewrite the migration timeline of early human ancestors (Credit: Professor Zhaoyu Zhu)
Archaeologists have discovered ancient tools and bones in China that, once again, shake up the timeline of the human origin story. The items are more than two million years old, indicating that early hominins had spread much further east earlier than previously thought.

Although it’s being updated all the time, the general consensus holds that hominins – the group of our ancestors that are more closely related to humans than to chimps – originated in Africa, before spreading out into Europe and Asia about 1.8 million years ago.

But more recent discoveries suggest our ancestors had packed their bags and left home way before then. A set of startlingly-human footprints found in the Greek islands date back some 5.7 million years, while 7-million-year-old bones found in Greece and Bulgaria are so old that it led researchers to wonder (somewhat controversially) whether humans and chimps actually split from their last common ancestor in Europe, not Africa.

Thankfully, the new find isn’t quite so dramatic, but it’s no less fascinating. At a maximum age of 2.12 million years, the recently-discovered artifacts are about 270,000 years older than bones and stone tools found in Dmanisi, Georgia, which are widely accepted to be the oldest remains of hominins beyond Africa. Not only that, they’re much further from Africa than human ancestors were believed to have spread at that time.


The team discovered bones and stone tools, including a notch, scrapers, cobble, hammer stones and pointed pieces (Credit: Professor Zhaoyu Zhu)

The discovery was made in Shangchen on the Chinese Loess Plateau. Alongside animal bone fragments, the team found 80 stone tools, including a notch, scrapers, cobble, hammer stones and pointed pieces, which all showed clear signs of use. Most of them were made of quartz and quartzite that are believed to have come from the nearby Qinling Mountains.

Whoever left them behind wasn't just passing through, either. The artifacts were found in 17 different layers of dust and fossil soil, deposited under different climates over the span of close to a million years, from 2.12 to 1.2 million years ago.

(For the balance of this article see: https://newatlas.com/ancient-stone-tools-china-human-migration/55425/)

++++++++++

 

This Face Changes the Human Story. But How?

Scientists have discovered a new species of human ancestor deep in a South African cave, adding a baffling new branch to the family tree.

++++++++++

 

Ancient mummy DNA reveals surprises about genetic origins of Egyptians

Scientists have recently, for the first time, extracted full nuclear genome data from ancient Egyptian mummies (Credit: bpk/Aegyptisches Museum und Papyrussammlung, SMB/Sandra Steiss)
For the first time, scientists have extracted full nuclear genome data from ancient Egyptian mummies. The results offer exciting insights into how different ancient civilizations intermingled and also establish a breakthrough precedent in our ability to study ancient DNA.

The international team of scientists, led by researchers from the University of Tuebingen and the Max Planck Institute for the Science of Human History in Jena, sampled 151 mummified remains from a site called Abusir el-Meleq in Middle Egypt along the Nile River. The samples dated from 1400 BCE to 400 CE and were subjected to a new high-throughput DNA sequencing technique that allowed the team to recover full genome-wide datasets from three individuals and mitochondrial genomes from 90 individuals.

“We wanted to test if the conquest of Alexander the Great and other foreign powers has left a genetic imprint on the ancient Egyptian population,” explains one of the lead authors of the study, Verena Schuenemann.

In 332 BCE, for example, Alexander the Great and his army tore through Egypt. Interestingly, the team found no genetic trace of Alexander the Great's heritage, nor of any other foreign power that came through Egypt in the 1,300-year timespan studied.

“The genetics of the Abusir el-Meleq community did not undergo any major shifts during the 1,300 year timespan we studied,” says Wolfgang Haak, group leader at the Max Planck Institute, “suggesting that the population remained genetically relatively unaffected by foreign conquest and rule.”

They found that ancient Egyptians were closely related to Anatolian and Neolithic European populations, and also showed strong genetic traces from the Levant region of the Near East (Turkey, Lebanon).

(To read the full article visit: https://newatlas.com/ancient-egyptian-mummy-dna-study/49792/)

 

++++++++++

North Sentinel Island

The Sentinelese are among the last people worldwide to remain virtually untouched by modern civilization.


2009 NASA image of North Sentinel Island; the island’s protective fringe of coral reefs can be seen clearly.

Location of North Sentinel Island within the Andaman and Nicobar Islands

 

North Sentinel Island is one of the Andaman Islands, an archipelago in the Bay of Bengal that also includes South Sentinel Island. It is home to the Sentinelese, who reject, often violently, any contact with the outside world, and who are among the last people worldwide to remain virtually untouched by modern civilization. As such, only limited information about the island is known.

Nominally, the island belongs to the South Andaman administrative district, part of the Indian union territory of Andaman and Nicobar Islands.[8] In practice, Indian authorities recognise the islanders’ desire to be left alone and restrict their role to remote monitoring, even allowing them to kill non-Sentinelese people without prosecution.[9][10] Thus the island can be considered a sovereign entity under Indian protection.

(Source: https://en.wikipedia.org/wiki/North_Sentinel_Island)

++++++++++

(NASA Goddard and Steve Byrne)

A paper recently published in the International Journal of Astrobiology asks a fascinating question: “Would it be possible to detect an industrial civilization in the geological record?” Put another way: how do we really know ours is the only civilization that's ever existed on Earth? The truth is, we don't. Think about it: the earliest evidence we have of humans is from 2.6 million years ago, the start of the Quaternary period. Earth is 4.54 billion years old. That leaves 4,537,400,000 years unaccounted for, plenty of time for evidence of an earlier industrial civilization to disappear into dust.

The paper grew out of a conversation between co-authors Gavin Schmidt, director of NASA's Goddard Institute for Space Studies, and astrophysics professor Adam Frank. (Frank recalls the exchange in an excellent piece in The Atlantic.) Considering the possible inevitability of any planet's civilization destroying the environment on which it depends, Schmidt suddenly asked, “Wait a second. How do you know we're the only time there's been a civilization on our own planet?”

Schmidt and Frank recognize the whole question is a bit trippy, writing, “While much idle speculation and late night chatter has been devoted to this question, we are unaware of previous serious treatments of the problem of detectability of prior terrestrial industrial civilizations in the geologic past.”

There’s a thought-provoking paradox to consider here, too, which is that the longest-surviving civilizations might be expected to be the most sustainable, and thus leave less of a footprint than shorter-lived ones. So the most successful past civilizations would leave the least evidence for us to discover now. Hm.

Earlier humans, or…something else?

One of the astounding implications of the authors' question is that any earlier industrial civilization — at least as far as we can tell from the available geologic record — could not have been human, or at least not Homo sapiens or our cousins. We appeared only about 300,000 years ago. So anyone else would have to have been some other intelligent species for which no evidence remains, and about which we therefore know nothing. Schmidt calls the notion of a previous non-human civilization the “Silurian hypothesis,” named for the brainy reptiles featured in a 1970 Doctor Who serial.


Doctor Who's Silurians evolved from rubber suits to prosthetics (BBC)

Wouldn’t there be fossils?

Well, no. “The fraction of life that gets fossilized is always extremely small and varies widely as a function of time, habitat and degree of soft tissue versus hard shells or bones,” says the paper, noting further that, even for dinosaurs, there are only a few thousand nearly complete specimens. Chillingly, “species as short-lived as Homo sapiens (so far) might not be represented in the existing fossil record at all.”

(For full article visit: https://bigthink.com/robby-berman/is-human-civilization-earths-first)

++++++++++

The Bed by Henri de Toulouse-Lautrec.

She was wide awake and it was nearly two in the morning. When asked if everything was alright, she said, “Yes.” Asked why she couldn’t get to sleep she said, “I don’t know.” Neuroscientist Russell Foster of Oxford might suggest she was exhibiting “a throwback to the bi-modal sleep pattern.” Research suggests we used to sleep in two segments with a period of wakefulness in-between.

A. Roger Ekirch, historian at Virginia Tech, uncovered our segmented sleep history in his 2005 book At Day's Close: Night in Times Past. There's very little direct scientific research on sleep done before the 20th century, so Ekirch spent years going through early literature, court records, diaries, and medical records to find out how we slumbered. He found over 500 references to first and second sleep going all the way back to Homer's Odyssey. “It's not just the number of references—it is the way they refer to it as if it was common knowledge,” Ekirch tells BBC.

“He knew this, even in the horror with which he started from his first sleep, and threw up the window to dispel it by the presence of some object, beyond the room, which had not been, as it were, the witness of his dream.” — Charles Dickens, Barnaby Rudge (1840)

Here’s a suggestion for dealing with depression from English ballad ‘Old Robin of Portingale’:

“And at the wakening of your first sleepe/You shall have a hott drinke made/And at the wakening of your next sleepe/Your sorrowes will have a slake.”

Two-part sleep was practiced into the 20th century by people in Central America and Brazil and is still practiced in areas of Nigeria.

(Photo: Alex Berger)

Night split in half

Segmented sleep—also known as broken sleep or biphasic sleep—worked like this:

  • First sleep or dead sleep began around dusk, lasting for three to four hours.
  • People woke up around midnight for a few hours of activity sometimes called “the watching.” They used it for things like praying, chopping wood, socializing with neighbors, and for sex. A character in Chaucer's Canterbury Tales (written in the late 1300s) posited that the lower classes had more children because they used the waking period for procreation. In fact, some doctors recommended it for making babies. Ekirch found a doctor's reference from 16th century France that said the best time to conceive was not upon first going to bed, but after a restful first sleep, when it was likely to lead to “more enjoyment” and when lovers were more likely to “do it better.”
  • “Second sleep,” or morning sleep, began after the waking period and lasted until morning.

Why and when it ended

Given that we spend a third of our lives in slumber, it is odd that so little is known about our early sleep habits, though Ekirch says that writings prove people slept that way for thousands of years. If for no other reason, someone had to wake in the middle of the night to tend to fires and stoves.

Author Craig Koslofsky suggests in Evening’s Empire that before the 18th century, the wee hours beyond the home were the domain of the disreputable, and so the watching was all the nighttime activity anyone wanted. With the advent of modern lighting, though, there was an explosion in all manner of nighttime activity, and it ultimately left people exhausted. Staying up all night and sleepwalking through the day came to be viewed as distastefully self-indulgent, as noted in this advice for parents from an 1825 medical journal found by Ekirch: “If no disease or accident there intervene, they will need no further repose than that obtained in their first sleep, which custom will have caused to terminate by itself just at the usual hour. And then, if they turn upon their ear to take a second nap, they will be taught to look upon it as an intemperance not at all redounding to their credit.” Coupled with the desire for efficiency promoted by industrialization, the watch was increasingly considered a pointless disruption of much-needed rest.

The rise of insomnia


Intriguingly, right about the time accounts of first sleep and second sleep began to wane, references to insomnia began appearing. Foster isn't the only one who wonders whether this is a biological response to un-segmented sleep. Sleep psychologist Gregg Jacobs tells BBC, “For most of evolution we slept a certain way. Waking up during the night is part of normal human physiology.” He also notes that the watch was often a time for reflection and meditation that we may miss. “Today we spend less time doing those things,” he says. “It's not a coincidence that, in modern life, the number of people who report anxiety, stress, depression, alcoholism and drug abuse has gone up.” It may also not be a coincidence, though, that we don't die at 40 anymore.

Subjects in an experiment in the 1990s gradually settled themselves into bi-phasic sleep after being kept in darkness 10 hours a day for a month, so it may be the way we naturally want to sleep. But is it the healthiest way?

Science says we're doing it right, right now

Not everyone restricts their rest to a full night of sleep. Siestas are popular in various places, and there are geniuses who swear by short power naps throughout a day. Some have no choice but to sleep in segments, such as parents of infants and shift workers.

But, according to sleep specialist Timothy A. Connolly of the Center of Sleep Medicine at St. Luke's Episcopal Hospital in Houston, speaking to Everyday Health, “Studies show adults who consistently sleep seven to eight hours every night live longest.” Some people do fine on six hours, and some need 10, but it needs to be in one solid chunk. He says that each time sleep is disrupted, it impacts every cell, tissue, and organ, and the chances go up for a range of serious issues including stroke, heart disease, obesity and mood disorders.

Modern science is pretty unanimous: Sleeping a long, solid chunk each night gives you the best chance of living a long life, natural or not.

(Article source: https://bigthink.com/robby-berman/for-1000s-of-years-we-went-to-bed-twice-a-night-2)

++++++++++

A new theory of consciousness: the mind exists as a field connected to the brain

Between quantum physics and neuroscience, a theory emerges of a mental field we each have, existing in another dimension and behaving in some ways like a black hole
October 11, 2017 12:22 pm, Last Updated: October 16, 2017 1:58 pm
By Tara MacIsaac, Epoch Times

The relationship between the mind and the brain is a mystery that is central to how we understand our very existence as sentient beings. Some say the mind is strictly a function of the brain — consciousness is the product of firing neurons. But some strive to scientifically understand the existence of a mind independent of, or at least to some degree separate from, the brain.

The peer-reviewed scientific journal NeuroQuantology brings together neuroscience and quantum physics — an interface that some scientists have used to explore this fundamental relationship between mind and brain.

An article published in the September 2017 edition of NeuroQuantology reviews and expands upon the current theories of consciousness that arise from this meeting of neuroscience and quantum physics.

Dr. Dirk Meijer (Courtesy of Dr. Dirk Meijer)

Dr. Dirk K.F. Meijer, a professor at the University of Groningen in the Netherlands, hypothesizes that consciousness resides in a field surrounding the brain. This field is in another dimension. It shares information with the brain through quantum entanglement, among other methods. And it has certain similarities with a black hole.

This field may be able to pick up information from the Earth's magnetic field, dark energy, and other sources. It then “transmits wave information into the brain tissue, that … is instrumental in high-speed conscious and subconscious information processing,” Meijer wrote.

In other words, the “mind” is a field that exists around the brain; it picks up information from outside the brain and communicates it to the brain in an extremely fast process.

He described this field alternately as “a holographic structured field,” a “receptive mental workspace,” a “meta-cognitive domain,” and the “global memory space of the individual.”

Extremely rapid functions of the brain suggest it processes information through a mechanism not yet revealed.

(HypnoArt)

There’s an unsolved mystery in neuroscience called the “binding problem.” Different parts of the brain are responsible for different things: some parts work on processing color, some on processing sound, et cetera. But, it somehow all comes together as a unified perception, or consciousness.

Information comes together and interacts in the brain more quickly than can be explained by our current understanding of neural transmissions in the brain. It thus seems the mind is more than just neurons firing in the brain.

(To read the entire article visit: https://m.theepochtimes.com/uplift/a-new-theory-of-consciousness-the-mind-exists-as-a-field-connected-to-the-brain_2325840.html)

++++++++++

Dogon dwelling on the Bandiagara Escarpment in Mali, West Africa

Dogon astronomical beliefs

Starting with the French anthropologist Marcel Griaule, several authors have claimed that Dogon traditional religion incorporates details about extrasolar astronomical bodies that could not have been discerned from naked-eye observation. This idea has entered the New Age and ancient astronaut literature as evidence that extraterrestrial aliens visited Mali in the distant past.

(Source: https://en.wikipedia.org/wiki/Dogon_people)

++++++++++

Cliff Palace, Mesa Verde, Colorado, USA


This multi-storied ruin, the best-known cliff dwelling in Mesa Verde, is located in the largest alcove in the center of the Great Mesa. It faces south and southwest, providing greater warmth from the sun in the winter. Dating back more than 700 years, the dwelling is constructed of sandstone, wooden beams, and mortar. Many of the rooms were brightly painted. Cliff Palace was home to approximately 125 people, but was likely an important part of a larger community of sixty nearby pueblos, which housed a combined six hundred or more people. With 23 kivas and 150 rooms, Cliff Palace is the largest cliff dwelling in Mesa Verde National Park.

++++++++++

The Border Between the 'Two Englands'
(https://strangemaps.files.wordpress.com/2007/10/england2410_468x8161.jpg)

In Great Britain, as in the US, two cultural sub-nations identify themselves (and each other) as North and South. The US's North and South are quite clearly delineated by the states' affiliations during the Civil War (which in the east coincides with the Mason-Dixon line). That line has become so emblematic that the US South is referred to as 'Dixieland'.

There's no similarly precise border in Great Britain, maybe because the 'Two Englands' never fought a civil war against each other. There is, however, a place used as shorthand for describing the divide, with the rougher, poorer North and the wealthier, middle-to-upper-class South referring to each other as 'on the other side of the Watford Gap'.

Not to be confused with the sizeable town of Watford in Hertfordshire, Watford Gap is a small village in Northamptonshire. It was named for the eponymous hill pass that has facilitated east-west and north-south travel since at least Roman times (cf. Watling Street, which now passes through it as the A5 road). Other routes passing through the Gap are the West Coast Main Line railway, the Grand Union Canal and the M1 motorway.

In olden times, the Gap was the location of an important coaching inn (which operated until its closure around 2000 as the Watford Gap Pub); nowadays it has the modern equivalent in a service station – which happened to be the first one in the UK – on the M1, the country's main North-South motorway.

Because of its function as a crossroads, its location on the main road and its proximity to the perceived ‘border’ between North and South, the Watford Gap has become the colloquial separator between both. Other such markers don’t really exist, so the border between North and South is quite vague. Until now, that is.

It turns out the divide runs more between the Northwest and the Southeast: on this map, the line (which, incidentally, does cross the Watford Gap, somewhere between Coventry and Leicester) runs from the estuary of the Severn (near the Welsh-English border) to the mouth of the Humber. Which means that a town like Worcester is firmly in the North, although it's much farther south than the 'southern' town of Lincoln.

At least, that’s the result of a Sheffield University study, which ‘divided’ Britain according to statistics about education standards, life expectancy, death rates, unemployment levels, house prices and voting patterns. The result splits the Midlands in two. “The idea of the Midlands region adds more confusion than light,” the study says.

The line divides Britain according to health and wealth, separating upland from lowland Britain and Tory from Labour Britain, and indicates a £100,000 house price gap – and a year's worth of difference in life expectancy (in case you're wondering: those in the North live a year less than those in the South).

The line does not take into account ‘pockets of wealth’ in the North (such as the Vale of York) or ‘pockets of poverty’ in the South, especially in London.

The map was produced for the Myth of the North exhibition at the Lowry arts complex in Manchester, and was mentioned recently in the Daily Mail. I'm afraid I don't have an exact link to the article, but here is the page at the Lowry for the aforementioned exhibition.

(This article from: https://bigthink.com/strange-maps/193-the-border-between-the-two-englands)

++++++++++
Wendish in Japan

You may wonder how I came to search for traces of Wendish as far afield as Japan. It happened quite accidentally. I became curious about whether there was a linguistic connection between ancient Japanese and Wendish in the mid-1980s, when reading a biography of an American who had grown up in Japan. He mentions that a very ancient Japanese sword is called a meich in Japanese. Surprisingly, meich or mech has the same meaning in Wendish. How did Wends reach Japan, and when? I decided to find out first whether this particular word, meich, really exists in Japanese, and, if it does, at what point in the past Wendish speakers could have had contact with the Japanese islands.

I describe this in more detail, along with my tentative conclusions about the origins of Wendish in Japanese and its relation to the Ainu language, in the 5th installment of my article, The Extraordinary History of a Unique People, published in Glasilo magazine, Toronto, Canada. Anyone interested will find all the already published installments of the article, including the 5th, on my still not quite organized website, www.GlobalWends.com. In the next, winter issue of Glasilo, i.e., in the 6th installment of my article, I will report my discoveries and conclusions with regard to the origins of Wendish in the Ainu language, the language of the aboriginal white population of Japan.

I started my search for the word meich by buying Kenkyusha’s New School Japanese-English Dictionary. Unfortunately, I had acquired a dictionary meant for ordinary students and meich is not mentioned in it. Obviously, I should have bought a dictionary of Old Japanese instead, in which ancient terms are mentioned. Nevertheless, to my amazement, I found in Kenkyusha’s concise dictionary, instead of meich, many other Wendish words and cognates, which I am quoting below in my List.

I found it intriguing that the present forms of Japanese words with clearly Wendish roots show that Chinese and Korean immigrants to the islands were trying to learn Wendish, not vice versa. This indicates that the original population of Japan was Caucasian and that the influx of the Asian population was, at least at first, gradual. Today, after over 3,000 years of Chinese and Korean immigration, about half of the Japanese vocabulary is based on Chinese.

There is another puzzle to be solved. Logically, one would expect the language of the white aborigines of Japan, the Ainu – also deeply influenced by Wendish – to have been the origin of Wendish in modern Japanese. Yet, considering the makeup of the Wendish vocabulary occurring in Japanese, Ainu does not seem to have played any part in the formation of modern Japanese, or only a negligible one. Wendish vocabulary in Japanese points to a different source. It seems to have been the result of a second, perhaps even a third, Wendish migration wave into the islands, at a much later date. The Ainu seem to have arrived already in the Ice Age, when present-day Japan was still a part of the Asian continent. They remained hunters and gatherers until their final demise in the mid-20th century. They retained their Ice Age religion, which regarded everything in the universe and on earth as a spiritual entity to be respected and venerated – including rocks and stars. Wendish words in Japanese, however, mirror an evolved megalithic agricultural culture and a sun-venerating religion.

A list of all the Wendish cognates I have discovered in Kenkyusha's dictionary is on my website, under the heading List of Wendish in Japanese. It is by no means a complete list. My Japanese is very limited, based solely on Kenkyusha's dictionary and some introductory lessons on Japanese culture, history, language, literature and legends from a Japanese friend of mine with an authentically Wendish name, Hiroko, pronounced in the Tokyo dialect as in Wendish, shiroko (wide, all-encompassing). Besides, although I have a university-level knowledge of Wendish, I do not possess the extensive Wendish vocabulary necessary to discover most of the Wendish words that may have changed their meaning somewhat over thousands of passing years, complicated by the arrival of a new population whose language had nothing in common with Wendish.

Future, more thorough and patient researchers – whose mother-tongue is Wendish but who also have a thorough knowledge of Japanese – will, no doubt, find a vastly larger number of Wendish cognates in Japanese than I did.

(For more information visit: https://www.globalwends.com/introduction.html)

++++++++++
Spaniard raised by wolves disappointed with human life
Marcos Rodríguez Pantoja, who lived among animals for 12 years, finds it hard just to get through the winter
Marcos Rodríguez Pantoja, outside his house. ÓSCAR CORRAL

Marcos Rodríguez Pantoja was once the “Mowgli” of Spain’s Sierra Morena mountain range, but life has changed a lot since then. Now the 72-year-old lives in a small, cold house in the village of Rante, in the Galician province of Ourense. This past winter has been hard for him, and a violent cough interrupts him often as he speaks.

His last happy memories were of his childhood with the wolves. The wolf cubs accepted him as a brother, while the she-wolf who fed him taught him the meaning of motherhood. He slept in a cave alongside bats, snakes and deer, listening to them as they exchanged squawks and howls. Together they taught him how to survive. Thanks to them, Rodríguez learned which berries and mushrooms were safe to eat.

Today, the former wolf boy, who was 19 when he was discovered by the Civil Guard and ripped away from his natural home, struggles with the coldness of the human world. It’s something that didn’t affect him so much when he was running around barefoot and half-naked with the wolves. “I only wrapped my feet up when they hurt because of the snow,” he remembers. “I had such big calluses on my feet that kicking a rock was like kicking a ball.”

After he was captured, Rodríguez’s world fell apart and he has never been able to fully recover. He’s been cheated and abused, exploited by bosses in the hospitality and construction industries, and never fully reintegrated into the human tribe. But at least his neighbors in Rante accept him as “one of them.” And now, the environmental group Amig@s das Arbores is raising money to insulate Rodríguez’s house and buy him a small pellet boiler – things that his meager pension cannot cover.

They laugh at me because I don’t know about politics or soccer

Marcos Rodríguez Pantoja

Rodríguez is one of the few documented cases in the world of a child being raised by animals away from humans. He was born in Añora, in Córdoba province, in 1946. His mother died in childbirth when he was three years old, and his father left to live with another woman in Fuencaliente. Rodríguez only remembers abuse during this period of his life.

They took him to the mountains to replace an old goatherd who cared for 300 animals. The man taught him the use of fire and how to make utensils, but then died suddenly or disappeared, leaving Rodríguez completely alone around 1954, when he was just seven years old. When authorities found Rodríguez, he had swapped words for grunts. But he could still cry. “Animals also cry,” he says.

Marcos Rodríguez in his home. ÓSCAR CORRAL

He admits that he has tried to return to the mountains but “it is not what it used to be,” he says. Now the wolves don’t see him as a brother anymore. “You can tell that they are right there, you hear them panting, it gives you goosebumps … but it’s not that easy to see them,” he explains. “There are wolves and if I call out to them they are going to respond, but they are not going to approach me,” he says with a sigh. “I smell like people, I wear cologne.” He was also sad to see that there were now cottages and big electric gates where his cave used to be.

His experience has been the subject of various anthropological studies, books by authors such as Gabriel Janer, and the 2010 film Among Wolves (Entrelobos) by Gerardo Olivares. He insists that life has been much harder since he was thrown back into the modern world. “I think they laugh at me because I don’t know about politics or soccer,” he said one day. “Laugh back at them,” his doctor told him. “Everyone knows less than you.”

He has encountered many bad people along the way, but there have also been acts of solidarity. The forest officer Xosé Santos, a member of Amig@s das Arbores, organizes sessions at schools where Rodríguez can talk about his love for animals and the importance of caring for the environment. “It’s amazing how he enthralls the children with his life experience,” says Santos. Children, after all, are the humans whom Rodríguez feels most comfortable with.

(From: https://elpais.com/elpais/2018/03/28/inenglish/1522237746_629465.html?id_externo_rsoc=FB_CM)

English version by Melissa Kitson.

++++++++++
Discovered: 300,000-Year-Old Tools and Paints That Point to Early Humanity’s Cleverness

Findings out of Kenya offer a new understanding of when early humans got organized and started trading.


A team of anthropologists has determined that humanity has been handy for far longer than ever realized. The researchers discovered tools in East Africa that date back to around 320,000 years ago, far earlier than scientists previously thought humans were using such items.

Coming from the Olorgesailie geologic formation in southern Kenya, the findings, published in Science, show how the collection and processing of pigments to create various colors was crucial to early human society. In addition to evidence of color creation, the team also found a variety of stone tools.

The earliest human life found in Olorgesailie dates back 1.2 million years. The question is: when did Homo sapiens start becoming a collective society? When did the transition occur, and what did it look like? That date has generally been placed at around 100,000 years ago, thanks to evidence such as cave paintings in Ethiopia. However, the findings at Olorgesailie, where famed paleoanthropologists Louis and Mary Leakey also worked, show evidence of a social contract between geographically distant groups.

++++++++++

Lithuanian, the most conservative of all Indo-European languages, is riddled with references to bees.

In mid-January, the snow made the little coastal town of Šventoji in north-west Lithuania feel like a film set. Restaurants, shops and wooden holiday cabins all sat silently with their lights off, waiting for the arrival of spring.

I found what I was looking for on the edge of the town, not far from the banks of the iced-over Šventoji river and within earshot of the Baltic Sea: Žemaitiu alka, a shrine constructed by the Lithuanian neo-pagan organisation Romuva. Atop a small hillock stood 12 tall, thin, slightly tapering wooden figures. The decorations are austere but illustrative: two finish in little curving horns; affixed to the top of another is an orb emitting metal rays. One is adorned with nothing but a simple octagon. I looked down to the words carved vertically into the base and read ‘Austėja’. Below it was the English word: ‘bees’.


The Žemaitiu alka shrine features a wooden figure dedicated to Austėja, the pagan goddess of bees (Credit: Will Mawhood)


This was not the first time I’d encountered references to bees in Lithuania. During previous visits, my Lithuanian friends had told me about the significance of bees to their culture.

Lithuanians don’t speak about bees grouping together in a colony like English-speakers do. Instead, the word for a human family (šeima) is used. In the Lithuanian language, there are separate words for death depending on whether you’re talking about people or animals, but for bees – and only for bees – the former is used. And if you want to show a new-found Lithuanian pal what a good friend they are, you might please them by calling them bičiulis, a word roughly equivalent to ‘mate’, which has its root in bitė – bee. In Lithuania, it seems, a bee is like a good friend and a good friend is like a bee.

A bee is like a good friend and a good friend is like a bee

Seeing the shrine in Šventoji made me wonder: could all these references be explained by ancient Lithuanians worshipping bees as part of their pagan practices?

Lithuania has an extensive history of paganism. In fact, Lithuania was the last pagan state in Europe. Almost 1,000 years after the official conversion of the Roman Empire facilitated the gradual spread of Christianity, the Lithuanians continued to perform their ancient animist rituals and worship their gods in sacred groves. By the 13th Century, modern-day Estonia and Latvia were overrun and forcibly converted by crusaders, but the Lithuanians successfully resisted their attacks. Eventually, the state gave up paganism of its own accord: Grand Duke Jogaila converted to Catholicism in 1386 in order to marry the Queen of Poland.

This rich pagan history is understandably a source of fascination for modern Lithuanians – and many others besides. The problem is that few primary sources exist to tell us what Lithuanians believed before the arrival of Christianity. We can be sure that the god of thunder Perkūnas was of great importance as he is extensively documented in folklore and song, but most of the pantheon is based on guesswork. However, the Lithuanian language may provide – not proof, exactly, but clues, tantalising hints, about those gaps in the country’s past.


Before Grand Duke Jogaila converted to Catholicism in 1386, Lithuania was the last pagan state in Europe (Credit: PHAS/Getty Images)

In Kaunas, Lithuania’s second-largest city, I spoke to Dalia Senvaitytė, a professor of cultural anthropology at Vytautas Magnus University. She was sceptical about my bee-worshipping theory, telling me that there may have been a bee goddess by the name of Austėja, but she’s attested in just one source: a 16th-Century book on traditional Lithuanian beliefs written by a Polish historian.

It’s more likely, she said, that these bee-related terms reflect the significance of bees in medieval Lithuania. Beekeeping, she explained, “was regulated by community rules, as well as in special formal regulations”. Honey and beeswax were abundant and among the country’s main exports, I learned, which is why their production was strictly controlled.

But the fact that these references to bees have been preserved over hundreds of years demonstrates something rather interesting about the Lithuanian language: according to the Lithuanian Quarterly Journal of Arts and Sciences, it’s the most conservative of all living Indo-European languages. While its grammar, vocabulary and characteristic sounds have changed over time, they’ve done so only very slowly. For this reason, the Lithuanian language is of enormous use to researchers trying to reconstruct Proto-Indo-European, the single language, spoken around four to five millennia ago, that was the progenitor of tongues as diverse as English, Armenian, Italian and Bengali.


The Lithuanian word bičiulis, meaning ‘friend’, has its root in bitė, the word for ‘bee’ (Credit: Rambynas/Getty Images)

All these languages are related, but profound sound shifts that have gradually taken place have made them distinct from one another. You’d need to be a language expert to see the connection between English ‘five’ and French cinq – let alone the word that Proto-Indo-Europeans are thought to have used, pénkʷe. However, that connection is slightly easier to make out from the Latvian word pieci, and no trouble at all with Lithuanian penki. This is why famous French linguist Antoine Meillet once declared that “anyone wishing to hear how Indo-Europeans spoke should come and listen to a Lithuanian peasant”. [Editor’s note: The little finger, or pinky finger, is also known as the fifth digit or just pinky.]

Lines can be drawn to other ancient languages too, even those that are quite geographically distant. For example, the Lithuanian word for castle or fortress – pilis – is completely different from those used by its non-Baltic neighbours, but is recognisably similar to the Ancient Greek word for town, polis. Surprisingly, Lithuanian is also thought to be the closest surviving European relative to Sanskrit, the oldest written Indo-European language, which is still used in Hindu ceremonies. [Editor’s note: The English word ‘police’ also traces back, via Latin politia, to the Greek polis.]

This last detail has led to claims of similarities between Indian and ancient Baltic cultures. A Lithuanian friend, Dovilas Bukauskas, told me about an event organised by local pagans that he attended. It began with the blessing of a figure of a grass snake – a sacred animal in Baltic tradition – and ended with a Hindu chant.

Honey and beeswax were among medieval Lithuania’s main exports (Credit: Credit: Roman Babakin/Alamy)

Honey and beeswax were among medieval Lithuania’s main exports (Credit: Roman Babakin/Alamy)

I asked Senvaitytė about the word gyvatė. This means ‘snake’, but it shares the same root with gyvybė, which means ‘life’. The grass snake has long been a sacred animal in Lithuania, revered as a symbol of fertility and luck, partly for its ability to shed its skin. A coincidence? Perhaps, but Senvaitytė thinks in this case probably not.

The language may also have played a role in preserving traditions in a different way. After Grand Duke Jogaila took the Polish throne in 1386, Lithuania’s gentry increasingly adopted not only Catholicism, but also the Polish language. Meanwhile, rural Lithuanians were much slower to adopt Christianity, not least because it was almost always preached in Polish or Latin. Even once Christianity had taken hold, Lithuanians were reluctant to give up their animist traditions. Hundreds of years after the country had officially adopted Christianity, travellers through the Lithuanian countryside reported seeing people leave bowls of milk out for grass snakes, in the hope that the animals would befriend the community and bring good luck.

Anyone wishing to hear how Indo-Europeans spoke should come and listen to a Lithuanian peasant

Similarly, bees and bee products seem to have retained importance, especially in folk medicine, for their perceived healing powers. Venom from a bee was used to treat viper bites, and one treatment for epilepsy apparently recommended drinking water with boiled dead bees. But only, of course, if the bees had died from natural causes.

But Lithuanian is no longer exclusively a rural language. The last century was a tumultuous one, bringing war, industrialisation and political change, and all of the country’s major cities now have majorities of Lithuanian-speakers. Following its accession to the EU in 2004, the country is now also increasingly integrated with Europe and the global market, which has led to the increasing presence of English-derived words, such as alternatyvus (alternative) and prioritetas (priority).


Lithuanian is no longer exclusively a rural language (Credit: Will Mawhood)

Given Lithuania’s troubled history, it’s in many ways amazing the language has survived to the present day. At its peak in the 14th Century, the Grand Duchy of Lithuania stretched as far as the Black Sea, but in the centuries since, the country has several times disappeared from the map entirely.

It’s too simplistic to say that Lithuanian allows us to piece together the more mysterious stretches in its history, such as the early, pagan years in which I’m so interested. But the language acts a little like the amber that people on the eastern shores of the Baltic have traded since ancient times, preserving, almost intact, meanings and structures that time has long since worn away everywhere else.

And whether or not Austėja was really worshipped, she has certainly remained a prominent presence: the name has consistently ranked among the top 10 most popular girls’ names in Lithuania. It seems that, despite Lithuania’s inevitable cultural and linguistic evolution, the bee will always be held in high esteem.


(www.bbc.com/travel/story/20180319-are-lithuanians-obsessed-with-bees)

++++++++++