Face of oldest human ancestor comes into focus with new fossil skull
A new fossil discovery means we’re finally able to look upon the face of our oldest ancestor. Paleontologists have discovered an almost-complete skull of Australopithecus anamensis, a species previously known only from jawbones, teeth and bits of leg bones. The new find allowed scientists to realistically recreate the hominin’s face for the first time – and it might just shake up the family tree.
The face of this long-lost ancestor is strangely familiar, not least because of the eerily human eyes. But those – along with the leathery brown skin and muttonchop beard – are the kind of best-guess embellishments you’d expect from a recreation like this. Other features, like the large flat nose, the protruding rounded jaw, and prominent brows and cheekbones, are based on the most complete skull of its kind ever found.
Discovered in Ethiopia in 2016, the skull has been dated to 3.8 million years old and attributed to an adult male specimen of Australopithecus anamensis. Interestingly, that makes it both the youngest known fossil of A. anamensis – a species previously thought to have lived between 4.2 and 3.9 million years ago – and one of the oldest hominin cranial remains, which tend to dry up in the fossil record before about 3.5 million years ago.
This new find fills an important gap in the human origin story. A. anamensis is the oldest known species in the genus Australopithecus, whose members are considered the earliest members of the human evolutionary tree. It’s long been accepted that A. anamensis evolved directly into another species, A. afarensis – the most famous example of which is Lucy herself.
But with this new skull, scientists have far more pieces of the puzzle and have realized that they may have previously been putting them together wrong.
The researchers were able to determine which species the skull belonged to by comparing it to the previously discovered teeth, jaws and other fragments. The rest of the skull showed a strange mix of primitive and advanced (or “derived”) features. Most interesting is the fact that some features of A. anamensis are actually more advanced than those of A. afarensis. That calls into question the long-standing idea that the former evolved directly into the latter.
The revised timeline they created says that A. anamensis lived until at least 3.8 million years ago, while A. afarensis arose earlier than previously thought – maybe as early as 3.9 million years ago. Doing the math, that suggests that the two species may have overlapped by as much as 100,000 years.
Once again, it seems like our evolutionary history needs a rewrite. A more complete fossil record can help us patch up holes and revise what we thought we knew.
The research was described in two papers published in the journal Nature, and the researchers present the find in a video, “The Face of Lucy’s Ancestor Revealed” (see the source link below).
(For the source of this, and many other interesting articles, and to watch a video associated with it, please visit: https://newatlas.com/science/oldest-human-ancestor-skull-face-reconstruction/)
FaceApp Uncannily Captures These Classic Biological Signs of Aging
A guide to what it is, exactly, that makes faces look so old.
This week, celebrities ranging from the Jonas Brothers to Ludacris gave us a peek into what they might look like in old age, all with the help of artificial intelligence. But how exactly has FaceApp taken a stable of celebrities and transformed them into elderly versions of themselves? The app may be powered by A.I., but it’s informed by the biology of aging.
FaceApp was designed by the Russian company Wireless Lab, which debuted the first version of the app back in 2017. But this new round of photos is particularly detailed, which explains the app’s resurgence this week. Just check out geriatric Tom Holland, replete with graying hair and thickened brows — and strangely, a newfound tan.
What tweaks does FaceApp make to achieve that unforgiving effect? On the company’s website, the explanation is fairly vague: “We can certainly add some wrinkles to your face,” the team writes. But a closer look at the “FaceApp Challenge” pictures shows that it does far more than that.
FaceApp has been tight-lipped about how its software works — though we know it’s based on a neural network, a type of artificial intelligence. Inverse has reached out to FaceApp for clarification about how the company achieves its aging effects and will update this story when we hear back.
Regardless, scientists have been studying the specific markers of facial aging for decades – work that gives us a pretty good idea of what changes FaceApp’s neural network takes into consideration when it transports users through time.
The Original FaceApp
Before there was FaceApp, there was Rembrandt, a 17th-century Dutch painter who had a thing for highly unforgiving self portraits, about 40 of which survive today.
In 2012, scientists in Israel performed a robust facial analysis on Rembrandt’s work that was initially intended to separate the real paintings from forgeries. But their paper, published in The Israel Medical Association Journal, also incorporated “subjective and objective” measures of facial aging that they used to measure the impacts of time on the artist’s face. These measures have some applications to our modern-day FaceApp images.
Their formula focused on wrinkles that highlighted Rembrandt’s increasing age. Those included forehead and glabellar wrinkles – the wrinkles between the eyes that show up when you furrow your brow but seem to stick around later in life. They also analyzed accumulations of loose skin around the eyelids, called dermatochalasis (which creates “bags”), and nasolabial folds, the “smile lines” that emerge between the nose and mouth.
Fortunately, Rembrandt’s commitment to realism also gave them bigger aging-related features to work with. They quantified his “jowl formation” and the development of upper neck fat. But the most powerful metric was their “brow index,” which documented the artist’s brow line descending over time. Rembrandt’s eyebrows descended sharply starting in his 20s but leveled out by his 40s.
We can see some of the similar markers in these current FaceApp images. Just look at the aged Tottenham Hotspur squad, complete with furrowed brows, eye bags, and descending jowls — just like Rembrandt.
What Makes a Face Look Older?
Wrinkles notwithstanding, there is another way that FaceApp may be working its magic. There’s some evidence that perceived age is partially linked to facial color contrast.
Also in 2012, a team of scientists in France and Pennsylvania demonstrated the impact of contrast in a series of experiments on images of female Caucasian faces. Faces with high color contrast between facial features (eyes, lips, and mouth, for example) and the skin surrounding them tended to appear younger than faces with low contrast in those areas.
In 2017, members of that team published another study suggesting that contrast holds information about age across ethnic groups. There, they found that color contrast of facial features decreased with age across groups, but most significantly in Caucasian and South Asian women. Contrast decreased with age in Chinese and Latin American women, too, but not as strongly.
Importantly, they also note that when you artificially enhance contrast, faces tend to look younger as well, suggesting that contrast’s relationship to age perception is strong.
“We have also found that artificially increasing those aspects of facial contrast that decrease with age in diverse races and ethnicities makes the faces look younger, independent of the ethnic origin of the face and the cultural origin of the observers,” they write.
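To make the contrast idea concrete, here is a minimal sketch, in Python, of one way such a measure could be computed: take the average luminance of a facial feature (the lips, say) and of the skin just around it, then express the difference as a Michelson-style ratio. The patch values, the luma weights and the formula are illustrative assumptions for this article, not the procedure used in the published studies or by FaceApp.

```python
import numpy as np

def mean_luminance(region_rgb):
    """Average luminance of an RGB pixel patch (values 0-255)."""
    r, g, b = region_rgb[..., 0], region_rgb[..., 1], region_rgb[..., 2]
    # Rec. 601 luma weights -- an illustrative choice, not the studies' method.
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def feature_contrast(feature_rgb, surround_rgb):
    """Michelson-style contrast between a facial feature and the skin
    around it; higher contrast is the pattern linked to younger-looking faces."""
    lf = mean_luminance(feature_rgb)
    ls = mean_luminance(surround_rgb)
    return abs(lf - ls) / (lf + ls + 1e-9)

# Toy patches: darker lips against lighter surrounding skin (hypothetical values).
lips = np.full((20, 40, 3), [150, 60, 70], dtype=float)
skin = np.full((20, 40, 3), [210, 170, 150], dtype=float)
print(round(feature_contrast(lips, skin), 3))  # ~0.34 for these made-up patches
```

In this framing, an aging effect that works on contrast would simply push that number down, for example by lightening the lips or darkening the skin around them, while a “younger” filter would push it up.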
Let’s take a closer look now.
Now that we know all this, let’s take another look at those photos of Tom Holland. There does seem to be some kind of color manipulation going on, in addition to the obvious wrinkling of his skin, though it’s unclear whether the photo was further altered after FaceApp was applied.
Still, color contrast and specific physical features (like “jowl formation”) are factors that may be contributing to FaceApp’s seemingly magical transformation of age — which, for now, has captivated the internet.
(For the source of this, and many other interesting articles, please visit: https://www.inverse.com/article/57787-faceapp-challenge-signs-of-biological-aging/)
Why Do We Procrastinate? Scientists Pinpoint 2 Explanations in the Brain
Scientists find biological evidence that it’s not just about laziness.
There are an infinite number of excuses for putting off your to-do list. Letting clutter build up feels easier than Marie Kondo-ing the whole house, and filing your taxes on time can feel unnecessary when you’ll probably get an extension. The roots of this common but troubling habit run deep. In early July, a team of German researchers reported that procrastination’s root cause is not sheer laziness or lack of discipline but rather a surprising factor deep in the brain.
In the strange study published in Social Cognitive and Affective Neuroscience, scientists at Ruhr University Bochum argue that the urge to procrastinate is governed by your genes.
“To my knowledge, our study is the first to investigate the genetic influences on the tendency to procrastinate,” first author and biopsychology researcher Caroline Schlüter, Ph.D., tells Inverse.
In the team’s study of 287 people, they discovered that women who carried one specific allele (a variant of a gene) reported more procrastination-like behavior than those who didn’t.
Where Does Procrastination Come From?
The team, led by Erhan Genc, Ph.D., a professor in the university’s biopsychology department, has been studying how procrastination might manifest in the brain for several years. His brain-based data suggest that procrastination is more about managing the way we feel about tasks than about simply managing the time we have to dedicate to them.
In 2018, Genc and his colleagues published a study that linked the amygdala, a brain structure involved in emotional processing, to the urge to put things off. People with a tendency to procrastinate, they argued, had bigger amygdalas.
“Individuals with a larger amygdala may be more anxious about the negative consequences of an action — they tend to hesitate and put off things,” he told the BBC.
In the new study, the team tried to identify a genetic pattern underlying their discovery about bigger amygdalas. They believe they’ve found one affecting women specifically. The gene they highlight affects dopamine, a neurotransmitter central to the brain’s reward system that’s implicated in drug use, sex, and other pleasurable activities.
In particular, the gene encodes an enzyme called tyrosine hydroxylase, which helps regulate dopamine production. Women who carried two copies of a variant of that gene, they showed, produced slightly more dopamine than those with an alternative version of the gene, and they also tended to be “prone to procrastination,” according to self-reported surveys.
While this is hardly a causal relationship, the authors argue that there’s a connection between the tendency to procrastinate and this gene that regulates dopamine in the brain, at least in women.
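As a rough illustration of what an association like this looks like in data (and why it stops short of causation), the sketch below compares made-up procrastination scores between carriers and non-carriers of a gene variant using an ordinary two-sample t-test. The numbers, group sizes and test are invented for illustration; they are not the Bochum team’s data or statistical model.

```python
from scipy import stats

# Hypothetical self-reported procrastination scores (higher = more procrastination).
carriers = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2, 3.7, 4.0]      # women carrying the variant
non_carriers = [3.2, 3.6, 3.1, 3.4, 3.8, 3.0, 3.5, 3.3]  # women without it

t_stat, p_value = stats.ttest_ind(carriers, non_carriers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value says the group means differ more than chance alone would suggest.
# It says nothing about *why* they differ, which is why the authors describe
# a connection rather than a cause.
```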
They note, however, that this connection probably exists outside of the one highlighted in their earlier study on the amygdala. When they investigated whether there was a connection between the genotype of procrastinators and brain connectivity in the amygdala, they found no significant correlation.
“Thus, this study suggests that genetic, anatomical and functional differences affect trait-like procrastination independently of one another,” they write.
In other words, there is probably more than one biological process that underpins procrastination, and these researchers suggest that they may have identified two of them so far.
Why Might Dopamine Influence Procrastination?
Despite its well-known link to pleasure, dopamine’s role in procrastination may not come down to its primary role. As Genc notes, dopamine is also related to “cognitive flexibility,” which is the ability to juggle many different ideas at once or shift your thinking in an instant. While this is helpful for multitasking, the team argues that it might also make someone more prone to being distracted.
“We assume that this makes it more difficult to maintain a distinct intention to act,” says Schlüter. “Women with a higher dopamine level as a result of their genotype may tend to postpone actions because they are more distracted by environmental and other factors.”
It would be a long jump to suggest that one gene related to dopamine production affects all of the complex factors governing human procrastination. There are almost certainly a range of different factors at play that may have influenced their results, notably the hormone estrogen, seeing as the pattern was only found in women. But hormones and neurotransmitters aside, the urge to procrastinate likely comes down to more than just genetic signatures. Sometimes, life just gets in the way.
(For the source of this, as well as many other important and interesting articles, please visit: https://www.inverse.com/article/57577-why-do-we-procrastinate-biological-explanations/)
Advice to yourself
What advice would you give your younger self? This is the first study to ever examine it.
- A new study asked hundreds of participants what advice they would give their younger selves if they could.
- The subject matter tended to cluster around familiar areas of regret.
- The test subjects reported that they did start following their own advice later in life, and that it changed them for the better.
Everybody regrets something; it seems to be part of the human condition. Ideas and choices that sounded good at the time can look terrible in retrospect. Almost everybody has a few words of advice for their younger selves they wish they could give.
Despite this, there has never been a serious study into what advice people would give their younger selves until now.
Let me give me a good piece of advice
The study, by Robin Kowalski and Annie McCord at Clemson University and published in The Journal of Social Psychology, asked several hundred volunteers, all of whom were over the age of 30, to answer a series of questions about themselves. One of the questions asked them what advice they would give their younger selves. Their answers give us a look into what areas of life everybody wishes they could have done better in.
Previous studies have shown that regrets tend to fall into six general categories. The answers on this test can be similarly organized into five groups:
- Money (Save more money, younger me!)
- Relationships (Don’t marry that money grabber! Find a nice guy to settle down with.)
- Education (Finish school. Don’t study business because people tell you to, you’ll hate it.)
- A sense of self (Do what you want to do. Never mind what others think.)
- Life goals (Never give up. Set goals. Travel more.)
These pieces of advice were well represented in the survey. Scrolling through them, most of the advice people would give themselves in these areas verges on the cliché. It is only the occasional weight of experience, seeping through advice that could otherwise be summed up as “don’t smoke,” “don’t waste your money,” or “do what you love,” that makes the answers readable.
A few bits of excellent counsel do manage to slip through. Some of the better ones included:
- “Money is a social trap.”
- “What you do twice becomes a habit; be careful of what habits you form.”
- “I would say do not ever base any decisions on fear.”
The study also asked whether the participants had started following the advice they wish they could have given themselves. 65.7% of them said “yes,” and that doing so had helped them become the person they want to be rather than the person society tells them they should be. Perhaps it isn’t too late for everybody to start taking their own advice.
Kowalski and McCord write:
“The results of the current studies suggest that, rather than just writing to Dear Abby, we should consult ourselves for advice we would offer to our younger selves. The data indicate that there is much to be learned that can facilitate well-being and bring us more in line with the person that we would like to be should we follow that advice.”
(For the source of this, and many other important articles, please visit: https://bigthink.com/personal-growth/advice-to-younger-self/)
Emotional temperament in babies associated with specific gut bacteria species
A new study from the University of Turku has uncovered interesting associations between an infant’s gut microbiome composition at the age of 10 weeks and the development of certain temperament traits at six months of age. The research does not imply causation, but it adds to a compelling and growing body of evidence connecting gut bacteria with mood and behavior.
It is still extraordinarily early days for many scientists investigating the broader role of the gut microbiome in humans. While some studies are revealing associations between mental health conditions such as depression or schizophrenia and the microbiome, these are only general correlations. Evidence on the intertwined connections between the gut and brain certainly suggests a fascinating bi-directional relationship; however, positive mental health is not as simple as taking the right probiotic supplement.
Even less research is out there examining associations between the gut microbiome and behavior in infants. One 2015 study examined this relationship in toddlers aged between 18 and 27 months, but this new study set out to investigate the association at an even younger age. The hypothesis: if the early months of a young life are so fundamental to neurodevelopment, and our gut bacteria are fundamentally linked with the brain, then microbiome composition could be vital to the development of basic behavioral traits.
The study recruited 303 infants. A stool sample was collected and analyzed at the age of two and a half months, and then at around six months of age the mothers completed a behavior questionnaire evaluating the child’s temperament. The most general finding was that greater microbial diversity equated with less fear reactivity and lower negative emotionality.
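“Microbial diversity” in studies like this is a summary number computed from the relative abundances of bacterial taxa in each stool sample. The article doesn’t say which index the Turku team used, so the sketch below uses the widely used Shannon index purely as an illustration, with made-up genus counts.

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p * ln(p)) over taxa with nonzero counts.
    A more even community gives a higher H."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical read counts per bacterial genus for two infant samples.
sample_a = {"Bifidobacterium": 700, "Streptococcus": 150, "Atopobium": 100, "Erwinia": 50}
sample_b = {"Bifidobacterium": 980, "Streptococcus": 10, "Atopobium": 5, "Erwinia": 5}

print(round(shannon_diversity(list(sample_a.values())), 2))  # ~0.91, the more even sample
print(round(shannon_diversity(list(sample_b.values())), 2))  # ~0.12, dominated by one genus
```

In the study’s terms, an infant whose sample looked more like the first (higher diversity) tended to score lower on fear reactivity and negative emotionality, though only as a correlation.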
“It was interesting that, for example, the Bifidobacterium genus including several lactic acid bacteria was associated with higher positive emotions in infants,” says Anna Aatsinki, one of the lead authors on the study. “Positive emotionality is the tendency to experience and express happiness and delight, and it can also be a sign of an extrovert personality later in life.”
On a more granular level the study homed in on several specific associations between certain bacterial genera and infant temperaments. High abundance of Bifidobacterium and Streptococcus, and low levels of Atopobium, were associated with positive emotionality. Negative emotionality was associated with Erwinia, Rothia and Serratia bacteria. Fear reactivity in particular was found to be specifically associated with an increased abundance of Peptinophilus and Atopobium bacteria.
The researchers are incredibly clear these findings are merely associational observations and no causal connection is suggested. These kinds of correlational studies are simply the first step, pointing the way to future research better equipped to investigate the underlying mechanisms that could be generating these associations.
“Although we discovered connections between diversity and temperament traits, it is not certain whether early microbial diversity affects disease risk later in life,” says Aatsinki. “It is also unclear what are the exact mechanisms behind the association. This is why we need follow-up studies as well as a closer examination of metabolites produced by the microbes.”
The new study was published in the journal Brain, Behavior, and Immunity.
Source: University of Turku
(For the source of this, and many other important articles, please visit: https://newatlas.com/gut-bacteria-microbiome-baby-infant-behavior-mood/60197/)
Released in the same year as The Wild Bunch and Butch Cassidy and the Sundance Kid, Henry Hathaway’s western was defiantly old-fashioned in comparison
The year 1969 was a true inflection point for the American western, a once-dominant genre that had become a casualty of the culture, particularly when Vietnam had rendered the moral clarity of white hats and black hats obsolete. A handful of westerns were released by major studios that year, including forgettable or regrettable star vehicles for Burt Reynolds (Sam Whiskey) and Clint Eastwood (Paint Your Wagon), who were trying to revitalize the genre with a touch of whimsy. But 50 years later, three very different films have endured: Butch Cassidy and the Sundance Kid, The Wild Bunch and True Grit. Together, they represented the past, present and future of the western.
In the present, there was Butch Cassidy and the Sundance Kid, the year’s runaway box-office smash, grossing more than the counterculture duo of Midnight Cowboy and Easy Rider, the second- and third-place finishers, combined. George Roy Hill’s hip western-comedy, scripted by William Goldman and starring Paul Newman and Robert Redford, turned a story of outlaw bank robbers into a knowing and cheerfully sardonic entertainment that felt attuned to modern sensibilities. Sam Peckinpah’s Wild Bunch predicted a future of revisionist westerns, full of grizzled antiheroes, great spasms of stylized violence, and the messy inevitability of unhappy endings. A whiff of death from a genre in decline.
By contrast, True Grit looks like it could have been released 10, 20 or 30 years earlier, and with many of the same people working behind and in front of the camera. Its legendary producer, Hal B Wallis, was the driving force behind such Golden Age classics as Casablanca and The Adventures of Robin Hood, and his director, Henry Hathaway, cut his teeth as Cecil B DeMille’s assistant on 1923’s Ben Hur before spending decades making studio westerns, including a 1932 debut (Heritage of the Desert) that gave Randolph Scott his start and seven films with Gary Cooper. And then, of course, there’s John Wayne as Rooster Cogburn, stretching himself enough to win his only Oscar for best actor, but drawing heavily on his own pre-established iconography. It was, for him, a well-earned victory lap.
True Grit may be defiantly old-fashioned and stodgy when considered against the films of the day, but it’s also an example of how durable the genre actually was – and how it would be again in 2010, when the Coen brothers took their own crack at Charles Portis’s 1968 novel and produced the biggest hit of their careers. What would be more escapist than ducking into a movie theater in the summer of ’69 and stepping into a time machine where John Wayne is a big star, answering a call to adventure across a beautiful Technicolor expanse of mountains and prairies? The film has much more sophistication than the average throwback, but the search for justice across Indian Territory is uncomplicated and righteous, and the half-contentious/half-sentimental relationship between a plucky teenager and an irascible old coot grounds it in the tried-and-true. The defiant message here is: this can still work!
And boy does it ever. Kim Darby didn’t get much of a career boost for playing Mattie Ross, a fiercely determined and morally upstanding tomboy on the hunt for her father’s killer, but every bit of energy and urgency the film needs comes from her. When Mattie’s father is shot by Tom Chaney (Jeff Corey), a hired hand on their ranch near Fort Smith, Arkansas, she takes it upon herself to make sure he’s caught and dragged before the hanging judge. Whatever emotion she feels about the loss is set aside, limited to a brief crying jag in the privacy of a hotel bedroom, and she’s all business the rest of the time. When the Fort Smith sheriff doesn’t seem sufficiently motivated, she seeks out US marshal Cogburn (Wayne), a one-eyed whiskey guzzler who lives with a Chinese shopkeep and a cat he calls General Sterling Price.
The odd man out in their posse is a Texas ranger named La Boeuf (Glen Campbell), which Wayne and everyone else pronounce as “La Beef”, as part of Rooster’s instinctual disrespect for Texans – and, really, anyone who fought for the Confederacy during the civil war. (La Boeuf makes a point of saying he fought for General Kirby Smith, rather than the south, which suggests a sense of shame that stands out in our current age of tiki-torch monument protests.) The chemistry between the three is terrific, despite Campbell’s limitations as an actor, because it’s constantly changing: Rooster and La Boeuf are sometimes aligned as mercenaries who see Chaney as a chance to take money from Mattie and from the family of a Texas state senator that the scoundrel also shot. Rooster comes to Mattie’s defense when La Boeuf treats her like a wayward child and whips her with a switch, but the tables turn on that, too, when Rooster’s protective side holds her back.
Wayne called Marguerite Roberts’ script the best he’d ever read – she was on the Hollywood blacklist, which made them odd political bedfellows – and True Grit has nearly as much pop in the dialogue as the showier Butch Cassidy. Mattie gets to turn her father’s oversized pistol on Chaney, but language is her weapon of choice, delivered in such an intellectual fusillade that her adversaries tend to surrender quickly. (A running joke about the lawyer she intends to sic on them has a wonderful payoff.) The three leads exchange playful barbs and colorful stories, too, with Rooster ragging on La Boeuf’s marksmanship (“This is the famous horse killer from El Paso”) or spending the downtime before an ambush sharing the troubled events from his life that have gotten him to this place.
There’s a degree to which True Grit is a victory lap for Wayne, who gets one of his last – and certainly one of his best – opportunities to pay off a career in westerns. Yet Wayne genuinely lets down his guard in key moments and allows real pain and vulnerability to seep through, enough to complicate his tough-guy persona without demolishing it altogether. It may not have the gravitas of Clint Eastwood in Unforgiven, but it’s the same type of performance, the reckoning of a western gunslinger who’s seen and done terrible things, lost the people he loved, and seems intent on living out his remaining days alone. Without the redemptive power of Mattie’s kindness and decency, True Grit is about a man left to drink himself to death.
(For the source of this, and many other quite interesting articles and features, please visit: https://www.theguardian.com/film/2019/jun/11/true-grit-john-wayne-1969-henry-hathaway/)
John Wayne – Very brief partial bio:
John Wayne was born Marion Robert Morrison in Winterset, Iowa, on May 26, 1907. He attended the University of Southern California (USC) on an athletic scholarship, but a broken collarbone ended his athletic career and, with it, his scholarship. With no funds available for school, he had to leave USC. His coach, who had been giving actor Tom Mix tickets to USC games, asked Mix and director John Ford to give Wayne a job as a prop boy and extra, and Wayne quickly started appearing as an extra in many films. He also met Wyatt Earp, who was friends with Mix, and would later credit Earp as the inspiration for his walk, talk and persona.
In 1969, Wayne won the Best Actor Oscar for his role in True Grit. It was his second acting nomination; the first had come 20 years earlier, for Sands of Iwo Jima.
Wayne passed away from stomach cancer at the UCLA Medical Center on June 11, 1979.
Wayne was a member of Marion McDaniel Masonic Lodge No. 56 in Tucson, Arizona. He was a 32° Scottish Rite Mason, a member of the York Rite and a member of Al Malaikah Shrine Temple in Los Angeles.
(For a more extensive bio please visit: https://www.masonrytoday.com/index.php?new_month=5&new_day=26&new_year=2015)
By 2100 there could be 4.9bn dead users on Facebook. So who controls our digital legacy after we have gone? As Black Mirror returns, we delve into the issue.
Esther Earl never meant to tweet after she died. On 25 August 2010, the 16-year-old internet vlogger died after a four-year battle with thyroid cancer. In her early teens, Esther had gained a loyal following online, where she posted about her love of Harry Potter, and her illness. Then, on 18 February 2011 – six months after her death – Esther posted a message on her Twitter account, @crazycrayon.
“It’s currently Friday, January 14 of the year 2010. just wanted to say: I seriously hope that I’m alive when this posts,” she wrote, adding an emoji of a smiling face in sunglasses. Her mother, Lori Earl from Massachusetts, tells me Esther’s online friends were “freaked out” by the tweet.
“I’d say they found her tweet jarring because it was unexpected,” she says. Earl doesn’t know which service her daughter used to schedule the tweet a year in advance, but believes it was intended for herself, not for loved ones after her death. “She hoped she would receive her own messages … [it showed] her hopes and longings to still be living, to hold on to life.”
Although Esther did not intend her tweet to be a posthumous message for her family, a host of services now encourage people to plan their online afterlives. Want to post on social media and communicate with your friends after death? There are lots of apps for that! Replika and Eternime are artificially intelligent chatbots that can imitate your speech for loved ones after you die; GoneNotGone enables you to send emails from the grave; and DeadSocial’s “goodbye tool” allows you to “tell your friends and family that you have died”. In season two, episode one of Black Mirror, a young woman recreates her dead boyfriend as an artificial intelligence – what was once the subject of a dystopian 44-minute fantasy is nearing reality.
Why people become vegans: The history, sex and science of a meatless existence
At the age of 14, a young Donald Watson watched as a terrified pig was slaughtered on his family farm. In the British boy’s eyes, the screaming pig was being murdered. Watson stopped eating meat and eventually gave up dairy as well.
Later, as an adult in 1944, Watson realized that other people shared his interest in a plant-only diet. And thus veganism – a term he coined – was born.
Flash-forward to today, and Watson’s legacy ripples through our culture. Even though only 3 percent of Americans actually identify as vegan, most people seem to have an unusually strong opinion about these fringe foodies – one way or the other.
As a behavioral scientist with a strong interest in consumer food movements, I thought November – World Vegan Month – would be a good time to explore why people become vegans, why they can inspire so much irritation and why many of us meat-eaters may soon join their ranks.
It’s an ideology, not a choice
Like other alternative food movements such as locavorism, veganism arises from a belief structure that guides daily eating decisions.
Vegans aren’t simply moral high-grounders. They do believe it’s moral to avoid animal products, but they also believe it’s healthier and better for the environment.
Also, just like Donald Watson’s story, veganism is rooted in early life experiences.
Psychologists recently discovered that having a larger variety of pets as a child increases tendencies to avoid eating meat as an adult. Growing up with different sorts of pets increases concern for how animals are treated more generally.
Thus, when a friend opts for Tofurky this holiday season, rather than one of the 45 million turkeys consumed for Thanksgiving, his decision isn’t just a high-minded choice. It arises from beliefs that are deeply held and hard to change.
Veganism as a symbolic threat
That doesn’t mean your faux-turkey loving friend won’t seem annoying if you’re a meat-eater.
Why do some people find vegans so irritating? In fact, it might be more about “us” than them.
Most Americans think meat is an important part of a healthy diet. The government recommends eating 2-3 portions (5-6 ounces) per day of everything from bison to sea bass. As tribal humans, we naturally form biases against individuals who challenge our way of life, and because veganism runs counter to how we typically approach food, vegans feel threatening.
Humans respond to feelings of threat by derogating out-groups. Two out of 3 vegans experience discrimination daily, 1 in 4 report losing friends after “coming out” as vegan, and 1 in 10 believe being vegan cost them a job.
Veganism can be hard on a person’s sex life, too. Recent research finds that the more someone enjoys eating meat, the less likely they are to swipe right on a vegan. Also, women find men who are vegan less attractive than those who eat meat, as meat-eating seems masculine.
Crossing the vegan divide
It may be no surprise that being a vegan is tough, but meat-eaters and meat-abstainers probably have more in common than they might think.
Vegans are foremost focused on healthy eating. Six out of 10 Americans want their meals to be healthier, and research shows that plant-based diets are associated with reduced risk for heart disease, certain cancers, and Type 2 diabetes.
It may not be surprising, then, that 1 in 10 Americans are pursuing a mostly veggie diet. That number is higher among younger generations, suggesting that the long-term trend might be moving away from meat consumption.
In addition, several factors will make meat more costly in the near future.
Meat production accounts for as much as 15 percent of all greenhouse gas emissions, and clear-cutting for pasture land destroys 6.7 million acres of tropical forest per year. While some debate exists on the actual figures, it is clear that meat emits more than plants, and population growth is increasing demand for quality protein.
Seizing the opportunity, scientists have innovated new forms of plant-based meats that have proven to be appealing even to meat-eaters. The distributor of Beyond Meat’s plant-based patties says 86 percent of its customers are meat-eaters. It is rumored that this California-based vegan company will soon be publicly traded on Wall Street.
Even more astonishing, the science behind lab-grown, “cultured tissue” meat is improving. It used to cost more than $250,000 to produce a single lab-grown hamburger patty. Technological improvements by Dutch company Mosa Meat have reduced the cost to $10 per burger.
Watson’s legacy
Even during the holiday season, when meats like turkey and ham take center stage at family feasts, there’s a growing push to promote meatless eating.
London, for example, will host its first-ever “zero waste” Christmas market this year featuring vegan food vendors. Donald Watson, who was born just four hours north of London, would be proud.
Watson, who died in 2006 at the ripe old age of 95, outlived most of his critics. This may give quiet resolve to vegans as they brave our meat-loving world.
(For the source of this, and many other interesting articles, please visit: https://theconversation.com/why-people-become-vegans-the-history-sex-and-science-of-a-meatless-existence-106410/)
Did human ancestors split from chimps in Europe, not Africa?
It’s generally accepted that humans originated in Africa and gradually spread out across the globe from there, but a pair of new studies may paint a different picture. By examining fossils of early hominins, researchers have found that humans and chimpanzees may have split from their last common ancestor earlier than previously thought, and this important event may have happened in the ancient savannahs of Europe, not Africa.
The split between humans and our closest living relatives, chimpanzees, is a murky area in our history. While the point of original divergence is thought to have been between 5 and 7 million years ago, it wasn’t a clean break, and cross breeding and hybridization may have continued until as recently as 4 million years ago.
Where the divergence took place is contentious as well, but Eastern Africa is the accepted birthplace of the earliest pre-humans. One of the best candidates for the last common ancestor is Sahelanthropus, known from a skull found in Central Africa dating back to around 7 million years ago. But according to the new studies, bones found in Greece and Bulgaria appear to belong to a hominin that’s a few hundred thousand years older.
“Our discovery outlines a new scenario for the beginning of human history – the findings allow us to move the human-chimpanzee split into the Mediterranean area,” says David Begun, co-author of one of the studies. “These research findings call into question one of the most dogmatic assertions in paleoanthropology since Charles Darwin, which is that the human lineage originated in Africa. It is critical to know where the human lineage arose so that we can reconstruct the circumstances leading to our divergence from the common ancestor we share with chimpanzees.”
The Mediterranean bones are from a species called Graecopithecus freybergi, and it’s one of the least understood European apes. The researchers scanned a jawbone found in Greece and an upper premolar from Bulgaria, and found the roots of the teeth to be largely fused together, indicating that the species might have been an early hominin.
“While great apes typically have two or three separate and diverging roots, the roots of Graecopithecus converge and are partially fused – a feature that is characteristic of modern humans, early humans and several pre-humans including Ardipithecus and Australopithecus,” says Madelaine Böhme, co-lead investigator on the project.
To get a clearer picture, the researchers studied the sediment that the fossils were found in, and discovered that the two sites were very similar. Not only were they almost exactly the same age – 7.24 and 7.175 million years – but both areas were dry, grassy savannahs at the time, making them prime conditions for hominins.
The researchers found grains of dust that appeared to have blown up from the Sahara desert, which was forming around the same time. This might have contributed to the savannah-like conditions in Europe, and these environmental changes may have driven the two species to evolve differently.
“The incipient formation of a desert in North Africa more than seven million years ago and the spread of savannahs in Southern Europe may have played a central role in the splitting of the human and chimpanzee lineages,” says Böhme.
But inferring information from fossils always leaves room for error, and as New Scientist reports, there are researchers who aren’t convinced such big claims can be projected from such small features of the fossils. Still, it’s an interesting theory, and one that will warrant more study.
Source: University of Toronto
(For the source of this, and many other interesting articles, please visit: https://newatlas.com/fossil-human-chimp-ancestor-europe/49708/)
Ancient pee helps archaeologists track the rise of farming
One of the most important transitions in human history was when we stopped hunting and gathering for food and instead settled down to become farmers. Now, to reconstruct the history of one particular archaeological site in Turkey, scientists have examined a pretty unexpected source – the salts left behind from human and animal pee.
The dig site of Aşıklı Höyük in Turkey has been studied for decades, and it’s clear that humans occupied the area more than 10,000 years ago, where they started experimenting with keeping animals like sheep and goats. But just how many people and animals occupied the site at different times has been trickier to track.
For the new study, researchers from Columbia University and the universities of Tübingen, Arizona and Istanbul realized that the more humans and animals there are on a site, the higher the concentration of certain salts in the ground. The reason? Everybody and everything pees.
The team began by collecting 113 samples from across Aşıklı Höyük, including trash piles, bricks, hearths and soil, from all different time periods. They examined the levels of sodium, nitrate and chlorine salts, which are all passed in urine.
Sure enough, the fluctuating levels of urine salts revealed the history of human and animal occupation of Aşıklı Höyük. Very little salt was detected in the natural layers, before any settlement existed. Between about 10,400 and 10,000 years ago, salt levels rose slightly, as a few humans began settling. Then things really took off – between 10,000 and 9,700 years ago the salts saw a huge spike, with levels about 1,000 times higher than previously detected. That indicates a similar spike in the number of occupants. After that, concentrations go into decline again.
That large spike, the team says, suggests that domestication of animals in Aşıklı Höyük occurred faster than was previously thought.
Using this data, the researchers estimated that over the 1,000-year period of occupation, an average of 1,790 people and animals lived in the area per day. At its peak, the population density would have reached about one person or animal for every 10 sq m (108 sq ft).
The estimated inhabitants of each time period can’t all have been human – the houses found on site indicate a smaller population. But the team says this is evidence that salt concentrations can be a useful tool to study the density of domesticated animals over time.
The researchers say this technique could be used in other sites, to help find new evidence of the timing and density of human settlement.
The research was published in the journal Science Advances.
Source: Columbia University
(For the source of this, and many other equally interesting articles, please visit: https://newatlas.com/urine-salts-ancient-farming/59360/)
Japan’s ‘vanishing’ Ainu will finally be recognized as indigenous people
Growing up in Japan, musician Oki Kano never knew he was part of a “vanishing people.”
For decades, researchers and conservative Japanese politicians described the Ainu as “vanishing,” says Jeffry Gayman, an Ainu peoples researcher at Hokkaido University.
Gayman says there might actually be tens of thousands more people of Ainu descent who have gone uncounted — due to discrimination, many Ainu chose to hide their background and assimilate years ago, leaving younger people in the dark about their heritage.
A bill passed recently has, for the first time, officially recognized the Ainu of Hokkaido as an “indigenous” people of Japan. The bill also includes measures to make Japan a more inclusive society for the Ainu, strengthen their local economies and bring visibility to their culture.
Japanese land minister Keiichi Ishii told reporters that it was important for the Ainu to maintain their ethnic dignity and pass on their culture to create a vibrant and diverse society.
Yet some warn a new museum showcasing their culture risks turning the Ainu into a cultural exhibit and note the bill is missing one important thing — an apology.
‘Tree without roots’
“Bob Marley sang that people who forget about their ancestors are the same as a tree without roots,” says Kano, 62. “I checked the lyrics as a teenager, though they became more meaningful to me as I matured.”
After discovering his ethnic origins, Kano was determined to learn more. He traveled to northern Hokkaido to meet his father and immediately felt an affinity with the Ainu community there — the “Asahikawa,” who are known for their anti-establishment stance.
But his sense of belonging was short-lived — some Ainu rejected Kano for having grown up outside of the community, saying he would never fully understand the suffering they had endured under Japanese rule.
Yuji Shimizu, an Ainu elder, says he faced open discrimination while growing up in Hokkaido. He says other children called him a dog and bullied him for looking different.
Hoping to avoid prejudice, his parents never taught him traditional Ainu customs or even the language, says the 78-year-old former teacher.
“My mother told me to forget I was Ainu and become like the Japanese if I wanted to be successful,” says Shimizu.
Ainu Moshir (Land of the Ainu)
The Ainu were early residents of northern Japan, in what is now the Hokkaido prefecture, and of the Kuril Islands and Sakhalin, off the east coast of Russia. They revered bears and wolves, and worshiped gods embodied in natural elements like water, fire and wind.
In the 15th century, the Japanese moved into territories held by various Ainu groups to trade. But conflicts soon erupted, with many battles fought between 1457 and 1789. After the 1789 Battle of Kunasiri-Menasi, the Japanese conquered the Ainu.
Japan’s modernization in the mid-1800s was accompanied by a growing sense of nationalism and, in 1899, the government sought to assimilate the Ainu by introducing the Hokkaido Former Aborigines Protection Act.
The act implemented Japan’s compulsory national education system in Hokkaido and eliminated traditional systems of Ainu land rights and claims. Over time, the Ainu were forced to give up their land and adopt Japanese customs through a series of government initiatives.
Today, there are only two native Ainu speakers worldwide, according to the Endangered Languages Project, an organization of indigenous groups and researchers aimed at protecting endangered languages.
High levels of poverty and unemployment currently hinder the Ainu’s social progress. The percentage of Ainu who attend high school and university is far lower than the Hokkaido average.
The Ainu population also appears to have shrunk. Official figures put the number of Ainu in Hokkaido at 17,000 in 2013, accounting for around 2% of the prefecture’s population. In 2017, the latest year on record, there were only about 13,000.
However, Gayman, the Ainu researcher, says that the number of Ainu could be up to ten times higher than official surveys suggest, because many have chosen not to identify as Ainu and others have forgotten — or never known — their origins.
Finding music
While living abroad, Kano befriended several Native Americans at a time when indigenous peoples were putting pressure on governments globally to recognize their rights. He credits them with awakening his political conscience as a member of the Ainu.
“I knew I had to reconnect with my Ainu heritage,” he says. Kano made his way back to Japan and, in 1993, discovered a five-stringed instrument called the “tonkori,” once considered a symbol of Ainu culture.
“I made a few songs with the tonkori and thought I had talent,” he says, despite never having formally studied music. But finding a tonkori master to teach him was hard after years of cultural erasure.
So he used old cassette tapes of Ainu music as a reference. “It was like when you copy Jimi Hendrix while learning how to play the guitar,” he says.
His persistence paid off. In 2005, Kano created the Oki Dub Ainu group, which fuses Ainu influence with reggae, electronica and folk undertones. He also created his own record label to introduce Ainu music to the world.
Since then, Kano has performed in Australia and toured Europe. He has also taken part in the United Nations’ Working Group on Indigenous Populations to voice Ainu concerns.
UN Declaration on the Rights of Indigenous Peoples (UNDRIP)
The United Nations adopted UNDRIP on September 13, 2007, to enshrine the rights that “constitute the minimum standards for the survival, dignity and well-being of the indigenous peoples of the world.”
The UNDRIP protects collective rights that may not feature in other human rights charters, which emphasize individual rights, and it also safeguards the individual rights of indigenous people.
New law, new future?
Winchester and Gayman also say the government failed to consult all Ainu groups when drafting the bill.
For the Ainu elder Shimizu, the new bill is missing an important part: atonement. “Why doesn’t the government apologize? If the Japanese recognized what they did in the past, I think we could move forward,” says Shimizu.
“The Japanese forcibly colonized us and annihilated our culture. Without even admitting to this, they want to turn us into a museum exhibit,” Shimizu adds, referring to the 2019 bill’s provision to open an Ainu culture museum in Hokkaido.
Other Ainu say the museum will create jobs.
Currently, Ainu youth are eligible for scholarships and grants to study their own language and culture at a few select private universities. But Kano says government funding should extend beyond supporting Ainu heritage, to support the Ainu people.
Unrelated Languages Often Use Same Sounds for Common Objects and Ideas, Research Finds
A careful statistical examination of words from 6,000+ languages shows that humans tend to use the same sounds for common objects and ideas, no matter what language they’re speaking.
The new research, led by Prof. Morten Christiansen of Cornell University, demonstrates a robust statistical relationship between certain basic concepts – from body parts to familial relationships and aspects of the natural world – and the sounds humans around the world use to describe them.
“These sound symbolic patterns show up again and again across the world, independent of the geographical dispersal of humans and independent of language lineage,” Prof. Christiansen said.
“There does seem to be something about the human condition that leads to these patterns. We don’t know what it is, but we know it’s there.”
Prof. Christiansen and his colleagues from Argentina, Germany, the Netherlands and Switzerland analyzed 40-100 basic vocabulary words in 62 percent of the world’s more than 6,000 current languages and 85 percent of its linguistic lineages.
“The dataset used for this study is drawn from version 16 of the Automated Similarity Judgment Program database,” they explained.
“The data consist of 28–40 lexical items from 6,452 word lists, with a subset of 328 word lists having up to 100 items. The word lists include both languages and dialects, spanning 62% of the world’s languages and about 85% of its lineages.”
The words included pronouns, body parts and properties (small, full), verbs that describe motion and nouns that describe natural phenomena (star, fish).
The scientists found a considerable proportion of the 100 basic vocabulary words have a strong association with specific kinds of human speech sounds.
For instance, in most languages, the word for ‘nose’ is likely to include the sounds ‘neh’ or the ‘oo’ sound, as in ‘ooze.’
The word for ‘tongue’ is likely to have ‘l’ or ‘u.’
‘Leaf’ is likely to include the sounds ‘b,’ ‘p’ or ‘l.’
‘Sand’ will probably use the sound ‘s.’
The words for ‘red’ and ‘round’ often appear with ‘r,’ and ‘small’ with ‘i.’
“It doesn’t mean all words have these sounds, but the relationship is much stronger than we’d expect by chance. The associations were particularly strong for words that described body parts. We didn’t quite expect that,” Prof. Christiansen said.
The researchers also found certain words are likely to avoid certain sounds. This was especially true for pronouns.
For example, words for ‘I’ are unlikely to include sounds involving u, p, b, t, s, r and l.
‘You’ is unlikely to include sounds involving u, o, p, t, d, q, s, r and l.
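To give a sense of the kind of statistical check behind claims like these, here is a hedged sketch of a simple permutation test: count how often the word for one concept contains a target sound across languages, then ask how often a random, same-sized sample of words does at least as well. The tiny word list and the procedure are illustrative assumptions only, and far simpler than the analysis actually run on the Automated Similarity Judgment Program data.

```python
import random

# Toy data: words for three concepts in six hypothetical languages.
words = {
    "sand":  ["sili", "sara", "mesa", "asun", "sopo", "kasa"],
    "dog":   ["kuna", "waro", "peto", "hundo", "iri", "mopu"],
    "water": ["aqua", "nilo", "wata", "omi", "tubi", "rano"],
}

def rate_with_sound(word_list, sound):
    """Share of words containing the target sound."""
    return sum(sound in w for w in word_list) / len(word_list)

observed = rate_with_sound(words["sand"], "s")

# Permutation baseline: scramble which word goes with which concept and see
# how often a random sample of the same size contains 's' at least as often.
pool = [w for ws in words.values() for w in ws]
n = len(words["sand"])
random.seed(0)
trials = 10_000
hits = sum(rate_with_sound(random.sample(pool, n), "s") >= observed for _ in range(trials))

print(f"observed rate: {observed:.2f}, permutation p ~ {hits / trials:.4f}")
```

On real data, a test like this also has to control for the fact that related languages inherit words from a common ancestor and that neighbors borrow from each other; that is the hard part of the actual analysis and is ignored in this sketch.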
The team’s findings, published in the Proceedings of the National Academy of Sciences, challenge one of the most basic concepts in linguistics: the century-old idea that the relationship between a sound of a word and its meaning is arbitrary.
The researchers don’t know why humans tend to use the same sounds across languages to describe basic objects and ideas.
“These concepts are important in all languages, and children are likely to learn these words early in life,” Prof. Christiansen said.
“Perhaps these signals help to nudge kids into acquiring language.”
“Likely it has something to do with the human mind or brain, our ways of interacting, or signals we use when we learn or process language. That’s a key question for future research.”
_____
Damián E. Blasi et al. Sound–meaning association biases evidenced across thousands of languages. PNAS, published online September 12, 2016; doi: 10.1073/pnas.1605782113
(For the source of this, and many other interesting articles, please visit: www.sci-news.com/othersciences/linguistics/languages-use-same-sounds-common-objects-ideas-04185.html/)
Neanderthals, Denisovans May Have Had Their Own Language, Suggest Scientists
A broad range of evidence from linguistics, genetics, paleontology, and archaeology suggests that Neanderthals and Denisovans shared with us something like modern speech and language, according to Dutch psycholinguistics researchers Dr Dan Dediu and Dr Stephen Levinson.
Neanderthals have fascinated both the academic world and the general public ever since their discovery almost 200 years ago. Initially thought to be subhuman brutes incapable of anything but the most primitive of grunts, they were a successful form of humanity that inhabited vast swathes of western Eurasia for several hundred millennia, through harsh ice ages and milder interglacial periods.
Scientists knew that Neanderthals were our closest cousins, sharing a common ancestor with us, probably Homo heidelbergensis, but it was unclear what their cognitive capacities were like, or why modern humans succeeded in replacing them after thousands of years of cohabitation.
Due to new discoveries and the reassessment of older data, but especially to the availability of ancient DNA, researchers have started to realize that Neanderthals’ fate was much more intertwined with ours and that, far from being slow brutes, their cognitive capacities and culture were comparable to ours.
Dr Dediu and Dr Levinson, both from the Max Planck Institute for Psycholinguistics and the Radboud University Nijmegen, reviewed all these strands of literature, and argue that essentially modern language and speech are an ancient feature of our lineage dating back at least to the most recent ancestor we shared with the Neanderthals and the Denisovans. Their interpretation of the intrinsically ambiguous and scant evidence goes against the scenario usually assumed by most language scientists.
The study, reported in the journal Frontiers in Language Sciences, pushes back the origins of modern language by a factor of ten – from the often-cited 50,000 years to 500,000-1,000,000 years ago – somewhere between the origins of our genus, Homo, some 1.8 million years ago, and the emergence of Homo heidelbergensis.
This reassessment of the evidence goes against a scenario where a single catastrophic mutation in a single individual would suddenly give rise to language, and suggests that a gradual accumulation of biological and cultural innovations is much more plausible.
Interestingly, we know from the archaeological record and recent genetic data that the modern humans spreading out of Africa interacted both genetically and culturally with the Neanderthals and Denisovans. So just as our bodies carry around some of their genes, our languages may preserve traces of their languages too.
This would mean that at least some of the observed linguistic diversity is due to these ancient encounters, an idea testable by comparing the structural properties of the African and non-African languages, and by detailed computer simulations of language spread.
______
Bibliographic information: Dediu D and Levinson SC. 2013. On the antiquity of language: the reinterpretation of Neanderthal linguistic capacities and its consequences. Front. Psychol. 4: 397; doi: 10.3389/fpsyg.2013.00397
(For the source of this, and other equally important articles, please visit: http://www.sci-news.com/othersciences/linguistics/science-neanderthals-denisovans-language-01211.html/)
A Mysterious Third Human Species Lived Alongside Neanderthals in This Cave
It’s a “fascinating part of human history.”
Scientists digging in the mountains of southern Siberia have revealed key insights into the lives of Denisovans, a mysterious branch of the ancient human family tree. While these relatives are extinct, their legacy lives on in the modern humans who carry fragments of their DNA and in the tiny artifacts and bones they left behind. Compared to the well-known Neanderthals, there’s a lot we don’t know about the Denisovans — but a pair of papers published recently hint at their place in our shared history.
Both Neanderthals and Denisovans belong to the genus Homo, though it’s still not entirely clear whether the Denisovans are a separate species or a subspecies of modern humans — after all, we only have six fossil fragments to go on. Nevertheless, we’re one step closer to finding out. Both studies, published in Nature, describe new discoveries in the Denisova Cave of the Altai Mountains, where excavations have continued for the past 40 years. Those efforts have revealed ancient human remains carrying the DNA of both the Denisovans and Neanderthals who made the high-ceilinged cave their home — sometimes, even having children together.
For a long time, nobody knew exactly how long this cave was occupied, or how the hominins living there interacted with one another. But now, the studies collectively reveal that humans occupied the cave from approximately 200,000 years ago to 50,000 years ago.
The authors of one study focused on Denisovan fossils and artifacts to determine “aspects of their cultural and subsistence adaptations.” Katerina Douka, Ph.D., the co-author of that study and a researcher at the Max Planck Institute for the Science of Human History, tells Inverse that confirming that they lived in this cave is a “fascinating part of human history.” However, she adds, there is still a great deal we don’t know about the Denisovans – not their geographic range, their location of origin, or even what they looked like.
When they lived in the cave, and with whom, is another mystery about the Denisovans that was investigated, sediment layer by sediment layer, in the second study. Published by scientists from the University of Wollongong and the Russian Academy of Sciences, the analysis is the most comprehensive dating project ever done on the Denisova Cave deposits. The team dated 103 sediment layers and 50 items within them, mostly bits of bone, charcoal, and tools. The oldest Denisovan DNA comes from a layer between 185,000 and 217,000 years old, and the oldest Neanderthal DNA sample is from a layer that’s about 172,000 to 205,000 years old. In the more recent layers of the cave, between 55,200 and 84,100 years old, only Denisovan remains were found.
And it’s in these more recent layers that more advanced objects begin to emerge – pieces of tooth pendants and bone points, which “may be assumed” to be “associated with the Denisovan population,” write Douka and her team. Those artifacts are the oldest of their kind found in northern Eurasia and represent something previously unexplored: Denisovan culture.
At this point, says Douka, we cannot definitively say that Denisovans created those items, though the evidence is pointing that way. More sites with Denisovan remains and material culture are needed to answer deeper questions about their culture and symbols.
April Nowell, Ph.D., is a University of Victoria professor and Paleolithic archeologist who specializes in the origins of art and symbol use; she wasn’t a part of these recent papers. Evaluating the pendants and bones, she tells Inverse that, assuming these artifacts were made by the Denisovans, she’s “not particularly surprised.” Human culture, very broadly, is thought to have emerged 3.3 million years ago, with the first stone tools. Other ancient humans used the natural clay ochre to paint at least 100,000 years ago, the same period in which archeologists have found the oldest beads.
So, it makes sense that a human subspecies would create cultural artifacts around this time.
What’s novel in the new studies, Nowell says, is that “we know virtually nothing about who Denisovans were, so every study like this one helps to enrich our understanding of their place in the human story.”
“Given that we have items of personal adornment associated with Neanderthals and modern humans all around the same date as the ones thought to be associated with the Denisovans,” she adds, “I would find it more surprising if they were not making similar objects.”
These particular items, Nowell explains, especially the tooth pendant, likely speak to “issues of personal identity and group belonging.” The teeth were purposefully chosen, modified, and worn – jewelry that communicated something about the wearer and likely influenced how the wearer felt about themselves.
Jewelry, she says, can be powerful and laden with meaning — just think about putting on a wedding ring or holding your grandfather’s pocket watch. We can’t tell what these pendants meant to the Denisovans who created and wore them, but their very existence allows archeologists to begin to piece together an idea of the culture from which they were wrought.
(For the source of this, and many additional interesting articles, please visit: https://www.inverse.com/article/52926-denisova-cave-dating-sediment-culture/)
New species of human discovered in cave in Philippines
A new species of human has been discovered in a cave in the Philippines. Named Homo luzonensis after the island of Luzon where it was found, the hominin appears to have lived over 50,000 years ago, painting a more complete picture of human evolution.
The new species is known from 12 bones found in Callao Cave, which are thought to be the remains of at least two adults and a juvenile. This includes several finger and toe bones, some teeth and a partial femur. While that might not sound like much to work with, scientists can use that to determine more than you might expect.
“There are some really interesting features – for example, the teeth are really small,” says Professor Philip Piper, co-author of the study. “The size of the teeth generally, though not always, reflect the overall body-size of a mammal, so we think Homo luzonensis was probably relatively small. Exactly how small we don’t know yet. We would need to find some skeletal elements from which we could measure body-size more precisely.”
Even with those scattered bones, scientists are able to start slotting Homo luzonensis into the hominin family tree. Although it is a distinct species of its own, it shares various traits with many of its relatives, including Neanderthals, modern humans, and most notably Homo floresiensis – the “Hobbit” humans discovered in an Indonesian cave in 2003. But perhaps the strangest family resemblance is to Australopithecus, a far more ancient ancestor of ours.
“It’s quite incredible, the hand and feet bones are remarkably Australopithecine-like,” says Piper. “The Australopithecines last walked the Earth in Africa about 2 million years ago and are considered to be the ancestors of the Homo group, which includes modern humans. So, the question is whether some of these features evolved as adaptations to island life, or whether they are anatomical traits passed down to Homo luzonensis from their ancestors over the preceding 2 million years.”
The research was published in the journal Nature.
(For the source of this very interesting article, plus many others, please visit: https://newatlas.com/new-human-species-homo-luzonensis/59207/)
In contrast to much of what Hollywood has constructed, the real cowboy lifestyle was far less glamorous and happy than you may think. Of course, there were some smiling faces among 19th century cowboys, but the gunslinging frontier hero you may picture is a Wild West myth.
Cowboys in the old American West worked cattle drives and on ranches alike, master horsemen from all walks of life who dedicated themselves to the herd. Cowboy life in the 1800s was full of hard work, danger, and monotonous tasks, with a heaping helping of dust, bugs, and beans on the side.
Cowboys Didn’t Get A Lot Of Sleep
A cowboy’s day and night revolved around the herd, a constant routine of guarding, wrangling, and caring for cattle. When cowboys were out with a herd or simply working on a ranch, they had to be on watch. With a typical watch lasting two to four hours, there was usually a rotation of men. This gave cowboys the chance to sleep for relatively short spurts, often getting six hours of sleep at the most.
Cowboys slept on bedrolls, an easily transportable mattress of sorts made out of feathers, canvas, or waterproof tarpaulin. Out on a drive, cowboys slept on the same bedrolls they used at the ranch. Bedrolls were likely full of lice and bedbugs wherever they were used.
Dirt Was Everywhere
Cowboys out with the herd wore the same clothes day in and day out. While wrangling the herd, cowboys in the back were naturally surrounded by a giant dust cloud stirred up by the animals, but dirt was pretty inescapable from any vantage point.
When cowboys were done with a cattle drive or came to a town, they made their way to a much needed and enjoyed bath. They may have also purchased new clothes and blown off steam at the local saloon.
Life at a ranch could be less dusty, but not always. Some ranches had elaborate mansions, but cowboys spent their days and nights in bunkhouses and other outbuildings. These were modestly better than being out on the range, but a lot of cowboys preferred to sleep out under the stars even when they had the option of a roof over their heads.
They Had Their Own Language
The language of cowboys was full of task-specific phrases – and a fair amount of cursing. Much of the cowboy lexicon came from the vaquero tradition, but there was a lot of slang, too.
Cowboys used metaphorical phrases like “above snakes” or “hair case” to indicate being alive and a hat, respectively. They also used Native American words as they interacted with individual tribes.
Cowboys had words for their guns, their horses, the types of work they did, and their gear. A rope could be called many things based on what it was made of and what it looked like. For example, a long black and white horse hair rope was called a “pepper-and-salt rope.”
Their Clothes Were Practical And Protective
Cowboys wore hats, chaps, boots, and other hardy clothing to keep themselves safe on the trail and in the harsh elements. Hats varied by region but generally they had brims to keep the sun out of cowboys’ eyes. The wider the hat brim, the more shade it could provide.
Chaps were worn over pants to keep cowboys’ legs safe, and American cowboys wore bandanas around their necks that they could pull over their mouths and noses to keep the dust out.
Cowboy boots were designed with narrow toes and heels so the cowboy’s foot would fit securely in a stirrup but still have the ability to move should the rider need to dismount. Made of leather, they were sturdy and had spurs attached so a cowboy could prod his horse along. Boots were tall, going up the lower leg of a cowboy for protective purposes.
Strength, Courage, And Intelligence Were Equally Essential To Survival
Contemporaries heralded cowboys’ “courage, physical alertness, ability to endure exposure and fatigue, horsemanship, and skill in the use of the lariat.”
Cowboys needed to be physically strong to take on tasks like breaking horses, roping cattle, and riding for hours on end. Courage to chase down stampeding herds or brave the elements on a regular basis was supplemented by the knowledge required to make quick decisions, care for the cattle, and not panic in the face of a crisis.
Often this intelligence came with years of experience, but cowboys needed to be able to understand cow psychology, navigating what a cow would react to, how to get cattle to take water, and techniques to avoid unnecessary risks on the drive.
Laziness was not an option on a cattle drive and was met with harsh treatment. One man who was caught sleeping under the chuck wagon was taught a lesson by being jabbed with a dead tarantula.
A Cowboy’s Horse Was His Best Friend
A cowboy needed his horse to travel, guard, protect, and haul on a cattle drive. Horses had to be able to handle long hours with riders on their backs, difficult terrain, and extreme heat. Cowboys maintained their horses, caring for them along drives with the utmost tenderness, and developing bonds that unified steed and rider with Centaur-like cohesion.
A good horse meant a cowboy could keep watch at night, and only the smartest and best-trained horses were used for the task. The best horses made up the remuda, a collection of even-tempered equines thought to understand cattle as much as their riders.
The Money Was Decent But The Life Was Hard
Cowboys could make anywhere from $25 to $40 a month, which was good money for single men who didn’t have to support families. They’d spend their money on luxuries when they got to town, although any ostentatious purchases would most likely result in ridicule. Some cowboys saved their wages to buy cattle and land of their own.
Cowboys made the same wage regardless of ethnic or racial background. In addition to going on cattle drives, cowboys worked on ranches or in local towns when they could find work.
Cowboys Traveled In Groups For Thousands Of Miles
It took eight to 12 cowboys to move 3,000 head of cattle, making for cohesive groups of young men traveling across large stretches of land with a common goal. There was a hierarchy of sorts, with a trail boss leading the way. The trail boss decided how many miles the drive would tackle in a day and where the group camped at night. There was also a second in command, a segundo, alongside a cook and several wranglers.
Lone cowboys were particularly vulnerable to attacks and the elements but also evoked fear and suspicion when they were spotted out on the plain.
Most Cowboys Didn’t Carry Guns To Fight
Cowboys had guns, but those guns were used for protection more than in confrontations or quarrels. Cowboys might have fended off wolves and coyotes, warded off hostile Native groups, or deterred potential thieves, but for the most part guns were used in the event of a stampede.
When a stampede broke out, cowboys had no choice but to try to overtake the leaders and bring it to an end. Once they caught up with the front of the group, they would fire their guns at the cattle to get them to stop.
The myth of the cowboy who carried two six-shooters comes from Hollywood, but cowboys often carried multiple weapons. There were hundreds of kinds of guns used by cowboys over time, and most men preferred to have a short sidearm and a longer rifle at their disposal.
Cowboys Were A Diverse Lot
The American cowboy owes its origin to the Mexican and Spanish rancher traditions. During the 1700s, vaqueros – derived from vaca, the Spanish word for “cow” – were hired by Spanish ranchers to work the land and tend to their cattle. Vaqueros were native Mexicans who had expertise in roping, herding, and riding.
By the 1800s, waves of European immigrants had made their way west and begun to work as cowboys as well. No longer a vocation just for Mexicans, cowboy work drew a diverse crowd: African Americans, Native Americans, and settlers from all around Europe worked with Mexican vaqueros, often picking up the skills they needed to thrive and survive along the way.
The remoteness of cowboy life led to an egalitarianism of sorts, one that transcended ethnic and racial differences. The almost-exclusively male environment also valued hard work and strength over all else, contributing to a relatively discrimination-free setting.
The Food Left A Lot To Be Desired
There wasn’t much variety in a cowboy’s diet. Chuckwagons accompanied cattle drives and cooks, legendarily grumpy but beloved companions, served staple foods like beef, bacon, beans, bread, and coffee.
Cowboys typically ate twice a day, once in the morning and again in the evening, but sometimes a third meal occurred as well. Additionally, most cowboys weren’t gluttonous, eating enough to get full but not over-indulging for fear of an upset stomach or running out of food on a long drive.
Stampedes Were Dangerous Events
A stampede was a terrifying event, one cowboys feared and did everything they could to avoid. Various things could spook cattle – a pistol shot, a storm, a snake – but once a stampede got going, it was up to the cowboys to ride to the front of the herd and bring it under control.
After cowboys ran to their horses and tried to avoid getting trampled while on foot, they had to navigate thousands of pounds of cattle coming at them. As cowboys moved alongside the herd, they could fall or be knocked off their horses. A horse itself could be brought down by the herd, something that resulted in both rider and horse being “mangled to sausage meat,” as was the case in Idaho in 1889.
Cowboy Teddy Blue recalled a stampede in 1876 wherein a cowpuncher and his horse were killed, describing the horse’s ribs as “scraped bare of hide, and all the rest of the horse and man was mashed into the ground as flat as a pancake.”
Cowboys Talked And Sang To The Cows
Rising with the sun, cowboys weren’t prone to staying up late but they did spend their evenings telling stories and socializing with their fellows. Around a campfire, cowboys also played fiddles or harmonicas, told jokes, or generally decompressed after a long day.
When they were on watch, cowboys talked to the cattle, telling them stories or soothing them with songs. Songs could be made up on the spot or handed down among cowboys, often perpetuating a tale or focusing on some aspect of cowboy life.
(For the source of this, and many other fascinating articles, please visit: https://www.ranker.com/list/life-of-a-wild-west-cowboy/melissa-sartore/)
++++++++++
At new museums, the past is finally becoming more than the story of men and wars.
NASHVILLE — Like many girls of my generation in the rural South, I learned every form of handwork my grandmother or great-grandmother could teach me: sewing, knitting, crocheting, quilting. I even learned to tat, a kind of handwork done with a tiny shuttle that turns thread into lace. Some of my happiest memories are of sitting on the edge of my great-grandmother’s bed, our heads bent together over a difficult project, as she pulled out my mangled stitches and patiently demonstrated the proper way to do them.
But by the time I’d mastered those skills, I had also lost the heart for them. Why bother to knit when the stores were full of warm sweaters? Why take months to make a quilt when the house had central heat? Of what possible use is tatting, which my great-grandmother sewed to the edges of handmade handkerchiefs, when Kleenex comes in those little purse-size packages?
But my abandonment of the domestic arts wasn’t just pragmatic. By the time I got to college, I had come to the conclusion that handwork was incompatible with my own budding feminism. Wasn’t such work just a form of subjugation? A way to keep women too busy in the home to assert any influence in the larger world? Without even realizing it, I had internalized the message that work traditionally done by men is inherently more valuable than work traditionally done by women.
I came to this unconscious conclusion almost inevitably. When every history class I ever took featured an endless list of battles won and lost by men, of political contests won and lost by men, of technological advances achieved by men, it’s not surprising that the measure of significance seemed to be the yardstick established by men — almost exclusively white men.
Public history has the power to affect our very understanding of reality. It tells us what we should value most about the past and how we should understand our own place within that context. Just as art museums today must wrestle with an earlier aesthetic that excluded women and artists of color, local-history museums are working to recalibrate the way they present the past.
In Montgomery, Ala., the Legacy Museum and the National Memorial for Peace and Justice convey the history of systemic racism in this country. In Louisiana, the restored Whitney Plantation’s new focus is the way the enslaved people on the plantation lived. In Atlanta, the Cyclorama — a 360-degree diorama the length of a football field that depicts the Battle of Atlanta — was restored and returned to public display, this time with new interpretive materials that defy the Lost Cause myth. And in Memphis, the Pink Palace Museum has just opened an elaborate new exhibition, two years in the making, that celebrates the city’s 200-year history as a kind of web in which specific issues like race thread through seemingly unrelated categories like art and entertainment, commerce and entrepreneurialism, and heritage and identity.
Here in Nashville, the new Tennessee State Museum, which opened last October, addresses the history of the state in a new building whose very design reinforces the idea that history is the story of everyone, of all the people. Andrew Jackson has his space, of course, but so do the Native Americans whom Jackson sent on the Trail of Tears, a genocidal march out of their homeland. All the relevant wars are here, along with all the relevant weaponry, but so are the pottery shards and the bedsteads and the whiskey jugs and the children’s toys. It’s all arranged in a timeline that unfolds at a human pace and on a human scale, equally beautiful and inviting, equally informative and embracing. My people are from Alabama, not Tennessee, but this space feels as though it belongs as much to me as to any Tennessean because it tells the kinds of stories that could be the story of my people, the kinds of stories that earlier versions of public history had always deemed unworthy of celebration or scholarly attention.
As it happens, the museum’s first temporary exhibition, which opened in February and runs through July 7, is a gallery full of gorgeous quilts. That was the exhibition I most wanted to see, and it did not disappoint. The quilts were made by familiar patterns — star and flower garden and log cabin and wedding ring — if not by familiar hands. Some of my own family quilts are gorgeously complex, but others are barely more than plain rectangles sewn in a row. I once asked my mother about those serviceable but hardly beautiful quilts, and she said impatiently: “People were cold, Margaret. They were trying to stay warm.”
The quilts in the exhibition at the Tennessee State Museum would keep people warm, but they are also absolute showpieces, with carefully coordinated colors and tiny stitches so perfectly close together and so perfectly uniform that it seems impossible for them to have been made by human hands. These women were nothing less than artists, and the gallery’s informational placards elevate them to that status and place them within that context. I studied the stitches and thought again and again of the women who had taught me to sit before a table frame and push a needle through all three quilt layers, taking stitches small enough to keep the batting from wadding up in the wash.
At the foot of our bed is a cedar chest that holds my share of the family quilts. The maple-leaf quilt was made for my childhood bedroom, but some of the squares were pieced together decades before I was born. The Sunbonnet Sue was my mother’s baby blanket. The flower-garden pattern with the yellow border was the last quilt my great-grandmother pieced by hand. My grandmother made the fan quilt for my husband and me when we got married. Shot through that quilt are memories — patchwork remnants of the dresses my mother made for me as I was growing up, bits left over from the simple blouses and skirts I made for myself in middle school.
(For the balance of this article please visit: https://www.nytimes.com/2019/04/01/opinion/tennessee-state-museum-quilts.html)
Teen Study Illuminates the Link Between Social Media Use and ADHD
This isn’t good.
By Sarah Sloat –
Whether it’s to fight FOMO or play Fortnite, teens are tethered to their phones. Smartphone addiction has become so bad that even smartphone creators want to help people get off their devices, and recent surveys show that half of American teens “feel addicted” to their mobile devices and 78 percent of them check their devices hourly. These habits, write researchers in a new JAMA study on teens, are linked to the development of the classic symptoms of attention-deficit/hyperactivity disorder.
The paper is an analysis of the social media habits and mental health of 2,587 teenagers who, crucially, did not have preexisting ADHD symptoms at the beginning of the study. Those who frequently used digital media platforms over the course of the two-year study, the researchers show, began to display ADHD symptoms, including inattention, hyperactivity, and impulsivity. It’s too early to define the nature of the link, the researchers warn, but it’s a good place to start.
“We can’t confirm the causation from the study, but this was a statistically significant association,” explains co-author Adam Leventhal, Ph.D., a University of Southern California professor of preventative medicine and psychology. “We can say with confidence that teens who were exposed to higher levels of digital media were significantly more likely to develop ADHD symptoms in the future.”
The study participants, who were between 15 and 16 years old, represented various demographic and socioeconomic statuses and were enrolled in public high schools in Los Angeles County. Every six months between 2014 and 2016, the researchers asked the teens how often they accessed 14 popular digital media platforms on their smartphones and examined them for symptoms of ADHD. Mobile technologies, Leventhal explains, “can provide fast, high-intensity stimulation accessible all day, which has increased digital media exposure far beyond what’s been studied before.” In the past, studies on the link between exposure to technology and mental health focused only on the effects of TV or video games.
The team’s analysis of the data showed that 9.5 percent of the 114 teens who used at least 7 platforms frequently showed ADHD symptoms that hadn’t been present at the beginning of the study. Of the 51 teens who used all 14 platforms frequently, 10.5 percent showed new ADHD symptoms.
This study “raises concern” about the ADHD risk that digital media technology poses for teens, but Leventhal emphasizes that there’s no evidence of causation and that further study is needed. Scientists know that ADHD manifests as physical differences in the brain, but they’re still not sure what causes it. There are multiple non-exclusive theories, which include an individual’s genes, low birth weight, and exposure to toxins like cigarette smoke in the womb.
Smartphone use, for its part, has been linked to changes in the brain as well, but none that have been associated with ADHD. More studies are needed to know whether frequent use of digital platforms is linked to ADHD or whether it underlies a completely different disorder that shares similar symptoms.
(For the source of this article, plus many additional important articles, please visit: https://www.inverse.com/article/47220-smartphone-digital-media-use-adhd/)
Mastodon bones push arrival of early humans in America back by 115,000 years
When did humans arrive in America? It’s been a hot topic in scientific circles for the last 20 years or so, pegged anywhere from 13,500 to 16,500 years ago. Now new research from the Cerutti Mastodon Discovery, an archeological site in Southern California, blows those estimates away by suggesting early hominids arrived on the continent as early as 130,000 years ago. To give some perspective, it’s believed humans migrated out of Africa 125,000 years ago at the earliest.
The argument made by the scientists, led by the San Diego Natural History Museum, centers on the sharply broken bones, tusks and molars of a mastodon found at a paleontological site first discovered in 1992 as a result of a freeway expansion. Also found buried at the site were large stones that appear to have been used as hammers and anvils. Further research showed that the bones were broken while still fresh by blows from the hammer stones, blows that appeared strategically aimed to get at any marrow inside. With such evidence of human activity, the site suddenly became an archeological dig.
Mastodon bones and tusks found next to what are believed to be stone hammers used by early humans.
At the time of the find, dating techniques weren’t sophisticated enough to reliably assign an age to the bones – and, by association, to the tool-users who acted upon them. In 2014, however, state-of-the-art radiometric dating equipment was used to determine a more reliable and definitive age for the mastodon bones: around 130,000 years old, give or take 9,400 years. At the same time, experts studying microscopic damage to the bones and rock determined it was indeed consistent with human activity.
The researchers even went so far as to conduct experiments on the bones of large mammals, including elephants, to study breakage patterns and determine how such fractures could be made by early humans. They discovered that a blow from a hammer stone on a fresh elephant limb produced the same patterns of breakage as on the mastodon bones found at the site.
The results of all of this research have now been published in the journal Nature.
“This discovery is rewriting our understanding of when humans reached the New World,” said Judy Gradwohl, president and chief executive officer of the San Diego Natural History Museum. “The evidence we found at this site indicates that some hominin species was living in North America 115,000 years earlier than previously thought. This raises intriguing questions about how these early humans arrived here and who they were.”
For decades, the prevailing theory for human migration to America was via the Beringia land bridge over the Bering Strait from Siberia, dating to around 13,500 years ago. Later discoveries challenged that idea, pushing the arrival of humans back by several millennia. The discovery of the scientists at the Cerutti Mastodon site opens up more questions than it answers, starting with who these early hominins were, how they got here, and what happened to them.
“When we first discovered the site, there was strong physical evidence that placed humans alongside extinct Ice Age megafauna,” said Tom Deméré, curator of paleontology and director of PaleoServices at the San Diego Natural History Museum, as well as an author on the paper. “This was significant in and of itself and a ‘first’ in San Diego County. Since the original discovery, dating technology has advanced to enable us to confirm with further certainty that early humans were here much earlier than commonly accepted.”
Source: San Diego Natural History Museum
(For the source of this, and similarly important articles, please visit: https://newatlas.com/early-humans-arrive-america/49243/)
Bold study claims humans may have arrived in Australia 120,000 years ago
Australia’s Aboriginal population is said to be the oldest continuing civilization on Earth – but just how old is that? It’s currently believed that Aboriginal ancestors made their way to Australia as long as 65,000 years ago, but new evidence uncovered at a dig site in the continent’s southeast may push the timeline back much further. If the site does turn out to be human-made, it suggests that people have been living in Australia for as long as 120,000 years.
The place of interest, known as the Moyjil site, is located in the city of Warrnambool, Victoria. Archaeologists have been investigating the area for over a decade, and the basis for these extraordinary claims is a mound of materials including sand, seashells and stones.
That might not sound like much, but the scientists suggest this is what’s known as a midden – essentially, an ancient landfill. The remains of fish, crabs and shellfish have been found in the mound, which may be all that remains of long-eaten meals, while charcoal, blackened stones and other features may be all that’s left of ancient fireplaces.
But the really intriguing part of the site is its age. If Moyjil does turn out to be a human site, it could force us to rewrite not just the history of Australian occupation but our understanding of human migration worldwide.
“What makes the site so significant is its great age,” says John Sherwood, an author of the study. “Dating of the shells, burnt stones and surrounding cemented sands by a variety of methods has established that the deposit was formed about 120,000 years ago. That’s about twice the presently accepted age of arrival of people on the Australian continent, based on archaeological evidence. A human site of this antiquity, at the southern edge of the continent, would be of international significance because of its implications for the movement of modern humans out of Africa.”
But there are quite a few caveats to these claims. For one, there’s every chance that the mounds aren’t middens at all, but natural formations of some kind. Definitive proof of human occupation from that era, such as tools or bones, has yet to be found.
On top of that, it doesn’t quite make sense within the current narrative. Genetic studies have shown that Aboriginal people only split off from other human populations about 75,000 years ago, after their ancestors migrated out of Africa, through Southeast Asia into Australia.
The oldest definitive evidence of humans on the continent consists of artifacts dated to 65,000 years ago, found in Kakadu National Park along Australia’s northern coast. This makes sense, given it’s close to the islands people are thought to have used to cross over.
But the Moyjil site is on the complete opposite side of the continent, and it’s hard to believe humans appeared that far south, at a time when they were otherwise believed to be more or less restricted to Africa. Humans aren’t thought to have even entered East Asia before about 100,000 years ago.
The researchers acknowledge the weight of the claims, and say they’re working to continue examining the Moyjil site for further evidence of human occupation, and hope others will do the same.
“We recognize the need for a very high level of proof for the site’s origin,” says Sherwood. “Within our own research group the extent to which members believe the current evidence supports a theory of human agency ranges from ‘weak’ to ‘strong.’ But importantly, and despite these differences, we all agree that available evidence fails to prove conclusively that the site is of natural origin. What we need now is to attract the attention of other researchers with specialist techniques which may be able to conclusively resolve the question of whether or not humans created the deposit.”
The research was published in the journal Proceedings of the Royal Society of Victoria.
Source: Deakin University
(For the source of this, and many other quite interesting articles, please visit: https://newatlas.com/dig-site-australia-humans-moyjil/58886/)
More evidence lack of sleep drives Alzheimer’s progression
A new study from researchers at Washington University School of Medicine in St. Louis has revealed further evidence of how sleep deprivation can drive the spread of toxic Alzheimer’s-inducing proteins throughout the brain. The study bolsters the growing hypothesis suggesting sleep disruption plays a major role in the progression of neurodegenerative disease.
Over the last year or two there have been several notable studies published investigating how poor sleep seems to be fundamentally linked to neurodegenerative diseases such as Alzheimer’s. Prior work has clearly demonstrated how just one night of disrupted sleep can increase accumulations in the brain of a protein called amyloid-beta, one of the central pathological drivers of Alzheimer’s disease.
Now sleep researchers have turned their focus towards the other major toxic protein often implicated in Alzheimer’s pathology – tau. Alongside the amyloid clumps, often hypothesized to be the driver of Alzheimer’s-induced brain damage, tau proteins are also implicated as being damaging. These abnormal tau clumps, called neurofibrillary tangles, are often identified in neurodegenerative disease.
A recent study from the Washington University School of Medicine in St. Louis revealed higher levels of tau proteins were identified in human subjects who reported disrupted sleep patterns. It was unclear from that research whether the sleep disruptions preceded or followed these pathological brain changes. Now, a new study from the same team has revealed strong evidence suggesting sleep disruption does indeed directly cause tau protein levels to rise and more rapidly spread through the brain.
The new research describes several experiments, in both mice and humans, that clearly establish tau levels rising as a result of sleep deprivation. Tests in humans revealed a single sleepless night correlated with tau levels in cerebrospinal fluid rising about 50 percent. These results were also observed in mouse models subjected to extensive stretches of sleep deprivation.
The researchers also investigated whether sleep deprivation accelerates the spread of toxic tau neurofibrillary tangles. Two groups of mice were seeded with neurofibrillary tangles in their hippocampi, with one group allowed to sleep according to normal patterns, while the other group was kept awake for long periods every day.
After four weeks, the mice subjected to sleep deprivation showed significantly greater spread and growth of the tau tangles, compared to the well rested animals. These increased neurofibrillary tangles were also found in brain areas similar to those seen in human subjects suffering from Alzheimer’s disease.
“The interesting thing about this study is that it suggests that real-life factors such as sleep might affect how fast the disease spreads through the brain,” says David Holtzman, senior author on the new study. “We’ve known that sleep problems and Alzheimer’s are associated in part via a different Alzheimer’s protein – amyloid beta – but this study shows that sleep disruption causes the damaging protein tau to increase rapidly and to spread over time.”
Despite the robust research described in the new study, there are still several limitations to how the conclusions can be interpreted. For example, it is unclear how long-lasting these tau spikes actually are. Does a good night’s sleep clear out the increased amyloid and tau load caused by a bad night’s sleep? Does this even play a major role in the slow, long-term onset of diseases such as Alzheimer’s? There is growing debate over whether tau and amyloid are even the right targets for understanding the pathogenic origins of Alzheimer’s disease.
Holtzman is open about the limitations of his research; however, he suggests that if the outcome is that people pay more attention to their sleep cycles, then that will undoubtedly be beneficial.
“Our brains need time to recover from the stresses of the day,” says Holtzman. “We don’t know yet whether getting adequate sleep as people age will protect against Alzheimer’s disease. But it can’t hurt, and this and other data suggest that it may even help delay and slow down the disease process if it has begun.”
The new study was published in the journal Science.
(For the source of this, and many other important similar articles, please visit: https://newatlas.com/sleep-deprivation-alzheimers-dementia-tau/58201/)
++++++++++
New insight into how lack of quality sleep is linked to Alzheimer’s disease
Adding to a growing body of research associating sleep quality with the development of dementia and Alzheimer’s disease, a new study from the Washington University School of Medicine in St. Louis has homed in on the specific sleep phase that, when disrupted, can be linked to early stages of cognitive decline.
Sleep is important. That is something we know for sure. More recently a series of studies have been revealing compelling associations between disrupted sleep and neurodegenerative diseases such as Alzheimer’s. Last year it was discovered that sleep deprivation can directly lead to an increase in amyloid-beta accumulations in the brain, one of the central pathological observations seen in people with Alzheimer’s disease.
A new study is further elucidating the relationship between sleep and Alzheimer’s. The hypothesis behind the research is that decreased slow-wave sleep may correlate with increases in a brain protein called tau, which alongside amyloid-beta has been found to be significantly linked to the cognitive decline associated with Alzheimer’s disease.
The researchers examined the sleep patterns of 119 subjects over the age of 60, the majority of whom were cognitively healthy with no signs of dementia or Alzheimer’s. For a week the subjects’ sleep patterns were monitored using sensors and portable EEG monitors. Tau and amyloid levels were also tracked in all subjects using either PET scans or spinal fluid sampling.
The results revealed that those subjects suffering from lower levels of slow-wave sleep displayed higher volumes of tau protein in the brain. Slow-wave sleep is the deepest phase of non-rapid eye movement sleep and this stage of a person’s sleep cycle has been strongly linked to memory consolidation, with many researchers also suggesting slow-wave sleep is vital for maintaining general brain health.
“The key is that it wasn’t the total amount of sleep that was linked to tau, it was the slow-wave sleep, which reflects quality of sleep,” explains Brendan Lucey, first author on the new study. “The people with increased tau pathology were actually sleeping more at night and napping more in the day, but they weren’t getting as good quality sleep.”
Huge questions still remain unanswered though, particularly when trying to discern whether bad sleep is ultimately a cause, or a consequence, of conditions such as Alzheimer’s. The study clearly notes that a significant limitation of its conclusions is the inability to establish whether sleep changes precede, or follow, any pathological changes in the brain.
Age-related neurodegenerative diseases are inarguably more complicated than simply being the effect of years of bad sleep. However, the researchers do suggest sleep disruptions may be an effective early warning tool to help doctors spot patients in the earliest, pre-clinical stages of cognitive decline.
“What’s interesting is that we saw this inverse relationship between decreased slow-wave sleep and more tau protein in people who were either cognitively normal or very mildly impaired, meaning that reduced slow-wave activity may be a marker for the transition between normal and impaired,” says Lucey. “Measuring how people sleep may be a noninvasive way to screen for Alzheimer’s disease before or just as people begin to develop problems with memory and thinking.”
The new study was published in the journal Science Translational Medicine.
Source: Washington University School of Medicine in St. Louis
(For the source of this, and other related articles, please visit: https://newatlas.com/sleep-slow-wave-alzheimers-dementia/57968/)
++++++++++
The world’s oldest cave paintings were probably made by Neanderthals
For a long time, we thought our species was the only one that made art.
Every single culture engages in some kind of art, whether that’s telling stories, dancing, weaving elaborate textiles, cooking, making jewelry or pottery, or painting landscapes and portraits. It’s so common that creating art, known to social scientists as “symbolic behavior,” seems to be an important part of what it means to be human.
Language is a type of symbolic behavior. For example, the sounds that make up the word “chair” don’t have any connection to an actual chair. English speakers have just agreed to share this audible symbol to refer to the objective reality, and different languages use different sounds to symbolize the same thing. But when did symbolic behavior begin? That’s a question archaeologists have been trying to answer for as long as there have been archaeologists. One of our favorite ways to study this topic is through cave art.
Cave art includes paintings, carvings, and sculptures. Perhaps you’re already familiar with the magnificent horses of Lascaux, so popular that France built an exact replica of the entire cave for tourists, or you might know the beautiful Panel of the Lions in Chauvet Cave. This European cave art isn’t the oldest evidence of symbolic behavior, but it is the best-studied and largest collection.
Most European cave art dates to between 40,000 and 10,000 years ago. Chauvet’s paintings, once thought to be the oldest cave art, are 37,000 years old. In 2012, El Castillo, Spain, took top prize for the oldest cave art in the world, with one painting dated to 40,800 years ago. During the vast majority of this time period our own species, Homo sapiens, was the only human species in Europe, so archaeologists assumed that we must have been the artists. But a new study out earlier this year means that assumption may be wrong.
Neanderthals (Homo neanderthalensis, sometimes also called Homo sapiens neanderthalensis) lived in Europe, Asia, and the Middle East from around 430,000 years ago until they died out about 40,000 years ago. Despite their unintelligent reputation, Neanderthals were quite smart. We know that they used fire, made stone tools, and were excellent hunters.
New evidence suggests that Neanderthals may have independently practiced symbolic behavior. Neanderthals painted. In February 2018, researchers published an article in Science showing that some cave art is far too old to have been made by Homo sapiens. Dirk Hoffman of the Max Planck Institute for Evolutionary Anthropology and his team examined paintings from three caves in Spain: a red geometric shape, from La Pasiego, part of the same cave complex as El Castillo, which they dated to 64,800 years ago; a red hand outline, from Maltravieso, which they dated to 66,700 years ago; and an abstract red swath at Ardales, dated to at least 65,500 years ago. The dates are shocking, and not only because they trump the El Castillo painting by more than 20,000 years. When these three pieces were painted, there were no Homo sapiens anywhere in Europe. We didn’t arrive on the continent until around 44,000 years ago. That leaves Neanderthals as the only possible artists for these Spanish caves.
Because cave art has been studied since 1880, it might seem strange that our image of the artists could change so dramatically now. Part of the problem is that it isn’t easy to date cave art. Carbon dating, which archaeologists use to find out the age of most human artifacts, is not ideal for cave art for three reasons. First, carbon dating requires carbon in the paint; black paint is sometimes made of carbon, but red paint is not. Second, carbon dating requires removing a small sample of the paint itself; archaeologists are often reluctant to destroy even a tiny part of these ancient and rare pieces of art. Finally, carbon dating is unreliable for objects older than 50,000 years, which all three of these pieces are.
That’s why Hoffman and his team used a different method, called uranium-thorium (U-Th) dating. U-Th dating, which is reliable all the way back to 500,000 years, does not date the paintings themselves. Instead, it works on very thin mineral layers that slowly form on cave walls over thousands of years. Sometimes these crusts form directly on top of the art, sealing it in. The paintings underneath must have been there first, so archaeologists get a minimum age for the art by dating the mineral layer.
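To put rough numbers on that 50,000-year limit, here is a minimal Python sketch (not taken from any of the studies above) that applies the standard exponential-decay relation using the accepted carbon-14 half-life of roughly 5,730 years; the ages checked are simply illustrative.

C14_HALF_LIFE_YEARS = 5730

def fraction_remaining(age_years):
    # Fraction of the original carbon-14 still present after age_years of decay.
    return 0.5 ** (age_years / C14_HALF_LIFE_YEARS)

for age in (10_000, 40_000, 50_000, 65_000):
    print(f"{age:>6} years: {fraction_remaining(age):.3%} of the original carbon-14 remains")

By 50,000 years only about 0.2 percent of the original carbon-14 survives, too little to measure reliably, which is why dating the overlying mineral crusts with U-Th, a method good back to about 500,000 years, yields a usable minimum age for paintings this old.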
To be clear, this new discovery doesn’t mean that all of the cave art was made by Neanderthals. In fact, many of the most famous caves were painted only after Neanderthals went extinct. But this discovery does mean that perhaps Neanderthals should be included along with us as creators of symbolism. If so, it would drastically change our understanding of how Neanderthals behaved. Did they use language, another type of symbolic behavior? Did they have religion? Or music? Studying their art may help us get at the answers to these questions.
Until recently, the best case for Neanderthal symbolism came from the Châtelperronian jewelry, a collection of animal teeth, shells, and ivory pieces worn as beads. However, the Châtelperronian comes from the very end of the Neanderthals’ existence. They may have seen nearby Homo sapiens wearing jewelry and simply copied what we were doing. But Hoffman’s study is revising what we know about Neanderthals. Our cousin-species may well have been creative artists, just like us.
(For the source of this, and other equally interesting articles, please visit: https://massivesci.com/articles/cave-art-neanderthal-painting/)
We’re studying collapsed civilizations so that ours can endure climate change
Paleoclimatologists are digging into the connections between the collapse of Maya Civilization and extreme droughts
Over 1,000 years ago, droughts plagued the Yucatán peninsula. The Yucatán was home to the Classic Lowland Maya Civilization, of pyramids and the number zero fame. Droughts occurred intermittently for centuries, from 200 to 1100 CE. This is an era of Mayan history typically split in two – the Classic (200-800 CE) and Terminal Classic (800-1100 CE) Periods. The droughts coincided with the widespread collapse of the Maya Civilization around 1100 CE.
The first scientific evidence of these droughts was discovered by Dr. David Hodell and other researchers at the University of Florida in 1995, using ancient sediments from Lake Chinchancanab. Since then, the droughts have been a popular example of how extreme climate fluctuations can impact society. However, the magnitude of the droughts has remained an elusive and difficult question to answer.
Further, archaeological research has revealed a much more complex history of the Classic Maya reorganization than originally thought, suggesting the droughts were not the only factor destabilizing the Classic Maya. Archaeologists have found evidence that suggests the Maya Civilization experienced social changes including class conflicts, warfare, invasion, and ideological change. Twenty years after the discovery of drought evidence in the Yucatán, researchers returned to Lake Chinchancanab to investigate a seemingly simple question: just how dry was it?
In the pioneering Yucatán drought research, Hodell and his team sampled sediment cores from the bottom of Lake Chinchancanab that were thousands of years old. In the cores, they found layers of gypsum, a white chalky mineral often used in plaster or cement. Because gypsum can only form in a lake setting when a large amount of evaporation has occurred, the presence of gypsum in lake sediments is evidence of periods in the past when lake levels dropped significantly – signs of past drought events.
Interestingly, archaeological records show that these periods of droughts coincided with sociopolitical unrest in the region, including increased warfare and internal violence of the Lowland Maya. This sediment core from Lake Chinchancanab was the first quantitative link between climate and instability of the Classic Maya.
Hodell is now based at the University of Cambridge, but he and his group are still interested in human-climate connections. In this new study, his team went back to Lake Chinchancanab. The researchers are still focusing on the lake’s gypsum, but now they are looking at the ancient lake water that has been trapped in the gypsum since the droughts. The researchers developed new chemistry and modeling techniques to assess how extreme the Classic and Terminal Classic droughts were.
By measuring the chemistry of the trapped ancient lake water, they established ideas for what the chemistry and depth of the lake would have looked like during the droughts. With these constraints, the researchers developed a theoretical model of a lake. They tested different climate scenarios to see how the lake chemistry would respond, until the modeled lake chemistry matched the ancient water in the gypsum. It’s the scientific equivalent of turning light switches on and off until you figure out which one is the light you actually want.
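As a purely illustrative sketch of that kind of scenario testing (not the authors’ actual lake model), the short Python snippet below brute-forces rainfall-reduction scenarios through a made-up forward model until the modeled value matches a made-up “observed” value; every function name and number in it is a placeholder.

def modeled_lake_value(rainfall_reduction):
    # Hypothetical stand-in for the lake model: returns a lake-chemistry value
    # for a given fractional reduction in rainfall relative to today.
    modern_value = 1.0     # placeholder for today's lake water
    sensitivity = 8.0      # placeholder response per unit of rainfall lost
    return modern_value + sensitivity * rainfall_reduction

observed_ancient_value = 5.0   # placeholder for the value measured in the gypsum water

# Flip the "switches": try 0-90 percent rainfall reductions and keep the closest match.
best = min((r / 100 for r in range(0, 91)),
           key=lambda r: abs(modeled_lake_value(r) - observed_ancient_value))
print(f"Best-fitting scenario: about {best:.0%} less rainfall than today")

The real study used a full lake model rather than a one-line formula, but the trial-and-error logic is the same.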
The researchers ultimately found that rainfall decreased by 50 percent on average compared to today, and by as much as 70 percent during the most intense drought conditions. Humidity decreased by two to seven percent. That decrease in rainfall is the equivalent of Seattle becoming as dry as Tucson. “We don’t really know what changes in relative humidity might be in that past, because we never really had a tool to constrain it before,” says Thomas Bauska, a co-author of the study and researcher at Cambridge University. “[The results] do tell us that the Yucatán was experiencing dry season conditions during a much longer period of the year.”
These new results and techniques can open many doors. Much of paleoclimate research relies on qualitatively studying past climate records rather than measuring past climate. It’s useful, but this approach prevents us from asking questions precise quantitative data can raise. It’s the difference between saying “wow, it was really dry in 1100 CE” and saying “there was a 50-70 percent decrease in rainfall compared to today.”
For example, we can learn a lot from the ancient Maya about human resilience and adaptation to climatic extremes. Previous studies in this region showed that during the first Classic Period droughts, the Maya adapted their agricultural practices by rotating their crops to maize (corn) varieties that required less water, but were unsuccessful at adapting in later droughts. But these results were based on qualitative paleoclimate records; hopefully providing more exact estimates of drought intensity will lead to a better understanding of how the Classic Maya reacted in the face of extreme climatic change.
This civilization flourished in the not too distant past: the demise of the Classic Maya occurred around 900 – 1200 CE (though this collapse doesn’t mean the Maya disappeared – the remaining population reorganized and formed new communities). Cambridge University, where this research was done, was founded in 1209 CE. But there’s still so much that scientists don’t know about Mayan history. More concretely understanding the past climate changes of this region is one monumental step toward understanding how the Maya interacted with their environment.
(For the source of this, and many other interesting articles, please visit: https://massivesci.com/articles/mayan-empire-collapse-drought-climate-change/)
Ancient DNA discovery reveals previously unknown population of Native Americans
A few years ago the fossilized remains of a baby girl were uncovered in a harsh and isolated part of central Alaska. The remains were dated at 11,500 years old, and a new DNA study has now revealed not only an incredible insight into the origins of human migration into North America, but also the existence of a previously undiscovered population of humans that have been named “Ancient Beringians”.
The conventional theory about how humans migrated into the Americas suggests that sometime between 15,000 and 30,000 years ago, humans wandered from Asia into North America across a land bridge called Beringia that connected the two continents.
This latest discovery reveals a distinctive and previously undiscovered human lineage that surprised researchers, who were expecting to find a genetic profile matching northern Native American people. The study of this ancient child’s DNA pointed to an entirely new population of people, separate from those that ultimately spread throughout the rest of North America.
The researchers suggest two possible theories to explain this new lineage. Either two separate groups of people crossed the land bridge into the Americas over 15,000 years ago, or one group crossed, and then split into two entirely independent populations. Closer genetic sequencing suggests the latter outcome is the most likely, but why and how this Ancient Beringian population remained so genetically isolated and distinct for so many subsequent years remains a mystery.
The study also posits that a type of “back migration” occurred, possibly around 6,000 years ago, as northern Native American populations spread back up into Alaska and either absorbed or replaced the Beringian population, resulting in a distinct Alaskan native population called the Athabascan.
“There is very limited genetic information about modern Alaska Athabascan people,” says Ben Potter, one of the lead authors on the study. “These findings create opportunities for Alaska Native people to gain new knowledge about their own connections to both the northern Native American and Ancient Beringian people.”
The new study was published in the journal Nature.
Source: University of Alaska Fairbanks
(For the source of this, and many other interesting articles, please see: https://newatlas.com/ancient-dna-native-american-migration-beringian/52831/)
Just Months of American Life Change the Microbiome
++++++++++
Ancient mummy DNA reveals surprises about genetic origins of Egyptians
The international team of scientists, led by researchers from the University of Tuebingen and the Max Planck Institute for the Science of Human History in Jena, sampled 151 mummified remains from a site called Abusir el-Meleq in Middle Egypt along the Nile River. The samples dated from 1400 BCE to 400 CE and were subjected to a new high-throughput DNA sequencing technique that allowed the team to recover full genome-wide datasets from three individuals and mitochondrial genomes from 90 individuals.
“We wanted to test if the conquest of Alexander the Great and other foreign powers has left a genetic imprint on the ancient Egyptian population,” explains one of the lead authors of the study, Verena Schuenemann.
In 332 BCE, for example, Alexander the Great and his army tore through Egypt. Interestingly, the team found no genetic trace of Alexander the Great’s conquest, nor of any other foreign power that came through Egypt in the 1,300-year timespan studied.
“The genetics of the Abusir el-Meleq community did not undergo any major shifts during the 1,300 year timespan we studied,” says Wolfgang Haak, group leader at the Max Planck Institute, “suggesting that the population remained genetically relatively unaffected by foreign conquest and rule.”
They found that ancient Egyptians were closely related to Anatolian and Neolithic European populations, as well as showing strong genetic traces from the Levant area of the Near East (Turkey, Lebanon).
(To read the full article visit: https://newatlas.com/ancient-egyptian-mummy-dna-study/49792/)
++++++++++
North Sentinel Island
The Sentinelese are among the last people worldwide to remain virtually untouched by modern civilization.
2009 NASA image of North Sentinel Island; the island’s protective fringe of coral reefs can be seen clearly.
North Sentinel Island is one of the Andaman Islands, which includes South Sentinel Island, in the Bay of Bengal. It is home to the Sentinelese who, often violently, reject any contact with the outside world, and are among the last people worldwide to remain virtually untouched by modern civilization. As such, only limited information about the island is known.
Nominally, the island belongs to the South Andaman administrative district, part of the Indian union territory of Andaman and Nicobar Islands.[8] In practice, Indian authorities recognise the islanders’ desire to be left alone and restrict their role to remote monitoring; the Sentinelese are not even prosecuted for killing non-Sentinelese people.[9][10] Thus the island can be considered a sovereign entity under Indian protection.
(Source: https://en.wikipedia.org/wiki/North_Sentinel_Island)
++++++++++
A paper recently published in the International Journal of Astrobiology asks a fascinating question: “Would it be possible to detect an industrial civilization in the geological record?” Put another way, “How do we really know our civilization is the only one that’s ever been on Earth?” The truth is, we don’t. Think about it: The earliest evidence we have of humans is from 2.6 million years ago, the Quaternary period. Earth is 4.54 billion years old. That leaves 4,537,400,000 years unaccounted for, plenty of time for evidence of an earlier industrial civilization to disappear into dust.
The paper grew out of a conversation between co-authors Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, and astrophysics professor Adam Frank. (Frank recalls the exchange in an excellent piece in The Atlantic.) Considering the possible inevitability of any planet’s civilization destroying the environment on which it depends, Schmidt suddenly asked, “Wait a second. How do you know we’re the only time there’s been a civilization on our own planet?”
Schmidt and Frank recognize the whole question is a bit trippy, writing, “While much idle speculation and late night chatter has been devoted to this question, we are unaware of previous serious treatments of the problem of detectability of prior terrestrial industrial civilizations in the geologic past.”
There’s a thought-provoking paradox to consider here, too, which is that the longest-surviving civilizations might be expected to be the most sustainable, and thus leave less of a footprint than shorter-lived ones. So the most successful past civilizations would leave the least evidence for us to discover now. Hm.
Earlier humans, or…something else?
One of the astounding implications of the authors’ question is that it would mean — at least as far as we can tell from the available geologic record — that an earlier industrial civilization could not have been human, or at least not Homo sapiens or our cousins. We appeared only about 300,000 years back. So anyone else would have to have been some other intelligent species for which no evidence remains, and that we thus know nothing about. Schmidt calls the notion of a previous non-human civilization the “Silurian hypothesis,” named for the brainy reptiles featured in a 1970 episode of Doctor Who.
Wouldn’t there be fossils?
Well, no. “The fraction of life that gets fossilized is always extremely small and varies widely as a function of time, habitat and degree of soft tissue versus hard shells or bones,” says the paper, noting further that, even for dinosaurs, there are only a few thousand nearly complete specimens. Chillingly, “species as short-lived as Homo sapiens (so far) might not be represented in the existing fossil record at all.”
(For full article visit: https://bigthink.com/robby-berman/is-human-civilization-earths-first)
++++++++++
She was wide awake and it was nearly two in the morning. When asked if everything was alright, she said, “Yes.” Asked why she couldn’t get to sleep she said, “I don’t know.” Neuroscientist Russell Foster of Oxford might suggest she was exhibiting “a throwback to the bi-modal sleep pattern.” Research suggests we used to sleep in two segments with a period of wakefulness in-between.
A. Roger Ekirch, historian at Virginia Tech, uncovered our segmented sleep history in his 2005 book At Day’s Close: A Night in Time’s Past. There’s very little direct scientific research on sleep done before the 20th century, so Ekirch spent years going through early literature, court records, diaries, and medical records to find out how we slumbered. He found over 500 references to first and second sleep going all the way back to Homer’s Odyssey. “It’s not just the number of references—it is the way they refer to it as if it was common knowledge,” Ekirch tells BBC.
“He knew this, even in the horror with which he started from his first sleep, and threw up the window to dispel it by the presence of some object, beyond the room, which had not been, as it were, the witness of his dream.” — Charles Dickens, Barnaby Rudge (1840)
Here’s a suggestion for dealing with depression from English ballad ‘Old Robin of Portingale’:
“And at the wakening of your first sleepe/You shall have a hott drinke made/And at the wakening of your next sleepe/Your sorrowes will have a slake.”
Two-part sleep was practiced into the 20th century by people in Central America and Brazil and is still practiced in areas of Nigeria.
(Photo: Alex Berger)
Night split in half
Segmented sleep—also known as broken sleep or biphasic sleep—worked like this:
- First sleep or dead sleep began around dusk, lasting for three to four hours.
- People woke up around midnight for a few hours of activity sometimes called “the watching.” They used it for things like praying, chopping wood, socializing with neighbors, and sex. A character in Chaucer’s 14th-century Canterbury Tales posited that the lower classes had more children because they used the waking period for procreation. In fact, some doctors recommended it for making babies. Ekirch found a doctor’s reference from 16th-century France that said the best time to conceive was not upon first going to bed, but after a restful first sleep, when it was likely to lead to “more enjoyment” and when lovers were more likely to “do it better.”
- “Second sleep,” or morning sleep, began after the waking period and lasted until morning.
Why and when it ended
Given that we spend a third of our lives in slumber, it is odd that so little is known about our early sleep habits, though Ekirch says that writings prove people slept that way for thousands of years – if for no other reason than that someone had to wake in the middle of the night to tend fires and stoves.
Author Craig Koslofsky suggests in Evening’s Empire that before the 18th century, the wee hours beyond the home were the domain of the disreputable, and so the watching was all the nighttime activity anyone wanted. With the advent of modern lighting, though, there was an explosion in all manner of nighttime activity, and it ultimately left people exhausted. Staying up all night and sleepwalking through the day came to be viewed as distastefully self-indulgent, as noted in this advice for parents from an 1825 medical journal found by Ekirch: “If no disease or accident there intervene, they will need no further repose than that obtained in their first sleep, which custom will have caused to terminate by itself just at the usual hour. And then, if they turn upon their ear to take a second nap, they will be taught to look upon it as an intemperance not at all redounding to their credit.” Coupled with the desire for efficiency promoted by industrialization, the watch was increasingly considered a pointless disruption of much-needed rest.
The rise of insomnia
Intriguingly, right about the time accounts of first sleep and second sleep began to wane, references to insomnia began appearing. Foster isn’t the only one who wonders if this isn’t a biological response to un-segmented sleep. Sleep psychologist Gregg Jacobs tells BBC, “For most of evolution we slept a certain way. Waking up during the night is part of normal human physiology.” He also notes that the watch was often a time for reflection and meditation that we may miss. “Today we spend less time doing those things,” he says. “It’s not a coincidence that, in modern life, the number of people who report anxiety, stress, depression, alcoholism and drug abuse has gone up.” It may also not be a coincidence, though, that we don’t die at 40 anymore.
Subjects in an experiment in the 1990s gradually settled themselves into bi-phasic sleep after being kept in darkness 10 hours a day for a month, so it may be the way we naturally want to sleep. But is it the healthiest way?
Science says we’re doing it right, right now
Not everyone restricts their rest to a full night of sleep. Siestas are popular in various places, and there are geniuses who swear by short power naps throughout a day. Some have no choice but to sleep in segments, such as parents of infants and shift workers.
But, according to sleep specialist Timothy A. Connolly of Center of Sleep Medicine at St. Luke’s Episcopal Hospital in Houston speaking to Everyday Health, “Studies show adults who consistently sleep seven to eight hours every night live longest.” Some people do fine on six hours, and some need 10, but it needs to be in one solid chunk. He says that each time sleep is disrupted, it impacts every cell, tissue, and organ, and the chances go up for a range of serious issues including stroke, heart disease, obesity and mood disorders.
Modern science is pretty unanimous: Sleeping a long, solid chunk each night gives you the best chance of living a long life, natural or not.
(Article source: https://bigthink.com/robby-berman/for-1000s-of-years-we-went-to-bed-twice-a-night-2)
++++++++++
The relationship between the mind and the brain is a mystery that is central to how we understand our very existence as sentient beings. Some say the mind is strictly a function of the brain — consciousness is the product of firing neurons. But some strive to scientifically understand the existence of a mind independent of, or at least to some degree separate from, the brain.
The peer-reviewed scientific journal NeuroQuantology brings together neuroscience and quantum physics — an interface that some scientists have used to explore this fundamental relationship between mind and brain.
An article published in the September 2017 edition of NeuroQuantology reviews and expands upon the current theories of consciousness that arise from this meeting of neuroscience and quantum physics.
Dr. Dirk K.F. Meijer, a professor at the University of Groningen in the Netherlands, hypothesizes that consciousness resides in a field surrounding the brain. This field is in another dimension. It shares information with the brain through quantum entanglement, among other methods. And it has certain similarities with a black hole.
This field may be able to pick up information from the Earth’s magnetic field, dark energy, and other sources. It then “transmits wave information into the brain tissue, that … is instrumental in high-speed conscious and subconscious information processing,” Meijer wrote.
In other words, the “mind” is a field that exists around the brain; it picks up information from outside the brain and communicates it to the brain in an extremely fast process.
He described this field alternately as “a holographic structured field,” a “receptive mental workspace,” a “meta-cognitive domain,” and the “global memory space of the individual.”
Extremely rapid functions of the brain suggest it processes information through a mechanism not yet revealed.
There’s an unsolved mystery in neuroscience called the “binding problem.” Different parts of the brain are responsible for different things: some parts work on processing color, some on processing sound, et cetera. But, it somehow all comes together as a unified perception, or consciousness.
Information comes together and interacts in the brain more quickly than can be explained by our current understanding of neural transmissions in the brain. It thus seems the mind is more than just neurons firing in the brain.
(To read the entire article visit: https://m.theepochtimes.com/uplift/a-new-theory-of-consciousness-the-mind-exists-as-a-field-connected-to-the-brain_2325840.html)
++++++++++
Dogon dwelling on the Bandiagara Escarpment in Mali, West Africa
Dogon astronomical beliefs
Starting with the French anthropologist Marcel Griaule, several authors have claimed that Dogon traditional religion incorporates details about extrasolar astronomical bodies that could not have been discerned from naked-eye observation. This idea has entered the New Age and ancient astronaut literature as evidence that extraterrestrial aliens visited Mali in the distant past.
https://en.wikipedia.org/wiki/Dogon_people
++++++++++
Cliff Palace, Mesa Verde, Colorado, USA
(https://strangemaps.files.wordpress.com/2007/10/england2410_468x8161.jpg)
In Great Britain as in the US, two cultural sub-nations identify themselves (and the other) as North and South. The US’s North and South are quite clearly delineated, by the states’ affiliations during the Civil War (which in the east coincides with the Mason-Dixon line). That line has become so emblematic that the US South is referred to as ‘Dixieland’.
There’s no similarly precise border in Great Britain, maybe because the ‘Two Englands’ never fought a civil war against each other. There is, however, a place used as shorthand for describing the divide, with the rougher, poorer North and wealthier, middle-to-upper-class South referring to each other as ‘on the other side of the Watford Gap’.
Not to be confused with the sizeable town of Watford in Hertfordshire, Watford Gap is a small village in Northamptonshire. It was named for the eponymous hill pass that has facilitated travel east-west and north-south since at least Roman times (cf. Watling Street, now passing through it as the A5 road). Other routes passing through the Gap are the West Coast Main Line railway, the Grand Union Canal and the M1, the UK’s main North-South motorway.
In olden times, the Gap was the location of an important coaching inn (which operated until its closure around 2000 as the Watford Gap Pub), and nowadays it has the modern equivalent in a service station on the M1 – which happened to be the first motorway service station in the UK.
Because of its function as a crossroads, its location on the main road and its proximity to the perceived ‘border’ between North and South, the Watford Gap has become the colloquial separator between both. Other such markers don’t really exist, so the border between North and South is quite vague. Until now, that is.
It turns out the divide is more between the Northwest and the Southeast: on this map, the line (which, incidentally, does cross the Watford Gap – somewhere in between Coventry and Leicester) runs from the estuary of the Severn (near the Welsh-English border) to the mouth of the Humber. Which means that a town like Worcester is firmly in the North, although it’s much farther south than the ‘southern’ town of Lincoln.
At least, that’s the result of a Sheffield University study, which ‘divided’ Britain according to statistics about education standards, life expectancy, death rates, unemployment levels, house prices and voting patterns. The result splits the Midlands in two. “The idea of the Midlands region adds more confusion than light,” the study says.
The line divides Britain according to health and wealth, separating upland from lowland Britain, Tory from Labour Britain, and indicates a £100,000 house price gap – and a year’s worth of difference in life expectancy (in case you’re wondering: those in the North live a year less than those in the South).
The line does not take into account ‘pockets of wealth’ in the North (such as the Vale of York) or ‘pockets of poverty’ in the South, especially in London.
The map was produced for the Myth of the North exhibition at the Lowry arts complex in Manchester, and was mentioned recently in the Daily Mail. I’m afraid I don’t have an exact link to the article, but here is the page at the Lowry for the aforementioned exhibition.
(This article from: https://bigthink.com/strange-maps/193-the-border-between-the-two-englands)
Wendish in Japan
You may wonder how I came to search for traces of Wendish as far afield as Japan. It happened quite accidentally. I became curious about whether there was a linguistic connection between ancient Japanese and Wendish in the mid-1980s, when reading a biography of an American who had grown up in Japan. He mentions that a very ancient Japanese sword is called meich in Japanese. Surprisingly, meich or mech has the same meaning also in Wendish. How did Wends reach Japan, and when? I decided to find out, first, whether this particular word, meich, really exists in Japanese, and, if it does, at what point in the past Wendish speakers could have had contact with the Japanese islands.
I describe this in more detail, along with my tentative conclusions about the origins of Wendish in Japanese and its relation to the Ainu language, in the 5th installment of my article, The Extraordinary History of a Unique People, published in the Glasilo magazine, Toronto, Canada. Anyone interested will find all the already published installments of this article, including the 5th installment, on my still not quite organized website, www.GlobalWends.com. In the next, winter issue of Glasilo, i.e., in the 6th installment of my article, I will report my discoveries and conclusions with regard to the origins of Wendish in the Ainu language, the language of the aboriginal white population of Japan.
I started my search for the word meich by buying Kenkyusha’s New School Japanese-English Dictionary. Unfortunately, I had acquired a dictionary meant for ordinary students and meich is not mentioned in it. Obviously, I should have bought a dictionary of Old Japanese instead, in which ancient terms are mentioned. Nevertheless, to my amazement, I found in Kenkyusha’s concise dictionary, instead of meich, many other Wendish words and cognates, which I am quoting below in my List.
I found it intriguing that the present forms of Japanese words with clearly Wendish roots show that Chinese and Korean immigrants to the islands were trying to learn Wendish, not vice versa. This indicates that the original population of Japan was Caucasian and that the influx of the Asian population was, at least at first, gradual. Today, after over 3,000 years of Chinese and Korean immigration, about half of the Japanese vocabulary is based on Chinese.
There is another puzzle to be solved. Logically, one would expect the language of the white aborigines of Japan, the Ainu – also deeply influenced by Wendish – to have been the origin of Wendish in modern Japanese. Yet, considering the makeup of the Wendish vocabulary occurring in Japanese, Ainu does not seem to have played any part in the formation of modern Japanese, or only a negligible one. Wendish vocabulary in Japanese points to a different source. It seems to have been the result of a second, perhaps even a third, Wendish migration wave into the islands at a much later date. The Ainu seem to have arrived as early as the Ice Age, when present-day Japan was still a part of the Asian continent. They remained hunters and gatherers until their final demise in the mid-20th century. They retained their Ice Age religion, which regarded everything in the universe and on earth as a spiritual entity to be respected and venerated – including rocks and stars. Wendish words in Japanese, however, mirror an evolved megalithic agricultural culture and a sun-venerating religion.
A list of all the Wendish cognates I have discovered in Kenkyusha’s dictionary is on my website, under the heading List of Wendish in Japanese. It is by no means a complete list. My Japanese is very limited, based solely on Kenkyusha’s dictionary and some introductory lessons on Japanese culture, history, language, literature and legends from a Japanese friend of mine with an authentic Wendish name, Hiroko, pronounced in the Tokyo dialect, as in Wendish, shiroko, wide, all-encompassing. Besides, although I have a university-level knowledge of Wendish, I do not possess the extensive Wendish vocabulary necessary to discover most of the Wendish words whose meaning may have shifted somewhat over thousands of passing years, complicated by the arrival of a new population whose language had nothing in common with Wendish.
Future, more thorough and patient researchers – whose mother-tongue is Wendish but who also have a thorough knowledge of Japanese – will, no doubt, find a vastly larger number of Wendish cognates in Japanese than I did.
(For more information visit: https://www.globalwends.com/introduction.html)
Spaniard raised by wolves disappointed with human life
Marcos Rodríguez Pantoja, who lived among animals for 12 years, finds it hard just to get through the winter
Marcos Rodríguez Pantoja was once the “Mowgli” of Spain’s Sierra Morena mountain range, but life has changed a lot since then. Now the 72-year-old lives in a small, cold house in the village of Rante, in the Galician province of Ourense. This past winter has been hard for him, and a violent cough interrupts him often as he speaks.
His last happy memories were of his childhood with the wolves. The wolf cubs accepted him as a brother, while the she-wolf who fed him taught him the meaning of motherhood. He slept in a cave alongside bats, snakes and deer, listening to them as they exchanged squawks and howls. Together they taught him how to survive. Thanks to them, Rodríguez learned which berries and mushrooms were safe to eat.
Today, the former wolf boy, who was 19 when he was discovered by the Civil Guard and ripped away from his natural home, struggles with the coldness of the human world. It’s something that didn’t affect him so much when he was running around barefoot and half-naked with the wolves. “I only wrapped my feet up when they hurt because of the snow,” he remembers. “I had such big calluses on my feet that kicking a rock was like kicking a ball.”
After he was captured, Rodríguez’s world fell apart and he has never been able to fully recover. He’s been cheated and abused, exploited by bosses in the hospitality and construction industries, and never fully reintegrated into the human tribe. But at least his neighbors in Rante accept him as “one of them.” And now, the environmental group Amig@s das Arbores is raising money to insulate Rodríguez’s house and buy him a small pellet boiler – things that his meager pension cannot cover.
Rodríguez is one of the few documented cases in the world of a child being raised by animals away from humans. He was born in Añora, in Córdoba province, in 1946. His mother died giving birth when he was three years old, and his father left to live with another woman in Fuencaliente. Rodríguez only remembers abuse during this period of his life.
They took him to the mountains to replace an old goatherd who cared for 300 animals. The man taught him the use of fire and how to make utensils, but then died suddenly or disappeared, leaving Rodríguez completely alone around 1954, when he was just seven years old. When authorities found Rodríguez, he had swapped words for grunts. But he could still cry. “Animals also cry,” he says.
He admits that he has tried to return to the mountains but “it is not what it used to be,” he says. Now the wolves don’t see him as a brother anymore. “You can tell that they are right there, you hear them panting, it gives you goosebumps … but it’s not that easy to see them,” he explains. “There are wolves and if I call out to them they are going to respond, but they are not going to approach me,” he says with a sigh. “I smell like people, I wear cologne.” He was also sad to see that there were now cottages and big electric gates where his cave used to be.
His experience has been the subject of various anthropological studies, books by authors such as Gabriel Janer, and the 2010 film Among Wolves (Entrelobos) by Gerardo Olivares. He insists that life has been much harder since he was thrown back into the modern world. “I think they laugh at me because I don’t know about politics or soccer,” he said one day. “Laugh back at them,” his doctor told him. “Everyone knows less than you.”
He has encountered many bad people along the way, but there have also been acts of solidarity. The forest officer Xosé Santos, a member of Amig@s das Arbores, organizes sessions at schools where Rodríguez can talk about his love for animals and the importance of caring for the environment. “It’s amazing how he enthralls the children with his life experience,” says Santos. Children, after all, are the humans whom Rodríguez feels most comfortable with.
(From: https://elpais.com/elpais/2018/03/28/inenglish/1522237746_629465.html?id_externo_rsoc=FB_CM)
English version by Melissa Kitson.
Discovered: 300,000-Year-Old Tools and Paints That Point to Early Humanity’s Cleverness
Findings out of Kenya offer a new understanding of when early humans got organized and started trading.
A team of anthropologists has determined that humanity has been handy for far longer than previously realized. These researchers discovered tools in East Africa that date back to around 320,000 years ago, far earlier than scientists previously thought humans were using such items.
Coming from the Olorgesailie geologic formation in southern Kenya, the findings, published in Science, show how the collection and creation of various colors through a pigmentation process was crucial to early human society. In addition to color creation, the team also found a variety of stone tools.
The earliest human life found in Olorgesailie dates back 1.2 million years. The question is, when did Homo sapiens start becoming a collective society? When did the transition occur, and what did it look like? That date has generally been seen as around 100,000 years ago, thanks to evidence such as cave paintings in Ethiopia. However, the findings at Olorgesailie, where famed paleoanthropologists Louis and Mary Leakey also worked, show evidence of a social contract between geographically distant groups.
Lithuanian, the most conservative of all Indo-European languages, is riddled with references to bees.
In mid-January, the snow made the little coastal town of Šventoji in north-west Lithuania feel like a film set. Restaurants, shops and wooden holiday cabins all sat silently with their lights off, waiting for the arrival of spring.
I found what I was looking for on the edge of the town, not far from the banks of the iced-over Šventoji river and within earshot of the Baltic Sea: Žemaitiu alka, a shrine constructed by the Lithuanian neo-pagan organisation Romuva. Atop a small hillock stood 12 tall, thin, slightly tapering wooden figures. The decorations are austere but illustrative: two finish in little curving horns; affixed to the top of another is an orb emitting metal rays. One is adorned with nothing but a simple octagon. I looked down to the words carved vertically into the base and read ‘Austėja’. Below it was the English word: ‘bees’.
This was not the first time I’d encountered references to bees in Lithuania. During previous visits, my Lithuanian friends had told me about the significance of bees to their culture.
Lithuanians don’t speak about bees grouping together in a colony like English-speakers do. Instead, the word for a human family (šeimas) is used. In the Lithuanian language, there are separate words for death depending on whether you’re talking about people or animals, but for bees – and only for bees – the former is used. And if you want to show a new-found Lithuanian pal what a good friend they are, you might please them by calling them bičiulis, a word roughly equivalent to ‘mate’, which has its root in bitė – bee. In Lithuania, it seems, a bee is like a good friend and a good friend is like a bee.
Seeing the shrine in Šventoji made me wonder: could all these references be explained by ancient Lithuanians worshipping bees as part of their pagan practices?
Lithuania has an extensive history of paganism. In fact, Lithuania was the last pagan state in Europe. Almost 1,000 years after the official conversion of the Roman Empire facilitated the gradual spread of Christianity, the Lithuanians continued to perform their ancient animist rituals and worship their gods in sacred groves. By the 13th Century, modern-day Estonia and Latvia were overrun and forcibly converted by crusaders, but the Lithuanians successfully resisted their attacks. Eventually, the state gave up paganism of its own accord: Grand Duke Jogaila converted to Catholicism in 1386 in order to marry the Queen of Poland.
This rich pagan history is understandably a source of fascination for modern Lithuanians – and many others besides. The problem is that few primary sources exist to tell us what Lithuanians believed before the arrival of Christianity. We can be sure that the god of thunder Perkūnas was of great importance as he is extensively documented in folklore and song, but most of the pantheon is based on guesswork. However, the Lithuanian language may provide – not proof, exactly, but clues, tantalising hints, about those gaps in the country’s past.
In Kaunas, Lithuania’s second-largest city, I spoke to Dalia Senvaitytė, a professor of cultural anthropology at Vytautas Magnus University. She was sceptical about my bee-worshipping theory, telling me that there may have been a bee goddess by the name of Austėja, but she’s attested in just one source: a 16th-Century book on traditional Lithuanian beliefs written by a Polish historian.
It’s more likely, she said, that these bee-related terms reflect the significance of bees in medieval Lithuania. Beekeeping, she explained, “was regulated by community rules, as well as in special formal regulations”. Honey and beeswax were abundant and among the main exports, I learned, which is why their production was strictly controlled.
But the fact that these references to bees have been preserved over hundreds of years demonstrates something rather interesting about the Lithuanian language: according to the Lithuanian Quarterly Journal of Arts and Sciences, it’s the most conservative of all living Indo-European languages. While its grammar, vocabulary and characteristic sounds have changed over time, they’ve done so only very slowly. For this reason, the Lithuanian language is of enormous use to researchers trying to reconstruct Proto-Indo-European, the single language, spoken around four to five millennia ago, that was the progenitor of tongues as diverse as English, Armenian, Italian and Bengali.
All these languages are related, but profound sound shifts that have gradually taken place have made them distinct from one another. You’d need to be a language expert to see the connection between English ‘five’ and French cinq – let alone the word that Proto-Indo-Europeans are thought to have used, pénkʷe. However, that connection is slightly easier to make out from the Latvian word pieci, and no trouble at all with Lithuanian penki. This is why famous French linguist Antoine Meillet once declared that “anyone wishing to hear how Indo-Europeans spoke should come and listen to a Lithuanian peasant”. [Editor’s note: The little finger, or pinky finger, is also known as the fifth digit or just pinky.]
Lines can be drawn to other ancient languages too, even those that are quite geographically distant. For example, the Lithuanian word for castle or fortress – pilis – is completely different from those used by its non-Baltic neighbours, but is recognisably similar to the Ancient Greek word for town, polis. Surprisingly, Lithuanian is also thought to be the closest surviving European relative to Sanskrit, the oldest written Indo-European language, which is still used in Hindu ceremonies. [Editor’s note: The strength of the castle or fortress is similar, in some ways, to the strength of the Police.]
This last detail has led to claims of similarities between Indian and ancient Baltic cultures. A Lithuanian friend, Dovilas Bukauskas, told me about an event organised by local pagans that he attended. It began with the blessing of a figure of a grass snake – a sacred animal in Baltic tradition – and ended with a Hindu chant.
I asked Senvaitytė about the word gyvatė. This means ‘snake’, but it shares the same root with gyvybė, which means ‘life’. The grass snake has long been a sacred animal in Lithuania, revered as a symbol of fertility and luck, partly for its ability to shed its skin. A coincidence? Perhaps, but Senvaitytė thinks in this case probably not.
The language may also have played a role in preserving traditions in a different way. After Grand Duke Jogaila took the Polish throne in 1386, Lithuania’s gentry increasingly adopted not only Catholicism, but also the Polish language. Meanwhile, rural Lithuanians were much slower to adopt Christianity, not least because it was almost always preached in Polish or Latin. Even once Christianity had taken hold, Lithuanians were reluctant to give up their animist traditions. Hundreds of years after the country had officially adopted Christianity, travellers through the Lithuanian countryside reported seeing people leave bowls of milk out for grass snakes, in the hope that the animals would befriend the community and bring good luck.
Similarly, bees and bee products seem to have retained importance, especially in folk medicine, for their perceived healing powers. Venom from a bee was used to treat viper bites, and one treatment for epilepsy apparently recommended drinking water with boiled dead bees. But only, of course, if the bees had died from natural causes.
But Lithuanian is no longer exclusively a rural language. The last century was a tumultuous one, bringing war, industrialisation and political change, and all of the country’s major cities now have majorities of Lithuanian-speakers. Following its accession to the EU in 2004, the country is now also increasingly integrated with Europe and the global market, which has led to the increasing presence of English-derived words, such as alternatyvus (alternative) and prioritetas (priority).
Given Lithuania’s troubled history, it’s in many ways amazing the language has survived to the present day. At its peak in the 14th Century, the Grand Duchy of Lithuania stretched as far as the Black Sea, but in the centuries since, the country has several times disappeared from the map entirely.
It’s too simplistic to say that Lithuanian allows us to piece together the more mysterious stretches in its history, such as the early, pagan years in which I’m so interested. But the language acts a little like the amber that people on the eastern shores of the Baltic have traded since ancient times, preserving, almost intact, meanings and structures that time has long since worn away everywhere else.
And whether or not Austėja was really worshipped, she has certainly remained a prominent presence. Austėja is consistently among the top 10 most popular girls’ names in Lithuania. It seems that, despite Lithuania’s inevitable cultural and linguistic evolution, the bee will always be held in high esteem.
(www.bbc.com/travel/story/20180319-are-lithuanians-obsessed-with-bees)
++++++++++