An indigenous woman in Ciudad Panamá, Panamá. 

The debate over who arrived in the New World first is a contentious one. Their identities aside, nobody can quite decide how those first Americans traveled or how they dispersed once they arrived. But now, a new study published in Cell, illuminating the genetic history of some of those early travelers, reveals a unifying thread.

An international team of scientists announced recently that the majority of people in Central and South America can be linked to a single ancestral lineage of humans who journeyed across the Bering Strait at least 15,000 years ago. After their journey southward into the New World, this source population broke into at least three branches, which diversified and spread, some of them back toward the north.

Two of those branches are new to science. One is unexpectedly connected to the Clovis people — who were thought to be the first Americans until the early 2000s — whereas the other links ancient North Americans to people who lived in Southern Peru and Northern Chile at least 4,200 years ago.

“These [findings] are fascinating as they open new gateways into archeological and genetic research,” explains co-author and Harvard Ph.D. candidate Nathan Nakatsuka to Inverse. “It was previously not known that the Clovis culture extended into South America, and it is incredible that these people were able to migrate all the way through North, Central, and South America. In addition, the new migration into the Southern Andes was not previously known, and we are unsure what historical events led to this.”

The majority of Central and South American ancestry arrived from at least three different streams of people.

Nakatsuka and his colleagues analyzed DNA from 49 ancient individuals who once lived in what is now Belize, Brazil, the Central Andes, and the southernmost parts of Chile and Argentina and died between 10,900 and 8,600 years ago. The team worked with government agencies and indigenous people to identify the samples, extract powder from skeletal material, and extract the DNA necessary to create double-stranded DNA libraries.

The use of DNA is one of the most novel aspects of this research. When studying the migration of ancient peoples, scientists often have to rely on indirect evidence, such as old footprints or lice.

This broad dataset allowed the team to link genetic exchanges between people in North and South America and confirm the common origin of North, Central, and South Americans. The analysis made it clear that the original “source” population, fresh off the Bering Strait, diversified before they spread into South America.

What surprised the study authors most was the genetic connection they found between the Clovis culture and South America. About 13,000 years ago, the Clovis were distributed across North America. Though they were long thought to be the first Americans, findings of even older remains stripped them of that title. In the new paper, the team links DNA from a Clovis boy who lived in Montana about 12,800 years ago to some of the data set’s oldest individuals, who lived much farther south, in modern-day Belize, Chile, and Brazil.

“This [previously unknown gene flow event] suggests that, surprisingly, the genetic ancestry of people who produced the Clovis culture expanded further south,” explains first author and Max Planck Institute for the Science of Human History researcher Cosimo Posth, Ph.D. to Inverse. “However, this ancestry was replaced at least by 9,000 years ago from another lineage, which left a long-lasting population continuity until today, in multiple South American regions.”

The second previously unknown population links ancient individuals who lived on California’s Channel Islands to individuals who lived at least 4,200 years ago in Southern Peru and Northern Chile. Posth notes that “this might be linked to a population expansion in the region seen in the archeological record around that time.”

Clovis spearheads found in Iowa. 

Nakatsuka hopes the team’s research will stimulate further investigation into these genetic bonds and emphasizes the need for researchers to work respectfully with indigenous people. While strides have been made in the past two decades, archeology has a history of cultural imperialism.

“We hope the findings will facilitate greater collaboration and engagement with indigenous communities where the communities are deeply engaged and provide their insights to help drive the science and complement the studies with their own indigenous epistemologies,” Nakatsuka says.

“We must ensure that our studies benefit indigenous people, particularly those currently living in the areas near the ancient individuals from our studies.”

(For the balance of this article, plus a video, please visit the original source.)




Aldous Huxley’s book warns us of the dangers of mass media and passivity, and of how even an intelligent population can be driven to gladly choose dictatorship over freedom.

  • While other dystopias get more press, Brave New World offers us a nightmare world that we’ve moved steadily towards over the last century.
  • Author Aldous Huxley’s ideas on a light-handed totalitarian dictatorship stand in marked contrast to the popular image of a dictatorship that relies on force.

When most people think of what dystopia our society is sprinting towards, they tend to think of 1984, The Handmaid’s Tale, or The Hunger Games. These top-selling, well-known, and well-written titles are excellent warnings of worlds that could come to pass, and we would all do well to read them.

However, one lesser-known dystopian novel has done a much better job at predicting the future than these three books. Brave New World, written in 1931 by author, psychonaut, and philosopher Aldous Huxley, is well known but hasn’t quite had the pop-culture breakthrough that the other three did.

This is regrettable, as it offers us a detailed image of a dystopia that our society is not only moving towards but would be happy to have.

Good Ford!


For those who haven’t read it, Brave New World is the description of a nightmare society where everybody is perfectly happy all the time. This is assured through destroying the free will of most of the population using genetic engineering and Pavlovian conditioning, keeping everybody entertained continuously with endless distractions, and offering a plentiful supply of the wonder drug Soma to keep people happy if all else fails.

The World State is a dictatorship that strives to ensure order. It is managed by ten oligarchs who rely on an extensive bureaucracy to keep the world running. The typical person is conditioned to love their subservience and either be proud of the vital work they do or be relieved that they don’t have to worry about the problems of the world.

Global stability is ensured through the Fordist religion, which is based on the teachings of Henry Ford and Sigmund Freud and involves the worship of both men. The tenets of this faith encourage mass consumerism, sexual promiscuity, and avoiding unhappiness at all costs. The assembly line is praised as though it were a gift from God.

Huxley’s dystopia is especially terrifying in that the enslaved population absolutely loves its slavery. Even the characters who are smart enough to know what is going on (and why they should be concerned) are instead content with everything that is happening. Perhaps more terrifying than in other dystopian novels, in Brave New World there is truly no hope for change.

The similarities between the world of today and the world of the book are many, even if our technology hasn’t quite caught up yet.

Genetic Engineering

While the human assembly line described in the first part of the story is still a far-off fantasy, the basic concepts that make it work are already here. Today, people make choices to influence the genetic makeup of their children regularly.

Prenatal screening has given many parents the ability to decide whether or not they wish to carry a disabled fetus to term. In Iceland, this has resulted in the near eradication of new cases of Down syndrome in the country: almost 100% of detected cases lead to an abortion shortly after diagnosis.

Similarly, testing for a child’s sex before birth is a well-known procedure that has led to a wide gender gap in many countries. Less well known is the process of sperm sorting, which allows a couple to choose the sex of their child as part of the process of in-vitro fertilization.

The above examples suggest we’re open to soft eugenics already. Imagine what would happen if people could determine their child’s potential IQ before birth, or how rebellious they will be as a teenager. It would be difficult to suggest that the development of such technology would not be hailed as progress by those who could afford to use it. Huxley’s visions of a genetically perfected upper caste might be available soon.

As this article suggests, some choice in baby design is already here and more will be available soon.

Endless Distractions

The characters of Brave New World enjoy endless distractions between their hours at work. Various complex games have been invented, movies now engage all five senses, and there are even televisions at the feet of death beds. Nobody ever has to worry about being bored for long. The idea of enjoying solitude is taboo, and most people go out to parties every night.

In our modern society, most people genuinely can’t go thirty minutes without wanting to check their phones. We have, just as Huxley predicted, made it possible to abolish boredom and time for spare thoughts no matter where you are. This is already having measurable effects on our mental health and our brain structure.

Huxley wasn’t warning us against watching television or going to the movies occasionally; he says in his interview with Mike Wallace that TV can be harmless. He was warning against the constant barrage of distraction becoming more important in our lives than facing the problems that affect us. Given how stressful people find the idea of a tech-free day, and given that we take our pop culture so seriously that it was targeted for use by Russian bots, he might have been onto something.

Drugs: A gram is better than a damn!

Brave New World‘s favorite pill, Soma, is quite the drug. In small doses it causes euphoria; in moderate doses, it causes enjoyable hallucinations; and in large doses, it acts as a tranquilizer. It is probably a pharmacological impossibility, but Huxley’s concept of a society that pops pills to eradicate any vestige of negative feelings and escape the doldrums of the day is very real.

While it seems odd to say that we are moving towards Brave New World in an era when official policy is opposed to drug use, Huxley would suggest we consider that a blessing: a dictatorship that encouraged drug use to zonk out its population would be a powerful, if light-handed, one.

While today we have a war on drugs, it is not a war on all drugs. Anti-depressants, a powerful tool for the treatment of mental illness, are so popular that one in eight Americans is on them right now. This doesn’t include the large number of Americans on tranquilizers or anti-anxiety medications, or those who self-medicate with alcohol or increasingly legal marijuana.

These drugs aren’t quite Soma, but they bear a striking resemblance in function and use.

(For the balance of this article, please visit the original source.)


Ancient civilizations may have been more connected than previously thought

Energy consumption was used to measure the extent of globalization for early civilizations (Credit: ralwel/Depositphotos)

Ancient civilizations could have benefited from, and at times suffered from, belonging to an interconnected global economy, according to evidence presented in a newly published study. The international team behind the research hopes that the work could help present-day society learn from the mistakes of early globalism.

It is a sad but unavoidable fact that flourishing civilizations use up vast amounts of raw materials, and, subsequently, produce prodigious amounts of waste. By observing the amount of waste produced by an ancient society, researchers can estimate the amount of energy used, and attempt to track periods of growth, prosperity and decline.

This was the approach used in a new study, which attempted to determine whether historical civilizations ranging back 10,000 years were connected by a global economy. If this were the case, the fortunes of contemporary societies would be observed to rise and fall in tandem. This is known as synchrony.
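Synchrony in this sense is a measurable quantity: if two regions’ energy-use histories are expressed as time series, their tendency to rise and fall in tandem can be summarized with a correlation coefficient. A minimal sketch, using invented per-century proxy values rather than the study’s actual data:

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (statistics.pstdev(x) * statistics.pstdev(y))

# Invented energy-use proxies (e.g. dated trash deposits per century)
region_a = [1.0, 1.4, 2.1, 1.8, 1.2, 0.9]
region_b = [0.8, 1.1, 1.9, 1.6, 1.1, 0.7]

# A value near +1 means the two regions boomed and busted in tandem
print(round(pearson(region_a, region_b), 2))
```

Values near +1 indicate synchrony; values near zero suggest the regions’ fortunes moved independently.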

Joining an interdependent global network can bring significant benefits. These could include an increase in wealth from trade goods, as well as other resources that allow a society to increase its carrying capacity, or maximum population, beyond the limits of an isolated people.

However, it would also render the societies involved susceptible to the maladies of their partners. For example, open trade and movement of peoples could encourage the spread of disease, and lead to detrimental changes to a nation’s ecosystem and social system.

“The more tightly connected and interdependent we become, the more vulnerable we are to a major social or ecological crisis in another country spreading to our country,” said Rick Robinson, a postdoctoral assistant research scientist at the University of Wyoming, and co-author of the new study. “The more we are synced, the more we put all our eggs in one basket, the less adaptive to unforeseen changes we become.”

In the new study, researchers tracked the energy use of civilizations spread across the world using a combination of radiocarbon dating and historical records. Energy, in this case, refers to the amount of biomass that was converted into work and waste.

To determine the amount of energy used, the team carbon-dated the trash of ancient civilizations, including animal bones, charcoal, wood, and small seeds. The scientists were able to provide energy-use estimates for a diverse range of societies spanning from roughly 10,000 years in the past to 400 years ago.

The more recent historical records were used to provide a frame of reference for the estimates made by the radiocarbon dating technique.
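For readers curious about the mechanics, an uncalibrated radiocarbon age follows from simple exponential decay: the less carbon-14 remains in a bone or seed, the older it is. A rough sketch using the conventional Libby mean-life of 8,033 years (this is textbook radiocarbon arithmetic, not the study’s actual dating pipeline):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; the conventional value for radiocarbon ages

def radiocarbon_age(fraction_remaining):
    """Uncalibrated radiocarbon age from the fraction of C-14 left in a sample."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_remaining)

# Half the original C-14 left corresponds to one Libby half-life (~5,568 years)
print(round(radiocarbon_age(0.5)))
```

Raw ages like this are then calibrated against independent records, which is the role the historical records play in the study.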

It was discovered that there were significant levels of long-term synchrony regarding the booms and busts of ancient civilizations. This suggests that there was a greater level of early globalization than had previously been believed.

(For the balance of this article, please visit the original source.)


The healthiest end-of-day sleep is 6 to 8 hours, but not more. Or less. As for napping, it depends on how you want to wake up.

Rachel Calamusa.

It’s obvious that being exhausted is no fun unless you’re Keith Richards. For the rest of us, it’s clear we’re not at our best when we’re too tired, and it’s not much of a leap to understand it’s not a healthy state in which to live, especially for one’s cardiovascular system—heart issues and a greater incidence of stroke have both been associated with not getting enough sleep.

But how much sleep do you need to stay healthy? Depends on how you’re getting it. For some, it’s a matter of adjusting one’s bedtime habits and schedule to get the best rest. For others—people with excessively long commutes or those whose schedules or dispositions preclude extended stretches in bed—it’s about finding the most effective way to nap. Regardless, there are right ways and wrong ways to recharge your tired self.

The sweet spot for sleeping at the end of the day

At a recent European Society of Cardiology conference, researchers at the Onassis Cardiac Surgery Centre in Athens, Greece, identified the cardiovascular sweet spot for end-of-day sleep. (We’re phrasing it that way to accommodate people who work regular night shifts.) It’s between six and eight hours a night.

To arrive at their conclusion, the researchers performed a meta-analysis of 11 previous sleep studies published in the last five years, using data collected from 1,000,541 subjects. The subjects were sorted into three groups. The reference group slept six to eight hours a night. Another group slept less than six hours, and the final group slept more than eight.

It turned out that those getting either less than six hours of sleep or more than eight were at significantly higher risk of developing or dying from coronary artery disease or stroke over the course of the next decade. (In the study, the average follow-up was 9.3 years.)

  • Subjects sleeping less than 6 hours were 11% more likely to develop cardiovascular issues
  • Subjects sleeping more than 8 hours were 33% more likely to develop cardiovascular issues
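Those percentages are relative risks measured against the six-to-eight-hour reference group, and converting between incidence rates and percent increases is simple arithmetic. A sketch with invented incidence numbers chosen to match the reported figures (the study itself reports only the relative risks):

```python
def risk_increase_pct(exposed_rate, reference_rate):
    """Percent increase in risk relative to a reference group."""
    return (exposed_rate / reference_rate - 1.0) * 100.0

# Invented illustrative incidences over a decade of follow-up
reference_rate = 0.100    # sleeping 6-8 hours
short_sleep_rate = 0.111  # ~11% higher, as reported
long_sleep_rate = 0.133   # ~33% higher, as reported

print(round(risk_increase_pct(short_sleep_rate, reference_rate), 1))
print(round(risk_increase_pct(long_sleep_rate, reference_rate), 1))
```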

It’s interesting to note that getting too much sleep is more dangerous than getting too little. Lead author Epameinondas Fountas sums up the findings: “Our findings suggest that too much or too little sleep may be bad for the heart. More research is needed to clarify exactly why, but we do know that sleep influences biological processes like glucose metabolism, blood pressure, and inflammation—all of which have an impact on cardiovascular disease.”

Sweet spots, literally and figuratively, for napping

Nap pods

For those without the option of a full night’s (day’s?) sleep, or who need to be at their best from beginning to end of very long days, naps are often the only option. A new industry is springing up in cities around the world to provide busy people cozy places in which to catch some Zzzs.

(For the balance of this interesting article, please visit the original source.)


Sugar pill placebos as effective as powerful pain relieving drugs – for some

A new study has shown sugar pills can be effective pain relief – for those with the right brains (Credit: kavusta/Depositphotos)

Researchers at Northwestern University have shown that sugar pill placebos are as effective as any drug on the market for relieving chronic pain in people with a certain brain anatomy and psychological characteristics. Amazingly, such patients will even experience the same reduction in pain when they are told the pill they are taking has no physiological effect.

Previous studies have found that placebos can have an effect on a number of conditions, including sleep disorders, depression and pain. The new Northwestern study, however, has shown it is possible to predict which patients suffering from chronic pain will experience relief when given a sugar pill – and it’s basically all in their heads.

“Their brain is already tuned to respond,” says senior study author A. Vania Apkarian, professor of physiology at Northwestern University Feinberg School of Medicine. “They have the appropriate psychology and biology that puts them in a cognitive state that as soon as you say, ‘this may make your pain better,’ their pain gets better.”

Additionally, there’s no need for subterfuge because those primed to respond to a placebo will do so even when they know that’s what they’re getting.

“You can tell them, ‘I’m giving you a drug that has no physiological effect but your brain will respond to it,'” Apkarian adds. “You don’t need to hide it. There is a biology behind the placebo response.”

The study involved 60 patients experiencing chronic back pain who were randomly split into two arms. Subjects in one arm were given either a real pain relief drug or a placebo – those receiving the drug weren’t studied by the researchers. Those in the other arm received neither the drug nor a placebo and served as the control group.

Patients who received the placebo and reported a reduction in pain were examined and found to share a similar brain anatomy and similar psychological traits: the right side of their emotional brain was larger than the left, and their cortical sensory area was larger than that of patients in the placebo group who reported no reduction in pain. The researchers say the patients who responded to the placebo were also more emotionally self-aware, sensitive to painful situations, and mindful of their environment.

The researchers say their findings have a number of potential benefits, the most obvious being the ability for doctors to prescribe a placebo rather than addictive pharmacological drugs that may have negative long-term effects, while getting the same result. Prescribing a cheap sugar pill would also result in a significant reduction in healthcare costs for the patient and the health care system as a whole.

“Clinicians who are treating chronic pain patients should seriously consider that some will get as good a response to a sugar pill as any other drug,” says Apkarian. “They should use it and see the outcome. This opens up a whole new field.”

Additionally, the findings may make it possible to eliminate the placebo effect from drug trials, meaning fewer subjects would need to be recruited and it would be easier to identify the physiological effects of the drug under examination.

The team’s research is published in Nature Communications. Source: Northwestern University.

(For the source of this article, and for additional interesting articles like this one, please visit the original publisher.)


10 reasons why Finland’s education system is the best in the world

by Mike Colagrossi –

According to a recent European study, Finland is the country with the best school results in Europe, thanks to its teaching system. AFP PHOTO OLIVIER MORIN.

Time and time again, American students rank near the middle or bottom among industrialized nations when it comes to performance in math and science. The Program for International Student Assessment (PISA), run by the Organization for Economic Cooperation and Development (OECD), routinely releases data showing that Americans are seriously lagging behind in a number of educational performance assessments.

Despite calls for education reform and a continual lackluster performance on the international scale, not a lot is being done or changing within the educational system. Many private and public schools run on the same antiquated systems and schedules that were once conducive to an agrarian society. The mechanization and rigid assembly-line methods we use today are spitting out ill-prepared worker clones, rudderless adults and an uninformed populace.

But no amount of pontificating will change what we already know. The American education system needs to be completely revamped – from the first grade to the Ph.D. It’s going to take a lot more than a well-meaning celebrity project to do that…

Many people are familiar with the stereotype of the hard-working, rote-memorizing, myopically focused study and work ethic of Eastern Asia. Many of these countries, such as China, Singapore, and Japan, routinely rank in the top spots in both math and science.

Some pundits point towards this model of exhaustive brain draining as something Americans should aspire to become. Work more! Study harder! Live less. The facts and figures don’t lie – these countries are outperforming us, but there might be a better and healthier way to go about this.

Finland is the answer. A country rich in intellectual and educational reform, it has over the years initiated a number of novel and simple changes that have completely revolutionized its educational system. It outranks the United States and is gaining on Eastern Asian countries.

Are they cramming in dimly-lit rooms on robotic schedules? Nope. Stressing over standardized tests enacted by the government? No way. Finland is leading the way because of common-sense practices and a holistic teaching environment that strives for equity over excellence. Here are 10 reasons why Finland’s education system is dominating America and the world stage.

Photo By Craig F. Walker / The Denver Post

No standardized testing

Staying in line with our print-minded sensibilities, standardized testing is the blanket way we test for subject comprehension. Filling in little bubbles on a scantron and answering pre-canned questions is somehow supposed to be a way to determine mastery or at least competence of a subject. What often happens is that students will learn to cram just to pass a test and teachers will be teaching with the sole purpose of students passing a test. Learning has been thrown out of the equation.

Finland has no standardized tests. The only exception is the National Matriculation Exam, a voluntary test for students at the end of upper-secondary school (the equivalent of an American high school). All children throughout Finland are graded on an individualized basis, using a grading system set by their teacher. Tracking overall progress is done by the Ministry of Education, which samples groups across different ranges of schools.

Accountability for teachers (not required)

A lot of the blame goes to teachers, and sometimes rightfully so. But in Finland, the bar is set so high for teachers that there is often no reason to have a rigorous “grading” system for them. Pasi Sahlberg, director of the Finnish Ministry of Education and author of Finnish Lessons: What Can the World Learn from Educational Change in Finland?, said the following about teachers’ accountability:

“There’s no word for accountability in Finnish… Accountability is something that is left when responsibility has been subtracted.”

All teachers are required to have a master’s degree before entering the profession. Teaching programs are the most rigorous and selective professional schools in the entire country. If a teacher isn’t performing well, it’s the individual principal’s responsibility to do something about it.

The pupil-teacher dynamic, once that of master and apprentice, cannot be distilled down to a few bureaucratic checks and standardized testing measures. It needs to be dealt with on an individual basis.

Photo By Craig F. Walker / The Denver Post

Cooperation not competition

While most Americans and other countries see the educational system as one big Darwinian competition, the Finns see it differently. Sahlberg quotes a line from the writer Samuli Paronen:

“Real winners do not compete.”

Ironically, this attitude has put them at the head of the international pack. Finland’s educational system doesn’t worry about artificial or arbitrary merit-based systems. There are no lists of top performing schools or teachers. It’s not an environment of competition – instead, cooperation is the norm.

Make the basics a priority

Many school systems are so concerned with increasing test scores and comprehension in math and science that they tend to forget what constitutes a happy, harmonious, and healthy student and learning environment. Many years ago, the Finnish school system was in need of some serious reforms.

The program that Finland put together focused on returning to the basics. It wasn’t about dominating with excellent marks or upping the ante. Instead, they looked to make the school environment a more equitable place.

Since the 1980s, Finnish educators have focused on making these basics a priority:

  • Education as an instrument to balance out social inequality.
  • Free school meals for all students.
  • Easy access to health care.
  • Psychological counseling.
  • Individualized guidance.

Beginning with the individual in a collective environment of equality is Finland’s way.

Starting school at an older age

Here the Finns again start by changing very minute details. Students start school when they are seven years old. They’re given free rein in the developing childhood years rather than being chained to compulsory education. It’s simply a way to let a kid be a kid.

There are only 9 years of compulsory school that Finnish children are required to attend. Everything past the ninth grade or at the age of 16 is optional.

Just from a psychological standpoint, this is a freeing ideal. Although the evidence may be anecdotal, many students really do feel like they’re stuck in a prison. Finland alleviates this sense of compulsion and instead opts to prepare its children for the real world.

Providing professional options past a traditional college degree

The current pipeline for education in America is incredibly stagnant and immutable. Children are stuck in the K-12 circuit, jumping from teacher to teacher, each grade a preparation for the next, all ending in the grand culmination of college, which then prepares you for the next grand thing on the conveyor belt. Many students don’t need to go to college, only to end up with a degree they never use, or to flounder about trying to find purpose while incurring massive debt.

Finland solves this dilemma by offering options that are equally advantageous for students continuing their education. There is less focus on the dichotomy of college-educated versus trade-school or working class; both paths can be equally professional and fulfilling as careers.

In Finland, there is the Upper Secondary School, a three-year program that prepares students for the Matriculation Test, which determines their acceptance into a university. This is usually based on the specialties they’ve acquired during their time in “high school.”

Next, there is vocational education, a three-year program that trains students for various careers. They have the option to take the Matriculation Test if they then want to apply to a university.

Finns wake up later for less strenuous schooldays

Waking up early, catching a bus or a ride, and participating in morning and after-school extracurriculars are huge time sinks for a student. Add the fact that some classes start anywhere from 6 to 8 AM, and you’ve got sleepy, uninspired adolescents on your hands.

Students in Finland usually start school anywhere from 9:00 – 9:45 AM. Research has shown that early start times are detrimental to students’ well-being, health, and maturation. Finnish schools start the day later and usually end by 2:00 – 2:45 PM. They have longer class periods and much longer breaks in between. The overall system isn’t there to ram and cram information into their students, but to create an environment of holistic learning.

Consistent instruction from the same teachers

There are fewer teachers and students in Finnish schools. You can’t expect to teach an auditorium of invisible faces and break through to them on an individual level. Students in Finland often have the same teacher for up to six years of their education. During this time, the teacher can take on the role of a mentor or even a family member, and mutual trust and bonding are built so that both parties know and respect each other.

Different needs and learning styles vary on an individual basis. Finnish teachers can account for this because they’ve figured out each student’s idiosyncratic needs. They can accurately chart and care for their students’ progress and help them reach their goals. There is no passing along to the next teacher, because there isn’t one.

Levi, Finland. Photo by Christophe Pallot/Agence Zoom/Getty Images.

A more relaxed atmosphere

There is a general trend in what Finland is doing with its schools: less stress, less unneeded regimentation, and more caring. Students usually have only a couple of classes a day. They have several breaks in which to eat, enjoy recreational activities, and generally just relax. Spread throughout the day are 15-to-20-minute intervals when the kids can get up and stretch, grab some fresh air, and decompress.

This type of environment is also needed by the teachers. Teacher rooms are set up all over Finnish schools, where they can lounge about and relax, prepare for the day or just simply socialize. Teachers are people too and need to be functional so they can operate at the best of their abilities.

Less homework and outside work required

According to the OECD, students in Finland have less outside work and homework than students anywhere else in the world. They spend only half an hour a night working on assignments from school. Finnish students also don’t have tutors. Yet they’re outperforming cultures that have toxic school-to-life balances, without the unneeded stress.

Finnish students are getting everything they need done in school without the added pressures that come with excelling at a subject. Without having to worry about grades and busy-work, they are able to focus on the true task at hand – learning and growing as human beings.



Procrastinators’ brains are different from those of people who get things done


Young girl in Chicago classroom, Stanley Kubrick for LOOK Magazine, c/o Creative Commons

Daydreaming is important — studies have repeatedly said as much — but maybe you shouldn’t daydream too much. A recent study by researchers at Ruhr-Universität Bochum, after looking at MRI scans of 264 individuals, has come to the conclusion that the brains of doers differ from those of procrastinators.

Before we explain how they came to that conclusion, it’s worth going over a few basic terms. The first is the amygdala, a pair of almond-shaped clusters of neurons buried deep within the brain. The amygdala helps you process smell and store memories, rewards your brain with dopamine, and helps you “assess different situations with regard to their respective outcomes.” If you’re trying to recognize a smell — if you beat a video game level and pleasant graphics fill the screen — if you’re unsure whether or not it will be worthwhile to go to a concert in the evening — all of this goes through your amygdala. There’s also the dorsal anterior cingulate cortex. This section of the brain currently appears to have a role in blood pressure, heart rate, attention, the anticipation of reward, impulse control, emotion, and — more broadly, though this still appears to be an area of active research — decision-making.

It’s helpful to have an understanding of these two sections of the brain when you read that “Individuals with poor action control had a larger amygdala” and that “the functional connection between the amygdala and the so-called dorsal anterior cingulate cortex (dorsal ACC) was less pronounced.” These results led Erhan Genç — a member of the research team at Ruhr-Universität Bochum — to hypothesize that “Individuals with a higher amygdala volume may be more anxious about the negative consequences of an action – they tend to hesitate and put off things.”

The study has sparked a wide-ranging conversation on Reddit, with questions raised about neuroplasticity (and an excellent reply reminding us just how contextual neuroplasticity is), former procrastinators chiming in with their autobiographical two cents, and teachers discussing how they might apply the gist of this research in the classroom. (“This is great supporting evidence as to why teaching kids to take risks in the classroom is so effective.”)

The study works toward finding a neural base for some of these patterns — why, at the level of our hardware, things work the way they do. But — just as there has been an active question about the neural base of non-canonical uses of the nervous system — it’s worth wondering what a given neural base actually looks like when so many different things seem to come from the same place: how a larger-than-usual amygdala seems capable of translating itself into procrastination; into a larger-than-average number of unique responses to a Rorschach test; into autism; or into the fact that “after an eight-week course of mindfulness practice, … MRI scans show that … the amygdala appears to shrink.”

From an outsider’s perspective, it may feel a little like looking at birds and dinosaurs and knowing that both came from the same place.

But they do. And that’s the next thing to be figured out.


The end of the middle class: Why prosperity is failing in America

Executive Editor of the Economic Hardship Reporting Project.

‘Middle class’ doesn’t mean what it used to. Owning a home, two cars, and having a summer vacation to look forward to is a dream that’s no longer possible for a growing percentage of American families. So what’s changed? That safe and stable class has become shaky as unions collapsed, the gig economy surged, and wealth concentrated in the hands of the top 1%, the knock-on effects of which include sky-high housing prices, people working second jobs, and a cultural shift marked by ‘one-percent’ TV shows (and presidents). Alissa Quart, executive editor of the Economic Hardship Reporting Project, explains how the American dream became a dystopia, and why it’s so hard for middle-class Americans to get by. Alissa Quart is the author of Squeezed: Why Our Families Can’t Afford America.



Genome study of cave bones reveals early human hybrid

Genetic analysis on an ancient bone fragment has revealed the direct descendant of a Neanderthal and a Denisovan (Credit: James633/Depositphotos)

Although Homo sapiens won the world domination contest, we weren’t without our competitors. For thousands of years we shared the planet with other hominin species, such as the Neanderthals and Denisovans. These early humans were known to have fought, competed and even cross-bred when they crossed paths, and now the most direct evidence of those meetings has been found. By sequencing the genome of a hominin bone from a Siberian cave, anthropologists have discovered the direct descendant of a Neanderthal and a Denisovan.

Neanderthals were a lot like us, but stockier, stronger and probably hairier. They inhabited Europe long before we modern humans trekked out there, and their range stretched into southwest Asia and as far north as Siberia. Denisovans lived around the same time, ranging from Siberia to Southeast Asia, although we don’t know as much about them since all we have are a few teeth, finger and toe bones.

Genetic studies have revealed that these two species interbred with each other – and modern humans. Around two percent of the modern human genome is estimated to contain Neanderthal DNA, while some humans may be up to six percent Denisovan. But the two of them are far closer to each other than to us – possibly up to 17 percent of the Denisovan genome comes from Neanderthals.

The researchers studied the genome of this bone fragment, found in Denisova Cave, Russia

Researchers from the Max Planck Institute have now conducted genetic analysis of a small bone fragment found in Denisova Cave in Russia, where most Denisovan remains have been found so far. The team discovered that the bone belonged to a female of at least 13 years of age, but it was her parents who were most interesting to the researchers: her mother was a Neanderthal and her father a Denisovan.

“We knew from previous studies that Neanderthals and Denisovans must have occasionally had children together,” says Viviane Slon, a first author of the study. “But I never thought we would be so lucky as to find an actual offspring of the two groups.”

By studying this individual’s genome, the researchers were able to learn more about the parents. In an unexpected twist, the mother turned out to be a closer genetic match to a distant Neanderthal population in western Europe than to a Neanderthal who had lived earlier in Denisova Cave. On the father’s side of the family tree, the Denisovan apparently had at least one Neanderthal ancestor himself, suggesting the two species must have met before.

“It is striking that we find this Neanderthal/Denisovan child among the handful of ancient individuals whose genomes have been sequenced,” says Svante Pääbo, lead author of the study. “Neanderthals and Denisovans may not have had many opportunities to meet. But when they did, they must have mated frequently – much more so than we previously thought.”

The research was published in the journal Nature.


Can Neuroscience Predict How Likely Someone Is to Commit Another Crime?

Researchers propose using brain imaging technology to improve risk assessments—tools to help courts determine appropriate sentencing, probation, and parole. It’s controversial to say the least.

by Andrew R. Calderón


Roy Scott/Getty Images.

(This story was published in partnership with The Marshall Project, a nonprofit newsroom covering the US criminal justice system.)

In 1978, Thomas Barefoot was convicted of killing a police officer in Texas. During the sentencing phase of his trial, the prosecution called two psychiatrists to testify about Barefoot’s “future dangerousness,” a capital-sentencing requirement that asked the jury to determine if the defendant posed a threat to society.

The psychiatrists declared Barefoot a “criminal psychopath,” and warned that whether he was inside or outside a prison, there was a “one hundred percent and absolute chance” that he would commit future acts of violence that would “constitute a continuing threat to society.” Informed by these clinical predictions, the jury sentenced Barefoot to death.

Although such psychiatric forecasting is less common now in capital cases, a battery of risk assessment tools has since been developed that aims to help courts determine appropriate sentencing, probation, and parole. Many of these risk assessments use algorithms to weigh personal, psychological, historical, and environmental factors to make predictions of future behavior. But it is an imperfect science, beset by accusations of racial bias and false positives.

Now a group of neuroscientists at the University of New Mexico propose to use brain imaging technology to improve risk assessments. Kent Kiehl, a professor of psychology, neuroscience, and the law at the University of New Mexico, says that by measuring brain structure and activity they might better predict the probability an individual will offend again.

Neuroprediction, as it has been dubbed, evokes uneasy memories of a time when phrenologists used body proportions to make pronouncements about a person’s intelligence, virtue, and—in its most extreme iteration—racial inferiority.

Yet predicting likely human behavior based on algorithms is a fact of modern life, and not just in the criminal justice system. After all, what is Facebook if not an algorithm for calculating what we will like, what we will do, and who we are?

In a recent study, Kiehl and his team set out to discover whether brain age—an index of the volume and density of gray matter in the brain—could help predict re-arrest.

Age is a key factor in standard risk assessments. On average, defendants between 18 and 25 years old are considered more likely to engage in risky behavior than their older counterparts. Even so, chronological age, the researchers wrote, may not be an accurate measure of risk.

The advantage of brain age over chronological age is its specificity. It accounts for “individual differences” in brain structure and activity over time, which have an impact on decision-making and risk-taking.

After analyzing the brain scans of 1,332 New Mexico and Wisconsin men and boys — ages 12 to 65 — in state prisons and juvenile facilities, the team found that by combining brain age and activity with psychological measures, such as impulse control and substance dependence, they could accurately predict re-arrest in most cases.

The brain age experiment built on the findings from research Kiehl had conducted in 2013, which demonstrated that low activity in a brain region partially responsible for inhibition seemed more predictive of re-arrest than the behavioral and personality factors used in risk assessments.

“This is the largest brain age study of its kind,” Kiehl says, and the first time that brain age was shown to be useful in the prediction of future antisocial behavior.

In the study, subjects lie inside an MRI scanner as a computer sketches the peaks and troughs of their brains to construct a profile. With hundreds of brain profiles, the researchers can train algorithms to look for unique patterns.
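The kind of pattern-finding described above can be illustrated with a toy model. The sketch below is not the study's actual pipeline — the cohort, the two features ("brain-age gap" and an impulse-control score), and every coefficient are fabricated for illustration — but it shows, in principle, how a simple classifier can be trained on brain-derived features to predict re-arrest.

```python
# Illustrative sketch only: fit a plain logistic-regression classifier to
# predict re-arrest from two synthetic features. All data are fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic cohort: by construction, a larger brain-age gap (brain age minus
# chronological age) and lower impulse control raise re-arrest probability.
brain_age_gap = rng.normal(0.0, 3.0, n)      # years
impulse_control = rng.normal(0.0, 1.0, n)    # standardized score
true_logits = 0.6 * brain_age_gap - 0.8 * impulse_control - 0.2
rearrested = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

# Gradient-descent logistic regression: intercept plus the two features.
X = np.column_stack([np.ones(n), brain_age_gap, impulse_control])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))             # predicted probabilities
    w -= 0.1 * X.T @ (p - rearrested) / n    # log-loss gradient step

pred = (1 / (1 + np.exp(-X @ w))) >= 0.5
accuracy = (pred == rearrested.astype(bool)).mean()
print(f"learned weights: {np.round(w, 2)}, training accuracy: {accuracy:.2f}")
```

The learned weights recover the built-in pattern (a positive coefficient on the brain-age gap, a negative one on impulse control) — which is the point: with enough labeled profiles, an algorithm can pick out which features carry predictive signal.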



U.S. students lag far behind rest of the world in learning a second language. Here’s why that matters.

Photo: powerofforever / Getty Images

If you live in the German-speaking Community of Belgium, one of that nation’s three federal communities, you most likely speak multiple languages. Though the local language is German, three-year-olds are required to study a foreign language. As it turns out, this is the easiest time during human development to grasp multiple languages, given the plasticity of the brain. The longer you wait, the harder it becomes.

Most European countries require that their students study foreign languages. At what age they start learning is another story, though in most of Europe, studying at least two other languages is compulsory. Only Ireland (save Northern Ireland) and Scotland escape this fate, but even there you’ll hear many tongues spoken:

Ireland and Scotland are two exceptions that do not have compulsory language requirements, but Irish students learn both English and Gaelic (neither is considered a foreign language); Scottish schools are still obligated to offer at least one foreign-language option to all students ages 10-18.

Then you have America, a nation in which less than half of citizens own a passport. That number, thankfully, has risen from 27 percent in 2007 to 42 percent, but the data still hint at a majority uninterested in international travel. A new Pew Research poll shows that most American states have less than one-quarter of students studying a foreign language.


That’s because learning a foreign language is not nationally mandated. The state with the most students enrolled—New Jersey has 51 percent—happens to be where I grew up. In high school, you either took Spanish, German, or French; looking back, I thought it was required everywhere. Not the case, at least broadly—school districts (and even states) can require language studies, but the U.S. Department of Education has no broad requirements.

That’s in stark contrast to Europe. In France, Romania, Austria, Norway, Malta, Luxembourg, and Liechtenstein, every student must learn another language. The country with the fewest students enrolled is actually Belgium, at 64 percent, just behind Portugal (69 percent) and the Netherlands (70 percent). Overall, 92 percent of European students study multiple languages. In America, that number is 20 percent.

It also depends on which state you’re discussing. In New Mexico, Arizona, and Arkansas, only 9 percent of students study a language other than English, an especially disturbing fact given that two are border states that benefit greatly from communicating with their neighbors.

The numbers don’t get much better as we investigate older demographics. Only 36 percent of Americans believe speaking another language is “extremely or very important” in the modern workplace. Strangely, most Americans realize that further training is required to stay competitive in the market:

The vast majority of U.S. workers say that new skills and training may hold the key to their future job success.

Americans spend so much time focused on bringing jobs “back,” yet we actually have no clue where they “go.” It’s impossible to compete in a global workforce if you refuse to educate yourself about anywhere beyond your neighborhood. Eight in ten Americans believe outsourcing is a serious problem, and seven in ten claim that the responsibility falls on the individual, yet just over one-third think that preparation should include learning another language.


Considering English is the most studied language across Europe, it’s not surprising American citizens are lazy. We can communicate almost anywhere we travel — our privileged reality. During my four trips to Morocco, I was often approached in French; upon learning I’m American, the speaker immediately switched to English. And that’s besides the native Moroccan Arabic; many citizens also know Spanish and Italian.

One can argue that their economy depends on it; English is, after all, the business language of the world. Beyond staying competitive in the marketplace, however, there are many personal benefits. Early language learning confers cognitive benefits and helps stave off dementia. Being multilingual has positive effects on memory, problem-solving, verbal and spatial abilities, and intelligence. These are all important skill sets in business. They also make you a healthier citizen, physically and socially.

Still, many Americans don’t recognize the value of curiosity. Instead of bristling when hearing people communicate in a language they don’t understand, they could try to make sense of it. Instead, we’re constantly confronted with videos of Americans demanding that immigrants “learn to speak the language.” Complacency usurps curiosity — and common sense.

Within the English language, the more words you know, the broader the population you can dialogue with. That extends exponentially when you know multiple languages. That we wouldn’t want to talk to as many people as possible sheds light on rampant nationalism, which is a shame. The larger one’s vocabulary, the more likely we’ll get along, in business and in life. Everyone’s health improves.



The Role and Power of Women in Ancient Egypt

Throughout history, the status and importance of women varied by culture and period. Some groups maintained a highly matriarchal culture during certain times, while at other times they were predominantly patriarchal. Likewise, the roles of women in ancient Egypt and their ability to ascend to positions of power varied through history. Little is known about female status during the Early Dynastic Period (c. 3000 BCE). However, during the First and Second Intermediate Periods (2100 BCE–1550 BCE), the New Kingdom (1550 BCE–1200 BCE), and certainly during the Ptolemaic Period (300 BCE–30 BCE), Egyptians had a unique attitude about women.


Queen Nefertiti, ruler and mother of six, kissing one of her daughters. Limestone relief, c. 1332-1356 BCE. Image: CC 2.5.

The Rise and Fall of Women in Egypt

Not only were women in ancient Egypt responsible for the nurturance and admonition of children, but they could also work at a trade, own and operate a business, inherit property, and come out well in divorce proceedings. Some women of the working class even became prosperous. They trained in medicine as well as in other highly skilled endeavors. There were female religious leaders in the priesthood, but in this instance, they were not equal to the men. In ancient Egypt, women could buy jewelry and fine linens. At times, they ruled as revered queens or pharaohs.

The role of women in ancient Egypt diminished during the late dynastic period but reappeared within the Ptolemaic dynasty. Both Ptolemy I and II put the portraits of their wives on the coins. Cleopatra VII became a very powerful figure internationally. However, after her death, the role of women receded markedly and remained virtually subservient until the 20th century.

How the Moon Shaped the Role of Women in Ancient Egypt

Throughout history, strongly patriarchal societies tended to appear where the sun was worshiped, and matriarchal societies where the moon was worshiped. During much of Egyptian history, people worshiped both the moon and the sun, which gave rise to both matriarchal and patriarchal elements in society. For the most part, both the sun, Ra, and the moon, Konsu, were a vital part of the religion of ancient Egypt. It may be that the main objection to Amenhotep IV was that he stressed worship of the sun disk alone, at the expense of the moon god. Much of traditional Egyptian society rejected this new concept and wanted a balance between the sun and the moon.

Examples of Powerful Egyptian Women


In the middle of the 15th century BCE, one of the most important people to appear on the Egyptian scene was a woman. Her name was Hatshepsut, and she came to power during a very critical time in Egyptian history. For many years Egypt had been ruled by the Hyksos, foreigners who conquered Egypt and attempted to destroy many important aspects of Egyptian society. In 1549 BCE, a strong leader emerged by the name of Ahmose I, founder of the 18th Dynasty. He drove out the invaders.

Egypt was once more restored to its glory by the time his successor, Amenhotep I, became pharaoh. His granddaughter, Hatshepsut, became the fifth pharaoh of the 18th Dynasty in c. 1478 BCE after her sickly husband, the pharaoh Thutmose II, died. The female ruler was a builder: she directed expeditions, built ships, enlarged the army, and established Egypt as a major presence in the international arena. She also utilized the services of other skilled women in various governmental capacities. Interestingly, she ruled Egypt as both a queen and a king, and her statues often portray her as a man wearing a beard. After her death, Thutmose III built upon Hatshepsut’s strong foundation, which resulted in the largest Egyptian empire the world had ever seen.


Hatshepsut is depicted with a bare chest and false beard. Granite statue, c. 1479-1458 BCE. Modified, public domain.


Amenhotep III continued to advance the cause of Egypt and to provide for its people a better life than they had ever known in the past. During this time, several women of great talent appeared and were able to make many contributions. His queen was named Tiye. She was perhaps the first in this hierarchy of counselors to the king. She presumably molded the pharaoh’s thinking in matters of state and religion and provided him with strong support.


It was during this time that another famous and important woman appeared. Her name was Nefertiti, and she became the wife of the son of Amenhotep III and Queen Tiye — a man known to history as Amenhotep IV and, later, as Akhenaten. We are now being told that Nefertiti may have been a more powerful and influential person than her husband.

The status of women in ancient Egyptian society was of such importance that the right to the crown itself passed through the royal women and not the men. The daughters of kings were all important.


During the reign of Ramesses II (c. 1279–1213 BCE), his favorite wife and queen, Nefertari, was raised to the status of Royal Wife and Royal Mother. At the Abu Simbel temple in southern Egypt, her statue is as large as the pharaoh’s statue; thus, we see her portrayed as an important person during his reign. Often the name of his queen Auset-nefert would appear along with his own. Pharaohs such as Ramesses II, who esteemed their queens and gave them equal status, helped to bolster the role and stature of women in ancient Egypt.


Queen Nefertari stands alongside her husband, Ramesses II, in equal scale. Image: CC2.0 Dennis Jarvis.

It is also of interest to note that Ramesses II restored the temple of Hatshepsut at Deir el-Bahri. In so many other instances, he either destroyed evidence of the very existence of his predecessors or usurped their creations, but with this famous woman, he went to great lengths to acknowledge her existence and to protect her memory.

Cleopatra VII

Cleopatra VII was the seventh Cleopatra and the last of the Greek, or Ptolemaic, rulers of Egypt. Her son, Ptolemy XV, possibly reigned for a few weeks after her death; however, she was the last of the significant Egyptian rulers. She was the last of the powerful women in ancient Egypt, and after her death, Egypt fell to the Romans.

Cleopatra was schooled in science, politics, and diplomacy, and she was a proponent of merging the cultures of Greece and Egypt. She could also read and write the ancient Egyptian language.

Egypt’s Class Society

From the beginning, Egypt was a class society. There was a marked line of distinction that was maintained between the different ranks of society. Although sons tended to follow the trade or profession of their fathers, this was not always the case, and there were even some instances where people were also able to advance themselves regardless of their birth status.

Women in ancient Egypt were, like their male counterparts, subject to a rank system. The highest of them was the queen followed by the wives and daughters of the high priest. Their duties were very specific and equally as important as those of the men. Women within the royal family performed duties much like we see today in the role of ladies in waiting to the Queen of England. Additionally, the role of women as teachers and guides for their children was very prominent in ancient Egypt.

Priesthood and Non-Traditional Roles

There were holy women who possessed both dignity and importance. As to the priesthood, and perhaps other professions, only the women of a higher rank trained in these endeavors. Both male and female priests enjoyed great privileges. They were exempt from taxes, they used no part of their own income in any of the expenses related to their office, and they were permitted to own land in their own right.

Women in ancient Egypt had the authority to manage affairs in the absence of their husbands. They had traditional duties such as needlework, drawing water, spinning, weaving, attending to the animals, and a variety of domestic tasks, but they also took on some non-traditional roles. Diodorus wrote that he saw images depicting women making furniture and tents and engaging in other pursuits that might seem more suitable to men. It seems that women on every socioeconomic level could do pretty much what a man could do, with the possible exception of serving in the military. This was evident when a husband died: the wife would take over and attend to whatever business or trade he had been doing.

Marriage and Family

Both men and women could decide whom they would marry. However, elders helped to introduce suitable males and females to each other. After the wedding, the husband and wife registered the marriage. A woman could own property that she had inherited from her family, and if her marriage ended in divorce, she could keep her own property and the children and was free to marry again.

Women held the extremely important role of wife and mother. In fact, Egyptian society held high regard for women with many children. A man could take other women to live in his family, but the primary wife would have ultimate responsibility. Children from other wives would have equal status to those of the first wife.

The Wisdom of the Ages

The high points for women in ancient Egypt came to a screeching halt after Cleopatra. The Greek-Macedonian Ptolemies had ascended Egypt’s throne beginning in 323 BCE, after Alexander the Great died. This marked a permanent and profound change from an Egyptian culture to one of Graeco-Egyptian influence. As a result of non-native Egyptian sentiments, the roles of women continued to wane during this time and into the Roman period. That Cleopatra VII became such a strong ruler is a testament to the tenacity of native Egyptians in maintaining their cultural views. Additionally, her shrewd intellect, wily relationship-building skills, and desire to support the Egyptian people won them over. Today, Cleopatra is remembered as the last pharaoh and, more importantly, the last woman the Egyptians ever elevated to that stature.

You may also like:
Oxyrhynchus Papyri: Historical Treasure in Ancient Egyptian Garbage

Updated by Historic Mysteries March 7, 2018



Easter Island, also known as Rapa Nui, is a 63-square-mile spot of land in the Pacific Ocean. In 1995, science writer Jared Diamond popularized the “collapse theory” in a Discover magazine story about why the Easter Island population was so small when European explorers arrived in 1722. He later published Collapse, a book hypothesizing that infighting and an overexploitation of resources led to a societal “ecocide.” However, a growing body of evidence contradicts this popular story of a warring, wasteful culture.

Scientists contend in a new study that the ancient Rapa Nui society was more sophisticated than previously thought, and that the biggest clue lies in the island’s most iconic features.

The iconic “Easter Island heads,” or moai, are actually full-bodied but often partially buried statues that cover the island. There are almost a thousand of them, and the largest is over seventy feet tall. Scientists hailing from UCLA, the University of Queensland, and the Field Museum of Natural History in Chicago believe that, much like Stonehenge, the process by which these monoliths were created is indicative of a collaborative society.

Their research was published recently in the Journal of Pacific Archaeology.

Study co-author and director of the Easter Island Statue Project Jo Anne Van Tilburg, Ph.D., has focused on measuring the visibility, number, size, and location of the moai. She tells Inverse that “visibility, when linked to geography, tells us something about how Rapa Nui, like all other traditional Polynesian societies, is built on family identity.”

Van Tilburg and her team say that understanding how these families interacted with the craftsmen who made the tools that helped create the giant statues is indicative of how different parts of Rapa Nui society interacted.

Easter Island statues, or moai.

Previous excavations led by Van Tilburg revealed that the moai were created with basalt tools. In this study, the scientists focused on figuring out where on the island the basalt came from. Between 1455 and 1645 AD there was a series of basalt transfers from quarries to the actual locations of the statues — so the question became, which quarry did the tools come from?

Chemical analysis of the stone tools revealed that the majority of these instruments were made of basalt that was dug up from one quarry. This demonstrated to the scientists that, because everyone was using one type of stone, there had to be a certain level of collaboration in the creation of the giant statues.

“There was more interaction and collaboration”

“We had hypothesized that elite members of the Rapa Nui culture had controlled resources and would only use them for themselves,” lead author and University of Queensland Ph.D. candidate Dale Simpson Jr. tells Inverse. “Instead, what we found is that the whole island was using similar material, from similar quarries. This led us to believe that there was more interaction and collaboration in the past than has been noted in the collapse narrative.”

Simpson explains that the scientists intend to continue mapping the quarries and performing other geochemical analyses on artifacts, so they can continue to “paint a better picture” of prehistoric Rapa Nui interactions.

After Europeans arrived on the island, slavery, disease, and colonization decimated much of Rapa Nui society — although its culture continues to exist today. Understanding exactly what happened in the past there is key to recognizing a history that became clouded by colonial interpretation.

“What makes me excited is that through my long-term relationship with the island, I’ve been able to better understand how people in the ancient past interacted and shared information — some of this interaction can be seen between thousands of Rapa Nui who still live today,” says Simpson. “In short, Rapa Nui is not a story about collapse, but about survival!”



What qualities define a good leader? Is it vision, the ability to understand and negotiate with people, drive, an expectation of excellence, or a stunningly brilliant intellect? A new study finds that the last one may actually be a hindrance: those who are exceedingly intelligent, while still among the top producers, don’t necessarily make the best leaders.

Researchers at the University of Lausanne in Switzerland, led by John Antonakis, set out to test the assumption that the brightest people make the best leaders. Their results were published in the Journal of Applied Psychology. The team was building on the work of University of California, Davis psychology professor Dean Keith Simonton, who theorized that there is a sweet spot where peak performance is reached, when the intelligence of the leader correlates with that of the followers.

We expect leaders to be smarter than us, but not too smart, according to Simonton. While the average IQ is 100-110, the optimal IQ for someone managing a team of average folks would be 120-125, no more than about 1.2 standard deviations above the group’s mean. This relationship is called curvilinear; graphed, it looks like an inverted U.
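Simonton’s inverted U can be sketched as a toy quadratic in which rated effectiveness peaks when a leader sits about 1.2 standard deviations above the group’s mean IQ. This is a minimal illustration of the curve’s shape only; the function form and numbers are assumptions, not the study’s fitted model.

```python
# Illustrative sketch (NOT the study's actual model): perceived leadership
# effectiveness as an inverted-U function of the gap between a leader's IQ
# and the group's mean IQ, peaking about 1.2 standard deviations above
# the group mean (18 IQ points when SD = 15).

def perceived_effectiveness(leader_iq, group_mean_iq=100, sd=15, optimal_gap_sd=1.2):
    """Toy quadratic: highest at the optimal gap, falling off on either side."""
    gap_in_sd = (leader_iq - group_mean_iq) / sd
    return 1.0 - (gap_in_sd - optimal_gap_sd) ** 2

for iq in (100, 110, 118, 125, 140):
    print(iq, round(perceived_effectiveness(iq), 2))
```

Past the peak, every additional point of IQ lowers the toy score, mirroring the finding that ratings decline once a leader’s IQ climbs well beyond that of an average team.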

At a certain point, high intelligence hurts leadership if it isn’t balanced by other traits. Credit: Getty Images.

In the Swiss study, 379 middle managers from companies in 30 countries, most of them European, took part. They were followed over six years, and their leadership styles were evaluated periodically. Researchers measured participants’ IQ with the Wonderlic Personnel Test and also assessed their personalities; scores were spread across the spectrum. Antonakis and colleagues matched these with the Multifactor Leadership Questionnaire, which evaluates a manager’s leadership style and how effective it is.

Subordinates and peers at each participant’s job filled these out; each manager was evaluated by seven to eight people. Personality and intelligence were the key indicators of how effective a leader was. A higher IQ meant better leadership ratings, but only up to about 120; those with IQs beyond 128 were found to be less effective.

Bucking another stereotype, researchers uncovered that women tended to express more effective leadership styles. A little over 26% of the participants were women. Older managers scored higher too, but to a lesser extent. What these results show is that balance is important. Intelligence does benefit leadership, Antonakis says, but only if it’s balanced with other parts of one’s personality, like agreeableness and charisma.

Mostly, it comes down to good people skills. Conscientiousness surprisingly didn’t play too much into effective leadership. Of course, whether one is an effective leader or not depends on the IQ of the group. So there isn’t exactly a perfect level of intelligence for a leader to have.

Why do the smartest leaders often fail to reach subordinates? Simonton and his colleagues believe such leaders often put forth more sophisticated plans than others, meaning team members may fail to understand all the intricacies and thus fail to execute them well. Another problem: complex communication styles may fail to influence others. And if a manager comes off as too intellectual, it sets him or her apart; in other words, it makes subordinates feel the leader is not one of them. In the words of the study’s authors:

To conclude, Sheldon Cooper, the genius physicist from “The Big Bang Theory” TV series is often portrayed as being detached and distant from normal folk, particularly because of his use of complex language and arguments. However… Sheldon could still be a leader—if he can find a group of followers smart enough to appreciate his prose!

The model has shortcomings: it originally looked only at simulations and perceptions rather than actual work environments and performance. This latest study was the first to really put Simonton’s theory to the test.

(Emotional intelligence (EQ) is really important for leaders to have.)


Mysterious fossil footprints may cast doubt on human evolution timeline

A set of fossilized human-like footprints in Greece may end up rewriting the story of human evolution (Credit: Andrzej Boczarowski)
We share plenty of features with apes, but the shape of our feet isn’t one of them. So that makes the discovery of human-like footprints dating back 5.7 million years – a time when our ancestors were thought to still be getting around on ape-like feet – a surprising one. Further confounding the mystery is the fact that these prints were found in the Greek islands, implying hominins left Africa much earlier than our current narrative suggests.

Fossilized bones and footprints have helped us piece together the history of human evolution. One of the earliest hominins – ancestors of ours that are more closely related to humans than chimps – was a species called Ardipithecus ramidus, which is known from over 100 specimens. Living about 4.4 million years ago, it had an ape-like foot, with the hallux (the big toe) pointing out sideways rather than falling in line like ours. Fast-forward about 700,000 years, and a set of footprints from Laetoli in Tanzania shows that a more human foot shape had evolved by then.

Enter the newly-discovered footprints. Found in Trachilos in western Crete, they have a distinctly human-like shape, with a big toe of a similar size, shape and position to ours. They appear to have been made by a more primitive hominin than the creature that left the Laetoli prints, but there’s a problem: they also predate Ardipithecus by about 1.3 million years. That means a human-like foot had evolved much earlier than previously thought, throwing a spanner into the accepted idea that the ape-footed Ardipithecus was a direct human ancestor.

A close-up of one of the 5.7 million-year-old footprints, which shows a remarkably human-like shape

These footprints were fairly clearly dated to the Miocene period, about 5.7 million years ago. According to the researchers, they lie in a layer of rock just below a distinctive layer that formed when the Mediterranean sea dried out, about 5.6 million years ago. To further back up the dating, the team analyzed the age of marine microfossils from sections of rock above and below the prints.

But the age of the Trachilos tracks isn’t the only mysterious feature about them: where they were found is also key. Until recently, the fossil record suggested that hominins originated in Africa and didn’t expand into Europe and Asia until about 1.8 million years ago. But these prints indicate that something with remarkably humanoid feet was traipsing through Greece millions of years earlier than conventional wisdom holds.

Interestingly, this find lines up with another recent discovery that could rewrite human history. Back in May, a study described 7 million-year-old bones of a hominin species called Graecopithecus freybergi, which were discovered in Greece and Bulgaria. That find represented such a huge discrepancy from the current thinking that the researchers pondered whether it meant that the human and chimp branches of the family tree originally split in Europe, and not Africa. The new study may corroborate that conclusion.

“This discovery challenges the established narrative of early human evolution head-on and is likely to generate a lot of debate,” says Per Ahlberg, last author of the paper. “Whether the human origins research community will accept fossil footprints as conclusive evidence of the presence of hominins in the Miocene of Crete remains to be seen.”

The research was published in the journal Proceedings of the Geologists’ Association.



Smiles have a sound, and it’s contagious

Basketball coach Frank McGuire speaks on the phone, smiling, while his wife listens, in 1956


The next time you catch yourself smiling during a phone conversation, just because, ask the person on the other end of the line whether they’re smiling, too. According to a small study from cognitive-science researchers in Paris, there’s a strong possibility that one person smiled, and the other “heard” it, then mimicked the gesture.

In other words, not only do smiles have a sound, but that sound is contagious.

A path to empathy

Smiles, we’ve long known, are a universal human signal. They are understood across cultures and “pre-programmed,” as a professor of psychology at Knox College in Illinois once explained to Scientific American. People who are born blind smile in the same way as the sighted, and for the same reasons, he said.

We’ve also been aware of a smile’s catchiness for decades. Scientists have documented how the sight of various facial gestures, including a genuine or “Duchenne” smile, can trigger the same in its viewer. In fact, psychologists first theorized more than 100 years ago that facial mimicry was a key path to accessing another person’s inner state, and thus to developing empathy.

In 2008, scientists in the UK found that people don’t even need to see a smile to perceive it. We can pick out the sound of different types of smiles when merely listening to someone speak.

Now, this research suggests that not only can we identify what the study authors call the “spectral signature of phonation with stretched lips” or “the smile effect” in speech, but that it seems to register on an unconscious level. And—as with the visual cue—it inspires imitation.

To conduct their experiments, the Paris researchers first recreated the smile’s auditory signature digitally, creating software that adds a smile to any recorded voice. They then outfitted 35 participants with electrodes attached to their facial muscles to see whether they could detect the sound of a smile in recorded French sentences—some of which were manipulated to include the effect, others not.

Their results showed that listeners could usually hear the enhancement, and that even when they consciously missed a smile, their zygomaticus major muscles prepared to grin in response to it.

The researchers acknowledge that they don’t know how the experiment would have turned out had its participants not been asked to listen specifically for a smiling voice. Nevertheless, they argue in the paper that “the cognition of smiles is not as deeply rooted in visual processing as previously believed.”



Ancient stone tools found in China shake up human ancestor timeline (again)

The discovery of two-million-year-old stone tools in China may rewrite the migration timeline of early human ancestors (Credit: Professor Zhaoyu Zhu)
Archaeologists have discovered ancient tools and bones in China that, once again, shake up the timeline of the human origin story. The items are more than two million years old, indicating that early hominins had spread much further east earlier than previously thought.

Although it’s being updated all the time, the general consensus holds that hominins – the group of our ancestors that are more closely related to humans than to chimps – originated in Africa, before spreading out into Europe and Asia about 1.8 million years ago.

But more recent discoveries suggest our ancestors had packed their bags and left home way before then. A set of startlingly human footprints found in the Greek islands dates back some 5.7 million years, while 7-million-year-old bones found in Greece and Bulgaria are so old that they led researchers to wonder (somewhat controversially) whether humans and chimps actually split from their last common ancestor in Europe, not Africa.

Thankfully, the new find isn’t quite so dramatic, but it’s no less fascinating. At a maximum age of 2.12 million years, the recently-discovered artifacts are about 270,000 years older than bones and stone tools found in Dmanisi, Georgia, which are widely accepted to be the oldest remains of hominins beyond Africa. Not only that, they’re much further from Africa than human ancestors were believed to have spread at that time.

The team discovered bones and stone tools, including a notch, scrapers, cobble, hammer stones and pointed pieces (Credit: Professor Zhaoyu Zhu)

The discovery was made in Shangchen on the Chinese Loess Plateau. Alongside animal bone fragments, the team found 80 stone tools, including a notch, scrapers, cobble, hammer stones and pointed pieces, which all showed clear signs of use. Most of them were made of quartz and quartzite that are believed to have come from the nearby Qinling Mountains.

Whoever left them behind wasn’t just passing through, either. These artifacts were found in 17 different layers of dust and fossil soil, deposited under different climates over a span of close to a million years, from 2.12 to 1.2 million years ago.




This Face Changes the Human Story. But How?

Scientists have discovered a new species of human ancestor deep in a South African cave, adding a baffling new branch to the family tree.





Ancient mummy DNA reveals surprises about genetic origins of Egyptians

Scientists have recently, for the first time, extracted full nuclear genome data from ancient Egyptian mummies (Credit: bpk/Aegyptisches Museum und Papyrussammlung, SMB/Sandra Steiss)
For the first time, scientists have extracted full nuclear genome data from ancient Egyptian mummies. The results offer exciting insights into how different ancient civilizations intermingled and also establish a breakthrough precedent in our ability to study ancient DNA.

The international team of scientists, led by researchers from the University of Tuebingen and the Max Planck Institute for the Science of Human History in Jena, sampled 151 mummified remains from a site called Abusir el-Meleq in Middle Egypt along the Nile River. The samples dated from 1400 BCE to 400 CE and were subjected to a new high-throughput DNA sequencing technique that allowed the team to successfully recover full genome-wide datasets from three individuals and mitochondrial genomes from 90 individuals.

“We wanted to test if the conquest of Alexander the Great and other foreign powers has left a genetic imprint on the ancient Egyptian population,” explains one of the lead authors of the study, Verena Schuenemann.

In 332 BCE, for example, Alexander the Great and his army tore through Egypt. Interestingly, the team found no genetic trace of Alexander the Great’s conquest, nor of any other foreign power that came through Egypt in the 1,300-year timespan studied.

“The genetics of the Abusir el-Meleq community did not undergo any major shifts during the 1,300 year timespan we studied,” says Wolfgang Haak, group leader at the Max Planck Institute, “suggesting that the population remained genetically relatively unaffected by foreign conquest and rule.”

They found that ancient Egyptians were closely related to Anatolian and Neolithic European populations, while also showing strong genetic traces from the Levant in the Near East (the region including modern Lebanon).




North Sentinel Island

The Sentinelese are among the last people worldwide to remain virtually untouched by modern civilization.


2009 NASA image of North Sentinel Island; the island’s protective fringe of coral reefs can be seen clearly.



North Sentinel Island is one of the Andaman Islands, which also include South Sentinel Island, in the Bay of Bengal. It is home to the Sentinelese, who reject, often violently, any contact with the outside world, and who are among the last people worldwide to remain virtually untouched by modern civilization. As such, only limited information about the island is known.

Nominally, the island belongs to the South Andaman administrative district, part of the Indian union territory of Andaman and Nicobar Islands.[8] In practice, Indian authorities recognise the islanders’ desire to be left alone and restrict their role to remote monitoring, even allowing them to kill non-Sentinelese people without prosecution.[9][10] Thus the island can be considered a sovereign entity under Indian protection.



(NASA Goddard and Steve Byrne)

A paper recently published in the International Journal of Astrobiology asks a fascinating question: “Would it be possible to detect an industrial civilization in the geological record?” Put another way, “How do we really know our civilization is the only one that’s ever been on Earth?” The truth is, we don’t. Think about it: The earliest evidence we have of humans is from 2.6 million years ago, in the Quaternary period. Earth is 4.54 billion years old. That leaves 4,537,400,000 years unaccounted for, plenty of time for evidence of an earlier industrial civilization to disappear into dust.
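The deep-time arithmetic behind that figure is simple, using only the numbers the article itself cites:

```python
# Earth's age minus the span covered by the earliest known human evidence
# (both figures as given in the article).
earth_age_years = 4_540_000_000      # ~4.54 billion years
earliest_human_evidence = 2_600_000  # ~2.6 million years ago (Quaternary)

unaccounted = earth_age_years - earliest_human_evidence
print(f"{unaccounted:,} years")  # 4,537,400,000 years
```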

The paper grew out of a conversation between co-authors Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, and astrophysics professor Adam Frank. (Frank recalls the exchange in an excellent piece in The Atlantic.) Considering the possible inevitability of any planet’s civilization destroying the environment on which it depends, Schmidt suddenly asked, “Wait a second. How do you know we’re the only time there’s been a civilization on our own planet?”

Schmidt and Frank recognize the whole question is a bit trippy, writing, “While much idle speculation and late night chatter has been devoted to this question, we are unaware of previous serious treatments of the problem of detectability of prior terrestrial industrial civilizations in the geologic past.”

There’s a thought-provoking paradox to consider here, too, which is that the longest-surviving civilizations might be expected to be the most sustainable, and thus leave less of a footprint than shorter-lived ones. So the most successful past civilizations would leave the least evidence for us to discover now. Hm.

Earlier humans, or…something else?

One of the astounding implications of the authors’ question is that, at least as far as we can tell from the available geologic record, an earlier industrial civilization could not have been human, or at least not Homo sapiens or our cousins. We appeared only about 300,000 years back. So anyone else would have to have been some other intelligent species for which no evidence remains, and about which we thus know nothing. Schmidt calls the notion of a previous non-human civilization the “Silurian hypothesis,” named for the brainy reptiles featured in a 1970 Doctor Who serial.


Doctor Who’s Silurians evolved from rubber suits to prosthetics (BBC)

Wouldn’t there be fossils?

Well, no. “The fraction of life that gets fossilized is always extremely small and varies widely as a function of time, habitat and degree of soft tissue versus hard shells or bones,” says the paper, noting further that, even for dinosaurs, there are only a few thousand nearly complete specimens. Chillingly, “species as short-lived as Homo sapiens (so far) might not be represented in the existing fossil record at all.”



Article Image
The Bed by Henri de Toulouse-Lautrec.

She was wide awake and it was nearly two in the morning. When asked if everything was alright, she said, “Yes.” Asked why she couldn’t get to sleep she said, “I don’t know.” Neuroscientist Russell Foster of Oxford might suggest she was exhibiting “a throwback to the bi-modal sleep pattern.” Research suggests we used to sleep in two segments with a period of wakefulness in-between.

A. Roger Ekirch, historian at Virginia Tech, uncovered our segmented sleep history in his 2005 book At Day’s Close: A Night in Time’s Past. There’s very little direct scientific research on sleep done before the 20th century, so Ekirch spent years going through early literature, court records, diaries, and medical records to find out how we slumbered. He found over 500 references to first and second sleep going all the way back to Homer’s Odyssey. “It’s not just the number of references—it is the way they refer to it as if it was common knowledge,” Ekirch tells BBC.

“He knew this, even in the horror with which he started from his first sleep, and threw up the window to dispel it by the presence of some object, beyond the room, which had not been, as it were, the witness of his dream.” — Charles Dickens, Barnaby Rudge (1840)

Here’s a suggestion for dealing with depression from English ballad ‘Old Robin of Portingale’:

“And at the wakening of your first sleepe/You shall have a hott drinke made/And at the wakening of your next sleepe/Your sorrowes will have a slake.”

Two-part sleep was practiced into the 20th century by people in Central America and Brazil and is still practiced in areas of Nigeria.

night street
(Photo: Alex Berger)

Night split in half

Segmented sleep—also known as broken sleep or biphasic sleep—worked like this:

  • First sleep or dead sleep began around dusk, lasting for three to four hours.
  • People woke up around midnight for a few hours of activity sometimes called “the watching.” They used it for things like praying, chopping wood, socializing with neighbors, and for sex. A character in Chaucer’s Canterbury Tales, written in the late 1300s, posited that the lower classes had more children because they used the waking period for procreation. In fact, some doctors recommended it for making babies. Ekirch found a doctor’s reference from 16th-century France that said the best time to conceive was not upon first going to bed, but after a restful first sleep, when it was likely to lead to “more enjoyment” and when lovers were more likely to “do it better.”
  • “Second sleep,” or morning sleep, began after the waking period and lasted until morning.

Why and when it ended

Given that we spend a third of our lives in slumber, it is odd that so little is known about our early sleep habits, though Ekirch says that writings prove people slept that way for thousands of years. If for no other reason, someone had to wake in the middle of the night to tend to fires and stoves.

Author Craig Koslofsky suggests in Evening’s Empire that before the 18th century, the wee hours beyond the home were the domain of the disreputable, and so the watching was all the nighttime activity anyone wanted. With the advent of modern lighting, though, there was an explosion in all manner of nighttime activity, and it ultimately left people exhausted. Staying up all night and sleepwalking through the day came to be viewed as distastefully self-indulgent, as noted in this advice for parents from an 1825 medical journal found by Ekirch: “If no disease or accident there intervene, they will need no further repose than that obtained in their first sleep, which custom will have caused to terminate by itself just at the usual hour. And then, if they turn upon their ear to take a second nap, they will be taught to look upon it as an intemperance not at all redounding to their credit.” Coupled with the desire for efficiency promoted by industrialization, the watch was increasingly considered a pointless disruption of much-needed rest.

The rise of insomnia


Intriguingly, right about the time accounts of first sleep and second sleep began to wane, references to insomnia began appearing. Foster isn’t the only one who wonders whether insomnia is a biological response to unsegmented sleep. Sleep psychologist Gregg Jacobs tells BBC, “For most of evolution we slept a certain way. Waking up during the night is part of normal human physiology.” He also notes that the watch was often a time for reflection and meditation that we may miss. “Today we spend less time doing those things,” he says. “It’s not a coincidence that, in modern life, the number of people who report anxiety, stress, depression, alcoholism and drug abuse has gone up.” It may also be no coincidence, though, that we no longer die at 40.

Subjects in an experiment in the 1990s gradually settled themselves into bi-phasic sleep after being kept in darkness 10 hours a day for a month, so it may be the way we naturally want to sleep. But is it the healthiest way?

Science says we’re doing it right, right now

Not everyone restricts their rest to a full night of sleep. Siestas are popular in various places, and there are geniuses who swear by short power naps throughout a day. Some have no choice but to sleep in segments, such as parents of infants and shift workers.

But according to sleep specialist Timothy A. Connolly of the Center of Sleep Medicine at St. Luke’s Episcopal Hospital in Houston, speaking to Everyday Health, “Studies show adults who consistently sleep seven to eight hours every night live longest.” Some people do fine on six hours, and some need 10, but it needs to be in one solid chunk. He says that each time sleep is disrupted, it impacts every cell, tissue, and organ, and the chances go up for a range of serious issues, including stroke, heart disease, obesity, and mood disorders.

Modern science is pretty unanimous: Sleeping a long, solid chunk each night gives you the best chance of living a long life, natural or not.



A new theory of consciousness: the mind exists as a field connected to the brain

Between quantum physics and neuroscience, a theory emerges of a mental field we each have, existing in another dimension and behaving in some ways like a black hole
October 11, 2017 12:22 pm, Last Updated: October 16, 2017 1:58 pm
By Tara MacIsaac, Epoch Times

The relationship between the mind and the brain is a mystery that is central to how we understand our very existence as sentient beings. Some say the mind is strictly a function of the brain — consciousness is the product of firing neurons. But some strive to scientifically understand the existence of a mind independent of, or at least to some degree separate from, the brain.

The peer-reviewed scientific journal NeuroQuantology brings together neuroscience and quantum physics — an interface that some scientists have used to explore this fundamental relationship between mind and brain.

An article published in the September 2017 edition of NeuroQuantology reviews and expands upon the current theories of consciousness that arise from this meeting of neuroscience and quantum physics.

Dr. Dirk Meijer (Courtesy of Dr. Dirk Meijer)

Dr. Dirk K.F. Meijer, a professor at the University of Groningen in the Netherlands, hypothesizes that consciousness resides in a field surrounding the brain. This field is in another dimension. It shares information with the brain through quantum entanglement, among other methods. And it has certain similarities with a black hole.

This field may be able to pick up information from the Earth’s magnetic field, dark energy, and other sources. It then “transmits wave information into the brain tissue, that … is instrumental in high-speed conscious and subconscious information processing,” Meijer wrote.

In other words, the “mind” is a field that exists around the brain; it picks up information from outside the brain and communicates it to the brain in an extremely fast process.

He described this field alternately as “a holographic structured field,” a “receptive mental workspace,” a “meta-cognitive domain,” and the “global memory space of the individual.”

Extremely rapid functions of the brain suggest it processes information through a mechanism not yet revealed.


There’s an unsolved mystery in neuroscience called the “binding problem.” Different parts of the brain are responsible for different things: some parts work on processing color, some on processing sound, et cetera. But, it somehow all comes together as a unified perception, or consciousness.

Information comes together and interacts in the brain more quickly than can be explained by our current understanding of neural transmissions in the brain. It thus seems the mind is more than just neurons firing in the brain.



Dogon dwelling on the Bandiagara Escarpment in Mali, West Africa

Dogon astronomical beliefs

Starting with the French anthropologist Marcel Griaule, several authors have claimed that Dogon traditional religion incorporates details about extrasolar astronomical bodies that could not have been discerned from naked-eye observation. This idea has entered the New Age and ancient astronaut literature as evidence that extraterrestrial aliens visited Mali in the distant past.


Cliff Palace, Mesa Verde, Colorado, USA


 This multi-storied ruin, the best-known cliff dwelling in Mesa Verde, is located in the largest alcove in the center of the Great Mesa. It was south- and southwest-facing, providing greater warmth from the sun in the winter. Dating back more than 700 years, the dwelling is constructed of sandstone, wooden beams, and mortar. Many of the rooms were brightly painted. Cliff Palace was home to approximately 125 people, but was likely an important part of a larger community of sixty nearby pueblos, which housed a combined six hundred or more people. With 23 kivas and 150 rooms, Cliff Palace is the largest cliff dwelling in Mesa Verde National Park.

The Border Between the ‘Two Englands’

In Great Britain as in the US, two cultural sub-nations identify themselves (and the other) as North and South. The US’s North and South are quite clearly delineated, by the states’ affiliations during the Civil War (which in the east coincides with the Mason-Dixon line). That line has become so emblematic that the US South is referred to as ‘Dixieland’.

There’s no similarly precise border in Great Britain, maybe because the ‘Two Englands’ never fought a civil war against each other. There is, however, a place used as shorthand for describing the divide, with the rougher, poorer North and the wealthier, middle-to-upper-class South referring to each other as ‘on the other side of the Watford Gap’.

Not to be confused with the sizeable town of Watford in Hertfordshire, Watford Gap is a small village in Northamptonshire. It was named for the eponymous hill pass that has facilitated east-west and north-south travel since at least Roman times (cf. Watling Street, which now passes through it as the A5 road). Other routes passing through the Gap are the West Coast Main Line railway, the Grand Union Canal and the M1, the UK’s main North-South motorway.

In olden times, the Gap was the location of an important coaching inn, which operated as the Watford Gap Pub until its closure around 2000. Nowadays it has the modern equivalent in a service station on the M1, which happened to be the first one in the UK.

Because of its function as a crossroads, its location on the main road and its proximity to the perceived ‘border’ between North and South, the Watford Gap has become the colloquial separator between both. Other such markers don’t really exist, so the border between North and South is quite vague. Until now, that is.

It turns out the divide is more between the Northwest and the Southeast: on this map, the line (which, incidentally, does cross the Watford Gap, somewhere in between Coventry and Leicester) runs from the estuary of the Severn (near the Welsh-English border) to the mouth of the Humber. This means that a town like Worcester is firmly in the North, although it’s much farther south than the ‘southern’ town of Lincoln.

At least, that’s the result of a Sheffield University study, which ‘divided’ Britain according to statistics about education standards, life expectancy, death rates, unemployment levels, house prices and voting patterns. The result splits the Midlands in two. “The idea of the Midlands region adds more confusion than light,” the study says.

The line divides Britain according to health and wealth, separating upland from lowland Britain and Tory from Labour Britain, and marking a £100,000 gap in house prices – and a year’s difference in life expectancy (in case you’re wondering: those in the North live a year less than those in the South).

The line does not take into account ‘pockets of wealth’ in the North (such as the Vale of York) or ‘pockets of poverty’ in the South, especially in London.

The map was produced for the Myth of the North exhibition at the Lowry arts complex in Manchester, and was mentioned recently in the Daily Mail. I’m afraid I don’t have an exact link to the article, but here is the page at the Lowry for the aforementioned exhibition.


Wendish in Japan

You may wonder how I came to search for traces of Wendish as far afield as Japan. It happened quite accidentally. I became curious about whether there was a linguistic connection between ancient Japanese and Wendish in the mid-1980s, when reading a biography of an American who had grown up in Japan. He mentions that a very ancient Japanese sword is called meich in Japanese. Surprisingly, meich or mech has the same meaning in Wendish. How did Wends reach Japan, and when? I decided first to find out whether this particular word, meich, really exists in Japanese – and, if it does, at what point in the past Wendish speakers could have had contact with the Japanese islands.

I describe this in more detail, mentioning my tentative conclusions with regard to the origins of Wendish in Japanese and its relation to the Ainu language, in the 5th installment of my article, The Extraordinary History of a Unique People, published in the Glasilo magazine, Toronto, Canada. Anyone interested will find all the already published installments of this article, including the 5th, on my still not quite organized website. In the next, winter issue of Glasilo, i.e., in the 6th installment of my article, I will report my discoveries and conclusions with regard to the origins of Wendish in the Ainu language, the language of the aboriginal white population of Japan.

I started my search for the word meich by buying Kenkyusha’s New School Japanese-English Dictionary. Unfortunately, I had acquired a dictionary meant for ordinary students and meich is not mentioned in it. Obviously, I should have bought a dictionary of Old Japanese instead, in which ancient terms are mentioned. Nevertheless, to my amazement, I found in Kenkyusha’s concise dictionary, instead of meich, many other Wendish words and cognates, which I am quoting below in my List.

I found it intriguing that the present forms of Japanese words with clearly Wendish roots show that Chinese and Korean immigrants to the islands were trying to learn Wendish, not vice versa. This indicates that the original population of Japan was Caucasian and that the influx of the Asian population was, at least at first, gradual. Today, after over 3,000 years of Chinese and Korean immigration, about half of the Japanese vocabulary is based on Chinese.

There is another puzzle to be solved. Logically, one would expect the language of the white aborigines of Japan, the Ainu – also deeply influenced by Wendish – to have been the origin of Wendish in modern Japanese. Yet, considering the make-up of the Wendish vocabulary occurring in Japanese, Ainu does not seem to have played any part in the formation of modern Japanese, or only a negligible one. The Wendish vocabulary in Japanese points to a different source. It seems to have been the result of a second, perhaps even a third, Wendish migration wave into the islands at a much later date. The Ainu seem to have arrived as early as the Ice Age, when present-day Japan was still part of the Asian continent. They remained hunters and gatherers until their final demise in the mid-20th century, and they retained their Ice Age religion, which regarded everything in the universe and on earth – including rocks and stars – as a spiritual entity to be respected and venerated. Wendish words in Japanese, however, mirror an evolved megalithic agricultural culture and a sun-venerating religion.

A list of all the Wendish cognates I have discovered in Kenkyusha’s dictionary is on my website, under the heading List of Wendish in Japanese. It is by no means a complete list. My Japanese is very limited, based solely on Kenkyusha’s dictionary and some introductory lessons on Japanese culture, history, language, literature and legends by a Japanese friend of mine with an authentically Wendish name: Hiroko, pronounced in the Tokyo dialect as in Wendish, shiroko – ‘wide, all-encompassing’. Besides, although I have a university-level knowledge of Wendish, I do not possess the extensive Wendish vocabulary necessary to discover most of the Wendish words that may have changed their meaning somewhat over thousands of years, a process complicated by the arrival of a new population whose language had nothing in common with Wendish.

Future, more thorough and patient researchers – whose mother-tongue is Wendish but who also have a thorough knowledge of Japanese – will, no doubt, find a vastly larger number of Wendish cognates in Japanese than I did.


Spaniard raised by wolves disappointed with human life
Marcos Rodríguez Pantoja, who lived among animals for 12 years, finds it hard just to get through the winter
Marcos Rodríguez Pantoja, outside his house. ÓSCAR CORRAL

Marcos Rodríguez Pantoja was once the “Mowgli” of Spain’s Sierra Morena mountain range, but life has changed a lot since then. Now the 72-year-old lives in a small, cold house in the village of Rante, in the Galician province of Ourense. This past winter has been hard for him, and a violent cough interrupts him often as he speaks.

His last happy memories were of his childhood with the wolves. The wolf cubs accepted him as a brother, while the she-wolf who fed him taught him the meaning of motherhood. He slept in a cave alongside bats, snakes and deer, listening to them as they exchanged squawks and howls. Together they taught him how to survive. Thanks to them, Rodríguez learned which berries and mushrooms were safe to eat.

Today, the former wolf boy, who was 19 when he was discovered by the Civil Guard and ripped away from his natural home, struggles with the coldness of the human world. It’s something that didn’t affect him so much when he was running around barefoot and half-naked with the wolves. “I only wrapped my feet up when they hurt because of the snow,” he remembers. “I had such big calluses on my feet that kicking a rock was like kicking a ball.”

After he was captured, Rodríguez’s world fell apart and he has never been able to fully recover. He’s been cheated and abused, exploited by bosses in the hospitality and construction industries, and never fully reintegrated into the human tribe. But at least his neighbors in Rante accept him as “one of them.” And now, the environmental group Amig@s das Arbores is raising money to insulate Rodríguez’s house and buy him a small pellet boiler – things that his meager pension cannot cover.

They laugh at me because I don’t know about politics or soccer

Marcos Rodríguez Pantoja

Rodríguez is one of the few documented cases in the world of a child being raised by animals away from humans. He was born in Añora, in Córdoba province, in 1946. His mother died giving birth when he was three years old, and his father left to live with another woman in Fuencaliente. Rodríguez only remembers abuse during this period of his life.

They took him to the mountains to replace an old goatherd who cared for 300 animals. The man taught him the use of fire and how to make utensils, but then died suddenly or disappeared, leaving Rodríguez completely alone around 1954, when he was just seven years old. When authorities found Rodríguez, he had swapped words for grunts. But he could still cry. “Animals also cry,” he says.

Marcos Rodríguez in his home. ÓSCAR CORRAL

He admits that he has tried to return to the mountains but “it is not what it used to be,” he says. Now the wolves don’t see him as a brother anymore. “You can tell that they are right there, you hear them panting, it gives you goosebumps … but it’s not that easy to see them,” he explains. “There are wolves and if I call out to them they are going to respond, but they are not going to approach me,” he says with a sigh. “I smell like people, I wear cologne.” He was also sad to see that there were now cottages and big electric gates where his cave used to be.

His experience has been the subject of various anthropological studies, books by authors such as Gabriel Janer, and the 2010 film Among Wolves (Entrelobos) by Gerardo Olivares. He insists that life has been much harder since he was thrown back into the modern world. “I think they laugh at me because I don’t know about politics or soccer,” he said one day. “Laugh back at them,” his doctor told him. “Everyone knows less than you.”

He has encountered many bad people along the way, but there have also been acts of solidarity. The forest officer Xosé Santos, a member of Amig@s das Arbores, organizes sessions at schools where Rodríguez can talk about his love for animals and the importance of caring for the environment. “It’s amazing how he enthralls the children with his life experience,” says Santos. Children, after all, are the humans whom Rodríguez feels most comfortable with.


English version by Melissa Kitson.

Discovered: 300,000-Year-Old Tools and Paints That Point to Early Humanity’s Cleverness

Findings out of Kenya offer a new understanding of when early humans got organized and started trading.


A team of anthropologists has determined that humanity has been handy for far longer than previously realized. The researchers discovered tools in East Africa dating back to around 320,000 years ago, far earlier than scientists previously thought humans were using such items.

Coming from the Olorgesailie geologic formation in southern Kenya, the findings, published in Science, show how the collection and creation of various colors through a pigmentation process was crucial to early human society. In addition to color creation, the team also found a variety of stone tools.

The earliest human life found at Olorgesailie dates back 1.2 million years. The question is: when did Homo sapiens start becoming a collective society? When did the transition occur, and what did it look like? That date has generally been put at around 100,000 years ago, thanks to evidence such as cave paintings in Ethiopia. However, the findings at Olorgesailie, where famed paleoanthropologists Louis and Mary Leakey also worked, show evidence of a social contract between geographically distant groups.


Lithuanian, the most conservative of all Indo-European languages, is riddled with references to bees.

In mid-January, the snow made the little coastal town of Šventoji in north-west Lithuania feel like a film set. Restaurants, shops and wooden holiday cabins all sat silently with their lights off, waiting for the arrival of spring.

I found what I was looking for on the edge of the town, not far from the banks of the iced-over Šventoji river and within earshot of the Baltic Sea: Žemaitiu alka, a shrine constructed by the Lithuanian neo-pagan organisation Romuva. Atop a small hillock stood 12 tall, thin, slightly tapering wooden figures. The decorations are austere but illustrative: two finish in little curving horns; affixed to the top of another is an orb emitting metal rays. One is adorned with nothing but a simple octagon. I looked down to the words carved vertically into the base and read ‘Austėja’. Below it was the English word: ‘bees’.

The Žemaitiu alka shrine features a wooden figure dedicated to Austėja, the pagan goddess of bees (Credit: Will Mawhood)


This was not the first time I’d encountered references to bees in Lithuania. During previous visits, my Lithuanian friends had told me about the significance of bees to their culture.

Lithuanians don’t speak about bees grouping together in a colony like English-speakers do. Instead, the word for a human family (šeima) is used. In the Lithuanian language, there are separate words for death depending on whether you’re talking about people or animals, but for bees – and only for bees – the former is used. And if you want to show a new-found Lithuanian pal what a good friend they are, you might please them by calling them bičiulis, a word roughly equivalent to ‘mate’, which has its root in bitė – bee. In Lithuania, it seems, a bee is like a good friend and a good friend is like a bee.

A bee is like a good friend and a good friend is like a bee

Seeing the shrine in Šventoji made me wonder: could all these references be explained by ancient Lithuanians worshipping bees as part of their pagan practices?

Lithuania has an extensive history of paganism. In fact, Lithuania was the last pagan state in Europe. Almost 1,000 years after the official conversion of the Roman Empire facilitated the gradual spread of Christianity, the Lithuanians continued to perform their ancient animist rituals and worship their gods in sacred groves. By the 13th Century, modern-day Estonia and Latvia were overrun and forcibly converted by crusaders, but the Lithuanians successfully resisted their attacks. Eventually, the state gave up paganism of its own accord: Grand Duke Jogaila converted to Catholicism in 1386 in order to marry the Queen of Poland.

This rich pagan history is understandably a source of fascination for modern Lithuanians – and many others besides. The problem is that few primary sources exist to tell us what Lithuanians believed before the arrival of Christianity. We can be sure that the god of thunder Perkūnas was of great importance as he is extensively documented in folklore and song, but most of the pantheon is based on guesswork. However, the Lithuanian language may provide – not proof, exactly, but clues, tantalising hints, about those gaps in the country’s past.

Before Grand Duke Jogaila converted to Catholicism in 1386, Lithuania was the last pagan state in Europe (Credit: PHAS/Getty Images)

In Kaunas, Lithuania’s second-largest city, I spoke to Dalia Senvaitytė, a professor of cultural anthropology at Vytautas Magnus University. She was sceptical about my bee-worshipping theory, telling me that there may have been a bee goddess by the name of Austėja, but she’s attested in just one source: a 16th-Century book on traditional Lithuanian beliefs written by a Polish historian.

It’s more likely, she said, that these bee-related terms reflect the significance of bees in medieval Lithuania. Beekeeping, she explained, “was regulated by community rules, as well as in special formal regulations”. Honey and beeswax were abundant and among the country’s main exports, I learned, which is why their production was strictly controlled.

But the fact that these references to bees have been preserved over hundreds of years demonstrates something rather interesting about the Lithuanian language: according to the Lithuanian Quarterly Journal of Arts and Sciences, it’s the most conservative of all living Indo-European languages. While its grammar, vocabulary and characteristic sounds have changed over time, they’ve done so only very slowly. For this reason, the Lithuanian language is of enormous use to researchers trying to reconstruct Proto-Indo-European, the single language, spoken around four to five millennia ago, that was the progenitor of tongues as diverse as English, Armenian, Italian and Bengali.

The Lithuanian word bičiulis, meaning ‘friend’, has its root in bitė, the word for ‘bee’ (Credit: Rambynas/Getty Images)

All these languages are related, but profound sound shifts that have gradually taken place have made them distinct from one another. You’d need to be a language expert to see the connection between English ‘five’ and French cinq – let alone the word that Proto-Indo-Europeans are thought to have used, pénkʷe. However, that connection is slightly easier to make out from the Latvian word pieci, and no trouble at all with Lithuanian penki. This is why famous French linguist Antoine Meillet once declared that “anyone wishing to hear how Indo-Europeans spoke should come and listen to a Lithuanian peasant”.

Lines can be drawn to other ancient languages too, even those that are quite geographically distant. For example, the Lithuanian word for castle or fortress – pilis – is completely different from those used by its non-Baltic neighbours, but is recognisably similar to the Ancient Greek word for town, polis. Surprisingly, Lithuanian is also thought to be the closest surviving European relative to Sanskrit, the oldest written Indo-European language, which is still used in Hindu ceremonies.

This last detail has led to claims of similarities between Indian and ancient Baltic cultures. A Lithuanian friend, Dovilas Bukauskas, told me about an event organised by local pagans that he attended. It began with the blessing of a figure of a grass snake – a sacred animal in Baltic tradition – and ended with a Hindu chant.

Honey and beeswax were among medieval Lithuania’s main exports (Credit: Roman Babakin/Alamy)

I asked Senvaitytė about the word gyvatė. This means ‘snake’, but it shares the same root with gyvybė, which means ‘life’. The grass snake has long been a sacred animal in Lithuania, reverenced as a symbol of fertility and luck, partially for its ability to shed its skin. A coincidence? Perhaps, but Senvaitytė thinks in this case probably not.

The language may also have played a role in preserving traditions in a different way. After Grand Duke Jogaila took the Polish throne in 1386, Lithuania’s gentry increasingly adopted not only Catholicism, but also the Polish language. Meanwhile, rural Lithuanians were much slower to adopt Christianity, not least because it was almost always preached in Polish or Latin. Even once Christianity had taken hold, Lithuanians were reluctant to give up their animist traditions. Hundreds of years after the country had officially adopted Christianity, travellers through the Lithuanian countryside reported seeing people leave bowls of milk out for grass snakes, in the hope that the animals would befriend the community and bring good luck.

Anyone wishing to hear how Indo-Europeans spoke should come and listen to a Lithuanian peasant

Similarly, bees and bee products seem to have retained importance, especially in folk medicine, for their perceived healing powers. Venom from a bee was used to treat viper bites, and one treatment for epilepsy apparently recommended drinking water with boiled dead bees. But only, of course, if the bees had died from natural causes.

But Lithuanian is no longer exclusively a rural language. The last century was a tumultuous one, bringing war, industrialisation and political change, and all of the country’s major cities now have majorities of Lithuanian-speakers. Following its accession to the EU in 2004, the country is now also increasingly integrated with Europe and the global market, which has led to the increasing presence of English-derived words, such as alternatyvus (alternative) and prioritetas (priority).

Lithuanian is no longer exclusively a rural language (Credit: Will Mawhood)

Given Lithuania’s troubled history, it’s in many ways amazing the language has survived to the present day. At its peak in the 14th Century, the Grand Duchy of Lithuania stretched as far as the Black Sea, but in the centuries since, the country has several times disappeared from the map entirely.

It’s too simplistic to say that Lithuanian allows us to piece together the more mysterious stretches in its history, such as the early, pagan years in which I’m so interested. But the language acts a little like the amber that people on the eastern shores of the Baltic have traded since ancient times, preserving, almost intact, meanings and structures that time has long since worn away everywhere else.

And whether or not Austėja was really worshipped, she has certainly remained a prominent presence: Austėja is consistently among the top 10 most popular girls’ names in Lithuania. It seems that, despite Lithuania’s inevitable cultural and linguistic evolution, the bee will always be held in high esteem.
