Something about Science, Gender, and Jobs

Recently, the Globe and Mail sought readers’ opinions on getting more women into male-dominated professions, the sciences in particular. According to the article, more Canadian men than women pursue a career in the sciences. While the numbers are closer for those who study science in university (“less than 40 percent” are women), after graduation the discrepancy widens when it comes to employment (“less than 22 percent”).

The writers don’t offer any explanation for this gap. However, the piece’s title, “How can we encourage more girls into science careers?” suggests a tacit assumption. “We” (whoever that is) are not doing enough to promote science careers to young women.

Education, parents, media, marketing, and whatever else constitutes “we” might very well be guilty of persuading women that science is for men. It’s hard to say; the article provides no evidence, which is to be expected considering it never states the claim explicitly anyway.

Since we’re in speculating mode, I can come up with a few other reasons for the gender difference in employment. Please bear in mind that we’d need actual research to substantiate any of these.

  • Older people have more of a gender gap than younger people
    • It wouldn’t surprise me if accounting for age or length of time in the field changes the way we understand the data. If recent numbers show less of a gap among science graduates, it’s likely that we’ll see less of a gap in employment once the older generation retires.
  • Women have babies
    • Yes, I know. More men are staying home with their kids these days, and that’s great if that’s what both partners want. However, I’d guess it’s still more common for women to stay home out of choice and/or tradition. More importantly, many women get pregnant, which requires at least some time off. Creating a human being is hard work, but not the kind you can put on your CV (unless you’re creating a homunculus in a lab). Even with the most supportive family, childbearing can put women behind in their careers when compared to their childless counterparts, including men. The more children you have, the further behind you will fall. A male commenter on the Globe article made this point quite well.
  • Employers are sexist
    • Not all employers are sexist. Obviously. But unless things have changed drastically since 2012, many employers have an implicit bias that they might not even be aware of. One study gave potential science mentors the exact same student application, but changed the name from male to female on half of them. They discovered that a gender bias really does exist: “Results found that the ‘female’ applicants were rated significantly lower than the ‘males’ in competence, hireability, and whether the scientist would be willing to mentor the student.”

These are just a few possible roots of the gender gap. Luckily, it does appear to be shifting. So yay.

Now here’s an issue nobody talks about in these discussions: why is no one encouraging boys to enter female-dominated professions? Where are the articles decrying the lack of men in nursing, social work, counseling, event planning, or teaching?

To be fair, earlier this year, Business Insider did note which jobs tend to employ more women than men. However, the brief article was bereft of the sense of alarm so often used to highlight the relative lack of women in traditionally masculine fields.

So why the paucity of interest in getting men into traditionally feminine careers? Let’s speculate some more.

  • Work traditionally viewed as masculine is more highly valued than work viewed as feminine

That’s the only reason I can think of. The work that women have done traditionally just doesn’t garner the same level of respect, as evidenced by the higher salaries typically paid for many traditionally masculine jobs.

The respective valuation of traditionally masculine and feminine work may be the real crux of ongoing gender inequality in the labour force. Today’s movement encourages women to be like men. On a large scale, “we” still tend to value masculine things over feminine things. The goal is to raise women up to the level of men, because women’s work does not have the same social standing, no matter how much it contributes to our health and economic function (e.g. social work or primary education).

In other words, it’s great to encourage women to do the same work as men. But we won’t have true equality until men can do the same work as women, without losing their social standing.


Pope Francis Announces What We Already Knew

Lots of people have been getting excited about Pope Francis. He seems moderate and progressive, a humanitarian Christian voice in a world plagued by religious extremism. Recently, instead of staying for dinner with politicians, he decided to eat with some of the homeless in Washington, DC.

That’s a great action and he seems like a decent person. He’s made waves by refusing to ride a bulletproof Popemobile and speaking out about climate change. What’s not to like?

He even went so far as to accept evolution and the Big Bang theory. Good for him. He has caught up with the rest of us.

Except not quite. According to The Independent, his acceptance of these theories relies on the fact that they necessarily incorporate a creator – that they don’t work without intentional design:

“The Big Bang, which today we hold to be the origin of the world, does not contradict the intervention of the divine creator but, rather, requires it.

“Evolution in nature is not inconsistent with the notion of creation, because evolution requires the creation of beings that evolve.”

These statements reveal how little he understands evolution. If “beings” automatically require a creator, then nothing can exist without something else existing to design it.

So wouldn’t the creator also require a creator and so on ad infinitum? This is an example of the infinite regress fallacy.

Okay, so maybe he’s imposing God on a theory that pretty much negates the possibility of a creator. But he’s a Catholic, so of course, he’s going to find a way to work God into proven scientific facts, right?

He’s entitled to express his opinion, even if it is based on a fallacy. What I don’t understand is why people applaud him for announcing a distorted, unfounded version of what scientists have already been telling us for a long time. Is Catholicism so far behind on the facts that even inaccurate science has become worthy of praise?

Then we have Elton John, the famous gay musician, calling Pope Francis his “hero” for promoting gay rights in the Catholic Church. The Advocate, a gay rights magazine, named the pope Person of the Year, allegedly for nudging the church in the direction of greater tolerance and inclusion of the LGBT community.

Wouldn’t that be great if it were true? Sadly, before becoming pope, Francis (then known as Archbishop Jorge Bergoglio) spoke out against legalizing gay marriage, calling it “an attempt to destroy God’s plan.”

People seem to have misunderstood what he meant by his now-famous quote, which appeared on the cover of The Advocate:

“If someone is gay and seeks the Lord with good will, who am I to judge?”

Of course, I can’t say what’s in his heart, but judging by his track record and other comments he has made, this statement seems more like a reference to casting the first stone only if you’re without sin yourself. He hasn’t actually said that homosexuality is not a sin, just that Catholics should stop judging others. For a more detailed analysis of the Vatican’s current stance on homosexuality, check out this insightful article from TIME. It’s not as radical as you might think.

The fact that people get so excited about this guy shows how restrictive and judgmental the Catholic clergy often are, not to mention hypocritical. At least Pope Francis appears to practice what he preaches.

It’s like the difference between a two-year-old drawing a stick person and an adult doing the same. The feat seems more impressive when the person isn’t fully developed.

I guess the same goes for the church taking baby steps. We’re so impressed by this pope that we forget how completely Catholicism would have to reinvent itself if it wanted to achieve any kind of progressive status.

Saudi Arabia, Land of Human Rights

“Farasan Island 3” by Bandar Yuosef – Flickr: Farasan Island_0392. Licensed under CC BY 2.0 via Wikimedia Commons

Last year, all atheists became terrorists – at least according to new laws introduced in Saudi Arabia. Non-believers aren’t alone though: anyone who criticizes the state, its rulers, or the Saudi version of Islam could be charged with terrorism. The Penal Law for Crimes of Terrorism and its Financing explicitly includes non-violent acts, effectively prohibiting any semblance of free speech, association, or religion.

The International Humanist and Ethical Union explains some of the key terms of the legislation:

The provisions of the “terrorism” law define and outlaw numerous acts and forms of expression as “terrorism”, including:

  • “Calling for atheist thought in any form”

  • any disloyalty “to the country’s rulers”, or anyone “who swears allegiance to any party, organization, current [of thought], group, or individual inside or outside [the kingdom]”;

  • anyone who aids, affiliates, or holds “sympathy” with any “terrorist” organization as defined by the act, which “includes participation in audio, written, or visual media; social media in its audio, written, or visual forms; internet websites; or circulating their contents in any form”;

  • contact with groups or individuals who are “hostile to the kingdom”

  • and the article on “Seeking to shake the social fabric or national cohesion” prohibits all protest, without qualification as to its message or intent, by outlawing “calling, participating, promoting, or inciting sit-ins, protests, meetings, or group statements in any form”.

Additionally, apostasy (denying Islam by adopting another faith or becoming an atheist) is punishable by death. The International Business Times states that 100 people have been put to death already this year, in compliance with laws prescribing capital punishment for “murder, rape, armed robbery, using recreational drugs, and smuggling, in addition to homosexuality, false prophecy, apostasy, adultery, witchcraft and sorcery.”

Raif Badawi is a case in point: he’s a Saudi Arabian blogger sentenced to ten years in prison and a thousand lashes for political criticism. It sounds crazy and it is, yet he is only one example of extreme corporal punishment among countless others that remain invisible to the international community.

With this lovely human rights record, Saudi Arabia somehow remains a full member of the United Nations. Not only that, but one of its representatives was quietly selected in June to head a panel of independent experts on the UN Human Rights Council.

This appointment followed on the heels of Saudi Arabia’s job opening for eight new executioners, described in the ad as “religious functionaries” working in the civil service, according to The Independent.

It’s like putting the head of ISIS in charge of human rights. Actually, the folks at UN Watch say that Saudi Arabia has beheaded more people than the famous extremist group this year.

Somehow, this is the real world, where farce sometimes merges with tragedy.

Science Is Not Truth

How do we know what is true? It’s an age-old question that hasn’t been fully resolved.

We do know that evaluating evidence and recognizing the role of subjectivity are part of the most reliable approach we’ve discovered so far in our trajectory as a species.

Generally, we call this approach “science.” But science is not the same as truth.

A few years ago, the American Anthropological Association (AAA) removed the word “science” from its description of the discipline of anthropology. Unsurprisingly, this act created an uproar with partisans on either side arguing for or against the definition of anthropology as a science.

Unfortunately, a lot of them seemed unable to articulate what science really is.

It’s a common mistake, and one you will encounter in other areas. When we debate the validity of science, we tend to take for granted that everyone knows what the term means. Fun fact: we don’t.

You may have come across the phrase “the scientific method.” This phrase provides a clearer indication of what’s going on than the single word “science.” “Science tells us that the Earth orbits the sun” would become “The scientific method tells us that the Earth orbits the sun.” It’s a subtle difference, but revealing nonetheless.

The first statement implies some sort of oracle or god (“Science, goddess of the Sky”) revealed an absolute truth. In the second version, we understand that a rigorous process was involved.

In the early twentieth century, anthropologist Alfred Kroeber (drawing on his teacher Franz Boas) wrote down three principles of science:

  • The method of science is to begin with questions, not with answers, least of all with value judgements.

  • Science is dispassionate inquiry and therefore cannot take over outright any ideologies “already formulated in everyday life”, since these are themselves inevitably traditional and normally tinged with emotional prejudice.

  • Sweeping all-or-none, black-and-white judgements are characteristic of categorical attitudes and have no place in science, whose very nature is inferential and judicious.

Kroeber’s teacher, Franz Boas, divided science into two branches, which he called the general sciences and the historical sciences. For him, the general sciences try to discover universal laws, while the historical sciences uncover the processes behind things that happen only once, usually in a specific time and place. The social sciences, including history, linguistics, sociology, and anthropology (among many others), fall into the latter category.

Both use the scientific method. Although the type of knowledge they produce is not identical, both the general sciences and the historical sciences contribute “true” ideas to the collective body of knowledge.

I put “true” in quotation marks because scientists of all stripes constantly work to disprove accepted theories. When evidence supports a theory, that’s great, but it’s more important to see what other evidence might disprove the theory. In this way, while it is a challenge to conclusively prove a scientific theory beyond any possibility of dispute, it can be quite simple to disprove the same theory. This is how we know that vaccines don’t cause autism (a disproven theory) and that gravity is likely more of a push than a pull (a theory modified from its original version).

As The Skeptical Raptor suggests, evidence against a hypothesis is more powerful than evidence in favour of it. In other words, if you have three bits of evidence in favour of a theory and only one against it, the theory is wrong. It might not be entirely wrong (as in the gravity example above), but something about it needs to be changed to more accurately represent what is happening.
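The asymmetry described above can be sketched in a few lines of Python (a toy illustration of my own, not something from The Skeptical Raptor): for a universal claim, a single counterexample falsifies it no matter how many confirmations precede it.

```python
# Toy illustration: one counterexample outweighs any number of confirmations.
# "All swans are white" is a universal claim, so a single non-white swan
# falsifies it regardless of how many white swans we have already seen.

def is_falsified(hypothesis, observations):
    """Return True if any observation contradicts the hypothesis."""
    return any(not hypothesis(obs) for obs in observations)

all_swans_are_white = lambda swan: swan == "white"

# Three confirmations and one counterexample: the theory is wrong as stated.
sightings = ["white", "white", "white", "black"]
print(is_falsified(all_swans_are_white, sightings))  # True
```

Note that the reverse never happens: no list of white swans, however long, can make `is_falsified` return a final verdict of “proven.” It can only keep failing to disprove the claim.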

This is why so-called scientific laws are not carved in stone. Just like national laws, they can change.

Scientists face a lot of criticism when they announce that they were wrong about something, or when they refuse to state anything with 100% certainty.

But what they are trying to say is that, while there is no absolute truth, there are degrees of validity.

Additionally, a theory is only based on the scientific method if people can imagine a way to test or disprove that theory.

If there is no way to test it, then there is also no way to prove it or argue it either way. We’ve left the realm of science to enter the realm of philosophy (which, by the way, has a lot of value in its own right).

One of psychoanalyst Sigmund Freud’s more infamous theories says that all girls experience penis envy, recognizing the male sex organ as superior to their own. When challenged with evidence that went against his theory, namely that many girls and women say they do not have penis envy, Freud simply asserted that they were in denial.

This is an example of how logic can fail us by denying or explaining away contradictory evidence. Evidence is more important than logic, and the hypothetical plausibility of a theory based on logic alone says little about how a process might actually play out in the real world. We can imagine all kinds of logically possible beings, processes, and events. Both Freud’s theory and his explanation of contradictory evidence sound logical and plausible. But as soon as he explains away the gaps with circular logic that draws on no evidence, the theory stops being scientific. You can no longer falsify it (i.e. come up with a way to test the theory) since there is nothing observable (i.e. no evidence). We are forced to toss the whole thing out the window.

Real scientists and seekers of knowledge are always trying to disprove their own theories. They don’t need something to be right just because they have always thought it must be true. Instead, they constantly re-examine their assumptions and come up with new ways to disprove existing explanations.

Of course, if people test a theory for decades and centuries with no successful disproof, then the theory generally becomes established as fact. In careless everyday speech, many people, including scientists, will say it is therefore “true,” but I would recommend being more specific in what we say. Otherwise, we risk obscuring that in this context the word “true” simply means “rigorously tested but never disproven.” If contradictory evidence surfaced, we would accept the error in the original theory.

Science requires a flexible mind, as well as an acceptance of uncertainty.

On a final note, I use the word “disproof” above, which is probably a more recent coinage than the word “proof.” Originally, to “prove” something meant to test it, as in the related word “probation.” Knowing this helps us understand what the scientific method means by proving something is true – it actually means the opposite of what we might think: you can only prove a theory by seeking to show it is false, but failing to find evidence against it.

Too Much Sleep and Not Enough Research: The Blunders of Popular Science

Whether because of shift work, high-pressure careers, or the incessant demands of infants, many of us just can’t get the sleep we want. So when a miraculous day off appears, the freedom to sleep in feels like the best thing that has ever happened.

Yet somehow, after ten, twelve, or even fifteen hours of sleep, we wake up groggy and still sleepy. The refreshment we craved has eluded us even after a bout of solid sleep. How can that be?

That’s the question posed by Wired science reporter Nick Stockton in his article “What’s Up With That: Why Does Sleeping In Just Make Me More Tired?”

Stockton explains that sleeping in upsets your body’s sense of time in the same way that quickly crossing time zones leads to jet lag. Your “biological clock” gets confused and your cells don’t know how much energy you need at what time. This part of the article makes sense.

He should’ve left it at that.

He then goes on to explain how sleep scientists have linked regular oversleep with health issues, specifically “diabetes, obesity, and even early death.” He lists a number of factors that could induce oversleeping, from alcohol and drugs to a lumpy bed that inhibits deep sleep, thereby causing you to feel tired for longer.

The article starts to leap from subject to subject with no transition or explanation. We start off with oversleeping being like jet lag, and then suddenly we are talking about regularly getting nine to eleven hours of sleep in a twenty-four-hour period (which is not like jet lag at all, since it’s regular). All at once, he turns to the subject of irregular sleep hours and how those who sleep in the day can trick their brains into thinking it’s nighttime. We are just getting our bearings when Stockton switches over to what happens during a sleep cycle and how to improve the overall quality of your sleep by changing your sleep situation. We are still catching our breath when he sprints over to recognized disorders like sleep apnea and narcolepsy, one of which causes you to stop breathing in your sleep and the other of which makes you fall asleep at inopportune times. He tenuously links these conditions to his topic with the line, “In addition to all the other terrifying aspects of this disease, it’s not doing your quality of sleep any favors.” We are far from the realm of oversleep at this point.

In the end, Stockton recalls his original point, advising readers to establish “some equilibrium between your weekend and weekday sleep.” Huh? Most of the article has nothing to do with this kind of imbalance, and a lot of the health issues he presents don’t actually have anything to do with oversleeping.

Although a punchy, enjoyable stylist, Stockton doesn’t seem to know what his article is about. He never defines oversleep (is it sleeping more than you usually do, or sleeping more than the social or scientific norm?). He answers the question in his title in the first four paragraphs and then takes a nosedive into a bunch of tangentially related material.

He also sets up an artificial causality not present in the scientific studies he cites. Cliché though it is, we have to remember the famous statistics maxim: correlation does not imply causation.

Just because memory loss or diabetes is more common in people who sleep more than is scientifically accepted as normal in this part of the world, we can’t assume that the relationship between a health condition and sleep is unidirectional. In fact, one study he cites from Harvard makes it explicitly clear that the jury is still out on how any of this works:

“Another possibility is a two-way street between sleep and memory: sleep quality may affect memory and thinking, and the brain changes that cause memory and thinking problems may disturb sleep.”

I’m not arguing that sleep has no impact on health. However, popular science writers like Stockton tend to ignore other possibilities. Their presentation style, if not their actual arguments, suggests that what they say is absolute fact. But maybe people who work a lot have diabetes because they don’t eat well, not because of their sleep. Perhaps someone who chronically “oversleeps” (whatever that means – we still don’t know) has some other condition that leads to both longer sleep duration and earlier death.

Nobody knows. Scientists don’t know, and they will be the first to tell you that sleep studies are still in their infancy. The world has only had the technology and social conditions to study sleep objectively since around the mid-twentieth century (see, for example, Kenton Kroker, The Sleep of Others and the Transformations of Sleep Research, 2007).

It’s no wonder we’re still confused.

So instead of pretending we have all the answers, let’s allow ourselves to live with our lack of knowledge. And let’s cut the muddled pop science articles trying to create coherence out of limited scientific evidence.