Category Archives: Culture

Culture: from national culture to religion, every article here touches upon a facet of morality in culture

Overcoming our obsession with homeownership

The idea of homeownership is deeply embedded into the narrative of the modern middle class. Owning a home is almost synonymous with controlling one’s own destiny, financial security in retirement, and setting up roots in a community.

For more than a century, politicians have encouraged this mentality and pushed policies that shift people into homeownership over alternative means of residence and investment. In America specifically, this culture has led to imprudent investing, regressive wealth distribution, and a perpetuation of inequality. America needs to get over its cultural obsession with homeownership and the accompanying policies that ostensibly promote it.

Homeownership as an investment

A large house
Photo by Jesse Roberts

In the wonderful story of homeownership, families that pour money into home equity over the course of a 30-year mortgage are promised a comfortable nest egg by the time they reach retirement. Encouraging Americans to save more is a worthy goal, but investing in a home often fails to deliver on the storied promise. Putting significant savings into home equity is the equivalent of putting all of one’s eggs into one basket, and a basket that cannot be moved. Stock prices tend to rise over the long term with bumps along the way, so anyone who can ride out the ups and downs will do well over a span like thirty years. Yet unlike an S&P ETF, a house cannot always wait until retirement to be sold. Families need to uproot themselves and move for a variety of reasons, and when they do, they may find themselves in the middle of a down housing market. And as we saw in 2008, housing prices are not guaranteed to go up over time.
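The timing risk alone can be sketched with a toy simulation. This is a minimal illustration, not a forecast: the return distribution, volatility, and forced-sale window are all assumed numbers, and a house is modeled as just another risky asset that must sometimes be sold early.

```python
import random

# Illustrative sketch with assumed parameters: compare holding an asset
# for a full 30 years vs. being forced to sell at a random earlier year.
random.seed(0)

def grow(years, mean=0.05, vol=0.15):
    """Compound one random path of annual returns (parameters assumed)."""
    value = 1.0
    for _ in range(years):
        value *= 1 + random.gauss(mean, vol)
    return value

trials = 10_000
held_full_term = [grow(30) for _ in range(trials)]
# A family forced to move sells somewhere between year 3 and year 12.
forced_sale = [grow(random.randint(3, 12)) for _ in range(trials)]

loss_rate_full = sum(v < 1.0 for v in held_full_term) / trials
loss_rate_forced = sum(v < 1.0 for v in forced_sale) / trials
print(f"Chance of ending below purchase price, 30-year hold:   {loss_rate_full:.1%}")
print(f"Chance of ending below purchase price, forced early sale: {loss_rate_forced:.1%}")
```

Under these assumptions the forced early seller faces a noticeably higher chance of selling at a loss, which is the point of the paragraph above: the long run only rescues investors who are free to wait for it.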

Policies are regressive and destabilizing

Agencies, legislation, and fiscal policies have been set up in the name of promoting homeownership. Perhaps the most significant is the mortgage interest deduction (MID). Because the interest paid on a mortgage can be deducted from taxable income, families are incentivized to take out more expensive mortgages than they otherwise would. Politicians sometimes defend the policy as helping people just on the margin of being able to afford a home. This tax deduction, the thinking goes, gives them the extra boost that brings them into that exciting club of homeowners. Regardless of intentions, the result is an American housing policy that favors wealthy homeowners over those on the border between renting and owning.
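The regressive arithmetic is simple enough to sketch. All the figures below are illustrative assumptions, not tax advice: the subsidy is roughly the interest paid times the filer’s marginal tax rate, and only households that itemize (disproportionately higher earners) can claim it at all.

```python
# Back-of-the-envelope sketch of why the MID skews toward the wealthy.
# All mortgage sizes, rates, and tax brackets below are assumed examples.

def mid_subsidy(mortgage_balance, interest_rate, marginal_tax_rate, itemizes):
    """First-year tax saving from deducting mortgage interest."""
    if not itemizes:
        return 0.0  # standard-deduction filers get nothing from the MID
    interest_paid = mortgage_balance * interest_rate
    return interest_paid * marginal_tax_rate

# A modest borrower in a low bracket who takes the standard deduction...
modest = mid_subsidy(150_000, 0.04, 0.12, itemizes=False)
# ...versus a high earner with a large mortgage who itemizes.
wealthy = mid_subsidy(750_000, 0.04, 0.37, itemizes=True)

print(f"Modest household subsidy:  ${modest:,.0f}")   # $0
print(f"Wealthy household subsidy: ${wealthy:,.0f}")  # $11,100
```

The household that needed no help gets a five-figure annual subsidy; the marginal buyer the policy is supposedly for gets nothing.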

Graph depicting household income percentile
Source: Vox

Policies like the MID also serve to destabilize our financial system. As Brink Lindsey and Steven Teles argue in their recent book, The Captured Economy, current housing policies encourage an over-reliance on debt that makes for a massive house of cards in the financial system. All the while, homeownership rates have barely budged from their 1980 levels.

Homeownership rates in the USA from 1995 to 2018
Source: US Census Bureau

Compared to a more targeted policy like down payment subsidies based on applicants’ income, the MID bloats purchases across the spectrum, meaning even slight moves in the economy can trigger a wave of foreclosures. Yet the MID is so embedded in middle-class life that reforming or abolishing it is a political non-starter, even though almost all economists consider it bad policy.

Aggravating NIMBYism

By tying a significant share of families’ savings to an asset that cannot move, homeownership pressures people to move heaven and earth to preserve the value of their homes. This perpetuates NIMBY – Not In My BackYard – policies that often serve a narrow group of residents over the common good.

Skyline of San Francisco
Skyline of San Francisco

When a city like San Francisco sees a massive increase in demand for housing due to an economic boom, construction of new housing is often choked off in part because the existing residents know an increased supply will lower their home values. The residents who would like to live in San Francisco but cannot afford it have no political clout compared to the current residents. Shutting people out of productive hubs like Silicon Valley perpetuates inequality by keeping lower-income people in low productivity areas and giving owners of capital more wealth.

As Mehrsa Baradaran has noted, NIMBYism can drive otherwise progressive people to great lengths to preserve their home values. Because such policies could lower home values, existing homeowners often vehemently oppose measures that encourage the integration of neighborhoods and schools. Although the effects on housing values may be uncertain, the risk of changing the character of their neighborhood and school system feels too great to bear when a retirement nest egg is at stake. The existing public education landscape in America, which ties school funding and choice to property taxes and zip codes, is held firmly in place by the desire to preserve property values.

Every area needs to put undesirable infrastructure like garbage dumps, sewage plants, or prisons somewhere. Because people are so closely tied to the value of their homes, politicians face pressure to place these structures far from the highest-value homes rather than where they make the most sense for the community. Whether one rents or owns, no one wants to live next door to a toxic waste dump. But the motivation to keep it out of Your Back Yard is much stronger for an owner than for a renter, who has far more mobility. Simply put, the risk that any neighborhood change poses to your property value can be so daunting that the political equilibrium is simply to maintain a stagnant status quo.

Racial Wealth Gap

The racial wealth gap in America today is staggering: the median white household has $171k in wealth compared to $17.6k for the median black household. Matt Rognlie, now at Northwestern University, decomposed the different kinds of wealth in the dataset behind Thomas Piketty’s Capital in the Twenty-First Century, which documented rising wealth inequality. For America, he found that housing alone could essentially explain the entire divergence. If we are going to be serious about narrowing the racial wealth gap, we need to reconsider American housing policy and recognize that a policy scheme that claims to encourage homeownership, but instead perpetuates NIMBYism and bloats the housing market, is a significant contributing factor.

Alternatives

Without an emphasis on homeownership, would Americans stumble into retirement without a nest egg, living in communities where ever-transient families shy away from setting down roots and investing in local social institutions? Germany and Switzerland have homeownership rates of around 40%, compared to the American rate of around 65%. These countries have different policies and cultures that can substitute for the positive effects of homeownership, but they are certainly not community-less dystopias or non-saving wastelands. A gradual transition toward less ownership and more renting in America is possible and would improve the finances and social cohesion of the country.

American colleges: the average student isn’t who you think

The image you have of the average American college student is probably wrong. Let’s try to find out why. Discussions of higher education policy in America are structured around three assumptions:

  1. A college degree is a necessary ticket to get into the middle class
  2. Debt from college is crippling millennials’ financial trajectory
  3. The existing college landscape perpetuates inequality

Policies typically aim to level the playing field in terms of access and financial hardship. In order to correctly address these goals, several misconceptions about the typical college experience in America need to be corrected. The narrative of an American student commonly depicted in the media is inaccurate and is counter-productive to achieving goals of equity and decreased financial duress.

‘Typical’ American students

The cast of ‘National Lampoon’s Animal House.’ (Courtesy of Universal Pictures)
The cast of ‘National Lampoon’s Animal House.’ (Courtesy of Universal Pictures)

Reading popular newspapers and watching adolescent-hijinks movies, one gets the impression that the median American college student lives in a dorm, graduates in four years, and spends a good amount of time in alcohol-fueled debauchery. The years before college are wholly dedicated to gaining acceptance to that elusive elite private university. The time between school years is split between promising unpaid internships and well-paid summer gigs in the big city. The reality, however, is that almost none of this represents a “typical American college experience”.

By nature of working in organizations with mass exposure, the people in Hollywood and those writing for the Washington Post, the Atlantic, and the New York Times disproportionately craft the narrative of what college in America supposedly looks like. While not everyone in these industries comes from privilege or attended an elite university, it’s fair to say that they are relatively high-performing and that many of their peers had similar college experiences. Their view of college is one defined by four years of self-exploration, graduating with hundreds of thousands of dollars in debt, and a first taste of independence and adulthood.

Actual student profiles

Despite the constant discussion of what’s going on at the Ivy League universities, NYU, Berkeley, or Stanford, more than 40 percent of American undergraduates last year attended community colleges. How often does the New York Times profile the student activities or funding cuts at a community college? More often, there seems to be an article about Americans choosing to study at a particular university in Scotland; while this may be relevant to the New York Times newsroom and its social bubble, we’re talking about a few hundred people a year enrolling at St Andrews. Only 62% of community college students are able to attend full-time, undercutting the image of the idle student sleeping until noon. The wild dorm life is less common than one might think as well: over half—yes, half—of American undergraduates live at home during their studies, and around 40% work 30 or more hours a week. Although the commonly pictured college student is on the cusp of their 20s, ready to face the world fresh out of high school, a quarter of undergraduates are actually older than 25, and an equal number are single parents.

Increasing student debt is placing significant burdens on graduates—and drop-outs—as they enter the labor market and eventually try to buy a home. Of course, any level of debt weighs more heavily on students from lower-income backgrounds and can discourage prospective students from enrolling in the first place.

Calls for a student debt jubilee need to seriously consider how that debt is distributed. Those who hold the most debt are more likely to come from higher-income families, hold additional degrees, and earn much higher lifetime incomes.

Deleting all student debt

Debt Is Higher among Graduates with Higher Degrees. Source

Wiping out all student debt, in the simplest plan, would thus be far more regressive than most people realize. Lower-income students at community colleges would benefit from having their debts eliminated, but most debt relief in dollar terms would go to people from higher-income families, with high future incomes and lighter financial burdens. Since taxpayers would foot the bill, complete debt elimination would amount to a redistribution upward. Decreasing the financial burden of lower-income individuals pursuing higher education should be a policy priority; debt relief and support just need to be targeted by financial need rather than handed to those already on a steadier path to financial security.

Wealthy individuals love to make a philanthropic splash by building their alma mater a new library, football stadium, or dorm. Of the $40+ billion given to higher education institutions last year, nearly a quarter went to just twenty universities, far less than 1% of all institutions. Giving to Harvard so that an under-privileged kid can attend without paying tuition is ostensibly a noble cause, but the students who really need help are at community colleges, perennially ignored by the donor class and policymakers at large. The reality is that when elite university graduates give back to their alma mater, the gift is more likely to improve the experience of someone from a middle-class background than to give an underprivileged student an opportunity to succeed.

Students from Austin Community College
Students. Source: Austin Community College

Stories about typical college life, even outside admissions and tuition costs, on topics like “political correctness”, again focus on campuses in that Ivy universe where only 0.4% of American students attend. Even among large universities, the actual destinations of most students are ignored in the national media narrative. During the most recent academic year, the four universities with the highest combined undergraduate and graduate enrollment were Texas A&M University, the University of Central Florida, Ohio State University, and Florida International University.

When it comes to making glamorous movies or writing dramatic articles about college dreams, the focus understandably centers on elite university life rather than the community college student working part-time in retail. But if we are going to level the playing field in American higher education and ease the financial hardship of attendance, there needs to be a reality check on what the typical American college experience actually is. Obvious places to start would be giving community colleges more attention, increasing childcare funding for students who are parents, and redirecting philanthropic efforts toward institutions that serve lower-income students. Without recognizing the typical student experience outside of private and elite universities, we risk dodging the issue and potentially making it worse.

Economists need more sympathy

The economics profession lost credibility after the financial crisis, seen by the public as overpromising the certainty and benefits of its models while failing to properly evaluate the risks involved. Even before Lehman Brothers collapsed, there existed a gap between how economists and the general public see the world.

Economic models’ overreliance on rational self-interest as the basis of human nature made their conclusions appear selfish and out of touch with reality. By not embracing a more nuanced view of human nature, economists lack a full understanding of how people behave and risk losing more credibility with the general public.

Economists: the textbook model

Mainstream academic economics is based on models that view individuals as rational utility-maximizers. People are seen as behaving in a way that benefits them to the highest degree given their available information.

 

Economic evolution

The Max U model is simple and beautiful. Homo Economicus, as it is sometimes called, is the rational economic human, guided only by the desire to maximize its well-being as measured by a metric called utility – not exactly happiness, but something close to it. A model is meant to be a simplified representation of reality, so there will inevitably be shortcomings.

Anyone who has lived in a functioning society observes that we are not always guided by selfish reasons. We often try to design institutions that promote justice, we help others in need, and we act morally even when someone isn’t looking.

Looking to avoid re-inventing the wheel (and to keep the same model…), some economists have tried to explain this behavior by redefining what it means to “maximize utility.” Perhaps I cooperate with you because I know it is to my benefit in the long run. Maybe I help someone in need under the expectation of future reciprocity. Or I give to a charity not because it helps someone else but because it gives me a sort of warm glow. All of these seemingly selfless behaviors could arguably be seen through a lens of self-interest.

Towards more sympathy

Adam Smith, ironically considered the founder of modern economics, had a different view: human behavior is founded upon our capacity for sympathetic fellow-feeling, not robotic prudence. In his book The Theory of Moral Sentiments, he gives instances of fellow-feeling that simply cannot arise out of self-love.

When we see a stroke aimed and just ready to fall upon the leg or arm of another person, we naturally shrink back our own leg or our own arm.

Adam Smith

Or when we see a tightrope walker struggle for balance, we twist and writhe as if our own bodies were seeking steadiness. Consider the strong emotions we can have when watching a movie or reading a book. Crying for fictional characters undergoing fictional circumstances is difficult to explain using the homo economicus model. Our tears will not help the characters involved and we cannot expect these fictional people to reciprocate our sympathy for them.

By Smith’s account, there is something inherent and universal in human nature that makes us instantaneously put ourselves in the situation of others, regardless of any potential payoff.

Some economists have recently tried to incorporate this enriched understanding of human nature into their models. Vernon Smith, 2002 Nobel Laureate in economics, has used Smith’s writings to design experiments that show our interactions with each other being more than just transactional.

Behavioral economists like Richard Thaler—also a Nobel Laureate—have tried to incorporate the ways people can be “predictably irrational” into models and policies that “nudge” people in a better direction. The ideas that humans are irrational or sympathetic are no longer on the margins, but economists have yet to successfully integrate them into models of the labor market, financial markets, or international trade. There has been progress away from the Max U foundation, but it still dominates economic theory.

Why the lack of change hurts economics

The foundational view of modern economic analysis does a disservice to the profession in two mutually reinforcing ways. First, by failing to acknowledge the sympathetic tendencies of people, economists lose predictive and explanatory power in their models. Additionally, the profession ends up with people who are more likely to see humans as behaving purely through rational self-interest.

Woman holding money
Source: Niels Steeman

In an experiment called the Dictator Game, people are given ten dollars and told they can give an anonymous peer as much of it as they wish. Under the Homo Economicus model, every person endowed with the ten dollars should keep all of it and give none away. The game is anonymous and not repeated, so altruistic behavior can’t be explained by expected reciprocity or by wanting to appear generous in the eyes of peers. Yet people across all age groups and walks of life show remarkably similar tendencies toward charity, consistently giving more than zero.

One major exception is economists, who give significantly less to their peers in these experiments. The explanation is likely some combination of two things, and both spell trouble: economists are naturally more selfish people, and studying economics convinces them that this is how we’re supposed to behave. Here on planet Earth, humans don’t always behave as the models suggest.

By relying only on rational self-interest, the economics profession is often met with a reasonable amount of skepticism about its findings. The caricature of the selfish human is not only inaccurate but also deeply unsettling to anyone concerned with morality. It’s therefore easy for the general public to brush aside economists’ findings as unreasonable and as failing to promote a just or altruistic vision of the world. Unfortunately, this means economic findings that go against people’s intuitions can be dismissed as heartless.

Rent control is an excellent example of a case where economists believe a well-intentioned policy hurts the poor more than it helps. The laws of supply and demand predict that putting a price ceiling on rent will cause a shortage of available apartments and lead to poorly-maintained dwellings.
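The prediction follows from textbook supply and demand. Here is a minimal sketch with assumed linear curves; every number is illustrative, chosen only to make the mechanics visible.

```python
# Textbook sketch of a rent ceiling with linear supply and demand.
# The curves and all numbers below are assumed for illustration only.

def demand(price):   # apartments households want at a given monthly rent
    return 10_000 - 4 * price

def supply(price):   # apartments landlords offer at that rent
    return 2 * price

# Market clears where supply equals demand: 2p = 10000 - 4p, so p = 10000/6.
equilibrium_rent = 10_000 / 6
print(f"Market rent: {equilibrium_rent:.0f}, units rented: {supply(equilibrium_rent):.0f}")

# Impose a ceiling below the market rent.
ceiling = 1_000
shortage = demand(ceiling) - supply(ceiling)
print(f"At a {ceiling} ceiling: demanded {demand(ceiling)}, "
      f"supplied {supply(ceiling)}, shortage {shortage}")
```

Capping rent below the clearing price raises the quantity demanded while lowering the quantity supplied, and the gap between the two is the waiting list.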

These predictions pan out noticeably in a highly-regulated housing market like Stockholm’s, where more than half a million people are on a waiting list for housing. By mandating rental prices below market rates, the government also incentivizes property owners to move their units into the less-regulated sales market, further decreasing the supply available to renters and making the shortage worse.

Of course, with a wait time of ten years or longer, those in Stockholm with friends or family that already have apartments are able to circumvent this dire situation. This means the few apartments that are available for rent are even more inaccessible to the less fortunate.

Rather than seeing economists’ views on rent control as valuable insight into how well-intentioned policies actually work in the real world, the public dismisses their conclusions as the product of a heartless devotion to efficiency and wealth maximization.

Could it be good enough?

It could be that the Max U model is good enough. Newton’s F = ma doesn’t hold in the domains of Einstein’s relativity or quantum mechanics. Here on planet Earth, however, we’re not moving at the speed of light; we just want to figure out how to build skyscrapers and bridges, and F = ma has gotten physicists pretty far.

Similarly, one could argue that Max U has made economics a powerful, if imperfect, explanatory tool. Still, there is overreach in what economists feel they can explain, especially without better incorporating fellow-feeling into their models. A better way to look at it: as much as economists want a “theory of everything” for human behavior, economics probably doesn’t have all the answers we’re looking for.

Discomfort about the changing modern world that led voters to Brexit and Trump is more likely a sociological phenomenon than an economic one. Utility maximization can’t explain opioid addiction or homelessness. For economists to regain the public’s trust, they need more humility and a recognition that their current framework is an incomplete view of human nature. Otherwise, even issues where they have much more certainty will continue to be met with skepticism from the general public.

Ignore Brexit and Trump, we’re better off than we used to be

The strong current of populism in high-income countries in the last year has taken many by surprise. An unexpected victory for the Brexit campaign and a shocking level of support for Donald Trump’s Presidential run are among the recent political events that show a drastic turn away from cosmopolitanism and towards nationalism.

Everyone is trying to make sense of these phenomena, seizing on whichever aspect of international policy trends confirms their previously held ideology. The racial aspect and a resistance to multiculturalism may in hindsight be viewed as significant catalysts for this sentiment. Perhaps the most popular explanation right now is the economic one: so-called neoliberalism has made a few rich at the expense of the working class, and this surge in populism is a revolt after being ignored for so long.

Whatever legitimate economic anxiety Brexiteers and Trumpkins have from the last few decades of increasing globalization, it is dwarfed by the historic rise in living standards nearly everywhere else in the world.

Elephant graph
Source: Bloomberg

Take a look at the graph above. A quick check on the internet will reveal slightly different permutations of it: maybe a different time period, an emphasis on certain countries, etc. All of them have the same message: the last few decades have seen incomes rise for the vast majority of people across the world.

The Good News

The quick way to interpret this graph is that the poorest 77 percent of the world—the first three quarters of people starting from the left-hand side—have, on average, seen their incomes drastically increase between 1988 and 2008.

Everyone above the 85th percentile has also seen their incomes rise. Consider that, during this time period, more people escaped poverty than in the rest of human history combined. The oft-vilified globalization, characterized in popular discourse by sweatshops, environmental destruction, and greedy multinational corporations, has coincided with bringing 320 million people out of poverty.

The time period and its policies and corporate behavior are not without flaws – climate change being an obvious and urgent downside. But looking at economic outcomes alone, this period did more to lift the standard of living of the bottom 75 percent of the world than any other in history. Was it because of globalization or in spite of it? Of the 320 million people who escaped poverty during this time, 270 million were in China. China’s general trend post-Mao has been a gradual embrace of markets and an opening up to trade. While China is experimenting with its own flavor of capitalism, there’s no doubt that its embrace of international trade and markets is the underlying cause of its tremendous growth during this period. In other words, aspects of what anyone would include in their definition of globalization are at the root of the country growing the way it has.

The Bad News

Notably, the poorest five percent of the global population still have not seen their incomes rise during this time. These people should not be ignored in global policy arguments, but how to lift their material well-being is a separate conversation. For now, focus on the eight percent of the world population who fall between the 77th and 85th percentiles, whose incomes have seen a slight dip. This group can be considered the working class in America, Britain, and other high-income countries. Assembly-line jobs that used to support entire families are being competed away by a combination of cheaper labor overseas and more efficient machinery. So far, the gains of globalization and technology have increased the affordability of the goods and services this group buys, but not the wages many need to purchase them.

In a sense, we can think of the “Western working-class” being pushed aside by an “Emerging Market working-class.” Emerging market economies like China, India, Brazil, and Indonesia are building their own middle classes, simultaneously lifting hundreds of millions out of poverty and displacing the Westerners that used to do that work.

The Political Reality

As demand has shifted toward labor in these emerging markets, the Western working class of yesterday has seen its wages fall. As the graph illustrates, the world’s high-income earners have seen their material well-being rise over the same period. This has indeed increased inequality within high-income countries. What is less appreciated, however, is the decrease in global inequality. The massive increase in purchasing power across the bottom 77 percent has made the gap between the gold-yacht billionaires—or even your typical middle-class American suburbanite—and the median Indonesian significantly smaller.

If we could increase everyone’s standard of living without any rough adjustment period, we would; that would be our best option. The actual alternative, the era of increased globalization, offered a regime that increased the well-being of perhaps 90 percent of the global population. Remember that the 77–85ers are still in the richest quarter of the global population. I imagine a global vote on the 1988–2008 international order would be mostly favorable.

In reality, the current marriage of national sovereignty and electoral democracy means that only citizens of a given country will vote for its leaders and policies, even if every country’s political landscape increasingly spills over into the rest of the world.

This means that the 77-85ers, generally being citizens of Western democracies, are, in a sense, overrepresented in the voting electorate. They represent a significant share of voters in high-income countries that, for the time being, play a powerful role in geopolitics and international economic affairs. With decisions like Brexit and the success of Trump so far, they are shifting international politics in a direction that reflects their economic conditions more than global economic conditions.

The last few decades of increased globalization, technology, and trade have not been perfect. The system is sub-optimal in some ways, through things like corporatism, unfair intellectual property enforcement, and environmental degradation. Yet, through all of this, hundreds of millions of people have escaped poverty. Those in the global 77–85 percentile feel they have been left behind. Some of this may be racial resentment or a general fear of change. From an economic point of view, their struggles, while legitimate, need to be placed in the greater global context of the last few decades. Rather than dismantling the current system and replacing it with protectionism, nationalism, and xenophobia, the preferred remedy would be to assist the 77–85ers within the existing system.

What’s the solution? More emphasis on education and skill-training? More progressive taxation? It’s not obvious what the most effective or cost-efficient policy would be. But before we upend an economic order with an unparalleled ability to lift people out of poverty, let’s appreciate the phenomenal gains of the last thirty years.

We do not live in a plague-free world

A young girl is admitted to the intensive care unit, presenting with fever, chills and painful, swollen lymph nodes. A course of antibiotics is administered. After laboratory confirmation, she is diagnosed with an infection of Yersinia pestis. Or as most will recognise by its disease name: the bubonic plague.

The plague is not only a fearsome illness medically; it also provokes vivid imagery of mass graves and corpses dragged through squalid villages in the Middle Ages.

Taking a step back from these sensational and almost romantic images of the medieval pandemic (which killed between 25 and 100 million people in the 1340s), look again at the more clinical and prosaic case of the young infected girl. She likely contracted the bacteria from a flea bite during a hunting trip in a small county in Oregon. Sifting through the news and social media and conversing with friends and family, I was struck by the surprise this case awakened in people:

We still have the plague?

Yes, we do not in fact live in a plague-free world. The causative agent survives in fleas that live on small wild animals, which serve as reservoirs for the infection. The plague therefore persists, circulating at very low rates in our environment. A bite from an infected flea allows the bacteria to enter a human and travel through the lymphatic system to the nearest lymph node, where it replicates and produces flu-like symptoms. If untreated, the infection can be rapidly fatal, with the World Health Organisation estimating a case fatality ratio of up to 60 percent. Fortunately, with early diagnosis, antibiotics and supportive therapy are enough to cure the infection.

Unexpected prevalence of the plague

Despite the shock of those I spoke to, and the way the case seemed to subsume media attention, this infection in Oregon was not an aberration. Searching for “plague, 2015” online, I found that every result Google’s comprehensive algorithm produced on the first few pages concerned only this girl in Oregon. One singular case.

Between 1989 and 2003 there were 38,310 reported cases of the plague in 25 countries. As recently as 2013, there were 783 reported cases and 126 deaths (“reported” being the crucial word). The plague is a largely underreported infection: its diagnosis depends on laboratory confirmation, which involves actually identifying the bacteria under a microscope in a fluid sample. The three most endemic countries for the disease are Madagascar, the Democratic Republic of the Congo and Peru. Within these countries, the infection is rife in areas that lack the ability to perform this laboratory confirmation.

Perhaps providing more perspective on the media’s bias: there is an ongoing outbreak of the plague in Madagascar. 263 cases and 71 deaths have been reported so far; 263 cases that were not found in the Google search. The plague flourishes in countries with areas of high population density, rodent infestation, poor health care infrastructure and low hygiene standards. Madagascar is the country most severely affected by the plague, with an outbreak occurring almost every year.

Neglected diseases

Although there are a multitude of other diseases deserving perhaps even more consideration (such as trachoma or kala-azar, for example), the plague is a paragon of all of the diseases neglected by the media. Neglected diseases, like the examples just given, collectively affect more than one billion people in 149 countries.

As with the plague in Madagascar, neglected tropical diseases affect vulnerable populations living in inadequate sanitary conditions with limited access to healthcare. Therein lies the predicament: these illnesses attract little attention and funding, yet many of them are silent killers and causes of debilitating disabilities. I was completely unaware that trachoma is a leading infectious cause of blindness until I started researching this article. These diseases tend to be less sensational than Ebola, as few of us feel personally threatened by them. Unless a case happens to occur close to us in the developed world, such as our young girl in Oregon, no media takeover is created.

While programmes and funds have been implemented to tackle these diseases, there are probably a handful of diseases you have never heard of that are affecting and killing people around the world. While Ebola and the unexpected infection of a young girl with Yersinia pestis are both relevant and medically paramount events, a coordinated global effort and media monsoon should be created for forgotten and ignored diseases like the plague, too. It is about time we stopped neglecting the neglected.

Uterus on loan: the final hurdle in fertility medicine?

A woman gave birth last year, and many are not aware of the medical and emotional significance of this particular birth. The Swedish woman in question gave birth with the help of a transplanted uterus, loaned to her by a family friend. After decades of research and numerous attempts, this proved to be the first successful birth from a transplanted uterus; previous attempts around the world had resulted in rejection of the organ.

It is beyond question a sweeping breakthrough, not only in the field of organ transplantation but also in fertility medicine; you could argue that it is the biggest breakthrough in fertility medicine since IVF. In essence, we now have the capability to allow women who do not have a uterus—or have one that is not viable for pregnancy—to carry their own child.

In the UK, it is estimated that 15,000 women of childbearing age do not have a uterus. Countless other women have undergone radiation therapy for cancer and have been left with uterine infertility. For these groups of women, the chance to carry their own child is becoming more tangible as approval has now been given to perform ten uterus transplants.

The science explained

The first success story, in Sweden, with hopefully many more to come, saw the uterus transplant followed by a mandatory year of monitoring. This was done to ensure that rejection would not occur and to allow immunosuppressant drugs to be taken to further reduce this risk. After the pregnancy and birth, the uterus was removed.

Where did the uterus come from? It was donated by a live donor. Although this may strike some as different from typical organ transplantation (which relies, for the most part, on brain-dead but heart-beating donors), the Swedish trials used live donors. The main advantage is the extended time available for pre-transplantation investigations, such as ruling out infection or abnormalities that might make the uterus non-viable for donation. Live-donor transplants are also associated with better survival rates, as is the case with kidney transplants.

That being said, the transplants to be performed in the UK will use the classically characterised donors I mentioned. The reasoning for this is simply, or not so simply, that the procedure itself is already highly complex. There are substantial medical risks to consider, and utilising classic donors reduces the number of surgeries. To avoid technical surgical jargon about vascular pedicles: the veins of the uterus have very thin walls with several branches, making cutting and reconnecting them a very difficult process.

Do the benefits outweigh the risks?

Uterus transplantation is an intricate procedure that bears medical risks. It is not a life-saving procedure: these women are otherwise healthy, and the operation does not prevent or fix illness. To many sceptics, it is plainly a non-vital transplantation, or in other words, a quality-of-life-enhancing procedure. These women still have routes to motherhood—adoption or surrogacy—with certain financial requisites, of course. I believe that for many people debating the ethics of the procedure, it is difficult to visualise the perceivable benefit over the perhaps more tangible risk.

A face transplant is not a medical necessity either, but many can identify it as an operation that greatly enhances the quality of life of someone severely disfigured. And here lies the issue: giving someone a face is, to some, more discernibly a quality-of-life enhancement than the temporary transplantation of a uterus.

For many women, not being able to carry their own child is psychologically gruelling, and the procedure enhances their quality of life. It is not solely a case of having children or not, but of experiencing pregnancy. I am not a mother and I do not plan on getting pregnant any time soon, but I can discern that this is a very real burden for many women.

With a transplantation that can eliminate emotional distress during an era in which we are increasingly adding to the expanding collection of transplantable organs, why the scepticism? We are now transplanting faces and hands and there is a body of evidence confirming uterus transplantation is viable. Why not alleviate suffering and let a woman consent to carrying her own child?

One father, two mothers: why are we resisting three-parent IVF?

There exists a heterogeneous group of diseases that affect many organ systems, like muscles and the central nervous system. Albeit rare, these diseases can sometimes leave children lucky to reach their fourth birthday. All of these diseases are caused by the same genetic malfunction.

These are called mitochondrial diseases and they are caused by a malfunction in the so-called powerhouse of the cell, the mitochondria, the part of the cell responsible primarily for energy conversion. You might remember this from biology classes in school, and you may also remember that another component of the cell is the nucleus. The nucleus stores the genetic material that makes you, you.

However, what you may not have known is that the mitochondria have their own genetic material, too—their own DNA. This DNA, composed of 37 genes, is passed down to you from your mother. While you receive your nuclear DNA from your mother and father—defining your personal traits—the mitochondrial DNA is solely there to build the mitochondria in all your cells. Mitochondrial DNA is unfortunately very unstable and incredibly prone to acquiring mutations. It is these mutations that can cause the aforementioned devastating diseases. Many are not curable; however, all are now preventable.

DNA from a man and woman plus one set of healthy mitochondrial DNA from a second woman

You may have heard of IVF—in vitro fertilisation—a fertilisation technique used since the late seventies where conception occurs outside of the womb. You may also be aware of the hype surrounding three-parent IVF, but do you understand what it truly means on a scientific and human level? With the advent of this new procedure, we are now able to use two sets of nuclear DNA from a man and woman plus one set of healthy mitochondrial DNA from a second woman. This has the capacity to prevent children from suffering these debilitating and eventually fatal illnesses. This year, the United Kingdom approved the use of these fertilisation techniques to prevent mitochondrial diseases from manifesting.

Doubt and ethics

As with any mention of tampering with DNA comes an onslaught of doubt and ethical debate. One side of the debate argues that the technique can lead to dangerous and unpredictable results. I argue that not using these techniques can lead to equally dangerous results: the suffering and early death of infants, deaths that are ultimately preventable.

Based on meticulous and highly regulated studies, a law was passed to govern the techniques used for three-parent IVF. I also argue that, as the Human Fertilisation and Embryology Authority (HFEA), Britain’s fertility regulator, will be scrupulously vetting and approving clinics to use these procedures, there is little leeway for dangerous results.

So then comes the eugenics debate. As I mentioned, editing the human genome is a touchy subject. Some believe that legalising forms of it could lead to selective breeding. However, I would point out that genome editing in a therapeutic context is already being used and developed in research facilities globally. We have for years been editing DNA to enhance our own immune cells in the fight against certain cancers, a field of research that is not being equated to eugenics. Admittedly, since the DNA modifications I am talking about here result in a brand new human being, it is a much more sensitive topic (I suppose).

Oversimplifying science

I believe this sensitivity arises, at least in part, from ignorance and over-simplified media coverage. Calling the product of three-parent IVF a genetically modified baby is somewhat of a stretch. It needs to be made clearer what this technique actually entails and what impact it really has on the genome of this brand new human being. The mitochondrial DNA in question does not determine any traits; it makes up a tiny fraction of a person’s whole genome, less than 0.2 percent. Modification of nuclear DNA, the DNA that determines traits, is still banned.

I would also like to make clear that the genes themselves are not being manipulated in this IVF procedure. Rather, a healthy set of mitochondrial DNA is being transplanted from the “third parent.” I use quotation marks because this donor contributes less than 0.2 percent of the baby’s genetic makeup—genetic makeup that does not provide any characteristics or traits. To claim that all three contributors, or parents, provide DNA in equal measure is false.

Furthermore, if we are referring to these babies as having three parents, should the same not be applied to those born from a surrogate? To clarify, with a surrogate the genetic material to form the baby is from two people, but an intrauterine environment in the surrogate also has significant effects on the baby’s health and development. Therefore, if we use such simplified language for preventing mitochondrial diseases with IVF, why is the same not applied to other fertilisation techniques?

Science coverage in the media can be dangerously over-simplified and even misleading, fuelling ethical debate. Transplanting mitochondrial DNA to create a baby free of mitochondrial disease is absolutely not a simple procedure. But if I were to ask you: given the opportunity to prevent the suffering and early death of thousands of babies with a regulated and approved procedure, would you approve of it? I would hope your answer is a simple ‘yes.’ After all, if the ethics behind performing a heart transplant to stave off death are not questioned, why should the ethics behind the transplant of mitochondrial DNA be?

Light in the foggy field of HIV treatment

Today a positive HIV diagnosis is not necessarily a death sentence. Tentatively we can now define HIV as a chronic disease rather than a fatal one. HIV treatment, or management of the virus with appropriate and timely therapy, can keep deadly secondary infections and disease at bay.

Of course, I say this prudently, as many are not fortunate enough to afford antiretrovirals, the current and only therapy option. Importantly, antiretrovirals are also unable to completely clear the virus from an infected individual, rendering HIV a lifelong burden.

It is obvious, then, when scrolling through the news and medical journals, that ‘cure’ and ‘eradicate’ are used cautiously in the field of HIV/AIDS research. This is not to say that hope and possibility do not exist, but they are blurred by the genetic and immune complexities involved in targeting the virus.

HIV possesses the ability to change its genetic makeup rapidly, making it arduous for the human immune system to keep up and produce the appropriate antibodies. The virus is also skilled at hiding, exacerbating the difficulty of targeting it. Recently, though, the media has shone a light through this haze, using words like “solution” and “promising” after the publication of a prospective breakthrough in March this year.

New hope

3BNC117 is this new hope. 3BNC117 is a highly potent antibody produced by the immune systems of only a small fraction of individuals infected by HIV. It has a powerful ability to restrict the replication of a wide array of HIV strains, solving the problem of the immune system’s inability to keep up with the virus’s genetic changes.

Researchers in New York cloned these antibodies and infused large doses into 17 HIV-positive patients. As published in Nature, the antibodies were found to be safe and highly effective, boosting the patients’ own immune systems. This led to a monumental decrease in the amount of HIV in the blood, a phenomenon demonstrated for the first time in human beings. It also made me more hopeful of seeing HIV redefined as something less, even, than a chronic illness; these promising results abound with possibilities.

Cue the numerous thoughts starting to creep through my head: could 3BNC117, used alongside current treatment, be effective in eradicating the virus in an infected individual? Could the antibodies be used as markers to find HIV hiding inside a patient’s cells? Finally, could 3BNC117 be used as a preventative tool, perhaps as part of a vaccine against HIV?

Caution required

It is, however, far too soon to hail these antibodies as the be-all and end-all of HIV treatment. I believe these results must still be gauged with some caution. Many different factors must be taken into account, not to mention the years required to develop and polish novel therapies.

Of perhaps even greater importance is the need to draw focus to the problems with our current therapy options. Undetected cases, unequal access to drugs and stigmatisation are all prevalent. The World Health Organization estimated in 2013 that a mere 37% of HIV-positive adults were receiving treatment. The CDC reports that people unaware of their HIV infection are responsible for nearly one in three of the ongoing transmissions in the United States.

A cure for HIV would be a landmark in medical history, but without knowing whom to cure, it would be superfluous. Many communities are still plagued by stigmatisation, as risky behaviours such as injecting drug use are associated with HIV transmission. It is therefore equally crucial to normalise a positive HIV diagnosis. Pressing governments and health authorities to expand initiatives that address the stigma and psychological issues hindering people from seeking HIV testing is a promising step. Without this step, the promising headlines of a cure are meaningless.

Cure and eradication are crucial goals to strive for, but focus must not be taken away from the socio-economic issues HIV/AIDS also harbours.


Religion: The Beautiful Game?

In the beginning, Something created Everything—the rest is just speculation…

It was late, dark, and bitterly cold. Walking home, making the usual right-turn into my own street, half conscious after another tough day at the office, I spotted two shadowy figures approaching on the pavement ahead. As I crossed the road towards my house, strangely, the silhouettes mirrored my movements. Suspicious of the coincidence, and of the still-anonymous figures, I carried on, gaze fixed firmly on the ground as we got closer. Within a few feet, I heard a “hey!” The thoughts that go through a tired mind in that situation are many and varied: Who is this? Why are they talking to me in the street? Do I know these people? Am I about to be attacked, yards from my own door? Fight-or-flight engaged, I looked up to confront my accosters…

The first thing I spotted was the goofy smiling faces, instantly lessening any sense of danger. Then, looking down at the impeccable blazers wrapped around winter coats, I saw the badges and motif: The Church of Jesus Christ of Latter-day Saints. The universal sign that suggests, “you are not about to be mugged… but a mugging might be more enjoyable.”

Like many an atheist, I have relaxed into a dismissive attitude towards any kind of potential religious or spiritual engagement in every similar situation my whole life. Whether through a lack of connection to the subject matter, a general suspicion, or a staunch (nay militant) anti-theist stance, we non-believers tend to react negatively to any attempt made by those of faith to engage us in the Divine. Recently, after some eye-opening experiences, I have begun a process of mellowing somewhat. After all, within the UK, 75% of people identify with a religion. They can’t all be wrong…right?

And this is the purpose of this piece: to challenge the attitude of atheists towards believers.

A secular metaphor

A wise friend once used a metaphor to describe religion:

Religion is like a sport… I really enjoy the game, but don’t follow any particular team.

The analogy is one with which this author has recently begun to connect, becoming engrossed in the discussion without ever deviating from the inherent values instilled during a largely secular childhood. However, before this removal of a metaphorical spiritual filter paper, the mere mention of Him, or Adam, or Eve, or anything greater than “what is” was enough to send a feeling through me that was a mixture of queasiness, irritation and incredulity. It never really became an issue, given the aforementioned upbringing and the non-religiousness of friends and family.

Church was just a place where we were made to go once or twice a year whilst at school, to sit on uncomfortable wooden planks (specifically designed to be so uncomfortable that nodding off was impossible), to sing awful hymns about angels and listen to a dusty old man ramble on about nothing in particular from a book that, unbelievably, contained no pictures. Religion for me, from an early age, was associated with the outdated, the traditional, the mundane, and the irrelevant. It seemed somewhat oppressive too—you can’t do this, don’t do that. Growing older, as an analytical mind assessed the merits of the Bible’s content, some of the moral teachings remained useful but the overarching proclamation of a Creator, a big entity in the sky who made everything, saw everything, and had a plan for everything, became more and more unlikely in my mind.

Older still, set in my ways with an arrogant belief in my own opinion on the matter, I made attempts to intellectualise. No realm of rationality could possibly explain what these “believers” espoused as undeniable truth. Therefore, in my mind, the Grand Proclamations were false; a well-meaning collection of stories that had been manipulated and exaggerated to incredible proportions. Those who were bold enough to stand toe-to-toe with my views on the matter were sprayed with (in my opinion) water-tight arguments and reason, my blasphemy launched in a blaze of heated rationality. Suggestions in favour of faith were dismissed, and further retorts about God’s Will, Eternity, and “what-it-says-in-The-Holy-Book” stirred a powerfully exasperated irritation within me. With no resolution ever forthcoming, we would agree to disagree, and I would scuttle away, silently wondering how my seemingly intelligent argumentative adversary could believe something so strongly that was so out of line with my perspective of rationality and truth.

However, recent developments in my personal life have initiated a rethink on the matter. Experiences at which I would have scoffed years ago have opened my mind to the concept that I might not actually be right (ironically, God forbid). I hasten to add, this does not refer to my atheist beliefs, which remain strong (though we’ll see about that on my day of judgement, should it come). I am referring instead to the attitude I had towards those of faith: the Disciples, the Bible-bashers, the born-agains and the general believers who put their faith in God.

For I have realised that, to fully understand the belief of someone who believes, you must fully engage them with a completely open mind. A mind open to the concept that there is a Higher Being, despite the absence of evidence and the atheist’s ardent opposition to the idea. You must open yourself up to the world of religion to see why people believe as they believe and, more importantly, the benefits they gain from it. An aspect of my dismissive attitude towards an omnipotent chieftain may even have stemmed from a fear of the implications of such. But you must also realise that, no matter how strongly anyone feels, we as a human race are, despite our efforts, unlikely to ever provide conclusive evidence to either prove, or disprove, the existence of a god or gods.

The latter argument suggests that the whole debate could be considered redundant. But what is life without a bit of arguing, eh?

Praise Jesus

I’ve recently spent a considerable amount of time around people of faith in the South of the USA (the Bible Belt, no less). Church is different there in comparison to the UK. Traditional gothic buildings are replaced with sprawling concert-esque venues. Organs are replaced with full rock bands, including backing singers. Wooden pews are replaced with luxurious cinema chairs (cup holders not included). Stained-glass windows and tapestries are replaced with multiple video screens and dry ice. So aesthetically at least, the US knows how to church. The success continues socially, too: over 500 vibrant and engaged members of the congregation file in cheerfully, manoeuvre to get the best seats and greet almost everyone else with enthusiastic chatter. At the commencement of the service, they hush to listen intently to a shamelessly charismatic preacher. The sermon is delivered on a theme of the day, littered with anecdotes, jokes, and inspiration. Powerful proclamations are met with sporadic “Jesus Christ!” or “praise Jesus” outbursts. Musical interludes and videos break up the “show.” It was, at once, like the stereotypes, but not like the stereotypes. For me, the whole thing was captivating.

Church seats


More than just a service, I attended a Bible Studies class. Bible Studies, for the uninitiated, is like an extra-curricular tutorial at school or university, supplementing the lecture—but the subject is The Big Man. Again surrounded by people with an urge to dedicate even more time in their week to God, beyond the minimum church service, I found the proceedings hypnotising. I, of course, contributed little in the way of discussion. This was a time for praise and peace, not cynicism, and so I absorbed. I felt that the mellowing was well and truly underway…

There was something quite strange about these experiences. I felt at once relaxed, encapsulated and entertained. Again, I must stress that this was in no way an epiphanic episode, but I guess I was starting to “enjoy the game” a bit more. The most powerful aspect for me, however, was the people I met, who, without exception, were warm, friendly, welcoming, and positive. These were the kind of people who exuded positive energy and passed it on, amongst themselves and even into me. Church in the US seemed like a battery recharge, fuelling people for the week until the next spiritual pitstop. The themes of service, sacrifice and gratefulness are widespread, inspiring a great deal of community action, charity, and goodwill. Finally, I had the insight I should have had years ago about why people believe the way they believe, and the positive influence it can have on life. I felt strangely tranquil.

An atheistic conversion

I’ve often wondered what I’m arguing about as an atheist. Am I a Crusader of Truth, in defence of science and rationality? Am I attempting to challenge a frequent source of oppression in the world? Am I trying to convert people to the relatively meaningless world of atheism? I conclude that I just enjoy a good argument. Even so, considering all the good that belonging to a religion can inspire, would a conversion to atheism make the world a better place? Given my experience in the USA, and the energy that I saw growing in people as a direct result of their belief in something greater, I cannot say for sure that it would. If we, as non-believers, move our focus from the negative aspects of religion to the positive, then we will be in a position to recognise its potential for good in the world.

I should probably clarify why, given the tone of this piece, I am not on my way to a conversion. The basic answer is that, although I am actively trying to support those of faith in their beliefs, I still fundamentally believe that there is no God. Further, I do not feel the need for a belief in God in my life. Some turn to God for comfort, some for hope, some for happiness, some for purpose. At this stage, my life has plenty of each of these, without feeling like anything is missing, spiritually at least. Some, in turn, would say that is something to do with the presence of God anyway. They are, of course, welcome to that belief. I’ve stopped being concerned with what happens in the afterlife, if anything, which can be a criticism aimed at atheists. When questioned about eternity, Ricky Gervais hit the nail on the head in stating that atheists “have nothing to die for, and have everything to live for.” However, as mentioned earlier, no one will ever be able to confirm or reject the existence of a deity, hence my commitment to respecting beliefs which so contradict my own. My experience has been mainly with Christianity, but I imagine the exploration would be similar with other religions.

The messengers

Remember those two shadowy figures, from the dark street? The young gentlemen I met were indeed lovely chaps (from Japan and Las Vegas respectively, no less). I asked why they had come so far. They told me that God had told them to come to my dreary hometown to help the community here find Him. In my previous incarnation, I may have retorted “well… He’s taking the piss then,” but I resisted the urge. I engaged them in conversation, argument and counter argument to such an extent that they actually said to me “eh, listen dude, we gotta go…” and in a flash they were gone, onto their next (and hopefully less primed) target. I had been so annoying that these saintly messengers from The Big Man had had their tolerance broken. I imagine they are currently writing an identical article to this from the reverse angle…

I wish them well in their quest. I would hope that, should they find success in bringing God to people in the UK, the newfound belief would create a powerful force for good in those individuals. Ultimately, that should be the end-goal for everyone: being and doing good. The source of this good, I feel, shouldn’t be important—atheist, Christian, Muslim or even Pastafarian, I wish them all good luck. I’ll continue to enjoy the game. When the final whistle goes, I guess we all might even find out for sure if there’s a winner.


Suis-je libre? Freedom of speech, freedom of press

Charlie Hebdo. The name could refer to anyone—a neighbour, a co-worker, a family member, even the local postman—but the moniker has recently taken on an entirely greater significance as the quasi-personification of the freedom of speech and freedom of the press enshrined in Western liberal democracy. The banner ‘Je Suis Charlie’ (“I am Charlie”) has been adopted by campaigners as an empowering tag-line, reinforcing solidarity amongst those who seek to defend these rights and condemn the actions of those who would seek to suppress them.

The shocking events in Paris in January, where at least 12 people lost their lives in a massacre at the offices of the satirical magazine Charlie Hebdo, were the latest in an all-too-long line of appalling acts by extremists attempting to punish and intimidate. The attackers were said to have been shouting “the Prophet is avenged” in response to the magazine’s publication of cartoons involving both the Prophet Mohammed and the leader of the Islamic State, Abu Bakr al-Baghdadi, strongly suggesting vengeance as an over-riding motive. However, the horrific actions have also ignited a passion in the population regarding the freedom of speech we apparently enjoy in most Western democracies, and a desire to unify to defend it.

But the question remains: how free is our freedom? How free should it be?

Legalities

The Human Rights Act 1998, which incorporates the European Convention on Human Rights into UK law, includes Article 10: the right to freedom of expression. Article 10 states that “everyone has the right to freedom of expression,” a potentially simple and emancipating ruling. However, in the subsequent clause, the right is immediately qualified: “the exercise of these freedoms…may be subject to such formalities, conditions, restrictions or penalties as prescribed by law and are necessary in a democratic society.” So immediately, our beacon of freedom of speech has been blunted in law.

The UK attempts to “prescribe by law” the aforementioned restrictions in Section 5 of the Public Order Act 1986, which criminalises “threatening, abusive or insulting words or behaviour.” Without tackling the legal intricacies, the provision contains a noticeable subjective element which, on one occasion, led to the conviction of a student for questioning the sexuality of a police officer’s horse… The government is said to have made efforts to repeal this controversial clause of Section 5, but some say that new extremist disruption orders will fill the anti-democratic void.

It seems clear, given the above, that our right to freedom of speech is, at present, qualified: like telling someone they can run around a shop, Supermarket Sweep style, and grab anything they want to keep for free, as long as it’s valued at less than £50.

Hate Speech

The reasons for restricting this misleadingly universal freedom are important and thought-provoking. Many of them focus on the prevalence of “hate speech”: the advocacy of hatred based on nationality, race or religion.

One could reasonably expect that allowing people, especially charismatic people addressing an impressionable audience, to espouse hatred against a particular group (or groups) would lead to negative and dangerous consequences. Exposure to highly charged emotional speech is likely, on occasion, to provoke a similarly charged reaction, and, when adrenaline and passion run high, the result is tragically often violence. Therefore, in an effort to protect society, the authorities have acted to suppress the public vocalisation of opinions that would incite the kind of behaviour that threatens public order and safety.

In a debate at the Hart House Debating Club in Canada, a participant offers up the notion that “hate is the ejection button of rationality,” thereby implying that someone who proclaims hate against another group does so out of passion, historical cultural triggers or one-off past experiences, in the absence of rational reasoning. With no way to reason with an individual so inclined, it is argued that the only defence against the spread of hatred and its potentially tragic implications is to control the source, i.e. the criminalisation of the public advocacy of hatred.

Obviously, this is a noble effort, and the UK benefits from a relatively peaceful society where views are, on the whole, offered and exchanged reasonably. The primary issue is the inherent subjectivity in the current legislation, and the increase in that subjectivity with every subsequent law. Someone has to decide what constitutes an “insult” for legislative purposes, yet offence is something that is taken, not given, and is therefore unique to the individual. Individuals, including the couch potato’s encyclopaedia, Stephen Fry, have questioned the entire validity of the concepts of insult and offence, though Fry’s comments surely do not assume the extreme consequence of vitriolic hatred as a result of insult.

The late Christopher Hitchens intimated that freedom of speech comprises two parts: the right to speak, and the right to listen. He argued that setting controls on what can be spoken deprives the individual of the right to listen, to process, to consider, and to reply. He openly defended the notion of an absolute right to free speech—even for those who lack manners, judgment, and sanity.

With this in mind, we consider the flip side of the coin: what happens if freedom of speech is extended without limits?

One controversial argument is that hate speech can lead to positive outcomes. When hate speech is aired in a society, those in opposition have the opportunity to analyse and critique the opinions proffered, thereby inducing an informed debate. This discourse can bring groups closer together, promoting engagement and the exchange of ideas in the process. Once we, as a society, become accustomed to and adept at these conversations, there is potential to “build immunity to taking offence,” as the comedian Rowan Atkinson implies in his impassioned defence of the Reform Section 5 campaign. Atkinson’s remedy for hate speech is more speech; the “intolerance of intolerance,” in his view, is a false cure.

Though wholly well-intentioned, this feels like a naive way to view the current situation, given the “rationality ejection” mentioned above. There seems to be very little one can gain through attempts at discourse with aggrieved extremists of any background (illustrated recently in the intense and disturbing documentary Angry, White and Proud), in the same way little can be gained by holding a white flag in the path of a runaway train.

Free speech is also seen as crucial to an individual’s ability to “self-actualise.” If a person sees the proclamation and proliferation of their views as necessary to realising their personal potential, then any barriers encountered will naturally be met with frustration, resistance and anger. Whether removing all qualifications on freedom of speech would lead to a happier society when the various goals of self-actualisation conflict, though, is debatable. One can picture a sea of irresistible forces meeting a forest of immovable objects.

Utopia versus War of the Worlds

Consider a hypothetical world with no restriction on free speech of any sort. Two opposing extremes could reasonably be foreseen, using the UK as an example:

Scenario 1: War of the Worlds

The UK Home Secretary announces that the barriers to any kind of free speech have been removed, and all opinions, however offensive, disgusting and inflammatory, are passable in public.

Religious and political fundamentalists increase their visibility, holding rallies for their supporters and openly recruiting in the streets. Leaflets, pamphlets and social media posts denouncing non-believers litter the public consciousness.

In response, groups of nationalists seeking to defend the British way of life band together, holding rallies for their supporters and openly recruiting in the streets. They hold marches to show strength and solidarity, and denounce the religious groups seeking to force their beliefs on the UK.

Young, impressionable minds are drawn to various extreme ideologies, fuelled by the charisma of their leaders and the utter conviction of their principles. Distrust grows. Heated exchanges often lead to violence, with no group willing to give an inch. Rational voices are drowned out by extreme views in the media.

The country becomes divided. A broken society, with cracks becoming crevasses…

Scenario 2: Paradise Found

The UK Home Secretary announces that the barriers to any kind of free speech have been dropped, and all opinions, however offensive, disgusting and inflammatory, are passable in public.

Those who hold extreme views take to the podiums, altars and stages, inciting violence and discrimination.

Society, blessed with a highly educated and reasonable majority, resists the extremism. Inflammatory views are scrutinised, taken apart, traced to their origins, and questioned, both in public and private. Satire becomes a weapon of the masses, holding up poisonous ideologies to ridicule.

Discussions are held. Deep, probing, informed discussions: at work, at home, at school, at the gym and in bars. People with opposing viewpoints from all kinds of backgrounds, thrown together in the UK cultural melting pot, take it upon themselves to address differences and exchange views through reasonable discourse. Community is built across ethnicities, religions and political leanings, creating an acceptance of, but immunity from, intolerance. Conditioned by exposure, people become less offended by hate speech; instead, their resolve is reinforced.

Gradually, hatred is replaced by rationality, a polite and compassionate acceptance of different beliefs and perspectives.

Which scenario is more likely? Although pockets of each may exist, realistically society would settle somewhere between the two.

Freedom of speech

The most important response to the Charlie Hebdo attack, then, has not been legislative or authoritarian. The most powerful outcome has been the collective solidarity that huge numbers of people have shown in defence of the right to freedom of expression. But which version of freedom of expression are we defending—the qualified or the unqualified?

Personally, this author would love nothing more than to say the latter, seizing any excuse to run down the street reciting the maxim attributed to Voltaire (in fact coined by his biographer, Evelyn Beatrice Hall): “I disapprove of what you say, but I will defend to the death your right to say it,” to my buddies as they make fun of my favourite sports team, question my political allegiances, or even take the mick out of my parentage (though, it should be noted, even the Pope doesn’t allow this).

But the truth is that this entire debate is not centred on the relatively moderate and secular views and beliefs of myself and countless others. Whatever it is about my upbringing and the development of my beliefs about the world, it is highly unlikely that I would resort to violence as a result of someone else’s words, however disagreeable. The danger, and the reason for controls on freedom of speech, lies with those for whom being offended is the trigger for deadly action. Whilst the limit on freedom of speech does not impact most people on a day-to-day basis, it places a restriction in law on those who would encourage violence and retribution against others, or the persecution of specific groups. This author, for one, will not be dying for anyone’s right to do either anytime soon.

The UK is not Saudi Arabia, where Raif Badawi, a liberal blogger, was recently flogged for online criticism of his country’s religious and political establishment, an act banned there by law. We can (and some would say, should) slag off our government. We can publicly state our opposition to the Royal Family. We can even question the existence of a divine entity. All of the above, protected by law, should not be taken for granted. What we cannot do is incite hatred, and that is a restriction on our liberty we should be happy to live with.

This author, for one, longs for the day when the limits on free speech can be removed. A day when, after years, perhaps decades, of education, integration, and discourse between disagreeing voices, we are carried through conflict to a social paradise where the UK is free of hatred, accepting of all cultures and beliefs, yet alive with healthy debate, discussion, and exchange. The limits on free speech as they stand act like stabilisers on a bicycle, controlling the risk of danger and injury until we are able, as a society, to ride on freely, shouting “look, no hands!” as the streets are lined with proud onlookers—an Englishman, an Irishman, a Chinese man, a Muslim, a Catholic, a Protestant, a homosexual, a transgender person, a disabled person, and many more—all applauding a wonderful achievement to which each had contributed in some way.

With the show of strength, unity, and determination of the people who took to the streets in Paris and around the world to commemorate the deaths of 12 people and support freedom of the press, we can live in hope that this day is not as far away as it may seem. Any one of us could have been Charlie; Charlie would have been proud.