Can an unjust world be better than a just one?

Can an unjust world be better than a just one? Injustice surrounds us in this world. Good people suffer. Bad people thrive. One person benefits at the expense of another. It seems the planet is groaning under the weight of its selfish inhabitants.

What if injustice did not exist? What if every relationship were fair? What if we all respected and honored each other and the other living things we share the world with? What if we did not have to work to defeat injustice? What if the world were always fair?

Imagine the lack of suffering. There would be no war. There would be no poverty, no homelessness. There would be no abuse or divorce. There would be no crime, no need for laws or punishments. Employers would pay fair wages and charge fair prices.

A world without injustice would be a perfect world, but could an unjust world be even better? I believe it can. To take a simple example, imagine two different relationships (whether romantic, financial or political). One relationship is perfectly fair. The other is not.

In the first imagined relationship, the people involved have a perfectly fair arrangement. Both agree that neither one benefits more than the other. Both are happy.

In the second imagined relationship, one person benefits more than the other. The disadvantaged person is unhappy and complains. In this particular scenario, the advantaged person listens and rectifies the situation. Both grow together as a result.

At first glance, the preferable relationship is clearly the fair one, in which both people are happy. But which relationship is happier in the end? Is it the one between people who never had a conflict? Or the one between people who experienced different sides of an unjust situation and rectified it?

I believe that when a person being harmed by injustice speaks up and the person benefiting from it fixes it, they both end up happier than the people who neither benefited from nor were harmed by injustice.

I believe injustice is an opportunity to make the world a better place than it would be if injustice had never occurred. When we let injustice go by, we lose twice: once because of the injustice itself and once because we miss the opportunity to become happier by fixing it.

This description of fixing injustice is obviously idealized. It assumes voluntary solutions and ignores how people fall short in their solutions or create new, smaller injustices in their attempts to fix the old, bigger ones. Injustice is rarely solved by a single act.

The Neolithic: Revolution or Evolution?

I, like you, am a small piece of a community, which is part of a society, which is a piece of an ecosystem, which is part of a planet, which is part of a solar system, which makes up a bit of a galaxy, which is one of a cluster, which is part of a supercluster, which is part of a universe.  And yet, I am also a collection of individual cells, organized into differentiated organs, each with its own role.  Each of those cells is itself a small community of organelles, including the nucleus and the mitochondria, the latter descended from what was once a separate species.  And each cell harbors a community of genes, which struggle individually and separately to replicate themselves through cell division, reproduction, survival of the individual person, survival of the community, survival of society and so on.  I am a part of a larger whole and I am a whole of smaller parts.

The grand organization I exist in developed slowly, very slowly, over billions of years, beginning with the simplest genes, or so we assume, and developing more and more complex forms of organization.  We now live in a multi-species community.  It is hard to call it anything else.  We convince ourselves that we are its masters, that we have molded it and not the other way around, that we have shaped the plants and animals around us into a world that meets our liking.  Perhaps we have, but things are never so simple.

Since the Neolithic Revolution brought multiple species of plants and animals into permanent relationships with human beings in different parts of the world at similar times, we have come to depend on a variety of other species for our existence, and they have come to depend on us.  It is more than symbiosis and it is something different than an ecosystem.  Human beings and our numerous allied species have transformed much of the planet into something we find more suitable to the propagation of our genes.  The plants and animals that evolved into human beings’ pleasures or pests have succeeded in propagating their genes in a way that would have been unimaginable to our hunter-gatherer forebears, if they ever thought to imagine such a thing.

Look at corn, or maize, as it is more properly called.  It descended from the humble teosinte grass and is so different from its ancestor that its origins were long debated.  Human beings bred and adapted this plant in Mesoamerica, as they bred and adapted wheat in the Middle East.  Or did the plant evolve to take advantage of human agriculture, making itself more appealing to us so we would spread its genes far and wide?  Did we choose corn or did it choose us?

I am more fascinated with the chili pepper, a plant that evolved long ago to be pleasant to birds, which spread its seeds, and unpleasant to mammals, which don’t, except for human beings, the only mammals that eat the fruit of the plant and have happily spread its seeds around the globe without once thinking they were doing the plant a favor.  Did we choose chili peppers or did they somehow choose us?

There are some species that have clearly joined up with human beings voluntarily, especially pests, but also cats and dogs, two carnivores that seem out of place in human society.  Cats’ participation in our communities appears to have been completely voluntary and some people believe dogs also domesticated themselves, joining up to get a regular lunch.  Maybe someday a raccoon subspecies will tame itself in a similar way.

But what about cows?  Cows no longer exist as a wild species.  We feel sorry for them.  I think that is appropriate, but at the same time, they are a spectacularly successful species mostly because human beings like to eat them and drink their milk.  If people never ate them, never bred them, how many cows would there be?  Cows may suffer, but their genes are fulfilling their one and only goal: reproduction.

In any case, human societies stopped being exclusively human thousands of years ago.  We are now part of a web of interdependent species.  Many of the species we rely on would struggle to survive in any other environment and we would struggle to survive without them.   We are part of a new ecosystem that can pick itself up and move in toto from one continent to another, an ecosystem that has cut down forests, drained swamps, filled in shallow seas, terraced hills, irrigated deserts and paved over grasslands in all but the least amenable climates.

Trees, grasses and corals are among the species that have played similar transformative roles, making homes for numerous other species, but ours is just a bit different.  In forests, grasslands and reefs, there are also complex relationships, but it is less important which species of tree, grass or coral is providing the home and which species play other roles, like pollinating flowers or spreading seeds.  In contrast, human ecosystems require a specific set of species.  Since the dawn of the Neolithic Revolution, when people move they take their web of species with them.

The hybridized human ecosystems that developed after Columbus have been more successful than the older ones.  The new mix of plants and animals that we can’t live without is one of the reasons for the huge growth in human numbers during the last few centuries.

So, did the Neolithic Revolution spark a new way of life or a new kind of life?  Was it a cultural change like the Industrial Revolution or an evolutionary change like multicellular life?  Are we really separate from the plants and animals we rely on?  Or do we need them as much as we need human society?  And is there a difference anymore?

What I Know to Be True

One of the few things I regret from my nearly 50 years as a member of the Church of Jesus Christ of Latter-day Saints (formerly known as the Mormons) was standing up in church and talking about knowing the church was true.

I know what experiences I based those statements on, but I no longer believe those experiences meant the church was the one and only church with authority to act in the name of Jesus Christ. I believe such experiences are available in other churches and religions. I believe they do indicate that God exists, but I can’t say that for sure, either.

The one thing I know is that it is wrong to mistreat other people. I know it is right to be good to each other. I know it is right to be kind. I know it is wrong to deliberately hurt someone else. That’s the sort of thing I know. Everything else is belief.

Why Occam’s razor works for science, but not religion

For those who don’t know, “Occam’s razor” describes a way of choosing between two different explanations for the same thing.  It basically means that, all else being equal, the simplest explanation is the best one.  It was named after William of Ockham, a 14th-century English friar and scholar (because pretty much all scholars were churchmen at the time).

It is my observation that Occam’s razor really describes an almost universal human strategy.  It is the way human beings approach truth.  “Where there’s smoke, there’s fire” is one example of how people apply the principle to ordinary life.  The saying assumes that the simplest explanation for human behavior is the true one.

The problem with Occam’s razor arises when it is used as a way to determine absolute truth, rather than which truth is most likely.  It’s not unusual for people to use Occam’s razor as an argument-ender (Hypothesis A is simpler, so Occam’s razor says A is true. End of discussion.).  The problem is that the principle is only as good as the data we have.  More data can change what the simplest explanation is.  What we thought was the simplest explanation can turn out to be very complicated.

For example, for most of the period that human beings have been around to contemplate the world we live in, most people believed the sun revolved around the earth.  That was the simplest explanation.  Until the 16th century, it was, in fact, the explanation demanded by Occam’s razor in almost all human societies.

What changed?  More and better data.  Europeans realized that the model of the universe they were using didn’t explain the movements of the planets well enough.  If all the planets revolved around the earth, their paths could not be described by nice, neat orbits, and the geocentric model of the universe became more and more complicated as astronomers piled epicycle upon epicycle.  Copernicus put the sun at the center, and the motions of the planets became much simpler to describe.  More data made a heliocentric model of the universe the simplest explanation, and it was eventually adopted by everyone (until it became clear that the sun was not the center of the universe, either).

The amount of data we have today makes a geocentric model completely impossible (the few people who claim to doubt it are obliged to say the data is false).  We have sent people to the moon and machines to the farthest reaches of the solar system.  We have taken so many photographs and measurements of objects in space and of the earth from space that a geocentric model of the universe isn’t even an option.  It’s easy to forget that there was a time when the simplest, most scientific explanation for the motion of the sun, moon, planets and stars was that everything revolved around the earth.

When Copernicus first described his theory, there was nowhere near as much data available and only experts knew the data existed.  At first, Copernicus’ model was only highly probable, not proven, and non-experts had little or no reason to believe him.  We judge the critics of Copernicus and Galileo a bit too harshly.

There is also a flaw in the human mind.  People have a hard time with concepts like “almost certain” and “highly probable.”  Human beings see them as meaning the same thing as either “true” or “false,” depending on what they are inclined to believe.  If you are willing to believe an idea that is “almost certain” or even “highly probable,” you will probably see “highly probable” as being the same as “true.”  If you really don’t want to believe an idea, you will probably focus on the inherent uncertainty in the term “highly probable” or “almost certain” and say that the idea is “false.”

This is a problem in science, for both scientists and the public, since science rarely declares an idea to be completely true or false at first.  In most cases, science initially rates ideas as being “probable,” “unlikely,” “highly probable,” etc.  Scientists themselves often take sides in scientific debates and talk about their side as if it were “true,” while scientists on the other side talk about it as if it were “false.”  We can hardly expect the public to be more nuanced than scientists are themselves.

As scientists accumulate more data, their ideas stop being “probable” and become either “highly probable” if the evidence supports them or “not very likely” if it doesn’t.  Then—if we’re lucky—more data will show the ideas to be either “proven” or “disproved.”  This has happened again and again.  It is how science works, as a whole.

Take evolution.  The amount of data Darwin was working with was fairly small, but over time, biologists described more and more species and paleontologists dug up a seemingly immense number of fossils, with approximate dates provided by dating methods that have themselves gained more and more certainty as more data has accumulated.  Biologists examined minute cell structures under the microscope and then geneticists added in DNA evidence.

The amount of evidence that supports evolution is now staggering, with much of that evidence discovered in the last 50 years.  Of course, not everyone believes it, for the same reasons human beings rejected previous new ideas: they don’t know the evidence and hearing any uncertainty about an idea they really don’t like is the same as hearing that it’s false.

Of course, not every idea in science is proven to be true.  Some are quietly forgotten as new evidence shows that they aren’t just unlikely, but false.  For example, scientists used to declare that there was a sharp division between animal intelligence and human intelligence.  That division is slowly blurring as scientists accumulate more data.  New evidence always seems to contradict the idea, rather than confirm it.  The idea that there is a large gap between the intelligence of human beings and that of all (other) animals is headed for the dustbin of history.

This does not bother scientists because science is supposed to be the best description we have of the world around us and how it works, and science is always evolving.  That’s the whole point of doing it.  Some ideas in science are now beyond dispute, but others are not.  The disputable ideas are the ones scientists love investigating and arguing about, by the way.

Religion is not science.  Religion does not purport to be “the best description of the world available,” but “the truth.”  It also makes claims about things that cannot be investigated, proven or disproved.  That’s the whole point of religion.  “Highly probable” is not an acceptable level of certainty in religion.  Religion is supposed to go beyond the available data.

Religion is, indeed, a matter of faith.  For the believer, it is a matter of knowing true things that cannot be discovered by science.  In a religious context Occam’s razor becomes unhelpful because it describes what is most likely true, given the available data, while religion is supposed to describe what is true, without any available data.  That is the very definition of faith.

For example, does evolution disprove the Bible?  Some people believe that it does, but many others believe that it does not.  These believers do not see the question as being “Is the creation story in Genesis true or false?” but rather as “Can the Bible be true even if the creation story is false?” or “Can my religious beliefs be true if the creation story in Genesis is false?”  Millions of people have decided that the answer to one or both of those questions is yes.

For the religious, it is not a question of what is most likely, but of what is possible. Religious people arrive at their beliefs through methods that are not subject to scientific investigation.  When they use science and evidence to test their religious beliefs, their question is not usually “Is it likely that my religious beliefs are true?” but “Is it possible that my religious beliefs are true?”

While specific beliefs of religion can be proven or disproved, the scant evidence surrounding religious belief almost always leaves some uncertainty in general matters, enough wiggle room for people to say “Yes, my central religious beliefs can be true.”  For true believers, the possibility that their beliefs are true is all they need, since they didn’t base their belief on physical evidence in the first place and never expected to have proof of them.

That’s the attitude they have if they’re objective, which most people aren’t.  Most people will do the same as they do with science, except it kind of works in reverse.  When people want to believe in a religion, any uncertainty about evidence that contradicts it will make them see that evidence as “false,” but when people don’t want to believe in a religion, any evidence that “very probably” contradicts it will be seen as “certain proof.”  Believers will think the evidence is irrelevant, while doubters will think that the belief in question has been as thoroughly disproved as the idea that the sun revolves around the earth.

In sum, Occam’s razor is a useful tool, but it is not the same as proof.  When applied to faith, it loses its usefulness.  In addition, the human emotions surrounding religion will cause most people either to exaggerate what it says or ignore it entirely.

Religion, Atheism and Critical Thinking

To begin with, I have to say that I believe in God, but I do not believe atheists are going to hell.  In my opinion, there is little significant difference between believing in goodness and believing in God.  I am more confident in the eternal fate of an atheist who tries to be the best person they can than I am in the fate of a believer who muddles through life without making the hard choices that true goodness requires on a regular basis.

I believe I am on firm ground on this, by the way.  Changing your opinion is easy.  Changing your character is not.  As far as I’m concerned, character is what counts, now and forever.

So, when the New Testament, for example, talks about the need to believe in Jesus Christ, I see that belief as being measured by a person’s actions, not their opinions.  And if we cannot change our opinions after death, we are all in trouble.  Imagine being stuck with the opinions you have now for all eternity!  I know that a belief in God is not just any opinion, but is it so different that it cannot be changed after death?

My religion leads me to this view, of course.  While I am not an “active” Mormon at the moment, the church’s teachings still inform my beliefs and the idea that people can choose to convert after death is essential to the religion’s view of life, death and the eternities.  So, the arguments I have made here are really nothing more than a way of explaining and defending Mormon belief with generic terminology.

As a result of this belief, Mormons do not usually fear for the eternal fate of non-believers.  I am hardly alone in this perspective.  While individual Mormons may look at some of their loved ones and believe they are bound for hell, most Mormons are quite optimistic about the fate of the people who are dear to them.  The religion gives you a choice in how you see others.

The real issue I wanted to address, however, was the claim that religious people (especially Mormons) abandon critical thinking to maintain their beliefs.  I had to include the above paragraph so that I didn’t contribute to—or participate in—the atheist-bashing that is so common in American culture.

It is simply not true that religious people do not engage in critical thinking.  It is not even true that Mormons do not engage in the practice, no matter how many people may claim otherwise.  (For examples of Mormons using critical thinking, see Joanna Brooks and Jana Riess or the sprawling website By Common Consent.)

Someone who does not use critical thinking will encounter facts that counter their beliefs and ignore them.  Someone who uses critical thinking will encounter those same facts and see if there is some way they can be reconciled with their beliefs.  They ask whether it is their understanding of God and religion that is wrong, rather than rejecting the facts or abandoning their beliefs.  They ask whether they need to adjust and adapt their beliefs.  They ask whether rejecting one of their beliefs will really diminish the rest of them.

Someone who engages in critical thinking does not immediately abandon their beliefs, although they may do so eventually if they can find no way of reconciling or adapting their new knowledge to their former beliefs.  I’m not saying that critical thinkers never come to the conclusion that they must abandon what they previously believed.  It is clear that some do.  But the majority of critical thinkers with deep-held beliefs about God do maintain most of them, without rejecting the new facts they encounter.

In sum, you cannot divine a person’s character or their eternal fate by asking if they believe in God, and neither can you use the question to assess a person’s critical thinking skills.

Noble Selfishness: Donald Trump, Bernie Sanders, the Brexit and Islamic Terrorism

I woke up this morning in a world that seemed somehow changed.  After months of hourly news developments related to Donald Trump’s latest outrage, followed by the “worst mass shooting in American history” (which may or may not have been an act of terrorism), even as we approached one year since a white supremacist murdered nine black worshippers in a historic Charleston church, we now face a British vote to exit the European Union.  As my mind struggles to make sense of it all, I recall a common phenomenon I have noticed among parents: noble selfishness.

An individual’s efforts to advance their own welfare are labeled, quite appropriately, as acts of selfishness, but a parent’s efforts to advance the welfare of their children are often seen as acts of love.  It is rare for a parent’s advocacy on behalf of their children to be seen as a fault.

Coincidentally, we have also seen one of those rare moments in recent weeks, as the father of a young rapist was dragged through hell on the internet for defending his son and brushing aside the profound effect his son’s actions had on another human being.  Even so, the selfishness of his words would have been completely overlooked if the tears of his son’s distraught victim had not spread around the nation before his unfortunate letter did.  Having absorbed her pain before we heard his compassion for his son, we reacted quite differently than the lenient judge who decided the case.

While that case may be extreme, it is hardly an isolated phenomenon.  Parents are usually given great leeway in advancing the interests of their children, even when other children are indirectly—or even directly—harmed as a result.  Schools and teachers are quite familiar with this kind of noble selfishness as they deal with the righteous indignation of a parent whose child did not receive every benefit and every accolade the school could provide.

My personal code for life

This is my personal code for life.  It’s a bunch of obvious things that most people already know.  I am sure I’ve missed some significant things, but I still like it:

  • Anything that leads to love is good. Anything that leads away from it is not.
  • I claim the God-given right to be imperfect.  I also claim the inherent right to have “issues.”  I recognize that everyone else has the exact same rights.
  • I cannot control other people and if I try to do so, I will make both them and me miserable. I do have the right to set boundaries for the treatment I will accept from them. It is my responsibility to communicate those boundaries to them. I have the inherent right to disassociate myself from them if they refuse to honor those boundaries. If someone still tries to hurt me, I can seek out appropriate protection.
  • The universal golden rule still applies: treat others the way you want to be treated. It is wrong to harm others. Allowing someone else to come to harm through inaction is also wrong. And one way we cause harm through inaction is choosing to remain ignorant of the harm we cause to others.
  • It is also wrong to cause harm to ourselves, through action or inaction.
  • I am never responsible for another person’s behavior, but I am responsible for the temptations I create for them. I will inevitably tempt others to be angry, to lash out, to be jealous, to seek revenge, etc., but if I choose to ignore the temptations I create for others, I am harming them through inaction.
  • I have the right to decide to believe in God or not. If I believe in God, I have the right to decide what expectations that being has of me. I also have the right to follow those expectations. What I may not do is harm others through either action or inaction in order to satisfy God.
  • It is more important to be wise than to be happy. Seeking happiness over wisdom is likely to lead to neither one, but seeking wisdom over happiness is likely to lead to both. Wisdom is the key to happiness. Understanding is the key to wisdom. Knowledge is the key to understanding.
  • There may be nothing in life I can truly control. Life is a matter of probabilities and odds. I cannot change that. What I can do is change the odds.