Archive for March, 2010

Why I Eat Meat

In Specific Facts on March 31, 2010 at 8:20 pm

When I consider which of my actions today is most likely to be considered barbaric or immoral in the future, I usually decide that it’s my consumption of animals.  Thus, I have followed the progress of Matthew Herbert’s album about the life and death of a pig, made using sounds from an actual pig.  He recently suffered a setback when he was prevented from recording the sound of the pig’s death, but that didn’t stop PETA from condemning the project:

“No one with any true talent or creativity hurts animals to attract attention … Pigs are inquisitive, highly intelligent, sentient animals who become frightened when they are sent to slaughterhouses, where they kick and scream and try to escape the knife. They are far more worthy of respect than Matthew Herbert or anyone else who thinks cruelty is entertainment”

PETA’s public persona is the sort of moral absolutism that renders disagreement immoral.  That usually doesn’t make for good conversation.  Mr. Herbert took offense at PETA’s characterization of his project, which he described as fundamentally concerned with the morality of eating meat.

the pig was always going to be killed, and for me to not bear witness to that difficult fact, would have been to cheat myself and the listener from the friction that comes from raising animals for food.

I eat meat. as I get older, I feel less proud of that fact. however, since I do eat meat, I think that I have a responsibility to understand the implications of that decision. as much as I didn’t relish the prospect of witnessing the death of a pig I had seen being born and raised, I felt it an important reality to face. … in an otherwise distant and anonymous food chain, this one pig’s life has been clearly and respectfully acknowledged. 

Mr. Herbert goes on to describe all of his problems with the production of food, but I think this initial statement is the crux of my interest in the project.  

I eat meat even though I’m completely aware of the fact that animals suffer for my consumption.  I think many people on the left (where most animal rights activists are found) seem to think that there is a “banality of evil” that goes into the consumption of meat; if only people thought about or realized what they were perpetrating on innocent animals, then they would forswear meat.  The synopsis of Jonathan Safran Foer’s recent polemic includes this doozy of a sentence that perfectly illustrates the condescending paternalism with which discussions of this subject are rife: “Eating Animals explores the many fictions we use to justify our eating habits-from folklore to pop culture to family traditions and national myth-and how such tales can lull us into a brutal forgetting.”

No! I eat meat because I enjoy it and I ultimately think that is enough to justify the killing of animals.  I have seen PETA’s videos of slaughterhouses (and watched the far superior Blood of the Beasts) and wasn’t persuaded in the least because I think humans have the right to kill and eat animals.  Why?  Because animals kill and eat each other.  The state of nature consists entirely of killing and eating other species, and so there has to be another argument about the immorality of meat for me to overcome the love of consuming it.

The argument that humans are morally superior to other animals and thus do not need to eat them is, I think, more interesting.  I will concede that a vegetarian is making an attempt at living a more humane life than mine, but I don’t think that provides enough moral urgency to stop eating meat.  Many people do good things that I find morally enviable but too inconvenient to attempt myself: I don’t volunteer in a soup kitchen, and I didn’t move to a third world country to practice pro-bono microfinance either.  Since becoming a vegetarian would require switching to a more expensive diet, and I could conceivably just use that money for a superior moral cause (donating to childhood cancer charities, for example), I think the argument eventually devolves into trade-offs and “buying” indulgences.  Further, small animals and insects are killed by farm machines and pesticides, so it’s impossible to completely avoid killing animals.

Another argument is that ultimately meat consumption comes down to intentionally and unnecessarily inflicting suffering, rather than simply killing.  Suffering entails conscious pain; no one worries about the pain of mowed grass.  So how much consciousness do animals have?  No one knows, but David Foster Wallace wrote in “Consider the Lobster” that boiling lobsters alive demonstrated suffering because “the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.”  I think this ignores the possibility that pain is a stimulus requiring a reaction: pain signals dangerous harm necessitating immediate remedy.  It’s hard to imagine a more basic evolutionary instinct than reacting to avoid death and pain, and the presence of such a reaction does not entail consciousness of pain.

On the other hand, to be on the safe side, it seems sensible to avoid particularly barbaric methods of killing and preparing animals.  Lobsters can be killed prior to cooking, humane butchering actually leads to superior meat, and foie gras fowl can be raised without force feeding.  The problem is that these methods cost more, while cruel or inhumane methods are dirt cheap.  I am entirely sympathetic to this criticism of meat consumption: meat is underpriced relative to its environmental, medical and ethical costs.  Subsidized grain makes it possible to buy fast-food beef for less than the price of just about anything.  That soda made out of corn, water and chemicals costs more than a hamburger that requires an adult cow demonstrates a market distorted beyond belief.  Until defenders of the free market and advocates of improved food bridge the tribal divide and unite to end farm subsidies, that reality will not change.

In the end, I think that trying to do better is the best that can be done in life generally, and in the consumption of meat specifically.  Don’t eat dolphins or beat monkeys to death to eat their brains.  Support the production of quality meat that costs what it should.  Eat more plants, poultry and sustainable fish.  Eat from head to tail to avoid waste.  And remember where your meat comes from: an animal died so you could enjoy delicious meat, and it’s a luxury not to be taken for granted.

An Uncertain deFense of deBoer

In Empires of the Mind on March 30, 2010 at 5:24 pm

I. The Argument’s Genesis

Sam Harris’s February TED lecture begins with a provocative premise:

…It’s generally understood that questions of morality, questions of good and evil and right and wrong, are questions about which science officially has no opinion.  It’s thought that science can help us get what we value, but it can never tell us what we ought to value.  And consequently most people – I think most people probably here think that science will never answer the most important questions in human life, questions like, ‘What is worth living for?’, ‘What is worth dying for?’, ‘What constitutes a good life?’; so I’m going to argue that this is an illusion, and the separation between science and human values is an illusion.  And actually quite a dangerous one at this point in human history.  Now, it’s often said that science can not give us a foundation for morality and human values because science deals with facts.  And facts and values seem to belong to different spheres.  It’s often thought that there is no description of the way the world is that can tell us the way the world ought to be.  But I think this is quite clearly untrue.  Values are a certain kind of fact.  They are facts about the well-being of conscious creatures.  

Andrew Sullivan recently linked to the lecture with this response from Freddie deBoer:

[I]f we are indeed a cosmic accident, the result of the directionless and random process of evolution, then it makes little sense to imagine that we are capable of ordering the world around us, beyond the limited perspective of our individual, subjective selves. This has always been to me the simplest step in the world, from the first two beliefs to the third, from the collapse of geocentrism and creationism to the collapse of objective knowing. Yet I find that it is one many people not only refuse to make, but one that they react against violently. This is the skepticism that is refused, and this refusal is the last dogma.

There’s also this clarification, this clarification, and this clarification from deBoer.  Several other bloggers have weighed in on the debate.  The highlights, from Julian Sanchez:

God or whatever other transcendent sources of certainty we might posit just serve as baffles to conceal the ineradicable circularity that’s going to sit at the bottom of any system of knowledge. You’re always ultimately going to have a process of belief formation whose reliability can only be vouchsafed in terms of the internal criteria of that very process. Calling it a divinely endowed rational faculty rather than an adaptive complex of truth-tracking modules doesn’t actually change the structure of it any…I do think we can make “objective” judgments. They’re only “objective” relative to our contingently evolved nervous systems, but since that’s all objective can ever have meant, that’s objective.  This is totally distinct from the question of how confident we ought to feel about most of our conclusions. I can be mistaken about an objective fact, but that doesn’t entail that it’s a mistake to think of it as objective one way or the other.  Because objectivity is a system-relative property, it’s not undermined by the fact of our cognitive limitations.

And from Will Wilson:

Contingent minds merely undermine the necessity of our being able to comprehend the world (a necessity that the faithful take quite seriously, as an old Dominican friar once explained to me), they leave open, however, the possibility of contingent minds that “just happen” to be of the sort that can make sense of the universe in which they happen to be located. Nevertheless, Freddie is right about one thing: once we eliminate necessity, we need reasons to think that our minds are of the right sort; after all, the humble Giraffe is well adapted to its environment, but will never come to understand particle physics or the workings of its own neurophysiology. How are we to know that we are not like Giraffes, only with considerably wider possible-knowledge horizons?

This discussion has occupied nearly all my time and brainpower for the last week, and it has stretched my patience and eyesight more than a few times.  Ultimately, many people have forgotten where the debate started: subsequent commentary has wandered uncontrollably from the cosmic questions first proffered by Harris to the merits of various political ideologies to the nature of science, morality, and knowledge.  Straw men pepper the electronic landscape, and more than a few instances of reductio ad Hitlerum are sprinkled throughout multiple sites.  So, I am going to attempt to grossly (over)simplify the terms of the debate for clarification.

Let’s return to Harris’s initial proposition, that science can give us an objective foundation for morality and human values.  By this, Harris means that not only can science explain why morality exists or why we believe what we do, but it can provide certain answers to moral questions.  He specifically models a scenario where advances in neurology will allow for the pinpointing and understanding of “moral” processes in the brain, enough to draw universal conclusions about morality and values, although he does allow for a plurality of individual moral systems. 

DeBoer responds that morality is subjective and personal and therefore beyond objective, universal understanding.  Nothing can truly be said with certain objectivity, and all-embracing theories such as Harris’s are dangerous and hypocritical when coming from self-described “skeptics.”  True skepticism recognizes the limits of its own theoretical framework: “from the collapse of geocentrism and the collapse of creationism follows logically the collapse of objective knowing.”

Sanchez posits that the term “objective” only has meaning relative to an amorphous aggregate of individual subjectivities.  All systems of knowledge we devise will be qualified by the fact that we devised them as conventions to ensure our own well-being, and hence their circularity is inescapable; their objectivity is akin to perfect efficiency for the given system rather than true universality.  Thus, when we speak of something being “objective,” we don’t mean that it is truly objective, but that it is effectively so.

Wilson believes that while natural selection only establishes a minimum standard for existence, namely that species survive and reproduce, there is not necessarily a maximum.  Nothing in science suggests that human minds are capable of perfectly understanding the world, but that doesn’t exclude the possibility that they can.

To (over)simplify these four responses even further, let’s regard the question as “Can moral questions be definitively answered (by science or otherwise)?”  The four responses can be characterized as believer, Harris: yes, and they will be very soon; strong agnostic, deBoer: objective truth cannot be found and the pursuit itself is dangerous; ignostic, Sanchez: the question is devoid of meaning and absurd; and weak agnostic, Wilson: we cannot know now, but that does not preclude the possibility of knowing in the future.

My personal leanings are towards Wilson’s weak agnosticism on the issue, but I must admit that this inclination is based on my own uncertain interpretations of the relevant terms “science” and “morality”.   


II. Science

More than any other philosophy here, I think Wilson’s weak agnosticism serves as the major underpinning of what we call the scientific method.  Creationists and intelligent design advocates are correct to point out that evolution is a theory, but only in the sense that nothing in science purports to be objective fact.  This crucial point is tragically and ironically ignored by radical men like Harris and his creationist kinsmen. 

By deifying science and declaring it infallible, the new atheists replace one god with another – a slippery slope indeed.  The scientific certainty of the early twentieth century bequeathed upon humankind the gifts of electric shock therapy, lobotomies, and eugenics.  The prevailing, visceral reaction to the atrocities of World War II provided a necessary correction to scientific overreach.  Some believers in science today exhibit a certain tribalism, that only their methods are legitimate; however, whatever the current woes of the scientific community, they are nothing like the dark path of only a few generations past.

Because of past and present muscular misinterpretations, the true nature of science remains widely bastardized.  True science represents what measured comparison of available alternatives shows to be the best explanation we have for the various events unfolding around us.  Accordingly, we enforce principles of competition to mimic our most measured perceptions of the forces of nature, and we impose universal standards of procedure which theoretically and empirically seem to correct for what we recognize as human bias and error.  Therefore, we can never be sure that the prevailing explanations are the “best” explanations.  We can, however, believe with a high degree of confidence that they are the best given all the other explanations of which we are aware.  Science is ultimately a process, and as such, never purports to be perfect and complete.  And so we see with quantum physics that science itself has embraced the nonexistence of objective truth.

Friedrich Nietzsche seems to imply in On Truth and Lies in a Nonmoral Sense (referenced and linked by deBoer) that rationality and intuition are mutually-exclusive, antagonistic concepts:

As a “rational” being, (man) now places his behavior under the control of abstractions. He will no longer tolerate being carried away by sudden impressions, by intuitions…There are ages in which the rational man and the intuitive man stand side by side, the one in fear of intuition, the other with scorn for abstraction. 

However, science represents a balanced symbiosis of rationality and intuition.  Even a cursory glance at the history of science reveals it is full of intuitive leaps and accidents.  Witness the work of Gregor Mendel: the result of Mendel’s famous first pea plant experiment was something like 3,468 tall plants and 1,256 short plants.  The ratio here is not 3:1, but 2.76:1.  Mendel made a leap of faith, assumed he had made some mistakes along the way, and guessed that the data were trying to tell him that the ratio should be 3:1.  When he conducted more trials under this assumption, the data seemed to fit his intuition, and he modeled with reasonable certainty the theory of inheritance that bears his name, a theory that so far has proven very useful in helping to solve problems and allowing people to live better lives.
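Mendel’s guess is easy to check with a few lines of arithmetic.  A minimal sketch, using the essay’s illustrative counts rather than Mendel’s actual published figures:

```python
# Counts from the essay's illustration (hypothetical, not Mendel's real data).
tall, short = 3468, 1256
total = tall + short

observed_ratio = tall / short     # raw ratio of tall to short plants
expected_tall = 3 * total / 4     # counts a perfect 3:1 ratio would predict
expected_short = total / 4

print(round(observed_ratio, 2))   # 2.76, not the clean 3.0
print(expected_tall, expected_short)
```

The gap between 2.76 and 3.0 is exactly the sort of noise Mendel attributed to experimental error; further trials shrank it toward the 3:1 his model predicted.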

And so accordingly, I must disagree with deBoer’s statement that: “Among the few necessary social  functions that religion performed, and that we now are lacking in a post-theistic world, is the enforcement of a certain humility.”  Far from lacking humility, science is humble by its very nature.  This fundamental humility has been confused of late by ideologues like Harris who dress up their political agendas with unwelcome scientism.  A clarification is necessary.


III. Morality

While I think deBoer wins the day on the question of whether or not moral questions can be definitively answered by science, his conclusion raises the question of application.  We have to do stuff.  We have to act.  Our very survival requires it.  Action is a predetermined, necessary condition for continuing existence, and morality is the complex mechanism whereby we attempt to ascertain and clarify “right” ways of acting for the benefit of the community.  Intellectuals may imagine a perfect morality, and for the blindly faithful, like Sam Harris, a philosopher king or deity is the source, but in practice, community moral order evolves spontaneously.  Morality exists by convention and is enforced and clarified by a particular community in the form of derivative maxims on which a majority of community members agree; for example, “thou shalt not commit adultery” or laws against fraud.

As such, morality is akin to a social contract: constructs of morality represent anything that we perceive as “better” than the “war of all against all”.  And from a pragmatic standpoint, it is in the interest of the community to clarify and enforce moral codes.  Serious violations still often arise.  Since our moral codes represent but an approximation bound to the human conditions of pervasive ignorance and subjectivity, and since no one can perfectly understand the moral code of the community, there is less potential for harm if moral actors lack conviction.  If I mistakenly believe that President Obama intends to enslave mankind and must be stopped by any means necessary, but lack the conviction to act on this belief, there is no harm.  The resulting nothing is the same as if I had never believed that absurdity.

Accordingly, as deBoer posits – and I agree – history’s greatest villains were all certain of their convictions; they all thought they were serving the greater good.  No one thinks he or she is evil.  No one foresees the unintended consequences of his or her actions.  It is most prudent to recognize this fact, do nothing, and avoid stepping on the ant that could be your reincarnated grandmother.

Nevertheless, I agree with Edmund Burke and William Butler Yeats that action is often necessary to prevent evil.  The moral conundrum consists of balancing the tendency to make mistakes out of self-righteousness with the necessity to correct the moral failings of others.  At the level of the subjective individual, this is a daunting task with large margins for error.  At the community level, even more so.  Yet this is the task that is our responsibility as humans: to act with uncertainty is the burden of existence.


IV.  Conclusion  

For simplicity’s sake I have assumed four possible answers to the question of whether science can answer moral questions: yes (Harris), never (deBoer), silence (Sanchez), and not now (Wilson).  My answer is that because science is a process for maintaining an appropriate balance between rationality and intuition, with the ultimate goal of explaining phenomena, and morality is a process for maintaining an appropriate balance between action and inaction, with the ultimate goal of minimizing harm, we can use the tools of science to help us more effectively minimize harm via, for example, a polio vaccine; but the idea that science itself can be a moral authority seems incoherent and frankly undesirable.

Yet another epistemic arrogance lies in assuming that science and morality will be forever irreconcilable.  It is extremely doubtful that science or morality will ever be more than imperfect processes, yet it is because I lack conviction that I am inclined towards Wilson’s cautious hope.  I must now venture out of my cave, and it serves me to choose an answer that is immediately applicable.  The only answer with this practicality is Wilson’s, and so I suspect that deBoer is correct, yet behave as though Wilson is.

“Mandate”: Loud Bark and Nibble

In Specific Facts on March 30, 2010 at 5:03 pm

No part of health care reform feels more uncomfortable than the individual mandate, the government requirement that all adults purchase health insurance.  Adding a new responsibility of citizenship chafes the lover of individual freedom.  Why should I have to buy insurance?  The subject is complicated by the fact that most of the hand-wringing from policy-minded analysts actually goes the other direction: will too many Americans ignore the mandate and just pay the annual fine, bankrupting our health care system in the process?

The individual mandate (distinct from the corporate mandate requiring companies to offer insurance or pay a fine) is necessitated by ending the preexisting condition exclusion that insurance companies currently employ.  If people were not required to have insurance, then the rational strategy would be to wait until you got sick to buy insurance.  Thus, only deathly ill people would have insurance, making selling insurance economically impossible or fantastically expensive.  The hope of the mandate is that rather than pay a small fine, people will purchase insurance and make the system viable.

The problem, from a policy standpoint, is that the fine is still way too low to change the optimal strategy.  Health insurance costs thousands of dollars a year; paying 695 bucks for the right to wait until you need expensive health care to buy insurance is a great deal.  Imagine if for a tenth of the price of car insurance you could buy the right to purchase car insurance right after you got in a wreck.  In effect, the Democrats and Obama just gave Americans a new option: disaster insurance with 20-20 hindsight.
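The opt-out arithmetic can be made explicit with a back-of-the-envelope comparison.  Only the $695 fine comes from the bill; the premium and illness probability below are invented for illustration:

```python
# Hypothetical annual figures; only the $695 fine is from the legislation.
fine = 695        # penalty for going uninsured
premium = 4000    # assumed cost of a year of health insurance
p_sick = 0.10     # assumed chance of needing expensive care this year

# Strategy A: stay insured all year.
cost_insured = premium

# Strategy B: pay the fine, then buy insurance only after falling ill,
# which guaranteed coverage of preexisting conditions makes possible.
cost_opt_out = fine + p_sick * premium

print(cost_insured)   # 4000
print(cost_opt_out)   # 1095.0
```

Under these assumptions, opting out costs roughly a quarter as much as staying insured, which is precisely the incentive problem described above; the gap closes only if the fine rises or if something makes strategy B riskier.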

Of course, health insurance is useful in situations other than when you are incredibly ill.  That’s actually the main reason why health care keeps getting more expensive: people use insurance paid for by their employers to fund all of their medical expenses, so they have no clue how much things actually cost.  Since the federal government is now willing to assume responsibility for large swaths of the cost of health care for low income Americans, in many cases opting into the health insurance system will be a good deal.

Further, I am skeptical that all that many people are calculating enough to opt out of insurance; Massachusetts achieved near-universal coverage without significantly higher fines.  Forgoing insurance is still a risky bet; for example, what if you have a heart attack?  It’s gonna be pretty hard to get insurance on the ambulance ride to the hospital.  The government’s framing of the issue will also encourage participation.  If instead of a “mandate” it were described more accurately as a “tax,” I think many people would choose to forgo insurance.  People are used to paying taxes, but when the government says that it is against the law not to have insurance, I think most Americans will comply.

Two possible tweaks could make the system more secure, however.  First, a small waiting period for the coverage of preexisting conditions could be instituted.  In Massachusetts, preexisting conditions are not covered for the first six months on an insurance plan.  This changes the calculus of risk taking decisively and would make it reckless to avoid insurance under the mandate.  Second, allowing people to opt out of the preexisting condition protection altogether in exchange for not paying the fine would avoid a backlash from those who still don’t want to participate.  This proposal, from The American Prospect, would allow Americans to trade five years of not paying the mandated fine for signing away five years of their right to government-subsidized insurance with preexisting condition coverage.  This meets the “libertarian paternalism” standard of making the optimal option the default, while allowing people the freedom to go in another direction if they choose.  Either or both of these changes could pretty easily be passed to improve health care reform.

In the meantime, take heart, lovers of freedom: the mandate may take your money, but it can never take your freedom.

Fantasia and the Narrative Fallacy

In Empires of the Mind on March 29, 2010 at 3:13 pm

As a new parent, I introspect constantly about the impact various media will have on my ten-month-old daughter’s neural and moral development.  I seem to find major problems with nearly everything we try watching together, whether it’s a disappointment with the Euclidean oversimplifications and anthropomorphism of everything in Inai Inai Baa, or a skeptical wariness of preachy Sesame Street.  While I certainly don’t think it’s healthy to be obsessed with a particular, fictitious, red monster, I usually convince myself that my criticisms are slightly overbearing, and that, as important as the first year of neurodevelopment is, thirty seconds a week of three triangles and a rectangle suddenly becoming a penguin is not going to force my daughter into a compartmentalized world-view or stymie an appreciation of the profound, true complexity of the cosmos.

When my daughter and I discovered Fantasia, I initially could not find any problems: classical music is variously reported to stimulate the mathematical parts of the brain, the artistic presentation of Fantasia is complex and beautiful, themes are drawn from the deepest realms of culture, history, and mythology, my daughter likes Fantasia, she watches attentively, and she usually falls asleep quickly and quietly (although The Sorcerer’s Apprentice made her cry).  However, we recently watched one particular vignette on YouTube set to Stravinsky’s “The Rite of Spring” which chronicles the history of dinosaur existence, from the volcanic, predevelopmental phase, through efficacious rise and decadent fall.  As meteors come crashing down and the earth shakes, the remaining brontosauri fight over the last few leaves, pterodactyls steal from kin, and duckbilled hadrosaurs greedily consume their food supply to exhaustion.  The effect is to show that the dinosaurs were morally reprehensible and deserved their fate, which is just ridiculous, yet somehow seems normal to us!

In his classic work The Hero with a Thousand Faces, Joseph Campbell chronicled the tendency of all human societies to retell the same story over and over again, while changing the names and circumstances slightly to fit the times.  The success metrics of Hollywood films and mainstream journalism suggest this imposition of narrative structure on even uncompelling events is an inevitability of convention, but I wonder if we can all agree that such oversimplifications are not welcome in an enlightened world.  A didactic, narrative version of history may keep members of a particular society in line, encourage organized violence, or make cultures feel better about original sins, but isn’t it actually morally reprehensible to suggest the dinosaurs, Carthaginians, or Mayans deserved their fates?  Shouldn’t we see evil as an effect representing the sum of all human suffering rather than a just redemption?  Or shouldn’t we at least be exposed to art which causes us to ask these questions, rather than art which assumes their answers?  Good art does not make moral assumptions; it does not preach; instead it is thought-provoking.  This is why Shakespeare and Greek tragedy have enjoyed popularity throughout the ages and we’ll forget all about Avatar in five years.

Rather than report facts and allow readers to make their own inferences, the way our media goes about its business (especially more unscrupulous elements like the Huffington Post and Fox News) is to infer first and then report, crafting a misleading, ideology-based mythology along the way.  It’s a lot easier for us to cope with emotional events such as September 11th if we see ourselves as innocent victims of an evil plot: the terrorists hate freedom.  What we should be doing is looking at the facts, trying to minimize suffering, and weighing the potential consequences of our actions against their potential benefits.  In the case of September 11th, radical Islamicists, whose own worldviews account for clear good guys and clear bad guys, were responding to what they perceived as evil American infidels defiling the Holy Land with their military presence.  The Arabian Peninsula was occupied by an invading army.  For the Islamicists involved, September 11th was an act of just war.

When Ron Paul famously brought up this prevailing motivation at the 2007 Republican Primary Debate, referencing our own CIA’s analysis, he was childishly shouted down by Rudy Giuliani, who was simply reinforcing the cultural myth of American moral infallibility, and made into a straw man by moderators.  The truth is that our presence in the Arabian Peninsula is an effect of the Cold War, in which the two world superpowers competed for the hearts, minds, and governments of the non-aligned.  U.S. military presence in Saudi Arabia and elsewhere is certainly understandable, if not altogether justified, given these realities, and the events of September 11th are an unfortunate and tragic consequence.  But that doesn’t mean we shouldn’t ask the hard questions that need to be asked and construct a compelling critique of the causes and motivations of all parties involved, rather than fit the events into a preexisting, template narrative structure.  I prefer to engage with something resembling reality.

More on The Cove and Japanese Education

In Dispatches from the Wild Wild East on March 28, 2010 at 3:27 pm

I wrote this in response to comments on my review of The Cove.  I’m posting it here as well because it highlights one of the two major tightropes English teachers in Japan have to walk: that of effectively teaching English while satisfying superiors.


Thank you for your insightful comments. I agree with you that elementary schools and junior high schools in Japan are two different animals. I have a lot to say about the positive aspects of the Japanese education system and public health especially in my blog on this website, which I hope you’ll read and comment on.

Whatever the reason rote memorization and passive learning are perpetuated, I dislike overly standardized, test-based education. Of course some standards are necessary so universities can compare prospective students, but requiring Okinawan high school students to learn and reproduce the history of Kyoto on a life-determining test is not only obviously pointless, but also a disservice to the Okinawan people. In the particular case of Okinawa, albeit relatively removed from the context of this review, mainland textbooks were specifically imposed to weaken and destroy Okinawan cultural identity. Granted, this is fairly ancient history, but change is slow in Japan, and that ancient history forms the underpinnings of the current system.

I dislike the nationally standardized text series, New Horizon. The English textbook in particular is abysmal. The token foreigner of the text, Ms. Green, is a complete moron who is ridiculed for her inability to properly speak Japanese and goes about reassuring the nation’s students that Japan already has the best of everything, including really big public parks, which is patently false. The textbook further reinforces cultural and language gaffes, as well as stereotypes, by, for example, demanding that students ask foreigners if they’re from America rather than where they are from, and demanding that students use the word “cap” instead of “hat” to refer to that particular part of a baseball uniform.

Although these examples are incredibly minor, there are hundreds of them that in aggregate give Japan the worst English test scores in Asia after North Korea. To a native speaker of English, the unnatural language and counterproductive focus on not making any mistakes are obvious and correctable, but the vast majority of foreigners working in the Japanese school system have absolutely no control over what they teach and merely serve as sounding boards for the head English teacher, who is selected solely on the basis of a standardized multiple-choice test of written English and is often unable to carry on a basic conversation. The teacher’s editions of New Horizon actually have pronunciation guides written in Japanese, which, as you know, comes nowhere near approximating the sounds of English. If a foreign teacher ever gets uppity and tries to change the curriculum, he finds himself without a job and a visa pretty quickly. The tendency is to do nothing, not break heiwa, not get fired, and still get paid and get to travel.

This unproductive equilibrium means that nothing is ever changed unless it is changed from the top down, since the education system encourages lower-ranking civil servants to avoid responsibility. The education system plays no small role in reinforcing the grouping of foreigners into a generic horde, emphasizing the Japanese cultural myth of unique uniqueness, and making it even more difficult to see foreigners as more than just clowns. Watch Eigo de asobou for more confirmation.

You are absolutely correct that the whole whale thing is a pet issue of the culturally conservative right wing of Japanese politics, which has, in my city at least, become more active since the election of Yukio Hatoyama. I agree with commenter Dyske that the most tragic part of The Cove’s moment is that the pro-whaling elements of the Japanese political scene will only feel more threatened and more emboldened by the continuing, loud, self-righteous protests and generally obnoxious behavior of Western anti-whaling activists. This issue will never be resolved so long as a wall stands in the way of open and honest dialogue, mutual distrust remains the norm, and activists remain more focused on ideological purity and battling caricatured bad guys than cooperatively solving problems.

Be Human: Don’t Let the Politicians Win

In Specific Facts on March 26, 2010 at 12:55 am

Democratic Congressmen who voted for health care reform have been getting death threats, and the natural reaction has been to blame the heated rhetoric of the right for stoking up tea party outrage.  Let’s start with the obvious: Congressmen who supported health care reform did so out of a genuine interest in bettering the country; even if you disagree with them, threatening violence, or worse, committing violence, is deplorable.  The debate got ugly, but all sides should immediately condemn the violence in the strongest terms possible.  That out of the way, I find myself in the uncomfortable position of agreeing with The Corner.  Victor Davis Hanson:

This week’s talking point is the sudden danger of new right-wing violence, and the inflammatory push-back against health care.  I’m sorry, but all this concern is a day late and a dollar short. The subtext is really one of class — right-wing radio talk-show hosts, Glenn Beck idiots, and crass tea-party yokels are foaming at the mouth and dangerous to progressives. In contrast, write a book in which you muse about killing George Bush, and its Knopf imprint proves it is merely sophisticated literary speculation; do a docudrama about killing George Bush, and it will win a Toronto film prize for its artistic value rather than shock from the liberal community about over-the-top discourse.

Nearly everyone in the Tea Party movement is a sincerely concerned citizen, and blaming the movement for the misinformed, violent indignation of a few people is unjust.  The reason the violent political craziness comes from the right at the moment is that the left is in power.  That’s just the nature of the beast; when your guy is running the show people might get annoyed, but by and large they don’t ram cars for wearing the wrong bumper sticker.  

When Bush was in office, the left thought of his very existence as a baneful reminder of all things wrong with the United States.  This feeling led people to behave very badly.  I don’t think it’s exactly the equivalent of the present situation; most of the examples Mr. Hanson cites are from private citizens, not members of Congress like Michele Bachmann:

“This cannot pass. What we have to do today is make a covenant, to slit our wrists, be blood brothers on this thing. This will not pass. We will do whatever it takes to make sure this doesn’t pass.”  

Nonetheless, when you act badly the other side can rightly claim that you don’t have the credibility to criticize what they say.

It would behoove all of us to try to be more civil, less prone to hyperbole and, most importantly, to stop turning every issue into a scorekeeping contest over who is most wrong.  The fact that Republicans were wrong about Iraq doesn’t make the Democrats right about health care reform.  Treating the other side as permanently disgraced because of positions held sincerely, if ultimately incorrectly, leads to a situation where people begin to dehumanize their political opponents.  

At the same time, political leaders should refrain from saying that a bill to modify health care will end freedom in this country, or that people who oppose health care reform do so out of a selfish lack of concern for the plight of the poor and infirm.  That’s absurd, counterproductive, and, by the law of very large numbers, possibly dangerous.  Some people out there are crazy, and crazy just needs an excuse.  Don’t give it one; be better than that, Democrats and Republicans.

The View from My Window: Active Commuting

In Dispatches from the Wild Wild East on March 24, 2010 at 9:02 am

On February first, I decided that instead of purchasing a ninety-dollar rail pass every month and taking the train to work, I would ride my bike the thirty-minute straight shot into Fukushima City.  The reasons for the switch were manifold: one private lesson meant approximately one hour of light to moderate exercise; several lessons meant several hours.  There was also the savings of ninety dollars a month, a chance to fully listen to the many classic hip hop, rock, and jazz albums I had stored on my iPod and theretofore not had the opportunity to appreciate, and no more waiting for trains or arriving excessively early for work, which I hate.  Another bonus: I calculated that riding my bike actually took less time on average than waiting for and taking the train.  

Throughout the weeks that ensued, after exhaustively consuming all the albums in my current library, I began downloading a variety of podcasts and free science and Spanish courses from Yale, MIT, and other top schools, which I now listen to as I bike.  As a result of this simple switch from public transportation to biking, I am a healthier, more energetic, more cultured, more educated, more free-time having individual.  If the switch had been from driving my own car to biking, I would have doubtlessly affected the environment for the better.  I now regret the unthinking months spent relying on motorized transport, and intend to continue biking to work until my family and I return to Boston in November.  

But that’s where the trouble starts.  I don’t know for sure, but I imagine Boston generally lacks the infrastructure to make such a commute possible.  My last time in the states, I remember rolling the baby carriage through deep puddles, over cavernous fissures, around homeless people, and getting stuck in more than a few grates.  Much has been made of the tremendous degree to which the Japanese and other Asian economies are centered around public transportation and biking, and much has been made of the failure of such a system to take effect in America, but our major American problems of traffic congestion, pollution, obesity, and general unhealthiness can all be greatly reduced by widespread active commuting.  However, creating an infrastructure to make this possible is an expensive proposition indeed, especially when it must be funded by broke and bankrupt state and local governments.

There is also a cultural cancer blocking widespread active commuting from taking effect: people who don’t have their own cars are considered losers, and if one bikes or rides the train to work everyday, there is little need to purchase an automobile.  “Stranger danger” makes anything but driving the kids to school in the Escalade grounds for social services intervention.  There is also a considerably higher degree of conspicuous consumption in American economic/automotive culture than in that of Japan.  In Japan, small, fuel-efficient cars are ubiquitous; mopeds, K-Trucks, and Smart Cars are popular.  Most people ride their bikes, especially the elderly.  People who drive big, loud SUVs, if they exist at all, are considered rude, disruptive, and selfish.  Anyone in a luxury automobile is probably a member of the yakuza.

The structure of Japanese cities is also partly responsible for the country’s superior commuting methods.  When cities of paper are firebombed, destruction is widespread.  Japan had the opportunity (and I realize the insensitivity of calling it that), after World War II, to essentially rebuild its infrastructure from scratch.  Modern Japanese cities are centered around major railway stations and situated in valleys flattened by thousands of years of rice cultivation.  From a bird’s-eye view, wide boulevards take on the structure of a giant spider web.  A persistent dedication to public works means that concrete is replaced every five to ten years.  67% of the population lives in urban areas, and this is most certainly understated.  I live on “farmland,” yet you wouldn’t know it looking out my window, from which multiple cell-phone towers, a major railway, a large bank’s headquarters, two supermarkets, a shopping center, a train station, multiple factories, and a major highway stretch eastwards towards the distant stone face of Mt. Ryouzen.     

The Japanese are often derided for their tendency to copy exactly the successful programs or cultural elements of other countries.  The Yamato Period saw the importation of rice cultivation and letters from China.  The Meiji Restoration saw Western military and industrial structures take hold in Japan.  And the period after World War II saw the Americanization of Japan in areas ranging from industry to fashion and music.  Of course, this analysis glosses over historical details and instead focuses on a pejorative perception of Japan that exists in some Western circles.  But pejorative or not, an accurate depiction of the Japanese or not, an empirical approach to policy based on observing and assimilating the successes and avoiding the failures of others is undoubtedly wise.  

The recent healthcare debate in the U.S. saw the acknowledgement that the French healthcare system, with its mixture of public and private, is the world’s best.  But simply copying the French system and making minor changes to account for differences, or picking and choosing desirable elements to incorporate into the U.S. healthcare system, was politically and culturally unpalatable.  Faced with rising global healthcare costs in the 1980s, a pragmatic Japan, counter-intuitively more open to foreign ideas than an idealist America, decided to pursue policy more in line with that of European countries, while America chose to do nothing and stay cool.  We have reaped the consequences of inaction for the last several years, and now that our healthcare system has undergone massive changes, things may get better, or they may get worse.  The key difference between Japan and the U.S. that will most affect public health going forward, however, is the fact that the Japanese populace is as a whole much, much healthier.  

A truly good public healthcare system starts with healthy individuals.  An incredibly easy way to make sure we get the most bang for our buck with the new system is to implement widespread, grassroots measures aimed at encouraging good health, and why not copy exactly the model of the longest-living people in the world?  We need to fix potholes obsessively, widen and flatten streets and sidewalks, build bicycle parking lots on every corner, and do whatever we can to otherwise encourage active commuting.  

And why not do it now when our collective, American problems and their Japanese solution are apparent, the economy is sluggish, people are out of work, and air pollution levels are reaching critical mass?  Politically, such a move would satisfy those on the left, who want more environmental legislation and public spending on green jobs, and those on the right, who want people to take more responsibility for their own health choices.  We may be poor and in debt now, but we’ll harvest gold in the long run if we focus more on the health of our citizens.  With a widespread active-commuting infrastructure-building program, we can make urban roads far less congested for people who actually need to drive, significantly reduce greenhouse emissions, hedge against some of the uncertainties of a new public healthcare system, and, most importantly, make ourselves better people. 

Prison Reform through Electoral Reform

In Specific Facts on March 23, 2010 at 2:39 pm

America is desperately in need of prison reform.  We have the largest prison population in the world, with 2.3 million people incarcerated, and our rate of imprisonment is six times the global median.  We send too many people to prison, often for minor offenses like using drugs or writing bad checks, and for too long, since older people are more expensive to incarcerate and much, much less likely to engage in criminal activity.  Unfortunately, there isn’t much of a constituency for prison reform, since ex-felons can’t vote and politicians generally fear seeming “soft on crime.”  However, yesterday’s New York Times editorial advocating allowing felons to vote in federal elections may partially solve that problem.  The proposal makes sense on its own merits, but it would also create a powerful new incentive for politicians to treat former criminals as human beings.

Currently, about four million Americans who have been released from prison are disenfranchised in federal elections by laws barring people with felony convictions from voting. […]

There is no good reason to deny former prisoners the vote. Once they are back in the community — paying taxes, working, raising families — they have the same concerns as other voters, and they should have the same say in who represents them.

Disenfranchisement laws also work against efforts to help released prisoners turn their lives around. Denying the vote to ex-offenders, who have paid their debt, continues to brand them as criminals, setting them apart from the society they should be rejoining.

Disenfranchisement for felons is justified as an earned “loss of citizenship” after committing acts that violate the social compact.  While this makes sense in a limited way - it seems inappropriate for prisoners to be voting while incarcerated - lifetime bans on voting are punishments above time served, without the possibility of appeal.  A middle-aged, taxpaying father or mother has a vital interest in participatory democracy despite past offenses.  Replacing lifetime bans on voting with a probationary period would allow these people to reclaim full rights of citizenship once they have proven they deserve it.  That’s good for them and good for America, because these people - and hundreds of thousands of others like them currently incarcerated - would be a strong constituency for reforming our overactive criminal justice system.

We lack a sense of proportionality in our punishment of crimes.  In pursuit of safety we have prioritized inflexible retribution over humane justice; that a person would lose decades of their life for nonviolent crimes seems patently unjust.  Mandatory minimums and repeat-offender laws like “3 strikes and you’re out” life sentencing make it impossible for judges to exercise human discretion when faced with situations where effectively ending a person’s life just doesn’t make sense.  

Long prison sentences are supposedly a necessary deterrent to crime, but in reality just keeping young, male criminals out of society for a few years is usually sufficient to ensure a massive drop-off in criminality, especially when accompanied by aggressive rehabilitation.  Even frequent recidivists begin to retire to a quieter life as they age, and by age 50 criminals are a rarity.  The blood cools, the bones and muscles ache, and most people just want to live their lives.  Additionally, these long sentences force states to support geriatrics who pose no danger to society through their expensive end-of-life care.  

Shorter sentences and rehabilitating middle-aged career prisoners offer a chance for a stronger American society and stronger government balance sheets.  Enfranchising people who have experienced that reality firsthand might spare future generations of Americans from inefficient and masochistic punishments posing as justice.

Travel Writing in the 21st Century

In Dispatches from the Wild Wild East on March 23, 2010 at 1:14 pm

Tom Swick recently wrote an excellent essay for Worldhum on travel writing as a dynamic and changing genre.  I agree wholeheartedly with the author that the purpose of travel writing has become unclear and solipsistic in the age of mass-specialization, tour groups, study abroad, travel blogs, and internet multimedia.  Yet, there is still a place for the skilled travel writer, whose craft is a blend of the generalized and the specific, the academic and the narrative, and more like poetry than ever before.   

Brookings: How We’re Doing in the World

In General Principles on March 21, 2010 at 1:29 pm

Brookings recently released its annual survey of how the U.S. is doing in the world, a series of indices for the last four years concerning foreign policy and diplomacy as well as global economics and development.  According to the survey, the United States has made considerable diplomatic progress under the Obama Administration in nearly all spheres, while global economic indicators have gotten decidedly worse across the board.  And while this shouldn’t surprise anyone, the progress made over the last two years goes to show the enduring power of a cooperative and cordial international stance and good PR, and the statistics highlight several neglected issues.      

As for the common defense, global fatalities from terrorism in 2009 (statistics only through September) are listed at 1,300 per month – down considerably from 1,900 per month in 2007, the year of raids on Xinjiang, the Qahtaniya suicide bombings, escalating violence in Afghanistan and the Philippines, the assassination of Benazir Bhutto, election riots in Kenya, and the Iraq surge.  

In 2009, the number of U.S. troops in Iraq and Afghanistan, at 178,000, was down slightly from the 2007 peak of 185,000; however, combat fatalities were nearly cut in half over the same period, from 990 in 2007 to 449 in 2009, due to the dramatic improvement of the situation in Iraq.  From 2006 to 2009, U.S. popularity climbed steadily throughout Asia (52 to 61) and the Middle East (23 to 31), while undergoing an about-face of astronomical proportions in fair-weather Europe (39 to 67).

Economic indicators are not so good: U.S. GDP shrank 2.4% in 2009, essentially undoing the previous two years of growth; global GDP shrank 2.1%, down six percentage points from peak growth of 3.9% in 2006.  BRIC-plus was hit hard by the financial crisis, but in nations with developing domestic economies and solid fundamentals, such as China and India, growth continued at robust rates throughout 2009.  

One of the more shocking statistics in the report is that world trade growth fell to negative 12.3% in 2009 from its 2006 level of 9.1% – a difference of more than twenty percentage points.  Workers’ remittances experienced a similar shift over the same period.  Federal debt as a percentage of GDP went up to 53% in 2009 from the 2006 level of 36.5%.  The unemployment rate doubled, from 4.6% to 9.3%, while inflation generally stayed in the low single digits throughout the period.

A statistic of particular note is the degree to which various U.S. cities experienced the recession, measured as the change in payroll employment.  Washington D.C.’s payroll employment change in 2006 was 1.3%; this dropped to -0.5% in 2009.  Cleveland went from -0.4% to -4.1% over the same period, and Las Vegas went from 2.7% to -7.4%.  That means swings of -1.8 percentage points for Washington, -3.7 for Cleveland, and -10.1 for Las Vegas.  Indeed the recession has had completely different effects on different cities nationwide: Washington’s economy was doubtless spared because of its high dependence on government jobs, while Las Vegas’s construction- and tourism-based economy was hit hard by mortgage, construction, wage, and liquidity problems.  This pattern holds for cities like Oklahoma City and Miami as well.  
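Those swings are simple differences in percentage points, not relative percent changes.  A few lines of Python (the city names and figures are copied from the paragraph above; the script itself is just an illustrative sketch) make the arithmetic explicit:

```python
# Percentage-point swing in payroll employment change, 2006 -> 2009,
# using the figures quoted from the Brookings survey.
payroll_change = {
    "Washington, D.C.": (1.3, -0.5),
    "Cleveland": (-0.4, -4.1),
    "Las Vegas": (2.7, -7.4),
}

for city, (y2006, y2009) in payroll_change.items():
    # A swing is the plain difference, not (y2009 - y2006) / y2006.
    swing = round(y2009 - y2006, 1)
    print(f"{city}: {swing:+.1f} percentage points")
```

Note that computing a relative percent change here would be meaningless (and undefined for a 2006 baseline near zero, as in Cleveland’s case), which is why the report’s figures are best read as percentage-point differences.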

Otherwise, the Dow gained almost 2000 points in 2009, mortgage rates dropped steadily throughout the period, and the personal savings rate has more than doubled from 2007 levels, laying a solid foundation for recovery.  

The Brookings report has an interesting third section called “Blessings of Liberty,” which measures key miscellaneous statistics.  The global happiness index experienced little change over the last four years.  It started at 5.41 in 2006, went up to 5.51 in 2007 (probably a function of the Transformers movie), dropped back to 5.41 in 2008, and rose back up ever so slightly to 5.45 in 2009.  The Presidential approval rating nearly doubled from 2008 to 2009, from the abysmal 30% of Bush’s final year to Obama’s initial 60%.  Approval of Congress hovered around 30% for the most part, briefly dropping to 19% in 2008.  This pessimism was generally mirrored in the average satisfaction rating of Americans.  The gap between Republican and Democratic Presidential approval ratings narrowed slightly, from 71% in 2007 to 65% in 2009.   

Brookings’s survey is interesting for several reasons: while it generally confirms experience, it highlights some under-appreciated or neglected issues, such as – considering how little has tangibly changed over the last four years – the importance of humility, cooperation, and pragmatism in international relations, how small a role terrorism actually plays in anyone’s life despite how large a role airport security plays, the truly whopping degree of decline in world trade, the unjustified sudden hoopla over the federal deficit now that a Democratic President is in charge, the disproportionate effects of the recession on some U.S. cities over others, a general disaffection with government, and an absurdly pronounced partisan gap.