Writing Convincing Aliens Part 1: Biology and Ecology

This series is intended to help writers brainstorm the elements that go into creating convincing aliens*. It can also be applied to creating convincing nonhuman fantasy creatures, artificial intelligences, and so on.

My science fiction novel, Absence of Blade, is set in a universe where humans are only one of many intelligent species, who maintain an uneasy coexistence in a complex web of interstellar relations. I pushed the creative envelope by making the lead characters members of a nonhuman species, the Osk, and narrating large sections of the novel through their eyes. To do so effectively, I had to create characters with relatable emotions and inner lives who were nonetheless distinctly alien.

Part 1: Biology and Ecology

Creating convincing aliens starts with establishing a well thought-out biology and ecological niche for your alien species. The more thought you put into an alien’s basic biology and environment up front, the less chance you’ll end up contradicting yourself and dispelling the aura of verisimilitude around your invented species once you begin to write. Taking the time to make notes on your critter’s biology and environment will also help you start thinking about the ways your aliens differ from humans in their worldview, culture, and society, which saves time when you’re developing these details later.

Where to begin defining an alien’s environment? Try starting with the big picture: did they evolve on a planet? While that’s definitely the place of origin most ready to hand, it’s far from the only possibility. Take Robert L. Forward’s Cheela, the arguable protagonists of his novel, Dragon’s Egg: the Cheela are flat, wormlike entities who evolved on the surface (and in the crushing gravity) of a neutron star.

Taking an environment that is extreme or seemingly inhospitable to life and imagining what kind of creature could live there is a great way to create really alien aliens like the Cheela, or the sentient hydrogen clouds of Fred Hoyle’s novel The Black Cloud, which evolved in the cold wastes of interstellar space.

If you do decide your aliens evolved on a planet, start thinking about their potential ecological niche. Do they dwell on land, in the oceans, or in floating cities? Do they prefer a narrow range of environments, or are they generalists like humans? Are they herbivores, carnivores, or omnivores? Or perhaps that doesn’t apply at all–maybe they draw sustenance from inorganic sources, by photosynthesis or metabolizing hydrogen sulfide from deep ocean vents. Try looking at terrestrial examples for inspiration: Earth possesses stunning biodiversity, and there’s no reason to believe alien species wouldn’t be just as diverse.

Of course, if you’re like me, you may have thought up your alien’s basic biology before considering the environment that spawned it. That’s okay! In this case, you can work backward from what you know about your species to imagine the environment that gave rise to it. When applied to a work in progress or in the late outlining stage, this method can also help you spot inconsistencies in your species’ design that don’t make sense given their environmental niche.

What Not to Do: Human with a Coat of Paint

Unless there’s a conscious reason for it, avoid creating aliens that are very similar in appearance to humans or other Earth animals. This can come off as lazy worldbuilding because your readers are likely aware that a “space” version of an existing animal is very unlikely to be discovered.

Sorry, Space Dog.

An exception to this rule is if the author is consciously writing humanoid aliens for reasons relevant to the story. Ursula K. Le Guin’s Ekumen universe concerns several different societies of humanoids who are implied to be offshoots of our own species, but who possess varying biologies and cultures that she uses as a vehicle for commentary about our own society.

Similarly, aliens who look nonhuman but think or behave exactly like modern humans probably won’t pass muster with your readers either. I’ll explore how to create convincing nonhuman worldviews and cultures in Part Two.

*I use the term “convincing aliens” rather than “realistic aliens” because at present humans have never made contact with an alien species, and it would be disingenuous to claim we know what a realistic alien would be like, biologically or socially. However, for the purpose of writing fiction, we can make certain assumptions about the beings that might evolve in a given ecological niche, and those assumptions provide a foundation for creating a convincing nonhuman ontology.


This Book Is Not Yet Rated

About a year and a half ago, I was spending a lot of time on the fiction sharing platform Inkitt. Inkitt lets writers create a profile and upload short stories and even entire novels to the site, where other users can rate and review them, kind of like an online beta reading system. Inkitt also sponsors writing contests. When I was still interested in traditional publication for my first book, Absence of Blade, I entered it in Inkitt’s breakout novel contest. Inkitt pledged to act as the agent for the winner and shop the manuscript to publishers.

I don’t know how it all turned out for the eventual winner, and I won’t weigh in here on the ethics of Inkitt’s “platform-agent” model. I bring it up because of Inkitt’s rating system. For each piece you upload, Inkitt asks you to rate the story for content, such as violence, profanity, sex, and other assorted adult themes. When I uploaded Absence of Blade to the site, I found myself checking off many of these boxes. If my book were made into a movie, it would almost certainly be rated R. Yet anyone of any age could waltz into a bookstore and buy it without the supervision of a guardian.

This made me wonder–why don’t books have a rating system like television and film do?

One might argue that violent or explicit visual media have both a more immediate and more lasting impact on impressionable minds, but I’m not sure that’s true. Some of the most powerful ideas and images I’ve encountered came from books. Literature can be every bit as affecting (positively or negatively) as visual media: take me at fourteen, forcing myself to finish Stephen King’s Pet Sematary at 9 am in broad daylight because I was too scared to read it after dark.

So why no warning ratings on literature?

In a sense there are–they’re called categories. Physical bookstores maintain separate shelves for children’s, YA, and adult literature as well as many other categories. Virtual bookstores like Amazon accomplish the same categorization with algorithms. Besides making it easier for readers of various demographics to find what they want to read, separate kids’ and adults’ shelves help screen kids from material they’re not ready for.

Publishing isn’t subject to ratings boards as film is. The Motion Picture Association of America’s Classification and Ratings Administration controls the ratings films receive based on their violent and sexual content, profanity, and other so-called adult themes. These ratings are often highly subjective and biased against sex while being much more permissive toward violence, including violence against women. For those interested in learning how deep this rabbit hole goes, I highly recommend the documentary “This Film Is Not Yet Rated” (itself rated NC-17 due only to the clips from other films it includes), which explores the peculiarities of the MPAA’s rating system.

Some former rating guidelines would be side-eyed now: for example, the Hays Code, enforced from 1934, made homosexuality a forbidden subject; when homosexual characters appeared in film at all, it was often as villains whose villainy was heightened by their supposed “sexual perversion” (as homosexuality was defined at the time), and who got their comeuppance in the form of death by the end of the film. This “bury your gays” trope still appears all too frequently in film and literature despite the Hays Code being long abandoned.

Moreover, printed material has in fact been rated and censored in the past, but such strictures have relaxed with the times. Take the mid-century Congressional hearings on comic books, which resulted in the implementation of the Comics Code Authority in 1954. The CCA was essentially a self-censorship handbook for comics publishers.

Based on the argument that children were the primary audience for comic books (I’m not sure this has ever been true, but that’s another story), the CCA provided a list of obscene, graphic, and unwholesome material that publishers should strike from their comic books, including murder, true crime, rape, and the depiction of monsters such as vampires, werewolves, and zombies. Although it was a set of guidelines, many distributors would not carry comics that didn’t follow the CCA, giving it the de facto force of law. However, the CCA was all but abandoned by the 21st century, and by 2011 it was completely defunct.

Cultural attitudes toward books may also play a role. Reading has long been viewed as a more “wholesome” pastime than watching TV, and educators have promoted literature to kids from the age they can read. In the 18th and 19th centuries especially, it was believed the purpose of reading was to enrich the individual; stories were as much instruction as entertainment.

Finally, it’s a lot harder to point to graphic content in a book, because reading is such a private and subjective experience. It’s much easier to take clips from a film or game and argue the material should get a rating. With a book, readers create the graphics. An enormous amount of imagination and inference is required of readers to take static words on a page and extrapolate them into a mental world; the world thus created is as much the reader’s as the writer’s. In a way, rating literature would be tantamount to rating the reader’s own imagination and the ideas they have access to. Personally, that’s not a road I want to go down.

Readers, I want to hear from you: Should books have ratings, or trigger warnings for sensitive readers? Where is the line between concern for readers’ sensibilities and censorship?


Why You Should Create Ugly Characters

Bear with me. Anyone who’s watched any Hollywood release or network television series in the past, well, ever, knows that lookism is a powerful force. It shapes not only the actors chosen to play leading roles, but also the types of characters written for television and film.

The same culture of beauty also exerts an influence on literature, as anyone who has flipped through some recent YA releases (for starters) can attest. However, literature is a much less visual medium than television or film. The best literature draws on a full range of sensory cues and also evokes non-sensory information such as memory and interior thought to create living, breathing characters. It is this non-visual flexibility that gives the prose writer more freedom to create ugly characters. Here are four reasons why you should.
1) It’s rare. Writers feel pressure to stick to conventionally attractive characters; as a result, when a character in a book is described at all, they’re likely to be at least average if they’re not beautiful. An ugly character will be more memorable than another pretty face. We’re abundantly used to watching and reading about attractive people. After a while, they all blend together, don’t they? 

2) It challenges lookism. Books are a less visual medium than film, which means there are fewer constraints on a character’s appearance. It’s easier in prose to present a character’s interiority, personality and goals alongside their ugliness. This creates a fuller picture of the character and encourages readers to value them for more than their appearance.

3) Ugly characters will have a different outlook and mindset than handsome ones. Think about the ways in which your own appearance has influenced how you present yourself and interact with people, and how you perceive the way others relate to you. An ugly character will have a different social history than an attractive one, which will influence their attitude and outlook. 

4) Including ugly characters honors human variation. Not everyone in real life is conventionally attractive, so why should they be in your story? Writing characters who vary not just in their personalities and back stories but in their physical features will add verisimilitude to your story, while communicating that attractiveness isn’t the be-all end-all of a character’s value. 

Now I want to hear from you, fellow writers: when creating a character, how much thought do you put into their physical appearance? Is it an important element or a minor consideration? Tell me in the comments below!


Have You Done Enough in 2016? 

For many people this is a loaded question, apt to inspire feelings of guilt, panic, and apathy. We live in a society where doing, doing, doing is presented as the bedrock of productive citizenship, where activity is lauded as the path to a happy life, without much critical reflection as to what all that busyness means. Even those of us who look at the ideal askance, like me, can feel ourselves succumbing to its invitation to judge our yearly accomplishments and find them wanting.

I’ve come to believe the very question is no good. For one, our lives are continuously unfolding processes; chopping them up into calendar years should be merely a convenient way of delineating our life histories, not a reified benchmark of success. For another, that very continuity means we tend to forget what we’ve done over time, just as you sometimes have trouble remembering what you had for breakfast. Our accomplishments get papered over by succeeding days, weeks, months and years. Even if it doesn’t seem like you did much in any given day, you did something–and those small actions add up in a big way over the course of a year.

As an example, here is my year’s end log for 2016. In the past year, I:
-Started writing my 5th novel, which is now nearing completion at ~100,000 words;
-Wrote or completed two short stories, and a novella which is being anthologized;
-Moved from Brooklyn to upstate New York;
-Traveled to Okinawa–my first trip out of North America since 2010, and to Japan since 2004;
-Launched a freelance editing business and indie publishing house;
-Wrote my thesis and earned my master’s degree in Humanities and Social Thought from New York University;
-Drove across the Eastern half of the country three times–once from Nebraska to New York, and a round trip from New York to Minnesota and back;
-Helped an indie author launch the first book in her paranormal romance series, Awakening: Bloodline;
-Committed to NaNoWriMo and drafted a total of 50,029 words on the current work in progress; 
-Voted for the first woman nominated for president by a major party.

This list isn’t to brag (or not only that). In all likelihood, you have a similar list of achievements, one which time has flattened out and made seem less notable than it is. That’s the nature of memory: the achievements of each day are usually gradual, so we tend to forget their magnitude over the long run. 

If you’re one of many feeling like 2016 was a stagnant year, try this exercise: rather than making a list of New Year’s resolutions for 2017, make a list of what you did in the past year. It doesn’t have to be a list of traditional Achievements™–it could be as simple as getting out of bed and going to school or work every day. The point is that you probably did more than you think you did. If you’re reading this, you accomplished at least one thing: you’re here. You survived another year, and that itself is a victory.

Readers, I want to hear from you: what are some things you accomplished in 2016? 


8 Things I Learned From NaNoWriMo

Well, I did it. I completed NaNoWriMo with a grand total of 50,029 new words drafted between November 1st and November 30th. This was my second time attempting NaNoWriMo; the first stalled out due to student-related deadlines. Though completing the challenge in no way makes me an expert, I thought I’d take the time to share 8 things I learned from NaNoWriMo. 

1) Getting started is the hardest part. People are good at making excuses for why they don’t have time to write. NaNoWriMo laughs at these excuses. It demands that you write every day to keep up with word count. This will be tough at first, especially if you’re unused to maintaining a regular writing schedule. However…

2) It gets easier with time. I consider myself a disciplined writer, but it felt strange at first to write every day. That wore off as I picked up steam: the ideas started flowing faster and I got stuck less. The frequency also made it easier to pick up where I left off each day.

3) Writing in increments is invaluable. 1667 words in a sitting is daunting; I broke the task down into sessions of about 500 words, spread throughout the day. This method made it much easier for me to fit the writing in around other responsibilities. 

4) You will not make word count every day. Work, chores, commuting, the need to have some kind of social life, and my own fatigue levels sometimes got in the way of completing that day’s 1667-word chunk. Though I did write something on my work in progress every day, sometimes I had to compromise on word count.

5) You can make up deficits on days with more free time. Weekends were invaluable for me. I often wrote more than the minimum word count on weekends, so I’d know I had some flexibility on days when life made me fall short of word count. I don’t recommend relying on free days too much; those unwritten words accumulate a lot faster than words on the page. 

6) Outlining is invaluable. Like so much in life, NaNoWriMo is a numbers game: not only in terms of daily minimum word count, but also in the amount of time you spend figuring out plot, character arcs, and other essential novel elements. Preparing an outline minimizes the time spent staring at your screen and maximizes the time spent writing your story. 

7) Writing a novel doesn’t have to take years (or even a few months). Confession–my NaNo novel isn’t quite done yet. But it’s come a hell of a lot closer in just 30 days, going from 40,000 to 90,000 words. Before NaNoWriMo, that would have been a solid 5 or 6 months of writing for me. It doesn’t have to be this way! Fast drafting lets you get the words on the page quickly, giving you more time for those all-important revisions. 

8) NaNoWriMo doesn’t end on November 30th. You can make any month an occasion for fast drafting. After all, the point is to become a more productive writer. I for one plan to finish my current work in progress–an estimated 20,000 more words–before Christmas break. 

What about you, readers? Did you participate in NaNoWriMo this year, and if so, what did you learn? 


I’m Tackling NaNoWriMo 2016!

November sucks. Halloween’s over, and Christmas isn’t for nearly two months. Thanksgiving’s nice, but for the most part this is a dull, gray thirty days of the year that you just have to endure to get back to the good parts. That’s why I decided to do something different this year. Instead of thirty days of drear, I decided to make November thirty days of challenge and excitement. This year, I decided to tackle NaNoWriMo.

For those of you still unfamiliar with the term, NaNoWriMo stands for National Novel Writing Month. Writers of all stripes, from professional bestsellers to novices tackling their first novel, challenge themselves to write 50,000 words in a month, the equivalent of a short novel. Many people use the opportunity to start a new work, but there’s no requirement; it can be a chunk of a draft you’ve already begun, and the story doesn’t need to finish in 50,000 words. It’s not a contest, and there’s no entry fee or judge hovering over your shoulder. You’re your own judge of success.

50,000 words in a month. 1667 words, at least, every day for thirty days. At this point many writers, even regular, disciplined writers, are scratching their heads and wondering, “Why write a book this way?”
These are my reasons:

1) November sucks. It’s boring and dull, and it comes at about that point in the year when many of us start to wonder if we accomplished enough in 2016. (I’ve decided this is almost always a misguided question; more about that in next week’s post.) Why not make things interesting with a personal challenge?

2) Because my current work in progress was stalling out. I started the fifth book in my Expansion series in February, and it was just inching along–a few hundred words here, a few hundred there–until nine months later there were still fewer than 40,000 words on the page. I wasn’t even halfway through what I’d outlined and was running out of steam. The slow pace left too much time for me to self-edit and get stuck in my own head, which got in the way of my creative process. Enter NaNoWriMo, shock therapy for the writer’s soul that forces one to charge through creative blocks by sheer volume.

3) Solidarity with other writers. It’s just plain fun to log onto Twitter or Facebook (not for too long, of course), see other writers posting about the pains and joys of writing, and add something of my own. Writing is a lonely profession; there’s no water cooler to gather round, so we made our own.

4) To go pro. For a lot of long-time writers, blasting out 2000 or even 3000 words is just a normal writing day. But you need to build stamina to get to that point, both mental and physical. Typing or writing longhand makes physical demands on your body. So along with being shock therapy, NaNoWriMo is also endurance training.

5) Because I can. This year is the first NaNoWriMo I could realistically participate in. I’ve been either an undergrad or a grad student nearly every November since I learned about this challenge, and universities have the unfortunate tendency of imposing big, non-negotiable deadlines in November. (The one year I was out of school and working, I had just finished my first novel and was taking a break while I let it steep, so to speak.)

So yeah. By the end of this month, instead of a 40,000-word manuscript, I’ll have a 90,000-word manuscript. Will it be a finished novel? I don’t know, but it’ll be pretty damn near.


Don’t Sell Your Books on the Cheap!

I was strolling through the comments on a recent post on Chuck Wendig’s writer blog when I came across a commenter lamenting that recent changes in Amazon’s pricing structure have slashed profits for self-published authors. Having planned to go into the indie writing arena myself, I was understandably concerned and immediately visited Amazon’s pricing page to see what the changes were. I’m not entirely sure this is everything they changed, since I don’t have an earlier version of the contract for comparison, but the major change I could identify is this: in order to qualify for Amazon’s 70% royalty rate, e-books now have to be priced at $2.99 or higher. Books priced between $0.99 and $2.99 receive only the 35% royalty payout.

Brutal. I sympathize with authors who must have watched their royalty rate drop to 35% in just 24 hours. Except for one problem–they shouldn’t be selling their books for $0.99 in the first place.
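
To see just how lopsided the math is, here’s a quick back-of-the-envelope sketch (in Python) using the royalty tiers described above. It ignores delivery fees and any other adjustments Amazon may apply, so treat the figures as approximations rather than exact payouts.

```python
# Back-of-the-envelope royalty comparison using the tiers described above:
# 70% for e-books priced at $2.99 or higher, 35% below that.
# Ignores delivery fees and other adjustments, so these are rough numbers.

def royalty_per_copy(price):
    rate = 0.70 if price >= 2.99 else 0.35
    return price * rate

for price in (0.99, 2.99, 4.99):
    print(f"${price:.2f} list price -> about ${royalty_per_copy(price):.2f} per copy")

# A $0.99 book earns roughly $0.35 per copy; a $2.99 book earns roughly $2.09.
# In other words, it takes about six sales at $0.99 to match one sale at $2.99.
print(f"Copies needed to match one $2.99 sale: "
      f"{royalty_per_copy(2.99) / royalty_per_copy(0.99):.1f}")
```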

Keep in mind I’m not talking about short-term promotional discounts where you lower your book’s price to $0.99 from some higher number. I’m talking about setting your book’s regular price to under $2.99. There are two reasons this is a bad idea that could actually negatively impact your sales (even before the Amazon pricing changes):

People perceive something they receive cheaply as having less value. Especially when self-publishing was getting started, many indie writers priced their books low with the expectation that readers would see them as a good deal and be more likely to buy. However, when a product (and that is what a book is) is priced noticeably lower than products in the same category and it’s not obviously a short-term sale price, buyers are more likely to evaluate its worth critically. This is a side effect of what’s called the “anchor price” phenomenon: within a small range, whatever price people are accustomed to paying for a product is the anchor price, whether it’s a book or a diamond ring or a plasma TV. The anchor price is the perceived market worth of that product. When a buyer sees a product priced significantly lower than other products in the same category (especially if it isn’t part of a sale), they are likely to be suspicious about its quality.

In contrast, a phenomenon called the “zero-price effect” means people often perceive a product they received for free as being more valuable than it is. However, when it comes to works of subjective value like a book, the tradeoff of higher perceived value is often more critical reviews. It’s true: people are more likely to review a book harshly if they received that book for free.

This seems counterintuitive at first. After all, if you didn’t have to part with your hard-earned cash to get the book, aren’t you more likely to go easy on it? I’m not a psychologist, but if I had to guess, I’d say that when you receive a product for free you’re less likely to forgive its flaws, because part of forgiving flaws involves rationalizing your own purchase decision. You want to convince yourself that money you spent on a book was money well spent, that the book was worth what economists call the “opportunity cost” of the money to acquire it and the time to read it. But readers who receive a book for free have no skin in the game. They don’t have to rationalize their purchase decision. Instead, they may be looking for reasons why the book is free, and one of those might be that the author didn’t think it was of high enough quality to sell. Which brings us to point two:

 Pricing your books low sets you up to devalue your own work. Putting your books out there for consumption by a global audience is scary, I know. I think the motivation behind a lot of indie writers selling their books for $0.99 is the perception that buyers will be less likely to balk at making a purchase if the book is cheap (see the “good deal” argument above). But I think the perception often goes deeper than a tactic to goose sales. Self-published authors are still struggling with a certain amount of stigma, especially with the perception that their books are lower quality than traditionally published books. That inner voice sounds something like this: How can I make people pay full price for a book that wasn’t good enough to get published?

Let me say, simply, stop that. Stop that kind of thinking right now.

First of all, the quality of a book has nothing to do with how it was published. Nothing. As a self-published or indie author, you chose to use the tools available to provide your work directly to readers rather than go through the intermediary of a publisher. That’s it. Avid readers will be the first to tell you they’ve read plenty of crappy traditionally published books. Buying a traditionally published book no more guarantees quality than buying a self-published book guarantees crap.

It is true that with self-publishing your book’s quality is entirely on you. As an indie writer, you are an entrepreneur in an international business*. And of course, as an entrepreneur, you would never publish a book that isn’t of publishable quality, right? Your book has been professionally edited, proofread, and formatted before going up for sale, right? You’ve hired the talent to do up a professional cover and invested in a website, right? Of course you have. You’re a professional writer using the tools available to you to reach readers directly.

This is the attitude I encourage indie authors to adopt toward their business. It’s definitely an attitude I’m still cultivating as well. Pricing our work at market value is one way to signal to readers that we’re serious, that we realize this is an international business, and that we’re doing our part to make our work competitive in it. Competitive doesn’t mean as cheap as possible. It means selling our books at a fair price that reflects the years of hard work and out-of-pocket investment we’ve placed into each and every title. Because if authors don’t value their work at its market price, why should readers?

*Thanks go to Kristine Kathryn Rusch and her blog on the business of publishing for helping me cultivate this perspective.


A Singular Enlightenment

I’m a reformed Singularitarian. In high school, my dad gave me a brick-like copy of Ray Kurzweil’s The Singularity Is Near, a spiritual successor to his earlier speculative books on the future of artificial intelligence, The Age of Intelligent Machines and The Age of Spiritual Machines. Already a long-time fan of science fiction and imagined futures, I opened The Singularity Is Near and read it cover to cover.

And I was converted. Kurzweil’s calm, rational explanations of how we would meld our bodies and minds with machinery and become immortal made perfect sense. His charts tracking technology’s exponential growth, with Moore’s Law predicting the doubling of computing power every 18 months, seemed to have history on their side. It would only be a matter of time before we attained enough computing power to model a human brain, then many human brains, and eventually computing power equivalent to that of every human brain on Earth. It didn’t hurt that I was reading a lot of science fiction by hard SF writers like Greg Egan and Charles Stross that basically stated the same thing. If anything, though, Kurzweil was more ambitious than science fiction: published in 2005, The Singularity Is Near claims we will have the computational chops to simulate a human brain by about 2025.
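
For a sense of why that timeline felt so plausible to me back then, here’s a toy calculation of my own (not Kurzweil’s figures) showing what doubling every 18 months implies over two decades. The time span is arbitrary; only the doubling period comes from the claims described above.

```python
# Toy illustration of the exponential growth assumption described above:
# computing power doubling every 18 months (Moore's Law, as cited in the post).
# The 20-year span is an arbitrary choice for illustration.

doubling_period_years = 1.5
years = 20

doublings = years / doubling_period_years   # about 13.3 doublings
growth_factor = 2 ** doublings              # about 10,000x

print(f"{doublings:.1f} doublings in {years} years")
print(f"Computing power grows by a factor of roughly {growth_factor:,.0f}")

# With four orders of magnitude of growth baked into the trend line,
# almost any fixed computational target can look reachable "in about twenty years."
```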

I no longer believe this. In some ways, I think it was my own fervor that did me in. In university and as a graduate student I started finding and reading more books on science and technology, denser theoretical works that explore many of the same themes as The Singularity Is Near. I was able to design my own syllabus in my program, so I loaded it with works on artificial intelligence, cybernetics, virtual worlds, and the history and social impact of technology. To quote Tennessee Williams, I guess you might call it the catastrophe of success: Kurzweil’s book had gotten me interested in the future of science and technology in the first place; yet as I read deeper and sought mastery of the subject, I realized how many assumptions his thesis of the Singularity rests on.

For instance, he claims the human brain is basically just a very complex computer, and that if we had enough computing power and could compress the brain’s data the way we shrink an mp3 file, we could effectively simulate it. However, the more I read the more I could see the buried contradictions and gaps in this idea. The brain is no more a computer than a car is actually a horse (even though its motive power may be rated in horsepower). You have a highly complex biological organ that has taken billions of years (counting the whole evolutionary age of the Earth) to evolve, embedded in an equally complex organic body that interfaces with it in ways we still don’t fully understand. In turn, the brain-body system is embedded in a very specific environmental context that has evolved right along with it. To think we could somehow reduce this level of complexity into a linear series of bytes, no matter how large, is naive if not arrogant. Even more so the prediction that this task could be done in the next twenty years.

Once the cracks formed in my conviction that the Singularity was Inevitable with a capital I, I saw how deeply the tendrils of early modern Enlightenment thought have penetrated the Singularitarian movement. The Singularity is the inevitable, historically predetermined moment (via the exponential advance of technology) in which our technological (read: rational, industrial) advancements will master nature (in this case, extended into our own bodies), and free us from the messy “wetware” of the lifeworld into a perfectly abstract (indeed virtual) world shaped totally by human will and rationality. The dream of modernity before Nietzsche brought down the nihilistic hammer lives on in the Singularitarian movement.

And it’s still strong. Not long after I finished the science studies course, I volunteered at an annual conference put on by the World Future Society. As you might imagine, these guys’ business is coordinating professionals—academics, writers, economists, scientists and others—under the loose umbrella of trying to extrapolate future trends and developments, mostly in technology. Futurism covers everything from conservative, data-driven economic predictions reaching just a few months into the future, to far-fetched discussions of space colonization and time travel. Of course, the Singularitarians were well-represented. I talked to a few of them, neither disagreeing nor agreeing with their claims. Just listening. Trying to get a bead on how they really felt, under their rational minds, about the whole project of the Singularity.

Because the Singularity is a project. Enlightenment thinkers cast modernization in terms of an inevitable process, ordained by nature, history, or the underlying structure of human society. But Singularitarians are intensely aware that none of this historically inevitable technological advance can happen naturally. It’s one of the contradictions of the Singularity that parallels early modernism: the Singularity is both historically inevitable and utterly dependent on our actions for its realization. What’s changed, I think, is the level of conscious direction involved. No one trusts anymore that the process will arise out of some universal structure underpinning human patterns of technological development. There’s a considerable if sublimated current of anxiety running through the Singularitarian movement. Maybe the collapse of meaning represented in Nietzsche’s famous “God is Dead” still has us running scared after all. The Singularitarians’ answer, I think, is there can still be a God—but we have to build Him.


Large Hadrosaur Collider*– Or, Thoughts on Jurassic World

*Disclaimer: my resident irreverent jokester thought of the title. I take no credit for it.

Jurassic World is one of those rare sequels that is such a colossal letdown it makes me question not just why it exists, but why the entire industry of franchised sequels exists. (Other than making kajillions, a point I’ll return to below.) If the first sentence of this review made you groan and brace yourself for another laundry list of all the ways Jurassic World is 1) sexist 2) scientifically inaccurate 3) nonsensical, don’t click away just yet. I’m not here to rehash those points. The movie is all those things, but other fine people have already covered that ground.

I’m also not saying franchise sequels’ quality is always inversely proportional to the rising number after the title. Lots of franchises have fine sequels that build well on their original films (The Dark Knight, Terminator 2, Aliens). In some cases, the sequel can even validate the original (Mad Max: Fury Road).

Hence the true sin of Jurassic World–a franchise sequel that takes the opportunity to engage with the existential themes of Jurassic Park and squanders it on a four-year-old’s play hour of smashing plastic dinosaurs together.

I love Jurassic Park on several levels: the palpable sense of wonder as the characters have their first encounters with living dinosaurs; the questionable wisdom of creating life for our amusement, however innocent John Hammond’s motives; the terror and helplessness of the characters as the park’s carefully orchestrated controls break down and they realize they are no longer at the top of the food chain; and most of all, how they rally to the situation and just manage, with cooperation and grit, to escape by the skin of their teeth. Jurassic Park refuses to conclude with a comfortable reassertion of control. Its message is that the raw force of nature cannot be controlled; at best, we coexist with it uneasily.

Set twenty-some years after the events of Jurassic Park, Jurassic World retcons The Lost World and Jurassic Park 3, establishing itself as a sequel to the first movie. Isla Nublar has become a multi-million dollar resort and amusement park that draws hundreds of thousands a year with the prospect of seeing living dinosaurs. Except for a few rough interactions between the ice-queen corporate executive, Claire (Bryce Dallas Howard), and her adorable nephews (Claire would be a great mom if she just tried, you guys), the first twenty minutes of the movie are a delight. I loved seeing Mosasaurs shown off in Sea World-style performances (warning: first fifty rows will get wet), and baby protoceratops saddled up like prehistoric ponies in the “Gentle Giants” petting zoo. The cynical technician wearing a Jurassic Park T-shirt and pining for the authenticity of the original park is a stroke of diabolical brilliance.

And that’s where Jurassic World almost had me. The first act dives pretty unflinchingly into the same territory of profiting off resurrected dinosaurs, offering a strong critique of the commodification of the natural world for human entertainment. If the failure of John Hammond’s vision in Jurassic Park made us wonder what could have been, Jurassic World lays bare the banal reality of a corporate attraction in which dinosaurs are just another product for consumption. And like any product, even living dinosaurs only hold the interest of a jaded public so long, pushing the corporate interests that run Isla Nublar to come up with new species to increase attendance rates.

This treadmill quest to attract visitors with what’s new and shiny leads to the creation of the Indominus Rex, a mishmash of dino DNA which embodies a kid’s fantasy of the ultimate prehistoric predator: larger and scarier than a T-Rex, and a better hunter than any “real” theropod. The Indominus is a perfect illustration of what Jean Baudrillard calls the “precession of simulacra”: he argues that entertainments such as Disney World derive their hold over us by creating an artificial reality more interesting and engaging–seemingly more real–than life. Simulacra are copies of things that either never existed, or no longer exist. The Indominus is such a simulacrum–a copy with no original.

It was about here, as I primed myself in delicious anticipation of seeing the massaged user experience of Jurassic World fall apart, that it became an entirely different movie. All signs of intelligent critique halted in favor of bland action scenes that mostly felt like retreads from Jurassic Park. I have no problem with homaging an earlier installment in a series, but in a sequel you’re generally supposed to do something new. Though Jurassic World gives us new characters and a new threat, the sequences that stood out most were those basically lifted from the first movie. A slavering Indominus Rex trying to eat the gyrosphere with the two kids trapped inside is nearly identical to the T-Rex attacking the jeep in Jurassic Park. (And of course, no spoilers, it’s still the T-Rex that plays the starring role in the climax of Jurassic World.)

A scene where Claire and Owen (Chris Pratt) take refuge in the ruins of Jurassic Park is poignant for a far different reason than I imagine the writers intended: it reminds us of the quality of the original film. Jurassic Park is biting, suspenseful, scary, and fun. Jurassic World promises all those things and snatches them away, devolving into another big, dumb (and not all that fun) Hollywood action movie.

I found myself wondering if the sorry state of the back half of Jurassic World was due to executive intervention. It’s as if there was a more satirical script that took aim at the corporate interests driving Jurassic World before studio execs put the hammer down. Such a script would, after all, come uncomfortably close to taking aim at the bottom-line culture of the Hollywood film industry itself.

In a way, the fate of Jurassic World the film parallels the fate of the park: the dream of Jurassic Park–to give people a glimpse of wonder in the form of real, living dinosaurs–is repackaged by commercial interests into another consumable experience. The people cycling through Isla Nublar are still seeing living dinosaurs, but the experience is delivered through the same Disney World lens as any other commercial attraction. Until things go wrong, the dinosaurs might as well be animatronic puppets: they are still commodities created for our consumption, with no more staying power than the movie itself.


Where Are the Futures?

In his article “Where Are the Jobs?”, David Brooks observes that outside of the information technology arena, we have made little progress in realizing the ideal future imagined thirty or forty years ago. We still depend largely on fossil fuels for energy. Medicine still hasn’t found a cure for cancer. Flying cars and cities on the moon are both a bust. The litany goes on.

Brooks attributes this slow-down of progress to two factors: the first is an unavoidable reality of scientific discovery, the double-humped learning curve. A lot of scientific fields were in their infancy a few decades ago: computing, genetics, robotics, spaceflight, etc. The first breakthroughs then seemed like the start of an inevitable watershed of progress. Then scientists reached the top of the first learning curve and were confronted with fundamental issues of physics that forced them to rethink their approach. This is the second, steeper learning curve. If you want to build a colony on the moon, for instance, you must find a solution for the enormous demands of material and energy such a project requires. You must figure out how to make the colony self-sufficient and cost-feasible. This is a lot more complicated than launching a few guys into space.

At this point, I still agree with Brooks: “breakthroughs will come, just not as soon as we thought”. Where I take issue is with Brooks’ conclusion that modern science fiction has also become “moribund”. It’s true that science fiction has changed a lot in the past thirty years: you’re unlikely to find a story in which technological advances act like a deus ex machina to usher us into a utopian future. (Though even these stories haven’t entirely disappeared; Charles Stross’ superb Accelerando is a perfect example.) Brooks argues that “the new work is dystopian, not inspiring,” and that “the roots of great innovation are never just in the technology itself. They are in the wider historical context.”

If there was one question I could ask David Brooks about his article, it would be “How much science fiction have you actually read?” His conclusion belabors the obvious to anyone with even a little grounding in the genre: the idea of a less-than-rosy future has had roots in science fiction since its inception. Dystopia is not a new product of an uninspiring present; it is a commentary on the present in which it was written, a subtle but crucial difference. Jules Verne’s Paris in the Twentieth Century and Mary Shelley’s The Last Man are two early examples of dystopia. I especially recommend John Brunner’s Stand on Zanzibar, a dystopian vision of the year 2010 (written in 1968) which eerily predicts some aspects of the present (while getting others completely wrong, of course).

Science fiction, as much as science, is a product of its wider historical context. If SF stories have moved away from worshipping science, this is a sign of maturation, not stagnation. The new SF has moved into grittier territory, examining the role of science against the dark background of history, misuse, and yes—even the human fallacy of thinking that progress makes perfect. Human emotions, desires and foibles have gotten into the mix. In fact, science fiction is starting to look a lot like literature. And that isn’t a bad thing.
