Unmistakable Marks
Warranted Genuine Snarks
The Power of Music
As everyone knows by now, Apple's digital music store is open for business. I've been reading about this obsessively, and I don't even directly care about it -- I don't have a Mac, the last piece of pop music I bought was back in 1995, and I'm just enough of an audio snob to turn my nose up at the idea of buying a piece of music in a lossily-compressed format. But even though there's no chance of my buying music via this service, it still matters intensely to me: the resolution of the piracy/free-use problem that Apple's come to is likely to be enormously influential (at least, if the service is successful).
It's curious, actually, that the copyright debate has focused so heavily on music, rather than books, movies, or television. Essentially, the music-centricity of the copyright battle is an accident of two technical realities: that music is small enough to copy and share widely (unlike movies), and distributed in a digital form (unlike books). Because of this technohistorical accident, the demographics and business realities of the music business have shaped the terms of the debate.
Which is a pity, really. I can't help but imagine that the world would be a better place if this "conflict" were being played out with literate adults on one side, and people like Tim O'Reilly and Jim Baen on the other. But alas, we're stuck with teenagers and record execs. Let's just hope that they don't make too big a hash out of the deal.
| April 29, 2003
The Myth of Opportunity
One of the rallying cries of the right is "equality of opportunity,
not equality of outcome." As rallying cries go, it's a good one; as a
defense of the status quo, it leaves something to be desired.
I was looking through the Statistical Abstract of the United States today -- I do this every now and then; it's fascinating reading, as tables full of numbers go. I stumbled across the thrillingly named Table 731, Annual Expenditure Per Child by Husband-Wife Families
by Family Income and Expenditure Type.
There's no point in my reproducing the table here when you can
click right through to it, but the numbers for "child care and
education" are worth highlighting. For children between 15-17 years
old (that is, the time when education is going to dominate that
category, and right before the kids need to start worrying about
getting into college), families making less than $36,800 spend $360 a
year on each child; families making more than $61,900 spend
$1,330.
Sure, there are diminishing returns there, and money can't buy
learnin' -- but all else being equal, the child who's had three times
as much money spent on his high school education is likely to have a
significant advantage. And all else isn't equal: That well-off kid
has also had better and more expensive healthcare, food, and housing
throughout his life. There's simply no way that the lower-income kid
is going to be able to compete equally in such a tilted race.
You don't need to make any appeal to "the culture of poverty" to
explain why the children of the poor are more likely to end up poor --
poverty alone provides all the explanation that's necessary.
Fortunately, I can rest secure that the Republicans in charge of our
government are as concerned as I am about equality of opportunity.
| April 26, 2003
Comparative Advantage In Action
Comparative advantage, they say (for certain values of "they"), is one of the principles of economics that's at once true, useful, and non-obvious.
The fundamental concept of comparative advantage is that you should
specialize in, not necessarily what you're absolutely best at, but
what you're relatively best at. (The classic illustration shows how
Portugal can gain by importing wheat from England, even if Portugal is
more efficient at producing wheat.)
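To make the arithmetic concrete, here's a tiny sketch of that illustration -- the labor costs are made-up numbers in the spirit of Ricardo's example, not figures from anywhere in particular:

#!/usr/bin/perl
# Made-up labor costs, in worker-hours per unit of output.  Portugal
# is absolutely better at producing both goods, but England's
# *relative* cost for wheat is lower, so England should grow the wheat.
my %hours = (
    Portugal => { wine => 80,  wheat => 90  },
    England  => { wine => 120, wheat => 100 },
);
for my $country (sort keys %hours) {
    my $cost = $hours{$country}{wheat} / $hours{$country}{wine};
    printf "%-8s: one unit of wheat costs %.2f units of wine\n",
        $country, $cost;
}
# Prints roughly:
#   England : one unit of wheat costs 0.83 units of wine
#   Portugal: one unit of wheat costs 1.13 units of wine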
Recently, I've had the opportunity to apply this notion (a bit
loosely) to my own life. I'm mostly a Java developer; 90% of what
I've done in the last three years is Java development, with a bit of
experience in Microsoft's .NET technology. I have a huge absolute
advantage in Java. Yet, the contacts I've gotten so far have been
going 5:1 in favor of .NET. This baffled me, until I realized that it
was comparative advantage in action.
Because, see, Java's been around for a long time in computer terms,
and is a very popular environment. Java developers, in today's
economy, are a dime a dozen. .NET, though, is new -- there are very
few people who actually have real .NET experience on their resume.
So, compared to your typical job-seeker, I have a very slight
advantage in Java but a huge advantage in .NET. The principles behind
comparative advantage would tell us that the greatest efficiency would
come from my being hired onto a .NET project, and one of the many
other Java developers (even one who was slightly less skilled than me)
being hired into the Java positions.
Of course, if I don't get a .NET job now, I probably never will; if
I spend another year or two doing Java, my relative Java advantage
will be higher, and my current .NET advantage will have turned into a
huge handicap, as all sorts of other people rack up years of .NET
experience. A bit bemusing to consider that the job I get now could
have a drastic effect on the next five years of my professional
life.
| April 25, 2003
Diesel Pigs
The New York Times has an interesting article about the tension between cars and cities. It talks at length about London's experiment in charging a hefty car toll; the toll is deemed a success:
The number of cars entering the cordon zone the day before, the first day of the charge, dropped by about 60,000, remarkable even in the context of a school holiday. One automobile group estimated that average speeds in central London had doubled, nothing less than a miracle in the world of road policy.
That is, I suppose, proof that the plan is actually working to reduce
traffic volume; but it's not proof that the plan has actually made
things better on the whole. The problem of traffic, of course, is
that it causes people to waste too much time in unpleasant
surroundings; the article even gives estimated productivity losses for
the bad traffic. Any traffic solution, then, needs to be judged on
the extent to which it reduces the wasted time and productivity. The
relevant measurement isn't how much less traffic there is; it's how
long it takes people to get where they're going now.
The reduction in traffic is unquestionably great for those drivers
who still drive in London; the question is whether it's good for those
60,000 drivers who aren't there any more. I could probably look this
up somewhere on the Web if I were slightly less lazy; but then, if
Randy Kennedy were less lazy, he'd've put these numbers right into the
story in the first place.
(And then there's the fun of trying to figure out which is the liberal angle here -- for the toll, because it reduces automotive congestion and promotes more eco-friendly mass transit? Or against the toll, because it makes automobile driving more pleasant for the people rich enough to afford it, while making mass transit more crowded for the poor? Applauding a program whose success is measured by the number of comparatively poor people who have been forced out of their preferred mode of transit seems awfully illiberal, but oh those greenhouse gases!)
But whatever faults the story may have had, all is redeemed by the following statistic:
The average speed of a car in central London was 12
miles an hour, or a little faster than the top running speed of a
domestic pig.
Ah, the days of yore, when young men would hop on their pig and go off to the market, and bankers would ride in their pig-drawn carriages. Regrettably, no stats were given on the resurgence of pig traffic in downtown London.
| April 22, 2003
Like That, Except Not
Dan Benjamin talks about what it's like to be unemployed, and he's pretty much nailed how it feels. Assuming that you've got the sort of singleminded focus on work that ants would envy.
Me, I don't. I'm guiltily enjoying my vacation, in fact -- it occurred to me today that this is officially the longest I've been off of work (excluding Christmas breaks) since my freshman year of college. And while this isn't my dream vacation, I figure that I might as well take my silver linings where I can. So if blogging's light here, it's not because I'm out hustling contracts; it's because I'm taking our son to the park, playing Metroid Prime on the GameCube, or taking an afternoon nap.
(Of course, this lemonade-producing attitude is what I've got after a bare week of unemployment. If I'm still on the market in six months, I'll probably be standing in an intersection and shoving my resumé at any car that stops.)
| April 21, 2003
The Silly Web
It's coming up on Easter, which means (for those of us with children), that it's time to stock up on Easter candy. This year, we pretty much limited ourselves to the compulsories -- a chocolate bunny, speckled-egg malted milk balls, jelly beans, and Peeps.
It's all nostalgic enough; but oddly, what Peeps make me nostalgic for isn't my youth, but the Web's. I think back fondly to the days of 1997, looking at pages like Peep Research, and the dozens of other completely silly pages that were among the most interesting things the Web then had to offer.
So, for old times' sake, enjoy.
| April 17, 2003
Layers for the Web
The foundation of good software engineering is layering -- from network models to enterprise applications, coding in layers gives you pleasant abstractions and (relatively) easy maintainability.
Web pages are no different. In the early days of the Web, writing
a Web page involved mashing all the layers together rather horribly;
but it's (almost) possible now to architect Web pages with clean
layers. I've found that it's most useful to think of Web pages as
consisting of three layers: the structural layer, the presentation
layer, and the behavior layer.
The Structural Layer
Everything starts with the structural layer, which consists
of the HTML markup. When writing the HTML markup, it's important to
have the right mindset -- you shouldn't be thinking of how the page
will look or act; you should be thinking of what it is. The
easiest way to do this is to pretend that all your users are blind and
are going to be using audio browsers, and that you have no idea how
audio browsers actually work (which probably requires little
pretense).
Now, instead of thinking "Okay, I want my links to appear in a column
down the right-hand side, and to highlight when the user mouses over
them", you'll be thinking, "Okay, I want to present the user with a
list of links in a separate section from my main text." Rather than
worrying about how you're going to get a certain visual effect, you're
concerning yourself with how your page is structured, and what the
elements of your page actually represent.
If you view your page at this point, before you've done any work
with the presentation and behavior layers, it should be fully
functional (no hidden navigation options, for instance) and look
sensible, in an ugly, bare-bones sort of way.
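To make that concrete, here's a rough sketch of structure-first markup; the class names, headings, and URLs are placeholders I've invented for the example, not anything this site actually uses:

<!-- The main text and the links are marked up as what they are;
     nothing here says anything about columns, colors, or mouseovers. -->
<div class="main">
  <h1>Layers for the Web</h1>
  <p>The main text of the page goes here.</p>
</div>
<div class="navigation">
  <h2>Related links</h2>
  <ul>
    <li><a href="/booklog/">The booklog</a></li>
    <li><a href="/pictures/colorado/">Colorado pictures</a></li>
  </ul>
</div>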
The Presentation Layer
Once you've got a solid structural foundation, you can move on to
the presentation layer, where you'll be dealing with CSS. The most important thing
to remember is to keep your presentation layer separate from the
structural layer.
HTML -- even modern, strict XHTML -- lets you put style attributes on
individual tags. It'll be tempting to avail yourself of this facility
every now and then; resist the temptation. As soon as you start
letting presentation creep into the structural layer, you're going to
find it more difficult to maintain the site. Your goal is to be able
to completely change the look of the site simply by swapping in a new
stylesheet file. So, keep all your CSS in that single file, and make
sure that all your presentational images (that is, images which are
decorative, rather than interesting in their own right) are brought in
from the CSS, rather than with IMG tags.
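In practice, that means something like the stylesheet sketched below; the selectors match the placeholder markup above, and the file and image names are again invented for the example:

/* site.css -- pulled into every page with a single
   <link rel="stylesheet" type="text/css" href="site.css" />
   in the head, so no style attributes ever touch the markup. */

/* Turn the structural "navigation" section into a right-hand column. */
.navigation {
  float: right;
  width: 12em;
}

/* Highlight navigation links on mouseover. */
.navigation a:hover {
  background-color: #ffc;
}

/* A purely decorative image comes in from the CSS,
   not from an IMG tag in the markup. */
.main h1 {
  background: url(flourish.png) no-repeat left center;
  padding-left: 40px;
}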
As an example of how drastically different CSS can make the same
page look, consider my booklog. If you're using Mozilla (I'll
explain the reason for that restriction when I talk about the
behavioral layer), you'll see a "Switch Style" link. Clicking on it
will cycle through different looks, all of which are variant CSS
stylings of the same HTML. You're going to want to switch your look
around that drastically at some point, so go to every effort to make
things easy on your future self.
The Behavioral Layer
At this point, you've got a Web page that's structurally sound, and
that looks as pretty as your mad design skillz can make it. The only
other thing you might want is to change how the page acts; this is
the behavioral layer, and for practical purposes, you can
consider it to be JavaScript.
JavaScript has a bad rep on the Web, for a couple of reasons. One
reason is that pre-modern browsers (the 4.0 generation) had wildly
non-standard behavior, and writing JavaScript meant favoring one
browser at the expense of others, leading to the infamous "This page
best viewed in..." notices. Thanks to the standardization of the
DOM, this is no longer a
problem. It's mostly possible to write perfectly standard JavaScript
that'll work on the latest versions of any browser.
But the biggest problem with JavaScript's reputation was, and still
is, JavaScript programmers. While any Web designer worth his salt has
abandoned presentational HTML in favor of CSS, Web scripters still
tend to use terrible, awful, hackish code that smushes the behavioral
layer right into the structural layer in ugly ways. The bad news is,
this isn't going to change soon -- for better or worse, JavaScript
programmers tend to be inexperienced programmers (or, as often,
designers pushed into doing something they don't quite understand),
and are unlikely to discover good architectural principles en
masse. The good news is, you can do things right.
Or, at least, mostly right. As with the presentation layer, there
are two basic rules you want to follow with the behavioral layer.
First, you want to make sure that the default behavior is good enough.
The page should be completely usable without JavaScript; if you did
things right when you were writing the structural layer, you're all
set here. Second, you want to separate off your behavioral code as
much as possible. As with the presentation layer, your goal should be
to have all your code in a single file, with nothing more than a link
to that file in the HTML.
Unfortunately, this turns out not to be quite practical yet. The problem is with the DOM Events specification. DOM Events lets you associate JavaScript with particular events (a window opening, a user clicking on a picture, whatever). Unfortunately, DOM Events support is terrible in even modern browsers. Mozilla supports it just fine, of course, but neither Opera nor IE can handle it at all.
This leaves you with an ugly choice. Either you can structure your page properly and have your JavaScript fail to execute on most browsers, or you can litter up your page with ugly little event-handler attributes (onclick="doSomeFunction()" and the like).
Unless you're writing for an intranet, where you can guarantee that everyone will have Mozilla installed, or a personal page, where you value clean architecture over practicality, you should probably just break down and use the stupid event-handler attributes. But pay attention to progress in this area, because someday it really will be possible to do this right.
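For what it's worth, here's a rough sketch of what the two approaches look like side by side; the element id, function names, and file name are all invented for the example:

// behavior.js -- pulled in with a single
//   <script type="text/javascript" src="behavior.js"></script>
// so that the markup itself contains no scripting at all.

function showPreview() {
    // ...swap in a thumbnail, open a popup window, whatever...
}

// The clean, DOM Events way: attach the handler from out here,
// leaving the HTML untouched.  (This registration step is the part
// that most of today's browsers can't cope with.)
function setUpBehavior() {
    var link = document.getElementById("preview-link");
    if (link && link.addEventListener) {
        link.addEventListener("click", showPreview, false);
    }
}
window.onload = setUpBehavior;

// The pragmatic alternative is to skip setUpBehavior entirely and
// write onclick="showPreview(); return false;" right on the link --
// exactly the sort of attribute litter described above.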
This, of course, explains why the style switcher on my booklog only works in Mozilla. On my personal site, I've taken a relentlessly forward-looking approach to matters. It's interesting to note, though, that since I'm using a layered architecture, nothing ever works worse because my JavaScript isn't supported; it only fails to work better. Consider my Colorado picture gallery. There's JavaScript associated with that page, such that clicking on the links at the bottom of the page will preview the image in a thumbnail, and clicking on the thumbnail will load a full-sized image in a popup window. Because I'm using DOM Events, this won't work in most browsers -- but because my structural layer provides sensible default behaviors, users in any browser can still view my images by clicking on the links.
Notice that I didn't have to work hard to make sure that older browsers were supported; the page was designed to be correct at a basic structural level before I even thought about adding presentational and behavioral fillips. This is the philosophy of progressive enhancement, and it falls naturally out of a layered approach to page architecture.
That kind of emergent benefit is the pudding-proof that a layered
Web page architecture is worth the hassle -- ultimately, every elegant
theoretical architectural principle needs to be matched to reality to
see if it really makes things better or is just needless
overhead. This architecture is a good 'un.
| April 17, 2003
I never meant for it to be like this
Looking back at my last entries, I see that this place has inadvertently turned into a tech blog. Oops. The problem is that everything interesting that I've read lately has been about technology or the war, and I'm making a point of avoiding war talk.
So... how 'bout dem Packers?
| April 15, 2003
Premature Pessimism
Jeffrey Zeldman today leaps on the anti-XHTML 2.0 bandwagon. This seems to be a popular bandwagon of late, but I can't for the life of me figure out why. Most of the resistance seems to be resistance to change in general, rather than to any particular changes. Consider Zeldman's complaints:
- The IMG tag is removed in favor of OBJECT. From Zeldman's article, you'd believe that the HTML Working Group made this change simply to spite Web developers. In fact, there's a strong and sensible reason to make this change: OBJECT degrades better, so that you can use (for instance) a Flash file that would display as an image if the Flash plugin weren't installed, and as marked-up text if image display weren't possible (see the sketch after this list). This is a vast improvement over the limited ALT attribute that IMG gives us.
- The OBJECT tag is broken in Internet Explorer right now. This is just a bizarre complaint. Nothing in XHTML 2.0 will work in current Internet Explorer. Heck, even XHTML 1.1 doesn't work properly in IE. Yes, this means that would-be authors of XHTML 2.0 will need to wait until newer browsers are widely deployed, but that'd be the case almost no matter what XHTML 2.0 looked like, so it can't be used as an argument against the spec.
- The BR tag is deprecated. Zeldman makes this sound like an ivory-tower move by people who've never written real HTML, and thus don't realize that BR sometimes can come in handy (though it's handy much less often than it's abused). The HTML people do realize that there's a use for specifying line breaks -- they've just decided that it's better to do it with the <l> element (short for "line"), which is less amenable to cheap abuse. If he's got a legitimate use-case for BR that isn't solved by the line element, I'd be surprised.
- XHTML 2.0 is more complicated to write by hand. Fair cop, here. It is, and that is bad. But... XHTML 2.0 is also vastly more powerful than previous versions of (X)HTML, and a lot of what makes it complicated is also what makes it powerful. XForms, for instance, are far more complicated than traditional HTML forms; but they're such an enormous improvement that anyone writing real Web applications would kill to have them. If XForms had existed three years ago, I'd've probably saved hundreds of hours of programming by now, and we'd be spared abominations like the ViewState kludge in ASP.Net. Ideally, the XHTML people would find a way to make things simpler without removing the power; but failing that, I'll take the power. XHTML 1.0 will still be around for those who want simplicity, after all.
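As for that first complaint, here's a rough sketch of the kind of nested-OBJECT fallback the spec has in mind -- the file names and content are invented for the example:

<!-- Browsers without the Flash plugin fall back to the still image;
     browsers that can't display images at all fall back to the
     marked-up text inside. -->
<object data="sales-chart.swf" type="application/x-shockwave-flash">
  <object data="sales-chart.png" type="image/png">
    <p>Sales rose steadily from January through March.</p>
  </object>
</object>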
For all the hand-wringing about the state of XHTML 2.0, it's a solid spec. It's fundamentally an evolutionarily cleaned up version of HTML, with major revolutionary improvements in its form handling, and some nice features (like a navigation list implemented as intrinsic tags rather than the HTML/CSS/JavaScript hacks that you see all over today). It may not give you whiter teeth and fresher breath, but it will fix some long-standing irritations at a manageable cost of added complexity. Whatever complaints there are today, I predict that when XHTML 2.0 does come out, it'll be widely adopted as fast as the installed browser base makes it feasible.
| April 15, 2003
Nobody Likes a MIME
About a month ago, I wrote an entry about XHTML, wherein I mentioned en passant that the handling of XHTML documents depends heavily on their MIME type. Over on XML.com, Mark Pilgrim writes an article which explores this in more detail. The article has an eye toward XHTML 2.0, but it talks explicitly about how XHTML 1.0 and 1.1 should be handled.
It's a great article, with one flaw: The content-negotiation advice Pilgrim gives is wrong. Admittedly, it's wrong in the "Well, in theory, a browser could come along that'd break this algorithm" sense, and in practice will still work perfectly fine; but if you're the sort of person who's writing XHTML now and is concerned about MIME types, you're probably also the sort of person who wants to get all the little fiddly details right. So, let me explain what the article has wrong, and how to do it right.
The key problem with XHTML MIME types right now is that to be as conformant
as possible, you want to serve up XHTML with the MIME type of
application/xhtml+xml to those browsers that say they can handle it, but
text/html otherwise. As Pilgrim correctly notes, the way to find out which
MIME type to use is to look at the HTTP_ACCEPT header. But here's where
Pilgrim goes astray: He suggests just looking for the application/xhtml+xml
string in the HTTP_ACCEPT header, and serving the page with that MIME type
if it's present. In practice, that'll work (right now, with the current set
of browsers); but it is wrong, and future browsers could use HTTP_ACCEPT
headers that would break that algorithm.
The neglected detail is the q parameter. According to RFC 2068 (which
defines HTTP 1.1), the q parameter defines quality values ranging from 0.0
to 1.0; the higher the number, the more desirable the MIME type. This is a
bit abstract, so let's take a look at a concrete example: Mozilla's
well-crafted HTTP_ACCEPT header:
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9, text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,*/*;q=0.1
According to this header, Mozilla would love to get text/xml,
application/xml, or application/xhtml+xml; since no q parameter is set on
any of those, they have the default value of 1.0. Failing that, Mozilla
will take text/html with only a slight degradation in preference (a quality
value of 0.9); if even that isn't available, text/plain will be accepted
only slightly less eagerly. If none of those are feasible, then (ignoring
the image and video MIME types) Mozilla says it'll take absolutely anything
with */* -- but it won't like it (quality value of 0.1).
This header makes perfect sense in the context of Mozilla's actual behavior. It deals with XML and XHTML through a top-notch, standards-compliant processor; HTML has to go through the cruftified Tag Soup processor; plain text is just displayed with no features; and unrecognized types are accepted, but passed along to plugins or the user (via the "What would you like to do with this file?" dialog box) for handling. And, having looked at this header, we know we should give XHTML to Mozilla as application/xhtml+xml.
But imagine now a different, hypothetical browser. This browser has a great HTML processing engine, and loves to get HTML content. It also has an XML parser in it, so it can handle basic XML; but it doesn't have any particular support for XHTML, so when presented with an XHTML document served as application/xhtml+xml, it displays it as an XML tree rather than as a rendered HTML document. This example browser might give us an HTTP_ACCEPT header that looks like:
text/xml;q=0.5,application/xml;q=0.5,application/xhtml+xml;q=0.5,text/html, text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,*/*;q=0.1
Here's where the danger of Pilgrim's fast-and-loose processing of the HTTP_ACCEPT header becomes clear -- his code would look at that header and decide that the browser would prefer to get application/xhtml+xml instead of text/html. Oops.
The only proper way to handle things is to follow the standard fully, and that means parsing out the quality values and serving up the favored type. Fortunately, this isn't all that hard. Here's a snippet of Perl code that'll do it; translate to your language or environment as necessary.
# Unless the client tells us otherwise, we assume that it really wants
# text/html and can't handle application/xhtml+xml (on the basis that
# any browser smart enough to handle application/xhtml+xml is also
# smart enough to give us a decent HTTP_ACCEPT header).
$qHtml  = 1;
$qXhtml = 0;

# Now we look at the HTTP_ACCEPT header.  If a qvalue is given, we use
# that value; if not, we give it a value of 1 (as per the HTTP 1.1 spec).
$accept = $ENV{'HTTP_ACCEPT'};
if ($accept =~ m{application/xhtml\+xml(;q=([\d.]+))?}) {
    $qXhtml = defined $2 ? $2 : 1;
}
if ($accept =~ m{text/html(;q=([\d.]+))?}) {
    $qHtml = defined $2 ? $2 : 1;
}

# Now we serve the client-preferred MIME type.  If the client is
# indifferent between text/html and application/xhtml+xml, we prefer
# the latter.
if ($qXhtml >= $qHtml) {
    print "Content-Type: application/xhtml+xml\r\n\r\n";
} else {
    print "Content-Type: text/html\r\n\r\n";
}
| April 13, 2003
The War's Not Over Yet
Lest my title be confusing to the unwary reader, I should make it clear that I'm talking here about the browser wars. That stuff in Iraq... well, I'm not paying attention to that; I've enough bad news without actively seeking out more.
In browsers, thankfully, the news is all good. I've been using Mozilla as my sole browser for a while now; rather famously (in the community of people who care about such things), the Mozilla folks released a new development roadmap recently. The new roadmap outlines an effective coup, as the rebels of the Phoenix project have taken over browser development.
Phoenix is a rather misunderstood project. If you listen to the press on Slashdot, you'd get the impression that it's trying to be a stripped-down, ultra-lean version of Mozilla. This isn't quite true; what it's actually trying to be is the good version of Mozilla -- one with sensible defaults, sane options, and a cleaner UI. It's also not trying to be the suite to end all suites; it's just a browser, with no email client (though the standalone Mozilla-based Minotaur client would integrate well with it).
I haven't paid too much attention to Phoenix, because I generally make a point of ignoring pre-release software, and Phoenix is still officially at version 0.5. But with the news that post-1.4 versions of Mozilla will be based on the Phoenix and Minotaur applications, I decided to take a look. I downloaded the latest nightly builds (that is, the absolute latest bleeding-edge version, with all the code that the developers finished today but absolutely no pre-release polishing), and... I'm impressed.
I've had three gripes about Mozilla, despite my general fondness for it. The major one is that the default configuration is terrible. There's so much in it that needs to be changed that a new user starting out with it may give up before they discover the good stuff. The more minor ones are that the look of the application is wildly inconsistent with standard Windows XP widgets, and that there's very clearly no guiding hand on the user interface, leading to a proliferation of pointless preferences -- I'm a programmer who keeps up obsessively with Web technologies, and I don't know what some of these acronyms are. FIPS? OCSP? Why would I possibly want to change those settings?
Phoenix fixes my gripes. The default theme is attractive, and all the widgets look and act like native WinXP widgets. There are nice new features, like a customizable toolbar (something that every app except Mozilla has had as a standard feature forever) and better form auto-completion. Most importantly, though, all the defaults are set right. Tabbed browsing is enabled; popup blocking is enabled; everything just works and looks pretty, out of the box.
I don't recommend Phoenix to everyone, not yet. I've noticed a few
minor bugs already (which is very much to be expected of any nightly
build), and I haven't been using it for very long. But overall,
Phoenix is very, very good and promises to dramatically improve the
already-great Mozilla. Thanks to its default bundling, Microsoft has
a commanding lead in the browser market now; but unless IE7 is
spectacular, it could easily become the AltaVista to Mozilla's Google.
| April 9, 2003
Party Like It's Late 2000
So, in what must surely count as the final repercussions of the failing of the tech boom, my employer -- scratch that, former employer -- just went down. I hope this excuses my lack of recent blogging.
On the silver lining side, I should have scads 'n' scads of time for blogging starting next week.
| April 9, 2003
Stagnating Happily
There's been a spate of articles lately about a housing bubble,
with experts predicting that in some areas home prices won't
appreciate at all in the near future. As a new home-owner (assuming
nothing unexpected happens between now and May 2), I sure as heck hope
they're right and that our house doesn't appreciate one whit.
I'll pause a moment while you recover from the shock of that
provocative statement and put your monocle back. Ready?
The thing is, when we do sell our house, we're not going to be
moving back into an apartment -- we'll be moving into another house,
and hopefully a bigger and more expensive one. Since the new house is
also part of the housing market, whatever gains our house has had will
also have been had by the new place. And since we'll be "trading up",
the house we don't own will have experienced a bigger rise in price
than the house we do own. Assuming a big enough differential between
our current house and our future house, appreciation might end up
making us poorer.
Allow me to give an example. Let's say a young homeowning couple
buys themselves a nice $150K starter house. (This may or may not be a
starter house in any particular neighborhood -- around here, it'd be
small and in a bad neighborhood -- but we'll use it for the sake of
example.) What they really wanted, though, was the gorgeous $450K
McMansion in the new suburb. "Well," they console themselves, "we'll
live here for ten years, and then we'll be able to afford that more
easily."
Ten years pass, and they've paid their mortgage payments faithfully,
thereby getting the balance on their house down to $116K (figuring 5%
down, and a 6% fixed-rate 30 year mortgage). What's more, house
prices have risen at 3%, so their house is now worth $201K. They have
$85K in house equity now, and are primed to make a run at that $450K
house.
(Inflation makes the calculations more complicated, but doesn't
change the result. For purposes of discussion here, assume that I'm
talking about a 3% appreciation above the rate of inflation, and am
therefore using constant dollars throughout.)
Except, that $450K house also partook in the 3% growth, and is now
worth $605K. So, ten years ago, when they had no equity at all, the
young couple would have needed to come up with $450K to buy that
house; now, when they have $85K in home equity, they need to come up
with another $520K. Even with their newly accumulated worth, the big
house has gotten further out of their reach.
On the other hand, if house prices had stagnated for that decade,
our fictitious couple would only have $34K in equity, but that $450K
house would still only be $450K, and the couple would only need to
come up with an additional $416K to buy it.
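For anyone who wants to fiddle with the assumptions, here's a quick back-of-the-envelope script for the scenario above (5% down, 6% fixed-rate 30-year mortgage, 3% appreciation above inflation); the exact figures will wobble by a few thousand dollars depending on rounding:

#!/usr/bin/perl
# Back-of-the-envelope version of the starter-house example above.
my $price        = 150_000;    # the starter house
my $dream        = 450_000;    # the McMansion
my $down         = 0.05;       # 5% down payment
my $rate         = 0.06 / 12;  # 6% annual interest, compounded monthly
my $months       = 360;        # 30-year mortgage
my $appreciation = 0.03;       # 3% appreciation per year, above inflation
my $years        = 10;

my $loan    = $price * (1 - $down);
my $payment = $loan * $rate / (1 - (1 + $rate) ** (-$months));

# Remaining mortgage balance after ten years of payments.
my $paid    = $years * 12;
my $balance = $loan * (1 + $rate) ** $paid
            - $payment * ((1 + $rate) ** $paid - 1) / $rate;

my $value  = $price * (1 + $appreciation) ** $years;
my $target = $dream * (1 + $appreciation) ** $years;
my $equity = $value - $balance;

printf "Starter house now worth: \$%.0fK\n", $value / 1000;
printf "Equity built up:         \$%.0fK\n", $equity / 1000;
printf "Dream house now costs:   \$%.0fK\n", $target / 1000;
printf "Still needed:            \$%.0fK\n", ($target - $equity) / 1000;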
All of which is to say, if you plan to buy another house when you
sell your existing one (rather than moving into a retirement
home/nursing home/grave), it's probably not in your best interest for
the housing market to rise. C'mon, stagnation!
| April 4, 2003
The Ephemeral Backlist
Everyone's used to seeing bestselling book lists, but according to a Slate article, until recently, nobody had ever seen a full sales list. But with the deployment of Nielsen BookScan, there's now a reliable record of exactly how much every title sells. And the results show that classic literature does better than you might think:
Take Jane Austen's Pride and Prejudice. It sold 110,000 copies last
year, according to Nielsen BookScan, which excludes academic sales
from its calculations -- which means these numbers aren't inflated by
students who have no choice but to buy Austen. Compare it to figures
for, say, The Runaway Jury by John Grisham, which was the No. 1 best
seller in 1996: Last year, Grisham's novel sold 73,337 copies -- almost
40,000 fewer than Pride and Prejudice. ...
Take Leo Tolstoy's War and Peace -- which runs some 1,400 pages and
is not a book you associate with light bedtime reading. Last year, it
sold 33,000 copies, according to BookScan. The Cardinal of the
Kremlin, another Russia-set novel, by spy-genre grandee Tom Clancy,
and 1988's No. 1 best-selling book, just barely scraped ahead of War
and Peace, with 35,000 copies sold. Its sales have been dropping, and
it probably won't hit those figures next year, or ever again.
This sounds superficially surprising, but upon further reflection, it shouldn't. Looking at historical bestseller lists shows that unless a book gets into a classic rotation, it's likely to disappear from memory quickly. Consider the top-selling fiction list from 1953, a mere 50 years ago:
- The Robe, Lloyd C. Douglas
- The Silver Chalice, Thomas B. Costain
- Desiree, Annemarie Selinko
- Battle Cry, Leon M. Uris
- From Here to Eternity, James Jones
- The High and the Mighty, Ernest K. Gann
- Beyond This Place, A. J. Cronin
- Time and Time Again, James Hilton
- Lord Vanity, Samuel Shellabarger
- The Unconquered, Ben Ames Williams
Not only have I not heard of any of those books, I haven't heard of
any of those writers (with the possible exception of Leon
M. Uris, who sounds vaguely familiar for no obvious reason). Even the ten year list isn't exactly inspiring:
- The Bridges of Madison County, Robert James Waller
- The Client, John Grisham
- Slow Waltz at Cedar Bend, Robert James Waller
- Without Remorse, Tom Clancy
- Nightmares and Dreamscapes, Stephen King
- Vanished, Danielle Steel
- Lasher, Anne Rice
- Pleading Guilty, Scott Turow
- Like Water for Chocolate, Laura Esquivel
- The Scorpio Illusion, Robert Ludlum
I recognize all those writers and most of those books, sure, but if I were going to a bookstore today, I doubt I'd be looking for any of them. (Though to be fair, I wouldn't have bought any of those back in 1993, either.) In another 40 years, it's quite likely that they'll all be as obscure as the 1953 list is today.
With backlist competition like that, it's no wonder that Penguin Classics make a steady, modest killing.
| April 3, 2003
The Hills Are Alive
One of the things I miss most, living in Detroit, is a classical radio station. This shouldn't be a big deal in theory, what with having a CD player and a sizable collection of classical CDs, but it's different somehow.
Part of it is just the serendipity factor -- never knowing what
will be on and hearing things that I'd never even heard of is
cool. Another part of it is the ego-boosting factor -- hearing
constant ads for Lexuses, Persian rugs, and upscale restaurants made
me feel all rich and snooty without having to so much as spend a
dime.
Maybe I should start watching golf.
| April 1, 2003
Well-Connected
It's no secret that next-generation digital formats (HDTV, SACD, DVD-Audio) haven't really taken off yet. Conventionally, blame for this is placed on the high prices of the equipment, uncertainty about eventual format wars, foot-dragging content providers, and the general irritation and uncertainty caused by copy protection issues. And that's all right -- but I submit that even if all this stuff were cheap and content was abundant, there'd still be some consumer resistance, because the connectors have historically sucked.
With most existing HDTVs and DVD-A/SACD players, connections need to be analog. This is irritating, because it both degrades the signal needlessly, and requires some truly hideous feats of cabling -- connecting a modern HDTV with analog video requires three standard RCA cables running from each source (e.g., DVD player, satellite receiver, cable box) to the TV; hooking up a DVD-Audio player with analog audio requires no fewer than six RCA cables going between the player and the receiver.
What's worse, all those analog connections are dumb, so none of the devices can communicate at all. Your VCR can't tell your cable box to turn to channel 38 at 7:00; your DVD player can't suggest that the TV flip to input two when it starts playing; and your receiver has no idea who's doing what. Trying to watch a movie might require careful coordination of four devices with separate remote controls. Even if you can hook up your advanced home-theater setup, you're going to be lucky if you can remember how to use it.
But that will change soon, with the advent of HDMI, a new connector that carries high-definition digital video, multichannel digital audio, and inter-device communication all on one small vaguely-USB-style plug. Once this replaces the legacy plugs (which will take a while -- the HDMI standard was only ratified in December, and consumer devices with HDMI connectors don't even exist yet), the rat's nest of cabling and pile of remotes will be much simplified. And that, I predict, is when HDTV will really start to pick up steam -- when your mom can buy an HDTV and actually watch it without a set of written instructions and a Visio diagram.
(Of course, your mom may wonder how come she can't record her favorite shows any more, what with the built-in copy protection on HDMI connectors; but copy-protection-mania is a disease afflicting society widely right now, and it seems ungrateful to blame an otherwise very welcome connector for the sins of the entertainment industry.)
| April 1, 2003
Come, listen, my men, while I tell you again
The five unmistakable marks
By which you may know, wheresoever you go,
The warranted genuine Snarks.
-Lewis Carroll