Final Fantasy XIII

It should no longer be surprising when a new Final Fantasy game leads to violent disagreement among fans of the series. The characteristic marriage of over-the-top production values and highly experimental game design remains a winning formula in spite of – or perhaps because of – the controversy surrounding each new release.

I still haven’t made up my mind about FF13; both the supporters and detractors make valid points. There are some things the game does brilliantly, and other areas where it just falls flat. Given this situation, I thought it would be appropriate to look at some of the design choices made in this game, talking about which of them worked, and which of them didn’t.

What Worked

Automatic Recovery after Battles

Halo brought automatic health recovery to the FPS genre, and now FF13 might bring it to RPGs.

I have maintained (even before FF13) that this is just the Right Way to build an RPG in the modern era. If you look at the trend across the entire Final Fantasy series (which is in many ways representative of the entire JRPG genre), you can see that post-battle recovery has become easier with each installment. FF13 simply jumps to the end-point of this trajectory.

Some hardened RPG vets might complain that automatic recovery robs the game of its challenge. Such a complaint, though, completely misses the point. In older RPGs, surviving to the end of a massive dungeon was always a matter of attrition. Would the little nuisance encounters along the way drain your health, MP and stock of items enough to make the boss fight unwinnable? FF13 does abandon this kind of attrition challenge, but in return it adds a new challenge:

More Difficult Battles

Even in the early/mid game, the no-longer-random battles between boss fights can pose a real challenge in FF13. Enemies hit for a lot more and soak up a lot more damage than in any recent Final Fantasy. Coming into battle with the wrong strategy, or holding to a damage-dealing Paradigm just a bit too long, can quickly bring up the Game Over screen.

The battles are never overly cheap, though, as the more challenging random battles in the earliest JRPGs tended to be. You are unlikely to fall victim to a one-hit KO in the first round of battle, and almost every battle in the story progression can be won on a second or third try by bringing a better strategy to the table. And speaking of second tries…

No Penalty for Failure

Losing a battle in FF13 simply resets your party to a point right before that battle. At most you have to contend with a (skippable) cutscene before you can dive right back into the encounter with your new strategy.

Once again, the old-fashioned RPG vanguard might balk, but FF13 is simply subscribing to a more modern design ethos. Today’s players (if they are anything like me) have less free time, and thus have far less patience for being forced to replay an hour-long slog through a dungeon because of a single mistake made in a boss encounter.

Boss encounters in FF13 can be truly intimidating. Failure is never far away (see More Difficult Battles above), and victory usually depends on an understanding of the new battle mechanics that FF13 brings to the table. This brings us to:

Chaining, Stagger and Launch

The Chain Gauge is the key to victory in FF13’s hardest encounters, and adds a welcome layer of complexity to the battle system.

Every enemy has a Chain Gauge, initially empty. Attacks from a Ravager (aka Black Mage) fill the gauge, but it will quickly drain. Attacks from a Commando (aka Fighter) do little to fill the gauge, but slow its rate of decay. Once the gauge is filled to the top the enemy temporarily enters a Stagger state in which it is more susceptible to attacks of all kinds. Some enemies, when staggered, can be Launched into the air and juggled by successive attacks, preventing them from acting.
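To make the interplay concrete, here is a toy sketch of the gauge as described above, written in Java. Every rate, threshold and name here is invented for illustration; the game's actual numbers aren't documented in this post.

```java
// Toy model of the Chain Gauge mechanic described above.
// All rates and thresholds are invented, not taken from the game.
class ChainGauge {
    private double fill = 0.0;        // empty = 0.0, stagger threshold = 1.0
    private double decayPerTick = 0.10;

    boolean staggered() { return fill >= 1.0; }

    // Ravager attacks fill the gauge quickly...
    void ravagerHit() { fill = Math.min(1.0, fill + 0.15); }

    // ...while Commando attacks fill it only a little, but slow its decay.
    void commandoHit() {
        fill = Math.min(1.0, fill + 0.03);
        decayPerTick *= 0.5;
    }

    // Between attacks, the gauge drains back toward empty.
    void tick() {
        if (!staggered()) {
            fill = Math.max(0.0, fill - decayPerTick);
        }
    }
}
```

The point is just the division of labor: Ravagers alone can race the gauge to the top but lose progress quickly between attacks, while mixing in Commandos keeps the decay manageable.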

The stagger system is a great addition, and really forms the meat of FF13’s combat. Many enemies are effectively impervious to attack until staggered, while others can have their devastating offensive abilities completely shut down by a timely Launch. Mages and fighters feel balanced, without being interchangeable, because both play essential roles in the stagger mechanic.

Roles and Character Differentiation

A big problem looming over the 3D Final Fantasy games has been how to allow players to personalize their character development while still giving each character unique strengths and weaknesses. Final Fantasy VII and VIII occupy one extreme, where characters are nothing more than their base stats and a place to attach materia or junctioned spells. Final Fantasy IX stands at the other extreme, with each character belonging to a single designer-selected class (thief, knight, black mage, etc.) and only able to learn abilities appropriate to that class.

On this spectrum, FF13 occupies a nice middle ground. Every character is allowed to use any class, but has inherent aptitudes that make them most effective in certain roles. These aptitudes aren’t just a matter of their base stats, but are also reflected in the relative cost to upgrade them in each class, as well as the order in which they receive each class’s abilities (if at all). Because of these aptitudes, and because a character can only use one class at a time in battle, the game encourages players to specialize each character in a role that suits them.

What Didn’t Work

The Story

Final Fantasy XIII has an interesting setting (the dual worlds of Cocoon and Pulse) and a somewhat compelling scenario: ordinary people pressed into the service of warring deities, forced to fulfill their prescribed task or suffer a terrible curse. What it lacks, however, is any kind of story arc. After the initial early-game gauntlet is cleared, the protagonists blunder forward without any clear purpose. They don’t even know what task they are supposed to perform, let alone whether they intend to defy it. Once the antagonist steps forward, things only get worse. Even at the end of the game, it is unclear precisely how the protagonists’ ambitions differ from the antagonist’s.

All of this might not be so bad if the dialogue didn’t consist entirely of the various characters taking turns suffering crises of faith, questioning the morality and feasibility of their nebulously defined mission. These crises are inevitably resolved with meaningless anime-inspired platitudes (e.g. “We can do it – we just need to believe in each other!”), and do nothing to further the plot or characterization.

The only visible purpose these ridiculous dialogues serve is to set up the reworked summons:

The Summons

There really isn’t anything about FF13’s “eidolons” that was executed well. First, they are clumsily shoehorned into the narrative; despite characters talking about how important they are, they could be completely excised from the story without any impact. The only time the eidolons appear outside of the scenes that introduce them is in a few style-over-substance cutscenes.

The (non-optional) battles to acquire each eidolon are tedious, and come at the end of some of the most insipid dialogue scenes in the game (see “The Story” above). The idea of a “puzzle battle” probably sounded good on paper, but in practice it is just boring. The conditions for victory are so opaque that the only way to find them is with a guide or the “Libra” technique. At that point, however, you are just applying a set formula, and your intelligence isn’t really being tested.

Your reward for acquiring each eidolon is underwhelming. Each can only be summoned by one character, and then only when that character is the party leader. The summons are barely powerful enough to take down ordinary enemy mobs, but they do have the side-effect of recovering the health and status of the party. As a result, the only reason to use a summon in practice is for its ancillary healing effect in major encounters.

The final nail in the coffin for FF13’s eidolons, though, is the patently ridiculous “Transformers” thing they have going. From a bright red race car to a disturbingly misogynistic motorcycle, these alternate forms for the summons look comically out of place in the game’s setting, and completely shatter suspension of disbelief.

Battle System Design

This might seem confusing after I talked up the whole Chain/Stagger/Launch thing earlier. While FF13 did make some interesting strategic additions to the battle system, it also took several missteps.

FF13 encourages players to lean on the “Auto-Battle” feature, letting the AI select which commands to use and on which targets, and reserving player control for “Paradigm Shifts.” Most battles can be won without any manual command entry, and the game’s creators have even commented about how manual command entry is neither practical (because of the speed of battles) nor desirable (since the AI can exploit nuances of the battle system that are not explained to the player).

Why, then, do I have to sit there and repeatedly press the X button to tell the party leader’s AI to do its job? Why, if I decide to input a command manually, can’t I pause the action while I buffer up commands? Why do I get a Game Over if my party leader falls in battle? Why can’t the non-leader characters use items? Why can’t I switch leaders in battle, or at least manually enter commands for non-leader characters? Why can’t I instruct the AI to favor reviving fallen characters over healing only moderately-wounded ones? Why can I only switch character classes in bulk, instead of for each character?

It may sound like I am just picking nits, but these are all things that Final Fantasy XII – its immediate predecessor – got right! FF13 inherits the emphasis on AI-controlled characters in battle, but robs the system of the extra degree of control (and hence of user-friendliness) that was present in FF12.

End-Game Tedium

In the early game, FF13 seemed to breathe new life into stagnant JRPG conventions. Minor battles can be cleared in tens of seconds (or less), and even boss encounters can be dealt with in mere minutes with the right strategy. It is unfortunate that by the end game, things have changed for the worse.

Random enemy mobs in the late game can take several minutes to clear (which makes grinding for experience more tedious than it needs to be), and boss battles can take 20 minutes or more if your team is even slightly under-leveled. Once your party has gained access to the most important abilities in the Synergist and Saboteur roles, most battles fall into a predictable and tired cycle of (de)buffing, attacking, and healing.

By this point, strategic decision making is largely absent from the picture. The player is responsible for switching Paradigms at the right time, but the set of Paradigms used in each enemy encounter rarely changes – only the particular attacks, buffs and debuffs used by the characters vary. And once the AI characters automatically “learn” what elements and status effects work best in each encounter, the player is left with very little to do during these 10-minute-plus battles.


In the end, lucky number XIII isn’t going to be remembered as one of the best Final Fantasies. It takes a bold stand on where JRPGs might go in the future – aspiring for accessibility with streamlined mechanics – but it stumbles in just as many places as it succeeds.


Learning to Write

The recent slew of changes that Apple has made to the secret iDevice developer agreement has finally pushed me to write up these thoughts, which have been nagging at me for a while now.

In the new world of computing, “devices” are king. And nobody sells a device these days without also having the forethought to build a walled garden around it. Apple is now the poster child for the “app store” model, but the video game industry had already proven the value of controlling both the hardware platform and software distribution.

It would be disingenuous of me to decry the practice outright. I’ve owned a variety of these locked-down devices, and have purchased software through the “approved” online stores. Yes, I have been frustrated by the consequences of restrictive copy-protection – re-buying games that I had already purchased when my Xbox 360 came up with a Red Ring of Death. Yes, I have often pondered “jailbreaking” these devices, and occasionally tried it out on those that I was willing to risk “bricking.”

In all of this, I have never stood up and “voted with my wallet” – passing up on a device because I didn’t approve of its software development and distribution model. Simply put, the value I got out of these devices – the latest Nintendo game, or the ease-of-use of the iPhone – surpassed the cost – the inability to load my own software and fully customize the device. Or at least, that was the only cost I could perceive at the time.

Now that I have a daughter, my perception is a bit different.

My generation has already formed a powerful attachment to our mobile devices – I often joke that given the choice between air and my iPhone I would have to think carefully. My daughter is going to grow up thinking that it is normal to be able to stream live video from across the world while riding in the back seat of a car. We could debate whether this pervasive access to technology will be harmful for the next generation, but this misses the point. The presence or absence of technology is not what is important. What is important is how future generations will relate to the technology of their time.

Remember that written language was once a “technology.” Those of you reading this have grown up in a world immersed in that technology; most of us are within sight of written words every moment of every day. The spread of literacy over millennia has changed human society, human history. We know that mastery of this technology – the ability to both consume and produce – is necessary for success in our society.

Those in my generation who consider themselves “computer literate” can often trace their learning process back to a handful of software systems: “turtle graphics” in Logo, BASIC on the Apple II or DOS, HyperCard on the early Mac. All of these systems allowed anyone with a computer to experiment with programming, and allowed even young children to use the technology to create and not just consume. The Scratch project provides a similar exploratory programming environment for today’s children. My wife teaches a technology course that, among other things, has 7th and 8th grade students build their own computer games using Scratch.

In case you hadn’t noticed, all of those software systems have another feature in common – none of them are allowed in Apple’s App Store. Children who want to use an iPad to create animations or games in Scratch can no longer do so. If they are especially motivated, and have helpful parents, I suppose they could pay to join Apple’s developer program, go out and buy dense technical books to teach them C, Objective-C and Apple’s proprietary APIs, and spend months or years creating a project that would have taken mere minutes in Scratch.

That hardly sounds fair, though. It’s like telling kids they can have their Dr. Seuss book after they finish a thesis on Finnegans Wake. There is a reason these simple, intuitive programming environments exist, and the companies selling these devices shouldn’t just ignore the programmers of the future – the authors of tomorrow’s technology – in the name of platform lock-in.

I guess I’ve already nailed the point into the ground, but just to throw in a few closing comments: There are two technologies that shaped me most in my formative years – books, and desktop computers. What really frightens me is that both of these are brilliant ideas that would never be invented in today’s world. Books, which can be read, shared, bought and sold by anyone in any country – that don’t need to be bought separately for at-home and on-the-go use – would never even be considered by today’s media companies. The earliest PCs/Macs, which allowed anybody to buy, sell, install and develop any software they wanted, without diverting revenue back to the hardware manufacturer, would be seen as a squandered opportunity.

These technologies – books and computers – changed the world for the better, and now we are making haste to bury them in favor of their more profitable successors.

In Defense of Explosive Barrels

I’m always a little confused when people talk about “realism” in games. Maybe it started with Old Man Murray’s famous screed about the medium’s overreliance on crates.

As comedy, this kind of analysis works. More worrying, though, is when players, critics and designers decide that the conceits of the medium, or of a particular genre, should be excised in the name of “realism.”

The classic elemental environments (fire level, ice level, etc.) have already fallen out of favor. Characters being obstructed by knee-high fences is unforgivable in the era of open-world games. RPG loot drops must be rationalized because a celebrity game designer noticed that birds don’t carry swords. And the explosive barrel – stalwart veteran of FPS design – is now seen as a ridiculous affront to realism.

What the detractors fail to acknowledge is that explosive barrels are fun. Very fun. I’d go so far as to say that the explosive barrel is one of the fundamental elements in the Periodic Table of Interactivity.

Just trying to enumerate the possible uses for a well-realized explosive barrel (like the version in Half-Life 2) reveals how much interactivity is packed into each:

The barrel is a physics object, and thus can be stacked or climbed upon. It can be thrown with the game’s gravity gun, whereupon it explodes on impact, providing an impromptu grenade launcher. It can be exploded with a burst of gunfire, but a few careful shots with the pistol will start it burning, creating a timed explosive used by player and enemy alike. Finally, the barrel can be triggered by other explosions, leading to the possibility of chain reactions engineered by the level designer, or by savvy players.

The explosive barrel is a powerful tool of player expression. It elevates the vocabulary afforded to the FPS player beyond just “point” and “shoot.”

I could go on raving about the merits of the explosive barrel, or describe how other “unrealistic” conceits like elemental levels/abilities or double-jumping benefit their respective genres, but I hope my point is clear.

These “cliches” persist because they are more fun than the monotonous grey “realism” of so many of today’s games.

[Note: I do give the TV Tropes site credit for understanding that tropes are not bad. Now we just need to get the players out there to understand it.]


I have been neglectful of my nascent blog since the beginning, but now I have an excuse. Just over a month ago, I became a father.

Spending a month at home with my wife and falling in love with baby Morganne has been an amazing experience. Even though many people had told me what to expect, I was completely unprepared for what an exhilarating and terrifying endeavor it is to be responsible for a new life (in every sense of the word).

I’m back at work now, which I’m sure will present its own challenges for the new parents. Whether this bodes well or ill for the prospect of further posts, I do not know.

The Inevitable Fall of Link and Samus

When I first played through The Legend of Zelda: The Wind Waker, I felt the nagging sense that something was wrong. Despite the beautiful art style and excellent combat mechanics, the game didn’t seem to live up to the standard set by Ocarina of Time and Majora’s Mask. At the time I dismissed these concerns; nostalgia has a tendency to cloud judgment, and my own nostalgia for the N64 Zelda games is pretty hefty.

When I had the same reaction to Twilight Princess, I began to worry. Were my expectations for the Zelda series so high that no game could meet them? Or was there something real, something identifiably lacking in these games as compared to their predecessors?

When I dug into Nintendo’s Metroid Prime Trilogy for the Wii, I noticed a similar pattern. The first Metroid Prime is a brilliant game, carefully blending new mechanics and abilities with homage to Super Metroid (itself as close to perfection as is possible). Playing the Prime sequels, Echoes and Corruption, shortly after the original led to the same sense of hollowness I’d felt with Zelda. Though the gameplay mechanics are the same throughout, the latter two games seem to lack the magic of the first Prime.

So what is happening?

The core designs of the Zelda games and of the entire “Metroidvania” sub-genre are pretty consistent (once you strip away the differences in presentation, perspective, combat, and story – you know, the small stuff). Under the hood, these games are about the interplay between exploration and discovery.

The player is presented with a large, varied environment to explore, and given an initial set of tools and abilities (I’ll refer to these uniformly as “items”). Invariably, some parts of the world are inaccessible at first; the player might take note of a ledge just out of reach, or a suspicious crack in a wall of stone. The reward for diligent exploration is then discovery – of new items that render those parts of the environment accessible.

It is this cyclic relationship – exploration leads to discovery, discovery enables exploration – that drives the experience. Clever designers can gently guide players towards the right discoveries in the right order, all while giving them the impression that they are in control; that the discoveries are theirs.

But the reverse is also true, and this is the crux of the problem: careless design gives the player a strong sense of being led on a leash. If every attempt to explore outside the prescribed sequence is impeded by artificial barriers, then exploration ceases to be fun. If each discovered item serves only to lower the next designer-imposed barrier, then discovery ceases to be rewarding.

Once I came to understand these things, I was able to identify the issues that had troubled me in the most recent Zelda and Metroid games. As a service to any game designers listening, I will provide a handy list of things not to include in your games.

Proceeding from least to most egregious:

Items that only work where the designer intends

The hookshot in Ocarina of Time could latch on to almost anything with a wood material applied, whether the designer had consciously planned for it or not. Starting with Wind Waker, however, the hookshot can only latch onto specific designer-placed targets.

Limiting the applicability of an item in this way dumbs down gameplay. The player never has to think about when to use the item; whenever they see the telltale marker, they respond. The rest of the time, the item goes unused. With this kind of design every challenge admits only one solution – the one the designer intended.

Of course, this is precisely why designers employ this approach. When you are planning an elaborate puzzle, you don’t want to think about how it could be approached with every possible combination of items. It is much simpler to just rule out whatever items you don’t want the player to use.

In the limit, though, you end up with:

Items that are only useful for a limited time

This has unfortunately become one of the hallmarks of the Zelda series. Upon receiving a new dungeon item, you can expect to see several brilliant set-piece puzzles using the new item, along with a boss battle in which the item will be critical to success. But once you leave the dungeon, your shiny new toy finds itself relegated to the inventory with the other trinkets.

If you are lucky, the designers might throw you a bone, making the item useful for unlocking a few upgrades, or opening the way to a new area of the world. But because the item can only be used where the designer intends, it eventually loses all utility and just sits in your inventory wasting space.

Twilight Princess is chock full of items like this. The Spinner only works on special “rails,” and there are almost none of these outside of the dungeon where it is found. The horse whistle gets special mention for being useless from the instant it is given to the player. The Dominion Rod is particularly reviled – its only post-dungeon utility is a late-game fetch quest.

Speaking of the Dominion Rod, let’s talk about:

Items that only work as keys

Getting a new item is supposed to be exciting. It might make your player character more effective in combat. It might be used to destroy obstructions or pass obstacles. It might speed up navigation, or enable the use of new shortcuts. The best items do all these things.

Alternatively, you could just make your item a stupid key, uninteresting to use and with no other functions.

The modern Zelda and Metroid games all have their fair share of late-game fetch quests for meaningless MacGuffins. The older Zelda games may have had the standard “collect the 8 Bafmodads to kill the evil Foozle” plot, but at least along with each Bafmodad you got a fun new dungeon item.

The Metroid Prime sequels are the worst offenders here. The primary player task in each region of Echoes is to collect three keys – really, the designers just call them “keys” and move on. Corruption tries to dress up its keys in the form of the “Hyper-Mode” abilities. With few exceptions, though, these abilities are only useful at painfully obvious choke points. Both games feature a missile “upgrade” that locks on to targets and fires a five-missile burst. Of course, the only time players ever use this ability is to open a handful of locked doors.

The most brazen, nonsensical example of this has to be the “Translators” in Echoes. After the defeat of each main boss, the guide NPC gives the player a new Translator which allows the player to read certain messages and open certain doors. Each Translator has a color, and only works with like-colored messages and doors.

Take a second to think about that.

How does it make sense that you need new translation software to read messages in different colors? And even then, what possible reason does this NPC have for teaching you his language in this piecemeal fashion? The only purpose it serves is to corral the player, and that brings us to:

Artificially controlling access to the world

Exploration-based games work best when the player can actually, you know, explore. Unfortunately, some designers have decided that their carefully crafted narrative would be ruined if the player were allowed to access regions of the world even a little bit earlier than intended.

This is of course patently ridiculous, but it doesn’t stop the Zelda games from holding the player’s hand and carefully leading them through the first 10 or more hours of the narrative. In Metroid Fusion and the Metroid Prime sequels, Samus can only explore in those places that her orders have explicitly unlocked.

What are these designers afraid of? Sure, a small number of die-hard players are going to look for opportunities to “sequence break” the game. But the dedication of players like these is a sign of just how much they enjoy a game (look at speed runs for Super Metroid or Castlevania: SOTN and tell me those aren’t labors of love), and their antics do nothing to diminish the enjoyment of less hardcore players.

Wrapping up

What’s happening here is simple – by limiting the options available to the player, you can produce a more streamlined game at the expense of its depth. While these newer games may superficially resemble the classics from which they are derived, they have sacrificed some of the core design principles in order to make production easier (and, one assumes, cheaper).

Nintendo still produces some of the best games out there, and the recent Zelda and Metroid games are still a lot of fun. I’ll just have to learn to live with that sense of hollowness when I play them.

Improving on Constructors

Constructors, as they appear in mainstream object-oriented languages, have numerous issues. Directly allocating objects with constructors creates coupling, and since most languages cannot abstract over constructors, we must resort to techniques like Factory patterns or Dependency Injection to provide the abstraction.
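As a minimal sketch of that abstraction (written in Java, with hypothetical names rather than any particular library's API), a factory moves the one mention of the concrete constructor behind an interface that call sites can depend on:

```java
// Hypothetical sketch: callers depend on the factory interface, never on a
// concrete constructor, so the concrete class can be swapped without
// touching call sites.
interface NodeFactory {
    Node makeVar(String name);
}

class Node {
    private final String label;
    Node(String label) { this.label = label; }
    String label() { return this.label; }
}

class DefaultNodeFactory implements NodeFactory {
    public Node makeVar(String name) {
        // The only place in the program where the constructor is named.
        return new Node("var:" + name);
    }
}
```

A caller holding a `NodeFactory` never writes `new Node(...)` itself; substituting a caching, logging, or subclass-producing factory requires no change at the call site.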

These issues seem to be well understood (or at least well documented), so I thought I’d bring up a less dangerous but no less annoying issue: when I try to code in a mostly-functional style, the approach to construction in C++, Java and C# forces me to write way too much boilerplate.

As an example, imagine I am defining some C# classes to represent a simple lambda-calculus AST:

class Exp { ... }
class Abs : Exp { ... }
class App : Exp { ... }
class Var : Exp { ... }

Immutable Objects

I’d like to work with objects that are immutable once constructed. That means that I will not expose their fields, and will expose only “getter” properties. If I’d like each Exp to have some information on its range in the source code (using a value type SourceRange) I might write a canonical Exp as:

public class Exp
{
    public Exp( SourceRange range )
    {
        _range = range;
    }

    public SourceRange Range { get { return _range; } }

    private SourceRange _range;
}

For an immutable class with a single attribute, I’ve had to write a surprising amount of boilerplate. I’ve had to write the type of the attribute (SourceRange) three times, and variations on the name of the attribute (range, Range, _range) six times.

If I were using Scala, though, I could express the original intent quite compactly:

class Exp( val Range : SourceRange )

This notation defines both a parameter to the default constructor of Exp and a read-only property Range that gives access to the value passed into the constructor.

Derived Classes

So it appears that Scala can eliminate our boilerplate in Exp, but what happens in our derived classes? Starting with a canonical C# encoding again, here is Abs:

public class Abs : Exp
{
    public Abs( SourceRange range,
                string name,
                Exp body )
        : base( range )
    {
        _name = name;
        _body = body;
    }

    public string Name { get { return _name; } }
    public Exp Body { get { return _body; } }

    private string _name;
    private Exp _body;
}

The boilerplate for the new properties is the same as before. What is new, though, is that we are forced to re-state the attributes of the base class in our new constructor. While this seems like a relatively small annoyance at first, we end up having to repeat this boilerplate in each subclass we add. If the base class has a non-trivial number of attributes, this obviously gets proportionally worse.

In this case, even Scala doesn’t provide a way to avoid this kind of boilerplate:

class Abs( range : SourceRange,
           val name : String,
           val body : Exp )
    extends Exp(range)

Extending the Base Class

So what’s so bad about this per-subclass boilerplate? The dogmatic answer is that it is a violation of Once and Only Once. A more pragmatic answer arises if we need to alter or extend the base class.

Suppose we decide to add a Type attribute to Exp. This attribute might have a default value (e.g. null), so existing call sites that create expressions do not need to be updated. How much code do we have to edit to achieve this?

Adding a new field and property to Exp is relatively easy, as is adding a new Exp constructor with an additional parameter. In addition, though, we’d have to update every subclass of Exp to include another constructor with the new parameter.

This is a serious compromise in modularity. If we are creating a class library used by other programmers or other organizations then we may not even have access to all subclasses. This means there are certain edits that we cannot make to the base class.
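To see the ripple concretely, here is a hedged Java sketch (hypothetical names, standing in for the C# classes above): the base class gains an optional type attribute via an overloaded constructor, but no caller can actually supply it for a subclass until that subclass grows a matching constructor of its own.

```java
// Hypothetical sketch of the ripple: Expr gains an optional "type" attribute.
class Expr {
    private final String range;
    private final String type;

    Expr(String range) { this(range, null); }   // old signature, type defaults to null
    Expr(String range, String type) {           // new, extended signature
        this.range = range;
        this.type = type;
    }

    String type() { return this.type; }
}

class VarExpr extends Expr {
    VarExpr(String range) { super(range); }

    // Without adding this constructor too, callers could never give a
    // VarExpr a type -- and every other subclass needs the same edit.
    VarExpr(String range, String type) { super(range, type); }
}
```

If VarExpr lived in someone else's code, that second constructor could never be added by the library author, which is exactly the modularity problem described above.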

A Possible Compromise

If we sacrificed the goal of having immutable objects, we could use C# auto-generated properties to avoid the per-subclass boilerplate:

public class Exp
{
    public SourceRange Range { get; set; }
}

public class Abs : Exp
{
    public string Name { get; set; }
    public Exp Body { get; set; }
}

With this approach we would then use the property-based initialization syntax when constructing an instance:

var abs = new Abs{ Range = new SourceRange(...),
                   Name = "x",
                   Body = ... };

Adding a Type property to Exp could then be accomplished without affecting every subclass. Clients who create expressions could freely include the new property in their initializers.

There are two big downsides to this approach, though. The first is that we have sacrificed the immutability of our objects – every property has both a getter and a setter. The second is that clients can now create uninitialized or partially-initialized objects by forgetting to include any of the “required” attributes in their initializer.
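The second downside is easy to demonstrate. Here is a hypothetical Python analog of the settable-property version: nothing forces a client to initialize every “required” attribute, so a half-built object sails through construction.

```python
class Exp:
    # Every attribute is mutable and starts out unset, mirroring
    # get/set auto-properties with no constructor enforcement.
    def __init__(self):
        self.range = None

class Abs(Exp):
    def __init__(self):
        super().__init__()
        self.name = None
        self.body = None

# The client sets Range but forgets Name and Body; construction succeeds
# anyway, and the error only surfaces wherever the None is finally used.
abs_ = Abs()
abs_.range = "someRange"
assert abs_.range == "someRange"
assert abs_.name is None
```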

You can decide for yourself whether that is an appropriate solution. I for one find it distasteful, and dislike that newer .NET technologies like WPF and XAML seem to be encouraging this style.

Doing Better

Ideally we’d have a solution that combines the declarative style and guaranteed initialization of the Scala approach with the easy extensibility of the C# automatic-property approach. It turns out that CLOS (the Common Lisp Object System) and its descendant Dylan already use a solution along these lines.

Casting our example into idiomatic Dylan, we would have:

define class <exp> (<object>)
    constant slot range :: <source-range>, required-init-keyword: range:;
end class;

define class <abs> (<exp>)
    constant slot name :: <string>, required-init-keyword: name:;
    constant slot body :: <exp>, required-init-keyword: body:;
end class;

A user could then create an expression using the standard make function (the Dylan equivalent of the new operator in other languages):

let abs = make(<abs>,
               range: someRange,
               name: "x",
               body: ... );

Because Dylan and CLOS are dynamic languages, failure to provide all required parameters yields a runtime rather than compile-time error. Except for this, however, the Dylan approach provides exactly the combination of benefits described above.
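Python’s dataclasses offer a roughly similar declarative style, and make a convenient way to sketch the trade-off (this is an analog, not Dylan itself): each class declares only its own slots, subclasses inherit the base-class fields without restating a constructor, and a missing required field is caught at construction time as a runtime TypeError.

```python
from dataclasses import dataclass

@dataclass(frozen=True)        # frozen=True ≈ Dylan's "constant slot"
class Exp:
    range: str                 # required at construction, like required-init-keyword

@dataclass(frozen=True)
class Abs(Exp):
    name: str
    body: object

# All required fields supplied: construction succeeds, object is immutable.
abs_ = Abs(range="someRange", name="x", body=None)
assert abs_.name == "x"

# Omitting a required field fails at runtime, not compile time.
try:
    Abs(range="someRange", name="x")
    caught = False
except TypeError:
    caught = True
assert caught
```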


Object initialization is a thorny issue in many modern object-oriented languages. In order to gain the benefits of both safety and extensibility, we should be willing to look at a wide variety of languages for inspiration.

In Defense of OldSpeak

My last post tried to make a case in favor of static typing based on the fact that it allows us to do overload resolution. At the time I hadn’t read this post on Gilad Bracha’s Newspeak blog. In the thread on another post, he summarized the spirit of that essay when he commented that:

“Static type based overloading is a truly bad idea, with no redeeming value whatsoever”

I’m not going to claim that I know language design better than Bracha. I will, however, disagree with this extreme position on overloading. If you haven’t read Bracha’s essay, please do so before you proceed…

Let’s first touch on the examples that Bracha used to illustrate his case. Some of these examples relate to legacy issues in Java, and are thus not inherent to languages with overloading. I’ll happily dismiss them since I don’t have to deal with Java.

The rest involve overloads with the following two properties:

  1. The methods are all defined within a single class
  2. The methods are specialized on types that are related by inheritance

I claim that the combination of these two properties is the crux of the argument. If the types involved are not related by inheritance, the “gotcha” of figuring out which overload will be called goes away. And because the methods are all defined in one class (presumably by one programmer), the cost of renaming one of the overloads, or of avoiding the situation in the first place, is trivial.

For this limited case – “(1) and (2)” – I actually buy the argument. Static overloading in this case doesn’t do what you want. But what Bracha neglects to mention is that a pure object-oriented message send doesn’t achieve the desired result either! What you want in this case is dispatch on the run-time types of multiple arguments, aka multiple dispatch, aka multimethods.
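A deliberately tiny multimethod sketch in Python shows the idea (hypothetical names; real systems like CLOS generic functions are far more complete): the implementation is chosen by the runtime types of *both* arguments, which neither static overloading nor a single-receiver message send can do.

```python
# Table mapping a tuple of argument types to an implementation.
_impls = {}

def defmethod(*types):
    """Register an implementation for an exact tuple of argument types."""
    def register(fn):
        _impls[types] = fn
        return fn
    return register

def collide(a, b):
    # Dispatch on the runtime types of both arguments.
    fn = _impls.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no applicable method")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Ship)
def _(a, b):
    return "asteroid hits ship"

@defmethod(Ship, Ship)
def _(a, b):
    return "ships collide"

assert collide(Asteroid(), Ship()) == "asteroid hits ship"
assert collide(Ship(), Ship()) == "ships collide"
```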

There are legitimate concerns with multimethods (which Bracha notes) as expressed in e.g. CLOS and Dylan. There are, however, other approaches that are more suitable for a new language. That is a discussion for another day.

Having ceded the “(1) and (2)” case to multimethods, we are left with the remaining cases, which Bracha didn’t directly address.

The “(1) but not (2)” case is harmless – there is no chance of ambiguity in dispatch. Multimethods subsume this case for overloading anyway, so I don’t think it is particularly useful to discuss.

The remaining cases must all deal with methods that weren’t defined within a single class. We might also presume, then, that we should consider the possibility that the methods involved were defined by different programmers, working at different organizations.

Suppose programmer A defines their Widget class version 1.0. Programmer B decides to use it as the base class for their SuperWidget. SuperWidget has extended Widget by adding a new message “doSomethingSuper” with semantics that are tied into B’s product.

Unbeknownst to B, though, A has been upgrading Widget for version 1.1 by adding their own “doSomethingSuper” method, with completely different semantics (after all, B doesn’t know about A’s product). If B tries to upgrade to the new version of Widget, then what happens?

In a language like Python or Smalltalk, SuperWidget will accidentally override the base class definition of “doSomethingSuper“. Now clients that try to use a SuperWidget as a Widget 1.1 will fail because SuperWidget responds to a message with an unexpected behavior.
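The Python version of the accident is short enough to show directly (the method names are illustrative):

```python
class Widget:                       # vendor A, "version 1.1"
    def do_something_super(self):
        return "A's new behavior"

class SuperWidget(Widget):          # vendor B, written against Widget 1.0
    def do_something_super(self):   # silently overrides A's new method
        return "B's unrelated behavior"

w = SuperWidget()
# A client expecting Widget 1.1 semantics gets B's behavior instead,
# with no warning from the language.
assert w.do_something_super() == "B's unrelated behavior"
```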

If you try this same scenario out in C# and the Microsoft CLR, you’ll find that previously-compiled versions of SuperWidget keep working with Widget 1.1, and clients that use it as a Widget will have no problems. If you recompile SuperWidget after the upgrade, you will be told that your “doSomethingSuper” method might introduce an ambiguity – you will be forced to decorate it explicitly as either an override of the base-class method, or a new method that just happens to have the same name.

The secret that makes this technique work is – you guessed it – static overload resolution. This is exactly the opposite of Bracha’s claim about static overloading in his essay:

“This means that existing code breaks when you recompile, or does the wrong thing if you don’t”

In this case, however, it is the overloading-free languages which inhibit the modular extensibility of the system, and static overloading that makes it possible for another language to avoid the problem.

Overloading is generally not something we pursue, even when our languages support it. Instead, we simply recognize that it is something that arises inevitably when we develop large software systems that aggregate and extend components developed by other programmers and other organizations. The space of useful identifiers is just too small to avoid all conflicts.

Given this fact, I choose to use tools that recognize the inevitability of name conflicts and give me mechanisms for resolving them.