With bridled Curiosity

So Curiosity has been out for a while now, and the dust has settled on the hype surrounding its creation, release and teething problems.

The “game” is crushingly dull to play (in fact, I’m sure you don’t “play” it, in any conventional sense). But that isn’t my concern here.

Bored and the Borg

Perhaps because of the boredom induced by the game, 22 Cans seem to avoid publicly calling it a “game” at all. Rather, it’s an “experiment”: a data-gathering tool for game developers to better understand the economics of free-to-play, social games and pay-to-win mechanics. The stated objective of the project is to measure users’ propensity to spend on virtual goods – a study of how price, social status and time affect the elasticity of a gamer’s demand.
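For reference – and this is textbook microeconomics, not anything 22 Cans have published – price elasticity of demand is conventionally defined as:

$$E_p = \frac{\Delta Q / Q}{\Delta P / P}$$

where $Q$ is the quantity demanded and $P$ is the price. If $|E_p| > 1$, demand is elastic: cutting an item’s price wins disproportionately more buyers. Estimating $E_p$ for virtual goods is exactly the kind of measurement a well-designed experiment would support.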

At first glance, that sounds kind of interesting. But here is my problem with Curiosity: because of the very structure of the “experiment” itself, it can’t tell us anything useful or interesting about gaming. Specifically, there are three structural problems with the experiment, which mean that Curiosity cannot answer three of the most interesting questions about social games models.

(1) The data is incomplete

We can’t answer the question: “how many people did you reach?”

If we’re looking to study the interrelationship between social psychology and gaming, the first thing we’re interested in is the game’s “peak”. As a developer, a marketer or an investor, it’s our bragging metric – our showpiece. It’s the most basic yardstick of a thing’s virality – the extent to which it was shared across many, many networks of people. It’s a simple gauge of whether your game was Gangnam Style or Right Now.

Unfortunately, if we were curious about the extent to which Curiosity could go viral, we were never going to get an answer. The servers were quickly overloaded on release, and player numbers were severely limited for several days. According to 22 Cans, c.160,000 people downloaded the app within 24 hours, and the servers weren’t able to handle the load. The server load may well have been addressed by now, but I also suspect that many people who would have engaged with the project simply lost interest.

Typically, the first “peak” after launch is the highest a social game will ever see. Where Curiosity is concerned, we’ll never know how many people would have engaged, even for a minute or two, in those first few days. We’ll never know how many never came back.

(2) The data is incomplete

We can’t answer the question: “what virtual goods do your users value?”

If we’re looking to better understand microtransactions and virtual goods, we’re going to have to build a better understanding of what virtual goods gamers value. Because platforms have been eager to report sales metrics, we have a fairly clear idea that spending on virtual goods and in-game items has grown quickly, and that it now represents a significant target for games developers and a globally contested space. The more interesting, as-yet-unanswered question is which virtual goods users value, and how they arrive at a valuation (even if that valuation is purely psychological).

To answer this kind of question, an “experiment” would need to test users’ propensity to spend on items in relation to different forms and levels of value. There is a genuine challenge here, and a real sparsity of considered empiricism.
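As a minimal sketch of what such a test could look like – the item names, price points and logging schema below are entirely my own invention, not anything in Curiosity – you could randomly assign each user a price tier per item, log purchases, and compare conversion rates across tiers:

```python
import random
from collections import defaultdict

# Hypothetical catalogue: item -> candidate price points (in hard currency).
# None of these names or prices come from Curiosity itself.
PRICE_TIERS = {
    "cosmetic_skin": [10, 50, 250],    # pure status value
    "speed_boost":   [20, 100, 500],   # time value
    "win_advantage": [50, 250, 1000],  # pay-to-win value
}

assignments = {}           # (user_id, item) -> price shown to that user
buyers = defaultdict(set)  # (item, price) -> users who bought
shown = defaultdict(set)   # (item, price) -> users who saw that price

def price_for(user_id, item):
    """Stably assign each user a random price tier for each item."""
    key = (user_id, item)
    if key not in assignments:
        assignments[key] = random.choice(PRICE_TIERS[item])
    shown[(item, assignments[key])].add(user_id)
    return assignments[key]

def record_purchase(user_id, item):
    buyers[(item, assignments[(user_id, item)])].add(user_id)

def conversion_report():
    """Conversion per (item, price): the raw material for a demand curve."""
    for (item, price), users in sorted(shown.items()):
        rate = len(buyers[(item, price)]) / len(users)
        print(f"{item} @ {price}: {rate:.1%} of {len(users)} users bought")
```

The point of the stable per-user assignment is that each user only ever sees one price per item, which keeps the tiers comparable as independent treatment groups.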

However, because of its game design, Curiosity can’t even attempt to answer this question. The game has one single objective: be the last person in the game to tap the screen. I would argue that none of the game’s purchasable items can (until the very end of the game, at least) provide the user with any value. The different picks and chisels, which increase the rate at which the user can knock apart the cube, give the user no tangible advantage in the “last tap” stakes. Sure, the user has never played this game before, but their intuition has to be that a “winning strategy” is about timing, not about breaking lots of blocks.

The items that the game designer is “selling” to the user are, within the context of the game, next-to-useless for the vast majority of the game’s played time, and in its final moments they are of about as much use as a lottery ticket.

(3) The data is incomplete

We can’t answer the question: “how can we better understand a cohort’s spend?”

We know that out there, in the wild, there are “whales” prepared to spend significant sums on virtual goods. But once a game developer has achieved a substantial user base (1) and has understood which virtual items will be valuable to players (2), how should virtual goods be priced for the entire distribution of players (3) – not just the “whales”?

To answer these questions, an experiment would need to offer a range of goods, at a range of prices (potentially at variable prices), to a constant (or near-constant) user base. The most interesting questions for games developers relate to a user’s spending profile over the lifetime of their engagement with a product. Many games developers look to move users between games once engagement rates fall, so it’s important to understand the spend profile of different cohorts of users. When should users be offered new titles? When should they be offered new items? When should they be reminded to revisit the games that they already know?
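As a hedged sketch of the cohort analysis this would enable – the purchase-log schema is assumed purely for illustration; Curiosity collects nothing like it – you could group users by sign-up week and track average spend per week of engagement:

```python
from collections import defaultdict

# Assumed purchase log: (user_id, signup_week, purchase_week, amount).
# This schema is illustrative, not anything Curiosity records.
purchase_log = [
    ("u1", 0, 0, 2.99), ("u1", 0, 3, 0.99),
    ("u2", 0, 1, 9.99),
    ("u3", 2, 2, 0.99), ("u3", 2, 2, 4.99),
]

def cohort_spend_by_week(log):
    """Average spend per user, keyed by (signup cohort, weeks since signup)."""
    totals = defaultdict(float)
    cohort_users = defaultdict(set)
    for user, signup, week, amount in log:
        totals[(signup, week - signup)] += amount
        cohort_users[signup].add(user)
    return {
        (cohort, age): total / len(cohort_users[cohort])
        for (cohort, age), total in totals.items()
    }

for (cohort, age), spend in sorted(cohort_spend_by_week(purchase_log).items()):
    print(f"cohort week {cohort}, engagement week {age}: ${spend:.2f}/user")
```

Read along a single cohort’s row and you see its spend decay; read down a single engagement week and you can compare cohorts against each other – the two views that would tell a developer when to cross-promote a new title.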

Curiosity’s data collection is incomplete for this purpose too. The game contains two or three highly priced items, and one ludicrously priced item – the “diamond chisel”. Although I argued in (2) that the items offer no real value to the player, a wide distribution of items is also important for understanding the wide distribution of player behaviours, and Curiosity makes no attempt to capture this distribution.

If a player’s lifetime spend in a game is $30, do they typically start with high-price items and migrate to lower-price items as their engagement diminishes? Is the opposite the case? Curiosity can’t give us an answer. It isn’t even interested in collecting that data.
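To make the question concrete, here is one way it could have been asked of the data, assuming a per-user purchase history that Curiosity simply doesn’t keep:

```python
def spend_trajectory(prices):
    """Given a user's purchase prices in chronological order, report whether
    they trend from high-price to low-price items or the reverse.
    Purely illustrative; Curiosity keeps no per-user purchase histories."""
    if len(prices) < 2:
        return "insufficient data"
    half = len(prices) // 2
    early = sum(prices[:half]) / half
    late = sum(prices[half:]) / (len(prices) - half)
    if late < early:
        return "migrates to cheaper items"
    if late > early:
        return "migrates to pricier items"
    return "flat spend profile"

# e.g. a ~$30 lifetime spend, front-loaded on expensive items:
print(spend_trajectory([9.99, 9.99, 4.99, 2.99, 0.99, 0.99]))
```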

The bottom line:

Sure, Curiosity is boring. That may well be a forgivable creative let-down.

The more significant problem is that the project has been engineered as an “experiment”, but as an experiment, it fails to answer the interesting questions in a field where interesting questions abound.

It’s the lack of scientific appetite, not artistic creativity, that marks Curiosity out as a failure.
