Second Life and the Scalability of Online Games


The virtual world Second Life has been the beneficiary of extraordinary buzz lately. The MMORPG has been featured in a Business Week cover story and has drawn favorable coverage from influential blogs. As Second Life wins fans and users, can its infrastructure scale along with its audience?

That’s what News.com is wondering in a story that looks at the backend of Second Life, in terms of both its technology and business model. Second Life differs from traditional MMORPGs such as World of Warcraft or Everquest, which run copies of the same “virtual world” on hundreds of servers, with each environment known as a “realm.” Second Life operates as a grid, with different components of its environment spread across multiple servers. Here’s an excerpt from the News.com story:

“Second Life” currently runs on 2,579 servers that use the dual-core Opteron chip produced by AMD. Each server is responsible for an individual “sim,” or 16 acres of virtual “Second Life” land. At peak usage that means that each server is handling about three users. “Most (massively multiplayer online games) have hundreds to thousands of players per server machine,” said Michael Sellers, who runs Online Alchemy, a provider of artificial-intelligence tools for online games. “Is there a way they can achieve (significant) elements of scale? I haven’t seen that.”

Some observers of virtual worlds see challenges for Second Life as it seeks to accommodate more users with its current structure, which has a very low ratio of users to servers. Retaining that server-to-user ratio as the audience grows would be expensive.
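The numbers in the article make the scaling concern easy to quantify. Here is a back-of-envelope sketch (in Python, using the figures quoted above; the "realm model" density of 2,000 players per server is an assumed midpoint of Sellers' "hundreds to thousands"):

```python
# Back-of-envelope comparison of Second Life's grid model vs. a
# traditional realm-based MMORPG, using figures from the article.

SL_SERVERS = 2579             # one dual-core Opteron per 16-acre sim
SL_USERS_PER_SERVER = 3       # the article's peak-usage estimate
REALM_USERS_PER_SERVER = 2000 # assumed midpoint of "hundreds to thousands"

# Implied peak concurrency of the current grid.
peak_concurrent = SL_SERVERS * SL_USERS_PER_SERVER
print(f"Implied peak concurrent users: ~{peak_concurrent}")

# Servers needed to reach 100,000 concurrent users under each model.
target = 100_000
grid_servers = target // SL_USERS_PER_SERVER
realm_servers = target // REALM_USERS_PER_SERVER
print(f"Grid model:  ~{grid_servers} servers for {target} users")
print(f"Realm model: ~{realm_servers} servers for {target} users")
```

At the current ratio, growing to 100,000 concurrent users would require tens of thousands of machines, which is the cost problem the observers are pointing at.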


The vulnerability of Second Life's grid structure has been on display in several significant outages, caused by in-game griefers unleashing attacks that rapidly replicate virtual objects built with the game's own tools. When a server fails in WoW or Everquest, one realm goes offline. When the Second Life grid fails, the entire game is unavailable.

While the News.com article doesn’t get deep into the nitty-gritty, Tim O’Reilly recently conducted a review of infrastructure used by Web 2.0 (sorry no trademark jokes today) companies, which included detail on Second Life’s evolving infrastructure. Here’s a snippet from Ian Wilkes, Director of Operations and architect of Second Life’s database and asset backend:

We’ve eschewed any of the general purpose cluster technologies (mysql cluster, various replication schemes) in favor of explicit data partitioning. So, we still have a central db that keeps track of where to find what data (per-user, for instance), and N additional dbs that do the heavy lifting. Our feeling is that this is ultimately far more scalable than black-box clustering. Right now we’re still in the transition process, so we remain vulnerable to overload.

For additional discussion, see Clickable Culture and Play No Evil.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.