
Physical Technology and Play

 

The Oculus Rift reminds me why I took a break from traditional video games to focus on experimental systems. More specifically, it reminds me how starved the video game space is for something different: not because the Oculus is different, but because of how much the same it is, and yet how much attention it gets.

For those who missed some of Oculus’ recent announcements, a few key ones are:

  • Oculus will be shipping with an Xbox One controller
  • Highlights included playing standard video games on a larger virtual screen

 


 

Computer games, more than any other artistic medium, are bound by their delivery mechanisms. Think of the last five games you’ve played and how you’ve interacted with them. Quite likely, your experience went something like this: your hands engaged with a standard control interface (keyboard, controller, touch); that interface was tethered to a machine where the game resides (console or PC), which was in turn attached to a display for output (TV, computer monitor, hell, even an Oculus Rift). The feedback loop runs player -> controller -> machine -> A/V output -> player.

This of course ignores newer input and output options like haptic feedback, the nuances of touchpad games, and so on, but the basic components largely hold true no matter where we look.
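
To make that basic loop concrete, here is a minimal sketch of it in Python. It is my own illustration rather than code from any particular engine or project; the function names and placeholder values are invented for the example.

```python
import time

def read_controller():
    """Controller -> machine: poll the sanctioned input device (pad, keyboard, touch)."""
    return {"move_x": 0.0, "jump": False}  # placeholder input state

def update_game(state, inputs, dt):
    """Machine: advance the simulation using only the inputs the platform allows."""
    state["x"] += inputs["move_x"] * dt
    return state

def render(state):
    """Machine -> A/V output -> player: show the result to the person holding the controller."""
    print(f"player at x={state['x']:.2f}")

state = {"x": 0.0}
while True:
    inputs = read_controller()
    state = update_game(state, inputs, dt=1 / 60)
    render(state)
    time.sleep(1 / 60)  # close the loop roughly sixty times a second
```

Whatever the platform, some version of this loop is running; what changes from device to device is which of those boxes the player is allowed to touch.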

Examining game technology further, we can look at other standardizations enforced by console manufacturers and, by extension, game platforms. These standardizations include the controls available to a game developer (e.g., A, B, Select, Start, WASD), the technical parameters that a video game must work within (hardware limitations), and video and sound output considerations.

As developers and players we, by default, accept these things. Of course a controller has buttons, and joysticks, and triggers: when has it ever been any other way? Yet these are some of the most important factors when it comes to the game experience because, by their very nature, they are key affordances through which we play.

Current computer game hardware can be looked at as an arbitrary bottleneck, then, because it all operates under a limited set of controlled affordances.

Much of my work lately is based on using open-source hardware to develop alternate reality games (ARGs), communal games, and alternative game interfaces. These games explore the breakdown of our standard feedback loop. What happens when we challenge commonly held game constructs? How do players engage with games when you introduce them to non-standard feedback loops?

One project, an epidemic simulator, turned attendees at a professional conference into disease carriers and used the conference itself as the setting for an outbreak. Another, an alternative controller that averages player inputs, allows players to play single-player games as multiplayer, changing the intended experience dramatically. Both projects utilize new forms of hardware and different forms of signal transmission.
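
The averaging controller's core idea fits in a few lines. The sketch below is my own illustration, not the project's actual code: analog axes are averaged across players, and, as one assumption about how a button might be "averaged", a button counts as pressed only when more than half the players are holding it.

```python
from statistics import mean

def merge_inputs(player_inputs):
    """Collapse many players' controller states into one virtual controller.

    player_inputs: list of dicts like {"stick_x": -1.0..1.0, "jump": bool}
    """
    return {
        # Analog axis: straight average of everyone's stick position.
        "stick_x": mean(p["stick_x"] for p in player_inputs),
        # Button: pressed only if a majority of players are pressing it.
        "jump": sum(p["jump"] for p in player_inputs) > len(player_inputs) / 2,
    }

# Three players who disagree about where to go and when to jump.
players = [
    {"stick_x": 1.0, "jump": True},
    {"stick_x": 1.0, "jump": False},
    {"stick_x": -0.5, "jump": False},
]
print(merge_inputs(players))  # {'stick_x': 0.5, 'jump': False}
```

The merged stick lands between the players' choices and the lone jump press is outvoted; the game still sees one well-formed controller, but nobody is fully in charge of it.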

I’m not offering these games up as a solution to the problem, but rather as an exploration of the possible. Cheap, open-source hardware has made it easy to create computer games differently. Things like the Makey Makey have opened up a possibility space that game makers are only beginning to explore. Taken further, devices like the Arduino, Raspberry Pi, Leap Motion, and others make it easy to experiment with augmenting computer game hardware.
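
As one hedged example of what that augmentation can look like: the sketch below assumes a Raspberry Pi with the gpiozero and pynput libraries installed, and turns a switch (or any conductive object) wired to a GPIO pin into a spacebar press that an unmodified PC game will accept. The pin number and the choice of key are arbitrary, illustrative decisions, not anything prescribed by those libraries.

```python
from signal import pause

from gpiozero import Button                  # reads a switch wired to a GPIO pin
from pynput.keyboard import Controller, Key  # injects keystrokes at the OS level

keyboard = Controller()
button = Button(17)  # anything conductive wired between GPIO 17 and ground

def press_space():
    # The game never learns that its "spacebar" is really a doorknob, a plant, or a crowd.
    keyboard.press(Key.space)
    keyboard.release(Key.space)

button.when_pressed = press_space
pause()  # wait indefinitely for presses
```

Because the keystroke is injected locally, this particular sketch assumes the Pi is also the machine running the game; the point is only how little code now sits between a physical object and a game's input layer.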

What happens when we start undermining console manufacturers’ and game designers’ intent? What happens when the hardware to hack these things becomes so ubiquitous that it’s as easy as creating an indie game?

We often say that art is what happens between the player and the work. But what if we’re able to swap out the window that people look at that art through?
