On February 1, 2012, Microsoft released the official Kinect SDK and Runtime for PCs. Prepare yourself for a flood of new ways to interact with your PC using gestures, gyrations, and other nontraditional mannerisms. Whether these new interactive methods will become generally accepted is certainly up for debate, but there is little doubt that there will be many creative designs coming soon.

Now, don’t get me wrong — I think the Kinect for Windows will be a great addition to PC gaming. It will open up genres and inspire innovation, and there is no telling where it will all lead — that part is fun and exciting. However, the cynic in me is cringing at the prospect of waving at my PC to highlight a set of cells in Excel — or whatever “interface improvements” some well-meaning but misguided developer may try to foist upon us.

That bit of curmudgeonly skepticism aside, if you would like to develop some Kinect for Windows applications you can download the SDK directly from Microsoft. You can also buy a Kinect sensor for $250.

In her blog post on ZDNet, Mary Jo Foley points out several improvements Microsoft has made to the Kinect SDK since the beta release, including:

  • Support for up to four Kinect sensors plugged into the same computer
  • Improved skeletal tracking, including the ability for developers to control which user the sensor is tracking
  • Near Mode for the new Kinect for Windows hardware, which enables the depth camera to see objects as close as 40 centimeters in front of the device

Are you looking forward to making gestures at your PC that mean more than frustration with something it is, or is not, doing? Do you think the Kinect will become the predominant way you control your PC?
