3D Gestures For Phones?
Latest from the rumour mill is that the next Lumia flagship phone will use Kinect-like features to enable "3D gestures".
I hope this is just a rumour.
Don't get me wrong. I'd be willing to bet that somewhere inside Microsoft, or its recently acquired Nokia factories, someone is working on exactly that. Companies cannot sit still, and in the name of innovation they will pursue just about any whacked-out idea that floats by. I'd wager something similar is, at the very least, being experimented with at Google and Apple as well. None of that means it will see the light of day, or, if it does, that it will be released in the next X years.
This all goes back to the amusing shock that Microsoft was, at one point, testing an Xbox One without an optical drive. I argued that Sony had probably tossed the same idea around and prototyped it at some stage as well. And even if they hadn't, equally far-out ideas would have been floated between the inception and delivery of the PS4. This isn't a guess; this is how high-tech industries work. I'm simply hoping this rumour is cut from the same cloth as the disc-less XB1: an idea that gets explored, not one that ships.
My beef here is that the world has already proven two things: people will mock voice and camera-based gestures until they become perfect (à la Xbox One + Kinect), and people are too lazy to learn additional gestures at all (à la the multi-finger gestures in iOS and BB10).
I remember when Apple released the version of iOS that let you close an app simply by putting all of your fingers on the screen and bringing them together. I used it madly... for a week. Now, when I happen to pick up someone's iDevice, remember I can close an app that way, and do it, they look at me in awe and say, "You can do that?" Then they proceed to never do it more than a few times themselves.
The point is: even awesome additions like that to a gesture system, while nobody makes fun of them, seem to be too complex for the average person to remember. The gesture is definitely quicker than the alternative ways of closing apps, so it certainly isn't an efficiency problem.
And Kinect-like gestures are even worse. Firstly, like the iOS gestures, even in the rare cases where they are more efficient, people simply don't remember to use them, so they are plagued by the same problems as touchscreen gestures. Then there is the frustration factor. Thanks to previous generations of consoles, people accept the camera sensor and its quirks on a console, but that doesn't stop them from being irate. The accuracy is too low for certain gestures. It gets false positives on things it thinks are hands. It mistakes arbitrary movements for gestures. And the list goes on.
Where I'm heading with this is: Microsoft should keep Kinect, and Kinect-like functionality, on the Xbox until the hardware and software reach a point where the frustration level drops to near nil; only then should that tech be bundled into a phone. If you can't deliver a consistent experience on a machine with an eight-core processor, loads of power and a sensor larger than any phone, why would you try to ship even a stripped-down version of that functionality in a much smaller, weaker form factor?
The Apple example I started with actually served a dual purpose. I don't know to what extent the testing is taking gestures into the realm of Kinect-like capabilities. But if expanding the adopted set of gestures without even changing the interaction model, as Apple did, was already doomed to fail, then it follows that IF Microsoft wants to do this, they should wait until it is both substantially different from the current touchscreen paradigm AND robust enough to avoid the high rate of mistakes a technology like Kinect makes today.