The projection system seems stupid, yes. But think about this...
I have always imagined the ideal peripheral to a human-integrated information system to be some type of
heads-up display. That is a great way to present information, layered "on top" of the real-world environment, but what
can't it do? It isn't an INPUT device
(unless you add some eye-tracking capability).
What does the clunky, prototype "6th sense" device do? It combines
an input AND an output system "on top" of the real-world environment. Input and output are accomplished, from the user's perspective, in one "place" (you don't have to type on an input device, a keyboard, and also look at an output device, a monitor), and furthermore, this "place" is
the world--the actual, real world you are walking around in. To do this, it has to involve the dubious technique of creating a feedback loop between a projection system and a visual-recognition engine (the details of which we are wholly ignorant at this point).
Take a step back from the details, the specifics, and think about
the idea here (what, in my mind, this demo was designed to communicate): a human-integrated interface that layers itself "on top" of the real world and allows seamless input and output, out there in front of your eyes--where we have been looking at things, and doing stuff to things, for millions of years. Compare this to our recent habit of pulling little boxes out of our pockets and punching little buttons on them. THAT, my friend, is not the way to go. That is a stopgap that will soon be archaic.
They may not have solved the problem here, but they have realized the correct idea. This is the
the TED Conference - "ideas worth spreading" ...