Lean & Zoom: Semantic Zooming and User Experience Evolution

Project Background:
Lean & Zoom is software developed to make working at a computer easier on the eyes and on posture: it uses the camera to sense the position of the user's face and magnifies the on-screen content proportionally. The software can run on any computer or phone, and it can also serve as a base for developing other applications.
Current Stage:
Geometric (Standard) Zooming: In geometric zooming, the view depends only on the physical properties of what is being viewed. Lean & Zoom works on any electronic platform (computer or cellphone) with a camera embedded on the same surface as the screen. Users can zoom the content shown on the screen simply by leaning toward it, without using the keyboard. Lean & Zoom currently works with pictures and documents. This mode is called geometric (standard) zooming.
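The core of geometric zooming can be sketched as a mapping from the user's lean to a magnification factor. A minimal illustration, assuming the apparent width of the face (e.g. from a face detector such as OpenCV's Haar cascades, which is an assumption here, not Lean & Zoom's actual pipeline) is the proxy for distance:

```python
def zoom_factor(face_width_px, baseline_width_px=120.0,
                min_zoom=1.0, max_zoom=3.0):
    """Map apparent face width to a magnification factor.

    A wider face in the camera frame means the user is leaning
    closer, so content is magnified proportionally. The baseline
    width and zoom bounds are illustrative assumptions.
    """
    factor = face_width_px / baseline_width_px
    # Clamp so the view never shrinks below 1x or blows up unboundedly.
    return max(min_zoom, min(max_zoom, factor))
```

For example, a face appearing twice as wide as the calibrated baseline would yield a 2x magnification, while leaning back past the baseline keeps the view at 1x.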
Methods to Improve the Experience:
During the 2010 Las Vegas CES, Lean & Zoom gained a strong reputation; at the same time, many customers gave feedback to help make Lean & Zoom work more efficiently and precisely.
The Lean & Zoom team therefore reviewed this user feedback and began researching how to apply semantic zooming concepts to Lean & Zoom.
Semantic zooming is a non-graphical zoom: a mechanism that transforms the view from its current visual format toward the underlying meaning of the target object. At its deepest level, this mechanism reaches the derived data contained in the underlying data tables.
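The distinction from geometric zooming can be made concrete: a semantic zoom swaps *what* is shown at each zoom level rather than simply scaling it. A minimal sketch, with illustrative field names that are assumptions, not part of Lean & Zoom:

```python
def semantic_view(city, zoom):
    """Return a zoom-dependent representation of a data object.

    Far out, only a label is shown; mid range, a summary statistic;
    close in, the underlying data record itself. The thresholds and
    the 'city' record shape are illustrative assumptions.
    """
    if zoom < 2:
        # Far out: just a name label.
        return city["name"]
    elif zoom < 5:
        # Mid range: name plus a headline statistic.
        return f'{city["name"]} (pop. {city["population"]:,})'
    else:
        # Close in: expose the underlying data table row.
        return city
```

For example, the same record renders as "Pittsburgh" at a low zoom level but as the full population record once the user leans in far enough.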
Improvement Ideas (partial):
My Approach:
I believe that pairing Lean & Zoom with a map tool would be useful in many situations, not only in daily life but also in military duty. During the 2010 CES, an officer from the Defense Department indicated that Lean & Zoom could help soldiers search an electronic map more efficiently, hands-free, simply by changing head and eye position. To start, I would like to research how to implement eye-movement tracking, and then connect Lean & Zoom with the Google Maps API.
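One piece of this integration can be sketched now: translating lean into a discrete map zoom level of the kind web map tile pyramids use (integer levels, each doubling the scale). This is an illustrative mapping only; the baseline width and level bounds are assumptions, and the resulting level would then be handed to a mapping API such as Google Maps:

```python
import math

def lean_to_map_zoom(face_width_px, baseline_width_px=120.0,
                     base_level=12, min_level=3, max_level=18):
    """Translate lean (apparent face width) into a map zoom level.

    Each doubling of apparent face width adds one zoom level,
    matching the power-of-two scale steps of map tile pyramids.
    All parameter defaults here are illustrative assumptions.
    """
    delta = math.log2(face_width_px / baseline_width_px)
    level = round(base_level + delta)
    # Clamp to the range of levels the map provider supports.
    return max(min_level, min(max_level, level))
```

So leaning in until the face appears twice its baseline width would step the map from level 12 to level 13, while leaning back halves the apparent width and steps it down to level 11.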