Yesterday I had my first opportunity to visit The Fantastic Tavern (it's like a software user group, but aimed at designers). The theme of the night was "[Augmented] Realities".
A couple of more detailed write-ups of the event can be found at:
http://vinf.net/2010/04/29/augmented-reality-tftlondon/
http://ubelly.com/2010/04/realities-blurring-the-boundaries-at-the-fantastic-tavern/
The open Q&A at the end of the night concluded with an unanswered question: how do you prevent a situation where you look through your AR application and see hundreds (or thousands) of competing pieces of information, all laid over (and probably obscuring) the image?
I have a different but related question, though: how do we get to that point? At the moment all AR apps are separate, and you could potentially have dozens on a device. How do you combine the information from different applications? Is anyone looking at, or building, a way to do this? And are they doing it in a way that avoids the overload feared in the original question?
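To make that concrete, here's a minimal sketch (in Python, with entirely made-up provider names and fields) of what a shared aggregation layer might look like: instead of each app painting its own overlay, one layer collects points of interest from every registered source, ranks them, and caps how many are actually drawn.

    from dataclasses import dataclass

    @dataclass
    class PointOfInterest:
        source: str        # which AR app/provider supplied it
        label: str
        distance_m: float  # distance from the viewer, in metres
        relevance: float   # 0..1 score assigned by the provider

    def merge_overlays(providers, max_items=10):
        """Collect POIs from every provider, then rank and truncate so
        the camera view isn't buried under competing labels."""
        pois = [poi for provider in providers for poi in provider()]
        # Favour relevant, nearby items; a real system would also need
        # to de-duplicate the same landmark reported by several apps.
        pois.sort(key=lambda p: (-p.relevance, p.distance_m))
        return pois[:max_items]

    # Stand-ins for separate AR apps contributing to the shared layer.
    def restaurants():
        return [PointOfInterest("food-app", "Cafe on the corner", 40.0, 0.6)]

    def transport():
        return [PointOfInterest("transit-app", "Nearest Tube entrance", 120.0, 0.9)]

    for poi in merge_overlays([restaurants, transport]):
        print(poi.source, poi.label)

Whatever the real mechanism turns out to be, the ranking and the cap are the interesting design choices: something has to decide which handful of labels wins the screen.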
My other question:
All AR apps seem to be driven by the person specifically using a device. Is anything being developed where the location identifies the person and provides additional relevant information to them? At the extreme end, think of the iris recognition in Minority Report. But on a more practical (and realistic, easily achievable) level, what about ticket booths at railway stations? Why not give me useful information while I'm waiting to buy my ticket? For instance, if I buy a ticket to a destination I've not visited before (based on previous purchases made with that card), why not tell me which platform I need and show me directions?
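As a sketch of that ticket-booth scenario (all names and data below are hypothetical, purely for illustration), the logic doesn't need to be exotic: look up the card's purchase history, and if the destination is new to that card, attach the platform and directions to the on-screen confirmation.

    # Hypothetical purchase history, keyed by a (hashed) card number.
    PURCHASE_HISTORY = {
        "card-1234": {"Brighton", "Cambridge"},
    }

    # Hypothetical station data: destination -> (platform, directions).
    PLATFORMS = {
        "York": ("Platform 4", "Up the stairs and turn left"),
    }

    def ticket_extras(card_id, destination):
        """If this card hasn't bought a ticket to this destination
        before, return the platform and directions to show on screen."""
        history = PURCHASE_HISTORY.setdefault(card_id, set())
        if destination in history:
            return None  # a familiar trip; no extra help needed
        history.add(destination)
        info = PLATFORMS.get(destination)
        if info is None:
            return None
        platform, directions = info
        return f"{platform}: {directions}"

    print(ticket_extras("card-1234", "York"))  # first trip, so show help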
As I was tired and the place was very hot and noisy, I didn't hang around for long afterwards. Lots of interesting ideas, though.