
Visual soundscapes from your augmented reality glasses
Abstract
Rapid developments in mobile computing and sensing are opening up new
opportunities for augmenting or mediating our reality with information and
experiences that our biological senses could not directly provide. Apart from
possible mass-market use of augmented reality glasses in the near future, new
uses also arise in niche markets such as assistive technology for
the blind: the visual content of live camera views may be conveyed through
sound or touch.
In my talk I will discuss how this brings together research on new
man-machine interfaces, visual prostheses, computer vision, brain plasticity,
synesthesia, esthetics, and even contemporary philosophy. It is also an area
where progress in fundamental research (on brain plasticity) could quickly
become socially relevant through software applications and training
paradigms that are made globally available over the web, for use with widely
available devices (smartphones, netbooks, and camera glasses).
Over the past decade, neuroscience research has established that the visual
cortex of blind people becomes responsive to sound and touch, acting more
like a "metamodal" processor of fine spatial
information. This supports the biological plausibility of sensory substitution for
the blind, as in seeing (or "seeing") live camera views encoded in one-second
soundscapes.
More info: www.seeingwithsound.com
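To make the idea of "one-second soundscapes" concrete, below is a minimal
Python sketch of a column-scan image-to-sound mapping in the spirit of The
vOICe: the image is swept from left to right over one second, vertical
position maps to pitch, and pixel brightness maps to loudness. The function
name and all parameter choices (frequency range, sample rate, image size)
are illustrative assumptions, not the actual settings of The vOICe.

import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_low=500.0, f_high=5000.0):
    """Map a 2-D grayscale image (rows x cols, brightness in [0, 1],
    row 0 at the top) to a mono soundscape of the given duration."""
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    # Higher image rows get higher frequencies, spaced exponentially so
    # that equal pixel steps correspond to equal musical intervals.
    freqs = f_low * (f_high / f_low) ** (np.arange(rows)[::-1] / (rows - 1))
    chunks = []
    for c in range(cols):  # left-to-right sweep over the image
        # Offsetting the time base by the column position keeps every
        # sinusoid phase-continuous, avoiding clicks between columns.
        t = (np.arange(samples_per_col) + c * samples_per_col) / sample_rate
        column = image[:, c]  # brightness of each row in this column
        # One sinusoid per row; its amplitude follows pixel brightness.
        chunks.append((column[:, None]
                       * np.sin(2.0 * np.pi * freqs[:, None] * t)).sum(axis=0))
    sound = np.concatenate(chunks)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound  # normalize to [-1, 1]

The returned array can be auditioned by writing it to a WAV file, for example
with scipy.io.wavfile.write("soundscape.wav", 22050, sound.astype(np.float32)).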


Peter Meijer received his M.Sc. in Physics from Delft University of Technology
in 1985, for work performed in the Solid State Physics group (nowadays
Quantum Transport group) on non-equilibrium superconductivity and
submicron photolithography. From September 1985 until August 2006 he worked
as a research scientist at Philips Research Laboratories in Eindhoven, The
Netherlands, initially focusing on black-box modeling techniques for analogue
circuit simulation. In May 1996 he received his Ph.D. from Eindhoven
University of Technology on the subject of dynamic neural networks for device
and subcircuit modeling for circuit simulation. From 1999 until 2003 he was
cluster leader of the Future Design Technologies cluster within the research
group Digital Design and Test at Philips Research, while working on
nanotechnology and the simulation and modeling of RF effects in high-speed
digital circuits. In October 2006 he left Philips and joined the newly founded
NXP Semiconductors, where he now works in the field of computer vision
research. In parallel with his regular work in the electronics industry, he
developed an image-to-sound conversion system known as “The vOICe”,
aimed at the development of a synthetic vision device (prosthetic vision
system) for the totally blind. Cooperation with Harvard Medical School led to a
publication on the subject in Nature Neuroscience in June 2007.


