Department of Ophthalmology

Research Faculty Profile

Zhiyong Yang, Ph.D.
Research Arena

Summary: Dr. Yang's research interests include the computational principles and neural mechanisms of natural vision, robot-assisted independent living, and virtual reality-based vision and cognitive rehabilitation.

Key research areas:

--Computational principles of natural vision

--Probabilistic models of visual system structure and function

--Robots as living memory and vision machines

--Virtual reality-based vision rehabilitation for age-related macular degeneration

Contact information:
Voice: 706-721-4506
Fax: 706-721-3829

» Ophthalmology
» Brain & Behavior Discovery Institute
-- Assistant Professor of Ophthalmology --
Educational Background
B.S. Applied Mechanics --Chengdu Institute of Technology, Chengdu, PRC --1988

M.S. Theoretical Physics --Beijing Normal University, Beijing, PRC --1994

Ph.D. Electrical Engineering --Institute of Automation, Chinese Academy of Sciences, Beijing, PRC --1997

Bio >>> Dr. Yang is an Assistant Professor in the Department of Ophthalmology as well as the Brain and Behavior Discovery Institute. Dr. Yang's laboratory is pursuing several lines of research.

First, we are developing probabilistic frameworks of vision and of visual system structure and function. The broad hypothesis we are exploring is that the response properties of visual neurons and their connections, the organization of visual cortex, the patterns of activity elicited by visual stimuli, and visual perception are all determined by the probability distributions of the relevant physical variables of visual stimuli.

Second, we are developing co-robots that live with and help people. These co-robots will emulate aspects of human natural vision, act as living memory machines, and perform predictive behavioral modeling.

Third, we are developing augmented reality and augmented reality-based visual and cognitive rehabilitation. In this new paradigm, photo-realistic visual scenes are presented to human subjects through a head-mounted display; subjects are guided to perform natural tasks; and visual information is enhanced at each fixation, both visually and through sounds and speech descriptions.