Symposia

 Symposium 1
  The Perception of Colored Patterns, Materials, and Scenes

There has recently been a shift from measuring color constancy for flat, uniformly colored stimuli towards studying the perception of material qualities of real surfaces. This shift has been driven by the realization that, whereas shape is important in object recognition, material perception can be just as important in identifying objects and their qualities, e.g. natural versus artificial fruits, or soft versus hard seats. Chemical and physical properties of objects provide them with specific surface patterns of colors and textures, and endogenous and exogenous forces alter these colors and patterns over time. Using material appearance to estimate the physical and chemical properties of objects has great utility for organisms and is critical to survival in certain conditions. Only a few studies support the claim that color information facilitates object and scene recognition, but color may take center stage as its role in these processes, mediated by material identification, becomes clearer. Unfortunately, studies of pattern, texture, material, object, and scene perception have generally used achromatic images, thus leaving out potentially critical information. The papers in this symposium will use psychophysics, fMRI, image statistics, and computational modeling to examine how color information is used in these tasks.

1.   Dr. Qasim Zaidi
SUNY College of Optometry, New York, NY, USA

【Visual Perception of Material Changes】

In snapshots, scenes consist of things. In reality, the world consists of processes. Some are repetitive, like foliage changing through the seasons or terrain becoming wet and then dry; others are unidirectional, like fruit ripening and decaying, water damage, or dust accumulating. Chemical and physical properties of objects are manifested as patterns of colors and textures, which are altered by endogenous and exogenous factors. To examine how observers identify these changes, we used calibrated images, acquired from 15 viewpoints, of 26 materials, including fruits, foods, woods, minerals, metals, fabrics and papers, undergoing changes like drying, burning, decaying, rusting, oxidizing and heating. The images revealed that material changes exhibit complex spatial and chromatic patterns, e.g. specular and diffuse components are affected differently by dust, and the spatial pattern of drying is different on stone than on fabric. Observers identified the type of material and the type of change from colored and achromatic images. Color cues improved performance in all conditions, but most dramatically for organic materials. Interestingly, material changes create natural “metamers”, e.g. wetting and polishing are confused for hard materials, while bleaching and drying are confused for porous materials. We hope to elucidate the role of color in object recognition through its role in material perception.

Acknowledgments:  Grants EY07556 & EY13312 to QZ.


2.   Dr. Karl R. Gegenfurtner
Giessen University, Germany

【Color Vision for Objects Made of Different Materials】

The objects in our environment are made from a wide range of materials. The color appearance of the objects is influenced by many factors, including the geometry of the illumination, the three-dimensional structure of the objects, and the surface reflectance properties of their materials. Only a few studies have investigated the effect of material properties on color perception, and in most of these studies the stimuli were three-dimensional objects rendered on a computer screen. Here we set out to investigate color perception for real objects made from different materials. The surface properties of the materials ranged from smooth and glossy (porcelain) to matte and corrugated (crumpled paper). We tested objects with similar colors made from different materials, and objects made from the same material that differed only in color. Observers matched the color and lightness of the objects by adjusting the chromaticity and the luminance of a homogeneous, uniformly colored disk presented on a CRT screen. In general, observers' matches were close to the true chromatic and luminance distributions of the objects. However, observers systematically tended to discount the variations in reflected light induced by the geometry of the objects and rather matched the light reflected from the materials themselves.


3.   Dr. Shin'ya Nishida
NTT Communication Science Laboratories, Japan

【Perception of the Colorful Natural Scene】

(i) Color provides useful information about whether bright spots on a surface are highlights or splashed ink. For instance, white spots on a red body are seen as highlights, but red spots on a white body are not, even when the luminance profiles are identical in the two cases. This suggests that the human visual system exploits a physical rule of optics: the color spectrum of a highlight normally includes the spectrum of the body surface (Nishida et al, 2008, VSS). (ii) When chromatic contrasts are increased (decreased) while luminance contrasts remain the same, or when luminance contrasts are decreased (increased) while chromatic contrasts remain the same, natural photos look unnaturally oversaturated (undersaturated). A clear perceptual sign of oversaturation is unnatural self-glowing of normal objects, such as apples and oranges (Nakano et al, 2009, VSS). For both (i) and (ii), the human brain must compute natural luminance-color relationships. I will argue that a redundant color representation consisting of multiple intensity images for different color bands, like a set of RGB images, may be better suited to processing luminance-color interactions in natural scenes than the orthogonal color representation consisting of an achromatic channel and two chromatic opponent channels.
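As a minimal sketch of the two representations contrasted above (the channel weights below are generic textbook approximations, not the author's model), an RGB image can be recoded into one achromatic and two opponent channels as follows:

import numpy as np

def to_opponent(rgb):
    # Recode an RGB image (H x W x 3, floats in [0, 1]) into an
    # "orthogonal" representation: one achromatic channel plus two
    # chromatic opponent channels. The weights are illustrative only.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = 0.30 * r + 0.59 * g + 0.11 * b   # achromatic (luminance-like)
    rg = r - g                              # red-green opponent channel
    by = b - 0.5 * (r + g)                  # blue-yellow opponent channel
    return np.stack([lum, rg, by], axis=-1)

# The "redundant" representation discussed in the talk is simply the RGB
# image itself: three intensity images, one per color band, each of which
# still carries luminance information.
opponent = to_opponent(np.random.rand(4, 4, 3))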


4.   Dr. Colin Clifford
The University of Sydney, Australia

【Interactions in the Processing of Color and Orientation】

According to the modular view of visual processing, different aspects of a scene, such as its colour, form and motion, are analyzed by dedicated and anatomically distinct sub-systems. Modularity creates a binding problem: representations of the various features of an object are distributed across brain areas but must be associated with, or bound to, the same object. These issues are controversial – just how modular is the visual system, and is there really a binding problem? Here, I describe evidence from psychophysics and fMRI indicating that the processing of colour and orientation is closely coupled early in visual cortex, challenging the strongly modular view of vision. This coupling appears to alleviate the binding problem for colour and orientation. However, for more complex forms, the functional architecture appears to be essentially modular and a marked binding problem demonstrably exists.


 Symposium 2
  Visual Cortex in Primates, Retinotopic Organisation and Plasticity

Retinotopic maps are not simply an accidental property of early visual cortex, but a fundamental organisational principle of information processing. This symposium will present recent insights into contemporary concepts of retinotopic organisation. It will further present results on plasticity at three levels: at a micro level within V1, at an intermediate level across a complex of visual areas, and finally at a system level across hemispheres.

1.   Dr. Mark M. Schira
University of New South Wales, Sydney, Australia

【A Hyper Complex of Visual Areas, the Fovea Confluence and Its Consequences for Anisotropy and Magnification】

After remaining terra incognita for 40 years, the detailed organization of the foveal confluence has only recently been described in humans. I will present recent high-resolution mapping results in human subjects and introduce current concepts of its organization in humans and other primates (Schira et al., 2009, J Neurosci). I will then introduce a new algebraic retino-cortical projection function that accurately models the V1-V3 complex, to the level of our current knowledge of the actual organization (Schira et al., 2010, PLoS Comput Biol). Informed by this model, I will discuss important properties of foveal cortex in primates. These considerations demonstrate that the observed organization, though surprising at first glance, is in fact ideal with respect to cortical surface area and local isotropy, providing a potential explanation for this organization. Finally, I will introduce simple techniques that allow fairly accurate estimates of the foveal organization in research subjects within a reasonable timeframe of approximately 20 minutes, providing a powerful tool for research on foveal vision.
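For readers unfamiliar with retino-cortical projection functions, the sketch below implements the classic monopole (complex-logarithm) mapping w = k * log(z + a), a much simpler relative of the banded V1-V3 model cited above; the parameter values are illustrative assumptions, not fitted values from Schira et al. (2010).

import numpy as np

def monopole_map(ecc_deg, angle_rad, k=15.0, a=0.7):
    # Classic monopole retino-cortical projection w = k * log(z + a),
    # where z encodes the visual-field position as a complex number.
    # k (mm) and a (deg) are illustrative constants, not the fitted
    # parameters of the banded V1-V3 model described in the talk.
    z = ecc_deg * np.exp(1j * angle_rad)
    w = k * np.log(z + a)
    return w.real, w.imag    # cortical coordinates in mm

# Cortical magnification falls off roughly as k / (eccentricity + a),
# so foveal positions occupy a disproportionately large cortical area:
for ecc in (0.5, 2.0, 8.0):
    print(ecc, monopole_map(ecc, 0.0)[0])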


2.   Dr. Stelios M. Smirnakis
Baylor College of Medicine, Houston, TX, USA

【Visual Cortex Reorganization After Injury: Lessons from Primate fMRI】

The ability of networks of neurons to undergo plastic rearrangement represents a general organizing principle of the nervous system and has been demonstrated to persist in adulthood in several areas, including motor, visual, auditory and somatosensory cortex. To date, animal models appropriate for studying recovery after cerebrovascular injury remain scarce. Paradigms developed in the rodent, though valuable, are far removed from human physiology and have restricted behavioral repertoires, which limits the types of questions they can be used to address. By contrast, a macaque model of cortical reorganization based on fMRI is closer to human physiology, has a rich behavioral repertoire, and can be compared directly to fMRI results from human patients. This makes it highly versatile for testing experimental hypotheses about the nature of plasticity and for gauging the global effect of pharmacologic or rehabilitative manipulations on neuronal recovery after cortical injury. In my talk, I shall describe the emergence of macaque fMRI as an animal paradigm/tool for studying cortical plasticity, and will discuss its application to the study of cortical reorganization following primary visual cortical lesions (Schmid et al., PLoS ONE, 4(5):e5527, 2009).


3.   Dr. James A. Bourne
Monash University, Melbourne, Australia

【Maturation of the Visual Brain: Lessons from Lesions】

In the nonhuman primate (the marmoset monkey), which has a lissencephalic cortex, we have demonstrated that there may be not one but two areas (V1 and the middle temporal (MT) area) that develop earlier than the “extrastriate” visual areas of the cortex. From these findings we created a model suggesting that the primary areas serve as genetically predetermined organisational “anchors” which prompt the development of the rest of the visual cortex in a sequential manner. In order to further characterize the importance of specific areas and connections in visual cortical maturation, we unilaterally lesioned V1 in adult and neonate monkeys. Surprisingly, the consequences of focal injury to the cerebral cortex in the immature brain differ from those induced by similar damage to the mature cerebrum. The ability of the immature brain to reorganize and reroute connections, resulting in considerable sparing of visual function, is a phenomenon which has been largely unstudied. I will discuss the relevance of the nonhuman primate to such studies and the effect of a V1 lesion during development. I will also discuss whether better knowledge of the mechanisms operating in the immature brain will one day enable us to repair lesioned pathways and cortical nuclei in the adult brain.


4.   Dr. Lars Muckli
University of Glasgow, Glasgow, United Kingdom

【Bilateral Visual Field Maps in a Patient with Only One Hemisphere】

The prenatal development of retinotopic maps is regulated by two basic principles: gradients of molecular markers determine map orientations and waves of spontaneous activity determine neighbourhood relations.
We were able to observe the developed retinotopic maps of a 10-year-old girl who was born with only one cerebral hemisphere. The retinal ganglion cells from her normally developed left eye project entirely to the left thalamus and the left cerebral hemisphere, where full-field visual maps emerged. The precise retinotopic folding patterns in V1, V2 and the LGN provide evidence for the underlying developmental mechanisms based on molecular markers and on activity-dependent cues.



 Symposium 3
  The Other-race Effect in Face Perception

We are all experts at recognizing faces, and are much better at recognizing faces than most equally-complex non-face objects. But faces from races other than our own show a signature cost in recognition, relative to faces from our own race. What causes this other-race effect? Is it due to less efficient coding of faces with which we lack expertise, and if so, how? Or is it due to social categorization differences in the way we treat in-group and out-group members? In this symposium speakers will present recent studies that use a variety of methodologies to shed light on the causes of the other-race effect.

1.   Dr. William Hayward
University of Hong Kong, Hong Kong

【Perceptual and Social Processes Interact to Cause the Other-race Effect】

Is the other-race effect caused by perceptual expertise (greater experience with own-race faces endows us with more efficient processing of the visual features that discriminate them from each other) or by social categorization (people normally encode individuating features for own-race faces, but only category-defining features for faces from other races)? In this talk I will discuss a range of studies demonstrating that both contribute to the effect. First, I will present data showing better recognition of own-race faces when they are presented as coming from one's own social group than from another social group. Second, I will show that even when all faces need to be individuated, own-race faces are still learned more efficiently than other-race faces. Third, I will examine changes in holistic processing of faces caused by social categorization. These results suggest that perceptual and social processes combine to cause the other-race effect.


2.   Dr. Jim Tanaka
University of Victoria, Canada

【Reversing the Other-race Effect: The Cognitive, Neural and Social Plasticity of Face Recognition】

Although it is well established that we are better at recognizing faces from our own race than faces from another race, the factors that contribute to the own-race face advantage are not well understood. Levin (2001) hypothesized that people initially classify own-race faces as individuals and other-race faces as members of a racial group. His hypothesis is compatible with an expertise view: as own-race "experts," we categorize own-race faces at the subordinate level of the individual, and as other-race "novices," we categorize other-race faces at the basic level of race. According to the expertise view, subordinate categorization tunes the recognition system to a finer grain of perceptual analysis, which in turn produces an own-race advantage. Is it possible to train up other-race face recognition in the same way as other forms of perceptual expertise? In my talk, I will discuss an expertise training protocol intended to improve other-race face recognition through other-race individuation. In our study, Caucasian participants were trained to differentiate African (or Hispanic) faces either at the subordinate level of the individual or at the basic level of race. Our results showed that individuation training reduced the other-race effect by improving the recognition of novel faces from the individuated race. We also found that other-race training produced changes in event-related brain potentials characteristic of expert processing and ameliorated the negative biases implicitly associated with other-race faces.


3.   Dr. Siegfried Ludwig Sporer
University of Giessen, Germany

【Becoming a Face Expert: Inversion and the Own-ethnicity Effect】

This presentation focuses on the role of expertise in processing faces and other visual stimuli. The majority of our studies were conducted on the own-ethnicity effect, the differentially better recognition of faces of one's own ethnic group in comparison to faces of another ethnic group. An in-group/out-group model is proposed that integrates existing explanatory models and suggests additional hypotheses regarding a general advantage in processing in-group stimuli. Studies on recognition of faces and horses in normal and inverted view, as well as classification and matching studies with real and forged bank notes from different countries, are presented. Expertise was operationalized via group membership (own vs. other ethnic group, adults vs. children), riding experience (horseback riders vs. non-riders), and pre- and post-experience with a new currency (the Euro).


4.   Dr. Roberto Caldara
University of Glasgow, United Kingdom

【Tracking Early Sensitivity to Race on the Human Visual Cortex】

Race is a universal, socially constructed concept used to categorize humans originating from different geographical locations by salient physiognomic variations (e.g., skin tone, eye shape). Race is extracted quickly and effectively from faces and, interestingly, such visual categorization impacts upon face processing performance. Humans are noticeably better at recognizing faces from their own racial group than from other racial groups: the so-called other-race effect. This well-established phenomenon is also accompanied by the perception of individuals belonging to “other races” as all looking alike. However, despite the impressive number of studies showing the robustness of the other-race effect at the behavioural level, whether electrophysiological sensitivity to race occurs at early perceptual or late (post-perceptual) stages of face processing remains to be clarified.
To this end, we recorded high temporal resolution electrophysiological scalp signals in East Asian and Western Caucasian observers in a series of face processing experiments (i.e., face inversion, face adaptation and parametric manipulation of noise) with well-controlled East Asian and Western Caucasian faces. Our results consistently show that the discrimination of same- and other-race faces begins early, at the perceptual level, in both groups of observers. Such very early detection of race could relate to biologically relevant mechanisms that shape human social interactions.



 Symposium 4
  Spatial and Temporal Aspects of Perception and Attention

Our visual experience is affected by both the spatial and the temporal aspects of the scene we are viewing. Understanding how our system processes both aspects is crucial for a comprehensive view of the visual system, yet we know considerably more about the spatial than about the temporal domain. This is especially true of our knowledge of visual attention. The talks in this symposium take different routes to explore the interplay between the temporal and spatial domains. These include an investigation of the temporal limits on extracting spatial relationships, tradeoffs between temporal and spatial processes and the role played by attention, the spatial and eye specificity of top-down attentional modulation, and the effects of perceptual organization on temporal processing.

1.   Dr. Alex O. Holcombe
University of Sydney, Australia

【Successes and Failures of Perception on the Fly】

Hurrying past other passengers on a subway platform, running down the basketball court towards the net, driving on a lonely country highway: this is perception on the fly, when one must perceive the arrangement of objects as they move rapidly across one's retina. Some perceptual qualities, such as motion direction and edges, are computed with high temporal resolution and are perceived even at high speeds. However, some aspects of the visual world cannot be apprehended. To investigate the speeds above which various kinds of information are lost, we had two concentric circular arrays of objects orbit fixation at variable speed. At high rates, the color of each patch is easily perceived. However, judging which patches are aligned cannot be done accurately above a low speed of 1.3 revolutions per second. Furthermore, judging the order of the sequence of colors around a single ring could only be done at similarly low rates. Finally, keeping track of a single object as it revolves round and round was also very limited, to about 1.4 rps. These failures of perception on the fly may reflect a limit on the ability of attention to keep up with moving objects and feed selected objects into visual cognition.


2.   Dr. David I. Shore
McMaster University, Canada

【Objects, Space and Time: How Perceptual Grouping Affects Temporal Perception】

Perceptual grouping provides a fundamental organizing factor in understanding the environment. Simple features are bound together into objects, which form the basis for both perception and action. Within this context, the perceived relative onset of stimuli can be very different depending on whether they come from the same object or from two different objects. Using temporal order judgments (TOJs), these relations were explored both across different modalities and within the visual modality alone. Poorer perception was usually observed when stimuli were presented at the same location in space and on the same object. One interesting exception occurs with the pairing of audition and touch. Discussion will focus on the potentially special role of vision in organizing perception in space.

 


3.   Dr. Sheng He
University of Minnesota, USA

【Hemispheric Constraint and Eye Specificity of Spatial Attention】

Stimuli presented on opposite sides of the vertical meridian initially project to different hemispheres. For a target (grating or letter) presented near the vertical meridian, we observed a stronger spatial crowding effect when a distractor was on the same side of the meridian compared with an equidistant distractor on the opposite side. No such ipsi vs contra modulation was observed across the horizontal meridian. These results constrain the cortical locus of the crowding effect to a stage where left and right visual spaces are represented discontinuously, but the upper and lower visual fields are represented continuously, likely beyond the early retinotopic areas.
In addition to the hemispheric constraint, we also showed that attending to a monocular cue while remaining oblivious to its eye of origin significantly enhanced the signal strength of a stimulus presented to the cued eye. Furthermore, this eye-specific attentional effect is insensitive to low-level properties of the cue, but depends on the attentional load on the cue. Thus voluntary attention could be eye-specific, modulating visual processing associated with a specific monocular channel, despite the fact that observers normally do not have explicit access to the eye-of-origin information.


4.   Dr. Yaffa Yeshurun
University of Haifa, Israel

【Transient Attention and Perceptual Tradeoffs】

In this talk I will present a mechanism of transient attention that takes into account the tradeoffs between segregation and integration processes and between the spatial and temporal domains. Specifically, I will suggest that attention facilitates spatial segregation and temporal integration but impairs their counterparts: spatial integration and temporal segregation. Support for this mechanism is derived from various studies that explored the effects of transient attention on temporal and spatial processes such as enhancement of spatial resolution, degradation of temporal resolution, prolongation of perceived duration, prolongation of temporal integration, and degradation of spatial integration. I will further suggest a possible physiological instantiation of this mechanism: an attentional preference for parvocellular over magnocellular neuronal activity. Finally, I will present evidence in support of this physiological instantiation, including evidence from different stimuli and paradigms such as attentional effects on selective adaptation, isoluminant stimuli, reversed apparent motion, and the steady-pedestal and pulsed-pedestal paradigms.



 Symposium 5
  Fading, Perceptual Filling-in, and Motion-induced Blindness: Phenomenology, Psychophysics, and Neurophysiology

Why do we see what is not there? This symposium strives to answer that question. Fading of a target on a uniform background and perceptual filling-in from the edge are examples of the failure of the visual system to provide sustained vision under conditions of prolonged fixation. Motion-induced blindness is another phenomenon that falls into this category. The relationship between these three phenomena is not entirely clear, but the preference for small, peripheral, and low-contrast targets suggests that the mechanisms underlying these effects may be similar. It has recently been shown that, in addition to the above stimulus attributes, the salience of the stimulus and perceptual grouping may also affect fading and filling-in. Furthermore, stereo-depth and monocular depth cues have been proposed as relevant factors. These findings imply that top-down (salience) as well as bottom-up mechanisms are responsible for the perceptual disappearance of a fixated target through fading and filling-in. Speakers will summarize the phenomenological and psychophysical findings on fading and filling-in of brightness, color, and texture, and correlate these observations with the presumed neurophysiological mechanisms.

1.   Dr. Lothar Spillmann
University Hospital, Freiburg, Germany

【Fading and Filling-in and the Perception of Extended Surfaces】

Since Troxler's original observation in 1804, fading and filling-in phenomena have aroused the interest of researchers. However, the question of why we see what is not there, i.e., properties induced from the surround, has been studied systematically only during the last 20 years. We now know that with prolonged fixation most figures fade into the background: targets can be static, moving, flickering, or textured. Furthermore, the background need not be uniform and steady; dynamic visual noise is just as effective, or even more so. While the mode of disappearance and the time course of fading may differ across these conditions, the tendency of the visual system to make a background appear spatially uniform is common to all. We have recently found that filling-in requires only a minimum of surround information. A thin red ring hugging the boundary of the physiological blind spot will fill in the enclosed area uniformly and completely. Similarly, a thin chromatic double contour will induce the watercolor effect over a large area. This spread of color suggests long-range horizontal interactions in the cortex as an explanation. On the other hand, the fading time for stimuli consisting of uniformly oriented vs. randomly oriented bars depends on what is center and what is surround. A uniformly oriented center is less salient and takes less time to fade than a randomly oriented center, despite an identical texture contrast. This suggests an influence of figure-ground organization.


2.   Dr. Hidehiko Komatsu
National Institute for Physiological Sciences, Okazaki, Japan

【Bridging Gaps at V1: Neural Responses for Filling-in and Completion at the Blind Spot】

No retinal input exists at the blind spot (BS). However, we do not perceive a hole in the visual field. Instead, within the BS, we perceive the same color, contour or texture as the stimuli surrounding the BS. This is called perceptual filling-in or completion at the BS. Filling-in and completion occur not only at the BS but also in retinal scotomata and in various phenomena in the normal visual field. This suggests that there are mechanisms in our visual system that interpolate incomplete retinal signals to form contiguous surfaces and contours. To elucidate the role of V1 in filling-in and completion, we analyzed neural responses at the retinotopic representation of the BS in V1. We addressed two questions: (1) Do V1 neurons respond to stimuli inducing filling-in or completion even though there is no direct retinal input to this region? (2) Do the responses of V1 neurons to such stimuli correlate with perception? Our results suggest that V1 plays an important role in the occurrence of filling-in and completion at the BS.

References
Komatsu H et al. (2000) J Neurosci 20: 9310-9319.
Matsumoto M and Komatsu H (2005) J Neurophysiol 93: 2374-2387.
Komatsu H (2006) Nat Rev Neurosci 7:220-231.


3.   Dr. Li-Chuan Hsu
Medical College and Institute of Neural and Cognitive Sciences, China Medical University, Taichung, Taiwan

Dr. Su-Ling Yeh
National Taiwan University, Taiwan

【Perceptual Rivalry as Revealed by Perceptual Filling-in and Motion-induced Blindness】

Perceptual filling-in (PFI) and motion-induced blindness (MIB) are two phenomena of perceptual rivalry in which a perceptually salient target, among a field of non-targets, disappears and reappears alternately after prolonged viewing. Despite the apparent differences between PFI and MIB, when we manipulate eccentricity, contrast, perceptual grouping, and depth ordering, the results indicate that both PFI and MIB are most likely caused by a common mechanism. We argue that this mechanism involves boundary adaptation, which is a sufficient but not a necessary condition. Given that more PFI/MIB is observed when the target has an uncrossed rather than a crossed disparity, we further test whether monocular depth cues such as interposition and the watercolor illusion can also affect them, and whether they can affect perceptual fading in static displays as well. We find positive answers to these questions, implying that perceived depth affects perceptual fading in almost any stimulus, dynamic or static.


4.   Dr. Peter de Weerd
Universiteit Maastricht, The Netherlands

【fMRI Evidence for a Correlate of Surface Brightness in Early Visual Areas】

The neural mechanisms of surface perception are surprisingly poorly understood, and ongoing research, both neurophysiological work in animal models and fMRI work in humans, has led to conflicting outcomes. In the domain of surface brightness perception, it is debated whether surface perception depends on an interpolation of brightness in early visual areas across regions of the visual field where that information is physically absent. We used fMRI in human subjects to test a possible contribution of early visual areas to the perception of surface brightness. A brightness induction paradigm was employed in which the perceptual appearance of a surface is modulated in the absence of physical changes. In this paradigm, dynamic luminance changes in inducers produce counterphase (illusory) brightness changes in an enclosed grey surface of constant luminance. We found activity modulations in the retinotopic region of early visual areas representing the visual field location of the constant grey surface, and these modulations corresponded to the brightness modulations perceived in that surface. These data suggest a role of early visual cortex in the perception of surface brightness. The data will be presented in the context of related neurophysiological studies in cat and monkey, as well as human fMRI studies. Further experiments needed to fully demonstrate a correlate of brightness interpolation during surface perception will be suggested briefly.
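As a hypothetical sketch of the kind of induction paradigm described above (the temporal frequency and luminance values below are invented for illustration, not taken from the study), the inducer luminance is modulated sinusoidally while the enclosed test surface stays physically constant:

import numpy as np

# Minimal sketch of a brightness-induction timecourse. Assumed values:
# 60 Hz display, 0.5 Hz inducer modulation around a mean of 50 cd/m^2.
t = np.arange(0.0, 8.0, 1.0 / 60.0)                    # time in seconds
inducer = 50.0 + 30.0 * np.sin(2 * np.pi * 0.5 * t)    # physically modulated inducers
surface = np.full_like(t, 50.0)                        # physically constant grey surface

# The perceptual report (not a measurement here): the apparent brightness of
# the constant surface varies roughly in counterphase with the inducers,
# i.e. in proportion to -sin(2 * pi * 0.5 * t).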



 Symposium 6
  Bionic Vision: A Vision for the Blind

Blindness afflicts millions of people worldwide. Although a number of approaches are currently being pursued in the hope of preventing blindness, once vision is totally lost, retinal transplantation and bioelectronic visual prostheses are two of the few existing strategies for restoring it. Several groups in the past decade have developed electrical implants that can be attached directly to the retinas of patients suffering from retinal degeneration, and have shown the promise of retinal prostheses for clinical use. In this symposium, leaders of retinal prosthesis research from around the world will present recent advances in artificial vision and discuss the major obstacles to improving these prosthetic devices.

1.   Dr. Joseph Rizzo
Department of Veterans Administration, and the Harvard Medical School/Massachusetts Eye and Ear Infirmary, USA

【The Development of the Boston Retinal Prosthesis: What Is the Potential for Devices of This Type to Restore Vision to the Blind?】

In the late 1980s, the Boston Retinal Implant Project was formed as one of the first two projects of this type. Our group has developed a wireless, hermetic, implantable device with “back telemetry” that is designed for implantation into the sub-retinal space. Our development strategy has been to fully develop all of the technologies needed to produce a device with hundreds of individually controllable electrodes before performing human implants. This approach has been taken to improve the likelihood that our device will yield higher-quality vision. Other groups that have implanted retinal prostheses have reported very promising results from early human trials. The question of what ultimate level of vision might be attainable with devices of this type will be discussed.


2.   Dr. Gregg Suaning
University of New South Wales, Australia

【Supra-Choroidal Electrical Stimulation of the Retina】

The key to an efficacious neural prosthesis is its electrode-tissue interface and the ease with which this interface can be established. This is particularly true when applying a neuroprosthesis as a treatment for some forms of profound blindness. Assessment of the efficacy and surgical difficulty of reaching various sites of intervention within the visual system (the visual cortex, the lateral geniculate nucleus, the optic nerve, and three sites on the retina: the epi-retinal surface, the sub-retinal space, and the supra-choroidal space) has led us to believe that for so-called “first-generation” devices, comprising electrical stimulation delivered via several tens to hundreds of electrodes, the supra-choroidal space may provide the most readily accessible, consistent, and efficacious electrode-tissue interface. This paper will show recent results that illustrate the benefits and identify the limitations of the supra-choroidal approach in terms of surgical intervention, electrode separation, and discrete phosphene thresholds. Further, a device for chronic implantation into the supra-choroidal space will be presented.


3.   Dr. Long-Sheng Fan
Inst. of NEMS / Electronic Research Lab., Taiwan

【A Flexible Sensing CMOS Technology for Sensor-Integrated, Intelligent Retinal Prosthesis】

Previous technologies for artificial retinal prosthesis implants include microelectrode arrays on flexible polymers, or the integration of photodiodes with microelectrode arrays driven directly by the photodiode outputs. It is now feasible to monolithically integrate mm-sized flexible microsystems with 180 nm CMOS transistors, image sensors, and a cell-size-pitched microelectrode array, with a total microsystem thickness comparable to that of the thinnest soft contact lenses, for potential sub-retinal or epi-retinal prosthesis applications. The flexible format allows better proximity between the stimulating electrodes and retinal neurons for local stimulation; the integrated photosensors sense local light intensity; and the integrated pixel electronics can compute and supply an adequate and appropriate stimulation waveform right at each individual electrode. Since each element can include its own photosensor, transistor electronics, and microelectrode, the problem of interconnecting multiple sensor, electronics, and electrode modules when implementing large arrays is greatly simplified. We have implemented 1,024-element arrays and are implementing 4,096-element arrays using this technology. We use in vitro loose-patch and whole-cell patch clamp techniques to characterize retinal ganglion cell responses on these arrays.


©2010 Asia-Pacific Conference on Vision