
APCV 2010 Talk Sessions
 
Friday, July 23 / Saturday, July 24 / Sunday, July 25 / Monday, July 26


Friday, July 23

9:40-9:50

Opening

Dr. Si-Chen Lee

9:50-10:50

Keynote Speech

Dr. Chung-Yu Wu

The Design of Implantable Retinal Chips for Visual Prostheses

10:50-11:00 Break

11:00-12:00

Poster Session

12:00-13:30 Lunch

13:30-15:30 (a)

Symposium "Bionic Vision: A Vision for the Blind"

Joseph Rizzo

The Development of the Boston Retinal Prosthesis: What is the Potential for Devices of This Type to Restore Vision to the Blind?

Gregg Suaning

Supra-Choroidal Electrical Stimulation of the Retina

Long-Sheng Fan

A Flexible Sensing CMOS Technology for Sensor-Integrated, Intelligent Retinal Prosthesis

13:30-15:30 (b)

Talk Session "Face & Objects"

Kate Crookes
Individual-level discrimination – an innate capacity? 4-month-old infants individuate upright but not inverted horses

Are there innate representations of structural form that support individual-level discrimination of some object classes? Previous studies demonstrate this for primate faces: babies and monkeys with no or little visual experience of the class discriminate primate faces upright but not inverted. Here, we show this finding extends beyond primate faces. Four-month-old babies without prior experience of horses individuate side views of whole horse bodies upright but not inverted. This is despite adults showing the classic pattern of good discrimination only for upright faces, with a large inversion effect for faces and none for horses. We discuss these findings in terms of a possible broad representation of animal body shape that undergoes perceptual narrowing across infancy to eventually support discrimination only of faces of conspecifics. (Supported by Australian Research Council DP0770923 and DP0984558)

Aki Tsuruhara
Infants' preference for a moving face-like figure over a top-heavy figure

Infants, even at birth, show a looking preference for face-like figures over non-face-like figures. However, newborns younger than 1 month look longer at top-heavy configurations (i.e., more elements in the upper part than in the lower part) than at bottom-heavy configurations (i.e., more elements in the lower part than in the upper part), even when neither configuration looks like a face to adults (Simion et al., 2002). This suggests that young infants do not discriminate ‘faces’ from top-heavy figures. In this study, we examined infants' preference for a face-like figure over a top-heavy figure. A face-like motion, in which the ‘eyes’ and ‘mouth’ appeared to open and close, was added to the figures, and infants’ looking preferences in this moving condition were compared with those in a static condition. Our results showed that when the face-like motion was added, 2- and 3-month-old infants looked longer at the face-like figure than at the non-face-like top-heavy figure. By contrast, in the static condition, the infants showed no preference for the face-like figure. Facial movements have been shown to enhance face recognition in infants (Otsuka et al., 2009). Our results suggest that facial movements also enhance the discrimination of ‘faces’ from non-face-like figures.

Sarina Hui-Lin Chien
The “top-heavy” bias is gone: An eye-tracking study in infants and adults revealed common preferences specifically to real faces

Newborns show preferences for “top-heavy” configurations, which have been proposed to explain neonatal face preference (Simion et al., 2002). However, the later development of such a preference has not been fully studied. Thus, using an eye tracker (Tobii T60), we investigated the face preference mechanism in 2- to 5-month-old infants and in adults as a comparison group. Each infant and adult viewed three classes of stimuli: “top-heavy” and “bottom-heavy” geometric patterns, face-like figures, and photographed faces. Using area-of-interest (AOI) analyses on fixation duration and count, we computed a top-heavy bias index (between -1 and +1) for each pair of stimuli and for each participant. Our results showed that the top-heavy bias indices for geometric and face-like patterns were close to zero in both infants and adults, indicating a disappearance of the top-heavy bias. Moreover, we found significant looking preferences for photographed natural faces over inverted or unnatural ones in both infants and adults, indicating a specific sensitivity to upright real faces rather than to top-heavy configuration. Lastly, the patterns of looking preferences across stimulus types were strikingly similar in infants and adults. Taken together, these findings suggest a very early cognitive specialization process toward face representation.
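The abstract does not give the formula for its top-heavy bias index; a plausible normalized-difference form over the AOI fixation measures, sketched here purely as an assumption, would be:

```python
def top_heavy_bias(fix_top_heavy, fix_bottom_heavy):
    """Hypothetical normalized preference index in [-1, +1]:
    +1 = all fixation time on the top-heavy member of the pair,
    -1 = all on the bottom-heavy member, 0 = no bias.
    (Illustrative only; the study's exact formula is not stated.)
    """
    total = fix_top_heavy + fix_bottom_heavy
    if total == 0:
        return 0.0
    return (fix_top_heavy - fix_bottom_heavy) / total

print(top_heavy_bias(3.0, 1.0))  # 0.5  (moderate top-heavy preference)
print(top_heavy_bias(2.0, 2.0))  # 0.0  (no bias, the pattern reported
                                 #       for geometric and face-like stimuli)
```

An index near zero, as found here for geometric and face-like patterns, would indicate no systematic preference for either member of the pair.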

Garga Chatterjee
Perceptual and cognitive processes in a widely prevalent face recognition deficit: the case of developmental prosopagnosia

Developmental prosopagnosia (DP) is an important test case for the modularity and structure of the visual system. In this widely prevalent face recognition deficit, subjects are severely impaired on face memory tests, confirming their face recognition deficits. They were also impaired on two novel tests of non-face visual memory, the abstract art test and the within-category object memory test. However, they did not show deficits in verbal memory. Hence, most cases did show general visual memory deficits. The implications of this result are discussed. Certain models of face processing (Bruce and Young, 1986) postulate that certain types of non-identity-based facial information (such as age, gender, and attractiveness) can be processed independently of face identity recognition. Taking advantage of the severe face-identity deficit in prosopagnosia, we show that normal performance in age and gender processing can exist concomitantly with identity recognition deficits. The kinds of facial information that are compromised along with face-based identity recognition speak to the organization of these information-processing streams, by revealing which deficits go together and which do not. Phenotype differences also exist in developmental prosopagnosia in the nature of these associations and dissociations; information from individual differences in this regard is presented.

Withdrawn

Derek Arnold
Binocular Rivalry: Facial Dominance and Monocular Channels

When different images are presented to the two eyes, each can intermittently disappear, leaving the other to dominate perception. This is called binocular rivalry (BR). The causes of BR are debated. One view is that BR is driven by a low-level visual process, characterized by competition between monocular channels. Another is that BR is driven by higher-level processes involved in interpreting ambiguous input. We assessed these proposals via two manipulations involving facial images. We found that when a dominance change is triggered in one section of a facial image, dominance changes propagate through the rest of the image via monocular channels. We also assessed the timing of BR changes in proximate pairs of rival images. We found that the timing of BR changes, for pairs of both simple (orthogonal gratings) and complex (houses / faces) stimuli were related, but only when similar images were encoded in the same monocular channels. These observations show that monocular channel interactions are integral to determining the dominance of facial images. This is consistent with BR being driven by an inherently visual process, intended to suppress monocular obstructions from awareness, and thereby enhance the visibility of fixated objects.

Yu-Chin Wu
Modulation of Familiarity on Dynamic Advantage Effect in Matching Faces

This study examines how familiarity modulates the advantage of dynamic information in face recognition. A sequential face matching task was used to measure face recognition ability, allowing for an unbiased comparison between famous and unfamiliar faces. Moreover, a new display method was developed to present moving and multi-frame static face stimuli, controlling for extraneous confounding factors. In Experiment 1, where intact face stimuli were used, the results revealed the dynamic advantage effect when participants judged whether two sequentially presented images of famous faces were the same person, but not when matching images of unfamiliar faces. Face stimuli with different degrees of blur were created by adjusting the blur radius for subsequent experiments. In Experiment 2, where less blurred faces were used, no dynamic advantage was found with either famous or unfamiliar faces. In Experiment 3, where face stimuli were more degraded, however, a reversed pattern emerged in that the dynamic advantage effect was found only with unfamiliar faces. Taken together, our findings indicate that the dynamic advantage effect exists only for intact famous faces and highly degraded unfamiliar faces, suggesting that the mechanisms underlying the dynamic advantage effect may be qualitatively different for famous and unfamiliar faces.

Takao Sato
Dominance shift with hybrid images is dependent on relative spatial frequency

In prototypical hybrid images, such as the Einstein vs. Monroe pictures, the low-spatial-frequency face becomes perceptually dominant at smaller image sizes or longer viewing distances (Schyns & Oliva, 1999). This apparently indicates the importance of absolute (retinal) spatial frequency in face recognition. However, this conclusion is not definitive, since the cut-off frequency also shifts as size or distance is manipulated. To examine the roles of absolute and relative (defined against face width) spatial frequencies, we measured the dominance shift with hybrid facial images generated by combining a low-pass and a high-pass face with a common absolute cut-off frequency. In the experiments, such hybrid images with 13 different cut-off frequencies were presented with either a fixed size or a fixed viewing distance while the other parameter (distance or size) was varied. We found that the cut-off frequencies at which the dominance shifts occur are almost identical when expressed in relative spatial frequency, regardless of viewing distance or image size. These results indicate the importance of relative spatial frequency in face recognition. The dominance shift with regular hybrid images occurs because the cut-off frequency shifts higher and the high-SF components become invisible at smaller sizes or longer distances.

15:30-15:40

Break

15:40-16:30

Poster Session



Saturday, July 24

9:30-10:30

Keynote Speech

Dr. Izumi Ohzawa

Recent Advances in the Functional Analysis of High-order Visual Neurons

10:30-10:40 Break
10:40-11:40 Poster Session
11:40-13:15 Lunch
13:15-15:15 (a) Symposium "Visual Cortex in Primates, Retinotopic Organisation and Plasticity"

Mark M. Schira

A Hyper Complex of Visual Areas, the Fovea Confluence and Its Consequences for Anisotropy and Magnification

Stelios M. Smirnakis

Visual Cortex Reorganization After Injury: Lessons from Primate fMRI

James A. Bourne

Maturation of the Visual Brain: Lessons from Lesions

Lars Muckli

Bilateral Visual Field Maps in a Patient with Only One Hemisphere

13:15-15:15 (b)

Talk Session "Attention II"

Tsung-Ren Huang
Context-guided visual search via global-to-local evidence accumulation

How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

Arni Kristjansson
Varied timecourses for priming for different feature values in pop-out visual search

Brascamp and colleagues (PlosONE, 3, e1497) have shown how fluctuations in the perception of ambiguous stimuli reflect memory traces operating at multiple different timescales. The percept at any given moment is affected by perception during a very long period, as well as by influences from the immediately preceding percepts. Here we investigate whether a similar multiplicity of timescales is seen for priming effects in pop-out visual search tasks. We contrasted long-term trial-by-trial build-up of priming of pop-out for a particular color against shorter-term build-up for a different color. We found that the priming effects from the two colors do indeed reflect memory traces at different timescales: the priming decay function for the long-term priming is well described with a long time constant, while the short-term priming decay reflects memory traces with a shorter time constant. The results suggest that priming effects in visual search reflect neural modulations from repeated presentation of a feature value which operate at multiple different timescales. These similarities between attentional priming and the perception of ambiguous stimuli are striking and suggest compelling avenues of further research into the relation between the two effects.

Louis K. H. Chan
No attentional capture for target detection – it occurs exclusively in compound search

It has been believed that simple visual features are detected preattentively. If this description is strictly true, one should not expect attentional capture, in which attention is driven away from the target by a salient distractor, to impair performance. Consistent with this, attentional capture is generally reported only in compound search, which requires attention to be focused on the target in order to judge the response. It has been recently reported, however, that attentional capture can be produced in detection by mixing distractor trials with no-distractor trials. In this study, in a similar setting, we measured attentional capture in terms of accuracy. If detection requires attention, attentional capture should render search less accurate; however, accuracy should not be influenced by other factors, such as a slowing down in response production. We presented brief search displays in which duration was set so that accuracy was near 0.8. Results show attentional capture in compound search, but not in detection. Therefore, attention does not enhance the registering of a simple feature in the same way that it enhances compound search performance. The present results are consistent with a proposal (Chan & Hayward, 2009, JEP:HPP) that feature detection and localization involve distinct search processes.

Ryota Kanai
Awareness of absence and absence of awareness: Failures of sensation and attention

Failure of conscious visual perception occurs under a range of circumstances. The causes and processes leading to incidences of stimulus-blindness are poorly understood. Failure of conscious report could be, for example, a consequence of reduction of the sensory signal or of a lack of attentional access to sensory signals. When examining these phenomena, one has the intuition that in some types of invisibility a target is phenomenally invisible (awareness of absence), whereas under other types of manipulation we do have a sense that we missed a target (absence of awareness). To distinguish different causes leading to a failure of visual awareness, we employed a new measure, termed subjective discriminability of invisibility (SDI), which measures whether confidence in reporting the absence of a target differs between trials in which visual awareness was impaired (miss trials) and those where no target was present (correct rejections). Target misses were subjectively indistinguishable from physical absence when contrast reduction, backward masking, and flash suppression were used. Confidence could be appropriately adjusted when dual task, attentional blink, and spatial uncertainty methods were employed. These results show that failure of visual perception can be a result of either perceptual or attentional blindness, depending on the circumstances under which visual awareness was impaired.

Chun Hung Alexander Ng
The role of working memory in visual attention

Working memory (WM) plays a crucial role in the guidance of visual attention. However, findings from past studies of the WM effect on visual attention are quite controversial. Some studies claimed that the presence of such an effect is automatic, i.e. attention is driven to stimuli related to the working memory representation, independent of the relevance to any explicit task goal. In contrast, some other studies provided evidence that such WM effects are not so automatic or rigid.
The present study aims to investigate the controversy over memory-driven effects in visual selective attention. We found that an automatic WM effect may not be present in some visual search tasks using consistent mapping (search target remains unchanged between trials) and high energy stimuli. Furthermore, with modifications of the experimental setup that has previously been used to support positive findings of the automatic WM effect, we report some strategic uses of the WM item to speed up visual search. Therefore, our experiments support an alternative view that an automatic WM effect may be an outcome of incongruence of concurrent processing between a WM item and a non-target stimulus in the search set.


Xun He
Joint memory effects on visual attention: Effect of Closedness

It has been established that when engaged in joint action, people represent and are affected by others’ actions. Our previous research went further, demonstrating that people can share information in working memory according to the overlap in concurrently performed tasks and guide their subsequent attentional deployment accordingly. In the present study, we tested one group of close friends and a second group of strangers with a setup in which two participants each had to hold particular stimuli in working memory while carrying out a visual search task. Participants were drawn to search stimuli that matched their own memory. Priming images that required no memorization yielded similar results. For the partner’s stimuli, however, the attentional guidance effect occurred only among strangers, not between close friends. These data suggest that participants who engage in joint action represent in memory information relevant to their co-actor, and that the subsequent effects on attention allocation may well depend on the relationship between the participating persons. In the present setup, joint memory effects were inhibited as a result of closedness.

San-Yuan Lin
Hierarchical object representation: How the object is changed affects object-based attention

Lin and Yeh (submitted) have shown that when an attended object is later changed via amodal completion induced by an occluder, the changed object display (not the initially attended one) determines object-based attention, supporting the changed-object hypothesis. This study followed the previous one but added changes at the element level to test whether the changed-object hypothesis still holds. We used a variation of Egly, Driver, and Rafal’s (1994) double-rectangle paradigm, but created discrepancies between the elements in the initial display (four separate hashes) and those in the final display (four separate squares). The two displays were linked by an abrupt change, a smooth transition, or a shuffled transition, occurring after attention was cued to the initial object display. After the element-level change, the four squares were grouped into two larger objects via amodal completion induced by an occluder in the final display. The object effect resulting from the final changed-object display was found only when the element-level transition was a smooth sequence. This suggests that attention selects on a hierarchical representation of objects that is sensitive to element change while tolerating global configural reorganization such as that induced by amodal completion.

15:15-15:25

Break

15:25-16:15 Poster Session
16:15-20:00 Banquet & Tour


Sunday, July 25

9:00-11:00 (a)

Symposium "The Perception of Colored Patterns, Materials, and Scenes"

Qasim Zaidi

Visual perception of material changes

Karl R. Gegenfurtner

Color Vision for Objects Made of Different Materials

Shin'ya Nishida

Perception of colorful natural scenes

Colin Clifford

Interactions in the processing of color and orientation

9:00-11:00 (b)

Talk Session "Motion II"

Lizhuang Yang
Dynamic feature change affects object persistence

According to object file theory (Kahneman et al., 1992), object persistence is guided by position consistency (or spatiotemporal continuity) rather than by an object’s visual features. This study aimed to test whether the dynamics of a feature change (gradual or abrupt) influence object persistence. The object-reviewing paradigm was used, and object persistence was indexed by object-specific preview benefits (OSPBs). In Experiments 1 to 3, OSPBs under three kinds of change dynamics (no change, gradual change, and abrupt change of the object’s size) were measured separately. The results showed that object persistence was preserved in the no-change and abrupt-change conditions but not in the gradual-change condition. The abrupt change may have had no effect on object persistence because the change occurred at the final frame of the motion, after object persistence had already been established during the previous frames. In the following three experiments, object persistence was measured when the object’s shape was unchanged, changed abruptly at the first frame of the motion, or changed gradually during the motion. The results showed no significant OSPBs when the object’s shape changed, whether gradually or abruptly.

Li Li
Humans Use both Form and Motion Information for Heading Perception

It has long been known that humans use the focus of expansion (FOE) in a radial optic flow pattern to perceive their instantaneous direction of self-motion (heading). Here we report that motion-streak-like form information is also used for heading perception. We presented observers with an integrated form and motion display in which the dot pairs of a radial Glass pattern were oriented toward one direction on the screen (the form FOE) while moving in a different direction in depth (the motion FOE). Heading judgments were strongly biased towards the form FOE. We then manipulated the global form strength in the integrated display by randomly orienting certain dot pairs in the radial Glass pattern. As the global form strength in the radial Glass pattern decreased, so did the heading bias towards the form FOE. Lastly, we examined how the local effect of each dot-pair orientation on its perceived motion direction shifted heading estimation. We found that the visual system functioned like a maximum-likelihood integrator in combining the global and local interactions between form and motion signals for heading perception. The findings support the claim that humans make optimal use of both form and motion information for heading perception.

Hirohiko Kaneko
Perceived trajectory of moving object under normal- and hyper-gravity conditions

An object moving vertically with downward acceleration appears to move with uniform velocity. This bias in motion perception could be related to the motion bias in the natural environment produced by gravitational acceleration. In this study, we investigated whether this motion bias differs depending on the magnitude of environmental acceleration. The visual stimulus was an object moving in the frontal plane with various magnitudes of vertical acceleration. In the first experiment, the object moved vertically and the subject reported whether the stimulus was accelerating or decelerating. In the second experiment, the object moved horizontally and the subject reported whether the stimulus was moving upward or downward. We manipulated environmental acceleration along the vertical axis of the body from 1 G up to 2 G using a flight simulator and determined the acceleration magnitudes contained in the motions perceived as having uniform velocity and a straight trajectory. The results showed that the acceleration bias in perceived motion was greater in the hyper-gravity conditions than in the normal-gravity condition. This result suggests that the perceptual baseline for motion perception depends not on the experience of visual motion but on the vestibular signal or on the eye movement signal.

Hong-Jin Sun
Visual Processing of Impending Collision of a Looming Object

When an object moves toward an observer, the expansion of the object's retinal image can be used to process information about impending collision. It has been proposed that an optical variable called tau can inform the observer about the "time-to-collision", the time that will elapse before the observer collides with the object. In the present study we examined whether the human visual system can encode tau by directly manipulating the visual information of the retinal image independently of other cues (e.g., distance information). Observers were presented with a visual display of an object moving on a collision course toward the observer. In a relative time-to-collision judgment task, the physical size of the object was sometimes made to either expand or contract while approaching the observer. These increases or decreases in object size altered the relative rate of retinal image expansion during the approach. It was found that expanding the object (decreasing the tau value) caused an underestimation of time-to-collision, while contracting the object (increasing the tau value) led to an overestimation. The results indicate that observers are able to use the tau strategy to process impending collision.
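The tau variable referred to in the abstract is, under small-angle assumptions, the ratio of the retinal angle an object subtends to its rate of expansion (Lee, 1976). A minimal sketch, with illustrative numbers that are not from the study:

```python
def tau(theta, theta_dot):
    """Time-to-collision estimate from the retinal angle subtended by an
    object (theta) and its rate of expansion (theta_dot):
    tau = theta / theta_dot.  Valid for a constant-size object
    approaching at constant speed, under the small-angle approximation.
    """
    return theta / theta_dot

# Illustrative geometry: object of physical size s at distance d,
# approaching at speed v.  Small angles give theta ~ s/d and
# theta_dot ~ s*v/d**2, so tau ~ d/v, the true time-to-collision.
s, d, v = 0.5, 10.0, 2.0
print(tau(s / d, s * v / d**2))        # ~ 5.0 seconds = d/v

# Inflating the object mid-approach raises theta_dot relative to theta,
# lowering tau -- consistent with the reported underestimation.
print(tau(s / d, 1.5 * s * v / d**2))  # ~ 3.33 < 5.0
```

This illustrates why the size manipulation in the study biases judgments: tau tracks the image expansion rate, not the true distance-over-speed ratio.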

Brian Timney
The Perception of Visual Acceleration

We measured thresholds for the detection of acceleration. Subjects were required to report which of two temporal intervals contained dots that were accelerating. The measurements were made for a range of starting velocities and presentation durations. We also measured speed discrimination under identical conditions. Acceleration thresholds increased systematically as a function of start velocity. Thresholds also varied with the duration of stimulus presentation, with shorter presentations requiring greater acceleration rates. A subsequent analysis addressed the question of whether the perception of visual acceleration is direct, i.e., mediated by "acceleration detectors" in the visual cortex, or whether it is a second-order process in which acceleration is "inferred" by the visual system when it detects that the speed of a target is detectably different from its starting velocity. When the data were replotted as a function of final velocity achieved, rather than acceleration rate, the differences due to presentation duration were eliminated. These results suggest that the perception of acceleration is indirect and contingent on the recognition of speed differences. A direct comparison of the acceleration and speed discrimination data showed that thresholds for velocity differences were very similar in the speed and acceleration conditions.

Alan Johnston
Position-dependent perceptual organisation of an ambiguous global motion pattern

Local motions are often ambiguous due to the aperture problem. Multiple ambiguous motions combine to give unambiguous perceptions of rigid translation (Amano et al., 2009; doi:10.1167/9.3.4) or rotation (Rider and Johnston, 2008, ECVP). For translation, the velocity can be found by a least squares fit to the intersection of the constraints. This solution is unique if the 1D motions are not all parallel. We can use a similar least squares method to find the centre and speed of rotation for a rotating stimulus. Again the 1D motions must be non-parallel but one further criterion is needed to provide a unique solution. We constructed an array of drifting Gabors that fails to meet this criterion and is therefore consistent with an infinite number of rotations. Perception varies with position in the visual field (Rider et al., 2010, VSS). Combining two such arrays gives a stimulus that is consistent with a single global rotation. This array appears coherent when the global rotation centre is close to fixation and separates into two transparent rotations when the global centre is in the periphery. The visual system can compute several solutions to ambiguous motion stimuli but we perceive only one solution at a time.
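The least-squares intersection-of-constraints computation for translation described above can be sketched as follows. Each aperture-limited measurement constrains only the velocity component along the contour normal; the stimulus values below are illustrative, not taken from the study:

```python
import numpy as np

def ioc_velocity(normals, speeds):
    """Least-squares intersection-of-constraints estimate of a global 2D
    translation from 1D (aperture-limited) motion measurements.

    Each local measurement i constrains the velocity v through
    n_i . v = c_i, where n_i is the unit normal of the local contour and
    c_i the measured normal speed.  The least-squares solution is unique
    when the normals are not all parallel.
    """
    N = np.asarray(normals, dtype=float)   # (k, 2) unit normals
    c = np.asarray(speeds, dtype=float)    # (k,) normal speeds
    v, *_ = np.linalg.lstsq(N, c, rcond=None)
    return v

# Illustrative example: a rigid translation v = (3, 1) viewed through
# three apertures with non-parallel contour orientations.
true_v = np.array([3.0, 1.0])
normals = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [np.cos(np.pi / 4), np.sin(np.pi / 4)]])
speeds = normals @ true_v                  # each n_i . v
print(ioc_velocity(normals, speeds))       # recovers [3., 1.]
```

For rotation, as the abstract notes, an analogous least-squares fit recovers the centre and speed, but it needs one further criterion to be unique; the ambiguous Gabor arrays in the study are built precisely to violate that criterion.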

Koichi Shimono
The dual egocenter hypothesis can explain directional discrimination between a visual target and a kinesthetic target

We examined the hypothesis (Shimono & Higashiyama, 2009) that angular errors in visually directed pointing can be due to the difference in the locations of the visual and kinesthetic egocenters. This hypothesis assumes that the direction of the target judged from the visual egocenter between the two eyes is transferred to kinesthetic space, where its direction is judged from the kinesthetic egocenter between the neck and the hand-shoulder assembly used to point. Seven observers judged whether the kinesthetic direction of a target, touched with the left or right hand after it was seen in advance, was to the right or left of the visual target, or whether the visual direction of a target, seen after it was touched with one hand, was to the right or left of the kinesthetic target. Consistent with the hypothesis, the results showed that the kinesthetic direction judged with the right hand shifted to the right of that judged with the left hand, and the visual direction after the right hand was used shifted to the left of that after the left hand was used.

11:00-11:10

Break

11:10-12:00

Poster Session

12:00-13:30 Lunch
13:30-15:30 (a) Symposium "Spatial and Temporal Aspects of Perception and Attention"

Alex O. Holcombe

Successes and failures of perception on the fly

David I. Shore

Objects, space and time: how perceptual grouping affects temporal perception

Sheng He

Hemispheric constraint and eye specificity of spatial attention

Yaffa Yeshurun

Transient attention and perceptual tradeoffs

13:30-15:30 (b)

Talk Session "Form & Surface"
Please tick ▼ for abstract content

Isamu Motoyoshi
Adaptation-induced blindness and spatiotemporal filling-in

We recently showed that adaptation to dynamic stimuli strongly suppresses the conscious detection of a sluggish target (Motoyoshi & Hayakawa, 2008, 2010). Here, we used this adaptation-induced blindness (AIB) to distinguish visual functions achieved via unconscious and conscious neural processes. The adapting stimulus was an annular contour of 3.4 deg diameter flickering at 8 Hz in the periphery, and the test was a filled bright disc of the same diameter presented gradually at the same location. We found that following adaptation to the flickering annulus, the whole disc became invisible even though the central region of the disc was not subject to adaptation. We also found that when a small square patch of constant luminance was embedded in the center of the test disc, the patch appeared darkened even though the test disc was invisible. These results indicate that AIB disrupts the filling-in, but not the spatial contrast, of brightness, suggesting that integrative visual processes are more closely correlated with conscious awareness than analytical ones are.

Branka Spehar
An EEG analysis of visually evoked responses to modally and amodally completed contours

A number of visual phenomena share the property that boundaries and shapes are perceived in locations where no local information is present. Central to many models of these phenomena is the assumption that they are mediated by a common underlying mechanism. Here we investigated visual evoked potentials (VEPs) in response to Kanizsa-type inducers that support modal completion, amodal completion, or no completion in inward-oriented configurations closely matched in local geometry and luminance. The equivalent but outward-oriented configurations served as a control. We found significantly greater negativity for the inward- compared with the outward-oriented inducers in both the 150-210 ms and 240-270 ms periods. However, a significant difference between modal and amodal configurations emerged only in the 240-270 ms period. A time/frequency decomposition revealed significantly higher mean alpha amplitude (8-12 Hz) during the 150-210 ms period for modally completed stimuli compared with their outward-oriented control configurations. However, there was no difference in mean alpha amplitude between amodally completed and control configurations.

Hsin-Hung Li
Spatial Configuration Specific Surround Modulation of Global Form Perception

We studied how the detection of a Glass pattern (target) is modulated by the presence of a surrounding Glass pattern (mask). The stimuli were Glass patterns consisting of random dot pairs (dipoles). The orientation of the dipoles conformed to a designated geometric transform to create a percept of concentric, radial, spiral, or translational global form. In Experiment 1, the target was presented in a central disc surrounded by an annular mask. In Experiment 2, the target was presented in an annulus while the masks were placed either inside or outside the target annulus. We measured the target coherence threshold with and without the masks. For concentric targets, the concentric and spiral masks increased thresholds in both experiments while the radial mask had little effect, and the outer mask produced a greater effect than the inner mask. For radial targets, the threshold was elevated only by the spiral mask in Experiment 1, and in Experiment 2 the form tuning of this masking effect was broader. The threshold elevation for radial targets was similar regardless of the target-mask configuration. No surround modulation was found for translational or spiral targets. Our results suggest a configuration-specific lateral interaction among global form detectors.
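The kind of stimulus described, dipoles whose orientations follow a geometric transform at a given coherence level, can be generated with a short sketch. This is a hypothetical illustration for the concentric case only; the function name and parameter values are assumptions, not the study's.

```python
import numpy as np

def glass_pattern(n_dipoles=200, sep=0.02, coherence=1.0, seed=0):
    """Dot coordinates for a concentric Glass pattern in the unit square."""
    rng = np.random.default_rng(seed)
    xy = rng.uniform(-1.0, 1.0, size=(n_dipoles, 2))
    # Concentric structure: partner dots lie along the local tangent,
    # i.e. perpendicular to the radius through each seed dot.
    theta = np.arctan2(xy[:, 1], xy[:, 0]) + np.pi / 2
    # Signal dipoles follow the transform; noise dipoles take random angles.
    is_signal = rng.random(n_dipoles) < coherence
    theta = np.where(is_signal, theta,
                     rng.uniform(0.0, 2.0 * np.pi, n_dipoles))
    partners = xy + sep * np.column_stack([np.cos(theta), np.sin(theta)])
    return np.vstack([xy, partners])   # all dots of all dipoles

dots = glass_pattern(n_dipoles=100, coherence=0.5)
```

Radial, spiral, or translational forms follow by changing only the angle rule (e.g. dropping the π/2 offset gives a radial pattern), which is what makes coherence thresholds comparable across global forms.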

Shinya Takahashi
Unnoticed explanation of the ‘transparency on contrast’ pattern

The ‘transparency on contrast’ pattern, originally presented by Albert (2006) and examined by Takahashi (2006), was reconsidered. This pattern comprises two middle-gray test fields of equal lightness, lighter/darker inner inducing fields, and even-lighter/even-darker outer inducing fields. The test fields change their apparent lightness in keeping with the traditional lightness contrast phenomenon. What is intriguing is that this lightness illusion is more prominent in this pattern than in a pattern omitting the inner inducing fields, where the test fields are directly surrounded by the even-lighter/even-darker fields. Observation of stereo and moving versions of this pattern shows that the illusion becomes stronger when the test fields look like transparent patches combined with the other transparent surface (the inner inducing fields) than when they look like opaque patches seen through the transparent surface. It was argued that the lightness illusion seen here differs in nature from other lightness illusions usually explained by the ‘discounting’ theory, and should be understood as the result of a perceptual reorganization of the entire pattern caused by the perception of transparency. In addition, chromatic illusions in a similar configuration were introduced.

Naokazu Goda
Representation of surface materials in human visual cortex

We can easily discriminate and identify the material of a surface (wood, metal, fabric, etc.) at a glance. Little is known, however, about the neural bases of this ability. Here we used functional MRI (fMRI) to uncover how information about surface material is represented in human visual cortex. We measured the physical, perceptual, and neural similarities between pairs of nine material categories, each consisting of eight different realistic, synthesized images with controlled 3D shape. Physical similarities for each pair of categories were obtained from statistics of image features (spatial frequency and color histogram); perceptual similarities were obtained from a perceptual material space measured with a semantic differential method; and neural similarities in various cortical regions were obtained from a multivoxel pattern analysis of the fMRI data. We found that neural similarities in different cortical regions correlated differently with the physical and perceptual similarities: the early visual areas mainly reflected physical similarities, whereas neural similarity in the ventral-occipital region, including the fusiform gyrus, reflected perceptual similarity. This finding indicates that the representation of surface materials is transformed along the ventral pathway, from an image-based representation in early visual areas into a perceptual category representation in the ventral-occipital region.
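Comparing physical, perceptual, and neural similarity structures in this way is representational similarity analysis in miniature. A toy sketch, with invented data and names standing in for the study's measured spaces, shows the core computation: build a pairwise-dissimilarity vector for each space, then correlate the vectors.

```python
import numpy as np
from itertools import combinations

def rdm(patterns):
    """Vector of pairwise correlation distances between condition patterns."""
    P = np.asarray(patterns, dtype=float)
    return np.array([1.0 - np.corrcoef(P[i], P[j])[0, 1]
                     for i, j in combinations(range(len(P)), 2)])

rng = np.random.default_rng(1)
perceptual = rng.normal(size=(9, 6))                  # 9 material categories
neural = perceptual + 0.1 * rng.normal(size=(9, 6))   # noisy shared structure
# A high correlation between the two dissimilarity vectors indicates that
# the region's neural code mirrors the perceptual similarity space.
r = np.corrcoef(rdm(perceptual), rdm(neural))[0, 1]
```

Repeating the comparison region by region is what lets one say that early areas track physical similarity while ventral-occipital cortex tracks perceptual similarity.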

Mel Goodale
Extracting shape and material properties from the same surface cues: an fMRI study

We used fMRI to investigate the brain areas that extract different kinds of information (shape vs. material) from an object’s surface cues. Participants attended to differences in the shape (flat/convex), texture (wood/rock), or material properties (soft/hard) of a set of circular visual surfaces. Attending to surface curvature activated the lateral occipital area (LO) whereas attending to texture activated a region of the collateral sulcus (CoS) within the parahippocampal place area (PPA). Attending to material properties activated the same texture-sensitive region in the CoS as well as a dorsal sub-division of the left LO. Our results suggest that the processing of surface texture, which takes place within the scene-sensitive PPA, is a route to accessing stored knowledge about the material properties of objects. In addition, the results suggest that area LO has a complex organization, with neurons tuned not only to the outline shape of objects, but also to their surface curvature, independent of contour, and to aspects of their material properties. We argue that the organization of category-selective areas in the ventral stream may arise in part from specialization within different areas for the processing of the stimulus attributes that best define those categories.

15:30-15:40

Break

15:40-16:30

Poster Session

16:30-16:45  
16:45-17:45 Keynote Speech Dr. Christopher W. Tyler

The Human Representation of Visual Space through the Millennia


TOP

Monday, July 26

8:30-10:00 (a)

Talk Session "Color Vision II"
Please tick ▼ for abstract content

Keizo Shinomori
Selective age-related changes in temporal S-cone ON- and OFF-pathways.

S-cone sensitivity decreases with age, and this influences the temporal response. In this study, age-related changes in an S-cone pathway were quantified for chromatic increments and decrements in terms of their impulse response functions (IRFs).
Thresholds for double pulses, separated by varying interstimulus intervals (ISIs), were measured for chromatically modulated stimuli using a 4AFC method. Isoluminance and the location of the tritan lines were determined individually. The stimuli were presented as a Gaussian patch on an equiluminant white background. Subjects included ten younger (mean 24 years) and nine older (mean 74 years) observers, carefully screened to rule out anterior-segment, retinal, or optic nerve abnormalities. IRFs were calculated from thresholds as a function of ISI using a model that varied the four parameters of an exponentially damped sinewave.
IRFs for S-cone increments in excitation were slower than those for luminance modulation. We now find that S-cone decrement IRFs are even slower. In terms of age-related changes, the time to peak amplitude slows significantly with age for the S-cone OFF IRF but not for the S-cone ON IRF. These results are consistent with detection by separate ON and OFF S-cone pathways and indicate that their neural substrates change differently with age.
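A common four-parameter form for such an IRF is an exponentially damped sinewave; the sketch below uses one plausible parametrization (the study's exact form and parameter values may differ, and the numbers here are illustrative only) and extracts the time-to-peak, the quantity reported to slow with age.

```python
import numpy as np

def irf(t, amp, tau, freq, phase):
    """Exponentially damped sinewave impulse response (t in seconds)."""
    t = np.asarray(t, dtype=float)
    return amp * np.exp(-t / tau) * np.sin(2.0 * np.pi * freq * t + phase)

# Evaluate on a fine grid and locate the peak numerically.
t = np.linspace(0.0, 0.5, 5001)
y = irf(t, amp=1.0, tau=0.08, freq=4.0, phase=0.0)
time_to_peak = t[np.argmax(y)]   # ~0.044 s for these illustrative values
```

In a fit to double-pulse thresholds, the four parameters (amplitude, damping constant, frequency, phase) are adjusted so the model's summed responses to the two pulses predict the measured thresholds across ISIs.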

Misha Vorobyev
Chromatic and achromatic vision in primates, birds and bees

Perceptual separation of chromatic aspects of colour (hue and chroma) from achromatic ones (lightness) is a fundamental property of human colour vision. The separation of chromatic from achromatic aspects of colour can be a consequence of constraints imposed by neural wiring in the retina of primates. Alternatively, this separation might be generally useful for the detection and identification of objects under patchy illumination. In humans, stimuli subtending large visual angles are discriminated on the basis of their chromatic properties – large variations in the intensity of light stimuli are ignored. In contrast, high spatial resolution vision is mediated by a luminance channel that is sensitive to changes in stimulus intensity but not to variation in the chromatic aspects of colour. Here I show that bees and birds also have chromatic and luminance mechanisms that are functionally similar to ours. Because colour vision in primates, birds and bees evolved independently, I conclude that chromatic vision probably evolved independently in different animals to achieve colour constancy in conditions of patchy illumination.

Yoko Mizokami
Colorfulness-adaptation influenced by low-level and high-level factors in natural images

It has been shown that perceived colorfulness changes with adaptation to chromatic contrast modulation and to surrounding chromatic variance. It is not clear how colorfulness perception changes with adaptation to color variations in actual environments or natural images, or what levels of visual mechanisms contribute to the perception.
We examined whether the colorfulness perception of an image was influenced by adaptation to various natural images. To compare the effects of low- and high-level factors, three types of image sets were used for adaptation: natural images of natural scenes or objects, jumbled images consisting of a collage of color patches cut from the original images, and phase-scrambled images that preserved low-level factors of the originals, such as color distribution and spatial frequency spectrum. Observers adapted to several images with different levels of saturation and judged the colorfulness impression of a test image after adaptation.
The results show that colorfulness perception is changed by adaptation to the level of image saturation. The effect is stronger with adaptation to natural images than to jumbled or phase-scrambled images, implying that the colorfulness-adaptation mechanism works better with natural scenes that include high-level factors such as the presence of recognizable objects and naturalness.

Hidehiko Komatsu
Neural selectivity for the luminance gradients in the posterior inferior temporal cortex of the monkey

Shading generated by a luminance gradient is an important visual cue for perceiving the three-dimensional structure of objects or their surface qualities. However, little is known about where and how shading is represented in the brain. We recorded single-neuron activity from the posterior inferior temporal cortex (PIT) of monkeys performing a visual fixation task and examined responses to linear luminance gradients. We found that neurons selective for the direction of luminance gradients were concentrated in a small region of PIT anterior and dorsal to the PIT color area (Yasuda et al., Cerebral Cortex, 2009). They responded strongly to stimuli with a luminance gradient in one direction and responded little or not at all to the opposite direction. The sharpness of selectivity varied from cell to cell. Many showed position-invariant direction selectivity when the stimulus position was changed within the RF, indicating that the selectivity cannot be explained by heterogeneity of luminance preference within the RF. These results, together with a recent human fMRI study (Georgieva et al., Cerebral Cortex, 2008), suggest that PIT cortex plays an important role in the coding of shading stimuli.

Keiji Uchikawa
Effects of luminance balance of surfaces on estimating the illuminant color

The human visual system can discount the illuminant color from the light reflected from surfaces. This ability of human color vision is known as color constancy. The mean chromaticity across all surfaces in an image can be a strong cue for estimating the illuminant color, but this cue is effective only in limited cases. The luminance balance, as well as the chromaticities, of surfaces varies with the illuminant color. Here we investigated how effectively the luminance balance of surfaces works for color constancy. The stimulus consisted of 61 hexagons presented on a CRT. The center hexagon served as a test stimulus. The 60 surrounding hexagons were bright and dim red, green, and blue colors. We used simulated 3000, 6500, and 20000 K blackbody radiations as test illuminants. The observer adjusted the chromaticity of the test stimulus so that it appeared gray or white. In a condition where the chromaticities of the surrounding colors were held invariant and only their luminance balance varied with the test illuminant, the observer's achromatic point shifted consistently with the illuminant chromaticity. This result indicates that luminance balance can be an effective cue for estimating the illuminant color.

Manana Khomeriki
Color naming and color visual searching in the Georgian-speaking

When verbally listing colors, Georgian-speaking individuals name approximately forty colors, starting with the basic colors, primarily red. When viewing a collage composed of familiar colors, four-year-old children initially name the basic colors, in most cases starting with red, without preferring any specific strategy. However, school-aged children and adults name colors in a sequence that coincides with eye movements during viewing: from left to right and top to bottom. The influence of color saliency is overridden by a behavioural strategy that is not specific to color.
It can be supposed that when children acquire color names, the names are stored in a certain succession, which is revealed during verbal reproduction first of all in the enumeration of basic colors. This succession should not change significantly with age, as the preference for basic colors is evident when colors are named orally. During visual search, school children and adults give preference to a color according to its position. Acquiring reading and writing skills has a certain influence on the color hierarchy held in mind and causes some shifts of previously prioritized colors.

8:30-10:00 (b)

Talk Session "Reading and Learning"
Please tick ▼ for abstract content

Sze-Man Lam
Bilinguals have different hemispheric lateralization in visual processing from monolinguals

Previous bilingual studies have shown reduced hemispheric asymmetry in non-verbal tasks such as face perception in alphabetic bilinguals compared with alphabetic monolinguals. Here we examined whether this effect can also be observed in bilinguals of a logographic and an alphabetic language, i.e., Chinese-English bilinguals. Since logographic and alphabetic languages differ dramatically in their orthography and in how orthographic components map to pronunciations and meanings, Chinese-English bilinguals may have different visual experience from bilinguals and monolinguals of alphabetic languages. We compared the performance of English monolinguals, Chinese-English bilinguals, and alphabetic-English bilinguals in three tachistoscopic recognition tasks: Chinese character sequential matching, English word sequential matching, and intact-altered face judgment. In discrimination sensitivity (d-prime), both Chinese-English and alphabetic-English bilinguals exhibited a stronger right visual field/left hemisphere advantage in English word matching than English monolinguals; in addition, a tendency toward reduced right-hemisphere lateralization in face judgment was observed in Chinese-English bilinguals, consistent with previous findings. Our results suggest that increased experience with, and exposure to, more than one language may influence hemispheric lateralization in visual processing in general.

Sheila Crewther
TMS stimulation of V5 interferes with single word reading

Word reading is a skill commonly associated with parvocellular and ventral pathway processing rather than magnocellular or dorsal stream processing. However, there is considerable evidence for a magnocellular/dorsal impairment in developmental dyslexia. We investigated the necessity of V1/V2 and dorsal area V5/MT+ for word recognition using transcranial magnetic stimulation (TMS). Twelve healthy young adults viewed brief presentations of single words followed by a white-noise mask. On each trial a paired pulse of TMS was delivered to either V1 or V5 at randomly selected onset asynchronies between 0 and 225 ms post word onset. TMS over V1/V2 at 4-36 ms post word onset disrupted accurate word discrimination, with disruption also evident at approximately 99 ms. TMS over V5/MT+ also disrupted accuracy following stimulation at 4 ms and at 130 ms post word onset. Thus a role for V5/MT+ in accurate single-word identification is apparent, suggesting that rapid parietal attention mechanisms may be required prior to word-specific processing in primary and temporal cortical regions.

Hsuan-Chih Chen
Effects of different RSVP displays on semantic integration

Using a rapid serial visual presentation (RSVP) procedure and event-related brain potential (ERP) recording, this study investigated the possible effects of display mode and presentation rate on semantic integration during Chinese sentence reading. In two experiments, participants read for comprehension 160 individually presented sentences containing a single-character target word either congruent or incongruent with the sentential context. Experiment 1 manipulated the display mode (character-by-character or word-by-word, with a constant rate of 300 ms per display and a 200 ms interval between consecutive displays), while Experiment 2 manipulated the display rate (250 ms or 400 ms per display with a 200 ms interval) using word-by-word presentation. Both experiments consistently demonstrated that the only significant difference across display conditions lay in the amplitude of the N400 component. More importantly, regardless of the display condition, the N400 elicited by the incongruent target word was larger than that elicited by the congruent one. Taken together, these results indicate that the presence or absence of explicit word markers in written Chinese does not affect high-level on-line semantic integration in reading Chinese.

Chien-Hui Kao
Inversion effect in visual word forms: the role of spatial configurations and character components

We investigated the configural processing of orthographic stimuli by measuring the inversion effect for five types of stimuli. The real characters were composites of two components arranged in a left-right configuration. The non-characters had the two components swapped in position. The lexical components were components of a composite that are also independent characters, while the non-lexical components are not. The oracle-bone characters have the same structure as modern Chinese characters but contain no familiar components. The inverted stimuli were upside-down versions of their upright counterparts. Two characters of the same type were presented to the left and right of fixation, and observers judged whether the two characters presented on a trial were the same.
Matching accuracy for upright real characters and lexical components was greater than for their inverted versions. Such an inversion effect was not observed for the non-characters, non-lexical components, or oracle-bone characters. Thus configural processing, as manifested in the inversion effect, occurs only for well-practiced characters. This result is consistent with the template hypothesis of visual word form processing, in which a word is recognized through a holistic process rather than an analysis of its components.

Chia-Huei Tseng
The suppression component of attentional selection in long-term visual search learning

Attention can alter humans' long-term motion perception, mainly by enhancing the attended feature (Tseng et al., 2010). This finding conflicts with ample evidence supporting the co-existence of facilitatory and inhibitory components of attentional selection. We revisited this issue with an extreme visual search task designed to maximize the previously non-significant suppression component.
We trained each observer in two phases of visual search that required identifying a target letter on a target color background. In phase 1 there was one target color and three distractor colors, while in phase 2 there was one distractor color and three target colors. The most economical strategy for the phase 2 search would be to inhibit the single distractor color. Observers' sensitivities to the target color in phase 1 increased after 7-10 hours of search, measured with the same isoluminant ambiguous motion display as in Tseng et al. (2004, 2010). No significant desensitization to distractor colors was found, consistent with previous results. In phase 2, strong inhibition of the relative salience of the distractor color was found. This suggests that facilitation is not always the primary contributor in a visual search task, and that people flexibly deploy facilitation or suppression depending on task demands.

Jun-Yun Zhang
Reweighting rule learning explains visual perceptual learning and its specificity and generalization

Visual perceptual learning models are constrained by orientation and location specificities, leading to the assumption that learning reflects either changes in V1 neuronal tuning or reweighting of specific V1 inputs in the visual cortex or higher areas. Here we used a “training plus exposure” procedure, in which observers were attentively trained at one orientation and inattentively exposed to the transfer orientation, to demonstrate complete transfer of learning across orientations in three tasks known to be orientation specific, indicating that perceptual learning involves more general learning. We also demonstrate that precise learning specificity, once regarded as the strongest evidence for V1 involvement, may result from “over-attention”. Learning becomes more transferable to nearby orientations with reduced attention during training. We thus propose a new reweighting rule-learning model to explain perceptual learning and its specificity and generalization. In this model, a decision unit in high-level brain areas learns the rules for reweighting V1 inputs. However, the learned reweighting rules can be applied to a new orientation or location only when functional connections between the decision unit and the new V1 inputs are established through repeated orientation exposure or location training.

10:00-10:15

Break

10:15-12:15 (a) Symposium "Fading, Perceptual filling-in, and Motion-induced Blindness: Phenomenology, Psychophysics, and Neurophysiology "

Lothar Spillmann

Fading and Filling-in and the Perception of Extended Surfaces

Hidehiko Komatsu

Bridging Gaps at V1: Neural Responses for Filling-in and Completion at the Blind Spot

Li-Chuan Hsu

Perceptual Fading as Revealed by Perceptual Filling-in and
Motion-Induced Blindness

Peter de Weerd

fMRI Evidence for a Correlate of Surface Brightness in Early Visual Areas

10:15-12:15 (b)

Talk Session "Eye movement & Gaze II"
Please tick ▼ for abstract content

Qian Qian
Intertrial Inhibition Effect of Gaze Cueing

An uninformative cue from a centrally presented face gazing toward one location can trigger attention shifts in observers toward the gazed-at location, facilitating the detection of simple targets. In the literature, it has been widely accepted that perceiving another person's gaze shifts the observer's attention automatically and reflexively. In the present study, we compared the magnitude of gaze-cueing effects preceded by a cooperative or a deceptive gaze. The results showed that the gaze-cueing effect induced by the current gaze was inhibited when the observer had been deceived by a previous gaze during target detection. This intertrial inhibition effect was found for both schematic and real faces as central cues. Arrow cues can also elicit the intertrial inhibition effect, but with a subtle difference from gaze cues. These results suggest that gaze cueing is not purely automatic or reflexive but is influenced by cueing states in the immediate past. This intertrial effect is probably based on a general process for any directional cue, afforded by implicit visual memory mechanisms for previous views in the human brain.

Choongkil Lee
Temporal impulse response of V1 for saccadic decision

From spike sequences of single V1 neurons recorded from macaque monkeys trained to make saccadic eye movements to a visual target, we determined the time course of the signal related to saccadic decision. The firing rate during sequential 10 ms epochs following target onset was correlated with saccadic response time. The correlation between firing rate and response time changed dynamically until saccade onset: significant correlation emerged at around 45 ms, peaked at 65 ms after target onset, and persisted while decaying until saccade onset. The time of peak correlation was roughly the same as the mean time of the first spike of the visual response. This time course of correlation is reminiscent of the impulse response of human vision to luminance change, supporting the hypothesis that the 'single-shot' output of an early temporal filter providing signals for saccadic decision resides within V1 (Ludwig et al., 2005). The results are also consistent with the finding that spike activity in MT within tens of milliseconds can reliably convey information about behavioral choice for a rapid perceptual judgment (Ghose and Harrison, 2009).

Yu-Li Liu
Gaze cueing with multiple faces: The time course of facilitation and inhibition

Recent studies have demonstrated that orienting of attention in response to nonpredictive gaze cues arises rapidly and automatically, and that inhibition of return (IOR) for gaze cueing emerges only at long cue-target intervals. Here, we investigated whether the time course of gaze cueing is influenced by the number of faces on display. In Experiment 1, we used a single face and replicated previous findings, namely facilitation at a 200-ms SOA, a null effect at a 1200-ms SOA, and inhibition at a 2400-ms SOA. In Experiment 2, we manipulated the number of faces and found identical time courses of facilitation and inhibition for the one- and two-face conditions. In Experiment 3, we compared three faces with one and found that IOR at the 2400-ms SOA disappeared in the three-face condition. Finally, in Experiment 4, we compared all three face conditions and found that IOR at the 2400-ms SOA disappeared in both the two- and three-face conditions. Furthermore, a facilitative, rather than null, cueing effect was found at the 1200-ms SOA for all three face conditions. Taken together, we conclude that gaze cueing with multiple faces evokes a time course of facilitation and inhibition different from that evoked by a single face. Implications for how multiple faces may affect gaze cueing are discussed.

Doris Braun
Localization of speed perturbations of context stimuli during fixation and smooth pursuit eye movements

We investigated whether smooth pursuit eye movements improve the ability to localize a short (500 ms) speed perturbation affecting one of two moving peripheral context stimuli, vertical sine-wave gratings placed above and below a pursuit or central fixation target. Psychophysical thresholds were measured for localization, discrimination, and detection during fixation and during smooth pursuit at the same or different speeds in the same direction. We also tested the effects of stimulus size, feedback, depth, and the speed difference between the two context stimuli. While detection and discrimination thresholds for speed perturbations were in the normal range (10%-15% Weber fraction), localization thresholds were dramatically increased (30%-50%). These high localization thresholds were observed particularly when retinal motion was due only or mainly to movement of the context stimuli, as during fixation or slower pursuit. When the retinal motion was mainly due to pursuit, localization thresholds at equal retinal velocities were lower. We conclude that localizing speed perturbations of peripheral objects is a difficult task for the visual system, probably due to the dominance of relative motion signals. Smooth pursuit eye movements improve the localization of speed perturbations; feedback and the reduction of relative-motion cues also have positive effects.

Masahiko Terao
Contrast-dependent change of the effect of pursuit eye movements on the perceived direction of retinally ambiguous motion

When an apparent-motion stimulus whose direction is ambiguous in the retinal image is presented during smooth pursuit eye movements, the dominant perceived direction is opposite to the direction of pursuit (Terao et al, 2009, SfN). This finding suggests that pursuit enhances motion signals in the anti-pursuit direction relative to those in the pro-pursuit direction. In contrast, it is known that contrast sensitivity for luminance gratings is reduced in the anti-pursuit direction (Schütz et al, 2007, Journal of Vision). To resolve the apparent inconsistency between these findings, we investigated how stimulus contrast affects motion perception during pursuit. We presented a retinally counter-phase sinusoidal grating on a gray background while the observer's eyes tracked a marker moving smoothly below the grating. The counter-phase grating was a linear sum of two gratings that had the same spatiotemporal frequency but drifted in opposite directions. Our preliminary results suggest that the dominant perceived direction is the anti-pursuit direction when grating contrast is high, whereas it can shift to the pursuit direction when grating contrast is substantially reduced. These results indicate that the effect of pursuit on perceived motion direction depends on the level of stimulus contrast.

David Crewther
Adaptation affects binocular rivalry dynamics at the endpoint of ventral processing

Blake and Sobel proposed that low-level neural adaptation is key to rivalry instability, on the basis that orbital movement slows binocular rivalry between simple patterns by stimulating fresh receptive fields and thereby reducing the build-up of neural adaptation. We extended this theory to rivalry between complex stimuli - faces and houses - for which ventral processing terminates in the inferotemporal cortex (IT), where the receptive fields of object-selective neurons are much larger in visual extent than in V1/V2. We predicted that stimulus movement would increase the perceptual stability of rivalry between grating stimuli, but not between face and house stimuli, because changing stimulus position does not prevent adaptation in IT. A secondary experiment revealed that stimulus orbit also destabilizes rivalry by increasing the saccade rate for both simple and complex stimulus pairs. We argue that stimulus orbit can both stabilize rivalry by delaying neural adaptation and destabilize rivalry by evoking saccades. As orbit reduces adaptation within retinotopic but not complex object-selective neural representations, we conclude that adaptation controls rivalry dynamics at the endpoint of ventral processing for the inducer stimuli.

12:15-13:30

Lunch

13:30-15:30 (a) Symposium "The Other-race Effect in Face Perception"

William Hayward

Perceptual and social processes interact to cause the other-race effect

Jim Tanaka

Reversing the other-race effect: The cognitive, neural and social plasticity of face recognition

Siegfried Ludwig Sporer

Becoming a face expert: Inversion and the own-ethnicity effect

Roberto Caldara

Tracking early sensitivity to race on the human visual cortex

13:30-15:30 (b)

Talk Session "Neural Mechanisms"

Dave Saint-Amour
Developmental follow-up of the effects of PCB exposure on visual processing in Inuit children from Arctic Quebec

Alterations of visual function in the developing human brain have been linked to heavy-metal exposure, but very little is known about persistent organic pollutants such as polychlorinated biphenyls (PCBs). In a cohort of preschool Inuit children from Arctic Quebec (Canada), we previously found alterations of visual evoked potentials (VEPs) in association with chronic PCB exposure. This follow-up study aimed to assess the impact of PCBs on visual processing at school age. Blood concentrations of several toxins, including PCBs, were measured at birth from cord blood samples and at the time of testing. VEPs were obtained at different contrast levels. Spatial visual attention was also assessed using a Posner cue-target paradigm. The relationships between PCBs and outcomes were assessed by multivariate regression analyses. No effects were observed for VEPs. In the Posner task, a high level of PCBs during postnatal development was significantly related to longer reaction times. In addition, prenatal PCB exposure was associated with a greater number of missed targets and false alarms. This prenatal effect remained significant after adjustment for postnatal exposure. This study suggests that chronic PCB exposure has transitory effects on early visual processing but impairs vigilance and impulse control at school age.

Yuki Kamatani
Spatio-temporal resolution of steady-state visual evoked potentials for a brain-computer interface

We aimed to investigate the spatio-temporal resolution of steady-state visual evoked potentials (SSVEPs) for application to the development of a brain-computer interface (BCI), particularly a BCI system for steering a car. We measured SSVEPs (O1 and O2, with Fz reference and Fpz ground) while presenting two flickering checker patterns, located 30 deg apart to the left and right, with different reversal frequencies (12, 15, 20, 30 Hz; 15-s duration). Three time-windows were used for time-frequency analysis (250, 500, 1000 ms, shifted every 62.5 ms). When observers looked at one of 7 positions between the two checkers, their SSVEPs changed quantitatively depending on the distance between the fixation point and the left/right checkers. With a long time-window, the SSVEP's fixation dependency was clear and a 15-20 Hz pattern combination was best. With a short time-window, the fixation dependency generally decreased, but was better with higher-frequency combinations such as a 20-30 Hz pattern. We then applied these findings to a BCI driving simulator, in which a driver could steer a car by looking at a heading direction between two checkers flickering at 15 and 20 Hz. Drivers' steering was stable with a longer time-window, but sometimes delayed. With a shorter time-window, the steering was unstable, but quick and sensitive.
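
The sliding-window time-frequency analysis described above can be sketched roughly as follows (an illustrative reconstruction in Python, not the authors' code; the function name, Hann windowing and all parameters are assumptions):

```python
import numpy as np

def ssvep_power(eeg, fs, freqs, win_ms=500.0, step_ms=62.5):
    """Power at each tagged flicker frequency in sliding windows.

    eeg   : 1-D array, one EEG channel (e.g. O1 or O2)
    fs    : sampling rate in Hz
    freqs : reversal frequencies to track, e.g. [15.0, 20.0]
    Returns an array of shape (n_windows, len(freqs)).
    """
    win = int(round(fs * win_ms / 1000.0))
    step = int(round(fs * step_ms / 1000.0))
    taper = np.hanning(win)                      # reduce spectral leakage
    bins = np.fft.rfftfreq(win, d=1.0 / fs)      # frequency of each FFT bin
    idx = [int(np.argmin(np.abs(bins - f))) for f in freqs]
    rows = []
    for start in range(0, len(eeg) - win + 1, step):
        spec = np.abs(np.fft.rfft(eeg[start:start + win] * taper)) ** 2
        rows.append(spec[idx])
    return np.array(rows)
```

For BCI steering, the relative power at the left- and right-checker frequencies would then be compared window by window; a longer window sharpens the frequency estimate at the cost of response delay, matching the stability/latency trade-off reported above.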

Chun-I Yeh
The structure of cortical receptive fields varies with different stimulus ensembles

We found that simple cells in layer 2/3 of macaque primary visual cortex (V1), which provide visual information to extrastriate cortical areas, have qualitatively different receptive-field maps when measured with sparse-noise (Jones and Palmer, 1987) and Hartley-subspace (Ringach et al, 1997) stimuli. Furthermore, the layer-2/3 population also shows a black-over-white preference in response to sparse noise (less evident with Hartley stimuli). Because sparse noise and Hartley stimuli differ in many ways (e.g. sparse vs. dense), it remains unclear which stimulus parameters contribute to the discrepancy between the two maps. To address this question, we introduced a third stimulus ensemble, spatio-temporal white noise (m-sequence; Reid et al, 1997), to measure receptive-field maps in sufentanil-anesthetized monkey V1. The receptive-field similarity (RFS) between the two dense noises (white noise and Hartley) is somewhat greater than that between Hartley and sparse noise, but RFS is still significantly smaller in layer 2/3 than in layer 4. Moreover, the black preference of layer-2/3 neurons is more evident in white-noise maps than in either Hartley or sparse-noise maps. These results challenge the idea that the receptive field is a fixed property of V1 neurons.
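
As a rough illustration, a receptive-field similarity of this kind can be computed as a Pearson correlation over the pixels of two maps of equal shape (a minimal sketch; the exact RFS metric used by the authors may differ):

```python
import numpy as np

def rf_similarity(map_a, map_b):
    """Pearson correlation between two receptive-field maps.

    Maps measured with different stimulus ensembles (e.g. sparse noise
    vs. Hartley) for the same neuron are flattened, mean-subtracted,
    and correlated; 1 means identical structure, 0 means unrelated.
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```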

Yueh-Peng Chen
Functional circuitry of key dimensions in local macaque AIT ensemble activity

The anterior inferior temporal (AIT) cortex is the last purely visual processing stage at the end of the macaque ventral visual stream and is thought to underlie invariant object recognition via a high-dimensional code for complex shapes. The number and identity of these key dimensions is unknown and has been hypothesized to range from 36 'geons' to an infinite number of shape dimensions. We recorded from 64-site multielectrode arrays spanning 1.4x1.4 mm (distance x depth) in AIT of anaesthetized Macaca cyclopis monkeys and found ensemble activity patterns by applying principal component analysis. We previously showed that the patterns allow generalization/extrapolation across independently constructed stimulus sets and are differentially driven by stimuli with opposite features. Here, we applied cross-correlation analysis with covariation correction to map the functional circuitry underlying the ensemble patterns. The patterns of excitatory and suppressive functional connections suggest a precise functional network underlying the ensemble patterns. The consistency between ensemble activity and functional networks strengthens the case for a topographically stable map of key dimensions capable of supporting invariant object recognition and generalization to novel objects.

Takayuki Sato
Hierarchically Organized ‘Functional Structures’ in Monkey Inferior Temporal Cortex

Previously, we showed that neurons in a columnar region share a common property in object selectivity, and that this common property differs from that of the adjacent columnar region, supporting columnar organization in IT cortex (Sato et al., 2009). In the present study, we address the question of whether functional structures larger than columns exist in IT cortex.
We recorded multi-unit activities (MUAs) densely over a wide range of exposed IT. At each site, we recorded fifteen MUAs from the surface down to the white matter. Object responses of these MUAs were averaged to extract the common response property of the site (avgMUA). As in the previous study, analysis of the common response property showed that it differed from site to site. We then categorized the object response properties of avgMUAs by hierarchical clustering based on the similarity in object selectivity of each avgMUA. We found multiple functional domains, each covering multiple sites. Each functional domain was specific to a particular stimulus category, such as faces or animal bodies. Together with the previous study, these findings suggest a hierarchical organization of functional structures, from single cells through columns to functional domains.

Manabu Tanifuji
Cortical columnar organization reconsidered in inferotemporal cortex

A previous study suggested that inferotemporal (IT) cortex is organized into columnar structures (Fujita et al., 1992). However, we frequently observed that nearby cells exhibit seemingly different object selectivity; thus, columnar organization in IT cortex remains controversial. One critical point is that the previous study used the simplest visual feature critical for each cell and then examined columnar organization for that feature. In this study, we used object stimuli to reexamine columnar organization.
We identified activity spots revealed by optical imaging and recorded unit activities from these spots. To quantify the similarity among cells, we calculated correlation coefficients of responses to 100 object stimuli. Pairs of nearby single units did not show significant correlation in object selectivity, but pairs of MUAs did. This result indicates that averaging single-cell activity in MUAs reduced variability in object selectivity and disclosed a common property among the cells. We also found that this common property is similar for cells in the same columnar region, but differs for pairs of cells from nearby columnar regions. These results reconfirm columnar organization in IT cortex. Furthermore, detailed analysis suggests that the columns do not cover the entire IT cortex.

Go Uchida
Visual information represented in different levels of functional hierarchy in monkey IT cortex revealed by machine learning

Inferotemporal (IT) cortex is essential for visual object recognition. In a previous study (Sato et al., Cereb. Cortex, 2009), we showed that neurons with similar object selectivity are locally clustered in IT cortex, forming functional columns. Furthermore, we recently found a larger functional structure (the functional domain) that covers multiple columns, suggesting a hierarchy of functional structure in IT. What kind of visual information is represented at each level of this hierarchy? To address this question, we identified local clusters that are essential for the categorization and identification of object images (faces) using a regularized linear classifier. We first trained the classifier to categorize faces among various objects. Although local clusters essential for this classification appeared across different functional domains, those with positive weight parameters appeared only in a domain sensitive to faces. In contrast, local clusters essential for identifying a particular face among other faces showed no correlation with the domain structure. These results suggest that functional domains represent information about object category, such as faces, whereas local clusters represent generic visual features useful for identifying an object.
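
The abstract does not specify the classifier, but the idea of reading category information out of cluster responses with an L2-regularized linear model, and then inspecting the weights assigned to each cluster, can be sketched as follows (a hypothetical example; the function name, the ridge-regression form and all parameters are assumptions):

```python
import numpy as np

def ridge_classifier(X, y, lam=1.0):
    """Closed-form L2-regularized linear classifier.

    X : (n_trials, n_clusters) matrix of cluster responses
    y : labels coded +1 (e.g. face) / -1 (non-face)
    Returns the weight vector w; sign(X @ w) gives the predicted
    class, and the sign and magnitude of each entry of w indicate
    how the corresponding local cluster contributes to the decision.
    """
    n, d = X.shape
    # Minimize ||X w - y||^2 + lam ||w||^2 via the normal equations.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Inspecting where the large positive weights fall relative to the recording sites is the kind of analysis that could link classifier weights to functional domains, as described above.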


©2010 Asia-Pacific Conference on Vision