5. Object experiment
5.5. Discussion 2
5.5.2. Future research
Due to time restrictions it was not possible to continue the research to the point where cross-modal association itself was tested, but the results so far show that kea pay attention not only to visual but also to auditory cues when confronted with a cross-modal experiment. This, however, can only be the starting point for investigating CMP in this species.
The obvious next step will be to separate the sound source from the visual stimulus again, once the subjects have learnt the audio-visual association in the training conducted so far. The experimental setup can remain quite similar to the one used for this thesis, but the sound should then come not from the S+ but from a ‘neutral’ location, ideally from the middle between the stimuli.
If the birds have merely learnt to move towards the sound source, they should approach the loudspeaker in the middle; if they have formed associations between the sounds and the objects, they should choose the S+.
It would also be possible to return to the touchscreen setup and use the objects and sounds introduced in the ‘real objects task’. Discrimination should be easier than with the photographs used in the original touchscreen experiment, possibly overcoming the obstacles considered in ‘Discussion 1’. Nevertheless, I would suggest continuing to work with real objects, as this experimental setup seems promising. We also know little about the kea’s vision, but we can safely say that using pictures on a screen will not make the task easier (O’Hara et al. 2015).
Once CMP has successfully been demonstrated in kea, further research can move in several directions. One possibility is to investigate cross-modal individual recognition of con- or heterospecifics. Conspecific recognition is widespread in the animal kingdom (Leopold and Rhodes 2010), also when tested cross-modally (Kojima et al. 2003; Bovet and Deputte 2009; Proops et al. 2009; Adachi and Hampton 2011; Kondo et al. 2012; Kulahci et al. 2014). Some studies also suggest cross-modal recognition of heterospecifics in several species (Adachi and Fujita 2007; Adachi et al. 2007, 2009; Sliwa et al. 2011; Lampe and Andre 2012a; Proops and McComb 2012). So far this has only been tested with recognition of humans, and the precondition for it seems to be extensive contact with individuals of the other species. The adaptive value of this ability might lie in the recognition of predators (Huber et al. 2013), but we still know very little about the phenomenon. One also needs to be careful with terminology in this area of research: in tasks testing cross-modal recognition, subjects might succeed simply because they regard one stimulus as more familiar than another (Tibbetts and Dale 2007; Huber et al. 2013). Should kea succeed at a task investigating cross-modal recognition of humans, the question arises which components this recognition is based on. It would be interesting to test different visual displays, such as the whole body, only the face, the body without the face, or perhaps even just clothing.
Although the research question posed could not be answered conclusively, I still regard the experiments conducted for this thesis as a success. Possible experimental setups for investigating CMP in kea have been proposed and applied, revealing the advantages and potential obstacles of the different approaches. Future research can build on these results, and I hope the studies conducted here will stimulate further work in this fascinating field of research.
6. References
Adachi I, Fujita K (2007) Cross-modal representation of human caretakers in squirrel monkeys. Behav Processes 74:27–32. doi: 10.1016/j.beproc.2006.09.004
Adachi I, Hampton RR (2011) Rhesus monkeys see who they hear: Spontaneous cross-modal memory for familiar conspecifics. PLoS One. doi: 10.1371/journal.pone.0023345
Adachi I, Kuwahata H, Fujita K (2007) Dogs recall their owner’s face upon hearing the owner's voice. Anim Cogn 10:17–21. doi: 10.1007/s10071-006-0025-8
Adachi I, Kuwahata H, Fujita K, et al (2009) Plasticity of ability to form cross-modal
representations in infant Japanese macaques. Dev Sci 12:446–452. doi: 10.1111/j.1467-7687.2008.00780.x
Auersperg AMI, Gajdon GK, Huber L (2009) Kea (Nestor notabilis) consider spatial relationships between objects in the support problem. Biol Lett 5:455–458. doi:
10.1098/rsbl.2009.0114
Auersperg AMI, Gajdon GK, Huber L (2010) Kea, Nestor notabilis, produce dynamic
relationships between objects in a second-order tool use task. Anim Behav 80:783–789.
doi: 10.1016/j.anbehav.2010.08.007
Auersperg AMI, Huber L, Gajdon GK (2011) Navigating a tool end in a specific direction: stick-tool use in kea (Nestor notabilis). Biol Lett 7:825–8. doi: 10.1098/rsbl.2011.0388
Auersperg AMI, Oswald N, Domanegg M, et al (2014) Unrewarded Object Combinations in Captive Parrots. Anim Behav Cogn 1:470–488. doi: 10.12966/abc.11.05.2014
Bahrick LE, Lickliter R, Flom R (2004) Intersensory Redundancy Guides the Development of Selective Attention, Perception, and Cognition in Infancy. Curr Dir Psychol Sci 13:99–102. doi: 10.1111/j.0963-7214.2004.00283.x
Bennett ATD, Cuthill IC (1994) Ultraviolet Vision in Birds: What Is Its Function? Vision Res 34:1471–1478. doi: 10.1016/0042-6989(94)90149-X
Bischof H-J, Engelage J (1985) Flash evoked responses in a song control nucleus in the zebra finch (Taeniopygia guttata castanotis). Brain Res 326:370–374. doi: 10.1016/0006-8993(85)90048-4
Boesch C (2007) What makes us human (Homo sapiens)? The challenge of cognitive cross-species comparison. J Comp Psychol 121:227–240. doi: 10.1037/0735-7036.121.3.227
Bovet D, Deputte BL (2009) Matching vocalizations to faces of familiar conspecifics in grey-cheeked mangabeys (Lophocebus albigena). Folia Primatol 80:220–232. doi: 10.1159/000235688
Bovet D, Vauclair J (2000) Picture recognition in animals and humans. Behav Brain Res 109:143–165. doi: 10.1016/S0166-4328(00)00146-7
Calvert GA, Brammer MJ, Iversen SD (1998) Crossmodal identification. Trends Cogn Sci 2:247–253. doi: 10.1016/S1364-6613(98)01189-9
Cowey A, Weiskrantz L (1975) Demonstration of cross-modal matching in rhesus monkeys, Macaca mulatta. Neuropsychologia 13:117–120. doi: 10.1016/0028-3932(75)90057-3
Cuthill IC, Partridge JC, Bennett ATD, et al (2000) Ultraviolet Vision in Birds. Adv Study Behav 29:159–214.
D’Eath RB (1998) Can video images imitate real stimuli in animal behaviour experiments?
Biol Rev 73:267–292. doi: 10.1111/j.1469-185X.1998.tb00031.x
Davenport RK (1977) Cross-Modal Perception: A Basis for Language? In: Language Learning in a Chimpanzee: The Lana Project. pp 73–83
Davenport RK (1976) Cross-modal perception in apes. Ann N Y Acad Sci 280:143–149.
Davenport RK, Rogers CM, Russell IS (1975) Cross-modal perception in apes: altered visual cues and delay. Neuropsychologia 13:229–235.
Davenport RK, Rogers CM, Russell IS (1973) Cross modal perception in apes.
Neuropsychologia 11:21–28. doi: 10.1016/0028-3932(73)90060-2
Dawkins MS, Woodington A (1997) Distance and the presentation of visual stimuli to birds.
Anim Behav 54:1019–25. doi: 10.1006/anbe.1997.0519
Del Hoyo J, Elliot A, Sargatal J (1997) Handbook of the birds of the world, Vol. 4. Sandgrouse to Cuckoos.
Delius J, Emmerton J, Hörster W, et al (1999) Picture-object recognition in pigeons. Cah Psychol Cogn 18:621–656.
Diamond J, Bond AB (1999) Kea, Bird of Paradox: The Evolution and Behavior of a New Zealand Parrot. University of California Press
Eaton MD (2005) Human vision fails to distinguish widespread sexual dichromatism among sexually “monochromatic” birds. Proc Natl Acad Sci U S A 102:10942–10946. doi:
10.1073/pnas.0501891102
Emery NJ (2006) Cognitive ornithology: the evolution of avian intelligence. Philos Trans R Soc Lond B Biol Sci 361:23–43. doi: 10.1098/rstb.2005.1736
Ernst MO, Bülthoff HH (2004) Merging the senses into a robust percept. Trends Cogn Sci 8:162–169. doi: 10.1016/j.tics.2004.02.002
Ettlinger G (1960) Cross-modal transfer of training in monkeys. Behaviour 16:56–65.
Ettlinger G (1973) The transfer of information between sense-modalities: A neuropsychological review. In: Memory and Transfer of Information. pp 43–64
Ettlinger G, Blakemore CB (1967) Cross-modal matching in the monkey. Neuropsychologia 5:147–154.
Ettlinger G, Wilson WA (1990) Cross-modal performance: behavioural processes,
phylogenetic considerations and neural mechanisms. Behav Brain Res 40:169–192. doi:
10.1016/0166-4328(90)90075-P
Fagot J, Parron C (2010) Picture Perception in Birds: Perspective from Primatologists. Comp Cogn Behav Rev 5:132–135. doi: 10.3819/ccbr.2010.50007
Gsell AC, Hagelin JC, Brunton DH (2012) Olfactory sensitivity in Kea and Kaka. Emu 112:60–
66. doi: 10.1071/MU11052
Guilford T, Dawkins MS (1991) Receiver Psychology and the Evolution of Animal Signals.
Anim Behav 42:1–14. doi: 10.1016/s0003-3472(05)80600-1
Güntürkün O (2005) The avian “prefrontal cortex” and cognition. Curr Opin Neurobiol 15:686–693. doi: 10.1016/j.conb.2005.10.003
Harrison JM (1984) The functional analysis of auditory discrimination. J Acoust Soc Am 75:1848–1854.
Harrison JM, Pratt SR (1977) Control of responding by location of auditory stimuli. J Exp Anal Behav 3:243–251.
Hashiya K, Kojima S (2001) Acquisition of auditory-visual intermodal matching-to-sample by a chimpanzee (Pan troglodytes): Comparison with visual-visual intramodal matching.
Anim Cogn 4:231–239. doi: 10.1007/s10071-001-0118-3
Huber L (2000) Wie das Neue in die Gehirne kommt. Emergenz und Chaos in neuronalen Prozessen. In: Huber L (ed) Wie das Neue in die Welt kommt. WUV Universitätsverlag, pp 157–174
Huber L (2007) Emulation learning: the integration of technical and social cognition. In:
Nehaniv C, Dautenhahn K (eds) Imitation and social learning in robots, humans and animals : behavioural, social and communicative dimensions. pp 427–439
Huber L, Gajdon GK (2006) Technical intelligence in animals: The kea model. Anim Cogn 9:295–305. doi: 10.1007/s10071-006-0033-8
Huber L, Racca A, Scaf B, et al (2013) Discrimination of familiar human faces in dogs (Canis familiaris). Learn Motiv 44:258–269. doi: 10.1016/j.lmot.2013.04.005
Huber L, Rechberger S, Taborsky M (2001) Social learning affects object exploration and manipulation in keas, Nestor notabilis. Anim Behav 62:945–954. doi:
10.1006/anbe.2001.1822
Hultsch H, Schleuss F, Todt D (1999) Auditory-visual stimulus pairing enhances perceptual learning in a songbird. Anim Behav 58:143–149. doi: 10.1006/anbe.1999.1120
Iwaniuk AN, Dean KM, Nelson JE (2005) Interspecific allometry of the brain and brain regions in parrots (Psittaciformes): Comparisons with other birds and primates. Brain Behav Evol 65:40–59. doi: 10.1159/000081110
Jarvis ED, Güntürkün O, Bruce L, et al (2005) Avian brains and a new understanding of vertebrate brain evolution. Nat Rev Neurosci 6:151–159. doi: 10.1038/nrn1606
Johnstone RA (1996) Multiple displays in animal communication: ‘Backup signals’ and ‘multiple messages’. Philos Trans R Soc B Biol Sci 351:329.
King AJ, Calvert GA (2001) Multisensory integration: Perceptual grouping by eye and ear. Curr Biol 11:322–325. doi: 10.1016/S0960-9822(01)00175-0
Knudsen EI, Brainard MS (1995) Creating a unified representation of visual and auditory space in the brain. Annu Rev Neurosci 18:19–43. doi: 10.1146/annurev.neuro.18.1.19
Kojima S, Izumi A, Ceugniet M (2003) Identification of vocalizers by pant hoots, pant grunts and screams in a chimpanzee. Primates 44:225–230. doi: 10.1007/s10329-002-0014-8
Kondo N, Izawa E-I, Watanabe S (2012) Crows cross-modally recognize group members but not non-group members. Proc R Soc B Biol Sci 279:1937–1942. doi: 10.1098/rspb.2011.2419
Kulahci IG, Drea CM, Rubenstein DI, Ghazanfar AA (2014) Individual recognition through olfactory-auditory matching in lemurs. Proc Biol Sci 281:20140071. doi: 10.1098/rspb.2014.0071
Lalanne C, Lorenceau J (2004) Crossmodal integration for perception and action. J Physiol Paris 98:265–279. doi: 10.1016/j.jphysparis.2004.06.001
Lampe JF, Andre J (2012a) Cross-modal recognition of human individuals in domestic horses (Equus caballus). Anim Cogn 15:623–630. doi: 10.1007/s10071-012-0490-1
Lefebvre L, Reader SM, Sol D (2004) Brains, innovations and evolution in birds and primates.
Brain Behav Evol 63:233–246. doi: 10.1159/000076784
Leopold DA, Rhodes G (2010) A Comparative View of Face Perception. J Comp Psychol 124:233–251. doi: 10.1037/a0019460
Lickliter R, Bahrick LE (2000) The Development of Infant Intersensory Perception: Advantages of a Comparative Convergent-Operations Approach. Psychol Bull 126:260–280. doi: 10.1037/0033-2909.126.2.260
Lickliter R, Bahrick LE (2004) Perceptual Development and the Origins of Multisensory Responsiveness. In: The Handbook of multisensory processes. pp 643–654
Malone DR, Tolan JC, Rogers CM (1980) Cross-modal matching of objects and photographs in the monkey. Neuropsychologia 18:693–697.
Martinez L, Matsuzawa T (2009a) Visual and auditory conditional position discrimination in chimpanzees (Pan troglodytes). Behav Processes 82:90–94. doi:
10.1016/j.beproc.2009.03.010
Martinez L, Matsuzawa T (2009b) Effect of species-specificity in auditory-visual intermodal matching in a chimpanzee (Pan troglodytes) and humans. Behav Processes 82:160–163.
doi: 10.1016/j.beproc.2009.06.014
Meredith MA (2002) On the neuronal basis for multisensory convergence: A brief overview.
Cogn Brain Res 14:31–40. doi: 10.1016/S0926-6410(02)00059-9
Miller RE, Murphy J V. (1964) Influence of the spatial relationships between the cue, reward, and response in discrimination learning.
O’Hara M, Gajdon GK, Huber L (2012) Kea Logics: How These Birds Solve Difficult Problems and Outsmart Researchers. In: Logic and Sensibility. pp 23–38
O’Hara M, Huber L, Gajdon GK (2015) The advantage of objects over images in discrimination and reversal learning by kea, Nestor notabilis. Anim Behav 101:51–60. doi:
10.1016/j.anbehav.2014.12.022
O’Hara M, Schwing R, Federspiel I, et al (2016) Reasoning by exclusion in the kea (Nestor notabilis). Anim Cogn 19:965–975. doi: 10.1007/s10071-016-0998-x
Over R, Mackintosh NJ (1969) Cross-modal transfer of intensity discrimination by rats.
Nature 224:918–919. doi: 10.1038/224918a0
Partan SR, Marler P (1999) Communication goes multimodal. Science 283:1272–1273.
Proops L, McComb K (2012) Cross-modal individual recognition in domestic horses (Equus caballus) extends to familiar humans.
Proops L, McComb K, Reby D (2009) Cross-modal individual recognition in domestic horses (Equus caballus). Proc Natl Acad Sci U S A 106:947–951. doi: 10.1073/pnas.0809127105 Ratcliffe VF, McComb K, Reby D (2014) Cross-modal discrimination of human gender by
domestic dogs. Anim Behav 91:127–135. doi: 10.1016/j.anbehav.2014.03.009
Ravignani A, Westphal-Fitch G, Aust U, et al (2015) More than one way to see it: Individual heuristics in avian visual computation. Cognition 143:13–24. doi:
10.1016/j.cognition.2015.05.021
Romanski LM (2007) Representation and integration of auditory and visual stimuli in the primate ventral lateral prefrontal cortex. Cereb Cortex 17:61–69. doi: 10.1093/cercor/bhm099
Roskies AL (1999) The binding problem. Neuron 24:7–9.
Rowe C (1999) Receiver psychology and evolution of multicomponent signals. Anim Behav 58:921–931.
Rowe C (2002) Sound improves visual discrimination learning in avian predators. Proc R Soc B Biol Sci 269:1353–1357. doi: 10.1098/rspb.2002.2012
Rowe C, Guilford T (1999) The evolution of multimodal warning displays. Evol Ecol 13:655–
671. doi: 10.1023/A:1011021630244
Sakata S, Yamamori T, Sakurai Y (2004) Behavioral studies of auditory-visual spatial recognition and integration in rats. Exp Brain Res 159:409–417. doi: 10.1007/s00221-004-1962-6
Schiller P v. (1932) Intersensorielle Transposition bei Fischen. Z Vgl Physiol 19:304–309.
Schloegl C, Dierks A, Gajdon GK, et al (2009) What you see is what you get? Exclusion performances in ravens and keas. PLoS One. doi: 10.1371/journal.pone.0006368
Schwing R (2014) The Acoustic Signal Pathway of the Kea (Nestor notabilis). University of Auckland
Schwing R, Nelson XJ, Parsons S (2016) Audiogram of the kea parrot, Nestor notabilis. J Acoust Soc Am 140:3739–3744. doi: 10.1121/1.4967757
Schwing R, Nelson XJ, Wein A, Parsons S (2017) Positive emotional contagion in a New Zealand parrot. Curr Biol 27:R213–R214. doi: 10.1016/j.cub.2017.02.020
Shimizu T (2009) Why Can Birds Be So Smart? Background, Significance, and Implications of the Revised View of the Avian Brain. Comp Cogn Behav Rev 4:103–115. doi:
10.3819/ccbr.2009.40011
Sliwa J, Duhamel J-R, Pascalis O, Wirth S (2011) Spontaneous voice-face identity matching by rhesus monkeys for familiar conspecifics and humans. Proc Natl Acad Sci 108:1735–
1740. doi: 10.1073/pnas.1008169108
Smith F V., Bird MW (1963) The relative attraction for the domestic chick of combinations of stimuli in different sensory modalities. Anim Behav 11:300–305. doi: 10.1016/S0003-3472(63)80115-3
Soto FA, Wasserman EA (2010a) Comparative Vision Science: Seeing Eye to Eye? Comp Cogn Behav Rev 5:148–154. doi: 10.3819/ccbr.2010.50011
Soto FA, Wasserman EA (2010b) Error-driven learning in visual categorization and object recognition: a common-elements model. Psychol Rev 117:349–381. doi: 10.1037/a0018695
Spence C (2011) Crossmodal correspondences: A tutorial review. Atten Percept Psychophys 73:971–995. doi: 10.3758/s13414-010-0073-7
Spetch ML, Friedman A (2006) Comparative Cognition of Object Recognition. Comp Cogn Behav Rev 1:12–35. doi: 10.3819/ccbr.2008.10002
Steurer MM, Aust U, Huber L (2012) The Vienna comparative cognition technology (VCCT): An innovative operant conditioning system for various species and experimental procedures. Behav Res Methods 909–918. doi: 10.3758/s13428-012-0198-9
Stobbe N, Westphal-Fitch G, Aust U, Fitch WT (2012) Visual artificial grammar learning: comparative research on humans, kea (Nestor notabilis) and pigeons (Columba livia). Philos Trans R Soc B Biol Sci 367:1995–2006. doi: 10.1098/rstb.2012.0096
Sturdy CB, Weisman RG (2006) Rationale and methodology for testing auditory cognition in songbirds. Behav Processes 72:265–272. doi: 10.1016/j.beproc.2006.03.007
Tibbetts EA, Dale J (2007) Individual recognition: it is good to be different. Trends Ecol Evol 22:529–537. doi: 10.1016/j.tree.2007.09.001
Van Kampen HS, Bolhuis JJ (1991) Auditory Learning and Filial Imprinting in the Chick.
Behaviour 117:303–319. doi: 10.1163/156853991X00607
Van Kampen HS, Bolhuis JJ (1993) Interaction between auditory and visual learning during filial imprinting. Anim Behav 45:623–625. doi: 10.1006/anbe.1993.1074
Van Kampen HS, Bolhuis JJ (1992) An Evaluation of Auditory Learning in Filial Imprinting.
Behaviour 122:195–230. doi: 10.1163/156853992X00516
Watanabe S, Huber L (2006) Animal logics: Decisions in the absence of human language.
Anim Cogn 9:235–245. doi: 10.1007/s10071-006-0043-6
Wein A, Gajdon GK, Schwing R (2015) Picture-Object Recognition in Kea (Nestor notabilis). Ethology 121:1059–1070. doi: 10.1111/eth.12423
Weisman R, Spetch ML (2010) Determining When Birds Perceive Correspondence Between Pictures and Objects: A Critique. Comp Cogn Behav Rev 5:117–131. doi:
10.3819/ccbr.2010.50006
Werdenich D, Huber L (2006) A case of quick problem solving in birds: String pulling in keas, Nestor notabilis. Anim Behav 71:855–863. doi: 10.1016/j.anbehav.2005.06.018
Yehle AL, Ward JP (1970) Cross-modal transfer of a specific discrimination in the bushbaby (Galago senegalensis). J Comp Physiol Psychol 73:74–77. doi: 10.1037/h0029812
7. Appendix
The following graphs show the learning curves of all individuals, with each training phase in a different colour and the number of correct trials per session plotted directly in the graph. The first five kea belong to the group that started with the “pyramid” (An, Ly, Pa, Pi, Pu); the last four belong to the “cube” group (Ke, Ro, Sy, Wy):
Figure 9: Anu's results, showing the number of correct trials per session for each phase
Figure 10: Lilly's results, showing the number of correct trials per session for each phase
Figure 11: Paul's results, showing the number of correct trials per session for each phase
Figure 12: Pick's results, showing the number of correct trials per session for each phase
Figure 13: Papu's results, showing the number of correct trials per session for each phase
Figure 14: Kermit's results, showing the number of correct trials per session for each phase
Figure 15: Roku's results, showing the number of correct trials per session for each phase
Figure 16: Sunny's results, showing the number of correct trials per session for each phase
Figure 17: Willy's results, showing the number of correct trials per session for each phase