Year : 2015  |  Volume : 17  |  Issue : 77  |  Page : 227--232

Effects of sounds of locomotion on speech perception

Matz Larsson1, Seth Reino Ekström2, Parivash Ranjbar3
1 The Cardiology-Lung Clinic; School of Health and Medical Sciences, Örebro University, Örebro; Institute of Environmental Medicine, Karolinska Institutet, Stockholm, Sweden
2 Audiological Research Center, Ahlsén's Research Institute, Örebro, Sweden
3 School of Health and Medical Sciences, Örebro University; Audiological Research Center, Ahlsén's Research Institute, Örebro, Sweden

Correspondence Address:
Matz Larsson
The Cardiology-Lung Clinic, Örebro University Hospital, 70185 Örebro


Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal ("just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

How to cite this article:
Larsson M, Ekström SR, Ranjbar P. Effects of sounds of locomotion on speech perception. Noise Health 2015;17:227-232.

Full Text

Introduction

An animal's locomotion, breathing, and vocalizations produce sounds that may stimulate its own auditory system. A possible consequence is masking of signals originating in the surroundings. [1] Human locomotion typically creates audible sound containing a number of qualitatively dissimilar acoustic events: isolated impulse signals, sliding sounds, crushing sounds, and complex temporal patterns of overlapping impulse signals. [2] Other airborne and bone-conducted locomotion sounds produced by arm movements, irregularities in joints, or clothing movement may also be perceived. [3] Healthy adults take approximately 10,000 steps per day throughout life; [4],[5] however, aspects of sound associated with human locomotion have received little attention. The question raised here is how walking sound may influence the perception of speech.

Walking and running are periodic activities consisting of repeated gait cycles (GC). By definition, a GC begins when one foot comes in contact with the ground and ends when the same foot again contacts the ground. [3] Running is defined as a gait in which there is an aerial phase, a time when neither foot touches the ground. Human walking rates are generally in the range of 75-125 steps per minute (SPM). [6]
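The cadence figures above imply simple timing relations that are easy to make explicit. A minimal sketch (illustrative arithmetic only, not from the study):

```python
def step_period_s(spm: float) -> float:
    """Interval between successive steps (either foot), in seconds."""
    return 60.0 / spm

def gait_cycle_s(spm: float) -> float:
    """Duration of one gait cycle: two steps, same foot to same foot."""
    return 2 * step_period_s(spm)

# The walking-rate range cited in the text, 75-125 steps per minute,
# plus the 93 SPM cadence of the recorded walker used for the maskers:
for spm in (75, 93, 125):
    print(f"{spm} SPM -> step every {step_period_s(spm):.2f} s, "
          f"gait cycle {gait_cycle_s(spm):.2f} s")
```

At 93 SPM, a step falls roughly every 0.65 s and one gait cycle lasts about 1.29 s.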

The GC comprises stance and swing phases. [3] In walking, the two initial portions of the stance phase, initial contact and the loading response, normally produce more sound energy than other stance phase portions, although their combined duration is less than 10% of the GC. [3] In addition to sound propagated through the air, walking also produces self-generated sound transmitted to the inner ear via the bones of the skull, [7] which is likely to contribute to its masking potential. A walking sound is usually a sequence of isolated impact sounds generated by a temporally limited interaction between the foot and the walking surface. [2] In a rare investigation of the masking potential of sounds of locomotion, sound generated by a man walking on a beach and in shallow water was recorded at ear level. The former increased the sound level by 24 dB above baseline (from 38 dBA to 62 dBA equivalent level) and the latter increased the level by 32 dB (from 34 dBA to 66 dBA equivalent level). [8] These measurements included ambient noise; however, excluding this background noise would have reduced the level by no more than a few dB. Hence, the air-conducted sounds of locomotion were approximately 60 dB LAeq.
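The claim that excluding the ambient background barely changes the measured level can be checked by energetic subtraction of incoherent sound sources. A small sketch (the helper name `db_subtract` is ours, for illustration):

```python
import math

def db_subtract(total_db: float, background_db: float) -> float:
    """Energetically remove an incoherent background from a total level."""
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (background_db / 10))

# Beach walking: 62 dBA measured over a 38 dBA ambient background.
source_only = db_subtract(62.0, 38.0)
print(f"{source_only:.2f} dBA")  # the correction is well under 0.1 dB
```

With a 24 dB gap between the total and the background, the background contributes a negligible fraction of the energy, consistent with the text.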

Individuals walking side by side often subconsciously synchronize steps, [9],[10],[11],[12],[13] with each person making adjustments to resemble their partner's walking behavior. [11] In paired walking, participants can exhibit a phase difference close to 0° (in-phase), or they can show a phase difference close to 180° (antiphase), with the walkers' opposite-side feet contacting the ground simultaneously. [11] Only a small amount of sensory information is sufficient to cause unintentional synchronization. Leg-length difference has been found to be negatively related to step-locking. [9] It has been proposed that the evolutionary basis of synchronized human walking and other synchronized behaviors is improved social cohesion. [14],[15] Social dynamics have been proposed to influence synchronization, and an individual's movement pattern has been characterized as the result of interaction between her/his ideal movement pattern and that of nearby individuals. [16] Larsson [8] suggested that synchronized human gait may result in improved perception of important signals. Analogously, it has also been proposed that animal groups such as schooling fish, cetaceans, and birds flying in formation may achieve acoustic advantages by moving in synchrony. [8],[17],[18],[19],[20] This ability to produce subtle differences in ambient signal-to-noise ratios (SNRs) may have adaptive value in improving the perception of low-amplitude signals from the environment. The hypothesis proposed here is that the synchronization of footsteps in walkers may improve the perception of environmental sound.

Synchronization of human gait may improve the capacity to discriminate sound sources, as the onset time of the sounds of the GC will coincide. In synchronized walking, the pairs of footsteps may be grouped together to form an auditory object, [21] improving the brain's ability to discriminate footsteps from other sound sources. Moreover, it is likely that two humans walking in pace on a consistent surface will be familiar with the sound patterns produced. The predictability of masking sounds may reduce backward masking due to a learning effect. [7] Word identification was shown to be better in the presence of familiar background music than with unfamiliar background music. [22] It is likely that synchronized maskers differ substantially from unsynchronized ones with respect to the distribution of sound intensity throughout the GC. Normal-hearing listeners can take advantage of the momentary favorable SNR that occurs in the temporal valleys of fluctuating maskers to improve speech perception. [23],[24] This ability has been called "listening in the dips" or "glimpsing." [24] It is likely that paced stimuli provide greater opportunities to glimpse speech or other important signals during relatively quiet segments of the GC.

The aim of this study was to investigate the potential of footsteps to mask a potentially critical signal, speech, and to compare masking by synchronized and unsynchronized footsteps.

Materials and Methods

Participants

The participants were eight females and eight males, ages 22-70 years (mean = 30.9, median = 27), all of whom were native Swedish speakers with hearing thresholds better than 20 dB hearing level (HL) at the frequencies 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz.

Equipment and test conditions

The tests were performed in an anechoic chamber (3 m L × 3.4 m W × 2.4 m H) at the Audiological Research Center, Örebro, Sweden. [25],[26] The masking sound (footsteps) and the target sound (reader's continuous speech) were presented through the same speaker, which was situated 1.83 m in front of the subject.

Speech was chosen as the target sound, as standardized methods for assessing its perception have been developed. [26],[27],[28] The participants were instructed to adjust the level of the target sound to the level at which he or she could just follow a story, as in the "just follow conversation" (JFC) method. [26],[27],[28] Assessment was strictly subjective; the subject was not required to repeat words or questioned about the content. The JFC method was used in preference to conventional speech-recognition threshold measures because it is more quickly performed and has greater face validity. [25] This was an advantage in light of the number of threshold determinations (n = 16) needed in the study design. Calibration of the levels of the masker and the test sound was conducted using a Brüel & Kjær Pulse meter system (microphone: B&K 4191; front end: B&K 3160; software: B&K Pulse version 17). The audiometric thresholds were obtained using standard clinical procedures, with audiometers calibrated according to the International Organization for Standardization (ISO) 389.

Test conditions

The target sound was a male voice reading a story in Swedish (standardized by Borg et al. [26],[27],[28] ) recorded in an anechoic chamber at the Ahlsén's Research Institute. The spectrum is presented in [Figure 1].{Figure 1}

The maskers were produced by manipulating a recording, sampled at 16,000 Hz, of a 177-cm-tall male weighing 74 kg walking at 93 SPM on 10-mm gravel. The walker wore rubber boots of European standard size 42 and held the microphone approximately 1 m above the ground.

Synchronized and unsynchronized conditions

To measure masking with respect to rhythmic properties and intensity, the original walking sound was modified and transformed to mimic the sound of two individuals walking in pace, designated the synchronized condition, and to mimic the sound of two individuals walking out of phase, designated the unsynchronized condition. The latter consisted of the original walking sound (A) plus a transformed variant of that sound (B), which was manipulated (time-stretched) to mimic the steps of a person walking at 85 SPM, i.e., out of phase with sound A. The sound pressure level of barefoot walking in shallow water and in gravel/sand, measured at ear level, was reported to be at approximately 60 dBA equivalent level. [8] The synchronized and unsynchronized walking sounds were played at 40 dBA, 50 dBA, 60 dBA, and 70 dBA. The temporal pattern and spectrum of the synchronized and unsynchronized walking sounds are shown in [Figure 2]. The spectra are indistinguishable, which is to be expected, as the sounds were synthesized from the same source.{Figure 2}
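The construction of the two maskers can be sketched in NumPy. The paper says only that the original sound was time-stretched; the exact processing is not described, so the naive linear-interpolation resampling below (which also shifts pitch) and the function names are our assumptions:

```python
import numpy as np

FS = 16_000  # sampling rate of the original recording, per the text

def time_stretch(x: np.ndarray, rate: float) -> np.ndarray:
    """Naive resampling stretch: rate < 1 slows the sound down.
    (This also shifts pitch; the study's actual processing is unspecified.)"""
    n_out = int(len(x) / rate)
    return np.interp(np.linspace(0, len(x) - 1, n_out),
                     np.arange(len(x)), x)

def make_maskers(walk_93: np.ndarray):
    """Build two-walker maskers from one recording at 93 SPM."""
    synchronized = walk_93 + walk_93             # both walkers in step
    walk_85 = time_stretch(walk_93, 85 / 93)     # second walker at 85 SPM
    n = min(len(walk_93), len(walk_85))
    unsynchronized = walk_93[:n] + walk_85[:n]   # steps drift out of phase
    return synchronized, unsynchronized
```

Since both maskers were subsequently played back at calibrated overall levels (40-70 dBA), the absolute gain of the summed signals is normalized at presentation.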

Procedure

A standard clinical audiogram was obtained to confirm that all subjects had pure-tone average hearing thresholds better than 20 dB HL over a frequency range of 500-4000 Hz. The participant was seated in the test room and given instructions with respect to the test conditions and the JFC method. The participant first listened to the unmasked target sound and was instructed to adjust the sound level until it was just possible to follow the story without necessarily hearing every word. They then performed the exercise in the presence of the masking sounds. Each masking condition (synchronized and unsynchronized at 40 dBA, 50 dBA, 60 dBA, and 70 dBA) was presented twice (2 × 4 × 2 = 16 trials) in random order to balance learning effects. [29] Before presenting the masker at 70 dBA, the participant was warned that they would be presented with a loud stimulus. The participants were allowed to take as much time as needed to adjust the JFC thresholds and to take a break during the testing. Finally, the participant listened to the target sound again without masking and adjusted the level to the JFC threshold. The testing took a maximum of 1 h. All participants completed the test without a break.

Statistical analysis

The data (JFC threshold levels) are presented as percentiles and as mean ± standard deviation (SD). There were no deviations from normality as assessed by the Shapiro-Wilk test. Two-way repeated-measures analysis of variance (RM-ANOVA) was implemented as a generalized linear model to evaluate the JFC thresholds. The factors were masking level and walking type (synchronized or unsynchronized). Post hoc estimations of the mean JFC thresholds for synchronized and unsynchronized walking were calculated for each masker level with 95% confidence intervals (CI). Statistical analyses were performed with SPSS Statistics for Windows, Version 17.0 (SPSS Inc., Chicago, IL).
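The post hoc estimate at one masker level amounts to a paired mean difference with a 95% CI. A plain-Python sketch of that computation; the per-subject thresholds below are invented for illustration (not the study's data), and the t critical value is read from a table:

```python
import statistics as stats

# Hypothetical JFC thresholds (dBA) for 8 subjects at one masker level.
sync   = [42.1, 44.0, 41.5, 43.8, 45.2, 40.9, 43.0, 44.6]
unsync = [44.8, 46.1, 43.9, 46.5, 47.0, 43.2, 45.5, 46.9]

diffs = [u - s for s, u in zip(sync, unsync)]  # unsync minus sync, per subject
mean_d = stats.mean(diffs)
sd_d = stats.stdev(diffs)
n = len(diffs)
se_d = sd_d / n ** 0.5

# 95% CI for the paired mean difference; t* for df = n - 1 = 7
# is taken from a t-table (2.365) rather than computed.
t_crit = 2.365
ci = (mean_d - t_crit * se_d, mean_d + t_crit * se_d)
print(f"mean difference {mean_d:.2f} dB, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

The study's actual analysis used a two-way RM-ANOVA over all four levels; this sketch shows only the per-level paired comparison behind [Table 1].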


The study was approved by the Regional Ethics Committee, Uppsala, Sweden (Reg. no. 2012/079).

Results

There was a significant effect of masking (P < 0.001, RM-ANOVA) [Figure 3]. The mean values of the JFC thresholds in the unmasked condition before and after the masking trials were similar (14 ± 3 dBA and 14 ± 2 dBA, P = 0.715, RM-ANOVA).{Figure 3}

The mean JFC threshold (for all four masking levels) in the synchronized condition was 39.8 ± 9.6 dBA, while the corresponding value for the unsynchronized condition was 40.9 ± 9.9 dBA. The difference was 1.1 dB (95% CI, 0.7-1.8, P < 0.001, RM-ANOVA). A significant interaction between masking level and walking type was observed (P = 0.037, RM-ANOVA). The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA.

JFC thresholds for the synchronized and unsynchronized conditions, their differences, and 95% CIs were estimated for each masker level [Table 1]. {Table 1}Synchronized footsteps at 60 dBA [Table 1] resulted in a mean JFC threshold of 43.1 dBA, while the corresponding value for the unsynchronized condition was 45.5 dBA, a difference in SNR of 2.4 dB (P < 0.001, RM-ANOVA). The minimum, maximum, median, and 25th and 75th percentiles of the JFC thresholds for each masker level in unsynchronized and synchronized conditions are shown in [Figure 3]. The mean JFC threshold was higher in the unsynchronized condition at all masking levels, with the difference being significant at 50 dBA (P = 0.009, RM-ANOVA) and 60 dBA (P < 0.001, RM-ANOVA). Significance was not observed at 40 dBA (P = 0.816, RM-ANOVA) and was borderline at 70 dBA (P = 0.047, RM-ANOVA).

Discussion

To our knowledge, this is the first experimental study exploring the hypothesis that synchronized locomotion in an animal group may improve the hearing of vital signals. The study demonstrated a modest masking effect of locomotion sound on speech perception. The sound that mimicked synchronized walking showed slightly less masking effect than did the sound of unsynchronized walking. The average difference in JFC thresholds was 1.1 dB. Although Hagerman [30] has demonstrated an increase in speech intelligibility of approximately 25% when the SNR was increased from 0 dB to 1 dB, an improvement of 10-15% per dB of improved SNR is more typical. [30] Speech perception in other types of noise has been investigated by Plomp et al. [31] and Persson et al. [32] in binaural versus monaural conditions. These studies demonstrated a difference in SNR of around 2-3 dB in favor of the binaural condition. Thus, the observed difference in SNR between synchronized and unsynchronized walking in the present study was comparatively small.

The results of the present study indicate that a slight improvement in speech perception may be obtained by synchronized walking. Synchronized footsteps at the masking level of 60 dB [Table 1] improved the SNR by 2.4 dB. The difference was of the same order as that reported between monaural and binaural listening. [31],[32] The observed difference in SNR is likely to influence perception of verbal communication only for speech at approximately 45 dBA. As this is substantially lower than the normal speech level of ~60 dB, the masking by locomotion sounds is unlikely to influence speech perception inside a small group more than marginally. However, even a slight improvement in perception of low-amplitude signals in the environment may have adaptive value.

The results of this study imply that synchronized walking may improve the perception, or reception, of speech produced at some distance from the walkers, revealing the presence of other individuals. It is also likely that other signals, such as the sound of running water, distant footsteps, moving prey or predators, flying birds, or hissing snakes, may be more easily detected during the relatively quiet intervals in synchronized walking. Common onset of sound (in this case footsteps) may improve auditory grouping. [33] The rhythmic properties of a masker seem to influence speech perception. A piano played at 50 dBA was shown to exert masking effects on speech perception thresholds differing according to frequency and beats per minute. [25] A low octave with rapid cadence produced the greatest masking effect. The typical walking tempo of humans is close to 100 SPM. Two people walking in pace at this tempo will produce a regular rhythm of 100 SPM, while unsynchronized walking, for instance 90 SPM combined with 110 SPM, will produce a more rapid and unpredictable rhythm. The footsteps produced by a small group may be similar in frequency distribution to the locomotory sounds produced by a predator or potential prey. As these self-produced sounds are produced closer to the ear, they are likely to be a potent masker of environmental signals. This type of masking remains to be studied. In evolutionary terms, even a slight benefit in perception may have affected survival by increasing the potential for detection of nearby prey or a stalker. If synchronized group locomotion improves hearing of critical signals in the surroundings, it may have implications for the evolution of synchronized behavior in humans and other vertebrate groups.
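The rhythm arithmetic in this paragraph can be checked numerically: merging the step-onset streams of a 90 SPM and a 110 SPM walker yields an irregular, denser pattern than two walkers locked at 100 SPM. A small sketch (illustrative only):

```python
def step_onsets(spm: float, duration_s: float = 6.0):
    """Step-onset times (s) for a walker at a given cadence."""
    period = 60.0 / spm
    return [i * period for i in range(int(duration_s / period) + 1)]

# Two walkers in step at 100 SPM: a single regular stream of onsets.
sync_onsets = step_onsets(100)

# 90 SPM combined with 110 SPM: merge the two onset streams.
merged = sorted(step_onsets(90) + step_onsets(110))
intervals = [round(b - a, 3) for a, b in zip(merged, merged[1:])]
print(intervals[:8])  # irregular, and roughly twice as dense as 100 SPM
```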

Predictable noise

Human speech perception often takes place against a background of intense and irrelevant noise. [34] As mentioned, learning reduces backward masking, increasing signal detection. [7] Whether the repetitive sounds produced by one's own locomotion are associated with such learning remains to be studied. If there is a learning effect, it seems likely that sounds produced concurrently in extended sequences by an accompanying person might be included in the learning process. That may contribute to reduced masking of synchronized compared with unsynchronized walking, when moving on a similar surface during extended periods.

Limitations and suggestions for further research

The walking sounds used here were not real-time recordings, but were modified from a natural walking sound. The sounds represented a single type of substrate, a single subject, a single type of footwear, and one walking tempo. The unsynchronized walking sound represented only one of innumerable possible rhythms that may be produced by two walkers. It is unclear why significant values were found at 50 dB and 60 dB masking noise, while at 40 dB and 70 dB, no effect was observed. Random variation due to the low number of subjects (n = 15) is a possibility, and this number might preferably be doubled in a similar future study. Sound pressure level and other characteristics of footsteps have scarcely been investigated. [8] How the walking surface, echoes, and other environmental factors influence masking is of interest, along with how the slope of the speech recognition curve versus speech level or SNR varies with the type of speech material, the psychophysical test method, and the temporal and spectral characteristics of the masker.

A limitation of the study is that the signal and the masking sound came from the same direction. The presentation of speech and masker from a single frontally located speaker in an anechoic chamber differs from a real-life situation with the speech source to one side at ear level (assuming similar height) and the masking footstep sounds from below. In a real-life situation, the masking sound would come from different directions and distances from those of speech. The study did not include self-generated sounds. The sounds produced and perceived by a walker are, by definition, self-generated. Sensory attenuation of the effects of self-generated action has been described. [35],[36] It remains to be studied whether sounds produced concurrently by another walker may be attenuated in the central nervous system (CNS) in a manner similar to that demonstrated for self-produced sound.
It would be of interest to study the masking potential of synchronized and unsynchronized walking sounds in a situation that mimics real life, for example, investigating how auditory perception is influenced when two subjects switch from synchronized to unsynchronized walking and vice versa. Does the CNS attenuation of a self-generated masking sound include the concurrently produced sound of an accompanying walker? That may be the case, particularly as concurrent onset time is an important mechanism in auditory grouping. [33] If so, the result is likely to be a further reduction of masking by walking in step. The walking sound investigated here was produced with shod feet. It is unclear when the use of footwear began, but there is anatomical evidence for the habitual use of footwear approximately 40,000 years ago, [37] presumably too recent to have had an impact on the evolution of synchronized walking. Similar studies using barefoot walking sounds, as well as footsteps produced on other materials, might be conducted. Synchronized and unsynchronized running may also be of interest to study. Masking due to bone conduction is likely to differ substantially in different phases of the GC. Masking due to bone-conducted locomotion sounds and the possible sensory attenuation of such effects of self-generated action are little investigated, as is how bone conduction may interact with air-conducted transmission of footstep sounds.

Conclusions

Walking sound was shown to mask the perception of speech. Synchronized walking sound had a somewhat lower masking effect than unsynchronized. Synchronization is unlikely to improve speech perception at normal and high speech levels. A possible advantage of synchronized walking may be an improved ability to decipher weak signals in the environment.

Acknowledgments

We thank two anonymous reviewers for valuable suggestions. Professor Brian C. Moore provided much useful commentary. We are grateful to: Anders Magnuson, statistician; the Lucidus Consultancy for editorial comments and help with the English language; Erik Borg, Professor Emeritus, Department of Audiology, ÖUH for information on auditory perception; Örebro County Council for a postdoctoral grant; and the Cardiology Clinic of ÖUH for support in publication.

References

1. von Holst E, Mittelstaedt H. Das Reafferenzprinzip. Wechselwirkungen zwischen Zentralnervensystem und Peripherie. Naturwissenschaften 1950;37:464-76.
2. Visell Y, Fontana F, Giordano BL, Nordahl R, Serafin S, Bresin R. Sound design and perception in walking interactions. Int J Hum Comput Stud 2009;67:947-59.
3. Novacheck TF. The biomechanics of running. Gait Posture 1998;7:77-95.
4. Tudor-Locke CE, Myers AM. Methodological considerations for researchers and practitioners using pedometers to measure physical (ambulatory) activity. Res Q Exerc Sport 2001;72:1-12.
5. Bohannon RW. Number of pedometer-assessed steps taken per day by adults: A descriptive meta-analysis. Phys Ther 2007;87:1642-50.
6. Sabatier JM, Ekimov AE. A review of human signatures in urban environments using seismic and acoustic methods. Waltham, MA: The Institute of Electrical and Electronics Engineers (IEEE); 2008. p. 215-20.
7. Moore B. An Introduction to the Psychology of Hearing. 5th ed. San Diego, CA: Academic Press; 2003. p. 107.
8. Larsson M. Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities. Anim Cogn 2014;17:1-14.
9. Nessler JA, Gilliland SJ. Interpersonal synchronization during side by side treadmill walking is influenced by leg length differential and altered sensory feedback. Hum Mov Sci 2009;28:772-85.
10. Nessler JA, Gilliland SJ. Kinematic analysis of side-by-side stepping with intentional and unintentional synchronization. Gait Posture 2010;31:527-9.
11. Nessler JA, McMillan D, Schoulten M, Shallow T, Stewart B, De Leone C. Side by side treadmill walking with intentionally desynchronized gait. Ann Biomed Eng 2013;41:1680-91.
12. van Ulzen NR, Lamoth CJ, Daffertshofer A, Semin GR, Beek PJ. Characteristics of instructed and uninstructed interpersonal coordination while walking side-by-side. Neurosci Lett 2008;432:88-93.
13. Zivotofsky AZ, Hausdorff JM. The sensory feedback mechanisms enabling couples to walk synchronously: An initial investigation. J Neuroeng Rehabil 2007;4:28.
14. McNeill WH. Keeping Together in Time. Cambridge, MA: Harvard University Press; 1995.
15. Fessler DM, Holbrook C. Marching into battle: Synchronized walking diminishes the conceptualized formidability of an antagonist in men. Biol Lett 2014;10:20140592.
16. Issartel J, Marin L, Cadopi M. Unintended interpersonal co-ordination: "Can we march to the beat of our own drum?". Neurosci Lett 2007;411:174-9.
17. Larsson M. Incidental sounds of locomotion in animal cognition. Anim Cogn 2012;15:1-13.
18. Larsson M. Why do fish school? Curr Zool 2012;58:116-28.
19. Larsson M. Schooling fish: A multisensory approach. Reference Module in Earth Systems and Environmental Sciences. Elsevier; 2013.
20. Larsson M. Possible functions of the octavolateralis system in fish schooling. Fish Fish 2009;10:344-53.
21. Bizley JK, Cohen YE. The what, where and how of auditory-object perception. Nat Rev Neurosci 2013;14:693-707.
22. Russo FA, Pichora-Fuller MK. Tune in or tune out: Age-related differences in listening to speech in music. Ear Hear 2008;29:746-60.
23. Miller GA, Licklider JC. The intelligibility of interrupted speech. J Acoust Soc Am 1950;22:167-73.
24. Brown CA, Bacon SP. Fundamental frequency and speech intelligibility in background noise. Hear Res 2010;266:52-9.
25. Ekström SR, Borg E. Hearing speech in music. Noise Health 2011;13:277-85.
26. Borg E, Wilson M, Samuelsson E. Towards an ecological audiology: Stereophonic listening chamber and acoustic environmental tests. Scand Audiol 1998;27:195-206.
27. Hawkins DB, Montgomery A, Mueller H, Sedge R. Assessment of speech intelligibility by hearing-impaired listeners. In: Berglund B, Karlsson J, Lindvall T, editors. Noise as a Public Health Problem. Stockholm, Sweden: Swedish Council for Building Research; 1988. p. 241-6.
28. Hygge S, Rönnberg J, Larsby B, Arlinger S. Normal-hearing and hearing-impaired subjects' ability to just follow conversation in competing speech, reversed speech, and noise backgrounds. J Speech Hear Res 1992;35:208-15.
29. Rhebergen KS, Versfeld NJ, Dreschler WA. Learning effect observed for the speech reception threshold in interrupted noise with normal hearing listeners. Int J Audiol 2008;47:185-8.
30. Hagerman B. Sentences for testing speech intelligibility in noise. Scand Audiol 1982;11:79-87.
31. Plomp R, Mimpen AM. Speech-reception threshold for sentences as a function of age and noise level. J Acoust Soc Am 1979;66:1333-42.
32. Persson P, Harder H, Arlinger S, Magnuson B. Speech recognition in background noise: Monaural versus binaural listening conditions in normal-hearing patients. Otol Neurotol 2001;22:625-30.
33. Bregman AS. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press; 1990.
34. Darwin CJ. Listening to speech in the presence of other sounds. Philos Trans R Soc Lond B Biol Sci 2008;363:1011-21.
35. Blakemore SJ, Frith CD, Wolpert DM. Spatio-temporal prediction modulates the perception of self-produced stimuli. J Cogn Neurosci 1999;11:551-9.
36. Sato A. Action observation modulates auditory perception of the consequence of others' actions. Conscious Cogn 2008;17:1219-27.
37. Trinkaus E, Shang H. Anatomical evidence for the antiquity of human footwear: Tianyuan and Sunghir. J Archaeol Sci 2008;35:1928-33.