Voice assistants like Alexa, Siri, and Cortana have become increasingly popular, with nearly half of Americans using them on their smartphones. These digital voice assistants have been the subject of numerous studies, including research on how they affect users' speech patterns and communication strategies.
Researchers have found acoustic regularities in infant-directed speech and song across different cultures, pointing to universal prosodic features in motherese. Studies have also compared foreigner- and infant-directed speech, showing differences in how speakers adjust speech rate and prosody for each type of listener.
The field of human-computer interaction has explored how people interact with voice assistants, with studies on the effects of anthropomorphism, individual differences in anthropomorphism, and language attitudes toward voice-AI. Research has also shown that children attribute mental lives to technology like smart speakers, and that automatic recognition of children's speech remains challenging.
Studies have investigated how prosodic changes affect speech intelligibility and how speakers correct themselves in spoken dialogue systems, highlighting the importance of adapting speech after a misrecognition. Taken together, these studies shed light on the complex nature of communication with voice assistants and the various factors that influence these interactions.
As voice assistants continue to evolve and become more integrated into everyday life, understanding how speech patterns and communication strategies are influenced by these devices is crucial for improving user experience and facilitating effective human-technology interaction.