  • Front Psychol

The psychological functions of music listening

Thomas Schäfer 1, Peter Sedlmeier 1, Christine Städtler 1, and David Huron 2

1 Department of Psychology, Chemnitz University of Technology, Chemnitz, Germany

2 School of Music, Cognitive and Systematic Musicology Laboratory, Ohio State University, Columbus, OH, USA

Why do people listen to music? Over the past several decades, scholars have proposed numerous functions that listening to music might fulfill. However, different theoretical approaches, different methods, and different samples have left a heterogeneous picture regarding the number and nature of musical functions. Moreover, there remains no agreement about the underlying dimensions of these functions. Part one of the paper reviews the research contributions that have explicitly referred to musical functions. It is concluded that a comprehensive investigation addressing the basic dimensions underlying the plethora of functions of music listening is warranted. Part two of the paper presents an empirical investigation of hundreds of functions that could be extracted from the reviewed contributions. These functions were distilled to 129 non-redundant functions that were then rated by 834 respondents. Principal component analysis suggested three distinct underlying dimensions: People listen to music to regulate arousal and mood, to achieve self-awareness, and as an expression of social relatedness. The first and second dimensions were judged to be much more important than the third—a result that contrasts with the idea that music has evolved primarily as a means for social cohesion and communication. The implications of these results are discussed in light of theories on the origin and the functionality of music listening, as well as for the application of musical stimuli in all areas of psychology and for research in music cognition.

Introduction

Music listening is one of the most enigmatic of human behaviors. Most common behaviors have a recognizable utility that can be plausibly traced to the practical motives of survival and procreation. Moreover, in the array of seemingly odd behaviors, few behaviors match music for commandeering so much time, energy, and money. Music listening is one of the most popular leisure activities. Music is a ubiquitous companion to people's everyday lives.

The enthusiasm for music is not a recent development. Recognizably musical activities appear to have been present in every known culture on earth, with ancient roots extending back 250,000 years or more (see Zatorre and Peretz, 2001). The ubiquity and antiquity of music have inspired considerable speculation regarding its origin and function.

Throughout history, scholars of various stripes have pondered the nature of music. Philosophers, psychologists, anthropologists, musicologists, and neuroscientists have proposed a number of theories concerning the origin and purpose of music, and some have pursued scientific approaches to investigating them (e.g., Fitch, 2006; Peretz, 2006; Levitin, 2007; Schäfer and Sedlmeier, 2010).

The origin of music is shrouded in prehistory. There is little physical evidence—like stone carvings or fossilized footprints—that might provide clues to music's past. Necessarily, hypotheses concerning the original functions of music will remain speculative. Nevertheless, there are a number of plausible and interesting conjectures that offer useful starting-points for investigating the functions of music.

A promising approach to the question of music's origins focuses on how music is used—that is, its various functions. In fact, many scholars have endeavored to enumerate various musical functions (see below). The assumption is that the function(s) that music served in the past would be echoed in at least one of the functions that music serves today. Of course, how music is used today need have no relationship with music's function(s) in the remote past. Nevertheless, evidence from modern listeners might provide useful clues pertinent to theorizing about origins.

In proposing various musical functions, not all scholars have related these functions to music's presumed evolutionary roots. For many scholars, the motivation has been simply to identify the multiple ways in which music is used in everyday life (e.g., Chamorro-Premuzic and Furnham, 2007; Boer, 2009; Lonsdale and North, 2011; Packer and Ballantyne, 2011). Empirical studies of musical functions have been very heterogeneous. Some studies were motivated by questions related to development; many related to social identity; others were motivated by cognitive psychology, aesthetics, cultural psychology, or personality psychology. In addition, studies differed according to the target population: while some attempted to assemble representative samples of listeners, others explicitly focused on specific populations such as adolescents, and most relied on convenience samples of students. Consequently, the existing literature is something of a hodgepodge.

The aim of the present study is to use the extant literature as a point of departure for a fresh re-appraisal of possible musical functions. In Part 1 of our study, we summarize the results of an extensive literature survey concerning the possible functions of music. Specifically, we identified and skimmed hundreds of publications that explicitly suggest various functions, uses, or benefits for music. We provide separate overviews of the theoretical and empirical literatures. This survey resulted in just over 500 proposed musical functions. We do not refer to each of the identified publications but concentrate on those that identified either more than one function of music listening or a single unique function not captured in any other publication. In Part 2, we present the results of an empirical study whose purpose was to distill—using principal components analysis (PCA)—the many proposed functions of music listening. To anticipate our results, we will see that PCA suggests three main dimensions that can account for much of the shared variance in the proposed musical functions.

Review of the research on the functions of music

Discussions and speculations regarding the functions of music listening can be found both in the theoretical literature concerning music and in empirical studies of music. Below, we offer a review of both literatures. The contents of the reviews are summarized in Tables A1 and A2: Table A1 provides an overview of theoretical proposals regarding musical function, whereas Table A2 provides an overview of empirical studies regarding musical function. Together, the two tables provide a broad inventory of potential functions for music.

Theoretical approaches

Many scholars have discussed potential functions of music exclusively from a theoretical point of view. The most prominent of these approaches or theories are the ones that make explicit evolutionary claims. However, there are also other, non-evolutionary approaches such as experimental aesthetics or the uses-and-gratifications approach. Functions of music were derived deductively from these approaches and theories. In addition, in the literature, one commonly finds lists or collections of functions that music can have. Most of these lists are the result of literature searches; in other cases authors provide no clear explanation for how they came up with the functions they list. Given the aim of assembling a comprehensive list, all works are included in our summary.

Functions of music as they derive from specific approaches or theories

Evolutionary approaches. Evolutionary discussions of music can already be found in the writings of Darwin, who considered some possibilities but felt there was no satisfactory solution to music's origins (Darwin, 1871, 1872). His intellectual heirs have been less cautious. Miller (2000), for instance, has argued that music making is a reasonable index of biological fitness, and so a manifestation of sexual selection—analogous to the peacock's tail. Anyone who can afford the biological luxury of making music must be strong and healthy. Thus, music would offer an honest social signal of physiological fitness.

Another line of theorizing refers to music as a means of social and emotional communication. For example, Panksepp and Bernatzky (2002, p. 139) argued that

in social creatures like ourselves, whose ancestors lived in arboreal environments where sound was one of the most effective ways to coordinate cohesive group activities, reinforce social bonds, resolve animosities, and to establish stable hierarchies of submission and dominance, there could have been a premium on being able to communicate shades of emotional meaning by the melodic character (prosody) of emitted sounds.

A similar idea is that music contributes to social cohesion and thereby increases the effectiveness of group action. Work and war songs, lullabies, and national anthems have bound together families, groups, or whole nations. Relatedly, music may provide a means to reduce social stress and temper aggression in others. The idea that music may function as a social cement has many proponents (see Huron, 2001; Mithen, 2006; Bicknell, 2007).

A novel evolutionary theory is offered by Falk (2004a,b), who has proposed that music arose from humming or singing intended to maintain infant-mother attachment. Falk's “putting-down-the-baby hypothesis” suggests that mothers would have profited from putting down their infants in order to free their hands for other activities. Humming or singing consequently arose as a consoling signal indicating caretaker proximity in the absence of physical touch.

Another interesting conjecture relates music to the human anxiety concerning death and the consequent quest for meaning. Dissanayake (2009), for example, has argued that humans have used music to help cope with awareness of life's transitoriness. In a manner similar to religious beliefs about the hereafter or a higher transcendental purpose, music can help assuage human anxiety concerning mortality (see, e.g., Newberg et al., 2001). Neurophysiological studies regarding music-induced chills can be interpreted as congruent with this conjecture. For example, music-induced chills are accompanied by reduced activity in brain structures associated with anxiety (Blood and Zatorre, 2001).

Related ideas stress the role music plays in feelings of transcendence. For example, Frith (1996, p. 275) has noted: “We all hear the music we like as something special, as something that defies the mundane, takes us ‘out of ourselves,’ puts us somewhere else.” Thus, music may provide a means of escape. The experience of flow states (Nakamura and Csikszentmihalyi, 2009), peaks (Maslow, 1968), and chills (Panksepp, 1995), which are often evoked by music listening, might similarly be interpreted as forms of transcendence or escapism (see also Fachner, 2008).

More generally, Schubert (2009) has argued that the fundamental function of music is its potential to produce pleasure in the listener (and in the performer as well). All other functions may be considered subordinate to music's pleasure-producing capacity. Relatedly, music might have emerged as a safe form of time-passing—analogous to the sleeping behaviors found among many predators. As humans became more effective hunters, music might have emerged merely as an entertaining and innocuous way to pass time during waking hours (see Huron, 2001).

The above theories each stress a single account of music's origins. In addition, there are mixed theories that posit a constellation of several concurrent functions. Anthropological accounts of music often refer to multiple social and cultural benefits arising from music. Merriam (1964) provides a seminal example. In his book, The Anthropology of Music, Merriam proposed 10 social functions music can serve (e.g., emotional expression, communication, and symbolic representation). Merriam's work has had a lasting influence among music scholars, but it also led many scholars to focus exclusively on the social functions of music. Following in the tradition of Merriam, Dissanayake (2006) proposed six social functions of ritual music (such as display of resources, control and channeling of individual aggression, and the facilitation of courtship).

Non-evolutionary approaches. Many scholars have steered clear of evolutionary speculation about music and have instead focused on the ways in which people use music in their everyday lives today. A prominent approach is the “uses-and-gratifications” approach (e.g., Arnett, 1995). This approach focuses on the needs and concerns of listeners and tries to explain how people actively select and use media such as music to serve those needs and concerns. Arnett (1995) provides a list of potential uses of music such as entertainment, identity formation, sensation seeking, or cultural identification.

Another line of research is “experimental aesthetics,” whose proponents investigate the subjective experience of beauty (whether artificial or natural) and the ensuing experience of pleasure. For example, in discussing the “recent work in experimental aesthetics,” Bullough (1921) distinguished several types of listeners and pointed to the fact that music can be used to activate associations, memories, experiences, moods, and emotions.

By way of summary, many musical functions have been proposed in the research literature. Evolutionary speculations have tended to focus on single-source causes such as music as an indicator of biological fitness, music as a means for social and emotional communication, music as social glue, music as a way of facilitating caretaker mobility, music as a means of tempering anxiety about mortality, music as escapism or transcendental meaning, music as a source of pleasure, and music as a means for passing time. Other accounts have posited multiple concurrent functions such as the plethora of social and cultural functions of music found in anthropological writings about music. Non-evolutionary approaches are evident in the uses-and-gratifications approach—which revealed a large number of functions that can be summarized as cognitive, emotional, social, and physiological functions—and the experimental aesthetics approach, whose proposed functions can similarly be summarized as cognitive and emotional functions.

Functions of music as they derive from literature research

As noted, many publications posit musical functions without providing a clear connection to any theory. Most of these works are simply collections of musical functions drawn from the literature; in some, it remains unclear how the authors arrived at the functions they list. Some of these works refer to only a single function of music—most often because that functional aspect was investigated not with a focus on music but with a focus on other psychological phenomena. Yet other works list extensive collections of purported musical functions.

Works that refer to only a single functional aspect of music include possible therapeutic functions of music in clinical settings (Cook, 1986; Frohne-Hagemann and Pleß-Adamczyk, 2005), the use of music for symbolic exclusion in political terms (Bryson, 1996), the syntactic, semantic, and mediatizing use of film music (Maas, 1993), and the use of music to manage physiological arousal (Bartlett, 1996).

The vast majority of publications identify several possible musical functions, most of which—as stated above—are clearly focused on social aspects. Several comprehensive collections have been assembled, such as those by Baacke (1984), Gregory (1997), Ruud (1997), Roberts and Christenson (2001), Laiho (2004), and Engh (2006). Most of these studies identified a very large number of potential functions of music.

By way of summary, there exists a long tradition of theorizing about the potential functions of music. Although some of these theories have been deduced from a prior theoretical framework, none was the result of empirical testing or exploratory data-gathering. In the ensuing section, we turn to consider empirically-oriented research regarding the number and nature of potential musical functions.

Empirical investigations

A number of studies have approached the functions of music from an empirical perspective. Two main approaches might be distinguished. In the first approach, the research aim is to uncover or document actual musical functioning, that is, to observe or identify one or more ways in which music is used in daily life. In the second approach, the research goal is to infer the structure or pattern underlying the use of music, that is, to uncover potential basic or fundamental dimensions implied by the multiple functions of music. This is mostly done using PCA, factor analysis, or cluster analysis to reduce a large number of functions to only a few basic dimensions. In some cases, the analyses are run in an exploratory fashion, whereas in other cases they are run in a confirmatory way, that is, with a predefined number of dimensions. The empirical studies can be categorized according to several criteria (see Table A2). However, when discussing some of the most important works here, we will separate studies where respondents were asked about the functions of music in open surveys from studies where the authors provided their own collections of functions, based on either literature research or face validity.

Surveys about the functions music can have

A number of studies have attempted to chronicle the broad range of musical functions. Most of these studies employed surveys in which people were asked to identify the ways in which they make use of music in their lives. In some studies, expert interviews were conducted in order to identify possible functions. Table A2 provides a summary of all the pertinent studies, including their collections of functions and—where applicable—their derived underlying dimensions. We will restrict our ensuing remarks to the largest and most comprehensive studies.

Chamorro-Premuzic and Furnham (2007) identified 15 functions of music among students and subsequently ran focus groups from which they distilled three distinct dimensions: emotional use, rational use, and background use. Some of the largest surveys have been carried out by Boer (2009). She interviewed more than a thousand young people in different countries and assembled a comprehensive collection of musical functions. Using factor analysis, she found 10 underlying dimensions: emotion, friends, family, venting, background, dancing, focus, values, politics, and culture. Lonsdale and North (2011, Study 1) pursued a uses-and-gratifications approach. They identified 30 musical uses that could be reduced to six distinct dimensions. In a related study employing a larger sample, the same authors came up with eight distinct dimensions: identity, positive and negative mood management, reminiscing, diversion, arousal, surveillance, and social interaction (Lonsdale and North, 2011, Study 4). When interviewing older participants, Hays and Minichiello (2005) qualitatively identified the dimensions linking, life events, sharing and connecting, wellbeing, therapeutic benefits, escapism, and spirituality.

The various surveys and interview studies clearly diverge with regard to the number of different musical functions. Similarly, the various cluster and factor analyses often end up producing different numbers of distinct dimensions. Nevertheless, the results are often quite similar. On a very broad level, four categories appear consistently: social functions, emotional functions, cognitive or self-related functions, and physiological or arousal-related functions (see also Hargreaves and North, 1999; Schäfer and Sedlmeier, 2009, 2010).

Empirical studies using predefined collections of functions of music

Apart from the open-ended surveys and interview methods, a number of studies investigating musical functions begin with researcher-defined collections or even categories/dimensions. Some of these predefined collections or categories/dimensions were simply borrowed from the existing published research, whereas others were derived from specific theoretical perspectives.

Empirical studies on functions of music emerging from specific theoretical approaches. Some of the above-mentioned theoretical approaches to the functionality of music have been investigated in empirical studies. Boehnke and Münch (2003) developed a model of the relationship between adolescents' development, music, and media use. They proposed seven functions of music that relate to the developmental issues of young people (such as peer group integration, physical maturation, or identity development). In two studies with a large number of participants, Lonsdale and North (2011) applied the model of media gratification (from McQuail et al., 1972) and used a collection of 30 functions of music they assembled from literature research and interviews. In both studies, they ran factor analyses, reducing the number of functions to six and eight dimensions, respectively. Lehmann (1994) developed a situations-functions-preference model and proposed that music preferences emerge from the successful use of music to serve specific functions for the listener, depending on the current situation. Lehmann identified 68 ways in which people use music, which he was able to reduce to 15 music reception strategies (Rezeptionsweisen) such as compensation/escapism, relaxation, and identification. Misenhelter and Kaiser (2008) adopted Merriam's (1964) anthropological approach and attempted to identify the functions of music in the context of music education. They surveyed teachers and students and found six basic functions that were quite similar to the ones proposed by Merriam (1964). Wells and Hakanen (1997) adopted Zillmann's (1988a,b) mood management theory and identified four types of users with regard to the emotional functions of music: mainstream, music lover, indifferent, and heavy rockers.

Empirical studies on functions of music emerging from literature research. A number of studies have made use of predefined musical functions borrowed from the existing research literature. The significance of these functions and/or their potential underlying structure has then been empirically investigated using different samples. As mentioned, not all of these studies tried to assemble an exhaustive collection of musical functions in order to produce a comprehensive picture; many focused instead on specific aspects such as the emotional, cognitive, or social functions of music.

Schäfer and Sedlmeier (2009) collected 17 functions of music from the literature and found that functions related to the management of mood and arousal, as well as self-related functions, were the ones people most strongly ascribed to their favorite music. Tarrant et al. (2000) used a collection of 10 functions of music from the literature and factor analyzed them, resulting in three distinct dimensions of music use: self-related, emotional, and social.

Sun and Lull (1986) collected 18 functions of music videos and were able to reduce them to four dimensions: social learning, passing time, escapism/mood, and social interaction. Melton and Galician (1987) identified 15 functions of radio music and music videos, and Greasley and Lamont (2011) likewise collected 15 functions of music. Ter Bogt et al. (2011) collected 19 functions of music from the literature and used confirmatory factor analysis to group them into five dimensions. In a clinical study with adolescents, Walker Kennedy (2010) found 47 functions of music that could be reduced to five dimensions.

By way of summary, extant empirical studies have used either an open approach—trying to capture the variety of musical functions in surveys or questionnaire studies—or predefined collections of functions derived from specific theoretical approaches or from literature research. These different approaches have led to quite heterogeneous collections of possible musical functions—from only a few functions posited by a specific hypothesis to long lists arising from open surveys. Moreover, although the many attempts to distill the functions of music into fewer dimensions have produced some points of agreement, the overall picture remains unclear.

The structure among the functions of music

With each successive study of musical functions, the aggregate list of potential uses has grown longer. Questionnaire studies, in particular, have led to the proliferation of possible ways in which music may be relevant in people's lives. Even if one sidesteps the question of possible evolutionary origins, the multitude of hundreds of proposed functions raises the question of whether these might not be distilled to a smaller set of basic dimensions.

As noted earlier, previous research appears to converge on four dimensions: social functions (such as the expression of one's identity or personality), emotional functions (such as the induction of positive feelings), cognitive or self-related functions (such as escapism), and arousal-related functions (such as calming down or passing time). These four dimensions might well account for the basic ways in which people use music in their daily lives.

Notice that cluster analysis and PCA/factor analysis presume that the research begins with a range of variables that ultimately captures all of the factors or dimensions pertaining to the phenomenon under consideration. The omission of even a single variable can theoretically lead to incomplete results if that variable proves to share little variance with the other variables. For example, in studying the factors that contribute to a person's height, the failure to include a variable related to developmental nutrition will lead to misleading results; one might wrongly conclude that only genetic factors are important. The validity of these analyses depends, in part, on including a sufficient range of variables so that all of the pertinent factors or dimensions are likely to emerge.
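
This omitted-variable concern can be made concrete with a small simulation (hypothetical data, not from the study): six observed variables load on two latent factors, and dropping all indicators of one factor makes that factor invisible to a component analysis.

```python
import numpy as np

# Hypothetical illustration: two latent factors, each measured by
# three observed variables with modest measurement noise.
rng = np.random.default_rng(7)
n = 1000
f1 = rng.normal(size=(n, 1))  # e.g., a "genetic" factor
f2 = rng.normal(size=(n, 1))  # e.g., a "nutrition" factor
full = np.hstack([f1 + 0.3 * rng.normal(size=(n, 3)),
                  f2 + 0.3 * rng.normal(size=(n, 3))])

def n_strong_components(data, threshold=1.0):
    """Count correlation-matrix eigenvalues above the Kaiser threshold."""
    z = (data - data.mean(0)) / data.std(0)
    eig = np.linalg.eigvalsh((z.T @ z) / (len(z) - 1))
    return int((eig > threshold).sum())

print(n_strong_components(full))         # expected: 2 (both factors recovered)
print(n_strong_components(full[:, :3]))  # expected: 1 (second factor invisible)
```

Because the first three variables carry no information about the second factor, an analysis restricted to them recovers only one dimension, exactly the kind of incompleteness described above.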

Accordingly, we propose to address the question of musical functions anew, starting with the most comprehensive list yet of potential music-related functions. In addition, we aim to recruit a sample of participants covering all age groups and a wide range of socio-economic backgrounds, and to pursue our analysis without biasing the materials toward any specific theory.

Fundamental functions of music—a comprehensive empirical study

The large number of functions of music that research has identified during the last decades has raised the question of a potential underlying structure: Are there functions that are more fundamental and are there others that can be subsumed under the fundamental ones? And if so, how many fundamental functions are there? As we have outlined above, many scientists have been in search of basic distinct dimensions among the functions of music. They have used statistical methods that help uncover such dimensions among a large number of variables: factor analyses or cluster analyses.

However, as we have also seen, the approaches and methods have been as different as the various functions suggested. For instance, some scholars have focused exclusively on the social functions of music while others have been interested in only the emotional ones; some used only adolescent participants while others consulted only older people. Thus, these researchers arrived at different categorizations according to their particular approach. To date, there is still no conclusive categorization of the functions of music into distinct dimensions, which continues to hamper psychological studies that rely on music and its effects on cognition, emotion, and behavior (see also Stefanija, 2007). Although there exist some theoretically driven claims about what the fundamental dimensions might be (Tarrant et al., 2000; Laiho, 2004; Schubert, 2009; Lonsdale and North, 2011), there has been no large-scale empirical study that analyzed the number and nature of distinct dimensions using the broad range of all potential musical functions known so far, all at once.

We sought to remedy this deficiency by assembling an exhaustive list of the functions of music that have been identified in past research and putting them together in one questionnaire study. Based on the research reviewed in the first part of this study, we identified more than 500 items concerned with musical use or function. Specifically, we assembled an aggregate list of all the questions and statements encountered in the reviewed research that were either theoretically derived or used in empirical studies. Of course, many of the items are similar, analogous, or true duplicates. After eliminating or combining redundant items, we settled on a list of 129 distinct items. All of the items were phrased as statements in the form “I listen to music because … ” The complete list of items is given in Table A3, together with their German versions as used in our study.

Participants were asked to rate how strongly they agreed with each item-statement on a scale from 0 (not at all) to 6 (fully agree). When responding to the items, participants were instructed to think of any style of music and any situation in which they would listen to music. In order to obtain a sample that was heterogeneous with regard to age and socioeconomic background, we distributed flyers promoting the Internet link to our study in a local electronics superstore. Recruitment of participants was further pursued via mailing lists of German universities, students from comprehensive schools, and members of a local choir. As an incentive, respondents got the chance to win a tablet computer. A total of 834 people completed the survey. Respondents ranged from 8 to 85 years of age (M = 26, SD = 10.4; 57% female).

Notice that in carrying out such a survey, we are assuming that participants have relatively accurate introspective access to their own motivations for pursuing particular musical behaviors, and that they are able to accurately recall the appropriate experiences. Of course, there exists considerable empirical research casting doubt on the accuracy of motivational introspection in self-report tasks (e.g., Wilson, 2002; Hirstein, 2005; Fine, 2006). These caveats notwithstanding, in light of the limited options for gathering pertinent empirical data, we chose to pursue a survey-based approach.

Principal component analysis revealed three distinct dimensions behind the 129 items (together accounting for about 40% of the variance), based on the scree plot. This solution was consistent across age groups and genders. The first dimension (explaining 15.2% of the variance) includes statements about self-related thoughts (e.g., music helps me think about myself), emotions and sentiments (e.g., music conveys feelings), absorption (e.g., music distracts my mind from the outside world), escapism (e.g., music makes me forget about reality), coping (e.g., music makes me believe I'm better able to cope with my worries), solace (e.g., music gives comfort to me when I'm sad), and meaning (e.g., music adds meaning to my life). It appears that this dimension expresses a very private relationship with music listening. Music helps people think about who they are, who they would like to be, and how to cut their own path. We suggest labeling this dimension self-awareness. The second dimension (13.7% of the variance) includes statements about social bonding and affiliation (e.g., music helps me show that I belong to a given social group; music makes me feel connected to my friends; music tells me how other people think). People can use music to feel close to their friends, to express their identity and values to others, and to gather information about their social environment. We suggest labeling this dimension social relatedness. The third dimension (10.2% of the variance) includes statements about the use of music as background entertainment and diversion (e.g., music is a great pastime; music can take my mind off things) and as a means to get into a positive mood and regulate one's physiological arousal (e.g., music can make me cheerful; music helps me relax; music makes me more alert). We suggest labeling this dimension arousal and mood regulation. All factor loadings are reported in Table A3.
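
The dimension-reduction step described above can be sketched as follows. The data below are simulated stand-ins for the actual 834 × 129 ratings matrix; this is an illustration of correlation-matrix PCA with a scree-style inspection, not the authors' analysis code.

```python
import numpy as np

# Simulated stand-in for the 834 x 129 ratings matrix (0-6 scale).
rng = np.random.default_rng(42)
ratings = rng.integers(0, 7, size=(834, 129)).astype(float)

# PCA on the correlation matrix: standardize items, then eigendecompose.
z = (ratings - ratings.mean(0)) / ratings.std(0)
corr = (z.T @ z) / (len(z) - 1)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted in descending order

# Scree criterion: plot eigvals and look for the "elbow"; the variance
# share of each component is its eigenvalue divided by the total.
explained = eigvals / eigvals.sum()
print(explained[:3].sum())  # variance captured by the first three components
```

With the real data, the first three components together accounted for about 40% of the variance; with the uncorrelated simulated ratings above, the leading components capture far less, which is precisely what a scree plot is used to diagnose.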

In order to analyze the relative significance of the three derived dimensions for listeners, we averaged the ratings for all items contained in each dimension (see Figure 1). Arousal and mood regulation proved to be the most important dimension of music listening, closely followed by self-awareness. These two dimensions appear to represent the two most potent reasons offered by people to explain why they listen to music, whereas social relatedness seems to be a relatively less important reason (ranging below the scale mean). This pattern was consistent across genders, socioeconomic backgrounds, and age groups. All differences between the three dimensions are significant (all ps < 0.001). The reliability indices (Cronbach's α) are α = 0.97 for the first, α = 0.96 for the second, and α = 0.92 for the third dimension.
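For reference, the reliability index reported here, Cronbach's α, is straightforward to compute from a respondents-by-items rating matrix. A minimal sketch on simulated data (the item count and rating values are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) rating matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Hypothetical dimension of 10 items that all track one latent rating,
# plus modest item-specific noise -> high internal consistency.
latent = rng.normal(3.5, 1.0, size=(500, 1))
ratings = latent + 0.4 * rng.standard_normal((500, 10))
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))
```

Because every item shares the same latent signal, α comes out well above 0.9, mirroring the high reliabilities the dimensions show here.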

Figure 1. The three distinct dimensions emerging from 129 reasons for listening to music. Error bars are 95% confidence intervals. Self-awareness: M = 3.59 (SE = 0.037); social relatedness: M = 2.01 (SE = 0.035); arousal and mood regulation: M = 3.78 (SE = 0.032).
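The 95% error bars follow directly from the reported means and standard errors via CI = M ± 1.96·SE; a quick check with the values given in the caption:

```python
# 95% confidence intervals from the reported means and standard errors,
# CI = M +/- 1.96 * SE (values taken from the Figure 1 caption).
reported = {
    "self-awareness": (3.59, 0.037),
    "social relatedness": (2.01, 0.035),
    "arousal and mood regulation": (3.78, 0.032),
}
intervals = {
    name: (round(m - 1.96 * se, 2), round(m + 1.96 * se, 2))
    for name, (m, se) in reported.items()
}
# The interval for social relatedness lies well below the other two,
# consistent with its markedly lower importance ratings.
print(intervals)
```

Note that non-overlapping confidence intervals are consistent with, but not a substitute for, the significance tests reported in the text.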

General discussion

Since the earliest writings on the psychology of music, researchers have been concerned with the many ways in which people use music in their lives. In the first part of this paper, we reviewed literature spanning psychological, musicological, biological, and anthropological perspectives on musical function. The picture that emerged from our review was somewhat confusing. Surveying the literature from the past 50 years, we identified more than 500 purported functions for music. From this list, we distilled a comprehensive set of 129 non-redundant musical functions. We then tested the verisimilitude of these posited functions by collecting survey responses from a comparatively large sample. PCA revealed just three distinct dimensions: People listen to music to achieve self-awareness, social relatedness, and arousal and mood regulation. We propose calling these the Big Three of music listening.

In part one of our study we noted that several empirical studies suggest grouping musical functions according to four dimensions: cognitive, emotional, social/cultural, and physiological/arousal-related functions. This raises the question of how our three-dimensional result might be reconciled with the earlier work. We propose that there is a rather straightforward interpretation that allows the four-dimensional perspective to be understood within our three-dimensional result. Cognitive functions are captured by the first dimension (self-awareness); social/cultural functions are captured by the second dimension (social relatedness); physiological/arousal-related functions are captured by the third dimension (arousal and mood regulation); and emotional functions are captured by the first and third dimensions (self-awareness + arousal and mood regulation). Notably, as can be seen with the items in Table A3, there is a dissociation of emotion-related and mood-related functions. Emotions clearly appear in the first dimension (e.g., music conveys feelings; music can lighten my mood; music helps me better understand my thoughts and emotions), indicating that they might play an important role in achieving self-awareness, probably in terms of identity formation and self-perception. However, the regulation of moods clearly appears in the third dimension (e.g., music makes me cheerful; music can enhance my mood; I'm less bored when I listen to music), suggesting that moods are not central issues pertaining to identity. Along with the maintenance of a pleasant level of physiological arousal, the maintenance of pleasant moods is an effect of music that might rather be utilized as a “background” strategy, that is, one not requiring deep or attentive involvement in the music. The regulation of emotions, on the other hand, could be a much more conscious strategy requiring deliberate attention and devotion to the music.
Music psychology has so far not made a clear distinction between music-related moods and emotions, and the several existing conceptions of music-related affect remain contentious (see Hunter and Schellenberg, 2010 ). Our results appear to call for a clearer distinction between moods and emotions in music psychology research.

As noted earlier, a presumed evolutionary origin for music need not be reflected in modern responses to music. Nevertheless, it is plausible that continuities exist between modern responses and possible archaic functions. Hence, the functions apparent in our study may echo possible evolutionary functions. The three functional dimensions found in our study are compatible with nearly all of the ideas about the potential evolutionary origin of music mentioned in the introduction. The idea that music had evolved as a means for establishing and regulating social cohesion and communication is consistent with the second dimension. The idea of music satisfying the basic human concerns of anxiety avoidance and quest for meaning is consistent with the first dimension. And the notion that the basic function of music could have been to produce dissociation and pleasure in the listener is consistent with the third dimension.

In light of claims that music evolved primarily as a means for promoting social cohesion and communication—a position favored by many scholars—the results appear noteworthy. Seemingly, people today hardly listen to music for social reasons, but instead use it principally to relieve boredom, maintain a pleasant mood, and create a comfortable private space. Such a private mode of music listening might simply reflect a Western emphasis on individuality: self-acknowledgement and well-being appear to be more highly valued than social relationships and relatedness (see also Roberts and Foehr, 2008 ; Heye and Lamont, 2010 ).

The results of the present study may be of interest to psychologists who use music as a tool or stimulus in their research. The way people usually listen to music outside the laboratory will surely influence how they respond to musical stimuli in psychological experiments, so researchers who employ music in their studies should pay some attention to how it is used in everyday life. The three dimensions uncovered in this study provide a parsimonious means to identify the value a person sets on each of three different types of music use. It is also conceivable that individual patterns of music use are related to personality traits, a conjecture that may warrant future research.

With regard to music cognition, the present results are especially relevant to studies about aesthetic preferences, style or genre preferences, and musical choice. Recent research suggests that musical functions play an important role in the formation and development of music preferences (e.g., Schäfer and Sedlmeier, 2009 ; Rentfrow et al., 2011 ). It will be one of the future tasks of music cognition research to investigate the dependence of music preference and music choice on the functional use of music in people's lives.

By way of summary, in a self-report study, we found that people appear to listen to music for three major reasons, two of which are substantially more important than the third: music offers a valued companion and helps provide a comfortable level of activation and a positive mood, whereas its social importance may have been overvalued.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Table A1. Overview of theoretical contributions that have derived, proposed, or addressed more than one function or functional aspect of music listening.


Table A2. Overview of empirical studies that have identified and/or investigated more than one function or functional aspect of music listening.


In some places, we could only provide exemplary functions because either the total number of functions was too large to be displayed here or not all functions were given in the original publications.

Table A3. The 129 statements referring to the functions of music, exhaustively derived from past research, together with their means, standard deviations, and factor loadings (varimax rotated).

Dimension 1, self-awareness; Dimension 2, social relatedness; Dimension 3, arousal and mood regulation.

A survey of music emotion recognition

Frontiers of Computer Science, volume 16, article number 166335 (2022)


Music is the language of emotions. In recent years, music emotion recognition has attracted widespread attention in academic and industrial communities, since it can be widely used in fields like recommendation systems, automatic music composing, psychotherapy, music visualization, and so on. Especially with the rapid development of artificial intelligence, deep learning-based music emotion recognition is gradually becoming mainstream. This paper gives a detailed survey of music emotion recognition. Starting with some preliminary knowledge of music emotion recognition, this paper first introduces some commonly used evaluation metrics. Then a three-part research framework is put forward. Based on this framework, the knowledge and algorithms involved in each part are introduced with detailed analysis, including commonly used datasets, emotion models, feature extraction, and emotion recognition algorithms. After that, the challenging problems and development trends of music emotion recognition technology are discussed, and finally the whole paper is summarized.
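The dimensional emotion models such surveys cover (e.g., Russell-style valence-arousal representations) are often operationalized by predicting a point in the valence-arousal plane and, where discrete labels are needed, discretizing it into quadrants. A minimal sketch of that mapping; the quadrant labels here are conventional examples, not a fixed standard:

```python
# Illustrative sketch of the dimensional (valence-arousal) emotion model
# often used in music emotion recognition: a regressor predicts a point
# in the valence-arousal plane, which can then be discretized into
# quadrants when a categorical label is required.

def va_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) prediction in [-1, 1]^2 to a coarse label."""
    if valence >= 0:
        return "happy/excited" if arousal >= 0 else "calm/content"
    return "angry/tense" if arousal >= 0 else "sad/depressed"

print(va_quadrant(0.7, 0.6))    # positive valence, high arousal
print(va_quadrant(-0.5, -0.4))  # negative valence, low arousal
```

Continuous-time approaches (as in the dynamic-prediction work the survey discusses) apply the same representation per time frame rather than per clip.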




Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61672144, 61872072, 61173029) and the National Key R&D Program of China (2019YFB1405302).

Author information

Authors and Affiliations

School of Computer Science and Engineering, Northeastern University, Shenyang, 110000, China

Donghong Han & Yanru Kong

Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, 200082, China

Jiayi Han

School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100089, China

Guoren Wang


Corresponding author

Correspondence to Donghong Han.

Additional information

Donghong Han received the PhD degree from Northeastern University, China in 2007. She is currently an associate professor with the School of Computer Science and Engineering, Northeastern University, China. She is a reviewer for Applied Intelligence, IEEE Transactions on Cybernetics, Frontiers of Information Technology & Electronic Engineering, etc. She has more than 40 publications to date. Her current research interests include data flow management, uncertain data flow analysis, and social network sentiment analysis. She is a member of the China Computer Federation (CCF) and a member of the Chinese Information Processing Society, Social Media Processing.

Yanru Kong received the BS degree from Shandong University of Science and Technology, China in 2018. She is working toward the MS degree in computer science at Northeastern University, China. Her current research interests include natural language processing, sentiment analysis, and music emotion recognition.

Jiayi Han received the BS degree from Northeastern University, China in 2018. He is currently pursuing a PhD degree at the Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, China. His research interests focus on facial expression recognition and medical imaging. He has published a paper at ICBEB.

Guoren Wang received the PhD degree in computer science from Northeastern University, China in 1996. He is currently a professor with the School of Computer Science & Technology, Beijing Institute of Technology, China. He has published about 300 journal and conference papers. He received the National Science Fund for Distinguished Young Scholars in 2010. His current research interests include uncertain data management, data-intensive computing, visual media data management and analysis, unstructured data management, distributed query processing and optimization technology, and bioinformatics. He is the vice chairman of the China Computer Federation Technical Committee on Databases (CCF TCDB) and an expert review member of the National Natural Science Foundation of China, Information Science Department.


About this article


Han, D., Kong, Y., Han, J. et al. A survey of music emotion recognition. Front. Comput. Sci. 16, 166335 (2022). https://doi.org/10.1007/s11704-021-0569-4


Received: 29 November 2020; Accepted: 25 June 2021; Published: 22 January 2022


Smart music player integrating facial emotion recognition and music mood recommendation


Music Literature Review

This page is a collection of relevant references on music, with a focus on domain-specific notation and accessibility implications. It is part of RQTF's activity looking at accessibility and domain-specific notation. The reference list shouldn't be considered complete or definitive, and is likely to regularly undergo formatting improvement and reorganization to support the review and analysis process.

Publications with direct link to music and accessibility/disability

Tensions and Perplexities Within Teacher Education and P–12 Schools for Music Teachers With Visual Impairments

Abstract: We have written this article seeking to connect societal perceptions of disability with P–12 schools and higher education institutions toward the goal of greater understanding and equitable employment opportunities for music teachers with disabilities, specifically teacher candidates with visual impairment. In our investigation, we examine the following questions: (a) How have special education programs within P–12 schools, universities, and schools of music reflected societal perceptions of persons with disabilities and how do those in turn influence perceptions of teacher candidates? (b) How have the essential functions of teaching been articulated by accreditation programs and what tensions arise when music teachers with visual impairments are considered for employment? and (c) What are potential ways forward for P–12 education, teacher education programs, and schools of music?

To disrupt binaries between able and disabled in schools, we recommend embracing a broader, interdependent view of music education, one that is defined by and includes all teaching professionals and school communities. Additionally, we support recruitment of teacher candidates with disabilities to music education programs and consistent advocacy through matriculation and job placement to encourage entry into P–12 schools.

A Narrative of Two Preservice Music Teachers With Visual Impairment

Abstract: The purpose of this narrative inquiry was to re-story the student teaching experience of two preservice music education majors who are visually impaired or blind. While music education scholars have devoted attention to P–12 students with disabilities, research with preservice music teachers with impairments is seemingly nonexistent.

Using a transformative paradigm and social model of disability as lenses, we retell participants’ experiences across three commonplaces of narrative inquiry: sociality, temporality, and place. Participants told their student teaching stories through various field texts, including interviews, journals, emails, and informal conversations.

Three particular issues were highlighted strongly within their narratives: accessible music, reliance on others, and individuals’ attitudes. Issues of what constitutes effective teaching, teacher identity construction, and preparedness for working with individuals with disabilities also emerged. Multiple avenues are suggested for practice, research, and policy in music, teacher education, and teachers with disabilities.

Perceptions of schooling, pedagogy and notation in the lives of visually-impaired musicians

Abstract: This article discusses findings on schooling, pedagogy and notation in the life-experiences of amateur and professional visually-impaired musicians/music teachers, and the professional experiences of sighted music teachers who work with visually-impaired learners. The study formed part of a broader UK Arts and Humanities Research Council funded project, officially entitled “Visually-impaired musicians’ lives: Trajectories of musical practice, participation and learning”, but which came to be known as “Visually-impaired musicians’ lives” (VIML). VIML was led at the UCL Institute of Education, London, UK and supported by the Royal Academy of Music, London, and Royal National Institute of Blind People (RNIB) UK, starting in 2013 and concluding in 2015. It sourced “insider” perspectives from 225 adult blind and partially-sighted musicians/music teachers, and 6 sighted music teachers, through life history interviews and an international questionnaire, which collected quantitative and qualitative data.

Through articulating a range of “insider” voices, this article examines some issues, as construed by respondents, around educational equality and inclusion in music for visually-impaired children and adults in relation to three main areas: the provision of mainstream schooling versus special schools; pedagogy, including the preparedness of teachers to respond to the needs of visually-impaired learners; and the educational role of notation, focusing particularly on Braille as well as other print media.

The investigation found multifaceted perspectives on the merits of visually-impaired children being educated in either mainstream or special educational contexts. These related to matters such as access to specific learning opportunities, a lack of understanding of visually-impaired musicians’ learning processes (including accessible technologies and score media) in mainstream contexts, and concerns about the knowledge of music educators in relation to visual impairment. Regarding pedagogy, there were challenges raised, but also helpful areas for sighted music educators to consider, such as differentiation by sight condition and approach, and the varying roles of gesture, language, light and touch. There was diversity in musical participation of visually-impaired adult learners, along with some surprising barriers as well as opportunities linked to different genres and musical contexts, particularly in relation to various print media, and sight reading.

A Comparative Case Study of Learning Strategies and Recommendations of Five Professional Musicians With Dyslexia

Abstract: Many of the characteristics of dyslexia—such as difficulties with decoding written symbols, phonemic awareness, physical coordination, and readable handwriting—may adversely affect music learning. Despite challenges, individuals with dyslexia can succeed in music.

The purpose of this study was to examine the perceptions of five professional musicians with dyslexia as they reflect on their experiences learning music. Answers to the following research questions were sought: (a) What are the perceived abilities and challenges that the participants believe they have developed in music because of their diagnoses of dyslexia? (b) What strategies have the participants used to overcome the challenges associated with dyslexia? and (c) What recommendations did the participants have for adults to assist students with dyslexia who are enrolled in school music programs?

The findings in this study included support for multisensory teaching, isolating musical components, learning of jazz and popular music, using technology, and small group instruction.

Planning for Student Variability: Universal Design for Learning in the Music Theory Classroom and Curriculum

Abstract: Universal Design for Learning (UDL) embodies a set of principles for developing accessible curricula and inclusive classroom learning environments. It is a flexible framework that can be adapted to the individual needs and predilections of a diverse set of learners, including students with disabilities. UDL can reduce the need for individual accommodations for disabled students, but its goal is to enhance learning for all students. Research and practical applications have demonstrated that designing curricula that are intended to provide greater access to learners who are in the margins also benefits many other learners. The objective of UDL is to develop expert learners throughout a curriculum by providing multiple means for learning, engagement, and demonstration at each level of instruction. The core music theory and musicianship curriculum taught at most colleges and universities will benefit from the guidelines established for UDL, and these are adaptable to various forms of curricular content. This article provides an overview of the history of UDL and its guidelines, and then proposes strategies for their implementation that are specific to music theory and musicianship pedagogy at the planning phase of course design, including assessment. The discussion engages learning typologies as a means for addressing learner variability throughout the course design.

Music lessons from a tablet computer: The effect of incorporating a touchscreen device in teaching music staff notation to students with dyslexia

Abstract: The purpose of this study was to examine the effectiveness of a software application for guided practice on a tablet computer, used as a multisensory instructional tool in the process of teaching music staff notation to students who have dyslexia. Between 15 and 20% of people in the United States may have dyslexia or related learning differences in the form of difficulties with reading and language processing. Having dyslexia does not preclude engagement in playing music; however, evidence shows that students with dyslexia often have trouble learning how to read music notation (Ganschow, Lloyd-Jones & Miles, 1994; Miles & Westcombe, 2004; Stewart, 2008).

Technology, specifically the tablet computer, has the potential to address the individual needs of students in the domain of music, and a variety of applications have been created for teaching and practicing the recognition of musical notation. The theoretical framework underlying the study was based on two theories related to the learning process of students with dyslexia: the phonological deficit theory and the dyslexia automatization deficit theory.

A quasi-experimental design was employed using intact classes of third-, fourth-, and fifth-grade students (N = 72) who attended an academy for students with dyslexia. The students were taught a series of lessons on reading music staff notation over seven weeks, with the same teacher teaching all classes. The treatment classes were given time for guided practice of music staff notation on the tablet; the control classes used the tablets for the same amount of time with other music applications but were not given access to the specific treatment program. Data used to tabulate the results of the study were collected with pre- and posttests of music staff notation recognition.

The overall conclusion was that the use of the tablet for guided-practice in conjunction with instruction was significantly more effective at increasing the ability of students to recognize musical staff notation than using instruction alone.

Go-with-the-flow : Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain

Abstract: Chronic (persistent) pain (CP) affects one in ten adults. Clinical resources are insufficient, and anxiety about activity restricts lives. Technological aids monitor activity but lack necessary psychological support. This paper proposes a new sonification framework, Go-with-the-Flow , informed by physiotherapists and people with CP. The framework proposes articulation of user-defined sonified exercise spaces (SESs) tailored to psychological needs and physical capabilities that enhance body and movement awareness to rebuild confidence in physical activity. A smartphone-based wearable device and a Kinect-based device were designed based on the framework to track movement and breathing and sonify them during physical activity.

In control studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement and relaxation with sound feedback. Home studies, a focus group and a survey of CP patients conducted at the end of a hospital pain management session provided an in-depth understanding of how different aspects of the SESs and their calibration can facilitate self-directed rehabilitation and how the wearable version of the device can facilitate transfer of gains from exercise to feared or demanding activities in real life.

We conclude by discussing the implications of our findings on the design of technology for physical rehabilitation.
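The core idea of a sonified exercise space, mapping a tracked movement value to a sound parameter within a calibrated range, can be sketched in a few lines. The calibration values and the linear pitch mapping below are illustrative assumptions, not the paper's actual design:

```python
def movement_to_pitch(angle_deg, target_min=0.0, target_max=90.0,
                      freq_lo=220.0, freq_hi=880.0):
    """Map a tracked joint angle to a pitch frequency (Hz).

    The target range would be calibrated per user; the values here are
    illustrative. Angles outside the range are clamped so the sound
    feedback stays within a comfortable band.
    """
    # Clamp to the calibrated exercise range.
    a = max(target_min, min(target_max, angle_deg))
    # Normalize to 0..1 across the user's range.
    t = (a - target_min) / (target_max - target_min)
    # Linear interpolation between the two boundary frequencies.
    return freq_lo + t * (freq_hi - freq_lo)

print(movement_to_pitch(45.0))  # midpoint of the calibrated range
```

In a real sonified exercise space the per-user range would come from a calibration session, and the output would drive a synthesizer rather than a print statement.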

Strategies for Successfully Teaching Students with ADD or ADHD in Instrumental Lessons

Abstract: Teachers can easily encounter students with Attention Deficit Disorder (ADD) or Attention Deficit Hyperactivity Disorder (ADHD) in the instrumental lesson setting. Applicable to instrumental lesson settings in the public or private schools, private studios, or college studios, this article focuses on specific strategies ranging from the organization of the teaching studio to the instructional delivery that can help students with ADD and ADHD achieve their highest musical potential. By making small changes in studio arrangement/decoration, maintaining open lines of communication with parents, and understanding some key elements that can affect students’ ability to most efficiently learn, instrumental lesson teachers can improve the learning not only of students with ADD or ADHD, but of all students. 

Music Training Interface for Visually Impaired through a Novel Approach to Optical Music Recognition

Abstract: Some inherent barriers that limit human abilities can, surprisingly, be overcome through technology. This research focuses on defining a more reliable and controllable interface for visually impaired people to read and study Eastern music notations, which are widely available in printed format. A further guiding concept was that differently-abled people should be assisted in a way that lets them pursue the tasks they are interested in independently.

The research provides the means to continue investigating the validity of using a controllable auditory interface instead of Braille music scripts converted with the help of third parties. It further summarizes the requirements raised by the relevant users, the design considerations, and the evaluation results from user feedback on the proposed interface.

Teaching Music to Blind Children: New Strategies for Teaching through Interactive Use of Musibraille Software

Abstract: This paper presents a methodology for teaching music to blind children based on interaction with the Musibraille software, to which specific functions were added so it can support the activities of basic music education. The main activities and related functions are described and illustrated. Some essential characteristics of the project are also briefly presented, especially to explain the major changes it has produced in the role of Braille music and in the music education of blind people in Brazil.

Accessible presentation of information for people with visual disabilities

Abstract: Personal computers, palm top computers, media players and cell phones provide instant access to information from around the world. There are a wide variety of options available to make that information available to people with visual disabilities, so many that choosing one for use in any given context can often feel daunting to someone new to the field of accessibility. This paper reviews tools and techniques for the presentation of textual, graphic, mathematic and web documents through audio and haptic modalities to people with visual disabilities.

Intelligent computing technologies in music processing for blind people

Abstract: The goal of this paper is to discuss the involvement of knowledge-based methods in the implementation of user-friendly computer programs for disabled people. The paper presents a concept for a computer program intended to aid blind people dealing with music and music notation. The concept is based solely on computational intelligence methods involved in the implementation of the program.

The program is built around two research fields, information acquisition and knowledge representation and processing, both of which remain research and technology challenges. The information acquisition module recognizes printed music notation and stores the acquired information in computer memory; it is a kind of paper-to-memory data-flow technology. The acquired music information stored in computer memory is then subjected to mining implicit relations between music data, to creating a space of music information, and then to manipulating that music information. Storing and manipulating music information are firmly based on knowledge processing methods.

The program described in this paper involves techniques of pattern recognition and knowledge representation as well as contemporary programming technologies. It is designed for blind people: music teachers, students, hobbyists, musicians.

BMML: Braille Music Markup Language

Abstract: Thanks to the WAI (Web Accessibility Initiative) guidelines for producing accessible HTML documents, visually impaired people can have better access to a lot of textual information. Concerning musical score, several encoding formats are available, focusing on the representation of different aspects of this kind of content. As XML is the standard for exchanging content through the Web, several XML applications have already been specified for representing musical scores, using the traditional music notation. As a result, users can access and share a lot of different types of musical content using the Web. However, for specific notations - like the Braille one - no dedicated XML application has been developed yet. Therefore, visually impaired musicians cannot easily represent, share, and access scores using the Web.

This paper presents the application we have developed to respond to this need: BMML (Braille Music Markup Language). BMML handles specificities of Braille Music notation and takes into account the core features of existing formats. The main objective of BMML is to improve the accessibility of Braille musical scores.

Transformation frameworks and their relevance in universal design

Abstract: Music, engineering, mathematics, and many other disciplines have established notations for writing their documents. Adjusting these notations can contribute to universal access by helping to address access difficulties arising from disabilities, cultural backgrounds, or restrictive hardware. Tools that support the programming of such transformations can also assist by allowing the creation of new notations on demand, which is an under-explored option in the relief of educational difficulties.

This paper reviews some programming tools that can be used to effect such transformations. It also introduces a tool, called “4DML,” which allows the programmer to create a “model” of the desired result, from which the transformation is derived.

Towards accessible multimedia music

Abstract: This paper addresses the provision of music for the print impaired in the digital age. In recent years a number of key initiatives, such as those undertaken by EC funded projects like HARMONICA and WEDELMUSIC, have opened up new opportunities in the field of interactive multimedia music. The area of music encoding is moving towards greater unification and co-ordination of effort with the activities and strategies being pursued by the Music Network. For organizations providing support and alternative format materials for print impaired people this offers the exciting challenge of bringing together several disparate activities and building a far stronger future for coding activities in this field.

This paper provides an overview of the current situation, a detailed description of the key emergent themes, information about recent technical initiatives, and some insight into the activities planned for the coming years.

Universal interfaces to multimedia documents

Abstract: Electronic documents theoretically have great advantages for people with print disabilities, although currently this potential is not being realized. This paper reports research to develop multimedia documents with universal interfaces that can be configured to the needs of people with a variety of print disabilities. The implications of enriching multimedia documents with additional and alternative single-media objects are discussed, and an implementation using HTML+TIME has been undertaken.

Publications on music notation and education with less obvious links to accessibility/disability

The harmonic walk: an interactive physical environment to learn tonal melody accompaniment.

Abstract: The Harmonic Walk is an interactive physical environment designed for learning and practicing the accompaniment of a tonal melody. Employing a highly innovative multimedia system, the application offers users the possibility of getting in touch with some fundamental features of tonal music in a very simple and readily available way. Although tonal music is very common in our lives, unskilled people, as well as music students and even professionals, are scarcely conscious of what these features actually are.

The Harmonic Walk, through body movement in space, can provide all these users a live experience of tonal melody structure, chord progressions, melody accompaniment, and improvisation. Enactive knowledge and embodied cognition allow the user to build an inner map of these musical features, which can be enacted by moving on the active surface with a simple step. Thorough assessment tests with musician and nonmusician high school students demonstrated the high communicative power and efficiency of the Harmonic Walk application, both in improving musical knowledge and in accomplishing complex musical tasks.
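The interaction principle, where stepping into a region of the active surface triggers a harmonic function, reduces to a lookup from zone to chord. The zone layout and key below are illustrative assumptions, not the Harmonic Walk's actual mapping:

```python
# Illustrative mapping of floor zones to harmonic functions in C major;
# the actual Harmonic Walk surface layout is not specified here.
ZONE_CHORDS = {
    "center": ("I",  ["C", "E", "G"]),
    "left":   ("IV", ["F", "A", "C"]),
    "right":  ("V",  ["G", "B", "D"]),
}

def chord_for_step(zone):
    """Return the (roman numeral, chord tones) pair triggered by
    stepping into a zone; unknown zones fall back to the tonic."""
    return ZONE_CHORDS.get(zone, ZONE_CHORDS["center"])

print(chord_for_step("right"))
```

A full implementation would derive the zone from tracked body position and voice the chord through a synthesizer in time with the melody.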

Uses of iPad® Applications in Music Therapy

Re-Connecting to Music Technology: Looking Back and Looking Forward

Abstract: The rate of change in the technological advances available to music therapists is incredible. While Music Therapy Perspectives has hosted discussions on music technology in therapy in the past (for instance, see “Integrating Technology” columns in the early 1990s issues), keeping apace of technological changes, and their impact on education and clinical training, is challenging. This paper contextualizes current advances in music technology through a review of technology applications in the field, and looks to the future, in both educational and clinical applications.

Pick-up the Musical Information from Digital Musical Score Based on Mathematical Morphology and Music Notation

Abstract: This paper analyzes the basic rules of musical notation for image processing. Using structuring elements shaped like musical notation and the basic algorithms of mathematical morphology, a new method for recognizing the musical information in digital music scores is presented; the recognized information is then transformed into a MIDI file for the communication and restoration of the musical score. Experimental results show that the average recognition rate for musical information from digital music scores is 94.4%, which satisfies practical demands and offers a new approach for applications in digital libraries, music education, music theory analysis, and so on.
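The morphological approach rests on applying structuring elements matched to notation primitives. As a minimal illustration (not the paper's actual algorithm), eroding a binarized score with a horizontal structuring element removes short marks such as note heads while preserving long horizontal runs such as staff lines:

```python
def erode_horizontal(image, width):
    """Morphological erosion of a binary image (list of 0/1 rows)
    with a 1 x `width` horizontal structuring element.

    A pixel survives only if `width` consecutive pixels centered on it
    are all set, so short marks (note heads, stems) vanish while long
    horizontal runs (staff lines) remain.
    """
    half = width // 2
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(half, w - half):
            if all(image[y][x + dx] for dx in range(-half, half + 1)):
                out[y][x] = 1
    return out

# Toy score fragment: row 1 is a "staff line", row 0 a short blob.
img = [
    [0, 1, 1, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]
print(erode_horizontal(img, 5)[1])
```

A real recognizer would pair erosion with dilation (opening) and use further structuring elements for note heads, stems, and beams; this sketch only shows the staff-line case.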

A Transcription System from MusicXML Format to Braille Music Notation

Abstract: The Internet enables us to freely access music as recorded sound and even music scores. For the visually impaired, music scores must be transcribed from computer-based musical formats into Braille music notation. This paper proposes a transcription system from the MusicXML format to Braille music notation using a structural model of Braille music notation. The resulting Braille scores, inspected by volunteer transcribers, meet the international standard. Using this simple and efficient transcription system, it should be possible to provide Braille music scores to the visually impaired via the Internet.
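The input side of such a system is straightforward to sketch: MusicXML is plain XML, so the note events that a Braille structural model would then encode can be extracted with a standard parser. The fragment and pipeline below are an illustrative sketch, not the paper's implementation:

```python
import xml.etree.ElementTree as ET

# A minimal MusicXML fragment (two notes in one measure).
SCORE = """<score-partwise>
  <part id="P1"><measure number="1">
    <note><pitch><step>C</step><octave>4</octave></pitch><type>quarter</type></note>
    <note><pitch><step>G</step><octave>4</octave></pitch><type>half</type></note>
  </measure></part>
</score-partwise>"""

def extract_notes(xml_text):
    """First stage of a MusicXML-to-Braille pipeline: pull out the
    (step, octave, type) triples that the Braille structural model
    would subsequently encode as Braille cells."""
    root = ET.fromstring(xml_text)
    notes = []
    for note in root.iter("note"):
        step = note.findtext("pitch/step")
        octave = int(note.findtext("pitch/octave"))
        ntype = note.findtext("type")
        notes.append((step, octave, ntype))
    return notes

print(extract_notes(SCORE))
```

The hard part of the actual system lies downstream: Braille music is contextual (octave marks, value signs, measure grouping), which is exactly what the paper's structural model addresses.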

A proposal for the integration of symbolic music notation into multimedia frameworks

Abstract: Integration of music notation in multimedia frameworks, and particularly in MPEG, could open new ways of valorization for that important part of our cultural heritage that is known as "music notation". Integration of music notation with multimedia content could also increase the distribution and diffusion of music notation. Moreover, integration with video, interactivity, digital rights management would enable the development of a huge number of completely new applications in several domains, from education and distance learning, to rehearsal and musical practice at home, and any forms of enjoyment of music that can be imagined.

For these reasons, we began work on integrating symbolic music representation into the MPEG standardization process and format. A proposal for realizing this integration in MPEG-4 players is presented, together with the main relationships that the symbolic music representation could have with all the MPEG components. The proposal is grounded in an assessment of the requirements of a large set of emerging new applications in which music notation is synchronized with multimedia content.


IRJET- A Survey on Emotion-Based Music Player

2020, IRJET

The human face is an essential part of an individual's body and plays a significant role in conveying an individual's psychological state. Facial input can now be captured directly through a camera, and this input can be used in many ways; one application is to extract information from it in order to deduce an individual's mood. That mood can then be used to generate a list of songs matching the individual's emotional state, sparing the listener the time-consuming task of selecting music manually and avoiding the mismatch that occurs when the music does not suit the listener's present emotion. The aim of the facial-emotion-based music player is to scan and interpret facial data and create a playlist based on the detected parameters. Since no existing music player selects songs based on an individual's emotions, this paper offers an emotion-based music player that can suggest songs according to an individual's emotion: sad, happy, neutral, or angry.
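Once the facial classifier has produced one of the four emotion labels, the player's selection step reduces to a lookup. A minimal sketch, in which the playlist contents and the fallback behaviour are assumptions rather than the paper's design:

```python
# Illustrative playlists keyed by the four emotion classes in the
# paper; the song titles are placeholders, not a real catalogue.
PLAYLISTS = {
    "happy":   ["upbeat_track_1", "upbeat_track_2"],
    "sad":     ["mellow_track_1", "mellow_track_2"],
    "neutral": ["ambient_track_1"],
    "angry":   ["calming_track_1", "calming_track_2"],
}

def playlist_for(emotion):
    """Return the playlist for a detected emotion label, falling back
    to 'neutral' for labels the classifier was not trained on."""
    return PLAYLISTS.get(emotion.lower(), PLAYLISTS["neutral"])

print(playlist_for("Happy"))
```

In the full system the `emotion` argument would come from the facial-expression classifier rather than a string literal, and the playlists would be drawn from the user's library.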

Related Papers

IRJET Journal

literature survey on music player

IOSR Journals

At present law and enforcement agencies mostly depend on CCTV footage to trace the criminal. Sometimes, crime involves searching CCTV footage of various locations of numerous days and duration. This will pose the biggest hurdle in the investigation process as it involves tremendous manpower to search the suspect in CCTV footage. This project proposes intelligent searching of CCTV footage with an added module of Machine Learning. To build an Intelligent Search security system which takes suspect image as an input and searches the suspect in CCTV footage by considering the parameter such as appearance of suspect and facial orientation features using machine learning. The system will return the position of suspect in CCTV footage video reel with highlighted suspect figure time when the suspect was identified in CCTV footage. Various algorithms are reviewed and planned to use the following algorithms like Haar cascade for face detection and CNN and one shot learning method for face recognition and comparison.

International Journal of Advance Research Ideas and Innovations in Technology

Ijariit Journal , Mahek Gupta

One of the most essential components of an individual's body is the human face and it acts as the main indicator for the behavioural and the emotional state of the individual face and it's very important for the human to extracting the required input from the human face can be done by using the camera directly. The mean of this examination is making Facial Expression Recognition (FER) conspire by Utilizing the CNN Algorithm and tensor flow to recognize the face by the camera. Facial expression analysis is used in a different way to detect human emotions. There are four types of emotions are recognized: happy, sad, angry, neutral depends on the mood. The playlist itself have the songs in the database, it plays the songs according to the mood detect by the Camera. This research paper is effective because we are using the different algorithm i.e. CNN model which is based on Machine Learning which gives accuracy and reduces the time to recognize the emotions.

Face recognition is one of the most important tasks in computer vision and biometrics where many algorithms have been developed for the betterment of one other. There are many techniques used for face recognition some of them are PCA, LDA, LBP. Among that Local Binary Pattern (LBP) has been proved to be an effective algorithm for facial image representation and analysis, but it is too local to be robust. In this paper, we present an improved method for face recognition named Efficient Local Binary Pattern (ELBP), which is based on Local Binary Pattern (LBP). The efficient LBP method is used to extract local features of the new training subset independently and then a set of feature histogram vectors can be obtained. For a given unknown facial image, sub-feature histogram vectors of the corresponding sub-region are gained after the same preprocessing and partition techniques.

International Journal of Scientific Research in Science and Technology IJSRST , Celina Jenefer C, Leena. S , Nirmala Devi M , Dr. J. SelvaKumar

The facial expression plays an important role in detecting the mindset of an individual. Facial Expression is one of the natural ways to express emotions. In the interpersonal communications, humans use nonverbal clues like facial expression, hand gestures and tone of voice. Detecting and understanding the facial expressions are challenging tasks. Expression based music player involved in various fields like computer science, human computer interface and psychology. The facial Expressions are detected by using various feature extraction techniques from an image as well as from real-time videos. Expression based Music Player involves the image processing, facial feature detection, expression classification and audio feature extraction. This paper provides the information about various research works carried out by many authors in the field of expression based music player.

—To address human face recognition problem lots of algorithms have been proposed in the past few years. Although the available algorithms provides good face recognition, but the field is still looking for the efficient mild stone algorithm which can provide higher face recognition efficiency. To efficiently address this problem, in this paper, an Automatic Face Recognition System (AFRS) is proposed. The proposed system uses an efficient approach for the recognition of human faces on the basis of some extracted features. For the detection of the frontal face proposed method uses Viola Jones face detection technique. Once face detection is completed, feature of interested region that is eyes and mouth are pull out. In feature extraction, local binary pattern (LBP) is proposed as a feature. After the extraction of the LBP feature for the recognition classification or, the proposed method employed highly efficient K-Nearest Neighbors Classification structure to efficiently cluster the obtained LBP features. The whole system is implemented on the dataset of 150 images of frontal faces of 30 persons in five different emotions by using MATLAB 2012(b). The images were collected from the Karolinska Directed Emotional Faces Database. The novel contribution of the proposed face recognition system is the recognition of an individual person on neutral emotion as well as from the frontal images of other emotions. After the successful testing with the proposed system the face recognition efficiency found for the proposed system is very high and close to 100% for all the face images Keywords— Face Recognition, Face Features, Feature Extraction, LBP, K-NN Classifier.

This project is mainly focused on emotion recognition by face detection. Facial expression is key aspect of social interactions in various situations. We used synthetic happy, sad, angry, fearful, disgust faces determining the amount of geometric change required to recognize these emotions. Emotion is a part of a person's character that consists of their feelings as opposed to their thought that is the key point of emotionalizing and analyzing each and every emotion by software i.e. able to read emotions as well as our brains do. It is basically developed on python using machine learning.

Psychological stress is an important factor that affects our healthy life. Traditional stress detection methods use face-to-face interviews which is time consuming and laborious task. Due to rise in social media networks, instead of making interaction in real; most of the people spend their daily interaction with their family and friends through social media. The user emotions such as angry, sad, happy, joy and depressed conditions in social media can be identified through their weekly tweets. An identified emotion categorized in to positive and negative tweets, finding their stress state through continuous negative tweets and informing the user regarding the stress state to prevent them from suicide and also from other attacks. The sentence pattern labeling method in facebook contains abundant information for data analysis. Utilizing above information and features extracted from multiple modalities through convolution neural network model. The extracted features are fed into the several classifiers for tweet classification. Experimental results show that the Random Forest (RF) model provides higher accuracy rate than Support Vector Machine (SVM) and Probabilistic Neural Network (PNN) models.

Facial expression recognition in the computer vision community was recognized as an important research subject. Consequently, many progress has been made in this sector. Emotions are expressed in words, hands and movements and facial expressions of the body. The extraction and understanding of emotion is therefore of great importance for the interaction between human communication and machine communication. The challenge involves face recognition, correct data representation, correct classification systems, accurate database, etc. This paper discusses the progress made in this area, as well as the different methods used to identify emotions. Thus, we recognized seven emotions such as Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral. Facial expression prediction preparation and research data sets are from FER 2013, which incorporate geometric features which appearance features. The main aim of the paper is to introduce a method of emotion detection in real time
