By Kat Patten
Abstract
Since 2022, the use of generative AI has grown rapidly, and it has been incorporated into our everyday lives. Beyond answering simple questions, AI can assist in specific areas such as answering questions within a company, offering exercise tips, and even suggesting music. AI is a powerful tool that is used to generate code, research topics, and aid decision-making. Artificial intelligence can be used to design chatbots that learn user preferences and assist users in making decisions that fit those preferences. This research aims to develop a framework for understanding how an AI chatbot can aid in the user’s emotional decision-making process when selecting music to listen to.
Introduction
Music is a powerful, versatile tool that influences our emotions far more than we realize. Music can lift us up when we are sad or tired and calm us down when we are angry or stressed. With emerging technologies such as AI and machine learning, people can find songs that match their preferences faster than ever. There is no longer a need to go searching for a song that matches how you feel: a chatbot can help users find the perfect song. All it takes is the user interacting with the chatbot and expressing their emotions, and AI and machine learning models recommend songs that match their preferences. Recently, the rise of chatbots has led people to develop models that analyze human emotion and emotional patterns in music in order to recommend songs to the user. To analyze human emotions, a variety of approaches can be utilized, such as analyzing text data, voice data, and facial image data.
Emotion and Music: Background
Music and emotion are interconnected. The lyrics and tone of a song have the power to influence how we feel and think. When people make the decision to listen to music, they pick a genre that fits the way they feel. Music serves as a way for people to express themselves and to put their heads in an alternative space. For example, if someone is feeling sad, they may listen to a song with a sad tone and lyrics to “escape” reality. Mind wandering is a mental phenomenon that can be evoked by music.1 In a mind-wandering study conducted by Taruffi, 63% of participants’ thoughts and emotions were connected to the music that they were listening to. The study found that participants had higher valences when listening to music compared to not listening.1 Valence describes how positive or negative a song is.2 A song with a valence closer to 1.0 can be described as a happy song, while a song with a valence closer to 0 can be described as a sad song.2 Music not only dictates brain activity, but it can have an impact on our decision-making process and how we view the world.3
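The valence scale described above can be illustrated with a minimal sketch. The threshold values below are assumptions chosen purely for demonstration, not standards from the literature:

```python
# Illustrative sketch: mapping a song's valence score (0.0-1.0) to a coarse
# mood label. The cutoff values are assumed for demonstration only.

def mood_from_valence(valence: float) -> str:
    """Classify a valence score into a coarse mood label."""
    if not 0.0 <= valence <= 1.0:
        raise ValueError("valence must be between 0.0 and 1.0")
    if valence >= 0.66:
        return "happy"      # closer to 1.0 -> positive-sounding song
    if valence >= 0.33:
        return "neutral"
    return "sad"            # closer to 0.0 -> negative-sounding song
```

In practice the boundaries would be tuned against labeled data rather than fixed by hand.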
Depending on the valence of the music, mind wandering can be detrimental or beneficial to mental health. However, in some instances it can be hard to classify music as expressing a single emotion. High-order crossings, adaptive filtering, and genetic algorithms are all techniques used to classify emotion.3 Generally, four to five emotions can be classified from a song.3 Basic emotions such as happiness, surprise, anger, disgust, sadness, and fear are used to classify music into different categories, and the emotion of a song is determined by both its lyrics and its audio features.3 Electroencephalography (EEG) can be used to measure human emotional responses to music. Classifying the lyrics and valence of songs into emotions allows features to be extracted from the music that indicate its emotion. Various datasets exist for music mood classification, including ElderReact, FMA, AudioSet, AMG, Emotify, and Musan.3
There is a large variety of music for users to listen to, which can make it increasingly difficult to classify music into one mood. To recognize music mood and human emotion efficiently, it is recommended to use short snippets of a song rather than full tracks. EEG-Annotate can identify responses to music and analyze the human emotions associated with them.3 Another tool is MNE-Python, which can analyze and visualize human EEG data, making it easier to interpret.3
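The short-snippet approach can be sketched as a simple windowing step: the signal is cut into fixed-length, non-overlapping segments so each one can be classified separately. The sample rate and window length below are assumed values for illustration:

```python
# Sketch: splitting an audio signal (a flat sequence of samples) into short
# snippets for per-snippet mood classification. 22050 Hz and 5-second
# windows are assumptions, not values prescribed by the cited work.

def split_into_snippets(samples, sample_rate=22050, snippet_seconds=5):
    """Return a list of non-overlapping snippets, each snippet_seconds long.

    Trailing samples that do not fill a whole window are dropped.
    """
    window = sample_rate * snippet_seconds
    return [
        samples[i:i + window]
        for i in range(0, len(samples) - window + 1, window)
    ]
```

A classifier would then label each snippet independently, and the per-snippet labels could be aggregated into an overall mood for the song.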
AI Song Recommender Chatbots
AI chatbots can help users select music that matches their emotional preference or provide music that promotes stronger mental health. Through techniques such as Natural Language Processing (NLP), models can understand the mood and emotion associated with a song.4 Music-recommending chatbots can utilize pretrained models, or models can be trained to recognize specific emotions for a user. Models can be trained for both music classification and human emotion classification.3 Deep learning is utilized when developing a chatbot that can recognize the emotion of a song and the corresponding human emotion. Chatbots have the capability to recommend music to users based on their emotions by recognizing features that indicate a specific emotion.
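As a minimal sketch of the text-analysis side, a chatbot turn could map words in the user's message onto emotion categories. Real systems use trained NLP models; the keyword lists here are assumptions chosen purely for illustration:

```python
# Toy keyword-based sketch of detecting a user's emotion from chat text.
# The keyword sets are invented for demonstration; a production system
# would use a trained sentiment/emotion model instead.

EMOTION_KEYWORDS = {
    "happy": {"happy", "great", "excited", "joyful"},
    "sad": {"sad", "down", "tired", "lonely"},
    "angry": {"angry", "frustrated", "mad", "stressed"},
}

def detect_emotion(message: str) -> str:
    """Return the emotion whose keywords overlap the message the most."""
    words = set(message.lower().split())
    scores = {emotion: len(words & kws)
              for emotion, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"
```

The detected label can then drive which songs the chatbot suggests.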
Neural networks are used to classify emotions for chatbots to process. K-Nearest Neighbors (k-NN) and Convolutional Neural Networks (CNNs) are among the most popular techniques. K-NN classifies songs that are most similar to each other into one category of emotion.3 CNNs classify song emotion based on raw signals and patterns.5 These models can analyze user emotion through various techniques such as direct interaction with the user, facial emotion detection, and EEG signals. Music emotion, in turn, is classified based on rhythm features, lyrics, tempo, loudness, key, and instrument presence.4 Utilizing neural networks allows chatbots to categorize both human emotion and music emotion.
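The k-NN idea mentioned above can be shown in a few lines: a query song is labeled by majority vote among its nearest labeled neighbors in feature space. The feature vectors and labels below are invented toy data:

```python
# Toy k-NN sketch for labeling a song's emotion from numeric audio features
# (here tempo in BPM and valence). The songs and labels are assumptions
# made up for illustration only.
from collections import Counter
import math

def knn_classify(query, labeled_songs, k=3):
    """Label `query` by majority vote among its k nearest labeled songs."""
    by_distance = sorted(
        labeled_songs,
        key=lambda item: math.dist(query, item[0]),  # Euclidean distance
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Feature vectors: (tempo_bpm, valence) -> emotion label (toy data).
songs = [
    ((120, 0.9), "happy"), ((128, 0.8), "happy"),
    ((70, 0.2), "sad"), ((65, 0.1), "sad"), ((80, 0.3), "sad"),
]
```

In a real system the features would be scaled to comparable ranges before computing distances, since raw tempo values would otherwise dominate valence.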
Python is the industry standard for generative AI and is one of the most common languages used for chatbots. Pandas, PyTorch, NumPy, and TensorFlow are all libraries that can be used for a chatbot that recommends songs based on user emotions. In addition to libraries, integrated APIs take a chatbot a step further by allowing real-time data to be incorporated into the system.6 A popular API for music recommendation systems is the Last.fm API.4 Combined with components for face detection and emotion recognition, such APIs support systems that analyze music valence and human emotion.
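Putting the pieces together, one chatbot turn can be sketched as: map the user's detected emotion to a target valence range, then pick matching songs. A real system might pull track data from a service such as the Last.fm API; the catalog, song titles, and valence values here are assumptions:

```python
# End-to-end sketch of a rule-based recommendation step. All data below is
# invented for illustration; a deployed chatbot would query a music API
# for real tracks and valence scores.

CATALOG = [
    {"title": "Sunrise Drive", "valence": 0.85},
    {"title": "Rainy Window", "valence": 0.15},
    {"title": "Steady Ground", "valence": 0.50},
]

# Assumed mapping from detected user emotion to a target valence range.
TARGET_VALENCE = {"happy": (0.6, 1.0), "neutral": (0.3, 0.6), "sad": (0.0, 0.3)}

def recommend(emotion: str):
    """Return catalog titles whose valence falls in the emotion's range."""
    low, high = TARGET_VALENCE.get(emotion, (0.3, 0.6))
    return [s["title"] for s in CATALOG if low <= s["valence"] <= high]
```

Note that this mood-matching strategy is exactly what the ethics discussion below questions: matching a sad user with low-valence songs is simple to implement but may not be what is best for the user.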
Ethical Concerns and Policy
As AI recommender systems become more popular, there are challenges surrounding the ethics of relying on a machine to analyze, judge, and influence human emotions. AI recommender systems have the power to influence human emotion by recommending songs with low valences.7 Recommendation systems can help users manage choice overload when deciding what song to listen to.7 There are a wide variety of genres and songs that users can listen to, and AI song recommenders can make the choice for the user. However, that choice may lead the user to listen to songs that reinforce depressive emotions.7 Since people use music as a form of therapy, this becomes an ethical challenge that has limited regulation.
Stanford’s recent AI mental health study highlights the challenges of incorporating AI into therapy and influencing users’ emotions.8 The research found that AI chatbots reinforced mental health stigmas and biases across multiple AI models.8 Chatbots are not human and lack empathy and human emotions, making it difficult for them to recognize patterns of poor mental health.8 This can lead chatbots to make suggestions that steer users toward bad decisions.8 In the case of a song recommender, the model may fail to recognize that the user is depressed and recommend songs with a low valence, which could worsen their mental state.
To make decisions for users, recommendation systems rely on data from the given user. These systems collect personal information, and this can lead to privacy concerns about how the data is stored.7 In order for chatbots to remain transparent, it is important that informed consent is obtained. AI recommenders can amplify existing biases and reinforce stereotypes, which can result in discrimination against some groups.7 It is crucial that techniques are put in place to detect and prevent biases when recommender systems are developed. Often the decision-making process itself is not transparent to users.7 An explanation should be provided to users of how the system operates. In addition, the user should have control over how they interact with the system and the ability to override decisions made by it.7 Giving users content that aligns with existing preferences can limit the user’s exposure to alternative viewpoints and choices, resulting in polarization.7 Chatbots can have a societal impact, so it is important to address these ethical concerns when developing a music recommendation chatbot.
The complexity of recommendation systems makes it difficult for policymakers to determine how to regulate them and reduce their biases and ethical concerns.9 To mitigate this policy problem specifically for music recommendation systems, policymakers have resorted to cooperative modes of governance.9 Algorithms often function as black boxes whose internal logic is opaque to users and even to developers. This can result in unexpected outputs that are difficult to address with policy or regulatory efforts. Developers need to improve the transparency of these systems to provide auditability.9 The European Union (EU) is working toward increasing understanding of algorithmic decision-making systems in upcoming legislative efforts.9
Conclusion
The use of AI chatbots continues to grow. AI techniques such as natural language processing can allow users to narrow their song choices down to music that matches their preferences and to explore more music with a valence similar to their past choices. While AI chatbots can make finding songs that match user emotion more convenient, it is important to note that a recommendation may be more detrimental to a user’s mental health than beneficial. Policymakers need to work closely with AI engineers to mitigate negative externalities by implementing guardrails that prevent chatbots from recommending harmful choices to users. Although chatbots raise ethical concerns, they also make decision-making processes easier and more efficient. Through proper policy and education on AI, we can safely use chatbots for music recommendations.
Bibliography
1. Taruffi, L. Mind-Wandering during Personal Music Listening in Everyday Life: Music-Evoked Emotions Predict Thought Valence. Int. J. Environ. Res. Public Heal. 18, 12321 (2021).
2. Sortlee. Sort By Emotional Tone Valence.
3. Chaturvedi, V. et al. Music mood and human emotion recognition based on physiological signals: a systematic review. Multimedia Syst. 28, 21–44 (2022).
4. U.L.Tupe, Kulkarni, A., Nimbokar, G., Mahajan, P. & Rau, N. AISongRecommendations.pdf. Grenze International Journal of Engineering and Technology, Jan Issue (2024).
5. Siam, A. I., Soliman, N. F., Algarni, A. D., El-Samie, F. E. A. & Sedik, A. Deploying Machine Learning Techniques for Human Emotion Detection. Comput. Intell. Neurosci. 2022, 8032673 (2022).
6. Mathew, N., Chooramun, N. & Sharif, M. S. Implementing a Chatbot Music Recommender System based on User Emotion. 2023 Int. Conf. Innov. Intell. Inform., Comput., Technol. (3ICT) 00, 195–199 (2023).
7. Masciari, E., Umair, A. & Ullah, M. H. A Systematic Literature Review on AI-Based Recommendation Systems and Their Ethical Considerations. IEEE Access 12, 121223–121241 (2024).
8. Wells, S. New Study Warns of Risk In AI Mental Health Tools. (2025).
9. Hesmondhalgh, D., Campos, R., Kaye, D. B. V. & Li, Z. MusicRecommenderEthics.pdf. (2023).