In this article, music-tech writer Cherie Hu delves into how context awareness can reshape creativity and create new revenue streams and business models for the music industry as a whole.
________________________
Guest Post by Cherie Hu on Synchblog
In his book Who Owns the Future?, computer scientist Jaron Lanier highlights the importance of “decision reduction” in technology. In an advanced, noisy information economy, he explains, the most desirable services will anticipate and execute on the myriad of decisions their customers want to make, leaving cognitive room for freer, more creative work.
By this logic, music streaming services are decision-reduction services. One needs to make a mind-boggling number of decisions even to start listening to a song in today’s digital, networked world: what device to use, whether to listen to familiar or new music, whether to fit the music to a particular activity or time of day, the list goes on.
Each of the major streaming services centers its value proposition around how it compresses these decisions. Spotify uses data science to build habits around algorithmic, personalized playlists; Pandora leverages the nostalgic, simple mechanics of radio (although it now has more on-demand features through Pandora Plus); Apple Music draws on its mainstream appeal to recruit celebrity curators and exclusives; Tidal creates an aura of elite secrecy, also around exclusives; SoundCloud builds on its reputation for supporting underground, DIY music communities.
Most of these values, however, are confined to their respective platforms. The next big frontier in music streaming will have to reimagine music distribution from the ground up, creating entirely new paradigms for creation and consumption. This is where context awareness—the ability to perceive and adapt to one’s surroundings—comes in. Digital music does not have to exist solely as a static file on a streaming service, but can interact with its listeners and surrounding environment.
In 2013, multimedia artist Ryan Holladay gave a TED talk about his location-aware album The National Mall, a mobile app that tagged hundreds of musical segments, composed by himself and his brother Hays, throughout the namesake park. Since then, startups have been experimenting aggressively with applications of context awareness to consuming and sharing music, with Geobeat, AudioDrops and Aitokaiku creating adaptive interfaces on the go and Prizm building a responsive, Alexa-like speaker at home. Google is already incorporating its Context Awareness API into its Play Music app, recommending songs based on factors such as location and weather.
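To make the idea concrete, here is a minimal, hypothetical sketch of how a location-aware album might decide which composed segment to play: each segment is geotagged, and the app streams whichever one sits closest to the listener's current GPS position. The segment names, coordinates and URLs are invented for illustration; this is not the actual implementation of The National Mall or of Google's API integration.

```python
# Hypothetical sketch: pick the geotagged musical segment nearest the listener.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class GeoSegment:
    title: str
    lat: float
    lon: float
    audio_url: str

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def segment_for_listener(segments, listener_lat, listener_lon):
    """Return the segment whose geotag is closest to the listener's position."""
    return min(segments, key=lambda s: haversine_m(s.lat, s.lon, listener_lat, listener_lon))

segments = [
    GeoSegment("Reflecting Pool", 38.8893, -77.0460, "https://example.com/pool.mp3"),
    GeoSegment("Washington Monument", 38.8895, -77.0353, "https://example.com/monument.mp3"),
]
print(segment_for_listener(segments, 38.8894, -77.0400).title)
```

As the listener walks, the same lookup can be re-run on each location update, crossfading between segments so the album literally unfolds with the walk.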
The technological and philosophical implications of context-aware music apps go far beyond their normalized nicknames (“Instagram for music,” “Pokémon Go for music,” etc.). Context awareness can transform the creative process, disrupt artist-fan and artist-geography relationships, and generate new revenue streams and business models for the music industry as a whole. Let’s look at how:
1. Music will become not simply functional, but also adaptive, democratizing the composition process.
A common point of skepticism about context awareness is that it makes music too “functional”—too commoditized for a particular external event or moment in time, rather than freely creative and self-expressive. In reality, by evolving in real time in response to its user and environment, context awareness actually unveils a wider range of creative and interactive possibilities for artists.
Moreover, the unconventional listening experience that accompanies this technology can make listeners feel more present and aware of the story behind the music, instead of tuning out and treating the music as background filler. To an extent, choose-your-own-adventure music projects like The National Mall truly democratize and distribute the composition process for their users, in that the album cannot come to life without their movement and participation as deliberate actors.
2. Local culture and knowledge can reenter the spotlight.
The Internet is a double-edged sword for local cultures. As The Information‘s Sam Lessin wrote, a crucial consequence of information being more open and accessible to everyone in a global marketplace is that it becomes “less possible for niche ideas to stand their ground with locally relevant audiences.” Music faces a similar plight; it becomes more difficult for niche genres and scenes to gain exposure, as mainstream sounds become both more homogenized and more internationalized.
Once music becomes geotagged, it can tie itself to local cultures, traditions and topographies, creating a multimedia storytelling experience. One of the most prominent examples of this phenomenon is Hear the City, an app developed by Cisco’s Urban Innovation Legacy Project in August 2016 in partnership with the City of Rio de Janeiro, Brazil. Cisco commissioned sound designer Rob Thomas for an adaptive composition project, in which Thomas converted real-time local data on transportation, connectivity, weather and emotions into dynamic melodies and graphic visualizations that were accessible only by downloading the app. The project also installed interactive kiosks and organized periodic gatherings throughout the region of Porto Maravilha in Rio, bringing people together through shared sounds and sights.
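A hypothetical sketch of this kind of "sonification" in the spirit of Hear the City: live city readings (transit load, rainfall, crowd sentiment) are mapped onto musical parameters such as tempo, mode and dynamics. The mapping rules below are invented for illustration and are not Rob Thomas's actual design.

```python
# Hypothetical sketch: map real-time city data onto musical parameters.
def city_data_to_music(transit_load: float, rainfall_mm: float, mood_score: float) -> dict:
    """
    transit_load: 0.0 (empty streets) .. 1.0 (gridlock)
    rainfall_mm:  rain over the last hour
    mood_score:  -1.0 (negative sentiment) .. 1.0 (positive sentiment)
    """
    return {
        "tempo_bpm": int(70 + 70 * transit_load),           # busier streets -> faster pulse
        "mode": "minor" if rainfall_mm > 2.0 else "major",   # rainy -> darker harmony
        "dynamics": round(0.4 + 0.6 * (mood_score + 1) / 2, 2),  # happier city -> louder mix
    }

print(city_data_to_music(transit_load=0.8, rainfall_mm=5.0, mood_score=-0.3))
# {'tempo_bpm': 126, 'mode': 'minor', 'dynamics': 0.61}
```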
3. The seamlessness of context-aware music experiences can inspire a more sophisticated IoT suite for artists and fans.
Whether or not the music itself is adaptive, context-aware curation and distribution services can smooth out the listening experience for consumers. In an ideal world, there would be no need to press pause when transitioning from driving or riding the subway to working out at the gym (think Spotify Running, but automated), or to relaxing at home alone or with friends.
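A minimal, hypothetical sketch of that automated handoff: a simple rule-based classifier reads a few sensor signals and picks the playlist for the current context, so the listener never has to touch the pause button. The thresholds and playlist names here are invented for illustration, not any streaming service's actual logic.

```python
# Hypothetical sketch: rule-based context detection driving playlist selection.
def detect_context(speed_kmh: float, at_home: bool, heart_rate_bpm: int) -> str:
    if speed_kmh > 25:
        return "commute"        # likely in a car or on the subway
    if heart_rate_bpm > 120:
        return "workout"
    if at_home:
        return "home"
    return "default"

PLAYLISTS = {
    "commute": "Morning Drive",
    "workout": "Running Mix",   # think Spotify Running, switched on automatically
    "home": "Evening Chill",
    "default": "Daily Mix",
}

def playlist_for(speed_kmh: float, at_home: bool, heart_rate_bpm: int) -> str:
    return PLAYLISTS[detect_context(speed_kmh, at_home, heart_rate_bpm)]

print(playlist_for(speed_kmh=3, at_home=False, heart_rate_bpm=140))  # Running Mix
```

In practice, the hard part is not the rules but the sensing; reliably inferring context from noisy signals is exactly where the AI advances described below come in.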
Executing on this vision requires more advanced developments in remote sensing, image recognition, motion tracking and other areas of artificial intelligence. With the rise of mobile music consumption and the emergence of new devices and tools such as autonomous vehicles, such advancements will be in ever higher demand over the coming years.
Cherie Hu is a music-tech thinker, researcher and writer, passionate about combining data-driven business acumen with superior storytelling skills to propel innovation in the music industry. She covers trends at the intersection of technology and the music business as a contributor for Forbes, and her writing has also appeared in Music Ally and the Harvard Political Review.