AI Music vs. Human Catalogs: What the Suno-UMG Talks Reveal About the Future of Creativity
A deep dive into Suno’s stalled label talks, AI training data, and the licensing models that could define music’s future.
The stalled licensing discussions between Suno and major labels like UMG and Sony are more than a business headline—they are a preview of the next phase of AI music copyright policy, platform economics, and creator compensation. At the center of the dispute is a deceptively simple question: if an AI music tool is trained on human-made recordings, how much of its output depends on that catalog, and what should it cost to use it? For labels, the answer is straightforward: AI startups benefit from decades of investment in recordings, and that value should be licensed, measured, and paid for. For startups, the answer is more complicated: models learn patterns, not tracks, and overbroad licensing could punish innovation before a product even proves itself. That tension is shaping the future of synthetic audio, and it will likely define the next generation of creative leadership in 2026.
To understand why the talks stalled, it helps to zoom out from the headline and look at the larger policy dispute: what counts as training data, what legal rights attach to it, and how the music business could build a workable marketplace for AI without collapsing artist trust. This is not just a label-vs-startup fight. It is also a data-rights fight, a market-design fight, and a future-of-labor fight. And for fans, creators, and independent artists, the outcome could determine whether AI becomes a tool that expands discovery or a layer of extraction that quietly monetizes the entire recorded-music archive.
1. Why the Suno-UMG Talks Matter Beyond One Deal
The dispute is about precedent, not just price
When negotiations stall between an AI music company and a major label group, the immediate takeaway is often “they couldn’t agree on money.” That is true, but incomplete. The real issue is that any agreement would set a precedent for the rest of the market: if one AI startup gets access on favorable terms, others will expect similar treatment; if a label accepts a weak framework, it risks normalizing underpayment for years. The labels’ position reflects a broader industry instinct to avoid repeating the early streaming era, when distribution grew faster than compensation models. For a useful comparison, see how the creator economy often struggles when platforms scale before monetization rules mature in creator relationship building and long-term creator revenue.
The Suno talks also matter because they involve the two biggest levers in modern AI music licensing: catalog access and commercial usage. A company may be able to argue that a model can be trained with publicly accessible recordings, snippets, metadata, or licensed corpora, but labels are asking a more direct question: if the resulting tool competes with music made by humans, shouldn’t the humans who built the source market be paid? That argument has parallels in other content markets where transparent sourcing became central to trust. The debate echoes lessons from data transparency in marketing, where users increasingly expect to know where data came from and how it is used.
Why stalled talks can still shape the market
Deals do not need to close for their logic to spread. Often, the mere existence of talks tells us where each side thinks the market is headed. If labels are demanding payment based on “reliance” on human-made music, they are signaling that AI companies are not being treated like neutral software vendors; they are being treated like downstream users of copyrighted cultural assets. That framing pushes the industry toward structured licensing rather than open-ended scraping. On the other side, startups are trying to preserve enough flexibility to train models efficiently, iterate quickly, and avoid a rights stack so expensive that only giant firms can afford to compete. Similar scale pressures have surfaced in cloud price optimization and software release gates, where infrastructure costs can decide which ideas survive.
In practical terms, the Suno negotiations may end up functioning like a draft standard. Even if the parties never sign a public agreement, their positions can influence future licensing templates, litigation strategies, and policy proposals. That is why this moment deserves attention from music executives, indie artists, and anyone building music tech. The result may not be a single universal license, but a menu of models that reflect different uses, different risk levels, and different degrees of "reliance" on the human catalog.
2. How AI Music Training Data Actually Works
Training data is not the same as output
One of the biggest sources of confusion in the training data debate is the assumption that if a model was trained on a song, it must be “copying” that song every time it generates new audio. In reality, training data is used to learn statistical relationships: how rhythm, timbre, phrasing, chord movement, and arrangement patterns tend to behave across a corpus. That does not automatically make every output a derivative work, but it does raise a deeper legal and ethical question: what rights are implicated when a model’s creative capacity is built from human expression at scale? This distinction is central to AI content ownership, and it’s why labels want to know not just what the tool can do, but how it was trained.
For non-technical readers, think of it like building a chef from a thousand cookbooks. The resulting chef does not reproduce every recipe word-for-word, but their style, palate, and instincts are inseparable from the books they studied. Labels argue that music models are similar: they are not merely inspired by existing catalogs; they are industrially dependent on them. Startups counter that inspiration is not infringement, and that society has long allowed artists, students, and systems to learn from prior work. The legal and commercial problem is that AI is not a single student—it is a machine that can ingest entire markets in a way no human could. That scale difference is where policy pressure begins.
Where the data comes from shapes the legal risk
AI companies may source training data from a mix of licensed recordings, public-domain material, user uploads, synthetic datasets, metadata, and scraped content. The legal and reputational risk changes drastically depending on which of these sources dominate. Licensed data reduces uncertainty but raises cost; scraped data lowers cost but increases exposure. A startup that cannot explain its provenance chain may struggle with investor diligence, product partnerships, and platform trust. This is why the conversation is no longer about “Can you train on music?” but “Can you prove which music, under what permissions, and for what downstream uses?”
Transparency is becoming a competitive advantage. Just as consumers increasingly prefer brands that explain their sourcing in other industries, music buyers and rights holders want explainability in AI. The most durable products may be those that can clearly separate training tiers: openly licensed public datasets, direct label deals, artist opt-ins, and separately governed synthetic layers. That approach mirrors the way teams build trust in other tech stacks, from explainable models to assessment systems that detect homogenized AI outputs. In all of these cases, auditability is the difference between adoption and backlash.
Why “training” is becoming a policy category
In the music industry, “training” used to be a technical term; now it is becoming a policy category. Lawmakers and regulators are starting to ask whether machine learning should receive a special framework because it uses copyrighted works differently from search, sampling, or broadcasting. That matters because the outcome can affect how labels negotiate, how startups raise money, and how courts interpret fair use or analogous doctrines. If training is treated as a controlled commercial use, AI companies will likely need standard access deals. If it remains a loosely governed use, litigation will keep defining the boundary on a case-by-case basis, which is costly and unpredictable for everyone involved.
From a strategic standpoint, this is why both sides are trying to define language early. Whoever controls the vocabulary—training, ingestion, inference, output, model weights, derivative risk—often controls the negotiation. That is a lesson familiar to anyone who has watched platforms define their own rules in public disputes or policy battles, including debates around TikTok business strategy and social influence metrics.
3. What “Reliance on Human-Made Music” Could Legally Mean
Economic dependence versus technical dependence
When a label executive says an AI system “relies on human-made music,” that phrase can mean several things. It may refer to economic dependence: the product could not exist without access to a vast body of recorded music made by human artists. It may refer to technical dependence: the model’s performance improves because it learns from patterns embedded in those recordings. It may even refer to market substitution: if the AI product can generate songs that satisfy similar consumer demand, it may compete with the very catalog that trained it. Each of these meanings supports a different legal theory, and each creates a different bargaining position.
Labels tend to emphasize the economic version because it is easiest to explain publicly. Their argument is intuitive: if a startup builds a subscription product on top of the creative labor of millions of musicians, then value should flow back into that ecosystem. AI companies often respond by stressing technical abstraction: the model does not store songs as a playlist, they say, but compresses them into learned parameters. That distinction is important legally, but not always persuasive commercially, because the market often cares less about internal architecture than about whether the product is built on uncompensated creative labor. The same tension appears in public reactions to AI, where users may not understand the model but still react strongly to perceived unfairness.
How courts and regulators may interpret the phrase
Legally, “reliance” could become part of a new licensing test. Regulators might ask whether the model’s outputs are meaningfully substitutable for human catalog use, whether the system was trained on identifiable copyrighted recordings, and whether the company has a duty to compensate the rights holders whose works materially contributed to the commercial product. That could produce an attribution-like standard, a data-dividend standard, or a negotiated access standard. In other words, “reliance” may become the bridge between abstract copyright theory and practical payment rules.
A useful analogy comes from supply-chain regulation: once a company depends on a critical upstream input, the business may be expected to document origin, quality, and risk controls. Music AI may be headed in a similar direction, with catalog provenance becoming as important as model performance. If that happens, startups that are able to prove clean sourcing could be rewarded with lower friction, while companies that depend heavily on scraped catalogs could face higher license costs and greater legal exposure. This is why the debate is not just about rights; it is about operational maturity.
Why the labels’ demand is more than a royalty request
At first glance, the labels’ demand looks like a simple plea for compensation. But it is also a request for recognition that music catalogs are infrastructure, not just content. Major labels are asking to be treated the way other critical data suppliers are treated in regulated ecosystems: as partners whose assets cannot be freely industrialized without compensation and oversight. That is a powerful argument because it reframes the question from “Should AI pay artists?” to “Should AI infrastructure pay the owners of the data layer that makes the infrastructure possible?”
This framing matters to the wider industry because it may influence how future deals are structured. For example, a startup could license training data from labels the way a film studio licenses archival footage, or the way a product team licenses datasets from publishers. It could also adopt a hybrid model where use during training is covered by one fee, generation at runtime is covered by another, and premium features such as stem extraction or style controls require an additional rate card. The more granular the use case, the more likely a durable deal becomes.
4. The Licensing Models That Could Emerge
Model 1: Flat fee access to training catalogs
The simplest model is a flat-fee license granting an AI company access to a defined catalog for training and evaluation. This would be easier to administer than per-output billing, and it would give startups certainty about costs. Labels may like it because it creates immediate revenue and preserves control over who can access the catalog. But flat fees also create a difficult pricing problem: if the model becomes wildly successful, the rights holders may feel underpaid, while the startup may feel overcharged if adoption is modest. That makes flat fees best suited to smaller datasets, pilot programs, or narrowly scoped commercial partnerships.
Flat-fee access could work as an entry point, much like licensing a limited pilot season before rolling out a broader distribution deal. It may also appeal to independent rights holders seeking predictable income, especially if bundled with discovery or promotional opportunities. For creators thinking about how distribution and revenue are linked, the logic is not unlike planning around tour budgeting or building multi-channel rollout plans: you trade complexity for control and certainty.
Model 2: Usage-based or output-based royalties
A more sophisticated structure is a usage-based license, where the AI company pays according to how much the catalog contributes to training, how often the system is used commercially, or how many tracks are generated. This sounds fairer in theory because it ties payment to actual use and value creation. However, usage-based pricing is hard to measure when models are probabilistic and outputs are not individually attributable to specific recordings. The more the output is blended across many sources, the more difficult it becomes to map a single generated track back to a licensing ledger.
Still, this model may become attractive as provenance tools improve. Watermarking, content credentials, dataset logging, and model lineage audits could make output-based pricing more feasible over time. It would also align with broader platform economics: the more a product is used, the more the underlying data suppliers participate in the upside. That principle appears in other digital markets where measurement has become a monetization enabler, much like conversion insights turn performance data into revenue strategy.
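To make that accounting concrete, here is a minimal Python sketch of how a per-generation fee might be split across rights holders once provenance tooling can attribute generated tracks to catalog contributions. The fee, the rights-holder names, and the attribution weights are illustrative assumptions, not terms from any real deal, and a production system would read audited logs rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical per-generation fee agreed in the license (not a real rate).
FEE_PER_GENERATION = 0.02  # dollars

# Each generated track carries attribution weights produced by provenance
# tooling (watermarking, dataset logging, model lineage audits). Weights for
# one track sum to 1.0. All names and numbers below are illustrative.
generation_log = [
    {"track_id": "gen-001", "attribution": {"label_a": 0.6, "label_b": 0.3, "indie_pool": 0.1}},
    {"track_id": "gen-002", "attribution": {"label_a": 0.2, "indie_pool": 0.8}},
    {"track_id": "gen-003", "attribution": {"label_b": 1.0}},
]

def allocate_royalties(log, fee_per_generation):
    """Split a per-generation fee across rights holders by attribution weight."""
    payouts = defaultdict(float)
    for entry in log:
        for rights_holder, weight in entry["attribution"].items():
            payouts[rights_holder] += fee_per_generation * weight
    return dict(payouts)

if __name__ == "__main__":
    for holder, amount in allocate_royalties(generation_log, FEE_PER_GENERATION).items():
        print(f"{holder}: ${amount:.4f}")
```

The hard part is not the arithmetic; it is producing attribution weights that both sides trust, which is exactly why watermarking and dataset logging sit at the center of this model's feasibility.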
Model 3: Tiered rights by use case
The most realistic near-term model may be tiered licensing. Under this approach, a startup would pay different rates depending on what it is doing with the data. Training a foundation model might carry one fee, fine-tuning for a commercial product another, and consumer-facing generation or stem manipulation a third. The tiered model recognizes that not all AI music activities pose the same level of market substitution or rights risk. It also gives labels room to negotiate based on strategic exposure rather than one-size-fits-all pricing.
This structure resembles enterprise software pricing, where basic access, API volume, and premium compliance features all carry different costs. For music tech, tiering could be the most practical way to accommodate both innovation and compensation. It also makes room for different parties with different leverage: a startup with a small prototype may only need a narrow license, while a major platform launching a mass-market generator may need broad rights and indemnities. That flexibility is likely essential if the market is going to avoid endless one-off negotiations.
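As a rough picture of how tiering could be administered, the sketch below models a rate card keyed by use case. The tier names, fees, revenue shares, and indemnity flag are assumptions made for illustration; an actual agreement would define them per catalog, territory, and product line.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    """One row of a hypothetical rate card: what the licensee may do and at what cost."""
    annual_fee: float          # illustrative flat component, in dollars
    revenue_share: float       # illustrative share of downstream revenue (0.0 - 1.0)
    includes_indemnity: bool   # whether the label indemnifies certain output claims

# Hypothetical rate card; a real deal would negotiate these values per catalog.
RATE_CARD = {
    "foundation_training": Tier(annual_fee=1_000_000, revenue_share=0.00, includes_indemnity=False),
    "commercial_fine_tune": Tier(annual_fee=250_000, revenue_share=0.05, includes_indemnity=False),
    "consumer_generation": Tier(annual_fee=100_000, revenue_share=0.15, includes_indemnity=True),
}

def quote(use_case: str, projected_revenue: float) -> float:
    """Return an estimated annual cost for a use case under the assumed rate card."""
    tier = RATE_CARD[use_case]
    return tier.annual_fee + tier.revenue_share * projected_revenue

print(f"Consumer generation estimate: ${quote('consumer_generation', 2_000_000):,.0f}")
```

The point is structural rather than numerical: once use cases are enumerated, pricing and compliance checks become lookups instead of bespoke, one-off negotiations.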
5. The Legal and Economic Stakes for Artists and Labels
Why artist trust is the real currency
Even if a label strikes a profitable deal, the music industry still has to answer a harder question: will artists trust that deal? If creators feel that their work was used to build AI tools without fair consent, the reputational damage can spread quickly through fan communities and creator circles. Trust is especially fragile because AI music touches identity, style, and labor all at once. For many musicians, the issue is not merely economic—it is existential. That is why discussions around artistic expression and emotional processing resonate so strongly in this debate.
Artists also care about control over likeness and style. Even if a tool is not literally copying a song, it may evoke a recognizable aesthetic that competes with the original artist’s market position. That possibility is one reason many creators are asking for opt-in systems, clearer consent mechanisms, and the ability to audit whether their recordings were included in training sets. The labels are negotiating not just for money, but for rules that maintain legitimacy in a market increasingly shaped by generative systems.
The economics of substitution versus discovery
The most important commercial question is whether AI music substitutes for human catalog consumption or expands it. If AI primarily serves as a discovery engine, a remixing layer, or a production assistant, it could increase demand for original music by lowering friction and stimulating fandom. If it behaves like an on-demand replacement factory, it could compress streaming, sync, and commission markets. The industry’s response will depend on which of these outcomes appears most likely in user behavior. That is why the policy conversation is not just about outputs, but about market structure and consumer intent.
There are precedents for tools that increase demand by making participation easier. In other words, technology does not always cannibalize the thing it automates. But when it does compete directly with labor, stakeholders demand guardrails. Music is especially sensitive because the supply side includes not just stars, but working musicians, producers, engineers, and writers. Any licensing model that ignores that ecosystem risks solving for platform growth while eroding the creative base it depends on. For more on the creator side of value capture, see unexpected industry influence and chart-topping influence patterns.
Why independent artists are watching closely
Independent musicians often worry that AI debates will be decided entirely by major labels, even though the downstream effects may hit them hardest. If licensing becomes expensive, startups may prioritize large catalogs and bypass smaller rights holders. If licensing becomes too loose, indies may see their stylistic fingerprints harvested without recognition or compensation. The most equitable framework would create accessible onboarding for independents: opt-in registries, transparent reporting, revocable permissions, and small-holder pricing that does not exclude emerging artists from the market.
That is where community-driven platforms and discovery ecosystems matter. Fans who care about the future of music can push for fairer standards by supporting tools that make provenance visible and compensation clear. Communities that already value curation, education, and artist support are well positioned to insist on better norms. The broader lesson is that policy is not only made in negotiations; it is also made in the choices listeners make when they discover and share music.
6. What the Music Industry Should Ask Before Signing an AI Deal
Questions about data provenance
Before any label or startup signs a deal, it should ask exactly what data is included, how it was sourced, and what rights were cleared. Is the corpus limited to fully licensed masters and publishing rights, or does it include scraped recordings and metadata? Are there audit logs? Can the license be narrowed to certain territories, markets, or use cases? Those questions sound technical, but they determine whether a deal is commercially scalable and legally defensible. In the AI era, data provenance is not an administrative detail—it is the foundation of trust.
Clear provenance also helps companies avoid downstream disputes. If a model was trained on clearly documented data, it is easier to resolve claims, attribute value, and respond to policy changes. This is similar to how structured workflows reduce surprises in product launches or event planning. For example, a disciplined rollout resembles the planning logic behind multi-channel event calendars or even the kind of contingency thinking that protects a launch when dependencies change, as discussed in contingency planning for AI-dependent launches.
Questions about output controls and misuse
Negotiators should also define what the model cannot do. Can users request outputs that imitate a living artist’s voice, phrasing, or arrangement style? Are there guardrails against near-duplication? Is there a reporting pipeline for takedown requests and audit disputes? These controls matter because many of the public fears around AI music are really fears about impersonation and displacement. The more precise the restrictions, the more likely the public will accept the product as a tool rather than a threat.
Output controls are also a reputational safeguard for labels. A deal that monetizes the catalog while allowing harmful cloning would likely face backlash from artists and fans. By contrast, a deal that includes transparent naming, use restrictions, and enforcement mechanisms could create a higher trust baseline for the entire category. That is one reason why the music industry may benefit from borrowing best practices from adjacent sectors such as enterprise AI governance and explainable systems.
Questions about economics and reporting
Finally, the parties should ask how value will be measured and reported. Will rights holders receive fixed payments, per-use fees, or revenue shares? Will reports show aggregate training use, output use, or both? Will artists have a way to verify whether their work influenced a model? If not, how will disputes be adjudicated? These are not trivial admin questions; they are the machinery of an AI music economy that can survive scrutiny. Deals that cannot be audited may not survive public pressure.
The key is to treat reporting as part of the product, not a legal afterthought. The most credible music AI companies will likely be those that make usage measurement, consent management, and rights reporting visible from the start. That is exactly the kind of infrastructure that creates long-term market confidence—much like trustworthy data systems do in other sectors where consumers and partners need to know what is happening behind the scenes.
7. The Policy Path Forward: What a Fairer Music-AI Market Could Look Like
Start with voluntary standards, then formalize them
The fastest route to a stable market may be a voluntary standards phase: major labels, independent rights groups, and AI companies agreeing on baseline definitions for licensed training, prohibited uses, auditability, and dispute resolution. Those standards could then be translated into industry-wide templates or policy guidance. This would reduce transaction costs and make it easier for smaller players to participate. In a fragmented market, common language often matters as much as common pricing.
Voluntary standards also create room for experimentation. A company could pilot a restricted catalog license, a rights-holder opt-in program, or a revenue share on premium generations. If a model works, it can be expanded; if it fails, it can be revised without the pressure of a courtroom ruling. That is how many digital markets evolve: first through informal best practices, then through documentation, and finally through formal enforcement. Music AI appears to be entering that same maturity curve.
Make provenance and consent machine-readable
One of the biggest opportunities in AI music licensing is to make consent machine-readable. That means rights holders could tag works with permissions that specify whether a recording can be used for training, fine-tuning, research, or commercial generation. If adopted at scale, that would dramatically reduce ambiguity and improve compliance. It would also let creators participate in the AI economy on terms they understand, rather than through opaque blanket agreements.
This approach aligns with broader trends in digital rights infrastructure. When permissions are structured clearly, platforms can automate compliance instead of guessing. That creates a healthier ecosystem for everyone: faster licensing for companies, clearer control for artists, and better product trust for listeners. In a market defined by speed and scale, machine-readable consent may become one of the most important forms of creative protection.
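One way to picture machine-readable consent is a small permissions record attached to each recording that a training pipeline must check before ingesting the file. The field names, the placeholder identifier, and the gating logic below are hypothetical sketches, not an existing standard; any real schema would be defined by rights organizations and industry working groups.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical per-recording permissions a pipeline could check before ingestion."""
    isrc: str                                   # recording identifier (placeholder value below)
    allow_training: bool = False                # may be used to train foundation models
    allow_fine_tuning: bool = False             # may be used for downstream fine-tuning
    allow_research: bool = True                 # may be used in non-commercial research
    allow_commercial_generation: bool = False   # outputs may be sold or streamed
    revocable: bool = True                      # rights holder can withdraw consent later
    territories: list[str] = field(default_factory=lambda: ["worldwide"])

def may_ingest(record: ConsentRecord, purpose: str) -> bool:
    """Gate ingestion on the declared purpose; deny anything not explicitly allowed."""
    allowed = {
        "training": record.allow_training,
        "fine_tuning": record.allow_fine_tuning,
        "research": record.allow_research,
        "commercial_generation": record.allow_commercial_generation,
    }
    return allowed.get(purpose, False)

example = ConsentRecord(isrc="XX-XXX-00-00000", allow_training=True)
print(may_ingest(example, "training"))               # True
print(may_ingest(example, "commercial_generation"))  # False
```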
Balance innovation with creator dignity
Ultimately, the future of music AI should not be framed as humans versus machines. It should be framed as creators versus extraction, and innovation versus opacity. The Suno-UMG talks reveal that the industry is trying to find a middle path where AI tools can exist, but not at the expense of the people whose work made them possible. That means paying for training data, limiting misuse, and designing products that amplify rather than erase human creativity.
If the market gets this right, AI music could become a discovery layer, a composing assistant, and a new creative medium with clear rules. If it gets it wrong, it could deepen distrust and intensify legal fights for years. Either way, the next phase of the debate will be shaped by the same questions surfaced in these stalled talks: what was used, who benefited, and what does fairness look like when a machine learns from human art?
Pro Tip: For labels, the strongest negotiating position is not simply demanding payment—it is insisting on auditability, use restrictions, and a pricing model tied to actual commercial dependence on the catalog.
8. A Practical Comparison of Possible AI Music Licensing Models
Before the industry settles on a standard, it helps to compare the likely models side by side. Each one solves a different problem, and each has different implications for cost, risk, and scalability. The right answer may ultimately be hybrid, but the table below clarifies the tradeoffs that are driving the current standoff.
| Licensing Model | How It Works | Pros | Cons | Best Fit |
|---|---|---|---|---|
| Flat-fee catalog license | AI company pays a set amount for access to a defined dataset | Simple, predictable, fast to negotiate | Can underprice success or overprice small startups | Pilots, limited catalogs, early-stage products |
| Usage-based royalty | Payments scale with training volume, generation volume, or revenue | More aligned with value creation | Hard to measure and attribute precisely | Large-scale commercial platforms with strong reporting |
| Tiered rights model | Different fees for training, fine-tuning, and consumer generation | Flexible and use-case specific | More complex to administer | Enterprise deals and multi-product companies |
| Opt-in creator registry | Artists choose whether their work can be used and under what terms | High consent and trust | Requires broad adoption to be useful | Independent artists and transparent ecosystems |
| Hybrid license + revenue share | Upfront fee plus a percentage of downstream commercial success | Balances certainty with upside participation | Requires robust reporting and audit rights | Flagship partnerships and major-label deals |
What stands out from this comparison is that no single model solves every problem. Flat fees are easy but blunt. Royalties are fairer but harder to track. Tiered models offer nuance but add operational complexity. Hybrid models are probably the most realistic for high-stakes deals, especially where both sides want a long-term relationship rather than a one-time transaction. The negotiation challenge, then, is not choosing perfection; it is choosing a structure that can survive contact with the real market.
For music businesses and creators who want to understand how deal design affects long-term leverage, this is similar to building a scalable promotional plan or investing in community flywheels. The best systems reward ongoing participation and make value visible. The worst systems hide value extraction behind convenience.
9. FAQ: AI Music Licensing, Training Data, and Label Demands
What is the core issue in the Suno-UMG negotiations?
The core issue is whether an AI music company should pay to train on human-made recordings and, if so, how that payment should be structured. Labels argue the tool depends on copyrighted catalogs and should compensate rights holders. Startups argue they are learning patterns, not copying songs, and that overly restrictive licensing could block innovation.
Why does training data matter so much in AI music?
Training data is the foundation of the model’s ability to generate music. If the dataset is sourced from copyrighted recordings, the legal and ethical status of that usage becomes central to the business. In music, where style and identity are highly valued, data provenance can affect trust, monetization, and litigation risk.
What does “reliance on human-made music” mean legally?
It can mean economic dependence, technical dependence, or market substitution. Labels use the phrase to argue that AI music tools are built on the value of human creativity and should pay accordingly. Regulators or courts may eventually treat reliance as a trigger for licensing obligations or enhanced disclosure.
Could AI music licensing become standardized?
Yes. The most likely path is a mix of voluntary standards, tiered licensing, and hybrid payment models that combine upfront fees with ongoing royalties. Standardization would reduce uncertainty, help startups scale, and give rights holders clearer leverage over usage terms.
How can artists protect themselves in this new market?
Artists should push for opt-in consent, clear reporting, audit rights, and restrictions on impersonation or voice cloning. Joining rights organizations, tracking policy developments, and supporting platforms with transparent sourcing can also help. The more machine-readable the permissions, the easier it is for artists to stay in control.
Will AI music replace human musicians?
It is more likely to reshape parts of the market than replace it outright. AI can already accelerate ideation, production, and personalization, but live performance, emotional authenticity, and fandom still give human artists a major advantage. The key question is whether AI becomes a complement to the ecosystem or a substitute that captures too much value.
10. The Bottom Line: What Comes After the Talks
The licensing standoff between Suno and major labels like UMG and Sony is not just a temporary deadlock. It is a sign that the music business is trying to redraw the boundary between inspiration, training, and monetization in an era where machines can learn from vast catalogs at near-zero marginal cost. That boundary will shape who gets paid, who gets credit, and which AI music products are allowed to scale. For anyone following music and media ownership in the AI era, this is the negotiation to watch.
The future likely won’t be an all-or-nothing answer. It will be a layered market: licensed datasets for serious commercial products, stricter controls around voice and style imitation, reporting frameworks for revenue, and opt-in systems for artists who want participation. That future is more complex than a simple “yes” or “no” to AI training, but it is also more realistic. If the industry can build fair rules now, AI music may become a sustainable part of the creative economy rather than a recurring legal crisis. If it cannot, stalled talks will keep turning into lawsuits, and lawsuits will keep turning into lost trust. The smartest path is to make the economics legible before the courts make them for everyone.
For readers tracking how media industries adapt to platform shifts, it is worth pairing this debate with broader lessons from AI’s cultural backlash, creative leadership in AI, and the way companies prepare for dependency risk in launch contingency planning. The companies that win will not be the ones that ignore the debate; they will be the ones that design for it.
Related Reading
- Navigating AI Content Ownership: Implications for Music and Media - A broader look at rights, attribution, and monetization in AI-powered media.
- Why AI Is Becoming Pop Culture’s Favorite Villain—and What Music Creators Can Learn From It - Understand the public backlash shaping policy and product design.
- Art Movements and AI: Navigating Creative Leadership in 2026 - Explore how creative leadership is changing as generative tools mature.
- When Your Launch Depends on Someone Else’s AI: Contingency Plans for Product Announcements - A practical guide to dependency risk and launch planning.
- Navigating Data in Marketing: How Consumers Benefit from Transparency - Why data transparency is becoming a competitive advantage across industries.
Marcus Ellington
Senior Music Policy Editor
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.