Notes on Creative Transparency

2025-06-24 12:43 · virtualelena.substack.com

Wherein I investigate why (some) companies are publishing increasingly granular data

I’ve recently become captivated by Ramp’s approach to marketing. No, not the TBPN sponsorship, though that demands a David Ogilvy-style “Confessions” book at some point.

I’m talking instead about the regular cadence with which the team at the Ramp Economics Lab, led by Ara Kharazian, publishes data about topics ranging from tech firms’ adoption of AI to shifts in digital advertising spending. This data comes from Ramp’s corporate card spend, which of course they have access to, but are under no obligation to publish in any format. A Ramp run differently could do what other brokers of spending data do, and charge businesses hundreds of thousands of dollars (in some cases millions) for the favor of surfacing proprietary insights into how businesses and consumers are spending money. A portion of Visa’s $3.2 billion “Other” revenue earned in 2024, for example, was generated by this exact service. Ramp is publishing (at least some of it) for free.

These reports are fascinating artifacts of standalone research. In the AI Index, we learn that OpenAI dominates other model providers like Anthropic, xAI, Google, and DeepSeek in terms of firm-wide adoption (a little more than 41% of businesses that use Ramp’s corporate cards also pay for AI, led by OpenAI at 33.9%). Notably, AI adoption flatlined in May for the first time; despite this slowdown, the US government is likely dramatically underestimating AI model adoption, given that it reports only 9% of firms spending on AI model subscriptions. Either that, or Ramp’s customers are just way more tech-native than the average business, which in itself is a significant finding.

When Ramp published their first data deep-dive (a fun look at corporate spending on booze at business dinners), I remarked that companies are now behaving like unofficial offshoots of FRED or the US Census Bureau. This isn’t just a Ramp phenomenon, though: any company that reaches significant enough scale and has a bird’s-eye view into user behavior can publish this kind of information. Some do.

I’m beginning to think of this phenomenon as creative transparency. There are lots of different motivations for creative transparency: a company wants to dick-swing a bit and demonstrate just how valuable the trove of data they’re sitting on is (the range of insights Ramp is sharing makes you grateful that they’re being so forthcoming, but also makes you wonder, “what aren’t they telling us?”). It’s also a way to insert your company into whatever talking point du jour is trending. Ryan Petersen, the founder of the shipping company Flexport, engaged in creative transparency when he posted screenshots of their internal tracking system, which served as particularly salient fodder during the heyday of Trump’s tariff impositions.


Creative transparency is emerging at a moment when amassing large swathes of user data is no longer a source of shame for large tech companies. While Meta, Google, and other tech giants of the past cycle are still reeling from the latent trauma of being dragged in front of Congress and lambasted for everything from monopolistic behavior to unprovable responsibility for the outcome of the 2016 election, a newer crop of tech companies views the data it holds on users and customers as information in the public interest. Perhaps that’s why we’re seeing more of it now.

I wanted to trace some kind of origin story for creative transparency, so I reached out to Ara from Ramp, who very gamely agreed to go on a walk through downtown Manhattan and explain how he ended up doing this kind of work. It turns out that “this kind of work” started when he was employed by Square, which had already cultivated a practice of compiling internal company metrics for journalists who asked for data to illuminate trend pieces about culture and economics. The success of those kinds of stories (including a popular story on fidget spinner sales) prompted more publications from Square itself, which Ara took the initiative on. “We started releasing a lot of economics work, specifically tracking restaurant-worker wages and retail-worker wages and tips ... [the] press started referring to that and so that did very well,” Ara told me.

Creative transparency often has a political underbite, and aims to compensate for gaps in official reporting. Ara remarked that when he worked at Square, “Government metrics weren’t quick to track changes in wages when they were changing so quickly month over month, particularly at the local level.” This observation has carried forward to his AI Index reporting at Ramp. When reflecting on the divergence between Ramp’s findings on AI adoption and the US government’s (recall, per Ramp’s data, adoption stands at over 41%, while the U.S. Census Business Trends and Outlook Survey reports just 9%), Ara attributed the gap to how government surveys are phrased: “The [survey] question, ‘Do you use AI to produce goods and services?’ sounds like it’s asking if AI is being used to make widgets in your factory.” Obviously, not all companies make widgets and not all companies have factories.

The true level of AI adoption is likely somewhere between Ramp’s data and the US government’s. “Ramp businesses are not representative. They’re efficient. They’re tech-forward and they’re early adopters,” per Ara.

Other companies try to complement US government reports. Anthropic, for example, publishes the AI Economic Index, which tracks how professionals are integrating AI into their daily workflows. If you read through a recent publication from February on AI usage across the labor market, you’ll find a fairly intuitive story suggesting that queries on Claude are primarily driven by computer programmers and those in left-brained fields like math, followed by arts and media (which encompasses fields like technical writing) and education. This corroborates more colorful on-the-ground reporting: the erasure of entry-level programming jobs (perhaps because developers using Claude and tools like it have cannibalized the new-hire market for engineers) and, of course, newfound anxieties around students using AI to cheat their way out of an education.

To understand the impulses behind creative transparency, it helps to think about intended audiences. For a firm like Ramp, which is trying (and evidently succeeding) in its bid to establish itself as the new corporate card provider to challenge incumbents like American Express, creative transparency is a mechanism to demonstrate their deep penetration into tech-forward businesses (with the insinuation that if you are not included in Ramp’s repository of data, you are not a participant in technological progress, and you really should hurry up and become a Ramp user).

Anthropic, meanwhile, seems to be using their AI Economic Index as a form of political lobbying, with findings that are intended to be complementary to more established forms of data like the BLS’ monthly jobs report (their comms team is also tripling in size, with notable incursions in DC). Anthropic’s publications allow them to steer conversations around legislation, and perhaps serve as a counternarrative to the aforementioned less-optimistic stories that have been percolating around AI adoption. Anthropic would argue that AI is augmenting programmers, not cannibalizing them, and reinforcing student learning rather than rendering it obsolete.

I enjoy Ramp’s AI Index reports in particular because they have a juicy air of “who are the hottest boys at our high school” lists you’d see in bathroom stalls as a teenager, except in this case the hottest boys are the hottest AI startups and the bathroom stall is the internet. In other reports, you can see the fastest-growing software vendors among Ramp card users (this month included smaller companies like Descript, n8n, and Lindy.ai). I took the opportunity to ask Ara about this choice. While I enjoy Ramp’s reports, I’m always a bit surprised by their level of candidness: if I’m Anthropic, am I really happy that everyone knows Ramp users spend 4x more on OpenAI's LLMs than they do on mine?

Ara had a different take: “I think that a business that is lagging in market share would also find it helpful to know exactly where it is in the market. It's not usually a surprise [and] even those businesses themselves often do not have that level of granularity in the data.” I thought this argument made a ton of sense: if a firm already knows it’s behind, it can use Ramp’s data to recalibrate strategy rather than feel blindsided.

Creative transparency is often positioned as an ongoing conversation: Ramp now updates its AI and Advertising indexes on a monthly basis, and Anthropic open-sourced their AI usage research to invite more outside contributions and analysis. It can also be social fodder: like the Spotify Wrapped campaign, or imitators like Strava’s recent “Year in Sport” review.

Spotify is a particularly fecund purveyor of creative transparency because they do it across multiple vectors: users and artists. While the user-level Spotify Wrapped functions as viral marketing (like telling strangers about your dreams, there is a dueling sport of solipsism and collectivism that dominates our feeds during the year-end Wrapped screenshot cavalcade), their slightly less-popular annual Loud and Clear report argues that, contrary to the prevailing narrative, Spotify is growing the economic pie for artists rather than putting a ceiling on their earnings. So Spotify uses creative transparency to say different things to myriad audiences: to users, they’re glazing about the fact that we as listeners all contain multitudes; to artists and concerned politicians, they’re assuring them that they are not destroying the livelihoods of creatives.

Spotify is actually one of the few public companies that treats data as a substrate for ongoing conversation: the most prolific companies that engage in creative transparency have not yet IPO’ed. So part of me wonders whether creative transparency is a substitute for the kinds of disclosures, like quarterly and annual reports, that public companies are required to file with the SEC. If we wanted to psychoanalyze another impulse behind creative transparency, perhaps it’s the desire to maintain a regular pulse of direct communication with business journalists and the outside world. Especially in an age where companies are “perennially private” because of larger VC funding rounds and the wide availability of private credit, it makes sense that companies would want to forge ahead and propose their own narratives in the vacuum created by the absence of analysis from more conventional avenues. Ben Thompson can’t write about your quarterly earnings if you never file them, so companies are building Stratechery in-house.

Another possibility is that there are only a limited number of companies with the introspection required to recognize what makes their datasets interesting to normal people. During our chat, Ara from Ramp remarked that “Not every company can do this. I think it’s really hard to figure out what your company’s data set is particularly well suited to do.” The impulse at most companies is to try to speak to trending topics, like US unemployment data. But then “you’re going to forget that, ‘wait a minute, my data set cannot actually answer this question effectively or credibly.’”

Or maybe the pace of creative transparency simply quickens as companies grow closer to an IPO and want to start broadening their audience. Who can say?

There is the “look at what our internals say about you” form of creative transparency, which Ramp and others are doing well. There is also an inversion, when a company’s entire raison d’être is predicated on the act of transparency itself. The AI benchmarking startup LMArena is perhaps the best example of this. Let’s call this the “look what you say about our internals” form of creative transparency.

For those who aren’t familiar, LMArena is a (now venture-backed) startup that asks the internet to adjudicate anonymous “model-versus-model” cage matches (or in simpler terms, it asks people to rank their preferences of two outputs from two different models). It then open-sources the tallies in the form of a leaderboard, which anyone can peruse to understand which models perform the best across a number of tasks, including coding, web development, search, vision, and more.
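LMArena doesn’t lay out its rating math here, but as a rough sketch of how this kind of system can work, pairwise preference votes are commonly rolled up into a leaderboard with Elo-style updates. The model names and votes below are made up purely for illustration, and the real methodology may differ:

```python
from collections import defaultdict

def elo_update(ratings, winner, loser, k=32):
    """Nudge two ratings after a single pairwise vote (standard Elo update)."""
    ra, rb = ratings[winner], ratings[loser]
    expected_win = 1 / (1 + 10 ** ((rb - ra) / 400))  # predicted chance the winner wins
    ratings[winner] = ra + k * (1 - expected_win)
    ratings[loser] = rb - k * (1 - expected_win)

# Hypothetical votes: (model the voter preferred, model it beat)
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-c", "model-b")]

ratings = defaultdict(lambda: 1000.0)  # every model starts from the same baseline
for winner, loser in votes:
    elo_update(ratings, winner, loser)

# The "leaderboard": models sorted by rating, best first
for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {rating:.1f}")
```

The appeal of aggregating this way is that no single vote matters much; the ranking emerges from thousands of noisy human preferences rather than a fixed answer key.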

The site advertises 3.5-plus million human votes on its landing page, and the leaderboard it generates auto-updates every few days. Contrast that with the grandees of the benchmark circuit. MMLU ships a frozen multiple-choice exam; you get an 86.4% and call it a day, even though researchers now catalog numerous errors that quietly warp the scoreboard. Scale AI and the ARC Prize run similar leaderboards with similarly game-able training datasets, where researchers can intentionally or unintentionally devise models intended to do well solely on benchmarks, and disappoint users when they fail on real-world tasks. Many have called AI benchmarks an exemplar of Goodhart’s Law (“when a measure becomes a target, it ceases to be a good measure”). Model overfitting is a well-known problem.

While LMArena itself is not fully immune to accusations of gameability (here is LMArena’s response to those accusations), I think its existence points to a kind of apotheosis in creative transparency, where the product is devoted to the exclusive act of collecting, parsing, and amplifying metrics. In other words, they are selling transparency as a service. While most companies engage in creative transparency as a marketing side-channel, for LMArena, transparency is the business model.

First Twitch played Pokemon, then Claude played Pokemon, and now AI plays itself.

Anyway, the fact that I’ve now spent the better part of a Sunday and over 2,500 words spilling ink on creative transparency is maybe the most telling lesson about its power. I was not paid by Ramp or any other company mentioned to write this piece – they all posted some interesting metrics, which compelled me enough to write a post exploring why. My boyfriend actually noted that this essay is not an example of creative transparency, but another form of advertising, namely the free kind. We were having a conversation about the way Crumbl Cookies was able to garner countrywide renown (and worldwide astonishment) simply by hypnotizing influencers into creating mukbang videos highlighting their consumption, and he noted that I was essentially doing the same thing, but for a corporate card startup.

I do think, at least, that producing this essay is a bit more philosophically bountiful than watching people consume 1,000-calorie cookies. Toward the end of my conversation with Ara he said something that struck me: “In my life, I don't want to be something I'm not. And I don't think my data should do something that it's not well suited to do.” If it’s ever possible to find truth, it’s only achievable after sifting through a cacophony of diversions: statistics whose underlying premise is based on a poorly-worded question, newsreels with sensationalized commentary, scientific expertise with a political agenda. Bad data purveyors will twist the numbers they sit on to suit a preordained conclusion. Good data purveyors let numbers tell a story.

It would be naive to argue that creative transparency has a neutral agenda: companies want you to buy their products, after all. But the best forms of creative transparency have a level of rigor and honesty that makes me soften to the fact that I know I’m being sold something. So maybe that’s ultimately why creative transparency appeals to me. It’s honest about what it is, and what it isn’t.

