TBPN Gong Analysis
Ethan Frost, 9/20/2025
The Origin.
TBPN is a fast-growing daily live-streamed tech talk show, hosted by former founders John Coogan and Jordi Hays, that has quickly become a staple for those following tech and business news. Broadcasting five days a week, the show blends unscripted, sports-talk-show energy with insightful discussions, interviewing prominent founders, investors, and tech leaders about the latest industry news. Like many fans, I've treated TBPN as my oasis from the traditional news I consume during my day job in equity research. I've been watching the show for quite some time now, and the running gag that always stuck with me is the "size gong": whenever a founder announces a new funding round or another major milestone, the hosts hit the gong, a recurring in-joke about startup success.
Jordi doing what he does best.
I had a hunch: the bigger the fundraising round, the harder they hit the gong. What started as an opportunity to learn new skills with Python quickly spiraled into a multi-month side project teaching myself TensorFlow, ultimately building an automated pipeline to detect, label, and track every gong hit across over 200 hours of TBPN content, all while wrestling with the underlying philosophical question of what "loud" truly means. If you want the TL;DR, just scroll down to the charts for the results.
The Process.
For readers interested in all code, charts, and implementation details, my GitHub has extensive documentation.
Initial planning: I took this opportunity to expand my Python skills, and the terrifying thought of manually skimming through hours of video for gong hits left only one option: machine learning, something I had never done before and had no idea where to start with.
Discovery: Given that I wanted to work with Python, TensorFlow was the logical choice. This led me to YAMNet, an open-source audio classification model, which conveniently already includes a gong class (a minimal sketch of how I used it follows).
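Here is a rough sketch of running YAMNet over a waveform and flagging frames where its AudioSet "Gong" class scores highly. The 0.3 threshold and the silent placeholder waveform are illustrative, not what the final pipeline uses:

```python
import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load the pretrained YAMNet model from TensorFlow Hub.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

# Read the AudioSet class map shipped with the model to find the "Gong" class.
class_map_path = yamnet.class_map_path().numpy().decode("utf-8")
with tf.io.gfile.GFile(class_map_path) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]
gong_index = class_names.index("Gong")

# YAMNet expects a mono, 16 kHz, float32 waveform in [-1, 1].
waveform = np.zeros(16000 * 10, dtype=np.float32)  # placeholder: 10 s of silence
scores, embeddings, spectrogram = yamnet(waveform)

# scores has one row per ~0.48 s frame; flag frames where "Gong" scores highly.
gong_scores = scores.numpy()[:, gong_index]
for i, score in enumerate(gong_scores):
    if score > 0.3:  # illustrative threshold, not the pipeline's actual value
        print(f"possible gong at ~{i * 0.48:.1f}s (score {score:.2f})")
```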
First test run: I tested TBPN audio and YAMNet detected only ~35% of the gong hits in my test batch. YAMNet expects a slow, traditional gong strike; John and Jordi do not hit it that way. In fact, you would not be reading this post right now if they did.
Training: At that stage, "verification" meant dragging through 3-hour YouTube videos looking for sudden camera cuts or timestamps when a guest announced funding. I collected ~300 manually labeled samples and trained a classifier on top of the model (sketched below). It worked very well, hitting ~95% accuracy in testing and catching every gong hit, along with the occasional false positive from Jordi spamming the boat horn on the soundboard.
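As a sketch of that step: a small dense head trained on YAMNet's 1,024-dimensional clip embeddings. The layer sizes, dropout rate, and the random placeholder data are my assumptions, standing in for the ~300 hand-labeled samples:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: X holds 1,024-dim YAMNet embeddings for labeled clips,
# y holds labels (1 = gong, 0 = not gong). Random values stand in for the
# ~300 hand-labeled samples.
X = np.random.rand(300, 1024).astype(np.float32)
y = np.random.randint(0, 2, size=300)

# A small dense head on top of the frozen YAMNet embeddings.
clf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(X, y, epochs=20, validation_split=0.2)
```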
Output: Every run generated a CSV with timestamps, confidence levels, loudness metrics, and clip links, which became the backbone of all further testing and analysis.
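For illustration, a row of that CSV might look like the following; the column names and values here are hypothetical, not the repo's exact schema:

```python
import pandas as pd

# Hypothetical schema for one detection row; fields are illustrative.
detections = [
    {
        "episode": "2025-06-02",
        "timestamp_s": 4512.3,
        "confidence": 0.97,
        "lufs": -14.2,
        "true_peak_dbtp": -0.3,
        "plr_db": 13.9,
        "clip_url": "https://youtu.be/VIDEO_ID?t=4512",
    },
]
pd.DataFrame(detections).to_csv("gong_detections.csv", index=False)
```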
Measuring loudness: Peak detection alone didn't work. Nearly every hit was maxed out, so I implemented the EBU R128 broadcast standard: LUFS for loudness, True Peak for spikes, and Peak-to-Loudness Ratio (PLR) as the core metric. Thank you, Perplexity.
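A minimal sketch of those measurements, assuming the pyloudnorm library for BS.1770/R128 integrated loudness and a simple 4x-oversampled estimate of True Peak; the original pipeline may compute these differently:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import resample_poly

# Load a clip around one detected gong hit (assumed mono here).
data, rate = sf.read("gong_clip.wav")

# Integrated loudness (LUFS) per ITU-R BS.1770, the basis of EBU R128.
meter = pyln.Meter(rate)
lufs = meter.integrated_loudness(data)

# Approximate True Peak (dBTP) by 4x oversampling before taking the peak.
oversampled = resample_poly(data, up=4, down=1)
true_peak_dbtp = 20 * np.log10(np.max(np.abs(oversampled)))

# Peak-to-Loudness Ratio: how far the transient spikes above average loudness.
plr_db = true_peak_dbtp - lufs
print(f"LUFS: {lufs:.1f}  True Peak: {true_peak_dbtp:.1f} dBTP  PLR: {plr_db:.1f} dB")
```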
Consolidation quirks: Some hits rang so long they triggered multiple detections despite my ~8-second consolidation window, mostly thanks to John's good form.
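The consolidation logic amounts to merging detections that fall within the window and keeping the highest-confidence frame. This is a simplified sketch of the idea, with hypothetical field names, not the exact code from the repo:

```python
def consolidate(detections, window_s=8.0):
    """Merge detections within window_s seconds of each other into one hit,
    keeping the highest-confidence frame. detections is a list of dicts with
    'timestamp_s' and 'confidence' keys (hypothetical field names)."""
    merged = []
    for det in sorted(detections, key=lambda d: d["timestamp_s"]):
        if merged and det["timestamp_s"] - merged[-1]["timestamp_s"] < window_s:
            # Same gong still ringing: keep whichever frame scored higher.
            if det["confidence"] > merged[-1]["confidence"]:
                merged[-1] = det
        else:
            merged.append(det)
    return merged
```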
Final pass: I exported the results for every detection across all episodes since the new studio, added manual data, and built the dataset that powers the analysis below. Between labeling, testing, and refining, I estimate I listened to 1,000+ gong hits.
The Results.
There is a measurable relationship between funding amount and gong loudness, with the regression producing an R² of about 0.19. Interesting results... at least by behavioral science standards. Click on a point to view the gong hit.
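For reference, the R² comes from a simple least-squares fit. Here is a sketch of that computation, assuming round size on a log scale (my assumption) and placeholder numbers standing in for the real dataset:

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the real dataset: round size in USD
# and loudness (PLR, dB) for each funding gong hit.
funding_usd = np.array([5e6, 20e6, 50e6, 100e6, 250e6])
plr_db = np.array([11.2, 12.5, 12.1, 13.8, 14.0])

# Regress loudness on log10(round size); rvalue**2 is the variance explained.
fit = stats.linregress(np.log10(funding_usd), plr_db)
print(f"R^2 = {fit.rvalue ** 2:.2f}")
```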
A few factors explain why I think the correlation shows up. First, the same data split by host: Jordi hits the gong consistently louder than John...

And for significantly more money, which could partially explain the correlation.

John hits the gong more often for early-stage rounds, and more than twice as often overall compared to Jordi, which limits the impact of Jordi's propensities above. Given that John represents almost 70% of the funding gong hits, I decided to look closer.

Cool, but John hitting more early-stage rounds than Jordi doesn't explain why he is quieter. I don't think John is quieter because he lacks power (he hits bangers too); it's because he does sitting gong hits. In fact, John hits the gong sitting 37% of the time overall, with more sitting Series A hits than Jordi, who never sits.

Not only that: John stands for the dollars, regardless of funding stage.

John is significantly quieter when sitting versus standing for gong hits.

Conclusion.
What started with a simple question about the size gong became a project that turned long weekends into a dataset and intriguing results. The correlation between funding and loudness is there, with an R² of about 0.19, even if it is messy. Factors like microphone placement, room reverb, and how loudness is defined complicate the picture. The point, though, is that data can describe patterns but not capture the performance at the heart of TBPN. Loudness and funding rounds are measurable, but the human experience of Jordi rupturing your eardrums is something computers will never understand.
Beyond Fundraising: All 235 Gong Hits.
Here is every single gong hit my model detected.
Gong hits per episode by day of the week.

Loudness of the top five episodes ranked by number of total gong hits per episode.

Loudness of the top five episodes ranked by loudest median gong hit in episode.
