I Analyzed Every Gong Hit on TBPN and Here is What I Found
Ethan Frost, September 30, 2025
The Origin.
TBPN is a fast-growing, daily live-streamed tech talk show hosted by John Coogan and Jordi Hays, which has quickly become a staple for those following tech and business news. Broadcasting five days a week, the show combines unscripted sports-talk-show energy with insightful discussions, featuring interviews with prominent founders, investors, and tech leaders on the latest industry news. Like many fans of the show, I've found TBPN to be an oasis from the traditional news I consume during my day job in equity research. I've been watching TBPN for quite some time now, and the running gag that always stuck with me is the "size gong." Whenever a founder announces a new funding round or another major milestone, they "use" the gong as shown below.
Jordi doing what he does best.
I had a hunch: the bigger the fundraising round, the harder they hit the gong. What started as an opportunity to learn new skills with Python quickly evolved into a multi-month side project, where I taught myself TensorFlow and ultimately built an automated pipeline to detect, label, and track every gong hit from over 200 hours of TBPN content, all the while pondering the underlying philosophical question of what "loud" truly means. If you want the TLDR, scroll down to the charts for the results.
The Process.
For readers interested in all code, charts, and implementation details, my GitHub has you covered.
Initial Planning: I took this opportunity to expand my Python skills. The terrifying thought of manually skimming through hours of videos for gong hits left me with one option: using machine learning, an area in which I had no prior experience and no clear starting point.
Discovery: Since I wanted to work with Python, TensorFlow was the logical choice. This led me to YAMNet, an open-source audio classification model, which luckily already includes "Gong" among its 521 sound classes.
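For the curious, here's roughly what it looks like to run YAMNet over an audio file and pull out the gong score. This is a simplified sketch, not my exact pipeline code; the audio file name is a placeholder, and the model URL is YAMNet's official TensorFlow Hub location.

```python
import csv

import numpy as np
import soundfile as sf
import tensorflow_hub as hub

# Load the pretrained YAMNet model from TensorFlow Hub.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

# The model ships with a CSV mapping class indices to display names.
class_map_path = yamnet.class_map_path().numpy().decode("utf-8")
with open(class_map_path) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]

# YAMNet expects mono float32 audio at 16 kHz, scaled to [-1, 1].
waveform, sr = sf.read("episode_audio.wav", dtype="float32")  # placeholder file
assert sr == 16000, "resample to 16 kHz first"
if waveform.ndim > 1:
    waveform = waveform.mean(axis=1)  # downmix to mono

# scores: (num_frames, 521) class probabilities, one row per ~0.96 s frame.
scores, embeddings, spectrogram = yamnet(waveform)

gong_idx = class_names.index("Gong")
gong_scores = scores.numpy()[:, gong_idx]
print(f"Max gong score in file: {gong_scores.max():.3f}")
```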
First Test Run: On my first test run, the system detected roughly 35% of the gong hits in my initial batch. That was no surprise, as YAMNet's gong class is poorly rated: the model expected the smooth, traditional hum of a gong (sample from YAMNet library), whereas John and Jordi hit the gong as if it owed them money. In fact, you would not be reading this post right now if that weren't the case.
My favorite podcaster on my favorite live podcast (thumbnail).
Training: At that stage, "verification" meant scrubbing through 3-hour YouTube videos looking for sudden camera cuts or timestamps when a guest announced funding. I collected ~300 manually labeled samples and trained a classifier on top of the model. It worked very well, hitting ~95% accuracy in testing and catching every gong hit, along with the occasional false positive from Jordi playing the boat horn sound effect.
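A transfer-learning setup like this can be surprisingly small. The sketch below trains a tiny Keras classifier on top of YAMNet's 1,024-dimension embeddings; the layer sizes, file names, and training settings are illustrative, not my exact configuration.

```python
import numpy as np
import tensorflow as tf

# X: YAMNet embeddings for labeled windows, shape (n_samples, 1024).
# y: 1 for "TBPN gong hit", 0 for everything else (boat horns, applause, speech).
X = np.load("gong_embeddings.npy")  # hypothetical files from the labeling pass
y = np.load("gong_labels.npy")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # gong probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With only ~300 samples, hold out a validation slice to watch for overfitting.
model.fit(X, y, validation_split=0.2, epochs=30, batch_size=16)
model.save("gong_classifier.keras")
```

The nice part of this approach is that YAMNet does the heavy lifting: the small head only has to learn what separates a TBPN gong hit from everything else in the studio.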
Detection Process: When processing an episode, the system downloads the YouTube audio and preprocesses it for detection. The audio is then analyzed locally using a custom-trained model running on my computer, applied in 0.96-second overlapping windows. Any segment scoring at least 92% confidence is flagged as a potential gong hit. Since a single gong strike can trigger several overlapping detections, the system consolidates detections in close temporal proximity, retaining only the highest-confidence detection in each group (sketched in code below). Each detection is accompanied by an automatically generated timestamp, which is subsequently verified by hand, an essential step for accurately labeling funding-related gong hits. The model is purposely tuned to overreport rather than underreport, as it's preferable to filter out false positives than to miss actual events.
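The consolidation step is simple enough to sketch. Below is a minimal version of the grouping logic described above, assuming detections arrive as timestamp/confidence pairs; the ~8-second window matches the consolidation window mentioned later in this post, and the `Detection` type is just for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float   # seconds into the episode
    confidence: float  # classifier score, 0..1

def consolidate(detections, window=8.0, threshold=0.92):
    """Collapse overlapping detections into single gong events.

    Detections within `window` seconds of the previous one are treated as
    the same strike; only the highest-confidence member of each group survives.
    """
    kept, group = [], []
    for d in sorted(detections, key=lambda d: d.timestamp):
        if d.confidence < threshold:
            continue  # below the 92% flagging threshold
        if group and d.timestamp - group[-1].timestamp > window:
            kept.append(max(group, key=lambda d: d.confidence))
            group = []
        group.append(d)
    if group:
        kept.append(max(group, key=lambda d: d.confidence))
    return kept
```

Sorting by timestamp first keeps the grouping a single pass over the episode.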
Measuring Loudness: Peak detection alone was ineffective because nearly every hit was "maxed out." To address this, I implemented the EBU R128 broadcast standard, using Loudness Units relative to Full Scale (LUFS) for relative loudness and True Peak for spike measurement. Both are recognized as more accurate indicators of how the human ear perceives loudness. With these measurements, I calculated the Peak-to-Loudness Ratio (PLR), simply True Peak (dBTP) minus integrated loudness (LUFS), which became the core standard for the entire project. PLR was chosen because it captures the absolute loudness of the gong hit at impact (True Peak) while accounting for relative sound levels before and after impact (LUFS). Here is the primary reference I leveraged for this entire element of the project. For my analysis, I normalized PLR values across all gong hits to enable easier comparison.
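If you want to compute this yourself, a library like pyloudnorm implements the ITU-R BS.1770 loudness measurement that EBU R128 builds on. The sketch below is a simplified version of the idea: integrated loudness in LUFS for the segment around a hit, a 4x-oversampled peak as a stand-in for True Peak, and their difference as the PLR. The oversampling shortcut, the function name, and the exact windowing are assumptions for illustration, not my production code.

```python
import numpy as np
import pyloudnorm as pyln
import soundfile as sf
from scipy.signal import resample_poly

def plr_for_segment(path, start_s, end_s):
    """Peak-to-Loudness Ratio (dB) for one gong hit: True Peak - LUFS.

    The window should span a few seconds around the hit, since integrated
    loudness needs more than a single instant of audio to be meaningful.
    """
    audio, rate = sf.read(path)
    segment = audio[int(start_s * rate):int(end_s * rate)]

    # Integrated loudness per ITU-R BS.1770 / EBU R128, in LUFS.
    meter = pyln.Meter(rate)
    lufs = meter.integrated_loudness(segment)

    # Approximate True Peak by 4x oversampling before taking the max,
    # which catches inter-sample peaks that plain sample peak misses.
    oversampled = resample_poly(segment, up=4, down=1, axis=0)
    true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

    return true_peak_db - lufs
```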
Output: Results were output as .csv files containing timestamps, confidence scores, and video metadata. The pipeline processed entire episodes in minutes, and detection parameters could be easily adjusted as needed; further implementation details and code are available in the linked GitHub repository. Some hits rang so long that they triggered multiple detections despite my ~8-second detection consolidation window, typically when John used the "proper" form.
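The export itself is nothing fancy. Here is a sketch of what writing one episode's detections to CSV can look like, with a deep link per row for the manual review pass; the column names and data shape are illustrative.

```python
import csv

def export_detections(detections, video_id, episode_title, out_path):
    """Write one episode's detections; `detections` is a list of
    (timestamp_seconds, confidence) pairs."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["video_id", "episode_title", "timestamp_s",
                         "confidence", "youtube_url"])
        for ts, conf in detections:
            # Deep link straight to the moment, for the manual review pass.
            url = f"https://www.youtube.com/watch?v={video_id}&t={int(ts)}s"
            writer.writerow([video_id, episode_title, round(ts, 2),
                             round(conf, 4), url])
```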
Manual Labeling: I personally reviewed each hyperlinked video segment flagged by the detection system. For every detected gong hit, I watched the corresponding moment to verify whether it was related to a funding announcement. If it was, I recorded the relevant funding data, and for every hit (not just the funding hits), I noted which host struck the gong. This careful review ensured each event was accurately categorized and attributed within the dataset.

Late August was pretty fun.
Final Dataset: The final dataset compiles every detection across all episodes since the new studio launched. I layered in the manual labels to build the comprehensive dataset that powers the analysis below. Between labeling, testing, and refining, I listened to a bare minimum of 1,163 gong hits during development. The actual number is likely at least double, as I was usually listening to 2-3 hits per page visit. On August 30th alone, I heard 556 gong hits.
Note: my dataset spans from the launch of the new studio in late May to the end of August 2025. I essentially spent the month of September writing this post. If I had the time to do all the charts and analysis again, frankly, I would have included September, but at least anecdotally, it seems as though the TBPN team has cracked down on the audio volatility of the gong hits... for better or worse.
The Results.
There is a measurable relationship between the funding amount and the loudness of the gong, with the analysis producing an R² of approximately 0.19. Interesting results... at least according to behavioral science standards. Click on a point to view the gong hit on any chart with scatter points (desktop only).

A few factors help explain the correlation. Same data, John vs. Jordi: Jordi hits the gong consistently louder than John...

And for significantly more money. This could partially explain the correlation.

John hits the gong more often for early-stage rounds, and more than twice as often overall compared to Jordi, which limits the impact of Jordi's tendencies above. Given that John accounts for almost 70% of the funding gong hits, I decided to look closer.

Cool, but John hitting more early-stage rounds than Jordi doesn't explain why he is quieter than Jordi. I don't think John is quieter because he lacks power; he hits bangers too. It's because he does sitting gong hits. In fact, John hits the gong sitting 37% of the time overall, with more sitting Series A hits than Jordi, who never sits.

Not only that, John stands for dollars, regardless of the funding stage.

John is significantly quieter when sitting than when standing for gong hits.

Conclusion.
What started with a simple question about the size gong became a project that turned long weekends into a dataset and some intriguing results. A correlation between funding and loudness is present, with an R² of about 0.19, even if it is messy. Factors such as microphone placement, room reverb, and how loudness is defined complicate the picture. The point, though, is that data can describe patterns but can't capture the performance at the heart of TBPN. Loudness and funding rounds are measurable, but the human experience of Jordi rupturing your eardrums is something computers will never understand.

Beyond Fundraising, All 235 Gong Hits.
Here is every single gong hit my model detected.

John vs. Jordi.

Gong hits per episode by day of the week.

Loudness of the top five episodes, ranked by the total number of gong hits per episode (from left to right).

Loudness of the top five episodes, ranked by the loudest median gong hit in each episode.

Total gongs I heard by day of the week.

Greatest hits: my favorites from the project.