CVPR’24 in Numbers

CVPR 2024 is just around the corner and, just like every year, we are keen to explore the numbers behind the largest computer vision conference. How many authors does it take to write a paper? Which universities, research labs, or companies contribute the most? Which countries do the authors come from? And what are the new trends and topics compared to last year?

The main challenge? Before the conference, the only official information available is the list of accepted papers, containing just the paper titles and the author names. In this two-part blog post series, we:

  • Highlight key CVPR’24 statistics (this blog post), without worrying about how these were obtained (a short sketch of the input data follows this list), as well as
  • Provide the technical details of how these statistics were obtained (next blog post). In particular, we will dive into how LLMs can help, where they work well, and where they produce inconsistent or wrong results.
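
For readers who want to follow along, here is a minimal sketch of the kind of input we start from. It assumes a hypothetical papers.csv export with a title column and a semicolon-separated authors column; the actual format of the official list may differ.

# Minimal sketch: load a hypothetical papers.csv with "title" and
# semicolon-separated "authors" columns into a list of dicts.
import csv

def load_papers(path="papers.csv"):
    papers = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            authors = [a.strip() for a in row["authors"].split(";") if a.strip()]
            papers.append({"title": row["title"], "authors": authors})
    return papers

papers = load_papers()
print(f"{len(papers)} accepted papers")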

CVPR is Growing Faster than Ever

At this year’s CVPR, there were 11 532 submissions, out of which 2 719 were accepted to the main conference (23.6% acceptance rate). Looking at the historical data, we see that not only is CVPR still growing, it is in fact growing faster than ever. Compared to last year, there was a 25+% increase in paper submissions, which is more than the increases of the previous two years combined.
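
As a quick sanity check on the arithmetic, using only the figures quoted above (last year's submission count is not restated here and is left as a parameter):

# Acceptance rate from the figures quoted above.
submissions, accepted = 11_532, 2_719
print(f"Acceptance rate: {accepted / submissions:.1%}")  # ~23.6%

# Year-over-year submission growth, once last year's count is filled in.
def yoy_growth(current, previous):
    return (current - previous) / previous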

More than 10 000 Authors

With 10 260 authors behind the 2 719 accepted papers, CVPR’s growth is driven not only by the same people submitting more papers, but also by many new authors joining the conference. In comparison, last year the number of authors was “only” 8 457. Furthermore, getting a paper accepted to CVPR is not an easy task, so much so that only one-third of this year’s authors also had a paper in 2023. This is also reflected in the number of accepted papers per author, where the vast majority of authors have a single accepted paper.
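
The papers-per-author and author-overlap numbers boil down to counting names. Here is a minimal sketch, reusing the hypothetical papers list from the loading snippet above and an analogous papers_2023 list for last year; note that matching authors purely by name is approximate, since distinct researchers can share a name.

# Sketch: papers-per-author counts and overlap with last year's authors.
# Reuses `papers` from the loading sketch above; `papers_2023` would be
# an analogous list for the previous edition.
from collections import Counter

def papers_per_author(papers):
    return Counter(author for paper in papers for author in paper["authors"])

def author_overlap(papers_now, papers_prev):
    now = set(papers_per_author(papers_now))
    prev = set(papers_per_author(papers_prev))
    return len(now & prev) / len(now)

counts = papers_per_author(papers)
print("Authors with exactly one paper:", sum(1 for c in counts.values() if c == 1))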

Team Size Remains Unchanged

Similar to last year, the most common team size is 5 authors, closely followed by team sizes of 4 and 6. Naturally, there are extremes on both sides; most notably, congratulations to the 15 researchers who managed to get their work accepted as single authors.
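
The team-size statistic is simply the number of authors per paper. A minimal sketch over the same hypothetical papers list:

# Sketch: distribution of team sizes (number of authors per paper).
from collections import Counter

def team_size_distribution(papers):
    return Counter(len(paper["authors"]) for paper in papers)

sizes = team_size_distribution(papers)
most_common_size, count = sizes.most_common(1)[0]
print(f"Most common team size: {most_common_size} authors ({count} papers)")
print(f"Single-author papers: {sizes.get(1, 0)}")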

26 Accepted Papers? No Problem.

The opposite of single-author contributors are researchers and professors with a decades-long history of contributions to CVPR, who regularly have 10+ papers accepted. At this year’s CVPR, there are 30 authors with 10+ papers, the most prolific of whom is Prof. Yu Qiao with 26 accepted papers. Naturally, this is not something a single person can achieve alone; it requires a large number of collaborators, in some cases even more than a hundred.

Strong Collaborations between Industry and Academia

CVPR traditionally has a strong industrial presence, with 48% of attendees coming from industry last year. However, we are interested not only in who is attending the conference but, more importantly, in who is contributing to the research. For that, let us look at the author affiliations and whether they are from academia, industry, or a research lab.

Unsurprisingly, CVPR papers are still dominated by academia, with 39.4% of papers having only university authors. In second place, 27.6% of papers are the result of a collaboration between industry and academia. Similarly, there is a strong collaboration between academia and various research labs, accounting for 18.8% of papers.

Note that the affiliation statistics are approximate, as they are obtained using an automated analysis and web search that cross-references arXiv papers with CVPR submissions. For more information and error analysis, please read our technical report.
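
To give a flavour of what that cross-referencing involves (the full pipeline is described in the technical post), here is a minimal sketch that matches papers by normalized title. It assumes a hypothetical arxiv_entries list of dicts with title and affiliations fields; the real pipeline additionally relies on web search and LLM-based cleanup.

# Sketch: match CVPR papers to arXiv metadata by normalized title.
# `arxiv_entries` is a hypothetical list of dicts with "title" and
# "affiliations" keys.
import re

def normalize(title):
    # Lowercase and collapse punctuation so near-identical titles match.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def match_affiliations(papers, arxiv_entries):
    by_title = {normalize(e["title"]): e["affiliations"] for e in arxiv_entries}
    return {p["title"]: by_title.get(normalize(p["title"])) for p in papers}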

Strong Industrial Presence Led by Google, Tencent, and Meta

With industry contributing to more than 30% of accepted papers, which companies are driving this research? The leading company is Google with more than 50 papers, which places Google as the 6th biggest contributor overall, even among all the universities. After a sizeable gap, Tencent and Meta follow with 35 papers each.

Note that, as in the previous section, the affiliation statistics are approximate and correspond to a lower bound on the number of papers by each company or university.

Tsinghua University with Almost 100 Papers

Even though industry is much more prominent at CVPR than at other conferences, universities are still the main driving force behind the research. As the conference grows, so do the contributions of individual universities, with more and more departments submitting their work. As a result, the top universities now regularly have more than 30 accepted papers each. According to our statistics, the top university this year is Tsinghua University, with 88 accepted research papers.

China and the US Combine for Almost 70% of Papers

Going one step further, we also analyze where the institutions are located geographically. Naturally, the key players are the US and China, due to their large number of top universities. China, in particular, has a strong presence not only because of its universities but also, to a large degree, because of research labs and the Chinese Academy of Sciences. Among the other countries, the runners-up are Germany, Singapore, South Korea, the United Kingdom, and Switzerland.

Use the interactive map below to explore each country in more detail.
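
A minimal sketch of the per-country aggregation behind such a map, assuming a hypothetical affiliation_country mapping from institution name to country (building that mapping reliably is the hard part):

# Sketch: count papers per country from per-paper affiliation lists.
# `affiliation_country` is a hypothetical dict, e.g. {"Tsinghua University": "China"}.
from collections import Counter

def papers_per_country(paper_affiliations, affiliation_country):
    counts = Counter()
    for affiliations in paper_affiliations:
        countries = {affiliation_country.get(a) for a in affiliations} - {None}
        counts.update(countries)  # count each paper once per involved country
    return counts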

Trending Research Topics? Diffusion and Generative Models

Lastly, we turn our attention to the actual content of the research papers. For this, we analyze the paper titles to see which keywords are used the most and compare the statistics with last year’s data. We can see that all the craze around LLMs has also transferred to CVPR, with a two-fold increase in research papers combining language and vision, such as:

  • OneLLM: One Framework to Align All Modalities with Language
  • Language Models as Black-Box Optimizers for Vision-Language Models
  • Inversion-Free Image Editing with Language-Guided Diffusion Models
  • Towards Better Vision-Inspired Vision-Language Models
  • A Vision Check-up for Language Models
  • and many more


Following the same trend, diffusion models used for generative vision applications also see a more than three-fold increase. This is aligned with where the industry is heading as well: toward large multimodal models that can understand and generate vision, language, or even audio.
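
For the curious, here is a minimal sketch of this kind of title keyword comparison, again reusing the hypothetical papers list from above (with papers_2023 as the analogous list for last year); the keywords below are just illustrative examples.

# Sketch: share of paper titles containing a given keyword, year over year.
def keyword_share(papers, keyword):
    hits = sum(keyword.lower() in paper["title"].lower() for paper in papers)
    return hits / len(papers)

for keyword in ["language", "diffusion", "generative"]:
    print(f"{keyword}: {keyword_share(papers, keyword):.1%} of this year's titles")
    # Compare against last year, e.g. keyword_share(papers_2023, keyword).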

Join the LatticeFlow team at CVPR in Seattle to learn more about reinventing data curation and how to find critical blind spots in your models before it’s too late.


				
					# Example snippet in Python
def main():
    # Print "worked" to the console
    print("worked")

# Run the main function
if __name__ == "__main__":
    main()
				
			
