Real-time analytics for Twitter's top AI influencers. Track followers, engagement, and viral content.

@elonmusk · 238.1M Followers · 1.3K Following · 101.4K Tweets
RT @AhmedSharif: This is what a black politician in South Africa said… “We will kill white women, we will kill white children, and we will…

@sama · 4.6M Followers · 990 Following · 7.5K Tweets
I wrote this early this morning and I wasn't sure if I would actually publish it, but here it is: https://t.co/7Dw9UFpeep

@karpathy · 2.2M Followers · 1.1K Following · 10.1K Tweets
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT sometime last year and allowed it to inform their views on AI a little too much. This group reacts by laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus follows.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math, and research. This group of people is subject to the highest amount of "AI psychosis" because the recent improvements in these domains this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.

It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels, and *at the same time* OpenAI's highest-tier, paid Codex model will go off for an hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works, and has made dramatic strides, because of two properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to judge explicitly), but also 2) they are a lot more valuable in B2B settings, meaning the biggest fraction of the team is focused on improving them. So here we are.
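The "unit tests passed, yes or no" idea in the tweet above can be made concrete with a toy sketch. The function name `verifiable_reward` and the bare subprocess call are illustrative assumptions for this page, not how any lab actually implements RL training (real setups run candidates in sandboxes, at scale): the point is only that the reward is an objective, binary signal.

```python
import os
import subprocess
import sys
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary verifiable reward: 1.0 if the candidate passes its unit tests, else 0.0.

    Writes the model's candidate code plus the test suite to a temp file
    and runs it; a zero exit code means every assertion passed.
    """
    program = candidate_code + "\n\n" + test_code + "\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=10
        )
        return 1.0 if result.returncode == 0 else 0.0
    finally:
        os.unlink(path)
```

A correct candidate scores 1.0 and a buggy one scores 0.0, with no human judgment in the loop; that objectivity is exactly what writing or advice queries lack, and why coding domains are "easily amenable" to this kind of training.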

@ylecun · 1.1M Followers · 782 Following · 25.8K Tweets
RT @Gianl1974: Hey Republicans; He pardoned 1,600 violent criminals. You said nothing. He bulldozed the East Wing. You’ve said nothing. He…

@fchollet · 644.4K Followers · 826 Following · 25.2K Tweets
There's a broadly held misconception in AI that methods that scale well are simple methods -- even that simple methods usually scale. This is completely wrong. Pretty much none of the truly simple methods in ML scale well. SVMs, kNN, and random forests are some of the simplest methods out there, and they don't scale at all. Meanwhile "train a transformer via backprop and gradient descent" is a very high-entropy method, easily 10x more complex than random forest fitting. But it scales very well.

Further, given a simple method that doesn't scale, it is usually the case that you make it scale by adding a lot of complication. For instance, take a simple combinatorial search-based method (not scalable at all) -- you can make it scale by adding deep learning guidance (which blows up complexity). Scalability usually belongs to high-entropy, complex systems.

@hardmaru · 397.3K Followers · 1.9K Following · 25.8K Tweets
We are hiring Software Engineers in Tokyo to help us scale Sakana AI’s R&D. If you’re interested in building the data pipelines and full stack infrastructure needed to push the boundaries of automated scientific discovery, we’d love to hear from you. 🗼🎌 https://t.co/E1uPJa5tIy
Track, analyze, and grow with real-time Twitter intelligence.
- Hourly updates on follower counts, engagement rates, and viral content. Never miss a trend.
- Gemini AI analyzes content patterns to predict viral potential. See what's working for top creators in your niche.
- Track likes, retweets, views, and more in real time. Get notified when content starts trending.
Join thousands growing with DataV.
"Finally, real-time Twitter analytics that actually work." -- Alex R., Tech Creator
"The AI insights helped me 3x my engagement in a month." -- Sarah J., AI Researcher
"Best tool for tracking what's working in the AI space." -- Mike C., Founder
Free to start. No credit card required.
Get Started Free