Winter Rant

"I’m utterly disgusted. I strongly feel that this is an insult to life itself." – Miyazaki


The Weekly: AI Eats AI Research

I am earnestly trying to think and write about anything but AI these days. I was honestly going to write about cricket this week. The drama engulfing Indian Cricket at the moment is worth talking about. But that will have to wait for something a little more consequential.


AI Eats AI Research

Is it possible to be shocked by something and, at the same time, to have expected nothing else from the world? The story of the research submissions and reviews at the premier ICLR conference is producing exactly that reaction in me.

Context about ICLR:

ICLR, aka the International Conference on Learning Representations, is the premier conference for deep learning – a recently popular branch of machine learning and artificial intelligence. If you are a deep learning researcher at the very cutting edge of the field, you are probably submitting research papers to this venue. If you are a seasoned researcher in deep learning and have been submitting and presenting research at ICLR, you are also likely reviewing paper submissions to this conference. According to the Wikipedia entry for this conference, “Along with NeurIPS and ICML, it is one of the three primary conferences of high impact in machine learning and artificial intelligence research.” I could not have said it more emphatically. The who’s who of ML/AI/GenAI show up at this conference.

They cheated.

Pangram is an AI detection company. On Nov 18, 2025, it dropped a blog post with a few shocking claims:

  1. In the 2026 iteration of the ICLR conference, 21% of the paper reviews (conducted as part of the peer-review process) were AI-generated.
  2. The same Pangram blog post claims there were “several hundred fully AI-generated papers” among the submissions.

I will not review the full blog post. They go into their methodology and details of their analysis. I will say that the post comes across as an interesting marketing moment for them.

Regardless, Nature picks up on this, and publishes this jaw-dropping report, which is where I first learned about this: Major AI conference flooded with peer reviews written fully by AI.

Researchers cheated. That’s the bottom line.

I can beat around the bush. I can offer context and background. I can say, “it was not everyone.” I can say, “it was more rampant in reviews than in the actual paper submissions.” I can wax poetic about the conference’s AI-use policy. Or I can pretend that we should be leveraging AI everywhere, and that if it improves the research then we should use it.

But the bottom line remains:

  • If human experts are not doing the peer-review, at the premier deep learning research venue, then it is cheating.
  • The “peer” in peer-review is a human-being who is an expert in the subject matter.
  • Human experts are why peer-review continues to be the gold standard for conducting and publishing research and science.

Why I am stunned.

The researchers and paper reviewers know better. They know that these conferences and research publications are serious business. They know. There is no way they do not know. And yet, they opted to outsource their peer-review responsibility to an AI chatbot.

Here’s why this is serious work:

  1. Reading the papers to assess quality, as part of peer-review, generates materially useful feedback. Such feedback is important. It improves the research. It often sets research projects in wildly different and successful directions (than what the paper originally set out to do). It helps clarify the data and inferences in the paper. This stuff actually improves research.
  2. These researchers get to list such experience on their CVs. People actually give a damn if you served as a paper reviewer on a major research publication – it typically gets listed under the “Service” section.
  3. The feedback from peer-review can be invaluable for a young grad student. They learn a ton from it. You are literally shaping their thinking about research and a career in science.
  4. And we get to say that papers at peer-reviewed journals and conferences were looked at rigorously by actual people. We get to say that we did not phone it in with the rigor. We get to say that at least three people took a close look, and thought it was worth humanity’s time to record and read this work.

And I do not mean to sound redundant, but the researchers know all this(!). And to think that they did not care — is just stunning. The apathy numbs my soul.

I get that these services are unpaid. I get that there is a lot that professors and researchers have on their plates. I really get that world-changing research often happens on shoestring, threadbare budgets.

But none of that is an excuse. And yet…

Why it’s not surprising.

Like with everything else, AI is just pouring gasoline on the fire.

Among all the responsibilities in an academic’s day job, peer-review work is always getting the shaft (so to speak). Like I said above, the role of a peer reviewer is unpaid. In many ways it has to be. If money becomes a motive in this function, then it will be very difficult to trust the quality of the reviews.

But it also creates an untenable economic situation. It also does not help that most researchers around the world are underpaid and are working under a publish-or-perish edict.

It is hard being an academic in this world. Much is expected of you, with very little in return.

I repeat, not an excuse. But it is important to learn about this larger context. And in that context, enters AI – to devour its own creators.

Picture this: I have 20 papers to peer-review, and I am

  • running up against paper submission deadlines of my own,
  • trying to prep for the upcoming week’s lectures,
  • grading midterms,
  • watching my grad student have a nervous breakdown, and
  • taking on more administrative work from my college dean and/or department chair.

And there is a free AI chatbot that can do my paper reviews for me.

I would be surprised if someone in that position did not use it. Maybe I am just a cynic.

Bad AI Apps eat AI Research

I have always been skeptical of AI – particularly in how it gets applied. Is it impressive that it appears to understand my email? Sure. Is that a good thing … maybe? Depends on how it gets used.
I have also long been convinced that there are no good applications for AI, just yet. Some legit uses of AI exist, but they exist in the corners, mostly in the deep trenches of enterprise software. None of those uses should amount to the earth-shattering capital investments, valuations, or the general upending of the software business every 6 months, all of which has been happening for the last three years. It’s been reckless, stupid, and in some cases unethical.

So it is rather delicious to see how bad applications of AI are eating into the trust and respect that AI research and researchers have built up over decades.

This outcome was inevitable. We have been barreling towards an insipid future full of AI slop. It was fun and games to imagine the banal world where one AI tool generates and sends an email, while another AI tool summarizes that email. Ha!
But now, this AI-outsourcing has come full circle. The AI slop machine is taking aim at its own creators. The dog is shitting in its own bowl.

And ICLR 2026 is not an isolated event. arXiv recently called out this AI slop machine:

arXiv’s computer science (CS) category has updated its moderation practice with respect to review (or survey) articles and position papers. Before being considered for submission to arXiv’s CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review.
Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category

From the same blog post:

In the past few years, arXiv has been flooded with papers. Generative AI / large language models have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it’s particularly pronounced in arXiv’s CS category.

With that clarification, the maintainers of arXiv just called out some long-standing nonsense that has pervaded CS and AI research. About time if you ask me. And this will not be the last we hear about such incidents. I hope that publishers like IEEE and ACM take note. I hope that research funding bodies take note. I hope universities take note. I hope that more people wake up to this situation, because it is not great right now.

And if things do not improve — well, then it’s time to buy a lot of popcorn, sit back, relax, and enjoy the fireworks. That reminds me: I should be investing in popcorn futures.


Life is for Living

Instead of rushing through life, I find myself standing still more than I used to. It has allowed me to notice life around me. And when not intensely private, I capture it with my camera.

I was working late the other night, at the office, and caught this glimpse of the moon on my way out. The railing at the bottom of the picture belongs to a walkway from the main building to the parking structure at my workplace.

The golden hues coming from the moon and the lighting along the walkway, with the near-pitch-black backdrop just stopped me in my tracks.

Leave a comment