academic publishers that charge thousands of euros for publishing articles are scum of the earth.
I have blocked the .xyz and .ru TLDs (because they’re riddled with malware), so every image in this community is blank and the link is dead.
perhaps you should unblock specific domains in that case
but i’d also suggest a blocker that uses auto-updated lists rather than blocking whole gTLDs; curated lists are likely to catch more malware while producing fewer false positives
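To illustrate the difference: a list-based blocker matches specific hosts (and their subdomains) rather than an entire TLD. A minimal sketch, with a made-up sample list; real blockers fetch these lists from maintained sources:

```python
# Illustrative only: how list-based blocking matches a domain or any
# of its parent domains, vs. nuking a whole TLD like .xyz.

def blocked(domain: str, blocklist: set[str]) -> bool:
    """Return True if the domain or any parent domain is on the list."""
    parts = domain.lower().split(".")
    # Check "evil.example.xyz", then "example.xyz", then "xyz"
    for i in range(len(parts)):
        if ".".join(parts[i:]) in blocklist:
            return True
    return False

# Hypothetical curated list: blocks known-bad hosts, not every .xyz site
blocklist = {"malware-host.xyz", "tracker.example.ru"}

print(blocked("malware-host.xyz", blocklist))  # True
print(blocked("legit-site.xyz", blocklist))    # False: same TLD, not listed
```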
Uh… it gave me ~45 min wait time and then gave up. lol
Sounds neat tho
If research was funded with public money, be it government money or from people buying their products, then that research belongs to the people.
Too right! Why, if regular people can get science for free, Capitalism might not profit!
Those chilling FBI warnings on old videotapes mean absolutely nothing to me now.
I can’t tell the difference between those and OP’s.
I tried it on a couple of things that are controversial or problematic in the literature, and it’s about what I expected. It parrots the literature, for better or worse. That means it’s great for getting an overview of the literature and finding citations and such. But it’s not gonna magically figure out which papers are quality and which ones are rubbish. It’ll just parrot all of them, even if they contradict each other. Very interesting, and possibly quite a useful tool. But I really wouldn’t use it as an arbiter of truth.
That’s all it should do. We’re nowhere near an AI that could be an arbiter of truth. Hell, most AI couldn’t even be trusted to parrot the literature accurately.
It seems like a good way to kick off a literature review.
I would find this extremely useful as a tool to help me find sources that I then review myself - similar to how I use Wikipedia. But the danger is in people trying to use it for more.
This is all it’s good for.
Chat bots are a starting place. I find them useful for rubber ducking.
It’s nice to be able to blab to the machine about shit I know no one actually wants to listen to. My partner has been saved countless hours of me going in circles about broken code lol.
nothing more evil than having prestigious journals gatekeep and paywall research articles, sometimes without even the scientists’ knowledge, so that only universities and research teams are privy to them. looking at you, nature, phytotaxa.
No no, you see they trained an ai on it. Therefore this “pirating” is a 100% legitimate practice.
The way the law is being enforced now, this should be an entirely legitimate argument. A snowball’s chance in hell though that it holds up without a legal team like OpenAI has.
AI Sloppers lacking awareness is so sickening.
Asked it the following to test it:
What caused the cooling at the end of the Cenozoic that led to the glacial Quaternary period?
Took a while, and it actively showed clickable source articles it was looking into while processing. Here’s a PDF of the response, which is long, but here’s the initial overview:
The cooling at the end of the Cenozoic Era — which culminated in the glacial-interglacial cycles of the Quaternary Period — is one of Earth’s most profound climate transitions. This was not a single event but a stepwise process driven by interconnected mechanisms operating over tens of millions of years. The primary cause was a long-term decline in atmospheric CO₂ (pCO₂), driven fundamentally by plate tectonic processes that altered the global carbon cycle. Oceanic gateway openings and orbital variations played important modulating roles.
Which my partner, who’s taken some climate classes in college, said sounds right. If anyone thinks this is wrong, please feel free to call it out.
Yeah, it’s just a model with a semantic database it can query (RAG)
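The RAG pattern mentioned above is simple at its core: retrieve the passages most relevant to the query, then hand them to the model as context. A toy sketch — the word-overlap scorer stands in for real vector embeddings, the papers are invented, and the actual LLM call is omitted:

```python
# Minimal RAG-style retrieval sketch. Illustrative only: real systems
# use embedding similarity and an actual LLM call, not word overlap.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

papers = [
    "Declining CO2 drove Cenozoic cooling over millions of years.",
    "Ocean gateway changes modulated Quaternary glacial cycles.",
    "A survey of deep-sea vent ecosystems and chemosynthesis.",
]

context = retrieve("what caused cenozoic cooling", papers)
prompt = "Answer using only these sources:\n" + "\n".join(context)
print(prompt)
```

The prompt (retrieved sources plus the question) is what actually reaches the model, which is why the answer can only be as good as whatever got retrieved.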
AI doesn’t understand truth; it averages over data points. It cannot tell the “truth”. It can be right sometimes, based on the frequency of mentioned words and related ones.
You have to go into each article and check the key points, trust me.
It is a god-tier liar.
To be fair though, even if you read the abstracts of papers you need to go in and check the actual data itself to confirm what the authors describe is actually there.
Likewise if a paper cites another study in support and it seems weird what they say, you need to go and check that paper too.
Scientists have been inflating their claims for as long as the impact factor has existed (and probably longer). This now just makes it even easier to receive lies.
Have they taken out the AI-generated papers? We know that training LLMs on LLM-generated text leads to an absolute collapse in quality, and we also know that AI has been showing up in papers, so if they haven’t, then this will be quite unreliable.

We know that training LLMs on LLM-generated text leads to an absolute collapse in quality.
This is often repeated, and true. But needs to be qualified.
Modern LLMs use tons and tons of “augmented” data, which is code for LLM-generated or massaged data. Some is even generated during training and judged; the papers on that are what made DeepSeek famous.
Training on LLM trash will, of course, yield greater trash, and obviously good text has to come from something real. But that’s because slop is slop. And there are issues with “deep frying” LLMs, yes, but simply training an LLM on LLM output does not necessarily reduce quality. It often helps, significantly.
And we also know that AI has been showing up in papers so if they haven’t, then this will be quite unreliable.
Now this is a problem.
TBH LLMs would be pretty good at flagging papers for humans to check, similar to what Wikipedia is already doing. But yeah, if you just feed bad papers into the prompt, LLMs generally assume the context is true, and that’s a tremendous problem.
I would be surprised if it was something they trained themselves, rather than an off-the-shelf model hooked up to a search.
Could have just called it Claude
I stared at it, and didn’t know what to ask, so I closed it.
Getting hugged to death