This screams the need for an intelligent search engine, with summarizing features that point out relevant parts of the paper, and (imho) a UI and algorithms that reach across disciplines and encourage discovery. So, for instance (I'm making this up), someone looking for a way to understand interferometry data might stumble on a useful regularization technique from image processing.
I imagine academic libraries would pay really good money for that. Of course, it would help if a mostly complete body of scientific literature was available to crawl--like SciHub.
Wouldn't some graph traversal be enough? You find an important paper and then search for papers that cite it. Then sort those by how many papers cite *them* (similar to the PageRank algorithm) and filter by year and keyword.
There aren't that many different citation styles, so I'm pretty sure it would be possible.
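The graph-traversal idea above can be sketched in a few lines. This is a toy PageRank over a citation graph, assuming you've already extracted citations into a dict (the function name and data shape are mine, just for illustration); real systems would work over millions of papers with sparse-matrix math, not nested loops:

```python
def rank_citations(citations, damping=0.85, iters=50):
    """Toy PageRank over a citation graph.

    citations: dict mapping paper -> list of papers it cites.
    Returns a dict paper -> score, where each citation passes rank
    from the citing paper to the cited one.
    """
    papers = set(citations)
    for cited in citations.values():
        papers |= set(cited)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in papers}
        for src in papers:
            cited = citations.get(src, [])
            if cited:
                # split this paper's rank among everything it cites
                share = damping * rank[src] / len(cited)
                for dst in cited:
                    new[dst] += share
            else:
                # a paper that cites nothing: spread its rank evenly
                for p in papers:
                    new[p] += damping * rank[src] / n
        rank = new
    return rank

# Two papers citing "C" is enough for C to outrank both of them:
scores = rank_citations({"A": ["C"], "B": ["C"], "C": []})
```

Note this rewards exactly the popularity the replies below complain about: a new or niche paper starts with zero in-links and stays near the bottom no matter how relevant it is.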
It is usually possible to do this (a digital journal can include links to cited/citing papers), but citations happen for a variety of reasons (background information, a single statistic, boosting a colleague's work...). Every citation applies to a different sentence or paragraph in an article. To understand the content and whether it is relevant, we still need to digest the text.
There are also less-cited papers and journals that can be just as relevant--every article starts with zero citations, and the vast majority of good work out there isn't exciting enough to be accepted for Nature.
Further, the process is rather incestuous: the previous person solved problem X with approach Y because that was the first thing someone came up with and it's just how it's done in our field, so we keep doing the inefficient thing and citing that paper, maybe someday working on a better way. Meanwhile, in some other field, a mathematician or whatever came up with a far better solution to a similar problem long ago, but nobody in this field ever knew, so it goes unnoticed.
I do want to see what is highly cited because it's probably interesting, but I also want to see things that don't get that kind of attention but are applicable to my work, and things that I wouldn't know are applicable to my work.
Another example: Currently, authors enter relevant keywords when they submit a paper. Maybe that works when someone searches the right combination of words, or maybe it doesn't because the search engines suck. Or because in one part of the world the topic has a completely different vocabulary, and we miss a whole library's worth of useful papers.
I speak from a background in STEM. I can't vouch for other disciplines, but I imagine they have similar pain points. Heck, I don't even know what others in STEM think, other than "that's just how research works."
I don't think it's an easy problem, or we wouldn't still be sifting through mounds of crap to find a few relevant, reproducible works worth reading and citing. I think it would involve figuring out the overarching themes and important methods in a paper and sorting them by their importance, and a little bit of fuzziness to say "hey, this isn't exactly what you're looking for, but it sure seems useful." This could even allow un-cited works a second chance.
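A minimal baseline for the "relevance" half of this is term-overlap scoring, e.g. TF-IDF cosine similarity between a query paper and candidates (everything here is a from-scratch illustration, not any particular search engine's method):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()  # document frequency per term
    for doc in docs:
        df.update(set(doc))
    # rarer terms weigh more; +1 keeps ubiquitous terms from vanishing
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Made-up mini-abstracts: the image-processing and interferometry ones
# share vocabulary ("regularization", "inverse", "noise"), the third doesn't.
docs = [
    "regularization inverse problem noise image reconstruction".split(),
    "interferometry inverse problem noise regularization phase".split(),
    "mouse behavior gene expression assay".split(),
]
vecs = tfidf_vectors(docs)
```

The catch is exactly the vocabulary problem mentioned above: this only matches papers that happen to use the same words, so two fields describing the same method in different terms score zero. The "fuzzy" part would need something that captures meaning rather than spelling, e.g. learned semantic embeddings, which this sketch deliberately doesn't attempt.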
I think I'm describing two separate goals, and the fuzzy part could wait until the relevance problem is addressed. I don't think either problem is particularly easy, or Google Scholar would have solved and monetized it already. But they do seem solvable.