openscience.works builds open, reusable services that help researchers, libraries, and publishers understand how open scholarly works are used, shared, and discussed across the research ecosystem. Drawing on open data and transparent methods, the platform turns diverse signals into clear, contextual impact narratives rather than opaque scores.
Instead of reducing research to single metrics, openscience.works focuses on helping organisations tell richer, evidence-based stories about the reach and value of open scholarship. The services are designed to support responsible, discipline-sensitive interpretation and to make every data source transparent and inspectable.
We believe that citations alone do not tell the full story of a piece of research. Our analysis engine therefore examines a wide variety of signals to characterise a work's impact, resulting in what we call Inferred Roles.
Our pipeline goes beyond simple citation counts. By analysing the composition of a work's attention, usage, and citing landscape, it dynamically assigns "Inferred Roles" spanning four major impact domains:
High citation density within core academic journals and monographs. The work serves as a foundational building block for further research.
The work has accumulated a high volume of citations or downloads in an unusually short period since publication, indicating immediate relevance and urgency within the field.
Frequently cited in literature reviews, meta-analyses, or encyclopedic entries, serving as a "shorthand" for a specific finding or historical overview.
The work serves as a standard protocol, framework, or software tool utilized by other researchers (triggered by specific citation contexts or high usage in datasets and software).
The work is explicitly applied as hard data or methodology (triggered by high "supporting" citation tallies via scite or links to registered clinical trials).
The work demonstrates immense value through direct consumption rather than formal citation (triggered by high download counts on platforms like OAPEN/OPERAS, massive HTML views, or heavy library holdings).
The work has been widely adopted for teaching and training (triggered by inclusion in Open Educational Resources (OER) syllabi, reading lists, or textbook citations).
High, fast-moving engagement in community spaces; the work has sparked conversation (triggered by heavy activity on X/Twitter, Reddit, Facebook, or community blogs).
The work serves as a trusted reference for the general public, establishing a lasting public record (triggered by citations in Wikipedia, Stack Exchange, or non-academic wikis).
The work has "broken out" into mainstream awareness (triggered by mentions in prestigious global news outlets, broadsheet newspapers, or high-impact professional magazines).
Direct influence on governance and societal frameworks (triggered by citations in government white papers, WHO reports, NGO briefs, or UN policy documents).
The work crosses over into the retail or commercial sphere, indicating consumer interest or industry application (triggered by strong presence on Amazon, Goodreads, or patents).
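As a minimal, hypothetical sketch, the assignment step for roles like those above could be expressed as threshold rules over aggregated signals. Every name below, from the signal fields to the thresholds and role labels, is an illustrative assumption for this sketch, not the production logic of the actual pipeline:

```python
from dataclasses import dataclass


@dataclass
class WorkSignals:
    """Aggregated signals for one scholarly work (illustrative fields)."""
    citations: int = 0
    supporting_citations: int = 0   # e.g. scite "supporting" tallies
    downloads: int = 0
    html_views: int = 0
    syllabus_mentions: int = 0      # OER syllabi, reading lists
    social_mentions: int = 0        # X/Twitter, Reddit, blogs
    wikipedia_mentions: int = 0
    news_mentions: int = 0
    policy_mentions: int = 0        # white papers, WHO/UN reports
    months_since_publication: int = 1


def infer_roles(s: WorkSignals) -> list[str]:
    """Map raw signals to Inferred Roles via simple threshold rules.

    The thresholds and role labels are placeholders chosen for the sketch.
    """
    roles = []
    if s.citations >= 100:
        roles.append("Foundational Reference")
    # Citation velocity: average citations per month since publication.
    if s.citations / max(s.months_since_publication, 1) >= 10:
        roles.append("Rapid Uptake")
    if s.supporting_citations >= 20:
        roles.append("Evidence Base")
    if s.downloads >= 5000 or s.html_views >= 50000:
        roles.append("High Direct Usage")
    if s.syllabus_mentions >= 3:
        roles.append("Teaching Resource")
    if s.social_mentions >= 200:
        roles.append("Community Conversation")
    if s.wikipedia_mentions >= 1:
        roles.append("Public Reference")
    if s.news_mentions >= 5:
        roles.append("Mainstream Breakout")
    if s.policy_mentions >= 1:
        roles.append("Policy Influence")
    return roles
```

In practice each rule would be calibrated per discipline, since a citation volume that is modest in biomedicine can be exceptional in the humanities.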
Our reports use dynamic visual chips and status badges in the header to provide an at-a-glance summary of a work's output type, integrity, and real-world application.
Defines the specific form and availability of the scholarly contribution:
Dynamic tags that signal publication maturity and research integrity:
Badges that track how research data is shared, applied, and discussed globally:
Traditional academic metrics tell only a fraction of the story. To build a comprehensive picture of a work's real-world impact, openscience.works aggregates real-time data from a diverse ecosystem of global APIs, repositories, and custom heuristic engines:
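Conceptually, this aggregation is a normalisation layer that flattens heterogeneous API payloads into one uniform signal map per work. In the sketch below, the Crossref field name (`is-referenced-by-count`, from its public REST API) is real; the other payload shapes are illustrative assumptions:

```python
def aggregate_signals(sources: dict[str, dict]) -> dict[str, int]:
    """Flatten per-source API payloads into a uniform signal map.

    `sources` maps a source name to the raw JSON payload returned by
    that source's API; only the shapes sketched here are handled.
    """
    signals: dict[str, int] = {}

    # Crossref: citation count lives under message.is-referenced-by-count.
    crossref = sources.get("crossref", {})
    signals["citations"] = crossref.get("message", {}).get(
        "is-referenced-by-count", 0
    )

    # OAPEN download counts (payload shape assumed for this sketch).
    oapen = sources.get("oapen", {})
    signals["downloads"] = oapen.get("downloads", 0)

    # Wikipedia mention tally (payload shape assumed for this sketch).
    wiki = sources.get("wikipedia", {})
    signals["wikipedia_mentions"] = wiki.get("mentions", 0)

    return signals
```

Keeping each source behind its own adapter like this means a new signal provider can be added without touching the downstream role-inference logic.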
Because some of our metrics rely on text-mining rather than strict DOI matching (such as our YouTube lecture tracker or OpenCourseWare scanner), we employ an AI Academic Confidence Score. This score evaluates the context of a mention (for example, verifying whether a YouTube channel belongs to an academic institution and checking the transcript for specific educational keywords), helping us filter out casual mentions and noise.
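A drastically simplified, rule-based stand-in for that confidence check might look like the following; the weights, keyword list, and domain suffixes are assumptions made for the sketch, not the actual scoring model:

```python
# Illustrative keyword and domain lists (assumed, not the real ones).
ACADEMIC_KEYWORDS = {"lecture", "seminar", "doi", "peer-reviewed", "syllabus"}
ACADEMIC_DOMAINS = (".edu", ".ac.uk")


def academic_confidence(channel_url: str, transcript: str) -> float:
    """Score a text-mined mention between 0.0 (noise) and 1.0 (academic)."""
    score = 0.0

    # Institutional channel: strongest single piece of evidence.
    if channel_url.endswith(ACADEMIC_DOMAINS):
        score += 0.5

    # Keyword evidence from the transcript, capped so that keyword
    # stuffing alone can never dominate the score.
    words = set(transcript.lower().split())
    hits = len(ACADEMIC_KEYWORDS & words)
    score += min(hits * 0.1, 0.5)

    return round(score, 2)
```

A mention would then only be counted toward a work's metrics once its score clears some threshold, say 0.5.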
Martijn has more than 20 years of experience in scholarly publishing and biomedical research. After completing his PhD in Basel, he built a career at the intersection of publishing, technology, and open science, with a particular focus on books and journals.
In 2014 he co-founded Bookmetrix, a Springer–Altmetric initiative that pioneered book- and chapter-level metrics by aggregating multiple signals of reach, attention, and use. Through this work, Martijn gained first-hand experience of the opportunities and limitations of impact reporting, including the importance of transparent data provenance and discipline-sensitive interpretation, especially in the humanities and social sciences.
A seasoned editor and publishing professional, Martijn brings deep knowledge of the scholarly communication ecosystem and specialises in building bridges between startups, publishers, libraries, and research organisations. At openscience.works, he focuses on partnerships, product direction, and ensuring that tools are aligned with real-world publishing and library workflows.