Disclaimer: Fittingly, AI tools were employed to help summarize the webinar, the output of which served as the outline for this recap.
Moderated by Avi Staiman, the Founder and CEO of Academic Language Experts and a Chef at The Scholarly Kitchen, “AI Efficiencies to Optimize Workflow from Submission to Publication,” discussed the practicalities and use cases of AI in scholarly publishing. Panelists took a pragmatic approach to questions around AI’s reliability, potential for replacing human roles, and ability to enhance efficiency, focusing on how AI is currently being implemented in publishing workflows. Grounded in real-world examples, the webinar explored everything from analyzing text suitability for machine translation to training staff to review AI outputs. Below are key takeaways from each panelist.
Sarah Taylor, Vice President of Artificial Intelligence, Springer Nature Group
- Though new tools and technologies like AI can feel overwhelming, neural networks have been around for a while. What has changed is the speed at which models can be trained and the reduced quantity of data needed to build these models. These advancements have already been employed in publishing, often behind the scenes in operational workflows.
- In Springer Nature’s experience, these tools should augment human work, not replace it, allowing people more time for complex decision-making, thereby improving publishing output quality.
- Taylor shared a case study on automating aspects of the editing process. The tool made 60% of the changes, while 40% still required human input. It handled the easier changes, such as formatting, but edits that improved the text without altering the author's meaning required humans who understood the content. Overall, the tool has increased editor speed, leading to price reductions for customers and reinvestment in new tools.
- Recently, Springer Nature launched an AI-powered writing assistant that provides digital editing and translation. They anticipate many more of these types of offerings in the coming year.
Dr. Julia Kostova, Director of Publishing, Frontiers
- Frontiers’ AI tool, AIRA (Artificial Intelligence Review Assistant), launched in 2019 and aids in the manuscript submission and peer-review process. AIRA performs over 20 checks, including plagiarism, conflict of interest, data availability, ethics statements, author identity, and more. It also analyzes images for manipulation, assesses language quality, and checks references.
- AIRA’s thoroughness and ability to perform these checks automatically (and almost instantaneously) exceeds industry standards. However, Kostova emphasized that AIRA does not replace human judgment. The tool supports and augments decision-making, but every decision still requires human validation.
- Frontiers invested in AIRA to meet researchers’ needs for quality and speed in the publication process. In an industry where initial checks can take days, AIRA’s ability to perform these checks within seconds significantly speeds up the review process, saving time for editors and reviewers. It also performs detailed integrity checks, identifying issues before reviewers and editors are engaged, allowing for scalability and efficiency in the face of growing scientific output.
- Ultimately, Frontiers sees AI as a crucial tool for transforming how research is disseminated, ensuring quality, supporting decision-making, publishing research faster, and building trust in science.
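The pattern Kostova described — many fast automated checks, each flagging issues for human validation rather than deciding on its own — can be sketched as follows. This is an illustrative sketch only, not AIRA's actual implementation; the check names, fields, and functions are assumptions:

```python
# Illustrative sketch: a battery of fast submission checks whose failures
# are routed to humans. AIRA itself is proprietary; names here are invented.
from dataclasses import dataclass

@dataclass
class CheckResult:
    check: str
    passed: bool
    detail: str = ""

def run_submission_checks(manuscript: dict) -> list[CheckResult]:
    """Run automated checks; humans review any failures."""
    results = []
    has_ethics = bool(manuscript.get("ethics_statement"))
    results.append(CheckResult(
        "ethics_statement", has_ethics,
        "" if has_ethics else "missing ethics statement"))
    has_data = bool(manuscript.get("data_availability"))
    results.append(CheckResult(
        "data_availability", has_data,
        "" if has_data else "missing data availability statement"))
    has_refs = len(manuscript.get("references", [])) > 0
    results.append(CheckResult(
        "references_present", has_refs,
        "" if has_refs else "no references found"))
    return results

def needs_human_review(results: list[CheckResult]) -> bool:
    # Flagged issues go to a person; the tool never rejects on its own.
    return any(not r.passed for r in results)
```

Because each check is independent and cheap, the whole battery can run in seconds at submission time, which is what makes the days-to-seconds speedup described above possible.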
Hong Zhou, Director of Intelligent Service Group, Wiley Partner Solutions
- Wiley has developed an AI service for automatic content classification, also known as auto-tagging, a popular application in scholarly publishing. This service tackles several challenges in content classification, such as taxonomy creation and maintenance, labor-intensive document tagging, and fragmented workflows. Traditional taxonomy creation can be costly and time-consuming, often taking 6-18 months to tag 5,000 terms.
- The AI classification tool is 30% more accurate and provides a confidence score for each tag, reducing curator effort. The tool also offers a multidisciplinary taxonomy, created using AI and human subject matter experts. This taxonomy includes 250,000 tags covering 19 disciplines and six hierarchical levels.
- The auto-tagging service has been successfully deployed across several publishers and applied in various real-case scenarios, including displaying topics on topic pages and content items, improving content discoverability, facilitating audience profiling, and being used in the pre-publication phase.
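The confidence score Zhou mentioned is what lets auto-tagging reduce curator effort: high-confidence tags can be accepted automatically, while only low-confidence suggestions go to a human. A minimal sketch of that triage, assuming an invented threshold and data shape (not Wiley's actual service):

```python
# Illustrative sketch: split suggested (tag, confidence) pairs into
# auto-accepted tags and a curator review queue. Threshold is an assumption.
CONFIDENCE_THRESHOLD = 0.8

def triage_tags(suggestions: list[tuple[str, float]]) -> tuple[list[str], list[str]]:
    """Return (auto-accepted tags, tags queued for curator review)."""
    accepted = [tag for tag, conf in suggestions if conf >= CONFIDENCE_THRESHOLD]
    to_review = [tag for tag, conf in suggestions if conf < CONFIDENCE_THRESHOLD]
    return accepted, to_review
```

For example, `triage_tags([("machine learning", 0.95), ("oncology", 0.42)])` accepts "machine learning" outright and queues "oncology" for a curator, so human effort scales with the model's uncertainty rather than with document volume.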
Dustin Smith, Co-Founder & President, Hum
- Hum uses a mix of first-party data and AI to target and recruit authors efficiently. This data, collected from user-content interactions, is used to understand individual readers’ topic affinities and engagement levels. In the aggregate, this data enables the identification of audiences likely to be engaged with specific topics.
- Hum serves as the nervous system, receiving data from platforms, understanding individual behavior, and providing insights, recommendations, and predictions. Their AI engine, Alchemist, meanwhile, acts as the brain, which houses a foundational model that interprets data for deep understanding.
- An example of this tool in action is the selection of special issue topics. A publisher can identify high-engagement, low-content areas as potential topics, then pass that through a generative model to recommend special issue topics and descriptions. A targeted campaign can then be launched from the platform, putting CFP messages directly in front of the target audience on connected properties.
- As with the other tools mentioned in the webinar, Smith emphasizes the importance of continuously monitoring and refining the AI’s learning through human intervention to ensure optimal results.
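The "high-engagement, low-content" selection step Smith described can be illustrated with a simple ratio: rank topics by audience engagement relative to how much content already exists. This is a hedged sketch under assumed field names and an assumed scoring rule, not Hum's Alchemist model:

```python
# Illustrative sketch: surface candidate special-issue topics where audience
# engagement is high but published content is sparse. Data shape is invented.
def special_issue_candidates(topics: list[dict], top_n: int = 3) -> list[str]:
    """Rank topics by engagement per existing content item."""
    def gap_score(t: dict) -> float:
        # +1 avoids division by zero for topics with no content yet
        return t["engagement"] / (t["content_count"] + 1)
    ranked = sorted(topics, key=gap_score, reverse=True)
    return [t["name"] for t in ranked[:top_n]]
```

The top-ranked topics would then feed the generative step described above (drafting suggested titles and descriptions) before a targeted CFP campaign is launched.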
News contribution by SSP member, Stephanie Lovegrove Hansen. Stephanie is the Vice President of Marketing at Silverchair.