EBQ helped lead a workshop at GES 2024 on how free off-the-shelf AI tools could be used to complete systematic reviews more efficiently.
Following several hands-on exercises in which an AI tool was used for data extraction and full-text review, participants engaged in an active conversation about these tools.
Participants raised concerns about AI tools retaining copies of everything users upload, making it a priority to identify tools that respect privacy and copyright.
Concerns were also raised about the many emerging tools that overpromise and underdeliver despite charging considerable user fees. Until the field matures to the point where it converges on a core set of options, generative AI/ML tools should be adopted incrementally (e.g., sign up for 1 month instead of 12). That way, an organization can easily switch to a different tool if its first choice underwhelms.
The group also discussed best practices for prompt engineering, building on earlier presentations at GES 2024.
Finally, the discussion turned to what standards need to emerge for the field to keep pace with the inevitable growth in the use of these tools in systematic reviews. At the conclusion of the workshop, participants reported a 22% increase in comfort with AI tools in systematic reviews, a 17% increase in optimism about using them, and a 16% increase in their likelihood of recommending AI tools to their organization.
Increased familiarity with both the strengths and pitfalls of these tools benefitted workshop attendees, and EBQ was happy to play a small role in that growth!
Thomas Schofield
