Once you have removed all of the duplicate records, you are ready to conduct the initial title and abstract screening of the retrieved studies/articles. The Cochrane Handbook for Systematic Reviews of Interventions specifies that the title/abstract screening should include the following elements:
Conduct a Pilot Screening Test. Before screening all of the retrieved articles, it is helpful to conduct a pilot test of the eligibility criteria on a small sample of articles. This allows all of the screeners to be trained and calibrated to the eligibility criteria, and it verifies that the criteria can be applied consistently by each screener. Based on the experience with the pilot test, the eligibility criteria can be modified for clarity before screening the full set of retrieved results. The test sample should deliberately include articles from three categories: articles that are likely eligible, articles that are likely not eligible, and articles whose eligibility is more difficult to determine.
Screen Articles Using Eligibility Criteria. Ideally, at least two people should independently review the title and abstract of each retrieved study to determine whether it meets the predetermined eligibility criteria; however, it is acceptable for a single person to conduct the title/abstract screening. If an article does not meet all of the eligibility criteria, it should be excluded from the systematic review.
Rank Eligibility Criteria. When evaluating each article, the eligibility criteria should be assessed in order of importance. The first criterion that an article fails to meet should be recorded as the primary reason for excluding it (see the sketch following this list).
Develop Protocol for Screening Disagreements. Define in advance the process for resolving screening disagreements. For example, when two screeners disagree on whether an article meets the eligibility criteria, they may be able to resolve the disagreement through discussion. If the two screeners cannot reach a consensus on a particular article, a third person can act as arbitrator and make the final decision about the contested article.
Report Screening Outcomes. As part of the record keeping for the systematic review, the number of articles that met all of the eligibility criteria, as well as the number excluded during the title/abstract screening, should be reported in the review, along with a tally of the primary reasons for exclusion. This information is often reported in the form of a PRISMA diagram or another flowchart or table.
Proceed to Full-Text Review. All articles that meet all of the eligibility criteria during the title/abstract screening phase should then be evaluated during the full-text review.
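The bookkeeping behind these steps can be handled in a spreadsheet or with a few lines of code. Below is a minimal sketch in Python, offered purely as an illustration: the criteria names, record IDs, and decisions are hypothetical, and the logic simply records the first unmet criterion as the primary exclusion reason, flags screener disagreements for a third arbitrator, and reports the level of agreement from a pilot sample.

```python
from collections import Counter

# Hypothetical eligibility criteria, listed in order of importance.
CRITERIA = ["population", "intervention", "study design", "publication type"]

def primary_exclusion_reason(assessment):
    """Return the first (most important) criterion an article fails,
    or None if every criterion is met. `assessment` maps each criterion
    name to True (met) or False (not met)."""
    for criterion in CRITERIA:
        if not assessment.get(criterion, False):
            return criterion
    return None

def reconcile(decision_a, decision_b):
    """Agreements between the two screeners are final; disagreements are
    flagged as conflicts to be settled by a third arbitrator."""
    return decision_a if decision_a == decision_b else "conflict"

# Toy pilot sample: independent decisions from two screeners.
screener_a = {"rec1": "include", "rec2": "exclude", "rec3": "include"}
screener_b = {"rec1": "include", "rec2": "exclude", "rec3": "exclude"}

outcomes = {rid: reconcile(screener_a[rid], screener_b[rid]) for rid in screener_a}
agreement = sum(screener_a[r] == screener_b[r] for r in screener_a) / len(screener_a)

print(outcomes)                     # {'rec1': 'include', 'rec2': 'exclude', 'rec3': 'conflict'}
print(f"Pilot agreement: {agreement:.0%}")   # rough calibration check for the pilot test
print(Counter(outcomes.values()))   # running tally for the screening report
print(primary_exclusion_reason({"population": True, "intervention": False}))  # 'intervention'
```

Dedicated screening tools, such as those listed later on this page, perform this same bookkeeping automatically.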
The Covidence Academy article "Best practice guidelines for abstract screening - systematic reviews" is another place to get help with this step.
The articles that met all of the eligibility criteria during the title/abstract screening are then ready for the full-text review, which verifies that these studies comply with the eligibility criteria based on the complete text. The full-text review should follow a process similar to the title/abstract screening:
Find the Full Text. Retrieve the full text of all of the articles that met the eligibility criteria during the title/abstract screening process. If the library does not have access to an identified article, place an interlibrary loan request to obtain its full text.
Conduct a Pilot Review Test. Before reviewing the full text of all of the retrieved articles, it is helpful to conduct a pilot test of the eligibility criteria on a small sample of articles. The full-text pilot test can be conducted at the same time as the title/abstract screening test. This allows all of the reviewers to be trained and calibrated to the eligibility criteria, and it verifies that the criteria can be applied consistently by each reviewer. Based on the experience with the pilot test, the eligibility criteria can be modified for clarity before reviewing all of the retrieved results.
Review Full Text Using Eligibility Criteria. At least two people should independently review the full text of each retrieved study to determine whether it meets the predetermined eligibility criteria. If an article does not meet all of the eligibility criteria, it should be excluded from the review.
Rank Eligibility Criteria. When evaluating each article, the eligibility criteria should be assessed in order of importance. The first criterion that an article fails to meet should be recorded as the primary reason for excluding it.
Develop Protocol for Eligibility Disagreements. Define in advance the process for resolving eligibility disagreements. For example, when two reviewers disagree on whether an article meets the eligibility criteria, they may be able to resolve the disagreement through discussion. If the two reviewers cannot reach a consensus on a particular article, a third person can act as arbitrator and make the final decision about the contested article.
Report Full-Text Review Outcomes. The number of articles that met all of the eligibility criteria, as well as the number excluded during the full-text review, should be reported in the systematic review, along with a tally of the primary reasons for exclusion (see the sketch following this list). This information is often reported in the form of a PRISMA diagram or another flowchart or table.
Proceed to Data Extraction. All articles that meet all of the eligibility criteria during the full-text review should be reported in the manuscript; these constitute the articles analyzed in the data extraction and qualitative synthesis of the systematic review.
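As at the title/abstract stage, the counts that feed a PRISMA diagram can be tallied as decisions are recorded. The following sketch is again a hypothetical illustration rather than a required format; it assumes each full-text decision has been logged with an exclusion reason where applicable, and the reasons shown are examples only.

```python
from collections import Counter

# Hypothetical full-text decisions; the exclusion reasons are examples only.
full_text_decisions = [
    {"id": "rec1", "included": True,  "reason": None},
    {"id": "rec2", "included": False, "reason": "wrong population"},
    {"id": "rec3", "included": False, "reason": "wrong study design"},
    {"id": "rec4", "included": False, "reason": "wrong population"},
]

included = sum(d["included"] for d in full_text_decisions)
reasons = Counter(d["reason"] for d in full_text_decisions if not d["included"])

# These figures feed the PRISMA flow diagram (or an equivalent table).
print(f"Full-text articles assessed: {len(full_text_decisions)}")
print(f"Included in the synthesis: {included}")
for reason, count in reasons.most_common():
    print(f"Excluded ({reason}): {count}")
```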
Various software tools exist to help manage the eligibility assessment phase of a systematic review. Some basic productivity tools can be helpful for organizing reviewed articles, such as spreadsheets (e.g., Google Sheets, Excel) or reference management software (e.g., EndNote, RefWorks, Mendeley, Zotero). In addition, several products are available specifically for managing systematic reviews, and a good place to find information about various screening tools is the Systematic Review Toolbox.
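If a dedicated screening tool is not used, even a plain spreadsheet can keep the process organized. The sketch below writes a simple screening log with one row per record; the column names and sample rows are suggestions for illustration, not a required format.

```python
import csv

# Suggested (not prescribed) columns for a simple screening log.
FIELDS = ["record_id", "title", "screener_1", "screener_2",
          "final_decision", "primary_exclusion_reason"]

rows = [
    {"record_id": "rec1", "title": "Example trial A", "screener_1": "include",
     "screener_2": "include", "final_decision": "include",
     "primary_exclusion_reason": ""},
    {"record_id": "rec2", "title": "Example trial B", "screener_1": "exclude",
     "screener_2": "exclude", "final_decision": "exclude",
     "primary_exclusion_reason": "wrong intervention"},
]

# Write the log so it can be opened in Google Sheets or Excel.
with open("screening_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Below is a list of open access and subscription-based tools for systematic reviews: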
Open Access Tools
Abstrackr – a free web-based screening tool that can prioritize the screening of records using machine learning techniques.
CADIMA – a free web tool facilitating the conduct and assuring the documentation of systematic reviews, systematic maps and further literature reviews.
Colandr – an open access machine-learning assisted online platform for conducting reviews and syntheses of text-based evidence.
Rayyan – a free web-based application for collaborative citation screening and full-text selection.
RobotAnalyst – a free web-based application that uses machine learning and text mining for literature screening in systematic reviews.
Subscription-Based Tools
Covidence – a web-based software platform for conducting systematic reviews, which includes support for collaborative title and abstract screening, full-text review, risk-of-bias assessment and data extraction. Full access to this system normally requires a paid subscription but is free for authors of Cochrane Reviews. A free trial for non-Cochrane review authors is also available.
DistillerSR – a web-based software application for undertaking bibliographic record screening and data extraction. It has a number of management features to track progress, assess interrater reliability and export data for further analysis. Reduced pricing for Cochrane and Campbell reviews is available.
EPPI-Reviewer – web-based software designed to support all stages of the systematic review process, including reference management, screening, risk-of-bias assessment, data extraction and synthesis. The system is free to use for Cochrane and Campbell reviews; otherwise it requires a paid subscription. A free trial is available.
SWIFT Active Screener – a subscription-based online collaborative systematic review software application.
For a more in-depth analysis of the various screening tools available, see the following articles:
Gates, A., Guitard, S., Pillay, J., Elliott, S. A., Dyson, M. P., Newton, A. S., & Hartling, L. (2019). Performance and usability of machine learning for screening in systematic reviews: A comparative evaluation of three tools. Systematic Reviews, 8(1), 278. https://doi.org/10.1186/s13643-019-1222-2
Gates, A., Johnson, C., & Hartling, L. (2018). Technology-assisted title and abstract screening for systematic reviews: a retrospective evaluation of the Abstrackr machine learning tool. Systematic Reviews, 7(1), 45. https://doi.org/10.1186/s13643-018-0707-8
Harrison, H., Griffin, S. J., Kuhn, I., & Usher-Smith, J. A. (2020). Software tools to support title and abstract screening for systematic reviews in healthcare: An evaluation. BMC Medical Research Methodology, 20(1), 7. https://doi.org/10.1186/s12874-020-0897-3
Tsou, A. Y., Treadwell, J. R., Erinoff, E., & Schoelles, K. (2020). Machine learning for screening prioritization in systematic reviews: Comparative performance of Abstrackr and EPPI-Reviewer. Systematic Reviews, 9(1), 73. https://doi.org/10.1186/s13643-020-01324-7
Van der Mierden, S., Tsaioun, K., Bleich, A., & Leenaars, C. (2019). Software tools for literature screening in systematic reviews in biomedical research. ALTEX, 36(3), 508–517. https://doi.org/10.14573/altex.1902131