Configure your AI model settings and API credentials for optimal literature screening performance
Supported providers:

DeepSeek
Best for: Cost-effective research, coding tasks
Strengths: Excellent price-performance ratio, strong reasoning

OpenAI
Best for: General-purpose tasks, proven reliability
Strengths: Most mature ecosystem, consistent quality

Google Gemini
Best for: Multimodal tasks, large context windows
Strengths: Advanced reasoning, integrated with Google services

Anthropic Claude
Best for: Complex analysis, safety-critical tasks
Strengths: Exceptional reasoning, ethical AI design
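If you are setting up credentials yourself, the sketch below shows one possible arrangement in which each provider's API key is supplied as an environment variable. This is an illustration rather than MetaScreener's required configuration: OPENAI_API_KEY and ANTHROPIC_API_KEY are the names those providers' official SDKs read by default, while DEEPSEEK_API_KEY and GOOGLE_API_KEY are common conventions that may differ in your deployment.

```python
# Illustrative sketch only: report which providers have an API key set in the
# environment. The variable names are assumptions, not settings defined by MetaScreener.
import os

PROVIDER_KEYS = {
    "DeepSeek": "DEEPSEEK_API_KEY",
    "OpenAI": "OPENAI_API_KEY",
    "Google Gemini": "GOOGLE_API_KEY",
    "Anthropic Claude": "ANTHROPIC_API_KEY",
}

def available_providers() -> list[str]:
    """Return the providers whose API key is present in the current environment."""
    return [name for name, var in PROVIDER_KEYS.items() if os.environ.get(var)]

if __name__ == "__main__":
    configured = available_providers()
    print("Configured providers:", ", ".join(configured) if configured else "none")
```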
Model types:

Fast models
Purpose: Fast, conversational interactions
Best for: Screening large literature databases quickly
Examples: GPT-4o Mini, DeepSeek V3, Gemini Flash

Reasoning models
Purpose: Deep analysis and complex reasoning
Best for: Complex criteria and high-stakes research
Examples: Claude 4 Opus, DeepSeek R1, GPT-4.1
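As a rough illustration of the two tiers, the snippet below maps each one to the providers' public model identifiers. The identifiers are assumptions drawn from the providers' APIs at the time of writing, not constants defined by MetaScreener, and may need updating.

```python
# Illustrative mapping of the two model tiers to provider API model names.
# Check each provider's documentation for the current identifiers.
FAST_MODELS = {
    "openai": "gpt-4o-mini",          # GPT-4o Mini
    "deepseek": "deepseek-chat",      # DeepSeek V3
    "google": "gemini-2.0-flash",     # Gemini Flash
}

REASONING_MODELS = {
    "anthropic": "claude-opus-4-20250514",  # Claude 4 Opus
    "deepseek": "deepseek-reasoner",        # DeepSeek R1
    "openai": "gpt-4.1",                    # GPT-4.1
}

def pick_model(provider: str, deep_reasoning: bool) -> str:
    """Choose a model name for the provider based on how demanding the task is."""
    table = REASONING_MODELS if deep_reasoning else FAST_MODELS
    return table[provider]
```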
Recommended models by use case:
Most screening tasks: DeepSeek V3 or Gemini Flash offer a good balance of cost and quality.
Complex criteria and high-stakes research: Claude 4 Opus or GPT-4.1.
Large literature databases: GPT-4o Mini or DeepSeek V3 for fast processing.
Nuanced screening criteria: DeepSeek R1 or Claude 4 Sonnet for advanced reasoning.
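To show where the chosen model actually plugs in, here is a minimal sketch of screening a single abstract with DeepSeek V3 through its OpenAI-compatible endpoint. This is not MetaScreener's own code; the prompt wording, environment variable, and helper name are illustrative.

```python
# Minimal sketch: ask one model for an include/exclude decision on one abstract.
# DeepSeek exposes an OpenAI-compatible API, so the standard OpenAI client works.
import os
from openai import OpenAI

# Assumed environment variable; adjust to your own credential setup.
client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                base_url="https://api.deepseek.com")

def screen_abstract(abstract: str, criteria: str, model: str = "deepseek-chat") -> str:
    """Ask the model whether an abstract meets the inclusion criteria."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output helps keep screening reproducible
        messages=[
            {"role": "system",
             "content": "You screen studies for a systematic review. "
                        "Answer INCLUDE, EXCLUDE, or UNSURE, then give a one-sentence reason."},
            {"role": "user",
             "content": f"Inclusion criteria:\n{criteria}\n\nAbstract:\n{abstract}"},
        ],
    )
    return response.choices[0].message.content
```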
MetaScreener is provided "as-is", without warranties of any kind, express or implied. The accuracy of AI-driven screening and data extraction depends heavily on the chosen LLM, the quality of the input data, and the clarity of the defined criteria.
Users are solely responsible for verifying all results and making final decisions. The developers assume no liability for any outcomes resulting from the use of this software.
Please use responsibly and in accordance with all applicable ethical guidelines and institutional policies.
Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford
Oxford University Clinical Research Unit, National Hospital for Tropical Diseases, Hanoi, Vietnam
Faculty of Medicine, Macau University of Science and Technology

We welcome feedback about MetaScreener's performance, functionality, and improvements, especially usage data from actual literature screening workflows (sensitivity, specificity, time saved, etc.). Your real-world usage data is invaluable for improving this tool! Valuable feedback will be credited in the contributions or acknowledgments.