diff --git a/public/css/style.css b/public/css/style.css
index ffcfd7d..922e0ca 100644
--- a/public/css/style.css
+++ b/public/css/style.css
@@ -47,7 +47,7 @@ header .info-btn {
padding: 0;
}
-fieldset .info-btn {
+fieldset .info-btn, .modal-overlay .info-btn {
padding: revert;
}
diff --git a/public/index.html b/public/index.html
index f77ecea..6009ba1 100644
--- a/public/index.html
+++ b/public/index.html
@@ -58,7 +58,7 @@
-CRAB (Code Review Automation Benchmark) is a research-driven platform designed to evaluate deep
-learning models for code review tasks. Developed as part of a master's thesis at the Università
-della Svizzera italiana, CRAB provides a high-quality, curated benchmark dataset of Java code review
-triplets: submitted code, reviewer comment, and revised code. Each instance is manually validated to
-ensure that reviewer comments directly address code issues and that the revised code implements the
-feedback accurately.
-
-The platform supports two core tasks: generating human-like review comments and refining code based
-on those comments. It also accounts for paraphrased feedback and alternative valid code revisions,
-offering a more realistic and robust evaluation. CRAB addresses the shortcomings of existing
-datasets by eliminating noise and ensuring functional correctness through testing. Researchers can
-upload model predictions to receive standardized evaluations, making CRAB an essential tool for
-advancing automated code review technologies.
+This project introduces CRAB (Code Review Automated Benchmark), a high-quality
+benchmark designed to evaluate deep learning-based code review automation tools. It focuses on two
+key tasks: comment generation and code refinement.
+
+The dataset consists of carefully curated triplets
+<submitted_code, reviewer_comment, revised_code>—ensuring each comment is
+actionable and each revision implements the suggested change. This eliminates noise common in
+previous datasets and supports reliable, meaningful evaluation.
+
+To support model benchmarking, we also provide a web-based evaluation platform (the website on which
+you are reading this description) that allows researchers to download the dataset, submit their
+predictions, and assess model performance across both tasks.
+
+You can explore the source code for each component here:
+
+This website lets you evaluate code review models against the CRAB benchmark. You can download input
+files for either the comment generation or code refinement task, upload your model’s predictions,
+and view the results once processing is complete. Each section includes a help icon
+that provides more detailed instructions and file format guidelines.
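For illustration only, the sketch below shows what one benchmark triplet might look like if serialized as JSON. The field names, the subtraction-bug example, and the JSON layout are assumptions made for this sketch; the actual CRAB input and prediction file formats are documented in the platform's help dialogs.

# Hypothetical sketch of a single CRAB-style triplet, serialized as JSON.
# Field names and structure are assumed for illustration only; they are not
# the benchmark's actual file format.
import json

triplet = {
    "submitted_code": "public int add(int a, int b) { return a - b; }",
    "reviewer_comment": "This should add the operands, not subtract them.",
    "revised_code": "public int add(int a, int b) { return a + b; }",
}

print(json.dumps(triplet, indent=2))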