diff --git a/docs/index.md b/docs/index.md
index 6cf3a81..6130555 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -82,7 +82,7 @@ In designing test cases for evaluation, we incorporate domain-specific test case
 | **Material Science** | [Semiconductor Materials](#semiconductor-materials) (7), [Molecular Modeling](#molecular-modeling) (6) |
 ![Image Title](figures/SciCode_chart.png)
-**Left:** Distribution of Main Problems **Right:** Distribution of Subproblems
+Left: Distribution of Main Problems Right: Distribution of Subproblems
 
 ## Experiment Results
 
 We evaluate models using zero-shot prompts. We keep the prompts general, designing different ones for different evaluation setups only to inform the model about the task. Prompts are identical across models and fields; each contains the main-problem and subproblem instructions along with the code for previous subproblems. In the standard setup, the model is given no background knowledge, and its generated solutions to previous subproblems are carried over as context. The scientist-annotated background supplies the knowledge and reasoning steps needed to solve each problem, shifting the evaluation's focus toward the models' coding and instruction-following capabilities.
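
To make the two setups concrete, here is a minimal sketch of how such a zero-shot prompt could be assembled under the assumptions described above. The names (`Subproblem`, `build_prompt`, field names) are hypothetical illustrations, not the SciCode harness API:

```python
from dataclasses import dataclass


# Hypothetical container for one subproblem; not SciCode's actual schema.
@dataclass
class Subproblem:
    instruction: str          # natural-language description of this step
    background: str = ""      # scientist-annotated background, if provided
    generated_code: str = ""  # model solution, filled in as evaluation proceeds


def build_prompt(main_instruction: str, subproblems: list[Subproblem],
                 current: int, with_background: bool = False) -> str:
    """Assemble a zero-shot prompt for subproblem `current`.

    Standard setup (with_background=False): no background knowledge is
    included, but the model's own generated solutions to earlier
    subproblems are carried over as context.
    """
    parts = [f"Main problem:\n{main_instruction}"]
    if with_background and subproblems[current].background:
        parts.append(f"Background:\n{subproblems[current].background}")
    # Carry over the code generated for all previous subproblems.
    for i, sp in enumerate(subproblems[:current]):
        parts.append(f"# Solution to subproblem {i + 1}\n{sp.generated_code}")
    parts.append(f"Current subproblem:\n{subproblems[current].instruction}")
    return "\n\n".join(parts)
```

Because the prompt template itself never changes across models or fields, any score difference between the standard and background-provided runs can be attributed to the background knowledge rather than to prompt engineering.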