Add FAQ (#1)
klieret authored Sep 9, 2024
1 parent ef14247 commit 7a744d8
Showing 2 changed files with 53 additions and 0 deletions.
52 changes: 52 additions & 0 deletions docs/faq.md
@@ -0,0 +1,52 @@
# FAQ

* **How do you know that the subproblems are non-overlapping and complete? I assume this is up to the judgment of the question-writers and the in-domain validators, but it'd be nice if there was some way of more formally confirming this (not sure that's possible though).**
In practice, we differentiate subproblems by their context and their role within the broader problem-solving framework (i.e., given the previous subproblem-code pairs). For example, the same function for calculating a derivative could be used in different contexts: computing the force from a potential at step 3 in one main problem, and computing a velocity at step 5 in another. Although these subproblems are based on the same mathematical operation, they are not considered overlapping because the physics and context are entirely different.
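
  For illustration only (this is not an actual SciCode subproblem, and all names and constants here are invented), the same finite-difference helper can play two different physical roles depending on the surrounding problem:

  ```python
  import numpy as np

  def finite_difference(f, x, h=1e-5):
      """Central-difference derivative of a scalar function f at x."""
      return (f(x + h) - f(x - h)) / (2 * h)

  # Context 1: force from a potential, F(x) = -dV/dx (toy harmonic potential, k = 2)
  potential = lambda x: 0.5 * 2.0 * x**2
  force = -finite_difference(potential, 1.0)       # ~ -2.0

  # Context 2: velocity from a trajectory, v(t) = dx/dt (toy trajectory x(t) = sin t)
  trajectory = lambda t: np.sin(t)
  velocity = finite_difference(trajectory, 0.0)    # ~ cos(0) = 1.0
  ```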

* **SciCode doesn’t actually have other scientists try and complete the problems, so your notion of “human validation” is somewhat weak (I think the validators just look at the problems and say “yep this looks good” or “I think this is bad” with feedback).** Ideally, we would have both in-domain and out-of-domain scientists fully engage with each problem by writing and executing the code themselves. This would allow us to systematically record their mistakes, feedback, and discussions, resulting in an ideal version of both the question design and the code solution. However, given the limited capacity of our pool of scientists, we need to maximize the efficiency of contributions while still achieving a high-quality validation process. Here’s what we believe is the most efficient approach:
* **In-Domain Scientists**: In-domain scientists focus on ensuring the scientific accuracy of the problem. For each major problem, they verify that the design is sound, check the subproblem methodology, and confirm that the solution correctly reproduces published results or conforms to established theoretical models. Their expertise ensures that the scientific underpinnings of the problem are accurate.
    * **Out-of-Domain Scientists**: Out-of-domain scientists help by reviewing the question design and background information. The background is written to give graduate-level readers without specific domain expertise enough context to solve the problem successfully. Therefore, while these validators may not be experts in the specific field, they can still ensure that the information provided is clear and sufficient to solve the problem.
* **Leveraging GPT-4 for Reproduction**: Instead of requiring human scientists to reproduce the problem, we delegate this task to GPT-4. This is both cost-effective and efficient. The scientists’ primary responsibility then becomes performing error analysis on the outputs, identifying areas where the question design, solution code, or test cases may need refinement.

This streamlined process enables us to make the most of our scientific expertise while still ensuring robust validation of the problems. By focusing human effort on critical evaluation points and using AI for reproduction tasks, we can iterate and validate problems with greater speed and precision.


* **I don’t think that SciCode validates/confirms that solutions to these problems don’t exist online.** For each main problem, we ensure that it reproduces results from a published paper or an established theoretical model, using these references as a benchmark. This is standard practice in scientific research workflows, and these benchmark results are often accessible online.

The challenge lies in designing specific question-code pairs that faithfully reproduce those results. These question-code pairs are either derived from our own research workflows or created from scratch for topics we consider fundamental or important to the domain.


Take, for example, the 2D Ising model, where the exact transition temperature is kT_c/J = 2/ln(1 + √2) ≈ 2.269. While this result is easily found in this paper, reproducing it with the simulation method requires significant effort.

By doing so, we ensure that the problems, while based on known results, still demand an additional level of effort and insight, making them meaningful and non-trivial challenges.
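
To give a sense of the effort involved, below is a minimal sketch of the kind of Metropolis Monte Carlo estimate such a problem might build on; the lattice size, sweep counts, and temperatures are illustrative choices, not parameters from the benchmark:

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L lattice with periodic boundaries (J = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy cost of flipping spin (i, j): dE = 2 * s_ij * (sum of 4 neighbors)
        neighbors = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                     + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * neighbors
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

def magnetization(L=16, temps=(1.5, 2.269, 3.0), sweeps=2000, burn_in=500, seed=0):
    """Estimate |m| per spin at a few temperatures around kT_c/J ~ 2.269."""
    rng = np.random.default_rng(seed)
    out = {}
    for T in temps:
        spins = rng.choice(np.array([-1, 1]), size=(L, L))
        samples = []
        for sweep in range(sweeps):
            metropolis_sweep(spins, 1.0 / T, rng)
            if sweep >= burn_in:
                samples.append(abs(spins.mean()))
        out[T] = float(np.mean(samples))
    return out

# |m| is near 1 well below T_c and drops toward 0 above it; finite-size
# effects smear the transition on a small lattice.
print(magnetization())
```

Locating the transition precisely further requires finite-size scaling analysis across several lattice sizes, which is where much of the real effort lies.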

* **What's the difference between the numerical tests (inputs/outputs) and “domain-specific tests”?** The difference lies in which aspect of a solution's correctness scientists assess.


Numerical tests focus on data points, ensuring that the inputs and outputs match expected numerical values. For example, in a molecular dynamics problem, numerical tests would check the final positions and velocities of atoms under a specific potential in a closed box after some time t.
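
As a sketch of what such a check might look like (the `run_md` signature and the `ref` dictionary are assumed interfaces for illustration, not SciCode's actual harness):

```python
import numpy as np

def test_numerical_outputs(run_md, ref):
    """Numerical (input/output) test: compare the simulated final state
    against stored reference values.

    `run_md(positions, velocities, t)` is assumed to return the final
    (positions, velocities); `ref` holds the initial state and references.
    """
    pos_t, vel_t = run_md(ref["positions0"], ref["velocities0"], ref["t"])
    np.testing.assert_allclose(pos_t, ref["positions_t"], rtol=1e-6)
    np.testing.assert_allclose(vel_t, ref["velocities_t"], rtol=1e-6)
```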


Domain-specific tests, on the other hand, check whether the solution obeys the relevant physical laws or constraints. Scientists don’t just stop at numerical data; we also verify that no physical laws are violated. In the same molecular dynamics example, apart from verifying positions and velocities, we would also check the conservation of energy and momentum: in a closed box, the total energy, angular momentum, and linear momentum should remain the same before and after time t.
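
A corresponding domain-specific check might look like the sketch below, again with assumed interfaces (`run_md` and `potential` are illustrative callables, not SciCode's actual test code):

```python
import numpy as np

def test_conservation_laws(run_md, pos0, vel0, masses, potential, t):
    """Domain-specific test: conserved quantities must match before and after time t."""
    def total_energy(pos, vel):
        kinetic = 0.5 * np.sum(masses[:, None] * vel**2)
        return kinetic + potential(pos)

    def linear_momentum(vel):
        return (masses[:, None] * vel).sum(axis=0)

    def angular_momentum(pos, vel):
        return np.sum(np.cross(pos, masses[:, None] * vel), axis=0)

    e0, p0, l0 = total_energy(pos0, vel0), linear_momentum(vel0), angular_momentum(pos0, vel0)
    pos_t, vel_t = run_md(pos0, vel0, t)
    np.testing.assert_allclose(total_energy(pos_t, vel_t), e0, rtol=1e-5)
    np.testing.assert_allclose(linear_momentum(vel_t), p0, atol=1e-8)
    np.testing.assert_allclose(angular_momentum(pos_t, vel_t), l0, atol=1e-8)
```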


The domain-specific test is a critical part of validating correctness not only for SciCode but also for real research workflows in natural science domains.




* **What were the kinds of updates made during in-domain validation?**

During in-domain validation, all aspects of the problem can be updated to ensure accuracy and alignment with domain-specific expectations:

* **Question Design**: Refining or rewriting the problem to better align with the desired scientific solution or to ensure clarity in how the problem is framed.
* **Formulation of Background**: Updating the derivation or contextual background to provide a more accurate or comprehensive explanation of the problem and its relevance to the domain.
* **Methodology**: Revising the methods used to solve the problem, which may involve adjusting algorithms, simulation techniques, or the overall approach to ensure the solution is robust and scientifically valid.
* **Domain-specific Tests**: Updating the domain-specific checks that validate the fundamental physical laws or constraints.

At this stage, we have even rewritten a couple of problems from scratch to ensure that the question design pairs properly with the code, making the problem-solving process more effective and scientifically rigorous.



* **How can I trust the out-of-domain validation?**
The role of out-of-domain validation is not to ensure scientific correctness but rather to verify that the problem is presented in a way that is clear, complete, and accessible to someone outside the specific field. The primary focus of out-of-domain validators is to ensure that the combination of the question and its background information provides enough context for someone to solve the problem, even if they lack domain-specific expertise.

1 change: 1 addition & 0 deletions mkdocs.yml
```diff
@@ -55,6 +55,7 @@ nav:
   - Overview:
     - "SciCode": index.md
     - "Example problem": example_problem.md
+    - "FAQ": faq.md
     - "Problem list": problems.md
   - Leaderboard: leaderboard.md
   - Preprint: "https://raw.githubusercontent.com/scicode-bench/scicode-bench.github.io/main/SciCode.pdf"
```
