
040 07/26/2021 to 07/30/2021

max-ellis edited this page Aug 2, 2021 · 3 revisions

07/26/2021

Planned tasks for this week

  • Get statistics from 20 projects ⏳

  • Determine manual sample size

  • Work on paper ⏳

  • Set up evaluation pipeline, metrics, database ⌛

  • Update first experiment based on IntelliMerge's response ⌛

  • Run full evaluation

Progress

Get statistics from 20 projects

  • Getting the statistics for the 20 projects is taking longer than I expected. I'm hoping the analysis will finish by Sunday.

  • I ran into a couple of setbacks but worked around them by using the SMR server. I'm just waiting for it to finish now.

Set up evaluation pipeline, metrics, database

  • I figured out how I am going to compare the merged files: running `git diff --name-only base merge` to list the files that changed between the base and merged versions.
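
The comparison step can be sketched as follows. The `changed_files` helper, the throwaway demo repository, and the file names are all hypothetical; only the `git diff --name-only base merge` call itself comes from the plan above.

```python
import os
import subprocess
import tempfile

def changed_files(repo, base, merge):
    """List the paths that differ between two commits (here, base vs. merge)."""
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", base, merge],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

# Throwaway repo for demonstration: only A.java changes between base and merge.
repo = tempfile.mkdtemp()

def git(*args):
    subprocess.run(["git", "-C", repo, *args], check=True, capture_output=True)

git("init", "-q")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")
open(os.path.join(repo, "A.java"), "w").write("class A {}\n")
open(os.path.join(repo, "B.java"), "w").write("class B {}\n")
git("add", ".")
git("commit", "-qm", "base version")
git("tag", "base")
open(os.path.join(repo, "A.java"), "w").write("class A { int x; }\n")
git("commit", "-qam", "merged version")
git("tag", "merge")

print(changed_files(repo, "base", "merge"))  # → ['A.java']
```

Restricting the comparison to this file list keeps unchanged files out of the evaluation entirely.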

Work on paper

  • I have spent the past couple of days working on the methodology section. I need to add the refactoring conflict detection and replaying refactorings sections and then I'll send the draft to Sarah. There are several areas that I think need improvement, but I'm not quite sure how to improve them, so I'm going to spend some time tomorrow working on those as well.

  • I'm waiting to continue the evaluation section until I at least have the statistics for the 20 projects.

Update first experiment based on IntelliMerge's response

  • I am working on getting the input files for each merge scenario, then I'm going to run IntelliMerge on each of the scenarios. I'm expecting this to be done by Sunday.

  • I have the input files for each merge scenario. I am going to finish updating the replication pipeline on Monday and run it. It shouldn't take more than a day to run.

Discussion

  • We talked about how to run IntelliMerge. We agreed that the first experiment will replicate IntelliMerge's results using their preprocessing script. After that, we will use the modified IntelliMerge version with the hard-coded commits, so we can evaluate the real IntelliMerge experience instead of its evaluation-only setup.

  • We will evaluate on merge scenarios with refactorings because the tools are designed to handle these scenarios.

  • We will use all diff files to calculate precision and recall. We could use only files with refactoring-related conflicts, but we want to cover the full merge scenario experience. We also want to account for refactorings that touch multiple files and the class hierarchy.

  • I need to update our precision/recall calculation to only look at diff files.
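
A minimal sketch of that restriction, under stated assumptions: `tool_lines` and `expected_lines` are hypothetical per-file sets of merged-code lines (tool output vs. the developer's actual merge), and `diff_files` is the list produced by `git diff --name-only base merge`. This is not the pipeline's actual implementation, just the shape of the calculation.

```python
def precision_recall(tool_lines, expected_lines, diff_files):
    """Compute precision/recall over diff files only.

    tool_lines / expected_lines: dict mapping file path -> set of lines.
    diff_files: files that changed between base and merge; all other
    files are excluded from the counts.
    """
    tp = fp = fn = 0
    for f in diff_files:
        tool = tool_lines.get(f, set())
        expected = expected_lines.get(f, set())
        tp += len(tool & expected)   # lines the tool got right
        fp += len(tool - expected)   # lines the tool added incorrectly
        fn += len(expected - tool)   # lines the tool missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: the tool matches 2 of its 3 emitted lines, and misses 1 expected line.
p, r = precision_recall(
    {"A.java": {1, 2, 3}, "Untouched.java": {9}},
    {"A.java": {2, 3, 4}},
    ["A.java"],  # Untouched.java is not a diff file, so it is ignored
)
print(p, r)  # → 0.666… 0.666…
```

Pooling the counts across files (rather than averaging per-file scores) weights large files proportionally, which matches evaluating the full merge scenario.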

Next Steps

  • Run replication pipeline and compare with IntelliMerge paper

  • Run evaluation

  • Work with Sarah on methodology draft

  • Work on evaluation draft and send it Sarah's way
