Duplicate method in papers-with-abstracts #10

Open
jmelot opened this issue Apr 19, 2021 · 0 comments

Comments

jmelot commented Apr 19, 2021

Thanks for this great resource! We ingest PWC data daily, and as of about a week ago (April 13) one of our automated checks, which verifies that method full_names are unique within each paper in papers-with-abstracts.json.gz, started failing. I haven't checked whether this affects more than one paper, but for arXiv id 1912.07651 there appear to be two versions of the same method, both:

      {
        "name": "DNAS",
        "full_name": "Differentiable Neural Architecture Search",
        "description": "**DNAS**, or **Differentiable Neural Architecture Search**, uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. DNAS allows us to explore a layer-wise search space where we can choose a different block for each layer of the network. DNAS represents the search space by a super net whose operators execute stochastically. It relaxes the problem of finding the optimal architecture to find a distribution that yields the optimal architecture. By using the Gumbel Softmax technique, it is possible to directly train the architecture distribution using gradient-based optimization such as SGD.\r\n\r\nThe loss used to train the stochastic super net consists of both the cross-entropy loss that leads to better accuracy and the latency loss that penalizes the network's latency on a target device. To estimate the latency of an architecture, the latency of each operator in the search space is measured and a lookup table model is used to compute the overall latency by adding up the latency of each operator. Using this model allows for estimation of the latency of architectures in an enormous search space. More importantly, it makes the latency differentiable with respect to layer-wise block choices.",
        "introduced_year": 2000,
        "source_url": null,
        "source_title": null,
        "code_snippet_url": "",
        "main_collection": {
          "name": "Neural Architecture Search",
          "description": "**Neural Architecture Search** methods are search methods that seek to learn architectures for machine learning tasks, including the underlying build blocks. Below you can find a continuously updating list of neural architecture search algorithms. ",
          "parent": null,
          "area": "General"
        }
      },

and

      {
        "name": "Differentiable NAS",
        "full_name": "Differentiable Neural Architecture Search",
        "description": "",
        "introduced_year": 2000,
        "source_url": null,
        "source_title": null,
        "code_snippet_url": null,
        "main_collection": {
          "name": "Neural Architecture Search",
          "description": "**Neural Architecture Search** methods are search methods that seek to learn architectures for machine learning tasks, including the underlying build blocks. Below you can find a continuously updating list of neural architecture search algorithms. ",
          "parent": null,
          "area": "General"
        }
      },
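For reference, a minimal sketch of this kind of duplicate check might look like the following (assuming papers-with-abstracts.json.gz is a gzipped JSON array where each paper carries an "arxiv_id" and a "methods" list; the field names are taken from the snippets above rather than a verified schema):

      import gzip
      import json
      from collections import Counter

      # Sketch of a per-paper uniqueness check on method full_names.
      # Assumes the file is a gzipped JSON array of paper records.
      with gzip.open("papers-with-abstracts.json.gz", "rt", encoding="utf-8") as f:
          papers = json.load(f)

      for paper in papers:
          counts = Counter(m.get("full_name") for m in paper.get("methods", []))
          dupes = [name for name, n in counts.items() if name and n > 1]
          if dupes:
              print(paper.get("arxiv_id"), dupes)

This is the kind of check that now flags 1912.07651, since "Differentiable Neural Architecture Search" appears under two different method names.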