
Roadmap

Ozzie Gooen edited this page Apr 10, 2018 · 14 revisions

Note: This is still very much in flux

April 2018

Epics:

May 2018

Epics:

June 2018

Epics:

July 2018

Epics:

Tests:

  • Run 10k+ tasks in a small study, for machine learning tests.

August 2018

September 2018

  • Run 100k+ tasks in a medium-sized study, for machine learning tests.

Future / Possible Epics

  • Improving cache hits by showing similar questions
  • Using ML to automate abstraction (pointer introduction) for questions
  • Tasks with built-in search
  • Dialogue Trees
    • Users can send messages to specific people involved in the main questions. The messages and answers get treated like pointers.
  • Consider switching to an event-sourced database: https://blog.risingstack.com/event-sourcing-with-examples-node-js-at-scale/
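The event-sourcing idea above can be sketched briefly: instead of storing current state, the database stores an append-only log of events, and state is derived by replaying that log. A minimal in-memory sketch, assuming hypothetical event and state shapes (`QuestionEvent`, `replay` are illustrative names, not from the codebase):

```typescript
// Illustrative event types; the real schema would differ.
type QuestionEvent =
  | { kind: "QuestionAsked"; id: string; text: string }
  | { kind: "AnswerSubmitted"; id: string; answer: string };

interface QuestionState {
  text: string;
  answers: string[];
}

// The event log is the source of truth; current state is
// rebuilt deterministically by replaying events in order.
function replay(events: QuestionEvent[]): Map<string, QuestionState> {
  const state = new Map<string, QuestionState>();
  for (const e of events) {
    if (e.kind === "QuestionAsked") {
      state.set(e.id, { text: e.text, answers: [] });
    } else {
      state.get(e.id)?.answers.push(e.answer);
    }
  }
  return state;
}

const log: QuestionEvent[] = [
  { kind: "QuestionAsked", id: "q1", text: "What is the capital of France?" },
  { kind: "AnswerSubmitted", id: "q1", answer: "Paris" },
];
const state = replay(log);
```

One practical upside for this project: since every state change is recorded, features like audit trails and pointer history come almost for free.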

Tidying/small refactors to do at some point

  • Clean up & test various slate.js / pointer logic
  • Other frontend testing (e.g. testing of various React components)
  • Backend testing (should probably wait until after the refactor is completed)
  • Adding real types throughout the codebase (should perhaps wait until after the refactor is completed)
  • Setting up compilation of the backend code (it's currently run via ts-node: https://github.com/TypeStrong/ts-node)
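For the compilation bullet above, the usual move is to have `tsc` emit JavaScript ahead of time instead of compiling on the fly with ts-node. A minimal `tsconfig.json` sketch; the output directory and strictness settings here are assumptions, not the project's actual config:

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src/**/*.ts"]
}
```

With something like this in place, `tsc` writes plain JavaScript to `dist/`, which `node` can run directly in production, keeping ts-node for development only.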

Future Goals of the Project

Robustness

For the first year, we focus on having 3-20 high-quality contributors, who can be approximated as a single person; at that scale, quality control is not much of an issue. In the future, the system could include hundreds of people, at which point quality control will become a real area of work.

Comparative Advantages

Very long term: use machine learning to figure out who is good at which kinds of tasks, and assign tasks accordingly.