This is the code for the system introduced in the CHI 2024 paper "CoQuest: Exploring Research Question Co-Creation with an LLM-based Agent".
We propose a novel system, CoQuest, that allows an AI agent to initiate research question (RQ) generation by tapping the power of LLMs and incorporating human feedback into a co-creation process.
Major features of the CoQuest system:
- RQ Flow Editor that facilitates the user's major interactions, such as generating RQs, providing input and feedback to the AI, and editing the RQ flow (e.g., dragging and deleting).
- Paper Graph Visualizer that displays the literature space related to each RQ.
- AI Thoughts that explains the AI's rationale for generating each RQ.
Demo video: coquest-demo-github.mp4
We gratefully acknowledge the following projects/libraries, which made this prototype possible:
- xyflow: https://github.com/xyflow/xyflow
- AutoGPT: https://github.com/Significant-Gravitas/AutoGPT
- LangFlow: https://github.com/logspace-ai/langflow
To get a local copy up and running, follow these steps.
Recommended: Install Docker (https://docs.docker.com/get-docker/) and docker-compose.
Alternatively, to install from source, you will need Node.js and Python >= 3.10.
This step is required whether you run from Docker or from source.
- First, create a config file `backend/.env` based on your API choice and credentials. Refer to `backend/.env.example` for an example.
- If you are using the Azure OpenAI API, create another Azure config file at `backend/azure.yaml`. Refer to `backend/azure.yaml.example` for an example.
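As a rough illustration, a `backend/.env` for the standard OpenAI API might contain key/value pairs like the sketch below; the variable names are assumptions, and `backend/.env.example` remains the authoritative reference.

```sh
# Illustrative only — see backend/.env.example for the actual variable names
OPENAI_API_TYPE=openai
OPENAI_API_KEY=sk-...
```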
- To deploy locally, build and run the Docker containers with docker-compose (note that this runs a dev server):
  ```sh
  docker-compose up
  ```
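If you change the code or config files later, the images usually need to be rebuilt; `--build` and `-d` (detached mode) are standard docker-compose flags:

```sh
# Rebuild the images and run the containers in the background
docker-compose up --build -d
```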
To install and run from source instead:
- Clone the repo
  ```sh
  git clone https://github.com/yiren-liu/coquest.git
  ```
- Install and run the frontend server
  ```sh
  cd frontend/rq-flow
  npm install
  npm start
  ```
- Install the backend Python server requirements (optionally inside a virtual environment; see the sketch after this list)
  ```sh
  cd backend/
  pip install -r requirements.txt
  python -m spacy download en_core_web_sm
  ```
- Run the backend server (dev mode)
  ```sh
  python main.py
  ```
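For dependency isolation, a standard Python virtual environment can be created and activated before installing the backend requirements; a minimal sketch:

```sh
# Optional: create and activate a virtual environment (Python >= 3.10) inside backend/
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt
```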
The backend DB used for logging can be changed to any self-hosted Postgres DB by modifying the `.env` configs.
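For example, pointing the logger at a self-hosted Postgres instance might look like the sketch below; the variable names here are illustrative assumptions, so check `backend/.env.example` for the actual keys.

```sh
# Illustrative only — the real variable names are defined in backend/.env.example
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=coquest
POSTGRES_PASSWORD=change-me
POSTGRES_DB=coquest_logs
```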
After deploying the service locally, visit: http://localhost:3000/app
The paper pool used in the search function is vectorized and stored using ChromaDB under `backend/paper_graph/db`.
Currently, the embedding model we use is OpenAI's Ada v2 (`text-embedding-ada-002`), but you could use any other model if needed. When swapping the paper pool, modify and run `backend/paper_graph/get_embeddings.py`.
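As a reference for how the stored pool can be queried, here is a minimal Python sketch using a recent chromadb client; the collection name `papers` and the exact client API are assumptions (the repo may pin a different chromadb version), so treat `backend/paper_graph/get_embeddings.py` as the ground truth.

```python
# Minimal sketch: query the vectorized paper pool with ChromaDB + OpenAI ada-002.
# The collection name "papers" is an assumption; see backend/paper_graph/get_embeddings.py.
import chromadb
from chromadb.utils import embedding_functions

openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key="sk-...",                     # your OpenAI API key
    model_name="text-embedding-ada-002",  # OpenAI Ada v2 embeddings
)

client = chromadb.PersistentClient(path="backend/paper_graph/db")
collection = client.get_collection(name="papers", embedding_function=openai_ef)

# Retrieve the 5 papers closest to a query in embedding space
results = collection.query(
    query_texts=["human-AI co-creation of research questions"],
    n_results=5,
)
print(results["documents"])
```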
Yiren Liu - @yirenl2 - [email protected]
SALT Lab @ UIUC: https://socialcomputing.web.illinois.edu/index.html