Experiment to test insight interpretation #55

Open
01painadam opened this issue Dec 18, 2024 · 0 comments

Summary

What does this research help us do? What future work will it unlock, or what decisions will it enable?

One opportunity for an LLM within the monitoring use case is to improve the accessibility of insights, specifically by adding text summaries that accompany visualisations to make them easier to understand.

We need to test how accurately the LLM can describe these results, as well as whether it can pick out headline insights, identify features in the data, or incorporate contextual data about the area being analysed. A minimal sketch of the kind of summarisation call we have in mind follows the prompt list below.

Possible trigger prompts:

  • I need to monitor KBAs in South America
  • Set up a dashboard to monitor KBAs in South America
  • How has ecosystem health changed in KBAs in South America?
  • What are the biggest threats to KBAs in South America?
  • What’s the status of KBAs in South America?
  • lots of other potential prompts based on attributes of individual KBAs, e.g. largest ecosystem, global KBA criteria, species present (see KBA metadata KBA info scrape)
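
To make the experiment concrete, here is a minimal sketch of the summarisation call we would be testing. It assumes an OpenAI-style chat-completion client; the model, provider, prompt wording, and `chart_data` structure are all placeholder assumptions, not decisions.

```python
# Minimal sketch: feed precomputed analysis results to an LLM and ask for a
# plain-language summary. Data values, model name, and prompt wording are
# illustrative placeholders only.
import json
from openai import OpenAI  # any chat-completion client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical output of the portfolio analysis (#53/#86), with made-up numbers
chart_data = {
    "region": "South America",
    "metric": "deforestation alerts (ha)",
    "series": [
        {"kba": "Example KBA A", "2023": 120, "2024": 480},
        {"kba": "Example KBA B", "2023": 90, "2024": 95},
    ],
}

prompt = (
    "You are summarising a conservation monitoring dashboard. In 2-3 sentences, "
    "describe the headline insight, citing figures from the data.\n"
    f"Data: {json.dumps(chart_data)}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; model choice is part of the experiment
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Testing key question 2 below would extend `chart_data` with the KBA text descriptions and metadata from the scrape.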

Key questions

Given #53 and #86, when a portfolio of areas is selected and data is generated successfully:

  1. Can the LLM interpret the results of the analysis accurately and clearly?
  2. Can the LLM incorporate text data about the areas into the insights (e.g. from KBA or Project data)? For example:
    a. highlight the individual KBAs with the most severe threats, based on alerts within and around each KBA
    b. highlight potential causes of threats based on the text description of each KBA
    c. highlight insights based on KBA metadata, e.g. which species are threatened
    d. suggest explanations/causes that draw on the text descriptions of KBAs in the metadata
  3. Based on follow-up user prompts, can the LLM refine the insights (sketched after the note below) to:
    a. use new change filters, aggregations, or formats (e.g. % vs. area)
    b. change the language or framing (simplify, make it concise/bulleted, use an analogy such as football pitches rather than hectares, or frame it for a specific audience or persona)

Note: we should use predefined charts; we are not interested in whether the LLM can select the right chart component.
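
For key question 3, the refinement step could be exercised by replaying the chat history with a reframing request appended. A self-contained sketch, again with placeholder model and wording, and a stubbed first answer standing in for the LLM's initial insight:

```python
# Sketch of the follow-up refinement step (key question 3). The initial summary
# is stubbed; in the experiment it would come from the first LLM call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

first_summary = (
    "Deforestation alerts in Example KBA A quadrupled to 480 ha in 2024, "
    "while Example KBA B stayed roughly flat."
)

messages = [
    {"role": "assistant", "content": first_summary},
    {
        "role": "user",
        "content": (
            "Rewrite this for a general audience: report areas in football "
            "pitches rather than hectares (1 pitch is roughly 0.7 ha) and "
            "keep it to three bullet points."
        ),
    },
]
refined = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=messages,
)
print(refined.choices[0].message.content)
```

Simple string checks on the refined output (did it convert the units? did it keep to three bullets?) would let the notebook score these refinements automatically.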

Expected Output(s)

What artefacts do we expect to produce that will help us answer the above questions? May be designs, documents, a prototype feature…

  • notebook / Streamlit prototype
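
As a starting point for that artefact, a skeleton Streamlit harness might look like the following; the file name, layout, and the stubbed insight call are hypothetical.

```python
# insight_experiment.py - hypothetical harness skeleton.
# Run with: streamlit run insight_experiment.py
import pandas as pd
import streamlit as st

st.title("KBA insight interpretation experiment")

# Predefined chart, per the note above: the LLM never chooses the chart component.
df = pd.DataFrame({"kba": ["Example KBA A", "Example KBA B"], "alerts_ha": [480, 95]})
st.bar_chart(df.set_index("kba"))

user_prompt = st.text_input(
    "Trigger prompt", "What's the status of KBAs in South America?"
)
if st.button("Generate insight"):
    # In the real harness this would call the LLM as sketched above; stubbed
    # here so the skeleton runs without credentials.
    st.write(f"(LLM summary for {user_prompt!r} would appear here)")
```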

Hints

Suggestions about how to go about the research. How might we answer these questions?
