
notion doc #246

Draft · wants to merge 12 commits into base: main
45 changes: 41 additions & 4 deletions docs/ingestion/notion.md
@@ -1,12 +1,49 @@
# Notion
[Notion](https://www.notion.so/) is an all-in-one workspace for note-taking, project management, and database management.

Bruin supports Notion as a source for [Ingestr assets](/assets/ingestr), and you can use it to ingest data from Notion into your data warehouse.

To set up a Notion connection, you need to add a configuration item to the `.bruin.yml` file and create an asset file. The only credential required is an `api_key`. For details on how to obtain it, see the [setup guide](https://dlthub.com/docs/dlt-ecosystem/verified-sources/notion#setup-guide).

Follow the steps below to correctly set up Notion as a data source and run ingestion.

### Step 1: Add a connection to the `.bruin.yml` file

To connect to Notion, you need to add a configuration item to the connections section of the `.bruin.yml` file. This configuration must comply with the following schema:

```yaml
connections:
  notion:
    - name: "my-notion"
      api_key: "YOUR_NOTION_API_KEY"
```
### Step 2: Create an asset file for data ingestion

To ingest data from Notion, you need to create an [asset configuration](/assets/ingestr#asset-structure) file. This file defines the data flow from the source to the destination. Create a YAML file (e.g., `notion_ingestion.yml`) inside the assets folder and add the following content:

```yaml
name: public.notion
type: ingestr
connection: postgres

parameters:
  source_connection: my-notion
  source_table: 'd8ee2d159ac34cfc85827ba5a0a8ae71'
  destination: postgres
```

- `name`: The name of the asset.
- `type`: Specifies the type of the asset. Set this to `ingestr` to use the ingestr data pipeline.
- `connection`: The destination connection, which defines where the data should be stored. For example, `postgres` indicates that the ingested data will be stored in a Postgres database.
- `source_connection`: The name of the Notion connection defined in `.bruin.yml`.
- `source_table`: The Notion database you want to ingest, identified by its database ID. The database ID is the string immediately following `notion.so/` and preceding any question mark. For example, if the Notion URL is `https://www.notion.so/d8ee2d159ac34cfc85827ba5a0a8ae71?v=c714dec3742440cc91a8c38914f83b6b`, the database ID is `d8ee2d159ac34cfc85827ba5a0a8ae71`.
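The database-ID extraction described above can also be sketched programmatically. The following Go helper is a hypothetical illustration for this guide (it is not part of Bruin): it strips the query string and leading slash from a Notion database URL.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// extractDatabaseID pulls the database ID out of a Notion database URL.
// url.Parse already separates the query string (?v=...), so the ID is
// simply the URL path with any surrounding slashes trimmed.
func extractDatabaseID(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	id := strings.Trim(u.Path, "/")
	if id == "" {
		return "", fmt.Errorf("no database ID found in %q", rawURL)
	}
	return id, nil
}

func main() {
	id, err := extractDatabaseID("https://www.notion.so/d8ee2d159ac34cfc85827ba5a0a8ae71?v=c714dec3742440cc91a8c38914f83b6b")
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // d8ee2d159ac34cfc85827ba5a0a8ae71
}
```

Note that this simple sketch assumes the URL path contains only the ID; Notion URLs that embed a page title in the path would need extra handling.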

### Step 3: [Run](/commands/run) asset to ingest data
```
bruin run assets/notion_ingestion.yml
```
As a result of this command, Bruin will ingest data from the given Notion table into your Postgres database.

<img width="1178" alt="notion" src="https://github.com/user-attachments/assets/347b9714-e6c2-4e82-ad4a-2cbea37953ab">

55 changes: 45 additions & 10 deletions docs/ingestion/zendesk.md
Original file line number Diff line number Diff line change
@@ -1,27 +1,62 @@
# Zendesk
[Zendesk](https://www.zendesk.com/) is a cloud-based customer service and support platform. It offers a range of features including ticket management, self-service options, knowledge base management, live chat, customer analytics, and conversations.

Bruin supports Zendesk as a source for [Ingestr assets](/assets/ingestr), and you can use it to ingest data from Zendesk into your data warehouse.

To set up a Zendesk connection, you need to add a configuration item to `connections` in the `.bruin.yml` file and create an asset file. Depending on the data you are ingesting (`source_table`), you will need either API Token authentication or OAuth Token authentication; choose the appropriate method based on your source table. See the [Ingestr documentation](https://bruin-data.github.io/ingestr/supported-sources/zendesk.html) for details.

Follow the steps below to correctly set up Zendesk as a data source and run ingestion.

### Step 1: Add a connection to the `.bruin.yml` file

To connect to Zendesk, you need to add a configuration item to the connections section of the `.bruin.yml` file. This configuration must comply with the following schema:

API Token authentication:
```yaml
connections:
  zendesk:
    - name: "my_zendesk"
      api_token: "xyzKey"
      email: "[email protected]"
      sub_domain: "myCompany"
```

OAuth Token authentication:
```yaml
connections:
  zendesk:
    - name: "my_zendesk"
      oauth_token: "abcToken"
      sub_domain: "myCompany"
```

- `sub_domain`: The unique Zendesk subdomain, found in the account URL. For example, if your account URL is `https://my_company.zendesk.com/`, then `my_company` is your subdomain.
- `email`: The email address of the user.
- `api_token`: The API token used for authentication with Zendesk.
- `oauth_token`: The OAuth token used for authentication with Zendesk.
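Since the two authentication methods are mutually exclusive per connection, a configuration check might look like the Go sketch below. The struct and the validation rule here are illustrative assumptions for this guide, not Bruin's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// zendeskConn mirrors the connection fields listed above; it is a
// hypothetical type used only for this example.
type zendeskConn struct {
	APIToken   string
	Email      string
	OAuthToken string
	SubDomain  string
}

// validate assumes exactly one auth method must be configured:
// either api_token together with email, or oauth_token.
func (c zendeskConn) validate() error {
	if c.SubDomain == "" {
		return errors.New("sub_domain is required")
	}
	hasAPIToken := c.APIToken != "" && c.Email != ""
	hasOAuth := c.OAuthToken != ""
	if hasAPIToken == hasOAuth {
		return errors.New("configure either api_token+email or oauth_token")
	}
	return nil
}

func main() {
	conn := zendeskConn{APIToken: "xyzKey", Email: "user@example.com", SubDomain: "myCompany"}
	fmt.Println(conn.validate()) // <nil>
}
```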

### Step 2: Create an asset file for data ingestion
To ingest data from Zendesk, you need to create an [asset configuration](/assets/ingestr#asset-structure) file. This file defines the data flow from the source to the destination. Create a YAML file (e.g., `zendesk_ingestion.yml`) inside the assets folder and add the following content:

```yaml
name: public.zendesk
type: ingestr
connection: postgres

parameters:
  source_connection: my_zendesk
  source_table: 'brands'
  destination: postgres
```

- `name`: The name of the asset.
- `type`: Specifies the type of the asset. Set this to `ingestr` to use the ingestr data pipeline.
- `connection`: The destination connection, which defines where the data should be stored. For example, `postgres` indicates that the ingested data will be stored in a Postgres database.
- `source_connection`: The name of the Zendesk connection defined in `.bruin.yml`.
- `source_table`: The name of the data table in Zendesk that you want to ingest. You can find the available source tables in the [Ingestr documentation](https://bruin-data.github.io/ingestr/supported-sources/zendesk.html#tables).

### Step 3: [Run](/commands/run) asset to ingest data
```
bruin run assets/zendesk_ingestion.yml
```
As a result of this command, Bruin will ingest data from the given Zendesk table into your Postgres database.
4 changes: 2 additions & 2 deletions integration-tests/expected_connections_schema.json
@@ -951,7 +951,7 @@
"oauth_token": {
"type": "string"
},
"subdomain": {
"sub_domain": {
"type": "string"
}
},
@@ -962,7 +962,7 @@
"api_token",
"email",
"oauth_token",
"subdomain"
"sub_domain"
]
}
}
2 changes: 1 addition & 1 deletion pkg/config/connections.go
@@ -404,7 +404,7 @@ type ZendeskConnection struct {
APIToken string `yaml:"api_token" json:"api_token" mapstructure:"api_token"`
Email string `yaml:"email" json:"email" mapstructure:"email"`
OAuthToken string `yaml:"oauth_token" json:"oauth_token" mapstructure:"oauth_token"`
Subdomain string `yaml:"subdomain" json:"subdomain" mapstructure:"subdomain"`
Subdomain string `yaml:"sub_domain" json:"sub_domain" mapstructure:"sub_domain"`
}

func (c ZendeskConnection) GetName() string {
4 changes: 2 additions & 2 deletions pkg/config/testdata/simple.yml
@@ -155,11 +155,11 @@ environments:
- name: conn25
api_token: "zendeskKey"
email: "zendeskemail"
subdomain: "zendeskUrl"
sub_domain: "zendeskUrl"

- name: conn25-1
oauth_token: "zendeskToken"
subdomain: "zendeskUrl"
sub_domain: "zendeskUrl"
s3:
- name: conn25
bucket_name: "my-bucket"
4 changes: 2 additions & 2 deletions pkg/config/testdata/simple_win.yml
@@ -155,11 +155,11 @@ environments:
- name: conn25
api_token: "zendeskKey"
email: "zendeskemail"
subdomain: "zendeskUrl"
sub_domain: "zendeskUrl"

- name: conn25-1
oauth_token: "zendeskToken"
subdomain: "zendeskUrl"
sub_domain: "zendeskUrl"
s3:
- name: conn25
bucket_name: "my-bucket"
1 change: 0 additions & 1 deletion pkg/connection/connection.go
@@ -222,7 +222,6 @@ func (m *Manager) GetConnection(name string) (interface{}, error) {

connZendesk, err := m.GetZendeskConnectionWithoutDefault(name)
if err == nil {
fmt.Println("err", err)
return connZendesk, nil
}
availableConnectionNames = append(availableConnectionNames, maps.Keys(m.Zendesk)...)