GITBOOK-1884: change request with no subject merged in GitBook
xiangfu0 authored and gitbook-bot committed Jan 23, 2024
1 parent 0e41b8b commit 887bef1
Showing 45 changed files with 65 additions and 46 deletions.
Binary file added .gitbook/assets/.unused/image (11) (1) (1).png
Binary file added .gitbook/assets/.unused/image (18) (1).png
Binary file added .gitbook/assets/.unused/image (27) (1) (1).png
Binary file added .gitbook/assets/.unused/image (43) (1) (1).png
Binary file added .gitbook/assets/.unused/image (53) (1) (1).png
Binary file added .gitbook/assets/.unused/image (9) (1) (1).png
Binary file added .gitbook/assets/.unused/snapshot-msk (1) (1).png
3 changes: 3 additions & 0 deletions .gitbook/assets/Pinot-Architecture (1).svg
Binary file added .gitbook/assets/batch-deep-store (1).png
Binary file added .gitbook/assets/docker-resource-setup (1).png
Binary file added .gitbook/assets/example-dataset (1).png
Binary file added .gitbook/assets/presto-cluster-ui (1).png
Binary file added .gitbook/assets/server-deep-store (1).png
8 changes: 4 additions & 4 deletions basics/components/exploring-pinot.md
@@ -22,7 +22,7 @@ If you want to view the contents of a server, click on its instance name. You'll

To view the _baseballStats_ table, click on its name, which will show the following screen:

-![baseballStats Table](<../../.gitbook/assets/view-table-baseball-stats.png>)
+![baseballStats Table](<../../.gitbook/assets/view-table-baseball-stats (1).png>)

From this screen, we can edit or delete the table, edit or adjust its schema, as well as several other operations.

@@ -69,17 +69,17 @@ Pinot supports a subset of standard SQL. For more information, see [Pinot Query

The [Pinot Admin UI](http://localhost:9000/help) contains all the APIs that you will need to operate and manage your cluster. It provides a set of APIs for Pinot cluster management, including health checks, instance management, schema and table management, and data segment management.

-![](<../../.gitbook/assets/pinot-admin-ui.png>)
+![](../../.gitbook/assets/pinot-admin-ui.png)

Let's check out the tables in this cluster by going to [Table -> List all tables in cluster](http://localhost:9000/help#/Table/listTables), click **Try it out**, and then click **Execute**. We can see the `baseballStats` table listed here. We can also see the exact cURL call made to the controller API.

-![List all tables in cluster](<../../.gitbook/assets/list-all-tables.png>)
+![List all tables in cluster](../../.gitbook/assets/list-all-tables.png)
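
The Swagger console shows the exact cURL call it makes; a minimal command-line sketch of the same request (assuming the controller runs at the default `localhost:9000`):

```bash
# List all table names known to the cluster via the controller REST API.
curl -X GET "http://localhost:9000/tables" -H "accept: application/json"
```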

You can look at the configuration of this table by going to [Tables -> Get/Enable/Disable/Drop a table](http://localhost:9000/help#!/Table/alterTableStateOrListTableConfig), click **Try it out**, type `baseballStats` in the table name, and then click **Execute**.

Let's check out the schemas in the cluster by going to [Schema -> List all schemas in the cluster](http://localhost:9000/help#!/Schema/listSchemaNames), click **Try it out**, and then click **Execute**. We can see a schema called `baseballStats` in this list.

-![List all schemas in the cluster](<../../.gitbook/assets/list-all-schemas.png>)
+![List all schemas in the cluster](../../.gitbook/assets/list-all-schemas.png)

Take a look at the schema by going to [Schema -> Get a schema](http://localhost:9000/help#!/Schema/getSchema), click **Try it out**, type `baseballStats` in the schema name, and then click **Execute**.

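Both schema calls can also be made directly; a sketch under the same default-port assumption:

```bash
# List all schema names, then fetch the baseballStats schema.
curl -X GET "http://localhost:9000/schemas" -H "accept: application/json"
curl -X GET "http://localhost:9000/schemas/baseballStats" -H "accept: application/json"
```
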
6 changes: 3 additions & 3 deletions basics/components/table/segment/deep-store.md
@@ -20,19 +20,19 @@ There are several different ways that segments are persisted in the deep store.

For offline tables, the batch ingestion job writes the segment directly into the deep store, as shown in the diagram below:

-![Batch job writing a segment into the deep store](<../../../../.gitbook/assets/batch-deep-store.png>)
+![Batch job writing a segment into the deep store](<../../../../.gitbook/assets/batch-deep-store (1).png>)

The ingestion job then sends a notification about the new segment to the controller, which in turn notifies the appropriate server to pull down that segment.

For real-time tables, by default, a segment is first built in memory by the server. It is then uploaded to the lead controller (as part of the Segment Completion Protocol sequence), which writes the segment into the deep store, as shown in the diagram below:

-![Server sends segment to Controller, which writes segments into the deep store](<../../../../.gitbook/assets/server-controller-deep-store.png>)
+![Server sends segment to Controller, which writes segments into the deep store](<../../../../.gitbook/assets/server-controller-deep-store (1).png>)

Having all segments go through the controller can become a system bottleneck under heavy load, in which case you can use the peer download policy, as described in [Decoupling Controller from the Data Path](../../../../operators/operating-pinot/decoupling-controller-from-the-data-path.md).

When using this configuration, the server will directly write a completed segment to the deep store, as shown in the diagram below:

-![Server writing a segment into the deep store](<../../../../.gitbook/assets/server-deep-store.png>)
+![Server writing a segment into the deep store](<../../../../.gitbook/assets/server-deep-store (1).png>)

## Configuring the deep store

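As a preview of what this section configures, here is a minimal sketch of controller properties for an S3-backed deep store (the bucket, paths, and region are placeholders, and it assumes the S3 filesystem plugin is on the classpath):

```properties
# Deep store location for segments (placeholder bucket/path).
controller.data.dir=s3://my-bucket/pinot/controller-data
controller.local.temp.dir=/tmp/pinot-tmp-data

# Register the S3 filesystem implementation for the s3:// scheme.
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2

# Allow segment fetching over the s3 protocol.
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```
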
18 changes: 7 additions & 11 deletions basics/data-import/segment-compaction-on-upserts.md
@@ -30,8 +30,8 @@ To compact segments on upserts, complete the following steps:
}
```

-* `bufferTimePeriod:` To compact segments once they are complete, set to `“0d”`. To delay compaction (as the configuration above shows by 7 days (`"7d"`)), specify the number of days to delay compaction after a segment completes.&#x20;
-* `invalidRecordsThresholdPercent` (Optional) Limits the older records allowed in the completed segment as a percentage of the total number of records in the segment. In the example above, the completed segment may be selected for compaction when 30% of the records in the segment are old.&#x20;
+* `bufferTimePeriod`: To compact segments as soon as they complete, set to `"0d"`. To delay compaction, specify how long to wait after a segment completes (the configuration above uses `"7d"`, i.e., seven days).
+* `invalidRecordsThresholdPercent` (Optional) Limits the older records allowed in the completed segment as a percentage of the total number of records in the segment. In the example above, the completed segment may be selected for compaction when 30% of the records in the segment are old.
* `invalidRecordsThresholdCount` (Optional) Limits the older records allowed in the completed segment by record count. In the example above, if the segment contains more than 100K records, it may be selected for compaction.

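Putting these fields together, the task section of the table config might look like the following sketch (the values mirror the description above; the `schedule` cron expression is an illustrative assumption):

```json
{
  "task": {
    "taskTypeConfigsMap": {
      "UpsertCompactionTask": {
        "schedule": "0 */10 * ? * * *",
        "bufferTimePeriod": "7d",
        "invalidRecordsThresholdPercent": "30",
        "invalidRecordsThresholdCount": "100000"
      }
    }
  }
}
```
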
{% hint style="info" %}
@@ -40,9 +40,9 @@

## Example

-The following example includes a dataset with 24M records and 240K unique keys that have each been duplicated 100 times. After ingesting the data, there are 6 segments (5 completed segments and 1 consuming segment) with a total estimated size of 22.8MB.&#x20;
+The following example includes a dataset with 24M records and 240K unique keys that have each been duplicated 100 times. After ingesting the data, there are 6 segments (5 completed segments and 1 consuming segment) with a total estimated size of 22.8MB.

-<figure><img src="../../.gitbook/assets/example-dataset.png" alt=""><figcaption><p>Example dataset</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/example-dataset (1).png" alt=""><figcaption><p>Example dataset</p></figcaption></figure>

Submitting the query `set skipUpsert=true; select count(*) from transcript_upsert` before compaction produces 24,000,000 results:

@@ -56,24 +56,20 @@ After the compaction tasks are complete, the [Minion Task Manager UI](../compone

<figure><img src="../../.gitbook/assets/minion-task-completed.png" alt=""><figcaption><p>Minion compaction task completed</p></figcaption></figure>

-Segment compactions generates a task for each segment to compact. Five tasks were generated in this case because 90% of the records (3.6–4.5M records) are considered ready for compaction in the completed segments, exceeding the configured thresholds.&#x20;
+Segment compaction generates a task for each segment to compact. Five tasks were generated in this case because 90% of the records (3.6–4.5M records) are considered ready for compaction in the completed segments, exceeding the configured thresholds.

{% hint style="info" %}
If a completed segment only contains old records, Pinot immediately deletes the segment (rather than creating a task to compact it).
{% endhint %}

Submitting the query again shows the count matches the set of 240K unique keys.



<figure><img src="../../.gitbook/assets/results-after-segment-compaction.png" alt=""><figcaption><p>Results after segment compaction</p></figcaption></figure>

-Once segment compaction has completed, the total number of segments remain the same and the total estimated size drops to 2.77MB.&#x20;
+Once segment compaction has completed, the total number of segments remains the same and the total estimated size drops to 2.77MB.

{% hint style="info" %}
To further improve query latency, merge small segments into larger ones.
{% endhint %}



-\
+\\
28 changes: 25 additions & 3 deletions basics/getting-started/frequent-questions/general.md
@@ -1,7 +1,7 @@
---
description: >-
-This page has a collection of frequently asked questions of a general nature with answers from the
-community.
+This page has a collection of frequently asked questions of a general nature
+with answers from the community.
---

# General
@@ -20,4 +20,26 @@ Pinot uses Apache Helix for cluster management, which in turn is built on top of

## Why am I getting "Could not find or load class" error when running Quickstart using 0.8.0 release?

Please check the JDK version you are using. You may be getting this error if you are using an older version than the current Pinot binary release was built on. If so, you have two options: switch to the same JDK release as Pinot was built with or download the [source code](https://downloads.apache.org/pinot/apache-pinot-0.8.0/apache-pinot-0.8.0-src.tar.gz) for the Pinot release and [build](https://github.com/apache/pinot/pull/6424) it locally.

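A quick way to check which JDK your shell resolves (a sketch; compare the output against the JDK the Pinot release was built with):

```bash
java -version
echo $JAVA_HOME
```
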
## How do I change the time zone when running Pinot?

There are two ways to do it:

1. Setting an environment variable: `TZ=UTC`.

E.g.

```
export TZ=UTC
```

2. Setting the JVM argument `user.timezone`:

```
-Duser.timezone=UTC
```

3. (Planned) A cluster config or Pinot component config option to change the time zone is being tracked in [https://github.com/apache/pinot/issues/12299](https://github.com/apache/pinot/issues/12299).
2 changes: 1 addition & 1 deletion basics/getting-started/troubleshooting-pinot.md
@@ -8,7 +8,7 @@ Start with the [debug API](../../users/api/controller-api-reference.md) which wi

The table debug API can be invoked via the Swagger UI, as in the following image:

-![Swagger - Table Debug Api](<../../.gitbook/assets/swagger-table-debug-api (1) (1).png>)
+![Swagger - Table Debug Api](<../../.gitbook/assets/.unused/image (11) (1) (1).png>)

It can also be invoked directly by accessing the URL as follows. The API requires the `tableName`, and can optionally take `tableType` (`offline|realtime`) and a `verbosity` level.

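For instance, a direct invocation might look like the following sketch (controller address, table name, and the `verbosity` value are illustrative assumptions):

```bash
# Sketch: call the table debug endpoint for an offline table.
curl -X GET "http://localhost:9000/debug/tables/baseballStats?type=offline&verbosity=0" \
  -H "accept: application/json"
```
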
6 changes: 3 additions & 3 deletions basics/releases/0.3.0.md
@@ -12,7 +12,7 @@ The reason behind the architectural change from the previous release (0.2.0) and

For instance, the picture below shows the module dependencies of the 0.2.X or previous releases. If we wanted to support a new storage type, we would have had to change several modules. Pretty bad, huh?

-![0.2.0 and before Pinot Module Dependency Diagram](../../.gitbook/assets/old\_architecture.svg)
+![0.2.0 and before Pinot Module Dependency Diagram](../../.gitbook/assets/.unused/old\_architecture.svg)

To conquer this challenge, the following major changes were made:

@@ -27,7 +27,7 @@

Now the architecture supports a **plug-and-play** fashion, where new tools can be supported with small, simple extensions, without affecting big chunks of code. Integrations with new streaming services and data formats can be developed in a much simpler and more convenient way.

-![Dependency graph after introducing pinot-plugin in 0.3.0 ](<../../.gitbook/assets/Pinot Dependency Graph.svg>)
+![Dependency graph after introducing pinot-plugin in 0.3.0](<../../.gitbook/assets/Pinot Dependency Graph.svg>)

## **Notable New Features**

@@ -131,7 +131,7 @@ Now the architecture supports a **plug-and-play** fashion, where new tools can b
* _`/tasks/taskqueues`_: List all task queues
* `/tasks/taskqueuestate/{taskType}` -> `/tasks/{taskType}/state`
* `/tasks/tasks/{taskType}` -> `/tasks/{taskType}/tasks`
* `/tasks/taskstates/{taskType}` -> `/tasks/{taskType}/taskstates`
* `/tasks/taskstate/{taskName}` -> `/tasks/task/{taskName}/taskstate`
* `/tasks/taskconfig/{taskName}` -> `/tasks/task/{taskName}/taskconfig`
* PUT:
2 changes: 1 addition & 1 deletion developers/advanced/advanced-pinot-setup.md
@@ -14,7 +14,7 @@ Set up Pinot by starting each component individually
If running locally, ensure your Docker cluster has enough resources; below is a sample config.
{% endhint %}

-![Sample Docker resources](<../../.gitbook/assets/docker-resource-setup.png>)
+![Sample Docker resources](<../../.gitbook/assets/docker-resource-setup (1).png>)

**Pull Docker image**

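A sketch of the pull step (the `latest` tag is an example; in practice, pin a specific release tag):

```bash
docker pull apachepinot/pinot:latest
```
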
6 changes: 3 additions & 3 deletions developers/advanced/v2-multi-stage-query-engine.md
@@ -13,13 +13,13 @@ To query using distributed joins, window functions, and other multi-stage operat
* Query [using REST APIs](v2-multi-stage-query-engine.md#use-rest-apis)
* Query outside of the APIs [using the query option](v2-multi-stage-query-engine.md#use-the-query-option)

-To learn more about what the multi-stage query engine is, see [Multi-stage query engine (v2)](../../reference/multi-stage-engine.md).&#x20;
+To learn more about what the multi-stage query engine is, see [Multi-stage query engine (v2)](../../reference/multi-stage-engine.md).

## Enable the multi-stage query engine in the Query Console

* To enable the multi-stage query engine, in the Pinot Query Console, select the **Use Multi-Stage Engine** check box.

-<figure><img src="../../.gitbook/assets/pinot-query-console-multi-stage-enabled.png" alt=""><figcaption><p>Pinot Query Console with Use Multi Stage Engine enabled</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/pinot-query-console-multi-stage-enabled (2).png" alt=""><figcaption><p>Pinot Query Console with Use Multi Stage Engine enabled</p></figcaption></figure>

## Programmatically access the multi-stage query engine

@@ -56,7 +56,7 @@ curl -X POST http://localhost:8000/query/sql -d '

### Use the query option

-To enable the multi-stage engine via a query outside of the API, add the `useMultistageEngine=true` option to the top of your query.&#x20;
+To enable the multi-stage engine via a query outside of the API, add the `useMultistageEngine=true` option to the top of your query.

For example:

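A sketch of what this looks like (the `SET` statement form and the table/column names are illustrative assumptions):

```sql
-- Enable the multi-stage engine for this query only, then run a join.
SET useMultistageEngine=true;
SELECT a.playerName, COUNT(*)
FROM baseballStats AS a
JOIN dimBaseballTeams AS b ON a.teamID = b.teamID
GROUP BY a.playerName;
```
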
5 changes: 2 additions & 3 deletions developers/developers-and-contributors/code-setup.md
@@ -42,7 +42,6 @@ $mvn install package -DskipTests -Pbin-dist -DdownloadSources -DdownloadJavadocs

Import the project into your favorite IDE. Set up the stylesheet according to your IDE. We have provided instructions for IntelliJ and Eclipse. If you are using another IDE, please ensure you use a stylesheet based on [this](https://github.com/apache/pinot/blob/master/config/codestyle-intellij.xml).


#### Intellij

To import the Pinot stylesheet, launch IntelliJ and navigate to `Preferences` (on Mac) or `Settings` (on Linux).
@@ -51,7 +50,7 @@ To import the Pinot stylesheet this launch intellij and navigate to `Preferences
* Select `Import Scheme` -> `IntelliJ IDEA code style XML`
* Choose `codestyle-intellij.xml` from `pinot/config` folder of your workspace. Click Apply.

-![](../../.gitbook/assets/import\_scheme.png)
+![](../../.gitbook/assets/.unused/import\_scheme.png)

#### Eclipse

@@ -60,7 +59,7 @@ To import the Pinot stylesheet this launch eclipse and navigate to `Preferences`
* Navigate to Java->Code Style->Formatter
* Choose `codestyle-eclipse.xml` from the `pinot/config` folder of your workspace. Click Apply.

-![](../../.gitbook/assets/eclipse\_style.png)
+![](../../.gitbook/assets/.unused/eclipse\_style.png)

### **Starting Pinot via IDE**

2 changes: 1 addition & 1 deletion integrations/presto.md
@@ -68,7 +68,7 @@ Splits: 17 total, 17 done (100.00%)

Meanwhile, you can access the [Presto Cluster UI](http://localhost:8080/ui/) to see query stats.

-![Presto Cluster UI](<../.gitbook/assets/presto-cluster-ui.png>)
+![Presto Cluster UI](<../.gitbook/assets/presto-cluster-ui (1).png>)
{% endtab %}
{% endtabs %}

2 changes: 1 addition & 1 deletion operators/operating-pinot/segment-assignment.md
@@ -12,7 +12,7 @@ Segment assignment refers to the strategy of assigning each segment from a table

Balanced Segment Assignment is the default assignment strategy, where each segment is assigned to the server with the fewest segments already assigned. With this strategy, each server will have a balanced query load, and each query will be routed to all the servers. It requires minimal configuration and works well for small use cases.

-![](../../.gitbook/assets/balanced-segment-assignment.png)
+![](<../../.gitbook/assets/balanced-segment-assignment (1).png>)

## Replica-Group Segment Assignment

@@ -19,19 +19,18 @@ Follow this [AWS Quickstart Wiki](https://docs.pinot.apache.org/getting-started/
{% hint style="info" %}
Note:

-- For demo simplicity, this MSK cluster reuses same VPC created by EKS cluster in the previous step. Otherwise a [VPC Peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) is required to ensure two VPCs could talk to each other.
-- Under **Encryption** section, choose **`Both TLS encrypted and plaintext traffic allowed`**
+* For demo simplicity, this MSK cluster reuses the same VPC created by the EKS cluster in the previous step. Otherwise, a [VPC Peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) is required to ensure the two VPCs can talk to each other.
+* Under the **Encryption** section, choose **`Both TLS encrypted and plaintext traffic allowed`**
{% endhint %}

2. Click **Create**.
3. Once the cluster is created, click **`View client information`** to see the Zookeeper and Kafka Broker list.

-![MSK Cluster View](<../../.gitbook/assets/msk-cluster-view.png>)
+![MSK Cluster View](../../.gitbook/assets/msk-cluster-view.png)

Sample Client Information

-![](<../../.gitbook/assets/msk-sample-client-info.png>)
+![](../../.gitbook/assets/msk-sample-client-info.png)

## Connect to MSK

@@ -46,19 +45,19 @@ This is configured through Amazon VPC Page.
1. Record the Amazon MSK `SecurityGroup` from the Cluster page; in the above demo, it's `sg-01e7ab1320a77f1a9`.
2. Open the [Amazon VPC Page](https://us-west-2.console.aws.amazon.com/vpc/home), click **`SecurityGroups`** on the left bar, and find the EKS security group: `eksctl-${PINOT_EKS_CLUSTER}-cluster/ClusterSharedNodeSecurityGroup`.

-![Amazon EKS ClusterSharedNodeSecurityGroup](<../../.gitbook/assets/amazon\_eks\_cluster (3).png>)
+![Amazon EKS ClusterSharedNodeSecurityGroup](<../../.gitbook/assets/.unused/amazon\_eks\_cluster (1) (1) (4).png>)

{% hint style="info" %}
Ensure you are picking **ClusterSharedNodeSecurityGroup**
{% endhint %}

3. In SecurityGroups, click on the MSK security group (`sg-01e7ab1320a77f1a9`), click `Edit Rules`, and add the above `ClusterSharedNodeSecurityGroup` (`sg-0402b59d7e440f8d1`) to it.

-![Add SecurityGroup to Amazon MSK](<../../.gitbook/assets/msk-add-security-group.png>)
+![Add SecurityGroup to Amazon MSK](../../.gitbook/assets/msk-add-security-group.png)

4. Click the EKS security group `ClusterSharedNodeSecurityGroup` (`sg-0402b59d7e440f8d1`) and add an inbound rule for the MSK security group (`sg-01e7ab1320a77f1a9`).

-![Add SecurityGroup to Amazon EKS](<../../.gitbook/assets/eks-add-security-group.png>)
+![Add SecurityGroup to Amazon EKS](../../.gitbook/assets/eks-add-security-group.png)

Now, the EKS cluster should be able to talk to Amazon MSK.

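One way to verify is to run a throwaway Kafka client pod inside the EKS cluster and list topics against the MSK brokers; a sketch (the image choice and script path are assumptions, and the broker list placeholder comes from the "View client information" page):

```bash
# Launch a temporary Kafka client pod in the EKS cluster.
kubectl run kafka-client --rm -ti --image bitnami/kafka:latest -- bash
# Inside the pod (the script may live under /opt/bitnami/kafka/bin):
kafka-topics.sh --bootstrap-server <PLAINTEXT_BROKER_LIST> --list
```
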
@@ -118,7 +117,7 @@ You can download below yaml file, then replace:
* `${BROKER_LIST_STRING}` -> MSK **Plaintext** Broker String in the deployment
* `${GITHUB_PERSONAL_ACCESS_TOKEN}` -> A GitHub Personal Access Token generated from [here](https://github.com/settings/tokens); grant all read permissions to it. Here is the [source code](https://github.com/apache/pinot/commit/1baede8e760d593fcd539d61a147185816c44fc9) that generates GitHub events.

-{% file src="../../.gitbook/assets/github-events-aws-msk-demo (2).yaml" %}
+{% file src="../../.gitbook/assets/.unused/github-events-aws-msk-demo (2).yaml" %}
github-events-aws-msk-demo.yaml
{% endfile %}

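A sketch of the substitution and deployment steps (the file name follows the attachment above; the replacement values are placeholders):

```bash
# Fill in the placeholders, then deploy the manifest to the EKS cluster.
sed -i \
  -e "s|\${BROKER_LIST_STRING}|<PLAINTEXT_BROKER_LIST>|g" \
  -e "s|\${GITHUB_PERSONAL_ACCESS_TOKEN}|<YOUR_TOKEN>|g" \
  github-events-aws-msk-demo.yaml
kubectl apply -f github-events-aws-msk-demo.yaml
```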