From 840ca21ab8d09a2a33bd1e32a9808856a15219d4 Mon Sep 17 00:00:00 2001
From: Scott Anderson
Date: Mon, 29 Apr 2019 15:44:46 -0600
Subject: [PATCH] updated parameters for highest/lowest functions in the SPEC
 (#1217)

---
 docs/SPEC.md | 80 ++++++++++++++++++++++++++--------------------------
 1 file changed, 40 insertions(+), 40 deletions(-)

diff --git a/docs/SPEC.md b/docs/SPEC.md
index 903e026c47..097f7c2d02 100644
--- a/docs/SPEC.md
+++ b/docs/SPEC.md
@@ -602,7 +602,7 @@ An expression specifies the computation of a value by applying the operators and
 
 Operands denote the elementary values in an expression.
 
-Primary expressions are the operands for unary and binary expressions. 
+Primary expressions are the operands for unary and binary expressions.
 A primary expression may be a literal, an identifier denoting a variable, or a parenthesized expression.
 
     PrimaryExpression = identifier | Literal | "(" Expression ")" .
@@ -796,7 +796,7 @@ The operator precedence is encoded directly into the grammar as the following.
 
     PostfixOperator = MemberExpression | CallExpression | IndexExpression .
 
-    
+
 ### Packages
 
 Flux source is organized into packages.
@@ -930,7 +930,7 @@ Parameters to function types define whether the parameter is a pipe forward para
 The `<-` indicates the parameter is the pipe forward parameter.
 
 Examples:
 
-    
+
     // alias the bool type
     type boolean = bool
@@ -943,7 +943,7 @@ Examples:
     // Define addition on ints
     type intAdd = (a: int, b: int) -> int
 
-    // Define polymorphic addition 
+    // Define polymorphic addition
     type add = (a: 'a, b: 'a) -> 'a
 
     // Define function with pipe parameter
@@ -973,7 +973,7 @@ A function produces side effects when it is explicitly declared to have side eff
 
 Packages are initialized in the following order:
 
-1. All imported packages are initialized and assigned to their package identifier. 
+1. All imported packages are initialized and assigned to their package identifier.
 2. All option declarations are evaluated and assigned regardless of order.
    An option cannot have a dependency on another option assigned in the same package block.
 3. All variable declarations are evaluated and assigned regardless of order.
    A variable cannot have a direct or indirect dependency on itself.
 4. Any package side effects are evaluated.
@@ -1063,7 +1063,7 @@ These are builtin functions that all take a single `time` argument and return an
 * `month` int
     Month returns the month of the year for the provided time in the range `[1-12]`.
 
-[IMPL#155](https://github.com/influxdata/flux/issues/155) Implement Time and date functions 
+[IMPL#155](https://github.com/influxdata/flux/issues/155) Implement Time and date functions
 
 ### System Time
@@ -1087,7 +1087,7 @@ An interval is a built-in named type:
         start: time,
         stop: time,
     }
 
-    
+
 Intervals has the following parameters:
 
 | Name | Type | Description |
@@ -1348,7 +1348,7 @@ Example:
 #### Buckets
 
 Buckets is a type of data source that retrieves a list of buckets that the caller is authorized to access.
-It takes no input parameters and produces an output table with the following columns: 
+It takes no input parameters and produces an output table with the following columns:
 
 | Name | Type | Description |
 | ---- | ---- | ----------- |
@@ -1359,9 +1359,9 @@ It takes no input parameters and produces an output table with the following col
 | retentionPolicy | string | The name of the retention policy, if present. |
 | retentionPeriod | duration | The duration of time for which data is held in this bucket. |
 
-Example: 
+Example:
 
-    buckets() |> filter(fn: (r) => r.organization == "my-org") 
+    buckets() |> filter(fn: (r) => r.organization == "my-org")
 
 #### Yield
@@ -1387,7 +1387,7 @@ Fill will scan a stream for null values and replace them with a non-null value.
 The output stream will be the same as the input stream, with all null values in the column replaced.
 
-Fill has the following properties: 
+Fill has the following properties:
 
 | Name | Type | Description |
 | ---- | ---- | ----------- |
@@ -1529,7 +1529,7 @@ Cov has the following properties:
 | valueDst | string | ValueDst is the column into which the result will be placed. Defaults to `_value`. |
 
 Example:
-    
+
     cpu = from(bucket: "telegraf/autogen") |> range(start:-5m) |> filter(fn:(r) => r._measurement == "cpu")
     mem = from(bucket: "telegraf/autogen") |> range(start:-5m) |> filter(fn:(r) => r._measurement == "mem")
     cov(x: cpu, y: mem)
@@ -1642,9 +1642,9 @@ Quantile has the following properties:
 
 The method parameter must be one of:
 
-* `estimate_tdigest`: an aggregate result that uses a tdigest data structure to compute an accurate quantile estimate on large data sources. 
-* `exact_mean`: an aggregate result that takes the average of the two points closest to the quantile value. 
-* `exact_selector`: see Quantile (selector) 
+* `estimate_tdigest`: an aggregate result that uses a tdigest data structure to compute an accurate quantile estimate on large data sources.
+* `exact_mean`: an aggregate result that takes the average of the two points closest to the quantile value.
+* `exact_selector`: see Quantile (selector)
 
 Example:
 ```
@@ -1799,7 +1799,7 @@ from(bucket:"telegraf/autogen")
 Min is a selector operation.
 Min selects the minimum record from the input table.
 
-Example: 
+Example:
 
 ```
 from(bucket:"telegraf/autogen")
@@ -1916,11 +1916,11 @@ There are six highest/lowest functions that compute the top or bottom N records
 
 All of the highest/lowest functions take the following parameters:
 
-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| n | int | N is the number of records to select. |
-| columns | []string | Columns is the list of columns to use when aggregating. Defaults to `["_value"]`. |
-| groupColumns | []string | GroupColumns are the columns on which to group to perform the aggregation. |
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| n | int | N is the number of records to select. |
+| column | string | Column is the column to use when aggregating. Defaults to `"_value"`. |
+| groupColumns | []string | GroupColumns are the columns on which to group to perform the aggregation. |
 
 #### Histogram
@@ -2157,7 +2157,7 @@ from(bucket:"telegraf/autogen")
                     r._field == "usage_system")
 ```
 
-#### Rename 
+#### Rename
 
 Rename renames specified columns in a table.
 There are two variants: one which takes a map of old column names to new column names,
@@ -2165,7 +2165,7 @@ and one which takes a mapping function.
 If a column is renamed and is part of the group key, the column name in the group key will be updated.
 If a specified column is not present in a table an error will be thrown.
 
-Rename has the following properties: 
+Rename has the following properties:
 
 | Name | Type | Description |
 | ---- | ---- | ----------- |
@@ -2190,7 +2190,7 @@ from(bucket: "telegraf/autogen")
     |> rename(fn: (column) => column + "_new")
 ```
 
-#### Drop 
+#### Drop
 
 Drop excludes specified columns from a table.
 Columns to exclude can be specified either through a list, or a predicate function.
 When a dropped column is part of the group key it will also be dropped from the key.
@@ -2221,7 +2221,7 @@ from(bucket: "telegraf/autogen")
     |> drop(fn: (column) => column =~ /usage*/)
 ```
 
-#### Keep 
+#### Keep
 
 Keep is the inverse of drop. It returns a table containing only columns that are specified,
 ignoring all others.
@@ -2250,19 +2250,19 @@ Keep all columns matching a predicate:
 ```
 from(bucket: "telegraf/autogen")
     |> range(start: -5m)
-    |> keep(fn: (column) => column =~ /inodes*/) 
+    |> keep(fn: (column) => column =~ /inodes*/)
 ```
 
-#### Duplicate 
+#### Duplicate
 
 Duplicate duplicates a specified column in a table.
 If the specified column is not present in a table an error will be thrown.
 If the specified column is part of the group key, it will be duplicated, but it will not be part of the group key of the output table.
 
-If the column indicated by `as` does not exist, a column will be added to the table. 
+If the column indicated by `as` does not exist, a column will be added to the table.
 If the column does exist, that column will be overwritten with the values specified by `column`.
 
-If the `as` column is in the group key, there are two possible outcomes: 
+If the `as` column is in the group key, there are two possible outcomes:
 If the column indicated by `column` is in the group key, then `as` will remain in the group key and have the same group key value as `column`.
-If `column` is not part of the group key, then `as` is removed from the group key. 
+If `column` is not part of the group key, then `as` is removed from the group key.
 
 Duplicate has the following properties:
 
 | Name | Type | Description |
@@ -2344,8 +2344,8 @@ __Examples__
 
 _By_
 
 ```
-from(bucket: "telegraf/autogen") 
-    |> range(start: -30m) 
+from(bucket: "telegraf/autogen")
+    |> range(start: -30m)
     |> group(columns: ["host", "_measurement"])
 ```
@@ -2535,7 +2535,7 @@ Window has the following properties:
 | stopColumn | string | StopColumn is the name of the column containing the window stop time. Defaults to `_stop`. |
 | createEmpty | bool | CreateEmpty specifies whether empty tables should be created. Defaults to `false`.
 
-Example: 
+Example:
 ```
 from(bucket:"telegraf/autogen")
     |> range(start:-12h)
@@ -2561,7 +2561,7 @@ Pivot has the following properties:
 | valueColumn | string | ValueColumn identifies the single column that contains the value to be moved around the pivot. |
 
 The group key of the resulting table will be the same as the input tables, excluding the columns found in the `columnKey` and `valueColumn`.
-This is because these columns are not part of the resulting output table. 
+This is because these columns are not part of the resulting output table.
 Any columns in the original table that are not referenced in the `rowKey` or the original table's group key will be dropped.
@@ -2576,7 +2576,7 @@ The output is constructed as follows:
 - A new row is created for each unique value identified in the input by the `rowKey` parameter.
 - For each new row, values for group key columns stay the same, while values for new columns are determined from the input tables by the value in `valueColumn` at the row identified by the `rowKey` values and the new column's label. If no value is found, the value is set to null.
-    
+
 Example 1, align fields within each measurement that have the same timestamp:
 
 ```
@@ -2584,7 +2584,7 @@ Example 1, align fields within each measurement that have the same timestamp:
     |> range(start: 1970-01-01T00:00:00.000000000Z)
     |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
 ```
-    
+
 Input:
 
 | _time | _value | _measurement | _field |
@@ -2756,7 +2756,7 @@ Example:
 #### Union
 
 Union concatenates two or more input streams into a single output stream. In tables that have identical
-schema and group keys, contents of the tables will be concatenated in the output stream. The output schemas of 
+schema and group keys, contents of the tables will be concatenated in the output stream. The output schemas of
 the Union operation shall be the union of all input schemas.
 Union does not preserve the sort order of the rows within tables.
 A sort operation may be added if a specific sort order is needed.
@@ -3136,18 +3136,18 @@ Example:
         |> filter(fn:(r) => r._measurement == "net" and r._field == "bytes_sent")
         |> top(n:10, columns:["_value"])
 
-#### Contains 
+#### Contains
 
 Tests whether a value is a member of a set.
 
-Contains has the following parameters: 
+Contains has the following parameters:
 
 | Name | Type | Description |
 | ---- | ---- | ----------- |
 | value | bool, int, uint, float, string, time | The value to search for. |
 | set | array of bool, int, uint, float, string, time | The set of values to search. |
 
-Example: 
+Example:
 `contains(value:1, set:[1,2,3])` will return `true`.
 
 #### Type conversion operations
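The parameter change this patch makes to the highest/lowest functions (singular `column: string` replacing the old `columns: []string` list) can be sketched with one of the six functions, `highestMax`. This is an illustrative query, not part of the patch; the bucket, measurement, and tag names are assumptions:

```flux
// Top 5 hosts by maximum CPU usage over the last hour.
// `column` (a single string, default "_value") names the column to aggregate;
// `groupColumns` still takes a list and controls the grouping for the aggregation.
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
    |> highestMax(n: 5, column: "_value", groupColumns: ["host"])
```

Under the pre-patch signature the third argument would have been written `columns: ["_value"]`; the table updated above documents the new form.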