updated parameters for highest/lowest functions in the SPEC (#1217)
sanderson authored Apr 29, 2019
1 parent 3e19f89 commit 840ca21
Showing 1 changed file with 40 additions and 40 deletions.
80 changes: 40 additions & 40 deletions docs/SPEC.md
@@ -602,7 +602,7 @@ An expression specifies the computation of a value by applying the operators and

Operands denote the elementary values in an expression.

Primary expressions are the operands for unary and binary expressions.
A primary expression may be a literal, an identifier denoting a variable, or a parenthesized expression.

PrimaryExpression = identifier | Literal | "(" Expression ")" .
@@ -796,7 +796,7 @@ The operator precedence is encoded directly into the grammar as the following.
PostfixOperator = MemberExpression
| CallExpression
| IndexExpression .

### Packages

Flux source is organized into packages.
@@ -930,7 +930,7 @@ Parameters to function types define whether the parameter is a pipe forward para
The `<-` indicates the parameter is the pipe forward parameter.

Examples:

// alias the bool type
type boolean = bool

@@ -943,7 +943,7 @@ Examples:
// Define addition on ints
type intAdd = (a: int, b: int) -> int

// Define polymorphic addition
type add = (a: 'a, b: 'a) -> 'a

// Define a function with a pipe parameter
@@ -973,7 +973,7 @@ A function produces side effects when it is explicitly declared to have side eff

Packages are initialized in the following order:

1. All imported packages are initialized and assigned to their package identifier.
2. All option declarations are evaluated and assigned regardless of order. An option cannot have a dependency on another option assigned in the same package block.
3. All variable declarations are evaluated and assigned regardless of order. A variable cannot have a direct or indirect dependency on itself.
4. Any package side effects are evaluated.
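The initialization order above can be illustrated with a minimal sketch (the option, import, and variable names are illustrative, not taken from the source):

```
// 1. Imported packages are initialized first.
import "math"

// 2. Option declarations are assigned next, regardless of their position in the file.
option now = () => 2019-04-29T00:00:00Z

// 3. Variable declarations are evaluated after options.
tau = math.pi * 2.0

// 4. Package side effects, if any, are evaluated last.
```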
@@ -1063,7 +1063,7 @@ These are builtin functions that all take a single `time` argument and return an
* `month` int
Month returns the month of the year for the provided time in the range `[1-12]`.

[IMPL#155](https://github.com/influxdata/flux/issues/155) Implement Time and date functions

### System Time

@@ -1087,7 +1087,7 @@ An interval is a built-in named type:
start: time,
stop: time,
}

Intervals has the following parameters:

| Name | Type | Description |
@@ -1348,7 +1348,7 @@ Example:
#### Buckets

Buckets is a type of data source that retrieves a list of buckets that the caller is authorized to access.
It takes no input parameters and produces an output table with the following columns:

| Name | Type | Description |
| ---- | ---- | ----------- |
@@ -1359,9 +1359,9 @@ It takes no input parameters and produces an output table with the following col
| retentionPolicy | string | The name of the retention policy, if present. |
| retentionPeriod | duration | The duration of time for which data is held in this bucket. |

Example:

buckets() |> filter(fn: (r) => r.organization == "my-org")

#### Yield

@@ -1387,7 +1387,7 @@ Fill will scan a stream for null values and replace them with a non-null value.

The output stream will be the same as the input stream, with all null values in the column replaced.

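The replacement behavior can be sketched in Python, with a column modeled as a list and nulls as `None` (an illustration only, not the Flux implementation; the `use_previous` mode is included here as an assumption):

```python
def fill(column, value=None, use_previous=False):
    """Replace None entries in a column, either with a fixed
    value or with the most recent non-null value seen."""
    out = []
    prev = None
    for v in column:
        if v is None:
            v = prev if use_previous else value
        else:
            prev = v
        out.append(v)
    return out

print(fill([1, None, 3], value=0))            # fixed-value fill -> [1, 0, 3]
print(fill([1, None, 3], use_previous=True))  # previous-value fill -> [1, 1, 3]
```

Note that with `use_previous`, a leading null has no prior value and remains null.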
Fill has the following properties:

| Name | Type | Description |
| ---- | ---- | ----------- |
@@ -1529,7 +1529,7 @@ Cov has the following properties:
| valueDst | string | ValueDst is the column into which the result will be placed. Defaults to `_value`. |

Example:

cpu = from(bucket: "telegraf/autogen") |> range(start:-5m) |> filter(fn:(r) => r._measurement == "cpu")
mem = from(bucket: "telegraf/autogen") |> range(start:-5m) |> filter(fn:(r) => r._measurement == "mem")
cov(x: cpu, y: mem)
@@ -1642,9 +1642,9 @@ Quantile has the following properties:

The method parameter must be one of:

* `estimate_tdigest`: an aggregate result that uses a tdigest data structure to compute an accurate quantile estimate on large data sources.
* `exact_mean`: an aggregate result that takes the average of the two points closest to the quantile value.
* `exact_selector`: see Quantile (selector)
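The `exact_mean` behavior can be sketched in Python, approximating the quantile by linear interpolation between the two sorted points closest to the quantile position (an illustration only, not the Flux implementation):

```python
def quantile_exact_mean(values, q):
    """Return the q-quantile (0 <= q <= 1) by interpolating
    between the two sorted points nearest the quantile position."""
    xs = sorted(values)
    pos = q * (len(xs) - 1)      # fractional index of the quantile
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    frac = pos - lo
    # Weighted average of the two bracketing points.
    return xs[lo] * (1 - frac) + xs[hi] * frac

print(quantile_exact_mean([1.0, 2.0, 3.0, 4.0], 0.5))  # 2.5
```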

Example:
```
@@ -1799,7 +1799,7 @@ from(bucket:"telegraf/autogen")
Min is a selector operation.
Min selects the minimum record from the input table.

Example:

```
from(bucket:"telegraf/autogen")
@@ -1916,11 +1916,11 @@ There are six highest/lowest functions that compute the top or bottom N records

All of the highest/lowest functions take the following parameters:

| Name | Type | Description |
| ---- | ---- | ----------- |
| n | int | N is the number of records to select. |
- | columns | []string | Columns is the list of columns to use when aggregating. Defaults to `["_value"]`. |
+ | column | string | Column is the column to use when aggregating. Defaults to `"_value"`. |
| groupColumns | []string | GroupColumns are the columns on which to group to perform the aggregation. |

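A hypothetical call using the updated singular `column` parameter (the bucket and the `highestMax` function name are assumptions for illustration):

```
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> highestMax(n: 5, column: "_value", groupColumns: ["host"])
```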
#### Histogram

@@ -2157,15 +2157,15 @@ from(bucket:"telegraf/autogen")
r._field == "usage_system")
```

#### Rename

Rename renames specified columns in a table.
There are two variants: one which takes a map of old column names to new column names,
and one which takes a mapping function.
If a column is renamed and is part of the group key, the column name in the group key will be updated.
If a specified column is not present in a table an error will be thrown.

Rename has the following properties:

| Name | Type | Description |
| ---- | ---- | ----------- |
@@ -2190,7 +2190,7 @@ from(bucket: "telegraf/autogen")
|> rename(fn: (column) => column + "_new")
```

#### Drop

Drop excludes specified columns from a table. Columns to exclude can be specified either through a list, or a predicate function.
When a dropped column is part of the group key it will also be dropped from the key.
@@ -2221,7 +2221,7 @@ from(bucket: "telegraf/autogen")
|> drop(fn: (column) => column =~ /usage*/)
```

#### Keep

Keep is the inverse of drop. It returns a table containing only columns that are specified,
ignoring all others.
@@ -2250,19 +2250,19 @@ Keep all columns matching a predicate:
```
from(bucket: "telegraf/autogen")
|> range(start: -5m)
|> keep(fn: (column) => column =~ /inodes*/)
```

#### Duplicate

Duplicate duplicates a specified column in a table.
If the specified column is not present in a table an error will be thrown.
If the specified column is part of the group key, it will be duplicated, but it will not be part of the group key of the output table.
If the column indicated by `as` does not exist, a column will be added to the table.
If the column does exist, that column will be overwritten with the values specified by `column`.
If the `as` column is in the group key, there are two possible outcomes:
If the column indicated by `column` is in the group key, then `as` will remain in the group key and have the same group key value as `column`.
If `column` is not part of the group key, then `as` is removed from the group key.
Duplicate has the following properties:

| Name | Type | Description |
@@ -2344,8 +2344,8 @@ __Examples__
_By_

```
from(bucket: "telegraf/autogen")
  |> range(start: -30m)
|> group(columns: ["host", "_measurement"])
```

@@ -2535,7 +2535,7 @@ Window has the following properties:
| stopColumn | string | StopColumn is the name of the column containing the window stop time. Defaults to `_stop`. |
| createEmpty | bool | CreateEmpty specifies whether empty tables should be created. Defaults to `false`. |

Example:
```
from(bucket:"telegraf/autogen")
|> range(start:-12h)
@@ -2561,7 +2561,7 @@ Pivot has the following properties:
| valueColumn | string | ValueColumn identifies the single column that contains the value to be moved around the pivot. |

The group key of the resulting table will be the same as the input tables, excluding the columns found in the `columnKey` and `valueColumn`.
This is because these columns are not part of the resulting output table.

Any columns in the original table that are not referenced in the `rowKey` or the original table's group key will be dropped.

@@ -2576,15 +2576,15 @@ The output is constructed as follows:
- A new row is created for each unique value identified in the input by the `rowKey` parameter.
- For each new row, values for group key columns stay the same, while values for new columns are determined from the input tables by the value in `valueColumn` at the row identified by the `rowKey` values and the new column's label.
If no value is found, the value is set to null.

Example 1, align fields within each measurement that have the same timestamp:

```
from(bucket:"test")
|> range(start: 1970-01-01T00:00:00.000000000Z)
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
```

Input:

| _time | _value | _measurement | _field |
@@ -2756,7 +2756,7 @@ Example:
#### Union

Union concatenates two or more input streams into a single output stream. In tables that have identical
schema and group keys, contents of the tables will be concatenated in the output stream. The output schemas of
the Union operation shall be the union of all input schemas.

Union does not preserve the sort order of the rows within tables. A sort operation may be added if a specific sort order is needed.
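A sketch of union followed by an explicit sort, as suggested above (the stream names and bucket are illustrative):

```
left = from(bucket: "telegraf/autogen") |> range(start: -5m) |> filter(fn: (r) => r._measurement == "cpu")
right = from(bucket: "telegraf/autogen") |> range(start: -5m) |> filter(fn: (r) => r._measurement == "mem")

union(tables: [left, right])
    |> sort(columns: ["_time"])
```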
@@ -3136,18 +3136,18 @@ Example:
|> filter(fn:(r) => r._measurement == "net" and r._field == "bytes_sent")
|> top(n:10, columns:["_value"])

#### Contains

Tests whether a value is a member of a set.

Contains has the following parameters:

| Name | Type | Description |
| ---- | ---- | ----------- |
| value | bool, int, uint, float, string, time | The value to search for. |
| set | array of bool, int, uint, float, string, time | The set of values to search. |

Example:
`contains(value:1, set:[1,2,3])` will return `true`.

#### Type conversion operations
