diff --git a/CHANGELOG.md b/CHANGELOG.md index d36373ea..07222a7c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,17 @@ # Changelog Changes to this project are documented in this file. More detail and links can be found in the Telemetry Streaming [Document Revision History](https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/revision-history.html). +## 1.18.0 +### Added +- AUTOTOOL-1987: Added new Namespace declare endpoint (/namespace/$namespace/declare) which supports POST and GET +- AUTOTOOL-2148 [GitHub #104](https://github.com/F5Networks/f5-telemetry-streaming/issues/104): Add support for TLS Client Authentication to Generic HTTP Consumer +### Fixed +- AUTOTOOL-1710: Fix Event Listener startup errors that might cause restnoded to crash +- AUTOTOOL-2227: Splunk multiEvent format should ignore 'References' +### Changed +- AUTOTOOL-2111: Update npm packages (applicationinsights from 1.8.7 to 1.8.9, aws-sdk from 2.775.0 to 2.830.0, google-auth-library from 6.1.1 to 6.1.4, mustache from 4.0.0 to 4.1.0) +- AUTOTOOL-2212: Add AWS-specific certificates to AWS Consumers + ## 1.17.0 ### Added - AUTOTOOL-2027 [GitHub #91](https://github.com/F5Networks/f5-telemetry-streaming/issues/91): Add custom timestamp for APM Events diff --git a/SUPPORT.md b/SUPPORT.md index 6f305e58..5e4560fd 100644 --- a/SUPPORT.md +++ b/SUPPORT.md @@ -17,9 +17,8 @@ Currently supported versions: | Software Version | Release Type | First Customer Ship | End of Support | |------------------|---------------|---------------------|-----------------| -| TS 1.15.0 | Feature | 13-Oct-2020 | 13-Jan-2021 | -| TS 1.16.0 | Feature | 20-Nov-2020 | 20-Feb-2021 | | TS 1.17.0 | Feature | 12-Jan-2021 | 12-Apr-2021 | +| TS 1.18.0 | Feature | 23-Feb-2021 | 23-May-2021 | Versions no longer supported: @@ -39,5 +38,7 @@ Versions no longer supported: | TS 1.12.0 | Feature | 02-Jun-2020 | 02-Sep-2020 | | TS 1.13.0 | Feature | 21-Jul-2020 | 21-Oct-2020 | | TS 1.14.0 | Feature |
01-Sep-2020 | 01-Dec-2020 | +| TS 1.15.0 | Feature | 13-Oct-2020 | 13-Jan-2021 | +| TS 1.16.0 | Feature | 20-Nov-2020 | 20-Feb-2021 | See the [Release notes](https://github.com/F5Networks/f5-telemetry-streaming/releases) and [Telemetry Streaming documentation](https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/revision-history.html) for new features and issues resolved for each release. diff --git a/contributing/README.md b/contributing/README.md index 53f712fc..aaf08ba2 100644 --- a/contributing/README.md +++ b/contributing/README.md @@ -108,7 +108,7 @@ How does the project handle a typical `POST` request? "trace": false, "format": "default" }, - "schemaVersion": "1.17.0" + "schemaVersion": "1.18.0" } } ``` diff --git a/contributing/process_release.md b/contributing/process_release.md index 0175dddc..07fb7487 100644 --- a/contributing/process_release.md +++ b/contributing/process_release.md @@ -52,6 +52,7 @@ * 1.15.0 - 10.9 MB * 1.16.0 - 11.3 MB * 1.17.0 - 13.1 MB (NOTE: grpc module deps increase) + * 1.18.0 - 13.3 MB * Install build to BIG-IP, navigate to folder `/var/config/rest/iapps/f5-telemetry/` and check the following: * Run `du -sh` and check that folder's size (shouldn't be much greater than previous versions): * 1.4.0 - 65 MB @@ -68,6 +69,7 @@ * 1.15.0 - 79 MB * 1.16.0 - 82 MB * 1.17.0 - 95 MB (NOTE: grpc module deps increase) + * 1.18.0 - 100 MB * Check `nodejs/node_modules` folder - if you see `eslint`, `mocha` or something else from [package.json](package.json) `devDependencies` section - something is wrong with the build process. Probably some `npm` flags are not working as expected, and it MUST BE FIXED before publishing. * Ensure that all tests (unit and functional) passed * Optional: Ensure that your local tags match remote. If not, remove all and re-fetch: diff --git a/docs/conf.py b/docs/conf.py index fc44fe64..1b02b2a3 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -78,7 +78,7 @@ # The short X.Y version.
version = u'' # The full version, including alpha/beta/rc tags. -release = u'1.17.0' +release = u'1.18.0' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/docs/declarations.rst b/docs/declarations.rst index b2bee18b..f99ff1e1 100644 --- a/docs/declarations.rst +++ b/docs/declarations.rst @@ -2,7 +2,7 @@ Example Declarations ==================== -This section contains example Telemetry Streaming declarations. Use the index on the left to go directly to a specific declaration. +This section contains example Telemetry Streaming declarations. Use the index on the right to go directly to a specific declaration. Base Declaration diff --git a/docs/event-listener.rst b/docs/event-listener.rst index 47268382..bdb23693 100644 --- a/docs/event-listener.rst +++ b/docs/event-listener.rst @@ -5,6 +5,9 @@ Event Listener class The Telemetry Streaming Event Listener collects event logs it receives on the specified port from configured BIG-IP sources, including LTM, ASM, AFM, APM, and AVR. +.. NOTE:: Each **Telemetry_Event_Listener** opens 3 ports: TCP (dual stack - IPv4 and IPv6), UDPv4, and UDPv6. |br| If two or more Event Listeners use the same port, all of them receive the same events, but you can still use filters for each listener individually. + + To use the Event Listener, you must: 1. Configure the sources of log/event data. You can do this by either POSTing a single AS3 declaration or you can use TMSH or the GUI to configure individual modules. diff --git a/docs/namespaces.rst b/docs/namespaces.rst index 55283ba9..320e8d44 100644 --- a/docs/namespaces.rst +++ b/docs/namespaces.rst @@ -12,9 +12,9 @@ The following are important notes about namespaces. - Each namespace works separately from another and cannot share configuration or object references. - While each namespace must have a unique name, the components in a namespace can share the same name as the components in another namespace.
- Namespaces are not tied in any way to RBAC. -- Currently, Telemetry Streaming only supports a full declaration sent to **/telemetry/declare**. If there are multiple namespaces, they must all be declared in the POST body. +- You must send a full declaration to **/telemetry/declare**. If there are multiple namespaces, you must declare them all in the POST body; otherwise, they are omitted. To configure a single namespace, see :ref:`namespaceEP`. - All namespaces inherit the top level **controls** object. -- For pull consumers: If a pull consumer is declared under a namespace, the URI to get the data should specify the namespace in path, for example **/mgmt/shared/telemetry/namespace/${namespaceName} pullconsumer/${pullConsumerName}** +- For pull consumers: If you declare a pull consumer under a namespace, the URI to get the data should specify the namespace in the path, for example **/mgmt/shared/telemetry/namespace/${namespaceName}/pullconsumer/${pullConsumerName}** The following examples show how you can use namespaces in your Telemetry Streaming declarations. @@ -23,6 +23,8 @@ Basic declaration with namespace only ------------------------------------- In this example, all objects are in the namespace named **My_Namespace**. Because there is only one namespace, except for the name, this is essentially the same as if there were no namespace specified. +This example uses the **/telemetry/declare** endpoint. + .. literalinclude:: ../examples/declarations/basic_namespace.json :language: json @@ -32,7 +34,7 @@ Multiple namespaces in a declaration ------------------------------------ In this example, we show how you can use multiple namespaces in a declaration. This shows how namespaces can be used to group components by function. -Note that the Consumers in each namespace are using the same name (highlighted in the example). +Note that the Consumers in each namespace are using the same name (highlighted in the example). 
This example also uses the **/telemetry/declare** endpoint. .. literalinclude:: ../examples/declarations/multiple_namespaces.json :language: json @@ -50,4 +52,99 @@ The lines that are not highlighted in the example are all part of the default na .. literalinclude:: ../examples/declarations/default_and_custom_namespace.json :language: json - :emphasize-lines: 24-37 \ No newline at end of file + :emphasize-lines: 24-37 + +| + +.. _namespaceEP: + +Namespace-specific endpoints +---------------------------- +Telemetry Streaming 1.18 introduced new endpoints specific to individual namespaces. Using these endpoints allows you to configure a specific namespace without needing to know about other namespaces. + +The following table describes the endpoint and request types you can use. + ++------------------------------------------------------------+--------------+---------------------------------------------------------------------------------------------------+ +| URI | Request Type | Description | ++============================================================+==============+===================================================================================================+ +| /mgmt/shared/telemetry/namespace/${namespace_name}/declare | GET | - Returns the single Telemetry Namespace object (configuration data), referenced by name | ++ +--------------+---------------------------------------------------------------------------------------------------+ +| | POST | - Configures a single Telemetry Namespace class - accepts just a single Telemetry_Namespace class | ++ | | - Assumes defaults/existing configuration for Controls and Telemetry classes | +| | | | ++------------------------------------------------------------+--------------+---------------------------------------------------------------------------------------------------+ + +| + +For example, we use the new endpoint and POST the following declaration to
``https://{{host}}/mgmt/shared/telemetry/namespace/NamespaceForEvents/declare`` + +.. code-block:: json + + { + "class": "Telemetry_Namespace", + "My_Listener": { + "class": "Telemetry_Listener", + "port": 6514, + "trace": true + }, + "Elastic": { + "class": "Telemetry_Consumer", + "type": "ElasticSearch", + "host": "192.168.10.10", + "protocol": "http", + "port": "9200", + "apiVersion": "6.5", + "index": "eventdata", + "enable": true, + "trace": true + } + } + +| + +And we receive the following response: + +.. code-block:: json + + { + "message": "success", + "declaration": { + "class": "Telemetry_Namespace", + "My_Listener": { + "class": "Telemetry_Listener", + "port": 6514, + "trace": true, + "enable": true, + "match": "", + "actions": [ + { + "setTag": { + "tenant": "`T`", + "application": "`A`" + }, + "enable": true + } + ] + }, + "Elastic": { + "class": "Telemetry_Consumer", + "type": "ElasticSearch", + "host": "192.168.10.10", + "protocol": "http", + "port": 9200, + "apiVersion": "6.5", + "index": "eventdata", + "enable": true, + "trace": true, + "allowSelfSignedCert": false, + "dataType": "f5.telemetry" + } + } + } + +| + +You receive the same output as the response above when you send a GET request to ``https://{{host}}/mgmt/shared/telemetry/namespace/NamespaceForEvents/declare``. + + + diff --git a/docs/revision-history.rst b/docs/revision-history.rst index f9fb2bf1..a99a086f 100644 --- a/docs/revision-history.rst +++ b/docs/revision-history.rst @@ -11,6 +11,10 @@ Document Revision History - Description - Date + * - 1.18.0 + - Updated the documentation for Telemetry Streaming v1.18.0. This release contains the following changes: |br| * Added new endpoints for individual namespaces (see :ref:`namespaceEP`) |br| |br| Issues Resolved: |br| * Fixed Event Listener startup errors that might cause restnoded to crash |br| * Splunk multiEvent format now ignores 'References' + - 2-23-21 + * - 1.17.0 - Updated the documentation for Telemetry Streaming v1.17.0.
This release contains the following changes: |br| * Added support for configuring proxy settings on Generic HTTP consumers, `GitHub #92 `_ (see :ref:`proxy`) |br| * Added support for configuring proxy settings on Splunk consumers, `GitHub #85 `_ (see :ref:`splunkproxy`) |br| * Added a timestamp for APM Request Log output, `GitHub #91 `_ (see :ref:`APM Request Log`) |br| * Added support for TLS client authentication to the Kafka consumer, `GitHub #90 `_ (see :ref:`kafka-ref`) |br| * Added an F5 Internal Only push consumer for F5 Cloud (see :ref:`F5 Cloud`) |br| * Added the ability to use the Splunk multi-metric format, currently EXPERIMENTAL (see :ref:`multi-metric`) |br| * Added a new reference for the Telemetry Streaming Default Output (see :ref:`Default Output Appendix`) |br| * Tracefile now stores up to 10 items |br| * Added a note to the System Information output page stating there is new pool and virtual server information collected (see :ref:`System Information`) |br| * Deprecated TS support for the :ref:`Splunk Legacy Format` |br| * Posting a declaration while a previous declaration is still processing now returns an HTTP 503 status code |br| |br| Issues Resolved: |br| * Fixed error where unavailable Custom Endpoint would return HTTP 500 - 1-12-20 diff --git a/examples/declarations/generic_http_tls_client_auth.json b/examples/declarations/generic_http_tls_client_auth.json new file mode 100644 index 00000000..feb3328d --- /dev/null +++ b/examples/declarations/generic_http_tls_client_auth.json @@ -0,0 +1,27 @@ +{ + "class": "Telemetry", + "My_Consumer": { + "class": "Telemetry_Consumer", + "type": "Generic_HTTP", + "host": "192.0.2.1", + "protocol": "https", + "port": 443, + "path": "/", + "method": "POST", + "headers": [ + { + "name": "content-type", + "value": "application/json" + } + ], + "privateKey": { + "cipherText": "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----" + }, + "clientCertificate": { + "cipherText": "-----BEGIN 
CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----" + }, + "rootCertificate": { + "cipherText": "-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----" + } + } +} \ No newline at end of file diff --git a/package-lock.json b/package-lock.json index 95b00da0..0e27ca95 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,6 +1,6 @@ { "name": "f5-telemetry", - "version": "1.17.0-4", + "version": "1.18.0-0", "lockfileVersion": 1, "requires": true, "dependencies": { @@ -383,14 +383,14 @@ } }, "applicationinsights": { - "version": "1.8.8", - "resolved": "https://registry.npmjs.org/applicationinsights/-/applicationinsights-1.8.8.tgz", - "integrity": "sha512-B43D4t/taGP5quGviVSdFWqarhIlzyGSi5mfngjbXpR2Ed3VrikJGIr1i5UtGzvwWqEbfIF6i298GvjFaB8RFA==", + "version": "1.8.9", + "resolved": "https://registry.npmjs.org/applicationinsights/-/applicationinsights-1.8.9.tgz", + "integrity": "sha512-APk+dhOVdM1nF/CvsOYX+QJym3w7X2rqeDmKxXMa6tMZhPXSlBxtNvrJ5L0f8STXIqGLlug5gBUHvWfLMPSb7w==", "requires": { "cls-hooked": "^4.2.2", "continuation-local-storage": "^3.2.1", "diagnostic-channel": "0.3.1", - "diagnostic-channel-publishers": "0.4.2" + "diagnostic-channel-publishers": "0.4.3" } }, "aproba": { @@ -520,9 +520,9 @@ "integrity": "sha1-x57Zf380y48robyXkLzDZkdLS3k=" }, "aws-sdk": { - "version": "2.794.0", - "resolved": "https://registry.npmjs.org/aws-sdk/-/aws-sdk-2.794.0.tgz", - "integrity": "sha512-Qqz8v0WfeGveaZTPo9+52nNUep/CTuo18OcdCwF4WrnNBv7bAxExUOwN9XkqhoxLjBDk/LuMmHGhOXRljFQgRw==", + "version": "2.830.0", + "resolved": "https://registry.npmjs.org/aws-sdk/-/aws-sdk-2.830.0.tgz", + "integrity": "sha512-vFatoWkdJmRzpymWbqsuwVsAJdhdAvU2JcM9jKRENTNKJw90ljnLyeP1eKCp4O3/4Lg43PVBwY/KUqPy4wL+OA==", "requires": { "buffer": "4.9.2", "events": "1.1.1", @@ -1163,9 +1163,9 @@ } }, "diagnostic-channel-publishers": { - "version": "0.4.2", - "resolved": "https://registry.npmjs.org/diagnostic-channel-publishers/-/diagnostic-channel-publishers-0.4.2.tgz", - "integrity": 
"sha512-gbt5BVjwTV1wnng0Xi766DVrRxSeGECAX8Qrig7tKCDfXW2SbK7bKY6A3tgGjk5BB50aXgVXIsbtQiYIkt57Mg==" + "version": "0.4.3", + "resolved": "https://registry.npmjs.org/diagnostic-channel-publishers/-/diagnostic-channel-publishers-0.4.3.tgz", + "integrity": "sha512-E3Fyg41SJd2GbLC63fkAaqsQRLVMKptpnZ0HoDsRYmqOVd92HLIt/c/EZqYnANM9+YPU3H1lx+GmjAMZWs65Nw==" }, "diff": { "version": "3.5.0", @@ -1800,9 +1800,9 @@ } }, "gaxios": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-4.0.1.tgz", - "integrity": "sha512-jOin8xRZ/UytQeBpSXFqIzqU7Fi5TqgPNLlUsSB8kjJ76+FiGBfImF8KJu++c6J4jOldfJUtt0YmkRj2ZpSHTQ==", + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-4.1.0.tgz", + "integrity": "sha512-vb0to8xzGnA2qcgywAjtshOKKVDf2eQhJoiL6fHhgW5tVN7wNk7egnYIO9zotfn3lQ3De1VPdf7V5/BWfCtCmg==", "requires": { "abort-controller": "^3.0.0", "extend": "^3.0.2", @@ -1894,9 +1894,9 @@ "dev": true }, "google-auth-library": { - "version": "6.1.3", - "resolved": "https://registry.npmjs.org/google-auth-library/-/google-auth-library-6.1.3.tgz", - "integrity": "sha512-m9mwvY3GWbr7ZYEbl61isWmk+fvTmOt0YNUfPOUY2VH8K5pZlAIWJjxEi0PqR3OjMretyiQLI6GURMrPSwHQ2g==", + "version": "6.1.4", + "resolved": "https://registry.npmjs.org/google-auth-library/-/google-auth-library-6.1.4.tgz", + "integrity": "sha512-q0kYtGWnDd9XquwiQGAZeI2Jnglk7NDi0cChE4tWp6Kpo/kbqnt9scJb0HP+/xqt03Beqw/xQah1OPrci+pOxw==", "requires": { "arrify": "^2.0.0", "base64-js": "^1.3.0", @@ -2065,9 +2065,9 @@ } }, "gtoken": { - "version": "5.1.0", - "resolved": "https://registry.npmjs.org/gtoken/-/gtoken-5.1.0.tgz", - "integrity": "sha512-4d8N6Lk8TEAHl9vVoRVMh9BNOKWVgl2DdNtr3428O75r3QFrF/a5MMu851VmK0AA8+iSvbwRv69k5XnMLURGhg==", + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/gtoken/-/gtoken-5.2.0.tgz", + "integrity": "sha512-qbf6JWEYFMj3WMAluvYXl8GAiji6w8d9OmAGCbBg0xF4xD/yu6ZaO6BhoXNddRjKcOUpZD81iea1H5B45gAo1g==", "requires": { "gaxios": "^4.0.0", "google-p12-pem": "^3.0.3", @@ 
-2901,9 +2901,9 @@ } }, "mime": { - "version": "2.4.6", - "resolved": "https://registry.npmjs.org/mime/-/mime-2.4.6.tgz", - "integrity": "sha512-RZKhC3EmpBchfTGBVb8fb+RL2cWyw/32lshnsETttkBAyAUXSGHxbEJWWRXc751DrIxG1q04b8QwMbAwkRPpUA==" + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-2.5.0.tgz", + "integrity": "sha512-ft3WayFSFUVBuJj7BMLKAQcSlItKtfjsKDDsii3rqFDAZ7t11zRe8ASw/GlmivGwVUYtwkQrxiGGpL6gFvB0ag==" }, "mime-db": { "version": "1.44.0", @@ -3127,9 +3127,9 @@ "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==" }, "mustache": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/mustache/-/mustache-4.0.1.tgz", - "integrity": "sha512-yL5VE97+OXn4+Er3THSmTdCFCtx5hHWzrolvH+JObZnUYwuaG7XV+Ch4fR2cIrcYI0tFHxS7iyFYl14bW8y2sA==" + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/mustache/-/mustache-4.1.0.tgz", + "integrity": "sha512-0FsgP/WVq4mKyjolIyX+Z9Bd+3WS8GOwoUTyKXT5cTYMGeauNTi2HPCwERqseC1IHAy0Z7MDZnJBfjabd4O8GQ==" }, "mute-stream": { "version": "0.0.7", diff --git a/package.json b/package.json index 83e08360..01732300 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "f5-telemetry", - "version": "1.17.0-4", + "version": "1.18.0-0", "author": "F5 Networks", "license": "Apache-2.0", "repository": { @@ -37,18 +37,18 @@ "@f5devcentral/f5-teem": "^1.4.6", "@grpc/proto-loader": "~0.3.0", "ajv": "^6.12.6", - "applicationinsights": "^1.8.7", - "aws-sdk": "^2.775.0", + "applicationinsights": "^1.8.9", + "aws-sdk": "^2.830.0", "commander": "^2.20.3", "deep-diff": "^1.0.2", "eventemitter2": "^6.4.3", - "google-auth-library": "^6.1.1", + "google-auth-library": "^6.1.4", "grpc-js-0.2-modified": "git://github.com/f5devcentral/grpc-js-0.2-modified.git#1.0", "jsonwebtoken": "^8.5.1", "kafka-node": "^2.6.1", "lodash": "^4.17.20", "long": "^4.0.0", - "mustache": "^4.0.0", + "mustache": "^4.1.0", "prom-client": "11.0.0", "request": "^2.88.2", 
"statsd-client": "^0.4.5", diff --git a/src/lib/config.js b/src/lib/config.js index 9e695542..b508874b 100644 --- a/src/lib/config.js +++ b/src/lib/config.js @@ -10,6 +10,7 @@ const EventEmitter2 = require('eventemitter2'); const nodeUtil = require('util'); +const errors = require('./errors'); const deviceUtil = require('./utils/device'); const logger = require('./logger'); @@ -17,6 +18,7 @@ const persistentStorage = require('./persistentStorage').persistentStorage; const util = require('./utils/misc'); const configUtil = require('./utils/config'); const TeemReporter = require('./teemReporter').TeemReporter; +const CONFIG_CLASSES = require('./constants').CONFIG_CLASSES; const PERSISTENT_STORAGE_KEY = 'config'; const BASE_CONFIG = { @@ -32,7 +34,7 @@ const BASE_CONFIG = { * @event change - config was validated and can be propagated */ function ConfigWorker() { - this.validator = configUtil.getValidator(); + this.validators = configUtil.getValidators(); this.teemReporter = new TeemReporter(); } @@ -184,19 +186,24 @@ ConfigWorker.prototype.getConfig = function () { * Validate JSON data against config schema * * @public - * @param {Object} data - data to validate against config schema - * @param {Object} [context] - context to pass to validator + * @param {Object} data - data to validate against config schema + * @param {Object} options - optional validation settings + * @param {String} options.schemaType - type of schema to validate against. 
Defaults to full (whole schema) + * @param {Object} options.context - additional context to pass through to validator * * @returns {Object} Promise which is resolved with the validated schema */ -ConfigWorker.prototype.validate = function (data, context) { - if (this.validator) { - return configUtil.validate(this.validator, data, context) - .catch((err) => { - err.code = 'ValidationError'; - return Promise.reject(err); - }); +ConfigWorker.prototype.validate = function (data, options) { + options = util.assignDefaults(options, { schemaType: 'full' }); + + if (!util.isObjectEmpty(this.validators)) { + const validatorFunc = this.validators[options.schemaType]; + if (typeof validatorFunc !== 'undefined') { + return configUtil.validate(validatorFunc, data, options.context) + .catch(err => Promise.reject(new errors.ValidationError(err))); + } } + return Promise.reject(new Error('Validator is not available')); }; @@ -210,7 +217,7 @@ ConfigWorker.prototype.validate = function (data, context) { * @returns {Object} Promise which is resolved with the expanded config */ ConfigWorker.prototype.expandConfig = function (rawConfig) { - return this.validate(rawConfig, { expand: true }); // set flag for additional decl processing + return this.validate(rawConfig, { context: { expand: true } }); // set flag for additional decl processing }; /** @@ -220,17 +227,19 @@ ConfigWorker.prototype.expandConfig = function (rawConfig) { * * @public * @param {Object} data - namespace-only data to process - * @param {String} options.namespace - namespace to which config belongs to + * @param {String} namespace - namespace to which config belongs to * - * @returns {Object} Promise resolved with copy of validated config resolved on success + * @returns {Object} Promise resolved with copy of validated namespace config resolved on success */ ConfigWorker.prototype.processNamespaceDeclaration = function (data, namespace) { - return this.getConfig() + return this.validate(data, { schemaType: 
CONFIG_CLASSES.NAMESPACE_CLASS_NAME }) + .then(() => this.getConfig()) .then((savedConfig) => { const mergedDecl = util.isObjectEmpty(savedConfig.raw) ? { class: 'Telemetry' } : util.deepCopy(savedConfig.raw); mergedDecl[namespace] = data; - return this.processDeclaration(mergedDecl, { savedConfig, namespaceToUpdate: namespace }); + return this.processDeclaration(mergedDecl, { savedConfig, namespaceToUpdate: namespace }) + .then(fullConfig => Promise.resolve(fullConfig[namespace] || {})); }); }; @@ -298,17 +307,31 @@ ConfigWorker.prototype.processDeclaration = function (data, options) { }; /** - * Get raw (origin) config + * Get raw (original) config * * @public - * @param {Object} restOperation + * @param {String} namespace - namespace name * - * @returns {Promise} resolved with raw (origin) config + * @returns {Promise} resolved with raw (original) config + * - namespace config if the namespace param is provided, + * otherwise the full declaration */ -ConfigWorker.prototype.getRawConfig = function () { - return this.getConfig().then(config => Promise.resolve((config && config.raw) || {})); +ConfigWorker.prototype.getRawConfig = function (namespace) { + return this.getConfig() + .then(config => Promise.resolve((config && config.raw) || {})) + .then((fullConfig) => { + if (namespace) { + const namespaceConfig = fullConfig[namespace]; + if (util.isObjectEmpty(namespaceConfig)) { + return Promise.reject(new errors.ObjectNotFoundInConfigError(`Namespace with name '${namespace}' doesn't exist`)); + } + return namespaceConfig; + } + return fullConfig; + }); }; + // initialize singleton let configWorker; try { diff --git a/src/lib/consumers/AWS_CloudWatch/index.js b/src/lib/consumers/AWS_CloudWatch/index.js index 7b540ecf..22ffcb76 100644 --- a/src/lib/consumers/AWS_CloudWatch/index.js +++ b/src/lib/consumers/AWS_CloudWatch/index.js @@ -8,7 +8,7 @@ 'use strict'; -const awsUtil = require('./awsUtil'); +const awsUtil = require('./../shared/awsUtil'); const EVENT_TYPES =
require('../../constants').EVENT_TYPES; /** diff --git a/src/lib/consumers/AWS_S3/index.js b/src/lib/consumers/AWS_S3/index.js index aa2fef0f..b3b5f1ba 100644 --- a/src/lib/consumers/AWS_S3/index.js +++ b/src/lib/consumers/AWS_S3/index.js @@ -9,6 +9,7 @@ 'use strict'; const AWS = require('aws-sdk'); +const awsUtil = require('./../shared/awsUtil'); const util = require('../../utils/misc'); /** * See {@link ../README.md#context} for documentation @@ -26,25 +27,9 @@ module.exports = function (context) { return `${year}/${month}/${day}/${dateString}.log`; }; - const setupPromise = new Promise((resolve, reject) => { - try { - const awsConfig = { region: context.config.region }; - if (context.config.username && context.config.passphrase) { - awsConfig.credentials = new AWS.Credentials({ - accessKeyId: context.config.username, - secretAccessKey: context.config.passphrase - }); - } - AWS.config.update(awsConfig); - s3 = new AWS.S3({ apiVersion: '2006-03-01' }); - resolve(); - } catch (err) { - reject(err); - } - }); - - return setupPromise + return awsUtil.initializeConfig(context) .then(() => { + s3 = new AWS.S3({ apiVersion: '2006-03-01' }); const params = { // fallback to host if no bucket Bucket: context.config.bucket || context.config.host, diff --git a/src/lib/consumers/Generic_HTTP/index.js b/src/lib/consumers/Generic_HTTP/index.js index e50cdb6c..b37119ec 100644 --- a/src/lib/consumers/Generic_HTTP/index.js +++ b/src/lib/consumers/Generic_HTTP/index.js @@ -23,26 +23,35 @@ module.exports = function (context) { const host = context.config.host; const fallbackHosts = context.config.fallbackHosts || []; const headers = httpUtil.processHeaders(context.config.headers); // no defaults - provide all headers needed + const key = context.config.privateKey || undefined; + const cert = context.config.clientCertificate || undefined; + const ca = context.config.rootCertificate || undefined; let allowSelfSignedCert = context.config.allowSelfSignedCert; if 
(!util.isObjectEmpty(context.config.proxy) && typeof context.config.proxy.allowSelfSignedCert !== 'undefined') { allowSelfSignedCert = context.config.proxy.allowSelfSignedCert; } + // If authenticating with certificates, do not allow self signed certs + if (!util.isObjectEmpty(cert) || !util.isObjectEmpty(ca)) { + allowSelfSignedCert = false; + } + const proxy = context.config.proxy; if (context.tracer) { + const redactString = '*****'; let tracedHeaders = headers; // redact Basic Auth passphrase, if provided if (tracedHeaders.Authorization) { - tracedHeaders = JSON.parse(JSON.stringify(tracedHeaders)); - tracedHeaders.Authorization = '*****'; + tracedHeaders = util.deepCopy(tracedHeaders); + tracedHeaders.Authorization = redactString; } let tracedProxy; if (!util.isObjectEmpty(proxy)) { tracedProxy = util.deepCopy(proxy); - tracedProxy.passphrase = '*****'; + tracedProxy.passphrase = redactString; } context.tracer.write(JSON.stringify({ @@ -55,7 +64,10 @@ module.exports = function (context) { port, protocol, proxy: tracedProxy, - uri + uri, + privateKey: util.isObjectEmpty(key) ? undefined : redactString, + clientCertificate: util.isObjectEmpty(cert) ? undefined : redactString, + rootCertificate: util.isObjectEmpty(ca) ? 
undefined : redactString }, null, 4)); } return httpUtil.sendToConsumer({ @@ -69,7 +81,10 @@ port, protocol, proxy, - uri + uri, + key, + cert, + ca }).catch((err) => { context.logger.exception(`Unexpected error: ${err}`, err); }); diff --git a/src/lib/consumers/Splunk/multiMetricEventConverter.js b/src/lib/consumers/Splunk/multiMetricEventConverter.js index e8638ada..44c9ba43 100644 --- a/src/lib/consumers/Splunk/multiMetricEventConverter.js +++ b/src/lib/consumers/Splunk/multiMetricEventConverter.js @@ -143,7 +143,7 @@ const processObject = function (data, options, cb) { cb(event); const subCollectionsOptions = options.subCollections || {}; - const ignoreSubCollectionCb = options.ignoreSubCollectionsCb; + const ignoreSubCollectionCb = options.ignoreSubCollectionCb; Object.keys(data).forEach((key) => { let value = data[key]; @@ -235,10 +235,13 @@ const DEFAULT_CAST_CB = (key, value) => { return parseFloat(value); }; +// default options, applied when the caller does not override them const DEFAULT_OPTS = { handler: processCollectionOfObjects, keyName: 'name', // key to use to store object's name castCb: DEFAULT_CAST_CB, + // by default, ignore 'Reference' links to sub-collections + ignoreSubCollectionCb: (key, value) => key.endsWith('Reference') && value.link, skipCb: key => DEFAULT_PROPS_TO_SKIP.indexOf(key) !== -1 }; @@ -426,7 +429,7 @@ module.exports = function (data, cb) { * @property {CastCb} [castCb] - callback to parse value to metric * @property {DeleteCb} [deleteCb] - callback to check if key has to be deleted * @property {Boolean} [enabled] - enable processing (should be set to 'false' explicitly to disable) - * @property {SkipCb} [ignoreSubCollectionsCb] - callback to call if collection of data should be ignored + * @property {SkipCb} [ignoreSubCollectionCb] - callback to call if collection of data should be ignored * @property {String} [keyName] - key to use to store object's name * @property {HandlerCb} [handler] - handler to call to process data *
@property {String} [objectName] - object's name diff --git a/src/lib/consumers/shared/awsRootCerts.js b/src/lib/consumers/shared/awsRootCerts.js new file mode 100644 index 00000000..0984f5f4 --- /dev/null +++ b/src/lib/consumers/shared/awsRootCerts.js @@ -0,0 +1,119 @@ +'use strict'; + +module.exports = [ + // Certificates from https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/ + /* Amazon Root CA 1 */ + `-----BEGIN CERTIFICATE----- +MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsFADA5MQsw +CQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24gUm9vdCBDQSAx +MB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTELMAkGA1UEBhMCVVMxDzANBgNV +BAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEB +BQADggEPADCCAQoCggEBALJ4gHHKeNXjca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOg +Q3pOsqTQNroBvo3bSMgHFzZM9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9 +tBb6dNqcmzU5L/qwIFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAw +hmahRWa6VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L +93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQmjgSubJrI +qg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYE +FIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUAA4IBAQCY8jdaQZChGsV2USgg +NiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDIU5PMCCjjmCXPI6T53iHTfIUJrU6adTrC +C2qJeHZERxhlbI1Bjjt/msv0tadQ1wUsN+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V +8viTO96LXFvKWlJbYK8U90vvo/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJc +JmApzyMZFo6IQ6XU5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeX +eGADbkpyrqXRfboQnoZsG4q5WTP468SQvvG5 +-----END CERTIFICATE-----`, + /* Amazon Root CA 2 */ + `-----BEGIN CERTIFICATE----- +MIIFQTCCAymgAwIBAgITBmyf0pY1hp8KD+WGePhbJruKNzANBgkqhkiG9w0BAQwFADA5MQsw +CQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24gUm9vdCBDQSAy +MB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkGA1UEBhMCVVMxDzANBgNV 
+BAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3QgQ0EgMjCCAiIwDQYJKoZIhvcNAQEB +BQADggIPADCCAgoCggIBAK2Wny2cSkxKgXlRmeyKy2tgURO8TW0G/LAIjd0ZEGrHJgw12MBv +IITplLGbhQPDW9tK6Mj4kHbZW0/jTOgGNk3Mmqw9DJArktQGGWCsN0R5hYGCrVo34A3MnaZM +UnbqQ523BNFQ9lXg1dKmSYXpN+nKfq5clU1Imj+uIFptiJXZNLhSGkOQsL9sBbm2eLfq0OQ6 +PBJTYv9K8nu+NQWpEjTj82R0Yiw9AElaKP4yRLuH3WUnAnE72kr3H9rN9yFVkE8P7K6C4Z9r +2UXTu/Bfh+08LDmG2j/e7HJV63mjrdvdfLC6HM783k81ds8P+HgfajZRRidhW+mez/CiVX18 +JYpvL7TFz4QuK/0NURBs+18bvBt+xa47mAExkv8LV/SasrlX6avvDXbR8O70zoan4G7ptGmh +32n2M8ZpLpcTnqWHsFcQgTfJU7O7f/aS0ZzQGPSSbtqDT6ZjmUyl+17vIWR6IF9sZIUVyzfp +YgwLKhbcAS4y2j5L9Z469hdAlO+ekQiG+r5jqFoz7Mt0Q5X5bGlSNscpb/xVA1wf+5+9R+vn +SUeVC06JIglJ4PVhHvG/LopyboBZ/1c6+XUyo05f7O0oYtlNc/LMgRdg7c3r3NunysV+Ar3y +VAhU/bQtCSwXVEqY0VThUWcI0u1ufm8/0i2BWSlmy5A5lREedCf+3euvAgMBAAGjQjBAMA8G +A1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSwDPBMMPQFWAJI/TPl +Uq9LhONmUjANBgkqhkiG9w0BAQwFAAOCAgEAqqiAjw54o+Ci1M3m9Zh6O+oAA7CXDpO8Wqj2 +LIxyh6mx/H9z/WNxeKWHWc8w4Q0QshNabYL1auaAn6AFC2jkR2vHat+2/XcycuUY+gn0oJMs +XdKMdYV2ZZAMA3m3MSNjrXiDCYZohMr/+c8mmpJ5581LxedhpxfL86kSk5Nrp+gvU5LEYFiw +zAJRGFuFjWJZY7attN6a+yb3ACfAXVU3dJnJUH/jWS5E4ywl7uxMMne0nxrpS10gxdr9HIcW +xkPo1LsmmkVwXqkLN1PiRnsn/eBG8om3zEK2yygmbtmlyTrIQRNg91CMFa6ybRoVGld45pIq +2WWQgj9sAq+uEjonljYE1x2igGOpm/HlurR8FLBOybEfdF849lHqm/osohHUqS0nGkWxr7JO +cQ3AWEbWaQbLU8uz/mtBzUF+fUwPfHJ5elnNXkoOrJupmHN5fLT0zLm4BwyydFy4x2+IoZCn +9Kr5v2c69BoVYh63n749sSmvZ6ES8lgQGVMDMBu4Gon2nL2XA46jCfMdiyHxtN/kHNGfZQIG +6lzWE7OE76KlXIx3KadowGuuQNKotOrN8I1LOJwZmhsoVLiJkO/KdYE+HvJkJMcYr07/R54H +9jVlpNMKVv/1F2Rs76giJUmTtt8AF9pYfl3uxRuw0dFfIRDH+fO6AgonB8Xx1sfT4PsJYGw= +-----END CERTIFICATE-----`, + /* Amazon Root CA 3 */ + `-----BEGIN CERTIFICATE----- +MIIBtjCCAVugAwIBAgITBmyf1XSXNmY/Owua2eiedgPySjAKBggqhkjOPQQDAjA5MQswCQYD +VQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24gUm9vdCBDQSAzMB4X +DTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkGA1UEBhMCVVMxDzANBgNVBAoT 
+BkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3QgQ0EgMzBZMBMGByqGSM49AgEGCCqGSM49 +AwEHA0IABCmXp8ZBf8ANm+gBG1bG8lKlui2yEujSLtf6ycXYqm0fc4E7O5hrOXwzpcVOho6A +F2hiRVd9RFgdszflZwjrZt6jQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGG +MB0GA1UdDgQWBBSrttvXBp43rDCGB5Fwx5zEGbF4wDAKBggqhkjOPQQDAgNJADBGAiEA4IWS +oxe3jfkrBqWTrBqYaGFy+uGh0PsceGCmQ5nFuMQCIQCcAu/xlJyzlvnrxir4tiz+OpAUFteM +YyRIHN8wfdVoOw== +-----END CERTIFICATE-----`, + /* Amazon Root CA 4 */ + `-----BEGIN CERTIFICATE----- +MIIB8jCCAXigAwIBAgITBmyf18G7EEwpQ+Vxe3ssyBrBDjAKBggqhkjOPQQDAzA5MQswCQYD +VQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24gUm9vdCBDQSA0MB4X +DTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkGA1UEBhMCVVMxDzANBgNVBAoT +BkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3QgQ0EgNDB2MBAGByqGSM49AgEGBSuBBAAi +A2IABNKrijdPo1MN/sGKe0uoe0ZLY7Bi9i0b2whxIdIA6GO9mif78DluXeo9pcmBqqNbIJhF +XRbb/egQbeOc4OO9X4Ri83BkM6DLJC9wuoihKqB1+IGuYgbEgds5bimwHvouXKNCMEAwDwYD +VR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFNPsxzplbszh2naaVvuc +84ZtV+WBMAoGCCqGSM49BAMDA2gAMGUCMDqLIfG9fhGt0O9Yli/W651+kI0rz2ZVwyzjKKlw +CkcO8DdZEv8tmZQoTipPNU0zWgIxAOp1AE47xDqUEpHJWEadIRNyp4iciuRMStuW1KyLa2tJ +ElMzrdfkviT8tQp21KW8EA== +-----END CERTIFICATE-----`, + /* Starfield Services Root Certificate Authority - G2 */ + `-----BEGIN CERTIFICATE----- +MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMxEDAOBgNV +BAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoTHFN0YXJmaWVsZCBU +ZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVsZCBTZXJ2aWNlcyBSb290IENl +cnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1 +OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNk +YWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJT +dGFyZmllbGQgU2VydmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw +DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p +OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm28xpWriu2 
+dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1KTs9DkTvnVtYAcMtS +7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufehRhJfGZOozptqbXuNC66DQO4 +M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk6mFBrMnUVN+HL8cisibMn1lUaJ/8viov +xFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AQYwHQYDVR0OBBYEFJxfAN+qAdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBL +NqaEd2ndOxmfZyMIbw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynV +v/heyNXBve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z +qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkdiEDPfUYd +/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn0q23KXB56jzaYyWf +/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCNsSi6 +-----END CERTIFICATE-----`, + /* Baltimore CyberTrust Root */ + `-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJRTESMBAG +A1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYDVQQDExlCYWx0aW1v +cmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoXDTI1MDUxMjIzNTkwMFowWjEL +MAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9yZTETMBEGA1UECxMKQ3liZXJUcnVzdDEi +MCAGA1UEAxMZQmFsdGltb3JlIEN5YmVyVHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQAD +ggEPADCCAQoCggEBAKMEuyKrmD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2ygu +zmKiYv60iNoS6zjrIZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo +6vWrJYeKmpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu +XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZydc93Uk3z +yZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/yejl0qhqdNkNwnGjkC +AwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1BE3wMBIGA1UdEwEB/wQIMAYB +Af8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27 +TyclhAO992T9Ldcw46QQF+vaKSm2eT929hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukM +JY2GQE/szKN+OMY3EU/t3WgxjkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhS +NzkE1akxehi/oCr0Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67 +G7fyUIhzksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS 
+R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp +-----END CERTIFICATE-----` +]; diff --git a/src/lib/consumers/AWS_CloudWatch/awsUtil.js b/src/lib/consumers/shared/awsUtil.js similarity index 92% rename from src/lib/consumers/AWS_CloudWatch/awsUtil.js rename to src/lib/consumers/shared/awsUtil.js index d13b75c8..87b369ba 100644 --- a/src/lib/consumers/AWS_CloudWatch/awsUtil.js +++ b/src/lib/consumers/shared/awsUtil.js @@ -9,7 +9,9 @@ 'use strict'; const AWS = require('aws-sdk'); +const https = require('https'); const util = require('../../utils/misc'); +const rootCerts = require('./awsRootCerts'); const METRICS_BATCH_SIZE = 20; /** @@ -17,10 +19,12 @@ const METRICS_BATCH_SIZE = 20; * * @param {Object} context Consumer context containing config and data * See {@link ../../README.md#context} + * @param {Object} [options] Consumer options + * @param {Object} [options.httpAgent] Custom HTTP(s) agent to pass to AWS config * * @returns {Promise} resolved upon completion */ -function initializeConfig(context) { +function initializeConfig(context, options) { const awsConfig = { region: context.config.region }; if (context.config.username && context.config.passphrase) { awsConfig.credentials = new AWS.Credentials({ @@ -29,12 +33,39 @@ function initializeConfig(context) { }); } + let agent; + // Accept consumer specific HTTPs agents + if (options && options.httpAgent) { + agent = options.httpAgent; + } else { + // Use defaults in the aws-sdk, but with a subset of CA Certs + agent = new https.Agent({ + rejectUnauthorized: true, + keepAlive: false, + maxSockets: 50, + ca: getAWSRootCerts() + }); + } + + awsConfig.httpOptions = { + agent + }; + return Promise.resolve() .then(() => { AWS.config.update(awsConfig); }); } +/** + * Gets Amazon Root Certificates + * + * @returns {Array} Array of certificate strings + */ +function getAWSRootCerts() { + return rootCerts; +} + /** * Sends data to CloudWatch Logs * @@ -293,5 +324,6 @@ module.exports = { sendLogs, getDefaultDimensions, getMetrics, 
- sendMetrics + sendMetrics, + getAWSRootCerts }; diff --git a/src/lib/declarationValidator.js b/src/lib/declarationValidator.js index 3500a124..0adf5a80 100644 --- a/src/lib/declarationValidator.js +++ b/src/lib/declarationValidator.js @@ -24,6 +24,7 @@ const pullConsumerSchema = require('../schema/latest/pull_consumer_schema.json') const iHealthPollerSchema = require('../schema/latest/ihealth_poller_schema.json'); const endpointsSchema = require('../schema/latest/endpoints_schema.json'); const namespaceSchema = require('../schema/latest/namespace_schema.json'); +const CLASSES = require('./constants').CONFIG_CLASSES; /** * Process errors @@ -53,15 +54,14 @@ function processErrors(errors) { return errorsResp; } - module.exports = { /** * Pre-compile schema * * @public - * @returns {Object} AJV validator function + * @returns {Object} AJV validator functions */ - getValidator() { + getValidators() { const schemas = { base: baseSchema, consumer: consumerSchema, @@ -94,7 +94,13 @@ module.exports = { Object.keys(customKeywords.keywords).forEach((k) => { ajv.addKeyword(k, customKeywords.keywords[k]); }); - return ajv.compile(schemas.base); + const validators = { + full: ajv.compile(schemas.base) + }; + // retrieve previously compiled schema + validators[CLASSES.NAMESPACE_CLASS_NAME] = ajv.getSchema(`${namespaceSchema.$id}#/definitions/namespace`); + + return validators; }, /** diff --git a/src/lib/endpointLoader.js b/src/lib/endpointLoader.js index ffce6887..a384e42d 100644 --- a/src/lib/endpointLoader.js +++ b/src/lib/endpointLoader.js @@ -8,10 +8,11 @@ 'use strict'; -const deviceUtil = require('./utils/device'); const constants = require('./constants'); -const util = require('./utils/misc'); +const deviceUtil = require('./utils/device'); const logger = require('./logger'); +const retryPromise = require('./utils/promise').retry; +const util = require('./utils/misc'); /** @module EndpointLoader */ @@ -299,7 +300,7 @@ EndpointLoader.prototype.getData = function (uri, 
options) { backoff: 100 }; const fullUri = options.endpointFields ? `${uri}?$select=${options.endpointFields.join(',')}` : uri; - return util.retryPromise(() => deviceUtil.makeDeviceRequest(this.host, fullUri, httpOptions), retryOpts) + return retryPromise(() => deviceUtil.makeDeviceRequest(this.host, fullUri, httpOptions), retryOpts) .then((data) => { const ret = { name: options.name !== undefined ? options.name : uri, diff --git a/src/lib/errors.js b/src/lib/errors.js index 4424c2fb..44f85215 100644 --- a/src/lib/errors.js +++ b/src/lib/errors.js @@ -27,8 +27,14 @@ class ConfigLookupError extends BaseError {} */ class ObjectNotFoundInConfigError extends ConfigLookupError {} +/** + * Validation error + */ +class ValidationError extends BaseError {} + module.exports = { BaseError, ConfigLookupError, - ObjectNotFoundInConfigError + ObjectNotFoundInConfigError, + ValidationError }; diff --git a/src/lib/eventListener.js b/src/lib/eventListener.js deleted file mode 100644 index e9ad92ed..00000000 --- a/src/lib/eventListener.js +++ /dev/null @@ -1,491 +0,0 @@ -/* - * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -const net = require('net'); -const dgram = require('dgram'); - -const logger = require('./logger'); -const constants = require('./constants'); -const normalize = require('./normalize'); -const dataPipeline = require('./dataPipeline'); -const configWorker = require('./config'); -const configUtil = require('./utils/config'); -const properties = require('./properties.json'); - -const tracers = require('./utils/tracer').Tracer; -const stringify = require('./utils/misc').stringify; -const isObjectEmpty = require('./utils/misc').isObjectEmpty; - -const global = properties.global; -const events = properties.events; -const definitions = properties.definitions; - -const CLASS_NAME = constants.CONFIG_CLASSES.EVENT_LISTENER_CLASS_NAME; - -const MAX_BUFFER_SIZE = 16 * 1024; // 16k chars -const MAX_BUFFER_TIMEOUTS = 5; -const MAX_BUFFER_TIMEOUT = 1 * 1000; // 1 sec - -const LISTENERS = {}; -const protocols = ['tcp', 'udp']; - -/** @module EventListener */ - -// LTM request log (example) -// eslint-disable-next-line max-len -// [telemetry] Client: ::ffff:10.0.2.4 sent data: EVENT_SOURCE="request_logging",BIGIP_HOSTNAME="hostname.test.com",CLIENT_IP="x.x.x.x",SERVER_IP="",HTTP_METHOD="GET",HTTP_URI="/",VIRTUAL_NAME="/Common/app.app/app_vs" - -/** - * Event Listener class - * - * @param {String} name - listener's name - * @param {String} port - port to listen on - * @param {Object} opts - additional configuration options - * @param {Object} [opts.tags] - tags to add to the event data - * @param {String} [opts.protocol] - protocol to listen on: tcp or udp - * @param {module:util~Tracer} [opts.tracer] - tracer - * @param {Array} [opts.actions] - list of actions to apply to the event data - * @param {Function} [opts.filterFunc] - function to filter events - * - * @returns {Object} Returns EventListener object - */ -function EventListener(name, port, opts) { - this.name = name; - this.port = port; - this.protocol = opts.protocol || 'tcp'; - this.logger = 
logger.getChild(`${this.name}:${this.port}:${this.protocol}`); - this.updateConfig(opts); - - this._server = null; - this._clientConnMap = {}; - this._lastConnKey = 0; - this._connDataBuffers = {}; -} - -/** - * Update listener's configuration - tracer, tags, actions and etc. - * - * @param {Array} [opts.actions] - list of actions to apply to the event data - * @param {Object} [opts.tags] - tags to add to the event data - * @param {Function} [opts.filterFunc] - function to filter events - * @param {module:util~Tracer} [opts.tracer] - tracer - * - * @returns {void} - */ -EventListener.prototype.updateConfig = function (config) { - this.tracer = config.tracer; - this.tags = config.tags; - this.filterFunc = config.filterFunc; - this.actions = config.actions; - this.id = config.id; - this.destinationIds = config.destinationIds; -}; - -/** - * Server options to start listening - * - * @returns {Object} listening options - */ -EventListener.prototype.getServerOptions = function () { - if (this.protocol === 'tcp') { - return { - port: this.port - }; - } - return {}; -}; - -/** - * Returns current listeners - * - * @returns {Object} - */ -EventListener.prototype.getListeners = function () { - return LISTENERS; -}; - -/** - * Start Event listener - */ -EventListener.prototype.start = function () { - this.logger.debug('Starting event listener'); - try { - this._start(); - } catch (err) { - this.logger.exception('Unable to start', err); - } -}; - -/** - * Start listening - */ -EventListener.prototype._listen = function () { - if (this.protocol === 'tcp') { - this._server.listen(this.getServerOptions()); - } else if (this.protocol === 'udp') { - this._server.bind(this.port); - } -}; - -/** - * Start event listener - internals - * - * @private - */ -EventListener.prototype._start = function () { - // TODO: investigate constraining listener when running on local BIG-IP, however - // for now cannot until a valid address is found - loopback address not allowed for LTM objects - if 
(protocols.indexOf(this.protocol) === -1) throw new Error(`Protocol unexpected: ${this.protocol}`); - - if (this.protocol === 'tcp') { - this._server = net.createServer((conn) => { - const connKey = this._lastConnKey; - this._lastConnKey += 1; - this._clientConnMap[connKey] = conn; - - // event on client data - conn.on('data', (data) => { - this.processRawData(String(data), { - address: conn.remoteAddress, - port: conn.remotePort - }); - }); - // event on client connection error - conn.on('error', () => { - conn.destroy(); - }); - // event on client connection close - conn.on('close', () => { - delete this._clientConnMap[connKey]; - }); - // the other end of the socket sends a FIN packet - conn.on('end', () => { - // allowHalfOpen is false by default - // so, don't need to call 'end' explicitly - }); - }); - } else if (this.protocol === 'udp') { - this._server = dgram.createSocket({ type: 'udp6', ipv6Only: false }); - - // eslint-disable-next-line no-unused-vars - this._server.on('message', (data, remoteInfo) => { - this.processRawData(String(data), remoteInfo); - }); - } - - // catch any errors - this._server.on('error', (err) => { - this.logger.error(`Unexpected error: ${err}`); - this.restart(); - }); - - // message on listening event - this._server.on('listening', () => { - this.logger.debug('Event listener started'); - }); - - // message on close event - this._server.on('close', (err) => { - if (err) { - this.logger.exception('Unexpected error on attempt to stop', err); - } else { - this.logger.debug('Event listener stopped'); - } - }); - - // start listening on port/protocol - this._listen(); -}; - -/** - * Process raw data - * - * @param {String} data - raw data - * @param {Object} connInfo - remote info - * @param {String} connInfo.address - remote address - * @param {Integer} connInfo.port - remote port - */ -EventListener.prototype.processRawData = function (data, connInfo) { - const key = `${connInfo.address}-${connInfo.port}`; - let bufferInfo = 
this._connDataBuffers[key]; - let incompleteData; - - if (bufferInfo) { - data = bufferInfo.data + data; - // cleanup timeout to avoid dups - if (bufferInfo.timeoutID) { - clearTimeout(bufferInfo.timeoutID); - } - } - // TS assumes message to have trailing '\n'. - if (!data.endsWith('\n')) { - const idx = data.lastIndexOf('\n'); - incompleteData = data; - - /** - * String.slice / String.substring keeps reference to original string, - * it means GC is unable to remove original string until all references - * to it will be removed. - * So, let's use some strategy like if valid data takes less then 70% - * of string then keep it as incomplete and wait for more data. - * In any case max lifetime is about 5-7 sec. - */ - if (idx === -1 || idx / data.length < 0.7) { - data = null; - } else { - // string deep copy to release origin string after processing - incompleteData = data.slice(idx + 1).split('').join(''); - data = data.slice(0, idx + 1); - } - } - // in case if all data is like incomplete message - if (!data && ((!isObjectEmpty(bufferInfo) && bufferInfo.timeoutNo >= MAX_BUFFER_TIMEOUTS) - || (incompleteData && incompleteData.length >= MAX_BUFFER_SIZE))) { - // if limits exceeded - flush all data - data = incompleteData; - incompleteData = null; - } - - if (data) { - if (bufferInfo) { - // reset counter due we have valid data to process now - bufferInfo.timeoutNo = 0; - } - this.processData(data); - } - // if we have incomplete data to buffer - if (incompleteData) { - if (!bufferInfo) { - bufferInfo = { timeoutNo: 0 }; - this._connDataBuffers[key] = bufferInfo; - } - bufferInfo.data = incompleteData; - bufferInfo.timeoutNo += 1; - bufferInfo.timeoutID = setTimeout(() => { - delete this._connDataBuffers[key]; - this.processData(bufferInfo.data); - }, MAX_BUFFER_TIMEOUT); - } else { - delete this._connDataBuffers[key]; - } -}; - -/** - * Restart listener - */ -EventListener.prototype.restart = function () { - if (this._server) { - // probably need to increase restart 
timeout - this.logger.debug('Restarting in 5 seconds'); - setTimeout(() => { - this._server.close(); - this._listen(); - }, 5000); - } -}; - -/** - * Close all opened client connections - * - * @private - */ -EventListener.prototype._closeAllConnections = function () { - Object.keys(this._clientConnMap).forEach(connKey => this._clientConnMap[connKey].destroy()); -}; - -/** - * Stop Event listener - */ -EventListener.prototype.stop = function () { - this.logger.debug('Stopping event listener'); - if (this.protocol === 'tcp') { - this._closeAllConnections(); - } - this._server.close(); - this._server = null; -}; - -/** - * Process data - * - * @param {String} data - data - * - * @returns {Promise} resolved once data processed - */ -EventListener.prototype.processData = function (data) { - try { - return this._processData(data); - } catch (err) { - this.logger.exception('EventListener:processData unexpected error', err); - } - return Promise.resolve(); -}; - -/** - * Process data - * - * @private - * @param {String} data - data - * - * @returns {Promise} resolved once data processed - */ -EventListener.prototype._processData = function (data) { - // normalize and send to data pipeline - // note: addKeysByTag uses regex for default tags parsing (tenant/app) - const options = { - renameKeysByPattern: global.renameKeys, - addKeysByTag: { - tags: this.tags, - definitions, - opts: { - classifyByKeys: events.classifyByKeys - } - }, - formatTimestamps: global.formatTimestamps.keys, - classifyEventByKeys: events.classifyCategoryByKeys, - addTimestampForCategories: events.addTimestampForCategories - }; - const promises = []; - - // note: data may contain multiple events separated by newline - // however newline chars may also show up inside a given event - // so split only on newline with preceding double quote. - // Expected behavior is that every channel (TCP connection) - // has only particular type of events and not mix of different types. 
- // Note: if OneConnect profile will be used for pool then there might be an issue - // with different event types in single channel but not sure. - normalize.splitEvents(data).forEach((line) => { - line = line.trim(); - if (line.length === 0) { - return; - } - // lets normalize the data - const normalizedData = normalize.event(line, options); - - // keep filtering as part of event listener for now - if (!this.filterFunc || this.filterFunc(normalizedData)) { - const dataCtx = { - data: normalizedData, - type: normalizedData.telemetryEventCategory || constants.EVENT_TYPES.EVENT_LISTENER, - sourceId: this.id, - destinationIds: this.destinationIds - }; - const p = dataPipeline.process(dataCtx, { tracer: this.tracer, actions: this.actions }) - .catch(err => this.logger.exception('EventListener:_processData unexpected error from dataPipeline:process', err)); - promises.push(p); - } - }); - - return Promise.all(promises) - .catch(err => this.logger.exception('EventListener:_processData unexpected error:', err)); -}; - -/** - * Create function to filter events by pattern defined in config - * - * @param {Object} config - listener's config - * @param {String} config.match - pattern to filter data - * - * @returns {Function(Object)} function to filter data, returns boolean value if data matches - */ -function buildFilterFunc(config) { - if (!config.match || !events.classifyByKeys) { - return null; - } - const pattern = new RegExp(config.match, 'i'); - const props = events.classifyByKeys; - logger.debug(`Building events filter function with following params: pattern=${pattern} properties=${stringify(props)}`); - - return function (data) { - for (let i = 0; i < props.length; i += 1) { - const val = data[props[i]]; - if (val && pattern.test(val)) { - return true; - } - } - return false; - }; -} - -function removeListener(listener, name) { - protocols.forEach((protocol) => { - const protocolListener = listener[protocol]; - if (protocolListener) { - protocolListener.stop(); - } 
- }); - delete LISTENERS[name]; -} - -// config worker change event -configWorker.on('change', config => new Promise((resolve) => { - logger.debug('configWorker change event in eventListener'); // helpful debug - // timestamp to find out-dated tracers - const tracersTimestamp = new Date().getTime(); - - const eventListeners = configUtil.getTelemetryListeners(config); - // remove listeners not defined in config - Object.keys(LISTENERS).forEach((key) => { - const existingListener = LISTENERS[key]; - const configMatch = eventListeners.find(n => n.traceName === key); - if (!configMatch) { - removeListener(existingListener, key); - } - }); - - eventListeners.forEach((listenerFromConfig) => { - if (!listenerFromConfig.skipUpdate) { - // use name (prefixed if namespace is present) - const name = listenerFromConfig.traceName; - const existingListener = LISTENERS[name]; - // no listener's config or it was disabled - remove it - if (listenerFromConfig.enable === false && existingListener) { - removeListener(existingListener, name); - return; - } - - protocols.forEach((protocol) => { - let listenerInstance = existingListener ? 
existingListener[protocol] : undefined; - const opts = { - protocol, - tags: listenerFromConfig.tag, - actions: listenerFromConfig.actions, - tracer: tracers.createFromConfig(CLASS_NAME, name, listenerFromConfig), - filterFunc: buildFilterFunc(listenerFromConfig), - id: listenerFromConfig.id, - destinationIds: config.mappings[listenerFromConfig.id] - }; - - // when port is the same - no sense to restart listener and drop connections - if (listenerInstance && listenerInstance.port === listenerFromConfig.port) { - logger.debug(`Updating event listener '${name}' protocol '${protocol}'`); - listenerInstance.updateConfig(opts); - } else { - // stop existing listener to free the port - if (listenerInstance) { - listenerInstance.stop(); - } - listenerInstance = new EventListener(name, listenerFromConfig.port, opts); - listenerInstance.start(); - LISTENERS[name] = LISTENERS[name] || {}; - LISTENERS[name][protocol] = listenerInstance; - } - }); - } - }); - - logger.debug(`${Object.keys(LISTENERS).length} event listener(s) listening`); - tracers.remove(tracer => tracer.name.startsWith(CLASS_NAME) - && tracer.lastGetTouch < tracersTimestamp); - - resolve(); -})); - -module.exports = EventListener; diff --git a/src/lib/eventListener/baseDataReceiver.js b/src/lib/eventListener/baseDataReceiver.js new file mode 100644 index 00000000..a2b858db --- /dev/null +++ b/src/lib/eventListener/baseDataReceiver.js @@ -0,0 +1,457 @@ +/* * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +const EventEmitter2 = require('eventemitter2'); + +const errors = require('../errors'); +const mainLogger = require('../logger'); + +/** @module BaseDataReceiver */ + +class BaseDataReceiverError extends errors.BaseError {} +class StateTransitionError extends BaseDataReceiverError {} + +/** + * Catch error (if promise throws it) and set next state + * + * @async + * @param {Promise} promise - promise + * @param {DataReceiverState | String} successState - state to try to set when promise is fulfilled + * @param {DataReceiverState | String} failState - state to try to set when promise is rejected + * @param {Object} [options] - options for ._setState + * + * @returns {Promise} resolved with original return value + */ +function callAndSetState(promise, successState, failState, options) { + let uncaughtErr; + let originRet; + + return promise.then((ret) => { + originRet = ret; + }) + .catch((err) => { + uncaughtErr = err; + }) + .then(() => this._setState(uncaughtErr ? failState : successState, options)) + .then(() => (uncaughtErr ? 
Promise.reject(uncaughtErr) : Promise.resolve(originRet))); +} + +/** + * Log error + * + * @param {Error} error - error to log + * + * @returns {Error} error + */ +function logSafeEmitException(error) { + if (this.logger) { + this.logger.exception(`${this.constructor.name}.safeEmit(Async) uncaught error`, error); + } + return error; +} + +/** + * Subclass of EventEmitter2 with safe 'emit' + */ +class SafeEventEmitter extends EventEmitter2 { + /** + * Emit event + * + * @returns {Boolean | Error} true if the event had listeners, false otherwise or Error if caught one + */ + safeEmit() { + try { + return this.emit.apply(this, arguments); + } catch (emitErr) { + return logSafeEmitException.call(this, emitErr); + } + } + + /** + * Emit async event + * + * @async + * @returns {Promise<Array | Error>} promise resolved with array of responses or + * Error if caught one (no rejection) + */ + safeEmitAsync() { + try { + return this.emitAsync.apply(this, arguments) + .catch(logSafeEmitException.bind(this)); + } catch (emitErr) { + return Promise.resolve(logSafeEmitException.call(this, emitErr)); + } + } +} + +/** + * Base class for Data Receivers (based on EventEmitter2) + * + * Note: + * - state should be changed only at the beginning/end of an 'atomic' operation like start/stop/destroy, etc.
+ - all event listeners will be removed when the instance is DESTROYED + * + * @property {Logger} logger - logger instance + * + * @fires BaseDataReceiver#stateChanged + */ +class BaseDataReceiver extends SafeEventEmitter { + /** + * Constructor + * + * @param {Logger} [logger] - logger instance + */ + constructor(logger) { + super(); + this.logger = logger || mainLogger.getChild(this.constructor.name); + this._state = this.constructor.STATE.NEW; + this.on('stateChanged', () => { + if (this.hasState(this.constructor.STATE.DESTROYED)) { + this.removeAllListeners(); + } + }); + } + + /** + * Set state + * + * @private + * @async + * @param {DataReceiverState} nextState - next state + * @param {Boolean} [force = false] - force state change + * + * @returns {Promise} resolved when state changed + */ + __setState(nextState, force) { + force = typeof force === 'undefined' ? false : force; + this.logger.debug(`changing state from '${this.getCurrentStateName()}' to '${nextState.name}' [force = ${force}]`); + const prevState = this._state; + this._state = nextState; + return this.safeEmitAsync('stateChanged', { current: this.getCurrentStateName(), previous: prevState.name }); + } + + /** + * Set new state + * + * Note: when the state has no 'waitForTransition', it is advisable to check + * for the desired state before starting any operation. + * + * @async + * @param {DataReceiverState | String} desiredState - new state + * @param {Object} [options = {}] - options + * @param {Boolean} [options.wait = true] - wait until current transition finished + * @param {Boolean} [options.force = false] - force state change + * + * @returns {Promise} resolved when state changed + * + * @fires BaseDataReceiver#stateChanged + */ + _setState(desiredState, options) { + options = options || {}; + desiredState = typeof desiredState === 'string' ?
this.constructor.STATE[desiredState] : desiredState; + if (!options.force) { + if (this._state.waitForTransition && !this.nextStateAllowed(desiredState)) { + const wait = typeof options.wait === 'undefined' || options.wait; + if (wait === false) { + this.logger.debug(`ignoring state change from '${this.getCurrentStateName()}' to '${desiredState.name}' [wait = ${wait}]`); + return Promise.reject(this.getStateTransitionError(desiredState)); + } + return this.waitFor('stateChanged').then(() => this._setState(desiredState)); + } + if (!this.nextStateAllowed(desiredState)) { + // time to check if transition to next state is allowed + this.logger.debug(`ignoring state change from '${this.getCurrentStateName()}' to '${desiredState.name}'`); + return Promise.reject(this.getStateTransitionError(desiredState)); + } + } + return this.__setState(desiredState, options.force); + } + + /** + * Current state's name + * + * @public + * @returns {String} state name + */ + getCurrentStateName() { + return this._state.name; + } + + /** + * Destroy receiver + * + * Note: + * - can't call 'restart', 'start' and 'stop' methods any more. Need to create new instance + * - all attached listeners will be removed once instance destroyed + * + * @public + * @async + * @returns {Promise} resolved once receiver destroyed + */ + destroy() { + const stateOpts = { wait: false, force: true }; + return this._setState(this.constructor.STATE.DESTROYING, stateOpts) + .then(() => callAndSetState.call( + this, + this.stopHandler(), + this.constructor.STATE.DESTROYED, + this.constructor.STATE.DESTROYED, + stateOpts + )); + } + + /** + * Get error with message about state transition + * + * @param {DataReceiverState | String} desiredState - desired state + * + * @returns {StateTransitionError} error + */ + getStateTransitionError(desiredState) { + return new StateTransitionError(`Cannot change state from '${this.getCurrentStateName()}' to '${typeof desiredState === 'string' ? 
desiredState : desiredState.name}'`); + } + + /** + * Check if current state matches desired state + * + * @public + * @param {DataReceiverState | String} desiredState - desired state + * + * @returns {Boolean} true if matched + */ + hasState(desiredState) { + return this.getCurrentStateName() === (typeof desiredState === 'string' ? desiredState : desiredState.name); + } + + /** + * Check if receiver was destroyed + * + * @public + * @returns {Boolean} true if receiver was destroyed + */ + isDestroyed() { + return this.hasState(this.constructor.STATE.DESTROYED); + } + + /** + * Check if receiver can be restarted + * + * @public + * @returns {Boolean} true if restart is allowed + */ + isRestartAllowed() { + return this._state.next.indexOf(this.constructor.STATE.RESTARTING.name) !== -1; + } + + /** + * Check if receiver is running + * + * @public + * @returns {Boolean} true if receiver is running + */ + isRunning() { + return this.hasState(this.constructor.STATE.RUNNING); + } + + /** + * Check if transition to next state allowed + * + * @public + * @param {DataReceiverState | String} nextState - next state + * + * @returns {Boolean} true when allowed + */ + nextStateAllowed(nextState) { + return this._state.next.indexOf(typeof nextState === 'string' ? nextState : nextState.name) !== -1; + } + + /** + * Restart receiver + * + * @public + * @async + * @param {Object} [options = {}] - options + * @param {Number} [options.attempts] - number of attempts to try + * @param {Number} [options.delay] - delay before each attempt (in ms.) + * + * @returns {Promise} resolved once receiver restarted + */ + restart(options) { + options = options || {}; + const attempts = typeof options.attempts !== 'number' ?
true : options.attempts; + const delay = options.delay; + + return this._setState(this.constructor.STATE.RESTARTING) + .then(() => new Promise((resolve, reject) => { + const inner = () => this.stop() + .catch(stopError => this.logger.exception('caught error on attempt to stop during restart', stopError)) + .then(() => this.start()) + .catch(restartErr => this._setState(this.constructor.STATE.FAILED_TO_RESTART) + .then(() => { + if ((attempts === true || options.attempts > 1) && this.isRestartAllowed()) { + this.logger.exception('re-trying to restart due to error', restartErr); + if (attempts !== true) { + options.attempts -= 1; + } + return this.restart(options); + } + this.logger.debug('restart not allowed'); + return Promise.reject(restartErr); + })) + .then(resolve) + .catch(reject); + + if (delay) { + this.logger.debug(`restarting in ${delay} ms.`); + setTimeout(inner, delay); + } else { + inner(); + } + })); + } + + /** + * Start receiver + * + * @public + * @async + * @param {Boolean} [wait = true] - wait till previous operation finished + * + * @returns {Promise} resolved once receiver started + */ + start(wait) { + const stateOpts = { wait: typeof wait === 'undefined' || wait }; + return this._setState(this.constructor.STATE.STARTING, stateOpts) + .then(() => callAndSetState.call( + this, + this.startHandler(), + this.constructor.STATE.RUNNING, + this.constructor.STATE.FAILED_TO_START, + stateOpts + )); + } + + /** + * Stop receiver + * + * Note: 'restart' and 'start' methods can still be called + * + * @public + * @async + * @param {Boolean} [wait = true] - wait till previous operation finished + * + * @returns {Promise} resolved once receiver stopped + */ + stop(wait) { + const stateOpts = { wait: typeof wait === 'undefined' || wait }; + return this._setState(this.constructor.STATE.STOPPING, stateOpts) + .then(() => callAndSetState.call( + this, + this.stopHandler(), + this.constructor.STATE.STOPPED, + this.constructor.STATE.FAILED_TO_STOP, + stateOpts + )); + }
+ + /** + * Start receiver + * + * @async + * @returns {Promise} resolved once receiver started + */ + startHandler() { + throw new Error('Not implemented'); + } + + /** + * Stop receiver + * + * @async + * @returns {Promise} resolved once receiver stopped + */ + stopHandler() { + throw new Error('Not implemented'); + } +} + +/** + * @property {Object.} STATE - states + */ +BaseDataReceiver.STATE = { + NEW: { + name: 'NEW', + next: ['DESTROYING', 'RESTARTING', 'STARTING', 'STOPPING'] + }, + DESTROYED: { + name: 'DESTROYED', + next: [] + }, + DESTROYING: { + name: 'DESTROYING', + next: ['DESTROYED'], + waitForTransition: true + }, + FAILED_TO_RESTART: { + name: 'FAILED_TO_RESTART', + next: ['DESTROYING', 'RESTARTING', 'STARTING', 'STOPPING'] + }, + FAILED_TO_START: { + name: 'FAILED_TO_START', + next: ['DESTROYING', 'FAILED_TO_RESTART', 'RESTARTING', 'STARTING', 'STOPPING'] + }, + FAILED_TO_STOP: { + name: 'FAILED_TO_STOP', + next: ['DESTROYING', 'FAILED_TO_RESTART', 'RESTARTING', 'STARTING', 'STOPPING'] + }, + RESTARTING: { + name: 'RESTARTING', + next: ['DESTROYING', 'STARTING', 'STOPPING'] + }, + RUNNING: { + name: 'RUNNING', + next: ['DESTROYING', 'RESTARTING', 'STOPPING'] + }, + STARTING: { + name: 'STARTING', + next: ['FAILED_TO_START', 'RUNNING'], + waitForTransition: true + }, + STOPPED: { + name: 'STOPPED', + next: ['DESTROYING', 'RESTARTING', 'STARTING'] + }, + STOPPING: { + name: 'STOPPING', + next: ['FAILED_TO_STOP', 'STOPPED'], + waitForTransition: true + } +}; + +module.exports = { + BaseDataReceiver, + BaseDataReceiverError, + SafeEventEmitter, + StateTransitionError +}; + +/** + * @typedef DataReceiverState + * @type {Object} + * @property {String} name - state name + * @property {Array} next - allowed state transitions + * @property {Boolean} waitForTransition - doesn't allow mid-state transitions until it finishes current transition + */ +/** + * State changed event + * + * @event BaseDataReceiver#stateChanged + * @type {Object} + * @property 
{String} current - current state + * @property {String} previous - previous state + */ diff --git a/src/lib/eventListener/index.js b/src/lib/eventListener/index.js new file mode 100644 index 00000000..d1213ed9 --- /dev/null +++ b/src/lib/eventListener/index.js @@ -0,0 +1,378 @@ +/* + * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. + */ + +'use strict'; + +const configUtil = require('../utils/config'); +const configWorker = require('../config'); +const constants = require('../constants'); +const dataPipeline = require('../dataPipeline'); +const logger = require('../logger'); +const messageStream = require('./messageStream'); +const normalize = require('../normalize'); +const promiseUtil = require('../utils/promise'); +const properties = require('../properties.json'); +const stringify = require('../utils/misc').stringify; +const tracers = require('../utils/tracer').Tracer; + + +const CLASS_NAME = constants.CONFIG_CLASSES.EVENT_LISTENER_CLASS_NAME; +const normalizationOpts = { + global: properties.global, + events: properties.events, + definitions: properties.definitions +}; + +/** @module EventListener */ + +/** + * Create function to filter events by pattern defined in config + * + * @param {Object} config - listener's config + * @param {String} config.match - pattern to filter data + * + * @returns {Function(Object)} function to filter data, returns boolean value if data matches + */ +function buildFilterFunc(config) { + if (!config.match || !normalizationOpts.events.classifyByKeys) { + return null; + } + const pattern = new RegExp(config.match, 'i'); + const props = normalizationOpts.events.classifyByKeys; + logger.debug(`Building events filter 
function with following params: pattern=${pattern} properties=${stringify(props)}`); + + return function (data) { + for (let i = 0; i < props.length; i += 1) { + const val = data[props[i]]; + if (val && pattern.test(val)) { + return true; + } + } + return false; + }; +} + +/** + * Data Receivers Manager + * + * @property {Object} registered - registered receivers + */ +class ReceiversManager { + constructor() { + this.registered = {}; + } + + /** + * Destroy all receivers + * + * @returns {Promise} resolved once all receivers destroyed + */ + destroyAll() { + return promiseUtil.allSettled(this.getAll().map(receiver => receiver.destroy())) + .then((ret) => { + this.registered = {}; + return ret; + }); + } + + /** + * All registered receivers + * + * @returns {Array} registered receivers + */ + getAll() { + return Object.keys(this.registered).map(key => this.registered[key]); + } + + /** + * Get existing or create new MessageStream receiver + * + * @param {Number} port - port to listen on + * + * @returns {MessageStream} receiver + */ + getMessageStream(port) { + if (!this.registered[port]) { + this.registered[port] = new messageStream.MessageStream(port, { logger: logger.getChild(`messageStream:${port}`) }); + } + return this.registered[port]; + } + + /** + * Start all available instances + * + * @returns {Promise} resolved once all instances started + */ + start() { + const receivers = []; + Object.keys(this.registered).forEach((port) => { + const receiver = this.registered[port]; + if (receiver.hasListeners('messages') && !receiver.isRunning()) { + receivers.push(receiver); + } + }); + return promiseUtil.allSettled( + receivers.map(r => r.restart({ attempts: 10 }) // without delay for now (REST API is sync) + .catch(err => r.stop() // stop to avoid resources leaking + .then(() => Promise.reject(err)))) + ) + .then(promiseUtil.getValues); + } + + /** + * Stop all inactive instances + * + * @returns {Promise} resolved once all inactive instances stopped + */ + 
stopAndRemoveInactive() { + const receivers = []; + Object.keys(this.registered).forEach((port) => { + const receiver = this.registered[port]; + if (!receiver.hasListeners('messages')) { + delete this.registered[port]; + if (!receiver.isDestroyed()) { + receivers.push(receiver); + } + } + }); + return promiseUtil.allSettled(receivers.map(r => r.destroy() + .catch(destroyErr => r.logger.exception('unable to stop and destroy receiver', destroyErr)))); + } +} + +/** + * Event Listener Class + */ +class EventListener { + /** + * Constructor + * + * @param {String} name - listener's name + * @param {Object} [options = {}] - additional configuration options + * @param {Array} [options.actions] - list of actions to apply to the event data + * @param {Array} [options.destinationIds] - data destination IDs + * @param {Function} [options.filterFunc] - function to filter events + * @param {String} [options.id] - config unique ID + * @param {Object} [options.tags] - tags to add to the event data + * @param {module:util~Tracer} [options.tracer] - tracer + * + * @returns {Object} Returns EventListener object + */ + constructor(name, options) { + this.name = name; + this.logger = logger.getChild(this.name); + this.callback = messages => this.onMessagesHandler(messages); + this.updateConfig(options); + } + + attachMessageStream(ms) { + if (this.messageStream) { + throw new Error('Message Stream attached already!'); + } + this.messageStream = ms; + this.messageStream.on('messages', this.callback); + } + + detachMessageStream() { + if (this.messageStream) { + this.messageStream.removeListener('messages', this.callback); + this.messageStream = null; + } + } + + /** + * Process events + * + * @param {Array} newMessages - events + * + * @returns {Promise} resolved once all events processed + */ + onMessagesHandler(newMessages) { + // normalize and send to data pipeline + // note: addKeysByTag uses regex for default tags parsing (tenant/app) + const options = { + renameKeysByPattern: 
normalizationOpts.global.renameKeys, + addKeysByTag: { + tags: this.tags, + definitions: normalizationOpts.definitions, + opts: { + classifyByKeys: normalizationOpts.events.classifyByKeys + } + }, + formatTimestamps: normalizationOpts.global.formatTimestamps.keys, + classifyEventByKeys: normalizationOpts.events.classifyCategoryByKeys, + addTimestampForCategories: normalizationOpts.events.addTimestampForCategories + }; + const promises = []; + + newMessages.forEach((event) => { + event = event.trim(); + if (event.length === 0) { + return; + } + const normalizedData = normalize.event(event, options); + if (!this.filterFunc || this.filterFunc(normalizedData)) { + const dataCtx = { + data: normalizedData, + type: normalizedData.telemetryEventCategory || constants.EVENT_TYPES.EVENT_LISTENER, + sourceId: this.id, + destinationIds: this.destinationIds + }; + const p = dataPipeline.process(dataCtx, { tracer: this.tracer, actions: this.actions }) + .catch(err => this.logger.exception('EventListener:_processEvents unexpected error from dataPipeline:process', err)); + promises.push(p); + } + }); + return promiseUtil.allSettled(promises); + } + + /** + * Update listener's configuration - tracer, tags, actions, etc.
+ * + * @param {Object} [config = {}] - config + * @param {Array} [config.actions] - list of actions to apply to the event data + * @param {Array} [config.destinationIds] - data destination IDs + * @param {Function} [config.filterFunc] - function to filter events + * @param {String} [config.id] - config unique ID + * @param {Object} [config.tags] - tags to add to the event data + * @param {module:util~Tracer} [config.tracer] - tracer + * + * @returns {void} + */ + updateConfig(config) { + config = config || {}; + this.actions = config.actions; + this.destinationIds = config.destinationIds; + this.filterFunc = config.filterFunc; + this.id = config.id; + this.tracer = config.tracer; + this.tags = config.tags; + } +} + +/** + * Instance to manage data receivers + */ +EventListener.receiversManager = new ReceiversManager(); + +/** + * All created instances + */ +EventListener.instances = {}; + +/** + * Create new Event Listener + * + * @see EventListener + * + * @returns {EventListener} event listener instance + */ +EventListener.get = function (name, port) { + if (!EventListener.instances[name]) { + EventListener.instances[name] = new EventListener(name); + } + const listener = EventListener.instances[name]; + listener.detachMessageStream(); + listener.attachMessageStream(EventListener.receiversManager.getMessageStream(port)); + return listener; +}; + +/** + * Return Event Listener + * + * @returns {EventListener} event listener instance + */ +EventListener.getByName = function (name) { + return EventListener.instances[name]; +}; + +/** + * Returns current listeners + * + * @returns {Array} current listeners + */ +EventListener.getAll = function () { + return Object.keys(EventListener.instances).map(key => EventListener.instances[key]); +}; + +/** + * Stop and remove listener + * + * @param {EventListener} listener - event listener to remove + * + * @returns {Promise} resolved once listener removed + */ +EventListener.remove = function (listener) { + 
listener.detachMessageStream(); + delete EventListener.instances[listener.name]; +}; + +// config worker change event +configWorker.on('change', (config) => { + logger.debug('configWorker change event in eventListener'); // helpful debug + // timestamp to find out-dated tracers + const tracersTimestamp = new Date().getTime(); + const configuredListeners = configUtil.getTelemetryListeners(config); + + // stop all removed listeners + EventListener.getAll().forEach((listener) => { + const configMatch = configuredListeners.find(n => n.traceName === listener.name); + if (!configMatch) { + logger.debug(`Removing event listener - ${listener.name} [port = ${listener.port}]. Reason - removed from configuration.`); + EventListener.remove(listener); + } + }); + // stop all disabled listeners and those that have port updated + configuredListeners.forEach((listenerConfig) => { + const listener = EventListener.getByName(listenerConfig.traceName); + if (listener && listenerConfig.enable === false) { + logger.debug(`Removing event listener - ${listener.name} [port = ${listener.port}]. Reason - disabled.`); + EventListener.remove(listener); + } + }); + + configuredListeners.forEach((listenerConfig) => { + if (listenerConfig.skipUpdate || listenerConfig.enable === false) { + return; + } + // use name (prefixed if namespace is present) + const name = listenerConfig.traceName; + const port = listenerConfig.port; + + const msgPrefix = EventListener.getByName(name) ?
'Updating event' : 'Creating new event'; + logger.debug(`${msgPrefix} listener - ${name} [port = ${port}]`); + + const listener = EventListener.get(name, port); + listener.updateConfig({ + actions: listenerConfig.actions, + destinationIds: config.mappings[listenerConfig.id], + filterFunc: buildFilterFunc(listenerConfig), + id: listenerConfig.id, + tags: listenerConfig.tag, + tracer: tracers.createFromConfig(CLASS_NAME, name, listenerConfig) + }); + }); + + tracers.remove(tracer => tracer.name.startsWith(CLASS_NAME) + && tracer.lastGetTouch < tracersTimestamp); + + return EventListener.receiversManager.stopAndRemoveInactive() + .then(() => EventListener.receiversManager.start()) + .then(() => logger.debug(`${EventListener.getAll().length} event listener(s) listening`)) + .catch(err => logger.exception('Unable to start some (or all) of the event listeners', err)); +}); + +function sendShutdownEvent() { + EventListener.getAll().map(EventListener.remove); + EventListener.receiversManager.destroyAll().then(() => logger.info('All Event Listeners and Data Receivers destroyed')); +} +process.on('SIGINT', sendShutdownEvent); +process.on('SIGTERM', sendShutdownEvent); +process.on('SIGHUP', sendShutdownEvent); + +module.exports = EventListener; diff --git a/src/lib/eventListener/messageStream.js b/src/lib/eventListener/messageStream.js new file mode 100644 index 00000000..f462b60e --- /dev/null +++ b/src/lib/eventListener/messageStream.js @@ -0,0 +1,336 @@ + +/* * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +const baseDataReceiver = require('./baseDataReceiver'); +const logger = require('../logger'); +const promiseUtil = require('../utils/promise'); +const tcpUdpReceiver = require('./tcpUdpDataReceiver'); + +/** @module MessageStream */ + +class MessageStreamError extends baseDataReceiver.BaseDataReceiverError {} + +/** + * Data Receiver for messages separated by a new line + * + * Note: data may contain multiple events separated by newlines, + * but newline chars may also show up inside a given event, + * so split only on newlines that are not enclosed in quotes. + * Expected behavior is that every channel (TCP connection) + * carries only a particular type of events and not a mix of different types. + * If a OneConnect profile is used on a pool then a single channel + * might carry different event types, though this is unverified. + * + * @see module:BaseDataReceiverError.BaseDataReceiver + * + * @property {String} address - address to listen on + * @property {Logger} logger - logger instance + * @property {Number} port - port to listen on + * @property {Array} protocols - protocols to use + * + * @fires MessageStream#messages + */ +class MessageStream extends baseDataReceiver.BaseDataReceiver { + /** + * Constructor + * + * @param {Number} port - port to listen on + * @param {Object} [options={}] - additional options + * @param {String} [options.address] - address to listen on + * @param {Logger} [options.logger] - logger to use instead of default one + * @param {Array} [options.protocols=['tcp', 'udp']] - protocols to use + */ + constructor(port, options) { + super(); + options = options || {}; + this.address = options.address; + this.logger = (options.logger || logger).getChild('messageStream'); + this.port = port; + this.protocols = options.protocols || ['tcp', 'udp']; + } + + /** + * Create internal receivers + */ + createReceivers() { + this._receivers = this.protocols.map((originProtocol) => { + const protocol =
originProtocol.toLowerCase(); + if (!this.constructor.PROTOCOL_RECEIVER[protocol]) { + throw new MessageStreamError(`Unknown protocol '${originProtocol}'`); + } + const receiver = new this.constructor.PROTOCOL_RECEIVER[protocol]( + this.port, + { + address: this.address, + logger: this.logger.getChild(protocol) + } + ); + receiver.on('data', (data, connKey) => this.dataHandler(protocol, data, connKey)); + return receiver; + }); + this._dataBuffers = {}; + } + + /** + * Data handler + * + * @param {String} proto - protocol + * @param {Buffer} data - data to process + * @param {String} senderKey - sender's unique key + */ + dataHandler(proto, data, senderKey) { + data = data.toString(); + senderKey = `${proto}-${senderKey}`; + let bufferInfo = this._dataBuffers[senderKey]; + + if (bufferInfo) { + data = bufferInfo.data + data; + // cleanup timeout to avoid dups + if (bufferInfo.timeoutID) { + clearTimeout(bufferInfo.timeoutID); + } + } + const lengthBefore = data.length; + data = this.extractMessages(data); + + if (data.length >= this.constructor.MAX_BUFFER_SIZE + || (bufferInfo && bufferInfo.timeoutNo >= this.constructor.MAX_BUFFER_NUM_TIMEOUTS)) { + data = this.extractMessages(data, true); + } + // if we have incomplete data to buffer + if (data) { + if (!bufferInfo) { + bufferInfo = { timeoutNo: 1 }; + this._dataBuffers[senderKey] = bufferInfo; + } else if (data.length / lengthBefore < this.constructor.MAX_UNPARSED_DATA_CAP) { + bufferInfo.timeoutNo = 1; + } + bufferInfo.data = data; + bufferInfo.timeoutNo += 1; + bufferInfo.timeoutID = setTimeout(() => { + delete this._dataBuffers[senderKey]; + this.extractMessages(bufferInfo.data, true); + }, this.constructor.MAX_BUFFER_TIMEOUT); + } else { + delete this._dataBuffers[senderKey]; + } + } + + /** + * Split data received by Event Listener into events + * + * Valid separators are: + * - \n + * - \r\n + * + * - If line separator(s) enclosed with quotes then it will be ignored. 
+ * - If last line has line separator(s) and opened quote but no closing quote then + * this line will be split into multiple lines + * - When line has an opening quote and no closing quote and field's size is >= MAX_OPEN_QUOTE_SIZE + * then line will be split into multiple lines too + * + * @param {String} data - data + * @param {Boolean} [incomplete = false] - when set, treat any remaining data as a complete message + * + * @returns {String} incomplete data + * + * @fires MessageStream#messages + */ + extractMessages(data, incomplete) { + let backSlashed = false; + let char; + let forceSplit = false; + let idx = 0; + // never zero because it must be preceded by a quote + // so, we can use 0 as 'false' + let newlineClosestToOpenQuotePos = 0; + let openQuotePos; + let quoted = ''; + let startIdx = 0; + const lines = []; + + for (;idx < data.length; idx += 1) { + char = data[idx]; + if (char === '\\') { + backSlashed = !backSlashed; + // eslint-disable-next-line no-continue + continue; + } else if (char === '"' || char === '\'') { + if (backSlashed) { + backSlashed = false; + // eslint-disable-next-line no-continue + continue; + } + if (!quoted) { + // reset value, this new line is invalid now (before quote starts) + newlineClosestToOpenQuotePos = 0; + quoted = char; + openQuotePos = idx; + } else if (quoted === char) { + // reset value, this new line is invalid now (between quotes) + newlineClosestToOpenQuotePos = 0; + quoted = ''; + openQuotePos = null; + } + } else if (char === '\n' || (char === '\r' && data[idx + 1] === '\n') || forceSplit) { + if (!(newlineClosestToOpenQuotePos || forceSplit)) { + // remember new line pos if not set yet + newlineClosestToOpenQuotePos = idx; + } + if (!quoted || forceSplit) { + lines.push(data.slice(startIdx, idx)); + if (!forceSplit) { + // jump to next char + idx = char === '\r' ?
(idx + 1) : idx; + startIdx = idx + 1; + } else { + startIdx = idx; + } + // reset value, this new line is invalid now + newlineClosestToOpenQuotePos = 0; + } + forceSplit = false; + } else if (quoted && idx - openQuotePos >= this.constructor.MAX_OPEN_QUOTE_SIZE) { + // let's say a quote was opened and we are far away from the beginning of a chunk + // and still no closing quote - probably message was malformed. What we can do is + // (force) split data using position of a newline sequence closest to the open quote + // or by position of open quote + if (newlineClosestToOpenQuotePos) { + idx = newlineClosestToOpenQuotePos - 1; + } else { + idx = openQuotePos; + forceSplit = true; + } + quoted = ''; + openQuotePos = null; + } + backSlashed = false; + } + // idx > startIdx - EOL reached earlier + // idx <= startIdx - EOL reached and line separator was found + if (incomplete && startIdx < data.length && idx > startIdx) { + // looks like EOL reached, so we have to check last line + const lastLine = data.slice(startIdx); + if (openQuotePos === null) { + lines.push(lastLine); + } else { + // quote was opened and not closed + // it might be worth checking if there are newline separators + openQuotePos -= startIdx; + const leftPart = lastLine.slice(0, openQuotePos); + const rightParts = lastLine.slice(openQuotePos).split(/\n|\r\n/); + lines.push(leftPart + rightParts[0]); + rightParts.forEach((elem, elemId) => elem && elemId && lines.push(elem)); + } + data = ''; + } + this.safeEmitAsync('messages', lines); + return data.length ?
data.slice(startIdx) : data; + } + + /** + * Check if there are any internal receivers + * + * @returns {Boolean} true if internal receivers exist + */ + hasReceivers() { + return this._receivers && this._receivers.length > 0; + } + + /** + * Start receiver + * + * @async + * @returns {Promise} resolved once receiver started + */ + startHandler() { + if (!this.hasState(this.constructor.STATE.STARTING)) { + return Promise.reject(this.getStateTransitionError(this.constructor.STATE.STARTING)); + } + this.createReceivers(); + return promiseUtil.allSettled(this._receivers.map(receiver => receiver.start())) + .then(promiseUtil.getValues); + } + + /** + * Stop receiver + * + * @async + * @returns {Promise} resolved once receiver stopped + */ + stopHandler() { + if (!this.hasReceivers()) { + return Promise.resolve(); + } + return promiseUtil.allSettled(this._receivers.map(receiver => receiver.destroy())) + .then((statuses) => { + this._dataBuffers = null; + this._receivers = null; + return promiseUtil.getValues(statuses); + }); + } +} + +/** + * Length of buffer for each connection. When the amount of data stored in the buffer + * exceeds this threshold, even incomplete data will be flushed + * + * @type {Integer} + */ +MessageStream.MAX_BUFFER_SIZE = 16 * 1024; // 16k chars + +/** + * Number of times a timeout for a particular buffer can be reset before + * flushing all data + * + * @type {Integer} + */ +MessageStream.MAX_BUFFER_NUM_TIMEOUTS = 5; + +/** + * Buffer timeout + * + * @type {Integer} + */ +MessageStream.MAX_BUFFER_TIMEOUT = 10 * 1000; // 10 sec. + +/** + * Max fraction of unparsed data that still allows the buffer timeout to be reset + */ +MessageStream.MAX_UNPARSED_DATA_CAP = 0.7; + +/** + * Number of chars that a string with an open quote can contain before it will be + * treated as a malformed message. In other words, this parameter declares how + * many chars a single field can contain.
+ * + * @type {Integer} + */ +MessageStream.MAX_OPEN_QUOTE_SIZE = 512; + +/** + * Map protocol to its implementation + */ +MessageStream.PROTOCOL_RECEIVER = { + tcp: tcpUdpReceiver.TCPDataReceiver, + udp: tcpUdpReceiver.DualUDPDataReceiver +}; + +module.exports = { + MessageStream, + MessageStreamError +}; + +/** + * Messages event + * + * @event MessageStream#messages + * @param {Array} messages - array of received messages + */ diff --git a/src/lib/eventListener/tcpUdpDataReceiver.js b/src/lib/eventListener/tcpUdpDataReceiver.js new file mode 100644 index 00000000..b316b45f --- /dev/null +++ b/src/lib/eventListener/tcpUdpDataReceiver.js @@ -0,0 +1,448 @@ +/* * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +const dgram = require('dgram'); +const net = require('net'); + +const baseDataReceiver = require('./baseDataReceiver'); +const logger = require('../logger'); +const promiseUtil = require('../utils/promise'); + +/** @module TcpUdpDataReceiver */ + +class TcpUdpDataReceiverError extends baseDataReceiver.BaseDataReceiverError {} + +/** + * Base Data Receiver for TCP and UDP protocols + * + * @see module:BaseDataReceiverError.BaseDataReceiver + * + * @property {String} address - address to listen on + * @property {Logger} logger - logger instance + * @property {Number} port - port to listen on + * + * @fires TcpUdpBaseDataReceiver#data + */ +class TcpUdpBaseDataReceiver extends baseDataReceiver.BaseDataReceiver { + /** + * Constructor + * + * @param {Number} port - port to listen on + * @param {Object} [options = {}] - additional options + * @param {String} [options.address] - address to listen on + * @param {Logger} [options.logger] - logger to use instead of default one + */ + constructor(port, options) { + super(); + options = options || {}; + this.address = options.address; + this.logger = options.logger || logger.getChild(this.constructor.name); + this.port = port; + } + + /** + * Call data callback + * + * @param {Buffer} data + * @param {Object} connInfo + * + * @fires TcpUdpBaseDataReceiver#data + * + * @returns {Promise} resolved once data processed + */ + callCallback(data, connInfo) { + return this.safeEmitAsync('data', data, this.getConnKey(connInfo)); + } + + /** + * Connection unique key + * + * @private + * @param {Socket} conn - connection + * + * @returns {String} unique key + */ + getConnKey() { + throw new Error('Not implemented'); + } + + /** + * Get receiver options to start listening for data + * + * @private + * @returns {Object} options + */ + getReceiverOptions() { + const options = { port: this.port }; + if (this.address) { + options.address = this.address; + } + return options; + } + + /** + * Restart receiver + *
* @returns {Promise} resolved regardless of the result + */ + safeRestart() { + this.logger.debug('safely restarting'); + try { + return this.restart({ delay: this.constructor.RESTART_DELAY }).catch(() => {}); + } catch (restartErr) { + this.logger.exception(`${this.constructor.name}.safeRestart uncaught error`, restartErr); + // silently ignore error + } + return Promise.resolve(); + } +} + +/** + * Delay before restart + * + * @property {Number} + */ +TcpUdpBaseDataReceiver.RESTART_DELAY = 10 * 1000; // 10 sec. delay before restart + +/** + * Data Receiver over TCP + * + * @see TcpUdpBaseDataReceiver + */ +class TCPDataReceiver extends TcpUdpBaseDataReceiver { + /** + * Constructor + * + * @see TcpUdpBaseDataReceiver + */ + constructor(port, options) { + super(port, options); + this._connections = []; + } + + /** + * Add connection to the list of opened connections + * + * @private + * @param {Socket} conn - connection to add + */ + _addConnection(conn) { + this.logger.debug(`new connection - ${this.getConnKey(conn)}`); + this._connections.push(conn); + } + + /** + * Close all opened client connections + * + * @private + */ + _closeAllConnections() { + this.logger.debug('closing all client connections'); + // use .slice in case ._removeConnection is called during iteration + this._connections.slice(0).forEach(conn => conn.destroy()); + this._connections = []; + } + + /** + * Remove connection from the list of opened connections + * + * @private + * @param {Socket} conn - connection to remove + */ + _removeConnection(conn) { + this.logger.debug(`removing connection - ${this.getConnKey(conn)}`); + const idx = this._connections.indexOf(conn); + if (idx > -1) { + this._connections.splice(idx, 1); + } + } + + /** + * Connection handler + * + * @param {Socket} conn - connection + */ + connectionHandler(conn) { + this._addConnection(conn); + conn.on('data', data => this.callCallback(data, conn)) + .on('error', () => conn.destroy()) // destroy emits 'close' event + .on('close', () =>
this._removeConnection(conn))
+            .on('end', () => {}); // allowHalfOpen is false by default, no need to call 'end' explicitly
+    }
+
+    /**
+     * Connection unique key
+     *
+     * @private
+     * @param {Socket} conn - connection
+     *
+     * @returns {String} unique key
+     */
+    getConnKey(conn) {
+        return `${conn.remoteAddress}-${conn.remotePort}`;
+    }
+
+    /**
+     * Start TCP data receiver
+     *
+     * @async
+     * @returns {Promise} resolved once receiver started
+     */
+    startHandler() {
+        let isStarted = false;
+        return new Promise((resolve, reject) => {
+            if (!this.hasState(this.constructor.STATE.STARTING)) {
+                reject(this.getStateTransitionError(this.constructor.STATE.STARTING));
+            } else {
+                this._socket = net.createServer({
+                    allowHalfOpen: false,
+                    pauseOnConnect: false
+                });
+                this._socket.on('error', (err) => {
+                    this.logger.exception('unexpected error', err);
+                    if (isStarted) {
+                        this.safeRestart();
+                    } else {
+                        reject(err);
+                    }
+                })
+                    .on('listening', () => {
+                        this.logger.debug('listening');
+                        if (!isStarted) {
+                            isStarted = true;
+                            resolve();
+                        }
+                    })
+                    .on('close', () => {
+                        this.logger.debug('closed');
+                        if (!isStarted) {
+                            reject(new TcpUdpDataReceiverError('socket closed before being ready'));
+                        }
+                    })
+                    .on('connection', this.connectionHandler.bind(this));
+
+                const options = this.getReceiverOptions();
+                this.logger.debug(`starting listen using following options ${JSON.stringify(options)}`);
+                this._socket.listen(options);
+            }
+        });
+    }
+
+    /**
+     * Stop receiver
+     *
+     * @async
+     * @returns {Promise} resolved once receiver stopped
+     */
+    stopHandler() {
+        return new Promise((resolve) => {
+            if (!this._socket) {
+                resolve();
+            } else {
+                this._closeAllConnections();
+                this._socket.close(() => {
+                    this._socket.removeAllListeners();
+                    this._socket = null;
+                    resolve();
+                });
+            }
+        });
+    }
+}
+
+/**
+ * Data Receiver over UDP
+ *
+ * @see TcpUdpBaseDataReceiver
+ *
+ * @property {String} family - listener type - 'udp4' or 'udp6', by default 'udp4'
+ */
+class UDPDataReceiver
extends TcpUdpBaseDataReceiver {
+    /**
+     * Constructor
+     *
+     * @param {Number} port - port to listen on
+     * @param {Object} [options = {}] - additional options
+     * @param {String} [options.address] - address to listen on
+     * @param {Logger} [options.logger] - logger to use instead of default one
+     * @param {String} [family = 'udp4'] - socket type, 'udp4' or 'udp6', by default 'udp4'
+     */
+    constructor(port, options, family) {
+        super(port, options);
+        this.family = (family || '').toLowerCase() === 'udp6' ? 'udp6' : 'udp4';
+    }
+
+    /**
+     * Connection unique key
+     *
+     * @private
+     * @param {Object} remoteInfo - connection info
+     *
+     * @returns {String} unique key
+     */
+    getConnKey(remoteInfo) {
+        return `${remoteInfo.address}-${remoteInfo.port}`;
+    }
+
+    /**
+     * Start UDP data receiver
+     *
+     * @async
+     * @returns {Promise} resolved once receiver started
+     */
+    startHandler() {
+        let isStarted = false;
+        return new Promise((resolve, reject) => {
+            if (!this.hasState(this.constructor.STATE.STARTING)) {
+                reject(this.getStateTransitionError(this.constructor.STATE.STARTING));
+            } else {
+                this._socket = dgram.createSocket({
+                    type: this.family,
+                    ipv6Only: this.family === 'udp6', // available starting from node 11+ only
+                    reuseAddr: true // allows UDPv6 and UDPv4 to be bound to 0.0.0.0 and ::0 at the same time
+                });
+                this._socket.on('error', (err) => {
+                    this.logger.exception('unexpected error', err);
+                    if (isStarted) {
+                        this.safeRestart();
+                    } else {
+                        reject(err);
+                    }
+                })
+                    .on('listening', () => {
+                        this.logger.debug('listening');
+                        if (!isStarted) {
+                            isStarted = true;
+                            resolve();
+                        }
+                    })
+                    .on('close', () => {
+                        this.logger.debug('closed');
+                        if (!isStarted) {
+                            reject(new TcpUdpDataReceiverError('socket closed before being ready'));
+                        }
+                    })
+                    .on('message', this.callCallback.bind(this));
+
+                const options = this.getReceiverOptions();
+                this.logger.debug(`starting listen using following options ${JSON.stringify(options)}`);
+                this._socket.bind(options);
+            }
+        });
+    }
+
+    /**
+     * Stop UDP data receiver
+     *
+     * @async
+     * @private
+     * @returns {Promise} resolved once receiver closed
+     */
+    stopHandler() {
+        return new Promise((resolve) => {
+            if (!this._socket) {
+                resolve();
+            } else {
+                this._socket.close(() => {
+                    this._socket.removeAllListeners();
+                    this._socket = null;
+                    resolve();
+                });
+            }
+        });
+    }
+}
+
+
+/**
+ * Data Receiver over UDPv4 and UDPv6
+ *
+ * Note: this class is needed to support DualStack on node.js versions older than 11.x
+ *
+ * @see TcpUdpBaseDataReceiver
+ */
+class DualUDPDataReceiver extends TcpUdpBaseDataReceiver {
+    /**
+     * Create internal receivers
+     */
+    createReceivers() {
+        this._receivers = ['udp4', 'udp6'].map((family) => {
+            const receiver = new UDPDataReceiver(
+                this.port,
+                {
+                    address: this.address,
+                    logger: this.logger.getChild(family)
+                },
+                family
+            );
+            // passthrough 'data' event
+            this.listenTo(receiver, { data: 'data' });
+            return receiver;
+        });
+    }
+
+    /**
+     * Check if there are any internal receivers
+     *
+     * @returns {Boolean}
+     */
+    hasReceivers() {
+        return this._receivers && this._receivers.length > 0;
+    }
+
+    /**
+     * Start receiver
+     *
+     * @async
+     * @returns {Promise} resolved once receiver started
+     */
+    startHandler() {
+        if (!this.hasState(this.constructor.STATE.STARTING)) {
+            return Promise.reject(this.getStateTransitionError(this.constructor.STATE.STARTING));
+        }
+        this.createReceivers();
+        return promiseUtil.allSettled(this._receivers.map(receiver => receiver.start()))
+            .then(promiseUtil.getValues);
+    }
+
+    /**
+     * Stop receiver
+     *
+     * @async
+     * @returns {Promise} resolved once receiver stopped
+     */
+    stopHandler() {
+        if (!this.hasReceivers()) {
+            return Promise.resolve();
+        }
+        // stop listening for 'data' event
+        this.stopListeningTo();
+        return promiseUtil.allSettled(this._receivers.map(receiver => receiver.destroy()))
+            .then((statuses) => {
+                this._receivers = null;
+                return promiseUtil.getValues(statuses);
+            });
+    }
+}
+
+module.exports = {
+    
DualUDPDataReceiver, + TcpUdpBaseDataReceiver, + TcpUdpDataReceiverError, + TCPDataReceiver, + UDPDataReceiver +}; + +/** + * Data event + * + * @event TcpUdpBaseDataReceiver#data + * @param {Buffer} data - data + * @param {String} connKey - connection unique key + */ diff --git a/src/lib/normalize.js b/src/lib/normalize.js index b6519468..e20b0e45 100644 --- a/src/lib/normalize.js +++ b/src/lib/normalize.js @@ -475,77 +475,6 @@ module.exports = { return ret; }, - /** - * Split data received by Event Listener into events - * - * Valid separators are: - * - \n - * - \r\n - * If line separator(s) enclosed with quotes then it will be ignored. - * If last line has line separator(s) and opened quote but no closing quote then - * this line will be splitted into multiple lines - * - * @param {String} data - data - * - * @returns {Array} array of events/chunks - */ - splitEvents(data) { - let backSlashed = false; - let char; - let idx = 0; - let openQuotePos; - let quoted = ''; - let startIdx = 0; - const lines = []; - - for (;idx < data.length; idx += 1) { - char = data[idx]; - if (char === '\\') { - backSlashed = !backSlashed; - // eslint-disable-next-line no-continue - continue; - } else if (char === '"' || char === '\'') { - if (backSlashed) { - backSlashed = false; - // eslint-disable-next-line no-continue - continue; - } - if (!quoted) { - quoted = char; - openQuotePos = idx; - } else if (quoted === char) { - quoted = ''; - openQuotePos = null; - } - } else if (char === '\n' || (char === '\r' && data[idx + 1] === '\n')) { - if (!quoted) { - lines.push(data.slice(startIdx, idx)); - idx = char === '\r' ? 
(idx + 1) : idx; - startIdx = idx + 1; - } - } - backSlashed = false; - } - // idx > startIdx - EOL reached earlier - // idx <= startIdx - EOL reached and line separator was found - if (startIdx < data.length && idx > startIdx) { - // looks like EOL reached, so we have to check last line - const lastLine = data.slice(startIdx); - if (openQuotePos === null) { - lines.push(lastLine); - } else { - // quote was opened and not closed - // might worth to check if there are new line separators - openQuotePos -= startIdx; - const leftPart = lastLine.slice(0, openQuotePos); - const rightParts = lastLine.slice(openQuotePos).split(/\n|\r\n/); - lines.push(leftPart + rightParts[0]); - rightParts.forEach((elem, elemId) => elemId && lines.push(elem)); - } - } - return lines; - }, - /** * Normalize iHealth data * diff --git a/src/lib/properties.json b/src/lib/properties.json index a12b1164..65e9a108 100644 --- a/src/lib/properties.json +++ b/src/lib/properties.json @@ -776,7 +776,7 @@ "includeFirstEntry": { "pattern": "/stats", "excludePattern": "/members/" } }, { - "filterKeys": { "exclude": [ "tmName", "availableMemberCnt", "sessionStatus", "connqAll.ageEdm", "connqAll.ageEma", "connqAll.ageHead", "connqAll.ageMax", "connqAll.depth", "connqAll.serviced", "connq.ageEdm", "connq.ageEma", "connq.ageHead", "connq.ageMax", "connq.depth", "connq.serviced", "curSessions", "memberCnt", "minActiveMembers", "monitorRule", "ipTosToServer", "minUpMembersAction", "appService", "appServiceReference", "minUpMembersChecking", "kind","ignorePersistedWeight", "fullPath", "partition", "linkQosToClient", "linkQosToServer", "ipTosToClient", "generation", "serviceDownAction", "queueDepthLimit", "queueTimeLimit", "allowNat", "reselectTries", "minUpMembers", "nodeName", "poolName", "allowSnat", "monitor", "selfLink", "subPath", "queueOnConnectionLimit", "loadBalancingMode", "slowRampTime" ] } + "filterKeys": { "exclude": [ "tmName", "availableMemberCnt", "sessionStatus", "connqAll.ageEdm", 
"connqAll.ageEma", "connqAll.ageHead", "connqAll.ageMax", "connqAll.depth", "connqAll.serviced", "connq.ageEdm", "connq.ageEma", "connq.ageHead", "connq.ageMax", "connq.depth", "connq.serviced", "curSessions", "memberCnt", "minActiveMembers", "monitorRule", "ipTosToServer", "minUpMembersAction", "appService", "appServiceReference", "minUpMembersChecking", "kind","ignorePersistedWeight", "fullPath", "partition", "linkQosToClient", "linkQosToServer", "ipTosToClient", "generation", "serviceDownAction", "queueDepthLimit", "queueTimeLimit", "allowNat", "reselectTries", "minUpMembers", "nodeName", "poolName", "allowSnat", "monitor", "selfLink", "subPath", "queueOnConnectionLimit", "loadBalancingMode", "slowRampTime", "gatewayFailsafeDeviceReference" ] } }, { "renameKeys": { "patterns": { "name/": { "pattern": "name\/(.*)", "group": 1 }, "ltm/pool/": { "pattern": "pool\/(.*)\\?", "group": 1 } , "members/": { "pattern": "members\/(.*)\/", "group": 1 }, "membersReference": "members" } } diff --git a/src/lib/requestHandlers/declareHandler.js b/src/lib/requestHandlers/declareHandler.js index 9f0b22de..0a4e5ecb 100644 --- a/src/lib/requestHandlers/declareHandler.js +++ b/src/lib/requestHandlers/declareHandler.js @@ -11,10 +11,11 @@ const nodeUtil = require('util'); const BaseRequestHandler = require('./baseHandler'); +const ErrorHandler = require('./errorHandler'); +const httpErrors = require('./httpErrors'); const configWorker = require('../config'); const logger = require('../logger'); const router = require('./router'); -const ServiceUnavailableErrorHandler = require('./httpStatus/serviceUnavailableErrorHandler'); /** * /declare endpoint handler @@ -52,13 +53,19 @@ DeclareEndpointHandler.prototype.getBody = function () { */ DeclareEndpointHandler.prototype.process = function () { let promise; + const namespace = this.params && this.params.namespace ? 
this.params.namespace : undefined; + if (this.getMethod() === 'POST') { if (DeclareEndpointHandler.PROCESSING_DECLARATION_FLAG) { logger.debug('Can\'t process new declaration while previous one is still in progress'); - return Promise.resolve(new ServiceUnavailableErrorHandler(this.restOperation)); + return Promise.resolve(new ErrorHandler(new httpErrors.ServiceUnavailableError())); } DeclareEndpointHandler.PROCESSING_DECLARATION_FLAG = true; - promise = configWorker.processDeclaration(this.restOperation.getBody()) + promise = namespace + ? configWorker.processNamespaceDeclaration(this.restOperation.getBody(), this.params.namespace) + : configWorker.processDeclaration(this.restOperation.getBody()); + + promise = promise .then((config) => { DeclareEndpointHandler.PROCESSING_DECLARATION_FLAG = false; return config; @@ -68,8 +75,9 @@ DeclareEndpointHandler.prototype.process = function () { return Promise.reject(err); }); } else { - promise = configWorker.getRawConfig(); + promise = configWorker.getRawConfig(namespace); } + return promise.then((config) => { this.code = 200; this.body = { @@ -78,18 +86,7 @@ DeclareEndpointHandler.prototype.process = function () { }; return this; }) - .catch((error) => { - if (error.code === 'ValidationError') { - this.code = 422; - this.body = { - code: this.code, - message: 'Unprocessable entity', - error: error.message - }; - return this; - } - return Promise.reject(error); - }); + .catch(error => new ErrorHandler(error).process()); }; DeclareEndpointHandler.PROCESSING_DECLARATION_FLAG = false; @@ -97,6 +94,8 @@ DeclareEndpointHandler.PROCESSING_DECLARATION_FLAG = false; router.on('register', (routerInst) => { routerInst.register('GET', '/declare', DeclareEndpointHandler); routerInst.register('POST', '/declare', DeclareEndpointHandler); + routerInst.register('GET', '/namespace/:namespace/declare', DeclareEndpointHandler); + routerInst.register('POST', '/namespace/:namespace/declare', DeclareEndpointHandler); }); module.exports = 
DeclareEndpointHandler; diff --git a/src/lib/requestHandlers/errorHandler.js b/src/lib/requestHandlers/errorHandler.js new file mode 100644 index 00000000..8b166b66 --- /dev/null +++ b/src/lib/requestHandlers/errorHandler.js @@ -0,0 +1,81 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. + */ + +'use strict'; + +const errors = require('../errors'); +const HttpError = require('./httpErrors.js').HttpError; + +/** + * Handler for errors encountered during requests + * + * @param {Object} error - Error object + */ +function ErrorHandler(error) { + this.error = error; +} + +ErrorHandler.prototype.getCode = function () { + if (this.error instanceof HttpError) { + return this.error.getCode(); + } + if (this.error instanceof errors.BaseError) { + const httpError = this.getHttpEquivalent(this.error); + return httpError.code; + } + return undefined; +}; + +ErrorHandler.prototype.getBody = function () { + if (this.error instanceof HttpError) { + return this.error.getBody(); + } + if (this.error instanceof errors.BaseError) { + const httpError = this.getHttpEquivalent(this.error); + return httpError.body; + } + return undefined; +}; + +ErrorHandler.prototype.process = function () { + if (this.error instanceof HttpError) { + this.code = this.getCode(); + this.body = this.getBody(); + return Promise.resolve(this); + } + + if (this.error instanceof errors.BaseError) { + const httpError = this.getHttpEquivalent(this.error); + this.code = httpError.code; + this.body = httpError.body; + return Promise.resolve(this); + } + + return Promise.reject(this.error); +}; + +ErrorHandler.prototype.getHttpEquivalent = function (error) { + const httpError = {}; + if 
(error instanceof errors.ConfigLookupError) { + httpError.code = 404; + httpError.body = { + code: 404, + message: error.message + }; + } else if (error instanceof errors.ValidationError) { + httpError.code = 422; + httpError.body = { + code: 422, + message: 'Unprocessable entity', + error: error.message + }; + } + return httpError; +}; + +module.exports = ErrorHandler; diff --git a/src/lib/requestHandlers/httpErrors.js b/src/lib/requestHandlers/httpErrors.js new file mode 100644 index 00000000..98e4cd3c --- /dev/null +++ b/src/lib/requestHandlers/httpErrors.js @@ -0,0 +1,102 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. + */ + +'use strict'; + +class HttpError extends Error { + getCode() { + throw new Error('Method "getCode" not implemented'); + } + + getBody() { + throw new Error('Method "getBody" not implemented'); + } +} + +class BadURLError extends HttpError { + constructor(pathName) { + super(); + this.pathName = pathName; + } + + getCode() { + return 400; + } + + getBody() { + return `Bad URL: ${this.pathName}`; + } +} + +class InternalServerError extends HttpError { + getCode() { + return 500; + } + + getBody() { + return { + code: this.getCode(), + message: 'Internal Server Error' + }; + } +} + +class MethodNotAllowedError extends HttpError { + constructor(allowedMethods) { + super(); + this.allowedMethods = allowedMethods; + } + + getCode() { + return 405; + } + + getBody() { + return { + code: this.getCode(), + message: 'Method Not Allowed', + allow: this.allowedMethods + }; + } +} + +class ServiceUnavailableError extends HttpError { + getCode() { + return 503; + } + + getBody() { + return { + code: this.getCode(), + message: 
'Service Unavailable' + }; + } +} + +class UnsupportedMediaTypeError extends HttpError { + getCode() { + return 415; + } + + getBody() { + return { + code: this.getCode(), + message: 'Unsupported Media Type', + accept: ['application/json'] + }; + } +} + +module.exports = { + HttpError, + BadURLError, + InternalServerError, + MethodNotAllowedError, + ServiceUnavailableError, + UnsupportedMediaTypeError +}; diff --git a/src/lib/requestHandlers/httpStatus/badUrlHandler.js b/src/lib/requestHandlers/httpStatus/badUrlHandler.js deleted file mode 100644 index fe39e59d..00000000 --- a/src/lib/requestHandlers/httpStatus/badUrlHandler.js +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -const nodeUtil = require('util'); -const BaseRequestHandler = require('../baseHandler'); - -/** - * Bad URL Handler - * - * @param {Object} restOperation - */ -function BadURLHandler() { - BaseRequestHandler.apply(this, arguments); -} -nodeUtil.inherits(BadURLHandler, BaseRequestHandler); - -/** - * Get response code - * - * @returns {Integer} response code - */ -BadURLHandler.prototype.getCode = function () { - return 400; -}; - -/** - * Get response body - * - * @returns {Any} response body - */ -BadURLHandler.prototype.getBody = function () { - return `Bad URL: ${this.restOperation.getUri().pathname}`; -}; - -module.exports = BadURLHandler; diff --git a/src/lib/requestHandlers/httpStatus/internalServerErrorHandler.js b/src/lib/requestHandlers/httpStatus/internalServerErrorHandler.js deleted file mode 100644 index dbc6617a..00000000 --- a/src/lib/requestHandlers/httpStatus/internalServerErrorHandler.js +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -const nodeUtil = require('util'); -const BaseRequestHandler = require('../baseHandler'); - -/** - * Internal Server Error Handler - * - * @param {Object} restOperation - */ -function InternalServerErrorHandler() { - BaseRequestHandler.apply(this, arguments); -} -nodeUtil.inherits(InternalServerErrorHandler, BaseRequestHandler); - -/** - * Get response code - * - * @returns {Integer} response code - */ -InternalServerErrorHandler.prototype.getCode = function () { - return 500; -}; - -/** - * Get response body - * - * @returns {Any} response body - */ -InternalServerErrorHandler.prototype.getBody = function () { - return { - code: this.getCode(), - message: 'Internal Server Error' - }; -}; - -module.exports = InternalServerErrorHandler; diff --git a/src/lib/requestHandlers/httpStatus/methodNotAllowedHandler.js b/src/lib/requestHandlers/httpStatus/methodNotAllowedHandler.js deleted file mode 100644 index 2f6b0e90..00000000 --- a/src/lib/requestHandlers/httpStatus/methodNotAllowedHandler.js +++ /dev/null @@ -1,47 +0,0 @@ -/* - * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -const nodeUtil = require('util'); -const BaseRequestHandler = require('../baseHandler'); - -/** - * Method Not Allowed Handler - * - * @param {Object} restOperation - * @param {Array} allowed - list of allowed methods - */ -function MethodNotAllowedHandler() { - BaseRequestHandler.apply(this, arguments); -} -nodeUtil.inherits(MethodNotAllowedHandler, BaseRequestHandler); - -/** - * Get response code - * - * @returns {Integer} response code - */ -MethodNotAllowedHandler.prototype.getCode = function () { - return 405; -}; - -/** - * Get response body - * - * @returns {Any} response body - */ -MethodNotAllowedHandler.prototype.getBody = function () { - return { - code: this.getCode(), - message: 'Method Not Allowed', - allow: this.params - }; -}; - -module.exports = MethodNotAllowedHandler; diff --git a/src/lib/requestHandlers/httpStatus/serviceUnavailableErrorHandler.js b/src/lib/requestHandlers/httpStatus/serviceUnavailableErrorHandler.js deleted file mode 100644 index da59f998..00000000 --- a/src/lib/requestHandlers/httpStatus/serviceUnavailableErrorHandler.js +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -const nodeUtil = require('util'); -const BaseRequestHandler = require('../baseHandler'); - -/** - * Service Unavailable Error Handler - * - * @param {Object} restOperation - */ -function ServiceUnavailableErrorHandler() { - BaseRequestHandler.apply(this, arguments); -} -nodeUtil.inherits(ServiceUnavailableErrorHandler, BaseRequestHandler); - -/** - * Get response code - * - * @returns {Integer} response code - */ -ServiceUnavailableErrorHandler.prototype.getCode = function () { - return 503; -}; - -/** - * Get response body - * - * @returns {Any} response body - */ -ServiceUnavailableErrorHandler.prototype.getBody = function () { - return { - code: this.getCode(), - message: 'Service Unavailable' - }; -}; - -module.exports = ServiceUnavailableErrorHandler; diff --git a/src/lib/requestHandlers/httpStatus/unsupportedMediaTypeHandler.js b/src/lib/requestHandlers/httpStatus/unsupportedMediaTypeHandler.js deleted file mode 100644 index 1c566343..00000000 --- a/src/lib/requestHandlers/httpStatus/unsupportedMediaTypeHandler.js +++ /dev/null @@ -1,46 +0,0 @@ -/* - * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -const nodeUtil = require('util'); -const BaseRequestHandler = require('../baseHandler'); - -/** - * Unsupported Media Type Handler - * - * @param {Object} restOperation - */ -function UnsupportedMediaTypeHandler() { - BaseRequestHandler.apply(this, arguments); -} -nodeUtil.inherits(UnsupportedMediaTypeHandler, BaseRequestHandler); - -/** - * Get response code - * - * @returns {Integer} response code - */ -UnsupportedMediaTypeHandler.prototype.getCode = function () { - return 415; -}; - -/** - * Get response body - * - * @returns {Any} response body - */ -UnsupportedMediaTypeHandler.prototype.getBody = function () { - return { - code: this.getCode(), - message: 'Unsupported Media Type', - accept: ['application/json'] - }; -}; - -module.exports = UnsupportedMediaTypeHandler; diff --git a/src/lib/requestHandlers/ihealthPollerHandler.js b/src/lib/requestHandlers/ihealthPollerHandler.js index f1be649b..ad8a38df 100644 --- a/src/lib/requestHandlers/ihealthPollerHandler.js +++ b/src/lib/requestHandlers/ihealthPollerHandler.js @@ -11,7 +11,7 @@ const nodeUtil = require('util'); const BaseRequestHandler = require('./baseHandler'); -const errors = require('../errors'); +const ErrorHandler = require('./errorHandler'); const ihealth = require('../ihealth'); const isObjectEmpty = require('../utils/misc').isObjectEmpty; const router = require('./router'); @@ -73,17 +73,7 @@ IHealthPollerEndpointHandler.prototype.process = function () { }; return this; }) - .catch((error) => { - if (error instanceof errors.ConfigLookupError) { - this.code = 404; - this.body = { - code: this.code, - message: error.message - }; - return this; - } - return Promise.reject(error); - }); + .catch(error => new ErrorHandler(error).process()); }; router.on('register', (routerInst, enableDebug) => { diff --git a/src/lib/requestHandlers/pullConsumerHandler.js b/src/lib/requestHandlers/pullConsumerHandler.js index c7d83646..7aa09472 100644 --- 
a/src/lib/requestHandlers/pullConsumerHandler.js +++ b/src/lib/requestHandlers/pullConsumerHandler.js @@ -11,7 +11,7 @@ const nodeUtil = require('util'); const BaseRequestHandler = require('./baseHandler'); -const errors = require('../errors'); +const ErrorHandler = require('./errorHandler'); const pullConsumers = require('../pullConsumers'); const router = require('./router'); @@ -55,17 +55,7 @@ PullConsumerEndpointHandler.prototype.process = function () { this.code = 200; this.body = data; return this; - }).catch((error) => { - if (error instanceof errors.ConfigLookupError) { - this.code = 404; - this.body = { - code: this.code, - message: error.message - }; - return this; - } - return Promise.reject(error); - }); + }).catch(error => new ErrorHandler(error).process()); }; router.on('register', (routerInst) => { diff --git a/src/lib/requestHandlers/router.js b/src/lib/requestHandlers/router.js index 3e1370d0..fc30a9ce 100644 --- a/src/lib/requestHandlers/router.js +++ b/src/lib/requestHandlers/router.js @@ -11,13 +11,10 @@ const EventEmitter = require('events'); const nodeUtil = require('util'); const TinyRequestRouter = require('tiny-request-router').Router; - -const BadURLHandler = require('./httpStatus/badUrlHandler'); +const ErrorHandler = require('./errorHandler'); +const httpErrors = require('./httpErrors'); const configWorker = require('../config'); -const InternalServerErrorHandler = require('./httpStatus/internalServerErrorHandler'); const logger = require('../logger'); -const MethodNotAllowedHandler = require('./httpStatus/methodNotAllowedHandler'); -const UnsupportedMediaTypeHandler = require('./httpStatus/unsupportedMediaTypeHandler'); const configUtil = require('../utils/config'); /** @@ -77,11 +74,11 @@ RequestRouter.prototype.processRestOperation = function (restOperation, uriPrefi } catch (err) { // in case if synchronous part of the code failed logger.exception('restOperation processing error', err); - responsePromise = (new 
InternalServerErrorHandler(restOperation)).process(); + responsePromise = (new ErrorHandler(new httpErrors.InternalServerError())).process(); } return responsePromise.catch((err) => { logger.exception('restOperation processing error', err); - return (new InternalServerErrorHandler(restOperation)).process(); + return (new ErrorHandler(new httpErrors.InternalServerError())).process(); }) .then((handler) => { logger.info(`${handler.getCode()} ${restOperation.getMethod().toUpperCase()} ${restOperation.getUri().pathname}`); @@ -144,7 +141,7 @@ RequestRouter.prototype.findRequestHandler = function (restOperation, uriPrefix) // evaluate data as JSON and returns code 500 on failure. // Don't know how to re-define this behavior. if (restOperation.getBody() && restOperation.getContentType().toLowerCase() !== 'application/json') { - return new UnsupportedMediaTypeHandler(); + return new ErrorHandler(new httpErrors.UnsupportedMediaTypeError()); } const requestURI = restOperation.getUri(); @@ -161,13 +158,13 @@ RequestRouter.prototype.findRequestHandler = function (restOperation, uriPrefix) } const match = this.router.match(requestMethod, normalizedPathname); if (!match) { - return new BadURLHandler(restOperation); + return new ErrorHandler(new httpErrors.BadURLError(requestPathname)); } const RequestHandler = this.pathToMethod[match.path][requestMethod]; if (!RequestHandler) { const allowed = Object.keys(this.pathToMethod[match.path]); allowed.sort(); - return new MethodNotAllowedHandler(restOperation, allowed); + return new ErrorHandler(new httpErrors.MethodNotAllowedError(allowed)); } const handler = new RequestHandler(restOperation, match.params); diff --git a/src/lib/requestHandlers/systemPollerHandler.js b/src/lib/requestHandlers/systemPollerHandler.js index c9c5ce3d..5db7d3dd 100644 --- a/src/lib/requestHandlers/systemPollerHandler.js +++ b/src/lib/requestHandlers/systemPollerHandler.js @@ -11,7 +11,7 @@ const nodeUtil = require('util'); const BaseRequestHandler = 
require('./baseHandler'); -const errors = require('../errors'); +const ErrorHandler = require('./errorHandler'); const router = require('./router'); const systemPoller = require('../systemPoller'); @@ -61,17 +61,7 @@ SystemPollerEndpointHandler.prototype.process = function () { this.body = fetchedData.map(d => d.data); return this; }) - .catch((error) => { - if (error instanceof errors.ConfigLookupError) { - this.code = 404; - this.body = { - code: this.code, - message: error.message - }; - return this; - } - return Promise.reject(error); - }); + .catch(error => new ErrorHandler(error).process()); }; router.on('register', (routerInst, enableDebug) => { diff --git a/src/lib/utils/config.js b/src/lib/utils/config.js index 2bce9f00..e62b96d2 100644 --- a/src/lib/utils/config.js +++ b/src/lib/utils/config.js @@ -14,7 +14,8 @@ const logger = require('../logger'); const util = require('./misc'); const CLASSES = constants.CONFIG_CLASSES; -const VALIDATOR = declValidator.getValidator(); +// trigger early compile of all schemas +const VALIDATORS = declValidator.getValidators(); const POLLER_KEYS = { toCopyToMissingSystem: [ 'allowSelfSignedCert', 'enable', 'enableHostConnectivityCheck', 'host', @@ -34,21 +35,26 @@ const IHEALTH_POLLER_KEYS = { /** @module configUtil */ /** - * Gets the config validator + * Gets the config validators * * @public * - * @returns {Object} An instance of the config validator + * @returns {Object} Available config validation functions + * + * { + * full : validationFuncForFullSchema, + * $className: validationFuncForClassName + * } */ -function getValidator() { - return VALIDATOR; +function getValidators() { + return VALIDATORS; } /** * Validate JSON data against config schema * * @public - * @param {Object} validator - the validator instance to use + * @param {Object} validator - the validator function to use * @param {Object} data - data to validate against config schema * @param {Object} [context] - context to pass to validator * @@ -129,7 
+135,7 @@ function getComponentDefaults() { } }; - return validate(VALIDATOR, defaultDecl, { expand: true }); + return validate(getValidators().full, defaultDecl, { expand: true }); } /** @@ -498,7 +504,7 @@ function normalizeTelemetrySystemPollers(originalConfig, componentDefaults) { .filter(poller => originalConfig.refdPollers.indexOf(poller.id) === -1); function createSystemFromSystemPoller(systemPoller) { - const newSystem = componentDefaults[CLASSES.SYSTEM_CLASS_NAME]; + const newSystem = util.deepCopy(componentDefaults[CLASSES.SYSTEM_CLASS_NAME]); POLLER_KEYS.toCopyToMissingSystem.forEach((key) => { if (Object.prototype.hasOwnProperty.call(systemPoller, key)) { newSystem[key] = systemPoller[key]; @@ -535,7 +541,7 @@ function normalizeTelemetryIHealthPollers(originalConfig, componentDefaults) { .filter(poller => originalConfig.refdPollers.indexOf(poller.id) === -1); function createSystemFromIHealthPoller(iHealthPoller) { - const newSystem = componentDefaults[CLASSES.SYSTEM_CLASS_NAME]; + const newSystem = util.deepCopy(componentDefaults[CLASSES.SYSTEM_CLASS_NAME]); POLLER_KEYS.toCopyToMissingSystem.forEach((key) => { if (Object.prototype.hasOwnProperty.call(iHealthPoller, key)) { newSystem[key] = iHealthPoller[key]; @@ -792,7 +798,7 @@ function mergeNamespaceConfig(namespaceConfig, options) { module.exports = { getPollerTraceValue, - getValidator, + getValidators, validate, componentizeConfig, normalizeComponents, diff --git a/src/lib/utils/metadata.js b/src/lib/utils/metadata.js index 82fe78d3..c8d719d4 100644 --- a/src/lib/utils/metadata.js +++ b/src/lib/utils/metadata.js @@ -8,9 +8,10 @@ 'use strict'; -const util = require('./misc'); const azureUtil = require('../consumers/shared/azureUtil'); const logger = require('../logger'); +const retryPromise = require('./promise').retry; +const util = require('./misc'); /** @module metadataUtil */ // provides a facade for metadata related methods based on instance environment @@ -29,7 +30,7 @@ function 
getInstanceMetadata(consumerContext) { const consumerType = consumerContext.config.type; let promise = Promise.resolve(); if (consumerType.indexOf('Azure') > -1) { - promise = util.retryPromise(() => azureUtil.getInstanceMetadata(consumerContext), { maxTries: 1 }); + promise = retryPromise(() => azureUtil.getInstanceMetadata(consumerContext), { maxTries: 1 }); } return promise diff --git a/src/lib/utils/misc.js b/src/lib/utils/misc.js index 990ca7c4..9c326b7f 100644 --- a/src/lib/utils/misc.js +++ b/src/lib/utils/misc.js @@ -53,48 +53,6 @@ const fsPromisified = (function promisifyNodeFsModule(fsModule) { return newFsModule; }(fs)); -/** - * Function that will attempt the promise over and over again - * - * @param {Function} fn - function which returns Promise as the result of execution - * @param {Object} [opts] - options object - * @param {Array} [opts.args] - array of arguments to apply to the function. By default 'null'. - * @param {Object} [opts.context] - context to apply to the function (.apply). By default 'null'. - * @param {Number} [opts.maxTries] - max number of re-try attempts. By default '1'. - * @param {Function} [opts.callback] - callback(err) to execute when function failed. - * Should return 'true' to continue 'retry' process. By default 'null'. - * @param {Number} [opts.delay] - a delay to apply between attempts. By default 0. - * @param {Number} [opts.backoff] - a backoff factor to apply between attempts after the second try - * (most errors are resolved immediately by a second try without a delay). By default 0. 
- * - * @returns Promise resolved when 'fn' succeed - */ -function retryPromise(fn, opts) { - opts = opts || {}; - opts.tries = opts.tries || 0; - opts.maxTries = opts.maxTries || 1; - - return fn.apply(opts.context || null, opts.args || null) - .catch((err) => { - if (opts.tries < opts.maxTries && (!opts.callback || opts.callback(err))) { - opts.tries += 1; - let delay = opts.delay || 0; - - // applying backoff after the second try only - if (opts.backoff && opts.tries > 1) { - /* eslint-disable no-restricted-properties */ - delay += opts.backoff * Math.pow(2, opts.tries - 1); - } - if (delay) { - return new Promise((resolve) => { - setTimeout(() => resolve(retryPromise(fn, opts)), delay); - }); - } - return retryPromise(fn, opts); - } - return Promise.reject(err); - }); -} const VERSION_COMPARATORS = ['==', '===', '<', '<=', '>', '>=', '!=', '!==']; @@ -438,9 +396,6 @@ module.exports = { return objectGet(object, propertyPath, defaultValue); }, - /** @see retryPromise */ - retryPromise, - /** * @see fs */ diff --git a/src/lib/utils/moduleLoader.js b/src/lib/utils/moduleLoader.js index ea625233..a9468678 100644 --- a/src/lib/utils/moduleLoader.js +++ b/src/lib/utils/moduleLoader.js @@ -1,5 +1,3 @@ - - /* * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for * license terms. Notwithstanding anything to the contrary in the EULA, Licensee diff --git a/src/lib/utils/promise.js b/src/lib/utils/promise.js new file mode 100644 index 00000000..5415aa48 --- /dev/null +++ b/src/lib/utils/promise.js @@ -0,0 +1,109 @@ +/* + * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +/** + * Function that will attempt the promise over and over again + * + * @param {Function} fn - function which returns Promise as the result of execution + * @param {Object} [opts] - options object + * @param {Array} [opts.args] - array of arguments to apply to the function. By default 'null'. + * @param {Object} [opts.context] - context to apply to the function (.apply). By default 'null'. + * @param {Number} [opts.maxTries] - max number of re-try attempts. By default '1'. + * @param {Function} [opts.callback] - callback(err) to execute when the function fails. + * Should return 'true' to continue the 'retry' process. By default 'null'. + * @param {Number} [opts.delay] - a delay to apply between attempts. By default 0. + * @param {Number} [opts.backoff] - a backoff factor to apply between attempts after the second try + * (most errors are resolved immediately by a second try without a delay). By default 0. + * + * @returns {Promise} resolved when 'fn' succeeds + */ +function retry(fn, opts) { + opts = opts || {}; + opts.tries = opts.tries || 0; + opts.maxTries = opts.maxTries || 1; + + return fn.apply(opts.context || null, opts.args || null) + .catch((err) => { + if (opts.tries < opts.maxTries && (!opts.callback || opts.callback(err))) { + opts.tries += 1; + let delay = opts.delay || 0; + + // applying backoff after the second try only + if (opts.backoff && opts.tries > 1) { + /* eslint-disable no-restricted-properties */ + delay += opts.backoff * Math.pow(2, opts.tries - 1); + } + if (delay) { + return new Promise((resolve) => { + setTimeout(() => resolve(retry(fn, opts)), delay); + }); + } + return retry(fn, opts); + } + return Promise.reject(err); + }); +} + +module.exports = { + /** + * Returns a promise that resolves after all of the given promises have either fulfilled or rejected, + * with an array of objects, each describing the outcome of one promise.
+ * + * Note: the original method is available on node 12.9.0+ + * + * This function is useful when you run promises that don't depend on each other and + you don't want them to be left in an unknown state, as Promise.all does when one of the + promises is rejected. Ideally this function should be used everywhere instead of Promise.all. + * + * @returns {Promise<Array<PromiseResolutionStatus>>} resolved once all of the + * given promises have either fulfilled or rejected + */ + allSettled(promises) { + return Promise.all(promises.map(p => Promise.resolve(p) + .then( + val => ({ status: 'fulfilled', value: val }), + err => ({ status: 'rejected', reason: err }) + ))); + }, + + /** + * Get values returned by 'allSettled' + * + * Note: when 'ignoreRejected' is true then 'undefined' will be returned for rejected promises + * to preserve order as in the original array + * + * @param {Array} statuses - array of statuses + * @param {Boolean} [ignoreRejected = false] - ignore rejected promises + * + * @returns {Array} filtered results + * @throws {Error} original rejection error + */ + getValues(statuses, ignoreRejected) { + return statuses.map((status) => { + if (!ignoreRejected && typeof status.reason !== 'undefined') { + throw status.reason; + } + return status.value; + }); + }, + + /** @see retry */ + retry +}; + +/** + * Promise status + * + * @typedef PromiseResolutionStatus + * @type {Object} + * @property {String} status - fulfilled or rejected + * @property {Any} value - value returned by fulfilled promise + * @property {Error} reason - rejection reason (error object) + */ diff --git a/src/lib/utils/tracer.js b/src/lib/utils/tracer.js index c798db61..5394123e 100644 --- a/src/lib/utils/tracer.js +++ b/src/lib/utils/tracer.js @@ -1,5 +1,3 @@ - - /* * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for * license terms.
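The new `src/lib/utils/promise.js` module shown above bundles three helpers: `retry` (moved from `misc.js`), an `allSettled` backfill for node versions before 12.9.0, and `getValues` to unwrap its results. A minimal standalone sketch of how they compose — the `retry` body follows the diff; the `flaky` fixture and the option values are illustrative, not from the source:

```javascript
'use strict';

// Retry a promise-returning function; mirrors the retry() added in the diff.
// Backoff (opts.backoff * 2^(tries-1)) applies from the second retry onward.
function retry(fn, opts) {
    opts = opts || {};
    opts.tries = opts.tries || 0;
    opts.maxTries = opts.maxTries || 1;
    return fn.apply(opts.context || null, opts.args || null)
        .catch((err) => {
            if (opts.tries < opts.maxTries && (!opts.callback || opts.callback(err))) {
                opts.tries += 1;
                const delay = (opts.delay || 0)
                    + (opts.backoff && opts.tries > 1 ? opts.backoff * Math.pow(2, opts.tries - 1) : 0);
                return delay
                    ? new Promise(resolve => setTimeout(() => resolve(retry(fn, opts)), delay))
                    : retry(fn, opts);
            }
            return Promise.reject(err);
        });
}

// allSettled backfill: never rejects, reports each outcome as
// { status: 'fulfilled', value } or { status: 'rejected', reason }.
function allSettled(promises) {
    return Promise.all(promises.map(p => Promise.resolve(p).then(
        val => ({ status: 'fulfilled', value: val }),
        err => ({ status: 'rejected', reason: err })
    )));
}

// Unwrap allSettled results; rethrows a rejection unless told to ignore it,
// in which case 'undefined' keeps the slot so ordering is preserved.
function getValues(statuses, ignoreRejected) {
    return statuses.map((status) => {
        if (!ignoreRejected && typeof status.reason !== 'undefined') {
            throw status.reason;
        }
        return status.value;
    });
}

// Usage (illustrative fixture): fails twice, succeeds on the third attempt.
let attempts = 0;
function flaky() {
    attempts += 1;
    return attempts < 3 ? Promise.reject(new Error('boom')) : Promise.resolve('ok');
}

retry(flaky, { maxTries: 5, delay: 1, backoff: 2 })
    .then(result => allSettled([Promise.resolve(result), Promise.reject(new Error('bad'))]))
    .then((statuses) => {
        console.log(getValues(statuses, true)); // [ 'ok', undefined ]
    });
```

Note that `maxTries` counts re-try attempts, so the function is invoked at most `maxTries + 1` times; with the default of 1 a caller gets exactly one retry.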
Notwithstanding anything to the contrary in the EULA, Licensee diff --git a/src/nodejs/restWorker.js b/src/nodejs/restWorker.js index 54b5d974..09184aa6 100644 --- a/src/nodejs/restWorker.js +++ b/src/nodejs/restWorker.js @@ -18,7 +18,7 @@ const logger = require('../lib/logger'); const util = require('../lib/utils/misc'); const deviceUtil = require('../lib/utils/device'); -const retryPromise = require('../lib/utils/misc').retryPromise; +const retryPromise = require('../lib/utils/promise').retry; const persistentStorage = require('../lib/persistentStorage'); const configWorker = require('../lib/config'); const requestRouter = require('../lib/requestHandlers/router'); diff --git a/src/schema/1.18.0/base_schema.json b/src/schema/1.18.0/base_schema.json new file mode 100644 index 00000000..9bb2d9dd --- /dev/null +++ b/src/schema/1.18.0/base_schema.json @@ -0,0 +1,410 @@ +{ + "$id": "base_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming", + "description": "", + "type": "object", + "definitions": { + "enable": { + "title": "Enable", + "description": "This property can be used to enable/disable the poller/listener" , + "type": "boolean" + }, + "trace": { + "title": "Trace", + "description": "Enables data dumping to file. 
Boolean uses pre-defined file location, however value could be a string which contains path to a specific file instead" , + "type": ["boolean", "string"] + }, + "secret": { + "title": "Passphrase (secret)", + "description": "" , + "type": "object", + "properties": { + "class": { + "title": "Class", + "description": "Telemetry streaming secret class", + "type": "string", + "enum": [ "Secret" ], + "default": "Secret" + }, + "cipherText": { + "title": "Cipher Text: this contains a secret to encrypt", + "type": "string" + }, + "environmentVar": { + "title": "Environment Variable: this contains the named env var where the secret resides", + "type": "string" + }, + "protected": { + "$comment": "Meta property primarily used to determine if 'cipherText' needs to be encrypted", + "title": "Protected", + "type": "string", + "enum": [ "plainText", "plainBase64", "SecureVault" ], + "default": "plainText" + } + }, + "oneOf": [ + { "required": [ "cipherText" ] }, + { "required": [ "environmentVar" ] } + ], + "f5secret": true + }, + "username": { + "$comment": "Common field for username to use everywhere in scheme", + "title": "Username", + "type": "string" + }, + "stringOrSecret": { + "allOf": [ + { + "if": { "type": "string" }, + "then": {}, + "else": {} + }, + { + "if": { "type": "object" }, + "then": { "$ref": "base_schema.json#/definitions/secret" }, + "else": {} + } + ] + }, + "constants": { + "title": "Constants", + "description": "" , + "type": "object", + "properties": { + "class": { + "title": "Class", + "description": "Telemetry streaming constants class", + "type": "string", + "enum": [ "Constants" ] + } + }, + "additionalProperties": true + }, + "tag": { + "$comment": "Defaults do not get applied for $ref objects, so place defaults alongside instead.", + "title": "Tag", + "description": "" , + "type": "object", + "properties": { + "tenant": { + "title": "Tenant tag", + "type": "string" + }, + "application": { + "title": "Application tag", + "type": "string" + } + }, 
+ "additionalProperties": true + }, + "action": { + "title": "Action", + "description": "An action to be done on system data or on event data.", + "type": "object", + "properties": { + "enable": { + "title": "Enable", + "description": "Whether to enable this action in the declaration or not.", + "type": "boolean", + "default": true + }, + "setTag": { + "title": "Set Tag", + "description": "The tag values to be added.", + "type": "object", + "additionalProperties": true + }, + "ifAllMatch": { + "title": "If All Match", + "description": "The conditions that will be checked against. All must be true.", + "type": "object", + "additionalProperties": true + }, + "ifAnyMatch": { + "title": "If Any Match", + "description": "An array of ifAllMatch objects. Any individual ifAllMatch object may match, but each condition within an ifAllMatch object must be true", + "type": "array", + "additionalProperties": false + }, + "includeData": { + "title": "Include Data", + "description": "The data fields to include in the output", + "type": "object", + "additionalProperties": false + }, + "excludeData": { + "title": "Exclude Data", + "description": "The data fields to exclude in the output", + "type": "object", + "additionalProperties": false + }, + "locations": { + "title": "Location", + "description": "The location(s) to apply the action.", + "type": "object", + "additionalProperties": { + "$ref": "#/definitions/location" + } + } + }, + "dependencies": { + "includeData": { + "allOf": [ + { + "required": ["locations"] + }, + { + "not": { "required": ["setTag"] } + }, + { + "not": { "required": ["excludeData"] } + } + ] + }, + "excludeData": { + "allOf": [ + { + "required": ["locations"] + }, + { + "not": { "required": ["setTag"] } + }, + { + "not": { "required": ["includeData"] } + } + ] + }, + "setTag": { + "allOf": [ + { + "not": { "required": ["includeData"] } + }, + { + "not": { "required": ["excludeData"] } + } + ] + }, + "ifAnyMatch": { + "allOf": [ + { + "not": { "required": 
["ifAllMatch"] } + } + ] + }, + "ifAllMatch": { + "allOf": [ + { + "not": { "required": ["ifAnyMatch"] } + } + ] + } + }, + "additionalProperties": false, + "if": { + "required": [ "setTag" ], + "properties": { + "setTag": { + "anyOf": [ + { + "additionalProperties": { + "const": "`A`" + } + }, + { + "additionalProperties": { + "const": "`T`" + } + } + ] + } + } + }, + "then": { + "not": { + "required": ["locations"] + } + } + }, + "location": { + "title": "Location", + "description": "Used to specify a location in TS data. Use boolean type with value true to specify the location.", + "oneOf": [ + { + "type": "boolean", + "const": true + }, + { + "type": "object", + "additionalProperties": { + "$ref": "#/definitions/location" + } + } + ] + }, + "match": { + "$comment": "Defaults do not get applied for $ref objects, so place defaults alongside instead.", + "title": "Pattern to filter data", + "description": "", + "type": "string" + }, + "enableHostConnectivityCheck": { + "$comment": "This property can be used to enable/disable the host connectivity check in configurations where this is in effect", + "title": "Host", + "description": "" , + "type": "boolean" + }, + "allowSelfSignedCert": { + "$comment": "This property can be used by consumers, system pollers to enable/disable SSL Cert check", + "title": "Allow Self-Signed Certificate", + "description": "" , + "type": "boolean" + }, + "host": { + "$comment": "This property can be used by consumers, system pollers", + "title": "Host", + "description": "" , + "type": "string", + "anyOf": [ + { "format": "ipv4" }, + { "format": "ipv6" }, + { "format": "hostname" } + ], + "hostConnectivityCheck": true + }, + "port": { + "title": "Port", + "description": "" , + "type": "integer", + "minimum": 0, + "maximum": 65535 + }, + "protocol": { + "title": "Protocol", + "description": "" , + "type": "string", + "enum": [ "http", "https" ] + }, + "proxy": { + "title": "Proxy Configuration", + "description": "", + "type": "object", + 
"dependencies": { + "passphrase": [ "username" ] + }, + "required": [ "host" ], + "properties": { + "host": { + "$ref": "#/definitions/host" + }, + "port": { + "default": 80, + "allOf": [ + { + "$ref": "#/definitions/port" + } + ] + }, + "protocol": { + "default": "http", + "allOf": [ + { + "$ref": "#/definitions/protocol" + } + ] + }, + "enableHostConnectivityCheck": { + "$ref": "#/definitions/enableHostConnectivityCheck" + }, + "allowSelfSignedCert": { + "$ref": "#/definitions/allowSelfSignedCert" + }, + "username": { + "$ref": "#/definitions/username" + }, + "passphrase": { + "$ref": "#/definitions/secret" + } + }, + "additionalProperties": false + } + }, + "properties": { + "class": { + "title": "Class", + "description": "Telemetry streaming top level class", + "type": "string", + "enum": [ "Telemetry" ] + }, + "schemaVersion": { + "title": "Schema version", + "description": "Version of ADC Declaration schema this declaration uses", + "type": "string", + "$comment": "IMPORTANT: In enum array, please put current schema version first, oldest-supported version last. Keep enum array sorted most-recent-first.", + "enum": [ "1.18.0", "1.17.0", "1.16.0", "1.15.0", "1.14.0", "1.13.0", "1.12.0", "1.11.0", "1.10.0", "1.9.0", "1.8.0", "1.7.0", "1.6.0", "1.5.0", "1.4.0", "1.3.0", "1.2.0", "1.1.0", "1.0.0", "0.9.0" ], + "default": "1.18.0" + }, + "$schema": { + "title": "Schema", + "description": "", + "type": "string" + } + }, + "additionalProperties": { + "$comment": "AJV does not resolve defaults inside oneOf/anyOf, so instead use allOf. 
Any schema refs should also use allOf with an if/then/else on class", + "properties": { + "class": { + "title": "Class", + "type": "string", + "enum": [ + "Telemetry_System", + "Telemetry_System_Poller", + "Telemetry_Listener", + "Telemetry_Consumer", + "Telemetry_Pull_Consumer", + "Telemetry_iHealth_Poller", + "Telemetry_Endpoints", + "Telemetry_Namespace", + "Controls", + "Shared" + ] + } + }, + "allOf": [ + { + "$ref": "system_schema.json#" + }, + { + "$ref": "system_poller_schema.json#" + }, + { + "$ref": "listener_schema.json#" + }, + { + "$ref": "consumer_schema.json#" + }, + { + "$ref": "pull_consumer_schema.json#" + }, + { + "$ref": "ihealth_poller_schema.json#" + }, + { + "$ref": "endpoints_schema.json#" + }, + { + "$ref": "controls_schema.json#" + }, + { + "$ref": "shared_schema.json#" + }, + { + "$ref": "namespace_schema.json#" + } + ] + }, + "required": [ + "class" + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/consumer_schema.json b/src/schema/1.18.0/consumer_schema.json new file mode 100644 index 00000000..a732ce6b --- /dev/null +++ b/src/schema/1.18.0/consumer_schema.json @@ -0,0 +1,1000 @@ +{ + "$id": "consumer_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming Consumer schema", + "description": "", + "type": "object", + "definitions": { + "host": { + "$comment": "Required for certain consumers: standard property", + "title": "Host", + "description": "FQDN or IP address" , + "type": "string", + "anyOf": [ + { "format": "ipv4" }, + { "format": "ipv6" }, + { "format": "hostname" } + ], + "hostConnectivityCheck": true + }, + "protocols": { + "$comment": "Required for certain consumers: standard property", + "title": "Protocols (all)", + "description": "" , + "type": "string", + "enum": [ "https", "http", "tcp", "udp", "binaryTcpTls", "binaryTcp" ] + }, + "port": { + "$comment": "Required for certain consumers: standard property", + "title": "Port", + "description": "" , + "type": 
"integer", + "minimum": 0, + "maximum": 65535 + }, + "path": { + "$comment": "Required for certain consumers: standard property", + "title": "Path", + "description": "Path to post data to", + "type": ["string", "object"], + "f5expand": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/stringOrSecret" + } + ] + }, + "method": { + "$comment": "Required for certain consumers: standard property", + "title": "Method", + "description": "HTTP method to use (limited to sensical choices)" , + "type": "string", + "enum": [ "POST", "GET", "PUT" ] + }, + "headers": { + "$comment": "Required for certain consumers: standard property", + "title": "Headers", + "description": "HTTP headers to use" , + "type": "array", + "items": { + "properties": { + "name": { + "description": "Name of this header", + "type": "string", + "f5expand": true, + "minLength": 1 + }, + "value": { + "description": "Value of this header", + "type": ["string", "object"], + "f5expand": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/stringOrSecret" + } + ] + } + }, + "required": [ + "name", + "value" + ], + "additionalProperties": false + } + }, + "customOpts": { + "$comment": "Required for certain consumers: standard property", + "title": "Custom Opts (Client Library Dependent)", + "description": "Additional options for use by consumer client library. Refer to corresponding consumer lib documentation for acceptable keys and values." 
, + "type": "array", + "items": { + "properties": { + "name": { + "description": "Name of the option", + "type": "string", + "f5expand": true, + "minLength": 1 + }, + "value": { + "description": "Value of the option", + "minLength": 1, + "anyOf": [ + { + "type": "boolean" + }, + { + "type": "number" + }, + { + "allOf": [ + { + "f5expand": true + }, + { + "$ref": "base_schema.json#/definitions/stringOrSecret" + } + ] + } + ] + } + }, + "required": [ + "name", + "value" + ], + "additionalProperties": false + }, + "minItems": 1 + }, + "format": { + "$comment": "Required for certain consumers: Splunk", + "title": "Format (informs consumer additional formatting may be required)", + "description": "Legacy format is deprecated", + "type": "string", + "enum": [ "default", "legacy", "multiMetric" ] + }, + "username": { + "$comment": "Required for certain consumers: standard property", + "title": "Username", + "description": "" , + "type": "string", + "f5expand": true + }, + "region": { + "$comment": "Required for certain consumers: AWS_CloudWatch, AWS_S3, Azure_Log_Analytics, Azure_App_Insights", + "title": "Region", + "description": "" , + "type": "string", + "f5expand": true + }, + "bucket": { + "$comment": "Required for certain consumers: AWS_S3", + "title": "Bucket", + "description": "" , + "type": "string", + "f5expand": true + }, + "logGroup": { + "$comment": "Required for certain consumers: AWS_CloudWatch", + "title": "Log Group", + "description": "" , + "type": "string", + "f5expand": true + }, + "logStream": { + "$comment": "Required for certain consumers: AWS_CloudWatch", + "title": "Log Stream", + "description": "" , + "type": "string", + "f5expand": true + }, + "metricNamespace": { + "$comment": "Required for certain consumers: AWS_CloudWatch", + "title": "Metric Name", + "description": "The namespace for the metrics" , + "type": "string", + "f5expand": true, + "minLength": 1 + }, + "workspaceId": { + "$comment": "Required for certain consumers: 
Azure_Log_Analytics", + "title": "Workspace ID", + "description": "" , + "type": "string", + "f5expand": true + }, + "useManagedIdentity": { + "$comment": "Required for certain consumers: Azure_Log_Analytics and Azure_Application_Insights", + "title": "Use Managed Identity", + "description": "Determines whether to use Managed Identity to perform authorization for Azure services", + "type": "boolean", + "default": false + }, + "appInsightsResourceName": { + "$comment": "Required for certain consumers: Azure_Application_Insights", + "title": "Application Insights Resource Name (Pattern)", + "description": "Name filter used to determine which App Insights resource to send metrics to. If not provided, TS will send metrics to App Insights in the subscription in which the managed identity has permissions to", + "type": "string" + }, + "instrumentationKey": { + "$comment": "Required for certain consumers: Azure_Application_Insights", + "title": "Instrumentation Key", + "description": "Used to determine which App Insights resource to send metrics to", + "anyOf": [ + { + "type": "string", + "f5expand": true, + "minLength": 1 + }, + { + "type":"array", + "items": { + "type": "string", + "f5expand": true, + "minLength": 1 + }, + "minItems": 1 + } + ] + }, + "maxBatchIntervalMs": { + "$comment": "Required for certain consumers: Azure_Application_Insights", + "title": "Maximum Batch Interval (ms)", + "description": "The maximum amount of time to wait in milliseconds to for payload to reach maxBatchSize", + "type": "integer", + "minimum": 1000, + "default": 5000 + }, + "maxBatchSize": { + "$comment": "Required for certain consumers: Azure_Application_Insights", + "title": "Maximum Batch Size", + "description": "The maximum number of telemetry items to include in a payload to the ingestion endpoint", + "type": "integer", + "minimum": 1, + "default": 250 + }, + "topic": { + "$comment": "Required for certain consumers: Kafka", + "title": "Topic", + "description": "" , + "type": 
"string", + "f5expand": true + }, + "index": { + "$comment": "Required for certain consumers: ElasticSearch", + "title": "Index Name", + "description": "" , + "type": "string", + "f5expand": true + }, + "apiVersion": { + "$comment": "Required for certain consumers: ElasticSearch", + "title": "API Version", + "description": "" , + "type": "string", + "f5expand": true + }, + "dataType": { + "$comment": "Required for certain consumers: AWS_CloudWatch, ElasticSearch", + "title": "Data type", + "description": "" , + "type": "string", + "f5expand": true + }, + "authenticationProtocol": { + "$comment": "Required for certain consumers: Kafka", + "title": "Authentication Protocol", + "description": "" , + "type": "string", + "f5expand": true, + "enum": [ + "SASL-PLAIN", + "TLS", + "None" + ] + }, + "clientCertificate": { + "$comment": "Required for certain consumers: Kafka, Generic HTTP", + "title": "Client Certificate", + "description": "Certificate(s) to use when connecting to a secured endpoint.", + "type": "object", + "f5expand": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/secret" + } + ] + }, + "rootCertificate": { + "$comment": "Required for certain consumers: Kafka, Generic HTTP", + "title": "Root Certificate", + "description": "Certificate Authority root certificate, used to validate certificate chains.", + "type": "object", + "f5expand": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/secret" + } + ] + }, + "projectId": { + "$comment": "Required for certain consumers: Google_Cloud_Monitoring", + "title": "Project ID", + "description": "The ID of the relevant project.", + "type": "string", + "f5expand": true + }, + "serviceEmail": { + "$comment": "Required for certain consumers: Google_Cloud_Monitoring", + "title": "Service Email", + "description": "The service email.", + "type": "string", + "f5expand": true + }, + "privateKeyId": { + "$comment": "Required for certain consumers: Google_Cloud_Monitoring", + "title": "Private Key 
ID", + "description": "The private key ID.", + "type": "string", + "f5expand": true + }, + "privateKey": { + "$comment": "Required for certain consumers: Kafka, Generic HTTP", + "title": "Private Key", + "description": "Private Key", + "type": "object", + "f5expand": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/secret" + } + ] + }, + "f5csTenantId": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "F5CS Tenant ID", + "description": "" , + "type": "string", + "f5expand": true + }, + "f5csSensorId": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "F5CS Sensor ID", + "description": "" , + "type": "string", + "f5expand": true + }, + "payloadSchemaNid": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Namespace ID for payloadSchema", + "description": "" , + "type": "string", + "f5expand": true + }, + "serviceAccount": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Service Account", + "description": "Service Account to authentication" , + "type": "object", + "properties": { + "authType": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "SA Type", + "description": "" , + "type": "string", + "enum": ["google-auth" ] + }, + "type": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "SA Type", + "description": "" , + "type": "string", + "f5expand": true + }, + "projectId": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Project Id", + "description": "" , + "type": "string", + "f5expand": true + }, + "privateKeyId": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Private Key Id", + "description": "" , + "type": "string", + "f5expand": true + }, + "privateKey": { + "$ref": "base_schema.json#/definitions/secret" + }, + "clientEmail": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Client Email", + "description": "" , + "type": "string", + "f5expand": true 
+ }, + "clientId": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Client Id", + "description": "" , + "type": "string", + "f5expand": true + }, + "authUri": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Auth Uri", + "description": "" , + "type": "string", + "f5expand": true + }, + "tokenUri": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Token Uri", + "description": "" , + "type": "string", + "f5expand": true + }, + "authProviderX509CertUrl": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Auth Provider X509 Cert Url", + "description": "" , + "type": "string", + "f5expand": true + }, + "clientX509CertUrl": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Client X509 Cert Url", + "description": "" , + "type": "string", + "f5expand": true + } + }, + "additionalProperties": false, + "allOf": [ + { + "if": { "properties": { "authType": { "const": "google-auth" } } }, + "then": { + "required": [ + "type", + "projectId", + "privateKeyId", + "privateKey", + "clientEmail", + "clientId", + "authUri", + "tokenUri", + "authProviderX509CertUrl", + "clientX509CertUrl" + ] + }, + "else": {} + }] + }, + "targetAudience": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "Target Audience", + "description": "" , + "type": "string", + "f5expand": true + }, + "useSSL": { + "$comment": "Required for certain consumers: F5_Cloud", + "title": "useSSL", + "description": "To decide if GRPC connection should use SSL and then it is secured" , + "type": "boolean", + "f5expand": true + } + }, + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_Consumer" } } }, + "then": { + "required": [ + "class", + "type" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Consumer class", + "type": "string", + "enum": [ "Telemetry_Consumer" ] + }, + "enable": { + "default": true, + "allOf": [ + { + 
"$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "trace": { + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/trace" + } + ] + }, + "type": { + "title": "Type", + "description": "" , + "type": "string", + "enum": [ + "AWS_CloudWatch", + "AWS_S3", + "Azure_Log_Analytics", + "Azure_Application_Insights", + "default", + "ElasticSearch", + "Generic_HTTP", + "Google_Cloud_Monitoring", + "Google_StackDriver", + "Graphite", + "Kafka", + "Splunk", + "Statsd", + "Sumo_Logic", + "F5_Cloud" + ] + }, + "enableHostConnectivityCheck": { + "$ref": "base_schema.json#/definitions/enableHostConnectivityCheck" + }, + "allowSelfSignedCert": { + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/allowSelfSignedCert" + } + ] + } + }, + "allOf": [ + { + "$comment": "This allows enforcement of no additional properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "class": {}, + "enable": {}, + "trace": {}, + "type": {}, + "enableHostConnectivityCheck": {}, + "allowSelfSignedCert": {}, + "host": {}, + "protocol": {}, + "port": {}, + "path": {}, + "method": {}, + "headers": {}, + "customOpts": {}, + "username": {}, + "passphrase": {}, + "format": {}, + "workspaceId": {}, + "useManagedIdentity": {}, + "instrumentationKey": {}, + "appInsightsResourceName": {}, + "maxBatchIntervalMs": {}, + "maxBatchSize": {}, + "region": {}, + "logGroup": {}, + "logStream": {}, + "metricNamespace": {}, + "bucket": {}, + "topic": {}, + "apiVersion": {}, + "index": {}, + "dataType": {}, + "authenticationProtocol": {}, + "projectId": {}, + "serviceEmail": {}, + "privateKey": {}, + "privateKeyId": {}, + "clientCertificate": {}, + "rootCertificate": {}, + "fallbackHosts": {}, + "f5csTenantId": {}, + "f5csSensorId": {}, + "payloadSchemaNid": {}, + "serviceAccount": {}, + "targetAudience": {}, + "useSSL": {}, + "proxy": {} + }, + "additionalProperties": false + }, + { + "if": { "properties": { 
"type": { "const": "default" } } }, + "then": { + "required": [], + "properties": {} + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Generic_HTTP" } } }, + "then": { + "required": [ + "host" + ], + "properties": { + "host": { "$ref": "#/definitions/host" }, + "fallbackHosts": { + "type": "array", + "description": "List of FQDNs or IP addresses to be used as fallback hosts" , + "minItems": 1, + "items": { + "allOf": [{ + "$ref": "#/definitions/host" + }] + } + }, + "protocol": { "$ref": "#/definitions/protocols", "default": "https" }, + "port": { "$ref": "#/definitions/port", "default": 443 }, + "path": { "$ref": "#/definitions/path", "default": "/" }, + "method": { "$ref": "#/definitions/method", "default": "POST" }, + "headers": { "$ref": "#/definitions/headers" }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" }, + "proxy": { "$ref": "base_schema.json#/definitions/proxy" }, + "privateKey": { "$ref": "#/definitions/privateKey" }, + "clientCertificate": { "$ref": "#/definitions/clientCertificate" }, + "rootCertificate": { "$ref": "#/definitions/rootCertificate" } + }, + "allOf": [ + { + "if": { "required": [ "clientCertificate" ] }, + "then": { "required": [ "privateKey" ] } + }, + { + "if": { "required": [ "privateKey" ] }, + "then": { "required": [ "clientCertificate" ] } + } + ] + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Splunk" } } }, + "then": { + "required": [ + "host", + "passphrase" + ], + "properties": { + "host": { "$ref": "#/definitions/host" }, + "protocol": { "$ref": "#/definitions/protocols", "default": "https" }, + "port": { "$ref": "#/definitions/port", "default": 8088 }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" }, + "format": { "$ref": "#/definitions/format", "default": "default" }, + "proxy": { "$ref": "base_schema.json#/definitions/proxy" } + } + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Azure_Log_Analytics" } } }, + "then":
{ + "required": [ + "workspaceId" + ], + "properties": { + "workspaceId": { "$ref": "#/definitions/workspaceId" }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" }, + "useManagedIdentity": { "$ref": "#/definitions/useManagedIdentity", "default": false }, + "region": { "$ref": "#/definitions/region" } + }, + "allOf": [ + { + "dependencies": { + "passphrase": { + "anyOf": [ + { "not": {"required": [ "useManagedIdentity" ] } }, + { "properties": { "useManagedIdentity": { "const": false } } } + ] + } + } + }, + { + "if": { "not": { "required" : [ "useManagedIdentity"] } }, + "then": { "required": ["passphrase"] }, + "else": { + "if": { "properties": { "useManagedIdentity": { "const": true } } }, + "then": { "not": { "required": ["passphrase"] } }, + "else": { "required": ["passphrase"]} + } + } + ] + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Azure_Application_Insights" } } }, + "then": { + "properties": { + "instrumentationKey": { "$ref": "#/definitions/instrumentationKey" }, + "maxBatchSize": { "$ref": "#/definitions/maxBatchSize", "default": 250 }, + "maxBatchIntervalMs": { "$ref": "#/definitions/maxBatchIntervalMs", "default": 5000 }, + "customOpts": { "$ref": "#/definitions/customOpts" }, + "useManagedIdentity": { "$ref": "#/definitions/useManagedIdentity", "default": false }, + "appInsightsResourceName": { "$ref": "#/definitions/appInsightsResourceName" }, + "region": { "$ref": "#/definitions/region" } + }, + "allOf": [ + { + "dependencies": { + "instrumentationKey": { + "allOf": [ + { + "anyOf": [ + { "not": { "required": [ "useManagedIdentity" ] } }, + { "properties": { "useManagedIdentity": { "const": false } } } + ] + }, + { + "not": { "required": ["appInsightsResourceName"] } + } + ] + } + } + }, + { + "if": { "not": { "required" : [ "useManagedIdentity"] } }, + "then": { "required": ["instrumentationKey"] }, + "else": { + "if": { "properties": { "useManagedIdentity": { "const": true } } }, + "then": { "not": { 
"required": ["instrumentationKey"] } }, + "else": { + "allOf": [ + { "required": [ "instrumentationKey" ]}, + { "not": { "required": [ "appInsightsResourceName" ] } } + ] + } + } + }, + { + "if": { "required": [ "appInsightsResourceName" ] }, + "then": { "properties": { "appInsightsResourceName": { "minLength": 1 } }} + } + ] + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "AWS_CloudWatch" } } }, + "then": { + "required": [ + "region", + "dataType" + ], + "properties": { + "region": { "$ref": "#/definitions/region" }, + "dataType": { "$ref": "#/definitions/dataType", "default": "logs" }, + "username": { "$ref": "#/definitions/username" }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" } + }, + "allOf": [ + { "not": { "required": ["username"], "not": { "required": ["passphrase"] }}}, + { "not": { "required": ["passphrase"], "not": { "required": ["username"] }}}, + { "oneOf": + [ + { + "allOf": [ + { + "properties": { + "logGroup": { "$ref": "#/definitions/logGroup" }, + "logStream": { "$ref": "#/definitions/logStream" }, + "dataType": { + "allOf": + [ + { "$ref": "#/definitions/dataType"}, + { "enum": ["logs", null] } + ] + } + } + }, + { "required":[ "logGroup", "logStream" ] }, + { "not": { "required": ["metricNamespace"] }} + ] + }, + { + "allOf": [ + { + "properties": { + "metricNamespace": { "$ref": "#/definitions/metricNamespace" }, + "dataType": { + "allOf": [ + { "$ref": "#/definitions/dataType"}, + { "enum": ["metrics"] } + ] + } + } + }, + { "required":[ "metricNamespace" ] }, + { "not": { "required":[ "logStream" ] }}, + { "not": { "required": [ "logGroup" ] }} + ] + } + ] + } + ] + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "AWS_S3" } } }, + "then": { + "required": [ + "region", + "bucket" + ], + "properties": { + "region": { "$ref": "#/definitions/region" }, + "bucket": { "$ref": "#/definitions/bucket" }, + "username": { "$ref": "#/definitions/username" }, + "passphrase": { "$ref": 
"base_schema.json#/definitions/secret" } + }, + "dependencies": { + "passphrase": [ "username" ], + "username":[ "passphrase" ] + } + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Graphite" } } }, + "then": { + "required": [ + "host" + ], + "properties": { + "host": { "$ref": "#/definitions/host" }, + "protocol": { "$ref": "#/definitions/protocols", "default": "https" }, + "port": { "$ref": "#/definitions/port", "default": 443 }, + "path": { "$ref": "#/definitions/path", "default": "/events/" } + } + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Kafka" } } }, + "then": { + "required": [ + "host", + "topic" + ], + "properties": { + "authenticationProtocol": { "$ref": "#/definitions/authenticationProtocol", "default": "None" }, + "host": { "$ref": "#/definitions/host" }, + "protocol": { "$ref": "#/definitions/protocols", "default": "binaryTcpTls" }, + "port": { "$ref": "#/definitions/port", "default": 9092 }, + "topic": { "$ref": "#/definitions/topic" } + }, + "allOf": [ + { + "if": { "properties": { "authenticationProtocol": { "const": "SASL-PLAIN" } } }, + "then": { + "required": [ + "username" + ], + "properties": { + "username": { "$ref": "#/definitions/username" }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" } + }, + "dependencies": { + "passphrase": [ "username" ] + } + }, + "else": {} + }, + { + "if": { "properties": { "authenticationProtocol": { "const": "TLS" } } }, + "then": { + "required": [ + "privateKey", + "clientCertificate" + ], + "allOf": [ + { "not": { "required": [ "username" ] } }, + { "not": { "required": [ "passphrase" ] } } + ], + "properties": { + "privateKey": { "$ref": "#/definitions/privateKey" }, + "clientCertificate": { "$ref": "#/definitions/clientCertificate" }, + "rootCertificate": { "$ref": "#/definitions/rootCertificate" }, + "protocol": { "const": "binaryTcpTls" } + } + }, + "else": {} + } + ] + }, + "else": {} + }, + { + "if": { "properties": { "type": { 
"const": "ElasticSearch" } } }, + "then": { + "required": [ + "host", + "index" + ], + "properties": { + "host": { "$ref": "#/definitions/host" }, + "protocol": { "$ref": "#/definitions/protocols", "default": "https" }, + "port": { "$ref": "#/definitions/port", "default": 9200 }, + "path": { "$ref": "#/definitions/path" }, + "username": { "$ref": "#/definitions/username" }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" }, + "apiVersion": { "$ref": "#/definitions/apiVersion"}, + "index": { "$ref": "#/definitions/index" }, + "dataType": { "$ref": "#/definitions/dataType", "default": "f5.telemetry" } + } + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Sumo_Logic" } } }, + "then": { + "required": [ + "host", + "passphrase" + ], + "properties": { + "host": { "$ref": "#/definitions/host" }, + "protocol": { "$ref": "#/definitions/protocols", "default": "https" }, + "port": { "$ref": "#/definitions/port", "default": 443 }, + "path": { "$ref": "#/definitions/path", "default": "/receiver/v1/http/" }, + "passphrase": { "$ref": "base_schema.json#/definitions/secret" } + } + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Statsd" } } }, + "then": { + "required": [ + "host" + ], + "properties": { + "host": { "$ref": "#/definitions/host" }, + "protocol": { + "title": "Protocol", + "type": "string", + "enum": [ "tcp", "udp" ], + "default": "udp" + }, + "port": { "$ref": "#/definitions/port", "default": 8125 } + } + }, + "else": {} + }, + { + "if": { + "properties": { "type": { "enum": ["Google_Cloud_Monitoring", "Google_StackDriver"] } } + }, + "then": { + "required": [ + "projectId", + "privateKeyId", + "privateKey", + "serviceEmail" + ], + "properties": { + "privateKeyId": { "$ref": "#/definitions/privateKeyId" }, + "serviceEmail": { "$ref": "#/definitions/serviceEmail" }, + "privateKey": { "$ref": "base_schema.json#/definitions/secret" }, + "projectId": { "$ref": "#/definitions/projectId" } + } + }, + "else": {} 
+ }, + { + "if": { "properties": { "type": { "const": "F5_Cloud" } } }, + "then": { + "required": [ + "f5csTenantId", + "f5csSensorId", + "payloadSchemaNid", + "serviceAccount", + "targetAudience" + ], + "properties": { + "port": { "$ref": "#/definitions/port", "default": 443 }, + "f5csTenantId": { "$ref": "#/definitions/f5csTenantId" }, + "f5csSensorId": { "$ref": "#/definitions/f5csSensorId" }, + "payloadSchemaNid": { "$ref": "#/definitions/payloadSchemaNid" }, + "serviceAccount": { "$ref": "#/definitions/serviceAccount" }, + "targetAudience": { "$ref": "#/definitions/targetAudience" }, + "useSSL": { "$ref": "#/definitions/useSSL", "default": true } + }, + "nodeSupportVersion": "8.11.1" + }, + "else": {} + } + ] + }, + "else": {} + } + ] +} diff --git a/src/schema/1.18.0/controls_schema.json b/src/schema/1.18.0/controls_schema.json new file mode 100644 index 00000000..72e3be96 --- /dev/null +++ b/src/schema/1.18.0/controls_schema.json @@ -0,0 +1,44 @@ +{ + "$id": "controls_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming Controls schema", + "description": "", + "type": "object", + "allOf": [ + { + "if": { "properties": { "class": { "const": "Controls" } } }, + "then": { + "required": [ + "class" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Controls class", + "type": "string", + "enum": [ "Controls" ] + }, + "logLevel": { + "title": "Logging Level", + "description": "", + "type": "string", + "default": "info", + "enum": [ + "debug", + "info", + "error" + ] + }, + "debug": { + "title": "Enable debug mode", + "description": "", + "type": "boolean", + "default": false + } + }, + "additionalProperties": false + }, + "else": {} + } + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/endpoints_schema.json b/src/schema/1.18.0/endpoints_schema.json new file mode 100644 index 00000000..73fc5ee7 --- /dev/null +++ b/src/schema/1.18.0/endpoints_schema.json @@ 
-0,0 +1,158 @@ +{ + "$id": "endpoints_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming Endpoints schema", + "description": "", + "type": "object", + "definitions": { + "endpoint": { + "title": "Telemetry Endpoint", + "description": "", + "type": "object", + "properties": { + "enable": { + "title": "Enable endpoint", + "default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "name": { + "title": "Endpoint name", + "type": "string", + "minLength": 1 + }, + "path": { + "title": "Path to query data from", + "type": "string", + "minLength": 1 + } + }, + "additionalProperties": false + }, + "endpoints": { + "title": "Telemetry Endpoints", + "description": "", + "type": "object", + "properties": { + "enable": { + "title": "Enable endpoints", + "default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "basePath": { + "title": "Base Path", + "description": "Optional base path value to prepend to each individual endpoint path", + "type": "string", + "default": "" + }, + "items": { + "title": "Items", + "description": "Object in which each property defines an endpoint with its own properties", + "type": "object", + "additionalProperties": { + "allOf": [ + { + "$ref": "#/definitions/endpoint" + }, + { + "required": [ "path" ] + } + ] + }, + "minProperties": 1 + } + } + }, + "endpointsObjectRef": { + "allOf": [ + { + "$ref": "#/definitions/endpoints" + }, + { + "properties": { + "enable": {}, + "basePath": {}, + "items": {} + }, + "required": [ "items" ], + "additionalProperties": false + } + ] + }, + "endpointObjectRef": { + "allOf": [ + { + "$ref": "#/definitions/endpoint" + }, + { + "properties": { + "enable": {}, + "name": {}, + "path": {} + }, + "required": [ "name", "path" ], + "additionalProperties": false + } + ] + }, + "endpointsPointerRef": { + "title": "Telemetry_Endpoints Name", + "description": "Name of the Telemetry_Endpoints object", +
"type": "string", + "declarationClass": "Telemetry_Endpoints", + "minLength": 1 + }, + "endpointsItemPointerRef": { + "title": "Telemetry_Endpoints Name and Item Key", + "description": "Name of the Telemetry_Endpoints object and the endpoint item key, e.g. endpointsA/item1", + "type": "string", + "declarationClassProp": { + "path": "Telemetry_Endpoints/items", + "partsNum": 2 + }, + "minLength": 1 + } + }, + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_Endpoints" } } }, + "then": { + "required": [ + "class", + "items" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Endpoints class", + "type": "string", + "enum": [ "Telemetry_Endpoints" ] + } + }, + "allOf": [ + { + "$comment": "This allows enforcement of no additional properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "class": {}, + "enable": {}, + "basePath": {}, + "items": {} + }, + "additionalProperties": false + }, + { + "$ref": "#/definitions/endpoints" + } + ] + }, + "else": {} + } + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/ihealth_poller_schema.json b/src/schema/1.18.0/ihealth_poller_schema.json new file mode 100644 index 00000000..63083fd8 --- /dev/null +++ b/src/schema/1.18.0/ihealth_poller_schema.json @@ -0,0 +1,236 @@ +{ + "$id": "ihealth_poller_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming iHealth Poller schema", + "description": "", + "type": "object", + "definitions": { + "time24hr": { + "title": "Time in HH:MM, 24hr", + "description": "", + "type": "string", + "pattern": "^([0-9]|0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]?$" + }, + "iHealthPoller": { + "$comment": "system_schema.json should be updated when new property added", + "title": "iHealth Poller", + "description": "", + "type": "object", + "required": [ + "interval", + "username", + "passphrase" + ], + "properties": { + "enable": { +
"default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "trace": { + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/trace" + } + ] + }, + "proxy": { + "title": "Proxy configuration", + "properties": { + "port": { + "default": 80 + }, + "protocol": { + "default": "http" + } + }, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/proxy" + } + ] + }, + "username": { + "title": "iHealth Username", + "$ref": "base_schema.json#/definitions/username" + }, + "passphrase": { + "title": "iHealth Passphrase", + "$ref": "base_schema.json#/definitions/secret" + }, + "downloadFolder": { + "title": "Directory to download Qkview to", + "description": "", + "type": "string", + "minLength": 1, + "pathExists": true + }, + "interval": { + "title": "Operating interval", + "description": "" , + "type": "object", + "properties": { + "timeWindow": { + "title": "Window of two or more hours, in 24hr format, during which iHealth data can be sent", + "description": "", + "type": "object", + "properties": { + "start": { + "title": "Time when the window starts", + "$ref": "#/definitions/time24hr" + }, + "end": { + "title": "Time when the window ends", + "$ref": "#/definitions/time24hr" + } + }, + "timeWindowMinSize": 120, + "required": [ "start", "end" ], + "additionalProperties": false + }, + "frequency": { + "title": "Interval frequency", + "description": "", + "type": "string", + "default": "daily", + "enum": [ + "daily", + "weekly", + "monthly" + ] + } + + }, + "required": [ + "timeWindow" + ], + "allOf": [ + { + "if": { "properties": { "frequency": { "const": "daily" } } }, + "then": { + "properties": { + "timeWindow": {}, + "frequency": {} + }, + "additionalProperties": false + } + }, + { + "if": { "properties": { "frequency": { "const": "weekly" } } }, + "then": { + "properties": { + "timeWindow": {}, + "frequency": {}, + "day": { + "title": "", + "description": "", + "oneOf": [ + { + "type": "string", + "pattern":
"^([mM]onday|[tT]uesday|[wW]ednesday|[tT]hursday|[fF]riday|[sS]aturday|[sS]unday)$" + }, + { + "$comment": "0 and 7 eq. Sunday", + "type": "integer", + "minimum": 0, + "maximum": 7 + } + ] + } + }, + "required": [ "day" ], + "additionalProperties": false + } + }, + { + "if": { "properties": { "frequency": { "const": "monthly" } } }, + "then": { + "properties": { + "timeWindow": {}, + "frequency": {}, + "day": { + "title": "", + "description": "", + "type": "integer", + "minimum": 1, + "maximum": 31 + } + }, + "required": [ "day" ], + "additionalProperties": false + } + } + ] + } + } + }, + "iHealthPollerPointerRef": { + "type": "string", + "declarationClass": "Telemetry_iHealth_Poller" + }, + "iHealthPollerObjectRef": { + "allOf": [ + { + "$comment": "This allows enforcement of no additional properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "enable": {}, + "trace": {}, + "interval": {}, + "proxy": {}, + "username": {}, + "passphrase": {}, + "downloadFolder": {} + }, + "additionalProperties": false + }, + { + "$ref": "ihealth_poller_schema.json#/definitions/iHealthPoller" + } + ] + } + }, + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_iHealth_Poller" } } }, + "then": { + "required": [ + "class", + "username", + "passphrase" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming iHealth Poller class", + "type": "string", + "enum": [ "Telemetry_iHealth_Poller" ] + } + }, + "allOf": [ + { + "$comment": "This allows enforcement of no additional properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "class": {}, + "enable": {}, + "trace": {}, + "interval": {}, + "proxy": {}, + "username": {}, + "passphrase": {}, + "downloadFolder": {} + }, + "additionalProperties": false + }, + { + "$ref": "#/definitions/iHealthPoller" + } + ] + }, + "else": {} + } + ] +} \ No newline at end of file 
diff --git a/src/schema/1.18.0/listener_schema.json b/src/schema/1.18.0/listener_schema.json new file mode 100644 index 00000000..0ef0d872 --- /dev/null +++ b/src/schema/1.18.0/listener_schema.json @@ -0,0 +1,89 @@ +{ + "$id": "listener_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming event listener schema", + "description": "", + "type": "object", + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_Listener" } } }, + "then": { + "required": [ + "class" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Event Listener class", + "type": "string", + "enum": [ "Telemetry_Listener" ] + }, + "enable": { + "default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "trace": { + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/trace" + } + ] + }, + "port": { + "minimum": 1024, + "maximum": 65535, + "default": 6514, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/port" + } + ] + }, + "tag": { + "$comment": "Deprecated! 
Use actions with a setTag action.", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/tag" + } + ] + }, + "match": { + "default": "", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/match" + } + ] + }, + "actions": { + "title": "Actions", + "description": "Actions to be performed on the listener.", + "type": "array", + "items": { + "allOf": [ + { + "$ref": "base_schema.json#/definitions/action" + } + ] + }, + "default": [ + { + "setTag": { + "tenant": "`T`", + "application": "`A`" + } + } + ] + } + }, + "additionalProperties": false + }, + "else": {} + } + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/namespace_schema.json b/src/schema/1.18.0/namespace_schema.json new file mode 100644 index 00000000..f6cb09fc --- /dev/null +++ b/src/schema/1.18.0/namespace_schema.json @@ -0,0 +1,92 @@ +{ + "$id": "namespace_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming Namespace schema", + "description": "", + "type": "object", + "definitions": { + "namespace": { + "required": [ + "class" + ], + "type": "object", + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Namespace class", + "type": "string", + "enum": [ "Telemetry_Namespace" ] + } + }, + "additionalProperties": { + "$comment": "All objects supported under a Telemetry Namespace", + "properties": { + "class": { + "title": "Class", + "type": "string", + "enum": [ + "Telemetry_System", + "Telemetry_System_Poller", + "Telemetry_Listener", + "Telemetry_Consumer", + "Telemetry_Pull_Consumer", + "Telemetry_iHealth_Poller", + "Telemetry_Endpoints", + "Shared" + ] + } + }, + "allOf": [ + { + "$ref": "system_schema.json#" + }, + { + "$ref": "system_poller_schema.json#" + }, + { + "$ref": "listener_schema.json#" + }, + { + "$ref": "consumer_schema.json#" + }, + { + "$ref": "pull_consumer_schema.json#" + }, + { + "$ref": "ihealth_poller_schema.json#" + }, + { + "$ref": "endpoints_schema.json#" + }, + { + 
"$ref": "shared_schema.json#" + } + ] + } + } + }, + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_Namespace" } } }, + "then": { + "required": [ + "class" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Namespace class", + "type": "string", + "enum": [ "Telemetry_Namespace" ] + } + }, + "allOf": [ + { + "$ref": "#/definitions/namespace" + } + ] + }, + "else": {} + } + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/pull_consumer_schema.json b/src/schema/1.18.0/pull_consumer_schema.json new file mode 100644 index 00000000..0747cbfd --- /dev/null +++ b/src/schema/1.18.0/pull_consumer_schema.json @@ -0,0 +1,101 @@ +{ + "$id": "pull_consumer_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming Pull Consumer schema", + "description": "", + "type": "object", + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_Pull_Consumer" } } }, + "then": { + "required": [ + "class", + "type", + "systemPoller" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Pull Consumer class", + "type": "string", + "enum": [ "Telemetry_Pull_Consumer" ] + }, + "enable": { + "default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "trace": { + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/trace" + } + ] + }, + "type": { + "title": "Type", + "description": "" , + "type": "string", + "enum": [ + "default", + "Prometheus" + ] + }, + "systemPoller": { + "title": "Pointer to System Poller(s)", + "anyOf": [ + { + "$ref": "system_poller_schema.json#/definitions/systemPollerPointerRef" + }, + { + "type": "array", + "items": { + "anyOf": [ + { + "$ref": "system_poller_schema.json#/definitions/systemPollerPointerRef" + } + ] + }, + "minItems": 1 + } + ] + } + }, + "allOf": [ + { + "$comment": "This allows enforcement of no additional 
properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "class": {}, + "enable": {}, + "trace": {}, + "type": {}, + "systemPoller": {} + }, + "additionalProperties": false + }, + { + "if": { "properties": { "type": { "const": "default" } } }, + "then": { + "required": [], + "properties": {} + }, + "else": {} + }, + { + "if": { "properties": { "type": { "const": "Prometheus" } } }, + "then": { + "required": [], + "properties": {} + }, + "else": {} + } + ] + }, + "else": {} + } + ] +} diff --git a/src/schema/1.18.0/shared_schema.json b/src/schema/1.18.0/shared_schema.json new file mode 100644 index 00000000..aa96cb2e --- /dev/null +++ b/src/schema/1.18.0/shared_schema.json @@ -0,0 +1,50 @@ +{ + "$id": "shared_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry streaming Shared schema", + "description": "", + "type": "object", + "allOf": [ + { + "if": { "properties": { "class": { "const": "Shared" } } }, + "then": { + "required": [ + "class" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry streaming Shared class", + "type": "string", + "enum": [ "Shared" ] + } + }, + "additionalProperties": { + "properties": { + "class": { + "title": "Class", + "type": "string", + "enum": [ + "Constants", + "Secret" + ] + } + }, + "allOf": [ + { + "if": { "properties": { "class": { "const": "Constants" } } }, + "then": { "$ref": "base_schema.json#/definitions/constants" }, + "else": {} + }, + { + "if": { "properties": { "class": { "const": "Secret" } } }, + "then": { "$ref": "base_schema.json#/definitions/secret" }, + "else": {} + } + ] + } + }, + "else": {} + } + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/system_poller_schema.json b/src/schema/1.18.0/system_poller_schema.json new file mode 100644 index 00000000..0ebae3a4 --- /dev/null +++ b/src/schema/1.18.0/system_poller_schema.json @@ -0,0 +1,248 @@ +{ + "$id": 
"system_poller_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming system poller schema", + "description": "", + "type": "object", + "definitions": { + "systemPoller": { + "$comment": "system_schema.json should be updated when new property added", + "title": "System Poller", + "description": "", + "type": "object", + "properties": { + "enable": { + "default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "interval": { + "title": "Collection interval (in seconds)", + "description": "If endpointList is specified, minimum=1. Without endpointList, minimum=60 and maximum=6000. Allows setting interval=0 to not poll on an interval.", + "type": "integer", + "default": 300 + }, + "trace": { + "$ref": "base_schema.json#/definitions/trace" + }, + "tag": { + "$comment": "Deprecated! Use actions with a setTag action.", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/tag" + } + ] + }, + "actions": { + "title": "Actions", + "description": "Actions to be performed on the systemPoller.", + "type": "array", + "items": { + "allOf": [ + { + "$ref": "base_schema.json#/definitions/action" + } + ] + }, + "default": [ + { + "setTag": { + "tenant": "`T`", + "application": "`A`" + } + } + ] + }, + "endpointList": { + "title": "Endpoint List", + "description": "List of endpoints to use in data collection", + "oneOf": [ + { + "type": "array", + "items": { + "oneOf": [ + { + "$ref": "endpoints_schema.json#/definitions/endpointsPointerRef" + }, + { + "$ref": "endpoints_schema.json#/definitions/endpointsItemPointerRef" + }, + { + "if": { "required": [ "items" ]}, + "then": { + "$ref": "endpoints_schema.json#/definitions/endpointsObjectRef" + }, + "else": { + "$ref": "endpoints_schema.json#/definitions/endpointObjectRef" + } + } + + ] + }, + "minItems": 1 + }, + { + "$ref": "endpoints_schema.json#/definitions/endpointsPointerRef" + }, + { + "$ref":
"endpoints_schema.json#/definitions/endpointsObjectRef" + } + ] + } + }, + "oneOf": [ + { + "allOf": [ + { + "if": { "required": [ "endpointList" ] }, + "then": { + "properties": { + "interval": { + "minimum": 1 + } + } + }, + "else": { + "properties":{ + "interval": { + "minimum": 60, + "maximum": 6000 + } + } + } + } + ] + }, + { + "allOf": [ + { + "properties": { + "interval": { + "enum": [0] + } + } + } + ] + } + ] + }, + "systemPollerPointerRef": { + "type": "string", + "declarationClass": "Telemetry_System_Poller" + }, + "systemPollerObjectRef": { + "allOf": [ + { + "$comment": "This allows enforcement of no additional properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "enable": {}, + "trace": {}, + "interval": {}, + "tag": {}, + "actions": {}, + "endpointList": {} + }, + "additionalProperties": false + }, + { + "$ref": "#/definitions/systemPoller" + } + ] + } + }, + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_System_Poller" } } }, + "then": { + "required": [ + "class" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming System Poller class", + "type": "string", + "enum": [ "Telemetry_System_Poller" ] + }, + "host": { + "$comment": "Deprecated! Use Telemetry_System to define target device", + "default": "localhost", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/host" + } + ] + }, + "port": { + "$comment": "Deprecated! Use Telemetry_System to define target device", + "default": 8100, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/port" + } + ] + }, + "protocol": { + "$comment": "Deprecated! Use Telemetry_System to define target device", + "default": "http", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/protocol" + } + ] + }, + "allowSelfSignedCert": { + "$comment": "Deprecated! 
Use Telemetry_System to define target device", + "title": "Allow Self-Signed Certificate", + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/allowSelfSignedCert" + } + ] + }, + "enableHostConnectivityCheck": { + "$comment": "Deprecated! Use Telemetry_System to define target device", + "$ref": "base_schema.json#/definitions/enableHostConnectivityCheck" + }, + "username": { + "$comment": "Deprecated! Use Telemetry_System to define target device", + "$ref": "base_schema.json#/definitions/username" + }, + "passphrase": { + "$comment": "Deprecated! Use Telemetry_System to define target device", + "$ref": "base_schema.json#/definitions/secret" + } + }, + "allOf": [ + { + "$comment": "This allows enforcement of no additional properties in this nested schema - could reuse above properties but prefer a separate block", + "properties": { + "class": {}, + "enable": {}, + "trace": {}, + "interval": {}, + "tag": {}, + "host": {}, + "port": {}, + "protocol": {}, + "allowSelfSignedCert": {}, + "enableHostConnectivityCheck": {}, + "username": {}, + "passphrase": {}, + "actions": {}, + "endpointList": {} + }, + "additionalProperties": false + }, + { + "$ref": "#/definitions/systemPoller" + } + ] + } + } + ] +} \ No newline at end of file diff --git a/src/schema/1.18.0/system_schema.json b/src/schema/1.18.0/system_schema.json new file mode 100644 index 00000000..cba58faa --- /dev/null +++ b/src/schema/1.18.0/system_schema.json @@ -0,0 +1,121 @@ +{ + "$id": "system_schema.json", + "$schema": "http://json-schema.org/draft-07/schema#", + "title": "Telemetry Streaming System schema", + "description": "", + "type": "object", + "allOf": [ + { + "if": { "properties": { "class": { "const": "Telemetry_System" } } }, + "then": { + "required": [ + "class" + ], + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming System class", + "type": "string", + "enum": [ "Telemetry_System" ] + }, + "enable": { + "title": "Enable all pollers 
attached to device", + "default": true, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/enable" + } + ] + }, + "trace": { + "$ref": "base_schema.json#/definitions/trace" + }, + "host": { + "title": "System connection address", + "default": "localhost", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/host" + } + ] + }, + "port": { + "title": "System connection port", + "default": 8100, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/port" + } + ] + }, + "protocol": { + "title": "System connection protocol", + "default": "http", + "allOf": [ + { + "$ref": "base_schema.json#/definitions/protocol" + } + ] + }, + "allowSelfSignedCert": { + "title": "Allow Self-Signed Certificate", + "default": false, + "allOf": [ + { + "$ref": "base_schema.json#/definitions/allowSelfSignedCert" + } + ] + }, + "enableHostConnectivityCheck": { + "$ref": "base_schema.json#/definitions/enableHostConnectivityCheck" + }, + "username": { + "title": "System Username", + "$ref": "base_schema.json#/definitions/username" + }, + "passphrase": { + "title": "System Passphrase", + "$ref": "base_schema.json#/definitions/secret" + }, + "systemPoller": { + "title": "System Poller declaration", + "oneOf": [ + { + "$ref": "system_poller_schema.json#/definitions/systemPollerPointerRef" + }, + { + "$ref": "system_poller_schema.json#/definitions/systemPollerObjectRef" + }, + { + "type": "array", + "items": { + "anyOf": [ + { + "$ref": "system_poller_schema.json#/definitions/systemPollerObjectRef" + }, + { + "$ref": "system_poller_schema.json#/definitions/systemPollerPointerRef" + } + ] + }, + "minItems": 1 + } + ] + }, + "iHealthPoller": { + "title": "iHealth Poller declaration", + "oneOf": [ + { + "$ref": "ihealth_poller_schema.json#/definitions/iHealthPollerPointerRef" + }, + { + "$ref": "ihealth_poller_schema.json#/definitions/iHealthPollerObjectRef" + } + ] + } + }, + "additionalProperties": false + } + } + ] +} \ No newline at end of file diff --git 
a/src/schema/latest/base_schema.json b/src/schema/latest/base_schema.json index aec11552..9bb2d9dd 100644 --- a/src/schema/latest/base_schema.json +++ b/src/schema/latest/base_schema.json @@ -342,8 +342,8 @@ "description": "Version of ADC Declaration schema this declaration uses", "type": "string", "$comment": "IMPORTANT: In enum array, please put current schema version first, oldest-supported version last. Keep enum array sorted most-recent-first.", - "enum": [ "1.17.0", "1.16.0", "1.15.0", "1.14.0", "1.13.0", "1.12.0", "1.11.0", "1.10.0", "1.9.0", "1.8.0", "1.7.0", "1.6.0", "1.5.0", "1.4.0", "1.3.0", "1.2.0", "1.1.0", "1.0.0", "0.9.0" ], - "default": "1.17.0" + "enum": [ "1.18.0", "1.17.0", "1.16.0", "1.15.0", "1.14.0", "1.13.0", "1.12.0", "1.11.0", "1.10.0", "1.9.0", "1.8.0", "1.7.0", "1.6.0", "1.5.0", "1.4.0", "1.3.0", "1.2.0", "1.1.0", "1.0.0", "0.9.0" ], + "default": "1.18.0" }, "$schema": { "title": "Schema", diff --git a/src/schema/latest/consumer_schema.json b/src/schema/latest/consumer_schema.json index fb55c5fd..a732ce6b 100644 --- a/src/schema/latest/consumer_schema.json +++ b/src/schema/latest/consumer_schema.json @@ -274,7 +274,7 @@ ] }, "clientCertificate": { - "$comment": "Required for certain consumers: Kafka", + "$comment": "Required for certain consumers: Kafka, Generic HTTP", "title": "Client Certificate", "description": "Certificate(s) to use when connecting to a secured endpoint.", "type": "object", @@ -286,7 +286,7 @@ ] }, "rootCertificate": { - "$comment": "Required for certain consumers: Kafka", + "$comment": "Required for certain consumers: Kafka, Generic HTTP", "title": "Root Certificate", "description": "Certificate Authority root certificate, used to validate certificate chains.", "type": "object", @@ -319,7 +319,7 @@ "f5expand": true }, "privateKey": { - "$comment": "Required for certain consumers: Kafka", + "$comment": "Required for certain consumers: Kafka, Generic HTTP", "title": "Private Key", "description": "Private Key", "type": 
"object", @@ -617,8 +617,21 @@ "method": { "$ref": "#/definitions/method", "default": "POST" }, "headers": { "$ref": "#/definitions/headers" }, "passphrase": { "$ref": "base_schema.json#/definitions/secret" }, - "proxy": { "$ref": "base_schema.json#/definitions/proxy" } - } + "proxy": { "$ref": "base_schema.json#/definitions/proxy" }, + "privateKey": { "$ref": "#/definitions/privateKey" }, + "clientCertificate": { "$ref": "#/definitions/clientCertificate" }, + "rootCertificate": { "$ref": "#/definitions/rootCertificate" } + }, + "allOf": [ + { + "if": { "required": [ "clientCertificate" ] }, + "then": { "required": [ "privateKey" ] } + }, + { + "if": { "required": [ "privateKey" ] }, + "then": { "required": [ "clientCertificate" ] } + } + ] }, "else": {} }, diff --git a/src/schema/latest/namespace_schema.json b/src/schema/latest/namespace_schema.json index a0c9766c..f6cb09fc 100644 --- a/src/schema/latest/namespace_schema.json +++ b/src/schema/latest/namespace_schema.json @@ -4,6 +4,67 @@ "title": "Telemetry Streaming Namespace schema", "description": "", "type": "object", + "definitions": { + "namespace": { + "required": [ + "class" + ], + "type": "object", + "properties": { + "class": { + "title": "Class", + "description": "Telemetry Streaming Namespace class", + "type": "string", + "enum": [ "Telemetry_Namespace" ] + } + }, + "additionalProperties": { + "$comment": "All objects supported under a Telemetry Namespace", + "properties": { + "class": { + "title": "Class", + "type": "string", + "enum": [ + "Telemetry_System", + "Telemetry_System_Poller", + "Telemetry_Listener", + "Telemetry_Consumer", + "Telemetry_Pull_Consumer", + "Telemetry_iHealth_Poller", + "Telemetry_Endpoints", + "Shared" + ] + } + }, + "allOf": [ + { + "$ref": "system_schema.json#" + }, + { + "$ref": "system_poller_schema.json#" + }, + { + "$ref": "listener_schema.json#" + }, + { + "$ref": "consumer_schema.json#" + }, + { + "$ref": "pull_consumer_schema.json#" + }, + { + "$ref": 
"ihealth_poller_schema.json#" + }, + { + "$ref": "endpoints_schema.json#" + }, + { + "$ref": "shared_schema.json#" + } + ] + } + } + }, "allOf": [ { "if": { "properties": { "class": { "const": "Telemetry_Namespace" } } }, @@ -19,51 +80,11 @@ "enum": [ "Telemetry_Namespace" ] } }, - "additionalProperties": { - "$comment": "All objects supported under a Telemetry Namespace", - "properties": { - "class": { - "title": "Class", - "type": "string", - "enum": [ - "Telemetry_System", - "Telemetry_System_Poller", - "Telemetry_Listener", - "Telemetry_Consumer", - "Telemetry_Pull_Consumer", - "Telemetry_iHealth_Poller", - "Telemetry_Endpoints", - "Shared" - ] - } - }, - "allOf": [ - { - "$ref": "system_schema.json#" - }, - { - "$ref": "system_poller_schema.json#" - }, - { - "$ref": "listener_schema.json#" - }, - { - "$ref": "consumer_schema.json#" - }, - { - "$ref": "pull_consumer_schema.json#" - }, - { - "$ref": "ihealth_poller_schema.json#" - }, - { - "$ref": "endpoints_schema.json#" - }, - { - "$ref": "shared_schema.json#" - } - ] - } + "allOf": [ + { + "$ref": "#/definitions/namespace" + } + ] }, "else": {} } diff --git a/test/functional/cloud/awsTests.js b/test/functional/cloud/awsTests.js index 73dda44c..49dabc56 100644 --- a/test/functional/cloud/awsTests.js +++ b/test/functional/cloud/awsTests.js @@ -13,7 +13,7 @@ const assert = require('assert'); const AWS = require('aws-sdk'); const constants = require('./../shared/constants'); const testUtil = require('./../shared/util'); -const awsUtil = require('../../../src/lib/consumers/AWS_CloudWatch/awsUtil'); +const awsUtil = require('../../../src/lib/consumers/shared/awsUtil'); const ENV_FILE = process.env[constants.ENV_VARS.CLOUD.FILE]; const ENV_INFO = JSON.parse(fs.readFileSync(ENV_FILE)); diff --git a/test/functional/dutTests.js b/test/functional/dutTests.js index a375910a..7d29c1c7 100644 --- a/test/functional/dutTests.js +++ b/test/functional/dutTests.js @@ -10,7 +10,8 @@ 'use strict'; -const assert = 
require('assert'); +const chai = require('chai'); +const chaiAsPromised = require('chai-as-promised'); const fs = require('fs'); const net = require('net'); const readline = require('readline'); @@ -18,6 +19,9 @@ const util = require('./shared/util'); const constants = require('./shared/constants'); const DEFAULT_UNNAMED_NAMESPACE = require('../../src/lib/constants').DEFAULT_UNNAMED_NAMESPACE; +chai.use(chaiAsPromised); +const assert = chai.assert; + const duts = util.getHosts('BIGIP'); const packageDetails = util.getPackageDetails(); const basicDeclaration = JSON.parse(fs.readFileSync(constants.DECL.BASIC)); @@ -346,13 +350,11 @@ function setup() { ipProtocol: 'tcp', destination: { ports: [ - { - name: String(constants.EVENT_LISTENER_PORT) - }, - { - name: String(constants.EVENT_LISTENER_NAMESPACE_PORT) - } - ] + constants.EVENT_LISTENER_DEFAULT_PORT, + constants.EVENT_LISTENER_SECONDARY_PORT, + constants.EVENT_LISTENER_NAMESPACE_PORT, + constants.EVENT_LISTENER_NAMESPACE_SECONDARY_PORT + ].map(port => ({ name: String(port) })) } }); const postOptions = { @@ -396,6 +398,11 @@ function test() { { name: 'mixed declaration (default and namespace), verify default by "f5telemetry_default"', namespace: DEFAULT_UNNAMED_NAMESPACE + }, + { + name: 'basic declaration - namespace endpoint', + namespace: constants.DECL.NAMESPACE_NAME, + useNamespaceDeclare: true } ]; @@ -403,6 +410,8 @@ function test() { let declaration = util.deepCopy(basicDeclaration); if (testSetup.name.startsWith('mixed')) { declaration.My_Namespace = util.deepCopy(namespaceDeclaration.My_Namespace); + } else if (testSetup.useNamespaceDeclare) { + declaration = util.deepCopy(namespaceDeclaration.My_Namespace); } else if (testSetup.namespace && testSetup.namespace !== DEFAULT_UNNAMED_NAMESPACE) { declaration = util.deepCopy(namespaceDeclaration); } @@ -475,75 +484,92 @@ function test() { }; }); - it('should post same configuration twice and get it after', () => { - const uri = 
`${constants.BASE_ILX_URI}/declare`; - const postOptions = Object.assign(util.deepCopy(options), { - method: 'POST', - body: getDeclToUse(testSetup) - }); - let postResponses = []; - - // wait 2s to buffer consecutive POSTs - return util.sleep(2000) - .then(() => util.makeRequest(host, uri, util.deepCopy(postOptions))) - .then((data) => { - util.logger.info('POST request #1: Declaration response:', { host, data }); - assert.strictEqual(data.message, 'success'); - - checkPassphraseObject(data); - postResponses.push(data); - // wait for 5 secs while declaration will be applied and saved to storage - return util.sleep(5000); - }) - .then(() => util.makeRequest(host, uri, util.deepCopy(postOptions))) - .then((data) => { - util.logger.info('POST request #2: Declaration response:', { host, data }); - assert.strictEqual(data.message, 'success'); - - checkPassphraseObject(data); - postResponses.push(data); - - // wait for 5 secs while declaration will be applied and saved to storage - return util.sleep(5000); - }) - .then(() => util.makeRequest(host, uri, util.deepCopy(options))) - .then((data) => { - util.logger.info('GET request: Declaration response:', { host, data }); - assert.strictEqual(data.message, 'success'); - - checkPassphraseObject(data); - postResponses.push(data); - - // compare GET to recent POST - assert.deepStrictEqual(postResponses[2], postResponses[1]); - // lest compare first POST to second POST (only one difference is secrets) - postResponses = postResponses.map(removeCipherTexts); - assert.deepStrictEqual(postResponses[0], postResponses[1]); + describe('basic checks', () => { + it('should post same configuration twice and get it after', () => { + const uri = testSetup.useNamespaceDeclare ? 
`${constants.BASE_ILX_URI}${namespacePath}/declare` : `${constants.BASE_ILX_URI}/declare`; + const postOptions = Object.assign(util.deepCopy(options), { + method: 'POST', + body: getDeclToUse(testSetup) }); - }); + let postResponses = []; + + // wait 2s to buffer consecutive POSTs + return util.sleep(2000) + .then(() => util.makeRequest(host, uri, util.deepCopy(postOptions))) + .then((data) => { + util.logger.info('POST request #1: Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); + + checkPassphraseObject(data); + postResponses.push(data); + // wait 5s for the declaration to be applied and saved to storage + return util.sleep(5000); + }) + .then(() => util.makeRequest(host, uri, util.deepCopy(postOptions))) + .then((data) => { + util.logger.info('POST request #2: Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); + + checkPassphraseObject(data); + postResponses.push(data); + + // wait 5s for the declaration to be applied and saved to storage + return util.sleep(5000); + }) + .then(() => util.makeRequest(host, uri, util.deepCopy(options))) + .then((data) => { + util.logger.info('GET request: Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); + + checkPassphraseObject(data); + postResponses.push(data); + + // compare GET to recent POST + assert.deepStrictEqual(postResponses[2], postResponses[1]); + // let's compare first POST to second POST (the only difference is the secrets) + postResponses = postResponses.map(removeCipherTexts); + assert.deepStrictEqual(postResponses[0], postResponses[1]); + }) + .then(() => { + if (testSetup.useNamespaceDeclare) { + util.logger.info('Additional test for namespace endpoint - verify full declaration'); + const url = `${constants.BASE_ILX_URI}/declare`; + + return util.makeRequest(host, url, util.deepCopy(options)) + .then((data) => { + util.logger.info('GET request: Declaration response', { host, data });
assert.strictEqual(data.message, 'success'); + // verify merged decl + assert.isTrue(typeof data.declaration[constants.DECL.NAMESPACE_NAME] !== 'undefined'); // named namespace + assert.isTrue(typeof data.declaration[constants.DECL.SYSTEM_NAME] !== 'undefined'); // default namespace + }); + } + return Promise.resolve(); + }); + }); - it('should get response from systempoller endpoint', () => { - const uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; - // wait 500ms in case if config was not applied yet - return util.sleep(500) - .then(() => util.makeRequest(host, uri, options)) - .then((data) => { - data = data || []; - util.logger.info(`SystemPoller response (${uri}):`, { host, data }); - assert.strictEqual(data.length, 1); - // read schema and validate data - data = data[0]; - const schema = JSON.parse(fs.readFileSync(constants.DECL.SYSTEM_POLLER_SCHEMA)); - const valid = util.validateAgainstSchema(data, schema); - if (valid !== true) { - assert.fail(`output is not valid: ${JSON.stringify(valid.errors)}`); - } - }); - }); - it('should ensure event listener is up', () => { - const connectToEventListener = port => util.sleep(500) - .then(() => new Promise((resolve, reject) => { + it('should get response from systempoller endpoint', () => { + const uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; + // wait 500ms in case if config was not applied yet + return util.sleep(500) + .then(() => util.makeRequest(host, uri, options)) + .then((data) => { + data = data || []; + util.logger.info(`SystemPoller response (${uri}):`, { host, data }); + assert.strictEqual(data.length, 1); + // read schema and validate data + data = data[0]; + const schema = JSON.parse(fs.readFileSync(constants.DECL.SYSTEM_POLLER_SCHEMA)); + const valid = util.validateAgainstSchema(data, schema); + if (valid !== true) { + assert.fail(`output is not valid: ${JSON.stringify(valid.errors)}`); + } + }); + }); + + 
it('should ensure event listener is up', () => { + const connectToEventListener = port => new Promise((resolve, reject) => { + const client = net.createConnection({ host, port }, () => { + client.end(); + }); @@ -551,157 +577,254 @@ resolve(); }); client.on('error', (err) => { - reject(err); + reject(new Error(`Cannot connect to TCP port ${port}: ${err}`)); - })); - const decl = JSON.stringify(getDeclToUse(testSetup)); - - const promises = [ - constants.EVENT_LISTENER_PORT, - constants.EVENT_LISTENER_NAMESPACE_PORT - ].map((portToCheck) => { - if (decl.indexOf(portToCheck) !== -1) { - return connectToEventListener(portToCheck); - } - return Promise.resolve(); - }); - return Promise.all(promises); - }); + }); - ifNoNamespaceIt('should apply configuration containing system poller filtering', testSetup, () => { - let uri = `${constants.BASE_ILX_URI}/declare`; - const postOptions = Object.assign(util.deepCopy(options), { - method: 'POST', - body: fs.readFileSync(constants.DECL.FILTER).toString() - }); + // ports = { opened: [], closed: [] } + const checkPorts = ports => Promise.all( + (ports.opened || []).map( + openedPort => assert.isFulfilled(connectToEventListener(openedPort)) + ).concat( + (ports.closed || []).map( + closedPort => connectToEventListener(closedPort) + .then( + () => Promise.reject(new Error(`Port ${closedPort} expected to be closed`)), + () => {} // do nothing on catch + ) + ) + ) + ); + + const findListeners = (obj, cb) => { + if (typeof obj === 'object') { + if (obj.class === 'Telemetry_Listener') { + cb(obj); + } else { + Object.keys(obj).forEach(key => findListeners(obj[key], cb)); + } + } + }; - // wait 2s to buffer consecutive POSTs - return util.sleep(2000) - .then(() => util.makeRequest(host, uri, postOptions)) - .then((data) => { - util.logger.info('Declaration response:', { host, data }); - assert.strictEqual(data.message, 'success'); - // wait 5s in case if config was not applied yet - return util.sleep(5000); - }) - 
.then(() => { - uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; - return util.makeRequest(host, uri, util.deepCopy(options)); - }) - .then((data) => { - data = data || []; - util.logger.info(`Filtered SystemPoller response (${uri}):`, { host, data }); - - assert.strictEqual(data.length, 1); - // verify that certain data was filtered out, while other data was preserved - data = data[0]; - assert.strictEqual(Object.keys(data.system).indexOf('provisioning'), -1); - assert.strictEqual(Object.keys(data.system.diskStorage).indexOf('/usr'), -1); - assert.notStrictEqual(Object.keys(data.system.diskStorage).indexOf('/'), -1); - assert.notStrictEqual(Object.keys(data.system).indexOf('version'), -1); - assert.notStrictEqual(Object.keys(data.system).indexOf('hostname'), -1); + const fetchListenerPorts = (decl) => { + const ports = []; + findListeners(decl, listener => ports.push(listener.port || 6514)); + return ports; + }; + + const allListenerPorts = [ + constants.EVENT_LISTENER_DEFAULT_PORT, + constants.EVENT_LISTENER_SECONDARY_PORT, + constants.EVENT_LISTENER_NAMESPACE_PORT, + constants.EVENT_LISTENER_NAMESPACE_SECONDARY_PORT + ]; + const newPorts = [ + constants.EVENT_LISTENER_SECONDARY_PORT, + constants.EVENT_LISTENER_NAMESPACE_SECONDARY_PORT + ]; + + const uri = testSetup.useNamespaceDeclare ? `${constants.BASE_ILX_URI}${namespacePath}/declare` : `${constants.BASE_ILX_URI}/declare`; + const postOptions = Object.assign(util.deepCopy(options), { + method: 'POST', + body: testSetup.useNamespaceDeclare ? 
{ class: 'Telemetry_Namespace' } : { class: 'Telemetry' } }); + return util.makeRequest(host, uri, util.deepCopy(postOptions)) + .then(() => checkPorts({ + closed: util.deepCopy(allListenerPorts) + })) + .then(() => { + postOptions.body = getDeclToUse(testSetup); + return util.makeRequest(host, uri, util.deepCopy(postOptions)); + }) + .then(() => { + const ports = { opened: fetchListenerPorts(postOptions.body) }; + ports.closed = allListenerPorts.filter(port => ports.opened.indexOf(port) === -1); + return checkPorts(ports); + }) + // post declaration again and check that listeners are still available + .then(() => util.makeRequest(host, uri, util.deepCopy(postOptions))) + .then(() => { + const ports = { opened: fetchListenerPorts(postOptions.body) }; + ports.closed = allListenerPorts.filter(port => ports.opened.indexOf(port) === -1); + return checkPorts(ports); + }) + .then(() => { + let idx = 0; + // already a copy + postOptions.body = getDeclToUse(testSetup); + // move all listeners to new ports + findListeners(postOptions.body, (listener) => { + if (idx >= newPorts.length) { + throw new Error(`Expected ${newPorts.length} listeners only`); + } + listener.port = newPorts[idx]; + idx += 1; + }); + return util.makeRequest(host, uri, util.deepCopy(postOptions)); + }) + .then(() => { + const ports = { opened: fetchListenerPorts(postOptions.body) }; + ports.closed = allListenerPorts.filter(port => ports.opened.indexOf(port) === -1); + return checkPorts(ports); + }) + // post declaration again and check that listeners are still available + .then(() => util.makeRequest(host, uri, util.deepCopy(postOptions))) + .then(() => { + const ports = { opened: fetchListenerPorts(postOptions.body) }; + ports.closed = allListenerPorts.filter(port => ports.opened.indexOf(port) === -1); + return checkPorts(ports); + }) + .then(() => { + // already a copy + postOptions.body = getDeclToUse(testSetup); + // disable all listeners + findListeners(postOptions.body, (listener) => { + listener.enable =
false; + }); + return util.makeRequest(host, uri, util.deepCopy(postOptions)); + }) + .then(() => checkPorts({ + closed: util.deepCopy(allListenerPorts) + })); + }); }); - ifNoNamespaceIt('should apply configuration containing chained system poller actions', testSetup, () => { - let uri = `${constants.BASE_ILX_URI}/declare`; - const postOptions = Object.assign(util.deepCopy(options), { - method: 'POST', - body: fs.readFileSync(constants.DECL.ACTION_CHAINING).toString() + describe('advanced options', () => { + ifNoNamespaceIt('should apply configuration containing system poller filtering', testSetup, () => { + let uri = `${constants.BASE_ILX_URI}/declare`; + const postOptions = Object.assign(util.deepCopy(options), { + method: 'POST', + body: fs.readFileSync(constants.DECL.FILTER).toString() + }); + + // wait 2s to buffer consecutive POSTs + return util.sleep(2000) + .then(() => util.makeRequest(host, uri, postOptions)) + .then((data) => { + util.logger.info('Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); + // wait 5s in case if config was not applied yet + return util.sleep(5000); + }) + .then(() => { + uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; + return util.makeRequest(host, uri, util.deepCopy(options)); + }) + .then((data) => { + data = data || []; + util.logger.info(`Filtered SystemPoller response (${uri}):`, { host, data }); + + assert.strictEqual(data.length, 1); + // verify that certain data was filtered out, while other data was preserved + data = data[0]; + assert.strictEqual(Object.keys(data.system).indexOf('provisioning'), -1); + assert.strictEqual(Object.keys(data.system.diskStorage).indexOf('/usr'), -1); + assert.notStrictEqual(Object.keys(data.system.diskStorage).indexOf('/'), -1); + assert.notStrictEqual(Object.keys(data.system).indexOf('version'), -1); + assert.notStrictEqual(Object.keys(data.system).indexOf('hostname'), -1); + }); }); - // wait 2s to buffer 
consecutive POSTs - return util.sleep(2000) - .then(() => util.makeRequest(host, uri, postOptions)) - .then((data) => { - util.logger.info('Declaration response:', { host, data }); - assert.strictEqual(data.message, 'success'); - // wait 5s in case if config was not applied yet - return util.sleep(5000); - }) - .then(() => { - uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; - return util.makeRequest(host, uri, util.deepCopy(options)); - }) - .then((data) => { - data = data || {}; - util.logger.info(`Filtered SystemPoller response (${uri}):`, { host, data }); - - assert.strictEqual(data.length, 1); - data = data[0]; - // verify /var is included with, with 1_tagB removed - assert.notStrictEqual(Object.keys(data.system.diskStorage).indexOf('/var'), -1); - assert.deepEqual(data.system.diskStorage['/var']['1_tagB'], { '1_valueB_1': 'value1' }); - // verify /var/log is included with, with 1_tagB included - assert.strictEqual(Object.keys(data.system.diskStorage['/var/log']).indexOf('1_tagB'), -1); - assert.deepEqual(data.system.diskStorage['/var/log']['1_tagA'], 'myTag'); + ifNoNamespaceIt('should apply configuration containing chained system poller actions', testSetup, () => { + let uri = `${constants.BASE_ILX_URI}/declare`; + const postOptions = Object.assign(util.deepCopy(options), { + method: 'POST', + body: fs.readFileSync(constants.DECL.ACTION_CHAINING).toString() }); - }); - ifNoNamespaceIt('should apply configuration containing filters with ifAnyMatch', testSetup, () => { - let uri = `${constants.BASE_ILX_URI}/declare`; - const postOptions = Object.assign(util.deepCopy(options), { - method: 'POST', - body: fs.readFileSync(constants.DECL.FILTERING_WITH_MATCHING).toString() + // wait 2s to buffer consecutive POSTs + return util.sleep(2000) + .then(() => util.makeRequest(host, uri, postOptions)) + .then((data) => { + util.logger.info('Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); 
+ // wait 5s in case if config was not applied yet + return util.sleep(5000); + }) + .then(() => { + uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; + return util.makeRequest(host, uri, util.deepCopy(options)); + }) + .then((data) => { + data = data || {}; + util.logger.info(`Filtered SystemPoller response (${uri}):`, { host, data }); + + assert.strictEqual(data.length, 1); + data = data[0]; + // verify /var is included, with 1_tagB applied + assert.notStrictEqual(Object.keys(data.system.diskStorage).indexOf('/var'), -1); + assert.deepEqual(data.system.diskStorage['/var']['1_tagB'], { '1_valueB_1': 'value1' }); + // verify /var/log is included, with 1_tagB removed + assert.strictEqual(Object.keys(data.system.diskStorage['/var/log']).indexOf('1_tagB'), -1); + assert.deepEqual(data.system.diskStorage['/var/log']['1_tagA'], 'myTag'); + }); + }); - ifNoNamespaceIt('should apply configuration containing filters with ifAnyMatch', 
testSetup, () => { + let uri = `${constants.BASE_ILX_URI}/declare`; + const postOptions = Object.assign(util.deepCopy(options), { + method: 'POST', + body: fs.readFileSync(constants.DECL.FILTERING_WITH_MATCHING).toString() }); - }); - ifNoNamespaceIt('should apply configuration containing multiple system pollers and endpointList', testSetup, () => { - let uri = `${constants.BASE_ILX_URI}/declare`; - const postOptions = Object.assign(util.deepCopy(options), { - method: 'POST', - body: fs.readFileSync(constants.DECL.ENDPOINTLIST).toString() + // wait 2s to buffer consecutive POSTs + return util.sleep(2000) + .then(() => util.makeRequest(host, uri, postOptions)) + .then((data) => { + util.logger.info('Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); + // wait 5s in case if config was not applied yet + return util.sleep(5000); + }) + .then(() => { + uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; + return util.makeRequest(host, uri, util.deepCopy(options)); + }) + .then((data) => { + data = data || {}; + util.logger.info(`Filtered and Matched SystemPoller response (${uri}):`, { host, data }); + + assert.strictEqual(data.length, 1); + data = data[0]; + // verify that 'system' key and child objects are included + assert.deepEqual(Object.keys(data), ['system']); + assert.ok(Object.keys(data.system).length > 1); + // verify that 'system.diskStorage' is NOT excluded + assert.notStrictEqual(Object.keys(data.system).indexOf('diskStorage'), -1); + }); }); - // wait 2s to buffer consecutive POSTs - return util.sleep(2000) - .then(() => util.makeRequest(host, uri, postOptions)) - .then((data) => { - util.logger.info('Declaration response:', { host, data }); - assert.strictEqual(data.message, 'success'); - // wait 2s in case if config was not applied yet - return util.sleep(2000); - }) - .then(() => { - uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; 
- return util.makeRequest(host, uri, util.deepCopy(options)); - }) - .then((data) => { - util.logger.info(`System Poller with endpointList response (${uri}):`, { host, data }); - assert.ok(Array.isArray(data)); - - const pollerOneData = data[0]; - const pollerTwoData = data[1]; - assert.notStrictEqual(pollerOneData.custom_ipOther, undefined); - assert.notStrictEqual(pollerOneData.custom_dns, undefined); - assert.ok(pollerTwoData.custom_provisioning.items.length > 0); + ifNoNamespaceIt('should apply configuration containing multiple system pollers and endpointList', testSetup, () => { + let uri = `${constants.BASE_ILX_URI}/declare`; + const postOptions = Object.assign(util.deepCopy(options), { + method: 'POST', + body: fs.readFileSync(constants.DECL.ENDPOINTLIST).toString() }); + + // wait 2s to buffer consecutive POSTs + return util.sleep(2000) + .then(() => util.makeRequest(host, uri, postOptions)) + .then((data) => { + util.logger.info('Declaration response:', { host, data }); + assert.strictEqual(data.message, 'success'); + // wait 2s in case if config was not applied yet + return util.sleep(2000); + }) + .then(() => { + uri = `${constants.BASE_ILX_URI}${namespacePath}/systempoller/${constants.DECL.SYSTEM_NAME}`; + return util.makeRequest(host, uri, util.deepCopy(options)); + }) + .then((data) => { + util.logger.info(`System Poller with endpointList response (${uri}):`, { host, data }); + assert.ok(Array.isArray(data)); + + const pollerOneData = data[0]; + const pollerTwoData = data[1]; + assert.notStrictEqual(pollerOneData.custom_ipOther, undefined); + assert.notStrictEqual(pollerOneData.custom_dns, undefined); + assert.ok(pollerTwoData.custom_provisioning.items.length > 0); + }); + }); }); }); }); diff --git a/test/functional/shared/basic_namespace.json b/test/functional/shared/basic_namespace.json index cc95c38a..94025650 100644 --- a/test/functional/shared/basic_namespace.json +++ b/test/functional/shared/basic_namespace.json @@ -15,7 +15,7 @@ }, 
"My_Listener": { "class": "Telemetry_Listener", - "port": 6515 + "port": 56516 }, "My_Consumer": { "class": "Telemetry_Consumer", diff --git a/test/functional/shared/constants.js b/test/functional/shared/constants.js index 0711666a..555da22e 100644 --- a/test/functional/shared/constants.js +++ b/test/functional/shared/constants.js @@ -88,8 +88,10 @@ module.exports = { GCP_PRIVATE_KEY: 'GCP_PRIVATE_KEY', GCP_SERVICE_EMAIL: 'GCP_SERVICE_EMAIL' }, - EVENT_LISTENER_PORT: 6514, - EVENT_LISTENER_NAMESPACE_PORT: 6515, + EVENT_LISTENER_DEFAULT_PORT: 6514, // default port + EVENT_LISTENER_SECONDARY_PORT: 56515, + EVENT_LISTENER_NAMESPACE_PORT: 56516, + EVENT_LISTENER_NAMESPACE_SECONDARY_PORT: 56517, REQUEST: { PORT: 443, PROTOCOL: 'https' diff --git a/test/functional/shared/util.js b/test/functional/shared/util.js index 568e9f80..1d05c38a 100644 --- a/test/functional/shared/util.js +++ b/test/functional/shared/util.js @@ -430,7 +430,7 @@ module.exports = { * @returns {Promise} Returns promise resolved on sent message */ sendEvent(host, msg) { - const port = constants.EVENT_LISTENER_PORT; + const port = constants.EVENT_LISTENER_DEFAULT_PORT; return new Promise((resolve, reject) => { const client = net.createConnection({ host, port }, () => { diff --git a/test/unit/configTests.js b/test/unit/configTests.js index 06730fe2..af57dfe5 100644 --- a/test/unit/configTests.js +++ b/test/unit/configTests.js @@ -67,20 +67,35 @@ describe('Config', () => { }); describe('.validate()', () => { - it('should validate basic declaration', () => { + it('should validate basic declaration (default = full schema)', () => { const obj = { class: 'Telemetry' }; return assert.isFulfilled(config.validate(obj)); }); - it('should throw error in validate function', () => { + it('should validate declaration for a subschema (schemaType = namespace)', () => { + const obj = { + class: 'Telemetry_Namespace' + }; + return assert.isFulfilled(config.validate(obj, { schemaType: 'Telemetry_Namespace' })); + }); + + 
it('should throw error when no validators found', () => { const obj = { class: 'Telemetry' }; - sinon.stub(config, 'validator').value(null); + sinon.stub(config, 'validators').value(null); return assert.isRejected(config.validate(obj), 'Validator is not available'); }); + + it('should throw error when no specific validator found (default full schema type)', () => { + const obj = { + class: 'Telemetry_New' + }; + sinon.stub(config, 'validators').value({ otherType: data => data }); + return assert.isRejected(config.validate(obj, { schemaType: 'Telemetry_New' }), 'Validator is not available'); + }); }); describe('.processDeclaration()', () => { @@ -467,11 +482,28 @@ describe('Config', () => { testSet.forEach(testConf => testUtil.getCallableIt(testConf)(testConf.name, () => { savedConfig = testConf.existingConfig; return config.processNamespaceDeclaration(testConf.input.declaration, testConf.input.namespace) - .then(() => { - assert.deepStrictEqual(savedConfig.normalized, testConf.expectedOutput); + .then((result) => { + assert.deepStrictEqual(savedConfig.normalized, testConf.expectedNormalized); + assert.deepStrictEqual(result, testConf.expectedResult); }); })); + it('should reject with invalid namespace declaration (class is not Telemetry_Namespace)', () => assert.isRejected( + config.processNamespaceDeclaration({ class: 'Telemetry' }), + /properties\/class\/enum.*"allowedValues":\["Telemetry_Namespace"\]/ + )); + + it('should reject with invalid namespace declaration (invalid property)', () => assert.isRejected(config.processNamespaceDeclaration( + { + class: 'Telemetry_Namespace', + My_System_1: { + class: 'Telemetry_System' + }, + additionalProp: { fake: true } + }, + 'NewbieNamespace' + ), /"additionalProperty":"fake".*should NOT have additional properties/)); + it('should emit expected normalized config (unchanged namespaces have skipUpdate = true)', () => { const baseComp = { name: 'My_System_1', diff --git a/test/unit/consumers/awsCloudWatchConsumerTests.js 
b/test/unit/consumers/awsCloudWatchConsumerTests.js index 8c5107f2..4829801c 100644 --- a/test/unit/consumers/awsCloudWatchConsumerTests.js +++ b/test/unit/consumers/awsCloudWatchConsumerTests.js @@ -19,7 +19,7 @@ const sinon = require('sinon'); const awsCloudWatchIndex = require('../../../src/lib/consumers/AWS_CloudWatch/index'); const testUtil = require('../shared/util'); -const awsUtil = require('../../../src/lib/consumers/AWS_CloudWatch/awsUtil'); +const awsUtil = require('../../../src/lib/consumers/shared/awsUtil'); chai.use(chaiAsPromised); const assert = chai.assert; diff --git a/test/unit/consumers/awsS3ConsumerTests.js b/test/unit/consumers/awsS3ConsumerTests.js index 4e2dcdcc..0a9c4dda 100644 --- a/test/unit/consumers/awsS3ConsumerTests.js +++ b/test/unit/consumers/awsS3ConsumerTests.js @@ -94,6 +94,21 @@ describe('AWS_S3', () => { }); }); + it('should configure AWS access with custom agent', () => { + let optionsParam; + awsConfigUpdate.callsFake((options) => { + optionsParam = options; + }); + const context = testUtil.buildConsumerContext({ + config: defaultConsumerConfig + }); + + return awsS3Index(context) + .then(() => { + assert.ok(optionsParam.httpOptions.agent.options, 'AWS should have custom Agent'); + }); + }); + describe('process', () => { const expectedParams = { Body: '', diff --git a/test/unit/consumers/awsUtilTests.js b/test/unit/consumers/awsUtilTests.js index 89ecc917..77969cbe 100644 --- a/test/unit/consumers/awsUtilTests.js +++ b/test/unit/consumers/awsUtilTests.js @@ -16,9 +16,10 @@ const sinon = require('sinon'); const chai = require('chai'); const chaiAsPromised = require('chai-as-promised'); const aws = require('aws-sdk'); +const https = require('https'); const testUtil = require('./../shared/util'); -const awsUtil = require('../../../src/lib/consumers/AWS_CloudWatch/awsUtil'); +const awsUtil = require('../../../src/lib/consumers/shared/awsUtil'); const awsUtilTestsData = require('./data/awsUtilTestsData'); @@ -63,9 +64,48 @@ 
describe('AWS Util Tests', () => { }); return awsUtil.initializeConfig(context) .then(() => { - assert.deepStrictEqual(actualParams, { region: 'us-west-1' }); + assert.strictEqual(actualParams.region, 'us-west-1'); }); }); + + it('should initialize config with custom agent', () => { + const context = testUtil.buildConsumerContext({ + config: { + region: 'us-west-1' + } + }); + return awsUtil.initializeConfig(context) + .then(() => { + const agent = actualParams.httpOptions.agent; + assert.ok(agent instanceof https.Agent, 'agent should be instance of https.Agent'); + assert.ok(agent.options.ca.length, 'should have at least 1 certificate'); + assert.strictEqual(agent.options.rejectUnauthorized, true); + }); + }); + + it('should initialize config with custom https agent', () => { + const context = testUtil.buildConsumerContext({ + config: { + region: 'us-west-1' + } + }); + const configOptions = { + httpAgent: 'myAgent' + }; + return awsUtil.initializeConfig(context, configOptions) + .then(() => { + assert.deepStrictEqual(actualParams, + { region: 'us-west-1', httpOptions: { agent: 'myAgent' } }); + }); + }); + + it('should return a valid array when getting AWS root certs', () => { + const certs = awsUtil.getAWSRootCerts(); + assert.ok(Array.isArray(certs), 'certs should be a valid array'); + assert.ok(certs.every( + i => i.startsWith('-----BEGIN CERTIFICATE-----') + ), 'certs should have \'BEGIN CERTIFICATE\' header'); + }); }); describe('Metrics', () => { diff --git a/test/unit/consumers/data/splunkConsumerTestsData.js b/test/unit/consumers/data/splunkConsumerTestsData.js index 995492d0..375851b4 100644 --- a/test/unit/consumers/data/splunkConsumerTestsData.js +++ b/test/unit/consumers/data/splunkConsumerTestsData.js @@ -9,8 +9,8 @@ 'use strict'; module.exports = { - legacySystemData: [ - { + legacySystemData: { + exampleOfSystemPollerOutput: { expectedData: [ { event: { @@ -912,9 +912,9 @@ module.exports = { } ] } - ], - multiMetricSystemData: [ - { + }, + 
multiMetricSystemData: { + exampleOfSystemPollerOutput: { expectedData: [ { fields: { @@ -3353,6 +3353,32 @@ module.exports = { time: 1546304461000 } ] + }, + systemPollerOutputWithReferences: { + name: 'multi-metric output to check the case when References are ignored', + expectedData: [ + { + time: 1546304461000, + source: 'f5-telemetry', + sourcetype: 'f5:telemetry', + host: 'telemetry.bigip.com', + fields: { + hostname: 'telemetry.bigip.com', + telemetryStreamingStatisticSet: 'system' + } + }, + { + time: 1546304461000, + source: 'f5-telemetry', + sourcetype: 'f5:telemetry', + host: 'telemetry.bigip.com', + fields: { + 'metric_name:metric': 10, + name: 'pool1', + telemetryStreamingStatisticSet: 'pools' + } + } + ] } - ] + } }; diff --git a/test/unit/consumers/genericHTTPConsumerTests.js b/test/unit/consumers/genericHTTPConsumerTests.js index c6f02d94..e97dd7b8 100644 --- a/test/unit/consumers/genericHTTPConsumerTests.js +++ b/test/unit/consumers/genericHTTPConsumerTests.js @@ -31,6 +31,8 @@ describe('Generic_HTTP', () => { host: 'localhost' }; + const redactString = '*****'; + afterEach(() => { testUtil.checkNockActiveMocks(nock); sinon.restore(); @@ -102,7 +104,34 @@ describe('Generic_HTTP', () => { return genericHttpIndex(context) .then(() => { const traceData = JSON.parse(context.tracer.write.firstCall.args[0]); - assert.deepStrictEqual(traceData.headers, { Authorization: '*****' }); + assert.deepStrictEqual(traceData.headers, { Authorization: redactString }); + }); + }); + + it('should trace data with certificates redacted', () => { + const context = testUtil.buildConsumerContext({ + config: { + method: 'POST', + protocol: 'http', + port: '8080', + path: '/', + host: 'myMetricsSystem', + privateKey: 'secretKey', + clientCertificate: 'myCert', + rootCertificate: 'CACert' + } + }); + + nock('http://myMetricsSystem:8080') + .post('/') + .reply(200); + + return genericHttpIndex(context) + .then(() => { + const traceData = 
JSON.parse(context.tracer.write.firstCall.args[0]); + assert.deepStrictEqual(traceData.privateKey, redactString); + assert.deepStrictEqual(traceData.clientCertificate, redactString); + assert.deepStrictEqual(traceData.rootCertificate, redactString); }); }); @@ -240,4 +269,53 @@ describe('Generic_HTTP', () => { }); }); }); + + describe('tls options', () => { + let requestUtilSpy; + + beforeEach(() => { + requestUtilSpy = sinon.stub(httpUtil, 'sendToConsumer').resolves(); + }); + + it('should pass tls options', () => { + const context = testUtil.buildConsumerContext({ + config: { + method: 'POST', + protocol: 'https', + port: '80', + host: 'targetHost', + privateKey: 'secretKey', + clientCertificate: 'myCert', + rootCertificate: 'CACert' + } + }); + return genericHttpIndex(context) + .then(() => { + const reqOpt = requestUtilSpy.firstCall.args[0]; + assert.deepStrictEqual(reqOpt.ca, 'CACert'); + assert.deepStrictEqual(reqOpt.cert, 'myCert'); + assert.deepStrictEqual(reqOpt.key, 'secretKey'); + }); + }); + + it('should not allow self signed certs when using tls options', () => { + const context = testUtil.buildConsumerContext({ + config: { + method: 'POST', + protocol: 'https', + port: '80', + host: 'targetHost', + privateKey: 'secretKey', + clientCertificate: 'myCert', + rootCertificate: 'CACert', + allowSelfSignedCert: true + } + }); + return genericHttpIndex(context) + .then(() => { + const reqOpt = requestUtilSpy.firstCall.args[0]; + assert.deepStrictEqual(reqOpt.allowSelfSignedCert, false); + }); + }); + }); }); diff --git a/test/unit/consumers/splunkConsumerTests.js b/test/unit/consumers/splunkConsumerTests.js index c446a8d5..5a00ee1f 100644 --- a/test/unit/consumers/splunkConsumerTests.js +++ b/test/unit/consumers/splunkConsumerTests.js @@ -220,7 +220,7 @@ describe('Splunk', () => { try { let output = zlib.gunzipSync(opts.body).toString(); output = output.replace(/\}\{/g, '},{'); - assert.sameDeepMembers(JSON.parse(`[${output}]`), 
splunkData.legacySystemData[0].expectedData); + assert.sameDeepMembers(JSON.parse(`[${output}]`), splunkData.legacySystemData.exampleOfSystemPollerOutput.expectedData); done(); } catch (err) { // done() with parameter is treated as an error. @@ -243,7 +243,45 @@ describe('Splunk', () => { try { let output = zlib.gunzipSync(opts.body).toString(); output = output.replace(/\}\{/g, '},{'); - assert.sameDeepMembers(JSON.parse(`[${output}]`), splunkData.multiMetricSystemData[0].expectedData); + assert.sameDeepMembers(JSON.parse(`[${output}]`), splunkData.multiMetricSystemData.exampleOfSystemPollerOutput.expectedData); + done(); + } catch (err) { + // done() with parameter is treated as an error. + // Use catch back to pass thrown error from assert.deepStrictEqual to done() callback + done(err); + } + }); + + splunkIndex(context); + }); + + it('should ignore references in multiMetric format', (done) => { + const context = testUtil.buildConsumerContext({ + eventType: 'systemInfo', + config: defaultConsumerConfig + }); + context.config.format = 'multiMetric'; + context.event.data = { + system: { + hostname: context.event.data.system.hostname + }, + telemetryServiceInfo: context.event.data.telemetryServiceInfo, + telemetryEventCategory: context.event.data.telemetryEventCategory, + pools: { + pool1: { + metric: 10, + someReference: { + link: 'linkToReference', + name: 'someReference' + } + } + } + }; + sinon.stub(request, 'post').callsFake((opts) => { + try { + let output = zlib.gunzipSync(opts.body).toString(); + output = output.replace(/\}\{/g, '},{'); + assert.sameDeepMembers(JSON.parse(`[${output}]`), splunkData.multiMetricSystemData.systemPollerOutputWithReferences.expectedData); done(); } catch (err) { // done() with parameter is treated as an error. 
diff --git a/test/unit/data/configTestsData.js b/test/unit/data/configTestsData.js index ed790c6f..ebfdf511 100644 --- a/test/unit/data/configTestsData.js +++ b/test/unit/data/configTestsData.js @@ -27,7 +27,28 @@ module.exports = { } } }, - expectedOutput: { + expectedResult: { + class: 'Telemetry_Namespace', + Poller: { + class: 'Telemetry_System_Poller', + enable: true, + interval: 300, + actions: [ + { + setTag: { + tenant: '`T`', + application: '`A`' + }, + enable: true + } + ], + allowSelfSignedCert: false, + host: 'localhost', + port: 8100, + protocol: 'http' + } + }, + expectedNormalized: { mappings: { uuid1: [] }, @@ -119,7 +140,26 @@ module.exports = { } } }, - expectedOutput: { + expectedResult: { + class: 'Telemetry_Namespace', + My_Listener_1: { + class: 'Telemetry_Listener', + enable: true, + trace: false, + port: 6514, + match: '', + actions: [ + { + setTag: { + tenant: '`T`', + application: '`A`' + }, + enable: true + } + ] + } + }, + expectedNormalized: { mappings: { uuid1: [] }, components: [ { @@ -198,7 +238,18 @@ module.exports = { } } }, - expectedOutput: { + expectedResult: { + class: 'Telemetry_Namespace', + My_System_1: { + class: 'Telemetry_System', + enable: true, + allowSelfSignedCert: false, + host: 'localhost', + port: 8100, + protocol: 'http' + } + }, + expectedNormalized: { mappings: {}, components: [ { @@ -269,7 +320,19 @@ module.exports = { } } }, - expectedOutput: { + expectedResult: { + class: 'Telemetry_Namespace', + My_System_1: { + class: 'Telemetry_System', + host: 'some.other.host', + trace: true, + enable: true, + allowSelfSignedCert: false, + port: 8100, + protocol: 'http' + } + }, + expectedNormalized: { mappings: {}, components: [ { @@ -327,7 +390,18 @@ module.exports = { } } }, - expectedOutput: { + expectedResult: { + class: 'Telemetry_Namespace', + My_System_1: { + class: 'Telemetry_System', + enable: true, + allowSelfSignedCert: false, + host: 'localhost', + port: 8100, + protocol: 'http' + } + }, + 
expectedNormalized: { mappings: {}, components: [ { @@ -344,6 +418,79 @@ module.exports = { } ] } + }, + { + name: 'should remove existing namespace config (empty declaration)', + existingConfig: { + raw: { + class: 'Telemetry', + My_System_1: { + class: 'Telemetry_System', + trace: false + }, + SameNamespace: { + class: 'Telemetry_Namespace', + My_System_1: { + class: 'Telemetry_System' + } + } + }, + normalized: { + mappings: {}, + components: [ + { + name: 'My_System_1', + id: 'uuid-abc', + namespace: 'f5telemetry_default', + class: 'Telemetry_System', + enable: true, + systemPollers: [], + allowSelfSignedCert: false, + host: 'localhost', + port: 8100, + protocol: 'http', + trace: false + }, + { + name: 'My_System_1', + id: 'uuid-same', + namespace: 'SameNamespace', + class: 'Telemetry_System', + enable: true, + systemPollers: [], + allowSelfSignedCert: false, + host: 'localhost', + port: 8100, + protocol: 'http' + } + ] + } + }, + input: { + namespace: 'SameNamespace', + declaration: { + class: 'Telemetry_Namespace' + } + }, + expectedResult: { class: 'Telemetry_Namespace' }, + expectedNormalized: { + mappings: {}, + components: [ + { + name: 'My_System_1', + id: 'uuid-abc', + namespace: 'f5telemetry_default', + class: 'Telemetry_System', + enable: true, + systemPollers: [], + allowSelfSignedCert: false, + host: 'localhost', + port: 8100, + protocol: 'http', + trace: false + } + ] + } } ] }; diff --git a/test/unit/data/configUtilTestsData.js b/test/unit/data/configUtilTestsData.js index 54c88cc2..0267b273 100644 --- a/test/unit/data/configUtilTestsData.js +++ b/test/unit/data/configUtilTestsData.js @@ -709,6 +709,180 @@ module.exports = { ] } }, + { + name: 'should create new Telemetry_System for each unbound iHealthPoller', + declaration: { + class: 'Telemetry', + My_iHealth_Poller_1: { + class: 'Telemetry_iHealth_Poller', + username: 'IHEALTH_ACCOUNT_USERNAME', + passphrase: { + cipherText: 'IHEALTH_ACCOUNT_PASSPHRASE' + }, + interval: { + timeWindow: { + 
start: '23:15', + end: '02:15' + } + } + }, + My_iHealth_Poller_2: { + class: 'Telemetry_iHealth_Poller', + username: 'IHEALTH_ACCOUNT_USERNAME', + passphrase: { + cipherText: 'IHEALTH_ACCOUNT_PASSPHRASE' + }, + interval: { + timeWindow: { + start: '23:15', + end: '02:15' + } + } + } + }, + expected: { + mappings: { + uuid1: [], + uuid2: [] + }, + components: [ + { + class: 'Telemetry_iHealth_Poller', + enable: true, + trace: false, + iHealth: { + name: 'My_iHealth_Poller_1', + credentials: { + username: 'IHEALTH_ACCOUNT_USERNAME', + passphrase: { + cipherText: 'IHEALTH_ACCOUNT_PASSPHRASE', + class: 'Secret', + protected: 'SecureVault' + } + }, + downloadFolder: undefined, + interval: { + day: undefined, + frequency: 'daily', + timeWindow: { + start: '23:15', + end: '02:15' + } + }, + proxy: { + connection: { + host: undefined, + port: undefined, + protocol: undefined, + allowSelfSignedCert: undefined + }, + credentials: { + username: undefined, + passphrase: undefined + } + } + }, + system: { + host: 'localhost', + name: 'My_iHealth_Poller_1_System', + connection: { + port: 8100, + protocol: 'http', + allowSelfSignedCert: false + }, + credentials: { + username: undefined, + passphrase: undefined + } + }, + id: 'uuid1', + name: 'My_iHealth_Poller_1', + namespace: 'f5telemetry_default' + }, + { + class: 'Telemetry_iHealth_Poller', + enable: true, + trace: false, + iHealth: { + name: 'My_iHealth_Poller_2', + credentials: { + username: 'IHEALTH_ACCOUNT_USERNAME', + passphrase: { + cipherText: 'IHEALTH_ACCOUNT_PASSPHRASE', + class: 'Secret', + protected: 'SecureVault' + } + }, + downloadFolder: undefined, + interval: { + day: undefined, + frequency: 'daily', + timeWindow: { + start: '23:15', + end: '02:15' + } + }, + proxy: { + connection: { + host: undefined, + port: undefined, + protocol: undefined, + allowSelfSignedCert: undefined + }, + credentials: { + username: undefined, + passphrase: undefined + } + } + }, + system: { + host: 'localhost', + name: 
'My_iHealth_Poller_2_System', + connection: { + port: 8100, + protocol: 'http', + allowSelfSignedCert: false + }, + credentials: { + username: undefined, + passphrase: undefined + } + }, + id: 'uuid2', + name: 'My_iHealth_Poller_2', + namespace: 'f5telemetry_default' + }, + { + class: 'Telemetry_System', + enable: true, + trace: false, + host: 'localhost', + port: 8100, + protocol: 'http', + allowSelfSignedCert: false, + name: 'My_iHealth_Poller_1_System', + id: 'uuid3', + namespace: 'f5telemetry_default', + systemPollers: [], + iHealthPoller: 'uuid1' + }, + { + class: 'Telemetry_System', + enable: true, + trace: false, + host: 'localhost', + port: 8100, + protocol: 'http', + allowSelfSignedCert: false, + name: 'My_iHealth_Poller_2_System', + id: 'uuid4', + namespace: 'f5telemetry_default', + systemPollers: [], + iHealthPoller: 'uuid2' + } + ] + } + }, { name: 'should normalize ihealth poller without an explicit System', declaration: { @@ -1310,6 +1484,174 @@ module.exports = { systemPollerNormalization: { name: 'Telemetry_System_Poller normalization', tests: [ + { + name: 'should create new Telemetry_System for each unbound Telemetry_System_Poller', + declaration: { + class: 'Telemetry', + My_Poller_1: { + // uuid1 + class: 'Telemetry_System_Poller', + trace: true, + interval: 500, + port: 8101, + enable: true, + username: 'username1', + passphrase: { + cipherText: 'passphrase1' + }, + tag: { + tag: 'tag1' + } + }, + My_Poller_2: { + // uuid2 + class: 'Telemetry_System_Poller', + trace: true, + interval: 600, + port: 8102, + enable: true, + username: 'username2', + passphrase: { + cipherText: 'passphrase2' + }, + tag: { + tag: 'tag2' + } + } + }, + expected: { + mappings: { + uuid1: [], + uuid2: [] + }, + components: [ + { + class: 'Telemetry_System_Poller', + trace: true, + interval: 500, + enable: true, + name: 'My_Poller_1', + id: 'uuid1', + namespace: 'f5telemetry_default', + traceName: 'My_Poller_1_System::My_Poller_1', + connection: { + host: 'localhost', + 
port: 8101, + protocol: 'http', + allowSelfSignedCert: false + }, + dataOpts: { + actions: [ + { + setTag: { + tenant: '`T`', + application: '`A`' + }, + enable: true + } + ], + tags: { + tag: 'tag1' + }, + noTMStats: true + }, + credentials: { + username: 'username1', + passphrase: { + cipherText: 'passphrase1', + class: 'Secret', + protected: 'SecureVault' + } + }, + tag: { + tag: 'tag1' + } + }, + { + class: 'Telemetry_System_Poller', + trace: true, + interval: 600, + enable: true, + name: 'My_Poller_2', + id: 'uuid2', + namespace: 'f5telemetry_default', + traceName: 'My_Poller_2_System::My_Poller_2', + connection: { + host: 'localhost', + port: 8102, + protocol: 'http', + allowSelfSignedCert: false + }, + dataOpts: { + actions: [ + { + setTag: { + tenant: '`T`', + application: '`A`' + }, + enable: true + } + ], + tags: { + tag: 'tag2' + }, + noTMStats: true + }, + credentials: { + username: 'username2', + passphrase: { + cipherText: 'passphrase2', + class: 'Secret', + protected: 'SecureVault' + } + }, + tag: { + tag: 'tag2' + } + }, + { + class: 'Telemetry_System', + enable: true, + host: 'localhost', + port: 8101, + protocol: 'http', + allowSelfSignedCert: false, + trace: true, + id: 'uuid3', + name: 'My_Poller_1_System', + systemPollers: [ + 'uuid1' + ], + username: 'username1', + passphrase: { + cipherText: 'passphrase1', + class: 'Secret', + protected: 'SecureVault' + } + }, + { + class: 'Telemetry_System', + enable: true, + host: 'localhost', + port: 8102, + protocol: 'http', + allowSelfSignedCert: false, + trace: true, + id: 'uuid4', + name: 'My_Poller_2_System', + systemPollers: [ + 'uuid2' + ], + username: 'username2', + passphrase: { + cipherText: 'passphrase2', + class: 'Secret', + protected: 'SecureVault' + } + } + ] + } + }, { name: 'should create new poller when same poller referenced by multiple systems', declaration: { diff --git a/test/unit/data/eventListenerTestsData.js b/test/unit/data/eventListenerTestsData.js index ec0c33e3..f492a8c6 100644 
--- a/test/unit/data/eventListenerTestsData.js +++ b/test/unit/data/eventListenerTestsData.js @@ -9,36 +9,39 @@ 'use strict'; module.exports = { - processData: [ + onMessagesHandler: [ { - name: 'should process data as single event without newline', - rawData: '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"', + name: 'should normalize and classify events', + rawEvents: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"', + '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"', + ' <87>Jul 6 22:37:49 bigip14.1.2.3.test debug httpd[13810]: pam_bigip_authz: pam_sm_acct_mgmt returning status SUCCESS\n', + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235370475",errdefs_msgno="22327308",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_member="/Common/Shared/255.255.255.254",pool_name="/Common/Shared/telemetry",POOLIP="255.255.255.254",POOLPort="6514",errdefs_msg_name="pool member 
modified",description="",monitor_state="down",monitor_status="AVAIL_RED",session_status="enabled",enabled_state="enabled",status_reason="/Common/tcp: No successful responses received before deadline. @2020/07/06 22:37:15. ",availability_state="offline"', + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"', + ' ' // should skip empty events + ], expectedData: [ { EOCTimestamp: '1594100235', Microtimestamp: '1594100235358418', - errdefs_msgno: '22327305', - hostname: 'bigip14.1.2.3.test', + ObjectTagsList: 'N/A', SlotId: '0', + application: 'Shared', + availability_state: 'offline', + available_members: '0', + errdefs_msg_name: 'pool modified', + errdefs_msgno: '22327305', globalBigiqConf: 'N/A', - ObjectTagsList: 'N/A', + hostname: 'bigip14.1.2.3.test', + min_active_members: '1', + pool_description: '', pool_name: '/Common/Shared/telemetry', - errdefs_msg_name: 'pool modified', state: 'enabled', - pool_description: '', status_reason: '', - min_active_members: '1', - availability_state: 'offline', - available_members: '0', - up_members: '0', - telemetryEventCategory: 'AVR' - } - ] - }, - { - name: 'should process data as multiple events with newline preceding a double quote AND no classified keys', - rawData: '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n <87>Jul 6 22:37:49 bigip14.1.2.3.test debug httpd[13810]: pam_bigip_authz: pam_sm_acct_mgmt returning status 
SUCCESS\n', - expectedData: [ + telemetryEventCategory: 'AVR', + tenant: 'Common', + up_members: '0' + }, { data: '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"', telemetryEventCategory: 'syslog', @@ -48,175 +51,53 @@ module.exports = { data: '<87>Jul 6 22:37:49 bigip14.1.2.3.test debug httpd[13810]: pam_bigip_authz: pam_sm_acct_mgmt returning status SUCCESS', hostname: 'bigip14.1.2.3.test', telemetryEventCategory: 'syslog' - } - ] - }, - { - name: 'should process data as multiple events with newline preceding a double quote AND with classified keys', - rawData: '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235370475",errdefs_msgno="22327308",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_member="/Common/Shared/255.255.255.254",pool_name="/Common/Shared/telemetry",POOLIP="255.255.255.254",POOLPort="6514",errdefs_msg_name="pool member modified",description="",monitor_state="down",monitor_status="AVAIL_RED",session_status="enabled",enabled_state="enabled",status_reason="/Common/tcp: No successful responses received before deadline. @2020/07/06 22:37:15. 
",availability_state="offline"\n<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"', - expectedData: [ + }, { EOCTimestamp: '1594100235', Microtimestamp: '1594100235370475', - errdefs_msgno: '22327308', - hostname: 'bigip14.1.2.3.test', - SlotId: '0', - globalBigiqConf: 'N/A', ObjectTagsList: 'N/A', - pool_member: '/Common/Shared/255.255.255.254', - pool_name: '/Common/Shared/telemetry', POOLIP: '255.255.255.254', POOLPort: '6514', - errdefs_msg_name: 'pool member modified', + SlotId: '0', + application: 'Shared', + availability_state: 'offline', description: '', + enabled_state: 'enabled', + errdefs_msg_name: 'pool member modified', + errdefs_msgno: '22327308', + globalBigiqConf: 'N/A', + hostname: 'bigip14.1.2.3.test', monitor_state: 'down', monitor_status: 'AVAIL_RED', + pool_member: '/Common/Shared/255.255.255.254', + pool_name: '/Common/Shared/telemetry', session_status: 'enabled', - enabled_state: 'enabled', status_reason: '/Common/tcp: No successful responses received before deadline. @2020/07/06 22:37:15. 
', - availability_state: 'offline', - telemetryEventCategory: 'AVR' + telemetryEventCategory: 'AVR', + tenant: 'Common' }, { EOCTimestamp: '1594100235', Microtimestamp: '1594100235358418', - errdefs_msgno: '22327305', - hostname: 'bigip14.1.2.3.test', + ObjectTagsList: 'N/A', SlotId: '0', + application: 'Shared', + availability_state: 'offline', + available_members: '0', + errdefs_msg_name: 'pool modified', + errdefs_msgno: '22327305', globalBigiqConf: 'N/A', - ObjectTagsList: 'N/A', + hostname: 'bigip14.1.2.3.test', + min_active_members: '1', + pool_description: '', pool_name: '/Common/Shared/telemetry', - errdefs_msg_name: 'pool modified', state: 'enabled', - pool_description: '', status_reason: '', - min_active_members: '1', - availability_state: 'offline', - available_members: '0', - up_members: '0', - telemetryEventCategory: 'AVR' - } - ] - }, - { - name: 'should process data as single event when newline quoted (double quotes)', - rawData: '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms."\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart."\n ', - expectedData: [ - { - data: '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms."\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. 
\n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart."', - hostname: 'bigip14.1.2.3.test', - telemetryEventCategory: 'syslog' - } - ] - }, - { - name: 'should process data as single event when newline quoted (single quotes)', - rawData: '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms.\'\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart.\'\n ', - expectedData: [ - { - data: '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms.\'\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart.\'', - hostname: 'bigip14.1.2.3.test', - telemetryEventCategory: 'syslog' - } - ] - }, - { - name: 'should omit empty lines', - rawData: 'line1\n \n line3\n \n line5', - expectedData: [ - { - data: 'line1', - telemetryEventCategory: 'event' - }, - { - data: 'line3', - telemetryEventCategory: 'event' - }, - { - data: 'line5', - telemetryEventCategory: 'event' - } - ] - }, - { - name: 'should process data when mixed new line chars in data', - rawData: 'line1\r\nline2\nline3\r\nline4', - expectedData: [ - { - data: 'line1', - telemetryEventCategory: 'event' - }, - { - data: 'line2', - telemetryEventCategory: 'event' - }, - { - data: 'line3', - telemetryEventCategory: 'event' - }, - { - data: 'line4', - telemetryEventCategory: 'event' + telemetryEventCategory: 'AVR', + tenant: 'Common', + up_members: '0' } ] - }, - { - name: 'should process data when mixed event separators', - rawData: 'key1="value\n"\nkey2=\\"value\n', - 
expectedData: [ - { - key1: 'value\n', - telemetryEventCategory: 'LTM' - }, - { - data: 'key2=\\"value', - telemetryEventCategory: 'event' - } - ] - } - ], - processRawData: [ - { - name: 'should process data without trailing newline', - rawData: ['<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"'], - expectedData: ['<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"'] - }, - { - name: 'should process single data with trailing newline', - rawData: ['<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n'], - expectedData: ['<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n'] - }, - { - name: 'should process multiple data with trailing newline', - rawData: [ - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n', - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n' - ], - expectedData: [ - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n', - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. 
\n' - ] - }, - { - name: 'should process single input as single data with newline within string (index < 70% string length)', - rawData: [ - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... ', - 'and continued here \n<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n' - ], - expectedData: [ - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... and continued here \n<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n' - ] - }, - { - name: 'should process single input as multiple data with newline within string (index > 70% string length)', - rawData: [ - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test ', - 'info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... 
and continued here \n<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n' - ], - expectedData: [ - '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n', - '<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... and continued here \n<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n' - ] } ] }; diff --git a/test/unit/data/messageStreamTestsData.js b/test/unit/data/messageStreamTestsData.js new file mode 100644 index 00000000..2de8127a --- /dev/null +++ b/test/unit/data/messageStreamTestsData.js @@ -0,0 +1,428 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ('EULA') for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. + */ + +'use strict'; + +module.exports = { + /** + * Set of data to check actual and expected results only. + * If you need some additional check feel free to add additional + * property or write separate test. + * + * Note: you can specify 'testOpts' property on the same level as 'name'. 
+ * Following options available: + * - only (bool) - run this test only (it.only) + * */ + dataHandler: [ + { + name: 'exceeded number of timeouts', + chunks: [ + 'chunk1', // timeout 1 + 'chunk2', // timeout 2 + 'chunk3', // timeout 3 + 'chunk4', // timeout 4 + 'chunk5', // timeout 5 + 'chunk6', + 'chunk7\n' + ], + expectedData: [ + 'chunk1chunk2chunk3chunk4chunk5', + 'chunk6chunk7' + ] + }, + { + name: 'single syslog message', + chunks: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"' + ], + expectedData: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"' + ] + }, + { + name: 'multiple events with newline preceding a double quote', + chunks: [ + '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"', + '\n<87>Jul 6 22:37:49 bigip14.1.2.3.test debug httpd[13810]: pam_bigip_authz: pam_sm_acct_mgmt returning status SUCCESS\n' + ], + expectedData: [ + '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) 
partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"', + '<87>Jul 6 22:37:49 bigip14.1.2.3.test debug httpd[13810]: pam_bigip_authz: pam_sm_acct_mgmt returning status SUCCESS' + ] + }, + { + name: 'single event when newline quoted (double quotes)', + chunks: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms."\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart."\n' + ], + expectedData: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms."\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart."' + ] + }, + { + name: 'single event when newline quoted (single quotes)', + chunks: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms.\'\n <30>Jul ', + '6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling ', + 'restart. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart.\'\n ' + ], + expectedData: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms.\'\n <30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart. 
\n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart.\'', + ' ' + ] + }, + { + name: 'not omit empty lines', + chunks: [ + 'line1\n \n line3\n \n line5' + ], + expectedData: [ + 'line1', + ' ', + ' line3', + ' ', + ' line5' + ] + }, + { + name: 'mixed new line chars in data', + chunks: [ + 'line1\r\nline2\nline3\r\nline4' + ], + expectedData: [ + 'line1', + 'line2', + 'line3', + 'line4' + ] + }, + { + name: 'mixed event separators', + chunks: [ + 'key1="value\n"\nkey2=\\"value\n' + ], + expectedData: [ + 'key1="value\n"', + 'key2=\\"value' + ] + }, + { + name: 'without trailing newline', + chunks: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"' + ], + expectedData: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="1594100235",Microtimestamp="1594100235358418",errdefs_msgno="22327305",Hostname="bigip14.1.2.3.test",SlotId="0",globalBigiqConf="N/A",ObjectTagsList="N/A",pool_name="/Common/Shared/telemetry",errdefs_msg_name="pool modified",state="enabled",pool_description="",status_reason="",min_active_members="1",availability_state="offline",available_members="0",up_members="0"' + ] + }, + { + name: 'event with trailing newline', + chunks: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n' + ], + expectedData: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. 
' + ] + }, + { + name: 'events with trailing newlines', + chunks: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n', + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. \n' + ], + expectedData: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. ', + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. ' + ] + }, + { + name: 'data with newline within string, multiple chunks (example 1)', + chunks: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval ', + '112580ms. \n<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... ', + 'and continued here \n<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): ', + 'user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n' + ], + expectedData: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. ', + '<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... and continued here ', + '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"' + ] + }, + { + name: 'with newline within string, multiple chunks (example 2)', + chunks: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. 
\n<30>Jul 6 22:37:35 bigip14.1.2.3.test ', + 'info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... and continued here \n<134>Jul ', + '6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) ', + 'partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"\n' + ], + expectedData: [ + '<30>Jul 6 22:37:26 bigip14.1.2.3.test info dhclient[4079]: XMT: Solicit on mgmt, interval 112580ms. ', + '<30>Jul 6 22:37:35 bigip14.1.2.3.test info systemd[1]: getty@tty0\x20ttyS0.service has no holdoff time, scheduling restart... and continued here ', + '<134>Jul 6 22:37:49 bigip14.1.2.3.test info httpd(pam_audit)[13810]: 01070417:6: AUDIT - user admin - RAW: httpd(pam_audit): user=admin(admin) partition=[All] level=Administrator tty=(unknown) host=172.18.5.167 attempts=1 start="Mon Jul 6 22:37:49 2020" end="Mon Jul 6 22:37:49 2020"' + ] + }, + { + name: 'chunk is too long (> 512 chars but less than limit)', + chunks: [ + '1'.repeat(520) + ], + expectedData: [ + '1'.repeat(520) + ] + }, + { + name: 'chunk is too long (more than allowed max limit)', + chunks: [ + '1'.repeat(70000) + ], + expectedData: [ + '1'.repeat(70000) + ] + }, + { + name: 'quote opened but field is too long (> 512 chars, without new line)', + chunks: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="', + '1'.repeat(520), + '",nextfield="1"' + ], + expectedData: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="', + `${'1'.repeat(520)}",nextfield="1"` + ] + }, + { + name: 'quote opened but field is too long (> 512 chars, with new line) (example 1)', + chunks: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="a\n', + '1'.repeat(520), + '",nextfield="1"' + ], + expectedData: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="a', + `${'1'.repeat(520)}",nextfield="1"` + ] + }, + { 
+ name: 'quote opened but field is too long (> 512 chars, with new line) (example 2)', + chunks: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="a\r\n', + '1'.repeat(520), + '",nextfield="1"' + ], + expectedData: [ + '<0>Jul 6 22:37:15 bigip14.1.2.3.test BigIP:EOCtimestamp="a', + `${'1'.repeat(520)}",nextfield="1"` + ] + }, + { + name: 'no trailing new line', + chunks: [ + 'line1\n', + 'line2="value\nanotherPart=test\nanotherLine=anotherValue' + ], + expectedData: [ + 'line1', + 'line2="value', + 'anotherPart=test', + 'anotherLine=anotherValue' + ] + }, + { + name: 'empty string', + chunks: [ + '' + ], + expectedData: [] // no data expected + }, + { + name: 'empty line with line separator', + chunks: [ + '{sep}' + ], + expectedData: [ + '' + ] + }, + { + name: 'empty lines with line separator', + chunks: [ + '{sep}{sep}{sep}{sep}' + ], + expectedData: [ + '', + '', + '', + '' + ] + }, + { + name: 'line with trailing spaces', + chunks: [ + '{sep}{sep}{sep}{sep} ' + ], + expectedData: [ + '', + '', + '', + '', + ' ' + ] + }, + { + name: 'ignore escaped separators', + chunks: [ + '\\n \\r\\n' + ], + expectedData: [ + '\\n \\r\\n' + ] + }, + { + name: 'escaped sequences correctly', + chunks: [ + 'line1\\\\\\nstill line 1\\\\{sep}line2\\\\{sep}' + ], + expectedData: [ + 'line1\\\\\\nstill line 1\\\\', + 'line2\\\\' + ] + }, + { + name: 'ignore double quoted line separators (\\n)', + chunks: [ + 'line1"\\\\\\nstill line 1\\\\\n"line2\\\\{sep}' + ], + expectedData: [ + 'line1"\\\\\\nstill line 1\\\\\n"line2\\\\' + ] + }, + { + name: 'ignore double quoted line separators (\\r\\n)', + chunks: [ + 'line1"\\\\\\nstill line 1\\\\\r\n"line2\\\\{sep}' + ], + expectedData: [ + 'line1"\\\\\\nstill line 1\\\\\r\n"line2\\\\' + ] + }, + { + name: 'ignore single quoted line separators (\\n)', + chunks: [ + 'line1\'\\\\\\nstill line 1\\\\\n\'line2\\\\{sep}' + ], + expectedData: [ + 'line1\'\\\\\\nstill line 1\\\\\n\'line2\\\\' + ] + }, + { + name: 'ignore single quoted 
line separators (\\r\\n)', + chunks: [ + 'line1\'\\\\\\nstill line 1\\\\\r\n\'line2\\\\{sep}' + ], + expectedData: [ + 'line1\'\\\\\\nstill line 1\\\\\r\n\'line2\\\\' + ] + }, + { + name: 'ignore escaped single quoted line separators', + chunks: [ + 'line1\\\'\\\\\\nstill line 1\\\\{sep}\'line2\\\\\n' + ], + expectedData: [ + 'line1\\\'\\\\\\nstill line 1\\\\', + '\'line2\\\\' + ] + }, + { + name: 'ignore escaped double quoted line separators', + chunks: [ + 'line1\\"\\\\\\nstill line 1\\\\{sep}"line2\\\\\n' + ], + expectedData: [ + 'line1\\"\\\\\\nstill line 1\\\\', + '"line2\\\\' + ] + }, + { + name: 'correctly not closed quotes (last line, leading quote)', + chunks: [ + 'line1{sep}"{sep}line3{sep}' + ], + expectedData: [ + 'line1', + '"', + 'line3' + ] + }, + { + name: 'correctly not closed quotes (last line, trailing quote)', + chunks: [ + 'line1{sep}line2"' + ], + expectedData: [ + 'line1', + 'line2"' + ] + }, + { + name: 'correctly single line with opened quote (first line, leading quote)', + chunks: [ + '"line1{sep}line2' + ], + expectedData: [ + '"line1', + 'line2' + ] + }, + { + name: 'correctly single line with opened quote (first line, trailing quote)', + chunks: [ + 'line1"{sep}line2' + ], + expectedData: [ + 'line1"', + 'line2' + ] + }, + { + name: 'combination of complete and incomplete quotes', + chunks: [ + '\'foo"bar""\none\'\n\'two""thr\nee"' + ], + expectedData: [ + '\'foo"bar""\none\'', + '\'two""thr', + 'ee"' + ] + }, + { + name: 'combination of single and double quotes', + chunks: [ + '\'line_1"still_line_1"\n\'\n"line_2"' + ], + expectedData: [ + '\'line_1"still_line_1"\n\'', + '"line_2"' + ] + }, + { + name: 'combination of single and double quotes without any data in it', + chunks: [ + 'key1=""\'\'\nkey2=\'\'\nkey3=""' + ], + expectedData: [ + 'key1=""\'\'', + 'key2=\'\'', + 'key3=""' + ] + } + ] +}; diff --git a/test/unit/data/normalizeTestsData.js b/test/unit/data/normalizeTestsData.js index 8128c5bf..7750fb4f 100644 --- 
a/test/unit/data/normalizeTestsData.js +++ b/test/unit/data/normalizeTestsData.js @@ -172,154 +172,5 @@ module.exports = { telemetryEventCategory: EVENT_TYPES.SYSLOG_EVENT } } - ], - splitEventsData: [ - { - name: 'empty string', - data: '', - expectedData: [] - }, - { - name: 'empty line with line separator', - data: '{sep}', - expectedData: [ - '' - ] - }, - { - name: 'empty lines with line separator', - data: '{sep}{sep}{sep}{sep}', - expectedData: [ - '', '', '', '' - ] - }, - { - name: 'line with trailing spaces', - data: '{sep}{sep}{sep}{sep} ', - expectedData: [ - '', '', '', '', ' ' - ] - }, - { - name: 'ignore escaped separators', - data: '\\n \\r\\n', - expectedData: [ - '\\n \\r\\n' - ] - }, - { - name: 'process escaped sequences correctly', - data: 'line1\\\\\\nstill line 1\\\\{sep}line2\\\\{sep}', - expectedData: [ - 'line1\\\\\\nstill line 1\\\\', - 'line2\\\\' - ] - }, - { - name: 'ignore double quoted line separators (\\n)', - data: 'line1"\\\\\\nstill line 1\\\\\n"line2\\\\{sep}', - expectedData: [ - 'line1"\\\\\\nstill line 1\\\\\n"line2\\\\' - ] - }, - { - name: 'ignore double quoted line separators (\\r\\n)', - data: 'line1"\\\\\\nstill line 1\\\\\r\n"line2\\\\{sep}', - expectedData: [ - 'line1"\\\\\\nstill line 1\\\\\r\n"line2\\\\' - ] - }, - { - name: 'ignore single quoted line separators (\\n)', - data: 'line1\'\\\\\\nstill line 1\\\\\n\'line2\\\\{sep}', - expectedData: [ - 'line1\'\\\\\\nstill line 1\\\\\n\'line2\\\\' - ] - }, - { - name: 'ignore single quoted line separators (\\r\\n)', - data: 'line1\'\\\\\\nstill line 1\\\\\r\n\'line2\\\\{sep}', - expectedData: [ - 'line1\'\\\\\\nstill line 1\\\\\r\n\'line2\\\\' - ] - }, - { - name: 'ignore escaped single quoted line separators', - data: 'line1\\\'\\\\\\nstill line 1\\\\{sep}\'line2\\\\\n', - expectedData: [ - 'line1\\\'\\\\\\nstill line 1\\\\', - '\'line2\\\\', - '' - ] - }, - { - name: 'ignore escaped double quoted line separators', - data: 'line1\\"\\\\\\nstill line 
1\\\\{sep}"line2\\\\\n', - expectedData: [ - 'line1\\"\\\\\\nstill line 1\\\\', - '"line2\\\\', - '' - ] - }, - { - name: 'process correctly not closed quotes (last line, leading quote)', - data: 'line1{sep}"{sep}line3{sep}', - expectedData: [ - 'line1', - '"', - 'line3', - '' - ] - }, - { - name: 'process correctly not closed quotes (last line, trailing quote)', - data: 'line1{sep}line2"', - expectedData: [ - 'line1', - 'line2"' - ] - }, - { - name: 'process correctly single line with opened quote (first line, leading quote)', - data: '"line1{sep}line2', - expectedData: [ - '"line1', - 'line2' - ] - }, - { - name: 'process correctly single line with opened quote (first line, trailing quote)', - data: 'line1"{sep}line2', - expectedData: [ - 'line1"', - 'line2' - ] - }, - { - name: 'process combination of complete and incomplete quotes', - data: '\'foo"bar""\none\'\n\'two""thr\nee"', - expectedData: [ - '\'foo"bar""\none\'', - '\'two""thr', - 'ee"' - ] - }, - { - name: 'process combination of single and double quotes', - data: '\'line_1"still_line_1"\n\'\n"line_2"', - expectedData: [ - '\'line_1"still_line_1"\n\'', - '"line_2"' - ] - }, - { - name: 'process combination of single and double quotes without any data in it', - data: 'key1=""\'\'\nkey2=\'\'\nkey3=""', - expectedData: [ - 'key1=""\'\'', - 'key2=\'\'', - 'key3=""' - ] - } ] }; diff --git a/test/unit/data/propertiesJsonTests/collectPools.js b/test/unit/data/propertiesJsonTests/collectPools.js index e04f9a62..639d834c 100644 --- a/test/unit/data/propertiesJsonTests/collectPools.js +++ b/test/unit/data/propertiesJsonTests/collectPools.js @@ -152,6 +152,10 @@ module.exports = { membersReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~test_pool_0/members?ver=14.1.0', isSubcollection: true + }, + gatewayFailsafeDeviceReference: { + link: 'https://localhost/mgmt/tm/cm/device/~Common~localhost?ver=14.1.0', + name: 'gatewayFailsafeDeviceReference' } } ] diff --git a/test/unit/declarationTests.js 
b/test/unit/declarationTests.js index eb5b04ff..dea976f6 100644 --- a/test/unit/declarationTests.js +++ b/test/unit/declarationTests.js @@ -848,7 +848,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Consumer.path, '/foo'); }); @@ -888,7 +888,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Namespace.My_NS_Consumer.path, '/nsfoo'); assert.strictEqual(validated.My_Consumer.path, '/upperfoo'); @@ -906,7 +906,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Consumer.path, '192.0.2.1'); }); @@ -926,7 +926,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Namespace.My_NS_Consumer.path, '192.0.2.1'); }); @@ -949,7 +949,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Consumer.headers[0].value, '192.0.2.1'); }); @@ -975,7 +975,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Namespace.My_NS_Consumer.headers[0].value, '192.0.2.1'); }); @@ -1000,7 +1000,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { 
assert.strictEqual(validated.My_Consumer.path, '/foo/bar/baz'); }); @@ -1024,7 +1024,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.strictEqual(validated.My_Consumer.path, 'foo'); }); @@ -1066,7 +1066,7 @@ describe('Declarations', () => { } }; - return config.validate(data, { expand: true }) + return config.validate(data, { context: { expand: true } }) .then((validated) => { assert.deepStrictEqual(validated.My_Consumer.path, expectedValue); assert.deepStrictEqual(validated.My_Consumer.headers[0].value, expectedValue); @@ -1089,7 +1089,7 @@ describe('Declarations', () => { } } }; - return assert.isRejected(config.validate(data, { expand: true }), /syntax requires single pointer/); + return assert.isRejected(config.validate(data, { context: { expand: true } }), /syntax requires single pointer/); }); it('should fail pointer (absolute) outside \'Shared\'', () => { @@ -1102,7 +1102,7 @@ describe('Declarations', () => { path: '`=/class`' } }; - return assert.isRejected(config.validate(data, { expand: true }), /requires pointers root to be 'Shared'/); + return assert.isRejected(config.validate(data, { context: { expand: true } }), /requires pointers root to be 'Shared'/); }); it('should fail expanding pointer (absolute) outside of Namespace', () => { @@ -1125,7 +1125,7 @@ describe('Declarations', () => { } } }; - return assert.isRejected(config.validate(data, { expand: true }), /Cannot read property 'constants' of undefined/); + return assert.isRejected(config.validate(data, { context: { expand: true } }), /Cannot read property 'constants' of undefined/); }); it('should fail with correct dataPath when pointer is outside of Namespace', () => { @@ -1148,7 +1148,7 @@ describe('Declarations', () => { } } }; - return assert.isRejected(config.validate(data, { expand: true }), /dataPath":"\/My_Namespace\/My_NS_Consumer\/path/); + return 
assert.isRejected(config.validate(data, { context: { expand: true } }), /dataPath":"\/My_Namespace\/My_NS_Consumer\/path/); }); }); @@ -3324,7 +3324,7 @@ describe('Declarations', () => { }; Object.assign(targetDeclaration.Shared.constants, addtlContext.constants); } - return config.validate(targetDeclaration, context) + return config.validate(targetDeclaration, { context }) .then((validConfig) => { assert.deepStrictEqual(validConfig.My_Consumer, expectedTarget); }); @@ -4080,6 +4080,57 @@ describe('Declarations', () => { } )); + it('should pass minimal declaration when using tls options', () => validateMinimal( + { + type: 'Generic_HTTP', + host: 'host', + privateKey: { + cipherText: 'myKey' + }, + clientCertificate: { + cipherText: 'myCert' + } + }, + { + type: 'Generic_HTTP', + host: 'host', + protocol: 'https', + port: 443, + path: '/', + method: 'POST', + clientCertificate: { + cipherText: '$M$foo', + class: 'Secret', + protected: 'SecureVault' + }, + privateKey: { + cipherText: '$M$foo', + class: 'Secret', + protected: 'SecureVault' + } + } + )); + + it('should require privateKey when client certificate is provided', () => assert.isRejected(validateFull( + { + type: 'Generic_HTTP', + host: 'host', + clientCertificate: { + cipherText: 'myCert' + } + } + ), /should have required property 'privateKey'/)); + + it('should require client certificate when privateKey is provided', () => assert.isRejected(validateFull( + { + type: 'Generic_HTTP', + host: 'host', + privateKey: { + cipherText: 'myKey' + } + } + ), /should have required property 'clientCertificate'/)); + it('should allow full declaration', () => validateFull( { type: 'Generic_HTTP', @@ -4114,6 +4165,15 @@ describe('Declarations', () => { passphrase: { cipherText: 'passphrase' } + }, + privateKey: { + cipherText: 'myKey' + }, + clientCertificate: { + cipherText: 'myCert' + }, + rootCertificate: { + cipherText: 'myCA' } }, { @@ -4152,6 +4212,21 @@ describe('Declarations', () => { protected: 'SecureVault', 
cipherText: '$M$foo' } + }, + clientCertificate: { + cipherText: '$M$foo', + class: 'Secret', + protected: 'SecureVault' + }, + privateKey: { + cipherText: '$M$foo', + class: 'Secret', + protected: 'SecureVault' + }, + rootCertificate: { + cipherText: '$M$foo', + class: 'Secret', + protected: 'SecureVault' } } )); @@ -4861,7 +4936,7 @@ describe('Declarations', () => { }; Object.assign(targetDeclaration.Shared.constants, addtlContext.constants); } - return config.validate(targetDeclaration, context) + return config.validate(targetDeclaration, { context }) .then((validConfig) => { assert.deepStrictEqual(validConfig.My_Pull_Consumer, expectedTarget); }); @@ -4979,7 +5054,7 @@ describe('Declarations', () => { }; Object.assign(targetDeclaration.Shared.constants, addtlContext.constants); } - return config.validate(targetDeclaration, context) + return config.validate(targetDeclaration, { context }) .then((validConfig) => { assert.deepStrictEqual(validConfig.My_Namespace, expectedTarget); }); diff --git a/test/unit/eventListener/baseDataReceiverTests.js b/test/unit/eventListener/baseDataReceiverTests.js new file mode 100644 index 00000000..ddab1b90 --- /dev/null +++ b/test/unit/eventListener/baseDataReceiverTests.js @@ -0,0 +1,374 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ('EULA') for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +/* eslint-disable import/order */ + +require('../shared/restoreCache')(); + +const chai = require('chai'); +const chaiAsPromised = require('chai-as-promised'); +const sinon = require('sinon'); + +const baseDataReceiver = require('../../../src/lib/eventListener/baseDataReceiver'); +const testUtil = require('../shared/util'); + +chai.use(chaiAsPromised); +const assert = chai.assert; + +describe('Base Data Receiver', () => { + const BaseDataReceiver = baseDataReceiver.BaseDataReceiver; + let receiverInst; + let stateChangedSpy; + let startHandlerStub; + let stopHandlerStub; + + const fetchStates = () => stateChangedSpy.args.map(callArgs => callArgs[0].current); + + beforeEach(() => { + receiverInst = new BaseDataReceiver(); + startHandlerStub = sinon.stub(receiverInst, 'startHandler'); + stopHandlerStub = sinon.stub(receiverInst, 'stopHandler'); + stateChangedSpy = sinon.spy(); + receiverInst.on('stateChanged', stateChangedSpy); + + startHandlerStub.resolves(); + stopHandlerStub.resolves(); + }); + + afterEach(() => { + receiverInst = null; + sinon.restore(); + }); + + describe('abstract methods', () => { + beforeEach(() => { + sinon.restore(); + }); + + ['startHandler', 'stopHandler'].forEach((method) => { + it(`.${method}()`, () => { + assert.throws( + () => receiverInst[method](), + /Not implemented/, + 'should throw "Not implemented" error' + ); + }); + }); + }); + + describe('._setState()', () => { + it('should reject when unable to set state', () => assert.isFulfilled(Promise.all([ + assert.isRejected(receiverInst._setState('NEW'), /NEW.*NEW/), + assert.isRejected(receiverInst._setState(BaseDataReceiver.STATE.NEW), /NEW.*NEW/) + ]))); + + it('should be able to set every state from .next', () => { + const promises = []; + Object.keys(BaseDataReceiver.STATE).forEach((stateKey) => { + const state = BaseDataReceiver.STATE[stateKey]; + state.next.forEach((nextStateKey) => { + promises.push(new Promise((resolve, reject) => { + 
receiverInst._state = state; + receiverInst._setState(nextStateKey).then(resolve).catch(reject); + })); + }); + }); + return assert.isFulfilled(Promise.all(promises)); + }); + + it('should not be able to set inappropriate state', () => { + return assert.isRejected(receiverInst._setState('DESTROYED'), /NEW.*DESTROYED/); + }); + + it('should be able to set inappropriate state when forced', () => receiverInst._setState('DESTROYED', { force: true }) + .then(() => { + assert.strictEqual(receiverInst.getCurrentStateName(), 'DESTROYED', 'should have expected state'); + })); + + it('should not wait till state transition finished', () => receiverInst._setState('STARTING', { force: true }) + .then(() => assert.isRejected(receiverInst._setState('STOPPING', { wait: false }), /STARTING.*STOPPING/))); + + it('should wait till state transition finished', () => new Promise((resolve, reject) => { + startHandlerStub.callsFake(() => { + receiverInst.stop().then(resolve).catch(reject); + return testUtil.sleep(100); + }); + receiverInst.start().catch(reject); + }) + .then(() => { + assert.deepStrictEqual(fetchStates(), ['STARTING', 'RUNNING', 'STOPPING', 'STOPPED'], 'should match state\'s change order'); + })); + }); + + describe('.getCurrentStateName()', () => { + it('should return current state', () => { + assert.strictEqual(receiverInst.getCurrentStateName(), 'NEW'); + return receiverInst.destroy() + .then(() => { + assert.strictEqual(receiverInst.getCurrentStateName(), 'DESTROYED'); + }); + }); + }); + + describe('.destroy()', () => { + it('should destroy instance and check state', () => receiverInst.destroy() + .then(() => { + assert.deepStrictEqual(fetchStates(), ['DESTROYING', 'DESTROYED'], 'should match state\'s change order'); + assert.strictEqual(stopHandlerStub.callCount, 1, 'should call stopHandler once'); + assert.isTrue(receiverInst.isDestroyed(), 'should return true when instance destroyed'); + assert.isTrue(receiverInst.hasState(BaseDataReceiver.STATE.DESTROYED), 'should have
DESTROYED state'); + assert.isFalse(receiverInst.isRestartAllowed(), 'should not allow restart once instance destroyed'); + })); + + it('should not fail when .destroy() called twice', () => Promise.all([ + receiverInst.destroy(), + receiverInst.destroy() + ]) + .then(() => { + assert.strictEqual(stopHandlerStub.callCount, 2, 'should call stopHandler 2 times'); + assert.isTrue(receiverInst.isDestroyed(), 'should have DESTROYED state'); + })); + }); + + describe('.hasState()', () => { + it('should check current state', () => { + assert.isTrue(receiverInst.hasState(BaseDataReceiver.STATE.NEW), 'should have NEW state'); + assert.isTrue(receiverInst.hasState('NEW'), 'should have NEW state'); + return receiverInst._setState(BaseDataReceiver.STATE.DESTROYING) + .then(() => { + assert.isTrue(receiverInst.hasState(BaseDataReceiver.STATE.DESTROYING), 'should have DESTROYING state'); + assert.isTrue(receiverInst.hasState('DESTROYING'), 'should have DESTROYING state'); + }); + }); + }); + + describe('.nextStateAllowed()', () => { + it('should check if a next state is allowed', () => { + assert.isFalse(receiverInst.nextStateAllowed(BaseDataReceiver.STATE.NEW), 'should not allow NEW as next state for NEW'); + assert.isFalse(receiverInst.nextStateAllowed('NEW'), 'should not allow NEW as next state for NEW'); + assert.isTrue(receiverInst.nextStateAllowed(BaseDataReceiver.STATE.STARTING), 'should allow STARTING as next state for NEW'); + assert.isTrue(receiverInst.nextStateAllowed('STARTING'), 'should allow STARTING as next state for NEW'); + }); + }); + + describe('.restart()', () => { + it('should be able to restart on first try', () => receiverInst.restart() + .then(() => { + assert.deepStrictEqual(fetchStates(), ['RESTARTING', 'STOPPING', 'STOPPED', 'STARTING', 'RUNNING'], 'should match state\'s change order'); + assert.strictEqual(startHandlerStub.callCount, 1, 'should call startHandler once'); + assert.strictEqual(stopHandlerStub.callCount, 1, 'should call stopHandlerStub 
once'); + assert.isFalse(receiverInst.isDestroyed(), 'should return false when instance started'); + assert.isTrue(receiverInst.isRunning(), 'should return true when instance started'); + assert.isTrue(receiverInst.hasState('RUNNING'), 'should have RUNNING state'); + assert.isTrue(receiverInst.isRestartAllowed(), 'should allow restart once instance started'); + return receiverInst.destroy(); + }) + .then(() => { + assert.isTrue(receiverInst.isDestroyed(), 'should return true when instance destroyed'); + })); + + it('should be able to call restart on second try', () => { + startHandlerStub.resolves(); + startHandlerStub.onFirstCall().rejects(new Error('start error')); + stopHandlerStub.resolves(); + stopHandlerStub.onFirstCall().rejects(new Error('stop error')); + return receiverInst.restart() + .then(() => { + assert.strictEqual(startHandlerStub.callCount, 2, 'should call startHandler 2 times'); + assert.strictEqual(stopHandlerStub.callCount, 2, 'should call stopHandlerStub 2 times'); + assert.isTrue(receiverInst.hasState('RUNNING'), 'should have RUNNING state'); + }); + }); + + it('should try to restart 10 times', () => { + startHandlerStub.rejects(new Error('start error')); + stopHandlerStub.rejects(new Error('stop error')); + return assert.isRejected(receiverInst.restart({ attempts: 10 }), /start error/) + .then(() => { + assert.strictEqual(startHandlerStub.callCount, 10, 'should call startHandler 10 times'); + assert.strictEqual(stopHandlerStub.callCount, 10, 'should call stopHandlerStub 10 times'); + assert.isTrue(receiverInst.hasState('FAILED_TO_RESTART'), 'should have FAILED_TO_RESTART state'); + }); + }); + + it('should try to restart with delay', () => { + startHandlerStub.rejects(new Error('start error')); + stopHandlerStub.rejects(new Error('stop error')); + return assert.isRejected(receiverInst.restart({ attempts: 10, delay: 1 }), /start error/) + .then(() => { + assert.strictEqual(startHandlerStub.callCount, 10, 'should call startHandler 10 times'); + 
assert.strictEqual(stopHandlerStub.callCount, 10, 'should call stopHandlerStub 10 times'); + assert.isTrue(receiverInst.hasState('FAILED_TO_RESTART'), 'should have FAILED_TO_RESTART state'); + }); + }); + + it('should not be able to call .restart() again right after first call', () => { + const errors = []; + return Promise.all([ + receiverInst.restart().catch(Array.prototype.push.bind(errors)), + receiverInst.restart().catch(Array.prototype.push.bind(errors)) + ]) + .then(() => { + assert.strictEqual(errors.length, 1, 'should throw error'); + assert.isTrue(/RESTARTING.*RESTARTING/.test(errors[0]), 'should not be able to change state to RESTARTING again'); + assert.strictEqual(startHandlerStub.callCount, 1, 'should call startHandler once'); + assert.strictEqual(stopHandlerStub.callCount, 1, 'should call stopHandlerStub once'); + assert.isTrue(receiverInst.hasState(BaseDataReceiver.STATE.RUNNING), 'should have RUNNING state'); + }); + }); + + it('should stop trying to restart once destroyed', () => { + startHandlerStub.onThirdCall().callsFake(() => receiverInst.destroy()); + startHandlerStub.rejects(new Error('start error')); + stopHandlerStub.rejects(new Error('stop error')); + return assert.isRejected(receiverInst.restart(), /DESTROYED/) + .then(() => { + assert.isAbove(startHandlerStub.callCount, 2, 'should call startHandler more than 2 times'); + assert.isAbove(stopHandlerStub.callCount, 2, 'should call stopHandlerStub more than 2 times'); + assert.isTrue(receiverInst.hasState(BaseDataReceiver.STATE.DESTROYED), 'should have DESTROYED state'); + }); + }); + }); + + describe('.start()', () => { + it('should start instance and check state', () => receiverInst.start() + .then(() => { + assert.deepStrictEqual(fetchStates(), ['STARTING', 'RUNNING'], 'should match state\'s change order'); + assert.strictEqual(startHandlerStub.callCount, 1, 'should call startHandler once'); + assert.isFalse(receiverInst.isDestroyed(), 'should return false when instance started'); + 
assert.isTrue(receiverInst.isRunning(), 'should return true when instance started'); + assert.isTrue(receiverInst.hasState(BaseDataReceiver.STATE.RUNNING), 'should have RUNNING state'); + assert.isTrue(receiverInst.isRestartAllowed(), 'should allow restart once instance started'); + return receiverInst.destroy(); + }) + .then(() => { + assert.isTrue(receiverInst.isDestroyed(), 'should return true when instance destroyed'); + })); + + it('should not be able to start destroyed instance', () => receiverInst.destroy() + .then(() => assert.isRejected(receiverInst.start(), /DESTROYED.*STARTING/))); + + it('should not be able to start instance that is not stopped yet', () => new Promise((resolve, reject) => { + stopHandlerStub.callsFake(() => { + receiverInst.start().then(resolve).catch(reject); + return testUtil.sleep(100); + }); + receiverInst.stop().catch(reject); + }) + .then(() => { + assert.deepStrictEqual(fetchStates(), ['STOPPING', 'STOPPED', 'STARTING', 'RUNNING'], 'should match state\'s change order'); + })); + + it('should not wait till completion of previous operation', () => { + const errors = []; + return new Promise((resolve, reject) => { + stopHandlerStub.callsFake(() => receiverInst.start(false).catch(err => errors.push(err))); + receiverInst.stop().then(resolve).catch(reject); + }) + .then(() => { + assert.strictEqual(errors.length, 1, 'should throw error'); + assert.isTrue(/STOPPING.*STARTING/.test(errors[0]), 'should not be able to change state to STARTING'); + }); + }); + + it('should change state to FAILED_TO_START when failed', () => { + startHandlerStub.rejects(new Error('expected error')); + return assert.isRejected(receiverInst.start(), /expected error/) + .then(() => { + assert.isTrue(receiverInst.hasState('FAILED_TO_START'), 'should have FAILED_TO_START state'); + }); + }); + }); + + describe('.stop()', () => { + it('should stop instance and check state', () => receiverInst.stop() + .then(() => { + assert.strictEqual(stopHandlerStub.callCount, 
1, 'should call stopHandler once'); + assert.isFalse(receiverInst.isDestroyed(), 'should return false when instance stopped'); + assert.isFalse(receiverInst.isRunning(), 'should return false when instance stopped'); + assert.isTrue(receiverInst.hasState('STOPPED'), 'should have STOPPED state'); + assert.isTrue(receiverInst.isRestartAllowed(), 'should allow restart once instance stopped'); + return receiverInst.destroy(); + }) + .then(() => { + assert.isTrue(receiverInst.isDestroyed(), 'should return true when instance destroyed'); + })); + + it('should not be able to stop destroyed instance', () => receiverInst.destroy() + .then(() => assert.isRejected(receiverInst.stop(), /DESTROYED.*STOPPING/))); + + it('should not be able to stop instance that is not started yet', () => new Promise((resolve, reject) => { + startHandlerStub.callsFake(() => { + receiverInst.stop().then(resolve).catch(reject); + return testUtil.sleep(100); + }); + receiverInst.start().catch(reject); + }) + .then(() => { + assert.deepStrictEqual(fetchStates(), ['STARTING', 'RUNNING', 'STOPPING', 'STOPPED'], 'should match state\'s change order'); + })); + + it('should not wait till completion of previous operation', () => { + const errors = []; + return new Promise((resolve, reject) => { + startHandlerStub.callsFake(() => receiverInst.stop(false).catch(err => errors.push(err))); + receiverInst.start().then(resolve).catch(reject); + }) + .then(() => { + assert.strictEqual(errors.length, 1, 'should throw error'); + assert.isTrue(/STARTING.*STOPPING/.test(errors[0]), 'should not be able to change state to STOPPING'); + }); + }); + + it('should change state to FAILED_TO_STOP when failed', () => { + stopHandlerStub.rejects(new Error('expected error')); + return assert.isRejected(receiverInst.stop(), /expected error/) + .then(() => { + assert.isTrue(receiverInst.hasState('FAILED_TO_STOP'), 'should have FAILED_TO_STOP state'); + }); + }); + }); +}); + +describe('Safe Event Emitter', () => { + const 
eventName = 'eventName'; + let emitter; + + beforeEach(() => { + emitter = new baseDataReceiver.SafeEventEmitter(); + }); + + afterEach(() => { + emitter.removeAllListeners(eventName); + }); + + describe('safeEmit', () => { + it('should catch listener error', () => { + const error = new Error('test error'); + emitter.on(eventName, () => { throw error; }); + const ret = emitter.safeEmit(eventName); + assert.isTrue(error === ret, 'should return error'); + }); + }); + + describe('safeEmitAsync', () => { + it('should catch listener error in sync part', () => { + const error = new Error('test error'); + emitter.on(eventName, () => { throw error; }); + return assert.becomes(emitter.safeEmitAsync(eventName), error); + }); + + it('should catch listener error in async part', () => { + const error = new Error('test error'); + emitter.on(eventName, () => new Promise((resolve, reject) => reject(error))); + return assert.becomes(emitter.safeEmitAsync(eventName), error); + }); + }); +}); diff --git a/test/unit/eventListener/eventListenerTests.js b/test/unit/eventListener/eventListenerTests.js new file mode 100644 index 00000000..61d3ad60 --- /dev/null +++ b/test/unit/eventListener/eventListenerTests.js @@ -0,0 +1,446 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ('EULA') for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +/* eslint-disable import/order */ + +require('../shared/restoreCache')(); + +const chai = require('chai'); +const chaiAsPromised = require('chai-as-promised'); +const sinon = require('sinon'); + +const configUtil = require('../../../src/lib/utils/config'); +const configWorker = require('../../../src/lib/config'); +const dataPipeline = require('../../../src/lib/dataPipeline'); +const EventListener = require('../../../src/lib/eventListener'); +const eventListenerTestData = require('../data/eventListenerTestsData'); +const messageStream = require('../../../src/lib/eventListener/messageStream'); +const testUtil = require('../shared/util'); +const tracers = require('../../../src/lib/utils/tracer').Tracer; +const util = require('../../../src/lib/utils/misc'); + +chai.use(chaiAsPromised); +const assert = chai.assert; + +describe('Event Listener', () => { + /** + * The 'change' event is the only 'true' way to test EventListener because + * we are not using any other API to interact with it + */ + let actualData; + let allTracersStub; + let activeTracersStub; + let uuidCounter = 0; + + const defaultTestPort = 1234; + const defaultDeclarationPort = 6514; + + let origDecl; + + const assertListener = function (actualListener, expListener) { + assert.deepStrictEqual(actualListener.tags, expListener.tags || {}); + assert.deepStrictEqual(actualListener.actions, expListener.actions || [{ + enable: true, + setTag: { + application: '`A`', + tenant: '`T`' + } + }]); + if (expListener.hasFilterFunc) { + assert.isNotNull(actualListener.filterFunc); + } else { + assert.isNull(actualListener.filterFunc); + } + }; + + const validateAndNormalize = function (declaration) { + return configWorker.validate(util.deepCopy(declaration)) + .then(validated => Promise.resolve(configUtil.normalizeConfig(validated))); + }; + + const validateAndNormalizeEmit = function (declaration) { + return validateAndNormalize(declaration) + .then(normalized => 
configWorker.emitAsync('change', normalized)); + }; + + const addData = (dataPipelineArgs) => { + const dataCtx = dataPipelineArgs[0]; + actualData[dataCtx.sourceId] = actualData[dataCtx.sourceId] || []; + actualData[dataCtx.sourceId].push(dataCtx.data); + }; + + const gatherIds = () => { + const ids = EventListener.getAll().map(inst => inst.id); + ids.sort(); + return ids; + }; + + beforeEach(() => { + actualData = {}; + activeTracersStub = []; + allTracersStub = []; + + origDecl = { + class: 'Telemetry', + Listener1: { + class: 'Telemetry_Listener', + port: defaultTestPort, + tag: { + tenant: '`T`', + application: '`A`' + } + } + }; + + sinon.stub(tracers, 'createFromConfig').callsFake((className, objName, config) => { + allTracersStub.push(objName); + if (config.trace) { + activeTracersStub.push(objName); + } + return null; + }); + + sinon.stub(dataPipeline, 'process').callsFake(function () { + addData(Array.from(arguments)); + return Promise.resolve(); + }); + + ['startHandler', 'stopHandler'].forEach((method) => { + sinon.stub(messageStream.MessageStream.prototype, method).resolves(); + }); + + sinon.stub(util, 'generateUuid').callsFake(() => { + uuidCounter += 1; + return `uuid${uuidCounter}`; + }); + + return validateAndNormalizeEmit(util.deepCopy(origDecl)) + .then(() => { + const listeners = EventListener.instances; + assert.strictEqual(Object.keys(listeners).length, 1); + assert.deepStrictEqual(gatherIds(), ['uuid1']); + assertListener(listeners.Listener1, { + tags: { tenant: '`T`', application: '`A`' } + }); + assert.strictEqual(allTracersStub.length, 1); + assert.strictEqual(activeTracersStub.length, 0); + assert.strictEqual(EventListener.receiversManager.getAll().length, 1); + + // reset counts + activeTracersStub = []; + allTracersStub = []; + uuidCounter = 0; + }); + }); + + afterEach(() => configWorker.emitAsync('change', { components: [], mappings: {} }) + .then(() => { + const listeners = EventListener.getAll(); + 
assert.strictEqual(Object.keys(listeners).length, 0); + assert.strictEqual(EventListener.receiversManager.getAll().length, 0); + }) + .then(() => { + uuidCounter = 0; + sinon.restore(); + })); + + describe('events handling', () => { + let loggerSpy; + + beforeEach(() => { + loggerSpy = sinon.spy(EventListener.instances.Listener1.logger, 'exception'); + }); + eventListenerTestData.onMessagesHandler.forEach((testSet) => { + testUtil.getCallableIt(testSet)(testSet.name, () => EventListener.receiversManager + .getMessageStream(defaultTestPort) + .emitAsync('messages', testSet.rawEvents) + .then(() => { + assert.isTrue(loggerSpy.notCalled); + assert.deepStrictEqual(actualData[EventListener.instances.Listener1.id], testSet.expectedData); + })); + }); + }); + + it('should create listeners with default and custom opts on config change event (no prior config)', () => { + const newDecl = util.deepCopy(origDecl); + newDecl.Listener2 = { + class: 'Telemetry_Listener', + match: 'somePattern', + trace: true + }; + // should receive no data due to filtering + newDecl.Listener3 = { + class: 'Telemetry_Listener', + match: 'somePattern2', + trace: true + }; + + return validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(activeTracersStub.length, 2); + assert.strictEqual(allTracersStub.length, 3); + assert.strictEqual(EventListener.receiversManager.getAll().length, 2); + assert.deepStrictEqual(gatherIds(), ['uuid1', 'uuid2', 'uuid3']); + + const listeners = EventListener.instances; + assertListener(listeners.Listener1, { + tags: { tenant: '`T`', application: '`A`' } + }); + assertListener(listeners.Listener2, { hasFilterFunc: true }); + assertListener(listeners.Listener3, { hasFilterFunc: true }); + + return Promise.all([ + EventListener.receiversManager.getMessageStream(defaultDeclarationPort).emitAsync('messages', ['virtual_name="somePattern"']), + EventListener.receiversManager.getMessageStream(defaultTestPort).emitAsync('messages', ['1234']) + ]) + 
.then(() => assert.deepStrictEqual(actualData, { + [listeners.Listener1.id]: [{ data: '1234', telemetryEventCategory: 'event' }], + [listeners.Listener2.id]: [{ virtual_name: 'somePattern', telemetryEventCategory: 'LTM' }] + })); + }); + }); + + it('should stop existing listener(s) when removed from config', () => { + assert.notStrictEqual(EventListener.getAll().length, 0); + assert.notStrictEqual(EventListener.receiversManager.getAll().length, 0); + return configWorker.emitAsync('change', { components: [], mappings: {} }) + .then(() => { + assert.strictEqual(activeTracersStub.length, 0); + assert.strictEqual(allTracersStub.length, 0); + + assert.strictEqual(EventListener.getAll().length, 0); + assert.strictEqual(EventListener.receiversManager.getAll().length, 0); + }); + }); + + it('should update existing listener(s) without restarting if port is the same', () => { + const newDecl = util.deepCopy(origDecl); + newDecl.Listener1.trace = true; + const updateSpy = sinon.stub(EventListener.prototype, 'updateConfig'); + const existingMessageStream = EventListener.receiversManager.registered[newDecl.Listener1.port]; + + return validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(activeTracersStub.length, 1); + assert.strictEqual(allTracersStub.length, 1); + assert.deepStrictEqual(gatherIds(), ['uuid1']); + + const listeners = EventListener.instances; + assertListener(listeners.Listener1, { + tags: { tenant: '`T`', application: '`A`' } + }); + // one for each protocol + assert.isTrue(updateSpy.calledOnce); + + assert.strictEqual(EventListener.getAll().length, 1); + assert.strictEqual(EventListener.receiversManager.getAll().length, 1); + + const currentMessageStream = EventListener.receiversManager.getMessageStream(newDecl.Listener1.port); + assert.isTrue(existingMessageStream === currentMessageStream, 'should not re-create Message Stream'); + + return existingMessageStream.emitAsync('messages', ['6514']) + .then(() => 
assert.deepStrictEqual(actualData, { + [listeners.Listener1.id]: [{ data: '6514', telemetryEventCategory: 'event' }] + })); + }); + }); + + it('should add a new listener without updating existing one when skipUpdate = true', () => { + const updateSpy = sinon.spy(EventListener.prototype, 'updateConfig'); + const newDecl = util.deepCopy(origDecl); + newDecl.New = { + class: 'Telemetry_Namespace', + Listener1: { + class: 'Telemetry_Listener', + port: 2345, + trace: true + } + }; + return validateAndNormalize(newDecl) + .then((normalized) => { + normalized.components[0].skipUpdate = true; + return configWorker.emitAsync('change', normalized); + }) + .then(() => { + assert.strictEqual(activeTracersStub.length, 1); + assert.strictEqual(allTracersStub.length, 1); + assert.deepStrictEqual(gatherIds(), ['uuid1', 'uuid2']); + + const listeners = EventListener.instances; + assertListener(listeners.Listener1, { + tags: { tenant: '`T`', application: '`A`' } + }); + assertListener(listeners['New::Listener1'], {}); + // one for each protocol, called through constructor + assert.isTrue(updateSpy.calledTwice); + assert.strictEqual(EventListener.getAll().length, 2); + assert.strictEqual(EventListener.receiversManager.getAll().length, 2); + }); + }); + + it('should remove disabled listener', () => { + const newDecl = util.deepCopy(origDecl); + newDecl.Listener1.enable = false; + + return validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(activeTracersStub.length, 0); + assert.strictEqual(allTracersStub.length, 0); + assert.strictEqual(EventListener.getAll().length, 0); + assert.strictEqual(EventListener.receiversManager.getAll().length, 0); + }); + }); + + it('should allow another instance to listen on the same port', () => { + const newDecl = util.deepCopy(origDecl); + newDecl.New = { + class: 'Telemetry_Namespace', + Listener1: { + class: 'Telemetry_Listener', + port: newDecl.Listener1.port, + trace: true + } + }; + + return 
validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(EventListener.getAll().length, 2); + assert.strictEqual(EventListener.receiversManager.getAll().length, 1); + assert.deepStrictEqual(gatherIds(), ['uuid1', 'uuid2']); + const listeners = EventListener.instances; + + return EventListener.receiversManager.getMessageStream(newDecl.Listener1.port).emitAsync('messages', ['6514']) + .then(() => assert.deepStrictEqual(actualData, { + [listeners.Listener1.id]: [{ data: '6514', telemetryEventCategory: 'event' }], + [listeners['New::Listener1'].id]: [{ data: '6514', telemetryEventCategory: 'event' }] + })); + }); + }); + + it('should update config of existing listener', () => { + const newDecl = util.deepCopy(origDecl); + newDecl.Listener1 = { + class: 'Telemetry_Listener', + port: 9999, + tag: { + tenant: 'Tenant', + application: 'Application' + }, + trace: true, + match: 'test', + actions: [{ + setTag: { + application: '`B`', + tenant: '`C`' + } + }] + }; + return validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(activeTracersStub.length, 1); + assert.strictEqual(allTracersStub.length, 1); + assert.strictEqual(EventListener.getAll().length, 1); + assert.strictEqual(EventListener.receiversManager.getAll().length, 1); + assert.deepStrictEqual(gatherIds(), ['uuid1']); + const listeners = EventListener.instances; + + assertListener(listeners.Listener1, { + tags: { tenant: 'Tenant', application: 'Application' }, + hasFilterFunc: true, + actions: [{ + enable: true, + setTag: { + application: '`B`', + tenant: '`C`' + } + }] + }); + + return EventListener.receiversManager.getMessageStream(9999).emitAsync('messages', ['virtual_name="test"']) + .then(() => assert.deepStrictEqual(actualData, { + [listeners.Listener1.id]: [{ + virtual_name: 'test', + telemetryEventCategory: 'LTM', + tenant: 'Tenant', + application: 'Application' + }] + })); + }); + }); + + it('should set minimum and default props', () => { + const 
newDecl = util.deepCopy(origDecl); + newDecl.Listener1 = { + class: 'Telemetry_Listener' + }; + return validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(EventListener.getAll().length, 1); + assert.strictEqual(EventListener.receiversManager.getAll().length, 1); + assert.strictEqual(activeTracersStub.length, 0); + assert.strictEqual(allTracersStub.length, 1); + assert.deepStrictEqual(gatherIds(), ['uuid1']); + const listeners = EventListener.instances; + + assertListener(listeners.Listener1, {}); + + return EventListener.receiversManager.getMessageStream(defaultDeclarationPort).emitAsync('messages', ['data']) + .then(() => assert.deepStrictEqual(actualData, { + [listeners.Listener1.id]: [{ + data: 'data', + telemetryEventCategory: 'event' + }] + })); + }); + }); + + it('should try to restart data receiver 10 times', () => { + const msStartSpy = sinon.spy(messageStream.MessageStream.prototype, 'restart'); + messageStream.MessageStream.prototype.startHandler.rejects(new Error('test error')); + + const newDecl = util.deepCopy(origDecl); + newDecl.Listener1 = { + class: 'Telemetry_Listener', + port: 9999, + tag: { + tenant: 'Tenant', + application: 'Application' + }, + trace: true, + match: 'test', + actions: [{ + setTag: { + application: '`B`', + tenant: '`C`' + } + }] + }; + return validateAndNormalizeEmit(util.deepCopy(newDecl)) + .then(() => { + assert.strictEqual(msStartSpy.callCount, 10); + }); + }); + + it('should destroy all registered data receivers', () => { + const receivers = [ + EventListener.receiversManager.getMessageStream(6514), + EventListener.receiversManager.getMessageStream(6515) + ]; + return Promise.all(receivers.map(r => r.start())) + .then(() => { + receivers.forEach(r => assert.isTrue(r.isRunning(), 'should be in running state')); + return EventListener.receiversManager.destroyAll(); + }) + .then(() => { + receivers.forEach(r => assert.isTrue(r.isDestroyed(), 'should be destroyed')); + 
assert.deepStrictEqual(EventListener.receiversManager.registered, {}, 'should have no registered receivers'); + }); + }); +}); diff --git a/test/unit/eventListener/messageStreamTests.js b/test/unit/eventListener/messageStreamTests.js new file mode 100644 index 00000000..e3697e99 --- /dev/null +++ b/test/unit/eventListener/messageStreamTests.js @@ -0,0 +1,297 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ('EULA') for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. + */ + +'use strict'; + +/* eslint-disable import/order */ + +require('../shared/restoreCache')(); + +const chai = require('chai'); +const chaiAsPromised = require('chai-as-promised'); +const EventEmitter = require('events').EventEmitter; +const sinon = require('sinon'); +const net = require('net'); +const udp = require('dgram'); + +const messageStreamTestData = require('../data/messageStreamTestsData'); +const messageStream = require('../../../src/lib/eventListener/messageStream'); +const testUtil = require('../shared/util'); + +chai.use(chaiAsPromised); +const assert = chai.assert; + +describe('Message Stream Receiver', () => { + let dataCallbackSpy; + let onMockCreatedCallback; + let receiverInst; + let serverMocks; + + const testPort = 6514; + const testAddr = 'localhost10'; + const testAddr6 = '::localhost10'; + const testBufferTimeout = 10 * 1000; + + class MockUdpServer extends EventEmitter { + setInitArgs(opts) { + this.opts = opts; + } + + bind() { + this.emit('listenMock', this, Array.from(arguments)); + } + + close() { + this.emit('closeMock', this, Array.from(arguments)); + } + } + + class MockTcpServer extends EventEmitter { + setInitArgs(opts) { + this.opts = opts; + } + + listen() { + this.emit('listenMock', this, 
Array.from(arguments)); + } + + close() { + this.emit('closeMock', this, Array.from(arguments)); + } + } + + class MockTcpSocket extends EventEmitter { + destroy() {} + } + + const getServerMock = (cls, ipv6) => serverMocks.find(mock => mock instanceof cls && (ipv6 === undefined || (ipv6 && mock.opts.type === 'udp6') || (!ipv6 && mock.opts.type === 'udp4'))); + const createServerMock = (Cls, opts) => { + const mock = new Cls(); + mock.setInitArgs(opts); + serverMocks.push(mock); + if (onMockCreatedCallback) { + onMockCreatedCallback(mock); + } + return mock; + }; + + beforeEach(() => { + dataCallbackSpy = sinon.spy(); + serverMocks = []; + receiverInst = new messageStream.MessageStream(testPort, { address: testAddr }); + receiverInst.on('messages', dataCallbackSpy); + + sinon.stub(messageStream.MessageStream, 'MAX_BUFFER_TIMEOUT').value(testBufferTimeout); + + sinon.stub(udp, 'createSocket').callsFake(opts => createServerMock(MockUdpServer, opts)); + sinon.stub(net, 'createServer').callsFake(opts => createServerMock(MockTcpServer, opts)); + onMockCreatedCallback = (serverMock) => { + serverMock.on('listenMock', () => serverMock.emit('listening')); + serverMock.on('closeMock', (inst, args) => { + serverMock.emit('close'); + args[0](); // call callback + }); + }; + }); + + afterEach(() => { + sinon.restore(); + }); + + describe('.dataHandler()', () => { + let socketId = 0; + const createSocketInfo = (cls, ipv6) => { + socketId += 1; + if (cls === MockUdpServer) { + return { + address: ipv6 ? testAddr6 : testAddr, + port: testPort + socketId + }; + } + const socketMock = new MockTcpSocket(); + socketMock.remoteAddress = ipv6 ? 
testAddr6 : testAddr; + socketMock.remotePort = testPort + socketId; + return socketMock; + }; + + describe('data handling for each protocol', () => { + const testMessage = '<1>testData\n'; + + it('should retrieve data via udp4 socket', () => receiverInst.start() + .then(() => { + const socketInfo = createSocketInfo(MockUdpServer, false); + getServerMock(MockUdpServer, false).emit('message', testMessage, socketInfo); + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(dataCallbackSpy.args[0], [[testMessage.slice(0, -1)]]); + })); + + it('should retrieve data via udp6 socket', () => receiverInst.start() + .then(() => { + const socketInfo = createSocketInfo(MockUdpServer, true); + getServerMock(MockUdpServer, true).emit('message', testMessage, socketInfo); + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(dataCallbackSpy.args[0], [[testMessage.slice(0, -1)]]); + })); + + it('should retrieve data via tcp socket', () => receiverInst.start() + .then(() => { + const socketInfo = createSocketInfo(MockTcpServer); + getServerMock(MockTcpServer).emit('connection', socketInfo); + socketInfo.emit('data', testMessage); + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(dataCallbackSpy.args[0], [[testMessage.slice(0, -1)]]); + })); + + it('should retrieve data via all protocols at the same time', () => receiverInst.start() + .then(() => { + const socketInfoTcp = createSocketInfo(MockTcpServer); + getServerMock(MockTcpServer).emit('connection', socketInfoTcp); + const socketInfoUdp4 = createSocketInfo(MockUdpServer, false); + const socketInfoUdp6 = createSocketInfo(MockUdpServer, true); + + socketInfoTcp.emit('data', 'start'); + getServerMock(MockUdpServer, false).emit('message', 'start', socketInfoUdp4); + getServerMock(MockUdpServer, true).emit('message', 'start', socketInfoUdp6); + socketInfoTcp.emit('data', 'end\n'); + getServerMock(MockUdpServer, false).emit('message', 'end\n', socketInfoUdp4); + 
getServerMock(MockUdpServer, true).emit('message', 'end\n', socketInfoUdp6); + + return receiverInst.stop(); + }) + .then(() => { + assert.includeDeepMembers(dataCallbackSpy.args, [ + [['startend']], + [['startend']], + [['startend']] + ]); + })); + }); + + describe('chunked data', () => { + const fetchEvents = () => { + const events = []; + dataCallbackSpy.args.forEach((args) => { + args[0].forEach(arg => events.push(arg)); + }); + return events; + }; + let fakeClock; + + beforeEach(() => { + fakeClock = sinon.useFakeTimers(); + }); + + afterEach(() => { + fakeClock.restore(); + }); + + messageStreamTestData.dataHandler.forEach((testConf) => { + const separators = JSON.stringify(testConf.chunks).indexOf('{sep}') !== -1 ? ['\n', '\r\n'] : ['']; + separators.forEach((sep) => { + let sepMsg = 'built-in the test new line separator'; + if (sep) { + sepMsg = sep.replace(/\n/g, '\\n').replace(/\r/g, '\\r'); + } + testUtil.getCallableIt(testConf)(`should process data - ${testConf.name} (${sepMsg})`, () => receiverInst.start() + .then(() => { + const socketInfo = createSocketInfo(MockUdpServer, false); + const server = getServerMock(MockUdpServer, false); + testConf.chunks.forEach(chunk => server.emit('message', chunk.replace(/\{sep\}/g, sep), socketInfo)); + fakeClock.tick(testBufferTimeout * 2); + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(fetchEvents(), testConf.expectedData); + })); + }); + }); + }); + }); + + describe('.restart()', () => { + it('should recreate all receivers on restart', () => receiverInst.start() + .then(() => { + assert.strictEqual(serverMocks.length, 3, 'should create 3 sockets'); + assert.strictEqual(getServerMock(MockUdpServer, false).opts.type, 'udp4', 'should create udp4 listener'); + assert.strictEqual(getServerMock(MockUdpServer, true).opts.type, 'udp6', 'should create udp6 listener'); + assert.strictEqual(getServerMock(MockTcpServer).opts.allowHalfOpen, false, 'should create tcp listener'); + return 
receiverInst.restart(); + }) + .then(() => { + assert.strictEqual(serverMocks.length, 6, 'should create 3 more sockets'); + assert.strictEqual(serverMocks.filter(mock => mock.opts.type === 'udp4').length, 2, 'should have 2 udp4 sockets'); + assert.strictEqual(serverMocks.filter(mock => mock.opts.type === 'udp6').length, 2, 'should have 2 udp6 sockets'); + assert.strictEqual(serverMocks.filter(mock => mock.opts.allowHalfOpen === false).length, 2, 'should have 2 tcp sockets'); + })); + }); + + describe('.start()', () => { + it('should start receivers', () => receiverInst.start() + .then(() => { + assert.strictEqual(serverMocks.length, 3, 'should create 3 sockets'); + assert.strictEqual(getServerMock(MockUdpServer, false).opts.type, 'udp4', 'should create udp4 listener'); + assert.strictEqual(getServerMock(MockUdpServer, true).opts.type, 'udp6', 'should create udp6 listener'); + assert.strictEqual(getServerMock(MockTcpServer).opts.allowHalfOpen, false, 'should create tcp listener'); + assert.isTrue(receiverInst.isRunning(), 'should be in running state'); + })); + + it('should fail to start', () => { + let firstOnly = false; + onMockCreatedCallback = (serverMock) => { + if (!firstOnly) { + firstOnly = true; + serverMock.on('listenMock', () => { + serverMock.emit('close'); + }); + } else { + serverMock.on('listenMock', () => serverMock.emit('listening')); + } + }; + return assert.isRejected(receiverInst.start(), /socket closed before/) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + }); + }); + + it('should throw error on unknown protocol', () => { + receiverInst.protocols = ['test']; + return assert.isRejected(receiverInst.start(), /Unknown protocol/); + }); + }); + + describe('.stop()', () => { + it('should be able to stop receiver without active receivers', () => receiverInst.stop() + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + 
assert.isTrue(receiverInst.hasState(messageStream.MessageStream.STATE.STOPPED), 'should have STOPPED state'); + })); + + it('should be able to stop receiver', () => { + const closeSpy = sinon.spy(); + return receiverInst.start() + .then(() => { + assert.isTrue(receiverInst.hasReceivers(), 'should have receivers started'); + getServerMock(MockTcpServer).on('close', closeSpy); + getServerMock(MockUdpServer, false).on('close', closeSpy); + getServerMock(MockUdpServer, true).on('close', closeSpy); + return receiverInst.stop(); + }) + .then(() => { + assert.strictEqual(closeSpy.callCount, 3, 'should close 3 sockets'); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(messageStream.MessageStream.STATE.STOPPED), 'should have STOPPED state'); + }); + }); + }); +}); diff --git a/test/unit/eventListener/tcpUdpDataReceiverTests.js b/test/unit/eventListener/tcpUdpDataReceiverTests.js new file mode 100644 index 00000000..bc3fdd0a --- /dev/null +++ b/test/unit/eventListener/tcpUdpDataReceiverTests.js @@ -0,0 +1,705 @@ +/* + * Copyright 2020. F5 Networks, Inc. See End User License Agreement ('EULA') for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
+ */ + +'use strict'; + +/* eslint-disable import/order */ + +require('../shared/restoreCache')(); + +const chai = require('chai'); +const chaiAsPromised = require('chai-as-promised'); +const EventEmitter = require('events').EventEmitter; +const sinon = require('sinon'); +const net = require('net'); +const udp = require('dgram'); + +const tcpUdpDataReceiver = require('../../../src/lib/eventListener/tcpUdpDataReceiver'); + +chai.use(chaiAsPromised); +const assert = chai.assert; + +describe('TCP and UDP Receivers', () => { + let dataCallbackSpy; + let receiverInst; + + const testPort = 6514; + const testAddr = 'localhost10'; + const testAddr6 = '::localhost10'; + + beforeEach(() => { + sinon.stub(tcpUdpDataReceiver.TcpUdpBaseDataReceiver, 'RESTART_DELAY').value(1); + dataCallbackSpy = sinon.spy(); + }); + + afterEach(() => { + sinon.restore(); + }); + + describe('TcpUdpBaseDataReceiver', () => { + beforeEach(() => { + receiverInst = new tcpUdpDataReceiver.TcpUdpBaseDataReceiver( + testPort, + { address: testAddr } + ); + receiverInst.on('data', dataCallbackSpy); + }); + + describe('abstract methods', () => { + beforeEach(() => { + sinon.restore(); + }); + + ['getConnKey'].forEach((method) => { + it(`.${method}()`, () => { + assert.throws( + () => receiverInst[method](), + /Not implemented/, + 'should throw "Not implemented" error' + ); + }); + }); + }); + + describe('safeRestart', () => { + it('should not fail when .restart() rejects', () => { + sinon.stub(receiverInst, 'restart').rejects(new Error('restart error')); + return assert.isFulfilled(receiverInst.safeRestart()); + }); + + it('should not fail when .restart() throws error', () => { + sinon.stub(receiverInst, 'restart').throws(new Error('restart error')); + return assert.isFulfilled(receiverInst.safeRestart()); + }); + }); + }); + + describe('TCPDataReceiver', () => { + class MockSocket extends EventEmitter { + destroy() { + this.emit('destroyMock', this); + } + } + class MockServer extends EventEmitter { + 
setInitArgs(opts) { + this.opts = opts; + } + + listen() { + this.emit('listenMock', this, Array.from(arguments)); + } + + close() { + this.emit('closeMock', this, Array.from(arguments)); + } + } + + let createServerMockCb; + + beforeEach(() => { + receiverInst = new tcpUdpDataReceiver.TCPDataReceiver(testPort, { address: testAddr }); + receiverInst.on('data', dataCallbackSpy); + + sinon.stub(net, 'createServer').callsFake(function () { + const serverMock = new MockServer(); + serverMock.setInitArgs.apply(serverMock, arguments); + if (createServerMockCb) { + createServerMockCb(serverMock); + } + return serverMock; + }); + }); + + afterEach(() => { + createServerMockCb = null; + }); + + describe('.callCallback()', () => { + let socketId = 0; + let serverMock; + + const createMockSocket = () => { + socketId += 1; + const socketMock = new MockSocket(); + socketMock.remoteAddress = testAddr; + socketMock.remotePort = testPort + socketId; + return socketMock; + }; + + beforeEach(() => { + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', () => serverMock.emit('listening')); + serverMock.on('closeMock', (inst, args) => { + serverMock.emit('close'); + args[0](); // call callback + }); + }; + }); + + afterEach(() => { + serverMock = null; + }); + + it('should call callback when received data', () => { + const expectedData = []; + return receiverInst.start() + .then(() => { + const socket1 = createMockSocket(); + expectedData.push(receiverInst.getConnKey(socket1)); + serverMock.emit('connection', socket1); + + const socket2 = createMockSocket(); + expectedData.push(receiverInst.getConnKey(socket2)); + serverMock.emit('connection', socket2); + + socket1.emit('data', expectedData[0]); + socket2.emit('data', expectedData[1]); + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(dataCallbackSpy.args, [ + [expectedData[0], expectedData[0]], + [expectedData[1], expectedData[1]] + ]); + 
assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.TCPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + }); + }); + }); + + describe('.getConnKey()', () => { + it('should compute unique key', () => { + assert.strictEqual(receiverInst.getConnKey({ remoteAddress: testAddr, remotePort: testPort }), `${testAddr}-${testPort}`); + }); + }); + + describe('.start()', () => { + it('should start receiver', () => { + let serverMock; + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', (inst, args) => { + assert.deepStrictEqual(args[0], { port: testPort, address: testAddr }, 'should match listen options'); + serverMock.emit('listening'); + }); + }; + return receiverInst.start() + .then(() => { + assert.isTrue(receiverInst.isRunning(), 'should be in running state'); + assert.deepStrictEqual(serverMock.opts, { allowHalfOpen: false, pauseOnConnect: false }, 'should match server options'); + }); + }); + + it('should fail to start when socket was closed before becoming active', () => { + createServerMockCb = (serverMock) => { + serverMock.on('listenMock', () => { + serverMock.emit('close'); + }); + }; + return assert.isRejected(receiverInst.start(), /socket closed before/) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + }); + }); + + it('should fail to start when an error is raised before the socket becomes active', () => { + createServerMockCb = (serverMock) => { + serverMock.on('listenMock', () => { + serverMock.emit('error', new Error('test error')); + }); + }; + return assert.isRejected(receiverInst.start(), /test error/) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + }); + }); + + it('should restart receiver when an error is caught', () => { + const closeSpy = sinon.spy((inst, args) => { + inst.emit('close'); + args[0](); + }); + const listenSpy =
sinon.spy((inst) => { + inst.emit('listening'); + setTimeout(() => { + inst.emit('error', new Error('test error')); + }, 10); + }); + createServerMockCb = (serverMock) => { + serverMock.on('listenMock', listenSpy); + serverMock.on('closeMock', closeSpy); + }; + + return new Promise((resolve, reject) => { + const originSafeRestart = receiverInst.safeRestart.bind(receiverInst); + sinon.stub(receiverInst, 'safeRestart') + .callsFake(() => originSafeRestart()) // listen #2 and listen #3, close #1 and close #2 + .onThirdCall().callsFake(() => { + receiverInst.destroy().then(resolve).catch(reject); // close #3 + return originSafeRestart(); + }); + receiverInst.start() // listen #1 + .catch(reject); + }) + .then(() => { + assert.strictEqual(closeSpy.callCount, 3, 'should call socket.close 3 times'); + assert.strictEqual(listenSpy.callCount, 3, 'should call socket.listen 3 times'); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.TCPDataReceiver.STATE.DESTROYED), 'should have DESTROYED state'); + }); + }); + }); + + describe('.stop()', () => { + let serverMock; + + beforeEach(() => { + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', () => serverMock.emit('listening')); + serverMock.on('closeMock', (inst, args) => { + serverMock.emit('close'); + args[0](); // call callback + }); + }; + }); + + afterEach(() => { + serverMock = null; + }); + + it('should be able to stop receiver without active socket', () => receiverInst.stop() + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.TCPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + })); + + it('should be able to stop receiver', () => receiverInst.start() + .then(receiverInst.stop.bind(receiverInst)) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in 
running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.TCPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + })); + + it('should close all opened connections', () => { + const sockets = []; + const createMockSocket = () => { + const socketMock = new MockSocket(); + socketMock.remoteAddress = testAddr; + socketMock.remotePort = testPort + sockets.length; + socketMock.destroy = sinon.spy(() => { + socketMock.emit('close'); + }); + sockets.push(socketMock); + return socketMock; + }; + return receiverInst.start() + .then(() => { + for (let i = 0; i < 10; i += 1) { + serverMock.emit('connection', createMockSocket()); + } + // close first socket to check that socket was removed from list + sockets[0].emit('close'); + // should call socket.destroy and remove socket from list too + sockets[1].emit('error'); + return receiverInst.stop(); + }) + .then(() => { + assert.strictEqual(sockets[0].destroy.callCount, 0, 'should not call socket.destroy for closed socket'); + sockets.slice(1).forEach((socketMock) => { + assert.strictEqual(socketMock.destroy.callCount, 1, 'should call socket.destroy just once for each socket'); + }); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.TCPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + }); + }); + }); + }); + + describe('UDPDataReceiver', () => { + class MockServer extends EventEmitter { + setInitArgs(opts) { + this.opts = opts; + } + + bind() { + this.emit('listenMock', this, Array.from(arguments)); + } + + close() { + this.emit('closeMock', this, Array.from(arguments)); + } + } + + let createServerMockCb; + + beforeEach(() => { + receiverInst = new tcpUdpDataReceiver.UDPDataReceiver(testPort, { address: testAddr }); + receiverInst.on('data', dataCallbackSpy); + + sinon.stub(udp, 'createSocket').callsFake(function () { + const serverMock = new MockServer(); + serverMock.setInitArgs.apply(serverMock, arguments); + 
if (createServerMockCb) { + createServerMockCb(serverMock); + } + return serverMock; + }); + }); + + afterEach(() => { + createServerMockCb = null; + }); + + describe('.callCallback()', () => { + let socketId = 0; + let serverMock; + + const createSocketInfo = () => { + socketId += 1; + return { + address: testAddr, + port: testPort + socketId + }; + }; + + beforeEach(() => { + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', () => serverMock.emit('listening')); + serverMock.on('closeMock', (inst, args) => { + serverMock.emit('close'); + args[0](); // call callback + }); + }; + }); + + afterEach(() => { + serverMock = null; + }); + + it('should call callback when received data', () => { + const expectedData = []; + return receiverInst.start() + .then(() => { + const socketInfo1 = createSocketInfo(); + expectedData.push(receiverInst.getConnKey(socketInfo1)); + serverMock.emit('message', expectedData[0], socketInfo1); + + const socketInfo2 = createSocketInfo(); + expectedData.push(receiverInst.getConnKey(socketInfo2)); + serverMock.emit('message', expectedData[1], socketInfo2); + + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(dataCallbackSpy.args, [ + [expectedData[0], expectedData[0]], + [expectedData[1], expectedData[1]] + ]); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.UDPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + }); + }); + }); + + describe('.getConnKey()', () => { + it('should compute unique key', () => { + assert.strictEqual(receiverInst.getConnKey({ address: testAddr, port: testPort }), `${testAddr}-${testPort}`); + }); + }); + + describe('.start()', () => { + it('should start receiver (udp4 by default)', () => { + let serverMock; + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', (inst, args) => { + 
assert.deepStrictEqual(args[0], { port: testPort, address: testAddr }, 'should match listen options'); + serverMock.emit('listening'); + }); + }; + return receiverInst.start() + .then(() => { + assert.isTrue(receiverInst.isRunning(), 'should be in running state'); + assert.deepStrictEqual(serverMock.opts, { type: 'udp4', ipv6Only: false, reuseAddr: true }, 'should match socket options'); + }); + }); + + it('should start receiver (udp6)', () => { + let serverMock; + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', (inst, args) => { + assert.deepStrictEqual(args[0], { port: testPort, address: testAddr }, 'should match listen options'); + serverMock.emit('listening'); + }); + }; + receiverInst = new tcpUdpDataReceiver.UDPDataReceiver(testPort, { address: testAddr }, 'udp6'); + return receiverInst.start() + .then(() => { + assert.isTrue(receiverInst.isRunning(), 'should be in running state'); + assert.deepStrictEqual(serverMock.opts, { type: 'udp6', ipv6Only: true, reuseAddr: true }, 'should match socket options'); + }); + }); + + it('should fail to start when socket was closed before becoming active', () => { + createServerMockCb = (serverMock) => { + serverMock.on('listenMock', () => { + serverMock.emit('close'); + }); + }; + return assert.isRejected(receiverInst.start(), /socket closed before/) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + }); + }); + + it('should fail to start when an error is raised before the socket becomes active', () => { + createServerMockCb = (serverMock) => { + serverMock.on('listenMock', () => { + serverMock.emit('error', new Error('test error')); + }); + }; + return assert.isRejected(receiverInst.start(), /test error/) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + }); + }); + + it('should restart receiver when an error is caught', () => { + const closeSpy = sinon.spy((inst, args) => { + inst.emit('close'); +
args[0](); + }); + const listenSpy = sinon.spy((inst) => { + inst.emit('listening'); + setTimeout(() => { + inst.emit('error', new Error('test error')); + }, 10); + }); + createServerMockCb = (serverMock) => { + serverMock.on('listenMock', listenSpy); + serverMock.on('closeMock', closeSpy); + }; + + return new Promise((resolve, reject) => { + const originSafeRestart = receiverInst.safeRestart.bind(receiverInst); + sinon.stub(receiverInst, 'safeRestart') + .callsFake(() => originSafeRestart()) // listen #2 and listen #3, close #1 and close #2 + .onThirdCall().callsFake(() => { + receiverInst.destroy().then(resolve).catch(reject); // close #3 + return originSafeRestart(); + }); + receiverInst.start() // listen #1 + .catch(reject); + }) + .then(() => { + assert.strictEqual(closeSpy.callCount, 3, 'should call socket.close 3 times'); + assert.strictEqual(listenSpy.callCount, 3, 'should call socket.listen 3 times'); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.UDPDataReceiver.STATE.DESTROYED), 'should have DESTROYED state'); + }); + }); + }); + + describe('.stop()', () => { + let serverMock; + + beforeEach(() => { + createServerMockCb = (newServerMock) => { + serverMock = newServerMock; + serverMock.on('listenMock', () => serverMock.emit('listening')); + serverMock.on('closeMock', (inst, args) => { + serverMock.emit('close'); + args[0](); // call callback + }); + }; + }); + + afterEach(() => { + serverMock = null; + }); + + it('should be able to stop receiver without active socket', () => receiverInst.stop() + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.UDPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + })); + + it('should be able to stop receiver', () => receiverInst.start() + .then(receiverInst.stop.bind(receiverInst)) + .then(() => { + 
assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.UDPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + })); + }); + }); + + describe('DualUDPDataReceiver', () => { + class MockServer extends EventEmitter { + setInitArgs(opts) { + this.opts = opts; + } + + bind() { + this.emit('listenMock', this, Array.from(arguments)); + } + + close() { + this.emit('closeMock', this, Array.from(arguments)); + } + } + + let serverMocks; + let onMockCreatedCallback; + const getServerMock = ipv6 => serverMocks.find(mock => (ipv6 && mock.opts.type === 'udp6') || (!ipv6 && mock.opts.type === 'udp4')); + + beforeEach(() => { + serverMocks = []; + receiverInst = new tcpUdpDataReceiver.DualUDPDataReceiver(testPort, { address: testAddr }); + receiverInst.on('data', dataCallbackSpy); + + sinon.stub(udp, 'createSocket').callsFake(function () { + const mock = new MockServer(); + mock.setInitArgs.apply(mock, arguments); + serverMocks.push(mock); + if (onMockCreatedCallback) { + onMockCreatedCallback(mock); + } + return mock; + }); + onMockCreatedCallback = (serverMock) => { + serverMock.on('listenMock', () => serverMock.emit('listening')); + serverMock.on('closeMock', (inst, args) => { + serverMock.emit('close'); + args[0](); // call callback + }); + }; + }); + + describe('abstract methods', () => { + beforeEach(() => { + sinon.restore(); + }); + + ['getConnKey'].forEach((method) => { + it(`.${method}()`, () => { + assert.throws( + () => receiverInst[method](), + /Not implemented/, + 'should throw "Not implemented" error' + ); + }); + }); + }); + + describe('.callCallback()', () => { + let socketId = 0; + const createSocketInfo = (ipv6) => { + socketId += 1; + return { + address: ipv6 ? 
testAddr6 : testAddr, + port: testPort + socketId + }; + }; + + it('should call callback when received data', () => { + const expectedData = []; + return receiverInst.start() + .then(() => { + const socketInfo1 = createSocketInfo(); + expectedData.push(receiverInst._receivers[0].getConnKey(socketInfo1)); + getServerMock().emit('message', expectedData[0], socketInfo1); + + const socketInfo2 = createSocketInfo(true); + expectedData.push(receiverInst._receivers[0].getConnKey(socketInfo2)); + getServerMock(true).emit('message', expectedData[1], socketInfo2); + + return receiverInst.stop(); + }) + .then(() => { + assert.deepStrictEqual(dataCallbackSpy.args, [ + [expectedData[0], expectedData[0]], + [expectedData[1], expectedData[1]] + ]); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.DualUDPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + }); + }); + }); + + describe('.restart()', () => { + it('should recreate all receivers on restart', () => receiverInst.start() + .then(() => { + assert.strictEqual(serverMocks.length, 2, 'should create 2 sockets'); + assert.strictEqual(getServerMock().opts.type, 'udp4', 'should create udp4 listener'); + assert.strictEqual(getServerMock(true).opts.type, 'udp6', 'should create udp6 listener'); + return receiverInst.restart(); + }) + .then(() => { + assert.strictEqual(serverMocks.length, 4, 'should create 2 more sockets'); + assert.strictEqual(serverMocks.filter(mock => mock.opts.type === 'udp4').length, 2, 'should have 2 udp4 sockets'); + assert.strictEqual(serverMocks.filter(mock => mock.opts.type === 'udp6').length, 2, 'should have 2 udp6 sockets'); + })); + }); + + describe('.start()', () => { + it('should start receivers', () => receiverInst.start() + .then(() => { + assert.strictEqual(serverMocks.length, 2, 'should create 2 sockets'); + assert.strictEqual(getServerMock().opts.type, 'udp4', 'should create udp4 listener'); +
assert.strictEqual(getServerMock(true).opts.type, 'udp6', 'should create udp6 listener'); + assert.isTrue(receiverInst.isRunning(), 'should be in running state'); + })); + + it('should fail to start', () => { + let firstOnly = false; + onMockCreatedCallback = (serverMock) => { + if (!firstOnly) { + firstOnly = true; + serverMock.on('listenMock', () => { + serverMock.emit('close'); + }); + } else { + serverMock.on('listenMock', () => serverMock.emit('listening')); + } + }; + return assert.isRejected(receiverInst.start(), /socket closed before/) + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + }); + }); + }); + + describe('.stop()', () => { + it('should be able to stop receiver without active socket', () => receiverInst.stop() + .then(() => { + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.DualUDPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + })); + + it('should be able to stop receiver', () => { + const closeSpy = sinon.spy(); + return receiverInst.start() + .then(() => { + assert.isTrue(receiverInst.hasReceivers(), 'should have receivers started'); + getServerMock().on('close', closeSpy); + getServerMock(true).on('close', closeSpy); + return receiverInst.stop(); + }) + .then(() => { + assert.strictEqual(closeSpy.callCount, 2, 'should close 2 sockets'); + assert.isFalse(receiverInst.isRunning(), 'should not be in running state'); + assert.isTrue(receiverInst.hasState(tcpUdpDataReceiver.DualUDPDataReceiver.STATE.STOPPED), 'should have STOPPED state'); + }); + }); + }); + }); +}); diff --git a/test/unit/eventListenerTests.js b/test/unit/eventListenerTests.js deleted file mode 100644 index 0a0fbeb4..00000000 --- a/test/unit/eventListenerTests.js +++ /dev/null @@ -1,403 +0,0 @@ -/* - * Copyright 2018. F5 Networks, Inc. See End User License Agreement ('EULA') for - * license terms. 
Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. - */ - -'use strict'; - -/* eslint-disable import/order */ - -require('./shared/restoreCache')(); - -const net = require('net'); -const dgram = require('dgram'); -const chai = require('chai'); -const chaiAsPromised = require('chai-as-promised'); -const sinon = require('sinon'); - -const nodeUtil = require('util'); -const EventEmitter = require('events').EventEmitter; -const EventListener = require('../../src/lib/eventListener'); -const dataPipeline = require('../../src/lib/dataPipeline'); -const testUtil = require('./shared/util'); -const eventListenerTestData = require('./data/eventListenerTestsData'); -const configWorker = require('../../src/lib/config'); -const configUtil = require('../../src/lib/utils/config'); -const util = require('../../src/lib/utils/misc'); -const tracers = require('../../src/lib/utils/tracer').Tracer; - -chai.use(chaiAsPromised); -const assert = chai.assert; - -describe('Event Listener', () => { - let eventListener; - let uuidCounter = 0; - - const validateAndNormalize = function (declaration) { - return configWorker.validate(util.deepCopy(declaration)) - .then(validated => Promise.resolve(configUtil.normalizeConfig(validated))); - }; - - beforeEach(() => { - sinon.stub(util, 'generateUuid').callsFake(() => { - uuidCounter += 1; - return `uuid${uuidCounter}`; - }); - eventListener = new EventListener('TestEventListener', 6514, {}); - }); - - afterEach(() => { - uuidCounter = 0; - sinon.restore(); - }); - - describe('constructor', () => { - it('should set minimum and default props', () => { - assert.strictEqual(eventListener.name, 'TestEventListener'); - assert.strictEqual(eventListener.port, 6514); - assert.strictEqual(eventListener.protocol, 'tcp'); - }); - - it('should 
set options when opts arg is provided', () => { - const filterFunc = data => data; - eventListener = new EventListener( - 'UdpEventListener', - 6514, - { - protocol: 'udp', - tags: { any: 'thing', but: 'true' }, - tracer: { mockTracer: true }, - actions: [{ doThis: true }], - filterFunc - } - ); - assert.strictEqual(eventListener.name, 'UdpEventListener'); - assert.strictEqual(eventListener.port, 6514); - assert.strictEqual(eventListener.protocol, 'udp'); - assert.deepStrictEqual(eventListener.tags, { any: 'thing', but: 'true' }); - assert.deepStrictEqual(eventListener.tracer, { mockTracer: true }); - assert.deepStrictEqual(eventListener.actions, [{ doThis: true }]); - assert.deepStrictEqual(eventListener.filterFunc, filterFunc); - }); - }); - - describe('.processData', () => { - let actualData; - let loggerSpy; - beforeEach(() => { - actualData = []; - sinon.stub(dataPipeline, 'process').callsFake((dataCtx) => { - actualData.push(dataCtx.data); - return Promise.resolve(); - }); - loggerSpy = sinon.spy(eventListener.logger, 'exception'); - }); - eventListenerTestData.processData.forEach((testSet) => { - testUtil.getCallableIt(testSet)(testSet.name, () => eventListener.processData(testSet.rawData) - .then(() => { - assert.isTrue(loggerSpy.notCalled); - assert.deepStrictEqual(actualData, testSet.expectedData); - })); - }); - }); - - describe('.processRawData', () => { - let actualData; - let loggerSpy; - const connInfo = { address: '127.0.0.1', port: '5555' }; - beforeEach(() => { - actualData = []; - sinon.stub(eventListener, 'processData').callsFake((data) => { - actualData.push(data); - return Promise.resolve(); - }); - loggerSpy = sinon.spy(eventListener.logger, 'exception'); - }); - eventListenerTestData.processRawData.forEach((testSet) => { - testUtil.getCallableIt(testSet)(testSet.name, () => Promise.resolve() - .then(() => { - testSet.rawData.forEach((rawData) => { - eventListener.processRawData(rawData, connInfo); - }); - // should be enough time to process 
and log exception - return new Promise(resolve => setTimeout(resolve, 1000)); - }) - .then(() => { - assert.isTrue(loggerSpy.notCalled); - assert.deepStrictEqual(actualData, testSet.expectedData); - })); - }); - - it('should flush all data immediately once number of timeouts reached maximum', () => { - // max number of timeouts is 5, see source code for more details - const maxTimeouts = 5; - for (let i = 0; i < maxTimeouts + 1; i += 1) { - eventListener.processRawData(i.toString(), connInfo); - } - assert.deepStrictEqual(eventListener._connDataBuffers, {}, 'should have no connection records once data flushed'); - assert.deepStrictEqual(actualData, ['012345']); - }); - - it('should flush all data immediately once number of timeouts reached maximum (twice more than number of max timeouts)', () => { - // max number of timeouts is 5, see source code for more details - const maxTimeouts = 5 * 2; - for (let i = 0; i < maxTimeouts + 2; i += 1) { - eventListener.processRawData(i.toString(), connInfo); - } - assert.deepStrictEqual(eventListener._connDataBuffers, {}, 'should have no connection records once data flushed'); - assert.deepStrictEqual(actualData, ['012345', '67891011']); - }); - }); - - describe('tcp listener', () => { - function MockClientSocket() { - EventEmitter.call(this); - } - nodeUtil.inherits(MockClientSocket, EventEmitter); - - function MockTcpServer(connCb) { - EventEmitter.call(this); - this.on('connection', () => connCb); - this.listen = (opts) => { - assert.deepStrictEqual(opts, { port: '6514' }); - this.emit('connection'); - }; - } - nodeUtil.inherits(MockTcpServer, EventEmitter); - - let logExceptionSpy; - let processRawDataSpy; - let tcpListener; - let mockSocket; - - beforeEach(() => { - mockSocket = new MockClientSocket(); - sinon.stub(net, 'createServer').callsFake(connCb => new MockTcpServer(connCb(mockSocket))); - - tcpListener = new EventListener('tcpListener', '6514', { protocol: 'tcp' }); - processRawDataSpy = sinon.spy(tcpListener, 
'processRawData'); - logExceptionSpy = sinon.spy(tcpListener.logger, 'exception'); - }); - - it('should start, listen and process received data', () => Promise.resolve() - .then(() => { - tcpListener.start(); - assert.isNotNull(tcpListener._server); - mockSocket.emit('data', 'pandas eat from 25-40 lbs of bamboo per day'); - }) - .then(() => { - assert.isTrue(logExceptionSpy.notCalled); - assert.isTrue(processRawDataSpy.calledOnce); - assert.deepStrictEqual(processRawDataSpy.getCall(0).args[0], 'pandas eat from 25-40 lbs of bamboo per day'); - })); - }); - - describe('udp listener', () => { - let mockUdpSocket; - let udpListener; - let logExceptionSpy; - let processRawDataSpy; - - function MockUdpSocket() { - EventEmitter.call(this); - this.bind = (opts) => { - assert.deepStrictEqual(opts, 6543); - }; - } - nodeUtil.inherits(MockUdpSocket, EventEmitter); - - beforeEach(() => { - mockUdpSocket = new MockUdpSocket(); - sinon.stub(dgram, 'createSocket').callsFake((opts) => { - assert.deepStrictEqual(opts, { type: 'udp6', ipv6Only: false }); - return mockUdpSocket; - }); - udpListener = new EventListener('udpListener', 6543, { protocol: 'udp' }); - processRawDataSpy = sinon.spy(udpListener, 'processRawData'); - logExceptionSpy = sinon.spy(udpListener.logger, 'exception'); - }); - - it('should start, listen and process received data', () => Promise.resolve() - .then(() => { - udpListener.start(); - assert.isNotNull(udpListener._server); - mockUdpSocket.emit('message', Buffer.from('pandas eat for up to 14 hours a day'), { }); - }) - .then(() => { - assert.isTrue(logExceptionSpy.notCalled); - assert.isTrue(processRawDataSpy.calledOnce); - assert.deepStrictEqual(processRawDataSpy.getCall(0).args[0], 'pandas eat for up to 14 hours a day'); - })); - }); - - describe('config change', () => { - let listenerStub; - let allTracersStub; - let activeTracersStub; - - const origDecl = { - class: 'Telemetry', - Listener1: { - class: 'Telemetry_Listener', - port: 1234, - tag: { - 
tenant: '`T`', - application: '`A`' - } - } - }; - - const assertListener = function (actualListener, expListener) { - const protocols = ['tcp', 'udp']; - protocols.forEach((p) => { - assert.deepStrictEqual(actualListener[p].port, expListener.port); - assert.deepStrictEqual(actualListener[p].id, expListener.id); - assert.deepStrictEqual(actualListener[p].tags, expListener.tags); - if (expListener.hasFilterFunc) { - assert.isNotNull(actualListener[p].filterFunc); - } else { - assert.isNull(actualListener[p].filterFunc); - } - }); - }; - - beforeEach(() => { - activeTracersStub = []; - allTracersStub = []; - listenerStub = { start: 0, stop: 0 }; - - sinon.stub(EventListener.prototype, 'start').callsFake(() => { - listenerStub.start += 1; - }); - sinon.stub(EventListener.prototype, 'stop').callsFake(() => { - listenerStub.stop += 1; - }); - - sinon.stub(tracers, 'createFromConfig').callsFake((className, objName, config) => { - allTracersStub.push(objName); - if (config.trace) { - activeTracersStub.push(objName); - } - return null; - }); - - return validateAndNormalize(origDecl) - .then(normalized => configWorker.emitAsync('change', normalized)) - .then(() => { - const listeners = eventListener.getListeners(); - assert.strictEqual(Object.keys(listeners).length, 1); - assertListener(listeners.Listener1, { - port: 1234, id: 'uuid1', tags: { tenant: '`T`', application: '`A`' } - }); - assert.deepStrictEqual(listenerStub, { start: 2, stop: 0 }); - assert.strictEqual(allTracersStub.length, 2); - assert.strictEqual(activeTracersStub.length, 0); - }) - .then(() => { - // reset counts - activeTracersStub = []; - allTracersStub = []; - listenerStub = { start: 0, stop: 0 }; - uuidCounter = 0; - }); - }); - - afterEach(() => configWorker.emitAsync('change', { components: [], mappings: {} }) - .then(() => { - const listeners = eventListener.getListeners(); - assert.strictEqual(Object.keys(listeners).length, 0); - })); - - it('should create listeners with default and custom opts on 
config change event (no prior config)', () => { - const newDecl = util.deepCopy(origDecl); - newDecl.Listener2 = { - class: 'Telemetry_Listener', - match: 'somePattern', - trace: true - }; - - return validateAndNormalize(newDecl) - .then(normalized => configWorker.emitAsync('change', normalized)) - .then(() => { - // only start Listener2, 2 has been started from the orig config - assert.deepStrictEqual(listenerStub, { start: 2, stop: 0 }); - assert.strictEqual(activeTracersStub.length, 2); - assert.strictEqual(allTracersStub.length, 4); - - const listeners = eventListener.getListeners(); - assertListener(listeners.Listener1, { - port: 1234, id: 'uuid1', tags: { tenant: '`T`', application: '`A`' } - }); - assertListener(listeners.Listener2, { - port: 6514, id: 'uuid2', tags: {}, hasFilterFunc: true - }); - }); - }); - - it('should stop existing listener(s) when removed from config', () => configWorker.emitAsync('change', { components: [], mappings: {} }) - .then(() => { - assert.deepStrictEqual(listenerStub, { start: 0, stop: 2 }); - assert.strictEqual(activeTracersStub.length, 0); - assert.strictEqual(allTracersStub.length, 0); - - const listeners = eventListener.getListeners(); - assert.strictEqual(Object.keys(listeners).length, 0); - })); - - it('should update existing listener(s) without restarting if port is the same', () => { - const newDecl = util.deepCopy(origDecl); - newDecl.Listener1.trace = true; - const updateSpy = sinon.stub(EventListener.prototype, 'updateConfig'); - return validateAndNormalize(newDecl) - .then(normalized => configWorker.emitAsync('change', normalized)) - .then(() => { - assert.deepStrictEqual(listenerStub, { start: 0, stop: 0 }); - assert.strictEqual(activeTracersStub.length, 2); - assert.strictEqual(allTracersStub.length, 2); - - const listeners = eventListener.getListeners(); - assertListener(listeners.Listener1, { - port: 1234, id: 'uuid1', tags: { tenant: '`T`', application: '`A`' } - }); - // one for each protocol - 
assert.isTrue(updateSpy.calledTwice); - }); - }); - - it('should add a new listener without updating existing one when skipUpdate = true', () => { - const updateSpy = sinon.spy(EventListener.prototype, 'updateConfig'); - const newDecl = util.deepCopy(origDecl); - newDecl.New = { - class: 'Telemetry_Namespace', - Listener1: { - class: 'Telemetry_Listener', - port: 2345, - trace: true - } - }; - return validateAndNormalize(newDecl) - .then((normalized) => { - normalized.components[0].skipUpdate = true; - return configWorker.emitAsync('change', normalized); - }) - .then(() => { - assert.deepStrictEqual(listenerStub, { start: 2, stop: 0 }); - assert.strictEqual(activeTracersStub.length, 2); - assert.strictEqual(allTracersStub.length, 2); - - const listeners = eventListener.getListeners(); - assertListener(listeners.Listener1, { - port: 1234, id: 'uuid1', tags: { tenant: '`T`', application: '`A`' } - }); - assertListener(listeners['New::Listener1'], { - port: 2345, id: 'uuid2', tags: {} - }); - // one for each protocol, called through constructor - assert.isTrue(updateSpy.calledTwice); - }); - }); - }); -}); diff --git a/test/unit/normalizeTests.js b/test/unit/normalizeTests.js index 8f9713e5..01b3a85f 100644 --- a/test/unit/normalizeTests.js +++ b/test/unit/normalizeTests.js @@ -63,18 +63,6 @@ describe('Normalize', () => { }; describe('event', () => { - describe('.splitEvents()', () => { - dataExamples.splitEventsData.forEach((eventDataExample) => { - testUtil.getCallableIt(eventDataExample)(`should split events - ${eventDataExample.name}`, () => { - ['\n', '\r\n'].forEach((lineSep) => { - const data = eventDataExample.data.replace(/\{sep\}/g, lineSep); - const result = normalize.splitEvents(data); - assert.deepStrictEqual(result, eventDataExample.expectedData); - }); - }); - }); - }); - let clock; beforeEach(() => { // stub clock for categories that need timestamp diff --git a/test/unit/requestHandlers/declareHandlerTests.js 
b/test/unit/requestHandlers/declareHandlerTests.js index 77f76c89..38d9b65d 100644 --- a/test/unit/requestHandlers/declareHandlerTests.js +++ b/test/unit/requestHandlers/declareHandlerTests.js @@ -16,6 +16,7 @@ const chai = require('chai'); const chaiAsPromised = require('chai-as-promised'); const sinon = require('sinon'); +const ErrorHandler = require('../../../src/lib/requestHandlers/errorHandler'); const configWorker = require('../../../src/lib/config'); const DeclareHandler = require('../../../src/lib/requestHandlers/declareHandler'); const testUtil = require('../shared/util'); @@ -25,124 +26,263 @@ const assert = chai.assert; describe('DeclareHandler', () => { - const uri = 'http://localhost:8100/mgmt/shared/telemetry/declare'; - let restOpMock; let requestHandler; - - function getRestOperation(method) { - restOpMock = new testUtil.MockRestOperation({ method: method.toUpperCase() }); - restOpMock.uri = testUtil.parseURL(uri); - return restOpMock; - } - - beforeEach(() => { - requestHandler = new DeclareHandler(getRestOperation('GET')); - }); + let uri; afterEach(() => { + requestHandler = null; + uri = null; sinon.restore(); }); - it('should get raw config on GET request', () => { - const expectedConfig = { config: 'expected' }; - sinon.stub(configWorker, 'getRawConfig').resolves(testUtil.deepCopy(expectedConfig)); + function assertProcessResult(expected) { return requestHandler.process() + .then((actual) => { + if (expected.code === 200) { + assert.ok(actual === requestHandler, 'should return a reference to original handler'); + } else { + assert.isTrue(actual instanceof ErrorHandler, 'should return a reference to error handler'); + } + + assert.strictEqual(actual.getCode(), expected.code, 'should return expected code'); + const actualBody = actual.getBody(); + if (expected.body.code) { + assert.strictEqual(actualBody.code, expected.body.code, 'should return expected body.code'); + assert.strictEqual(actualBody.message, expected.body.message, 'should return 
expected body.message'); + if (expected.body.error) { + assert.match(actualBody.error, new RegExp(expected.body.error), 'should return expected body.error'); + } + } else { + assert.deepStrictEqual(requestHandler.getBody(), expected.body, 'should return expected body'); + } + }); + } + + function assertMultiRequestResults(mockConfig, expectedResponses, params) { + const fetchResponseInfo = handler => ({ + code: handler.getCode(), + body: handler.getBody() + }); + + return Promise.all([ + testUtil.sleep(10).then(() => new DeclareHandler(getRestOperation('POST', mockConfig), params).process()), // should return 200 or 503 + testUtil.sleep(10).then(() => new DeclareHandler(getRestOperation('POST', mockConfig), params).process()), // should return 503 or 200 + testUtil.sleep(20).then(() => new DeclareHandler(getRestOperation('GET'), params).process()) // should return 200 + ]) + .then((handlers) => { + assert.deepStrictEqual(fetchResponseInfo(handlers[2]), expectedResponses.GET, 'should match expected response for GET'); + assert.includeDeepMembers(handlers.slice(0, 2).map(fetchResponseInfo), expectedResponses.POST, 'should match expected responses for POST requests'); + // lock should be released already + return new DeclareHandler(getRestOperation('POST', mockConfig), params).process(); + }) .then((handler) => { - assert.ok(handler === requestHandler, 'should return a reference to original handler'); - assert.strictEqual(requestHandler.getCode(), 200, 'should return expected code'); - assert.deepStrictEqual(requestHandler.getBody(), { - message: 'success', - declaration: expectedConfig - }, 'should return expected body'); + assert.deepStrictEqual(fetchResponseInfo(handler), expectedResponses.POST[0], 'should succeed after lock released'); }); - }); + } - it('should pass declaration to configWorker on POST request', () => { - const expectedConfig = { config: 'validated' }; + function getRestOperation(method, body) { + const restOpMock = new 
testUtil.MockRestOperation({ method: method.toUpperCase() }); + restOpMock.uri = testUtil.parseURL(uri); + restOpMock.body = body; + return restOpMock; + } - restOpMock.method = 'POST'; - restOpMock.body = { class: 'Telemetry' }; + describe('/declare', () => { + beforeEach(() => { + uri = 'http://localhost:8100/mgmt/shared/telemetry/declare'; + }); - sinon.stub(configWorker, 'processDeclaration').resolves(testUtil.deepCopy(expectedConfig)); - return requestHandler.process() - .then((handler) => { - assert.ok(handler === requestHandler, 'should return a reference to original handler'); - assert.strictEqual(requestHandler.getCode(), 200, 'should return expected code'); - assert.deepStrictEqual(requestHandler.getBody(), { + it('should get full raw config on GET request', () => { + const mockConfig = { class: 'Telemetry' }; + const expected = { + code: 200, + body: { message: 'success', - declaration: expectedConfig - }, 'should return expected body'); - }); - }); + declaration: mockConfig + } + }; + sinon.stub(configWorker, 'getRawConfig').resolves(testUtil.deepCopy(mockConfig)); + requestHandler = new DeclareHandler(getRestOperation('GET')); + return assertProcessResult(expected); + }); - it('should return 422 on attempt to POST invalid declaration', () => { - restOpMock.method = 'POST'; - restOpMock.body = { class: 'Telemetry1' }; + it('should return 200 on POST - valid declaration', () => { + const mockConfig = { config: 'validated' }; + const expected = { + code: 200, + body: { + message: 'success', + declaration: mockConfig + } + }; - return requestHandler.process() - .then((handler) => { - assert.ok(handler === requestHandler, 'should return a reference to original handler'); - assert.strictEqual(requestHandler.getCode(), 422, 'should return expected code'); - assert.strictEqual(requestHandler.getBody().code, 422, 'should return expected code'); - assert.strictEqual(requestHandler.getBody().message, 'Unprocessable entity', 'should return expected message'); - 
assert.strictEqual(typeof requestHandler.getBody().error, 'string', 'should return error message'); - }); + sinon.stub(configWorker, 'processDeclaration').resolves(testUtil.deepCopy(mockConfig)); + requestHandler = new DeclareHandler(getRestOperation('POST', { class: 'Telemetry' })); + return assertProcessResult(expected); + }); + + it('should return 422 on POST - invalid declaration', () => { + const expected = { + code: 422, + body: { + code: 422, + message: 'Unprocessable entity', + error: 'should be equal to one of the allowed values' + } + }; + requestHandler = new DeclareHandler(getRestOperation('POST', { class: 'Telemetry1' })); + return assertProcessResult(expected); + }); + + it('should return 503 on attempt to POST declaration while previous one is still in process', () => { + const mockConfig = { config: 'validated' }; + sinon.stub(configWorker, 'processDeclaration').callsFake(() => testUtil.sleep(50).then(() => testUtil.deepCopy(mockConfig))); + sinon.stub(configWorker, 'getRawConfig').callsFake(() => testUtil.sleep(50).then(() => testUtil.deepCopy(mockConfig))); + + const expectedResponses = { + GET: { + code: 200, + body: { + message: 'success', + declaration: mockConfig + } + }, + POST: [ + { + code: 200, + body: { + message: 'success', + declaration: mockConfig + } + }, + { + code: 503, + body: { + code: 503, + message: 'Service Unavailable' + } + } + ] + }; + + return assertMultiRequestResults(mockConfig, expectedResponses); + }); + + it('should reject when unknown error is caught', () => { + sinon.stub(configWorker, 'getRawConfig').rejects(new Error('expectedError')); + requestHandler = new DeclareHandler(getRestOperation('GET')); + return assert.isRejected(requestHandler.process(), 'expectedError'); + }); + }); - it('should return 503 on attempt to POST declaration while previous one is still in process', () => { - const expectedConfig = { config: 'validated' }; - sinon.stub(configWorker, 'processDeclaration').callsFake(() => 
testUtil.sleep(50).then(() => testUtil.deepCopy(expectedConfig))); - sinon.stub(configWorker, 'getRawConfig').callsFake(() => testUtil.sleep(50).then(() => testUtil.deepCopy(expectedConfig))); + describe('/namespace/:namespace/declare', () => { + beforeEach(() => { + uri = 'http://localhost:8100/mgmt/shared/telemetry/namespace/testNamespace/declare'; + }); - const fetchResponseInfo = handler => ({ - code: handler.getCode(), - body: handler.getBody() + it('should get namespace-only raw config on GET request', () => { + const mockConfig = { + raw: { + class: 'Telemetry', + testNamespace: { class: 'Telemetry_Namespace' }, + otherNamespace: { class: 'Telemetry_Namespace', unwanted: true } + } + }; + const expected = { + code: 200, + body: { + message: 'success', + declaration: { class: 'Telemetry_Namespace' } + } + }; + sinon.stub(configWorker, 'getConfig').resolves(mockConfig); + requestHandler = new DeclareHandler(getRestOperation('GET'), { namespace: 'testNamespace' }); + return assertProcessResult(expected); }); - const expectedResponses = { - GET: { + it('should return 404 on GET - non-existent namespace', () => { + const expected = { + code: 404, + body: { + code: 404, + message: 'Namespace with name \'testNamespace\' doesn\'t exist' + } + }; + sinon.stub(configWorker, 'getConfig').resolves({ raw: { class: 'Telemetry' } }); + + requestHandler = new DeclareHandler(getRestOperation('GET'), { namespace: 'testNamespace' }); + return assertProcessResult(expected); + }); + + it('should return 200 on POST - valid declaration', () => { + const mockConfig = { config: 'validated' }; + const expected = { code: 200, body: { message: 'success', - declaration: expectedConfig + declaration: mockConfig + } + }; + + sinon.stub(configWorker, 'processDeclaration').resolves(testUtil.deepCopy(mockConfig)); + requestHandler = new DeclareHandler(getRestOperation('POST', { class: 'Telemetry_Namespace' })); + return assertProcessResult(expected); + }); + + it('should return 422 on POST - 
invalid declaration', () => { + const expected = { + code: 422, + body: { + code: 422, + message: 'Unprocessable entity', + error: /"schemaPath":"#\/properties\/class\/enum","params":{"allowedValues":\["Telemetry_Namespace"\]/ } - }, - POST: [ - { + }; + sinon.stub(configWorker, 'getConfig').resolves({ raw: { class: 'Telemetry' } }); + + requestHandler = new DeclareHandler(getRestOperation('POST', { class: 'Telemetry' }), { namespace: 'testNamespace' }); + return assertProcessResult(expected); + }); + + it('should return 503 on attempt to POST declaration while previous one is still in process', () => { + const namespaceConfig = { class: 'Telemetry_Namespace' }; + const validatedConfig = { class: 'Telemetry', testNamespace: { class: 'Telemetry_Namespace' } }; + sinon.stub(configWorker, 'processDeclaration').callsFake(() => testUtil.sleep(50).then(() => validatedConfig)); + sinon.stub(configWorker, 'getConfig').callsFake(() => testUtil.sleep(50).then(() => ({ raw: validatedConfig }))); + + const expectedResponses = { + GET: { code: 200, body: { message: 'success', - declaration: expectedConfig + declaration: namespaceConfig } }, - { - code: 503, - body: { + POST: [ + { + code: 200, + body: { + message: 'success', + declaration: namespaceConfig + } + }, + { code: 503, - message: 'Service Unavailable' + body: { + code: 503, + message: 'Service Unavailable' + } } - } - ] - }; + ] + }; - return Promise.all([ - testUtil.sleep(10).then(() => new DeclareHandler(getRestOperation('POST')).process()), // should return 200 or 503 - testUtil.sleep(10).then(() => new DeclareHandler(getRestOperation('POST')).process()), // should return 503 or 200 - testUtil.sleep(20).then(() => new DeclareHandler(getRestOperation('GET')).process()) // should return 200 - ]) - .then((handlers) => { - assert.deepStrictEqual(fetchResponseInfo(handlers[2]), expectedResponses.GET, 'should match expected response for GET'); - assert.includeDeepMembers(handlers.slice(0, 2).map(fetchResponseInfo), 
expectedResponses.POST, 'should match expected responses for POST requests'); - // lock should be released already - return new DeclareHandler(getRestOperation('POST')).process(); - }) - .then((handler) => { - assert.deepStrictEqual(fetchResponseInfo(handler), expectedResponses.POST[0], 'should match expected response for POST 200'); - }); - }); + return assertMultiRequestResults(namespaceConfig, expectedResponses, { namespace: 'testNamespace' }); + }); - it('should reject when caught unknown error', () => { - sinon.stub(configWorker, 'getRawConfig').rejects(new Error('expectedError')); - return assert.isRejected(requestHandler.process(), 'expectedError'); + it('should reject when unknown error is caught', () => { + sinon.stub(configWorker, 'getConfig').rejects(new Error('expectedError')); + requestHandler = new DeclareHandler(getRestOperation('POST', { class: 'Telemetry_Namespace' }), { namespace: 'testNamespace' }); + return assert.isRejected(requestHandler.process(), 'expectedError'); + }); }); }); diff --git a/test/unit/requestHandlers/errorHandlerTests.js b/test/unit/requestHandlers/errorHandlerTests.js new file mode 100644 index 00000000..4e05c850 --- /dev/null +++ b/test/unit/requestHandlers/errorHandlerTests.js @@ -0,0 +1,133 @@ +/* + * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for + * license terms. Notwithstanding anything to the contrary in the EULA, Licensee + * may copy and modify this software product for its internal business purposes. + * Further, Licensee may upload, publish and distribute the modified version of + * the software product on devcentral.f5.com. 
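Reviewer aside: the 503 tests above depend on two POSTs overlapping while `processDeclaration` is still pending. The busy-lock behavior they exercise can be sketched in isolation as follows (the names `withLock` and `busy` are illustrative and are not the actual TS implementation):

```javascript
// Standalone sketch of the "busy lock" the DeclareHandler 503 tests exercise.
// Hypothetical helper names; not taken from the f5-telemetry-streaming source.
let busy = false;

function withLock(work) {
    if (busy) {
        // a second concurrent caller gets a 503-style response immediately
        return Promise.resolve({ code: 503, message: 'Service Unavailable' });
    }
    busy = true;
    return Promise.resolve()
        .then(work)
        .then((declaration) => {
            busy = false; // release the lock so later requests succeed
            return { code: 200, declaration };
        });
}

// Two overlapping calls: exactly one should win, mirroring the
// assert.includeDeepMembers() check on the pair of POST responses.
const slowWork = () => new Promise(resolve => setTimeout(() => resolve({ class: 'Telemetry' }), 50));
const resultsPromise = Promise.all([withLock(slowWork), withLock(slowWork)]);
resultsPromise.then(results => console.log(results.map(r => r.code).sort().join(',')));
```

Because the second call short-circuits before `slowWork` resolves, one response carries code 200 and the other 503, in either order, which is why the tests use `includeDeepMembers` rather than asserting a fixed order.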
+ */ + +'use strict'; + +/* eslint-disable import/order */ + +require('../shared/restoreCache')(); + +const chai = require('chai'); +const chaiAsPromised = require('chai-as-promised'); + +const testUtil = require('./../shared/util'); +const ErrorHandler = require('../../../src/lib/requestHandlers/errorHandler'); +const errors = require('../../../src/lib/errors'); +const httpErrors = require('../../../src/lib/requestHandlers/httpErrors'); + +chai.use(chaiAsPromised); +const assert = chai.assert; + + +describe('ErrorHandler', () => { + let errorHandler; + + const testData = [ + { + name: 'Bad URL', + error: new httpErrors.BadURLError('/a/b/c/d'), + expected: { + code: 400, + body: 'Bad URL: /a/b/c/d' + } + }, + { + name: 'Internal Server Error', + error: new httpErrors.InternalServerError('beep-badoo-bop'), + expected: { + code: 500, + body: { + code: 500, + message: 'Internal Server Error' + } + } + }, + { + name: 'Method Not Allowed', + error: new httpErrors.MethodNotAllowedError(['PATCH', 'HEAD']), + expected: { + code: 405, + body: { + code: 405, + message: 'Method Not Allowed', + allow: ['PATCH', 'HEAD'] + } + } + }, + { + name: 'Service Unavailable', + error: new httpErrors.ServiceUnavailableError(), + expected: { + code: 503, + body: { + code: 503, + message: 'Service Unavailable' + } + } + }, + { + name: 'Unsupported Media Type', + error: new httpErrors.UnsupportedMediaTypeError(), + expected: { + code: 415, + body: { + code: 415, + message: 'Unsupported Media Type', + accept: ['application/json'] + } + } + }, + { + name: 'Config Lookup Error', + error: new errors.ObjectNotFoundInConfigError('Unable to find object'), + expected: { + code: 404, + body: { + code: 404, + message: 'Unable to find object' + } + } + }, + { + name: 'Validation Error', + error: new errors.ValidationError('Does not conform to schema'), + expected: { + code: 422, + body: { + code: 422, + message: 'Unprocessable entity', + error: 'Does not conform to schema' + } + } + } + ]; + + 
function assertProcessResult(expected) { + assert.strictEqual(errorHandler.getCode(), expected.code, 'should return expected code'); + assert.deepStrictEqual(errorHandler.getBody(), expected.body, 'should match expected body'); + return errorHandler.process() + .then((handler) => { + assert.ok(handler === errorHandler, 'should return a reference to original handler'); + }); + } + + testData.forEach((testConf) => { + testUtil.getCallableIt(testConf)(`should handle error - ${testConf.name}`, () => { + errorHandler = new ErrorHandler(testConf.error); + return assertProcessResult(testConf.expected); + }); + }); + + it('should reject if error is of unknown type', () => { + errorHandler = new ErrorHandler(new Error('i am a stealthy error')); + return assert.isRejected(errorHandler.process()) + .then((result) => { + assert.strictEqual(result.message, 'i am a stealthy error'); + }); + }); +}); diff --git a/test/unit/requestHandlers/httpStatus/badUrlHandlerTests.js b/test/unit/requestHandlers/httpStatus/badUrlHandlerTests.js deleted file mode 100644 index c78027fb..00000000 --- a/test/unit/requestHandlers/httpStatus/badUrlHandlerTests.js +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com.
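Reviewer aside: the new `ErrorHandler` consolidates the per-status handlers deleted below into a single error-type-to-HTTP-response mapping. A reduced sketch of that idea (class and function names here are illustrative, not the module's real API):

```javascript
// Illustrative mapping of typed errors to HTTP responses, in the spirit of
// the consolidated ErrorHandler. Names are hypothetical, not the TS source.
class ValidationError extends Error {}
class ObjectNotFoundError extends Error {}

function toHttpResponse(err) {
    if (err instanceof ValidationError) {
        return { code: 422, body: { code: 422, message: 'Unprocessable entity', error: err.message } };
    }
    if (err instanceof ObjectNotFoundError) {
        return { code: 404, body: { code: 404, message: err.message } };
    }
    // Unknown errors are re-thrown so callers reject, matching the
    // "stealthy error" expectation in the tests above.
    throw err;
}

console.log(toHttpResponse(new ValidationError('Does not conform to schema')).code); // 422
```

One dispatch point keeps status codes and bodies consistent across routes, which is why the handler tests for each status could be collapsed into a single table-driven suite.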
- */ - -'use strict'; - -/* eslint-disable import/order */ - -require('../../shared/restoreCache')(); - -const chai = require('chai'); -const chaiAsPromised = require('chai-as-promised'); - -const BadUrlHandler = require('../../../../src/lib/requestHandlers/httpStatus/badUrlHandler'); -const MockRestOperation = require('../../shared/util').MockRestOperation; -const parseURL = require('../../shared/util').parseURL; - -chai.use(chaiAsPromised); -const assert = chai.assert; - - -describe('BadUrlHandler', () => { - let requestHandler; - - beforeEach(() => { - const restOpMock = new MockRestOperation(); - restOpMock.uri = parseURL('http://localhost:8100/a/b/c/d'); - requestHandler = new BadUrlHandler(restOpMock); - }); - - it('should return code 400', () => { - assert.strictEqual(requestHandler.getCode(), 400, 'should return expected code'); - }); - - it('should return body with message', () => { - const expectedBody = 'Bad URL: /a/b/c/d'; - assert.strictEqual(requestHandler.getBody(), expectedBody, 'should match expected body'); - }); - - it('should return self as result of process', () => requestHandler.process() - .then((handler) => { - assert.ok(handler === requestHandler, 'should return a reference to original handler'); - })); -}); diff --git a/test/unit/requestHandlers/httpStatus/internalServerErrorHandlerTests.js b/test/unit/requestHandlers/httpStatus/internalServerErrorHandlerTests.js deleted file mode 100644 index 2675e7c5..00000000 --- a/test/unit/requestHandlers/httpStatus/internalServerErrorHandlerTests.js +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -/* eslint-disable import/order */ - -require('../../shared/restoreCache')(); - -const chai = require('chai'); -const chaiAsPromised = require('chai-as-promised'); - -const InternalServerErrorHandler = require('../../../../src/lib/requestHandlers/httpStatus/internalServerErrorHandler'); -const MockRestOperation = require('../../shared/util').MockRestOperation; - -chai.use(chaiAsPromised); -const assert = chai.assert; - - -describe('InternalServerErrorHandler', () => { - let requestHandler; - - beforeEach(() => { - requestHandler = new InternalServerErrorHandler(new MockRestOperation()); - }); - - it('should return code 500', () => { - assert.strictEqual(requestHandler.getCode(), 500, 'should return expected code'); - }); - - it('should return body with message', () => { - const expectedBody = { - code: 500, - message: 'Internal Server Error' - }; - assert.deepStrictEqual(requestHandler.getBody(), expectedBody, 'should match expected body'); - }); - - it('should return self as result of process', () => requestHandler.process() - .then((handler) => { - assert.ok(handler === requestHandler, 'should return a reference to original handler'); - })); -}); diff --git a/test/unit/requestHandlers/httpStatus/methodNotAllowedHandlerTests.js b/test/unit/requestHandlers/httpStatus/methodNotAllowedHandlerTests.js deleted file mode 100644 index ee16d61e..00000000 --- a/test/unit/requestHandlers/httpStatus/methodNotAllowedHandlerTests.js +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */ - -'use strict'; - -/* eslint-disable import/order */ - -require('../../shared/restoreCache')(); - -const chai = require('chai'); -const chaiAsPromised = require('chai-as-promised'); - -const MethodNotAllowedHandler = require('../../../../src/lib/requestHandlers/httpStatus/methodNotAllowedHandler'); -const MockRestOperation = require('../../shared/util').MockRestOperation; - -chai.use(chaiAsPromised); -const assert = chai.assert; - - -describe('MethodNotAllowedHandler', () => { - let requestHandler; - const allowedMethods = ['GET', 'POST']; - - beforeEach(() => { - requestHandler = new MethodNotAllowedHandler(new MockRestOperation(), allowedMethods); - }); - - it('should return code 405', () => { - assert.strictEqual(requestHandler.getCode(), 405, 'should return expected code'); - }); - - it('should return body with message', () => { - const expectedBody = { - code: 405, - message: 'Method Not Allowed', - allow: ['GET', 'POST'] - }; - assert.deepStrictEqual(requestHandler.getBody(), expectedBody, 'should match expected body'); - }); - - it('should return self as result of process', () => requestHandler.process() - .then((handler) => { - assert.ok(handler === requestHandler, 'should return a reference to original handler'); - })); -}); diff --git a/test/unit/requestHandlers/httpStatus/serviceUnavailableErrorHandlerTests.js b/test/unit/requestHandlers/httpStatus/serviceUnavailableErrorHandlerTests.js deleted file mode 100644 index 6211f1ed..00000000 --- a/test/unit/requestHandlers/httpStatus/serviceUnavailableErrorHandlerTests.js +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Copyright 2020. F5 Networks, Inc. See End User License Agreement ("EULA") for - * license terms. Notwithstanding anything to the contrary in the EULA, Licensee - * may copy and modify this software product for its internal business purposes. - * Further, Licensee may upload, publish and distribute the modified version of - * the software product on devcentral.f5.com. 
- */
-
-'use strict';
-
-/* eslint-disable import/order */
-
-require('../../shared/restoreCache')();
-
-const chai = require('chai');
-const chaiAsPromised = require('chai-as-promised');
-
-const ServiceUnavailableErrorHandler = require('../../../../src/lib/requestHandlers/httpStatus/serviceUnavailableErrorHandler');
-const MockRestOperation = require('../../shared/util').MockRestOperation;
-
-chai.use(chaiAsPromised);
-const assert = chai.assert;
-
-
-describe('ServiceUnavailableErrorHandler', () => {
-    let requestHandler;
-
-    beforeEach(() => {
-        requestHandler = new ServiceUnavailableErrorHandler(new MockRestOperation());
-    });
-
-    it('should return code 503', () => {
-        assert.strictEqual(requestHandler.getCode(), 503, 'should return expected code');
-    });
-
-    it('should return body with message', () => {
-        const expectedBody = {
-            code: 503,
-            message: 'Service Unavailable'
-        };
-        assert.deepStrictEqual(requestHandler.getBody(), expectedBody, 'should match expected body');
-    });
-
-    it('should return self as result of process', () => requestHandler.process()
-        .then((handler) => {
-            assert.ok(handler === requestHandler, 'should return reference to origin instance');
-        }));
-});
diff --git a/test/unit/requestHandlers/httpStatus/unsupportedMediaTypeHandlerTests.js b/test/unit/requestHandlers/httpStatus/unsupportedMediaTypeHandlerTests.js
deleted file mode 100644
index 94718afe..00000000
--- a/test/unit/requestHandlers/httpStatus/unsupportedMediaTypeHandlerTests.js
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for
- * license terms. Notwithstanding anything to the contrary in the EULA, Licensee
- * may copy and modify this software product for its internal business purposes.
- * Further, Licensee may upload, publish and distribute the modified version of
- * the software product on devcentral.f5.com.
- */
-
-'use strict';
-
-/* eslint-disable import/order */
-
-require('../../shared/restoreCache')();
-
-const chai = require('chai');
-const chaiAsPromised = require('chai-as-promised');
-
-const UnsupportedMediaTypeHandler = require('../../../../src/lib/requestHandlers/httpStatus/unsupportedMediaTypeHandler');
-const MockRestOperation = require('../../shared/util').MockRestOperation;
-
-chai.use(chaiAsPromised);
-const assert = chai.assert;
-
-
-describe('UnsupportedMediaTypeHandler', () => {
-    let requestHandler;
-
-    beforeEach(() => {
-        requestHandler = new UnsupportedMediaTypeHandler(new MockRestOperation());
-    });
-
-    it('should return code 415', () => {
-        assert.strictEqual(requestHandler.getCode(), 415, 'should return expected code');
-    });
-
-    it('should return body with message', () => {
-        const expectedBody = {
-            code: 415,
-            message: 'Unsupported Media Type',
-            accept: ['application/json']
-        };
-        assert.deepStrictEqual(requestHandler.getBody(), expectedBody, 'should match expected body');
-    });
-
-    it('should return self as result of process', () => requestHandler.process()
-        .then((handler) => {
-            assert.ok(handler === requestHandler, 'should return a reference to original handler');
-        }));
-});
diff --git a/test/unit/requestHandlers/ihealthPollerHandlerTests.js b/test/unit/requestHandlers/ihealthPollerHandlerTests.js
index 6bf5cf3a..ecccb399 100644
--- a/test/unit/requestHandlers/ihealthPollerHandlerTests.js
+++ b/test/unit/requestHandlers/ihealthPollerHandlerTests.js
@@ -20,6 +20,7 @@ const errors = require('../../../src/lib/errors');
 const ihealh = require('../../../src/lib/ihealth');
 const IHealthPollerHandler = require('../../../src/lib/requestHandlers/ihealthPollerHandler');
 const testUtil = require('../shared/util');
+const ErrorHandler = require('../../../src/lib/requestHandlers/errorHandler');
 
 chai.use(chaiAsPromised);
 const assert = chai.assert;
@@ -105,9 +106,9 @@ describe('SystemPollerHandler', () => {
         sinon.stub(ihealh, 'startPoller').rejects(new errors.ConfigLookupError('expectedError'));
         return requestHandler.process()
             .then((handler) => {
-                assert.ok(handler === requestHandler, 'should return a reference to original handler');
-                assert.strictEqual(requestHandler.getCode(), 404, 'should return expected code');
-                assert.deepStrictEqual(requestHandler.getBody(), {
+                assert.isTrue(handler instanceof ErrorHandler, 'should return a reference to error handler');
+                assert.strictEqual(handler.getCode(), 404, 'should return expected code');
+                assert.deepStrictEqual(handler.getBody(), {
                     code: 404,
                     message: 'expectedError'
                 }, 'should return expected body');
diff --git a/test/unit/requestHandlers/pullConsumerHandlerTests.js b/test/unit/requestHandlers/pullConsumerHandlerTests.js
index 996fb36e..052429be 100644
--- a/test/unit/requestHandlers/pullConsumerHandlerTests.js
+++ b/test/unit/requestHandlers/pullConsumerHandlerTests.js
@@ -19,6 +19,7 @@ const sinon = require('sinon');
 const errors = require('../../../src/lib/errors');
 const pullConsumers = require('../../../src/lib/pullConsumers');
 const PullConsumerHandler = require('../../../src/lib/requestHandlers/pullConsumerHandler');
+const ErrorHandler = require('../../../src/lib/requestHandlers/errorHandler');
 const testUtil = require('../shared/util');
 
 chai.use(chaiAsPromised);
@@ -42,9 +43,9 @@ describe('PullConsumerHandler', () => {
         sinon.stub(pullConsumers, 'getData').rejects(new errors.ConfigLookupError('expectedError'));
         return requestHandler.process()
             .then((handler) => {
-                assert.ok(handler === requestHandler, 'should return a reference to original handler');
-                assert.strictEqual(requestHandler.getCode(), 404, 'should return expected code');
-                assert.deepStrictEqual(requestHandler.getBody(), {
+                assert.isTrue(handler instanceof ErrorHandler, 'should return a reference to error handler');
+                assert.strictEqual(handler.getCode(), 404, 'should return expected code');
+                assert.deepStrictEqual(handler.getBody(), {
                     code: 404,
                     message: 'expectedError'
                 }, 'should return expected body');
diff --git a/test/unit/requestHandlers/routerTests.js b/test/unit/requestHandlers/routerTests.js
index 0c4e896f..f6e9c785 100644
--- a/test/unit/requestHandlers/routerTests.js
+++ b/test/unit/requestHandlers/routerTests.js
@@ -19,7 +19,7 @@ const sinon = require('sinon');
 
 const BaseRequestHandler = require('../../../src/lib/requestHandlers/baseHandler');
 const configWorker = require('../../../src/lib/config');
-const InternalServerErrorHandler = require('../../../src/lib/requestHandlers/httpStatus/internalServerErrorHandler');
+const httpErrors = require('../../../src/lib/requestHandlers/httpErrors');
 const requestRouter = require('../../../src/lib/requestHandlers/router');
 const testUtil = require('../shared/util');
 
@@ -277,10 +277,10 @@ describe('Requests Router', () => {
         });
     });
 
-    it('should return hardcoded server internal error when error thrown in InternalServerErrorHandler', () => {
+    it('should return hardcoded \'internal server error\' when error handler fails', () => {
         requestRouter.register(['GET', 'POST'], '/test', CustomRequestHandler);
         sinon.stub(CustomRequestHandler.prototype, 'process').rejects(new Error('expectedError'));
-        sinon.stub(InternalServerErrorHandler.prototype, 'getBody').throws(new Error('ISE_Error'));
+        sinon.stub(httpErrors.InternalServerError.prototype, 'getBody').throws(new Error('ISE_Error'));
 
         const restOp = new testUtil.MockRestOperation({ method: 'GET' });
         restOp.uri = testUtil.parseURL('http://localhost/test');
diff --git a/test/unit/requestHandlers/systemPollerHandlerTests.js b/test/unit/requestHandlers/systemPollerHandlerTests.js
index 891aa40e..a3d98e4a 100644
--- a/test/unit/requestHandlers/systemPollerHandlerTests.js
+++ b/test/unit/requestHandlers/systemPollerHandlerTests.js
@@ -19,6 +19,7 @@ const sinon = require('sinon');
 const errors = require('../../../src/lib/errors');
 const systemPoller = require('../../../src/lib/systemPoller');
 const SystemPollerHandler = require('../../../src/lib/requestHandlers/systemPollerHandler');
+const ErrorHandler = require('../../../src/lib/requestHandlers/errorHandler');
 const testUtil = require('../shared/util');
 
 chai.use(chaiAsPromised);
@@ -68,9 +69,9 @@ describe('SystemPollerHandler', () => {
         sinon.stub(systemPoller, 'getPollersConfig').rejects(new errors.ConfigLookupError('expectedError'));
         return requestHandler.process()
             .then((handler) => {
-                assert.ok(handler === requestHandler, 'should return a reference to original handler');
-                assert.strictEqual(requestHandler.getCode(), 404, 'should return expected code');
-                assert.deepStrictEqual(requestHandler.getBody(), {
+                assert.isTrue(handler instanceof ErrorHandler, 'should return a reference to error handler');
+                assert.strictEqual(handler.getCode(), 404, 'should return expected code');
+                assert.deepStrictEqual(handler.getBody(), {
                     code: 404,
                     message: 'expectedError'
                 }, 'should return expected body');
diff --git a/test/unit/utils/configTests.js b/test/unit/utils/configTests.js
index fbcbfff7..0337469b 100644
--- a/test/unit/utils/configTests.js
+++ b/test/unit/utils/configTests.js
@@ -95,6 +95,10 @@ describe('Config Util', () => {
             .then(validated => configUtil.componentizeConfig(validated));
     };
 
+    const sortMappings = (mappings) => {
+        Object.keys(mappings).forEach(key => mappings[key].sort());
+    };
+
     beforeEach(() => {
         sinon.stub(deviceUtil, 'encryptSecret').resolvesArg(0);
         sinon.stub(deviceUtil, 'decryptSecret').resolvesArg(0);
@@ -112,7 +116,11 @@ describe('Config Util', () => {
             parseDeclaration(testConf.declaration)
                 .then(configData => configUtil.normalizeComponents(configData))
                 .then((normalized) => {
-                    assert.deepStrictEqual(normalized, testConf.expected);
+                    sortMappings(normalized.mappings);
+                    sortMappings(testConf.expected.mappings);
+
+                    assert.deepStrictEqual(normalized.mappings, testConf.expected.mappings);
+                    assert.sameDeepMembers(normalized.components, testConf.expected.components);
                 }));
         });
     });
diff --git a/test/unit/utils/miscTests.js b/test/unit/utils/miscTests.js
index c5db1a39..2576e535 100644
--- a/test/unit/utils/miscTests.js
+++ b/test/unit/utils/miscTests.js
@@ -392,152 +392,6 @@ describe('Misc Util', () => {
         });
     });
 
-    describe('.retryPromise()', () => {
-        it('should retry at least once', () => {
-            let tries = 0;
-            // first call + re-try = 2
-            const expectedTries = 2;
-
-            const promiseFunc = () => {
-                tries += 1;
-                return Promise.reject(new Error('expected error'));
-            };
-
-            return util.retryPromise(promiseFunc)
-                .catch((err) => {
-                    // in total should be 2 tries - 1 call + 1 re-try
-                    assert.strictEqual(tries, expectedTries);
-                    assert.ok(/expected error/.test(err));
-                });
-        });
-
-        it('should retry rejected promise', () => {
-            let tries = 0;
-            const maxTries = 3;
-            const expectedTries = maxTries + 1;
-
-            const promiseFunc = () => {
-                tries += 1;
-                return Promise.reject(new Error('expected error'));
-            };
-
-            return util.retryPromise(promiseFunc, { maxTries })
-                .catch((err) => {
-                    // in total should be 4 tries - 1 call + 3 re-try
-                    assert.strictEqual(tries, expectedTries);
-                    assert.ok(/expected error/.test(err));
-                });
-        });
-
-        it('should call callback on retry', () => {
-            let callbackFlag = false;
-            let callbackErrFlag = false;
-            let tries = 0;
-            let cbTries = 0;
-            const maxTries = 3;
-            const expectedTries = maxTries + 1;
-
-            const callback = (err) => {
-                cbTries += 1;
-                callbackErrFlag = /expected error/.test(err);
-                callbackFlag = true;
-                return true;
-            };
-            const promiseFunc = () => {
-                tries += 1;
-                return Promise.reject(new Error('expected error'));
-            };
-
-            return util.retryPromise(promiseFunc, { maxTries, callback })
-                .catch((err) => {
-                    // in total should be 4 tries - 1 call + 3 re-try
-                    assert.strictEqual(tries, expectedTries);
-                    assert.strictEqual(cbTries, maxTries);
-                    assert.ok(/expected error/.test(err));
-                    assert.ok(callbackErrFlag);
-                    assert.ok(callbackFlag);
-                });
-        });
-
-        it('should stop retry on success', () => {
-            let tries = 0;
-            const maxTries = 3;
-            const expectedTries = 2;
-
-            const promiseFunc = () => {
-                tries += 1;
-                if (tries === expectedTries) {
-                    return Promise.resolve('success');
-                }
-                return Promise.reject(new Error('expected error'));
-            };
-
-            return util.retryPromise(promiseFunc, { maxTries })
-                .then((data) => {
-                    assert.strictEqual(tries, expectedTries);
-                    assert.strictEqual(data, 'success');
-                });
-        });
-
-        it('should retry with delay', () => {
-            const timestamps = [];
-            const maxTries = 3;
-            const expectedTries = maxTries + 1;
-            const delay = 200;
-
-            const promiseFunc = () => {
-                timestamps.push(Date.now());
-                return Promise.reject(new Error('expected error'));
-            };
-
-            return util.retryPromise(promiseFunc, { maxTries, delay })
-                .catch((err) => {
-                    assert.ok(/expected error/.test(err));
-                    assert.ok(timestamps.length === expectedTries,
-                        `Expected ${expectedTries} timestamps, got ${timestamps.length}`);
-
-                    for (let i = 1; i < timestamps.length; i += 1) {
-                        const actualDelay = timestamps[i] - timestamps[i - 1];
-                        // sometimes it is less than expected
-                        assert.ok(actualDelay >= delay * 0.9,
-                            `Actual delay (${actualDelay}) is less than expected (${delay})`);
-                    }
-                });
-        }).timeout(2000);
-
-        it('should retry first time without backoff', () => {
-            const timestamps = [];
-            const maxTries = 3;
-            const expectedTries = maxTries + 1;
-            const delay = 200;
-            const backoff = 100;
-
-            const promiseFunc = () => {
-                timestamps.push(Date.now());
-                return Promise.reject(new Error('expected error'));
-            };
-
-            return util.retryPromise(promiseFunc, { maxTries, delay, backoff })
-                .catch((err) => {
-                    assert.ok(/expected error/.test(err));
-                    assert.ok(timestamps.length === expectedTries,
-                        `Expected ${expectedTries} timestamps, got ${timestamps.length}`);
-
-                    for (let i = 1; i < timestamps.length; i += 1) {
-                        const actualDelay = timestamps[i] - timestamps[i - 1];
-                        let expectedDelay = delay;
-                        // first attempt should be without backoff factor
-                        if (i > 1) {
-                            /* eslint-disable no-restricted-properties */
-                            expectedDelay += backoff * Math.pow(2, i - 1);
-                        }
-                        assert.ok(actualDelay >= expectedDelay * 0.9,
-                            `Actual delay (${actualDelay}) is less than expected (${expectedDelay})`);
-                    }
-                });
-        }).timeout(10000);
-    });
-
     describe('.getRandomArbitrary()', () => {
         it('should return random number from range', () => {
             const left = -5;
diff --git a/test/unit/utils/promiseTests.js b/test/unit/utils/promiseTests.js
new file mode 100644
index 00000000..f96a0119
--- /dev/null
+++ b/test/unit/utils/promiseTests.js
@@ -0,0 +1,248 @@
+/*
+ * Copyright 2018. F5 Networks, Inc. See End User License Agreement ("EULA") for
+ * license terms. Notwithstanding anything to the contrary in the EULA, Licensee
+ * may copy and modify this software product for its internal business purposes.
+ * Further, Licensee may upload, publish and distribute the modified version of
+ * the software product on devcentral.f5.com.
+ */
+
+'use strict';
+
+/* eslint-disable import/order */
+
+require('../shared/restoreCache')();
+
+const chai = require('chai');
+const chaiAsPromised = require('chai-as-promised');
+
+const promiseUtil = require('../../../src/lib/utils/promise');
+
+
+chai.use(chaiAsPromised);
+const assert = chai.assert;
+
+describe('Promise Util', () => {
+    describe('.allSettled()', () => {
+        it('should resolve when all settled', () => assert.becomes(
+            promiseUtil.allSettled([
+                Promise.resolve(1),
+                Promise.resolve(2)
+            ]),
+            [
+                { status: 'fulfilled', value: 1 },
+                { status: 'fulfilled', value: 2 }
+            ]
+        ));
+
+        it('should resolve when all rejected', () => {
+            const err1 = new Error('err1');
+            const err2 = new Error('err2');
+            return assert.becomes(
+                promiseUtil.allSettled([
+                    Promise.reject(err1),
+                    Promise.reject(err2)
+                ]),
+                [
+                    { status: 'rejected', reason: err1 },
+                    { status: 'rejected', reason: err2 }
+                ]
+            );
+        });
+
+        it('should resolve when one fulfilled and one rejected', () => {
+            const err2 = new Error('err2');
+            return assert.becomes(
+                promiseUtil.allSettled([
+                    Promise.resolve(1),
+                    Promise.reject(err2)
+                ]),
+                [
+                    { status: 'fulfilled', value: 1 },
+                    { status: 'rejected', reason: err2 }
+                ]
+            );
+        });
+    });
+
+    describe('.getValues()', () => {
+        it('should get values for all fulfilled promises', () => assert.becomes(
+            promiseUtil.allSettled([
+                Promise.resolve(1),
+                Promise.resolve(2)
+            ])
+                .then(promiseUtil.getValues),
+            [1, 2]
+        ));
+
+        it('should throw error when found rejected promise', () => {
+            const err1 = new Error('err1');
+            const err2 = new Error('err2');
+            return assert.isRejected(
+                promiseUtil.allSettled([
+                    Promise.reject(err1),
+                    Promise.reject(err2)
+                ])
+                    .then(promiseUtil.getValues),
+                /err1/
+            );
+        });
+
+        it('should get values for fulfilled and ignore rejected promises', () => {
+            const err2 = new Error('err2');
+            return assert.becomes(
+                promiseUtil.allSettled([
+                    Promise.resolve(1),
+                    Promise.reject(err2)
+                ])
+                    .then(statuses => promiseUtil.getValues(statuses, true)),
+                [1, undefined]
+            );
+        });
+    });
+
+    describe('.retry()', () => {
+        it('should retry at least once', () => {
+            let tries = 0;
+            // first call + re-try = 2
+            const expectedTries = 2;
+
+            const promiseFunc = () => {
+                tries += 1;
+                return Promise.reject(new Error('expected error'));
+            };
+
+            return promiseUtil.retry(promiseFunc)
+                .catch((err) => {
+                    // in total should be 2 tries - 1 call + 1 re-try
+                    assert.strictEqual(tries, expectedTries);
+                    assert.ok(/expected error/.test(err));
+                });
+        });
+
+        it('should retry rejected promise', () => {
+            let tries = 0;
+            const maxTries = 3;
+            const expectedTries = maxTries + 1;
+
+            const promiseFunc = () => {
+                tries += 1;
+                return Promise.reject(new Error('expected error'));
+            };
+
+            return promiseUtil.retry(promiseFunc, { maxTries })
+                .catch((err) => {
+                    // in total should be 4 tries - 1 call + 3 re-try
+                    assert.strictEqual(tries, expectedTries);
+                    assert.ok(/expected error/.test(err));
+                });
+        });
+
+        it('should call callback on retry', () => {
+            let callbackFlag = false;
+            let callbackErrFlag = false;
+            let tries = 0;
+            let cbTries = 0;
+            const maxTries = 3;
+            const expectedTries = maxTries + 1;
+
+            const callback = (err) => {
+                cbTries += 1;
+                callbackErrFlag = /expected error/.test(err);
+                callbackFlag = true;
+                return true;
+            };
+            const promiseFunc = () => {
+                tries += 1;
+                return Promise.reject(new Error('expected error'));
+            };
+
+            return promiseUtil.retry(promiseFunc, { maxTries, callback })
+                .catch((err) => {
+                    // in total should be 4 tries - 1 call + 3 re-try
+                    assert.strictEqual(tries, expectedTries);
+                    assert.strictEqual(cbTries, maxTries);
+                    assert.ok(/expected error/.test(err));
+                    assert.ok(callbackErrFlag);
+                    assert.ok(callbackFlag);
+                });
+        });
+
+        it('should stop retry on success', () => {
+            let tries = 0;
+            const maxTries = 3;
+            const expectedTries = 2;
+
+            const promiseFunc = () => {
+                tries += 1;
+                if (tries === expectedTries) {
+                    return Promise.resolve('success');
+                }
+                return Promise.reject(new Error('expected error'));
+            };
+
+            return promiseUtil.retry(promiseFunc, { maxTries })
+                .then((data) => {
+                    assert.strictEqual(tries, expectedTries);
+                    assert.strictEqual(data, 'success');
+                });
+        });
+
+        it('should retry with delay', () => {
+            const timestamps = [];
+            const maxTries = 3;
+            const expectedTries = maxTries + 1;
+            const delay = 200;
+
+            const promiseFunc = () => {
+                timestamps.push(Date.now());
+                return Promise.reject(new Error('expected error'));
+            };
+
+            return promiseUtil.retry(promiseFunc, { maxTries, delay })
+                .catch((err) => {
+                    assert.ok(/expected error/.test(err));
+                    assert.ok(timestamps.length === expectedTries,
+                        `Expected ${expectedTries} timestamps, got ${timestamps.length}`);
+
+                    for (let i = 1; i < timestamps.length; i += 1) {
+                        const actualDelay = timestamps[i] - timestamps[i - 1];
+                        // sometimes it is less than expected
+                        assert.ok(actualDelay >= delay * 0.9,
+                            `Actual delay (${actualDelay}) is less than expected (${delay})`);
+                    }
+                });
+        }).timeout(2000);
+
+        it('should retry first time without backoff', () => {
+            const timestamps = [];
+            const maxTries = 3;
+            const expectedTries = maxTries + 1;
+            const delay = 200;
+            const backoff = 100;
+
+            const promiseFunc = () => {
+                timestamps.push(Date.now());
+                return Promise.reject(new Error('expected error'));
+            };
+
+            return promiseUtil.retry(promiseFunc, { maxTries, delay, backoff })
+                .catch((err) => {
+                    assert.ok(/expected error/.test(err));
+                    assert.ok(timestamps.length === expectedTries,
+                        `Expected ${expectedTries} timestamps, got ${timestamps.length}`);
+
+                    for (let i = 1; i < timestamps.length; i += 1) {
+                        const actualDelay = timestamps[i] - timestamps[i - 1];
+                        let expectedDelay = delay;
+                        // first attempt should be without backoff factor
+                        if (i > 1) {
+                            /* eslint-disable no-restricted-properties */
+                            expectedDelay += backoff * Math.pow(2, i - 1);
+                        }
+                        assert.ok(actualDelay >= expectedDelay * 0.9,
+                            `Actual delay (${actualDelay}) is less than expected (${expectedDelay})`);
+                    }
+                });
+        }).timeout(10000);
+    });
+});