From 9641aecdcbbdc374b6caf08744bcbc991d973bae Mon Sep 17 00:00:00 2001
From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com>
Date: Wed, 13 Nov 2024 19:20:48 +0000
Subject: [PATCH 01/61] chore(deps): pin dependencies (#2244)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| ghcr.io/peerdb-io/flow-api | pinDigest | -> `a7c7d1c` |
| ghcr.io/peerdb-io/flow-snapshot-worker | pinDigest | -> `9b6db50` |
| ghcr.io/peerdb-io/flow-worker | pinDigest | -> `3b4da65` |
| ghcr.io/peerdb-io/peerdb-server | pinDigest | -> `dfb652e` |
| ghcr.io/peerdb-io/peerdb-ui | pinDigest | -> `cb4d1db` |
| grafana/pyroscope | pinDigest | -> `319bf32` |
| minio/minio | pinDigest | -> `ac59185` |
| postgres | major | `16-alpine` -> `17-alpine` |
| [temporalio/admin-tools](https://redirect.github.com/temporalio/docker-builds) | pinDigest | -> `da0c7a7` |
| [temporalio/auto-setup](https://redirect.github.com/temporalio/docker-builds) | pinDigest | -> `b1edc1e` |
| [temporalio/ui](https://redirect.github.com/temporalio/ui-server) | minor | `2.29.1` -> `2.31.2` |

---

### Release Notes
temporalio/ui-server (temporalio/ui)

### [`v2.31.2`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.31.2)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.31.1...v2.31.2)

- 2024-10-02 - [`7a19ad5`](https://redirect.github.com/temporalio/ui-server/commit/7a19ad564) - Update version.go to v2.30.2 ([#2363](https://redirect.github.com/temporalio/ui-server/issues/2363))
- 2024-10-01 - [`1bb9b9a`](https://redirect.github.com/temporalio/ui-server/commit/1bb9b9a9a) - Set startWorkflowDisabled to false ([#2362](https://redirect.github.com/temporalio/ui-server/issues/2362))
- 2024-10-01 - [`229e732`](https://redirect.github.com/temporalio/ui-server/commit/229e732f0) - MenuItem updates ([#2361](https://redirect.github.com/temporalio/ui-server/issues/2361))
- 2024-09-30 - [`fcc8279`](https://redirect.github.com/temporalio/ui-server/commit/fcc827931) - DT-2499 - wf reset ux improvement ([#2359](https://redirect.github.com/temporalio/ui-server/issues/2359))

### [`v2.31.1`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.31.1)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.31.0...v2.31.1)

- 2024-09-30 - [`a1d2cff`](https://redirect.github.com/temporalio/ui-server/commit/a1d2cff9) - update UI server version to v2.31.1 ([#2358](https://redirect.github.com/temporalio/ui-server/issues/2358))
- 2024-09-30 - [`07d826c`](https://redirect.github.com/temporalio/ui-server/commit/07d826c8) - 2.31.2 ([#2357](https://redirect.github.com/temporalio/ui-server/issues/2357))
- 2024-09-30 - [`c9415a2`](https://redirect.github.com/temporalio/ui-server/commit/c9415a2b) - DT-1740 - update workflow reset UI for new reapply types in 1.24 ([#2014](https://redirect.github.com/temporalio/ui-server/issues/2014))
- 2024-09-30 - [`7bf4fc6`](https://redirect.github.com/temporalio/ui-server/commit/7bf4fc69) - Nexus form with searchable combobox ([#2355](https://redirect.github.com/temporalio/ui-server/issues/2355))
- 2024-09-27 - [`8ebc23d`](https://redirect.github.com/temporalio/ui-server/commit/8ebc23dc) - Move stopPropagation to on:click for links ([#2356](https://redirect.github.com/temporalio/ui-server/issues/2356))
- 2024-09-27 - [`0106d97`](https://redirect.github.com/temporalio/ui-server/commit/0106d976) - Fix differences in duration ([#2353](https://redirect.github.com/temporalio/ui-server/issues/2353))
- 2024-09-27 - [`4ac3e46`](https://redirect.github.com/temporalio/ui-server/commit/4ac3e462) - Add empty state for Upcoming Runs on schedule detail page ([#2354](https://redirect.github.com/temporalio/ui-server/issues/2354))
- 2024-09-26 - [`6292257`](https://redirect.github.com/temporalio/ui-server/commit/6292257e) - DT-2347 - support multiselect in combobox ([#2352](https://redirect.github.com/temporalio/ui-server/issues/2352))

**Full Changelog**: https://github.com/temporalio/ui-server/compare/v2.31.0...v2.31.1

### [`v2.31.0`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.31.0)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.30.3...v2.31.0)

- 2024-09-26 - [`a0123c6`](https://redirect.github.com/temporalio/ui-server/commit/a0123c610) - Update go version to 1.23 in update ui-server ([#2351](https://redirect.github.com/temporalio/ui-server/issues/2351))
- 2024-09-25 - [`43ce09c`](https://redirect.github.com/temporalio/ui-server/commit/43ce09c62) - Use 1.23 ([#2349](https://redirect.github.com/temporalio/ui-server/issues/2349))
- 2024-09-25 - [`4f9ca12`](https://redirect.github.com/temporalio/ui-server/commit/4f9ca1292) - Use go 1.23.0 ([#2348](https://redirect.github.com/temporalio/ui-server/issues/2348))
- 2024-09-25 - [`3b5b1f0`](https://redirect.github.com/temporalio/ui-server/commit/3b5b1f0d2) - Update version.go to v2.31.0 ([#2347](https://redirect.github.com/temporalio/ui-server/issues/2347))
- 2024-09-24 - [`55b45e0`](https://redirect.github.com/temporalio/ui-server/commit/55b45e01f) - 2.31.1 ([#2346](https://redirect.github.com/temporalio/ui-server/issues/2346))
- 2024-09-24 - [`a9487eb`](https://redirect.github.com/temporalio/ui-server/commit/a9487eb8c) - Small UI fixes ([#2345](https://redirect.github.com/temporalio/ui-server/issues/2345))
- 2024-09-24 - [`24c74ab`](https://redirect.github.com/temporalio/ui-server/commit/24c74ab30) - 2.31.0 ([#2344](https://redirect.github.com/temporalio/ui-server/issues/2344))
- 2024-09-24 - [`1ab17ee`](https://redirect.github.com/temporalio/ui-server/commit/1ab17eec2) - Codefreeze 09.12.24 -> Main ([#2341](https://redirect.github.com/temporalio/ui-server/issues/2341))
- 2024-09-17 - [`651f1d0`](https://redirect.github.com/temporalio/ui-server/commit/651f1d089) - 2.30.8 ([#2331](https://redirect.github.com/temporalio/ui-server/issues/2331))
- 2024-09-17 - [`6853d85`](https://redirect.github.com/temporalio/ui-server/commit/6853d85db) - Remove child workflow timeline from row ([#2330](https://redirect.github.com/temporalio/ui-server/issues/2330))
- 2024-09-12 - [`04e1fa8`](https://redirect.github.com/temporalio/ui-server/commit/04e1fa8e7) - Nexus text changes ([#2326](https://redirect.github.com/temporalio/ui-server/issues/2326))
- 2024-09-12 - [`0ffe093`](https://redirect.github.com/temporalio/ui-server/commit/0ffe09333) - Don't allow multiple status filters of the same value ([#2325](https://redirect.github.com/temporalio/ui-server/issues/2325))
- 2024-09-12 - [`292196a`](https://redirect.github.com/temporalio/ui-server/commit/292196a08) - 2.30.7 ([#2322](https://redirect.github.com/temporalio/ui-server/issues/2322))
- 2024-09-12 - [`d9d9e45`](https://redirect.github.com/temporalio/ui-server/commit/d9d9e453c) - Allow slots to override text content ([#2321](https://redirect.github.com/temporalio/ui-server/issues/2321))
- 2024-09-12 - [`1b1c817`](https://redirect.github.com/temporalio/ui-server/commit/1b1c81762) - Fix workflow counts and reactive selectedId ([#2319](https://redirect.github.com/temporalio/ui-server/issues/2319))
- 2024-09-12 - [`8d238c3`](https://redirect.github.com/temporalio/ui-server/commit/8d238c3ef) - Add != operator to conditional menu defaults ([#2320](https://redirect.github.com/temporalio/ui-server/issues/2320))
- 2024-09-11 - [`13ccfd8`](https://redirect.github.com/temporalio/ui-server/commit/13ccfd833) - Add initial tooltip to badge ([#2310](https://redirect.github.com/temporalio/ui-server/issues/2310))
- 2024-09-11 - [`83abb85`](https://redirect.github.com/temporalio/ui-server/commit/83abb8512) - Add links to workflow counts on list workflows page ([#2315](https://redirect.github.com/temporalio/ui-server/issues/2315))

### [`v2.30.3`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.30.3)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.30.2...v2.30.3)

- 2024-09-10 - [`7e73a9d`](https://redirect.github.com/temporalio/ui-server/commit/7e73a9ddd) - Update version.go to v2.30.3 ([#2318](https://redirect.github.com/temporalio/ui-server/issues/2318))
- 2024-09-10 - [`6b4c425`](https://redirect.github.com/temporalio/ui-server/commit/6b4c4258a) - 2.30.6 ([#2317](https://redirect.github.com/temporalio/ui-server/issues/2317))
- 2024-09-10 - [`25b4795`](https://redirect.github.com/temporalio/ui-server/commit/25b4795cc) - Schedule search attributes ([#2275](https://redirect.github.com/temporalio/ui-server/issues/2275))
- 2024-09-06 - [`6f842c2`](https://redirect.github.com/temporalio/ui-server/commit/6f842c23e) - Timer value ([#2313](https://redirect.github.com/temporalio/ui-server/issues/2313))
- 2024-09-05 - [`8f10db3`](https://redirect.github.com/temporalio/ui-server/commit/8f10db330) - Support IS NULL and IS NOT NULL filter search ([#2298](https://redirect.github.com/temporalio/ui-server/issues/2298))
- 2024-09-04 - [`830de68`](https://redirect.github.com/temporalio/ui-server/commit/830de6836) - Add taskQueueView store to persist worker tab on task queue page ([#2309](https://redirect.github.com/temporalio/ui-server/issues/2309))
- 2024-09-04 - [`a6d1d8a`](https://redirect.github.com/temporalio/ui-server/commit/a6d1d8a4e) - Add checks for page settings ([#2307](https://redirect.github.com/temporalio/ui-server/issues/2307))
- 2024-09-04 - [`5ce6856`](https://redirect.github.com/temporalio/ui-server/commit/5ce6856a9) - 2.30.5 ([#2306](https://redirect.github.com/temporalio/ui-server/issues/2306))

### [`v2.30.2`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.30.2)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.30.1...v2.30.2)

- 2024-09-04 - [`6847ad7`](https://redirect.github.com/temporalio/ui-server/commit/6847ad72b) - Update version.go to v2.30.2 ([#2305](https://redirect.github.com/temporalio/ui-server/issues/2305))
- 2024-09-03 - [`4eb321e`](https://redirect.github.com/temporalio/ui-server/commit/4eb321eb4) - Fix copy button styles on CodeBlock ([#2303](https://redirect.github.com/temporalio/ui-server/issues/2303))
- 2024-09-03 - [`cde49cd`](https://redirect.github.com/temporalio/ui-server/commit/cde49cd71) - reduce hr margin on side nav ([#2304](https://redirect.github.com/temporalio/ui-server/issues/2304))
- 2024-09-03 - [`e269ec8`](https://redirect.github.com/temporalio/ui-server/commit/e269ec84a) - Fix max height on PaginatedTable component ([#2302](https://redirect.github.com/temporalio/ui-server/issues/2302))
- 2024-09-03 - [`7817cdb`](https://redirect.github.com/temporalio/ui-server/commit/7817cdb13) - Bidirectional Links ([#2299](https://redirect.github.com/temporalio/ui-server/issues/2299))
- 2024-09-03 - [`6ca894d`](https://redirect.github.com/temporalio/ui-server/commit/6ca894d74) - 2.30.4 ([#2301](https://redirect.github.com/temporalio/ui-server/issues/2301))
- 2024-09-03 - [`3010499`](https://redirect.github.com/temporalio/ui-server/commit/301049978) - update paginated table to support custom page size options ([#2300](https://redirect.github.com/temporalio/ui-server/issues/2300))
- 2024-09-03 - [`1b8db39`](https://redirect.github.com/temporalio/ui-server/commit/1b8db391a) - Make split card have a nicer fit and support hrefs. ([#2295](https://redirect.github.com/temporalio/ui-server/issues/2295))
- 2024-09-03 - [`2d3c60e`](https://redirect.github.com/temporalio/ui-server/commit/2d3c60eeb) - DT-2422 - changes for billing center v2 ([#2296](https://redirect.github.com/temporalio/ui-server/issues/2296))

### [`v2.30.1`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.30.1)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.30.0...v2.30.1)

- 2024-08-29 - [`d459dc0`](https://redirect.github.com/temporalio/ui-server/commit/d459dc033) - Update version.go to v2.30.1 ([#2294](https://redirect.github.com/temporalio/ui-server/issues/2294))
- 2024-08-29 - [`cd4a83b`](https://redirect.github.com/temporalio/ui-server/commit/cd4a83b85) - 2.30.3 ([#2293](https://redirect.github.com/temporalio/ui-server/issues/2293))
- 2024-08-29 - [`7db9de7`](https://redirect.github.com/temporalio/ui-server/commit/7db9de75b) - Add select-all to code-block ([#2292](https://redirect.github.com/temporalio/ui-server/issues/2292))
- 2024-08-29 - [`81b8507`](https://redirect.github.com/temporalio/ui-server/commit/81b850754) - 2.30.2 ([#2290](https://redirect.github.com/temporalio/ui-server/issues/2290))
- 2024-08-29 - [`bcde92d`](https://redirect.github.com/temporalio/ui-server/commit/bcde92dd2) - Extra checks for attributes ([#2291](https://redirect.github.com/temporalio/ui-server/issues/2291))
- 2024-08-29 - [`b2bc572`](https://redirect.github.com/temporalio/ui-server/commit/b2bc572c7) - Add check for primary key ([#2289](https://redirect.github.com/temporalio/ui-server/issues/2289))
- 2024-08-28 - [`ce8a68d`](https://redirect.github.com/temporalio/ui-server/commit/ce8a68dde) - 2.30.1 ([#2288](https://redirect.github.com/temporalio/ui-server/issues/2288))
- 2024-08-28 - [`67ab213`](https://redirect.github.com/temporalio/ui-server/commit/67ab21384) - Add label for Next Retry on Pending Activities ([#2287](https://redirect.github.com/temporalio/ui-server/issues/2287))
- 2024-08-28 - [`cbf6f61`](https://redirect.github.com/temporalio/ui-server/commit/cbf6f61cc) - 2.30.0 ([#2285](https://redirect.github.com/temporalio/ui-server/issues/2285))
- 2024-08-28 - [`fb8eb54`](https://redirect.github.com/temporalio/ui-server/commit/fb8eb5427) - Add next retry time and refresh workflow every 10 seconds ([#2284](https://redirect.github.com/temporalio/ui-server/issues/2284))
- 2024-08-28 - [`ceac218`](https://redirect.github.com/temporalio/ui-server/commit/ceac218d5) - Remove nexus filter for now, format for time for pending events ([#2283](https://redirect.github.com/temporalio/ui-server/issues/2283))

### [`v2.30.0`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.30.0)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.29.2...v2.30.0)

- 2024-08-27 - [`458b816`](https://redirect.github.com/temporalio/ui-server/commit/458b81637) - Update version.go to v2.30.0 ([#2282](https://redirect.github.com/temporalio/ui-server/issues/2282))
- 2024-08-27 - [`6b25926`](https://redirect.github.com/temporalio/ui-server/commit/6b259260a) - Workflow Execution Page - Bring into the Light ([#2269](https://redirect.github.com/temporalio/ui-server/issues/2269))
- 2024-08-26 - [`5a99fd4`](https://redirect.github.com/temporalio/ui-server/commit/5a99fd4a3) - 2.29.4 ([#2281](https://redirect.github.com/temporalio/ui-server/issues/2281))
- 2024-08-26 - [`389cc02`](https://redirect.github.com/temporalio/ui-server/commit/389cc02fa) - add pageSizeOptions prop and add max-h-none to table wrapper ([#2280](https://redirect.github.com/temporalio/ui-server/issues/2280))
- 2024-08-26 - [`eaf2401`](https://redirect.github.com/temporalio/ui-server/commit/eaf240110) - 2.29.3 ([#2279](https://redirect.github.com/temporalio/ui-server/issues/2279))
- 2024-08-23 - [`32d26b1`](https://redirect.github.com/temporalio/ui-server/commit/32d26b177) - Remove encodingDecoded field ([#2274](https://redirect.github.com/temporalio/ui-server/issues/2274))
- 2024-08-20 - [`61ab2bf`](https://redirect.github.com/temporalio/ui-server/commit/61ab2bf44) - Fix the wrong nano conversion in timestampToDate ([#2273](https://redirect.github.com/temporalio/ui-server/issues/2273))
- 2024-08-19 - [`38fa5e3`](https://redirect.github.com/temporalio/ui-server/commit/38fa5e31c) - remove expand when slot is empty ([#2242](https://redirect.github.com/temporalio/ui-server/issues/2242))
- 2024-08-14 - [`60340c2`](https://redirect.github.com/temporalio/ui-server/commit/60340c208) - 2.29.2 ([#2267](https://redirect.github.com/temporalio/ui-server/issues/2267))
- 2024-08-13 - [`e08f5ee`](https://redirect.github.com/temporalio/ui-server/commit/e08f5ee8a) - Add API PaginatedTable component ([#2261](https://redirect.github.com/temporalio/ui-server/issues/2261))
- 2024-08-13 - [`149e011`](https://redirect.github.com/temporalio/ui-server/commit/149e0111d) - 2.29.1 ([#2266](https://redirect.github.com/temporalio/ui-server/issues/2266))
- 2024-08-13 - [`5197587`](https://redirect.github.com/temporalio/ui-server/commit/51975879c) - Pass helpText prop to Nexus Form ([#2265](https://redirect.github.com/temporalio/ui-server/issues/2265))
- 2024-08-12 - [`6b2cf56`](https://redirect.github.com/temporalio/ui-server/commit/6b2cf5640) - 2.29.0 ([#2263](https://redirect.github.com/temporalio/ui-server/issues/2263))
- 2024-08-12 - [`0df27e6`](https://redirect.github.com/temporalio/ui-server/commit/0df27e674) - Ability to pass regex to form from create/edit ([#2264](https://redirect.github.com/temporalio/ui-server/issues/2264))
- 2024-08-12 - [`d7781aa`](https://redirect.github.com/temporalio/ui-server/commit/d7781aac2) - Markdown Render - Nexus Description ([#2258](https://redirect.github.com/temporalio/ui-server/issues/2258))
- 2024-08-09 - [`a458f6f`](https://redirect.github.com/temporalio/ui-server/commit/a458f6fd9) - Remove namespace name when loading schedule ([#2259](https://redirect.github.com/temporalio/ui-server/issues/2259))
- 2024-08-07 - [`785d006`](https://redirect.github.com/temporalio/ui-server/commit/785d00669) - Nexus Callback Information ([#2256](https://redirect.github.com/temporalio/ui-server/issues/2256))
- 2024-08-05 - [`3049ea3`](https://redirect.github.com/temporalio/ui-server/commit/3049ea3d7) - 2.28.11 ([#2254](https://redirect.github.com/temporalio/ui-server/issues/2254))
- 2024-08-05 - [`771e3f1`](https://redirect.github.com/temporalio/ui-server/commit/771e3f19c) - fix text styles in navigation badge ([#2253](https://redirect.github.com/temporalio/ui-server/issues/2253))
- 2024-08-02 - [`72d623d`](https://redirect.github.com/temporalio/ui-server/commit/72d623d36) - Revert "Update base images ([#2226](https://redirect.github.com/temporalio/ui-server/issues/2226))" ([#2252](https://redirect.github.com/temporalio/ui-server/issues/2252))
- 2024-08-01 - [`fc30f9a`](https://redirect.github.com/temporalio/ui-server/commit/fc30f9a02) - Extra Test Id's for easy testing ([#2251](https://redirect.github.com/temporalio/ui-server/issues/2251))
- 2024-08-01 - [`555e018`](https://redirect.github.com/temporalio/ui-server/commit/555e018f4) - Update base images ([#2226](https://redirect.github.com/temporalio/ui-server/issues/2226))
- 2024-08-01 - [`dae9aac`](https://redirect.github.com/temporalio/ui-server/commit/dae9aac42) - 2.28.10 ([#2249](https://redirect.github.com/temporalio/ui-server/issues/2249))
- 2024-08-01 - [`6cb9798`](https://redirect.github.com/temporalio/ui-server/commit/6cb97985b) - Small UI fixes ([#2248](https://redirect.github.com/temporalio/ui-server/issues/2248))
- 2024-08-01 - [`e9b014e`](https://redirect.github.com/temporalio/ui-server/commit/e9b014ede) - Nexus Help Text and various UI updates ([#2247](https://redirect.github.com/temporalio/ui-server/issues/2247))
- 2024-08-01 - [`82030a9`](https://redirect.github.com/temporalio/ui-server/commit/82030a9b0) - change min text color ([#2245](https://redirect.github.com/temporalio/ui-server/issues/2245))
- 2024-07-31 - [`7f8f1f5`](https://redirect.github.com/temporalio/ui-server/commit/7f8f1f512) - 2.28.9 ([#2244](https://redirect.github.com/temporalio/ui-server/issues/2244))
- 2024-07-31 - [`d39dd5e`](https://redirect.github.com/temporalio/ui-server/commit/d39dd5e4e) - Text styles audit ([#2236](https://redirect.github.com/temporalio/ui-server/issues/2236))
- 2024-07-30 - [`af0d545`](https://redirect.github.com/temporalio/ui-server/commit/af0d5451b) - Dispatch ChangeSet instead of creating new EditorState on resetView in CodeBlock ([#2243](https://redirect.github.com/temporalio/ui-server/issues/2243))
- 2024-07-29 - [`5f41480`](https://redirect.github.com/temporalio/ui-server/commit/5f41480f1) - update range colors ([#2240](https://redirect.github.com/temporalio/ui-server/issues/2240))
- 2024-07-29 - [`a638bce`](https://redirect.github.com/temporalio/ui-server/commit/a638bce01) - Fix editable CodeBlock ([#2239](https://redirect.github.com/temporalio/ui-server/issues/2239))
- 2024-07-26 - [`944162d`](https://redirect.github.com/temporalio/ui-server/commit/944162d38) - Refactor encoding and decoding to use same token logic and add unit tests ([#2237](https://redirect.github.com/temporalio/ui-server/issues/2237))
- 2024-07-24 - [`d15597a`](https://redirect.github.com/temporalio/ui-server/commit/d15597ac3) - 2.28.8 ([#2234](https://redirect.github.com/temporalio/ui-server/issues/2234))
- 2024-07-24 - [`5bc4a06`](https://redirect.github.com/temporalio/ui-server/commit/5bc4a068d) - Fix alert ([#2233](https://redirect.github.com/temporalio/ui-server/issues/2233))
- 2024-07-24 - [`2b14a4d`](https://redirect.github.com/temporalio/ui-server/commit/2b14a4d3d) - 2.28.7 ([#2232](https://redirect.github.com/temporalio/ui-server/issues/2232))
- 2024-07-24 - [`793ea20`](https://redirect.github.com/temporalio/ui-server/commit/793ea2063) - Fix WorkflowTaskFailed Alert when Cause is not provided ([#2230](https://redirect.github.com/temporalio/ui-server/issues/2230))
- 2024-07-23 - [`29fcbac`](https://redirect.github.com/temporalio/ui-server/commit/29fcbacc3) - Add handleChange to bottom nav controls ([#2229](https://redirect.github.com/temporalio/ui-server/issues/2229))
- 2024-07-22 - [`88eeac1`](https://redirect.github.com/temporalio/ui-server/commit/88eeac1df) - Small CodeBlock in Modal fixes ([#2225](https://redirect.github.com/temporalio/ui-server/issues/2225))
- 2024-07-22 - [`50a5e10`](https://redirect.github.com/temporalio/ui-server/commit/50a5e10e5) - Add Chromatic for visual regression testing ([#2169](https://redirect.github.com/temporalio/ui-server/issues/2169))
- 2024-07-22 - [`a251302`](https://redirect.github.com/temporalio/ui-server/commit/a25130253) - Update spacing and font weight ([#2222](https://redirect.github.com/temporalio/ui-server/issues/2222))
- 2024-07-22 - [`a205f52`](https://redirect.github.com/temporalio/ui-server/commit/a205f523c) - Return 404 page for endpoint not found. Use endpoint name on delete confirmation and remove the need to type DELETE. ([#2224](https://redirect.github.com/temporalio/ui-server/issues/2224))
- 2024-07-19 - [`a852a06`](https://redirect.github.com/temporalio/ui-server/commit/a852a06c4) - 2.28.6 ([#2221](https://redirect.github.com/temporalio/ui-server/issues/2221))
- 2024-07-19 - [`22e0d77`](https://redirect.github.com/temporalio/ui-server/commit/22e0d77f9) - fix navigation badge styles ([#2220](https://redirect.github.com/temporalio/ui-server/issues/2220))
- 2024-07-19 - [`7f351f8`](https://redirect.github.com/temporalio/ui-server/commit/7f351f812) - Add WorkflowProperties event and group with Marker and SAUpsert into new Other group ([#2219](https://redirect.github.com/temporalio/ui-server/issues/2219))

### [`v2.29.2`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.29.2)

[Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.29.1...v2.29.2)

- 2024-07-19 - [`2d20650`](https://redirect.github.com/temporalio/ui-server/commit/2d2065099) - Update version.go to v2.29.2 ([#2218](https://redirect.github.com/temporalio/ui-server/issues/2218))
- 2024-07-18 - [`65ece25`](https://redirect.github.com/temporalio/ui-server/commit/65ece2565) - 2.28.5 ([#2217](https://redirect.github.com/temporalio/ui-server/issues/2217))
- 2024-07-18 - [`7ef4701`](https://redirect.github.com/temporalio/ui-server/commit/7ef470154) - New nexus icon ([#2216](https://redirect.github.com/temporalio/ui-server/issues/2216))
- 2024-07-17 - [`7ec7291`](https://redirect.github.com/temporalio/ui-server/commit/7ec729131) - Fix hover background on Namespace selector ([#2214](https://redirect.github.com/temporalio/ui-server/issues/2214))
- 2024-07-17 - [`c84b7df`](https://redirect.github.com/temporalio/ui-server/commit/c84b7dff0) - Update codemirror to get bug fixes ([#2213](https://redirect.github.com/temporalio/ui-server/issues/2213))
- 2024-07-16 - [`3d38441`](https://redirect.github.com/temporalio/ui-server/commit/3d38441b2) - 2.28.4 ([#2211](https://redirect.github.com/temporalio/ui-server/issues/2211))
- 2024-07-16 - [`b04945a`](https://redirect.github.com/temporalio/ui-server/commit/b04945aac) - add new components for promo nav item ([#2210](https://redirect.github.com/temporalio/ui-server/issues/2210))
- 2024-07-16 - [`118c87a`](https://redirect.github.com/temporalio/ui-server/commit/118c87a40) - 2.28.3 ([#2209](https://redirect.github.com/temporalio/ui-server/issues/2209))
- 2024-07-16 - [`c0127ad`](https://redirect.github.com/temporalio/ui-server/commit/c0127ad05) - New Nexus icon ([#2208](https://redirect.github.com/temporalio/ui-server/issues/2208))
- 2024-07-15 - [`a976552`](https://redirect.github.com/temporalio/ui-server/commit/a97655209) - Nexus Bug Bash UI updates ([#2207](https://redirect.github.com/temporalio/ui-server/issues/2207))
- 2024-07-15 - [`f418976`](https://redirect.github.com/temporalio/ui-server/commit/f418976ce) - Mobile Nav ([#2197](https://redirect.github.com/temporalio/ui-server/issues/2197))
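The `pinDigest` updates summarized in the table above replace mutable image tags with immutable `name:tag@sha256:digest` references, so a later repoint of the tag upstream can no longer change what gets pulled. As an illustration of that reference format (not part of this PR), a small Go sketch that splits a pinned image reference into its components:

```go
package main

import (
	"fmt"
	"strings"
)

// splitImageRef splits an image reference of the form name[:tag][@sha256:digest]
// into its components. Illustrative only; production code should use a proper
// registry reference-parsing library instead of this simplified logic.
func splitImageRef(ref string) (name, tag, digest string) {
	// The digest, if present, follows the last "@".
	if i := strings.LastIndex(ref, "@"); i != -1 {
		digest = ref[i+1:]
		ref = ref[:i]
	}
	// The tag, if present, is the text after the last ":" that occurs after the
	// final "/" (so a registry port such as host:5000/img is not taken as a tag).
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		tag = ref[i+1:]
		ref = ref[:i]
	}
	return ref, tag, digest
}

func main() {
	name, tag, digest := splitImageRef("postgres:17-alpine@sha256:d388be15cfb6")
	fmt.Printf("%s %s %s\n", name, tag, digest)
	// → postgres 17-alpine sha256:d388be15cfb6
}
```

When both a tag and a digest are present, the digest wins at pull time; the tag survives only as documentation of what the digest was believed to point at when it was pinned.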
---

### Configuration

📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
---
 docker-compose-dev.yml | 12 ++++++------
 docker-compose.yml     | 20 ++++++++++----------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/docker-compose-dev.yml b/docker-compose-dev.yml
index 711081925..7309472d7 100644
--- a/docker-compose-dev.yml
+++ b/docker-compose-dev.yml
@@ -39,7 +39,7 @@ x-flow-worker-env: &flow-worker-env
 services:
   catalog:
     container_name: catalog
-    image: postgres:16-alpine
+    image: postgres:17-alpine@sha256:d388be15cfb665c723da47cccdc7ea5c003ed71f700c5419bbd075033227ce1f
     command: -c config_file=/etc/postgresql.conf
     ports:
       - 9901:5432
@@ -73,7 +73,7 @@ services:
       - POSTGRES_PWD=postgres
       - POSTGRES_SEEDS=catalog
      - DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development-sql.yaml
-    image: temporalio/auto-setup:1.25
+    image: temporalio/auto-setup:1.25@sha256:b1edc1e20002d958c8182f2ae08dee877a125083683a627a44917683419ba6a8
     ports:
       - 7233:7233
     volumes:
@@ -83,7 +83,7 @@ services:

   pyroscope:
     container_name: pyroscope
-    image: grafana/pyroscope:latest
+    image: grafana/pyroscope:latest@sha256:319bf32ae06b67c1b9795c06ae6c3ba67e9b43382896df7a9df54cdb47a5c535
     ports:
       - 4040:4040
@@ -95,7 +95,7 @@ services:
       - TEMPORAL_ADDRESS=temporal:7233
       - TEMPORAL_CLI_ADDRESS=temporal:7233
       - TEMPORAL_CLI_SHOW_STACKS=1
-    image: temporalio/admin-tools:1.25.2-tctl-1.18.1-cli-1.1.1
+    image: temporalio/admin-tools:1.25.2-tctl-1.18.1-cli-1.1.1@sha256:da0c7a7982b571857173ab8f058e7f139b3054800abb4dcb100445d29a563ee8
     stdin_open: true
     tty: true
     entrypoint: /etc/temporal/entrypoint.sh
@@ -116,7 +116,7 @@ services:
       - TEMPORAL_ADDRESS=temporal:7233
       - TEMPORAL_CORS_ORIGINS=http://localhost:3000
       - TEMPORAL_CSRF_COOKIE_INSECURE=true
-    image: temporalio/ui:2.29.1
+    image: temporalio/ui:2.31.2@sha256:28bb3ea5a6ea3e09f16b521f32ab727c96470f7f1e420c66a6cbfb02001a8aa2
     ports:
       - 8085:8080
@@ -209,7 +209,7 @@ services:
       - flow-api

   minio:
-    image: minio/minio:RELEASE.2024-11-07T00-52-20Z
+    image: minio/minio:RELEASE.2024-11-07T00-52-20Z@sha256:ac591851803a79aee64bc37f66d77c56b0a4b6e12d9e5356380f4105510f2332
     volumes:
       - minio-data:/data
     ports:
diff --git a/docker-compose.yml b/docker-compose.yml
index ce4a3994a..c03c9993d 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -32,7 +32,7 @@ x-flow-worker-env: &flow-worker-env
 services:
   catalog:
     container_name: catalog
-    image: postgres:16-alpine
+    image: postgres:17-alpine@sha256:d388be15cfb665c723da47cccdc7ea5c003ed71f700c5419bbd075033227ce1f
     command: -c config_file=/etc/postgresql.conf
     restart: unless-stopped
     ports:
@@ -68,7 +68,7 @@ services:
       - POSTGRES_PWD=postgres
       - POSTGRES_SEEDS=catalog
       - DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development-sql.yaml
-    image: temporalio/auto-setup:1.25
+    image: temporalio/auto-setup:1.25@sha256:b1edc1e20002d958c8182f2ae08dee877a125083683a627a44917683419ba6a8
     ports:
       - 7233:7233
     volumes:
@@ -85,7 +85,7 @@ services:
       - TEMPORAL_ADDRESS=temporal:7233
       - TEMPORAL_CLI_ADDRESS=temporal:7233
       - TEMPORAL_CLI_SHOW_STACKS=1
-    image: temporalio/admin-tools:1.25.2-tctl-1.18.1-cli-1.1.1
+    image: temporalio/admin-tools:1.25.2-tctl-1.18.1-cli-1.1.1@sha256:da0c7a7982b571857173ab8f058e7f139b3054800abb4dcb100445d29a563ee8
     stdin_open: true
     tty: true
     entrypoint: /etc/temporal/entrypoint.sh
@@ -106,13 +106,13 @@ services:
       - TEMPORAL_ADDRESS=temporal:7233
       - TEMPORAL_CORS_ORIGINS=http://localhost:3000
       - TEMPORAL_CSRF_COOKIE_INSECURE=true
-    image: temporalio/ui:2.29.1
+    image: temporalio/ui:2.31.2@sha256:28bb3ea5a6ea3e09f16b521f32ab727c96470f7f1e420c66a6cbfb02001a8aa2
     ports:
       - 8085:8080

   flow-api:
     container_name: flow_api
-    image: ghcr.io/peerdb-io/flow-api:latest-dev
+    image: ghcr.io/peerdb-io/flow-api:latest-dev@sha256:a7c7d1cb12f618a747233556820df114acc4d2e737e487bdfdbb0051720710c9
     restart: unless-stopped
     ports:
       - 8112:8112
@@ -128,7 +128,7 @@ services:

   flow-snapshot-worker:
     container_name: flow-snapshot-worker
-    image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-dev
+    image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-dev@sha256:9b6db5039e4f73f7d205a40400683ff9ca242dbb2eee405a2e9b056e947c8666
     restart: unless-stopped
     environment:
       <<: [*catalog-config, *flow-worker-env, *minio-config]
@@ -138,7 +138,7 @@ services:

   flow-worker:
     container_name: flow-worker
-    image: ghcr.io/peerdb-io/flow-worker:latest-dev
+    image: ghcr.io/peerdb-io/flow-worker:latest-dev@sha256:3b4da6550deacd638e99592b65d9f61191ed020b8268efa52297c3101ab37c16
     restart: unless-stopped
     environment:
       <<: [*catalog-config, *flow-worker-env, *minio-config]
@@ -151,7 +151,7 @@ services:
   peerdb:
     container_name: peerdb-server
     stop_signal: SIGINT
-    image: ghcr.io/peerdb-io/peerdb-server:latest-dev
+    image: ghcr.io/peerdb-io/peerdb-server:latest-dev@sha256:dfb652eebb410198f28e720d04d17c4de0698581d214ce99337ee1efc0874ba4
     restart: unless-stopped
     environment:
       <<: *catalog-config
@@ -167,7 +167,7 @@ services:

   peerdb-ui:
     container_name: peerdb-ui
-    image: ghcr.io/peerdb-io/peerdb-ui:latest-dev
+    image: ghcr.io/peerdb-io/peerdb-ui:latest-dev@sha256:cb4d1dbd167c6bdd44e795a33b69ab5001932c7cbd8ac02126b79331022c17a0
     restart: unless-stopped
     ports:
       - 3000:3000
@@ -184,7 +184,7 @@ services:
       - flow-api

   minio:
-    image: minio/minio:RELEASE.2024-11-07T00-52-20Z
+    image: minio/minio:RELEASE.2024-11-07T00-52-20Z@sha256:ac591851803a79aee64bc37f66d77c56b0a4b6e12d9e5356380f4105510f2332
     restart: unless-stopped
     volumes:
       - minio-data:/data

From 1f969d191ea98df5933f85a7782d77e8add441cc Mon Sep 17 00:00:00 2001
From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com>
Date: Thu, 14 Nov 2024 01:06:28 +0530
Subject: [PATCH 02/61] Fix resync logic for postgres to postgres (#2240)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This PR fixes the intended drop-if-exists logic for _resync tables when a Postgres to Postgres mirror is resynced, along with a casing issue.

Co-authored-by: Philip Dubé
---
 flow/connectors/postgres/postgres.go | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go
index b3161161e..c8a853287 100644
--- a/flow/connectors/postgres/postgres.go
+++ b/flow/connectors/postgres/postgres.go
@@ -891,15 +891,17 @@ func (c *PostgresConnector) SetupNormalizedTable(
 	if tableAlreadyExists {
 		c.logger.Info("[postgres] table already exists, skipping",
 			slog.String("table", tableIdentifier))
-		if config.IsResync {
-			err := c.ExecuteCommand(ctx, fmt.Sprintf(dropTableIfExistsSQL,
-				QuoteIdentifier(parsedNormalizedTable.Schema),
-				QuoteIdentifier(parsedNormalizedTable.Table)))
-			if err != nil {
-				return false, fmt.Errorf("error while dropping _resync table: %w", err)
-			}
+		if !config.IsResync {
+			return true, nil
+		}
+
+		err := c.ExecuteCommand(ctx, fmt.Sprintf(dropTableIfExistsSQL,
+			QuoteIdentifier(parsedNormalizedTable.Schema),
+			QuoteIdentifier(parsedNormalizedTable.Table)))
+		if err != nil {
+			return false, fmt.Errorf("error while dropping _resync table: %w", err)
 		}
-		return true, nil
+		c.logger.Info("[postgres] dropped resync table for resync", slog.String("resyncTable", parsedNormalizedTable.String()))
 	}

 	// convert the column names and types to Postgres types
@@ -1437,7 +1439,7 @@ func (c *PostgresConnector) RenameTables(
 	}

 	// rename the src table to dst
-	_, err = c.execWithLoggingTx(ctx, fmt.Sprintf("ALTER TABLE %s RENAME TO %s", src, dstTable.Table), renameTablesTx)
+	_, err = c.execWithLoggingTx(ctx, fmt.Sprintf("ALTER TABLE %s RENAME TO %s", src, QuoteIdentifier(dstTable.Table)), renameTablesTx)
 	if err != nil {
 		return nil, fmt.Errorf("unable to rename table %s to %s: %w", src, dst, err)
 	}

From 0fbd7f752bfa838c6e8ee1263a3038501648920a Mon Sep 17 00:00:00 2001
From: Kevin Biju <52661649+heavycrystal@users.noreply.github.com>
Date: Thu, 14 Nov 2024 02:10:53 +0530
Subject: [PATCH 03/61] [clickhouse] change raw table definition to use int64
 for _peerdb_batch_id (#2248)

---
 flow/connectors/clickhouse/cdc.go       | 12 ++++++------
 flow/connectors/clickhouse/normalize.go |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/flow/connectors/clickhouse/cdc.go b/flow/connectors/clickhouse/cdc.go
index 3e002f502..8fae9d6f2 100644
--- a/flow/connectors/clickhouse/cdc.go
+++ b/flow/connectors/clickhouse/cdc.go
@@ -46,13 +46,13 @@ func (c *ClickHouseConnector) CreateRawTable(ctx context.Context, req *protos.Cr
 	rawTableName := c.getRawTableName(req.FlowJobName)

 	createRawTableSQL := `CREATE TABLE IF NOT EXISTS %s (
-		_peerdb_uid UUID NOT NULL,
-		_peerdb_timestamp Int64 NOT NULL,
-		_peerdb_destination_table_name String NOT NULL,
-		_peerdb_data String NOT NULL,
-		_peerdb_record_type Int NOT NULL,
+		_peerdb_uid UUID,
+		_peerdb_timestamp Int64,
+		_peerdb_destination_table_name String,
+		_peerdb_data String,
+		_peerdb_record_type Int,
 		_peerdb_match_data String,
-		_peerdb_batch_id Int,
+		_peerdb_batch_id Int64,
 		_peerdb_unchanged_toast_columns String
 	) ENGINE = MergeTree() ORDER BY (_peerdb_batch_id, _peerdb_destination_table_name);`

diff --git a/flow/connectors/clickhouse/normalize.go b/flow/connectors/clickhouse/normalize.go
index 770abc7f2..d5357c9a1 100644
--- a/flow/connectors/clickhouse/normalize.go
+++ b/flow/connectors/clickhouse/normalize.go
@@ -462,7 +462,7 @@ func (c *ClickHouseConnector) getDistinctTableNamesInBatch(
 	rawTbl := c.getRawTableName(flowJobName)

 	q := fmt.Sprintf(
-		`SELECT DISTINCT _peerdb_destination_table_name FROM %s WHERE _peerdb_batch_id > %d AND _peerdb_batch_id <= %d`,
+		`SELECT DISTINCT _peerdb_destination_table_name FROM %s WHERE _peerdb_batch_id>%d AND _peerdb_batch_id<=%d`,
 		rawTbl, normalizeBatchID, syncBatchID)

 	rows, err := c.query(ctx, q)

From 713f10b7c898e82b65f6c90fff671d2884f383a4 Mon Sep 17 00:00:00 2001
From: Kevin Biju <52661649+heavycrystal@users.noreply.github.com>
Date: Thu, 14 Nov 2024 02:44:24 +0530
Subject: [PATCH 04/61] [postgres] reduce unnecessary queries (#2214)

closes #2210
---
 flow/connectors/postgres/cdc.go               |  62 +++++++----
 flow/connectors/postgres/client.go            |   9 +-
 flow/connectors/postgres/postgres.go          |  74 ++++++++-----
 flow/connectors/postgres/qrep.go              |  21 +++-
 flow/connectors/postgres/qrep_bench_test.go   |  13 ++-
 .../postgres/qrep_query_executor.go           |  38 ++++---
 .../postgres/qrep_query_executor_test.go      | 102 ++++++------------
 flow/e2e/postgres/postgres.go                 |   5 +-
 flow/e2e/test_utils.go                        |   5 +-
 flow/shared/postgres.go                       |  14 +--
 10 files changed, 189 insertions(+), 154 deletions(-)

diff --git a/flow/connectors/postgres/cdc.go b/flow/connectors/postgres/cdc.go
index a355cfa00..9f4bd0b96 100644
--- a/flow/connectors/postgres/cdc.go
+++ b/flow/connectors/postgres/cdc.go
@@ -67,9 +67,9 @@ func (c *PostgresConnector) NewPostgresCDCSource(cdcConfig *PostgresCDCConfig) *
 		relationMessageMapping: cdcConfig.RelationMessageMapping,
 		slot:                   cdcConfig.Slot,
 		publication:            cdcConfig.Publication,
-		childToParentRelIDMapping: cdcConfig.ChildToParentRelIDMap,
 		typeMap:                pgtype.NewMap(),
 		commitLock:             nil,
+		childToParentRelIDMapping: cdcConfig.ChildToParentRelIDMap,
 		catalogPool:            cdcConfig.CatalogPool,
 		flowJobName:            cdcConfig.FlowJobName,
 		hushWarnUnhandledMessageType: make(map[pglogrepl.MessageType]struct{}),
@@ -85,21 +85,18 @@ func
GetChildToParentRelIDMap(ctx context.Context, conn *pgx.Conn) (map[uint32]u WHERE parent.relkind='p'; ` - rows, err := conn.Query(ctx, query, pgx.QueryExecModeSimpleProtocol) + rows, err := conn.Query(ctx, query) if err != nil { return nil, fmt.Errorf("error querying for child to parent relid map: %w", err) } - defer rows.Close() childToParentRelIDMap := make(map[uint32]uint32) - var parentRelID pgtype.Uint32 - var childRelID pgtype.Uint32 - for rows.Next() { - err := rows.Scan(&parentRelID, &childRelID) - if err != nil { - return nil, fmt.Errorf("error scanning child to parent relid map: %w", err) - } + var parentRelID, childRelID pgtype.Uint32 + if _, err := pgx.ForEachRow(rows, []any{&parentRelID, &childRelID}, func() error { childToParentRelIDMap[childRelID.Uint32] = parentRelID.Uint32 + return nil + }); err != nil { + return nil, fmt.Errorf("error iterating over child to parent relid map: %w", err) } return childToParentRelIDMap, nil @@ -114,6 +111,7 @@ type replProcessor[Items model.Items] interface { p *PostgresCDCSource, tuple *pglogrepl.TupleDataColumn, col *pglogrepl.RelationMessageColumn, + customTypeMapping map[uint32]string, ) error } @@ -128,6 +126,7 @@ func (pgProcessor) Process( p *PostgresCDCSource, tuple *pglogrepl.TupleDataColumn, col *pglogrepl.RelationMessageColumn, + customTypeMapping map[uint32]string, ) error { switch tuple.DataType { case 'n': // null @@ -158,13 +157,14 @@ func (qProcessor) Process( p *PostgresCDCSource, tuple *pglogrepl.TupleDataColumn, col *pglogrepl.RelationMessageColumn, + customTypeMapping map[uint32]string, ) error { switch tuple.DataType { case 'n': // null items.AddColumn(col.Name, qvalue.QValueNull(qvalue.QValueKindInvalid)) case 't': // text // bytea also appears here as a hex - data, err := p.decodeColumnData(tuple.Data, col.DataType, pgtype.TextFormatCode) + data, err := p.decodeColumnData(tuple.Data, col.DataType, pgtype.TextFormatCode, customTypeMapping) if err != nil { p.logger.Error("error decoding text 
column data", slog.Any("error", err), slog.String("columnName", col.Name), slog.Int64("dataType", int64(col.DataType))) @@ -172,7 +172,7 @@ func (qProcessor) Process( } items.AddColumn(col.Name, data) case 'b': // binary - data, err := p.decodeColumnData(tuple.Data, col.DataType, pgtype.BinaryFormatCode) + data, err := p.decodeColumnData(tuple.Data, col.DataType, pgtype.BinaryFormatCode, customTypeMapping) if err != nil { return fmt.Errorf("error decoding binary column data: %w", err) } @@ -189,6 +189,7 @@ func processTuple[Items model.Items]( tuple *pglogrepl.TupleData, rel *pglogrepl.RelationMessage, exclude map[string]struct{}, + customTypeMapping map[uint32]string, ) (Items, map[string]struct{}, error) { // if the tuple is nil, return an empty map if tuple == nil { @@ -208,7 +209,7 @@ func processTuple[Items model.Items]( unchangedToastColumns = make(map[string]struct{}) } unchangedToastColumns[rcol.Name] = struct{}{} - } else if err := processor.Process(items, p, tcol, rcol); err != nil { + } else if err := processor.Process(items, p, tcol, rcol, customTypeMapping); err != nil { var none Items return none, nil, err } @@ -216,7 +217,9 @@ func processTuple[Items model.Items]( return items, unchangedToastColumns, nil } -func (p *PostgresCDCSource) decodeColumnData(data []byte, dataType uint32, formatCode int16) (qvalue.QValue, error) { +func (p *PostgresCDCSource) decodeColumnData(data []byte, dataType uint32, + formatCode int16, customTypeMapping map[uint32]string, +) (qvalue.QValue, error) { var parsedData any var err error if dt, ok := p.typeMap.TypeForOID(dataType); ok { @@ -260,7 +263,7 @@ func (p *PostgresCDCSource) decodeColumnData(data []byte, dataType uint32, forma return retVal, nil } - typeName, ok := p.customTypesMapping[dataType] + typeName, ok := customTypeMapping[dataType] if ok { customQKind := customTypeToQKind(typeName) switch customQKind { @@ -634,17 +637,21 @@ func processMessage[Items model.Items]( if err != nil { return nil, 
fmt.Errorf("error parsing logical message: %w", err) } + customTypeMapping, err := p.fetchCustomTypeMapping(ctx) + if err != nil { + return nil, err + } switch msg := logicalMsg.(type) { case *pglogrepl.BeginMessage: logger.Debug("BeginMessage", slog.Any("FinalLSN", msg.FinalLSN), slog.Any("XID", msg.Xid)) p.commitLock = msg case *pglogrepl.InsertMessage: - return processInsertMessage(p, xld.WALStart, msg, processor) + return processInsertMessage(p, xld.WALStart, msg, processor, customTypeMapping) case *pglogrepl.UpdateMessage: - return processUpdateMessage(p, xld.WALStart, msg, processor) + return processUpdateMessage(p, xld.WALStart, msg, processor, customTypeMapping) case *pglogrepl.DeleteMessage: - return processDeleteMessage(p, xld.WALStart, msg, processor) + return processDeleteMessage(p, xld.WALStart, msg, processor, customTypeMapping) case *pglogrepl.CommitMessage: // for a commit message, update the last checkpoint id for the record batch. logger.Debug("CommitMessage", slog.Any("CommitLSN", msg.CommitLSN), slog.Any("TransactionEndLSN", msg.TransactionEndLSN)) @@ -694,6 +701,7 @@ func processInsertMessage[Items model.Items]( lsn pglogrepl.LSN, msg *pglogrepl.InsertMessage, processor replProcessor[Items], + customTypeMapping map[uint32]string, ) (model.Record[Items], error) { relID := p.getParentRelIDIfPartitioned(msg.RelationID) @@ -710,7 +718,7 @@ func processInsertMessage[Items model.Items]( return nil, fmt.Errorf("unknown relation id: %d", relID) } - items, _, err := processTuple(processor, p, msg.Tuple, rel, p.tableNameMapping[tableName].Exclude) + items, _, err := processTuple(processor, p, msg.Tuple, rel, p.tableNameMapping[tableName].Exclude, customTypeMapping) if err != nil { return nil, fmt.Errorf("error converting tuple to map: %w", err) } @@ -729,6 +737,7 @@ func processUpdateMessage[Items model.Items]( lsn pglogrepl.LSN, msg *pglogrepl.UpdateMessage, processor replProcessor[Items], + customTypeMapping map[uint32]string, ) (model.Record[Items], 
error) { relID := p.getParentRelIDIfPartitioned(msg.RelationID) @@ -745,13 +754,14 @@ func processUpdateMessage[Items model.Items]( return nil, fmt.Errorf("unknown relation id: %d", relID) } - oldItems, _, err := processTuple(processor, p, msg.OldTuple, rel, p.tableNameMapping[tableName].Exclude) + oldItems, _, err := processTuple(processor, p, msg.OldTuple, rel, + p.tableNameMapping[tableName].Exclude, customTypeMapping) if err != nil { return nil, fmt.Errorf("error converting old tuple to map: %w", err) } newItems, unchangedToastColumns, err := processTuple( - processor, p, msg.NewTuple, rel, p.tableNameMapping[tableName].Exclude) + processor, p, msg.NewTuple, rel, p.tableNameMapping[tableName].Exclude, customTypeMapping) if err != nil { return nil, fmt.Errorf("error converting new tuple to map: %w", err) } @@ -785,6 +795,7 @@ func processDeleteMessage[Items model.Items]( lsn pglogrepl.LSN, msg *pglogrepl.DeleteMessage, processor replProcessor[Items], + customTypeMapping map[uint32]string, ) (model.Record[Items], error) { relID := p.getParentRelIDIfPartitioned(msg.RelationID) @@ -801,7 +812,8 @@ func processDeleteMessage[Items model.Items]( return nil, fmt.Errorf("unknown relation id: %d", relID) } - items, _, err := processTuple(processor, p, msg.OldTuple, rel, p.tableNameMapping[tableName].Exclude) + items, _, err := processTuple(processor, p, msg.OldTuple, rel, + p.tableNameMapping[tableName].Exclude, customTypeMapping) if err != nil { return nil, fmt.Errorf("error converting tuple to map: %w", err) } @@ -844,6 +856,10 @@ func processRelationMessage[Items model.Items]( slog.Uint64("relId", uint64(currRel.RelationID))) return nil, nil } + customTypeMapping, err := p.fetchCustomTypeMapping(ctx) + if err != nil { + return nil, err + } // retrieve current TableSchema for table changed, mapping uses dst table name as key, need to translate source name currRelDstInfo, ok := p.tableNameMapping[currRelName] @@ -867,7 +883,7 @@ func processRelationMessage[Items 
model.Items]( case protos.TypeSystem_Q: qKind := p.postgresOIDToQValueKind(column.DataType) if qKind == qvalue.QValueKindInvalid { - typeName, ok := p.customTypesMapping[column.DataType] + typeName, ok := customTypeMapping[column.DataType] if ok { qKind = customTypeToQKind(typeName) } diff --git a/flow/connectors/postgres/client.go b/flow/connectors/postgres/client.go index 1daabbf68..70b0d15d1 100644 --- a/flow/connectors/postgres/client.go +++ b/flow/connectors/postgres/client.go @@ -550,7 +550,14 @@ func (c *PostgresConnector) jobMetadataExists(ctx context.Context, jobName strin } func (c *PostgresConnector) MajorVersion(ctx context.Context) (shared.PGVersion, error) { - return shared.GetMajorVersion(ctx, c.conn) + if c.pgVersion == 0 { + pgVersion, err := shared.GetMajorVersion(ctx, c.conn) + if err != nil { + return 0, err + } + c.pgVersion = pgVersion + } + return c.pgVersion, nil } func (c *PostgresConnector) updateSyncMetadata(ctx context.Context, flowJobName string, lastCP int64, syncBatchID int64, diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index c8a853287..b179e2d07 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -39,12 +39,13 @@ type PostgresConnector struct { conn *pgx.Conn replConn *pgx.Conn replState *ReplState - customTypesMapping map[uint32]string + customTypeMapping map[uint32]string hushWarnOID map[uint32]struct{} relationMessageMapping model.RelationMessageMapping connStr string metadataSchema string replLock sync.Mutex + pgVersion shared.PGVersion } type ReplState struct { @@ -87,33 +88,39 @@ func NewPostgresConnector(ctx context.Context, env map[string]string, pgConfig * return nil, fmt.Errorf("failed to create connection: %w", err) } - customTypeMap, err := shared.GetCustomDataTypes(ctx, conn) - if err != nil { - logger.Error("failed to get custom type map", slog.Any("error", err)) - return nil, fmt.Errorf("failed to get custom type map: %w", err) - } 
- metadataSchema := "_peerdb_internal" if pgConfig.MetadataSchema != nil { metadataSchema = *pgConfig.MetadataSchema } return &PostgresConnector{ - connStr: connectionString, + logger: logger, config: pgConfig, ssh: tunnel, conn: conn, replConn: nil, replState: nil, - replLock: sync.Mutex{}, - customTypesMapping: customTypeMap, - metadataSchema: metadataSchema, + customTypeMapping: nil, hushWarnOID: make(map[uint32]struct{}), - logger: logger, relationMessageMapping: make(model.RelationMessageMapping), + connStr: connectionString, + metadataSchema: metadataSchema, + replLock: sync.Mutex{}, + pgVersion: 0, }, nil } +func (c *PostgresConnector) fetchCustomTypeMapping(ctx context.Context) (map[uint32]string, error) { + if c.customTypeMapping == nil { + customTypeMapping, err := shared.GetCustomDataTypes(ctx, c.conn) + if err != nil { + return nil, err + } + c.customTypeMapping = customTypeMapping + } + return c.customTypeMapping, nil +} + func (c *PostgresConnector) CreateReplConn(ctx context.Context) (*pgx.Conn, error) { // create a separate connection pool for non-replication queries as replication connections cannot // be used for extended query protocol, i.e. 
prepared statements @@ -129,6 +136,7 @@ func (c *PostgresConnector) CreateReplConn(ctx context.Context) (*pgx.Conn, erro replConfig.Config.RuntimeParams["replication"] = "database" replConfig.Config.RuntimeParams["bytea_output"] = "hex" replConfig.Config.RuntimeParams["intervalstyle"] = "postgres" + replConfig.DefaultQueryExecMode = pgx.QueryExecModeSimpleProtocol conn, err := c.ssh.NewPostgresConnFromConfig(ctx, replConfig) if err != nil { @@ -168,6 +176,7 @@ func (c *PostgresConnector) MaybeStartReplication( slotName string, publicationName string, lastOffset int64, + pgVersion shared.PGVersion, ) error { if c.replState != nil && (c.replState.Offset != lastOffset || c.replState.Slot != slotName || @@ -180,7 +189,7 @@ func (c *PostgresConnector) MaybeStartReplication( } if c.replState == nil { - replicationOpts, err := c.replicationOptions(ctx, publicationName) + replicationOpts, err := c.replicationOptions(publicationName, pgVersion) if err != nil { return fmt.Errorf("error getting replication options: %w", err) } @@ -210,7 +219,8 @@ func (c *PostgresConnector) MaybeStartReplication( return nil } -func (c *PostgresConnector) replicationOptions(ctx context.Context, publicationName string) (pglogrepl.StartReplicationOptions, error) { +func (c *PostgresConnector) replicationOptions(publicationName string, pgVersion shared.PGVersion, +) (pglogrepl.StartReplicationOptions, error) { pluginArguments := append(make([]string, 0, 3), "proto_version '1'") if publicationName != "" { @@ -220,10 +230,7 @@ func (c *PostgresConnector) replicationOptions(ctx context.Context, publicationN return pglogrepl.StartReplicationOptions{}, errors.New("publication name is not set") } - pgversion, err := c.MajorVersion(ctx) - if err != nil { - return pglogrepl.StartReplicationOptions{}, err - } else if pgversion >= shared.POSTGRES_14 { + if pgVersion >= shared.POSTGRES_14 { pluginArguments = append(pluginArguments, "messages 'true'") } @@ -380,12 +387,21 @@ func pullCore[Items model.Items]( 
c.logger.Info("PullRecords: performed checks for slot and publication") - childToParentRelIDMap, err := GetChildToParentRelIDMap(ctx, c.conn) + // cached, since this connector is reused + pgVersion, err := c.MajorVersion(ctx) if err != nil { - return fmt.Errorf("error getting child to parent relid map: %w", err) + return err + } + var childToParentRelIDMap map[uint32]uint32 + // only initialize the map if needed, escape hatch because custom publications may not have the right setting + if req.OverridePublicationName != "" || pgVersion < shared.POSTGRES_13 { + childToParentRelIDMap, err = GetChildToParentRelIDMap(ctx, c.conn) + if err != nil { + return fmt.Errorf("error getting child to parent relid map: %w", err) + } } - if err := c.MaybeStartReplication(ctx, slotName, publicationName, req.LastOffset); err != nil { + if err := c.MaybeStartReplication(ctx, slotName, publicationName, req.LastOffset, pgVersion); err != nil { // in case of Aurora error ERROR: replication slots cannot be used on RO (Read Only) node (SQLSTATE 55000) if shared.IsSQLStateError(err, pgerrcode.ObjectNotInPrerequisiteState) && strings.Contains(err.Error(), "replication slots cannot be used on RO (Read Only) node") { @@ -396,15 +412,15 @@ func pullCore[Items model.Items]( } cdc := c.NewPostgresCDCSource(&PostgresCDCConfig{ + CatalogPool: catalogPool, SrcTableIDNameMapping: req.SrcTableIDNameMapping, - Slot: slotName, - Publication: publicationName, TableNameMapping: req.TableNameMapping, TableNameSchemaMapping: req.TableNameSchemaMapping, ChildToParentRelIDMap: childToParentRelIDMap, - CatalogPool: catalogPool, - FlowJobName: req.FlowJobName, RelationMessageMapping: c.relationMessageMapping, + FlowJobName: req.FlowJobName, + Slot: slotName, + Publication: publicationName, }) if err := PullCdcRecords(ctx, cdc, req, processor, &c.replLock); err != nil { @@ -766,6 +782,10 @@ func (c *PostgresConnector) getTableSchemaForTable( if err != nil { return nil, err } + customTypeMapping, err := 
c.fetchCustomTypeMapping(ctx) + if err != nil { + return nil, err + } relID, err := c.getRelIDForTable(ctx, schemaTable) if err != nil { @@ -811,7 +831,7 @@ func (c *PostgresConnector) getTableSchemaForTable( case protos.TypeSystem_PG: colType = c.postgresOIDToName(fieldDescription.DataTypeOID) if colType == "" { - typeName, ok := c.customTypesMapping[fieldDescription.DataTypeOID] + typeName, ok := customTypeMapping[fieldDescription.DataTypeOID] if !ok { return nil, fmt.Errorf("error getting type name for %d", fieldDescription.DataTypeOID) } @@ -820,7 +840,7 @@ func (c *PostgresConnector) getTableSchemaForTable( case protos.TypeSystem_Q: qColType := c.postgresOIDToQValueKind(fieldDescription.DataTypeOID) if qColType == qvalue.QValueKindInvalid { - typeName, ok := c.customTypesMapping[fieldDescription.DataTypeOID] + typeName, ok := customTypeMapping[fieldDescription.DataTypeOID] if ok { qColType = customTypeToQKind(typeName) } else { diff --git a/flow/connectors/postgres/qrep.go b/flow/connectors/postgres/qrep.go index b393a4691..1cd2cd595 100644 --- a/flow/connectors/postgres/qrep.go +++ b/flow/connectors/postgres/qrep.go @@ -328,10 +328,15 @@ func corePullQRepRecords( sink QRepPullSink, ) (int, error) { partitionIdLog := slog.String(string(shared.PartitionIDKey), partition.PartitionId) + if partition.FullTablePartition { c.logger.Info("pulling full table partition", partitionIdLog) - executor := c.NewQRepQueryExecutorSnapshot(config.SnapshotName, config.FlowJobName, partition.PartitionId) - _, err := executor.ExecuteQueryIntoSink(ctx, sink, config.Query) + executor, err := c.NewQRepQueryExecutorSnapshot(ctx, config.SnapshotName, + config.FlowJobName, partition.PartitionId) + if err != nil { + return 0, fmt.Errorf("failed to create query executor: %w", err) + } + _, err = executor.ExecuteQueryIntoSink(ctx, sink, config.Query) return 0, err } c.logger.Info("Obtained ranges for partition for PullQRepStream", partitionIdLog) @@ -369,7 +374,11 @@ func 
corePullQRepRecords( return 0, err } - executor := c.NewQRepQueryExecutorSnapshot(config.SnapshotName, config.FlowJobName, partition.PartitionId) + executor, err := c.NewQRepQueryExecutorSnapshot(ctx, config.SnapshotName, config.FlowJobName, + partition.PartitionId) + if err != nil { + return 0, fmt.Errorf("failed to create query executor: %w", err) + } numRecords, err := executor.ExecuteQueryIntoSink(ctx, sink, query, rangeStart, rangeEnd) if err != nil { @@ -669,7 +678,11 @@ func pullXminRecordStream( queryArgs = []interface{}{strconv.FormatInt(partition.Range.Range.(*protos.PartitionRange_IntRange).IntRange.Start&0xffffffff, 10)} } - executor := c.NewQRepQueryExecutorSnapshot(config.SnapshotName, config.FlowJobName, partition.PartitionId) + executor, err := c.NewQRepQueryExecutorSnapshot(ctx, config.SnapshotName, + config.FlowJobName, partition.PartitionId) + if err != nil { + return 0, 0, fmt.Errorf("failed to create query executor: %w", err) + } numRecords, currentSnapshotXmin, err := executor.ExecuteQueryIntoSinkGettingCurrentSnapshotXmin( ctx, diff --git a/flow/connectors/postgres/qrep_bench_test.go b/flow/connectors/postgres/qrep_bench_test.go index d880343f4..777faf6e6 100644 --- a/flow/connectors/postgres/qrep_bench_test.go +++ b/flow/connectors/postgres/qrep_bench_test.go @@ -4,6 +4,8 @@ import ( "context" "testing" + "github.com/stretchr/testify/require" + "github.com/PeerDB-io/peer-flow/peerdbenv" ) @@ -12,13 +14,12 @@ func BenchmarkQRepQueryExecutor(b *testing.B) { ctx := context.Background() connector, err := NewPostgresConnector(ctx, nil, peerdbenv.GetCatalogPostgresConfigFromEnv(ctx)) - if err != nil { - b.Fatalf("failed to create connection: %v", err) - } + require.NoError(b, err, "error while creating connector") defer connector.Close() // Create a new QRepQueryExecutor instance - qe := connector.NewQRepQueryExecutor("test flow", "test part") + qe, err := connector.NewQRepQueryExecutor(ctx, "test flow", "test part") + require.NoError(b, err, 
"error while creating QRepQueryExecutor") // Run the benchmark b.ResetTimer() @@ -28,8 +29,6 @@ func BenchmarkQRepQueryExecutor(b *testing.B) { // Execute the query and process the rows _, err := qe.ExecuteAndProcessQuery(ctx, query) - if err != nil { - b.Fatalf("failed to execute query: %v", err) - } + require.NoError(b, err, "error while executing query") } } diff --git a/flow/connectors/postgres/qrep_query_executor.go b/flow/connectors/postgres/qrep_query_executor.go index bdfa7038b..f3c915c13 100644 --- a/flow/connectors/postgres/qrep_query_executor.go +++ b/flow/connectors/postgres/qrep_query_executor.go @@ -18,24 +18,35 @@ import ( type QRepQueryExecutor struct { *PostgresConnector - logger log.Logger - snapshot string - flowJobName string - partitionID string + logger log.Logger + customTypeMapping map[uint32]string + snapshot string + flowJobName string + partitionID string } -func (c *PostgresConnector) NewQRepQueryExecutor(flowJobName string, partitionID string) *QRepQueryExecutor { - return c.NewQRepQueryExecutorSnapshot("", flowJobName, partitionID) +func (c *PostgresConnector) NewQRepQueryExecutor(ctx context.Context, + flowJobName string, partitionID string, +) (*QRepQueryExecutor, error) { + return c.NewQRepQueryExecutorSnapshot(ctx, "", flowJobName, partitionID) } -func (c *PostgresConnector) NewQRepQueryExecutorSnapshot(snapshot string, flowJobName string, partitionID string) *QRepQueryExecutor { +func (c *PostgresConnector) NewQRepQueryExecutorSnapshot(ctx context.Context, + snapshot string, flowJobName string, partitionID string, +) (*QRepQueryExecutor, error) { + customTypeMapping, err := c.fetchCustomTypeMapping(ctx) + if err != nil { + c.logger.Error("[pg_query_executor] failed to fetch custom type mapping", slog.Any("error", err)) + return nil, fmt.Errorf("failed to fetch custom type mapping: %w", err) + } return &QRepQueryExecutor{ PostgresConnector: c, snapshot: snapshot, flowJobName: flowJobName, partitionID: partitionID, logger: 
log.With(c.logger, slog.String(string(shared.PartitionIDKey), partitionID)), - } + customTypeMapping: customTypeMapping, + }, nil } func (qe *QRepQueryExecutor) ExecuteQuery(ctx context.Context, query string, args ...interface{}) (pgx.Rows, error) { @@ -67,7 +78,7 @@ func (qe *QRepQueryExecutor) fieldDescriptionsToSchema(fds []pgconn.FieldDescrip cname := fd.Name ctype := qe.postgresOIDToQValueKind(fd.DataTypeOID) if ctype == qvalue.QValueKindInvalid { - typeName, ok := qe.customTypesMapping[fd.DataTypeOID] + typeName, ok := qe.customTypeMapping[fd.DataTypeOID] if ok { ctype = customTypeToQKind(typeName) } else { @@ -98,6 +109,7 @@ func (qe *QRepQueryExecutor) fieldDescriptionsToSchema(fds []pgconn.FieldDescrip } func (qe *QRepQueryExecutor) ProcessRows( + ctx context.Context, rows pgx.Rows, fieldDescriptions []pgconn.FieldDescription, ) (*model.QRecordBatch, error) { @@ -119,8 +131,9 @@ func (qe *QRepQueryExecutor) ProcessRows( return nil, fmt.Errorf("row iteration failed: %w", err) } + schema := qe.fieldDescriptionsToSchema(fieldDescriptions) batch := &model.QRecordBatch{ - Schema: qe.fieldDescriptionsToSchema(fieldDescriptions), + Schema: schema, Records: records, } @@ -186,7 +199,8 @@ func (qe *QRepQueryExecutor) processFetchedRows( fieldDescriptions := rows.FieldDescriptions() if !stream.IsSchemaSet() { - stream.SetSchema(qe.fieldDescriptionsToSchema(fieldDescriptions)) + schema := qe.fieldDescriptionsToSchema(fieldDescriptions) + stream.SetSchema(schema) } numRows, err := qe.processRowsStream(ctx, cursorName, stream, rows, fieldDescriptions) @@ -324,7 +338,7 @@ func (qe *QRepQueryExecutor) mapRowToQRecord( for i, fd := range fds { // Check if it's a custom type first - typeName, ok := qe.customTypesMapping[fd.DataTypeOID] + typeName, ok := qe.customTypeMapping[fd.DataTypeOID] if !ok { tmp, err := qe.parseFieldFromPostgresOID(fd.DataTypeOID, values[i]) if err != nil { diff --git a/flow/connectors/postgres/qrep_query_executor_test.go 
b/flow/connectors/postgres/qrep_query_executor_test.go index d7932ba00..f8f686c42 100644 --- a/flow/connectors/postgres/qrep_query_executor_test.go +++ b/flow/connectors/postgres/qrep_query_executor_test.go @@ -1,7 +1,6 @@ package connpostgres import ( - "bytes" "context" "fmt" "testing" @@ -10,6 +9,7 @@ import ( "github.com/google/uuid" "github.com/jackc/pgx/v5" "github.com/shopspring/decimal" + "github.com/stretchr/testify/require" "github.com/PeerDB-io/peer-flow/peerdbenv" ) @@ -19,18 +19,14 @@ func setupDB(t *testing.T) (*PostgresConnector, string) { connector, err := NewPostgresConnector(context.Background(), nil, peerdbenv.GetCatalogPostgresConfigFromEnv(context.Background())) - if err != nil { - t.Fatalf("unable to create connector: %v", err) - } + require.NoError(t, err, "error while creating connector") // Create unique schema name using current time schemaName := fmt.Sprintf("schema_%d", time.Now().Unix()) // Create the schema _, err = connector.conn.Exec(context.Background(), fmt.Sprintf("CREATE SCHEMA %s;", schemaName)) - if err != nil { - t.Fatalf("unable to create schema: %v", err) - } + require.NoError(t, err, "error while creating schema") return connector, schemaName } @@ -39,9 +35,7 @@ func teardownDB(t *testing.T, conn *pgx.Conn, schemaName string) { t.Helper() _, err := conn.Exec(context.Background(), fmt.Sprintf("DROP SCHEMA %s CASCADE;", schemaName)) - if err != nil { - t.Fatalf("error while dropping schema: %v", err) - } + require.NoError(t, err, "error while dropping schema") } func TestExecuteAndProcessQuery(t *testing.T) { @@ -53,31 +47,20 @@ func TestExecuteAndProcessQuery(t *testing.T) { query := fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s.test(id SERIAL PRIMARY KEY, data TEXT);", schemaName) _, err := conn.Exec(ctx, query) - if err != nil { - t.Fatalf("error while creating test table: %v", err) - } + require.NoError(t, err, "error while creating table") query = fmt.Sprintf("INSERT INTO %s.test(data) VALUES('testdata');", schemaName) _, 
err = conn.Exec(ctx, query) - if err != nil { - t.Fatalf("error while inserting into test table: %v", err) - } + require.NoError(t, err, "error while inserting data") - qe := connector.NewQRepQueryExecutor("test flow", "test part") + qe, err := connector.NewQRepQueryExecutor(ctx, "test flow", "test part") + require.NoError(t, err, "error while creating QRepQueryExecutor") query = fmt.Sprintf("SELECT * FROM %s.test;", schemaName) batch, err := qe.ExecuteAndProcessQuery(context.Background(), query) - if err != nil { - t.Fatalf("error while executing and processing query: %v", err) - } - - if len(batch.Records) != 1 { - t.Fatalf("expected 1 record, got %v", len(batch.Records)) - } - - if batch.Records[0][1].Value() != "testdata" { - t.Fatalf("expected 'testdata', got %v", batch.Records[0][0].Value()) - } + require.NoError(t, err, "error while executing query") + require.Len(t, batch.Records, 1, "expected 1 record") + require.Equal(t, "testdata", batch.Records[0][1].Value(), "expected 'testdata'") } func TestAllDataTypes(t *testing.T) { @@ -109,9 +92,7 @@ func TestAllDataTypes(t *testing.T) { );`, schemaName) _, err := conn.Exec(ctx, query) - if err != nil { - t.Fatalf("error while creating test table: %v", err) - } + require.NoError(t, err, "error while creating table") // Insert a row into the table query = fmt.Sprintf(` @@ -137,7 +118,7 @@ func TestAllDataTypes(t *testing.T) { )`, schemaName) - savedTime := time.Now() + savedTime := time.Now().UTC() savedUUID := uuid.New() _, err = conn.Exec( @@ -160,48 +141,34 @@ func TestAllDataTypes(t *testing.T) { savedTime, // col_tz4 savedTime, // col_date ) - if err != nil { - t.Fatalf("error while inserting into test table: %v", err) - } + require.NoError(t, err, "error while inserting into test table") - qe := connector.NewQRepQueryExecutor("test flow", "test part") + qe, err := connector.NewQRepQueryExecutor(ctx, "test flow", "test part") + require.NoError(t, err, "error while creating QRepQueryExecutor") // Select the row 
back out of the table query = fmt.Sprintf("SELECT * FROM %s.test;", schemaName) rows, err := qe.ExecuteQuery(context.Background(), query) - if err != nil { - t.Fatalf("error while executing query: %v", err) - } + require.NoError(t, err, "error while executing query") defer rows.Close() // Use rows.FieldDescriptions() to get field descriptions fieldDescriptions := rows.FieldDescriptions() - batch, err := qe.ProcessRows(rows, fieldDescriptions) - if err != nil { - t.Fatalf("failed to process rows: %v", err) - } - - if len(batch.Records) != 1 { - t.Fatalf("expected 1 record, got %v", len(batch.Records)) - } + batch, err := qe.ProcessRows(ctx, rows, fieldDescriptions) + require.NoError(t, err, "error while processing rows") + require.Len(t, batch.Records, 1, "expected 1 record") // Retrieve the results. record := batch.Records[0] expectedBool := true - if record[0].Value().(bool) != expectedBool { - t.Fatalf("expected %v, got %v", expectedBool, record[0].Value()) - } + require.Equal(t, expectedBool, record[0].Value(), "expected true") expectedInt4 := int32(2) - if record[1].Value().(int32) != expectedInt4 { - t.Fatalf("expected %v, got %v", expectedInt4, record[1].Value()) - } + require.Equal(t, expectedInt4, record[1].Value(), "expected 2") expectedInt8 := int64(3) - if record[2].Value().(int64) != expectedInt8 { - t.Fatalf("expected %v, got %v", expectedInt8, record[2].Value()) - } + require.Equal(t, expectedInt8, record[2].Value(), "expected 3") expectedFloat4 := float32(1.1) if record[3].Value().(float32) != expectedFloat4 { @@ -214,28 +181,21 @@ func TestAllDataTypes(t *testing.T) { } expectedText := "text" - if record[5].Value().(string) != expectedText { - t.Fatalf("expected %v, got %v", expectedText, record[5].Value()) - } + require.Equal(t, expectedText, record[5].Value(), "expected 'text'") expectedBytea := []byte("bytea") - if !bytes.Equal(record[6].Value().([]byte), expectedBytea) { - t.Fatalf("expected %v, got %v", expectedBytea, record[6].Value()) - } + 
require.Equal(t, expectedBytea, record[6].Value(), "expected 'bytea'") expectedJSON := `{"key":"value"}` - if record[7].Value().(string) != expectedJSON { - t.Fatalf("expected %v, got %v", expectedJSON, record[7].Value()) - } + require.Equal(t, expectedJSON, record[7].Value(), "expected '{\"key\":\"value\"}'") actualUUID := record[8].Value().([16]uint8) - if !bytes.Equal(actualUUID[:], savedUUID[:]) { - t.Fatalf("expected %v, got %v", savedUUID, actualUUID) - } + require.Equal(t, savedUUID[:], actualUUID[:], "expected savedUUID: %v", savedUUID) + actualTime := record[9].Value().(time.Time) + require.Equal(t, savedTime.Truncate(time.Second), + actualTime.Truncate(time.Second), "expected savedTime: %v", savedTime) expectedNumeric := "123.456" actualNumeric := record[10].Value().(decimal.Decimal).String() - if actualNumeric != expectedNumeric { - t.Fatalf("expected %v, got %v", expectedNumeric, actualNumeric) - } + require.Equal(t, expectedNumeric, actualNumeric, "expected 123.456") } diff --git a/flow/e2e/postgres/postgres.go b/flow/e2e/postgres/postgres.go index 37a0ace06..ea43648f7 100644 --- a/flow/e2e/postgres/postgres.go +++ b/flow/e2e/postgres/postgres.go @@ -54,7 +54,10 @@ func (s PeerFlowE2ETestSuitePG) DestinationTable(table string) string { func (s PeerFlowE2ETestSuitePG) GetRows(table string, cols string) (*model.QRecordBatch, error) { s.t.Helper() - pgQueryExecutor := s.conn.NewQRepQueryExecutor("testflow", "testpart") + pgQueryExecutor, err := s.conn.NewQRepQueryExecutor(context.Background(), "testflow", "testpart") + if err != nil { + return nil, err + } return pgQueryExecutor.ExecuteAndProcessQuery( context.Background(), diff --git a/flow/e2e/test_utils.go b/flow/e2e/test_utils.go index ce134f819..c3e577a4d 100644 --- a/flow/e2e/test_utils.go +++ b/flow/e2e/test_utils.go @@ -89,7 +89,10 @@ func EnvTrue(t *testing.T, env WorkflowRun, val bool) { } func GetPgRows(conn *connpostgres.PostgresConnector, suffix string, table string, cols string) 
(*model.QRecordBatch, error) { - pgQueryExecutor := conn.NewQRepQueryExecutor("testflow", "testpart") + pgQueryExecutor, err := conn.NewQRepQueryExecutor(context.Background(), "testflow", "testpart") + if err != nil { + return nil, err + } return pgQueryExecutor.ExecuteAndProcessQuery( context.Background(), diff --git a/flow/shared/postgres.go b/flow/shared/postgres.go index be3cf7d07..121fb73bf 100644 --- a/flow/shared/postgres.go +++ b/flow/shared/postgres.go @@ -58,17 +58,17 @@ func GetCustomDataTypes(ctx context.Context, conn *pgx.Conn) (map[uint32]string, AND n.nspname NOT IN ('pg_catalog', 'information_schema'); `) if err != nil { - return nil, fmt.Errorf("failed to get custom types: %w", err) + return nil, fmt.Errorf("failed to get customTypeMapping: %w", err) } customTypeMap := map[uint32]string{} - for rows.Next() { - var typeID pgtype.Uint32 - var typeName pgtype.Text - if err := rows.Scan(&typeID, &typeName); err != nil { - return nil, fmt.Errorf("failed to scan row: %w", err) - } + var typeID pgtype.Uint32 + var typeName pgtype.Text + if _, err := pgx.ForEachRow(rows, []any{&typeID, &typeName}, func() error { customTypeMap[typeID.Uint32] = typeName.String + return nil + }); err != nil { + return nil, fmt.Errorf("failed to scan into customTypeMapping: %w", err) } return customTypeMap, nil } From 08f4ffd6f7da7bce6f5058a9b3e838f61a5fe6ef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Thu, 14 Nov 2024 13:13:32 +0000 Subject: [PATCH 05/61] add dynamic config to adjust s3 part size (#2251) fixes #2184 where a user with a large enough batch hit ``` failed to sync records: failed to write records to S3: failed to upload file to path s3://peerdb-cache/...... 4890f21240e1.avro.zst: upload multipart failed, upload id: OTA0ZTE5NTMtMTdiMi00MWE5LWJhY....., cause: exceeded total allowed configured MaxUploadParts (10000). 
Adjust PartSize to fit in this limit ``` s3 sdk defaults to 5MiB part sizes, this user was able to fix their upload by changing that to 500MiB Co-authored-by: joltcan --- flow/connectors/clickhouse/cdc.go | 2 +- flow/connectors/clickhouse/qrep_avro_sync.go | 8 ++- flow/connectors/s3/qrep.go | 5 +- flow/connectors/snowflake/qrep_avro_sync.go | 67 ++++++++++---------- flow/connectors/snowflake/snowflake.go | 5 +- flow/connectors/utils/avro/avro_writer.go | 24 +++++-- flow/peerdbenv/dynamicconf.go | 12 ++++ 7 files changed, 76 insertions(+), 47 deletions(-) diff --git a/flow/connectors/clickhouse/cdc.go b/flow/connectors/clickhouse/cdc.go index 8fae9d6f2..d3eb883b4 100644 --- a/flow/connectors/clickhouse/cdc.go +++ b/flow/connectors/clickhouse/cdc.go @@ -88,7 +88,7 @@ func (c *ClickHouseConnector) syncRecordsViaAvro( } avroSyncer := c.avroSyncMethod(req.FlowJobName) - numRecords, err := avroSyncer.SyncRecords(ctx, stream, req.FlowJobName, syncBatchID) + numRecords, err := avroSyncer.SyncRecords(ctx, req.Env, stream, req.FlowJobName, syncBatchID) if err != nil { return nil, err } diff --git a/flow/connectors/clickhouse/qrep_avro_sync.go b/flow/connectors/clickhouse/qrep_avro_sync.go index f8277e3aa..fa2cfe103 100644 --- a/flow/connectors/clickhouse/qrep_avro_sync.go +++ b/flow/connectors/clickhouse/qrep_avro_sync.go @@ -60,6 +60,7 @@ func (s *ClickHouseAvroSyncMethod) CopyStageToDestination(ctx context.Context, a func (s *ClickHouseAvroSyncMethod) SyncRecords( ctx context.Context, + env map[string]string, stream *model.QRecordStream, flowJobName string, syncBatchID int64, @@ -76,7 +77,7 @@ func (s *ClickHouseAvroSyncMethod) SyncRecords( } batchIdentifierForFile := fmt.Sprintf("%s_%d", shared.RandomString(16), syncBatchID) - avroFile, err := s.writeToAvroFile(ctx, stream, avroSchema, batchIdentifierForFile, flowJobName) + avroFile, err := s.writeToAvroFile(ctx, env, stream, avroSchema, batchIdentifierForFile, flowJobName) if err != nil { return 0, err } @@ -110,7 +111,7 
@@ func (s *ClickHouseAvroSyncMethod) SyncQRepRecords( return 0, err } - avroFile, err := s.writeToAvroFile(ctx, stream, avroSchema, partition.PartitionId, config.FlowJobName) + avroFile, err := s.writeToAvroFile(ctx, config.Env, stream, avroSchema, partition.PartitionId, config.FlowJobName) if err != nil { return 0, err } @@ -176,6 +177,7 @@ func (s *ClickHouseAvroSyncMethod) getAvroSchema( func (s *ClickHouseAvroSyncMethod) writeToAvroFile( ctx context.Context, + env map[string]string, stream *model.QRecordStream, avroSchema *model.QRecordAvroSchemaDefinition, identifierForFile string, @@ -190,7 +192,7 @@ func (s *ClickHouseAvroSyncMethod) writeToAvroFile( s3AvroFileKey := fmt.Sprintf("%s/%s/%s.avro.zst", s3o.Prefix, flowJobName, identifierForFile) s3AvroFileKey = strings.Trim(s3AvroFileKey, "/") - avroFile, err := ocfWriter.WriteRecordsToS3(ctx, s3o.Bucket, s3AvroFileKey, s.credsProvider.Provider) + avroFile, err := ocfWriter.WriteRecordsToS3(ctx, env, s3o.Bucket, s3AvroFileKey, s.credsProvider.Provider) if err != nil { return nil, fmt.Errorf("failed to write records to S3: %w", err) } diff --git a/flow/connectors/s3/qrep.go b/flow/connectors/s3/qrep.go index 14c7b31ef..9fbb485ab 100644 --- a/flow/connectors/s3/qrep.go +++ b/flow/connectors/s3/qrep.go @@ -25,7 +25,7 @@ func (c *S3Connector) SyncQRepRecords( return 0, err } - numRecords, err := c.writeToAvroFile(ctx, stream, avroSchema, partition.PartitionId, config.FlowJobName) + numRecords, err := c.writeToAvroFile(ctx, config.Env, stream, avroSchema, partition.PartitionId, config.FlowJobName) if err != nil { return 0, err } @@ -47,6 +47,7 @@ func getAvroSchema( func (c *S3Connector) writeToAvroFile( ctx context.Context, + env map[string]string, stream *model.QRecordStream, avroSchema *model.QRecordAvroSchemaDefinition, partitionID string, @@ -60,7 +61,7 @@ func (c *S3Connector) writeToAvroFile( s3AvroFileKey := fmt.Sprintf("%s/%s/%s.avro", s3o.Prefix, jobName, partitionID) writer := 
avro.NewPeerDBOCFWriter(stream, avroSchema, avro.CompressNone, protos.DBType_SNOWFLAKE) - avroFile, err := writer.WriteRecordsToS3(ctx, s3o.Bucket, s3AvroFileKey, c.credentialsProvider) + avroFile, err := writer.WriteRecordsToS3(ctx, env, s3o.Bucket, s3AvroFileKey, c.credentialsProvider) if err != nil { return 0, fmt.Errorf("failed to write records to S3: %w", err) } diff --git a/flow/connectors/snowflake/qrep_avro_sync.go b/flow/connectors/snowflake/qrep_avro_sync.go index 2e37705c1..0fea54b02 100644 --- a/flow/connectors/snowflake/qrep_avro_sync.go +++ b/flow/connectors/snowflake/qrep_avro_sync.go @@ -20,8 +20,8 @@ import ( ) type SnowflakeAvroSyncHandler struct { - config *protos.QRepConfig - connector *SnowflakeConnector + *SnowflakeConnector + config *protos.QRepConfig } func NewSnowflakeAvroSyncHandler( @@ -29,13 +29,14 @@ func NewSnowflakeAvroSyncHandler( connector *SnowflakeConnector, ) *SnowflakeAvroSyncHandler { return &SnowflakeAvroSyncHandler{ - config: config, - connector: connector, + SnowflakeConnector: connector, + config: config, } } func (s *SnowflakeAvroSyncHandler) SyncRecords( ctx context.Context, + env map[string]string, dstTableSchema []*sql.ColumnType, stream *model.QRecordStream, flowJobName string, @@ -45,7 +46,7 @@ func (s *SnowflakeAvroSyncHandler) SyncRecords( schema := stream.Schema() - s.connector.logger.Info("sync function called and schema acquired", tableLog) + s.logger.Info("sync function called and schema acquired", tableLog) avroSchema, err := s.getAvroSchema(dstTableName, schema) if err != nil { @@ -53,32 +54,31 @@ func (s *SnowflakeAvroSyncHandler) SyncRecords( } partitionID := shared.RandomString(16) - avroFile, err := s.writeToAvroFile(ctx, stream, avroSchema, partitionID, flowJobName) + avroFile, err := s.writeToAvroFile(ctx, env, stream, avroSchema, partitionID, flowJobName) if err != nil { return 0, err } defer avroFile.Cleanup() - s.connector.logger.Info(fmt.Sprintf("written %d records to Avro file", 
avroFile.NumRecords), tableLog) + s.logger.Info(fmt.Sprintf("written %d records to Avro file", avroFile.NumRecords), tableLog) - stage := s.connector.getStageNameForJob(s.config.FlowJobName) - err = s.connector.createStage(ctx, stage, s.config) - if err != nil { + stage := s.getStageNameForJob(s.config.FlowJobName) + if err := s.createStage(ctx, stage, s.config); err != nil { return 0, err } - s.connector.logger.Info("Created stage " + stage) + s.logger.Info("Created stage " + stage) err = s.putFileToStage(ctx, avroFile, stage) if err != nil { return 0, err } - s.connector.logger.Info("pushed avro file to stage", tableLog) + s.logger.Info("pushed avro file to stage", tableLog) - writeHandler := NewSnowflakeAvroConsolidateHandler(s.connector, s.config, s.config.DestinationTableIdentifier, stage) + writeHandler := NewSnowflakeAvroConsolidateHandler(s.SnowflakeConnector, s.config, s.config.DestinationTableIdentifier, stage) err = writeHandler.CopyStageToDestination(ctx) if err != nil { return 0, err } - s.connector.logger.Info(fmt.Sprintf("copying records into %s from stage %s", + s.logger.Info(fmt.Sprintf("copying records into %s from stage %s", s.config.DestinationTableIdentifier, stage)) return avroFile.NumRecords, nil @@ -96,7 +96,7 @@ func (s *SnowflakeAvroSyncHandler) SyncQRepRecords( dstTableName := config.DestinationTableIdentifier schema := stream.Schema() - s.connector.logger.Info("sync function called and schema acquired", partitionLog) + s.logger.Info("sync function called and schema acquired", partitionLog) err := s.addMissingColumns(ctx, schema, dstTableSchema, dstTableName, partition) if err != nil { @@ -108,22 +108,20 @@ func (s *SnowflakeAvroSyncHandler) SyncQRepRecords( return 0, err } - avroFile, err := s.writeToAvroFile(ctx, stream, avroSchema, partition.PartitionId, config.FlowJobName) + avroFile, err := s.writeToAvroFile(ctx, config.Env, stream, avroSchema, partition.PartitionId, config.FlowJobName) if err != nil { return 0, err } defer 
avroFile.Cleanup() - stage := s.connector.getStageNameForJob(config.FlowJobName) + stage := s.getStageNameForJob(config.FlowJobName) - err = s.putFileToStage(ctx, avroFile, stage) - if err != nil { + if err := s.putFileToStage(ctx, avroFile, stage); err != nil { return 0, err } - s.connector.logger.Info("Put file to stage in Avro sync for snowflake", partitionLog) + s.logger.Info("Put file to stage in Avro sync for snowflake", partitionLog) - err = s.connector.FinishQRepPartition(ctx, partition, config.FlowJobName, startTime) - if err != nil { + if err := s.FinishQRepPartition(ctx, partition, config.FlowJobName, startTime); err != nil { return 0, err } @@ -152,14 +150,14 @@ func (s *SnowflakeAvroSyncHandler) addMissingColumns( } if !hasColumn { - s.connector.logger.Info(fmt.Sprintf("adding column %s to destination table %s", + s.logger.Info(fmt.Sprintf("adding column %s to destination table %s", col.Name, dstTableName), partitionLog) colsToTypes[col.Name] = col.Type } } if len(colsToTypes) > 0 { - tx, err := s.connector.database.Begin() + tx, err := s.database.Begin() if err != nil { return fmt.Errorf("failed to begin transaction: %w", err) } @@ -173,7 +171,7 @@ func (s *SnowflakeAvroSyncHandler) addMissingColumns( alterTableCmd := fmt.Sprintf("ALTER TABLE %s ", dstTableName) alterTableCmd += fmt.Sprintf("ADD COLUMN IF NOT EXISTS \"%s\" %s;", upperCasedColName, sfColType) - s.connector.logger.Info(fmt.Sprintf("altering destination table %s with command `%s`", + s.logger.Info(fmt.Sprintf("altering destination table %s with command `%s`", dstTableName, alterTableCmd), partitionLog) if _, err := tx.ExecContext(ctx, alterTableCmd); err != nil { @@ -185,10 +183,10 @@ func (s *SnowflakeAvroSyncHandler) addMissingColumns( return fmt.Errorf("failed to commit transaction: %w", err) } - s.connector.logger.Info("successfully added missing columns to destination table "+ + s.logger.Info("successfully added missing columns to destination table "+ dstTableName, partitionLog) } 
else { - s.connector.logger.Info("no missing columns found in destination table "+dstTableName, partitionLog) + s.logger.Info("no missing columns found in destination table "+dstTableName, partitionLog) } return nil @@ -203,12 +201,13 @@ func (s *SnowflakeAvroSyncHandler) getAvroSchema( return nil, fmt.Errorf("failed to define Avro schema: %w", err) } - s.connector.logger.Info(fmt.Sprintf("Avro schema: %v\n", avroSchema)) + s.logger.Info(fmt.Sprintf("Avro schema: %v\n", avroSchema)) return avroSchema, nil } func (s *SnowflakeAvroSyncHandler) writeToAvroFile( ctx context.Context, + env map[string]string, stream *model.QRecordStream, avroSchema *model.QRecordAvroSchemaDefinition, partitionID string, @@ -223,7 +222,7 @@ func (s *SnowflakeAvroSyncHandler) writeToAvroFile( } localFilePath := fmt.Sprintf("%s/%s.avro.zst", tmpDir, partitionID) - s.connector.logger.Info("writing records to local file " + localFilePath) + s.logger.Info("writing records to local file " + localFilePath) avroFile, err := ocfWriter.WriteRecordsToAvroFile(ctx, localFilePath) if err != nil { return nil, fmt.Errorf("failed to write records to Avro file: %w", err) @@ -238,14 +237,14 @@ func (s *SnowflakeAvroSyncHandler) writeToAvroFile( } s3AvroFileKey := fmt.Sprintf("%s/%s/%s.avro.zst", s3o.Prefix, s.config.FlowJobName, partitionID) - s.connector.logger.Info("OCF: Writing records to S3", + s.logger.Info("OCF: Writing records to S3", slog.String(string(shared.PartitionIDKey), partitionID)) provider, err := utils.GetAWSCredentialsProvider(ctx, "snowflake", utils.PeerAWSCredentials{}) if err != nil { return nil, err } - avroFile, err := ocfWriter.WriteRecordsToS3(ctx, s3o.Bucket, s3AvroFileKey, provider) + avroFile, err := ocfWriter.WriteRecordsToS3(ctx, env, s3o.Bucket, s3AvroFileKey, provider) if err != nil { return nil, fmt.Errorf("failed to write records to S3: %w", err) } @@ -258,16 +257,16 @@ func (s *SnowflakeAvroSyncHandler) writeToAvroFile( func (s *SnowflakeAvroSyncHandler) 
putFileToStage(ctx context.Context, avroFile *avro.AvroFile, stage string) error { if avroFile.StorageLocation != avro.AvroLocalStorage { - s.connector.logger.Info("no file to put to stage") + s.logger.Info("no file to put to stage") return nil } putCmd := fmt.Sprintf("PUT file://%s @%s", avroFile.FilePath, stage) - if _, err := s.connector.database.ExecContext(ctx, putCmd); err != nil { + if _, err := s.database.ExecContext(ctx, putCmd); err != nil { return fmt.Errorf("failed to put file to stage: %w", err) } - s.connector.logger.Info(fmt.Sprintf("put file %s to stage %s", avroFile.FilePath, stage)) + s.logger.Info(fmt.Sprintf("put file %s to stage %s", avroFile.FilePath, stage)) return nil } diff --git a/flow/connectors/snowflake/snowflake.go b/flow/connectors/snowflake/snowflake.go index 7a400d78a..06e3fb881 100644 --- a/flow/connectors/snowflake/snowflake.go +++ b/flow/connectors/snowflake/snowflake.go @@ -423,8 +423,7 @@ func (c *SnowflakeConnector) SyncRecords(ctx context.Context, req *model.SyncRec return nil, err } - err = c.FinishBatch(ctx, req.FlowJobName, req.SyncBatchID, res.LastSyncedCheckpointID) - if err != nil { + if err := c.FinishBatch(ctx, req.FlowJobName, req.SyncBatchID, res.LastSyncedCheckpointID); err != nil { return nil, err } @@ -456,7 +455,7 @@ func (c *SnowflakeConnector) syncRecordsViaAvro( return nil, err } - numRecords, err := avroSyncer.SyncRecords(ctx, destinationTableSchema, stream, req.FlowJobName) + numRecords, err := avroSyncer.SyncRecords(ctx, req.Env, destinationTableSchema, stream, req.FlowJobName) if err != nil { return nil, err } diff --git a/flow/connectors/utils/avro/avro_writer.go b/flow/connectors/utils/avro/avro_writer.go index 6f193be88..ee72e2c28 100644 --- a/flow/connectors/utils/avro/avro_writer.go +++ b/flow/connectors/utils/avro/avro_writer.go @@ -23,6 +23,7 @@ import ( "github.com/PeerDB-io/peer-flow/connectors/utils" "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" + 
"github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" ) @@ -187,7 +188,11 @@ func (p *peerDBOCFWriter) WriteOCF(ctx context.Context, w io.Writer) (int, error } func (p *peerDBOCFWriter) WriteRecordsToS3( - ctx context.Context, bucketName, key string, s3Creds utils.AWSCredentialsProvider, + ctx context.Context, + env map[string]string, + bucketName string, + key string, + s3Creds utils.AWSCredentialsProvider, ) (*AvroFile, error) { logger := shared.LoggerFromCtx(ctx) s3svc, err := utils.CreateS3Client(ctx, s3Creds) @@ -215,12 +220,23 @@ func (p *peerDBOCFWriter) WriteRecordsToS3( numRows, writeOcfError = p.WriteOCF(ctx, w) }() - _, err = manager.NewUploader(s3svc).Upload(ctx, &s3.PutObjectInput{ + partSize, err := peerdbenv.PeerDBS3PartSize(ctx, env) + if err != nil { + return nil, fmt.Errorf("could not get s3 part size config: %w", err) + } + + // Create the uploader using the AWS SDK v2 manager + uploader := manager.NewUploader(s3svc, func(u *manager.Uploader) { + if partSize > 0 { + u.PartSize = partSize + } + }) + + if _, err := uploader.Upload(ctx, &s3.PutObjectInput{ Bucket: aws.String(bucketName), Key: aws.String(key), Body: r, - }) - if err != nil { + }); err != nil { s3Path := "s3://" + bucketName + "/" + key logger.Error("failed to upload file", slog.Any("error", err), slog.String("s3_path", s3Path)) return nil, fmt.Errorf("failed to upload file: %w", err) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index 1e2f22590..4810faf0a 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -116,6 +116,14 @@ DROP AGGREGATE PEERDB_EPHEMERAL_HEARTBEAT(float4); END;`, ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, TargetForSetting: protos.DynconfTarget_CLICKHOUSE, }, + { + Name: "PEERDB_S3_PART_SIZE", + Description: "S3 upload part size, may need to increase for large batches", + DefaultValue: "0", + ValueType: protos.DynconfValueType_INT, + ApplyMode: 
protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, + TargetForSetting: protos.DynconfTarget_ALL, + }, { Name: "PEERDB_QUEUE_FORCE_TOPIC_CREATION", Description: "Force auto topic creation in mirrors, applies to Kafka and PubSub mirrors", @@ -340,6 +348,10 @@ func PeerDBClickHouseAWSS3BucketName(ctx context.Context, env map[string]string) return dynLookup(ctx, env, "PEERDB_CLICKHOUSE_AWS_S3_BUCKET_NAME") } +func PeerDBS3PartSize(ctx context.Context, env map[string]string) (int64, error) { + return dynamicConfSigned[int64](ctx, env, "PEERDB_S3_PART_SIZE") +} + // Kafka has topic auto create as an option, auto.create.topics.enable // But non-dedicated cluster maybe can't set config, may want peerdb to create topic. Similar for PubSub func PeerDBQueueForceTopicCreation(ctx context.Context, env map[string]string) (bool, error) { From 825c778ac43836811697ba1db09c42734c2f0b86 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Thu, 14 Nov 2024 13:27:44 +0000 Subject: [PATCH 06/61] update s3 part size config description (#2252) #2184 feedback --- flow/peerdbenv/dynamicconf.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index 4810faf0a..17b3b48bb 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -118,7 +118,7 @@ DROP AGGREGATE PEERDB_EPHEMERAL_HEARTBEAT(float4); END;`, }, { Name: "PEERDB_S3_PART_SIZE", - Description: "S3 upload part size, may need to increase for large batches", + Description: "S3 upload part size in bytes, may need to increase for large batches. 
https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html", DefaultValue: "0", ValueType: protos.DynconfValueType_INT, ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, From 1ce7723a65cbc8855b99107776ac90cb3f780fac Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Thu, 14 Nov 2024 13:38:08 +0000 Subject: [PATCH 07/61] fix lint (#2253) should really not use auto-merge so much with linting not actually being required --- flow/peerdbenv/dynamicconf.go | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index 17b3b48bb..1eb53d3cb 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -117,8 +117,9 @@ DROP AGGREGATE PEERDB_EPHEMERAL_HEARTBEAT(float4); END;`, TargetForSetting: protos.DynconfTarget_CLICKHOUSE, }, { - Name: "PEERDB_S3_PART_SIZE", - Description: "S3 upload part size in bytes, may need to increase for large batches. https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html", + Name: "PEERDB_S3_PART_SIZE", + Description: "S3 upload part size in bytes, may need to increase for large batches. 
" + + "https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html", DefaultValue: "0", ValueType: protos.DynconfValueType_INT, ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, From 350686f6d7e8532af1013796488f23a61cd6589e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Thu, 14 Nov 2024 13:45:52 +0000 Subject: [PATCH 08/61] Enable wal heartbeats by default (#2217) users predating PG14 should change the query or disable wal heartbeats --- flow/peerdbenv/dynamicconf.go | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index 1eb53d3cb..c86c4616a 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -68,17 +68,14 @@ var DynamicSettings = [...]*protos.DynamicSetting{ { Name: "PEERDB_ENABLE_WAL_HEARTBEAT", Description: "Enables WAL heartbeat to prevent replication slot lag from increasing during times of no activity", - DefaultValue: "false", + DefaultValue: "true", ValueType: protos.DynconfValueType_BOOL, ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, TargetForSetting: protos.DynconfTarget_ALL, }, { - Name: "PEERDB_WAL_HEARTBEAT_QUERY", - DefaultValue: `BEGIN; -DROP AGGREGATE IF EXISTS PEERDB_EPHEMERAL_HEARTBEAT(float4); -CREATE AGGREGATE PEERDB_EPHEMERAL_HEARTBEAT(float4) (SFUNC = float4pl, STYPE = float4); -DROP AGGREGATE PEERDB_EPHEMERAL_HEARTBEAT(float4); END;`, + Name: "PEERDB_WAL_HEARTBEAT_QUERY", + DefaultValue: "SELECT pg_logical_emit_message(false,'peerdb_heartbeat','')", ValueType: protos.DynconfValueType_STRING, Description: "SQL to run during each WAL heartbeat", ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, From 42b02080ff91a05b52ef5a1b02c7991a4307c968 Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Thu, 14 Nov 2024 20:26:22 +0530 Subject: [PATCH 09/61] feat: add maintenance mode for upgrades (#2211) - Introduces Maintenance mode (status is 
available via dynamic config: `PEERDB_MAINTENANCE_MODE_ENABLED`)
- Maintenance mode consists of 2 workflows:
  - `StartMaintenance` - for pre-upgrade, responsible for:
    - Waiting for running snapshots
    - Updating dynamic config to true
    - Pausing and backing up currently running mirrors
  - `EndMaintenance` - for post-upgrade, responsible for:
    - Resuming backed-up mirrors
    - Updating dynamic config to false
- During the upgrade (between `Start` and `End`), mirrors cannot be mutated/created in any way.
- There is also an instance info API which returns `Ready`/`Maintenance`, which can be used for UI changes later.

There are 2 ways to trigger these 2 workflows:
1. An API call to flow-api
2. Running the new `maintenance` entrypoint with the respective args

A new task queue is added so that the maintenance tasks can be spun up even during pre-upgrade hooks (from versions earlier than the ones containing this PR), and this also ensures that the latest version of the maintenance flows always runs irrespective of the old version.
--- docker-bake.hcl | 30 ++ flow/activities/maintenance_activity.go | 284 ++++++++++++++++ flow/alerting/alerting.go | 6 +- flow/cmd/api.go | 41 +-- flow/cmd/handler.go | 66 ++++ flow/cmd/maintenance.go | 246 ++++++++++++++ flow/cmd/mirror_status.go | 15 +- flow/cmd/settings.go | 3 +- flow/cmd/validate_mirror.go | 12 + flow/cmd/worker.go | 14 +- flow/go.mod | 47 ++- flow/go.sum | 103 +++++- flow/main.go | 88 +++++ flow/peerdbenv/config.go | 6 + flow/peerdbenv/dynamicconf.go | 32 ++ flow/shared/constants.go | 5 +- flow/shared/telemetry/event_types.go | 10 +- flow/shared/worklow.go | 27 ++ flow/workflows/activities.go | 5 +- flow/workflows/maintenance_flow.go | 305 ++++++++++++++++++ flow/workflows/register.go | 3 + .../migrations/V40__maintenance_flows.sql | 29 ++ protos/flow.proto | 25 ++ protos/route.proto | 40 +++ stacks/flow.Dockerfile | 15 + stacks/peerdb-server.Dockerfile | 4 + stacks/peerdb-ui.Dockerfile | 3 + 27 files changed, 1393 insertions(+), 71 deletions(-) create mode 100644 flow/activities/maintenance_activity.go create mode 100644 flow/cmd/maintenance.go create mode 100644 flow/shared/worklow.go create mode 100644 flow/workflows/maintenance_flow.go create mode 100644 nexus/catalog/migrations/V40__maintenance_flows.sql diff --git a/docker-bake.hcl b/docker-bake.hcl index 6e6098ca1..4927cd550 100644 --- a/docker-bake.hcl +++ b/docker-bake.hcl @@ -16,6 +16,7 @@ group "default" { "flow-worker", "flow-api", "flow-snapshot-worker", + "flow-maintenance", "peerdb-ui" ] } @@ -45,6 +46,9 @@ target "flow-snapshot-worker" { "linux/amd64", "linux/arm64", ] + args = { + PEERDB_VERSION_SHA_SHORT = "${SHA_SHORT}" + } tags = [ "${REGISTRY}/flow-snapshot-worker:${TAG}", "${REGISTRY}/flow-snapshot-worker:${SHA_SHORT}", @@ -59,12 +63,32 @@ target "flow-worker" { "linux/amd64", "linux/arm64", ] + args = { + PEERDB_VERSION_SHA_SHORT = "${SHA_SHORT}" + } tags = [ "${REGISTRY}/flow-worker:${TAG}", "${REGISTRY}/flow-worker:${SHA_SHORT}", ] } +target "flow-maintenance" { + 
context = "." + dockerfile = "stacks/flow.Dockerfile" + target = "flow-maintenance" + platforms = [ + "linux/amd64", + "linux/arm64", + ] + args = { + PEERDB_VERSION_SHA_SHORT = "${SHA_SHORT}" + } + tags = [ + "${REGISTRY}/flow-maintenance:${TAG}", + "${REGISTRY}/flow-maintenance:${SHA_SHORT}", + ] +} + target "peerdb" { context = "." dockerfile = "stacks/peerdb-server.Dockerfile" @@ -72,6 +96,9 @@ target "peerdb" { "linux/amd64", "linux/arm64", ] + args = { + PEERDB_VERSION_SHA_SHORT = "${SHA_SHORT}" + } tags = [ "${REGISTRY}/peerdb-server:${TAG}", "${REGISTRY}/peerdb-server:${SHA_SHORT}", @@ -85,6 +112,9 @@ target "peerdb-ui" { "linux/amd64", "linux/arm64", ] + args = { + PEERDB_VERSION_SHA_SHORT = "${SHA_SHORT}" + } tags = [ "${REGISTRY}/peerdb-ui:${TAG}", "${REGISTRY}/peerdb-ui:${SHA_SHORT}", diff --git a/flow/activities/maintenance_activity.go b/flow/activities/maintenance_activity.go new file mode 100644 index 000000000..be42cc8e5 --- /dev/null +++ b/flow/activities/maintenance_activity.go @@ -0,0 +1,284 @@ +package activities + +import ( + "context" + "fmt" + "log/slog" + "time" + + "github.com/jackc/pgx/v5" + "github.com/jackc/pgx/v5/pgxpool" + "go.temporal.io/sdk/activity" + "go.temporal.io/sdk/client" + "google.golang.org/protobuf/types/known/timestamppb" + + "github.com/PeerDB-io/peer-flow/alerting" + "github.com/PeerDB-io/peer-flow/generated/protos" + "github.com/PeerDB-io/peer-flow/model" + "github.com/PeerDB-io/peer-flow/peerdbenv" + "github.com/PeerDB-io/peer-flow/shared" + "github.com/PeerDB-io/peer-flow/shared/telemetry" +) + +const ( + mirrorStateBackup = "backup" + mirrorStateRestored = "restore" +) + +type MaintenanceActivity struct { + CatalogPool *pgxpool.Pool + Alerter *alerting.Alerter + TemporalClient client.Client +} + +func (a *MaintenanceActivity) GetAllMirrors(ctx context.Context) (*protos.MaintenanceMirrors, error) { + rows, err := a.CatalogPool.Query(ctx, ` + select distinct on(name) + id, name, workflow_id, + created_at, 
coalesce(query_string, '')='' is_cdc + from flows + `) + if err != nil { + return &protos.MaintenanceMirrors{}, err + } + + maintenanceMirrorItems, err := pgx.CollectRows(rows, func(row pgx.CollectableRow) (*protos.MaintenanceMirror, error) { + var info protos.MaintenanceMirror + var createdAt time.Time + err := row.Scan(&info.MirrorId, &info.MirrorName, &info.WorkflowId, &createdAt, &info.IsCdc) + info.MirrorCreatedAt = timestamppb.New(createdAt) + return &info, err + }) + return &protos.MaintenanceMirrors{ + Mirrors: maintenanceMirrorItems, + }, err +} + +func (a *MaintenanceActivity) getMirrorStatus(ctx context.Context, mirror *protos.MaintenanceMirror) (protos.FlowStatus, error) { + return shared.GetWorkflowStatus(ctx, a.TemporalClient, mirror.WorkflowId) +} + +func (a *MaintenanceActivity) WaitForRunningSnapshots(ctx context.Context) (*protos.MaintenanceMirrors, error) { + mirrors, err := a.GetAllMirrors(ctx) + if err != nil { + return &protos.MaintenanceMirrors{}, err + } + + slog.Info("Found mirrors for snapshot check", "mirrors", mirrors, "len", len(mirrors.Mirrors)) + + for _, mirror := range mirrors.Mirrors { + lastStatus, err := a.checkAndWaitIfSnapshot(ctx, mirror, 2*time.Minute) + if err != nil { + return &protos.MaintenanceMirrors{}, err + } + slog.Info("Finished checking and waiting for snapshot", + "mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId, "lastStatus", lastStatus.String()) + } + slog.Info("Finished checking and waiting for all mirrors to finish snapshot") + return mirrors, nil +} + +func (a *MaintenanceActivity) checkAndWaitIfSnapshot( + ctx context.Context, + mirror *protos.MaintenanceMirror, + logEvery time.Duration, +) (protos.FlowStatus, error) { + // In case a mirror was just kicked off, it shows up in the running state, we wait for a bit before checking for snapshot + if mirror.MirrorCreatedAt.AsTime().After(time.Now().Add(-30 * time.Second)) { + slog.Info("Mirror was created less than 30 seconds ago, waiting for it to be 
ready before checking for snapshot",
+			"mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId)
+		time.Sleep(30 * time.Second)
+	}
+
+	flowStatus, err := RunEveryIntervalUntilFinish(ctx, func() (bool, protos.FlowStatus, error) {
+		activity.RecordHeartbeat(ctx, fmt.Sprintf("Waiting for mirror %s to finish snapshot", mirror.MirrorName))
+		mirrorStatus, err := a.getMirrorStatus(ctx, mirror)
+		if err != nil {
+			return false, mirrorStatus, err
+		}
+		if mirrorStatus == protos.FlowStatus_STATUS_SNAPSHOT || mirrorStatus == protos.FlowStatus_STATUS_SETUP {
+			return false, mirrorStatus, nil
+		}
+		return true, mirrorStatus, nil
+	}, 10*time.Second, fmt.Sprintf("Waiting for mirror %s to finish snapshot", mirror.MirrorName), logEvery)
+	return flowStatus, err
+}
+
+func (a *MaintenanceActivity) EnableMaintenanceMode(ctx context.Context) error {
+	slog.Info("Enabling maintenance mode")
+	return peerdbenv.UpdatePeerDBMaintenanceModeEnabled(ctx, a.CatalogPool, true)
+}
+
+func (a *MaintenanceActivity) BackupAllPreviouslyRunningFlows(ctx context.Context, mirrors *protos.MaintenanceMirrors) error {
+	tx, err := a.CatalogPool.Begin(ctx)
+	if err != nil {
+		return err
+	}
+	defer shared.RollbackTx(tx, slog.Default())
+
+	for _, mirror := range mirrors.Mirrors {
+		_, err := tx.Exec(ctx, `
+		insert into maintenance.maintenance_flows
+			(flow_id, flow_name, workflow_id, flow_created_at, is_cdc, state, from_version)
+		values
+			($1, $2, $3, $4, $5, $6, $7)
+		`, mirror.MirrorId, mirror.MirrorName, mirror.WorkflowId, mirror.MirrorCreatedAt.AsTime(), mirror.IsCdc, mirrorStateBackup,
+			peerdbenv.PeerDBVersionShaShort())
+		if err != nil {
+			return err
+		}
+	}
+	return tx.Commit(ctx)
+}
+
+func (a *MaintenanceActivity) PauseMirrorIfRunning(ctx context.Context, mirror *protos.MaintenanceMirror) (bool, error) {
+	mirrorStatus, err := a.getMirrorStatus(ctx, mirror)
+	if err != nil {
+		return false, err
+	}
+
+	slog.Info("Checking if mirror is running", "mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId, "status", mirrorStatus.String())
+
+	if mirrorStatus != protos.FlowStatus_STATUS_RUNNING {
+		return false, nil
+	}
+
+	slog.Info("Pausing mirror for maintenance", "mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId)
+
+	if err := model.FlowSignal.SignalClientWorkflow(ctx, a.TemporalClient, mirror.WorkflowId, "", model.PauseSignal); err != nil {
+		slog.Error("Error signaling mirror running to pause for maintenance",
+			"mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId, "error", err)
+		return false, err
+	}
+
+	return RunEveryIntervalUntilFinish(ctx, func() (bool, bool, error) {
+		updatedMirrorStatus, statusErr := a.getMirrorStatus(ctx, mirror)
+		if statusErr != nil {
+			return false, false, statusErr
+		}
+		activity.RecordHeartbeat(ctx, "Waiting for mirror to pause with current status "+updatedMirrorStatus.String())
+		if statusErr := model.FlowSignal.SignalClientWorkflow(ctx, a.TemporalClient, mirror.WorkflowId, "",
+			model.PauseSignal); statusErr != nil {
+			return false, false, statusErr
+		}
+		if updatedMirrorStatus == protos.FlowStatus_STATUS_PAUSED {
+			return true, true, nil
+		}
+		return false, false, nil
+	}, 10*time.Second, "Waiting for mirror to pause", 30*time.Second)
+}
+
+func (a *MaintenanceActivity) CleanBackedUpFlows(ctx context.Context) error {
+	_, err := a.CatalogPool.Exec(ctx, `
+		update maintenance.maintenance_flows
+		set state = $1,
+			restored_at = now(),
+			to_version = $2
+		where state = $3
+	`, mirrorStateRestored, peerdbenv.PeerDBVersionShaShort(), mirrorStateBackup)
+	return err
+}
+
+func (a *MaintenanceActivity) GetBackedUpFlows(ctx context.Context) (*protos.MaintenanceMirrors, error) {
+	rows, err := a.CatalogPool.Query(ctx, `
+		select flow_id, flow_name, workflow_id, flow_created_at, is_cdc
+		from maintenance.maintenance_flows
+		where state = $1
+	`, mirrorStateBackup)
+	if err != nil {
+		return nil, err
+	}
+
+	maintenanceMirrorItems, err := pgx.CollectRows(rows, func(row pgx.CollectableRow) (*protos.MaintenanceMirror, error) {
+		var info protos.MaintenanceMirror
+		var createdAt time.Time
+		err := row.Scan(&info.MirrorId, &info.MirrorName, &info.WorkflowId, &createdAt, &info.IsCdc)
+		info.MirrorCreatedAt = timestamppb.New(createdAt)
+		return &info, err
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	return &protos.MaintenanceMirrors{
+		Mirrors: maintenanceMirrorItems,
+	}, nil
+}
+
+func (a *MaintenanceActivity) ResumeMirror(ctx context.Context, mirror *protos.MaintenanceMirror) error {
+	mirrorStatus, err := a.getMirrorStatus(ctx, mirror)
+	if err != nil {
+		return err
+	}
+
+	if mirrorStatus != protos.FlowStatus_STATUS_PAUSED {
+		slog.Error("Cannot resume mirror that is not paused",
+			"mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId, "status", mirrorStatus.String())
+		return nil
+	}
+
+	// There can also be "workflow already completed" errors, what should we do in that case?
+	if err := model.FlowSignal.SignalClientWorkflow(ctx, a.TemporalClient, mirror.WorkflowId, "", model.NoopSignal); err != nil {
+		slog.Error("Error signaling mirror to resume for maintenance",
+			"mirror", mirror.MirrorName, "workflowId", mirror.WorkflowId, "error", err)
+		return err
+	}
+	return nil
+}
+
+func (a *MaintenanceActivity) DisableMaintenanceMode(ctx context.Context) error {
+	slog.Info("Disabling maintenance mode")
+	return peerdbenv.UpdatePeerDBMaintenanceModeEnabled(ctx, a.CatalogPool, false)
+}
+
+func (a *MaintenanceActivity) BackgroundAlerter(ctx context.Context) error {
+	heartbeatTicker := time.NewTicker(30 * time.Second)
+	defer heartbeatTicker.Stop()
+
+	alertTicker := time.NewTicker(time.Duration(peerdbenv.PeerDBMaintenanceModeWaitAlertSeconds()) * time.Second)
+	defer alertTicker.Stop()
+
+	for {
+		select {
+		case <-ctx.Done():
+			return ctx.Err()
+		case <-heartbeatTicker.C:
+			activity.RecordHeartbeat(ctx, "Maintenance Workflow is still running")
+		case <-alertTicker.C:
+			slog.Warn("Maintenance Workflow is still running")
+			a.Alerter.LogNonFlowWarning(ctx, telemetry.MaintenanceWait, "Waiting", "Maintenance mode is still running")
+		}
+	}
+}
+
+func RunEveryIntervalUntilFinish[T any](
+	ctx context.Context,
+	runFunc func() (finished bool, result T, err error),
+	runInterval time.Duration,
+	logMessage string,
+	logInterval time.Duration,
+) (T, error) {
+	runTicker := time.NewTicker(runInterval)
+	defer runTicker.Stop()
+
+	logTicker := time.NewTicker(logInterval)
+	defer logTicker.Stop()
+	var lastResult T
+	for {
+		select {
+		case <-ctx.Done():
+			return lastResult, ctx.Err()
+		case <-runTicker.C:
+			finished, result, err := runFunc()
+			lastResult = result
+			if err != nil {
+				return lastResult, err
+			}
+			if finished {
+				return lastResult, err
+			}
+		case <-logTicker.C:
+			slog.Info(logMessage, "lastResult", lastResult)
+		}
+	}
+}
diff --git a/flow/alerting/alerting.go b/flow/alerting/alerting.go
index e9df410f9..5f05005d1 100644
--- a/flow/alerting/alerting.go
+++ b/flow/alerting/alerting.go
@@ -377,10 +377,10 @@ func (a *Alerter) sendTelemetryMessage(
 	}
 
 	if a.snsTelemetrySender != nil {
-		if status, err := a.snsTelemetrySender.SendMessage(ctx, details, details, attributes); err != nil {
+		if response, err := a.snsTelemetrySender.SendMessage(ctx, details, details, attributes); err != nil {
 			logger.Warn("failed to send message to snsTelemetrySender", slog.Any("error", err))
 		} else {
-			logger.Info("received status from snsTelemetrySender", slog.String("status", status))
+			logger.Info("received response from snsTelemetrySender", slog.String("response", response))
 		}
 	}
 
@@ -388,7 +388,7 @@ func (a *Alerter) sendTelemetryMessage(
 		if status, err := a.incidentIoTelemetrySender.SendMessage(ctx, details, details, attributes); err != nil {
 			logger.Warn("failed to send message to incidentIoTelemetrySender", slog.Any("error", err))
 		} else {
-			logger.Info("received status from incident.io", slog.String("status", status))
+			logger.Info("received response from incident.io", slog.String("response", status))
 		}
 	}
 }
diff --git a/flow/cmd/api.go b/flow/cmd/api.go
index ca225e429..f81f9d923 100644
--- a/flow/cmd/api.go
+++ b/flow/cmd/api.go
@@ -191,24 +191,7 @@ func APIMain(ctx context.Context, args *APIServerParams) error {
 		Logger: slog.New(shared.NewSlogHandler(slog.NewJSONHandler(os.Stdout, nil))),
 	}
 
-	if peerdbenv.PeerDBTemporalEnableCertAuth() {
-		slog.Info("Using temporal certificate/key for authentication")
-
-		certs, err := parseTemporalCertAndKey(ctx)
-		if err != nil {
-			return fmt.Errorf("unable to base64 decode certificate and key: %w", err)
-		}
-
-		connOptions := client.ConnectionOptions{
-			TLS: &tls.Config{
-				Certificates: certs,
-				MinVersion:   tls.VersionTLS13,
-			},
-		}
-		clientOptions.ConnectionOptions = connOptions
-	}
-
-	tc, err := client.Dial(clientOptions)
+	tc, err := setupTemporalClient(ctx, clientOptions)
 	if err != nil {
 		return fmt.Errorf("unable to create Temporal client: %w", err)
 	}
@@ -309,3 +292,25 @@ func APIMain(ctx context.Context, args *APIServerParams) error {
 
 	return nil
 }
+
+func setupTemporalClient(ctx context.Context, clientOptions client.Options) (client.Client, error) {
+	if peerdbenv.PeerDBTemporalEnableCertAuth() {
+		slog.Info("Using temporal certificate/key for authentication")
+
+		certs, err := parseTemporalCertAndKey(ctx)
+		if err != nil {
+			return nil, fmt.Errorf("unable to base64 decode certificate and key: %w", err)
+		}
+
+		connOptions := client.ConnectionOptions{
+			TLS: &tls.Config{
+				Certificates: certs,
+				MinVersion:   tls.VersionTLS13,
+			},
+		}
+		clientOptions.ConnectionOptions = connOptions
+	}
+
+	tc, err := client.Dial(clientOptions)
+	return tc, err
+}
diff --git a/flow/cmd/handler.go b/flow/cmd/handler.go
index e2d1da2e3..6caefaf47 100644
--- a/flow/cmd/handler.go
+++ b/flow/cmd/handler.go
@@ -19,6 +19,7 @@ import (
 	"github.com/PeerDB-io/peer-flow/connectors/utils"
 	"github.com/PeerDB-io/peer-flow/generated/protos"
 	"github.com/PeerDB-io/peer-flow/model"
+	"github.com/PeerDB-io/peer-flow/peerdbenv"
 	"github.com/PeerDB-io/peer-flow/shared"
 	peerflow "github.com/PeerDB-io/peer-flow/workflows"
 )
@@ -327,6 +328,17 @@ func (h *FlowRequestHandler) FlowStateChange(
 ) (*protos.FlowStateChangeResponse, error) {
 	logs := slog.String("flowJobName", req.FlowJobName)
 	slog.Info("FlowStateChange called", logs, slog.Any("req", req))
+	underMaintenance, err := peerdbenv.PeerDBMaintenanceModeEnabled(ctx, nil)
+	if err != nil {
+		slog.Error("unable to check maintenance mode", logs, slog.Any("error", err))
+		return nil, fmt.Errorf("unable to load dynamic config: %w", err)
+	}
+
+	if underMaintenance {
+		slog.Warn("Flow state change request denied due to maintenance", logs)
+		return nil, errors.New("PeerDB is under maintenance")
+	}
+
 	workflowID, err := h.getWorkflowID(ctx, req.FlowJobName)
 	if err != nil {
 		slog.Error("[flow-state-change] unable to get workflowID", logs, slog.Any("error", err))
@@ -488,6 +500,14 @@ func (h *FlowRequestHandler) ResyncMirror(
 	ctx context.Context,
 	req *protos.ResyncMirrorRequest,
 ) (*protos.ResyncMirrorResponse, error) {
+	underMaintenance, err := peerdbenv.PeerDBMaintenanceModeEnabled(ctx, nil)
+	if err != nil {
+		return nil, fmt.Errorf("unable to get maintenance mode status: %w", err)
+	}
+	if underMaintenance {
+		return nil, errors.New("PeerDB is under maintenance")
+	}
+
 	isCDC, err := h.isCDCFlow(ctx, req.FlowJobName)
 	if err != nil {
 		return nil, err
@@ -521,3 +541,49 @@ func (h *FlowRequestHandler) ResyncMirror(
 	}
 	return &protos.ResyncMirrorResponse{}, nil
 }
+
+func (h *FlowRequestHandler) GetInstanceInfo(ctx context.Context, in *protos.InstanceInfoRequest) (*protos.InstanceInfoResponse, error) {
+	enabled, err := peerdbenv.PeerDBMaintenanceModeEnabled(ctx, nil)
+	if err != nil {
+		slog.Error("unable to get maintenance mode status", slog.Any("error", err))
+		return &protos.InstanceInfoResponse{
+			Status: protos.InstanceStatus_INSTANCE_STATUS_UNKNOWN,
+		}, fmt.Errorf("unable to get maintenance mode status: %w", err)
+	}
+	if enabled {
+		return &protos.InstanceInfoResponse{
+			Status: protos.InstanceStatus_INSTANCE_STATUS_MAINTENANCE,
+		}, nil
+	}
+	return &protos.InstanceInfoResponse{
+		Status: protos.InstanceStatus_INSTANCE_STATUS_READY,
+	}, nil
+}
+
+func (h *FlowRequestHandler) Maintenance(ctx context.Context, in *protos.MaintenanceRequest) (*protos.MaintenanceResponse, error) {
+	taskQueueId := shared.MaintenanceFlowTaskQueue
+	if in.UsePeerflowTaskQueue {
+		taskQueueId = shared.PeerFlowTaskQueue
+	}
+	switch {
+	case in.Status == protos.MaintenanceStatus_MAINTENANCE_STATUS_START:
+		workflowRun, err := peerflow.RunStartMaintenanceWorkflow(ctx, h.temporalClient, &protos.StartMaintenanceFlowInput{}, taskQueueId)
+		if err != nil {
+			return nil, err
+		}
+		return &protos.MaintenanceResponse{
+			WorkflowId: workflowRun.GetID(),
+			RunId:      workflowRun.GetRunID(),
+		}, nil
+	case in.Status == protos.MaintenanceStatus_MAINTENANCE_STATUS_END:
+		workflowRun, err := peerflow.RunEndMaintenanceWorkflow(ctx, h.temporalClient, &protos.EndMaintenanceFlowInput{}, taskQueueId)
+		if err != nil {
+			return nil, err
+		}
+		return &protos.MaintenanceResponse{
+			WorkflowId: workflowRun.GetID(),
+			RunId:      workflowRun.GetRunID(),
+		}, nil
+	}
+	return nil, errors.New("invalid maintenance status")
+}
diff --git a/flow/cmd/maintenance.go b/flow/cmd/maintenance.go
new file mode 100644
index 000000000..474a67db3
--- /dev/null
+++ b/flow/cmd/maintenance.go
@@ -0,0 +1,246 @@
+package cmd
+
+import (
+	"context"
+	"crypto/tls"
+	"errors"
+	"fmt"
+	"log/slog"
+	"os"
+
+	"github.com/aws/smithy-go/ptr"
+	"go.temporal.io/sdk/client"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials"
+	"google.golang.org/grpc/credentials/insecure"
+	k8sErrors "k8s.io/apimachinery/pkg/api/errors"
+	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/rest"
+
+	"github.com/PeerDB-io/peer-flow/generated/protos"
+	"github.com/PeerDB-io/peer-flow/peerdbenv"
+	"github.com/PeerDB-io/peer-flow/shared"
+	peerflow "github.com/PeerDB-io/peer-flow/workflows"
+)
+
+type MaintenanceCLIParams struct {
+	TemporalHostPort                  string
+	TemporalNamespace                 string
+	Mode                              string
+	FlowGrpcAddress                   string
+	SkipIfK8sServiceMissing           string
+	FlowTlsEnabled                    bool
+	SkipOnApiVersionMatch             bool
+	SkipOnNoMirrors                   bool
+	UseMaintenanceTaskQueue           bool
+	AssumeSkippedMaintenanceWorkflows bool
+}
+
+type StartMaintenanceResult struct {
+	SkippedReason *string `json:"skippedReason,omitempty"`
+	APIVersion    string  `json:"apiVersion,omitempty"`
+	CLIVersion    string  `json:"cliVersion,omitempty"`
+	Skipped       bool    `json:"skipped,omitempty"`
+}
+
+// MaintenanceMain is the entry point for the maintenance command, requires access to Temporal client, will exit after
+// running the requested maintenance workflow
+func MaintenanceMain(ctx context.Context, args *MaintenanceCLIParams) error {
+	slog.Info("Starting Maintenance Mode CLI")
+	clientOptions := client.Options{
+		HostPort:  args.TemporalHostPort,
+		Namespace: args.TemporalNamespace,
+		Logger:    slog.New(shared.NewSlogHandler(slog.NewJSONHandler(os.Stdout, nil))),
+	}
+	tc, err := setupTemporalClient(ctx, clientOptions)
+	if err != nil {
+		return fmt.Errorf("unable to create Temporal client: %w", err)
+	}
+
+	taskQueueId := shared.MaintenanceFlowTaskQueue
+	if !args.UseMaintenanceTaskQueue {
+		taskQueueId = shared.PeerFlowTaskQueue
+	}
+
+	if args.Mode == "start" {
+		if args.AssumeSkippedMaintenanceWorkflows {
+			slog.Info("Assuming maintenance workflows were skipped")
+			return WriteMaintenanceOutputToCatalog(ctx, StartMaintenanceResult{
+				Skipped:       true,
+				SkippedReason: ptr.String("Assumed skipped by CLI Flag"),
+				CLIVersion:    peerdbenv.PeerDBVersionShaShort(),
+			})
+		}
+		skipped, err := skipStartMaintenanceIfNeeded(ctx, args)
+		if err != nil {
+			return err
+		}
+		if skipped {
+			return nil
+		}
+		slog.Info("Running start maintenance workflow")
+		workflowRun, err := peerflow.RunStartMaintenanceWorkflow(ctx, tc, &protos.StartMaintenanceFlowInput{}, taskQueueId)
+		if err != nil {
+			slog.Error("Error running start maintenance workflow", "error", err)
+			return err
+		}
+		var output *protos.StartMaintenanceFlowOutput
+		if err := workflowRun.Get(ctx, &output); err != nil {
+			slog.Error("Error in start maintenance workflow", "error", err)
+			return err
+		}
+		slog.Info("Start maintenance workflow completed", "output", output)
+		return WriteMaintenanceOutputToCatalog(ctx, StartMaintenanceResult{
+			Skipped:    false,
+			CLIVersion: peerdbenv.PeerDBVersionShaShort(),
+		})
+	} else if args.Mode == "end" {
+		if input, err := ReadLastMaintenanceOutput(ctx); input != nil || err != nil {
+			if err != nil {
+				return err
+			}
+			slog.Info("Checking if end maintenance workflow should be skipped", "input", input)
+			if input.Skipped {
+				slog.Info("Skipping end maintenance workflow as start maintenance was skipped", "reason", input.SkippedReason)
+				return nil
+			}
+		}
+		workflowRun, err := peerflow.RunEndMaintenanceWorkflow(ctx, tc, &protos.EndMaintenanceFlowInput{}, taskQueueId)
+		if err != nil {
+			slog.Error("Error running end maintenance workflow", "error", err)
+			return err
+		}
+		var output *protos.EndMaintenanceFlowOutput
+		if err := workflowRun.Get(ctx, &output); err != nil {
+			slog.Error("Error in end maintenance workflow", "error", err)
+			return err
+		}
+		slog.Info("End maintenance workflow completed", "output", output)
+	} else {
+		return fmt.Errorf("unknown flow type %s", args.Mode)
+	}
+	slog.Info("Maintenance workflow completed with type", "type", args.Mode)
+	return nil
+}
+
+func skipStartMaintenanceIfNeeded(ctx context.Context, args *MaintenanceCLIParams) (bool, error) {
+	if args.SkipIfK8sServiceMissing != "" {
+		slog.Info("Checking if k8s service exists", "service", args.SkipIfK8sServiceMissing)
+		exists, err := CheckK8sServiceExistence(ctx, args.SkipIfK8sServiceMissing)
+		if err != nil {
+			return false, err
+		}
+		if !exists {
+			slog.Info("Skipping maintenance workflow due to missing k8s service", "service", args.SkipIfK8sServiceMissing)
+			return true, WriteMaintenanceOutputToCatalog(ctx, StartMaintenanceResult{
+				Skipped:       true,
+				SkippedReason: ptr.String(fmt.Sprintf("K8s service %s missing", args.SkipIfK8sServiceMissing)),
+				CLIVersion:    peerdbenv.PeerDBVersionShaShort(),
+			})
+		}
+	}
+	if args.SkipOnApiVersionMatch || args.SkipOnNoMirrors {
+		if args.FlowGrpcAddress == "" {
+			return false, errors.New("flow address is required when skipping based on API")
+		}
+		slog.Info("Constructing flow client")
+		transportCredentials := credentials.NewTLS(&tls.Config{
+			MinVersion: tls.VersionTLS12,
+		})
+		if !args.FlowTlsEnabled {
+			transportCredentials = insecure.NewCredentials()
+		}
+		conn, err := grpc.NewClient(args.FlowGrpcAddress,
+			grpc.WithTransportCredentials(transportCredentials),
+		)
+		if err != nil {
+			return false, fmt.Errorf("unable to dial grpc flow server: %w", err)
+		}
+		peerFlowClient := protos.NewFlowServiceClient(conn)
+		if args.SkipOnApiVersionMatch {
+			slog.Info("Checking if CLI version matches API version", "cliVersion", peerdbenv.PeerDBVersionShaShort())
+			version, err := peerFlowClient.GetVersion(ctx, &protos.PeerDBVersionRequest{})
+			if err != nil {
+				return false, err
+			}
+			slog.Info("Got version from flow", "version", version.Version)
+			if version.Version == peerdbenv.PeerDBVersionShaShort() {
+				slog.Info("Skipping maintenance workflow due to matching versions")
+				return true, WriteMaintenanceOutputToCatalog(ctx, StartMaintenanceResult{
+					Skipped: true,
+					SkippedReason: ptr.String(fmt.Sprintf("CLI version %s matches API version %s", peerdbenv.PeerDBVersionShaShort(),
+						version.Version)),
+					APIVersion: version.Version,
+					CLIVersion: peerdbenv.PeerDBVersionShaShort(),
+				})
+			}
+		}
+		if args.SkipOnNoMirrors {
+			slog.Info("Checking if there are any mirrors")
+			mirrors, err := peerFlowClient.ListMirrors(ctx, &protos.ListMirrorsRequest{})
+			if err != nil {
+				return false, err
+			}
+			slog.Info("Got mirrors from flow", "mirrors", mirrors.Mirrors)
+			if len(mirrors.Mirrors) == 0 {
+				slog.Info("Skipping maintenance workflow due to no mirrors")
+				return true, WriteMaintenanceOutputToCatalog(ctx, StartMaintenanceResult{
+					Skipped:       true,
+					SkippedReason: ptr.String("No mirrors found"),
+				})
+			}
+		}
+	}
+	return false, nil
+}
+
+func WriteMaintenanceOutputToCatalog(ctx context.Context, result StartMaintenanceResult) error {
+	pool, err := peerdbenv.GetCatalogConnectionPoolFromEnv(ctx)
+	if err != nil {
+		return err
+	}
+	_, err = pool.Exec(ctx, `
+	insert into maintenance.start_maintenance_outputs
+		(cli_version, api_version, skipped, skipped_reason)
+	values
+		($1, $2, $3, $4)
+	`, result.CLIVersion, result.APIVersion, result.Skipped, result.SkippedReason)
+	return err
+}
+
+func ReadLastMaintenanceOutput(ctx context.Context) (*StartMaintenanceResult, error) {
+	pool, err := peerdbenv.GetCatalogConnectionPoolFromEnv(ctx)
+	if err != nil {
+		return nil, err
+	}
+	var result StartMaintenanceResult
+	if err := pool.QueryRow(ctx, `
+	select cli_version, api_version, skipped, skipped_reason
+	from maintenance.start_maintenance_outputs
+	order by created_at desc
+	limit 1
+	`).Scan(&result.CLIVersion, &result.APIVersion, &result.Skipped, &result.SkippedReason); err != nil {
+		return nil, err
+	}
+	return &result, nil
+}
+
+func CheckK8sServiceExistence(ctx context.Context, serviceName string) (bool, error) {
+	config, err := rest.InClusterConfig()
+	if err != nil {
+		return false, err
+	}
+	clientset, err := kubernetes.NewForConfig(config)
+	if err != nil {
+		return false, err
+	}
+	_, err = clientset.CoreV1().Services(peerdbenv.GetEnvString("POD_NAMESPACE", "")).Get(ctx, serviceName, v1.GetOptions{})
+	if err != nil {
+		if k8sErrors.IsNotFound(err) {
+			return false, nil
+		}
+		return false, err
+	}
+	return true, nil
+}
diff --git a/flow/cmd/mirror_status.go b/flow/cmd/mirror_status.go
index a0c4a989e..58cf20a80 100644
--- a/flow/cmd/mirror_status.go
+++ b/flow/cmd/mirror_status.go
@@ -447,20 +447,7 @@ func (h *FlowRequestHandler) isCDCFlow(ctx context.Context, flowJobName string)
 }
 
 func (h *FlowRequestHandler) getWorkflowStatus(ctx context.Context, workflowID string) (protos.FlowStatus, error) {
-	res, err := h.temporalClient.QueryWorkflow(ctx, workflowID, "", shared.FlowStatusQuery)
-	if err != nil {
-		slog.Error(fmt.Sprintf("failed to get status in workflow with ID %s: %s", workflowID, err.Error()))
-		return protos.FlowStatus_STATUS_UNKNOWN,
-			fmt.Errorf("failed to get status in workflow with ID %s: %w", workflowID, err)
-	}
-	var state protos.FlowStatus
-	err = res.Get(&state)
-	if err != nil {
-		slog.Error(fmt.Sprintf("failed to get status in workflow with ID %s: %s", workflowID, err.Error()))
-		return protos.FlowStatus_STATUS_UNKNOWN,
-			fmt.Errorf("failed to get status in workflow with ID %s: %w", workflowID, err)
-	}
-	return state, nil
+	return shared.GetWorkflowStatus(ctx, h.temporalClient, workflowID)
 }
 
 func (h *FlowRequestHandler) getCDCWorkflowState(ctx context.Context,
diff --git a/flow/cmd/settings.go b/flow/cmd/settings.go
index 12e072859..dd4755f4a 100644
--- a/flow/cmd/settings.go
+++ b/flow/cmd/settings.go
@@ -55,8 +55,7 @@ func (h *FlowRequestHandler) PostDynamicSetting(
 	ctx context.Context,
 	req *protos.PostDynamicSettingRequest,
 ) (*protos.PostDynamicSettingResponse, error) {
-	_, err := h.pool.Exec(ctx, `insert into dynamic_settings (config_name, config_value) values ($1, $2)
-		on conflict (config_name) do update set config_value = $2`, req.Name, req.Value)
+	err := peerdbenv.UpdateDynamicSetting(ctx, h.pool, req.Name, req.Value)
 	if err != nil {
 		slog.Error("[PostDynamicConfig] failed to execute update setting", slog.Any("error", err))
 		return nil, err
diff --git a/flow/cmd/validate_mirror.go b/flow/cmd/validate_mirror.go
index 3e870aa66..83c9d2a07 100644
--- a/flow/cmd/validate_mirror.go
+++ b/flow/cmd/validate_mirror.go
@@ -14,6 +14,7 @@ import (
 	connpostgres "github.com/PeerDB-io/peer-flow/connectors/postgres"
 	"github.com/PeerDB-io/peer-flow/connectors/utils"
 	"github.com/PeerDB-io/peer-flow/generated/protos"
+	"github.com/PeerDB-io/peer-flow/peerdbenv"
 	"github.com/PeerDB-io/peer-flow/shared/telemetry"
 )
 
@@ -25,6 +26,17 @@ var (
 func (h *FlowRequestHandler) ValidateCDCMirror(
 	ctx context.Context, req *protos.CreateCDCFlowRequest,
 ) (*protos.ValidateCDCMirrorResponse, error) {
+	underMaintenance, err := peerdbenv.PeerDBMaintenanceModeEnabled(ctx, nil)
+	if err != nil {
+		slog.Error("unable to check maintenance mode", slog.Any("error", err))
+		return nil, fmt.Errorf("unable to load dynamic config: %w", err)
+	}
+
+	if underMaintenance {
+		slog.Warn("Validate request denied due to maintenance", "flowName", req.ConnectionConfigs.FlowJobName)
+		return nil, errors.New("PeerDB is under maintenance")
+	}
+
 	if !req.ConnectionConfigs.Resync {
 		mirrorExists, existCheckErr := h.CheckIfMirrorNameExists(ctx, req.ConnectionConfigs.FlowJobName)
 		if existCheckErr != nil {
diff --git a/flow/cmd/worker.go b/flow/cmd/worker.go
index 9db97288c..5c16376a1 100644
--- a/flow/cmd/worker.go
+++ b/flow/cmd/worker.go
@@ -30,6 +30,7 @@ type WorkerSetupOptions struct {
 	TemporalMaxConcurrentWorkflowTasks int
 	EnableProfiling                    bool
 	EnableOtelMetrics                  bool
+	UseMaintenanceTaskQueue            bool
 }
 
 type workerSetupResponse struct {
@@ -124,8 +125,11 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) {
 		return nil, fmt.Errorf("unable to create Temporal client: %w", err)
 	}
 	slog.Info("Created temporal client")
-
-	taskQueue := peerdbenv.PeerFlowTaskQueueName(shared.PeerFlowTaskQueue)
+	queueId := shared.PeerFlowTaskQueue
+	if opts.UseMaintenanceTaskQueue {
+		queueId = shared.MaintenanceFlowTaskQueue
+	}
+	taskQueue := peerdbenv.PeerFlowTaskQueueName(queueId)
 	slog.Info(
 		fmt.Sprintf("Creating temporal worker for queue %v: %v workflow workers %v activity workers",
 			taskQueue,
@@ -170,6 +174,12 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) {
 		OtelManager: otelManager,
 	})
 
+	w.RegisterActivity(&activities.MaintenanceActivity{
+ CatalogPool: conn, + Alerter: alerting.NewAlerter(context.Background(), conn), + TemporalClient: c, + }) + return &workerSetupResponse{ Client: c, Worker: w, diff --git a/flow/go.mod b/flow/go.mod index b7eb9d1d6..a11ffb5a7 100644 --- a/flow/go.mod +++ b/flow/go.mod @@ -53,22 +53,28 @@ require ( github.com/urfave/cli/v3 v3.0.0-alpha9.2 github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 github.com/yuin/gopher-lua v1.1.1 - go.opentelemetry.io/otel v1.31.0 + go.opentelemetry.io/otel v1.32.0 go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.31.0 go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.31.0 - go.opentelemetry.io/otel/metric v1.31.0 - go.opentelemetry.io/otel/sdk v1.31.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.32.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.32.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 + go.opentelemetry.io/otel/metric v1.32.0 + go.opentelemetry.io/otel/sdk v1.32.0 go.opentelemetry.io/otel/sdk/metric v1.31.0 + go.opentelemetry.io/otel/trace v1.32.0 go.temporal.io/api v1.41.0 go.temporal.io/sdk v1.30.0 go.temporal.io/sdk/contrib/opentelemetry v0.6.0 go.uber.org/automaxprocs v1.6.0 golang.org/x/crypto v0.28.0 - golang.org/x/sync v0.8.0 + golang.org/x/sync v0.9.0 google.golang.org/api v0.204.0 google.golang.org/genproto/googleapis/api v0.0.0-20241104194629-dd2ea8efbc28 google.golang.org/grpc v1.67.1 google.golang.org/protobuf v1.35.1 + k8s.io/apimachinery v0.31.2 + k8s.io/client-go v0.31.2 ) require ( @@ -105,18 +111,29 @@ require ( github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 // indirect github.com/dvsekhvalnov/jose2go v1.7.0 // indirect github.com/elastic/elastic-transport-go/v8 v8.6.0 // indirect + github.com/emicklei/go-restful/v3 v3.11.0 // indirect github.com/envoyproxy/go-control-plane v0.13.1 // indirect github.com/envoyproxy/protoc-gen-validate v1.1.0 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect + 
github.com/fxamacker/cbor/v2 v2.7.0 // indirect github.com/gabriel-vasile/mimetype v1.4.6 // indirect github.com/getsentry/sentry-go v0.29.1 // indirect github.com/go-faster/city v1.0.1 // indirect github.com/go-faster/errors v0.7.1 // indirect github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-openapi/jsonpointer v0.19.6 // indirect + github.com/go-openapi/jsonreference v0.20.2 // indirect + github.com/go-openapi/swag v0.22.4 // indirect github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 // indirect + github.com/golang/protobuf v1.5.4 // indirect + github.com/google/gnostic-models v0.6.8 // indirect + github.com/google/go-cmp v0.6.0 // indirect + github.com/google/gofuzz v1.2.0 // indirect github.com/gorilla/websocket v1.5.3 // indirect github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect github.com/kr/pretty v0.3.1 // indirect github.com/kr/text v0.2.0 // indirect github.com/lestrrat-go/blackmagic v1.0.2 // indirect @@ -124,6 +141,9 @@ require ( github.com/lestrrat-go/httprc v1.0.6 // indirect github.com/lestrrat-go/iter v1.0.2 // indirect github.com/lestrrat-go/option v1.0.1 // indirect + github.com/mailru/easyjson v0.7.7 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.2 // indirect github.com/mtibben/percent v0.2.1 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/nexus-rpc/sdk-go v0.0.11 // indirect @@ -138,14 +158,23 @@ require ( github.com/segmentio/asm v1.2.0 // indirect github.com/sirupsen/logrus v1.9.3 // indirect github.com/twmb/franz-go/pkg/kmsg v1.9.0 // indirect + github.com/x448/float16 v0.8.4 // indirect go.opentelemetry.io/contrib/detectors/gcp v1.31.0 // indirect go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc 
v0.56.0 // indirect go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0 // indirect - go.opentelemetry.io/otel/trace v1.31.0 // indirect go.opentelemetry.io/proto/otlp v1.3.1 // indirect golang.org/x/mod v0.21.0 // indirect golang.org/x/term v0.25.0 // indirect google.golang.org/grpc/stats/opentelemetry v0.0.0-20241028142157-ada6787961b3 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + k8s.io/api v0.31.2 // indirect + k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect + k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 // indirect + sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect + sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect + sigs.k8s.io/yaml v1.4.0 // indirect ) require ( @@ -165,7 +194,7 @@ require ( github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3 // indirect github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3 // indirect github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3 // indirect - github.com/davecgh/go-spew v1.1.1 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/djherbis/buffer v1.2.0 github.com/djherbis/nio/v3 v3.0.1 github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a // indirect @@ -193,7 +222,7 @@ require ( github.com/pborman/uuid v1.2.1 // indirect github.com/pierrec/lz4/v4 v4.1.21 // indirect github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect - github.com/pmezard/go-difflib v1.0.0 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/robfig/cron v1.2.0 // indirect github.com/stretchr/objx v0.5.2 // indirect github.com/zeebo/xxh3 v1.0.2 // indirect @@ -201,8 +230,8 @@ require ( golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c golang.org/x/net v0.30.0 // indirect golang.org/x/oauth2 v0.23.0 // indirect - golang.org/x/sys v0.26.0 // indirect - 
golang.org/x/text v0.19.0 // indirect + golang.org/x/sys v0.27.0 // indirect + golang.org/x/text v0.20.0 // indirect golang.org/x/time v0.7.0 // indirect golang.org/x/tools v0.26.0 // indirect golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect diff --git a/flow/go.sum b/flow/go.sum index 71299452d..7a0380da0 100644 --- a/flow/go.sum +++ b/flow/go.sum @@ -180,8 +180,9 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3 github.com/danieljoos/wincred v1.2.2 h1:774zMFJrqaeYCK2W57BgAem/MLi6mtSE47MB6BOJ0i0= github.com/danieljoos/wincred v1.2.2/go.mod h1:w7w4Utbrz8lqeMbDAK0lkNJUv5sAOkFi7nd/ogr0Uh8= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 h1:rpfIENRNNilwHwZeG5+P150SMrnNEcHYvcCuK6dPZSg= github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0/go.mod h1:v57UDF4pDQJcEfFUCRop3lJL149eHGSe9Jvczhzjo/0= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= @@ -197,6 +198,8 @@ github.com/elastic/elastic-transport-go/v8 v8.6.0 h1:Y2S/FBjx1LlCv5m6pWAF2kDJAHo github.com/elastic/elastic-transport-go/v8 v8.6.0/go.mod h1:YLHer5cj0csTzNFXoNQ8qhtGY1GTvSqPnKWKaqQE3Hk= github.com/elastic/go-elasticsearch/v8 v8.15.0 h1:IZyJhe7t7WI3NEFdcHnf6IJXqpRf+8S8QWLtZYYyBYk= github.com/elastic/go-elasticsearch/v8 v8.15.0/go.mod h1:HCON3zj4btpqs2N1jjsAy4a/fiAul+YBP00mBH4xik8= +github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= +github.com/emicklei/go-restful/v3 
v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= @@ -211,6 +214,8 @@ github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2 github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw= github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g= +github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= +github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= github.com/gabriel-vasile/mimetype v1.4.6 h1:3+PzJTKLkvgjeTbts6msPJt4DixhT4YtFNf1gtGe3zc= github.com/gabriel-vasile/mimetype v1.4.6/go.mod h1:JX1qVKqZd40hUPpAfiNTe0Sne7hdfKSbOqqmkq8GCXc= github.com/getsentry/sentry-go v0.29.1 h1:DyZuChN8Hz3ARxGVV8ePaNXh1dQ7d76AiB117xcREwA= @@ -228,9 +233,18 @@ github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE= +github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= +github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= +github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= +github.com/go-openapi/swag v0.22.3/go.mod 
h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= +github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU= +github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y= github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= github.com/go-test/deep v1.0.4 h1:u2CU3YKy9I2pmu9pX0eq50wCgjfGIt539SqR7FbHiho= github.com/go-test/deep v1.0.4/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/goccy/go-json v0.10.3 h1:KZ5WoDbxAIgm2HNbYckL0se1fHD6rz5j4ywS6ebzDqA= @@ -270,6 +284,8 @@ github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM= github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/google/flatbuffers v24.3.25+incompatible h1:CX395cjN9Kke9mmalRoL3d81AtFUxJM+yDthflgJGkI= github.com/google/flatbuffers v24.3.25+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8= +github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= +github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= @@ -279,10 +295,16 @@ github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5/go.mod 
h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE= +github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc= github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0= +github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af h1:kmjWCqn2qkEml422C2Rrd27c3VGxi6a/6HNq8QmHRKM= +github.com/google/pprof v0.0.0-20240525223248-4bfdf5a9a2af/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo= github.com/google/s2a-go v0.1.8 h1:zZDs9gcbt9ZPLV0ndSyQk6Kacx2g/X+SKYovpnz3SMM= github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= @@ -330,6 +352,10 @@ github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o= github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY= github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0= github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod 
h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/keybase/go-keychain v0.0.0-20231219164618-57a3676c3af6 h1:IsMZxCuZqKuao2vNdfD82fjjgPLfyHLpR41Z88viRWs= github.com/keybase/go-keychain v0.0.0-20231219164618-57a3676c3af6/go.mod h1:3VeWNIJaW+O5xpRQbPp0Ybqu1vJd/pm7s2F473HRrkw= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= @@ -341,6 +367,7 @@ github.com/klauspost/cpuid/v2 v2.2.8 h1:+StwCXwm9PdpiEkPyzBXIy+M9KUb4ODm0Zarf1kS github.com/klauspost/cpuid/v2 v2.2.8/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= @@ -365,10 +392,17 @@ github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw= github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/linkedin/goavro/v2 v2.13.0 h1:L8eI8GcuciwUkt41Ej62joSZS4kKaYIUdze+6for9NU= github.com/linkedin/goavro/v2 v2.13.0/go.mod h1:KXx+erlq+RPlGSPmLF7xGo6SAbh8sCQ53x064+ioxhk= +github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= +github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU= github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= github.com/microsoft/go-mssqldb v1.7.2 h1:CHkFJiObW7ItKTJfHo1QX7QBBD1iV+mn1eOyRP3b/PA= github.com/microsoft/go-mssqldb v1.7.2/go.mod h1:kOvZKUdrhhFQmxLZqbwUV0rHkNkZpthMITIb2Ko1IoA= 
+github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= github.com/mtibben/percent v0.2.1 h1:5gssi8Nqo8QU/r2pynCm+hBQHpkB/uNK7BJCFogWdzs= github.com/mtibben/percent v0.2.1/go.mod h1:KG9uO+SZkUp+VkRHsCdYQV3XSZrrSpR3O9ibNBTZrns= @@ -377,6 +411,10 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8m github.com/nexus-rpc/sdk-go v0.0.11 h1:qH3Us3spfp50t5ca775V1va2eE6z1zMQDZY4mvbw0CI= github.com/nexus-rpc/sdk-go v0.0.11/go.mod h1:TpfkM2Cw0Rlk9drGkoiSMpFqflKTiQLWUNyKJjF8mKQ= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= +github.com/onsi/ginkgo/v2 v2.19.0 h1:9Cnnf7UHo57Hy3k6/m5k3dRfGTMXGvxhHFvkDTCTpvA= +github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To= +github.com/onsi/gomega v1.19.0 h1:4ieX6qQjPP/BfC3mpsAtIGGlxTWPeA3Inl/7DtXw1tw= +github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro= github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o= github.com/orcaman/concurrent-map/v2 v2.0.1 h1:jOJ5Pg2w1oeB6PeDurIYf6k9PQ+aTITr/6lP/L/zp6c= github.com/orcaman/concurrent-map/v2 v2.0.1/go.mod h1:9Eq3TG2oBe5FirmYWQfYO5iH1q0Jv47PLaNK++uCdOM= @@ -397,8 +435,9 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod 
h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo= github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y= @@ -428,6 +467,8 @@ github.com/slack-go/slack v0.15.0 h1:LE2lj2y9vqqiOf+qIIy0GvEoxgF1N5yLGZffmEZykt0 github.com/slack-go/slack v0.15.0/go.mod h1:hlGi5oXA+Gt+yWTPP0plCdRKmjsDxecdHxYQdlMQKOw= github.com/snowflakedb/gosnowflake v1.12.0 h1:Saez8egtn5xAoVMBxFaMu9MYfAG9SS9dpAEXD1/ECIo= github.com/snowflakedb/gosnowflake v1.12.0/go.mod h1:wHfYmZi3zvtWItojesAhWWXBN7+niex2R1h/S7QCZYg= +github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= @@ -458,6 +499,8 @@ github.com/twpayne/go-geos v0.19.0 h1:V7vnLe7gY7JOHLTg8+2oykZOw6wpBLHVNlcnzS2FlG github.com/twpayne/go-geos v0.19.0/go.mod h1:XGpUjCtZf4Ul6BMii6KA4EmJ9JCNhVP1mohdoReopZ8= github.com/urfave/cli/v3 
v3.0.0-alpha9.2 h1:CL8llQj3dGRLVQQzHxS+ZYRLanOuhyK1fXgLKD+qV+Y= github.com/urfave/cli/v3 v3.0.0-alpha9.2/go.mod h1:FnIeEMYu+ko8zP1F9Ypr3xkZMIDqW3DR92yUtY39q1Y= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI= github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g= github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8= @@ -486,20 +529,26 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.5 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.56.0/go.mod h1:n8MR6/liuGB5EmTETUBeU5ZgqMOlqKRxUaqPQBOANZ8= go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0 h1:UP6IpuHFkUgOQL9FFQFrZ+5LiwhhYRbi7VZSIx6Nj5s= go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.56.0/go.mod h1:qxuZLtbq5QDtdeSHsS7bcf6EH6uO6jUAgk764zd3rhM= -go.opentelemetry.io/otel v1.31.0 h1:NsJcKPIW0D0H3NgzPDHmo0WW6SptzPdqg/L1zsIm2hY= -go.opentelemetry.io/otel v1.31.0/go.mod h1:O0C14Yl9FgkjqcCZAsE053C13OaddMYr/hz6clDkEJE= +go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U= +go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.31.0 h1:FZ6ei8GFW7kyPYdxJaV2rgI6M+4tvZzhYsQ2wgyVC08= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.31.0/go.mod h1:MdEu/mC6j3D+tTEfvI15b5Ci2Fn7NneJ71YMoiS3tpI= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.31.0 h1:ZsXq73BERAiNuuFXYqP4MR5hBrjXfMGSO+Cx7qoOZiM= go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.31.0/go.mod h1:hg1zaDMpyZJuUzjFxFsRYBoccE86tM9Uf4IqNMUxvrY= -go.opentelemetry.io/otel/metric v1.31.0 
h1:FSErL0ATQAmYHUIzSezZibnyVlft1ybhy4ozRPcF2fE= -go.opentelemetry.io/otel/metric v1.31.0/go.mod h1:C3dEloVbLuYoX41KpmAhOqNriGbA+qqH6PQ5E5mUfnY= -go.opentelemetry.io/otel/sdk v1.31.0 h1:xLY3abVHYZ5HSfOg3l2E5LUj2Cwva5Y7yGxnSW9H5Gk= -go.opentelemetry.io/otel/sdk v1.31.0/go.mod h1:TfRbMdhvxIIr/B2N2LQW2S5v9m3gOQ/08KsbbO5BPT0= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.32.0 h1:IJFEoHiytixx8cMiVAO+GmHR6Frwu+u5Ur8njpFO6Ac= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.32.0/go.mod h1:3rHrKNtLIoS0oZwkY2vxi+oJcwFRWdtUyRII+so45p8= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.32.0 h1:9kV11HXBHZAvuPUZxmMWrH8hZn/6UnHX4K0mu36vNsU= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.32.0/go.mod h1:JyA0FHXe22E1NeNiHmVp7kFHglnexDQ7uRWDiiJ1hKQ= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 h1:IeMeyr1aBvBiPVYihXIaeIZba6b8E1bYp7lbdxK8CQg= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0/go.mod h1:oVdCUtjq9MK9BlS7TtucsQwUcXcymNiEDjgDD2jMtZU= +go.opentelemetry.io/otel/metric v1.32.0 h1:xV2umtmNcThh2/a/aCP+h64Xx5wsj8qqnkYZktzNa0M= +go.opentelemetry.io/otel/metric v1.32.0/go.mod h1:jH7CIbbK6SH2V2wE16W05BHCtIDzauciCRLoc/SyMv8= +go.opentelemetry.io/otel/sdk v1.32.0 h1:RNxepc9vK59A8XsgZQouW8ue8Gkb4jpWtJm9ge5lEG4= +go.opentelemetry.io/otel/sdk v1.32.0/go.mod h1:LqgegDBjKMmb2GC6/PrTnteJG39I8/vJCAP9LlJXEjU= go.opentelemetry.io/otel/sdk/metric v1.31.0 h1:i9hxxLJF/9kkvfHppyLL55aW7iIJz4JjxTeYusH7zMc= go.opentelemetry.io/otel/sdk/metric v1.31.0/go.mod h1:CRInTMVvNhUKgSAMbKyTMxqOBC0zgyxzW55lZzX43Y8= -go.opentelemetry.io/otel/trace v1.31.0 h1:ffjsj1aRouKewfr85U2aGagJ46+MvodynlQ1HYdmJys= -go.opentelemetry.io/otel/trace v1.31.0/go.mod h1:TXZkRk7SM2ZQLtR6eoAWQFIHPvzQ06FJAsO1tJg480A= +go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQDe03fM= +go.opentelemetry.io/otel/trace v1.32.0/go.mod h1:+i4rkvCraA+tG6AzwloGaCtkx53Fa+L+V8e9a7YvhT8= go.opentelemetry.io/proto/otlp 
v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0= go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8= go.temporal.io/api v1.41.0 h1:VYzyWJjJk1jeB9urntA/t7Hiyo2tHdM5xEdtdib4EO8= @@ -512,6 +561,8 @@ go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs= go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8= go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= go.uber.org/zap v1.18.1/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= @@ -554,8 +605,8 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ= -golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.9.0 h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ= +golang.org/x/sync v0.9.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys 
v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -570,8 +621,8 @@ golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo= -golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s= +golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24= golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M= @@ -579,8 +630,8 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM= -golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY= +golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug= +golang.org/x/text v0.20.0/go.mod h1:D4IsuqiFMhST5bX19pQ9ikHC2GsaKyk/oF+pn3ducp4= golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ= golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= @@ -645,6 +696,8 @@ gopkg.in/check.v1 
v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8 gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= @@ -655,5 +708,23 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= +k8s.io/api v0.31.2 h1:3wLBbL5Uom/8Zy98GRPXpJ254nEFpl+hwndmk9RwmL0= +k8s.io/api v0.31.2/go.mod h1:bWmGvrGPssSK1ljmLzd3pwCQ9MgoTsRCuK35u6SygUk= +k8s.io/apimachinery v0.31.2 h1:i4vUt2hPK56W6mlT7Ry+AO8eEsyxMD1U44NR22CLTYw= +k8s.io/apimachinery v0.31.2/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= +k8s.io/client-go v0.31.2 h1:Y2F4dxU5d3AQj+ybwSMqQnpZH9F30//1ObxOKlTI9yc= +k8s.io/client-go v0.31.2/go.mod h1:NPa74jSVR/+eez2dFsEIHNa+3o09vtNaWwWwb1qSxSs= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= +k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= +k8s.io/utils 
v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A= +k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= nhooyr.io/websocket v1.8.11 h1:f/qXNc2/3DpoSZkHt1DQu6rj4zGC8JmkkLkWss0MgN0= nhooyr.io/websocket v1.8.11/go.mod h1:rN9OFWIUwuxg4fR5tELlYC04bXYowCP9GX47ivo2l+c= +sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= +sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= +sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= +sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= +sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= +sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= diff --git a/flow/main.go b/flow/main.go index 4001a8891..9d499e957 100644 --- a/flow/main.go +++ b/flow/main.go @@ -70,6 +70,60 @@ func main() { Sources: cli.EnvVars("TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASKS"), } + maintenanceModeWorkflowFlag := &cli.StringFlag{ + Name: "run-maintenance-flow", + Value: "", + Usage: "Run a maintenance flow. 
Options are 'start' or 'end'", + Sources: cli.EnvVars("RUN_MAINTENANCE_FLOW"), + } + + maintenanceSkipOnApiVersionMatchFlag := &cli.BoolFlag{ + Name: "skip-on-api-version-match", + Value: false, + Usage: "Skip maintenance flow if the API version matches", + Sources: cli.EnvVars("MAINTENANCE_SKIP_ON_API_VERSION_MATCH"), + } + + maintenanceSkipOnNoMirrorsFlag := &cli.BoolFlag{ + Name: "skip-on-no-mirrors", + Value: false, + Usage: "Skip maintenance flow if there are no mirrors", + Sources: cli.EnvVars("MAINTENANCE_SKIP_ON_NO_MIRRORS"), + } + + flowGrpcAddressFlag := &cli.StringFlag{ + Name: "flow-grpc-address", + Value: "", + Usage: "Address of the flow gRPC server", + Sources: cli.EnvVars("FLOW_GRPC_ADDRESS"), + } + + flowTlsEnabledFlag := &cli.BoolFlag{ + Name: "flow-tls-enabled", + Value: false, + Usage: "Enable TLS for the flow gRPC server", + Sources: cli.EnvVars("FLOW_TLS_ENABLED"), + } + + useMaintenanceTaskQueueFlag := &cli.BoolFlag{ + Name: "use-maintenance-task-queue", + Value: false, + Usage: "Use the maintenance task queue for the worker", + Sources: cli.EnvVars("USE_MAINTENANCE_TASK_QUEUE"), + } + + assumedSkippedMaintenanceWorkflowsFlag := &cli.BoolFlag{ + Name: "assume-skipped-workflow", + Value: false, + Usage: "Skip running maintenance workflows and simply output to catalog", + } + + skipIfK8sServiceMissingFlag := &cli.StringFlag{ + Name: "skip-if-k8s-service-missing", + Value: "", + Usage: "Skip maintenance if the k8s service is missing, generally used during pre-upgrade hook", + } + app := &cli.Command{ Name: "PeerDB Flows CLI", Commands: []*cli.Command{ @@ -85,6 +139,7 @@ func main() { TemporalNamespace: clicmd.String("temporal-namespace"), TemporalMaxConcurrentActivities: int(clicmd.Int("temporal-max-concurrent-activities")), TemporalMaxConcurrentWorkflowTasks: int(clicmd.Int("temporal-max-concurrent-workflow-tasks")), + UseMaintenanceTaskQueue: clicmd.Bool(useMaintenanceTaskQueueFlag.Name), }) if err != nil { return err @@ -100,6 +155,7 @@ func 
main() { temporalNamespaceFlag, temporalMaxConcurrentActivitiesFlag, temporalMaxConcurrentWorkflowTasksFlag, + useMaintenanceTaskQueueFlag, }, }, { @@ -148,6 +204,37 @@ func main() { }) }, }, + { + Name: "maintenance", + Flags: []cli.Flag{ + temporalHostPortFlag, + temporalNamespaceFlag, + maintenanceModeWorkflowFlag, + maintenanceSkipOnApiVersionMatchFlag, + maintenanceSkipOnNoMirrorsFlag, + flowGrpcAddressFlag, + flowTlsEnabledFlag, + useMaintenanceTaskQueueFlag, + assumedSkippedMaintenanceWorkflowsFlag, + skipIfK8sServiceMissingFlag, + }, + Action: func(ctx context.Context, clicmd *cli.Command) error { + temporalHostPort := clicmd.String("temporal-host-port") + + return cmd.MaintenanceMain(ctx, &cmd.MaintenanceCLIParams{ + TemporalHostPort: temporalHostPort, + TemporalNamespace: clicmd.String(temporalNamespaceFlag.Name), + Mode: clicmd.String(maintenanceModeWorkflowFlag.Name), + SkipOnApiVersionMatch: clicmd.Bool(maintenanceSkipOnApiVersionMatchFlag.Name), + SkipOnNoMirrors: clicmd.Bool(maintenanceSkipOnNoMirrorsFlag.Name), + FlowGrpcAddress: clicmd.String(flowGrpcAddressFlag.Name), + FlowTlsEnabled: clicmd.Bool(flowTlsEnabledFlag.Name), + UseMaintenanceTaskQueue: clicmd.Bool(useMaintenanceTaskQueueFlag.Name), + AssumeSkippedMaintenanceWorkflows: clicmd.Bool(assumedSkippedMaintenanceWorkflowsFlag.Name), + SkipIfK8sServiceMissing: clicmd.String(skipIfK8sServiceMissingFlag.Name), + }) + }, + }, }, } @@ -164,5 +251,6 @@ func main() { if err := app.Run(appCtx, os.Args); err != nil { log.Printf("error running app: %+v", err) + panic(err) } } diff --git a/flow/peerdbenv/config.go b/flow/peerdbenv/config.go index e033b8719..9aa9d2c5e 100644 --- a/flow/peerdbenv/config.go +++ b/flow/peerdbenv/config.go @@ -166,3 +166,9 @@ func PeerDBRAPIRequestLoggingEnabled() bool { } return requestLoggingEnabled } + +// PEERDB_MAINTENANCE_MODE_WAIT_ALERT_SECONDS tells how long to wait before alerting that peerdb has been stuck in maintenance mode +// for too long +func 
PeerDBMaintenanceModeWaitAlertSeconds() int { + return getEnvInt("PEERDB_MAINTENANCE_MODE_WAIT_ALERT_SECONDS", 600) +} diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index c86c4616a..566c8ead1 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -8,8 +8,10 @@ import ( "strconv" "time" + "github.com/aws/smithy-go/ptr" "github.com/jackc/pgx/v5" "github.com/jackc/pgx/v5/pgtype" + "github.com/jackc/pgx/v5/pgxpool" "golang.org/x/exp/constraints" "github.com/PeerDB-io/peer-flow/generated/protos" @@ -186,6 +188,14 @@ var DynamicSettings = [...]*protos.DynamicSetting{ ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, TargetForSetting: protos.DynconfTarget_ALL, }, + { + Name: "PEERDB_MAINTENANCE_MODE_ENABLED", + Description: "Whether PeerDB is in maintenance mode, which disables any modifications to mirrors", + DefaultValue: "false", + ValueType: protos.DynconfValueType_BOOL, + ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, + TargetForSetting: protos.DynconfTarget_ALL, + }, } var DynamicIndex = func() map[string]int { @@ -267,6 +277,20 @@ func dynamicConfBool(ctx context.Context, env map[string]string, key string) (bo return value, nil } +func UpdateDynamicSetting(ctx context.Context, pool *pgxpool.Pool, name string, value *string) error { + if pool == nil { + var err error + pool, err = GetCatalogConnectionPoolFromEnv(ctx) + if err != nil { + shared.LoggerFromCtx(ctx).Error("Failed to get catalog connection pool for dynamic setting update", slog.Any("error", err)) + return fmt.Errorf("failed to get catalog connection pool: %w", err) + } + } + _, err := pool.Exec(ctx, `insert into dynamic_settings (config_name, config_value) values ($1, $2) + on conflict (config_name) do update set config_value = $2`, name, value) + return err +} + // PEERDB_SLOT_LAG_MB_ALERT_THRESHOLD, 0 disables slot lag alerting entirely func PeerDBSlotLagMBAlertThreshold(ctx context.Context, env map[string]string) (uint32, error) 
{ return dynamicConfUnsigned[uint32](ctx, env, "PEERDB_SLOT_LAG_MB_ALERT_THRESHOLD") @@ -364,3 +388,11 @@ func PeerDBIntervalSinceLastNormalizeThresholdMinutes(ctx context.Context, env m func PeerDBApplicationNamePerMirrorName(ctx context.Context, env map[string]string) (bool, error) { return dynamicConfBool(ctx, env, "PEERDB_APPLICATION_NAME_PER_MIRROR_NAME") } + +func PeerDBMaintenanceModeEnabled(ctx context.Context, env map[string]string) (bool, error) { + return dynamicConfBool(ctx, env, "PEERDB_MAINTENANCE_MODE_ENABLED") +} + +func UpdatePeerDBMaintenanceModeEnabled(ctx context.Context, pool *pgxpool.Pool, enabled bool) error { + return UpdateDynamicSetting(ctx, pool, "PEERDB_MAINTENANCE_MODE_ENABLED", ptr.String(strconv.FormatBool(enabled))) +} diff --git a/flow/shared/constants.go b/flow/shared/constants.go index 2dc5a8a64..955ecfc4b 100644 --- a/flow/shared/constants.go +++ b/flow/shared/constants.go @@ -11,8 +11,9 @@ type ( const ( // Task Queues - PeerFlowTaskQueue TaskQueueID = "peer-flow-task-queue" - SnapshotFlowTaskQueue TaskQueueID = "snapshot-flow-task-queue" + PeerFlowTaskQueue TaskQueueID = "peer-flow-task-queue" + SnapshotFlowTaskQueue TaskQueueID = "snapshot-flow-task-queue" + MaintenanceFlowTaskQueue TaskQueueID = "maintenance-flow-task-queue" // Queries CDCFlowStateQuery = "q-cdc-flow-state" diff --git a/flow/shared/telemetry/event_types.go b/flow/shared/telemetry/event_types.go index 0d87ba354..a68fab869 100644 --- a/flow/shared/telemetry/event_types.go +++ b/flow/shared/telemetry/event_types.go @@ -3,7 +3,11 @@ package telemetry type EventType string const ( - CreatePeer EventType = "CreatePeer" - CreateMirror EventType = "CreateMirror" - Other EventType = "Other" + CreatePeer EventType = "CreatePeer" + CreateMirror EventType = "CreateMirror" + StartMaintenance EventType = "StartMaintenance" + EndMaintenance EventType = "EndMaintenance" + MaintenanceWait EventType = "MaintenanceWait" + + Other EventType = "Other" ) diff --git 
a/flow/shared/worklow.go b/flow/shared/worklow.go new file mode 100644 index 000000000..c9cafc37e --- /dev/null +++ b/flow/shared/worklow.go @@ -0,0 +1,27 @@ +package shared + +import ( + "context" + "fmt" + "log/slog" + + "go.temporal.io/sdk/client" + + "github.com/PeerDB-io/peer-flow/generated/protos" +) + +func GetWorkflowStatus(ctx context.Context, temporalClient client.Client, workflowID string) (protos.FlowStatus, error) { + res, err := temporalClient.QueryWorkflow(ctx, workflowID, "", FlowStatusQuery) + if err != nil { + slog.Error("failed to query status in workflow with ID "+workflowID, slog.Any("error", err)) + return protos.FlowStatus_STATUS_UNKNOWN, + fmt.Errorf("failed to query status in workflow with ID %s: %w", workflowID, err) + } + var state protos.FlowStatus + if err := res.Get(&state); err != nil { + slog.Error("failed to get status in workflow with ID "+workflowID, slog.Any("error", err)) + return protos.FlowStatus_STATUS_UNKNOWN, + fmt.Errorf("failed to get status in workflow with ID %s: %w", workflowID, err) + } + return state, nil +} diff --git a/flow/workflows/activities.go b/flow/workflows/activities.go index 0b23d10dd..5fe699419 100644 --- a/flow/workflows/activities.go +++ b/flow/workflows/activities.go @@ -3,6 +3,7 @@ package peerflow import "github.com/PeerDB-io/peer-flow/activities" var ( - flowable *activities.FlowableActivity - snapshot *activities.SnapshotActivity + flowable *activities.FlowableActivity + snapshot *activities.SnapshotActivity + maintenance *activities.MaintenanceActivity ) diff --git a/flow/workflows/maintenance_flow.go b/flow/workflows/maintenance_flow.go new file mode 100644 index 000000000..c48750a80 --- /dev/null +++ b/flow/workflows/maintenance_flow.go @@ -0,0 +1,305 @@ +package peerflow + +import ( + "context" + "log/slog" + "time" + + tEnums "go.temporal.io/api/enums/v1" + "go.temporal.io/sdk/client" + "go.temporal.io/sdk/log" + "go.temporal.io/sdk/workflow" + + 
"github.com/PeerDB-io/peer-flow/generated/protos" + "github.com/PeerDB-io/peer-flow/peerdbenv" + "github.com/PeerDB-io/peer-flow/shared" +) + +func getMaintenanceWorkflowOptions(workflowIDPrefix string, taskQueueId shared.TaskQueueID) client.StartWorkflowOptions { + maintenanceWorkflowOptions := client.StartWorkflowOptions{ + WorkflowIDReusePolicy: tEnums.WORKFLOW_ID_REUSE_POLICY_ALLOW_DUPLICATE, + WorkflowIDConflictPolicy: tEnums.WORKFLOW_ID_CONFLICT_POLICY_USE_EXISTING, + TaskQueue: peerdbenv.PeerFlowTaskQueueName(taskQueueId), + ID: workflowIDPrefix, + } + if deploymentUid := peerdbenv.PeerDBDeploymentUID(); deploymentUid != "" { + maintenanceWorkflowOptions.ID += "-" + deploymentUid + } + return maintenanceWorkflowOptions +} + +// RunStartMaintenanceWorkflow is a helper function to start the StartMaintenanceWorkflow with sane defaults +func RunStartMaintenanceWorkflow( + ctx context.Context, + temporalClient client.Client, + input *protos.StartMaintenanceFlowInput, + taskQueueId shared.TaskQueueID, +) (client.WorkflowRun, error) { + workflowOptions := getMaintenanceWorkflowOptions("start-maintenance", taskQueueId) + workflowRun, err := temporalClient.ExecuteWorkflow(ctx, workflowOptions, StartMaintenanceWorkflow, input) + if err != nil { + return nil, err + } + return workflowRun, nil +} + +// RunEndMaintenanceWorkflow is a helper function to start the EndMaintenanceWorkflow with sane defaults +func RunEndMaintenanceWorkflow( + ctx context.Context, + temporalClient client.Client, + input *protos.EndMaintenanceFlowInput, + taskQueueId shared.TaskQueueID, +) (client.WorkflowRun, error) { + workflowOptions := getMaintenanceWorkflowOptions("end-maintenance", taskQueueId) + // Pass the caller's input through rather than a fresh empty struct, so supplied options are not silently dropped. + workflowRun, err := temporalClient.ExecuteWorkflow(ctx, workflowOptions, EndMaintenanceWorkflow, input) + if err != nil { + return nil, err + } + return workflowRun, nil +} + +func StartMaintenanceWorkflow(ctx workflow.Context, input *protos.StartMaintenanceFlowInput)
(*protos.StartMaintenanceFlowOutput, error) { + logger := workflow.GetLogger(ctx) + logger.Info("Starting StartMaintenance workflow", "input", input) + defer runBackgroundAlerter(ctx)() + + maintenanceFlowOutput, err := startMaintenance(ctx, logger) + if err != nil { + slog.Error("Error in StartMaintenance workflow", "error", err) + return nil, err + } + return maintenanceFlowOutput, nil +} + +func startMaintenance(ctx workflow.Context, logger log.Logger) (*protos.StartMaintenanceFlowOutput, error) { + ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 24 * time.Hour, + }) + + snapshotWaitCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 24 * time.Hour, + HeartbeatTimeout: 1 * time.Minute, + }) + waitSnapshotsFuture := workflow.ExecuteActivity(snapshotWaitCtx, + maintenance.WaitForRunningSnapshots, + ) + err := waitSnapshotsFuture.Get(snapshotWaitCtx, nil) + if err != nil { + return nil, err + } + + enableCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 5 * time.Minute, + }) + enableMaintenanceFuture := workflow.ExecuteActivity(enableCtx, maintenance.EnableMaintenanceMode) + + if err := enableMaintenanceFuture.Get(enableCtx, nil); err != nil { + return nil, err + } + + logger.Info("Waiting for all snapshot mirrors to finish snapshotting") + waitSnapshotsPostEnableFuture := workflow.ExecuteActivity(snapshotWaitCtx, + maintenance.WaitForRunningSnapshots, + ) + + if err := waitSnapshotsPostEnableFuture.Get(snapshotWaitCtx, nil); err != nil { + return nil, err + } + + mirrorsList, err := getAllMirrors(ctx) + if err != nil { + return nil, err + } + + runningMirrors, err := pauseAndGetRunningMirrors(ctx, mirrorsList, logger) + if err != nil { + return nil, err + } + + backupCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 2 * time.Minute, + }) + future := workflow.ExecuteActivity(backupCtx, 
maintenance.BackupAllPreviouslyRunningFlows, runningMirrors) + + if err := future.Get(backupCtx, nil); err != nil { + return nil, err + } + version, err := GetPeerDBVersion(ctx) + if err != nil { + return nil, err + } + logger.Info("StartMaintenance workflow completed", "version", version) + return &protos.StartMaintenanceFlowOutput{ + Version: version, + }, nil +} + +func pauseAndGetRunningMirrors( + ctx workflow.Context, + mirrorsList *protos.MaintenanceMirrors, + logger log.Logger, +) (*protos.MaintenanceMirrors, error) { + ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 24 * time.Hour, + HeartbeatTimeout: 1 * time.Minute, + }) + selector := workflow.NewSelector(ctx) + runningMirrors := make([]bool, len(mirrorsList.Mirrors)) + for i, mirror := range mirrorsList.Mirrors { + f := workflow.ExecuteActivity( + ctx, + maintenance.PauseMirrorIfRunning, + mirror, + ) + + selector.AddFuture(f, func(f workflow.Future) { + var wasRunning bool + err := f.Get(ctx, &wasRunning) + if err != nil { + logger.Error("Error checking and pausing mirror", "mirror", mirror, "error", err) + } else { + logger.Info("Finished check and pause for mirror", "mirror", mirror, "wasRunning", wasRunning) + runningMirrors[i] = wasRunning + } + }) + } + onlyRunningMirrors := make([]*protos.MaintenanceMirror, 0, len(mirrorsList.Mirrors)) + for range mirrorsList.Mirrors { + selector.Select(ctx) + if err := ctx.Err(); err != nil { + return nil, err + } + } + for i, mirror := range mirrorsList.Mirrors { + if runningMirrors[i] { + onlyRunningMirrors = append(onlyRunningMirrors, mirror) + } + } + return &protos.MaintenanceMirrors{ + Mirrors: onlyRunningMirrors, + }, nil +} + +func getAllMirrors(ctx workflow.Context) (*protos.MaintenanceMirrors, error) { + ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 2 * time.Minute, + }) + getMirrorsFuture := workflow.ExecuteActivity(ctx, maintenance.GetAllMirrors) + var mirrorsList 
protos.MaintenanceMirrors + err := getMirrorsFuture.Get(ctx, &mirrorsList) + return &mirrorsList, err +} + +func EndMaintenanceWorkflow(ctx workflow.Context, input *protos.EndMaintenanceFlowInput) (*protos.EndMaintenanceFlowOutput, error) { + logger := workflow.GetLogger(ctx) + logger.Info("Starting EndMaintenance workflow", "input", input) + defer runBackgroundAlerter(ctx)() + + flowOutput, err := endMaintenance(ctx, logger) + if err != nil { + slog.Error("Error in EndMaintenance workflow", "error", err) + return nil, err + } + return flowOutput, nil +} + +func endMaintenance(ctx workflow.Context, logger log.Logger) (*protos.EndMaintenanceFlowOutput, error) { + ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 24 * time.Hour, + HeartbeatTimeout: 1 * time.Minute, + }) + + mirrorsList, err := resumeBackedUpMirrors(ctx, logger) + if err != nil { + return nil, err + } + + clearBackupsFuture := workflow.ExecuteActivity(ctx, maintenance.CleanBackedUpFlows) + if err := clearBackupsFuture.Get(ctx, nil); err != nil { + return nil, err + } + + logger.Info("Resumed backed up mirrors", "mirrors", mirrorsList) + + disableCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 5 * time.Minute, + }) + + future := workflow.ExecuteActivity(disableCtx, maintenance.DisableMaintenanceMode) + if err := future.Get(disableCtx, nil); err != nil { + return nil, err + } + logger.Info("Disabled maintenance mode") + version, err := GetPeerDBVersion(ctx) + if err != nil { + return nil, err + } + + logger.Info("EndMaintenance workflow completed", "version", version) + return &protos.EndMaintenanceFlowOutput{ + Version: version, + }, nil +} + +func resumeBackedUpMirrors(ctx workflow.Context, logger log.Logger) (*protos.MaintenanceMirrors, error) { + future := workflow.ExecuteActivity(ctx, maintenance.GetBackedUpFlows) + ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ + StartToCloseTimeout: 5 * 
time.Minute, + }) + var mirrorsList *protos.MaintenanceMirrors + err := future.Get(ctx, &mirrorsList) + if err != nil { + return nil, err + } + + selector := workflow.NewSelector(ctx) + for _, mirror := range mirrorsList.Mirrors { + activityInput := mirror + f := workflow.ExecuteActivity( + ctx, + maintenance.ResumeMirror, + activityInput, + ) + + selector.AddFuture(f, func(f workflow.Future) { + err := f.Get(ctx, nil) + if err != nil { + logger.Error("Error resuming mirror", "mirror", mirror, "error", err) + } else { + logger.Info("Finished resuming mirror", "mirror", mirror) + } + }) + } + + for range mirrorsList.Mirrors { + selector.Select(ctx) + if err := ctx.Err(); err != nil { + return nil, err + } + } + return mirrorsList, nil +} + +// runBackgroundAlerter Alerts every few minutes regarding currently running maintenance workflows +func runBackgroundAlerter(ctx workflow.Context) workflow.CancelFunc { + activityCtx, cancelActivity := workflow.WithCancel(ctx) + alerterCtx := workflow.WithActivityOptions(activityCtx, workflow.ActivityOptions{ + StartToCloseTimeout: 24 * time.Hour, + HeartbeatTimeout: 1 * time.Minute, + }) + workflow.ExecuteActivity(alerterCtx, maintenance.BackgroundAlerter) + return cancelActivity +} + +func GetPeerDBVersion(wCtx workflow.Context) (string, error) { + activityCtx := workflow.WithLocalActivityOptions(wCtx, workflow.LocalActivityOptions{ + StartToCloseTimeout: time.Minute, + }) + getVersionActivity := func(ctx context.Context) (string, error) { + return peerdbenv.PeerDBVersionShaShort(), nil + } + var version string + future := workflow.ExecuteLocalActivity(activityCtx, getVersionActivity) + err := future.Get(activityCtx, &version) + return version, err +} diff --git a/flow/workflows/register.go b/flow/workflows/register.go index 35adf135b..2c4b32ba3 100644 --- a/flow/workflows/register.go +++ b/flow/workflows/register.go @@ -18,4 +18,7 @@ func RegisterFlowWorkerWorkflows(w worker.WorkflowRegistry) { 
w.RegisterWorkflow(GlobalScheduleManagerWorkflow) w.RegisterWorkflow(HeartbeatFlowWorkflow) w.RegisterWorkflow(RecordSlotSizeWorkflow) + + w.RegisterWorkflow(StartMaintenanceWorkflow) + w.RegisterWorkflow(EndMaintenanceWorkflow) } diff --git a/nexus/catalog/migrations/V40__maintenance_flows.sql b/nexus/catalog/migrations/V40__maintenance_flows.sql new file mode 100644 index 000000000..e43e8eb92 --- /dev/null +++ b/nexus/catalog/migrations/V40__maintenance_flows.sql @@ -0,0 +1,29 @@ +CREATE SCHEMA IF NOT EXISTS maintenance; + +CREATE TABLE IF NOT EXISTS maintenance.maintenance_flows +( + id SERIAL PRIMARY KEY, + flow_id BIGINT NOT NULL, + flow_name TEXT NOT NULL, + workflow_id TEXT NOT NULL, + flow_created_at TIMESTAMP NOT NULL, + is_cdc BOOLEAN NOT NULL, + state TEXT NOT NULL, + restored_at TIMESTAMP, + from_version TEXT, + to_version TEXT +); + +CREATE INDEX IF NOT EXISTS idx_maintenance_flows_state ON maintenance.maintenance_flows (state); + +CREATE TABLE IF NOT EXISTS maintenance.start_maintenance_outputs +( + id SERIAL PRIMARY KEY, + api_version TEXT NOT NULL, + cli_version TEXT NOT NULL, + skipped BOOLEAN NOT NULL, + skipped_reason TEXT, + created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP +); + +CREATE INDEX IF NOT EXISTS idx_start_maintenance_outputs_created_at ON maintenance.start_maintenance_outputs (created_at DESC); diff --git a/protos/flow.proto b/protos/flow.proto index d1681fd8d..de7bf740d 100644 --- a/protos/flow.proto +++ b/protos/flow.proto @@ -466,3 +466,28 @@ message DropFlowActivityInput { string peer_name = 2; } +message StartMaintenanceFlowInput { +} + +message StartMaintenanceFlowOutput { + string version = 1; +} + +message EndMaintenanceFlowInput { +} + +message EndMaintenanceFlowOutput { + string version = 1; +} + +message MaintenanceMirror { + int64 mirror_id = 1; + string mirror_name = 2; + string workflow_id = 3; + bool is_cdc = 4; + google.protobuf.Timestamp mirror_created_at = 5; +} + +message MaintenanceMirrors { + repeated 
MaintenanceMirror mirrors = 1; +} diff --git a/protos/route.proto b/protos/route.proto index 0265f221e..1c6d38ed6 100644 --- a/protos/route.proto +++ b/protos/route.proto @@ -441,6 +441,38 @@ message ResyncMirrorRequest { message ResyncMirrorResponse { } +message PeerDBStateRequest { +} + +enum InstanceStatus { + INSTANCE_STATUS_UNKNOWN = 0; + INSTANCE_STATUS_READY = 1; + INSTANCE_STATUS_MAINTENANCE = 3; +} + +message InstanceInfoRequest { +} + +message InstanceInfoResponse { + InstanceStatus status = 1; +} + +enum MaintenanceStatus { + MAINTENANCE_STATUS_UNKNOWN = 0; + MAINTENANCE_STATUS_START = 1; + MAINTENANCE_STATUS_END = 2; +} + +message MaintenanceRequest { + MaintenanceStatus status = 1; + bool use_peerflow_task_queue = 2; +} + +message MaintenanceResponse { + string workflow_id = 1; + string run_id = 2; +} + service FlowService { rpc ValidatePeer(ValidatePeerRequest) returns (ValidatePeerResponse) { option (google.api.http) = { @@ -595,4 +627,12 @@ service FlowService { rpc ResyncMirror(ResyncMirrorRequest) returns (ResyncMirrorResponse) { option (google.api.http) = { post: "/v1/mirrors/resync", body: "*" }; } + + rpc GetInstanceInfo(InstanceInfoRequest) returns (InstanceInfoResponse) { + option (google.api.http) = { get: "/v1/instance/info" }; + } + + rpc Maintenance(MaintenanceRequest) returns (MaintenanceResponse) { + option (google.api.http) = { post: "/v1/instance/maintenance", body: "*" }; + } } diff --git a/stacks/flow.Dockerfile b/stacks/flow.Dockerfile index 0f997777e..13fc5b089 100644 --- a/stacks/flow.Dockerfile +++ b/stacks/flow.Dockerfile @@ -45,6 +45,8 @@ FROM flow-base AS flow-worker # Sane defaults for OpenTelemetry ENV OTEL_METRIC_EXPORT_INTERVAL=10000 ENV OTEL_EXPORTER_OTLP_COMPRESSION=gzip +ARG PEERDB_VERSION_SHA_SHORT +ENV PEERDB_VERSION_SHA_SHORT=${PEERDB_VERSION_SHA_SHORT} ENTRYPOINT [\ "./peer-flow",\ @@ -52,7 +54,20 @@ ENTRYPOINT [\ ] FROM flow-base AS flow-snapshot-worker + +ARG PEERDB_VERSION_SHA_SHORT +ENV 
PEERDB_VERSION_SHA_SHORT=${PEERDB_VERSION_SHA_SHORT} ENTRYPOINT [\ "./peer-flow",\ "snapshot-worker"\ ] + + +FROM flow-base AS flow-maintenance + +ARG PEERDB_VERSION_SHA_SHORT +ENV PEERDB_VERSION_SHA_SHORT=${PEERDB_VERSION_SHA_SHORT} +ENTRYPOINT [\ + "./peer-flow",\ + "maintenance"\ + ] diff --git a/stacks/peerdb-server.Dockerfile b/stacks/peerdb-server.Dockerfile index 689e3cf5b..3e9db5240 100644 --- a/stacks/peerdb-server.Dockerfile +++ b/stacks/peerdb-server.Dockerfile @@ -29,4 +29,8 @@ RUN apk add --no-cache ca-certificates postgresql-client curl iputils && \ USER peerdb WORKDIR /home/peerdb COPY --from=builder --chown=peerdb /root/nexus/target/release/peerdb-server . + +ARG PEERDB_VERSION_SHA_SHORT +ENV PEERDB_VERSION_SHA_SHORT=${PEERDB_VERSION_SHA_SHORT} + ENTRYPOINT ["./peerdb-server"] diff --git a/stacks/peerdb-ui.Dockerfile b/stacks/peerdb-ui.Dockerfile index cd99e61a5..def0aad72 100644 --- a/stacks/peerdb-ui.Dockerfile +++ b/stacks/peerdb-ui.Dockerfile @@ -35,5 +35,8 @@ ENV PORT 3000 # set hostname to localhost ENV HOSTNAME "0.0.0.0" +ARG PEERDB_VERSION_SHA_SHORT +ENV PEERDB_VERSION_SHA_SHORT=${PEERDB_VERSION_SHA_SHORT} + ENTRYPOINT ["/app/entrypoint.sh"] CMD ["node", "server.js"] From 79732fdedb3200979b4df9c91befb1bce16feda1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Thu, 14 Nov 2024 21:03:36 +0000 Subject: [PATCH 10/61] PEERDB_CLICKHOUSE_MAX_INSERT_THREADS (#2255) https://clickhouse.com/docs/en/operations/settings/settings#max_insert_threads --- flow/connectors/clickhouse/clickhouse.go | 12 ++++++++++-- flow/e2e/clickhouse/clickhouse.go | 4 ++-- flow/e2e/clickhouse/peer_flow_ch_test.go | 4 ++-- flow/peerdbenv/dynamicconf.go | 12 ++++++++++++ 4 files changed, 26 insertions(+), 6 deletions(-) diff --git a/flow/connectors/clickhouse/clickhouse.go b/flow/connectors/clickhouse/clickhouse.go index 4e8975701..63ccea693 100644 --- a/flow/connectors/clickhouse/clickhouse.go +++ b/flow/connectors/clickhouse/clickhouse.go @@ -128,7 +128,7 
@@ func NewClickHouseConnector( config *protos.ClickhouseConfig, ) (*ClickHouseConnector, error) { logger := shared.LoggerFromCtx(ctx) - database, err := Connect(ctx, config) + database, err := Connect(ctx, env, config) if err != nil { return nil, fmt.Errorf("failed to open connection to ClickHouse peer: %w", err) } @@ -205,7 +205,7 @@ func NewClickHouseConnector( return connector, nil } -func Connect(ctx context.Context, config *protos.ClickhouseConfig) (clickhouse.Conn, error) { +func Connect(ctx context.Context, env map[string]string, config *protos.ClickhouseConfig) (clickhouse.Conn, error) { var tlsSetting *tls.Config if !config.DisableTls { tlsSetting = &tls.Config{MinVersion: tls.VersionTLS13} @@ -228,6 +228,13 @@ func Connect(ctx context.Context, config *protos.ClickhouseConfig) (clickhouse.C tlsSetting.RootCAs = caPool } + var settings clickhouse.Settings + if maxInsertThreads, err := peerdbenv.PeerDBClickHouseMaxInsertThreads(ctx, env); err != nil { + return nil, fmt.Errorf("failed to load max_insert_threads config: %w", err) + } else if maxInsertThreads != 0 { + settings = clickhouse.Settings{"max_insert_threads": maxInsertThreads} + } + conn, err := clickhouse.Open(&clickhouse.Options{ Addr: []string{fmt.Sprintf("%s:%d", config.Host, config.Port)}, Auth: clickhouse.Auth{ @@ -245,6 +252,7 @@ func Connect(ctx context.Context, config *protos.ClickhouseConfig) (clickhouse.C {Name: "peerdb"}, }, }, + Settings: settings, DialTimeout: 3600 * time.Second, ReadTimeout: 3600 * time.Second, }) diff --git a/flow/e2e/clickhouse/clickhouse.go b/flow/e2e/clickhouse/clickhouse.go index 79ff2aa7b..975676152 100644 --- a/flow/e2e/clickhouse/clickhouse.go +++ b/flow/e2e/clickhouse/clickhouse.go @@ -92,7 +92,7 @@ func (s ClickHouseSuite) Teardown() { } func (s ClickHouseSuite) GetRows(table string, cols string) (*model.QRecordBatch, error) { - ch, err := connclickhouse.Connect(context.Background(), s.Peer().GetClickhouseConfig()) + ch, err := 
connclickhouse.Connect(context.Background(), nil, s.Peer().GetClickhouseConfig()) if err != nil { return nil, err } @@ -203,7 +203,7 @@ func SetupSuite(t *testing.T) ClickHouseSuite { s3Helper: s3Helper, } - ch, err := connclickhouse.Connect(context.Background(), s.PeerForDatabase("default").GetClickhouseConfig()) + ch, err := connclickhouse.Connect(context.Background(), nil, s.PeerForDatabase("default").GetClickhouseConfig()) require.NoError(t, err, "failed to connect to clickhouse") err = ch.Exec(context.Background(), "CREATE DATABASE e2e_test_"+suffix) require.NoError(t, err, "failed to create clickhouse database") diff --git a/flow/e2e/clickhouse/peer_flow_ch_test.go b/flow/e2e/clickhouse/peer_flow_ch_test.go index 8b2857310..9c4fa2a16 100644 --- a/flow/e2e/clickhouse/peer_flow_ch_test.go +++ b/flow/e2e/clickhouse/peer_flow_ch_test.go @@ -505,7 +505,7 @@ func (s ClickHouseSuite) WeirdTable(tableName string) { }) e2e.EnvWaitForFinished(s.t, env, 3*time.Minute) // now test weird names with rename based resync - ch, err := connclickhouse.Connect(context.Background(), s.Peer().GetClickhouseConfig()) + ch, err := connclickhouse.Connect(context.Background(), nil, s.Peer().GetClickhouseConfig()) require.NoError(s.t, err) require.NoError(s.t, ch.Exec(context.Background(), fmt.Sprintf("DROP TABLE `%s`", dstTableName))) require.NoError(s.t, ch.Close()) @@ -523,7 +523,7 @@ func (s ClickHouseSuite) WeirdTable(tableName string) { }) e2e.EnvWaitForFinished(s.t, env, 3*time.Minute) // now test weird names with exchange based resync - ch, err = connclickhouse.Connect(context.Background(), s.Peer().GetClickhouseConfig()) + ch, err = connclickhouse.Connect(context.Background(), nil, s.Peer().GetClickhouseConfig()) require.NoError(s.t, err) require.NoError(s.t, ch.Exec(context.Background(), fmt.Sprintf("TRUNCATE TABLE `%s`", dstTableName))) require.NoError(s.t, ch.Close()) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index 566c8ead1..f3c2de097 100644 
--- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -172,6 +172,14 @@ var DynamicSettings = [...]*protos.DynamicSetting{ ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, TargetForSetting: protos.DynconfTarget_CLICKHOUSE, }, + { + Name: "PEERDB_CLICKHOUSE_MAX_INSERT_THREADS", + Description: "Configures max_insert_threads setting on clickhouse for inserting into destination table. Setting left unset when 0", + DefaultValue: "0", + ValueType: protos.DynconfValueType_UINT, + ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, + TargetForSetting: protos.DynconfTarget_CLICKHOUSE, + }, { Name: "PEERDB_INTERVAL_SINCE_LAST_NORMALIZE_THRESHOLD_MINUTES", Description: "Duration in minutes since last normalize to start alerting, 0 disables all alerting entirely", @@ -362,6 +370,10 @@ func PeerDBEnableClickHousePrimaryUpdate(ctx context.Context, env map[string]str return dynamicConfBool(ctx, env, "PEERDB_CLICKHOUSE_ENABLE_PRIMARY_UPDATE") } +func PeerDBClickHouseMaxInsertThreads(ctx context.Context, env map[string]string) (int64, error) { + return dynamicConfSigned[int64](ctx, env, "PEERDB_CLICKHOUSE_MAX_INSERT_THREADS") +} + func PeerDBSnowflakeMergeParallelism(ctx context.Context, env map[string]string) (int64, error) { return dynamicConfSigned[int64](ctx, env, "PEERDB_SNOWFLAKE_MERGE_PARALLELISM") } From 8c02a5eb9c4a66153554d09b92152f31bd86a2ea Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Thu, 14 Nov 2024 22:00:06 +0000 Subject: [PATCH 11/61] PEERDB_CLICKHOUSE_PARALLEL_NORMALIZE (#2256) distributes queries across multiple connections so normalization can be concurrent & potentially distributed across nodes uses channel since that'll allow unevenly distributed changes (one table may have much more activity than others) even out across connections --- flow/connectors/clickhouse/normalize.go | 72 +++++++++++++++++++------ flow/peerdbenv/dynamicconf.go | 24 ++++++--- 2 files changed, 74 insertions(+), 22 deletions(-) diff 
--git a/flow/connectors/clickhouse/normalize.go b/flow/connectors/clickhouse/normalize.go index d5357c9a1..fb221096c 100644 --- a/flow/connectors/clickhouse/normalize.go +++ b/flow/connectors/clickhouse/normalize.go @@ -12,6 +12,9 @@ import ( "strings" "time" + "github.com/ClickHouse/clickhouse-go/v2" + "golang.org/x/sync/errgroup" + "github.com/PeerDB-io/peer-flow/datatypes" "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" @@ -262,8 +265,7 @@ func (c *ClickHouseConnector) NormalizeRecords( }, nil } - err = c.copyAvroStagesToDestination(ctx, req.FlowJobName, normBatchID, req.SyncBatchID) - if err != nil { + if err := c.copyAvroStagesToDestination(ctx, req.FlowJobName, normBatchID, req.SyncBatchID); err != nil { return nil, fmt.Errorf("failed to copy avro stages to destination: %w", err) } @@ -278,9 +280,48 @@ func (c *ClickHouseConnector) NormalizeRecords( return nil, err } + enablePrimaryUpdate, err := peerdbenv.PeerDBEnableClickHousePrimaryUpdate(ctx, req.Env) + if err != nil { + return nil, err + } + + parallelNormalize, err := peerdbenv.PeerDBClickHouseParallelNormalize(ctx, req.Env) + if err != nil { + return nil, err + } + parallelNormalize = min(max(parallelNormalize, 1), len(destinationTableNames)) + if parallelNormalize > 1 { + c.logger.Info("normalizing in parallel", slog.Int("connections", parallelNormalize)) + } + + queries := make(chan string) rawTbl := c.getRawTableName(req.FlowJobName) - // model the raw table data as inserts. 
+ group, errCtx := errgroup.WithContext(ctx) + for i := range parallelNormalize { + group.Go(func() error { + var chConn clickhouse.Conn + if i == 0 { + chConn = c.database + } else { + var err error + chConn, err = Connect(errCtx, req.Env, c.config) + if err != nil { + return err + } + defer chConn.Close() + } + + for query := range queries { + c.logger.Info("normalizing batch", slog.String("query", query)) + if err := chConn.Exec(errCtx, query); err != nil { + return fmt.Errorf("error while inserting into normalized table: %w", err) + } + } + return nil + }) + } + for _, tbl := range destinationTableNames { // SELECT projection FROM raw_table WHERE _peerdb_batch_id > normalize_batch_id AND _peerdb_batch_id <= sync_batch_id selectQuery := strings.Builder{} @@ -299,11 +340,6 @@ func (c *ClickHouseConnector) NormalizeRecords( } } - enablePrimaryUpdate, err := peerdbenv.PeerDBEnableClickHousePrimaryUpdate(ctx, req.Env) - if err != nil { - return nil, err - } - projection := strings.Builder{} projectionUpdate := strings.Builder{} @@ -338,6 +374,7 @@ func (c *ClickHouseConnector) NormalizeRecords( var err error clickHouseType, err = colType.ToDWHColumnType(protos.DBType_CLICKHOUSE) if err != nil { + close(queries) return nil, fmt.Errorf("error while converting column type to clickhouse type: %w", err) } } @@ -433,15 +470,19 @@ func (c *ClickHouseConnector) NormalizeRecords( insertIntoSelectQuery.WriteString(colSelector.String()) insertIntoSelectQuery.WriteString(selectQuery.String()) - q := insertIntoSelectQuery.String() - - if err := c.execWithLogging(ctx, q); err != nil { - return nil, fmt.Errorf("error while inserting into normalized table: %w", err) + select { + case queries <- insertIntoSelectQuery.String(): + case <-errCtx.Done(): + close(queries) + return nil, ctx.Err() } } + close(queries) + if err := group.Wait(); err != nil { + return nil, err + } - err = c.UpdateNormalizeBatchID(ctx, req.FlowJobName, req.SyncBatchID) - if err != nil { + if err := 
c.UpdateNormalizeBatchID(ctx, req.FlowJobName, req.SyncBatchID); err != nil { c.logger.Error("[clickhouse] error while updating normalize batch id", slog.Int64("BatchID", req.SyncBatchID), slog.Any("error", err)) return nil, err } @@ -510,8 +551,7 @@ func (c *ClickHouseConnector) copyAvroStagesToDestination( ctx context.Context, flowJobName string, normBatchID, syncBatchID int64, ) error { for s := normBatchID + 1; s <= syncBatchID; s++ { - err := c.copyAvroStageToDestination(ctx, flowJobName, s) - if err != nil { + if err := c.copyAvroStageToDestination(ctx, flowJobName, s); err != nil { return fmt.Errorf("failed to copy avro stage to destination: %w", err) } } diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index f3c2de097..f149bf37b 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -180,6 +180,14 @@ var DynamicSettings = [...]*protos.DynamicSetting{ ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, TargetForSetting: protos.DynconfTarget_CLICKHOUSE, }, + { + Name: "PEERDB_CLICKHOUSE_PARALLEL_NORMALIZE", + Description: "Divide tables in batch into N insert selects. 
Helps distribute load to multiple nodes", + DefaultValue: "0", + ValueType: protos.DynconfValueType_INT, + ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, + TargetForSetting: protos.DynconfTarget_CLICKHOUSE, + }, { Name: "PEERDB_INTERVAL_SINCE_LAST_NORMALIZE_THRESHOLD_MINUTES", Description: "Duration in minutes since last normalize to start alerting, 0 disables all alerting entirely", @@ -256,8 +264,8 @@ func dynamicConfSigned[T constraints.Signed](ctx context.Context, env map[string return strconv.ParseInt(value, 10, 64) }) if err != nil { - shared.LoggerFromCtx(ctx).Error("Failed to parse as int64", slog.Any("error", err)) - return 0, fmt.Errorf("failed to parse as int64: %w", err) + shared.LoggerFromCtx(ctx).Error("Failed to parse as int64", slog.String("key", key), slog.Any("error", err)) + return 0, fmt.Errorf("failed to parse %s as int64: %w", key, err) } return T(value), nil @@ -268,8 +276,8 @@ func dynamicConfUnsigned[T constraints.Unsigned](ctx context.Context, env map[st return strconv.ParseUint(value, 10, 64) }) if err != nil { - shared.LoggerFromCtx(ctx).Error("Failed to parse as uint64", slog.Any("error", err)) - return 0, fmt.Errorf("failed to parse as uint64: %w", err) + shared.LoggerFromCtx(ctx).Error("Failed to parse as uint64", slog.String("key", key), slog.Any("error", err)) + return 0, fmt.Errorf("failed to parse %s as uint64: %w", key, err) } return T(value), nil @@ -278,8 +286,8 @@ func dynamicConfUnsigned[T constraints.Unsigned](ctx context.Context, env map[st func dynamicConfBool(ctx context.Context, env map[string]string, key string) (bool, error) { value, err := dynLookupConvert(ctx, env, key, strconv.ParseBool) if err != nil { - shared.LoggerFromCtx(ctx).Error("Failed to parse bool", slog.Any("error", err)) - return false, fmt.Errorf("failed to parse bool: %w", err) + shared.LoggerFromCtx(ctx).Error("Failed to parse bool", slog.String("key", key), slog.Any("error", err)) + return false, fmt.Errorf("failed to parse %s as bool: 
%w", key, err) } return value, nil @@ -374,6 +382,10 @@ func PeerDBClickHouseMaxInsertThreads(ctx context.Context, env map[string]string return dynamicConfSigned[int64](ctx, env, "PEERDB_CLICKHOUSE_MAX_INSERT_THREADS") } +func PeerDBClickHouseParallelNormalize(ctx context.Context, env map[string]string) (int, error) { + return dynamicConfSigned[int](ctx, env, "PEERDB_CLICKHOUSE_PARALLEL_NORMALIZE") +} + func PeerDBSnowflakeMergeParallelism(ctx context.Context, env map[string]string) (int64, error) { return dynamicConfSigned[int64](ctx, env, "PEERDB_SNOWFLAKE_MERGE_PARALLELISM") } From 593093e522406e09fa1a30c0a30998f5761bd524 Mon Sep 17 00:00:00 2001 From: Kevin Biju <52661649+heavycrystal@users.noreply.github.com> Date: Fri, 15 Nov 2024 15:02:26 +0530 Subject: [PATCH 12/61] use disconnected ctx to ensure snapshot connection drops (#2258) Temporal has a thing for everything may close #2162 --- flow/workflows/snapshot_flow.go | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/flow/workflows/snapshot_flow.go b/flow/workflows/snapshot_flow.go index c8b6a3fd2..d4f494d1f 100644 --- a/flow/workflows/snapshot_flow.go +++ b/flow/workflows/snapshot_flow.go @@ -274,6 +274,13 @@ func (s *SnapshotFlowExecution) cloneTablesWithSlot( if err != nil { return fmt.Errorf("failed to setup replication: %w", err) } + defer func() { + dCtx, cancel := workflow.NewDisconnectedContext(sessionCtx) + defer cancel() + if err := s.closeSlotKeepAlive(dCtx); err != nil { + s.logger.Error("failed to close slot keep alive", slog.Any("error", err)) + } + }() s.logger.Info(fmt.Sprintf("cloning %d tables in parallel", numTablesInParallel)) if err := s.cloneTables(ctx, @@ -283,13 +290,10 @@ func (s *SnapshotFlowExecution) cloneTablesWithSlot( slotInfo.SupportsTidScans, numTablesInParallel, ); err != nil { + s.logger.Error("failed to clone tables", slog.Any("error", err)) return fmt.Errorf("failed to clone tables: %w", err) } - if err := s.closeSlotKeepAlive(sessionCtx); err 
!= nil { - return fmt.Errorf("failed to close slot keep alive: %w", err) - } - return nil } From 27996dbfc53c22ed61e43e6e31c117eb12cbbd61 Mon Sep 17 00:00:00 2001 From: Kaushik Iska Date: Fri, 15 Nov 2024 10:06:41 -0600 Subject: [PATCH 13/61] Update end time for batches where end time is not set (#2260) --- flow/connectors/utils/monitoring/monitoring.go | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/flow/connectors/utils/monitoring/monitoring.go b/flow/connectors/utils/monitoring/monitoring.go index 9c7397004..98a62ec65 100644 --- a/flow/connectors/utils/monitoring/monitoring.go +++ b/flow/connectors/utils/monitoring/monitoring.go @@ -96,8 +96,10 @@ func UpdateEndTimeForCDCBatch( batchID int64, ) error { _, err := pool.Exec(ctx, - "UPDATE peerdb_stats.cdc_batches SET end_time=$1 WHERE flow_name=$2 AND batch_id=$3", - time.Now(), flowJobName, batchID) + `UPDATE peerdb_stats.cdc_batches + SET end_time = COALESCE(end_time, NOW()) + WHERE flow_name = $1 AND batch_id <= $2`, + flowJobName, batchID) if err != nil { return fmt.Errorf("error while updating batch in cdc_batch: %w", err) } From cd583155dec65a7de7dbb97f231d39ccb72b058f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Fri, 15 Nov 2024 16:24:57 +0000 Subject: [PATCH 14/61] cdc graph: fix filtering not including flow_name (#2261) --- flow/cmd/mirror_status.go | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/flow/cmd/mirror_status.go b/flow/cmd/mirror_status.go index 58cf20a80..156185054 100644 --- a/flow/cmd/mirror_status.go +++ b/flow/cmd/mirror_status.go @@ -215,8 +215,8 @@ func (h *FlowRequestHandler) CDCGraph(ctx context.Context, req *protos.GraphRequ } rows, err := h.pool.Query(ctx, `select tm, coalesce(sum(rows_in_batch), 0) from generate_series(date_trunc($2, now() - $1::INTERVAL * 30), now(), $1::INTERVAL) tm - left join peerdb_stats.cdc_batches on start_time >= tm and start_time < tm + $1::INTERVAL - group by 1 order by 1`, 
req.AggregateType, truncField) + left join peerdb_stats.cdc_batches on start_time >= tm and start_time < tm + $1::INTERVAL and flow_name = $3 + group by 1 order by 1`, req.AggregateType, truncField, req.FlowJobName) if err != nil { return nil, err } From 2c52d4f4f19f94f7f12fcd3bf6c8cc9d88edca78 Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Mon, 18 Nov 2024 22:58:35 +0530 Subject: [PATCH 15/61] Postgres to Postgres: Introduce support for more types (#2238) - TSTZRange: Introduce logic to construct the final string in range format - Supports JSON array - Supports JSONB and JSONB array --- .../bigquery/merge_stmt_generator.go | 2 +- flow/connectors/bigquery/qvalue_convert.go | 2 +- flow/connectors/postgres/postgres.go | 1 + flow/connectors/postgres/qvalue_convert.go | 75 +++++++++++++++++-- .../snowflake/merge_stmt_generator.go | 2 +- flow/model/qrecord_copy_from_source.go | 13 +++- flow/model/qvalue/avro_converter.go | 8 +- flow/model/qvalue/kind.go | 11 ++- flow/model/qvalue/qvalue.go | 21 +++++- 9 files changed, 121 insertions(+), 14 deletions(-) diff --git a/flow/connectors/bigquery/merge_stmt_generator.go b/flow/connectors/bigquery/merge_stmt_generator.go index e903ef586..5ee4f883c 100644 --- a/flow/connectors/bigquery/merge_stmt_generator.go +++ b/flow/connectors/bigquery/merge_stmt_generator.go @@ -34,7 +34,7 @@ func (m *mergeStmtGenerator) generateFlattenedCTE(dstTable string, normalizedTab var castStmt string shortCol := m.shortColumn[column.Name] switch qvalue.QValueKind(colType) { - case qvalue.QValueKindJSON, qvalue.QValueKindHStore: + case qvalue.QValueKindJSON, qvalue.QValueKindJSONB, qvalue.QValueKindHStore: // if the type is JSON, then just extract JSON castStmt = fmt.Sprintf("CAST(PARSE_JSON(JSON_VALUE(_peerdb_data, '$.%s'),wide_number_mode=>'round') AS %s) AS `%s`", column.Name, bqTypeString, shortCol) diff --git a/flow/connectors/bigquery/qvalue_convert.go 
b/flow/connectors/bigquery/qvalue_convert.go index d2d9d9f0c..aa798641a 100644 --- a/flow/connectors/bigquery/qvalue_convert.go +++ b/flow/connectors/bigquery/qvalue_convert.go @@ -34,7 +34,7 @@ func qValueKindToBigQueryType(columnDescription *protos.FieldDescription, nullab case qvalue.QValueKindString: bqField.Type = bigquery.StringFieldType // json also is stored as string for now - case qvalue.QValueKindJSON, qvalue.QValueKindHStore: + case qvalue.QValueKindJSON, qvalue.QValueKindJSONB, qvalue.QValueKindHStore: bqField.Type = bigquery.JSONFieldType // time related case qvalue.QValueKindTimestamp, qvalue.QValueKindTimestampTZ: diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index b179e2d07..593a94aa4 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -1187,6 +1187,7 @@ func (c *PostgresConnector) SyncFlowCleanup(ctx context.Context, jobName string) if err := syncFlowCleanupTx.Commit(ctx); err != nil { return fmt.Errorf("unable to commit transaction for sync flow cleanup: %w", err) } + return nil } diff --git a/flow/connectors/postgres/qvalue_convert.go b/flow/connectors/postgres/qvalue_convert.go index d359212bd..fe2489ed3 100644 --- a/flow/connectors/postgres/qvalue_convert.go +++ b/flow/connectors/postgres/qvalue_convert.go @@ -62,8 +62,10 @@ func (c *PostgresConnector) postgresOIDToQValueKind(recvOID uint32) qvalue.QValu return qvalue.QValueKindString case pgtype.ByteaOID: return qvalue.QValueKindBytes - case pgtype.JSONOID, pgtype.JSONBOID: + case pgtype.JSONOID: return qvalue.QValueKindJSON + case pgtype.JSONBOID: + return qvalue.QValueKindJSONB case pgtype.UUIDOID: return qvalue.QValueKindUUID case pgtype.TimeOID: @@ -104,8 +106,14 @@ func (c *PostgresConnector) postgresOIDToQValueKind(recvOID uint32) qvalue.QValu return qvalue.QValueKindArrayTimestampTZ case pgtype.TextArrayOID, pgtype.VarcharArrayOID, pgtype.BPCharArrayOID: return qvalue.QValueKindArrayString + case 
pgtype.JSONArrayOID: + return qvalue.QValueKindArrayJSON + case pgtype.JSONBArrayOID: + return qvalue.QValueKindArrayJSONB case pgtype.IntervalOID: return qvalue.QValueKindInterval + case pgtype.TstzrangeOID: + return qvalue.QValueKindTSTZRange default: typeName, ok := pgtype.NewMap().TypeForOID(recvOID) if !ok { @@ -161,6 +169,8 @@ func qValueKindToPostgresType(colTypeStr string) string { return "BYTEA" case qvalue.QValueKindJSON: return "JSON" + case qvalue.QValueKindJSONB: + return "JSONB" case qvalue.QValueKindHStore: return "HSTORE" case qvalue.QValueKindUUID: @@ -203,6 +213,10 @@ func qValueKindToPostgresType(colTypeStr string) string { return "BOOLEAN[]" case qvalue.QValueKindArrayString: return "TEXT[]" + case qvalue.QValueKindArrayJSON: + return "JSON[]" + case qvalue.QValueKindArrayJSONB: + return "JSONB[]" case qvalue.QValueKindGeography: return "GEOGRAPHY" case qvalue.QValueKindGeometry: @@ -214,12 +228,12 @@ func qValueKindToPostgresType(colTypeStr string) string { } } -func parseJSON(value interface{}) (qvalue.QValue, error) { +func parseJSON(value interface{}, isArray bool) (qvalue.QValue, error) { jsonVal, err := json.Marshal(value) if err != nil { return nil, fmt.Errorf("failed to parse JSON: %w", err) } - return qvalue.QValueJSON{Val: string(jsonVal)}, nil + return qvalue.QValueJSON{Val: string(jsonVal), IsArray: isArray}, nil } func convertToArray[T any](kind qvalue.QValueKind, value interface{}) ([]T, error) { @@ -277,6 +291,31 @@ func parseFieldFromQValueKind(qvalueKind qvalue.QValueKind, value interface{}) ( } return qvalue.QValueString{Val: string(intervalJSON)}, nil + case qvalue.QValueKindTSTZRange: + tstzrangeObject := value.(pgtype.Range[interface{}]) + lowerBoundType := tstzrangeObject.LowerType + upperBoundType := tstzrangeObject.UpperType + lowerTime, err := convertTimeRangeBound(tstzrangeObject.Lower) + if err != nil { + return nil, fmt.Errorf("[tstzrange]error for lower time bound: %w", err) + } + + upperTime, err := 
convertTimeRangeBound(tstzrangeObject.Upper) + if err != nil { + return nil, fmt.Errorf("[tstzrange]error for upper time bound: %w", err) + } + + lowerBracket := "[" + if lowerBoundType == pgtype.Exclusive { + lowerBracket = "(" + } + upperBracket := "]" + if upperBoundType == pgtype.Exclusive { + upperBracket = ")" + } + tstzrangeStr := fmt.Sprintf("%s%v,%v%s", + lowerBracket, lowerTime, upperTime, upperBracket) + return qvalue.QValueTSTZRange{Val: tstzrangeStr}, nil case qvalue.QValueKindDate: switch val := value.(type) { case time.Time: @@ -306,12 +345,18 @@ func parseFieldFromQValueKind(qvalueKind qvalue.QValueKind, value interface{}) ( case qvalue.QValueKindBoolean: boolVal := value.(bool) return qvalue.QValueBoolean{Val: boolVal}, nil - case qvalue.QValueKindJSON: - tmp, err := parseJSON(value) + case qvalue.QValueKindJSON, qvalue.QValueKindJSONB: + tmp, err := parseJSON(value, false) if err != nil { return nil, fmt.Errorf("failed to parse JSON: %w", err) } return tmp, nil + case qvalue.QValueKindArrayJSON, qvalue.QValueKindArrayJSONB: + tmp, err := parseJSON(value, true) + if err != nil { + return nil, fmt.Errorf("failed to parse JSON Array: %w", err) + } + return tmp, nil case qvalue.QValueKindInt16: intVal := value.(int16) return qvalue.QValueInt16{Val: intVal}, nil @@ -483,3 +528,23 @@ func customTypeToQKind(typeName string) qvalue.QValueKind { return qvalue.QValueKindString } } + +// Postgres does not like timestamps of the form 2006-01-02 15:04:05 +0000 UTC +// in tstzrange. +// convertTimeRangeBound removes the +0000 UTC part +func convertTimeRangeBound(timeBound interface{}) (string, error) { + layout := "2006-01-02 15:04:05 -0700 MST" + postgresFormat := "2006-01-02 15:04:05" + var convertedTime string + if timeBound != nil { + lowerParsed, err := time.Parse(layout, fmt.Sprint(timeBound)) + if err != nil { + return "", fmt.Errorf("unexpected lower bound value in tstzrange. 
Error: %v", err) + } + convertedTime = lowerParsed.Format(postgresFormat) + } else { + convertedTime = "" + } + + return convertedTime, nil +} diff --git a/flow/connectors/snowflake/merge_stmt_generator.go b/flow/connectors/snowflake/merge_stmt_generator.go index 3f0cfbc63..37b4ed7bd 100644 --- a/flow/connectors/snowflake/merge_stmt_generator.go +++ b/flow/connectors/snowflake/merge_stmt_generator.go @@ -52,7 +52,7 @@ func (m *mergeStmtGenerator) generateMergeStmt(dstTable string) (string, error) flattenedCastsSQLArray = append(flattenedCastsSQLArray, fmt.Sprintf("TO_GEOMETRY(CAST(%s:\"%s\" AS STRING),true) AS %s", toVariantColumnName, column.Name, targetColumnName)) - case qvalue.QValueKindJSON, qvalue.QValueKindHStore, qvalue.QValueKindInterval: + case qvalue.QValueKindJSON, qvalue.QValueKindJSONB, qvalue.QValueKindHStore, qvalue.QValueKindInterval: flattenedCastsSQLArray = append(flattenedCastsSQLArray, fmt.Sprintf("PARSE_JSON(CAST(%s:\"%s\" AS STRING)) AS %s", toVariantColumnName, column.Name, targetColumnName)) diff --git a/flow/model/qrecord_copy_from_source.go b/flow/model/qrecord_copy_from_source.go index 308676c5f..d633fda99 100644 --- a/flow/model/qrecord_copy_from_source.go +++ b/flow/model/qrecord_copy_from_source.go @@ -1,6 +1,7 @@ package model import ( + "encoding/json" "errors" "fmt" "strings" @@ -82,6 +83,8 @@ func (src *QRecordCopyFromSource) Values() ([]interface{}, error) { values[i] = str case qvalue.QValueTime: values[i] = pgtype.Time{Microseconds: v.Val.UnixMicro(), Valid: true} + case qvalue.QValueTSTZRange: + values[i] = v.Val case qvalue.QValueTimestamp: values[i] = pgtype.Timestamp{Time: v.Val, Valid: true} case qvalue.QValueTimestampTZ: @@ -170,8 +173,16 @@ func (src *QRecordCopyFromSource) Values() ([]interface{}, error) { } values[i] = a case qvalue.QValueJSON: - values[i] = v.Val + if v.IsArray { + var arrayJ []interface{} + if err := json.Unmarshal([]byte(v.Value().(string)), &arrayJ); err != nil { + return nil, fmt.Errorf("failed to 
unmarshal JSON array: %v", err) + } + values[i] = arrayJ + } else { + values[i] = v.Value() + } // And so on for the other types... default: return nil, fmt.Errorf("unsupported value type %T", qValue) diff --git a/flow/model/qvalue/avro_converter.go b/flow/model/qvalue/avro_converter.go index 9738f46e8..97d9641b6 100644 --- a/flow/model/qvalue/avro_converter.go +++ b/flow/model/qvalue/avro_converter.go @@ -138,7 +138,9 @@ func GetAvroSchemaFromQValueKind(kind QValueKind, targetDWH protos.DBType, preci }, nil } return "string", nil - case QValueKindHStore, QValueKindJSON, QValueKindStruct: + case QValueKindTSTZRange: + return "string", nil + case QValueKindHStore, QValueKindJSON, QValueKindJSONB, QValueKindStruct: return "string", nil case QValueKindArrayFloat32: return AvroSchemaArray{ @@ -193,6 +195,8 @@ func GetAvroSchemaFromQValueKind(kind QValueKind, targetDWH protos.DBType, preci Type: "array", Items: "string", }, nil + case QValueKindArrayJSON, QValueKindArrayJSONB: + return "string", nil case QValueKindArrayString: return AvroSchemaArray{ Type: "array", @@ -315,7 +319,7 @@ func QValueToAvro(value QValue, field *QField, targetDWH protos.DBType, logger l return t, nil case QValueQChar: return c.processNullableUnion("string", string(v.Val)) - case QValueString, QValueCIDR, QValueINET, QValueMacaddr, QValueInterval: + case QValueString, QValueCIDR, QValueINET, QValueMacaddr, QValueInterval, QValueTSTZRange: if c.TargetDWH == protos.DBType_SNOWFLAKE && v.Value() != nil && (len(v.Value().(string)) > 15*1024*1024) { slog.Warn("Clearing TEXT value > 15MB for Snowflake!") diff --git a/flow/model/qvalue/kind.go b/flow/model/qvalue/kind.go index 79e8f89e4..91ab867a0 100644 --- a/flow/model/qvalue/kind.go +++ b/flow/model/qvalue/kind.go @@ -26,10 +26,12 @@ const ( QValueKindTime QValueKind = "time" QValueKindTimeTZ QValueKind = "timetz" QValueKindInterval QValueKind = "interval" + QValueKindTSTZRange QValueKind = "tstzrange" QValueKindNumeric QValueKind = "numeric" 
QValueKindBytes QValueKind = "bytes" QValueKindUUID QValueKind = "uuid" QValueKindJSON QValueKind = "json" + QValueKindJSONB QValueKind = "jsonb" QValueKindHStore QValueKind = "hstore" QValueKindGeography QValueKind = "geography" QValueKindGeometry QValueKind = "geometry" @@ -51,6 +53,8 @@ const ( QValueKindArrayTimestamp QValueKind = "array_timestamp" QValueKindArrayTimestampTZ QValueKind = "array_timestamptz" QValueKindArrayBoolean QValueKind = "array_bool" + QValueKindArrayJSON QValueKind = "array_json" + QValueKindArrayJSONB QValueKind = "array_jsonb" ) func (kind QValueKind) IsArray() bool { @@ -68,6 +72,7 @@ var QValueKindToSnowflakeTypeMap = map[QValueKind]string{ QValueKindQChar: "CHAR", QValueKindString: "STRING", QValueKindJSON: "VARIANT", + QValueKindJSONB: "VARIANT", QValueKindTimestamp: "TIMESTAMP_NTZ", QValueKindTimestampTZ: "TIMESTAMP_TZ", QValueKindInterval: "VARIANT", @@ -94,6 +99,8 @@ var QValueKindToSnowflakeTypeMap = map[QValueKind]string{ QValueKindArrayTimestamp: "VARIANT", QValueKindArrayTimestampTZ: "VARIANT", QValueKindArrayBoolean: "VARIANT", + QValueKindArrayJSON: "VARIANT", + QValueKindArrayJSONB: "VARIANT", } var QValueKindToClickHouseTypeMap = map[QValueKind]string{ @@ -109,6 +116,7 @@ var QValueKindToClickHouseTypeMap = map[QValueKind]string{ QValueKindJSON: "String", QValueKindTimestamp: "DateTime64(6)", QValueKindTimestampTZ: "DateTime64(6)", + QValueKindTSTZRange: "String", QValueKindTime: "DateTime64(6)", QValueKindTimeTZ: "DateTime64(6)", QValueKindDate: "Date32", @@ -118,7 +126,6 @@ var QValueKindToClickHouseTypeMap = map[QValueKind]string{ QValueKindInvalid: "String", QValueKindHStore: "String", - // array types will be mapped to VARIANT QValueKindArrayFloat32: "Array(Float32)", QValueKindArrayFloat64: "Array(Float64)", QValueKindArrayInt32: "Array(Int32)", @@ -129,6 +136,8 @@ var QValueKindToClickHouseTypeMap = map[QValueKind]string{ QValueKindArrayDate: "Array(Date)", QValueKindArrayTimestamp: "Array(DateTime64(6))", 
QValueKindArrayTimestampTZ: "Array(DateTime64(6))", + QValueKindArrayJSON: "String", + QValueKindArrayJSONB: "String", } func (kind QValueKind) ToDWHColumnType(dwhType protos.DBType) (string, error) { diff --git a/flow/model/qvalue/qvalue.go b/flow/model/qvalue/qvalue.go index 9b1c13f75..1277881a3 100644 --- a/flow/model/qvalue/qvalue.go +++ b/flow/model/qvalue/qvalue.go @@ -6,7 +6,7 @@ import ( "github.com/google/uuid" "github.com/shopspring/decimal" - "github.com/yuin/gopher-lua" + lua "github.com/yuin/gopher-lua" "github.com/PeerDB-io/glua64" "github.com/PeerDB-io/peer-flow/shared" @@ -294,6 +294,22 @@ func (v QValueInterval) LValue(ls *lua.LState) lua.LValue { return lua.LString(v.Val) } +type QValueTSTZRange struct { + Val string +} + +func (QValueTSTZRange) Kind() QValueKind { + return QValueKindInterval +} + +func (v QValueTSTZRange) Value() any { + return v.Val +} + +func (v QValueTSTZRange) LValue(ls *lua.LState) lua.LValue { + return lua.LString(v.Val) +} + type QValueNumeric struct { Val decimal.Decimal } @@ -343,7 +359,8 @@ func (v QValueUUID) LValue(ls *lua.LState) lua.LValue { } type QValueJSON struct { - Val string + Val string + IsArray bool } func (QValueJSON) Kind() QValueKind { From 5ffc25c7440c6d2f7238ab091281f7e94535f36b Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Mon, 18 Nov 2024 23:08:50 +0530 Subject: [PATCH 16/61] QRep Overwrite Mode: introduce full refresh code path (#2239) Currently overwrite mode in query replication has the following behaviour: Before new rows are synced, truncate the destination table This PR introduces a dynconf-gated (default false) full refresh mode which if enabled, has the same behaviour as above but pulls the entire data from the source table at every wait time interval --- flow/peerdbenv/dynamicconf.go | 11 +++++++++++ flow/workflows/local_activities.go | 14 ++++++++++++++ flow/workflows/qrep_flow.go | 31 ++++++++++++++++++++++-------- 3 files changed, 
48 insertions(+), 8 deletions(-) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index f149bf37b..314b36573 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -91,6 +91,13 @@ var DynamicSettings = [...]*protos.DynamicSetting{ ApplyMode: protos.DynconfApplyMode_APPLY_MODE_AFTER_RESUME, TargetForSetting: protos.DynconfTarget_ALL, }, + { + Name: "PEERDB_FULL_REFRESH_OVERWRITE_MODE", + Description: "Enables full refresh mode for query replication mirrors of overwrite type", + DefaultValue: "false", + ValueType: protos.DynconfValueType_BOOL, + ApplyMode: protos.DynconfApplyMode_APPLY_MODE_NEW_MIRROR, + }, { Name: "PEERDB_NULLABLE", Description: "Propagate nullability in schema", @@ -370,6 +377,10 @@ func PeerDBEnableParallelSyncNormalize(ctx context.Context, env map[string]strin return dynamicConfBool(ctx, env, "PEERDB_ENABLE_PARALLEL_SYNC_NORMALIZE") } +func PeerDBFullRefreshOverwriteMode(ctx context.Context, env map[string]string) (bool, error) { + return dynamicConfBool(ctx, env, "PEERDB_FULL_REFRESH_OVERWRITE_MODE") +} + func PeerDBNullable(ctx context.Context, env map[string]string) (bool, error) { return dynamicConfBool(ctx, env, "PEERDB_NULLABLE") } diff --git a/flow/workflows/local_activities.go b/flow/workflows/local_activities.go index d163352ca..7a3e80f24 100644 --- a/flow/workflows/local_activities.go +++ b/flow/workflows/local_activities.go @@ -29,6 +29,20 @@ func getParallelSyncNormalize(wCtx workflow.Context, logger log.Logger, env map[ return parallel } +func getQRepOverwriteFullRefreshMode(wCtx workflow.Context, logger log.Logger, env map[string]string) bool { + checkCtx := workflow.WithLocalActivityOptions(wCtx, workflow.LocalActivityOptions{ + StartToCloseTimeout: time.Minute, + }) + + getFullRefreshFuture := workflow.ExecuteLocalActivity(checkCtx, peerdbenv.PeerDBFullRefreshOverwriteMode, env) + var fullRefreshEnabled bool + if err := getFullRefreshFuture.Get(checkCtx, &fullRefreshEnabled); 
err != nil { + logger.Warn("Failed to check if full refresh mode is enabled", slog.Any("error", err)) + return false + } + return fullRefreshEnabled +} + func localPeerType(ctx context.Context, name string) (protos.DBType, error) { pool, err := peerdbenv.GetCatalogConnectionPoolFromEnv(ctx) if err != nil { diff --git a/flow/workflows/qrep_flow.go b/flow/workflows/qrep_flow.go index c7348eefa..f862b4f3d 100644 --- a/flow/workflows/qrep_flow.go +++ b/flow/workflows/qrep_flow.go @@ -32,13 +32,15 @@ type QRepPartitionFlowExecution struct { runUUID string } +var InitialLastPartition = &protos.QRepPartition{ + PartitionId: "not-applicable-partition", + Range: nil, +} + // returns a new empty QRepFlowState func newQRepFlowState() *protos.QRepFlowState { return &protos.QRepFlowState{ - LastPartition: &protos.QRepPartition{ - PartitionId: "not-applicable-partition", - Range: nil, - }, + LastPartition: InitialLastPartition, NumPartitionsProcessed: 0, NeedsResync: true, CurrentFlowStatus: protos.FlowStatus_STATUS_RUNNING, @@ -461,8 +463,10 @@ func QRepWaitForNewRowsWorkflow(ctx workflow.Context, config *protos.QRepConfig, return fmt.Errorf("error checking for new rows: %w", err) } + optedForOverwrite := config.WriteMode.WriteType == protos.QRepWriteType_QREP_WRITE_MODE_OVERWRITE + fullRefresh := optedForOverwrite && getQRepOverwriteFullRefreshMode(ctx, logger, config.Env) // If no new rows are found, continue as new - if !hasNewRows { + if !hasNewRows || fullRefresh { waitBetweenBatches := 5 * time.Second if config.WaitBetweenBatchesSeconds > 0 { waitBetweenBatches = time.Duration(config.WaitBetweenBatchesSeconds) * time.Second @@ -472,6 +476,9 @@ func QRepWaitForNewRowsWorkflow(ctx workflow.Context, config *protos.QRepConfig, return sleepErr } + if fullRefresh { + return nil + } logger.Info("QRepWaitForNewRowsWorkflow: continuing the loop") return workflow.NewContinueAsNewError(ctx, QRepWaitForNewRowsWorkflow, config, lastPartition) } @@ -545,8 +552,16 @@ func 
QRepFlowWorkflow( return state, err } - if !config.InitialCopyOnly && state.LastPartition != nil { - if err := q.waitForNewRows(ctx, signalChan, state.LastPartition); err != nil { + fullRefresh := false + lastPartition := state.LastPartition + if config.WriteMode.WriteType == protos.QRepWriteType_QREP_WRITE_MODE_OVERWRITE { + if fullRefresh = getQRepOverwriteFullRefreshMode(ctx, q.logger, config.Env); fullRefresh { + lastPartition = InitialLastPartition + } + } + + if !config.InitialCopyOnly && lastPartition != nil { + if err := q.waitForNewRows(ctx, signalChan, lastPartition); err != nil { return state, err } } @@ -580,7 +595,7 @@ func QRepFlowWorkflow( q.logger.Info(fmt.Sprintf("%d partitions processed", len(partitions.Partitions))) state.NumPartitionsProcessed += uint64(len(partitions.Partitions)) - if len(partitions.Partitions) > 0 { + if len(partitions.Partitions) > 0 && !fullRefresh { state.LastPartition = partitions.Partitions[len(partitions.Partitions)-1] } } From 9f62aec1af1e6f080ec5aacfc9fe4ac7dc2fe0f9 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Mon, 18 Nov 2024 19:18:17 +0000 Subject: [PATCH 17/61] fix(deps): update cargo dependencies (#2266) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | [cargo-deb](https://lib.rs/crates/cargo-deb) ([source](https://redirect.github.com/kornelski/cargo-deb)) | dependencies | minor | `2.8.0` -> `2.9.0` | | [clap](https://redirect.github.com/clap-rs/clap) | dependencies | patch | `4.5.20` -> `4.5.21` | | [rustls](https://redirect.github.com/rustls/rustls) | dependencies | patch | `0.23.16` -> `0.23.17` | | [serde_json](https://redirect.github.com/serde-rs/json) | dependencies | patch | `1.0.132` -> `1.0.133` | --- ### Release Notes
kornelski/cargo-deb (cargo-deb) ### [`v2.9.0`](https://redirect.github.com/kornelski/cargo-deb/compare/v2.8.0...v2.9.0) [Compare Source](https://redirect.github.com/kornelski/cargo-deb/compare/v2.8.0...v2.9.0)
clap-rs/clap (clap) ### [`v4.5.21`](https://redirect.github.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4521---2024-11-13) [Compare Source](https://redirect.github.com/clap-rs/clap/compare/v4.5.20...v4.5.21) ##### Fixes - *(parser)* Ensure defaults are filled in on error with `ignore_errors(true)`
serde-rs/json (serde_json) ### [`v1.0.133`](https://redirect.github.com/serde-rs/json/releases/tag/v1.0.133) [Compare Source](https://redirect.github.com/serde-rs/json/compare/v1.0.132...v1.0.133) - Implement From<\[T; N]> for serde_json::Value ([#​1215](https://redirect.github.com/serde-rs/json/issues/1215))
--- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. â™» **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- nexus/Cargo.lock | 42 +++++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/nexus/Cargo.lock b/nexus/Cargo.lock index cc3650b6f..f827849ff 100644 --- a/nexus/Cargo.lock +++ b/nexus/Cargo.lock @@ -801,9 +801,9 @@ dependencies = [ [[package]] name = "cargo-deb" -version = "2.8.0" +version = "2.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "74d0500c935971265437386796faad57064d17bf2648f3f0a7e3c8d5a631de23" +checksum = "9103cb60c68ef7ce14a3d17c6d697e8b180356a447685784f7951074bce0b844" dependencies = [ "ar", "cargo_toml", @@ -962,9 +962,9 @@ dependencies = [ [[package]] name = "clap" -version = "4.5.20" +version = "4.5.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b97f376d85a664d5837dbae44bf546e6477a679ff6610010f17276f686d867e8" +checksum = "fb3b4b9e5a7c7514dfa52869339ee98b3156b0bfb4e8a77c4ff4babb64b1604f" dependencies = [ "clap_builder", "clap_derive", @@ -972,9 +972,9 @@ dependencies = [ [[package]] name = "clap_builder" -version = "4.5.20" +version = "4.5.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"19bc80abd44e4bed93ca373a0704ccbd1b710dc5749406201bb018272808dc54" +checksum = "b17a95aa67cc7b5ebd32aa5370189aa0d79069ef1c64ce893bd30fb24bff20ec" dependencies = [ "anstream", "anstyle", @@ -1790,7 +1790,7 @@ dependencies = [ "http 1.1.0", "hyper 1.5.0", "hyper-util", - "rustls 0.23.16", + "rustls 0.23.17", "rustls-native-certs 0.8.0", "rustls-pki-types", "tokio", @@ -3044,7 +3044,7 @@ dependencies = [ "anyhow", "futures-util", "pt", - "rustls 0.23.16", + "rustls 0.23.17", "ssh2", "tokio", "tokio-postgres", @@ -3159,7 +3159,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c1318b19085f08681016926435853bbf7858f9c082d0999b80550ff5d9abe15" dependencies = [ "bytes", - "heck 0.5.0", + "heck 0.4.1", "itertools", "log", "multimap", @@ -3270,7 +3270,7 @@ dependencies = [ "quinn-proto", "quinn-udp", "rustc-hash 2.0.0", - "rustls 0.23.16", + "rustls 0.23.17", "socket2", "thiserror", "tokio", @@ -3287,7 +3287,7 @@ dependencies = [ "rand", "ring", "rustc-hash 2.0.0", - "rustls 0.23.16", + "rustls 0.23.17", "slab", "thiserror", "tinyvec", @@ -3305,7 +3305,7 @@ dependencies = [ "once_cell", "socket2", "tracing", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -3518,7 +3518,7 @@ dependencies = [ "percent-encoding", "pin-project-lite", "quinn", - "rustls 0.23.16", + "rustls 0.23.17", "rustls-pemfile 2.2.0", "rustls-pki-types", "serde", @@ -3686,9 +3686,9 @@ dependencies = [ [[package]] name = "rustls" -version = "0.23.16" +version = "0.23.17" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eee87ff5d9b36712a58574e12e9f0ea80f915a5b0ac518d322b24a465617925e" +checksum = "7f1a745511c54ba6d4465e8d5dfbd81b45791756de28d4981af70d6dca128f1e" dependencies = [ "log", "once_cell", @@ -3916,9 +3916,9 @@ dependencies = [ [[package]] name = "serde_json" -version = "1.0.132" +version = "1.0.133" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"d726bfaff4b320266d395898905d0eba0345aae23b54aee3a737e260fd46db03" +checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377" dependencies = [ "itoa", "memchr", @@ -4411,7 +4411,7 @@ checksum = "27d684bad428a0f2481f42241f821db42c54e2dc81d8c00db8536c506b0a0144" dependencies = [ "const-oid", "ring", - "rustls 0.23.16", + "rustls 0.23.17", "tokio", "tokio-postgres", "tokio-rustls 0.26.0", @@ -4445,7 +4445,7 @@ version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" dependencies = [ - "rustls 0.23.16", + "rustls 0.23.17", "rustls-pki-types", "tokio", ] @@ -4790,7 +4790,7 @@ dependencies = [ "flate2", "log", "once_cell", - "rustls 0.23.16", + "rustls 0.23.17", "rustls-pki-types", "serde", "serde_json", @@ -5358,7 +5358,7 @@ dependencies = [ "hyper-util", "log", "percent-encoding", - "rustls 0.23.16", + "rustls 0.23.17", "rustls-pemfile 2.2.0", "seahash", "serde", From 15677364dd1041a2822948ba200cba9e15707fc5 Mon Sep 17 00:00:00 2001 From: Kaushik Iska Date: Mon, 18 Nov 2024 18:16:50 -0600 Subject: [PATCH 18/61] refactor: reduce default batch and partition sizes for improved stability (#2262) --- flow/activities/flowable_core.go | 2 +- flow/workflows/snapshot_flow.go | 2 +- ui/app/mirrors/create/helpers/cdc.ts | 12 ++++++------ ui/app/mirrors/create/helpers/common.ts | 4 ++-- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/flow/activities/flowable_core.go b/flow/activities/flowable_core.go index db04efea3..d583044b0 100644 --- a/flow/activities/flowable_core.go +++ b/flow/activities/flowable_core.go @@ -139,7 +139,7 @@ func syncCore[TPull connectors.CDCPullConnectorCore, TSync connectors.CDCSyncCon batchSize := options.BatchSize if batchSize == 0 { - batchSize = 1_000_000 + batchSize = 250_000 } lastOffset, err := func() (int64, error) { diff --git a/flow/workflows/snapshot_flow.go b/flow/workflows/snapshot_flow.go index 
d4f494d1f..9b21b7b38 100644 --- a/flow/workflows/snapshot_flow.go +++ b/flow/workflows/snapshot_flow.go @@ -166,7 +166,7 @@ func (s *SnapshotFlowExecution) cloneTable( numWorkers = s.config.SnapshotMaxParallelWorkers } - numRowsPerPartition := uint32(500000) + numRowsPerPartition := uint32(250000) if s.config.SnapshotNumRowsPerPartition > 0 { numRowsPerPartition = s.config.SnapshotNumRowsPerPartition } diff --git a/ui/app/mirrors/create/helpers/cdc.ts b/ui/app/mirrors/create/helpers/cdc.ts index 99dd229cb..957564d67 100644 --- a/ui/app/mirrors/create/helpers/cdc.ts +++ b/ui/app/mirrors/create/helpers/cdc.ts @@ -22,12 +22,12 @@ export const cdcSettings: MirrorSetting[] = [ setter( (curr: CDCConfig): CDCConfig => ({ ...curr, - maxBatchSize: (value as number) || 1000000, + maxBatchSize: (value as number) || 250000, }) ), - tips: 'The number of rows PeerDB will pull from source at a time. If left empty, the default value is 1,000,000 rows.', + tips: 'The number of rows PeerDB will pull from source at a time. If left empty, the default value is 250,000 rows.', type: 'number', - default: '1000000', + default: '250000', advanced: AdvancedSettingType.ALL, }, { @@ -78,11 +78,11 @@ export const cdcSettings: MirrorSetting[] = [ setter( (curr: CDCConfig): CDCConfig => ({ ...curr, - snapshotNumRowsPerPartition: parseInt(value as string, 10) || 1000000, + snapshotNumRowsPerPartition: parseInt(value as string, 10) || 250000, }) ), - tips: 'PeerDB splits up table data into partitions for increased performance. This setting controls the number of rows per partition. The default value is 1000000.', - default: '1000000', + tips: 'PeerDB splits up table data into partitions for increased performance. This setting controls the number of rows per partition. 
The default value is 250000.', + default: '250000', type: 'number', advanced: AdvancedSettingType.ALL, }, diff --git a/ui/app/mirrors/create/helpers/common.ts b/ui/app/mirrors/create/helpers/common.ts index d4ba5747a..f29a2376c 100644 --- a/ui/app/mirrors/create/helpers/common.ts +++ b/ui/app/mirrors/create/helpers/common.ts @@ -25,10 +25,10 @@ export const blankCDCSetting: CDCConfig = { destinationName: '', flowJobName: '', tableMappings: [], - maxBatchSize: 1000000, + maxBatchSize: 250000, doInitialSnapshot: true, publicationName: '', - snapshotNumRowsPerPartition: 1000000, + snapshotNumRowsPerPartition: 250000, snapshotMaxParallelWorkers: 4, snapshotNumTablesInParallel: 1, snapshotStagingPath: '', From 6c7c50424e6e674c6a8b06441ca987ae69fab89d Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Tue, 19 Nov 2024 18:13:26 +0530 Subject: [PATCH 19/61] chore(deps): update dockerfile dependencies (#2265) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | golang | stage | digest | `9f68de8` -> `c694a4d` | | lukemathwalker/cargo-chef | stage | digest | `9ba204a` -> `75f772f` | | [node](https://redirect.github.com/nodejs/node) | final | digest | `dc8ba2f` -> `b64ced2` | --- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. â™» **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). 
View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- stacks/flow.Dockerfile | 2 +- stacks/peerdb-server.Dockerfile | 2 +- stacks/peerdb-ui.Dockerfile | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/stacks/flow.Dockerfile b/stacks/flow.Dockerfile index 13fc5b089..2c6f375d0 100644 --- a/stacks/flow.Dockerfile +++ b/stacks/flow.Dockerfile @@ -1,6 +1,6 @@ # syntax=docker/dockerfile:1.11@sha256:10c699f1b6c8bdc8f6b4ce8974855dd8542f1768c26eb240237b8f1c9c6c9976 -FROM golang:1.23-alpine@sha256:9f68de83bef9e75cda99597d51778f4f5776ab8d9374e1094a3cd724401094c3 AS builder +FROM golang:1.23-alpine@sha256:c694a4d291a13a9f9d94933395673494fc2cc9d4777b85df3a7e70b3492d3574 AS builder RUN apk add --no-cache gcc geos-dev musl-dev WORKDIR /root/flow diff --git a/stacks/peerdb-server.Dockerfile b/stacks/peerdb-server.Dockerfile index 3e9db5240..50c69f807 100644 --- a/stacks/peerdb-server.Dockerfile +++ b/stacks/peerdb-server.Dockerfile @@ -1,6 +1,6 @@ # syntax=docker/dockerfile:1@sha256:865e5dd094beca432e8c0a1d5e1c465db5f998dca4e439981029b3b81fb39ed5 -FROM lukemathwalker/cargo-chef:latest-rust-alpine3.20@sha256:9ba204a79235804a3a2f41467b09e499daad8bd637c72449ba30ada4070526ff as chef +FROM lukemathwalker/cargo-chef:latest-rust-alpine3.20@sha256:75f772fe2d870acb77ffdb2206810cd694a6720263f94c74fcc75080963dbff5 as chef WORKDIR /root FROM chef as planner diff --git a/stacks/peerdb-ui.Dockerfile b/stacks/peerdb-ui.Dockerfile index def0aad72..42cedca11 100644 --- a/stacks/peerdb-ui.Dockerfile +++ b/stacks/peerdb-ui.Dockerfile @@ -1,7 +1,7 @@ # syntax=docker/dockerfile:1.11@sha256:10c699f1b6c8bdc8f6b4ce8974855dd8542f1768c26eb240237b8f1c9c6c9976 # Base stage -FROM node:22-alpine@sha256:dc8ba2f61dd86c44e43eb25a7812ad03c5b1b224a19fc6f77e1eb9e5669f0b82 AS base +FROM node:22-alpine@sha256:b64ced2e7cd0a4816699fe308ce6e8a08ccba463c757c00c14cd372e3d2c763e AS base ENV 
NPM_CONFIG_UPDATE_NOTIFIER=false RUN apk add --no-cache openssl && \ mkdir /app && \ From b6e95aa3348174d14edb97ca4a1735b171d71ebf Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Tue, 19 Nov 2024 18:19:31 +0530 Subject: [PATCH 20/61] enable automerge for renovate --- renovate.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/renovate.json b/renovate.json index e053c6ed5..dead09279 100644 --- a/renovate.json +++ b/renovate.json @@ -16,7 +16,8 @@ ] } ], - "separateMajorMinor": false + "separateMajorMinor": false, + "automerge": true }, { "matchPackageNames": ["mysql_async"], From 140edc39fc85a0544afee0fb28530ce384905c48 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Tue, 19 Nov 2024 12:58:32 +0000 Subject: [PATCH 21/61] chore(deps): update docker-compose dependencies (#2264) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Update | Change | |---|---|---| | ghcr.io/peerdb-io/flow-api | digest | `a7c7d1c` -> `e1800e4` | | ghcr.io/peerdb-io/flow-snapshot-worker | digest | `9b6db50` -> `995d426` | | ghcr.io/peerdb-io/flow-worker | digest | `3b4da65` -> `66afae0` | | ghcr.io/peerdb-io/peerdb-server | digest | `dfb652e` -> `9abc818` | | ghcr.io/peerdb-io/peerdb-ui | digest | `cb4d1db` -> `d6261dd` | | postgres | digest | `d388be1` -> `0d96245` | --- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. 
--- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- docker-compose-dev.yml | 2 +- docker-compose.yml | 12 ++++++------ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docker-compose-dev.yml b/docker-compose-dev.yml index 7309472d7..9db08bbda 100644 --- a/docker-compose-dev.yml +++ b/docker-compose-dev.yml @@ -39,7 +39,7 @@ x-flow-worker-env: &flow-worker-env services: catalog: container_name: catalog - image: postgres:17-alpine@sha256:d388be15cfb665c723da47cccdc7ea5c003ed71f700c5419bbd075033227ce1f + image: postgres:17-alpine@sha256:0d9624535618a135c5453258fd629f4963390338b11aaffb92292c12df3a6c17 command: -c config_file=/etc/postgresql.conf ports: - 9901:5432 diff --git a/docker-compose.yml b/docker-compose.yml index c03c9993d..4c59bc889 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -32,7 +32,7 @@ x-flow-worker-env: &flow-worker-env services: catalog: container_name: catalog - image: postgres:17-alpine@sha256:d388be15cfb665c723da47cccdc7ea5c003ed71f700c5419bbd075033227ce1f + image: postgres:17-alpine@sha256:0d9624535618a135c5453258fd629f4963390338b11aaffb92292c12df3a6c17 command: -c config_file=/etc/postgresql.conf restart: unless-stopped ports: @@ -112,7 +112,7 @@ services: flow-api: container_name: flow_api - image: ghcr.io/peerdb-io/flow-api:latest-dev@sha256:a7c7d1cb12f618a747233556820df114acc4d2e737e487bdfdbb0051720710c9 + image: ghcr.io/peerdb-io/flow-api:latest-dev@sha256:e1800e45df01c20c654628f106d06b3e61fddaa71930cb044bb8a119f574eb69 restart: unless-stopped ports: - 8112:8112 @@ -128,7 +128,7 @@ services: flow-snapshot-worker: container_name: flow-snapshot-worker - image: 
ghcr.io/peerdb-io/flow-snapshot-worker:latest-dev@sha256:9b6db5039e4f73f7d205a40400683ff9ca242dbb2eee405a2e9b056e947c8666 + image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-dev@sha256:995d426604275f14c59bbc198d382825832cf4f69b3f05083d746b78b28952ca restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -138,7 +138,7 @@ services: flow-worker: container_name: flow-worker - image: ghcr.io/peerdb-io/flow-worker:latest-dev@sha256:3b4da6550deacd638e99592b65d9f61191ed020b8268efa52297c3101ab37c16 + image: ghcr.io/peerdb-io/flow-worker:latest-dev@sha256:66afae04c334af2a7cdb791f8f5a025a287b88346a6f72fdbb62156a898f62f3 restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -151,7 +151,7 @@ services: peerdb: container_name: peerdb-server stop_signal: SIGINT - image: ghcr.io/peerdb-io/peerdb-server:latest-dev@sha256:dfb652eebb410198f28e720d04d17c4de0698581d214ce99337ee1efc0874ba4 + image: ghcr.io/peerdb-io/peerdb-server:latest-dev@sha256:9abc8184f8104599105cefc8f14c708cb9283e99bf904b860258a0d303bebea6 restart: unless-stopped environment: <<: *catalog-config @@ -167,7 +167,7 @@ services: peerdb-ui: container_name: peerdb-ui - image: ghcr.io/peerdb-io/peerdb-ui:latest-dev@sha256:cb4d1dbd167c6bdd44e795a33b69ab5001932c7cbd8ac02126b79331022c17a0 + image: ghcr.io/peerdb-io/peerdb-ui:latest-dev@sha256:d6261dd51d3f15d2a47fd690e19c87ffa2d59aeac3d3e950781fca56ab39f95c restart: unless-stopped ports: - 3000:3000 From e10e54fa0bc338b1f48bfd0beabf96ac80061ee9 Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Tue, 19 Nov 2024 21:41:38 +0530 Subject: [PATCH 22/61] chore: replace debezium in ci and test on multiple pg versions (#2269) --- .github/workflows/ci.yml | 5 +++-- .github/workflows/flow.yml | 3 ++- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 4d98e8736..5dbbb4ee9 100644 --- 
a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -5,18 +5,19 @@ on: branches: [main, release/*] pull_request: branches: [main, release/*] - paths: [nexus/**, protos/**] + paths: [nexus/**, protos/**, .github/workflows/ci.yml] jobs: build: strategy: matrix: runner: [ubicloud-standard-2-ubuntu-2204-arm] + postgres-version: [13, 14, 15, 16, 17] runs-on: ${{ matrix.runner }} timeout-minutes: 30 services: catalog_peer: - image: debezium/postgres:14-alpine + image: postgres:${{ matrix.postgres-version }}-alpine ports: - 7132:5432 env: diff --git a/.github/workflows/flow.yml b/.github/workflows/flow.yml index 2673bda3f..d3168e96d 100644 --- a/.github/workflows/flow.yml +++ b/.github/workflows/flow.yml @@ -11,11 +11,12 @@ jobs: strategy: matrix: runner: [ubicloud-standard-16-ubuntu-2204-arm] + postgres-version: [15, 16, 17] runs-on: ${{ matrix.runner }} timeout-minutes: 30 services: catalog: - image: imresamu/postgis:15-3.4-alpine + image: imresamu/postgis:${{ matrix.postgres-version }}-3.5-alpine ports: - 5432:5432 env: From 172d65297546379cc082dc55c30d02587e64ffdc Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Tue, 19 Nov 2024 21:48:21 +0530 Subject: [PATCH 23/61] renovate: disable major dependencies for next js --- renovate.json | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/renovate.json b/renovate.json index dead09279..b9a8ffef6 100644 --- a/renovate.json +++ b/renovate.json @@ -23,6 +23,12 @@ "matchPackageNames": ["mysql_async"], "matchManagers": ["cargo"], "enabled": false + }, + { + "matchPackageNames": ["next"], + "matchManagers": ["npm"], + "matchUpdateTypes": ["major"], + "enabled": false } ], "vulnerabilityAlerts": { From 1e5f10f3a16d79312f03be5e65e3bc1be145f398 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Tue, 19 Nov 2024 16:26:08 +0000 Subject: [PATCH 24/61] chore(deps): pin dependencies (#2267) MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | [actions/checkout](https://redirect.github.com/actions/checkout) | action | pinDigest | -> `11bd719` | | [bitnami/minio](https://redirect.github.com/bitnami/containers) ([source](https://redirect.github.com/bitnami/containers/tree/HEAD/bitnami/minio)) | service | pinDigest | -> `9f2d9c4` | | [bufbuild/buf-action](https://redirect.github.com/bufbuild/buf-action) | action | pinDigest | -> `3fb7035` | | [depot/bake-action](https://redirect.github.com/depot/bake-action) | action | pinDigest | -> `143e50b` | | [depot/setup-action](https://redirect.github.com/depot/setup-action) | action | pinDigest | -> `b0b1ea4` | | [docker/login-action](https://redirect.github.com/docker/login-action) | action | pinDigest | -> `9780b0c` | | elasticsearch | service | minor | `8.13.0` -> `8.16.0` | | [github/codeql-action](https://redirect.github.com/github/codeql-action) | action | pinDigest | -> `ea9e4e3` | | [golangci/golangci-lint-action](https://redirect.github.com/golangci/golangci-lint-action) | action | pinDigest | -> `971e284` | | [jsdaniell/create-json](https://redirect.github.com/jsdaniell/create-json) | action | pinDigest | -> `b8e77fa` | | [redpanda-data/github-action](https://redirect.github.com/redpanda-data/github-action) | action | pinDigest | -> `c68af8e` | | [temporalio/setup-temporal](https://redirect.github.com/temporalio/setup-temporal) | action | pinDigest | -> `1059a50` | | [ubicloud/cache](https://redirect.github.com/ubicloud/cache) | action | pinDigest | -> `0a97811` | | [ubicloud/rust-cache](https://redirect.github.com/ubicloud/rust-cache) | action | pinDigest | -> `69587b2` | | [ubicloud/setup-go](https://redirect.github.com/ubicloud/setup-go) | action | pinDigest | -> `35680fe` | | [wearerequired/lint-action](https://redirect.github.com/wearerequired/lint-action) | action | pinDigest | -> `548d8a7` | --- 
### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- .github/actions/genprotos/action.yml | 6 +++--- .github/workflows/ci.yml | 8 ++++---- .github/workflows/cleanup.yml | 12 ++++++------ .github/workflows/codeql-analysis.yml | 6 +++--- .github/workflows/customer-docker.yml | 8 ++++---- .github/workflows/dev-docker.yml | 8 ++++---- .github/workflows/flow-api-client.yml | 2 +- .github/workflows/flow.yml | 24 ++++++++++++------------ .github/workflows/golang-lint.yml | 6 +++--- .github/workflows/rust-lint.yml | 2 +- .github/workflows/stable-docker.yml | 8 ++++---- .github/workflows/ui-build.yml | 2 +- .github/workflows/ui-lint.yml | 4 ++-- 13 files changed, 48 insertions(+), 48 deletions(-) diff --git a/.github/actions/genprotos/action.yml b/.github/actions/genprotos/action.yml index 84bc29d00..84dfd540f 100644 --- a/.github/actions/genprotos/action.yml +++ b/.github/actions/genprotos/action.yml @@ -3,10 +3,10 @@ description: 'Install buf with local plugins, generate protos and cache' runs: using: "composite" steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: check cache id: cache - uses: ubicloud/cache@v4 + uses: ubicloud/cache@0a97811d53629b143a56b3c2b1f729fd11719ef7 # v4 with: path: | ./flow/generated/protos @@ -15,7 +15,7 @@ runs: key: ${{ 
runner.os }}-build-genprotos-${{ hashFiles('buf.gen.yaml', './protos/peers.proto', './protos/flow.proto', './protos/route.proto') }} - if: steps.cache.outputs.cache-hit != 'true' - uses: bufbuild/buf-action@v1 + uses: bufbuild/buf-action@3fb70352251376e958c4c2c92c3818de82a71c2b # v1 with: setup_only: true github_token: ${{ github.token }} diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 5dbbb4ee9..5c86636ae 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -30,7 +30,7 @@ jobs: --health-timeout 5s --health-retries 5 steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos @@ -42,7 +42,7 @@ jobs: - name: setup gcp service account id: gcp-service-account - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "bq_service_account.json" json: ${{ secrets.GCP_GH_CI_PKEY }} @@ -50,13 +50,13 @@ jobs: - name: setup snowflake credentials id: sf-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "snowflake_creds.json" json: ${{ secrets.SNOWFLAKE_GH_CI_PKEY }} dir: "nexus/server/tests/assets/" - - uses: ubicloud/rust-cache@v2 + - uses: ubicloud/rust-cache@69587b2b3f26e8938580c44a643d265ed12f3119 # v2 with: workspaces: nexus diff --git a/.github/workflows/cleanup.yml b/.github/workflows/cleanup.yml index 5897eae7f..9471872f6 100644 --- a/.github/workflows/cleanup.yml +++ b/.github/workflows/cleanup.yml @@ -10,9 +10,9 @@ jobs: timeout-minutes: 60 steps: - name: checkout sources - uses: actions/checkout@v4 + uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - - uses: ubicloud/setup-go@v5 + - uses: ubicloud/setup-go@35680fe0723d4a9309d4b1ac1c67e0d46eac5f24 # v5 with: go-version: '1.23.0' cache-dependency-path: e2e_cleanup/go.sum @@ -24,28 
+24,28 @@ jobs: - name: setup gcp service account id: gcp-service-account - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "bq_service_account.json" json: ${{ secrets.GCP_GH_CI_PKEY }} - name: setup snowflake credentials id: sf-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "snowflake_creds.json" json: ${{ secrets.SNOWFLAKE_GH_CI_PKEY }} - name: setup S3 credentials id: s3-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "s3_creds.json" json: ${{ secrets.S3_CREDS }} - name: setup GCS credentials id: gcs-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "gcs_creds.json" json: ${{ secrets.GCS_CREDS }} diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml index 5de1d92c4..303066f11 100644 --- a/.github/workflows/codeql-analysis.yml +++ b/.github/workflows/codeql-analysis.yml @@ -34,7 +34,7 @@ jobs: build-mode: none steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos @@ -47,12 +47,12 @@ jobs: # Initializes the CodeQL tools for scanning. 
- name: Initialize CodeQL - uses: github/codeql-action/init@v3 + uses: github/codeql-action/init@ea9e4e37992a54ee68a9622e985e60c8e8f12d9f # v3 with: languages: ${{ matrix.language }} build-mode: ${{ matrix.build-mode }} - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@v3 + uses: github/codeql-action/analyze@ea9e4e37992a54ee68a9622e985e60c8e8f12d9f # v3 with: category: "/language:${{matrix.language}}" diff --git a/.github/workflows/customer-docker.yml b/.github/workflows/customer-docker.yml index 8278ec3d2..67145512a 100644 --- a/.github/workflows/customer-docker.yml +++ b/.github/workflows/customer-docker.yml @@ -18,15 +18,15 @@ jobs: contents: read packages: write steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos - - uses: depot/setup-action@v1 + - uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1 - name: Login to GitHub Container Registry - uses: docker/login-action@v3 + uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3 with: registry: ghcr.io username: ${{github.actor}} @@ -42,7 +42,7 @@ jobs: echo "branch=$(echo $GITHUB_REF | sed -e 's/.*customer-//')" >> $GITHUB_OUTPUT - name: Build (optionally publish) PeerDB Images - uses: depot/bake-action@v1 + uses: depot/bake-action@143e50b965398f1f5dc8463be7dde6f62b9e9c21 # v1 with: token: ${{ secrets.DEPOT_TOKEN }} files: ./docker-bake.hcl diff --git a/.github/workflows/dev-docker.yml b/.github/workflows/dev-docker.yml index 6011ec4ab..275ad28b7 100644 --- a/.github/workflows/dev-docker.yml +++ b/.github/workflows/dev-docker.yml @@ -17,15 +17,15 @@ jobs: contents: read packages: write steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos - - uses: depot/setup-action@v1 + - uses: 
depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1 - name: Login to GitHub Container Registry - uses: docker/login-action@v3 + uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3 with: registry: ghcr.io username: ${{github.actor}} @@ -36,7 +36,7 @@ jobs: run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT - name: Build (optionally publish) PeerDB Images - uses: depot/bake-action@v1 + uses: depot/bake-action@143e50b965398f1f5dc8463be7dde6f62b9e9c21 # v1 with: token: ${{ secrets.DEPOT_TOKEN }} files: ./docker-bake.hcl diff --git a/.github/workflows/flow-api-client.yml b/.github/workflows/flow-api-client.yml index 046b377db..5e373b2d6 100644 --- a/.github/workflows/flow-api-client.yml +++ b/.github/workflows/flow-api-client.yml @@ -9,7 +9,7 @@ jobs: permissions: contents: write steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos diff --git a/.github/workflows/flow.yml b/.github/workflows/flow.yml index d3168e96d..e794512a0 100644 --- a/.github/workflows/flow.yml +++ b/.github/workflows/flow.yml @@ -25,7 +25,7 @@ jobs: POSTGRES_DB: postgres POSTGRES_INITDB_ARGS: --locale=C.UTF-8 elasticsearch: - image: elasticsearch:8.13.0 + image: elasticsearch:8.16.0@sha256:a411f7c17549209c5839b69f929de00bd91f1e2dcf08b65d5f41b122eae17f5e ports: - 9200:9200 env: @@ -33,7 +33,7 @@ jobs: xpack.security.enabled: false xpack.security.enrollment.enabled: false minio: - image: bitnami/minio:2024.11.7 + image: bitnami/minio:2024.11.7@sha256:9f2d9c45006a2ada1bc485e1393291ce7d54ae1a46260dd491381a4eb8b2fd47 ports: - 9999:9999 env: @@ -44,12 +44,12 @@ jobs: MINIO_DEFAULT_BUCKETS: peerdb steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos - - uses: ubicloud/setup-go@v5 + - uses: 
ubicloud/setup-go@35680fe0723d4a9309d4b1ac1c67e0d46eac5f24 # v5 with: go-version: '1.23.0' cache-dependency-path: flow/go.sum @@ -64,35 +64,35 @@ jobs: - name: setup gcp service account id: gcp-service-account - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "bq_service_account.json" json: ${{ secrets.GCP_GH_CI_PKEY }} - name: setup snowflake credentials id: sf-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "snowflake_creds.json" json: ${{ secrets.SNOWFLAKE_GH_CI_PKEY }} - name: setup S3 credentials id: s3-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "s3_creds.json" json: ${{ secrets.S3_CREDS }} - name: setup GCS credentials id: gcs-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "gcs_creds.json" json: ${{ secrets.GCS_CREDS }} - name: setup Eventhubs credentials id: eventhubs-credentials - uses: jsdaniell/create-json@v1.2.3 + uses: jsdaniell/create-json@b8e77fa01397ca39cc4a6198cc29a3be5481afef # v1.2.3 with: name: "eh_creds.json" json: ${{ secrets.EH_CREDS }} @@ -110,11 +110,11 @@ jobs: PGPASSWORD: postgres - name: start redpanda - uses: redpanda-data/github-action@v0.1.4 + uses: redpanda-data/github-action@c68af8edc420b987e871615ca40b3a5dd70eb5b1 # v0.1.4 with: version: "latest" - - uses: ubicloud/cache@v4 + - uses: ubicloud/cache@0a97811d53629b143a56b3c2b1f729fd11719ef7 # v4 id: cache-clickhouse with: path: ./clickhouse @@ -130,7 +130,7 @@ jobs: ./clickhouse server & - name: Install Temporal CLI - uses: temporalio/setup-temporal@v0 + uses: temporalio/setup-temporal@1059a504f87e7fa2f385e3fa40d1aa7e62f1c6ca # v0 - name: run tests run: | diff --git a/.github/workflows/golang-lint.yml 
b/.github/workflows/golang-lint.yml index aadcfa7a5..2289eeae1 100644 --- a/.github/workflows/golang-lint.yml +++ b/.github/workflows/golang-lint.yml @@ -13,7 +13,7 @@ jobs: name: lint runs-on: [ubicloud-standard-4-ubuntu-2204-arm] steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos @@ -22,12 +22,12 @@ jobs: run: | sudo apt-get update sudo apt-get install libgeos-dev - - uses: ubicloud/setup-go@v5 + - uses: ubicloud/setup-go@35680fe0723d4a9309d4b1ac1c67e0d46eac5f24 # v5 with: go-version: '1.23.0' cache: false - name: golangci-lint - uses: golangci/golangci-lint-action@v6 + uses: golangci/golangci-lint-action@971e284b6050e8a5849b72094c50ab08da042db8 # v6 with: version: v1.61 working-directory: ./flow diff --git a/.github/workflows/rust-lint.yml b/.github/workflows/rust-lint.yml index b9e43c1a2..c4e2782f1 100644 --- a/.github/workflows/rust-lint.yml +++ b/.github/workflows/rust-lint.yml @@ -16,7 +16,7 @@ jobs: runner: [ubicloud-standard-4-ubuntu-2204-arm] runs-on: ${{ matrix.runner }} steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos diff --git a/.github/workflows/stable-docker.yml b/.github/workflows/stable-docker.yml index 9eabbcfb2..0056a7d9c 100644 --- a/.github/workflows/stable-docker.yml +++ b/.github/workflows/stable-docker.yml @@ -15,22 +15,22 @@ jobs: contents: read packages: write steps: - - uses: actions/checkout@v4 + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos - - uses: depot/setup-action@v1 + - uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1 - name: Login to GitHub Container Registry - uses: docker/login-action@v3 + uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3 
with: registry: ghcr.io username: ${{github.actor}} password: ${{secrets.GITHUB_TOKEN}} - name: Build (optionally publish) PeerDB Images - uses: depot/bake-action@v1 + uses: depot/bake-action@143e50b965398f1f5dc8463be7dde6f62b9e9c21 # v1 with: token: ${{ secrets.DEPOT_TOKEN }} files: ./docker-bake.hcl diff --git a/.github/workflows/ui-build.yml b/.github/workflows/ui-build.yml index feea1ffda..7915445fe 100644 --- a/.github/workflows/ui-build.yml +++ b/.github/workflows/ui-build.yml @@ -16,7 +16,7 @@ jobs: runs-on: ${{ matrix.runner }} steps: - name: checkout - uses: actions/checkout@v4 + uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos diff --git a/.github/workflows/ui-lint.yml b/.github/workflows/ui-lint.yml index 31e2340ff..6fb1f2b82 100644 --- a/.github/workflows/ui-lint.yml +++ b/.github/workflows/ui-lint.yml @@ -20,7 +20,7 @@ jobs: runs-on: ${{ matrix.runner }} steps: - name: checkout - uses: actions/checkout@v4 + uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4 - name: generate or hydrate protos uses: ./.github/actions/genprotos @@ -30,7 +30,7 @@ jobs: run: npm ci - name: lint - uses: wearerequired/lint-action@v2 + uses: wearerequired/lint-action@548d8a7c4b04d3553d32ed5b6e91eb171e10e7bb # v2 with: eslint: true prettier: true From 51d58ce2e344e5ddf57ce5f7ec3b936dc51f010d Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Tue, 19 Nov 2024 22:00:19 +0530 Subject: [PATCH 25/61] feat: use latest-stable in docker-compose.yml (#2270) renovate will automatically pin the latest stable weekly to ensure that whenever docker compose up is run again after updating the repo, it will use the latest sha --- docker-compose.yml | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docker-compose.yml b/docker-compose.yml index 4c59bc889..cf1ec3efe 100644 --- a/docker-compose.yml +++ 
b/docker-compose.yml @@ -112,7 +112,7 @@ services: flow-api: container_name: flow_api - image: ghcr.io/peerdb-io/flow-api:latest-dev@sha256:e1800e45df01c20c654628f106d06b3e61fddaa71930cb044bb8a119f574eb69 + image: ghcr.io/peerdb-io/flow-api:latest-stable restart: unless-stopped ports: - 8112:8112 @@ -128,7 +128,7 @@ services: flow-snapshot-worker: container_name: flow-snapshot-worker - image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-dev@sha256:995d426604275f14c59bbc198d382825832cf4f69b3f05083d746b78b28952ca + image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-stable restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -138,7 +138,7 @@ services: flow-worker: container_name: flow-worker - image: ghcr.io/peerdb-io/flow-worker:latest-dev@sha256:66afae04c334af2a7cdb791f8f5a025a287b88346a6f72fdbb62156a898f62f3 + image: ghcr.io/peerdb-io/flow-worker:latest-stable restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -151,7 +151,7 @@ services: peerdb: container_name: peerdb-server stop_signal: SIGINT - image: ghcr.io/peerdb-io/peerdb-server:latest-dev@sha256:9abc8184f8104599105cefc8f14c708cb9283e99bf904b860258a0d303bebea6 + image: ghcr.io/peerdb-io/peerdb-server:latest-stable restart: unless-stopped environment: <<: *catalog-config @@ -167,7 +167,7 @@ services: peerdb-ui: container_name: peerdb-ui - image: ghcr.io/peerdb-io/peerdb-ui:latest-dev@sha256:d6261dd51d3f15d2a47fd690e19c87ffa2d59aeac3d3e950781fca56ab39f95c + image: ghcr.io/peerdb-io/peerdb-ui:latest-stable restart: unless-stopped ports: - 3000:3000 From 61fe601a503cd7d96d58420783c6248a074df60b Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Tue, 19 Nov 2024 17:23:08 +0000 Subject: [PATCH 26/61] fix(deps): update npm dependencies (#2268) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | 
Package | Change | Age | Adoption | Passing | Confidence | Type | Update | |---|---|---|---|---|---|---|---| | @​radix-ui/react-icons | [`1.3.1` -> `1.3.2`](https://renovatebot.com/diffs/npm/@radix-ui%2freact-icons/1.3.1/1.3.2) | [![age](https://developer.mend.io/api/mc/badges/age/npm/@radix-ui%2freact-icons/1.3.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@radix-ui%2freact-icons/1.3.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@radix-ui%2freact-icons/1.3.1/1.3.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@radix-ui%2freact-icons/1.3.1/1.3.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | dependencies | patch | | [@radix-ui/react-tooltip](https://radix-ui.com/primitives) ([source](https://redirect.github.com/radix-ui/primitives)) | [`1.1.3` -> `1.1.4`](https://renovatebot.com/diffs/npm/@radix-ui%2freact-tooltip/1.1.3/1.1.4) | [![age](https://developer.mend.io/api/mc/badges/age/npm/@radix-ui%2freact-tooltip/1.1.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@radix-ui%2freact-tooltip/1.1.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@radix-ui%2freact-tooltip/1.1.3/1.1.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@radix-ui%2freact-tooltip/1.1.3/1.1.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | dependencies | patch | | [@tremor/react](https://redirect.github.com/tremorlabs/tremor) | [`3.18.3` -> `3.18.4`](https://renovatebot.com/diffs/npm/@tremor%2freact/3.18.3/3.18.4) | 
[![age](https://developer.mend.io/api/mc/badges/age/npm/@tremor%2freact/3.18.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@tremor%2freact/3.18.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@tremor%2freact/3.18.3/3.18.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@tremor%2freact/3.18.3/3.18.4?slim=true)](https://docs.renovatebot.com/merge-confidence/) | dependencies | patch | | [autoprefixer](https://redirect.github.com/postcss/autoprefixer) | [`^10.4.20` -> `10.4.20`](https://renovatebot.com/diffs/npm/autoprefixer/10.4.20/10.4.20) | [![age](https://developer.mend.io/api/mc/badges/age/npm/autoprefixer/10.4.20?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/autoprefixer/10.4.20?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/autoprefixer/10.4.20/10.4.20?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/autoprefixer/10.4.20/10.4.20?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [copy-webpack-plugin](https://redirect.github.com/webpack-contrib/copy-webpack-plugin) | [`^12.0.2` -> `12.0.2`](https://renovatebot.com/diffs/npm/copy-webpack-plugin/12.0.2/12.0.2) | [![age](https://developer.mend.io/api/mc/badges/age/npm/copy-webpack-plugin/12.0.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/copy-webpack-plugin/12.0.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/copy-webpack-plugin/12.0.2/12.0.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/copy-webpack-plugin/12.0.2/12.0.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [eslint](https://eslint.org) ([source](https://redirect.github.com/eslint/eslint)) | [`^8.57.1` -> `8.57.1`](https://renovatebot.com/diffs/npm/eslint/8.57.1/8.57.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/eslint/8.57.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/eslint/8.57.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/eslint/8.57.1/8.57.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/eslint/8.57.1/8.57.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [eslint-config-next](https://nextjs.org/docs/app/building-your-application/configuring/eslint#eslint-config) ([source](https://redirect.github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)) | [`^14.2.14` -> `14.2.17`](https://renovatebot.com/diffs/npm/eslint-config-next/14.2.17/14.2.17) | [![age](https://developer.mend.io/api/mc/badges/age/npm/eslint-config-next/14.2.17?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/eslint-config-next/14.2.17?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/eslint-config-next/14.2.17/14.2.17?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/eslint-config-next/14.2.17/14.2.17?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [eslint-config-prettier](https://redirect.github.com/prettier/eslint-config-prettier) | [`^9.1.0` -> `9.1.0`](https://renovatebot.com/diffs/npm/eslint-config-prettier/9.1.0/9.1.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/eslint-config-prettier/9.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/eslint-config-prettier/9.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/eslint-config-prettier/9.1.0/9.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/eslint-config-prettier/9.1.0/9.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [less](http://lesscss.org) ([source](https://redirect.github.com/less/less.js)) | [`^4.2.0` -> `4.2.0`](https://renovatebot.com/diffs/npm/less/4.2.0/4.2.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/less/4.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/less/4.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/less/4.2.0/4.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/less/4.2.0/4.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [lucide-react](https://lucide.dev) ([source](https://redirect.github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react)) | [`^0.454.0` -> 
`^0.460.0`](https://renovatebot.com/diffs/npm/lucide-react/0.454.0/0.460.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/lucide-react/0.460.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/lucide-react/0.460.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/lucide-react/0.454.0/0.460.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/lucide-react/0.454.0/0.460.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | dependencies | minor | | [material-symbols](https://marella.github.io/material-symbols/demo/) ([source](https://redirect.github.com/marella/material-symbols/tree/HEAD/material-symbols)) | [`^0.26.0` -> `^0.27.0`](https://renovatebot.com/diffs/npm/material-symbols/0.26.0/0.27.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/material-symbols/0.27.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/material-symbols/0.27.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/material-symbols/0.26.0/0.27.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/material-symbols/0.26.0/0.27.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | dependencies | minor | | [postcss](https://postcss.org/) ([source](https://redirect.github.com/postcss/postcss)) | [`^8.4.47` -> `8.4.47`](https://renovatebot.com/diffs/npm/postcss/8.4.47/8.4.47) | [![age](https://developer.mend.io/api/mc/badges/age/npm/postcss/8.4.47?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/postcss/8.4.47?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/postcss/8.4.47/8.4.47?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/postcss/8.4.47/8.4.47?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [prettier](https://prettier.io) ([source](https://redirect.github.com/prettier/prettier)) | [`^3.3.3` -> `3.3.3`](https://renovatebot.com/diffs/npm/prettier/3.3.3/3.3.3) | [![age](https://developer.mend.io/api/mc/badges/age/npm/prettier/3.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/prettier/3.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/prettier/3.3.3/3.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/prettier/3.3.3/3.3.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [prettier-plugin-organize-imports](https://redirect.github.com/simonhaenisch/prettier-plugin-organize-imports) | [`^4.1.0` -> `4.1.0`](https://renovatebot.com/diffs/npm/prettier-plugin-organize-imports/4.1.0/4.1.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/prettier-plugin-organize-imports/4.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/prettier-plugin-organize-imports/4.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/prettier-plugin-organize-imports/4.1.0/4.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/prettier-plugin-organize-imports/4.1.0/4.1.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [react-select](https://redirect.github.com/JedWatson/react-select/tree/master#readme) ([source](https://redirect.github.com/JedWatson/react-select)) | [`5.8.2` -> `5.8.3`](https://renovatebot.com/diffs/npm/react-select/5.8.2/5.8.3) | [![age](https://developer.mend.io/api/mc/badges/age/npm/react-select/5.8.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/react-select/5.8.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/react-select/5.8.2/5.8.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/react-select/5.8.2/5.8.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | dependencies | patch | | [string-width](https://redirect.github.com/sindresorhus/string-width) | [`^7.2.0` -> `7.2.0`](https://renovatebot.com/diffs/npm/string-width/7.2.0/7.2.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/string-width/7.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/string-width/7.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/string-width/7.2.0/7.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/string-width/7.2.0/7.2.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [tailwindcss](https://tailwindcss.com) ([source](https://redirect.github.com/tailwindlabs/tailwindcss)) | [`^3.4.13` -> 
`3.4.14`](https://renovatebot.com/diffs/npm/tailwindcss/3.4.14/3.4.14) | [![age](https://developer.mend.io/api/mc/badges/age/npm/tailwindcss/3.4.14?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/tailwindcss/3.4.14?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/tailwindcss/3.4.14/3.4.14?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/tailwindcss/3.4.14/3.4.14?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | tailwindcss-animate | [`^1.0.7` -> `1.0.7`](https://renovatebot.com/diffs/npm/tailwindcss-animate/1.0.7/1.0.7) | [![age](https://developer.mend.io/api/mc/badges/age/npm/tailwindcss-animate/1.0.7?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/tailwindcss-animate/1.0.7?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/tailwindcss-animate/1.0.7/1.0.7?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/tailwindcss-animate/1.0.7/1.0.7?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [typescript](https://www.typescriptlang.org/) ([source](https://redirect.github.com/microsoft/TypeScript)) | [`^5.6.2` -> `5.6.3`](https://renovatebot.com/diffs/npm/typescript/5.6.3/5.6.3) | [![age](https://developer.mend.io/api/mc/badges/age/npm/typescript/5.6.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/typescript/5.6.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/typescript/5.6.3/5.6.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/typescript/5.6.3/5.6.3?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | | [webpack](https://redirect.github.com/webpack/webpack) | [`^5.95.0` -> `5.96.1`](https://renovatebot.com/diffs/npm/webpack/5.96.1/5.96.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/webpack/5.96.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/webpack/5.96.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/webpack/5.96.1/5.96.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/webpack/5.96.1/5.96.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | devDependencies | pin | --- ### Release Notes
radix-ui/primitives (@​radix-ui/react-tooltip) ### [`v1.1.4`](https://redirect.github.com/radix-ui/primitives/compare/157415ed1f34c53b5afbf53a047895ed6a7f957f...d70e369c11188107319a4e11b598919251724629) [Compare Source](https://redirect.github.com/radix-ui/primitives/compare/157415ed1f34c53b5afbf53a047895ed6a7f957f...d70e369c11188107319a4e11b598919251724629)
tremorlabs/tremor (@​tremor/react) ### [`v3.18.4`](https://redirect.github.com/tremorlabs/tremor/compare/v3.18.3...v3.18.4) [Compare Source](https://redirect.github.com/tremorlabs/tremor/compare/v3.18.3...v3.18.4)
lucide-icons/lucide (lucide-react) ### [`v0.460.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.460.0): New icons 0.460.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.459.0...0.460.0) #### New icons 🎨 - `toilet` ([#​2141](https://redirect.github.com/lucide-icons/lucide/issues/2141)) by [@​EthanHazel](https://redirect.github.com/EthanHazel) ### [`v0.459.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.459.0): New icons 0.459.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.458.0...0.459.0) #### New icons 🎨 - `equal-approximately` ([#​2594](https://redirect.github.com/lucide-icons/lucide/issues/2594)) by [@​ksk3110](https://redirect.github.com/ksk3110) ### [`v0.458.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.458.0): New icons 0.458.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.457.0...0.458.0) #### New icons 🎨 - `cloud-alert` ([#​2611](https://redirect.github.com/lucide-icons/lucide/issues/2611)) by [@​lscheibel](https://redirect.github.com/lscheibel) #### Modified Icons 🔨 - `drill` ([#​1919](https://redirect.github.com/lucide-icons/lucide/issues/1919)) by [@​jguddas](https://redirect.github.com/jguddas) #### Other Changes - feat(lucide-svelte): Aliased imports for direct imports by [@​ericfennis](https://redirect.github.com/ericfennis) in [https://github.com/lucide-icons/lucide/pull/2584](https://redirect.github.com/lucide-icons/lucide/pull/2584) **Full Changelog**: https://github.com/lucide-icons/lucide/compare/0.457.0...0.458.0 ### [`v0.457.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.457.0): New icons 0.457.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.456.0...0.457.0) #### New icons 🎨 - `laptop-minimal-check` ([#​2563](https://redirect.github.com/lucide-icons/lucide/issues/2563)) by [@​jguddas](https://redirect.github.com/jguddas) #### Modified Icons 🔨 - `bath` 
([#​2512](https://redirect.github.com/lucide-icons/lucide/issues/2512)) by [@​jamiemlaw](https://redirect.github.com/jamiemlaw) - `cross` ([#​2578](https://redirect.github.com/lucide-icons/lucide/issues/2578)) by [@​jguddas](https://redirect.github.com/jguddas) - `hand-platter` ([#​2326](https://redirect.github.com/lucide-icons/lucide/issues/2326)) by [@​karsa-mistmere](https://redirect.github.com/karsa-mistmere) - `hard-hat` ([#​2559](https://redirect.github.com/lucide-icons/lucide/issues/2559)) by [@​jguddas](https://redirect.github.com/jguddas) - `heading-4` ([#​2546](https://redirect.github.com/lucide-icons/lucide/issues/2546)) by [@​jguddas](https://redirect.github.com/jguddas) - `puzzle` ([#​2603](https://redirect.github.com/lucide-icons/lucide/issues/2603)) by [@​jamiemlaw](https://redirect.github.com/jamiemlaw) - `school` ([#​2598](https://redirect.github.com/lucide-icons/lucide/issues/2598)) by [@​jguddas](https://redirect.github.com/jguddas) - `vegan` ([#​2556](https://redirect.github.com/lucide-icons/lucide/issues/2556)) by [@​jguddas](https://redirect.github.com/jguddas) ### [`v0.456.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.456.0): Choosing import name style 0.456.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.455.0...0.456.0) #### What's Changed - ci(pull-request): Fix generate comments for empty changes by [@​ericfennis](https://redirect.github.com/ericfennis) in [https://github.com/lucide-icons/lucide/pull/2593](https://redirect.github.com/lucide-icons/lucide/pull/2593) - feat(lucide-react, lucide-preact, lucide-react-native, lucide-solid, lucide-vue-next): Adjustable icon naming imports by [@​ericfennis](https://redirect.github.com/ericfennis) in [https://github.com/lucide-icons/lucide/pull/2328](https://redirect.github.com/lucide-icons/lucide/pull/2328) - fix(icons): changed `glass-water` icon by [@​jguddas](https://redirect.github.com/jguddas) in 
[https://github.com/lucide-icons/lucide/pull/2579](https://redirect.github.com/lucide-icons/lucide/pull/2579) ### Adjustable icon naming imports Customize import name styles for `lucide-react`, `lucide-vue`, `lucide-react-native`, `lucide-preact`, to manage autocompletion in your IDE. 1. **Turn off autocomplete in your IDE**: Add the following to your `settings.json` ```json { "typescript.preferences.autoImportFileExcludePatterns": [ "lucide-react", "lucide-preact", "lucide-react-native", "lucide-vue-next" ] } ``` 2. **Create a custom module declaration file**: It allows you to choose the import name style. For React: ```ts declare module "lucide-react" { // Prefixed import names export * from "lucide-react/dist/lucide-react.prefixed"; // or // Suffixed import names export * from "lucide-react/dist/lucide-react.suffixed"; } ``` For Vue: ```ts declare module "lucide-vue-next" { // Prefixed import names export * from "lucide-vue-next/dist/lucide-vue-next.prefixed"; // or // Suffixed import names export * from "lucide-vue-next/dist/lucide-vue-next.suffixed"; } ``` ### [`v0.455.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.455.0): New icons 0.455.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.454.0...0.455.0) #### New icons 🎨 - `wind-arrow-down` ([#​2554](https://redirect.github.com/lucide-icons/lucide/issues/2554)) by [@​jamiemlaw](https://redirect.github.com/jamiemlaw) #### Modified Icons 🔨 - `file-music` ([#​2536](https://redirect.github.com/lucide-icons/lucide/issues/2536)) by [@​jguddas](https://redirect.github.com/jguddas) - `slice` ([#​2500](https://redirect.github.com/lucide-icons/lucide/issues/2500)) by [@​jguddas](https://redirect.github.com/jguddas) - `undo-dot` ([#​2557](https://redirect.github.com/lucide-icons/lucide/issues/2557)) by [@​jguddas](https://redirect.github.com/jguddas) - `wind` ([#​2554](https://redirect.github.com/lucide-icons/lucide/issues/2554)) by 
[@​jamiemlaw](https://redirect.github.com/jamiemlaw)
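For context on the aliases that step 2 of the Lucide note above selects between: the lucide packages publish each icon under a plain, a prefixed, and a suffixed export name, and the module declaration only narrows which set the IDE offers for auto-import. A sketch (illustrative only; the `Home` icon and its alias spellings are assumed from the release notes, not verified against the package):

```ts
// All three names are assumed to resolve to the same component:
import { Home } from 'lucide-react'; // plain name (default)
import { LucideHome } from 'lucide-react'; // prefixed alias
import { HomeIcon } from 'lucide-react'; // suffixed alias
```

With the `.prefixed` or `.suffixed` re-export from step 2 in place, only the corresponding alias set remains visible to completion.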
marella/material-symbols (material-symbols) ### [`v0.27.0`](https://redirect.github.com/marella/material-symbols/compare/v0.26.0...v0.27.0) [Compare Source](https://redirect.github.com/marella/material-symbols/compare/v0.26.0...v0.27.0)
JedWatson/react-select (react-select) ### [`v5.8.3`](https://redirect.github.com/JedWatson/react-select/releases/tag/react-select%405.8.3) [Compare Source](https://redirect.github.com/JedWatson/react-select/compare/react-select@5.8.2...react-select@5.8.3) ##### Patch Changes - [`111efad1`](https://redirect.github.com/JedWatson/react-select/commit/111efad170a11dbae96ae436251cd028e702eb72) [#​5974](https://redirect.github.com/JedWatson/react-select/pull/5974) Thanks [@​j2ghz](https://redirect.github.com/j2ghz)! - Fix types compatibility with React 19
--- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- ui/package-lock.json | 72 ++++++++++++++++++++++---------------------- ui/package.json | 32 ++++++++++---------- 2 files changed, 52 insertions(+), 52 deletions(-) diff --git a/ui/package-lock.json b/ui/package-lock.json index 5d64807ea..f233b02b9 100644 --- a/ui/package-lock.json +++ b/ui/package-lock.json @@ -29,8 +29,8 @@ "@types/react": "^18.3.11", "@types/react-dom": "^18.3.0", "classnames": "^2.5.1", - "lucide-react": "^0.454.0", - "material-symbols": "^0.26.0", + "lucide-react": "^0.460.0", + "material-symbols": "^0.27.0", "moment": "^2.30.1", "moment-timezone": "^0.5.46", "next": "^14.2.14", @@ -48,20 +48,20 @@ "zod": "^3.23.8" }, "devDependencies": { - "autoprefixer": "^10.4.20", - "copy-webpack-plugin": "^12.0.2", - "eslint": "^8.57.1", - "eslint-config-next": "^14.2.14", - "eslint-config-prettier": "^9.1.0", - "less": "^4.2.0", - "postcss": "^8.4.47", - "prettier": "^3.3.3", - "prettier-plugin-organize-imports": "^4.1.0", - "string-width": "^7.2.0", - "tailwindcss": "^3.4.13", - "tailwindcss-animate": "^1.0.7", - "typescript": "^5.6.2", - "webpack": "^5.95.0" + "autoprefixer": "10.4.20", + "copy-webpack-plugin": "12.0.2", + "eslint": "8.57.1", + "eslint-config-next": "14.2.17", + "eslint-config-prettier": "9.1.0", + "less": "4.2.0", + 
"postcss": "8.4.47", + "prettier": "3.3.3", + "prettier-plugin-organize-imports": "4.1.0", + "string-width": "7.2.0", + "tailwindcss": "3.4.14", + "tailwindcss-animate": "1.0.7", + "typescript": "5.6.3", + "webpack": "5.96.1" } }, "node_modules/@alloc/quick-lru": { @@ -1337,12 +1337,12 @@ } }, "node_modules/@radix-ui/react-icons": { - "version": "1.3.1", - "resolved": "https://registry.npmjs.org/@radix-ui/react-icons/-/react-icons-1.3.1.tgz", - "integrity": "sha512-QvYompk0X+8Yjlo/Fv4McrzxohDdM5GgLHyQcPpcsPvlOSXCGFjdbuyGL5dzRbg0GpknAjQJJZzdiRK7iWVuFQ==", + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/@radix-ui/react-icons/-/react-icons-1.3.2.tgz", + "integrity": "sha512-fyQIhGDhzfc9pK2kH6Pl9c4BDJGfMkPqkyIgYDthyNYoNg3wVhoJMMh19WS4Up/1KMPFVpNsT2q3WmXn2N1m6g==", "license": "MIT", "peerDependencies": { - "react": "^16.x || ^17.x || ^18.x || ^19.x" + "react": "^16.x || ^17.x || ^18.x || ^19.0.0 || ^19.0.0-rc" } }, "node_modules/@radix-ui/react-id": { @@ -1879,9 +1879,9 @@ } }, "node_modules/@radix-ui/react-tooltip": { - "version": "1.1.3", - "resolved": "https://registry.npmjs.org/@radix-ui/react-tooltip/-/react-tooltip-1.1.3.tgz", - "integrity": "sha512-Z4w1FIS0BqVFI2c1jZvb/uDVJijJjJ2ZMuPV81oVgTZ7g3BZxobplnMVvXtFWgtozdvYJ+MFWtwkM5S2HnAong==", + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/@radix-ui/react-tooltip/-/react-tooltip-1.1.4.tgz", + "integrity": "sha512-QpObUH/ZlpaO4YgHSaYzrLO2VuO+ZBFFgGzjMUPwtiYnAzzNNDPJeEGRrT7qNOrWm/Jr08M1vlp+vTHtnSQ0Uw==", "license": "MIT", "dependencies": { "@radix-ui/primitive": "1.1.0", @@ -2129,9 +2129,9 @@ } }, "node_modules/@tremor/react": { - "version": "3.18.3", - "resolved": "https://registry.npmjs.org/@tremor/react/-/react-3.18.3.tgz", - "integrity": "sha512-7QyGE2W9f2FpwH24TKy3/mqBgLl4sHZeQcXP3rxXZ8W2AUq7AVaG1+vIT3xXxISrkh7zknjWlZsuhoF8NWNVDw==", + "version": "3.18.4", + "resolved": "https://registry.npmjs.org/@tremor/react/-/react-3.18.4.tgz", + "integrity": 
"sha512-HDjYbuzxQIZvosGzB1j1nCSuLLRdKRHPfRmoGUyI57cesbThFzWuFHz07Sio9Vhk/ew3TKJUZPy+ljfZ3u1M4g==", "license": "Apache 2.0", "dependencies": { "@floating-ui/react": "^0.19.2", @@ -2140,7 +2140,7 @@ "date-fns": "^3.6.0", "react-day-picker": "^8.10.1", "react-transition-state": "^2.1.2", - "recharts": "^2.12.7", + "recharts": "^2.13.3", "tailwind-merge": "^2.5.2" }, "peerDependencies": { @@ -6001,9 +6001,9 @@ } }, "node_modules/lucide-react": { - "version": "0.454.0", - "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.454.0.tgz", - "integrity": "sha512-hw7zMDwykCLnEzgncEEjHeA6+45aeEzRYuKHuyRSOPkhko+J3ySGjGIzu+mmMfDFG1vazHepMaYFYHbTFAZAAQ==", + "version": "0.460.0", + "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.460.0.tgz", + "integrity": "sha512-BVtq/DykVeIvRTJvRAgCsOwaGL8Un3Bxh8MbDxMhEWlZay3T4IpEKDEpwt5KZ0KJMHzgm6jrltxlT5eXOWXDHg==", "license": "ISC", "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0-rc" @@ -6036,9 +6036,9 @@ } }, "node_modules/material-symbols": { - "version": "0.26.0", - "resolved": "https://registry.npmjs.org/material-symbols/-/material-symbols-0.26.0.tgz", - "integrity": "sha512-7WefpjuZLsXjE4MHlbi7QVca9y6M45YJws8oC3l7UITfpGDxVwEddQaaqYqtGMGVRFeBw/dIxmlazR5eeZH0rg==", + "version": "0.27.0", + "resolved": "https://registry.npmjs.org/material-symbols/-/material-symbols-0.27.0.tgz", + "integrity": "sha512-nRHpnw2Cz7eNl6GptgHHhkjauL0zvkYsuiqy1HBifOYCY4fdbZ/PwtdZN4RNmwA+9jQPoymvlArVPPX5nYTdZg==", "license": "Apache-2.0" }, "node_modules/memoize-one": { @@ -7165,9 +7165,9 @@ } }, "node_modules/react-select": { - "version": "5.8.2", - "resolved": "https://registry.npmjs.org/react-select/-/react-select-5.8.2.tgz", - "integrity": "sha512-a/LkOckoI62710gGPQSQqUp7A10fGbH/ya3/IR49qaq3XoBvwymgD5mJgtiHxBDsutyEQfdKNycWVh8Cg8UCjw==", + "version": "5.8.3", + "resolved": "https://registry.npmjs.org/react-select/-/react-select-5.8.3.tgz", + "integrity": 
"sha512-lVswnIq8/iTj1db7XCG74M/3fbGB6ZaluCzvwPGT5ZOjCdL/k0CLWhEK0vCBLuU5bHTEf6Gj8jtSvi+3v+tO1w==", "license": "MIT", "dependencies": { "@babel/runtime": "^7.12.0", diff --git a/ui/package.json b/ui/package.json index 3f4259838..d755ce864 100644 --- a/ui/package.json +++ b/ui/package.json @@ -31,8 +31,8 @@ "@types/react": "^18.3.11", "@types/react-dom": "^18.3.0", "classnames": "^2.5.1", - "lucide-react": "^0.454.0", - "material-symbols": "^0.26.0", + "lucide-react": "^0.460.0", + "material-symbols": "^0.27.0", "moment": "^2.30.1", "moment-timezone": "^0.5.46", "next": "^14.2.14", @@ -50,19 +50,19 @@ "zod": "^3.23.8" }, "devDependencies": { - "autoprefixer": "^10.4.20", - "copy-webpack-plugin": "^12.0.2", - "eslint": "^8.57.1", - "eslint-config-next": "^14.2.14", - "eslint-config-prettier": "^9.1.0", - "less": "^4.2.0", - "postcss": "^8.4.47", - "prettier": "^3.3.3", - "prettier-plugin-organize-imports": "^4.1.0", - "string-width": "^7.2.0", - "tailwindcss": "^3.4.13", - "tailwindcss-animate": "^1.0.7", - "typescript": "^5.6.2", - "webpack": "^5.95.0" + "autoprefixer": "10.4.20", + "copy-webpack-plugin": "12.0.2", + "eslint": "8.57.1", + "eslint-config-next": "14.2.17", + "eslint-config-prettier": "9.1.0", + "less": "4.2.0", + "postcss": "8.4.47", + "prettier": "3.3.3", + "prettier-plugin-organize-imports": "4.1.0", + "string-width": "7.2.0", + "tailwindcss": "3.4.14", + "tailwindcss-animate": "1.0.7", + "typescript": "5.6.3", + "webpack": "5.96.1" } } From f57dd0c4ddfd02451f9985a3cb2852a646e55b96 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Wed, 20 Nov 2024 00:19:16 +0000 Subject: [PATCH 27/61] Replace SyncGauge with upstream otel sdk gauges (#2272) Upstream added gauges in May: https://github.com/open-telemetry/opentelemetry-go/pull/5304 --- flow/cmd/worker.go | 5 +- flow/connectors/postgres/postgres.go | 33 ++++--- flow/otel_metrics/otel_manager.go | 4 +- flow/otel_metrics/peerdb_gauges/gauges.go | 10 ++- flow/otel_metrics/sync_gauges.go | 
104 ++-------------------- 5 files changed, 38 insertions(+), 118 deletions(-) diff --git a/flow/cmd/worker.go b/flow/cmd/worker.go index 5c16376a1..cca0202ec 100644 --- a/flow/cmd/worker.go +++ b/flow/cmd/worker.go @@ -10,6 +10,7 @@ import ( "runtime" "github.com/grafana/pyroscope-go" + "go.opentelemetry.io/otel/metric" "go.temporal.io/sdk/client" temporalotel "go.temporal.io/sdk/contrib/opentelemetry" "go.temporal.io/sdk/worker" @@ -157,8 +158,8 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) { otelManager = &otel_metrics.OtelManager{ MetricsProvider: metricsProvider, Meter: metricsProvider.Meter("io.peerdb.flow-worker"), - Float64GaugesCache: make(map[string]*otel_metrics.Float64SyncGauge), - Int64GaugesCache: make(map[string]*otel_metrics.Int64SyncGauge), + Float64GaugesCache: make(map[string]metric.Float64Gauge), + Int64GaugesCache: make(map[string]metric.Int64Gauge), } cleanupOtelManagerFunc = func() { shutDownErr := otelManager.MetricsProvider.Shutdown(context.Background()) diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index 593a94aa4..ae0dbea52 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -17,6 +17,7 @@ import ( "github.com/jackc/pgx/v5/pgtype" "github.com/jackc/pgx/v5/pgxpool" "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/metric" "go.temporal.io/sdk/log" "go.temporal.io/sdk/temporal" @@ -1214,11 +1215,12 @@ func (c *PostgresConnector) HandleSlotInfo( logger.Info(fmt.Sprintf("Checking %s lag for %s", alertKeys.SlotName, alertKeys.PeerName), slog.Float64("LagInMB", float64(slotInfo[0].LagInMb))) alerter.AlertIfSlotLag(ctx, alertKeys, slotInfo[0]) - slotMetricGauges.SlotLagGauge.Set(float64(slotInfo[0].LagInMb), attribute.NewSet( + slotMetricGauges.SlotLagGauge.Record(ctx, float64(slotInfo[0].LagInMb), metric.WithAttributeSet(attribute.NewSet( attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), 
attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), attribute.String(otel_metrics.SlotNameKey, alertKeys.SlotName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()))) + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID())), + )) // Also handles alerts for PeerDB user connections exceeding a given limit here res, err := getOpenConnectionsForUser(ctx, c.conn, c.config.User) @@ -1227,10 +1229,11 @@ func (c *PostgresConnector) HandleSlotInfo( return err } alerter.AlertIfOpenConnections(ctx, alertKeys, res) - slotMetricGauges.OpenConnectionsGauge.Set(res.CurrentOpenConnections, attribute.NewSet( + slotMetricGauges.OpenConnectionsGauge.Record(ctx, res.CurrentOpenConnections, metric.WithAttributeSet(attribute.NewSet( attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()))) + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + ))) replicationRes, err := getOpenReplicationConnectionsForUser(ctx, c.conn, c.config.User) if err != nil { @@ -1238,10 +1241,13 @@ func (c *PostgresConnector) HandleSlotInfo( return err } - slotMetricGauges.OpenReplicationConnectionsGauge.Set(replicationRes.CurrentOpenConnections, attribute.NewSet( - attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), - attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()))) + slotMetricGauges.OpenReplicationConnectionsGauge.Record(ctx, replicationRes.CurrentOpenConnections, + metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), + attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + )), + ) var 
intervalSinceLastNormalize *time.Duration if err := alerter.CatalogPool.QueryRow( @@ -1255,10 +1261,13 @@ func (c *PostgresConnector) HandleSlotInfo( return nil } if intervalSinceLastNormalize != nil { - slotMetricGauges.IntervalSinceLastNormalizeGauge.Set(intervalSinceLastNormalize.Seconds(), attribute.NewSet( - attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), - attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()))) + slotMetricGauges.IntervalSinceLastNormalizeGauge.Record(ctx, intervalSinceLastNormalize.Seconds(), + metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), + attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + )), + ) alerter.AlertIfTooLongSinceLastNormalize(ctx, alertKeys, *intervalSinceLastNormalize) } diff --git a/flow/otel_metrics/otel_manager.go b/flow/otel_metrics/otel_manager.go index becf13a16..c59adecd4 100644 --- a/flow/otel_metrics/otel_manager.go +++ b/flow/otel_metrics/otel_manager.go @@ -20,8 +20,8 @@ import ( type OtelManager struct { MetricsProvider *sdkmetric.MeterProvider Meter metric.Meter - Float64GaugesCache map[string]*Float64SyncGauge - Int64GaugesCache map[string]*Int64SyncGauge + Float64GaugesCache map[string]metric.Float64Gauge + Int64GaugesCache map[string]metric.Int64Gauge } // newOtelResource returns a resource describing this application. 
diff --git a/flow/otel_metrics/peerdb_gauges/gauges.go b/flow/otel_metrics/peerdb_gauges/gauges.go index 767aac094..a3b7d5c3e 100644 --- a/flow/otel_metrics/peerdb_gauges/gauges.go +++ b/flow/otel_metrics/peerdb_gauges/gauges.go @@ -1,6 +1,8 @@ package peerdb_gauges import ( + "go.opentelemetry.io/otel/metric" + "github.com/PeerDB-io/peer-flow/otel_metrics" ) @@ -12,10 +14,10 @@ const ( ) type SlotMetricGauges struct { - SlotLagGauge *otel_metrics.Float64SyncGauge - OpenConnectionsGauge *otel_metrics.Int64SyncGauge - OpenReplicationConnectionsGauge *otel_metrics.Int64SyncGauge - IntervalSinceLastNormalizeGauge *otel_metrics.Float64SyncGauge + SlotLagGauge metric.Float64Gauge + OpenConnectionsGauge metric.Int64Gauge + OpenReplicationConnectionsGauge metric.Int64Gauge + IntervalSinceLastNormalizeGauge metric.Float64Gauge } func BuildGaugeName(baseGaugeName string) string { diff --git a/flow/otel_metrics/sync_gauges.go b/flow/otel_metrics/sync_gauges.go index d2ef4924c..e9da02c87 100644 --- a/flow/otel_metrics/sync_gauges.go +++ b/flow/otel_metrics/sync_gauges.go @@ -1,106 +1,15 @@ package otel_metrics import ( - "context" - "fmt" - "sync" - - "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/metric" ) -type ObservationMapValue[V comparable] struct { - Value V -} - -// SyncGauge is a generic synchronous gauge that can be used to observe any type of value -// Inspired from https://github.com/open-telemetry/opentelemetry-go/issues/3984#issuecomment-1743231837 -type SyncGauge[V comparable, O metric.Observable] struct { - observableGauge O - observations sync.Map - name string -} - -func (a *SyncGauge[V, O]) Callback(ctx context.Context, observeFunc func(value V, options ...metric.ObserveOption)) error { - a.observations.Range(func(key, value interface{}) bool { - attrs := key.(attribute.Set) - val := value.(*ObservationMapValue[V]) - observeFunc(val.Value, metric.WithAttributeSet(attrs)) - // If the pointer is still same we can safely delete, else it means 
that the value was overwritten in parallel - a.observations.CompareAndDelete(attrs, val) - return true - }) - return nil -} - -func (a *SyncGauge[V, O]) Set(input V, attrs attribute.Set) { - val := ObservationMapValue[V]{Value: input} - a.observations.Store(attrs, &val) -} - -type Int64SyncGauge struct { - syncGauge *SyncGauge[int64, metric.Int64Observable] -} - -func (a *Int64SyncGauge) Set(input int64, attrs attribute.Set) { - if a == nil { - return - } - a.syncGauge.Set(input, attrs) -} - -func NewInt64SyncGauge(meter metric.Meter, gaugeName string, opts ...metric.Int64ObservableGaugeOption) (*Int64SyncGauge, error) { - syncGauge := &SyncGauge[int64, metric.Int64Observable]{ - name: gaugeName, - } - observableGauge, err := meter.Int64ObservableGauge(gaugeName, - append(opts, metric.WithInt64Callback(func(ctx context.Context, observer metric.Int64Observer) error { - return syncGauge.Callback(ctx, func(value int64, options ...metric.ObserveOption) { - observer.Observe(value, options...) - }) - }))...) - if err != nil { - return nil, fmt.Errorf("failed to create Int64SyncGauge: %w", err) - } - syncGauge.observableGauge = observableGauge - return &Int64SyncGauge{syncGauge: syncGauge}, nil -} - -type Float64SyncGauge struct { - syncGauge *SyncGauge[float64, metric.Float64Observable] -} - -func (a *Float64SyncGauge) Set(input float64, attrs attribute.Set) { - if a == nil { - return - } - a.syncGauge.Set(input, attrs) -} - -func NewFloat64SyncGauge(meter metric.Meter, gaugeName string, opts ...metric.Float64ObservableGaugeOption) (*Float64SyncGauge, error) { - syncGauge := &SyncGauge[float64, metric.Float64Observable]{ - name: gaugeName, - } - observableGauge, err := meter.Float64ObservableGauge(gaugeName, - append(opts, metric.WithFloat64Callback(func(ctx context.Context, observer metric.Float64Observer) error { - return syncGauge.Callback(ctx, func(value float64, options ...metric.ObserveOption) { - observer.Observe(value, options...) - }) - }))...) 
- if err != nil { - return nil, fmt.Errorf("failed to create Float64SyncGauge: %w", err) - } - syncGauge.observableGauge = observableGauge - return &Float64SyncGauge{syncGauge: syncGauge}, nil -} - -func GetOrInitInt64SyncGauge(meter metric.Meter, cache map[string]*Int64SyncGauge, name string, - opts ...metric.Int64ObservableGaugeOption, -) (*Int64SyncGauge, error) { +func GetOrInitInt64SyncGauge(meter metric.Meter, cache map[string]metric.Int64Gauge, name string, opts ...metric.Int64GaugeOption, +) (metric.Int64Gauge, error) { gauge, ok := cache[name] if !ok { var err error - gauge, err = NewInt64SyncGauge(meter, name, opts...) + gauge, err = meter.Int64Gauge(name, opts...) if err != nil { return nil, err } @@ -109,13 +18,12 @@ func GetOrInitInt64SyncGauge(meter metric.Meter, cache map[string]*Int64SyncGaug return gauge, nil } -func GetOrInitFloat64SyncGauge(meter metric.Meter, cache map[string]*Float64SyncGauge, - name string, opts ...metric.Float64ObservableGaugeOption, -) (*Float64SyncGauge, error) { +func GetOrInitFloat64SyncGauge(meter metric.Meter, cache map[string]metric.Float64Gauge, name string, opts ...metric.Float64GaugeOption, +) (metric.Float64Gauge, error) { gauge, ok := cache[name] if !ok { var err error - gauge, err = NewFloat64SyncGauge(meter, name, opts...) + gauge, err = meter.Float64Gauge(name, opts...) 
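The deleted `sync_gauges.go` emulated a synchronous gauge on top of an asynchronous (observable) instrument: `Set` stored the latest value per attribute set in a `sync.Map`, the SDK's collection callback drained the map, and `CompareAndDelete` removed an entry only if it still held the same pointer, so a value written concurrently with collection survived until the next cycle. The core trick can be sketched with the standard library alone (a plain `string` key stands in for the real `attribute.Set`):

```go
package main

import (
	"fmt"
	"sync"
)

type observation struct{ value float64 }

type syncGauge struct{ observations sync.Map }

// Set stores the latest value for an attribute set; repeated Sets before the
// next collection simply overwrite the previous pointer.
func (g *syncGauge) Set(attrs string, v float64) {
	g.observations.Store(attrs, &observation{value: v})
}

// callback plays the role of the SDK's collection callback: observe every
// stored value, then drain it from the map.
func (g *syncGauge) callback(observe func(attrs string, v float64)) {
	g.observations.Range(func(key, value any) bool {
		obs := value.(*observation)
		observe(key.(string), obs.value)
		// Delete only if the pointer is unchanged; a concurrent overwrite
		// keeps its value for the next collection cycle.
		g.observations.CompareAndDelete(key, value)
		return true
	})
}

func main() {
	var g syncGauge
	g.Set("flow=a", 1.5)
	g.Set("flow=a", 2.5) // overwrites: only the latest value is observed
	seen := map[string]float64{}
	g.callback(func(attrs string, v float64) { seen[attrs] = v })
	fmt.Println(seen["flow=a"]) // 2.5

	count := 0
	g.callback(func(string, float64) { count++ })
	fmt.Println(count) // 0: observations are drained after collection
}
```

Once the OTel Go API gained first-class synchronous gauges, this whole indirection became unnecessary, which is why the migration above deletes it in favor of `meter.Float64Gauge(...)` plus `Record`.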
if err != nil { return nil, err } From b8e55ea6d1e26f4782ec2198190086bec817564c Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Thu, 21 Nov 2024 02:44:20 +0530 Subject: [PATCH 28/61] Introduce Completed flow status (#2274) Introduces a new flow state - `Completed` Currently this is the state for an initial load only mirror which has finished initial load Functionally tested --- flow/e2e/test_utils.go | 2 +- flow/workflows/cdc_flow.go | 2 ++ protos/flow.proto | 1 + 3 files changed, 4 insertions(+), 1 deletion(-) diff --git a/flow/e2e/test_utils.go b/flow/e2e/test_utils.go index c3e577a4d..7fb3f857d 100644 --- a/flow/e2e/test_utils.go +++ b/flow/e2e/test_utils.go @@ -221,7 +221,7 @@ func SetupCDCFlowStatusQuery(t *testing.T, env WorkflowRun, config *protos.FlowC var status protos.FlowStatus if err := response.Get(&status); err != nil { t.Fatal(err) - } else if status == protos.FlowStatus_STATUS_RUNNING { + } else if status == protos.FlowStatus_STATUS_RUNNING || status == protos.FlowStatus_STATUS_COMPLETED { return } else if counter > 30 { env.Cancel() diff --git a/flow/workflows/cdc_flow.go b/flow/workflows/cdc_flow.go index 72e37b01f..bd1d5459d 100644 --- a/flow/workflows/cdc_flow.go +++ b/flow/workflows/cdc_flow.go @@ -485,6 +485,8 @@ func CDCFlowWorkflow( // if initial_copy_only is opted for, we end the flow here. 
if cfg.InitialSnapshotOnly { + logger.Info("initial snapshot only, ending flow") + state.CurrentFlowStatus = protos.FlowStatus_STATUS_COMPLETED return state, nil } } diff --git a/protos/flow.proto b/protos/flow.proto index de7bf740d..42170a563 100644 --- a/protos/flow.proto +++ b/protos/flow.proto @@ -385,6 +385,7 @@ enum FlowStatus { STATUS_SNAPSHOT = 5; STATUS_TERMINATING = 6; STATUS_TERMINATED = 7; + STATUS_COMPLETED = 8; } message CDCFlowConfigUpdate { From b5e2063d946e9cb30e6d739655ea0924ff3def2c Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Thu, 21 Nov 2024 03:08:11 +0530 Subject: [PATCH 29/61] Optimise state setting for initial load only (#2275) Set status to running after the initial load only check --- flow/workflows/cdc_flow.go | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/flow/workflows/cdc_flow.go b/flow/workflows/cdc_flow.go index bd1d5459d..0c97af9b7 100644 --- a/flow/workflows/cdc_flow.go +++ b/flow/workflows/cdc_flow.go @@ -480,15 +480,15 @@ func CDCFlowWorkflow( } } - state.CurrentFlowStatus = protos.FlowStatus_STATUS_RUNNING logger.Info("executed setup flow and snapshot flow") - // if initial_copy_only is opted for, we end the flow here. 
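Patches 28 and 29 interact: the first marks an initial-snapshot-only mirror `STATUS_COMPLETED` before returning, and the second moves the `STATUS_RUNNING` assignment after that early return, so a completed flow is never briefly reported as running. A stand-alone sketch of the resulting control flow (the enum and state struct are simplified stand-ins for the generated protobuf types):

```go
package main

import "fmt"

type FlowStatus int

const (
	StatusSetup FlowStatus = iota
	StatusRunning
	StatusCompleted
)

type state struct{ CurrentFlowStatus FlowStatus }

// afterSnapshot mirrors the post-snapshot branch of CDCFlowWorkflow: an
// initial-snapshot-only mirror is marked Completed and the workflow returns;
// Running is only set when CDC will actually continue.
func afterSnapshot(s *state, initialSnapshotOnly bool) bool {
	if initialSnapshotOnly {
		s.CurrentFlowStatus = StatusCompleted
		return false // flow ends here
	}
	s.CurrentFlowStatus = StatusRunning
	return true // continue into the sync flow
}

func main() {
	s := &state{}
	cont := afterSnapshot(s, true)
	fmt.Println(cont, s.CurrentFlowStatus == StatusCompleted) // false true
}
```

This ordering is also why the e2e helper in patch 28 accepts either `STATUS_RUNNING` or `STATUS_COMPLETED` as a terminal "setup finished" signal.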
if cfg.InitialSnapshotOnly { logger.Info("initial snapshot only, ending flow") state.CurrentFlowStatus = protos.FlowStatus_STATUS_COMPLETED return state, nil } + + state.CurrentFlowStatus = protos.FlowStatus_STATUS_RUNNING } syncFlowID := GetChildWorkflowID("sync-flow", cfg.FlowJobName, originalRunID) From 324afd01defb4f8425a173c591685f4865cbfbfd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Wed, 20 Nov 2024 23:40:35 +0000 Subject: [PATCH 30/61] Remove PEERDB_MAX_SYNCS_PER_CDC_FLOW (#2273) Already unused in rest of code --- flow/peerdbenv/dynamicconf.go | 8 -------- 1 file changed, 8 deletions(-) diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index 314b36573..b0cbe05f5 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -19,14 +19,6 @@ import ( ) var DynamicSettings = [...]*protos.DynamicSetting{ - { - Name: "PEERDB_MAX_SYNCS_PER_CDC_FLOW", - Description: "Experimental setting: changes number of syncs per workflow, affects frequency of replication slot disconnects", - DefaultValue: "32", - ValueType: protos.DynconfValueType_UINT, - ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, - TargetForSetting: protos.DynconfTarget_ALL, - }, { Name: "PEERDB_CDC_CHANNEL_BUFFER_SIZE", Description: "Advanced setting: changes buffer size of channel PeerDB uses while streaming rows read to destination in CDC", From 2aeb13de078f3b4f118c43cb51f89fde3cfcecd0 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Thu, 21 Nov 2024 11:12:49 +0000 Subject: [PATCH 31/61] chore(deps): pin dependencies (#2277) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Update | Change | |---|---|---| | ghcr.io/peerdb-io/flow-api | pinDigest | -> `53a6de3` | | ghcr.io/peerdb-io/flow-snapshot-worker | pinDigest | -> `036d609` | | ghcr.io/peerdb-io/flow-worker | pinDigest | -> 
`f5d4d5e` | | ghcr.io/peerdb-io/peerdb-server | pinDigest | -> `15249fc` | | ghcr.io/peerdb-io/peerdb-ui | pinDigest | -> `f4d1cdf` | | [temporalio/ui](https://redirect.github.com/temporalio/ui-server) | minor | `2.31.2` -> `2.32.0` | --- ### Release Notes
temporalio/ui-server (temporalio/ui) ### [`v2.32.0`](https://redirect.github.com/temporalio/ui-server/releases/tag/v2.32.0) [Compare Source](https://redirect.github.com/temporalio/ui-server/compare/v2.31.2...v2.32.0) - 2024-11-20 - [`10e52ae`](https://redirect.github.com/temporalio/ui-server/commit/10e52ae39) - Update version.go to v2.32.0 ([#​2435](https://redirect.github.com/temporalio/ui-server/issues/2435)) - 2024-11-20 - [`303d7f1`](https://redirect.github.com/temporalio/ui-server/commit/303d7f1b4) - KeywordList search attribute support ([#​2420](https://redirect.github.com/temporalio/ui-server/issues/2420)) - 2024-11-20 - [`49063e4`](https://redirect.github.com/temporalio/ui-server/commit/49063e442) - User Metadata ([#​2423](https://redirect.github.com/temporalio/ui-server/issues/2423)) - 2024-11-20 - [`a4b9d3c`](https://redirect.github.com/temporalio/ui-server/commit/a4b9d3c3e) - Add render route in ui-server ([#​2430](https://redirect.github.com/temporalio/ui-server/issues/2430)) - 2024-11-20 - [`532f95f`](https://redirect.github.com/temporalio/ui-server/commit/532f95fe2) - Use input name for Update attribute in event summary row ([#​2434](https://redirect.github.com/temporalio/ui-server/issues/2434)) - 2024-11-19 - [`bbaa395`](https://redirect.github.com/temporalio/ui-server/commit/bbaa395db) - Update combobox to support integration with async code ([#​2431](https://redirect.github.com/temporalio/ui-server/issues/2431)) - 2024-11-19 - [`105ad96`](https://redirect.github.com/temporalio/ui-server/commit/105ad96e9) - Bump [@​grpc/grpc-js](https://redirect.github.com/grpc/grpc-js) from 1.10.8 to 1.10.9 ([#​2200](https://redirect.github.com/temporalio/ui-server/issues/2200)) - 2024-11-19 - [`e6de85f`](https://redirect.github.com/temporalio/ui-server/commit/e6de85f82) - Bump rollup from 2.79.1 to 2.79.2 ([#​2419](https://redirect.github.com/temporalio/ui-server/issues/2419)) - 2024-11-19 - 
[`ce45d90`](https://redirect.github.com/temporalio/ui-server/commit/ce45d908f) - Bump vite from 5.2.14 to 5.3.6 ([#​2343](https://redirect.github.com/temporalio/ui-server/issues/2343)) - 2024-11-19 - [`71a35a0`](https://redirect.github.com/temporalio/ui-server/commit/71a35a064) - Bump axios from 1.7.2 to 1.7.4 ([#​2268](https://redirect.github.com/temporalio/ui-server/issues/2268)) - 2024-11-18 - [`f65d406`](https://redirect.github.com/temporalio/ui-server/commit/f65d406e7) - 2.32.6 ([#​2429](https://redirect.github.com/temporalio/ui-server/issues/2429)) - 2024-11-18 - [`f52de86`](https://redirect.github.com/temporalio/ui-server/commit/f52de8663) - Add href prop to Combobox to get reactive hrefs working correctly ([#​2428](https://redirect.github.com/temporalio/ui-server/issues/2428)) - 2024-11-18 - [`3c339b9`](https://redirect.github.com/temporalio/ui-server/commit/3c339b9ff) - 2.32.5 ([#​2427](https://redirect.github.com/temporalio/ui-server/issues/2427)) - 2024-11-14 - [`ea68f93`](https://redirect.github.com/temporalio/ui-server/commit/ea68f9378) - Use reactive route for namespace ([#​2426](https://redirect.github.com/temporalio/ui-server/issues/2426)) - 2024-11-13 - [`a580770`](https://redirect.github.com/temporalio/ui-server/commit/a58077007) - Permanently fix formatted utc offset test ([#​2422](https://redirect.github.com/temporalio/ui-server/issues/2422)) - 2024-11-12 - [`08dc657`](https://redirect.github.com/temporalio/ui-server/commit/08dc65724) - Make ids in event history mono for better spacing ([#​2425](https://redirect.github.com/temporalio/ui-server/issues/2425)) - 2024-11-12 - [`6276fd5`](https://redirect.github.com/temporalio/ui-server/commit/6276fd5a6) - Update go-oidc to v3.11 ([#​2424](https://redirect.github.com/temporalio/ui-server/issues/2424)) - 2024-11-07 - [`687ae03`](https://redirect.github.com/temporalio/ui-server/commit/687ae0305) - fix: keep extra row cell when write actions are disabled 
([#​2416](https://redirect.github.com/temporalio/ui-server/issues/2416)) - 2024-11-06 - [`185075a`](https://redirect.github.com/temporalio/ui-server/commit/185075a4f) - Remove dark theme on CopyButton in CodeBlock ([#​2421](https://redirect.github.com/temporalio/ui-server/issues/2421)) - 2024-11-06 - [`34944e5`](https://redirect.github.com/temporalio/ui-server/commit/34944e595) - 2.32.4 ([#​2417](https://redirect.github.com/temporalio/ui-server/issues/2417)) - 2024-11-06 - [`ec43569`](https://redirect.github.com/temporalio/ui-server/commit/ec4356929) - Update daylight savings test ([#​2418](https://redirect.github.com/temporalio/ui-server/issues/2418)) - 2024-11-06 - [`3a423d2`](https://redirect.github.com/temporalio/ui-server/commit/3a423d2bb) - Add mobile option for playwright tests ([#​2412](https://redirect.github.com/temporalio/ui-server/issues/2412)) - 2024-11-04 - [`d540904`](https://redirect.github.com/temporalio/ui-server/commit/d54090426) - Only encode Schedule Input when added or edited ([#​2384](https://redirect.github.com/temporalio/ui-server/issues/2384)) - 2024-11-01 - [`197e238`](https://redirect.github.com/temporalio/ui-server/commit/197e238d4) - Don't update browser history on query param changes in start workflow. 
preserve order of query params on goto ([#​2415](https://redirect.github.com/temporalio/ui-server/issues/2415)) - 2024-11-01 - [`558ae1b`](https://redirect.github.com/temporalio/ui-server/commit/558ae1bc3) - Light/Dark mode themes for Codemirror Editor ([#​2414](https://redirect.github.com/temporalio/ui-server/issues/2414)) - 2024-10-31 - [`fa5e6b3`](https://redirect.github.com/temporalio/ui-server/commit/fa5e6b3ae) - Add Authorization-Extras if authUser idToken exists ([#​2250](https://redirect.github.com/temporalio/ui-server/issues/2250)) - 2024-10-30 - [`0090c09`](https://redirect.github.com/temporalio/ui-server/commit/0090c0945) - Use \__user_metadata query if it exists ([#​2411](https://redirect.github.com/temporalio/ui-server/issues/2411)) - 2024-10-29 - [`e80a783`](https://redirect.github.com/temporalio/ui-server/commit/e80a7839d) - Add query argument support ([#​2407](https://redirect.github.com/temporalio/ui-server/issues/2407)) - 2024-10-29 - [`39d1a21`](https://redirect.github.com/temporalio/ui-server/commit/39d1a21a3) - Get rid of empty class, not needed ([#​2410](https://redirect.github.com/temporalio/ui-server/issues/2410)) - 2024-10-24 - [`e129594`](https://redirect.github.com/temporalio/ui-server/commit/e129594c6) - Use attempt versus retry ([#​2408](https://redirect.github.com/temporalio/ui-server/issues/2408)) - 2024-10-23 - [`2cf3d6f`](https://redirect.github.com/temporalio/ui-server/commit/2cf3d6f8e) - 2.32.3 ([#​2406](https://redirect.github.com/temporalio/ui-server/issues/2406)) - 2024-10-23 - [`eeb2cca`](https://redirect.github.com/temporalio/ui-server/commit/eeb2ccad0) - Bump webpack from 5.90.3 to 5.94.0 ([#​2286](https://redirect.github.com/temporalio/ui-server/issues/2286)) - 2024-10-23 - [`eb11a09`](https://redirect.github.com/temporalio/ui-server/commit/eb11a09e4) - Encode header fields if they exist on schedule edit ([#​2405](https://redirect.github.com/temporalio/ui-server/issues/2405)) - 2024-10-22 - 
[`aacf78e`](https://redirect.github.com/temporalio/ui-server/commit/aacf78e02) - Don't use translation for default schedule column names ([#​2404](https://redirect.github.com/temporalio/ui-server/issues/2404)) - 2024-10-22 - [`f2d9742`](https://redirect.github.com/temporalio/ui-server/commit/f2d974210) - Don't omit header attribute on event details ([#​2371](https://redirect.github.com/temporalio/ui-server/issues/2371)) - 2024-10-22 - [`42ba898`](https://redirect.github.com/temporalio/ui-server/commit/42ba89818) - 2.32.2 ([#​2403](https://redirect.github.com/temporalio/ui-server/issues/2403)) - 2024-10-22 - [`e91708a`](https://redirect.github.com/temporalio/ui-server/commit/e91708a57) - Add toast instead of redirect on Start Workflow with link to workflow ([#​2402](https://redirect.github.com/temporalio/ui-server/issues/2402)) - 2024-10-22 - [`c2047bf`](https://redirect.github.com/temporalio/ui-server/commit/c2047bf1c) - Add new responsive styles to input on api pagination ([#​2333](https://redirect.github.com/temporalio/ui-server/issues/2333)) - 2024-10-22 - [`9d64fab`](https://redirect.github.com/temporalio/ui-server/commit/9d64fabce) - Create CodecServerErrorBanner component ([#​2399](https://redirect.github.com/temporalio/ui-server/issues/2399)) - 2024-10-22 - [`d5f3e76`](https://redirect.github.com/temporalio/ui-server/commit/d5f3e7690) - Make workflow action menu smaller on small screens ([#​2401](https://redirect.github.com/temporalio/ui-server/issues/2401)) - 2024-10-22 - [`7de4ab5`](https://redirect.github.com/temporalio/ui-server/commit/7de4ab598) - Fix loading state to not flash, set delay to 2 seconds ([#​2400](https://redirect.github.com/temporalio/ui-server/issues/2400)) - 2024-10-21 - [`1550c3a`](https://redirect.github.com/temporalio/ui-server/commit/1550c3afa) - 2.32.1 ([#​2398](https://redirect.github.com/temporalio/ui-server/issues/2398)) - 2024-10-21 - [`4f0d49a`](https://redirect.github.com/temporalio/ui-server/commit/4f0d49a75) - Remove logic 
for WorkflowUpdate to find corresponding initial event, add Nexus operation events to failed/timedout/canceled ([#​2397](https://redirect.github.com/temporalio/ui-server/issues/2397)) - 2024-10-21 - [`495efe9`](https://redirect.github.com/temporalio/ui-server/commit/495efe966) - Codec Server error banner ([#​2394](https://redirect.github.com/temporalio/ui-server/issues/2394)) - 2024-10-15 - [`8e3486f`](https://redirect.github.com/temporalio/ui-server/commit/8e3486fe5) - Fix pending activity text in timeline ([#​2392](https://redirect.github.com/temporalio/ui-server/issues/2392)) - 2024-10-14 - [`6eb3c15`](https://redirect.github.com/temporalio/ui-server/commit/6eb3c1555) - Add a second wait on start workflow for eventual consistency. Make all borders consistent ([#​2391](https://redirect.github.com/temporalio/ui-server/issues/2391)) - 2024-10-14 - [`ae77f01`](https://redirect.github.com/temporalio/ui-server/commit/ae77f01d0) - 2.32.0 ([#​2390](https://redirect.github.com/temporalio/ui-server/issues/2390)) - 2024-10-14 - [`69f569c`](https://redirect.github.com/temporalio/ui-server/commit/69f569cf8) - Move timestamps to the left side of row, include timestamp in compact view, remove icons, add duration/elapsed tooltips ([#​2389](https://redirect.github.com/temporalio/ui-server/issues/2389)) - 2024-10-14 - [`f7aeff7`](https://redirect.github.com/temporalio/ui-server/commit/f7aeff7d6) - Check filter value onRowFilterClick ([#​2387](https://redirect.github.com/temporalio/ui-server/issues/2387)) - 2024-10-11 - [`570ca0f`](https://redirect.github.com/temporalio/ui-server/commit/570ca0f13) - Fix retry attempt text when textAnchor is end ([#​2381](https://redirect.github.com/temporalio/ui-server/issues/2381)) - 2024-10-11 - [`605e906`](https://redirect.github.com/temporalio/ui-server/commit/605e906d7) - Show correct encoding value in PayloadInput ([#​2385](https://redirect.github.com/temporalio/ui-server/issues/2385)) - 2024-10-11 - 
[`7351e7e`](https://redirect.github.com/temporalio/ui-server/commit/7351e7e8d) - Use whitespace-pre-line to wrap links correctly, remove badge on links ([#​2386](https://redirect.github.com/temporalio/ui-server/issues/2386)) - 2024-10-10 - [`bcc9f02`](https://redirect.github.com/temporalio/ui-server/commit/bcc9f02de) - Explicitly check for response ok false in codeServerRequest ([#​2383](https://redirect.github.com/temporalio/ui-server/issues/2383)) - 2024-10-10 - [`241a40b`](https://redirect.github.com/temporalio/ui-server/commit/241a40b76) - Use encodeUriComponent and decodeUriComponent and add tests ([#​2382](https://redirect.github.com/temporalio/ui-server/issues/2382)) - 2024-10-09 - [`6415ada`](https://redirect.github.com/temporalio/ui-server/commit/6415adaca) - Add newTab prop to MenuItem ([#​2378](https://redirect.github.com/temporalio/ui-server/issues/2378)) - 2024-10-09 - [`f620c44`](https://redirect.github.com/temporalio/ui-server/commit/f620c44be) - Check for initial/lastEvent for eventTime ([#​2380](https://redirect.github.com/temporalio/ui-server/issues/2380)) - 2024-10-09 - [`258e83b`](https://redirect.github.com/temporalio/ui-server/commit/258e83b80) - Add logout icon ([#​2379](https://redirect.github.com/temporalio/ui-server/issues/2379)) - 2024-10-09 - [`3a0ebb0`](https://redirect.github.com/temporalio/ui-server/commit/3a0ebb096) - 2.31.3 ([#​2377](https://redirect.github.com/temporalio/ui-server/issues/2377)) - 2024-10-09 - [`0b39a94`](https://redirect.github.com/temporalio/ui-server/commit/0b39a94c8) - Add pending activity attempt count to timeline ([#​2372](https://redirect.github.com/temporalio/ui-server/issues/2372)) - 2024-10-09 - [`b0df11c`](https://redirect.github.com/temporalio/ui-server/commit/b0df11c19) - UserMenu refactoring ([#​2374](https://redirect.github.com/temporalio/ui-server/issues/2374)) - 2024-10-08 - [`17398e7`](https://redirect.github.com/temporalio/ui-server/commit/17398e79a) - DT-1495 - use description instead of tooltip 
on reset menu item ([#​2373](https://redirect.github.com/temporalio/ui-server/issues/2373)) - 2024-10-08 - [`f4f9bfc`](https://redirect.github.com/temporalio/ui-server/commit/f4f9bfc54) - Refactor out payload inputs to its own component and add encoding field ([#​2370](https://redirect.github.com/temporalio/ui-server/issues/2370)) - 2024-10-04 - [`7757a34`](https://redirect.github.com/temporalio/ui-server/commit/7757a34ec) - Combobox component updates ([#​2369](https://redirect.github.com/temporalio/ui-server/issues/2369)) - 2024-10-04 - [`11742db`](https://redirect.github.com/temporalio/ui-server/commit/11742db15) - DT-2502 - handle WorkflowExecutionUpdateAdmitted events ([#​2364](https://redirect.github.com/temporalio/ui-server/issues/2364)) - 2024-10-03 - [`78eb5d4`](https://redirect.github.com/temporalio/ui-server/commit/78eb5d428) - Small responsive UI improvements for EventDetailsRow ([#​2366](https://redirect.github.com/temporalio/ui-server/issues/2366)) - 2024-10-03 - [`5b6371b`](https://redirect.github.com/temporalio/ui-server/commit/5b6371b36) - Refactor event details link, add link support for endpointId ([#​2360](https://redirect.github.com/temporalio/ui-server/issues/2360))
--- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- docker-compose-dev.yml | 2 +- docker-compose.yml | 12 ++++++------ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docker-compose-dev.yml b/docker-compose-dev.yml index 9db08bbda..98ee987b3 100644 --- a/docker-compose-dev.yml +++ b/docker-compose-dev.yml @@ -116,7 +116,7 @@ services: - TEMPORAL_ADDRESS=temporal:7233 - TEMPORAL_CORS_ORIGINS=http://localhost:3000 - TEMPORAL_CSRF_COOKIE_INSECURE=true - image: temporalio/ui:2.31.2@sha256:28bb3ea5a6ea3e09f16b521f32ab727c96470f7f1e420c66a6cbfb02001a8aa2 + image: temporalio/ui:2.32.0@sha256:82bf98dbe005a831b6bc5dc12ccd7bffd606af2032dae4821ae133caaa943d3d ports: - 8085:8080 diff --git a/docker-compose.yml b/docker-compose.yml index cf1ec3efe..4d714e997 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -106,13 +106,13 @@ services: - TEMPORAL_ADDRESS=temporal:7233 - TEMPORAL_CORS_ORIGINS=http://localhost:3000 - TEMPORAL_CSRF_COOKIE_INSECURE=true - image: temporalio/ui:2.31.2@sha256:28bb3ea5a6ea3e09f16b521f32ab727c96470f7f1e420c66a6cbfb02001a8aa2 + image: temporalio/ui:2.32.0@sha256:82bf98dbe005a831b6bc5dc12ccd7bffd606af2032dae4821ae133caaa943d3d ports: - 8085:8080 flow-api: container_name: flow_api - image: ghcr.io/peerdb-io/flow-api:latest-stable + image: 
ghcr.io/peerdb-io/flow-api:latest-stable@sha256:53a6de3d7537b4a90b4ff13d822d0a9fa3015857fc739fc2497d33f33b05dfaa restart: unless-stopped ports: - 8112:8112 @@ -128,7 +128,7 @@ services: flow-snapshot-worker: container_name: flow-snapshot-worker - image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-stable + image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-stable@sha256:036d6091e32c9d15f2738bc6aab312aa1f412f5c06c57687b497cde233b73d4c restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -138,7 +138,7 @@ services: flow-worker: container_name: flow-worker - image: ghcr.io/peerdb-io/flow-worker:latest-stable + image: ghcr.io/peerdb-io/flow-worker:latest-stable@sha256:f5d4d5e4e44336d6917e3c8b3d753c77d813d5d1e55ca7fb4d3a3d3d1d3253cc restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -151,7 +151,7 @@ services: peerdb: container_name: peerdb-server stop_signal: SIGINT - image: ghcr.io/peerdb-io/peerdb-server:latest-stable + image: ghcr.io/peerdb-io/peerdb-server:latest-stable@sha256:15249fc45b8b5384fb7a046bc73f75cc679c570a3d2fd3fd8c40c7d7e85f7eef restart: unless-stopped environment: <<: *catalog-config @@ -167,7 +167,7 @@ services: peerdb-ui: container_name: peerdb-ui - image: ghcr.io/peerdb-io/peerdb-ui:latest-stable + image: ghcr.io/peerdb-io/peerdb-ui:latest-stable@sha256:f4d1cdf966eb06f4a4a03db4b02593b44c8a37bd32143c937d3c59c2586c4bb1 restart: unless-stopped ports: - 3000:3000 From 3facb2da4c4fb94d9f1e6152a34e4e5b4eec6dc3 Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Thu, 21 Nov 2024 18:45:26 +0530 Subject: [PATCH 32/61] fix: latest-stable tag in docker compose (#2279) --- docker-compose.yml | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docker-compose.yml b/docker-compose.yml index 4d714e997..5448632ac 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -112,7 +112,7 @@ services: flow-api: 
container_name: flow_api - image: ghcr.io/peerdb-io/flow-api:latest-stable@sha256:53a6de3d7537b4a90b4ff13d822d0a9fa3015857fc739fc2497d33f33b05dfaa + image: ghcr.io/peerdb-io/flow-api:stable-v0.19.1 restart: unless-stopped ports: - 8112:8112 @@ -128,7 +128,7 @@ services: flow-snapshot-worker: container_name: flow-snapshot-worker - image: ghcr.io/peerdb-io/flow-snapshot-worker:latest-stable@sha256:036d6091e32c9d15f2738bc6aab312aa1f412f5c06c57687b497cde233b73d4c + image: ghcr.io/peerdb-io/flow-snapshot-worker:stable-v0.19.1 restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -138,7 +138,7 @@ services: flow-worker: container_name: flow-worker - image: ghcr.io/peerdb-io/flow-worker:latest-stable@sha256:f5d4d5e4e44336d6917e3c8b3d753c77d813d5d1e55ca7fb4d3a3d3d1d3253cc + image: ghcr.io/peerdb-io/flow-worker:stable-v0.19.1 restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -151,7 +151,7 @@ services: peerdb: container_name: peerdb-server stop_signal: SIGINT - image: ghcr.io/peerdb-io/peerdb-server:latest-stable@sha256:15249fc45b8b5384fb7a046bc73f75cc679c570a3d2fd3fd8c40c7d7e85f7eef + image: ghcr.io/peerdb-io/peerdb-server:stable-v0.19.1 restart: unless-stopped environment: <<: *catalog-config @@ -167,7 +167,7 @@ services: peerdb-ui: container_name: peerdb-ui - image: ghcr.io/peerdb-io/peerdb-ui:latest-stable@sha256:f4d1cdf966eb06f4a4a03db4b02593b44c8a37bd32143c937d3c59c2586c4bb1 + image: ghcr.io/peerdb-io/peerdb-ui:stable-v0.19.1 restart: unless-stopped ports: - 3000:3000 From 136a28345eece0eeef638fe7445a6198347e0a53 Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Thu, 21 Nov 2024 20:59:19 +0530 Subject: [PATCH 33/61] Miscellaneous fixes - add more alerting for maintainpull, fix lag graph function (#2263) - Account for NaN values in LSN function - Add alerts for failure to get postgres connector in maintainpull - Fix <= sign in slack 
alert log --------- Co-authored-by: Kevin Biju <52661649+heavycrystal@users.noreply.github.com> --- flow/activities/flowable.go | 2 ++ flow/alerting/alerting.go | 2 +- ui/app/peers/[peerName]/lagGraph.tsx | 7 ++++--- 3 files changed, 7 insertions(+), 4 deletions(-) diff --git a/flow/activities/flowable.go b/flow/activities/flowable.go index cc09bae0d..8a65f5dde 100644 --- a/flow/activities/flowable.go +++ b/flow/activities/flowable.go @@ -287,11 +287,13 @@ func (a *FlowableActivity) MaintainPull( ctx = context.WithValue(ctx, shared.FlowNameKey, config.FlowJobName) srcConn, err := connectors.GetByNameAs[connectors.CDCPullConnector](ctx, config.Env, a.CatalogPool, config.SourceName) if err != nil { + a.Alerter.LogFlowError(ctx, config.FlowJobName, err) return err } defer connectors.CloseConnector(ctx, srcConn) if err := srcConn.SetupReplConn(ctx); err != nil { + a.Alerter.LogFlowError(ctx, config.FlowJobName, err) return err } diff --git a/flow/alerting/alerting.go b/flow/alerting/alerting.go index 5f05005d1..69282330d 100644 --- a/flow/alerting/alerting.go +++ b/flow/alerting/alerting.go @@ -356,7 +356,7 @@ func (a *Alerter) checkAndAddAlertToCatalog(ctx context.Context, alertConfigId i return true } - logger.Info(fmt.Sprintf("Skipped sending alerts: last alert was sent at %s, which was >=%s ago", createdTimestamp.String(), dur.String())) + logger.Info(fmt.Sprintf("Skipped sending alerts: last alert was sent at %s, which was <=%s ago", createdTimestamp.String(), dur.String())) return false } diff --git a/ui/app/peers/[peerName]/lagGraph.tsx b/ui/app/peers/[peerName]/lagGraph.tsx index 87b90fa8c..d971bee8f 100644 --- a/ui/app/peers/[peerName]/lagGraph.tsx +++ b/ui/app/peers/[peerName]/lagGraph.tsx @@ -21,9 +21,10 @@ type LagGraphProps = { function parseLSN(lsn: string): number { if (!lsn) return 0; const [lsn1, lsn2] = lsn.split('/'); - return Number( - (BigInt(parseInt(lsn1, 16)) << BigInt(32)) | BigInt(parseInt(lsn2, 16)) - ); + const parsedLsn1 = parseInt(lsn1, 
16); + const parsedLsn2 = parseInt(lsn2, 16); + if (isNaN(parsedLsn1) || isNaN(parsedLsn2)) return 0; + return Number((BigInt(parsedLsn1) << BigInt(32)) | BigInt(parsedLsn2)); } export default function LagGraph({ peerName }: LagGraphProps) { From 60c2744edd581f3880b3dc1b0210a0a6d9512b41 Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Thu, 21 Nov 2024 21:17:59 +0530 Subject: [PATCH 34/61] Revamp settings page (#2280) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR opts for a card grid display for the settings page rather than a table - as the content was being cramped in to fit in that tabular layout ![Screenshot 2024-11-21 at 6 49 06 PM](https://github.com/user-attachments/assets/08b43cf7-83e7-4c5d-91f2-afc4aacff2de) --- ui/app/settings/page.tsx | 236 +++++++++++++++++---------------------- 1 file changed, 102 insertions(+), 134 deletions(-) diff --git a/ui/app/settings/page.tsx b/ui/app/settings/page.tsx index 7ebb1b4cd..c1d51a228 100644 --- a/ui/app/settings/page.tsx +++ b/ui/app/settings/page.tsx @@ -9,10 +9,7 @@ import { Button } from '@/lib/Button'; import { Icon } from '@/lib/Icon'; import { Label } from '@/lib/Label'; import { SearchField } from '@/lib/SearchField'; -import { Table, TableCell, TableRow } from '@/lib/Table'; import { TextField } from '@/lib/TextField'; -import { Tooltip } from '@/lib/Tooltip'; -import { MaterialSymbol } from 'material-symbols'; import { useEffect, useMemo, useState } from 'react'; import { ToastContainer } from 'react-toastify'; import 'react-toastify/dist/ReactToastify.css'; @@ -22,40 +19,32 @@ const ROWS_PER_PAGE = 7; const ApplyModeIconWithTooltip = ({ applyMode }: { applyMode: number }) => { let tooltipText = ''; - let iconName: MaterialSymbol = 'help'; + switch (applyMode.toString()) { case DynconfApplyMode[DynconfApplyMode.APPLY_MODE_IMMEDIATE].toString(): tooltipText = 'Changes to this configuration will apply 
immediately'; - iconName = 'bolt'; break; case DynconfApplyMode[DynconfApplyMode.APPLY_MODE_AFTER_RESUME].toString(): tooltipText = 'Changes to this configuration will apply after resume'; - iconName = 'cached'; break; case DynconfApplyMode[DynconfApplyMode.APPLY_MODE_RESTART].toString(): tooltipText = 'Changes to this configuration will apply after server restart.'; - iconName = 'restart_alt'; break; case DynconfApplyMode[DynconfApplyMode.APPLY_MODE_NEW_MIRROR].toString(): tooltipText = 'Changes to this configuration will apply only to new mirrors'; - iconName = 'new_window'; break; default: tooltipText = 'Unknown apply mode'; - iconName = 'help'; } return (
- - - +
); }; - const DynamicSettingItem = ({ setting, onSettingUpdate, @@ -65,7 +54,7 @@ const DynamicSettingItem = ({ }) => { const [editMode, setEditMode] = useState(false); const [newValue, setNewValue] = useState(setting.value); - + const [showDescription, setShowDescription] = useState(false); const handleEdit = () => { setEditMode(true); }; @@ -130,41 +119,80 @@ const DynamicSettingItem = ({ }; return ( - - - - - - {editMode ? ( -
- setNewValue(e.target.value)} - variant='simple' - /> - +
+
+ +
+
+
+
+
+ setNewValue(e.target.value)} + variant='simple' + readOnly={!editMode} + disabled={!editMode} + /> + +
+
+ +
+
- ) : ( -
- {setting.value || 'N/A'} - +
+
- )} - - - {setting.defaultValue || 'N/A'} - - - {setting.description || 'N/A'} - - - - - + + {showDescription && ( +
+ +
+ )} +
+
+
); }; @@ -172,10 +200,7 @@ const SettingsPage = () => { const [settings, setSettings] = useState({ settings: [], }); - const [currentPage, setCurrentPage] = useState(1); const [searchQuery, setSearchQuery] = useState(''); - const [sortDir, setSortDir] = useState<'asc' | 'dsc'>('asc'); - const sortField = 'name'; const fetchSettings = async () => { const response = await fetch('/api/v1/dynamic_settings'); @@ -189,101 +214,44 @@ const SettingsPage = () => { const filteredSettings = useMemo( () => - settings.settings - .filter((setting) => - setting.name.toLowerCase().includes(searchQuery.toLowerCase()) - ) - .sort((a, b) => { - const aValue = a[sortField]; - const bValue = b[sortField]; - if (aValue < bValue) return sortDir === 'dsc' ? 1 : -1; - if (aValue > bValue) return sortDir === 'dsc' ? -1 : 1; - return 0; - }), - [settings, searchQuery, sortDir] + settings.settings.filter((setting) => + setting.name.toLowerCase().includes(searchQuery.toLowerCase()) + ), + [settings, searchQuery] ); - const totalPages = Math.ceil(filteredSettings.length / ROWS_PER_PAGE); - const displayedSettings = useMemo(() => { - const startRow = (currentPage - 1) * ROWS_PER_PAGE; - const endRow = startRow + ROWS_PER_PAGE; - return filteredSettings.slice(startRow, endRow); - }, [filteredSettings, currentPage]); - - const handlePrevPage = () => { - if (currentPage > 1) setCurrentPage(currentPage - 1); - }; - - const handleNextPage = () => { - if (currentPage < totalPages) setCurrentPage(currentPage + 1); - }; return ( -
- Settings List} - toolbar={{ - left: ( -
- - - - - - -
- ), - right: ( - setSearchQuery(e.target.value)} - /> - ), +
+ + setSearchQuery(e.target.value)} + style={{ fontSize: 13 }} + /> +
- {[ - { header: 'Configuration Name', width: '35%' }, - { header: 'Current Value', width: '10%' }, - { header: 'Default Value', width: '10%' }, - { header: 'Description', width: '35%' }, - { header: 'Apply Mode', width: '10%' }, - ].map(({ header, width }) => ( - - {header} - - ))} - - } > - {displayedSettings.map((setting) => ( + {filteredSettings.map((setting) => ( ))} -
- + +
); }; From cf14792add157c7bbceed069157510b730719131 Mon Sep 17 00:00:00 2001 From: Kevin Biju <52661649+heavycrystal@users.noreply.github.com> Date: Fri, 22 Nov 2024 17:21:35 +0530 Subject: [PATCH 35/61] [dropflow] delete flow configs only after finish drop (#2281) --- flow/workflows/drop_flow.go | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/flow/workflows/drop_flow.go b/flow/workflows/drop_flow.go index 51bf0091a..93086157d 100644 --- a/flow/workflows/drop_flow.go +++ b/flow/workflows/drop_flow.go @@ -92,6 +92,15 @@ func DropFlowWorkflow(ctx workflow.Context, input *protos.DropFlowInput) error { } } + if input.FlowConnectionConfigs != nil { + err := executeCDCDropActivities(ctx, input) + if err != nil { + workflow.GetLogger(ctx).Error("failed to drop CDC flow", slog.Any("error", err)) + return err + } + workflow.GetLogger(ctx).Info("CDC flow dropped successfully") + } + removeFlowEntriesCtx := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{ StartToCloseTimeout: 1 * time.Minute, }) @@ -103,14 +112,5 @@ func DropFlowWorkflow(ctx workflow.Context, input *protos.DropFlowInput) error { return err } - if input.FlowConnectionConfigs != nil { - err := executeCDCDropActivities(ctx, input) - if err != nil { - workflow.GetLogger(ctx).Error("failed to drop CDC flow", slog.Any("error", err)) - return err - } - workflow.GetLogger(ctx).Info("CDC flow dropped successfully") - } - return nil } From 3d813211d36989fe03d573505f4c052d63f38077 Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Sun, 24 Nov 2024 20:24:01 +0530 Subject: [PATCH 36/61] CH Normalize: fix error return (#2282) This PR fixes a bug where we are returning the wrong error object resulting in flow-worker panics --- flow/connectors/clickhouse/normalize.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/flow/connectors/clickhouse/normalize.go b/flow/connectors/clickhouse/normalize.go index 
fb221096c..a5edb40d5 100644 --- a/flow/connectors/clickhouse/normalize.go +++ b/flow/connectors/clickhouse/normalize.go @@ -474,7 +474,7 @@ func (c *ClickHouseConnector) NormalizeRecords( case queries <- insertIntoSelectQuery.String(): case <-errCtx.Done(): close(queries) - return nil, ctx.Err() + return nil, errCtx.Err() } } close(queries) From c0d1d8abd5f8c4a5d8d1cbe09239b72fae032bc1 Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Sun, 24 Nov 2024 22:10:35 +0530 Subject: [PATCH 37/61] Fix slot gauge (#2283) There seems to be an issue where RecordSlotSize panics on this line with a nil pointer dereference: ``` slotMetricGauges.SlotLagGauge.Record(ctx, float64(slotInfo[0].LagInMb), metric.WithAttributeSet(attribute.NewSet( ``` As far as I can see, the only thing that can be nil is `slotMetricGauges.SlotLagGauge`, so this PR adds a guard for that and the other gauges here --- flow/connectors/postgres/postgres.go | 67 +++++++++++++++++----------- 1 file changed, 42 insertions(+), 25 deletions(-) diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index ae0dbea52..e685b5c12 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -1215,12 +1215,17 @@ func (c *PostgresConnector) HandleSlotInfo( logger.Info(fmt.Sprintf("Checking %s lag for %s", alertKeys.SlotName, alertKeys.PeerName), slog.Float64("LagInMB", float64(slotInfo[0].LagInMb))) alerter.AlertIfSlotLag(ctx, alertKeys, slotInfo[0]) - slotMetricGauges.SlotLagGauge.Record(ctx, float64(slotInfo[0].LagInMb), metric.WithAttributeSet(attribute.NewSet( - attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), - attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.SlotNameKey, alertKeys.SlotName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID())), - )) + + if slotMetricGauges.SlotLagGauge != nil { + 
slotMetricGauges.SlotLagGauge.Record(ctx, float64(slotInfo[0].LagInMb), metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), + attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), + attribute.String(otel_metrics.SlotNameKey, alertKeys.SlotName), + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID())), + )) + } else { + logger.Warn("warning: slotMetricGauges.SlotLagGauge is nil") + } // Also handles alerts for PeerDB user connections exceeding a given limit here res, err := getOpenConnectionsForUser(ctx, c.conn, c.config.User) @@ -1229,25 +1234,33 @@ func (c *PostgresConnector) HandleSlotInfo( return err } alerter.AlertIfOpenConnections(ctx, alertKeys, res) - slotMetricGauges.OpenConnectionsGauge.Record(ctx, res.CurrentOpenConnections, metric.WithAttributeSet(attribute.NewSet( - attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), - attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), - ))) + if slotMetricGauges.OpenConnectionsGauge != nil { + slotMetricGauges.OpenConnectionsGauge.Record(ctx, res.CurrentOpenConnections, metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), + attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + ))) + } else { + logger.Warn("warning: slotMetricGauges.OpenConnectionsGauge is nil") + } replicationRes, err := getOpenReplicationConnectionsForUser(ctx, c.conn, c.config.User) if err != nil { logger.Warn("warning: failed to get current open replication connections", "error", err) return err } - slotMetricGauges.OpenReplicationConnectionsGauge.Record(ctx, replicationRes.CurrentOpenConnections, - metric.WithAttributeSet(attribute.NewSet( - attribute.String(otel_metrics.FlowNameKey, 
alertKeys.FlowName), - attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), - )), - ) + if slotMetricGauges.OpenReplicationConnectionsGauge != nil { + slotMetricGauges.OpenReplicationConnectionsGauge.Record(ctx, replicationRes.CurrentOpenConnections, + metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), + attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + )), + ) + } else { + logger.Warn("warning: slotMetricGauges.OpenReplicationConnectionsGauge is nil") + } var intervalSinceLastNormalize *time.Duration if err := alerter.CatalogPool.QueryRow( @@ -1261,13 +1274,17 @@ func (c *PostgresConnector) HandleSlotInfo( return nil } if intervalSinceLastNormalize != nil { - slotMetricGauges.IntervalSinceLastNormalizeGauge.Record(ctx, intervalSinceLastNormalize.Seconds(), - metric.WithAttributeSet(attribute.NewSet( - attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), - attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), - )), - ) + if slotMetricGauges.IntervalSinceLastNormalizeGauge != nil { + slotMetricGauges.IntervalSinceLastNormalizeGauge.Record(ctx, intervalSinceLastNormalize.Seconds(), + metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), + attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), + attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + )), + ) + } else { + logger.Warn("warning: slotMetricGauges.IntervalSinceLastNormalizeGauge is nil") + } alerter.AlertIfTooLongSinceLastNormalize(ctx, alertKeys, *intervalSinceLastNormalize) } From 82bd46102ff5915cffaf782228c0a5d3c57d8730 Mon Sep 17 00:00:00 2001 From: Kaushik Iska 
Date: Sun, 24 Nov 2024 13:19:24 -0600 Subject: [PATCH 38/61] add cause for context in ch normalize errgroup (#2285) --- flow/activities/flowable.go | 8 ++++---- flow/connectors/clickhouse/normalize.go | 5 ++++- 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/flow/activities/flowable.go b/flow/activities/flowable.go index 8a65f5dde..8001b5344 100644 --- a/flow/activities/flowable.go +++ b/flow/activities/flowable.go @@ -409,7 +409,7 @@ func (a *FlowableActivity) StartNormalize( if errors.Is(err, errors.ErrUnsupported) { return nil, monitoring.UpdateEndTimeForCDCBatch(ctx, a.CatalogPool, input.FlowConnectionConfigs.FlowJobName, input.SyncBatchID) } else if err != nil { - return nil, err + return nil, fmt.Errorf("failed to get normalize connector: %w", err) } defer connectors.CloseConnector(ctx, dstConn) @@ -420,7 +420,7 @@ func (a *FlowableActivity) StartNormalize( tableNameSchemaMapping, err := a.getTableNameSchemaMapping(ctx, input.FlowConnectionConfigs.FlowJobName) if err != nil { - return nil, err + return nil, fmt.Errorf("failed to get table name schema mapping: %w", err) } res, err := dstConn.NormalizeRecords(ctx, &model.NormalizeRecordsRequest{ @@ -438,13 +438,13 @@ func (a *FlowableActivity) StartNormalize( } dstType, err := connectors.LoadPeerType(ctx, a.CatalogPool, input.FlowConnectionConfigs.DestinationName) if err != nil { - return nil, err + return nil, fmt.Errorf("failed to get peer type: %w", err) } if dstType == protos.DBType_POSTGRES { err = monitoring.UpdateEndTimeForCDCBatch(ctx, a.CatalogPool, input.FlowConnectionConfigs.FlowJobName, input.SyncBatchID) if err != nil { - return nil, err + return nil, fmt.Errorf("failed to update end time for cdc batch: %w", err) } } diff --git a/flow/connectors/clickhouse/normalize.go b/flow/connectors/clickhouse/normalize.go index a5edb40d5..2debe0f4d 100644 --- a/flow/connectors/clickhouse/normalize.go +++ b/flow/connectors/clickhouse/normalize.go @@ -474,7 +474,10 @@ func (c *ClickHouseConnector) 
NormalizeRecords( case queries <- insertIntoSelectQuery.String(): case <-errCtx.Done(): close(queries) - return nil, errCtx.Err() + c.logger.Error("[clickhouse] context canceled while normalizing", + slog.Any("error", errCtx.Err()), + slog.Any("cause", context.Cause(errCtx))) + return nil, context.Cause(errCtx) } } close(queries) From 4379613bbc8e2df8949bf918dc3cd11555ea62d3 Mon Sep 17 00:00:00 2001 From: Kaushik Iska Date: Mon, 25 Nov 2024 08:37:28 -0600 Subject: [PATCH 39/61] Tag network errors as err:Net (#2287) --- flow/alerting/alerting.go | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/flow/alerting/alerting.go b/flow/alerting/alerting.go index 69282330d..4413b2efb 100644 --- a/flow/alerting/alerting.go +++ b/flow/alerting/alerting.go @@ -440,6 +440,10 @@ func (a *Alerter) LogFlowError(ctx context.Context, flowName string, err error) if errors.As(err, &pgErr) { tags = append(tags, "pgcode:"+pgErr.Code) } + var netErr *net.OpError + if errors.As(err, &netErr) { + tags = append(tags, "err:Net") + } a.sendTelemetryMessage(ctx, logger, flowName, errorWithStack, telemetry.ERROR, tags...) } From 2511b4db479253c693a51df0d4b7a45c718ad63d Mon Sep 17 00:00:00 2001 From: Kaushik Iska Date: Mon, 25 Nov 2024 09:21:48 -0600 Subject: [PATCH 40/61] ClickHouse Cloud Private Preview (#2288) --- README.md | 8 ++++++++ images/in-clickpipes.png | Bin 0 -> 103291 bytes 2 files changed, 8 insertions(+) create mode 100644 images/in-clickpipes.png diff --git a/README.md b/README.md index ebd4579f1..63aa2fe27 100644 --- a/README.md +++ b/README.md @@ -54,6 +54,14 @@ PeerDB is an ETL/ELT tool built for PostgreSQL. We implement multiple Postgres n **From a feature richness standpoint**, we support efficient syncing of tables with large (TOAST) columns. We support multiple streaming modes - Log based (CDC) based, Query based streaming etc. We provide rich data-type mapping and plan to support every possible (incl. 
Custom types) that Postgres supports to the best extent possible on the target data-store.
+### Now available natively in ClickHouse Cloud (Private Preview)
+
+PeerDB is now available natively in ClickHouse Cloud (Private Preview). Learn more about it [here](https://clickhouse.com/cloud/clickpipes/postgres-cdc-connector).
+
+
+
+
+
 #### **Postgres-compatible SQL interface to do ETL**
 The Postgres-compatible SQL interface for ETL is unique to PeerDB and enables you to operate in a language you are familiar with. You can do ETL the same way you work with your databases.
diff --git a/images/in-clickpipes.png b/images/in-clickpipes.png
new file mode 100644
index 0000000000000000000000000000000000000000..18d4c709d320fc67aba9cb4f8122cb3b8dd94fa6
GIT binary patch
literal 103291
[binary PNG data omitted]
z{V+Fvlw$Y>l-zLrG(n?Kh!A>c{NVvH)b}PLJQ+?A>qEA__Dn+qdFK7DuV5ys;^r5l zIa&voq!sB}oeM*rHS^v=Jrb6f7#f|nBJWkqA_*F+O|PxWtQ67g>nSW4@%RSx!Zx`? zocf#Km*%q4^0JfuS_UnpW6UBc#eA#+<7)b7!9K%=&b93ZC=v>Q(RE1a~O_w=cV9VcVG9$ z-a)%6Y!qvZs}voK_dP|UUpQj`7^MIPi3fM-GSEiZ!7#Wd3NsZ@#l%t*l31#4~ZD(2T1q2YD^r{+(lwgcpTc-OfdSy2skAadSw+H20}6oHrO-MCso z2$dCh8}{lc8y3X!D-B8W@)0p~EQBNpioD)ED3G;n{dJFffY9=hIBtkS4dt0YZs(!} zt%JtSQ*rY*_fmH%>}(@u+XcTz){^f&%8G0CA$AW>bbCA*Y6xeL{1aN4i2B@}@E-pV zQ{`2I?5bRelWlGGV-shU#n#OCGplDr{GPE&+k2poAKX{EhzTBTmXqU-?3xV=ua?`DM|gN`ZC#+4OanF=*+p0+r|q{+7?tIdv+Rn*tNR|Lq4 z>M*@HSLG?HB0hz}ji?v8!{pHVQUdG-9T}KFTXiuFj)DSgbO?3qt8Sm{a$Pv-5`DMk z%UomLFA%zi>o4_t4enAEG0a#yP4rh|#;fVlF6TBhl6_J_)pX<2cAE1-mc#ioTG`=b zO&ztVtw~SZy4Qr3G-irpQO6Ew&>JM*F~W27YfN=8izYeI?3myu)#1X_eTJ0<`$|$b zzRgo8bBA}&i)7}Fa+lx>^L(#)|5;ljMOEdM_R(!!#!p;kv&|ZwZb|&me^zP0^1pQy z8-*g8^AxH++o`6#MijK;At(1&1R$d6Xmm+AFpGXhGri`yd*qaN%%b6t#jit?$4o*r z-b&S0^}3%?-rL%LkJg1l`s(#D%5J0&(!XI^E?;Rmntfm@c09>)g7PG`gu z63ZO}?r*j)RMdwnDyHsrM$uL?t6gEe{0N1|=!HG5 zVazLMBd?I+&)hv~Af+TQ2l{zsByucZ3_>N*U7?oM>PXIMrAM*uvgaPuzv$ur;lLUz z$Mn5qnpJ~kMaU`Z&ZA}qgu)&-Q2-nGym7syO&0~SoT>i#*6=yi00Gh6JtTMzYSz2) z@2#GXed17IZys*R17v~b5+ZwDuSGtID=i`kMv?e`*F}@k8j)f8SX4xUTv?Nile0a0 z@PNsP-^PXwbuVey{x*P@vNd zW_R+EkYO&FTF&>WWxjpsIvTYI^|#Dl5wa($E!1F%_VU`SxfIie77D;ymSG|{5omD;mL*4{d;l%JjkF-EHiS=Vn}tQR=YGL z#XMi(Y4{y~iZfKEhaOs)pn&340dyN4OTE&vRc)05Y2>1*2>v@uQebMk-h}DtH>zcT zn~{z&s`Gj5f%_S@VX8I76Imd4_q&rflxLEl3#jW-9=)1~@ zA*?X;IlmvV5d)Cj*bNnvEKKgTH;^0epHQJ}rM|bXs?}%MH(a$HUW@J5nu`l1Pp;Ws zbaTx}5->YtLDD=%lEepRpx=(hfR1UvKVW9Ljb$7oX|68q(-6RG<#F~E6%K)Q#)uhS z8^%iog-g}p9usJ^S8h9|H65K#D7@QpkYS<}v5qV=37l zgm3%1E_UTQOxhSIt9bwpJyQ_PUr0od5#`>+D!yMtMb-zW0nQoa;ueE1HETflGqhQ< zW3?fG6Em_~(q=?otx@o~R!U9`s`&e_{}0fyRWGsCU?e$goLP7IM)R zN?hLbq)IGpMq$xS2f@2N0FO`;}yNO517`o zqa()+@)xwLV)03aA4p}yNxb?2=M7K5&>{2~i6b|iX2keAXz`KYO1-r)cl~Opw(5|^ zm=A74Rhz)$%f`7D3^6E>SH?~0oCNPQLL1_v#^v6eu6%5J6u(T^%EU3A+>M6vM&#nX z>Rg&aHLneC<|ppQ?_0HfDg8#jQs6C*-Rk4H-jf&tKyyl^KX$iL0-Vo~<41UBg74z= 
zsafE`Ddq`oRj8@Hul;Uuz(G<<>~frTEuY@lW8 z{P6U9>psEJ?{H4L5WcNPw&06`YZ2X`h;CNLO&vE#-1z)qRH&{G>D$0T3Ydd# zpBEoB>6069DfQ9e=NFIFMn?NSooLZgL{JyZoO=2K|z7mS!L}y}04D1LRGSlb2 zK8&q3n>;zve0h76Y=p*jxsz6ktpR*GyWRtC3_ymBZyeWI+0iDE!vtP+N#Y_FSX`%( z6ZpR%7v0L;9KiX)HVfbAIZ4*=9_f2uLdKH$D|4F&p%;1yKQpq0$Nqi*}kLAUu_+>6CO=`h|rzBIAz zL9rF`*tpn+fyC2At5?Y{BQ4CedGhWkq6pw1)$r!tn>SF(5m_Ib-Io*PkauH`3b>IVlMWus9BT z$@q0vD=jZynkcw8&IhAv#k0XeaeOs)){=GMA``%@{O?`LL%IHlI+5bSRljlk=cJm zir|>IQTD?q>qNld=l41@rH!Q!y(kgaHq63(UqzeFCAYm=BL*LXYJHq6o26T2f*e@T z8QHb@nfDB1mi#I#Av%7eMOwsy zZRPwdOYOe);~966$O=&!xPXpRs;_z|u*kl`aJymuP9&&IXG<`3%0%9yyvTr2nXq4+ zD2&|!;z}HnUPpbqif>b$z_O?%_kW4}7+1Oo42Qpn+l&q26C_W;h_V!8(HYi$pTDA9 zR3-T+l*$$AS9OBC+S>FfjAQGOrgz)PZ!clP1@WGhI?*Xx$e{y8a?~~IKx!v92by1J zFpGn^tBs#ax_+e9`{fS7f@xVnZ%5gjo=iGAc!$1Wt!}0Bpw_ASJVZHCRjm{*gl~l@VS%lvIr}@DYr^ifre3sgYp#PKps0sm|QvBkt z4tK0-OL_B#S}_&y;7<$~w0xKTbDczq43X*?QS1y>lqp%k^CKm?V^prlh(ZqW>uhYK zGHRj!iUu2{vnJW>p)(J4$BuBzYBuAZtbezN8H&OVf)0*}3-RIewIRnJtt9mNlV`0h zEEmY;l=nh4K)I<#7SsvuskyO0IDEg{N@{@t&vfKg%I|#d3WUe`J(Ate!WB<0uXika6+S#!XjBYtB(&$804UT9I;GP7 z0tNOS11dSiH#8)}jsjKk8SSJWPX&r510xIK67pI+zIhr+KD!-*Bl-H}o)eEurj6X% znJwK@TvX=y{AlHDeRmx*+x}=s!Lp5aOZ3c-S1C)Cr%7Wh@W;qS4d2v|0h}I4?C&!c zz4$O9g1Zqh;3)uRKY=wJ)^eVW2f(xXotw|ew^3kd1cFwq7{v+B{U(O3^*`kgrK6ub z&$-NgUk5!!xC$~ea6_0Y93A6p$eEOE7unQDySTiHP1?n7B0LJC))v6|AM4Nj&xZni z!rUfY2cFJChVF~f@oa`@brFDNioTMZ!UX0x5e4@q*hF}V-Q5>D#}Y>)N4l-i?$|b` zdtT-g;nn0g*dYH{A`D&!$&vbe;S-jnbOXt;YADrwKrN=YQL?E43H~nPBxvXNV*)m8 z$c|$$$_RkI0yHgQi60(Whhab)30}k)7>J@nTLU8(`e;r{Kb7UPxfE!`ob%Q^O82-D z)r&zp#oZ_4@z~O4FQs<{jjD2E@!UyKZjCxih|=>x3@(f`zn(ha@!xe5Xqoez>IF>j z5=5Cpod~xtCO3{E(4q3jaRlJf46c?^NmZY)ZG`fv$bMLcp!dy?S_b>426tUv!lG@7 z*Xko=-0MJL`e*?t~=>5uUaB0|> zMF6)C=7~Sp8-`l34gUy$n-K_xk#|4mmQ|9We0%nj`Gj4^%r4GbFU9fFHt1s7uPhv= zH;o0nF;g6rEExyKk7e3llov&+%@YKi?Y;!UdBZ&d=Ykka|4P#YdY8H$`S=N-z#ODC zwVgX?s6!K1GBFIdg#oCj%yZ%|Y-x)b!$a(!0vzgY?%1wB_UR=YGH?IIFkm3>~DQZ?Rwj~n_X%9QZM?3mjc634>hDq03eH1PXEz#y|&dtDr#Gp 
zh@E>KyCrgm9KqC5WZJZ6hnBm$n^VFdFn|w?^#(>_3uc|1(q`~&4IDlw=DnG}a85h> zr1R{-<)W^2il^s$okfjrsCc2|=AL<9`IFbTpHknAyFr-r00<9U_<##`8|h*FI}EU4 zF07^QNbp|(B%?x0{UyD@3BdW%#>vO8O2}w-H5|5{z=nf3qqn zB8e+6m=?igpoM*9VrFxpseW7XVn~1pB6_dz7qG5+_^DJt^A^^-J>hpuAHD|M7KeNY z_^`%TisSy_=t3p=^fX4<@m6BNTFKVf-Ie3n@3A&jE*sqR=P&M<#wOf85ovtUx8hV9 zK_AGoFrmaVdVVWv1*b#n{j|tl(?ml`)S9d{%B=F}i${HYO=DkjLP>ZEKAY%|7Cvww zP(UR!3H;qH*BuKoLzW6WBf{7*7C~0D#IBrqq5q6B0Tua~yfD1+In?qdd)9u9vQ6PY z$KrA3O@?jdH{8pOO^}8(u{0h+5l{Hr?anT{6RP&z#*IGovq!6@M9J1zJd!v?50Dje z7i`ONNP%1Dl3%X6!Nw;bKHvK)H+Q;B4qaK`wqW&V(GP^jlVS1defD2&e@1CKl7pWD zb;Zp=?h6+;se5IC^;pI2hOoPs0RvTn$xsqk%(*zpK)=%m@5VK!W936C(`vnXk^t67 zgq6n1Nbt-&umzsd0Y8>tLWKO^PkFI;Q}mwSPRthS;zo_HEIe>$=OW&jRy{_37ACmc z-jkOsVxMZ`rR=`wJxAE*5T$lBDt?LMgKa*XGgqFlg!Z*<=`vzEX(oYHX0?y9BNnIk z*mI8WAU{*~2tj$iXV9Y$ewby1H=@Q5NS7NxJ2thr-|G!3AuCQxSJ%pA?>|jMi_gA* zM)MWy;!Llx^DaIRJ1>VTt+7@%de`5t>h2n8(~X+?Y0;C@J6NFN?do_$vRFf z7p!_Z3-MWRx6_UR>4E=0cbS&rtvcegz(-b^#Oq9nq5h(ZVdT&^N$*TZp%o^ylF@uU zYV5wV-|wdHISTFIWp%%)@1146G-`zpr^@W3vqK+D@xCVAs6^fy@w?9P6uB1uadI#{ z-4a1$9KId;g1;s`)>$Hl8XXVCxk~b5LDIo;Bg}B4rhoxcl8@2B{f}&LlXDJv%Bs7wZ(^3me-fFnDhLV9 z(>h`Xg&?APaEBeCt1RnMQZBDTeO^~gZvzNqG{W$Xl2{hgy;W*5c>Gb)bpHCJH8HB zq9%Rm$%*I3`0DV}C9+$nWPTErSKs@=@2T(a1;*Y>4I_hAlgXSCLU(>?h-ZCPBqz#d z%fh9XxM3GQDSn>2oRX~gj$xc&%vLA{!{445ntX>H>QiXLp3TSQyG48SQ;=?XaWzvS z`A#pmXARJ{xY}?4e%4;6b)cRm@_+AsqH>%~tq6-$Y;@{jfiKNHx^%!u0N6tzem+GP zhF3MmnB1$vN`O3n6RvyYlpHTraB=Z2Ig;8v>HGdFcYk&;0(lTT{dEXfkkw0Rn$PIPmQ-)v!!w>|=(%aB^3ze}g&}|1d(?L1{;m!wF3w;XEGw}*cl8sfu0IgHCa6x<{D~$ z7mt~y+R!B%%no%dh}w;!s3gmw>HT`{EvJQFQ3&Lh;DM#9-q%tv@2{*SuV0jNvmtZV zBdCr}N!xbbckp5RyWl$9=$#^bX-*7{JM40A{OCNKG8=NJEiKP4hKzF`>o0@UI|Ti~ z;th=OOq`;Z0`}Ve0?ljBcu+O@{9UwZso$orXY7M!iM>ZqRjwQg=X*^o7q2boqn#06 zc0b)D4jk}aC0U`XHJ{qA&*2~j8dQD>yWahD2JlgXd7|8dR@Xn(lPJQgx%20}#7 zv)gzMJ6#O3!wI`c)16*IKTXKu^Lbth)8mhn;DEr_zS@HTy6_~ZHr#VRL}+KLvzP`< z@bKzv>PcQS&7$!ip{QfRi5R9~d{z!hPfm%UcDdhc4lNpE@3|Ac>_hIod6+%Jn2fN% z+f6yP^e@T@Z?IUgDmcal-S=m{VuULVtnSoMC`n>`Vuoj{&atOEXJHpb75n0lL-$CC 
zmq<*XEWgd8>hBp&Vvn~bgntr3(d5*dvX{kP>6zY z%=|BoLuGVnJ7=YZ6$u)*9>I!NU%2Y}lbVx;R3NBU@^P%SQZpm8X>W{fp^iqhIy>-@>JB5PT6PiJe)C%|n7!+4t$RX@an(qks_o{-;_?#lyH z9i+)k2}c5K?mlZPmVV}>@evG-4-mNEK;}^QkdPy7~swS?Q*1|Z|aDrOA{M7C_j8Mv3fK%4bn9aJlg8h}GNOcWtDNcBuqGNceqHfG z3RN>`JpxgVAJFe?1io_Et9_+Gf94euaO*0?WpC@t_S(hS78^*bJKc)h%=Y$xl0GfI z_vD)ltkvOPOz#zbt5t++@2E|IY@Pj;Qf)0nD~GYRz6OJ=k*C@S<}au`c`6e#tQPU7 zs=r>P5y%d=b915%Z8#;ruk92k230n!xQ!nz#t ztiz)d)uE^_91*a1|8I6ae0cCy5x&GV-Qw<@CJzf8@RS#n47(-Qiv z%6U15zOKbfXF_i~e|%BRMjT%LMdfQQxDlvJ53ry7NlQg=!e@bCd7Ci zQk-m_{8SG|by)hKyx$zhr4EetAH>klc5Q1<>B4SIkkG+5$6Fd#xAhFkkDC1+;)02c z=%0Uoy6gxNRnH$lx&5n-^Af*^lLUbiPLe$jF>WaQ66xK@JXAj(-z7oshCO$vov&fv~Q@TSU^r z6Tz;RBS^@K&G$A->#gMNg9*6dm2GmmLaKtLliRF@JTZ@FAf1<lUa(8 z+*hfz2spb*n*9~fd!F`l=2?&^hO$gG87o{QAt@i{XVwChrwOLdZWvs#RBiJls!Gxbtr6qxIUvjPwMG~*fe>b$cI^~^mZ#dJnMwe-{rHQV~uYZp}*0z_`DD!r|oVa=HR=f-$&zKa~zZ}8;i?;WUifVZlMu!2EAd*2P zNs=@~MI4dPIRY6&1(SZHiWf*l4N0H@g zHxaSckM-G(u09S(-!ISm;61wEG{=b7mydfNTz#F%X*MmYg_hl#g2httC`SW%`a;jY z#)AHO?$0zs>-E`?m5&vf(Kwn7gs_#EJgQ->dj9=>0c5)EcUyyYBt~e_fTFc)&_kNd z;<09P>8*AZb`Lt}1PpCEL8ZzP>$JPk?2TjP&Plp-sf{@dR)w^##v|d7g z_YSEqxX#Dd_F{X_gQRdthE7`cDMsiVYeQvMto(Yr^oN^p>##RTt?@u^OE51*YPu6% zNicOf$hdkYgcp1I>VIzV=KYR?i^NCx{h2*I{zhqzcHTSf3-^msLp1ND?pt>7aBnwE zz9poA&&h!slfp3FXCJ?^bQx*TZ{g91v^IbLZlGneoMfVmWf=IgkNADn--A`GUs4-c zd3#OWBA$t(jeq2`59y-r$NGeeL4V;2yaOM9rfQ*Ps7wVXa%c7&iz-=4s>P2s)YQG2 z$c$c>?aVH{gp-=-_~}k)b9&g_4Y}a4C)Xbl!y0b0UTxO+;;AQ@&mHa@wpKEX`w(lq zasFZ#SqiDN77x>tS>J`k;qV+WitBrV>eru<`O7IxSbSFCWM$bn{qyIP#HbrT71_{9$zLN!$kqlDR33S)bbTM1!41dS zIB4YWrG@E!I(&0#Q(rM}eBQ4A2BMNMi8W=Y@`0KWxaPb*aD-LrGnfO3TKlKD!QvWF zUzfci<{!Mdsc0iA5cR|6cu-aJ?r-eBm!rJ5AH#fuIA3vDZ1NM(Vb-5&G!?%cuhEB` zpEu|DD);9MVxh{${8ve$*;XEgN7Rqs#c{iOZ7f<;hvl5A+mGgPf#nzxWT1 z?z`7;J2H&J$C^ctgd{vZTqIk9Dt`}ZyiEwx?PZ#oNZbw zgI`M>zT^&Ci)(KR54SQr)!vuzzxa1za&)otPrhaGBEq%k}Udti$n269)Ap!;CDz8 zGx{n#)d*=f-$S4HW|6f4Zna`CEQ=sKP#Yga#c<2AiH#cZN&LR z2|+XSfKhLV>BhTK3u_yhAs`>y#Hbv5Hc!D^g`6v{?{0X;ift}T((F-^iegy_`HPh_ 
zaqVYKBy(qs#(o6Z>g;jvJ}zvoEGK}i6imFSJa=Kj)I%J}&i9?qdXNBx2y~&L4XT*k zTQ06C51^g06h1KdrdmsFr!fhXas)s6QJ=?K^po_>^OU?H2+4!2_-{qnh~Q5rLFa+M zxx`uo(K6=5&8uZA?G24l!_eOk-`ZAx&rF=sthr*dW_6_@aD)RdS{;$7x@H#~GIL4h zLdnp|_rhdb^KiG`!PhT?JTzkz3br0`IRH<82vts+xVO>zv)53T%HoVDdIdZnGd46U7B6ze6Y&Ajqr`?kyUfWY0Mq~!w@0_+bzj|c9jAgQq4pBC=$*t|-3e%0iPZG5T3 z#Md51642kMm!~vGI)mul$>C^Uu#X2q;ITpF?M=pX+^OOl$VA4Jip@I_JS6P zzxO*vk{K;y_`Oks{8_v)YaZr;P<>V*HGI;!;_k(GGWhK9jc0!EcQ=;<=gV_GT8`fA zrSFn#-0c%dUiW5rL9P)a3jBgEGHRw!*&Z5Q^FUClx^#DFLr@;rw*{tOkyT)J`;pyD zD@R|D0~vL{&@?Zko^_r%myz!pkiRHa93G)Tll@fdQF^zpwRN{5x!mFJgn;*daUEf* zSYuh2tj0TF+nJa9esYZ739C;_V7nMZ^kXBe1>R}k;3eQC>mVvMvrM{UiD3@RTR*46 zFfA|rqL|N5txUMjM%9+Jop3=^9nD>{mL(tB=ujoa^einc zB)*)X%?c#$SOSzdRy1!xq(!ZnH#mi&#lR8~4?6#{iph|nmWEQzg?h^$Yc2;Q|Cmea zT|VtZA&o$I*a%d)+O3Xz<&q(ySzGH3)|#xcA=t-wn4&^Ei5%Gox@0{5>` zuv%+*jDafP|B<>t$97)yV<}rRTDZl0=~jz^vs^jrPZCCCl+8qftGK4#tz~;d9ZNZ4 z=nEMHNpN-fy#fd2K@Kzjud;a5*nNy=Cu-`4Au^LRH6d-HDwx832+ZI!2*A@DoTkVIOE+7{Z&|I7%SRq=*Dx&h0q@;uABFw zz}n?06_m%cXkSUpl8b%1)qaiwIiGsWeeo0SUqs(p&rk{2&7lBGKw(Ws*%=P>_dy)y z<6chpvr*<0etTP)@{x7Q{KmzX1gHQ#v%E>nYzU%qX)#+_M4kbwcTOjIcO9`h*%QBmoK~9@CAQ>bz?3Mf65K#O~%r zbpq7AbUKbEOv589P3o<>&B4+c*#O62Lhc5i6llQK)g*uK%jYm;qx z-;}%?`r^Xx1+1C{o>m-}6(b8YV@R$%pZ}glL#5OLmg|XiLbPCC+7I*c)jq6N)6p%E z0CuEObXzZt2EHn@6km3o0M+I(X;`}mQ=Alvk#bKXx~1uJ(F&zYihP~V;{*B#D$Q2ZghU5~u)vG%ZY#MLg4`tQ-$KO`t{i3QZK%u>O2vvE zdlGM=za;l!K=)B_Ytt11RN7AP>UH-EsLyL;2O7rFt5}<(f)H&Im1@3y7tgBD+-rYd zgUJnRa;ok}VK+9eBO0pQIK-aa@hQ*{g|=R{4RzKjqt2`R2sD%?83!?+eGyL*2fUD+vqTK$RmE3BBCtoafJ?B zaVJS6AKGXdj~VS#oGhA=^l1oYG8~mjVz?#Xo<>mbE{*(24G&MIeAtel&ig2y@L^Ne!wg)H;4>e&&aXC`!zQz6KSZFzeOO_)5#Iap?*xfx zJs(2nZLaF!FryFW%4kRX1Q$02dLD{4Hd6#ezTPR1Z>|#|>*d3p;6b-(PNb=AhVnk9 zCTn%8R%XT0iF4w=!2a}yvqU-S^og2Dm;#mBmkT)`DB-L1fiy_!&rTe&%Ac2CUAB(y z^6W)v7&q>PUn1?i4$q8YC4%)_*PaQ;-)s5mHv26C>mE3MD{;s@_&0;XP3}7gZ~fvH zGFM;rwN zee{mA3JhV`Wp*sO5g!5)KhZ?@;6tW2UO-zf-KzDDdID88{B+QKN(8SjPf|pPDXl8H z$Ek>+kB=NBwQRL&Ph4*?jsGoOL^4iewuSEd%y^qgF3gLb9) 
za@rkax*PuFR2eXU%dVbYRLszMtIjZm+6yE(PDYVCoHQ1kS7qKvE!(XAREHn9EMujV z26?q@Bb4Xy2BYI_DfiOMvLIIY-F^#y!T0WtA=#R1=RcmN;Xz9s_KD4|j@&~u6PIlf zpi+`}3{~i~Mantbvb*@WEAO0J9zDE7N?X`{C5Rv(53_s5*`;M{!L%lnz*L`lYgq^j z+U^f%SNN;i`(Yc#1>Br)ge-sjWLr`$rhw~vIL70%6MLU=0zFINNWqLm)_zIX=eq$V zUETzxiKLVSM#%K-B*f?tU|9$KtTE=sU9esD;oloYjO{MO5i)q+hk<^2*@0(X0kk=CXO8oI_wGp!I zT8z284_gxa-_vV0RPLx^t)J_!L^F`~_WbGwr!I1g*XT%z7M?6kk(8x%dnI`<3_rT` zxj0n$vr~}kWn#+(gBZn_O`0Ja9`W)0VacZy2h+0>3vR0dXX>*I`RXR1cX#M$6;tFh z9x`d*<%obwql8W&@qfl06&&})kE!1@yTmA|smv&ETu7>}c#3W&YtdYBPp0tMh3Lbw z0aON+Roc8qyi-X_Ytv`JCkh{gb3U$e2#{suzt@q@(e)PZk72SiKCVdxUfm41gJ0d*#kZmS$w`UZB=K}9K z5a+2MD5gadAT*B6&!TuCZ{samdT#qA!fhq#v2WdIUUk4R)o}VP?gFJn| zlX3(93m`Wa%JBg?|Dfn}KQEvROWghcnYivh#NGLyiMx1GDgq1A^#2Et-_ZY|7j9?u z{>8&PSC@4E_3}S5HZrV`2Cy2-?UT9_|CQT+@6fmG|A75J^>Xl?7r5DV?&Pau$s)(Ic5v_Y=_sW|bwF2}EqL4b16 zVn-AL`87gA~ye#p5KKV{$T%8WaAI6-!(G+x4>+Eefdu|Z}I71 z)ev&={~nj&V1V|PUp0mh>e!l+3_JLBs`*Nk_qwF{WoydpGjo-Ygr8?OY z>OZ;1?f^7@Drx+A_+Klju{TF?GBAHZ^mGnB>F-}PHGU8NNo5Bx)c=0?^YOo%{=E7B z`b}B&iJ<*eN2aRuU$69yewB#%EuZH9ez46u5xo;727>xWivD_t(LNdCKMM3)z%Ksv zP{DJoK>tJOmgfsWIsg19o6Z2dBz)+e}rHDQa)`2E#%SXvAvfd6j8_TjHKpNgWt+Wr$rGre`RF1Q?9XPA+x z!=lKg03d6t!@_VWw#$8$aZu{1qXMN#uo|525nUk7OIBqDWv2Xi0ADU5rtnc);lfh& z+Dtg1;>7S}hWVdDqNc!xzHh=ppr%8IH=3^V1VU-H&ZEPF-7DAGK$$3rwlUan!?c`* zPmm~MCVg=G;_C_vY0!|bM8^!+!I|$B+puBB3Eevj&n20U`UcukL@419s8+CF@6hX= zlpw!`%G0P44D3E39P$~+1_UQZ0b^;SpmH5b1C0&<{JZW3#t;uBgu}D?5{6(#Ff6ix zB3F=IK`h~#MDScm0yx=u^bp#44s?0_n51lK^bpzR>mb0tDM=adwa0Ncsfoe92Edd^ z!oj;Qp7GSL_lenQet?4Z7=aawKnqq9p7H*;A;}STXE>kjMXdfU)GAK1w8ngVuCV9p-oU%rTbL z0D&^7Z40+~Oqly?eksU_Be$dpHLxeH+xCo%y#6UfJ(IA z6fd?vySq!ZM91&XD=!_R=fX17|uNIUM>?7pol4OFajvJPou8l zjl94=uC?;1BLe88pYky$d(Xmcc(J&(9)jQ&`8PlzzL*_DV?;xjJ{{x0kNCojz_BMj z9^GEtzWvKsir5iTRguwe@!Bf&=2kiqkq(cw|F8q9(B7?B%`czs0=nH_J^=igZZ zK9uAGfO8#D`2_R!10g`#qKUZs7;8nkexYT75_=*s#z{id@JQ?|;5y8XEe8T2K+%$e z7tH9_8T$A%>{vp{jstnf;QXv;nrJ8z-<=5N^OOuUuz-T55D0YM6t6Feg%reEa--pN z;QiBYMkk;iK$Gq*K+#_~#{FtIyw^0s=Fc)(z< 
zq2bosZ{HQsd#97K2N#Ul*r$6~Ztf7n@;sGV2;Ws`ONvk*ajkDgzsY-|(?9i{8Lck} zJ6QTuSU}$homXebh#B@w_tMC>d1C45-8$8g$Fq2s&ZQfHNS zKaV*F=G`AB#$$>rW~_YoVapX>3lVEcDh>#sx^K?tuSEcX_+Ue}fP`rfz#=G6d@y8! z*LqYOCeE62SY2bj)RWpgY+pq+8rq7VSutN^?O1xwlnjk_951s#e!WYnf5y!vjX)gd!M>?aS6QL9L$d$#_+3zA`F*!;U6ut=AfH z@Zx7+xH9tM3QxNVVl?|l3?mA?05&P`eynKmNF3CqoPQYdV#U+`{90#_hZ|OaM!K z#QdSE)(&z@5)Zdk1q7ui7jlw9etJJ*Rl^5HH>5Y_xI>cYRd}N}Ud3Sc7wq?Ak(e zDlY9&jopqSIFU( zD>7R7vs6c`&+aohf7x4;1*XW4_r%_bg<&HZ;_4hf_O;&1B1ARv)4&|8^U0E94o>DU4*Vx(SbKALyfgt!u!%==!W#4sknu zE{Bw-R_P-8d-ZtVlk?|hin2^EkNzkh?67zm?RSo=7iIFp>?P9$?ODcH9h-Hi@Vx+^ zHpNAaZs7e8#3%L+1US}L_tSEsr5ai;?7kF0n#K%-&`!DceHX(G`VgZu&4TVT+!;Oa zYMWI7hzv$>@%z5BLvP8Oejd{sexmLbWWSNPC%9cR1Xb4A37Mk_98$FJv?Doq{&cD>N-f-WVkD_~6>q6Up zuc~^SKSqdd&k@7^Mw<*C+8;eea#`-svIo)IpA&6KNYU$`kI9Z8pjmxhX@t38=w7n- zqxZoS4y5Y*$@~Bu1XP=_RJvwxp|#@cT|?TdCo85#6@mxOAY1*Vky4Q&ijb zB|g`Vfr(9W-agIKrGOef%3L$a^4?bW1N5D>rLR1^QxPkgBqyQ?LWY}V>L+FHB1Gfs zwdNXBxnvN;c%eSe|yp-%|hKo;4nb1Jlb zgQ~G}R47YyA9u;sfq&yMB?MCfD&Auy@-iWcncF|ol-Wh4g@PMh-&~!HcrN7~u>;lp>RE^@VBZapg6vsiWgkvS|M=9nRhLgpwGw`9aQ{{Y&k?c~U8Y!4K z?@d|HlxmMLo2;iHi2RBoG!?wByux{9IxAm%)06T_5I;Et;)RV}hT!$Z;%QMw9R_oo zNW8BxCxX$&pKp55+A64c*k~S31NYzgG!I7hu5A|CB25Hu8TG@2gFZeMUL!MsHPc%v zqmoy0b|SS_-q?21$&c_Xy2ub=1@5=?ya&yRP-0@&c^V*^67&5~0jy!N#>W4F(F4bT*Q$T0 zaiT3Z>Wf3+&xeJnHEqFPuC516(QHDMeX89=wGb{@(OdcP)TCc5@)oTxp}S>6nH~3R zwi=gHqdnKULLKc-L$<^J$eBy%Jt}u%*g<{qv)8BxI)^4t~CL*OOZ(X}( z6H5AUqx!ToInC3qzZ+&Y^!F??Z#g9UK}n*0dEe6w1-RdafR(r^BDdc^$VVON5=!sP zjf`?OlZNHL3ct7OnW~Xs6if>*Nt2(}q0M2jq`7iyB-(+>N9^w$jm7J`Fpm$WCPTrw z0+$)S>AwlPBc_62=l!fWu-2%LaDU3|mV&W-$+v9sim&qKwb@2lGj?=0W9(t`wE$t& zn_l(nFo(76x}UzA9k#-bjD#s=X9gNSKWaCA%16`YCurr*l)}GzFm7S?aOafJidt6o z0~hfbeKmSz?yL+|rQTyf{55T(rGN`szHyfYs&5@=Dqh=^t9eCb=H1Td8Ubo^^DvDLn}bl-_f$HFgDcSaZ{Ht`e0G`x6QpMbNtcOGg{G}@ z-s82586hT%(MpIYv(DMZ(@z(+&k&0Enz)!8ll@IPEfYKDG30Q;1I6inDtJ|}jhz6^ zU5e~-->Ug_!J-$UHXB!>ydA=2<@5)`hgFy zwvnzm#rw6nSLC{K2vM09!`mbo1fa@Oku^OlQ$+VRV|N2t(X9*5?mwi0-xV(^w<+CX 
z$#%{!diG#XS>Md+Q+p!=)^htvo>&zRa%Atu@>#9|14Q5|<*O2~V?jCGETCM?@F!&ySKF$Y)a0i)0#wQ1C2pPl)O09&xd~Ycq~`ic zg_CRVX-$=#p@o?|<1*d0B8y&2(fnOM)p*fy zvty@j({zaup@dd_V(uaoHjB(Jw4K8m) zS-N@jRo|NBaTkbwOF74S&N}q-1%`_Du8*io$X<`#oTqjXnP;R8bq*fx%d4e_Q9>Zw zxF_>NcRVfjlq*-hFdX3Zu@+xzv4M$>K!vZHn{eRHgH zXzXRq!Vo=f7md78DJpVX=_7_nBaZ7)aze6p&cSMa51CY^8XG#TY5n5*TlLurA4rR_ zdCtTbf)=jV#XEfBY_aK-z*eY0c?5^c)SbnILiUzd z&8san7T*!Y)3VeRL25SmoDaBnMLN|hBP-YF}1*y46HCOlrD=)~onA2hv)979Ma9Q*A5CqH#DWR$WBCF zdJAT)$B7-+5eDjrsJ;iw5hGn+8sPZRR}v*!3Ig4~?wV$$sI>4BW|V;WA-c4cX2n)ygn5A?#IgcIB$W|W`!#Qzd4Z%b|55#x@ijb)zgh4Y zD!2#-QaI;Q4Gf+3b|U4BIr`%hqvpurM?elMXW|>1k(5SGa?+|zM!E7g+0%rD-|IaOE0`W)N4O3q;xKi@d;ubm>vSeEsU7&sv~fkvEV+5=@`- zdFL!g(z!qn(Qr3Qt{r(R?Fv)c+sC8&OA7(iwfImBcke5pDBS^$%Z<&i@8IccBGSp} zEkf=Y-a{-}-Y>&g3V)07m{15qkVO{V6o@%X`HmJ|d61@C*NuPd4kEA-th?L8{V^2N_5KIs?Ub^zw#Z&~q1((Z^}taM_18Kg(}{*_t8&g{ zKUz#o7f-!>w8`}XlP%$MpdFC7qE}}w7ltup%cxH=II+bm`~~0&2v92!ukWB;7Tm^% zmcWa?nX3@hwx+LzAjFqnH;}%6Uwd_ZgAm9eD|#rB&PnyVWItOt%FD9ThHv^+EgL$( zBw~?+6Si>SjPytIQyNc;*ZP~2_Rn7bYjXrFKjbG_ym6vIy79wEal7;Cq-!1^Gk*OC zlSO?qF%c>#^v0Jy5~5a*x3X_1s^Wj5%yT&!ZdhmGM}OEwDHl@vGe6U9?{s;)D8Ze0 z0c-DEw?&%4RTa>AN9g>|*5+U39A#Q+%>o!IfUn^M~5tY@?T!9fQPvD@2mqTF| z?D`CQBkD(en{TCQneu4huj+Kn2zz_iax`p9;$eQY4c6LO9l$9s{~Ad^bI@5fKyxT& z4i#*(Y)n-ZgQ0l0;zX%vS4`DU1`l%8;YQ1iZU?Fv8TV^n5IMCY{EoWU5~-=IE%maQ zim+h9dBtqn?zFqUOW(Xq1lCk~{+ghRA~$|28S_Rdf|&g{bz3neouX#ra}WDfO3LkK zvbiH3kt3;v6u0-k z7UH)PI8#;-JeT@dy$A$Scg`9=O_e$)%$*rTKqX`FO|2VgYs`CS2QCNRjl-HOGY`!&8Z)h525sFSNVS*)!N;+}nN@ae4 z`X!>o??!~Nnh3)Es5=M?&%lkc{u05Dn)h|O>)KPO;e(+xdUcOv)(`X2UL9_!@@XVA zLLlS6wu0WXqm}Lwze(dN$iQUZ?!MXQ%H7iAvcNRn4u)U6h{Rk7I66>sEaD3UYX5G&!8kNkb32htYlmXL2d^(;Wbo0P_^lnh zzMJPBz74X_3!JjOR<$x&D;^MLc#mo8)E6SuLW!-&lRk@%YLi#jlvueqqhJ4BRM?Td zN=IJawFZ0~NOrHbOf`jcH?8u`1lcwiy1+=#igLNb8aRu*0I2eC%KNs;Dv`RB{!X^o zOesE{9wL<95*IHoE;p+w;yH2Fr9+#)n^>^+Azh(M`W#lTh|r-JMkpRuJ!m%b?u9=@ zMjeC<0zD-Y$>5jR46~U}=0^v(5%73_3O9UXWD!x`gCC7TE%q2mUfTI-Qc;@x^9?S< 
z-sKlho?(c{jmD!tG}dV%&QlE6cExxl%(#%jf$>fE5}~FsTPc7l({#P*Ih*B7e9%u0 z4{6P8>X=t!*Ly(gHYlt1_h~3NUHW|M;{;(DNI9JA4bENQkgdujnBjc@Xsnd)X2;i} z%a0AY#j_XBx3y|Zh?1!ke<00O8vl+O?hU$rJJg`5y@u^*xHcnJSN(}F1he&P(VPQv zVaAvh`MGw8VBW?&>Vh>+FtCX>=*b2wc>|UFB^o$WS&W-9!sUi%M_I6CE~e$M@^rez zt>i|hk3?bneh1E_9W7J%+f1HjJ;RwV11DgtIK}}m8ij|#oD{|n*t7~)6?Tq}EOci_ zsz$xVw*71V$(g@plaP5)>xat9WT6hMUQ>~Z$ zIp_N@2-QalCxDcyk|th@5{BD&eNKd^F5O64F3;Igs4{&FUt*?@6f&K`s1km)eNf0) zdhxx%zz-+{GfsAVn&tsT_Erx|Z_Kcv>%{|N1eVuJm#=PW6QD9E;HD-%7%;(FIXgjd zHK`@rLkjHL$)^>sS93K;&NhJR15Q8riLhQvYYED8%V>%nZ3rdSYp!pzUBuQ9ky!7w zz>oeBF?6v8X7jKUsP3hr=wyDF_?q>Iyp@y$1cJH_k}x1p9w(&cXx)A@RL~p`()Kk)ZghEN(3Tal7eANu_Pz1QDne>_ z-I#%{+)`p<)o57p6QeVh3&NYBb~unEdY0qUgdHN3!_yr*7cHprnIOxZ3NXOfyV3YOL#W3#{HyH%@rRkJJoz@~k$XPZHoqrjPoc$Mu<+TnS26Ml3S^ zc$I~jYt3iS*D}*Z7VS}Y<$ygBDs;xeod0faf z-pF+9QX&yw%b{ylA{jtsA7kE64?0C!U8tNNff0Lx;pP|5?dTzbbzKy8Z9$I!aE28T7OO{VVVQ-+2M)__b3D})*m?R{P(TEmzNEzn%@L$6){Z#j z@8m>-dltayR|zgLZ1L>#Oc~1}v$LhpB$RMUz(5S%0$rOa#sNu$P;y7wR4-stY6+b^G zQHJp#kFV~e?i#w(f2m)V42GU#?c|)Dps|@G*=)G?$twI<thq=StXC}j~~4&9I&U~8p%6;PJkng%58R>^lw@SXnKL^Zdp=`y>jg-i!d~2 zC5bu-%JbuOBs^?f$RH6UT?GytE|rEI&1)a)!960d%IlWwJRC?2F>K>-fK`EunFBn6PaGatoYVr zHECP(=R*-jHz|^q&R*sm3yPyX%*-i>AN7f)VMaSrz;#`8W)H@9e_Zk=2$jU^i?sdj zc-ui5Ov+1BBGc}T1E)ueborrS^D>{Fy8T|j$9`U=QW~rj%k;_K=^8@4 zR4>7l6j_A#Oy6OBFccF)0iSGHARXCV4*fp2FaffeJ+f^E)0KH_W~}}9<6lm`Uq9C1 zKN4lij(+cPe^@MX=IyjVWYZ!el&9V)v-ihMY-`kA-#xUiCn=4HV1+unYXlHy2;?BzI9Gq%f zD5kD6TE}-?%21UA%eVaB+5;k{(X zi%2r#XVV&)ts0~_Y(a9U)b9*m_Q-pJzcJN2Qxptl4sid!XiOii(fi=I_=ba=q^sRq z^%^lMWyHMbtId9{;_hoJyyy##4~LRA)gy&`33km+13!{ik? z#p8+2F@+X^i@7dYF<(-w9ONQ9fCr!oN0ay2GnRie%4I(A+iDoc~f*1pDz}w*A{bcqxsXTCMh3nKoL?iZ5U31zRTBjeTNW zL`_LoTrkglmRQkePH)!}XBN$W=NswUv7U?KM5ZS@c2AcseB|j|A$%Uo$F+2915Rc! 
zzy`d*tnyehx$RHFrCbX=c}33Cjf`2RW-txjv5hro#HB7EfQ8z;;7gg4YTKC|VVRr6 zkG?^{)fBdVIlfg+d?vgAe3fQwj^#^~~97fyyK%W_S+0^VC zyy}rFGqfMGLbGd%GCAH}$Z)Y;XK{eG{*+D4GyRs1*I+S6_E?y0B8&Q6!BHCy&+kF+y&jf0_=ZuzuX1hcjvZ}3YWU1yZt?$;Rb;C41K1gbs}L$rkgJhkDpS^%MSWl~<|Y zQf>rM1132)ZV?|ZV%xy%+RicVqrV52cqxO@11@tvA$AJi5c?f z`D}J$eXy5rRO{A%nY@(|!GgpCEGwc-h+)aPWuoB|14`#WMxODtOlya@(dp6N8wi8F z+k5K;IzjJ!oUGDjkNFXHbT=Fu3_=Ev1psQbc+)EUiBaqTeV@IxGXxm*tuy1R6mM1@ zo9{0$wjWGfL>J$t@l-=I`PO{g99p|7&IrM9Va?^&qlM8?+etW$-`%AehT!1qG9l?@ z*V&ob%%@!0_#Pd(#ieCr@Ms#?EO`4|ds4)Bk!-lsu}W+V1XuTY;r5m5UM%mw9AGkT z6ph@h>&=|L84$1AU*)$EW72RSL%H|BLuyP-ie41jXWa86siA%`av9$$wfY?76u7(= zdlBy%3A}mm;eK9`wK&3bkg3Qv-5!-4T6mP~rzyD0$1`^Av=NU@ zNCnWBo8X>W5H^h3IfeQjxEK(1^k&g-LrV(L%R`@GiA67)dmz)Z_h9aF0J0~B9nl+F z)gLF)F>wvt!hIckk1m${XSjWymFfky;)tnCQW2(wA(66kv}hrGl&%R{b;|bb&e=Oj zewy>e4xG+6!#@sWcaV^NB?MRIV##(o3lFYy*ae4_p7^LU8t6V7djB{zazeQDNKUrI z3eD^*gWQO_OMXSASIaVZXM=yx#prcYx%mn}&In+mb1Bf%^k@p&;u%Y;Dk7fc7$TgU zMwMjEcGAaw8~zv6KH4aaU0+SC)CaHqV-LQQ8!49UHF~p{>Y@JDd!5s$t=fj^9H(I% z9!b8iYxGlFKkhARL$x-PJO`*NLm!Gu4MMd7_ZrIw^0t&k9Ccc#+`t9CPkyn!@GRV- zlT1}ld3pC8u9k<_xx%x22IzNom&nmVr%;oZ>|>=aB{ZI^Q~&c9lzGh8%$ls z|B1aN7|VV4)2OsWyqVxU7fW#jyRY)s%rJUWVy!73c7!G$eB!``tCMW=1#KHGT8S2A zl|RqD&BHOmyk7|szyOm0BC_M$F&_L3)>T6#3}|(Eg9vd(c4DnX@b_L`_{z*pjbhp- zXVn`0G>buMapC3r;h#t`z|y)Z+)|JRS?!VYrLk3yor`*iRq6(d{cA#gU)|tOOC?R+~8U>5uZb zx@@nq(tgs=c5bSY!RRi8c;ydGoJ|Jd^@eXoVx-A*oFDjdr+-(nKC6GG1xwtA?(>hQuxkR(c-I@cI#E5#;Uy{01DA>F=ax(?If#`S2JG~ zTaPfyat(y`P3Gp1qiO7PL}s1c%6IWmVr8b)IZkY=fphOMYSqP&c&S_MM4xA9X^hV`)$KVP3<2i7^gOxoT4Rm(C)>oQmOeegsLkK9!x}RJe_HJqej5(+pI)KM*V)d|KNg;@I-f9Ui${Zb?yDCJ0_UN9xOk|6LIK6` zV488o74L8L<@y++rSP~aZFc$Exuu=2&;6o~gm!n@EBhp<2D263_(e^z7V6GiOUjoI z>zq1N`@1l)0$8=1uQ%x$uHN_dj-JbXfFUXh`c&{}u>G15A-sW7t7DCBNQlZt(jZ?d zIu7l)XdW7PbT<{(L}Mfj13iZ>A_EBncqn-S_?ti*N3g4M(^uDhGqB$|!zrrh(D5=T zl8Xovy|r>{aJg{oi;NE2?%5W6RHvsr)6pkXj-hV8xVm*ng*43{+ZsWRW~p-!LhW!| zLRH%&T%#S2{CSmE>Q+pnU4D&Jnd?gecznU~#JPFiQB$`dlNVNB44VCl;0t|0aPl^| 
zm&r}EYkEf{FgX6d6uyZ74oYAV3Pr~op{B=kKDJLO_tTz*C+8LDj5=`fA*iKQHQd%* z`uDdA6&&0ns^Hq?<@8$`(n9g3V=M#I#E;)_EdkX#BY0v08CendNz83DY;sZYy)Ngi zDc!j{ZOeQHokuM-VF>R-vAz6wrQM~WmQp$#t94=L3!|KSYw2z6u%Bn{Z--Hl!jnnn zh02*3Hd`0=s9A5KJpjOX-DUjcT_DgYjawpBgtM5AV~?JyHJ`3dPDsD@k>8VKI*u{x6W?No^7yo zA55-Drky%y>UyNZ#9KczZlxA;&>6Ga)ZlZ?j&EK&h-3SO`_6{^!|G^H9PCZ*7ZEpn z^72}xKUPT{lsj!5nM@!ge(XpdF0V5tj*pZ?L_|EJA;8cc+aeH%&pWL91Q3mVx{T!y z2=6zKJEjH#x!!X;0-#N|xQGad3HAy2vg0(c&sUCh3}h0#JN8Z>8RCPr6*(&;Stu(lR(_6jXlh{2G9 znm_*=U7Wi*F{QNf~Ie&}szeV6rAO3s3{a=A8$QSrO#JveL zl<)gLJ~MX8z6((*TZ(K6sgNy2#-Mm8WG`8=ldY^p2%*T584N=95mAI}qpXpA%`*1o z_sr7r{(QdQ-}iI=zw&7^(eF&|2IPeE5;pN-ga0^ZqBhvH*?g#=>(Oi*Fp>B*!BKY5Zw! zLQFrwvq&QI>l=KF1cFE7TPYBos?nW>=;Ob>!5>>E|3NAjN&WV=BH)#0v8Q-ru>Hli!@4@+>cq1pdrPH+mY|9OhP+~~HB;h*XLCoKVvZK24(O`g>4 zSTHi((UBz%1^7ZTA}isaB}OY3u(`@CU@#*z1%N@3xs%$ICQ!;ikKvBczE(%|@8#K4Grh}_Cab}_7-bv!u9 zvZTgiGt)$6fn;Zzd+F-&NCB|4K*SY zsWTdQOv^n^7PWHCq9|weak8Hm)o^^^O`i&%rOg@Z_qj7tg=_Lsrjdg#RiQOa1^!TwzW96R_=}ac?etTPZ2~4w+3hWFDc(Cy8f+8 zeR92H@~XyrV+CVh`uwPt+H2cmPaEL1g?ukN{PYlZv`;C8HLzk zLdyp_a#R>!ZLVu?zvxQuf=p`nYquVWh>|U)Np#WP>LJZ`KAcr!LCvn5T=Q~ow4W6% zwkf@U=?Z_N-JE@+k4?8-&!tx7b!(Q6%H%GwMXQ8yi2{)HcL}Zm4g!os9$d>UG1aZe zUBvQ8ndhxm>eW|fCf;UR#k5Lz73&T>Qw;qvE$XRmP4~2hUwfb5Np#9^*G#HbCW{6M z)b*vVzG_LH-DhXHxv%JeI1c8geJAU6RXn~?qXn%!F#a>&`rjB_M~B4jU$qljJS3A{ zp6-hHxrzK#uQSZyoT3A-P%?eCV3?OOAQ2TeN@2IW(=%>kRqc672f+)adoD{dbFjV6 z)>TDZ(ok+XptEIQQRBIxFsSifM26Z;!PoC+{e{xvrQ{&0%(a5XwDQ3*r>JV<5#5H? 
zjo}Ti%IEVgsS04%h*K?uRSgXOYMrXdHK&@(OUWy8nZmBKfZMBZ5-mS8Wt+!-Hn?eb z4Otxg;72H0DQ;8l+#X_8wz&4}+E#ShBS~ehH2&f*(-u`_(G2D{^Jx$md~Lz*x;iIj zsy#x=p3f5ZB7RtsZz?g)jiksMf2zNkH8UYt^;IlRbMrXkppRRMoF@ z4oQi*-n=2!jYF*R=Q-b)Ik2Oo#wh`cDMOa~t$e7S9po23rW`W|Hk1dHj#G%RTf&Ie zypo$=W2U1mCPRVz3|X;a{QB7fH+xD9A*5-#;!=BfH@9a?sd^}Ub6qIJ!=y?AF{qck z+?ENRc;IbVu`*5Yr-+Cl?D0?|ROuNhE3P@MjkQ0M6Olf5xttl`jZcbk&*-K^rKPVT zvGv)(r9f+zkZHuy(&-e_`PjLZC#mN*dW%m$?c_e~@_=lO4EGO+6IN1e=g!OVTErb| zaZt2<_+jk9de~d&tc_??nPn-k%QW3xC?5qrO*o|y#P6a08qjJ5A!+?v!b=Lw+1m%nOkR@KwW{bci``u#_z?)S}` zY4M=@0tPfCj4&{@Cc;h%Im(#_U2!e`jr=)TlRISah{x)SH-(ERW-Z*8)(2rYuajKOpj3FBd@e5$0P*XLvOo!mUnfF=_*en8c%Q@{Taz@FRSv-R_+?bvg@Fk7hv-3HnHT}k$=Cr4FPsj_HWO}A@~KrmipL>nUhrFQywV)o+}s8HOy##Cc*|Pm|?!syw`sP7a7c+ zOu?-<|CEvlG7f#qCO#{-Z5YV9>V@{l&3-pX`?SA}W}H28TVS8(NbB8wc?3qYV9O3P z_dybhn-v4^p6=Gn{Km{tlz*{rL7B~#H@YQQ_8S02#wi1eG@Ne9vW+}o zy$7b&Oi}>M@pY?qU3_k`;y};=k5}ntW1I3tCfL+aJOT9)|Z# zd!4wg`sXQ7FS*2h$v;nleVC+)JFKJ`34i(L61GeOh%*vNp6?+6lQ1KRJ=_0%_Hbs; zq4-eo_q2Nd#k3{299anZ{v$A5P9iDh{Sla=+wUeNegR69fGDo!|AV;umRy9D3ft7tOy-L5f}YR$e4- zV7U_$jOQ_11BXDhJ?mXub<@w^OD??r*EC?nry<~A&BnuHc6)VsKXt2Ly>jcN#pA!Q zCj``eO}_60ipSel)$O;hSXP!?^R|iq9H&`Hebk#eYUG~!J`UQ~L&bz#+|T(N7pEEt zxS)^QYrB}0RONzpD=FQs_uCDeui6?@?|bx@wqWcL;&@(A7oX*?=_Raj7dSWO?8=Bm zA7{n)KO?Dzt}zzx8(8y4hTAz>UWreJrg`7g8>`pgG@BMQ;qkp~TYvX*Sdx5EM zRfiPV7w@kr00I~AG0!+Br*3*C(ZvZQZaabhTwke)&*;z~GJ4W|PlNeyl>ca;M)`fK ztW8i`lk%^34-7Z1JITZz^@Z-uA$B*Fs;M}gu8V>?7cMVT9SY|>V#`|e^6FIW&|t~& ziKe58+E$~VH@r`-&B$>d4Et{Q!ZYwogxBs#Y={_g^m1?6mtvnLIUZL;61St~n!bB%BOVeogpN$v;j<@)7hNHu99SfiBQQ?@VZ>+uJM~wsF?Ud%X0FI6Vdw$+Ci!DQeWF9Ok*kq2tw1%#;=YTr%_rb10uwl%S0M%tR2od!UGn)9FC^r* zchUN^6v!*ELDA`-uQ&)vD#U%((u)Gmt${&_es^ZW%}OSh4`tvm^t$^|0r%{UnvR3z zV>;r(?5oS- zrFz!>;Q)r%`>EPYh)zy(3ym;0_(5+@k=NE8L-p`i1fm0I z0PxarD~kA|P}B{Vv%LjarE_w#+Px{a*YZLYRZo=*pr-BOjlXJm^1xWJ46c% ziENTYzj$RBS6;D#mu0$l1%E6bo-Q3{G*bBM%Q|g`h`kUpCEDDI&5R-Tr=j zT$%8=Wo_@lg@iZT26lnI(o#V^IujRuJhXhO@H+tr`Demz6O8s**3JK?;P|c~3&UaU 
z;ev-Rl?rQZBjZeB9~zWWA~F8BmKHOQ^+7UhTg3+Cw>t?d^KM)|0cM_+xO#fVq1Q@G zt}OPi-Hs=QGGUz>Pj(ek{>ggG;H(rxSjlr^Dzssq>{#AydR!|6+kN_d5IIuv7-GR4 zltM%j*!?lSj_&K-nNx-!{rzzT3B9ZRnr188?R#|ix%INP!=r=1v{f{;dD&vYY*CG` zD=|d#=En=;HcuV`2YCev5u|l?Tx$;srh;}~ugxSsLI!&_O*B>-6gByl<6U(71?h(e zv6?FAEw$fL5fuKKER9--2C%Gl{z$oQ`(zFJ9**wFk6}XH3*a#sb0jASX7B*hrvzew z03Law&TE+J2f)3f?oOria?~<6GGgBWaS_A?!w>^7?=(M`-zPw{fUqnvLd^P5b75w( z0x$QJA^h1Yns<+_!8V(&d=3&dCO~(1Hcan*7$1w5Oby?y>rG_#!TVTDk<@Tm znouv^x~=c|R75t3qqNFf2{E9Y3sW&LK)Eg}cV12uu~ha^_M#g4g}|Q}jW>ud5z|;6 zrq8<*lxXmJBWPXNu70;!FA$e^kLoXPdA3h90N7*jUAD468+jrlstL!yM6X_3qhF>7 ziVKQHL9|7)4V4muCKw+cWUvIe3rpm1^I&o*oK7HaF)&5=#4NdzBduxvL}L)m1J(pI zqhw*v^_pwCeSREV@LQEThLF7{;jHKt^5s4;Lw=w{nXTQTBF*AW3_|!x#ycE7zWhjr z7`BTZbiUbCrFLXYK4`&Gp>cbtVd);OSte=zvm>kDc<2Mz)hD%BMW z+^EJm_VpsLkp@Yp1O<;11D$2n)0*pQ)-ywO5IAFZBcJNmN2)-Egz_5ah2M{BLA1&j zFxe{LYJ|(_Eex5kAVqD~KBs5}LpSoPA|hQM-8~Ap6Q~l$Fk|%cw~KWH@$NZzhXO7? zXw6uYf7gkUYudB*Vvww#H}1 z4bsKOeoz{5kDe)t@8SbWqR|#vz@hS6XIGBN4=DESA0V01$Zy%^DwrxudGavaS(Pxm zAl073QjvLeVfP>1B}ae1jCxLqoKt z-M-hq1XCGar3RX=6xdsh@OQe=!5`!=eNCc3>B}lpkf8=6_i`l5M< z(3*IT5#KIwvU-ufW@u+%GJZXweZ}#U_OY%+F$4bGnu7osU)cRsJsvY5E?t7a9U-$y zrh(epS3ACzGA$&>j!WGTWZl7WVZh~>7-Y5zBAOHAEUO{S$nH-;7@8ue1~4%}5v65{ z_0>+Rlp~*y)f!h{%26^=>Z3#{uq%VqDS#9x5H}W@!CRXL;zSHWhe_9OpcWGpgtRO# zlv=-Aj~F@}fr6oWw1QX;`n%SRQW54`ha$5i+iGw06lD^or^5(lRNt1ke2E+|8v0;k z^-lK#ZMGc;kat7y5UvP@RR;-!5299}sF(R`7n+x!Ivph^&BsETZ>>R#y8l~2r^Xqn ztxCI!OLC+KU}*ZiA75*fIO7%fh1>g=TAFBni+GqQVmVZ7|55J}FFbv2eKkT>I|^`> zf?*X2ikO5VxSSc#*mkpEUnHx&cf+h&YuAJDFQ%AgX2JuvZ6Yv&f-toG;u#5n2voJo z#;B_9=LCr%NiqV{Vh}q&o6`VhNY;o$~964&}L>Cik zNd_qVJ+>lDwryD!X1APtn>4p)K4*X`2#s+nbWP%!tf!x356#?pK206ZsWVJ~Ps=@w z=e`McNPz;elv*LOFI?Bv{-s3AAJ?QTwGJh)Pz0U>!J~N-ii%gk7=KlS+3iwe1?%gI zoCvCxr$@BM1pDJ$UNjqSz6cx5tUm|A7Lyb7#uY(up?KV2!?GadeAK}x40rVd^Iayf z;Psn-2 z#2O5{yzhb2?)M;Rt*ErNnCU18%w;k}S}PQ;i*lS; zgTa?WpVdr0k65c9*oqEw_6SC=${5U^glJJp1W3!?PznUSW#QUY!hw$Qit3D=h+Wi* zz*Ly;5+K5UoalTo^a-=wNT0(*{=(Y8qv71@pRUY=jq1Jwvm_p`XS2BwVLwpkesQHh 
zv;zctOMlz?o${HhygyDhTKTYp!2w2%5n+WvW((&6he24TB|s4^`?GI$$3qsXgMY3w zMDNlAOrXz|GZC=}vAY=(0w;@?I{|-j!t|^?9SC74IS@CK0KwMZ|LL6#nVnxY+xBbc zMJI8U=4NUK7SBfAE2OY-K{#hW`0>iBV^|0d~$UpnA_p{6?XrU zCEwcM9{;!-u3~H5z=DGRxFkR!E@AXC)OVV(UycqTB%#fBX=VB6eF*vnIV!vBx(GL$ zpC;MA(K*dK=vZsK8K`D^YF)(l(INs;PKD?lY2+S6E?D!ZLy=<+jxY3TTanoe6nIJ( zb{H;fHSNsU*uzh6-<2cjU6SXQ&U{L{Ka1qgsKrh{I5lr$fbgeUdkm#q(p9ew9m_x@nfydSBo~^(lRA?PcH;Y3G2J1iB z{KroK6t6nnO0L7d9w8$}ua07(Wkk*V$9c%Wi*swHPioERW|gH}ilXfLc9Be zBnl}h*}?%LhxVg5Mso>iQONgtB`(~5JkudnhFmlUmGT8fP|0$_4W)PUH{~mO&uxL{ zMu`cID~y;KPgXphA4i_XFiXrP+mH0Pl-eCUBXtK+J#yJX{5Y=W(vnXJIWSlVbEZS@ zbf-NjY2BDmyf_I##?*MxVmH7`6*>LKj{y{-`Lpx8=6=;%EZ0b>P|$?ry84);8JUcZ zD{M15YrH|+?rE_FrFoGFjHyDve81w2(@7XO?E-Zblk*gdhPt{|@ znS88ZoOa|z0%HX|B0-dM#YO+*JF$Yn=zdT6;cMQO!hjGJ`n{X?ty3QKslGK}#-u`B zsLXlpH_%H)j1_{hr(xmvXsOO|y2hJ@CC7!|i`g!D+)te>vgzgZe1<4CHxr*=u7xxY znpYYAJ*=VXa7x9Sk7I@EWcm+%GK_~|OE_=VdK#eO=e%83*!(RO0XFVeF=i#vJI=pk zko+x!#OPS86(*@*ud(xD&MF;N+p<#H@cGElgiufv47<>xt0IPwp2suZ<6i4|nqACk zR{jRxE*PZ2BY3+9Qxs@e2nnHoMOZ4~anF4F}wNp~%Ei2wOUgEafg}s_0zTcoGs5!=Lo+HlpW?5I zW0h38E^oGRoAK7Le@)mgeLk`v0(jUI=+bEyMyB}V&dlc4?2206YOk{ZlYO!C{(TNK zZ7@=+HdS&n_gn7FmI#>KRsuZaL2 z@F}HXF#_YvitglCOBE*! 
Date: Tue, 26 Nov 2024 13:08:44 +0530
Subject: [PATCH 41/61] chore(deps): update lukemathwalker/cargo-chef:latest-rust-alpine3.20 docker digest to a539f69 (#2290)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| lukemathwalker/cargo-chef | stage | digest | `75f772f` -> `a539f69` |

---

### Configuration

📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.
--- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- stacks/peerdb-server.Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/stacks/peerdb-server.Dockerfile b/stacks/peerdb-server.Dockerfile index 50c69f807..16f0a58ea 100644 --- a/stacks/peerdb-server.Dockerfile +++ b/stacks/peerdb-server.Dockerfile @@ -1,6 +1,6 @@ # syntax=docker/dockerfile:1@sha256:865e5dd094beca432e8c0a1d5e1c465db5f998dca4e439981029b3b81fb39ed5 -FROM lukemathwalker/cargo-chef:latest-rust-alpine3.20@sha256:75f772fe2d870acb77ffdb2206810cd694a6720263f94c74fcc75080963dbff5 as chef +FROM lukemathwalker/cargo-chef:latest-rust-alpine3.20@sha256:a539f69c0a6b9d328b398f1e7aed81d53e986b49db485557cdb3e4479ea42889 as chef WORKDIR /root FROM chef as planner From 7deabd8d11d0f5b5e3e837c8a0706bd4d83bba7e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Tue, 26 Nov 2024 15:40:15 +0000 Subject: [PATCH 42/61] otel: fetched bytes (#2276) bit of code cleanup along with adding an otel counter of postgres cdc payload sizes --- flow/activities/flowable.go | 23 ++--- flow/activities/flowable_core.go | 5 +- flow/cmd/worker.go | 31 +++--- flow/connectors/core.go | 18 +++- flow/connectors/postgres/cdc.go | 30 +++++- flow/connectors/postgres/postgres.go | 14 +-- .../postgres/qrep_query_executor.go | 4 +- flow/main.go | 2 +- flow/model/qrecord_stream.go | 2 +- flow/otel_metrics/env.go | 11 --- flow/otel_metrics/otel_manager.go | 96 ++++++++++++++----- flow/otel_metrics/peerdb_gauges/gauges.go | 25 ----- flow/otel_metrics/sync_gauges.go | 33 ------- flow/peerdbenv/otel.go | 9 ++ 14 files changed, 158 insertions(+), 145 deletions(-) delete mode 100644 flow/otel_metrics/env.go delete mode 100644 flow/otel_metrics/peerdb_gauges/gauges.go 
delete mode 100644 flow/otel_metrics/sync_gauges.go create mode 100644 flow/peerdbenv/otel.go diff --git a/flow/activities/flowable.go b/flow/activities/flowable.go index 8001b5344..ed9d2b720 100644 --- a/flow/activities/flowable.go +++ b/flow/activities/flowable.go @@ -28,7 +28,6 @@ import ( "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" "github.com/PeerDB-io/peer-flow/otel_metrics" - "github.com/PeerDB-io/peer-flow/otel_metrics/peerdb_gauges" "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/pua" "github.com/PeerDB-io/peer-flow/shared" @@ -759,11 +758,10 @@ func (a *FlowableActivity) RecordSlotSizes(ctx context.Context) error { return } - slotMetricGauges := peerdb_gauges.SlotMetricGauges{} + slotMetricGauges := otel_metrics.SlotMetricGauges{} if a.OtelManager != nil { - slotLagGauge, err := otel_metrics.GetOrInitFloat64SyncGauge(a.OtelManager.Meter, - a.OtelManager.Float64GaugesCache, - peerdb_gauges.BuildGaugeName(peerdb_gauges.SlotLagGaugeName), + slotLagGauge, err := a.OtelManager.GetOrInitFloat64Gauge( + otel_metrics.BuildMetricName(otel_metrics.SlotLagGaugeName), metric.WithUnit("MiBy"), metric.WithDescription("Postgres replication slot lag in MB")) if err != nil { @@ -772,9 +770,8 @@ func (a *FlowableActivity) RecordSlotSizes(ctx context.Context) error { } slotMetricGauges.SlotLagGauge = slotLagGauge - openConnectionsGauge, err := otel_metrics.GetOrInitInt64SyncGauge(a.OtelManager.Meter, - a.OtelManager.Int64GaugesCache, - peerdb_gauges.BuildGaugeName(peerdb_gauges.OpenConnectionsGaugeName), + openConnectionsGauge, err := a.OtelManager.GetOrInitInt64Gauge( + otel_metrics.BuildMetricName(otel_metrics.OpenConnectionsGaugeName), metric.WithDescription("Current open connections for PeerDB user")) if err != nil { logger.Error("Failed to get open connections gauge", slog.Any("error", err)) @@ -782,9 +779,8 @@ func (a *FlowableActivity) RecordSlotSizes(ctx context.Context) error { } 
slotMetricGauges.OpenConnectionsGauge = openConnectionsGauge - openReplicationConnectionsGauge, err := otel_metrics.GetOrInitInt64SyncGauge(a.OtelManager.Meter, - a.OtelManager.Int64GaugesCache, - peerdb_gauges.BuildGaugeName(peerdb_gauges.OpenReplicationConnectionsGaugeName), + openReplicationConnectionsGauge, err := a.OtelManager.GetOrInitInt64Gauge( + otel_metrics.BuildMetricName(otel_metrics.OpenReplicationConnectionsGaugeName), metric.WithDescription("Current open replication connections for PeerDB user")) if err != nil { logger.Error("Failed to get open replication connections gauge", slog.Any("error", err)) @@ -792,9 +788,8 @@ func (a *FlowableActivity) RecordSlotSizes(ctx context.Context) error { } slotMetricGauges.OpenReplicationConnectionsGauge = openReplicationConnectionsGauge - intervalSinceLastNormalizeGauge, err := otel_metrics.GetOrInitFloat64SyncGauge(a.OtelManager.Meter, - a.OtelManager.Float64GaugesCache, - peerdb_gauges.BuildGaugeName(peerdb_gauges.IntervalSinceLastNormalizeGaugeName), + intervalSinceLastNormalizeGauge, err := a.OtelManager.GetOrInitFloat64Gauge( + otel_metrics.BuildMetricName(otel_metrics.IntervalSinceLastNormalizeGaugeName), metric.WithUnit("s"), metric.WithDescription("Interval since last normalize")) if err != nil { diff --git a/flow/activities/flowable_core.go b/flow/activities/flowable_core.go index d583044b0..2d1f7e1f3 100644 --- a/flow/activities/flowable_core.go +++ b/flow/activities/flowable_core.go @@ -23,6 +23,7 @@ import ( "github.com/PeerDB-io/peer-flow/connectors/utils/monitoring" "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" + "github.com/PeerDB-io/peer-flow/otel_metrics" "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" ) @@ -113,7 +114,7 @@ func syncCore[TPull connectors.CDCPullConnectorCore, TSync connectors.CDCSyncCon options *protos.SyncFlowOptions, sessionID string, adaptStream func(*model.CDCStream[Items]) (*model.CDCStream[Items], 
error), - pull func(TPull, context.Context, *pgxpool.Pool, *model.PullRecordsRequest[Items]) error, + pull func(TPull, context.Context, *pgxpool.Pool, *otel_metrics.OtelManager, *model.PullRecordsRequest[Items]) error, sync func(TSync, context.Context, *model.SyncRecordsRequest[Items]) (*model.SyncResponse, error), ) (*model.SyncCompositeResponse, error) { flowName := config.FlowJobName @@ -181,7 +182,7 @@ func syncCore[TPull connectors.CDCPullConnectorCore, TSync connectors.CDCSyncCon startTime := time.Now() errGroup, errCtx := errgroup.WithContext(ctx) errGroup.Go(func() error { - return pull(srcConn, errCtx, a.CatalogPool, &model.PullRecordsRequest[Items]{ + return pull(srcConn, errCtx, a.CatalogPool, a.OtelManager, &model.PullRecordsRequest[Items]{ FlowJobName: flowName, SrcTableIDNameMapping: options.SrcTableIdNameMapping, TableNameMapping: tblNameMapping, diff --git a/flow/cmd/worker.go b/flow/cmd/worker.go index cca0202ec..51415a1dc 100644 --- a/flow/cmd/worker.go +++ b/flow/cmd/worker.go @@ -35,9 +35,18 @@ type WorkerSetupOptions struct { } type workerSetupResponse struct { - Client client.Client - Worker worker.Worker - Cleanup func() + Client client.Client + Worker worker.Worker + OtelManager *otel_metrics.OtelManager +} + +func (w *workerSetupResponse) Close() { + w.Client.Close() + if w.OtelManager != nil { + if err := w.OtelManager.Close(context.Background()); err != nil { + slog.Error("Failed to shutdown metrics provider", slog.Any("error", err)) + } + } } func setupPyroscope(opts *WorkerSetupOptions) { @@ -148,7 +157,6 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) { }) peerflow.RegisterFlowWorkerWorkflows(w) - cleanupOtelManagerFunc := func() {} var otelManager *otel_metrics.OtelManager if opts.EnableOtelMetrics { metricsProvider, metricsErr := otel_metrics.SetupPeerDBMetricsProvider("flow-worker") @@ -161,12 +169,6 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) { Float64GaugesCache: 
make(map[string]metric.Float64Gauge), Int64GaugesCache: make(map[string]metric.Int64Gauge), } - cleanupOtelManagerFunc = func() { - shutDownErr := otelManager.MetricsProvider.Shutdown(context.Background()) - if shutDownErr != nil { - slog.Error("Failed to shutdown metrics provider", slog.Any("error", shutDownErr)) - } - } } w.RegisterActivity(&activities.FlowableActivity{ CatalogPool: conn, @@ -182,11 +184,8 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) { }) return &workerSetupResponse{ - Client: c, - Worker: w, - Cleanup: func() { - cleanupOtelManagerFunc() - c.Close() - }, + Client: c, + Worker: w, + OtelManager: otelManager, }, nil } diff --git a/flow/connectors/core.go b/flow/connectors/core.go index 8a6bbbc0e..073d9d82b 100644 --- a/flow/connectors/core.go +++ b/flow/connectors/core.go @@ -23,7 +23,7 @@ import ( connsqlserver "github.com/PeerDB-io/peer-flow/connectors/sqlserver" "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" - "github.com/PeerDB-io/peer-flow/otel_metrics/peerdb_gauges" + "github.com/PeerDB-io/peer-flow/otel_metrics" "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" ) @@ -85,7 +85,7 @@ type CDCPullConnectorCore interface { alerter *alerting.Alerter, catalogPool *pgxpool.Pool, alertKeys *alerting.AlertKeys, - slotMetricGauges peerdb_gauges.SlotMetricGauges, + slotMetricGauges otel_metrics.SlotMetricGauges, ) error // GetSlotInfo returns the WAL (or equivalent) info of a slot for the connector. @@ -102,7 +102,12 @@ type CDCPullConnector interface { CDCPullConnectorCore // This method should be idempotent, and should be able to be called multiple times with the same request. 
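The `GetOrInitInt64Gauge`/`GetOrInitFloat64Gauge` calls above replace the old free functions with methods on `OtelManager`, which memoizes instruments by name in the `Float64GaugesCache`/`Int64GaugesCache` maps so repeated activity runs reuse a single instrument. A minimal standalone sketch of that get-or-init pattern (hypothetical types, not PeerDB's actual code — in the real manager the `create` step wraps `meter.Int64Gauge`/`meter.Int64Counter` from the OpenTelemetry SDK):

```go
package main

import (
	"fmt"
	"sync"
)

// instrumentCache memoizes metric instruments by name: the factory runs
// once per name, later lookups return the cached instrument. The generic T
// stands in for metric.Int64Counter etc. so this sketch compiles without
// the OpenTelemetry SDK.
type instrumentCache[T any] struct {
	mu    sync.Mutex
	cache map[string]T
}

func (c *instrumentCache[T]) getOrInit(name string, create func() (T, error)) (T, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if inst, ok := c.cache[name]; ok {
		return inst, nil
	}
	inst, err := create()
	if err != nil {
		var zero T
		return zero, err
	}
	if c.cache == nil {
		c.cache = make(map[string]T)
	}
	c.cache[name] = inst
	return inst, nil
}

// demo asks for the same metric name three times and reports how often the
// factory actually ran.
func demo() int {
	created := 0
	c := &instrumentCache[string]{}
	for i := 0; i < 3; i++ {
		if _, err := c.getOrInit("peerdb_fetched_bytes", func() (string, error) {
			created++
			return "counter", nil
		}); err != nil {
			panic(err)
		}
	}
	return created
}

func main() {
	fmt.Println(demo()) // the instrument is created once and then reused
}
```

Keeping the mutex inside the cache also makes the get-or-init safe when several activities request the same instrument concurrently.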
- PullRecords(ctx context.Context, catalogPool *pgxpool.Pool, req *model.PullRecordsRequest[model.RecordItems]) error + PullRecords( + ctx context.Context, + catalogPool *pgxpool.Pool, + otelManager *otel_metrics.OtelManager, + req *model.PullRecordsRequest[model.RecordItems], + ) error } type CDCPullPgConnector interface { @@ -110,7 +115,12 @@ type CDCPullPgConnector interface { // This method should be idempotent, and should be able to be called multiple times with the same request. // It's signature, aside from type parameter, should match CDCPullConnector.PullRecords. - PullPg(ctx context.Context, catalogPool *pgxpool.Pool, req *model.PullRecordsRequest[model.PgItems]) error + PullPg( + ctx context.Context, + catalogPool *pgxpool.Pool, + otelManager *otel_metrics.OtelManager, + req *model.PullRecordsRequest[model.PgItems], + ) error } type NormalizedTablesConnector interface { diff --git a/flow/connectors/postgres/cdc.go b/flow/connectors/postgres/cdc.go index 9f4bd0b96..6ff2f335e 100644 --- a/flow/connectors/postgres/cdc.go +++ b/flow/connectors/postgres/cdc.go @@ -14,6 +14,8 @@ import ( "github.com/jackc/pgx/v5/pgtype" "github.com/jackc/pgx/v5/pgxpool" "github.com/lib/pq/oid" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/metric" "go.temporal.io/sdk/activity" connmetadata "github.com/PeerDB-io/peer-flow/connectors/external_metadata" @@ -22,6 +24,8 @@ import ( "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" "github.com/PeerDB-io/peer-flow/model/qvalue" + "github.com/PeerDB-io/peer-flow/otel_metrics" + "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" ) @@ -41,12 +45,14 @@ type PostgresCDCSource struct { // for storing schema delta audit logs to catalog catalogPool *pgxpool.Pool + otelManager *otel_metrics.OtelManager hushWarnUnhandledMessageType map[pglogrepl.MessageType]struct{} flowJobName string } type PostgresCDCConfig struct { CatalogPool *pgxpool.Pool + 
OtelManager *otel_metrics.OtelManager SrcTableIDNameMapping map[uint32]string TableNameMapping map[string]model.NameAndExclude TableNameSchemaMapping map[string]*protos.TableSchema @@ -71,6 +77,7 @@ func (c *PostgresConnector) NewPostgresCDCSource(cdcConfig *PostgresCDCConfig) * commitLock: nil, childToParentRelIDMapping: cdcConfig.ChildToParentRelIDMap, catalogPool: cdcConfig.CatalogPool, + otelManager: cdcConfig.OtelManager, flowJobName: cdcConfig.FlowJobName, hushWarnUnhandledMessageType: make(map[pglogrepl.MessageType]struct{}), } @@ -331,8 +338,7 @@ func PullCdcRecords[Items model.Items]( records.SignalAsEmpty() } logger.Info(fmt.Sprintf("[finished] PullRecords streamed %d records", cdcRecordsStorage.Len())) - err := cdcRecordsStorage.Close() - if err != nil { + if err := cdcRecordsStorage.Close(); err != nil { logger.Warn("failed to clean up records storage", slog.Any("error", err)) } }() @@ -361,6 +367,16 @@ func PullCdcRecords[Items model.Items]( return nil } + var fetchedBytesCounter metric.Int64Counter + if p.otelManager != nil { + var err error + fetchedBytesCounter, err = p.otelManager.GetOrInitInt64Counter(otel_metrics.BuildMetricName(otel_metrics.FetchedBytesCounterName), + metric.WithUnit("By"), metric.WithDescription("Bytes received of CopyData over replication slot")) + if err != nil { + return fmt.Errorf("could not get FetchedBytesCounter: %w", err) + } + } + pkmRequiresResponse := false waitingForCommit := false @@ -439,8 +455,7 @@ func PullCdcRecords[Items model.Items]( }() cancel() - ctxErr := ctx.Err() - if ctxErr != nil { + if ctxErr := ctx.Err(); ctxErr != nil { return fmt.Errorf("consumeStream preempted: %w", ctxErr) } @@ -463,6 +478,13 @@ func PullCdcRecords[Items model.Items]( continue } + if fetchedBytesCounter != nil { + fetchedBytesCounter.Add(ctx, int64(len(msg.Data)), metric.WithAttributeSet(attribute.NewSet( + attribute.String(otel_metrics.FlowNameKey, req.FlowJobName), + attribute.String(otel_metrics.DeploymentUidKey, 
peerdbenv.PeerDBDeploymentUID()), + ))) + } + switch msg.Data[0] { case pglogrepl.PrimaryKeepaliveMessageByteID: pkm, err := pglogrepl.ParsePrimaryKeepaliveMessage(msg.Data[1:]) diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index e685b5c12..4845946aa 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -28,7 +28,6 @@ import ( "github.com/PeerDB-io/peer-flow/model" "github.com/PeerDB-io/peer-flow/model/qvalue" "github.com/PeerDB-io/peer-flow/otel_metrics" - "github.com/PeerDB-io/peer-flow/otel_metrics/peerdb_gauges" "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" ) @@ -330,17 +329,19 @@ func (c *PostgresConnector) SetLastOffset(ctx context.Context, jobName string, l func (c *PostgresConnector) PullRecords( ctx context.Context, catalogPool *pgxpool.Pool, + otelManager *otel_metrics.OtelManager, req *model.PullRecordsRequest[model.RecordItems], ) error { - return pullCore(ctx, c, catalogPool, req, qProcessor{}) + return pullCore(ctx, c, catalogPool, otelManager, req, qProcessor{}) } func (c *PostgresConnector) PullPg( ctx context.Context, catalogPool *pgxpool.Pool, + otelManager *otel_metrics.OtelManager, req *model.PullRecordsRequest[model.PgItems], ) error { - return pullCore(ctx, c, catalogPool, req, pgProcessor{}) + return pullCore(ctx, c, catalogPool, otelManager, req, pgProcessor{}) } // PullRecords pulls records from the source. 
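The new `fetchedBytesCounter` in `cdc.go` adds `len(msg.Data)` for every CopyData message received over the replication slot, tagged with the flow name and deployment UID. A toy sketch of those accumulation semantics (a plain map stands in for the OpenTelemetry `Int64Counter`, and the message payloads are illustrative):

```go
package main

import "fmt"

// attrKey mirrors the attribute set attached when recording fetched bytes
// (flow name + deployment UID).
type attrKey struct {
	flowName      string
	deploymentUID string
}

// byteCounter plays the role of metric.Int64Counter: monotonic
// accumulation per attribute set.
type byteCounter map[attrKey]int64

// add corresponds to fetchedBytesCounter.Add(ctx, int64(len(msg.Data)), ...).
func (c byteCounter) add(k attrKey, n int64) { c[k] += n }

// totalFetched feeds two fake replication messages through the counter and
// returns the accumulated byte count for one flow.
func totalFetched() int64 {
	c := byteCounter{}
	k := attrKey{flowName: "my_mirror", deploymentUID: "dep-1"}
	for _, msg := range [][]byte{
		[]byte("keepalive"),        // 9 bytes
		[]byte("xlogdata-payload"), // 16 bytes
	} {
		c.add(k, int64(len(msg)))
	}
	return c[k]
}

func main() {
	fmt.Println(totalFetched()) // prints 25
}
```

Because every wire message is counted before it is parsed, the metric captures total CDC payload volume even for keepalives and messages that produce no records.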
@@ -348,6 +349,7 @@ func pullCore[Items model.Items]( ctx context.Context, c *PostgresConnector, catalogPool *pgxpool.Pool, + otelManager *otel_metrics.OtelManager, req *model.PullRecordsRequest[Items], processor replProcessor[Items], ) error { @@ -414,6 +416,7 @@ func pullCore[Items model.Items]( cdc := c.NewPostgresCDCSource(&PostgresCDCConfig{ CatalogPool: catalogPool, + OtelManager: otelManager, SrcTableIDNameMapping: req.SrcTableIDNameMapping, TableNameMapping: req.TableNameMapping, TableNameSchemaMapping: req.TableNameSchemaMapping, @@ -435,8 +438,7 @@ func pullCore[Items model.Items]( return fmt.Errorf("failed to get current LSN: %w", err) } - err = monitoring.UpdateLatestLSNAtSourceForCDCFlow(ctx, catalogPool, req.FlowJobName, int64(latestLSN)) - if err != nil { + if err := monitoring.UpdateLatestLSNAtSourceForCDCFlow(ctx, catalogPool, req.FlowJobName, int64(latestLSN)); err != nil { c.logger.Error("error updating latest LSN at source for CDC flow", slog.Any("error", err)) return fmt.Errorf("failed to update latest LSN at source for CDC flow: %w", err) } @@ -1197,7 +1199,7 @@ func (c *PostgresConnector) HandleSlotInfo( alerter *alerting.Alerter, catalogPool *pgxpool.Pool, alertKeys *alerting.AlertKeys, - slotMetricGauges peerdb_gauges.SlotMetricGauges, + slotMetricGauges otel_metrics.SlotMetricGauges, ) error { logger := shared.LoggerFromCtx(ctx) diff --git a/flow/connectors/postgres/qrep_query_executor.go b/flow/connectors/postgres/qrep_query_executor.go index f3c915c13..339c54a63 100644 --- a/flow/connectors/postgres/qrep_query_executor.go +++ b/flow/connectors/postgres/qrep_query_executor.go @@ -212,8 +212,8 @@ func (qe *QRepQueryExecutor) processFetchedRows( if err := rows.Err(); err != nil { stream.Close(err) qe.logger.Error("[pg_query_executor] row iteration failed", - slog.String("query", query), slog.Any("error", rows.Err())) - return 0, fmt.Errorf("[pg_query_executor] row iteration failed '%s': %w", query, rows.Err()) + slog.String("query", query), 
slog.Any("error", err)) + return 0, fmt.Errorf("[pg_query_executor] row iteration failed '%s': %w", query, err) } return numRows, nil diff --git a/flow/main.go b/flow/main.go index 9d499e957..613c42634 100644 --- a/flow/main.go +++ b/flow/main.go @@ -144,7 +144,7 @@ func main() { if err != nil { return err } - defer res.Cleanup() + defer res.Close() return res.Worker.Run(worker.InterruptCh()) }, Flags: []cli.Flag{ diff --git a/flow/model/qrecord_stream.go b/flow/model/qrecord_stream.go index 3bb2d1f24..054d6a42b 100644 --- a/flow/model/qrecord_stream.go +++ b/flow/model/qrecord_stream.go @@ -30,8 +30,8 @@ func (s *QRecordStream) Schema() qvalue.QRecordSchema { func (s *QRecordStream) SetSchema(schema qvalue.QRecordSchema) { if !s.schemaSet { s.schema = schema - close(s.schemaLatch) s.schemaSet = true + close(s.schemaLatch) } } diff --git a/flow/otel_metrics/env.go b/flow/otel_metrics/env.go deleted file mode 100644 index 81b5d0c3e..000000000 --- a/flow/otel_metrics/env.go +++ /dev/null @@ -1,11 +0,0 @@ -package otel_metrics - -import "github.com/PeerDB-io/peer-flow/peerdbenv" - -func GetPeerDBOtelMetricsNamespace() string { - return peerdbenv.GetEnvString("PEERDB_OTEL_METRICS_NAMESPACE", "") -} - -func GetPeerDBOtelTemporalMetricsExportListEnv() string { - return peerdbenv.GetEnvString("PEERDB_OTEL_TEMPORAL_METRICS_EXPORT_LIST", "") -} diff --git a/flow/otel_metrics/otel_manager.go b/flow/otel_metrics/otel_manager.go index c59adecd4..099c733d2 100644 --- a/flow/otel_metrics/otel_manager.go +++ b/flow/otel_metrics/otel_manager.go @@ -17,46 +17,90 @@ import ( "github.com/PeerDB-io/peer-flow/peerdbenv" ) +const ( + SlotLagGaugeName string = "cdc_slot_lag" + OpenConnectionsGaugeName string = "open_connections" + OpenReplicationConnectionsGaugeName string = "open_replication_connections" + IntervalSinceLastNormalizeGaugeName string = "interval_since_last_normalize" + FetchedBytesCounterName string = "fetched_bytes" +) + +type SlotMetricGauges struct { + SlotLagGauge 
metric.Float64Gauge + OpenConnectionsGauge metric.Int64Gauge + OpenReplicationConnectionsGauge metric.Int64Gauge + IntervalSinceLastNormalizeGauge metric.Float64Gauge + FetchedBytesCounter metric.Int64Counter +} + +func BuildMetricName(baseName string) string { + return peerdbenv.GetPeerDBOtelMetricsNamespace() + baseName +} + type OtelManager struct { MetricsProvider *sdkmetric.MeterProvider Meter metric.Meter Float64GaugesCache map[string]metric.Float64Gauge Int64GaugesCache map[string]metric.Int64Gauge + Int64CountersCache map[string]metric.Int64Counter +} + +func (om *OtelManager) Close(ctx context.Context) error { + return om.MetricsProvider.Shutdown(ctx) +} + +func getOrInitMetric[M any, O any]( + cons func(metric.Meter, string, ...O) (M, error), + meter metric.Meter, + cache map[string]M, + name string, + opts ...O, +) (M, error) { + gauge, ok := cache[name] + if !ok { + var err error + gauge, err = cons(meter, name, opts...) + if err != nil { + var none M + return none, err + } + cache[name] = gauge + } + return gauge, nil +} + +func (om *OtelManager) GetOrInitInt64Gauge(name string, opts ...metric.Int64GaugeOption) (metric.Int64Gauge, error) { + return getOrInitMetric(metric.Meter.Int64Gauge, om.Meter, om.Int64GaugesCache, name, opts...) +} + +func (om *OtelManager) GetOrInitFloat64Gauge(name string, opts ...metric.Float64GaugeOption) (metric.Float64Gauge, error) { + return getOrInitMetric(metric.Meter.Float64Gauge, om.Meter, om.Float64GaugesCache, name, opts...) +} + +func (om *OtelManager) GetOrInitInt64Counter(name string, opts ...metric.Int64CounterOption) (metric.Int64Counter, error) { + return getOrInitMetric(metric.Meter.Int64Counter, om.Meter, om.Int64CountersCache, name, opts...) } // newOtelResource returns a resource describing this application. 
func newOtelResource(otelServiceName string, attrs ...attribute.KeyValue) (*resource.Resource, error) { - allAttrs := []attribute.KeyValue{ - semconv.ServiceNameKey.String(otelServiceName), - } - allAttrs = append(allAttrs, attrs...) - r, err := resource.Merge( + allAttrs := append([]attribute.KeyValue{semconv.ServiceNameKey.String(otelServiceName)}, attrs...) + return resource.Merge( resource.Default(), resource.NewWithAttributes( semconv.SchemaURL, allAttrs..., ), ) - - return r, err -} - -func setupHttpOtelMetricsExporter() (sdkmetric.Exporter, error) { - return otlpmetrichttp.New(context.Background()) -} - -func setupGrpcOtelMetricsExporter() (sdkmetric.Exporter, error) { - return otlpmetricgrpc.New(context.Background()) } func temporalMetricsFilteringView() sdkmetric.View { - exportListString := GetPeerDBOtelTemporalMetricsExportListEnv() + exportListString := peerdbenv.GetPeerDBOtelTemporalMetricsExportListEnv() slog.Info("Found export list for temporal metrics", slog.String("exportList", exportListString)) // Special case for exporting all metrics if exportListString == "__ALL__" { return func(instrument sdkmetric.Instrument) (sdkmetric.Stream, bool) { stream := sdkmetric.Stream{ - Name: GetPeerDBOtelMetricsNamespace() + "temporal." + instrument.Name, + Name: BuildMetricName("temporal." + instrument.Name), Description: instrument.Description, Unit: instrument.Unit, } @@ -68,7 +112,7 @@ func temporalMetricsFilteringView() sdkmetric.View { if len(exportList) == 0 { return func(instrument sdkmetric.Instrument) (sdkmetric.Stream, bool) { return sdkmetric.Stream{ - Name: GetPeerDBOtelMetricsNamespace() + "temporal." + instrument.Name, + Name: BuildMetricName("temporal." 
+ instrument.Name), Description: instrument.Description, Unit: instrument.Unit, Aggregation: sdkmetric.AggregationDrop{}, @@ -84,7 +128,7 @@ func temporalMetricsFilteringView() sdkmetric.View { } return func(instrument sdkmetric.Instrument) (sdkmetric.Stream, bool) { stream := sdkmetric.Stream{ - Name: GetPeerDBOtelMetricsNamespace() + "temporal." + instrument.Name, + Name: BuildMetricName("temporal." + instrument.Name), Description: instrument.Description, Unit: instrument.Unit, } @@ -95,16 +139,16 @@ func temporalMetricsFilteringView() sdkmetric.View { } } -func setupExporter() (sdkmetric.Exporter, error) { +func setupExporter(ctx context.Context) (sdkmetric.Exporter, error) { otlpMetricProtocol := peerdbenv.GetEnvString("OTEL_EXPORTER_OTLP_PROTOCOL", peerdbenv.GetEnvString("OTEL_EXPORTER_OTLP_METRICS_PROTOCOL", "http/protobuf")) var metricExporter sdkmetric.Exporter var err error switch otlpMetricProtocol { case "http/protobuf": - metricExporter, err = setupHttpOtelMetricsExporter() + metricExporter, err = otlpmetrichttp.New(ctx) case "grpc": - metricExporter, err = setupGrpcOtelMetricsExporter() + metricExporter, err = otlpmetricgrpc.New(ctx) default: return nil, fmt.Errorf("unsupported otel metric protocol: %s", otlpMetricProtocol) } @@ -114,8 +158,8 @@ func setupExporter() (sdkmetric.Exporter, error) { return metricExporter, err } -func setupMetricsProvider(otelResource *resource.Resource, views ...sdkmetric.View) (*sdkmetric.MeterProvider, error) { - metricExporter, err := setupExporter() +func setupMetricsProvider(ctx context.Context, otelResource *resource.Resource, views ...sdkmetric.View) (*sdkmetric.MeterProvider, error) { + metricExporter, err := setupExporter(ctx) if err != nil { return nil, err } @@ -133,7 +177,7 @@ func SetupPeerDBMetricsProvider(otelServiceName string) (*sdkmetric.MeterProvide if err != nil { return nil, fmt.Errorf("failed to create OpenTelemetry resource: %w", err) } - return setupMetricsProvider(otelResource) + return 
setupMetricsProvider(context.Background(), otelResource) } func SetupTemporalMetricsProvider(otelServiceName string) (*sdkmetric.MeterProvider, error) { @@ -141,5 +185,5 @@ func SetupTemporalMetricsProvider(otelServiceName string) (*sdkmetric.MeterProvi if err != nil { return nil, fmt.Errorf("failed to create OpenTelemetry resource: %w", err) } - return setupMetricsProvider(otelResource, temporalMetricsFilteringView()) + return setupMetricsProvider(context.Background(), otelResource, temporalMetricsFilteringView()) } diff --git a/flow/otel_metrics/peerdb_gauges/gauges.go b/flow/otel_metrics/peerdb_gauges/gauges.go deleted file mode 100644 index a3b7d5c3e..000000000 --- a/flow/otel_metrics/peerdb_gauges/gauges.go +++ /dev/null @@ -1,25 +0,0 @@ -package peerdb_gauges - -import ( - "go.opentelemetry.io/otel/metric" - - "github.com/PeerDB-io/peer-flow/otel_metrics" -) - -const ( - SlotLagGaugeName string = "cdc_slot_lag" - OpenConnectionsGaugeName string = "open_connections" - OpenReplicationConnectionsGaugeName string = "open_replication_connections" - IntervalSinceLastNormalizeGaugeName string = "interval_since_last_normalize" -) - -type SlotMetricGauges struct { - SlotLagGauge metric.Float64Gauge - OpenConnectionsGauge metric.Int64Gauge - OpenReplicationConnectionsGauge metric.Int64Gauge - IntervalSinceLastNormalizeGauge metric.Float64Gauge -} - -func BuildGaugeName(baseGaugeName string) string { - return otel_metrics.GetPeerDBOtelMetricsNamespace() + baseGaugeName -} diff --git a/flow/otel_metrics/sync_gauges.go b/flow/otel_metrics/sync_gauges.go deleted file mode 100644 index e9da02c87..000000000 --- a/flow/otel_metrics/sync_gauges.go +++ /dev/null @@ -1,33 +0,0 @@ -package otel_metrics - -import ( - "go.opentelemetry.io/otel/metric" -) - -func GetOrInitInt64SyncGauge(meter metric.Meter, cache map[string]metric.Int64Gauge, name string, opts ...metric.Int64GaugeOption, -) (metric.Int64Gauge, error) { - gauge, ok := cache[name] - if !ok { - var err error - gauge, 
err = meter.Int64Gauge(name, opts...) - if err != nil { - return nil, err - } - cache[name] = gauge - } - return gauge, nil -} - -func GetOrInitFloat64SyncGauge(meter metric.Meter, cache map[string]metric.Float64Gauge, name string, opts ...metric.Float64GaugeOption, -) (metric.Float64Gauge, error) { - gauge, ok := cache[name] - if !ok { - var err error - gauge, err = meter.Float64Gauge(name, opts...) - if err != nil { - return nil, err - } - cache[name] = gauge - } - return gauge, nil -} diff --git a/flow/peerdbenv/otel.go b/flow/peerdbenv/otel.go new file mode 100644 index 000000000..d7f3cb68a --- /dev/null +++ b/flow/peerdbenv/otel.go @@ -0,0 +1,9 @@ +package peerdbenv + +func GetPeerDBOtelMetricsNamespace() string { + return GetEnvString("PEERDB_OTEL_METRICS_NAMESPACE", "") +} + +func GetPeerDBOtelTemporalMetricsExportListEnv() string { + return GetEnvString("PEERDB_OTEL_TEMPORAL_METRICS_EXPORT_LIST", "") +} From 83350b5205bbdf9bafc64c70948c90be182af770 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Tue, 26 Nov 2024 15:50:20 +0000 Subject: [PATCH 43/61] fix(deps): update cargo dependencies (#2293) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | [rsa](https://redirect.github.com/RustCrypto/RSA) | dependencies | patch | `0.9.6` -> `0.9.7` | | [rustls](https://redirect.github.com/rustls/rustls) | dependencies | patch | `0.23.17` -> `0.23.18` | --- ### Release Notes

RustCrypto/RSA (rsa)

### [`v0.9.7`](https://redirect.github.com/RustCrypto/RSA/compare/v0.9.6...v0.9.7)

[Compare Source](https://redirect.github.com/RustCrypto/RSA/compare/v0.9.6...v0.9.7)
--- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- nexus/Cargo.lock | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/nexus/Cargo.lock b/nexus/Cargo.lock index f827849ff..15a8c3274 100644 --- a/nexus/Cargo.lock +++ b/nexus/Cargo.lock @@ -1790,7 +1790,7 @@ dependencies = [ "http 1.1.0", "hyper 1.5.0", "hyper-util", - "rustls 0.23.17", + "rustls 0.23.18", "rustls-native-certs 0.8.0", "rustls-pki-types", "tokio", @@ -3044,7 +3044,7 @@ dependencies = [ "anyhow", "futures-util", "pt", - "rustls 0.23.17", + "rustls 0.23.18", "ssh2", "tokio", "tokio-postgres", @@ -3270,7 +3270,7 @@ dependencies = [ "quinn-proto", "quinn-udp", "rustc-hash 2.0.0", - "rustls 0.23.17", + "rustls 0.23.18", "socket2", "thiserror", "tokio", @@ -3287,7 +3287,7 @@ dependencies = [ "rand", "ring", "rustc-hash 2.0.0", - "rustls 0.23.17", + "rustls 0.23.18", "slab", "thiserror", "tinyvec", @@ -3305,7 +3305,7 @@ dependencies = [ "once_cell", "socket2", "tracing", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -3518,7 +3518,7 @@ dependencies = [ "percent-encoding", "pin-project-lite", "quinn", - "rustls 0.23.17", + "rustls 0.23.18", "rustls-pemfile 2.2.0", "rustls-pki-types", "serde", @@ -3583,9 +3583,9 @@ dependencies = [ [[package]] name = "rsa" -version = "0.9.6" +version = 
"0.9.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5d0e5124fcb30e76a7e79bfee683a2746db83784b86289f6251b54b7950a0dfc" +checksum = "47c75d7c5c6b673e58bf54d8544a9f432e3a925b0e80f7cd3602ab5c50c55519" dependencies = [ "const-oid", "digest", @@ -3686,9 +3686,9 @@ dependencies = [ [[package]] name = "rustls" -version = "0.23.17" +version = "0.23.18" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7f1a745511c54ba6d4465e8d5dfbd81b45791756de28d4981af70d6dca128f1e" +checksum = "9c9cc1d47e243d655ace55ed38201c19ae02c148ae56412ab8750e8f0166ab7f" dependencies = [ "log", "once_cell", @@ -4411,7 +4411,7 @@ checksum = "27d684bad428a0f2481f42241f821db42c54e2dc81d8c00db8536c506b0a0144" dependencies = [ "const-oid", "ring", - "rustls 0.23.17", + "rustls 0.23.18", "tokio", "tokio-postgres", "tokio-rustls 0.26.0", @@ -4445,7 +4445,7 @@ version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" dependencies = [ - "rustls 0.23.17", + "rustls 0.23.18", "rustls-pki-types", "tokio", ] @@ -4790,7 +4790,7 @@ dependencies = [ "flate2", "log", "once_cell", - "rustls 0.23.17", + "rustls 0.23.18", "rustls-pki-types", "serde", "serde_json", @@ -5358,7 +5358,7 @@ dependencies = [ "hyper-util", "log", "percent-encoding", - "rustls 0.23.17", + "rustls 0.23.18", "rustls-pemfile 2.2.0", "seahash", "serde", From 2d9dd21e036c7ef7f03384cfc0a563b80330ecdf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philip=20Dub=C3=A9?= Date: Tue, 26 Nov 2024 15:59:16 +0000 Subject: [PATCH 44/61] lift DeploymentUidKey attribute to newOtelResource (#2296) --- flow/connectors/postgres/cdc.go | 2 -- flow/connectors/postgres/postgres.go | 6 +----- flow/otel_metrics/otel_manager.go | 7 +++++-- 3 files changed, 6 insertions(+), 9 deletions(-) diff --git a/flow/connectors/postgres/cdc.go b/flow/connectors/postgres/cdc.go index 6ff2f335e..27ae89904 100644 --- 
a/flow/connectors/postgres/cdc.go +++ b/flow/connectors/postgres/cdc.go @@ -25,7 +25,6 @@ import ( "github.com/PeerDB-io/peer-flow/model" "github.com/PeerDB-io/peer-flow/model/qvalue" "github.com/PeerDB-io/peer-flow/otel_metrics" - "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" ) @@ -481,7 +480,6 @@ func PullCdcRecords[Items model.Items]( if fetchedBytesCounter != nil { fetchedBytesCounter.Add(ctx, int64(len(msg.Data)), metric.WithAttributeSet(attribute.NewSet( attribute.String(otel_metrics.FlowNameKey, req.FlowJobName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), ))) } diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index 4845946aa..14b827cc8 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -1223,8 +1223,7 @@ func (c *PostgresConnector) HandleSlotInfo( attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), attribute.String(otel_metrics.SlotNameKey, alertKeys.SlotName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID())), - )) + ))) } else { logger.Warn("warning: slotMetricGauges.SlotLagGauge is nil") } @@ -1241,7 +1240,6 @@ func (c *PostgresConnector) HandleSlotInfo( slotMetricGauges.OpenConnectionsGauge.Record(ctx, res.CurrentOpenConnections, metric.WithAttributeSet(attribute.NewSet( attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), ))) } else { logger.Warn("warning: slotMetricGauges.OpenConnectionsGauge is nil") @@ -1257,7 +1255,6 @@ func (c *PostgresConnector) HandleSlotInfo( metric.WithAttributeSet(attribute.NewSet( attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), attribute.String(otel_metrics.PeerNameKey, 
alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), )), ) } else { @@ -1281,7 +1278,6 @@ func (c *PostgresConnector) HandleSlotInfo( metric.WithAttributeSet(attribute.NewSet( attribute.String(otel_metrics.FlowNameKey, alertKeys.FlowName), attribute.String(otel_metrics.PeerNameKey, alertKeys.PeerName), - attribute.String(otel_metrics.DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), )), ) } else { diff --git a/flow/otel_metrics/otel_manager.go b/flow/otel_metrics/otel_manager.go index 099c733d2..6dd57ebe2 100644 --- a/flow/otel_metrics/otel_manager.go +++ b/flow/otel_metrics/otel_manager.go @@ -83,7 +83,10 @@ func (om *OtelManager) GetOrInitInt64Counter(name string, opts ...metric.Int64Co // newOtelResource returns a resource describing this application. func newOtelResource(otelServiceName string, attrs ...attribute.KeyValue) (*resource.Resource, error) { - allAttrs := append([]attribute.KeyValue{semconv.ServiceNameKey.String(otelServiceName)}, attrs...) + allAttrs := append([]attribute.KeyValue{ + semconv.ServiceNameKey.String(otelServiceName), + attribute.String(DeploymentUidKey, peerdbenv.PeerDBDeploymentUID()), + }, attrs...) 
return resource.Merge( resource.Default(), resource.NewWithAttributes( @@ -181,7 +184,7 @@ func SetupPeerDBMetricsProvider(otelServiceName string) (*sdkmetric.MeterProvide } func SetupTemporalMetricsProvider(otelServiceName string) (*sdkmetric.MeterProvider, error) { - otelResource, err := newOtelResource(otelServiceName, attribute.String(DeploymentUidKey, peerdbenv.PeerDBDeploymentUID())) + otelResource, err := newOtelResource(otelServiceName) if err != nil { return nil, fmt.Errorf("failed to create OpenTelemetry resource: %w", err) } From 12bd167187d8ecfb371d43d4f93d2f27d6aed78d Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Wed, 27 Nov 2024 00:37:13 +0530 Subject: [PATCH 45/61] chore(deps): update github-actions dependencies (#2291) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | elasticsearch | service | patch | `8.16.0` -> `8.16.1` | | [github/codeql-action](https://redirect.github.com/github/codeql-action) | action | digest | `ea9e4e3` -> `f09c1c0` | --- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). 
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- .github/workflows/codeql-analysis.yml | 4 ++-- .github/workflows/flow.yml | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml index 303066f11..52c6d705f 100644 --- a/.github/workflows/codeql-analysis.yml +++ b/.github/workflows/codeql-analysis.yml @@ -47,12 +47,12 @@ jobs: # Initializes the CodeQL tools for scanning. - name: Initialize CodeQL - uses: github/codeql-action/init@ea9e4e37992a54ee68a9622e985e60c8e8f12d9f # v3 + uses: github/codeql-action/init@f09c1c0a94de965c15400f5634aa42fac8fb8f88 # v3 with: languages: ${{ matrix.language }} build-mode: ${{ matrix.build-mode }} - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@ea9e4e37992a54ee68a9622e985e60c8e8f12d9f # v3 + uses: github/codeql-action/analyze@f09c1c0a94de965c15400f5634aa42fac8fb8f88 # v3 with: category: "/language:${{matrix.language}}" diff --git a/.github/workflows/flow.yml b/.github/workflows/flow.yml index e794512a0..b7c225614 100644 --- a/.github/workflows/flow.yml +++ b/.github/workflows/flow.yml @@ -25,7 +25,7 @@ jobs: POSTGRES_DB: postgres POSTGRES_INITDB_ARGS: --locale=C.UTF-8 elasticsearch: - image: elasticsearch:8.16.0@sha256:a411f7c17549209c5839b69f929de00bd91f1e2dcf08b65d5f41b122eae17f5e + image: elasticsearch:8.16.1@sha256:e5ee5f8dacbf18fa3ab59a098cc7d4d69f73e61637eb45f1c029e74b1cb200a1 ports: - 9200:9200 env: From fb852aa48d51d79062b35d89f6c804c8fafeb16c Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Wed, 27 Nov 2024 03:04:38 +0530 Subject: [PATCH 46/61] Register tstzrange in gob (#2298) Need to register tstzrange qvalue type which we newly introduced in gob --- flow/connectors/utils/cdc_store.go | 1 + 1 file changed, 1 insertion(+) diff --git a/flow/connectors/utils/cdc_store.go b/flow/connectors/utils/cdc_store.go index
e3aa9e449..d3e9d27f8 100644 --- a/flow/connectors/utils/cdc_store.go +++ b/flow/connectors/utils/cdc_store.go @@ -115,6 +115,7 @@ func init() { gob.Register(qvalue.QValueArrayTimestamp{}) gob.Register(qvalue.QValueArrayTimestampTZ{}) gob.Register(qvalue.QValueArrayBoolean{}) + gob.Register(qvalue.QValueTSTZRange{}) } func (c *cdcStore[T]) initPebbleDB() error { From c294c0baae9202125f3b9fb7181cbe166a83ea1b Mon Sep 17 00:00:00 2001 From: Kaushik Iska Date: Tue, 26 Nov 2024 16:12:45 -0600 Subject: [PATCH 47/61] Add tags for Flows along with routes to set and get (#2297) - add tags to telemetry to provide additional visibility - later these can be added to otel metrics etc --- flow/alerting/alerting.go | 16 +- flow/cmd/tags_handler.go | 84 ++++ flow/tags/tags.go | 24 + .../migrations/V41__add_metadata_tags.sql | 2 + protos/route.proto | 433 ++++++++++-------- 5 files changed, 363 insertions(+), 196 deletions(-) create mode 100644 flow/cmd/tags_handler.go create mode 100644 flow/tags/tags.go create mode 100644 nexus/catalog/migrations/V41__add_metadata_tags.sql diff --git a/flow/alerting/alerting.go b/flow/alerting/alerting.go index 4413b2efb..d1394561f 100644 --- a/flow/alerting/alerting.go +++ b/flow/alerting/alerting.go @@ -21,6 +21,7 @@ import ( "github.com/PeerDB-io/peer-flow/peerdbenv" "github.com/PeerDB-io/peer-flow/shared" "github.com/PeerDB-io/peer-flow/shared/telemetry" + "github.com/PeerDB-io/peer-flow/tags" ) // alerting service, no cool name :( @@ -366,13 +367,24 @@ func (a *Alerter) sendTelemetryMessage( flowName string, more string, level telemetry.Level, - tags ...string, + additionalTags ...string, ) { + allTags := []string{flowName, peerdbenv.PeerDBDeploymentUID()} + allTags = append(allTags, additionalTags...) 
+ + if flowTags, err := tags.GetTags(ctx, a.CatalogPool, flowName); err != nil { + logger.Warn("failed to get flow tags", slog.Any("error", err)) + } else { + for key, value := range flowTags { + allTags = append(allTags, fmt.Sprintf("%s:%s", key, value)) + } + } + details := fmt.Sprintf("[%s] %s", flowName, more) attributes := telemetry.Attributes{ Level: level, DeploymentUID: peerdbenv.PeerDBDeploymentUID(), - Tags: append([]string{flowName, peerdbenv.PeerDBDeploymentUID()}, tags...), + Tags: allTags, Type: flowName, } diff --git a/flow/cmd/tags_handler.go b/flow/cmd/tags_handler.go new file mode 100644 index 000000000..ddd362c3e --- /dev/null +++ b/flow/cmd/tags_handler.go @@ -0,0 +1,84 @@ +package cmd + +import ( + "context" + "fmt" + "log/slog" + + "github.com/PeerDB-io/peer-flow/generated/protos" + "github.com/PeerDB-io/peer-flow/tags" +) + +func (h *FlowRequestHandler) flowExists(ctx context.Context, flowName string) (bool, error) { + var exists bool + err := h.pool.QueryRow(ctx, "SELECT EXISTS(SELECT 1 FROM flows WHERE name = $1)", flowName).Scan(&exists) + if err != nil { + slog.Error("error checking if flow exists", slog.Any("error", err)) + return false, err + } + + slog.Info(fmt.Sprintf("flow %s exists: %t", flowName, exists)) + return exists, nil +} + +func (h *FlowRequestHandler) CreateOrReplaceFlowTags( + ctx context.Context, + in *protos.CreateOrReplaceFlowTagsRequest, +) (*protos.CreateOrReplaceFlowTagsResponse, error) { + flowName := in.FlowName + + exists, err := h.flowExists(ctx, flowName) + if err != nil { + return nil, err + } + + if !exists { + slog.Error("flow does not exist", slog.String("flow_name", flowName)) + return nil, fmt.Errorf("flow %s does not exist", flowName) + } + + tags := make(map[string]string, len(in.Tags)) + for _, tag := range in.Tags { + tags[tag.Key] = tag.Value + } + + _, err = h.pool.Exec(ctx, "UPDATE flows SET tags = $1 WHERE name = $2", tags, flowName) + if err != nil { + slog.Error("error updating flow tags", 
slog.Any("error", err)) + return nil, err + } + + return &protos.CreateOrReplaceFlowTagsResponse{ + FlowName: flowName, + }, nil +} + +func (h *FlowRequestHandler) GetFlowTags(ctx context.Context, in *protos.GetFlowTagsRequest) (*protos.GetFlowTagsResponse, error) { + flowName := in.FlowName + + exists, err := h.flowExists(ctx, flowName) + if err != nil { + return nil, err + } + + if !exists { + slog.Error("flow does not exist", slog.String("flow_name", flowName)) + return nil, fmt.Errorf("flow %s does not exist", flowName) + } + + tags, err := tags.GetTags(ctx, h.pool, flowName) + if err != nil { + slog.Error("error getting flow tags", slog.Any("error", err)) + return nil, err + } + + protosTags := make([]*protos.FlowTag, 0, len(tags)) + for key, value := range tags { + protosTags = append(protosTags, &protos.FlowTag{Key: key, Value: value}) + } + + return &protos.GetFlowTagsResponse{ + FlowName: flowName, + Tags: protosTags, + }, nil +} diff --git a/flow/tags/tags.go b/flow/tags/tags.go new file mode 100644 index 000000000..8adc9a437 --- /dev/null +++ b/flow/tags/tags.go @@ -0,0 +1,24 @@ +package tags + +import ( + "context" + "log/slog" + + "github.com/jackc/pgx/v5/pgxpool" +) + +func GetTags(ctx context.Context, catalogPool *pgxpool.Pool, flowName string) (map[string]string, error) { + var tags map[string]string + + err := catalogPool.QueryRow(ctx, "SELECT tags FROM flows WHERE name = $1", flowName).Scan(&tags) + if err != nil { + slog.Error("error getting flow tags", slog.Any("error", err)) + return nil, err + } + + if tags == nil { + tags = make(map[string]string) + } + + return tags, nil +} diff --git a/nexus/catalog/migrations/V41__add_metadata_tags.sql b/nexus/catalog/migrations/V41__add_metadata_tags.sql new file mode 100644 index 000000000..e3bfd2948 --- /dev/null +++ b/nexus/catalog/migrations/V41__add_metadata_tags.sql @@ -0,0 +1,2 @@ +ALTER TABLE flows +ADD COLUMN tags JSONB; diff --git a/protos/route.proto b/protos/route.proto index 
1c6d38ed6..3c902ba22 100644 --- a/protos/route.proto +++ b/protos/route.proto @@ -12,18 +12,14 @@ message CreateCDCFlowRequest { peerdb_flow.FlowConnectionConfigs connection_configs = 1; } -message CreateCDCFlowResponse { - string workflow_id = 1; -} +message CreateCDCFlowResponse { string workflow_id = 1; } message CreateQRepFlowRequest { peerdb_flow.QRepConfig qrep_config = 1; bool create_catalog_entry = 2; } -message CreateQRepFlowResponse { - string workflow_id = 1; -} +message CreateQRepFlowResponse { string workflow_id = 1; } message CreateCustomSyncRequest { string flow_job_name = 1; @@ -41,23 +37,13 @@ message AlertConfig { string service_config = 3; repeated string alert_for_mirrors = 4; } -message GetAlertConfigsRequest { -} +message GetAlertConfigsRequest {} -message PostAlertConfigRequest { - AlertConfig config = 1; -} -message DeleteAlertConfigRequest { - int32 id = 1; -} -message GetAlertConfigsResponse { - repeated AlertConfig configs = 1; -} -message PostAlertConfigResponse { - int32 id = 3; -} -message DeleteAlertConfigResponse { -} +message PostAlertConfigRequest { AlertConfig config = 1; } +message DeleteAlertConfigRequest { int32 id = 1; } +message GetAlertConfigsResponse { repeated AlertConfig configs = 1; } +message PostAlertConfigResponse { int32 id = 3; } +message DeleteAlertConfigResponse {} message DynamicSetting { string name = 1; @@ -68,17 +54,13 @@ message DynamicSetting { peerdb_flow.DynconfApplyMode apply_mode = 6; peerdb_flow.DynconfTarget target_for_setting = 7; } -message GetDynamicSettingsRequest { -} -message GetDynamicSettingsResponse { - repeated DynamicSetting settings = 1; -} +message GetDynamicSettingsRequest {} +message GetDynamicSettingsResponse { repeated DynamicSetting settings = 1; } message PostDynamicSettingRequest { string name = 1; optional string value = 2; } -message PostDynamicSettingResponse { -} +message PostDynamicSettingResponse {} message Script { int32 id = 1; @@ -86,39 +68,23 @@ message Script { string 
name = 3; string source = 4; } -message GetScriptsRequest { - int32 id = 1; -} -message GetScriptsResponse { - repeated Script scripts = 1; -} -message PostScriptRequest { - Script script = 1; -} -message PostScriptResponse { - int32 id = 1; -} -message DeleteScriptRequest { - int32 id = 1; -} -message DeleteScriptResponse { -} +message GetScriptsRequest { int32 id = 1; } +message GetScriptsResponse { repeated Script scripts = 1; } +message PostScriptRequest { Script script = 1; } +message PostScriptResponse { int32 id = 1; } +message DeleteScriptRequest { int32 id = 1; } +message DeleteScriptResponse {} -message ValidatePeerRequest { - peerdb_peers.Peer peer = 1; -} +message ValidatePeerRequest { peerdb_peers.Peer peer = 1; } message CreatePeerRequest { peerdb_peers.Peer peer = 1; bool allow_update = 2; } -message DropPeerRequest { - string peer_name = 1; -} +message DropPeerRequest { string peer_name = 1; } -message DropPeerResponse { -} +message DropPeerResponse {} enum ValidatePeerStatus { CREATION_UNKNOWN = 0; @@ -171,7 +137,6 @@ message CDCBatch { int64 batch_id = 6; } - message CDCRowCounts { int64 total_count = 1; int64 inserts_count = 2; @@ -182,21 +147,17 @@ message CDCTableRowCounts { string table_name = 1; CDCRowCounts counts = 2; } -message CDCTableTotalCountsRequest { - string flow_job_name = 1; -} + +message CDCTableTotalCountsRequest { string flow_job_name = 1; } + message CDCTableTotalCountsResponse { CDCRowCounts total_data = 1; repeated CDCTableRowCounts tables_data = 2; } -message PeerSchemasResponse { - repeated string schemas = 1; -} +message PeerSchemasResponse { repeated string schemas = 1; } -message PeerPublicationsResponse { - repeated string publication_names = 1; -} +message PeerPublicationsResponse { repeated string publication_names = 1; } message SchemaTablesRequest { string peer_name = 1; @@ -204,9 +165,7 @@ message SchemaTablesRequest { bool cdc_enabled = 3; } -message SchemaTablesResponse { - repeated TableResponse tables = 1; -} 
+message SchemaTablesResponse { repeated TableResponse tables = 1; } message TableResponse { string table_name = 1; @@ -214,9 +173,7 @@ message TableResponse { string table_size = 3; } -message AllTablesResponse { - repeated string tables = 1; -} +message AllTablesResponse { repeated string tables = 1; } message TableColumnsRequest { string peer_name = 1; @@ -224,17 +181,11 @@ message TableColumnsRequest { string table_name = 3; } -message TableColumnsResponse { - repeated string columns = 1; -} +message TableColumnsResponse { repeated string columns = 1; } -message PostgresPeerActivityInfoRequest { - string peer_name = 1; -} +message PostgresPeerActivityInfoRequest { string peer_name = 1; } -message PeerInfoRequest { - string peer_name = 1; -} +message PeerInfoRequest { string peer_name = 1; } message PeerInfoResponse { peerdb_peers.Peer peer = 1; @@ -245,8 +196,7 @@ message PeerListItem { string name = 1; peerdb_peers.DBType type = 2; } -message ListPeersRequest { -} +message ListPeersRequest {} message ListPeersResponse { repeated PeerListItem items = 1; repeated PeerListItem source_items = 2; @@ -275,9 +225,7 @@ message GetSlotLagHistoryRequest { string slot_name = 2; string time_since = 3; } -message GetSlotLagHistoryResponse { - repeated SlotLagPoint data = 1; -} +message GetSlotLagHistoryResponse { repeated SlotLagPoint data = 1; } message StatInfo { int64 pid = 1; @@ -289,13 +237,9 @@ message StatInfo { string state = 7; } -message PeerSlotResponse { - repeated SlotInfo slot_data = 1; -} +message PeerSlotResponse { repeated SlotInfo slot_data = 1; } -message PeerStatResponse { - repeated StatInfo stat_data = 1; -} +message PeerStatResponse { repeated StatInfo stat_data = 1; } message CloneTableSummary { string table_name = 1; @@ -311,9 +255,7 @@ message CloneTableSummary { string mirror_name = 11; } -message SnapshotStatus { - repeated CloneTableSummary clones = 1; -} +message SnapshotStatus { repeated CloneTableSummary clones = 1; } message CDCMirrorStatus 
{ peerdb_flow.FlowConnectionConfigs config = 1; @@ -334,9 +276,7 @@ message MirrorStatusResponse { google.protobuf.Timestamp created_at = 7; } -message InitialLoadSummaryRequest { - string parent_mirror_name = 1; -} +message InitialLoadSummaryRequest { string parent_mirror_name = 1; } message InitialLoadSummaryResponse { repeated CloneTableSummary tableSummaries = 1; @@ -366,9 +306,7 @@ message GraphResponseItem { double rows = 2; } -message GraphResponse { - repeated GraphResponseItem data = 1; -} +message GraphResponse { repeated GraphResponseItem data = 1; } message MirrorLog { string flow_name = 1; @@ -391,8 +329,7 @@ message ListMirrorLogsResponse { int32 page = 3; } -message ValidateCDCMirrorResponse{ -} +message ValidateCDCMirrorResponse {} message ListMirrorsItem { int64 id = 1; @@ -405,17 +342,11 @@ message ListMirrorsItem { double created_at = 8; bool is_cdc = 9; } -message ListMirrorsRequest { -} -message ListMirrorsResponse { - repeated ListMirrorsItem mirrors = 1; -} +message ListMirrorsRequest {} +message ListMirrorsResponse { repeated ListMirrorsItem mirrors = 1; } -message ListMirrorNamesRequest { -} -message ListMirrorNamesResponse { - repeated string names = 1; -} +message ListMirrorNamesRequest {} +message ListMirrorNamesResponse { repeated string names = 1; } message FlowStateChangeRequest { string flow_job_name = 1; @@ -424,25 +355,19 @@ message FlowStateChangeRequest { optional peerdb_flow.FlowConfigUpdate flow_config_update = 5; bool drop_mirror_stats = 6; } -message FlowStateChangeResponse { -} +message FlowStateChangeResponse {} -message PeerDBVersionRequest { -} -message PeerDBVersionResponse { - string version = 1; -} +message PeerDBVersionRequest {} +message PeerDBVersionResponse { string version = 1; } message ResyncMirrorRequest { string flow_job_name = 1; bool drop_stats = 2; } -message ResyncMirrorResponse { -} +message ResyncMirrorResponse {} -message PeerDBStateRequest { -} +message PeerDBStateRequest {} enum InstanceStatus { 
INSTANCE_STATUS_UNKNOWN = 0; @@ -450,12 +375,9 @@ enum InstanceStatus { INSTANCE_STATUS_MAINTENANCE = 3; } -message InstanceInfoRequest { -} +message InstanceInfoRequest {} -message InstanceInfoResponse { - InstanceStatus status = 1; -} +message InstanceInfoResponse { InstanceStatus status = 1; } enum MaintenanceStatus { MAINTENANCE_STATUS_UNKNOWN = 0; @@ -473,166 +395,289 @@ message MaintenanceResponse { string run_id = 2; } +message FlowTag { + string key = 1; + string value = 2; +} + +message CreateOrReplaceFlowTagsRequest { + string flow_name = 1; + repeated FlowTag tags = 2; +} + +message CreateOrReplaceFlowTagsResponse { string flow_name = 1; } + +message GetFlowTagsRequest { string flow_name = 1; } + +message GetFlowTagsResponse { + string flow_name = 1; + repeated FlowTag tags = 2; +} + service FlowService { rpc ValidatePeer(ValidatePeerRequest) returns (ValidatePeerResponse) { option (google.api.http) = { - post: "/v1/peers/validate", - body: "*" - }; + post : "/v1/peers/validate", + body : "*" + }; } - rpc ValidateCDCMirror(CreateCDCFlowRequest) returns (ValidateCDCMirrorResponse) { + rpc ValidateCDCMirror(CreateCDCFlowRequest) + returns (ValidateCDCMirrorResponse) { option (google.api.http) = { - post: "/v1/mirrors/cdc/validate", - body: "*" - }; + post : "/v1/mirrors/cdc/validate", + body : "*" + }; } rpc CreatePeer(CreatePeerRequest) returns (CreatePeerResponse) { option (google.api.http) = { - post: "/v1/peers/create", - body: "*" - }; + post : "/v1/peers/create", + body : "*" + }; } rpc DropPeer(DropPeerRequest) returns (DropPeerResponse) { option (google.api.http) = { - post: "/v1/peers/drop", - body: "*" + post : "/v1/peers/drop", + body : "*" }; } rpc CreateCDCFlow(CreateCDCFlowRequest) returns (CreateCDCFlowResponse) { option (google.api.http) = { - post: "/v1/flows/cdc/create", - body: "*" - }; + post : "/v1/flows/cdc/create", + body : "*" + }; } rpc CreateQRepFlow(CreateQRepFlowRequest) returns (CreateQRepFlowResponse) { option 
(google.api.http) = { - post: "/v1/flows/qrep/create", - body: "*" - }; + post : "/v1/flows/qrep/create", + body : "*" + }; } - rpc CustomSyncFlow(CreateCustomSyncRequest) returns (CreateCustomSyncResponse) { + rpc CustomSyncFlow(CreateCustomSyncRequest) + returns (CreateCustomSyncResponse) { option (google.api.http) = { - post: "/v1/flows/cdc/sync", - body: "*" - }; + post : "/v1/flows/cdc/sync", + body : "*" + }; } - rpc GetAlertConfigs(GetAlertConfigsRequest) returns (GetAlertConfigsResponse) { - option (google.api.http) = { get: "/v1/alerts/config" }; + rpc GetAlertConfigs(GetAlertConfigsRequest) + returns (GetAlertConfigsResponse) { + option (google.api.http) = { + get : "/v1/alerts/config" + }; } - rpc PostAlertConfig(PostAlertConfigRequest) returns (PostAlertConfigResponse) { - option (google.api.http) = { post: "/v1/alerts/config", body: "*" }; + rpc PostAlertConfig(PostAlertConfigRequest) + returns (PostAlertConfigResponse) { + option (google.api.http) = { + post : "/v1/alerts/config", + body : "*" + }; } - rpc DeleteAlertConfig(DeleteAlertConfigRequest) returns (DeleteAlertConfigResponse) { - option (google.api.http) = { delete: "/v1/alerts/config/{id}" }; + rpc DeleteAlertConfig(DeleteAlertConfigRequest) + returns (DeleteAlertConfigResponse) { + option (google.api.http) = { + delete : "/v1/alerts/config/{id}" + }; } - rpc GetDynamicSettings(GetDynamicSettingsRequest) returns (GetDynamicSettingsResponse) { - option (google.api.http) = { get: "/v1/dynamic_settings" }; + rpc GetDynamicSettings(GetDynamicSettingsRequest) + returns (GetDynamicSettingsResponse) { + option (google.api.http) = { + get : "/v1/dynamic_settings" + }; } - rpc PostDynamicSetting(PostDynamicSettingRequest) returns (PostDynamicSettingResponse) { - option (google.api.http) = { post: "/v1/dynamic_settings", body: "*" }; + rpc PostDynamicSetting(PostDynamicSettingRequest) + returns (PostDynamicSettingResponse) { + option (google.api.http) = { + post : "/v1/dynamic_settings", + body : "*" 
+ }; } rpc GetScripts(GetScriptsRequest) returns (GetScriptsResponse) { - option (google.api.http) = { get: "/v1/scripts/{id}" }; + option (google.api.http) = { + get : "/v1/scripts/{id}" + }; } rpc PostScript(PostScriptRequest) returns (PostScriptResponse) { - option (google.api.http) = { post: "/v1/scripts", body: "*" }; + option (google.api.http) = { + post : "/v1/scripts", + body : "*" + }; } rpc DeleteScript(DeleteScriptRequest) returns (DeleteScriptResponse) { - option (google.api.http) = { delete: "/v1/scripts/{id}" }; + option (google.api.http) = { + delete : "/v1/scripts/{id}" + }; } - rpc CDCTableTotalCounts(CDCTableTotalCountsRequest) returns (CDCTableTotalCountsResponse) { - option (google.api.http) = { get: "/v1/mirrors/cdc/table_total_counts/{flow_job_name}" }; + rpc CDCTableTotalCounts(CDCTableTotalCountsRequest) + returns (CDCTableTotalCountsResponse) { + option (google.api.http) = { + get : "/v1/mirrors/cdc/table_total_counts/{flow_job_name}" + }; } - rpc GetSchemas(PostgresPeerActivityInfoRequest) returns (PeerSchemasResponse) { - option (google.api.http) = { get: "/v1/peers/schemas" }; + rpc GetSchemas(PostgresPeerActivityInfoRequest) + returns (PeerSchemasResponse) { + option (google.api.http) = { + get : "/v1/peers/schemas" + }; } - rpc GetPublications(PostgresPeerActivityInfoRequest) returns (PeerPublicationsResponse) { - option (google.api.http) = { get: "/v1/peers/publications" }; + rpc GetPublications(PostgresPeerActivityInfoRequest) + returns (PeerPublicationsResponse) { + option (google.api.http) = { + get : "/v1/peers/publications" + }; } rpc GetTablesInSchema(SchemaTablesRequest) returns (SchemaTablesResponse) { - option (google.api.http) = { get: "/v1/peers/tables" }; + option (google.api.http) = { + get : "/v1/peers/tables" + }; } - rpc GetAllTables(PostgresPeerActivityInfoRequest) returns (AllTablesResponse) { - option (google.api.http) = { get: "/v1/peers/tables/all" }; + rpc GetAllTables(PostgresPeerActivityInfoRequest) + returns 
(AllTablesResponse) { + option (google.api.http) = { + get : "/v1/peers/tables/all" + }; } rpc GetColumns(TableColumnsRequest) returns (TableColumnsResponse) { - option (google.api.http) = { get: "/v1/peers/columns" }; + option (google.api.http) = { + get : "/v1/peers/columns" + }; } rpc GetSlotInfo(PostgresPeerActivityInfoRequest) returns (PeerSlotResponse) { - option (google.api.http) = { get: "/v1/peers/slots/{peer_name}" }; + option (google.api.http) = { + get : "/v1/peers/slots/{peer_name}" + }; } - rpc GetSlotLagHistory(GetSlotLagHistoryRequest) returns (GetSlotLagHistoryResponse) { - option (google.api.http) = { post: "/v1/peers/slots/lag_history", body: "*" }; + rpc GetSlotLagHistory(GetSlotLagHistoryRequest) + returns (GetSlotLagHistoryResponse) { + option (google.api.http) = { + post : "/v1/peers/slots/lag_history", + body : "*" + }; } rpc GetStatInfo(PostgresPeerActivityInfoRequest) returns (PeerStatResponse) { - option (google.api.http) = { get: "/v1/peers/stats/{peer_name}" }; + option (google.api.http) = { + get : "/v1/peers/stats/{peer_name}" + }; } rpc ListMirrorLogs(ListMirrorLogsRequest) returns (ListMirrorLogsResponse) { - option (google.api.http) = { post: "/v1/mirrors/logs", body: "*" }; + option (google.api.http) = { + post : "/v1/mirrors/logs", + body : "*" + }; } rpc ListMirrors(ListMirrorsRequest) returns (ListMirrorsResponse) { - option (google.api.http) = { get: "/v1/mirrors/list" }; + option (google.api.http) = { + get : "/v1/mirrors/list" + }; } - rpc ListMirrorNames(ListMirrorNamesRequest) returns (ListMirrorNamesResponse) { - option (google.api.http) = { get: "/v1/mirrors/names" }; + rpc ListMirrorNames(ListMirrorNamesRequest) + returns (ListMirrorNamesResponse) { + option (google.api.http) = { + get : "/v1/mirrors/names" + }; } - rpc FlowStateChange(FlowStateChangeRequest) returns (FlowStateChangeResponse) { - option (google.api.http) = { post: "/v1/mirrors/state_change", body: "*" }; + rpc FlowStateChange(FlowStateChangeRequest) + 
returns (FlowStateChangeResponse) { + option (google.api.http) = { + post : "/v1/mirrors/state_change", + body : "*" + }; } rpc MirrorStatus(MirrorStatusRequest) returns (MirrorStatusResponse) { - option (google.api.http) = { post: "/v1/mirrors/status", body: "*" }; + option (google.api.http) = { + post : "/v1/mirrors/status", + body : "*" + }; } rpc GetCDCBatches(GetCDCBatchesRequest) returns (GetCDCBatchesResponse) { - option (google.api.http) = { get: "/v1/mirrors/cdc/batches/{flow_job_name}" }; + option (google.api.http) = { + get : "/v1/mirrors/cdc/batches/{flow_job_name}" + }; } rpc CDCBatches(GetCDCBatchesRequest) returns (GetCDCBatchesResponse) { - option (google.api.http) = { post: "/v1/mirrors/cdc/batches", body: "*" }; + option (google.api.http) = { + post : "/v1/mirrors/cdc/batches", + body : "*" + }; } rpc CDCGraph(GraphRequest) returns (GraphResponse) { - option (google.api.http) = { post: "/v1/mirrors/cdc/graph", body: "*" }; + option (google.api.http) = { + post : "/v1/mirrors/cdc/graph", + body : "*" + }; } - rpc InitialLoadSummary(InitialLoadSummaryRequest) returns (InitialLoadSummaryResponse) { - option (google.api.http) = { get: "/v1/mirrors/cdc/initial_load/{parent_mirror_name}" }; + rpc InitialLoadSummary(InitialLoadSummaryRequest) + returns (InitialLoadSummaryResponse) { + option (google.api.http) = { + get : "/v1/mirrors/cdc/initial_load/{parent_mirror_name}" + }; } rpc GetPeerInfo(PeerInfoRequest) returns (PeerInfoResponse) { - option (google.api.http) = { get: "/v1/peers/info/{peer_name}" }; + option (google.api.http) = { + get : "/v1/peers/info/{peer_name}" + }; } rpc ListPeers(ListPeersRequest) returns (ListPeersResponse) { - option (google.api.http) = { get: "/v1/peers/list" }; + option (google.api.http) = { + get : "/v1/peers/list" + }; } rpc GetVersion(PeerDBVersionRequest) returns (PeerDBVersionResponse) { - option (google.api.http) = { get: "/v1/version" }; + option (google.api.http) = { + get : "/v1/version" + }; } rpc 
ResyncMirror(ResyncMirrorRequest) returns (ResyncMirrorResponse) { - option (google.api.http) = { post: "/v1/mirrors/resync", body: "*" }; + option (google.api.http) = { + post : "/v1/mirrors/resync", + body : "*" + }; } rpc GetInstanceInfo(InstanceInfoRequest) returns (InstanceInfoResponse) { - option (google.api.http) = { get: "/v1/instance/info" }; + option (google.api.http) = { + get : "/v1/instance/info" + }; } rpc Maintenance(MaintenanceRequest) returns (MaintenanceResponse) { - option (google.api.http) = { post: "/v1/instance/maintenance", body: "*" }; + option (google.api.http) = { + post : "/v1/instance/maintenance", + body : "*" + }; + } + + rpc CreateOrReplaceFlowTags(CreateOrReplaceFlowTagsRequest) + returns (CreateOrReplaceFlowTagsResponse) { + option (google.api.http) = { + post : "/v1/flows/tags", + body : "*" + }; + } + + rpc GetFlowTags(GetFlowTagsRequest) returns (GetFlowTagsResponse) { + option (google.api.http) = { + get : "/v1/flows/tags/{flow_name}" + }; } } From c4742c3d8e2a23b5added339adc8743190e28bdf Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Wed, 27 Nov 2024 03:57:52 +0530 Subject: [PATCH 48/61] Otel: initialise int64counter (#2299) Currently flow-worker panics due to assignment to nil map entry at the getOrInitMetric call in the int64 counter because the map is nil --- flow/cmd/worker.go | 1 + 1 file changed, 1 insertion(+) diff --git a/flow/cmd/worker.go b/flow/cmd/worker.go index 51415a1dc..3de6f6bcd 100644 --- a/flow/cmd/worker.go +++ b/flow/cmd/worker.go @@ -168,6 +168,7 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) { Meter: metricsProvider.Meter("io.peerdb.flow-worker"), Float64GaugesCache: make(map[string]metric.Float64Gauge), Int64GaugesCache: make(map[string]metric.Int64Gauge), + Int64CountersCache: make(map[string]metric.Int64Counter), } } w.RegisterActivity(&activities.FlowableActivity{ From 04deaf0c6da736dd379087d7395da20e85f8d2a9 Mon 
Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Wed, 27 Nov 2024 04:34:52 +0530 Subject: [PATCH 49/61] Clickhouse: add select consistency (#2300) ### Overview [A recent effort](https://github.com/PeerDB-io/peerdb/pull/2256/files) was made to improve PeerDB's ingestion performance into ClickHouse by parallelizing the INSERT INTO SELECT queries which took data from the raw table and put it into the final tables. Just before this step, PeerDB moves data from S3 to the raw table. ### Problem In this implementation, every insert runs on a new ClickHouse session - which means the inserts could be running on different nodes/replicas. In this case, there exists a non-zero chance (albeit very unlikely) that the INSERT INTO SELECT does not read the rows in the raw table inserted by the first step. This is documented here: https://clickhouse.com/docs/en/cloud/reference/shared-merge-tree#consistency ### Solution This PR adds the setting `select_sequential_consistency = 1` in the `Connect` function which we use for connecting to ClickHouse. --- flow/connectors/clickhouse/clickhouse.go | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/flow/connectors/clickhouse/clickhouse.go b/flow/connectors/clickhouse/clickhouse.go index 63ccea693..f024a767e 100644 --- a/flow/connectors/clickhouse/clickhouse.go +++ b/flow/connectors/clickhouse/clickhouse.go @@ -228,11 +228,12 @@ func Connect(ctx context.Context, env map[string]string, config *protos.Clickhou tlsSetting.RootCAs = caPool } - var settings clickhouse.Settings + // See: https://clickhouse.com/docs/en/cloud/reference/shared-merge-tree#consistency + settings := clickhouse.Settings{"select_sequential_consistency": uint64(1)} if maxInsertThreads, err := peerdbenv.PeerDBClickHouseMaxInsertThreads(ctx, env); err != nil { return nil, fmt.Errorf("failed to load max_insert_threads config: %w", err) } else if maxInsertThreads != 0 { - settings = 
clickhouse.Settings{"max_insert_threads": maxInsertThreads} + settings["max_insert_threads"] = maxInsertThreads } conn, err := clickhouse.Open(&clickhouse.Options{ From 04036cf66d92f669105bc162e0e1b8f5ce903533 Mon Sep 17 00:00:00 2001 From: Kaushik Iska Date: Tue, 26 Nov 2024 18:29:58 -0600 Subject: [PATCH 50/61] refactor: move OtelManager initialization to constructor (#2301) --- flow/cmd/worker.go | 15 ++++----------- flow/otel_metrics/otel_manager.go | 15 +++++++++++++++ 2 files changed, 19 insertions(+), 11 deletions(-) diff --git a/flow/cmd/worker.go b/flow/cmd/worker.go index 3de6f6bcd..87fbd0aa5 100644 --- a/flow/cmd/worker.go +++ b/flow/cmd/worker.go @@ -10,7 +10,6 @@ import ( "runtime" "github.com/grafana/pyroscope-go" - "go.opentelemetry.io/otel/metric" "go.temporal.io/sdk/client" temporalotel "go.temporal.io/sdk/contrib/opentelemetry" "go.temporal.io/sdk/worker" @@ -159,18 +158,12 @@ func WorkerSetup(opts *WorkerSetupOptions) (*workerSetupResponse, error) { var otelManager *otel_metrics.OtelManager if opts.EnableOtelMetrics { - metricsProvider, metricsErr := otel_metrics.SetupPeerDBMetricsProvider("flow-worker") - if metricsErr != nil { - return nil, metricsErr - } - otelManager = &otel_metrics.OtelManager{ - MetricsProvider: metricsProvider, - Meter: metricsProvider.Meter("io.peerdb.flow-worker"), - Float64GaugesCache: make(map[string]metric.Float64Gauge), - Int64GaugesCache: make(map[string]metric.Int64Gauge), - Int64CountersCache: make(map[string]metric.Int64Counter), + otelManager, err = otel_metrics.NewOtelManager() + if err != nil { + return nil, fmt.Errorf("unable to create otel manager: %w", err) } } + w.RegisterActivity(&activities.FlowableActivity{ CatalogPool: conn, Alerter: alerting.NewAlerter(context.Background(), conn), diff --git a/flow/otel_metrics/otel_manager.go b/flow/otel_metrics/otel_manager.go index 6dd57ebe2..dc3deb424 100644 --- a/flow/otel_metrics/otel_manager.go +++ b/flow/otel_metrics/otel_manager.go @@ -45,6 +45,21 @@ type 
OtelManager struct { Int64CountersCache map[string]metric.Int64Counter } +func NewOtelManager() (*OtelManager, error) { + metricsProvider, err := SetupPeerDBMetricsProvider("flow-worker") + if err != nil { + return nil, err + } + + return &OtelManager{ + MetricsProvider: metricsProvider, + Meter: metricsProvider.Meter("io.peerdb.flow-worker"), + Float64GaugesCache: make(map[string]metric.Float64Gauge), + Int64GaugesCache: make(map[string]metric.Int64Gauge), + Int64CountersCache: make(map[string]metric.Int64Counter), + }, nil +} + func (om *OtelManager) Close(ctx context.Context) error { return om.MetricsProvider.Shutdown(ctx) } From 2899aff5eee4d0d9f21f32af7de0a96e1ce4f1e8 Mon Sep 17 00:00:00 2001 From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> Date: Wed, 27 Nov 2024 20:57:13 +0530 Subject: [PATCH 51/61] Update renovate.json --- renovate.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/renovate.json b/renovate.json index b9a8ffef6..17de3825e 100644 --- a/renovate.json +++ b/renovate.json @@ -25,7 +25,7 @@ "enabled": false }, { - "matchPackageNames": ["next"], + "matchPackageNames": ["next", "eslint", "eslint-config-next"], "matchManagers": ["npm"], "matchUpdateTypes": ["major"], "enabled": false From 65effd94370aaf09c4d2b2588b918506064d7280 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Wed, 27 Nov 2024 21:19:30 +0530 Subject: [PATCH 52/61] fix(deps): update npm dependencies (#2294) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Change | Age | Adoption | Passing | Confidence | |---|---|---|---|---|---| | [@types/node](https://redirect.github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/node) ([source](https://redirect.github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)) | [`22.9.0` -> 
`22.10.0`](https://renovatebot.com/diffs/npm/@types%2fnode/22.9.0/22.10.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/@types%2fnode/22.10.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@types%2fnode/22.10.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@types%2fnode/22.9.0/22.10.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@types%2fnode/22.9.0/22.10.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [less](http://lesscss.org) ([source](https://redirect.github.com/less/less.js)) | [`4.2.0` -> `4.2.1`](https://renovatebot.com/diffs/npm/less/4.2.0/4.2.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/less/4.2.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/less/4.2.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/less/4.2.0/4.2.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/less/4.2.0/4.2.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [lucide-react](https://lucide.dev) ([source](https://redirect.github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react)) | [`^0.460.0` -> `^0.462.0`](https://renovatebot.com/diffs/npm/lucide-react/0.460.0/0.462.0) | [![age](https://developer.mend.io/api/mc/badges/age/npm/lucide-react/0.462.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/lucide-react/0.462.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/lucide-react/0.460.0/0.462.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/lucide-react/0.460.0/0.462.0?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [material-symbols](https://marella.github.io/material-symbols/demo/) ([source](https://redirect.github.com/marella/material-symbols/tree/HEAD/material-symbols)) | [`0.27.0` -> `0.27.1`](https://renovatebot.com/diffs/npm/material-symbols/0.27.0/0.27.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/material-symbols/0.27.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/material-symbols/0.27.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/material-symbols/0.27.0/0.27.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/material-symbols/0.27.0/0.27.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [postcss](https://postcss.org/) ([source](https://redirect.github.com/postcss/postcss)) | [`8.4.47` -> `8.4.49`](https://renovatebot.com/diffs/npm/postcss/8.4.47/8.4.49) | [![age](https://developer.mend.io/api/mc/badges/age/npm/postcss/8.4.49?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/postcss/8.4.49?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/postcss/8.4.47/8.4.49?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/postcss/8.4.47/8.4.49?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [prettier](https://prettier.io) 
([source](https://redirect.github.com/prettier/prettier)) | [`3.3.3` -> `3.4.1`](https://renovatebot.com/diffs/npm/prettier/3.3.3/3.4.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/prettier/3.4.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/prettier/3.4.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/prettier/3.3.3/3.4.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/prettier/3.3.3/3.4.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [tailwindcss](https://tailwindcss.com) ([source](https://redirect.github.com/tailwindlabs/tailwindcss)) | [`3.4.14` -> `3.4.15`](https://renovatebot.com/diffs/npm/tailwindcss/3.4.14/3.4.15) | [![age](https://developer.mend.io/api/mc/badges/age/npm/tailwindcss/3.4.15?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/tailwindcss/3.4.15?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/tailwindcss/3.4.14/3.4.15?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/tailwindcss/3.4.14/3.4.15?slim=true)](https://docs.renovatebot.com/merge-confidence/) | | [typescript](https://www.typescriptlang.org/) ([source](https://redirect.github.com/microsoft/TypeScript)) | [`5.6.3` -> `5.7.2`](https://renovatebot.com/diffs/npm/typescript/5.6.3/5.7.2) | [![age](https://developer.mend.io/api/mc/badges/age/npm/typescript/5.7.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/typescript/5.7.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/typescript/5.6.3/5.7.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/typescript/5.6.3/5.7.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes
less/less.js (less) ### [`v4.2.1`](https://redirect.github.com/less/less.js/blob/HEAD/CHANGELOG.md#v421-2024-09-26) [Compare Source](https://redirect.github.com/less/less.js/compare/v4.2.0...1df9072ee9ebdadc791bf35dfb1dbc3ef9f1948f) - [#​4237](https://redirect.github.com/less/less.js/pull/4237) Fix [#​4235](https://redirect.github.com/less/less.js/issues/4235) container style queries extra space resolved ([@​puckowski](https://redirect.github.com/puckowski))
lucide-icons/lucide (lucide-react) ### [`v0.462.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.462.0): New icons 0.462.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.461.0...0.462.0) #### New icons 🎨 - `image-upscale` ([#​2462](https://redirect.github.com/lucide-icons/lucide/issues/2462)) by [@​jguddas](https://redirect.github.com/jguddas) #### Modified Icons 🔨 - `grid-2x2` ([#​2628](https://redirect.github.com/lucide-icons/lucide/issues/2628)) by [@​jguddas](https://redirect.github.com/jguddas) - `ship` ([#​2548](https://redirect.github.com/lucide-icons/lucide/issues/2548)) by [@​jguddas](https://redirect.github.com/jguddas) - `shuffle` ([#​2478](https://redirect.github.com/lucide-icons/lucide/issues/2478)) by [@​jguddas](https://redirect.github.com/jguddas) - `venetian-mask` ([#​1950](https://redirect.github.com/lucide-icons/lucide/issues/1950)) by [@​jguddas](https://redirect.github.com/jguddas) ### [`v0.461.0`](https://redirect.github.com/lucide-icons/lucide/releases/tag/0.461.0): New icons 0.461.0 [Compare Source](https://redirect.github.com/lucide-icons/lucide/compare/0.460.0...0.461.0) #### New icons 🎨 - `calendar-sync` ([#​2590](https://redirect.github.com/lucide-icons/lucide/issues/2590)) by [@​chessurisme](https://redirect.github.com/chessurisme) #### Modified Icons 🔨 - `scale-3d` ([#​2627](https://redirect.github.com/lucide-icons/lucide/issues/2627)) by [@​jguddas](https://redirect.github.com/jguddas)
marella/material-symbols (material-symbols) ### [`v0.27.1`](https://redirect.github.com/marella/material-symbols/compare/v0.27.0...v0.27.1) [Compare Source](https://redirect.github.com/marella/material-symbols/compare/v0.27.0...v0.27.1)
postcss/postcss (postcss) ### [`v8.4.49`](https://redirect.github.com/postcss/postcss/blob/HEAD/CHANGELOG.md#8449) [Compare Source](https://redirect.github.com/postcss/postcss/compare/8.4.48...8.4.49) - Fixed custom syntax without `source.offset` (by [@​romainmenke](https://redirect.github.com/romainmenke)). ### [`v8.4.48`](https://redirect.github.com/postcss/postcss/blob/HEAD/CHANGELOG.md#8448) [Compare Source](https://redirect.github.com/postcss/postcss/compare/8.4.47...8.4.48) - Fixed position calculation in error/warnings methods (by [@​romainmenke](https://redirect.github.com/romainmenke)).
prettier/prettier (prettier) ### [`v3.4.1`](https://redirect.github.com/prettier/prettier/blob/HEAD/CHANGELOG.md#341) [Compare Source](https://redirect.github.com/prettier/prettier/compare/3.4.0...3.4.1) [diff](https://redirect.github.com/prettier/prettier/compare/3.4.0...3.4.1) ##### Remove unnecessary parentheses around assignment in `v-on` ([#​16887](https://redirect.github.com/prettier/prettier/pull/16887) by [@​fisker](https://redirect.github.com/fisker)) ```vue ``` ### [`v3.4.0`](https://redirect.github.com/prettier/prettier/blob/HEAD/CHANGELOG.md#340) [Compare Source](https://redirect.github.com/prettier/prettier/compare/3.3.3...3.4.0) [diff](https://redirect.github.com/prettier/prettier/compare/3.3.3...3.4.0) 🔗 [Release Notes](https://prettier.io/blog/2024/11/26/3.4.0.html)
tailwindlabs/tailwindcss (tailwindcss) ### [`v3.4.15`](https://redirect.github.com/tailwindlabs/tailwindcss/releases/tag/v3.4.15) [Compare Source](https://redirect.github.com/tailwindlabs/tailwindcss/compare/v3.4.14...v3.4.15) - Bump versions for security vulnerabilities ([#​14697](https://redirect.github.com/tailwindlabs/tailwindcss/pull/14697)) - Ensure the TypeScript types for the `boxShadow` theme configuration allows arrays ([#​14856](https://redirect.github.com/tailwindlabs/tailwindcss/pull/14856)) - Set fallback for opacity variables to ensure setting colors with the `selection:*` variant works in Chrome 131 ([#​15003](https://redirect.github.com/tailwindlabs/tailwindcss/pull/15003))
microsoft/TypeScript (typescript) ### [`v5.7.2`](https://redirect.github.com/microsoft/TypeScript/compare/v5.6.3...d701d908d534e68cfab24b6df15539014ac348a3) [Compare Source](https://redirect.github.com/microsoft/TypeScript/compare/v5.6.3...v5.7.2)
--- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- ui/package-lock.json | 92 ++++++++++++++++++++++---------------------- ui/package.json | 12 +++--- 2 files changed, 52 insertions(+), 52 deletions(-) diff --git a/ui/package-lock.json b/ui/package-lock.json index f233b02b9..c58187758 100644 --- a/ui/package-lock.json +++ b/ui/package-lock.json @@ -29,7 +29,7 @@ "@types/react": "^18.3.11", "@types/react-dom": "^18.3.0", "classnames": "^2.5.1", - "lucide-react": "^0.460.0", + "lucide-react": "^0.462.0", "material-symbols": "^0.27.0", "moment": "^2.30.1", "moment-timezone": "^0.5.46", @@ -53,14 +53,14 @@ "eslint": "8.57.1", "eslint-config-next": "14.2.17", "eslint-config-prettier": "9.1.0", - "less": "4.2.0", - "postcss": "8.4.47", - "prettier": "3.3.3", + "less": "4.2.1", + "postcss": "8.4.49", + "prettier": "3.4.1", "prettier-plugin-organize-imports": "4.1.0", "string-width": "7.2.0", - "tailwindcss": "3.4.14", + "tailwindcss": "3.4.15", "tailwindcss-animate": "1.0.7", - "typescript": "5.6.3", + "typescript": "5.7.2", "webpack": "5.96.1" } }, @@ -2255,12 +2255,12 @@ "license": "MIT" }, "node_modules/@types/node": { - "version": "22.9.0", - "resolved": "https://registry.npmjs.org/@types/node/-/node-22.9.0.tgz", - "integrity": 
"sha512-vuyHg81vvWA1Z1ELfvLko2c8f34gyA0zaic0+Rllc5lbCnbSyuvb2Oxpm6TAUAC/2xZN3QGqxBNggD1nNR2AfQ==", + "version": "22.10.0", + "resolved": "https://registry.npmjs.org/@types/node/-/node-22.10.0.tgz", + "integrity": "sha512-XC70cRZVElFHfIUB40FgZOBbgJYFKKMa5nb9lxcwYstFG/Mi+/Y0bGS+rs6Dmhmkpq4pnNiLiuZAbc02YCOnmA==", "license": "MIT", "dependencies": { - "undici-types": "~6.19.8" + "undici-types": "~6.20.0" } }, "node_modules/@types/parse-json": { @@ -5870,9 +5870,9 @@ } }, "node_modules/less": { - "version": "4.2.0", - "resolved": "https://registry.npmjs.org/less/-/less-4.2.0.tgz", - "integrity": "sha512-P3b3HJDBtSzsXUl0im2L7gTO5Ubg8mEN6G8qoTS77iXxXX4Hvu4Qj540PZDvQ8V6DmX6iXo98k7Md0Cm1PrLaA==", + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/less/-/less-4.2.1.tgz", + "integrity": "sha512-CasaJidTIhWmjcqv0Uj5vccMI7pJgfD9lMkKtlnTHAdJdYK/7l8pM9tumLyJ0zhbD4KJLo/YvTj+xznQd5NBhg==", "dev": true, "license": "Apache-2.0", "dependencies": { @@ -6001,9 +6001,9 @@ } }, "node_modules/lucide-react": { - "version": "0.460.0", - "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.460.0.tgz", - "integrity": "sha512-BVtq/DykVeIvRTJvRAgCsOwaGL8Un3Bxh8MbDxMhEWlZay3T4IpEKDEpwt5KZ0KJMHzgm6jrltxlT5eXOWXDHg==", + "version": "0.462.0", + "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.462.0.tgz", + "integrity": "sha512-NTL7EbAao9IFtuSivSZgrAh4fZd09Lr+6MTkqIxuHaH2nnYiYIzXPo06cOxHg9wKLdj6LL8TByG4qpePqwgx/g==", "license": "ISC", "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0-rc" @@ -6036,9 +6036,9 @@ } }, "node_modules/material-symbols": { - "version": "0.27.0", - "resolved": "https://registry.npmjs.org/material-symbols/-/material-symbols-0.27.0.tgz", - "integrity": "sha512-nRHpnw2Cz7eNl6GptgHHhkjauL0zvkYsuiqy1HBifOYCY4fdbZ/PwtdZN4RNmwA+9jQPoymvlArVPPX5nYTdZg==", + "version": "0.27.1", + "resolved": "https://registry.npmjs.org/material-symbols/-/material-symbols-0.27.1.tgz", + "integrity": 
"sha512-ICw3sP2EyCsxo1T2vvQGhxcUX8sqb3FYLF0vTUOjCNPdJ8G1Z3bn3wjAh2ZIdP/AfGy96zuBY5okK3Ag4XLyVw==", "license": "Apache-2.0" }, "node_modules/memoize-one": { @@ -6752,9 +6752,9 @@ } }, "node_modules/postcss": { - "version": "8.4.47", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.47.tgz", - "integrity": "sha512-56rxCq7G/XfB4EkXq9Egn5GCqugWvDFjafDOThIdMBsI15iqPqR5r15TfSr1YPYeEI19YeaXMCbY6u88Y76GLQ==", + "version": "8.4.49", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.49.tgz", + "integrity": "sha512-OCVPnIObs4N29kxTjzLfUryOkvZEq+pf8jTF0lg8E7uETuWHA+v7j3c/xJmiqpX450191LlmZfUKkXxkTry7nA==", "funding": [ { "type": "opencollective", @@ -6772,7 +6772,7 @@ "license": "MIT", "dependencies": { "nanoid": "^3.3.7", - "picocolors": "^1.1.0", + "picocolors": "^1.1.1", "source-map-js": "^1.2.1" }, "engines": { @@ -6951,9 +6951,9 @@ } }, "node_modules/prettier": { - "version": "3.3.3", - "resolved": "https://registry.npmjs.org/prettier/-/prettier-3.3.3.tgz", - "integrity": "sha512-i2tDNA0O5IrMO757lfrdQZCc2jPNDVntV0m/+4whiDfWaTKfMNgR7Qz0NAeGz/nRqF4m5/6CLzbP4/liHt12Ew==", + "version": "3.4.1", + "resolved": "https://registry.npmjs.org/prettier/-/prettier-3.4.1.tgz", + "integrity": "sha512-G+YdqtITVZmOJje6QkXQWzl3fSfMxFwm1tjTyo9exhkmWSqC4Yhd1+lug++IlR2mvRVAxEDDWYkQdeSztajqgg==", "dev": true, "license": "MIT", "bin": { @@ -8225,33 +8225,33 @@ } }, "node_modules/tailwindcss": { - "version": "3.4.14", - "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-3.4.14.tgz", - "integrity": "sha512-IcSvOcTRcUtQQ7ILQL5quRDg7Xs93PdJEk1ZLbhhvJc7uj/OAhYOnruEiwnGgBvUtaUAJ8/mhSw1o8L2jCiENA==", + "version": "3.4.15", + "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-3.4.15.tgz", + "integrity": "sha512-r4MeXnfBmSOuKUWmXe6h2CcyfzJCEk4F0pptO5jlnYSIViUkVmsawj80N5h2lO3gwcmSb4n3PuN+e+GC1Guylw==", "license": "MIT", "dependencies": { "@alloc/quick-lru": "^5.2.0", "arg": "^5.0.2", - "chokidar": "^3.5.3", + "chokidar": "^3.6.0", "didyoumean": 
"^1.2.2", "dlv": "^1.1.3", - "fast-glob": "^3.3.0", + "fast-glob": "^3.3.2", "glob-parent": "^6.0.2", "is-glob": "^4.0.3", - "jiti": "^1.21.0", + "jiti": "^1.21.6", "lilconfig": "^2.1.0", - "micromatch": "^4.0.5", + "micromatch": "^4.0.8", "normalize-path": "^3.0.0", "object-hash": "^3.0.0", - "picocolors": "^1.0.0", - "postcss": "^8.4.23", + "picocolors": "^1.1.1", + "postcss": "^8.4.47", "postcss-import": "^15.1.0", "postcss-js": "^4.0.1", - "postcss-load-config": "^4.0.1", - "postcss-nested": "^6.0.1", - "postcss-selector-parser": "^6.0.11", - "resolve": "^1.22.2", - "sucrase": "^3.32.0" + "postcss-load-config": "^4.0.2", + "postcss-nested": "^6.2.0", + "postcss-selector-parser": "^6.1.2", + "resolve": "^1.22.8", + "sucrase": "^3.35.0" }, "bin": { "tailwind": "lib/cli.js", @@ -8558,9 +8558,9 @@ } }, "node_modules/typescript": { - "version": "5.6.3", - "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.6.3.tgz", - "integrity": "sha512-hjcS1mhfuyi4WW8IWtjP7brDrG2cuDZukyrYrSauoXGNgx0S7zceP07adYkJycEr56BOUTNPzbInooiN3fn1qw==", + "version": "5.7.2", + "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.7.2.tgz", + "integrity": "sha512-i5t66RHxDvVN40HfDd1PsEThGNnlMCMT3jMUuoh9/0TaqWevNontacunWyN02LA9/fIbEWlcHZcgTKb9QoaLfg==", "dev": true, "license": "Apache-2.0", "bin": { @@ -8588,9 +8588,9 @@ } }, "node_modules/undici-types": { - "version": "6.19.8", - "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.19.8.tgz", - "integrity": "sha512-ve2KP6f/JnbPBFyobGHuerC9g1FYGn/F8n1LWTwNxCEzd6IfqTwUQcNXgEtmmQ6DlRrC1hrSrBnCZPokRrDHjw==", + "version": "6.20.0", + "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.20.0.tgz", + "integrity": "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg==", "license": "MIT" }, "node_modules/unicorn-magic": { diff --git a/ui/package.json b/ui/package.json index d755ce864..9f4833398 100644 --- a/ui/package.json +++ b/ui/package.json @@ 
-31,7 +31,7 @@ "@types/react": "^18.3.11", "@types/react-dom": "^18.3.0", "classnames": "^2.5.1", - "lucide-react": "^0.460.0", + "lucide-react": "^0.462.0", "material-symbols": "^0.27.0", "moment": "^2.30.1", "moment-timezone": "^0.5.46", @@ -55,14 +55,14 @@ "eslint": "8.57.1", "eslint-config-next": "14.2.17", "eslint-config-prettier": "9.1.0", - "less": "4.2.0", - "postcss": "8.4.47", - "prettier": "3.3.3", + "less": "4.2.1", + "postcss": "8.4.49", + "prettier": "3.4.1", "prettier-plugin-organize-imports": "4.1.0", "string-width": "7.2.0", - "tailwindcss": "3.4.14", + "tailwindcss": "3.4.15", "tailwindcss-animate": "1.0.7", - "typescript": "5.6.3", + "typescript": "5.7.2", "webpack": "5.96.1" } } From 442a4b94ae1e73519a3a86edb93232542a59ccef Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Wed, 27 Nov 2024 15:58:40 +0000 Subject: [PATCH 53/61] chore(deps): pin dependencies (#2289) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Update | Change | |---|---|---| | ghcr.io/peerdb-io/flow-api | pinDigest | -> `a759b2d` | | ghcr.io/peerdb-io/flow-snapshot-worker | pinDigest | -> `894c1fe` | | ghcr.io/peerdb-io/flow-worker | pinDigest | -> `4482314` | | ghcr.io/peerdb-io/peerdb-server | pinDigest | -> `c736500` | | ghcr.io/peerdb-io/peerdb-ui | pinDigest | -> `ffc4b59` | | postgres | digest | `0d96245` -> `e7897ba` | --- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. 
--- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- docker-compose-dev.yml | 2 +- docker-compose.yml | 12 ++++++------ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docker-compose-dev.yml b/docker-compose-dev.yml index 98ee987b3..6459c0b13 100644 --- a/docker-compose-dev.yml +++ b/docker-compose-dev.yml @@ -39,7 +39,7 @@ x-flow-worker-env: &flow-worker-env services: catalog: container_name: catalog - image: postgres:17-alpine@sha256:0d9624535618a135c5453258fd629f4963390338b11aaffb92292c12df3a6c17 + image: postgres:17-alpine@sha256:e7897baa70dae1968d23d785adb4aeb699175e0bcaae44f98a7083ecb9668b93 command: -c config_file=/etc/postgresql.conf ports: - 9901:5432 diff --git a/docker-compose.yml b/docker-compose.yml index 5448632ac..c7991b9e2 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -32,7 +32,7 @@ x-flow-worker-env: &flow-worker-env services: catalog: container_name: catalog - image: postgres:17-alpine@sha256:0d9624535618a135c5453258fd629f4963390338b11aaffb92292c12df3a6c17 + image: postgres:17-alpine@sha256:e7897baa70dae1968d23d785adb4aeb699175e0bcaae44f98a7083ecb9668b93 command: -c config_file=/etc/postgresql.conf restart: unless-stopped ports: @@ -112,7 +112,7 @@ services: flow-api: container_name: flow_api - image: ghcr.io/peerdb-io/flow-api:stable-v0.19.1 + image: ghcr.io/peerdb-io/flow-api:stable-v0.19.1@sha256:a759b2d1b14f11d09ade672c268abcb456fd8884468547ea0f467cdfb60a0994 restart: unless-stopped ports: - 8112:8112 @@ -128,7 +128,7 @@ services: flow-snapshot-worker: container_name: flow-snapshot-worker - image: ghcr.io/peerdb-io/flow-snapshot-worker:stable-v0.19.1 + image: 
ghcr.io/peerdb-io/flow-snapshot-worker:stable-v0.19.1@sha256:894c1fea1cf9a4f5622420d8630509243b60cf177e107ec4d14d7294a9490451 restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -138,7 +138,7 @@ services: flow-worker: container_name: flow-worker - image: ghcr.io/peerdb-io/flow-worker:stable-v0.19.1 + image: ghcr.io/peerdb-io/flow-worker:stable-v0.19.1@sha256:4482314bd3bd4a96930fbee10c00a9f2d5764e86cfd8802642589d339cf04054 restart: unless-stopped environment: <<: [*catalog-config, *flow-worker-env, *minio-config] @@ -151,7 +151,7 @@ services: peerdb: container_name: peerdb-server stop_signal: SIGINT - image: ghcr.io/peerdb-io/peerdb-server:stable-v0.19.1 + image: ghcr.io/peerdb-io/peerdb-server:stable-v0.19.1@sha256:c736500e0b42f100df29af43ecf4c96d0c8f4805dd294fecd0bb4ce7b7897a18 restart: unless-stopped environment: <<: *catalog-config @@ -167,7 +167,7 @@ services: peerdb-ui: container_name: peerdb-ui - image: ghcr.io/peerdb-io/peerdb-ui:stable-v0.19.1 + image: ghcr.io/peerdb-io/peerdb-ui:stable-v0.19.1@sha256:ffc4b5960dc1653a59e680c61fca0ba2c5891cb4965e4662927d9886f4d7f6bc restart: unless-stopped ports: - 3000:3000 From d0af615730dbdf7112b1ffc09de345294f668b91 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Tue, 3 Dec 2024 04:21:31 +0530 Subject: [PATCH 54/61] chore(deps): update dockerfile dependencies (#2310) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | [docker/dockerfile](https://redirect.github.com/moby/buildkit) | syntax | minor | `1.11` -> `1.12` | | lukemathwalker/cargo-chef | stage | digest | `a539f69` -> `5b4cc6b` | --- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. 
♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> --- stacks/flow.Dockerfile | 2 +- stacks/peerdb-server.Dockerfile | 2 +- stacks/peerdb-ui.Dockerfile | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/stacks/flow.Dockerfile b/stacks/flow.Dockerfile index 2c6f375d0..4595e45b3 100644 --- a/stacks/flow.Dockerfile +++ b/stacks/flow.Dockerfile @@ -1,4 +1,4 @@ -# syntax=docker/dockerfile:1.11@sha256:10c699f1b6c8bdc8f6b4ce8974855dd8542f1768c26eb240237b8f1c9c6c9976 +# syntax=docker/dockerfile:1.12@sha256:db1ff77fb637a5955317c7a3a62540196396d565f3dd5742e76dddbb6d75c4c5 FROM golang:1.23-alpine@sha256:c694a4d291a13a9f9d94933395673494fc2cc9d4777b85df3a7e70b3492d3574 AS builder RUN apk add --no-cache gcc geos-dev musl-dev diff --git a/stacks/peerdb-server.Dockerfile b/stacks/peerdb-server.Dockerfile index 16f0a58ea..497b3aa7c 100644 --- a/stacks/peerdb-server.Dockerfile +++ b/stacks/peerdb-server.Dockerfile @@ -1,6 +1,6 @@ # syntax=docker/dockerfile:1@sha256:865e5dd094beca432e8c0a1d5e1c465db5f998dca4e439981029b3b81fb39ed5 -FROM lukemathwalker/cargo-chef:latest-rust-alpine3.20@sha256:a539f69c0a6b9d328b398f1e7aed81d53e986b49db485557cdb3e4479ea42889 as chef +FROM lukemathwalker/cargo-chef:latest-rust-alpine3.20@sha256:5b4cc6b770d17769eec91c97e8b85173b1c15a23d218e0c538e05b25a774aa88 as chef WORKDIR /root FROM chef as planner diff --git a/stacks/peerdb-ui.Dockerfile b/stacks/peerdb-ui.Dockerfile index 42cedca11..f976aaee0 100644 --- a/stacks/peerdb-ui.Dockerfile +++ 
b/stacks/peerdb-ui.Dockerfile @@ -1,4 +1,4 @@ -# syntax=docker/dockerfile:1.11@sha256:10c699f1b6c8bdc8f6b4ce8974855dd8542f1768c26eb240237b8f1c9c6c9976 +# syntax=docker/dockerfile:1.12@sha256:db1ff77fb637a5955317c7a3a62540196396d565f3dd5742e76dddbb6d75c4c5 # Base stage FROM node:22-alpine@sha256:b64ced2e7cd0a4816699fe308ce6e8a08ccba463c757c00c14cd372e3d2c763e AS base From 6d99f782b3eb32c2ac2de7973ca6ab2850e12f39 Mon Sep 17 00:00:00 2001 From: Amogh Bharadwaj <65964360+Amogh-Bharadwaj@users.noreply.github.com> Date: Tue, 3 Dec 2024 04:31:15 +0530 Subject: [PATCH 55/61] ClickHouse: Put numeric-string mapping behind feature flag (#2305) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ### Why Unbounded numerics in Postgres databases are statistically unlikely to hold values wider than a precision and scale of (76, 38). Currently, unbounded `numeric` in Postgres is mapped to String in ClickHouse by default - a mapping that is not ideal from a data-nativity standpoint and is useful only in the rare cases mentioned above. ### What This PR puts this mapping choice behind a dynamic setting flag that defaults to false, thereby restoring the default mapping of `numeric` to Decimal in all cases. The E2E test was adapted accordingly. --------- Co-authored-by: Philip Dubé --- flow/activities/flowable_core.go | 3 +- flow/connectors/bigquery/bigquery.go | 1 + flow/connectors/bigquery/qrep.go | 4 +- flow/connectors/bigquery/qrep_avro_sync.go | 12 +-- flow/connectors/clickhouse/cdc.go | 9 ++- flow/connectors/clickhouse/normalize.go | 38 +++------- flow/connectors/clickhouse/qrep_avro_sync.go | 8 +- flow/connectors/core.go | 4 +- .../connectors/elasticsearch/elasticsearch.go | 2 +- flow/connectors/eventhub/eventhub.go | 4 +- flow/connectors/kafka/kafka.go | 4 +- flow/connectors/postgres/postgres.go | 3 +- .../postgres/postgres_schema_delta_test.go | 8 +- flow/connectors/pubsub/pubsub.go | 4 +- flow/connectors/s3/qrep.go | 6 +- flow/connectors/s3/s3.go | 4 +-
.../snowflake/avro_file_writer_test.go | 20 ++--- .../snowflake/merge_stmt_generator.go | 5 +- flow/connectors/snowflake/qrep_avro_sync.go | 28 +++---- flow/connectors/snowflake/snowflake.go | 23 ++---- flow/connectors/utils/avro/avro_writer.go | 21 +++-- flow/datatypes/numeric.go | 12 +++ flow/e2e/clickhouse/peer_flow_ch_test.go | 76 +++++++++++++++++-- .../snowflake/snowflake_schema_delta_test.go | 8 +- flow/model/conversion_avro.go | 49 ++++++++---- flow/model/qvalue/avro_converter.go | 60 ++++++++++----- flow/model/qvalue/dwh.go | 14 ++-- flow/model/qvalue/kind.go | 34 +++++++-- flow/model/qvalue/qschema.go | 45 +++++++++++ flow/peerdbenv/dynamicconf.go | 12 +++ flow/workflows/snapshot_flow.go | 1 + 31 files changed, 361 insertions(+), 161 deletions(-) diff --git a/flow/activities/flowable_core.go b/flow/activities/flowable_core.go index 2d1f7e1f3..a9f58a5f3 100644 --- a/flow/activities/flowable_core.go +++ b/flow/activities/flowable_core.go @@ -225,7 +225,7 @@ func syncCore[TPull connectors.CDCPullConnectorCore, TSync connectors.CDCSyncCon } defer connectors.CloseConnector(ctx, dstConn) - if err := dstConn.ReplayTableSchemaDeltas(ctx, flowName, recordBatchSync.SchemaDeltas); err != nil { + if err := dstConn.ReplayTableSchemaDeltas(ctx, config.Env, flowName, recordBatchSync.SchemaDeltas); err != nil { return nil, fmt.Errorf("failed to sync schema: %w", err) } @@ -440,6 +440,7 @@ func replicateQRepPartition[TRead any, TWrite any, TSync connectors.QRepSyncConn }) errGroup.Go(func() error { + var err error rowsSynced, err = syncRecords(dstConn, errCtx, config, partition, outstream) if err != nil { a.Alerter.LogFlowError(ctx, config.FlowJobName, err) diff --git a/flow/connectors/bigquery/bigquery.go b/flow/connectors/bigquery/bigquery.go index f990b2f19..d6504322c 100644 --- a/flow/connectors/bigquery/bigquery.go +++ b/flow/connectors/bigquery/bigquery.go @@ -203,6 +203,7 @@ func (c *BigQueryConnector) waitForTableReady(ctx context.Context, datasetTable // This 
could involve adding or dropping multiple columns. func (c *BigQueryConnector) ReplayTableSchemaDeltas( ctx context.Context, + env map[string]string, flowJobName string, schemaDeltas []*protos.TableSchemaDelta, ) error { diff --git a/flow/connectors/bigquery/qrep.go b/flow/connectors/bigquery/qrep.go index 3da50c8e8..b184cc62a 100644 --- a/flow/connectors/bigquery/qrep.go +++ b/flow/connectors/bigquery/qrep.go @@ -35,7 +35,7 @@ func (c *BigQueryConnector) SyncQRepRecords( partition.PartitionId, destTable)) avroSync := NewQRepAvroSyncMethod(c, config.StagingPath, config.FlowJobName) - return avroSync.SyncQRepRecords(ctx, config.FlowJobName, destTable, partition, + return avroSync.SyncQRepRecords(ctx, config.Env, config.FlowJobName, destTable, partition, tblMetadata, stream, config.SyncedAtColName, config.SoftDeleteColName) } @@ -80,7 +80,7 @@ func (c *BigQueryConnector) replayTableSchemaDeltasQRep( } } - err = c.ReplayTableSchemaDeltas(ctx, config.FlowJobName, []*protos.TableSchemaDelta{tableSchemaDelta}) + err = c.ReplayTableSchemaDeltas(ctx, config.Env, config.FlowJobName, []*protos.TableSchemaDelta{tableSchemaDelta}) if err != nil { return nil, fmt.Errorf("failed to add columns to destination table: %w", err) } diff --git a/flow/connectors/bigquery/qrep_avro_sync.go b/flow/connectors/bigquery/qrep_avro_sync.go index da3b15c37..07285eb99 100644 --- a/flow/connectors/bigquery/qrep_avro_sync.go +++ b/flow/connectors/bigquery/qrep_avro_sync.go @@ -55,7 +55,7 @@ func (s *QRepAvroSyncMethod) SyncRecords( } stagingTable := fmt.Sprintf("%s_%s_staging", rawTableName, strconv.FormatInt(syncBatchID, 10)) - numRecords, err := s.writeToStage(ctx, strconv.FormatInt(syncBatchID, 10), rawTableName, avroSchema, + numRecords, err := s.writeToStage(ctx, req.Env, strconv.FormatInt(syncBatchID, 10), rawTableName, avroSchema, &datasetTable{ project: s.connector.projectID, dataset: s.connector.datasetID, @@ -97,7 +97,7 @@ func (s *QRepAvroSyncMethod) SyncRecords( 
slog.String(string(shared.FlowNameKey), req.FlowJobName), slog.String("dstTableName", rawTableName)) - err = s.connector.ReplayTableSchemaDeltas(ctx, req.FlowJobName, req.Records.SchemaDeltas) + err = s.connector.ReplayTableSchemaDeltas(ctx, req.Env, req.FlowJobName, req.Records.SchemaDeltas) if err != nil { return nil, fmt.Errorf("failed to sync schema changes: %w", err) } @@ -139,6 +139,7 @@ func getTransformedColumns(dstSchema *bigquery.Schema, syncedAtCol string, softD func (s *QRepAvroSyncMethod) SyncQRepRecords( ctx context.Context, + env map[string]string, flowJobName string, dstTableName string, partition *protos.QRepPartition, @@ -167,7 +168,7 @@ func (s *QRepAvroSyncMethod) SyncQRepRecords( table: fmt.Sprintf("%s_%s_staging", dstDatasetTable.table, strings.ReplaceAll(partition.PartitionId, "-", "_")), } - numRecords, err := s.writeToStage(ctx, partition.PartitionId, flowJobName, avroSchema, + numRecords, err := s.writeToStage(ctx, env, partition.PartitionId, flowJobName, avroSchema, stagingDatasetTable, stream, flowJobName) if err != nil { return -1, fmt.Errorf("failed to push to avro stage: %w", err) @@ -389,6 +390,7 @@ func GetAvroField(bqField *bigquery.FieldSchema) (AvroField, error) { func (s *QRepAvroSyncMethod) writeToStage( ctx context.Context, + env map[string]string, syncID string, objectFolder string, avroSchema *model.QRecordAvroSchemaDefinition, @@ -408,7 +410,7 @@ func (s *QRepAvroSyncMethod) writeToStage( obj := bucket.Object(avroFilePath) w := obj.NewWriter(ctx) - numRecords, err := ocfWriter.WriteOCF(ctx, w) + numRecords, err := ocfWriter.WriteOCF(ctx, env, w) if err != nil { return 0, fmt.Errorf("failed to write records to Avro file on GCS: %w", err) } @@ -426,7 +428,7 @@ func (s *QRepAvroSyncMethod) writeToStage( avroFilePath := fmt.Sprintf("%s/%s.avro", tmpDir, syncID) s.connector.logger.Info("writing records to local file", idLog) - avroFile, err = ocfWriter.WriteRecordsToAvroFile(ctx, avroFilePath) + avroFile, err = 
ocfWriter.WriteRecordsToAvroFile(ctx, env, avroFilePath) if err != nil { return 0, fmt.Errorf("failed to write records to local Avro file: %w", err) } diff --git a/flow/connectors/clickhouse/cdc.go b/flow/connectors/clickhouse/cdc.go index d3eb883b4..5dc8a1462 100644 --- a/flow/connectors/clickhouse/cdc.go +++ b/flow/connectors/clickhouse/cdc.go @@ -93,7 +93,7 @@ func (c *ClickHouseConnector) syncRecordsViaAvro( return nil, err } - if err := c.ReplayTableSchemaDeltas(ctx, req.FlowJobName, req.Records.SchemaDeltas); err != nil { + if err := c.ReplayTableSchemaDeltas(ctx, req.Env, req.FlowJobName, req.Records.SchemaDeltas); err != nil { return nil, fmt.Errorf("failed to sync schema changes: %w", err) } @@ -120,7 +120,10 @@ func (c *ClickHouseConnector) SyncRecords(ctx context.Context, req *model.SyncRe return res, nil } -func (c *ClickHouseConnector) ReplayTableSchemaDeltas(ctx context.Context, flowJobName string, +func (c *ClickHouseConnector) ReplayTableSchemaDeltas( + ctx context.Context, + env map[string]string, + flowJobName string, schemaDeltas []*protos.TableSchemaDelta, ) error { if len(schemaDeltas) == 0 { @@ -133,7 +136,7 @@ func (c *ClickHouseConnector) ReplayTableSchemaDeltas(ctx context.Context, flowJ } for _, addedColumn := range schemaDelta.AddedColumns { - clickHouseColType, err := qvalue.QValueKind(addedColumn.Type).ToDWHColumnType(protos.DBType_CLICKHOUSE) + clickHouseColType, err := qvalue.QValueKind(addedColumn.Type).ToDWHColumnType(ctx, env, protos.DBType_CLICKHOUSE, addedColumn) if err != nil { return fmt.Errorf("failed to convert column type %s to ClickHouse type: %w", addedColumn.Type, err) } diff --git a/flow/connectors/clickhouse/normalize.go b/flow/connectors/clickhouse/normalize.go index 2debe0f4d..fabe07a35 100644 --- a/flow/connectors/clickhouse/normalize.go +++ b/flow/connectors/clickhouse/normalize.go @@ -15,7 +15,6 @@ import ( "github.com/ClickHouse/clickhouse-go/v2" "golang.org/x/sync/errgroup" - 
"github.com/PeerDB-io/peer-flow/datatypes" "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" "github.com/PeerDB-io/peer-flow/model/qvalue" @@ -81,16 +80,6 @@ func getColName(overrides map[string]string, name string) string { return name } -func getClickhouseTypeForNumericColumn(column *protos.FieldDescription) string { - rawPrecision, _ := datatypes.ParseNumericTypmod(column.TypeModifier) - if rawPrecision > datatypes.PeerDBClickHouseMaxPrecision { - return "String" - } else { - precision, scale := datatypes.GetNumericTypeForWarehouse(column.TypeModifier, datatypes.ClickHouseNumericCompatibility{}) - return fmt.Sprintf("Decimal(%d, %d)", precision, scale) - } -} - func generateCreateTableSQLForNormalizedTable( ctx context.Context, config *protos.SetupNormalizedTableBatchInput, @@ -142,14 +131,10 @@ func generateCreateTableSQLForNormalizedTable( } if clickHouseType == "" { - if colType == qvalue.QValueKindNumeric { - clickHouseType = getClickhouseTypeForNumericColumn(column) - } else { - var err error - clickHouseType, err = colType.ToDWHColumnType(protos.DBType_CLICKHOUSE) - if err != nil { - return "", fmt.Errorf("error while converting column type to ClickHouse type: %w", err) - } + var err error + clickHouseType, err = colType.ToDWHColumnType(ctx, config.Env, protos.DBType_CLICKHOUSE, column) + if err != nil { + return "", fmt.Errorf("error while converting column type to ClickHouse type: %w", err) } } if (tableSchema.NullableEnabled || columnNullableEnabled) && column.Nullable && !colType.IsArray() { @@ -368,16 +353,13 @@ func (c *ClickHouseConnector) NormalizeRecords( colSelector.WriteString(fmt.Sprintf("`%s`,", dstColName)) if clickHouseType == "" { - if colType == qvalue.QValueKindNumeric { - clickHouseType = getClickhouseTypeForNumericColumn(column) - } else { - var err error - clickHouseType, err = colType.ToDWHColumnType(protos.DBType_CLICKHOUSE) - if err != nil { - close(queries) - return nil, fmt.Errorf("error 
while converting column type to clickhouse type: %w", err) - } + var err error + clickHouseType, err = colType.ToDWHColumnType(ctx, req.Env, protos.DBType_CLICKHOUSE, column) + if err != nil { + close(queries) + return nil, fmt.Errorf("error while converting column type to clickhouse type: %w", err) } + if (schema.NullableEnabled || columnNullableEnabled) && column.Nullable && !colType.IsArray() { clickHouseType = fmt.Sprintf("Nullable(%s)", clickHouseType) } diff --git a/flow/connectors/clickhouse/qrep_avro_sync.go b/flow/connectors/clickhouse/qrep_avro_sync.go index fa2cfe103..61450dd55 100644 --- a/flow/connectors/clickhouse/qrep_avro_sync.go +++ b/flow/connectors/clickhouse/qrep_avro_sync.go @@ -71,7 +71,7 @@ func (s *ClickHouseAvroSyncMethod) SyncRecords( s.logger.Info("sync function called and schema acquired", slog.String("dstTable", dstTableName)) - avroSchema, err := s.getAvroSchema(dstTableName, schema) + avroSchema, err := s.getAvroSchema(ctx, env, dstTableName, schema) if err != nil { return 0, err } @@ -106,7 +106,7 @@ func (s *ClickHouseAvroSyncMethod) SyncQRepRecords( stagingPath := s.credsProvider.BucketPath startTime := time.Now() - avroSchema, err := s.getAvroSchema(dstTableName, stream.Schema()) + avroSchema, err := s.getAvroSchema(ctx, config.Env, dstTableName, stream.Schema()) if err != nil { return 0, err } @@ -165,10 +165,12 @@ func (s *ClickHouseAvroSyncMethod) SyncQRepRecords( } func (s *ClickHouseAvroSyncMethod) getAvroSchema( + ctx context.Context, + env map[string]string, dstTableName string, schema qvalue.QRecordSchema, ) (*model.QRecordAvroSchemaDefinition, error) { - avroSchema, err := model.GetAvroSchemaDefinition(dstTableName, schema, protos.DBType_CLICKHOUSE) + avroSchema, err := model.GetAvroSchemaDefinition(ctx, env, dstTableName, schema, protos.DBType_CLICKHOUSE) if err != nil { return nil, fmt.Errorf("failed to define Avro schema: %w", err) } diff --git a/flow/connectors/core.go b/flow/connectors/core.go index 
073d9d82b..0991a5097 100644 --- a/flow/connectors/core.go +++ b/flow/connectors/core.go @@ -173,7 +173,7 @@ type CDCSyncConnectorCore interface { // ReplayTableSchemaDelta changes a destination table to match the schema at source // This could involve adding or dropping multiple columns. // Connectors which are non-normalizing should implement this as a nop. - ReplayTableSchemaDeltas(ctx context.Context, flowJobName string, schemaDeltas []*protos.TableSchemaDelta) error + ReplayTableSchemaDeltas(ctx context.Context, env map[string]string, flowJobName string, schemaDeltas []*protos.TableSchemaDelta) error } type CDCSyncConnector interface { @@ -463,8 +463,6 @@ var ( _ CDCSyncConnector = &connclickhouse.ClickHouseConnector{} _ CDCSyncConnector = &connelasticsearch.ElasticsearchConnector{} - _ CDCSyncPgConnector = &connpostgres.PostgresConnector{} - _ CDCNormalizeConnector = &connpostgres.PostgresConnector{} _ CDCNormalizeConnector = &connbigquery.BigQueryConnector{} _ CDCNormalizeConnector = &connsnowflake.SnowflakeConnector{} diff --git a/flow/connectors/elasticsearch/elasticsearch.go b/flow/connectors/elasticsearch/elasticsearch.go index e67516805..30279fd74 100644 --- a/flow/connectors/elasticsearch/elasticsearch.go +++ b/flow/connectors/elasticsearch/elasticsearch.go @@ -95,7 +95,7 @@ func (esc *ElasticsearchConnector) CreateRawTable(ctx context.Context, } // we handle schema changes by not handling them since no mapping is being enforced right now -func (esc *ElasticsearchConnector) ReplayTableSchemaDeltas(ctx context.Context, +func (esc *ElasticsearchConnector) ReplayTableSchemaDeltas(ctx context.Context, env map[string]string, flowJobName string, schemaDeltas []*protos.TableSchemaDelta, ) error { return nil diff --git a/flow/connectors/eventhub/eventhub.go b/flow/connectors/eventhub/eventhub.go index 01982bf71..0f175233e 100644 --- a/flow/connectors/eventhub/eventhub.go +++ b/flow/connectors/eventhub/eventhub.go @@ -380,7 +380,9 @@ func (c *EventHubConnector) 
CreateRawTable(ctx context.Context, req *protos.Crea }, nil } -func (c *EventHubConnector) ReplayTableSchemaDeltas(_ context.Context, flowJobName string, schemaDeltas []*protos.TableSchemaDelta) error { +func (c *EventHubConnector) ReplayTableSchemaDeltas(_ context.Context, _ map[string]string, + flowJobName string, schemaDeltas []*protos.TableSchemaDelta, +) error { c.logger.Info("ReplayTableSchemaDeltas for event hub is a no-op") return nil } diff --git a/flow/connectors/kafka/kafka.go b/flow/connectors/kafka/kafka.go index ea0805b84..ee78093fe 100644 --- a/flow/connectors/kafka/kafka.go +++ b/flow/connectors/kafka/kafka.go @@ -149,7 +149,9 @@ func (c *KafkaConnector) SetupMetadataTables(_ context.Context) error { return nil } -func (c *KafkaConnector) ReplayTableSchemaDeltas(_ context.Context, flowJobName string, schemaDeltas []*protos.TableSchemaDelta) error { +func (c *KafkaConnector) ReplayTableSchemaDeltas(_ context.Context, _ map[string]string, + flowJobName string, schemaDeltas []*protos.TableSchemaDelta, +) error { return nil } diff --git a/flow/connectors/postgres/postgres.go b/flow/connectors/postgres/postgres.go index 14b827cc8..8f49545ff 100644 --- a/flow/connectors/postgres/postgres.go +++ b/flow/connectors/postgres/postgres.go @@ -592,7 +592,7 @@ func syncRecordsCore[Items model.Items]( return nil, err } - err = c.ReplayTableSchemaDeltas(ctx, req.FlowJobName, req.Records.SchemaDeltas) + err = c.ReplayTableSchemaDeltas(ctx, req.Env, req.FlowJobName, req.Records.SchemaDeltas) if err != nil { return nil, fmt.Errorf("failed to sync schema changes: %w", err) } @@ -941,6 +941,7 @@ func (c *PostgresConnector) SetupNormalizedTable( // This could involve adding or dropping multiple columns. 
func (c *PostgresConnector) ReplayTableSchemaDeltas( ctx context.Context, + _ map[string]string, flowJobName string, schemaDeltas []*protos.TableSchemaDelta, ) error { diff --git a/flow/connectors/postgres/postgres_schema_delta_test.go b/flow/connectors/postgres/postgres_schema_delta_test.go index 946b20eb3..0b6668a5a 100644 --- a/flow/connectors/postgres/postgres_schema_delta_test.go +++ b/flow/connectors/postgres/postgres_schema_delta_test.go @@ -58,7 +58,7 @@ func (s PostgresSchemaDeltaTestSuite) TestSimpleAddColumn() { fmt.Sprintf("CREATE TABLE %s(id INT PRIMARY KEY)", tableName)) require.NoError(s.t, err) - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: []*protos.FieldDescription{ @@ -113,7 +113,7 @@ func (s PostgresSchemaDeltaTestSuite) TestAddAllColumnTypes() { } } - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: addedColumns, @@ -144,7 +144,7 @@ func (s PostgresSchemaDeltaTestSuite) TestAddTrickyColumnNames() { } } - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: addedColumns, @@ -175,7 +175,7 @@ func (s PostgresSchemaDeltaTestSuite) TestAddDropWhitespaceColumnNames() { } } - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = 
s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: addedColumns, diff --git a/flow/connectors/pubsub/pubsub.go b/flow/connectors/pubsub/pubsub.go index 49aed379c..537cda724 100644 --- a/flow/connectors/pubsub/pubsub.go +++ b/flow/connectors/pubsub/pubsub.go @@ -67,7 +67,9 @@ func (c *PubSubConnector) CreateRawTable(ctx context.Context, req *protos.Create return &protos.CreateRawTableOutput{TableIdentifier: "n/a"}, nil } -func (c *PubSubConnector) ReplayTableSchemaDeltas(_ context.Context, flowJobName string, schemaDeltas []*protos.TableSchemaDelta) error { +func (c *PubSubConnector) ReplayTableSchemaDeltas(_ context.Context, _ map[string]string, + flowJobName string, schemaDeltas []*protos.TableSchemaDelta, +) error { return nil } diff --git a/flow/connectors/s3/qrep.go b/flow/connectors/s3/qrep.go index 9fbb485ab..968c956aa 100644 --- a/flow/connectors/s3/qrep.go +++ b/flow/connectors/s3/qrep.go @@ -20,7 +20,7 @@ func (c *S3Connector) SyncQRepRecords( schema := stream.Schema() dstTableName := config.DestinationTableIdentifier - avroSchema, err := getAvroSchema(dstTableName, schema) + avroSchema, err := getAvroSchema(ctx, config.Env, dstTableName, schema) if err != nil { return 0, err } @@ -34,10 +34,12 @@ func (c *S3Connector) SyncQRepRecords( } func getAvroSchema( + ctx context.Context, + env map[string]string, dstTableName string, schema qvalue.QRecordSchema, ) (*model.QRecordAvroSchemaDefinition, error) { - avroSchema, err := model.GetAvroSchemaDefinition(dstTableName, schema, protos.DBType_S3) + avroSchema, err := model.GetAvroSchemaDefinition(ctx, env, dstTableName, schema, protos.DBType_S3) if err != nil { return nil, fmt.Errorf("failed to define Avro schema: %w", err) } diff --git a/flow/connectors/s3/s3.go b/flow/connectors/s3/s3.go index eac37cd7c..7d16a20af 100644 --- a/flow/connectors/s3/s3.go +++ b/flow/connectors/s3/s3.go @@ 
-118,7 +118,9 @@ func (c *S3Connector) SyncRecords(ctx context.Context, req *model.SyncRecordsReq }, nil } -func (c *S3Connector) ReplayTableSchemaDeltas(_ context.Context, flowJobName string, schemaDeltas []*protos.TableSchemaDelta) error { +func (c *S3Connector) ReplayTableSchemaDeltas(_ context.Context, _ map[string]string, + flowJobName string, schemaDeltas []*protos.TableSchemaDelta, +) error { c.logger.Info("ReplayTableSchemaDeltas for S3 is a no-op") return nil } diff --git a/flow/connectors/snowflake/avro_file_writer_test.go b/flow/connectors/snowflake/avro_file_writer_test.go index ac6f25351..4a76fccd0 100644 --- a/flow/connectors/snowflake/avro_file_writer_test.go +++ b/flow/connectors/snowflake/avro_file_writer_test.go @@ -144,14 +144,14 @@ func TestWriteRecordsToAvroFileHappyPath(t *testing.T) { // Define sample data records, schema := generateRecords(t, true, 10, false) - avroSchema, err := model.GetAvroSchemaDefinition("not_applicable", schema, protos.DBType_SNOWFLAKE) + avroSchema, err := model.GetAvroSchemaDefinition(context.Background(), nil, "not_applicable", schema, protos.DBType_SNOWFLAKE) require.NoError(t, err) t.Logf("[test] avroSchema: %v", avroSchema) // Call function writer := avro.NewPeerDBOCFWriter(records, avroSchema, avro.CompressNone, protos.DBType_SNOWFLAKE) - _, err = writer.WriteRecordsToAvroFile(context.Background(), tmpfile.Name()) + _, err = writer.WriteRecordsToAvroFile(context.Background(), nil, tmpfile.Name()) require.NoError(t, err, "expected WriteRecordsToAvroFile to complete without errors") // Check file is not empty @@ -171,14 +171,14 @@ func TestWriteRecordsToZstdAvroFileHappyPath(t *testing.T) { // Define sample data records, schema := generateRecords(t, true, 10, false) - avroSchema, err := model.GetAvroSchemaDefinition("not_applicable", schema, protos.DBType_SNOWFLAKE) + avroSchema, err := model.GetAvroSchemaDefinition(context.Background(), nil, "not_applicable", schema, protos.DBType_SNOWFLAKE) require.NoError(t, 
err) t.Logf("[test] avroSchema: %v", avroSchema) // Call function writer := avro.NewPeerDBOCFWriter(records, avroSchema, avro.CompressZstd, protos.DBType_SNOWFLAKE) - _, err = writer.WriteRecordsToAvroFile(context.Background(), tmpfile.Name()) + _, err = writer.WriteRecordsToAvroFile(context.Background(), nil, tmpfile.Name()) require.NoError(t, err, "expected WriteRecordsToAvroFile to complete without errors") // Check file is not empty @@ -198,14 +198,14 @@ func TestWriteRecordsToDeflateAvroFileHappyPath(t *testing.T) { // Define sample data records, schema := generateRecords(t, true, 10, false) - avroSchema, err := model.GetAvroSchemaDefinition("not_applicable", schema, protos.DBType_SNOWFLAKE) + avroSchema, err := model.GetAvroSchemaDefinition(context.Background(), nil, "not_applicable", schema, protos.DBType_SNOWFLAKE) require.NoError(t, err) t.Logf("[test] avroSchema: %v", avroSchema) // Call function writer := avro.NewPeerDBOCFWriter(records, avroSchema, avro.CompressDeflate, protos.DBType_SNOWFLAKE) - _, err = writer.WriteRecordsToAvroFile(context.Background(), tmpfile.Name()) + _, err = writer.WriteRecordsToAvroFile(context.Background(), nil, tmpfile.Name()) require.NoError(t, err, "expected WriteRecordsToAvroFile to complete without errors") // Check file is not empty @@ -224,14 +224,14 @@ func TestWriteRecordsToAvroFileNonNull(t *testing.T) { records, schema := generateRecords(t, false, 10, false) - avroSchema, err := model.GetAvroSchemaDefinition("not_applicable", schema, protos.DBType_SNOWFLAKE) + avroSchema, err := model.GetAvroSchemaDefinition(context.Background(), nil, "not_applicable", schema, protos.DBType_SNOWFLAKE) require.NoError(t, err) t.Logf("[test] avroSchema: %v", avroSchema) // Call function writer := avro.NewPeerDBOCFWriter(records, avroSchema, avro.CompressNone, protos.DBType_SNOWFLAKE) - _, err = writer.WriteRecordsToAvroFile(context.Background(), tmpfile.Name()) + _, err = writer.WriteRecordsToAvroFile(context.Background(), nil, 
tmpfile.Name()) require.NoError(t, err, "expected WriteRecordsToAvroFile to complete without errors") // Check file is not empty @@ -251,14 +251,14 @@ func TestWriteRecordsToAvroFileAllNulls(t *testing.T) { // Define sample data records, schema := generateRecords(t, true, 10, true) - avroSchema, err := model.GetAvroSchemaDefinition("not_applicable", schema, protos.DBType_SNOWFLAKE) + avroSchema, err := model.GetAvroSchemaDefinition(context.Background(), nil, "not_applicable", schema, protos.DBType_SNOWFLAKE) require.NoError(t, err) t.Logf("[test] avroSchema: %v", avroSchema) // Call function writer := avro.NewPeerDBOCFWriter(records, avroSchema, avro.CompressNone, protos.DBType_SNOWFLAKE) - _, err = writer.WriteRecordsToAvroFile(context.Background(), tmpfile.Name()) + _, err = writer.WriteRecordsToAvroFile(context.Background(), nil, tmpfile.Name()) require.NoError(t, err, "expected WriteRecordsToAvroFile to complete without errors") // Check file is not empty diff --git a/flow/connectors/snowflake/merge_stmt_generator.go b/flow/connectors/snowflake/merge_stmt_generator.go index 37b4ed7bd..d87d3004f 100644 --- a/flow/connectors/snowflake/merge_stmt_generator.go +++ b/flow/connectors/snowflake/merge_stmt_generator.go @@ -1,6 +1,7 @@ package connsnowflake import ( + "context" "fmt" "strings" @@ -24,7 +25,7 @@ type mergeStmtGenerator struct { mergeBatchId int64 } -func (m *mergeStmtGenerator) generateMergeStmt(dstTable string) (string, error) { +func (m *mergeStmtGenerator) generateMergeStmt(ctx context.Context, env map[string]string, dstTable string) (string, error) { parsedDstTable, _ := utils.ParseSchemaTable(dstTable) normalizedTableSchema := m.tableSchemaMapping[dstTable] unchangedToastColumns := m.unchangedToastColumnsMap[dstTable] @@ -34,7 +35,7 @@ func (m *mergeStmtGenerator) generateMergeStmt(dstTable string) (string, error) for _, column := range columns { genericColumnType := column.Type qvKind := qvalue.QValueKind(genericColumnType) - sfType, err := 
qvKind.ToDWHColumnType(protos.DBType_SNOWFLAKE) + sfType, err := qvKind.ToDWHColumnType(ctx, env, protos.DBType_SNOWFLAKE, column) if err != nil { return "", fmt.Errorf("failed to convert column type %s to snowflake type: %w", genericColumnType, err) } diff --git a/flow/connectors/snowflake/qrep_avro_sync.go b/flow/connectors/snowflake/qrep_avro_sync.go index 0fea54b02..728d393e6 100644 --- a/flow/connectors/snowflake/qrep_avro_sync.go +++ b/flow/connectors/snowflake/qrep_avro_sync.go @@ -48,7 +48,7 @@ func (s *SnowflakeAvroSyncHandler) SyncRecords( s.logger.Info("sync function called and schema acquired", tableLog) - avroSchema, err := s.getAvroSchema(dstTableName, schema) + avroSchema, err := s.getAvroSchema(ctx, env, dstTableName, schema) if err != nil { return 0, err } @@ -98,12 +98,12 @@ func (s *SnowflakeAvroSyncHandler) SyncQRepRecords( schema := stream.Schema() s.logger.Info("sync function called and schema acquired", partitionLog) - err := s.addMissingColumns(ctx, schema, dstTableSchema, dstTableName, partition) + err := s.addMissingColumns(ctx, config.Env, schema, dstTableSchema, dstTableName, partition) if err != nil { return 0, err } - avroSchema, err := s.getAvroSchema(dstTableName, schema) + avroSchema, err := s.getAvroSchema(ctx, config.Env, dstTableName, schema) if err != nil { return 0, err } @@ -130,6 +130,7 @@ func (s *SnowflakeAvroSyncHandler) SyncQRepRecords( func (s *SnowflakeAvroSyncHandler) addMissingColumns( ctx context.Context, + env map[string]string, schema qvalue.QRecordSchema, dstTableSchema []*sql.ColumnType, dstTableName string, @@ -138,7 +139,7 @@ func (s *SnowflakeAvroSyncHandler) addMissingColumns( partitionLog := slog.String(string(shared.PartitionIDKey), partition.PartitionId) // check if avro schema has additional columns compared to destination table // if so, we need to add those columns to the destination table - colsToTypes := map[string]qvalue.QValueKind{} + var newColumns []qvalue.QField for _, col := range schema.Fields 
{ hasColumn := false // check ignoring case @@ -152,24 +153,23 @@ func (s *SnowflakeAvroSyncHandler) addMissingColumns( if !hasColumn { s.logger.Info(fmt.Sprintf("adding column %s to destination table %s", col.Name, dstTableName), partitionLog) - colsToTypes[col.Name] = col.Type + newColumns = append(newColumns, col) } } - if len(colsToTypes) > 0 { + if len(newColumns) > 0 { tx, err := s.database.Begin() if err != nil { return fmt.Errorf("failed to begin transaction: %w", err) } - for colName, colType := range colsToTypes { - sfColType, err := colType.ToDWHColumnType(protos.DBType_SNOWFLAKE) + for _, column := range newColumns { + sfColType, err := column.ToDWHColumnType(ctx, env, protos.DBType_SNOWFLAKE) if err != nil { return fmt.Errorf("failed to convert QValueKind to Snowflake column type: %w", err) } - upperCasedColName := strings.ToUpper(colName) - alterTableCmd := fmt.Sprintf("ALTER TABLE %s ", dstTableName) - alterTableCmd += fmt.Sprintf("ADD COLUMN IF NOT EXISTS \"%s\" %s;", upperCasedColName, sfColType) + upperCasedColName := strings.ToUpper(column.Name) + alterTableCmd := fmt.Sprintf("ALTER TABLE %s ADD COLUMN IF NOT EXISTS \"%s\" %s;", dstTableName, upperCasedColName, sfColType) s.logger.Info(fmt.Sprintf("altering destination table %s with command `%s`", dstTableName, alterTableCmd), partitionLog) @@ -193,10 +193,12 @@ func (s *SnowflakeAvroSyncHandler) addMissingColumns( } func (s *SnowflakeAvroSyncHandler) getAvroSchema( + ctx context.Context, + env map[string]string, dstTableName string, schema qvalue.QRecordSchema, ) (*model.QRecordAvroSchemaDefinition, error) { - avroSchema, err := model.GetAvroSchemaDefinition(dstTableName, schema, protos.DBType_SNOWFLAKE) + avroSchema, err := model.GetAvroSchemaDefinition(ctx, env, dstTableName, schema, protos.DBType_SNOWFLAKE) if err != nil { return nil, fmt.Errorf("failed to define Avro schema: %w", err) } @@ -223,7 +225,7 @@ func (s *SnowflakeAvroSyncHandler) writeToAvroFile( localFilePath := 
fmt.Sprintf("%s/%s.avro.zst", tmpDir, partitionID) s.logger.Info("writing records to local file " + localFilePath) - avroFile, err := ocfWriter.WriteRecordsToAvroFile(ctx, localFilePath) + avroFile, err := ocfWriter.WriteRecordsToAvroFile(ctx, env, localFilePath) if err != nil { return nil, fmt.Errorf("failed to write records to Avro file: %w", err) } diff --git a/flow/connectors/snowflake/snowflake.go b/flow/connectors/snowflake/snowflake.go index 06e3fb881..518b01ff2 100644 --- a/flow/connectors/snowflake/snowflake.go +++ b/flow/connectors/snowflake/snowflake.go @@ -19,7 +19,6 @@ import ( metadataStore "github.com/PeerDB-io/peer-flow/connectors/external_metadata" "github.com/PeerDB-io/peer-flow/connectors/utils" - numeric "github.com/PeerDB-io/peer-flow/datatypes" "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model" "github.com/PeerDB-io/peer-flow/model/qvalue" @@ -338,7 +337,7 @@ func (c *SnowflakeConnector) SetupNormalizedTable( return true, nil } - normalizedTableCreateSQL := generateCreateTableSQLForNormalizedTable(config, normalizedSchemaTable, tableSchema) + normalizedTableCreateSQL := generateCreateTableSQLForNormalizedTable(ctx, config, normalizedSchemaTable, tableSchema) if _, err := c.execWithLogging(ctx, normalizedTableCreateSQL); err != nil { return false, fmt.Errorf("[sf] error while creating normalized table: %w", err) } @@ -349,6 +348,7 @@ func (c *SnowflakeConnector) SetupNormalizedTable( // This could involve adding or dropping multiple columns. 
func (c *SnowflakeConnector) ReplayTableSchemaDeltas( ctx context.Context, + env map[string]string, flowJobName string, schemaDeltas []*protos.TableSchemaDelta, ) error { @@ -374,17 +374,12 @@ func (c *SnowflakeConnector) ReplayTableSchemaDeltas( } for _, addedColumn := range schemaDelta.AddedColumns { - sfColtype, err := qvalue.QValueKind(addedColumn.Type).ToDWHColumnType(protos.DBType_SNOWFLAKE) + sfColtype, err := qvalue.QValueKind(addedColumn.Type).ToDWHColumnType(ctx, env, protos.DBType_SNOWFLAKE, addedColumn) if err != nil { return fmt.Errorf("failed to convert column type %s to snowflake type: %w", addedColumn.Type, err) } - if addedColumn.Type == string(qvalue.QValueKindNumeric) { - precision, scale := numeric.GetNumericTypeForWarehouse(addedColumn.TypeModifier, numeric.SnowflakeNumericCompatibility{}) - sfColtype = fmt.Sprintf("NUMERIC(%d,%d)", precision, scale) - } - _, err = tableSchemaModifyTx.ExecContext(ctx, fmt.Sprintf("ALTER TABLE %s ADD COLUMN IF NOT EXISTS \"%s\" %s", schemaDelta.DstTableName, strings.ToUpper(addedColumn.Name), sfColtype)) @@ -460,7 +455,7 @@ func (c *SnowflakeConnector) syncRecordsViaAvro( return nil, err } - err = c.ReplayTableSchemaDeltas(ctx, req.FlowJobName, req.Records.SchemaDeltas) + err = c.ReplayTableSchemaDeltas(ctx, req.Env, req.FlowJobName, req.Records.SchemaDeltas) if err != nil { return nil, fmt.Errorf("failed to sync schema changes: %w", err) } @@ -557,7 +552,7 @@ func (c *SnowflakeConnector) mergeTablesForBatch( } g.Go(func() error { - mergeStatement, err := mergeGen.generateMergeStmt(tableName) + mergeStatement, err := mergeGen.generateMergeStmt(gCtx, env, tableName) if err != nil { return err } @@ -666,6 +661,7 @@ func (c *SnowflakeConnector) checkIfTableExists( } func generateCreateTableSQLForNormalizedTable( + ctx context.Context, config *protos.SetupNormalizedTableBatchInput, dstSchemaTable *utils.SchemaTable, tableSchema *protos.TableSchema, @@ -674,18 +670,13 @@ func generateCreateTableSQLForNormalizedTable( 
for _, column := range tableSchema.Columns { genericColumnType := column.Type normalizedColName := SnowflakeIdentifierNormalize(column.Name) - sfColType, err := qvalue.QValueKind(genericColumnType).ToDWHColumnType(protos.DBType_SNOWFLAKE) + sfColType, err := qvalue.QValueKind(genericColumnType).ToDWHColumnType(ctx, config.Env, protos.DBType_SNOWFLAKE, column) if err != nil { slog.Warn(fmt.Sprintf("failed to convert column type %s to snowflake type", genericColumnType), slog.Any("error", err)) continue } - if genericColumnType == "numeric" { - precision, scale := numeric.GetNumericTypeForWarehouse(column.TypeModifier, numeric.SnowflakeNumericCompatibility{}) - sfColType = fmt.Sprintf("NUMERIC(%d,%d)", precision, scale) - } - var notNull string if tableSchema.NullableEnabled && !column.Nullable { notNull = " NOT NULL" diff --git a/flow/connectors/utils/avro/avro_writer.go b/flow/connectors/utils/avro/avro_writer.go index ee72e2c28..75bc9f435 100644 --- a/flow/connectors/utils/avro/avro_writer.go +++ b/flow/connectors/utils/avro/avro_writer.go @@ -127,16 +127,21 @@ func (p *peerDBOCFWriter) createOCFWriter(w io.Writer) (*goavro.OCFWriter, error return ocfWriter, nil } -func (p *peerDBOCFWriter) writeRecordsToOCFWriter(ctx context.Context, ocfWriter *goavro.OCFWriter) (int64, error) { +func (p *peerDBOCFWriter) writeRecordsToOCFWriter(ctx context.Context, env map[string]string, ocfWriter *goavro.OCFWriter) (int64, error) { logger := shared.LoggerFromCtx(ctx) schema := p.stream.Schema() - avroConverter := model.NewQRecordAvroConverter( + avroConverter, err := model.NewQRecordAvroConverter( + ctx, + env, p.avroSchema, p.targetDWH, schema.GetColumnNames(), logger, ) + if err != nil { + return 0, err + } numRows := atomic.Int64{} @@ -147,7 +152,7 @@ func (p *peerDBOCFWriter) writeRecordsToOCFWriter(ctx context.Context, ocfWriter for qrecord := range p.stream.Records { if err := ctx.Err(); err != nil { - return numRows.Load(), ctx.Err() + return numRows.Load(), err } else { 
avroMap, err := avroConverter.Convert(qrecord) if err != nil { @@ -172,7 +177,7 @@ func (p *peerDBOCFWriter) writeRecordsToOCFWriter(ctx context.Context, ocfWriter return numRows.Load(), nil } -func (p *peerDBOCFWriter) WriteOCF(ctx context.Context, w io.Writer) (int, error) { +func (p *peerDBOCFWriter) WriteOCF(ctx context.Context, env map[string]string, w io.Writer) (int, error) { ocfWriter, err := p.createOCFWriter(w) if err != nil { return 0, fmt.Errorf("failed to create OCF writer: %w", err) @@ -180,7 +185,7 @@ func (p *peerDBOCFWriter) WriteOCF(ctx context.Context, w io.Writer) (int, error // we have to keep a reference to the underlying writer as goavro doesn't provide any access to it defer p.writer.Close() - numRows, err := p.writeRecordsToOCFWriter(ctx, ocfWriter) + numRows, err := p.writeRecordsToOCFWriter(ctx, env, ocfWriter) if err != nil { return 0, fmt.Errorf("failed to write records to OCF writer: %w", err) } @@ -217,7 +222,7 @@ func (p *peerDBOCFWriter) WriteRecordsToS3( } w.Close() }() - numRows, writeOcfError = p.WriteOCF(ctx, w) + numRows, writeOcfError = p.WriteOCF(ctx, env, w) }() partSize, err := peerdbenv.PeerDBS3PartSize(ctx, env) @@ -254,7 +259,7 @@ func (p *peerDBOCFWriter) WriteRecordsToS3( }, nil } -func (p *peerDBOCFWriter) WriteRecordsToAvroFile(ctx context.Context, filePath string) (*AvroFile, error) { +func (p *peerDBOCFWriter) WriteRecordsToAvroFile(ctx context.Context, env map[string]string, filePath string) (*AvroFile, error) { file, err := os.Create(filePath) if err != nil { return nil, fmt.Errorf("failed to create temporary Avro file: %w", err) @@ -275,7 +280,7 @@ func (p *peerDBOCFWriter) WriteRecordsToAvroFile(ctx context.Context, filePath s bufferedWriter := bufio.NewWriterSize(file, buffSizeBytes) defer bufferedWriter.Flush() - numRecords, err := p.WriteOCF(ctx, bufferedWriter) + numRecords, err := p.WriteOCF(ctx, env, bufferedWriter) if err != nil { return nil, fmt.Errorf("failed to write records to temporary Avro file: 
%w", err) } diff --git a/flow/datatypes/numeric.go b/flow/datatypes/numeric.go index 56c1b1783..8b942e4f6 100644 --- a/flow/datatypes/numeric.go +++ b/flow/datatypes/numeric.go @@ -90,6 +90,10 @@ func MakeNumericTypmod(precision int32, scale int32) int32 { // This is to reverse what make_numeric_typmod of Postgres does: // https://github.com/postgres/postgres/blob/21912e3c0262e2cfe64856e028799d6927862563/src/backend/utils/adt/numeric.c#L897 func ParseNumericTypmod(typmod int32) (int16, int16) { + if typmod == -1 { + return 0, 0 + } + offsetMod := typmod - VARHDRSZ precision := int16((offsetMod >> 16) & 0x7FFF) scale := int16(offsetMod & 0x7FFF) @@ -102,6 +106,14 @@ func GetNumericTypeForWarehouse(typmod int32, warehouseNumeric WarehouseNumericC } precision, scale := ParseNumericTypmod(typmod) + return GetNumericTypeForWarehousePrecisionScale(precision, scale, warehouseNumeric) +} + +func GetNumericTypeForWarehousePrecisionScale(precision int16, scale int16, warehouseNumeric WarehouseNumericCompatibility) (int16, int16) { + if precision == 0 && scale == 0 { + return warehouseNumeric.DefaultPrecisionAndScale() + } + if !IsValidPrecision(precision, warehouseNumeric) { precision = warehouseNumeric.MaxPrecision() } diff --git a/flow/e2e/clickhouse/peer_flow_ch_test.go b/flow/e2e/clickhouse/peer_flow_ch_test.go index 9c4fa2a16..a19e69c8c 100644 --- a/flow/e2e/clickhouse/peer_flow_ch_test.go +++ b/flow/e2e/clickhouse/peer_flow_ch_test.go @@ -4,6 +4,7 @@ import ( "context" "embed" "fmt" + "strconv" "strings" "testing" "time" @@ -11,7 +12,7 @@ import ( "github.com/shopspring/decimal" "github.com/stretchr/testify/require" - "github.com/PeerDB-io/peer-flow/connectors/clickhouse" + connclickhouse "github.com/PeerDB-io/peer-flow/connectors/clickhouse" "github.com/PeerDB-io/peer-flow/e2e" "github.com/PeerDB-io/peer-flow/e2eshared" "github.com/PeerDB-io/peer-flow/generated/protos" @@ -557,8 +558,8 @@ func (s ClickHouseSuite) Test_Large_Numeric() { `, srcFullName)) 
require.NoError(s.t, err) - _, err = s.Conn().Exec(context.Background(), fmt.Sprintf(` - INSERT INTO %s(c1,c2) VALUES(%s,%s);`, srcFullName, strings.Repeat("7", 76), strings.Repeat("9", 78))) + _, err = s.Conn().Exec(context.Background(), fmt.Sprintf("INSERT INTO %s(c1,c2) VALUES($1,$2)", srcFullName), + strings.Repeat("7", 76), strings.Repeat("9", 78)) require.NoError(s.t, err) connectionGen := e2e.FlowConnectionGenerationConfig{ @@ -568,14 +569,15 @@ func (s ClickHouseSuite) Test_Large_Numeric() { } flowConnConfig := connectionGen.GenerateFlowConnectionConfigs(s.t) flowConnConfig.DoInitialSnapshot = true + tc := e2e.NewTemporalClient(s.t) env := e2e.ExecutePeerflow(tc, peerflow.CDCFlowWorkflow, flowConnConfig, nil) e2e.SetupCDCFlowStatusQuery(s.t, env, flowConnConfig) e2e.EnvWaitForCount(env, s, "waiting for CDC count", dstTableName, "id,c1,c2", 1) - _, err = s.Conn().Exec(context.Background(), fmt.Sprintf(` - INSERT INTO %s(c1,c2) VALUES(%s,%s);`, srcFullName, strings.Repeat("7", 76), strings.Repeat("9", 78))) + _, err = s.Conn().Exec(context.Background(), fmt.Sprintf("INSERT INTO %s(c1,c2) VALUES($1,$2)", srcFullName), + strings.Repeat("7", 76), strings.Repeat("9", 78)) require.NoError(s.t, err) e2e.EnvWaitForCount(env, s, "waiting for CDC count", dstTableName, "id,c1,c2", 2) @@ -598,3 +600,67 @@ func (s ClickHouseSuite) Test_Large_Numeric() { env.Cancel() e2e.RequireEnvCanceled(s.t, env) } + +// Unbounded NUMERICs (no precision, scale specified) are mapped to String on CH if FF enabled, Decimal if not +func (s ClickHouseSuite) testNumericFF(ffValue bool) { + nines := strings.Repeat("9", 38) + dstTableName := fmt.Sprintf("unumeric_ff_%v", ffValue) + srcFullName := s.attachSchemaSuffix(dstTableName) + + _, err := s.Conn().Exec(context.Background(), fmt.Sprintf(` + CREATE TABLE IF NOT EXISTS %s( + id INT PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY, + c numeric + ); + `, srcFullName)) + require.NoError(s.t, err) + + _, err = s.Conn().Exec(context.Background(), 
fmt.Sprintf("INSERT INTO %s(c) VALUES($1)", srcFullName), nines) + require.NoError(s.t, err) + + connectionGen := e2e.FlowConnectionGenerationConfig{ + FlowJobName: s.attachSuffix(fmt.Sprintf("clickhouse_test_unbounded_numerics_ff_%v", ffValue)), + TableNameMapping: map[string]string{srcFullName: dstTableName}, + Destination: s.Peer().Name, + } + flowConnConfig := connectionGen.GenerateFlowConnectionConfigs(s.t) + flowConnConfig.DoInitialSnapshot = true + flowConnConfig.Env = map[string]string{"PEERDB_CLICKHOUSE_UNBOUNDED_NUMERIC_AS_STRING": strconv.FormatBool(ffValue)} + tc := e2e.NewTemporalClient(s.t) + env := e2e.ExecutePeerflow(tc, peerflow.CDCFlowWorkflow, flowConnConfig, nil) + e2e.SetupCDCFlowStatusQuery(s.t, env, flowConnConfig) + + e2e.EnvWaitForCount(env, s, "waiting for CDC count", dstTableName, "id,c", 1) + + _, err = s.Conn().Exec(context.Background(), fmt.Sprintf("INSERT INTO %s(c) VALUES($1)", srcFullName), nines) + require.NoError(s.t, err) + + e2e.EnvWaitForCount(env, s, "waiting for CDC count", dstTableName, "id,c", 2) + + rows, err := s.GetRows(dstTableName, "c") + require.NoError(s.t, err) + require.Len(s.t, rows.Records, 2, "expected 2 rows") + for _, row := range rows.Records { + require.Len(s.t, row, 1, "expected 1 column") + if ffValue { + c, ok := row[0].Value().(string) + require.True(s.t, ok, "expected unbounded NUMERIC to be String") + require.Equal(s.t, nines, c, "expected unbounded NUMERIC to be 9s") + } else { + c, ok := row[0].Value().(decimal.Decimal) + require.True(s.t, ok, "expected unbounded NUMERIC to be Decimal") + require.Equal(s.t, nines, c.String(), "expected unbounded NUMERIC to be 9s") + } + } + + env.Cancel() + e2e.RequireEnvCanceled(s.t, env) +} + +func (s ClickHouseSuite) Test_Unbounded_Numeric_With_FF() { + s.testNumericFF(true) +} + +func (s ClickHouseSuite) Test_Unbounded_Numeric_Without_FF() { + s.testNumericFF(false) +} diff --git a/flow/e2e/snowflake/snowflake_schema_delta_test.go 
b/flow/e2e/snowflake/snowflake_schema_delta_test.go index 32cb03b64..ada2b10f6 100644 --- a/flow/e2e/snowflake/snowflake_schema_delta_test.go +++ b/flow/e2e/snowflake/snowflake_schema_delta_test.go @@ -53,7 +53,7 @@ func (s SnowflakeSchemaDeltaTestSuite) TestSimpleAddColumn() { err := s.sfTestHelper.RunCommand(fmt.Sprintf("CREATE TABLE %s(ID TEXT PRIMARY KEY)", tableName)) require.NoError(s.t, err) - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: []*protos.FieldDescription{ @@ -167,7 +167,7 @@ func (s SnowflakeSchemaDeltaTestSuite) TestAddAllColumnTypes() { } } - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: addedColumns, @@ -246,7 +246,7 @@ func (s SnowflakeSchemaDeltaTestSuite) TestAddTrickyColumnNames() { } } - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: addedColumns, @@ -301,7 +301,7 @@ func (s SnowflakeSchemaDeltaTestSuite) TestAddWhitespaceColumnNames() { } } - err = s.connector.ReplayTableSchemaDeltas(context.Background(), "schema_delta_flow", []*protos.TableSchemaDelta{{ + err = s.connector.ReplayTableSchemaDeltas(context.Background(), nil, "schema_delta_flow", []*protos.TableSchemaDelta{{ SrcTableName: tableName, DstTableName: tableName, AddedColumns: addedColumns, diff --git a/flow/model/conversion_avro.go 
b/flow/model/conversion_avro.go index 8f52c4461..ec7cfc6e3 100644 --- a/flow/model/conversion_avro.go +++ b/flow/model/conversion_avro.go @@ -1,6 +1,7 @@ package model import ( + "context" "encoding/json" "fmt" @@ -8,38 +9,52 @@ import ( "github.com/PeerDB-io/peer-flow/generated/protos" "github.com/PeerDB-io/peer-flow/model/qvalue" + "github.com/PeerDB-io/peer-flow/peerdbenv" ) type QRecordAvroConverter struct { - logger log.Logger - Schema *QRecordAvroSchemaDefinition - ColNames []string - TargetDWH protos.DBType + logger log.Logger + Schema *QRecordAvroSchemaDefinition + ColNames []string + TargetDWH protos.DBType + UnboundedNumericAsString bool } func NewQRecordAvroConverter( + ctx context.Context, + env map[string]string, schema *QRecordAvroSchemaDefinition, targetDWH protos.DBType, colNames []string, logger log.Logger, -) *QRecordAvroConverter { - return &QRecordAvroConverter{ - Schema: schema, - TargetDWH: targetDWH, - ColNames: colNames, - logger: logger, +) (*QRecordAvroConverter, error) { + var unboundedNumericAsString bool + if targetDWH == protos.DBType_CLICKHOUSE { + var err error + unboundedNumericAsString, err = peerdbenv.PeerDBEnableClickHouseNumericAsString(ctx, env) + if err != nil { + return nil, err + } } -} -func (qac *QRecordAvroConverter) Convert(qrecord []qvalue.QValue) (map[string]interface{}, error) { - m := make(map[string]interface{}, len(qrecord)) + return &QRecordAvroConverter{ + Schema: schema, + TargetDWH: targetDWH, + ColNames: colNames, + logger: logger, + UnboundedNumericAsString: unboundedNumericAsString, + }, nil +} +func (qac *QRecordAvroConverter) Convert(qrecord []qvalue.QValue) (map[string]any, error) { + m := make(map[string]any, len(qrecord)) for idx, val := range qrecord { avroVal, err := qvalue.QValueToAvro( val, &qac.Schema.Fields[idx], qac.TargetDWH, qac.logger, + qac.UnboundedNumericAsString, ) if err != nil { return nil, fmt.Errorf("failed to convert QValue to Avro-compatible value: %w", err) @@ -52,8 +67,8 @@ func 
(qac *QRecordAvroConverter) Convert(qrecord []qvalue.QValue) (map[string]in } type QRecordAvroField struct { - Type interface{} `json:"type"` - Name string `json:"name"` + Type any `json:"type"` + Name string `json:"name"` } type QRecordAvroSchema struct { @@ -68,6 +83,8 @@ type QRecordAvroSchemaDefinition struct { } func GetAvroSchemaDefinition( + ctx context.Context, + env map[string]string, dstTableName string, qRecordSchema qvalue.QRecordSchema, targetDWH protos.DBType, @@ -75,7 +92,7 @@ func GetAvroSchemaDefinition( avroFields := make([]QRecordAvroField, 0, len(qRecordSchema.Fields)) for _, qField := range qRecordSchema.Fields { - avroType, err := qvalue.GetAvroSchemaFromQValueKind(qField.Type, targetDWH, qField.Precision, qField.Scale) + avroType, err := qvalue.GetAvroSchemaFromQValueKind(ctx, env, qField.Type, targetDWH, qField.Precision, qField.Scale) if err != nil { return nil, err } diff --git a/flow/model/qvalue/avro_converter.go b/flow/model/qvalue/avro_converter.go index 97d9641b6..db5bf4e2a 100644 --- a/flow/model/qvalue/avro_converter.go +++ b/flow/model/qvalue/avro_converter.go @@ -1,6 +1,7 @@ package qvalue import ( + "context" "encoding/base64" "errors" "fmt" @@ -14,6 +15,7 @@ import ( "github.com/PeerDB-io/peer-flow/datatypes" "github.com/PeerDB-io/peer-flow/generated/protos" + "github.com/PeerDB-io/peer-flow/peerdbenv" ) type AvroSchemaField struct { @@ -74,7 +76,14 @@ func TruncateOrLogNumeric(num decimal.Decimal, precision int16, scale int16, tar // // For example, QValueKindInt64 would return an AvroLogicalSchema of "long". Unsupported QValueKinds // will return an error. 
-func GetAvroSchemaFromQValueKind(kind QValueKind, targetDWH protos.DBType, precision int16, scale int16) (interface{}, error) { +func GetAvroSchemaFromQValueKind( + ctx context.Context, + env map[string]string, + kind QValueKind, + targetDWH protos.DBType, + precision int16, + scale int16, +) (interface{}, error) { switch kind { case QValueKindString: return "string", nil @@ -103,9 +112,19 @@ func GetAvroSchemaFromQValueKind(kind QValueKind, targetDWH protos.DBType, preci } return "bytes", nil case QValueKindNumeric: - if targetDWH == protos.DBType_CLICKHOUSE && - precision > datatypes.PeerDBClickHouseMaxPrecision { - return "string", nil + if targetDWH == protos.DBType_CLICKHOUSE { + if precision == 0 && scale == 0 { + asString, err := peerdbenv.PeerDBEnableClickHouseNumericAsString(ctx, env) + if err != nil { + return nil, err + } + if asString { + return "string", nil + } + } + if precision > datatypes.PeerDBClickHouseMaxPrecision { + return "string", nil + } } avroNumericPrecision, avroNumericScale := DetermineNumericSettingForDWH(precision, scale, targetDWH) return AvroSchemaNumeric{ @@ -212,19 +231,24 @@ func GetAvroSchemaFromQValueKind(kind QValueKind, targetDWH protos.DBType, preci type QValueAvroConverter struct { *QField - logger log.Logger - TargetDWH protos.DBType + logger log.Logger + TargetDWH protos.DBType + UnboundedNumericAsString bool } -func QValueToAvro(value QValue, field *QField, targetDWH protos.DBType, logger log.Logger) (interface{}, error) { +func QValueToAvro( + value QValue, field *QField, targetDWH protos.DBType, logger log.Logger, + unboundedNumericAsString bool, +) (any, error) { if value.Value() == nil { return nil, nil } - c := &QValueAvroConverter{ - QField: field, - TargetDWH: targetDWH, - logger: logger, + c := QValueAvroConverter{ + QField: field, + TargetDWH: targetDWH, + logger: logger, + UnboundedNumericAsString: unboundedNumericAsString, } switch v := value.(type) { @@ -456,18 +480,18 @@ func (c *QValueAvroConverter) 
processNullableUnion( return value, nil } -func (c *QValueAvroConverter) processNumeric(num decimal.Decimal) interface{} { +func (c *QValueAvroConverter) processNumeric(num decimal.Decimal) any { + if (c.UnboundedNumericAsString && c.Precision == 0 && c.Scale == 0) || + (c.TargetDWH == protos.DBType_CLICKHOUSE && c.Precision > datatypes.PeerDBClickHouseMaxPrecision) { + numStr, _ := c.processNullableUnion("string", num.String()) + return numStr + } + num, err := TruncateOrLogNumeric(num, c.Precision, c.Scale, c.TargetDWH) if err != nil { return nil } - if c.TargetDWH == protos.DBType_CLICKHOUSE && - c.Precision > datatypes.PeerDBClickHouseMaxPrecision { - // no error returned - numStr, _ := c.processNullableUnion("string", num.String()) - return numStr - } rat := num.Rat() if c.Nullable { return goavro.Union("bytes.decimal", rat) diff --git a/flow/model/qvalue/dwh.go b/flow/model/qvalue/dwh.go index 49c359b88..b2d085acb 100644 --- a/flow/model/qvalue/dwh.go +++ b/flow/model/qvalue/dwh.go @@ -5,24 +5,24 @@ import ( "go.temporal.io/sdk/log" - numeric "github.com/PeerDB-io/peer-flow/datatypes" + "github.com/PeerDB-io/peer-flow/datatypes" "github.com/PeerDB-io/peer-flow/generated/protos" ) func DetermineNumericSettingForDWH(precision int16, scale int16, dwh protos.DBType) (int16, int16) { - var warehouseNumeric numeric.WarehouseNumericCompatibility + var warehouseNumeric datatypes.WarehouseNumericCompatibility switch dwh { case protos.DBType_CLICKHOUSE: - warehouseNumeric = numeric.ClickHouseNumericCompatibility{} + warehouseNumeric = datatypes.ClickHouseNumericCompatibility{} case protos.DBType_SNOWFLAKE: - warehouseNumeric = numeric.SnowflakeNumericCompatibility{} + warehouseNumeric = datatypes.SnowflakeNumericCompatibility{} case protos.DBType_BIGQUERY: - warehouseNumeric = numeric.BigQueryNumericCompatibility{} + warehouseNumeric = datatypes.BigQueryNumericCompatibility{} default: - warehouseNumeric = numeric.DefaultNumericCompatibility{} + warehouseNumeric = 
datatypes.DefaultNumericCompatibility{} } - return numeric.GetNumericTypeForWarehouse(numeric.MakeNumericTypmod(int32(precision), int32(scale)), warehouseNumeric) + return datatypes.GetNumericTypeForWarehousePrecisionScale(precision, scale, warehouseNumeric) } // Bigquery will not allow timestamp if it is less than 1AD and more than 9999AD diff --git a/flow/model/qvalue/kind.go b/flow/model/qvalue/kind.go index 91ab867a0..3cffcc274 100644 --- a/flow/model/qvalue/kind.go +++ b/flow/model/qvalue/kind.go @@ -1,10 +1,13 @@ package qvalue import ( + "context" "fmt" "strings" + "github.com/PeerDB-io/peer-flow/datatypes" "github.com/PeerDB-io/peer-flow/generated/protos" + "github.com/PeerDB-io/peer-flow/peerdbenv" ) type QValueKind string @@ -68,7 +71,6 @@ var QValueKindToSnowflakeTypeMap = map[QValueKind]string{ QValueKindInt64: "INTEGER", QValueKindFloat32: "FLOAT", QValueKindFloat64: "FLOAT", - QValueKindNumeric: "NUMBER(38, 9)", QValueKindQChar: "CHAR", QValueKindString: "STRING", QValueKindJSON: "VARIANT", @@ -110,7 +112,6 @@ var QValueKindToClickHouseTypeMap = map[QValueKind]string{ QValueKindInt64: "Int64", QValueKindFloat32: "Float32", QValueKindFloat64: "Float64", - QValueKindNumeric: "Decimal128(9)", QValueKindQChar: "FixedString(1)", QValueKindString: "String", QValueKindJSON: "String", @@ -140,16 +141,39 @@ var QValueKindToClickHouseTypeMap = map[QValueKind]string{ QValueKindArrayJSONB: "String", } -func (kind QValueKind) ToDWHColumnType(dwhType protos.DBType) (string, error) { +func getClickHouseTypeForNumericColumn(ctx context.Context, env map[string]string, column *protos.FieldDescription) (string, error) { + if column.TypeModifier == -1 { + numericAsStringEnabled, err := peerdbenv.PeerDBEnableClickHouseNumericAsString(ctx, env) + if err != nil { + return "", err + } + if numericAsStringEnabled { + return "String", nil + } + } else if rawPrecision, _ := datatypes.ParseNumericTypmod(column.TypeModifier); rawPrecision > datatypes.PeerDBClickHouseMaxPrecision 
{ + return "String", nil + } + precision, scale := datatypes.GetNumericTypeForWarehouse(column.TypeModifier, datatypes.ClickHouseNumericCompatibility{}) + return fmt.Sprintf("Decimal(%d, %d)", precision, scale), nil +} + +// SEE ALSO: QField ToDWHColumnType +func (kind QValueKind) ToDWHColumnType(ctx context.Context, env map[string]string, dwhType protos.DBType, column *protos.FieldDescription, +) (string, error) { switch dwhType { case protos.DBType_SNOWFLAKE: - if val, ok := QValueKindToSnowflakeTypeMap[kind]; ok { + if kind == QValueKindNumeric { + precision, scale := datatypes.GetNumericTypeForWarehouse(column.TypeModifier, datatypes.SnowflakeNumericCompatibility{}) + return fmt.Sprintf("NUMERIC(%d,%d)", precision, scale), nil + } else if val, ok := QValueKindToSnowflakeTypeMap[kind]; ok { return val, nil } else { return "STRING", nil } case protos.DBType_CLICKHOUSE: - if val, ok := QValueKindToClickHouseTypeMap[kind]; ok { + if kind == QValueKindNumeric { + return getClickHouseTypeForNumericColumn(ctx, env, column) + } else if val, ok := QValueKindToClickHouseTypeMap[kind]; ok { return val, nil } else { return "String", nil diff --git a/flow/model/qvalue/qschema.go b/flow/model/qvalue/qschema.go index a956968ac..a6632fdf5 100644 --- a/flow/model/qvalue/qschema.go +++ b/flow/model/qvalue/qschema.go @@ -1,7 +1,13 @@ package qvalue import ( + "context" + "fmt" "strings" + + "github.com/PeerDB-io/peer-flow/datatypes" + "github.com/PeerDB-io/peer-flow/generated/protos" + "github.com/PeerDB-io/peer-flow/peerdbenv" ) type QField struct { @@ -47,3 +53,42 @@ func (q QRecordSchema) GetColumnNames() []string { } return names } + +func (q QField) getClickHouseTypeForNumericField(ctx context.Context, env map[string]string) (string, error) { + if q.Precision == 0 && q.Scale == 0 { + numericAsStringEnabled, err := peerdbenv.PeerDBEnableClickHouseNumericAsString(ctx, env) + if err != nil { + return "", err + } + if numericAsStringEnabled { + return "String", nil + } + } else 
if q.Precision > datatypes.PeerDBClickHouseMaxPrecision { + return "String", nil + } + return fmt.Sprintf("Decimal(%d, %d)", q.Precision, q.Scale), nil +} + +// SEE ALSO: qvalue/kind.go ToDWHColumnType +func (q QField) ToDWHColumnType(ctx context.Context, env map[string]string, dwhType protos.DBType) (string, error) { + switch dwhType { + case protos.DBType_SNOWFLAKE: + if val, ok := QValueKindToSnowflakeTypeMap[q.Type]; ok { + return val, nil + } else if q.Type == QValueKindNumeric { + return fmt.Sprintf("NUMERIC(%d,%d)", q.Precision, q.Scale), nil + } else { + return "STRING", nil + } + case protos.DBType_CLICKHOUSE: + if q.Type == QValueKindNumeric { + return q.getClickHouseTypeForNumericField(ctx, env) + } else if val, ok := QValueKindToClickHouseTypeMap[q.Type]; ok { + return val, nil + } else { + return "String", nil + } + default: + return "", fmt.Errorf("unknown dwh type: %v", dwhType) + } +} diff --git a/flow/peerdbenv/dynamicconf.go b/flow/peerdbenv/dynamicconf.go index b0cbe05f5..98a47d8fd 100644 --- a/flow/peerdbenv/dynamicconf.go +++ b/flow/peerdbenv/dynamicconf.go @@ -187,6 +187,14 @@ var DynamicSettings = [...]*protos.DynamicSetting{ ApplyMode: protos.DynconfApplyMode_APPLY_MODE_IMMEDIATE, TargetForSetting: protos.DynconfTarget_CLICKHOUSE, }, + { + Name: "PEERDB_CLICKHOUSE_UNBOUNDED_NUMERIC_AS_STRING", + Description: "Map unbounded numerics in Postgres to String in ClickHouse to preserve precision and scale", + DefaultValue: "false", + ValueType: protos.DynconfValueType_BOOL, + ApplyMode: protos.DynconfApplyMode_APPLY_MODE_NEW_MIRROR, + TargetForSetting: protos.DynconfTarget_CLICKHOUSE, + }, { Name: "PEERDB_INTERVAL_SINCE_LAST_NORMALIZE_THRESHOLD_MINUTES", Description: "Duration in minutes since last normalize to start alerting, 0 disables all alerting entirely", @@ -389,6 +397,10 @@ func PeerDBClickHouseParallelNormalize(ctx context.Context, env map[string]strin return dynamicConfSigned[int](ctx, env, "PEERDB_CLICKHOUSE_PARALLEL_NORMALIZE") } +func
PeerDBEnableClickHouseNumericAsString(ctx context.Context, env map[string]string) (bool, error) { + return dynamicConfBool(ctx, env, "PEERDB_CLICKHOUSE_UNBOUNDED_NUMERIC_AS_STRING") +} + func PeerDBSnowflakeMergeParallelism(ctx context.Context, env map[string]string) (int64, error) { return dynamicConfSigned[int64](ctx, env, "PEERDB_SNOWFLAKE_MERGE_PARALLELISM") } diff --git a/flow/workflows/snapshot_flow.go b/flow/workflows/snapshot_flow.go index 9b21b7b38..1db3b6d60 100644 --- a/flow/workflows/snapshot_flow.go +++ b/flow/workflows/snapshot_flow.go @@ -208,6 +208,7 @@ func (s *SnapshotFlowExecution) cloneTable( WriteMode: snapshotWriteMode, System: s.config.System, Script: s.config.Script, + Env: s.config.Env, ParentMirrorName: flowName, } From f7680569441b64a2f877e602c7697ffb9b755427 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Mon, 2 Dec 2024 23:09:10 +0000 Subject: [PATCH 56/61] fix(deps): update dependency @types/node to v22.10.1 (#2309) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Change | Age | Adoption | Passing | Confidence | |---|---|---|---|---|---| | [@types/node](https://redirect.github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/node) ([source](https://redirect.github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)) | [`22.10.0` -> `22.10.1`](https://renovatebot.com/diffs/npm/@types%2fnode/22.10.0/22.10.1) | [![age](https://developer.mend.io/api/mc/badges/age/npm/@types%2fnode/22.10.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/npm/@types%2fnode/22.10.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/npm/@types%2fnode/22.10.0/22.10.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | 
[![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@types%2fnode/22.10.0/22.10.1?slim=true)](https://docs.renovatebot.com/merge-confidence/) | --- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> Co-authored-by: Philip Dubé --- ui/package-lock.json | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/ui/package-lock.json b/ui/package-lock.json index c58187758..eea6bce38 100644 --- a/ui/package-lock.json +++ b/ui/package-lock.json @@ -2255,9 +2255,9 @@ "license": "MIT" }, "node_modules/@types/node": { - "version": "22.10.0", - "resolved": "https://registry.npmjs.org/@types/node/-/node-22.10.0.tgz", - "integrity": "sha512-XC70cRZVElFHfIUB40FgZOBbgJYFKKMa5nb9lxcwYstFG/Mi+/Y0bGS+rs6Dmhmkpq4pnNiLiuZAbc02YCOnmA==", + "version": "22.10.1", + "resolved": "https://registry.npmjs.org/@types/node/-/node-22.10.1.tgz", + "integrity": "sha512-qKgsUwfHZV2WCWLAnVP1JqnpE6Im6h3Y0+fYgMTasNQ7V++CBX5OT1as0g0f+OyubbFqhf6XVNIsmN4IIhEgGQ==", "license": "MIT", "dependencies": { "undici-types": "~6.20.0" From 1f07e7b9a09638107fdce9f20918e10c8c23fe66 Mon Sep 17 00:00:00 2001 From: "renovate[bot]" <29139614+renovate[bot]@users.noreply.github.com> Date: Mon, 2 Dec 2024 23:30:27 +0000 Subject: [PATCH 57/61] fix(deps): update cargo dependencies (#2311) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | 
Package | Type | Update | Change | |---|---|---|---| | [bytes](https://redirect.github.com/tokio-rs/bytes) | dependencies | minor | `1.8.0` -> `1.9.0` | | [cargo-deb](https://lib.rs/crates/cargo-deb) ([source](https://redirect.github.com/kornelski/cargo-deb)) | dependencies | patch | `2.9.0` -> `2.9.1` | | [pgwire](https://redirect.github.com/sunng87/pgwire) | workspace.dependencies | minor | `0.26` -> `0.27` | | [rustls](https://redirect.github.com/rustls/rustls) | dependencies | patch | `0.23.18` -> `0.23.19` | | [tracing](https://tokio.rs) ([source](https://redirect.github.com/tokio-rs/tracing)) | workspace.dependencies | patch | `0.1.40` -> `0.1.41` | | [tracing-subscriber](https://tokio.rs) ([source](https://redirect.github.com/tokio-rs/tracing)) | dependencies | patch | `0.3.18` -> `0.3.19` | | [ureq](https://redirect.github.com/algesten/ureq) | dependencies | minor | `2.10.1` -> `2.11.0` | --- ### Release Notes
tokio-rs/bytes (bytes) ### [`v1.9.0`](https://redirect.github.com/tokio-rs/bytes/blob/HEAD/CHANGELOG.md#190-November-27-2024) [Compare Source](https://redirect.github.com/tokio-rs/bytes/compare/v1.8.0...v1.9.0) ##### Added - Add `Bytes::from_owner` to enable externally-allocated memory ([#​742](https://redirect.github.com/tokio-rs/bytes/issues/742)) ##### Documented - Fix typo in Buf::chunk() comment ([#​744](https://redirect.github.com/tokio-rs/bytes/issues/744)) ##### Internal changes - Replace BufMut::put with BufMut::put_slice in Writer impl ([#​745](https://redirect.github.com/tokio-rs/bytes/issues/745)) - Rename hex_impl! to fmt_impl! and reuse it for fmt::Debug ([#​743](https://redirect.github.com/tokio-rs/bytes/issues/743))
kornelski/cargo-deb (cargo-deb) ### [`v2.9.1`](https://redirect.github.com/kornelski/cargo-deb/compare/v2.9.0...v2.9.1) [Compare Source](https://redirect.github.com/kornelski/cargo-deb/compare/v2.9.0...v2.9.1)
sunng87/pgwire (pgwire) ### [`v0.27.0`](https://redirect.github.com/sunng87/pgwire/blob/HEAD/CHANGELOG.md#0270---2024-11-30) [Compare Source](https://redirect.github.com/sunng87/pgwire/compare/v0.26.0...v0.27.0) ##### Added - `on_flush` handler for extended query handler \[[#​220](https://redirect.github.com/sunng87/pgwire/issues/220)] ##### Fixed - `Parse` message encoding \[[#​219](https://redirect.github.com/sunng87/pgwire/issues/219)] ##### Changed - MSRV to 1.75 - Made TLS an optional feature. `server-api` now provides no TLS functionality \[[#​218](https://redirect.github.com/sunng87/pgwire/issues/218)]
tokio-rs/tracing (tracing) ### [`v0.1.41`](https://redirect.github.com/tokio-rs/tracing/releases/tag/tracing-0.1.41): tracing 0.1.41 [Compare Source](https://redirect.github.com/tokio-rs/tracing/compare/tracing-0.1.40...tracing-0.1.41) \[ [crates.io][crate-0.1.41] ] | \[ [docs.rs][docs-0.1.41] ] This release updates the `tracing-core` dependency to [v0.1.33][core-0.1.33] and the `tracing-attributes` dependency to [v0.1.28][attrs-0.1.28]. ##### Added - **core**: Add index API for `Field` ([#​2820]) - **core**: Allow `&[u8]` to be recorded as event/span field ([#​2954]) ##### Changed - Bump MSRV to 1.63 ([#​2793]) - **core**: Use const `thread_local`s when possible ([#​2838]) ##### Fixed - Removed core imports in macros ([#​2762]) - **attributes**: Added missing RecordTypes for instrument ([#​2781]) - **attributes**: Change order of async and unsafe modifier ([#​2864]) - Fix missing field prefixes ([#​2878]) - **attributes**: Extract match scrutinee ([#​2880]) - Fix non-simple macro usage without message ([#​2879]) - Fix event macros with constant field names in the first position ([#​2883]) - Allow field path segments to be keywords ([#​2925]) - **core**: Fix missed `register_callsite` error ([#​2938]) - **attributes**: Support const values for `target` and `name` ([#​2941]) - Prefix macro calls with ::core to avoid clashing with local macros ([#​3024]) [#​2762]: https://redirect.github.com/tokio-rs/tracing/pull/2762 [#​2781]: https://redirect.github.com/tokio-rs/tracing/pull/2781 [#​2793]: https://redirect.github.com/tokio-rs/tracing/pull/2793 [#​2820]: https://redirect.github.com/tokio-rs/tracing/pull/2820 [#​2838]: https://redirect.github.com/tokio-rs/tracing/pull/2838 [#​2864]: https://redirect.github.com/tokio-rs/tracing/pull/2864 [#​2878]: https://redirect.github.com/tokio-rs/tracing/pull/2878 [#​2879]: https://redirect.github.com/tokio-rs/tracing/pull/2879 [#​2880]: https://redirect.github.com/tokio-rs/tracing/pull/2880 [#​2883]: 
https://redirect.github.com/tokio-rs/tracing/pull/2883 [#​2925]: https://redirect.github.com/tokio-rs/tracing/pull/2925 [#​2938]: https://redirect.github.com/tokio-rs/tracing/pull/2938 [#​2941]: https://redirect.github.com/tokio-rs/tracing/pull/2941 [#​2954]: https://redirect.github.com/tokio-rs/tracing/pull/2954 [#​3024]: https://redirect.github.com/tokio-rs/tracing/pull/3024 [attrs-0.1.28]: https://redirect.github.com/tokio-rs/tracing/releases/tag/tracing-attributes-0.1.28 [core-0.1.33]: https://redirect.github.com/tokio-rs/tracing/releases/tag/tracing-core-0.1.33 [docs-0.1.41]: https://docs.rs/tracing/0.1.41/tracing/ [crate-0.1.41]: https://crates.io/crates/tracing/0.1.41
algesten/ureq (ureq) ### [`v2.11.0`](https://redirect.github.com/algesten/ureq/blob/HEAD/CHANGELOG.md#2110) [Compare Source](https://redirect.github.com/algesten/ureq/compare/2.10.1...2.11.0) - Fixes for changes to cargo-deny ([#​882](https://redirect.github.com/algesten/ureq/issues/882)) - Pin rustls dep on 0.23.19 to keep MSRV 1.67 ([#​878](https://redirect.github.com/algesten/ureq/issues/878)) - Bump MSRV 1.63 -> 1.67 due to time crate ([#​878](https://redirect.github.com/algesten/ureq/issues/878)) - Re-export rustls ([#​813](https://redirect.github.com/algesten/ureq/issues/813))
--- ### Configuration 📅 **Schedule**: Branch creation - "after 5pm on monday" in timezone Etc/UTC, Automerge - At any time (no schedule defined). 🚦 **Automerge**: Enabled. ♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/PeerDB-io/peerdb). --------- Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> Co-authored-by: Philip Dubé Co-authored-by: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com> --- nexus/Cargo.lock | 105 +++++++++++++++++++++++--------------- nexus/Cargo.toml | 2 +- nexus/analyzer/src/lib.rs | 2 +- nexus/catalog/src/lib.rs | 2 +- 4 files changed, 67 insertions(+), 44 deletions(-) diff --git a/nexus/Cargo.lock b/nexus/Cargo.lock index 15a8c3274..9cd70344b 100644 --- a/nexus/Cargo.lock +++ b/nexus/Cargo.lock @@ -785,9 +785,9 @@ checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" [[package]] name = "bytes" -version = "1.8.0" +version = "1.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9ac0150caa2ae65ca5bd83f25c7de183dea78d4d366469f148435e2acfbad0da" +checksum = "325918d6fe32f23b19878fe4b34794ae41fc19ddbe53b10571a4874d44ffd39b" [[package]] name = "bytes-utils" @@ -801,9 +801,9 @@ dependencies = [ [[package]] name = "cargo-deb" -version = "2.9.0" +version = "2.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9103cb60c68ef7ce14a3d17c6d697e8b180356a447685784f7951074bce0b844" +checksum = "db0e12dd59626cd2543903f1b794135b1f6e0df1003dd3be1071c06961bf6072" dependencies = [ "ar", "cargo_toml", @@ -1500,7 +1500,7 @@ dependencies = [ 
"reqwest", "serde", "serde_json", - "thiserror", + "thiserror 1.0.68", "time", "tokio", "tokio-stream", @@ -1790,7 +1790,7 @@ dependencies = [ "http 1.1.0", "hyper 1.5.0", "hyper-util", - "rustls 0.23.18", + "rustls 0.23.19", "rustls-native-certs 0.8.0", "rustls-pki-types", "tokio", @@ -2361,7 +2361,7 @@ dependencies = [ "serde", "serde_json", "socket2", - "thiserror", + "thiserror 1.0.68", "tokio", "tokio-rustls 0.25.0", "tokio-util", @@ -2401,7 +2401,7 @@ dependencies = [ "sha2", "smallvec", "subprocess", - "thiserror", + "thiserror 1.0.68", "uuid", "zstd", ] @@ -2893,9 +2893,9 @@ dependencies = [ [[package]] name = "pgwire" -version = "0.26.0" +version = "0.27.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "99e0f273b9ffa92a06b0a900c012df432de901c1854b2411cd7b27e2db165cc8" +checksum = "f8e3b217978f9e224cfd5e2b272064067e793a39744030e49657c699752473c8" dependencies = [ "async-trait", "base64 0.22.1", @@ -2911,7 +2911,7 @@ dependencies = [ "ring", "rust_decimal", "stringprep", - "thiserror", + "thiserror 2.0.3", "tokio", "tokio-rustls 0.26.0", "tokio-util", @@ -3044,7 +3044,7 @@ dependencies = [ "anyhow", "futures-util", "pt", - "rustls 0.23.18", + "rustls 0.23.19", "ssh2", "tokio", "tokio-postgres", @@ -3159,7 +3159,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c1318b19085f08681016926435853bbf7858f9c082d0999b80550ff5d9abe15" dependencies = [ "bytes", - "heck 0.4.1", + "heck 0.5.0", "itertools", "log", "multimap", @@ -3270,9 +3270,9 @@ dependencies = [ "quinn-proto", "quinn-udp", "rustc-hash 2.0.0", - "rustls 0.23.18", + "rustls 0.23.19", "socket2", - "thiserror", + "thiserror 1.0.68", "tokio", "tracing", ] @@ -3287,9 +3287,9 @@ dependencies = [ "rand", "ring", "rustc-hash 2.0.0", - "rustls 0.23.18", + "rustls 0.23.19", "slab", - "thiserror", + "thiserror 1.0.68", "tinyvec", "tracing", ] @@ -3412,7 +3412,7 @@ dependencies = [ "log", "regex", "siphasher 1.0.1", - "thiserror", + "thiserror 1.0.68", 
"time", "tokio", "tokio-postgres", @@ -3518,7 +3518,7 @@ dependencies = [ "percent-encoding", "pin-project-lite", "quinn", - "rustls 0.23.18", + "rustls 0.23.19", "rustls-pemfile 2.2.0", "rustls-pki-types", "serde", @@ -3686,9 +3686,9 @@ dependencies = [ [[package]] name = "rustls" -version = "0.23.18" +version = "0.23.19" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9c9cc1d47e243d655ace55ed38201c19ae02c148ae56412ab8750e8f0166ab7f" +checksum = "934b404430bb06b3fae2cba809eb45a1ab1aecd64491213d7c3301b88393f8d1" dependencies = [ "log", "once_cell", @@ -4029,7 +4029,7 @@ checksum = "adc4e5204eb1910f40f9cfa375f6f05b68c3abac4b6fd879c8ff5e7ae8a0a085" dependencies = [ "num-bigint", "num-traits", - "thiserror", + "thiserror 1.0.68", "time", ] @@ -4245,7 +4245,16 @@ version = "1.0.68" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "02dd99dc800bbb97186339685293e1cc5d9df1f8fae2d0aecd9ff1c77efea892" dependencies = [ - "thiserror-impl", + "thiserror-impl 1.0.68", +] + +[[package]] +name = "thiserror" +version = "2.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c006c85c7651b3cf2ada4584faa36773bd07bac24acfb39f3c431b36d7e667aa" +dependencies = [ + "thiserror-impl 2.0.3", ] [[package]] @@ -4259,6 +4268,17 @@ dependencies = [ "syn 2.0.87", ] +[[package]] +name = "thiserror-impl" +version = "2.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f077553d607adc1caf65430528a576c757a71ed73944b66ebb58ef2bbd243568" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.87", +] + [[package]] name = "thread_local" version = "1.1.8" @@ -4411,7 +4431,7 @@ checksum = "27d684bad428a0f2481f42241f821db42c54e2dc81d8c00db8536c506b0a0144" dependencies = [ "const-oid", "ring", - "rustls 0.23.18", + "rustls 0.23.19", "tokio", "tokio-postgres", "tokio-rustls 0.26.0", @@ -4445,7 +4465,7 @@ version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" 
checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" dependencies = [ - "rustls 0.23.18", + "rustls 0.23.19", "rustls-pki-types", "tokio", ] @@ -4630,9 +4650,9 @@ checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3" [[package]] name = "tracing" -version = "0.1.40" +version = "0.1.41" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef" +checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0" dependencies = [ "pin-project-lite", "tracing-attributes", @@ -4646,16 +4666,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3566e8ce28cc0a3fe42519fc80e6b4c943cc4c8cef275620eb8dac2d3d4e06cf" dependencies = [ "crossbeam-channel", - "thiserror", + "thiserror 1.0.68", "time", "tracing-subscriber", ] [[package]] name = "tracing-attributes" -version = "0.1.27" +version = "0.1.28" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7" +checksum = "395ae124c09f9e6918a2310af6038fba074bcf474ac352496d5910dd59a2226d" dependencies = [ "proc-macro2", "quote", @@ -4664,9 +4684,9 @@ dependencies = [ [[package]] name = "tracing-core" -version = "0.1.32" +version = "0.1.33" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c06d3da6113f116aaee68e4d601191614c9053067f9ab7f6edbcb161237daa54" +checksum = "e672c95779cf947c5311f83787af4fa8fffd12fb27e4993211a84bdfd9610f9c" dependencies = [ "once_cell", "valuable", @@ -4685,9 +4705,9 @@ dependencies = [ [[package]] name = "tracing-subscriber" -version = "0.3.18" +version = "0.3.19" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ad0f048c97dbd9faa9b7df56362b8ebcaa52adb06b498c050d2f4e32f90a7a8b" +checksum = "e8189decb5ac0fa7bc8b96b7cb9b2701d60d48805aca84a238004d665fcc4008" dependencies = [ "matchers", "nu-ansi-term", 
@@ -4781,21 +4801,24 @@ checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" [[package]] name = "ureq" -version = "2.10.1" +version = "2.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b74fc6b57825be3373f7054754755f03ac3a8f5d70015ccad699ba2029956f4a" +checksum = "b30e6f97efe1fa43535ee241ee76967d3ff6ff3953ebb430d8d55c5393029e7b" dependencies = [ "base64 0.22.1", "encoding_rs", "flate2", + "litemap", "log", "once_cell", - "rustls 0.23.18", + "rustls 0.23.19", "rustls-pki-types", "serde", "serde_json", "url", "webpki-roots", + "yoke", + "zerofrom", ] [[package]] @@ -5285,9 +5308,9 @@ dependencies = [ [[package]] name = "x509-certificate" -version = "0.23.1" +version = "0.24.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "66534846dec7a11d7c50a74b7cdb208b9a581cad890b7866430d438455847c85" +checksum = "e57b9f8bcae7c1f36479821ae826d75050c60ce55146fd86d3553ed2573e2762" dependencies = [ "bcder", "bytes", @@ -5298,7 +5321,7 @@ dependencies = [ "ring", "signature", "spki", - "thiserror", + "thiserror 1.0.68", "zeroize", ] @@ -5358,7 +5381,7 @@ dependencies = [ "hyper-util", "log", "percent-encoding", - "rustls 0.23.18", + "rustls 0.23.19", "rustls-pemfile 2.2.0", "seahash", "serde", diff --git a/nexus/Cargo.toml b/nexus/Cargo.toml index 6efea5f4b..5f5d1b3e6 100644 --- a/nexus/Cargo.toml +++ b/nexus/Cargo.toml @@ -32,7 +32,7 @@ ssh2 = "0.9" sqlparser = { git = "https://github.com/peerdb-io/sqlparser-rs.git", branch = "main" } tokio = { version = "1", features = ["full"] } tracing = "0.1" -pgwire = { version = "0.26", default-features = false, features = [ +pgwire = { version = "0.27", default-features = false, features = [ "scram", "server-api-ring", ] } diff --git a/nexus/analyzer/src/lib.rs b/nexus/analyzer/src/lib.rs index 830da627d..cc9309c6b 100644 --- a/nexus/analyzer/src/lib.rs +++ b/nexus/analyzer/src/lib.rs @@ -48,7 +48,7 @@ pub enum QueryAssociation { Catalog, } -impl<'a> 
StatementAnalyzer for PeerExistanceAnalyzer<'a> { +impl StatementAnalyzer for PeerExistanceAnalyzer<'_> { type Output = QueryAssociation; fn analyze(&self, statement: &Statement) -> anyhow::Result<Self::Output> { diff --git a/nexus/catalog/src/lib.rs b/nexus/catalog/src/lib.rs index d5d023e57..4cb2512c4 100644 --- a/nexus/catalog/src/lib.rs +++ b/nexus/catalog/src/lib.rs @@ -51,7 +51,7 @@ pub struct CatalogConfig<'a> { pub database: &'a str, } -impl<'a> CatalogConfig<'a> { +impl CatalogConfig<'_> { // convert catalog config to PostgresConfig pub fn to_postgres_config(&self) -> pt::peerdb_peers::PostgresConfig { PostgresConfig { From 928ee3cf4a26b86fecd7964d7dbba6f91b0105c0 Mon Sep 17 00:00:00 2001 From: Kevin Biju <52661649+heavycrystal@users.noreply.github.com> Date: Wed, 4 Dec 2024 02:53:09 +0530 Subject: [PATCH 58/61] remove/refactor functions from shared (#2307) --- flow/connectors/eventhub/hubmanager.go | 10 ++++---- .../postgres/qrep_partition_test.go | 8 +++---- flow/connectors/postgres/sink_q.go | 14 ++++------- .../snowflake/qrep_avro_consolidate.go | 12 ++++------ flow/connectors/utils/azure.go | 15 ------------ flow/e2e/bigquery/bigquery_helper.go | 8 +++---- flow/e2e/snowflake/snowflake_helper.go | 9 +++---- flow/e2e/sqlserver/sqlserver_helper.go | 9 +++---- flow/shared/random.go | 24 ------------------- flow/shared/{worklow.go => workflow.go} | 0 10 files changed, 27 insertions(+), 82 deletions(-) delete mode 100644 flow/connectors/utils/azure.go rename flow/shared/{worklow.go => workflow.go} (100%) diff --git a/flow/connectors/eventhub/hubmanager.go b/flow/connectors/eventhub/hubmanager.go index 3e134968d..5515eae88 100644 --- a/flow/connectors/eventhub/hubmanager.go +++ b/flow/connectors/eventhub/hubmanager.go @@ -14,8 +14,8 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/eventhub/armeventhub" cmap "github.com/orcaman/concurrent-map/v2" - "github.com/PeerDB-io/peer-flow/connectors/utils" "github.com/PeerDB-io/peer-flow/generated/protos" +
 	"github.com/PeerDB-io/peer-flow/peerdbenv"
 	"github.com/PeerDB-io/peer-flow/shared"
 )
@@ -186,10 +186,10 @@ func (m *EventHubManager) EnsureEventHubExists(ctx context.Context, name ScopedE
 
 func (m *EventHubManager) getEventHubMgmtClient(subID string) (*armeventhub.EventHubsClient, error) {
 	if subID == "" {
-		envSubID, err := utils.GetAzureSubscriptionID()
-		if err != nil {
-			slog.Error("failed to get azure subscription id", slog.Any("error", err))
-			return nil, err
+		envSubID := peerdbenv.GetEnvString("AZURE_SUBSCRIPTION_ID", "")
+		if envSubID == "" {
+			slog.Error("couldn't find AZURE_SUBSCRIPTION_ID in environment")
+			return nil, errors.New("couldn't find AZURE_SUBSCRIPTION_ID in environment")
 		}
 		subID = envSubID
 	}
diff --git a/flow/connectors/postgres/qrep_partition_test.go b/flow/connectors/postgres/qrep_partition_test.go
index 0249b75fc..a81df2769 100644
--- a/flow/connectors/postgres/qrep_partition_test.go
+++ b/flow/connectors/postgres/qrep_partition_test.go
@@ -4,6 +4,7 @@ import (
 	"context"
 	"fmt"
 	"log/slog"
+	"math/rand/v2"
 	"testing"
 	"time"
 
@@ -84,11 +85,8 @@ func TestGetQRepPartitions(t *testing.T) {
 	}
 	defer conn.Close(context.Background())
 
-	// Generate a random schema name
-	rndUint, err := shared.RandomUInt64()
-	if err != nil {
-		t.Fatalf("Failed to generate random uint: %v", err)
-	}
+	//nolint:gosec // Generate a random schema name, number has no cryptographic significance
+	rndUint := rand.Uint64()
 	schemaName := fmt.Sprintf("test_%d", rndUint)
 
 	// Create the schema
diff --git a/flow/connectors/postgres/sink_q.go b/flow/connectors/postgres/sink_q.go
index 89dab6a94..21a39627b 100644
--- a/flow/connectors/postgres/sink_q.go
+++ b/flow/connectors/postgres/sink_q.go
@@ -4,6 +4,7 @@ import (
 	"context"
 	"fmt"
 	"log/slog"
+	"math/rand/v2"
 
 	"github.com/jackc/pgx/v5"
 
@@ -35,20 +36,15 @@ func (stream RecordStreamSink) ExecuteQueryWithTx(
 		}
 	}
 
-	randomUint, err := shared.RandomUInt64()
-	if err != nil {
-		qe.logger.Error("[pg_query_executor] failed to generate random uint", slog.Any("error", err))
-		err = fmt.Errorf("[pg_query_executor] failed to generate random uint: %w", err)
-		stream.Close(err)
-		return 0, err
-	}
+	//nolint:gosec // number has no cryptographic significance
+	randomUint := rand.Uint64()
 	cursorName := fmt.Sprintf("peerdb_cursor_%d", randomUint)
 	fetchSize := shared.FetchAndChannelSize
 	cursorQuery := fmt.Sprintf("DECLARE %s CURSOR FOR %s", cursorName, query)
 	qe.logger.Info(fmt.Sprintf("[pg_query_executor] executing cursor declaration for %v with args %v", cursorQuery, args))
-	_, err = tx.Exec(ctx, cursorQuery, args...)
-	if err != nil {
+
+	if _, err := tx.Exec(ctx, cursorQuery, args...); err != nil {
 		qe.logger.Info("[pg_query_executor] failed to declare cursor",
 			slog.String("cursorQuery", cursorQuery), slog.Any("error", err))
 		err = fmt.Errorf("[pg_query_executor] failed to declare cursor: %w", err)
diff --git a/flow/connectors/snowflake/qrep_avro_consolidate.go b/flow/connectors/snowflake/qrep_avro_consolidate.go
index 547aef27e..a4a8d1a28 100644
--- a/flow/connectors/snowflake/qrep_avro_consolidate.go
+++ b/flow/connectors/snowflake/qrep_avro_consolidate.go
@@ -4,13 +4,13 @@ import (
 	"context"
 	"fmt"
 	"log/slog"
+	"math/rand/v2"
 	"strings"
 	"time"
 
 	"github.com/PeerDB-io/peer-flow/connectors/utils"
 	"github.com/PeerDB-io/peer-flow/generated/protos"
 	"github.com/PeerDB-io/peer-flow/peerdbenv"
-	"github.com/PeerDB-io/peer-flow/shared"
 )
 
 type SnowflakeAvroConsolidateHandler struct {
@@ -214,10 +214,8 @@ func (s *SnowflakeAvroConsolidateHandler) generateUpsertMergeCommand(
 
 // handleUpsertMode handles the upsert mode
 func (s *SnowflakeAvroConsolidateHandler) handleUpsertMode(ctx context.Context) error {
-	runID, err := shared.RandomUInt64()
-	if err != nil {
-		return fmt.Errorf("failed to generate run ID: %w", err)
-	}
+	//nolint:gosec // number has no cryptographic significance
+	runID := rand.Uint64()
 
 	tempTableName := fmt.Sprintf("%s_temp_%d", s.dstTableName, runID)
 
@@ -230,8 +228,8 @@ func (s *SnowflakeAvroConsolidateHandler) handleUpsertMode(ctx context.Context)
 	s.connector.logger.Info("created temp table " + tempTableName)
 
 	copyCmd := s.getCopyTransformation(tempTableName)
-	_, err = s.connector.database.ExecContext(ctx, copyCmd)
-	if err != nil {
+
+	if _, err := s.connector.database.ExecContext(ctx, copyCmd); err != nil {
 		return fmt.Errorf("failed to run COPY INTO command: %w", err)
 	}
 	s.connector.logger.Info("copied file from stage " + s.stage + " to temp table " + tempTableName)
diff --git a/flow/connectors/utils/azure.go b/flow/connectors/utils/azure.go
deleted file mode 100644
index df612b47d..000000000
--- a/flow/connectors/utils/azure.go
+++ /dev/null
@@ -1,15 +0,0 @@
-package utils
-
-import (
-	"errors"
-	"os"
-)
-
-func GetAzureSubscriptionID() (string, error) {
-	// get this from env
-	id := os.Getenv("AZURE_SUBSCRIPTION_ID")
-	if id == "" {
-		return "", errors.New("AZURE_SUBSCRIPTION_ID is not set")
-	}
-	return id, nil
-}
diff --git a/flow/e2e/bigquery/bigquery_helper.go b/flow/e2e/bigquery/bigquery_helper.go
index 1ee303acf..ee33f2bfc 100644
--- a/flow/e2e/bigquery/bigquery_helper.go
+++ b/flow/e2e/bigquery/bigquery_helper.go
@@ -6,6 +6,7 @@ import (
 	"errors"
 	"fmt"
 	"math/big"
+	"math/rand/v2"
 	"os"
 	"strings"
 	"testing"
 
@@ -21,7 +22,6 @@ import (
 	"github.com/PeerDB-io/peer-flow/generated/protos"
 	"github.com/PeerDB-io/peer-flow/model"
 	"github.com/PeerDB-io/peer-flow/model/qvalue"
-	"github.com/PeerDB-io/peer-flow/shared"
 )
 
 type BigQueryTestHelper struct {
@@ -37,10 +37,8 @@ type BigQueryTestHelper struct {
 func NewBigQueryTestHelper(t *testing.T) (*BigQueryTestHelper, error) {
 	t.Helper()
 	// random 64 bit int to namespace stateful schemas.
-	runID, err := shared.RandomUInt64()
-	if err != nil {
-		return nil, fmt.Errorf("failed to generate random uint64: %w", err)
-	}
+	//nolint:gosec // number has no cryptographic significance
+	runID := rand.Uint64()
 
 	jsonPath := os.Getenv("TEST_BQ_CREDS")
 	if jsonPath == "" {
diff --git a/flow/e2e/snowflake/snowflake_helper.go b/flow/e2e/snowflake/snowflake_helper.go
index ca57b5b47..7e2943e3b 100644
--- a/flow/e2e/snowflake/snowflake_helper.go
+++ b/flow/e2e/snowflake/snowflake_helper.go
@@ -5,6 +5,7 @@ import (
 	"encoding/json"
 	"errors"
 	"fmt"
+	"math/rand/v2"
 	"os"
 	"testing"
 
@@ -13,7 +14,6 @@ import (
 	"github.com/PeerDB-io/peer-flow/generated/protos"
 	"github.com/PeerDB-io/peer-flow/model"
 	"github.com/PeerDB-io/peer-flow/model/qvalue"
-	"github.com/PeerDB-io/peer-flow/shared"
 )
 
 type SnowflakeTestHelper struct {
@@ -47,11 +47,8 @@ func NewSnowflakeTestHelper(t *testing.T) (*SnowflakeTestHelper, error) {
 		return nil, fmt.Errorf("failed to unmarshal json: %w", err)
 	}
 
-	runID, err := shared.RandomUInt64()
-	if err != nil {
-		return nil, fmt.Errorf("failed to generate random uint64: %w", err)
-	}
-
+	//nolint:gosec // number has no cryptographic significance
+	runID := rand.Uint64()
 	testDatabaseName := fmt.Sprintf("e2e_test_%d", runID)
 
 	adminClient, err := connsnowflake.NewSnowflakeClient(context.Background(), config)
diff --git a/flow/e2e/sqlserver/sqlserver_helper.go b/flow/e2e/sqlserver/sqlserver_helper.go
index 056922800..d3e1401f2 100644
--- a/flow/e2e/sqlserver/sqlserver_helper.go
+++ b/flow/e2e/sqlserver/sqlserver_helper.go
@@ -3,6 +3,7 @@ package e2e_sqlserver
 import (
 	"context"
 	"fmt"
+	"math/rand/v2"
 	"os"
 	"strconv"
 
@@ -10,7 +11,6 @@ import (
 	connsqlserver "github.com/PeerDB-io/peer-flow/connectors/sqlserver"
 	"github.com/PeerDB-io/peer-flow/generated/protos"
 	"github.com/PeerDB-io/peer-flow/model/qvalue"
-	"github.com/PeerDB-io/peer-flow/shared"
 )
 
 type SQLServerHelper struct {
@@ -45,11 +45,8 @@ func NewSQLServerHelper() (*SQLServerHelper, error) {
 		return nil, fmt.Errorf("invalid connection configs: %v", connErr)
 	}
 
-	rndNum, err := shared.RandomUInt64()
-	if err != nil {
-		return nil, err
-	}
-
+	//nolint:gosec // number has no cryptographic significance
+	rndNum := rand.Uint64()
 	testSchema := fmt.Sprintf("e2e_test_%d", rndNum)
 
 	if err := connector.CreateSchema(context.Background(), testSchema); err != nil {
 		return nil, err
diff --git a/flow/shared/random.go b/flow/shared/random.go
index 7ef3c8e5d..84830f376 100644
--- a/flow/shared/random.go
+++ b/flow/shared/random.go
@@ -2,32 +2,8 @@ package shared
 
 import (
 	"crypto/rand"
-	"encoding/binary"
-	"errors"
 )
 
-// RandomInt64 returns a random 64 bit integer.
-func RandomInt64() (int64, error) {
-	b := make([]byte, 8)
-	_, err := rand.Read(b)
-	if err != nil {
-		return 0, errors.New("could not generate random int64: " + err.Error())
-	}
-	// Convert bytes to int64
-	return int64(binary.LittleEndian.Uint64(b)), nil
-}
-
-// RandomUInt64 returns a random 64 bit unsigned integer.
-func RandomUInt64() (uint64, error) {
-	b := make([]byte, 8)
-	_, err := rand.Read(b)
-	if err != nil {
-		return 0, errors.New("could not generate random uint64: " + err.Error())
-	}
-	// Convert bytes to uint64
-	return binary.LittleEndian.Uint64(b), nil
-}
-
 func RandomString(n int) string {
 	const alphanum = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
 	bytes := make([]byte, n)
diff --git a/flow/shared/worklow.go b/flow/shared/workflow.go
similarity index 100%
rename from flow/shared/worklow.go
rename to flow/shared/workflow.go

From 5c4d32e50853be7813d6a1706e0faa102d8cada3 Mon Sep 17 00:00:00 2001
From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com>
Date: Wed, 4 Dec 2024 06:45:33 +0530
Subject: [PATCH 59/61] Feat/workflow for docker compose image tags (#2319)

---
 .../update-docker-compose-stable.yaml         | 43 +++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 .github/workflows/update-docker-compose-stable.yaml

diff --git a/.github/workflows/update-docker-compose-stable.yaml b/.github/workflows/update-docker-compose-stable.yaml
new file mode 100644
index 000000000..7435f3657
--- /dev/null
+++ b/.github/workflows/update-docker-compose-stable.yaml
@@ -0,0 +1,43 @@
+name: Update docker-compose.yaml tags
+
+on:
+  schedule:
+    - cron: '0 15 * * 1'
+  workflow_dispatch:
+    inputs: {}
+permissions:
+  issues: write
+  pull-requests: write
+  contents: write
+
+
+env:
+  PR_BRANCH: automated/docker-compose-image-tags-upgrade
+  PR_LABEL: dependencies
+  PR_TITLE: "feat: upgrade `docker-compose.yml` stable image tags"
+
+jobs:
+  update-docker-compose-tag:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: create-PR
+        shell: bash
+        run: |
+          set -eou pipefail
+          latest_tag="$(gh api \
+            -H "Accept: application/vnd.github+json" \
+            -H "X-GitHub-Api-Version: 2022-11-28" \
+            /repos/${{ github.repository }}/releases/latest | jq '.tag_name')"
+          sed -i -E 's|(image: ghcr\.io/peerdb\-io/.*?:stable-)(.*$)|\1'"${latest_tag}"'|g' docker-compose.yml
+          git checkout -b "${PR_BRANCH}"
+          git fetch || true
+          git push -u origin "${PR_BRANCH}" --force-with-lease
+
+          PR_ID=$(gh pr list "${PR_LABEL}" --head "${PR_BRANCH}" --json number | jq -r '.[0].number // ""')
+          if [ "$PR_ID" == "" ]; then
+            PR_ID=$(gh pr create -l "$PR_LABEL" -t "$PR_TITLE" --body "")
+          fi
+
+          git commit -m 'chore(automated): upgrade docker-compose.yml stable tags'
+          gh pr merge --auto --squash

From 1fd442c1e347563b6e0ca5366b80f5a24b3cdd9b Mon Sep 17 00:00:00 2001
From: Kunal Gupta <39487888+iamKunalGupta@users.noreply.github.com>
Date: Wed, 4 Dec 2024 06:58:51 +0530
Subject: [PATCH 60/61] fix: docker compose image tag workflow (#2322)

---
 .../workflows/update-docker-compose-stable.yaml | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/.github/workflows/update-docker-compose-stable.yaml b/.github/workflows/update-docker-compose-stable.yaml
index 7435f3657..437d83ea4 100644
--- a/.github/workflows/update-docker-compose-stable.yaml
+++ b/.github/workflows/update-docker-compose-stable.yaml
@@ -21,6 +21,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
+        with:
+          ref: main
      - name: create-PR
        shell: bash
        run: |
@@ -28,16 +30,22 @@
          latest_tag="$(gh api \
            -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
-            /repos/${{ github.repository }}/releases/latest | jq '.tag_name')"
+            /repos/${{ github.repository }}/releases/latest | jq -r '.tag_name')"
          sed -i -E 's|(image: ghcr\.io/peerdb\-io/.*?:stable-)(.*$)|\1'"${latest_tag}"'|g' docker-compose.yml
+          git config --global user.name "${GITHUB_ACTOR}"
+          git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"
          git checkout -b "${PR_BRANCH}"
          git fetch || true
+          git add -u
+          git commit -m 'chore(automated): upgrade docker-compose.yml stable tags'
          git push -u origin "${PR_BRANCH}" --force-with-lease

-          PR_ID=$(gh pr list "${PR_LABEL}" --head "${PR_BRANCH}" --json number | jq -r '.[0].number // ""')
+          PR_ID=$(gh pr list --label "${PR_LABEL}" --head "${PR_BRANCH}" --json number | jq -r '.[0].number // ""')
          if [ "$PR_ID" == "" ]; then
            PR_ID=$(gh pr create -l "$PR_LABEL" -t "$PR_TITLE" --body "")
          fi

-          git commit -m 'chore(automated): upgrade docker-compose.yml stable tags'
+
          gh pr merge --auto --squash
+        env:
+          GH_TOKEN: ${{ github.token }}

From a9d33fdc4ae4b39d542216dc318bd5dc4f10a767 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com>
Date: Wed, 4 Dec 2024 07:00:18 +0530
Subject: [PATCH 61/61] feat: upgrade `docker-compose.yml` stable image tags (#2321)

Co-authored-by: iamKunalGupta
---
 docker-compose.yml | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docker-compose.yml b/docker-compose.yml
index c7991b9e2..b2c5936a2 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -112,7 +112,7 @@ services:
 
   flow-api:
     container_name: flow_api
-    image: ghcr.io/peerdb-io/flow-api:stable-v0.19.1@sha256:a759b2d1b14f11d09ade672c268abcb456fd8884468547ea0f467cdfb60a0994
+    image: ghcr.io/peerdb-io/flow-api:stable-v0.20.0
     restart: unless-stopped
     ports:
       - 8112:8112
@@ -128,7 +128,7 @@ services:
 
   flow-snapshot-worker:
     container_name: flow-snapshot-worker
-    image: ghcr.io/peerdb-io/flow-snapshot-worker:stable-v0.19.1@sha256:894c1fea1cf9a4f5622420d8630509243b60cf177e107ec4d14d7294a9490451
+    image: ghcr.io/peerdb-io/flow-snapshot-worker:stable-v0.20.0
     restart: unless-stopped
     environment:
       <<: [*catalog-config, *flow-worker-env, *minio-config]
@@ -138,7 +138,7 @@ services:
 
   flow-worker:
     container_name: flow-worker
-    image: ghcr.io/peerdb-io/flow-worker:stable-v0.19.1@sha256:4482314bd3bd4a96930fbee10c00a9f2d5764e86cfd8802642589d339cf04054
+    image: ghcr.io/peerdb-io/flow-worker:stable-v0.20.0
    restart: unless-stopped
    environment:
      <<: [*catalog-config, *flow-worker-env, *minio-config]
@@ -151,7 +151,7 @@ services:
  peerdb:
    container_name: peerdb-server
    stop_signal: SIGINT
-    image: ghcr.io/peerdb-io/peerdb-server:stable-v0.19.1@sha256:c736500e0b42f100df29af43ecf4c96d0c8f4805dd294fecd0bb4ce7b7897a18
+    image: ghcr.io/peerdb-io/peerdb-server:stable-v0.20.0
    restart: unless-stopped
    environment:
      <<: *catalog-config
@@ -167,7 +167,7 @@ services:
 
  peerdb-ui:
    container_name: peerdb-ui
-    image: ghcr.io/peerdb-io/peerdb-ui:stable-v0.19.1@sha256:ffc4b5960dc1653a59e680c61fca0ba2c5891cb4965e4662927d9886f4d7f6bc
+    image: ghcr.io/peerdb-io/peerdb-ui:stable-v0.20.0
    restart: unless-stopped
    ports:
      - 3000:3000
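
Reviewer note: the recurring change in these patches replaces the error-returning `shared.RandomUInt64()` (built on `crypto/rand`) with `math/rand/v2`'s infallible, auto-seeded `rand.Uint64()`, silencing gosec with a `//nolint` comment since the value only namespaces test schemas. A minimal standalone sketch of that pattern — `randomSchemaName` is an illustrative helper, not a function from the PeerDB codebase:

```go
package main

import (
	"fmt"
	"math/rand/v2" // auto-seeded; Uint64() never returns an error
)

// randomSchemaName builds a namespaced identifier such as "test_12345…",
// mirroring the schemaName/testSchema construction in the patches above.
func randomSchemaName(prefix string) string {
	//nolint:gosec // number has no cryptographic significance
	return fmt.Sprintf("%s_%d", prefix, rand.Uint64())
}

func main() {
	fmt.Println(randomSchemaName("test"))
}
```

Because `rand.Uint64()` cannot fail, every `if err != nil` branch around random-ID generation disappears, which is what shrinks each hunk by three lines.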
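
Patches 59–61 automate bumping the `stable-<version>` suffix of the `ghcr.io/peerdb-io/*` images in `docker-compose.yml` with a `sed` one-liner; the final patch shows the resulting rewrite (which also drops Renovate's `@sha256:` digest pins, since the substitution consumes everything after `stable-`). A rough Go equivalent of that substitution, for illustration only — `retagStableImages` is a hypothetical helper, and the real workflow shells out to `sed`:

```go
package main

import (
	"fmt"
	"regexp"
)

// retagStableImages rewrites the version suffix of every
// "image: ghcr.io/peerdb-io/<name>:stable-<version>[@sha256:…]" reference,
// approximating the workflow's sed expression. \S+ swallows any digest pin.
func retagStableImages(compose, latestTag string) string {
	re := regexp.MustCompile(`(image: ghcr\.io/peerdb-io/[^:]+:stable-)(\S+)`)
	return re.ReplaceAllString(compose, "${1}"+latestTag)
}

func main() {
	doc := "    image: ghcr.io/peerdb-io/flow-api:stable-v0.19.1\n" +
		"    image: ghcr.io/peerdb-io/peerdb-ui:stable-v0.19.1\n"
	fmt.Print(retagStableImages(doc, "v0.20.0"))
}
```

Note that patch 60's fixes (adding `-r` to `jq` so the tag is unquoted, committing before pushing, and exporting `GH_TOKEN`) are exactly the failure points such a shell pipeline hides; a typed rewrite like this avoids the quoting issue by construction.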