Merge major-release into main & bump version number in setup.py (#938)
* Merge main into major-release (#814)

* Use black formatting in addition to flake8 (#796)

* Run black formatter on entire repository

* Update requirements.txt and CONTRIBUTING.md to reflect black format

* Use black linting in circleci test job

* Use longer variable name to resolve flake8 E741

* Move noqa comments back to proper lines after black reformat

* Standardize S3 Prefix Conventions (#803)

This PR catches exceptions raised when a user does not have exhaustive access to keys in an S3 bucket

* Add Default Parameter Flexibility (#807)

Skips over new `/` logic checks if prefix is `None` (which is true by default)

* MoveOn Shopify / AK changes (#801)

* Add access_token authentication option for Shopify

* Remove unnecessary check
The access token will either be None or explicitly set; don't worry about an empty string.

* Add get_orders function and test

* Add get_transactions function and test

* Add function and test to get order

* style fixes

* style fixes

---------

Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>

* Catch File Extensions in S3 Prefix (#809)

* add exception handling

* Shortened logs for flake8

* add logic for default case

* added file logic + note to user

* restructured prefix logic

This change moves the prefix -> prefix/ logic into a try/except block ... this will be more robust to most use cases, while adding flexibility that we desire for split-permission buckets
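A rough sketch of that fallback pattern (function and variable names are illustrative, not the actual Parsons implementation):

```python
# Illustrative sketch only: try the prefix as given, and fall back to
# "prefix/" if the first listing attempt is rejected (e.g. on buckets
# with split key permissions). `s3_client` is e.g. boto3.client("s3").
def list_keys_with_prefix(s3_client, bucket, prefix=None):
    try:
        return s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix or "")
    except Exception as exc:
        if prefix and not prefix.endswith("/"):
            return s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix + "/")
        raise RuntimeError(f"Unable to list keys for prefix {prefix!r}") from exc
```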

* drop nested try/catch + add verbose error log

* Add error message verbosity

Co-authored-by: willyraedy <[email protected]>

---------

Co-authored-by: willyraedy <[email protected]>

---------

Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Ian <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>

* DatabaseConnector Interface to Major Release (#815)

* Create the DatabaseConnector

* Implement DatabaseConnector for the DB connectors

* Add DatabaseConnector to std imports

* Flake8 fix

* Remove reference to padding in copy()

* Add database_discover and fix inheritance

* Remove strict_length from copy()

* Put strict_length back in original order

* Remove strict_length stub from BQ

* Fix discover_database export statement

* Add return annotation to mysql table_exists

* Black formatter pass

* Add more documentation on when you should use

* Add developer notes.

* Fix code block documentation

* Enhance discover database

* Add unit tests for discover database

* Fix unit tests

* Add two more tests

* Reverse Postgres string_length change

---------

Co-authored-by: Jason Walker <[email protected]>

* Zoom Authentication + Polling API (#873)

* Add multiple python versions to CI tests (#858)

* Add multiple python versions to CI tests

* Remove duplicate key

* Combine CI jobs

* Update ubuntu image and actually install Python versions

* Replace pyenv with apt-get to install python versions

* Remove sudo

* Remove get from 'apt-get'

* Update apt before attempting to install

* Add ppa/deadsnakes repository

* Add prereq

* Fix typo

* Add -y to install command

* Move -y to correct spot

* Add more -ys

* Add some echoes to debug

* Switch back to pyenv approach

* Remove tests from circleci config and move to new github actions config

Note: no caching yet, this is more of a proof of concept

* Split out Mac tests into separate file

* Set testing environmental variable separately

* First attempt to add dependency cache

* Remove windows tests for now

* Fix circleci config

* Fix circleci for real this time

* Add tests on merging of PRs and update readme to show we do not support Python 3.7

* Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)

* Enable passing `identifiers` to ActionNetwork upsert_person

* Remove unused arguments from method

self.get_page method doesn't exist and that method call doesn't return
anything. The return statement works fine as-is to return all tags and
handles pagination on its own.

* Include deprecated per_page argument for backwards compatibility

Emit a deprecation warning if this argument is used

* Include examples in docstring for `identifiers` argument

* Expand documentation on ActionNetwork identifiers
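A hedged usage sketch of the new argument (the identifier value is illustrative; Action Network identifiers follow a "system:id" pattern):

```python
from parsons import ActionNetwork

an = ActionNetwork(api_token="...")  # placeholder token
an.upsert_person(
    email_address="person@example.org",
    identifiers=["mobilize:12345"],  # illustrative identifier value
)
```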

* Add pre-commit hook config to run flake8 and black on commit (#864)

Notes added to README on how to install and set up

* black format

* black format

* jwt -> s2s oauth

* scaffold new functions

* add docs

* return

* add type handling

* pass in updated params

* move access token function

* ok let's rock!!

* make changes

* pass access token key only

* use temporary client to gen token

* mock request in constructor

* drop unused imports

* add changes

* scaffolding tests

* Add multiple python versions to CI tests (#858)

* Add multiple python versions to CI tests

* Remove duplicate key

* Combine CI jobs

* Update ubuntu image and actually install Python versions

* Replace pyenv with apt-get to install python versions

* Remove sudo

* Remove get from 'apt-get'

* Update apt before attempting to install

* Add ppa/deadsnakes repository

* Add prereq

* Fix typo

* Add -y to install command

* Move -y to correct spot

* Add more -ys

* Add some echoes to debug

* Switch back to pyenv approach

* Remove tests from circleci config and move to new github actions config

Note: no caching yet, this is more of a proof of concept

* Split out Mac tests into separate file

* Set testing environmental variable separately

* First attempt to add dependency cache

* Remove windows tests for now

* Fix circleci config

* Fix circleci for real this time

* Add tests on merging of PRs and update readme to show we do not support Python 3.7

* Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)

* Enable passing `identifiers` to ActionNetwork upsert_person

* Remove unused arguments from method

self.get_page method doesn't exist and that method call doesn't return
anything. The return statement works fine as-is to return all tags and
handles pagination on its own.

* Include deprecated per_page argument for backwards compatibility

Emit a deprecation warning if this argument is used

* Include examples in docstring for `identifiers` argument

* Expand documentation on ActionNetwork identifiers

* Add pre-commit hook config to run flake8 and black on commit (#864)

Notes added to README on how to install and set up

* black format

* black format

* jwt -> s2s oauth

* scaffold new functions

* add docs

* return

* add type handling

* pass in updated params

* move access token function

* ok let's rock!!

* make changes

* pass access token key only

* use temporary client to gen token

* mock request in constructor

* drop unused imports

* add changes

* scaffolding tests

* write unit tests

* drop poll endpoints for now

---------

Co-authored-by: Shauna <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>

* Merging Main Before Release (#880)

* Add multiple python versions to CI tests (#858)

* Add multiple python versions to CI tests

* Remove duplicate key

* Combine CI jobs

* Update ubuntu image and actually install Python versions

* Replace pyenv with apt-get to install python versions

* Remove sudo

* Remove get from 'apt-get'

* Update apt before attempting to install

* Add ppa/deadsnakes repository

* Add prereq

* Fix typo

* Add -y to install command

* Move -y to correct spot

* Add more -ys

* Add some echoes to debug

* Switch back to pyenv approach

* Remove tests from circleci config and move to new github actions config

Note: no caching yet, this is more of a proof of concept

* Split out Mac tests into separate file

* Set testing environmental variable separately

* First attempt to add dependency cache

* Remove windows tests for now

* Fix circleci config

* Fix circleci for real this time

* Add tests on merging of PRs and update readme to show we do not support Python 3.7

* Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)

* Enable passing `identifiers` to ActionNetwork upsert_person

* Remove unused arguments from method

self.get_page method doesn't exist and that method call doesn't return
anything. The return statement works fine as-is to return all tags and
handles pagination on its own.

* Include deprecated per_page argument for backwards compatibility

Emit a deprecation warning if this argument is used

* Include examples in docstring for `identifiers` argument

* Expand documentation on ActionNetwork identifiers

* Add pre-commit hook config to run flake8 and black on commit (#864)

Notes added to README on how to install and set up

* Add Events Helpers to PDI Connector (#865)

* add helpers to Events object

* stage docstring

* add docs

* linting

* fix typo + enforce validation

* add return docs

* add events tests

* use mock pdi

* jk

* mark live tests

* add alias

* drop unused imports

* change release number (#872)

* add release notes yml (#878)

---------

Co-authored-by: Shauna <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: sharinetmc <[email protected]>

* Switch from API key to Personal Access Token (#866)

* Wraedy/bigquery db connector (#875)

* Create the DatabaseConnector

* Implement DatabaseConnector for the DB connectors

* Add DatabaseConnector to std imports

* Flake8 fix

* Remove reference to padding in copy()

* Add database_discover and fix inheritance

* Remove strict_length from copy()

* Put strict_length back in original order

* Remove strict_length stub from BQ

* Fix discover_database export statement

* Add return annotation to mysql table_exists

* Black formatter pass

* create bigquery folder in databases folde

* create query parity between bigquery and redshift

* mock up copy functionality for bigquery

* fix typo

* add duplicate function to bigquery

* move transaction to helper function

* implement upsert

* fix imports and packages

* add get tables and views methods

* add query return flexibility

* match bigquery apis with redshift

* make s3 to gcs more generic

* add transaction support to bigquery

* remove logs

* add gcs docs

* process job config in function

* finish todo's (and add one more lol)

* [ wip ] AttributeError

* add raw download param

* drop raw download

* copy from GCS docstring

* copy s3 docs

* copy docs

* docstrings

* control flow

* add source path to aws transfer spec

* add Code object to imports

* cleaning up slightly

* check status code

* nice

* pass in required param

* add pattern handling

* add quote character to LoadJobConfig

* add schema to copy from gcs

* drop dist and sortkeys

No longer input params

* add delimiter param

* use schema definition

* write column mapping helper

* pass in formatted schema to load_uri fn

* rename new file

* move file with jason's changes

* move new changes back into file to maintain history

* remove extraneous fn and move project job config

* get back to test parity

* fix bad merge conflict

* remove extra params from copy sig

* clarify transaction guidance

* clean up list blobs

* clean up storage transfer polling

* upgrade cloud storage package

* use list of schema mappings

* scaffolded big file function 😎

* add to docs

* default to compression

we can make this more flexible, just scaffolding

* add temp logging

we can drop this later just trying to get a handle on cycle time

* use decompress

* add logging

* implement unzipping and reuploading cloud file

* logging error

* Add destination path

* Small fix

* add todo's

* drop max wait time

* add kwargs to put blob

Potentially useful for metadata (content type, etc.)

* add verbosity to description

* black formatted

* add gcs to/from helpers

* write to_bigquery function

* update big file logic

* allow jagged rows logic

* test additional methods

* add duplicate table test

* test drop flag for duplicate

* basic test for upsert

* add typing

* move non-essential logs to debug

* move logs to debug

* hey, it works!

* add UUID support for bigquery type map

* add datetime to bigquery type map

* address comments

* address comments

* drop GCS class function

we can pick this up later but it doesn't currently work

* move class back to old location with new import

* revert to old name

* remove transaction error handler

* add description conditional block for s3

* change one more conditional to s3

* handle empty source paths

* reverting new import path

---------

Co-authored-by: Jason Walker <[email protected]>
Co-authored-by: Ian <[email protected]>
Co-authored-by: Kasia Hinkson <[email protected]>

* BigQuery - Add Column Helpers (#911)

* add column outlines

* optionally log query

* flip default params

* flip back

* Google BigQuery - Clean Up Autodetect Logic (#914)

* don't delete

* clean up schema autodetect logic

* undo comments

* Update stale references to parsons.databases.bigquery (#920)

* Fix BQ references in discover_database

* Update BQ references in tofrom.py

* Update BQ refs in test_discover_database.py

* Fix gcs hidden error (#930)

* logging

* edit flake8 max line for testing

* change flake8 for testing

* comment out unsused var

* add print to check branch

* change to logging

* more logging

* try printing

* more logging

* logging:

* more printing

* more logging

* print transfer job request

* change error message

* requested changes

* remove comment

* GoogleCloudStorage - Handle zip / gzip files flexibly (#937)

* Update release (#894)

* Zoom Polls (#886)

* Merge main into major-release (#814)

* Use black formatting in addition to flake8 (#796)

* Run black formatter on entire repository

* Update requirements.txt and CONTRIBUTING.md to reflect black format

* Use black linting in circleci test job

* Use longer variable name to resolve flake8 E741

* Move noqa comments back to proper lines after black reformat

* Standardize S3 Prefix Conventions (#803)

This PR catches exceptions raised when a user does not have exhaustive access to keys in an S3 bucket

* Add Default Parameter Flexibility (#807)

Skips over new `/` logic checks if prefix is `None` (which is true by default)

* MoveOn Shopify / AK changes (#801)

* Add access_token authentication option for Shopify

* Remove unnecessary check
The access token will either be None or explicitly set; don't worry about an empty string.

* Add get_orders function and test

* Add get_transactions function and test

* Add function and test to get order

* style fixes

* style fixes

---------

Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>

* Catch File Extensions in S3 Prefix (#809)

* add exception handling

* Shortened logs for flake8

* add logic for default case

* added file logic + note to user

* restructured prefix logic

This change moves the prefix -> prefix/ logic into a try/except block ... this will be more robust to most use cases, while adding flexibility that we desire for split-permission buckets

* drop nested try/catch + add verbose error log

* Add error message verbosity

Co-authored-by: willyraedy <[email protected]>

---------

Co-authored-by: willyraedy <[email protected]>

---------

Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Ian <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>

* black format

* black format

* jwt -> s2s oauth

* scaffold new functions

* add docs

* return

* DatabaseConnector Interface to Major Release (#815)

* Create the DatabaseConnector

* Implement DatabaseConnector for the DB connectors

* Add DatabaseConnector to std imports

* Flake8 fix

* Remove reference to padding in copy()

* Add database_discover and fix inheritance

* Remove strict_length from copy()

* Put strict_length back in original order

* Remove strict_length stub from BQ

* Fix discover_database export statement

* Add return annotation to mysql table_exists

* Black formatter pass

* Add more documentation on when you should use

* Add developer notes.

* Fix code block documentation

* Enhance discover database

* Add unit tests for discover database

* Fix unit tests

* Add two more tests

* Reverse Postgres string_length change

---------

Co-authored-by: Jason Walker <[email protected]>

* add type handling

* pass in updated params

* move access token function

* ok let's rock!!

* make changes

* pass access token key only

* use temporary client to gen token

* mock request in constructor

* drop unused imports

* add changes

* scaffolding tests

* Add multiple python versions to CI tests (#858)

* Add multiple python versions to CI tests

* Remove duplicate key

* Combine CI jobs

* Update ubuntu image and actually install Python versions

* Replace pyenv with apt-get to install python versions

* Remove sudo

* Remove get from 'apt-get'

* Update apt before attempting to install

* Add ppa/deadsnakes repository

* Add prereq

* Fix typo

* Add -y to install command

* Move -y to correct spot

* Add more -ys

* Add some echoes to debug

* Switch back to pyenv approach

* Remove tests from circleci config and move to new github actions config

Note: no caching yet, this is more of a proof of concept

* Split out Mac tests into separate file

* Set testing environmental variable separately

* First attempt to add dependency cache

* Remove windows tests for now

* Fix circleci config

* Fix circleci for real this time

* Add tests on merging of PRs and update readme to show we do not support Python 3.7

* Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)

* Enable passing `identifiers` to ActionNetwork upsert_person

* Remove unused arguments from method

self.get_page method doesn't exist and that method call doesn't return
anything. The return statement works fine as-is to return all tags and
handles pagination on its own.

* Include deprecated per_page argument for backwards compatibility

Emit a deprecation warning if this argument is used

* Include examples in docstring for `identifiers` argument

* Expand documentation on ActionNetwork identifiers

* Add pre-commit hook config to run flake8 and black on commit (#864)

Notes added to README on how to install and set up

* black format

* black format

* jwt -> s2s oauth

* scaffold new functions

* add docs

* return

* add type handling

* pass in updated params

* move access token function

* ok let's rock!!

* make changes

* pass access token key only

* use temporary client to gen token

* mock request in constructor

* drop unused imports

* add changes

* scaffolding tests

* write unit tests

* added testing

* drop typing (for now)

* update docstring typing

* add tests

* write functions

* update typing

* add poll results

* update table output

* fix tests

* uhhh run it back

* add scope requirements

* add to docs

We can add more here if folks see fit

* one for the money two for the show

---------

Co-authored-by: Jason <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>
Co-authored-by: Jason Walker <[email protected]>
Co-authored-by: Shauna <[email protected]>

* Check for empty tables in zoom poll results (#897)

Co-authored-by: Jason Walker <[email protected]>

* Bump urllib3 from 1.26.5 to 1.26.17 (#901)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.5 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.26.5...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Add MobileCommons Connector (#896)

* mobilecommons class

* Update __init__.py

* get broadcasts

* fix get broadcast request

* Add mc_get_request method

* Add annotation

* Incorporate Daniel's suggestions and finish up get_broadcasts

* A few more methods

Need to figure out page_count issue

* small fix

* Remove page_count, use page record num instead

* Add in page_count again

Not all get responses include num param, but do include page_count. wft

* Fix logging numbers

* Add create_profile

* Fix error message for post request

* Start tests

* Add some tests

* Continue testing

* Update test_mobilecommons.py

* functionalize status_code check

* break out parse_get_request function

* fix test data

* fix documentation typo

* Add several tests

* Update mobilecommons.py

* Fix limit and pagination logic

* debug unit testing

* better commenting and logic

* Documentation

* Add MC to init file

* Revert "Merge branch 'main' into cormac-mobilecommons-connector"

This reverts commit cad250f, reversing
changes made to 493e117.

* Revert "Add MC to init file"

This reverts commit 493e117.

* Revert "Revert "Add MC to init file""

This reverts commit 8f87ec2.

* Revert "Revert "Merge branch 'main' into cormac-mobilecommons-connector""

This reverts commit 8190052.

* Fix init destruction

* fix init yet again

* Update testing docs with underscores

* Lint

* Lint tests

* break up long responses

* Fix more linting issues

* Hopefully last linting issue

* DGJKSNCHIVBN

* Documentation fixes

* Remove note to self

* date format

* remove random notes

* Update test_mobilecommons.py

---------

Co-authored-by: sharinetmc <[email protected]>

* #741 : Deprecate Slack chat.postMessage `as_user` argument and allow for new authorship arguments (#891)

* remove the argument and add a warning that the usage is deprecated

* remove usage of as_user from sample code

* add in the user customization arguments in lieu of the deprecated as_user argument

* add comment regarding the permissions required to use these arguments

* use kwargs

* surface the whole response

* allow usage of the deprecated argument but surface the failed response better

* add to retry

* delete test file

* fix linting

* formatting to fix tests

* fix if style

* add warning for using thread_ts

* move the documentation to the optional arguments

* #816 Airtable.get_records() fields argument can be either str or list (#892)

* allow fields to be a str object

* remove newline

* Nir's actionnetwork changes (#900)

* working on adding a function to ActionNetwork and took care of a lint issue

* init

* working on all get functions

* actionnetwork functions batch 1 is ready

* linting and black formatted compliance

* removed unwanted/unused lines

* merged updated main

* did some linting

* added some more get functions to support all ActionNetwork objects (Advocacy Campaigns, Attendances, Campaigns, Custom Fields, Donations, Embeds, Event Campaigns, Events, Forms, Fundraising Pages, Items, Lists, Messages, Metadata, Outreaches, People, Petitions, Queries, Signatures, Submissions, Tags, Taggings, Wrappers)

* worked on linting again

* fix airtable.insert_records table arg (#907)

* Add canales s3 functions (#885)

* add raw s3 functions to parsons

* add selected functions to s3.py

* delete redundant functions and move drop_and_save function to redshift.py

* create test file

* add s3 unit tests

* add rs.drop_and_unload unit test

* add printing for debugging

* remove testing file

* unsaved changes

* remove unused packages

* remove unneeded module

* Bump urllib3 from 1.26.17 to 1.26.18 (#904)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.26.17...1.26.18)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: sharinetmc <[email protected]>

* New connector for working with the Catalist Match API (#912)

* Enable api_connector to return error message in `text` attribute

Some API error responses contain the error message in the `text`
attribute, so this update makes it possible to fetch that message if
it exists.
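A minimal illustration of that pattern (not the exact Parsons APIConnector code):

```python
import requests

resp = requests.get("https://api.example.com/match")  # placeholder endpoint
if not resp.ok:
    # Some APIs put the useful error message in the response body rather than
    # in the status reason, so surface `resp.text` alongside the status code.
    raise requests.exceptions.HTTPError(f"{resp.status_code}: {resp.text}")
```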

* New connector to work with the Catalist Match API

* Add pytest-mock to requirements to support mocking in pytests

* Tests on the catalist match connector

* More open ended pytest-mock version for compatibility

* Expand docstring documentation based on feedback in PR

* More verbose error on match failure

* Parameterize template_id variable

* Expand docstrings on initial setup

* Include Catalist documentation rst file

* Enhancement: Action Network Connector: Added unpack_statistics param in get_messages method (#917)

* Adds parameter to get_messages

This adds the ability to unpack the statistics which are returned as a nested dictionary in the response.
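A hedged usage sketch of the new parameter (the argument name comes from this PR; other details are illustrative):

```python
from parsons import ActionNetwork

an = ActionNetwork(api_token="...")  # placeholder token
# With unpack_statistics=True the nested statistics dictionary is flattened
# into columns of the returned Parsons Table; it defaults to False.
messages = an.get_messages(unpack_statistics=True)
```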

* added unpack_statistics to an.get_messages()

* added parameters to get_messages and built tests

* changes unpack_statistics to False by default.

* added tbl variable

* formatted with black

* fixed docs

---------

Co-authored-by: mattkrausse <[email protected]>

* Adding rename_columns method to Parsons Table (#923)

* added rename_columns for multiple cols

* linted

* added clarification to docs about dict structure

* updated docs

---------

Co-authored-by: mattkrausse <[email protected]>

* Add http response to update_mailer (#924)

Without returning the response, or at least the status code, it's impossible to check for errors.
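Illustrative only (the connector object and keyword arguments are placeholders), showing why the returned response matters:

```python
def update_mailer_checked(connector, mailer_id, **kwargs):
    # `connector` stands in for whichever Parsons connector exposes
    # update_mailer; the point is that the returned response can be checked.
    resp = connector.update_mailer(mailer_id, **kwargs)
    if resp.status_code >= 400:
        raise RuntimeError(f"update_mailer failed: {resp.status_code} {resp.text}")
    return resp
```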

* Enable passing arbitrary additional fields to NGPVAN person match API (#916)

* match gcs api to s3

* wip

* two different functions

* use csv as default

* drop unused var

* add docs

* use temp file

* add comments

* wip

* add docs + replicate in gzip

* boy howdy!

* set timeout

* Revert "Merge branch 'main' into ianferguson/gcs-pathing"

This reverts commit 5b1ef6e, reversing
changes made to f0eb3d6.

* black format

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Kasia Hinkson <[email protected]>
Co-authored-by: Jason <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>
Co-authored-by: Jason Walker <[email protected]>
Co-authored-by: Shauna <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Cormac Martinez del Rio <[email protected]>
Co-authored-by: sharinetmc <[email protected]>
Co-authored-by: Angela Gloyna <[email protected]>
Co-authored-by: NirTatcher <[email protected]>
Co-authored-by: justicehaze <[email protected]>
Co-authored-by: mattkrausse <[email protected]>
Co-authored-by: mkrausse-ggtx <[email protected]>
Co-authored-by: Sophie Waldman <[email protected]>

* GoogleCloudStorage - Add GCS Destination Path Param (#936)

* Update release (#894)

* Zoom Polls (#886)

* Merge main into major-release (#814)

* Use black formatting in addition to flake8 (#796)

* Run black formatter on entire repository

* Update requirements.txt and CONTRIBUTING.md to reflect black format

* Use black linting in circleci test job

* Use longer variable name to resolve flake8 E741

* Move noqa comments back to proper lines after black reformat

* Standardize S3 Prefix Conventions (#803)

This PR catches exceptions raised when a user does not have exhaustive access to keys in an S3 bucket

* Add Default Parameter Flexibility (#807)

Skips over new `/` logic checks if prefix is `None` (which is true by default)

* MoveOn Shopify / AK changes (#801)

* Add access_token authentication option for Shopify

* Remove unnecessary check
The access token will either be None or explicitly set; don't worry about an empty string.

* Add get_orders function and test

* Add get_transactions function and test

* Add function and test to get order

* style fixes

* style fixes

---------

Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>

* Catch File Extensions in S3 Prefix (#809)

* add exception handling

* Shortened logs for flake8

* add logic for default case

* added file logic + note to user

* restructured prefix logic

This change moves the prefix -> prefix/ logic into a try/except block ... this will be more robust to most use cases, while adding flexibility that we desire for split-permission buckets

* drop nested try/catch + add verbose error log

* Add error message verbosity

Co-authored-by: willyraedy <[email protected]>

---------

Co-authored-by: willyraedy <[email protected]>

---------

Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Ian <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>

* black format

* black format

* jwt -> s2s oauth

* scaffold new functions

* add docs

* return

* DatabaseConnector Interface to Major Release (#815)

* Create the DatabaseConnector

* Implement DatabaseConnector for the DB connectors

* Add DatabaseConnector to std imports

* Flake8 fix

* Remove reference to padding in copy()

* Add database_discover and fix inheritance

* Remove strict_length from copy()

* Put strict_length back in original order

* Remove strict_length stub from BQ

* Fix discover_database export statement

* Add return annotation to mysql table_exists

* Black formatter pass

* Add more documentation on when you should use

* Add developer notes.

* Fix code block documentation

* Enhance discover database

* Add unit tests for discover database

* Fix unit tests

* Add two more tests

* Reverse Postgres string_length change

---------

Co-authored-by: Jason Walker <[email protected]>

* add type handling

* pass in updated params

* move access token function

* ok let's rock!!

* make changes

* pass access token key only

* use temporary client to gen token

* mock request in constructor

* drop unused imports

* add changes

* scaffolding tests

* Add multiple python versions to CI tests (#858)

* Add multiple python versions to CI tests

* Remove duplicate key

* Combine CI jobs

* Update ubuntu image and actually install Python versions

* Replace pyenv with apt-get to install python versions

* Remove sudo

* Remove get from 'apt-get'

* Update apt before attempting to install

* Add ppa/deadsnakes repository

* Add prereq

* Fix typo

* Add -y to install command

* Move -y to correct spot

* Add more -ys

* Add some echoes to debug

* Switch back to pyenv approach

* Remove tests from circleci config and move to new github actions config

Note: no caching yet, this is more of a proof of concept

* Split out Mac tests into separate file

* Set testing environmental variable separately

* First attempt to add dependency cache

* Remove windows tests for now

* Fix circleci config

* Fix circleci for real this time

* Add tests on merging of PRs and update readme to show we do not support Python 3.7

* Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)

* Enable passing `identifiers` to ActionNetwork upsert_person

* Remove unused arguments from method

self.get_page method doesn't exist and that method call doesn't return
anything. The return statement works fine as-is to return all tags and
handles pagination on its own.

* Include deprecated per_page argument for backwards compatibility

Emit a deprecation warning if this argument is used

* Include examples in docstring for `identifiers` argument

* Expand documentation on ActionNetwork identifiers

* Add pre-commit hook config to run flake8 and black on commit (#864)

Notes added to README on how to install and set up

* black format

* black format

* jwt -> s2s oauth

* scaffold new functions

* add docs

* return

* add type handling

* pass in updated params

* move access token function

* ok let's rock!!

* make changes

* pass access token key only

* use temporary client to gen token

* mock request in constructor

* drop unused imports

* add changes

* scaffolding tests

* write unit tests

* added testing

* drop typing (for now)

* update docstring typing

* add tests

* write functions

* update typing

* add poll results

* update table output

* fix tests

* uhhh run it back

* add scope requirements

* add to docs

We can add more here if folks see fit

* one for the money two for the show

---------

Co-authored-by: Jason <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>
Co-authored-by: Jason Walker <[email protected]>
Co-authored-by: Shauna <[email protected]>

* Check for empty tables in zoom poll results (#897)

Co-authored-by: Jason Walker <[email protected]>

* Bump urllib3 from 1.26.5 to 1.26.17 (#901)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.5 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.26.5...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Add MobileCommons Connector (#896)

* mobilecommons class

* Update __init__.py

* get broadcasts

* fix get broadcast request

* Add mc_get_request method

* Add annotation

* Incorporate Daniel's suggestions and finish up get_broadcasts

* A few more methods

Need to figure out page_count issue

* small fix

* Remove page_count, use page record num instead

* Add in page_count again

Not all get responses include num param, but do include page_count. wft

* Fix logging numbers

* Add create_profile

* Fix error message for post request

* Start tests

* Add some tests

* Continue testing

* Update test_mobilecommons.py

* functionalize status_code check

* break out parse_get_request function

* fix test data

* fix documentation typo

* Add several tests

* Update mobilecommons.py

* Fix limit and pagination logic

* debug unit testing

* better commenting and logic

* Documentation

* Add MC to init file

* Revert "Merge branch 'main' into cormac-mobilecommons-connector"

This reverts commit cad250f, reversing
changes made to 493e117.

* Revert "Add MC to init file"

This reverts commit 493e117.

* Revert "Revert "Add MC to init file""

This reverts commit 8f87ec2.

* Revert "Revert "Merge branch 'main' into cormac-mobilecommons-connector""

This reverts commit 8190052.

* Fix init destruction

* fix init yet again

* Update testing docs with underscores

* Lint

* Lint tests

* break up long responses

* Fix more linting issues

* Hopefully last linting issue

* DGJKSNCHIVBN

* Documentation fixes

* Remove note to self

* date format

* remove random notes

* Update test_mobilecommons.py

---------

Co-authored-by: sharinetmc <[email protected]>

* #741 : Deprecate Slack chat.postMessage `as_user` argument and allow for new authorship arguments (#891)

* remove the argument and add a warning that the usage is deprecated

* remove usage of as_user from sample code

* add in the user customization arguments in lieu of the deprecated as_user argument

* add comment regarding the permissions required to use these arguments

* use kwargs

* surface the whole response

* allow usage of the deprecated argument but surface the failed response better

* add to retry

* delete test file

* fix linting

* formatting to fix tests

* fix if style

* add warning for using thread_ts

* move the documentation to the optional arguments

* #816 Airtable.get_records() fields argument can be either str or list (#892)

* allow fields to be a str object

* remove newline

* Nir's actionnetwork changes (#900)

* working on adding a function to ActionNetwork and took care of a lint issue

* init

* working on all get functions

* actionnetwork functions batch 1 is ready

* linting and black formatted compliance

* removed unwanted/unused lines

* merged updated main

* did some linting

* added some more get functions to support all ActionNetwork objects (Advocacy Campaigns, Attendances, Campaigns, Custom Fields, Donations, Embeds, Event Campaigns, Events, Forms, Fundraising Pages, Items, Lists, Messages, Metadata, Outreaches, People, Petitions, Queries, Signatures, Submissions, Tags, Taggings, Wrappers)

* worked on linting again

* fix airtable.insert_records table arg (#907)

* Add canales s3 functions (#885)

* add raw s3 functions to parsons

* add selected functions to s3.py

* delete redundant functions and move drop_and_save function to redshift.py

* create test file

* add s3 unit tests

* add rs.drop_and_unload unit test

* add printing for debugging

* remove testing file

* unsaved changes

* remove unused packages

* remove unneeded module

* Bump urllib3 from 1.26.17 to 1.26.18 (#904)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.26.17...1.26.18)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: sharinetmc <[email protected]>

* New connector for working with the Catalist Match API (#912)

* Enable api_connector to return error message in `text` attribute

Some API error responses contain the error message in the `text`
attribute, so this update makes it possible to fetch that message if
it exists.

* New connector to work with the Catalist Match API

* Add pytest-mock to requirements to support mocking in pytests

* Tests on the catalist match connector

* More open ended pytest-mock version for compatibility

* Expand docstring documentation based on feedback in PR

* More verbose error on match failure

* Parameterize template_id variable

* Expand docstrings on initial setup

* Include Catalist documentation rst file

* Enhancement: Action Network Connector: Added unpack_statistics param in get_messages method (#917)

* Adds parameter to get_messages

This adds the ability to unpack the statistics which are returned as a nested dictionary in the response.

* added unpack_statistics to an.get_messages()

* added parameters to get_messages and built tests

* changes unpack_statistics to False by default.

* added tbl variable

* formatted with black

* fixed docs

---------

Co-authored-by: mattkrausse <[email protected]>

* Adding rename_columns method to Parsons Table (#923)

* added rename_columns for multiple cols

* linted

* added clarification to docs about dict structure

* updated docs

---------

Co-authored-by: mattkrausse <[email protected]>

* Add http response to update_mailer (#924)

Without returning the response, or at least the status code, it's impossible to check for errors.

* Enable passing arbitrary additional fields to NGPVAN person match API (#916)

* match gcs api to s3

* Revert "Merge branch 'main' into ianferguson/gcs-pathing"

This reverts commit 5b1ef6e, reversing
changes made to f0eb3d6.

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Kasia Hinkson <[email protected]>
Co-authored-by: Jason <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>
Co-authored-by: Jason Walker <[email protected]>
Co-authored-by: Shauna <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Cormac Martinez del Rio <[email protected]>
Co-authored-by: sharinetmc <[email protected]>
Co-authored-by: Angela Gloyna <[email protected]>
Co-authored-by: NirTatcher <[email protected]>
Co-authored-by: justicehaze <[email protected]>
Co-authored-by: mattkrausse <[email protected]>
Co-authored-by: mkrausse-ggtx <[email protected]>
Co-authored-by: Sophie Waldman <[email protected]>

* Bump version number to 3.0.0

* Fix import statement in test_bigquery.py

* Add Tests to major-release Branch (#949)

* add major release branch to gh workflow

* add mac tests

* null changes (want to trigger test)

* remove temp change

* Resolve GCP Test Failures For Major Release (#948)

* add full import

* resolve bigquery unzip test

* remove keyword

* fix flake8 errors

* fix linting

* push docs

* fix flake8

* too long for flake8

* Install google-cloud-storage-transfer for google extras (#946)

This is required for the import of storage_transfer to work
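For example, the extra dependency can be installed with `pip install google-cloud-storage-transfer` (command inferred from the PR title).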

Co-authored-by: Ian <[email protected]>

* Revert "Enable passing `identifiers` to ActionNetwork `upsert_person()` (#861)" (#945)

This reverts commit 77ead60.

Co-authored-by: Ian <[email protected]>

* BigQuery - Add row count function to connector (#913)

* add row count function

* use sql

* add unit test

* unit test

* whoops!
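A hedged sketch of the kind of SQL such a row count boils down to (the new helper's exact name isn't shown in this log; `bigquery`, `project`, `dataset`, and `table` follow the GoogleBigQuery docs examples later in this commit):

```python
# Sketch only: the helper added in #913 wraps a COUNT(*) query like this one.
result = bigquery.query(
    f"SELECT COUNT(*) AS row_count FROM `{project}.{dataset}.{table}`"
)
print(result[0]["row_count"])  # assumes Parsons Table row indexing
```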

* add examples (#952)

* Parse Boolean types by default (#943)

* Parse Boolean types by default

Commit 766cfae created a feature for
parsing boolean types but turned it off by default. This commit turns
that feature on by default and adds a comment about how to turn it off
and what that does.

* Fix test expectations after updating boolean parsing behavior

* Only ever interpret python bools as SQL booleans

No longer coerce by default any of the following as booleans:
"yes", "True", "t", 1, 0, "no", "False", "f"

* Fix redshift test parsing bools

* Move redshift test into test_databases folder

* Remove retired TRUE_VALS and FALSE_VALS configuration variables

We now only use python booleans
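A minimal illustration of the rule described above (a sketch, not the actual Parsons type-mapping code):

```python
def is_sql_bool(value):
    # Only genuine Python booleans map to a SQL BOOL column; strings and ints
    # like "yes", "t", 1, or 0 are no longer coerced.
    return isinstance(value, bool)

assert is_sql_bool(True) and is_sql_bool(False)
assert not is_sql_bool("yes") and not is_sql_bool(1)
```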

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Jason <[email protected]>
Co-authored-by: Austin Weisgrau <[email protected]>
Co-authored-by: Ian <[email protected]>
Co-authored-by: Cody Gordon <[email protected]>
Co-authored-by: sjwmoveon <[email protected]>
Co-authored-by: Alex French <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: willyraedy <[email protected]>
Co-authored-by: Jason Walker <[email protected]>
Co-authored-by: sharinetmc <[email protected]>
Co-authored-by: Kathy Nguyen <[email protected]>
Co-authored-by: Kasia Hinkson <[email protected]>
Co-authored-by: dexchan <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Cormac Martinez del Rio <[email protected]>
Co-authored-by: Angela Gloyna <[email protected]>
Co-authored-by: NirTatcher <[email protected]>
Co-authored-by: justicehaze <[email protected]>
Co-authored-by: mattkrausse <[email protected]>
Co-authored-by: mkrausse-ggtx <[email protected]>
Co-authored-by: Sophie Waldman <[email protected]>
22 people authored Dec 8, 2023
1 parent a13b2ca commit 626edc7
Showing 27 changed files with 2,186 additions and 627 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/test-linux-windows.yml
@@ -2,9 +2,9 @@ name: tests

 on:
   pull_request:
-    branches: ["main"]
+    branches: ["main", "major-release"]
   push:
-    branches: ["main"]
+    branches: ["main", "major-release"]

 env:
   TESTING: 1
4 changes: 2 additions & 2 deletions .github/workflows/tests-mac.yml
@@ -3,9 +3,9 @@ name: tests for mac

 on:
   pull_request:
-    branches: ["main"]
+    branches: ["main", "major-release"]
   push:
-    branches: ["main"]
+    branches: ["main", "major-release"]

 env:
   TESTING: 1
1 change: 1 addition & 0 deletions .gitignore
@@ -110,6 +110,7 @@ venv.bak/
 # scratch
 scratch*
 old!_*
+test.ipynb

 # vscode
 .vscode/
155 changes: 154 additions & 1 deletion CONTRIBUTING.md
@@ -12,4 +12,157 @@ You can contribute by:
* [teaching and mentoring](https://www.parsonsproject.org/pub/contributing-guide#teaching-and-mentoring)
* [helping "triage" issues and review pull requests](https://www.parsonsproject.org/pub/contributing-guide#maintainer-tasks)

If you're not sure how to get started, please ask for help! We're happy to chat and help you find the best way to get involved.
We encourage folks to review existing issues before starting a new issue.

* If the issue you want exists, feel free to use the *thumbs up* emoji to up vote the issue.
* If you have additional documentation or context that would be helpful, please add using comments.
* If you have code snippets but don’t have time to do the full write-up, please add them to the issue!

We use labels to help us classify issues. They include:
* **bug** - something in Parsons isn’t working the way it should
* **enhancement** - new feature or request (e.g. a new API connector)
* **good first issue** - an issue that would be good for someone who is new to Parsons

## Contributing Code to Parsons

Generally, code contributions to Parsons will be either enhancements or bug requests (or contributions of [sample code](#sample-code), discussed below). All changes to the repository are made [via pull requests](#submitting-a-pull-request).

If you would like to contribute code to Parsons, please review the issues in the repository and find one you would like to work on. If you are new to Parsons or to open source projects, look for issues with the [**good first issue**](https://github.com/move-coop/parsons/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label. Once you have found your issue, please add a comment to the issue that lets others know that you are interested in working on it. If you're having trouble finding something to work on, please ask us for help on Slack.

The bulk of Parsons is made up of Connector classes, which are Python classes that help move data in and out of third party services. When you feel ready, you may want to contribute by [adding a new Connector class](https://move-coop.github.io/parsons/html/build_a_connector.html).

### Making Changes to Parsons

To make code changes to Parsons, you'll need to set up your development environment, make your changes, and then submit a pull request.

To set up your development environment:

* Fork the Parsons project using [the “Fork” button in GitHub](https://guides.github.com/activities/forking/)
* Clone your fork to your local computer
* Set up a [virtual environment](#virtual-environments)
* Install the [dependencies](#installing-dependencies)
* Check that everything's working by [running the unit tests](#unit-tests) and the [linter](#linting)

Now it's time to make your changes. We suggest taking a quick look at our [coding conventions](#coding-conventions) - it'll make the review process easier down the line. In addition to any code changes, make sure to update the documentation and the unit tests if necessary. Not sure if your changes require test or documentation updates? Just ask in Slack or through a comment on the relevant issue. When you're done, make sure to run the [unit tests](#unit-tests) and the [linter](#linting) again.

Finally, you'll want to [submit a pull request](#submitting-a-pull-request). And that's it!

#### Virtual Environments

If required dependencies conflict with packages or modules you need for other projects, you can create and use a [virtual environment](https://docs.python.org/3/library/venv.html).

```
python3 -m venv .venv # Creates a virtual environment in the .venv folder
source .venv/bin/activate # Activate in Unix or MacOS
.venv/Scripts/activate.bat # Activate in Windows
```

#### Installing Dependencies

Before running or testing your code changes, be sure to install all of the required Python libraries that Parsons depends on.

From the root of the Parsons repository, run the following command:

```bash
> pip install -r requirements.txt
```

#### Unit Tests

When contributing code, we ask you to add tests that can be used to verify that the code is working as expected. All of our unit tests are located in the `test/` folder at the root of the repository.

We use the pytest tool to run our suite of automated unit tests. The pytest command line tool is installed as part of the Parsons dependencies.

To run the entire suite of unit tests, execute the following command:

```bash
> pytest -rf test/
```

Once the pytest tool has finished running all of the tests, it will output details around any errors or test failures it encountered. If no failures are identified, then you are good to go!

**Note:** Some tests are written to call out to external APIs, and will be skipped as part of standard unit testing. This is expected.

See the [pytest documentation](https://docs.pytest.org/en/latest/contents.html) for more info and many more options.

#### Linting

We use the [black](https://github.com/psf/black) and [flake8](http://flake8.pycqa.org/en/latest/) tools to [lint](https://en.wikipedia.org/wiki/Lint_(software)) the code in the repository to make sure it matches our preferred style. Both tools are installed as part of the Parsons dependencies.

Run the following commands from the root of the Parsons repository to lint your code changes:

```bash
> flake8 --max-line-length=100 --extend-ignore=E203,W503 parsons
> black parsons
```

Pre-commit hooks are available to enforce black and isort formatting on
commit. You can also set up your IDE to reformat using black and/or isort on
save.

To set up the pre-commit hooks, install pre-commit with `pip install
pre-commit`, and then run `pre-commit install`.

#### Coding Conventions

The following is a list of best practices to consider when writing code for the Parsons project:

* Each tool connector should be its own unique class (e.g. ActionKit, VAN) in its own Python package. Use existing connectors as examples when deciding how to layout your code.

* Methods should be named using a verb_noun structure, such as `get_activist()` or `update_event()`.

* Methods should reflect the vocabulary utilized by the original tool where possible to maintain transparency. For example, Google Cloud Storage refers to file-like objects as blobs, so the methods are called `get_blob()` rather than `get_file()`.

* Methods that can work with arbitrarily large data (e.g. database or API queries) should use Parsons Tables to hold the data instead of standard Python collections (e.g. lists, dicts); see the sketch after this list.

* You should avoid abbreviations for method names and variable names where possible.

* Inline comments explaining complex code and methods are appreciated.

* Capitalize the word Parsons for consistency where possible, especially in documentation.

If you are building a new connector or extending an existing connector, there are more best practices in the [How to Build a Connector](https://move-coop.github.io/parsons/html/build_a_connector.html) documentation.
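For instance, a data-fetching method following the Parsons Table convention above might look like this (a minimal sketch; the API client and endpoint are hypothetical):

```python
from parsons import Table

def get_activists(api_client):
    # `api_client` and the "activists" endpoint are hypothetical; the point is
    # that records come back wrapped in a Parsons Table, not a plain list.
    records = api_client.get("activists")
    return Table(records)
```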

## Documentation

Parsons documentation is built using the Python Sphinx tool. Sphinx uses the `docs/*.rst` files in the repository to create the documentation.

We have a [documentation label](https://github.com/move-coop/parsons/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation) that may help you find good docs issues to work on. If you are adding a new connector, you will need to add a reference to the connector to one of the .rst files. Please use the existing documentation as an example.

When editing documentation, make sure you are editing the source files (with .md or .rst extension) and not the build files (.html extension).

The workflow for documentation changes is a bit simpler than for code changes:

* Fork the Parsons project using [the “Fork” button in GitHub](https://guides.github.com/activities/forking/)
* Clone your fork to your local computer
* Change into the `docs` folder and install the requirements with `pip install -r requirements.txt` (you may want to set up a [virtual environment](#virtual-environments) first)
* Make your changes and re-build the docs by running `make html`. (Note: this builds only a single version of the docs, from the current files. To create docs with multiple versions like our publicly hosted docs, run `make deploy_docs`.)
* Open these files in your web browser to check that they look as you expect.
* [Submit a pull request](#submitting-a-pull-request)

When you make documentation changes, you only need to track the source files with git. The built docs (the HTML files) should not be included.

You should not need to worry about the unit tests or the linter if you are making documentation changes only.

## Contributing Sample Code

One important way to contribute to the Parsons project is to submit sample code that provides recipes and patterns for how to use the Parsons library.

We have a folder called `useful_resources/` in the root of the repository. If you have scripts that incorporate Parsons, we encourage you to add them there!

The workflow for adding sample code is:

* Fork the Parsons project using [the “Fork” button in GitHub](https://guides.github.com/activities/forking/)
* Clone your fork to your local computer
* Add your sample code into the `useful_resources/` folder
* [Submit a pull request](#submitting-a-pull-request)

You should not need to worry about the unit tests or the linter if you are only adding sample code.

## Submitting a Pull Request

To submit a pull request, follow [these instructions to create a Pull Request from your fork](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request-from-a-fork) back to the original Parsons repository.

The Parsons team will review your pull request and provide feedback. Please feel free to ping us if no one's responded to your Pull Request after a few days. We may not be able to review it right away, but we should be able to tell you when we'll get to it.

Once your pull request has been approved, the Parsons team will merge your changes into the Parsons repository.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -42,4 +42,4 @@ RUN python setup.py develop
 RUN mkdir /app
 WORKDIR /app
 # Useful for importing modules that are associated with your python scripts:
-env PYTHONPATH=.:/app
+ENV PYTHONPATH=.:/app
12 changes: 6 additions & 6 deletions docs/airtable.rst
@@ -6,7 +6,7 @@ Overview
 ********

 The Airtable class allows you to interact with an `Airtable <https://airtable.com/>`_ base. In order to use this class
-you must generate an Airtable API Key which can be found in your Airtable `account settings <https://airtable.com/account>`_.
+you must generate an Airtable personal access token which can be found in your Airtable `settings <https://airtable.com/create/tokens>`_.

 .. note::
     Finding The Base Key
@@ -18,20 +18,20 @@ you must generate an Airtable API Key which can be found in your Airtable `accou
 **********
 QuickStart
 **********
-To instantiate the Airtable class, you can either store your Airtable API
-``AIRTABLE_API_KEY`` as an environmental variable or pass in your api key
+To instantiate the Airtable class, you can either store your Airtable personal access token
+``AIRTABLE_PERSONAL_ACCESS_TOKEN`` as an environmental variable or pass in your personal access token
 as an argument. You also need to pass in the base key and table name.

 .. code-block:: python

     from parsons import Airtable

-    # First approach: Use API credentials via environmental variables and pass
+    # First approach: Use personal access token via environmental variable and pass
     # the base key and the table as arguments.
     at = Airtable(base_key, 'table01')

-    # Second approach: Pass API credentials, base key and table name as arguments.
-    at = Airtable(base_key, 'table01', api_key='MYFAKEKEY')
+    # Second approach: Pass personal access token, base key and table name as arguments.
+    at = Airtable(base_key, 'table01', personal_access_token='MYFAKETOKEN')

 You can then call various endpoints:
32 changes: 21 additions & 11 deletions docs/google.rst
@@ -68,7 +68,7 @@ Google Cloud projects.
Quickstart
==========

-To instantiate the GoogleBigQuery class, you can pass the constructor a string containing either the name of the Google service account credentials file or a JSON string encoding those credentials. Alternatively, you can set the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` to be either of those strings and call the constructor without that argument.
+To instantiate the `GoogleBigQuery` class, you can pass the constructor a string containing either the name of the Google service account credentials file or a JSON string encoding those credentials. Alternatively, you can set the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` to be either of those strings and call the constructor without that argument.

.. code-block:: python
@@ -78,16 +78,18 @@ To instantiate the GoogleBigQuery class, you can pass the constructor a string c
# be the file name or a JSON encoding of the credentials.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'google_credentials_file.json'
big_query = GoogleBigQuery()
bigquery = GoogleBigQuery()
Alternatively, you can pass the credentials in as an argument. In the example below, we also specify the project.

.. code-block:: python
# Project in which we're working
project = 'parsons-test'
big_query = GoogleBigQuery(app_creds='google_credentials_file.json',
project=project)
bigquery = GoogleBigQuery(
app_creds='google_credentials_file.json',
project=project
)
We can now upload/query data.

@@ -98,7 +100,7 @@ We can now upload/query data.
# Table name should be project.dataset.table, or dataset.table, if
# working with the default project
table_name = project + '.' + dataset + '.' + table
table_name = f"`{project}.{dataset}.{table}`"
# Must be pre-existing bucket. Create via GoogleCloudStorage() or
# at https://console.cloud.google.com/storage/create-bucket. May be
Expand All @@ -107,23 +109,31 @@ We can now upload/query data.
gcs_temp_bucket = 'parsons_bucket'
# Create dataset if it doesn't already exist
big_query.client.create_dataset(dataset=dataset, exists_ok=True)
bigquery.client.create_dataset(dataset=dataset, exists_ok=True)
parsons_table = Table([{'name':'Bob', 'party':'D'},
{'name':'Jane', 'party':'D'},
{'name':'Sue', 'party':'R'},
{'name':'Bill', 'party':'I'}])
# Copy table in to create new BigQuery table
big_query.copy(table_obj=parsons_table,
table_name=table_name,
tmp_gcs_bucket=gcs_temp_bucket)
bigquery.copy(
table_obj=parsons_table,
table_name=table_name,
tmp_gcs_bucket=gcs_temp_bucket
)
# Select from project.dataset.table
big_query.query(f'select name from {table_name} where party = "D"')
bigquery.query(f'select name from {table_name} where party = "D"')
# Query with parameters
bigquery.query(
f"select name from {table_name} where party = %s",
parameters=["D"]
)
# Delete the table when we're done
big_query.client.delete_table(table=table_name)
bigquery.client.delete_table(table=table_name)
===
API
13 changes: 8 additions & 5 deletions parsons/airtable/airtable.py
@@ -15,14 +15,17 @@ class Airtable(object):
         table_name: str
             The name of the table in the base. The table name is the equivilant of the sheet name
             in Excel or GoogleDocs.
-        api_key: str
-            The Airtable provided api key. Not required if ``AIRTABLE_API_KEY`` env variable set.
+        personal_access_token: str
+            The Airtable personal access token. Not required if ``AIRTABLE_PERSONAL_ACCESS_TOKEN``
+            env variable set.
     """

-    def __init__(self, base_key, table_name, api_key=None):
+    def __init__(self, base_key, table_name, personal_access_token=None):

-        self.api_key = check_env.check("AIRTABLE_API_KEY", api_key)
-        self.client = client(base_key, table_name, self.api_key)
+        self.personal_access_token = check_env.check(
+            "AIRTABLE_PERSONAL_ACCESS_TOKEN", personal_access_token
+        )
+        self.client = client(base_key, table_name, self.personal_access_token)

     def get_record(self, record_id):
         """
4 changes: 0 additions & 4 deletions parsons/databases/database/constants.py
@@ -153,11 +153,7 @@

 VARCHAR = "varchar"
 FLOAT = "float"
-
-DO_PARSE_BOOLS = False
 BOOL = "bool"
-TRUE_VALS = ("TRUE", "T", "YES", "Y", "1", 1)
-FALSE_VALS = ("FALSE", "F", "NO", "N", "0", 0)

 # The following values are the minimum and maximum values for MySQL int
 # types. https://dev.mysql.com/doc/refman/8.0/en/integer-types.html
