diff --git a/README.md b/README.md
index 8faa8b16..f6c86fed 100644
--- a/README.md
+++ b/README.md
@@ -202,10 +202,10 @@ We recommend following these instructions closely to catch as many possible issu
## Frequently Asked Questions
1. ### Why does taking and restoring a database dump affect my expected codd schema?
- `pg_dump` does not dump all of the schema state that codd checks. A few examples include (at least with PG 13) role related state, the database's default transaction isolation level and deferredness, among possibly others. So check that it isn't the case that you get different schemas when that happens. We recommend using `pg_dumpall` to preserve more when possible instead. If you've checked with `psql` and everything looks to be the same please report a bug in codd.
+ `pg_dump` does not dump all of the schema state that codd checks. A few examples (at least with PG 13) include role-related state and the database's default transaction isolation level and deferredness, among possibly others. So if your schema differs after restoring a dump, check whether that state accounts for the difference (see the example below). If you've checked with `psql` and everything looks the same, please report a bug in codd.
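+
+   For example, one quick, non-exhaustive way to inspect some of that state with `psql` in each environment:
+
+   ```sql
+   SHOW default_transaction_isolation;
+   SHOW default_transaction_deferrable;
+   \dg -- role-related state
+   ```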
2. ### Will codd run out of memory or system resources if my migration files are too large or too many?
- Most likely not. Codd reads migrations from disk in streaming fashion and keeps in memory only a single statement at a time. For `COPY` statements, codd uses a constant-size buffer to stream-read the contents and achieve bounded memory usage while staying fast. Also, codd does not open more than one migration file simultaneously to stay well below typical file handle limits imposed by the shell or operating system, and that is also assured through an automated test that runs in CI with `strace`.
+ Most likely not. Codd reads migrations from disk in streaming fashion and keeps in memory only a single statement at a time. For `COPY` statements, codd uses a constant-size buffer to stream-read the contents and achieve bounded memory usage while staying fast. Also, codd does not open more than one migration file simultaneously to stay well below typical file handle limits imposed by the shell or operating system, and that is also assured through an automated test that runs in CI with `strace`. Codd does keep metadata about all pending migrations in memory, but that should be fairly small.
3. ### Will codd handle SQL errors nicely?
Codd tries to do the "best possible thing" even in rather unusual situations. It will retry sets of consecutive in-txn migrations atomically so as not to leave your database in an intermediary state. Even for no-txn migrations, codd will retry the failing statement instead of entire migrations, and _even_ if you write explicit `BEGIN..COMMIT` sections in no-txn migrations, codd will be smart enough to retry from the `BEGIN` if a statement inside that section fails. See the [retry examples](/docs/SQL-MIGRATIONS.md#examples) if you're interested. What codd currently cannot handle well is having its connection killed by an external agent while it's applying a _no-txn_ migration, a scenario which should be extremely rare. Basically, we hope you should be able to write your migrations however you want and rely comfortably on the fact that codd should do the reasonable thing when handling errors.
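+
+   As a hypothetical sketch of the `BEGIN..COMMIT` point above (table names are made up), if the second `UPDATE` below fails, codd retries from the `BEGIN` rather than from the failed statement:
+
+   ```sql
+   -- codd: no-txn
+   INSERT INTO audit_log (msg) VALUES ('starting risky changes');
+   BEGIN;
+   UPDATE accounts SET balance = balance - 10 WHERE id = 1;
+   UPDATE accounts SET balance = balance + 10 WHERE id = 2;
+   COMMIT;
+   ```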
diff --git a/docs/START-USING.md b/docs/START-USING.md
index f115198e..216e325e 100644
--- a/docs/START-USING.md
+++ b/docs/START-USING.md
@@ -4,7 +4,7 @@ If you already have a Database and would like to start using codd, here's a guid
1. Configure your environment variables as explained in the [README](../README.md) and in [CONFIGURATION.md](CONFIGURATION.md).
2. In that configuration make sure you have that extra `dev-only` folder to hold SQL migrations that will only run in developers' machines.
-3. Run `pg_dump your_database > dump-migration.sql` locally. Do not use `pg_dumpall` because it includes _psql_'s meta-commands that codd_ doesn't support.
+3. Run `pg_dump your_database > dump-migration.sql` **locally**. Do not use `pg_dumpall` because it includes _psql_'s meta-commands that codd doesn't support.
4. Run `dropdb your_database` to drop your DB **locally**.
5. Add a bootstrap migration similar to the one exemplified in [BOOTSTRAPPING.md](BOOTSTRAPPING.md), but with ownership, encoding and locale equal to your Production DB's. The database's and the _public_'s Schema ownership might need some manual intervention to match in different environments.
- **What do we mean?** Cloud services such as Amazon's RDS will create Schemas and DBs owned by users managed by them - such as the `rdsadmin` user -, that we don't usually replicate locally. We can either replicate these locally so we don't need to touch our Prod DB or change our Prod DB so only users managed by us are ever referenced in any environment.
@@ -12,9 +12,9 @@ If you already have a Database and would like to start using codd, here's a guid
- Use _psql_'s `\dg` to view roles in your Prod DB.
- Use _psql_'s `\l` to check DB ownership and permissions of your Prod DB.
- Use _psql_'s `\dn+` to check the _public_ schema's ownership and permissions in your Prod DB.
-7. Edit `dump-migration.sql` (created in step 3) and add `-- codd: no-txn` as its very first line.
-8. Run `codd add dump-migration.sql --dest-folder your-dev-only-folder`
-9. You should now have your database back and managed through codd.
-10. Make sure your Production environment variable `CODD_MIGRATION_DIRS` does not contain your `dev-only` folder. Add any future SQL migrations to your `all-migrations` folder.
-11. Before deploying with codd, we strongly recommend you run `codd verify-schema` with your environment variables connected to your Production database and make sure schemas match.
-12. In Production, we strongly recommend running `codd up --lax-check` (the default, so equivalent to `codd up`) to start with until you get acquainted enough to consider strict-checking. Make sure you read `codd up --help` to better understand your options.
\ No newline at end of file
+   Once your bootstrapping migration is ready, run `codd add bootstrap-migration.sql --dest-folder your-dev-only-folder`. This will create your database with no tables or data in it.
+7. Run `codd add dump-migration.sql --dest-folder your-dev-only-folder`. Dumps can sometimes fail to apply due to privileges enforced by PostgreSQL itself, so edit the dump file as needed until it applies cleanly. This often means adding a custom `-- codd-connection` comment at the top so the migration runs as a sufficiently privileged user, such as the `postgres` user (example below).
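+
+   For example, a hypothetical first line for the dump file (adjust the connection string to your setup):
+
+   ```sql
+   -- codd-connection: postgres://postgres@127.0.0.1:5432/your_database
+   ```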
+8. You should now have your database back and managed through codd.
+9. Make sure your Production environment variable `CODD_MIGRATION_DIRS` does not contain your `dev-only` folder. Add any future SQL migrations to your `all-migrations` folder.
+10. Before deploying with codd, we strongly recommend running `codd verify-schema` with your environment variables pointing at your Production database to make sure schemas match.
+11. In Production, we strongly recommend running `codd up --lax-check` (the default) at first, until you are acquainted enough to consider strict-checking. Make sure you read `codd up --help` to better understand your options.
diff --git a/src/Codd/Internal.hs b/src/Codd/Internal.hs
index b895f943..a95adfab 100644
--- a/src/Codd/Internal.hs
+++ b/src/Codd/Internal.hs
@@ -38,6 +38,7 @@ import Codd.Query ( CanStartTxn
, NotInTxn
, execvoid_
, query
+ , queryMay
, txnStatus
, unsafeQuery1
, withTransaction
@@ -47,6 +48,7 @@ import Codd.Representations ( DbRep
, readRepresentationsFromDbWithSettings
)
import Codd.Types ( TxnIsolationLvl(..) )
+import Control.Applicative ( Alternative(..) )
import Control.Monad ( (>=>)
, foldM
, forM
@@ -59,6 +61,7 @@ import Control.Monad.IO.Class ( MonadIO(..) )
import Control.Monad.Trans ( MonadTrans(..) )
import Control.Monad.Trans.Resource ( MonadThrow )
import Data.Functor ( (<&>) )
+import Data.Int ( Int64 )
import Data.Kind ( Type )
import qualified Data.List as List
import Data.List ( sortOn )
@@ -81,6 +84,8 @@ import qualified Database.PostgreSQL.LibPQ as PQ
import qualified Database.PostgreSQL.Simple as DB
import qualified Database.PostgreSQL.Simple.Time
as DB
+import qualified Database.PostgreSQL.Simple.ToRow
+ as DB
import qualified Formatting as Fmt
import Streaming ( Of(..) )
import qualified Streaming.Prelude as Streaming
@@ -100,7 +105,7 @@ import UnliftIO ( Exception
, readIORef
, timeout
, try
- , writeIORef
+                                                , writeIORef
+                                                , modifyIORef'
)
import UnliftIO.Concurrent ( threadDelay )
import UnliftIO.Directory ( listDirectory )
@@ -114,11 +119,6 @@ import UnliftIO.Exception ( IOException
import UnliftIO.IO ( IOMode(ReadMode)
, openFile
)
-import UnliftIO.MVar ( modifyMVar
- , modifyMVar_
- , newMVar
- , readMVar
- )
import UnliftIO.Resource ( MonadResource
, ReleaseKey
, ResourceT
@@ -306,36 +306,126 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
-- This function is complex because:
-- 1. We need to open connections as late as possible due to bootstrapping.
-- 2. We want to insert into codd_schema.sql_migrations as early as possible even for custom-connection migrations.
- -- 3. When possible, we want to insert into codd_schema.sql_migrations in the same transaction the migrations are running.
+    -- 3. Although the item above says we want to register applied migrations as early as possible, we delay that just enough to allow migrations to create codd_schema themselves (so that dumps can be migrations), as long as atomicity is not harmed.
+    -- 4. We allow migrations themselves to insert into codd_schema.sql_migrations (the reasonable use case is dumps as migrations), so we must skip migrations that have already been registered as applied by a previous migration.
+ -- 5. When possible, we want to insert into codd_schema.sql_migrations in the same transaction the migrations are running.
let dbName = Text.pack $ DB.connectDatabase defaultConnInfo
hoistedBlocks :: [BlockOfMigrations txn] =
map (hoistBlockOfMigrations lift) pendingMigs
- -- Note: We could probably compound this Monad with StateT instead of using an MVar, but IIRC that creates issues
+ -- Note: We could probably compound this Monad with StateT instead of using IORefs, but IIRC that creates issues
-- with MonadUnliftIO.
- connsPerInfo <- newMVar (mempty :: [(DB.ConnectInfo, DB.Connection)])
- unregisteredButAppliedMigs <- newMVar (mempty :: [AppliedMigration])
- coddSchemaUpToDate <-
- newMVar $ coddSchemaVersion bootstrapCheck == maxBound
- let openConn :: DB.ConnectInfo -> m (ReleaseKey, DB.Connection)
+ connsPerInfo <- newIORef (mempty :: [(DB.ConnectInfo, DB.Connection)])
+ unregisteredButAppliedMigs <- newIORef (mempty :: [AppliedMigration])
+ lastKnownCoddSchemaVersionRef <- newIORef $ coddSchemaVersion bootstrapCheck
+ let coddSchemaUpToDate :: forall n . MonadUnliftIO n => n Bool
+ coddSchemaUpToDate =
+ (== maxBound) <$> readIORef lastKnownCoddSchemaVersionRef
+
+ openConn :: DB.ConnectInfo -> m (ReleaseKey, DB.Connection)
openConn cinfo = flip allocate DB.close $ do
- mConn <- lookup cinfo <$> readMVar connsPerInfo
+ mConn <- lookup cinfo <$> readIORef connsPerInfo
case mConn of
Just conn -> pure conn
- Nothing -> modifyMVar connsPerInfo $ \m -> do
+ Nothing -> do
+ currentlyOpenConns <- readIORef connsPerInfo
-- print
-- $ "Connecting to (TODO: REDACT PASSWORD) "
-- <> Text.pack (show cinfo)
conn <- connectWithTimeout cinfo connectTimeout
- pure ((cinfo, conn) : m, conn)
-
- queryConn :: DB.ConnectInfo -> m (Maybe DB.Connection)
- queryConn cinfo = lookup cinfo <$> readMVar connsPerInfo
+ modifyIORef' connsPerInfo $ const $ (cinfo, conn) : currentlyOpenConns
+ pure conn
+
+ queryConn
+ :: forall n
+ . MonadIO n
+ => DB.ConnectInfo
+ -> n (Maybe DB.Connection)
+ queryConn cinfo = lookup cinfo <$> readIORef connsPerInfo
+
+        -- | Checks the application status of an arbitrary migration. Used because dumps-as-migrations can insert into codd_schema.sql_migrations themselves,
+        -- in which case we should detect that and skip migrations that were collected as pending at an earlier stage.
+        -- Returns `Nothing` if the migration has not been applied at all.
+ hasMigBeenApplied
+ :: forall n
+ . MonadUnliftIO n
+ => Maybe DB.Connection
+ -> FilePath
+ -> n (Maybe MigrationApplicationStatus)
+ hasMigBeenApplied mDefaultDatabaseConn fp = do
+ mDefaultConn <- queryConn defaultConnInfo
+ lastKnownCoddSchemaVersion <- readIORef lastKnownCoddSchemaVersionRef
+ apunregmigs <- readIORef unregisteredButAppliedMigs
+ let appliedUnreg = List.find
+ (\apmig -> appliedMigrationName apmig == fp)
+ apunregmigs
+ case appliedUnreg of
+ Just m -> pure $ Just $ appliedMigrationStatus m
+ Nothing -> -- We use the same connection as the one applying migrations if it's in the default database, otherwise we try to use the default conn if it's available
+ case mDefaultDatabaseConn <|> mDefaultConn of
+ Nothing -> pure Nothing
+ Just connToUse -> do
+ -- If in-memory info says codd_schema does not exist or is not the latest, a migration may have created it or upgraded it and we're just not aware yet, so check that.
+ refinedCoddSchemaVersion <-
+ if lastKnownCoddSchemaVersion < maxBound
+ then
+ do
+ actualVersion <- detectCoddSchema
+ connToUse
+ modifyIORef'
+ lastKnownCoddSchemaVersionRef
+ (const actualVersion)
+ pure actualVersion
+ else
+ pure lastKnownCoddSchemaVersion
+ if refinedCoddSchemaVersion >= CoddSchemaV3
+ then queryMay
+ connToUse
+ "SELECT num_applied_statements, no_txn_failed_at FROM codd_schema.sql_migrations WHERE name=?"
+ (DB.Only fp)
+ else if refinedCoddSchemaVersion >= CoddSchemaV1
+ then do
+ queryMay
+ connToUse
+ "SELECT 0, NULL::timestamptz FROM codd_schema.sql_migrations WHERE name=?"
+ (DB.Only fp)
+ else pure Nothing
+
+
+ createCoddSchemaAndFlushPendingMigrationsDefaultConnection
+ :: DB.Connection -> txn ()
+ createCoddSchemaAndFlushPendingMigrationsDefaultConnection defaultConn
+ = do
+ csUpToDate <- coddSchemaUpToDate
+ unless csUpToDate $ do
+ logInfo
+ "Creating or updating codd_schema..."
+ createCoddSchema @txn maxBound
+ txnIsolationLvl
+ defaultConn
+ modifyIORef' lastKnownCoddSchemaVersionRef
+ (const maxBound)
+ apmigs <- readIORef unregisteredButAppliedMigs
+ withTransaction @txn txnIsolationLvl defaultConn
+ $ forM_ apmigs
+ $ \AppliedMigration {..} ->
+ registerRanMigration @txn
+ defaultConn
+ txnIsolationLvl
+ appliedMigrationName
+ appliedMigrationTimestamp
+ (SpecificTime appliedMigrationAt)
+ appliedMigrationDuration
+ appliedMigrationStatus
+ (SpecificIds appliedMigrationTxnId
+ appliedMigrationConnId
+ )
+ modifyIORef' unregisteredButAppliedMigs $ const []
createCoddSchemaAndFlushPendingMigrations :: m ()
createCoddSchemaAndFlushPendingMigrations = do
mDefaultConn <- queryConn defaultConnInfo
- csUpToDate <- readMVar coddSchemaUpToDate
+ csUpToDate <- coddSchemaUpToDate
case mDefaultConn of
Just defaultConn -> do
unless csUpToDate $ do
@@ -344,20 +434,25 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
createCoddSchema @txn maxBound
txnIsolationLvl
defaultConn
- modifyMVar_ coddSchemaUpToDate $ const $ pure True
- modifyMVar_ unregisteredButAppliedMigs $ \apmigs -> do
- withTransaction @txn txnIsolationLvl defaultConn
- $ forM_ apmigs
- $ \AppliedMigration {..} ->
- registerRanMigration @txn
- defaultConn
- txnIsolationLvl
- appliedMigrationName
- appliedMigrationTimestamp
- (SpecificTime appliedMigrationAt)
- appliedMigrationDuration
- appliedMigrationStatus
- pure []
+ modifyIORef' lastKnownCoddSchemaVersionRef
+ (const maxBound)
+ apmigs <- readIORef unregisteredButAppliedMigs
+ withTransaction @txn txnIsolationLvl defaultConn
+ $ forM_ apmigs
+ $ \AppliedMigration {..} ->
+ registerRanMigration @txn
+ defaultConn
+ txnIsolationLvl
+ appliedMigrationName
+ appliedMigrationTimestamp
+ (SpecificTime appliedMigrationAt)
+ appliedMigrationDuration
+ appliedMigrationStatus
+ (SpecificIds
+ appliedMigrationTxnId
+ appliedMigrationConnId
+ )
+ modifyIORef' unregisteredButAppliedMigs $ const []
Nothing -> pure ()
-- | The function used to register applied migrations for in-txn migrations.
@@ -372,8 +467,8 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
-> txn ()
registerAppliedInTxnMig blockConn blockConnInfo appliedMigrationName appliedMigrationTimestamp appliedMigrationDuration appliedMigrationStatus
= do
- csReady <- readMVar coddSchemaUpToDate
- -- We can insert into codd_schema.sql_migrations with any user
+ csReady <- coddSchemaUpToDate
+ -- If we are on the default database, we can insert into codd_schema.sql_migrations with any user
if DB.connectDatabase blockConnInfo
== DB.connectDatabase defaultConnInfo
&& csReady
@@ -386,23 +481,25 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
NowInPostgresTime
appliedMigrationDuration
appliedMigrationStatus
+ OfCurrentTransaction
else
+ -- If not in the default database, we have to wait until after COMMIT to register that migrations were applied
do
- appliedAt <-
- DB.fromOnly
- <$> unsafeQuery1
- blockConn
- "SELECT clock_timestamp()"
- ()
- modifyMVar_ unregisteredButAppliedMigs $ \apmigs ->
- pure
- $ apmigs
+ (appliedMigrationAt, appliedMigrationTxnId, appliedMigrationConnId) <-
+ unsafeQuery1
+ blockConn
+ "SELECT clock_timestamp(), txid_current(), pg_backend_pid()"
+ ()
+ modifyIORef' unregisteredButAppliedMigs $ \apmigs ->
+ apmigs
++ [ AppliedMigration
{ appliedMigrationName
, appliedMigrationTimestamp
- , appliedMigrationAt = appliedAt
+ , appliedMigrationAt
, appliedMigrationDuration
, appliedMigrationStatus
+ , appliedMigrationTxnId
+ , appliedMigrationConnId
}
]
@@ -419,7 +516,7 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
-> m ()
registerAppliedNoTxnMig blockConn blockConnInfo appliedMigrationName appliedMigrationTimestamp appliedMigrationDuration appliedMigrationStatus
= do
- csReady <- readMVar coddSchemaUpToDate
+ csReady <- coddSchemaUpToDate
case (appliedMigrationStatus, csReady) of
(NoTxnMigrationFailed _, False) -> do
-- Super duper ultra extra special case: we try to create codd_schema as a partially-run no-txn migration may have applied statements that make the default connection string accessible. The same isn't possible with in-txn migrations.
@@ -442,26 +539,30 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
)
of
(Nothing, False) -> do -- No default connection available and migrations running on non-default database
- appliedAt <-
- DB.fromOnly
- <$> unsafeQuery1
- blockConn
- "SELECT clock_timestamp()"
- ()
- modifyMVar_ unregisteredButAppliedMigs
+ (appliedMigrationAt, appliedMigrationTxnId, appliedMigrationConnId) <-
+ unsafeQuery1
+ blockConn
+ "SELECT clock_timestamp(), txid_current(), pg_backend_pid()"
+ ()
+ modifyIORef' unregisteredButAppliedMigs
$ \apmigs ->
- pure
- $ apmigs
+ apmigs
++ [ AppliedMigration
{ appliedMigrationName
, appliedMigrationTimestamp
- , appliedMigrationAt =
- appliedAt
+ , appliedMigrationAt
, appliedMigrationDuration
, appliedMigrationStatus
+ , appliedMigrationTxnId
+ , appliedMigrationConnId
}
]
(Just defaultConn, False) -> do -- Running migrations on non-default database, but default connection is available
+ (appliedMigrationTxnId, appliedMigrationConnId) <-
+ unsafeQuery1
+ blockConn
+ "SELECT txid_current(), pg_backend_pid()"
+ ()
void
$ withTransaction @txn txnIsolationLvl
defaultConn
@@ -473,6 +574,10 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
NowInPostgresTime
appliedMigrationDuration
appliedMigrationStatus
+ (SpecificIds
+ appliedMigrationTxnId
+ appliedMigrationConnId
+ )
(_, True) -> do -- Migrations running on default database means we can register them here if codd_schema exists and is up to date
void
$ withTransaction @txn txnIsolationLvl
@@ -485,6 +590,7 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
NowInPostgresTime
appliedMigrationDuration
appliedMigrationStatus
+ OfCurrentTransaction
@@ -492,32 +598,51 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
(\_ block -> do
let cinfo =
fromMaybe defaultConnInfo (blockCustomConnInfo block)
+ isDefaultConn = cinfo == defaultConnInfo
+ connUsesDefaultDb = DB.connectDatabase cinfo
+ == DB.connectDatabase defaultConnInfo
(_, conn) <- openConn cinfo
- -- Create codd_schema and flush previously applied migrations if possible. We do this here
- -- since we expect _some_ of the migration blocks to use the default connection string, and after
- -- that we can register migrations were applied.
- createCoddSchemaAndFlushPendingMigrations
-
- case
- ( block
- , isOneShotApplication defaultConnInfo pendingMigs
- )
- of
- (BlockInTxn inTxnBlock, True) -> runInTxnBlock
- (fmap Just . actionAfter hoistedBlocks)
- conn
- inTxnBlock
- (registerAppliedInTxnMig conn cinfo)
- (BlockInTxn inTxnBlock, False) -> runInTxnBlock
- (const $ pure Nothing)
+ case block of
+ BlockInTxn inTxnBlock -> do
+ unless isDefaultConn
+ createCoddSchemaAndFlushPendingMigrations
+ runInTxnBlock
+ (do
+ -- Creating codd_schema inside the same transaction as migrations increases atomicity
+ -- and also allows for migrations themselves to create codd_schema, which is useful because then users
+ -- can have dumps as migrations
+ when isDefaultConn
+ $ createCoddSchemaAndFlushPendingMigrationsDefaultConnection
+ conn
+ if isOneShotApplication defaultConnInfo
+ pendingMigs
+ then
+ Just <$> actionAfter hoistedBlocks conn
+ else
+ pure Nothing
+ )
conn
inTxnBlock
(registerAppliedInTxnMig conn cinfo)
- (BlockNoTxn noTxnBlock, _) -> runNoTxnMig
+ (hasMigBeenApplied
+ (if connUsesDefaultDb
+ then Just conn
+ else Nothing
+ )
+ )
+ BlockNoTxn noTxnBlock -> do
+ createCoddSchemaAndFlushPendingMigrations
+ runNoTxnMig
conn
noTxnBlock
(registerAppliedNoTxnMig conn cinfo)
+ (hasMigBeenApplied
+ (if connUsesDefaultDb
+ then Just conn
+ else Nothing
+ )
+ )
)
Nothing
pendingMigs
@@ -541,7 +666,7 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
where
runInTxnBlock
- :: (DB.Connection -> txn b)
+ :: txn b
-> DB.Connection
-> ConsecutiveInTxnMigrations m
-> ( FilePath
@@ -550,8 +675,9 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
-> MigrationApplicationStatus
-> txn ()
) -- ^ Using the `txn` monad is right: registering applied migrations happens in the same connection that applies migrations if that is a default-database connection, and otherwise will be scheduled to be inserted into codd_schema.sql_migrations in the first future opportunity, meaning when this function is called it's merely an in-memory operation, which can also run in `txn`.
+ -> (FilePath -> txn (Maybe MigrationApplicationStatus)) -- ^ Function to check if a migration has been applied
-> m b
- runInTxnBlock act conn migBlock registerMig =
+ runInTxnBlock act conn migBlock registerMig hasMigBeenApplied =
-- Naturally, we retry entire in-txn block transactions on error, not individual statements or individual migrations
retryFold
retryPolicy
@@ -574,10 +700,25 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
let hoistedMigs :: NonEmpty (AddedSqlMigration txn)
hoistedMigs = hoistAddedSqlMigration lift
<$> inTxnMigs blockFinal
- errorOrOk <- forMExcept hoistedMigs $ applySingleMigration
- conn
- registerMig
- NoSkipStatements
+ errorOrOk <- forMExcept hoistedMigs $ \mig -> do
+ let migname = migrationName $ addedSqlMig mig
+ appStatus <- hasMigBeenApplied migname
+ case appStatus of
+ Just (NoTxnMigrationFailed _) ->
+ error
+ $ migname
+ ++ " is an in-txn migration, yet when I look in codd_schema.sql_migrations it's registered as a partially applied no-txn migration. If you aren't messing with codd's internals, this is a bug. Otherwise you're trying to do something unsupported."
+ Just _ -> do
+ logInfo
+ ( "Skipping "
+ <> Text.pack migname
+ )
+ pure (Right ())
+ Nothing -> applySingleMigration
+ conn
+ registerMig
+ NoSkipStatements
+ mig
case errorOrOk of
Left e -> do
liftIO $ DB.rollback conn
@@ -585,7 +726,7 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
"ROLLBACKed transaction"
pure $ Left e
Right () -> do
- res <- act conn
+ res <- act
-- Also catch exceptions on COMMIT so they're treated as a retriable error
commitE <- try $ liftIO $ DB.execute_ conn
"COMMIT"
@@ -609,29 +750,43 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
-> DiffTime
-> MigrationApplicationStatus
-> m ()
- ) -- ^ This is `m` instead of `txn` and is correct, unlike in `runInTxnBlock`. The reason is that there is no transaction opened by codd for no-txn migrations, and so the function that registers applied migrations needs to start its own transaction.
+ ) -- ^ This is `m` instead of `txn` and is correct. The reason is that there is no transaction opened by codd for no-txn migrations, and so the function that registers applied migrations needs to start its own transaction.
+ -> (FilePath -> m (Maybe MigrationApplicationStatus)) -- ^ Function to check if a migration has been applied
-> m (Maybe x)
- runNoTxnMig conn mig registerMig = do
+ runNoTxnMig conn mig registerMig hasMigBeenApplied = do
+ let migname = migrationName $ addedSqlMig $ singleNoTxnMig mig
retryFold
retryPolicy
(\(previousMig, _) RetryIteration { lastError } ->
case lastError of
Nothing -> do
- let numStmtsApplied =
- numStatementsAlreadyApplied mig
- when (numStmtsApplied > 0)
- $ logWarn
- $ "Resuming application of partially applied no-txn migration "
- <> Text.pack
- (migrationName
- (addedSqlMig (singleNoTxnMig mig))
- )
- <> ". Skipping the first "
- <> Fmt.sformat Fmt.int numStmtsApplied
- <> " SQL statements, which have already been applied, and starting application from the "
- <> Fmt.sformat Fmt.ords (numStmtsApplied + 1)
- <> " statement"
- pure (previousMig, numStmtsApplied)
+ -- We recheck application status because dumps-as-migrations can insert into codd_schema.sql_migrations themselves.
+ recheckedAppStatus <- hasMigBeenApplied migname
+ case recheckedAppStatus of
+ Just (MigrationAppliedSuccessfully _) -> do
+ -- Here a migration inserted into codd_schema.sql_migrations itself, claiming the migration has been applied.
+ -- So we signal it must be skipped with the `Left ()` value
+ pure (previousMig, Left ())
+ _ -> do
+ let
+ numStmtsApplied =
+ case recheckedAppStatus of
+ Nothing ->
+ numStatementsAlreadyApplied
+ mig
+ Just (NoTxnMigrationFailed n)
+ -> n
+ when (numStmtsApplied > 0)
+ $ logWarn
+ $ "Resuming application of partially applied no-txn migration "
+ <> Text.pack migname
+ <> ". Skipping the first "
+ <> Fmt.sformat Fmt.int numStmtsApplied
+ <> " SQL statements, which have already been applied, and starting application from the "
+ <> Fmt.sformat Fmt.ords
+ (numStmtsApplied + 1)
+ <> " statement"
+ pure (previousMig, Right numStmtsApplied)
Just NoTxnMigrationApplicationFailure { noTxnMigAppliedStatements }
-> do
logWarn
@@ -644,9 +799,12 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
(noTxnMigAppliedStatements + 1)
<> " statement"
freshBlock <- reReadMig previousMig
- pure (freshBlock, noTxnMigAppliedStatements)
+ pure
+ ( freshBlock
+ , Right noTxnMigAppliedStatements
+ )
)
- (mig, 0)
+ (mig, Right 0)
(\case
Left lastErr -> do
logError
@@ -662,15 +820,24 @@ applyCollectedMigrations actionAfter CoddSettings { migsConnString = defaultConn
throwIO lastErr
Right ret -> pure ret
)
- $ \(migFinal, numStmtsToSkip) ->
- fmap (const Nothing) <$> applySingleMigration
- conn
- registerMig
- (SkipStatementsNoTxn numStmtsToSkip)
- (singleNoTxnMig migFinal)
+ $ \(migFinal, skipInstr) -> do
+ case skipInstr of
+ Left () -> do
+ logInfo
+ ("Skipping " <> Text.pack migname
+ )
+ pure (Right Nothing)
+ Right numStmtsToSkip ->
+ fmap (const Nothing) <$> applySingleMigration
+ conn
+ registerMig
+ (SkipStatementsNoTxn numStmtsToSkip)
+ (singleNoTxnMig migFinal)
+
data CoddSchemaVersion = CoddSchemaDoesNotExist | CoddSchemaV1 | CoddSchemaV2 -- ^ V2 includes duration of each migration's application
| CoddSchemaV3 -- ^ V3 includes the number of SQL statements applied per migration, allowing codd to resume application of even failed no-txn migrations correctly
+ | CoddSchemaV4 -- ^ V4 includes the ID of the transaction and of the connection applying each migration
deriving stock (Bounded, Enum, Eq, Ord, Show)
detectCoddSchema :: MonadIO m => DB.Connection -> m CoddSchemaVersion
@@ -689,6 +856,8 @@ detectCoddSchema conn = do
-> pure CoddSchemaV2
["id", "migration_timestamp", "applied_at", "name", "application_duration", "num_applied_statements", "no_txn_failed_at"]
-> pure CoddSchemaV3
+ ["id", "migration_timestamp", "applied_at", "name", "application_duration", "num_applied_statements", "no_txn_failed_at", "txnid", "connid"]
+ -> pure CoddSchemaV4
_ ->
error
$ "Internal codd error. Unless you've manually modified the codd_schema.sql_migrations table, this is a bug in codd. Please report it and include the following as column names in your report: "
@@ -696,7 +865,7 @@ detectCoddSchema conn = do
createCoddSchema
:: forall txn m
- . (MonadUnliftIO m, NotInTxn m, txn ~ InTxnT m)
+ . (MonadUnliftIO m, CanStartTxn m txn, MonadUnliftIO txn)
=> CoddSchemaVersion
-- ^ Desired schema version. This should always be `maxBound` in the app; it's meant to assume other values only in tests
-> TxnIsolationLvl
@@ -746,7 +915,10 @@ createCoddSchema targetVersion txnIsolationLvl conn =
"ALTER TABLE codd_schema.sql_migrations ADD COLUMN num_applied_statements INT, ADD COLUMN no_txn_failed_at timestamptz, ALTER COLUMN applied_at DROP NOT NULL, ADD CONSTRAINT no_txn_mig_applied_or_failed CHECK ((applied_at IS NULL) <> (no_txn_failed_at IS NULL)); \n\
\ -- Grant UPDATE so in-txn migrations running under different users can register themselves atomically \n\
\GRANT UPDATE ON TABLE codd_schema.sql_migrations TO PUBLIC;"
- CoddSchemaV3 -> pure ()
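+            -- Note: since txid_current() and pg_backend_pid() are volatile defaults, PostgreSQL
+            -- rewrites the table and backfills pre-existing rows with the txid/pid of this
+            -- upgrade itself, not of the transactions that originally applied those migrations.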
+ CoddSchemaV3 -> execvoid_
+ conn
+ "ALTER TABLE codd_schema.sql_migrations ADD COLUMN txnid BIGINT DEFAULT txid_current(), ADD COLUMN connid INT DEFAULT pg_backend_pid()"
+ CoddSchemaV4 -> pure ()
-- `succ` is a partial function, but it should never throw in this context
go (succ currentSchemaVersion)
@@ -809,6 +981,10 @@ collectPendingMigrations defaultConnString sqlMigrations txnIsolationLvl connect
withTransaction @(InTxnT m) txnIsolationLvl conn
$ do
case v of
+ CoddSchemaV4 -> query
+ conn
+ "SELECT name, no_txn_failed_at IS NULL, COALESCE(num_applied_statements, 0) FROM codd_schema.sql_migrations"
+ ()
CoddSchemaV3 -> query
conn
"SELECT name, no_txn_failed_at IS NULL, COALESCE(num_applied_statements, 0) FROM codd_schema.sql_migrations"
@@ -1386,6 +1562,11 @@ applySingleMigration conn registerMigRan skip (AddedSqlMigration sqlMig migTimes
-- applied, so we wouldn't be able to use NowInPostgresTime by the time codd_schema is created.
data MigrationLastStatementAppliedAt = NowInPostgresTime | SpecificTime UTCTime
+-- | This type exists because we want to register the txnId and connId that effectively applied the statements of each migration (no-txn migrations have a different txnId for each statement, but that's orthogonal),
+-- and some migrations are applied in separate connections or before codd_schema is ready, in which case these IDs are queried from the database at application time. When codd_schema is ready and the migration runs in the default
+-- database, callers pass `OfCurrentTransaction`, issuing one less statement per migration (not really a performance gain, but less log spamming).
+data MigrationTxnAndConnIds = OfCurrentTransaction | SpecificIds Int64 Int
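+-- For example, in-txn migrations registered on the default database pass `OfCurrentTransaction`,
+-- while registration deferred to after COMMIT passes `SpecificIds` captured earlier via
+-- "SELECT clock_timestamp(), txid_current(), pg_backend_pid()".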
+
-- | Registers in the DB that a migration with supplied name and timestamp
-- has been either successfully applied or partially failed (the latter only makes sense for no-txn migrations).
-- Will throw an error if codd_schema hasn't yet been created.
@@ -1400,9 +1581,12 @@ registerRanMigration
-> MigrationLastStatementAppliedAt -- ^ The time the last statement of the migration was applied or when it failed.
-> DiffTime
-> MigrationApplicationStatus
+ -> MigrationTxnAndConnIds
-> m UTCTime
-registerRanMigration conn isolLvl fn migTimestamp appliedAt appliedMigrationDuration apStatus
- = let (args, numAppliedStatements, timestampValue) =
+registerRanMigration conn isolLvl fn migTimestamp appliedAt appliedMigrationDuration apStatus txnConnIds
+ = let
+ -- Ugly splicing and dicing follows..
+ (args1, numAppliedStatements, timestampValue) =
case (appliedAt, apStatus) of
(NowInPostgresTime, NoTxnMigrationFailed numStmts) ->
("?, ?, clock_timestamp()", numStmts, Nothing)
@@ -1412,26 +1596,37 @@ registerRanMigration conn isolLvl fn migTimestamp appliedAt appliedMigrationDura
("?, NULL, ?", numStmts, Just t)
(SpecificTime t, MigrationAppliedSuccessfully numStmts) ->
("?, ?, NULL", numStmts, Just t)
+ (cols2, args2, txnConnIdsRow) = case txnConnIds of
+ OfCurrentTransaction -> (")", "", DB.toRow ())
+ SpecificIds txnId connId ->
+ (", txnid, connid)", ", ?, ?", DB.toRow (txnId, connId))
in
withTransaction @txn isolLvl conn $ DB.fromOnly <$> unsafeQuery1
conn
- ("INSERT INTO codd_schema.sql_migrations as m (migration_timestamp, name, application_duration, num_applied_statements, applied_at, no_txn_failed_at) \
+ ("INSERT INTO codd_schema.sql_migrations as m (migration_timestamp, name, application_duration, num_applied_statements, applied_at, no_txn_failed_at"
+ <> cols2
+ <> " \
\ SELECT ?, ?, ?, "
- <> args
+ <> args1
+ <> args2
<> " \
\ ON CONFLICT (name) DO UPDATE \
\ SET application_duration=EXCLUDED.application_duration + m.application_duration \
\ , num_applied_statements=EXCLUDED.num_applied_statements \
\ , applied_at=EXCLUDED.applied_at \
\ , no_txn_failed_at=EXCLUDED.no_txn_failed_at \
+ \ , txnid=EXCLUDED.txnid \
+ \ , connid=EXCLUDED.connid \
\ RETURNING COALESCE(applied_at, no_txn_failed_at)"
)
- ( migTimestamp
- , fn
+ ( ( migTimestamp
+ , fn
-- postgresql-simple does not have a `ToField DiffTime` instance :(
- , realToFrac @Double @NominalDiffTime
- $ fromIntegral (diffTimeToPicoseconds appliedMigrationDuration)
- / 1_000_000_000_000
- , numAppliedStatements
- , timestampValue
+ , realToFrac @Double @NominalDiffTime
+ $ fromIntegral (diffTimeToPicoseconds appliedMigrationDuration)
+ / 1_000_000_000_000
+ , numAppliedStatements
+ , timestampValue
+ )
+ DB.:. txnConnIdsRow
)
diff --git a/src/Codd/Parsing.hs b/src/Codd/Parsing.hs
index 76068326..90d83a6e 100644
--- a/src/Codd/Parsing.hs
+++ b/src/Codd/Parsing.hs
@@ -34,8 +34,8 @@ module Codd.Parsing
isCommentPiece,
isTransactionEndingPiece,
isWhiteSpacePiece,
- manyStreaming,
piecesToText,
+ sqlPieceText,
parsedSqlText,
parseSqlMigration,
parseWithEscapeCharProper,
@@ -43,13 +43,13 @@ module Codd.Parsing
parseAndClassifyMigration,
parseMigrationTimestamp,
parseSqlPiecesStreaming,
- sqlPieceText,
substituteEnvVarsInSqlPiecesStream,
toMigrationTimestamp,
-- Exported for tests
ParserState (..),
coddConnStringCommentParser,
copyFromStdinAfterStatementParser,
+ manyStreaming,
parseSqlPiecesStreaming',
)
where
@@ -93,6 +93,7 @@ import qualified Data.Attoparsec.Text as Parsec
import Data.Bifunctor (first)
import qualified Data.Char as Char
import qualified Data.DList as DList
+import Data.Int (Int64)
import Data.Kind (Type)
import Data.List
( nub,
@@ -114,6 +115,7 @@ import Data.Time
)
import Data.Time.Clock (UTCTime (..))
import Database.PostgreSQL.Simple (ConnectInfo (..))
+import qualified Database.PostgreSQL.Simple.FromRow as DB
import qualified Database.PostgreSQL.Simple.Time as DB
import Network.URI
( URI (..),
@@ -166,6 +168,17 @@ data AddedSqlMigration m = AddedSqlMigration
-- | Holds applied status and number of applied statements.
data MigrationApplicationStatus = NoTxnMigrationFailed Int | MigrationAppliedSuccessfully Int
+instance DB.FromRow MigrationApplicationStatus where
+ fromRow = do
+ numAppliedStmts :: Maybe Int <- DB.field
+ noTxnFailedAt :: Maybe UTCTime <- DB.field
+ case (numAppliedStmts, noTxnFailedAt) of
+ (Nothing, _) ->
+ -- old codd_schema version where only fully applied migs were registered
+ pure $ MigrationAppliedSuccessfully 0
+ (Just n, Nothing) -> pure $ MigrationAppliedSuccessfully n
+ (Just n, Just _) -> pure $ NoTxnMigrationFailed n
+
data AppliedMigration = AppliedMigration
{ appliedMigrationName :: FilePath,
-- | The migration's timestamp as extracted from its file name.
@@ -173,7 +186,9 @@ data AppliedMigration = AppliedMigration
-- | When the migration was effectively applied.
appliedMigrationAt :: UTCTime,
appliedMigrationDuration :: DiffTime,
- appliedMigrationStatus :: MigrationApplicationStatus
+ appliedMigrationStatus :: MigrationApplicationStatus,
+ appliedMigrationTxnId :: Int64,
+ appliedMigrationConnId :: Int
}
data FileStream m = FileStream
diff --git a/src/Codd/Query.hs b/src/Codd/Query.hs
index d8d9067a..803a9632 100644
--- a/src/Codd/Query.hs
+++ b/src/Codd/Query.hs
@@ -7,6 +7,7 @@ module Codd.Query
, query
, txnStatus
, unsafeQuery1
+ , queryMay
, withTransaction
) where
@@ -58,6 +59,20 @@ unsafeQuery1 conn q r = liftIO $ do
[x] -> return x
_ -> error "More than one result for query1"
+-- | Returns Nothing if the query returns no rows. Throws an exception if more than one row is returned.
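+-- e.g. @queryMay conn "SELECT num_applied_statements, no_txn_failed_at FROM codd_schema.sql_migrations WHERE name=?" (DB.Only fp)@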
+queryMay
+ :: (DB.FromRow b, MonadIO m, DB.ToRow a)
+ => DB.Connection
+ -> DB.Query
+ -> a
+ -> m (Maybe b)
+queryMay conn q r = liftIO $ do
+ res <- DB.query conn q r
+ case res of
+ [] -> pure Nothing
+ [x] -> pure $ Just x
+ _ -> error "More than one result for queryMay"
+
-- | Returns a Query with a valid "BEGIN" statement that is READ WRITE and has
-- the desired isolation level.
diff --git a/test/DbDependentSpecs/ApplicationSpec.hs b/test/DbDependentSpecs/ApplicationSpec.hs
index 6449170c..7aa8d718 100644
--- a/test/DbDependentSpecs/ApplicationSpec.hs
+++ b/test/DbDependentSpecs/ApplicationSpec.hs
@@ -1,1220 +1,1429 @@
module DbDependentSpecs.ApplicationSpec where
-import Codd ( VerifySchemas(..)
- , applyMigrations
- , applyMigrationsNoCheck
- )
-import Codd.Environment ( CoddSettings(..) )
-import Codd.Internal ( CoddSchemaVersion(..)
- , collectPendingMigrations
- , createCoddSchema
- , detectCoddSchema
- , withConnection
- )
-import Codd.Logging ( runCoddLogger )
-import Codd.Parsing ( AddedSqlMigration(..)
- , SqlMigration(..)
- , hoistAddedSqlMigration
- )
-import Codd.Query ( execvoid_
- , unsafeQuery1
- )
-import Codd.Representations.Types ( DbRep(..) )
-import Codd.Types ( TxnIsolationLvl(..) )
-import Control.Monad ( forM_
- , void
- )
-import Control.Monad.Trans ( lift )
-import Control.Monad.Trans.Resource ( MonadThrow )
-import qualified Data.Aeson as Aeson
-import qualified Data.Map.Strict as Map
-import Data.Maybe ( fromMaybe )
-import Data.Text ( Text )
-import qualified Data.Text as Text
-import Data.Time ( CalendarDiffTime(ctTime)
- , UTCTime
- , diffUTCTime
- , secondsToNominalDiffTime
- )
-import qualified Database.PostgreSQL.Simple as DB
-import DbUtils ( aroundFreshDatabase
- , createTestUserMig
- , createTestUserMigPol
- , finallyDrop
- , fixMigsOrder
- , getIncreasingTimestamp
- , mkValidSql
- , shouldBeStrictlySortedOn
- , testCoddSettings
- , testConnInfo
- , testConnTimeout
- )
-import Test.Hspec
-import Test.QuickCheck
-import qualified Test.QuickCheck as QC
-import UnliftIO ( SomeException
- , liftIO
- )
-
-placeHoldersMig, selectMig, copyMig :: MonadThrow m => AddedSqlMigration m
-placeHoldersMig = AddedSqlMigration
+import Codd
+ ( VerifySchemas (..),
+ applyMigrations,
+ applyMigrationsNoCheck,
+ )
+import Codd.Environment (CoddSettings (..))
+import Codd.Internal
+ ( CoddSchemaVersion (..),
+ collectPendingMigrations,
+ createCoddSchema,
+ detectCoddSchema,
+ withConnection,
+ )
+import Codd.Logging (runCoddLogger)
+import Codd.Parsing
+ ( AddedSqlMigration (..),
+ SqlMigration (..),
+ hoistAddedSqlMigration,
+ )
+import Codd.Query
+ ( execvoid_,
+ unsafeQuery1,
+ )
+import Codd.Representations.Types (DbRep (..))
+import Codd.Types (TxnIsolationLvl (..))
+import Control.Monad
+ ( forM_,
+ void,
+ )
+import Control.Monad.Trans (lift)
+import Control.Monad.Trans.Resource (MonadThrow)
+import qualified Data.Aeson as Aeson
+import Data.Int (Int64)
+import qualified Data.Map.Strict as Map
+import Data.Maybe (fromMaybe)
+import Data.Set (Set)
+import qualified Data.Set as Set
+import Data.Text (Text)
+import qualified Data.Text as Text
+import Data.Time
+ ( CalendarDiffTime (ctTime),
+ UTCTime,
+ diffUTCTime,
+ secondsToNominalDiffTime,
+ )
+import qualified Database.PostgreSQL.Simple as DB
+import DbDependentSpecs.RetrySpec (runMVarLogger)
+import DbUtils
+ ( aroundFreshDatabase,
+ cleanupAfterTest,
+ createTestUserMig,
+ createTestUserMigPol,
+ fixMigsOrder,
+ getIncreasingTimestamp,
+ mkValidSql,
+ shouldBeStrictlySortedOn,
+ testCoddSettings,
+ testConnInfo,
+ testConnTimeout,
+ )
+import Test.Hspec
+import Test.QuickCheck
+import qualified Test.QuickCheck as QC
+import UnliftIO
+ ( SomeException,
+ liftIO,
+ newMVar,
+ readMVar,
+ )
+
+placeHoldersMig, selectMig, copyMig :: (MonadThrow m) => AddedSqlMigration m
+placeHoldersMig =
+ AddedSqlMigration
SqlMigration
- { migrationName = "0000-placeholders.sql"
- , migrationSql = mkValidSql
- "CREATE TABLE any_table();\n-- ? $1 $2 ? ? ?"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+ { migrationName = "0000-placeholders.sql",
+ migrationSql =
+ mkValidSql
+ "CREATE TABLE any_table();\n-- ? $1 $2 ? ? ?",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 0)
-selectMig = AddedSqlMigration
- SqlMigration { migrationName = "0001-select-mig.sql"
- , migrationSql = mkValidSql "SELECT 1, 3"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+selectMig =
+ AddedSqlMigration
+ SqlMigration
+ { migrationName = "0001-select-mig.sql",
+ migrationSql = mkValidSql "SELECT 1, 3",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 1)
-copyMig = AddedSqlMigration
+copyMig =
+ AddedSqlMigration
SqlMigration
- { migrationName = "0002-copy-mig.sql"
- , migrationSql =
- -- CSV and Text formats' escaping rules aren't obvious.
- -- We test those here. See https://www.postgresql.org/docs/13/sql-copy.html
- -- TODO:
- -- Specifying custom delimiters, escape chars, NULL specifier, BINARY copy.
- -- Always compare to what psql does. Hopefully all the complexity is server-side.
- mkValidSql
- "CREATE TABLE x(name TEXT); COPY x (name) FROM STDIN WITH (FORMAT CSV);\nSome name\n\\.\n COPY x FROM STDIN WITH (FORMAT CSV);\n\\.\n COPY x FROM stdin;\nLine\\nbreak\\r\n\\.\n"
- , migrationInTxn = False
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+ { migrationName = "0002-copy-mig.sql",
+ migrationSql =
+ -- CSV and Text formats' escaping rules aren't obvious.
+ -- We test those here. See https://www.postgresql.org/docs/13/sql-copy.html
+ -- TODO:
+ -- Specifying custom delimiters, escape chars, NULL specifier, BINARY copy.
+ -- Always compare to what psql does. Hopefully all the complexity is server-side.
+ mkValidSql
+ "CREATE TABLE x(name TEXT); COPY x (name) FROM STDIN WITH (FORMAT CSV);\nSome name\n\\.\n COPY x FROM STDIN WITH (FORMAT CSV);\n\\.\n COPY x FROM stdin;\nLine\\nbreak\\r\n\\.\n",
+ migrationInTxn = False,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 2)
-createTableNewTableMig
- :: MonadThrow m => String -> Bool -> Int -> AddedSqlMigration m
-createTableNewTableMig tableName inTxn migOrder = AddedSqlMigration
+createTableNewTableMig ::
+ (MonadThrow m) => String -> Bool -> Int -> AddedSqlMigration m
+createTableNewTableMig tableName inTxn migOrder =
+ AddedSqlMigration
SqlMigration
- { migrationName = "000"
- <> show migOrder
- <> "-create-table-newtable-mig.sql"
- , migrationSql = mkValidSql
- $ "CREATE TABLE "
- <> Text.pack tableName
- <> "()"
- , migrationInTxn = inTxn
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+ { migrationName =
+ "000"
+ <> show migOrder
+ <> "-create-table-newtable-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "CREATE TABLE "
+ <> Text.pack tableName
+ <> "()",
+ migrationInTxn = inTxn,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp (fromIntegral migOrder))
-createDatabaseMig
- :: MonadThrow m => DB.ConnectInfo -> String -> Int -> Int -> SqlMigration m
-createDatabaseMig customConnInfo dbName sleepInSeconds migOrder = SqlMigration
- { migrationName = "000" <> show migOrder <> "-create-database-mig.sql"
- , migrationSql = mkValidSql
- $ "CREATE DATABASE "
- <> Text.pack dbName
- <> "; SELECT pg_sleep("
- <> Text.pack (show sleepInSeconds)
- <> ");"
- , migrationInTxn = False
- , migrationCustomConnInfo = Just customConnInfo
- , migrationEnvVars = mempty
+createDatabaseMig ::
+ (MonadThrow m) => DB.ConnectInfo -> String -> Int -> Int -> SqlMigration m
+createDatabaseMig customConnInfo dbName sleepInSeconds migOrder =
+ SqlMigration
+ { migrationName = "000" <> show migOrder <> "-create-database-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "CREATE DATABASE "
+ <> Text.pack dbName
+ <> "; SELECT pg_sleep("
+ <> Text.pack (show sleepInSeconds)
+ <> ");",
+ migrationInTxn = False,
+ migrationCustomConnInfo = Just customConnInfo,
+ migrationEnvVars = mempty
}
-createCountCheckingMig :: MonadThrow m => Int -> String -> SqlMigration m
-createCountCheckingMig expectedCount migName = SqlMigration
- { migrationName = "000" <> show expectedCount <> "-" <> migName <> ".sql"
- , migrationSql =
- mkValidSql
- $ "DO\
-\\n$do$\
-\\nBEGIN\
-\\n IF (SELECT COUNT(*) <> "
- <> Text.pack (show expectedCount)
- <> " FROM codd_schema.sql_migrations) THEN\
-\\n RAISE 'Not the right count';\
-\\n END IF;\
-\\nEND\
-\\n$do$;"
- , migrationInTxn = False
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
+createAlwaysPassingMig :: (MonadThrow m) => Int -> String -> SqlMigration m
+createAlwaysPassingMig migrationIdx migName =
+ SqlMigration
+ { migrationName = "000" <> show migrationIdx <> "-" <> migName <> ".sql",
+ migrationSql = mkValidSql "SELECT 71",
+ migrationInTxn = False,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
}
-alwaysPassingMig, createTableMig, addColumnMig, alwaysFailingMig
- :: MonadThrow m => AddedSqlMigration m
-alwaysPassingMig = AddedSqlMigration
- SqlMigration { migrationName = "0001-always-passing.sql"
- , migrationSql = mkValidSql "SELECT 99"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+alwaysPassingMig,
+ createTableMig,
+ addColumnMig,
+ alwaysFailingMig,
+ alwaysFailingMigNoTxn ::
+ (MonadThrow m) => AddedSqlMigration m
+alwaysPassingMig =
+ AddedSqlMigration
+ SqlMigration
+ { migrationName = "0001-always-passing.sql",
+ migrationSql = mkValidSql "SELECT 99",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 1)
-createTableMig = AddedSqlMigration
- SqlMigration { migrationName = "0002-create-table.sql"
- , migrationSql = mkValidSql "CREATE TABLE anytable ();"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+createTableMig =
+ AddedSqlMigration
+ SqlMigration
+ { migrationName = "0002-create-table.sql",
+ migrationSql = mkValidSql "CREATE TABLE anytable ();",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 2)
-addColumnMig = AddedSqlMigration
+addColumnMig =
+ AddedSqlMigration
SqlMigration
- { migrationName = "0003-add-column.sql"
- , migrationSql = mkValidSql
- "ALTER TABLE anytable ADD COLUMN anycolumn TEXT;"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+ { migrationName = "0003-add-column.sql",
+ migrationSql =
+ mkValidSql
+ "ALTER TABLE anytable ADD COLUMN anycolumn TEXT;",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 3)
-alwaysFailingMig = AddedSqlMigration
- SqlMigration { migrationName = "0004-always-failing.sql"
- , migrationSql = mkValidSql "SELECT 5/0"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+alwaysFailingMig =
+ AddedSqlMigration
+ SqlMigration
+ { migrationName = "0004-always-failing.sql",
+ migrationSql = mkValidSql "SELECT 5/0",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 4)
-
-changeConnUser
- :: CoddSettings -> String -> AddedSqlMigration m -> AddedSqlMigration m
-changeConnUser dbInfo newUser mig = mig
- { addedSqlMig = (addedSqlMig mig)
- { migrationCustomConnInfo =
- let cinfo = fromMaybe
- (migsConnString dbInfo)
- (migrationCustomConnInfo (addedSqlMig mig))
- in Just cinfo { DB.connectUser = newUser }
- }
+alwaysFailingMigNoTxn =
+ AddedSqlMigration
+ SqlMigration
+ { migrationName = "0005-always-failing-no-txn.sql",
+ migrationSql = mkValidSql "SELECT 5/0",
+ migrationInTxn = False,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 5)
+
+-- | A migration that creates codd_schema itself (and an old version of it, at that), much like a pg_dump of a codd-managed database would.
+pgDumpEmulatingMig :: (MonadThrow m) => AddedSqlMigration m
+pgDumpEmulatingMig =
+ AddedSqlMigration
+ SqlMigration
+ { migrationName = "0010-pg_dump-emulating-mig.sql",
+ migrationSql =
+ mkValidSql
+ "CREATE SCHEMA codd_schema; GRANT USAGE ON SCHEMA codd_schema TO PUBLIC;\
+ \CREATE TABLE codd_schema.sql_migrations ( \
+ \ id SERIAL PRIMARY KEY\
+ \, migration_timestamp timestamptz not null\
+ \, applied_at timestamptz not null \
+ \, name text not null \
+ \, unique (name), unique (migration_timestamp));\
+ \GRANT INSERT,SELECT ON TABLE codd_schema.sql_migrations TO PUBLIC;\
+ \GRANT USAGE ,SELECT ON SEQUENCE codd_schema.sql_migrations_id_seq TO PUBLIC;\
+ \ -- Pretend a migration that always fails was applied. We'll be able to add this migration in our test as codd should skip it \n\
+ \INSERT INTO codd_schema.sql_migrations (migration_timestamp, applied_at, name) VALUES ('2000-01-01', '2000-01-01', '0004-always-failing.sql'), ('2000-01-02', '2000-01-01 00:00:01', '0005-always-failing-no-txn.sql')",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 10)
+
+changeConnUser ::
+ CoddSettings -> String -> AddedSqlMigration m -> AddedSqlMigration m
+changeConnUser dbInfo newUser mig =
+ mig
+ { addedSqlMig =
+ (addedSqlMig mig)
+ { migrationCustomConnInfo =
+ let cinfo =
+ fromMaybe
+ (migsConnString dbInfo)
+ (migrationCustomConnInfo (addedSqlMig mig))
+ in Just cinfo {DB.connectUser = newUser}
+ }
}
-changeConnDb
- :: CoddSettings -> String -> AddedSqlMigration m -> AddedSqlMigration m
-changeConnDb dbInfo newDb mig = mig
- { addedSqlMig = (addedSqlMig mig)
- { migrationCustomConnInfo =
- let cinfo = fromMaybe
- (migsConnString dbInfo)
- (migrationCustomConnInfo (addedSqlMig mig))
- in Just cinfo { DB.connectDatabase = newDb }
- }
+changeConnDb ::
+ CoddSettings -> String -> AddedSqlMigration m -> AddedSqlMigration m
+changeConnDb dbInfo newDb mig =
+ mig
+ { addedSqlMig =
+ (addedSqlMig mig)
+ { migrationCustomConnInfo =
+ let cinfo =
+ fromMaybe
+ (migsConnString dbInfo)
+ (migrationCustomConnInfo (addedSqlMig mig))
+ in Just cinfo {DB.connectDatabase = newDb}
+ }
}
-
-- | A migration that uses many different ways of inputting strings in postgres. In theory we'd only need to
-- test the parser, but we do this to sleep well at night too.
-- This migration only makes sense with standard_conforming_strings=on.
-stdConfStringsMig :: MonadThrow m => AddedSqlMigration m
-stdConfStringsMig = AddedSqlMigration
+stdConfStringsMig :: (MonadThrow m) => AddedSqlMigration m
+stdConfStringsMig =
+ AddedSqlMigration
SqlMigration
- { migrationName = "0001-string-escaping.sql"
- , migrationSql =
- mkValidSql
- "create table string_escape_tests (id int not null, t text not null);\n\
-
-\insert into string_escape_tests (id, t) values \n\
-\ (1, 'bc\\def')\n\
-\ -- ^ With standard_confirming_strings=on, the value inserted above should be the Haskell string \"bc\\def\"\n\
-
-\ , (2, E'abc\\def')\n\
-\ -- ^ The value above should _not_ contain the slash, it should be the Haskell string \"abcdef\"\n\
-
-\ , (3, E'abc\\\\def')\n\
-\ -- ^ The value above should be the Haskell string \"abc\\def\"\n\
-
-\ , (4, U&'d\\0061t\\+000061')\n\
-\ -- ^ The value above should be the Haskell string \"data\"\n\
-
-\ , (5, U&'d!0061t!+000061' UESCAPE '!')\n\
-\ -- ^ The value above should also be the Haskell string \"data\"\n\
-
-\ , (6, U&'d;0061t;+000061' UESCAPE ';')\n\
-\ -- ^ The value above should also be the Haskell string \"data\"\n\
-
-\ , (7, U&'d\\0061t\\+000061\\\\''')\n\
-\ -- ^ The value above should also be the Haskell string \"data\\'\"\n\
-
-\ , (8, U&'\\0441\\043B\\043E\\043D')\n\
-\ -- ^ The value above should be the Haskell string \"слон\"\n\
-
-\ , (9, $$Dianne's horse$$)\n\
-\ -- ^ Haskell string \"Dianne's horse\"\n\
-
-\ , (10, $SomeTag$Dianne's horse$SomeTag$)\n\
-\ -- ^ Same as above\n\
-\;"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+ { migrationName = "0001-string-escaping.sql",
+ migrationSql =
+ mkValidSql
+ "create table string_escape_tests (id int not null, t text not null);\n\
+ \insert into string_escape_tests (id, t) values \n\
+ \ (1, 'bc\\def')\n\
+                \ -- ^ With standard_conforming_strings=on, the value inserted above should be the Haskell string \"bc\\def\"\n\
+ \ , (2, E'abc\\def')\n\
+ \ -- ^ The value above should _not_ contain the slash, it should be the Haskell string \"abcdef\"\n\
+ \ , (3, E'abc\\\\def')\n\
+ \ -- ^ The value above should be the Haskell string \"abc\\def\"\n\
+ \ , (4, U&'d\\0061t\\+000061')\n\
+ \ -- ^ The value above should be the Haskell string \"data\"\n\
+ \ , (5, U&'d!0061t!+000061' UESCAPE '!')\n\
+ \ -- ^ The value above should also be the Haskell string \"data\"\n\
+ \ , (6, U&'d;0061t;+000061' UESCAPE ';')\n\
+ \ -- ^ The value above should also be the Haskell string \"data\"\n\
+ \ , (7, U&'d\\0061t\\+000061\\\\''')\n\
+ \ -- ^ The value above should also be the Haskell string \"data\\'\"\n\
+ \ , (8, U&'\\0441\\043B\\043E\\043D')\n\
+ \ -- ^ The value above should be the Haskell string \"слон\"\n\
+ \ , (9, $$Dianne's horse$$)\n\
+ \ -- ^ Haskell string \"Dianne's horse\"\n\
+ \ , (10, $SomeTag$Dianne's horse$SomeTag$)\n\
+ \ -- ^ Same as above\n\
+ \;",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 0)
-- | A migration that uses many different ways of inputting strings in postgres. In theory we'd only need to
-- test the parser, but we do this to sleep well at night too.
-- This migration only makes sense with standard_conforming_strings=off.
-notStdConfStringsMig :: MonadThrow m => AddedSqlMigration m
-notStdConfStringsMig = AddedSqlMigration
+notStdConfStringsMig :: (MonadThrow m) => AddedSqlMigration m
+notStdConfStringsMig =
+ AddedSqlMigration
SqlMigration
- { migrationName = "0001-string-escaping.sql"
- , migrationSql =
- mkValidSql
- "set standard_conforming_strings=off; create table string_escape_tests (id int not null, t text not null);\n\
-
-\insert into string_escape_tests (id, t) values \n\
-\ (1, 'bc\\def')\n\
-\ -- ^ With standard_confirming_strings=off, the value inserted above should be the Haskell string \"bcdef\"\n\
-
-\ , (2, 'abc\\\\de''f')\n\
-\ -- ^ The value above should _not_ contain the slash, it should be the Haskell string \"abc\\de'f\"\n\
-\;"
- , migrationInTxn = True
- , migrationCustomConnInfo = Nothing
- , migrationEnvVars = mempty
- }
+ { migrationName = "0001-string-escaping.sql",
+ migrationSql =
+ mkValidSql
+ "set standard_conforming_strings=off; create table string_escape_tests (id int not null, t text not null);\n\
+ \insert into string_escape_tests (id, t) values \n\
+ \ (1, 'bc\\def')\n\
+                \ -- ^ With standard_conforming_strings=off, the value inserted above should be the Haskell string \"bcdef\"\n\
+ \ , (2, 'abc\\\\de''f')\n\
+ \ -- ^ The value above should _not_ contain the slash, it should be the Haskell string \"abc\\de'f\"\n\
+ \;",
+ migrationInTxn = True,
+ migrationCustomConnInfo = Nothing,
+ migrationEnvVars = mempty
+ }
(getIncreasingTimestamp 0)
downgradeCoddSchema :: DB.Connection -> CoddSchemaVersion -> IO ()
downgradeCoddSchema conn targetVersion = go maxBound
where
go currentSchemaVersion
- | targetVersion == currentSchemaVersion = pure ()
- | otherwise = do
- case currentSchemaVersion of
- CoddSchemaDoesNotExist -> pure ()
- CoddSchemaV1 -> do
- execvoid_ conn "DROP SCHEMA codd_schema CASCADE"
-
- CoddSchemaV2 -> execvoid_
- conn
- "ALTER TABLE codd_schema.sql_migrations DROP COLUMN application_duration"
- CoddSchemaV3 -> execvoid_
- conn
- "ALTER TABLE codd_schema.sql_migrations DROP COLUMN num_applied_statements, DROP COLUMN no_txn_failed_at, ALTER COLUMN applied_at SET NOT NULL; \n\
- \REVOKE UPDATE ON TABLE codd_schema.sql_migrations FROM PUBLIC;"
-
- go (pred currentSchemaVersion)
+ | targetVersion == currentSchemaVersion = pure ()
+ | otherwise = do
+ case currentSchemaVersion of
+ CoddSchemaDoesNotExist -> pure ()
+ CoddSchemaV1 -> do
+ execvoid_ conn "DROP SCHEMA codd_schema CASCADE"
+ CoddSchemaV2 ->
+ execvoid_
+ conn
+ "ALTER TABLE codd_schema.sql_migrations DROP COLUMN application_duration"
+ CoddSchemaV3 ->
+ execvoid_
+ conn
+ "ALTER TABLE codd_schema.sql_migrations DROP COLUMN num_applied_statements, DROP COLUMN no_txn_failed_at, ALTER COLUMN applied_at SET NOT NULL; \n\
+ \REVOKE UPDATE ON TABLE codd_schema.sql_migrations FROM PUBLIC;"
+ CoddSchemaV4 ->
+ execvoid_
+ conn
+ "ALTER TABLE codd_schema.sql_migrations DROP COLUMN txnid, DROP COLUMN connid"
+
+ go (pred currentSchemaVersion)
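+
+-- An illustrative walk of the recursion above, assuming the current maximum is
+-- CoddSchemaV4 and targetVersion = CoddSchemaV1:
+--   go V4 -- drops txnid and connid
+--   go V3 -- drops num_applied_statements and no_txn_failed_at, restores NOT NULL on applied_at
+--   go V2 -- drops application_duration
+--   go V1 -- equals targetVersion, so recursion stops and codd_schema itself is kept
+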
spec :: Spec
spec = do
- describe "DbDependentSpecs" $ do
- describe "Application tests" $ do
- describe "codd_schema version migrations"
- $ forM_ [CoddSchemaDoesNotExist .. maxBound]
- $ \vIntermediary ->
- aroundFreshDatabase
- $ it
- ("codd_schema version migration succeeds from "
- ++ show CoddSchemaDoesNotExist
- ++ " to "
- ++ show vIntermediary
- ++ " and then to "
- ++ show (maxBound @CoddSchemaVersion)
+ describe "DbDependentSpecs" $ do
+ describe "Application tests" $ do
+ describe "codd_schema version migrations" $
+ forM_ [CoddSchemaDoesNotExist .. maxBound] $
+ \vIntermediary ->
+ aroundFreshDatabase
+ $ it
+ ( "codd_schema version migration succeeds from "
+ ++ show CoddSchemaDoesNotExist
+ ++ " to "
+ ++ show vIntermediary
+ ++ " and then to "
+ ++ show (maxBound @CoddSchemaVersion)
+ )
+ $ \emptyTestDbInfo ->
+ void @IO
+ $ withConnection
+ (migsConnString emptyTestDbInfo)
+ testConnTimeout
+ $ \conn -> do
+ -- Downgrade the schema created by `aroundFreshDatabase`. We want migrations applied with different versions to exist.
+ downgradeCoddSchema conn vIntermediary
+ detectCoddSchema conn
+ `shouldReturn` vIntermediary
+
+ -- Test that we can apply migrations in an older state, and that it updates the schema to the latest
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ -- At least one pending migration so codd updates the schema
+ (Just [alwaysPassingMig])
+ testConnTimeout
+ (const $ pure ())
+ detectCoddSchema conn
+ `shouldReturn` maxBound
+
+                          -- Run `codd up` one more time to test that fetching migrations applied with different schema versions does not fail
+ -- This can happen if we e.g. assume `num_applied_statements` is not nullable for V3, but migrations applied with earlier versions
+ -- will have that with a NULL value.
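+                          -- e.g. SELECT num_applied_statements FROM codd_schema.sql_migrations
+                          -- can return NULL for rows registered by older codd versions.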
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [alwaysPassingMig])
+ testConnTimeout
+ (const $ pure ())
+
+ describe "With the default database available" $
+ aroundFreshDatabase $
+ do
+ it
+ "SQL containing characters typical to placeholders does not throw"
+ $ \emptyTestDbInfo -> do
+ void @IO $
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [placeHoldersMig])
+ testConnTimeout
+ (const $ pure ())
+
+ it "Rows-returning function works for no-txn migrations" $
+ \emptyTestDbInfo -> do
+ void @IO $
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [selectMig])
+ testConnTimeout
+ (const $ pure ())
+
+ it "Rows-returning function works for in-txn migrations" $
+ \emptyTestDbInfo -> do
+ let (AddedSqlMigration mig t) = selectMig
+ inTxnMig =
+ AddedSqlMigration
+ mig {migrationInTxn = True}
+ t
+ void @IO $
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [inTxnMig])
+ testConnTimeout
+ (const $ pure ())
+
+ it
+ "String escaping works in all its forms with standard_conforming_strings=on"
+ $ \emptyTestDbInfo -> void @IO $ do
+ stringsAndIds :: [(Int, Text)] <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [stdConfStringsMig])
+ testConnTimeout
+ ( \conn ->
+ liftIO $
+ DB.query
+ conn
+ "SELECT id, t FROM string_escape_tests ORDER BY id"
+ ()
+ )
+
+ map snd stringsAndIds
+ `shouldBe` [ "bc\\def",
+ "abcdef",
+ "abc\\def",
+ "data",
+ "data",
+ "data",
+ "data\\'",
+ "слон",
+ "Dianne's horse",
+ "Dianne's horse"
+ ]
+ it
+ "String escaping works in all its forms with standard_conforming_strings=off"
+ $ \emptyTestDbInfo -> void @IO $ do
+ stringsAndIds :: [(Int, Text)] <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [notStdConfStringsMig])
+ testConnTimeout
+ ( \conn ->
+ liftIO $
+ DB.query
+ conn
+ "SELECT id, t FROM string_escape_tests ORDER BY id"
+ ()
+ )
+
+ map snd stringsAndIds
+ `shouldBe` ["bcdef", "abc\\de'f"]
+
+ it "COPY FROM STDIN works" $ \emptyTestDbInfo ->
+ runCoddLogger
+ ( applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [copyMig])
+ testConnTimeout
+ ( \conn ->
+ liftIO $
+ DB.query
+ conn
+ "SELECT name FROM x ORDER BY name"
+ ()
+ )
+ )
+ `shouldReturn` [ DB.Only @String "Line\nbreak\r",
+ DB.Only "Some name"
+ ]
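+                          -- The expected values above deliberately include embedded newline
+                          -- and carriage-return characters in the COPY FROM STDIN payload.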
+
+ forM_
+ [ DbDefault,
+ Serializable,
+ RepeatableRead,
+ ReadCommitted,
+ ReadUncommitted
+ ]
+ $ \isolLvl ->
+ it
+ ( "Transaction Isolation Level is properly applied - "
+ <> show isolLvl
+ )
+ $ \emptyTestDbInfo -> do
+ let modifiedSettings =
+ emptyTestDbInfo
+ { txnIsolationLvl =
+ isolLvl
+ }
+ -- This pretty much copies Codd.hs's applyMigrations, but it allows
+ -- us to run an after-migrations action that queries the transaction isolation level
+ ( actualTxnIsol :: DB.Only String,
+ actualTxnReadOnly ::
+ DB.Only
+ String
+ ) <-
+ runCoddLogger @IO $
+ applyMigrationsNoCheck
+ modifiedSettings
+ -- One in-txn migration is just what we need to make the last action
+ -- run in the same transaction as it
+ (Just [selectMig])
+ testConnTimeout
+ ( \conn ->
+ (,)
+ <$> unsafeQuery1
+ conn
+ "SHOW transaction_isolation"
+ ()
+ <*> unsafeQuery1
+ conn
+ "SHOW transaction_read_only"
+ ()
+ )
+
+ DB.fromOnly actualTxnReadOnly
+ `shouldBe` "off"
+ DB.fromOnly actualTxnIsol
+ `shouldBe` case isolLvl of
+ DbDefault ->
+ "read committed"
+ Serializable ->
+ "serializable"
+ RepeatableRead ->
+ "repeatable read"
+ ReadCommitted ->
+ "read committed"
+ ReadUncommitted ->
+ "read uncommitted"
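+                              -- (The DbDefault expectation assumes the server's
+                              -- default_transaction_isolation is PostgreSQL's factory
+                              -- default of 'read committed'.)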
+
+ it
+ "Strict checking and lax checking behaviour on mismatched schemas"
+ $ \emptyTestDbInfo -> do
+ let bogusDbHashes =
+ DbRep Aeson.Null Map.empty Map.empty
+ void @IO
+ $ withConnection
+ (migsConnString emptyTestDbInfo)
+ testConnTimeout
+ $ \conn -> do
+ -- Strict checking will not apply the migration and therefore will not
+ -- create "newtable"
+ DB.query_
+ conn
+ "SELECT 1 FROM pg_catalog.pg_tables WHERE tablename='newtable'"
+ `shouldReturn` ( [] ::
+ [ DB.Only
+ Int
+ ]
+ )
+ runCoddLogger
+ ( applyMigrations
+ ( emptyTestDbInfo
+ { onDiskReps =
+ Right
+ bogusDbHashes
+ }
+ )
+ ( Just
+ [ createTableNewTableMig
+ "newtable"
+ True
+ 1
+ ]
+ )
+ testConnTimeout
+ StrictCheck
+ )
+ `shouldThrow` anyIOException
+ DB.query_
+ conn
+ "SELECT 1 FROM pg_catalog.pg_tables WHERE tablename='newtable'"
+ `shouldReturn` ( [] ::
+ [ DB.Only
+ Int
+ ]
+ )
+
+ -- Lax checking will apply the migration and will not throw an exception
+ void $
+ runCoddLogger
+ ( applyMigrations
+ ( emptyTestDbInfo
+ { onDiskReps =
+ Right
+ bogusDbHashes
+ }
+ )
+ ( Just
+ [ createTableNewTableMig
+ "newtable"
+ True
+ 1
+ ]
+ )
+ testConnTimeout
+ LaxCheck
+ )
+ DB.query_
+ conn
+ "SELECT 1 FROM pg_catalog.pg_tables WHERE tablename='newtable'"
+ `shouldReturn` ( [DB.Only 1] ::
+ [ DB.Only
+ Int
+ ]
+ )
+
+ forM_ [True, False] $ \firstInTxn ->
+ forM_
+ [ ("newtable", "sometable"),
+ ("sometable", "newtable")
+ ]
+ $ \(t1, t2) ->
+ it
+ ( "Strict checking commits before checking in the presence of no-txn migrations - "
+ ++ show firstInTxn
+ ++ " - "
+ ++ t1
+ )
+ $ \emptyTestDbInfo -> do
+ let bogusDbHashes =
+ DbRep
+ Aeson.Null
+ Map.empty
+ Map.empty
+ void @IO
+ $ withConnection
+ ( migsConnString
+ emptyTestDbInfo
+ )
+ testConnTimeout
+ $ \conn -> do
+ runCoddLogger
+ ( applyMigrations
+ ( emptyTestDbInfo
+ { onDiskReps =
+ Right
+ bogusDbHashes
+ }
)
- $ \emptyTestDbInfo ->
- void @IO
- $ withConnection
- (migsConnString emptyTestDbInfo)
- testConnTimeout
- $ \conn -> do
- -- Downgrade the schema created by `aroundFreshDatabase`. We want migrations applied with different versions to exist.
- downgradeCoddSchema conn vIntermediary
- detectCoddSchema conn
- `shouldReturn` vIntermediary
-
- -- Test that we can apply migrations in an older state, and that it updates the schema to the latest
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- -- At least one pending migration so codd updates the schema
- (Just [alwaysPassingMig])
- testConnTimeout
- (const $ pure ())
- detectCoddSchema conn
- `shouldReturn` maxBound
-
- -- Run `codd up` one more time to ensure test that fetching migrations applied with different schema versions does not fail
- -- This can happen if we e.g. assume `num_applied_statements` is not nullable for V3, but migrations applied with earlier versions
- -- will have that with a NULL value.
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [alwaysPassingMig])
- testConnTimeout
- (const $ pure ())
-
-
- describe "With the default database available"
- $ aroundFreshDatabase
- $ do
- it
- "SQL containing characters typical to placeholders does not throw"
- $ \emptyTestDbInfo -> do
- void @IO
- $ runCoddLogger
- $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [placeHoldersMig])
- testConnTimeout
- (const $ pure ())
-
- it "Rows-returning function works for no-txn migrations"
- $ \emptyTestDbInfo -> do
- void @IO
- $ runCoddLogger
- $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [selectMig])
- testConnTimeout
- (const $ pure ())
-
- it "Rows-returning function works for in-txn migrations"
- $ \emptyTestDbInfo -> do
- let (AddedSqlMigration mig t) = selectMig
- inTxnMig = AddedSqlMigration
- mig { migrationInTxn = True }
- t
- void @IO
- $ runCoddLogger
- $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [inTxnMig])
- testConnTimeout
- (const $ pure ())
-
- it
- "String escaping works in all its forms with standard_conforming_strings=on"
- $ \emptyTestDbInfo -> void @IO $ do
- stringsAndIds :: [(Int, Text)] <-
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [stdConfStringsMig])
- testConnTimeout
- (\conn -> liftIO $ DB.query
- conn
- "SELECT id, t FROM string_escape_tests ORDER BY id"
- ()
+ ( Just
+ [ createTableNewTableMig
+ t1
+ firstInTxn
+ 1,
+ createTableNewTableMig
+ t2
+ ( not
+ firstInTxn
)
-
- map snd stringsAndIds
- `shouldBe` [ "bc\\def"
- , "abcdef"
- , "abc\\def"
- , "data"
- , "data"
- , "data"
- , "data\\'"
- , "слон"
- , "Dianne's horse"
- , "Dianne's horse"
- ]
- it
- "String escaping works in all its forms with standard_conforming_strings=off"
- $ \emptyTestDbInfo -> void @IO $ do
- stringsAndIds :: [(Int, Text)] <-
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [notStdConfStringsMig])
- testConnTimeout
- (\conn -> liftIO $ DB.query
- conn
- "SELECT id, t FROM string_escape_tests ORDER BY id"
- ()
- )
-
- map snd stringsAndIds
- `shouldBe` ["bcdef", "abc\\de'f"]
-
- it "COPY FROM STDIN works" $ \emptyTestDbInfo ->
- runCoddLogger
-
- (applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [copyMig])
- testConnTimeout
- (\conn -> liftIO $ DB.query
- conn
- "SELECT name FROM x ORDER BY name"
- ()
- )
+ 2
+ ]
+ )
+ testConnTimeout
+ StrictCheck
+ )
+ `shouldThrow` anyIOException
+ DB.query_
+ conn
+ "SELECT COUNT(*) FROM pg_catalog.pg_tables WHERE tablename='newtable' OR tablename='sometable'"
+ `shouldReturn` [ DB.Only
+ ( 2 :: Int
+ )
+ ]
+
+ it
+ "no-txn migrations and in-txn migrations run in intertwined blocks"
+ $ \emptyTestDbInfo -> do
+ let migs =
+ [ AddedSqlMigration
+ SqlMigration
+ { migrationName =
+ "0000-first-in-txn-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "CREATE TABLE any_table (txid bigint not null);"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());",
+ -- One unique txid from this migration, two rows
+ migrationInTxn = True,
+ migrationCustomConnInfo =
+ Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 0),
+ AddedSqlMigration
+ SqlMigration
+ { migrationName =
+ "0001-second-in-txn-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "INSERT INTO any_table (txid) VALUES (txid_current());"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());",
+ -- No txids from this migration because it runs in the same transaction as the last one, two more rows
+ migrationInTxn = True,
+ migrationCustomConnInfo =
+ Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 1),
+ AddedSqlMigration
+ SqlMigration
+ { migrationName =
+ "0002-no-txn-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "CREATE TYPE experience AS ENUM ('junior', 'senior');"
+ <> "\nALTER TABLE any_table ADD COLUMN experience experience;"
+ <> "\nALTER TYPE experience ADD VALUE 'intern' BEFORE 'junior';"
+ <> "\nUPDATE any_table SET experience='intern';"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());",
+                                      -- Two distinct txids because this one doesn't run in a transaction, and two more rows
+ migrationInTxn = False,
+ migrationCustomConnInfo =
+ Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 2),
+ AddedSqlMigration
+ SqlMigration
+ { migrationName =
+ "0003-second-in-txn-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "INSERT INTO any_table (txid) VALUES (txid_current());"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());",
+ -- One unique txid from this migration because it runs in a new transaction, two more rows
+ migrationInTxn = True,
+ migrationCustomConnInfo =
+ Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 3),
+ AddedSqlMigration
+ SqlMigration
+ { migrationName =
+ "0004-second-in-txn-mig.sql",
+ migrationSql =
+ mkValidSql $
+ "INSERT INTO any_table (txid) VALUES (txid_current());"
+ <> "\nINSERT INTO any_table (txid) VALUES (txid_current());",
+ -- No txids from this migration because it runs in the same transaction as the last one, two more rows
+ migrationInTxn = True,
+ migrationCustomConnInfo =
+ Nothing,
+ migrationEnvVars = mempty
+ }
+ (getIncreasingTimestamp 4)
+ ]
+
+ void @IO $
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just migs)
+ testConnTimeout
+ (const $ pure ())
+ withConnection
+ (migsConnString emptyTestDbInfo)
+ testConnTimeout
+ $ \conn -> do
+ (countTxIds :: Int, countInterns :: Int, totalRows :: Int) <-
+ unsafeQuery1
+ conn
+ "SELECT (SELECT COUNT(DISTINCT txid) FROM any_table), (SELECT COUNT(*) FROM any_table WHERE experience='intern'), (SELECT COUNT(*) FROM any_table);"
+ ()
+ countTxIds `shouldBe` 4
+ countInterns `shouldBe` 4
+ totalRows `shouldBe` 10
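+                              -- Tallying the per-migration comments above: distinct txids =
+                              -- 1 (0000) + 0 (0001) + 2 (0002) + 1 (0003) + 0 (0004) = 4;
+                              -- interns = the 4 rows present when 0002's UPDATE runs; total
+                              -- rows = 5 migrations * 2 INSERTs each = 10.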
+
+ describe "Custom connection-string migrations" $ do
+ aroundFreshDatabase $
+ forM_ [True, False] $
+ \addDefaultConnMig ->
+ it
+                        ( "In-txn migrations in a non-default database get registered after all said migrations are committed, not after each one is applied inside a transaction. Default-conn mig first: "
+ ++ show addDefaultConnMig
+ )
+ $ \dbInfo -> do
+ -- To test this we put three consecutive in-txn migrations that run on a non-default database, where the last migration always fails.
+                        -- None of the three migrations should be registered in this scenario, as they were all rolled back. Afterwards, we run only the first two migrations
+                        -- and check that they were registered.
+ runCoddLogger
+ ( applyMigrationsNoCheck
+ dbInfo
+ ( Just $
+ -- A default-conn mig is interesting because it makes the default connection available if it runs first, but we don't want that default connection to be used too soon
+ [ alwaysPassingMig
+ | addDefaultConnMig
+ ]
+ ++ [ changeConnDb
+ dbInfo
+ "postgres"
+ createTableMig,
+ changeConnDb
+ dbInfo
+ "postgres"
+ addColumnMig,
+ changeConnDb
+ dbInfo
+ "postgres"
+ alwaysFailingMig
+ ]
+ )
+ testConnTimeout
+ (const $ pure ())
+ )
+ `shouldThrow` ( \(_ :: SomeException) ->
+ True
)
- `shouldReturn` [ DB.Only @String "Line\nbreak\r"
- , DB.Only "Some name"
- ]
-
-
-
- forM_
- [ DbDefault
- , Serializable
- , RepeatableRead
- , ReadCommitted
- , ReadUncommitted
- ]
- $ \isolLvl ->
- it
- ("Transaction Isolation Level is properly applied - "
- <> show isolLvl
+ allRegisteredMigs :: [String] <-
+ map DB.fromOnly
+ <$> withConnection
+ (migsConnString dbInfo)
+ testConnTimeout
+ ( \conn ->
+ DB.query
+ conn
+ "SELECT name from codd_schema.sql_migrations"
+ ()
+ )
+ allRegisteredMigs
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ createTableMig
+ )
+ ]
+ allRegisteredMigs
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ addColumnMig
+ )
+ ]
+ allRegisteredMigs
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ alwaysFailingMig
+ )
+ ]
+
+ -- If we don't include the third migration, the first two should be applied
+ runCoddLogger
+ ( applyMigrationsNoCheck
+ dbInfo
+ ( Just $
+ -- A default-conn mig is interesting because it makes the default connection available if it runs first, but we don't want that default connection to be used too soon
+ [ alwaysPassingMig
+ | addDefaultConnMig
+ ]
+ ++ [ changeConnDb
+ dbInfo
+ "postgres"
+ createTableMig,
+ changeConnDb
+ dbInfo
+ "postgres"
+ addColumnMig
+ ]
+ )
+ testConnTimeout
+ (const $ pure ())
+ )
+ allRegisteredMigs2 :: [String] <-
+ map DB.fromOnly
+ <$> withConnection
+ (migsConnString dbInfo)
+ testConnTimeout
+ ( \conn ->
+ DB.query
+ conn
+ "SELECT name from codd_schema.sql_migrations"
+ ()
+ )
+ allRegisteredMigs2
+ `shouldContain` [ migrationName
+ ( addedSqlMig @IO
+ createTableMig
)
- $ \emptyTestDbInfo -> do
- let
- modifiedSettings =
- emptyTestDbInfo
- { txnIsolationLvl =
- isolLvl
- }
- -- This pretty much copies Codd.hs's applyMigrations, but it allows
- -- us to run an after-migrations action that queries the transaction isolation level
- (actualTxnIsol :: DB.Only String, actualTxnReadOnly :: DB.Only
- String) <-
- runCoddLogger @IO
- $ applyMigrationsNoCheck
- modifiedSettings
- -- One in-txn migration is just what we need to make the last action
- -- run in the same transaction as it
- (Just [selectMig])
- testConnTimeout
- (\conn ->
- (,)
- <$> unsafeQuery1
- conn
- "SHOW transaction_isolation"
- ()
- <*> unsafeQuery1
- conn
- "SHOW transaction_read_only"
- ()
- )
-
- DB.fromOnly actualTxnReadOnly
- `shouldBe` "off"
- DB.fromOnly actualTxnIsol
- `shouldBe` case isolLvl of
- DbDefault
- -> "read committed"
- Serializable
- -> "serializable"
- RepeatableRead
- -> "repeatable read"
- ReadCommitted
- -> "read committed"
- ReadUncommitted
- -> "read uncommitted"
-
- it
- "Strict checking and lax checking behaviour on mismatched schemas"
- $ \emptyTestDbInfo -> do
- let bogusDbHashes =
- DbRep Aeson.Null Map.empty Map.empty
- void @IO
- $ withConnection
- (migsConnString emptyTestDbInfo)
- testConnTimeout
- $ \conn -> do
-
- -- Strict checking will not apply the migration and therefore will not
- -- create "newtable"
- DB.query_
- conn
- "SELECT 1 FROM pg_catalog.pg_tables WHERE tablename='newtable'"
- `shouldReturn` ([] :: [ DB.Only
- Int
- ]
- )
- runCoddLogger
-
- (applyMigrations
- (emptyTestDbInfo
- { onDiskReps = Right
- bogusDbHashes
- }
- )
- (Just
- [ createTableNewTableMig
- "newtable"
- True
- 1
- ]
- )
- testConnTimeout
- StrictCheck
- )
- `shouldThrow` anyIOException
- DB.query_
- conn
- "SELECT 1 FROM pg_catalog.pg_tables WHERE tablename='newtable'"
- `shouldReturn` ([] :: [ DB.Only
- Int
- ]
- )
-
- -- Lax checking will apply the migration and will not throw an exception
- void $ runCoddLogger
-
- (applyMigrations
- (emptyTestDbInfo
- { onDiskReps = Right
- bogusDbHashes
- }
- )
- (Just
- [ createTableNewTableMig
- "newtable"
- True
- 1
- ]
- )
- testConnTimeout
- LaxCheck
- )
- DB.query_
- conn
- "SELECT 1 FROM pg_catalog.pg_tables WHERE tablename='newtable'"
- `shouldReturn` ([DB.Only 1] :: [ DB.Only
- Int
- ]
- )
-
- forM_ [True, False] $ \firstInTxn ->
- forM_
- [ ("newtable" , "sometable")
- , ("sometable", "newtable")
- ]
- $ \(t1, t2) ->
- it
- ("Strict checking commits before checking in the presence of no-txn migrations - "
- ++ show firstInTxn
- ++ " - "
- ++ t1
- )
- $ \emptyTestDbInfo -> do
- let
- bogusDbHashes = DbRep
- Aeson.Null
- Map.empty
- Map.empty
- void @IO
- $ withConnection
- (migsConnString
- emptyTestDbInfo
- )
- testConnTimeout
- $ \conn -> do
- runCoddLogger
-
- (applyMigrations
- (emptyTestDbInfo
- { onDiskReps =
- Right
- bogusDbHashes
- }
- )
- (Just
- [ createTableNewTableMig
- t1
- firstInTxn
- 1
- , createTableNewTableMig
- t2
- (not
- firstInTxn
- )
- 2
- ]
- )
- testConnTimeout
- StrictCheck
- )
- `shouldThrow` anyIOException
- DB.query_
- conn
- "SELECT COUNT(*) FROM pg_catalog.pg_tables WHERE tablename='newtable' OR tablename='sometable'"
- `shouldReturn` [ DB.Only
- (2 :: Int
- )
- ]
-
- it
- "no-txn migrations and in-txn migrations run in intertwined blocks"
- $ \emptyTestDbInfo -> do
- let
- migs =
- [ AddedSqlMigration
- SqlMigration
- { migrationName =
- "0000-first-in-txn-mig.sql"
- , migrationSql =
- mkValidSql
- $ "CREATE TABLE any_table (txid bigint not null);"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- -- One unique txid from this migration, two rows
- , migrationInTxn = True
- , migrationCustomConnInfo =
- Nothing
- , migrationEnvVars = mempty
- }
- (getIncreasingTimestamp 0)
- , AddedSqlMigration
- SqlMigration
- { migrationName =
- "0001-second-in-txn-mig.sql"
- , migrationSql =
- mkValidSql
- $ "INSERT INTO any_table (txid) VALUES (txid_current());"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- -- No txids from this migration because it runs in the same transaction as the last one, two more rows
- , migrationInTxn = True
- , migrationCustomConnInfo =
- Nothing
- , migrationEnvVars = mempty
- }
- (getIncreasingTimestamp 1)
- , AddedSqlMigration
- SqlMigration
- { migrationName =
- "0002-no-txn-mig.sql"
- , migrationSql =
- mkValidSql
- $ "CREATE TYPE experience AS ENUM ('junior', 'senior');"
- <> "\nALTER TABLE any_table ADD COLUMN experience experience;"
- <> "\nALTER TYPE experience ADD VALUE 'intern' BEFORE 'junior';"
- <> "\nUPDATE any_table SET experience='intern';"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- -- Two distinct txids because this one doesn't run in a migration and two more rows
- , migrationInTxn = False
- , migrationCustomConnInfo =
- Nothing
- , migrationEnvVars = mempty
- }
- (getIncreasingTimestamp 2)
- , AddedSqlMigration
- SqlMigration
- { migrationName =
- "0003-second-in-txn-mig.sql"
- , migrationSql =
- mkValidSql
- $ "INSERT INTO any_table (txid) VALUES (txid_current());"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- -- One unique txid from this migration because it runs in a new transaction, two more rows
- , migrationInTxn = True
- , migrationCustomConnInfo =
- Nothing
- , migrationEnvVars = mempty
- }
- (getIncreasingTimestamp 3)
- , AddedSqlMigration
- SqlMigration
- { migrationName =
- "0004-second-in-txn-mig.sql"
- , migrationSql =
- mkValidSql
- $ "INSERT INTO any_table (txid) VALUES (txid_current());"
- <> "\nINSERT INTO any_table (txid) VALUES (txid_current());"
- -- No txids from this migration because it runs in the same transaction as the last one, two more rows
- , migrationInTxn = True
- , migrationCustomConnInfo =
- Nothing
- , migrationEnvVars = mempty
- }
- (getIncreasingTimestamp 4)
- ]
-
- void @IO
- $ runCoddLogger
- $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just migs)
- testConnTimeout
- (const $ pure ())
- withConnection
- (migsConnString emptyTestDbInfo)
- testConnTimeout
- $ \conn -> do
- (countTxIds :: Int, countInterns :: Int, totalRows :: Int) <-
- unsafeQuery1
- conn
- "SELECT (SELECT COUNT(DISTINCT txid) FROM any_table), (SELECT COUNT(*) FROM any_table WHERE experience='intern'), (SELECT COUNT(*) FROM any_table);"
- ()
- countTxIds `shouldBe` 4
- countInterns `shouldBe` 4
- totalRows `shouldBe` 10
-
- describe "Custom connection-string migrations" $ do
- aroundFreshDatabase
- $ forM_ [True, False]
- $ \addDefaultConnMig ->
- it
- ("In-txn migrations in non-default database get registered after all said migrations are committed, not after each one is applied inside a transaction. Default-conn mig first: "
- ++ show addDefaultConnMig
- )
- $ \dbInfo -> do
- -- To test this we put three consecutive in-txn migrations that run on a non-default database, where the last migration always fails.
- -- Neither of the three migrations should be registered in this scenario, as they were all rolled back. In a second time, we run only the first two migrations,
- -- and check they were registered.
- runCoddLogger
- (applyMigrationsNoCheck
- dbInfo
- ( Just
- $
- -- A default-conn mig is interesting because it makes the default connection available if it runs first, but we don't want that default connection to be used too soon
- [ alwaysPassingMig
- | addDefaultConnMig
- ]
- ++ [ changeConnDb
- dbInfo
- "postgres"
- createTableMig
- , changeConnDb
- dbInfo
- "postgres"
- addColumnMig
- , changeConnDb
- dbInfo
- "postgres"
- alwaysFailingMig
- ]
- )
- testConnTimeout
- (const $ pure ())
- )
- `shouldThrow` (\(_ :: SomeException) ->
- True
- )
- allRegisteredMigs :: [String] <-
- map DB.fromOnly <$> withConnection
- (migsConnString dbInfo)
- testConnTimeout
- (\conn -> DB.query
- conn
- "SELECT name from codd_schema.sql_migrations"
- ()
- )
- allRegisteredMigs
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- createTableMig
- )
- ]
- allRegisteredMigs
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- addColumnMig
- )
- ]
- allRegisteredMigs
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- alwaysFailingMig
- )
- ]
-
- -- If we don't include the third migration, the first two should be applied
- runCoddLogger
- (applyMigrationsNoCheck
- dbInfo
- ( Just
- $
- -- A default-conn mig is interesting because it makes the default connection available if it runs first, but we don't want that default connection to be used too soon
- [ alwaysPassingMig
- | addDefaultConnMig
- ]
- ++ [ changeConnDb
- dbInfo
- "postgres"
- createTableMig
- , changeConnDb dbInfo
- "postgres"
- addColumnMig
- ]
- )
- testConnTimeout
- (const $ pure ())
+ ]
+ allRegisteredMigs2
+ `shouldContain` [ migrationName
+ ( addedSqlMig @IO
+ addColumnMig
)
- allRegisteredMigs2 :: [String] <-
- map DB.fromOnly <$> withConnection
- (migsConnString dbInfo)
- testConnTimeout
- (\conn -> DB.query
- conn
- "SELECT name from codd_schema.sql_migrations"
- ()
- )
- allRegisteredMigs2
- `shouldContain` [ migrationName
- (addedSqlMig @IO
- createTableMig
- )
- ]
- allRegisteredMigs2
- `shouldContain` [ migrationName
- (addedSqlMig @IO
- addColumnMig
- )
- ]
- allRegisteredMigs2
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- alwaysFailingMig
- )
- ]
- aroundFreshDatabase
- $ forM_ [True, False]
- $ \addDefaultConnMig ->
- it
- ("In-txn migrations in same database as the default connection string get registered in the same transaction even for a different user. Default-conn mig first: "
- ++ show addDefaultConnMig
+ ]
+ allRegisteredMigs2
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ alwaysFailingMig
+ )
+ ]
+ aroundFreshDatabase $
+ forM_ [True, False] $
+ \addDefaultConnMig ->
+ it
+                        ( "In-txn migrations in the same database as the default connection string get registered in the same transaction even for a different user. Default-conn mig first: "
+ ++ show addDefaultConnMig
+ )
+ $ \dbInfo -> do
+ -- To test this we put three consecutive in-txn migrations on the default database under a different user, where the last migration always fails.
+                        -- None of the three migrations should be registered in this scenario, as they were all rolled back.
+ runCoddLogger
+ ( applyMigrationsNoCheck
+ dbInfo
+ ( Just $
+ -- A default-conn mig is interesting because it makes the default connection available if it runs first, but we don't want that default connection to be used regardless
+ [ alwaysPassingMig
+ | addDefaultConnMig
+ ]
+ ++ [ changeConnUser
+ dbInfo
+ "codd-test-user"
+ createTableMig,
+ changeConnUser
+ dbInfo
+ "codd-test-user"
+ addColumnMig,
+ changeConnUser
+ dbInfo
+ "codd-test-user"
+ alwaysFailingMig
+ ]
+ )
+ testConnTimeout
+ (const $ pure ())
+ )
+ `shouldThrow` ( \(_ :: SomeException) ->
+ True
)
- $ \dbInfo -> do
- -- To test this we put three consecutive in-txn migrations on the default database under a different user, where the last migration always fails.
- -- Neither of the three migrations should be registered in this scenario, as they were all rolled back.
- runCoddLogger
- (applyMigrationsNoCheck
- dbInfo
- ( Just
- $
- -- A default-conn mig is interesting because it makes the default connection available if it runs first, but we don't want that default connection to be used regardless
- [ alwaysPassingMig
- | addDefaultConnMig
- ]
- ++ [ changeConnUser
- dbInfo
- "codd-test-user"
- createTableMig
- , changeConnUser
- dbInfo
- "codd-test-user"
- addColumnMig
- , changeConnUser
- dbInfo
- "codd-test-user"
- alwaysFailingMig
- ]
- )
- testConnTimeout
- (const $ pure ())
- )
- `shouldThrow` (\(_ :: SomeException) ->
- True
- )
- allRegisteredMigs :: [String] <-
- map DB.fromOnly <$> withConnection
- (migsConnString dbInfo)
- testConnTimeout
- (\conn -> DB.query
- conn
- "SELECT name from codd_schema.sql_migrations"
- ()
- )
- allRegisteredMigs
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- createTableMig
- )
- ]
- allRegisteredMigs
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- addColumnMig
- )
- ]
- allRegisteredMigs
- `shouldNotContain` [ migrationName
- (addedSqlMig
- @IO
- alwaysFailingMig
- )
- ]
- it
- "applied_at and application_duration registered properly for migrations running before codd_schema is available"
- $ do
- defaultConnInfo <- testConnInfo
- testSettings <- testCoddSettings
- createCoddTestDbMigs <- (: []) <$> createTestUserMig
-
- let postgresCinfo = defaultConnInfo
- { DB.connectDatabase = "postgres"
- , DB.connectUser = "postgres"
- }
-
- allMigs =
- map (hoistAddedSqlMigration lift)
- $ fixMigsOrder
- $ [ AddedSqlMigration
- (createDatabaseMig
- postgresCinfo
- { DB.connectDatabase =
- previousDbName
- }
- ("new_database_" <> show i)
- 1 {- 1 sec pg_sleep -}
- i
- )
- (getIncreasingTimestamp 0)
- | i <- [0 .. 3]
- , previousDbName <- if i == 0
- then ["postgres"]
- else
- [ "new_database_"
- <> show (i - 1)
- ]
- ]
- ++ createCoddTestDbMigs
-
- finallyDrop "codd-test-db"
- $ finallyDrop "new_database_0"
- $ finallyDrop "new_database_1"
- $ finallyDrop "new_database_2"
- $ finallyDrop "new_database_3"
- $ void @IO
- $ do
- runCoddLogger $ applyMigrationsNoCheck
- testSettings
- (Just allMigs)
- testConnTimeout
- (const $ pure ())
- withConnection defaultConnInfo
- testConnTimeout
- $ \conn -> do
- -- 1. Check that migrations ran
- map DB.fromOnly
- <$> DB.query
- conn
- "SELECT datname FROM pg_database WHERE datname LIKE 'new_database_%' ORDER BY datname"
- ()
-
- `shouldReturn` [ "new_database_0" :: String
- , "new_database_1"
- , "new_database_2"
- , "new_database_3"
- ]
-
- -- 2. Check applied_at is not the time we insert into codd_schema.sql_migrations,
- -- but the time when migrations are effectively applied.
- runMigs :: [ ( FilePath
- , UTCTime
- , CalendarDiffTime
- )
- ] <-
- DB.query
- conn
- "SET intervalstyle TO 'iso_8601'; SELECT name, applied_at, application_duration FROM codd_schema.sql_migrations ORDER BY applied_at, id"
- ()
- map (\(f, _, _) -> f) runMigs
- `shouldBe` map
- ( migrationName
- . addedSqlMig
- )
- allMigs
-
- -- Half a second is a conservative minimum given pg_sleep(1) in each migration
- let minTimeBetweenMigs =
- secondsToNominalDiffTime
- 0.5
- migsWithSleep = filter
- (\(n, _, _) ->
- "-create-database-mig.sql"
- `Text.isSuffixOf` Text.pack
- n
- )
- runMigs
- zipWith
- (\(_, time1 :: UTCTime, _) (_, time2, _) ->
- diffUTCTime time2
- time1
- )
- migsWithSleep
- (drop 1 migsWithSleep)
- `shouldSatisfy` all
- (> minTimeBetweenMigs
- )
- migsWithSleep `shouldSatisfy` all
- (\(_, _, applicationDuration) ->
- ctTime applicationDuration
- > secondsToNominalDiffTime
- 0.7
- -- 700ms is a conservative value given each duration should be ~ 1 sec
- )
-
- it "Diverse order of different types of migrations"
- $ ioProperty
- $ do
- defaultConnInfo <- testConnInfo
- testSettings <- testCoddSettings
- createCoddTestDbMigs :: AddedSqlMigration IO <-
- createTestUserMigPol @IO
- pure
- $ forAll
- (diversifyAppCheckMigs
- defaultConnInfo
- createCoddTestDbMigs
- )
- $ \(DiverseMigrationOrder allMigs) ->
- finallyDrop "codd-test-db"
- $ finallyDrop "new_database_0"
- $ finallyDrop "new_database_1"
- $ finallyDrop "new_database_2"
- $ finallyDrop "new_database_3"
- $ void
- $ do
- runCoddLogger
- $ applyMigrationsNoCheck
- testSettings
- (Just $ map
- (hoistAddedSqlMigration
- lift
- )
- allMigs
- )
- testConnTimeout
- (const $ pure ())
- withConnection defaultConnInfo
- testConnTimeout
- $ \conn -> do
- -- Check all migrations were applied in order
- runMigs :: [ ( Int
- , FilePath
- )
- ] <-
- DB.query
- conn
- "SELECT id, name FROM codd_schema.sql_migrations ORDER BY applied_at, id"
- ()
- map snd runMigs
- `shouldBe` map
- (migrationName
- . addedSqlMig
- )
- allMigs
- runMigs
- `shouldBeStrictlySortedOn` fst
+ allRegisteredMigs :: [String] <-
+ map DB.fromOnly
+ <$> withConnection
+ (migsConnString dbInfo)
+ testConnTimeout
+ ( \conn ->
+ DB.query
+ conn
+ "SELECT name from codd_schema.sql_migrations"
+ ()
+ )
+ allRegisteredMigs
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ createTableMig
+ )
+ ]
+ allRegisteredMigs
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ addColumnMig
+ )
+ ]
+ allRegisteredMigs
+ `shouldNotContain` [ migrationName
+ ( addedSqlMig
+ @IO
+ alwaysFailingMig
+ )
+ ]
+ after (const cleanupAfterTest)
+ $ it
+ "applied_at and application_duration registered properly for migrations running before codd_schema is available"
+ $ do
+ defaultConnInfo <- testConnInfo
+ testSettings <- testCoddSettings
+ createCoddTestDbMigs <- (: []) <$> createTestUserMig
+
+ let postgresCinfo =
+ defaultConnInfo
+ { DB.connectDatabase = "postgres",
+ DB.connectUser = "postgres"
+ }
+
+ allMigs =
+ map (hoistAddedSqlMigration lift) $
+ fixMigsOrder $
+ [ AddedSqlMigration
+ ( createDatabaseMig
+ postgresCinfo
+ { DB.connectDatabase =
+ previousDbName
+ }
+ ("new_database_" <> show i)
+ 1 {- 1 sec pg_sleep -}
+ i
+ )
+ (getIncreasingTimestamp 0)
+ | i <- [0 .. 3],
+ previousDbName <-
+ if i == 0
+ then ["postgres"]
+ else
+ [ "new_database_"
+ <> show (i - 1)
+ ]
+ ]
+ ++ createCoddTestDbMigs
+
+ void @IO $
+ do
+ runCoddLogger $
+ applyMigrationsNoCheck
+ testSettings
+ (Just allMigs)
+ testConnTimeout
+ (const $ pure ())
+ withConnection
+ defaultConnInfo
+ testConnTimeout
+ $ \conn -> do
+ -- 1. Check that migrations ran
+ map DB.fromOnly
+ <$> DB.query
+ conn
+ "SELECT datname FROM pg_database WHERE datname LIKE 'new_database_%' ORDER BY datname"
+ ()
+ `shouldReturn` [ "new_database_0" :: String,
+ "new_database_1",
+ "new_database_2",
+ "new_database_3"
+ ]
+
+ -- 2. Check applied_at is not the time we insert into codd_schema.sql_migrations,
+ -- but the time when migrations are effectively applied.
+ runMigs ::
+ [ ( FilePath,
+ UTCTime,
+ CalendarDiffTime
+ )
+ ] <-
+ DB.query
+ conn
+ "SET intervalstyle TO 'iso_8601'; SELECT name, applied_at, application_duration FROM codd_schema.sql_migrations ORDER BY applied_at, id"
+ ()
+ map (\(f, _, _) -> f) runMigs
+ `shouldBe` map
+ ( migrationName
+ . addedSqlMig
+ )
+ allMigs
+
+ -- Half a second is a conservative minimum given pg_sleep(1) in each migration
+ let minTimeBetweenMigs =
+ secondsToNominalDiffTime
+ 0.5
+ migsWithSleep =
+ filter
+ ( \(n, _, _) ->
+ "-create-database-mig.sql"
+ `Text.isSuffixOf` Text.pack
+ n
+ )
+ runMigs
+ zipWith
+ ( \(_, time1 :: UTCTime, _) (_, time2, _) ->
+ diffUTCTime
+ time2
+ time1
+ )
+ migsWithSleep
+ (drop 1 migsWithSleep)
+ `shouldSatisfy` all
+ ( > minTimeBetweenMigs
+ )
+ migsWithSleep
+ `shouldSatisfy` all
+ ( \(_, _, applicationDuration) ->
+ ctTime applicationDuration
+ > secondsToNominalDiffTime
+ 0.7
+ -- 700ms is a conservative value given each duration should be ~ 1 sec
+ )
+ after (const cleanupAfterTest)
+ $ it
+ "Allow migrations that create codd_schema themselves if they run sufficiently early in some cases"
+ $ do
+ defaultConnInfo <- testConnInfo
+ testSettings <- testCoddSettings
+ bootstrapMig <- createTestUserMig
+ logsmv <- newMVar []
+
+ let postgresCinfo =
+ defaultConnInfo
+ { DB.connectDatabase = "postgres",
+ DB.connectUser = "postgres"
+ }
+
+ allMigs =
+ map (hoistAddedSqlMigration lift) $
+ fixMigsOrder
+ [ bootstrapMig,
+ alwaysPassingMig,
+ pgDumpEmulatingMig,
+ alwaysFailingMig, -- The previous migration pretends this was applied so codd should skip this
+ alwaysFailingMigNoTxn -- The previous migration pretends this was applied so codd should skip this
+ ]
+
+ finalCoddSchemaVersion <-
+ runMVarLogger logsmv $
+ applyMigrationsNoCheck
+ testSettings
+ (Just allMigs)
+ testConnTimeout
+ detectCoddSchema
+ finalCoddSchemaVersion `shouldBe` maxBound
+ withConnection defaultConnInfo testConnTimeout $
+ \conn -> do
+                          -- 1. Check that all migrations are registered. Note they might _NOT_ be in the same order as `allMigs`, since the dump is registered as applied after the skipped migrations
+ runMigs :: Set FilePath <-
+ Set.fromList
+ . map DB.fromOnly
+ <$> DB.query
+ conn
+ "SELECT name FROM codd_schema.sql_migrations"
+ ()
+ runMigs
+ `shouldBe` Set.fromList
+ ( map
+ ( migrationName
+ . addedSqlMig
+ )
+ allMigs
+ )
+ logs <- readMVar logsmv
+ logs
+ `shouldSatisfy` any
+ ( \l ->
+ "Skipping"
+ `Text.isInfixOf` l
+ && Text.pack
+ ( migrationName
+ ( addedSqlMig @IO
+ alwaysFailingMig
+ )
+ )
+ `Text.isInfixOf` l
+ )
+ logs
+ `shouldSatisfy` any
+ ( \l ->
+ "Skipping"
+ `Text.isInfixOf` l
+ && Text.pack
+ ( migrationName
+ ( addedSqlMig @IO
+ alwaysFailingMigNoTxn
+ )
+ )
+ `Text.isInfixOf` l
+ )
+
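+            -- (pgDumpEmulatingMig stands for a migration that, roughly like restored
+            -- pg_dump output, creates codd_schema itself and registers later migrations
+            -- as already applied, which is why codd skips the two always-failing ones.)
+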
+ after (const cleanupAfterTest) $
+ it "Diverse order of different types of migrations" $
+ ioProperty $
+ do
+ defaultConnInfo <- testConnInfo
+ testSettings <- testCoddSettings
+ createCoddTestDbMigs :: AddedSqlMigration IO <-
+ createTestUserMigPol @IO
+ pure
+ $ forAll
+ ( diversifyAppCheckMigs
+ defaultConnInfo
+ createCoddTestDbMigs
+ )
+ $ \(DiverseMigrationOrder allMigs) -> void $ do
+ runMigs :: [(Int, FilePath, Int64, Int)] <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ testSettings
+ ( Just $
+ map
+ (hoistAddedSqlMigration lift)
+ allMigs
+ )
+ testConnTimeout
+ ( \conn ->
+ liftIO $
+ DB.query
+ conn
+ "SELECT id, name, txnid, connid FROM codd_schema.sql_migrations ORDER BY applied_at, id"
+ ()
+ )
+ -- Check all migrations were applied in order, and that ordering only by their Ids gives the same order
+ map (\(_, name, _, _) -> name) runMigs
+ `shouldBe` map
+ ( migrationName
+ . addedSqlMig
+ )
+ allMigs
+ runMigs
+ `shouldBeStrictlySortedOn` \(migid, _, _, _) ->
+ migid
+
+ let migsForTests =
+ zipWith
+ ( \(migId, _, txnId, connId) mig ->
+ ( migId,
+ fromMaybe defaultConnInfo $
+ migrationCustomConnInfo $
+ addedSqlMig mig,
+ migrationInTxn $
+ addedSqlMig mig,
+ txnId,
+ connId
+ )
+ )
+ runMigs
+ allMigs
+ everyPairOfMigs =
+ [ ( conStr1,
+ conStr2,
+ intxn1,
+ intxn2,
+ t1,
+ t2,
+ cid1,
+ cid2
+ )
+ | (migId1, conStr1, intxn1, t1, cid1) <-
+ migsForTests,
+ (migId2, conStr2, intxn2, t2, cid2) <-
+ migsForTests,
+ migId1 < migId2
+ ]
+ consecutiveMigs =
+ zipWith
+ ( \(migId1, conStr1, intxn1, t1, cid1) (migId2, conStr2, intxn2, t2, cid2) ->
+ ( conStr1,
+ conStr2,
+ intxn1,
+ intxn2,
+ t1,
+ t2,
+ cid1,
+ cid2
+ )
+ )
+ migsForTests
+ (drop 1 migsForTests)
+
+ -- 1. Test that no two migrations with the same connection string use different connIds
+ everyPairOfMigs
+ `shouldSatisfy` all
+ ( \(conStr1, conStr2, _, _, _, _, cid1, cid2) ->
+ conStr1 /= conStr2 || cid1 == cid2
+ )
+ -- 2. Test that no two migrations with different connection strings use the same connIds
+ everyPairOfMigs
+ `shouldSatisfy` all
+ ( \(conStr1, conStr2, _, _, _, _, cid1, cid2) ->
+ conStr1 == conStr2 || cid1 /= cid2
+ )
+ -- 3. Test that no two consecutive in-txn migrations with the same connection string have different txnIds
+ consecutiveMigs
+ `shouldSatisfy` all
+ ( \(conStr1, conStr2, intxn1, intxn2, txnid1, txnid2, _, _) ->
+ conStr1
+ /= conStr2
+ || not intxn1
+ || not intxn2
+ || txnid1
+ == txnid2
+ )
+                            -- 4. Test that no two consecutive no-txn migrations have the same txnId
+ consecutiveMigs
+ `shouldSatisfy` all
+ ( \(conStr1, conStr2, intxn1, intxn2, txnid1, txnid2, _, _) ->
+ intxn1 || intxn2 || txnid1 /= txnid2
+ )
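+
+                            -- Taken together: connids correspond one-to-one to connection
+                            -- strings (1 and 2), consecutive in-txn migrations on the same
+                            -- connection share a txnid (3), and consecutive no-txn migrations
+                            -- never share one (4).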
-- | Concatenates two lists, generates a shuffle of that
-- that does not change relative order of elements when compared
--- to their original lists. The supplied function is called with
+-- to their original lists. The supplied function is called with
-- the final 0-based index of each element in the list and the
-- element itself to form the final generated list.
mergeShuffle :: [a] -> [a] -> (Int -> a -> b) -> Gen [b]
mergeShuffle ll1 ll2 f = go ll1 ll2 (0 :: Int)
where
- go [] l2 i = pure $ zipWith f [i ..] l2
- go l1 [] i = pure $ zipWith f [i ..] l1
+ go [] l2 i = pure $ zipWith f [i ..] l2
+ go l1 [] i = pure $ zipWith f [i ..] l1
go l1@(x : xs) l2@(y : ys) i = do
- yieldFirst <- arbitrary @Bool
- if yieldFirst
- then (f i x :) <$> go xs l2 (i + 1)
- else (f i y :) <$> go l1 ys (i + 1)
+ yieldFirst <- arbitrary @Bool
+ if yieldFirst
+ then (f i x :) <$> go xs l2 (i + 1)
+ else (f i y :) <$> go l1 ys (i + 1)
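+
+-- For instance (illustrative; the interleaving depends on the generated booleans),
+-- @mergeShuffle "ab" "xy" (,)@ might produce [(0,'a'),(1,'x'),(2,'b'),(3,'y')]:
+-- 'a' always stays before 'b', and 'x' before 'y'.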
-data MigToCreate = CreateCoddTestDb | CreateCountCheckingMig Bool | CreateCountCheckingMigDifferentUser Bool | CreateDbCreationMig Int
+data MigToCreate = CreateCoddTestDb | CreatePassingMig Bool | CreatePassingMigDifferentUser Bool | CreateDbCreationMig Int
-- | Holds migrations that test codd_schema's internal management while migrations are applied.
-- Look at the `diversifyAppCheckMigs` function, which generates migrations that explore a combination space
-- with the intent of checking that codd's migration application internals are robust.
-newtype DiverseMigrationOrder m = DiverseMigrationOrder {
- migrationsInOrder :: [AddedSqlMigration m]
-}
+newtype DiverseMigrationOrder m = DiverseMigrationOrder
+ { migrationsInOrder :: [AddedSqlMigration m]
+ }
instance Show (DiverseMigrationOrder m) where
- show (DiverseMigrationOrder migs) =
- concatMap (show . migrationName . addedSqlMig) migs
-
-
-diversifyAppCheckMigs
- :: MonadThrow m
- => DB.ConnectInfo
- -> AddedSqlMigration m
- -> Gen (DiverseMigrationOrder m)
+ show (DiverseMigrationOrder migs) =
+ concatMap (show . migrationName . addedSqlMig) migs
+
+diversifyAppCheckMigs ::
+ (MonadThrow m) =>
+ DB.ConnectInfo ->
+ AddedSqlMigration m ->
+ Gen (DiverseMigrationOrder m)
diversifyAppCheckMigs defaultConnInfo createCoddTestDbMigs = do
- -- We want to diversify the order of migrations we apply.
- -- Meaning we want to test regexp-like sequences
- -- like (custom-connection-mig | default-connection-mig)+
- -- However, some custom-connection migrations need to run
- -- in the right relative order between them, and any default-connection
- -- migrations need to run after the mig that creates "codd-test-db".
- -- So the format is actually approximately:
- -- 1. migs <- [create-codd-test-db] ++ Perm{default-conn-migs}
- -- 2. Merge [create-database-migs] and `migs` into another list
- -- preserving relative order of migs from each list randomly.
-
- -- Also, we want - to the extent that is possible - to ensure
- -- codd_schema.sql_migrations is up-to-date and with rows in
- -- the correct order.
- -- We do that by having default-conn-migs check COUNT(*)
- -- for that table. We also test same-database-migs with different
- -- users wrt that.
-
- -- Finally we check the order of migrations in codd_schema.sql_migrations
- -- after having all migrations applied.
- let postgresCinfo = defaultConnInfo { DB.connectDatabase = "postgres"
- , DB.connectUser = "postgres"
- }
-
- -- A small number of count-checking migrations is *important*
- -- to ensure we have statistical diversity in terms of relative
- -- position between different types of migrations.
- -- Otherwise, we'd have count-checking migs as the last migrations
- -- and at the earliest position almost every time, when we could have
- -- custom-connection-string ones there.
- createDbAndFirstDefaultConnMig <-
- sequenceA
- [pure CreateCoddTestDb, CreateCountCheckingMig <$> arbitrary @Bool]
-
- countCheckingMigs <- QC.resize 5 $ QC.listOf $ QC.elements
- [ CreateCountCheckingMig True
- , CreateCountCheckingMig False
- , CreateCountCheckingMigDifferentUser True
- , CreateCountCheckingMigDifferentUser False
- ]
-
- migsInOrder <-
- fmap (fixMigsOrder . concat)
- $ mergeShuffle (createDbAndFirstDefaultConnMig ++ countCheckingMigs)
- (map CreateDbCreationMig [0 .. 3])
- $ \migOrder migType -> case migType of
- CreateCoddTestDb -> [createCoddTestDbMigs]
- CreateDbCreationMig i ->
- [ AddedSqlMigration
- (createDatabaseMig
- postgresCinfo { DB.connectDatabase = previousDbName
- }
- ("new_database_" <> show i)
- 0 {- no pg_sleep, another test already tests this -}
- i
- )
- (getIncreasingTimestamp 0)
- | previousDbName <- if i == 0
- then ["postgres"]
- else ["new_database_" <> show (i - 1)]
- ]
- CreateCountCheckingMig inTxn ->
- [ AddedSqlMigration
- (createCountCheckingMig migOrder "count-checking-mig")
- { migrationInTxn = inTxn
- }
- (getIncreasingTimestamp 0)
- ]
-
- CreateCountCheckingMigDifferentUser inTxn ->
- [ AddedSqlMigration
- (createCountCheckingMig
- migOrder
- "count-checking-custom-user-mig"
- )
- { migrationCustomConnInfo = Just defaultConnInfo
- { DB.connectUser = "codd-test-user"
- }
- , migrationInTxn = inTxn
- }
- $ getIncreasingTimestamp 0
- ]
- pure $ DiverseMigrationOrder migsInOrder
+ let postgresCinfo =
+ defaultConnInfo
+ { DB.connectDatabase = "postgres",
+ DB.connectUser = "postgres"
+ }
+
+ numPassingMigs <- chooseInt (0, 5)
+ numCreateOtherDbMigs <- chooseInt (0, 3)
+ passingMigs <-
+ QC.resize numPassingMigs $
+ QC.listOf $
+ QC.elements
+ [ CreatePassingMig True,
+ CreatePassingMig False,
+ CreatePassingMigDifferentUser True,
+ CreatePassingMigDifferentUser False
+ ]
+
+ migsInOrder <-
+ fmap (fixMigsOrder . concat)
+ $ mergeShuffle
+ (CreateCoddTestDb : passingMigs)
+ (map CreateDbCreationMig [0 .. numCreateOtherDbMigs])
+ $ \migOrder migType -> case migType of
+ CreateCoddTestDb -> [createCoddTestDbMigs]
+ CreateDbCreationMig i ->
+ [ AddedSqlMigration
+ ( createDatabaseMig
+ postgresCinfo
+ { DB.connectDatabase = previousDbName
+ }
+ ("new_database_" <> show i)
+ 0 {- no pg_sleep, another test already tests this -}
+ migOrder
+ )
+ (getIncreasingTimestamp 0)
+ | previousDbName <-
+ if i == 0
+ then ["postgres"]
+ else ["new_database_" <> show (i - 1)]
+ ]
+ CreatePassingMig inTxn ->
+ [ AddedSqlMigration
+ (createAlwaysPassingMig migOrder "passing-mig")
+ { migrationInTxn = inTxn
+ }
+ (getIncreasingTimestamp 0)
+ ]
+ CreatePassingMigDifferentUser inTxn ->
+ [ AddedSqlMigration
+ ( createAlwaysPassingMig
+ migOrder
+ "passing-custom-user-mig"
+ )
+ { migrationCustomConnInfo =
+ Just
+ defaultConnInfo
+ { DB.connectUser = "codd-test-user"
+ },
+ migrationInTxn = inTxn
+ }
+ $ getIncreasingTimestamp 0
+ ]
+ pure $ DiverseMigrationOrder migsInOrder
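+
+-- One order this generator might produce (illustrative), with numPassingMigs = 2
+-- and numCreateOtherDbMigs = 1:
+--   create-codd-test-db, new_database_0, passing-mig, new_database_1,
+--   passing-custom-user-mig
+-- mergeShuffle preserves the relative order within each of the two source lists.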
diff --git a/test/DbDependentSpecs/SchemaVerificationSpec.hs b/test/DbDependentSpecs/SchemaVerificationSpec.hs
index 1944123a..36bdec4d 100644
--- a/test/DbDependentSpecs/SchemaVerificationSpec.hs
+++ b/test/DbDependentSpecs/SchemaVerificationSpec.hs
@@ -1,84 +1,95 @@
module DbDependentSpecs.SchemaVerificationSpec where
-import Codd ( applyMigrationsNoCheck )
-import Codd.Environment ( CoddSettings(..) )
-import Codd.Internal ( withConnection )
-import Codd.Internal.MultiQueryStatement
- ( multiQueryStatement_ )
-import Codd.Logging ( runCoddLogger )
-import Codd.Parsing ( AddedSqlMigration(..)
- , EnvVars
- , ParsedSql(..)
- , PureStream(..)
- , SqlMigration(..)
- , hoistAddedSqlMigration
- , parseSqlMigration
- )
-import Codd.Query ( unsafeQuery1 )
-import Codd.Representations ( DbRep(..)
- , DiffType(..)
- , readRepresentationsFromDbWithSettings
- , schemaDifferences
- )
-import Codd.Representations.Database ( queryServerMajorVersion
- , readRepsFromDbWithNewTxn
- )
-import Codd.Representations.Types ( ObjName(..) )
-import Codd.Types ( SchemaAlgo(..)
- , SchemaSelection(..)
- )
-import Control.Monad ( foldM
- , forM
- , void
- , zipWithM
- )
-import Control.Monad.State ( MonadState(put)
- , State
- , execState
- )
-import Control.Monad.State.Class ( get )
-import Control.Monad.Trans ( lift )
-import Control.Monad.Trans.Resource ( MonadThrow )
-import Data.Functor ( (<&>) )
-import qualified Data.Map as Map
-import Data.Maybe ( fromMaybe )
-import Data.Text ( Text )
-import qualified Data.Text as Text
-import qualified Database.PostgreSQL.Simple as DB
-import Database.PostgreSQL.Simple ( ConnectInfo(..) )
-import DbUtils ( aroundFreshDatabase
- , finallyDrop
- , getIncreasingTimestamp
- , mkValidSql
- , parseSqlMigrationIO
- , testConnTimeout
- )
-import qualified Streaming.Prelude as Streaming
-import System.Process.Typed ( ExitCode(..)
- , byteStringInput
- , readProcessStdout
- , runProcess
- , setStdin
- , shell
- )
-import Test.Hspec
-import Test.Hspec.QuickCheck ( modifyMaxSuccess )
-import Test.QuickCheck ( Arbitrary
- , chooseBoundedIntegral
- , property
- )
-import Test.QuickCheck.Arbitrary ( Arbitrary(arbitrary) )
-import UnliftIO ( liftIO )
+import Codd (applyMigrationsNoCheck)
+import Codd.Environment (CoddSettings (..))
+import Codd.Internal (withConnection)
+import Codd.Internal.MultiQueryStatement
+ ( multiQueryStatement_,
+ )
+import Codd.Logging (runCoddLogger)
+import Codd.Parsing
+ ( AddedSqlMigration (..),
+ EnvVars,
+ ParsedSql (..),
+ PureStream (..),
+ SqlMigration (..),
+ hoistAddedSqlMigration,
+ parseSqlMigration,
+ )
+import Codd.Query (unsafeQuery1)
+import Codd.Representations
+ ( DbRep (..),
+ DiffType (..),
+ readRepresentationsFromDbWithSettings,
+ schemaDifferences,
+ )
+import Codd.Representations.Database
+ ( queryServerMajorVersion,
+ readRepsFromDbWithNewTxn,
+ )
+import Codd.Representations.Types (ObjName (..))
+import Codd.Types
+ ( SchemaAlgo (..),
+ SchemaSelection (..),
+ )
+import Control.Monad
+ ( foldM,
+ forM,
+ void,
+ zipWithM,
+ )
+import Control.Monad.State
+ ( MonadState (put),
+ State,
+ execState,
+ )
+import Control.Monad.State.Class (get)
+import Control.Monad.Trans (lift)
+import Control.Monad.Trans.Resource (MonadThrow)
+import Data.Functor ((<&>))
+import qualified Data.Map as Map
+import Data.Maybe (fromMaybe)
+import Data.Text (Text)
+import qualified Data.Text as Text
+import Database.PostgreSQL.Simple (ConnectInfo (..))
+import qualified Database.PostgreSQL.Simple as DB
+import DbUtils
+ ( aroundFreshDatabase,
+ finallyDrop,
+ getIncreasingTimestamp,
+ mkValidSql,
+ parseSqlMigrationIO,
+ testConnTimeout,
+ )
+import qualified Streaming.Prelude as Streaming
+import System.Process.Typed
+ ( ExitCode (..),
+ byteStringInput,
+ readProcessStdout,
+ runProcess,
+ setStdin,
+ shell,
+ )
+import Test.Hspec
+import Test.Hspec.QuickCheck (modifyMaxSuccess)
+import Test.QuickCheck
+ ( Arbitrary,
+ chooseBoundedIntegral,
+ property,
+ )
+import Test.QuickCheck.Arbitrary (Arbitrary (arbitrary))
+import UnliftIO (liftIO)
data DiffTypeSimplified = DBothButDifferent | DExpectedButNotFound | DNotExpectedButFound
deriving stock (Show, Eq)
+
data DbChange = ChangeEq [(FilePath, DiffTypeSimplified)] | SomeChange
simplifyDiff :: DiffType -> DiffTypeSimplified
simplifyDiff = \case
- ExpectedButNotFound -> DExpectedButNotFound
- NotExpectedButFound _ -> DNotExpectedButFound
- BothButDifferent _ -> DBothButDifferent
+ ExpectedButNotFound -> DExpectedButNotFound
+ NotExpectedButFound _ -> DNotExpectedButFound
+ BothButDifferent _ -> DBothButDifferent
-- | Contains either text or a migration first and the SQL to undo it next.
data MU a = MU a (Maybe Text)
@@ -88,1103 +99,1134 @@ unMU (MU a _) = a
addMig :: Text -> Text -> DbChange -> State [(MU Text, DbChange)] (Text, Text)
addMig doSql undoSql expectedChanges = do
- existingMigs <- get
- put $ existingMigs ++ [(MU doSql $ Just undoSql, expectedChanges)]
- pure (doSql, undoSql)
+ existingMigs <- get
+ put $ existingMigs ++ [(MU doSql $ Just undoSql, expectedChanges)]
+ pure (doSql, undoSql)
addMig_ :: Text -> Text -> DbChange -> State [(MU Text, DbChange)] ()
addMig_ doSql undoSql expectedChanges =
- void $ addMig doSql undoSql expectedChanges
+ void $ addMig doSql undoSql expectedChanges
addMigNoChanges_ :: Text -> State [(MU Text, DbChange)] ()
addMigNoChanges_ doSql = do
- existingMigs <- get
- put $ existingMigs ++ [(MU doSql Nothing, ChangeEq [])]
-
-hoistMU
- :: (Monad m, Monad n)
- => (forall x . m x -> n x)
- -> (MU (AddedSqlMigration m), DbChange)
- -> (MU (AddedSqlMigration n), DbChange)
+ existingMigs <- get
+ put $ existingMigs ++ [(MU doSql Nothing, ChangeEq [])]
+
+hoistMU ::
+ (Monad m, Monad n) =>
+ (forall x. m x -> n x) ->
+ (MU (AddedSqlMigration m), DbChange) ->
+ (MU (AddedSqlMigration n), DbChange)
hoistMU f (MU sqlMig tst, change) =
- (MU (hoistAddedSqlMigration f sqlMig) tst, change)
+ (MU (hoistAddedSqlMigration f sqlMig) tst, change)
-migrationsForPgDumpRestoreTest
- :: forall m . (MonadThrow m, EnvVars m) => m [AddedSqlMigration m]
-migrationsForPgDumpRestoreTest = mapM
+migrationsForPgDumpRestoreTest ::
+ forall m. (MonadThrow m, EnvVars m) => m [AddedSqlMigration m]
+migrationsForPgDumpRestoreTest =
+ mapM
parseMig
[ "CREATE FUNCTION test_function_with_whitespace() RETURNS INT AS $$\n\
-\ BEGIN\n\
-\ SELECT 1;\n\
-\\n\
-\\n\
-\ -- White space\n\
-\\n\
-\\n\
-\ \n\
-\ -- More white space\n\
-\ \n\
-\ END\n\
-\$$ LANGUAGE plpgsql;"
+ \ BEGIN\n\
+ \ SELECT 1;\n\
+ \\n\
+ \\n\
+ \ -- White space\n\
+ \\n\
+ \\n\
+ \ \n\
+ \ -- More white space\n\
+ \ \n\
+ \ END\n\
+ \$$ LANGUAGE plpgsql;"
]
where
parseMig sql = do
+ mig <-
+ either (error "Could not parse SQL migration") id
+ <$> parseSqlMigration @m @(PureStream m)
+ "1900-01-01-00-00-00-migration.sql"
+ (PureStream $ Streaming.yield sql)
+ pure $
+ AddedSqlMigration
+ (mig {migrationName = "0-dump-migration.sql"})
+ (getIncreasingTimestamp 99999)
+
+migrationsAndRepChange ::
+ forall m.
+ (MonadThrow m, EnvVars m) =>
+ Int ->
+ m [(MU (AddedSqlMigration m), DbChange)]
+migrationsAndRepChange pgVersion =
+ zipWithM
+ ( \(MU doSql undoSql, c) i -> do
mig <-
- either (error "Could not parse SQL migration") id
- <$> parseSqlMigration @m @(PureStream m)
- "1900-01-01-00-00-00-migration.sql"
- (PureStream $ Streaming.yield sql)
- pure $ AddedSqlMigration
- (mig { migrationName = "0-dump-migration.sql" })
- (getIncreasingTimestamp 99999)
-
-migrationsAndRepChange
- :: forall m
- . (MonadThrow m, EnvVars m)
- => Int
- -> m [(MU (AddedSqlMigration m), DbChange)]
-migrationsAndRepChange pgVersion = zipWithM
- (\(MU doSql undoSql, c) i -> do
- mig <-
- either (error "Could not parse SQL migration") id
- <$> parseSqlMigration @m @(PureStream m)
- "1900-01-01-00-00-00-migration.sql"
- (PureStream $ Streaming.yield doSql)
+ either (error "Could not parse SQL migration") id
+ <$> parseSqlMigration @m @(PureStream m)
+ "1900-01-01-00-00-00-migration.sql"
+ (PureStream $ Streaming.yield doSql)
pure
- ( MU
- (AddedSqlMigration
- (mig { migrationName = show i <> "-migration.sql" })
- (getIncreasingTimestamp i)
- )
- undoSql
- , c
- )
+ ( MU
+ ( AddedSqlMigration
+ (mig {migrationName = show i <> "-migration.sql"})
+ (getIncreasingTimestamp i)
+ )
+ undoSql,
+ c
+ )
)
(migrationsAndRepChangeText pgVersion)
(map fromInteger [0 ..]) -- This would be a list of DiffTime, which would have 10^-12s resolution and fail in the DB
migrationsAndRepChangeText :: Int -> [(MU Text, DbChange)]
migrationsAndRepChangeText pgVersion = flip execState [] $ do
- -- MISSING:
- -- COLUMNS WITH GENERATED AS (they are hashed but we can't test them without a pg version check)
- -- EXCLUSION CONSTRAINTS
- -- EXTENSIONS
- -- PARTITIONING
- -- LANGUAGES
- -- FOREIGN SERVERS
- -- DOMAINS
- -- TABLESPACES
-
-
-
-
- -- TABLES AND COLUMNS
- addMig_
- "CREATE TABLE employee (employee_id SERIAL PRIMARY KEY, employee_name VARCHAR(30))"
- "DROP TABLE employee"
- $ ChangeEq
- [ ( "schemas/public/sequences/employee_employee_id_seq"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/cols/employee_id"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/cols/employee_name"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/constraints/employee_pkey"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/indexes/employee_pkey"
- , DExpectedButNotFound
- )
- , ("schemas/public/tables/employee/objrep", DExpectedButNotFound)
- ]
- addMig_
- "ALTER TABLE employee ALTER COLUMN employee_name TYPE VARCHAR(50)"
- "ALTER TABLE employee ALTER COLUMN employee_name TYPE VARCHAR(30)"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/employee_name"
- , DBothButDifferent
- )
- ]
- addMig_ "ALTER TABLE employee ALTER COLUMN employee_name SET NOT NULL"
- "ALTER TABLE employee ALTER COLUMN employee_name DROP NOT NULL"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/employee_name"
- , DBothButDifferent
- )
- ]
-
- addMig_
- "ALTER TABLE employee ADD COLUMN birthday DATE; ALTER TABLE employee ADD COLUMN deathday DATE;"
-
- "ALTER TABLE employee DROP COLUMN deathday; ALTER TABLE employee DROP COLUMN birthday;"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/birthday"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/cols/deathday"
- , DExpectedButNotFound
- )
- ]
-
- -- Column order matters because of things like 'SELECT *'
- addMig_
- "ALTER TABLE employee DROP COLUMN birthday; ALTER TABLE employee DROP COLUMN deathday; \
- \ ALTER TABLE employee ADD COLUMN deathday DATE; ALTER TABLE employee ADD COLUMN birthday DATE;"
-
- "ALTER TABLE employee DROP COLUMN birthday; ALTER TABLE employee DROP COLUMN deathday; \
- \ ALTER TABLE employee ADD COLUMN birthday DATE; ALTER TABLE employee ADD COLUMN deathday DATE;"
-
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/birthday"
- , DBothButDifferent
- )
- , ( "schemas/public/tables/employee/cols/deathday"
- , DBothButDifferent
- )
- ]
-
- addMig_ "ALTER TABLE employee ALTER COLUMN birthday TYPE TIMESTAMP;"
- "ALTER TABLE employee ALTER COLUMN birthday TYPE DATE;"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/birthday"
- , DBothButDifferent
- )
- ]
-
- addMigNoChanges_
- "ALTER TABLE employee ADD COLUMN IF NOT EXISTS birthday TIMESTAMP;"
-
- addMig_
- "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-03';"
- "ALTER TABLE employee ALTER COLUMN deathday DROP DEFAULT;"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/deathday"
- , DBothButDifferent
- )
- ]
-
- addMigNoChanges_
- "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-03';"
-
- addMig_
- "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-04';"
-
- "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-03';"
-
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/deathday"
- , DBothButDifferent
- )
- ]
-
-
- -- Recreating a column exactly like it was before will affect column order, which will affect the index and the sequence too
- addMig_
- "ALTER TABLE employee DROP COLUMN employee_id; ALTER TABLE employee ADD COLUMN employee_id SERIAL PRIMARY KEY;"
-
- "DROP TABLE employee; CREATE TABLE employee (employee_id SERIAL PRIMARY KEY, employee_name VARCHAR(50) NOT NULL); \
- \ ALTER TABLE employee ADD COLUMN deathday DATE DEFAULT '2100-02-04'; ALTER TABLE employee ADD COLUMN birthday TIMESTAMP;"
-
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/employee_id"
- , DBothButDifferent
- )
- -- Other columns in the same table have their relative order changed as well
- , ( "schemas/public/tables/employee/cols/birthday"
- , DBothButDifferent
- )
- , ( "schemas/public/tables/employee/cols/deathday"
- , DBothButDifferent
- )
- , ( "schemas/public/tables/employee/cols/employee_name"
- , DBothButDifferent
- )
- , ( "schemas/public/sequences/employee_employee_id_seq"
- , DBothButDifferent
-          ) -- This change happens because, due to sequence ownership, we need to
+ -- MISSING:
+ -- COLUMNS WITH GENERATED AS (they are hashed but we can't test them without a pg version check)
+ -- EXCLUSION CONSTRAINTS
+ -- EXTENSIONS
+ -- PARTITIONING
+ -- LANGUAGES
+ -- FOREIGN SERVERS
+ -- DOMAINS
+ -- TABLESPACES
+
+ -- TABLES AND COLUMNS
+ addMig_
+ "CREATE TABLE employee (employee_id SERIAL PRIMARY KEY, employee_name VARCHAR(30))"
+ "DROP TABLE employee"
+ $ ChangeEq
+ [ ( "schemas/public/sequences/employee_employee_id_seq",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/cols/employee_id",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/cols/employee_name",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/constraints/employee_pkey",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/indexes/employee_pkey",
+ DExpectedButNotFound
+ ),
+ ("schemas/public/tables/employee/objrep", DExpectedButNotFound)
+ ]
+ addMig_
+ "ALTER TABLE employee ALTER COLUMN employee_name TYPE VARCHAR(50)"
+ "ALTER TABLE employee ALTER COLUMN employee_name TYPE VARCHAR(30)"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/employee_name",
+ DBothButDifferent
+ )
+ ]
+ addMig_
+ "ALTER TABLE employee ALTER COLUMN employee_name SET NOT NULL"
+ "ALTER TABLE employee ALTER COLUMN employee_name DROP NOT NULL"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/employee_name",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee ADD COLUMN birthday DATE; ALTER TABLE employee ADD COLUMN deathday DATE;"
+ "ALTER TABLE employee DROP COLUMN deathday; ALTER TABLE employee DROP COLUMN birthday;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/birthday",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/cols/deathday",
+ DExpectedButNotFound
+ )
+ ]
+
+ -- Column order matters because of things like 'SELECT *'
+ addMig_
+ "ALTER TABLE employee DROP COLUMN birthday; ALTER TABLE employee DROP COLUMN deathday; \
+ \ ALTER TABLE employee ADD COLUMN deathday DATE; ALTER TABLE employee ADD COLUMN birthday DATE;"
+ "ALTER TABLE employee DROP COLUMN birthday; ALTER TABLE employee DROP COLUMN deathday; \
+ \ ALTER TABLE employee ADD COLUMN birthday DATE; ALTER TABLE employee ADD COLUMN deathday DATE;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/birthday",
+ DBothButDifferent
+ ),
+ ( "schemas/public/tables/employee/cols/deathday",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee ALTER COLUMN birthday TYPE TIMESTAMP;"
+ "ALTER TABLE employee ALTER COLUMN birthday TYPE DATE;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/birthday",
+ DBothButDifferent
+ )
+ ]
+
+ addMigNoChanges_
+ "ALTER TABLE employee ADD COLUMN IF NOT EXISTS birthday TIMESTAMP;"
+
+ addMig_
+ "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-03';"
+ "ALTER TABLE employee ALTER COLUMN deathday DROP DEFAULT;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/deathday",
+ DBothButDifferent
+ )
+ ]
+
+ addMigNoChanges_
+ "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-03';"
+
+ addMig_
+ "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-04';"
+ "ALTER TABLE employee ALTER COLUMN deathday SET DEFAULT '2100-02-03';"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/deathday",
+ DBothButDifferent
+ )
+ ]
+
+ -- Recreating a column exactly like it was before will affect column order, which will affect the index and the sequence too
+ addMig_
+ "ALTER TABLE employee DROP COLUMN employee_id; ALTER TABLE employee ADD COLUMN employee_id SERIAL PRIMARY KEY;"
+ "DROP TABLE employee; CREATE TABLE employee (employee_id SERIAL PRIMARY KEY, employee_name VARCHAR(50) NOT NULL); \
+ \ ALTER TABLE employee ADD COLUMN deathday DATE DEFAULT '2100-02-04'; ALTER TABLE employee ADD COLUMN birthday TIMESTAMP;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/employee_id",
+ DBothButDifferent
+ ),
+ -- Other columns in the same table have their relative order changed as well
+ ( "schemas/public/tables/employee/cols/birthday",
+ DBothButDifferent
+ ),
+ ( "schemas/public/tables/employee/cols/deathday",
+ DBothButDifferent
+ ),
+ ( "schemas/public/tables/employee/cols/employee_name",
+ DBothButDifferent
+ ),
+ ( "schemas/public/sequences/employee_employee_id_seq",
+ DBothButDifferent
+      ) -- This change happens because, due to sequence ownership, we need to
        -- either include the owner column's name or its attnum. We chose the latter, thinking it's more common to rename columns than to change
-- their relative positions.
-- Constraints, however, reference column names (argh..) due to their expressions being verified
- ]
-
-
- -- SEQUENCES
- addMig_ "CREATE SEQUENCE some_seq MINVALUE 1 MAXVALUE 100"
- "DROP SEQUENCE some_seq"
- $ ChangeEq [("schemas/public/sequences/some_seq", DExpectedButNotFound)]
-
-    -- MINVALUE and MAXVALUE are chosen to fit other types too, so we are sure changing just the seq. type has an effect
- addMig_ "ALTER SEQUENCE some_seq AS smallint"
- "ALTER SEQUENCE some_seq AS bigint"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq AS integer"
- "ALTER SEQUENCE some_seq AS smallint"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq START WITH 3"
- "ALTER SEQUENCE some_seq START WITH 1"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq RESTART WITH 7"
- "ALTER SEQUENCE some_seq RESTART WITH 1"
- $ ChangeEq []
-    -- TODO: Where in pg_catalog can I find the restart_with value? It currently does not affect hashing, sadly.
- addMig_ "ALTER SEQUENCE some_seq MINVALUE 2"
- "ALTER SEQUENCE some_seq MINVALUE 1"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq MAXVALUE 99999"
- "ALTER SEQUENCE some_seq MAXVALUE 100"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq INCREMENT BY 2"
- "ALTER SEQUENCE some_seq INCREMENT BY 1"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq CYCLE" "ALTER SEQUENCE some_seq NO CYCLE"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq CACHE 2" "ALTER SEQUENCE some_seq CACHE 1"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
- addMig_ "ALTER SEQUENCE some_seq OWNED BY employee.employee_id"
- "ALTER SEQUENCE some_seq OWNED BY NONE"
- $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
-
- -- CHECK CONSTRAINTS
- addMig_
- "ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> '')"
- "ALTER TABLE employee DROP CONSTRAINT employee_ck_name"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/constraints/employee_ck_name"
- , DExpectedButNotFound
- )
- ]
-
- addMigNoChanges_
- "ALTER TABLE employee DROP CONSTRAINT employee_ck_name; ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> '')"
-
- addMig_
- "ALTER TABLE employee DROP CONSTRAINT employee_ck_name; ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> 'EMPTY')"
- "ALTER TABLE employee DROP CONSTRAINT employee_ck_name; ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> '')"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/constraints/employee_ck_name"
- , DBothButDifferent
- )
- ]
-
- -- FOREIGN KEYS
- addMig_
- "CREATE TABLE employee_car (employee_id INT NOT NULL, car_model TEXT NOT NULL)"
- "DROP TABLE employee_car"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/cols/car_model"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee_car/cols/employee_id"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee_car/objrep"
- , DExpectedButNotFound
- )
- ]
-
- addMig_
- "CREATE TABLE employee_computer (employee_id INT NOT NULL, computer_model TEXT NOT NULL, UNIQUE (employee_id))"
- "DROP TABLE employee_computer"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_computer/cols/computer_model"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee_computer/cols/employee_id"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee_computer/constraints/employee_computer_employee_id_key"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee_computer/indexes/employee_computer_employee_id_key"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee_computer/objrep"
- , DExpectedButNotFound
- )
- ]
-
- addMig_
- "ALTER TABLE employee_car ADD CONSTRAINT employee_car_employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id)"
- "ALTER TABLE employee_car DROP CONSTRAINT employee_car_employee_fk"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk"
- , DExpectedButNotFound
- )
- ]
-
- addMig_
- "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY DEFERRED"
- "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk NOT DEFERRABLE INITIALLY IMMEDIATE"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk"
- , DBothButDifferent
- )
- ]
-
- addMig_
- "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY IMMEDIATE"
- "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY DEFERRED"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk"
- , DBothButDifferent
- )
- ]
-
- addMig_
- "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk NOT DEFERRABLE"
- "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY IMMEDIATE"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk"
- , DBothButDifferent
- )
- ]
-
- addMig_
- "ALTER TABLE employee_car ADD CONSTRAINT employee__employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id)"
- "ALTER TABLE employee_car DROP CONSTRAINT employee__employee_fk"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/constraints/employee__employee_fk"
- , DExpectedButNotFound
- )
- ]
-
- -- Same FK on the same table and column, referencing a different table, but with the same referenced column name as before.
- addMig_
- "ALTER TABLE employee_car DROP CONSTRAINT employee__employee_fk; ALTER TABLE employee_car ADD CONSTRAINT employee__employee_fk FOREIGN KEY (employee_id) REFERENCES employee_computer(employee_id)"
- "ALTER TABLE employee_car DROP CONSTRAINT employee__employee_fk; ALTER TABLE employee_car ADD CONSTRAINT employee__employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id)"
- $ ChangeEq
- [ ( "schemas/public/tables/employee_car/constraints/employee__employee_fk"
- , DBothButDifferent
- )
- ]
-
-
- -- UNIQUE CONSTRAINTS AND INDEXES
- addMig_
- "ALTER TABLE employee ADD CONSTRAINT unique_employee UNIQUE(employee_name)"
- "ALTER TABLE employee DROP CONSTRAINT unique_employee"
- SomeChange
-
- addMig_
- "ALTER TABLE employee RENAME CONSTRAINT unique_employee TO employee_unique_name"
- "ALTER TABLE employee RENAME CONSTRAINT employee_unique_name TO unique_employee"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/constraints/employee_unique_name"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/constraints/unique_employee"
- , DNotExpectedButFound
- )
- , ( "schemas/public/tables/employee/indexes/employee_unique_name"
- , DExpectedButNotFound
- )
- , ( "schemas/public/tables/employee/indexes/unique_employee"
- , DNotExpectedButFound
- )
- ]
-
- addMig_
- "CREATE UNIQUE INDEX unique_employee_idx ON employee (employee_name)"
- "DROP INDEX unique_employee_idx"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/indexes/unique_employee_idx"
- , DExpectedButNotFound
- )
- ]
-
-
- -- FUNCTIONS
- addMig_
- "CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$\
- \BEGIN \n RETURN i + 1; \n END; \n $$ LANGUAGE plpgsql;"
- "DROP FUNCTION increment(integer)"
- $ ChangeEq
- [("schemas/public/routines/increment;int4", DExpectedButNotFound)]
-
- addMig_
- "CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$\
- \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
- "CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$\
- \BEGIN \n RETURN i + 1; \n END; \n $$ LANGUAGE plpgsql;"
- $ ChangeEq
- [("schemas/public/routines/increment;int4", DBothButDifferent)]
-
- -- Change in function args means new function
- addMig_
- "CREATE OR REPLACE FUNCTION increment(i integer, x text) RETURNS integer AS $$\
- \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
- "DROP FUNCTION increment(integer, text)"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;int4,text"
- , DExpectedButNotFound
- )
- ]
-
- -- Change in function args means new function
- addMig_
- "CREATE OR REPLACE FUNCTION increment(x text, i integer) RETURNS integer AS $$\
- \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
- "DROP FUNCTION increment(text, integer)"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;text,int4"
- , DExpectedButNotFound
- )
- ]
-
- -- Same everything as existing function, just changing return type
- addMig_
- "DROP FUNCTION increment(text, integer); CREATE OR REPLACE FUNCTION increment(x text, i integer) RETURNS bigint AS $$\
- \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
- "DROP FUNCTION increment(text, integer); CREATE OR REPLACE FUNCTION increment(x text, i integer) RETURNS integer AS $$\
- \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;text,int4"
- , DBothButDifferent
- )
- ]
-
-    -- SQL-bodied functions are also verified
- addMig_
- "CREATE OR REPLACE FUNCTION increment_sql(i integer) RETURNS integer AS $$\
- \SELECT i + 1; \n $$ LANGUAGE sql;"
- "DROP FUNCTION increment_sql(integer)"
- $ ChangeEq
- [ ( "schemas/public/routines/increment_sql;int4"
- , DExpectedButNotFound
- )
- ]
-
- addMig_
- "CREATE OR REPLACE FUNCTION increment_sql(i integer) RETURNS integer AS $$\
- \SELECT i + 2; \n $$ LANGUAGE sql;"
- "CREATE OR REPLACE FUNCTION increment_sql(i integer) RETURNS integer AS $$\
- \SELECT i + 1; \n $$ LANGUAGE sql;"
- $ ChangeEq
- [ ( "schemas/public/routines/increment_sql;int4"
- , DBothButDifferent
- )
- ]
-
- addMig_ "ALTER FUNCTION increment(text, integer) SECURITY DEFINER;"
- "ALTER FUNCTION increment(text, integer) SECURITY INVOKER;"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;text,int4"
- , DBothButDifferent
- )
- ]
-
- addMig_ "ALTER FUNCTION increment(text, integer) PARALLEL RESTRICTED;"
- "ALTER FUNCTION increment(text, integer) PARALLEL UNSAFE;"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;text,int4"
- , DBothButDifferent
- )
- ]
-
- addMig_ "ALTER FUNCTION increment(text, integer) PARALLEL SAFE;"
- "ALTER FUNCTION increment(text, integer) PARALLEL RESTRICTED;"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;text,int4"
- , DBothButDifferent
- )
- ]
-
- addMig_ "ALTER FUNCTION increment(text, integer) LEAKPROOF;"
- "ALTER FUNCTION increment(text, integer) NOT LEAKPROOF;"
- $ ChangeEq
- [ ( "schemas/public/routines/increment;text,int4"
- , DBothButDifferent
- )
- ]
-
- -- TODO: SECURITY attribute should be verified, but the following fails for some reason.
- -- addMig_ "ALTER FUNCTION increment(text, integer) SECURITY DEFINER;"
- -- "ALTER FUNCTION increment(text, integer) SECURITY INVOKER;"
- -- $ ChangeEq
- -- [("schemas/public/routines/increment;text,int4", DBothButDifferent)]
-
- -- TRIGGERS
- addMig_ "ALTER TABLE employee ADD COLUMN name TEXT"
- "ALTER TABLE employee DROP COLUMN name"
- SomeChange
-
- addMig_
- "CREATE FUNCTION employee_name_rename_set_new() RETURNS TRIGGER AS $$\n\
- \BEGIN\n NEW.name = NEW.employee_name;\n RETURN NEW;\n END\n $$ LANGUAGE plpgsql;"
- "DROP FUNCTION employee_name_rename_set_new()"
- SomeChange
-
- (createTrigger1, dropTrigger) <-
- addMig
- "CREATE TRIGGER employee_old_app_update_column_name\
- \ \n BEFORE UPDATE ON employee\
- \ \n FOR EACH ROW\
- \ \n WHEN (OLD.employee_name IS DISTINCT FROM NEW.employee_name)\
- \ \n EXECUTE PROCEDURE employee_name_rename_set_new()"
- "DROP TRIGGER employee_old_app_update_column_name ON employee;"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/triggers/employee_old_app_update_column_name"
- , DExpectedButNotFound
- )
- ]
-
-
- -- No WHEN in the recreated trigger
- addMig_
- "DROP TRIGGER employee_old_app_update_column_name ON employee; CREATE TRIGGER employee_old_app_update_column_name\
- \ \n BEFORE UPDATE ON employee\
- \ \n FOR EACH ROW\
- \ \n EXECUTE PROCEDURE employee_name_rename_set_new()"
- (dropTrigger <> createTrigger1)
- $ ChangeEq
- [ ( "schemas/public/tables/employee/triggers/employee_old_app_update_column_name"
- , DBothButDifferent
- )
- ]
-
- addMig_
- "DROP TRIGGER employee_old_app_update_column_name ON employee"
- "CREATE TRIGGER employee_old_app_update_column_name\
- \ \n BEFORE UPDATE ON employee\
- \ \n FOR EACH ROW\
- \ \n EXECUTE PROCEDURE employee_name_rename_set_new()"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/triggers/employee_old_app_update_column_name"
- , DNotExpectedButFound
- )
- ]
+ ]
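+  -- (So renaming the owner column leaves the sequence's representation
+  -- untouched, while changing the column's attnum, e.g. by recreating the
+  -- column as above, changes it.)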
+
+ -- SEQUENCES
+ addMig_
+ "CREATE SEQUENCE some_seq MINVALUE 1 MAXVALUE 100"
+ "DROP SEQUENCE some_seq"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DExpectedButNotFound)]
+
+  -- MINVALUE and MAXVALUE are chosen to fit other types too, so we are sure changing just the seq. type has an effect
+ addMig_
+ "ALTER SEQUENCE some_seq AS smallint"
+ "ALTER SEQUENCE some_seq AS bigint"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_
+ "ALTER SEQUENCE some_seq AS integer"
+ "ALTER SEQUENCE some_seq AS smallint"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_
+ "ALTER SEQUENCE some_seq START WITH 3"
+ "ALTER SEQUENCE some_seq START WITH 1"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_
+ "ALTER SEQUENCE some_seq RESTART WITH 7"
+ "ALTER SEQUENCE some_seq RESTART WITH 1"
+ $ ChangeEq []
+  -- TODO: Where in pg_catalog can I find the restart_with value? It currently does not affect hashing, sadly.
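+  -- (The current value lives in the sequence relation itself and in the
+  -- pg_sequences view's last_value column, not in pg_catalog.pg_sequence,
+  -- which may be why it is not picked up.)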
+ addMig_
+ "ALTER SEQUENCE some_seq MINVALUE 2"
+ "ALTER SEQUENCE some_seq MINVALUE 1"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_
+ "ALTER SEQUENCE some_seq MAXVALUE 99999"
+ "ALTER SEQUENCE some_seq MAXVALUE 100"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_
+ "ALTER SEQUENCE some_seq INCREMENT BY 2"
+ "ALTER SEQUENCE some_seq INCREMENT BY 1"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_ "ALTER SEQUENCE some_seq CYCLE" "ALTER SEQUENCE some_seq NO CYCLE" $
+ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_ "ALTER SEQUENCE some_seq CACHE 2" "ALTER SEQUENCE some_seq CACHE 1" $
+ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+ addMig_
+ "ALTER SEQUENCE some_seq OWNED BY employee.employee_id"
+ "ALTER SEQUENCE some_seq OWNED BY NONE"
+ $ ChangeEq [("schemas/public/sequences/some_seq", DBothButDifferent)]
+
+ -- CHECK CONSTRAINTS
+ addMig_
+ "ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> '')"
+ "ALTER TABLE employee DROP CONSTRAINT employee_ck_name"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/constraints/employee_ck_name",
+ DExpectedButNotFound
+ )
+ ]
+
+ addMigNoChanges_
+ "ALTER TABLE employee DROP CONSTRAINT employee_ck_name; ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> '')"
+
+ addMig_
+ "ALTER TABLE employee DROP CONSTRAINT employee_ck_name; ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> 'EMPTY')"
+ "ALTER TABLE employee DROP CONSTRAINT employee_ck_name; ALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> '')"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/constraints/employee_ck_name",
+ DBothButDifferent
+ )
+ ]
+
+ -- FOREIGN KEYS
+ addMig_
+ "CREATE TABLE employee_car (employee_id INT NOT NULL, car_model TEXT NOT NULL)"
+ "DROP TABLE employee_car"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/cols/car_model",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee_car/cols/employee_id",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee_car/objrep",
+ DExpectedButNotFound
+ )
+ ]
+
+ addMig_
+ "CREATE TABLE employee_computer (employee_id INT NOT NULL, computer_model TEXT NOT NULL, UNIQUE (employee_id))"
+ "DROP TABLE employee_computer"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_computer/cols/computer_model",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee_computer/cols/employee_id",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee_computer/constraints/employee_computer_employee_id_key",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee_computer/indexes/employee_computer_employee_id_key",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee_computer/objrep",
+ DExpectedButNotFound
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee_car ADD CONSTRAINT employee_car_employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id)"
+ "ALTER TABLE employee_car DROP CONSTRAINT employee_car_employee_fk"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk",
+ DExpectedButNotFound
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY DEFERRED"
+ "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk NOT DEFERRABLE INITIALLY IMMEDIATE"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY IMMEDIATE"
+ "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY DEFERRED"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk NOT DEFERRABLE"
+ "ALTER TABLE employee_car ALTER CONSTRAINT employee_car_employee_fk DEFERRABLE INITIALLY IMMEDIATE"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/constraints/employee_car_employee_fk",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee_car ADD CONSTRAINT employee__employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id)"
+ "ALTER TABLE employee_car DROP CONSTRAINT employee__employee_fk"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/constraints/employee__employee_fk",
+ DExpectedButNotFound
+ )
+ ]
+
+ -- Same FK on the same table and column, referencing a different table, but with the same referenced column name as before.
+ addMig_
+ "ALTER TABLE employee_car DROP CONSTRAINT employee__employee_fk; ALTER TABLE employee_car ADD CONSTRAINT employee__employee_fk FOREIGN KEY (employee_id) REFERENCES employee_computer(employee_id)"
+ "ALTER TABLE employee_car DROP CONSTRAINT employee__employee_fk; ALTER TABLE employee_car ADD CONSTRAINT employee__employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id)"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee_car/constraints/employee__employee_fk",
+ DBothButDifferent
+ )
+ ]
+
+ -- UNIQUE CONSTRAINTS AND INDEXES
+ addMig_
+ "ALTER TABLE employee ADD CONSTRAINT unique_employee UNIQUE(employee_name)"
+ "ALTER TABLE employee DROP CONSTRAINT unique_employee"
+ SomeChange
+
+ addMig_
+ "ALTER TABLE employee RENAME CONSTRAINT unique_employee TO employee_unique_name"
+ "ALTER TABLE employee RENAME CONSTRAINT employee_unique_name TO unique_employee"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/constraints/employee_unique_name",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/constraints/unique_employee",
+ DNotExpectedButFound
+ ),
+ ( "schemas/public/tables/employee/indexes/employee_unique_name",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/tables/employee/indexes/unique_employee",
+ DNotExpectedButFound
+ )
+ ]
+
+ addMig_
+ "CREATE UNIQUE INDEX unique_employee_idx ON employee (employee_name)"
+ "DROP INDEX unique_employee_idx"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/indexes/unique_employee_idx",
+ DExpectedButNotFound
+ )
+ ]
+
+ -- FUNCTIONS
+ addMig_
+ "CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$\
+ \BEGIN \n RETURN i + 1; \n END; \n $$ LANGUAGE plpgsql;"
+ "DROP FUNCTION increment(integer)"
+ $ ChangeEq
+ [("schemas/public/routines/increment;int4", DExpectedButNotFound)]
+
+ addMig_
+ "CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$\
+ \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
+ "CREATE OR REPLACE FUNCTION increment(i integer) RETURNS integer AS $$\
+ \BEGIN \n RETURN i + 1; \n END; \n $$ LANGUAGE plpgsql;"
+ $ ChangeEq
+ [("schemas/public/routines/increment;int4", DBothButDifferent)]
+
+ -- Change in function args means new function
+ addMig_
+ "CREATE OR REPLACE FUNCTION increment(i integer, x text) RETURNS integer AS $$\
+ \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
+ "DROP FUNCTION increment(integer, text)"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;int4,text",
+ DExpectedButNotFound
+ )
+ ]
+
+ -- Change in function args means new function
+ addMig_
+ "CREATE OR REPLACE FUNCTION increment(x text, i integer) RETURNS integer AS $$\
+ \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
+ "DROP FUNCTION increment(text, integer)"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;text,int4",
+ DExpectedButNotFound
+ )
+ ]
+
+ -- Same everything as existing function, just changing return type
+ addMig_
+ "DROP FUNCTION increment(text, integer); CREATE OR REPLACE FUNCTION increment(x text, i integer) RETURNS bigint AS $$\
+ \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
+ "DROP FUNCTION increment(text, integer); CREATE OR REPLACE FUNCTION increment(x text, i integer) RETURNS integer AS $$\
+ \BEGIN \n RETURN i + 2; \n END; \n $$ LANGUAGE plpgsql;"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;text,int4",
+ DBothButDifferent
+ )
+ ]
+
+  -- SQL-bodied functions are also verified
+ addMig_
+ "CREATE OR REPLACE FUNCTION increment_sql(i integer) RETURNS integer AS $$\
+ \SELECT i + 1; \n $$ LANGUAGE sql;"
+ "DROP FUNCTION increment_sql(integer)"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment_sql;int4",
+ DExpectedButNotFound
+ )
+ ]
+
+ addMig_
+ "CREATE OR REPLACE FUNCTION increment_sql(i integer) RETURNS integer AS $$\
+ \SELECT i + 2; \n $$ LANGUAGE sql;"
+ "CREATE OR REPLACE FUNCTION increment_sql(i integer) RETURNS integer AS $$\
+ \SELECT i + 1; \n $$ LANGUAGE sql;"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment_sql;int4",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER FUNCTION increment(text, integer) SECURITY DEFINER;"
+ "ALTER FUNCTION increment(text, integer) SECURITY INVOKER;"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;text,int4",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER FUNCTION increment(text, integer) PARALLEL RESTRICTED;"
+ "ALTER FUNCTION increment(text, integer) PARALLEL UNSAFE;"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;text,int4",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER FUNCTION increment(text, integer) PARALLEL SAFE;"
+ "ALTER FUNCTION increment(text, integer) PARALLEL RESTRICTED;"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;text,int4",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "ALTER FUNCTION increment(text, integer) LEAKPROOF;"
+ "ALTER FUNCTION increment(text, integer) NOT LEAKPROOF;"
+ $ ChangeEq
+ [ ( "schemas/public/routines/increment;text,int4",
+ DBothButDifferent
+ )
+ ]
+
+ -- TODO: SECURITY attribute should be verified, but the following fails for some reason.
+ -- addMig_ "ALTER FUNCTION increment(text, integer) SECURITY DEFINER;"
+ -- "ALTER FUNCTION increment(text, integer) SECURITY INVOKER;"
+ -- $ ChangeEq
+ -- [("schemas/public/routines/increment;text,int4", DBothButDifferent)]
+
+ -- TRIGGERS
+ addMig_
+ "ALTER TABLE employee ADD COLUMN name TEXT"
+ "ALTER TABLE employee DROP COLUMN name"
+ SomeChange
+
+ addMig_
+ "CREATE FUNCTION employee_name_rename_set_new() RETURNS TRIGGER AS $$\n\
+ \BEGIN\n NEW.name = NEW.employee_name;\n RETURN NEW;\n END\n $$ LANGUAGE plpgsql;"
+ "DROP FUNCTION employee_name_rename_set_new()"
+ SomeChange
+
+ (createTrigger1, dropTrigger) <-
+ addMig
+ "CREATE TRIGGER employee_old_app_update_column_name\
+ \ \n BEFORE UPDATE ON employee\
+ \ \n FOR EACH ROW\
+ \ \n WHEN (OLD.employee_name IS DISTINCT FROM NEW.employee_name)\
+ \ \n EXECUTE PROCEDURE employee_name_rename_set_new()"
+ "DROP TRIGGER employee_old_app_update_column_name ON employee;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/triggers/employee_old_app_update_column_name",
+ DExpectedButNotFound
+ )
+ ]
- addMig_
- "ALTER TABLE employee DROP COLUMN employee_name"
-        -- Undoing this statement requires recreating a lot of dependent objects.
-        -- Also, on Pg <= 12, dropping and re-adding employee_id in the same ALTER TABLE statement
-        -- makes Pg create a new sequence with a different name instead of doing what you might expect
- "ALTER SEQUENCE some_seq OWNED BY NONE; \
+ -- No WHEN in the recreated trigger
+ addMig_
+ "DROP TRIGGER employee_old_app_update_column_name ON employee; CREATE TRIGGER employee_old_app_update_column_name\
+ \ \n BEFORE UPDATE ON employee\
+ \ \n FOR EACH ROW\
+ \ \n EXECUTE PROCEDURE employee_name_rename_set_new()"
+ (dropTrigger <> createTrigger1)
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/triggers/employee_old_app_update_column_name",
+ DBothButDifferent
+ )
+ ]
+
+ addMig_
+ "DROP TRIGGER employee_old_app_update_column_name ON employee"
+ "CREATE TRIGGER employee_old_app_update_column_name\
+ \ \n BEFORE UPDATE ON employee\
+ \ \n FOR EACH ROW\
+ \ \n EXECUTE PROCEDURE employee_name_rename_set_new()"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/triggers/employee_old_app_update_column_name",
+ DNotExpectedButFound
+ )
+ ]
+
+ addMig_
+ "ALTER TABLE employee DROP COLUMN employee_name"
+    -- Undoing this statement requires recreating a lot of dependent objects.
+    -- Also, on Pg <= 12, dropping and re-adding employee_id in the same ALTER TABLE statement
+    -- makes Pg create a new sequence with a different name instead of doing what you might expect
+ "ALTER SEQUENCE some_seq OWNED BY NONE; \
\\nALTER TABLE employee DROP COLUMN birthday, DROP COLUMN deathday, DROP COLUMN name, DROP COLUMN employee_id CASCADE;\
\\nALTER TABLE employee ADD COLUMN employee_name VARCHAR(50) NOT NULL, ADD COLUMN deathday DATE NULL DEFAULT '2100-02-04', \
- \ ADD COLUMN birthday TIMESTAMP, ADD COLUMN employee_id SERIAL PRIMARY KEY, \
- \ ADD COLUMN name TEXT;\
+ \ ADD COLUMN birthday TIMESTAMP, ADD COLUMN employee_id SERIAL PRIMARY KEY, \
+ \ ADD COLUMN name TEXT;\
\\nALTER TABLE employee ADD CONSTRAINT employee_ck_name CHECK (employee_name <> 'EMPTY');\
\\nALTER TABLE employee ADD CONSTRAINT employee_unique_name UNIQUE(employee_name);\
\\nCREATE UNIQUE INDEX unique_employee_idx ON employee (employee_name);\
\\nALTER TABLE employee_car ADD CONSTRAINT employee_car_employee_fk FOREIGN KEY (employee_id) REFERENCES employee(employee_id);\
\\nALTER SEQUENCE some_seq OWNED BY employee.employee_id;"
- SomeChange
-
- addMig_ "ALTER TABLE employee RENAME COLUMN name TO employee_name"
- "ALTER TABLE employee RENAME COLUMN employee_name TO name"
- SomeChange
-
-
- -- VIEWS
- addMig_
- "CREATE OR REPLACE VIEW all_employee_names (employee_name) AS (SELECT employee_name FROM employee)"
- "DROP VIEW all_employee_names"
- $ ChangeEq
- [ ( "schemas/public/views/all_employee_names"
- , DExpectedButNotFound
- )
- ]
-
- addMig_
- "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT employee_name FROM employee)"
- "CREATE OR REPLACE VIEW all_employee_names (employee_name) AS (SELECT employee_name FROM employee)"
- $ ChangeEq
- [("schemas/public/views/all_employee_names", DBothButDifferent)]
-
- addMigNoChanges_
- "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT employee_name FROM employee)"
-
- addMig_
- "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT 'Mr. ' || employee_name FROM employee)"
- "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT employee_name FROM employee)"
- $ ChangeEq
- [("schemas/public/views/all_employee_names", DBothButDifferent)]
-
- addMig_ "ALTER VIEW all_employee_names OWNER TO \"codd-test-user\""
- "ALTER VIEW all_employee_names OWNER TO \"postgres\""
- $ ChangeEq
- [("schemas/public/views/all_employee_names", DBothButDifferent)]
-
-
- -- ROW LEVEL SECURITY
- addMig_ "ALTER TABLE employee ENABLE ROW LEVEL SECURITY"
- "ALTER TABLE employee DISABLE ROW LEVEL SECURITY"
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMig_ "ALTER TABLE employee FORCE ROW LEVEL SECURITY"
- "ALTER TABLE employee NO FORCE ROW LEVEL SECURITY"
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMig_ "ALTER TABLE employee NO FORCE ROW LEVEL SECURITY"
- "ALTER TABLE employee FORCE ROW LEVEL SECURITY"
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- (createPolicy1, dropPolicy) <-
- addMig
- "CREATE POLICY some_policy ON employee USING (employee_name <> 'Some Name');"
- "DROP POLICY some_policy ON employee;"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/policies/some_policy"
- , DExpectedButNotFound
- )
- ]
-
- (dropCreatePolicy2, _) <-
- addMig
- "DROP POLICY some_policy ON employee; CREATE POLICY some_policy ON employee USING (employee_name <> 'Some Other Name');"
- (dropPolicy <> createPolicy1)
- $ ChangeEq
- [ ( "schemas/public/tables/employee/policies/some_policy"
- , DBothButDifferent
- )
- ]
-
- (dropCreatePolicy3, _) <-
- addMig
- "DROP POLICY some_policy ON employee; CREATE POLICY some_policy ON employee FOR UPDATE USING (employee_name <> 'Some Other Name');"
- dropCreatePolicy2
- $ ChangeEq
- [ ( "schemas/public/tables/employee/policies/some_policy"
- , DBothButDifferent
- )
- ]
-
- (dropCreatePolicy4, _) <-
- addMig
- "DROP POLICY some_policy ON employee; CREATE POLICY some_policy ON employee FOR UPDATE USING (employee_name <> 'Some Other Name') WITH CHECK (TRUE);"
- dropCreatePolicy3
- $ ChangeEq
- [ ( "schemas/public/tables/employee/policies/some_policy"
- , DBothButDifferent
- )
- ]
-
- let
- createPolicy5
- = "CREATE POLICY some_policy ON employee FOR UPDATE USING (employee_name <> 'Some Other Name') WITH CHECK (TRUE);"
- addMig_ ("DROP POLICY some_policy ON employee;" <> createPolicy5)
- dropCreatePolicy4
- $ ChangeEq []
+ SomeChange
+
+ addMig_
+ "ALTER TABLE employee RENAME COLUMN name TO employee_name"
+ "ALTER TABLE employee RENAME COLUMN employee_name TO name"
+ SomeChange
+
+ -- VIEWS
+ addMig_
+ "CREATE OR REPLACE VIEW all_employee_names (employee_name) AS (SELECT employee_name FROM employee)"
+ "DROP VIEW all_employee_names"
+ $ ChangeEq
+ [ ( "schemas/public/views/all_employee_names",
+ DExpectedButNotFound
+ )
+ ]
+
+ addMig_
+ "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT employee_name FROM employee)"
+ "CREATE OR REPLACE VIEW all_employee_names (employee_name) AS (SELECT employee_name FROM employee)"
+ $ ChangeEq
+ [("schemas/public/views/all_employee_names", DBothButDifferent)]
+
+ addMigNoChanges_
+ "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT employee_name FROM employee)"
+
+ addMig_
+ "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT 'Mr. ' || employee_name FROM employee)"
+ "CREATE OR REPLACE VIEW all_employee_names (employee_name) WITH (security_barrier=TRUE) AS (SELECT employee_name FROM employee)"
+ $ ChangeEq
+ [("schemas/public/views/all_employee_names", DBothButDifferent)]
+
+ addMig_
+ "ALTER VIEW all_employee_names OWNER TO \"codd-test-user\""
+ "ALTER VIEW all_employee_names OWNER TO \"postgres\""
+ $ ChangeEq
+ [("schemas/public/views/all_employee_names", DBothButDifferent)]
+
+ -- ROW LEVEL SECURITY
+ addMig_
+ "ALTER TABLE employee ENABLE ROW LEVEL SECURITY"
+ "ALTER TABLE employee DISABLE ROW LEVEL SECURITY"
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMig_
+ "ALTER TABLE employee FORCE ROW LEVEL SECURITY"
+ "ALTER TABLE employee NO FORCE ROW LEVEL SECURITY"
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMig_
+ "ALTER TABLE employee NO FORCE ROW LEVEL SECURITY"
+ "ALTER TABLE employee FORCE ROW LEVEL SECURITY"
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ (createPolicy1, dropPolicy) <-
+ addMig
+ "CREATE POLICY some_policy ON employee USING (employee_name <> 'Some Name');"
+ "DROP POLICY some_policy ON employee;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/policies/some_policy",
+ DExpectedButNotFound
+ )
+ ]
- addMig_ "DROP POLICY some_policy ON employee;" createPolicy5 $ ChangeEq
- [ ( "schemas/public/tables/employee/policies/some_policy"
- , DNotExpectedButFound
+ (dropCreatePolicy2, _) <-
+ addMig
+ "DROP POLICY some_policy ON employee; CREATE POLICY some_policy ON employee USING (employee_name <> 'Some Other Name');"
+ (dropPolicy <> createPolicy1)
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/policies/some_policy",
+ DBothButDifferent
)
]
- -- ROLES
- (createUnmappedRole, dropUnmappedRole) <-
- addMig "CREATE ROLE any_unmapped_role" "DROP ROLE any_unmapped_role"
- $ ChangeEq []
- addMig_ dropUnmappedRole createUnmappedRole $ ChangeEq []
- addMig_ "CREATE ROLE \"extra-codd-test-user\""
- "DROP ROLE \"extra-codd-test-user\""
- $ ChangeEq [("roles/extra-codd-test-user", DExpectedButNotFound)]
-
- addMig_
- "ALTER ROLE \"codd-test-user\" SET search_path TO public, pg_catalog"
- "ALTER ROLE \"codd-test-user\" RESET search_path"
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMig_
- "ALTER ROLE \"codd-test-user\" SET default_transaction_isolation='repeatable read'"
- "ALTER ROLE \"codd-test-user\" RESET default_transaction_isolation"
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMig_
- "ALTER ROLE \"codd-test-user\" WITH BYPASSRLS; ALTER ROLE \"codd-test-user\" WITH REPLICATION;"
- "ALTER ROLE \"codd-test-user\" WITH NOBYPASSRLS; ALTER ROLE \"codd-test-user\" WITH NOREPLICATION; "
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMigNoChanges_ "ALTER ROLE \"codd-test-user\" WITH BYPASSRLS"
-
- -- Database-related permissions affect only roles, not db-settings
- (revokeConnect, grantConnect) <-
- addMig
- "REVOKE CONNECT ON DATABASE \"codd-test-db\" FROM \"codd-test-user\""
- "GRANT CONNECT ON DATABASE \"codd-test-db\" TO \"codd-test-user\""
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMig_
- "GRANT CONNECT ON DATABASE \"codd-test-db\" TO \"codd-test-user\""
- revokeConnect
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMigNoChanges_ grantConnect
-
- -- Role membership
- (grantRole, revokeRole) <-
- addMig "GRANT \"extra-codd-test-user\" TO \"codd-test-user\""
- "REVOKE \"extra-codd-test-user\" FROM \"codd-test-user\""
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMig_ revokeRole grantRole
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- -- Config attributes
- addMig_ "ALTER ROLE postgres SET search_path TO public, pg_catalog"
- "ALTER ROLE postgres RESET search_path"
- $ ChangeEq [("roles/postgres", DBothButDifferent)]
-
- addMig_
- "ALTER ROLE \"codd-test-user\" SET search_path TO DEFAULT"
- "ALTER ROLE \"codd-test-user\" SET search_path TO public, pg_catalog"
- $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
-
- addMig_ "ALTER ROLE postgres SET search_path TO DEFAULT"
- "ALTER ROLE postgres SET search_path TO public, pg_catalog"
- $ ChangeEq [("roles/postgres", DBothButDifferent)]
-
- addMigNoChanges_ "ALTER ROLE postgres SET search_path TO DEFAULT"
-
- -- PERMISSIONS
- -- For tables
-
- -- Owner of the table implicitly has all privileges by default
- addMig_ "GRANT ALL ON TABLE employee TO postgres" "SELECT 1;" $ ChangeEq []
-
- addMig_ "GRANT SELECT ON TABLE employee TO \"codd-test-user\""
- "REVOKE SELECT ON TABLE employee FROM \"codd-test-user\""
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMig_ "GRANT INSERT ON TABLE employee TO \"codd-test-user\""
- "REVOKE INSERT ON TABLE employee FROM \"codd-test-user\""
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMig_ "GRANT DELETE ON TABLE employee TO \"codd-test-user\""
- "REVOKE DELETE ON TABLE employee FROM \"codd-test-user\""
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- -- For sequences
- addMig_
- "GRANT SELECT ON SEQUENCE employee_employee_id_seq TO \"codd-test-user\""
- "REVOKE SELECT ON SEQUENCE employee_employee_id_seq FROM \"codd-test-user\";"
- $ ChangeEq
- [ ( "schemas/public/sequences/employee_employee_id_seq"
- , DBothButDifferent
- )
- ]
+ (dropCreatePolicy3, _) <-
+ addMig
+ "DROP POLICY some_policy ON employee; CREATE POLICY some_policy ON employee FOR UPDATE USING (employee_name <> 'Some Other Name');"
+ dropCreatePolicy2
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/policies/some_policy",
+ DBothButDifferent
+ )
+ ]
+ (dropCreatePolicy4, _) <-
+ addMig
+ "DROP POLICY some_policy ON employee; CREATE POLICY some_policy ON employee FOR UPDATE USING (employee_name <> 'Some Other Name') WITH CHECK (TRUE);"
+ dropCreatePolicy3
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/policies/some_policy",
+ DBothButDifferent
+ )
+ ]
- -- At this point codd-test-user has S+I+D permissions on the employee table
- -- Order of granting does not matter, nor do grantors
- addMig_
- "REVOKE ALL ON TABLE employee FROM \"codd-test-user\""
- "GRANT SELECT,INSERT,DELETE ON TABLE employee TO \"codd-test-user\""
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMig_
- "GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\";"
- "REVOKE INSERT, DELETE ON TABLE employee FROM \"codd-test-user\";"
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- -- At this point codd-test-user has I+D permissions on the employee table
- addMig_
- "REVOKE ALL ON TABLE employee FROM \"codd-test-user\"; GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\";"
- "GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\";"
- $ ChangeEq []
-
- -- At this point codd-test-user has I+D permissions on the employee table
- addMig_
- "GRANT ALL ON TABLE employee TO \"codd-test-user\""
- "REVOKE ALL ON TABLE employee FROM \"codd-test-user\"; GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\""
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMig_ "GRANT ALL ON TABLE employee TO \"extra-codd-test-user\""
- "REVOKE ALL ON TABLE employee FROM \"extra-codd-test-user\""
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
- addMigNoChanges_
- "REVOKE ALL ON TABLE employee FROM \"codd-test-user\"; GRANT ALL ON TABLE employee TO \"codd-test-user\"; GRANT ALL ON TABLE employee TO \"extra-codd-test-user\""
-
- addMig_
- "GRANT ALL ON TABLE employee TO PUBLIC"
- "REVOKE ALL ON TABLE employee FROM PUBLIC; GRANT ALL ON TABLE employee TO \"codd-test-user\"; GRANT ALL ON TABLE employee TO \"extra-codd-test-user\";"
- $ ChangeEq
- [("schemas/public/tables/employee/objrep", DBothButDifferent)]
-
-
-    -- Permissions of an unmapped role don't affect hashing
- (createUnmappedRole1AndGrant, dropUnmappedRole1) <-
- addMig
- "CREATE ROLE unmapped_role1; GRANT ALL ON TABLE employee TO unmapped_role1; GRANT ALL ON SEQUENCE employee_employee_id_seq TO unmapped_role1; GRANT ALL ON all_employee_names TO unmapped_role1"
- "DROP OWNED BY unmapped_role1; DROP ROLE unmapped_role1"
- $ ChangeEq []
-
- addMig_ dropUnmappedRole1 createUnmappedRole1AndGrant $ ChangeEq []
-
- -- CREATING UNMAPPED AND MAPPED SCHEMAS
- addMig_ "CREATE SCHEMA unmappedschema" "DROP SCHEMA unmappedschema"
- $ ChangeEq []
- addMig_ "DROP SCHEMA unmappedschema" "CREATE SCHEMA unmappedschema"
- $ ChangeEq []
- (createMappedSchema, dropMappedSchema) <-
- addMig "CREATE SCHEMA \"codd-extra-mapped-schema\""
- "DROP SCHEMA \"codd-extra-mapped-schema\""
- $ ChangeEq
- [ ( "schemas/codd-extra-mapped-schema/objrep"
- , DExpectedButNotFound
- )
- ]
+ let createPolicy5 =
+ "CREATE POLICY some_policy ON employee FOR UPDATE USING (employee_name <> 'Some Other Name') WITH CHECK (TRUE);"
+ addMig_
+ ("DROP POLICY some_policy ON employee;" <> createPolicy5)
+ dropCreatePolicy4
+ $ ChangeEq []
+
+ addMig_ "DROP POLICY some_policy ON employee;" createPolicy5 $
+ ChangeEq
+ [ ( "schemas/public/tables/employee/policies/some_policy",
+ DNotExpectedButFound
+ )
+ ]
+
+ -- ROLES
+ (createUnmappedRole, dropUnmappedRole) <-
+ addMig "CREATE ROLE any_unmapped_role" "DROP ROLE any_unmapped_role" $
+ ChangeEq []
+ addMig_ dropUnmappedRole createUnmappedRole $ ChangeEq []
+ addMig_
+ "CREATE ROLE \"extra-codd-test-user\""
+ "DROP ROLE \"extra-codd-test-user\""
+ $ ChangeEq [("roles/extra-codd-test-user", DExpectedButNotFound)]
+
+ addMig_
+ "ALTER ROLE \"codd-test-user\" SET search_path TO public, pg_catalog"
+ "ALTER ROLE \"codd-test-user\" RESET search_path"
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMig_
+ "ALTER ROLE \"codd-test-user\" SET default_transaction_isolation='repeatable read'"
+ "ALTER ROLE \"codd-test-user\" RESET default_transaction_isolation"
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMig_
+ "ALTER ROLE \"codd-test-user\" WITH BYPASSRLS; ALTER ROLE \"codd-test-user\" WITH REPLICATION;"
+ "ALTER ROLE \"codd-test-user\" WITH NOBYPASSRLS; ALTER ROLE \"codd-test-user\" WITH NOREPLICATION; "
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMigNoChanges_ "ALTER ROLE \"codd-test-user\" WITH BYPASSRLS"
+
+ -- Database-related permissions affect only roles, not db-settings
+ (revokeConnect, grantConnect) <-
+ addMig
+ "REVOKE CONNECT ON DATABASE \"codd-test-db\" FROM \"codd-test-user\""
+ "GRANT CONNECT ON DATABASE \"codd-test-db\" TO \"codd-test-user\""
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMig_
+ "GRANT CONNECT ON DATABASE \"codd-test-db\" TO \"codd-test-user\""
+ revokeConnect
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMigNoChanges_ grantConnect
+
+ -- Role membership
+ (grantRole, revokeRole) <-
+ addMig
+ "GRANT \"extra-codd-test-user\" TO \"codd-test-user\""
+ "REVOKE \"extra-codd-test-user\" FROM \"codd-test-user\""
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMig_ revokeRole grantRole $
+ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ -- Config attributes
+ addMig_
+ "ALTER ROLE postgres SET search_path TO public, pg_catalog"
+ "ALTER ROLE postgres RESET search_path"
+ $ ChangeEq [("roles/postgres", DBothButDifferent)]
+
+ addMig_
+ "ALTER ROLE \"codd-test-user\" SET search_path TO DEFAULT"
+ "ALTER ROLE \"codd-test-user\" SET search_path TO public, pg_catalog"
+ $ ChangeEq [("roles/codd-test-user", DBothButDifferent)]
+
+ addMig_
+ "ALTER ROLE postgres SET search_path TO DEFAULT"
+ "ALTER ROLE postgres SET search_path TO public, pg_catalog"
+ $ ChangeEq [("roles/postgres", DBothButDifferent)]
+
+ addMigNoChanges_ "ALTER ROLE postgres SET search_path TO DEFAULT"
+
+ -- PERMISSIONS
+ -- For tables
+
+ -- Owner of the table implicitly has all privileges by default
+ addMig_ "GRANT ALL ON TABLE employee TO postgres" "SELECT 1;" $ ChangeEq []
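+  -- ("SELECT 1;" works as the undo migration because the GRANT above changes
+  -- nothing, as asserted by ChangeEq [].)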
+
+ addMig_
+ "GRANT SELECT ON TABLE employee TO \"codd-test-user\""
+ "REVOKE SELECT ON TABLE employee FROM \"codd-test-user\""
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMig_
+ "GRANT INSERT ON TABLE employee TO \"codd-test-user\""
+ "REVOKE INSERT ON TABLE employee FROM \"codd-test-user\""
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMig_
+ "GRANT DELETE ON TABLE employee TO \"codd-test-user\""
+ "REVOKE DELETE ON TABLE employee FROM \"codd-test-user\""
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ -- For sequences
+ addMig_
+ "GRANT SELECT ON SEQUENCE employee_employee_id_seq TO \"codd-test-user\""
+ "REVOKE SELECT ON SEQUENCE employee_employee_id_seq FROM \"codd-test-user\";"
+ $ ChangeEq
+ [ ( "schemas/public/sequences/employee_employee_id_seq",
+ DBothButDifferent
+ )
+ ]
+
+ -- At this point codd-test-user has S+I+D permissions on the employee table
+ -- Order of granting does not matter, nor do grantors
+ addMig_
+ "REVOKE ALL ON TABLE employee FROM \"codd-test-user\""
+ "GRANT SELECT,INSERT,DELETE ON TABLE employee TO \"codd-test-user\""
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMig_
+ "GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\";"
+ "REVOKE INSERT, DELETE ON TABLE employee FROM \"codd-test-user\";"
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ -- At this point codd-test-user has I+D permissions on the employee table
+ addMig_
+ "REVOKE ALL ON TABLE employee FROM \"codd-test-user\"; GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\";"
+ "GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\";"
+ $ ChangeEq []
+
+ -- At this point codd-test-user has I+D permissions on the employee table
+ addMig_
+ "GRANT ALL ON TABLE employee TO \"codd-test-user\""
+ "REVOKE ALL ON TABLE employee FROM \"codd-test-user\"; GRANT INSERT, DELETE ON TABLE employee TO \"codd-test-user\""
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMig_
+ "GRANT ALL ON TABLE employee TO \"extra-codd-test-user\""
+ "REVOKE ALL ON TABLE employee FROM \"extra-codd-test-user\""
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+ addMigNoChanges_
+ "REVOKE ALL ON TABLE employee FROM \"codd-test-user\"; GRANT ALL ON TABLE employee TO \"codd-test-user\"; GRANT ALL ON TABLE employee TO \"extra-codd-test-user\""
+
+ addMig_
+ "GRANT ALL ON TABLE employee TO PUBLIC"
+ "REVOKE ALL ON TABLE employee FROM PUBLIC; GRANT ALL ON TABLE employee TO \"codd-test-user\"; GRANT ALL ON TABLE employee TO \"extra-codd-test-user\";"
+ $ ChangeEq
+ [("schemas/public/tables/employee/objrep", DBothButDifferent)]
+
+  -- Permissions of an unmapped role don't affect hashing
+ (createUnmappedRole1AndGrant, dropUnmappedRole1) <-
+ addMig
+ "CREATE ROLE unmapped_role1; GRANT ALL ON TABLE employee TO unmapped_role1; GRANT ALL ON SEQUENCE employee_employee_id_seq TO unmapped_role1; GRANT ALL ON all_employee_names TO unmapped_role1"
+ "DROP OWNED BY unmapped_role1; DROP ROLE unmapped_role1"
+ $ ChangeEq []
+
+ addMig_ dropUnmappedRole1 createUnmappedRole1AndGrant $ ChangeEq []
+
+ -- CREATING UNMAPPED AND MAPPED SCHEMAS
+ addMig_ "CREATE SCHEMA unmappedschema" "DROP SCHEMA unmappedschema" $
+ ChangeEq []
+ addMig_ "DROP SCHEMA unmappedschema" "CREATE SCHEMA unmappedschema" $
+ ChangeEq []
+ (createMappedSchema, dropMappedSchema) <-
+ addMig
+ "CREATE SCHEMA \"codd-extra-mapped-schema\""
+ "DROP SCHEMA \"codd-extra-mapped-schema\""
+ $ ChangeEq
+ [ ( "schemas/codd-extra-mapped-schema/objrep",
+ DExpectedButNotFound
+ )
+ ]
- addMig_ dropMappedSchema createMappedSchema $ ChangeEq
- [("schemas/codd-extra-mapped-schema/objrep", DNotExpectedButFound)]
-
-
- -- DATABASE SETTINGS
-    -- Granting privileges that hold by default changes nothing
- let grantConnectPublic =
- "GRANT CONNECT ON DATABASE \"codd-test-db\" TO public"
- addMigNoChanges_ grantConnectPublic
-
-    -- Privileges for public affect db-settings, not any specific role
- addMig_ "REVOKE CONNECT ON DATABASE \"codd-test-db\" FROM public;"
- grantConnectPublic
- $ ChangeEq [("db-settings", DBothButDifferent)]
-
- -- codd-test-user owns codd-test-db, so it already has permissions to connect to it
- addMigNoChanges_
- "GRANT CONNECT ON DATABASE \"codd-test-db\" TO \"codd-test-user\""
-
- addMig_
- "ALTER DATABASE \"codd-test-db\" SET default_transaction_isolation TO 'serializable';"
- "ALTER DATABASE \"codd-test-db\" RESET default_transaction_isolation;"
- $ ChangeEq [("db-settings", DBothButDifferent)]
-
- -- Session settings don't change the schema
- addMig_ "SET default_transaction_isolation TO 'serializable';"
- "RESET default_transaction_isolation;"
- $ ChangeEq []
-
- -- COLLATIONS
- (createCutf8Coll, dropColl) <-
- addMig "CREATE COLLATION new_collation (locale = 'C.utf8');"
- "DROP COLLATION new_collation;"
- $ ChangeEq
- [ ( "schemas/public/collations/new_collation"
- , DExpectedButNotFound
- )
- ]
- (dropCreateEnUSColl, _) <-
- addMig
- ( dropColl
- <> " CREATE COLLATION new_collation (locale = 'en_US.utf8');"
- )
- (dropColl <> createCutf8Coll)
- $ ChangeEq
- [ ( "schemas/public/collations/new_collation"
- , DBothButDifferent
- )
- ]
- addMig_
- (dropColl
- <> " CREATE COLLATION new_collation (provider = icu, locale = 'de-u-co-phonebk');"
+ addMig_ dropMappedSchema createMappedSchema $
+ ChangeEq
+ [("schemas/codd-extra-mapped-schema/objrep", DNotExpectedButFound)]
+
+ -- DATABASE SETTINGS
+ -- Default privileges change nothing
+ let grantConnectPublic =
+ "GRANT CONNECT ON DATABASE \"codd-test-db\" TO public"
+ addMigNoChanges_ grantConnectPublic
+
+ -- Privileges for public affect db-settings, not some role
+ addMig_
+ "REVOKE CONNECT ON DATABASE \"codd-test-db\" FROM public;"
+ grantConnectPublic
+ $ ChangeEq [("db-settings", DBothButDifferent)]
+
+ -- codd-test-user owns codd-test-db, so it already has permissions to connect to it
+ addMigNoChanges_
+ "GRANT CONNECT ON DATABASE \"codd-test-db\" TO \"codd-test-user\""
+
+ addMig_
+ "ALTER DATABASE \"codd-test-db\" SET default_transaction_isolation TO 'serializable';"
+ "ALTER DATABASE \"codd-test-db\" RESET default_transaction_isolation;"
+ $ ChangeEq [("db-settings", DBothButDifferent)]
+
+ -- Session settings don't change the schema
+ addMig_
+ "SET default_transaction_isolation TO 'serializable';"
+ "RESET default_transaction_isolation;"
+ $ ChangeEq []
+
+ -- COLLATIONS
+ (createCutf8Coll, dropColl) <-
+ addMig
+ "CREATE COLLATION new_collation (locale = 'C.utf8');"
+ "DROP COLLATION new_collation;"
+ $ ChangeEq
+ [ ( "schemas/public/collations/new_collation",
+ DExpectedButNotFound
+ )
+ ]
+ (dropCreateEnUSColl, _) <-
+ addMig
+ ( dropColl
+ <> " CREATE COLLATION new_collation (locale = 'en_US.utf8');"
+ )
+ (dropColl <> createCutf8Coll)
+ $ ChangeEq
+ [ ( "schemas/public/collations/new_collation",
+ DBothButDifferent
+ )
+ ]
+ addMig_
+ ( dropColl
+ <> " CREATE COLLATION new_collation (provider = icu, locale = 'de-u-co-phonebk');"
+ )
+ dropCreateEnUSColl
+ $ ChangeEq
+ [("schemas/public/collations/new_collation", DBothButDifferent)]
+
+ addMig_
+ "ALTER TABLE employee ADD COLUMN employee_surname TEXT;"
+ "ALTER TABLE employee DROP COLUMN employee_surname;"
+ SomeChange
+
+ addMig_
+ "ALTER TABLE employee ALTER COLUMN employee_surname TYPE TEXT COLLATE \"new_collation\";"
+ "ALTER TABLE employee ALTER COLUMN employee_surname TYPE TEXT COLLATE \"default\";"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/employee_surname",
+ DBothButDifferent
+ )
+ ]
+
+ -- Deterministic collations were introduced in Pg 12.
+ -- addMig_ (dropColl <> " CREATE COLLATION new_collation (locale = 'C.utf8', deterministic = false);") dropColl $ ChangeEq [("schemas/public/collations/new_collation", DBothButDifferent)]
+
+ -- TYPES
+
+ -- Enum Types
+ (createExp, dropExp) <-
+ addMig
+ "CREATE TYPE experience AS ENUM ('junior', 'senior');"
+ "DROP TYPE experience;"
+ $ ChangeEq
+ [("schemas/public/types/experience", DExpectedButNotFound)]
+
+ addMig_
+ "-- codd: no-txn\n\
+ \ALTER TYPE experience ADD VALUE 'intern' BEFORE 'junior';"
+ (dropExp <> createExp)
+ $ ChangeEq [("schemas/public/types/experience", DBothButDifferent)]
+
+ -- Composite types
+ addMig_ "CREATE TYPE complex AS (a double precision);" "DROP TYPE complex;" $
+ ChangeEq [("schemas/public/types/complex", DExpectedButNotFound)]
+ addMig_
+ "ALTER TYPE complex ADD ATTRIBUTE b double precision;"
+ "ALTER TYPE complex DROP ATTRIBUTE b"
+ $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
+
+ addMig_
+ "ALTER TYPE complex ALTER ATTRIBUTE b SET DATA TYPE text;"
+ "ALTER TYPE complex ALTER ATTRIBUTE b SET DATA TYPE double precision;"
+ $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
+
+ addMig_
+ "ALTER TYPE complex ALTER ATTRIBUTE b TYPE text COLLATE new_collation;"
+ "ALTER TYPE complex ALTER ATTRIBUTE b TYPE text COLLATE \"default\";"
+ $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
+
+ addMig_
+ "ALTER TYPE complex ADD ATTRIBUTE c employee;"
+ "ALTER TYPE complex DROP ATTRIBUTE c;"
+ $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
+
+ -- We don't want the type to change when the table changes
+ -- because it'd be unnecessarily verbose.
+ addMig_
+ "ALTER TABLE employee ADD COLUMN anycolumn TEXT;"
+ "ALTER TABLE employee DROP COLUMN anycolumn;"
+ $ ChangeEq
+ [ ( "schemas/public/tables/employee/cols/anycolumn",
+ DExpectedButNotFound
+ )
+ ]
+
+ -- Range types
+ (createFloatRange1, dropFloatRange) <-
+ addMig
+ "CREATE TYPE floatrange AS RANGE (subtype = float8,subtype_diff = float8mi);"
+ "DROP TYPE floatrange;"
+ $ ChangeEq
+ -- PG also creates the range/multirange constructor functions and types
+ ( [ ("schemas/public/types/floatrange", DExpectedButNotFound),
+ ( "schemas/public/routines/floatrange;float8,float8",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/routines/floatrange;float8,float8,text",
+ DExpectedButNotFound
)
- dropCreateEnUSColl
- $ ChangeEq
- [("schemas/public/collations/new_collation", DBothButDifferent)]
-
- addMig_ "ALTER TABLE employee ADD COLUMN employee_surname TEXT;"
- "ALTER TABLE employee DROP COLUMN employee_surname;"
- SomeChange
-
- addMig_
- "ALTER TABLE employee ALTER COLUMN employee_surname TYPE TEXT COLLATE \"new_collation\";"
- "ALTER TABLE employee ALTER COLUMN employee_surname TYPE TEXT COLLATE \"default\";"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/employee_surname"
- , DBothButDifferent
- )
- ]
-
- -- Deterministic collations were introduced in Pg 12..
- -- addMig_ (dropColl <> " CREATE COLLATION (locale = 'C.utf8', deterministic = false) new_collation;") dropColl $ ChangeEq [("schemas/public/collations/new_collation", DBothButDifferent )]
-
- -- TYPES
-
- -- Enum Types
- (createExp, dropExp) <-
- addMig "CREATE TYPE experience AS ENUM ('junior', 'senior');"
- "DROP TYPE experience;"
- $ ChangeEq
- [("schemas/public/types/experience", DExpectedButNotFound)]
-
- addMig_
- "-- codd: no-txn\n\
- \ALTER TYPE experience ADD VALUE 'intern' BEFORE 'junior';"
- (dropExp <> createExp)
- $ ChangeEq [("schemas/public/types/experience", DBothButDifferent)]
-
- -- Composite types
- addMig_ "CREATE TYPE complex AS (a double precision);" "DROP TYPE complex;"
- $ ChangeEq [("schemas/public/types/complex", DExpectedButNotFound)]
- addMig_ "ALTER TYPE complex ADD ATTRIBUTE b double precision;"
- "ALTER TYPE complex DROP ATTRIBUTE b"
- $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
-
- addMig_
- "ALTER TYPE complex ALTER ATTRIBUTE b SET DATA TYPE text;"
- "ALTER TYPE complex ALTER ATTRIBUTE b SET DATA TYPE double precision;"
- $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
-
- addMig_
- "ALTER TYPE complex ALTER ATTRIBUTE b TYPE text COLLATE new_collation;"
- "ALTER TYPE complex ALTER ATTRIBUTE b TYPE text COLLATE \"default\";"
- $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
-
- addMig_ "ALTER TYPE complex ADD ATTRIBUTE c employee;"
- "ALTER TYPE complex DROP ATTRIBUTE c;"
- $ ChangeEq [("schemas/public/types/complex", DBothButDifferent)]
-
- -- We don't want the type to change when the table changes
- -- because it'd be unnecessarily verbose.
- addMig_ "ALTER TABLE employee ADD COLUMN anycolumn TEXT;"
- "ALTER TABLE employee DROP COLUMN anycolumn;"
- $ ChangeEq
- [ ( "schemas/public/tables/employee/cols/anycolumn"
- , DExpectedButNotFound
- )
- ]
-
- -- Range types
- (createFloatRange1, dropFloatRange) <-
- addMig
- "CREATE TYPE floatrange AS RANGE (subtype = float8,subtype_diff = float8mi);"
- "DROP TYPE floatrange;"
- $ ChangeEq
- -- Range and multirange constructor/types functions created by PG too
- ( [ ("schemas/public/types/floatrange", DExpectedButNotFound)
- , ( "schemas/public/routines/floatrange;float8,float8"
- , DExpectedButNotFound
- )
- , ( "schemas/public/routines/floatrange;float8,float8,text"
- , DExpectedButNotFound
- )
- ]
- ++ if pgVersion >= 14
- then
- [ ( "schemas/public/types/floatmultirange"
- , DExpectedButNotFound
- )
- , ( "schemas/public/routines/floatmultirange;"
- , DExpectedButNotFound
- )
- , ( "schemas/public/routines/floatmultirange;floatrange"
- , DExpectedButNotFound
- )
- , ( "schemas/public/routines/floatmultirange;_floatrange"
- , DExpectedButNotFound
- )
- ]
- else []
+ ]
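+ -- Multirange types and their constructors only exist on PG 14+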
+ ++ if pgVersion >= 14
+ then
+ [ ( "schemas/public/types/floatmultirange",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/routines/floatmultirange;",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/routines/floatmultirange;floatrange",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/routines/floatmultirange;_floatrange",
+ DExpectedButNotFound
)
-
- addMig_
- "CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS 'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;"
- "DROP FUNCTION time_subtype_diff"
- SomeChange
-
- -- Change of subtype
- addMig_
- (dropFloatRange
- <> "CREATE TYPE floatrange AS RANGE (subtype = time,subtype_diff = time_subtype_diff);"
- )
- (dropFloatRange <> createFloatRange1)
- $ ChangeEq
- [ ("schemas/public/types/floatrange", DBothButDifferent)
- -- Constructor functions:
- , ( "schemas/public/routines/floatrange;time,time"
- , DExpectedButNotFound
- )
- , ( "schemas/public/routines/floatrange;time,time,text"
- , DExpectedButNotFound
- )
- , ( "schemas/public/routines/floatrange;float8,float8"
- , DNotExpectedButFound
- )
- , ( "schemas/public/routines/floatrange;float8,float8,text"
- , DNotExpectedButFound
- )
- ]
- -- Multirange constructors are unchanged since the type of their arguments
- -- is exclusively `floatrange`.
-
- -- Domain types
- addMig_
- "CREATE DOMAIN non_empty_text TEXT NOT NULL CHECK (VALUE != '');"
- "DROP DOMAIN non_empty_text;"
- $ ChangeEq
- [("schemas/public/types/non_empty_text", DExpectedButNotFound)]
-
- addMig_ "ALTER DOMAIN non_empty_text SET DEFAULT 'empty';"
- "ALTER DOMAIN non_empty_text DROP DEFAULT;"
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- addMig_ "ALTER DOMAIN non_empty_text DROP NOT NULL;"
- "ALTER DOMAIN non_empty_text SET NOT NULL;"
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- (addTypeCheck, dropTypeCheck) <-
- addMig
- "ALTER DOMAIN non_empty_text ADD CONSTRAINT new_constraint CHECK(TRIM(VALUE) != '') NOT VALID;"
- "ALTER DOMAIN non_empty_text DROP CONSTRAINT new_constraint;"
- $ ChangeEq
- [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- addMig_
- "ALTER DOMAIN non_empty_text VALIDATE CONSTRAINT new_constraint;"
- (dropTypeCheck <> addTypeCheck)
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- addMig_
- "ALTER DOMAIN non_empty_text RENAME CONSTRAINT new_constraint TO new_constraint_2;"
- "ALTER DOMAIN non_empty_text RENAME CONSTRAINT new_constraint_2 TO new_constraint;"
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- -- Change type permissions/ownership.
- addMig_ "ALTER DOMAIN non_empty_text OWNER TO \"codd-test-user\""
- "ALTER DOMAIN non_empty_text OWNER TO postgres;"
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- addMig_ "ALTER DOMAIN non_empty_text OWNER TO postgres"
- "ALTER DOMAIN non_empty_text OWNER TO \"codd-test-user\";"
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- addMig_ "GRANT ALL ON DOMAIN non_empty_text TO \"codd-test-user\""
- "REVOKE ALL ON DOMAIN non_empty_text FROM \"codd-test-user\";"
- $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
-
- -- CRUD
- addMig_ "INSERT INTO employee (employee_name) VALUES ('Marcelo')"
- "DELETE FROM employee WHERE employee_name='Marcelo'"
- $ ChangeEq []
+ ]
+ else []
+ )
+
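+ -- Needed as the subtype_diff of the time-based floatrange re-created below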
+ addMig_
+ "CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS 'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;"
+ "DROP FUNCTION time_subtype_diff"
+ SomeChange
+
+ -- Change of subtype
+ addMig_
+ ( dropFloatRange
+ <> "CREATE TYPE floatrange AS RANGE (subtype = time,subtype_diff = time_subtype_diff);"
+ )
+ (dropFloatRange <> createFloatRange1)
+ $ ChangeEq
+ [ ("schemas/public/types/floatrange", DBothButDifferent),
+ -- Constructor functions:
+ ( "schemas/public/routines/floatrange;time,time",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/routines/floatrange;time,time,text",
+ DExpectedButNotFound
+ ),
+ ( "schemas/public/routines/floatrange;float8,float8",
+ DNotExpectedButFound
+ ),
+ ( "schemas/public/routines/floatrange;float8,float8,text",
+ DNotExpectedButFound
+ )
+ ]
+ -- Multirange constructors are unchanged since the type of their arguments
+ -- is exclusively `floatrange`.
+
+ -- Domain types
+ addMig_
+ "CREATE DOMAIN non_empty_text TEXT NOT NULL CHECK (VALUE != '');"
+ "DROP DOMAIN non_empty_text;"
+ $ ChangeEq
+ [("schemas/public/types/non_empty_text", DExpectedButNotFound)]
+
+ addMig_
+ "ALTER DOMAIN non_empty_text SET DEFAULT 'empty';"
+ "ALTER DOMAIN non_empty_text DROP DEFAULT;"
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ addMig_
+ "ALTER DOMAIN non_empty_text DROP NOT NULL;"
+ "ALTER DOMAIN non_empty_text SET NOT NULL;"
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ (addTypeCheck, dropTypeCheck) <-
+ addMig
+ "ALTER DOMAIN non_empty_text ADD CONSTRAINT new_constraint CHECK(TRIM(VALUE) != '') NOT VALID;"
+ "ALTER DOMAIN non_empty_text DROP CONSTRAINT new_constraint;"
+ $ ChangeEq
+ [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ addMig_
+ "ALTER DOMAIN non_empty_text VALIDATE CONSTRAINT new_constraint;"
+ (dropTypeCheck <> addTypeCheck)
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ addMig_
+ "ALTER DOMAIN non_empty_text RENAME CONSTRAINT new_constraint TO new_constraint_2;"
+ "ALTER DOMAIN non_empty_text RENAME CONSTRAINT new_constraint_2 TO new_constraint;"
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ -- Change type permissions/ownership.
+ addMig_
+ "ALTER DOMAIN non_empty_text OWNER TO \"codd-test-user\""
+ "ALTER DOMAIN non_empty_text OWNER TO postgres;"
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ addMig_
+ "ALTER DOMAIN non_empty_text OWNER TO postgres"
+ "ALTER DOMAIN non_empty_text OWNER TO \"codd-test-user\";"
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ addMig_
+ "GRANT ALL ON DOMAIN non_empty_text TO \"codd-test-user\""
+ "REVOKE ALL ON DOMAIN non_empty_text FROM \"codd-test-user\";"
+ $ ChangeEq [("schemas/public/types/non_empty_text", DBothButDifferent)]
+
+ -- CRUD
+ addMig_
+ "INSERT INTO employee (employee_name) VALUES ('Marcelo')"
+ "DELETE FROM employee WHERE employee_name='Marcelo'"
+ $ ChangeEq []
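+ -- | A total version of "last": returns Nothing for the empty list.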
lastMaybe :: [a] -> Maybe a
-lastMaybe [] = Nothing
-lastMaybe [x ] = Just x
+lastMaybe [] = Nothing
+lastMaybe [x] = Just x
lastMaybe (_ : xs) = lastMaybe xs
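+ -- | How many already-applied migrations the reversibility property test
+ -- will undo and then re-apply.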
newtype NumMigsToReverse = NumMigsToReverse Int deriving stock (Show)
+
instance Arbitrary NumMigsToReverse where
- arbitrary = NumMigsToReverse
- -- TODO: We know the number of migrations does not depend on the server's version,
- -- but this is really ugly. We should avoid this generate random input without
- -- an instance of Arbitrary instead.
- <$> chooseBoundedIntegral
+ arbitrary =
+ NumMigsToReverse
+ -- TODO: We know the number of migrations does not depend on the server's version,
+ -- but this is really ugly. We should avoid this and generate random input
+ -- without an instance of Arbitrary instead.
+ <$> chooseBoundedIntegral
(5, length (migrationsAndRepChangeText 0) - 1)
-- | This type includes each migration with their expected changes and hashes after applied. Hashes before the first migration are not included.
@@ -1192,459 +1234,511 @@ newtype AccumChanges m = AccumChanges [((AddedSqlMigration m, DbChange), DbRep)]
spec :: Spec
spec = do
- describe "DbDependentSpecs" $ do
- aroundFreshDatabase
- $ it "User search_path does not affect representations"
- $ \emptyTestDbInfo -> do
- createStuffMig <-
- AddedSqlMigration
- <$> ( either (error "Could not parse SQL migration") id
- <$> parseSqlMigrationIO
- "1900-01-01-00-00-00-create-stuff.sql"
- ( PureStream
- $ Streaming.yield
- "-- Columns with default values are SQL expressions that should reference the schema's name.\
- \\n-- And SQL expressions are what we need for this test, since they change when the search_path changes.\
- \\nCREATE SCHEMA nsp1; CREATE SCHEMA nsp2; CREATE TABLE nsp1.tbl1 (col1 SERIAL PRIMARY KEY); CREATE TABLE nsp2.tbl2 (col2 SERIAL PRIMARY KEY);"
- )
- )
- <*> pure (getIncreasingTimestamp 0)
- let setSearchPath conn sp =
- liftIO
- $ void
- $ DB.query @(DB.Only Text) @(DB.Only Text)
- conn
- "SELECT set_config('search_path', ?, false)"
- (DB.Only sp)
- (reps1, reps2, reps3, reps4) <-
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [hoistAddedSqlMigration lift createStuffMig])
- testConnTimeout
- (\conn -> do
- setSearchPath conn "nsp1"
- reps1 <- readRepresentationsFromDbWithSettings
- emptyTestDbInfo
- conn
- setSearchPath conn "nsp2"
- reps2 <- readRepresentationsFromDbWithSettings
- emptyTestDbInfo
- conn
- setSearchPath conn "nsp1, nsp2"
- reps3 <- readRepresentationsFromDbWithSettings
- emptyTestDbInfo
- conn
- setSearchPath conn "public"
- reps4 <- readRepresentationsFromDbWithSettings
- emptyTestDbInfo
- conn
- pure (reps1, reps2, reps3, reps4)
- )
- reps1 `shouldBe` reps2
- reps2 `shouldBe` reps3
- reps3 `shouldBe` reps4
-
- aroundFreshDatabase
- $ it "Strict and Lax collation representations differ"
- $ \emptyTestDbInfo -> do
+ describe "DbDependentSpecs" $ do
+ aroundFreshDatabase $
+ it "User search_path does not affect representations" $
+ \emptyTestDbInfo -> do
+ createStuffMig <-
+ AddedSqlMigration
+ <$> ( either (error "Could not parse SQL migration") id
+ <$> parseSqlMigrationIO
+ "1900-01-01-00-00-00-create-stuff.sql"
+ ( PureStream $
+ Streaming.yield
+ "-- Columns with default values are SQL expressions that should reference the schema's name.\
+ \\n-- And SQL expressions are what we need for this test, since they change when the search_path changes.\
+ \\nCREATE SCHEMA nsp1; CREATE SCHEMA nsp2; CREATE TABLE nsp1.tbl1 (col1 SERIAL PRIMARY KEY); CREATE TABLE nsp2.tbl2 (col2 SERIAL PRIMARY KEY);"
+ )
+ )
+ <*> pure (getIncreasingTimestamp 0)
+ let setSearchPath conn sp =
+ liftIO $
+ void $
+ DB.query @(DB.Only Text) @(DB.Only Text)
+ conn
+ "SELECT set_config('search_path', ?, false)"
+ (DB.Only sp)
+ (reps1, reps2, reps3, reps4) <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [hoistAddedSqlMigration lift createStuffMig])
+ testConnTimeout
+ ( \conn -> do
+ setSearchPath conn "nsp1"
+ reps1 <-
+ readRepresentationsFromDbWithSettings
+ emptyTestDbInfo
+ conn
+ setSearchPath conn "nsp2"
+ reps2 <-
+ readRepresentationsFromDbWithSettings
+ emptyTestDbInfo
+ conn
+ setSearchPath conn "nsp1, nsp2"
+ reps3 <-
+ readRepresentationsFromDbWithSettings
+ emptyTestDbInfo
+ conn
+ setSearchPath conn "public"
+ reps4 <-
+ readRepresentationsFromDbWithSettings
+ emptyTestDbInfo
+ conn
+ pure (reps1, reps2, reps3, reps4)
+ )
+ reps1 `shouldBe` reps2
+ reps2 `shouldBe` reps3
+ reps3 `shouldBe` reps4
+
+ aroundFreshDatabase $
+ it "Strict and Lax collation representations differ" $
+ \emptyTestDbInfo -> do
-- It'd be nice to test that different libc/libicu versions make hashes
-- change, but I don't know how to do that sanely.
-- So we just test the code paths and make sure hashes differ.
- let strictCollDbInfo = emptyTestDbInfo
- { schemaAlgoOpts = SchemaAlgo
- { strictCollations = True
- , strictRangeCtorPrivs = False
- , ignoreColumnOrder = False
- }
- }
- createCollMig <-
- AddedSqlMigration
- <$> ( either (error "Could not parse SQL migration") id
- <$> parseSqlMigrationIO
- "1900-01-01-00-00-00-create-coll.sql"
- ( PureStream
- $ Streaming.yield
- "CREATE COLLATION new_collation (provider = icu, locale = 'de-u-co-phonebk');"
- )
- )
- <*> pure (getIncreasingTimestamp 0)
- (laxCollHashes, strictCollHashes) <-
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [hoistAddedSqlMigration lift createCollMig])
- testConnTimeout
- (\conn ->
- (,)
- <$> readRepresentationsFromDbWithSettings
- emptyTestDbInfo
- conn
- <*> readRepresentationsFromDbWithSettings
- strictCollDbInfo
- conn
- )
- laxCollHashes `shouldNotBe` strictCollHashes
-
- aroundFreshDatabase
- $ it "Strict range constructor ownership"
- $ \emptyTestDbInfo -> do
- let strictRangeDbInfo = emptyTestDbInfo
- { schemaAlgoOpts = SchemaAlgo
- { strictCollations = False
- , strictRangeCtorPrivs = True
- , ignoreColumnOrder = False
- }
- }
- createMig <-
- AddedSqlMigration
- <$> ( either (error "Could not parse SQL migration") id
- <$> parseSqlMigrationIO
- "1900-01-01-00-00-00-create-range-and-other-function.sql"
- ( PureStream
- $ Streaming.yield
- "CREATE TYPE floatrange AS RANGE (subtype = float8,subtype_diff = float8mi); \
- \\n CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS 'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;"
- )
- )
- <*> pure (getIncreasingTimestamp 0)
- pgVersion <- withConnection
- (migsConnString emptyTestDbInfo)
- testConnTimeout
- queryServerMajorVersion
- (laxRangeHashes, strictRangeHashes) <-
- runCoddLogger $ applyMigrationsNoCheck
- emptyTestDbInfo
- (Just [hoistAddedSqlMigration lift createMig])
- testConnTimeout
- (\conn ->
- (,)
- <$> readRepresentationsFromDbWithSettings
- emptyTestDbInfo
- conn
- <*> readRepresentationsFromDbWithSettings
- strictRangeDbInfo
- conn
- )
-
- -- The time_subtype_diff function shouldn't have its representation change,
- -- but the constructors of floatrange and floatmultirange should.
- ( schemaDifferences laxRangeHashes strictRangeHashes
- <&> simplifyDiff
- )
- `shouldBe` Map.fromList
- ([ ( "schemas/public/routines/floatrange;float8,float8"
- , DBothButDifferent
- )
- , ( "schemas/public/routines/floatrange;float8,float8,text"
- , DBothButDifferent
- )
- ]
- ++ if pgVersion >= 14
- then
- [ ( "schemas/public/routines/floatmultirange;"
- , DBothButDifferent
- )
- , ( "schemas/public/routines/floatmultirange;floatrange"
- , DBothButDifferent
- )
- , ( "schemas/public/routines/floatmultirange;_floatrange"
- , DBothButDifferent
- )
- ]
- else []
- )
-
- aroundFreshDatabase
- $ it "ignore-column-order setting"
- $ \emptyTestDbInfo -> do
- createMig <-
- AddedSqlMigration
- <$> ( either (error "Could not parse SQL migration") id
- <$> parseSqlMigrationIO
- "1900-01-01-00-00-00-ignore-col-order-1.sql"
- ( PureStream
- $ Streaming.yield
- "CREATE TABLE othertbl(col2 INT PRIMARY KEY);\
- \CREATE TABLE tbl(col1 INT, col2 SERIAL PRIMARY KEY CHECK (col2 > 0) REFERENCES othertbl(col2));\
- \CREATE UNIQUE INDEX someidx ON tbl(col2);"
- )
- -- TODO: Other dependent objects like triggers, custom locales and whatnot
- )
- <*> pure (getIncreasingTimestamp 0)
- let ignColOrderDbInfo = emptyTestDbInfo
- { schemaAlgoOpts = SchemaAlgo
- { strictCollations = False
- , strictRangeCtorPrivs = False
- , ignoreColumnOrder = True
- }
- }
- initialHashes <- runCoddLogger $ applyMigrationsNoCheck
- ignColOrderDbInfo
- (Just [hoistAddedSqlMigration lift createMig])
- testConnTimeout
- (readRepresentationsFromDbWithSettings ignColOrderDbInfo)
-
- dropCol1Mig <-
- AddedSqlMigration
- <$> (either (error "Could not parse SQL migration 2") id
- <$> parseSqlMigrationIO
- "1900-01-01-00-00-01-ignore-col-order-2.sql"
- (PureStream $ Streaming.yield
- "ALTER TABLE tbl DROP COLUMN col1;"
- )
- )
- <*> pure (getIncreasingTimestamp 1)
-
- afterDropHashes <- runCoddLogger $ applyMigrationsNoCheck
- ignColOrderDbInfo
- (Just $ map (hoistAddedSqlMigration lift)
- [createMig, dropCol1Mig]
- )
- testConnTimeout
- (readRepresentationsFromDbWithSettings ignColOrderDbInfo)
-
- -- Only the removed column should have its representation file removed.
- -- The other column and all dependent objects should not change one bit.
- ( afterDropHashes
- `schemaDifferences` initialHashes
- <&> simplifyDiff
- )
- `shouldBe` Map.fromList
- [ ( "schemas/public/tables/tbl/cols/col1"
- , DExpectedButNotFound
- )
- ]
-
- aroundFreshDatabase $ it "Schema selection" $ \emptyTestDbInfo -> do
- let nonInternalSchemasDbInfo = emptyTestDbInfo
- { namespacesToCheck = AllNonInternalSchemas
- }
- publicSchemaDbInfo = emptyTestDbInfo
- { namespacesToCheck = IncludeSchemas ["public"]
- }
- emptySchemasDbInfo =
- emptyTestDbInfo { namespacesToCheck = IncludeSchemas [] }
- nonExistingAndCatalogSchemasDbInfo = emptyTestDbInfo
- { namespacesToCheck = IncludeSchemas
- ["non-existing-schema", "pg_catalog"]
- }
- getSchemaHashes dbinfo = do
- DbRep _ hashes _ <- runCoddLogger $ applyMigrationsNoCheck
- dbinfo
- Nothing
- testConnTimeout
- (readRepresentationsFromDbWithSettings dbinfo)
- pure $ Map.keys hashes
-
- -- 1. We should not see pg_catalog in any verified object, but "public"
- -- should be there as the only schema. The same for the one that includes "public"
- -- explicitly
- nonInternalSchemas <- getSchemaHashes nonInternalSchemasDbInfo
- publicSchemas <- getSchemaHashes publicSchemaDbInfo
- nonInternalSchemas `shouldBe` [ObjName "public"]
- publicSchemas `shouldBe` [ObjName "public"]
-
- -- 2. No schema hashes at all for empty list of schemas
- emptySchemas <- getSchemaHashes emptySchemasDbInfo
- emptySchemas `shouldBe` []
-
- -- 3. Non-existing schema ignored and internal pg_catalog listed
- pgCatSchemas <- getSchemaHashes nonExistingAndCatalogSchemasDbInfo
- pgCatSchemas `shouldBe` [ObjName "pg_catalog"]
-
- aroundFreshDatabase
- $ it
- "Restoring a pg_dump yields the same schema as applying original migrations"
- $ \emptyDbInfo -> do
- let connInfo = migsConnString emptyDbInfo
- pgVersion <- withConnection connInfo
- testConnTimeout
- queryServerMajorVersion
-
- -- We also use migrations of the "Accurate and reversible representation changes" test
- -- because that's the most complete set of database objects we have in our test codebase.
- -- But we append other migrations we know to be problematic to that set.
- bunchOfOtherMigs <- map (unMU . fst . hoistMU lift)
- <$> migrationsAndRepChange pgVersion
- problematicMigs <-
- map (hoistAddedSqlMigration lift)
- <$> migrationsForPgDumpRestoreTest
- let allMigs = bunchOfOtherMigs ++ problematicMigs
- expectedSchema <- runCoddLogger $ applyMigrationsNoCheck
- emptyDbInfo
- (Just allMigs)
- testConnTimeout
- (readRepresentationsFromDbWithSettings emptyDbInfo)
-
- -- Take the pg_dump and drop the database
- pg_dump_output <-
- finallyDrop (Text.pack $ connectDatabase connInfo) $ do
- (pg_dump_exitCode, pg_dump_output) <-
- readProcessStdout
- $ shell
- $ "pg_dump --create -N codd_schema -d \""
- ++ connectDatabase connInfo
- ++ "\""
- pg_dump_exitCode `shouldBe` ExitSuccess
- pure pg_dump_output
-
- -- Apply the dump with psql and check schemas match
- psqlExitCode <-
- runProcess
- $ setStdin (byteStringInput pg_dump_output)
- $ shell "psql -d postgres"
- psqlExitCode `shouldBe` ExitSuccess
- schemaAfterRestore <-
- runCoddLogger
- $ withConnection connInfo testConnTimeout
- $ readRepsFromDbWithNewTxn emptyDbInfo
- schemaAfterRestore `shouldBe` expectedSchema
-
- describe "Schema verification tests" $ do
- modifyMaxSuccess (const 3) -- This is a bit heavy on CI but this test is too important
- $ aroundFreshDatabase
- $ it "Accurate and reversible representation changes"
- $ \emptyDbInfo2 -> property $ \(NumMigsToReverse num) -> do
- let emptyDbInfo = emptyDbInfo2
- { namespacesToCheck = IncludeSchemas
- ["public", "codd-extra-mapped-schema"]
- }
- connInfo = migsConnString emptyDbInfo
- getHashes sett = runCoddLogger $ withConnection
- connInfo
- testConnTimeout
- (readRepsFromDbWithNewTxn sett)
- pgVersion <- withConnection connInfo
- testConnTimeout
- queryServerMajorVersion
- allMigsAndExpectedChanges <- map (hoistMU lift)
- <$> migrationsAndRepChange pgVersion
- hashBeforeEverything <- getHashes emptyDbInfo
- (_, AccumChanges applyHistory, hashesAndUndo :: [ ( DbRep
- , Maybe Text
- )
- ]) <-
- forwardApplyMigs 0
- hashBeforeEverything
- emptyDbInfo
- allMigsAndExpectedChanges
- let
- hashAfterAllMigs = maybe hashBeforeEverything
- snd
- (lastMaybe applyHistory)
-
- -- 1. After applying all migrations, let's make sure we're actually applying
- -- them by fetching some data..
- ensureMarceloExists connInfo
-
- -- 2. Now undo part or all of the SQL migrations and check that
- -- schemas match each step of the way in reverse!
- liftIO
- $ putStrLn
- $ "Undoing migrations in reverse. Num = "
- <> show num
- hashesAfterEachUndo <-
- forM (take num $ reverse hashesAndUndo)
- $ \(expectedHashesAfterUndo, mUndoSql) ->
- case mUndoSql of
- Nothing -> pure expectedHashesAfterUndo
- Just undoSql -> do
- void
- $ runCoddLogger
- $ withConnection
- connInfo
- testConnTimeout
- $ \conn ->
- Streaming.effects
- $ multiQueryStatement_
- conn
- $ (\(WellParsedSql sqlStream) ->
- sqlStream
- )
- (mkValidSql
- undoSql
- )
- hashesAfterUndo <- getHashes
- emptyDbInfo
- let
- diff = schemaDifferences
- hashesAfterUndo
- expectedHashesAfterUndo
- (undoSql, diff)
- `shouldBe` (undoSql, Map.empty)
- -- What follows is just a sanity check
- (undoSql, hashesAfterUndo)
- `shouldBe` ( undoSql
- , expectedHashesAfterUndo
- )
- pure hashesAfterUndo
- void $ withConnection
- connInfo
- testConnTimeout
- (\conn -> DB.execute
- conn
- "WITH reversedMigs (name) AS (SELECT name FROM codd_schema.sql_migrations ORDER BY migration_timestamp DESC LIMIT ?) DELETE FROM codd_schema.sql_migrations USING reversedMigs WHERE reversedMigs.name=sql_migrations.name"
- (DB.Only num)
- )
-
- let
- hashesAfterUndo = fromMaybe
- hashAfterAllMigs
- (lastMaybe hashesAfterEachUndo)
-
- -- 3. Finally, reapply every migration that was reversed.
- -- This may look unnecessary, but if a created object (e.g. a column) is not dropped
- -- on the way forward, we might not get a change to the effects of re-adding it,
- -- which manifests as a dead but unusable `attnum` in pg_attribute.
- liftIO $ putStrLn "Re-applying migrations"
- void $ forwardApplyMigs
- (length allMigsAndExpectedChanges - num)
- hashesAfterUndo
- emptyDbInfo
- allMigsAndExpectedChanges
- ensureMarceloExists connInfo
- where
- forwardApplyMigs numMigsAlreadyApplied hashBeforeEverything dbInfo allMigsAndExpectedChanges
- = foldM
- (\(hashSoFar, AccumChanges appliedMigsAndCksums, hundo) (MU nextMig undoSql, expectedChanges) ->
- do
- let appliedMigs = map (fst . fst) appliedMigsAndCksums
- newMigs = appliedMigs ++ [nextMig]
- dbHashesAfterMig <- runCoddLogger $ applyMigrationsNoCheck
- dbInfo
- (Just newMigs)
- testConnTimeout
- (readRepresentationsFromDbWithSettings dbInfo)
- -- migText <- parsedSqlText <$> migrationSql (addedSqlMig nextMig)
- let diff = schemaDifferences hashSoFar dbHashesAfterMig
- case expectedChanges of
- ChangeEq c -> do
- (simplifyDiff <$> diff) `shouldBe` Map.fromList c
- -- The check below is just a safety net in case "schemaDifferences" has a problem in its implementation
- if null c
- then hashSoFar `shouldBe` dbHashesAfterMig
- else hashSoFar `shouldNotBe` dbHashesAfterMig
- SomeChange -> do
- diff `shouldNotBe` Map.empty
- -- The check below is just a safety net in case "schemaDifferences" has a problem in its implementation
- hashSoFar `shouldNotBe` dbHashesAfterMig
-
- return
- ( dbHashesAfterMig
- , AccumChanges
- $ appliedMigsAndCksums
- ++ [((nextMig, expectedChanges), dbHashesAfterMig)]
- , hundo ++ [(hashSoFar, undoSql)]
+ let strictCollDbInfo =
+ emptyTestDbInfo
+ { schemaAlgoOpts =
+ SchemaAlgo
+ { strictCollations = True,
+ strictRangeCtorPrivs = False,
+ ignoreColumnOrder = False
+ }
+ }
+ createCollMig <-
+ AddedSqlMigration
+ <$> ( either (error "Could not parse SQL migration") id
+ <$> parseSqlMigrationIO
+ "1900-01-01-00-00-00-create-coll.sql"
+ ( PureStream $
+ Streaming.yield
+ "CREATE COLLATION new_collation (provider = icu, locale = 'de-u-co-phonebk');"
+ )
+ )
+ <*> pure (getIncreasingTimestamp 0)
+ (laxCollHashes, strictCollHashes) <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [hoistAddedSqlMigration lift createCollMig])
+ testConnTimeout
+ ( \conn ->
+ (,)
+ <$> readRepresentationsFromDbWithSettings
+ emptyTestDbInfo
+ conn
+ <*> readRepresentationsFromDbWithSettings
+ strictCollDbInfo
+ conn
+ )
+ laxCollHashes `shouldNotBe` strictCollHashes
+
+ aroundFreshDatabase $
+ it "Strict range constructor ownership" $
+ \emptyTestDbInfo -> do
+ let strictRangeDbInfo =
+ emptyTestDbInfo
+ { schemaAlgoOpts =
+ SchemaAlgo
+ { strictCollations = False,
+ strictRangeCtorPrivs = True,
+ ignoreColumnOrder = False
+ }
+ }
+ createMig <-
+ AddedSqlMigration
+ <$> ( either (error "Could not parse SQL migration") id
+ <$> parseSqlMigrationIO
+ "1900-01-01-00-00-00-create-range-and-other-function.sql"
+ ( PureStream $
+ Streaming.yield
+ "CREATE TYPE floatrange AS RANGE (subtype = float8,subtype_diff = float8mi); \
+ \\n CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS 'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE;"
)
+ )
+ <*> pure (getIncreasingTimestamp 0)
+ pgVersion <-
+ withConnection
+ (migsConnString emptyTestDbInfo)
+ testConnTimeout
+ queryServerMajorVersion
+ (laxRangeHashes, strictRangeHashes) <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyTestDbInfo
+ (Just [hoistAddedSqlMigration lift createMig])
+ testConnTimeout
+ ( \conn ->
+ (,)
+ <$> readRepresentationsFromDbWithSettings
+ emptyTestDbInfo
+ conn
+ <*> readRepresentationsFromDbWithSettings
+ strictRangeDbInfo
+ conn
+ )
+
+ -- The time_subtype_diff function shouldn't have its representation change,
+ -- but the constructors of floatrange and floatmultirange should.
+ ( schemaDifferences laxRangeHashes strictRangeHashes
+ <&> simplifyDiff
)
- (hashBeforeEverything, AccumChanges [], [])
- (drop numMigsAlreadyApplied allMigsAndExpectedChanges)
- ensureMarceloExists connInfo =
- withConnection
- connInfo
+ `shouldBe` Map.fromList
+ ( [ ( "schemas/public/routines/floatrange;float8,float8",
+ DBothButDifferent
+ ),
+ ( "schemas/public/routines/floatrange;float8,float8,text",
+ DBothButDifferent
+ )
+ ]
+ ++ if pgVersion >= 14
+ then
+ [ ( "schemas/public/routines/floatmultirange;",
+ DBothButDifferent
+ ),
+ ( "schemas/public/routines/floatmultirange;floatrange",
+ DBothButDifferent
+ ),
+ ( "schemas/public/routines/floatmultirange;_floatrange",
+ DBothButDifferent
+ )
+ ]
+ else []
+ )
+
+ aroundFreshDatabase $
+ it "ignore-column-order setting" $
+ \emptyTestDbInfo -> do
+ createMig <-
+ AddedSqlMigration
+ <$> ( either (error "Could not parse SQL migration") id
+ <$> parseSqlMigrationIO
+ "1900-01-01-00-00-00-ignore-col-order-1.sql"
+ ( PureStream $
+ Streaming.yield
+ "CREATE TABLE othertbl(col2 INT PRIMARY KEY);\
+ \CREATE TABLE tbl(col1 INT, col2 SERIAL PRIMARY KEY CHECK (col2 > 0) REFERENCES othertbl(col2));\
+ \CREATE UNIQUE INDEX someidx ON tbl(col2);"
+ )
+ -- TODO: Other dependent objects like triggers, custom locales and whatnot
+ )
+ <*> pure (getIncreasingTimestamp 0)
+ let ignColOrderDbInfo =
+ emptyTestDbInfo
+ { schemaAlgoOpts =
+ SchemaAlgo
+ { strictCollations = False,
+ strictRangeCtorPrivs = False,
+ ignoreColumnOrder = True
+ }
+ }
+ initialHashes <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ ignColOrderDbInfo
+ (Just [hoistAddedSqlMigration lift createMig])
testConnTimeout
- (\conn -> unsafeQuery1
- conn
- "SELECT COUNT(*) employee_name FROM employee WHERE employee_name='Marcelo'"
- ()
+ (readRepresentationsFromDbWithSettings ignColOrderDbInfo)
+
+ dropCol1Mig <-
+ AddedSqlMigration
+ <$> ( either (error "Could not parse SQL migration 2") id
+ <$> parseSqlMigrationIO
+ "1900-01-01-00-00-01-ignore-col-order-2.sql"
+ ( PureStream $
+ Streaming.yield
+ "ALTER TABLE tbl DROP COLUMN col1;"
+ )
+ )
+ <*> pure (getIncreasingTimestamp 1)
+
+ afterDropHashes <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ ignColOrderDbInfo
+ ( Just $
+ map
+ (hoistAddedSqlMigration lift)
+ [createMig, dropCol1Mig]
)
- `shouldReturn` DB.Only (1 :: Int)
+ testConnTimeout
+ (readRepresentationsFromDbWithSettings ignColOrderDbInfo)
+
+ -- Only the removed column should have its representation file removed.
+ -- The other column and all dependent objects should not change one bit.
+ ( afterDropHashes
+ `schemaDifferences` initialHashes
+ <&> simplifyDiff
+ )
+ `shouldBe` Map.fromList
+ [ ( "schemas/public/tables/tbl/cols/col1",
+ DExpectedButNotFound
+ )
+ ]
+
+ aroundFreshDatabase $ it "Schema selection" $ \emptyTestDbInfo -> do
+ let nonInternalSchemasDbInfo =
+ emptyTestDbInfo
+ { namespacesToCheck = AllNonInternalSchemas
+ }
+ publicSchemaDbInfo =
+ emptyTestDbInfo
+ { namespacesToCheck = IncludeSchemas ["public"]
+ }
+ emptySchemasDbInfo =
+ emptyTestDbInfo {namespacesToCheck = IncludeSchemas []}
+ nonExistingAndCatalogSchemasDbInfo =
+ emptyTestDbInfo
+ { namespacesToCheck =
+ IncludeSchemas
+ ["non-existing-schema", "pg_catalog"]
+ }
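+ -- Applies no migrations; returns only the names of the schemas that were hashed.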
+ getSchemaHashes dbinfo = do
+ DbRep _ hashes _ <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ dbinfo
+ Nothing
+ testConnTimeout
+ (readRepresentationsFromDbWithSettings dbinfo)
+ pure $ Map.keys hashes
+
+ -- 1. We should not see pg_catalog in any verified object, but "public"
+ -- should be there as the only schema. The same goes for the config that
+ -- includes "public" explicitly.
+ nonInternalSchemas <- getSchemaHashes nonInternalSchemasDbInfo
+ publicSchemas <- getSchemaHashes publicSchemaDbInfo
+ nonInternalSchemas `shouldBe` [ObjName "public"]
+ publicSchemas `shouldBe` [ObjName "public"]
+
+ -- 2. No schema hashes at all for empty list of schemas
+ emptySchemas <- getSchemaHashes emptySchemasDbInfo
+ emptySchemas `shouldBe` []
+
+ -- 3. Non-existing schema ignored and internal pg_catalog listed
+ pgCatSchemas <- getSchemaHashes nonExistingAndCatalogSchemasDbInfo
+ pgCatSchemas `shouldBe` [ObjName "pg_catalog"]
+
+ aroundFreshDatabase
+ $ it
+ "Restoring a pg_dump yields the same schema as applying original migrations"
+ $ \emptyDbInfo -> do
+ let connInfo = migsConnString emptyDbInfo
+ pgVersion <-
+ withConnection
+ connInfo
+ testConnTimeout
+ queryServerMajorVersion
+
+ -- We also use migrations of the "Accurate and reversible representation changes" test
+ -- because that's the most complete set of database objects we have in our test codebase.
+ -- But we append other migrations we know to be problematic to that set.
+ bunchOfOtherMigs <-
+ map (unMU . fst . hoistMU lift)
+ <$> migrationsAndRepChange pgVersion
+ problematicMigs <-
+ map (hoistAddedSqlMigration lift)
+ <$> migrationsForPgDumpRestoreTest
+ let allMigs = bunchOfOtherMigs ++ problematicMigs
+ expectedSchema <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ emptyDbInfo
+ (Just allMigs)
+ testConnTimeout
+ (readRepresentationsFromDbWithSettings emptyDbInfo)
+
+ -- Take the pg_dump and drop the database
+ pg_dump_output <-
+ finallyDrop (Text.pack $ connectDatabase connInfo) $ do
+ (pg_dump_exitCode, pg_dump_output) <-
+ readProcessStdout $
+ shell $
+ "pg_dump --create -d \""
+ ++ connectDatabase connInfo
+ ++ "\""
+ pg_dump_exitCode `shouldBe` ExitSuccess
+ pure pg_dump_output
+
+ -- Apply the dump with psql and check schemas match
+ psqlExitCode <-
+ runProcess $
+ setStdin (byteStringInput pg_dump_output) $
+ shell "psql -d postgres"
+ psqlExitCode `shouldBe` ExitSuccess
+ schemaAfterRestore <-
+ runCoddLogger $
+ withConnection connInfo testConnTimeout $
+ readRepsFromDbWithNewTxn emptyDbInfo
+ schemaAfterRestore `shouldBe` expectedSchema
+
+ describe "Schema verification tests" $ do
+ modifyMaxSuccess (const 3) $ -- This is a bit heavy on CI but this test is too important
+ aroundFreshDatabase $
+ it "Accurate and reversible representation changes" $
+ \emptyDbInfo2 -> property $ \(NumMigsToReverse num) -> do
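+ -- Apply every migration checking its expected changes, undo the last
+ -- "num" of them, then re-apply them, verifying schema representations
+ -- at each step.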
+ let emptyDbInfo =
+ emptyDbInfo2
+ { namespacesToCheck =
+ IncludeSchemas
+ ["public", "codd-extra-mapped-schema"]
+ }
+ connInfo = migsConnString emptyDbInfo
+ getHashes sett =
+ runCoddLogger $
+ withConnection
+ connInfo
+ testConnTimeout
+ (readRepsFromDbWithNewTxn sett)
+ pgVersion <-
+ withConnection
+ connInfo
+ testConnTimeout
+ queryServerMajorVersion
+ allMigsAndExpectedChanges <-
+ map (hoistMU lift)
+ <$> migrationsAndRepChange pgVersion
+ hashBeforeEverything <- getHashes emptyDbInfo
+ ( _,
+ AccumChanges applyHistory,
+ hashesAndUndo ::
+ [ ( DbRep,
+ Maybe Text
+ )
+ ]
+ ) <-
+ forwardApplyMigs
+ 0
+ hashBeforeEverything
+ emptyDbInfo
+ allMigsAndExpectedChanges
+ let hashAfterAllMigs =
+ maybe
+ hashBeforeEverything
+ snd
+ (lastMaybe applyHistory)
+
+ -- 1. After applying all migrations, let's make sure we're actually applying
+ -- them by fetching some data.
+ ensureMarceloExists connInfo
+
+ -- 2. Now undo part or all of the SQL migrations and check that
+ -- schemas match each step of the way in reverse!
+ liftIO $
+ putStrLn $
+ "Undoing migrations in reverse. Num = "
+ <> show num
+ hashesAfterEachUndo <-
+ forM (take num $ reverse hashesAndUndo) $
+ \(expectedHashesAfterUndo, mUndoSql) ->
+ case mUndoSql of
+ Nothing -> pure expectedHashesAfterUndo
+ Just undoSql -> do
+ void
+ $ runCoddLogger
+ $ withConnection
+ connInfo
+ testConnTimeout
+ $ \conn ->
+ Streaming.effects
+ $ multiQueryStatement_
+ conn
+ $ ( \(WellParsedSql sqlStream) ->
+ sqlStream
+ )
+ ( mkValidSql
+ undoSql
+ )
+ hashesAfterUndo <-
+ getHashes
+ emptyDbInfo
+ let diff =
+ schemaDifferences
+ hashesAfterUndo
+ expectedHashesAfterUndo
+ (undoSql, diff)
+ `shouldBe` (undoSql, Map.empty)
+ -- What follows is just a sanity check
+ (undoSql, hashesAfterUndo)
+ `shouldBe` ( undoSql,
+ expectedHashesAfterUndo
+ )
+ pure hashesAfterUndo
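+ -- Unregister the reversed migrations from codd's bookkeeping table so
+ -- they can be re-applied below.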
+ void $
+ withConnection
+ connInfo
+ testConnTimeout
+ ( \conn ->
+ DB.execute
+ conn
+ "WITH reversedMigs (name) AS (SELECT name FROM codd_schema.sql_migrations ORDER BY migration_timestamp DESC LIMIT ?) DELETE FROM codd_schema.sql_migrations USING reversedMigs WHERE reversedMigs.name=sql_migrations.name"
+ (DB.Only num)
+ )
+
+ let hashesAfterUndo =
+ fromMaybe
+ hashAfterAllMigs
+ (lastMaybe hashesAfterEachUndo)
+
+ -- 3. Finally, reapply every migration that was reversed.
+ -- This may look unnecessary, but if a created object (e.g. a column) is not dropped
+ -- on the way forward, we might not get a chance to test the effects of re-adding it,
+ -- which manifests as a dead but unusable `attnum` in pg_attribute.
+ liftIO $ putStrLn "Re-applying migrations"
+ void $
+ forwardApplyMigs
+ (length allMigsAndExpectedChanges - num)
+ hashesAfterUndo
+ emptyDbInfo
+ allMigsAndExpectedChanges
+ ensureMarceloExists connInfo
+ where
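+ -- Applies each pending migration in order, asserting after every one that
+ -- the schema diff matches the expected changes, and accumulates each
+ -- migration's pre-application hashes together with its undo SQL.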
+ forwardApplyMigs numMigsAlreadyApplied hashBeforeEverything dbInfo allMigsAndExpectedChanges =
+ foldM
+ ( \(hashSoFar, AccumChanges appliedMigsAndCksums, hundo) (MU nextMig undoSql, expectedChanges) ->
+ do
+ let appliedMigs = map (fst . fst) appliedMigsAndCksums
+ newMigs = appliedMigs ++ [nextMig]
+ dbHashesAfterMig <-
+ runCoddLogger $
+ applyMigrationsNoCheck
+ dbInfo
+ (Just newMigs)
+ testConnTimeout
+ (readRepresentationsFromDbWithSettings dbInfo)
+ -- migText <- parsedSqlText <$> migrationSql (addedSqlMig nextMig)
+ let diff = schemaDifferences hashSoFar dbHashesAfterMig
+ case expectedChanges of
+ ChangeEq c -> do
+ (simplifyDiff <$> diff) `shouldBe` Map.fromList c
+ -- The check below is just a safety net in case "schemaDifferences" has a problem in its implementation
+ if null c
+ then hashSoFar `shouldBe` dbHashesAfterMig
+ else hashSoFar `shouldNotBe` dbHashesAfterMig
+ SomeChange -> do
+ diff `shouldNotBe` Map.empty
+ -- The check below is just a safety net in case "schemaDifferences" has a problem in its implementation
+ hashSoFar `shouldNotBe` dbHashesAfterMig
+
+ return
+ ( dbHashesAfterMig,
+ AccumChanges $
+ appliedMigsAndCksums
+ ++ [((nextMig, expectedChanges), dbHashesAfterMig)],
+ hundo ++ [(hashSoFar, undoSql)]
+ )
+ )
+ (hashBeforeEverything, AccumChanges [], [])
+ (drop numMigsAlreadyApplied allMigsAndExpectedChanges)
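+ -- The CRUD migration inserts employee "Marcelo"; finding exactly one such
+ -- row proves the migrations really ran.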
+ ensureMarceloExists connInfo =
+ withConnection
+ connInfo
+ testConnTimeout
+ ( \conn ->
+ unsafeQuery1
+ conn
+ "SELECT COUNT(*) employee_name FROM employee WHERE employee_name='Marcelo'"
+ ()
+ )
+ `shouldReturn` DB.Only (1 :: Int)