feat: multi-stage-output/new csv flags #1110

Merged
merged 28 commits into main from cd/mso-new-flags on Nov 26, 2024

Commits (28)
5e2329f
fix: cache refactor, remove unused code/props
cristiand391 Oct 31, 2024
2b99678
chore: big refactor
cristiand391 Oct 31, 2024
2049e22
test: nuke UTs
cristiand391 Oct 31, 2024
063d399
fix: keep cache entry
cristiand391 Oct 31, 2024
6eeb736
chore: more refactoring
cristiand391 Nov 1, 2024
7c9ce9b
fix: bring back verbose table
cristiand391 Nov 5, 2024
ac63a04
test: update HRO assertion
cristiand391 Nov 5, 2024
28f0da1
fix: bulk delete can return JSON failures
cristiand391 Nov 5, 2024
41e3b32
fix: match regex for table
cristiand391 Nov 5, 2024
6af6b5b
fix: bulk upsert return json failures + NUT
cristiand391 Nov 5, 2024
4138648
chore: remove unused code/refactor
cristiand391 Nov 6, 2024
07c944a
fix: handle hardDelete missing perm error
cristiand391 Nov 6, 2024
3f72588
chore: address final TODOs
cristiand391 Nov 6, 2024
3244aba
chore: code review
cristiand391 Nov 14, 2024
b323cc3
chore: update msg
cristiand391 Nov 14, 2024
63d887a
chore: remove column-delimiter for bulk delete
cristiand391 Nov 14, 2024
3c621b1
test: nuts use new flags
cristiand391 Nov 14, 2024
79c7aa0
fix: handle quoted fields when detecting column delimiter
cristiand391 Nov 15, 2024
803b7ac
chore: removed dead code
cristiand391 Nov 15, 2024
567bdf4
chore: refactor
cristiand391 Nov 15, 2024
cc9a20b
fix: remove ref to deleted md
cristiand391 Nov 15, 2024
f6e10b6
test: add import -> export -> import nut
cristiand391 Nov 18, 2024
7ef8d0e
fix: throw if job didn't process any record
cristiand391 Nov 25, 2024
e52d2a4
fix: pass err action tokens
cristiand391 Nov 26, 2024
4db440e
fix: restore support for resuming non-local jobs
cristiand391 Nov 26, 2024
6213764
fix: dont get records if job is in progress
cristiand391 Nov 26, 2024
95022b1
fix: bump jsforce-node
cristiand391 Nov 26, 2024
48cc145
Merge remote-tracking branch 'origin/main' into cd/mso-new-flags
cristiand391 Nov 26, 2024
3 changes: 3 additions & 0 deletions command-snapshot.json
@@ -45,6 +45,7 @@
"flags-dir",
"hard-delete",
"json",
"line-ending",
"loglevel",
"sobject",
"target-org",
@@ -309,10 +310,12 @@
"flags": [
"api-version",
"async",
"column-delimiter",
"external-id",
"file",
"flags-dir",
"json",
"line-ending",
"loglevel",
"sobject",
"target-org",
23 changes: 0 additions & 23 deletions messages/bulk.operation.command.md

This file was deleted.

15 changes: 0 additions & 15 deletions messages/bulk.resume.command.md

This file was deleted.

55 changes: 54 additions & 1 deletion messages/bulkIngest.md
@@ -37,6 +37,59 @@ Job has been aborted.
- Get the job results by running: "sf data bulk results -o %s --job-id %s".
- View the job in the org: "sf org open -o %s --path '/lightning/setup/AsyncApiJobStatus/page?address=%2F%s'".

# error.hardDeletePermission

You must have the "Bulk API Hard Delete" system permission to use the --hard-delete flag. This permission is disabled by default and can be enabled only by a system administrator.

# error.noProcessedRecords

Job finished successfully but it didn't process any record.

# error.noProcessedRecords.actions

- Check that the provided CSV file is valid.
- View the job in the org: "sf org open -o %s --path '/lightning/setup/AsyncApiJobStatus/page?address=%2F%s'".

# flags.column-delimiter.summary

Column delimiter used in the CSV file. Default is COMMA.
Column delimiter used in the CSV file.

# flags.line-ending.summary

Line ending used in the CSV file. Default value on Windows is `CRLF`; on macOS and Linux it's `LF`.

# flags.sobject.summary

API name of the Salesforce object, either standard or custom, that you want to update or delete records from.

# flags.csvfile.summary

CSV file that contains the IDs of the records to update or delete.

# flags.wait.summary

Number of minutes to wait for the command to complete before displaying the results.

# flags.async.summary

Run the command asynchronously.

# flags.verbose.summary

Print verbose output of failed records if result is available.

# flags.jobid

ID of the job you want to resume.

# flags.useMostRecent.summary

Use the ID of the most recently-run bulk job.

# flags.targetOrg.summary

Username or alias of the target org. Not required if the "target-org" configuration variable is already set.

# flags.wait.summary

Number of minutes to wait for the command to complete before displaying the results.
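
These shared summaries back the new `--column-delimiter` and `--line-ending` flags added to command-snapshot.json above (the delete entry gains only `--line-ending`; the upsert-style entry gains both). The commit "fix: handle quoted fields when detecting column delimiter" suggests the delimiter is also auto-detected when the flag is omitted; the sketch below only illustrates that idea — the function name, the heuristic, and the fallback to COMMA are assumptions, not the plugin's actual implementation, and the delimiter names mirror the usual Bulk API 2.0 values.

```ts
// Hypothetical sketch: guess the column delimiter from a CSV header line while
// ignoring delimiter characters that appear inside quoted fields.
const DELIMITERS = {
  COMMA: ',',
  SEMICOLON: ';',
  PIPE: '|',
  TAB: '\t',
  CARET: '^',
  BACKQUOTE: '`',
} as const;

type DelimiterName = keyof typeof DELIMITERS;

export function detectDelimiter(headerLine: string): DelimiterName {
  const counts = new Map<DelimiterName, number>();
  let inQuotes = false;
  for (const char of headerLine) {
    if (char === '"') {
      inQuotes = !inQuotes; // toggle at quote boundaries; good enough for a sketch
      continue;
    }
    if (inQuotes) continue; // a comma inside "Last, First" shouldn't count
    for (const [name, separator] of Object.entries(DELIMITERS) as Array<[DelimiterName, string]>) {
      if (char === separator) counts.set(name, (counts.get(name) ?? 0) + 1);
    }
  }
  // Pick the most frequent candidate; fall back to COMMA, the documented default.
  let best: DelimiterName = 'COMMA';
  let bestCount = 0;
  for (const [name, count] of counts) {
    if (count > bestCount) {
      best = name;
      bestCount = count;
    }
  }
  return best;
}

// detectDelimiter('"Last, First";Email;Phone') === 'SEMICOLON'
```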
4 changes: 0 additions & 4 deletions messages/data.import.bulk.md
@@ -41,7 +41,3 @@ Time to wait for the command to finish, in minutes.
# flags.line-ending.summary

Line ending used in the CSV file. Default value on Windows is `CRLF`; on macOS and Linux it's `LF`.

# flags.column-delimiter.summary

Column delimiter used in the CSV file. Default is COMMA.
8 changes: 0 additions & 8 deletions messages/data.update.bulk.md
@@ -37,11 +37,3 @@ CSV file that contains the Salesforce object records you want to update.
# flags.sobject.summary

API name of the Salesforce object, either standard or custom, which you are updating.

# flags.line-ending.summary

Line ending used in the CSV file. Default value on Windows is `CRLF`; on macOS and Linux it's `LF`.

# flags.column-delimiter.summary

Column delimiter used in the CSV file. Default is COMMA.
33 changes: 2 additions & 31 deletions messages/messages.md
@@ -1,28 +1,3 @@
# success

Bulk %s request %s started successfully.

# checkStatus

Run the command "sf data %s resume -i %s -o %s" to check the status.

# checkJobViaUi

To review the details of this job, run:
sf org open --target-org %s --path "/lightning/setup/AsyncApiJobStatus/page?address=%2F%s"

# remainingTimeStatus

Remaining time: %d minutes.

# remainingRecordsStatus

Processed %d | Success %d | Fail %d

# bulkJobFailed

The bulk job %s failed. Check the job status for more information.

# perfLogLevelOption

Get API performance data.
@@ -48,10 +23,6 @@ Malformed key=value pair for value: %s.

Format to display the results; the --json flag overrides this flag.

# bulkRequestIdRequiredWhenNotUsingMostRecent

The bulk request id must be supplied when not looking for most recent cache entry.

# error.bulkRequestIdNotFound

Could not find a cache entry for job ID %s.
@@ -60,9 +31,9 @@ Could not find a cache entry for job ID %s.

Could not load a most recent cache entry for a bulk request. Please rerun your command with a bulk request id.

# cannotCreateResumeOptionsWithoutAnOrg
# error.skipCacheValidateNoOrg

Cannot create a cache entry without a valid org.
A default target org for the job %s is required to be set because the job isn't in the local cache.

# usernameRequired

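The renamed error.skipCacheValidateNoOrg entry takes the job ID as a %s token, in line with the "fix: pass err action tokens" commit. As a reminder of how these markdown bundles are consumed, here is a minimal sketch using the same @salesforce/core Messages API the diff relies on — the package name ('@salesforce/plugin-data') and the wrapper function are assumptions for illustration:

```ts
import { Messages } from '@salesforce/core';

// Same loading pattern the diff uses in src/bulkDataRequestCache.ts.
Messages.importMessagesDirectoryFromMetaUrl(import.meta.url);
const messages = Messages.loadMessages('@salesforce/plugin-data', 'messages');

// Each %s in the markdown entry is filled from the token array, in order.
export function assertJobInCacheOrOrg(jobId: string, hasDefaultOrg: boolean): void {
  if (!hasDefaultOrg) {
    throw messages.createError('error.skipCacheValidateNoOrg', [jobId]);
  }
}
```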
3 changes: 2 additions & 1 deletion package.json
@@ -118,7 +118,8 @@
"version": "oclif readme"
},
"dependencies": {
"@jsforce/jsforce-node": "^3.6.2",
"@jsforce/jsforce-node": "^3.6.3",
"@oclif/multi-stage-output": "^0.7.5",
"@oclif/multi-stage-output": "^0.7.12",
"@salesforce/core": "^8.6.1",
"@salesforce/kit": "^3.2.2",
111 changes: 50 additions & 61 deletions src/bulkDataRequestCache.ts
@@ -5,9 +5,9 @@
* For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
*/

import { TTLConfig, Global, Logger, Messages, Org } from '@salesforce/core';
import { TTLConfig, Global, Logger, Messages, Org, ConfigAggregator, OrgConfigProperties } from '@salesforce/core';
import { Duration } from '@salesforce/kit';
import type { ResumeBulkExportOptions, ResumeBulkImportOptions, ResumeOptions } from './types.js';
import type { ResumeBulkExportOptions, ResumeBulkImportOptions } from './types.js';
import { ColumnDelimiterKeys } from './bulkUtils.js';

Messages.importMessagesDirectoryFromMetaUrl(import.meta.url);
@@ -64,61 +64,58 @@ export abstract class BulkDataRequestCache extends TTLConfig<TTLConfig.Options,
Logger.childFromRoot('DataRequestCache').debug(`bulk cache saved for ${bulkRequestId}`);
}

/**
* Resolve entries from the local cache.
*
* @param jobIdOrMostRecent job ID or boolean value to decide if it should return the most recent entry in the cache.
* @param skipCacheValidatation make this method not throw if you passed a job ID that's not in the cache
* This was only added for `data upsert/delete resume` for backwards compatibility and will be removed after March 2025.
*/
public async resolveResumeOptionsFromCache(
bulkJobId: string | undefined,
useMostRecent: boolean,
org: Org | undefined,
apiVersion: string | undefined
): Promise<ResumeOptions> {
if (!useMostRecent && !bulkJobId) {
throw messages.createError('bulkRequestIdRequiredWhenNotUsingMostRecent');
}
const resumeOptions = {
operation: 'query',
query: '',
pollingOptions: { pollTimeout: 0, pollInterval: 0 },
} satisfies Pick<ResumeOptions['options'], 'operation' | 'query' | 'pollingOptions'>;

if (useMostRecent) {
jobIdOrMostRecent: string | boolean,
skipCacheValidatation = false
): Promise<ResumeBulkImportOptions> {
if (typeof jobIdOrMostRecent === 'boolean') {
const key = this.getLatestKey();
if (key) {
// key definitely exists because it came from the cache
const entry = this.get(key);

return {
jobInfo: { id: entry.jobId },
options: {
...resumeOptions,
connection: (await Org.create({ aliasOrUsername: entry.username })).getConnection(apiVersion),
},
};
}
}
if (bulkJobId) {
const entry = this.get(bulkJobId);
if (entry) {
return {
jobInfo: { id: entry.jobId },
options: {
...resumeOptions,
connection: (await Org.create({ aliasOrUsername: entry.username })).getConnection(apiVersion),
},
};
} else if (org) {
return {
jobInfo: { id: bulkJobId },
options: {
...resumeOptions,
connection: org.getConnection(apiVersion),
},
};
} else {
throw messages.createError('cannotCreateResumeOptionsWithoutAnOrg');
if (!key) {
throw messages.createError('error.missingCacheEntryError');
}
} else if (useMostRecent) {
throw messages.createError('error.missingCacheEntryError');
// key definitely exists because it came from the cache
const entry = this.get(key);

return {
jobInfo: { id: entry.jobId },
options: {
connection: (await Org.create({ aliasOrUsername: entry.username })).getConnection(),
},
};
} else {
throw messages.createError('bulkRequestIdRequiredWhenNotUsingMostRecent');
const entry = this.get(jobIdOrMostRecent);
if (!entry) {
if (skipCacheValidatation) {
const config = await ConfigAggregator.create();
const aliasOrUsername = config.getInfo(OrgConfigProperties.TARGET_ORG)?.value as string;
if (!aliasOrUsername) {
throw messages.createError('error.skipCacheValidateNoOrg', [jobIdOrMostRecent]);
}

return {
jobInfo: { id: jobIdOrMostRecent },
options: {
connection: (await Org.create({ aliasOrUsername })).getConnection(),
},
};
}

throw messages.createError('error.bulkRequestIdNotFound', [jobIdOrMostRecent]);
}

return {
jobInfo: { id: entry.jobId },
options: {
connection: (await Org.create({ aliasOrUsername: entry.username })).getConnection(),
},
};
Review comment from cristiand391 (Member, Author):
Simplified resolveResumeOptionsFromCache so that it matches data import/export's cache resolver.
The previous implementation had a few unused properties and one (undocumented?) behavior:
passing a job ID that didn't exist in the cache wouldn't cause an error; the resolver would return it as if it had been found in the local cache.
Added code for the specific bulk commands that rely on this to avoid breaking changes.

}
}
}
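
To make the simplification concrete, here is a hedged sketch of how a resume command might call the new resolver. Only the abstract BulkDataRequestCache and the resolveResumeOptionsFromCache signature come from this diff; the concrete cache instance, flag names, and surrounding command code are assumptions:

```ts
import { BulkDataRequestCache } from './bulkDataRequestCache.js';

// `cache` stands in for a concrete subclass instance (only the abstract class
// appears in this diff); `flags` mirrors the --job-id / --use-most-recent flags.
declare const cache: BulkDataRequestCache;
declare const flags: { 'job-id'?: string; 'use-most-recent': boolean };

// Passing `true` resolves the most recently cached job.
const fromLatest = await cache.resolveResumeOptionsFromCache(true);

// Passing a job ID resolves that cache entry; the second argument (skipCacheValidatation)
// keeps `data upsert/delete resume` from throwing when the ID isn't cached locally and
// falls back to the default target org instead, as the comment above explains.
const fromId = await cache.resolveResumeOptionsFromCache(flags['job-id'] ?? '', true);

console.log(fromLatest.jobInfo.id, fromId.options.connection.getUsername());
```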
@@ -377,12 +374,6 @@ export class BulkExportRequestCache extends TTLConfig<TTLConfig.Options, BulkExp
jobIdOrMostRecent: string | boolean,
apiVersion: string | undefined
): Promise<ResumeBulkExportOptions> {
const resumeOptionsOptions = {
operation: 'query',
query: '',
pollingOptions: { pollTimeout: 0, pollInterval: 0 },
} satisfies Pick<ResumeOptions['options'], 'operation' | 'query' | 'pollingOptions'>;

if (typeof jobIdOrMostRecent === 'boolean') {
const key = this.getLatestKey();
if (!key) {
@@ -399,7 +390,6 @@
columnDelimiter: entry.outputInfo.columnDelimiter,
},
options: {
...resumeOptionsOptions,
connection: (await Org.create({ aliasOrUsername: entry.username })).getConnection(apiVersion),
},
};
@@ -413,7 +403,6 @@
jobInfo: { id: entry.jobId },
outputInfo: entry.outputInfo,
options: {
...resumeOptionsOptions,
connection: (await Org.create({ aliasOrUsername: entry.username })).getConnection(apiVersion),
},
};