Deprecated Pages
CircleCI
PR #302 Remove use of CircleCI
Followed LHDI instructions and used the lighthouse-di-circleci-java17-image. CircleCI was enabled in PR #16.
The configuration runs similar operations to GitHub Actions, but not all of them.
- config.yml uses `$GITHUB_USERNAME` and `$GITHUB_ACCESS_TOKEN` (to pull a Docker image), which are set in CircleCI's project settings. (This needs to be done in the internal repo as well to get CircleCI running there.)
- CircleCI does not push container images ("packages") to the GitHub Container Registry; we don't want to pollute it with development packages. See Docker-containers#Packages.
- Commit c7b786c limits CircleCI runs to only the main and develop branches to reduce the time for PR checks.
MAS api spec (IBM hosted api)
{
"openapi": "3.0.3",
"info": {
"version": "1.0.0",
"title": "Mail Automation System - VRO Integration (Automated Benefits Delivery)",
"description": "Integration with VRO via MAS-hosted APIs",
"termsOfService": "",
"contact": {
"name": "IBM Dev Team",
"email": "[email protected]",
"url": ""
}
},
"servers": [{
"url": "https://viccs-api-dev.ibm-intelligent-automation.com/pca/api/dev",
"description": "(IBM - VICCS API)"
}
],
"paths": {
"/pcCheckCollectionStatus": {
"get": {
"tags": [
"pcCheckCollectionStatus"
],
"summary": "Get the status of the collection",
"description": "Get the status of the collection .i.e. whether it is OCRed, Indexed and ready to call the annotations API",
"operationId": "getCheckCollectionStatus",
"parameters": [{
"name": "Collection Identifiers",
"in": "query",
"description": "Collection Status Request",
"schema": {
"$ref": "#/components/schemas/collectionStatusReq"
}
}
],
"responses": {
"200": {
"description": "Successful operation",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/collectionStatusResp"
}
}
}
}
},
"400": {
"description": "Invalid input value"
}
},
"security": [{
"bearerAuth": []
}
]
}
},
"/pcQueryCollectionAnnots": {
"get": {
"tags": [
"pcQueryCollectionAnnots"
],
"summary": "Get the claim details",
"description": "Get the claim details",
"operationId": "pcQueryCollectionAnnots",
"parameters": [{
"name": "Collection Identifier",
"in": "query",
"description": "Get claim details Request",
"schema": {
"$ref": "#/components/schemas/collectionAnnotsReq"
}
}
],
"responses": {
"200": {
"description": "Success",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/collectionAnnotsResp"
}
}
}
}
},
"422": {
"description": "Invalid input value",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/error"
}
}
}
},
"default": {
"description": "unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/error"
}
}
}
}
},
"security": [{
"bearerAuth": []
}
]
}
},
"/pcOrderExam": {
"post": {
"description": "Request a medical exam",
"operationId": "pcOrderExam",
"requestBody": {
"description": "Request a medical exam due to insufficient medical evidence for the condition specified in the claim",
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/orderExamReq"
}
}
}
},
"responses": {
"200": {
"description": "success",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/orderExamResp"
}
}
}
},
"422": {
"description": "Invalid input value",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/error"
}
}
}
},
"default": {
"description": "Unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/error"
}
}
}
}
},
"security": [{
"bearerAuth": []
}
]
}
}
},
"components": {
"schemas": {
"collectionStatusReq": {
"required": ["collectionId"],
"type": "object",
"properties": {
"collectionId": {
"description": "Unique identifier for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "integer"
},
"collectionIds": {
"description": "List of unique identifiers for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"collectionStatusResp": {
"type": "object",
"required": [
"collectionId",
"collectionStatus"
],
"properties": {
"collectionId": {
"description": "Unique identifier for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "integer"
},
"collectionStatus": {
"description": "Status of the collection",
"type": "string",
"enum" : ["inProgress", "processed", "offramped", "vroNotified"]
}
}
},
"collectionAnnotsReq": {
"required": ["collectionId"],
"type": "object",
"properties": {
"collectionId": {
"description": "Unique identifier for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "integer"
}
}
},
"documents": {
"required": ["eFolderVersionRefId ", "condition", "annotations"],
"type": "object",
"properties": {
"eFolderVersionRefId": {
"description": "eFolder version Reference ID",
"type": "integer"
},
"condition": {
"description": "Claims condition",
"type": "string"
},
"annotations": {
"description": "List of Annotations",
"type": "array",
"items": {
"$ref": "#/components/schemas/annotations"
}
}
}
},
"annotations": {
"type": "object",
"properties": {
"annotType": {
"description": "Annotation Type",
"type": "string"
},
"pageNum": {
"description": "Page Number",
"type": "string"
},
"annotName": {
"description": "Annotation Name",
"type": "string"
},
"annotVal": {
"description": "Annotation Value",
"type": "string"
},
"spellCheckVal": {
"description": "Spellcheck Value",
"type": "string"
},
"observationDate": {
"description": "Observation Date and Time (YYYY-MM-DDThh:mm:ss.sTZD)",
"type": "string",
"pattern": "(^\\d{4}-\\d\\d-\\d\\dT\\d\\d:\\d\\d:\\d\\d(\\.\\d+)?(([+-]\\d\\d:\\d\\d)|Z)?$)"
},
"start": {
"description": "Start Value",
"type": "integer"
},
"end": {
"description": "End Value",
"type": "integer"
},
"acdPrefName": {
"description": "Acd Pref Name",
"type": "string"
},
"relevant": {
"description": "Is it relevant",
"type": "boolean"
}
}
},
"collectionAnnotsResp": {
"type": "object",
"required": [
"vtrnFileId",
"creationDate"
],
"properties": {
"vtrnFileId": {
"description": "Veteran File Identifier",
"type": "integer"
},
"creationDate": {
"description": "Claim creation date and Time (YYYY-MM-DDThh:mm:ss.sTZD)",
"type": "string",
"pattern": "(^\\d{4}-\\d\\d-\\d\\dT\\d\\d:\\d\\d:\\d\\d(\\.\\d+)?(([+-]\\d\\d:\\d\\d)|Z)?$)"
},
"documents": {
"description": "List of documents",
"type": "array",
"items": {
"$ref": "#/components/schemas/documents"
}
}
}
},
"orderExamReq": {
"required": ["collectionId"],
"type": "object",
"properties": {
"collectionId": {
"description": "Unique identifier for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "string"
}
}
},
"orderExamResp": {
"type": "object",
"required": [
"status"
],
"properties": {
"status": {
"description": "Order Exam Status",
"type": "string"
}
}
},
"error": {
"type": "object",
"required": [
"code",
"message"
],
"properties": {
"code": {
"type": "string"
},
"message": {
"type": "string"
}
}
}
},
"securitySchemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT"
}
}
}
}
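Several date-time fields in this spec (observationDate, creationDate) share the same ISO-8601 pattern. As a quick sanity check when building test payloads against the spec, the pattern can be exercised directly; this is an illustrative sketch only:
import re

# The date-time pattern shared by observationDate and creationDate above.
TIMESTAMP = re.compile(r"^\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d(\.\d+)?(([+-]\d\d:\d\d)|Z)?$")

for value in ("2022-10-07T14:30:00.5Z", "2022-10-07T14:30:00-05:00", "10/07/2022"):
    print(value, "->", bool(TIMESTAMP.match(value)))
# -> True, True, False: only ISO-8601 timestamps pass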
MAS api spec (VRO hosted api)
{
"openapi": "3.0.0",
"info": {
"version": "1.0.0",
"title": "Mail Automation System - VRO Integration (Automated Benefits Delivery)",
"description": "Integration with Mail Automation System via VRO-hosted APIs",
"termsOfService": "",
"contact": {
"name": "ABD-VRO Maintenance Team",
"email": "[email protected]",
"url": ""
}
},
"servers": [{
"url": "http://localhost/abd-vro/v1",
"description": "(ABD-VRO API)"
}
],
"paths": {
"/automatedClaim": {
"post": {
"description": "Notify VRO of a new claim that has been forwarded to OCR and evidence gathering",
"operationId": "addclaimsNotification",
"requestBody": {
"description": "Claims Notification Request",
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/claimsNotification"
}
}
}
},
"responses": {
"200": {
"description": "Claims Notification Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/claimsNotificationResp"
}
}
}
},
"default": {
"description": "Unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/error"
}
}
}
}
}
}
},
"/examOrderingStatus": {
"post": {
"description": "Notify health exam ordering status",
"operationId": "examOrderStatus",
"requestBody": {
"description": "Notify health exam ordering status",
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/examOrderStatus"
}
}
}
},
"responses": {
"200": {
"description": "Acknowledge Notify health exam ordering status",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/examOrderStatusResp"
}
}
}
},
"default": {
"description": "Unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/error"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"claimsNotification": {
"required": ["collectionId", "veteranIdentifiers", "dob", "firstName", "lastName", "claimDetail"],
"type": "object",
"properties": {
"veteranIdentifiers": {
"$ref": "#/components/schemas/veteranIdentifiers"
},
"dob": {
"description": "Date of Birth (yyyy-mm-dd format)",
"type": "string",
"pattern": "([12]\\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\\d|3[01]))"
},
"firstName": {
"description": "First Name",
"type": "string"
},
"lastName": {
"description": "Last Name",
"type": "string"
},
"gender": {
"description": "Gender",
"type": "string"
},
"collectionId": {
"description": "Unique identifier for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "string"
},
"veteranFlashIds": {
"description": "Veteran Flash IDs",
"type": "array",
"items": {
"type": "string"
}
},
"claimDetail": {
"$ref": "#/components/schemas/claimDetail"
}
}
},
"veteranIdentifiers": {
"required": ["icn", "ssn", "edipn", "veteranFileId", "participantId"],
"type": "object",
"properties": {
"icn": {
"$ref": "#/components/schemas/icn"
},
"ssn": {
"$ref": "#/components/schemas/ssn"
},
"veteranFileId": {
"$ref": "#/components/schemas/veteranFileId"
},
"edipn": {
"$ref": "#/components/schemas/edipn"
},
"participantId": {
"$ref": "#/components/schemas/participantId"
}
}
},
"ssn": {
"description": "Veteran's Social Security number (Note: pass n/a in the absence of this field)",
"type": "string",
"default" : "N/A"
},
"icn": {
"description": "Veteran's Integration Control number (Note: pass n/a in the absence of this field)",
"type": "string",
"default" : "N/A"
},
"edipn": {
"description": "Veteran's DOD EDIPN ID (Electronic Data Interchange-Personal Identifier) (Note: pass n/a in the absence of this field)",
"type": "string",
"default" : "N/A"
},
"veteranFileId": {
"description": "Veteran File ID (a.k.a. BIRLS ID or CorpDB filenumber or VBMS filenumber)\n\nBIRLS : Beneficiary Identification Records Locator Subsystem\nVBMS: Veteran Benefits Management System\nCorpDB: VA Corporate Database (Note: pass n/a in the absence of this field)",
"type": "string",
"default" : "N/A"
},
"participantId": {
"description": "Veteran's participant id",
"type": "string",
"default" : "N/A"
},
"claimDetail": {
"required": ["benefitClaimId", "claimSubmissionDateTime", "claimSubmissionSource", "veteranFileId", "conditions"],
"type": "object",
"properties": {
"benefitClaimId": {
"description": "Benefit Claim Identifier",
"type": "string"
},
"claimSubmissionDateTime": {
"description": "Claims Submission Date and Time (YYYY-MM-DDThh:mm:ss.sTZD)",
"type": "string",
"pattern": "(^\\d{4}-\\d\\d-\\d\\dT\\d\\d:\\d\\d:\\d\\d(\\.\\d+)?(([+-]\\d\\d:\\d\\d)|Z)?$)"
},
"claimSubmissionSource": {
"description": "Claims Submission Source VA.gov or MAS",
"type": "string",
"enum" : ["VA.GOV", "MAS", "OTHER"]
},
"conditions": {
"$ref": "#/components/schemas/claimCondition"
}
}
},
"claimCondition": {
"required": ["diagnosticCode"],
"type": "object",
"properties": {
"name": {
"description": "Claim Condition Name",
"type": "string"
},
"diagnosticCode": {
"description": "Claim Diagnostic Code",
"type": "string",
"enum" : ["7101"]
},
"disabilityActionType": {
"description": "Claim Disability Action Type",
"type": "string",
"enum" : ["INCREASE", "NEW"]
},
"disabilityClassificationCode": {
"description": "Claim Disability Classification Code",
"type": "string",
"enum" : ["3460", "3370"]
},
"ratedDisabilityId": {
"description": "Claim Rated Disability ID",
"type": "string"
}
}
},
"claimsNotificationResp": {
"type": "object",
"required": [
"id",
"message"
],
"properties": {
"id": {
"description": "Unique ID to identify the transaction (for audit and debug purpose)",
"type": "string"
},
"message": {
"type": "string"
}
}
},
"examOrderStatus": {
"required": ["collectionId", "collectionStatus"],
"type": "object",
"properties": {
"collectionId": {
"description": "Unique identifier for the collection of annotations resulting from OCR and NLP processing of relevant documents",
"type": "string"
},
"collectionStatus": {
"description": "Claim Collection Status",
"type": "string",
"enum" : [ "DRAFT", "FINAL", "ERROR"]
},
"examOrderDateTime": {
"description": "Exam order Date and Time (YYYY-MM-DDThh:mm:ss.sTZD)",
"type": "string",
"pattern": "(^\\d{4}-\\d\\d-\\d\\dT\\d\\d:\\d\\d:\\d\\d(\\.\\d+)?(([+-]\\d\\d:\\d\\d)|Z)?$)"
}
}
},
"examOrderStatusResp": {
"type": "object",
"required": [
"id",
"message"
],
"properties": {
"id": {
"description": "Unique ID to identify the transaction (for audit and debug purpose)",
"type": "string"
},
"message": {
"type": "string"
}
}
},
"error": {
"type": "object",
"required": [
"code",
"message"
],
"properties": {
"code": {
"type": "string"
},
"message": {
"type": "string"
}
}
}
},
"securitySchemes": {
"ApiKeyAuth": {
"type": "apiKey",
"description": "X-API-KEY:valid-api-key",
"name": "X-API-KEY",
"in": "header"
}
}
},
"security": [{
"ApiKeyAuth": []
}
],
"tags": []
}
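To see how a client would exercise the VRO-hosted API, here is a minimal sketch of posting an automated claim with the X-API-KEY header defined in securitySchemes. The host comes from the spec's servers entry; the key and payload values are placeholders, with field names taken from the claimsNotification schema above:
import requests

BASE_URL = "http://localhost/abd-vro/v1"   # from the spec's "servers" entry
HEADERS = {"X-API-KEY": "valid-api-key"}   # placeholder key for the apiKey scheme

# Placeholder payload; required fields per the claimsNotification schema.
claim = {
    "collectionId": "350",
    "veteranIdentifiers": {
        "icn": "N/A", "ssn": "N/A", "edipn": "N/A",
        "veteranFileId": "123456789", "participantId": "N/A",
    },
    "dob": "1960-01-01",
    "firstName": "Jane",
    "lastName": "Doe",
    "claimDetail": {
        "benefitClaimId": "1234",
        "claimSubmissionDateTime": "2022-10-07T14:30:00Z",
        "claimSubmissionSource": "MAS",
        "veteranFileId": "123456789",
        "conditions": {"diagnosticCode": "7101"},
    },
}

resp = requests.post(f"{BASE_URL}/automatedClaim", json=claim, headers=HEADERS)
print(resp.status_code, resp.json())  # expect an {"id": ..., "message": ...} body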
PDF Generator
The PDF Generator contains all the different templates used to generate documents by providing the appropriate data in the generation requests. The following libraries are available for rendering:
- WKHTMLTOPDF: https://wkhtmltopdf.org/
- WeasyPrint: https://weasyprint.org/
The PDF Generator routes allow you to specify which library you wish to use; if none is provided in the request, it defaults to WKHTMLTOPDF.
On `pdfgenerator` startup, the consumer will attempt to create a `generate-pdf`, `fetch-pdf`, and `generate-fetch-pdf` queue on a `pdf-generator` exchange.
- Request PDF Generation - `POST /evidence-pdf`
- Fetch Generated PDF - `GET /evidence-pdf/{claimSubmissionID}`
Both functions work off the same endpoint. The main difference is that the `POST` has a JSON request body while the `GET` uses the URL (`claimSubmissionID`) to find the corresponding PDF.
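As an illustration of the two calls, here is a minimal sketch using Python's requests. The host is hypothetical, and the request-body fields are assumptions that mirror the generate-pdf message described below:
import requests

BASE = "http://localhost:8080"  # hypothetical host for the VRO API

# POST: JSON request body selects the template and kicks off generation.
body = {"claimSubmissionId": "1", "diagnosticCode": "7101", "pdfTemplate": "v1"}
print(requests.post(f"{BASE}/evidence-pdf", json=body).json())

# GET: the claimSubmissionID in the URL identifies the PDF to fetch.
resp = requests.get(f"{BASE}/evidence-pdf/1")
if resp.headers.get("Content-Type") == "application/pdf":
    with open("evidence.pdf", "wb") as f:
        f.write(resp.content)            # downloadable file
else:
    print(resp.json())                   # e.g. {"status": "IN_PROGRESS"}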
This endpoint is a combined version of `generate-pdf` and `fetch-pdf`. Pass it the `POST` request body and it will respond with the PDF associated with the specified `claimSubmissionID`.
Any messages passed to the `generate-pdf` queue will use the `diagnosticCode`, `pdfTemplate`, and (optionally) `pdfLibrary` to select the appropriate template and generate the PDF. The `diagnosticCode` is translated into a human-readable diagnostic type based on the mapping in the `codes` variable in `config/settings.py`.
Using a `pdfTemplate` like `v1` along with a diagnostic type like `hypertension`, the generator will pull up the appropriate template and template variables from the `templates` and `template_variables` folders respectively. For example, `"diagnosticCode"=7101` and `"pdfTemplate"="v1"` will fetch `templates/hypertension-v1.html` and `template_variables/hypertension-v1.json`.
The generator will first load the `hypertension-v1.json` file, which is prefilled with default values for the variables available within the template, and attempt to replace them with what is provided in the message. If it cannot replace the data based on what's provided in the request, it will keep the default defined in the JSON file.
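A minimal sketch of that default-then-override merge, assuming the defaults live under template_variables/ as described above (function and variable names are illustrative):
import json

def build_template_variables(diagnosis: str, template: str, message: dict) -> dict:
    # Load the prefilled defaults, e.g. template_variables/hypertension-v1.json.
    with open(f"template_variables/{diagnosis}-{template}.json") as f:
        variables = json.load(f)
    # Replace defaults with whatever the message provides; anything the
    # request omits keeps the default defined in the JSON file.
    for key, value in message.items():
        if key in variables:
            variables[key] = value
    return variables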
After the HTML template is generated with the replaced variables, the generator uses `WKHTMLTOPDF` by default, or the library selected by `pdfLibrary`, to create a PDF from the HTML file.
Once the PDF has been generated, the consumer will create a key-value pair in Redis to store the data, since it contains PII. The key is the `claimSubmissionId`, while the value is a base64-encoded string representation of the PDF.
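A sketch of that storage step using the redis-py client (connection details and function names are illustrative):
import base64
import redis

r = redis.Redis()  # connection details are deployment-specific

def store_pdf(claim_submission_id: str, pdf_bytes: bytes) -> None:
    # Key: claimSubmissionId; value: base64-encoded PDF. Redis is used
    # here because the document contains PII.
    r.set(claim_submission_id, base64.b64encode(pdf_bytes))

def fetch_pdf(claim_submission_id: str) -> bytes | None:
    encoded = r.get(claim_submission_id)
    return base64.b64decode(encoded) if encoded else None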
The consumer will return a response similar to:
{
  "claimSubmissionId": "1",
  "status": "COMPLETE"
}
When the consumer receives a message on the `fetch-pdf` queue, it will use the provided `claimSubmissionId` to look the PDF up in Redis.
If the PDF still hasn't been generated, you will receive a response similar to:
{
  "claimSubmissionId": "1",
  "status": "IN_PROGRESS"
}
but if the PDF is available, the response will be a downloadable file.
ToCs are not part of the normal HTML template that gets generated for the PDF. They need to be created through a different process and merged with the main PDF template.
The PDF generator will check whether a ToC file has already been created for the `diagnosticCode` that gets passed. If not found, it will generate the PDF without a ToC, so you don't have to worry about having an empty section or page.
- Create a directory in `templates` where the name will be the human-readable diagnosis name used in the `codes` variable in `settings.py`.
- Within this folder, create a `toc.xsl` file. Most ToCs will follow the same format, so you can copy one from any other diagnosis if available. If you need to create one from scratch, run `wkhtmltopdf --dump-default-toc-xsl` on the command line and copy the output into a new `toc.xsl` file as stated above.
- By default, a ToC is generated by finding all `<h?>` tags (`<h1>`, `<h2>`, etc.), so you need to modify any headings you want ignored.
  - To ignore a heading, it must start with the `&zwnj;` character (a zero-width non-joiner), as in this example: `<h3 class="text-center">&zwnj;Test PDF Heading</h3>`. The `toc.xsl` file has logic in place to skip over any headings that start with this special character. This character was chosen because it is invisible, so it won't render on the PDF.
- The ToC page is fully customizable, just like any HTML page.
- The library was built using WebKit 2.2 (~2012) and Qt 4 (2015), so many newer HTML features are unavailable or need to be added through other means to render properly.
- This library renders at 96 DPI, but the value can be altered through the meta tags. We need to verify that the Figma design or other design software matches the proper DPI settings by making sure the resolution matches the paper size. Use the following links for proper conversions: https://a-size.com/legal-size-paper/ and https://www.papersizes.org/a-sizes-in-pixels.htm as well as https://pixelsconverter.com/inches-to-pixels
Work in Progress
- This library does not accept JavaScript. At the moment, we would need to come up with a workaround by prerendering in a secondary library, or just using `WKHTMLTOPDF` for JavaScript-specific portions, but this solution has yet to be implemented.
- This library renders at 96 DPI and the value cannot be changed. We need to verify that the Figma design or other design software matches the proper DPI settings by making sure the resolution matches the paper size. Use the following links for proper conversions: https://a-size.com/legal-size-paper/ and https://www.papersizes.org/a-sizes-in-pixels.htm as well as https://pixelsconverter.com/inches-to-pixels
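A quick worked example of the resolution check described above: pixel dimensions are just the paper size in inches multiplied by the DPI, so a Legal page rendered at WeasyPrint's fixed 96 DPI must be 816 x 1344 px (paper sizes are standard values):
# Standard paper sizes in inches; pixel dimensions = inches * DPI.
PAPER_INCHES = {"letter": (8.5, 11), "legal": (8.5, 14), "a4": (8.27, 11.69)}

def page_pixels(paper: str, dpi: int = 96) -> tuple[int, int]:
    width_in, height_in = PAPER_INCHES[paper]
    return round(width_in * dpi), round(height_in * dpi)

print(page_pixels("legal"))       # (816, 1344) -- what WeasyPrint expects
print(page_pixels("legal", 300))  # (2550, 4200) -- e.g. a 300 DPI design export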
Before you can start working on the content of the PDF design, you must first match your DPI to the size dimensions of the design tool. If you skip this step, the measurements will be all wrong and the design won't be a perfect match. This in turn will cause the developer to modify/test random values or mess with the `zoom` setting to get it to fit the design.
For example, if a user wants to make a new PDF, they must:
- Get the size/dimensions that the design team wants to use: not just whether it's A4, Legal, etc., but the pixel dimensions of the blank design page. We will use this number to set or match the DPI accordingly.
- Set up the new template to match based on the library:
  - `WKHTMLTOPDF`: DPI is customizable, so you can use the links above for proper conversions. For Legal and other sizes like A4, see what DPI setting the pixel dimensions fall under, and set the meta tag to that DPI once you make the template file in the next step.
  - `WeasyPrint`: The DPI value is fixed at 96 and cannot be changed. Due to this, the process is somewhat backwards: the developer needs to use the above links to get the dimensions based on the requested page size and 96 DPI, then send the dimensions back to the design team so they can adjust their document to match.
- Edit the `codes` dictionary in `config/settings.py` by adding a new key-value pair for your code. Example:
codes = {
    "0000": "cancer",
    "0001": "diabetes",  # new code with a human-readable diagnosis
}
- Create an HTML version of the PDF you want to generate and save it in the `templates` folder along with a version number.
  - Take a look at the Jinja2 Template Documentation for a better idea of what you can do within the template files.
  - Every template file needs a version number. By default, the system looks for `v1` if one is not specified in the request.
  - The file name should match the name you used in Step 1. Following that example, it should be called `diabetes-v1.html`.
- Create a JSON file in `template_variables` that will contain default values for the HTML file in case they are not provided in the RabbitMQ message.
  - The file name should match the name you used in Steps 1 and 2. Following that example, it should be called `diabetes-v1.json`.
Some diagnostic codes might need specific changes that shouldn't affect other templates; instead of adding these to the assessment logic, we can use a helper function.
When generating a PDF, the generator will look for a helper function following the naming convention `pdf_helper_0000`, where `0000` is the code we want to use. If it does not find one, it moves on and then applies `pdf_helper_all`, which gets applied to every single template. Usually these are edits that would benefit all the templates, like turning date strings into proper datetime objects.
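A sketch of that naming-convention lookup using getattr; the helpers module shown here is hypothetical, standing in for wherever the pdf_helper_* functions are defined:
import helpers  # hypothetical module defining the pdf_helper_* functions

def apply_pdf_helpers(code: str, variables: dict) -> dict:
    # Look for a code-specific helper, e.g. pdf_helper_7101; skip if absent.
    specific = getattr(helpers, f"pdf_helper_{code}", None)
    if specific is not None:
        variables = specific(variables)
    # pdf_helper_all runs for every template (shared date parsing, cleanup, ...).
    return helpers.pdf_helper_all(variables)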
Currently there are two ways to develop/test the PDF service:
- Run `./gradlew build check docker` to build all containers and run a full test. This can be used for testing any updates that are made to the endpoints through Swagger, but it takes longer due to having to load all the containers. After the containers are built, you can take it a step further and run the containers themselves using `./gradlew app:dockerComposeDown app:dcPrune app:dockerComposeUp` and then heading to the Swagger page to view and run the available endpoints.
- Run `python pdfgenerator/src/lib/local_pdf_test.py` from the `service-python` directory. This file calls the PDF generator while bypassing all the related RabbitMQ and Redis code. You can alter the `diagnosis_name` and `message` to simulate an endpoint request and quickly debug any template or PDF issues. The `diagnosis_name` should be the full name including version number, like `hypertension-v1`.
MAS Integration Camel Routes
VRO v2 uses Apache Camel to coordinate several services into the workflow described by this document.
Apache Camel provides a lot of message-oriented abstractions, but it uses its own DSL (Domain Specific Language), which can be difficult to master. Familiarity with Apache Camel is a prerequisite for understanding the code. A brief overview of Apache Camel can be found here.
The MasController implements two endpoints:
- `v2/examOrderingStatus`: This endpoint does not do anything except record the fact that it has been called.
- `v2/automatedClaim`: This endpoint kicks off the functionality for automated claim processing.
The rest of the document refers to the diagram below (from the VRO v2 Roadmap) and is dedicated to explaining the implementation of the pictured workflow.
The endpoint `v2/automatedClaim` is handled by MasController, which immediately hands off to MasProcessingService.
MasProcessingService is responsible for implementing the logic contained in the topmost yellow box of the diagram, where several checks are performed to verify whether the claim qualifies for automated processing. If any of these checks fail, the API returns a response explaining the reason. This is the only part of the code that executes synchronously. From that point on, a message is sent and the process continues asynchronously.
The following response indicates that the claim is not in scope:
{
"id": "c9fda1ac-2422-47e1-aeff-6c0a2b08d6df",
"message": "Claim with [collection id = 351], [diagnostic code = 7101], and [disability action type = FD] is not in scope."
}
The following response indicates that one of the anchor tests failed:
{
"id": "b34bc26a-68b1-4c08-bca0-ce28db9c4c98",
"message": "Claim with [collection id = 351] does not qualify for automated processing because it is missing anchors."
}
The following response indicates that the claim passed the initial checks and is undergoing processing, which can potentially take a long time.
{
"id": "a14fd4e5-abe0-48b7-95df-3f9c85164a3a",
"message": "Received Claim for collection Id 350."
}
The definitions of all the relevant Camel routes are in MasIntegrationRoutes. This class implements `RouteBuilder`, which provides access to the constructs of the Camel DSL. The `configure()` method is the entry point, and it is divided into several logical groupings of routes:
- `configureAuditing`: Sets up routes for audit messages. Audit messages end up in the database and in some cases appear as Slack notifications. A full description of the auditing framework is given in a different section of this document. This method also implements exception handling.
- `configureOffRamp`: Handles any claims that are off-ramped (rightmost path on the diagram) and forwards them to the "Complete Processing" step.
- `configureAutomatedClaim`: Integration with MAS requires that we send a request for a collection ID and then keep polling periodically for status. This is implemented by storing a message in a queue with a specific delay. Once the delay elapses, the message triggers a query for status via the MAS API. This logic is implemented in MasPollingProcessor. If the collection ID is not ready, the message is requeued with a delay. When the collection is ready, the message is forwarded to the internal mas-processing endpoint.
- `configureMasProcessing`: This route orchestrates the entire workflow by delegating the processing steps to other routes. It performs the following steps:
  - Calls the collect-evidence endpoint to collect evidence from different sources
  - Calls Health Assessment to assess the evidence (second yellow box in the diagram)
  - Conditionally calls "order exam" (third yellow box)
  - Generates and uploads the final PDF file (fourth yellow box)
- `configureCollectEvidence`: Collects evidence from the MAS and BIP APIs, merges the two evidence objects, and calls Health Assessment via RabbitMQ.
- `configureUploadPDF`: This route corresponds to the penultimate yellow box in the diagram. It calls the service to generate the PDF via RabbitMQ and then calls a BIP endpoint to upload the PDF.
- `configureCompleteProcessing`: This is the last processing step, corresponding to the last yellow box in the diagram. Both the off-ramped claims and the ones that have been processed converge on this route. The special issue is removed via the BIP API. If there is sufficient evidence for fast-tracking, the claim is marked as RFD, also via the BIP API.
- `configureOrderExamStatus`: This endpoint simply records an audit event to record the fact that the REST endpoint v2/examOrderingStatus has been called.
Auditing for VRO v2 is more generic than auditing for VRO v1. Instead of having a table structure mirroring claim requests, we have a generic event data model that can contain information about any type of event.
An audit trail is created for each API request. When an entry point is invoked, a UUID is created to track the request. This ID can be used to identify all events connected to the request.
Since the workflow interacts with several services, it is unavoidable that exceptional conditions will occur from time to time. Exception handling is configured as part of auditing. Every exception is caught and fed into two streams: one stream sends a notification to a Slack channel, whereas the other posts the exception to the audit_event table in the database.
A description of the data model can be found here: Audit Data Model
The auditing framework is designed to satisfy the competing objectives of modeling VRO claim requests while also being completely generic. It achieves this via a layer of abstraction:
- Any object that needs to be audited in the database must implement the Auditable interface:
public interface Auditable {
String getEventId();
String getDetails();
String getDisplayName();
}
Auditable objects can be converted into AuditEvent objects by means of the method AuditEvent.fromAuditable(). Audit events can be triggered asynchronously via wireTap. For example:
private void configureOrderExamStatus() {
// This route does not do anything, but an audit event is persisted
String routeId = "mas-exam-order-status";
from(ENDPOINT_EXAM_ORDER_STATUS)
.routeId(routeId)
.wireTap(VroCamelUtils.wiretapProducer(EXAM_ORDER_STATUS_WIRETAP))
.wireTap(ENDPOINT_AUDIT_WIRETAP)
.onPrepare(auditProcessor(routeId, "Exam Order Status Called"))
.log("Invoked " + routeId);
}
The command wireTap(ENDPOINT_AUDIT_WIRETAP) sends a wiretap to the audit endpoint, and the onPrepare command is responsible for mapping the current object (an Auditable) to an AuditEvent.
Slack messages are also sent as part of the auditing framework, except in this case an AuditEvent is converted into a user-friendly string. Here are some examples:
AuditEvent{routeId='/automatedClaim', payloadType=Automated Claim, message='Claim with [collection id = 351] does not qualify for automated processing because it is missing anchors.}
AuditEvent{routeId='/automatedClaim', payloadType=Automated Claim, message='Claim with [collection id = 351], [diagnostic code = 7101], and [disability action type = FD] is not in scope.}
Exception occurred on route mas-claim-processing for Automated Claim(id = a14fd4e5-abe0-48b7-95df-3f9c85164a3a): Error in calling collection Status API.
Please check the audit store for more information.
Apache Camel makes some things easy and some things hard. One of the hard things is that Camel does not enforce type safety as a message travels from route to route. If the output type of a route does not match the input type of the next route, you are looking at a runtime exception.
I have found that the best way to debug Camel is to inject a processor between two routes and set a breakpoint to examine the contents of the message.
For example, consider the following snippet:
.routingSlip(method(slipClaimSubmitRouter, "routeHealthSufficiency"))
.unmarshal(new JacksonDataFormat(AbdEvidenceWithSummary.class))
.process(new HealthEvidenceProcessor())
A message is processed via a routingSlip, then converted from JSON to the type AbdEvidenceWithSummary, which is the input type of HealthEvidenceProcessor. Suppose this conversion does not work for some reason and we want to know why.
We can insert a processor between the two steps of interest like so:
.routingSlip(method(slipClaimSubmitRouter, "routeHealthSufficiency"))
.process(
new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
var message = exchange.getMessage();
var body = message.getBody();
System.out.println(body);
}
})
.unmarshal(new JacksonDataFormat(AbdEvidenceWithSummary.class))
.process(new HealthEvidenceProcessor()) // returns MasTransferObject
and in this way we can examine the details of the exchange, the message, and the message contents.
External APIs (Partial)
The Mail Automation System (MAS) fast-tracks claims from the CMP (Central Mail Portal). The fast-tracking capability will eventually be ported to VRO.
VRO will need to trigger OCR+NLP processing of eFolder documents for veterans of eligible claims to extract health data as evidence for fast-tracking.
- It would be preferable if there were a Lighthouse API for this.
To retrieve the OCR+NLP results, VRO will query MAS directly until the Claim Evidence API is available.
VRO may want to automatically order a medical exam using MAS's exam-ordering capability.
- According to Boise RO, the value of auto-ordering exams for CFI (claim for increase) was pretty straightforward and accurate.
- Uncertain if VRO should auto-order exams for new claims.
How to access MAS endpoints
- Call the auth server API with the required scope, grant type, and credentials to obtain the JWT token.
  Token Endpoint: https://{baseurl}/pca/api/{environment}/token
  scope: openid
  grant_type: client_credentials
  client_id: {client id}
  client_secret: {client secret}
- Check the status of a collection by invoking the "collection status" API with the JWT minted above as the bearer token: https://{baseurl}/pca/api/{environment}/pcCheckCollectionStatus
  Link to open API spec: https://github.com/department-of-veterans-affairs/abd-vro/wiki/MAS-api-spec-%28IBM-hosted-api%29
- Call the "Collection Annotations" endpoint, only if the collection is ready to be processed, with the JWT minted above as the bearer token: https://{baseurl}/pca/api/{environment}/pcQueryCollectionAnnots
  Link to open API spec: https://github.com/department-of-veterans-affairs/abd-vro/wiki/MAS-api-spec-%28IBM-hosted-api%29
- Call the "Order Exam" endpoint, if the evidence is not sufficient for the claim, with the JWT minted above as the bearer token: https://{baseurl}/pca/api/{environment}/pcOrderExam
  Link to open API spec: https://github.com/department-of-veterans-affairs/abd-vro/wiki/MAS-api-spec-%28IBM-hosted-api%29
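Putting the first two steps together, here is a minimal sketch with Python's requests. The token-response field name (access_token) and the query-parameter shape are assumptions, not confirmed by the spec:
import requests

BASE = "https://{baseurl}/pca/api/{environment}"  # substitute real host and env

# Step 1: obtain a JWT via the client-credentials grant (placeholder values).
token = requests.post(
    f"{BASE}/token",
    data={
        "scope": "openid",
        "grant_type": "client_credentials",
        "client_id": "{client id}",
        "client_secret": "{client secret}",
    },
).json()["access_token"]  # assumed response field name

# Step 2: check collection status with the JWT as the bearer token.
status = requests.get(
    f"{BASE}/pcCheckCollectionStatus",
    params={"collectionId": 350},  # assumed query-parameter shape
    headers={"Authorization": f"Bearer {token}"},
)
print(status.json())  # e.g. [{"collectionId": 350, "collectionStatus": "processed"}]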
See EVSS
If a claim is fast-track-able, VRO will need to upload a generated PDF to eFolder and associate the PDF to the claim.
- VRO should use LH's new EVSS-replacement API (timeline? suitability for VRO?)
The following is ordered according to when it will be needed.
- ✅ health evidence data - LH Patient Health API (FHIR)
- mark claim as being assessed for fast-tracking so that users don't work on the claim
- mark using a VBMS/BGS "station 398" and "Rating Decision Review - Level 1" special issue? Paul Shute says it's not a good long-term solution
- LH doesn't currently have this capability
- The `RRD` (Rapid Ready for Decision) special issue also needs to be set.
- upload PDF to eFolder and associate the PDF to the claim
- LH is working on a new EVSS-replacement API (file upload service) to do this: timeline? suitability for VRO?
- claim details (diagnostic codes) using a claimID
- LH Benefits Claims API response is missing diagnostic codes
- veteran details (SSN, name, DOB) to generate a PDF
- LH Veteran Verification APIs doesn't return veteran information like SSN, name, DOB
- MAS claim notification to VRO (for VRO 2.0)
- Expose VRO endpoints as a Lighthouse API for MAS to use?
- query VBMS and BGS to verify some things (MAS does this for CMP-submitted claims)
- TBD: Need to determine MAS features that will be ported to VRO. Can we do the queries via Lighthouse?
- request OCR processing from MAS and retrieve OCR results from MAS
- VRO will connect directly to MAS
- listen for BAM/BIA contention event notifications (for VRO 3.0)
- VRO will need to set up an event subscriber directly with BIA's Kafka
- retrieve OCR results from BAM's Claims Evidence API
- VRO will connect directly to CE API
- query API to map given veteran identifier into an ICN, which is needed to query LH health data
- LH Benefits Claims API v2 accesses MPI to do this, but the veteran's SSN, name, and DOB are needed. Spoke with Derek but will need a new "SR" to access the MPI FHIR API to enable lookup by file number (a.k.a. BIRLS ID)
- mark claim as Ready for Decision (RFD)
- This is not required for single issue CFI, but will be needed for handling multi-issue claims. We should be guided by the AIM project on when this is needed.
Not yet incorporated into ordered list above:
- veteran service history for presumptives
  - perhaps the Veteran Verification APIs, which use eMIS, an API for VADIR (VA/DoD Identity Repository)
- PACT Act CorpDB flashes
  - requested to be added to the LH Veteran Verification API, but it doesn't currently support CCG (machine-to-machine) authentication