Installing and Configuring the Authorizer


The Access File Authorizer generates access files based on policy evaluations and serves them to connected systems. It integrates with data sources and PDP flows to calculate access decisions and supports various configurations, including templates and webhooks, to align with specific organizational requirements. Proper installation, configuration, and integration ensure consistent and accurate access enforcement across environments.

Prerequisites

Users need these prerequisites before installing the Access File Authorizer:

  • PlainID Private Docker Hub access
  • An operational database: PostgreSQL with read and write permissions.
  • Access to the relevant Subjects Data source database.
  • Allocated POD resources: Minimum 2 CPU and 4GB of memory.
  • Write permission on the POD's "/app/data" folder.
  • Access to storage for writing/copying output files.
  • PAA Load Requirements: Ability to accommodate extensive load by configuring the PDP and PIP replicas as required.

How to Install and Upgrade

The Access File Authorizer deploys as a Kubernetes StatefulSet. It should run alongside a PAA deployment, as it integrates with the PDP and PIP.

To install or upgrade:

  1. Obtain an Authorizer pack from PlainID Global Services that includes the Helm Chart and other config files.
  2. Use the values-custom.yaml file to define the Helm Chart specifications according to your environment's deployment requirements.
    • Configure the pull secret for accessing Docker images from PlainID Docker Hub.
    • Configure the Authorizer using a Config Map and Environment Variables (under the extraEnv: section):
      • Database Connection: Specify details for the database used in data processing (e.g., PostgreSQL).
      • Data Source Configuration: Provide connection details for the relevant data source.
      • PDP Configuration: Adjust settings according to your deployment requirements.
      • Jobs & Flows Configuration: Define parameters for processing jobs and flows.
    • Refer to the example configuration for more information.
    • Enable rclone support (optional) to copy generated files to other destinations using rclone capabilities, such as SFTP transfer. See the rclone section below for detailed information about this integration.
  3. Install the Helm Chart using the custom values-custom.yaml file.
    • Deploy the Access File POD within the same namespace as the PAA (recommended).
helm install <release-name> authz-access-file --values values-custom.yaml
  4. Post-deployment validation:
    • Check that the Access File service is up and running. Review the logs for any significant errors.
      • The error ERROR: rabbit component is null. skipping fetching schemas from tenant-mgmt can be safely ignored.
    • During the initial deployment, the service startup should also create tables in the operational DB you configured. You can validate that these tables were created successfully in the DB.

Note: We recommend defining a test job in the configuration with a test flow targeting a limited subject population, and running this job as a post-deployment validation, as sketched below.
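
A minimal sketch of such a test job and flow (illustrative only; ds1, pdp1, t1, and l1 refer to definitions described in the Configuration section below, users_test_subset is a hypothetical view limited to a small population, and TEST_PDP_CLIENT_ID/TEST_PDP_CLIENT_SECRET are hypothetical Environment Variables):

flows:
  test_flow:
    source:
      datasource: ds1
      schema: public
      table: users_test_subset    # hypothetical view containing only a small test population
      uid: id
    pdp:
      id: pdp1
      runtimeParameters:
        type: userList
        userListParameters:
          resourceType: groups-new
        path: /api/runtime/userlist/v3
        clientId: ${TEST_PDP_CLIENT_ID}
        clientSecret: ${TEST_PDP_CLIENT_SECRET}
    convert:
      converters:
        c1:
          id: t1
          output:
            location:
              id: l1
              filenameTemplate: test-{{nowWithFormat "20060102-150405"}}.json

jobs:
  test_job:
    flows:
      - test_flow
    maxWorkers: 10    # keep concurrency low for validation
    timeout: 30m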

  5. Upgrade the Authorizer version by adjusting the image tag number and relevant configuration. Ensure that you also upgrade the deployment using Helm upgrade:
helm upgrade -i <release-name> authz-access-file --values values-custom.yaml

Configuration

Key Glossary

Term Description
Jobs - The main working unit of the Authorizer. The Authorizer triggers, executes, and manages operations at the job level. A job can include multiple flows, resulting in different outputs from the same run.
Flows - The basic working unit that handles the complete process of loading the subject population, managing access decisions, and generating output files. Each Flow references specific configurations to set up the data pulled from a data source, define the PDP configuration to use, and convert authorization decisions into the output file.
Data Sources - A set of definitions used to connect to a database and pull subject population data into the Flow process.
PDPs - A set of definitions that manage authorization request execution during Flow processing, including the configuration of concurrency and timeouts.
Locations - A set of definitions specifying where to generate output files during execution.
Converters - A set of definitions that specify how to generate output files using Go Templates.

Structure

  1. The Authorizer configuration structure is a hierarchical YAML with sets of keys and a referencing structure.
  2. The configuration includes these sections, which define the setup "building blocks" referenced by Jobs (see the schematic skeleton after the table):
Section Functionality
datasources Specifies data sources for the subject's population used in Job processing.
pdps Configures PDP(s) for processing Jobs and handling additional Authorization request flags and definitions.
locations Specifies locations for files generated by Jobs.
converters Defines the converter template responsible for transforming PDP responses into the required file output of a Job.
webhooks (optional) Defines webhook templates for post-processing or notifications when a Job is completed.
For example, a webhook can trigger an SFTP file transfer of the generated files.
flows Outlines processing flows, including flow settings and references to Data Sources, PDPs, Converters, and Locations used in execution.
jobs Configures job processing by setting job-specific parameters, referencing flows, and optionally including special converters for aggregated files and webhooks at the job’s conclusion.
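
The following is a schematic skeleton of these building blocks and how they reference each other. It is not a complete configuration; the names (ds1, pdp1, l1, t1, wh1, flow1, jb1) are placeholders, and the required keys for each block are listed in the Parameters table below.

datasources:
  ds1: {}            # connection details for the subject population source
pdps:
  pdp1: {}           # PDP runtime settings (URL, concurrency, timeouts)
locations:
  l1: {}             # output path for generated files
converters:
  t1: {}             # Go Template that shapes the output file
webhooks:            # optional
  wh1: {}            # post-processing call triggered at job completion
flows:
  flow1:
    source:
      datasource: ds1          # references the data source defined above
    pdp:
      id: pdp1                 # references the PDP defined above
    convert:
      converters:
        c1:
          id: t1               # references the converter defined above
          output:
            location:
              id: l1           # references the location defined above
jobs:
  jb1:
    flows:
      - flow1                  # references the flow defined above
    webhooks:
      - template: "wh1"        # references the webhook defined above
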
YAML Config Structure

The full YAML config structure is defined in the values-custom.yaml under

plainIDConfig:
  config.yaml:
  1. Use Environment Variables in the config.yaml to keep it readable and organized (strongly recommended). All configuration values support Environment Variable substitution using the following format: ${ENV_VAR_NAME:default_value}. The Environment Variable is then defined under the extraEnv: section in the values-custom.yaml (see the example below).
  2. In addition, the values-custom.yaml includes additional service configurations, such as Image, Resources, and Ingress.
  3. The values-custom.yaml should also be used for enabling the rclone integration, if required. If enabled, the Authorizer deployment also deploys an additional container running the rclone service, which the Authorizer can call using webhooks. See more details about rclone and webhooks below.
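
For example, a minimal sketch of this substitution pattern (the key names are taken from the Parameters table below; the host value is illustrative):

plainIDConfig:
  config.yaml:
    db:
      # Resolved from the DB_HOST Environment Variable; falls back to "localhost" if unset
      host: ${DB_HOST:localhost}

extraEnv:
  # Defined in values-custom.yaml and injected into the Authorizer POD
  DB_HOST: 'my-plainid-paa-postgresql'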

Parameters

Name Required Description Value Examples
management.port Yes Management API port 9090
http.port Yes HTTP server port 8080
http.enableXSSValidator No Enable XSS validation true / false
http.jwt.jwtIgnoreVerification No Ignore JWT verification true / false
log.level No Logging level trace / debug / info / warn / warning / error / fatal / panic
log.format No Logging format text / json
redis.host Yes Redis host "localhost", "my-plainid-paa-redis-master"
redis.port Yes Redis port 6379
redis.password Yes Redis password "secret123"
redis.db No Redis database number 0
db.enabled Yes Enable database integration true
db.username Yes Database username "postgres"
db.password Yes Database password "secret123"
db.host Yes Database host "localhost", "my-plainid-paa-postgresql"
db.port Yes Database port 5432
db.database Yes Database name "authz_db"
db.driver Yes Database driver type "postgres"
db.migrationSchemas No Schemas to apply migrations ["public"]
db.migrationType No Type of migrations to run "goose"
db.migrationOnStart No Run migrations on startup true / false
db.schema No Default database schema "public"
db.ssl No Enable SSL for database true / false
db.maxIdleConns No Max idle connections 10
db.maxOpenConns No Max open connections 10
db.connMaxLifetime No Connection max lifetime in seconds 3600
datasources.<name>.username Yes Data source username "pip"
datasources.<name>.password Yes Data source password "pa33word"
datasources.<name>.host Yes Data source host "localhost"
datasources.<name>.port Yes Data source port 30303
datasources.<name>.database Yes Data source database name "vdb"
datasources.<name>.sslmode No Enable SSL for data source true / false
datasources.<name>.maxIdleConns No Data source max idle connections 10
datasources.<name>.maxOpenConns No Data source max open connections 10
datasources.<name>.connMaxLifetime No Data source connection max lifetime in seconds 3600
datasources.<name>.connectTimeout No Data source connection timeout in seconds 10
pdps.<name>.type Yes PDP type "runtime"
pdps.<name>.runtimeParameters.url Yes PDP runtime URL "http://localhost:30040"
pdps.<name>.runtimeParameters.timeout No PDP request timeout "30s"
pdps.<name>.runtimeParameters.maxConcurrentConnections No Max concurrent PDP connections 5
pdps.<name>.runtimeParameters.ignoreSSL No Ignore SSL verification for PDP true / false
locations.<name>.path Yes Output location path "/path/to/output"
converters.<name>.type Yes Converter type "goTemplate"
converters.<name>.templateProperties.content Yes Template content "{\n \"users\": [\n {{- range $i, $data := . }}\n {{- if $i }},{{ end }}\n {\n \"id\": \"{{ $data.identity.uid }}\",\n \"name\": \"{{ $data.identity.name }}\"\n }\n {{- end }}\n ]\n}"
flows.<name>.mode No Flow execution mode "Normal", "Full"
flows.<name>.source.datasource Yes Flow data source reference "ds1"
flows.<name>.source.schema Yes Flow source schema "public"
flows.<name>.source.table Yes Flow source table "users"
flows.<name>.source.uid Yes Flow population source unique identifier column name "id"
flows.<name>.pdp.id Yes Flow PDP reference "pdp1"
flows.<name>.pdp.runtimeParameters.type Yes Flow PDP runtime type "userList"
flows.<name>.pdp.runtimeParameters.userListParameters.resourceType Yes* Resource type for user list (required when the flow type is userList) "groups-new"
flows.<name>.pdp.runtimeParameters.userAccessTokenParameters.entityTypeId Yes* Entity type ID (required when the flow type is userAccessToken) "entity-type-1"
flows.<name>.pdp.runtimeParameters.path Yes PDP API path "/api/runtime/userlist/v3"
flows.<name>.pdp.runtimeParameters.clientId Yes PDP client ID "client123"
flows.<name>.pdp.runtimeParameters.clientSecret Yes PDP client secret "secret123"
flows.<name>.pdp.runtimeParameters.includeAsset No Include Asset in PDP response true / false
flows.<name>.pdp.runtimeParameters.includeIdentity No Include Identity in PDP response true / false
flows.<name>.pdp.runtimeParameters.includeAccessPolicy No Include Access Policy in PDP response true / false
flows.<name>.convert.batchSize No Processing batch size 10
flows.<name>.convert.converters.<name>.id Yes Converter reference "t1"
flows.<name>.convert.converters.<name>.output.transient No Mark output as temporary true / false
flows.<name>.convert.converters.<name>.output.location.id Yes Output location reference "l1"
flows.<name>.convert.converters.<name>.output.location.filenameTemplate Yes Output filename template "users-{{nowWithFormat \"20060102\"}}.json", "access-{{.Context.BatchID}}.csv"
webhooks No This section is optional, but if enabled, the properties below are required --
webhooks.<name> Yes Defines a name for the webhook template name: copyFileWebhook
webhooks.<name>.method Yes Defines the webhook call REST Method method: "POST"
webhooks.<name>.headers No Defines a list of Headers that are included in the webhook call. For each header in the list, define the header name and list of values headers:
- name: Content-Type
values:
- "application/json"
- name: Authorization
values:
- ${Auth-Token} # Env Var of a token used for Authorization
webhooks.<name>.url Yes Defines the webhook URL Example #1 - a 3rd party notification service
url: https:///notification/send

Example #2 - The rclone url for file copy using SFTP -
url: localhost:5572/operations/copyfile
webhooks.<name>.payload No Defines the webhook call payload. Usually, in POST requests, you need to define a JSON payload using this key. You can use a Go template here to define variables in the payload, which are injected when the webhook call is generated by the job. payload: |
{
"srcFs": "{{ .SrcFs }}",
"srcRemote": "{{ .SrcRemote }}",
"dstFs": "{{ .DstFs }}",
"dstRemote": "{{ .DstRemote }}"
}

Note: The | is used in the YAML config to define a multiline string.
webhooks.<name>.timeout Yes Defines the webhook call timeout timeout: 180
jobs.<name>.flows Yes List of flows to execute

Note: The flows are executed in the order they are listed in the Job flows array.
["flow1", "flow2"]
jobs.<name>.maxWorkers Yes Max concurrent workers 100
jobs.<name>.mode No Job execution mode "Normal", "Full"
jobs.<name>.schedule No cron schedule expression "0 * * * *"
jobs.<name>.timeout No Job execution timeout. If not defined, defaults to 30m; we recommend always specifying it. "24h"
jobs.<name>.converters.<name>.id Yes* Job converter reference "t1"
jobs.<name>.converters.<name>.output.location.id Yes* Job output location reference "l1"
jobs.<name>.converters.<name>.output.location.filenameTemplate Yes* Job output filename template "output-{{nowWithFormat "20060102"}}.json"
jobs.<name>.webhooks.template Yes (If webhooks are in use) Reference to the webhook name defined in the webhooks section. - template: "copyFileWebhook"
jobs.<name>.webhooks.template.variables No Assign values to variables of the webhook template if in use in the payload template variables:
- name: SrcFs
value: /app/data/
- name: SrcRemote
value: '{{ fileName "flow" "flow1" "c1" }}'
- name: DstFs
value: "my-sftp:/upload"
- name: DstRemote
value: '{{ fileName "flow" "flow1" "c1" }}'

Note: In this example, the values assigned to the variables for the webhook call are defined using a Go template with the custom function fileName, which retrieves the file name of the file generated by a flow/job. See more details about this function under the template functions section.

Examples

Basic Configuration

management:
  port: ${MANAGEMENT_PORT:8081}

http:
  port: ${APP_PORT:8080}
  enableXSSValidator: true
  jwt:
    jwtIgnoreVerification: ${JWT_IGNORE_VERIFICATION:true}

redis:
  host: ${REDIS_HOST:localhost}
  port: ${REDIS_PORT:6379}
  password: ${REDIS_PASS}
  db: ${REDIS_DB:0}

db:
  enabled: true
  username: ${DB_USER:offline}
  password: ${DB_PASS:offline}
  host: ${DB_HOST:localhost}
  port: ${DB_PORT:5432}
  database: ${DB_DATABASE:offline}
  driver: postgres
  migrationSchemas:
    - ${DB_SCHEMA:public}
  migrationType: goose
  migrationOnStart: ${DB_MIGRATION_ON_START:true}
  schema: ${DB_SCHEMA:public}
  ssl: ${DB_SSL:false}
  maxIdleConns: ${DB_MAX_IDLE_CONNECTIONS:1}
  maxOpenConns: ${DB_MAX_OPEN_CONNECTIONS:10}
  connMaxLifetime: ${DB_CONNECTIONS_MAX_LIFE_TIME_SECONDS:3600}

datasources:
  ds1:
    username: ${DS1_DB_USER:pip}
    password: ${DS1_DB_PASS:pa33word}
    host: ${DS1_DB_HOST:localhost}
    port: ${DS1_DB_PORT:30303}
    database: ${DS1_DB_DATABASE:vdb}
    sslmode: ${DS1_DB_SSL_MODE:false}
    connectTimeout: ${DS1_CONNECT_TIMEOUT:10}

pdps:
  pdp1:
    type: ${PDP1_TYPE:runtime}
    runtimeParameters:
      url: ${PDP1_URL:http://localhost:30040}
      maxConcurrentConnections: ${PDP1_MAX_CONCURRENT_CONNECTIONS:5}
      timeout: ${PDP1_TIMEOUT:30s}
      ignoreSSL: ${PDP1_IGNORE_SSL:false}
locations:
  l1:
    path: ${LOCATION1_PATH}
converters:
  t1:
    type: goTemplate
    templateProperties:
      content: ${CONVERTER1_CONTENT}
  t2:
    type: goTemplate
    templateProperties:
      content: ${JOB1_CONVERTER1_CONTENT}
flows:
  flow1:
    mode: Full
    source:
      datasource: ds1
      schema: ${FLOW1_SOURCE_SCHEMA}
      table: ${FLOW1_SOURCE_TABLE}
      uid: ${FLOW1_SOURCE_UID}
    pdp:
      id: pdp1
      runtimeParameters:
        type: ${FLOW1_PDP_RUNTIME_PARAMETERS_TYPE:userList}
        userListParameters:
          resourceType: ${FLOW1_PDP_RUNTIME_PARAMETERS_USER_LIST_PARAMETERS_RESOURCE_TYPE}
        userAccessTokenParameters:
          entityTypeId: ${FLOW1_PDP_RUNTIME_PARAMETERS_USER_ACCESS_TOKEN_PARAMETERS_ENTITY_TYPE_ID}
        path: ${FLOW1_PDP_RUNTIME_PARAMETERS_PATH}
        clientId: ${FLOW1_PDP_RUNTIME_PARAMETERS_CLIENT_ID}
        clientSecret: ${FLOW1_PDP_RUNTIME_PARAMETERS_CLIENT_SECRET}
        includeAsset: true
        includeIdentity: true
        includeAccessPolicy: false
    convert:
      batchSize: ${FLOW1_CONVERT_BATCH_SIZE:10}
      converters:
        c1:
          id: t1
          output:
            transient: true
            location:
              id: l1
              filenameTemplate: ${FLOW1_CONVERTER1_FILENAME_TEMPLATE:output-jb1-flow1-{{nowWithFormat "20060102-150405"}}.json}
jobs:
  jb1:
    flows:
      - flow1
    maxWorkers: ${JOB1_MAX_WORKERS:100}

Advanced Configuration with Multiple Flows and Converters

management:
  port: ${MANAGEMENT_PORT:8081}

http:
  port: ${APP_PORT:8080}
  enableXSSValidator: true
  jwt:
    jwtIgnoreVerification: ${JWT_IGNORE_VERIFICATION:true}

log:
  level: info
  format: text

redis:
  host: ${REDIS_HOST:localhost}
  port: ${REDIS_PORT:6379}
  password: ${REDIS_PASS}
  db: ${REDIS_DB:0}

db:
  enabled: true
  username: ${DB_USER:offline}
  password: ${DB_PASS:offline}
  host: ${DB_HOST:localhost}
  port: ${DB_PORT:5432}
  database: ${DB_DATABASE:offline}
  driver: postgres
  migrationSchemas:
    - ${DB_SCHEMA:public}
  migrationType: goose
  migrationOnStart: ${DB_MIGRATION_ON_START:true}
  schema: ${DB_SCHEMA:public}
  ssl: ${DB_SSL:false}
  maxIdleConns: ${DB_MAX_IDLE_CONNECTIONS:1}
  maxOpenConns: ${DB_MAX_OPEN_CONNECTIONS:10}
  connMaxLifetime: ${DB_CONNECTIONS_MAX_LIFE_TIME_SECONDS:3600}

datasources:
  ds1:
    username: ${DS1_DB_USER:pip}
    password: ${DS1_DB_PASS:pa33word}
    host: ${DS1_DB_HOST:localhost}
    port: ${DS1_DB_PORT:30303}
    database: ${DS1_DB_DATABASE:vdb}
    sslmode: ${DS1_DB_SSL_MODE:false}
    connectTimeout: ${DS1_CONNECT_TIMEOUT:10}

pdps:
  pdp1:
    type: ${PDP1_TYPE:runtime}
    runtimeParameters:
      url: ${PDP1_URL:http://localhost:30040}
      maxConcurrentConnections: ${PDP1_MAX_CONCURRENT_CONNECTIONS:5}
      timeout: ${PDP1_TIMEOUT:30s}
      ignoreSSL: ${PDP1_IGNORE_SSL:false}
locations:
  l1:
    path: ${LOCATION1_PATH}
converters:
  t1:
    type: goTemplate
    templateProperties:
      content: ${CONVERTER1_CONTENT}
  t2:
    type: goTemplate
    templateProperties:
      content: ${JOB1_CONVERTER1_CONTENT}
flows:
  flow1:
    mode: Full
    source:
      datasource: ds1
      schema: ${FLOW1_SOURCE_SCHEMA}
      table: ${FLOW1_SOURCE_TABLE}
      uid: ${FLOW1_SOURCE_UID}
    pdp:
      id: pdp1
      runtimeParameters:
        type: ${FLOW1_PDP_RUNTIME_PARAMETERS_TYPE:userList}
        userListParameters:
          resourceType: ${FLOW1_PDP_RUNTIME_PARAMETERS_USER_LIST_PARAMETERS_RESOURCE_TYPE}
        userAccessTokenParameters:
          entityTypeId: ${FLOW1_PDP_RUNTIME_PARAMETERS_USER_ACCESS_TOKEN_PARAMETERS_ENTITY_TYPE_ID}
        path: ${FLOW1_PDP_RUNTIME_PARAMETERS_PATH}
        clientId: ${FLOW1_PDP_RUNTIME_PARAMETERS_CLIENT_ID}
        clientSecret: ${FLOW1_PDP_RUNTIME_PARAMETERS_CLIENT_SECRET}
        includeAsset: true
        includeIdentity: true
        includeAccessPolicy: false
    convert:
      batchSize: ${FLOW1_CONVERT_BATCH_SIZE:10}
      converters:
        c1:
          id: t1
          output:
            transient: true
            location:
              id: l1
              filenameTemplate: ${FLOW1_CONVERTER1_FILENAME_TEMPLATE:output-jb1-flow1-{{nowWithFormat "20060102-150405"}}.json}
jobs:
  jb1:
    flows:
      - flow1
    maxWorkers: ${JOB1_MAX_WORKERS:100}
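    # cron expression (minute hour day-of-month month day-of-week): runs daily at 18:00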
    schedule: "00 18 * * *"
    converters:
      jc1:
        id: t2
        output:
          location:
            id: l1
            filenameTemplate: ${JOB1_CONVERTER1_FILENAME_TEMPLATE:output-aggregate1-jb1-{{nowWithFormat "20060102-150405"}}.json}

Full Example
This example builds on the Basic and the Advanced Configuration with Multiple Flows and Converters examples, showing a complete values-custom.yaml.

# Should contain a configuration file matching usage detailed in the documentation. See examples/values-custom.yaml.
imagePullSecrets:
  - name: "image-pull-secret"

rclone:
  enabled: true
  image:
    repository: "docker.io/rclone/rclone"
    tag: "1.69"
  config: |
    [my-sftp]
    type = sftp
    host = sftp-service
    user = username
    pass = <encrypted password>
    port = 22

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: access-file-authz.ps-cluster.plainid.net
      paths: [ "/" ]

resources:
  requests:
    memory: "4000Mi"
    cpu: "2000m"
  limits:
    memory: "4000Mi"
    cpu: "2000m"

podSecurityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
  fsGroupChangePolicy: "Always"

plainIDConfig:
  config.yaml:
    debug:
      profiling:
        enable: ${DEBUG_PROFILING_ENABLE:false}
    management:
      port: ${MANAGEMENT_PORT:8081}
      http:
        port: ${APP_PORT:8080}
        enableXSSValidator: true
        jwt:
          jwtIgnoreVerification: ${JWT_IGNORE_VERIFICATION:true}
    log:
      level: debug
      format: text
    redis:
      host: ${REDIS_HOST:localhost}
      port: ${REDIS_PORT:6379}
      password: ${REDIS_PASS}
      db: ${REDIS_DB:0}
    db:
      enabled: true
      username: ${DB_USER:offline}
      password: ${DB_PASS:offline}
      host: ${DB_HOST:localhost}
      port: ${DB_PORT:5432}
      database: ${DB_DATABASE:offline}
      driver: postgres
      migrationSchemas:
        - ${DB_SCHEMA:public}
      migrationType: goose
      migrationOnStart: ${DB_MIGRATION_ON_START:true}
      schema: ${DB_SCHEMA:public}
      ssl: ${DB_SSL:false}
      maxIdleConns: ${DB_MAX_IDLE_CONNECTIONS:1}
      maxOpenConns: ${DB_MAX_OPEN_CONNECTIONS:10}
      connMaxLifetime: ${DB_CONNECTIONS_MAX_LIFE_TIME_SECONDS:3600}
    datasources:
      ds1:
        username: ${DS1_DB_USER:pip}
        password: ${DS1_DB_PASS:password}
        host: ${DS1_DB_HOST:localhost}
        port: ${DS1_DB_PORT:30303}
        database: ${DS1_DB_DATABASE:vdb}
        sslmode: ${DS1_DB_SSL_MODE:false}
        connectTimeout: ${DS1_CONNECT_TIMEOUT:10}
    pdps:
      sample-pdp:
        type: runtime
        runtimeParameters:
          url: ${SAMPLE_PDP_URL:http://localhost:30040}
          maxConcurrentConnections: ${SAMPLE_PDP_MAX_CONCURRENT_CONNECTIONS:5}
          timeout: 30s
          ignoreSSL: false
    locations:
      location_sample:
        path: ${SAMPLE_JOB_LOCATION_PATH}
    converters:
      template1:
        type: goTemplate
        templateProperties:
          content: ${CONVERTER1_TEMPLATE}
      t2_1:
        type: goTemplate
        templateProperties:
          content: ${CONVERTER2_1_TEMPLATE}
      t2_2:
        type: goTemplate
        templateProperties:
          content: ${CONVERTER2_2_TEMPLATE}
      t2_3:
        type: goTemplate
        templateProperties:
          content: ${CONVERTER2_3_TEMPLATE}
      aggregated_file_sample:
        type: goTemplate
        templateProperties:
          content: |
            {
              "property-A": [
                {
                  {{ fileInput "flow-sample-2" "c2_1" }}{{ fileInput "flow-sample-2" "c2_2" }}{{ fileInput "flow-sample-2" "c2_3" }}]
            }

    flows:
      flow-sample-1:
        mode: Full
        source:
          datasource: ds1
          schema: ${FLOW_SAMPLE_1_SOURCE_SCHEMA}
          table: ${FLOW_SAMPLE_1_SOURCE_TABLE}
          uid: ${FLOW_SAMPLE_1_SOURCE_UID}
        pdp:
          id: sample-pdp
          runtimeParameters:
            type: ${SAMPLE_PDP_RUNTIME_PARAMETERS_TYPE}
            userAccessTokenParameters:
              entityTypeId: ${FLOW_SAMPLE_1_PDP_RUNTIME_PARAMETERS_USER_ACCESS_TOKEN_PARAMETERS_ENTITY_TYPE_ID}
            path: ${SAMPLE_PDP_RUNTIME_PARAMETERS_PATH}
            clientId: ${SAMPLE_PDP_RUNTIME_PARAMETERS_CLIENT_ID}
            clientSecret: ${SAMPLE_PDP_RUNTIME_PARAMETERS_CLIENT_SECRET}
            includeAsset: true
            includeIdentity: true
            includeAccessPolicy: false
        convert:
          batchSize: 10
          converters:
            c1:
              id: template1
              output:
                transient: false
                location:
                  id: location_sample
                  filenameTemplate: sample-file-1-output-{{nowWithFormat "20060102-150405"}}.json

      flow-sample-2:
        mode: Full
        source:
          datasource: ds1
          schema: ${FLOW_SAMPLE_2_SOURCE_SCHEMA}
          table: ${FLOW_SAMPLE_2_SOURCE_TABLE}
          uid: ${FLOW_SAMPLE_2_SOURCE_UID}
        pdp:
          id: sample-pdp
          runtimeParameters:
            type: ${SAMPLE_PDP_RUNTIME_PARAMETERS_TYPE:userList}
            userAccessTokenParameters:
              entityTypeId: ${FLOW_SAMPLE_2_PDP_RUNTIME_PARAMETERS_USER_ACCESS_TOKEN_PARAMETERS_ENTITY_TYPE_ID}
            path: ${SAMPLE_PDP_RUNTIME_PARAMETERS_PATH}
            clientId: ${SAMPLE_PDP_RUNTIME_PARAMETERS_CLIENT_ID}
            clientSecret: ${SAMPLE_PDP_RUNTIME_PARAMETERS_CLIENT_SECRET}
            includeAsset: true
            includeIdentity: true
            includeAccessPolicy: false
        convert:
          batchSize: 10
          converters:
            c2_1:
              id: t2_1
              output:
                transient: true
                location:
                  id: location_sample
                  filenameTemplate: temp-output-2_1-{{nowWithFormat "20060102-150405"}}.json
            c2_2:
              id: t2_2
              output:
                transient: true
                location:
                  id: location_sample
                  filenameTemplate: temp-output-2_2-{{nowWithFormat "20060102-150405"}}.json
            c2_3:
              id: t2_3
              output:
                transient: true
                location:
                  id: location_sample
                  filenameTemplate: temp-output-2_3-{{nowWithFormat "20060102-150405"}}.json

    webhooks:
      copyFileWebhook:
        method: "POST"
        headers:
          - name: Content-Type
            values:
              - "application/json"
          - name: Authorization
            values:
              - ${RCLONE_BASIC_BASE64}
        url: ${RCLONE_URL}/operations/copyfile
        payload: |
          {
            "srcFs": "{{ .SrcFs }}",
            "srcRemote": "{{ .SrcRemote }}",
            "dstFs": "{{ .DstFs }}",
            "dstRemote": "{{ .DstRemote }}",
            "_config": { 
	            "inplace": true 
			}
          }
        timeout: 180
      deleteFileWebhook:
        method: "POST"
        headers:
          - name: Content-Type
            values:
              - "application/json"
          - name: Authorization
            values:
              - ${RCLONE_BASIC_BASE64}
        url: ${RCLONE_URL}/operations/deletefile
        payload: |
          {
            "fs": "{{ .SrcFs }}",
            "remote": "{{ .SrcRemote }}"
          }
        timeout: 180

    jobs:
      jb_sample:
        timeout: ${SAMPLE_JOB_TIMEOUT:30m}
        flows:
          - flow-sample-1
          - flow-sample-2
        maxWorkers: ${SAMPLE_JOB_MAX_WORKERS:100}
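        # runs weekly on Mondays at 02:00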
        schedule: "00 02 * * 1"
        converters:
          jb_sample_aggregated_converter:
            id: aggregated_file_sample
            output:
              location:
                id: location_sample
                filenameTemplate: sample-aggregated-file-output-{{nowWithFormat "20060102"}}.json
        webhooks:
          - template: "copyFileWebhook"
            variables:
              - name: SrcFs
                value: /app/data/
              - name: SrcRemote
                value: '{{ fileName "flow" "flow-sample-1" "c1" }}'
              - name: DstFs
                value: "my-sftp:/upload"
              - name: DstRemote
                value: "sample-1-file-name.json"
          - template: "copyFileWebhook"
            variables:
              - name: SrcFs
                value: /app/data/
              - name: SrcRemote
                value: '{{ fileName "job" "jb_sample" "jb_sample_aggregated_converter" }}'
              - name: DstFs
                value: "my-sftp:/upload"
              - name: DstRemote
                value: "sample-aggregated-file-name.json"
    
appDataPersistence:
  enabled: true

extraEnv:
  DEBUG_PROFILING_ENABLE: 'true'
  #GOMEMLIMIT: '3900MiB'
  #GOMAXPROCS: '2'
  SAMPLE_JOB_MAX_WORKERS: '50'

  # Processing DB
  DB_USER: 'postgres'
  DB_PASS: '<password>'
  DB_DATABASE: 'accessfile'
  DB_HOST: '<host>.us-east-1.rds.amazonaws.com'
  DB_SSL: 'true'
  DB_SCHEMA: '<schema name>'
  REDIS_HOST: '<plainid redis host>'
  REDIS_PASS: '<redis password>'

  # Data Source Params
  DS1_DB_USER: 'postgres'
  DS1_DB_PASS: '<password>'
  DS1_DB_HOST: '<host>.us-east-1.rds.amazonaws.com'
  DS1_DB_PORT: '5432'
  DS1_DB_DATABASE: '<database name>'
  DS1_DB_SSL_MODE: 'true'
  DS1_CONNECT_TIMEOUT: '10'

  # PDP Params
  SAMPLE_PDP_URL: 'http://plainid-paa-runtime'
  SAMPLE_PDP_RUNTIME_PARAMETERS_PATH: '/api/runtime/token/v3'
  SAMPLE_PDP_RUNTIME_PARAMETERS_CLIENT_ID: '<scope client id>'
  SAMPLE_PDP_RUNTIME_PARAMETERS_CLIENT_SECRET: '<scope client secret>'
  SAMPLE_PDP_RUNTIME_PARAMETERS_TYPE: 'userAccessToken'
  SAMPLE_PDP_MAX_CONCURRENT_CONNECTIONS: '100'

  # Flow Sample 1 definitions
  FLOW_SAMPLE_1_PDP_RUNTIME_PARAMETERS_USER_ACCESS_TOKEN_PARAMETERS_ENTITY_TYPE_ID: '<entity type id>'
  #FLOW_SAMPLE_1_PDP_RUNTIME_PARAMETERS_USER_LIST_PARAMETERS_RESOURCE_TYPE: '<resource type id>'
  FLOW_SAMPLE_1_SOURCE_SCHEMA: 'public'
  FLOW_SAMPLE_1_SOURCE_TABLE: '<table/view name>'
  FLOW_SAMPLE_1_SOURCE_UID: '<id column name>'

  # Flow Sample 2 definitions
  FLOW_SAMPLE_2_PDP_RUNTIME_PARAMETERS_USER_ACCESS_TOKEN_PARAMETERS_ENTITY_TYPE_ID: '<entity type id>'
  #FLOW_SAMPLE_2_PDP_RUNTIME_PARAMETERS_USER_LIST_PARAMETERS_RESOURCE_TYPE: '<resource type id>'
  FLOW_SAMPLE_2_SOURCE_SCHEMA: 'public'
  FLOW_SAMPLE_2_SOURCE_TABLE: '<table/view name>'
  FLOW_SAMPLE_2_SOURCE_UID: '<id column name>'

  # Converters Params
  CONVERTER1_TEMPLATE: '{{ "\r\n" }}"property-A": [{{ "\r\n" }}{{- range $index, $data := . }}{{ if gt $index 0 }},{{ "\r\n" }}{{ end }}{ {{ "\r\n" }}"UserID": [{{ "\r\n" }}{{- range $entityIndex, $entity := (index $data.response 0).access }}{{ if $entityIndex }},{{ "\r\n" }}{{ end }}"{{ $entity.path }}"{{- end }}{{ "\r\n" }}],{{ "\r\n" }}"details": { {{ "\r\n" }}"name": "{{ index $data.identity.attributes.Name 0 }}",{{ "\r\n" }}"ID": "{{ index $data.identity.attributes.ID 0 }}",{{ "\r\n" }}"lastUpdated": "{{ index $data.identity.attributes.LastUpdated 0 }}"{{ "\r\n" }} } {{ "\r\n" }} } {{- end }}{{ "\r\n" }}] {{ "\r\n" }}'
  
  CONVERTER2_1_TEMPLATE: '"Type1":[{{ "\r\n" }}{{- $first := true }}{{- range $i, $data := . }}{{- range $response := $data.response }}{{- range $access := $response.access }}{{- if eq (index $access.attributes.path 0) "TYPE-1" }}{{- range $userid := $access.attributes.UserID }}{{- if not $first }},{{ "\r\n" }}{{ end }}"{{ $userid }}"{{- $first = false }}{{- end }}{{- end }}{{- end }}{{- end }}{{- end }}{{ "\r\n" }}],'
  
  CONVERTER2_2_TEMPLATE: '"Type2":[{{ "\r\n" }}{{- $first := true }}{{- range $i, $data := . }}{{- range $response := $data.response }}{{- range $access := $response.access }}{{- if eq (index $access.attributes.path 0) "TYPE-2" }}{{- range $userid := $access.attributes.UserID }}{{- if not $first }},{{ "\r\n" }}{{ end }}"{{ $userid }}"{{- $first = false }}{{- end }}{{- end }}{{- end }}{{- end }}{{- end }}{{ "\r\n" }}],'
  
  CONVERTER2_3_TEMPLATE: '"Type3":[{{ "\r\n" }}{{- $first := true }}{{- range $i, $data := . }}{{- range $response := $data.response }}{{- range $access := $response.access }}{{- if eq (index $access.attributes.path 0) "TYPE-3" }}{{- range $userid := $access.attributes.UserID }}{{- if not $first }},{{ "\r\n" }}{{ end }}"{{ $userid }}"{{- $first = false }}{{- end }}{{- end }}{{- end }}{{- end }}{{- end }}{{ "\r\n" }}]'

  # Job Params
  SAMPLE_JOB_LOCATION_PATH: '/app/data'
  SAMPLE_JOB_TIMEOUT: '180m'

  # RCLONE
  RCLONE_URL: 'http://localhost:5572'

Subject Data Sources and PDP Flows

The Access File Authorizer processes the subject population, calculates access for each subject, and generates an access file representing Authorizations for the entire population. To support this process, the Authorizer loads a predefined subject population from a customer's data source and evaluates each entry using a predefined PDP calculation flow based on modeled Templates and Policies in the PlainID Platform.

Data Sources

The Authorizer processes subjects that are either Identities or Assets (resources) requiring Authorization decisions. Data sources are defined in the Authorizer configuration in two parts:

  • Data Source Connectivity – Defined under the datasources set in the configuration, specifying connection details such as host, port, user, and password.
  • Data Reference – Defined under flow: source, referencing the connectivity setup and specifying the schema, table, and uid to identify the data source for subjects.

The Data Source configuration can either connect directly to a customer database or use the PIP service via postgres transport. When using PIP, connectivity is configured to the pip-operator through the Postgres port, referencing a View name as the Source table. This enables support for multiple data source types (not limited to databases), the creation of virtual views for defining population subsets, and necessary data normalization using PlainID PIP capabilities.

The uid should serve as a unique identifier for the subject, such as UserID for Identities or AssetID for resources.

Note: The subject data source must be preordered by the uid field to prevent data inconsistencies and issues with multi-values. Ordering can also be enforced using PIP Views.
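
For illustration, a minimal sketch of the two parts when going through PIP (the pip-operator host name, port, and view name are assumptions for your environment; the Postgres port exposed by your PAA's PIP deployment may differ):

datasources:
  ds1:
    username: ${DS1_DB_USER:pip}
    password: ${DS1_DB_PASS}
    host: pip-operator        # assumed PIP service host
    port: 30303               # assumed PIP Postgres transport port
    database: vdb

flows:
  flow1:
    source:
      datasource: ds1         # references the connectivity definition above
      schema: public
      table: users_view       # hypothetical PIP View, ordered by the uid column
      uid: id                 # unique subject identifier (e.g., UserID or AssetID)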

PDP Flows

The Authorizer processes both Identities and Assets, supporting two different PDP calculation flows: User Access Token and User List. We recommend defining your use case modeling and testing the PDP calculation before configuring it for the Authorizer. For guidance on proper modeling, you can consult the PlainID Global Services team.

Each Flow references a predefined PDP and allows specifying additional parameters for the required PDP calculation, including:

  • clientId and clientSecret to identify the Policy Scope.
  • Type of PDP flow: userList or userAccessToken.
  • PDP endpoint, based on the flow type, specified via the path parameter.
  • Identity/Asset Types, determining the Templates used for calculation, using resourceType and entityTypeId parameters.
  • PDP request flags to enrich the PDP response for file processing, such as includeAsset, includeIdentity, and includeAccessPolicy.
runtimeParameters:
  type: ${FLOW1_PDP_RUNTIME_PARAMETERS_TYPE:userList}
  userListParameters:
    resourceType: <RESOURCE_TYPE>
  userAccessTokenParameters:
    entityTypeId: <ENTITY_TYPE_ID>
  path: <PDP URI PATH>
  clientId: ${FLOW1_PDP_RUNTIME_PARAMETERS_CLIENT_ID}
  clientSecret: ${FLOW1_PDP_RUNTIME_PARAMETERS_CLIENT_SECRET}
  includeAsset: true
  includeIdentity: true
  includeAccessPolicy: false

Templates

The Authorizer uses Go Templates to define file structures and inject data elements while processing access files based on PDP authorization decisions retrieved and stored for the subject population.

We recommend simulating PDP requests (e.g., using Postman) and working with authentic sample responses when building and defining Go Templates. For guidance on creating Go Templates, you can refer to resources like gotemplate.io. Additionally, the PlainID Professional Services team is available for further assistance.

The Authorizer also provides a template validation endpoint to help verify your Templates. For more details, see Authorizer API Endpoints.

Note: The Authorizer processes PDP responses as a range, meaning template processing should assume a JSON array as input. When validating your template using the validation API or an online tool, ensure that your input is a JSON array and that your template accesses the data using a range syntax, such as:

{{- range $i, $data := . }}
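
For example, a minimal converter sketch of this form (assuming each array element exposes an identity object with a uid attribute, as in the Parameters table examples; your actual PDP response structure may differ). Given an input array such as [{"identity": {"uid": "u1"}}, {"identity": {"uid": "u2"}}], it renders a JSON array of ids:

converters:
  ids_sample:
    type: goTemplate
    templateProperties:
      content: |
        [
          {{- range $i, $data := . }}
          {{- if $i }},{{ end }}
          { "id": "{{ $data.identity.uid }}" }
          {{- end }}
        ]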

Supported Template Functions

Templates defined with Go Templates can include custom functions executed as part of the file generation. The Authorizer currently supports these functions:

  • padRight - Can be used in your output file when padding any line with spaces.

    • Syntax - padRight <String> <Length>
    • Example usage for completing lines to 100 char length - {{ padRight "some text" 100 -}}
  • subtract - Allows you to subtract one number from another. Useful when you need to handle the last items in an array by subtracting 1 from the length.

    • Syntax - subtract <num to subtract from> <num to subtract>
    • Example usage with the native len function - subtract (len $range) 1
  • fileInput - Allows you to inject a file into the template. See a detailed explanation under Flow Results Aggregator on how this function can be used.

    • Syntax - {{ fileInput "<flowID>" "<converterID>" }}
    • Example usage of injecting a file output inside a template - "some text... " {{ fileInput "flow4" "c4_1" }} " some other text..."
  • fileName - Allows you to get the file name of the generated file for specified job/flow and converter IDs.

    • Syntax - {{ fileName "flow" "<flowID>" "<converterID>" }} or {{ fileName "job" "<jobID>" "<converterID>" }}
    • Example usage of getting the file name of the file generated for flow4 with converter c4_1 - {{ fileName "flow" "flow4" "c4_1" }}
    • Useful for setting up a webhook and injecting a filename as a variable into its payload, for example to trigger a file transfer using SFTP.

Note: If using any online Go Template tools, be aware that the mentioned functions are PlainID custom functions, not native Go Template functions, and cannot be tested online.
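
For illustration, a short sketch of a converter combining padRight and subtract (it assumes each element of the PDP response exposes identity.uid as a string, as in the Parameters table examples; this is not a definitive implementation):

converters:
  fixed_width_sample:
    type: goTemplate
    templateProperties:
      content: |
        {{- $total := len . }}
        {{- range $i, $data := . }}
        {{- /* pad each uid to 30 characters to build a fixed-width line */ -}}
        {{ padRight $data.identity.uid 30 }}
        {{- /* append a separator after every line except the last */ -}}
        {{- if lt $i (subtract $total 1) }},{{ "\r\n" }}{{ end }}
        {{- end }}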

Additional Template Hints

You can use these templating hints:

  • Use {{ "\r\n" }} in the template to mark a new line.

  • Use If statements for conditional placement of data in the template. Example:

    • {{ if gt $index 0 }}
  • Use - to trim whitespace before or after the template expression, depending on its placement.

  • Use ranges to iterate over data lists. Example:

    • {{- range $entityIndex, $entity := (index $data.response 0).access }} - The range iterates over the elements in .access, which assigns the:
      • index to $entityIndex.
      • value of the current element to $entity.
    • index $data.response 0 retrieves the first element in the $data.response array.
    • .access retrieves the access field from the first element of $data.response.
  • For a Property Existence Check, use the syntax: {{ if index $data.asset.attributes "property" }}...{{ else }}""{{ end }}

  • For a Value Existence Check use the syntax {{ with index $data.asset.attributes "property" 0 }}"{{ . }}"{{ else }}{{ end }}

Special Cases Examples
These examples demonstrate usage of range processing, native and custom functions, and protection from non-present input data:

  • Avoid Last Comma in Arrays

    • Range Index Handling - Avoids adding a comma after the last range item by subtracting 1.
    • Example: {{ if lt $entityIndex (subtract (len (index $data.response 0).access) 1) }}"{{ $entity.path }}",{{ else }}"{{ $entity.path }}"{{ end }}
    • Use this snippet to separate array data elements with a comma and newline, excluding a trailing comma after the last item, by prepending the separator to every element except the first: {{- range $index, $data := . }}{{ if gt $index 0 }},{{ "\r\n" }}{{ end }}.
  • Correct Range Length Handling

    • End of Range Handling - Ensures correct index calculation when processing ranges by subtracting 1 from the range length.
    • Example: {{ if lt (subtract (len $range) 1) $rangeIndex }}
  • Protect from Missing Input Data

    • Handling Non-Existing Properties or Values using If statements.
    • Example - "Property Name":"{{ if index $access.attributes.property 0 }}{{ with index $access.attributes.property 0 }}{{ . }}{{ else }}{{ end }}{{ else }}{{ end }}"
    • In this example, if statements are used to check whether the referenced property exists and contains a value.

Filename Template

The output file names are also configurable and can be defined using Templates, including a timestamp as part of the name, for example: output-jb3-flow3-{{nowWithFormat "20060102-150405"}}.json. The nowWithFormat template function takes a timestamp format (Go's reference-time layout; "20060102-150405" renders as YYYYMMDD-HHMMSS) and applies it to the current time when creating the file name.

Flow Results Aggregator

The Flow Results Aggregator enables complex file generation by combining results from multiple flows. This is particularly useful for intermediate processing steps that require creating file structures, which can later be incorporated into a more complex template.

At the job level, you can define an additional template converter that takes one or more Flow output files as inputs and integrates them into the defined template along with other template structures and elements. This is done by using the fileInput template function to reference other Flow files. The syntax is:

{{ fileInput "<flowID>" "<converterID>" }}

For example, this template definition aggregates outputs from multiple flows into a final job output file in JSON format:

{
  "ResourceTypes": [
    {
      {{ fileInput "flow2" "c2" }},
      {{ fileInput "flow3" "c3" }},
      {{ fileInput "flow4" "c4" }}
    }
  ],
  "AdditionalAccess": {{ fileInput "flow1" "c1" }}
  // where flow1 and c1 are references to the flowId and its defined converter.
}

At the Flow level, you can configure whether the generated Flow file is temporary by setting output: transient: true:

  • If set to true, the Flow file is saved temporarily until it is used by the aggregation template during job completion. It is automatically cleaned up after successful aggregation.
  • If set to false, the Flow file is saved as an output file at the designated location, and injected into the aggregation template.

Example Configuration with multiple flows and file aggregation used as the job converter:

converters:
  t1: ...
  t2: ...
  t3: ...
  t4: ...
  aggregator_template:
    type: goTemplate
    templateProperties:
      content: |
        {
		  "ResourcesTypes":[
			{
			  {{ fileInput "flow2" "c2" }},
			  {{ fileInput "flow3" "c3" }},
			  {{ fileInput "flow4" "c4" }}
			}
		  ],
		  "AdditionalAccess": {{ fileInput "flow1" "c1" }}
		}
flows:
  # Main flow with permanent output
  flow1:
    source:
      datasource: ds1
      schema: ${FLOW1_SCHEMA:public}
      table: ${FLOW1_TABLE:users}
    convert:
      converters:
        c1:
          id: t1
          output:
            location:
              id: l1
              filenameTemplate: ${FLOW1_OUTPUT:users-{{nowWithFormat "20060102"}}.json}

  # Flows with temporary outputs
  flow2:
    source:
      datasource: ds1
      schema: ${FLOW2_SCHEMA:public}
      table: ${FLOW2_TABLE:locations}
    convert:
      converters:
        c2:
          id: t2
          output:
            transient: true  # Mark as temporary
            location:
              id: l1
              filenameTemplate: ${FLOW2_OUTPUT:temp-loc-{{nowWithFormat "20060102"}}.json}

  flow3:
    source:
      datasource: ds1
      schema: ${FLOW3_SCHEMA:public}
      table: ${FLOW3_TABLE:departments}
    convert:
      converters:
        c3:
          id: t3
          output:
            transient: true  # Mark as temporary
            location:
              id: l1
              filenameTemplate: ${FLOW3_OUTPUT:temp-dept-{{nowWithFormat "20060102"}}.json}

  flow4:
    source:
      datasource: ds1
      schema: ${FLOW4_SCHEMA:public}
      table: ${FLOW4_TABLE:offices}
    convert:
      converters:
        c4:
          id: t4
          output:
            transient: true  # Mark as temporary
            location:
              id: l1
              filenameTemplate: ${FLOW4_OUTPUT:temp-office-{{nowWithFormat "20060102"}}.json}

jobs:
  complex_job:
    timeout: ${JOB1_TIMEOUT:24h}
    flows:
      - flow1
      - flow2
      - flow3
      - flow4
    converters:
      final:
        id: aggregator_template
        output:
          location:
            id: l1
            filenameTemplate: ${FINAL_OUTPUT:final-{{nowWithFormat "20060102"}}.json}

In this example:

  • flow1 generates a permanent output with user data
  • flow2, flow3, and flow4 generate temporary files (marked with transient: true)
  • The job's final converter uses the fileInput function to:
    • Combine related data from flows 2-4
    • Include additional data from flow1
    • Generate a single aggregated output file
  • All temporary files (2,3 and 4) are automatically cleaned up after successful processing
  • The final result of the job is the flow1 output file and the aggregated file

Webhooks

The Authorizer supports configuring webhooks that are triggered at the end of Job processing. Once a Job is completed, an optional set of webhooks can be invoked—for example, to trigger a file transfer of output files from the Authorizer’s configured file store to an external location. Similarly, this capability can be used to send a notification to an external system that exposes a supported webhook.

To use this feature, you must define a Webhook Template, specifying parameters such as URL, Method, Headers, and Payload. The template can include variable declarations within the webhook payload. Once configured, the template is referenced in the Job configuration, ensuring it is triggered upon job completion, with variable values assigned as needed.

See the list of configuration parameters in the Configuration section.
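
For illustration, a minimal sketch of a notification webhook template and how a job references it (the NOTIFICATION_SERVICE_URL Environment Variable, the payload fields, and the GeneratedFile variable are hypothetical placeholders for an external notification service; fileName is the custom template function described under Supported Template Functions):

webhooks:
  notifyWebhook:
    method: "POST"
    headers:
      - name: Content-Type
        values:
          - "application/json"
    url: ${NOTIFICATION_SERVICE_URL}    # hypothetical Environment Variable
    payload: |
      {
        "message": "Access file job completed",
        "file": "{{ .GeneratedFile }}"
      }
    timeout: 60

jobs:
  jb1:
    webhooks:
      - template: "notifyWebhook"
        variables:
          - name: GeneratedFile
            value: '{{ fileName "flow" "flow1" "c1" }}'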

A common use case for webhooks is transferring output files to an external destination at the end of Job processing. To support this, the Authorizer includes built-in integration with rclone.

rclone

The rclone integration deploys rclone as an additional container alongside the Authorizer service, running the rclone HTTP server; the Authorizer triggers its remote operations through webhook calls to that server.
To use this integration, enable the rclone container deployment by copying the rclone section from values.yaml into values-custom.yaml, uncommenting it, and configuring it. See the steps below the code block for more information.

rclone:
  enabled: true
  image:
    repository: "docker.io/rclone/rclone"
    tag: "1.69"
  config: |
    [my-sftp]
    type = sftp
    host = YOUR_HOST
    user = YOUR_USERNAME
    pass = YOUR_SFTP_PASSWORD
    port = 22

  # Extra arguments to pass to the rclone remote control server
  extraArgs: >-
    --retries=3 --low-level-retries=10 --retries-sleep=10s
  1. Set enabled to true.

  2. Ensure the image section is configured to pull the rclone Docker image.

  3. Configure the config section:

    • For SFTP integration, a predefined commented configuration is available:
      • Define a name for your SFTP in brackets.
      • Set the type to sftp.
      • Configure your SFTP server details: host, user, pass, and port.
      • The password you set in this configuration will be encrypted as part of the deployment to meet rclone requirements.
      • Note: The password encryption required for SFTP may limit other out-of-the-box (rclone) integrations. Consult with PlainID Professional Services for alternative integrations.
    • For any special configurations or rclone integrations beyond SFTP (such as AWS S3), consult with PlainID Professional Services.
  4. Use extraArgs to define any additional supported rclone switches as needed.

Once this section is added and rclone deployment is enabled, its container will be deployed, and the rclone HTTP server will run alongside the Authorizer, allowing HTTP requests via the webhook definition.

SFTP

Using the built-in integration with rclone, you can set up an SFTP server and transfer the Authorizer's generated job files to a remote SFTP server for further use by your application or system.

Steps for Integration

  1. As part of the rclone setup, define your SFTP server details as mentioned.

  2. When defining the webhook template:

    • Set the webhook URL to the rclone endpoint: ../operations/copyfile.
    • Set the webhook Method to POST.
  3. Configure the webhook headers:

    • Use the Environment Variable ${RCLONE_BASIC_BASE64} for rclone Authorization, which is predefined as part of the rclone deployment.
    • Set Content-Type to application/json.
  4. Define the webhook payload with the copyfile operation JSON structure, including source and destination properties:

    • "srcFs", "srcRemote", "dstFs", and "dstRemote".
    • The payload definition can use Go Template syntax with {{ .varName }}.
    • The payload can include an optional _config object with rclone config flags.
      • For SFTP, the example below includes the inplace flag to guide the file copy operation.

Complete example for a webhook template section:

webhooks:
  copyFileWebhook:
    method: "POST"
    headers:
      - name: Content-Type
        values:
          - "application/json"
      - name: Authorization
        values:
          - ${RCLONE_BASIC_BASE64}
    url: ${RCLONE_URL}/operations/copyfile
    payload: |
      {
        "srcFs": "{{ .SrcFs }}",
        "srcRemote": "{{ .SrcRemote }}",
        "dstFs": "{{ .DstFs }}",
        "dstRemote": "{{ .DstRemote }}",
        "_config": { 
	        "inplace": true 
		}
      }
    timeout: 180

Configuring Webhook Triggers for File Transfer

  1. In the job configuration, add webhook triggers by referencing the defined webhook template and injecting variable values.
  2. Trigger a webhook call for each file you want to transfer to the SFTP server. Set these parameters under webhooks:
    • Template – Reference the webhook template name.
    • Variables – Define entries for each variable specified at the template level: "SrcFs", "SrcRemote", "DstFs", and "DstRemote".
      • SrcFs is set to the local folder where the Authorizer saves the generated file: /app/data.
      • DstFs refers to your SFTP server using the name defined in the rclone configuration (e.g., my-sftp).
      • File names can be set using the custom function fileName.

Complete webhook trigger section example for a job:

webhooks:
  - template: "copyFileWebhook"
    variables:
      - name: SrcFs
        value: /app/data/
      - name: SrcRemote
        value: '{{ fileName "flow" "flow1" "c1" }}'
      - name: DstFs
        value: "my-sftp:/upload"
      - name: DstRemote
        value: '{{ fileName "flow" "flow1" "c1" }}'
  - template: "copyFileWebhook"
    variables:
      - name: SrcFs
        value: /app/data/
      - name: SrcRemote
        value: '{{ fileName "job" "jb1" "final" }}'
      - name: DstFs
        value: "my-sftp:/upload"
      - name: DstRemote
        value: '{{ fileName "job" "jb1" "final" }}'