API Resources

Aliases

class Aliases(session_kwargs, client, return_type='civis')

Methods

delete(self, id) Delete an alias
delete_shares_groups(self, id, group_id) Revoke the permissions a group has on this object
delete_shares_users(self, id, user_id) Revoke the permissions a user has on this object
get(self, id) Get an Alias
get_object_type(self, object_type, alias) Get details about an alias within an FCO type
list(self, *[, object_type, limit, …]) List Aliases
list_shares(self, id) List users and groups permissioned on this object
patch(self, id, *[, object_id, …]) Update some attributes of this Alias
post(self, object_id, object_type, alias, *) Create an Alias
put(self, id, object_id, object_type, alias, *) Replace all attributes of this Alias
put_shares_groups(self, id, group_ids, …) Set the permissions groups have on this object
put_shares_users(self, id, user_ids, …[, …]) Set the permissions users have on this object
delete(self, id)

Delete an alias

Parameters:
id : integer

The id of the Alias object.

Returns:
None

Response code 204: success

delete_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

get(self, id)

Get an Alias

Parameters:
id : integer
Returns:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

user_id : integer

The id of the user who created the alias

display_name : string

The display name of the Alias object. Defaults to object name if not provided.

get_object_type(self, object_type, alias)

Get details about an alias within an FCO type

Parameters:
object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

Returns:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

user_id : integer

The id of the user who created the alias

display_name : string

The display name of the Alias object. Defaults to object name if not provided.
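
Since an alias is resolved within a single object type, the same alias string can point at different objects under different types. A minimal local sketch of that keying; the lookup table here is invented sample data, and a real lookup goes through the authenticated client rather than this stand-in function:

```python
# Local stand-in for the (object_type, alias) -> alias-record lookup.
# The table below is invented sample data, not a live API response.
aliases = {
    ("workflow", "nightly"):      {"id": 1, "object_id": 100},
    ("python_script", "nightly"): {"id": 2, "object_id": 200},
}

def get_object_type(object_type, alias):
    """Resolve an alias within one object type, like the endpoint above."""
    try:
        return aliases[(object_type, alias)]
    except KeyError:
        raise LookupError(f"no alias {alias!r} for type {object_type!r}")

# The same alias string resolves to different objects per type.
print(get_object_type("workflow", "nightly")["object_id"])       # 100
print(get_object_type("python_script", "nightly")["object_id"])  # 200
```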

list(self, *, object_type='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List Aliases

Parameters:
object_type : string, optional

Filter results by object type. Pass multiple object types with a comma-separated list. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

limit : integer, optional

Number of results to return. Defaults to 50. Maximum allowed is 1000.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id, object_type.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending) defaulting to asc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

user_id : integer

The id of the user who created the alias

display_name : string

The display name of the Alias object. Defaults to object name if not provided.
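
The iterator parameter wraps the limit/page_num paging described above. A minimal sketch of that contract, with a stub page fetcher standing in for the live endpoint (a real call requires an authenticated client, so no API request is made here):

```python
# Sketch of the pagination contract behind iterator=True: keep requesting
# successive page_num values until a page comes back short of `limit`.
# `fake_list` is a stub standing in for the live list endpoint.

def iterate_pages(fetch_page, limit=50):
    """Yield every record across all pages, like iterator=True."""
    page_num = 1
    while True:
        page = fetch_page(limit=limit, page_num=page_num)
        yield from page
        if len(page) < limit:  # a short page means we've reached the end
            break
        page_num += 1

# 120 invented alias records, served 50 per page by the stub.
_records = [{"id": i, "alias": f"alias-{i}"} for i in range(120)]

def fake_list(limit, page_num):
    start = (page_num - 1) * limit
    return _records[start:start + limit]

all_aliases = list(iterate_pages(fake_list, limit=50))
print(len(all_aliases))  # 120
```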

list_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
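
A sketch of walking the nested permissions structure above; `shares` is hand-built sample data in the documented shape, not a live response:

```python
# Hand-built sample mirroring the readers/writers/owners shape above.
shares = {
    "readers": {"users": [{"id": 7, "name": "Ann"}], "groups": []},
    "writers": {"users": [{"id": 9, "name": "Bo"}],
                "groups": [{"id": 3, "name": "analysts"}]},
    "owners":  {"users": [{"id": 1, "name": "Cy"}], "groups": []},
    "total_user_shares": 3,
    "total_group_shares": 1,
}

def user_ids_by_level(shares):
    """Map each permission level to the user IDs granted it."""
    return {
        level: [u["id"] for u in shares[level]["users"]]
        for level in ("readers", "writers", "owners")
    }

print(user_ids_by_level(shares))  # {'readers': [7], 'writers': [9], 'owners': [1]}
```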

patch(self, id, *, object_id='DEFAULT', object_type='DEFAULT', alias='DEFAULT', display_name='DEFAULT')

Update some attributes of this Alias

Parameters:
id : integer

The id of the Alias object.

object_id : integer, optional

The id of the object

object_type : string, optional

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string, optional

The alias of the object

display_name : string, optional

The display name of the Alias object. Defaults to object name if not provided.

Returns:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

user_id : integer

The id of the user who created the alias

display_name : string

The display name of the Alias object. Defaults to object name if not provided.
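
patch updates only the attributes you pass, while put replaces every mutable attribute. A local sketch of that difference; the dicts are stand-ins for Alias records, and resetting display_name to None is a simplification (per the docs, the real API defaults it to the object name when not provided):

```python
# PATCH vs PUT semantics sketched locally on plain dicts.
def patch(record, **changes):
    """Apply a partial update, leaving unspecified fields untouched."""
    return {**record, **changes}

def put(record, object_id, object_type, alias, display_name=None):
    """Replace all mutable attributes; omitted optional fields reset.
    (Simplification: the real API falls back to the object name.)"""
    return {"id": record["id"], "object_id": object_id,
            "object_type": object_type, "alias": alias,
            "display_name": display_name}

alias = {"id": 42, "object_id": 100, "object_type": "workflow",
         "alias": "nightly", "display_name": "Nightly ETL"}

patched = patch(alias, alias="nightly-v2")
assert patched["display_name"] == "Nightly ETL"  # preserved by patch

replaced = put(alias, object_id=100, object_type="workflow", alias="nightly-v2")
assert replaced["display_name"] is None          # dropped by put
```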

post(self, object_id, object_type, alias, *, display_name='DEFAULT')

Create an Alias

Parameters:
object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

display_name : string, optional

The display name of the Alias object. Defaults to object name if not provided.

Returns:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

user_id : integer

The id of the user who created the alias

display_name : string

The display name of the Alias object. Defaults to object name if not provided.
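
Before calling post, object_type must be one of the valid types listed above. A small local pre-check built from that list; `validate_alias_request` is a hypothetical helper, and the actual API call is left commented out because it needs an authenticated client:

```python
# The valid object_type values from the documentation above.
VALID_OBJECT_TYPES = {
    "model", "cass_ncoa", "container_script", "gdoc_export", "geocode",
    "media_optimizer", "python_script", "r_script", "salesforce_export",
    "javascript_script", "sql_script", "project", "notebook", "workflow",
    "template_script", "template_report", "service", "report", "tableau",
    "service_report",
}

def validate_alias_request(object_id, object_type, alias):
    """Hypothetical pre-flight check before calling post()."""
    if object_type not in VALID_OBJECT_TYPES:
        raise ValueError(f"unknown object_type: {object_type!r}")
    if not isinstance(object_id, int):
        raise TypeError("object_id must be an integer")
    return {"object_id": object_id, "object_type": object_type, "alias": alias}

req = validate_alias_request(123, "python_script", "nightly-etl")
# client.aliases.post(**req)  # requires an authenticated client
```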

put(self, id, object_id, object_type, alias, *, display_name='DEFAULT')

Replace all attributes of this Alias

Parameters:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

display_name : string, optional

The display name of the Alias object. Defaults to object name if not provided.

Returns:
id : integer

The id of the Alias object.

object_id : integer

The id of the object

object_type : string

The type of the object. Valid types include: model, cass_ncoa, container_script, gdoc_export, geocode, media_optimizer, python_script, r_script, salesforce_export, javascript_script, sql_script, project, notebook, workflow, template_script, template_report, service, report, tableau and service_report.

alias : string

The alias of the object

user_id : integer

The id of the user who created the alias

display_name : string

The display name of the Alias object. Defaults to object name if not provided.

put_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
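
A sketch of assembling the request body for this endpoint, enforcing the documented permission_level options locally; `build_group_share` is a hypothetical helper, not part of the client:

```python
# Documented options for permission_level.
PERMISSION_LEVELS = {"read", "write", "manage"}

def build_group_share(id, group_ids, permission_level,
                      share_email_body=None, send_shared_email=False):
    """Assemble the share payload; `id` is the shared resource's ID
    (a path parameter in the real call, so not part of the body)."""
    if permission_level not in PERMISSION_LEVELS:
        raise ValueError(f"permission_level must be one of {sorted(PERMISSION_LEVELS)}")
    if not group_ids:
        raise ValueError("group_ids needs at least one group ID")
    payload = {"group_ids": list(group_ids),
               "permission_level": permission_level,
               "send_shared_email": send_shared_email}
    if share_email_body is not None:
        payload["share_email_body"] = share_email_body
    return payload

payload = build_group_share(42, [3, 8], "write", send_shared_email=True)
```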

put_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

Announcements

class Announcements(session_kwargs, client, return_type='civis')

Methods

list(self, *[, limit, page_num, order, …]) List announcements
list(self, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List announcements

Parameters:
limit : integer, optional

Number of results to return. Defaults to 10. Maximum allowed is 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to released_at. Must be one of: released_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending) defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of this announcement

subject : string

The subject of this announcement.

body : string

The body of this announcement.

released_at : string/date-time

The date and time this announcement was released.

created_at : string/date-time
updated_at : string/date-time

Clusters

class Clusters(session_kwargs, client, return_type='civis')

Methods

delete_kubernetes_partitions(self, id, …) Delete a Cluster Partition
get_kubernetes(self, id, *[, …]) Describe a Kubernetes Cluster
get_kubernetes_instance_configs(self, …[, …]) Describe an Instance Config
get_kubernetes_partitions(self, id, …[, …]) Describe a Cluster Partition
list_kubernetes(self, *[, …]) List Kubernetes Clusters
list_kubernetes_deployment_stats(self, id) Get stats about deployments associated with a Kubernetes Cluster
list_kubernetes_deployments(self, id, *[, …]) List the deployments associated with a Kubernetes Cluster
list_kubernetes_instance_configs_historical_graphs(…) Get graphs of historical resource usage in an Instance Config
list_kubernetes_instance_configs_user_statistics(…) Get statistics about the current users of an Instance Config
list_kubernetes_partitions(self, id, *[, …]) List Cluster Partitions for given cluster
patch_kubernetes(self, id, *[, is_nat_enabled]) Update a Kubernetes Cluster
patch_kubernetes_partitions(self, id, …[, …]) Update a Cluster Partition
post_kubernetes(self, *[, organization_id, …]) Create a Kubernetes Cluster
post_kubernetes_partitions(self, id, …) Create a Cluster Partition for given cluster
delete_kubernetes_partitions(self, id, cluster_partition_id)

Delete a Cluster Partition

Parameters:
id : integer

The ID of the cluster which this partition belongs to.

cluster_partition_id : integer

The ID of this cluster partition.

Returns:
None

Response code 204: success

get_kubernetes(self, id, *, include_usage_stats='DEFAULT')

Describe a Kubernetes Cluster

Parameters:
id : integer
include_usage_stats : boolean, optional

When true, usage stats are returned in instance config objects. Defaults to false.

Returns:
id : integer

The ID of this cluster.

organization_id : string

The id of this cluster’s organization.

organization_name : string

The name of this cluster’s organization.

organization_slug : string

The slug of this cluster’s organization.

custom_partitions : boolean

Whether this cluster has a custom partition configuration.

cluster_partitions : list::

List of cluster partitions associated with this cluster.

  • cluster_partition_id : integer

    The ID of this cluster partition.

  • name : string

    The name of the cluster partition.

  • labels : list

    Labels associated with this partition.

  • instance_configs : list::

    The instances configured for this cluster partition.

    • instance_config_id : integer
      The ID of this InstanceConfig.

    • instance_type : string
      An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
    • min_instances : integer
      The minimum number of instances of that type in this cluster.
    • max_instances : integer
      The maximum number of instances of that type in this cluster.
    • instance_max_memory : integer
      The amount of memory (RAM) available to a single instance of that type in megabytes.
    • instance_max_cpu : integer
      The number of processor shares available to a single instance of that type in millicores.
    • instance_max_disk : integer
      The amount of disk available to a single instance of that type in gigabytes.
    • usage_stats : dict::
      • pending_memory_requested : integer
        The sum of memory requests (in MB) for pending deployments in this instance config.
      • pending_cpu_requested : integer
        The sum of cpu requests (in millicores) for pending deployments in this instance config.
      • running_memory_requested : integer
        The sum of memory requests (in MB) for running deployments in this instance config.
      • running_cpu_requested : integer
        The sum of cpu requests (in millicores) for running deployments in this instance config.
      • pending_deployments : integer
        The number of pending deployments in this instance config.
      • running_deployments : integer
        The number of running deployments in this instance config.
  • default_instance_config_id : integer

    The id of the InstanceConfig that is the default for this partition.

is_nat_enabled : boolean

Whether this cluster needs a NAT gateway or not.

hours : number/float

The number of hours used this month for this cluster.
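
A sketch of walking the nested cluster payload above, here to total the maximum schedulable CPU across all partitions; `cluster` is hand-built sample data mirroring the documented shape, not a live response:

```python
# Sample cluster in the documented shape (fields trimmed to the ones used).
cluster = {
    "id": 11,
    "custom_partitions": True,
    "cluster_partitions": [
        {"cluster_partition_id": 1, "name": "default",
         "instance_configs": [
             {"instance_config_id": 10, "instance_type": "m4.xlarge",
              "max_instances": 4, "instance_max_cpu": 4000,
              "instance_max_memory": 16000},
         ]},
        {"cluster_partition_id": 2, "name": "gpu",
         "instance_configs": [
             {"instance_config_id": 20, "instance_type": "p2.xlarge",
              "max_instances": 2, "instance_max_cpu": 4000,
              "instance_max_memory": 61000},
         ]},
    ],
}

def max_cluster_cpu_millicores(cluster):
    """Upper bound on CPU: max_instances x per-instance millicores,
    summed over every instance config in every partition."""
    return sum(
        cfg["max_instances"] * cfg["instance_max_cpu"]
        for part in cluster["cluster_partitions"]
        for cfg in part["instance_configs"]
    )

print(max_cluster_cpu_millicores(cluster))  # 24000
```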

get_kubernetes_instance_configs(self, instance_config_id, *, include_usage_stats='DEFAULT')

Describe an Instance Config

Parameters:
instance_config_id : integer

The ID of this instance config.

include_usage_stats : boolean, optional

When true, usage stats are returned in instance config objects. Defaults to false.

Returns:
instance_config_id : integer

The ID of this InstanceConfig.

instance_type : string

An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.

min_instances : integer

The minimum number of instances of that type in this cluster.

max_instances : integer

The maximum number of instances of that type in this cluster.

instance_max_memory : integer

The amount of memory (RAM) available to a single instance of that type in megabytes.

instance_max_cpu : integer

The number of processor shares available to a single instance of that type in millicores.

instance_max_disk : integer

The amount of disk available to a single instance of that type in gigabytes.

usage_stats : dict::
  • pending_memory_requested : integer
    The sum of memory requests (in MB) for pending deployments in this instance config.
  • pending_cpu_requested : integer
    The sum of cpu requests (in millicores) for pending deployments in this instance config.
  • running_memory_requested : integer
    The sum of memory requests (in MB) for running deployments in this instance config.
  • running_cpu_requested : integer
    The sum of cpu requests (in millicores) for running deployments in this instance config.
  • pending_deployments : integer
    The number of pending deployments in this instance config.
  • running_deployments : integer
    The number of running deployments in this instance config.
cluster_partition_id : integer

The ID of this InstanceConfig’s cluster partition

cluster_partition_name : string

The name of this InstanceConfig’s cluster partition

get_kubernetes_partitions(self, id, cluster_partition_id, *, include_usage_stats='DEFAULT')

Describe a Cluster Partition

Parameters:
id : integer

The ID of the cluster which this partition belongs to.

cluster_partition_id : integer

The ID of this cluster partition.

include_usage_stats : boolean, optional

When true, usage stats are returned in instance config objects. Defaults to false.

Returns:
cluster_partition_id : integer

The ID of this cluster partition.

name : string

The name of the cluster partition.

labels : list

Labels associated with this partition.

instance_configs : list::

The instances configured for this cluster partition.

  • instance_config_id : integer
    The ID of this InstanceConfig.

  • instance_type : string
    An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
  • min_instances : integer
    The minimum number of instances of that type in this cluster.
  • max_instances : integer
    The maximum number of instances of that type in this cluster.
  • instance_max_memory : integer
    The amount of memory (RAM) available to a single instance of that type in megabytes.
  • instance_max_cpu : integer
    The number of processor shares available to a single instance of that type in millicores.
  • instance_max_disk : integer
    The amount of disk available to a single instance of that type in gigabytes.
  • usage_stats : dict::
    • pending_memory_requested : integer
      The sum of memory requests (in MB) for pending deployments in this instance config.
    • pending_cpu_requested : integer
      The sum of cpu requests (in millicores) for pending deployments in this instance config.
    • running_memory_requested : integer
      The sum of memory requests (in MB) for running deployments in this instance config.
    • running_cpu_requested : integer
      The sum of cpu requests (in millicores) for running deployments in this instance config.
    • pending_deployments : integer
      The number of pending deployments in this instance config.
    • running_deployments : integer
      The number of running deployments in this instance config.
default_instance_config_id : integer

The id of the InstanceConfig that is the default for this partition.

list_kubernetes(self, *, organization_slug='DEFAULT', raw_cluster_slug='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List Kubernetes Clusters

Parameters:
organization_slug : string, optional

The slug of this cluster’s organization.

raw_cluster_slug : string, optional

The slug of this cluster’s raw configuration.

limit : integer, optional

Number of results to return. Defaults to its maximum of 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to organization_id. Must be one of: organization_id, created_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending) defaulting to asc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of this cluster.

organization_id : string

The id of this cluster’s organization.

organization_name : string

The name of this cluster’s organization.

organization_slug : string

The slug of this cluster’s organization.

custom_partitions : boolean

Whether this cluster has a custom partition configuration.

cluster_partitions : list::

List of cluster partitions associated with this cluster.

  • cluster_partition_id : integer

    The ID of this cluster partition.

  • name : string

    The name of the cluster partition.

  • labels : list

    Labels associated with this partition.

  • instance_configs : list::

    The instances configured for this cluster partition.

    • instance_config_id : integer
      The ID of this InstanceConfig.

    • instance_type : string
      An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
    • min_instances : integer
      The minimum number of instances of that type in this cluster.
    • max_instances : integer
      The maximum number of instances of that type in this cluster.
    • instance_max_memory : integer
      The amount of memory (RAM) available to a single instance of that type in megabytes.
    • instance_max_cpu : integer
      The number of processor shares available to a single instance of that type in millicores.
    • instance_max_disk : integer
      The amount of disk available to a single instance of that type in gigabytes.
    • usage_stats : dict::
      • pending_memory_requested : integer
        The sum of memory requests (in MB) for pending deployments in this instance config.
      • pending_cpu_requested : integer
        The sum of cpu requests (in millicores) for pending deployments in this instance config.
      • running_memory_requested : integer
        The sum of memory requests (in MB) for running deployments in this instance config.
      • running_cpu_requested : integer
        The sum of cpu requests (in millicores) for running deployments in this instance config.
      • pending_deployments : integer
        The number of pending deployments in this instance config.
      • running_deployments : integer
        The number of running deployments in this instance config.
  • default_instance_config_id : integer

    The id of the InstanceConfig that is the default for this partition.

is_nat_enabled : boolean

Whether this cluster needs a NAT gateway or not.

list_kubernetes_deployment_stats(self, id)

Get stats about deployments associated with a Kubernetes Cluster

Parameters:
id : integer

The ID of this cluster.

Returns:
base_type : string

The base type of this deployment

state : string

State of the deployment

count : integer

Number of deployments of base type and state

total_cpu : integer

Total amount of CPU in millicores for deployments of base type and state

total_memory : integer

Total amount of Memory in megabytes for deployments of base type and state
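
The endpoint returns one row per (base_type, state) combination. A sketch of rolling those rows up into per-state CPU totals; `rows` is sample data in the documented shape, not a live response:

```python
# Sample rows mirroring the documented (base_type, state) stat shape.
rows = [
    {"base_type": "Notebook", "state": "running", "count": 3,
     "total_cpu": 3000, "total_memory": 24000},
    {"base_type": "Service",  "state": "running", "count": 1,
     "total_cpu": 500,  "total_memory": 4000},
    {"base_type": "Run",      "state": "pending", "count": 2,
     "total_cpu": 2000, "total_memory": 8000},
]

def cpu_by_state(rows):
    """Sum total_cpu (millicores) across base types, grouped by state."""
    totals = {}
    for row in rows:
        totals[row["state"]] = totals.get(row["state"], 0) + row["total_cpu"]
    return totals

print(cpu_by_state(rows))  # {'running': 3500, 'pending': 2000}
```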

list_kubernetes_deployments(self, id, *, base_type='DEFAULT', state='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List the deployments associated with a Kubernetes Cluster

Parameters:
id : integer

The id of the cluster.

base_type : string, optional

If specified, return deployments of these base types. It accepts a comma-separated list; possible values are ‘Notebook’, ‘Service’, and ‘Run’.

state : string, optional

If specified, return deployments in these states. It accepts a comma-separated list; possible values are pending, running, terminated, and sleeping.

limit : integer, optional

Number of results to return. Defaults to its maximum of 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to created_at. Must be one of: created_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending) defaulting to asc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The id of this deployment.

name : string

The name of the deployment.

base_id : integer

The id of the base object associated with the deployment.

base_type : string

The base type of this deployment.

state : string

The state of the deployment.

cpu : integer

The CPU in millicores required by the deployment.

memory : integer

The memory in MB required by the deployment.

disk_space : integer

The disk space in GB required by the deployment.

instance_type : string

The EC2 instance type requested for the deployment.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
created_at : string/time
updated_at : string/time
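
The base_type and state filters accept comma-separated lists. A sketch of the same filtering applied locally to sample rows; a real call simply passes the comma-separated strings to the endpoint instead:

```python
# Sample deployments in the documented shape (fields trimmed).
deployments = [
    {"id": 1, "base_type": "Notebook", "state": "running"},
    {"id": 2, "base_type": "Service",  "state": "sleeping"},
    {"id": 3, "base_type": "Run",      "state": "running"},
]

def filter_deployments(deployments, base_type=None, state=None):
    """Mimic the comma-separated base_type/state filters locally."""
    types = set(base_type.split(",")) if base_type else None
    states = set(state.split(",")) if state else None
    return [d for d in deployments
            if (types is None or d["base_type"] in types)
            and (states is None or d["state"] in states)]

print([d["id"] for d in filter_deployments(deployments, state="running")])
# [1, 3]
```
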
list_kubernetes_instance_configs_historical_graphs(self, instance_config_id, *, timeframe='DEFAULT')

Get graphs of historical resource usage in an Instance Config

Parameters:
instance_config_id : integer

The ID of this instance config.

timeframe : string, optional

The span of time that the graphs cover. Must be one of 1_day, 1_week.

Returns:
cpu_graph_url : string

URL for the graph of historical CPU usage in this instance config.

mem_graph_url : string

URL for the graph of historical memory usage in this instance config.

list_kubernetes_instance_configs_user_statistics(self, instance_config_id, *, order='DEFAULT', order_dir='DEFAULT')

Get statistics about the current users of an Instance Config

Parameters:
instance_config_id : integer

The ID of this instance config.

order : string, optional

The field on which to order the result set. Defaults to running_deployments. Must be one of pending_memory_requested, pending_cpu_requested, running_memory_requested, running_cpu_requested, pending_deployments, running_deployments.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending). Defaults to desc.

Returns:
user_id : string

The owning user’s ID

user_name : string

The owning user’s name

pending_deployments : integer

The number of deployments belonging to the owning user in “pending” state

pending_memory_requested : integer

The sum of memory requests (in MB) for deployments belonging to the owning user in “pending” state

pending_cpu_requested : integer

The sum of CPU requests (in millicores) for deployments belonging to the owning user in “pending” state

running_deployments : integer

The number of deployments belonging to the owning user in “running” state

running_memory_requested : integer

The sum of memory requests (in MB) for deployments belonging to the owning user in “running” state

running_cpu_requested : integer

The sum of CPU requests (in millicores) for deployments belonging to the owning user in “running” state

list_kubernetes_partitions(self, id, *, include_usage_stats='DEFAULT')

List Cluster Partitions for given cluster

Parameters:
id : integer
include_usage_stats : boolean, optional

When true, usage stats are returned in instance config objects. Defaults to false.

Returns:
cluster_partition_id : integer

The ID of this cluster partition.

name : string

The name of the cluster partition.

labels : list

Labels associated with this partition.

instance_configs : list::

The instances configured for this cluster partition.

  • instance_config_id : integer
    The ID of this InstanceConfig.

  • instance_type : string
    An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
  • min_instances : integer
    The minimum number of instances of that type in this cluster.
  • max_instances : integer
    The maximum number of instances of that type in this cluster.
  • instance_max_memory : integer
    The amount of memory (RAM) available to a single instance of that type in megabytes.
  • instance_max_cpu : integer
    The number of processor shares available to a single instance of that type in millicores.
  • instance_max_disk : integer
    The amount of disk available to a single instance of that type in gigabytes.
  • usage_stats : dict::
    • pending_memory_requested : integer
      The sum of memory requests (in MB) for pending deployments in this instance config.
    • pending_cpu_requested : integer
      The sum of cpu requests (in millicores) for pending deployments in this instance config.
    • running_memory_requested : integer
      The sum of memory requests (in MB) for running deployments in this instance config.
    • running_cpu_requested : integer
      The sum of cpu requests (in millicores) for running deployments in this instance config.
    • pending_deployments : integer
      The number of pending deployments in this instance config.
    • running_deployments : integer
      The number of running deployments in this instance config.
default_instance_config_id : integer

The id of the InstanceConfig that is the default for this partition.
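
Partition-level totals are not returned directly; they can be rolled up from the per-instance-config `usage_stats` fields above. A minimal sketch over a response fragment with that shape (the sample values are hypothetical):

```python
def total_usage(partition):
    """Sum pending/running memory and CPU requests across a
    partition's instance configs (fields as documented above)."""
    keys = ("pending_memory_requested", "pending_cpu_requested",
            "running_memory_requested", "running_cpu_requested")
    totals = dict.fromkeys(keys, 0)
    for config in partition["instance_configs"]:
        stats = config.get("usage_stats", {})
        for key in keys:
            totals[key] += stats.get(key, 0)
    return totals

# Hypothetical response fragment shaped like the fields above:
partition = {
    "cluster_partition_id": 1,
    "instance_configs": [
        {"instance_type": "m4.xlarge",
         "usage_stats": {"pending_memory_requested": 1024,
                         "pending_cpu_requested": 500,
                         "running_memory_requested": 2048,
                         "running_cpu_requested": 1000}},
        {"instance_type": "c5.18xlarge",
         "usage_stats": {"pending_memory_requested": 0,
                         "pending_cpu_requested": 0,
                         "running_memory_requested": 4096,
                         "running_cpu_requested": 2000}},
    ],
}
print(total_usage(partition))
```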

patch_kubernetes(self, id, *, is_nat_enabled='DEFAULT')

Update a Kubernetes Cluster

Parameters:
id : integer

The ID of this cluster.

is_nat_enabled : boolean, optional

Whether this cluster needs a NAT gateway or not.

Returns:
id : integer

The ID of this cluster.

organization_id : string

The id of this cluster’s organization.

organization_name : string

The name of this cluster’s organization.

organization_slug : string

The slug of this cluster’s organization.

custom_partitions : boolean

Whether this cluster has a custom partition configuration.

cluster_partitions : list::

List of cluster partitions associated with this cluster.
  • cluster_partition_id : integer
    The ID of this cluster partition.

  • name : string

    The name of the cluster partition.

  • labels : list

    Labels associated with this partition.

  • instance_configs : list::

    The instances configured for this cluster partition.
    • instance_config_id : integer
      The ID of this InstanceConfig.

    • instance_type : string
      An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
    • min_instances : integer
      The minimum number of instances of that type in this cluster.
    • max_instances : integer
      The maximum number of instances of that type in this cluster.
    • instance_max_memory : integer
      The amount of memory (RAM) available to a single instance of that type in megabytes.
    • instance_max_cpu : integer
      The number of processor shares available to a single instance of that type in millicores.
    • instance_max_disk : integer
      The amount of disk available to a single instance of that type in gigabytes.
    • usage_stats : dict::
      • pending_memory_requested : integer
        The sum of memory requests (in MB) for pending deployments in this instance config.
      • pending_cpu_requested : integer
        The sum of cpu requests (in millicores) for pending deployments in this instance config.
      • running_memory_requested : integer
        The sum of memory requests (in MB) for running deployments in this instance config.
      • running_cpu_requested : integer
        The sum of cpu requests (in millicores) for running deployments in this instance config.
      • pending_deployments : integer
        The number of pending deployments in this instance config.
      • running_deployments : integer
        The number of running deployments in this instance config.
  • default_instance_config_id : integer

    The id of the InstanceConfig that is the default for this partition.

is_nat_enabled : boolean

Whether this cluster needs a NAT gateway or not.

hours : number/float

The number of hours used this month for this cluster.

patch_kubernetes_partitions(self, id, cluster_partition_id, *, instance_configs='DEFAULT', name='DEFAULT', labels='DEFAULT')

Update a Cluster Partition

Parameters:
id : integer

The ID of the cluster which this partition belongs to.

cluster_partition_id : integer

The ID of this cluster partition.

instance_configs : list, optional::

The instances configured for this cluster partition.
  • instance_type : string
    An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.

  • min_instances : integer
    The minimum number of instances of that type in this cluster.
  • max_instances : integer
    The maximum number of instances of that type in this cluster.
name : string, optional

The name of the cluster partition.

labels : list, optional

Labels associated with this partition.

Returns:
cluster_partition_id : integer

The ID of this cluster partition.

name : string

The name of the cluster partition.

labels : list

Labels associated with this partition.

instance_configs : list::

The instances configured for this cluster partition.
  • instance_config_id : integer
    The ID of this InstanceConfig.

  • instance_type : string
    An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
  • min_instances : integer
    The minimum number of instances of that type in this cluster.
  • max_instances : integer
    The maximum number of instances of that type in this cluster.
  • instance_max_memory : integer
    The amount of memory (RAM) available to a single instance of that type in megabytes.
  • instance_max_cpu : integer
    The number of processor shares available to a single instance of that type in millicores.
  • instance_max_disk : integer
    The amount of disk available to a single instance of that type in gigabytes.
  • usage_stats : dict::
    • pending_memory_requested : integer
      The sum of memory requests (in MB) for pending deployments in this instance config.
    • pending_cpu_requested : integer
      The sum of cpu requests (in millicores) for pending deployments in this instance config.
    • running_memory_requested : integer
      The sum of memory requests (in MB) for running deployments in this instance config.
    • running_cpu_requested : integer
      The sum of cpu requests (in millicores) for running deployments in this instance config.
    • pending_deployments : integer
      The number of pending deployments in this instance config.
    • running_deployments : integer
      The number of running deployments in this instance config.
default_instance_config_id : integer

The id of the InstanceConfig that is the default for this partition.

post_kubernetes(self, *, organization_id='DEFAULT', organization_slug='DEFAULT', is_nat_enabled='DEFAULT')

Create a Kubernetes Cluster

Parameters:
organization_id : string, optional

The id of this cluster’s organization.

organization_slug : string, optional

The slug of this cluster’s organization.

is_nat_enabled : boolean, optional

Whether this cluster needs a NAT gateway or not.

Returns:
id : integer

The ID of this cluster.

organization_id : string

The id of this cluster’s organization.

organization_name : string

The name of this cluster’s organization.

organization_slug : string

The slug of this cluster’s organization.

custom_partitions : boolean

Whether this cluster has a custom partition configuration.

cluster_partitions : list::

List of cluster partitions associated with this cluster.
  • cluster_partition_id : integer
    The ID of this cluster partition.

  • name : string

    The name of the cluster partition.

  • labels : list

    Labels associated with this partition.

  • instance_configs : list::

    The instances configured for this cluster partition.
    • instance_config_id : integer
      The ID of this InstanceConfig.

    • instance_type : string
      An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
    • min_instances : integer
      The minimum number of instances of that type in this cluster.
    • max_instances : integer
      The maximum number of instances of that type in this cluster.
    • instance_max_memory : integer
      The amount of memory (RAM) available to a single instance of that type in megabytes.
    • instance_max_cpu : integer
      The number of processor shares available to a single instance of that type in millicores.
    • instance_max_disk : integer
      The amount of disk available to a single instance of that type in gigabytes.
    • usage_stats : dict::
      • pending_memory_requested : integer
        The sum of memory requests (in MB) for pending deployments in this instance config.
      • pending_cpu_requested : integer
        The sum of cpu requests (in millicores) for pending deployments in this instance config.
      • running_memory_requested : integer
        The sum of memory requests (in MB) for running deployments in this instance config.
      • running_cpu_requested : integer
        The sum of cpu requests (in millicores) for running deployments in this instance config.
      • pending_deployments : integer
        The number of pending deployments in this instance config.
      • running_deployments : integer
        The number of running deployments in this instance config.
  • default_instance_config_id : integer

    The id of the InstanceConfig that is the default for this partition.

is_nat_enabled : boolean

Whether this cluster needs a NAT gateway or not.

hours : number/float

The number of hours used this month for this cluster.

post_kubernetes_partitions(self, id, instance_configs, name, labels)

Create a Cluster Partition for given cluster

Parameters:
id : integer

The ID of the cluster which this partition belongs to.

instance_configs : list::

The instances configured for this cluster partition.
  • instance_type : string
    An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.

  • min_instances : integer
    The minimum number of instances of that type in this cluster.
  • max_instances : integer
    The maximum number of instances of that type in this cluster.
name : string

The name of the cluster partition.

labels : list

Labels associated with this partition.

Returns:
cluster_partition_id : integer

The ID of this cluster partition.

name : string

The name of the cluster partition.

labels : list

Labels associated with this partition.

instance_configs : list::

The instances configured for this cluster partition.
  • instance_config_id : integer
    The ID of this InstanceConfig.

  • instance_type : string
    An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, c5.18xlarge, and p2.xlarge.
  • min_instances : integer
    The minimum number of instances of that type in this cluster.
  • max_instances : integer
    The maximum number of instances of that type in this cluster.
  • instance_max_memory : integer
    The amount of memory (RAM) available to a single instance of that type in megabytes.
  • instance_max_cpu : integer
    The number of processor shares available to a single instance of that type in millicores.
  • instance_max_disk : integer
    The amount of disk available to a single instance of that type in gigabytes.
  • usage_stats : dict::
    • pending_memory_requested : integer
      The sum of memory requests (in MB) for pending deployments in this instance config.
    • pending_cpu_requested : integer
      The sum of cpu requests (in millicores) for pending deployments in this instance config.
    • running_memory_requested : integer
      The sum of memory requests (in MB) for running deployments in this instance config.
    • running_cpu_requested : integer
      The sum of cpu requests (in millicores) for running deployments in this instance config.
    • pending_deployments : integer
      The number of pending deployments in this instance config.
    • running_deployments : integer
      The number of running deployments in this instance config.
default_instance_config_id : integer

The id of the InstanceConfig that is the default for this partition.
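
The `instance_configs` parameter takes a list of dicts with `instance_type`, `min_instances`, and `max_instances` keys. A sketch of building and sanity-checking that payload; the bounds check, and the `client.clusters` attribute in the commented call, are assumptions rather than part of the documented API:

```python
def make_instance_config(instance_type, min_instances, max_instances):
    """Build one entry for the `instance_configs` parameter."""
    if min_instances < 0 or max_instances < min_instances:
        raise ValueError("require 0 <= min_instances <= max_instances")
    return {"instance_type": instance_type,
            "min_instances": min_instances,
            "max_instances": max_instances}

configs = [make_instance_config("m4.xlarge", 1, 4),
           make_instance_config("c5.18xlarge", 0, 2)]

# client = civis.APIClient()
# client.clusters.post_kubernetes_partitions(
#     cluster_id, configs, "gpu-partition", ["gpu"])
```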

Credentials

class Credentials(session_kwargs, client, return_type='civis')

Methods

delete_shares_groups(self, id, group_id) Revoke the permissions a group has on this object
delete_shares_users(self, id, user_id) Revoke the permissions a user has on this object
get(self, id) Get a credential
list(self, \*[, type, remote_host_id, …]) List credentials
list_shares(self, id) List users and groups permissioned on this object
post(self, type, username, password, \*[, …]) Create a credential
post_authenticate(self, url, …) Authenticate against a remote host
post_temporary(self, id, \*[, duration]) Generate a temporary credential for accessing S3
put(self, id, type, username, password, \*) Update an existing credential
put_shares_groups(self, id, group_ids, …) Set the permissions groups have on this object
put_shares_users(self, id, user_ids, …[, …]) Set the permissions users have on this object
delete_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

get(self, id)

Get a credential

Parameters:
id : integer

The ID of the credential.

Returns:
id : integer

The ID of the credential.

name : string

The name identifying the credential.

type : string

The credential’s type.

username : string

The username for the credential.

description : string

A long description of the credential.

owner : string

The name of the user who this credential belongs to.

remote_host_id : integer

The ID of the remote host associated with this credential.

remote_host_name : string

The name of the remote host associated with this credential.

state : string

The U.S. state for the credential. Only for VAN credentials.

created_at : string/time

The creation time for this credential.

updated_at : string/time

The last modification time for this credential.

default : boolean

Whether or not the credential is a default. Only for Database credentials.

list(self, *, type='DEFAULT', remote_host_id='DEFAULT', default='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List credentials

Parameters:
type : string, optional

The type (or types) of credentials to return. One or more of: Amazon Web Services S3, Bitbucket, CASS/NCOA PAF, Certificate, Civis Platform, Custom, Database, Google, Github, Salesforce User, Salesforce Client, and TableauUser. Specify multiple values as a comma-separated list (e.g., “A,B”).

remote_host_id : integer, optional

The ID of the remote host associated with the credentials to return.

default : boolean, optional

If true, returns a list containing a single credential: the current user’s default credential.

limit : integer, optional

Number of results to return. Defaults to its maximum of 1000.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, created_at, name.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending). Defaults to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of the credential.

name : string

The name identifying the credential.

type : string

The credential’s type.

username : string

The username for the credential.

description : string

A long description of the credential.

owner : string

The name of the user who this credential belongs to.

remote_host_id : integer

The ID of the remote host associated with this credential.

remote_host_name : string

The name of the remote host associated with this credential.

state : string

The U.S. state for the credential. Only for VAN credentials.

created_at : string/time

The creation time for this credential.

updated_at : string/time

The last modification time for this credential.

default : boolean

Whether or not the credential is a default. Only for Database credentials.
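
Passing `iterator=True` makes the client page through every result for you; with `limit` and `page_num` you page manually. The manual loop can be sketched generically; `fetch_page` here is a hypothetical stand-in for `client.credentials.list`:

```python
def iterate_all(fetch_page, limit=1000):
    """Yield every record by requesting successive pages until a
    short (or empty) page signals the end of the result set."""
    page_num = 1
    while True:
        page = fetch_page(limit=limit, page_num=page_num)
        yield from page
        if len(page) < limit:
            return
        page_num += 1

# Stand-in for client.credentials.list: 5 records, 2 per page.
records = [{"id": i} for i in range(5)]

def fetch_page(limit, page_num):
    start = (page_num - 1) * limit
    return records[start:start + limit]

ids = [r["id"] for r in iterate_all(fetch_page, limit=2)]
print(ids)  # [0, 1, 2, 3, 4]
```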

list_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the total number of users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the total number of groups shared. For writers and readers, the number of visible groups shared.
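
The `readers`/`writers`/`owners` structure flattens naturally into a `{user_id: permission_level}` view. A sketch over a hypothetical response fragment with the shape above; treating manage > write > read as the precedence order is an assumption:

```python
def user_permissions(shares):
    """Flatten a list_shares response into {user_id: level},
    keeping the highest level when a user appears more than once."""
    rank = {"read": 0, "write": 1, "manage": 2}
    levels = {}
    for key, level in (("readers", "read"), ("writers", "write"),
                       ("owners", "manage")):
        for user in shares.get(key, {}).get("users", []):
            current = levels.get(user["id"])
            if current is None or rank[level] > rank[current]:
                levels[user["id"]] = level
    return levels

shares = {  # hypothetical response fragment
    "readers": {"users": [{"id": 1, "name": "a"}], "groups": []},
    "writers": {"users": [{"id": 2, "name": "b"}], "groups": []},
    "owners": {"users": [{"id": 1, "name": "a"}], "groups": []},
}
print(user_permissions(shares))  # {1: 'manage', 2: 'write'}
```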

post(self, type, username, password, *, name='DEFAULT', description='DEFAULT', remote_host_id='DEFAULT', state='DEFAULT', system_credential='DEFAULT', default='DEFAULT')

Create a credential

Parameters:
type : string
username : string

The username for the credential.

password : string

The password for the credential.

name : string, optional

The name identifying the credential.

description : string, optional

A long description of the credential.

remote_host_id : integer, optional

The ID of the remote host associated with the credential.

state : string, optional

The U.S. state for the credential. Only for VAN credentials.

system_credential : boolean, optional
default : boolean, optional

Whether or not the credential is a default. Only for Database credentials.

Returns:
id : integer

The ID of the credential.

name : string

The name identifying the credential.

type : string

The credential’s type.

username : string

The username for the credential.

description : string

A long description of the credential.

owner : string

The name of the user who this credential belongs to.

remote_host_id : integer

The ID of the remote host associated with this credential.

remote_host_name : string

The name of the remote host associated with this credential.

state : string

The U.S. state for the credential. Only for VAN credentials.

created_at : string/time

The creation time for this credential.

updated_at : string/time

The last modification time for this credential.

default : boolean

Whether or not the credential is a default. Only for Database credentials.

post_authenticate(self, url, remote_host_type, username, password)

Authenticate against a remote host

Parameters:
url : string

The URL to your host.

remote_host_type : string

The type of remote host. One of: RemoteHostTypes::Bitbucket, RemoteHostTypes::GitSSH, RemoteHostTypes::Github, RemoteHostTypes::GoogleDoc, RemoteHostTypes::JDBC, RemoteHostTypes::Postgres, RemoteHostTypes::Redshift, RemoteHostTypes::S3Storage, and RemoteHostTypes::Salesforce.

username : string

The username for the credential.

password : string

The password for the credential.

Returns:
id : integer

The ID of the credential.

name : string

The name identifying the credential.

type : string

The credential’s type.

username : string

The username for the credential.

description : string

A long description of the credential.

owner : string

The name of the user who this credential belongs to.

remote_host_id : integer

The ID of the remote host associated with this credential.

remote_host_name : string

The name of the remote host associated with this credential.

state : string

The U.S. state for the credential. Only for VAN credentials.

created_at : string/time

The creation time for this credential.

updated_at : string/time

The last modification time for this credential.

default : boolean

Whether or not the credential is a default. Only for Database credentials.

post_temporary(self, id, *, duration='DEFAULT')

Generate a temporary credential for accessing S3

Parameters:
id : integer

The ID of the credential.

duration : integer, optional

The number of seconds the temporary credential should be valid. Defaults to 15 minutes. Must not be less than 15 minutes or greater than 36 hours.

Returns:
access_key : string

The identifier of the credential.

secret_access_key : string

The secret part of the credential.

session_token : string

The session token identifier.
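
`duration` is expressed in seconds and must fall between 15 minutes and 36 hours. A sketch of validating it up front; the commented boto3 call illustrates one assumed downstream use of the returned keys:

```python
MIN_DURATION = 15 * 60        # 15 minutes, in seconds
MAX_DURATION = 36 * 60 * 60   # 36 hours, in seconds

def check_duration(seconds):
    """Validate the `duration` parameter documented above."""
    if not MIN_DURATION <= seconds <= MAX_DURATION:
        raise ValueError(
            f"duration must be in [{MIN_DURATION}, {MAX_DURATION}] seconds")
    return seconds

check_duration(3600)  # one hour: OK

# creds = client.credentials.post_temporary(credential_id, duration=3600)
# boto3.session.Session(
#     aws_access_key_id=creds.access_key,
#     aws_secret_access_key=creds.secret_access_key,
#     aws_session_token=creds.session_token)
```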

put(self, id, type, username, password, *, name='DEFAULT', description='DEFAULT', remote_host_id='DEFAULT', state='DEFAULT', system_credential='DEFAULT', default='DEFAULT')

Update an existing credential

Parameters:
id : integer

The ID of the credential.

type : string
username : string

The username for the credential.

password : string

The password for the credential.

name : string, optional

The name identifying the credential.

description : string, optional

A long description of the credential.

remote_host_id : integer, optional

The ID of the remote host associated with the credential.

state : string, optional

The U.S. state for the credential. Only for VAN credentials.

system_credential : boolean, optional
default : boolean, optional

Whether or not the credential is a default. Only for Database credentials.

Returns:
id : integer

The ID of the credential.

name : string

The name identifying the credential.

type : string

The credential’s type.

username : string

The username for the credential.

description : string

A long description of the credential.

owner : string

The name of the user who this credential belongs to.

remote_host_id : integer

The ID of the remote host associated with this credential.

remote_host_name : string

The name of the remote host associated with this credential.

state : string

The U.S. state for the credential. Only for VAN credentials.

created_at : string/time

The creation time for this credential.

updated_at : string/time

The last modification time for this credential.

default : boolean

Whether or not the credential is a default. Only for Database credentials.

put_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the total number of users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the total number of groups shared. For writers and readers, the number of visible groups shared.

put_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the total number of users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the total number of groups shared. For writers and readers, the number of visible groups shared.

Databases

class Databases(session_kwargs, client, return_type='civis')

Methods

delete_whitelist_ips(self, id, whitelisted_ip_id) Remove a whitelisted IP address
get(self, id) Show database information
get_whitelist_ips(self, id, whitelisted_ip_id) View details about a whitelisted IP
list(self) List databases
list_advanced_settings(self, id) Get the advanced settings for this database
list_schemas(self, id) List schemas in this database
list_whitelist_ips(self, id) List whitelisted IPs for the specified database
patch_advanced_settings(self, id, \*[, …]) Update the advanced settings for this database
post_schemas_scan(self, id, schema, \*[, …]) Creates and enqueues a schema scanner job
post_whitelist_ips(self, id, subnet_mask) Whitelist an IP address
put_advanced_settings(self, id, …) Edit the advanced settings for this database
delete_whitelist_ips(self, id, whitelisted_ip_id)

Remove a whitelisted IP address

Parameters:
id : integer

The ID of the database this rule is applied to.

whitelisted_ip_id : integer

The ID of this whitelisted IP address.

Returns:
None

Response code 204: success

get(self, id)

Show database information

Parameters:
id : integer

The ID for the database.

Returns:
id : integer

The ID for the database.

name : string

The name of the database.

adapter : string

The type of the database.

get_whitelist_ips(self, id, whitelisted_ip_id)

View details about a whitelisted IP

Parameters:
id : integer

The ID of the database this rule is applied to.

whitelisted_ip_id : integer

The ID of this whitelisted IP address.

Returns:
id : integer

The ID of this whitelisted IP address.

remote_host_id : integer

The ID of the database this rule is applied to.

security_group_id : string

The ID of the security group this rule is applied to.

subnet_mask : string

The subnet mask that is allowed by this rule.

authorized_by : string

The user who authorized this rule.

is_active : boolean

True if the rule is applied, false if it has been revoked.

created_at : string/time

The time this rule was created.

updated_at : string/time

The time this rule was last updated.

list(self)

List databases

Returns:
id : integer

The ID for the database.

name : string

The name of the database.

adapter : string

The type of the database.

list_advanced_settings(self, id)

Get the advanced settings for this database

Parameters:
id : integer

The ID of the database this advanced settings object belongs to.

Returns:
export_caching_enabled : boolean

Whether or not caching is enabled for export jobs run on this database server.

list_schemas(self, id)

List schemas in this database

Parameters:
id : integer

The ID of the database.

Returns:
schema : string

The name of a schema.

list_whitelist_ips(self, id)

List whitelisted IPs for the specified database

Parameters:
id : integer

The ID for the database.

Returns:
id : integer

The ID of this whitelisted IP address.

remote_host_id : integer

The ID of the database this rule is applied to.

security_group_id : string

The ID of the security group this rule is applied to.

subnet_mask : string

The subnet mask that is allowed by this rule.

created_at : string/time

The time this rule was created.

updated_at : string/time

The time this rule was last updated.

patch_advanced_settings(self, id, *, export_caching_enabled='DEFAULT')

Update the advanced settings for this database

Parameters:
id : integer

The ID of the database this advanced settings object belongs to.

export_caching_enabled : boolean, optional

Whether or not caching is enabled for export jobs run on this database server.

Returns:
export_caching_enabled : boolean

Whether or not caching is enabled for export jobs run on this database server.

post_schemas_scan(self, id, schema, *, stats_priority='DEFAULT')

Creates and enqueues a schema scanner job

Parameters:
id : integer

The ID of the database.

schema : string

The name of the schema.

stats_priority : string, optional

When to sync table statistics for every table in the schema. Valid options are: ‘flag’, which flags stats for the next scheduled full table scan on the database; ‘block’, which blocks this job on stats syncing; and ‘queue’, which queues a separate job for syncing stats without blocking this job on it. Defaults to ‘flag’.

Returns:
job_id : integer

The ID of the job created.

run_id : integer

The ID of the run created.
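
The returned `job_id` and `run_id` can be polled until the scan reaches a terminal state. A generic polling sketch; the terminal state names, and the idea that `get_run_state` wraps a run-status call (e.g. via `client.jobs`), are assumptions:

```python
import itertools
import time

def wait_for_run(get_run_state, poll_interval=0.0):
    """Poll a run until it reaches a terminal state and return it."""
    terminal = {"succeeded", "failed", "cancelled"}
    while True:
        state = get_run_state()
        if state in terminal:
            return state
        time.sleep(poll_interval)

# Stand-in state sequence for a scan that eventually succeeds:
states = itertools.chain(["queued", "running"], itertools.repeat("succeeded"))
print(wait_for_run(lambda: next(states)))  # succeeded
```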

post_whitelist_ips(self, id, subnet_mask)

Whitelist an IP address

Parameters:
id : integer

The ID of the database this rule is applied to.

subnet_mask : string

The subnet mask that is allowed by this rule.

Returns:
id : integer

The ID of this whitelisted IP address.

remote_host_id : integer

The ID of the database this rule is applied to.

security_group_id : string

The ID of the security group this rule is applied to.

subnet_mask : string

The subnet mask that is allowed by this rule.

authorized_by : string

The user who authorized this rule.

is_active : boolean

True if the rule is applied, false if it has been revoked.

created_at : string/time

The time this rule was created.

updated_at : string/time

The time this rule was last updated.
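
Assuming `subnet_mask` is given in CIDR notation (e.g. `203.0.113.0/24`; the exact expected format is an assumption here), the standard library can validate it before calling the endpoint:

```python
import ipaddress

def validate_subnet_mask(subnet_mask):
    """Raise ValueError if `subnet_mask` is not a valid CIDR network."""
    return str(ipaddress.ip_network(subnet_mask, strict=True))

print(validate_subnet_mask("203.0.113.0/24"))  # 203.0.113.0/24

# client.databases.post_whitelist_ips(database_id, "203.0.113.0/24")
```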

put_advanced_settings(self, id, export_caching_enabled)

Edit the advanced settings for this database

Parameters:
id : integer

The ID of the database this advanced settings object belongs to.

export_caching_enabled : boolean

Whether or not caching is enabled for export jobs run on this database server.

Returns:
export_caching_enabled : boolean

Whether or not caching is enabled for export jobs run on this database server.

Endpoints

class Endpoints(session_kwargs, client, return_type='civis')

Methods

list(self) List API endpoints
list(self)

List API endpoints

Returns:
None

Response code 200: success

Enhancements

class Enhancements(session_kwargs, client, return_type='civis')

Methods

delete_cass_ncoa_projects(self, id, project_id) Remove a CASS/NCOA Enhancement from a project
delete_cass_ncoa_runs(self, id, run_id) Cancel a run
delete_cass_ncoa_shares_groups(self, id, …) Revoke the permissions a group has on this object
delete_cass_ncoa_shares_users(self, id, user_id) Revoke the permissions a user has on this object
delete_civis_data_match_projects(self, id, …) Remove a Civis Data Match Enhancement from a project
delete_civis_data_match_runs(self, id, run_id) Cancel a run
delete_civis_data_match_shares_groups(self, …) Revoke the permissions a group has on this object
delete_civis_data_match_shares_users(self, …) Revoke the permissions a user has on this object
delete_geocode_projects(self, id, project_id) Remove a Geocode Enhancement from a project
delete_geocode_runs(self, id, run_id) Cancel a run
delete_geocode_shares_groups(self, id, group_id) Revoke the permissions a group has on this object
delete_geocode_shares_users(self, id, user_id) Revoke the permissions a user has on this object
get_cass_ncoa(self, id) Get a CASS/NCOA Enhancement
get_cass_ncoa_runs(self, id, run_id) Check status of a run
get_civis_data_match(self, id) Get a Civis Data Match Enhancement
get_civis_data_match_runs(self, id, run_id) Check status of a run
get_geocode(self, id) Get a Geocode Enhancement
get_geocode_runs(self, id, run_id) Check status of a run
list(self, \*[, type, author, status, …]) List Enhancements
list_cass_ncoa_projects(self, id, \*[, hidden]) List the projects a CASS/NCOA Enhancement belongs to
list_cass_ncoa_runs(self, id, \*[, limit, …]) List runs for the given cass_ncoa
list_cass_ncoa_runs_logs(self, id, run_id, \*) Get the logs for a run
list_cass_ncoa_runs_outputs(self, id, run_id, \*) List the outputs for a run
list_cass_ncoa_shares(self, id) List users and groups permissioned on this object
list_civis_data_match_projects(self, id, \*) List the projects a Civis Data Match Enhancement belongs to
list_civis_data_match_runs(self, id, \*[, …]) List runs for the given civis_data_match
list_civis_data_match_runs_logs(self, id, …) Get the logs for a run
list_civis_data_match_runs_outputs(self, id, …) List the outputs for a run
list_civis_data_match_shares(self, id) List users and groups permissioned on this object
list_field_mapping(self) List the fields in a field mapping for Civis Data Match, Data Unification, and Table Deduplication jobs
list_geocode_projects(self, id, \*[, hidden]) List the projects a Geocode Enhancement belongs to
list_geocode_runs(self, id, \*[, limit, …]) List runs for the given geocode
list_geocode_runs_logs(self, id, run_id, \*) Get the logs for a run
list_geocode_runs_outputs(self, id, run_id, \*) List the outputs for a run
list_geocode_shares(self, id) List users and groups permissioned on this object
list_types(self) List available enhancement types
patch_cass_ncoa(self, id, \*[, name, …]) Update some attributes of this CASS/NCOA Enhancement
patch_civis_data_match(self, id, \*[, name, …]) Update some attributes of this Civis Data Match Enhancement
patch_geocode(self, id, \*[, name, …]) Update some attributes of this Geocode Enhancement
post_cass_ncoa(self, name, source, \*[, …]) Create a CASS/NCOA Enhancement
post_cass_ncoa_cancel(self, id) Cancel a run
post_cass_ncoa_runs(self, id) Start a run
post_civis_data_match(self, name, …[, …]) Create a Civis Data Match Enhancement
post_civis_data_match_cancel(self, id) Cancel a run
post_civis_data_match_clone(self, id, \*[, …]) Clone this Civis Data Match Enhancement
post_civis_data_match_runs(self, id) Start a run
post_geocode(self, name, remote_host_id, …) Create a Geocode Enhancement
post_geocode_cancel(self, id) Cancel a run
post_geocode_runs(self, id) Start a run
put_cass_ncoa(self, id, name, source, \*[, …]) Replace all attributes of this CASS/NCOA Enhancement
put_cass_ncoa_archive(self, id, status) Update the archive status of this object
put_cass_ncoa_projects(self, id, project_id) Add a CASS/NCOA Enhancement to a project
put_cass_ncoa_shares_groups(self, id, …[, …]) Set the permissions groups have on this object
put_cass_ncoa_shares_users(self, id, …[, …]) Set the permissions users have on this object
put_civis_data_match(self, id, name, …[, …]) Replace all attributes of this Civis Data Match Enhancement
put_civis_data_match_archive(self, id, status) Update the archive status of this object
put_civis_data_match_projects(self, id, …) Add a Civis Data Match Enhancement to a project
put_civis_data_match_shares_groups(self, id, …) Set the permissions groups have on this object
put_civis_data_match_shares_users(self, id, …) Set the permissions users have on this object
put_geocode(self, id, name, remote_host_id, …) Replace all attributes of this Geocode Enhancement
put_geocode_archive(self, id, status) Update the archive status of this object
put_geocode_projects(self, id, project_id) Add a Geocode Enhancement to a project
put_geocode_shares_groups(self, id, …[, …]) Set the permissions groups have on this object
put_geocode_shares_users(self, id, user_ids, …) Set the permissions users have on this object
delete_cass_ncoa_projects(self, id, project_id)

Remove a CASS/NCOA Enhancement from a project

Parameters:
id : integer

The ID of the CASS/NCOA Enhancement.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

delete_cass_ncoa_runs(self, id, run_id)

Cancel a run

Parameters:
id : integer

The ID of the cass_ncoa.

run_id : integer

The ID of the run.

Returns:
None

Response code 202: success

delete_cass_ncoa_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_cass_ncoa_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

delete_civis_data_match_projects(self, id, project_id)

Remove a Civis Data Match Enhancement from a project

Parameters:
id : integer

The ID of the Civis Data Match Enhancement.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

delete_civis_data_match_runs(self, id, run_id)

Cancel a run

Parameters:
id : integer

The ID of the civis_data_match.

run_id : integer

The ID of the run.

Returns:
None

Response code 202: success

delete_civis_data_match_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_civis_data_match_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

delete_geocode_projects(self, id, project_id)

Remove a Geocode Enhancement from a project

Parameters:
id : integer

The ID of the Geocode Enhancement.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

delete_geocode_runs(self, id, run_id)

Cancel a run

Parameters:
id : integer

The ID of the geocode.

run_id : integer

The ID of the run.

Returns:
None

Response code 202: success

delete_geocode_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_geocode_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

get_cass_ncoa(self, id)

Get a CASS/NCOA Enhancement

Parameters:
id : integer
Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week on which to run, as numeric values starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
destination : dict::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean

Defaults to true, where the existing column mapping on the input table will be used. If false, a custom column mapping must be provided.

perform_ncoa : boolean

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

archived : string

The archival status of the requested item(s).
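
To make the field layout above concrete, here is an illustrative dictionary shaped like the documented response, with a couple of fields read back. All values are invented for the example, not taken from a live API.

```python
# Illustrative payload shaped like the documented get_cass_ncoa response;
# every value here is made up for the example.
enhancement = {
    "id": 123,
    "name": "Monthly address hygiene",
    "type": "CASS-NCOA",
    "state": "succeeded",
    "source": {
        "database_table": {
            "schema": "staging",
            "table": "contacts",
            "remote_host_id": 1,
            "credential_id": 10,
            "multipart_key": ["contact_id"],
        }
    },
    "destination": {"database_table": {"schema": "clean", "table": "contacts_cass"}},
    "perform_ncoa": True,
    "output_level": "coalesced",
}

# Read the source table reference back out of the nested structure.
src = enhancement["source"]["database_table"]
source_ref = f'{src["schema"]}.{src["table"]}'  # "staging.contacts"
```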

get_cass_ncoa_runs(self, id, run_id)

Check status of a run

Parameters:
id : integer

The ID of the cass_ncoa.

run_id : integer

The ID of the run.

Returns:
id : integer

The ID of the run.

cass_ncoa_id : integer

The ID of the cass_ncoa.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.
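
Callers typically poll this endpoint until the run reaches one of the terminal states listed above. A minimal sketch, with a stub standing in for the real client.enhancements.get_cass_ncoa_runs call (the stub and helper names are ours, not part of the client):

```python
# Terminal states per the documented run-state values.
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def wait_for_run(get_run, id, run_id, pause=lambda: None):
    """Poll get_run(id, run_id) until the run reaches a terminal state."""
    while True:
        run = get_run(id, run_id)
        if run["state"] in TERMINAL_STATES:
            return run
        pause()  # in real code: time.sleep(...)

# Stub standing in for client.enhancements.get_cass_ncoa_runs:
_states = iter(["queued", "running", "succeeded"])
def fake_get_run(id, run_id):
    return {"id": run_id, "cass_ncoa_id": id, "state": next(_states)}

final = wait_for_run(fake_get_run, 123, 456)  # final["state"] == "succeeded"
```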

get_civis_data_match(self, id)

Get a Civis Data Match Enhancement

Parameters:
id : integer
Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week on which to run, as numeric values starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean

Whether the Civis Data Match Job has been archived.

last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
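
The documented bounds on max_matches (between 0 and 10, where 0 returns all matches) and threshold (between 0 and 1, default 0.5) can be checked client-side before creating a job. A sketch; the validate_match_params helper is illustrative, not part of the client:

```python
def validate_match_params(max_matches=0, threshold=0.5):
    """Check Civis Data Match parameters against the documented ranges.

    This helper is illustrative; the defaults mirror the documented ones
    (0 = all matches, threshold default 0.5).
    """
    if not 0 <= max_matches <= 10:
        raise ValueError("max_matches must be between 0 and 10 (0 = all matches)")
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("threshold must be between 0 and 1")
    return {"max_matches": max_matches, "threshold": threshold}

params = validate_match_params(max_matches=3, threshold=0.8)
```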

get_civis_data_match_runs(self, id, run_id)

Check status of a run

Parameters:
id : integer

The ID of the civis_data_match.

run_id : integer

The ID of the run.

Returns:
id : integer

The ID of the run.

civis_data_match_id : integer

The ID of the civis_data_match.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

get_geocode(self, id)

Get a Geocode Enhancement

Parameters:
id : integer
Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week on which to run, as numeric values starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

multipart_key : list

The source table primary key.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string

The output table schema.

target_table : string

The output table name.

country : string

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string

The geocoding provider; one of postgis, nominatim, or geocoder_ca.

output_address : boolean

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

archived : string

The archival status of the requested item(s).
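
The documented value sets for country (‘us’ or ‘ca’) and provider (postgis, nominatim, geocoder_ca) lend themselves to a client-side check before creating a Geocode Enhancement. The helper below is illustrative, not part of the client:

```python
# Value sets taken from the documented country and provider fields.
VALID_COUNTRIES = {"us", "ca"}
VALID_PROVIDERS = {"postgis", "nominatim", "geocoder_ca"}

def check_geocode_config(country, provider):
    """Validate country and provider against the documented value sets.

    Illustrative helper only; the API performs its own validation.
    """
    if country not in VALID_COUNTRIES:
        raise ValueError(f"country must be one of {sorted(VALID_COUNTRIES)}")
    if provider not in VALID_PROVIDERS:
        raise ValueError(f"provider must be one of {sorted(VALID_PROVIDERS)}")
    return True

ok = check_geocode_config("us", "postgis")  # True
```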

get_geocode_runs(self, id, run_id)

Check status of a run

Parameters:
id : integer

The ID of the geocode.

run_id : integer

The ID of the run.

Returns:
id : integer

The ID of the run.

geocode_id : integer

The ID of the geocode.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

list(self, *, type='DEFAULT', author='DEFAULT', status='DEFAULT', archived='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List Enhancements

Parameters:
type : string, optional

If specified, return items of these types.

author : string, optional

If specified, return items from this author. Must use user IDs. A comma-separated list of IDs is also accepted, to return items from multiple authors.

status : string, optional

If specified, returns items with one of these statuses. It accepts a comma-separated list; possible values are ‘running’, ‘failed’, ‘succeeded’, ‘idle’, ‘scheduled’.

archived : string, optional

The archival status of the requested item(s).

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, name, created_at, last_run.updated_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g CASS-NCOA)

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

archived : string

The archival status of the requested item(s).
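
The limit/page_num/iterator semantics above can be illustrated with a local stand-in for the endpoint. Here fake_list and iterate_all are ours, mimicking the documented behavior; a real call would go through client.enhancements.list:

```python
# 45 fake enhancements, enough to span several pages.
ITEMS = [{"id": i} for i in range(1, 46)]

def fake_list(limit=20, page_num=1):
    """Mimic the documented paging: limit defaults to 20, capped at 50."""
    limit = min(limit, 50)
    start = (page_num - 1) * limit
    return ITEMS[start:start + limit]

def iterate_all(page_size=20):
    """What iterator=True does conceptually: walk pages until exhausted."""
    page_num = 1
    while True:
        page = fake_list(limit=page_size, page_num=page_num)
        if not page:
            return
        yield from page
        page_num += 1

first_page = fake_list()              # 20 items (default page size)
last_page = fake_list(page_num=3)     # 5 items (final partial page)
total = sum(1 for _ in iterate_all()) # 45, all results across pages
```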

list_cass_ncoa_projects(self, id, *, hidden='DEFAULT')

List the projects a CASS/NCOA Enhancement belongs to

Parameters:
id : integer

The ID of the CASS/NCOA Enhancement.

hidden : boolean, optional

If true, returns hidden items. Defaults to false, returning non-hidden items.

Returns:
id : integer

The ID for this project.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
name : string

The name of this project.

description : string

A description of the project.

users : list::

Users who can see the project.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
auto_share : boolean
created_at : string/time
updated_at : string/time
archived : string

The archival status of the requested item(s).

list_cass_ncoa_runs(self, id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List runs for the given cass_ncoa

Parameters:
id : integer

The ID of the cass_ncoa.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 100.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of the run.

cass_ncoa_id : integer

The ID of the cass_ncoa.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

list_cass_ncoa_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the cass_ncoa.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of unknown, fatal, error, warn, info, debug.
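
The last_id cursor supports incremental log tailing: pass the highest ID already seen, and only newer entries come back. A sketch against a local stand-in for the endpoint (fake_runs_logs is ours; a real call would go through client.enhancements.list_cass_ncoa_runs_logs):

```python
# Fake log entries shaped like the documented response.
LOGS = [
    {"id": 1, "message": "queued", "level": "info"},
    {"id": 2, "message": "running", "level": "info"},
    {"id": 3, "message": "done", "level": "info"},
]

def fake_runs_logs(last_id=None, limit=10000):
    """Mimic the documented filter: drop entries with id <= last_id."""
    entries = LOGS if last_id is None else [e for e in LOGS if e["id"] > last_id]
    return entries[:limit]

first = fake_runs_logs()                      # all three entries
cursor = first[-1]["id"] if first else None   # remember the highest ID seen
later = fake_runs_logs(last_id=cursor)        # nothing new since id 3
```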

list_cass_ncoa_runs_outputs(self, id, run_id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List the outputs for a run

Parameters:
id : integer

The ID of the job.

run_id : integer

The ID of the run.

limit : integer, optional

Number of results to return. Defaults to its maximum of 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to created_at. Must be one of: created_at, id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
object_type : string

The type of the output. Valid values are File, Table, Report, Project, Credential, or JSONValue.

object_id : integer

The ID of the output.

name : string

The name of the output.

link : string

The hypermedia link to the output.

value : string

list_cass_ncoa_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
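
The readers/writers/owners structure above can be flattened into a per-user permission lookup. A sketch over an illustrative response (the values are invented, and the helper is ours):

```python
# Illustrative shares payload shaped like the documented response.
shares = {
    "readers": {"users": [{"id": 1, "name": "Ada"}], "groups": []},
    "writers": {"users": [], "groups": [{"id": 7, "name": "Data Team"}]},
    "owners":  {"users": [{"id": 2, "name": "Grace"}], "groups": []},
}

def user_permissions(shares):
    """Map user id -> highest listed role, scanning owners before writers
    before readers so the strongest role wins."""
    perms = {}
    for role in ("owners", "writers", "readers"):
        for user in shares[role]["users"]:
            perms.setdefault(user["id"], role)
    return perms

perms = user_permissions(shares)  # {2: "owners", 1: "readers"}
```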

list_civis_data_match_projects(self, id, *, hidden='DEFAULT')

List the projects a Civis Data Match Enhancement belongs to

Parameters:
id : integer

The ID of the Civis Data Match Enhancement.

hidden : boolean, optional

If true, returns hidden items. Defaults to false, returning non-hidden items.

Returns:
id : integer

The ID for this project.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
name : string

The name of this project.

description : string

A description of the project.

users : list::

Users who can see the project.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
auto_share : boolean
created_at : string/time
updated_at : string/time
archived : string

The archival status of the requested item(s).

list_civis_data_match_runs(self, id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List runs for the given civis_data_match

Parameters:
id : integer

The ID of the civis_data_match.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 100.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of the run.

civis_data_match_id : integer

The ID of the civis_data_match.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

list_civis_data_match_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the civis_data_match.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of unknown, fatal, error, warn, info, debug.

list_civis_data_match_runs_outputs(self, id, run_id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List the outputs for a run

Parameters:
id : integer

The ID of the job.

run_id : integer

The ID of the run.

limit : integer, optional

Number of results to return. Defaults to its maximum of 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to created_at. Must be one of: created_at, id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
object_type : string

The type of the output. Valid values are File, Table, Report, Project, Credential, or JSONValue.

object_id : integer

The ID of the output.

name : string

The name of the output.

link : string

The hypermedia link to the output.

value : string

list_civis_data_match_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

list_field_mapping(self)

List the fields in a field mapping for Civis Data Match, Data Unification, and Table Deduplication jobs

Returns:
field : string

The name of the field.

description : string

The description of the field.

list_geocode_projects(self, id, *, hidden='DEFAULT')

List the projects a Geocode Enhancement belongs to

Parameters:
id : integer

The ID of the Geocode Enhancement.

hidden : boolean, optional

If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.

Returns:
id : integer

The ID for this project.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
name : string

The name of this project.

description : string

A description of the project.

users : list::

Users who can see the project.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
auto_share : boolean
created_at : string/time
updated_at : string/time
archived : string

The archival status of the requested item(s).

list_geocode_runs(self, id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List runs for the given geocode

Parameters:
id : integer

The ID of the geocode.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 100.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending) defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of the run.

geocode_id : integer

The ID of the geocode.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.
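
Run records returned by list_geocode_runs can be filtered client-side by the documented state values. A small sketch over response-shaped dicts (sample data invented):

```python
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def failed_runs(runs):
    """Failed runs, most recent id first."""
    return sorted((r for r in runs if r["state"] == "failed"),
                  key=lambda r: r["id"], reverse=True)

def still_running(runs):
    """Runs that have not yet reached a terminal state."""
    return [r for r in runs if r["state"] not in TERMINAL_STATES]

# Invented sample runs.
runs = [
    {"id": 10, "state": "succeeded", "error": None},
    {"id": 11, "state": "failed", "error": "timeout"},
    {"id": 12, "state": "running", "error": None},
]
```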

list_geocode_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the geocode.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of unknown, fatal, error, warn, info, or debug.
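
The last_id parameter above supports incremental log fetching: pass the highest id already seen and the server omits older entries. A client-side sketch of that merge logic, assuming log entries are plain dicts with an id key:

```python
def next_last_id(entries):
    """The last_id to pass on the next call: the highest id seen so far."""
    return max((e["id"] for e in entries), default=0)

def merge_log_pages(seen, new_page):
    """Append only entries newer than anything seen, keeping id order."""
    cutoff = next_last_id(seen)
    merged = seen + [e for e in new_page if e["id"] > cutoff]
    return sorted(merged, key=lambda e: e["id"])
```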

list_geocode_runs_outputs(self, id, run_id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List the outputs for a run

Parameters:
id : integer

The ID of the job.

run_id : integer

The ID of the run.

limit : integer, optional

Number of results to return. Defaults to its maximum of 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to created_at. Must be one of: created_at, id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending) defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
object_type : string

The type of the output. Valid values are File, Table, Report, Project, Credential, or JSONValue

object_id : integer

The ID of the output.

name : string

The name of the output.

link : string

The hypermedia link to the output.

value : string
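
Run outputs share a common shape (object_type, object_id, name, link, value), so grouping them by type is straightforward. A sketch with invented sample outputs:

```python
from collections import defaultdict

def outputs_by_type(outputs):
    """Group run outputs by their object_type."""
    grouped = defaultdict(list)
    for out in outputs:
        grouped[out["object_type"]].append(out["object_id"])
    return dict(grouped)

# Invented sample outputs shaped like the Returns block above.
outputs = [
    {"object_type": "File", "object_id": 101, "name": "geocoded.csv"},
    {"object_type": "Table", "object_id": 55, "name": "schema.table"},
    {"object_type": "File", "object_id": 102, "name": "errors.csv"},
]
```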

list_geocode_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

list_types(self)

List available enhancement types

Returns:
name : string

The name of the type.

patch_cass_ncoa(self, id, *, name='DEFAULT', schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', source='DEFAULT', destination='DEFAULT', column_mapping='DEFAULT', use_default_column_mapping='DEFAULT', perform_ncoa='DEFAULT', ncoa_credential_id='DEFAULT', output_level='DEFAULT', limiting_sql='DEFAULT')

Update some attributes of this CASS/NCOA Enhancement

Parameters:
id : integer

The ID for the enhancement.

name : string, optional

The name of the enhancement job.

schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
source : dict, optional::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
destination : dict, optional::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict, optional::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean, optional

Defaults to true, where the existing column mapping on the input table will be used. If false, a custom column mapping must be provided.

perform_ncoa : boolean, optional

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer, optional

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string, optional

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string, optional

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
destination : dict::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean

Defaults to true, where the existing column mapping on the input table will be used. If false, a custom column mapping must be provided.

perform_ncoa : boolean

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

archived : string

The archival status of the requested item(s).
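
The schedule and column_mapping payloads accepted by patch_cass_ncoa are plain dicts. A hedged sketch of building them; the source column names are invented, and note the documented '+' convention for combining name columns:

```python
def daily_schedule(hour, minute=0):
    """Build a schedule dict that runs once a day, every day.

    scheduled_days uses 0 for Sunday, per the parameter docs;
    scheduled_runs_per_hour is the alternative to scheduled_minutes.
    """
    return {
        "scheduled": True,
        "scheduled_days": [0, 1, 2, 3, 4, 5, 6],
        "scheduled_hours": [hour],
        "scheduled_minutes": [minute],
    }

# Source column names here are invented for illustration.
column_mapping = {
    "address1": "addr_line_1",
    "city": "city",
    "state": "state",
    "zip": "postal_code",
    # Multiple source columns are combined with "+".
    "name": "first_name+last_name",
}
```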

patch_civis_data_match(self, id, *, name='DEFAULT', schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', input_field_mapping='DEFAULT', input_table='DEFAULT', match_target_id='DEFAULT', output_table='DEFAULT', max_matches='DEFAULT', threshold='DEFAULT', archived='DEFAULT')

Update some attributes of this Civis Data Match Enhancement

Parameters:
id : integer

The ID for the enhancement.

name : string, optional

The name of the enhancement job.

schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
input_field_mapping : dict, optional

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict, optional::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer, optional

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict, optional::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer, optional

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float, optional

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean, optional

Whether the Civis Data Match Job has been archived.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean

Whether the Civis Data Match Job has been archived.

last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
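
patch_civis_data_match documents hard bounds on max_matches (0 to 10, with 0 meaning all matches) and threshold (0 to 1). A client-side validation sketch of those documented bounds:

```python
def validate_match_params(max_matches, threshold):
    """Check the documented bounds before sending a Civis Data Match update.

    max_matches must be between 0 and 10 (0 returns all matches);
    threshold is a score between 0 and 1.
    """
    if not 0 <= max_matches <= 10:
        raise ValueError("max_matches must be between 0 and 10")
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("threshold must be between 0 and 1")
    return {"max_matches": max_matches, "threshold": threshold}
```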

patch_geocode(self, id, *, name='DEFAULT', schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', remote_host_id='DEFAULT', credential_id='DEFAULT', source_schema_and_table='DEFAULT', multipart_key='DEFAULT', limiting_sql='DEFAULT', target_schema='DEFAULT', target_table='DEFAULT', country='DEFAULT', provider='DEFAULT', output_address='DEFAULT')

Update some attributes of this Geocode Enhancement

Parameters:
id : integer

The ID for the enhancement.

name : string, optional

The name of the enhancement job.

schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
remote_host_id : integer, optional

The ID of the remote host.

credential_id : integer, optional

The ID of the remote host credential.

source_schema_and_table : string, optional

The source database schema and table.

multipart_key : list, optional

The source table primary key.

limiting_sql : string, optional

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string, optional

The output table schema.

target_table : string, optional

The output table name.

country : string, optional

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string, optional

The geocoding provider; one of postgis, nominatim, or geocoder_ca.

output_address : boolean, optional

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

multipart_key : list

The source table primary key.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string

The output table schema.

target_table : string

The output table name.

country : string

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string

The geocoding provider; one of postgis, nominatim, or geocoder_ca.

output_address : boolean

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

archived : string

The archival status of the requested item(s).

post_cass_ncoa(self, name, source, *, schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', destination='DEFAULT', column_mapping='DEFAULT', use_default_column_mapping='DEFAULT', perform_ncoa='DEFAULT', ncoa_credential_id='DEFAULT', output_level='DEFAULT', limiting_sql='DEFAULT')

Create a CASS/NCOA Enhancement

Parameters:
name : string

The name of the enhancement job.

source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
destination : dict, optional::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict, optional::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean, optional

Defaults to true, where the existing column mapping on the input table will be used. If false, a custom column mapping must be provided.

perform_ncoa : boolean, optional

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer, optional

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string, optional

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string, optional

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
destination : dict::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean

Defaults to true, in which case the existing column mapping on the input table is used. If false, a custom column mapping must be provided.

perform_ncoa : boolean

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

archived : string

The archival status of the requested item(s).

post_cass_ncoa_cancel(self, id)

Cancel a run

Parameters:
id : integer

The ID of the job.

Returns:
id : integer

The ID of the run.

state : string

The state of the run, one of ‘queued’, ‘running’ or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

post_cass_ncoa_runs(self, id)

Start a run

Parameters:
id : integer

The ID of the cass_ncoa.

Returns:
id : integer

The ID of the run.

cass_ncoa_id : integer

The ID of the cass_ncoa.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

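A run started with post_cass_ncoa_runs moves through the states listed above. The sketch below waits for a run to reach a terminal state; `fetch_state` is a stub standing in for a real status call against an authenticated client, so the loop can be shown without an API key. The state names come from this documentation.

```python
# States from which a run will not move again.
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def wait_for_run(fetch_state, max_polls=100):
    """Call fetch_state() until the run leaves 'queued'/'running'."""
    for _ in range(max_polls):
        state = fetch_state()
        if state in TERMINAL_STATES:
            return state
    raise TimeoutError("run did not finish within max_polls checks")

# Stub simulating a run that progresses through states.
states = iter(["queued", "running", "running", "succeeded"])
print(wait_for_run(lambda: next(states)))  # prints "succeeded"
```

In real use, `fetch_state` would wrap a run-status request (with a sleep between polls) and the terminal state would decide whether to raise on ‘failed’.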
post_civis_data_match(self, name, input_field_mapping, input_table, match_target_id, output_table, *, schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', max_matches='DEFAULT', threshold='DEFAULT', archived='DEFAULT')

Create a Civis Data Match Enhancement

Parameters:
name : string

The name of the enhancement job.

input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
max_matches : integer, optional

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float, optional

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean, optional

Whether the Civis Data Match Job has been archived.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean

Whether the Civis Data Match Job has been archived.

last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
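
The parameters above can be assembled as plain dicts. A sketch follows; every name and ID is hypothetical, while `threshold` and `max_matches` mirror the documented default (0.5) and range (0–10).

```python
# Hypothetical input and output tables for a Civis Data Match job.
input_table = {
    "database_name": "redshift-general",  # placeholder database name
    "schema": "staging",
    "table": "contacts",
}
output_table = {
    "database_name": "redshift-general",
    "schema": "enhanced",
    "table": "contacts_matched",
}
params = dict(
    name="contact match",
    input_field_mapping={"first_name": "fname"},  # see /enhancements/field_mapping
    input_table=input_table,
    match_target_id=123,  # see /match_targets for real IDs
    output_table=output_table,
    max_matches=1,        # 0 would return all matches
    threshold=0.5,        # documented default
)
assert 0 <= params["max_matches"] <= 10
```

With an authenticated client, the call would be `client.enhancements.post_civis_data_match(**params)`.
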
post_civis_data_match_cancel(self, id)

Cancel a run

Parameters:
id : integer

The ID of the job.

Returns:
id : integer

The ID of the run.

state : string

The state of the run, one of ‘queued’, ‘running’ or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

post_civis_data_match_clone(self, id, *, clone_schedule='DEFAULT', clone_triggers='DEFAULT', clone_notifications='DEFAULT')

Clone this Civis Data Match Enhancement

Parameters:
id : integer

The ID for the enhancement.

clone_schedule : boolean, optional

If true, also copy the schedule to the new enhancement.

clone_triggers : boolean, optional

If true, also copy the triggers to the new enhancement.

clone_notifications : boolean, optional

If true, also copy the notifications to the new enhancement.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean

Whether the Civis Data Match Job has been archived.

last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
post_civis_data_match_runs(self, id)

Start a run

Parameters:
id : integer

The ID of the civis_data_match.

Returns:
id : integer

The ID of the run.

civis_data_match_id : integer

The ID of the civis_data_match.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

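The creation endpoints above all accept the same schedule dict. The sketch below configures a job to run at 6:30 and 18:30 on weekdays; days are numeric with 0 = Sunday, so Monday–Friday is 1 through 5, and since scheduled_runs_per_hour is an alternative to scheduled_minutes, only one of the two should be set.

```python
# A schedule dict in the shape documented for these endpoints.
schedule = {
    "scheduled": True,
    "scheduled_days": [1, 2, 3, 4, 5],  # Monday through Friday
    "scheduled_hours": [6, 18],
    "scheduled_minutes": [30],
}
# scheduled_minutes is set, so scheduled_runs_per_hour is omitted.
assert "scheduled_runs_per_hour" not in schedule
```
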
post_geocode(self, name, remote_host_id, credential_id, source_schema_and_table, *, schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', multipart_key='DEFAULT', limiting_sql='DEFAULT', target_schema='DEFAULT', target_table='DEFAULT', country='DEFAULT', provider='DEFAULT', output_address='DEFAULT')

Create a Geocode Enhancement

Parameters:
name : string

The name of the enhancement job.

remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
multipart_key : list, optional

The source table primary key.

limiting_sql : string, optional

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string, optional

The output table schema.

target_table : string, optional

The output table name.

country : string, optional

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string, optional

The geocoding provider; one of ‘postgis’, ‘nominatim’, or ‘geocoder_ca’.

output_address : boolean, optional

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

multipart_key : list

The source table primary key.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string

The output table schema.

target_table : string

The output table name.

country : string

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string

The geocoding provider; one of ‘postgis’, ‘nominatim’, or ‘geocoder_ca’.

output_address : boolean

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

archived : string

The archival status of the requested item(s).

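A sketch of post_geocode parameters follows; the host ID, credential ID, and table names are placeholders. `country` must be ‘us’ or ‘ca’, and `provider` one of ‘postgis’, ‘nominatim’, or ‘geocoder_ca’.

```python
# Hypothetical arguments for creating a geocode enhancement.
geocode_params = dict(
    name="geocode mailing addresses",
    remote_host_id=10,    # placeholder remote host ID
    credential_id=20,     # placeholder credential ID
    source_schema_and_table="staging.addresses",
    target_schema="enhanced",
    target_table="addresses_geocoded",
    country="us",
    provider="postgis",
    output_address=True,  # parsed address is only guaranteed for postgis
)
assert geocode_params["country"] in ("us", "ca")
assert geocode_params["provider"] in ("postgis", "nominatim", "geocoder_ca")
```

With an authenticated client, this would become `client.enhancements.post_geocode(**geocode_params)`.
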
post_geocode_cancel(self, id)

Cancel a run

Parameters:
id : integer

The ID of the job.

Returns:
id : integer

The ID of the run.

state : string

The state of the run, one of ‘queued’, ‘running’ or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

post_geocode_runs(self, id)

Start a run

Parameters:
id : integer

The ID of the geocode.

Returns:
id : integer

The ID of the run.

geocode_id : integer

The ID of the geocode.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

put_cass_ncoa(self, id, name, source, *, schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', destination='DEFAULT', column_mapping='DEFAULT', use_default_column_mapping='DEFAULT', perform_ncoa='DEFAULT', ncoa_credential_id='DEFAULT', output_level='DEFAULT', limiting_sql='DEFAULT')

Replace all attributes of this CASS/NCOA Enhancement

Parameters:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
destination : dict, optional::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict, optional::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean, optional

Defaults to true, in which case the existing column mapping on the input table is used. If false, a custom column mapping must be provided.

perform_ncoa : boolean, optional

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer, optional

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string, optional

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string, optional

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
destination : dict::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean

Defaults to true, in which case the existing column mapping on the input table is used. If false, a custom column mapping must be provided.

perform_ncoa : boolean

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

archived : string

The archival status of the requested item(s).

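Because put_cass_ncoa replaces all attributes, optional parameters left unset do not keep their current values, so a full-replacement call should re-send everything to be preserved. A sketch follows; the IDs and names are hypothetical.

```python
# Hypothetical source dict for a full replacement of a CASS/NCOA job.
source = {
    "database_table": {
        "schema": "staging",
        "table": "mail_list",
        "remote_host_id": 10,   # placeholder host ID
        "credential_id": 20,    # placeholder credential ID
    }
}
assert source["database_table"]["table"] == "mail_list"
# With an authenticated client, perform_ncoa and its credential are passed
# explicitly so the replacement does not reset them:
# client.enhancements.put_cass_ncoa(
#     99, "mail list enhancement", source,
#     perform_ncoa=True, ncoa_credential_id=20,
# )
```
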
put_cass_ncoa_archive(self, id, status)

Update the archive status of this object

Parameters:
id : integer

The ID of the object.

status : boolean

The desired archived status of the object.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week as numeric values, starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
source : dict::
  • database_table : dict::
    • schema : string
      The schema name of the source table.
    • table : string
      The name of the source table.
    • remote_host_id : integer
      The ID of the database host for the table.
    • credential_id : integer
      The id of the credentials to be used when performing the enhancement.
    • multipart_key : list
      The source table primary key.
destination : dict::
  • database_table : dict::
    • schema : string
      The schema name for the output data.
    • table : string
      The table name for the output data.
column_mapping : dict::
  • address1 : string
    The first address line.
  • address2 : string
    The second address line.
  • city : string
    The city of an address.
  • state : string
    The state of an address.
  • zip : string
    The zip code of an address.
  • name : string
    The full name of the resident at this address. If needed, separate multiple columns with +, e.g. first_name+last_name
  • company : string
    The name of the company located at this address.
use_default_column_mapping : boolean

Defaults to true, in which case the existing column mapping on the input table is used. If false, a custom column mapping must be provided.

perform_ncoa : boolean

Whether to update addresses for records matching the National Change of Address (NCOA) database.

ncoa_credential_id : integer

Credential to use when performing NCOA updates. Required if ‘performNcoa’ is true.

output_level : string

The set of fields persisted by a CASS or NCOA enhancement. For CASS enhancements, one of ‘cass’ or ‘all’. For NCOA enhancements, one of ‘cass’, ‘ncoa’, ‘coalesced’, or ‘all’. By default, all fields will be returned.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

archived : string

The archival status of the requested item(s).

put_cass_ncoa_projects(self, id, project_id)

Add a CASS/NCOA Enhancement to a project

Parameters:
id : integer

The ID of the CASS/NCOA Enhancement.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

put_cass_ncoa_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
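As a non-authoritative sketch, the arguments above might be assembled like this. All IDs are placeholders, and `client` is assumed to be a configured `civis.APIClient()` instance on which this method is reachable:

```python
# Sketch of arguments for put_cass_ncoa_shares_groups; IDs are placeholders.
share_args = dict(
    id=123,                    # placeholder: ID of the shared enhancement
    group_ids=[45, 67],        # placeholder group IDs
    permission_level="read",   # one of "read", "write", or "manage"
    send_shared_email=False,   # suppress the notification email
)
# response = client.enhancements.put_cass_ncoa_shares_groups(**share_args)
# response.total_group_shares  # total (or visible) groups shared
```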

put_cass_ncoa_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

put_civis_data_match(self, id, name, input_field_mapping, input_table, match_target_id, output_table, *, schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', max_matches='DEFAULT', threshold='DEFAULT', archived='DEFAULT')

Replace all attributes of this Civis Data Match Enhancement

Parameters:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this number of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
max_matches : integer, optional

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float, optional

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean, optional

Whether the Civis Data Match Job has been archived.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this number of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean

Whether the Civis Data Match Job has been archived.

last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
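As a hedged sketch of how the nested dict parameters above could be built, the database, schema, and table names, the IDs, and the field mapping below are all placeholders, and `client` is assumed to be a `civis.APIClient()` instance:

```python
# Placeholder tables and schedule for a hypothetical weekly data match.
input_table = {
    "database_name": "redshift-general",  # placeholder Redshift database
    "schema": "staging",
    "table": "contacts",
}
output_table = {
    "database_name": "redshift-general",
    "schema": "analytics",
    "table": "contacts_matched",
}
schedule = {
    "scheduled": True,
    "scheduled_days": [1],     # 0 is Sunday, so 1 means Monday
    "scheduled_hours": [6],    # run at 6:00
    "scheduled_minutes": [0],
}
# client.enhancements.put_civis_data_match(
#     id=123, name="Weekly contact match",
#     input_field_mapping={"first_name": "fname"},  # see /enhancements/field_mapping
#     input_table=input_table, match_target_id=456,
#     output_table=output_table, schedule=schedule,
#     max_matches=1, threshold=0.5,
# )
```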
put_civis_data_match_archive(self, id, status)

Update the archive status of this object

Parameters:
id : integer

The ID of the object.

status : boolean

The desired archived status of the object.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this number of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
input_field_mapping : dict

The column mapping for the input table. See /enhancements/field_mapping for list of valid fields.

input_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
match_target_id : integer

The ID of the Civis Data match target. See /match_targets for IDs.

output_table : dict::
  • database_name : string
    The Redshift database name for the table.
  • schema : string
    The schema name for the table.
  • table : string
    The table name.
max_matches : integer

The maximum number of matches per record in the input table to return. Must be between 0 and 10. 0 returns all matches.

threshold : number/float

The score threshold (between 0 and 1). Matches below this threshold will not be returned. The default value is 0.5.

archived : boolean

Whether the Civis Data Match Job has been archived.

last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
put_civis_data_match_projects(self, id, project_id)

Add a Civis Data Match Enhancement to a project

Parameters:
id : integer

The ID of the Civis Data Match Enhancement.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

put_civis_data_match_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

put_civis_data_match_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

put_geocode(self, id, name, remote_host_id, credential_id, source_schema_and_table, *, schedule='DEFAULT', parent_id='DEFAULT', notifications='DEFAULT', multipart_key='DEFAULT', limiting_sql='DEFAULT', target_schema='DEFAULT', target_table='DEFAULT', country='DEFAULT', provider='DEFAULT', output_address='DEFAULT')

Replace all attributes of this Geocode Enhancement

Parameters:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer, optional

Parent ID that triggers this enhancement.

notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this number of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
multipart_key : list, optional

The source table primary key.

limiting_sql : string, optional

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string, optional

The output table schema.

target_table : string, optional

The output table name.

country : string, optional

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string, optional

The geocoding provider; one of postgis, nominatim, or geocoder_ca.

output_address : boolean, optional

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this number of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

multipart_key : list

The source table primary key.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string

The output table schema.

target_table : string

The output table name.

country : string

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string

The geocoding provider; one of postgis, nominatim, or geocoder_ca.

output_address : boolean

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

archived : string

The archival status of the requested item(s).
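A minimal sketch of a `put_geocode` call might look like the following. The IDs, schema, and table names are placeholders, and `client` is assumed to come from `civis.APIClient()`:

```python
# Hypothetical arguments for put_geocode; every value is a placeholder.
geocode_args = dict(
    id=789,                      # placeholder enhancement ID
    name="Geocode member addresses",
    remote_host_id=10,           # placeholder host ID
    credential_id=20,            # placeholder credential ID
    source_schema_and_table="staging.member_addresses",
    target_schema="analytics",
    target_table="member_addresses_geocoded",
    country="us",                # 'us' or 'ca'
    provider="postgis",          # postgis, nominatim, or geocoder_ca
    output_address=True,         # only guaranteed for postgis
    limiting_sql="state='IL'",   # note: "WHERE" is omitted
)
# client.enhancements.put_geocode(**geocode_args)
```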

put_geocode_archive(self, id, status)

Update the archive status of this object

Parameters:
id : integer

The ID of the object.

status : boolean

The desired archived status of the object.

Returns:
id : integer

The ID for the enhancement.

name : string

The name of the enhancement job.

type : string

The type of the enhancement (e.g., CASS-NCOA).

created_at : string/time

The time this enhancement was created.

updated_at : string/time

The time the enhancement was last updated.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
state : string

The status of the enhancement’s last run.

schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
parent_id : integer

Parent ID that triggers this enhancement.

notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this number of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
remote_host_id : integer

The ID of the remote host.

credential_id : integer

The ID of the remote host credential.

source_schema_and_table : string

The source database schema and table.

multipart_key : list

The source table primary key.

limiting_sql : string

The limiting SQL for the source table. “WHERE” should be omitted (e.g. state=’IL’).

target_schema : string

The output table schema.

target_table : string

The output table name.

country : string

The country of the addresses to be geocoded; either ‘us’ or ‘ca’.

provider : string

The geocoding provider; one of postgis, nominatim, or geocoder_ca.

output_address : boolean

Whether to output the parsed address. Only guaranteed for the ‘postgis’ provider.

archived : string

The archival status of the requested item(s).

put_geocode_projects(self, id, project_id)

Add a Geocode Enhancement to a project

Parameters:
id : integer

The ID of the Geocode Enhancement.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

put_geocode_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

put_geocode_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

Exports

class Exports(session_kwargs, client, return_type='civis')

Methods

delete_files_csv_runs(self, id, run_id) Cancel a run
get_files_csv(self, id) Get a CSV Export
get_files_csv_runs(self, id, run_id) Check status of a run
list(self, \*[, type, author, status, …]) List
list_files_csv_runs(self, id, \*[, limit, …]) List runs for the given csv_export
list_files_csv_runs_logs(self, id, run_id, \*) Get the logs for a run
list_files_csv_runs_outputs(self, id, run_id, \*) List the outputs for a run
patch_files_csv(self, id, \*[, name, …]) Update some attributes of this CSV Export
post_files_csv(self, source, destination, \*) Create a CSV Export
post_files_csv_runs(self, id) Start a run
put_files_csv(self, id, source, destination, \*) Replace all attributes of this CSV Export
put_files_csv_archive(self, id, status) Update the archive status of this object
delete_files_csv_runs(self, id, run_id)

Cancel a run

Parameters:
id : integer

The ID of the csv_export.

run_id : integer

The ID of the run.

Returns:
None

Response code 202: success

get_files_csv(self, id)

Get a CSV Export

Parameters:
id : integer
Returns:
id : integer

The ID of this Csv Export job.

name : string

The name of this Csv Export job.

source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Notifies the job of what to do in the case that the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
include_header : boolean

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean

Whether or not the CSV should be split into multiple files. Default: false

max_file_size : integer

The maximum size, in MB, of the created files. Only available when force_multifile is true.
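As a sketch of the nested `source` and `destination` dicts described above (usable, for example, with `post_files_csv`), the SQL, bucket path, and host/credential IDs below are placeholders, and `client` is assumed to be a `civis.APIClient()` instance:

```python
# Hypothetical configuration for a CSV export; all values are placeholders.
source = {
    "sql": "SELECT * FROM analytics.daily_report",
    "remote_host_id": 10,      # placeholder database host ID
    "credential_id": 20,       # placeholder credential ID
}
destination = {
    "filename_prefix": "daily_report",
    "storage_path": {
        "file_path": "/files/all/",     # path within the bucket
        "storage_host_id": 30,          # placeholder storage host ID
        "credential_id": 40,            # placeholder credential ID
        "existing_files": "overwrite",  # fail (default), append, or overwrite
    },
}
# client.exports.post_files_csv(source, destination, compression="gzip")
```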

get_files_csv_runs(self, id, run_id)

Check status of a run

Parameters:
id : integer

The ID of the csv_export.

run_id : integer

The ID of the run.

Returns:
id : integer
state : string
created_at : string/time

The time that the run was queued.

started_at : string/time

The time that the run started.

finished_at : string/time

The time that the run completed.

error : string

The error message for this run, if present.

output_cached_on : string/time

The time that the output was originally exported, if a cache entry was used by the run.
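Since a run moves through states over time, a caller might poll this endpoint until the run finishes. The sketch below is an assumption about typical usage, not part of this reference; the terminal state names and the `client` object (a `civis.APIClient()` instance) are placeholders:

```python
import time

# Assumed terminal states for a run; adjust to the states your jobs report.
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def wait_for_run(client, export_id, run_id, poll_interval=10):
    """Poll get_files_csv_runs until the run reaches a terminal state."""
    while True:
        run = client.exports.get_files_csv_runs(export_id, run_id)
        if run.state in TERMINAL_STATES:
            return run
        time.sleep(poll_interval)
```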

list(self, *, type='DEFAULT', author='DEFAULT', status='DEFAULT', hidden='DEFAULT', archived='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List

Parameters:
type : string, optional

If specified, return exports of these types. It accepts a comma-separated list; possible values are ‘database’ and ‘gdoc’.

author : string, optional

If specified, return exports from this author. It accepts a comma-separated list of author ids.

status : string, optional

If specified, returns exports with one of these statuses. It accepts a comma-separated list; possible values are ‘running’, ‘failed’, ‘succeeded’, ‘idle’, ‘scheduled’.

hidden : boolean, optional

If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.

archived : string, optional

The archival status of the requested item(s).

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, name, created_at, last_run.updated_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID for this export.

name : string

The name of this export.

type : string

The type of export.

created_at : string/time

The creation time for this export.

updated_at : string/time

The last modification time for this export.

state : string
last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
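The `limit`/`page_num` parameters and the `iterator=True` generator are two views of the same paging scheme. The sketch below imitates that scheme locally with a stubbed page fetcher; it makes no API call, and the real method would be invoked on the client (e.g. `client.exports.list(...)`, an assumption about where this endpoint lives):

```python
# Sketch of how limit/page_num paging composes into iterator=True.
# fetch_page is a local stand-in for the real API call.

def fetch_page(page_num, limit):
    """Fake one page of results, mimicking limit/page_num semantics."""
    all_items = [{"id": i, "name": f"export-{i}"} for i in range(1, 48)]
    start = (page_num - 1) * limit
    return all_items[start:start + limit]

def iterate_all(limit=20):
    """Mimic iterator=True: yield every item, paging until a short page."""
    page_num = 1
    while True:
        page = fetch_page(page_num, limit)
        yield from page
        if len(page) < limit:
            return
        page_num += 1

items = list(iterate_all(limit=20))
```

Because `iterator=True` pages internally, `limit` and `page_num` arguments are ignored in that mode, exactly as the parameter description states.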
list_files_csv_runs(self, id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List runs for the given csv_export

Parameters:
id : integer

The ID of the csv_export.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 100.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer
state : string
created_at : string/time

The time that the run was queued.

started_at : string/time

The time that the run started.

finished_at : string/time

The time that the run completed.

error : string

The error message for this run, if present.

list_files_csv_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the csv_export.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of: unknown, fatal, error, warn, info, debug.
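The `last_id` parameter supports cursor-style paging: pass the highest log ID you have already seen and only newer entries come back. The sketch below replays that loop against a local stub rather than the real logs endpoint (no API call is made):

```python
# Cursor-style paging over run logs via last_id, simulated locally.
# LOGS stands in for the server-side log store.

LOGS = [{"id": i, "message": f"line {i}"} for i in range(1, 26)]

def fetch_logs(last_id=0, limit=10):
    """Return up to `limit` entries with IDs strictly greater than last_id."""
    newer = [log for log in LOGS if log["id"] > last_id]
    return newer[:limit]

def all_logs(limit=10):
    """Collect every log entry by advancing last_id after each batch."""
    collected, last_id = [], 0
    while True:
        batch = fetch_logs(last_id=last_id, limit=limit)
        if not batch:
            return collected
        collected.extend(batch)
        last_id = batch[-1]["id"]

logs = all_logs()
```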

list_files_csv_runs_outputs(self, id, run_id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List the outputs for a run

Parameters:
id : integer

The ID of the csv_export.

run_id : integer

The ID of the run.

limit : integer, optional

Number of results to return. Defaults to its maximum of 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to created_at. Must be one of: created_at, id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
object_type : string

The type of the output. Valid values are File, Table, Report, Project, Credential, or JSONValue.

object_id : integer

The ID of the output.

name : string

The name of the output.

link : string

The hypermedia link to the output.

value : string
patch_files_csv(self, id, *, name='DEFAULT', source='DEFAULT', destination='DEFAULT', include_header='DEFAULT', compression='DEFAULT', column_delimiter='DEFAULT', hidden='DEFAULT', force_multifile='DEFAULT', max_file_size='DEFAULT')

Update some attributes of this CSV Export

Parameters:
id : integer

The ID of this Csv Export job.

name : string, optional

The name of this Csv Export job.

source : dict, optional::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict, optional::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Specifies what the job should do if the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
include_header : boolean, optional

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string, optional

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string, optional

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean, optional

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean, optional

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer, optional

The maximum size, in MB, of each created file. Only available when force_multifile is true.

Returns:
id : integer

The ID of this Csv Export job.

name : string

The name of this Csv Export job.

source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Specifies what the job should do if the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
include_header : boolean

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer

The maximum size, in MB, of each created file. Only available when force_multifile is true.

post_files_csv(self, source, destination, *, name='DEFAULT', include_header='DEFAULT', compression='DEFAULT', column_delimiter='DEFAULT', hidden='DEFAULT', force_multifile='DEFAULT', max_file_size='DEFAULT')

Create a CSV Export

Parameters:
source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Specifies what the job should do if the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
name : string, optional

The name of this Csv Export job.

include_header : boolean, optional

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string, optional

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string, optional

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean, optional

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean, optional

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer, optional

The maximum size, in MB, of each created file. Only available when force_multifile is true.

Returns:
id : integer

The ID of this Csv Export job.

name : string

The name of this Csv Export job.

source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Specifies what the job should do if the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
include_header : boolean

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer

The maximum size, in MB, of each created file. Only available when force_multifile is true.
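The required `source` and `destination` dicts carry everything the export needs: the query plus database credentials, and the storage target plus its credentials. The sketch below assembles and sanity-checks such payloads locally; all IDs and the query are hypothetical placeholders, and the commented-out client call is an assumption about how this endpoint is invoked (it is not executed here):

```python
# A minimal sketch of building post_files_csv payloads. Values are
# placeholders; only local validation runs, no API call is made.

VALID_EXISTING_FILES = {"fail", "append", "overwrite"}
VALID_COMPRESSION = {"gzip", "none"}

source = {
    "sql": "SELECT id, email FROM contacts",  # hypothetical query
    "remote_host_id": 123,                    # hypothetical database host ID
    "credential_id": 456,                     # hypothetical credential ID
}
destination = {
    "filename_prefix": "contacts_export",
    "storage_path": {
        "file_path": "/files/all/",           # i.e. s3://mybucket/files/all/
        "storage_host_id": 789,               # hypothetical storage host ID
        "credential_id": 456,
        "existing_files": "overwrite",        # delete any file already there
    },
}

assert destination["storage_path"]["existing_files"] in VALID_EXISTING_FILES
assert "gzip" in VALID_COMPRESSION

# export = client.exports.post_files_csv(source, destination,
#                                        compression="gzip")
```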

post_files_csv_runs(self, id)

Start a run

Parameters:
id : integer

The ID of the csv_export.

Returns:
id : integer
state : string
created_at : string/time

The time that the run was queued.

started_at : string/time

The time that the run started.

finished_at : string/time

The time that the run completed.

error : string

The error message for this run, if present.

output_cached_on : string/time

The time that the output was originally exported, if a cache entry was used by the run.
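A run created this way is asynchronous: `post_files_csv_runs` returns immediately with a `state`, and the caller polls the corresponding GET endpoint until a terminal state appears. The sketch below shows that loop with a stubbed state sequence in place of real API calls; the terminal state names beyond succeeded/failed are assumptions:

```python
# Start-then-poll pattern for an export run, simulated with a canned
# sequence of states instead of real GET requests.

import itertools

_states = itertools.chain(["queued", "running", "running"],
                          itertools.repeat("succeeded"))

def get_run_state():
    """Stub for fetching the run's current state from the API."""
    return next(_states)

def wait_for_run(max_polls=10):
    """Poll until the run reaches a terminal state."""
    for _ in range(max_polls):
        state = get_run_state()
        if state in ("succeeded", "failed", "cancelled"):
            return state
    raise TimeoutError("run did not finish within max_polls")

final = wait_for_run()
```

A real program would sleep between polls and read `error` from the run object when the final state is failed.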

put_files_csv(self, id, source, destination, *, name='DEFAULT', include_header='DEFAULT', compression='DEFAULT', column_delimiter='DEFAULT', hidden='DEFAULT', force_multifile='DEFAULT', max_file_size='DEFAULT')

Replace all attributes of this CSV Export

Parameters:
id : integer

The ID of this Csv Export job.

source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Specifies what the job should do if the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
name : string, optional

The name of this Csv Export job.

include_header : boolean, optional

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string, optional

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string, optional

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean, optional

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean, optional

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer, optional

The maximum size, in MB, of each created file. Only available when force_multifile is true.

Returns:
id : integer

The ID of this Csv Export job.

name : string

The name of this Csv Export job.

source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Specifies what the job should do if the exported file already exists at the provided path. One of: fail, append, overwrite. Default: fail. If “append” is specified, the new file will always be added to the provided path. If “overwrite” is specified, all existing files at the provided path will be deleted and the new file will be added. By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
include_header : boolean

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer

The maximum size, in MB, of each created file. Only available when force_multifile is true.

put_files_csv_archive(self, id, status)

Update the archive status of this object

Parameters:
id : integer

The ID of the object.

status : boolean

The desired archived status of the object.

Returns:
id : integer

The ID of this Csv Export job.

name : string

The name of this Csv Export job.

source : dict::
  • sql : string
    The SQL query for this Csv Export job
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
destination : dict::
  • filename_prefix : string
    The prefix of the name of the file returned to the user.
  • storage_path : dict::
    • file_path : string
      The path within the bucket where the exported file will be saved. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”
    • storage_host_id : integer
      The ID of the destination storage host.
    • credential_id : integer
      The ID of the credentials for the destination storage host.
    • existing_files : string
      Notifies the job of what to do in the case that the exported file already exists at the provided path.One of: fail, append, overwrite. Default: fail. If “append” is specified,the new file will always be added to the provided path. If “overwrite” is specifiedall existing files at the provided path will be deleted and the new file will be added.By default, or if “fail” is specified, the export will fail if a file exists at the provided path.
include_header : boolean

A boolean value indicating whether or not the header should be included. Defaults to true.

compression : string

The compression of the output file. Valid arguments are “gzip” and “none”. Defaults to “gzip”.

column_delimiter : string

The column delimiter for the output file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

hidden : boolean

A boolean value indicating whether or not this request should be hidden. Defaults to false.

force_multifile : boolean

Whether or not the CSV should be split into multiple files. Default: false.

max_file_size : integer

The maximum size, in MB, of each created file. Only available when force_multifile is true.

Files

class Files(session_kwargs, client, return_type='civis')

Methods

delete_projects(self, id, project_id) Remove a File from a project
delete_shares_groups(self, id, group_id) Revoke the permissions a group has on this object
delete_shares_users(self, id, user_id) Revoke the permissions a user has on this object
get(self, id, \*[, link_expires_at, inline]) Get details about a file
get_preprocess_csv(self, id) Get a Preprocess CSV
list_projects(self, id, \*[, hidden]) List the projects a File belongs to
list_shares(self, id) List users and groups permissioned on this object
patch(self, id, \*[, name, expires_at]) Update details about a file
patch_preprocess_csv(self, id, \*[, …]) Update some attributes of this Preprocess CSV
post(self, name, \*[, expires_at]) Initiate an upload of a file into the platform
post_multipart(self, name, num_parts, \*[, …]) Initiate a multipart upload
post_multipart_complete(self, id) Complete a multipart upload
post_preprocess_csv(self, file_id, \*[, …]) Create a Preprocess CSV
put(self, id, name, expires_at) Update details about a file
put_preprocess_csv(self, id, file_id, \*[, …]) Replace all attributes of this Preprocess CSV
put_preprocess_csv_archive(self, id, status) Update the archive status of this object
put_projects(self, id, project_id) Add a File to a project
put_shares_groups(self, id, group_ids, …) Set the permissions groups have on this object
put_shares_users(self, id, user_ids, …[, …]) Set the permissions users have on this object
delete_projects(self, id, project_id)

Remove a File from a project

Parameters:
id : integer

The ID of the File.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

delete_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

get(self, id, *, link_expires_at='DEFAULT', inline='DEFAULT')

Get details about a file

Parameters:
id : integer

The ID of the file.

link_expires_at : string, optional

The date and time the download link will expire. Must be a time between now and 36 hours from now. Defaults to 30 minutes from now.

inline : boolean, optional

If true, will return a URL that can be displayed inline in HTML.

Returns:
id : integer

The ID of the file.

name : string

The file name.

created_at : string/date-time

The date and time the file was created.

file_size : integer

The file size.

expires_at : string/date-time

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
download_url : string

A JSON string containing information about the URL of the file.

file_url : string

The URL that may be used to download the file.

detected_info : dict::
  • include_header : boolean

    A boolean value indicating whether or not the first row of the file is a header row.

  • column_delimiter : string

    The column delimiter for the file. One of “comma”, “tab”, or “pipe”.

  • compression : string

    The type of compression of the file. One of “gzip”, or “none”.

  • table_columns : list::

    An array of hashes corresponding to the columns in the file. Each hash has keys for column “name” and “sql_type”.

    • name : string
      The column name.
    • sql_type : string
      The SQL type of the column.
get_preprocess_csv(self, id)

Get a Preprocess CSV

Parameters:
id : integer
Returns:
id : integer

The ID of the job created.

file_id : integer

The ID of the file.

in_place : boolean

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean

If true, detect the table columns in the file including the sql types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not already UTF-8 compatible (e.g., UTF-8 or ASCII).

include_header : boolean

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

hidden : boolean

The hidden status of the item.

list_projects(self, id, *, hidden='DEFAULT')

List the projects a File belongs to

Parameters:
id : integer

The ID of the File.

hidden : boolean, optional

If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.

Returns:
id : integer

The ID for this project.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
name : string

The name of this project.

description : string

A description of the project.

users : list::

  Users who can see the project.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
auto_share : boolean
created_at : string/time
updated_at : string/time
archived : string

The archival status of the requested item(s).

list_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

patch(self, id, *, name='DEFAULT', expires_at='DEFAULT')

Update details about a file

Parameters:
id : integer

The ID of the file.

name : string, optional

The file name. The extension must match the previous extension.

expires_at : string/date-time, optional

The date and time the file will expire.

Returns:
id : integer

The ID of the file.

name : string

The file name.

created_at : string/date-time

The date and time the file was created.

file_size : integer

The file size.

expires_at : string/date-time

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
download_url : string

A JSON string containing information about the URL of the file.

file_url : string

The URL that may be used to download the file.

detected_info : dict::
  • include_header : boolean

    A boolean value indicating whether or not the first row of the file is a header row.

  • column_delimiter : string

    The column delimiter for the file. One of “comma”, “tab”, or “pipe”.

  • compression : string

    The type of compression of the file. One of “gzip”, or “none”.

  • table_columns : list::

    An array of hashes corresponding to the columns in the file. Each hash has keys for column “name” and “sql_type”.

    • name : string
      The column name.
    • sql_type : string
      The SQL type of the column.
patch_preprocess_csv(self, id, *, file_id='DEFAULT', in_place='DEFAULT', detect_table_columns='DEFAULT', force_character_set_conversion='DEFAULT', include_header='DEFAULT', column_delimiter='DEFAULT')

Update some attributes of this Preprocess CSV

Parameters:
id : integer

The ID of the job created.

file_id : integer, optional

The ID of the file.

in_place : boolean, optional

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean, optional

If true, detect the table columns in the file including the sql types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean, optional

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not already UTF-8 compatible (e.g., UTF-8 or ASCII).

include_header : boolean, optional

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string, optional

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

Returns:
id : integer

The ID of the job created.

file_id : integer

The ID of the file.

in_place : boolean

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean

If true, detect the table columns in the file including the sql types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not already UTF-8 compatible (e.g., UTF-8 or ASCII).

include_header : boolean

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

hidden : boolean

The hidden status of the item.

post(self, name, *, expires_at='DEFAULT')

Initiate an upload of a file into the platform

Parameters:
name : string

The file name.

expires_at : string/date-time, optional

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

Returns:
id : integer

The ID of the file.

name : string

The file name.

created_at : string/date-time

The date and time the file was created.

file_size : integer

The file size.

expires_at : string/date-time

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

upload_url : string

The URL that may be used to upload a file. To use the upload URL, initiate a POST request to the given URL with the file you wish to import as the “file” form field.

upload_fields : dict

A hash containing the form fields to be included with the POST request.
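The returned `upload_url` and `upload_fields` describe a form POST: the fields accompany the file, and the file itself goes under the “file” key. The sketch below only assembles the request arguments (suitable for e.g. `requests.post(**kwargs)`); the URL and fields are stand-in values for what `post` would return, and nothing is sent:

```python
# Assemble the upload POST described by post()'s return value.
# upload_url / upload_fields are placeholders; no request is sent.

import io

upload_url = "https://example.invalid/upload"      # placeholder URL
upload_fields = {"key": "files/123/report.csv"}    # placeholder form fields

def build_upload_request(url, fields, fileobj):
    """Return kwargs suitable for requests.post(**kwargs)."""
    return {
        "url": url,
        "data": dict(fields),        # form fields must accompany the file
        "files": {"file": fileobj},  # the file goes in the "file" form field
    }

kwargs = build_upload_request(upload_url, upload_fields,
                              io.BytesIO(b"a,b\n1,2\n"))
```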

post_multipart(self, name, num_parts, *, expires_at='DEFAULT')

Initiate a multipart upload

Parameters:
name : string

The file name.

num_parts : integer

The number of parts in which the file will be uploaded. This parameter determines the number of presigned URLs that are returned.

expires_at : string/date-time, optional

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

Returns:
id : integer

The ID of the file.

name : string

The file name.

created_at : string/date-time

The date and time the file was created.

file_size : integer

The file size.

expires_at : string/date-time

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

upload_urls : list

An array of URLs that may be used to upload file parts. Use separate PUT requests to complete the part uploads. Links expire after 12 hours.
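
To illustrate the num_parts contract, the sketch below splits a byte string into contiguous chunks, one per presigned URL. The equal-size split is an assumption; the API only requires that the parts concatenate back to the original file.

```python
def split_into_parts(data, num_parts):
    """Split data into num_parts contiguous chunks (the last may be shorter)."""
    part_size = -(-len(data) // num_parts)   # ceiling division
    return [data[i * part_size:(i + 1) * part_size] for i in range(num_parts)]

parts = split_into_parts(b"0123456789", 3)   # chunks of 4, 4, and 2 bytes
# Each chunk would be sent with a separate PUT to the matching entry in
# upload_urls, after which post_multipart_complete finalizes the file.
```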

post_multipart_complete(self, id)

Complete a multipart upload

Parameters:
id : integer

The ID of the file.

Returns:
None

Response code 204: success

post_preprocess_csv(self, file_id, *, in_place='DEFAULT', detect_table_columns='DEFAULT', force_character_set_conversion='DEFAULT', include_header='DEFAULT', column_delimiter='DEFAULT', hidden='DEFAULT')

Create a Preprocess CSV

Parameters:
file_id : integer

The ID of the file.

in_place : boolean, optional

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean, optional

If true, detect the table columns in the file, including the SQL types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean, optional

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not compatible with UTF-8 (e.g., UTF-8, ASCII).

include_header : boolean, optional

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string, optional

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

hidden : boolean, optional

The hidden status of the item.

Returns:
id : integer

The ID of the job created.

file_id : integer

The ID of the file.

in_place : boolean

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean

If true, detect the table columns in the file, including the SQL types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not compatible with UTF-8 (e.g., UTF-8, ASCII).

include_header : boolean

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

hidden : boolean

The hidden status of the item.
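
The delimiter and header auto-detection described above can be approximated locally with the standard library's csv.Sniffer. This is only an illustration of the behavior, not the platform's actual detection logic.

```python
import csv

sample = "name|city\nAda|London\n"
sniffer = csv.Sniffer()
# Restrict candidates to the three delimiters the API supports:
dialect = sniffer.sniff(sample, delimiters=",\t|")
delimiter_name = {",": "comma", "\t": "tab", "|": "pipe"}[dialect.delimiter]
has_header = sniffer.has_header(sample)
# delimiter_name == "pipe"; has_header is True for this sample
```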

put(self, id, name, expires_at)

Update details about a file

Parameters:
id : integer

The ID of the file.

name : string

The file name. The extension must match the previous extension.

expires_at : string/date-time

The date and time the file will expire.

Returns:
id : integer

The ID of the file.

name : string

The file name.

created_at : string/date-time

The date and time the file was created.

file_size : integer

The file size.

expires_at : string/date-time

The date and time the file will expire. If not specified, the file will expire in 30 days. To keep a file indefinitely, specify null.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
download_url : string

A JSON string containing information about the URL of the file.

file_url : string

The URL that may be used to download the file.

detected_info : dict::
  • include_header : boolean

    A boolean value indicating whether or not the first row of the file is a header row.

  • column_delimiter : string

    The column delimiter for the file. One of “comma”, “tab”, or “pipe”.

  • compression : string

The type of compression of the file. One of “gzip” or “none”.

  • table_columns : list::

    An array of hashes corresponding to the columns in the file. Each hash should have keys for column “name” and “sql_type”.
    • name : string
      The column name.
    • sql_type : string
      The SQL type of the column.
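
The detected_info structure can be consumed as an ordinary nested dict. The sketch below builds one with placeholder values shaped like the documented response and derives a DDL-style column list from table_columns.

```python
# Placeholder dict mirroring the documented detected_info shape:
detected_info = {
    "include_header": True,
    "column_delimiter": "comma",
    "compression": "none",
    "table_columns": [
        {"name": "id", "sql_type": "INTEGER"},
        {"name": "email", "sql_type": "VARCHAR(255)"},
    ],
}
# Turn the detected columns into a column list usable in a CREATE TABLE:
ddl_columns = ", ".join(
    f'{col["name"]} {col["sql_type"]}' for col in detected_info["table_columns"]
)
# ddl_columns == "id INTEGER, email VARCHAR(255)"
```
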
put_preprocess_csv(self, id, file_id, *, in_place='DEFAULT', detect_table_columns='DEFAULT', force_character_set_conversion='DEFAULT', include_header='DEFAULT', column_delimiter='DEFAULT')

Replace all attributes of this Preprocess CSV

Parameters:
id : integer

The ID of the job created.

file_id : integer

The ID of the file.

in_place : boolean, optional

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean, optional

If true, detect the table columns in the file, including the SQL types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean, optional

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not compatible with UTF-8 (e.g., UTF-8, ASCII).

include_header : boolean, optional

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string, optional

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

Returns:
id : integer

The ID of the job created.

file_id : integer

The ID of the file.

in_place : boolean

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean

If true, detect the table columns in the file, including the SQL types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not compatible with UTF-8 (e.g., UTF-8, ASCII).

include_header : boolean

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

hidden : boolean

The hidden status of the item.

put_preprocess_csv_archive(self, id, status)

Update the archive status of this object

Parameters:
id : integer

The ID of the object.

status : boolean

The desired archived status of the object.

Returns:
id : integer

The ID of the job created.

file_id : integer

The ID of the file.

in_place : boolean

If true, the file is cleaned in place. If false, a new file ID is created. Defaults to true.

detect_table_columns : boolean

If true, detect the table columns in the file, including the SQL types. If false, skip table column detection. Defaults to false.

force_character_set_conversion : boolean

If true, the file will always be converted to UTF-8 and any character that cannot be converted will be discarded. If false, the character set conversion will only run if the detected character set is not compatible with UTF-8 (e.g., UTF-8, ASCII).

include_header : boolean

A boolean value indicating whether or not the first row of the file is a header row. If not provided, will attempt to auto-detect whether a header row is present.

column_delimiter : string

The column delimiter for the file. One of “comma”, “tab”, or “pipe”. If not provided, the column delimiter will be auto-detected.

hidden : boolean

The hidden status of the item.

put_projects(self, id, project_id)

Add a File to a project

Parameters:
id : integer

The ID of the File.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

put_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
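
The readers/writers/owners response can be flattened into a simple mapping. The dict below mirrors the documented shape with placeholder users and groups.

```python
share_response = {
    "readers": {"users": [{"id": 1, "name": "Ada"}], "groups": []},
    "writers": {"users": [], "groups": [{"id": 7, "name": "Analysts"}]},
    "owners": {"users": [{"id": 2, "name": "Grace"}], "groups": []},
}

def names_with_access(resp):
    """Collect user and group names at each permission level."""
    return {
        level: [u["name"] for u in grants["users"]]
        + [g["name"] for g in grants["groups"]]
        for level, grants in resp.items()
    }

access = names_with_access(share_response)
# access == {"readers": ["Ada"], "writers": ["Analysts"], "owners": ["Grace"]}
```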

put_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

Git_Repos

civis.resources._resources.Git_Repos

alias of civis.resources._resources.GitRepos

Groups

class Groups(session_kwargs, client, return_type='civis')

Methods

delete_members(self, id, user_id) Remove a user from a group
delete_shares_groups(self, id, group_id) Revoke the permissions a group has on this object
delete_shares_users(self, id, user_id) Revoke the permissions a user has on this object
get(self, id) Get a Group
list(self, \*[, query, permission, …]) List Groups
list_shares(self, id) List users and groups permissioned on this object
patch(self, id, \*[, name, description, …]) Update some attributes of this Group
post(self, name, \*[, description, slug, …]) Create a Group
put(self, id, name, \*[, description, slug, …]) Replace all attributes of this Group
put_members(self, id, user_id) Add a user to a group
put_shares_groups(self, id, group_ids, …) Set the permissions groups have on this object
put_shares_users(self, id, user_ids, …[, …]) Set the permissions users have on this object
delete_members(self, id, user_id)

Remove a user from a group

Parameters:
id : integer

The ID of the group.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

delete_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

get(self, id)

Get a Group

Parameters:
id : integer
Returns:
id : integer

The ID of this group.

name : string

This group’s name.

created_at : string/time

The date and time when this group was created.

description : string

The description of the group.

slug : string

The slug for this group.

organization_id : integer

The ID of the organization this group belongs to.

organization_name : string

The name of the organization this group belongs to.

member_count : integer

The total number of members in this group.

must_agree_to_eula : boolean

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean

The two factor authentication requirement for this group.

role_ids : list

An array of ids of all the roles this group has.

default_time_zone : string

The default time zone of this group.

default_jobs_label : string

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

members : list::

The members of this group.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
  • email : string
    This user’s email address.
  • primary_group_id : integer
    The ID of the primary group of this user.
list(self, *, query='DEFAULT', permission='DEFAULT', include_members='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List Groups

Parameters:
query : string, optional

If specified, it will filter the groups returned. Infix matching is supported (e.g., “query=group” will return “group” and “group of people” and “my group” and “my group of people”).

permission : string, optional

A permissions string, one of “read”, “write”, or “manage”. Lists only groups for which the current user has that permission.

include_members : boolean, optional

Show members of the group.

limit : integer, optional

Number of results to return. Defaults to 50. Maximum allowed is 1000.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to name. Must be one of: name, created_at.

order_dir : string, optional

Direction in which to sort: either asc (ascending) or desc (descending), defaulting to asc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of this group.

name : string

This group’s name.

created_at : string/time

The date and time when this group was created.

slug : string

The slug for this group.

organization_id : integer

The ID of the organization this group belongs to.

organization_name : string

The name of the organization this group belongs to.

member_count : integer

The total number of members in this group.

members : list::

The members of this group.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
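
The iterator=True behavior described above amounts to transparent pagination. The generator below sketches that pattern against a stand-in fetch function; fetch_page is hypothetical and stands in for the real list call.

```python
def iterate_all(fetch_page, limit=50):
    """Yield records page by page until a short page signals the end."""
    page_num = 1
    while True:
        page = fetch_page(limit=limit, page_num=page_num)
        yield from page
        if len(page) < limit:
            break
        page_num += 1

# Simulated backend holding 5 records, fetched 2 at a time:
records = [{"id": i, "name": f"group {i}"} for i in range(5)]

def fake_fetch(limit, page_num):
    start = (page_num - 1) * limit
    return records[start:start + limit]

all_groups = list(iterate_all(fake_fetch, limit=2))
# all_groups contains all 5 records in order
```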
list_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

patch(self, id, *, name='DEFAULT', description='DEFAULT', slug='DEFAULT', organization_id='DEFAULT', must_agree_to_eula='DEFAULT', default_otp_required_for_login='DEFAULT', role_ids='DEFAULT', default_time_zone='DEFAULT', default_jobs_label='DEFAULT', default_notebooks_label='DEFAULT', default_services_label='DEFAULT')

Update some attributes of this Group

Parameters:
id : integer

The ID of this group.

name : string, optional

This group’s name.

description : string, optional

The description of the group.

slug : string, optional

The slug for this group.

organization_id : integer, optional

The ID of the organization this group belongs to.

must_agree_to_eula : boolean, optional

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean, optional

The two factor authentication requirement for this group.

role_ids : list, optional

An array of ids of all the roles this group has.

default_time_zone : string, optional

The default time zone of this group.

default_jobs_label : string, optional

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string, optional

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string, optional

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

Returns:
id : integer

The ID of this group.

name : string

This group’s name.

created_at : string/time

The date and time when this group was created.

description : string

The description of the group.

slug : string

The slug for this group.

organization_id : integer

The ID of the organization this group belongs to.

organization_name : string

The name of the organization this group belongs to.

member_count : integer

The total number of members in this group.

must_agree_to_eula : boolean

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean

The two factor authentication requirement for this group.

role_ids : list

An array of ids of all the roles this group has.

default_time_zone : string

The default time zone of this group.

default_jobs_label : string

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

members : list::

The members of this group.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
  • email : string
    This user’s email address.
  • primary_group_id : integer
    The ID of the primary group of this user.
post(self, name, *, description='DEFAULT', slug='DEFAULT', organization_id='DEFAULT', must_agree_to_eula='DEFAULT', default_otp_required_for_login='DEFAULT', role_ids='DEFAULT', default_time_zone='DEFAULT', default_jobs_label='DEFAULT', default_notebooks_label='DEFAULT', default_services_label='DEFAULT')

Create a Group

Parameters:
name : string

This group’s name.

description : string, optional

The description of the group.

slug : string, optional

The slug for this group.

organization_id : integer, optional

The ID of the organization this group belongs to.

must_agree_to_eula : boolean, optional

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean, optional

The two factor authentication requirement for this group.

role_ids : list, optional

An array of ids of all the roles this group has.

default_time_zone : string, optional

The default time zone of this group.

default_jobs_label : string, optional

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string, optional

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string, optional

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

Returns:
id : integer

The ID of this group.

name : string

This group’s name.

created_at : string/time

The date and time when this group was created.

description : string

The description of the group.

slug : string

The slug for this group.

organization_id : integer

The ID of the organization this group belongs to.

organization_name : string

The name of the organization this group belongs to.

member_count : integer

The total number of members in this group.

must_agree_to_eula : boolean

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean

The two factor authentication requirement for this group.

role_ids : list

An array of ids of all the roles this group has.

default_time_zone : string

The default time zone of this group.

default_jobs_label : string

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

members : list::

The members of this group.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
  • email : string
    This user’s email address.
  • primary_group_id : integer
    The ID of the primary group of this user.
put(self, id, name, *, description='DEFAULT', slug='DEFAULT', organization_id='DEFAULT', must_agree_to_eula='DEFAULT', default_otp_required_for_login='DEFAULT', role_ids='DEFAULT', default_time_zone='DEFAULT', default_jobs_label='DEFAULT', default_notebooks_label='DEFAULT', default_services_label='DEFAULT')

Replace all attributes of this Group

Parameters:
id : integer

The ID of this group.

name : string

This group’s name.

description : string, optional

The description of the group.

slug : string, optional

The slug for this group.

organization_id : integer, optional

The ID of the organization this group belongs to.

must_agree_to_eula : boolean, optional

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean, optional

The two factor authentication requirement for this group.

role_ids : list, optional

An array of ids of all the roles this group has.

default_time_zone : string, optional

The default time zone of this group.

default_jobs_label : string, optional

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string, optional

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string, optional

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

Returns:
id : integer

The ID of this group.

name : string

This group’s name.

created_at : string/time

The date and time when this group was created.

description : string

The description of the group.

slug : string

The slug for this group.

organization_id : integer

The ID of the organization this group belongs to.

organization_name : string

The name of the organization this group belongs to.

member_count : integer

The total number of members in this group.

must_agree_to_eula : boolean

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean

The two factor authentication requirement for this group.

role_ids : list

An array of ids of all the roles this group has.

default_time_zone : string

The default time zone of this group.

default_jobs_label : string

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

members : list::

The members of this group.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
  • email : string
    This user’s email address.
  • primary_group_id : integer
    The ID of the primary group of this user.
put_members(self, id, user_id)

Add a user to a group

Parameters:
id : integer

The ID of the group.

user_id : integer

The ID of the user.

Returns:
id : integer

The ID of this group.

name : string

This group’s name.

created_at : string/time

The date and time when this group was created.

description : string

The description of the group.

slug : string

The slug for this group.

organization_id : integer

The ID of the organization this group belongs to.

organization_name : string

The name of the organization this group belongs to.

member_count : integer

The total number of members in this group.

must_agree_to_eula : boolean

Whether or not members of this group must sign the EULA.

default_otp_required_for_login : boolean

The two factor authentication requirement for this group.

role_ids : list

An array of ids of all the roles this group has.

default_time_zone : string

The default time zone of this group.

default_jobs_label : string

The default partition label for jobs of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_notebooks_label : string

The default partition label for notebooks of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

default_services_label : string

The default partition label for services of this group. Only available if custom_partitions feature flag is set. Do not use this attribute as it may break in the future.

members : list::

The members of this group.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
  • email : string
    This user’s email address.
  • primary_group_id : integer
    The ID of the primary group of this user.
put_shares_groups(self, id, group_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions groups have on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_ids : list

An array of one or more group IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

put_shares_users(self, id, user_ids, permission_level, *, share_email_body='DEFAULT', send_shared_email='DEFAULT')

Set the permissions users have on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_ids : list

An array of one or more user IDs.

permission_level : string

Options are: “read”, “write”, or “manage”.

share_email_body : string, optional

Custom body text for e-mail sent on a share.

send_shared_email : boolean, optional

Send email to the recipients of a share.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

Imports

class Imports(session_kwargs, client, return_type='civis')

Methods

delete_files_csv_runs(self, id, run_id) Cancel a run
delete_files_runs(self, id, run_id) Cancel a run
delete_projects(self, id, project_id) Remove an Import from a project
delete_shares_groups(self, id, group_id) Revoke the permissions a group has on this object
delete_shares_users(self, id, user_id) Revoke the permissions a user has on this object
get(self, id) Get details about an import
get_batches(self, id) Get details about a batch import
get_files_csv(self, id) Get a CSV Import
get_files_csv_runs(self, id, run_id) Check status of a run
get_files_runs(self, id, run_id) Check status of a run
list(self, \*[, type, author, destination, …]) List Imports
list_batches(self, \*[, hidden, limit, …]) List batch imports
list_files_csv_runs(self, id, \*[, limit, …]) List runs for the given csv_import
list_files_csv_runs_logs(self, id, run_id, \*) Get the logs for a run
list_files_runs(self, id, \*[, limit, …]) List runs for the given import
list_files_runs_logs(self, id, run_id, \*[, …]) Get the logs for a run
list_projects(self, id, \*[, hidden]) List the projects an Import belongs to
list_runs(self, id) Get the run history of this import
list_runs_logs(self, id, run_id, \*[, …]) Get the logs for a run
list_shares(self, id) List users and groups permissioned on this object
patch_files_csv(self, id, \*[, name, …]) Update some attributes of this CSV Import
post(self, name, sync_type, is_outbound, \*) Create a new import configuration
post_batches(self, file_ids, schema, table, …) Upload multiple files to Civis
post_cancel(self, id) Cancel a run
post_files(self, schema, name, …[, …]) Initiate an import of a tabular file into the platform
post_files_csv(self, source, destination, …) Create a CSV Import
post_files_csv_runs(self, id) Start a run
post_files_runs(self, id) Start a run
post_runs(self, id) Run an import
post_syncs(self, id, source, destination, \*) Create a sync
put(self, id, name, sync_type, is_outbound, \*) Update an import
put_archive(self, id, status) Update the archive status of this object
put_files_csv(self, id, source, destination, …) Replace all attributes of this CSV Import
put_files_csv_archive(self, id, status) Update the archive status of this object
put_projects(self, id, project_id) Add an Import to a project
put_shares_groups(self, id, group_ids, …) Set the permissions groups have on this object
put_shares_users(self, id, user_ids, …[, …]) Set the permissions users have on this object
put_syncs(self, id, sync_id, source, …[, …]) Update a sync
put_syncs_archive(self, id, sync_id, \*[, …]) Update the archive status of this sync
delete_files_csv_runs(self, id, run_id)

Cancel a run

Parameters:
id : integer

The ID of the csv_import.

run_id : integer

The ID of the run.

Returns:
None

Response code 202: success

delete_files_runs(self, id, run_id)

Cancel a run

Parameters:
id : integer

The ID of the import.

run_id : integer

The ID of the run.

Returns:
None

Response code 202: success

delete_projects(self, id, project_id)

Remove an Import from a project

Parameters:
id : integer

The ID of the Import.

project_id : integer

The ID of the project.

Returns:
None

Response code 204: success

delete_shares_groups(self, id, group_id)

Revoke the permissions a group has on this object

Parameters:
id : integer

The ID of the resource that is shared.

group_id : integer

The ID of the group.

Returns:
None

Response code 204: success

delete_shares_users(self, id, user_id)

Revoke the permissions a user has on this object

Parameters:
id : integer

The ID of the resource that is shared.

user_id : integer

The ID of the user.

Returns:
None

Response code 204: success

get(self, id)

Get details about an import

Parameters:
id : integer

The ID for the import.

Returns:
name : string

The name of the import.

sync_type : string

The type of sync to perform; one of Dbsync, AutoImport, GdocImport, GdocExport, or Salesforce.

source : dict::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
  • name : string
destination : dict::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
  • name : string
schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this amount of minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
parent_id : integer

Parent id to trigger this import from

id : integer

The ID for the import.

is_outbound : boolean
job_type : string

The job type of this import.

syncs : list::

List of syncs.
  • id : integer
  • source : dict::
    • id : integer
      The ID of the table or file, if available.
    • path : string
      The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following: databaseTable, file, googleWorksheet, salesforce.
    • database_table : dict::
      • schema : string
        The database schema name.
      • table : string
        The database table name.
      • use_without_schema : boolean
        This attribute is no longer available; defaults to false but cannot be used.
    • file : dict::
      • id : integer
        The file id.
    • google_worksheet : dict::
      • spreadsheet : string
        The spreadsheet document name.
      • spreadsheet_id : string
        The spreadsheet document id.
      • worksheet : string
        The worksheet tab name.
      • worksheet_id : integer
        The worksheet tab id.
    • salesforce : dict::
      • object_name : string
        The Salesforce object name.
  • destination : dict::
    • path : string
      The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., if you have a spreadsheet named “MySpreadsheet” and a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following: databaseTable, googleWorksheet.
    • database_table : dict::
      • schema : string
        The database schema name.
      • table : string
        The database table name.
      • use_without_schema : boolean
        This attribute is no longer available; defaults to false but cannot be used.
    • google_worksheet : dict::
      • spreadsheet : string
        The spreadsheet document name.
      • spreadsheet_id : string
        The spreadsheet document id.
      • worksheet : string
        The worksheet tab name.
      • worksheet_id : integer
        The worksheet tab id.
  • advanced_options : dict::
    • max_errors : integer
    • existing_table_rows : string
    • diststyle : string
    • distkey : string
    • sortkey1 : string
    • sortkey2 : string
    • column_delimiter : string
    • column_overrides : dict
      Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
    • escaped : boolean
      If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
    • identity_column : string
    • row_chunk_size : integer
    • wipe_destination_table : boolean
    • truncate_long_lines : boolean
    • invalid_char_replacement : string
    • verify_table_row_counts : boolean
    • partition_column_name : string
      This parameter is deprecated
    • partition_schema_name : string
      This parameter is deprecated
    • partition_table_name : string
      This parameter is deprecated
    • partition_table_partition_column_min_name : string
      This parameter is deprecated
    • partition_table_partition_column_max_name : string
      This parameter is deprecated
    • last_modified_column : string
    • mysql_catalog_matches_schema : boolean
      This attribute is no longer available; defaults to true but cannot be used.
    • chunking_method : string
      The method used to break the data into smaller chunks for transfer. The value can be set to sorted_by_identity_columns; if not set, the chunking method will be chosen automatically.
    • first_row_is_header : boolean
    • export_action : string
      The kind of export action you want the export to execute. Set to “newsprsht” if you want a new worksheet inside a new spreadsheet. Set to “newwksht” if you want a new worksheet inside an existing spreadsheet. Set to “updatewksht” if you want to overwrite an existing worksheet inside an existing spreadsheet. Set to “appendwksht” if you want to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
    • sql_query : string
      If you are doing a Google Sheet export, this is your SQL query.
    • contact_lists : string
    • soql_query : string
    • include_deleted_records : boolean
state : string
created_at : string/date-time
updated_at : string/date-time
last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
user : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
next_run_at : string/time

The time of the next scheduled run.

time_zone : string

The time zone of this import.

hidden : boolean

The hidden status of the item.

archived : string

The archival status of the requested item(s).
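
The schedule block above can be summarized client-side. A minimal sketch (`describe_schedule` and `DAY_NAMES` are illustrative helpers, not part of the API; they rely only on the documented keys, with day 0 meaning Sunday):

```python
DAY_NAMES = ["Sunday", "Monday", "Tuesday", "Wednesday",
             "Thursday", "Friday", "Saturday"]

def describe_schedule(import_resp):
    """Render the ``schedule`` dict of an import response as a short string."""
    sched = import_resp.get("schedule") or {}
    if not sched.get("scheduled"):
        return "not scheduled"
    # scheduled_days holds numeric values starting at 0 for Sunday
    days = [DAY_NAMES[d] for d in sched.get("scheduled_days", [])]
    hours = sched.get("scheduled_hours", [])
    return "runs on %s at hours %s" % (", ".join(days), hours)
```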

get_batches(self, id)

Get details about a batch import

Parameters:
id : integer

The ID for the import.

Returns:
id : integer

The ID for the import.

schema : string

The destination schema name. This schema must already exist in Redshift.

table : string

The destination table name, without the schema prefix. This table must already exist in Redshift.

remote_host_id : integer

The ID of the destination database host.

state : string

The state of the run; one of “queued”, “running”, “succeeded”, “failed”, or “cancelled”.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error returned by the run, if any.

hidden : boolean

The hidden status of the item.

get_files_csv(self, id)

Get a CSV Import

Parameters:
id : integer
Returns:
id : integer

The ID for the import.

name : string

The name of the import.

source : dict::
  • file_ids : list
    The file ID(s) to import, if importing Civis file(s).
  • storage_path : dict::
    • storage_host_id : integer
      The ID of the source storage host.
    • credential_id : integer
      The ID of the credentials for the source storage host.
    • file_paths : list
      The file or directory path(s) within the bucket from which to import. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
destination : dict::
  • schema : string
    The destination schema name.
  • table : string
    The destination table name.
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
  • primary_keys : list
    A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
  • last_modified_keys : list
    A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
first_row_is_header : boolean

A boolean value indicating whether or not the first row of the source file is a header row.

column_delimiter : string

The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

escaped : boolean

A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression : string

The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.

existing_table_rows : string

The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.

max_errors : integer

The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

table_columns : list::

An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
  • name : string
    The column name.
  • sql_type : string
    The SQL type of the column.
loosen_types : boolean

If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution : string

Controls the movement of data in upsert mode. If set to “delayed”, the data will be moved after a brief delay. If set to “immediate”, the data will be moved immediately. In non-upsert modes, controls the speed at which detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and to speed up non-upsert imports.

redshift_destination_options : dict::
  • diststyle : string
    The diststyle to use for the table. One of “even”, “all”, or “key”.
  • distkey : string
    Distkey for this table in Redshift
  • sortkeys : list
    Sortkeys for this table in Redshift. Please provide a maximum of two.
hidden : boolean

The hidden status of the item.
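
Since each entry in table_columns is a hash with name and SQL-type keys in source-file order, the payload can be assembled from plain pairs. A sketch (`build_table_columns` is a hypothetical convenience, not part of the API; it uses the snake_case `sql_type` key shown in the field listing above):

```python
def build_table_columns(columns):
    """Build the ``table_columns`` payload from (name, sql_type) pairs.

    One hash per source-file column, in the order the columns appear
    in the file.
    """
    return [{"name": name, "sql_type": sql_type} for name, sql_type in columns]
```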

get_files_csv_runs(self, id, run_id)

Check status of a run

Parameters:
id : integer

The ID of the csv_import.

run_id : integer

The ID of the run.

Returns:
id : integer

The ID of the run.

csv_import_id : integer

The ID of the csv_import.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.
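
Runs move through the states listed above until they reach a terminal one, so callers typically poll. A minimal sketch (`wait_for_run` is a hypothetical helper; `get_run` is any zero-argument callable returning a run dict with a `state` key, e.g. `lambda: client.imports.get_files_csv_runs(id, run_id)`):

```python
import time

TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def wait_for_run(get_run, poll_interval=5.0, timeout=600.0):
    """Poll ``get_run()`` until the run reaches a terminal state.

    Returns the final run dict, or raises TimeoutError if the run is
    still queued/running after ``timeout`` seconds.
    """
    deadline = time.monotonic() + timeout
    while True:
        run = get_run()
        if run["state"] in TERMINAL_STATES:
            return run
        if time.monotonic() >= deadline:
            raise TimeoutError("run did not finish within %.0f s" % timeout)
        time.sleep(poll_interval)
```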

get_files_runs(self, id, run_id)

Check status of a run

Parameters:
id : integer

The ID of the import.

run_id : integer

The ID of the run.

Returns:
id : integer

The ID of the run.

import_id : integer

The ID of the import.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

list(self, *, type='DEFAULT', author='DEFAULT', destination='DEFAULT', source='DEFAULT', status='DEFAULT', hidden='DEFAULT', archived='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List Imports

Parameters:
type : string, optional

If specified, return imports of these types. It accepts a comma-separated list; possible values are ‘AutoImport’, ‘DbSync’, ‘Salesforce’, ‘GdocImport’.

author : string, optional

If specified, return imports from this author. It accepts a comma-separated list of author ids.

destination : string, optional

If specified, returns imports with one of these destinations. It accepts a comma-separated list of remote host ids.

source : string, optional

If specified, returns imports with one of these sources. It accepts a comma-separated list of remote host ids. ‘DbSync’ must be specified for ‘type’.

status : string, optional

If specified, returns imports with one of these statuses. It accepts a comma-separated list; possible values are ‘running’, ‘failed’, ‘succeeded’, ‘idle’, ‘scheduled’.

hidden : boolean, optional

If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.

archived : string, optional

The archival status of the requested item(s).

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, name, created_at, last_run.updated_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
name : string

The name of the import.

sync_type : string

The type of sync to perform; one of Dbsync, AutoImport, GdocImport, GdocExport, or Salesforce.

source : dict::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
  • name : string
destination : dict::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
  • name : string
schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
id : integer

The ID for the import.

is_outbound : boolean
job_type : string

The job type of this import.

state : string
created_at : string/date-time
updated_at : string/date-time
last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
user : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
time_zone : string

The time zone of this import.

archived : string

The archival status of the requested item(s).
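
When more results than the limit maximum are needed, iterator=True returns a generator that pages through everything. A small sketch (`collect_import_ids` is a hypothetical helper that accepts any callable with the signature of list above):

```python
def collect_import_ids(list_imports, **filters):
    """Gather the id of every import matching ``filters``.

    ``iterator=True`` makes the client page through all results, so
    ``limit`` and ``page_num`` are not needed here.
    """
    return [imp["id"] for imp in list_imports(iterator=True, **filters)]
```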

list_batches(self, *, hidden='DEFAULT', limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List batch imports

Parameters:
hidden : boolean, optional

If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 50.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, created_at.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID for the import.

schema : string

The destination schema name. This schema must already exist in Redshift.

table : string

The destination table name, without the schema prefix. This table must already exist in Redshift.

remote_host_id : integer

The ID of the destination database host.

state : string

The state of the run; one of “queued”, “running”, “succeeded”, “failed”, or “cancelled”.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error returned by the run, if any.

list_files_csv_runs(self, id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List runs for the given csv_import

Parameters:
id : integer

The ID of the csv_import.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 100.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of the run.

csv_import_id : integer

The ID of the csv_import.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

list_files_csv_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the csv_import.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of unknown, fatal, error, warn, info, debug.
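
Because entries at or below last_id are omitted, logs can be tailed incrementally by feeding the highest ID seen back into the next call. A sketch (`fetch_new_logs` is a hypothetical helper over any callable with the logs signature above):

```python
def fetch_new_logs(list_logs, import_id, run_id, last_id=0):
    """Fetch only log entries newer than ``last_id``.

    Returns the new entries plus the highest log ID seen, which can be
    passed back in as ``last_id`` on the next call.
    """
    entries = list_logs(import_id, run_id, last_id=last_id)
    newest = max((entry["id"] for entry in entries), default=last_id)
    return entries, newest
```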

list_files_runs(self, id, *, limit='DEFAULT', page_num='DEFAULT', order='DEFAULT', order_dir='DEFAULT', iterator='DEFAULT')

List runs for the given import

Parameters:
id : integer

The ID of the import.

limit : integer, optional

Number of results to return. Defaults to 20. Maximum allowed is 100.

page_num : integer, optional

Page number of the results to return. Defaults to the first page, 1.

order : string, optional

The field on which to order the result set. Defaults to id. Must be one of: id.

order_dir : string, optional

Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.

iterator : bool, optional

If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.

Returns:
id : integer

The ID of the run.

import_id : integer

The ID of the import.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

list_files_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the import.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of unknown, fatal, error, warn, info, debug.

list_projects(self, id, *, hidden='DEFAULT')

List the projects an Import belongs to

Parameters:
id : integer

The ID of the Import.

hidden : boolean, optional

If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.

Returns:
id : integer

The ID for this project.

author : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
name : string

The name of this project.

description : string

A description of the project.

users : list::

Users who can see the project.
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
auto_share : boolean
created_at : string/time
updated_at : string/time
archived : string

The archival status of the requested item(s).

list_runs(self, id)

Get the run history of this import

Parameters:
id : integer
Returns:
id : integer
state : string
created_at : string/time

The time that the run was queued.

started_at : string/time

The time that the run started.

finished_at : string/time

The time that the run completed.

error : string

The error message for this run, if present.

list_runs_logs(self, id, run_id, *, last_id='DEFAULT', limit='DEFAULT')

Get the logs for a run

Parameters:
id : integer

The ID of the import.

run_id : integer

The ID of the run.

last_id : integer, optional

The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.

limit : integer, optional

The maximum number of log messages to return. Default of 10000.

Returns:
id : integer

The ID of the log.

created_at : string/date-time

The time the log was created.

message : string

The log message.

level : string

The level of the log. One of unknown, fatal, error, warn, info, debug.

list_shares(self, id)

List users and groups permissioned on this object

Parameters:
id : integer

The ID of the resource that is shared.

Returns:
readers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
writers : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
owners : dict::
  • users : list::
    • id : integer
    • name : string
  • groups : list::
    • id : integer
    • name : string
total_user_shares : integer

For owners, the number of total users shared. For writers and readers, the number of visible users shared.

total_group_shares : integer

For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.

patch_files_csv(self, id, *, name='DEFAULT', source='DEFAULT', destination='DEFAULT', first_row_is_header='DEFAULT', column_delimiter='DEFAULT', escaped='DEFAULT', compression='DEFAULT', existing_table_rows='DEFAULT', max_errors='DEFAULT', table_columns='DEFAULT', loosen_types='DEFAULT', execution='DEFAULT', redshift_destination_options='DEFAULT')

Update some attributes of this CSV Import

Parameters:
id : integer

The ID for the import.

name : string, optional

The name of the import.

source : dict, optional::
  • file_ids : list
    The file ID(s) to import, if importing Civis file(s).
  • storage_path : dict::
    • storage_host_id : integer
      The ID of the source storage host.
    • credential_id : integer
      The ID of the credentials for the source storage host.
    • file_paths : list
      The file or directory path(s) within the bucket from which to import. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
destination : dict, optional::
  • schema : string
    The destination schema name.
  • table : string
    The destination table name.
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
  • primary_keys : list
    A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
  • last_modified_keys : list
    A list of the columns indicating a record has been updated.If the destination table does not exist, and the import mode is “upsert”, this field is required.
first_row_is_header : boolean, optional

A boolean value indicating whether or not the first row of the source file is a header row.

column_delimiter : string, optional

The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

escaped : boolean, optional

A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression : string, optional

The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.

existing_table_rows : string, optional

The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.

max_errors : integer, optional

The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

table_columns : list, optional::

An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
  • name : string
    The column name.
  • sql_type : string
    The SQL type of the column.
loosen_types : boolean, optional

If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution : string, optional

In upsert mode, controls when the data is moved: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, it will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.

redshift_destination_options : dict, optional::
  • diststyle : string
    The diststyle to use for the table. One of “even”, “all”, or “key”.
  • distkey : string
    Distkey for this table in Redshift
  • sortkeys : list
    Sortkeys for this table in Redshift. Please provide a maximum of two.
Returns:
id : integer

The ID for the import.

name : string

The name of the import.

source : dict::
  • file_ids : list
    The file ID(s) to import, if importing Civis file(s).
  • storage_path : dict::
    • storage_host_id : integer
      The ID of the source storage host.
    • credential_id : integer
      The ID of the credentials for the source storage host.
    • file_paths : list
      The file or directory path(s) within the bucket from which to import. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
destination : dict::
  • schema : string
    The destination schema name.
  • table : string
    The destination table name.
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
  • primary_keys : list
    A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
  • last_modified_keys : list
    A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
first_row_is_header : boolean

A boolean value indicating whether or not the first row of the source file is a header row.

column_delimiter : string

The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

escaped : boolean

A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression : string

The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.

existing_table_rows : string

The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.

max_errors : integer

The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

table_columns : list::

An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
  • name : string
    The column name.
  • sql_type : string
    The SQL type of the column.
loosen_types : boolean

If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution : string

In upsert mode, controls when the data is moved: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, it will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.

redshift_destination_options : dict::
  • diststyle : string
    The diststyle to use for the table. One of “even”, “all”, or “key”.
  • distkey : string
    Distkey for this table in Redshift
  • sortkeys : list
    Sortkeys for this table in Redshift. Please provide a maximum of two.
hidden : boolean

The hidden status of the item.
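
Since only the attributes passed are modified, a partial update can be sketched as follows (the import ID 123 and the client setup are hypothetical; everything not passed keeps its current value):

```python
# Hypothetical partial update of CSV Import 123: only the fields in
# patch_kwargs are changed; all other attributes keep their values.
patch_kwargs = {
    "column_delimiter": "tab",    # switch from the default "comma"
    "first_row_is_header": True,
    "max_errors": 100,            # tolerate up to 100 bad rows
}

# client = civis.APIClient()
# client.imports.patch_files_csv(123, **patch_kwargs)
```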

post(self, name, sync_type, is_outbound, *, source='DEFAULT', destination='DEFAULT', schedule='DEFAULT', notifications='DEFAULT', parent_id='DEFAULT', next_run_at='DEFAULT', time_zone='DEFAULT', hidden='DEFAULT')

Create a new import configuration

Parameters:
name : string

The name of the import.

sync_type : string

The type of sync to perform; one of Dbsync, AutoImport, GdocImport, GdocExport, and Salesforce.

is_outbound : boolean
source : dict, optional::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
destination : dict, optional::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
parent_id : integer, optional

Parent id to trigger this import from

next_run_at : string/time, optional

The time of the next scheduled run.

time_zone : string, optional

The time zone of this import.

hidden : boolean, optional

The hidden status of the item.

Returns:
name : string

The name of the import.

sync_type : string

The type of sync to perform; one of Dbsync, AutoImport, GdocImport, GdocExport, and Salesforce.

source : dict::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
  • name : string
destination : dict::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
  • name : string
schedule : dict::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Day based on numeric value starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled minutes, number of times to run per hour.
notifications : dict::
  • urls : list
    URLs to receive a POST request at job completion
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
parent_id : integer

Parent id to trigger this import from

id : integer

The ID for the import.

is_outbound : boolean
job_type : string

The job type of this import.

syncs : list::

List of syncs.
  • id : integer
  • source : dict::
    • id : integer
      The ID of the table or file, if available.
    • path : string
      The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter, it is recommended you use one of the following: databaseTable, file, googleWorksheet, salesforce
    • database_table : dict::
      • schema : string
        The database schema name.
      • table : string
        The database table name.
      • use_without_schema : boolean
        This attribute is no longer available; defaults to false but cannot be used.
    • file : dict::
      • id : integer
        The file id.
    • google_worksheet : dict::
      • spreadsheet : string
        The spreadsheet document name.
      • spreadsheet_id : string
        The spreadsheet document id.
      • worksheet : string
        The worksheet tab name.
      • worksheet_id : integer
        The worksheet tab id.
    • salesforce : dict::
      • object_name : string
        The Salesforce object name.
  • destination : dict::
    • path : string
      The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period. i.e. if you have a spreadsheet named “MySpreadsheet” and a sheet called “Sheet1” this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter, it is recommended you use one of the following: databaseTable, googleWorksheet
    • database_table : dict::
      • schema : string
        The database schema name.
      • table : string
        The database table name.
      • use_without_schema : boolean
        This attribute is no longer available; defaults to false but cannot be used.
    • google_worksheet : dict::
      • spreadsheet : string
        The spreadsheet document name.
      • spreadsheet_id : string
        The spreadsheet document id.
      • worksheet : string
        The worksheet tab name.
      • worksheet_id : integer
        The worksheet tab id.
  • advanced_options : dict::
    • max_errors : integer
    • existing_table_rows : string
    • diststyle : string
    • distkey : string
    • sortkey1 : string
    • sortkey2 : string
    • column_delimiter : string
    • column_overrides : dict
      Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
    • escaped : boolean
      If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
    • identity_column : string
    • row_chunk_size : integer
    • wipe_destination_table : boolean
    • truncate_long_lines : boolean
    • invalid_char_replacement : string
    • verify_table_row_counts : boolean
    • partition_column_name : string
      This parameter is deprecated
    • partition_schema_name : string
      This parameter is deprecated
    • partition_table_name : string
      This parameter is deprecated
    • partition_table_partition_column_min_name : string
      This parameter is deprecated
    • partition_table_partition_column_max_name : string
      This parameter is deprecated
    • last_modified_column : string
    • mysql_catalog_matches_schema : boolean
      This attribute is no longer available; defaults to true but cannot be used.
    • chunking_method : string
      The method used to break the data into smaller chunks for transfer. May be set to sorted_by_identity_columns; if not set, the chunking method will be chosen automatically.
    • first_row_is_header : boolean
    • export_action : string
      The kind of export action you want to have the export execute. Set to “newsprsht” if you want a new worksheet inside a new spreadsheet. Set to “newwksht” if you want a new worksheet inside an existing spreadsheet. Set to “updatewksht” if you want to overwrite an existing worksheet inside an existing spreadsheet. Set to “appendwksht” if you want to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
    • sql_query : string
      If you are doing a Google Sheet export, this is your SQL query.
    • contact_lists : string
    • soql_query : string
    • include_deleted_records : boolean
state : string
created_at : string/date-time
updated_at : string/date-time
last_run : dict::
  • id : integer
  • state : string
  • created_at : string/time
    The time that the run was queued.
  • started_at : string/time
    The time that the run started.
  • finished_at : string/time
    The time that the run completed.
  • error : string
    The error message for this run, if present.
user : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
running_as : dict::
  • id : integer
    The ID of this user.
  • name : string
    This user’s name.
  • username : string
    This user’s username.
  • initials : string
    This user’s initials.
  • online : boolean
    Whether this user is online.
next_run_at : string/time

The time of the next scheduled run.

time_zone : string

The time zone of this import.

hidden : boolean

The hidden status of the item.

archived : string

The archival status of the requested item(s).
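
The schedule and notifications parameters above can be sketched as plain dicts; the sync name, addresses, and time zone below are hypothetical:

```python
# Hypothetical configuration for a scheduled Dbsync import that runs
# Monday/Wednesday/Friday at 6:00 and emails only on failure.
schedule = {
    "scheduled": True,
    "scheduled_days": [1, 3, 5],   # numeric days, 0 = Sunday
    "scheduled_hours": [6],
    "scheduled_minutes": [0],
}
notifications = {
    "failure_email_addresses": ["data-team@example.com"],
    "failure_on": True,
    "success_on": False,
}

# client.imports.post("nightly warehouse sync", "Dbsync", False,
#                     schedule=schedule, notifications=notifications,
#                     time_zone="America/Chicago")
```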

post_batches(self, file_ids, schema, table, remote_host_id, credential_id, *, column_delimiter='DEFAULT', first_row_is_header='DEFAULT', compression='DEFAULT', hidden='DEFAULT')

Upload multiple files to Civis

Parameters:
file_ids : list

The file IDs for the import.

schema : string

The destination schema name. This schema must already exist in Redshift.

table : string

The destination table name, without the schema prefix. This table must already exist in Redshift.

remote_host_id : integer

The ID of the destination database host.

credential_id : integer

The ID of the credentials to be used when performing the database import.

column_delimiter : string, optional

The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. If unspecified, defaults to “comma”.

first_row_is_header : boolean, optional

A boolean value indicating whether or not the first row is a header row. If unspecified, defaults to false.

compression : string, optional

The type of compression. Valid arguments are “gzip”, “zip”, and “none”. If unspecified, defaults to “gzip”.

hidden : boolean, optional

The hidden status of the item.

Returns:
id : integer

The ID for the import.

schema : string

The destination schema name. This schema must already exist in Redshift.

table : string

The destination table name, without the schema prefix. This table must already exist in Redshift.

remote_host_id : integer

The ID of the destination database host.

state : string

The state of the run; one of “queued”, “running”, “succeeded”, “failed”, or “cancelled”.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error returned by the run, if any.

hidden : boolean

The hidden status of the item.
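
A batch import call can be sketched as follows; the file IDs, host ID, and credential ID are hypothetical, and the schema and table must already exist in Redshift:

```python
# Hypothetical batch import of three previously uploaded Civis files
# into the existing Redshift table analytics.events.
batch_kwargs = dict(
    file_ids=[101, 102, 103],
    schema="analytics",           # must already exist in Redshift
    table="events",               # must already exist in Redshift
    remote_host_id=10,
    credential_id=20,
    first_row_is_header=True,
    compression="gzip",           # the default for batch uploads
)

# client.imports.post_batches(**batch_kwargs)
```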

post_cancel(self, id)

Cancel a run

Parameters:
id : integer

The ID of the job.

Returns:
id : integer

The ID of the run.

state : string

The state of the run, one of ‘queued’, ‘running’ or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.
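
A small helper around this endpoint might look like the sketch below; it assumes the response supports dict-style access, as the client's Response objects do:

```python
def request_cancel(client, job_id):
    """Ask Civis to cancel a queued or running job; return True if the
    cancel request was accepted (sketch)."""
    resp = client.imports.post_cancel(job_id)
    return resp["is_cancel_requested"]
```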

post_files(self, schema, name, remote_host_id, credential_id, *, max_errors='DEFAULT', existing_table_rows='DEFAULT', diststyle='DEFAULT', distkey='DEFAULT', sortkey1='DEFAULT', sortkey2='DEFAULT', column_delimiter='DEFAULT', first_row_is_header='DEFAULT', multipart='DEFAULT', escaped='DEFAULT', hidden='DEFAULT')

Initiate an import of a tabular file into the platform

Parameters:
schema : string

The schema of the destination table.

name : string

The name of the destination table.

remote_host_id : integer

The id of the destination database host.

credential_id : integer

The id of the credentials to be used when performing the database import.

max_errors : integer, optional

The maximum number of rows with errors to remove from the import before failing.

existing_table_rows : string, optional

The behavior if a table with the requested name already exists. One of “fail”, “truncate”, “append”, or “drop”. Defaults to “fail”.

diststyle : string, optional

The diststyle to use for the table. One of “even”, “all”, or “key”.

distkey : string, optional

The column to use as the distkey for the table.

sortkey1 : string, optional

The column to use as the sort key for the table.

sortkey2 : string, optional

The second column in a compound sortkey for the table.

column_delimiter : string, optional

The column delimiter of the file. If column_delimiter is null or omitted, it will be auto-detected. Valid arguments are “comma”, “tab”, and “pipe”.

first_row_is_header : boolean, optional

A boolean value indicating whether or not the first row is a header row. If first_row_is_header is null or omitted, it will be auto-detected.

multipart : boolean, optional

If true, the upload URI will require a multipart/form-data POST request. Defaults to false.

escaped : boolean, optional

If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.

hidden : boolean, optional

The hidden status of the item.

Returns:
id : integer

The id of the import.

upload_uri : string

The URI which may be used to upload a tabular file for import. You must use this URI to upload the file you wish imported and then inform the Civis API when your upload is complete using the URI given by the runUri field of this response.

run_uri : string

The URI to POST to once the file upload is complete. After uploading the file using the URI given in the uploadUri attribute of the response, POST to this URI to initiate the import of your uploaded file into the platform.

upload_fields : dict

If multipart was set to true, these fields should be included in the multipart upload.
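
The two-step flow implied by upload_uri and run_uri can be sketched as follows. The HTTP calls are injected as plain callables (e.g. thin wrappers around requests.put and requests.post) so the sequencing is visible without committing to a transport library:

```python
def import_tabular_file(client, put, post, path, schema, name,
                        remote_host_id, credential_id):
    """Sketch of the two-step tabular import: create the import, PUT the
    file bytes to upload_uri, then POST to run_uri to start the load.
    `put(uri, fileobj)` and `post(uri)` are caller-supplied HTTP helpers."""
    job = client.imports.post_files(schema, name, remote_host_id,
                                    credential_id)
    with open(path, "rb") as f:
        put(job["upload_uri"], f)   # upload the tabular file
    post(job["run_uri"])            # tell Civis the upload is complete
    return job["id"]
```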

post_files_csv(self, source, destination, first_row_is_header, *, name='DEFAULT', column_delimiter='DEFAULT', escaped='DEFAULT', compression='DEFAULT', existing_table_rows='DEFAULT', max_errors='DEFAULT', table_columns='DEFAULT', loosen_types='DEFAULT', execution='DEFAULT', redshift_destination_options='DEFAULT', hidden='DEFAULT')

Create a CSV Import

Parameters:
source : dict::
  • file_ids : list
    The file ID(s) to import, if importing Civis file(s).
  • storage_path : dict::
    • storage_host_id : integer
      The ID of the source storage host.
    • credential_id : integer
      The ID of the credentials for the source storage host.
    • file_paths : list
      The file or directory path(s) within the bucket from which to import. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
destination : dict::
  • schema : string
    The destination schema name.
  • table : string
    The destination table name.
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
  • primary_keys : list
    A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
  • last_modified_keys : list
    A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
first_row_is_header : boolean

A boolean value indicating whether or not the first row of the source file is a header row.

name : string, optional

The name of the import.

column_delimiter : string, optional

The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

escaped : boolean, optional

A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression : string, optional

The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.

existing_table_rows : string, optional

The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.

max_errors : integer, optional

The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

table_columns : list, optional::

An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
  • name : string
    The column name.
  • sql_type : string
    The SQL type of the column.
loosen_types : boolean, optional

If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution : string, optional

In upsert mode, controls when the data is moved: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, it will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.

redshift_destination_options : dict, optional::
  • diststyle : string
    The diststyle to use for the table. One of “even”, “all”, or “key”.
  • distkey : string
    Distkey for this table in Redshift
  • sortkeys : list
    Sortkeys for this table in Redshift. Please provide a maximum of two.
hidden : boolean, optional

The hidden status of the item.

Returns:
id : integer

The ID for the import.

name : string

The name of the import.

source : dict::
  • file_ids : list
    The file ID(s) to import, if importing Civis file(s).
  • storage_path : dict::
    • storage_host_id : integer
      The ID of the source storage host.
    • credential_id : integer
      The ID of the credentials for the source storage host.
    • file_paths : list
      The file or directory path(s) within the bucket from which to import. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
destination : dict::
  • schema : string
    The destination schema name.
  • table : string
    The destination table name.
  • remote_host_id : integer
    The ID of the destination database host.
  • credential_id : integer
    The ID of the credentials for the destination database.
  • primary_keys : list
    A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
  • last_modified_keys : list
    A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
first_row_is_header : boolean

A boolean value indicating whether or not the first row of the source file is a header row.

column_delimiter : string

The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.

escaped : boolean

A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.

compression : string

The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.

existing_table_rows : string

The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.

max_errors : integer

The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.

table_columns : list::

An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
  • name : string
    The column name.
  • sql_type : string
    The SQL type of the column.
loosen_types : boolean

If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.

execution : string

In upsert mode, controls when the data is moved: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, it will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.

redshift_destination_options : dict::
  • diststyle : string
    The diststyle to use for the table. One of “even”, “all”, or “key”.
  • distkey : string
    Distkey for this table in Redshift
  • sortkeys : list
    Sortkeys for this table in Redshift. Please provide a maximum of two.
hidden : boolean

The hidden status of the item.
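
An upsert-mode call can be sketched with plain dicts; the file ID, host ID, credential ID, and table names below are hypothetical. Note that primary_keys is required because existing_table_rows is “upsert”:

```python
# Hypothetical upsert of a Civis file into scratch.people.
source = {"file_ids": [123456]}
destination = {
    "schema": "scratch",
    "table": "people",
    "remote_host_id": 10,
    "credential_id": 20,
    "primary_keys": ["person_id"],   # required for "upsert" mode
}
table_columns = [
    {"name": "person_id", "sql_type": "BIGINT"},
    {"name": "email", "sql_type": "VARCHAR(256)"},
]

# client.imports.post_files_csv(source, destination, True,
#     existing_table_rows="upsert", table_columns=table_columns)
```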

post_files_csv_runs(self, id)

Start a run

Parameters:
id : integer

The ID of the csv_import.

Returns:
id : integer

The ID of the run.

csv_import_id : integer

The ID of the csv_import.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.
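
A thin wrapper over this endpoint might start the run and surface an immediate failure, as in this sketch (it assumes dict-style access to the response):

```python
def start_csv_run(client, csv_import_id):
    """Start a run of a CSV Import and return the run ID, raising if the
    run comes back already failed or cancelled (sketch)."""
    run = client.imports.post_files_csv_runs(csv_import_id)
    if run["state"] in ("failed", "cancelled"):
        raise RuntimeError(run.get("error") or run["state"])
    return run["id"]
```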

post_files_runs(self, id)

Start a run

Parameters:
id : integer

The ID of the import.

Returns:
id : integer

The ID of the run.

import_id : integer

The ID of the import.

state : string

The state of the run, one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.

is_cancel_requested : boolean

True if run cancel requested, else false.

started_at : string/time

The time the last run started at.

finished_at : string/time

The time the last run completed.

error : string

The error, if any, returned by the run.

post_runs(self, id)

Run an import

Parameters:
id : integer

The ID of the import to run.

Returns:
run_id : integer

The ID of the new run triggered.
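
Triggering a configured import and keeping the run ID for later inspection can be sketched as:

```python
def trigger_import(client, import_id):
    """Run a configured import and return the ID of the new run (sketch;
    assumes dict-style access to the response)."""
    return client.imports.post_runs(import_id)["run_id"]
```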

post_syncs(self, id, source, destination, *, advanced_options='DEFAULT')

Create a sync

Parameters:
id : integer
source : dict::
  • path : string
    The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter, it is recommended you use one of the following: databaseTable, file, googleWorksheet, salesforce
  • database_table : dict::
    • schema : string
      The database schema name.
    • table : string
      The database table name.
    • use_without_schema : boolean
      This attribute is no longer available; defaults to false but cannot be used.
  • file : dict
  • google_worksheet : dict::
    • spreadsheet : string
      The spreadsheet document name.
    • spreadsheet_id : string
      The spreadsheet document id.
    • worksheet : string
      The worksheet tab name.
    • worksheet_id : integer
      The worksheet tab id.
  • salesforce : dict::
    • object_name : string
      The Salesforce object name.
destination : dict::
  • path : string
    The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., for a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, googleWorksheet
  • database_table : dict::
    • schema : string
      The database schema name.
    • table : string
      The database table name.
    • use_without_schema : boolean
      This attribute is no longer available; defaults to false but cannot be used.
  • google_worksheet : dict::
    • spreadsheet : string
      The spreadsheet document name.
    • spreadsheet_id : string
      The spreadsheet document id.
    • worksheet : string
      The worksheet tab name.
    • worksheet_id : integer
      The worksheet tab id.
advanced_options : dict, optional::
  • max_errors : integer
  • existing_table_rows : string
  • diststyle : string
  • distkey : string
  • sortkey1 : string
  • sortkey2 : string
  • column_delimiter : string
  • column_overrides : dict
    Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
  • escaped : boolean
    If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
  • identity_column : string
  • row_chunk_size : integer
  • wipe_destination_table : boolean
  • truncate_long_lines : boolean
  • invalid_char_replacement : string
  • verify_table_row_counts : boolean
  • partition_column_name : string
    This parameter is deprecated
  • partition_schema_name : string
    This parameter is deprecated
  • partition_table_name : string
    This parameter is deprecated
  • partition_table_partition_column_min_name : string
    This parameter is deprecated
  • partition_table_partition_column_max_name : string
    This parameter is deprecated
  • last_modified_column : string
  • mysql_catalog_matches_schema : boolean
    This attribute is no longer available; defaults to true but cannot be used.
  • chunking_method : string
    The method used to break the data into smaller chunks for transfer. The value can be set to sorted_by_identity_columns; if not set, the chunking method is chosen automatically.
  • first_row_is_header : boolean
  • export_action : string
    The export action to execute. Set to “newsprsht” for a new worksheet inside a new spreadsheet; “newwksht” for a new worksheet inside an existing spreadsheet; “updatewksht” to overwrite an existing worksheet inside an existing spreadsheet; or “appendwksht” to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
  • sql_query : string
    If you are doing a Google Sheet export, this is your SQL query.
  • contact_lists : string
  • soql_query : string
  • include_deleted_records : boolean
Returns:
id : integer
source : dict::
  • id : integer
    The ID of the table or file, if available.
  • path : string
    The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, file, googleWorksheet, salesforce
  • database_table : dict::
    • schema : string
      The database schema name.
    • table : string
      The database table name.
    • use_without_schema : boolean
      This attribute is no longer available; defaults to false but cannot be used.
  • file : dict::
    • id : integer
      The file id.
  • google_worksheet : dict::
    • spreadsheet : string
      The spreadsheet document name.
    • spreadsheet_id : string
      The spreadsheet document id.
    • worksheet : string
      The worksheet tab name.
    • worksheet_id : integer
      The worksheet tab id.
  • salesforce : dict::
    • object_name : string
      The Salesforce object name.
destination : dict::
  • path : string
    The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., for a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, googleWorksheet
  • database_table : dict::
    • schema : string
      The database schema name.
    • table : string
      The database table name.
    • use_without_schema : boolean
      This attribute is no longer available; defaults to false but cannot be used.
  • google_worksheet : dict::
    • spreadsheet : string
      The spreadsheet document name.
    • spreadsheet_id : string
      The spreadsheet document id.
    • worksheet : string
      The worksheet tab name.
    • worksheet_id : integer
      The worksheet tab id.
advanced_options : dict::
  • max_errors : integer
  • existing_table_rows : string
  • diststyle : string
  • distkey : string
  • sortkey1 : string
  • sortkey2 : string
  • column_delimiter : string
  • column_overrides : dict
    Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
  • escaped : boolean
    If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
  • identity_column : string
  • row_chunk_size : integer
  • wipe_destination_table : boolean
  • truncate_long_lines : boolean
  • invalid_char_replacement : string
  • verify_table_row_counts : boolean
  • partition_column_name : string
    This parameter is deprecated
  • partition_schema_name : string
    This parameter is deprecated
  • partition_table_name : string
    This parameter is deprecated
  • partition_table_partition_column_min_name : string
    This parameter is deprecated
  • partition_table_partition_column_max_name : string
    This parameter is deprecated
  • last_modified_column : string
  • mysql_catalog_matches_schema : boolean
    This attribute is no longer available; defaults to true but cannot be used.
  • chunking_method : string
    The method used to break the data into smaller chunks for transfer. The value can be set to sorted_by_identity_columns or if not set the chunking method will be chosen automatically.
  • first_row_is_header : boolean
  • export_action : string
    The export action to execute. Set to “newsprsht” for a new worksheet inside a new spreadsheet; “newwksht” for a new worksheet inside an existing spreadsheet; “updatewksht” to overwrite an existing worksheet inside an existing spreadsheet; or “appendwksht” to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
  • sql_query : string
    If you are doing a Google Sheet export, this is your SQL query.
  • contact_lists : string
  • soql_query : string
  • include_deleted_records : boolean
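
A minimal sketch of assembling `post_syncs` arguments for a database-table-to-Google-Sheet sync, preferring the structured `database_table`/`google_worksheet` forms over the legacy `path` parameter as the docs recommend. The schema, table, and spreadsheet names are hypothetical, and the API call itself is left commented out so only the payload construction runs here.

```python
# Source: a database table, identified by schema and table name.
source = {
    "database_table": {"schema": "analytics", "table": "daily_totals"},
}

# Destination: a Google Sheet worksheet, identified by document and tab name.
destination = {
    "google_worksheet": {"spreadsheet": "MySpreadsheet", "worksheet": "Sheet1"},
}

advanced_options = {
    # For a Google Sheet export, this SQL query selects the rows to export.
    "sql_query": "SELECT * FROM analytics.daily_totals",
    # Overwrite the existing worksheet rather than creating a new one.
    "export_action": "updatewksht",
}

# With a live client and a real import ID (hypothetical here), this would be:
# client.imports.post_syncs(import_id, source, destination,
#                           advanced_options=advanced_options)
```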
put(self, id, name, sync_type, is_outbound, *, source='DEFAULT', destination='DEFAULT', schedule='DEFAULT', notifications='DEFAULT', parent_id='DEFAULT', next_run_at='DEFAULT', time_zone='DEFAULT')

Update an import

Parameters:
id : integer

The ID for the import.

name : string

The name of the import.

sync_type : string

The type of sync to perform; one of Dbsync, AutoImport, GdocImport, GdocExport, or Salesforce.

is_outbound : boolean
source : dict, optional::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
destination : dict, optional::
  • remote_host_id : integer
  • credential_id : integer
  • additional_credentials : list
    Array that holds additional credentials used for specific imports. For salesforce imports, the first and only element is the client credential id. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
schedule : dict, optional::
  • scheduled : boolean
    If the item is scheduled.
  • scheduled_days : list
    Days of the week it is scheduled on, as numeric values starting at 0 for Sunday.
  • scheduled_hours : list
    Hours of the day it is scheduled on.
  • scheduled_minutes : list
    Minutes of the day it is scheduled on.
  • scheduled_runs_per_hour : integer
    Alternative to scheduled_minutes; the number of times to run per hour.
notifications : dict, optional::
  • urls : list
    URLs to receive a POST request at job completion.
  • success_email_subject : string
    Custom subject line for success e-mail.
  • success_email_body : string
    Custom body text for success e-mail, written in Markdown.
  • success_email_addresses : list
    Addresses to notify by e-mail when the job completes successfully.
  • success_email_from_name : string
    Name from which success emails are sent; defaults to “Civis.”
  • success_email_reply_to : string
    Address for replies to success emails; defaults to the author of the job.
  • failure_email_addresses : list
    Addresses to notify by e-mail when the job fails.
  • stall_warning_minutes : integer
    Stall warning emails will be sent after this many minutes.
  • success_on : boolean
    If success email notifications are on.
  • failure_on : boolean
    If failure email notifications are on.
parent_id : integer, optional

Parent id to trigger this import from

next_run_at : string/time, optional

The time of the next scheduled run.

time_zone : string, optional

The time zone of this import.

Returns:
name : string

The name of the import.

sync_type