Imports
- class Imports(session_kwargs, client, return_type='raw')
Examples
>>> import civis
>>> client = civis.APIClient()
>>> client.imports.list_shares(...)
Methods
delete_files_csv_runs(id, run_id): Cancel a run
delete_files_runs(id, run_id): Cancel a run
delete_projects(id, project_id): Remove an Import from a project
delete_shares_groups(id, group_id): Revoke the permissions a group has on this object
delete_shares_users(id, user_id): Revoke the permissions a user has on this object
get(id): Get details about an import
get_batches(id): Get details about a batch import
get_files_csv(id): Get a CSV Import
get_files_csv_runs(id, run_id): Check status of a run
get_files_runs(id, run_id): Check status of a run
list(*[, type, destination, source, status, ...]): List Imports
list_batches(*[, hidden, limit, page_num, ...]): List batch imports
list_dependencies(id, *[, user_id]): List dependent objects for this object
list_files_csv_runs(id, *[, limit, ...]): List runs for the given CSV Import job
list_files_csv_runs_logs(id, run_id, *[, ...]): Get the logs for a run
list_files_runs(id, *[, limit, page_num, ...]): List runs for the given Import job
list_files_runs_logs(id, run_id, *[, ...]): Get the logs for a run
list_projects(id, *[, hidden]): List the projects an Import belongs to
list_runs(id): Get the run history of this import
list_runs_logs(id, run_id, *[, last_id, limit]): Get the logs for a run
list_shares(id): List users and groups permissioned on this object
patch_files_csv(id, *[, name, source, ...]): Update some attributes of this CSV Import
post(name, sync_type, is_outbound, *[, ...]): Create a new import configuration
post_batches(file_ids, schema, table, ...[, ...]): Upload multiple files to Civis
post_cancel(id): Cancel a run
post_files(schema, name, remote_host_id, ...): Initiate an import of a tabular file into the platform
post_files_csv(source, destination, ...[, ...]): Create a CSV Import
post_files_csv_runs(id): Start a run
post_files_runs(id): Start a run
post_runs(id): Run an import
post_syncs(id, source, destination, *[, ...]): Create a sync
put(id, name, sync_type, is_outbound, *[, ...]): Update an import
put_archive(id, status): Update the archive status of this object
put_files_csv(id, source, destination, ...): Replace all attributes of this CSV Import
put_files_csv_archive(id, status): Update the archive status of this object
put_projects(id, project_id): Add an Import to a project
put_shares_groups(id, group_ids, ...[, ...]): Set the permissions groups have on this object
put_shares_users(id, user_ids, ...[, ...]): Set the permissions users have on this object
put_syncs(id, sync_id, source, destination, *): Update a sync
put_syncs_archive(id, sync_id, *[, status]): Update the archive status of this sync
put_transfer(id, user_id, ...[, email_body, ...]): Transfer ownership of this object to another user
- delete_files_csv_runs(id: int, run_id: int)
Cancel a run
- Parameters:
- id : int
The ID of the CSV Import job.
- run_id : int
The ID of the run.
- Returns:
- None
Response code 202: success
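As a sketch of how the cancel endpoints above might be wrapped, the helper below cancels a CSV Import run only while it can still be cancelled. The helper name and the duck-typed `imports_api` argument are illustrative, not part of the client; `imports_api` is anything exposing these two methods (e.g. `civis.APIClient().imports`).

```python
CANCELLABLE_STATES = {"queued", "running"}


def cancel_if_active(imports_api, job_id, run_id):
    """Cancel a CSV Import run only if it is still queued or running.

    Returns True if a cancel was issued, False otherwise.
    """
    run = imports_api.get_files_csv_runs(job_id, run_id)
    if run["state"] in CANCELLABLE_STATES:
        imports_api.delete_files_csv_runs(job_id, run_id)
        return True
    return False
```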
- delete_files_runs(id: int, run_id: int)
Cancel a run
- Parameters:
- id : int
The ID of the Import job.
- run_id : int
The ID of the run.
- Returns:
- None
Response code 202: success
- delete_projects(id: int, project_id: int)
Remove an Import from a project
- Parameters:
- id : int
The ID of the Import.
- project_id : int
The ID of the project.
- Returns:
- None
Response code 204: success
- delete_shares_groups(id: int, group_id: int)
Revoke the permissions a group has on this object
- Parameters:
- id : int
The ID of the resource that is shared.
- group_id : int
The ID of the group.
- Returns:
- None
Response code 204: success
- delete_shares_users(id: int, user_id: int)
Revoke the permissions a user has on this object
- Parameters:
- id : int
The ID of the resource that is shared.
- user_id : int
The ID of the user.
- Returns:
- None
Response code 204: success
- get(id: int)
Get details about an import
- Parameters:
- id : int
The ID for the import.
- Returns:
civis.response.Response
- name : str
The name of the import.
- sync_type : str
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- source : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- destination : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- schedule : dict
- scheduled : bool
If the item is scheduled.
- scheduled_days : List[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth.
- scheduled_hours : List[int]
Hours of the day it is scheduled on.
- scheduled_minutes : List[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hour : int
Deprecated in favor of scheduled minutes.
- scheduled_days_of_month : List[int]
Days of the month it is scheduled on; mutually exclusive with scheduledDays.
- notifications : dict
- urls : List[str]
URLs to receive a POST request at job completion
- success_email_subject : str
Custom subject line for success e-mail.
- success_email_body : str
Custom body text for success e-mail, written in Markdown.
- success_email_addresses : List[str]
Addresses to notify by e-mail when the job completes successfully.
- success_email_from_name : str
Name from which success emails are sent; defaults to “Civis.”
- success_email_reply_to : str
Address for replies to success emails; defaults to the author of the job.
- failure_email_addresses : List[str]
Addresses to notify by e-mail when the job fails.
- stall_warning_minutes : int
Stall warning emails will be sent after this many minutes.
- success_on : bool
If success email notifications are on. Defaults to user’s preferences.
- failure_on : bool
If failure email notifications are on. Defaults to user’s preferences.
- parent_id : int
ID of the parent job that triggers this import
- id : int
The ID for the import.
is_outbound : bool
- job_type : str
The job type of this import.
- syncs : List[dict]
List of syncs.
id : int
- source : dict
- id : int
The ID of the table or file, if available.
- path : str
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following: databaseTable, file, googleWorksheet
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- file : dict
- id : int
The file id.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- salesforce : dict
- object_name : str
The Salesforce object name.
- destination : dict
- path : str
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period, i.e. if you have a spreadsheet named “MySpreadsheet” and a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following: databaseTable, googleWorksheet
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- advanced_options : dict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated
- partition_schema_name : str
This parameter is deprecated
- partition_table_name : str
This parameter is deprecated
- partition_table_partition_column_min_name : str
This parameter is deprecated
- partition_table_partition_column_max_name : str
This parameter is deprecated
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated
first_row_is_header : bool
- export_action : str
The kind of export action to execute. Set to “newsprsht” for a new worksheet inside a new spreadsheet, “newwksht” for a new worksheet inside an existing spreadsheet, “updatewksht” to overwrite an existing worksheet inside an existing spreadsheet, or “appendwksht” to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
created_at : str (date-time)
updated_at : str (date-time)
- last_run : dict
id : int
state : str
- created_at : str (time)
The time that the run was queued.
- started_at : str (time)
The time that the run started.
- finished_at : str (time)
The time that the run completed.
- error : str
The error message for this run, if present.
- user : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- running_as : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- next_run_at : str (time)
The time of the next scheduled run.
- time_zone : str
The time zone of this import.
- hidden : bool
The hidden status of the item.
- archived : str
The archival status of the requested item(s).
- my_permission_level : str
Your permission level on the object. One of “read”, “write”, or “manage”.
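The schedule sub-dict returned by get() can be consumed directly as a mapping. As an illustrative sketch (the helper is not part of the client library), here is one way to summarize it:

```python
def describe_schedule(schedule):
    """Summarize the `schedule` sub-dict of an import response.

    Expects the keys documented above: `scheduled`, `scheduled_days`
    (0 = Sunday), and `scheduled_hours`.
    """
    if not schedule.get("scheduled"):
        return "not scheduled"
    days = sorted(schedule.get("scheduled_days") or [])
    hours = sorted(schedule.get("scheduled_hours") or [])
    return f"days={days} hours={hours}"
```

With the real client this would be called as `describe_schedule(client.imports.get(import_id).schedule)`, treating the Response as a mapping.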
- get_batches(id: int)
Get details about a batch import
- Parameters:
- id : int
The ID for the import.
- Returns:
civis.response.Response
- id : int
The ID for the import.
- schema : str
The destination schema name. This schema must already exist in Redshift.
- table : str
The destination table name, without the schema prefix. This table must already exist in Redshift.
- remote_host_id : int
The ID of the destination database host.
- state : str
The state of the run; one of “queued”, “running”, “succeeded”, “failed”, or “cancelled”.
- started_at : str (time)
The time the last run started at.
- finished_at : str (time)
The time the last run completed.
- error : str
The error returned by the run, if any.
- hidden : bool
The hidden status of the item.
- get_files_csv(id: int)
Get a CSV Import
- Parameters:
- id : int
- Returns:
civis.response.Response
- id : int
The ID for the import.
- name : str
The name of the import.
- source : dict
- file_ids : List[int]
The file ID(s) to import, if importing Civis file(s).
- storage_path : dict
- storage_host_id : int
The ID of the source storage host.
- credential_id : int
The ID of the credentials for the source storage host.
- file_paths : List[str]
The file or directory path(s) within the bucket from which to import, e.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destination : dict
- schema : str
The destination schema name.
- table : str
The destination table name.
- remote_host_id : int
The ID of the destination database host.
- credential_id : int
The ID of the credentials for the destination database.
- primary_keys : List[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keys : List[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_header : bool
A boolean value indicating whether or not the first row of the source file is a header row.
- column_delimiter : str
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escaped : bool
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compression : str
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rows : str
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errors : int
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columns : List[dict]
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- name : str
The column name.
- sql_type : str
The SQL type of the column.
- loosen_types : bool
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- execution : str
In upsert mode, controls the movement of data. If set to “delayed”, the data will be moved after a brief delay. If set to “immediate”, the data will be moved immediately. In non-upsert modes, controls the speed at which detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.
- redshift_destination_options : dict
- diststyle : str
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkey : str
Distkey for this table in Redshift
- sortkeys : List[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- hidden : bool
The hidden status of the item.
- my_permission_level : str
Your permission level on the object. One of “read”, “write”, or “manage”.
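The constraints stated above (valid existing_table_rows modes, and “upsert” requiring primary_keys) can be checked client-side before submitting a CSV Import. A minimal sketch, with the helper name being an assumption rather than part of the API:

```python
VALID_MODES = {"fail", "truncate", "append", "drop", "upsert"}


def check_csv_destination(existing_table_rows="fail", primary_keys=None):
    """Validate the documented CSV Import constraints before submission:
    existing_table_rows must be a known mode, and "upsert" mode
    requires primary_keys.
    """
    if existing_table_rows not in VALID_MODES:
        raise ValueError(f"invalid existing_table_rows: {existing_table_rows!r}")
    if existing_table_rows == "upsert" and not primary_keys:
        raise ValueError('"upsert" mode requires primary_keys')
```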
- get_files_csv_runs(id: int, run_id: int)
Check status of a run
- Parameters:
- id : int
The ID of the CSV Import job.
- run_id : int
The ID of the run.
- Returns:
civis.response.Response
- id : int
The ID of the run.
- csv_import_id : int
The ID of the CSV Import job.
- state : str
The state of the run; one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.
- is_cancel_requested : bool
True if run cancel requested, else false.
- created_at : str (time)
The time the run was created.
- started_at : str (time)
The time the run started at.
- finished_at : str (time)
The time the run completed.
- error : str
The error, if any, returned by the run.
- get_files_runs(id: int, run_id: int)
Check status of a run
- Parameters:
- id : int
The ID of the Import job.
- run_id : int
The ID of the run.
- Returns:
civis.response.Response
- id : int
The ID of the run.
- import_id : int
The ID of the Import job.
- state : str
The state of the run; one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.
- is_cancel_requested : bool
True if run cancel requested, else false.
- created_at : str (time)
The time the run was created.
- started_at : str (time)
The time the run started at.
- finished_at : str (time)
The time the run completed.
- error : str
The error, if any, returned by the run.
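Since a run moves through the states above until it reaches a terminal one, status checks are naturally wrapped in a polling loop. A sketch with injectable clock and sleep so it can be exercised without waiting (the helper is illustrative, not part of the client):

```python
import time

TERMINAL_STATES = {"succeeded", "failed", "cancelled"}


def wait_for_run(get_state, timeout_s=300.0, poll_s=5.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Poll get_state() until the run reaches a terminal state or times out.

    `get_state` is a zero-argument callable returning the run's state
    string, e.g. built around client.imports.get_files_runs(id, run_id).
    Returns the last state observed.
    """
    deadline = clock() + timeout_s
    while True:
        state = get_state()
        if state in TERMINAL_STATES or clock() >= deadline:
            return state
        sleep(poll_s)
```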
- list(*, type: str = None, destination: str = None, source: str = None, status: str = None, author: str = None, hidden: bool = None, archived: str = None, limit: int = None, page_num: int = None, order: str = None, order_dir: str = None, iterator: bool = None)
List Imports
- Parameters:
- type : str, optional
If specified, return imports of these types. It accepts a comma-separated list; possible values are ‘AutoImport’, ‘Dbsync’, ‘Salesforce’, ‘GdocImport’.
- destination : str, optional
If specified, returns imports with one of these destinations. It accepts a comma-separated list of remote host IDs.
- source : str, optional
If specified, returns imports with one of these sources. It accepts a comma-separated list of remote host IDs. ‘Dbsync’ must be specified for ‘type’.
- status : str, optional
If specified, returns imports with one of these statuses. It accepts a comma-separated list; possible values are ‘running’, ‘failed’, ‘succeeded’, ‘idle’, ‘scheduled’.
- author : str, optional
If specified, return items from any of these authors. It accepts a comma-separated list of user IDs.
- hidden : bool, optional
If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.
- archived : str, optional
The archival status of the requested item(s).
- limit : int, optional
Number of results to return. Defaults to 20. Maximum allowed is 50.
- page_num : int, optional
Page number of the results to return. Defaults to the first page, 1.
- order : str, optional
The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, name, created_at, last_run.updated_at.
- order_dir : str, optional
Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.
- iterator : bool, optional
If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.
- Returns:
civis.response.PaginatedResponse
- name : str
The name of the import.
- sync_type : str
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- source : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- destination : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- schedule : dict
- scheduled : bool
If the item is scheduled.
- scheduled_days : List[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth.
- scheduled_hours : List[int]
Hours of the day it is scheduled on.
- scheduled_minutes : List[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hour : int
Deprecated in favor of scheduled minutes.
- scheduled_days_of_month : List[int]
Days of the month it is scheduled on; mutually exclusive with scheduledDays.
- id : int
The ID for the import.
is_outbound : bool
- job_type : str
The job type of this import.
state : str
created_at : str (date-time)
updated_at : str (date-time)
- last_run : dict
id : int
state : str
- created_at : str (time)
The time that the run was queued.
- started_at : str (time)
The time that the run started.
- finished_at : str (time)
The time that the run completed.
- error : str
The error message for this run, if present.
- user : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- time_zone : str
The time zone of this import.
- archived : str
The archival status of the requested item(s).
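Because iterator=True yields one response per import across all pages, results can be streamed without manual paging. A sketch that counts imports whose most recent run failed, treating responses as mappings with the fields documented above (the helper itself is illustrative):

```python
def count_failed(imports_iter):
    """Count imports whose last run is in the "failed" state.

    `imports_iter` is any iterable of mappings shaped like the list()
    output, e.g. client.imports.list(iterator=True).
    """
    return sum(
        1
        for imp in imports_iter
        if (imp.get("last_run") or {}).get("state") == "failed"
    )
```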
- list_batches(*, hidden: bool = None, limit: int = None, page_num: int = None, order: str = None, order_dir: str = None, iterator: bool = None)
List batch imports
- Parameters:
- hidden : bool, optional
If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.
- limit : int, optional
Number of results to return. Defaults to 20. Maximum allowed is 50.
- page_num : int, optional
Page number of the results to return. Defaults to the first page, 1.
- order : str, optional
The field on which to order the result set. Defaults to updated_at. Must be one of: updated_at, created_at.
- order_dir : str, optional
Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.
- iterator : bool, optional
If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.
- Returns:
civis.response.PaginatedResponse
- id : int
The ID for the import.
- schema : str
The destination schema name. This schema must already exist in Redshift.
- table : str
The destination table name, without the schema prefix. This table must already exist in Redshift.
- remote_host_id : int
The ID of the destination database host.
- state : str
The state of the run; one of “queued”, “running”, “succeeded”, “failed”, or “cancelled”.
- started_at : str (time)
The time the last run started at.
- finished_at : str (time)
The time the last run completed.
- error : str
The error returned by the run, if any.
- list_dependencies(id: int, *, user_id: int = None)
List dependent objects for this object
- Parameters:
- id : int
The ID of the resource that is shared.
- user_id : int, optional
ID of target user
- Returns:
civis.response.Response
- object_type : str
Dependent object type
- fco_type : str
Human readable dependent object type
- id : int
Dependent object ID
- name : str
Dependent object name, or nil if the requesting user cannot read this object
- permission_level : str
Permission level of target user (not user’s groups) for dependent object. Null if no target user or not shareable (e.g. a database table).
- description : str
Additional information about the dependency, if relevant
- shareable : bool
Whether or not the requesting user can share this object.
- list_files_csv_runs(id: int, *, limit: int = None, page_num: int = None, order: str = None, order_dir: str = None, iterator: bool = None)
List runs for the given CSV Import job
- Parameters:
- id : int
The ID of the CSV Import job.
- limit : int, optional
Number of results to return. Defaults to 20. Maximum allowed is 100.
- page_num : int, optional
Page number of the results to return. Defaults to the first page, 1.
- order : str, optional
The field on which to order the result set. Defaults to id. Must be one of: id.
- order_dir : str, optional
Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.
- iterator : bool, optional
If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.
- Returns:
civis.response.PaginatedResponse
- id : int
The ID of the run.
- csv_import_id : int
The ID of the CSV Import job.
- state : str
The state of the run; one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.
- is_cancel_requested : bool
True if run cancel requested, else false.
- created_at : str (time)
The time the run was created.
- started_at : str (time)
The time the run started at.
- finished_at : str (time)
The time the run completed.
- error : str
The error, if any, returned by the run.
- list_files_csv_runs_logs(id: int, run_id: int, *, last_id: int = None, limit: int = None)
Get the logs for a run
- Parameters:
- id : int
The ID of the CSV Import job.
- run_id : int
The ID of the run.
- last_id : int, optional
The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.
- limit : int, optional
The maximum number of log messages to return. Default of 10000.
- Returns:
civis.response.Response
- id : int
The ID of the log.
- created_at : str (date-time)
The time the log was created.
- message : str
The log message.
- level : str
The level of the log. One of unknown, fatal, error, warn, info, debug.
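The last_id parameter above supports incremental log fetching: each poll passes the highest log ID already seen. A small cursor helper makes that explicit (illustrative, not part of the client):

```python
def next_last_id(log_entries, last_id=None):
    """Compute the `last_id` cursor for the next log poll.

    `log_entries` is a page of log mappings with an "id" key, as returned
    by the *_runs_logs endpoints. Returns the highest ID seen so far, or
    the previous cursor unchanged if the page was empty.
    """
    ids = [entry["id"] for entry in log_entries]
    if last_id is not None:
        ids.append(last_id)
    return max(ids) if ids else None
```

Repeatedly calling the logs endpoint with the returned cursor yields only log entries not previously fetched.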
- list_files_runs(id: int, *, limit: int = None, page_num: int = None, order: str = None, order_dir: str = None, iterator: bool = None)
List runs for the given Import job
- Parameters:
- id : int
The ID of the Import job.
- limit : int, optional
Number of results to return. Defaults to 20. Maximum allowed is 100.
- page_num : int, optional
Page number of the results to return. Defaults to the first page, 1.
- order : str, optional
The field on which to order the result set. Defaults to id. Must be one of: id.
- order_dir : str, optional
Direction in which to sort, either asc (ascending) or desc (descending), defaulting to desc.
- iterator : bool, optional
If True, return a generator to iterate over all responses. Use when more results than the maximum allowed by limit are needed. When True, limit and page_num are ignored. Defaults to False.
- Returns:
civis.response.PaginatedResponse
- id : int
The ID of the run.
- import_id : int
The ID of the Import job.
- state : str
The state of the run; one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.
- is_cancel_requested : bool
True if run cancel requested, else false.
- created_at : str (time)
The time the run was created.
- started_at : str (time)
The time the run started at.
- finished_at : str (time)
The time the run completed.
- error : str
The error, if any, returned by the run.
- list_files_runs_logs(id: int, run_id: int, *, last_id: int = None, limit: int = None)
Get the logs for a run
- Parameters:
- id : int
The ID of the Import job.
- run_id : int
The ID of the run.
- last_id : int, optional
The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.
- limit : int, optional
The maximum number of log messages to return. Default of 10000.
- Returns:
civis.response.Response
- id : int
The ID of the log.
- created_at : str (date-time)
The time the log was created.
- message : str
The log message.
- level : str
The level of the log. One of unknown, fatal, error, warn, info, debug.
- list_projects(id: int, *, hidden: bool = None)
List the projects an Import belongs to
- Parameters:
- id : int
The ID of the Import.
- hidden : bool, optional
If specified to be true, returns hidden items. Defaults to false, returning non-hidden items.
- Returns:
civis.response.Response
- id : int
The ID for this project.
- author : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- name : str
The name of this project.
- description : str
A description of the project.
- users : List[dict]
Users who can see the project.
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
auto_share : bool
created_at : str (time)
updated_at : str (time)
- archived : str
The archival status of the requested item(s).
- list_runs(id: int)
Get the run history of this import
- Parameters:
- id : int
- Returns:
civis.response.Response
id : int
state : str
- created_at : str (time)
The time that the run was queued.
- started_at : str (time)
The time that the run started.
- finished_at : str (time)
The time that the run completed.
- error : str
The error message for this run, if present.
- list_runs_logs(id: int, run_id: int, *, last_id: int = None, limit: int = None)
Get the logs for a run
- Parameters:
- id : int
The ID of the import job.
- run_id : int
The ID of the run.
- last_id : int, optional
The ID of the last log message received. Log entries with this ID value or lower will be omitted. Logs are sorted by ID if this value is provided, and are otherwise sorted by createdAt.
- limit : int, optional
The maximum number of log messages to return. Default of 10000.
- Returns:
civis.response.Response
- id : int
The ID of the log.
- created_at : str (date-time)
The time the log was created.
- message : str
The log message.
- level : str
The level of the log. One of unknown, fatal, error, warn, info, debug.
- list_shares(id: int)
List users and groups permissioned on this object
- Parameters:
- id : int
The ID of the resource that is shared.
- Returns:
civis.response.Response
- readers : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- writers : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- owners : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- total_user_shares : int
For owners, the number of total users shared. For writers and readers, the number of visible users shared.
- total_group_shares : int
For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
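The readers/writers/owners structure above flattens naturally into a per-permission view. A sketch that extracts user names per permission level, treating the response as a mapping (the helper is illustrative, not part of the client):

```python
def share_summary(resp):
    """Flatten a list_shares-style response into {level: [user names]}."""
    return {
        level: [u["name"] for u in resp.get(level, {}).get("users", [])]
        for level in ("readers", "writers", "owners")
    }
```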
- patch_files_csv(id: int, *, name: str = None, source: dict = None, destination: dict = None, first_row_is_header: bool = None, column_delimiter: str = None, escaped: bool = None, compression: str = None, existing_table_rows: str = None, max_errors: int = None, table_columns: List[dict] = None, loosen_types: bool = None, execution: str = None, redshift_destination_options: dict = None)
Update some attributes of this CSV Import
- Parameters:
- idint
The ID for the import.
- namestr, optional
The name of the import.
- sourcedict, optional
- file_idsList[int]
The file ID(s) to import, if importing Civis file(s).
- storage_pathdict
- storage_host_idint
The ID of the source storage host.
- credential_idint
The ID of the credentials for the source storage host.
- file_pathsList[str]
The file or directory path(s) within the bucket from which to import. E.g. the file_path for “s3://mybucket/files/all/” would be “/files/all/”If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destinationdict, optional
- schemastr
The destination schema name.
- tablestr
The destination table name.
- remote_host_idint
The ID of the destination database host.
- credential_idint
The ID of the credentials for the destination database.
- primary_keysList[str]
A list of column(s) which together uniquely identify a row in the destination table.These columns must not contain NULL values. If the import mode is “upsert”, this field is required;see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keysList[str]
A list of the columns indicating a record has been updated.If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_headerbool, optional
A boolean value indicating whether or not the first row of the source file is a header row.
- column_delimiterstr, optional
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escapedbool, optional
A boolean value indicating whether or not the source file has quotes escaped with a backslash.Defaults to false.
- compressionstr, optional
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rowsstr, optional
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”.Defaults to “fail”.
- max_errorsint, optional
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columnsList[dict], optional
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- namestr
The column name.
- sql_typestr
The SQL type of the column.
- loosen_typesbool, optional
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- executionstr, optional
In upsert mode, controls when the data is moved. If set to “delayed”, the data will be moved after a brief delay. If set to “immediate”, the data will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and to speed up non-upsert imports.
- redshift_destination_optionsdict, optional
- diststylestr
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkeystr
The distkey for this table in Redshift.
- sortkeysList[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- Returns:
civis.response.Response
- idint
The ID for the import.
- namestr
The name of the import.
- sourcedict
- file_idsList[int]
The file ID(s) to import, if importing Civis file(s).
- storage_pathdict
- storage_host_idint
The ID of the source storage host.
- credential_idint
The ID of the credentials for the source storage host.
- file_pathsList[str]
The file or directory path(s) within the bucket from which to import. E.g., the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destinationdict
- schemastr
The destination schema name.
- tablestr
The destination table name.
- remote_host_idint
The ID of the destination database host.
- credential_idint
The ID of the credentials for the destination database.
- primary_keysList[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keysList[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_headerbool
A boolean value indicating whether or not the first row of the source file is a header row.
- column_delimiterstr
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escapedbool
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compressionstr
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rowsstr
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errorsint
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columnsList[dict]
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- namestr
The column name.
- sql_typestr
The SQL type of the column.
- loosen_typesbool
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- executionstr
In upsert mode, controls when the data is moved. If set to “delayed”, the data will be moved after a brief delay. If set to “immediate”, the data will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and to speed up non-upsert imports.
- redshift_destination_optionsdict
- diststylestr
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkeystr
The distkey for this table in Redshift.
- sortkeysList[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- hiddenbool
The hidden status of the item.
- my_permission_levelstr
Your permission level on the object. One of “read”, “write”, or “manage”.
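Putting the patch_files_csv parameters together, a minimal call that renames an existing CSV Import and raises its error tolerance might look like the following sketch. The job ID and values are hypothetical placeholders; substitute your own.

```python
# Hypothetical job ID and values; substitute your own.
csv_import_id = 123
patch_kwargs = {
    "name": "daily-contacts-load",
    "max_errors": 10,  # tolerate up to 10 bad rows before failing
}

# Requires a valid Civis API key in the environment:
# import civis
# client = civis.APIClient()
# resp = client.imports.patch_files_csv(csv_import_id, **patch_kwargs)
```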
- post(name: str, sync_type: str, is_outbound: bool, *, source: dict = None, destination: dict = None, schedule: dict = None, notifications: dict = None, parent_id: int = None, next_run_at: str = None, time_zone: str = None, hidden: bool = None)
Create a new import configuration
- Parameters:
- namestr
The name of the import.
- sync_typestr
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- is_outboundbool
- sourcedict, optional
remote_host_id : int
credential_id : int
- additional_credentialsList[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
- destinationdict, optional
remote_host_id : int
credential_id : int
- additional_credentialsList[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
- scheduledict, optional
- scheduledbool
If the item is scheduled.
- scheduled_daysList[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth
- scheduled_hoursList[int]
Hours of the day it is scheduled on.
- scheduled_minutesList[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hourint
Deprecated in favor of scheduled minutes.
- scheduled_days_of_monthList[int]
Days of the month it is scheduled on, mutually exclusive with scheduledDays.
- notificationsdict, optional
- urlsList[str]
URLs to receive a POST request at job completion
- success_email_subjectstr
Custom subject line for success e-mail.
- success_email_bodystr
Custom body text for success e-mail, written in Markdown.
- success_email_addressesList[str]
Addresses to notify by e-mail when the job completes successfully.
- success_email_from_namestr
Name from which success emails are sent; defaults to “Civis.”
- success_email_reply_tostr
Address for replies to success emails; defaults to the author of the job.
- failure_email_addressesList[str]
Addresses to notify by e-mail when the job fails.
- stall_warning_minutesint
Stall warning emails will be sent after this many minutes.
- success_onbool
If success email notifications are on. Defaults to user’s preferences.
- failure_onbool
If failure email notifications are on. Defaults to user’s preferences.
- parent_idint, optional
The ID of the parent job that will trigger this import.
- next_run_atstr (time), optional
The time of the next scheduled run.
- time_zonestr, optional
The time zone of this import.
- hiddenbool, optional
The hidden status of the item.
- Returns:
civis.response.Response
- namestr
The name of the import.
- sync_typestr
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- sourcedict
remote_host_id : int
credential_id : int
- additional_credentialsList[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- destinationdict
remote_host_id : int
credential_id : int
- additional_credentialsList[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- scheduledict
- scheduledbool
If the item is scheduled.
- scheduled_daysList[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth
- scheduled_hoursList[int]
Hours of the day it is scheduled on.
- scheduled_minutesList[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hourint
Deprecated in favor of scheduled minutes.
- scheduled_days_of_monthList[int]
Days of the month it is scheduled on, mutually exclusive with scheduledDays.
- notificationsdict
- urlsList[str]
URLs to receive a POST request at job completion
- success_email_subjectstr
Custom subject line for success e-mail.
- success_email_bodystr
Custom body text for success e-mail, written in Markdown.
- success_email_addressesList[str]
Addresses to notify by e-mail when the job completes successfully.
- success_email_from_namestr
Name from which success emails are sent; defaults to “Civis.”
- success_email_reply_tostr
Address for replies to success emails; defaults to the author of the job.
- failure_email_addressesList[str]
Addresses to notify by e-mail when the job fails.
- stall_warning_minutesint
Stall warning emails will be sent after this many minutes.
- success_onbool
If success email notifications are on. Defaults to user’s preferences.
- failure_onbool
If failure email notifications are on. Defaults to user’s preferences.
- parent_idint
The ID of the parent job that will trigger this import.
- idint
The ID for the import.
is_outbound : bool
- job_typestr
The job type of this import.
- syncsList[dict]
List of syncs.
id : int
- sourcedict
- idint
The ID of the table or file, if available.
- pathstr
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, file, googleWorksheet.
- database_tabledict
- schemastr
The database schema name.
- tablestr
The database table name.
- use_without_schemabool
This attribute is no longer available; defaults to false but cannot be used.
- filedict
- idint
The file id.
- google_worksheetdict
- spreadsheetstr
The spreadsheet document name.
- spreadsheet_idstr
The spreadsheet document id.
- worksheetstr
The worksheet tab name.
- worksheet_idint
The worksheet tab id.
- salesforcedict
- object_namestr
The Salesforce object name.
- destinationdict
- pathstr
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., if you have a spreadsheet named “MySpreadsheet” and a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, googleWorksheet.
- database_tabledict
- schemastr
The database schema name.
- tablestr
The database table name.
- use_without_schemabool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheetdict
- spreadsheetstr
The spreadsheet document name.
- spreadsheet_idstr
The spreadsheet document id.
- worksheetstr
The worksheet tab name.
- worksheet_idint
The worksheet tab id.
- advanced_optionsdict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overridesdict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escapedbool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_namestr
This parameter is deprecated.
- partition_schema_namestr
This parameter is deprecated.
- partition_table_namestr
This parameter is deprecated.
- partition_table_partition_column_min_namestr
This parameter is deprecated.
- partition_table_partition_column_max_namestr
This parameter is deprecated.
last_modified_column : str
- mysql_catalog_matches_schemabool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_methodstr
This parameter is deprecated.
first_row_is_header : bool
- export_actionstr
The kind of export action the export should execute. Set to “newsprsht” if you want a new worksheet inside a new spreadsheet. Set to “newwksht” if you want a new worksheet inside an existing spreadsheet. Set to “updatewksht” if you want to overwrite an existing worksheet inside an existing spreadsheet. Set to “appendwksht” if you want to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_querystr
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
state : str
created_at : str (date-time)
updated_at : str (date-time)
- last_rundict
id : int
state : str
- created_atstr (time)
The time that the run was queued.
- started_atstr (time)
The time that the run started.
- finished_atstr (time)
The time that the run completed.
- errorstr
The error message for this run, if present.
- userdict
- idint
The ID of this user.
- namestr
This user’s name.
- usernamestr
This user’s username.
- initialsstr
This user’s initials.
- onlinebool
Whether this user is online.
- running_asdict
- idint
The ID of this user.
- namestr
This user’s name.
- usernamestr
This user’s username.
- initialsstr
This user’s initials.
- onlinebool
Whether this user is online.
- next_run_atstr (time)
The time of the next scheduled run.
- time_zonestr
The time zone of this import.
- hiddenbool
The hidden status of the item.
- archivedstr
The archival status of the requested item(s).
- my_permission_levelstr
Your permission level on the object. One of “read”, “write”, or “manage”.
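As a sketch of the post method above, the following builds the nested dicts for a scheduled DB sync configuration. All host and credential IDs are hypothetical placeholders for values from your own Civis account.

```python
# Hypothetical host and credential IDs; substitute values from your account.
source = {"remote_host_id": 111, "credential_id": 222}
destination = {"remote_host_id": 333, "credential_id": 444}
schedule = {
    "scheduled": True,
    "scheduled_days": [1],    # Monday (days are numbered from 0 = Sunday)
    "scheduled_hours": [6],   # run at 06:00 in the import's time zone
    "scheduled_minutes": [0],
}

# Requires a valid Civis API key in the environment:
# import civis
# client = civis.APIClient()
# imp = client.imports.post(
#     "weekly-db-sync", "Dbsync", False,
#     source=source, destination=destination,
#     schedule=schedule, time_zone="America/New_York",
# )
# print(imp.id)
```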
- post_batches(file_ids: List[int], schema: str, table: str, remote_host_id: int, credential_id: int, *, column_delimiter: str = None, first_row_is_header: bool = None, compression: str = None, hidden: bool = None)
Upload multiple files to Civis
- Parameters:
- file_idsList[int]
The file IDs for the import.
- schemastr
The destination schema name. This schema must already exist in Redshift.
- tablestr
The destination table name, without the schema prefix. This table must already exist in Redshift.
- remote_host_idint
The ID of the destination database host.
- credential_idint
The ID of the credentials to be used when performing the database import.
- column_delimiterstr, optional
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. If unspecified, defaults to “comma”.
- first_row_is_headerbool, optional
A boolean value indicating whether or not the first row is a header row. If unspecified, defaults to false.
- compressionstr, optional
The type of compression. Valid arguments are “gzip”, “zip”, and “none”. If unspecified, defaults to “gzip”.
- hiddenbool, optional
The hidden status of the item.
- Returns:
civis.response.Response
- idint
The ID for the import.
- schemastr
The destination schema name. This schema must already exist in Redshift.
- tablestr
The destination table name, without the schema prefix. This table must already exist in Redshift.
- remote_host_idint
The ID of the destination database host.
- statestr
The state of the run; one of “queued”, “running”, “succeeded”, “failed”, or “cancelled”.
- started_atstr (time)
The time the last run started at.
- finished_atstr (time)
The time the last run completed.
- errorstr
The error returned by the run, if any.
- hiddenbool
The hidden status of the item.
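For post_batches, a sketch of importing several identically formatted gzipped CSVs in one job might look like this. The file, host, and credential IDs are hypothetical; all listed files must share the same column layout.

```python
# Hypothetical file and host IDs; all files must share the same layout.
file_ids = [1001, 1002]
batch_kwargs = {
    "column_delimiter": "comma",
    "first_row_is_header": True,
    "compression": "gzip",  # the stated default for batch imports
}

# Requires a valid Civis API key; schema and table must already exist:
# import civis
# client = civis.APIClient()
# resp = client.imports.post_batches(
#     file_ids, "staging", "events", 42, 7, **batch_kwargs
# )
# print(resp.state)
```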
- post_cancel(id: int)
Cancel a run
- Parameters:
- idint
The ID of the job.
- Returns:
civis.response.Response
- idint
The ID of the run.
- statestr
The state of the run, one of ‘queued’, ‘running’ or ‘cancelled’.
- is_cancel_requestedbool
True if run cancel requested, else false.
- post_files(schema: str, name: str, remote_host_id: int, credential_id: int, *, max_errors: int = None, existing_table_rows: str = None, diststyle: str = None, distkey: str = None, sortkey1: str = None, sortkey2: str = None, column_delimiter: str = None, first_row_is_header: bool = None, multipart: bool = None, escaped: bool = None, hidden: bool = None)
Initiate an import of a tabular file into the platform
- Parameters:
- schemastr
The schema of the destination table.
- namestr
The name of the destination table.
- remote_host_idint
The id of the destination database host.
- credential_idint
The id of the credentials to be used when performing the database import.
- max_errorsint, optional
The maximum number of rows with errors to remove from the import before failing.
- existing_table_rowsstr, optional
The behavior if a table with the requested name already exists. One of “fail”, “truncate”, “append”, or “drop”. Defaults to “fail”.
- diststylestr, optional
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkeystr, optional
The column to use as the distkey for the table.
- sortkey1str, optional
The column to use as the sort key for the table.
- sortkey2str, optional
The second column in a compound sortkey for the table.
- column_delimiterstr, optional
The column delimiter of the file. If column_delimiter is null or omitted, it will be auto-detected. Valid arguments are “comma”, “tab”, and “pipe”.
- first_row_is_headerbool, optional
A boolean value indicating whether or not the first row is a header row. If first_row_is_header is null or omitted, it will be auto-detected.
- multipartbool, optional
If true, the upload URI will require a multipart/form-data POST request. Defaults to false.
- escapedbool, optional
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
- hiddenbool, optional
The hidden status of the item.
- Returns:
civis.response.Response
- idint
The id of the import.
- upload_uristr
The URI which may be used to upload a tabular file for import. You must use this URI to upload the file you wish to import, and then inform the Civis API when your upload is complete using the URI given by the runUri field of this response.
- run_uristr
The URI to POST to once the file upload is complete. After uploading the file using the URI given in the uploadUri attribute of the response, POST to this URI to initiate the import of your uploaded file into the platform.
- upload_fieldsdict
If multipart was set to true, these fields should be included in the multipart upload.
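The create/upload/run handshake described by upload_uri and run_uri can be sketched as follows. This is a minimal illustration assuming multipart=False, so the upload is a plain HTTP PUT of the file body; the helper function name is hypothetical, and production code would check the import's run status afterward.

```python
import urllib.request


def import_csv(client, path, schema, table, remote_host_id, credential_id):
    """Sketch of the post_files flow: create the import, upload the file
    to upload_uri, then POST to run_uri to start the import.
    Assumes multipart=False (a plain PUT of the raw file body)."""
    resp = client.imports.post_files(
        schema, table, remote_host_id, credential_id,
        existing_table_rows="truncate", first_row_is_header=True,
    )
    # Step 2: upload the tabular file to the signed URI.
    with open(path, "rb") as f:
        req = urllib.request.Request(resp.upload_uri, data=f.read(), method="PUT")
        urllib.request.urlopen(req)
    # Step 3: tell the API the upload is complete, starting the import.
    urllib.request.urlopen(
        urllib.request.Request(resp.run_uri, data=b"", method="POST")
    )
    return resp.id
```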
- post_files_csv(source: dict, destination: dict, first_row_is_header: bool, *, name: str = None, column_delimiter: str = None, escaped: bool = None, compression: str = None, existing_table_rows: str = None, max_errors: int = None, table_columns: List[dict] = None, loosen_types: bool = None, execution: str = None, redshift_destination_options: dict = None, hidden: bool = None)
Create a CSV Import
- Parameters:
- sourcedict
- file_idsList[int]
The file ID(s) to import, if importing Civis file(s).
- storage_pathdict
- storage_host_idint
The ID of the source storage host.
- credential_idint
The ID of the credentials for the source storage host.
- file_pathsList[str]
The file or directory path(s) within the bucket from which to import. E.g., the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destinationdict
- schemastr
The destination schema name.
- tablestr
The destination table name.
- remote_host_idint
The ID of the destination database host.
- credential_idint
The ID of the credentials for the destination database.
- primary_keysList[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keysList[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_headerbool
A boolean value indicating whether or not the first row of the source file is a header row.
- namestr, optional
The name of the import.
- column_delimiterstr, optional
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escapedbool, optional
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compressionstr, optional
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rowsstr, optional
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errorsint, optional
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columnsList[dict], optional
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- namestr
The column name.
- sql_typestr
The SQL type of the column.
- loosen_typesbool, optional
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- executionstr, optional
In upsert mode, controls when the data is moved. If set to “delayed”, the data will be moved after a brief delay. If set to “immediate”, the data will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and to speed up non-upsert imports.
- redshift_destination_optionsdict, optional
- diststylestr
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkeystr
The distkey for this table in Redshift.
- sortkeysList[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- hiddenbool, optional
The hidden status of the item.
- Returns:
civis.response.Response
- idint
The ID for the import.
- namestr
The name of the import.
- sourcedict
- file_idsList[int]
The file ID(s) to import, if importing Civis file(s).
- storage_pathdict
- storage_host_idint
The ID of the source storage host.
- credential_idint
The ID of the credentials for the source storage host.
- file_pathsList[str]
The file or directory path(s) within the bucket from which to import. E.g., the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destinationdict
- schemastr
The destination schema name.
- tablestr
The destination table name.
- remote_host_idint
The ID of the destination database host.
- credential_idint
The ID of the credentials for the destination database.
- primary_keysList[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keysList[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_headerbool
A boolean value indicating whether or not the first row of the source file is a header row.
- column_delimiterstr
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escapedbool
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compressionstr
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rowsstr
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errorsint
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columnsList[dict]
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- namestr
The column name.
- sql_typestr
The SQL type of the column.
- loosen_typesbool
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- executionstr
In upsert mode, controls when the data is moved. If set to “delayed”, the data will be moved after a brief delay. If set to “immediate”, the data will be moved immediately. In non-upsert modes, controls how quickly detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and to speed up non-upsert imports.
- redshift_destination_optionsdict
- diststylestr
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkeystr
The distkey for this table in Redshift.
- sortkeysList[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- hiddenbool
The hidden status of the item.
- my_permission_levelstr
Your permission level on the object. One of “read”, “write”, or “manage”.
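A sketch of a post_files_csv call tying the source, destination, and table_columns parameters together. Since existing_table_rows="drop" is used, table_columns is required, per the parameter notes above. All IDs are hypothetical placeholders.

```python
# Hypothetical IDs; substitute file, host, and credential IDs from your account.
source = {"file_ids": [1234]}  # a CSV already uploaded to Civis Files
destination = {
    "schema": "staging",
    "table": "contacts",
    "remote_host_id": 42,
    "credential_id": 7,
}
table_columns = [
    {"name": "email", "sql_type": "VARCHAR(256)"},
    {"name": "signup_date", "sql_type": "DATE"},
]

# Requires a valid Civis API key in the environment:
# import civis
# client = civis.APIClient()
# job = client.imports.post_files_csv(
#     source, destination, True,        # first_row_is_header=True
#     existing_table_rows="drop",
#     table_columns=table_columns,      # required because the table is dropped
# )
# run = client.imports.post_files_csv_runs(job.id)
```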
- post_files_csv_runs(id: int)
Start a run
- Parameters:
- idint
The ID of the CSV Import job.
- Returns:
civis.response.Response
- idint
The ID of the run.
- csv_import_idint
The ID of the CSV Import job.
- statestr
The state of the run; one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.
- is_cancel_requestedbool
True if run cancel requested, else false.
- created_atstr (time)
The time the run was created.
- started_atstr (time)
The time the run started at.
- finished_atstr (time)
The time the run completed.
- errorstr
The error, if any, returned by the run.
- post_files_runs(id: int)
Start a run
- Parameters:
- idint
The ID of the Import job.
- Returns:
civis.response.Response
- idint
The ID of the run.
- import_idint
The ID of the Import job.
- statestr
The state of the run; one of ‘queued’, ‘running’, ‘succeeded’, ‘failed’, or ‘cancelled’.
- is_cancel_requestedbool
True if run cancel requested, else false.
- created_atstr (time)
The time the run was created.
- started_atstr (time)
The time the run started at.
- finished_atstr (time)
The time the run completed.
- errorstr
The error, if any, returned by the run.
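Since post_files_runs returns immediately with a queued run, a common pattern is to poll get_files_runs until the run reaches a terminal state. A minimal sketch (the helper name and poll interval are illustrative; production code would add a timeout and surface status.error on failure):

```python
import time


def run_and_wait(client, import_id, poll_interval=10):
    """Start a run of an Import job and poll until it finishes."""
    run = client.imports.post_files_runs(import_id)
    while True:
        status = client.imports.get_files_runs(import_id, run.id)
        if status.state in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(poll_interval)
```

The same pattern applies to CSV Imports via post_files_csv_runs and get_files_csv_runs.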
- post_runs(id: int)
Run an import
- Parameters:
- idint
The ID of the import to run.
- Returns:
civis.response.Response
- run_idint
The ID of the new run triggered.
- post_syncs(id: int, source: dict, destination: dict, *, advanced_options: dict = None)
Create a sync
- Parameters:
- idint
- sourcedict
- pathstr
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, file, googleWorksheet.
- database_tabledict
- schemastr
The database schema name.
- tablestr
The database table name.
- use_without_schemabool
This attribute is no longer available; defaults to false but cannot be used.
file : dict
- google_worksheetdict
- spreadsheetstr
The spreadsheet document name.
- spreadsheet_idstr
The spreadsheet document id.
- worksheetstr
The worksheet tab name.
- worksheet_idint
The worksheet tab id.
- salesforcedict
- object_namestr
The Salesforce object name.
- destinationdict
- pathstr
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., if you have a spreadsheet named “MySpreadsheet” and a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, googleWorksheet.
- database_tabledict
- schemastr
The database schema name.
- tablestr
The database table name.
- use_without_schemabool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheetdict
- spreadsheetstr
The spreadsheet document name.
- spreadsheet_idstr
The spreadsheet document id.
- worksheetstr
The worksheet tab name.
- worksheet_idint
The worksheet tab id.
- advanced_optionsdict, optional
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overridesdict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escapedbool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_namestr
This parameter is deprecated.
- partition_schema_namestr
This parameter is deprecated.
- partition_table_namestr
This parameter is deprecated.
- partition_table_partition_column_min_namestr
This parameter is deprecated.
- partition_table_partition_column_max_namestr
This parameter is deprecated.
last_modified_column : str
- mysql_catalog_matches_schemabool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_methodstr
This parameter is deprecated.
first_row_is_header : bool
- export_actionstr
The kind of export action the export should execute. Set to “newsprsht” if you want a new worksheet inside a new spreadsheet. Set to “newwksht” if you want a new worksheet inside an existing spreadsheet. Set to “updatewksht” if you want to overwrite an existing worksheet inside an existing spreadsheet. Set to “appendwksht” if you want to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_querystr
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
- Returns:
civis.response.Response
id : int
- sourcedict
- idint
The ID of the table or file, if available.
- pathstr
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended that you use one of the following instead: databaseTable, file, googleWorksheet.
- database_tabledict
- schemastr
The database schema name.
- tablestr
The database table name.
- use_without_schemabool
This attribute is no longer available; defaults to false but cannot be used.
- filedict
- idint
The file id.
- google_worksheetdict
- spreadsheetstr
The spreadsheet document name.
- spreadsheet_idstr
The spreadsheet document id.
- worksheetstr
The worksheet tab name.
- worksheet_idint
The worksheet tab id.
- salesforcedict
- object_namestr
The Salesforce object name.
- destinationdict
- pathstr
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period. i.e. if you have a spreadsheet named “MySpreadsheet” and a sheet called “Sheet1” this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter, it is recommended you use one of the following: databaseTable, googleWorksheet
- database_tabledict
- schemastr
The database schema name.
- tablestr
The database table name.
- use_without_schemabool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheetdict
- spreadsheetstr
The spreadsheet document name.
- spreadsheet_idstr
The spreadsheet document id.
- worksheetstr
The worksheet tab name.
- worksheet_idint
The worksheet tab id.
- advanced_optionsdict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated.
- partition_schema_name : str
This parameter is deprecated.
- partition_table_name : str
This parameter is deprecated.
- partition_table_partition_column_min_name : str
This parameter is deprecated.
- partition_table_partition_column_max_name : str
This parameter is deprecated.
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated.
first_row_is_header : bool
- export_action : str
The export action to execute: “newsprsht” creates a new worksheet inside a new spreadsheet, “newwksht” creates a new worksheet inside an existing spreadsheet, “updatewksht” overwrites an existing worksheet inside an existing spreadsheet, and “appendwksht” appends to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
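The sync `source` and `destination` structures above can be assembled as plain dicts before being passed to the sync endpoints. A minimal sketch, using the recommended `database_table` form rather than the legacy `path` parameter; the schema and table names and the import ID are placeholders:

```python
# Build a sync payload using the database_table form documented above.
source = {
    "database_table": {
        "schema": "staging",       # the database schema name
        "table": "contacts_raw",   # the database table name
    }
}
destination = {
    "database_table": {
        "schema": "analytics",
        "table": "contacts",
    }
}
advanced_options = {
    "existing_table_rows": "truncate",  # replace destination rows each run
    "max_errors": 0,
}

# The actual call requires a configured Civis API client and a real import ID:
# client = civis.APIClient()
# resp = client.imports.post_syncs(import_id, source, destination,
#                                  advanced_options=advanced_options)
```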
- put(id: int, name: str, sync_type: str, is_outbound: bool, *, source: dict = None, destination: dict = None, schedule: dict = None, notifications: dict = None, parent_id: int = None, next_run_at: str = None, time_zone: str = None)
Update an import
- Parameters:
- id : int
The ID for the import.
- name : str
The name of the import.
- sync_type : str
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- is_outbound : bool
- source : dict, optional
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
- destination : dict, optional
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
- schedule : dict, optional
- scheduled : bool
If the item is scheduled.
- scheduled_days : List[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth.
- scheduled_hours : List[int]
Hours of the day it is scheduled on.
- scheduled_minutes : List[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hour : int
Deprecated in favor of scheduled minutes.
- scheduled_days_of_month : List[int]
Days of the month it is scheduled on; mutually exclusive with scheduledDays.
- notifications : dict, optional
- urls : List[str]
URLs to receive a POST request at job completion.
- success_email_subject : str
Custom subject line for success e-mail.
- success_email_body : str
Custom body text for success e-mail, written in Markdown.
- success_email_addresses : List[str]
Addresses to notify by e-mail when the job completes successfully.
- success_email_from_name : str
Name from which success emails are sent; defaults to “Civis.”
- success_email_reply_to : str
Address for replies to success emails; defaults to the author of the job.
- failure_email_addresses : List[str]
Addresses to notify by e-mail when the job fails.
- stall_warning_minutes : int
Stall warning emails will be sent after this number of minutes.
- success_on : bool
If success email notifications are on. Defaults to user’s preferences.
- failure_on : bool
If failure email notifications are on. Defaults to user’s preferences.
- parent_id : int, optional
Parent id to trigger this import from.
- next_run_at : str (time), optional
The time of the next scheduled run.
- time_zone : str, optional
The time zone of this import.
- Returns:
civis.response.Response
- name : str
The name of the import.
- sync_type : str
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- source : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- destination : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- schedule : dict
- scheduled : bool
If the item is scheduled.
- scheduled_days : List[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth.
- scheduled_hours : List[int]
Hours of the day it is scheduled on.
- scheduled_minutes : List[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hour : int
Deprecated in favor of scheduled minutes.
- scheduled_days_of_month : List[int]
Days of the month it is scheduled on; mutually exclusive with scheduledDays.
- notifications : dict
- urls : List[str]
URLs to receive a POST request at job completion.
- success_email_subject : str
Custom subject line for success e-mail.
- success_email_body : str
Custom body text for success e-mail, written in Markdown.
- success_email_addresses : List[str]
Addresses to notify by e-mail when the job completes successfully.
- success_email_from_name : str
Name from which success emails are sent; defaults to “Civis.”
- success_email_reply_to : str
Address for replies to success emails; defaults to the author of the job.
- failure_email_addresses : List[str]
Addresses to notify by e-mail when the job fails.
- stall_warning_minutes : int
Stall warning emails will be sent after this number of minutes.
- success_on : bool
If success email notifications are on. Defaults to user’s preferences.
- failure_on : bool
If failure email notifications are on. Defaults to user’s preferences.
- parent_id : int
Parent id to trigger this import from.
- id : int
The ID for the import.
is_outbound : bool
- job_type : str
The job type of this import.
- syncs : List[dict]
List of syncs.
id : int
- source : dict
- id : int
The ID of the table or file, if available.
- path : str
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, file, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- file : dict
- id : int
The file id.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- salesforce : dict
- object_name : str
The Salesforce object name.
- destination : dict
- path : str
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., for a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- advanced_options : dict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated.
- partition_schema_name : str
This parameter is deprecated.
- partition_table_name : str
This parameter is deprecated.
- partition_table_partition_column_min_name : str
This parameter is deprecated.
- partition_table_partition_column_max_name : str
This parameter is deprecated.
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated.
first_row_is_header : bool
- export_action : str
The export action to execute: “newsprsht” creates a new worksheet inside a new spreadsheet, “newwksht” creates a new worksheet inside an existing spreadsheet, “updatewksht” overwrites an existing worksheet inside an existing spreadsheet, and “appendwksht” appends to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
state : str
created_at : str (date-time)
updated_at : str (date-time)
- last_run : dict
id : int
state : str
- created_at : str (time)
The time that the run was queued.
- started_at : str (time)
The time that the run started.
- finished_at : str (time)
The time that the run completed.
- error : str
The error message for this run, if present.
- user : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- running_as : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- next_run_at : str (time)
The time of the next scheduled run.
- time_zone : str
The time zone of this import.
- hidden : bool
The hidden status of the item.
- archived : str
The archival status of the requested item(s).
- my_permission_level : str
Your permission level on the object. One of “read”, “write”, or “manage”.
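The `schedule` and `notifications` structures accepted by `put` are nested dicts. A minimal sketch of assembling them; the import name, addresses, and IDs are placeholders:

```python
# A weekly schedule: run Mon/Wed/Fri at 06:30 (days are 0-indexed from Sunday).
schedule = {
    "scheduled": True,
    "scheduled_days": [1, 3, 5],
    "scheduled_hours": [6],
    "scheduled_minutes": [30],
}

# Notify only on failure, per the notification fields documented above.
notifications = {
    "failure_email_addresses": ["data-alerts@example.com"],
    "failure_on": True,
    "success_on": False,
}

# The actual call requires a configured client and a real import ID. Note that
# put replaces the import's attributes, so name, sync_type, and is_outbound
# are required positional arguments:
# client = civis.APIClient()
# resp = client.imports.put(import_id, "Nightly contacts sync", "Dbsync",
#                           False, schedule=schedule,
#                           notifications=notifications)
```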
- put_archive(id: int, status: bool)
Update the archive status of this object
- Parameters:
- id : int
The ID of the object.
- status : bool
The desired archived status of the object.
- Returns:
civis.response.Response
- name : str
The name of the import.
- sync_type : str
The type of sync to perform; one of Dbsync, AutoImport, GdocImport, and GdocExport.
- source : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- destination : dict
remote_host_id : int
credential_id : int
- additional_credentials : List[int]
Array that holds additional credentials used for specific imports. For DB Syncs, the first element is an SSL private key credential id, and the second element is the corresponding public key credential id.
name : str
- schedule : dict
- scheduled : bool
If the item is scheduled.
- scheduled_days : List[int]
Days of the week, based on numeric value starting at 0 for Sunday. Mutually exclusive with scheduledDaysOfMonth.
- scheduled_hours : List[int]
Hours of the day it is scheduled on.
- scheduled_minutes : List[int]
Minutes of the day it is scheduled on.
- scheduled_runs_per_hour : int
Deprecated in favor of scheduled minutes.
- scheduled_days_of_month : List[int]
Days of the month it is scheduled on; mutually exclusive with scheduledDays.
- notifications : dict
- urls : List[str]
URLs to receive a POST request at job completion.
- success_email_subject : str
Custom subject line for success e-mail.
- success_email_body : str
Custom body text for success e-mail, written in Markdown.
- success_email_addresses : List[str]
Addresses to notify by e-mail when the job completes successfully.
- success_email_from_name : str
Name from which success emails are sent; defaults to “Civis.”
- success_email_reply_to : str
Address for replies to success emails; defaults to the author of the job.
- failure_email_addresses : List[str]
Addresses to notify by e-mail when the job fails.
- stall_warning_minutes : int
Stall warning emails will be sent after this number of minutes.
- success_on : bool
If success email notifications are on. Defaults to user’s preferences.
- failure_on : bool
If failure email notifications are on. Defaults to user’s preferences.
- parent_id : int
Parent id to trigger this import from.
- id : int
The ID for the import.
is_outbound : bool
- job_type : str
The job type of this import.
- syncs : List[dict]
List of syncs.
id : int
- source : dict
- id : int
The ID of the table or file, if available.
- path : str
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, file, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- file : dict
- id : int
The file id.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- salesforce : dict
- object_name : str
The Salesforce object name.
- destination : dict
- path : str
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period; e.g., for a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1”, this field would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- advanced_options : dict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated.
- partition_schema_name : str
This parameter is deprecated.
- partition_table_name : str
This parameter is deprecated.
- partition_table_partition_column_min_name : str
This parameter is deprecated.
- partition_table_partition_column_max_name : str
This parameter is deprecated.
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated.
first_row_is_header : bool
- export_action : str
The export action to execute: “newsprsht” creates a new worksheet inside a new spreadsheet, “newwksht” creates a new worksheet inside an existing spreadsheet, “updatewksht” overwrites an existing worksheet inside an existing spreadsheet, and “appendwksht” appends to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
state : str
created_at : str (date-time)
updated_at : str (date-time)
- last_run : dict
id : int
state : str
- created_at : str (time)
The time that the run was queued.
- started_at : str (time)
The time that the run started.
- finished_at : str (time)
The time that the run completed.
- error : str
The error message for this run, if present.
- user : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- running_as : dict
- id : int
The ID of this user.
- name : str
This user’s name.
- username : str
This user’s username.
- initials : str
This user’s initials.
- online : bool
Whether this user is online.
- next_run_at : str (time)
The time of the next scheduled run.
- time_zone : str
The time zone of this import.
- hidden : bool
The hidden status of the item.
- archived : str
The archival status of the requested item(s).
- my_permission_level : str
Your permission level on the object. One of “read”, “write”, or “manage”.
- put_files_csv(id: int, source: dict, destination: dict, first_row_is_header: bool, *, name: str = None, column_delimiter: str = None, escaped: bool = None, compression: str = None, existing_table_rows: str = None, max_errors: int = None, table_columns: List[dict] = None, loosen_types: bool = None, execution: str = None, redshift_destination_options: dict = None)
Replace all attributes of this CSV Import
- Parameters:
- id : int
The ID for the import.
- source : dict
- file_ids : List[int]
The file ID(s) to import, if importing Civis file(s).
- storage_path : dict
- storage_host_id : int
The ID of the source storage host.
- credential_id : int
The ID of the credentials for the source storage host.
- file_paths : List[str]
The file or directory path(s) within the bucket from which to import. E.g., the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destination : dict
- schema : str
The destination schema name.
- table : str
The destination table name.
- remote_host_id : int
The ID of the destination database host.
- credential_id : int
The ID of the credentials for the destination database.
- primary_keys : List[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keys : List[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_header : bool
A boolean value indicating whether or not the first row of the source file is a header row.
- name : str, optional
The name of the import.
- column_delimiter : str, optional
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escaped : bool, optional
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compression : str, optional
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rows : str, optional
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errors : int, optional
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columns : List[dict], optional
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- name : str
The column name.
- sql_type : str
The SQL type of the column.
- loosen_types : bool, optional
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- execution : str, optional
In upsert mode, controls the movement of data: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, the data will be moved immediately. In non-upsert modes, controls the speed at which detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.
- redshift_destination_options : dict, optional
- diststyle : str
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkey : str
Distkey for this table in Redshift.
- sortkeys : List[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- Returns:
civis.response.Response
- id : int
The ID for the import.
- name : str
The name of the import.
- source : dict
- file_ids : List[int]
The file ID(s) to import, if importing Civis file(s).
- storage_path : dict
- storage_host_id : int
The ID of the source storage host.
- credential_id : int
The ID of the credentials for the source storage host.
- file_paths : List[str]
The file or directory path(s) within the bucket from which to import. E.g., the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destination : dict
- schema : str
The destination schema name.
- table : str
The destination table name.
- remote_host_id : int
The ID of the destination database host.
- credential_id : int
The ID of the credentials for the destination database.
- primary_keys : List[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keys : List[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_header : bool
A boolean value indicating whether or not the first row of the source file is a header row.
- column_delimiter : str
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escaped : bool
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compression : str
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rows : str
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errors : int
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columns : List[dict]
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- name : str
The column name.
- sql_type : str
The SQL type of the column.
- loosen_types : bool
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- execution : str
In upsert mode, controls the movement of data: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, the data will be moved immediately. In non-upsert modes, controls the speed at which detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.
- redshift_destination_options : dict
- diststyle : str
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkey : str
Distkey for this table in Redshift.
- sortkeys : List[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- hidden : bool
The hidden status of the item.
- my_permission_level : str
Your permission level on the object. One of “read”, “write”, or “manage”.
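The `source` and `destination` payloads for `put_files_csv` can be sketched as follows. The file ID, host and credential IDs, and column layout are placeholders; `table_columns` is included because it is required when the destination table does not yet exist:

```python
# Import a previously uploaded Civis file into a new database table.
source = {"file_ids": [1234567]}          # placeholder Civis file ID
destination = {
    "schema": "scratch",
    "table": "survey_responses",
    "remote_host_id": 42,                  # placeholder database host ID
    "credential_id": 99,                   # placeholder credential ID
}

# Column layout, in source-file order; "sql_type" is needed for new tables.
table_columns = [
    {"name": "respondent_id", "sql_type": "INTEGER"},
    {"name": "answer", "sql_type": "VARCHAR(1024)"},
]

# The actual call requires a configured client and a real CSV import ID:
# client = civis.APIClient()
# resp = client.imports.put_files_csv(
#     csv_import_id, source, destination, True,   # first_row_is_header
#     existing_table_rows="drop", table_columns=table_columns)
```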
- put_files_csv_archive(id: int, status: bool)
Update the archive status of this object
- Parameters:
- id : int
The ID of the object.
- status : bool
The desired archived status of the object.
- Returns:
civis.response.Response
- id : int
The ID for the import.
- name : str
The name of the import.
- source : dict
- file_ids : List[int]
The file ID(s) to import, if importing Civis file(s).
- storage_path : dict
- storage_host_id : int
The ID of the source storage host.
- credential_id : int
The ID of the credentials for the source storage host.
- file_paths : List[str]
The file or directory path(s) within the bucket from which to import. E.g., the file_path for “s3://mybucket/files/all/” would be “/files/all/”. If specifying a directory path, the job will import every file found under that path. All files must have the same column layout and file format (e.g., compression, columnDelimiter, etc.).
- destination : dict
- schema : str
The destination schema name.
- table : str
The destination table name.
- remote_host_id : int
The ID of the destination database host.
- credential_id : int
The ID of the credentials for the destination database.
- primary_keys : List[str]
A list of column(s) which together uniquely identify a row in the destination table. These columns must not contain NULL values. If the import mode is “upsert”, this field is required; see the Civis Helpdesk article on “Advanced CSV Imports via the Civis API” for more information.
- last_modified_keys : List[str]
A list of the columns indicating a record has been updated. If the destination table does not exist, and the import mode is “upsert”, this field is required.
- first_row_is_header : bool
A boolean value indicating whether or not the first row of the source file is a header row.
- column_delimiter : str
The column delimiter for the file. Valid arguments are “comma”, “tab”, and “pipe”. Defaults to “comma”.
- escaped : bool
A boolean value indicating whether or not the source file has quotes escaped with a backslash. Defaults to false.
- compression : str
The type of compression of the source file. Valid arguments are “gzip” and “none”. Defaults to “none”.
- existing_table_rows : str
The behavior if a destination table with the requested name already exists. One of “fail”, “truncate”, “append”, “drop”, or “upsert”. Defaults to “fail”.
- max_errors : int
The maximum number of rows with errors to ignore before failing. This option is not supported for Postgres databases.
- table_columns : List[dict]
An array of hashes corresponding to the columns in the order they appear in the source file. Each hash should have keys for database column “name” and “sqlType”. This parameter is required if the table does not exist, the table is being dropped, or the columns in the source file do not appear in the same order as in the destination table. The “sqlType” key is not required when appending to an existing table.
- name : str
The column name.
- sql_type : str
The SQL type of the column.
- loosen_types : bool
If true, SQL types with precisions/lengths will have these values increased to accommodate data growth in future loads. Type loosening only occurs on table creation. Defaults to false.
- execution : str
In upsert mode, controls the movement of data: if set to “delayed”, the data will be moved after a brief delay; if set to “immediate”, the data will be moved immediately. In non-upsert modes, controls the speed at which detailed column stats appear in the data catalogue. Defaults to “delayed”, to accommodate concurrent upserts to the same table and speedier non-upsert imports.
- redshift_destination_options : dict
- diststyle : str
The diststyle to use for the table. One of “even”, “all”, or “key”.
- distkey : str
Distkey for this table in Redshift.
- sortkeys : List[str]
Sortkeys for this table in Redshift. Please provide a maximum of two.
- hidden : bool
The hidden status of the item.
- my_permission_level : str
Your permission level on the object. One of “read”, “write”, or “manage”.
- put_projects(id: int, project_id: int)
Add an Import to a project
- Parameters:
- id : int
The ID of the Import.
- project_id : int
The ID of the project.
- Returns:
- None
Response code 204: success
Set the permissions groups have on this object
- Parameters:
- id : int
The ID of the resource that is shared.
- group_ids : List[int]
An array of one or more group IDs.
- permission_level : str
Options are: “read”, “write”, or “manage”.
- share_email_body : str, optional
Custom body text for e-mail sent on a share.
- send_shared_email : bool, optional
Send email to the recipients of a share.
- Returns:
civis.response.Response
- readers : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- writers : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- owners : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- total_user_shares : int
For owners, the number of total users shared. For writers and readers, the number of visible users shared.
- total_group_shares : int
For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
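As a sketch, sharing an import with groups and reading back who can now see it might look like the following (the IDs and wrapper are hypothetical; the returned `civis.response.Response` supports dict-style access):

```python
def share_with_groups(client, import_id: int, group_ids: list, level: str = "read"):
    """Grant `level` ("read", "write", or "manage") on an import to the
    given groups, suppressing notification e-mails, and return the names
    of all groups now listed as readers."""
    resp = client.imports.put_shares_groups(
        import_id,
        group_ids,
        level,
        send_shared_email=False,
    )
    return [group["name"] for group in resp["readers"]["groups"]]
```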
- put_shares_users(id: int, user_ids: List[int], permission_level: str, *, share_email_body: str = None, send_shared_email: bool = None)
Set the permissions users have on this object
- Parameters:
- id : int
The ID of the resource that is shared.
- user_ids : List[int]
An array of one or more user IDs.
- permission_level : str
Options are: “read”, “write”, or “manage”.
- share_email_body : str, optional
Custom body text for e-mail sent on a share.
- send_shared_email : bool, optional
Send email to the recipients of a share.
- Returns:
civis.response.Response
- readers : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- writers : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- owners : dict
- users : List[dict]
id : int
name : str
- groups : List[dict]
id : int
name : str
- total_user_shares : int
For owners, the number of total users shared. For writers and readers, the number of visible users shared.
- total_group_shares : int
For owners, the number of total groups shared. For writers and readers, the number of visible groups shared.
- put_syncs(id: int, sync_id: int, source: dict, destination: dict, *, advanced_options: dict = None)
Update a sync
- Parameters:
- id : int
The ID of the import to fetch.
- sync_id : int
The ID of the sync to fetch.
- source : dict
- path : str
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, file, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
file : dict
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- salesforce : dict
- object_name : str
The Salesforce object name.
- destination : dict
- path : str
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period, e.g., a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1” would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- advanced_options : dict, optional
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated
- partition_schema_name : str
This parameter is deprecated
- partition_table_name : str
This parameter is deprecated
- partition_table_partition_column_min_name : str
This parameter is deprecated
- partition_table_partition_column_max_name : str
This parameter is deprecated
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated
first_row_is_header : bool
- export_action : str
The export action to execute. Set to “newsprsht” for a new worksheet inside a new spreadsheet, “newwksht” for a new worksheet inside an existing spreadsheet, “updatewksht” to overwrite an existing worksheet inside an existing spreadsheet, or “appendwksht” to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
- Returns:
civis.response.Response
id : int
- source : dict
- id : int
The ID of the table or file, if available.
- path : str
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, file, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- file : dict
- id : int
The file id.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- salesforce : dict
- object_name : str
The Salesforce object name.
- destination : dict
- path : str
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period, e.g., a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1” would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- advanced_options : dict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated
- partition_schema_name : str
This parameter is deprecated
- partition_table_name : str
This parameter is deprecated
- partition_table_partition_column_min_name : str
This parameter is deprecated
- partition_table_partition_column_max_name : str
This parameter is deprecated
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated
first_row_is_header : bool
- export_action : str
The export action to execute. Set to “newsprsht” for a new worksheet inside a new spreadsheet, “newwksht” for a new worksheet inside an existing spreadsheet, “updatewksht” to overwrite an existing worksheet inside an existing spreadsheet, or “appendwksht” to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
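The parameter docs above recommend the databaseTable and googleWorksheet forms over the legacy `path` strings. A sketch of assembling the payloads for a table-to-sheet sync, under that recommendation (the helper, names, and IDs are illustrative, not part of the client):

```python
def build_table_to_sheet_sync(schema: str, table: str,
                              spreadsheet_id: str, worksheet: str):
    """Build (source, destination, advanced_options) dicts for put_syncs,
    overwriting an existing worksheet tab on each run."""
    source = {"database_table": {"schema": schema, "table": table}}
    destination = {
        "google_worksheet": {
            "spreadsheet_id": spreadsheet_id,
            "worksheet": worksheet,
        }
    }
    # "updatewksht" overwrites the existing tab rather than creating a new one.
    advanced_options = {"export_action": "updatewksht"}
    return source, destination, advanced_options


# Hypothetical usage (123 and 456 stand in for real import and sync IDs):
# src, dst, opts = build_table_to_sheet_sync("public", "customers",
#                                            "spreadsheet-id", "Sheet1")
# client.imports.put_syncs(123, 456, src, dst, advanced_options=opts)
```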
- put_syncs_archive(id: int, sync_id: int, *, status: bool = None)
Update the archive status of this sync
- Parameters:
- id : int
The ID of the import to fetch.
- sync_id : int
The ID of the sync to fetch.
- status : bool, optional
The desired archived status of the sync.
- Returns:
civis.response.Response
id : int
- source : dict
- id : int
The ID of the table or file, if available.
- path : str
The path of the dataset to sync from; for a database source, schema.tablename. If you are doing a Google Sheet export, this can be blank. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, file, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- file : dict
- id : int
The file id.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- salesforce : dict
- object_name : str
The Salesforce object name.
- destination : dict
- path : str
The schema.tablename to sync to. If you are doing a Google Sheet export, this is the spreadsheet and sheet name separated by a period, e.g., a spreadsheet named “MySpreadsheet” with a sheet called “Sheet1” would be “MySpreadsheet.Sheet1”. This is a legacy parameter; it is recommended you use one of the following instead: databaseTable, googleWorksheet.
- database_table : dict
- schema : str
The database schema name.
- table : str
The database table name.
- use_without_schema : bool
This attribute is no longer available; defaults to false but cannot be used.
- google_worksheet : dict
- spreadsheet : str
The spreadsheet document name.
- spreadsheet_id : str
The spreadsheet document id.
- worksheet : str
The worksheet tab name.
- worksheet_id : int
The worksheet tab id.
- advanced_options : dict
max_errors : int
existing_table_rows : str
diststyle : str
distkey : str
sortkey1 : str
sortkey2 : str
column_delimiter : str
- column_overrides : dict
Hash used for overriding auto-detected names and types, with keys being the index of the column being overridden.
- escaped : bool
If true, escape quotes with a backslash; otherwise, escape quotes by double-quoting. Defaults to false.
identity_column : str
row_chunk_size : int
wipe_destination_table : bool
truncate_long_lines : bool
invalid_char_replacement : str
verify_table_row_counts : bool
- partition_column_name : str
This parameter is deprecated
- partition_schema_name : str
This parameter is deprecated
- partition_table_name : str
This parameter is deprecated
- partition_table_partition_column_min_name : str
This parameter is deprecated
- partition_table_partition_column_max_name : str
This parameter is deprecated
last_modified_column : str
- mysql_catalog_matches_schema : bool
This attribute is no longer available; defaults to true but cannot be used.
- chunking_method : str
This parameter is deprecated
first_row_is_header : bool
- export_action : str
The export action to execute. Set to “newsprsht” for a new worksheet inside a new spreadsheet, “newwksht” for a new worksheet inside an existing spreadsheet, “updatewksht” to overwrite an existing worksheet inside an existing spreadsheet, or “appendwksht” to append to the end of an existing worksheet inside an existing spreadsheet. Defaults to “newsprsht”.
- sql_query : str
If you are doing a Google Sheet export, this is your SQL query.
contact_lists : str
soql_query : str
include_deleted_records : bool
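Archiving a sync is a single flag toggle; a minimal sketch (the wrapper and IDs are illustrative, and the returned `civis.response.Response` supports dict-style access):

```python
def set_sync_archived(client, import_id: int, sync_id: int,
                      archived: bool = True) -> int:
    """Set a sync's archived status and return the sync's ID from the
    response as a simple confirmation."""
    resp = client.imports.put_syncs_archive(import_id, sync_id, status=archived)
    return resp["id"]


# Hypothetical usage (requires a configured CIVIS_API_KEY and real IDs):
# import civis
# client = civis.APIClient()
# set_sync_archived(client, 123, 456, archived=True)
```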
- put_transfer(id: int, user_id: int, include_dependencies: bool, *, email_body: str = None, send_email: bool = None)
Transfer ownership of this object to another user
- Parameters:
- id : int
The ID of the resource that is shared.
- user_id : int
ID of the target user.
- include_dependencies : bool
Whether or not to give manage permissions on all dependencies.
- email_body : str, optional
Custom body text for e-mail sent on transfer.
- send_email : bool, optional
Send email to the target user of the transfer.
- Returns:
civis.response.Response
- dependencies : List[dict]
Dependent objects for this object
- object_type : str
Dependent object type
- fco_type : str
Human-readable dependent object type
- id : int
Dependent object ID
- name : str
Dependent object name, or null if the requesting user cannot read this object
- permission_level : str
Permission level of the target user (not the user’s groups) for the dependent object. Null if there is no target user or the object is not shareable (e.g., a database table).
- description : str
Additional information about the dependency, if relevant
- shared : bool
Whether the dependent object was successfully shared with the target user
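Because each dependency reports a per-object `shared` flag, a transfer helper can surface anything that failed to move with the object. A sketch (the wrapper and IDs are hypothetical, not part of the client):

```python
def transfer_import(client, import_id: int, new_owner_id: int):
    """Transfer ownership of an import, granting manage permissions on all
    dependencies to the new owner, and return the human-readable types of
    any dependencies that could not be shared (for example database
    tables, which are not shareable)."""
    resp = client.imports.put_transfer(
        import_id,
        new_owner_id,
        include_dependencies=True,
        send_email=False,
    )
    return [dep["fco_type"] for dep in resp["dependencies"] if not dep["shared"]]
```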