Class RDSDataSpec
- All Implemented Interfaces: Serializable, Cloneable

The data specification of an Amazon Relational Database Service (Amazon RDS) DataSource.
Constructor Summary
- RDSDataSpec()
Method Summary
Modifier and Type / Method / Description

- RDSDataSpec clone()
- boolean equals(Object obj)
- RDSDatabaseCredentials getDatabaseCredentials() - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- RDSDatabase getDatabaseInformation() - Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- String getDataRearrangement() - A JSON string that represents the splitting requirement of a DataSource.
- String getDataSchema() - A JSON string that represents the schema for an Amazon RDS DataSource.
- String getDataSchemaUri() - The Amazon S3 location of the DataSchema.
- String getResourceRole() - The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy task from Amazon RDS to Amazon S3.
- String getS3StagingLocation() - The Amazon S3 location for staging Amazon RDS data.
- List<String> getSecurityGroupIds() - The security group IDs to be used to access a VPC-based RDS DB instance.
- String getSelectSqlQuery() - The query that is used to retrieve the observation data for the DataSource.
- String getServiceRole() - The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3.
- String getSubnetId() - The subnet ID to be used to access a VPC-based RDS DB instance.
- int hashCode()
- void setDatabaseCredentials(RDSDatabaseCredentials databaseCredentials) - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- void setDatabaseInformation(RDSDatabase databaseInformation) - Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- void setDataRearrangement(String dataRearrangement) - A JSON string that represents the splitting requirement of a DataSource.
- void setDataSchema(String dataSchema) - A JSON string that represents the schema for an Amazon RDS DataSource.
- void setDataSchemaUri(String dataSchemaUri) - The Amazon S3 location of the DataSchema.
- void setResourceRole(String resourceRole) - The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy task from Amazon RDS to Amazon S3.
- void setS3StagingLocation(String s3StagingLocation) - The Amazon S3 location for staging Amazon RDS data.
- void setSecurityGroupIds(Collection<String> securityGroupIds) - The security group IDs to be used to access a VPC-based RDS DB instance.
- void setSelectSqlQuery(String selectSqlQuery) - The query that is used to retrieve the observation data for the DataSource.
- void setServiceRole(String serviceRole) - The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3.
- void setSubnetId(String subnetId) - The subnet ID to be used to access a VPC-based RDS DB instance.
- String toString() - Returns a string representation of this object; useful for testing and debugging.
- RDSDataSpec withDatabaseCredentials(RDSDatabaseCredentials databaseCredentials) - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- RDSDataSpec withDatabaseInformation(RDSDatabase databaseInformation) - Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- RDSDataSpec withDataRearrangement(String dataRearrangement) - A JSON string that represents the splitting requirement of a DataSource.
- RDSDataSpec withDataSchema(String dataSchema) - A JSON string that represents the schema for an Amazon RDS DataSource.
- RDSDataSpec withDataSchemaUri(String dataSchemaUri) - The Amazon S3 location of the DataSchema.
- RDSDataSpec withResourceRole(String resourceRole) - The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy task from Amazon RDS to Amazon S3.
- RDSDataSpec withS3StagingLocation(String s3StagingLocation) - The Amazon S3 location for staging Amazon RDS data.
- RDSDataSpec withSecurityGroupIds(String... securityGroupIds) - The security group IDs to be used to access a VPC-based RDS DB instance.
- RDSDataSpec withSecurityGroupIds(Collection<String> securityGroupIds) - The security group IDs to be used to access a VPC-based RDS DB instance.
- RDSDataSpec withSelectSqlQuery(String selectSqlQuery) - The query that is used to retrieve the observation data for the DataSource.
- RDSDataSpec withServiceRole(String serviceRole) - The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3.
- RDSDataSpec withSubnetId(String subnetId) - The subnet ID to be used to access a VPC-based RDS DB instance.
-
Constructor Details
-
RDSDataSpec
public RDSDataSpec()
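Every field has both a `void set*` method and a fluent `with*` counterpart that returns `this`, so a spec can be populated in one chained expression. The sketch below uses a minimal self-contained stand-in class (not the real SDK class, which has many more fields) purely to illustrate how that pattern composes; the query, bucket, and subnet values are made up.

```java
// Minimal stand-in illustrating the set*/with* fluent pattern used by
// RDSDataSpec. The real class ships with the AWS SDK for Java; this
// sketch only demonstrates how the chained calls compose.
public class RdsDataSpecSketch {
    private String selectSqlQuery;
    private String s3StagingLocation;
    private String subnetId;

    public void setSelectSqlQuery(String q) { this.selectSqlQuery = q; }
    public String getSelectSqlQuery() { return selectSqlQuery; }
    // Each with* method delegates to its setter and returns this,
    // which is what makes method chaining possible.
    public RdsDataSpecSketch withSelectSqlQuery(String q) { setSelectSqlQuery(q); return this; }

    public void setS3StagingLocation(String loc) { this.s3StagingLocation = loc; }
    public String getS3StagingLocation() { return s3StagingLocation; }
    public RdsDataSpecSketch withS3StagingLocation(String loc) { setS3StagingLocation(loc); return this; }

    public void setSubnetId(String id) { this.subnetId = id; }
    public String getSubnetId() { return subnetId; }
    public RdsDataSpecSketch withSubnetId(String id) { setSubnetId(id); return this; }

    public static void main(String[] args) {
        RdsDataSpecSketch spec = new RdsDataSpecSketch()
                .withSelectSqlQuery("SELECT * FROM observations")
                .withS3StagingLocation("s3://example-bucket/staging/")
                .withSubnetId("subnet-0123456789abcdef0");
        System.out.println(spec.getSelectSqlQuery());
    }
}
```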
-
-
Method Details
-
setDatabaseInformation
public void setDatabaseInformation(RDSDatabase databaseInformation)
Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- Parameters:
databaseInformation - Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
-
getDatabaseInformation
public RDSDatabase getDatabaseInformation()
Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- Returns:
The DatabaseName and InstanceIdentifier of an Amazon RDS database.
-
withDatabaseInformation
public RDSDataSpec withDatabaseInformation(RDSDatabase databaseInformation)
Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- Parameters:
databaseInformation - Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.
- Returns:
A reference to this object so that method calls can be chained together.
-
setSelectSqlQuery
public void setSelectSqlQuery(String selectSqlQuery)
The query that is used to retrieve the observation data for the DataSource.
- Parameters:
selectSqlQuery - The query that is used to retrieve the observation data for the DataSource.
-
getSelectSqlQuery
public String getSelectSqlQuery()
The query that is used to retrieve the observation data for the DataSource.
- Returns:
The query that is used to retrieve the observation data for the DataSource.
-
withSelectSqlQuery
public RDSDataSpec withSelectSqlQuery(String selectSqlQuery)
The query that is used to retrieve the observation data for the DataSource.
- Parameters:
selectSqlQuery - The query that is used to retrieve the observation data for the DataSource.
- Returns:
A reference to this object so that method calls can be chained together.
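The observation query is ordinary SQL passed as a plain string. A brief illustration (the table and column names below are invented for the example and are not part of the API):

```java
public class SelectSqlQueryDemo {
    // Builds an illustrative observation query; the real table and
    // columns depend entirely on your own RDS schema.
    public static String observationQuery() {
        return "SELECT customer_id, age, plan, churned FROM observations";
    }

    public static void main(String[] args) {
        System.out.println(observationQuery());
    }
}
```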
-
setDatabaseCredentials
public void setDatabaseCredentials(RDSDatabaseCredentials databaseCredentials)
The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- Parameters:
databaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
-
getDatabaseCredentials
public RDSDatabaseCredentials getDatabaseCredentials()
The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- Returns:
The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
-
withDatabaseCredentials
public RDSDataSpec withDatabaseCredentials(RDSDatabaseCredentials databaseCredentials)
The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- Parameters:
databaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
- Returns:
A reference to this object so that method calls can be chained together.
-
setS3StagingLocation
public void setS3StagingLocation(String s3StagingLocation)
The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
- Parameters:
s3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
-
getS3StagingLocation
public String getS3StagingLocation()
The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
- Returns:
The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
-
withS3StagingLocation
public RDSDataSpec withS3StagingLocation(String s3StagingLocation)
The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
- Parameters:
s3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
- Returns:
A reference to this object so that method calls can be chained together.
-
setDataRearrangement
public void setDataRearrangement(String dataRearrangement)
A JSON string that represents the splitting requirement of a DataSource.

Sample: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

- Parameters:
dataRearrangement - A JSON string that represents the splitting requirement of a DataSource.
-
getDataRearrangement
public String getDataRearrangement()
A JSON string that represents the splitting requirement of a DataSource.

Sample: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

- Returns:
A JSON string that represents the splitting requirement of a DataSource.
-
withDataRearrangement
public RDSDataSpec withDataRearrangement(String dataRearrangement)
A JSON string that represents the splitting requirement of a DataSource.

Sample: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

- Parameters:
dataRearrangement - A JSON string that represents the splitting requirement of a DataSource.
- Returns:
A reference to this object so that method calls can be chained together.
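Unescaped, the sample string above is the following JSON document; the splitting object selects the portion of the source data between the 10 percent and 60 percent marks for this DataSource:

```json
{
  "splitting": {
    "percentBegin": 10,
    "percentEnd": 60
  }
}
```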
-
setDataSchema
public void setDataSchema(String dataSchema)
A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.

{ "version": "1.0",
  "recordAnnotationFieldName": "F1",
  "recordWeightFieldName": "F2",
  "targetFieldName": "F3",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "fieldName": "F1", "fieldType": "TEXT" },
    { "fieldName": "F2", "fieldType": "NUMERIC" },
    { "fieldName": "F3", "fieldType": "CATEGORICAL" },
    { "fieldName": "F4", "fieldType": "NUMERIC" },
    { "fieldName": "F5", "fieldType": "CATEGORICAL" },
    { "fieldName": "F6", "fieldType": "TEXT" },
    { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
    { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
  "excludedVariableNames": [ "F6" ] }

- Parameters:
dataSchema - A JSON string that represents the schema for an Amazon RDS DataSource, in the format shown above.
-
getDataSchema
public String getDataSchema()
A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.

{ "version": "1.0",
  "recordAnnotationFieldName": "F1",
  "recordWeightFieldName": "F2",
  "targetFieldName": "F3",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "fieldName": "F1", "fieldType": "TEXT" },
    { "fieldName": "F2", "fieldType": "NUMERIC" },
    { "fieldName": "F3", "fieldType": "CATEGORICAL" },
    { "fieldName": "F4", "fieldType": "NUMERIC" },
    { "fieldName": "F5", "fieldType": "CATEGORICAL" },
    { "fieldName": "F6", "fieldType": "TEXT" },
    { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
    { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
  "excludedVariableNames": [ "F6" ] }

- Returns:
A JSON string that represents the schema for an Amazon RDS DataSource, in the format shown above.
-
withDataSchema
public RDSDataSpec withDataSchema(String dataSchema)
A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.

{ "version": "1.0",
  "recordAnnotationFieldName": "F1",
  "recordWeightFieldName": "F2",
  "targetFieldName": "F3",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "fieldName": "F1", "fieldType": "TEXT" },
    { "fieldName": "F2", "fieldType": "NUMERIC" },
    { "fieldName": "F3", "fieldType": "CATEGORICAL" },
    { "fieldName": "F4", "fieldType": "NUMERIC" },
    { "fieldName": "F5", "fieldType": "CATEGORICAL" },
    { "fieldName": "F6", "fieldType": "TEXT" },
    { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
    { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],
  "excludedVariableNames": [ "F6" ] }

- Parameters:
dataSchema - A JSON string that represents the schema for an Amazon RDS DataSource, in the format shown above.
- Returns:
A reference to this object so that method calls can be chained together.
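Because the schema is supplied to these methods as a single JSON string, its quotes must be escaped when the string is embedded in Java source. A minimal sketch of building such a string (the field names are a trimmed-down subset of the sample format above, chosen for illustration only):

```java
public class DataSchemaStringDemo {
    // Assembles an illustrative DataSchema JSON string. Quotes inside
    // the JSON are escaped because the whole schema travels as one
    // Java String argument to setDataSchema/withDataSchema.
    public static String buildSchema() {
        return "{\"version\":\"1.0\","
             + "\"targetFieldName\":\"F3\","
             + "\"dataFormat\":\"CSV\","
             + "\"dataFileContainsHeader\":true,"
             + "\"attributes\":["
             + "{\"fieldName\":\"F1\",\"fieldType\":\"TEXT\"},"
             + "{\"fieldName\":\"F2\",\"fieldType\":\"NUMERIC\"},"
             + "{\"fieldName\":\"F3\",\"fieldType\":\"CATEGORICAL\"}],"
             + "\"excludedVariableNames\":[]}";
    }

    public static void main(String[] args) {
        System.out.println(buildSchema());
    }
}
```

Alternatively, store the schema in Amazon S3 and pass its location with setDataSchemaUri instead, which avoids the escaping entirely.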
-
setDataSchemaUri
public void setDataSchemaUri(String dataSchemaUri)
The Amazon S3 location of the DataSchema.
- Parameters:
dataSchemaUri - The Amazon S3 location of the DataSchema.
-
getDataSchemaUri
public String getDataSchemaUri()
The Amazon S3 location of the DataSchema.
- Returns:
The Amazon S3 location of the DataSchema.
-
withDataSchemaUri
public RDSDataSpec withDataSchemaUri(String dataSchemaUri)
The Amazon S3 location of the DataSchema.
- Parameters:
dataSchemaUri - The Amazon S3 location of the DataSchema.
- Returns:
A reference to this object so that method calls can be chained together.
-
setResourceRole
public void setResourceRole(String resourceRole)
The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Parameters:
resourceRole - The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
-
getResourceRole
public String getResourceRole()
The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Returns:
The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
-
withResourceRole
public RDSDataSpec withResourceRole(String resourceRole)
The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Parameters:
resourceRole - The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Returns:
A reference to this object so that method calls can be chained together.
-
setServiceRole
public void setServiceRole(String serviceRole)
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Parameters:
serviceRole - The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
-
getServiceRole
public String getServiceRole()
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Returns:
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
-
withServiceRole
public RDSDataSpec withServiceRole(String serviceRole)
The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Parameters:
serviceRole - The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
- Returns:
A reference to this object so that method calls can be chained together.
-
setSubnetId
public void setSubnetId(String subnetId)
The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Parameters:
subnetId - The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
-
getSubnetId
public String getSubnetId()
The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Returns:
The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
-
withSubnetId
public RDSDataSpec withSubnetId(String subnetId)
The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Parameters:
subnetId - The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Returns:
A reference to this object so that method calls can be chained together.
-
getSecurityGroupIds
public List<String> getSecurityGroupIds()
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Returns:
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
-
setSecurityGroupIds
public void setSecurityGroupIds(Collection<String> securityGroupIds)
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Parameters:
securityGroupIds - The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
-
withSecurityGroupIds
public RDSDataSpec withSecurityGroupIds(String... securityGroupIds)
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.

NOTE: This method appends the values to the existing list (if any). Use setSecurityGroupIds(java.util.Collection) or withSecurityGroupIds(java.util.Collection) if you want to override the existing values.

- Parameters:
securityGroupIds - The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Returns:
A reference to this object so that method calls can be chained together.
-
withSecurityGroupIds
public RDSDataSpec withSecurityGroupIds(Collection<String> securityGroupIds)
The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Parameters:
securityGroupIds - The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.
- Returns:
A reference to this object so that method calls can be chained together.
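The practical difference between the two overloads is that the varargs form appends to whatever list is already set, while the collection form replaces it. The self-contained sketch below (a stand-in, not the SDK class) illustrates that append-versus-replace behavior; the sg-* values are placeholders.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class SecurityGroupIdsDemo {
    private List<String> securityGroupIds;

    // Collection setter: replaces whatever list is currently held.
    public void setSecurityGroupIds(Collection<String> ids) {
        this.securityGroupIds = ids == null ? null : new ArrayList<>(ids);
    }

    public List<String> getSecurityGroupIds() {
        return securityGroupIds;
    }

    // Varargs overload: appends to the existing list (creating it if
    // absent), mirroring the NOTE on withSecurityGroupIds(String...).
    public SecurityGroupIdsDemo withSecurityGroupIds(String... ids) {
        if (this.securityGroupIds == null) {
            this.securityGroupIds = new ArrayList<>(ids.length);
        }
        for (String id : ids) {
            this.securityGroupIds.add(id);
        }
        return this;
    }

    public static void main(String[] args) {
        SecurityGroupIdsDemo demo = new SecurityGroupIdsDemo();
        // Two varargs calls accumulate rather than overwrite.
        demo.withSecurityGroupIds("sg-1").withSecurityGroupIds("sg-2");
        System.out.println(demo.getSecurityGroupIds());
    }
}
```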
-
toString
public String toString()
Returns a string representation of this object; useful for testing and debugging.
-
equals
public boolean equals(Object obj)
-
hashCode
public int hashCode()
-
clone
public RDSDataSpec clone()