Informatica® Cloud Data Integration
February 2022

What's New

© Copyright Informatica LLC 2016, 2022

This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.

Informatica, Informatica Cloud, Informatica Intelligent Cloud Services, PowerCenter, PowerExchange, and the Informatica logo are trademarks or registered trademarks of Informatica LLC in the United States and many jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.

The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at [email protected].

Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

Publication Date: 2022-02-23

Table of Contents

Preface
    Informatica Resources
        Informatica Documentation
        Informatica Intelligent Cloud Services web site
        Informatica Intelligent Cloud Services Communities
        Informatica Intelligent Cloud Services Marketplace
        Data Integration connector documentation
        Informatica Knowledge Base
        Informatica Intelligent Cloud Services Trust Center
        Informatica Global Customer Support

Chapter 1: February 2022
    New Features and Enhancements
        Flat file advanced attributes
        Taskflows
        Platform REST API
        Transformations
        Intelligent structure models
    Changed behavior
        Monitor subtaskflows
    Connectors
        New connectors
        Enhanced connectors

Chapter 2: January 2022
    New features and enhancements
        CLAIRE recommendations
        Transformations
    Connectors
        New connectors
        Enhanced connectors

Chapter 3: December 2021
    New features and enhancements
        Data Integration Elastic
    Connectors
        New connectors

Chapter 4: November 2021
    New features and enhancements
        Data Integration Elastic
        Data transfer tasks
        Flat file formatting options
        Intelligent structure models
        Taskflows
        File listeners
    Changed behavior
        Flat file formatting options
    Connectors
        New connectors
        Enhanced connectors
        Changed behavior

Chapter 5: October 2021
    New features and enhancements
        Flat file formatting options and advanced attributes
        CLAIRE recommendations for source objects
        Copying transformations
        Data transfer tasks
        Dynamic mapping tasks
        Masking tasks
        Taskflows
        Transformations
        File listener
        Intelligent structure models
        Data Integration REST API
    Changed behavior
    Connectors
        New connectors
        Enhanced connectors
        Changed behavior
        Support for serverless runtime environment

Chapter 6: Upgrade
    Preparing for the upgrade
    Post-upgrade tasks for the February 2022 release
        Unsupported Hadoop distribution packages
        Sequence Generator transformation in mappings enabled for pushdown optimization
        RunAJob log4j properties
        Microsoft Azure Data Lake Storage Gen2 Connector
        File integration proxy server
    Post-upgrade tasks for the October 2021 release
        Advanced properties in mappings
        Custom query override in taskflows
        Sequence Generator transformation in mappings enabled for pushdown optimization
        File Processor Connector
        Google BigQuery V2 Connector
        Hive Connector
        Microsoft Azure Synapse SQL Connector

Chapter 7: Enhancements in previous releases

Index

Preface

Read What's New to learn about new features, enhancements, and behavior changes in Informatica Intelligent Cloud Services℠ Data Integration for the February 2022 release. You can also learn about upgrade steps that you might need to perform.

Informatica Resources

Informatica provides you with a range of product resources through the Informatica Network and other online portals. Use the resources to get the most from your Informatica products and solutions and to learn from other Informatica users and subject matter experts.

Informatica Documentation

Use the Informatica Documentation Portal to explore an extensive library of documentation for current and recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.

If you have questions, comments, or ideas about the product documentation, contact the Informatica Documentation team at [email protected].

Informatica Intelligent Cloud Services web site

You can access the Informatica Intelligent Cloud Services web site at http://www.informatica.com/cloud. This site contains information about Informatica Cloud integration services.

Informatica Intelligent Cloud Services Communities

Use the Informatica Intelligent Cloud Services Community to discuss and resolve technical issues. You can also find technical tips, documentation updates, and answers to frequently asked questions.

Access the Informatica Intelligent Cloud Services Community at:

https://network.informatica.com/community/informatica-network/products/cloud-integration

Developers can learn more and share tips at the Cloud Developer community:

https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-developers

Informatica Intelligent Cloud Services Marketplace

Visit the Informatica Marketplace to try and buy Data Integration Connectors, templates, and mapplets:


https://marketplace.informatica.com/

Data Integration connector documentation

You can access documentation for Data Integration Connectors at the Documentation Portal. To explore the Documentation Portal, visit https://docs.informatica.com.

Informatica Knowledge Base

Use the Informatica Knowledge Base to find product resources such as how-to articles, best practices, video tutorials, and answers to frequently asked questions.

To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or ideas about the Knowledge Base, contact the Informatica Knowledge Base team at [email protected].

Informatica Intelligent Cloud Services Trust Center

The Informatica Intelligent Cloud Services Trust Center provides information about Informatica security policies and real-time system availability.

You can access the trust center at https://www.informatica.com/trust-center.html.

Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage, it will have the most current information. To ensure you are notified of updates and outages, you can subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services components. Subscribing to all components is the best way to be certain you never miss an update.

To subscribe, go to https://status.informatica.com/ and click SUBSCRIBE TO UPDATES. You can then choose to receive notifications sent as emails, SMS text messages, webhooks, RSS feeds, or any combination of the four.

Informatica Global Customer Support

You can contact a Customer Support Center by telephone or online.

For online support, click Submit Support Request in Informatica Intelligent Cloud Services. You can also use Online Support to log a case. Online Support requires a login. You can request a login at https://network.informatica.com/welcome.

The telephone numbers for Informatica Global Customer Support are available from the Informatica web site at https://www.informatica.com/services-and-training/support-services/contact-us.html.


Chapter 1

February 2022

The following topics provide information about new features, enhancements, and behavior changes in the February 2022 release of Informatica Intelligent Cloud Services℠ Data Integration.

New Features and Enhancements

The February 2022 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

Flat file advanced attributes

When you configure a flat file source, target, or lookup object, you can configure the following advanced attributes:

• File name and file directory for source objects.

• Lookup source file name and lookup source directory for lookup objects.

• Output file name for target objects.

Taskflows

This release includes the following enhancements to taskflows:

Support for charSet in the base64Decode function

In the base64Decode function, you can configure the charSet argument to return the base64-decoded value of the provided input string. Taskflows support the character sets that Azul JDK supports for encoding.
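As a minimal illustration of the decoding semantics, the following Python sketch mirrors what base64Decode with a charSet argument does. The sample string is hypothetical, and the taskflow function itself is written in Informatica expression syntax rather than Python.

# Decode a base64 string using an explicit character set, analogous to
# passing a charSet argument to base64Decode. The sample value is hypothetical.
import base64

encoded = "SGVsbG8sIHdvcmxkIQ=="                     # base64 for "Hello, world!"
decoded = base64.b64decode(encoded).decode("utf-8")  # charSet analog: UTF-8
print(decoded)  # Hello, world!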

Subtaskflow location

When you add a subtaskflow to a taskflow, you can view the location of the subtaskflow in the Subtaskflow step.

For more information, see Taskflows.


Platform REST API

This release includes the following enhancements to the Informatica Intelligent Cloud Services platform REST API.

Secure Agent groups

The following enhancements were made to the runtimeEnvironment REST API version 2 resource for Secure Agent groups:

Create, update, and delete Secure Agent groups

You can create Secure Agent groups, add Secure Agents to Secure Agent groups, remove Secure Agents from Secure Agent groups, and delete Secure Agent groups.

Manage Secure Agent group selections

You can enable and disable Informatica Intelligent Cloud Services and connectors for Secure Agent groups.
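As a rough sketch of how the version 2 resource might be called, the following Python example creates a Secure Agent group with the requests library. The pod URL, session header, and payload fields are illustrative assumptions, not the documented contract; see the REST API Reference for the exact request format.

# Hedged sketch: create a Secure Agent group through the runtimeEnvironment
# version 2 resource. URL, header, and payload shapes are assumptions.
import requests

BASE = "https://<pod>.informaticacloud.com/saas"   # hypothetical pod URL
HEADERS = {"icSessionId": "<session id>", "Content-Type": "application/json"}

response = requests.post(
    f"{BASE}/api/v2/runtimeEnvironment",
    headers=HEADERS,
    json={"name": "MyAgentGroup"},                 # assumed payload shape
)
print(response.status_code, response.json())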

Source control

The following source control enhancements were made to the REST API:

Relax object specification validation in pull requests

You can set the relaxObjectSpecificationValidation flag to true so that objectSpecification items are ignored if their sources do not exist in the assets being pulled. If the flag is set to false, an error occurs if an objectSpecification source doesn't exist in the assets in the pull request.

Undo checkout

You can undo the checkout of a project, folder, or asset using the undoCheckout REST API version 3 resource.

Commit details

You can retrieve details about a commit from your repository using the commit REST API version 3 resource.

Pull by commit hash

You can pull objects that were modified by a particular commit in your repository and load them into your organization using the pullByCommitHash REST API version 3 resource.
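The following Python sketch strings the version 3 resources together with the requests library. The base URL, session header, and body fields are illustrative assumptions; consult the REST API Reference for the exact request formats.

# Hedged sketch of the version 3 source control resources. URL, header,
# and body shapes are assumptions, not the documented contract.
import requests

BASE = "https://<pod>.informaticacloud.com/saas/public/core/v3"  # hypothetical
HEADERS = {"INFA-SESSION-ID": "<session id>", "Content-Type": "application/json"}

# Retrieve details about a commit from the repository.
commit = requests.get(f"{BASE}/commit/<commit hash>", headers=HEADERS)

# Pull the objects modified by a commit. Relaxing object specification
# validation ignores objectSpecification items whose sources are missing
# instead of raising an error.
pull = requests.post(
    f"{BASE}/pullByCommitHash",
    headers=HEADERS,
    json={"commitHash": "<commit hash>",
          "relaxObjectSpecificationValidation": True},           # assumed body
)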

For more information about the Informatica Intelligent Cloud Services REST API, see REST API Reference.

Transformations

This release includes the following enhancements to transformations.

Hierarchy Builder transformation

You can choose a string or binary output format for the transformation.

For more information about the Hierarchy Builder transformation, see Transformations.

Intelligent structure models

When an intelligent structure model is based on a JSON file, Intelligent Structure Discovery identifies numbers that are enclosed in double quotes as strings, in accordance with JSON semantics.

You can select to discover the data type of numbers that are enclosed in double quotes based on the content of the node that contains the data. For example, by default, Intelligent Structure Discovery identifies the value


"3" as a string. However, when "3" is the value of a Version node, if you select to discover the data type of the node by content, Intelligent Structure Discovery identifies the value as a number.

For more information about intelligent structure models, see Components.

Changed behavior

The February 2022 release of Informatica Intelligent Cloud Services Data Integration includes the following changed behaviors.

Monitor subtaskflows

When you run a taskflow that contains subtaskflows, you can click the View Subtasks link for the taskflow to view the details of the subtaskflows. If the subtaskflow contains nested subtaskflows, you can drill down further to view the details of each level using the View Subtasks link. This change applies to the My Jobs page in Data Integration and the All Jobs and Running Jobs pages in Monitor.

The following image shows the taskflow that contains the nested subtaskflows and subtasks:

The following image shows the nested subtaskflows and subtasks when you click View Subtasks for the taskflow:

Previously, the taskflow and the nested subtaskflows were displayed as separate jobs.

For more information about monitoring subtaskflows, see Monitor.

Connectors

The February 2022 release includes the following new and enhanced connectors.


New connectors

This release includes the following new connectors.

Amazon DynamoDB V2 Connector

You can use Amazon DynamoDB V2 Connector to connect to Amazon DynamoDB from Data Integration. Use the Amazon DynamoDB V2 connection in elastic mappings to read data from and write data to Amazon DynamoDB tables.

MongoDB V2 Connector

You can use MongoDB V2 Connector to connect to a MongoDB Atlas database from Data Integration. Use the MongoDB V2 connection in elastic mappings to read data from or write data to collections in a MongoDB Atlas database.

Important: Amazon DynamoDB V2 and MongoDB V2 Connectors are available for preview. Preview functionality is supported for evaluation purposes but is unwarranted and is not production-ready. Informatica recommends that you use preview functionality in non-production environments only. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support. To use the functionality, your organization must have the appropriate licenses.

Enhanced connectors

This release includes enhancements to the following connectors.

Google Cloud Spanner Connector

You can create and run elastic mappings to read data from or write data to a Google Cloud Spanner table.

Google Cloud Storage V2 Connector

You can configure an elastic mapping to incrementally load files when you use a directory as the source. When you incrementally load files, the mapping task reads and processes only files in the directory that have changed since the mapping task last ran.

Hive Connector

This release includes the following enhancements for Hive Connector:

• You can parameterize the Hive source object, target object, and the connection in mappings. You can also override the parameters at runtime using a parameter file.

• When you configure a mapping or an elastic mapping to read from or write to Hive, you can use the IAM role to stage Hive data in Amazon S3.

Kafka Connector

This release includes the following enhancements for Kafka Connector:

• Hierarchical data types in elastic mappings

- You can read and write hierarchical data types for Avro and JSON files.

- For write operations, you can use hierarchical data types only when you create a Kafka target at runtime.

• You can use an intelligent structure model in an elastic mapping to parse semi-structured or structured data in complex Kafka source files such as Avro, JSON, Excel, and XML files and create a model of the underlying structure of the source data.

• You can configure one-way or two-way SSL authentication to connect to a Kafka broker in an elastic mapping.


• You can read data from or write data to a Kafka topic in binary format in an elastic mapping.

Microsoft Azure Blob Storage V3 Connector

You can use the shared access signature authentication to grant restricted access rights to the resources in your Microsoft Azure Blob Storage account.

Microsoft Azure Data Lake Storage Gen2 Connector

This release includes the following enhancements for Microsoft Azure Data Lake Storage Gen2 Connector:

• You can use the shared key authentication to connect to Microsoft Azure Data Lake Storage Gen2 using the account name and account key.

• You can use access control lists to grant different levels of permissions to access directories and files to each user and service. If you do not want to use role-based access control to grant access to all of the data in a storage account, you can use access control lists to grant read, write, and execute permissions to a specific directory or file.

• You can use Gzip compression to compress Parquet files.

Microsoft Azure Synapse SQL Connector

You can configure a Router transformation in a mapping enabled for pushdown optimization to route data into multiple output groups based on one or more conditions.

Snowflake Cloud Data Warehouse V2 Connector

This release includes the following enhancements for Snowflake Cloud Data Warehouse V2 Connector:

• You can configure mappings or elastic mappings to read from or write to Snowflake data that contains semi-structured data types such as variant, object, and array.

• You can configure the proxy server properties in the Snowflake connection so that the Secure Agent can use the proxy server to connect to Snowflake.


Chapter 2

January 2022

The following topics provide information about new features, enhancements, and behavior changes in the January 2022 release of Informatica Intelligent Cloud Services℠ Data Integration.

New features and enhancements

The January 2022 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

CLAIRE recommendations

When CLAIRE™ recommends additional source objects in a mapping, it might also recommend adding a Joiner transformation or a Union transformation. If you accept the recommendation, CLAIRE automatically joins or unions the recommended source with the original source.

You can preview the recommended transformations and fields before you accept the recommendation.

For more information about CLAIRE recommendations, see Mappings.

Transformations

This release includes the following enhancements to transformations.

Structure Parser transformation

If the output type of the transformation is JSON, you can choose to add all the model tags to the output at run time, including tags that don't exist in the input. The transformation adds tags that don't exist in the input as empty tags with a NULL value.

For more information about the Structure Parser transformation, see Transformations.

Connectors

The January 2022 release includes the following new and enhanced connectors.


New connectors

This release includes the following new connectors.

Business 360 FEP Connector

You can use Business 360 FEP Connector to securely write data to specific root fields or fields in a field group in the Business 360 data store.

Enhanced connectors

This release includes enhancements to the following connectors.

Databricks Delta Connector

You can use the Databricks Delta SQL Engine to design mappings.


Chapter 3

December 2021

The following topics provide information about new features, enhancements, and behavior changes in the December 2021 release of Informatica Intelligent Cloud Services℠ Data Integration.

New features and enhancements

The December 2021 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

Data Integration Elastic

This release includes the following new features for Data Integration Elastic.

Machine Learning transformation

You can use the Machine Learning transformation to run a machine learning model and return predictions to an elastic mapping.

For more information, see Transformations.

API collections

You can create an API collection that stores REST API requests to use in the Machine Learning transformation.

For more information, see Components.

Connectors

The December 2021 release includes the following new connectors.

New connectors

This release includes the following new connectors.

NICE Satmetrix Connector

You can use NICE Satmetrix Connector to connect to Satmetrix from Data Integration. Use NICE Satmetrix Connector to read data from and write data to Satmetrix.


Chapter 4

November 2021

The following topics provide information about new features, enhancements, and behavior changes in the November 2021 release of Informatica Intelligent Cloud Services℠ Data Integration.

New features and enhancements

The November 2021 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

Data Integration Elastic

This release includes the following new features for Data Integration Elastic.

Incrementally loading directory files

You can configure an elastic mapping to incrementally load files when you use a directory as the source. When you incrementally load files, the mapping task reads and processes only files in the directory that have changed since the mapping task last ran. You can incrementally load files when you read from an Amazon S3 V2 or Microsoft Azure Data Lake Storage Gen2 source.

For more information, see Tasks.

Encryption functions

In an elastic mapping, new encryption functions use the Advanced Encryption Standard (AES) algorithm with the Galois/Counter Mode (GCM) of operation. The AES algorithm is a FIPS-approved cryptographic algorithm that uses 128, 192, or 256-bit keys. You can use the following new functions:

• AES_GCM_ENCRYPT. Returns ciphertext as a binary value after performing AES-GCM encryption on an input value with the given initialization vector and key. The ciphertext is encrypted plaintext.

• AES_GCM_DECRYPT. Returns plaintext, a decrypted value as a string, after performing AES-GCM decryption on an input value with the given initialization vector and key.
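To illustrate the AES-GCM semantics that these functions describe, here is a minimal Python sketch using the third-party cryptography package. The 256-bit key and 12-byte initialization vector are illustrative choices; the mapping functions themselves are written in Informatica expression syntax.

# Minimal AES-GCM round trip, analogous to AES_GCM_ENCRYPT followed by
# AES_GCM_DECRYPT. Key length and IV size are illustrative assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 128, 192, or 256 bits
iv = os.urandom(12)                         # initialization vector

ciphertext = AESGCM(key).encrypt(iv, b"plaintext value", None)
plaintext = AESGCM(key).decrypt(iv, ciphertext, None)
assert plaintext == b"plaintext value"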

For more information, see Function Reference.

Data transfer tasks

Data transfer tasks include the following enhancements:

Advanced data filters

When you filter source data, you can configure an advanced filter using a filter expression. You can configure advanced data filters for source data and combined source data.


Create and edit connections

You can create, view, and edit source and target connections when you configure a data transfer task.

Display fields in alphabetical order

When you preview source and target fields, you can view fields in alphabetical order. When you view fields in alphabetical order, Data Integration does not change the order of the fields in the source or target.

Flat file formatting options

When you configure formatting options for delimited flat file sources and targets in mappings, mapplets, and mapping tasks, you can use the following character types as the delimiter:

• Multibyte. Select Other and then enter the character that you want to use.

• Control. Select Non-printable and then select the character that you want to use.

For more information about formatting flat file sources and targets, see Transformations.

Intelligent structure models

This release includes the following enhancements to intelligent structure models:

Data normalization of nested repeating groups

You can use data normalization to reduce data redundancy and improve data integrity. When a model that is based on a JSON, XML, or XSD file contains nested repeating groups, Intelligent Structure Discovery structures data in those groups as normalized. In models that are based on other input types, you can manually select to structure nested repeating groups as normalized. You can manually change the data normalization mode of a model in all model types.

Output groups for nested repeating groups

When a model that is based on a JSON or XML file contains nested repeating groups, Intelligent Structure Discovery assigns each nested repeating group to its own output group, thus reducing the number of ports in the model.

Document identifiers

When a model that is based on a JSON, XML, or XSD file contains non-repeating output groups, Intelligent Structure Discovery adds a document identifier to each non-repeating output group. You can use the document identifiers to join groups with the Joiner transformation.

For more information about intelligent structure models, see Components.

Taskflows

This release includes the following enhancements to taskflows:

Additional information in File Watch Task step

When you add a file listener to a File Watch Task step, you see a description and output fields on the File Watch Task tab.

Since the output fields are displayed on the File Watch Task tab, you no longer see a separate Output Fields tab.

Additional information in Ingestion Task step

When you add a file ingestion task to an Ingestion Task step, you see a description and output fields on the Ingestion Task tab.


Since the output fields are displayed on the Ingestion Task tab, you no longer see a separate Output Fields tab.

Initial values for temporary fields

You can set an initial value for a temporary field in a taskflow. The value can change during the taskflow run.

For more information, see Taskflows.

File listeners

This release includes the following new features for file listeners:

Relative path for file listener source

You can enter a path to the source file system that is relative to the connection folder path. Using a relative path simplifies asset migration.

Post pick up action for Amazon S3 V2 connection

You can select Delete as the post pick up action for the Amazon S3 V2 connection type if the file pattern is an indicator file.

Changed behavior

The November 2021 release of Informatica Intelligent Cloud Services Data Integration includes the following changed behaviors.

Flat file formatting options

To use the EOT (end of transmission) character as the delimiter in a flat file, in the Formatting Options dialog box, select Non-printable, and then select \004 EOT.

Previously, to use the EOT character as the delimiter, you selected EOT.

For more information about formatting flat file sources and targets, see Transformations.

Connectors

The November 2021 release includes the following new and enhanced connectors.

New connectors

This release includes the following new connectors.

Google Sheets V2 Connector

You can use Google Sheets V2 Connector to connect to Google Sheets from Data Integration. Use Google Sheets V2 Connector to read data from and write data to Google Sheets.


UKGPro V2 Connector

You can use UKGPro V2 Connector to connect to UKGPro from Data Integration. Use UKGPro V2 Connector to read data from UKGPro.

Enhanced connectors

This release includes enhancements to the following connectors.

Amazon Redshift V2 Connector

This release includes the following enhancements for Amazon Redshift V2 Connector:

• Pushdown optimization for mappings using the Amazon Redshift V2 connection. The pushdown optimization functionality is expanded for Amazon Redshift V2 Connector in this release and includes the following enhancements:

- You can configure source or full pushdown optimization to read data from an Amazon Redshift source and write data to an Amazon Redshift target.

- You can configure source pushdown optimization to read data from an Amazon Redshift source and write data to other targets.

- You can configure Router, Sorter, Joiner, and Union transformations in a mapping.

- You can configure a Lookup transformation in a mapping to look up data from an Amazon S3 or Amazon Redshift source.

- You can push MD5() and IIF() functions to the Amazon Redshift database.

- You can configure update, upsert, delete, and data driven operations when you write to an Amazon Redshift target.

- You can configure a mapping to write to multiple Amazon Redshift V2 targets. You can then optimize the write operation by using an insert, update, upsert, delete, or data driven operation for multiple targets individually.

- You can specify a parameter file in the mapping task to override the Amazon Redshift V2 or Amazon S3 V2 source, lookup, and target connections and objects in a mapping.

- When you configure a full pushdown optimization for a mapping that reads from and writes to Amazon Redshift V2 and a transformation or function is not applicable, the task partially pushes down the mapping logic to the point where the transformation is supported for pushdown optimization.

For more information about the features, transformations, and functions that you can use with pushdown optimization, see the help for Amazon Redshift V2 Connector.

• You can configure a Lookup transformation to use a persistent cache. When you use a persistent cache, Data Integration saves and reuses the cache files from the previous mapping run.

Databricks Delta Connector

This release includes the following enhancements for Databricks Delta Connector:

• You can use the Databricks Delta connection in mappings to configure pushdown optimization.

• You can push down a mapping that reads from a Databricks Delta source and writes to a Databricks Delta target using the Databricks Delta connection in the mapping task.

• You can use the Databricks Delta SQL Engine to run mappings enabled with full pushdown optimization.

For more information about functions, transformations, data types, and operators applicable to mappings enabled with full pushdown optimization, see the help for Databricks Delta Connector.


Google BigQuery V2 Connector

This release includes the following enhancements for Google BigQuery V2 Connector:

• When you configure a SQL transformation to call stored procedures or run an entered query, you can parameterize the Google BigQuery V2 connection to define parameter values in a parameter file.

• You can configure a Lookup transformation to use a persistent cache. When you use a persistent cache, Data Integration saves and reuses the cache files from the previous mapping run.

JDBC V2 Connector

You can use the serverless runtime environment to run the JDBC V2 mappings.

Kafka Connector

You can use the Confluent schema registry to access Avro schemas for Kafka sources and targets in mappings.

Microsoft Azure Synapse SQL Connector

You can configure a Lookup transformation to use a persistent cache. When you use a persistent cache, Data Integration saves and reuses the cache files from the previous mapping run.

ODBC Connector

When you use an ODBC connection with the Teradata ODBC subtype in an SQL transformation, you can override the connection with values specified in a parameter file.

Snowflake Cloud Data Warehouse V2 Connector

This release includes the following enhancements for Snowflake Cloud Data Warehouse V2 Connector:

• You can configure a Lookup transformation to use a persistent cache. When you use a persistent cache, Data Integration saves and reuses the cache files from the previous mapping run.

• When you add an SQL transformation in a Snowflake Cloud Data Warehouse V2 mapping, you can use a parameterized connection in the SQL transformation and override the parameters at runtime using a parameter file.

Changed behavior

This release includes changes in behavior for the following connectors.

Coupa V2 Connector

Effective in this release, when you use Coupa V2 Connector in a Source transformation to read a single row from Coupa and the API endpoint returns a failure response, the mapping runs successfully. The success row count in the job details shows 0, and the error message related to the fault or error row is logged in the session log.

Previously, when there was a failure response from the API endpoint while reading a single row from Coupa, the task failed.

Google BigQuery V2 Connector

Effective in this release, when the mapping contains an override to the dataset name and table name and the Create Disposition property is set to Create if Needed in the Google BigQuery target transformation, the mapping runs successfully, but the Secure Agent does not apply the specified override values for the dataset name and table name.


Previously, when the mapping contained an override to the dataset name and table name and the Create Disposition property was set to Create if Needed in the Google BigQuery target transformation, the mapping failed.

Salesforce Connector

Effective in this release, when you edit the service URL for an existing Salesforce connection, you must re-enter the following fields for the standard and OAuth connection:

• Standard connection - password and security token

• OAuth connection - consumer key, consumer secret, and refresh token

Previously, when you edited the service URL, you did not have to re-enter the fields.


Chapter 5

October 2021

The following topics provide information about new features, enhancements, and behavior changes in the October 2021 release of Informatica Intelligent Cloud Services Data Integration.

New features and enhancements

The October 2021 release of Informatica Intelligent Cloud Services℠ Data Integration includes the following new features and enhancements.

Flat file formatting options and advanced attributes

You can use the end of transmission character as the delimiter in a delimited flat file.

You can also configure the thousand separator and the decimal separator for flat file sources and targets in mappings and mapping tasks.

For more information about configuring Source and Target transformations, see Transformations.

CLAIRE recommendations for source objects

If you enable CLAIRE recommendations in the mapping canvas and you select a database table or file-based resource as the source object in a Source transformation, CLAIRE can recommend sources that can be joined to the source object for the following additional source types:

• Amazon S3

• Google BigQuery

• Microsoft Azure Data Lake Storage Gen2

• Microsoft Azure SQL Database

• Microsoft Azure SQL Data Warehouse

For more information about CLAIRE recommendations in mappings, see Mappings.

Copying transformations

You can copy and paste multiple transformations at once in the following ways:

• Between mappings

• Between elastic mappings


• Between mapplets

• From a mapping or elastic mapping to a mapplet

Data transfer tasks

When you configure a data transfer task, you can augment the source data with data from a lookup source. The task queries the lookup source based on the lookup condition that you specify and returns the result of the lookup to the target.

To add a lookup source, configure a second source for the task on the Second Source page.

For more information about data transfer tasks, see Tasks.

Dynamic mapping tasks

This release includes the following enhancements to dynamic mapping tasks:

Custom queries

You can use a custom query as a source or lookup object in dynamic mapping tasks that are based on non-elastic mappings.

Expression parameters

Use the expression editor to configure and validate expression parameters.

REST API

You can use the Informatica Intelligent Cloud Services REST API to create, update, or delete a dynamic mapping task.

For more information, see REST API Reference.

For more information about dynamic mapping tasks, see Tasks.

Masking tasks

This release includes the following enhancements to masking tasks:

Dependent masking

You can apply the Dependent masking technique on source columns. Dependent masking uses custom dictionary values from a dictionary that you use to mask another column in the source data.

Seed value parameter

You can enter the seed value in masking rules as a parameter.

For more information about masking tasks, see Tasks.

Taskflows

This release includes the following enhancements to taskflows:

Additional information in Data Task step

When you add a mapping task to a Data Task step, you see a description, input fields, and output fields on the Data Task tab.


The taskflow also returns the following new output fields:

• Total Transformation Errors. Returns the total number of transformation errors in the Data Task step.

• First Error Code. Returns the error code for the first error message in the Data Task step.

Note: In Monitor and on the My Jobs page, the value of the Total Transformation Errors and First Error Code output fields is 0 for taskflows that you had run before the October 2021 release. You must republish the existing taskflows to see the correct values of the output fields.

Support for failing taskflows in Command Task step

You can configure a taskflow to fail on its completion if a Command Task step fails or does not run.

If a Command Task step fails or does not run, the taskflow continues running the subsequent steps. However, after the taskflow completes, the taskflow status is set to failed.

Support for environment variables in Command Task step

In a Command Task step, you can use environment variables for the script file name, input arguments, and work directory in the input fields.

Query parameters for monitoring taskflow status

You can use the status resource to query the status of a taskflow using query parameters such as run ID, run status, start time, end time, offset, and row limit.
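A hedged Python sketch of such a query follows; the URL shape and parameter names are illustrative assumptions, so consult the taskflow documentation for the exact contract.

# Hedged sketch: query taskflow status with the status resource and query
# parameters. URL and parameter names are assumptions.
import requests

BASE = "https://<pod>.informaticacloud.com/active-bpel/services/tf"  # hypothetical
HEADERS = {"INFA-SESSION-ID": "<session id>"}

response = requests.get(
    f"{BASE}/status",
    headers=HEADERS,
    params={"runStatus": "RUNNING", "rowLimit": 50, "offset": 0},    # assumed names
)
print(response.json())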

For more information, see Taskflows.

Transformations

This release includes the following enhancements to transformations.

Data Masking transformation

This release includes the following enhancements to Data Masking transformations:

Dependent masking

You can apply the Dependent masking technique on source columns. Dependent masking uses custom dictionary values from a dictionary that you use to mask another column in the source data.

Seed value parameter

You can enter the seed value in masking rules as a parameter.

Advanced email masking

You can use the Advanced email masking rule to mask email addresses in source data with realistic data instead of random ASCII characters.

Lookup condition in custom substitution rules

You can configure a lookup condition in a custom substitution rule. If the values specified in the lookup condition match, the rule replaces the source field with the dictionary value. You can specify a default value to replace the source data if there is no match.

For more information about the Data Masking transformation, see Transformations.

Lookup transformation

When you configure a Lookup transformation with a dynamic lookup cache, you can create a generated key for a field in the target object.


If the lookup object contains a field that is based on a generated sequence, you can use the Sequence-ID field to generate new sequence ID values. Data Integration automatically detects the existing range of sequence values in the field to generate new sequence IDs.

For more information about the Lookup transformation, see Transformations.

File listener

This release includes the following enhancements to file listeners:

File listener job details

You can view the details of a completed file listener job using the Informatica Intelligent Cloud Services REST API.

For more information, see REST API Reference.

File stability

When you configure a file listener, you can define the stability check interval for a file listener job.

For more information, see Components.

Intelligent structure models

This release includes the following enhancements to intelligent structure models:

XSD-based models

For XSD-based models, Intelligent Structure Discovery assigns each nested repeating group to its own output group, thus reducing the number of ports in the model.

Elastic mappings

Intelligent Structure Discovery enhances the efficiency of creating HTYPE data for the following input types:

• CSV files

• Log files

• JSON files

Delimited files that contain headers

Intelligent Structure Discovery parses delimited files that contain headers when one or both of the following mismatches between the input file and the model exist:

• The order of the columns in the input file is different than the order of the columns in the model.

• The input file doesn't contain all the columns that the model contains.

For more information about intelligent structure models, see Components.

Data Integration REST API

You can use the federated task ID when you use the mttask resource to get mapping task details or update a mapping task.
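For instance, a request that passes the federated task ID might look like the following Python sketch; the pod URL and path shape are illustrative assumptions, so see the REST API Reference for the exact format.

# Hedged sketch: get mapping task details from the mttask resource using a
# federated task ID. URL and path shapes are assumptions.
import requests

BASE = "https://<pod>.informaticacloud.com/saas"   # hypothetical pod URL
HEADERS = {"icSessionId": "<session id>", "Accept": "application/json"}

task = requests.get(f"{BASE}/api/v2/mttask/<federated task id>", headers=HEADERS)
print(task.json())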

For more information, see the REST API Reference.


Changed behavior

The October 2021 release of Informatica Intelligent Cloud Services Data Integration includes the following changed behaviors.

Connectors

The October 2021 release includes the following new and enhanced connectors.

New connectors

This release includes the following new connectors.

SAP HANA CDC Connector

You can use the SAP HANA CDC Connector in a Data Integration mapping task to read data from an SAP HANA database and write the data to any supported target type.

Enhanced connectors

This release includes enhancements to the following connectors.

Amazon Redshift V2 Connector

This release includes the following enhancements for Amazon Redshift V2 Connector:

• You can configure an SQL transformation using a user-entered SQL query.

• If you are using the older version of the Amazon Redshift Connector and you plan to upgrade to the newer Amazon Redshift V2 Connector, you can choose to retain the configured field mappings from the old connector.

Amazon S3 V2 Connector

If you are using the older version of the Amazon S3 Connector and you plan to upgrade to the newer Amazon S3 V2 Connector, you can choose to retain the configured field mappings from the old connector.

Db2 for i CDC Connector

Db2 for i CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

Db2 for LUW CDC Connector

Db2 for LUW CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL


• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

Db2 for z/OS CDC Connector

Db2 for z/OS CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

Google BigQuery V2 Connector

This release includes the following enhancements for Google BigQuery V2 Connector:

• You can define expressions to flag rows for an insert, update, delete, or reject operation in a target transformation in an elastic mapping.

• You can read or write data of the Record and Repeated data type in an elastic mapping.

• You can specify multiple SQL queries when you configure an SQL transformation using a user-entered SQL query.

• You can suppress post-SQL commands when an error occurs while the mapping writes data to the target table.

• When you perform pushdown optimization using the Google BigQuery V2 connection, you can now push the SESSSTARTTIME system variable to the Google BigQuery database by using full pushdown optimization.

Google Cloud Storage V2 Connector

When you create an elastic mapping and read data from a flat file or complex file, you can use wildcard characters to specify the source directory name or the source file name.

Hadoop Files V2 Connector

You can configure a mapping to read from or write data to Hadoop Distributed File System on Cloudera CDP 7.1 private cloud and Cloudera CDW 7.2 public cloud.

Hive Connector

This release includes the following enhancements for Hive Connector:

• You can configure a mapping to read from or write data to Hive data sources on Cloudera CDP 7.1 private cloud and Cloudera CDW 7.2 public cloud.

• Effective in this release, Hive Connector includes the following enhancements for elastic mappings:

- You can configure delete, update, and data driven operations for a Hive target in an elastic mapping. When you configure a data driven operation, you can specify an expression to update, insert, or delete records in a Hive target.


- You can configure Kerberos authentication and SSL for a Hive connection in elastic mappings to access the Hadoop cluster that uses Kerberos or SSL.

- You can configure an elastic mapping to read from or write data to Hive data sources on the following Hadoop distributions:

- Amazon EMR 6.1, 6.2, and 6.3

- Azure HDInsight 4.0

- Cloudera CDH 6.1

- Cloudera CDP 7.1 private cloud and Cloudera CDP 7.2 public cloud

- You can configure a dynamic mapping task to create and batch multiple jobs based on the same elastic mapping.

JDBC V2 Connector

You can create and run mappings to read from or write to Aurora PostgreSQL and other databases that support the Type 4 JDBC driver.

Kafka Connector

You can create and run elastic mappings to read messages from or write messages to a Kafka topic.

Microsoft Azure Data Lake Storage Gen2 Connector

When you use a parameter file in a mapping task, you can save the parameter file in a cloud-hosted directory in Microsoft Azure Data Lake Storage Gen2.

Microsoft Azure Synapse SQL Connector

This release includes the following enhancements for Microsoft Azure Synapse SQL Connector:

• Pushdown enhancements for mappings that include Microsoft Azure Synapse SQL Connector

- When you configure a full or source pushdown optimization for a mapping and a transformation is not applicable, the task partially pushes down the mapping logic to the point where the transformation is supported for pushdown optimization.

- You can configure a Lookup transformation in a mapping enabled for pushdown optimization to look up data from a Microsoft Azure Data Lake Storage Gen2 or Microsoft Azure Synapse SQL source.

- You can configure a Union transformation to merge data from multiple pipelines into a single pipeline.

- You can use the data driven operation in a mapping enabled for full pushdown optimization to define expressions that flag rows for an insert, update, delete, or reject operation.

- You can override the default update SQL statement when you write data to Microsoft Azure Synapse SQL.

• You can use the copy command to bulk upload data from the staging location to Microsoft Azure Synapse SQL.

• You can configure an SQL transformation to process SQL queries and stored procedures midstream in a Microsoft Azure Synapse SQL mapping.

• You can configure the pre-build lookup cache to build the lookup cache before the Lookup transformation receives the data.

Microsoft Dynamics 365 for Sales Connector

You can use Microsoft Dynamics 365 for Sales Connector to read data from or write data to the Microsoft Dynamics 365 for Sales on-premises application. You can use the OAuth 2.0 Password Grant authentication type to log in to Microsoft Dynamics 365 for Sales on-premises.


Microsoft SQL Server Connector

When you configure a full or source pushdown optimization for an Expression transformation, you can calculate a unique checksum value for a row of data each time you read data from a source object.

Microsoft SQL Server CDC Connector

Microsoft SQL Server CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

MySQL CDC Connector

MySQL CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

Oracle CDC V2 Connector

Oracle CDC V2 connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

PostgreSQL CDC Connector

PostgreSQL CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

SAP BAPI Connector

SAP BAPI Connector includes performance enhancements.


Snowflake Cloud Data Warehouse V2 Connector

This release includes the following enhancements for Snowflake Cloud Data Warehouse V2 Connector:

• You can use the serverless runtime environment to run Snowflake tasks enabled for pushdown optimization.

• You can configure delete, update, and data driven operations for a Snowflake target in an elastic mapping. When you configure a data driven operation, you can specify an expression to update, insert, reject, or delete records in a Snowflake target.

• You can read data from or write data to Snowflake hosted on Snowflake GovCloud. You can configure mappings and pushdown optimization for mapping tasks that read data from or write data to Snowflake hosted on Snowflake GovCloud.

• When you configure an SQL transformation to call a stored procedure in Snowflake, you can specify the Snowflake database, schema, and procedure name in the advanced SQL properties.

VSAM CDC Connector

VSAM CDC connector sources can now replicate data to the following targets:

• Aurora PostgreSQL

• Amazon Redshift

• Google BigQuery

• MySQL

• PostgreSQL

• Snowflake

Changed behavior

This release includes changes in behavior for the following connectors.

Cloudera 6.1 package

Effective in this release, the Cloudera 6.1 package that contains the Informatica Hadoop distribution script and the Informatica Hadoop distribution property files is part of the Secure Agent installation. The package contains support for additional Hadoop distributions. You must have the license to use the Cloudera 6.1 distribution package.

To access the Hadoop distributions using Hive or Hadoop Files V2 Connector, you must run the Hadoop distribution script and specify the distribution version for the mapping or elastic job. Even if you want to use only the CDH 6.1 distribution for the source or target, you must still download the CDH 6.1 libraries using the Hadoop distribution script.

Previously, you had to contact Global Customer Support to download and run the Informatica Hadoop distribution script.

Important: These changes are not applicable for connectors such as Amazon S3 V2 Connector, Microsoft Azure Data Lake Storage Gen2 Connector, Google Cloud Storage V2 Connector, or Kafka Connector. To use the Cloudera CDH 6.1 libraries for these connectors, you require only the Cloudera CDH 6.1 license.


Steps to access the Hadoop distributions from the Cloudera 6.1 package

You must perform the following tasks to run the script from the Secure Agent installation location and access the Hadoop distributions (a sample command sequence follows the steps):

1. Go to the following Secure Agent installation directory where the Informatica Hadoop distribution script is located:

<Secure Agent installation directory>/downloads/package-Cloudera_6_1/package/Scripts

2. Copy the Scripts folder outside the Secure Agent installation directory on your machine.

3. From the terminal, run the following command from the Scripts folder: ./infadistro.sh

4. When prompted, select Data Integration or Data Integration Elastic as the service for which you want to run the script:

• Enter 1 to select Cloud Data Integration.

• Enter 2 to select Cloud Data Integration Elastic.

Note: Data Integration Elastic is applicable only for Hive mappings.

5. When prompted, specify the value of the Hadoop distribution that you want to use.

The third-party libraries are copied to the following directory based on the option you selected in step 4:

• For Data Integration: <Secure Agent installation directory>/apps/Data_Integration_Server/ext/deploy_to_main/distros/Parsers/<Hadoop distribution version>/lib

• For Data Integration Elastic: <Secure Agent installation directory>/ext/connectors/thirdparty/informaticallc.hiveadapter/spark/lib

where the value of the Hadoop distribution version is based on the Hadoop distribution you specified.

6. If you copied the Scripts folder to a machine where the Secure Agent is not installed, perform steps 4 and 5 on that machine, and then manually copy the libraries to the Secure Agent machine:

• For Data Integration, the third-party libraries are copied to the following directory: <CurrentDirectory>/deploy_to_main/distros/Parsers/<Hadoop distribution version>/lib
Manually copy the deploy_to_main directory to the following Secure Agent directory: <Secure Agent installation directory>/apps/Data_Integration_Server/ext

• For Data Integration Elastic, the third-party libraries are copied to the following directory: <CurrentDirectory>/informaticallc.hiveadapter/spark/lib
Manually copy the informaticallc.hiveadapter directory to the following Secure Agent directory: <Secure Agent installation directory>/ext/connectors/thirdparty/

7. Set the INFA_HADOOP_DISTRO_NAME property for the DTM in the Secure Agent properties and set its value to the distribution version that you want to use.

8. Restart the Secure Agent.
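The following is a minimal sketch of this procedure on a Linux machine, assuming the agent is installed at /opt/infaagent and that you run the script for Data Integration with the CDH_6.1 distribution. The path and values are illustrative; substitute the values for your environment.

# Steps 1-3: copy the Scripts folder out of the installation and run the script
cp -r /opt/infaagent/downloads/package-Cloudera_6_1/package/Scripts /tmp/Scripts
cd /tmp/Scripts
./infadistro.sh
# Steps 4-5: at the prompts, enter 1 (Cloud Data Integration), then specify CDH_6.1.
# The script copies the third-party libraries for you.
# Step 7: in the Secure Agent properties, set the DTM property
# INFA_HADOOP_DISTRO_NAME=CDH_6.1, then restart the Secure Agent (step 8).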

Hadoop distributions applicable for mappings and elastic mappings

The following table lists the supported distribution versions that you can access from the Cloudera 6.1 distribution package for mappings and elastic mappings. You must specify the appropriate Hadoop distribution values when you run the Hadoop distribution script and set the DTM property based on the Hadoop distribution that you want to access:

Jobs                         Hadoop Distribution               Distribution Option in     Value in DTM Flag
                                                               infadistro.sh Script
-----------------------------------------------------------------------------------------------------------
Data Integration*            Cloudera CDH 6.1                  CDH_6.1                    CDH_6.1
                             Hortonworks HDP 3.1               HDP_3.1                    HDP_3.1
                             Amazon EMR 5.20                   EMR_5.20                   EMR_5.20
                             Azure HDInsight 4.0               HDInsight_4.0              HDInsight_4.0
Data Integration Elastic**   Cloudera CDH 6.1                  CDH_6.1                    CDH_6.1
                             Cloudera CDP 7.1 private cloud    CDP_7.1                    DTM flag is not required.
                             Cloudera CDW 7.2 public cloud     CDW_7.2                    DTM flag is not required.
                             Amazon EMR 6.1, 6.2, and 6.3      EMR_5.20                   EMR_5.20
                             Azure HDInsight 4.0               HDInsight_4.0              HDInsight_4.0

*Applies to Hive and Hadoop Files V2 Connector.
**Applies to Hive Connector.

Note: For Amazon EMR 6.1, 6.2, and 6.3, specify the EMR_5.20 distribution option in the script and the EMR_5.20 value in the DTM flag.

Capture debug logs

Effective in this release, if you want the session to capture the debug logs, set the following properties:

1. In the Custom Configuration section in the agent properties, set the LOGLEVEL=DEBUG flag as a DTM property for the Data Integration Server.

2. On the Schedule page in the mapping task properties, select the Verbose execution mode.

To exclude the debug logs, change the execution mode from Verbose back to Standard in the mapping task properties.

Previously, the session logs included the debug logs when you set the LOGLEVEL=DEBUG property, even if you ran the mapping task in Standard execution mode.
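For illustration, the DTM property from step 1 corresponds to an entry like the following in the Custom Configuration section (shown conceptually; the exact field labels in the Administrator interface might differ):

Service: Data Integration Server
Type: DTM
Name: LOGLEVEL
Value: DEBUG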

Error messages for elastic mappings

Effective in this release, when an elastic mapping fails, the error messages that appear on the user interface are standardized and do not contain the stack trace from exceptions. For details of the error message, you must check the session log.

Previously, error messages contained the exception stack trace and internal details in the message description.

Expression transformations in mappings enabled for pushdown optimization

Effective in this release, when you configure a mapping for pushdown optimization, you can add an Expression transformation to each of multiple sources, followed by a join downstream in the mapping.

Additionally, you can add multiple Expression transformations that branch out from a transformation and then converge into a single transformation downstream in the mapping.


Previously, if the mapping contained multiple Expression transformations that were connected to a single transformation downstream, pushdown was disabled and the mapping ran without pushdown optimization.

Amazon S3 V2 Connector

Effective in this release, when you specify the customer master key ID in the connection properties and select server-side encryption as the encryption type for complex files, the target file is encrypted with server-side encryption.

Previously, the target file was encrypted with server-side encryption with KMS.

Google BigQuery V2 Connector

Effective in this release, you can suppress post-SQL commands when an error occurs while the mapping writes data to the target table.

Previously, the post-SQL commands ran even if the mapping failed to write the data to the target.

Microsoft Azure Synapse SQL Connector

Effective in this release, Microsoft Azure Synapse SQL Connector includes the following changes:

• When you override the target table name or schema name and truncate the target table in Microsoft Azure Synapse SQL, the Secure Agent truncates the table or schema that you specify in the override property before it writes the data to the target. Previously, the Secure Agent truncated the table or schema that you specified at design time.

• When you override the target table name or schema name and create a new target at runtime, the Secure Agent creates the target table based on the table name or schema name override that you specify in the advanced properties. Previously, the Secure Agent created the table based on the table name or schema name that you specified at design time.

SAP BAPI Connector

Effective in this release, you can select the JCo Trace option in the Connection Properties section of the BAPI connection to store information about the JCo calls in a trace file.

Previously, the JCo Trace option was not available. You could store the information in a trace file only by defining the Trace parameter in the SAP Additional Parameters field.

Support for serverless runtime environment

Effective in this release, you can use the serverless runtime environment to run mappings with the following connectors:

• JDBC (JDBC_IC) Connector

• NetSuite RESTlet V2 (NetSuite V2) Connector

• ODBC Connector

• SAP Connector - SAP ADSO Writer, SAP BAPI, SAP Table, SAP ODP Extractor

• Web Service Consumer (WS Consumer) Connector


Chapter 6

Upgrade

The following topics provide information about tasks that you might need to perform before or after an upgrade of Informatica Intelligent Cloud Services Data Integration. Post-upgrade tasks for previous monthly releases are also included in case you haven't performed these tasks after the previous upgrade.

Preparing for the upgrade

The Secure Agent upgrades the first time that you access Informatica Intelligent Cloud Services after the upgrade.

Files that you added to the following directory are preserved after the upgrade:

<Secure Agent installation directory>/apps/Data_Integration_Server/ext/deploy_to_main/bin/rdtm-extra

Perform the following steps to ensure that the Secure Agent is ready for the upgrade:

1. Ensure that each Secure Agent machine has sufficient disk space available for upgrade.

The machine must have 5 GB of free space or the amount of disk space calculated using the following formula, whichever is greater (a sample calculation follows these steps):

Minimum required free space = 3 * (size of current Secure Agent installation directory - space used for logs directory)

2. Close all applications and open files to avoid file lock issues, for example:

• Windows Explorer

• Notepad

• Windows Command Processor (cmd.exe)
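As an illustration of the formula in step 1, the following sketch estimates the required free space on a Linux agent machine. The installation and logs paths are assumptions; substitute the paths used in your environment. For example, a 4 GB installation with 1 GB of logs requires 3 * (4 - 1) = 9 GB, which exceeds the 5 GB floor.

# Estimate sizes in KB; the paths below are illustrative, not documented defaults.
AGENT_DIR=/opt/infaagent
TOTAL_KB=$(du -sk "$AGENT_DIR" | awk '{print $1}')
LOGS_KB=$(du -sk "$AGENT_DIR/apps/agentcore/logs" | awk '{print $1}')
REQUIRED_KB=$(( 3 * (TOTAL_KB - LOGS_KB) ))
FLOOR_KB=$(( 5 * 1024 * 1024 ))   # 5 GB expressed in KB
echo "Free space needed (KB): $(( REQUIRED_KB > FLOOR_KB ? REQUIRED_KB : FLOOR_KB ))"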

Post-upgrade tasks for the February 2022 release

Perform the following tasks after your organization is upgraded to the February 2022 release.


Unsupported Hadoop distribution packages

Effective in this release, Informatica no longer supports the following Hadoop distribution packages:

• cloudera_cdh5u8

• cloudera_cdh5u13

• Cloudera_5_4

• EMR_5.20

• AmazonEMR_5_0

• emr_5_4_0

• HDInsight_3.6

• HDInsight_4.0

• HDP_3.1

• Hortonworks_2_3

• hortonworks_2.5

• hortonworks_2.6

Even if you have the licenses for these packages, you can no longer use them.

You must upgrade to the Cloudera 6.1 Hadoop distribution package. To get the license, contact Informatica Global Customer Support.

The Cloudera 6.1 package is part of the Secure Agent installation and contains the Informatica Hadoop distribution script and the supported Informatica Hadoop distribution property files. When you run the Hadoop distribution script, you can specify the supported distribution version available in the package for the mapping or elastic job.

Sequence Generator transformation in mappings enabled for pushdown optimization

After you upgrade, existing tasks enabled with pushdown optimization run without pushdown optimization. This issue occurs when the NEXTVAL() port in a Sequence Generator transformation is linked directly to a single input port or multiple input ports in a Target transformation.

Previously, when the NEXTVAL() port was linked directly to a single input port or multiple input ports in a Target transformation, the mappings ran successfully with pushdown optimization, but generated incorrect data.

RunAJob log4j properties

If you used the runAJob utility before the February release, you need to replace your existing log4j.properties file with the log4j2.properties file. Logs will not be generated until the file is replaced.

1. Create a copy of the log4j2_default.properties file which is located in the following directory:

<Secure Agent installation directory>\apps\runAJobCli

2. Rename the file to log4j2.properties.

3. Optionally, configure parameters in the file.

4. Save the file.
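On a Linux agent, where the path uses forward slashes, steps 1 and 2 reduce to a single copy command, as in the following minimal sketch; the installation path /opt/infaagent is an assumption:

# Copy the default file and rename it in one step (path is illustrative).
cd /opt/infaagent/apps/runAJobCli
cp log4j2_default.properties log4j2.properties
# Optionally edit log4j2.properties to configure logging parameters, then save.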


Microsoft Azure Data Lake Storage Gen2 Connector

After you upgrade, existing mappings fail if the metadata of the imported object does not match the metadata fetched in an override.

You must fix the metadata of the imported object and run the mapping again.

Previously, if the metadata of the imported object did not match the metadata fetched in an override, the mapping did not fail.

File integration proxy server

If you use the file integration proxy server, update the server with the latest version of the fis-proxy-server.zip file.

For more information, see What's New in the Administrator help.

Post-upgrade tasks for the October 2021 release

Perform the following tasks after your organization is upgraded to the October 2021 release.

Advanced properties in mappings

After you upgrade, existing mappings fail when the source and target advanced properties contain data type values that the fields do not support.

For example, when you run an existing Microsoft Azure Data Lake Storage Gen2 mapping that has the Block Size source and target advanced property value defined as a String value of 1GB instead of an Integer value, the mapping fails with the following error message:

Exception occurred while converting blockSize value 1GB to Integer

Previously, the mappings ran successfully even if you specified a String or BigInt data type value as the block size.

Before you upgrade to the October 2021 release, you must modify your mappings to include a valid data type value that the advanced source and target property field supports.

Custom query override in taskflows

After you upgrade, existing taskflows that override the custom query of a mapping task might need manual updates.

If a taskflow contains a Data Task step that uses a mapping task with a custom query, and you override the custom query in the taskflow with a query that exceeds 65 characters, the mapping task fails with an error.

To run the taskflow successfully, update the overridden custom query in the Data Task input field. To do this, delete the input field, reselect the input field, and then enter the custom query again. You can use a custom query that exceeds 65 characters.

Note: Before you reselect the Data Task input field, you must clear the cache or switch to incognito mode.

For more information about overriding a custom query in taskflows, see the following community article:

https://network.informatica.com/docs/DOC-19268


Sequence Generator transformation in mappings enabled for pushdown optimization

After you upgrade, existing tasks enabled with pushdown optimization run without pushdown optimization. This issue occurs when the NEXTVAL() port in a Sequence Generator transformation is linked to input ports of multiple downstream transformations in the mapping.

If the NEXTVAL() port in a Sequence Generator transformation is linked directly to a single input port or multiple input ports in a Target transformation, the mapping runs with pushdown optimization.

Previously, when the NEXTVAL() port was linked to input ports of multiple downstream transformations, the mappings ran successfully with pushdown optimization, but generated incorrect data.

File Processor Connector

After the upgrade, if you decrypt files that were encrypted in an earlier release, the mapping task runs successfully but the files are not decrypted properly. This applies when you use password-based encryption (PBE).

To fix this issue, you must perform one of the following tasks:

• Encrypt the file again using the October 2021 release and then decrypt the files.

• Decrypt the files that were encrypted in an earlier release by setting the value of the FileProcessorFIPSSupport custom property to false for the DTM type in the Data Integration Server.

Google BigQuery V2 Connector

After the upgrade, existing mappings fail in the following scenarios:

• The schema of the table that you specified as an override and the corresponding table selected during design time are different. To run mappings successfully, set the DisableMappingDeployment custom property value to true for the Secure Agent in Cloud Data Integration.

• The mapping contains an override to the dataset name and table name, and the Create Disposition property is set to Create if Needed in the Google BigQuery target transformation. To run mappings successfully, set the DisableMappingDeployment custom property value to true in the Secure Agent properties.

Hive Connector

After the upgrade, existing elastic mappings that read data from or write data to Hive on the Cloudera CDW 7.2 public cloud distribution fail with the following error:

java.lang.NoClassDefFoundError: org/apache/hadoop/io/Text

To access the required jars for Cloudera CDW 7.2 public cloud, run the Hadoop distribution script and specify the distribution version CDW_7.2 for the elastic job.

The script is located in the following location:

<Secure Agent installation directory>/downloads/package-Cloudera_6_1/package/Scripts
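The fix reduces to rerunning the distribution script, as in the following minimal sketch that uses the document's placeholder for the installation path:

cd "<Secure Agent installation directory>/downloads/package-Cloudera_6_1/package/Scripts"
./infadistro.sh
# At the prompts, enter 2 (Cloud Data Integration Elastic) and specify CDW_7.2.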

For more information about running the script, see "Cloudera 6.1 package" in the Changed behavior topic.


Microsoft Azure Synapse SQL Connector

If an existing mapping is enabled for the truncate target option and also contains an override to the target table name or schema name in the mapping properties, the data output before and after the upgrade differs.

The data output differs after the upgrade because the agent truncates the table or schema specified in the override property.

Previously, the agent truncated the table or schema that you specified at design time.

If you do not want the agent to truncate the table or schema that you specify in the override property, disable the truncate table option and run the mapping again.


Chapter 7

Enhancements in previous releases

You can find information on enhancements and changed behavior in previous Data Integration releases on Informatica Network.

What's New guides for releases occurring within the last year are included in the following community article: https://network.informatica.com/docs/DOC-17912


Index

C
Cloud Application Integration community
  URL 6
Cloud Developer community
  URL 6

D
Data Integration community
  URL 6
Data Masking transformation
  enhancements 24
data transfer tasks
  lookup sources 23

F
File Integration Service proxy 36
fis-proxy-server.zip 36

H
Hierarchy Builder transformation
  enhancements 9

I
Informatica Global Customer Support
  contact information 7
Informatica Intelligent Cloud Services
  web site 6

L
log4j2 properties file for RunAJob utility 35
Lookup transformation
  enhancements 24
  Sequence-ID field 24

M
maintenance outages 7

R
REST API
  enhancements 25
RunAJob log4j2 properties file 35

S
Secure Agents
  upgrade preparation 34
status
  Informatica Intelligent Cloud Services 7
Structure Parser transformation
  enhancements 13
system status 7

T
trust site
  description 7

U
upgrade notifications 7
upgrade preparation
  Secure Agent preparation 34

W
web site 6
