Sunday, 27 July 2014


Informatica interview questions 1
We have a source table containing 3 columns: Col1, Col2 and Col3. There is only 1 row in the table, as follows:
Col1 Col2 Col3
—————–
  a       b       c
There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows, as follows:
Col
—–
a
b
c
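The standard answer is a Normalizer transformation (occurs = 3); a Union of three pipelines, each carrying one column, also works. A minimal Python sketch of the row logic, using the column names from the question:

```python
# One source row with three columns becomes three single-column rows -
# the same effect a Normalizer transformation (occurs = 3) produces.
source_row = {"Col1": "a", "Col2": "b", "Col3": "c"}

def normalize(row):
    for col in ("Col1", "Col2", "Col3"):
        yield {"Col": row[col]}

for out in normalize(source_row):
    print(out)   # {'Col': 'a'}, {'Col': 'b'}, {'Col': 'c'}
```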
Informatica interview questions 2

There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into one target and all the duplicate rows (only 1 occurrence each) into another target.
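One common design is an Aggregator that counts occurrences grouped by all columns, followed by a Router with two groups (count = 1 and count > 1). A minimal Python sketch of that logic, assuming a small sample of single-column rows:

```python
from collections import Counter

rows = ["a", "b", "a", "c", "b", "a"]     # assumed sample source rows

counts = Counter(rows)                    # Aggregator: COUNT(*) grouped by all columns
unique_target    = [v for v, n in counts.items() if n == 1]   # Router group: count = 1
duplicate_target = [v for v, n in counts.items() if n > 1]    # Router group: count > 1

print(unique_target)      # ['c']
print(duplicate_target)   # ['a', 'b'] (one occurrence each)
```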
Informatica interview questions 3
There is a source table containing 2 columns Col1 and Col2 with data as follows:
Col1   Col2
----   ----
a      l
b      p
a      m
a      n
b      q
x      y
Design a mapping to load a target table with following values from the above mentioned source:
Col1   Col2
----   ----
a      l, m, n
b      p, q
x      y
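A usual design is a Sorter on Col1 feeding an Expression whose variable ports build up the comma-separated string while the key stays the same, with an Aggregator keeping the last row per key. A Python sketch of the same logic:

```python
from itertools import groupby

rows = [("a", "l"), ("b", "p"), ("a", "m"), ("a", "n"), ("b", "q"), ("x", "y")]

rows.sort(key=lambda r: r[0])                 # Sorter on Col1 (stable, keeps Col2 order)
target = [(k, ", ".join(v for _, v in grp))   # concatenate Col2 values per Col1 group
          for k, grp in groupby(rows, key=lambda r: r[0])]
print(target)   # [('a', 'l, m, n'), ('b', 'p, q'), ('x', 'y')]
```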
Informatica interview questions 4
Design an Informatica mapping to load the first half of the records into one target and the other half into a separate target.
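One approach: get the total row count (e.g. an Aggregator COUNT joined back to the flow), assign each row a running row number with a Sequence Generator or Expression, and route on rownum <= total/2. A Python sketch, with assumed sample rows:

```python
rows = list(range(1, 11))   # assumed: 10 source rows
total = len(rows)           # from an Aggregator COUNT(*) joined back to the pipeline

# Router: first half vs second half, based on the running row number
target1 = [r for i, r in enumerate(rows, start=1) if i <= total // 2]
target2 = [r for i, r in enumerate(rows, start=1) if i > total // 2]
print(target1, target2)     # [1..5] [6..10]
```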
Informatica interview questions 5
A source table contains emp_name and salary columns. Develop an Informatica mapping to load all records with the 5th highest salary into the target table.
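A common design is a Rank transformation on salary (top 5) followed by a filter on rank = 5; with ties, all employees at that salary level are kept. A Python sketch using a dense rank over distinct salaries, with assumed sample data:

```python
employees = [("Max", 900), ("Ann", 800), ("Bob", 800), ("Joe", 700),
             ("Sue", 650), ("Kim", 600), ("Ray", 600)]   # assumed sample data

distinct_desc = sorted({sal for _, sal in employees}, reverse=True)
fifth_highest = distinct_desc[4]                          # 5th highest distinct salary
target = [e for e in employees if e[1] == fifth_highest]
print(target)   # [('Kim', 600), ('Ray', 600)]
```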
Informatica interview questions 6
Let's say the source table has a large number of records, and there are 3 target tables A, B and C. Insert records 1 to 10 into A, then 11 to 20 into B and 21 to 30 into C.
Then again 31 to 40 into A, 41 to 50 into B and 51 to 60 into C, and so on up to the last record.
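This is typically solved with a Sequence Generator (or Expression variable port) producing a row number, an Expression computing the block, e.g. something like MOD(CEIL(rownum / 10) - 1, 3), and a Router with three groups. A Python sketch of the bucketing logic:

```python
rows = list(range(1, 61))                 # assumed: 60 source rows
targets = {"A": [], "B": [], "C": []}

for rownum, row in enumerate(rows, start=1):
    bucket = ((rownum - 1) // 10) % 3     # block of 10, cycled over A, B, C
    targets["ABC"[bucket]].append(row)

print(targets["A"])   # rows 1-10 and 31-40
```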
Informatica interview questions 7
What are the validation rules for connecting transformations in Informatica?
Informatica interview questions 8
The source is a flat file, and we want to load unique and duplicate records separately into two separate targets. How can this be done?
Informatica interview questions 9
Input file
———
10
10
10
20
20
30
output file
————
1
2
3
1
2
1
Scenario: count the occurrences of each value as it repeats. In the above case the first 10 gives a count of 1, the next 10 gives 2, the third 10 gives 3; when 20 comes, the count starts again at 1.
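The classic solution is an Expression transformation with variable ports, relying on the fact that ports are evaluated top to bottom: V_count = IIF(value = V_prev, V_count + 1, 1), then V_prev = value. A Python sketch of that logic:

```python
values = [10, 10, 10, 20, 20, 30]
prev, count, out = None, 0, []

for v in values:
    count = count + 1 if v == prev else 1   # V_count = IIF(v = V_prev, V_count + 1, 1)
    prev = v                                # V_prev = v (evaluated after V_count)
    out.append(count)

print(out)   # [1, 2, 3, 1, 2, 1]
```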
Informatica interview questions 10
Input file
———
10
10
10
20
20
30
output file
———-
1
2
3
4
5
6 
Informatica interview questions 11
Input file
———
10
10
10
20
20
30
output file
———->
1
1
1
2
2
3 
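Here the counter increments only when a new value appears (a dense rank), again via Expression variable ports on sorted input. A Python sketch:

```python
values = [10, 10, 10, 20, 20, 30]   # assumed already sorted on this column
prev, rank, out = None, 0, []

for v in values:
    if v != prev:                    # V_rank = IIF(v = V_prev, V_rank, V_rank + 1)
        rank += 1
    prev = v
    out.append(rank)

print(out)   # [1, 1, 1, 2, 2, 3]
```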
Informatica interview questions 12
There are 2 tables (input tables):

table aa             table bb
--------             --------
id   name            id   name
101  ramesh          106  harish
102  shyam           103  hari
103  ----            104  ram
104  ----

output file
-----------
id   name
101  ramesh
102  shyam
103  hari
104  ram
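The expected output keeps the ids of table aa and fills in the missing names from table bb, which suggests a Joiner (aa as the driving table, joined on id) plus an Expression like IIF(ISNULL(aa.name), bb.name, aa.name). A Python sketch:

```python
aa = {101: "ramesh", 102: "shyam", 103: None, 104: None}
bb = {106: "harish", 103: "hari", 104: "ram"}

# name = IIF(ISNULL(aa.name), bb.name, aa.name); ids come only from table aa
target = {id_: (name if name is not None else bb.get(id_))
          for id_, name in aa.items()}
print(target)   # {101: 'ramesh', 102: 'shyam', 103: 'hari', 104: 'ram'}
```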
Informatica interview questions 13
table aa (input file)
---------------------
id   name
10   aa
10   bb
10   cc
20   aa
20   bb
30   aa

Output
------
id   name1   name2   name3
10   aa      bb      cc
20   aa      bb      --
30   aa      --      --
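This is a rows-to-columns pivot: sort by id, then spread each group's names across fixed columns (in Informatica, an Expression with variable ports feeding an Aggregator that keeps the last row per id). A Python sketch:

```python
from itertools import groupby

rows = [(10, "aa"), (10, "bb"), (10, "cc"), (20, "aa"), (20, "bb"), (30, "aa")]

rows.sort(key=lambda r: r[0])                    # Sorter on id
for id_, grp in groupby(rows, key=lambda r: r[0]):
    names = [n for _, n in grp][:3]
    names += ["--"] * (3 - len(names))           # pad name2/name3 when missing
    print(id_, *names)
# 10 aa bb cc / 20 aa bb -- / 30 aa -- --
```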
Informatica interview questions 14
table aa (input file)
---------------------
id   name
10   a
10   b
10   c
20   d
20   e

output
------
id   name
10   abc
20   de
Informatica interview questions 15
In the below scenario, how can I split each row into multiple rows depending on the date range?
The source rows are as

ID Value from_date(mm/dd) To_date(mm/dd)
1 $10 1/2 1/3
2 $5 1/5 1/8
3 $20 1/9 1/11
The target should be
ID Value Date
1 $10 1/2
1 $10 1/3
2 $5 1/5
2 $5 1/6
2 $5 1/7
2 $5 1/8
3 $20 1/9
3 $20 1/10
3 $20 1/11
What is the Informatica solution?
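One option is a Java transformation that calls generateRow() in a loop over the date range; a Normalizer driven by a pre-computed day count also works. A Python sketch of the row-expansion logic (the year is assumed, since the question gives only mm/dd):

```python
from datetime import date, timedelta

rows = [(1, "$10", date(2014, 1, 2), date(2014, 1, 3)),
        (2, "$5",  date(2014, 1, 5), date(2014, 1, 8)),
        (3, "$20", date(2014, 1, 9), date(2014, 1, 11))]

for id_, value, start, end in rows:
    d = start
    while d <= end:                    # emit one output row per day in the range
        print(id_, value, f"{d.month}/{d.day}")
        d += timedelta(days=1)
```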
Informatica interview questions 16
How can the following be achieved with a single Informatica mapping?
* If the HEADER table record has an error value or no value (NULL), then that record and its corresponding child records in the SUBHEADER and DETAIL tables should be rejected from the targets (TARGET1, TARGET2 and TARGET3).
* If the HEADER table record is valid, but the SUBHEADER or DETAIL table record has an error value (NULL), then no data for that record should be loaded into any of TARGET1, TARGET2 or TARGET3.
* Only if the HEADER table record is valid and the corresponding SUBHEADER and DETAIL table records are also valid should the data be loaded into TARGET1, TARGET2 and TARGET3.
HEADER
C1 C2 C3 C4 C5 C6
1 ABC null null C1
2 ECI 756 CENTRAL TUBE C2
3 GTH 567 PINCDE C3
SUBHEADER
C1 C2 C3 C4 C5 C6
1 01001 VALUE3 748 543
1 01002 VALUE4 33 22
1 01003 VALUE6 23 11
2 02001 AAP1 334 443
2 02002 AAP2 44 22
3 03001 RADAR2 null 33
3 03002 RADAR3 null 234
3 03003 RADAR4 83 31
DETAIL
C1 C2 C3 C4 C5 C6
1 D01 TXXD2 748 543
1 D02 TXXD3 33 22
1 D03 TXXD4 23 11
2 D01 PXXD2 56 224
2 D02 PXXD3 666 332
—————————————————————————–
TARGET1
2 XYZ 756 CENTRALTUBE CITY2
TARGET2
2 02001 AAP1 334 443
2 02002 AAP2 44 22
TARGET3
2 D01 PXXD2 56 224
2 D02 PXXD3 666 332
————————————————————————–
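The rule boils down to: a header id loads to all three targets only when the header row and every related SUBHEADER and DETAIL row are free of NULLs, which can be implemented as a validity flag (joins plus an Expression) feeding a Router. A simplified Python sketch with trimmed sample data from the question:

```python
headers   = {1: ("ABC", None, None), 2: ("ECI", 756, "CENTRAL TUBE"), 3: ("GTH", 567, "PINCDE")}
subheader = {1: [(748, 543)], 2: [(334, 443), (44, 22)], 3: [(None, 33), (None, 234), (83, 31)]}
detail    = {1: [(748, 543)], 2: [(56, 224), (666, 332)]}

def valid(cols):
    return all(c is not None for c in cols)

for hid, hcols in headers.items():
    ok = (valid(hcols)
          and all(valid(r) for r in subheader.get(hid, []))
          and all(valid(r) for r in detail.get(hid, [])))
    if ok:
        print("load id", hid, "into TARGET1, TARGET2, TARGET3")   # only id 2 qualifies
```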
Informatica interview questions 17
If the source contains both unique and duplicate records, e.g. 1, 1, 2, 3, 3, 4, load the unique records (2, 4) into one target and the duplicate records (1, 1, 3, 3) into another.
Informatica interview questions 18
I have 100 records in a relational table and I want to load them into 3 targets: the first record goes to target 1, the second to target 2, the third to target 3, and so on. Which transformations are used for this?
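A Sequence Generator (NEXTVAL), an Expression computing MOD(NEXTVAL, 3) and a Router with three groups cover this. A Python sketch:

```python
rows = list(range(1, 10))             # assumed sample rows
targets = {0: [], 1: [], 2: []}

for nextval, row in enumerate(rows, start=1):
    targets[nextval % 3].append(row)  # Router groups on MOD(NEXTVAL, 3) = 1, 2, 0

print(targets[1])   # rows 1, 4, 7 -> target 1
```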
Informatica interview questions 19
There are three columns, empid, salmonth and sal, containing values like:
101, January, 1000
101, February, 1000 ...
With twelve such rows per employee, the required output contains 13 columns: empid, jan, feb, march, ..., dec, with values like 101, 1000, 1000, 1000, etc.
Informatica interview questions 20
I have a source as a file or DB table:
E-no   e-name    sal   dept
0101   Max       100   1
0102   steve     200   2
0103   Alex      300   3
0104   Sean       76   1
0105   swaroop   120   2
I want to run one session 3 times:
First run:  populate only department 1.
Second run: only department 2.
Third run:  only department 3.
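A mapping parameter in the filter condition handles this: the Filter uses dept = $$DEPT_ID (a hypothetical parameter name), and only the parameter file changes between runs. A Python sketch of the idea, taking the department from the command line instead of a parameter file:

```python
import sys

dept_to_load = int(sys.argv[1]) if len(sys.argv) > 1 else 1   # stands in for $$DEPT_ID

rows = [("0101", "Max", 100, 1), ("0102", "steve", 200, 2),
        ("0103", "Alex", 300, 3), ("0104", "Sean", 76, 1),
        ("0105", "swaroop", 120, 2)]

target = [r for r in rows if r[3] == dept_to_load]            # Filter: dept = $$DEPT_ID
print(target)
```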

What are the differences between Connected and Unconnected Lookup?

The differences are illustrated below:
• A connected lookup participates in the dataflow and receives input directly from the pipeline, whereas an unconnected lookup receives input values from the result of a :LKP expression in another transformation.
• A connected lookup can use both dynamic and static cache; an unconnected lookup cache cannot be dynamic.
• A connected lookup can return more than one column value (output port); an unconnected lookup can return only one column value, through its return port.
• A connected lookup caches all lookup columns; an unconnected lookup caches only the lookup columns used in the lookup condition and the return port.
• A connected lookup supports user-defined default values (the value to return when the lookup condition is not satisfied); an unconnected lookup does not support user-defined default values.

What is meant by active and passive transformation?

An active transformation is one that performs any of the following actions:
1) Changes the number of rows between transformation input and output. Example: Filter transformation.
2) Changes the transaction boundary by defining commit or rollback points. Example: Transaction Control transformation.
3) Changes the row type. Example: Update Strategy is active because it flags rows for insert, delete, update or reject.
On the other hand, a passive transformation is one which does not change the number of rows that pass through it. Example: Expression transformation.

What is the difference between Router and Filter?

The following differences can be noted:
• A Router transformation divides the incoming records into multiple groups based on conditions, and those groups can be mutually inclusive (different groups may contain the same record). A Filter transformation restricts or blocks the incoming record set based on one given condition.
• A Router transformation itself does not block any record: if a record does not match any of the routing conditions, it is routed to the default group. A Filter transformation does not have a default group; if a record does not match the filter condition, the record is blocked.
• A Router acts like a CASE..WHEN statement in SQL (or a switch..case statement in C), while a Filter acts like a WHERE condition in SQL.

What can we do to improve the performance of Informatica Aggregator Transformation?

Aggregator performance improves dramatically if records are sorted before being passed to the Aggregator and the "Sorted Input" option under the Aggregator properties is checked. The record set should be sorted on the columns used in the Group By operation.
It is often a good idea to sort the record set at the database level, e.g. with an ORDER BY in the Source Qualifier, unless there is a chance that the already sorted records could become unsorted again before reaching the Aggregator.
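The reason sorted input helps: once the group-by key changes, the Aggregator can emit the finished group immediately instead of caching every group until the end of the data. A Python sketch of that streaming behaviour:

```python
from itertools import groupby

rows = [("a", 1), ("a", 2), ("b", 3), ("b", 4)]   # already sorted on the group key

for key, grp in groupby(rows, key=lambda r: r[0]):
    print(key, sum(v for _, v in grp))            # group flushed as soon as its key ends
```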

What are the different lookup cache(s)?

Informatica lookups can be cached or un-cached (no cache), and a cached lookup can be either static or dynamic. A static cache is not modified once it is built; it remains the same throughout the session run. A dynamic cache, on the other hand, is refreshed during the session run by inserting or updating records in the cache based on the incoming source data. By default, the Informatica lookup cache is static.
A lookup cache can also be classified as persistent or non-persistent, based on whether Informatica retains the cache even after the session run completes or deletes it.

How can we update a record in target table without using Update strategy?

A target table can be updated without using an Update Strategy. For this, we need to define the key of the target table at the Informatica level and then connect the key and the field we want to update in the mapping target. At the session level, we should set the target property to "Update as Update" and check the "Update" check-box.
Let's assume we have a target table "Customer" with fields "Customer ID", "Customer Name" and "Customer Address", and we want to update "Customer Address" without an Update Strategy. Then we have to define "Customer ID" as the primary key at the Informatica level and connect the Customer ID and Customer Address fields in the mapping. If the session properties are set as described above, the mapping will update the Customer Address field for all matching Customer IDs.

Under what condition selecting Sorted Input in aggregator may fail the session?

  • If the input data is not sorted correctly, the session will fail.
  • Even if the input data is properly sorted, the session may fail if the sort-order ports and the Group By ports of the Aggregator are not in the same order.

Why is Sorter an Active Transformation?

This is because we can select the "distinct" option in the sorter property.
When the Sorter transformation is configured to treat output rows as distinct, it assigns all ports as part of the sort key. The Integration Service discards duplicate rows compared during the sort operation. The number of Input Rows will vary as compared with the Output rows and hence it is an Active transformation.

Is lookup an active or passive transformation?

From Informatica 9.x onwards, the Lookup transformation can be configured as an "Active" transformation.
However, in older versions of Informatica, Lookup was a passive transformation.

What is the difference between Static and Dynamic Lookup Cache?

We can configure a Lookup transformation to cache the underlying lookup table. In case of static or read-only lookup cache the Integration Service caches the lookup table at the beginning of the session and does not update the lookup cache while it processes the Lookup transformation.
In case of dynamic lookup cache the Integration Service dynamically inserts or updates data in the lookup cache and passes the data to the target. The dynamic cache is synchronized with the target.
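A dynamic cache is needed when the same session both looks up and loads the target, e.g. de-duplicating incoming data or maintaining a slowly changing dimension. A Python sketch of the insert/update decision, mirroring the NewLookupRow values Informatica exposes (0 = no change, 1 = insert, 2 = update):

```python
lookup_cache = {1: "old address"}        # built from the target table at session start

def process(key, value):
    if key not in lookup_cache:
        lookup_cache[key] = value
        return 1                         # NewLookupRow = 1: row inserted into the cache
    if lookup_cache[key] != value:
        lookup_cache[key] = value
        return 2                         # NewLookupRow = 2: row updated in the cache
    return 0                             # NewLookupRow = 0: no change

print(process(2, "addr X"))   # 1 - new key inserted
print(process(1, "addr Y"))   # 2 - existing key updated
```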

What is the difference between STOP and ABORT options in Workflow Monitor?

When we issue the STOP command on the executing session task, the Integration Service stops reading data from source. It continues processing, writing and committing the data to targets. If the Integration Service cannot finish processing and committing data, we can issue the abort command.
In contrast, the ABORT command has a timeout period of 60 seconds. If the Integration Service cannot finish processing and committing data within the timeout period, it kills the DTM process and terminates the session.

What are the new features of Informatica 9.x in developer level?

From a developer's perspective, some of the new features are:
  • Now you can write SQL override on un-cached lookup also. Previously you could do it only on cached lookup
  • You can control the size of your session log. In a real-time environment you can control the session log file size or time
  • Database deadlock resilience feature - this will ensure that your session does not immediately fail if it encounters any database deadlock, it will now retry the operation again. You can configure number of retry attempts.

How to delete duplicate rows using Informatica

Scenario 1: Duplicate rows are present in relational database

Suppose we have Duplicate records in Source System and we want to load only the unique records in the Target System eliminating the duplicate rows. What will be the approach?
Assuming that the source system is a relational database, we can eliminate duplicate records by checking the Distinct option in the Source Qualifier of the source table and loading the target accordingly.

Informatica Interview Questions & Answers:

  What are the differences between Connected and Unconnected lookup?

  • Connected lookup participates in the mapping (dataflow) just like any other transformation, whereas unconnected lookup is used when a lookup function is called from an expression in another transformation; in that case the lookup does not appear in the main dataflow of the mapping. Connected lookup can return more than one value (output port), while unconnected lookup gives only one output port. Unconnected lookups are reusable.

  • Connected transformation is connected to other transformations or directly to target table in the mapping. An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation, and returns a value to that transformation.

  When do we use dynamic cache and static cache in connected and unconnected lookup transformations?

  • A dynamic cache is used when the lookup (target) table itself is updated during the session, e.g. maintaining a master table or an SCD (Slowly Changing Dimension) Type 1.
  • A static cache is used when the lookup data does not change during the session run, e.g. a flat-file lookup.

  What is the tracing level?

  • Tracing level controls the amount of detail written to the session log.
  • The tracing levels are Terse, Normal, Verbose Initialization and Verbose Data.
  • Normal: logs the session run in a detailed manner.
  • Verbose Data: gives a detailed explanation for each and every row.

  Which transformations support sorted input?

  • The Aggregator, Joiner and Lookup transformations support sorted input, which increases session performance.

  How many sessions can you create in a batch?

  • Any number of sessions. But the best practice is to keep the number of tasks small, which helps especially during migration.

  Name 4 output files that the Informatica server creates during a session run:

  • Session log
  • Workflow log
  • Errors log
  • Bad file (reject file)

  What is the difference between STOP and ABORT?

  • The STOP command stops the reading process immediately and does not have any timeout period.
  • The ABORT command gives a timeout period of 60 seconds to the Informatica server to finish the DTM process; otherwise it kills the DTM process.

  What is Update Override? What are the differences between SQL Override and Update Override?

  • Update Override is an option available in the TARGET instance. By default, the target table is updated based on primary key values. To update the target table on non-primary-key values, we can generate the default query and override it according to the requirement. For example, if we want to update a record in the target table only when a column value = 'AAA', we can include that condition in the WHERE clause of the default query.
  • SQL Override, on the other hand, is an option available in the Source Qualifier and Lookup transformations, where we can include joins, filters, Group By and Order By.