Thursday, 19 June 2014

How to delete duplicate Integration Services

1. Back up the following tables, then run the DELETE statements below to clear the log tables.

DELETE FROM OPB_SESS_TASK_LOG;
DELETE FROM OPB_SWIDGINST_LOG;
DELETE FROM OPB_DTL_SWIDG_LOG;
DELETE FROM OPB_TASK_INST_RUN;
DELETE FROM OPB_WFLOW_VAR_RUN;
DELETE FROM OPB_WFLOW_DEP_RUN;
DELETE FROM OPB_WFLOW_CACHE;
DELETE FROM OPB_WFLOW_RUN;
COMMIT;

2. Run the query below to list the Integration Services registered in the repository.

SELECT * FROM OPB_SERVER_INFO;

3. Delete the duplicate Integration Service row with the lower server id, as it is the outdated entry.

                        (ex: 2 - Integration_ABC, 5 - Integration_ABC, perform the delete on 2)
DELETE FROM OPB_SERVER_INFO
WHERE server_id = 2; -- the lower server id, as it is the outdated entry
COMMIT;

Monday, 1 October 2012

PC86 - Purge older versions of objects in a versioned PowerCenter Repository


pmrep connect -r Repo_Name -d Domain_Name -n Administrator -x Pwd > test.out


pmrep purgeversion -n <last_n_versions_to_keep> -f <folder_name> > test1.out

last_n_versions_to_keep --- the number of latest checked-in object versions to keep for each active object in the specified folder (note: the -f folder option is optional).

The value must be an integer greater than 0.

For example, enter 5 to purge all versions except the last five checked-in versions.

PC86 - Informatica Services Not able to Start



The Informatica Repository and Integration Services fail to start, and you can see the error codes below in the admin console and server logs.

ERROR CODES: [CNX_53021], [DOM_10022], [LGS_10048], [DOM_10055], [SPC_10050], [SPC_10013], [SPC_10008] and [InfPrivilegeSynchronizationEngine]

Logs:

Admin Console:

ERROR Mon Sep 24 00:00:33 2012 1833290048 CNX_53021 Received an invalid request.

Service Manager ERROR Mon Sep 24 04:07:21 2012 Thread 2 of 6 in DomainServiceThreadPool DOM_10055 Unable to start service process [INFA8_DEV_REPO] on node [n1_ServerName].


Service Manager ERROR Mon Sep 24 04:07:21 2012 Thread 2 of 6 in DomainServiceThreadPool SPC_10050 The domain failed to restart the service [INFA8_DEV_REPO] after trying for the maximum [3] restart attempts.

Service Manager ERROR Mon Sep 24 04:07:21 2012 Thread 2 of 6 in DomainServiceThreadPool SPC_10013 Process for service INFA8_DEV_REPO failed to start.

Service Manager WARNING Mon Sep 24 04:07:21 2012 Domain Monitor SPC_10012 Process for service [INFA8_DEV_REPO] terminated unexpectedly.

Service Manager ERROR Mon Sep 24 04:07:21 2012 Domain Monitor SPC_10008 Service Process [INFA8_DEV_REPO] output error [/infa/informatica/pc86/server/bin/pmrepagent: error while loading shared libraries: libpmrcore.so: cannot open shared object file: No such file or directory

Service Manager ERROR Mon Sep 24 04:07:17 2012 Thread 2 of 6 in DomainServiceThreadPool DOM_10055 Unable to start service process [INFA1_DEV_REPO] on node [n1_ServerName].

Service Manager ERROR Mon Sep 24 04:07:17 2012 Thread 2 of 6 in DomainServiceThreadPool SPC_10050 The domain failed to restart the service [INFA1_DEV_REPO] after trying for the maximum [3] restart attempts.

Service Manager ERROR Mon Sep 24 04:07:17 2012 Thread 2 of 6 in DomainServiceThreadPool SPC_10013 Process for service INFA1_DEV_REPO failed to start.

Service Manager WARNING Mon Sep 24 04:07:16 2012 Domain Monitor SPC_10012 Process for service [INFA1_DEV_REPO] terminated unexpectedly.

Service Manager ERROR Mon Sep 24 04:07:16 2012 Domain Monitor SPC_10008 Service Process [INFA1_DEV_REPO] output error [/infa/informatica/pc86/server/bin/pmrepagent: error while loading shared libraries: libpmrcore.so: cannot open shared object file: No such file or directory

Service Manager ERROR Mon Sep 24 04:07:06 2012 Thread 3 of 6 in DomainServiceThreadPool SPC_10013 Process for service INFA8_DEV_REPO failed to start.

Service Manager WARNING Mon Sep 24 04:07:06 2012 Domain Monitor SPC_10012 Process for service [INFA8_DEV_REPO] terminated unexpectedly.

Service Manager ERROR Mon Sep 24 04:07:06 2012 Domain Monitor SPC_10008 Service Process [INFA8_DEV_REPO] output error [/infa/informatica/pc86/server/bin/pmrepagent: error while loading shared libraries: libpmrcore.so: cannot open shared object file: No such file or directory

Service Manager ERROR Mon Sep 24 04:07:06 2012 Thread 3 of 6 in DomainServiceThreadPool DOM_10055 Unable to start service process [INFA8_DEV_REPO] on node [n1_ServerName].

Service Manager ERROR Mon Sep 24 04:07:02 2012 Thread 3 of 6 in DomainServiceThreadPool DOM_10055 Unable to start service process [INFA1_DEV_REPO] on node [n1_ServerName].

Service Manager ERROR Mon Sep 24 04:07:02 2012 Thread 3 of 6 in DomainServiceThreadPool SPC_10013 Process for service INFA1_DEV_REPO failed to start.

Service Manager WARNING Mon Sep 24 04:07:01 2012 Domain Monitor SPC_10012 Process for service [INFA1_DEV_REPO] terminated unexpectedly.

Service Manager ERROR Mon Sep 24 04:07:01 2012 Domain Monitor SPC_10008 Service Process [INFA1_DEV_REPO] output error [/infa/informatica/pc86/server/bin/pmrepagent: error while loading shared libraries: libpmrcore.so: cannot open shared object file: No such file or directory

Catalina Log :

2012-09-24 02:47:06,619 INFO [InfPrivilegeSynchronizationEngine] Returning 55 supported privileges.
2012-09-24 02:47:16,919 ERROR [ConfigFacadeImpl] PCSF Error in listAvailableNamespaces.
com.informatica.pcsf.common.exception.PCSFTimeoutException: [DOM_10022] Election is in progress, cannot complete request.

Node Log:

2012-09-24 02:47:25,398 ERROR [Monitor Guaranteed Message File Thread] [LGS_10048] The following error occurred while logging to Log Service: [[DOM_10022] Election is in progress, cannot complete request.].\n
com.informatica.pcsf.common.exception.PCSFTimeoutException: [DOM_10022] Election is in progress, cannot complete request.

exceptions.log

2012-09-24 02:47:25,397 ERROR [Monitor Guaranteed Message File Thread] [LGS_10048] The following error occurred while logging to Log Service: [[DOM_10022] Election is in progress, cannot complete request.].\n
com.informatica.pcsf.common.exception.PCSFTimeoutException: [DOM_10022] Election is in progress, cannot complete request.

ISSUE: The services fail to start because the libpmrcore.so library file is missing.

SOLUTION:
  • Copy the libpmrcore.so file from another server into the same installation path (/server/bin/) on the current server
  • Start infaservices from the tomcat bin path
  • Verify that the Repository and Integration Services start



Part 4.5 Physical Data Modeling


Physical Database Design

Physical database design is the process of developing a set of required data structures on a selected database. It involves the following tasks:
  • Converting entities into tables
  • Converting relationships into foreign keys
  • Converting attributes into columns
  • Defining constraints
 The purpose of physical design is to achieve the best possible performance on the target DBMS.







 Along with the logical data model, the database designer requires the following to make sound design decisions:

  • The Process Model, detailing input processes (creation and updating of rows in tables) and output requirements (retrieval of data from the database)
  • The mapping that shows the processes that access each entity class and how (create, update, retrieve)
  • Nonstructural data requirements
  • Performance requirements
  • The target DBMS
  • Disk space requirement
  • Availability of skilled programming resources








 

Part 4.4 Logical Data Modeling



Introduction to Logical Model

The Logical Data Model refines the conceptual model into a normalized design, independent of any particular database. It represents the normalized design of the common data model required to support the design of an information system.
At the core of the logical data model are the three types of data objects that are the building blocks of the model: entities, attributes, and relationships. Entities refer to persons, places, events, or things that are of particular interest to the company.

Some examples of entities are Employees, States, Orders, and Time Sheets. Attributes refer to the properties of the entities. Examples of attributes for the Employee entity are first name, birthday, gender, address, and age. Lastly, relationships describe how entities relate to each other. Example relationships are "customers purchase products" and "students enroll in classes".

If the database is ported to another DBMS supporting a similar structure, the logical data model can still be used as a baseline for the new physical data model.


Characteristics of a Logical Model

  • The logical model is developed in an iterative manner.
  • Its design is independent of any particular DBMS.
  • It includes all entities and the relationships among them.
  • All attributes for each entity are specified.
  • The primary key for each entity is specified.
  • Foreign keys (keys identifying the relationships between different entities) are specified.










Part 4.3 Dimensional Data Model



4.3 Dimensional Data Modeling


4.3.1      Steps in Dimensional Modeling
4.3.2      Star Schema & Snowflake Schema
4.3.3      The Implementation Approach (Bill & Ralph)
4.3.4      Dimensional Modeling Fundamentals
4.3.4.1   Dimensional Modeling Overview
4.3.4.2   Dimensional Modeling Steps
4.3.5     Conformed Dimensions
4.3.6     Slowly Changing Dimension
4.3.7     Additivity of Measures
4.3.8     Dimensional Modeling framework


4.3.1 Steps in Dimensional Modeling





4.3.2 Star Schema & Snowflake Schema






4.3.3 The Implementation Approach (Bill & Ralph)


  4.3.4 Dimensional Modeling Fundamentals


4.3.4.1 Dimensional Modeling Overview


Dimensional Modeling


  • A logical design technique that seeks to present the data in a standard framework that is intuitive and allows high-performance access
  • It is inherently dimensional and adheres to a discipline that uses the relational model with some important restrictions
  • The fundamental idea of dimensional modeling is that nearly every type of business data can be represented as a kind of cube of data, where the cells of the cube contain measured values and the edges of the cube define the natural dimensions of the data
  • Every dimensional model is composed of one table with a multipart key, called the fact table, and a set of smaller tables called dimension tables
  • Each dimension table has a single-part primary key that corresponds exactly to one of the components of the multipart key in the fact table
  • This characteristic star-like structure is often called a star join
  • The fact table contains the facts, or measurements, of the business
  • The dimension tables contain textual attributes that describe the facts
  • The attributes in the dimension tables are used for constraining and grouping data within data warehouse queries

Dimension Tables


  • Dimension Tables are the entry points into the data warehouse
  • Dimension tables are designed especially for selection and grouping under a common head
  • Determine contextual background for facts
  • Parameters for OLAP
  • Common dimensions include:
          Date
          Product
          Location/Region
          Customers
 Fact Table


The fact table is where the numerical measurements (measures) of the business are stored

Facts

  • The detail information in a fact table. For example: Sales Quantity, Unit Sales Price, Sales Amount, etc.
  • Key performance indicators of the business
  • Numeric in Nature
  • Analyzed across the dimensions
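The fact/dimension structure above can be sketched as a minimal star schema. All table and column names here are illustrative, not taken from any specific warehouse:

```python
import sqlite3

# Minimal star schema: one fact table whose multipart key is composed of
# foreign keys into two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product_name TEXT);
CREATE TABLE fact_sales (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    sales_qty  INTEGER,
    sales_amt  REAL,
    PRIMARY KEY (date_id, product_id)          -- the multipart key
);
INSERT INTO dim_date    VALUES (1, 2012, 9), (2, 2012, 10);
INSERT INTO dim_product VALUES (10, 'Book'), (20, 'Pen');
INSERT INTO fact_sales  VALUES (1, 10, 5, 50.0), (1, 20, 3, 6.0), (2, 10, 2, 20.0);
""")

# Dimension attributes constrain and group the facts (the "star join").
rows = conn.execute("""
    SELECT d.year, d.month, p.product_name, SUM(f.sales_amt)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_id = d.date_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY d.year, d.month, p.product_name
    ORDER BY d.year, d.month, p.product_name
""").fetchall()
print(rows)
```

The query reads naturally as "sum the measure, grouped and constrained by dimension attributes", which is exactly the access pattern the star structure is designed for.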

4.3.4.2 Dimensional Modeling Steps


Dimensional Modeling Steps


  1. Choose the business processes that you want to use to analyze the subject area to be modeled.
  2. Determine the Granularity of the fact tables.
  3. Identify Dimensions and Hierarchies for each fact table.
  4. Identify Measures for the fact tables.
  5. Determine the Attributes for each dimension table.
  6. Get users to verify the data model.

4.3.5 Conformed Dimensions








4.3.6 Slowly Changing Dimension








4.3.7 Additivity of Measures



Additivity is the ability of a measure to be summed across the dimensions of the fact table. Measures can be fully additive, semi-additive, or non-additive.


Fully Additive - Can be summed across all dimensions - no issues.

Semi Additive - Cannot be summed across one or more dimensions (usually time). Queries must constrain the non-additive dimension to a single value.

Non-Additive - Cannot be meaningfully summed at all (e.g. ratios). Store the individual components and calculate the ratio of sums rather than the sum of ratios.
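The three cases can be sketched with a few lines of code. The account-balance and margin figures below are made up purely for illustration:

```python
# Semi-additive example: an account balance sums across accounts,
# but not across time. (account, month) -> balance
balances = {("A", 1): 100, ("A", 2): 120, ("B", 1): 50, ("B", 2): 70}

# Adding A's balances over months (100 + 120) is meaningless; instead,
# constrain time to a single month and sum across accounts.
total_month_2 = sum(v for (acct, m), v in balances.items() if m == 2)

# Non-additive example: margin = profit / revenue. Store the components
# and compute the ratio of sums, not the sum of ratios.
rows = [(20, 200), (30, 100)]  # (profit, revenue) per row
ratio_of_sums = sum(p for p, r in rows) / sum(r for p, r in rows)  # 50 / 300
sum_of_ratios = sum(p / r for p, r in rows)                        # 0.1 + 0.3

print(total_month_2, ratio_of_sums, sum_of_ratios)
```

Note how the two margin numbers differ: summing pre-computed ratios gives a different (wrong) answer from computing the ratio over summed components, which is why non-additive measures are stored as components.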



4.3.8 Dimensional Modeling framework









 
  

Friday, 28 September 2012

Part 4.2 Relational Data Modeling



4.2 Relational Data Modeling

     4.2.1     Relational Data Model Concepts
     4.2.2     ER to Relational Mapping Algorithm
     4.2.3     Relational Database Design
     4.2.3.1 Relational Database Design Approach
     4.2.3.2 Normalization
    
4.2.1 Relational Data Model Concepts

Relational Data Model

  • A relational data model is a representational (or implementation) data model, i.e., a record-based data model
  • The model uses the concept of a mathematical relation - which looks somewhat like a table of values
  • The model has its theoretical basis in set theory and first order predicate logic

 4.2.2 ER to Relational Mapping Algorithm




 
Logical Design


Based upon the conceptual data model

Four key steps

1. Develop a logical data model for each known user interface for the application using normalization principles.

2. Combine normalized data requirements from all user interfaces into one consolidated logical database model

3. Translate the conceptual E-R data model for the application into normalized data requirements

4. Compare the consolidated logical database design with the translated E-R model and produce one final logical database model for the application



Step 1 -- (Map strong entities)

Step 2 -- (Map weak entities)
Step 3 -- (Map binary 1:1 relationships)
Step 4 -- (Map binary 1:N relationships)
Step 5 -- (Map binary M:N relationships)
Step 6 -- (Map multivalued attributes)
Step 7 -- (Map n-ary relationships into new sub tables)
Step 8 -- (Map specialization/generalization)


4.2.3 Relational Database Design

4.2.3.1: Relational Database Design Approaches

Database Design Approaches


Bottom-up (Design by synthesis)

  • The basic relationship among individual attributes is used as starting point to build up relations
  • Not very popular in practice
  • Suffers from the problem of collecting a large number of binary attribute relationships as the starting point
Top-down (Design by Analysis)

  • Starts with a number of groupings of attributes into relations that have already been obtained from conceptual design and mapping activities
  • Design by Analysis is then applied to the relations individually and collectively, leading to further decomposition until all desirable properties are met

4.2.3.2 Normalization

Normalization

  • Normalization of data can be looked upon as a process of analyzing the given relational schemas based on their functional dependencies and primary keys to
          - minimize redundancy
          - minimize insertion, deletion, and modification anomalies
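The anomalies that normalization avoids can be seen concretely in a small unnormalized table (the data is made up for illustration):

```python
# Unnormalized: dept_loc is repeated for every employee in the department,
# because the functional dependency dept -> dept_loc is embedded in the table.
rows = [
    {"emp": "ann", "dept": "sales", "dept_loc": "nyc"},
    {"emp": "bob", "dept": "sales", "dept_loc": "nyc"},
]

# Modification anomaly: moving 'sales' to 'sfo' must touch every row;
# updating only one row leaves the data inconsistent.
rows[0]["dept_loc"] = "sfo"
locs = {r["dept_loc"] for r in rows if r["dept"] == "sales"}
print(sorted(locs))  # the department now appears to have two locations
```

Decomposing on the dependency (a separate dept table holding dept_loc) makes such a partial update impossible, which is the point of the normalization process.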

Normal Form

  • The normal form of a relation refers to the highest normal form condition that it meets
  • Normal forms, when considered in isolation from other factors, do not guarantee a good design. Normalization through decomposition must also ensure the following two properties for a good database design:
          -  The lossless join or nonadditive join property (Mandatory)

          -  The dependency preservation property (Desirable)
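The lossless-join property can be checked directly: decompose a relation, natural-join the pieces, and confirm that exactly the original rows come back. The relation and attribute names below are illustrative:

```python
# Relation R(emp, dept, dept_loc) with the functional dependency dept -> dept_loc.
R = {("ann", "sales", "nyc"), ("bob", "sales", "nyc"), ("cat", "hr", "sfo")}

# Decompose along the FD: R1(emp, dept) and R2(dept, dept_loc).
R1 = {(emp, dept) for emp, dept, _ in R}
R2 = {(dept, loc) for _, dept, loc in R}

# Natural join of R1 and R2 on the common attribute dept.
joined = {(emp, dept, loc)
          for emp, dept in R1
          for d2, loc in R2 if dept == d2}

print(joined == R)  # lossless: no tuples lost, no spurious tuples introduced
```

Because the join attribute (dept) is a key of R2, the decomposition is guaranteed lossless; decomposing on a non-key overlap would generally produce spurious tuples.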


 

Tuesday, 25 September 2012

Part 4.1 Conceptual Data Modeling

4. Data Modeling

4.1 Conceptual Data Modeling
      4.1.1 Introduction to Conceptual Data Model
      4.1.2 Stages in Conceptual Modeling
      4.1.3 Components of a Conceptual Data Model
      4.1.4 ER Modeling Basic Concepts
      4.1.5 Enhanced ER Modeling
      4.1.6 Guidelines for ER Modeling
4.1.1 Introduction to Conceptual Data Model
Conceptual Data Model
A conceptual schema or conceptual data model is a map of concepts and their relationships. This describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes), about which it is inclined to collect information, and characteristics of (attributes) and associations between pairs of those things of significance (relationships).
A conceptual data model identifies the highest-level relationships between the different entities.
Features of a conceptual data model include:
  • Includes the important entities and the relationships among them.
  • No attribute is specified.
  • No primary key is specified.
4.1.2 Stages in Conceptual Modeling

Main stages in conceptual modeling are as follows:

--  Identification of requirements (done in previous lesson)
--  Designing of solutions
--  Evaluation of solutions
4.1.3 Components of a Conceptual Data Model
  
4.1.4 ER Modeling Basic Concepts
ER Model

The ER model is a conceptual model
Describes data as entities, relationships and attributes
No standard notation for displaying ER diagrams
(We will choose one among several alternatives for this presentation)
Entity
An entity is a “thing” in the real world with an independent existence
An entity may be an object with a physical existence
For example, person, car, house
An entity may be an object with a conceptual existence
For example, company, job, university-course

 
4.1.5 Enhanced ER Modeling
  
The Enhanced ER (EER) model includes all the modeling concepts of the ER model

In addition, it includes the following concepts:

  • Subclass & super-class
  • Specialization & generalization
  • Category
  • Attribute & relationship inheritance
Subclass & Super-class


  • In many cases an entity type has numerous sub-groupings of its entities that are meaningful and need to be represented explicitly because of their significance to the database application. Each of these sub-groupings is called a subclass.
  • The entity type on which these subclasses are formed is called the super-class
  • The relationship between a super-class and any one of its subclasses is called a super-class/subclass, a class/subclass, or an IS-A (or IS-AN) relationship, e.g., a SECRETARY IS-AN EMPLOYEE
  • A member entity of a subclass represents the same real-world entity as some member of the super-class, but in a distinct, specific role
Specialization (Top to Bottom Approach)


Specialization is the process of defining a set of subclasses of an entity type


In addition, it allows us to do the following:
  •  Establish additional specific attributes for each subclass
  •  Establish additional specific relationship types between each subclass and other entity types or other subclasses
Generalization (Bottom to Top Approach)


Generalization refers to the process of defining a generalized entity type from the given entity types


Generalization process can be viewed as being functionally the inverse of the specialization process   
 


  A specialization lattice with multiple inheritance for a UNIVERSITY database.





Monday, 24 September 2012

Part 3: How to Data Model



3.1: Identify Entity Types

Entity

Object that can be observed and classified by its properties and characteristics

Business definition with a clear boundary

Characterized by a noun


Example: Product, Employee

3.2: Identify Attributes

Attributes


Characteristics and properties of entities

Example :

Book Id, Description, book category are attributes of entity “Book”

Attribute name should be unique and self-explanatory

Primary Key, Foreign Key, Constraints are defined on Attributes

3.3: Identify Relationships

Relationship

Relationship between entities - structural interaction and association

Described by a verb

Cardinality

1-1
1-M
M-M


Example : Books belong to Ambarish


3.4: Apply Naming Conventions

Naming Standards


1. Be correct, that is, both functionally and technically accurate.
2. Be clear, avoiding the use of vague terms such as "handle" or "process."
3. Be concise, using the fewest number of words possible, avoiding articles and needless prepositions.
4. Be unique, avoiding wording similar to that of any other name.
5. Be atomic, representing only a single concept.
6. Contain only letters of the alphabet, numbers, and word separators.
7. Follow the specified format for names
8. Reflect common terminology used throughout Federal Student Aid.
9. Use complete names wherever possible instead of abbreviations or acronyms.
10. Use only approved abbreviations or acronyms when the data modeling tool restricts the length of the name.


3.5: Assign Keys


Keys

Super Key
Primary Key
Candidate Key
Secondary Key

Super Key

A super key is a column or set of columns that uniquely identifies a row within a table.

Given table: EMPLOYEES{employee_id, firstname, surname, sal}

Possible super keys are:

{employee_id}
{employee_id, firstname}
{employee_id, firstname, surname, sal}

Only the minimal super key - {employee_id} - will be considered as a candidate key.
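A column set is a super key exactly when projecting the table onto those columns produces no duplicate rows. A sketch of that check over the EMPLOYEES example (the sample rows are made up):

```python
def is_superkey(rows, cols):
    """True if the projection onto cols uniquely identifies every row."""
    projected = [tuple(row[c] for c in cols) for row in rows]
    return len(set(projected)) == len(projected)

employees = [
    {"employee_id": 1, "firstname": "ann", "surname": "lee", "sal": 100},
    {"employee_id": 2, "firstname": "ann", "surname": "roy", "sal": 200},
]

print(is_superkey(employees, ["employee_id"]))               # unique -> super key
print(is_superkey(employees, ["firstname"]))                 # duplicates -> not a key
print(is_superkey(employees, ["employee_id", "firstname"]))  # super key, but not minimal
```

The last case is a super key yet not a candidate key, because removing firstname still leaves a super key; a candidate key is a super key with no removable column.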










Friday, 21 September 2012

Part 2: Data Model



Advanced Data Modeling and Architecture --  Table of Contents


Part 1: Data Modeling Overview



2.1 : Definition of a Data Model


Model
         Representation of a set of business requirements in a standard structured framework understood by the users
Data Model
 
A collection of concepts that can be used to describe the structure of a database
         Most data models include a set of basic operations for queries and updates on the database
         It is becoming more common to include concepts in the data model that specify the dynamic aspect, or behavior, of the database as defined operations
Data Model Instances
   Conceptual Model
   Logical Model
   Physical Model

2.2 : Data Model Types

Database Model
  A database model is a theory or specification describing how a database is structured and used. Several such models have been suggested. Common models include:
–  Flat model
–  Hierarchical model
–  Network model
–  Relational model
–  Object-relational model
–  Dimensional model













2.3 Data Modeling Stages & Deliverables



2.4: Main Phases of Database Design




2.5: Data Modeling In DWH Environment

























2.6: Data Modeling Approaches





2.7: Data Modeling Life Cycle





2.8: Data Models & Zachman Framework