DronaBlog

Wednesday, November 28, 2018

About Us




TechnoGuru is a platform to share knowledge through articles, quizzes, and YouTube videos on Informatica Master Data Management, Informatica Data Director, ActiveVOS, Informatica Services Integration Framework, Informatica User Exits, Provisioning Tool, Unix, Oracle, Microservices, WebLogic, and Java. The TechnoGuru team has extensive working experience with and knowledge of these technologies.

The main focus of the website www.dronatechnoworld.com is to share knowledge with everyone and provide the latest updates in Technology.

You can reach us by email at guru4technoworld@gmail.com or by phone at +1 (914) 340 4011. You can donate to us here.



Monday, November 19, 2018

What are Zombie, Orphan and Daemon processes in Unix system?

Are you looking for information about process management in the Unix system? Do you already know about foreground and background processes and would like to learn more about zombie, orphan, and daemon processes? If yes, then refer to this article to understand more interesting things about process management in Unix. If you have not gone through the previous article about Process Management in Unix System, I would recommend going through it first.

What are Zombie processes?

Zombie processes are processes that have been killed or have completed execution but still have an entry in the process table.

Important points-
  • A zombie process is shown with a Z state in the process listing.
  • A zombie process is also known as a defunct process.
  • Such processes are dead and are not used for any business or system processing.
  • Orphan processes and zombie processes are different.
  • A child process always first becomes a zombie before being removed from the process table.
  • To remove zombies, we can use the 'kill' command to send the SIGCHLD signal to the parent manually, as shown in the example below.
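
For example, on a Linux system the commands below (a minimal sketch; the PID values are only illustrative) list processes in the Z state and then send the SIGCHLD signal to the parent so that it can reap its zombie child.

$ ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'
 4201  3988 Z    abc.sh
$ kill -s SIGCHLD 3988
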
Reasons for the zombie processes-
  1. An issue in the parent program: zombies that exist for a long time usually indicate an issue in the parent program, which is not reaping its children.
  2. An issue in how children are handled: an uncommon, deliberate decision by the program not to reap its children.
  3. An operating system issue: a bug in the operating system may leave zombie entries behind when the parent program is no longer running.

Impact of the zombie process-
  1. No out-of-memory issue, as a zombie process does not consume memory.
  2. However, the process table can run out of entries if too many zombies accumulate.

What are Orphan processes?

A child process that keeps running even after its parent process has completed or terminated is called an orphan process.

Important points

  1. An orphan process is usually created unknowingly and unintentionally, for example when the parent process crashes.
  2. Normally, the Unix system tries to prevent orphan processes with the help of the user's shell: when the session ends, it tries to terminate the child processes with the SIGHUP signal.
  3. Re-parenting, i.e. assigning a new parent to the child process, happens automatically in a Unix environment. However, this is internal to the Unix system.
  4. Normally, the special init system process becomes the new parent of such an orphan process. The process is still considered an orphan because the original parent that created it no longer exists.
  5. In some cases, an orphan process is created intentionally, to detach a long-running task from the current user's session and let it run in the background, as shown in the example below. Such orphan processes are sometimes also called daemon processes.
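
A minimal sketch of creating such an intentional orphan is shown below, assuming a hypothetical long-running script called long_job.sh. The 'nohup' command makes the job ignore the SIGHUP signal, so when the shell (its parent) exits, the job keeps running and is re-parented.

$ nohup ./long_job.sh &
$ exit
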

What is a daemon process?

System-related processes that run in the background, often with root permissions, are called daemon processes. In many cases, these daemon processes service requests from other processes.
  1. A daemon has no controlling terminal.
  2. A daemon process cannot open /dev/tty.
  3. e.g. if we execute the command "ps -ef", all daemon processes will have ? in the TTY field of the output (see the example after this list).
  4. If we have a program that runs for a long time, it is a good idea to make it a daemon and run it in the background.
  5. Normally, such processes start when the system is bootstrapped.
  6. Daemon processes terminate only when the system is shut down.
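
Based on point 3 above, a quick way to list daemon processes is to filter the "ps -ef" output on the TTY field; the sshd line below is only illustrative sample output.

$ ps -ef | awk '$6 == "?"'
root       612     1  0 09:15 ?        00:00:02 /usr/sbin/sshd
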

A real-world example of a daemon process is the printer daemon waiting for print commands.



Wednesday, November 14, 2018

Process Management in Unix System

Are you looking for an article on how process management happens in Unix? Would you like to learn about foreground and background processes in Unix? Would you also like to know about the commands related to process management? If yes, then you have reached the right place. This article provides detailed information about process management in the Unix system.

Overview

A new process is created and started whenever we issue a command in the Unix system. In order to execute a program or a command, a special environment is created for it. e.g. if we execute the command 'grep' or 'ls', a process is started internally.

The process ID (aka PID) is a numeric ID (traditionally up to five digits) used by the Unix operating system to track each process. It is unique for each process in a given Unix environment at any point in time.
PID values are eventually reused once all the possible numbers have been used up. However, no two processes running at the same time have the same PID, because the PID is what the system uses to track each process.

Types of processes

There are two types of processes-
  1. Foreground processes
  2. Background processes

1. Foreground Processes

The process which takes input from the keyboard and sends its output to the screen is a foreground process. 

While a foreground process is running, we cannot execute any other command or start any other process from that terminal, as the prompt is not available until the existing process finishes.
e.g. when we execute the 'ls' command, the output is returned to the screen; this is an example of a foreground process.

2. Background Processes

A process which runs without being connected to your keyboard is called a background process.

A background process goes into wait mode if it requires keyboard input. We can execute other commands while the existing process is running. In order to run a command in the background, add an ampersand (&) at the end of the command.
e.g. when we execute the command 'ls * &', it runs as a background process.
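
As an illustration, a long-running command can be started in the background and checked with the 'jobs' command (a minimal sketch; the job and process numbers are only illustrative):

$ sleep 300 &
[1] 18423
$ jobs
[1]+  Running                 sleep 300 &
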

Commands

a) Use the command below to list the currently running processes
$ps

The result of this command will be
PID       TTY      TIME        CMD
18008     ttyp3    00:00:00    abc.sh
18311     ttyp3    00:03:31    test
19789     ttyp3    00:00:00    ps

b) In some cases, the -f (i.e. full) option is used to get more information
$ps -f

The result of this command will be
UID      PID  PPID C STIME    TTY   TIME CMD
abcuid   1138 3062 0 01:23:03 pts/6 0:00 abc
abcuid   2239 3602 0 02:22:54 pts/6 0:00 pqer
abcuid   3362 3157 0 03:10:53 pts/6 0:00 xyz

Here,
UID: It is the user ID that the process belongs to
PID: Process ID
PPID: Parent process ID
C: CPU utilization of process
STIME: Process start time
TTY: Terminal type
TIME: CPU time taken by the process
CMD: The command that started this process

Child and Parent Processes

  • Two ID numbers are assigned to each process.
  • These two ID numbers are the process ID (PID) and the parent process ID (PPID).
  • In the Unix system, each user process has a parent process, and hence a PPID is assigned to each user process.
  • The shell acts as the parent whenever we execute a command.
  • In the output of the command 'ps -f', we can see that both the process ID and the parent process ID are listed, as in the example below.
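
A quick way to see this relationship (a minimal sketch; the PID values are only illustrative): the shell's own PID appears as the PPID of the command it starts.

$ echo $$
3062
$ sleep 100 &
[1] 3185
$ ps -f -p 3185
UID      PID  PPID C STIME    TTY   TIME CMD
abcuid   3185 3062 0 04:10:21 pts/6 0:00 sleep 100
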
The video below provides more information about processes in the Unix system.




Tuesday, November 6, 2018

Elasticsearch in Informatica MDM


Are you looking for information about Elasticsearch? What is the purpose of using Elasticsearch in Informatica MDM? If yes, then this article provides you with detailed information about it. This article also gives a brief history of Elasticsearch.

What is Elasticsearch?

Elasticsearch is an open source, distributed, multitenant-capable full-text search engine developed in Java. The company behind it was founded in 2012 to provide a scalable search solution. It comes as part of the ELK stack, i.e. Elasticsearch, Logstash, and Kibana. These three products together provide a complete search and analytics solution. Elasticsearch is a search engine based on Lucene. Logstash collects and processes the actual logs (information/data) and sends them to Elasticsearch. Kibana is a user interface where the data is shown in analytical forms such as graphs.

Informatica MDM and Elasticsearch

Elasticsearch is integrated with Informatica MDM from MDM version 10.3 onward for better search functionality in the Customer 360 application. Once Elasticsearch is configured in MDM, the search functionality can be viewed in the Customer 360 application as below -


Elasticsearch and Solr Search in Informatica MDM

We can use either Solr or Elasticsearch with Informatica MDM. Both search engines are based on the Lucene library. However, Elasticsearch performs better. Search with Solr is deprecated and has been replaced by search with Elasticsearch.

  1. With Elasticsearch, we can use the asterisk wildcard character (*) to perform a search.
  2. The query parser of Elasticsearch provides the flexibility to use various types of characters in the search strings.
  3. Solr search does not provide this flexibility.
  4. With Elasticsearch, we can use operators such as AND and OR to search for records, as illustrated below.
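
As an illustration of the wildcard and operator support, the query below runs a search against a standalone Elasticsearch instance on the default port 9200 (a minimal sketch; the index name customer_index and the field names are hypothetical and not part of the MDM configuration):

$ curl 'http://localhost:9200/customer_index/_search?q=first_name:Jo*+AND+city:Boston'
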

How to install Elasticsearch?

The Elasticsearch package comes with Informatica MDM 10.3. The installation instructions are simple and provided in the installation guide. You can install Elasticsearch on any machine where the MDM Hub components are installed, or on a separate machine. However, if you would like to install it as a standalone instance, you can download Elasticsearch here.

You can refer to the video below to configure Elasticsearch with Informatica MDM.

Tuesday, September 25, 2018

How to monitor which users are logged in to the Informatica Data Director application?



Are you looking for an article on how to monitor IDD users? Are you also looking for information about what changes need to be made in order to achieve this? If yes, then refer to this article. It explains why user monitoring is needed and how to configure it.

Introduction

The Informatica Data Director (IDD) is a business-critical application used by various business users. For security reasons, it is always a good idea to monitor the users of the application. In lower environments such as development or QA, it becomes tedious to track who made a change, so having monitoring control over the login mechanism helps avoid such incidents. This article helps you configure the IDD application to monitor the users who use it.

Configuration file

We need to use the log4j.xml file to log the users who use the IDD application. We can use an existing log file or create a new log file.

File Location

We need to update the log4j.xml file at the location below
<install directory>\hub\server\conf

Code Changes

Add the code below after the console appender code in the log4j.xml file

<!-- File appender for Login Tracker-->
    <appender name="loginAppender" class="org.apache.log4j.RollingFileAppender">
        <param name="File" value="/hub/server/logs/LoginTracker.log"/>
        <param name="MaxBackupIndex" value="5"/>
        <param name="MaxFileSize" value="500MB"/>
        <param name="Threshold" value="DEBUG"/>

        <layout class="org.apache.log4j.PatternLayout">
            <!-- The default pattern: Date Priority [Category] Thread Message -->
            <param name="ConversionPattern" value="[%d{ISO8601}] [%t] [%-5p] %c: %m%n"/>
        </layout>
    </appender>

    <!-- Added the following category to invoke the appender for Login Tracker -->
    <category name="com.siperian.dsapp.common.util.LoginLogger">
        <priority value="INFO"/>
        <appender-ref ref="loginAppender"/>
    </category>

    <!-- Added the following category to invoke the appender for Login Tracker of MDM -->
    <category name="com.siperian.sam.authn.jaas.JndiLoginModule">
        <priority value="INFO"/>
        <appender-ref ref="loginAppender"/>
    </category>

Server Restart

Normally, an application server restart is not required. However, if the log file is not generated after the above code changes, restart the application server.



How to analyze the log file

When a user logs in or logs out, this information is stored in the log file. The log file entries will look like the ones below:

[2018-09-25 15:03:31,774] [http-/0.0.0.0:8080-5] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <admin> logged into IDD
[2018-09-25 15:04:14,255] [http-/0.0.0.0:8080-2] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <admin> has been logged out of the IDD"
[2018-09-25 15:04:14,329] [http-/0.0.0.0:8080-2] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <testuser> logged into IDD
[2018-09-25 15:05:16,295] [http-/0.0.0.0:8080-5] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <testuser> has been logged out of the IDD"
[2018-09-25 15:05:16,295] [http-/0.0.0.0:8080-5] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <admin> has been logged out of the IDD"
[2018-09-25 15:05:23,309] [http-/0.0.0.0:8080-6] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <jamesmanager> logged into IDD
[2018-09-25 15:06:32,365] [http-/0.0.0.0:8080-7] [INFO ] com.siperian.dsapp.common.util.LoginLogger: User <jamesmanager> has been logged out of the IDD"
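
To quickly summarize logins per user from this file, a command like the one below can be used (a minimal sketch, assuming the log file path configured in the appender above; the output corresponds to the sample entries):

$ grep "logged into IDD" /hub/server/logs/LoginTracker.log | awk -F'[<>]' '{print $2}' | sort | uniq -c
      1 admin
      1 jamesmanager
      1 testuser
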





The video below provides additional information about how to monitor the users who are logged in to the IDD application.


Sunday, September 23, 2018

Informatica Master Data Management - MDM - Quiz - 10

Q1. Match rule sets include which of the following?

A. A search level that dictates the search strategy
B. Any number of automatic and manual match column rules
C. A filter to selectively include or exclude records from the match batch
D. A match path for inter-table/intra-table matching

Q2. Which correctly describes master data?

A. Customer name
B. Customer address
C. Customer purchases
D. Customer preferences

Q3. A role is a set of privileges to access secure Informatica MDM Hub resources

A. True
B. False

Q4. Which statement best describes what the build match group (BMG) process does?

A. Allows only a single "null to non-null" match into any group
B. Removes redundant matching
C. Executes in advance of the consolidate process
D. All statements are correct

Q5. What happens to records in the stage process that have structural integrity issues?

A. They are written to reject tables.
B. They are placed in the manager merge process
C. They are written to the raw table
D. They are added to the enhanced filtering process for resolution.

Previous Quiz             Next Quiz

Friday, September 21, 2018

Informatica Master Data Management - MDM - Quiz - 9



Q1. Which of these features are supported in Metadata Manager?


A. The renaming of certain design objects.
B. Promoting record states.
C. Running a simulation of applying a change list.
D. Validate repository.

Q2. After configuration of the Hub Store, which batch jobs are created automatically?


A. External Match jobs
B. Revalidate Jobs
C. Promote jobs
D. Synchronize jobs

Q3. When grandchildren are displayed in a table view, all grandchildren are displayed, not just those related to the selected child.


A. True
B. False

Q4. What does the trust framework do dynamically?


A. Defines whether two records will match
B. Maintains cell-level survivorship of only the best attributes
C. Calculates a data quality score to be used on a data score card.
D. Standardizes data into its most trustworthy form.

Q5. Which of the following is NOT an advantage of the MDM hub?


A. Can run in any database and version.
B. Flexibility to use any data model that is appropriate for a given customer.
C. A consistent design and architecture built on a single code base.
D. The ability to handle any data domain.



Previous Quiz             Next Quiz
