Category: Unix/Linux

A memory leak is an unintentional form of memory consumption in which the developer fails to free an allocated block of memory when it is no longer needed. The consequences depend on the application itself. Consider the following three general cases:

Case and consequence:
* Short-lived user-land application: Little if any noticeable effect. The operating system reclaims the lost memory after program termination.
* Long-lived user-land application: Potentially dangerous. These applications continue to waste memory over time, eventually consuming all RAM, which leads to abnormal system behavior.
* Kernel-land process: Very dangerous. Memory leaks at the kernel level lead to serious system stability issues; kernel memory is very limited compared to user-land memory and should be handled cautiously.

Memory is allocated but never freed.

Memory leaks have two common and sometimes overlapping causes:

* Error conditions and other exceptional circumstances.
* Confusion over which part of the program is responsible for freeing the memory.

Most memory leaks result in general software reliability problems, but if an attacker can intentionally trigger a leak, he or she might be able to launch a denial-of-service attack (by crashing the program) or take advantage of other unexpected program behavior resulting from a low-memory condition.
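One practical way to spot a leak in a long-lived process is to sample its resident set size over time: steadily growing RSS under constant load suggests memory that is allocated but never freed. The sketch below uses ps for this; the PID and the one-second sample interval are placeholders for illustration.

```shell
#!/bin/sh
# Sample a process's resident set size (RSS, in kilobytes) with ps.
# A steadily growing RSS under constant load suggests a leak.
rss_of() {
    ps -o rss= -p "$1" | tr -d ' '
}

pid=$$              # placeholder: in practice, the suspected leaking process's PID
before=$(rss_of "$pid")
sleep 1             # in practice, samples would be minutes or hours apart
after=$(rss_of "$pid")
echo "RSS: ${before} KB -> ${after} KB"
```

In a real setup the sampling would run from cron and log the values, so a trend can be read over days rather than seconds.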


We have all seen or received them: an unexpected email with an enticing or urgent subject line, maybe even from someone you know. However, don’t rush to open it; think before you act! Many things that arrive via email can put you and your organization at risk.

What should I be on the lookout for?

There are basically three types of risky emails:

  • Mass-mailing viruses – computer viruses that are capable of spreading via email
  • Phishing – a tactic used by identity thieves to trick people into disclosing financial or other sensitive information, primarily by using email messages and counterfeit websites
  • Spam – unsolicited email messages sent to thousands or millions of recipients

Although spam may not sound like a risk by definition, it can be just as troublesome. Besides being unwanted, spam can also be used as a delivery method for malicious software. Open a file or click a link in a spam message and you could be putting yourself at risk.

What should I do to avoid being impacted?

  • Ensure that your PC has the latest security software installed and that it is always up-to-date
  • Ensure that your PC has all of the latest security patches and updates installed
  • Do not open attachments within suspicious or unexpected messages, even from people that you trust. Viruses often look like they have been sent from someone you know.
  • Do not click on links to websites within suspicious or unexpected email messages. If you are ever in doubt about the legitimacy of a message, check with the sender: contact that person and ask whether he or she sent the message before you trust it.
  • Do not publish your email address in public locations if you can avoid it
  • Do not reply to suspicious or unexpected messages
  • Enable junk filters in Outlook

Application monitoring is a very important aspect of a project, but unfortunately not much attention is paid to developing effective monitoring before the project goes live. Once the project is live, the lack of proper monitoring costs downtime: support staff may not be aware that the application is having problems, or that it is not working at all.

Discussion of application monitoring should start early, at least from the time deployment details are being worked out. Some applications may require specific scripts, tools, or authorizations, and an early discussion of monitoring puts the team in a better position to avoid delays in its implementation.

This document gives a basic introduction to the challenges, the types of monitoring, and the best practices that can be followed to ensure high availability of live systems.

Challenges in application monitoring:
Following are some of the challenges faced today in application monitoring:

1. Proactive monitoring: Proactive monitoring means monitoring system and application health and taking corrective action when a certain threshold is reached. The threshold is defined as the level at which the application is not yet showing deterioration but will deteriorate if corrective action is not taken. The biggest challenge is gathering the statistics needed to work out the thresholds and the number of parameters and processes to monitor. Applications that interact directly with customers, for example e-commerce, banking, and other online applications, need to be monitored proactively so that problems are detected before they impact the end customer.

2. Complexity and number of applications: An application may become more complex if it has a global user base: it has to support multiple languages, cultures, and currencies, and it may have multiple instances located in different regions of the world, each using a different time or logging format. To monitor global applications effectively, one has to understand the application instances, their interconnectivity, and the flow between them, coordinate with regional teams, and in most cases depend on those regional teams for monitoring the application.

3. Shared systems: Applications often share a system in order to utilize the full capacity of the hardware, and this brings its own set of challenges. On a single-application system it is easy to track resources such as memory, CPU, disk, and network bandwidth, but in a shared environment one application may take the resources and others may be impacted through no fault of their own. Sometimes the application owners may not be reachable to take corrective action.

4. Clustered systems: To avoid a single point of failure, applications are hosted in clustered environments with a number of machines on different networks and in different locations. From a monitoring perspective this poses another challenge: keeping track of the request and failure logs and the memory, CPU, network, and disk resources, since one has to look at the logs and resources of every machine in the cluster just to isolate which one is performing badly.

5. Limited logging in the production environment: Since transaction volumes are very high and the application code has already been through performance, reliability, and quality assurance cycles, the code in the production environment is generally configured for minimal logging. This can lead to situations where the indicator of a problem does not show up in the logs at all; the error message may not appear until the logging level is increased.

6. Custom logging in production: Logging in an online production environment can, at most, be raised to the highest level the code provides. When a particular problem gives no clue through logging or other debugging methods, specially instrumented code has to be developed and deployed to capture the error-condition events. The instrumented code has to be deployed in the production environment itself, since the problem cannot be replicated under test conditions. Deploying custom code in production calls for application downtime, which may not be acceptable to the application owners and business groups involved, and it also requires considerable effort from the support team to maintain. This custom code may be overwritten by the next release cycle.

Types of monitoring for applications:
Applications are monitored simultaneously at various points to ensure their availability. Monitoring as a whole falls into the following categories:

1. Health monitoring: As a proactive step, application health has to be monitored constantly in order to address any issue before it becomes serious. In a simple arrangement, health monitoring consists of taking a snapshot of system and application parameters and comparing it to standard benchmarks. For example, if a transaction is known to take around one second to complete, we can monitor this response time and set up alerts for when it increases. Automated monitoring of health parameters is the best way of ensuring high availability of an application environment.
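The response-time check described above can be sketched as a small shell script. The one-second threshold follows the example in the text; the command being timed is a placeholder for the real transaction (for instance, a request to the application).

```shell
#!/bin/sh
# Alert when a transaction takes longer than THRESHOLD seconds.
THRESHOLD=1   # assumed benchmark: the transaction normally takes about one second

run_and_time() {
    start=$(date +%s)
    "$@"                       # the transaction under test
    end=$(date +%s)
    elapsed=$((end - start))
}

run_and_time true              # placeholder transaction; substitute the real request here
if [ "$elapsed" -gt "$THRESHOLD" ]; then
    echo "ALERT: transaction took ${elapsed}s (threshold ${THRESHOLD}s)"
else
    echo "OK: transaction took ${elapsed}s"
fi
```

Run from cron at a regular interval, the ALERT branch would send a notification instead of just printing.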

2. Error monitoring: Errors in any application can adversely impact the user experience. An error condition can cause the user experience to fail outright, or can cause unexpected errors such as timeouts or failures to submit or display the requested data. Errors can arise either from software problems, relating to the application code, the web server, the application server, or the database server, or from hardware issues relating to memory, CPU, disk space, or the network.

These types of errors are monitored differently. Application errors are mostly monitored by analyzing the application, web server, and application server logs, understanding the error messages, and using them to find the nature of the problem. For example, an application may stop processing new requests, and from the log files we may find the likely reason: perhaps it cannot process requests due to a resource shortage such as CPU, memory, network bandwidth, or database performance. The monitoring requirements and tools for an application can be designed by studying its documentation, architecture, platform, error messages, and so on.

Hardware monitoring is done using the standard tools and commands available for the particular hardware. Every operating system has tools and commands to monitor memory, CPU, and disk usage, but to monitor and report on these resources on a regular basis, custom scripts can be written that are independent of the application code.
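As an example of such a custom script, the sketch below reports the disk usage of a filesystem and flags it when usage crosses a threshold. The / mount point and the 90% threshold are arbitrary choices for the example.

```shell
#!/bin/sh
# Report disk usage of a filesystem and warn above a threshold.
MOUNT=/          # assumed mount point for the example
THRESHOLD=90     # percent; an arbitrary example value

# df -P prints one POSIX-format line per filesystem; field 5 is the use%.
usage=$(df -P "$MOUNT" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARN: $MOUNT is ${usage}% full"
else
    echo "OK: $MOUNT is ${usage}% full"
fi
```

The same shape works for any resource that a standard utility can report as a number.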

3. Performance monitoring: The performance of an application is critical to a good user experience. An application that responds to user requests in a reasonable amount of time makes a good impression, whereas one that takes many seconds or minutes to respond will cause users to abandon it. Application performance derives from the application code and the supporting hardware: the code ensures that the program routines can handle at least the desired number of actual user requests, and the hardware provides the necessary memory and processing capability.

Application performance can be monitored from the application access time, the request processing time, and the times reported for various transactions in the application logs. While the logs may provide some data about processing time, the actual user experience can be simulated by sending requests to the application from different locations and measuring the resulting response time in real time.

4. Configuration monitoring: Application releases and operating system changes can alter the hardware and software configuration of a machine. It is very important to monitor configuration to avoid any undocumented or untested configuration element. Each configuration change needs to be documented and watched for unauthorized modification. The best way to monitor configuration is through a change control process in which a change is submitted, approved, and then implemented. The change control process keeps a record of all changes and allows the people responsible for the application to review them.

5. Security monitoring: In today’s global scenario it is very important to monitor applications for security. Security monitoring involves ensuring that the latest security patches are applied to application servers, web servers, and database servers. Software companies frequently issue security warnings for their products, and these warnings should be carefully studied and acted upon to ensure compliance and protection against attackers. At any given time, the software versions in use should be reviewed to see whether they pose any security threat, and updated to newer, secure versions where needed.

Some companies have security teams who constantly monitor hardware and software for possible security breaches and send their recommendations, but in general the support team should subscribe to the newsletters from software companies that announce the latest security threats.

Best practices for application monitoring:
Systems can fail for various reasons related to hardware, the operating system, the network, or the applications themselves; sometimes, despite good efforts, systems and applications fail anyway. Although one cannot guarantee the availability of these components, there are some best practices that can be followed to ensure high availability of applications:

1. Plan early: If a new application or software component is going live and needs monitoring, it is better to get involved in the early discussions of architecture and design to get an overview of things to come. This gives time to think through and implement the monitoring solution by the time it is required. In many cases this helps, because the monitoring solution may not be straightforward and may require additional resources and effort.

2. Monitor proactively: Don’t let a system or application go down and use its failure as the starting point for corrective action. Monitor systems and applications proactively for the symptoms of a problem so that corrective action can be initiated before they fail. Proactive monitoring can be achieved by watching threshold values for resource utilization, such as CPU, memory, and network bandwidth, and for application health parameters. If the system crosses a threshold, a health check should be performed, including finding the running processes, checking the memory utilization of each process, and reviewing the application logs. Proactive health checks and corrective action can avoid a system or application crash.

3. Balance the load: Load balancers are used to distribute load across the servers that can handle it. In the event of one server being heavily loaded or down, the load balancer automatically directs traffic to a healthy server. This operation is transparent to users, who will not notice the difference. Load balancers can be hardware- or software-based and, if not already present, should be used for any high-transaction application.

4. Cluster the servers: Clustering removes the single point of failure by providing multiple points for request processing. In the event of one server being down due to hardware failure, network failure, or a heavy load on its resources, requests are sent to and processed by the other members of the cluster.

5. Create a recovery plan: To avoid delays, online applications should have a well-documented and tested recovery plan. The plan should cover the steps and checklists to be followed in the event of an application failure. A simple example would be to test the failover feature of a server and observe the total number of failed requests and the time taken to fail over, which gives an estimate of when an alternate server will be up. Having a plan ready at the time of failure avoids wasting time looking for alternatives.

6. Deploy application code from a trusted and tested source: Application code should be released from a trusted and tested source such as a version control system or the staging or quality assurance environments. No code should be released that contains changes from outside the trusted source, to which only authorized persons have access. Releasing code this way gives the development teams the opportunity to simulate any code problem and examine the code base itself.

7. Create a service level agreement: A service level agreement in writing emphasizes the need for and scope of monitoring. It provides the monitoring requirements for the support team and a standard by which the business groups can measure application availability. This document gives an estimated time to respond to and fix issues, and teams can work in advance to create a recovery plan that meets the service level agreement.

8. Use good hardware: Hardware that is proven reliable in the industry should be used for the production environment. All additional components, cabling, and so on should be of a high standard to avoid problems due to hardware failure, and replacement components should match the exact specifications of the originals. The hardware should be covered by a support arrangement with the manufacturer or another company that can supply components and troubleshooting expertise in case of a failure.

9. Seek professional help: If your application is mission-critical and its failure impacts customers and revenue, it is not sufficient to rely on home-grown monitoring solutions; seek professional advice from companies that provide monitoring as a service. Besides monitoring the application, these companies can provide different types of reports, such as response time, downtime, and uptime, which are helpful in maintaining and planning the application’s resources.


Implementation:
To implement effective application monitoring one has to understand the nature of the application and what exactly it is trying to do. Full knowledge of the application code is not required, but the basic flow of information should be clear.

1. Uptime monitoring: In this type of monitoring, applications are checked to confirm they are up and running. A simple monitor can be set up by checking the server URLs or server processes. The limitation of this approach is that it can tell whether an application is up, but not whether it can actually process transactions.
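A process-based uptime check can be as simple as asking ps whether the process is still alive; a URL check would follow the same up/down shape. The PID used below is a placeholder for the application's real PID (typically read from a pidfile).

```shell
#!/bin/sh
# Report whether a process is up; succeeds only if the PID exists.
is_up() {
    ps -p "$1" > /dev/null 2>&1
}

pid=$$    # placeholder: in practice, the application's PID from its pidfile
if is_up "$pid"; then
    echo "UP: process $pid is running"
else
    echo "DOWN: process $pid is not running"
fi
```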

2. Transaction monitoring: Transaction-based applications are best monitored using a transaction monitor. If the application involves, say, submitting a form and displaying a success message, the same behavior can be simulated with scripts and the status captured to determine success. The script can run the transactions at repeated intervals and send alerts if something fails.

This can be used effectively in proactive monitoring if the application can return the transaction processing time or some other status that can be quantified. The transaction completion time and status can be monitored and compared with the expected values; if a transaction takes too long, one can look at the application logs to figure out the problem and take corrective action to avoid a crash.
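A transaction monitor along these lines can be sketched as a loop that runs the transaction, checks its exit status, and alerts on failure. The txn function below is a stand-in for a real scripted form submission, and the three iterations stand in for a long-running interval loop.

```shell
#!/bin/sh
# Simulate a transaction at intervals and alert on failure.
txn() {
    true    # placeholder: exit status 0 means the transaction succeeded
}

checks=0
failures=0
while [ "$checks" -lt 3 ]; do        # three iterations instead of an endless loop
    if ! txn; then
        failures=$((failures + 1))
        echo "ALERT: transaction failed"
    fi
    checks=$((checks + 1))
done
echo "ran $checks checks, $failures failures"
```

In production the loop body would also record the transaction time, feeding the proactive threshold comparison described above.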

3. Data file monitoring: In some application environments, transactions happen offline, with data traveling from one point to another in files, for example businesses sending their daily sales data to their head office every night as a data file. This type of flow can be monitored by watching the various drop and pickup points of the data files: at frequent intervals, counts can be taken at the drop and pickup points to ensure the files are moving properly.

This also provides a means of proactive monitoring, since the problem becomes known the first time files start to accumulate at a drop point, and the system can be prevented from clogging by investigating the cause of the accumulation.
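The drop-point check can be sketched by counting the files waiting in the drop directory and alerting when the backlog grows. The directory (a temporary one here), the simulated files, and the backlog limit of 5 are all made up for the example.

```shell
#!/bin/sh
# Count files waiting at a drop point and alert if they accumulate.
DROP_DIR=$(mktemp -d)    # placeholder for the real drop directory
LIMIT=5                  # arbitrary backlog threshold for the example

# Simulate two data files waiting to be picked up.
touch "$DROP_DIR/sales1.dat" "$DROP_DIR/sales2.dat"

count=$(ls "$DROP_DIR" | wc -l | tr -d ' ')
if [ "$count" -gt "$LIMIT" ]; then
    echo "ALERT: $count files backed up at $DROP_DIR"
else
    echo "OK: $count files waiting"
fi
rm -r "$DROP_DIR"
```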

4. Database monitoring: Applications use databases, and databases should be monitored both for uptime and for transaction state. Uptime is easy to monitor: by watching a few key processes we can determine whether the database is up. To monitor the transactional health of a database, monitoring transactions such as creating and updating records can be run, noting the time taken by each transaction and its final status.

When the transactions start to fail, we know the database is having issues, but as proactive monitoring we can also watch the time taken to complete each transaction. In most cases, if the system becomes overloaded, transaction times rise, and that gives a vital clue for locating the problem area in the database and correcting it before it goes down.

5. Resource monitoring: Monitoring CPU, memory, network, and disk is just as important as the types above. Constantly monitoring system resources can prevent application and operating system slowdowns and crashes.

If CPU or memory utilization reaches its peak, the application can go into a hung state. If the disk is full, applications can crash right away, since they may not be able to write their logs to disk.

Over-utilization of network bandwidth can also cause an application to crash, as request queues start building up behind the slow network.

All of these resources offer quantitative measurements and can be monitored with scripts built on existing system utilities. For proactive monitoring, threshold values can be set for each resource, and on reaching a threshold one can investigate the cause of the over-utilization.
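As one example of such a threshold script, the sketch below reads the one-minute load average from /proc/loadavg (a Linux-specific file; other systems expose the same figure via uptime) and compares it against an arbitrary threshold.

```shell
#!/bin/sh
# Alert when the 1-minute load average crosses a threshold.
THRESHOLD=8   # arbitrary example value; tune per machine

# /proc/loadavg (Linux): the first field is the 1-minute load average.
load=$(cut -d ' ' -f 1 /proc/loadavg)
# Compare as integers by dropping the fractional part.
load_int=${load%.*}

if [ "$load_int" -ge "$THRESHOLD" ]; then
    echo "ALERT: load average is $load"
else
    echo "OK: load average is $load"
fi
```

Equivalent checks for memory, disk, and network would each read the figure from the appropriate utility and follow the same compare-and-alert shape.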

According to the Open Source Security Testing Methodology Manual (OSSTMM), there are seven main types of security testing. They are:
Vulnerability Scanning
Security Scanning
Penetration Testing
Risk Assessment
Security Auditing
Ethical Hacking
Posture Assessment & Security Testing


vi, pronounced "vee eye", is a Unix editor available on almost all Unix operating systems: Solaris, BSD, AIX, HP-UX, and so on.

This document is a quick reference to the vi editor and will be of help if you are new to Unix, learning Unix, or just refreshing your vi knowledge after a few years.

In order to work correctly, vi needs the correct terminal type (TERM) setting, which depends on the type of terminal you have. Commonly used TERM types are vt100, vt220, and ansi; in most cases vt100 will work fine. If vi cannot understand the TERM you have given, it starts in open mode, giving you a line-by-line display.
Generally TERM is taken from .profile or /etc/profile, but it can be set at the command line, for example:
$ export TERM=vt100

echo $TERM displays the current TERM setting.

Create a new file or open an existing file in vi
vi without any file name will open a new file where you can enter and edit text, but on exiting you will be asked for a valid file name under which to save the text.
vi with a file name as argument opens that file for editing if it already exists; otherwise it creates a new file with that name.
Example:  $ vi testfile
Creates, or opens if it already exists, the file called testfile.

Modes in vi
vi operates in the following two modes:
i.) Command mode: When a file is opened, it is opened in command mode; that is, input from the keyboard is treated as vi commands and you will not see the words you are typing on the screen.

ii.) Insert mode: To enter text you have to put vi in insert mode by pressing 'i' or 'a', after which you can add text and whatever is typed is shown on the screen. The Esc key switches between these modes: press 'i' (or 'a') for insert mode, and Esc to return to command mode.

Saving & exiting the vi editor

You can exit vi in different ways:

1.) Quit without saving: If you don’t want to save your work, :q will take you out without saving your edits.
2.) Write & quit: A simple :w saves the current file but does not exit; :wq saves and quits.
3.) Forced quit: An exclamation mark at the end of an exit command (:q! or :wq!) forces the quit, discarding the unsaved edits (for :q!) or forcing the write of all changes (for :wq!).


For quick reference check these links

vi Editor Commands

Basic vi Commands

Mastering the VI editor

Hope this helps everyone who is new to UNIX 🙂


Unix commands are the first thing needed by a tester testing applications on a Unix platform. Unix operating systems come with an online manual system that can be used to see command details, syntax, options, and examples while working on a Unix system. The manual is accessed using man <command name>; it requires the man package to be installed and MANPATH set to the man directories. The manual page directories may differ between Unix operating systems, and the man package may not be installed on all systems.

Following are a few of the most popular and useful commands in Unix operating systems.

wildcard characters


The * wildcard character matches any number of characters (including none) in a filename. For instance, to list all the files in your directory that end with .c, enter the command
ls *.c


? (question mark) serves as a wildcard character for exactly one character in a filename. For instance, if you have files named prog1, prog2, and prog3 in your directory, the Unix command:
ls prog?
lists all three.


cd dir      Change to directory dir

mkdir dir        Create new directory dir

mv dir1 dir2     Rename directory dir1 as dir2

rmdir dir        Remove directory dir


list, no details, only names
ls filename    (filename may contain wildcard characters)

list, with details (long listing)
ls -l filename    (filename may contain wildcard characters)

move  to directory
mv filename    dirname     (wildcard character/s supported)

copy file to other/current  directory
cp file  directory/newfile    or cp directory/oldfile  .

Delete the file
rm file;  rm -rf directory  – recursively removes files and directories without any warning.

file filename  – the file command tries to determine the file type (text, executable, etc.) by comparing values in /etc/magic.

File edit/create/view

vi – full-screen editor
vi filename   Opens an existing file or creates a new one

ed – Line Text editor
ed  filename

count – Line, word, & char
wc  filename

Text content display – List contents of file at once
cat  filename

Text content display by screen :  List contents of file  screen by screen 
more  filename

Concatenate –  file1 & file2 into file3
cat file1 file2 >file3  

File operation

Change read/write/execute mode of file
chmod mode file

Change owner (and optionally group) of file
chown [-R] [-h] owner[:group] file

move (rename) file
mv file1 file2     Rename file file1 as file2

rm file  Delete (remove) file

Compare two files
cmp file1 file2   

Copy file file1 into file2
cp file1 file2      

Sort Alphabetically
sort file

Sort Numerically
sort -n file

Split file into n-line pieces
split -l n file

match pattern
grep pattern file     Outputs lines that match pattern

Lists file differences
diff file1 file2   

Output beginning of file
head file

Output end of file
tail file


Suspend current process
CTRL/z *       

Interrupt processes
CTRL/c *      

Stop screen scrolling
CTRL/s *      

Resume screen scrolling
CTRL/q *       

Sleep for n seconds
sleep n    

Print list of jobs
jobs

Kill job n
kill %n

Forcefully kill process n
kill -9 n

Display process status
ps

Resume background job n
bg  [%n]       

Resume foreground job n
fg  [%n]       

Exit from shell
exit

User admin

add a new user login to the system
# useradd -u 655 -g 20 -d /home/testlogin testlogin

-u user id; if not specified, the system takes the highest available.
-g group id; must already exist in /etc/group; if not specified, other or user is assigned.
-d home directory; the default is to use the login name as the directory name under the home directory.
testlogin – the new login name to be created.

#useradd testlogin    will create a user named 'testlogin' with all default values.

password Change
passwd  <user>

alias (csh/tcsh) – Create command
alias name1 name2     

alias (ksh/bash) – Create alias command
alias name1="name2"

alias – Remove alias
unalias name1 [name2 …]


Output file to line printer
lp -d printer file

System  Status

Display disk quota
quota

Print date & time
date

List logged-in users
who

Display current user
whoami

Output user information
finger  [username]    

Display recent commands
history

Environment Variable

The set command alone displays the shell variables; it is also used to set shell options in ksh, e.g. set -o vi.

export variable – export makes the variable visible in subshells.

Set environment variable (csh/tcsh) to value v
setenv name v

Set environment  variable  (ksh/bash)  to value v
export name=v      example :  export TERM=vt100


Connecting to a  remote host
$telnet hostname/ip address      or  $telnet

Telnet brings up the login prompt of the remote host and expects you to enter your username and password. Without an argument it enters command mode (telnet>) and accepts the commands listed by ? at the telnet> prompt.
Communication between the two hosts is not encrypted.

Securely connecting to a remote host

ssh  username@hostname  or ssh -l username hostname
Depending on the ssh settings for your account, you may or may not be asked for a password to log in. Your login and password are the same as you would use with a telnet connection.
Communication between the two hosts is encrypted, so if someone intercepts it, they will not be able to use it.

Copy files from/to remote host

ftp hostname
ftp expects you to enter your username and password, or, if it is an ftp-only account, the ftp account password.
put and mput (multiple put) are used to transfer files to the remote host.
get and mget (multiple get) are used to transfer files from the remote host.
ftp allows a limited number of commands to be executed at the ftp> prompt; a summary can be obtained by typing ? at the ftp> prompt.

Securely copy files from/to remote host

sftp username@hostname:remotefile  localfile 

Communication is encrypted between two hosts.

Test the tcp/ip  connectivity between two hosts

ping hostname
If you can ping a host, the host is reachable from the machine you are using.
Note that router/firewall configuration may prevent ping from succeeding even when the host is up.

Backup and  Restore

backup and restore using tar (tape archive)

tar tvf filename.tar   — View the table of contents of a tar archive
tar xvf filename.tar   — Extract the contents of a tar archive
tar cvf filename.tar file1 file2 file3   — Create a tar archive called filename.tar containing file1, file2, file3
tar cannot copy special files such as device files, so it is not suitable for taking a root backup.
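The tar commands above can be exercised end to end in a throwaway directory, creating an archive, listing it, and extracting it (the file names are made up for the demo):

```shell
#!/bin/sh
# Create, list, and extract a small tar archive in a scratch directory.
work=$(mktemp -d)
cd "$work"

printf 'one\n' > file1
printf 'two\n' > file2

tar cf archive.tar file1 file2       # create the archive
listing=$(tar tf archive.tar)        # list its contents
echo "$listing"

mkdir extract
(cd extract && tar xf ../archive.tar)   # extract into ./extract
restored=$(cat extract/file1 extract/file2)
echo "$restored"

cd /
rm -r "$work"
```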

backup and restore using cpio (copy in/out)

cpio is mostly used in conjunction with other commands that generate the list of files to be copied:
#ls | cpio -o > /dev/rmt/c0t0d0 — Copy the contents of a directory into a tape archive
#find . -depth -print | cpio -pd newdir — Copy an entire directory to another place
#find . -cpio /dev/rmt/c0t0d0 — Copy files in the current directory to a tape
cpio can copy special files and hence is useful for taking a root backup containing device files.

Find files and directories

The find command is used to find files and directories and to run commands on the list of files thus generated. By default, find does not follow symbolic links.
find . -name '*.log' -print    — Simple find to list log files (quote the pattern so the shell does not expand it first)
find . -name '*.log' -exec rm {} \;  — Simple find to locate log files and delete them
find accepts a long list of options to select files based on different parameters such as creation time, modification time, size, and so on. Please refer to man find for more options.
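A quick way to see these two find invocations at work, in a scratch tree so nothing real gets deleted (the file and directory names are made up for the demo):

```shell
#!/bin/sh
# Demonstrate find-by-name and find-and-delete in a scratch tree.
work=$(mktemp -d)
mkdir -p "$work/app/logs"
touch "$work/app/logs/a.log" "$work/app/logs/b.log" "$work/app/readme.txt"

# Quote the pattern so the shell does not expand it before find sees it.
found=$(find "$work" -name '*.log' -print | wc -l | tr -d ' ')
echo "log files found: $found"

find "$work" -name '*.log' -exec rm {} \;
left=$(find "$work" -name '*.log' -print | wc -l | tr -d ' ')
echo "log files left: $left"

rm -r "$work"
```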
