Posts from 2013

How to Write a Professional CV

Follow these guidelines while preparing your CV:

1. Title: Provide your name and address, along with your email ID and contact phone number.
2. Objective: This part is very important, as it emphasizes your career objective. State clearly your objective in joining the new company and how your previous experience will help the organization.
3. Professional Experience: Provide your current company, role, and location, with bullet points describing your professional experience. Note: this may include some of your technical skills, end-to-end working experience, communication skills, interpersonal skills, leadership skills, production environment experience, Agile experience, etc.
4. Technical Skills: As the name says, provide the technical skills that are most relevant to the position you applied for. Suppose you are applying for a Java Lead role: include everything related to Java technologies, such as Spring, Struts, Hibernate, etc. Please don't include skills like: JAVA, .ne…

How to Clear the Java Cache in Windows 8

Well, here we go. Move the mouse cursor to the top-right corner, click on Search, and enter "java"; the Java control panel shows up in the results. Click the View button in the General tab, and you will see the Java cache files; delete them. That's all!
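If you prefer the command line, the Java Web Start launcher can manage the same cache; a minimal sketch, assuming javaws is on your PATH:

rem open the Java Cache Viewer
javaws -viewer
rem remove all applications from the Java Web Start cache
javaws -uninstall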

How to Shut Down Windows 8

It's a little confusing how to shut down the Windows 8 operating system. Here we go:

1. Move the cursor to the top-right corner of the desktop screen.
2. You will find a vertical bar with Search, Share, Start, Devices, and Settings.
3. Click on the Settings button, select the Power button, and there you will find the Shutdown icon.

Yay, I found it! :) By the way, another shortcut is to press Alt+F4 on the desktop screen, which brings up a shutdown dialog. If you like the post, give it a thumbs up :)
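There is also a command-line route; a quick sketch using the built-in shutdown command:

rem shut down immediately
shutdown /s /t 0
rem restart immediately
shutdown /r /t 0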


PIG Error message "org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. name"

Problem: Error when trying to load a file in the Apache Pig grunt shell:

grunt> Data = LOAD 'MyTest.csv' using PigStorage(',');
2013-11-05 14:14:57,699 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. name
2013-11-05 14:14:57,699 [main] WARN  org.apache.pig.tools.grunt.Grunt - There is no log file to write to.
2013-11-05 14:14:57,699 [main] ERROR org.apache.pig.tools.grunt.Grunt - java.lang.NoSuchFieldError: name
    at org.apache.pig.parser.QueryParserStringStream.<init>(QueryParserStringStream.java:32)
    at org.apache.pig.parser.QueryParserDriver.tokenize(QueryParserDriver.java:200)
    at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:168)
    at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1574)
    at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
    at org.apache.pig.PigServer.registerQuery(PigServer.java:549)
    at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:9…
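The excerpt cuts off before the resolution. A NoSuchFieldError inside Pig's ANTLR-generated parser classes often points to a conflicting ANTLR jar on the classpath; this is an assumption, not the post's confirmed fix, but one hedged place to start is checking for duplicates (the paths below are assumptions too):

# look for competing antlr jars that may shadow the one Pig bundles
find $PIG_HOME $HADOOP_HOME -name 'antlr*.jar'
# also inspect anything already on the classpath
echo $CLASSPATH | tr ':' '\n' | grep -i antlr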

How to Run a MapReduce Job in Hadoop

Guys, here I am gonna show how we can run a MapReduce program on Ubuntu Linux.

Pre-requisites:
1. Hadoop should be installed successfully.
2. Hadoop is in running mode (bin/start-all.sh).

Here we go:

1. Create a directory in Ubuntu as below:
admin@admin-G31M-ES2L:/usr$ mkdir testjars
2. Now change to the newly created directory:
admin@admin-G31M-ES2L:/usr$ cd testjars
3. Create an input file, input.txt, and copy-paste some text into it:
admin@admin-G31M-ES2L:/usr/testjars$ vi input.txt
4. Now move this directory to Hadoop HDFS:
admin@admin-G31M-ES2L:/usr/local/hadoop$ bin/hadoop dfs -put /usr/testjars/ /usr/testjar
5. So we have now moved the directory containing the input file into HDFS for the MapReduce job to process. Note: in order to run the word-count MapReduce job, the input file needs to be placed in HDFS, which we have just done.
6. The word-count jar is available as "hadoop-examples-1.0.4.jar" under the Hadoop installation direc…
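The excerpt stops here; for completeness, a minimal sketch of the remaining run step (the jar name comes from the post, while the output path is an assumption and the output file name can vary):

# run the bundled wordcount example against the directory we just uploaded
admin@admin-G31M-ES2L:/usr/local/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar wordcount /usr/testjar /usr/testjar-out
# inspect the result
admin@admin-G31M-ES2L:/usr/local/hadoop$ bin/hadoop dfs -cat /usr/testjar-out/part-r-00000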

JSON Exception Causes: Exception in thread "main" groovy.json.JsonException: Lexing failed on line: 1, column: 1, while reading 'c', no possible valid JSON value or punctuation could be recognized

Exception message:

Exception in thread "main" groovy.json.JsonException: Lexing failed on line: 1, column: 1, while reading 'c', no possible valid JSON value or punctuation could be recognized.
    at groovy.json.JsonLexer.nextToken(JsonLexer.java:82)
    at groovy.json.JsonSlurper.parse(JsonSlurper.java:73)
    at groovy.json.JsonSlurper.parseText(JsonSlurper.java:59)
    at com.jayway.restassured.path.json.JsonPath.<init>(JsonPath.java:114)

Probable root cause: a non-JSON string is reaching the parser. Remove toString() from the response object and pass the raw JSON body instead.
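A minimal sketch of the fix in Java, assuming REST-assured's JsonPath (the endpoint and field name are hypothetical):

import com.jayway.restassured.RestAssured;
import com.jayway.restassured.path.json.JsonPath;
import com.jayway.restassured.response.Response;

public class JsonPathExample {
    public static void main(String[] args) {
        // hypothetical endpoint, for illustration only
        Response response = RestAssured.get("/basepath/info.json");
        // Wrong: toString() can yield an object description starting with "com.jayway...",
        // which is exactly the kind of text that fails lexing at 'c', line 1, column 1.
        // JsonPath path = new JsonPath(response.toString());
        // Right: hand the raw JSON body to the parser.
        JsonPath path = new JsonPath(response.asString());
        System.out.println(path.getString("id")); // hypothetical field
    }
}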

Testing REST APIs Using JMeter

Nowadays REST APIs are widely used in web applications: they are lightweight, provide faster responses, and are easy-to-use web services. REST APIs are also used in Big Data applications on Hadoop environments, Amazon Web Services, etc. Here are simple steps to test a REST API using Apache JMeter.

Pre-requisites: Java... hmm, yeah, obviously.

Start JMeter.
1. Initially you see Test Plan in the left pane.
2. Right-click on it -> Add -> Config Element -> select HTTP Request Defaults (needed if you want a default request URL for all your REST calls).
3. Right-click Test Plan -> Add -> Threads -> select Thread Group (to get a thread to run the test cases).
4. Right-click Thread Group -> Add -> Sampler -> select HTTP Request.
5. Select the method as GET or POST based on your requirement.
6. Enter the base path of the URL (you can also enter request parameters here along with your base path, for example: "/basepath/info.xml?id=234").
7. Right-click Thread G…
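Once the test plan is saved as a .jmx file, JMeter can also run it headless from the command line; a minimal sketch (file names are assumptions):

# run the saved plan without the GUI and log results to a .jtl file
jmeter -n -t rest-test-plan.jmx -l results.jtl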

Configure SSH Using Cygwin on Windows 7

Steps: Make sure the OpenSSH and OpenSSL packages are installed in Cygwin. Run:

chmod +r /etc/passwd
chmod u+x /etc/passwd
chmod +r /etc/group
chmod u+x /etc/group
chmod 755 /var
touch /var/log/sshd.log
chmod 664 /var/log/sshd.log

Then run:

$ ssh-host-config
*** Query: Overwrite existing /etc/ssh_config file? (yes/no) yes
*** Info: Creating default /etc/ssh_config file
*** Query: Overwrite existing /etc/sshd_config file? (yes/no) yes
*** Info: Creating default /etc/sshd_config file
*** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
*** Info: However, this requires a non-privileged account called 'sshd'.
*** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
*** Query: Should privilege separation be used? (yes/no) no
*** Info: Updating /etc/sshd_config file
*** Query: Do you want to install sshd as a service?
*** Query: (Say "no" if it is already installed as a service)
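The transcript ends at the service question. Assuming you answer yes there, a minimal sketch of starting and smoke-testing the service (the service name "sshd" is the Cygwin default):

# start the sshd service
net start sshd
# or, equivalently, via Cygwin's service runner
# cygrunsrv -S sshd
# then verify you can log in locally
ssh localhost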

Steps to configure Hadoop in Eclipse

Pre-conditions:
1. Make sure Java is installed and the Java path is set correctly.
2. Make sure Hadoop is configured correctly. Running jps should show a result like the one below:

/usr/local/hadoop/bin$ jps
12106
11764 JobTracker
11988 TaskTracker
11217 NameNode
11674 SecondaryNameNode
12234 Jps
11438 DataNode

Steps:
1. Install Eclipse Juno (Hadoop support is available from Eclipse 3.3 versions onwards).
2. Go to the Hadoop home directory (cd $HADOOP_HOME).
3. Run the command below (it will compile the source files in the Hadoop directory):
ant clean package
4. Navigate to $HADOOP_HOME/src/contrib/…
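For the first pre-condition, a minimal sketch of the environment setup in ~/.bashrc (the paths are assumptions; adjust them to your install):

# hypothetical install locations -- adjust to your machine
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH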

Testing Strategies: Big Data and Hadoop

Well, I'm currently researching how we can test Big Data related applications. After browsing many sites, I got some useful information on how to test Big Data based Hadoop systems. Big Data testing basically involves the analytics part: BI, ETL testing, and running job flows through third-party Hadoop-enabled tools.

BI: This is purely report-based testing, where we have an analyzed data store that showcases report-based results through third-party reporting tools like Cognos, MS BI, etc.

ETL testing: Traditional ETL tools are extending their capability to integrate with Hadoop HDFS (Hadoop Distributed File System). For example, in IBM DataStage 8.7 there is a new stage called BDFS (Big Data File Stage) which acts as a connector to Hadoop HDFS.

Running MapReduce/Hive job flows: This is pure Hadoop testing. We are given a front-end tool to configure the number of instances, the input file to process, the output file to store the result, and the processing ap…

Apache Hive Error Resolution: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf

Exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
    ... 3 more

Make sure you have the following set up in ~/.bashrc:

export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH

Also move (or remove) HADOOP_CLASSPATH from hadoop-env.sh to ~/.bashrc.
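After editing ~/.bashrc, reload it and verify the variables took effect; a quick sketch:

# reload the environment in the current shell
source ~/.bashrc
# verify
echo $HIVE_HOME
which hive   # should print /usr/local/hive/bin/hive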

Hadoop Tips

How to start Hadoop:
user@user-G31M-ES2L:/usr/local/hadoop/bin$ ./start-all.sh

How to stop Hadoop:
user@user-G31M-ES2L:/usr/local/hadoop/bin$ ./stop-all.sh

How to place files in HDFS:
user@user-G31M-ES2L:~$ hadoop dfs -put filename.csv /tmp/hdfsdir/

How to check Hadoop process status:
user@user-G31M-ES2L:/usr/local/hadoop$ jps
3423 TaskTracker
3217 JobTracker
2674 NameNode
4511 Jps
3555 RunJar
3122 SecondaryNameNode
2909 DataNode
4381 RunJar
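A couple of companion commands in the same vein (the paths reuse the example above):

# list what landed in HDFS
user@user-G31M-ES2L:~$ hadoop dfs -ls /tmp/hdfsdir/
# copy a file back out of HDFS
user@user-G31M-ES2L:~$ hadoop dfs -get /tmp/hdfsdir/filename.csv /tmp/filename.csv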