Your Name

Address | Email-Id | Telephone


CAREER OBJECTIVE:
  • Over 8 years of professional IT experience in all phases of the Software Development Life Cycle, including hands-on experience in Java/J2EE technologies and Big Data analytics.
  • Extensive experience working with Teradata, Oracle, Netezza, SQL Server, and MySQL databases.
  • Excellent understanding and knowledge of NoSQL databases such as MongoDB, HBase, and Cassandra.
  • Strong experience working with different Hadoop distributions, including Cloudera, Hortonworks, MapR, and Apache.
  • Experience installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH 5.x) distributions and on Amazon Web Services (AWS).
  • Implemented custom Kafka encoders for custom input formats to load data into Kafka partitions (a sketch of this pattern follows this list).
  • Streamed data in real time using Spark with Kafka for faster processing.
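The custom-encoder bullet above refers to a common Kafka pattern: a serializer that turns application objects into bytes before they are written to a topic partition. Below is a minimal Scala sketch of that pattern, written against the newer Serializer interface; the Event type, broker address, and topic name are placeholders for illustration.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
    import org.apache.kafka.common.serialization.Serializer

    // Hypothetical record type produced by the upstream application.
    case class Event(id: String, payload: String)

    // Custom serializer (the newer counterpart of a Kafka "encoder") that turns
    // Event objects into bytes before they are written to a topic partition.
    class EventSerializer extends Serializer[Event] {
      override def configure(configs: java.util.Map[String, _], isKey: Boolean): Unit = ()
      override def serialize(topic: String, data: Event): Array[Byte] =
        s"${data.id}|${data.payload}".getBytes("UTF-8")
      override def close(): Unit = ()
    }

    object ProducerSketch {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", "broker1:9092") // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", classOf[EventSerializer].getName)

        val producer = new KafkaProducer[String, Event](props)
        // The record key determines which partition the event lands in.
        producer.send(new ProducerRecord("events", "event-42", Event("event-42", "hello")))
        producer.close()
      }
    }
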
PROFESSIONAL EXPERIENCE:
1.) Spark/Hadoop Developer

Company Name-Location – July 2015 to October 2016

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Developed Spark jobs and Hive jobs to summarize and transform data.
  • Implemented Spark applications in Scala using higher-order functions for both batch and interactive analysis requirements.
  • Developed Spark scripts for data analysis in both Python and Scala.
  • Built on-premises data pipelines using Kafka and Spark for real-time data analysis (see the streaming sketch after this list).
  • Created reports in Tableau to visualize the generated data sets, and tested native Drill, Impala, and Spark connectors.
  • Analyzed existing SQL scripts and designed the solution to implement them using Scala.
  • Implemented complex Hive UDFs to execute business logic within Hive queries.
  • Bulk-loaded large volumes of data into HBase by generating HFiles with MapReduce and loading them directly.
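To illustrate the Kafka-and-Spark pipeline mentioned in this list, below is a minimal Scala sketch using Spark Structured Streaming as one way to express it; the broker address, topic name, and windowing choices are placeholders rather than the project's actual configuration.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object StreamingPipelineSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-spark-pipeline")
          .getOrCreate()
        import spark.implicits._

        // Read the topic as an unbounded DataFrame; broker and topic are placeholders.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "clickstream")
          .load()
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")

        // Aggregate events into one-minute windows for near-real-time analysis.
        val counts = events
          .withWatermark("timestamp", "5 minutes")
          .groupBy(window($"timestamp", "1 minute"))
          .count()

        // Console sink for illustration; a production job would write to HDFS or a serving store.
        counts.writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }
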

Environment: MapR, Cloudera, Hadoop, HDFS, AWS, Pig, Hive, Impala, Drill, Spark SQL, OCR, MapReduce, Flume, Sqoop, Oozie, Storm, Zeppelin, Mesos, Docker, Solr, Kafka, MapR-DB, Spark, Scala, HBase, ZooKeeper, Tableau, Shell Scripting, Gerrit, Java, Redis.

2.) Hadoop Developer

Company Name-Location – November 2014 to May 2015

Responsibilities:

  • Analyzed the requirements to set up the cluster.
  • Worked on analyzing the Hadoop cluster and various big data analytic tools, including MapReduce, Hive, and Spark.
  • Loaded data from the Linux file system, servers, and Java web services using Kafka producers and partitions.
  • Implemented custom Kafka encoders for custom input formats to load data into Kafka partitions.
  • Developed Spark scripts using the Scala shell as per requirements.
  • Migrated complex MapReduce programs to Spark RDD transformations and actions (see the sketch after this list).
  • Implemented Spark RDD transformations to map business rules and applied actions on top of the transformations.
  • Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs.
  • Developed MapReduce programs to parse raw data and store pre-aggregated data in partitioned tables.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data with MapReduce, Hive, and Pig.
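As a rough sketch of the MapReduce-to-Spark migration mentioned above: the mapper logic becomes a map/filter chain and the reducer becomes reduceByKey, with an action to materialize the result. The HDFS paths and field positions here are placeholders.

    import org.apache.spark.{SparkConf, SparkContext}

    object RddMigrationSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("mapreduce-to-rdd"))

        // HDFS paths are placeholders; the old mapper logic becomes a map/filter chain.
        val lines = sc.textFile("hdfs:///data/raw/events/*")

        // Map phase: parse each record and emit (key, 1) pairs.
        val pairs = lines
          .map(_.split('\t'))
          .filter(_.length > 2)
          .map(fields => (fields(1), 1L))

        // Reduce phase: reduceByKey replaces the shuffle-and-reduce step of the MapReduce job.
        val counts = pairs.reduceByKey(_ + _)

        // Action: materialize the summarized output back to HDFS.
        counts.saveAsTextFile("hdfs:///data/summary/event_counts")

        sc.stop()
      }
    }
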

Environment: Hadoop, Cloudera, HDFS, Pig, Hive, Flume, Sqoop, NiFi, AWS Redshift, Python, Spark, Scala, MongoDB, Cassandra, Snowflake, Solr, ZooKeeper, MySQL, Talend, Shell Scripting, Red Hat Linux, Java.

3.) Hadoop Developer

Company Name-Location  – October 2013 to September 2014

Responsibilities:

  • Converted the existing relational database model to the Hadoop ecosystem.
  • Generated datasets and loaded them into the Hadoop ecosystem.
  • Worked with Linux systems and RDBMS databases on a regular basis to ingest data using Sqoop.
  • Continuously monitored and managed the Hadoop cluster through Cloudera Manager.
  • Involved in reviewing functional and non-functional requirements.
  • Implemented frameworks using Java and Python to automate the ingestion flow.
  • Managed data coming from different sources.
  • Loaded CDRs from relational databases into the Hadoop cluster using Sqoop, and from other sources using Flume.
  • Processed large volumes of data in parallel using Talend functionality.
  • Loaded data from the UNIX file system and FTP into HDFS.
  • Designed and implemented Hive queries and functions for evaluating, filtering, loading, and storing data (a sketch of the filter-and-load pattern follows this list).
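The Hive filtering-and-loading work above typically follows a filter-and-insert pattern into a partitioned table. Below is a minimal sketch using Spark's Hive support; the database, table, and column names are placeholders.

    import org.apache.spark.sql.SparkSession

    object HiveLoadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hive-filter-and-load")
          .config("hive.exec.dynamic.partition", "true")
          .config("hive.exec.dynamic.partition.mode", "nonstrict")
          .enableHiveSupport()
          .getOrCreate()

        // Create the partitioned target table if it does not already exist.
        spark.sql(
          """CREATE TABLE IF NOT EXISTS cdr.calls_clean (
            |  caller STRING, callee STRING, duration_sec INT)
            |PARTITIONED BY (call_date STRING)
            |STORED AS ORC""".stripMargin)

        // Filter malformed records while loading into the partitioned table.
        spark.sql(
          """INSERT OVERWRITE TABLE cdr.calls_clean PARTITION (call_date)
            |SELECT caller, callee, duration_sec, call_date
            |FROM cdr.calls_raw
            |WHERE duration_sec > 0 AND caller IS NOT NULL""".stripMargin)

        spark.stop()
      }
    }
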

Environment: Hadoop, Hortonworks, HDFS, Pig, Hive, Flume, Sqoop, Ambari, Ranger, Python, Akka, Play Framework, Informatica, Elasticsearch, Linux (Ubuntu), Solr.

4.) Java Developer

Company Name-Location – September 2010 to June 2011

Responsibilities:

  • Designed Java Servlets and Objects using J2EE standards.
  • Developed multithreaded code to improve CPU utilization.
  • Used multithreading to process tables concurrently as user data was completed in each table (see the sketch after this list).
  • Developed the presentation layer using Spring MVC, AngularJS, and jQuery.
  • Designed and developed web pages using HTML 4.0 and CSS, including Ajax controls and XML.
  • Worked closely with Photoshop designers to implement mock-ups and layouts of the application.
  • Wrote properties and methods in class modules and consumed web services.
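The multithreading bullet above describes fanning per-table processing out across worker threads. A minimal sketch of that pattern with a fixed-size thread pool follows; the table names and per-table work are placeholders.

    import java.util.concurrent.{Executors, TimeUnit}

    object TableProcessorSketch {
      def main(args: Array[String]): Unit = {
        // Hypothetical table list; in the real application these came from user-data completion events.
        val tables = Seq("orders", "payments", "shipments")

        // Fixed-size pool so several tables are processed concurrently without unbounded threads.
        val pool = Executors.newFixedThreadPool(4)

        tables.foreach { table =>
          pool.submit(new Runnable {
            override def run(): Unit = {
              // Placeholder for the per-table work (reads, transformations, writes).
              println(s"Processing $table on ${Thread.currentThread().getName}")
            }
          })
        }

        pool.shutdown()
        pool.awaitTermination(10, TimeUnit.MINUTES)
      }
    }
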

Environment: Core Java, JavaBeans, HTML 4.0, CSS 2.0, PL/SQL, MySQL 5.1, AngularJS, JavaScript 1.5, Flex, AJAX, and Windows

EDUCATIONAL QUALIFICATIONS:
(Course/Examination – Institution/University/School – Year of Passing – Performance)
  • Bachelor of Technology in Computer Science – Arunai Engineering College – 2009 – 90%
  • HSC – St. Mary’s Matric HSS – 2005 – 84%
  • SSLC – Vetri Higher Secondary School – 2003 – 80%

SKILLS:
  • JAVA (7 years)
  • APACHE HADOOP MAPREDUCE (4 years)
  • HADOOP SQOOP (4 years)
  • APACHE KAFKA (4 years)
  • FLUME (4 years)
ADDITIONAL INFORMATION:
  • Big Data Technologies: Hadoop, MapReduce, Pig, Hive, YARN, Kafka, Flume, Sqoop, Impala, Oozie, ZooKeeper, Spark, Solr, Storm, Drill, Ambari, Mahout, MongoDB, Cassandra, Avro, Parquet, and Snappy
  • Languages: Java, Scala, Python, JRuby, SQL, HTML, DHTML, JavaScript, XML, and C/C++
  • NoSQL Databases: Cassandra, MongoDB, and HBase
  • Java Technologies: Servlets, JavaBeans, JSP, JDBC, JNDI, EJB, and Struts
  • Development/Build Tools: Eclipse, Ant, Maven, Gradle, IntelliJ, JUnit, and Log4j
  • Hadoop Distributions: Cloudera, MapR, Hortonworks, and IBM BigInsights
  • Frameworks: Struts, Spring, and Hibernate
  • App/Web Servers: WebSphere, WebLogic, JBoss, and Tomcat
  • DB Languages: MySQL, PL/SQL, PostgreSQL, and Oracle
  • Operating Systems: UNIX, Linux, Mac OS, and Windows variants
  • Data Analytical Tools: R, SAS, and MATLAB

 


Hadoop Developer Sample Resume 2


CAREER OBJECTIVES
  • Overall 8 years of professional Information Technology experience in Hadoop, Linux, and database administration activities such as installation, configuration, and maintenance of systems/clusters.
  • Extensive experience in Linux administration and big data technologies as a Hadoop administrator.
  • Hands-on experience with Hadoop clusters using Hortonworks (HDP), Cloudera (CDH3, CDH4), Oracle Big Data, and YARN-based distribution platforms.
  • Skilled in Apache Hadoop, MapReduce, Pig, Impala, Hive, HBase, ZooKeeper, Sqoop, Flume, Oozie, Kafka, Storm, Spark, JavaScript, and J2EE.
  • Experience deploying and managing multi-node development and production Hadoop clusters with different Hadoop components (Hive, Pig, Sqoop, Oozie, Flume, HCatalog, HBase, ZooKeeper) using Hortonworks Ambari.
  • Good experience creating database objects such as tables, stored procedures, functions, and triggers using SQL, PL/SQL, and DB2.
  • Used Apache Falcon to support data retention policies for Hive/HDFS.
  • Experience configuring NameNode high availability and NameNode federation, with in-depth knowledge of ZooKeeper for cluster coordination services.
  • Designed and implemented Hadoop cluster security with Kerberos authentication.
WORK EXPERIENCE
Hadoop Developer

Company Name-Location – July 2017 to Present

Responsibilities:

  • Working on the Hortonworks Hadoop distribution, managing services such as HDFS, MapReduce2, Hive, Pig, HBase, Sqoop, Flume, Spark, Ambari Metrics, ZooKeeper, Falcon, and Oozie across four clusters spanning LAB, DEV, QA, and PROD.
  • Monitor Hadoop cluster connectivity and security through the Ambari monitoring system.
  • Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades.
  • Responsible for cluster maintenance and monitoring, commissioning and decommissioning DataNodes, troubleshooting, reviewing data backups, and reviewing log files.
  • Installed, tested, and deployed monitoring solutions with Splunk services and utilized Splunk apps.
  • Day-to-day responsibilities include resolving developer issues, moving code between environments during deployments, provisioning access for new users, providing quick fixes to reduce impact, and documenting issues to prevent recurrence.

Environment: Hue, Oozie, Eclipse, HBase, HDFS, MapReduce, Hive, Pig, Flume, Sqoop, Ranger, Splunk.

Spark/Hadoop Developer

Company Name-Location – August 2016 to June 2017

Responsibilities:

  • Responsible for cluster maintenance, monitoring, managing, commissioning and decommissioning DataNodes, troubleshooting, reviewing data backups, and managing and reviewing log files on Hortonworks.
  • Handled the addition, installation, and removal of components through Cloudera.
  • Monitored workload and job performance and performed capacity planning using Cloudera.
  • Performed major and minor upgrades and patch updates.
  • Created and managed cron jobs.
  • Installed Hadoop ecosystem components such as Pig, Hive, HBase, and Sqoop in the cluster.
  • Set up tools such as Ganglia for monitoring the Hadoop cluster.
  • Handled data movement between HDFS and various web sources using Flume and Sqoop.
  • Extracted data from NoSQL databases such as HBase through Sqoop and placed it in HDFS for processing (a sketch of reading HBase rows follows this list).
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
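The HBase extraction bullet above involves reading rows out of an HBase table before landing them in HDFS. Below is a minimal sketch of the read side using the HBase client API; the table, column family, and qualifier names are placeholders.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Scan}
    import org.apache.hadoop.hbase.util.Bytes
    import scala.collection.JavaConverters._

    object HBaseReadSketch {
      def main(args: Array[String]): Unit = {
        // Picks up hbase-site.xml from the classpath for the ZooKeeper quorum details.
        val conf = HBaseConfiguration.create()
        val connection = ConnectionFactory.createConnection(conf)
        try {
          val table = connection.getTable(TableName.valueOf("web_events"))
          val scan = new Scan().addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"))
          val scanner = table.getScanner(scan)
          try {
            // Each row would be written out to HDFS in the real extraction flow.
            scanner.asScala.foreach { result =>
              val rowKey = Bytes.toString(result.getRow)
              val payload = Bytes.toString(result.getValue(Bytes.toBytes("d"), Bytes.toBytes("payload")))
              println(s"$rowKey\t$payload")
            }
          } finally scanner.close()
        } finally connection.close()
      }
    }
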

Environment: Linux, Shell Scripting, Tableau, MapReduce, Teradata, SQL Server, NoSQL, Cloudera, Flume, Sqoop, Chef, Puppet, Pig, Hive, ZooKeeper, and HBase.

EDUCATION
(Course/Examination – Institution/University/School – Year of Passing – Performance)
  • Bachelor's in Electronics and Communication Engineering – Anna University, Chennai, Tamil Nadu – 2015 – 80%
  • HSC – FES Higher Secondary School, Chennai – 2011 – 84%
  • SSLC – Vikkaas Higher Secondary School – 2009 – 80%

SKILLS
  • LINUX (8 years)
  • APACHE HADOOP HDFS (5 years)
  • HADOOP SQOOP (5 years)
  • APACHE HBASE (5 years)
  • FLUME (5 years)
ADDITIONAL INFORMATION

Big Data Technologies:-

  • Hadoop, HDFS, Map Reduce
  • YARN, PIG, Hive, HBase
  • Zookeeper, Oozie, Ambari, Kerberos
  • Knox, Ranger, Sentry, Spark, Tez, Accumulo
  • Impala, Hue, Storm, Kafka
  • Flume, Sqoop, Solr.

Technical Skills:-

  • Operating Systems: Linux, AIX, CentOS, Solaris, and Windows.
  • Databases: Oracle 10g/11g, 12c, DB2, MySQL, HBase, Cassandra, and MongoDB.
  • Backups: Veritas NetBackup and TSM Backup.
  • Virtualization: VMware vSphere and VIO.
  • Scripting Languages: Shell, Perl, and Python.
