Sr. Hadoop Associate
Diligent IT Services
Total experience: 8 years, 10 months
* Cluster Creation: Setting up production-grade Hadoop clusters and their components through cluster management tools in cloud environments such as AWS.
* Configuring Services: Configuring components such as HDFS, YARN, MapReduce (MR1 & MR2), Sqoop, Hive, ZooKeeper, and Sentry.
* Node Monitoring: Commissioning and decommissioning nodes on a running cluster, including balancing HDFS block data.
* High Availability: Experienced in designing Hadoop architectures on the AWS cloud with production-ready features such as high availability, scalability, and security.
* Planning a Cluster: Installation, configuration, and administration of various Hadoop distributions.
* ETL Tools: Configuring Sqoop to import/export data to/from MySQL databases; also worked with Hive.
* Troubleshooting: Troubleshooting, diagnosing, and resolving Hadoop issues, ensuring they do not recur, and raising Remedy tickets to the responsible team when required.
* YARN: Roles and responsibilities include YARN administration.
* CDH: Working knowledge of Cloudera Manager.
* HDP: Working knowledge of Ambari.
* Security: Working knowledge of Kerberos with AD, Sentry, and TLS/SSL.
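The Sqoop import/export work listed above can be sketched as cluster commands. This is a hypothetical example: the host, database, user, table names, and HDFS paths are placeholders, not actual project values, and the commands require a running cluster with Sqoop and a reachable MySQL instance.

```shell
# Hypothetical Sqoop sketch; all names and paths are placeholders.

# Import a MySQL table into HDFS using 4 parallel map tasks.
sqoop import \
  --connect jdbc:mysql://db-host:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  --num-mappers 4

# Export aggregated results from HDFS back into a MySQL table.
sqoop export \
  --connect jdbc:mysql://db-host:3306/sales \
  --username etl_user -P \
  --table order_summary \
  --export-dir /user/etl/order_summary
```

The `-P` flag prompts for the database password interactively rather than exposing it on the command line.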
• Provided admin support for a big data cluster for a US client.
• Covered topics such as big data, Hadoop administration, and cloud (AWS).
• Trained on the Hadoop ecosystem, its architecture, and its configuration.
• Trained on Cloudera and Hortonworks enterprise distributions.
• Trained on data center operations and a live Hadoop cluster, including the security aspects.
* Monitoring the cluster and handling issues related to jobs and service maintenance.
* Resolving change tickets, such as adding users, and responding to developers' issues.
* Created shell scripts to automate housekeeping.
* Taking backups of HDFS and Hive metadata.
* Configuring trash and recovery, and setting quotas for users.
* Job monitoring.
* Ensured that assigned tasks were finished by their deadlines.
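The housekeeping automation above can be sketched as a small shell function. This is a minimal sketch, not the original script: the log directory, file pattern, and retention window are assumptions passed in as parameters.

```shell
#!/usr/bin/env bash
# Hypothetical housekeeping sketch: delete local log files older than a
# retention window, then expunge old HDFS trash checkpoints if an hdfs
# client is available. Paths and retention are illustrative assumptions.

cleanup_logs() {
  local log_dir="$1"
  local retention_days="$2"

  # Remove local *.log files last modified more than retention_days ago.
  find "$log_dir" -type f -name '*.log' -mtime +"$retention_days" -delete

  # Expunge expired HDFS trash checkpoints; skipped when no hdfs client
  # (or no running cluster) is present on this host.
  if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -expunge || true
  fi
}
```

A cron entry such as `cleanup_logs /var/log/hadoop 7` run nightly would keep a week of local logs.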
Tasks/Achievements
* Trained on A+, N+, Cloud+, and Security+.
* Monitoring the cluster and reporting any abnormalities.
* Raising L1 tickets in ticketing tools and updating supervisors.
* Creating EoD reports and sending them to clients and supervisors.
* Housekeeping on the hosts to free up disk space on both the servers and HDFS.
* Storing metadata and result files in S3 through scripts.
* Worked on different projects with different enterprise clients.
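Shipping metadata and result files to S3, as listed above, can be sketched with the AWS CLI. The bucket name and local paths here are placeholders, and the commands assume the AWS CLI is installed with valid credentials configured.

```shell
# Hypothetical S3 archiving sketch; bucket and paths are placeholders.
# Requires the AWS CLI with credentials for the target bucket.

RESULT_DIR=/data/results
BUCKET=s3://example-archive-bucket/hadoop-results

# Copy new or changed result files to S3, preserving the directory
# layout and skipping temporary files.
aws s3 sync "$RESULT_DIR" "$BUCKET" --exclude '*.tmp'
```

Using `aws s3 sync` rather than `cp` makes reruns idempotent: only files that are new or changed since the last run are uploaded.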