
Junaid Pagdiwale

Lead Cloud Infrastructure and System Administrator

Inuxu Digital Media Technologies Pvt Ltd

Location
India - Pune
Education
Bachelor's degree, Chemical Engineering
Experience
5 years, 1 month


Work Experience

Total years of experience: 5 years, 1 month

Lead Cloud Infrastructure and System Administrator at Inuxu Digital Media Technologies Pvt Ltd
  • India - Pune
  • My current job since October 2021

GOOGLE CLOUD PLATFORM :-
SETTING UP LANDING ZONE :- Creating the organisation, creating projects under the organisation, and setting up VPCs for the projects.
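A minimal sketch of creating one such project under the organisation with the google-cloud-resource-manager Python client; the organisation ID and project ID here are assumptions:

```python
from google.cloud import resourcemanager_v3

# Hypothetical org and project IDs; the real landing zone defines these.
client = resourcemanager_v3.ProjectsClient()
operation = client.create_project(
    project=resourcemanager_v3.Project(
        project_id="adtech-prod",
        display_name="adtech-prod",
        parent="organizations/123456789012",
    )
)
project = operation.result()  # create_project is a long-running operation
print(project.name)
```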
VPC :- Setting up VPCs along with firewall rules for the different resources.
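A minimal sketch of such a firewall rule with the google-cloud-compute Python client; the project, network name and IP ranges are assumptions:

```python
from google.cloud import compute_v1

# Hypothetical rule letting internal subnets reach MySQL inside the VPC.
firewall = compute_v1.Firewall(
    name="allow-internal-mysql",
    network="projects/my-project/global/networks/prod-vpc",
    direction="INGRESS",
    source_ranges=["10.0.0.0/8"],
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["3306"])],
)
op = compute_v1.FirewallsClient().insert(
    project="my-project", firewall_resource=firewall
)
op.result()  # wait for the global operation to finish
```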
SERVERLESS VPC :- Creating a Serverless VPC Access connector so the Cloud Run service can connect to the MySQL instance on its private IP.
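A rough sketch of creating that connector with the google-cloud-vpc-access client; the region, network name and CIDR range are assumptions:

```python
from google.cloud import vpcaccess_v1

client = vpcaccess_v1.VpcAccessServiceClient()
connector = vpcaccess_v1.Connector(
    network="prod-vpc",           # VPC the connector attaches to (assumed name)
    ip_cidr_range="10.8.0.0/28",  # /28 range reserved for the connector
    min_instances=2,
    max_instances=3,
)
operation = client.create_connector(
    parent="projects/my-project/locations/asia-south1",
    connector_id="run-to-mysql",
    connector=connector,
)
operation.result()  # Cloud Run services can then reference this connector
```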
SERVICE ACCOUNT :- Creating service accounts for the different services used in the project.
IAM :- Assigning permissions to users and service accounts based on use case, and managing time-bound access for service accounts and users across multiple services.
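A sketch of one time-bound grant via an IAM condition, using the google-cloud-resource-manager client; the project, member, role and expiry timestamp are assumptions:

```python
from google.cloud import resourcemanager_v3
from google.iam.v1 import policy_pb2
from google.type import expr_pb2

client = resourcemanager_v3.ProjectsClient()
policy = client.get_iam_policy(
    request={
        "resource": "projects/my-project",
        "options": {"requested_policy_version": 3},  # conditions need v3
    }
)
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/bigquery.dataViewer",
        members=["serviceAccount:etl@my-project.iam.gserviceaccount.com"],
        condition=expr_pb2.Expr(
            title="temporary-access",
            expression='request.time < timestamp("2022-01-01T00:00:00Z")',
        ),
    )
)
policy.version = 3
client.set_iam_policy(
    request={"resource": "projects/my-project", "policy": policy}
)
```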
VM INSTANCES :- Creating VM instances for workloads, setting up instances along with disk management based on service requirements, analysing VM resource requirements based on the workload, and using different disk types to get the best results.
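A minimal sketch of creating such a VM with an SSD persistent disk sized for the workload; the project, zone, image and machine/disk sizes are assumptions:

```python
from google.cloud import compute_v1

project, zone = "my-project", "asia-south1-a"

boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-11",
        disk_size_gb=100,
        disk_type=f"zones/{zone}/diskTypes/pd-ssd",  # pd-balanced/pd-standard for lighter workloads
    ),
)
instance = compute_v1.Instance(
    name="adserver-1",
    machine_type=f"zones/{zone}/machineTypes/e2-standard-4",
    disks=[boot_disk],
    network_interfaces=[
        compute_v1.NetworkInterface(
            network=f"projects/{project}/global/networks/prod-vpc",
        )
    ],
)
op = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
op.result()
```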
CLOUD BUCKET :- Setting up buckets based on business requirements, and using different types of buckets (storage classes) to store data based on data criticality.
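A minimal sketch of pairing storage class with data criticality using the google-cloud-storage client; the project, bucket names and location are assumptions:

```python
from google.cloud import storage

client = storage.Client(project="my-project")

hot = client.bucket("adtech-serving-data")
hot.storage_class = "STANDARD"       # frequently accessed, critical data
client.create_bucket(hot, location="asia-south1")

cold = client.bucket("adtech-archive-logs")
cold.storage_class = "COLDLINE"      # rarely accessed, low-criticality data
client.create_bucket(cold, location="asia-south1")
```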
PUB/SUB :- Topic creation, subscription creation, and managing the Spark streaming job that consumes messages from the topics.
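A minimal sketch of the topic/subscription pair such a streaming job reads from, using the google-cloud-pubsub client; the project and names are assumptions:

```python
from google.cloud import pubsub_v1

project = "my-project"
publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project, "ad-events")
sub_path = subscriber.subscription_path(project, "ad-events-spark")

publisher.create_topic(request={"name": topic_path})
subscriber.create_subscription(
    request={
        "name": sub_path,
        "topic": topic_path,
        "ack_deadline_seconds": 60,  # give the streaming job time to process batches
    }
)
```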
LOGGING AND MONITORING :- Creating monitoring dashboards in GCP for all the services: health checks, resource utilisation, network and latency checks, and VM performance.
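A sketch of one health check behind such a dashboard, as an uptime check created with the google-cloud-monitoring client; the host, path and intervals are assumptions:

```python
from google.cloud import monitoring_v3

client = monitoring_v3.UptimeCheckServiceClient()
config = monitoring_v3.UptimeCheckConfig(
    display_name="adserver-healthcheck",
    monitored_resource={
        "type": "uptime_url",
        "labels": {"project_id": "my-project", "host": "ads.example.com"},
    },
    http_check=monitoring_v3.UptimeCheckConfig.HttpCheck(
        path="/healthz", port=443, use_ssl=True
    ),
    period={"seconds": 60},   # probe every minute
    timeout={"seconds": 10},
)
client.create_uptime_check_config(
    request={"parent": "projects/my-project", "uptime_check_config": config}
)
```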
GOOGLE CLOUD PLATFORM DATABASE MANAGEMENT :-
MYSQL :- Creating a migration job for the MySQL database migration from Linode to GCP, configuring the core properties, adding properties and users, and setting up connectivity so Dataproc, VMs and other services can connect to the MySQL instance.
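A minimal post-migration connectivity check with the mysql-connector-python client; the private IP, user and database names are assumptions:

```python
import os
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="10.10.0.5",                      # hypothetical private IP of the migrated instance
    user="app_user",
    password=os.environ["MYSQL_PASSWORD"],
    database="adserving",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())                      # e.g. ('8.0.26',)
conn.close()
```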
AEROSPIKE CLUSTER :- Setting up an Aerospike cluster from scratch on VMs as the cache-service solution required by the business, configuring Aerospike properties (e.g. aerospike.conf), following best practices (prerequisites) when setting up the cluster, adding and removing nodes at runtime, monitoring the cluster and its resource availability, tuning the cluster for optimum caching results, implementing different parameters to enhance and increase cluster performance, and moving ahead with replacing SQL with Aerospike for database management and the cache solution.
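A minimal sketch of how an application uses that cluster as a cache, via the official aerospike Python client; the seed node IPs, namespace and keys are assumptions:

```python
import aerospike  # pip install aerospike

# Hypothetical seed nodes; the real addresses come from aerospike.conf.
config = {"hosts": [("10.10.0.11", 3000), ("10.10.0.12", 3000)]}
client = aerospike.client(config).connect()

key = ("cache", "users", "user:42")  # (namespace, set, primary key)
client.put(key, {"plan": "pro", "hits": 1}, meta={"ttl": 300})  # 5-minute TTL
_, _, record = client.get(key)
print(record)  # {'plan': 'pro', 'hits': 1}
client.close()
```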
GOOGLE CLOUD PLATFORM HADOOP/BIGDATA :-
Building an Apache Hadoop cluster with HA, Hive, Spark and Sqoop on Linode.
DATAPROC CLUSTER :- Building Dataproc clusters from scratch for all environments, tuning the configuration according to requirements, setting up the Capacity Scheduler and tuning core properties like core-site.xml, hdfs-site.xml, yarn-site.xml, hive-site.xml, spark-defaults.conf and capacity-scheduler.xml, configuring Hadoop storage on Cloud Storage, setting up Spark streaming jobs from Pub/Sub to Dataproc, and setting up the autoscaling policy for the cluster.
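A sketch of creating such an HA cluster with the google-cloud-dataproc client; the property prefix selects the target file (capacity-scheduler.xml, spark-defaults.conf, hdfs-site.xml, ...). Project, region, machine sizes and the queue layout are assumptions:

```python
from google.cloud import dataproc_v1

region = "asia-south1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)
cluster = {
    "cluster_name": "etl-prod",
    "config": {
        "master_config": {"num_instances": 3, "machine_type_uri": "n2-standard-8"},  # 3 masters = HA
        "worker_config": {"num_instances": 4, "machine_type_uri": "n2-standard-8"},
        "software_config": {
            "properties": {
                "capacity-scheduler:yarn.scheduler.capacity.root.queues": "default,streaming",
                "spark:spark.dynamicAllocation.enabled": "true",
                "hdfs:dfs.replication": "2",
            }
        },
    },
}
op = client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
)
op.result()
```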
DELTA LAKE :- Setting up the metastore and integrating it with the Dataproc cluster for data transformation; storing data into Hive tables for reads and writes, including streaming reads and writes; updating the tables in real time with Spark streaming jobs; and transforming data through table tiers, i.e. bronze, silver and gold tables (under implementation).
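A minimal PySpark sketch of one bronze-to-silver streaming hop on Delta tables; the bucket paths, column names and dedup key are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("bronze-to-silver")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

bronze = spark.readStream.format("delta").load("gs://lake/bronze/events")
silver = (
    bronze.dropDuplicates(["event_id"])              # basic cleansing for the silver tier
    .withColumn("event_date", F.to_date("event_ts"))
)
(
    silver.writeStream.format("delta")
    .option("checkpointLocation", "gs://lake/_checkpoints/silver_events")
    .outputMode("append")
    .start("gs://lake/silver/events")
)
```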
GOOGLE CLOUD PLATFORM DEVOPS :-
KUBERNETES :- Building Kubernetes clusters from scratch for the business application, creating YAML files for GKE Deployments, Services and Ingress, managing GKE cluster resources, creating Services and Ingress for pods, defining backend host and path rules for servlet calls to the applications, applying health checks and configuring them with liveness and readiness probes, and configuring autoscaling policies for Kubernetes nodes and pods according to the workload.
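For illustration, the same Deployment-with-probes the YAML files describe, expressed with the kubernetes Python client; the app name, image and probe paths are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl context already points at the GKE cluster

container = client.V1Container(
    name="adserver",
    image="asia-south1-docker.pkg.dev/my-project/apps/adserver:1.0",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10, period_seconds=15,
    ),
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/ready", port=8080),
        initial_delay_seconds=5, period_seconds=10,
    ),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="adserver"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "adserver"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "adserver"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```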
CLOUD BUILD :- Creating CI/CD pipelines for continuous deployment of applications using Cloud Source Repositories, integrating Bitbucket with Cloud Source Repositories, and building triggers in Cloud Build for continuous deployment to GKE and Cloud Run services.
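A sketch of one such push trigger on a Bitbucket repo mirrored into Cloud Source Repositories, using the google-cloud-build client; the project and repo names are assumptions:

```python
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()
trigger = cloudbuild_v1.BuildTrigger(
    name="deploy-adserver-main",
    description="Build and deploy on every push to main",
    trigger_template=cloudbuild_v1.RepoSource(
        repo_name="bitbucket_myorg_adserver",  # mirrored repo name (assumed)
        branch_name="main",
    ),
    filename="cloudbuild.yaml",  # build steps live in the repo
)
client.create_build_trigger(project_id="my-project", trigger=trigger)
```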
ARTIFACT REGISTRY :- Setting up Artifact Registry for the different projects and applications to store images, using Bitbucket and a Cloud Build YAML to build and push images to Artifact Registry.
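A rough sketch of creating such a Docker repository with the google-cloud-artifact-registry client; the project, location and repository ID are assumptions:

```python
from google.cloud import artifactregistry_v1

client = artifactregistry_v1.ArtifactRegistryClient()
repo = artifactregistry_v1.Repository(
    format_=artifactregistry_v1.Repository.Format.DOCKER,
    description="Container images pushed by Cloud Build",
)
op = client.create_repository(
    parent="projects/my-project/locations/asia-south1",
    repository_id="apps",
    repository=repo,
)
op.result()  # long-running operation
```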

Hadoop Administrator at Techshed Technologies Pvt Ltd
  • India - Pune
  • April 2019 to October 2021

Hands-on experience in installing, configuring and adding alerts in Cloudera.
Experienced in setting up Cloudera Hadoop prerequisites on Linux servers.
Experienced in Hadoop cluster planning and design.
Upgraded Cloudera Manager and CDH for the production cluster.
Configured and monitored jobs in production to track job progress and troubleshoot issues.
Created data pipelines from S3 to HDFS and vice versa (see the sketch after this list).
Involved in the end-to-end process of production Hadoop cluster setup: installation, configuration and monitoring of the Hadoop cluster.
Worked on automation tools like Ansible.
Added and removed nodes in an existing Hadoop cluster; responsible for cluster maintenance, and commissioning and decommissioning data nodes.
Monitored and troubleshot the cluster, managed and reviewed data backups, managed and reviewed Hadoop log files, and maintained High Availability (HA).
Configured various property files like core-site.xml, hdfs-site.xml and mapred-site.xml based on the job requirements.
Involved in analysing system failures and identifying root causes, and recommended courses of action; documented system processes and procedures for future reference.
Managed and reviewed Hadoop log files as part of administration for troubleshooting purposes; communicated and escalated issues appropriately.
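A minimal sketch of the S3-to-HDFS copy mentioned above, driving hadoop distcp from Python; the bucket, paths and credential environment variables are assumptions:

```python
import os
import subprocess

src = "s3a://events-bucket/2021/06/01/"   # hypothetical source prefix
dst = "hdfs:///data/events/2021/06/01/"   # hypothetical HDFS target

subprocess.run(
    [
        "hadoop", "distcp",
        f"-Dfs.s3a.access.key={os.environ['AWS_ACCESS_KEY_ID']}",
        f"-Dfs.s3a.secret.key={os.environ['AWS_SECRET_ACCESS_KEY']}",
        src, dst,
    ],
    check=True,  # raise if the copy fails so the pipeline can alert
)
```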

Education

Bachelor's degree, Chemical Engineering
  • at Savitribai Phule Pune University
  • July 2017

Specialties & Skills

Languages

English
Intermediate
Hindi
Native Speaker