Key Responsibilities:
- Design, develop, and optimize big data pipelines and ETL workflows using PySpark and Hadoop (HDFS, MapReduce, Hive, HBase).
- Develop and maintain data ingestion, transformation, and integration processes on Google Cloud Platform servi...