Big Data Hadoop Developer - Seattle, WA - Job Requirement

Candidates should be proficient in Big Data technologies, PySpark, Python, and general coding.

Project Description:

Primary Skill: Big Data

Mandatory Skills: Python, some Scala, Airflow, Spark, Hive, Parquet, running on EMR.

Job Description

Develop and maintain custom, complex ETL pipelines written in Python and some Scala, using Airflow, Spark, Hive, and Parquet, running on EMR.

Migrate legacy data pipelines (SQL Server, SSIS, some Redshift) to newer technologies...
