Hadoop Developer || Seattle, WA || Long Term Job requirement

Title: Hadoop Developer

Location: Seattle, WA

Duration: Long Term

MUST have

· 4+ years of industry experience in Big Data analytics, data manipulation, modeling, and tuning using Hadoop ecosystem tools and the Spark framework.

· Extensive experience with ecosystem components like Hadoop, Spark, Hive, MapReduce, YARN/MRv2, AWS, Apache NiFi, and Solr

· Extensive experience scheduling Hadoop and Spark jobs using Control-M and Oozie

· Strong experience and knowledge of real-time data analytics using streaming technologies like Kafka, Spark Streaming, Storm, and Flume, and coordination services like ZooKeeper.

· Experience with file formats like JSON, multi-line JSON, ORC, Avro, Parquet, and CSV

· Experience with NoSQL databases like Cassandra, HBase, Neo4j, and MongoDB

· Experience with visualization and business intelligence tools like Tableau and Kibana.

· Hands-on experience developing Spark applications using RDDs, DataFrames, Datasets, Spark SQL, transformations, and actions, with IDEs like Eclipse and IntelliJ IDEA.

· Extensive experience with Spark using Scala on a Hadoop cluster for analytics, including installing Spark on top of Hadoop and building advanced analytical applications that combine Spark with Hive and SQL/Oracle.

· Expertise in writing Unix shell scripts

· Strong programming skills in Python and Java.

· Experience creating Hive tables (internal, external, partitioned, bucketed) to store structured data ...
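To illustrate the Spark-with-Scala, Spark SQL, and Hive-table skills listed above, here is a minimal sketch. It assumes a Spark build with Hive support on the classpath, and every application, table, and column name in it is hypothetical, chosen for illustration only:

```scala
import org.apache.spark.sql.SparkSession

object SketchApp {
  def main(args: Array[String]): Unit = {
    // Hypothetical local session; enableHiveSupport lets Spark SQL manage Hive tables.
    val spark = SparkSession.builder()
      .appName("sketch")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // RDD -> DataFrame, then a transformation followed by an action.
    val df = spark.sparkContext
      .parallelize(Seq(("a", 1), ("b", 2)))
      .toDF("key", "value")
    val doubled = df.selectExpr("key", "value * 2 AS value") // transformation (lazy)
    doubled.show()                                           // action (triggers execution)

    // Hive-style partitioned, bucketed table via Spark SQL (hypothetical schema).
    spark.sql("""
      CREATE TABLE IF NOT EXISTS events (id BIGINT, payload STRING)
      PARTITIONED BY (dt STRING)
      CLUSTERED BY (id) INTO 8 BUCKETS
      STORED AS ORC
    """)

    spark.stop()
  }
}
```

In practice a job like this would be packaged with sbt and submitted via `spark-submit`, then scheduled through Control-M or Oozie as the requirements above describe.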
