Must-Have Skills:
- Spark
- Python
Detailed Job Description:
- Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem.
- Ability to design and implement end-to-end solutions.
- Experience publishing RESTful APIs to enable real-time data consumption, using the OpenAPI Specification.
- Experience with open-source NoSQL technologies such as HBase, Dyna...