IT - Technology Lead | Big Data - Data Processing | Spark Job requirement

Responsibilities:

  • Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem; ability to design and implement end-to-end solutions.
  • Experience publishing RESTful APIs to enable real-time data consumption using OpenAPI specifications
  • Experience with open source NoSQL technologies such a...