• BS or MS in Computer Science/Software Engineering or equivalent work experience.
  • Experience with data modelling, data streaming, data transformation, modern data stores, and building data pipelines (ETL).
  • Experience with data management and data engineering in production systems.
  • You have a strong customer orientation and understand the end-user's perspective on the system.
  • Understand project scoping documents and client datasets, and drive data discovery sessions with clients (including Q&A and follow-ups).
  • Carry out exploratory data analysis (EDA).
  • You have strong software engineering and programming skills in Python, pandas, and ML libraries.
  • Advanced proficiency in SQL: window functions, aggregate functions, joins, etc.
  • You have experience with Kubernetes at scale, Docker, and big data frameworks.
  • You have a proven understanding of distributed computing architectures.
  • You have excellent communication skills, with the ability to clearly explain technical concepts to a non-technical audience.
  • Experience with the Kubernetes ecosystem (e.g., Helm, Argo Workflows) and CI/CD.
  • Experience with machine learning solutions and their productization.
  • You enjoy solving puzzles and troubleshooting issues.
  • You enjoy multi-tasking and delivering significant positive impact to the business through your work.
  • Nice to have:
    • Data manipulation in Python: wrangling large datasets with Spark and pandas DataFrames.
    • Supply chain domain knowledge: supply chain management, especially demand planning, CPG, manufacturing, etc.
    • Knowledge of how drivers influence demand, e.g., pricing, promotions, initiatives, and external factors such as weather patterns.
    • Understanding of the ML/modelling process: feature generation, training, hyperparameter tuning, and prediction (scoring).