- 2+ years of experience in building data pipelines.
- Experience building data pipelines using StreamSets or Azure Data Factory.
- Understanding of stream processing, with knowledge of Kafka.
- Experience with scripting languages, e.g. Python, Perl, etc.
- Experience with relational databases (e.g. PostgreSQL) and NoSQL databases (e.g. MongoDB).
- Understanding of data flows, data architecture, ETL, and the processing of structured and unstructured data.
- Current experience developing and deploying applications to a public cloud (AWS, GCE).
- Experience with DevOps tools (GitHub, Jira) and methodologies (Lean, Agile, Scrum).
- Experience with ETL, data modeling, and large-scale datasets; highly proficient in writing performant SQL against large data volumes.
- Experience with Azure DevOps is a plus.
- Ability to manage competing priorities simultaneously and drive projects to completion.
- Bachelor's degree or higher in a quantitative/technical field (e.g. Computer Science, Engineering).
- Excellent written and verbal communication skills in English.
- Experience working in an agile (Scrum) methodology.
We are looking for a savvy Data Engineer to join our growing team. We work with Fortune 500 companies to build their data infrastructure and support them on their data journey. The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow. The Data Engineer will help build data pipelines and subsequently support DataOps. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.