Modak is a fast-growing boutique data engineering company that enables enterprises to manage and utilize their data landscape effectively. Modak uses machine learning (ML) techniques to transform how analytics content is prepared, consumed, and shared. Modak Nabu™ is an enterprise product that was covered in Gartner’s “Market Guide to Data Preparation 2020”. Modak Nabu™ was also recognized with the “Best in the Show” award at the Bio-IT World Conference & Expo 2022.
We suggest you go through our About Us page.
We are looking for a Data Engineer to join our dynamic team. The selected candidate will work on extensive data modeling, large-scale data migration, and data lakes and data fabrics, and will be a critical part of the team that builds and automates data migration. We focus heavily on the cloud (MS Azure, AWS, GCP, etc.) and on scalable solutions. The Data Engineer will be responsible for expanding and optimizing our data pipeline architecture, as well as optimizing the flow of data. We are looking for someone who enjoys coding, has a genuine interest in engaging with customer projects, and is also interested in solving challenging problems, ranging from back-end data processing and machine learning to front-end visualization and dashboarding.
- BS or MS degree in a Computing/Data/Data Science related field from a top university in the US, and up to 1 year of work experience
- Understanding of stream processing, with knowledge of Kafka
- Knowledge of traditional programming and software development methodologies
- Knowledge of or experience with development languages, e.g., Python, Perl, Java, MS .NET, etc.
- Knowledge of or experience in building data pipelines on MS Azure, AWS, or GCP
- Knowledge of or experience with relational databases (SQL, e.g., PostgreSQL) and NoSQL databases (e.g., MongoDB)
- Understanding of Data Flows, Data Architecture, ETL and processing of structured/unstructured data
- Demonstrated ability to work in a fast-paced and changing environment with short deadlines, interruptions, and multiple tasks/projects occurring at once
- Knowledge of or experience with cloud (MS Azure / GCP / AWS) infrastructure deployments, including application resource management in GCP environment(s)
- Experience with cloud providers' data services
- Knowledge of data pipeline applications (Kafka/Spark/Cassandra)
- Strong data science and/or software engineering knowledge or experience
- Excellent interpersonal and documentation skills