Job Description
Are you a seasoned data engineer with a passion for hands-on technical work? Do you thrive in an environment that values innovation, collaboration, and cutting-edge technologies? We are looking for an experienced Integration Engineer to join our team, someone eager to build and maintain scalable data pipelines and integrations. The ideal candidate will have a strong foundation in Python programming, experience with Snowflake for data warehousing, proficiency in AWS and Kubernetes (EKS) for cloud services management, and expertise in CI/CD practices, Apache Airflow, DBT, and API development. This role is critical to enhancing our data integration capabilities and supporting our data-driven initiatives.
Role and Responsibilities:
As the Technical Data Integration Engineer, you will play a pivotal role in shaping the future of our data integration engineering initiatives. You will be part of a team of talented data integration engineers while remaining actively involved in the technical aspects of the projects. Your responsibilities will include:
Hands-On Contribution: Stay hands-on with data integration engineering tasks, including data pipeline development, extract-load (EL) processes, and data integration. Be the go-to expert for complex technical challenges (a brief illustrative sketch follows this list).
Integration Architecture: Design and implement scalable, efficient data integration architectures that meet business requirements. Ensure data integrity, quality, scalability, and security throughout the pipeline.
Tool Proficiency: Leverage your expertise in Snowflake, SQL, Apache Airflow, AWS, APIs, and Python to architect, develop, and optimize data solutions. Stay current with emerging technologies and industry best practices.
Data Quality: Monitor data quality and integrity, implementing data governance policies as needed.
Cross-Functional Collaboration: Collaborate with data science, data warehousing, analytics, and other cross-functional teams to understand data requirements and deliver actionable insights.
Performance Optimization: Identify and address performance bottlenecks within the data infrastructure. Optimize data pipelines for speed, reliability, and efficiency.
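To give candidates a concrete picture of the day-to-day work, here is a minimal, illustrative Airflow sketch of the kind of EL flow into Snowflake described above. Every name in it (the orders_pipeline DAG, extract_orders, load_to_snowflake) is hypothetical and chosen purely for illustration; it is not taken from any real codebase.

```python
# A minimal EL pipeline sketch using Apache Airflow's standard DAG API.
# All DAG, task, and function names here are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Hypothetical extract step: pull records from a source system or API.
    ...


def load_to_snowflake(**context):
    # Hypothetical load step: write the extracted records into Snowflake.
    # In an EL(T) setup like the one described above, downstream
    # transformations would be handled by DBT rather than in this task.
    ...


with DAG(
    dag_id="orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)

    # Declare the dependency: the load task runs only after extract succeeds.
    extract >> load
```

In practice, pipelines like this are versioned, tested, and deployed through the CI/CD practices mentioned above, with Airflow itself typically running on AWS (for example, on EKS).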
#LI-Hybrid