KQ-001 Senior Bigdata Operations Engineer (Remote/Flexible)

11 Feb | Aitopics | Xico


Insulet started in 2000 with an idea and a mission: to enable our customers to enjoy simplicity, freedom, and healthier lives through the use of our Omnipod product platform.
In the last two decades we have improved the lives of hundreds of thousands of patients by using innovative technology that is wearable, waterproof, and lifestyle accommodating.
We are looking for highly motivated, performance-driven individuals to be a part of our expanding team.
Our continued success depends on hiring amazing people guided by shared values who exceed customer expectations!
Position Overview

Insulet Corporation, maker of the OmniPod, is the leader in tubeless insulin pumps.

We are seeking a highly skilled Senior Data Operations Engineer to join our team. This role involves designing and implementing robust data architectures on the Databricks, AWS, and Azure platforms. The ideal candidate will have extensive experience with big data technologies, cloud platforms, and data engineering. Initially, you will proactively identify and resolve data pipeline issues to ensure high-quality, on-time data for our clients. You will also optimize pipeline performance and develop an understanding of how our clients use their data. We are a fast-growing company that offers an energetic work environment and tremendous career growth opportunities.
Key Responsibilities

- Monitor and manage data pipelines to ensure the timely and accurate flow of data.
- Resolve data quality alerts to ensure the accuracy, completeness, and consistency of data.
- Monitor and optimize data pipeline performance to meet and exceed established service level agreements (SLAs).
- Collaborate with data engineering, data steward, and data owner teams to improve data quality processes and data pipeline performance.
- Collaborate with IT Ops and SRE teams to build a robust monitoring pipeline.
- Implement proactive measures to identify and resolve data issues automatically.
- Maintain comprehensive documentation of data operations processes, monitoring procedures, and issue-resolution protocols.
- Keep documentation and operating procedures up to date to facilitate knowledge transfer and training.
- Understand CI/CD technologies, and use Terraform to automate infrastructure and pipelines.
- Understand Databricks and Databricks Asset Bundles, and use them to automate well-architected data operations.

Education & Experience

- Bachelor's degree in Mathematics, Computer Science, Electrical/Computer Engineering, or a closely related STEM field is required. (Four years of relevant work experience may be considered in lieu of the educational requirement.)
- 5+ years of hands-on data operations experience, including data pipeline monitoring, support, and engineering.
- Experience in data quality assurance, control, and lineage for large datasets in relational/non-relational databases is strongly preferred.
- Experience managing robust ETL/ELT pipelines for large real-world datasets that may include messy data, unpredictable schema changes, and/or incorrect data types is strongly preferred.
- Proficiency in Databricks, AWS, and/or Azure, or similar cloud platforms.
- Experience monitoring and supporting data pipelines that are fast, scalable, reliable, and accurate.
- Ability to understand clients' data requirements and meet their data operations needs.
- Strong analytical and problem-solving skills, with keen attention to detail.
- Effective communication and collaboration abilities.
- Experience with big data technologies such as Apache Spark, Hadoop, and Kafka.
- Expertise in data modeling, ETL processes, and data warehousing.
- Experience with both batch and streaming data processing.
- Experience implementing and maintaining Business Intelligence tools linked to an external data warehouse or relational/non-relational databases is required.
- Excellent design and development experience with SQL and NoSQL databases, and with OLTP and OLAP databases.
- Knowledge of non-relational databases (MongoDB) is a plus.
- Demonstrated knowledge of managing large datasets in the cloud (Azure, SQL, AWS, etc.) is required.
- Knowledge of ETL and workflow tools (Databricks, Azure Data Factory, AWS Glue, etc.) is a plus.
- Demonstrated knowledge of building, maintaining, and scaling cloud architectures (Azure, AWS, etc.), specifically cloud data tools that leverage Spark, is preferred.
- Experienced coding abilities in Python and SQL.
- Demonstrated familiarity with varied input data formats (e.g., CSV, XML, JSON).
- Demonstrated knowledge of database and dataset validation best practices.
- Demonstrated knowledge of software engineering principles and practices.
- Ability to communicate effectively and to document objectives and procedures.

NOTE: This position is eligible for 100% remote working arrangements (you may work from home/virtually 100%, or work hybrid on-site/virtual as desired).

The original posting can be found on Kit Empleo:
https://www.kitempleo.com.mx/empleo/139724521/kq-001-senior-bigdata-operations-engineer-remote-flexible-xico/?utm_source=html
