Epam Systems
Nuevo Casas Grandes
We are seeking a **Lead Data Engineer** to join our team. This position combines engineering and analytical duties.
You will develop and maintain data infrastructure while analyzing cost data, identifying trends, and collaborating with teams to optimize cloud expenses.
This role focuses primarily on AWS and demands a solid understanding of AWS services and cost structures.
You will use tools such as Python, Databricks, Snowflake, Airflow, and Looker to deliver impactful insights and solutions that drive cost optimization and support decision-making.
**Responsibilities**
- Design, build, and maintain scalable ETL pipelines to process and transform large volumes of cloud cost and usage data
- Integrate data from various sources, including AWS, into centralized data repositories such as Snowflake
- Create and maintain data models to support cost analysis and reporting requirements
- Optimize query performance and storage efficiency for large datasets
- Automate repetitive data processing tasks and set up robust monitoring for data pipelines
- Ensure data accuracy and reliability through validation techniques
- Analyze cloud cost data to identify trends, anomalies, and optimization opportunities
- Work closely with teams to review spending changes and address cost anomalies
- Collaborate with stakeholders to understand cost drivers and provide actionable insights
- Assist teams in building dashboards and visualizations to track key cost indicators
- Prepare reports and presentations to communicate findings and recommendations to leadership
- Partner with teams to develop strategies for cost reduction and operational improvement

**Requirements**
- Bachelor's degree in Computer Science, Data Engineering, Data Analytics, or a related field
- 5+ years of experience in data engineering, data analysis, or a combined role
- 1+ years of relevant leadership experience
- Expertise in AWS services (e.g., EC2, S3, RDS, Lambda) and their cost frameworks
- Proficiency in SQL and experience with relational databases such as Snowflake or Redshift
- Familiarity with Databricks and Spark for large-scale data processing
- Hands-on experience with ETL tools and frameworks (e.g., …)