Jan 18
Agileengine
Ciudad de México
What you will do
1. Develop efficient, clean, and maintainable Python code for machine learning pipelines, leveraging our in-house libraries and tools;
2. Collaborate with the team on code reviews to ensure high code quality and adherence to the best practices established in our shared codebase;
3. Contribute to building and maintaining our MLOps infrastructure from the ground up, with a focus on extensibility and reproducibility;
4. Take ownership of projects by gathering requirements, creating technical design documentation, breaking down tasks, estimating efforts, and executing with key performance indicators (KPIs) in mind;
5. Optimize machine learning models for performance and scalability;
6. Integrate machine learning models into production systems using frameworks like SageMaker;
7. Stay up-to-date with the latest advancements in machine learning and MLOps;
8. Assist in improving our data management, model tracking, and experimentation solutions;
9. Contribute to enhancing our code quality, repository structure, and model versioning;
10. Help identify and implement best practices for ML service deployment and monitoring;
11. Collaborate on establishing CI/CD pipelines and promoting deployments across environments;
12. Address technical debt items and refactor code as needed.
Must haves
1. 3+ years of experience in machine learning engineering or a related role;
2. Strong proficiency in Python programming;
3. Experience with machine learning frameworks such as PyTorch, TensorFlow, or scikit-learn;
4. Familiarity with cloud platforms like AWS, including services like SageMaker, S3, and Secrets Manager;
5. Experience with data processing, cleaning, and feature engineering for structured and unstructured data;
6. Knowledge of software development best practices, including version control (Git), testing, and documentation;
7. Excellent problem-solving and debugging skills;
8. Strong communication and collaboration abilities;
9. Ability to work independently and take ownership of projects;
10. Upper-intermediate English level.
Nice to haves
1. Experience with Infrastructure as Code (IaC) tools, preferably Pulumi or Terraform;
2. Experience with classification models and libraries such as XGBoost, SentenceTransformers, or LLMs;
3. Knowledge of data versioning, experiment tracking, and model registry concepts;
4. Familiarity with data pipeline and ETL tools like Dagster, Snowflake, and DBT;
5. Experience with monitoring logs, metrics, and performance testing for batch inference workloads;
6. Contributions to open-source machine learning projects;
7. Experience with deploying and monitoring machine learning models in production.