Turing is offering employment opportunities for the position of Kafka Engineer in the Latin America region. The position is full-time.
We are looking for candidates with skills in Engineering and Information Technology, as well as honest, disciplined, and responsible people.
The estimated salary offered by our company is quite competitive, around L 9,300 - L 23,000 per month. However, the final salary may vary depending on the company's decision.
Turing specializes in the software development sector, so if you are interested in applying, you can do so directly.
Job Information
Job Description
A US-based company that is home to the world’s largest selection of guitars and musical equipment is looking for a Kafka Engineer. The engineer will play a critical role in modernizing the platform’s current infrastructure while building modern, robust, and scalable features. The company operates the world’s largest multichannel musical instrument retail services and is on a mission to develop and nurture lifelong musicians and make a difference in the musical world. They have raised more than $30 million in funding so far.
Job Responsibilities:
- Help facilitate the implementation of Confluent Kafka streaming and enhance the middleware administration
- Responsible for setting up Kafka brokers, Kafka Mirror Makers, and Kafka Zookeeper on hosts in collaboration with the Infrastructure team
- Design, build, and maintain Kafka topics
- Contribute to the tuning and architecture with a strong understanding of related Kafka Connect and Linux fundamentals
- Carefully observe Kafka health metrics and alerts, taking action in a timely manner
- Implement a real-time and batch data input pipeline employing best practices in data modeling and ETL/ELT operations
- Participate in technological decisions and work with smart colleagues
- Review code, implementations, and provide useful input to assist others in developing better solutions
- Develop documentation on design, architecture, and solutions
- Provide assistance and coaching to peers and more junior engineers
- Build good working relationships at all levels of the organization and across functional teams
- Assume accountability for the project’s timetables and deliverables
- Create dataflows and pipelines ranging from simple to complex
- Support the investigation and resolution of production issues
- Work to keep the system and data security at a high level, ensuring that the application’s confidentiality, integrity, and availability are not jeopardized
- Translate stakeholders’ needs into familiar language that can be adopted for use with Behavior-Driven Development (BDD) or Test-Driven Development (TDD)
- Build solutions that are stable, scalable, and easy to use while fitting into the broader data architecture
- Assist in the formation of Communities of Practice
- Utilize industry-standard approaches to continuously improve the performance of source code
- Steer the technology direction and options by proffering suggestions based on experience and research
- Encourage the creation of group norms and procedures
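The broker-setup responsibilities above (Kafka brokers, Zookeeper, topics) largely come down to configuration. The fragment below is a minimal, illustrative `server.properties` sketch for a single broker; all host names and values are assumptions for illustration, not settings from this posting:

```properties
# Minimal Kafka broker configuration (illustrative values only)
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
zookeeper.connect=zk-host:2181
log.dirs=/var/lib/kafka/data
num.partitions=3
default.replication.factor=3
min.insync.replicas=2
log.retention.hours=168
```

A topic would then typically be created against the running broker with the stock CLI, e.g. `kafka-topics.sh --create --topic example-topic --partitions 6 --replication-factor 3 --bootstrap-server localhost:9092` (topic name and host hypothetical).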
Job Requirements:
- Bachelor’s/Master’s degree in Engineering, Computer Science (or equivalent experience)
- 7+ years of direct experience with data pipelines and application integrations
- Experience in the design and development of clusters and producers/consumers
- Proficiency in enabling cloud/hybrid-cloud data streaming through Confluent Kafka, SQS/SNS queuing, etc.
- Strong container expertise, especially Docker
- Strong skills working with technologies such as Ansible, Puppet, Terraform, OpenShift, Kubernetes, AWS, AWS Lambda, and event streaming
- Working experience in a public cloud environment as well as on-premise infrastructure
- DataDog, Splunk, KSQL, Spark, and PySpark experience is a plus
- Excellent knowledge of distributed architectures, including Microservices, SOA, RESTful APIs, and data integration architectures
- Familiarity with any of the following message/file formats: Parquet, Avro, ORC
- Excellent understanding of AWS Cloud Data Lake technologies, including Kinesis/Kafka, S3, Glue, and Athena
- Knowledge of RabbitMQ and TIBCO messaging technologies is advantageous
- Previous expertise in designing and implementing data models for applications, operations, or analytics
- A track record of working with information repositories, data modeling, and business analytics tools is a plus
- Familiarity with databases, data lakes, and schemas, with advanced expertise and experience in online transactional (OLTP) and analytical (OLAP) processing
- Experience with Streaming Service, EMS, MQ, Java, XSD, File Adapter, and ESB-based application design and development
- Capable of working in a fast-paced team to keep the data and reporting pipeline running smoothly
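Discussions of the producer/consumer and cluster-design requirements above usually hinge on two ideas: key-based partitioning (which preserves per-key ordering) and per-partition consumer offsets. The toy sketch below illustrates both in plain Python, with no broker involved; it is an in-memory analogy with hypothetical names, not a real Kafka client such as confluent-kafka:

```python
from collections import defaultdict

class ToyTopic:
    """Messages with the same key always land in the same partition,
    which is what preserves per-key ordering in Kafka."""
    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Kafka's default partitioner hashes the key; we mimic that idea.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p  # partition the record was written to

class ToyConsumer:
    """Tracks a committed offset per partition, like a consumer group."""
    def __init__(self, topic):
        self.topic = topic
        self.offsets = defaultdict(int)

    def poll(self, partition, max_records=10):
        start = self.offsets[partition]
        records = self.topic.partitions[partition][start:start + max_records]
        self.offsets[partition] += len(records)  # "commit" after reading
        return records

topic = ToyTopic(num_partitions=3)
p = topic.produce("user-42", "login")
topic.produce("user-42", "logout")  # same key -> same partition

consumer = ToyConsumer(topic)
print(consumer.poll(p))  # both records for user-42, in produce order
print(consumer.poll(p))  # []  (offset is already past the end)
```

Because ordering is only guaranteed within a partition, choosing the partition key (here, a user ID) is the central design decision in the real system as well.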
Benefits
- Overtime bonus
- Gain experience
- Comfortable work environment
Job Application Information
The information provided above may be updated at any time; please check the complete details via the "Apply Now" button or on Turing's official company website to avoid any unwanted surprises.
Admin tip: applying for a job is free.
We hope you get the job you want.
Instructions
- Click the "Apply Now" button above.
- You will then be directed to the Application Submission page, which includes tips for submitting applications and interviewing.
- On the application submission page, click the "Application Form" button.
- On that page you can see more complete company information and the number of people applying for the job.
- Next, click "Apply".
- Please register on the website if you do not have an account; if you already have one, you can fill out the application form immediately.
- Done.