Exploreture
Job Overview
Data Engineer
Duties & Responsibilities:
- Design and develop data solutions tailored to client requirements, ensuring they are scalable, reliable, and capable of meeting enterprise demands.
- Implement highly available data processing applications, adhering to Continuous Integration (CI) and Continuous Delivery (CD) practices to ensure swift and safe deployment of new features.
- Maintain high code quality by following software engineering best practices, including comprehensive code reviews and automated testing.
- Collaborate effectively within a cross-functional team in an Agile delivery environment, contributing to a cohesive team dynamic and project success.
- Adhere to DevOps principles, actively participating in the entire software lifecycle from development and QA to deployment and post-production support.
- Build optimised data solutions and ingestion patterns with due consideration for security and data governance.
- Build data pipelines and real-time data ecosystems that deliver interactive customer experiences and feed data into other systems via multiple interface types.
- Work directly with overseas client groups to gather requirements and domain knowledge.
- Process, reformat, and arrange ingested data, and build data processing pipelines (a minimal sketch follows this list).
- Develop and implement complex statistical analyses for data processing, exploration, model building and implementation.
- Translate complex technical and functional requirements into detailed designs.
- Explore and implement efficient data storage and processing solutions, aiming for streamlined, cost-effective approaches that do not compromise on performance.
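
For illustration, the following is a minimal sketch of the kind of batch ingestion pipeline these duties describe, written in PySpark (Spark appears under Qualifications below). The S3 buckets, paths, column names, and schema are hypothetical assumptions, not details taken from this posting.

```python
# A minimal batch-ingestion sketch, assuming hypothetical S3 buckets and
# an events dataset with event_id and event_ts columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-ingest").getOrCreate()

# Ingest raw CSV drops from distributed storage (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3a://example-raw-bucket/events/")
)

# Reformat and clean: parse timestamps, drop malformed rows, deduplicate.
clean = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropna(subset=["event_id", "event_ts"])
    .dropDuplicates(["event_id"])
)

# Write partitioned Parquet for downstream consumers and other systems.
(
    clean
    .withColumn("event_date", F.to_date("event_ts"))
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/events/")
)
```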
Qualification & Experience:
- A Bachelor’s Degree in Computer Science or equivalent qualification.
- 1-2 years of experience in developing enterprise-grade data processing applications with a demonstrable track record of delivering robust, scalable solutions.
- A good programming background in Python, R, and Go.
- Proficiency in handling large volumes of data, with experience in both relational and NoSQL databases (MySQL, MongoDB) and distributed storage systems (HDFS, Amazon S3, Redshift).
- Hands-on experience in ETL design and development using ETL tools (preferably Informatica, Power BI, and cloud tools such as AWS Data Pipeline, EMR, Spark, and Hive).
- Experience working in a Scrum Agile delivery environment, and knowledge of DevOps practices.
- Experience with code management and CI/CD tools such as GitHub, GitLab, and Jenkins.
- Experience in an Agile environment, aligning pod members on the technical vision and the path to implementation.
- Working experience with streaming data, using tools such as AWS Kinesis, Kafka, Apache Storm, and Apache Spark (see the streaming sketch after this list).
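
As a companion to the streaming-data point above, here is a minimal sketch of consuming a Kafka topic with Spark Structured Streaming. The broker address, topic, schema, and storage paths are hypothetical, and running it assumes the spark-sql-kafka connector package is supplied at submit time.

```python
# A minimal streaming sketch, assuming a hypothetical Kafka topic of JSON
# events; requires the spark-sql-kafka-0-10 connector at spark-submit time.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Consume JSON messages from a Kafka topic (hypothetical broker and topic).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land micro-batches as Parquet; the checkpoint enables restart recovery.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-stream-bucket/events/")
    .option("checkpointLocation", "s3a://example-stream-bucket/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```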
What’s On Offer:
- Competitive pegged salaries.
- Ongoing quarterly training and development to help you grow.
- Attractive performance bonuses.
- Comprehensive family medical insurance.
- Access to counseling services to support your well-being.
Job Detail
- Offered Salary: Not Specified
- Career Level: Not Specified
- Experience: 1 Year
- Gender: Both
- Industry: Computer and Technology
- Qualification: Bachelor's Degree