Working Hours: Monday - Friday (10am – 7pm)
Working Location: Central
Salary Package: Up to $7,000 (with Variable Bonus and Annual Wage Supplement)
We are seeking a Data Engineer to join our team of analytics experts. The successful candidate will be responsible for expanding and optimising data pipeline architecture, as well as improving data flow and collection processes to support cross-functional teams.
The Data Engineer will design, develop, construct, test, and maintain analytical data warehouses, and build scalable data pipelines based on business requirements. You will also be involved in designing, deploying, and continuously improving ETL/ELT processes, as well as preparing data for descriptive and prescriptive analytics.
In this role, you will be expected to have a strong understanding of database infrastructure and schema design, and be able to perform performance tuning and optimisation. You will also design, develop, review, and optimise core database stored procedures, batch job scripts, and Java-based data processing components.
Key Responsibilities:
• Develop, construct, test and maintain data architectures such as databases, data warehouses and large-scale data processing systems
• Design and develop data pipelines/systems for data modelling, mining and production
• Ensure the data architecture is in place to support routine and ad-hoc requirements of the data analytics team, stakeholders and the business
• Leverage a variety of programming languages and data crawling/processing tools to clean raw data and make it highly available for use in descriptive and predictive modelling
• Recommend and implement ways to improve data quality, reliability, flexibility and efficiency
• Ensure data assets and data catalogues are organised and stored efficiently so that information is easy to access and retrieve
• Perform PL/SQL and SQL tuning and optimisation of newly developed and existing applications
Requirements:
• Minimum 3 years' working experience in data architecture, data warehousing, data processing, data modelling and ETL/ELT, with familiarity in real-time streaming solutions
• Experience in Kubernetes-based DevOps practices, including container orchestration, CI/CD pipelines and microservices deployment
• Working experience in database development (Oracle SQL and PL/SQL)
• Working experience in AWS cloud environment, familiar with solutions such as EC2, S3, EMR, Redshift, Athena, Kinesis
• Programming knowledge in Python, R, SQL for data cleaning, processing and aggregation
• Proficiency in one or more of the following: Java, Hadoop, HDFS, Apache Airflow, Apache Spark, Scala, Hive, Pig
• Basic knowledge of Oracle database architecture
By submitting your resume, you consent to the collection, use, and disclosure of your personal information per ScienTec’s Privacy Policy (scientecconsulting.com/privacy-policy).
This authorizes us to:
• Contact you about potential opportunities.
• Delete your personal data once it is no longer required at this application stage.
All applications will be processed in strict confidence. Only shortlisted candidates will be contacted.
Wong Siew Ting (Maeve) - R25127375
ScienTec Consulting Pte Ltd - 11C5781