RESPONSIBILITIES:
- Lead the ingestion of structured and unstructured data, in both real-time and batch formats, from various sources
- Build, test, and scale data pipelines
- Develop the Group’s data warehouse using cloud solutions
- Liaise closely with regional DS and BA teams to process data and extract useful business insights
- Automate data processing whenever possible and uphold data integrity and data security protocols
REQUIREMENTS:
Must Haves
- Conversational Mandarin is required, as this role involves communicating with colleagues based in Hong Kong, China, and Taiwan who primarily speak Mandarin
- Master’s or Bachelor’s degree in IT, Computer Science, Software Engineering, Applied Mathematics, or an equivalent field, with at least 5 years of experience developing data warehousing and data lake systems
- Expert in Python, Scala, Hive, Kafka, or Spark
- Expert proficiency in ETL and in building, enhancing, and scaling data pipelines, both on-premises and on cloud-based platforms
Good to Haves
- Experience with cloud-based data modules on AWS, Azure or Alibaba Cloud
- Experience in building curated and semantic data layers
- Experience in data ingestion and data cleansing
- Experience building visual dashboards is preferred
- Experience in advanced manufacturing, semiconductors, and/or digital supply chain is highly advantageous
Please email Xa*****@ch**********.sg for a confidential discussion
EA License no: 16S8066 | Reg no.: R1980978
Only successful candidates will be notified. Due to work pass restrictions, the client can only prioritize Singaporean applicants, followed by Singapore Permanent Residents.