Big Data Developer

Location: Singapore
Reference: 2596649
Salary: up to S$110k per annum
Contact: Ollie Wood
Email: email Ollie
1. Project delivery work

  • Be part of a distributed agile planning and delivery model for Big Data software projects, working with other hubs in the Czech Republic and the USA, and applying lifecycle methodology to agile project delivery
  • You work directly with customers and infrastructure teams, either face-to-face or over teleconferencing tools, to clarify requirements, implement features, run tests and resolve technical issues. You have the ability (knowledge, training and access) to work on new projects or existing code bases, and to shape and apply team best practices using DevOps, collaboration and continuous-integration tools.
2. Platform pipeline and planning
  • You help to shape the global Big Data and DevOps team’s platform plans (capabilities, products and releases). You explain the roadmap, tools and services to target Asia Pacific audiences. You influence and inform the global plans with the understanding of Asia Pacific user requirements and challenges.
3. Evangelism, consulting and adoption support
  • You engage stakeholders to understand business problems and translate them into Big Data solution plans and estimates, aligned with and making best use of the Platform capabilities.
  • Your deep knowledge of best practices for data management and processing, integration and delivery helps to drive increased adoption of Big Data and DevOps tools throughout the Asia Pacific IT community.
Desired experience and skills
  • Development and prototyping of data-driven software products in a DevOps environment, using agile lifecycle methodology.
  • Good knowledge of version control systems such as Git, SVN, or Mercurial.
  • Experience with software hosting solutions such as GitHub, Bitbucket.
  • Experience with popular development tools and tooling platforms.
  • Experience with Atlassian products is a plus (Confluence, Jira, Bitbucket)
  • Experience with end-to-end software product ownership
  • Familiarity with the software development lifecycle (SDLC) and agile methods (e.g. Scrum).
  • Good knowledge of Hadoop and Hadoop-ecosystem tools: Spark, SQL, NoSQL, Kibana, ingestion tools, etc.
  • We are tool agnostic, but our primary tools and languages are Linux shell, Java, Scala, Python, JavaScript, R, and Spotfire.
  • Knowledge of Unix tools, Jenkins, Docker, Ansible, automated testing, and infrastructure-as-code tools
  • Ability to stay up to date with leading Big Data and software development tools, and to set the benchmark for industry best practices.
  • Domain knowledge and experience in business disciplines such as Market Research, Financial Engineering, Sales and Marketing, Drug Discovery, Medicine, Manufacturing, HR or Procurement are appreciated but not mandatory.
  • We evaluate candidates on a case-by-case basis, but we prefer an IT qualification: an MSc or BSc (or equivalent with relevant experience) in Computer Science, Computer Science Engineering, or Mathematics, with a solid background in Linux systems, cloud and big data technologies.
  • Quantitative, scientific and technology disciplines such as Mathematics, Statistics, Data Science, Analytics, Physics, and Bioinformatics are a plus.
  • A real-world problem solver: strong analytical and problem-solving skills, with an ethic of quality and enterprise-scale service.
  • A user advocate: a mindset of collaboration.
  • A passion for Big Data technologies, and making them work in a corporate environment for a regulated industry.
If this role is not quite right for you but your work is focused on the data ecosystem, please get in touch with Ollie Wood.

CEI No: R1112169 | Licence No: 07C3147