Saturday, 26 February 2022

Job Opportunity at NMB - Senior Data Engineer


Senior Data Engineer

Job Purpose:

To build and maintain optimized and highly available data pipelines that facilitate deeper analysis and reporting.

Main Responsibilities:

  • Design, implement, and maintain the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources into the Data Warehouse using SQL and ‘big data’ technologies.
  • Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Design and build solutions that empower users to meet their self-service analytics needs.
  • Assemble large, complex data sets that meet functional and non-functional business requirements, and design custom ETL and ELT processes.
  • Implement enhancements and new features across data infrastructure systems, including the Data Warehouse, ETL, Master Data Management, and BI platforms.
  • Maintain the overall data infrastructure systems
  • Troubleshoot issues across data sets and data platforms, and perform root cause analysis.
  • Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile / Scrum methodology
  • Design and build an optimal organizational data infrastructure and architecture for the extraction, transformation, and loading of large data volumes from a wide variety of data sources using SQL and Azure/AWS big data technologies.
  • Profile and analyze data for the purpose of designing scalable solutions
  • Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
  • Work with architecture and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to
  • Anticipate, identify and solve issues concerning data management to improve data quality.
  • Work with analytics tools that utilize the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics.
  • Advise on the best tools/services/resources to build robust data pipelines for data ingestion, connection, transformation, and distribution
  • Transform data and engineer new features for machine learning models
  • Perform deep-dive analysis including the application of advanced analytical techniques to solve some of the more critical and complex business problems
  • Create new methods to visualize core business metrics through reports, dashboards, and analytics tools.
  • Work with different stakeholders to assist with data-related technical issues and support their data infrastructure needs.
  • Work with business users to understand the business domain and troubleshoot product issues.

Knowledge and Skills:

  • Understanding of ETL frameworks and tools
  • Understanding of reporting & data visualization tools
  • Excellent analytical, creative and problem-solving skills.
  • Excellent verbal and written communication skills with the ability to interact effectively with people at all levels.
  • Ability to work effectively within a team.
  • Ability to prioritize, meet deadlines and work under pressure
  • Ability to work independently with minimal supervision
  • Attention to detail

Qualifications and Experience:

  • BSc in Computer Science, Computer Engineering, Data Science, or a relevant field.
  • Strong programming experience in SQL, Python, and R
  • 5 years of experience in a data engineering role
  • 3 years of big data experience
  • Experience in building and optimizing data warehouse and big data pipeline architectures
  • Experience with design, development and maintenance of ETL tools
  • Experience with maintenance and troubleshooting of BI platforms; SQL and NoSQL databases
  • Experience in data mining & machine learning
  • Successful history of manipulating, processing, and extracting value from large, disconnected datasets
  • Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, etc.

The deadline for submitting the application is 10 March 2022.

CLICK HERE TO APPLY
