Administer and monitor the Enterprise Information Management (EIM) systems to ensure EIM operations run efficiently and meet the desired SLAs and security compliance requirements.
Monitor performance and advise on any necessary infrastructure changes to the EIM landscape; work with the infrastructure teams to implement such changes.
Design and develop architecture for a data services ecosystem spanning relational, columnar, NoSQL, and in-memory databases, data warehouses, and BI & Big Data technologies. This includes designing and implementing data pipelines and ETL processes.
Design data models for mission-critical, high-volume data management and real-time, distributed data processing, aligned with business requirements.
Promote and develop data architecture best practices, guidelines, procedures, and repeatable, scalable frameworks.
Work with business units on their analytics initiatives, providing data science expertise and resources, and take responsibility for extracting the data from EIM as required for each project.
Work with analytics vendors, providing the required data sets, and support business users in the assessment and validation of analytics, statistical, and machine learning models.
Help business units use tools like Tableau to visualize the relevant data in EIM and create dashboards.
Work closely with our application teams to operationalise and integrate analytics/machine learning models into our production systems.
A BS in Computer Science or a related discipline is required. An advanced degree in Analytics, Machine Learning, or AI is preferred.
At least 3 years of relevant industry experience in the following areas:
Knowledge of and working experience with machine learning, AI, statistical techniques, and information retrieval, as well as with data management systems, practices, and standards.
Knowledge of and working experience with at least one visualization tool such as Tableau, Power BI, Qlik, or a similar open-source tool.
Working experience architecting high-performance databases using Redshift, PostgreSQL, Cassandra, or other NoSQL stores.
Knowledge of and working experience with Big Data technologies such as in-memory databases, NewSQL, NoSQL, and Hadoop Hive/Spark.
Knowledge of and experience with shell scripting and R, Python, Perl, Ruby, or another scripting language. Proficiency in shell scripting plus R or Python is required.
Experience with commercial ETL platforms, with in-depth knowledge and understanding of ETL methodology and design for the data transformation layer. Working experience with Ab Initio would be a bonus.
Strong knowledge of and experience with Agile/Scrum methodology and iterative practices in a service delivery lifecycle is a plus.
Working experience in AWS or a similar public cloud environment is another plus.
Excellent interpersonal and communication skills and proven problem-solving ability.