The health and safety of our employees and candidates is very important to us. Due to the current situation related to the Novel Coronavirus (2019-nCoV), we’re leveraging our digital capabilities to ensure we can continue to recruit top talent at the HSBC Group. As your application progresses, you may be asked to use one of our digital tools to help you through your recruitment journey. If so, one of our Resourcing colleagues will explain how our video-interviewing technology will be used throughout the recruitment process and will be on hand to answer any questions you might have.
Some careers have more impact than others.
If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
Job Description:
- Work with stakeholders including Business, IT and Design teams to assist with technical issues and support infrastructure needs
- Work with implementation teams from concept to operations, providing deep technical subject-matter expertise for successfully deploying large-scale data solutions in the enterprise, using modern data and analytics technologies on premises and in the cloud
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Integrate massive datasets from multiple data sources for data modelling
- Formulate business problems as technical data problems while ensuring key business drivers are captured in collaboration with product management
- Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using big data and cloud technologies (a minimal sketch of such a pipeline appears after this list)
- Design, create and maintain optimal data pipelines and architectures for data processing
- Implement DevOps automation for all stages of the data pipeline build, deploying from development through to production
- Extract, load, transform, clean and validate data
- Query datasets, visualize query results and create reports
- Keep business data separated and secure across regional and global boundaries, spanning multiple data centers and GCP regions
- Work with data and analytics experts to strive for greater functionality in our data systems
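To make the pipeline duties above concrete, here is a minimal, illustrative sketch of the kind of extract-load-transform job the role involves. It assumes the google-cloud-bigquery Python client; the project, bucket, dataset and table names are hypothetical placeholders, not part of this posting.

```python
from google.cloud import bigquery

# Hypothetical identifiers for illustration only.
PROJECT = "my-gcp-project"
RAW_TABLE = f"{PROJECT}.analytics.transactions_raw"
SOURCE_URI = "gs://my-bucket/exports/transactions_*.csv"

client = bigquery.Client(project=PROJECT)

# Extract/load: ingest CSV files from Cloud Storage into BigQuery,
# letting BigQuery infer the schema and replacing any previous load.
load_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.load_table_from_uri(SOURCE_URI, RAW_TABLE, job_config=load_config).result()

# Transform/validate: materialise a cleaned table, dropping rows
# that fail a basic validation rule.
client.query(
    f"""
    CREATE OR REPLACE TABLE `{PROJECT}.analytics.transactions_clean` AS
    SELECT * FROM `{RAW_TABLE}`
    WHERE amount IS NOT NULL AND amount >= 0
    """
).result()
```

In practice a job like this would run under a scheduler such as Airflow or Cloud Composer rather than as a standalone script.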
Skillset:
- Active Google Cloud Professional Data Engineer certification or active Google Cloud Professional Cloud Architect certification
- Experienced in data lake and data warehouse ETL build and design
- Experienced in migrating large enterprise legacy systems to the cloud, e.g. Hadoop to GCP
- Hands-on GCP experience, with at least one end-to-end solution designed and implemented at production scale
- Hands-on experience of Python and PySpark programming
- Experienced in designing, building and operationalizing large-scale enterprise data solutions and applications using one or more GCP data and analytics services in combination with third-party tools: Spark, Hive, Cloud Dataproc, Cloud Dataflow, Bigtable, BigQuery, Cloud Pub/Sub, Cloud Storage, Cloud Functions and GitHub
- Experienced in designing and building production data pipelines from data ingestion to consumption within a hybrid big data architecture, using cloud-native GCP services, Python, etc.
- Experienced in building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experienced in performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with data pipeline and workflow management tools: Airflow, etc. (an illustrative DAG sketch follows this list)
- Experience with object-oriented/functional scripting languages: Python, Java, etc.
- Experience with open-source technologies and tools: GitHub, PySpark, Jenkins, Ansible, etc.
- Experience supporting and working with cross-functional teams in a dynamic agile environment
- Well versed in Agile tools such as Jira and Confluence
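As an illustration of the workflow-management skills listed above, a minimal Airflow DAG might chain an extract task into a transform task. This is a sketch only; the DAG id, task ids and callables are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Hypothetical placeholder: pull data from a source system.
    print("extracting")


def transform():
    # Hypothetical placeholder: clean and reshape the extracted data.
    print("transforming")


with DAG(
    dag_id="example_elt",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the transform only after the extract succeeds.
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="transform", python_callable=transform)
```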
You’ll achieve more when you join HSBC.
HSBC is an equal opportunity employer committed to building a culture where all employees are valued and respected and all opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, gender, genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, disability, color, national origin or veteran status. We consider all applications based on merit and suitability for the role.
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by HSBC Software Development Centre