Your Growth
Only at McKinsey
Work on real-world, high-impact projects across a variety of industries – Identify micro-patterns in data that our clients can exploit to maintain their competitive advantage, and watch your technical solutions transform their day-to-day business.
Experience the best environment to grow as a technologist and a leader – Develop a sought-after perspective connecting technology and business value by working on real-life problems across a variety of industries and technical challenges to meet our clients' changing needs.
Be surrounded by inspiring individuals as part of diverse, multidisciplinary teams – Develop a holistic perspective on AI by partnering with the best design, technical, and business talent in the world as your teammates.
Our Tech Stack
While we advocate for using the right tech for the right task, we often leverage the following technologies: Python, PySpark, the PyData stack, SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS, container technologies such as Docker and Kubernetes, cloud solutions such as AWS, GCP, and Azure, and more!
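For a flavour of how some of these tools fit together, here is a minimal, illustrative sketch of a Kedro pipeline. The dataset names and transformation functions are hypothetical; in a real project the datasets would be registered in the project's Data Catalog.

```python
# Minimal, illustrative Kedro pipeline sketch (dataset and function names are hypothetical).
import pandas as pd
from kedro.pipeline import Pipeline, node, pipeline


def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    """Drop rows missing an order identifier and standardise column names."""
    return raw_orders.dropna(subset=["order_id"]).rename(columns=str.lower)


def aggregate_spend(orders: pd.DataFrame) -> pd.DataFrame:
    """Compute total spend per customer as a simple model-ready feature."""
    return orders.groupby("customer_id", as_index=False)["amount"].sum()


def create_pipeline() -> Pipeline:
    # Each node maps named catalog datasets to a pure Python function, which is
    # what keeps the pipeline modular, reproducible, and easy to version.
    return pipeline(
        [
            node(clean_orders, inputs="raw_orders", outputs="clean_orders"),
            node(aggregate_spend, inputs="clean_orders", outputs="customer_features"),
        ]
    )
```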
Your Impact
As a Data Engineer at QuantumBlack, you will work in cross-functional Agile project teams alongside Data Scientists, Machine Learning Engineers, other Data Engineers, Project Managers, and industry experts. You will work hand-in-hand with our clients, from data owners, users, and fellow engineers to C-level executives.
You are a highly collaborative individual who wants to solve problems that drive business value. You have a strong sense of ownership and enjoy hands-on technical work. Our values resonate with yours.
As a Data Engineer, you’ll:
- Help to build and maintain the technical platform for advanced analytics engagements, spanning data science and data engineering work.
- Design and build data pipelines for machine learning that are robust, modular, scalable, deployable, reproducible, and versioned.
- Create and manage data environments and ensure information security standards are maintained at all times.
- Understand clients' data landscapes and assess data quality (a minimal sketch of such an assessment follows this list).
- Map data fields to hypotheses and curate, wrangle, and prepare data for use in advanced analytics models.
- Have the opportunity to contribute to R&D projects and internal asset development.
- Contribute to cross-functional problem-solving sessions with your team and our clients, from data owners and users to C-level executives, to address their needs and build impactful analytics solutions.
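As a small illustration of the data quality assessment mentioned above, a first-pass profile with pandas might look like the sketch below; the file name, key column, and thresholds are assumptions made purely for the example.

```python
# A hypothetical first-pass data quality profile with pandas (names are illustrative).
import pandas as pd


def profile_data_quality(df: pd.DataFrame, key_column: str) -> pd.DataFrame:
    """Summarise per-column dtype, completeness, and cardinality."""
    summary = pd.DataFrame(
        {
            "dtype": df.dtypes.astype(str),
            "null_fraction": df.isna().mean(),
            "distinct_values": df.nunique(),
        }
    )
    # Flag duplicate keys early: they often break downstream joins and aggregations.
    duplicate_keys = df[key_column].duplicated().sum()
    print(f"Duplicate values in {key_column}: {duplicate_keys}")
    return summary


if __name__ == "__main__":
    orders = pd.read_csv("orders.csv")  # hypothetical client extract
    print(profile_data_quality(orders, key_column="order_id"))
```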
Your Qualifications and Skills
- Degree in computer science, engineering, mathematics, or equivalent experience
- 2+ years of relevant professional experience
- Ability to write clean, maintainable, scalable, and robust code in an object-oriented language (e.g., Python, Scala, or Java) in a professional setting
- Proven experience building data pipelines in production for advanced analytics use cases
- Experience working across structured, semi-structured and unstructured data
- Exposure to software engineering concepts and best practices, including DevOps, DataOps, and MLOps, would be considered a plus
- Familiarity with distributed computing frameworks (e.g. Spark, Dask), cloud platforms (e.g. AWS, Azure, GCP), containerization, and analytics libraries (e.g. pandas, numpy, matplotlib)
- Commercial client-facing or senior stakeholder management experience would be beneficial