Transformation & Turnaround
Data Engineer II - McKinsey Transformatics
Job ID: 96878
Do you want to work on complex and pressing challenges, the kind that bring together curious, ambitious, and determined leaders who strive to become better every day?
If this sounds like you, you've come to the right place.
Your Impact
You will be part of an effort to build next-gen data platforms on the cloud to enable our business stakeholders to have rapid data access and incubate emerging technologies. You will also be part of the team managing data governance, ensuring the necessary data security and lifecycle controls are in place.
In this role, you will design and build data products. You will develop and maintain scalable, reusable assets that serve as the foundation for analytics, reporting, and machine learning pipelines.
You will build and maintain data pipelines. You will design, develop, and optimize ETL/ELT workflows using tools like AWS Lambda, AWS Glue, Snowflake, or Databricks to ensure efficient and scalable data processing.
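As a purely illustrative sketch of this kind of ETL workflow (not a description of any actual Wave pipeline), the snippet below shows a minimal AWS Glue job in PySpark that reads raw JSON events from S3, applies a light cleanup, and writes Parquet back out. The bucket names, field names, and paths are hypothetical placeholders.

```python
# Minimal sketch of an AWS Glue ETL job (PySpark). All S3 paths and field names are illustrative.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON events from the (hypothetical) raw bucket.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/events/"]},
    format="json",
)

# Light cleanup: drop a junk field and cast an ambiguous column to double.
cleaned = raw.drop_fields(["_corrupt_record"]).resolveChoice(
    specs=[("amount", "cast:double")]
)

# Land the curated output as Parquet for downstream analytics.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/events/"},
    format="parquet",
)
job.commit()
```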
You will manage data infrastructure. You will work with cloud platforms like AWS to configure and manage storage (S3), compute resources, and workflow orchestration.
You will optimize performance. You will improve query efficiency, indexing strategies, and partitioning techniques to enhance data processing speed and cost-effectiveness in Snowflake or Delta Lake (Databricks).
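As a hedged illustration of the partitioning side of this work, the sketch below writes a Delta table partitioned by a commonly filtered date column and then compacts it with OPTIMIZE/ZORDER on Databricks. The table path and column names are assumptions for the example only.

```python
# Minimal sketch (Databricks/PySpark): partition and compact a Delta table to cut scan costs.
# Paths and column names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # a SparkSession is already provided on Databricks

events = spark.read.parquet("s3://example-curated-bucket/events/")

# Partition by a low-cardinality column that most queries filter on (here, event date).
(
    events.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-lake-bucket/delta/events/")
)

# Compact small files and co-locate rows on a frequently filtered high-cardinality column.
spark.sql(
    "OPTIMIZE delta.`s3://example-lake-bucket/delta/events/` ZORDER BY (customer_id)"
)
```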
You will work closely with data scientists, product owners, and business stakeholders to provide clean, structured datasets that enable advanced analytics and machine learning.
You will ensure data governance and security. You will implement best practices for access control, data lineage tracking, and regulatory compliance (SOC 2, GDPR).
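By way of one small, illustrative guardrail (not a description of McKinsey's actual controls), the sketch below uses boto3 to attach a bucket policy that denies any unencrypted (non-HTTPS) access to a hypothetical raw-data bucket; real governance work layers IAM roles, encryption, lineage tracking, and compliance reviews on top.

```python
# Minimal access-control sketch: deny plain-HTTP requests to a raw-data bucket.
# The bucket name is a placeholder.
import json
import boto3

bucket = "example-raw-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```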
You will automate and monitor workflows. You will be responsible for developing scalable data pipeline automation using orchestration tools such as Step Functions or Databricks Workflows, and for implementing logging, alerting, and monitoring solutions to ensure data quality, reliability, and system performance.
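For a flavor of what this orchestration and monitoring glue code can look like, the hedged sketch below starts a Step Functions execution and publishes a simple row-count metric to CloudWatch that an alarm could watch. The state machine ARN, namespace, and metric names are hypothetical.

```python
# Minimal sketch: trigger a Step Functions pipeline run and emit a data-quality metric.
# The ARN, namespace, and metric names are hypothetical placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")
cloudwatch = boto3.client("cloudwatch")

# Start one execution of the (hypothetical) ingestion state machine.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:eu-west-1:123456789012:stateMachine:example-ingest",
    input=json.dumps({"run_date": "2024-01-01"}),
)

# Publish a row-count metric so a CloudWatch alarm can flag anomalous runs.
cloudwatch.put_metric_data(
    Namespace="ExampleDataPipelines",
    MetricData=[
        {
            "MetricName": "RowsIngested",
            "Dimensions": [{"Name": "Pipeline", "Value": "example-ingest"}],
            "Value": 125000,
            "Unit": "Count",
        }
    ],
)
print("Started execution:", execution["executionArn"])
```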
You will stay updated on new technologies, participate in hackathons, and contribute to improving data engineering best practices across the team.
You will work in our McKinsey Client Capabilities Network in EMEA and will be part of our Wave Transformatics team.
Wave is a McKinsey SaaS product that equips clients to successfully manage improvement programs and transformations. Focused on business impact, Wave allows clients to track the impact of individual initiatives and understand how they affect longer-term goals. Combining an intuitive interface with McKinsey business expertise, it gives clients a simple, insightful picture of what can otherwise be a complex process, allowing them to track the progress and performance of initiatives against business goals, budgets, and time frames.
Our Transformatics team builds data and AI products to provide analytics insights to clients and McKinsey teams involved in transformation programs across the globe. The current team is composed of data engineers, data scientists, and project managers spread across several geographies. The team covers a variety of industries, functions, analytics methodologies, and platforms, e.g., cloud data engineering, advanced statistics, machine learning, predictive analytics, MLOps, and generative AI.
As a member of the team, you will work alongside skilled data engineers to design, build, and optimize scalable data solutions that power analytics, reporting, and machine learning. As a data engineer, you will be responsible for procuring data from APIs, ingesting it into the data storage layer, and ensuring its quality through cleaning and standardization. You will develop scalable data ingestion pipelines that integrate with cloud ecosystems, making data readily available for analytics and reporting.
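As an illustrative example of the ingestion path described above, the sketch below pulls records from a hypothetical REST endpoint, applies light standardization with pandas, and lands the result as Parquet in a raw S3 zone; the endpoint, columns, and bucket are placeholders rather than real systems.

```python
# Minimal sketch of an API-to-S3 ingestion step. Endpoint, fields, and bucket are hypothetical.
import io
import boto3
import pandas as pd
import requests

# Pull records from the (hypothetical) source API.
response = requests.get("https://api.example.com/v1/initiatives", timeout=30)
response.raise_for_status()
records = response.json()

# Light cleaning/standardization before landing the data.
df = pd.DataFrame(records)
df.columns = [c.strip().lower() for c in df.columns]
df["ingested_at"] = pd.Timestamp.now(tz="UTC")

# Write Parquet to the raw zone of the data lake.
buffer = io.BytesIO()
df.to_parquet(buffer, index=False)  # requires pyarrow or fastparquet
boto3.client("s3").put_object(
    Bucket="example-raw-bucket",
    Key="initiatives/2024/01/01/initiatives.parquet",
    Body=buffer.getvalue(),
)
```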
Your Growth
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture: doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward.
In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues, at all levels, will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else.
When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. Diverse perspectives are not only encouraged; they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package, which includes medical, dental, mental health, and vision coverage for you, your spouse/partner, and children.
Your qualifications and skills
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline
- 2+ years of hands-on experience in data engineering, ETL development, cloud-based data solutions, or building data products that serve analytics, automation, or machine learning needs
- Strong foundational knowledge of AWS cloud services, including S3, Lambda, Glue, and Snowflake, with a focus on scalable and cost-efficient data architectures
- Proficiency in Python, with experience in modularization and in writing optimized, production-ready code for data transformations and automation
- Advanced SQL skills, including query optimization, performance tuning, and database design
- Experience in building robust and scalable data pipelines with AWS Glue, Step Functions, and SQL-based transformations (via stored procedures)
- Solid understanding of data modeling, data warehousing concepts, and schema design best practices
- Hands-on experience with Tableau or other BI tools for data visualization and dashboard development
- Exposure to DevOps and CI/CD practices, including infrastructure-as-code, version control (Git), and automated deployment strategies
- Strong problem-solving mindset, with the ability to troubleshoot and optimize complex data workflows efficiently
- Excellent communication and collaboration skills, with the ability to work effectively in agile, cross-functional teams
- Experience with Databricks for scalable data processing and PySpark for distributed data transformations (preferred)
Please review the additional requirements regarding essential job functions of McKinsey colleagues.
FOR U.S. APPLICANTS: McKinsey & Company is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by applicable law.
FOR NON-U.S. APPLICANTS: McKinsey & Company is an Equal Opportunity employer. For additional details regarding our global EEO policy and diversity initiatives, please visit our McKinsey Careers and Diversity & Inclusion sites.