BlackRock Data Engineer in Budapest, Hungary

BlackRock helps investors build better financial futures. As a fiduciary to our clients, we provide the investment and technology solutions they need when planning for their most important goals. As of December 31, 2017, the firm managed approximately $6.288 trillion in assets on behalf of investors worldwide. For additional information on BlackRock, please visit the firm's website or follow @blackrock on Twitter.

Job Description:

Data Engineer – Research & Analytics


BlackRock is one of the world’s preeminent asset management firms and a premier provider of global investment management, risk management and advisory services to institutional, intermediary and individual investors around the world. BlackRock offers a range of solutions — from rigorous fundamental and quantitative active management approaches aimed at maximizing outperformance to highly efficient indexing strategies designed to gain broad exposure to the world’s capital markets. Our clients can access our investment solutions through a variety of product structures, including individual and institutional separate accounts, mutual funds and other pooled investment vehicles, and the industry-leading iShares® ETFs.

This is an opportunity to work within the ETF and Index Investing (EII) Group, which produces iShares®, the world's most comprehensive family of exchange-traded products. We are seeking a self-motivated Data Engineer to join our Global Data Engineering team.

The mandate of this team is to create commercially driven tools, underscored by rigorous research and aligned with our front-office needs. Our US team comprises finance professionals, researchers, quants, computer scientists and data analytics engineers. This unique team structure allows us to take innovative and commercially relevant ideas, prototype them rapidly, and commercialize them, all within one team. We do so by leveraging the vast array of tools and data available and augmenting them with specialized applications suited to rapid prototyping. The team is highly sought after to solve data- and analytically intensive challenges and present the results in intuitive and visually compelling ways.

The focus of your role will be to build out our financial data warehouse, data lake, data pipelining infrastructure and the surrounding data ecosystem. This includes leveraging ETL tools such as Pentaho Data Integration, optimised batch processing, dependency analysis, and massively parallel processing (MPP) databases.

We value:

• Work Life Balance and working smart over “face time”

• Hands on technical leadership

• Small teams with highly talented technologists

• Team work, open dialogue, and challenge of the status quo

• Quality AND Time to Market, and building infrastructure to help us do both

Product examples from the EII Analytics team:

• Real-time valuation of corporate bonds, drawing on diverse data sources with millions of data points, for 30K bonds in parallel

• Complex Event Processing tools to evaluate the effectiveness of our ETFs in real-time, measuring over 500K rules per second, across 2K+ listings.

• Detailed ETF Pre-Trade analytics to guide client trading decisions

• Text mining to identify sales opportunities

Technology Stack

• Python, Greenplum (SQL, stored procedures, functions), Java, Pentaho, Cassandra, ESP (Complex Event Processing), Git, Maven, Linux

• New product development will leverage Kafka, Kafka Streams, Spark, Kubernetes and Docker


Key Responsibilities:

• Data engineer in an analytics team, embedded within the business, focused on delivering commercially relevant and game-changing software

• Drive key technology stack and implementation decisions

• Handle and manage very large data sets

• Understand the data, its lifecycle and its business applications

• Create and optimise database solutions in collaboration with research and development teams

• Improve existing data collection procedures using ETL or Big Data tools

• Coordinate with other team members in a multi-office, multi-region environment: San Francisco, New York, London and Budapest

• Deliver a high level of client service through responsiveness and accuracy

Skills and Experience:


• Strong English language skills

• Expert in databases (SQL, stored procedures and data modelling), preferably Greenplum or PostgreSQL

• Confident in at least one programming language, such as Java SE or Python

• Knowledge of ETL processes (data pipelining) using a Big Data stack or standard ETL tools such as Pentaho or Informatica

• Experience with agile iteration/Scrum sprint development processes, from project inception through product delivery and ongoing enhancement

• Experience working with LINUX/UNIX servers

• Strong teamwork, communication skills and time management abilities a must


• Experience with open-source tools (Maven, Spring, JUnit, etc.)

• Experience with Kafka, Kafka Streams, Spark, OneTick and SAP’s ESP a strong plus

• Proficiency with statistical packages (MATLAB, R, SAS), for example from an applied research environment

• Experience with Java application servers such as JBoss, Tomcat or Jetty

• Background in the investment research or portfolio management area of a financial firm

• Financial knowledge and interest strongly recommended

BlackRock is proud to be an Equal Opportunity and Affirmative Action Employer. We evaluate qualified applicants without regard to race, colour, national origin, religion, sex, disability, veteran status, and other statuses protected by law.

BlackRock will consider qualified applicants with a criminal history.

To apply for the role, please use the following URL: