Principal Big Data Engineer at Fidelity Investments in Boston

Job Description

The Personal Investments business unit of Fidelity is seeking a hands-on Lead Big Data Engineer, reporting to the VP of Big Data Architecture, to help anchor an exciting, fast-paced engineering team passionate about designing and implementing large-scale distributed data processing systems using cutting-edge, cloud-based open-source and proprietary big data technologies. This role will implement a variety of solutions to process data within, and expose data from, a Data Lake that enables our data analysts and scientists to explore data in an ad-hoc manner as well as quickly implement data-driven models that generate accurate insights in an automated fashion. This position is critical to delivering Fidelity’s promise of crafting the best customer experiences in financial services.

The Team

Our teams have flat, non-hierarchical structures and run on agile principles. The lead engineer will drive multiple initiatives by designing architecture, defining best practices, and evangelizing those practices across this and other teams. Many of these initiatives are just starting, giving you the opportunity to shape the foundational development work for years ahead. We invest in a broad range of technologies and experiment before delivering a production system. As a lead engineer, you will help lead our technology research, evaluation, and PoC efforts, and mentor and guide less senior members of our team.

The Expertise You Have

  • Bachelor’s degree or higher in a technology-related field (e.g., Engineering, Computer Science) required.
  • 5+ years of hands-on experience applying principles, best practices, and trade-offs of schema design to various types of database systems, both relational and NoSQL, along with a solid understanding of data manipulation principles.
  • 7+ years of hands-on experience building distributed back-end enterprise software platforms in one or more modern object-oriented programming languages (Python preferred), including the ability to code in more than one language.
  • 5+ years of working in Linux environments, with the ability to interface with the OS using system tools, scripting languages, integration frameworks, etc.
  • 2+ years of hands-on experience implementing batch and real-time Big Data integration frameworks and/or applications in private or public clouds, preferably AWS, using various technologies (Hadoop, Spark, etc.), as well as debugging, identifying performance bottlenecks, and fine-tuning those frameworks.
  • 2+ years of experience with DevOps, Continuous Integration, and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker).
  • Experience and comfort executing projects in Agile environments (Kanban and Scrum).

The Skills You Bring

  • The customer’s satisfaction is your main priority.
  • Enjoy owning systems end-to-end and being hands-on in every aspect of software development and support: requirements discussion, architecture, prototyping, development, debugging, unit testing, deployment, and ongoing support.
  • Review multiple approaches before making a design decision, evaluating simplicity, robustness, performance, memory footprint, disk throughput, etc.
  • Passion and intellectual curiosity to learn new technologies and business areas.
  • Enjoy researching technologies and figuring out which ones work best.
  • Love prototyping and experimenting, and are OK when a project occasionally fails, learning from that failure.
  • Enjoy mentoring junior developers.
