Big Data Engineer

About the job

Job Title: Big Data Engineer

Client: Fortune 1000 client

Location: San Jose, CA – 95134 (Remote until COVID-19 restrictions lift)

Duration: Permanent Full-Time position

 

Complete Description:

The most important thing to us about you is that you have a passion for working on cool stuff and can work well with cool people. We love the energy shown in your projects (and those side projects you do, ‘just for you’) and we love that you can get in a room with amazing developers and learn and teach and contribute and grow. Our Agile, collaborative approach is important to everyone here.

If you have worked on bringing a large software product to market, or have a desire to gain this experience, this role might be perfect for you.

 

Responsibilities:

We focus on building quality software in an agile and results-oriented environment. Specific responsibilities include:

  • Create sophisticated, imaginative, and efficient solutions for large-scale applications for the automotive industry.
  • Evolve the existing framework to support new scalability requirements and new functionality.
  • Work with the team to drive big data solutions.
  • Partner with the Product Owners and other teams to deliver business value for the client and our customers.
  • Deliver high-quality solutions using agile methodologies, including TDD and CI/CD.
  • Design reusable components by utilizing various standard frameworks.
  • Make technology recommendations that support optimal construction, maintenance, and performance.
  • Collaborate with other development teams inside and outside the company to leverage their capabilities and share ours.
  • Work with global and cross-functional teams like Operations, Support, Sales, etc.

 

Experience:

Qualified candidates will generally have around 7 years of software development experience, including:

Must Have:

  • Working experience with big data technologies.
  • Hands-on experience with Java, Python, or Scala (minimum 4 years).
  • Solid understanding of big data environments.
  • Solid hands-on experience with Spark Core and Spark Streaming.
  • Experience writing batch jobs to extract data from S3 or a data warehouse using Spark (see the sketch after this list).
  • Experience with Apache Kafka or AWS SQS/Kinesis.
  • Proven experience processing and aggregating millions of rows.
  • Experience building and maintaining performant, fault-tolerant, scalable distributed software systems.
  • Experience with agile software development methodology.
  • Strong written and verbal communication skills.
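
As a rough illustration of the Spark-on-S3 experience listed above, here is a minimal Scala sketch of a batch job that reads data from S3 and aggregates millions of rows. The bucket, path, and column names (example-bucket, events, vehicle_id, speed) are hypothetical placeholders, and the sketch assumes the hadoop-aws connector and valid AWS credentials are available on the cluster.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Minimal sketch of a Spark batch job: read event data from S3 and
    // aggregate it. All paths and column names below are hypothetical.
    object S3BatchAggregation {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("S3BatchAggregation")
          .getOrCreate()

        // Read partitioned Parquet data from S3 (requires the hadoop-aws
        // connector and AWS credentials to be configured).
        val events = spark.read.parquet("s3a://example-bucket/events/")

        // Aggregate millions of rows: average speed per vehicle.
        val avgSpeed = events
          .groupBy("vehicle_id")
          .agg(avg("speed").as("avg_speed"))

        // Write results back to S3 for downstream consumers.
        avgSpeed.write.mode("overwrite").parquet("s3a://example-bucket/output/avg_speed/")

        spark.stop()
      }
    }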

Good to Have:

  • Experience with AWS (EMR, Lambda, Kinesis, Glue, S3, SQS, SNS, IAM) is a big plus.
  • Experience with RESTful APIs to enable real-time data consumption.
  • Solid database skills (SQL and NoSQL).
  • Ability to fine-tune the performance of Spark applications (see the brief sketch after this list).
  • Demonstrated success working with cross-functional teams. Well-versed in the development challenges inherent in highly scalable, highly available, and highly resilient systems.
  • Experience with Snowflake.
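
As one example of what fine-tuning Spark performance can involve, the sketch below sets a few common configuration knobs when building a session. The values shown are assumptions for a mid-sized cluster, not recommendations; the right settings depend on the workload and cluster.

    import org.apache.spark.sql.SparkSession

    // A few common Spark tuning knobs; the values are illustrative only.
    object TunedSession {
      val spark = SparkSession.builder()
        .appName("TunedJob")
        // Match shuffle parallelism to the cluster's total cores.
        .config("spark.sql.shuffle.partitions", "400")
        // Let Spark coalesce small shuffle partitions at runtime (Spark 3.x).
        .config("spark.sql.adaptive.enabled", "true")
        // Use Kryo for faster, more compact serialization.
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
    }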

Education:

  • A BS or MS in Computer Science, or equivalent education/experience

The end client is unable to sponsor or transfer visas for this position; all candidates authorized to work in the US without sponsorship are encouraged to apply (no Corp-to-Corp).
