Data Engineer in New York, NY at Open Systems Technologies

Date Posted: 11/10/2019

Job Snapshot

  • Employee Type:
    Full-Time
  • Location:
    New York, NY
  • Job Type:
  • Experience:
    At least 3 years
  • Date Posted:
    11/10/2019

Job Description

A global investment bank is currently seeking a Data Engineer to join its team in New York. The candidate will be responsible for building out and enhancing data and data pipeline architecture. You must have experience with architecture design and data wrangling; you will support the data analysts and data scientists on initiatives and ensure optimal data delivery.

Responsibilities:

  • Design a scalable and efficient data pipeline architecture.
  • Manage all stages of the software development lifecycle.
  • Assemble large, complex data sets that meet team and business requirements.
  • Collaborate with engineers across the organization to identify and remedy data quality and integrity issues in core systems.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, deploying infrastructure for scalability, high availability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using appropriate technologies.
  • Create data tools that help analytics team members build and optimize data sets into actionable insights and results.
  • Adapt to new languages and technologies as the infrastructure evolves.

Skills:

  • Graduate degree in Computer Science, Data Science, Statistics, or a related quantitative field
  • 3+ years of experience in a Data Engineer or similar role
  • Experience working with SQL and relational databases
  • Experience with kdb+/Q
  • Experience with object-oriented and functional programming languages such as Python, R, Java, or C++
  • Experience with data pipeline and workflow management tools
  • Exposure to and/or experience with cloud services and toolsets
  • Exposure to and/or experience with big data platforms and frameworks: Hadoop, Spark, Kafka, etc.
  • Experience building and optimizing large data sets, data pipelines, and architectures, including ETL and data warehouse development.
  • Exposure to building out data models across multiple sources and datasets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Previous experience in the financial industry is not required but is considered an asset.

Job category:
  • Information Technology
Job keywords:
  • Big Data
  • Relational Database
  • SQL