Big Data

Senior Big Data Engineer - Latency Monitoring System

Location: New York

Company: Bloomberg LP

 
You're a software engineer who develops and integrates large systems with many parts, and who can distribute data between those parts. You're interested in Big Data sets, performance analysis, and statistics. You enjoy working with low-latency, high-throughput systems, and you're comfortable thinking about the distance between two cities in both miles and milliseconds. You prefer to use open-source technologies, but if the right tool doesn't exist, you're happy to build it.
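
To make the miles-and-milliseconds framing concrete, here is a small illustrative Python calculation. The New York-London distance and the fiber propagation factor are assumptions for the example, not Bloomberg figures: light in optical fiber travels at roughly two-thirds of its vacuum speed, which puts a hard floor under cross-ocean latency.

    # Illustrative only: translate a city-to-city distance into a
    # theoretical one-way latency floor over straight-line fiber.
    SPEED_OF_LIGHT_KM_S = 299_792   # vacuum speed of light, km/s
    FIBER_FACTOR = 2 / 3            # light in fiber: roughly 2/3 of c
    KM_PER_MILE = 1.609344

    def one_way_latency_ms(miles: float) -> float:
        """Lower-bound one-way latency in milliseconds."""
        km = miles * KM_PER_MILE
        return km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

    # ~3,460 miles New York to London (assumed great-circle distance)
    print(f"NY-London floor: {one_way_latency_ms(3460):.1f} ms one way")

Real-world paths add routing, switching, and serialization overhead on top of that floor, which is exactly the gap a latency monitoring system makes visible.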

If this sounds like you, then consider working on the Latency Monitoring System. We're building a system from scratch to explore the latency of market data delivery on Bloomberg's global network. You'll be involved nearly from the beginning, designing a system that helps both developers and business departments understand how data flows through our system and where we can improve. We're looking for someone who can contribute to all aspects of the system, from parsing to data storage to data analysis. You'll work with a small, flexible team on identifying how applications behave under load, which applications can be improved, and where the bottlenecks are.

We'll trust you to:

  •  Design and implement distributed data analytics systems, using Hadoop/Spark, Python, and C/C++
  •  Manage cloud resources in order to maintain resiliency and performance
  •  Effectively roll out new features using an Agile methodology
  •  Work with a small team on all parts of the system, from data capture to display
  •  Participate with the rest of the team in analyzing the latency data, finding bottlenecks, and proposing solutions (a sketch of what that analysis can look like follows this list)
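
For a flavor of that analysis, here is a minimal PySpark sketch (Spark 3.1+). The schema (publish_ts, capture_ts, route), the Parquet path, and the app name are hypothetical, invented for illustration rather than taken from the actual system:

    # Hypothetical schema: one row per market-data message, with a publish
    # timestamp at the source and a capture timestamp at the monitoring edge.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("latency-analysis").getOrCreate()

    msgs = spark.read.parquet("/data/latency/raw")      # hypothetical path

    latencies = msgs.withColumn(
        "latency_ms",
        (F.col("capture_ts").cast("double") - F.col("publish_ts").cast("double")) * 1000,
    )

    # Median and tail latency per network route; the p99 tail is usually
    # where the interesting bottlenecks hide.
    summary = latencies.groupBy("route").agg(
        F.percentile_approx("latency_ms", [0.5, 0.99]).alias("p50_p99_ms"),
        F.count("*").alias("messages"),
    )
    summary.orderBy(F.desc("messages")).show(truncate=False)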

You need to have:

  •  2+ years' experience with Hadoop and Spark
  •  2+ years' experience with OpenStack or Amazon EC2 (or equivalent)
  •  4+ years' experience with Python
  •  BS or MS in Computer Science or equivalent experience
  •  Experience with GitHub and a solid understanding of Git's core concepts
  •  Familiarity with Linux
  •  A solid understanding of basic statistics and core computer science concepts

We'd love to see:

  •  A strong understanding of distributed computing
  •  Familiarity with web technologies, including NGINX, Flask, and REST APIs (a minimal example follows this list)
  •  Experience with Chef, Puppet, or Ansible
  •  Familiarity with system administration tasks, such as managing services, hardware, and network configurations
  •  Prior experience working with trading or market data
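
As a small illustration of the Flask and REST piece, the sketch below serves per-route latency summaries as JSON. The URL path, port, and hard-coded numbers are invented for the example; a real service would sit behind NGINX and query the analytics store:

    # Minimal Flask sketch: expose latency summaries over a REST endpoint.
    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Hard-coded sample data; purely illustrative.
    LATENCY_SUMMARIES = {
        "nyc-london": {"p50_ms": 29.4, "p99_ms": 41.2},
        "nyc-tokyo": {"p50_ms": 92.1, "p99_ms": 130.7},
    }

    @app.route("/api/latency/<route_name>")
    def latency_summary(route_name):
        summary = LATENCY_SUMMARIES.get(route_name)
        if summary is None:
            abort(404)
        return jsonify(route=route_name, **summary)

    if __name__ == "__main__":
        app.run(port=8080)   # NGINX would typically reverse-proxy to this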

Big Data Platform Engineer - Data Technologies

 

Company: Bloomberg LP

Location: New York City and Princeton, NJ

 

Bloomberg runs on data. It’s our business and our product. It’s why thousands of companies partner with us. We're nearing one petabyte and growing, with no end in sight. Our data captures who, what, when, where and why our clients use Bloomberg products.

The Bloomberg Big Data Services engineering team (or BBDS for short) provides a software platform for hosting large datasets. It's a mature platform complete with search, analytics, and real-time pipeline processing capabilities. The system scales out to petabytes while maintaining low latency, high availability, and immediate discoverability by clients. This puts us in an enviable position to address the unique challenges of financial markets.

Maybe you love solving intricate engineering issues with large systems. Or you like to dig into interesting problems around platform APIs, real-time data pipelines, search and analytics engines or query optimizers. If this sounds like you, keep reading!

We’ll trust you to:

  •  Build large distributed systems that will be the heart of our data platform. Your work will enable us to ingest and process trillions of data items
  •  Provide search and analytics across these structured, semi-structured and unstructured datasets
  •  Work on a number of large distributed computing systems such as HBase, MySQL Clusters, Kafka, Spark, Lucene, Solr/Elasticsearch, HAProxy and in-memory stream processors (see the sketch after this list)
  •  Adapt and change many technologies to provide solutions for distributed data storage
  •  Care about synchronization, sub-second latencies, search and discoverability
  •  Maintain fault tolerance and high availability
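
To ground the real-time pipeline bullet above, here is a minimal sketch using the kafka-python client. The topic name, broker address, and message fields are assumptions made for illustration, not the team's actual setup:

    # Consume JSON messages from a Kafka topic and track, per message type,
    # a running count and the worst observed publish-to-broker lag.
    import json
    from collections import defaultdict
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "market-data-events",                    # hypothetical topic
        bootstrap_servers=["localhost:9092"],    # hypothetical broker
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    counts = defaultdict(int)
    max_lag_ms = defaultdict(int)

    for record in consumer:
        event = record.value
        kind = event.get("type", "unknown")
        counts[kind] += 1
        if "publish_ts_ms" in event:
            # record.timestamp is the broker append time in epoch millis
            lag = record.timestamp - event["publish_ts_ms"]
            max_lag_ms[kind] = max(max_lag_ms[kind], lag)
        if sum(counts.values()) % 10_000 == 0:
            print(dict(counts), dict(max_lag_ms))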

You’ll need to have:

  •  5+ years of experience with Java and the JVM, C, and Linux systems, including expertise in low-latency, kernel-level optimizations
  •  A background in software engineering and the ability to program in both compiled languages and dynamic languages such as Python and JavaScript
  •  Expertise in data stores (both transactional and non-transactional) as well as the ability to code in a highly concurrent environment

We’d love to see:

  •  Experience with distributed systems, RESTful architectures and scalable, low-latency systems that provide high availability
  •  Deep knowledge of HBase, Spark, Cassandra, and the Hadoop ecosystem of technologies, or of MySQL/WebScaleSQL and the InnoDB engine
  •  A Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Science, Math or equivalent experience
  •  Deep knowledge of search engines like Apache Lucene and Solr/Elasticsearch

Apply by filling out our skills profile.