Big Data Platform Engineer - Data Technologies


Company: Bloomberg LP

Location: New York City and Princeton, NJ


Bloomberg runs on data. It’s our business and our product. It’s why thousands of companies partner with us. We're nearing one petabyte and growing, with no end in sight. Our data captures who, what, when, where and why our clients use Bloomberg products.

The Bloomberg Big Data Services engineering team (or BBDS for short) provides a software platform for hosting large datasets. It’s a mature platform complete with search, analytics and real-time pipeline processing capabilities. The system scales out to petabytes while maintaining low latency, availability and immediate discoverability by clients. This puts us in an enviable position to address the unique challenges of financial markets.

Maybe you love solving intricate engineering issues with large systems. Or you like to dig into interesting problems around platform APIs, real-time data pipelines, search and analytics engines or query optimizers. If this sounds like you, keep reading!

We’ll trust you to:

  •  Build large distributed systems that will be the heart of our data platform. Your work will enable us to ingest and process trillions of data items
  •  Provide search and analytics across these structured, semi-structured and unstructured datasets
  •  Work on a number of large distributed computing systems such as HBase, MySQL Clusters, Kafka, Spark, Lucene, Solr/Elasticsearch, HAProxy and in-memory stream processors
  •  Adapt and change many technologies to provide solutions for distributed data storage
  •  Care about synchronization, sub-second latencies, search and discoverability
  •  Maintain fault tolerance and high availability

You’ll need to have:

  •  5+ years of experience with Java and the JVM, C, and Linux systems, including expertise in low-latency, kernel-level optimizations
  •  A background in software engineering and the ability to program in both compiled languages and dynamic languages such as Python and JavaScript
  •  Expertise in data stores (both transactional and non-transactional) as well as the ability to code in a highly concurrent environment

We’d love to see:

  •  Experience with distributed systems, RESTful architectures and scalable, low-latency systems that provide high availability
  •  Deep knowledge of HBase, Spark, Cassandra and the Hadoop ecosystem of technologies or MySQL/WebScaleSQL and InnoDB engines
  •  A Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Science, Math or equivalent experience
  •  Deep knowledge of search engines like Apache Lucene and Solr/Elasticsearch

Apply by filling out our skills profile.

Infrastructure Production Developer Job

Location: Washington, DC, USA

Company: (Disclosed after submission)

Job Requisition Number: 10-002

The Role:

The company's Infrastructure Production group in R&D delivers a wide range of technologies for its environment. The team builds out common services that every team can use to monitor, visualize and diagnose their applications and infrastructure. We are also at the forefront of introducing modern technology practices within the organization, and we assist all teams with implementation, automation and design. The Production Engineering team is new, one of the most fast-paced in the environment, and its work will soon be used widely across the entire company. You will have the opportunity to be a part of a large cultural shift within the organization. If you like large-scale systems, billions of data points a day, automating all the things, hacking on open-source software and making a cultural impact, ask us where to sign up.

What You'll Do:

- Design, architect, automate and deliver large-scale, production-ready services for employees to consume.
- Build internal tools to monitor, visualize and diagnose all aspects of applications and hardware in the stack.
- Work closely with our product and platform teams on the architecture, design and scaling challenges they may have.
- Help teams replace legacy software and design patterns with modern technologies.
- Develop and maintain documentation, training and SLAs for managed infrastructure.


What You'll Need:

- Minimum 2-3 years of experience building similar systems
- Experience with large-scale data processing
- Previous experience automating and implementing large-scale, fault-tolerant distributed systems
- Experience with physical hardware and the provisioning process
- Experience working with open-source software

Common Tools we use:

- Ruby / Go
- Linux
- Kafka
- Hadoop / Zookeeper / HBase
- Mesos
- Icinga / OpenTSDB
- Chef

The Company:

A leading company in global business, financial information and news, it gives influential decision makers a critical edge by connecting them to a dynamic network of information, people and ideas. The company's strength - delivering data, news and analytics through innovative technology, quickly and accurately - is at the core of its professional service, which provides real-time financial information to more than 315,000 subscribers globally. Its enterprise solutions build on that core strength, leveraging technology to allow customers to access, integrate, distribute and manage data and information across organizations more efficiently and effectively. The company also provides data, news and analytics to decision makers in industries beyond finance, and its multimedia news platform - delivered through the professional service, television, radio, mobile, the Internet and three magazines - covers the world with more than 2,400 news and multimedia professionals at more than 150 bureaus in 73 countries.

Nearest Major Market: Washington DC