
Chicago, United States

Salary: Competitive

Our client is one of the world's most active proprietary trading firms and a key market maker in products listed on exchanges throughout the world. Founded in 1989, the firm is a global financial organization made up of two divisions that operate independently of one another: a securities trading division and an asset management division. The company employs over 600 people and has offices in Amsterdam, New York City, Chicago, Hong Kong and Sydney. The Group is active as a market maker in over 100 trading venues worldwide and provides liquidity in over 200,000 securities through its securities division. The securities division makes markets in the major exchange-traded instruments, including equities, bonds, commodities and currencies, and is a significant liquidity provider on NYSE Arca, NASDAQ, CBOE, BATS, CME Group and ICE, among others.

 

The Role

Our client is looking for a candidate with a combined data scientist and data engineer skill set. The role involves working with large volumes of raw, unstructured trading and market data generated by the firm's option market-making algorithms. You will collaborate with traders and quants to build data-driven quantitative trading strategies and data pipelines. The work is a mixture of problem solving, data architecture, data engineering, data development, serialization, encoding, storage/retrieval and analysis using data-mining techniques, in support of the Quant Trading team.
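By way of illustration only, and not the client's actual code, the sketch below shows the kind of record handling the role describes: parsing a raw, comma-separated trade record into a typed object before serialization and storage. The record layout, field names and class names are assumptions made for the example.

```java
// Minimal illustrative sketch only; the record layout and names are assumptions.
import java.time.Instant;

public class TradeRecordParser {

    // Simple typed holder for one parsed trade event.
    public record TradeEvent(Instant timestamp, String symbol, double price, long quantity) {}

    // Parses one raw line of the assumed form "epochMillis,symbol,price,quantity".
    public static TradeEvent parse(String rawLine) {
        String[] fields = rawLine.split(",");
        return new TradeEvent(
                Instant.ofEpochMilli(Long.parseLong(fields[0])),
                fields[1],
                Double.parseDouble(fields[2]),
                Long.parseLong(fields[3]));
    }

    public static void main(String[] args) {
        // Example usage with a made-up record.
        System.out.println(parse("1718000000000,SPY,541.25,100"));
    }
}
```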

 

The Candidate

  • Prior experience developing data loading/ETL processes for a Hadoop environment (a minimal sketch follows this list).
  • Experience writing clear, efficient, tested code.
  • Experience developing code as part of a wider, cross-disciplinary team.
  • Experience contributing to both program and system architecture.
  • Ability to work in an agile team.

Essential for the role is strong commercial experience in a data/software engineering role, including substantial use of the following technologies and tools:

  • Java (or another significant OO language with a desire to learn Java)
  • SQL
  • Hadoop
  • Commercial experience with Linux, and use of a version control system (preferably Git)
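As a hedged illustration of the data loading/ETL requirement above, here is a simple Hadoop MapReduce mapper that extracts (symbol, notional) pairs from raw CSV trade lines for downstream aggregation and loading. The input layout and class names are assumptions, and the standard hadoop-client dependency is required.

```java
// Illustrative Hadoop ETL step only (not the client's codebase); assumes
// input lines of the form "epochMillis,symbol,price,quantity".
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TradeNotionalMapper
        extends Mapper<LongWritable, Text, Text, DoubleWritable> {

    private final Text symbol = new Text();
    private final DoubleWritable notional = new DoubleWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        if (fields.length != 4) {
            return; // skip malformed lines rather than failing the job
        }
        symbol.set(fields[1]);
        notional.set(Double.parseDouble(fields[2]) * Long.parseLong(fields[3]));
        context.write(symbol, notional);
    }
}
```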

 

Any experience in the following would also be desirable:

  • ETL/data load process development
  • Hive (and other SQL-on-Hadoop tools)
  • Experience dealing with large and/or complex data sets
  • Unit-testing frameworks (JUnit, Mockito, etc.) and Test-Driven Development (a small example test follows this list)
  • Maven
  • Cloud solutions, particularly Amazon Web Services
  • Massively parallel (MPP) database systems such as Teradata
  • Other Hadoop data processing tools (Cascading, Spark, Pig, MapReduce, etc.)
  • Other big data/NoSQL technologies
  • Microsoft SQL Server
  • A Bachelor's or Master's degree in Computer Science, Software Engineering or a related field is preferred.
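As a small illustration of the unit-testing/TDD item above, here is a JUnit 5 test for the hypothetical TradeRecordParser sketched under The Role. The names and behaviour are assumptions, not the client's code.

```java
// Illustrative JUnit 5 test for the hypothetical TradeRecordParser above.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class TradeRecordParserTest {

    @Test
    void parsesWellFormedLine() {
        TradeRecordParser.TradeEvent event =
                TradeRecordParser.parse("1718000000000,SPY,541.25,100");
        assertEquals("SPY", event.symbol());
        assertEquals(541.25, event.price());
        assertEquals(100L, event.quantity());
    }
}
```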


If you would like to be considered for the position of Data Scientist, or wish to discuss the role further, please leave your details below. Your resume will be held in confidence until you connect with a member of our team.
