Big Data Engineer

Location: Flexible

We are looking for a candidate with 3+ years of experience in data engineering and in building and processing complex data pipelines.

Requirements:

  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Practical experience with object-oriented programming in Scala and Java
  • Strong knowledge of big data tools such as Spark, Spark Streaming, Hadoop/HDFS, Kafka, Akka, Flink, …
  • Experience with relational databases (e.g., Postgres, MySQL) and ETL processing
  • Proven expertise with AWS cloud services: EC2, EMR, RDS, S3, …
  • Knowledge of automated testing and deployment of distributed systems
  • Basic understanding of statistics
  • Experience building robust REST APIs


Nice to have:

  • Experience with machine learning libraries: MLlib, scikit-learn
  • Experience with NoSQL databases: HBase, Cassandra
  • Experience with data pipeline and workflow management tools: Airflow
  • Experience with graph databases: Neo4j, …

Apply Now and Jumpstart Your Career with Aidéo