## Responsibilities
- Create and maintain optimal data pipeline architecture
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP 'big data' technologies.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
## Requirements
- Excellent working knowledge of SQL and experience with both relational and NoSQL databases.
- Experience building and optimizing 'big data' data pipelines, architectures and data sets.
- Experience with GCP cloud services: GCS, BigQuery, Google Cloud Dataproc
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
## Salary and compensation
$80,000 — $120,000/year

## Location
🌏 Worldwide