About
We are looking for a savvy Data Engineer to work on next-generation educational systems.
The hire will be responsible for expanding and optimizing our data and data pipeline architecture,
as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. In-depth experience working with AWS and Google Cloud is recommended.
They must be self-directed and comfortable supporting the data needs of multiple teams,
systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing
our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture using tools such as AWS Glue, S3, RDS (MySQL), and CouchDB.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes for Payroll/CRM, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Qualifications for Data Engineer
- Advanced working knowledge of SQL and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytical skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science,
Statistics, Informatics, Information Systems or another quantitative field.
**They should also have experience using the following software/tools:**
* Experience with big data tools: AWS Data Lakes, Glue ETL, etc.
* Experience with relational SQL and NoSQL databases, including MySQL and Apache CouchDB or MongoDB.
* Experience with data pipeline and workflow management tools: Spark, Luigi, etc.
* Experience with AWS cloud services: EC2, EMR, RDS, Redshift
* Experience with stream-processing systems (optional): Storm, Spark Streaming, etc.
* Experience with object-oriented/functional scripting languages: Python, Scala, etc.