Senior Data Engineer

Optoro

Washington, DC

Please mention No Whiteboard if you apply!
I'm a one-man team looking to improve tech interviews, and could use any support! 😄


Interview Process

1. Take-home exercise. 2. Onsite review of your code.

Programming Languages Mentioned

Python, SQL


About Optoro

Optoro is a fast-growing technology company that is revolutionizing the retail industry. Every year, more than 15% of retail goods are returned or simply never sell. This creates tons of unnecessary waste and costs retailers billions.

Our mission is to make retail more sustainable by eliminating all waste from returns. Our technology platform connects a seamless online returns experience with efficient supply chain processing and best-in-class reCommerce, so that retailers can improve outcomes across all points of the returns lifecycle.

Backed by some of the top investors in the country - including Kleiner Perkins, Revolution Growth, and UPS - Optoro is powered by its collaborative, unconventional, and resourceful employees who love solving big problems. We are looking for individuals with similar creativity and energy to help build a lasting company focused on the triple bottom line.

Job Purpose:

The Data Engineering team empowers data-driven decision making by developing and maintaining a centralized data platform to provide accurate, comprehensive, and timely data across the organization. 

As a Senior Data Engineer, you will solve technical data problems at scale to benefit the company. You will collaborate with many Optoro teams to ensure necessary data reaches the centralized data warehouse. Then, you will transform the data and create pipelines for consumption by end users. The work you do will enable Optoro to optimize returns through data and provide enhanced visibility into routing decisions for some of the largest retail enterprises in the world. You will report to the Associate Director of Engineering.

Responsibilities:

  • Partner with analytics, data science, and product engineering teams to ensure external data sources are democratized (accessible, documented, and unified)
  • Establish a strong foundation for the Data teams to leverage Google Cloud Platform 
  • Create automated data pipeline frameworks that provide data consumers with clean data ready for analysis
  • Resolve internal and external data exceptions in a timely and accurate manner
  • Catch bugs and style issues in code reviews
  • Learn new data processes and share knowledge with the team and cross-collaborators  
  • Continue professional learning beyond data-focused development and into special interest projects
  • Lead the team to scope and deliver projects that drive data value
  • Partner with and mentor other data and analytics engineers

To be successful in this role, you must:

  • Be a change agent and drive innovation with your own ideas
  • Collaborate with data team members to design and implement improvement approaches
  • Constantly improve multi-environment data flow quality, security, and performance
  • Continually keep up with advancements in data engineering practices
  • Enjoy helping teams push the boundaries of analytical insights, creating new product features using data, and powering machine learning models

Qualifications

  • Bachelor’s degree in Computer Science or a closely related field, or a foreign equivalent
  • 4+ years of software development experience
  • Strong technical and data-focused abilities, including:
    • Administrative-level data warehouse management, including cost analysis and usage projections
    • Experience with Google Cloud Platform or AWS and data-related tools
    • Experience with distributed event data infrastructure (Kafka, PubSub, Avro schemas)
    • Proficient in SQL with standard database technologies like PostgreSQL, Snowflake, and BigQuery
    • Knowledge of CI/CD pipelines
    • Experience with building and maintaining Python applications
    • Hands-on experience with data ELT and scheduling tooling (Airflow, dbt)
    • Proven track record of delivering value to organizations through data-driven research and analysis
    • Hands-on experience deploying production quality code in a cloud environment, preferably GCP
  • Top-tier communication skills, including:
    • Extensive experience communicating technical data concepts to non-technical audiences
    • Demonstrated capability extracting necessary information from meetings and written communications
    • Understanding of how to navigate conflicting priorities across different teams within the company
    • Technical documentation background, maintaining and creating architectural design diagrams as needed

Preferred Qualifications

  • Experience working in ML pipelines and platforms
  • DataFlow and DataProc experience
  • Development experience with Docker and Kubernetes
  • Data lake experience
  • Automation in streaming data pipelines (Beam, Dataflow)

Perks: 

  • Competitive base salary, benefits and equity plans
  • Flexible paid time off
  • Volunteer paid time off
  • Perks for Parks Program
  • Summer Fridays
  • Health, dental, vision, and life insurance
  • 401k match
  • Sabbatical at 5- and 10-year anniversaries
  • Outside learning opportunities for career development
  • Opportunity to work with people who are passionate about what they do and also like to have fun

Optoro is an equal opportunity employer.

