About Passport:
At Passport, we empower brands to reach their global potential by delivering the #1 international solutions for direct-to-consumer businesses. What sets us apart is our expertise in international shipping—enabling brands like Carpe, OneSkin, Rhode, GORUCK, and Wildflower Cases to scale globally with ease. Our extensive network, in-house team of logistics and ecommerce experts, integrations with Shopify and other ecommerce platforms, and user-friendly portal make us the trusted partner for growth-focused brands looking to expand internationally and create seamless cross-border experiences.
About you and the role:
Passport is looking for a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will work closely with Developers, Data Analysts, and DevOps teams to ensure efficient data flow, storage, and processing, enabling data-driven decision-making across the organization.
What you'll be doing:
- Design, develop, and optimize ETL/ELT pipelines for batch and real-time data processing.
- Build and maintain data warehouses/lakes (Snowflake) and databases (SQL/NoSQL).
- Implement data integration solutions (dbt, Fivetran, Apache Airflow) to connect disparate systems.
- Ensure data quality, reliability, and performance through monitoring, testing, and troubleshooting.
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
- Optimize data storage, retrieval, and processing for cost-efficiency and speed.
- Adhere to data governance, security, and compliance standards (e.g., GDPR, HIPAA).
Requirements:
- 2+ years of experience as a Data Engineer (or Database Administrator)
- A college graduate with a technical degree
- Proficient in English reading and writing
- Strong proficiency in SQL, with experience in relational databases (MySQL, MariaDB, Snowflake) and NoSQL stores (Elasticsearch, OpenSearch, Redis)
- Understanding of how monitoring systems operate (Prometheus, Grafana, CloudWatch)
- Programming skills in Python and Bash for data processing
- Knowledge of cloud platforms (AWS) and their data services (S3, CloudWatch, Lambda, SNS, etc.)
- Familiarity with data orchestration tools (dbt, Airflow) and workflow automation
- Experience with data modeling, schema design, and performance tuning
- Ability to communicate clearly and effectively and to collaborate with other teams
- Transparency and accuracy in task management
- Ability to adapt quickly to existing code styles, practices, and conventions, and to take an active role in their continuous review and improvement
- Autonomy: you improve things with little guidance, but never hesitate to ask clarifying questions about ambiguous points
- Accountability: you take ownership of your work and the answers you provide to others, and you learn from your mistakes
A sneak peek into our perks & benefits:
- Competitive cash and equity packages
- Annual software stipend
- 100% remote work environment
- Paid Time Off
- Paid Parental Leave
- Monthly team get-togethers - bring on the Zoom comedians, pop-a-shot contests, and sip ’n paints!
- Quarterly team (virtual) gatherings and annual team offsites
- Learning & Development Fund for upskilling or products to improve your day-to-day work life
- One-time remote work stipend to up your WFH game
- Teammates around the world in 8 different time zones!