We’re looking for a Senior Data Engineer to join JumpCloud’s Data Enablement team. Data Enablement’s vision is for data to drive JumpCloud and our customers. The team’s current mission is to put in place the foundational technology and processes to uplevel the data capabilities of our product and our Data Warehouse/Lakehouse.
We are introducing an event-based architecture, developing and refining a data model that supports JumpCloud’s growth strategy, and modernizing our Data Warehouse. A successful data engineer will exhibit an entrepreneurial spirit, enjoy tackling data engineering problems that few others can solve, and help shape the future of JumpCloud’s data engineering, performance reporting, and data governance capabilities.
Come be a part of an exciting new team where you will work on challenging projects and rich data sets while developing valuable skills. This role involves frequent engagement with analytics partners, data/platform engineering, and product engineering to mature our data model, pipelines, and data practices. The role reports to the Senior Manager of Data.
This is a senior-level position.
What you'll be doing:
As part of the Data Enablement team, and of the JumpCloud engineering organization as a whole, you will be responsible for providing critical data infrastructure and systems for multiple areas of the business, including Business Analysis, Product Development, Engineering, Finance, Sales, and Executive Strategy.
On a day-to-day basis, as a senior-level data engineer, you may be asked to:
Interface with stakeholders to define needs and develop strategies for providing data
Integrate technologies such as Airflow, Python, and Kafka (see the sketch after this list)
Plan, build, and maintain data pipelines from internal and external data sources
Implement data observability and monitoring in the pipeline and in the warehouse
Work with appropriate teams to ensure data security and data compliance
Guide data analysts to ensure clean delivery of data
You will work with other senior-level engineers and architects, with the goal of achieving top-level proficiency in core data engineering skills and business functions
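To give a concrete flavor of the pipeline work described above, here is a minimal, hypothetical sketch of an Airflow DAG, assuming Apache Airflow 2.x; the DAG name, task names, placeholder bodies, and hourly schedule are illustrative assumptions, not an actual JumpCloud pipeline.

    # A minimal, hypothetical sketch of an hourly event pipeline (Airflow 2.x).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_events():
        # Placeholder: a real pipeline might consume a batch of records
        # from a Kafka topic and stage them in object storage.
        print("extracting events")


    def load_to_warehouse():
        # Placeholder: a real pipeline might load the staged records into
        # a warehouse table (e.g., Snowflake) for downstream analysts.
        print("loading to warehouse")


    with DAG(
        dag_id="example_events_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
        load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

        extract >> load  # extract must finish before the load runs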
You have:
Extensive hands-on experience building scalable data solutions with complex, fast-moving data sets
Ability to lead the technology on small to large projects from start to finish
Strong experience with Cloud Data Warehouses and Data Lake architectures and implementations
Proven proficiency in data modeling and database design, with an emphasis on designing optimized self-service data solutions
Experience with both batch and streaming data pipelines and ELT processes
Ability to work and communicate effectively with other engineers and with both technical and non-technical business stakeholders
Ability to quickly integrate new technologies and industry best practices into your skill set
Expert-level SQL skills
Proficiency with Python and its tooling and ecosystem, applying strong software engineering techniques
Experience working with some of these: message brokers, data sync/mirroring tools, stream and batch processors, data orchestrators and workflow engines
Nice to haves:
Python 3 and Go (golang) software development, following general software engineering principles
SQL for data transformation and analysis, with optimization and tuning in mind
Snowflake data warehouses (or equivalent)
Dremio data lakehouses
Apache Airflow
Apache Kafka (and its supporting tools)
Experience building stream- and batch-processing big data systems
Experience building observable data systems
Experience with basic data modeling and data architecture
Experience with cloud data storage techniques
Familiarity with data storage formats, such as JSON/Avro/Protobuf/Parquet/Iceberg
Ability to work effectively both independently and as part of the data engineering team
Experience with Data Governance, including data contracts and schema management
Experience with Data Security standards including RBAC and sensitive data handling