Data Engineer

Job Description

We are seeking a Data Engineer to join our dynamic team. The ideal candidate is an enthusiastic problem-solver who excels at building scalable data systems and has hands-on experience with Databricks, Looker, AWS, MongoDB, PostgreSQL, and Node.js. You will work alongside sales, customer success, and engineering to design, implement, and maintain a robust data infrastructure that powers our analytics and platform offerings.

Key Responsibilities

  1. Data Pipeline & Integration
    • Design, build, and maintain end-to-end data pipelines using Databricks (Spark) for efficient data ingestion, transformation, and processing (see the illustrative sketch following this list).
    • Integrate data from various structured and unstructured sources, including medical imaging systems, EMRs, and external APIs.
  2. Analytics & Visualization
    • Collaborate with the analytics team to create, optimize, and maintain dashboards in Looker.
    • Implement best practices in data modeling and visualization to deliver actionable insights.
  3. Cloud Infrastructure
    • Deploy and manage cloud-based solutions on AWS (e.g., S3, EMR, Lambda, EC2) to ensure scalability, availability, and cost-efficiency.
    • Develop and maintain CI/CD pipelines for data-related services and applications.
  4. Database Management
    • Oversee MongoDB and PostgreSQL databases, including schema design, indexing, and performance tuning.
    • Ensure data integrity, availability, and optimized querying for both transactional and analytical workloads.
  5. Backend Development & APIs
    • Utilize Node.js to build and maintain RESTful APIs and microservices for data ingestion, transformation, and application integration.
    • Implement robust error handling, logging, and monitoring frameworks to ensure reliability and transparency.
  6. Security & Compliance
    • Adhere to healthcare compliance requirements (e.g., HIPAA) and best practices for data privacy and security.
    • Implement data governance frameworks to maintain data integrity and confidentiality.
  7. Collaboration & Documentation
    • Work cross-functionally with data scientists, product managers, and other engineering teams to gather requirements and define data workflows.
    • Document data pipelines, system architecture, and processes for internal and external stakeholders.
  8. Innovation & Continuous Improvement
    • Evaluate new technologies and methodologies to enhance data processing performance and scalability.
    • Provide recommendations on emerging trends in data engineering, analytics, and cloud infrastructure.
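
To give candidates a concrete picture of responsibility 1, here is a minimal, illustrative PySpark sketch of the kind of ingest-transform-load step you would build on Databricks. All bucket paths, column names, and schema details are hypothetical placeholders rather than references to our actual systems, and the Parquet write stands in for the Delta tables more commonly used on Databricks.

    # Minimal PySpark pipeline sketch (all paths and columns are hypothetical).
    # On Databricks a `spark` session is provided; the builder call below also
    # lets the script run locally for experimentation.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example_ingest").getOrCreate()

    # Ingest: read raw JSON events from a placeholder S3 landing zone.
    raw = spark.read.json("s3://example-bucket/landing/events/")

    # Transform: normalize timestamps, derive a partition date,
    # drop malformed rows, and de-duplicate on the event identifier.
    curated = (
        raw
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .withColumn("event_date", F.to_date("event_ts"))
        .filter(F.col("patient_id").isNotNull())
        .dropDuplicates(["event_id"])
    )

    # Load: write a partitioned curated dataset (Parquet as a generic stand-in).
    (curated
        .write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/events/"))

In practice, a job like this would be scheduled in Databricks, instrumented with data-quality checks, and its curated output modeled in Looker for dashboarding.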

Requirements

  • Education & Experience
    • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
    • 3+ years of professional experience in data engineering or a similar role.
  • Technical Skills
    • Databricks (Spark): Proven expertise in building large-scale data pipelines.
    • Looker: Experience in creating dashboards, data models, and self-service analytics solutions.
    • AWS: Proficient with core services such as S3, EMR, Lambda, IAM, and EC2.
    • MongoDB & PostgreSQL: Demonstrated ability to design schemas, optimize queries, and manage high-volume databases.
    • Node.js: Skilled in developing RESTful APIs, microservices, and backend applications.
    • SQL & Scripting: Strong SQL skills, plus familiarity with Python, Scala, or Java for data-related tasks.
  • Soft Skills
    • Excellent communication and team collaboration abilities.
    • Strong problem-solving aptitude and analytical thinking.
    • Detail-oriented, with a focus on delivering reliable, high-quality solutions.
  • Preferred
    • Experience in healthcare or medical imaging (e.g., DICOM, HL7/FHIR).
    • Familiarity with DevOps tools (Docker, Kubernetes, Terraform) and CI/CD pipelines.
    • Knowledge of machine learning workflows and MLOps practices.

Benefits

  • Health Care Plan (Medical, Dental & Vision)
  • Retirement Plan (401(k), IRA)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Training & Development
  • Work From Home