Senior Software Engineer, Platform Data Centralization and Storage

Job Description

About the Attentive Team
Have you ever received a text message from your favorite brand with an incredible offer? Did you know that text message marketing delivers the highest ROI of any marketing channel? And that more customers than ever prefer to connect with brands via text? That is what we do at Attentive. We empower the world’s leading brands to engage with their customers at the right moment, with the right message. Our platform powers more than 400 million messages every day, approaching 100 billion annually.

We’re building big things!  Check out our tech blog here: https://tech.attentive.com/

About the Role
Our Search Platform team is the backbone of Attentive’s data infrastructure, processing, storing, and optimizing data at massive scale and speed. We handle billions of events from over 100 million customers daily, enabling near-real-time data insights and AI-driven capabilities through our Data, Optimization, and ML Platforms. Joining our team offers a high-growth career opportunity to work with some of the world’s most talented engineers in a high-performance and high-impact culture.


What You'll Accomplish
  • Architect high-throughput solutions that power our most critical operations, ensuring scalability and efficiency
  • Expand and enhance our self-service platform, collaborating with cross-functional teams to fuel our AI, ML, and analytics goals
  • Tackle complex distributed data challenges, streamline system integrations, and uphold high standards of quality and governance
  • Champion cutting-edge technologies, keeping our platform at the forefront of industry advancements and enabling strategic outcomes
  • Unify data from diverse systems, paving the way for experimentation and innovation while empowering teams with intuitive tools and frameworks

Your Expertise
  • Proven experience as a software engineer with a focus on high-throughput, scalable systems
  • In-depth knowledge of high-throughput processing technologies such as Hadoop, Spark, Flink, and/or Kafka
  • Proficiency in Java and a strong understanding of object-oriented design, data structures, algorithms, and optimization
  • Development experience integrating and running tools such as Snowflake, Google BigQuery, Databricks Lakehouse, AWS Athena, Apache Trino, or Presto
  • Experience with open-source data storage formats such as Apache Iceberg, Parquet, Arrow, or Hudi
  • Knowledge of data modeling, data access, and data replication techniques, such as change data capture (CDC)
  • A proven track record of architecting applications at scale and maintaining infrastructure as code via Terraform
  • Excitement about new technologies, balanced by choosing them for the right reasons

What We Use
  • Our infrastructure runs primarily in Kubernetes hosted in AWS’s EKS
  • Infrastructure tooling includes Istio, Datadog, Terraform, CloudFlare, and Helm
  • Our backend is Java / Spring Boot microservices, built with Gradle, coupled with DynamoDB, Kinesis, Airflow, Postgres, PlanetScale, and Redis, hosted on AWS
  • Our frontend is built with React and TypeScript, and uses best practices like GraphQL, Storybook, Radix UI, Vite, esbuild, and Playwright
  • Our automation is driven by custom and open-source machine learning models and lots of data, built with Python, Metaflow, Hugging Face 🤗, PyTorch, TensorFlow, and Pandas