Atlas Primary is a fast-growing company at the forefront of Data Science. Leveraging our expertise in collecting and analyzing healthcare data, and recognizing the potential of large language models (LLMs) to revolutionize the healthcare industry, Atlas Primary is committed to staying ahead of the curve in the AI-driven landscape.

As AI, particularly LLMs, disrupts knowledge work sectors, including healthcare technology, the future's most successful engineering teams will be compact and AI-centric. This shift towards AI-first thinking will reshape how organizations strategize and operate, emphasizing efficiency and cutting-edge technology.

These are early days, and it is paramount for us to lean into AI. Atlas Primary is taking a two-pronged approach. First, we are using AI to enhance and expand our core business operations, harnessing the technology's power to make us better, faster, and more efficient. Second, we are developing our first LLM application.

We are looking for an AI Engineer to join our company and work closely with the CTO, CEO, and Product leadership team to define and implement a series of AI-empowered applications.

The AI Engineer will:

  • Experiment with new products leveraging AI and LLMs
  • Build products to automate and scale our business operations
  • Reimagine what is possible with a small team by leveraging a variety of new LLM-powered technologies to accelerate all development and implementation

A successful candidate has:

  • A passion for new AI technologies (especially LLMs and agents)
  • An obsession with automation and getting into the weeds
  • Bulletproof backend software engineering foundations
  • The ability to make decisive infrastructure and data processing decisions that scale
  • Strong product sense

Knowledge of particular systems is not required, but to give a sense of how we're approaching the problem, some of the technical skills that matter to us are:

  • Python for backend and data processing. Key frameworks are FastAPI, pandas, Ray, and Airflow
  • GCP for all of our infrastructure and artifacts
  • BigQuery as our warehouse
  • Kubernetes, Terraform, Helm, and Skaffold for our full deployment and lifecycle management
  • Occasionally we also use Node.js on the backend and React on the front end

Proximity to, or a willingness to travel to, our hubs in the San Francisco Bay Area; Atlanta, GA; Princeton, NJ; and Pune, India, is required.