AIML - Machine Learning Researcher, Post-training for Foundation Models
Apple
Software Engineering, Data Science
Cupertino, CA, USA
USD 181,100-318,400 / year + Equity
Posted on Feb 7, 2026
We are a group of engineers and researchers responsible for building foundation models at Apple. Within this group, the Post-Training work streams focus on transforming powerful pre-trained checkpoints into helpful, high-quality models that power experiences across billions of Apple devices. We are looking for researchers who are passionate about foundation model post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), with experience in core capabilities such as instruction following, tool use, and deep thinking and reasoning.
We believe that the most interesting problems in deep learning research arise when we try to bridge the gap between raw model capability and user-centric utility. This is where the most important breakthroughs in model adaptation and steering come from. You will work with a close-knit, fast-growing team of world-class engineers and researchers to tackle some of the most challenging problems in foundation model post-training. Your work will focus on defining the training recipes that turn a base model into a highly capable assistant. This involves research into existing and novel training data mixes, algorithms, and evaluation methodologies.
- Recipe Development: Design and iterate on end-to-end post-training recipes, combining SFT, Reinforcement Learning, and reasoning regimes to achieve specific model behaviors and capabilities.
- Algorithm Research: Develop and implement novel algorithms for preference optimization, model steering, and safety.
- Data Strategy: Research methods for high-quality human and synthetic data generation, automated data filtering, and curriculum learning to improve instruction following and reasoning capabilities.
- Evaluation: Design robust evaluation frameworks to measure model helpfulness, factuality, and utility, moving beyond static benchmarks to capture real-world performance.
- Collaboration: Work closely with pre-training teams to inform architecture choices and with product teams to understand user requirements.
Minimum Qualifications:
- Demonstrated expertise in deep learning with a focus on LLMs, post-training, or reinforcement learning, backed by a strong publication record or real-world experience and accomplishments in these or closely related domains.
- Proficient programming skills in Python and in at least one deep learning framework such as JAX or PyTorch.
- PhD, or equivalent practical experience, in Computer Science, Machine Learning, or a related technical field.
Preferred Qualifications:
- Proven post-training track record: Specialization in post-training algorithms, techniques, and best practices for large foundation models, with demonstrated results.
- Post-training data: Deep experience with human data labeling, synthetic data generation, and data quality assessment for foundation models.
- Evaluation methodologies: Deep experience evaluating data and training recipes, and a thorough understanding of the iterative model-building process and life cycle.
- Reasoning Research: Experience improving model performance on reasoning tasks (math, coding, logic).
- Scale & Systems: Experience training state-of-the-art large models at scale, familiarity with distributed training challenges, and an understanding of the associated trade-offs.
- Communication and Collaboration: Strong communication skills and a passion for collaboration within and across teams.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.