A model for programmatic assessment in action is proposed that optimizes assessment both for learning and for decision making on learner progress. It is based on a set of assessment principles derived from empirical research. The model specifies cycles of training, assessment, and learner-support activities, completed by intermediate and final moments of evaluation of aggregated data points. Essential to the model is that individual data points are maximized for their learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgment plays an important role in the program. Fundamental to dealing with subjectivity is the notion of sampling and bias reduction. Bias reduction is sought in procedural assessment strategies derived from criteria used in qualitative research. A number of challenges and opportunities around the proposed model are discussed. One of its virtues would be to move beyond the dominant psychometric discourse around individual instruments towards a systems approach to assessment design grounded in empirically based theory.