In my previous post on AI engineering I defined the concepts involved in this new discipline and explained that, given the current state of the practice, AI engineers could just as well be called machine learning (ML) engineers. In this post I would like to 1) define our view on the profession of the applied AI engineer and 2) present the toolbox of the AI engineer: tools, methods and techniques to tackle the challenges AI engineers typically face. I end this post with a short overview of related work and future directions, followed by an extensive list of references and additional reading material.
The current set of research methods on ictresearchmethods.nl contains only one research method that refers to machine learning: the “Data analytics” method in the “Lab” strategy. This does not reflect the way of working in ML projects, where data analytics is not a method to answer one question but the main goal of the project. For ML projects, the Data Analytics method should be divided into several smaller steps, each becoming a method of its own. In other words, we should treat the Data Analytics (or, more appropriately, ML engineering) process in the same way the software engineering process is treated in the framework. In the remainder of this post I will briefly discuss each of the existing research methods and how they apply to ML projects. The methods are organized by strategy. In the discussion I will give pointers to relevant tools and literature for ML projects.
Background: The present study investigates the suitability of various treatment outcome indicators for evaluating the performance of mental health institutions that provide care to patients with severe mental illness. Several categorical approaches are compared to a reference indicator (continuous outcome) using pretest-posttest data of the Health of the Nation Outcome Scales (HoNOS). Methods: Data from 10 institutions and 3189 patients were used, comprising outcomes of the first year of treatment by teams providing long-term care. Results: Findings revealed differences between continuous indicators (standardized pre-post difference score ES and ΔT) and categorical indicators (SEM, JTRCI, JTCS, JTRCI&CS, JTrevised) in their ranking of institutions, as well as substantial differences among the categorical indicators; the outcome according to the traditional JT approach was most concordant with the continuous outcome indicators. Conclusions: For research comparing group averages, a continuous outcome indicator such as ES or ΔT is preferred, as this best preserves information from the original variable. Categorical outcomes can be used to illustrate what is accomplished in clinical terms. For categorical outcomes, the classical Jacobson-Truax approach is preferred over the more complex method of Parabiaghi et al. with eight outcome categories. The latter may be valuable in clinical practice as it allows for a more detailed characterization of individual patients.
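To make the indicators concrete: the continuous indicator ES is the standardized pre-post difference, ES = (mean_pre − mean_post) / SD_pre, while the classical Jacobson-Truax approach classifies each individual patient by combining a Reliable Change Index, RCI = (post − pre) / (SD_pre · √2 · √(1 − r)), with a clinical cutoff. The sketch below is not the paper's actual analysis; it only illustrates the standard JT classification for a single patient. The 1.96 threshold follows the usual JT formulation, the assumption that lower HoNOS scores mean better health is mine, and all parameter values in the example are invented for illustration.

```python
import math

def jacobson_truax(pre, post, sd_pre, reliability, cutoff):
    """Classify one patient's pre-post change with the classical
    Jacobson-Truax approach. Assumes lower scores mean better
    health, as on the HoNOS."""
    se_measurement = sd_pre * math.sqrt(1.0 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2.0) * se_measurement                # SE of the pre-post difference
    rci = (post - pre) / s_diff                             # Reliable Change Index

    reliably_improved = rci <= -1.96      # decrease larger than measurement error allows
    reliably_deteriorated = rci >= 1.96
    crossed_cutoff = pre > cutoff and post <= cutoff  # clinically significant change

    if reliably_improved and crossed_cutoff:
        return "recovered"
    if reliably_improved:
        return "improved"
    if reliably_deteriorated:
        return "deteriorated"
    return "unchanged"

# Illustrative values only; sd_pre, reliability and cutoff are made up.
print(jacobson_truax(pre=18, post=9, sd_pre=6.0, reliability=0.80, cutoff=12))  # -> recovered
```

This also shows why the eight-category variant of Parabiaghi et al. can characterize individual patients in more detail: it subdivides these four classical JT categories further by the patient's position relative to the clinical cutoff.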