Software testing is a key focus of our Software Engineering programme. In the propaedeutic phase, students practise test-driven software development and learn to deliver software together with its tests. As part of the assessment, a performance assessment was developed that makes it possible to assess modelling, programming, and testing in an integrated way. Students turn out to value this new form of assessment positively. In the context of competence-based education, this performance assessment is a valuable addition.
Both Software Engineering and Machine Learning have become recognized disciplines. In this article I analyse the combination of the two: the engineering of machine learning applications. I believe the systematic way of working for machine learning applications differs at certain points from traditional (rule-based) software engineering. The question I set out to investigate is: "How does software engineering change when we develop machine learning applications?" This question is not easy to answer and turns out to be rather new, with few publications on the subject. This article collects what I have found so far.
Author supplied: "Abstract: Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule violating dependencies between modules. Accurate tool support is essential for effective and efficient SACC. This document describes a test approach that may be used to determine how accurate a tested SACCT-tool is with respect to dependency analysis and violation reporting. This technical report is intended as a test manual and describes how a SACCT-tool can be tested. Two separate tests are described: the Benchmark test, and the FreeMind test."
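To make the notion of a rule-violating dependency more concrete, here is a hypothetical two-file Java sketch of the kind of case a benchmark test could contain. The module names, class names, and the rule "presentation may not use dataaccess" are my own illustration and are not taken from the report; an accurate SACC tool would be expected to detect the import, the instantiation, and the method call marked below as violations of that rule.

// File: dataaccess/CustomerDao.java (illustrative stub, hypothetical names)
package dataaccess;

public class CustomerDao {
    public String findCustomerName(int customerId) {
        return "customer-" + customerId; // stub lookup, no real database access
    }
}

// File: presentation/CustomerScreen.java
package presentation;

import dataaccess.CustomerDao;                    // violating dependency: import

public class CustomerScreen {
    public String showCustomerName(int customerId) {
        CustomerDao dao = new CustomerDao();      // violating dependency: instantiation
        return dao.findCustomerName(customerId);  // violating dependency: method call
    }
}

Counting which of the known violating dependencies in such cases a tool reports, and which it misses, is one way a benchmark of this kind can quantify the accuracy of dependency analysis and violation reporting.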