With the proliferation of misinformation on the web, automatic misinformation detection methods are becoming an increasingly important subject of study. Large language models have produced the best results among content-based methods, which rely on the text of the article rather than on metadata or network features. However, fine-tuning such a model requires significant training data, which has led to the automatic creation of large-scale misinformation detection datasets. In these datasets, articles are not labelled directly. Rather, each news site is labelled for reliability by an established fact-checking organisation, and every article is subsequently assigned the corresponding label based on the reliability score of the news source in question. A recent paper has explored the biases present in one such dataset, NELA-GT-2018, and shown that the models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. We confirm part of their findings. Beyond studying the characteristics and potential biases of the datasets, we also consider it important to examine how the model architecture influences the results. We therefore explore which text features, or combinations of features, are learned by models based on contextual word embeddings as opposed to basic bag-of-words models. To elucidate this, we perform extensive error analysis aided by the SHAP post-hoc explanation technique on a debiased portion of the dataset. We validate the explanation technique on our inherently interpretable baseline model.
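To make the role of SHAP concrete: SHAP attributes a model's prediction to its input features via Shapley values. The sketch below computes exact Shapley values for a tiny model, with a baseline vector standing in for "missing" features — an illustrative, stdlib-only toy, not the pipeline used in the study:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values over len(x) features.

    For each coalition S not containing feature i, the marginal
    contribution predict(S ∪ {i}) - predict(S) is weighted by
    |S|! * (n - |S| - 1)! / n!. Features outside the coalition
    are replaced by their baseline value ("missing" features).
    """
    n = len(x)

    def value(coalition):
        masked = [x[j] if j in coalition else baseline[j] for j in range(n)]
        return predict(masked)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Sanity check on a linear model: for linear models, Shapley values
# reduce to coefficient * (x_i - baseline_i).
coefs = [2.0, -1.0, 0.5]
model = lambda z: sum(c * v for c, v in zip(coefs, z))
x, base = [1.0, 3.0, 4.0], [0.0, 0.0, 0.0]
print(shapley_values(model, x, base))  # ≈ [2.0, -3.0, 2.0]
```

The linear-model check is what makes an inherently interpretable baseline useful for validating the explanation technique: the "correct" attributions are known in closed form.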
DOCUMENT
Multilevel models (MLMs) are increasingly deployed in industry across different functions. Typical applications involve binary classification within groups or hierarchies based on a set of input features. For transparent and ethical application of such models, sound audit frameworks need to be developed. In this paper, an audit framework for the technical assessment of regression MLMs is proposed. The focus is on three aspects: model, discrimination, and transparency & explainability. These aspects are subsequently divided into sub-aspects. Contributors, such as inter-MLM-group fairness, feature contribution order, and aggregated feature contribution, are identified for each of these sub-aspects. To measure the performance of the contributors, the framework proposes a shortlist of KPIs, among them intergroup individual fairness (DiffInd_MLM) across MLM-groups, probability unexplained (PUX), and percentage of incorrect feature signs (POIFS). A traffic-light risk assessment method is furthermore coupled to these KPIs. For assessing transparency & explainability, different explainability methods (SHAP and LIME) are used and compared with a model-intrinsic method using quantitative methods and machine learning modelling. Using an open-source dataset, a model is trained and tested and the KPIs are computed. It is demonstrated that popular explainability methods, such as SHAP and LIME, underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). For other contributors, such as group fairness and their associated KPIs, similar analyses and calculations have been performed with the aim of adding depth to the proposed audit framework.
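As an illustration of how a KPI such as POIFS might be computed — the function below is one plausible reading of the name, not the paper's exact formula: the share of features whose attributed contribution sign disagrees with a trusted (e.g. model-intrinsic) reference sign:

```python
def poifs(explained, reference):
    """Percentage Of Incorrect Feature Signs: the share of features
    whose attributed sign disagrees with a reference sign (e.g. the
    model-intrinsic coefficient sign). Illustrative interpretation
    of the KPI named in the abstract, not the paper's definition."""
    sign = lambda v: (v > 0) - (v < 0)
    wrong = sum(sign(e) != sign(r) for e, r in zip(explained, reference))
    return 100.0 * wrong / len(reference)

# SHAP-style attributions vs. the model's own coefficient signs:
# one of four features flips sign, so POIFS = 25%.
print(poifs([0.4, -0.2, 0.1, -0.3], [0.5, 0.1, 0.2, -0.6]))  # → 25.0
```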
The framework is expected to assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also benefit providers, users, and assessment bodies, as defined in the European Commission’s proposed Regulation on Artificial Intelligence, in deploying AI systems such as MLMs that are future-proof and aligned with the regulation.
DOCUMENT
Multilevel models using logistic regression (MLogRM) and random forest models (RFM) are increasingly deployed in industry for the purpose of binary classification. The European Commission’s proposed Artificial Intelligence Act (AIA) requires, under certain conditions, that the application of such models be fair, transparent, and ethical, which consequently implies technical assessment of these models. This paper proposes and demonstrates an audit framework for the technical assessment of RFMs and MLogRMs by focussing on model-, discrimination-, and transparency & explainability-related aspects. To measure these aspects, 20 KPIs are proposed and paired with a traffic-light risk assessment method. An open-source dataset is used to train an RFM and an MLogRM, the KPIs are computed, and the results are compared against the traffic lights. The performance of popular explainability methods such as kernel- and tree-SHAP is assessed. The framework is expected to assist regulatory bodies in performing conformity assessments of binary classifiers and also to benefit providers and users deploying such AI systems in complying with the AIA.
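The coupling of a KPI to a traffic-light risk assessment can be sketched as a simple banding function; the thresholds below are illustrative placeholders, not the bands proposed in the paper:

```python
def traffic_light(value, green_max, amber_max):
    """Band a KPI where larger values mean more risk.
    Thresholds are illustrative, not the paper's."""
    if value <= green_max:
        return "green"
    if value <= amber_max:
        return "amber"
    return "red"

# e.g. a sign-error KPI of 25% with illustrative bands:
# green up to 10%, amber up to 30%, red above.
print(traffic_light(25.0, green_max=10.0, amber_max=30.0))  # → amber
```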
DOCUMENT
Many lithographically created optical components, such as photonic crystals, require the creation of periodically repeated structures [1]. The optical properties depend critically on the consistency of the shape and periodicity of the repeated structure. At the same time, the structure and its period may be similar to, or substantially below, the optical diffraction limit, making inspection with optical microscopy difficult. Inspection tools must be able to scan an entire wafer (300 mm diameter) and rapidly identify wafers that fail to meet specifications. However, high resolution and high throughput are often difficult to achieve simultaneously, and a compromise must be made. TeraNova is developing an optical inspection tool that can rapidly image features on wafers. Their product relies on (a) knowledge of what the features should be, and (b) a detailed and accurate model of light diffraction from the wafer surface. This combination allows deviations from the intended features to be identified by modifying the model of the surface features until the calculated diffraction pattern matches the observed pattern. This form of microscopy—known as Fourier microscopy—has the potential to be very rapid and highly accurate. However, the solver, which calculates the wafer features from the diffraction pattern, must be very rapid and precise. To achieve this, a hardware solver will be implemented. The hardware solver must be combined with mechatronic tracking of the absolute wafer position, requiring the automatic identification of fiducial markers. Finally, the problem of computer obsolescence in instrumentation (resulting in security weaknesses) will also be addressed by combining the digital hardware and software into a system-on-a-chip (SoC) to provide a powerful, yet secure, operating environment for the microscope software.
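The inversion loop described above — adjusting the surface model until the simulated diffraction pattern matches the measurement — can be sketched with a toy forward model. The `forward_model` below is a fictitious stand-in parameterised by a single period, not TeraNova's rigorous diffraction solver, and the grid search stands in for the hardware solver:

```python
from math import pi, sin

def forward_model(period, angles):
    """Toy stand-in for a rigorous diffraction solver: a fictitious
    intensity pattern parameterised by a grating period."""
    return [sin(pi * period * a) ** 2 for a in angles]

def fit_period(observed, angles, candidates):
    """Model-based inversion: choose the period whose simulated
    pattern best matches the observed one (least squares)."""
    def err(p):
        sim = forward_model(p, angles)
        return sum((s - o) ** 2 for s, o in zip(sim, observed))
    return min(candidates, key=err)

angles = [i / 100 for i in range(1, 101)]
observed = forward_model(0.7, angles)           # pretend measurement
candidates = [i / 100 for i in range(30, 121)]  # trial periods 0.30 … 1.20
print(fit_period(observed, angles, candidates))  # → 0.7
```

In the real instrument the candidate search is replaced by a fast solver over a far richer feature parameterisation, but the structure — forward model, mismatch metric, parameter update — is the same.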
Many innovative materials and manufacturing methods are currently being developed to help businesses improve their performance, develop new products, and make their existing processes more sustainable. For this purpose, additive manufacturing (AM) technology has been very successful in fabricating complex-shaped products that cannot be manufactured by conventional approaches, and in using novel high-performance materials with more sustainable profiles. The application of bioplastics and biopolymers is growing fast in the 3D printing industry because they are good alternatives to petrochemical products, which have negative environmental impacts; many research studies are therefore exploring and developing new biopolymers and 3D printing techniques for the fabrication of fully biobased products. In particular, 3D printing of smart biopolymers has attracted much attention due to the specific functionalities of the fabricated products: they have a unique ability to recover their original shape from significant plastic deformation when a particular stimulus, such as temperature, is applied. The use of smart biopolymers in 3D printing therefore adds a further dimension (time) to the technology, known as four-dimensional (4D) printing, and highlights the promise of 4D printing for the design and fabrication of smart structures and products. This performance, in combination with specific complex designs such as sandwich structures, allows the production of, for example, impact-resistant stress-absorbing panels and lightweight products for sporting goods, automotive, and many other applications.
In this study, an experimental approach will be applied to fabricate a suitable biopolymer with shape memory behavior and to investigate the impact of design and operational parameters on the functionality of 4D-printed sandwich structures, in particular the stress-absorption rate and shape-recovery behavior.
Nowadays, there is particular attention towards the additive manufacturing of medical devices and instruments. This is due to the unique capability of 3D printing technologies to design and fabricate complex products, such as bone implants, that can be highly customized for individual patients. NiTi shape memory alloys have gained significant attention in various medical applications due to their exceptional superelastic and shape memory properties, which allow them to recover their original shape after deformation. The integration of additive manufacturing technology has revolutionized the design possibilities for NiTi alloys, enabling the fabrication of intricately designed medical devices with precise geometries and tailored functionalities. The AM-SMART project focuses on exploring the suitability of NiTi architected structures, fabricated using laser powder bed fusion (LPBF) technology, for bone implants. This is because NiTi alloys have a lower stiffness than Ti alloys, closely matching that of bone. Additionally, their unique functional performance enables them to dissipate energy and recover their original shape, a further advantage that makes them well suited for bone implants. In this investigation, various NiTi-based architected structures will be developed, featuring diverse cellular designs, and their long-term thermo-mechanical performance will be thoroughly evaluated. The findings of this study are expected to underscore the significant potential of these structures for application as bone implants, showcasing their adaptability for use beyond the medical sector as well.