With the proliferation of misinformation on the web, automatic methods for detecting misinformation are becoming an increasingly important subject of study. If automatic misinformation detection is to be applied in a real-world setting, the methods being used must be validated. Large language models (LLMs) have produced the best results among text-based methods. However, fine-tuning such a model requires a significant amount of training data, which has led to the automatic creation of large-scale misinformation detection datasets. In this paper, we explore the biases present in one such dataset for misinformation detection in English, NELA-GT-2019. We find that models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. Furthermore, we use SHAP to interpret the outputs of a fine-tuned LLM and validate the explanation method against our inherently interpretable baseline. We critically analyze the suitability of SHAP for text applications by comparing its outputs to the most important features of our logistic regression models.
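A minimal sketch of the comparison described above, assuming a scikit-learn TF-IDF + logistic regression baseline and the shap library's support for Hugging Face text-classification pipelines; the model name, example texts, and label convention are placeholders, not the paper's actual setup.

import numpy as np
import shap
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["example reliable article text ...", "example unreliable article text ..."]
labels = [0, 1]  # hypothetical convention: 0 = reliable, 1 = unreliable

# Inherently interpretable baseline: TF-IDF features + logistic regression.
vectorizer = TfidfVectorizer(min_df=1)
X = vectorizer.fit_transform(texts)
baseline = LogisticRegression().fit(X, labels)

# Most important baseline features = largest absolute coefficients.
feature_names = vectorizer.get_feature_names_out()
top = np.argsort(np.abs(baseline.coef_[0]))[::-1][:20]
print("Top logistic regression features:", feature_names[top])

# Post-hoc explanation of a fine-tuned classifier with SHAP.
# "my-finetuned-model" is a placeholder for the actual fine-tuned checkpoint.
clf = pipeline("text-classification", model="my-finetuned-model", top_k=None)
explainer = shap.Explainer(clf)
explanation = explainer([texts[0]])
# explanation[0].data holds the tokens, explanation[0].values their per-class
# attributions, which can then be compared with the baseline's top features.
print(explanation[0].data)
print(explanation[0].values)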
With the proliferation of misinformation on the web, automatic misinformation detection methods are becoming an increasingly important subject of study. Large language models have produced the best results among content-based methods, which rely on the text of the article rather than on metadata or network features. However, fine-tuning such a model requires a significant amount of training data, which has led to the automatic creation of large-scale misinformation detection datasets. In these datasets, articles are not labelled directly: each news site is labelled for reliability by an established fact-checking organisation, and every article is then assigned the label corresponding to the reliability score of its source. A recent paper has explored the biases present in one such dataset, NELA-GT-2018, and shown that models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. We confirm part of their findings. Apart from studying the characteristics and potential biases of the datasets, we also consider it important to examine how the model architecture influences the results. We therefore explore which text features, or combinations of features, are learned by models based on contextual word embeddings as opposed to basic bag-of-words models. To elucidate this, we perform extensive error analysis, aided by the SHAP post-hoc explanation technique, on a debiased portion of the dataset, and we validate the explanation technique on our inherently interpretable baseline model.
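The distant labelling scheme and the source-bias concern described above can be illustrated with a small sketch; the column names, reliability values, and the source-disjoint split below are illustrative assumptions, not the exact procedure used to construct or debias the dataset.

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

articles = pd.DataFrame({
    "source": ["siteA", "siteA", "siteB", "siteC"],
    "text": ["...", "...", "...", "..."],
})
# Source-level reliability labels from a fact-checking organisation
# (values are made up for illustration).
source_reliability = {"siteA": "reliable", "siteB": "unreliable", "siteC": "reliable"}

# Every article inherits the label of its source.
articles["label"] = articles["source"].map(source_reliability)

# Keep sources disjoint between train and test, so a model cannot succeed
# merely by recognising the style of sources it has already seen.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(splitter.split(articles, groups=articles["source"]))
train, test = articles.iloc[train_idx], articles.iloc[test_idx]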
Artificial Intelligence (AI) offers organizations unprecedented opportunities. However, one of the risks of using AI is that its outcomes and inner workings are not intelligible. In industries where trust is critical, such as healthcare and finance, explainable AI (XAI) is a necessity. However, implementing XAI is not straightforward, as it requires addressing both technical and social aspects. Previous studies on XAI have focused primarily on either the technical or the social aspects and lacked a practical perspective. This study aims to empirically examine the XAI-related aspects faced by developers, users, and managers of AI systems during the development of such systems. To this end, a multiple case study was conducted in two Dutch financial services companies using four use cases. Our findings reveal a wide range of aspects that must be considered during XAI implementation, which we grouped and integrated into a conceptual model. This model helps practitioners make informed decisions when developing XAI. We argue that the diversity of aspects to consider necessitates an XAI “by design” approach, especially for high-risk use cases in high-stakes industries such as finance, public services, and healthcare. As such, the conceptual model offers a taxonomy for the method engineering of XAI-related methods, techniques, and tools.
Many lithographically created optical components, such as photonic crystals, require the creation of periodically repeated structures [1]. The optical properties depend critically on the consistency of the shape and periodicity of the repeated structure. At the same time, the structure and its period may be close to, or substantially below, the optical diffraction limit, making inspection with optical microscopy difficult. Inspection tools must be able to scan an entire wafer (300 mm diameter) and rapidly identify wafers that fail to meet specifications. However, high resolution and high throughput are often difficult to achieve simultaneously, and a compromise must be made. TeraNova is developing an optical inspection tool that can rapidly image features on wafers. Their product relies on (a) knowledge of what the features should be, and (b) a detailed and accurate model of light diffraction from the wafer surface. This combination allows deviations from the intended features to be identified by modifying the model of the surface features until the calculated diffraction pattern matches the observed pattern. This form of microscopy, known as Fourier microscopy, has the potential to be very rapid and highly accurate. However, the solver, which calculates the wafer features from the diffraction pattern, must be very fast and precise. To achieve this, a hardware solver will be implemented. The hardware solver must be combined with mechatronic tracking of the absolute wafer position, requiring the automatic identification of fiduciary markers. Finally, the problem of computer obsolescence in instrumentation (resulting in security weaknesses) will also be addressed by combining the digital hardware and software into a system-on-a-chip (SoC) to provide a powerful yet secure operating environment for the microscope software.
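The model-fitting idea behind this approach can be sketched as an inverse problem: adjust the parameters of a surface model until the predicted diffraction pattern matches the measurement. The toy forward model, parameters, and least-squares solver below are stand-ins for illustration only, not TeraNova's actual diffraction model or hardware solver.

import numpy as np
from scipy.optimize import least_squares

angles = np.linspace(-0.5, 0.5, 200)  # detector angles (arbitrary units)

def predicted_pattern(params, angles):
    # Toy forward model: diffraction intensity from a periodic structure
    # with pitch and duty-cycle parameters (placeholder physics).
    pitch, duty = params
    return np.sinc(duty * angles * pitch) ** 2

# "Observed" pattern, simulated here from known ground-truth parameters plus noise.
true_params = np.array([10.0, 0.4])
observed = predicted_pattern(true_params, angles) \
    + 0.01 * np.random.default_rng(0).normal(size=angles.size)

def residuals(params):
    return predicted_pattern(params, angles) - observed

# Solve the inverse problem: find surface parameters whose predicted
# diffraction pattern best matches the measurement.
fit = least_squares(residuals, x0=[8.0, 0.5])
print("Recovered parameters:", fit.x)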
Currently, many novel materials and manufacturing methods are being developed to help businesses improve their performance, develop new products, and make their existing processes more sustainable. For this purpose, additive manufacturing (AM) technology has been very successful in fabricating complex-shaped products that cannot be manufactured by conventional approaches, and in using novel high-performance materials with more sustainable profiles. The application of bioplastics and biopolymers is growing fast in the 3D printing industry, since they are good alternatives to petrochemical products with negative environmental impacts; many research studies are therefore exploring and developing new biopolymers and 3D printing techniques for the fabrication of fully biobased products. In particular, 3D printing of smart biopolymers has attracted much attention due to the specific functionalities of the fabricated products: they have a unique ability to recover their original shape from significant plastic deformation when a particular stimulus, such as temperature, is applied. Using smart biopolymers in the 3D printing process therefore adds an additional dimension (time) to this technology, called four-dimensional (4D) printing, and highlights the promise of 4D printing for the design and fabrication of smart structures and products. In combination with specific complex designs, such as sandwich structures, this behavior allows the production of, for example, impact-resistant stress-absorbing panels and lightweight products for sporting goods, automotive, and many other applications. In this study, an experimental approach will be applied to fabricate a suitable biopolymer with shape memory behavior and to investigate the impact of design and operational parameters on the functionality of 4D-printed sandwich structures, in particular the stress absorption rate and shape recovery behavior.
Nowadays, particular attention is being paid to the additive manufacturing of medical devices and instruments. This is because of the unique capability of 3D printing technologies to design and fabricate complex products, such as bone implants, that can be highly customized for individual patients. NiTi shape memory alloys have gained significant attention in various medical applications due to their exceptional superelastic and shape memory properties, which allow them to recover their original shape after deformation. The integration of additive manufacturing technology has revolutionized the design possibilities for NiTi alloys, enabling the fabrication of intricately designed medical devices with precise geometries and tailored functionalities. The AM-SMART project is focused on exploring the suitability of NiTi architected structures for bone implants fabricated using laser powder bed fusion (LPBF) technology. This is because NiTi alloys have a lower stiffness than Ti alloys, closely matching the stiffness of bone. Additionally, their unique functional performance enables them to dissipate energy and recover their original shape, another advantage that makes them well suited for bone implants. In this investigation, various NiTi-based architected structures will be developed, featuring diverse cellular designs, and their long-term thermo-mechanical performance will be thoroughly evaluated. The findings of this study will underscore the potential of these structures for use as bone implants and showcase their adaptability beyond the medical sector.