Architecture Compliance Checking (ACC) helps bridge the gap between architecture and implementation. ACC is an approach to verify the conformance of implemented program code to high-level models of architectural design. Static ACC focuses on the modular software architecture and on the existence of rule-violating dependencies between modules. Accurate tool support is essential for effective and efficient ACC. This paper presents a study on the accuracy of ACC tools regarding dependency analysis and violation reporting. Seven tools were tested and compared by means of a custom-made test application. In addition, the code of the open source system Freemind was used to compare the tools on the number and precision of reported violation and dependency messages. On average, 74 percent of the 34 dependency types in our custom-made test software were reported, while 69 percent of the 109 violating dependencies within a module of Freemind were reported. The test results show large differences between the tools, but all tools could improve the accuracy of the reported dependencies and violations.
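The core check that static ACC tools perform can be sketched as follows. This is a minimal illustration, not the behaviour of any of the seven tested tools; the module names, dependency kinds, and rule set are invented:

```python
# Hypothetical layered architecture: each module may depend only on the
# modules listed for it. These rules are illustrative, not from the study.
ALLOWED = {
    "presentation": {"service"},
    "service": {"domain"},
    "domain": set(),
}

# Hypothetical dependencies extracted from source code, as
# (from_module, to_module, dependency_kind) tuples.
dependencies = [
    ("presentation", "service", "method call"),
    ("presentation", "domain", "import"),       # skips the service layer
    ("service", "domain", "inheritance"),
    ("domain", "service", "field access"),      # upward dependency
]

def check(deps, rules):
    """Return every extracted dependency not permitted by the rules."""
    return [(src, dst, kind) for src, dst, kind in deps
            if dst not in rules.get(src, set())]

for src, dst, kind in check(dependencies, ALLOWED):
    print(f"violation: {src} -> {dst} ({kind})")
```

The study's accuracy question concerns both halves of this sketch: whether the tool extracts all dependency kinds (the `dependencies` list) and whether it reports all rule violations (the output of `check`).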
News media in the Netherlands show great variety in the extent and the ways in which they realize media accountability online, in terms of actor transparency, product transparency and feedback opportunities. It is suggested that even those newsrooms that seem to adhere to transparency and public accountability still need to explore the functionality and application of media accountability instruments (MAI). Both in terms of potentials and pitfalls, newsrooms need to consider what they want to be transparent about and in what ways. To the extent that online innovations are visible, traditional news media seem to experiment, as is the case with newsroom blogs or the hyperlocal journalism project Dichtbij.nl, part of the Telegraaf Company. Various news media have ongoing projects on audience participation, online applications and distribution models. However, since many projects merely aim at finding new applications, processes, platforms and business models, it remains to be seen whether these projects are indeed both reasonably innovative and feasible. The development of an online and therefore immediate, archived, personalized and interactive context offers practical and ethical challenges to Dutch journalism. These challenges bring shifts in its role and responsibility to society: changes occur in what journalists are accountable for, as well as in the ways in which they are accountable. The Dutch media landscape hosts various professional accountability instruments, such as the press council and both profession-wide and outlet-specific codes of ethics, but some of these instruments receive only moderate support. Proactive openness is more the exception than the rule and may well be a distinctive indicator of quality journalism. Although news media often acknowledge the importance of media accountability both offline and online, they often lack the resources or courage to use such instruments, or have different priorities.
This ambiguous position may indicate that, in relation to media accountability online, Dutch news media are caught between hope and fear: that it will either improve their relationship with the public and fuel professional quality, or demand too many resources for too little benefit.
This method paper presents a template solution for text mining of scientific literature using the R tm package. Literature to be analyzed can be collected manually or automatically using the code provided with this paper. Once the literature is collected, text mining is performed in three steps:
• loading and cleaning of text from articles,
• processing, statistical analysis, and clustering, and
• presentation of results using generalized and tailor-made visualizations.
These steps can be applied to a single document, multiple documents, or time-series groups of documents. References are provided to three published, peer-reviewed articles that use the presented text mining methodology. The main advantages of our method are: (1) its suitability for both research and educational purposes, (2) compliance with the Findable, Accessible, Interoperable and Reusable (FAIR) principles, and (3) the availability of code and example data on GitHub under the open-source Apache V2 license.
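The three steps above can be sketched compactly. The published method uses the R tm package; the following is an analogous, deliberately minimal Python illustration with an invented stopword list and toy documents, not the paper's actual code:

```python
# Steps 1-3 of the text-mining template, reduced to their simplest form.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "with"}

def clean(text):
    """Step 1: lower-case, keep only alphabetic tokens, drop stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def term_frequencies(documents):
    """Step 2: aggregate term counts over a document collection."""
    counts = Counter()
    for doc in documents:
        counts.update(clean(doc))
    return counts

docs = [
    "Text mining of scientific literature with the tm package.",
    "Clustering and visualization of mined text.",
]
# Step 3: a (very) simple presentation of the results.
for term, n in term_frequencies(docs).most_common(3):
    print(term, n)
```

In the actual template, step 2 additionally covers statistical analysis and clustering, and step 3 produces proper visualizations; the skeleton of corpus cleaning followed by frequency analysis is the same.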
High-quality waste wood from residents, construction companies and furniture makers currently goes unused, because processing large quantities of irregular pieces of wood of varying dimensions and species is too labour-intensive. Valuable wood becomes worthless waste, contrary to the principles of the circular economy. In CW.Code, Powerhouse Company, Bureau HUNC and Vrijpaleis collaborate with the HvA to investigate how to develop an accessible design tool that facilitates the upcycling of, and value creation from, waste wood. In other projects, the HvA and its partners have made various objects from waste wood: a chair, a reception desk, small furniture, and objects for public space, fabricated with industrial robots. These objects were modelled in 3D using specific algorithms in the widely used design software Rhino and Grasshopper. The project partners now want to investigate how to make these algorithms usable for creative practices through an accessible tool. This tool integrates generative design algorithms and rule sets that take the available waste wood into account and evaluate the ecological, financial and social impact of the resulting designs. The key design parameters can be manipulated by designers and/or end users, making it a valuable aid for co-creating circular applications for waste wood. This research is carried out by the HvA Digital Production Research Group, together with the partners mentioned above. HUNC has experience with urban development using locally felled waste wood. Vrijpaleis provides access to an active local community of makers with strong ties to neighbourhood residents. Powerhouse Company has experience in designing with wood in construction. All three can benefit from smarter circular design tools that integrate available material, production constraints and impact evaluation.
The tool will be developed and tested for two design cases: an indoor furniture object and an outdoor façade element. Findings from these cases will guide the development of the tool. After completion of the project, a beta version will be ready for validation by designers, residents' collectives, and HvA research and education.
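One rule set such a tool could embed is matching required part dimensions against the available waste-wood stock so that new offcut waste is minimized. The following is a hypothetical greedy sketch, not the project's actual algorithm; all lengths are illustrative:

```python
# Greedily assign required part lengths to the closest-fitting pieces of
# waste wood, so the offcut (new waste) per part is as small as possible.

def assign_parts(stock_lengths, part_lengths):
    """Return (assignments, leftover_stock); all lengths in millimetres.

    Each assignment is a (part, chosen_piece, offcut) tuple.
    """
    stock = sorted(stock_lengths)
    assignments = []
    for part in sorted(part_lengths, reverse=True):
        # Smallest stock piece that still fits -> least offcut.
        fitting = [s for s in stock if s >= part]
        if not fitting:
            raise ValueError(f"no stock piece fits part of {part} mm")
        piece = fitting[0]
        stock.remove(piece)
        assignments.append((part, piece, piece - part))
    return assignments, stock

assignments, leftover = assign_parts(
    stock_lengths=[900, 1200, 600, 2000], part_lengths=[850, 1100])
for part, piece, offcut in assignments:
    print(f"part {part} mm from piece {piece} mm (offcut {offcut} mm)")
```

In the envisaged tool, rules of this kind would run behind the generative design algorithms, with the ecological, financial and social impact of a design evaluated from the resulting material use.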
In the coming decades, a substantial number of electric vehicle (EV) chargers need to be installed. The Dutch Climate Accord accordingly urges the preparation of regional-scale spatial programs focused on transport infrastructure for three major metropolitan regions, among them the Amsterdam Metropolitan Area (AMA). Spatial allocation of EV chargers can be approached at two different spatial scales. At the metropolitan scale, given the inter-regional flow of cars, the EV chargers of one neighbourhood can serve visitors from other neighbourhoods during the day. At the neighbourhood scale, EV chargers need to be allocated as close as possible to electricity substations, and within walkable distance of the final destinations of EV drivers during the day and at night, i.e. amenities, jobs, and dwellings. This study aims to bridge the gap in previous studies, which deal with only one of the two scales, by conducting a two-phase study on EV infrastructure. In the first phase, the necessary number of new EV chargers in the 353 4-digit postcodes of the AMA will be calculated. Building on the findings of Phase 1, as a case study, EV chargers will then be allocated to candidate street parking locations in the Amsterdam West borough. The methods of the study are mixed-integer nonlinear programming, accessibility analysis, and street pattern analysis. The study will be based on data from a regional-scale travel behaviour survey and on the locations of dwellings, existing chargers, jobs, amenities, and electricity substations.
The scientific publishing industry is rapidly transitioning towards information analytics. This shift disproportionately benefits large companies, which can afford to deploy digital technologies such as knowledge graphs to index their contents and build advanced search engines. Small and medium publishing enterprises, by contrast, often lack the resources to fully embrace such digital transformations. This divide is acutely felt in the arts, humanities and social sciences. Scholars from these disciplines are largely unable to benefit from modern scientific search engines, because their publishing ecosystem consists of many specialized businesses which cannot, individually, develop comparable services. We propose to start bridging this gap by democratizing access to knowledge graphs, the technology underpinning modern scientific search engines, for small and medium publishers in the arts, humanities and social sciences. Their contents, largely made up of books, already contain rich, structured information, such as references and indexes, which can be automatically mined and interlinked. We plan to develop a framework for extracting this structured information and creating knowledge graphs from it. As much as possible, we will consolidate existing, proven technologies into a single codebase instead of reinventing the wheel. Our consortium is a collaboration between researchers in scientific information mining, Odoma, an AI consulting company, and the publisher Brill, which shares its data and expertise. Brill will be able to immediately put the project results to use to improve its internal processes and services. Furthermore, our results will be published as open source under a commercially friendly license, in order to foster the adoption and further development of the framework by other publishers.
Ultimately, our proposal is an example of industry innovation where, instead of scaling up, we scale wide by creating a common resource which many small players can then use and expand upon.
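The core idea of mining structured information already present in books and interlinking it into a knowledge graph can be shown in miniature. This is a toy sketch, not the project's actual pipeline; the reference format, example entries, and predicate names are all assumptions:

```python
# Mine author/year/title/publisher from reference strings and link them
# into a small knowledge graph of (subject, predicate, object) triples.
import re

references = [
    "Eco, U. (1962). Opera aperta. Bompiani.",
    "Eco, U. (1975). Trattato di semiotica generale. Bompiani.",
]

def mine_reference(ref):
    """Extract the fields of one 'Author (Year). Title. Publisher.' string."""
    m = re.match(r"(?P<author>[^(]+)\((?P<year>\d{4})\)\.\s*"
                 r"(?P<title>[^.]+)\.\s*(?P<publisher>[^.]+)\.", ref)
    return m.groupdict() if m else None

def build_graph(refs):
    triples = set()
    for ref in map(mine_reference, refs):
        if ref is None:
            continue  # unparseable reference: skip rather than guess
        work = ref["title"].strip()
        triples.add((work, "written_by", ref["author"].strip()))
        triples.add((work, "published_in", ref["year"]))
        triples.add((work, "published_by", ref["publisher"].strip()))
    return triples

graph = build_graph(references)
# Works by the same author or publisher are now linked through shared nodes,
# which is what enables graph-based search across a publisher's catalogue.
```

A production framework would of course handle many citation styles, disambiguate entities, and emit a standard graph format such as RDF, but the extract-then-interlink structure is the same.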