Key to reinforcement learning in multi-agent systems is the ability to exploit the fact that agents directly influence only a small subset of the other agents. Such loose couplings are often modelled using a graphical model: a coordination graph. Finding an (approximately) optimal joint action for a given coordination graph is therefore a central subroutine in cooperative multi-agent reinforcement learning (MARL). Much research in MARL focuses on how to gradually update the parameters of the coordination graph, whilst leaving the solving of the coordination graph to a known, typically exact and generic, subroutine. However, exact methods (e.g., Variable Elimination) do not scale well, and generic methods do not exploit the MARL setting of gradually updating a coordination graph and recomputing the joint action to select. In this paper, we examine what happens if we use a heuristic method, i.e., local search, to select joint actions in MARL, and whether we can use the outcome of this local search from a previous time-step to speed up and improve local search. We show empirically that by using local search, we can scale up to many agents and complex coordination graphs, and that by reusing joint actions from the previous time-step to initialise local search, we can both improve the quality of the joint actions found and the speed with which these joint actions are found.
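The abstract does not pin down a specific local-search variant; as a rough sketch of the idea (the helper names and toy payoff tables below are illustrative, not the authors' code), greedy coordinate ascent over a factored coordination graph can be warm-started from the previous time-step's joint action:

```python
import random

def evaluate(joint_action, factors):
    """Total payoff of a joint action over all local payoff factors.

    factors: list of (scope, table), where scope is a tuple of agent indices
    and table maps a tuple of those agents' actions to a payoff.
    """
    return sum(table[tuple(joint_action[i] for i in scope)]
               for scope, table in factors)

def local_search(n_agents, n_actions, factors, init=None, max_sweeps=50):
    """Greedy coordinate ascent over agents' actions.

    If `init` is given (e.g. the joint action selected at the previous
    time-step), it is used as the starting point; otherwise start randomly.
    """
    joint = list(init) if init is not None else \
        [random.randrange(n_actions) for _ in range(n_agents)]
    best = evaluate(joint, factors)
    for _ in range(max_sweeps):
        improved = False
        for agent in range(n_agents):
            current = joint[agent]
            for action in range(n_actions):
                if action == current:
                    continue
                joint[agent] = action
                value = evaluate(joint, factors)
                if value > best:
                    best, current, improved = value, action, True
            joint[agent] = current          # keep the best action found so far
        if not improved:                    # local optimum reached
            break
    return joint, best

# Toy 3-agent chain graph: f(a0, a1) + f(a1, a2)
table = {(0, 0): 2.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 3.0}
factors = [((0, 1), table), ((1, 2), table)]
prev_joint = [1, 1, 1]                       # joint action from the previous time-step
print(local_search(3, 2, factors, init=prev_joint))
```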
Introduction: To determine whether athletes with coordination impairment (CI) can continue playing wheelchair rugby (WR) while an evidence-based classification system, including impairment tests for CI, is not yet available. This is a defensible practice if they show similar activity limitations to athletes with other eligible impairment types (OI) within the same sports class. Methods: Standardised activities were measured in 58 elite WR athletes: 14 with CI and 44 with OI. Wheelchair activities consisted of a 20-meter sprint, a 12-meter sprint with full stop, an intermittent sprint (3-meter sprint, stop, 3-meter sprint, stop, 6-meter sprint with full stop), sprint-curve-slalom-curve, turn on the spot 180°, turn on the spot 90°, stop, turn 90° in the same direction, and the X-test (short circuit with sharp turns) without the ball. Ball activities consisted of maximal throwing distance, precision throwing over a short (25% of maximal throw) and long (75% of maximal throw) distance, and the X-test with the ball (pick up the ball and dribble whilst pushing). Descriptive statistics were used, and Spearman's rank correlation was assessed for athletes with CI and OI for each outcome measure. Differences between athletes with CI and OI were assessed using a Mann-Whitney U test. Results: Most activities showed a high correlation with the athlete class in both athletes with CI and athletes with OI. Furthermore, outcome measures of athletes with CI overlapped with those of athletes with OI in the same sports class for all activities. There was a trend for worse performance in athletes with CI in turn on the spot 90°, stop, turn 90° in the same direction, in the short-distance one-handed precision throw (P = 0.11), and in the X-test with the ball (P = 0.10). Discussion: Despite the current lack of evidence-based impairment tests for CI, it is a defensible practice not to exclude athletes with CI from WR under the current classification system. The trends for differences in performance that were found can support athletes and coaches in optimising the performance of athletes with CI.
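For readers who want to reproduce this kind of comparison, a minimal sketch of the analysis described above (the file name and column names are hypothetical; this is not the authors' code) could look as follows:

```python
import pandas as pd
from scipy.stats import spearmanr, mannwhitneyu

# Assumed layout: one row per athlete, with impairment group, sports class,
# and one column per outcome measure (here, 20-meter sprint time in seconds).
df = pd.read_csv("wr_activities.csv")

for group in ("CI", "OI"):
    sub = df[df["impairment"] == group]
    rho, p = spearmanr(sub["sports_class"], sub["sprint_20m_s"])
    print(f"{group}: Spearman rho = {rho:.2f} (p = {p:.3f})")

ci = df.loc[df["impairment"] == "CI", "sprint_20m_s"]
oi = df.loc[df["impairment"] == "OI", "sprint_20m_s"]
u, p = mannwhitneyu(ci, oi, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```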
Graphs are ubiquitous. Many graphs, including histograms, bar charts, and stacked dotplots, have proven tricky to interpret. Students' gaze data can indicate their interpretation strategies on these graphs. We therefore explore the question: In what way can machine learning quantify differences in students' gaze data when interpreting two near-identical histograms with graph tasks in between? Our work provides evidence that using machine learning in conjunction with gaze data can provide insight into how students analyze and interpret graphs. This approach also sheds light on the ways in which students may better understand a graph after first being presented with other graph types, including dotplots. We conclude with a model that can accurately differentiate between the first and second time a student solved near-identical histogram tasks.
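As an illustration of the kind of model referred to above (the feature names and classifier choice are assumptions, not details from the study), per-trial gaze features could be fed to an off-the-shelf classifier to distinguish first from second attempts:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed input: one row per trial with aggregated gaze features
# (e.g. fixation counts and dwell times per area of interest).
gaze = pd.read_csv("gaze_features.csv")
X = gaze[["fixations_x_axis", "fixations_y_axis", "mean_dwell_ms", "saccade_count"]]
y = gaze["attempt"]                               # 1 = first task, 2 = second task

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())    # accuracy of distinguishing attempts
```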
In this paper, we elaborate on the performance gain obtained by combining parallel periodic real-time processes. In certain single-core mono-processor configurations, for example embedded control systems in robotics comprising many short processes, process context switches may consume a considerable amount of the available processing power. For this reason, it can be advantageous to combine processes, to reduce the number of context switches and thereby increase the performance of the application. As we consider robotic applications only, which often consist of processes with identical periods, release times and deadlines, we restrict these configurations to periodic real-time processes executing on a single-core mono-processor. Using graph-theoretical concepts and means, we provide necessary and sufficient conditions under which the number of context switches can be reduced by combining synchronising processes.
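A back-of-the-envelope illustration of the motivation (a simplified cost model of our own, not the paper's formal graph-theoretical conditions): when processes share an identical period, combining k short processes into one removes roughly k - 1 context switches per period.

```python
def switch_overhead(n_processes, period_us, switch_cost_us):
    """Fraction of CPU time spent on context switches per period,
    assuming each process is scheduled once per period."""
    return (n_processes * switch_cost_us) / period_us

print(switch_overhead(20, period_us=1000, switch_cost_us=10))  # 20 separate processes -> 0.20
print(switch_overhead(1,  period_us=1000, switch_cost_us=10))  # combined into one     -> 0.01
```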
Many students persistently misinterpret histograms. This calls for closer inspection of students’ strategies when interpreting histograms and case-value plots (which look similar but are different). Using students’ gaze data, we ask: How, and how well, do upper secondary pre-university school students estimate and compare arithmetic means of histograms and case-value plots? We designed four item types: two requiring mean estimation and two requiring means comparison. Analysis of gaze data of 50 students (15–19 years old) solving these items was triangulated with data from cued recall. We found five strategies. The two hypothesized most common strategies for estimating means were confirmed: a strategy associated with horizontal gazes and a strategy associated with vertical gazes. A third, new, count-and-compute strategy was found. Two more strategies emerged for comparing means that take specific features of the distribution into account. In about half of the histogram tasks, students used correct strategies. Surprisingly, when comparing two case-value plots, some students used distribution features that are only relevant for histograms, such as symmetry. As several incorrect strategies related to how and where the data and the distribution of these data are depicted in histograms, future interventions should aim at supporting students in understanding these concepts in histograms. A methodological advantage of eye-tracking data collection is that it reveals more details about students’ problem-solving processes than thinking-aloud protocols. We speculate that spatial gaze data can be re-used to substantiate ideas about the sensorimotor origin of learning mathematics.
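The mathematical difference between the two graph types can be made explicit with a small worked example (the numbers are illustrative): in a histogram the mean is a frequency-weighted average of the values on the horizontal axis, whereas in a case-value plot the mean is simply the average bar height.

```python
# Histogram: bar heights are frequencies of the values on the x-axis,
# so the mean is a frequency-weighted average of those values.
values      = [1, 2, 3, 4, 5]
frequencies = [2, 5, 8, 4, 1]
hist_mean = sum(v * f for v, f in zip(values, frequencies)) / sum(frequencies)

# Case-value plot: each bar height IS a case's value,
# so the mean is simply the average bar height.
bar_heights = [3, 7, 5, 9, 6]
case_value_mean = sum(bar_heights) / len(bar_heights)

print(hist_mean, case_value_mean)   # 2.85 and 6.0
```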
Many studies report changes taking place in the field of higher education, changes which present considerable challenges to educational practice. Educational science should contribute to developing design guidance, enabling practitioners to respond to these challenges. Design patterns, as a form of design guidance, show potential since they promise to facilitate the design process and provide common ground for communication. However, the potential of patterns has not been fully exploited yet. We have proposed the introduction of a task conceptualization as an abstract view of the concept chosen as central: the task. The choice of the constituting elements of the task conceptualization has established an analytical perspective for analysis and (re)design of (e)learning environments. One of the constituting elements is that of ‘boundary objects’, which has added a focus on objects facilitating the coordination, alignment and integration of collaborative activities. The presented task conceptualization is deliberately generic in nature, to ease the portability between schools of thought and make it suitable for a wide target audience. The conceptualization and the accompanying graphical and textual representations have shown much promise in supporting the process of analysis and (re)design and add innovative insights to the domain of facilitating the creation of design patterns.
From June 28 to July 7 the National Arts Festival took place in Grahamstown, South Africa. For the 20th time, Cue, a daily print newspaper about the Festival, was produced by Rhodes University journalism students. It was the first time that the newspaper was printed in full color. Cue is at the core of journalistic production during the Festival. But nowadays, what is a newspaper without pictures or without an online edition? Cue Pix, run by the photo department at the School of Journalism and Media Studies in the AMM (African Media Matrix), provides the pictures. Cue Online is run by the NML (New Media Lab) in the same building and mostly shovels print content online. Cue Radio and Cue TV take care of the audio and video, and broadcast during the Festival. Up to 2000 copies of the Cue newspaper were printed daily, with around 1600 copies sold. The newspaper was sold in the Grahamstown streets for 3 Rand. The number of pages of Cue ranges from 16 to 20, including advertisements. Cue is produced by students and lecturers of the School of Journalism and consists of about 50 student-reporters, 10 sub-editors, and 2 editors (who are generally University staff). The production's layout is taken care of by a group of design students. Twenty students from the photo department take care of the pictures and rework them with Adobe's Photoshop. Cue TV and Cue Radio (with a total of about 10 students) brought their reporting skills to the Festival as well. Reporting about the Festival by Cue is a major happening that has been growing over the years: from print to TV, to radio and online. This is fantastic, but it also reflects the same problems found in the media industry: each media platform runs its own show. Print, TV, radio and photography all have their own targets and content production, with only some coordination. In order to take full advantage of the different possibilities of all the media platforms, convergence is the keyword.
Purpose - This paper provides an overview of the technical and vocational education and training (TVET) program components/mechanisms and their overall effect on learning outcomes in a developing country context. Design/methodology/approach - Using secondary data, this descriptive case study integrates the realistic evaluation framework of Pawson and Tilley (1997) with Total Quality Management (TQM) frameworks. Findings - Ethiopia's TVET system adopts/adapts international best practices. Following the implementation of the 2008 TVET strategy, the proportion of formal TVET graduates who were recognized as competent by the assessment and certification system increased from 17.42 percent in 2009/2010 to 40.23 percent in 2011/2012. Nevertheless, there is regional variation. Research limitations/implications - Outcome-based TVET reforms that are based on TQM frameworks could improve learning outcome achievements in developing countries by enhancing awareness, coordination, integration, flexibility, participation, empowerment, accountability and a quality culture. Nevertheless, this research is limited by a lack of longitudinal data on competency test results. There is also a need for further investigation into the practice of TQM and the sources of differences in internal effectiveness across TVET institutions. Practical implications - Our description of the Ethiopian reform experience, which is based on international best practice, could better inform policy makers and practitioners in TVET elsewhere in Africa. Originality/value - A realistic evaluation of TVET programs and the articulation of the mechanisms, especially those based on TQM, that affect TVET effectiveness add some insight to the literature. The evidence we provide from the Ethiopian case is also fresh. Keywords: TVET reform, TVET quality, Total quality management, Internal effectiveness, Realistic evaluation, Developing countries, Ethiopia
We present a novel architecture for an AI system that allows a priori knowledge to be combined with deep learning. In traditional neural networks, all available data is pooled at the input layer. Our alternative neural network is constructed so that partial representations (invariants) are learned in the intermediate layers, which can then be combined with a priori knowledge or with other predictive analyses of the same data. This allows smaller training datasets, because learning is more efficient. In addition, because this architecture allows the inclusion of a priori knowledge and interpretable predictive models, the interpretability of the entire system increases while the data can still be used in a black-box neural network. Our system makes use of networks of neurons rather than single neurons to enable the representation of approximations (invariants) of the output.
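A minimal sketch of such an architecture (assumed layer sizes and PyTorch as the framework; the abstract does not specify an implementation): an intermediate block learns a compact invariant representation from the raw inputs, which is then concatenated with a-priori-knowledge features before the output layer.

```python
import torch
import torch.nn as nn

class InvariantNet(nn.Module):
    def __init__(self, n_raw, n_prior, n_invariant=8, n_out=1):
        super().__init__()
        # learns a compact partial representation (the "invariant") from raw data
        self.invariant = nn.Sequential(
            nn.Linear(n_raw, 32), nn.ReLU(),
            nn.Linear(32, n_invariant), nn.ReLU(),
        )
        # combines the learned invariant with externally supplied prior knowledge
        self.head = nn.Linear(n_invariant + n_prior, n_out)

    def forward(self, x_raw, x_prior):
        z = self.invariant(x_raw)                  # interpretable intermediate output
        return self.head(torch.cat([z, x_prior], dim=-1))

model = InvariantNet(n_raw=20, n_prior=3)
y = model(torch.randn(4, 20), torch.randn(4, 3))   # batch of 4 examples
```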
This applied research is an attempt to analyse the effectiveness of milk marketing and to facilitate developing a sustainable milk value chain for dairy farmer groups in Punakha district. Both quantitative and qualitative methods (a survey, key informant interviews and a focus group discussion) were used as research strategies to obtain relevant information. The survey was conducted using a structured questionnaire with both open- and closed-ended questions in seven subdistricts: Barp, Dzomi, Guma, Kabisa, Shelnga-Bjemi, Talog and Toedwang. A total of 60 respondents (30 existing milk suppliers and 30 non-milk suppliers) were drawn using a simple random sampling technique. One-to-one interviews following semi-structured questions were conducted with eight key informants in the chain. One focus group interview was conducted with representatives of the existing dairy farmer groups to triangulate and discover in-depth information about the situation of the milk value chain in the district. The survey data were analysed using the Statistical Package for the Social Sciences (SPSS) software, version 20. A grounded theory design was used to analyse the qualitative data from the interviews and the focus group discussion. Value chain mapping was employed to assess the operational situation of the current milk chain. The mean cost of milk production was estimated at Nu. 27.53 per litre, and the largest expense was animal feed, estimated at 46.34% of the total cost of milk production. In this study, milk producers had the highest shares of added value and profit, estimated at 45.45% and 44.85% respectively. Limited information and coordination amongst stakeholders have contributed to slow progression in the formal milk market. The findings reveal that 90% of respondents not in dairy farmer groups were interested in joining formal milk marketing. The average morning milk available for supply from this group would be 4.41 ± 3.07 litres daily per household. The study also found that 50% of the respondents were interested in supplying evening milk, with an average of 4.43 ± 2.25 litres per day per household. Based on the results of this study, it was concluded that there are possibilities for expanding the milk value chain in the district. However, there is a need to enhance consistent milk supply through a quality-based milk payment system, access to reasonable input supplies, and the facilitation of strong multi-stakeholder processes along the milk value chain.