Machine Learning at ECAI'98

Evgueni Smirnov

The program committee of the 13th European Conference on Artificial Intelligence accepted 15 papers in the field of machine learning from seven countries: France (3), Germany (1), Japan (1), Italy (3), The Netherlands (1), Slovenia (3) and the United Kingdom (3). The papers cover almost all important sub-areas of machine learning: concept learning (2), decision-tree learning (2), inductive logic programming (5), instance-based learning (1), artificial neural networks (2), Bayesian learning (1), and learning in multi-agent systems (2). Most of the papers extend existing machine-learning techniques towards more practical applications. This is a positive sign compared with the previous ECAI conference, and it reflects the main stream of machine-learning research.

CONCEPT LEARNING

Concept learning was represented by two papers. The first paper (Brezellec P., and Soldano H., Tabata: a learning algorithm performing a bidirectional search in a reduced search space using a tabu strategy) introduces an AQ-like algorithm in which star generation is carried out in an epsilon-reduced concept-language space using tabu search. The reduction is based on the equivalence of concept descriptions that cover the same positive training examples, and its extent depends on the number of empirical implications holding on the positive examples. Experiments with UCI data sets have shown that the algorithm's predictive accuracy is comparable to that of well-known algorithms such as CN2 and C4.5.

The second paper (Smirnov E.N., and Braspenning P.J., Version space learning with instance-based boundary sets) won the ECAI-98 Best Papers Prize. It proposes instance-based boundary sets as a new representation of version spaces that overcomes the main computational problems of version-space learning for two rather broad sub-classes of admissible description languages (union-preserving and intersection-preserving languages). The key idea is to view learning as an incremental process of intersecting simple version spaces based on particular training instances; the intersection is realised by conjunctively linking the boundary sets of these spaces. This circumscription of the instance-based boundary sets implicitly characterises the target version space, and it avoids the exponential complexity of learning whenever generating at least one boundary set of a simple version space is polynomial in the relevant properties of the description language.

DECISION TREE LEARNING

Decision tree learning was considered in two papers. The research presented in the first paper (Crockett K.A., Bandar Z., and Al-Attar A., A fuzzy inference framework for induced decision trees) aims at overcoming the problem of sharp decision boundaries in ID3-like decision trees. A new Fuzzy Inference Algorithm is introduced that transforms a decision tree into a set of fuzzy rules through a process of fuzzification. A distinct advantage of this algorithm over other existing fuzzy algorithms is that no pre-fuzzification of the data is necessary, which significantly reduces the complexity of learning.

The second paper (Robnik-Sikonja M., and Kononenko I., Pruning regression trees with MDL) proposes a theoretically sound MDL procedure for pruning regression trees. The parameters of the procedure can be set by the user or determined automatically. Empirical comparison has shown that the proposed procedure produces better trees than m-pruning and error-complexity pruning.
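The intuition behind MDL pruning can be illustrated with a small sketch: a subtree is replaced by a leaf whenever coding the training targets with a single leaf model is no more expensive (in bits) than coding the subtree structure plus its leaves. The Python sketch below only illustrates that trade-off under assumed coding costs (the dictionary tree representation, the Gaussian residual coding, the split and leaf costs and the precision floor are all assumptions of this example); it does not reproduce the actual procedure of Robnik-Sikonja and Kononenko.

```python
import math

# Toy regression-tree nodes: a leaf stores the training targets reaching it,
# an internal node stores a split description and two children.
def leaf(targets):
    return {"targets": list(targets)}

def node(split, left, right):
    return {"split": split, "left": left, "right": right}

def targets_below(tree):
    """Collect all training targets that reach any leaf of the (sub)tree."""
    if "targets" in tree:
        return tree["targets"]
    return targets_below(tree["left"]) + targets_below(tree["right"])

def leaf_cost(targets, precision=0.1):
    """Assumed code length (in bits) of targets coded by a single mean model:
    residuals coded under a Gaussian with the leaf's own variance, plus a
    fixed cost for the leaf value itself."""
    n = len(targets)
    mean = sum(targets) / n
    var = max(sum((t - mean) ** 2 for t in targets) / n, precision ** 2)
    return 0.5 * n * math.log2(2 * math.pi * math.e * var) + 8.0

def tree_cost(tree, split_bits=12.0):
    """Assumed code length of a (sub)tree: split descriptions plus its leaves."""
    if "targets" in tree:
        return leaf_cost(tree["targets"])
    return split_bits + tree_cost(tree["left"]) + tree_cost(tree["right"])

def mdl_prune(tree):
    """Bottom-up pruning: collapse a subtree into a leaf whenever the leaf's
    code length does not exceed the subtree's code length."""
    if "targets" in tree:
        return tree
    pruned = node(tree["split"], mdl_prune(tree["left"]), mdl_prune(tree["right"]))
    targets = targets_below(pruned)
    if leaf_cost(targets) <= tree_cost(pruned):
        return leaf(targets)
    return pruned

if __name__ == "__main__":
    # A split that barely separates the targets is pruned away ...
    print(mdl_prune(node("x < 3", leaf([1.0, 1.1, 0.9]), leaf([1.0, 1.2, 0.8]))))
    # ... while a split that clearly separates them is kept.
    print(mdl_prune(node("x < 3", leaf([1.0, 1.1, 0.9]), leaf([9.0, 9.2, 8.8]))))
```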
INDUCTIVE LOGIC PROGRAMMING

Inductive logic programming dominated with five papers, which reflects its long-standing tradition in Europe. The first paper (Botta M., Giordana A., and Piola R., An Integrated Framework for Learning Numerical Terms in FOL) presents numerical term refining (NTR), a new method for learning numerical terms in the context of refining knowledge bases described in first-order languages. The method combines the translation of predicates containing numerical terms into continuous-valued functions with error gradient descent for tuning those functions. NTR is a further development of the authors' previous work (FONN), with two important new advantages: first, the method preserves the classical logic semantics of the formulas; second, it can easily be integrated with other symbolic learning strategies in order to improve their classification accuracy.

The second paper (Malerba D., Esposito F., and Lisi F.A., Learning recursive theories with ATRE) introduces a new approach to inducing recursive theories from instances. The approach is based on a separate-and-parallel-conquer search strategy for learning mutually recursive clauses, the generalised implication, and a technique for recovering the consistency of partially learned theories. The corresponding system has been tested on several artificial ILP domains as well as on a more realistic task within the European project Learning in Humans and Machines.

The third paper (Ichise R., Inductive logic programming and genetic programming) proposes a new method for integrating inductive logic programming (ILP) and genetic programming (GP). The key idea is to combine the search method of GP with the type and mode mechanisms of ILP. An empirical analysis of a system based on the method shows that it is capable of learning from positive and negative training instances as well as from training instances that do not belong to discrete classes. This is a significant improvement, justifying the author's claim that the proposed method overcomes some of the main problems in existing approaches to integrating ILP and GP.

The fourth paper (Mayer E., Inductive learning of chronicles) extends constrained inductive logic programming with object-oriented capabilities for the inductive learning of chronicles (sets of temporally related events). The fifth paper (Riguzzi F., Integrating abduction and induction) proposes a method for integrating abductive and inductive logic programming in order to overcome some of the problems in ILP: learning abductive theories, learning exceptions and learning multiple predicates.

INSTANCE-BASED LEARNING

Instance-based learning was represented by only one paper (Payne T.R., and Edwards P., Implicit feature selection with the value difference metric). It demonstrates that the Value Difference Metric (proposed by Stanfill and Waltz in 1986) can be used to reduce the influence of irrelevant attributes in the nearest-neighbour paradigm. Moreover, the analytical and empirical analysis of the metric shows that applying it does not require any pre-processing of the training data, which significantly decreases the overall complexity of the learning process.
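The effect of the Value Difference Metric on irrelevant attributes can be seen in a small sketch: the distance between two values of an attribute is computed from their class-conditional distributions, so the values of an attribute that carries no class information become nearly indistinguishable and contribute little to the overall distance. The Python sketch below, with a made-up two-attribute data set and the exponent q=2, is only an illustration of this behaviour, not the treatment given by Payne and Edwards.

```python
from collections import Counter, defaultdict

def vdm_tables(X, y):
    """Estimate P(class | attribute = value) for every attribute/value pair."""
    counts = defaultdict(Counter)          # (attribute index, value) -> class counts
    for xs, c in zip(X, y):
        for a, v in enumerate(xs):
            counts[(a, v)][c] += 1
    return {key: {c: n / sum(cc.values()) for c, n in cc.items()}
            for key, cc in counts.items()}

def vdm_distance(x1, x2, probs, classes, q=2):
    """Sum over attributes and classes of |P(c|v1) - P(c|v2)|^q.  Values with
    similar class distributions (typical of irrelevant attributes) contribute
    almost nothing to the distance."""
    d = 0.0
    for a, (v1, v2) in enumerate(zip(x1, x2)):
        p1, p2 = probs.get((a, v1), {}), probs.get((a, v2), {})
        d += sum(abs(p1.get(c, 0.0) - p2.get(c, 0.0)) ** q for c in classes)
    return d

def nn_classify(x, X, y, probs, classes):
    """1-nearest-neighbour prediction under the VDM."""
    return min(zip(X, y), key=lambda xy: vdm_distance(x, xy[0], probs, classes))[1]

if __name__ == "__main__":
    # Attribute 0 determines the class; attribute 1 is pure noise.
    X = [("red", "a"), ("red", "b"), ("blue", "a"), ("blue", "b")]
    y = ["pos", "pos", "neg", "neg"]
    probs, classes = vdm_tables(X, y), set(y)
    print(vdm_distance(("red", "a"), ("red", "b"), probs, classes))   # ~0: only the noise attribute differs
    print(vdm_distance(("red", "a"), ("blue", "a"), probs, classes))  # large: the class attribute differs
    print(nn_classify(("red", "b"), X, y, probs, classes))            # 'pos'
```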
ARTIFICIAL NEURAL NETWORKS

Artificial neural networks were considered in two papers. The first one (Kukar M., and Kononenko I., Cost-sensitive learning with neural networks) presents four cost-sensitive modifications of the backpropagation algorithm for multilayered feedforward neural networks: cost-sensitive classification, adaptive output, adaptive learning rate, and minimisation of misclassification costs. The experimental part of the work shows that all four methods successfully reduce misclassification costs, and that the minimisation-of-misclassification-costs method outperforms most of the known approaches in this research area.

The second paper (Lane P., Simple synchrony networks: learning generalisations across syntactic constituents) considers Simple Synchrony Networks (SSNs), a new connectionist architecture that combines Simple Recurrent Networks, a technique for learning about patterns across time, with Temporal Synchrony Variable Binding. An algorithm is proposed for training SSNs that learns generalisations across syntactic constituents.

BAYESIAN LEARNING

Bayesian learning was considered in only one paper (Mladenic D., Turning Yahoo into an automatic Web-page classifier), which describes a real machine-learning application. The paper introduces a completely new approach to automatic Web-page classification based on the Yahoo hierarchy. The approach produces a set of Bayesian classifiers, each associated with a particular category of the Yahoo hierarchy, so that the classifiers can be used to determine the category to which a new example (Web page) belongs. The experimental part of the work shows that the approach produces good results on real-world data.

MULTIAGENT LEARNING

Learning in multi-agent systems was represented in the ECAI program for the first time, with two papers. The first paper (Hamdi M.S., and Kaiser K., Learning to coordinate behaviours) focuses on the problem of coordinating individual goals in order to solve more complex tasks in the context of a self-improving reactive control system. The system is based on the emergence of more global behaviour from the interaction of smaller behavioural units, and the coordination of the behaviours is accomplished by integrating a variant of Kohonen networks with reinforcement learning. The system has been tested on an artificial task, demonstrating its potential for real applications (e.g., software agents).

The second paper (Calderoni S., Collective learning in multiagent systems) presents a methodology for collective learning in societies of autonomous agents. Individual learning is realised with a reinforcement learning algorithm, while the agents' experience is synthesised with a genetic algorithm so that the performance of the society improves. The experimental part of the work shows that the proposed methodology leads to promising results on an artificial task (the canonical foraging problem in an artificial ant society).

Department of Computer Science