The theory of rough sets was introduced by Zdzislaw Pawlak in the early 1980s. It is based on the simple observation that our ability to describe a set of objects is constrained by our limited ability to distinguish its individual members: in general, only classes of objects, rather than individual objects, can be distinguished. Objects with identical descriptions form the elementary classes of the indiscernibility relation, and some of these classes may be inconsistent, i.e. they contain objects having the same description but assigned to different categories. As a consequence of this inconsistency it is, in general, not possible to specify a set of objects precisely in terms of elementary sets of indiscernible objects. To deal with this ambiguity, the concept of a rough set is introduced as a pair of two precise concepts, the lower and upper approximations, constructed from the elementary sets of objects.

This idea is a starting point for studying many other problems, in particular the analysis of classification problems, the evaluation of the dependency between attributes and object classification, determining the degree of this dependency, assessing the importance of attributes, reducing the set of attributes, and generating decision rules from data.

Rough set theory is complementary to fuzzy set theory and soft computing methods, as it handles a different kind of information vagueness and inconsistency. Considered together, these theories provide strong tools for the analysis of data burdened with various kinds of “imperfectness”, such as vagueness, ambiguity, imprecision, incompleteness, and uncertainty. Rough set theory has turned out to be a significant methodological tool in such domains as artificial intelligence, data mining, classification, knowledge discovery, machine learning, information retrieval, control engineering and decision analysis. Finally, it has inspired many real-life applications.
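
As a concrete illustration of the lower and upper approximations described above, the following Python sketch computes the elementary sets of indiscernible objects and the two approximations of a target concept. It is a minimal sketch, not part of the conference material; the toy information table, its attribute values, and the target set X are illustrative assumptions only.

```python
# A minimal sketch of the basic rough-set construction; the toy data below
# (the information table and the target set X) are illustrative assumptions.
from collections import defaultdict

# Toy information table: object id -> description by two condition attributes.
objects = {
    1: ("high", "yes"),
    2: ("high", "yes"),
    3: ("low", "no"),
    4: ("low", "no"),
    5: ("medium", "yes"),
}

# Target concept X: the set of objects we would like to describe.
X = {1, 3, 5}

# Elementary sets: classes of the indiscernibility relation
# (objects with identical descriptions cannot be told apart).
classes = defaultdict(set)
for obj, description in objects.items():
    classes[description].add(obj)
elementary_sets = list(classes.values())

# Lower approximation: union of elementary sets entirely contained in X
# (objects that certainly belong to X).
lower = {o for e in elementary_sets if e <= X for o in e}

# Upper approximation: union of elementary sets that intersect X
# (objects that possibly belong to X).
upper = {o for e in elementary_sets if e & X for o in e}

print("Elementary sets:    ", elementary_sets)  # [{1, 2}, {3, 4}, {5}]
print("Lower approximation:", lower)            # {5}
print("Upper approximation:", upper)            # {1, 2, 3, 4, 5}
print("Boundary region:    ", upper - lower)    # {1, 2, 3, 4}
```

In this toy example X cannot be expressed exactly as a union of elementary sets, so it is approximated from below and from above; the objects in the boundary region are precisely those whose membership in X cannot be decided from the available descriptions.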