It is envisaged that the next generation of intelligent robots will be equipped with various sophisticated capabilities endowing them with desires and intentions, enabling them to perform hypothetical and defeasible reasoning, to solve problems creatively, to appreciate works of art, to achieve some form of cyberpleasure, and so on. Understanding, and the ability to develop explanations for observations and facts, are fundamental to the realization of these capabilities. Indeed, explanation and understanding are ‘two sides of the same coin’ in both art and science.
Our objective is to highlight techniques used in Artificial Intelligence which could provide mechanisms for modeling the aesthetic response of an intelligent robot, based on the causal
explainability of complexity manifested in media such as electronic art. Leyton [3] argues that art is related to explanation; in particular, that the aesthetic response is the mind’s evaluation of causal explanation. He maintains that the level of aesthetic response to art works is proportional to the level of complexity [5] that an individual observes. He goes further, arguing
that the desire for art works is part of a general desire that the human mind has for complexity. Barratt [1] also claims that humans seek to explicate complexity, and since the brain is finite, there must be a maximum degree of complexity that the mind is capable of explaining at any one time. If the degree of complexity is increased beyond this level, it exceeds the mind’s capacity to explain it; artistic chaos is reached, and the viewer consequently deems the art work incoherent. He concludes that the limit is set by the ability to give causal explanation: it is not complexity that is appetitive, but causal explanation itself.
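Barratt’s claim can be given a toy numerical reading. The function below is purely illustrative (the function name, the capacity parameter, and the numbers are assumptions, not anything proposed in the sources): the response grows with the complexity that remains causally explainable, and collapses once complexity exceeds the mind’s finite capacity.

```python
# A toy sketch of Barratt's claim (an illustrative assumption, not a model
# from the text): aesthetic response tracks the complexity a mind can
# causally explain, collapsing once complexity exceeds its capacity.

def aesthetic_response(complexity: float, capacity: float) -> float:
    """Return a response in [0, 1]: proportional to complexity while it
    remains explainable, zero once it passes the mind's capacity
    ("artistic chaos")."""
    if complexity < 0:
        raise ValueError("complexity must be non-negative")
    if complexity > capacity:
        return 0.0  # beyond the limit: the work is deemed incoherent
    return complexity / capacity

# The response grows with complexity...
assert aesthetic_response(2.0, 10.0) < aesthetic_response(8.0, 10.0)
# ...until capacity is exceeded and the response collapses.
assert aesthetic_response(12.0, 10.0) == 0.0
```

The abrupt drop to zero encodes the transition to artistic chaos described above; a gentler decay would be equally defensible.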
Clearly, if our aim is to develop intelligent robots with truly human-like characteristics, then they must be capable of artistic appreciation. For electronic art, appreciation must occur at the conceptual level and not at the physical (pixel) level. In the area of Artificial Intelligence the notion of explanation has been well explored. The complexity of explanations is often a reflection of the richness of the agent’s background knowledge, and its ability to discern its surrounding world. Indeed, the aesthetic response to artistic chaos is equivalent to an explanation of a contradiction. Central to such an explanatory capability is the need for mechanisms supporting the modification or revision of knowledge, that is, learning.
Belief revision [2] models the process of accepting new information in such a way that an intelligent agent’s epistemic state remains logically consistent, or coherent. Frameworks for explanation within the area of Artificial Intelligence can be used to support the aesthetic
response of an intelligent agent. In particular, two important parameters of an explanation may assist in gauging an aesthetic response, namely the plausibility [6,7] and the specificity [4] of the explanation. In summary, if the aesthetic response is the evaluation of causal explanation, then we can endow an intelligent robot with aesthetic responses that ebb and flow in accordance with the complexity of the causal explanation achieved.
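The consistency-preserving acceptance that belief revision provides can be sketched in miniature. The code below is hypothetical and far simpler than AGM-style frameworks such as Gärdenfors’s [2]: beliefs are bare propositional literals, and revising by new information first retracts any directly contradicting literal so the epistemic state stays consistent.

```python
# A minimal propositional sketch of belief revision (hypothetical code,
# much simpler than the AGM-style frameworks of Gärdenfors): beliefs are
# literals such as "p" or "~p", and revising by new information retracts
# any directly contradicting literal before accepting it.

def negate(literal: str) -> str:
    """Return the complementary literal: p <-> ~p."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs: set[str], new_info: str) -> set[str]:
    """Accept new_info, first retracting its negation if held."""
    revised = beliefs - {negate(new_info)}  # restore consistency
    revised.add(new_info)                   # then accept the new input
    return revised

state = {"p", "q"}
state = revise(state, "~p")   # new evidence contradicts p
assert state == {"~p", "q"}   # p retracted, q retained, state consistent
```

Real belief revision must also decide which of several conflicting beliefs to surrender, which is where notions such as epistemic entrenchment enter; this sketch sidesteps that choice entirely.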
1. Barratt, K., (1980), “Logic and Design: The Syntax of Art, Science and Mathematics”, Eastview Editions, New Jersey.
2. Gärdenfors, P., (1988), “Knowledge in Flux”, A Bradford Book, MIT Press, Cambridge (Massachusetts), London.
3. Leyton, M., (1992), “Symmetry, Causality, Mind”, A Bradford Book, MIT Press, Cambridge (Massachusetts), London.
4. Poole, D., (1988), “A Logical Framework for Default Reasoning”, Artificial Intelligence 36, pp 27-47.
5. Walker, E.L., (1980), “Psychological Complexity and Preference”, Monterey, California: Wadsworth (Brooks/Cole).
6. Williams, M.A., (1994), “Explanations and Theory Base Transmutations”, in the Proceedings of the European Conference on Artificial Intelligence, pp 341-346.
7. Williams, M.A., (1994), “Transmutations of Knowledge Systems”, in J. Doyle, E. Sandewall, and P. Torasso, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Fourth International Conference (KR’94), Morgan Kaufmann, San Mateo, CA, (to appear).
- Mary-Anne Williams is a Lecturer in Information Systems, at the University of Newcastle in Australia. Her research interests lie in Aesthetics in Art and Science, Artificial Intelligence, Belief Revision, Creativity, Explanation, Knowledge Representation, and Logic and Design.