COPCOM 2012

 

Artificial Intelligence, Computational Game Theory, and Decision Theory
~unifying paths~

 

Cluj-Napoca, 9 November 2012, 10:00

Centrul pentru Studiul Complexitatii (Centre for the Study of Complexity)

Str. Fantanele 30, room 14 (building A, semi-basement)

The event aims:

- to identify opportunities for collaboration, both among researchers and with local companies,
- to identify practical applications that can be tackled with computational tools from the fields of AI, GT, and DT,
- to present concepts and tools that can be transferred between the three fields, and concrete problems they can solve.

 

Program:

·         10:00 opening;

·         10:15-11:45 poster session;

·         11:45-12:30 discussions;

·         12:30-14:00 lunch;

·         14:00-15:30 discussions.

 

List of papers:

 

1.      Adaptive Game Based Learning using a Multi-Agent System with Wasp-Like Behaviour

Bota Florentin, Dana Simian

"Lucian Blaga" University of Sibiu

2.      Color Image Enhancement and Restoration in the JPEG Compressed Domain

Camelia Florea, Mihaela Gordan, Radu Orghidan, Bogdan Orza, Aurel Vlaicu

Technical University of Cluj-Napoca, Department of Communications

3.      3D Liver Segmentation Using a Hybrid Method Based on Automatic and Interactive Approach

Marius Danciu, Radu Orghidan, Mihaela Gordan, Camelia Florea

Technical University of Cluj-Napoca, Department of Communications

4.      Equilibrium detection as solving optimization problems

Noémi Gaskó, Mihai Suciu, Rodica Ioana Lung, D. Dumitrescu

5.      Decision Making Simplified - Lorenz Dominance

Réka Nagy, Mihai Suciu, D. Dumitrescu

6.      Deriving Rules of Behaviour for Cognitive Radio Environments

Ligia Cremene, D. Dumitrescu, Noémi Gaskó, Réka Nagy

7.      Social Honesty Game

Marcel Cremene, Ligia Cremene, D. Dumitrescu

8.      Survey of Multicriteria Optimization for Industrial Applications

Mihai Suciu

9.      Equitable Solutions in QoS-aware Service Optimization

Mihai Suciu, Marcel Cremene, F. Pop, D. Dumitrescu

10.  Between Selfishness and Altruism: Fuzzy Nash--Berge-Zhukovskii Equilibrium

Réka Nagy, Noémi Gaskó, Rodica Ioana Lung, D. Dumitrescu

11.  Equilibrium detection in non-cooperative game theory. An evolutionary approach

D. Dumitrescu, Rodica Ioana Lung, Noémi Gaskó, Réka Nagy

12.  Cognitive Antenna System for Sustainable Adaptive Radio Interfaces

Ligia Cremene, Nicolae Crişan

 

Abstracts:

1.      Adaptive Game Based Learning using a Multi-Agent System with Wasp-Like Behaviour

Bota Florentin, Dana Simian

"Lucian Blaga" University of Sibiu

 

2.      Color Image Enhancement and Restoration in the JPEG Compressed Domain

Camelia Florea, Mihaela Gordan, Radu Orghidan, Bogdan Orza, Aurel Vlaicu

Technical University of Cluj-Napoca, Department of Communications

 

Nowadays, any large database holds a huge amount of high-resolution visual information, so besides the requirement of compressing it at high rates (to save storage space and transmission time), fast processing, ideally performed directly in the compressed domain, has become essential. Real systems manipulate data in compressed format; therefore, embedding the processing step into the decompression scheme avoids full decompression and recompression before/after processing. The advantages of compressed-domain processing are already acknowledged by the scientific community: computational speed is higher, at the same (sometimes even better) performance as spatial-domain processing.

Most image and video compression standards are based on the discrete cosine transform (DCT); well-known DCT-based formats include JPEG, MPEG-1 to MPEG-4, H.261 to H.264, HDTV, etc. These are the preferred formats for the majority of digital cameras and a large number of image/video acquisition boards, which means that the compression steps can be performed by embedded software and hardware blocks within an onboard image processor.

Compressed-domain processing algorithms must be formulated in the DCT image representation space and are typically applied to the small image blocks into which the image has previously been decomposed by tiling (as required by the compression standard). Therefore, the processing can be performed after entropy decoding (see Figure 1), before the computationally expensive inverse DCT step.

As JPEG is still one of the most widely used image compression formats (over 95% of the images on the Internet are stored in this format), our work concentrates on it. Our aim here is to use solely and directly the information from the compressed-domain file, namely the zig-zag-ordered quantized DCT coefficients in the YUV color representation space, for image enhancement and information recovery/restoration. Image enhancement and image information recovery remain of acute and growing importance: many faults and failures in data manipulation processes (acquisition, storage, transmission, and others) can lead to errors that require contrast correction or recovery/restoration of corrupted (missing) information.

The fuzzy intensification operator and fuzzy rule-based contrast enhancement algorithms are well-known, rather simple approaches with good visual results, but, like any fuzzy algorithm, they are inherently nonlinear and thus not straightforwardly applicable to the JPEG bitstream. We propose a suitable strategy for direct implementation in the compressed domain. The fuzzy set parameters are chosen adaptively by analyzing the histogram of the image in the compressed domain, in order to optimally enhance the image contrast. The nonlinear enhancement procedures require some grey-level threshold comparisons, for which an adaptive implementation is proposed, taking into account the frequency content of each coefficient block in the DCT-encoded JPEG image. This guarantees optimal quality at minimum computational cost. Experimental results on a set of images with various contrasts validate the good performance and functionality of the proposed implementation. The algorithm is applied only on the luminance component; however, it can be used to enhance color images as well, with no change to the chrominance components (a rather common approach in color image enhancement).
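
For reference, the classical fuzzy intensification (INT) operator is defined in the pixel domain; the abstract's contribution, its reformulation over DCT coefficients, is not reproduced here. A minimal pixel-domain sketch:

```python
import numpy as np

def fuzzy_intensification(image, iterations=1):
    """Classical fuzzy intensification (INT) operator for contrast
    enhancement. Membership values below 0.5 are pushed down and values
    above 0.5 are pushed up, which stretches the image contrast."""
    # Normalize grey levels to fuzzy membership values in [0, 1].
    mu = image.astype(float) / 255.0
    for _ in range(iterations):
        low = mu <= 0.5
        mu = np.where(low, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    return (mu * 255.0).round().astype(np.uint8)
```

Repeated application intensifies the effect; 0, 127.5, and 255 are the fixed points of the mapping.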

For the image restoration process (information recovery) we introduce a JPEG compressed-domain formulation within a sparse representation framework, and we prove both mathematically and experimentally not only its numerical efficiency compared to the pixel-level formulation, but also the good quality of the restoration results. Due to the JPEG standard scheme, the dictionary is constrained to a complete DCT dictionary of fixed size (8×8 pixels), and the spatial image blocks are, by default, non-overlapping.

The use of this reduced-dimensionality feature space, together with skipping decompression/recompression before/after processing, makes the proposed algorithms faster than state-of-the-art algorithms while yielding accurate results. Some experimental results are illustrated in Figures 2-4.

 

Figure 1. The two ways to process an image: pixel-level (spatial domain) and compressed domain processing

 


    Figure 2. a) Original image Cars.jpg; b) Enhanced image using proposed fuzzy intensification operator; c) Enhanced image using proposed Takagi-Sugeno fuzzy system;

 


Figure 3. Image inpainting results: a) The corrupted image; b) The mask image; c) The recovered image

 

 


Figure 4. Result for impulse noise removal: a) The image affected by color impulse noise, with 20% noise density; b) The result obtained after recovery with the proposed approach.

 

 

Acknowledgements

This work was supported by the project "Development and support of multidisciplinary postdoctoral programs in major technical areas of national strategy of Research - Development - Innovation" 4D-POSTDOC, contract no. POSDRU/89/1.5/S/52603, project co-funded by the European Social Fund through Sectoral Operational Program Human Resources Development 2007-2013.

3.      3D Liver Segmentation Using a Hybrid Method Based on Automatic and Interactive Approach

Marius Danciu, Radu Orghidan, Mihaela Gordan, Camelia Florea

Technical University of Cluj-Napoca, Department of Communications

Liver segmentation from computed tomography (CT) scans is not an easy task, due to a multitude of factors such as variations in contrast, noise present in the image, or inter-patient variability in the shape and appearance of the anatomical organs. Other problems are generated by surrounding tissues and organs (e.g. stomach, spleen) that have the same intensities inside or along boundaries and present the same structure. The proposed approach to liver segmentation has the potential of numerical efficiency, as it borrows from 2D compressed-domain image segmentation (proven more efficient than pixel-level segmentation) and generalizes it to the 3D case. The proposed supervised liver segmentation method can successfully outline the liver in 3D abdominal volumes (CT slices). Its main novelty is the derivation of local features in medical volume data based on a 3D DCT decomposition applied to voxel blocks. This is done in a similar fashion to image analysis in the compressed 2D JPEG domain, with the potential advantage of reduced computational complexity and high accuracy. For the decision on the resulting 3D blocks (as belonging or not to the liver) we employ a probabilistic support vector machine (SVM) classifier. Thus the proposed approach provides a complete 3D solution to medical volume segmentation.
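
The feature-extraction step described above (a 3D DCT on non-overlapping voxel blocks) can be sketched as follows; the block size and the number of retained low-frequency coefficients are illustrative assumptions, and the SVM classification stage is omitted:

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(volume, block=8, keep=4):
    """Tile a CT volume into non-overlapping block^3 voxel cubes and keep
    the low-frequency corner of each cube's 3D DCT as a feature vector.
    The choices block=8 and keep=4 are illustrative, not from the paper."""
    z, y, x = (d // block for d in volume.shape)
    feats = []
    for i in range(z):
        for j in range(y):
            for k in range(x):
                cube = volume[i*block:(i+1)*block,
                              j*block:(j+1)*block,
                              k*block:(k+1)*block]
                # Orthonormal 3D DCT-II of the cube; low-frequency
                # coefficients summarize local appearance.
                coeffs = dctn(cube.astype(float), norm='ortho')
                feats.append(coeffs[:keep, :keep, :keep].ravel())
    return np.array(feats)
```

Each feature vector would then be labeled (liver / non-liver) and fed to the probabilistic SVM mentioned in the abstract.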

Current automatic segmentation approaches for soft tissue organs are highly problem-specific. Some organs, like the heart, can be addressed using model-based approaches, since the degree of shape variation is low. For organs with a high degree of shape variation, like the liver, such approaches fail, and segmentation algorithms have to rely on evidence in the image. Unfortunately, the required information cannot always be extracted from the images. The resulting incorrect segmentation is useless for surgery planning, and therefore current clinical practice relies on manual segmentation by drawing segmentation outlines in stacks of 2D images. This is time-consuming at best and impractical for high-resolution scans with hundreds of slices. Moreover, even trained radiologists tend to misinterpret complex 3D anatomy when viewing only 2D cross-sections. It must be emphasized that automatic segmentation algorithms are capable of correctly recovering major parts of the required object structures in many medical applications, including liver segmentation. However, certain cases defeat attempts at fully automated segmentation. These defects are often locally bound, such as, in the case of the liver, leakage into the heart and stomach region, or problems with the vena cava inferior. These observations have led us to believe that computer-aided interactive editing of the segmented surface generated by an automatic algorithm (segmentation refinement) is a suitable compromise, which is much less time-consuming than manual segmentation, yet yields high-quality results directly usable in surgery planning. Our research therefore targets the development of a system that helps radiologists refine the segmented volumetric data obtained with automatic/semiautomatic algorithms, but is also open to the inclusion of a variety of medical volume editing tools.

Our goal was to design and implement a flexible, fast, and accurate alternative to the few existing solutions, by integrating a fuzzy logic system into the 3D reconstruction of the virtual interaction probe. For this purpose we developed a pen-like interaction device tracked by two web cameras, integrated with a system for 3D medical data manipulation and 3D editing. The integrated setup provides various functions for 3D segmentation refinement and manipulation of 3D medical data.

The block diagram of the proposed solution is presented in Fig. 1. The virtual editing environment is illustrated in Fig. 2, and a segmentation result is shown in Fig. 3.

 

Figure 1. Block diagram of the proposed hybrid method for 3D liver segmentation

 

Fig.2. The setup of the proposed 3D medical visual interaction system;

 

Fig.3. 3D segmented liver, placed in the abdominal cavity.

Acknowledgements

This work was supported by the project "Doctoral studies in engineering sciences for developing the knowledge based society - SIDOC", contract no. POSDRU/88/1.5/S/60078, project co-funded by the European Social Fund through Sectoral Operational Programme Human Resources Development 2007-2013.

This work was also supported by the project "Development and support of multidisciplinary postdoctoral programmes in major technical areas of national strategy of Research - Development - Innovation" 4D-POSTDOC, contract no. POSDRU/89/1.5/S/52603, project co-funded by the European Social Fund through Sectoral Operational Programme Human Resources Development 2007-2013.

4.      Equilibrium detection as solving optimization problems

Noémi Gaskó, Mihai Suciu, Rodica Ioana Lung, D. Dumitrescu

Solving a multi-objective optimization problem generally means obtaining the Pareto front. Real-world optimization problems are, in general, multi-objective by nature; one disadvantage of Pareto-based algorithms is that they approximate an infinite set of non-dominated solutions. Using equilibria from non-cooperative Game Theory, we replace Pareto dominance with a dominance concept based on game equilibria. The Nash equilibrium, its refinements (Aumann, modified strong Nash, strong Berge, etc.), and the Berge-Zhukovskii equilibrium can be used to detect stable solutions of a multi-objective optimization problem. The advantage of this approach is that the set of solutions is not infinite, which simplifies the decision-making process.
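
For a finite two-player game, the stability notion underlying this approach can be checked by brute force (an illustrative verification on a bimatrix game, not the authors' detection method):

```python
import numpy as np

def is_nash(payoff_a, payoff_b, i, j):
    """True iff the pure-strategy profile (i, j) is a Nash equilibrium
    of the bimatrix game: neither player gains by deviating unilaterally."""
    return bool(payoff_a[i, j] >= payoff_a[:, j].max() and
                payoff_b[i, j] >= payoff_b[i, :].max())

# Prisoner's Dilemma: mutual defection is the unique Nash equilibrium,
# even though mutual cooperation gives both players a higher payoff.
A = np.array([[3, 0], [5, 1]])   # row player's payoffs
B = A.T                          # column player (symmetric game)
```

Equilibrium-based dominance relations generalize exactly this "no profitable unilateral deviation" test to the search operators of an evolutionary algorithm.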

5.      Decision Making Simplified - Lorenz Dominance

Réka Nagy, Mihai Suciu, D. Dumitrescu

A problem frequently encountered in classical multi-criteria optimization is the existence of a large (often infinite) set of optimal solutions. Decision making based on selecting a unique preferred solution becomes difficult. Lorenz dominance, also called the equitable dominance relation, was introduced by Kostreva and Ogryczak [Kostreva and Ogryczak, 1999]. Lorenz domination, a refinement of Pareto domination, is used in decision theory and fair optimization problems.

We study the Lorenz dominance relation for multiobjective optimization problems. We propose a differential evolution algorithm based on Lorenz dominance. The detected solutions are those optimal solutions that are the most balanced and most equitable when all objectives are considered. Compared to the same evolutionary algorithm based on Pareto dominance, the Lorenz based algorithm is more scalable to the number of objectives [Nagy et al., 2012a].
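
The Lorenz (equitable) dominance test itself is simple to state: compare cumulative sums of the ascending-sorted objective vectors. A minimal sketch for the maximization case:

```python
import numpy as np

def lorenz_dominates(x, y):
    """Equitable (Lorenz) dominance for a maximization problem:
    x dominates y iff the cumulative sums of the ascending-sorted payoff
    vectors weakly dominate componentwise, strictly at least once
    (Kostreva and Ogryczak, 1999)."""
    lx = np.cumsum(np.sort(x))
    ly = np.cumsum(np.sort(y))
    return bool(np.all(lx >= ly) and np.any(lx > ly))
```

For example, the balanced payoff vector (2, 2) Lorenz-dominates (3, 1) even though the two are mutually non-dominated in the Pareto sense, which is exactly why the Lorenz front is smaller and easier to present to a decision maker.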

The most commonly used solution concept in Game Theory, the Nash equilibrium, presumes that all players in the game are completely rational agents who have common knowledge of the structure of the game and choose their strategies as best responses to the strategies of the adversary players. The outcome assured by the Nash equilibrium is not always Pareto optimal; in many cases a more favorable outcome could be reached if all players changed their strategies. On the other hand, Pareto optimality assures the greatest possible payoffs for the players, but the distribution of the payoffs is uneven and rarely equitable. Moreover, the set of Pareto optimal solutions is often infinite.

We consider that the Lorenz dominance relation could be successfully applied to address some limitations of standard Game Theory. Based on the Lorenz dominance relation, the concept of Lorenz equilibrium is proposed [Nagy et al., 2011]. Lorenz equilibrium provides optimal solutions that assure maximal payoffs for all players and an equitable payoff distribution.

Another drawback of classical Game Theory is the multiplicity of Nash equilibria. The goal of Game Theory is to predict the outcome of games, but for games with multiple equilibria it is impossible to give such a prediction. We apply the Lorenz equilibrium in the process of selecting the most favorable Nash equilibrium in the case of multiple equilibria [Nagy et al., 2012b].

References:

[Kostreva and Ogryczak, 1999] Kostreva, M. M. and Ogryczak, W. (1999). Linear Optimization with Multiple Equitable Criteria. RAIRO - Operations Research, 33(03):275-297.

[Nagy et al., 2011] Nagy, R., Dumitrescu, D., and Lung, R. I. (2011). Lorenz Equilibrium: Concept and Evolutionary Detection. International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, 408-412.

[Nagy et al., 2012a] Nagy, R., Suciu, M., and Dumitrescu, D. (2012a). Exploring Lorenz Dominance. International Symposium on Symbolic and Numeric Algorithms for Scientific Computing.

[Nagy et al., 2012b] Nagy, R., Suciu, M. A., and Dumitrescu, D. (2012b). Lorenz Equilibrium: Equitability in Non-Cooperative Games. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, GECCO'12, pages 489-496.

6.      Deriving Rules of Behaviour for Cognitive Radio Environments

Ligia Cremene, D. Dumitrescu, Noémi Gaskó, Réka Nagy

The aim of this research is to explore the potential of Game Theory (GT) in extracting rules of behaviour for emerging Cognitive Radio (CR) environments. We revisit the commons approach to unlicensed spectrum and try to show that a commons can essentially be regulated from the inside out. GT simulations of CR interactions reveal the emergence of certain equilibria mirroring stable situations that may occur. Once these equilibria are detected and characterized, related norms may be expressed and then embedded into machines (CRs). Internalized norms may become the alternative to external enforcement of rules. We call these emerging norms techno-social norms (TSNs). TSNs could eventually become a means of regulating the use of unlicensed spectrum and making open spectrum access feasible. Open spectrum access scenarios are considered, and the analysis is based on reformulations of two game-theoretical models: Cournot and Bertrand. The standard oligopoly models are reformulated in terms of radio resource access in unlicensed bands. In order to capture the large variety of CR interaction situations, several GT equilibrium concepts are considered: Nash, Pareto, and Berge-Zhukovskii. The concept of Lorenz equilibrium is introduced. Joint equilibria, such as Nash-Pareto and Pareto-Nash, are also considered. In order to capture the heterogeneity of CR interactions, the standard GT model is enriched by allowing players to be biased toward different types of equilibrium (or rationality). An evolutionary game-equilibrium detection method is used. Numerical simulations bring relevant insights into the problem of autonomy vs. regulation in emerging CR environments. Relying on extensive GT simulations, some rules of behaviour, to be expanded into techno-social norms, may be derived.
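
The Cournot reformulation can be illustrated on the textbook duopoly it starts from. A minimal sketch (the linear demand and cost parameters are illustrative, not taken from the abstract) in which alternating best responses converge to the Nash equilibrium quantity (a - c)/(3b):

```python
def cournot_nash(a=10.0, b=1.0, c=1.0, rounds=50):
    """Best-response dynamics for a two-player Cournot game, the toy
    economic model the abstract reinterprets as radio-resource access.
    Player i's profit: q_i * (a - b*(q1 + q2)) - c*q_i."""
    q1 = q2 = 0.0
    for _ in range(rounds):
        # Each player best-responds to the other's last quantity.
        q1 = max(0.0, (a - c - b * q2) / (2 * b))
        q2 = max(0.0, (a - c - b * q1) / (2 * b))
    return q1, q2
```

In the radio-resource reading, the "quantity" each player chooses stands for its share of access to the unlicensed band, and the equilibrium describes a stable sharing situation of the kind the TSNs are meant to encode.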

7.      A Game Theoretical Perspective on Social Honesty

Marcel Cremene, Ligia Cremene, D. Dumitrescu

A new game is proposed for studying social honesty dynamics. This game is analyzed in spatial (lattice) scenarios. Numerical results indicate that the punishment probability is more important than the punishment level. Moreover, there seem to be certain critical values of the punishment probability that can change the evolution of the society towards the domination of honesty or dishonesty. Clusters of players emerge spontaneously from the very first rounds; spatially regrouped players are more resistant to invasions.

8.      Survey of Multicriteria Optimization for Industrial Applications

Mihai Suciu

Traditionally, optimization techniques deal with only one objective; multicriteria optimization techniques try to optimize two or more objectives simultaneously. The use of multi-objective evolutionary algorithms (MOEAs) has become increasingly popular in the last few years, mainly owing to the success of MOEAs in solving real-world problems, where they have generated results either competitive with or better than those produced by other search techniques. Multicriteria optimization techniques have been applied to a wide range of problems: (I) engineering applications - telecommunications and networking, structural and mechanical engineering, electrical and electronics engineering, transport, civil engineering and construction; (II) industrial applications - design and manufacturing, scheduling, management, grouping and packing; (III) finance - investment portfolio optimization, stock ranking, risk-return analysis. This survey gives a basic introduction to multicriteria optimization and some examples of how it can be applied to real-world problems.
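
As a baseline for the concepts surveyed here, Pareto optimality can be illustrated with a brute-force non-dominated filter (a didactic sketch, not one of the surveyed MOEAs):

```python
import numpy as np

def pareto_front(points):
    """Extract the non-dominated (Pareto-optimal) points from a set of
    objective vectors, assuming all objectives are maximized."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is at least as good in
        # every objective and strictly better in at least one.
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            front.append(tuple(p))
    return front
```

MOEAs approximate exactly this set for continuous problems, where exhaustive enumeration is impossible.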

9.      Equitable Solutions in QoS-aware Service Optimization

Mihai Suciu, Marcel Cremene, F. Pop, D. Dumitrescu

Service composition is one of the main research areas of Service Oriented Computing (SOC). A given functionality may be offered by several services with different Quality of Service (QoS) attributes. Even though the QoS optimization problem is multi-objective by nature, most approaches are based on single-objective optimization. Compared to single-objective algorithms, multi-objective evolutionary algorithms (MOEAs) have some advantages: (i) the aggregation of criterion functions is not necessary, and (ii) the user can select a posteriori one of the Pareto optimal solutions. However, the end user may not be an expert in decision making; thus, providing a whole set of solutions could confuse him/her.

We propose to address the QoS optimization problem using the concept of Lorenz dominance. Lorenz solutions are equitable and well balanced; therefore, an equitable approach based on Lorenz dominance could simplify the Decision Maker's choice. Evolutionary detection of Lorenz solutions seems to be an appealing approach. Some state-of-the-art MOEAs are slightly modified to address the QoS optimization problem. Lorenz solutions drastically reduce the number of solutions in the Pareto set, and thus the decision costs.

10.  Between Selfishness and Altruism: Fuzzy Nash--Berge-Zhukovskii Equilibrium

Réka Nagy, Noémi Gaskó, Rodica Ioana Lung, D. Dumitrescu

In many cases, the Nash equilibrium is not the best choice for human players. In trust games the Nash equilibrium is often mutual defection, which is the worst possible outcome for all players. The Berge-Zhukovskii equilibrium models a more cooperative behavior, so in trust games, where players gain by cooperating, it is usually a better choice than the Nash equilibrium.

Real-life results show that players rarely follow the theoretical predictions. Our aim is to find new equilibrium types that offer a more realistic model of human players.

We propose the fuzzy Nash--Berge-Zhukovskii equilibrium, a fuzzy combination of the Nash and Berge-Zhukovskii equilibria. Several continuous trust games are investigated. Numerical results indicate that the fuzzy Nash--Berge-Zhukovskii equilibrium is suitable for modeling real-life situations.

11.  Equilibrium detection in non-cooperative game theory. An evolutionary approach

D. Dumitrescu, Rodica Ioana Lung, Noémi Gaskó, Réka Nagy

The most important task in non-cooperative Game Theory is equilibrium detection. A computational method is presented for detecting different types of equilibria.

The method is based on generative relations, an algebraic tool for describing game equilibria. The generative relations of the Nash, Aumann, Berge-Zhukovskii, Pareto, and Lorenz equilibria are presented. The concept of meta-strategy, which allows players to have different rationality types, is described. The joint Nash-Pareto and Nash-Berge-Zhukovskii equilibria are presented. Numerical experiments indicate the potential of the proposed method.
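
As an illustration of a generative relation, a common formulation for the Nash case compares two strategy profiles by counting profitable unilateral deviations (a sketch of this style of relation; the details may differ from the authors' exact definition):

```python
def nash_ascendancy(u, x, y):
    """Nash-style generative relation: k(x, y) counts the players who
    would strictly gain by unilaterally replacing their component of x
    with the corresponding component of y. Profile x ascends y iff
    k(x, y) < k(y, x); Nash equilibria are the non-dominated profiles
    under this relation. `u(i, profile)` returns player i's payoff
    (an illustrative interface)."""
    n = len(x)
    def k(p, q):
        count = 0
        for i in range(n):
            if q[i] != p[i]:
                deviated = list(p)
                deviated[i] = q[i]
                if u(i, deviated) > u(i, list(p)):
                    count += 1
        return count
    return k(x, y) < k(y, x)
```

An evolutionary algorithm can then use this relation in place of Pareto dominance when selecting individuals, steering the population toward equilibria instead of the Pareto front.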

12. Cognitive Antenna System for Sustainable Adaptive Radio Interfaces

Ligia Cremene, Nicolae Crişan

Communication systems are usually implemented on a heterogeneous infrastructure and must operate in environments with accelerated dynamics. Adaptation is thus a key feature of such systems. Long-term, sustainable, adaptive solutions did not receive much attention in the design phase of wireless communication systems. With the advent of LTE, which was designed as a highly flexible radio interface "created to evolve", there is room for disruptive solutions to be put in place. A new approach for the receiver is proposed, in which the antenna takes an active role in characterizing and eventually learning the operation environment. The proposed solution, a Cognitive Antenna System (CAS), is based on two main mechanisms that we call antenna vision (AV) and signal fishing (SF). In the core cognitive cycle (observe, decide, act) we aim to improve the "observe" part, which critically influences the whole decision process. The signal fishing and antenna vision mechanisms bring a set of advantages: higher received SNR, no additional noise, and higher AoA estimation accuracy.