
A Flexible Graphical User Interface for Embedding Heterogeneous Neural Network Simulators


Radu Drossu (rdrossu@eecs.wsu.edu)
Zoran Obradovic (zoran@eecs.wsu.edu)
Justin Fletcher (jfletche@eecs.wsu.edu)




School of Electrical Engineering and Computer Science

Washington State University, Pullman, Washington, 99164-2752








Abstract

The graphical user interface (GUI) for heterogeneous neural network simulators proposed in this article is intended to be of use both to the novice and to the experienced neural network user. For the novice, it provides an easy-to-use neural network simulation package that insulates the user from having to know the simulator implementation details or the configuration file syntax. For the experienced neural network professional, it provides an interface that is easily extended to include any additional neural network simulator in binary form. To suit both academic and personal computer environments, the GUI has been developed using the free TCL/TK software package, available on workstations running UNIX and on PCs running the free Linux operating system. Although the GUI and the embedded simulators have been successfully tested in both neural network research and training programs, more extensive testing in undergraduate and graduate level classes is in progress.

I. Introduction

Neural network technology has the potential to solve a variety of today's technological problems that conventional computing techniques are unable to solve. The popularity of this technology is growing rapidly, since it has proved applicable to everyday problems as diverse as handwritten zip code recognition [8], power consumption prediction [7] and financial market forecasting [1].

Neural network data modeling techniques are applicable to two general types of problems:

  1. Classification, in which an input pattern is assigned to one of a number of discrete classes.
  2. Prediction (regression), in which a continuous output value is estimated from an input pattern.

In addition, these techniques are applicable not only to time-independent problems, but also to time-dependent problems (time series), where the patterns to be classified are temporal sequences of previous process values.

Neural network computational models consist of a number of relatively simple, interconnected processing units, called neurons, working in parallel. Different neural network architectures can be used to solve a given classification or prediction problem. In an architecture, the neurons are interconnected through synaptic links and grouped in layers, with synaptic links usually connecting neurons in adjacent layers. Typically, three layer types can be distinguished: input (the layer to which external stimuli are applied), output (the layer that presents results to the exterior world) and hidden (the intermediate computational layers between the input and output layers). Neural network architectures can be divided, roughly speaking, into two categories: feedforward, in which the signal flow between layers is unidirectional (from the input layer towards the output layer), and recurrent, in which the flow between layers is bidirectional (both from the input layer towards the output layer and the reverse).
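To make the computation of a single neuron concrete, the following TCL fragment (an illustration of ours, not code from the accompanying simulators) computes the output of one sigmoid neuron; a feedforward pass simply applies such neurons layer by layer, each layer's outputs becoming the next layer's inputs:

    # Output of one sigmoid neuron: 1 / (1 + exp(-(sum_i w_i*x_i + bias))).
    proc neuron {weights inputs bias} {
        set net $bias
        for {set i 0} {$i < [llength $weights]} {incr i} {
            set net [expr {$net + [lindex $weights $i] * [lindex $inputs $i]}]
        }
        return [expr {1.0 / (1.0 + exp(-$net))}]
    }

    # Example with arbitrary weights, inputs and bias:
    puts [neuron {0.5 -0.3} {1.0 2.0} 0.1]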

The most distinctive element of neural networks, as opposed to traditional computational structures, is learning. Learning is the adaptation of certain characteristic neural network parameters, using a set of examples (known outcomes of the problem for given conditions), so that for given input patterns the outputs reflect the value of a specific function. The final data modeling goal is not to memorize known patterns, but to generalize from prior examples (to be able to estimate the outcomes of the problem under previously unseen conditions).
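As a minimal illustration of such parameter adaptation (again ours, not the simulators' actual code), the following TCL procedure performs one delta-rule update for a single sigmoid neuron, nudging each weight in the direction that reduces the squared error between the target and the actual output:

    # One gradient-descent update:
    # w_i <- w_i + eta * (target - out) * out * (1 - out) * x_i
    proc update_weights {weights inputs out target eta} {
        set delta [expr {($target - $out) * $out * (1.0 - $out)}]
        set new_weights {}
        for {set i 0} {$i < [llength $weights]} {incr i} {
            lappend new_weights \
                [expr {[lindex $weights $i] + $eta * $delta * [lindex $inputs $i]}]
        }
        return $new_weights
    }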

Neural network data modeling combines an architecture with an appropriate learning technique. Traditional approaches assume learning on a prespecified architecture, whereas constructive learning creates an appropriate architecture during the learning process. Most neural networks have a number of architectural and learning parameters that must be specified appropriately. Selecting appropriate parameters for large scale applications is an important experimental problem: if the parameters are not appropriate, the algorithms may take a long time to converge or may not converge at all [12]. Consequently, in training novice neural network users, an emphasis on techniques and intuition development for properly setting these parameters is necessary.
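The role of such parameters is visible in the skeleton of a typical training loop, sketched below in TCL under assumed names (train_epoch and error_on_examples are hypothetical placeholders): the learning rate and the epoch limit are exactly the kind of settings whose values determine whether, and how quickly, training converges.

    set eta 0.25          ;# learning rate: too large may oscillate, too small is slow
    set max_epochs 1000   ;# upper bound on passes over the training examples
    set tolerance 0.01    ;# stop once the error is small enough

    for {set epoch 0} {$epoch < $max_epochs} {incr epoch} {
        train_epoch $eta                ;# one pass of weight updates
        if {[error_on_examples] < $tolerance} break
    }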

Currently, most neural network modeling is performed on standard sequential computers (rather than on specialized neural network hardware) using simulation software. Until recently, the choice of simulation software was not a primary issue among neural network professionals, as the application of neural network technology was limited to relatively small problems. However, the choice of efficient simulation software plays an important role when dealing with the large scale, real life applications desired by industry. The approach proposed in this paper provides developers with a number of simulator modules implementing various learning algorithms, all behind a single user interface. In such an environment it is easier to select the model appropriate for a particular application than to become accustomed to various independent simulators.

Until recently, most neural network modeling work was performed by highly trained neural network experts. In practice, such experts learn how to use several commercial or freely available packages and, in addition, develop their own software to deal with special issues the packages do not support. Experience seems to indicate that, to support features absent from the standard packages, most neural network experts prefer building their own software to using tools developed by other experts (even within a single group). This is a result of the significant time required to learn to use someone else's neural network tool properly. We strongly believe that software reuse among experts can be significantly improved by a flexible user interface, as proposed in this paper, that can easily accommodate developers' needs.

As the applicability of artificial neural networks seems to extend far beyond the computer science domain to areas like electrical engineering, biology, medicine and business, the need for rapidly trained personnel able to assimilate neural network techniques quickly and apply them in the corresponding domain becomes increasingly acute. Thus the necessity arises to acquire basic neural network knowledge rapidly, without covering underlying mathematical and programming details, concentrating instead on understanding the conceptual model and on designing adequate experiments tuned to a particular problem. Our experience indicates that the choice of graphical user interface, as proposed in this paper, plays a crucial role in transferring neural network technology rapidly to non-expert users.

The graphical user interface (GUI) proposed in this article is intended to be of use both to the novice and to the experienced user. The software package included with this paper provides interfaces both to the widely used traditional backpropagation learning, through gradient descent optimization on a prespecified analog architecture [14], and to a novel learning technique developed in our laboratory, known as hyperplane determination from examples (HDE), which automatically grows a problem tailored discrete architecture [4, 5]. The fine-tuning of each neural network learning process is done by adjusting specific learning parameters. For the novice, the package is an easy-to-use neural network simulation tool supporting both backpropagation and HDE, which insulates the user from having to know the implementation details or the configuration file syntax of the simulators. For the experienced neural network professional, it provides an interface that is easily extended to include any additional neural network simulator in binary form.

The paper is organized as follows: Section 2 describes the graphical user interface and its underlying menu structure; Section 3 illustrates the flexibility, portability and extensibility of the GUI by means of embedding two radically different neural network simulators; and Section 4 discusses applications of the GUI in research and teaching, as well as suggestions for further improving the GUI.

II. GUI Design and Implementation

The objective of our effort was to develop a portable GUI able to serve neural network professionals as well as the large number of novice neural network users who might have access only to personal computers (PCs). With the introduction of Linux, a free UNIX operating system that runs on Intel 386/486/Pentium-based PCs, the use of UNIX and X Window environments becomes affordable not only to institutions, but also to the large PC user community [13]. Linux provides the PC user with the power and flexibility of the UNIX operating system, as well as an overwhelming amount of free, good quality software.

One of the software packages included in most Linux distributions, and also available on UNIX workstations in academic environments, is Dr. John Ousterhout's TCL/TK package [9]. TCL, standing for ``tool command language'', is a general purpose scripting language for controlling and extending applications, whereas TK is a toolkit designed for developing X Window applications. (For those unfamiliar with it, X Window is a windowing environment for UNIX, similar in appearance to PC window environments.) Both TCL and TK are implemented as libraries of C procedures that allow the extension of their core features. In its current form TCL is interpreted, which suggests a possible slowdown compared to a compiled program; this potential slowdown is not apparent for scripts of a few thousand lines, which is sufficient for developing a GUI for neural network simulation. The CD-ROM software accompanying this paper includes the GUI and simulator executables for HP 9000 series workstations as well as for PCs running Linux. To run the included software on one of these platforms, TCL version 7.3 and TK version 3.6 must also be installed.
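To give a flavor of how concise TK code can be, the following complete wish script (an illustration of ours, not part of the CD-ROM software) creates a working window with a single button; an equivalent C and Xlib program would run to dozens of lines:

    # A complete Tk application: one button that prints a message when pressed.
    button .hello -text "Press me" -command {puts "Hello from Tk"}
    pack .hello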

The GUI proposed in this paper was developed for UNIX and Linux environments to satisfy the needs of both expert and novice users. To ensure portability, the GUI is written using just the TCL/TK core features, without any additional C procedures. It allows interactive setting of neural network simulator parameters as well as execution of the simulations. In its current form, the GUI embeds two radically different simulators: a backpropagation simulator and the HDE constructive learning simulator. A basic understanding of TCL/TK allows the user to easily embed additional simulators by "linking" their executable files to the GUI, as sketched below.
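Such "linking" amounts to little more than writing the user's settings to a configuration file and invoking the simulator binary with TCL's exec command, as in the following sketch (the procedure and the binary name bp_sim are hypothetical, not the GUI's actual code); the same mechanism yields the foreground/background choice in feature 6 of the list below:

    # Launch an embedded simulator binary on a previously written config file.
    proc run_simulator {binary configFile background} {
        if {$background} {
            exec $binary $configFile &    ;# background: the GUI stays responsive
        } else {
            exec $binary $configFile      ;# foreground: the GUI waits for completion
        }
    }

    # e.g., run_simulator ./bp_sim bp.cfg 1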

The GUI provides the following basic features:

  1. A text editor for editing plain text files.
  2. Color configuration of all GUI items.
  3. Setting the characteristic features and parameters of the underlying neural network simulators.
  4. Saving the neural network configuration in the simulator specific file formats, as well as retrieving a neural network configuration from a file for the backpropagation simulator.
  5. Viewing the neural network configuration both in a tutorial style form (in which the significance of the different configuration options is briefly explained) and in the simulators' configuration file formats.
  6. Running the simulators both as foreground and as background processes (for an explanation of foreground and background processes, the reader can consult [10, 13]).
  7. A help file that provides information on running the simulators both by using the GUI and by editing the simulators' configuration files directly.

In order to run the GUI, the contents of either the directory NNGUI/UNIX (for HP 9000 series workstations) or NNGUI/LINUX (for PCs running Linux) on the CD-ROM must be copied to a user's directory (such as MYNNGUI). After setting the current directory to MYNNGUI, the GUI can be started by issuing the command nngui. The different menus of the GUI can be accessed either by clicking them with the left mouse button or by simultaneously pressing the ALT key and the underlined letter key of the menu name. During application execution, certain GUI options can be accessed directly from the main window through keyboard accelerators (hot keys). Options that have hot keys display the key combination in the pop-up menus (see the help file for a list of options), where CTRL+A represents pressing the CTRL and A keys simultaneously (the interface is not case sensitive, so lowercase a is the same as uppercase A). In each file selection box, a file is selected by double-clicking it with the left mouse button.

The menu structure of the GUI can be represented hierarchically as in Fig. 1.

A brief explanation of each of the different menu items supported under the current GUI implementation follows. For a more detailed explanation, install the program from the CD-ROM and select the Help option under the Help menu item, or press CTRL+H in the main menu window.

A. The File Menu

The following file access options are included in this menu.

B. The Network Menu

This menu allows the selection of the desired simulator. The current version of the GUI supports two different neural network models: the backpropagation model and the HDE constructive learning model.

C. The Configuration Menu

Parameters specific to the supported neural network models are set by using this menu. This menu is neural network model sensitive, so the characteristics displayed depend on the simulator selected in the Network menu. A more detailed description of this menu in the current GUI implementation is given in Section 3.

D. The Run Menu

The Run menu starts the selected neural network simulator. It consists of three menu items:

E. The View Menu

This menu allows the display of the current simulator settings.

F. The Color Menu

For a more pleasant appearance of the GUI, a multitude of color options can be set in this menu. The Color Mode menu item allows the selection of colors in three different ways: by using X Window's default color names, or by using one of two more sophisticated, fine-tunable color selection techniques, HSV and RGB. A setup screen for an HSV color selection is shown in Fig. 4.

The default color configuration can be restored by selecting the Restore Color Configuration menu item, whereas the preferred color configuration can be saved by choosing the Save Color Configuration menu item. Whenever a new color configuration is saved, it will overwrite the previously saved color configuration stored in the file config.crt.

G. The Help Menu

This menu gives the release number of the current GUI version under the About the GUI menu item, as well as a help file under Help.

III. The Configuration Menu as a Key to Embedding Heterogeneous Neural Network Simulators

The two neural network simulators embedded in the current GUI are radically different implementations. The backpropagation simulator represents a traditional learning technique applied to a fixed architecture, whereas the HDE constructive learning simulator updates the architecture during the learning process. The two simulators also differ significantly in software implementation. Although both were programmed in C, the HDE simulator uses Xlib calls to implement a graphical visualization of the learning process. In addition, whereas the backpropagation simulator is intended to be run only sequentially (on a single sequential computer), the HDE simulator can be run either sequentially or in parallel, on a highly parallel machine such as the Paragon at the San Diego Supercomputer Center or on a local distributed system (a computer network). Effective support of two very different neural network models by a single GUI is a strong indicator of the GUI's flexibility. In addition, as long as compiled (binary) versions of the simulators are available on a computer with the TCL/TK (version 7.3/3.6) package installed, the GUI can be used successfully, demonstrating its portability. Finally, additional neural network simulators can be embedded in the GUI without significant effort as long as their binary files are available, demonstrating the GUI's extensibility as well.

From the perspective of the end user, who should be insulated from implementation details, the only differences between the various simulators appear in the Configuration menu alone. As previously mentioned, this menu depends on the neural network simulator selected in the Network menu. If additional simulators are added to the GUI, their parameters will appear as corresponding Configuration menu options when those simulators are selected.
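One simple way to realize such model sensitivity in TCL/TK, sketched here under assumed names (the entry labels and the set_* dialog procedures are hypothetical, not the GUI's actual code), is to rebuild the Configuration menu's entries whenever the simulator selection changes:

    # Rebuild the Configuration menu according to the selected simulator.
    proc build_config_menu {menu simulator} {
        $menu delete 0 end
        switch -- $simulator {
            backprop {
                $menu add command -label "Learning Parameters" -command set_learning_params
                $menu add command -label "Architecture" -command set_architecture
            }
            hde {
                $menu add command -label "Growth Options" -command set_growth_options
            }
        }
    }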

A brief description of the configuration options for the two neural network simulators included in this version of the GUI follows. For a detailed description of the parameters set in this menu, the reader is advised to install the software from the CD-ROM and consult the help file (by pressing CTRL+H in the main window). As background for configuring model parameters, the basic concepts of the backpropagation learning technique can be found in [6], while the HDE constructive learning concepts can be found in [4, 5].

The corresponding Configuration menu settings for the backpropagation simulator are:

The Configuration menu settings for the HDE constructive learning simulator are:

IV. Application in Research and Teaching

The primary goal of the GUI development was to assist both neural network professionals and novice users in performing neural network experiments on heterogeneous simulators. From the professionals' perspective, the two embedded neural network simulators were successfully used either directly or by means of the GUI in our lab for several research projects covering various problems [2, 3, 4, 5].

In summer 1995, the GUI was also used by first-time neural network users during the WSU/NSF Teacher Institute for Science/Mathematics Education through Engineering Experiences training program. Two mathematics teachers with no prior knowledge of either the UNIX operating system or neural networks spent six weeks in our laboratory with the objective of learning neural network techniques for designing backpropagation experiments. After quickly becoming accustomed to the GUI, they obtained results on a few neural network benchmark classification problems, such as the Monks problems [11] and the breast cancer diagnosis problem [15], comparable to the best known results for those problems. As a final result, they also developed a neural network teaching module for use in high school.

Although the GUI and the embedded simulators have been successfully tested in both neural network research and training programs, more extensive in-class testing is needed. Hence, the GUI is intended for use in two courses offered in Fall '95 at the School of Electrical Engineering and Computer Science at Washington State University: an undergraduate course on Artificial Intelligence and a graduate course on Neural Networks. From these two diverse groups of students we expect to gain valuable suggestions for future extensions of the current version of the GUI. Finally, any feedback from readers of this article and from users of the software provided on the accompanying CD-ROM would be extremely beneficial for improving the current GUI implementation and for embedding additional neural network simulators.

ACKNOWLEDGEMENTS

We would like to thank LeeAnn Wagner and Bud Wright, participants in the 1995 WSU/NSF Teacher Institute for Science/Mathematics Education through Engineering Experiences program, who successfully completed the neural network training program, supporting our hypothesis that the current GUI implementation is an appropriate teaching tool for students with no prior experience in this domain. We also thank Professors R. Zollars, D. Orlich, J. Petersen and W. Thomson, principal investigators for the NSF grant ESI-9254358, who approved Wagner and Wright's summer project in our lab and Professor Jack Meador for his willingness to use our software in his neural networks course. The partial support of NSF research grant NSF-IRI-9308523 is also gratefully acknowledged. Finally, we thank Ioana Danciu for her constructive comments on a preliminary version of the manuscript.

REFERENCES

[1] T. Chenoweth and Z. Obradovic, ``A Multi-Component Nonlinear Prediction System for the S&P 500 Index,'' in Neurocomputing Journal (in press).

[2] R. Drossu et al., ``Single and Multiple Frame Video Traffic Prediction Using Neural Network Models,'' in Computer Networks, Architecture and Applications, S. V. Raghavan and B. N. Jain, eds., Chapman & Hall, pp. 146-158, 1995.

[3] R. Drossu and Z. Obradovic, ``Novel Results on Stochastic Modelling Hints for Neural Network Prediction,'' in World Congress on Neural Networks, Washington, D.C., vol. 3, pp. 230-233, 1995.

[4] J. Fletcher and Z. Obradovic, ``Combining Prior Symbolic Knowledge and Constructive Neural Networks,'' in Connection Science: Journal of Neural Computing, Artificial Intelligence and Cognitive Research, vol. 5, nos. 3-4, pp. 365-375, 1993.

[5] J. Fletcher and Z. Obradovic, ``A Discrete Approach to Constructive Neural Network Learning,'' in Neural, Parallel and Scientific Computations, vol. 3, no. 3, pp. 307-320, 1995.

[6] S. Haykin, ``Neural Networks: A Comprehensive Foundation,'' Macmillan Publishing Company, 1994.

[7] M. Mangeas and A. S. Weigend, ``Forecasting Electricity Demand Using Nonlinear Mixture of Experts,'' in World Congress on Neural Networks, Washington, D.C., vol. 2, pp. 48-53, 1995.

[8] O. Matan et al., ``Multi-Digit Recognition Using a Space Displacement Neural Network,'' in Advances in Neural Information Processing Systems, vol. 4, pp. 488-495, 1992.

[9] J. K. Ousterhout, ``Tcl and the Tk Toolkit,'' Addison-Wesley, 1994.

[10] M. G. Sobell, ``UNIX System V: A Practical Guide,'' Third Edition, Benjamin Cummings, 1995.

[11] S. B. Thrun et al., ``The MONK's Problems: A Performance Comparison of Different Learning Algorithms,'' Technical Report, Department of Computer Science, Carnegie Mellon University, 1991.

[12] R. Venkateswaran and Z. Obradovic, ``Efficient Learning through Cooperation,'' in World Congress on Neural Networks, San Diego, CA, vol. 3, pp. 390-395, 1994.

[13] M. Welsh and L. Kaufman, ``Running LINUX,'' O'Reilly and Associates, Sebastopol, California, 1995.

[14] P. Werbos, ``Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences,'' Ph.D. thesis, Harvard University, 1974. Reprinted by Wiley and Sons, 1995.

[15] W. H. Wolberg and O. L. Mangasarian, ``Multisurface Method of Pattern Separation for Medical Diagnosis Applied to Breast Cytology,'' in Proc. National Academy of Sciences, U.S.A., vol. 87, pp. 9193-9196, 1990.






Radu Drossu received his M.S. degree in Electrical Engineering from the Polytechnic Institute of Bucharest, Romania, in 1990. He worked as a system programmer and hardware designer at the Research Institute for Automation (IPA), Bucharest, from 1990 to 1993. He is currently a Ph.D. candidate in Computer Science at Washington State University, doing his research in Artificial Neural Networks under the supervision of Dr. Zoran Obradovic.

Zoran Obradovic received the B.S. degree in Applied Mathematics, Information and Computer Sciences in 1985 and the M.S. degree in Mathematics and Computer Science in 1987, both from the University of Belgrade, and the Ph.D. degree in Computer Science from the Pennsylvania State University in 1991. He is currently a research scientist at the Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, and an Assistant Professor in the School of Electrical Engineering and Computer Science, Washington State University. The objective of his research is to explore the applicability of neural network technology to large scale classification and time series prediction problems in very noisy domains.

Justin Fletcher received his B.S. in Interdisciplinary Studies in 1980 and his M.S. in Computer Science in 1983 from the University of Idaho. From 1982 until 1991 he worked as a software engineer, consultant and software manager in industry. He received his Ph.D. degree in Computer Science from Washington State University in 1994, specializing in Artificial Neural Networks, and is currently a software consultant.