Soft sensors are those that exist only as a mathematical calculation based on real-time measurements from other online sensors, or on off-line sources such as the results of laboratory analysis. A soft sensor combines several measured process variables for use in process monitoring and control. Soft sensors are inferential estimators, drawing conclusions from process observations when hardware sensors are unavailable or unsuitable. They also have an important auxiliary role in sensor validation when performance declines through age or fault accumulation. The nonlinear behavior exhibited by many industrial processes can be usefully modeled with techniques of computational intelligence such as neural networks, fuzzy systems and nonlinear partial least squares. “Application of Soft Sensors in Process Monitoring and Control: A Review”, by Ajaya K Pani and Hare Krishna Mohanta, is a comprehensive review of the topic. After highlighting the various applications and situations warranting the use of soft sensors, the authors describe the important steps of soft sensor development: collection and processing of historical plant data for the different variables, development of a model based on the available data, and validation of the model. A critical review of the various techniques available for data handling and modeling is also presented.
The universal asynchronous receiver/transmitter (UART) is the key component of the serial communications subsystem of a computer. It takes bytes of data and transmits the individual bits sequentially; at the destination, a second UART reassembles the bits into complete bytes. A UART is thus used to convert the transmitted information at each end of the link. Parul Sharma and Ashutosh Gupta, in their paper, “Design, Implementation and Optimization of Highly Efficient UART”, describe the implementation of a proposed UART using the Verilog hardware description language. ModelSim SE 6.0d was used for simulation and the Xilinx ISE 10.1 tool for implementation. A Virtex-4 FPGA with 363 input/output pins and a Spartan-3 FPGA with 320 input/output pins were used as target devices. The authors conclude that the Spartan-3 is the better choice for implementing the proposed UART system, as its power consumption of 93 mW is much lower than the 268 mW of the Virtex-4.
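The byte-to-bits conversion a UART performs can be illustrated independently of any particular hardware design. The sketch below (a behavioral model, not the authors’ Verilog implementation) serializes a byte into a standard 8N1 frame and reassembles it at the other end:

```python
def uart_transmit(byte):
    """Frame one byte as 8N1: start bit, 8 data bits LSB first, stop bit."""
    assert 0 <= byte <= 0xFF
    bits = [0]                                   # start bit (line pulled low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit (line idles high)
    return bits

def uart_receive(bits):
    """Reassemble the data bits of an 8N1 frame into a complete byte."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = uart_transmit(0x55)
byte = uart_receive(frame)
```

A hardware UART additionally handles baud-rate timing, oversampling of the incoming line and optional parity, which this ten-bit sketch deliberately omits.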
Automatic classification of electroencephalography (EEG) signals for different types of mental activity is an active area of research with many applications, such as brain-computer interfaces (BCI) and medical diagnosis. A BCI is a system that transforms the brain activity of different mental tasks into a control signal, providing an augmentative communication method for patients with severe motor disabilities. A number of classifiers using statistical methods, such as linear discriminant analysis (LDA), the hidden Markov classifier and z-scale based discriminant analysis (ZDA), have been reported in the past. The main drawback of these methods is that they do not work well for nonlinear classification problems. The artificial neural network (ANN) is currently accepted as the best classification method for EEG signals. “Classification of Five Mental Tasks from EEG Data Using Neural Network Based on Principal Component Analysis”, by Vijay Khare, Jayashree Santhosh, Sneh Anand and Manvir Bhatia, evaluates the performance of a multilayer back-propagation neural network (MLP-BP NN) with the resilient training method for discrimination of five mental tasks. The principal component analysis (PCA) method was used for feature extraction from the signals, and the coefficients served as the input vector for the MLP-BP NN. The authors report that the classifier showed an accuracy of over 86% in recognizing the various mental tasks.
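The PCA feature-extraction step can be sketched as follows. This is an illustrative outline with synthetic data and hypothetical segment sizes, not the authors’ actual pipeline: each signal segment is projected onto the leading principal components, and the resulting coefficients form the classifier’s input vector:

```python
import numpy as np

# PCA feature extraction sketch: project each (synthetic) EEG segment
# onto the leading principal components of the training data and use
# the coefficients as the classifier's input vector.

rng = np.random.default_rng(1)
# 100 signal segments of 64 samples each (hypothetical sizes).
segments = rng.normal(size=(100, 64))

# Centre the data and obtain principal components via SVD.
mean = segments.mean(axis=0)
centred = segments - mean
_, _, vt = np.linalg.svd(centred, full_matrices=False)

def pca_features(segment, n_components=8):
    """Coefficients of a segment in the leading principal components."""
    return (segment - mean) @ vt[:n_components].T

# An 8-dimensional feature vector ready to feed a neural classifier.
features = pca_features(segments[0])
```

The dimensionality reduction is the point: a 64-sample segment becomes an 8-coefficient vector, which keeps the downstream network small and mitigates overfitting.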
Digitally-stored multimedia material is a rapidly growing resource, owing to ongoing technological advancement in data storage, communications and computing. Transfers of long audio and video files via the internet, and the storage capacities of portable multimedia devices and personal computers, have been increasing rapidly. Computerized solutions for the automatic organization of multimedia material are therefore an attractive approach to accessing content efficiently, and information indexing and retrieval is an important field of application for automatic audio recognition. “Automatic Classification and Indexing of Audio Broadcast Data”, by P Dhanalakshmi, S Palanivel and V Ramalingam, proposes effective algorithms to automatically classify audio clips into one of six classes: music, news, sports, advertisement, cartoon and movie. The authors use a five-layer autoassociative neural network (AANN) model to capture the distribution of the selected features: linear predictive coefficients (LPC), linear predictive cepstral coefficients (LPCC) and Mel-frequency cepstral coefficients (MFCC). They also present an audio indexing system based on the K-Means clustering algorithm, which retrieves movie clips with an accuracy of about 91%.
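The clustering step behind such an indexing system can be sketched with a plain implementation of Lloyd’s K-Means algorithm. The data, dimensions and two-cluster setup below are illustrative stand-ins, not the paper’s feature vectors or configuration:

```python
import numpy as np

# K-Means indexing sketch: cluster stored feature vectors (standing in
# for frame-level audio features such as MFCCs) so a query can be
# matched against a few centroids rather than every stored vector.

rng = np.random.default_rng(2)
# Two well-separated groups of 2-D feature vectors (synthetic).
features = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                      rng.normal(5.0, 0.1, (50, 2))])

def kmeans(data, k, iterations=20):
    """Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid recomputation."""
    # Deterministic initialisation: points spread across the data set.
    centroids = data[np.linspace(0, len(data) - 1, k).astype(int)]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        distances = np.linalg.norm(data[:, None] - centroids[None], axis=2)
        labels = distances.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([data[labels == j].mean(axis=0)
                              for j in range(k)])
    return centroids, labels

centroids, labels = kmeans(features, k=2)
```

For retrieval, a query feature vector would be compared against the stored centroids, and only the clips assigned to the nearest cluster examined in detail.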
The field of antenna engineering is central to all wireless technologies and plays a vital role in the successful deployment of network systems. Some of the requirements imposed on an antenna are as follows: it should be relatively cheap and easy to manufacture, lightweight and compact with a low profile but a robust body, and environment-friendly. The microstrip patch antenna fulfills these requirements, although unrelenting efforts continue to miniaturize the antenna further. In the paper, “Design and Development of Hybrid Microstrip Array Antenna”, the authors,
S L Mallikarjun, P M Hadalgi, R G Madhuri and S A Malipatil, describe the design and construction of a hybrid microstrip array antenna element. The authors have also implemented four- and eight-element hybrid array antennas and characterized them for their performance.
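For orientation, the standard textbook design equations for a rectangular microstrip patch (general formulas, not taken from the reviewed paper) relate the patch dimensions to the resonant frequency. The sketch below evaluates them for a hypothetical 2.4 GHz patch on an FR-4 substrate; all parameter values are illustrative assumptions:

```python
import math

# Textbook rectangular-patch design equations, evaluated for a
# hypothetical 2.4 GHz patch on FR-4 (eps_r = 4.4, h = 1.6 mm).

c = 3e8            # speed of light, m/s
f = 2.4e9          # design frequency, Hz
eps_r = 4.4        # substrate relative permittivity
h = 1.6e-3         # substrate height, m

# Patch width for efficient radiation.
W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity accounts for fringing fields in the air.
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)

# Length extension due to fringing, then the physical patch length.
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264) /
                  ((eps_eff - 0.258) * (W / h + 0.8)))
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL
```

The resulting patch is a few centimetres on a side, which is why miniaturization efforts of the kind the paper pursues remain an active concern at these frequencies.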
Multi-document summarization is a procedure that aims at extracting information from multiple texts written about the same topic. The resulting summary report allows individual users to quickly familiarize themselves with the information contained in a large cluster of documents. Multi-document summarization creates information reports that are both concise and comprehensive, presenting information extracted from multiple sources algorithmically. “Merging Multi-Document Text Summaries: A Case Study”, by Shanmugasundaram Hariharan, proposes a domain-independent algorithm for merging text documents to obtain generic extractive summaries. The effect of parameters such as stemming and stop words on the performance of the algorithm was studied, and a sentence-position approach was used to improve the quality of the summarization.
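A sentence-position scoring scheme of the kind mentioned can be sketched as follows. The scoring weights, toy stop-word list and example document are all illustrative assumptions, not the paper’s algorithm:

```python
# Extractive summarization sketch: earlier sentences in news-style text
# tend to carry more information, so each sentence is scored by its
# position (plus a small content-word bonus after stop-word filtering)
# and the top-ranked sentences form the summary.

STOP_WORDS = {"the", "a", "an", "of", "in", "is", "are"}  # toy list

def summarize(sentences, n=2):
    """Pick the n highest-scoring sentences, preserving document order."""
    scores = []
    for pos, sent in enumerate(sentences):
        words = [w for w in sent.lower().split() if w not in STOP_WORDS]
        # Position weight decays with depth; content words add weight.
        score = 1.0 / (pos + 1) + 0.1 * len(words)
        scores.append((score, pos, sent))
    top = sorted(scores, reverse=True)[:n]
    return [sent for _, pos, sent in sorted(top, key=lambda t: t[1])]

doc = ["Soft sensors estimate process variables.",
       "They are widely used in industry.",
       "This sentence adds little new information.",
       "Neither does this one."]
summary = summarize(doc, n=2)
```

Merging summaries from multiple documents, as the paper does, additionally requires reconciling overlapping content across sources; the single-document scorer above only shows the position heuristic itself.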
--
Elizabeth Zacharias
Consulting Editor