Computer-related risks include aviation accidents, space flight disasters, insurance fraud, telephone outages, public transport accidents, defective medical devices, security vulnerabilities, and stock market outages. Much could be learned from analyzing these failures, yet certain problems recur again and again. To help find bugs, software engineers have developed inspection and testing methods. Test harnesses automatically subject software to thousands of tests. Research has uncovered correlations between bug-prone software modules and code complexity, test coverage, code churn, and even the structure of the organization that produced the modules. These correlations can be used to identify bug-prone software and test it more thoroughly.

Yet we seem unable to get rid of bugs. Fixing bugs manually is expensive, time-consuming, and unpleasant. Automatically repairing them might spare us the many causes of bugs, including misunderstandings, lack of time, carelessness, or plain old laziness. Automation might also produce bug fixes more quickly, especially when maintainers face an overwhelming number of bug reports and must triage them. Even if only a portion of bugs could be fixed automatically, doing so would be beneficial: it would yield better software sooner and at lower cost, and reduce user frustration and the losses that bugs cause.
In the past decade, a number of researchers have taken on automatic bug fixing, although the approach faces some fundamental limitations. First, automated program repair does not promise to fix just any bug. Only bugs confined to a single small location are fixable; bugs that require alterations at multiple locations are too hard, at least for now. Second, the technique requires a supply of test cases. Some of these test cases trigger the bug and fail; the others pass, representing desirable behavior that must be preserved. The automatic bug fixer must mutate the program so that the failing test cases stop failing while the passing test cases continue to pass. Note that specifications for the software are not provided; test cases are a substitute, which, as we well know, can show the presence of bugs but not their absence.
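The generate-and-validate scheme described above can be sketched in a few lines of Python. All names here (`buggy_max`, `repair`, the candidate list) are illustrative assumptions, not part of any real repair tool; a genuine system would mutate source code and search a far larger space of patches.

```python
# Minimal generate-and-validate repair sketch (illustrative, not a real tool).
# A "program" is a function; "patches" are candidate replacement functions.

def buggy_max(a, b):
    return a if a < b else b   # bug: comparison operator is inverted

# Test suite: a failing test exposes the bug; a passing test pins down
# behavior that any repair must preserve.
tests = [
    lambda f: f(2, 1) == 2,    # fails on buggy_max
    lambda f: f(1, 1) == 1,    # passes on buggy_max
]

# Search space: candidate mutations of the comparison operator.
candidates = [
    lambda a, b: a if a < b else b,
    lambda a, b: a if a > b else b,
    lambda a, b: a if a >= b else b,
]

def repair(tests, candidates):
    """Return the first candidate that passes every test, or None."""
    for cand in candidates:
        if all(check(cand) for check in tests):
            return cand
    return None

fixed = repair(tests, candidates)
assert fixed is not None
assert fixed(2, 1) == 2 and fixed(1, 1) == 1
```

The validation step is exactly the limitation noted above: the repaired function satisfies the tests, but nothing guarantees it is correct beyond them.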
B T Geetha, M V Srinath and V Perumal, in the first paper, “Optimized Scheduling Algorithm for Energy-Efficient Wireless Network Transmissions”, consider the problem of constructing utility-optimal scheduling algorithms in discrete stochastic networks where the communication links have time-varying qualities and the nodes are powered by finite-capacity energy storage devices but are capable of harvesting energy.
In the second paper, “Generating a Complete and Precise Back Index for E-Books”, Vibhooti Markandey describes the Back-Index-Tool, which generates the back index of a book in machine-readable format. The tool is able to position subject-indexing terms precisely in a semantic sense.
R K Meenakshi and D Arivazhagan, in their paper, “Risk Management in IT Sector: Opportunities and Challenges”, seek to show that risk can be minimized through careful handling of data and proper follow-up action. Two case studies are discussed.
In the last paper, “Image Enhancement Techniques in the Spatial Domain: An Overview,” Deepa Raj and Pushpa Mamoria highlight different techniques for enhancing gray-scale images and explain their importance in the spatial domain.
-- C R K Prasad
Consulting Editor