# I. Introduction

With the popularity of the web, networked computers are finding their way into a wide range of working environments. This new computing model has created demand for quickly developed, reliable, distributed software components that communicate with one another across an underlying networked, extensible infrastructure according to the requirements of different users. A distributed software component can be plugged into distributed applications and used for a specific purpose. Most developers intend to reuse, or only slightly modify, old but proven components, and building new systems from such distributed software components makes those systems more reliable. Even so, it is important for a developer to know the functionality of any distributed or compatible software component used in a system. The component design and requirement specification should clearly document the functional inputs and outputs with their conditions, and moreover the reliability as a percentage.

Software reliability is defined as the probability that a software system operates without failure for a specified time under specified operating conditions. In other words, by estimating or predicting the reliability [1] of a component, the quality of the software product can be estimated. Customer satisfaction depends directly on the quality of that software. The measure commonly used to describe software reliability is derived from the observed failure intensity, defined as the number of failures observed per unit time. Failure intensity is also a good measure for reflecting the user's perspective on software quality. As computer applications become more diverse and spread through almost every area of everyday life, reliability becomes a very important characteristic of software and component systems. A reliable component is the base of the system and part of every aspect of it, i.e., the client, the administrator, and the working environment. Producing a system with documented, estimated reliability [2] is a matter of cost and performance; it is therefore necessary to measure reliability before releasing any software. Once reliability reaches a threshold level, the software component can be released for further use. To this end, a number of models [3] have been proposed and continue to be developed. Software reliability modeling is a statistical estimation [4] method applied to failure data that are collected from, or simulated for, a software component, or for a system built by integrating components through the various composition approaches of software engineering. This can be done once component testing has been executed, so that failure data are available. Newly developed and modified models aim to make systems better and to help predict reliability more accurately.

The most important parameters of any software product are its level of quality, time of delivery, and final cost. Delivery time and cost should be quantitative and decided in advance, whereas quality attributes are difficult to define quantitatively. Reliability is one, and probably the most important, aspect of software quality. Software reliability relates directly to the operation and performance of a component rather than to its design. Therefore, software reliability is estimated by analyzing the observed failure data [5] of the component and applying the Goel-Okumoto model [6][7], rather than from the number of faults remaining in the component. Thus, estimating the reliability of a system is more useful than finding the number of remaining faults. The uncertainty involved in an estimate over a specific interval is expressed in terms of a confidence interval for the estimated parameters. This paper estimates the reliability of a component by applying the Goel-Okumoto (G-O) model to a set of failure data obtained by simulating real software applications. This should be done before any component is released. Reusing such a component enhances the overall reliability of the system and yields an accurate estimate of that reliability. The results show that the model provides a technical basis for improving software reliability as well as additional metrics for project evaluation, management, and scheduling the delivery of the newly developed system.
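To make the failure-intensity definition concrete, the short Octave sketch below (Octave is the language used for this paper's analysis programs) converts cumulative failure counts into per-day failure intensities. The counts here are hypothetical illustration values, not the study data.

```octave
% Failure intensity: number of failures observed per unit time (per day here).
cum_failures = [3 7 12 14 15];        % hypothetical cumulative failures, days 1-5
intensity = diff([0 cum_failures]);   % failures per day: 3 4 5 2 1
printf("failure intensity (failures/day):\n");
disp(intensity);
```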
# II. Goel-Okumoto Model: NHPP SRGM Exponential Model

Non-Homogeneous Poisson Process (NHPP) based software reliability growth models are generally classified into two groups. The first group contains models that use machine execution time or calendar time [8] as the unit of the fault detection/removal period; such models are called continuous time models. The second group contains models that use the number of test occasions/cases as the unit of the fault detection period; such models are called discrete time models [9], since the unit of the software fault detection period is countable. The Goel-Okumoto model, also known as the exponential NHPP model, is based on the following assumptions:

(a) All faults in a component are mutually independent from the failure detection point of view.
(b) The number of failures detected at any time is proportional to the current number of faults in the component.
(c) A fault is removed immediately after the corresponding failure occurs, and no new faults are introduced during fault removal.

These assumptions are captured by the differential Equation 1, where $m(t)$ is the expected number of component failures by time $t$, $a$ is the total fault content, i.e., the sum of the expected number of initial software faults and the faults introduced by time $t$, and $b$ is the failure detection rate per fault at time $t$:

$$\frac{dm(t)}{dt} = b\,[a - m(t)] \tag{1}$$

The mean value solution of this differential equation is given by Equation 2, where $t_n$ is the time of the $n$th failure occurrence:

$$m(t_n) = a\left(1 - e^{-b t_n}\right) \tag{2}$$

The failure intensity function is given by Equation 3:

$$\lambda(t) = a\,b\,e^{-b t} \tag{3}$$

# III. Estimation of Parameters

Different values of the $a$ and $b$ parameters reflect different assumptions about the software testing process. In this section, estimation equations for the interdependent $a$ and $b$ parameters, linked through a common parameter, are derived from a generalized class of models. The most common method for estimating the parameters is the Maximum Likelihood Estimation (MLE) method; MLE for a broad collection of software reliability models with grouped data is discussed in detail here. To estimate $a$ and $b$ for a sample of $n$ units, first obtain the likelihood function and take the natural logarithm of both sides. The resulting estimate of $a$ is given by Equation 4, where $y_n$ is the actual cumulative number of failures observed by time $t_n$:

$$a = \frac{y_n}{1 - e^{-b t_n}} \tag{4}$$

The parameter $a$ can thus be estimated with the MLE method from the number of failures in each interval. Suppose the observation interval $(0, t_k]$ is divided into the subintervals $(0, t_1], (t_1, t_2], \ldots, (t_{k-1}, t_k]$. Equation 5 determines the value of $b$:

$$\frac{y_n t_n e^{-b t_n}}{1 - e^{-b t_n}} = \sum_{k=1}^{n} \frac{(y_k - y_{k-1})\left(t_k e^{-b t_k} - t_{k-1} e^{-b t_{k-1}}\right)}{e^{-b t_{k-1}} - e^{-b t_k}} \tag{5}$$

The number of failures per subinterval [8] is recorded as $n_i$ $(i = 1, 2, \ldots, k)$, the number of failures in $(t_{i-1}, t_i]$. The parameters $a$ and $b$ are estimated with the iterative Newton-Raphson method, given by Equations 6, 7, and 8:

$$b = b_0 - \frac{f(b_0)}{f'(b_0)} \tag{6}$$

$$f(b) = \frac{y_n t_n e^{-b t_n}}{1 - e^{-b t_n}} - \sum_{k=1}^{n} \frac{(y_k - y_{k-1})\left(t_k e^{-b t_k} - t_{k-1} e^{-b t_{k-1}}\right)}{e^{-b t_{k-1}} - e^{-b t_k}} = 0 \tag{7}$$

$$f'(b) = -\sum_{k=1}^{n} \frac{(y_k - y_{k-1})\,(t_k - t_{k-1})^2\, e^{-b\,(t_k + t_{k-1})}}{\left(e^{-b t_{k-1}} - e^{-b t_k}\right)^2} \tag{8}$$
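As a concrete illustration of Equations 4 through 8, the following Octave sketch estimates $b$ by Newton-Raphson iteration and then $a$ from Equation 4, using the cumulative daily failure counts of Table 1 as $y_k$ with $t_k = k$ days. This is a sketch of the stated equations, not the authors' own program (described later as LSE-based), so the resulting estimates need not coincide exactly with those in Table 2.

```octave
% Newton-Raphson estimation of b (Equations 5-8), then a from Equation 4.
% y(k): cumulative failures by the end of day k (Table 1); t(k) = k days.
y  = [32 55 66 76 87 94 96 101 107 109 113 120 122 ...
      127 132 138 141 146 147 148 151 152 154 155 156];
t  = 1:numel(y);
n  = numel(y);
tp = [0 t(1:end-1)];     % t_{k-1}, with t_0 = 0
yp = [0 y(1:end-1)];     % y_{k-1}, with y_0 = 0

% f(b) and f'(b) from Equations 7 and 8, vectorized over the subintervals
f  = @(b) y(n)*t(n)*exp(-b*t(n))/(1 - exp(-b*t(n))) ...
     - sum((y - yp).*(t.*exp(-b*t) - tp.*exp(-b*tp)) ./ (exp(-b*tp) - exp(-b*t)));
fp = @(b) -sum((y - yp).*(t - tp).^2.*exp(-b*(t + tp)) ./ (exp(-b*tp) - exp(-b*t)).^2);

b = 0.1;                 % initial guess b0
for i = 1:100            % Equation 6, iterated until the update is negligible
  step = f(b)/fp(b);
  b = b - step;
  if abs(step) < 1e-10, break; end
end
a = y(n)/(1 - exp(-b*t(n)));          % Equation 4
printf("estimated a = %.2f, b = %.4f\n", a, b);
```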
# IV. Model Analysis and Results

## a) Data and Model Criteria

Once the analytical expression for the mean value function $m(t)$ has been derived, the model parameters in the mean value function are obtained, in this paper, with the help of a purpose-written Octave program based on the least squares estimation (LSE) method. Goel and Okumoto described failure detection as a non-homogeneous Poisson process with an exponentially decaying rate function; it is a simple non-homogeneous Poisson process model. Failure data from 25 days of testing were observed here to estimate reliability [10]. Table 1 lists the failures per day and the cumulative failures for those 25 days. The reliability and remaining-fault functions can be used to find the release date, or the additional testing time required to reach a releasable state. After simulation, the results of the 25 days of testing were observed; based on these data and the MLE method, the estimated values of the two parameters are given in Table 2. Each data set provides the cumulative number of faults at the end of each day, up to day 25. Fig. 1 plots the cumulative number of faults against the cumulative test time at the end of each day. The Phase 2 data set is given in Table 2. We developed an Octave program to perform the analysis and all the calculations for the LSE estimates. The parameter $a$ is the number of initial faults in the software, and the parameter $b$ is related to the failure detection rate during the testing process. The software reliability $R(s \mid t)$ is defined as the probability of failure-free operation of the complete software over a specified interval $(t, t+s)$ in a specified environment, as given by Equation 9. The interval estimation methods are illustrated by applying the results to the software failure data; the set of software errors analyzed here is taken from simulated data (at 1-day intervals).

$$R(s \mid t) = e^{-a\left(e^{-bt} - e^{-b(t+s)}\right)} \tag{9}$$

where $R(s \mid t)$ is the reliability of the component during the interval $(t, t+s)$.
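To connect Equation 9 with the release criteria discussed in the analysis and conclusion below, the Octave sketch here evaluates the fitted model at the end of testing, using the day-25 parameter estimates reported in Table 2. The printed reliabilities depend on the chosen mission length $s$; since they are computed directly from Equations 2, 3, and 9, they will not necessarily reproduce the tabulated figures, which come from the paper's own program.

```octave
% Sketch: evaluating the fitted G-O model at the end of testing (day 25),
% using the parameter estimates reported in Table 2 for day 25.
a = 141.13;  b = 0.1275;                   % Table 2, day 25
m      = @(t) a*(1 - exp(-b*t));           % mean value function (Equation 2)
lambda = @(t) a*b*exp(-b*t);               % failure intensity (Equation 3)
R      = @(s, t) exp(-(m(t + s) - m(t)));  % reliability over (t, t+s] (Equation 9)

remaining = a - m(25);                     % expected faults still in the component
printf("remaining faults  : %.1f\n", remaining);  % ~5.8, close to Table 2's value of 5
printf("failure intensity : %.4f failures/day\n", lambda(25));
printf("R(s|t=25) for s = 0.1, 0.5, 1 day: %.4f %.4f %.4f\n", ...
       R(0.1, 25), R(0.5, 25), R(1, 25));

% Release criterion from the conclusion: fewer than 10 estimated remaining
% faults; the 90% reliability threshold additionally depends on the mission
% length s chosen for R(s|t).
if remaining < 10
  printf("release criterion met (remaining faults < 10)\n");
end
```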
## b) Analysis

Fig. 1 shows the cumulative failures observed over the number of days of testing. It is readily seen that the number of faults observed per day decreases as testing proceeds. For example, the component can be released if its expected reliability exceeds the threshold value, i.e., 90.09778%, which is above the 90% mark.

![Figure 1: Number of Faults Observed wrt Number of Days of Testing](image-2.png)

![Figure 2: Remaining Faults wrt Number of Days of Testing](image-3.png)

![Figure 3](image-3.png)

![Figure 4: Comparison of Remaining Faults and Reliability wrt Number of Days of Testing](image-4.png)

![Figure 5: Failure Intensity wrt Number of Days of Testing](image-5.png)

Table 1: Failures observed per day and cumulative failures over 25 days of testing

| Days | Failures Observed | Cumulative Failures | Days | Failures Observed | Cumulative Failures |
|------|-------------------|---------------------|------|-------------------|---------------------|
| 1    | 32                | 32                  | 14   | 5                 | 127                 |
| 2    | 23                | 55                  | 15   | 5                 | 132                 |
| 3    | 11                | 66                  | 16   | 6                 | 138                 |
| 4    | 10                | 76                  | 17   | 3                 | 141                 |
| 5    | 11                | 87                  | 18   | 5                 | 146                 |
| 6    | 7                 | 94                  | 19   | 1                 | 147                 |
| 7    | 2                 | 96                  | 20   | 1                 | 148                 |
| 8    | 5                 | 101                 | 21   | 3                 | 151                 |
| 9    | 6                 | 107                 | 22   | 1                 | 152                 |
| 10   | 2                 | 109                 | 23   | 2                 | 154                 |
| 11   | 4                 | 113                 | 24   | 1                 | 155                 |
| 12   | 7                 | 120                 | 25   | 1                 | 156                 |
| 13   | 2                 | 122                 |      |                   |                     |

Table 2: Parameter estimates, remaining faults, reliability, and failure intensity by day of testing

| Day | a        | b      | Remaining Faults | Reliability (%) | Failure Intensity |
|-----|----------|--------|------------------|-----------------|-------------------|
| 15  | 138.38   | 0.1333 | 16               | 80.5            | 2.1844            |
| 16  | 133.71   | 0.1432 | 12               | 84.66           | 1.6769            |
| 17  | 141.25   | 0.1274 | 14               | 83.48           | 1.8162            |
| 18  | 139.72   | 0.1304 | 12               | 85.91           | 1.5286            |
| 19  | 138.85   | 0.1322 | 10               | 87.86           | 1.3030            |
| 20  | 140.34   | 0.1290 | 9                | 88.71           | 1.2052            |
| 21  | 140.10   | 0.1295 | 8                | 90.09           | 1.0495            |
| 22  | 141.91   | 0.1255 | 8                | 90.60           | 0.9929            |
| 23  | 142.03   | 0.1252 | 7                | 91.62           | 0.8801            |
| 24  | 142.3154 | 0.1246 | 6                | 92.48           | 0.7869            |
| 25  | 141.13   | 0.1275 | 5                | 93.71           | 0.6538            |

# V. Conclusion

This work has proposed a method for estimating the reliability of a reusable architecture used to build software, based on the Goel-Okumoto Software Reliability Growth Model. Grouped data from an exponential distribution are used to illustrate the parameter estimation problem. The measured reliability determines the quality of the software, and the level of reliability determines its delivery date. Reliability increases with testing time, but it never reaches 100%, even when the number of observed faults is close to zero. Under the criteria in the above analysis, the component can be released once the best estimate of the remaining faults is below 10. Integrating more reliable components makes the overall system more reliable. This solution will help developers of third-party components predict when a component can be released with the specified, documented reliability.

# References

* H. Pham, System Software Reliability, Springer Series in Reliability Engineering, London: Springer, 2006.
* X. Teng and H. Pham, "A New Methodology for Predicting Software Reliability in the Random Field Environments," IEEE Transactions on Reliability, vol. 55, no. 3, 2006.
* L. Pham and H. Pham, "Software Reliability Models with Time-dependent Hazard Function Based on Bayesian Approach," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 30, no. 1, 2000.
* H. Pham, "Software Reliability Assessment: Imperfect Debugging and Multiple Failure Types in Software Development," EG&G-RAAM-10737, Idaho National Engineering Laboratory, 1993.
* A. L. Goel and K. Okumoto, "Time-dependent Fault-detection Rate Model for Software and Other Performance Measures," IEEE Transactions on Reliability, vol. 28, 1979.
* S. Yamada and S. Osaki, "Software Reliability Growth Modeling: Models and Applications," IEEE Transactions on Software Engineering, vol. 11, no. 12, 1985.
* R. Satya Prasad, Bandla Srinivasa Rao, and R. R. L. Kantham, "Assessing Software Reliability using Inter Failures Time Data," International Journal of Computer Applications, vol. 18, no. 7, March 2011.
* X. Zhang, X. Teng, and H. Pham, "Considering Fault Removal Efficiency in Software Reliability Assessment," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 33, no. 1, 2003.
* H. Pham and X. Zhang, "An NHPP Software Reliability Model and its Comparison," International Journal of Reliability, Quality and Safety Engineering, vol. 4, no. 3, 1997.