Friday, June 14, 2019

Credit Risk Analysis - Application of Logistic Regression Essay

1. The scales of the different variables are sorted out: most of the variables are set as nominal, which, however, is not correct. Of the 20 independent variables, 7 are treated as scale variables, 4 are labeled ordinal, and the rest are considered nominal.

2. In applying binary logistic regression, the Forward LR method is used to run the data, because this method enters the variables one at a time and, at the final step, presents the most statistically significant and important variables, which are helpful in the analysis.

3. The Hosmer and Lemeshow test is selected to examine the agreement between the observed values and the expected values.

With the help of SPSS, the following tables were generated. Because the Forward LR method is used and it takes 11 steps, the values for all of the preceding 10 steps have been omitted from the tables in order to keep the report concise; only the values pertaining to step 11 are used in the analysis. All the tables and their interpretations are presented below.

Classification Table(a,b)

                                    Predicted
                                    CreditRisk         Percentage
Observed                            Bad      Good      Correct
Step 0   CreditRisk         Bad     0        300       .0
                            Good    0        700       100.0
         Overall Percentage                            70.0
a. Constant is included in the model.
b. The cut value is .500

The 2 x 2 table presented above tallies the incorrect and correct classifications for the constant-only (null) model. The rows represent the actual (observed) values of the dependent variable, whereas the columns represent the predicted values. In a perfect model, all cases lie on the diagonal and the overall percentage correct is 100%. If the logistic model is homoscedastic, the percentage correct will be nearly the same for both rows. That phenomenon is not found here: the model predicts all cases as Good, and no Bad cases are predicted at all.
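The step 0 figures can be reproduced by hand: in a constant-only logistic model, every case is assigned to the modal category, and the intercept is simply the log-odds of Good versus Bad. A minimal sketch using only the 300/700 counts from the classification table:

```python
import math

# Observed counts from the step 0 classification table
n_bad, n_good = 300, 700
n = n_bad + n_good

# Constant-only ("null") logistic model: the intercept is the
# log-odds of the modal category (Good).
b0 = math.log(n_good / n_bad)            # B      ~ .847
se = math.sqrt(1 / n_good + 1 / n_bad)   # S.E.   ~ .069
wald = (b0 / se) ** 2                    # Wald   ~ 150.762
odds = math.exp(b0)                      # Exp(B) ~ 2.333

# With no predictors, every case is predicted as the modal
# category, so the overall percentage correct is the Good share.
overall_pct = 100 * n_good / n           # 70.0

print(round(b0, 3), round(se, 3), round(wald, 3),
      round(odds, 3), overall_pct)
```

These computed values match the "Variables in the Equation" output for step 0 discussed below, which confirms that the null model contains no information beyond the marginal class frequencies.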
Still, the overall percentage correct is 70%, which is moderately good. The researcher should note that simply predicting the most frequent category (Good) for all cases produces that same 70% correct.

Variables in the Equation

                    B       S.E.    Wald       df    Sig.    Exp(B)
Step 0   Constant   .847    .069    150.762    1     .000    2.333

In the SPSS results above, the coefficients for all the independent variables are 0, since only the constant is included in the model. The highly significant Wald statistic for the constant indicates that the null hypothesis should be rejected.

Omnibus Tests of Model Coefficients

                    Chi-square    df    Sig.
Step 11   Step      5.276         1     .022
          Block     300.781       31    .000
          Model     300.781       31    .000

The purpose of the chi-square goodness-of-fit test is to investigate whether the step taken is justified; in this case, the step is from the constant-only model to the model with the independent variables. The step of adding a variable or variables is justified if the sig. value is less than 0.05. If the step were to exclude variables from the model equation, it would be justified by taking the cutoff point as greater than 0.10. Since the sig. values are less than 0.05, the null hypothesis can be rejected and the model is statistically significant.

Model Summary

Step    -2 Log likelihood    Cox & Snell R Square    Nagelkerke R Square
11      920.948a             .260                    .368
a. Estimation terminated at iteration number 5 because parameter …
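The two pseudo-R-square values in the Model Summary can be reproduced from the reported -2 log likelihood and the omnibus model chi-square. A minimal sketch, assuming the sample size of 1,000 (300 Bad + 700 Good) from the classification table:

```python
import math

n = 1000                  # sample size (300 Bad + 700 Good)
neg2ll_model = 920.948    # -2 Log likelihood at step 11
chi2_model = 300.781      # Omnibus "Model" chi-square

# The model chi-square is the drop in -2LL relative to the
# constant-only model, so the null -2LL can be recovered:
neg2ll_null = neg2ll_model + chi2_model   # ~ 1221.729

# Cox & Snell R-square: 1 - exp(-chi2 / n)
r2_cs = 1 - math.exp(-chi2_model / n)     # ~ .260

# Nagelkerke R-square rescales Cox & Snell by its maximum
# attainable value, so it can reach 1 for a perfect model.
r2_max = 1 - math.exp(-neg2ll_null / n)
r2_nag = r2_cs / r2_max                   # ~ .368

print(round(r2_cs, 3), round(r2_nag, 3))
```

Both computed values agree with the SPSS output to three decimal places, which is a useful internal consistency check on the reported tables.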
