
Strategy for building a “good” predictive model

By Ian Morton. Ian worked in credit risk for big banks for a number of years, where he learnt how to (and how not to) build “good” statistical models in the form of scorecards using the SAS language.

Read the original post and similar articles here. I think Ian's list below is a good starting point. I would add a few steps, such as deployment and maintenance at the end, and gathering requirements, understanding the goal and success metrics at the top.

Initial investigations
1. Look at the data dictionary to see which data is available

2. What is the outcome? Is it yes/no? Is it continuous?
3. Decide upon the model required (e.g. logistic regression for a yes/no outcome); see the sketch below
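
Ian's original work was in SAS; purely as an illustration, steps 2 and 3 might look something like the following in Python/pandas (the file name applications.csv and the outcome column default_flag are hypothetical, not from the original post):

    import pandas as pd

    # Hypothetical file and column names, for illustration only
    df = pd.read_csv("applications.csv")

    # Step 2: inspect the outcome to see whether it is yes/no or continuous
    print(df["default_flag"].dtype)           # a float dtype suggests a continuous outcome
    print(df["default_flag"].value_counts())  # two levels suggest a yes/no outcome

    # Step 3: a yes/no outcome points towards logistic regression;
    # a continuous outcome would point towards ordinary linear regression instead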

Getting the data ready
4. Cross-tabulations on categorical variables to understand the coding and volumes

5. Summary statistics to understand the distribution of the continuous variables
6. Ask questions about data quality. For any variables with quality issues, decide whether to:

  • remove them from any potential models, or
  • think about imputation, or
  • obtain accurate data

7. Convert continuous variables into categorical variables by binning them (see the sketch below)
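
Continuing the illustrative Python/pandas sketch (not Ian's SAS code; the column names income, employment_status and default_flag are assumptions), steps 4 to 7 might look like this:

    import pandas as pd

    df = pd.read_csv("applications.csv")  # hypothetical file, as above

    # Step 4: cross-tabulate a categorical variable against the outcome
    # to understand the coding and the volumes in each cell
    print(pd.crosstab(df["employment_status"], df["default_flag"], margins=True))

    # Step 5: summary statistics for a continuous variable
    print(df["income"].describe())

    # Step 6: a quick data-quality check - how much is missing per variable?
    print(df.isna().mean().sort_values(ascending=False))

    # Step 7: convert a continuous variable into a categorical one by binning,
    # here into quintiles (the grouping is typically fine-tuned later, see step 17)
    df["income_band"] = pd.qcut(df["income"], q=5, duplicates="drop")
    print(df["income_band"].value_counts())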

Modelling

8. Check for multicollinearity/correlation between variables (e.g. variance inflation factors or correlation tests)

9. Check for interactions
10. Choose the type of variable-selection approach for the logistic regression (e.g. forward, backward, stepwise)
11. Choose the baseline attribute for each categorical variable
12. Create a purely random variable – it mustn't step into the model; something is wrong if it does
13. Split the dataset into two parts (ratio 80%/20%)

  • using random selection without replacement
  • the larger sample is the build dataset
  • the smaller sample is the test dataset
14. Put all variables from the build dataset (including interactions and the random variable) into the model and run it (see the sketch after this list):
  • Check the odds ratios – do they make sense? and
  • Check the coefficients – do they make sense?
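
A hedged sketch of steps 8 and 12 to 14 in Python (statsmodels and scikit-learn rather than SAS; column names as in the earlier sketches, and default_flag assumed to be coded 0/1):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("applications.csv")  # hypothetical file, as above

    # Step 8: variance inflation factors for the numeric candidate predictors
    X_num = sm.add_constant(df[["income", "age"]].dropna())  # assumed numeric columns
    for i, col in enumerate(X_num.columns):
        if col != "const":
            print(col, variance_inflation_factor(X_num.values, i))

    # Step 12: a purely random variable that must not look significant in the model
    rng = np.random.default_rng(42)
    df["random_noise"] = rng.normal(size=len(df))

    # Step 13: 80%/20% split into build and test samples, sampled without replacement
    build, test = train_test_split(df, test_size=0.2, random_state=42)

    # Step 14: fit the logistic model on the build sample; categorical variables
    # are one-hot encoded with a baseline level dropped (step 11)
    predictors = pd.get_dummies(
        build[["income", "age", "employment_status", "random_noise"]],
        drop_first=True, dtype=float,
    ).fillna(0)
    X = sm.add_constant(predictors)
    y = build["default_flag"]
    model = sm.Logit(y, X).fit()

    # Check the coefficients and the odds ratios - do they make sense?
    print(model.summary())
    print(np.exp(model.params))  # odds ratios
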
Check the model

15. Do diagnostic checks and plots of the fit (e.g. Somers' D, residual plots, etc.); see the sketch below

16. Put all variables from the test dataset (including interactions and the random variable) into a new model and run it

  • Are the coefficients the same as in the build model? and
  • Are the odds ratios the same as in the build model?
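
Under the same assumptions, a sketch of steps 15 and 16. For a binary outcome, Somers' D can be obtained from the ROC area under the curve as 2*AUC - 1; the variables model, X, y, predictors and test are those created in the previous sketch:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    # Step 15: Somers' D on the build sample (2 * AUC - 1)
    build_auc = roc_auc_score(y, model.predict(X))
    print("Somers' D (build):", 2 * build_auc - 1)

    # Step 16: refit the same specification on the test sample
    test_predictors = pd.get_dummies(
        test[["income", "age", "employment_status", "random_noise"]],
        drop_first=True, dtype=float,
    ).fillna(0).reindex(columns=predictors.columns, fill_value=0)
    X_test = sm.add_constant(test_predictors)
    test_model = sm.Logit(test["default_flag"], X_test).fit()

    # Are the coefficients and odds ratios broadly the same as in the build model?
    print(pd.DataFrame({
        "build_coef": model.params,
        "test_coef": test_model.params,
        "build_odds_ratio": np.exp(model.params),
        "test_odds_ratio": np.exp(test_model.params),
    }))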

Start again

17. Go back to the start: fine-tune the grouping of the data, and put variables in or take them out.


Comment by Chandrasekhara S. "C.S." Ganti on May 21, 2013 at 7:21am

Mirko K./ Vince G.

Thanks. An excellent post for those of us who did not know about Ian Morton's work.

We see in LinkedIn / Data Science Group posts debates on causation vs. correlation: whether i) any cause-effect relationships matter at all, and whether ii) the preponderance of observed association data in terabytes/petabytes indeed explains the process away and is hence sufficient.

C.S

Comment by Ian Morton on May 16, 2013 at 3:21am

Mirko,

Thanks for your post. In relation to your suggested additions ("deployment, maintenance at the end, and gathering requirements, understanding goal and success metrics at the top"): yes, you are of course correct. I see your additions as the important wraparound to my suggestions.

Ian
