# Posts Tagged R

### Upper bound of the Information Value statistic

Posted by sqlpete in scorecards, stats on February 12, 2017

### Information Value

I've worked with the Information Value (IV) statistic for years, but it has always irked me that I don't know its derivation.

It’s used liberally throughout credit risk work, but the background to its invention seems somewhat hazy. Clearly it’s related to Shannon Entropy, via the $p \log p$ construct. In Naeem Siddiqi’s well-known book Credit Risk Scorecards, he writes *“Information Value, […] comes from information theory”* and references Kullback’s 1959 book Information Theory and Statistics, which I don’t have. Someone else suggested that it stems from the work of I.J. Good, but I can’t find an explicit definition in any of his papers I’ve managed to look at. (I bought his book Good Thinking, about the foundations of probability and statistical inference, but it’s *waaaay* too complex for me!)

The Information Value (IV) is defined as:

$$\mathrm{IV} = \sum_{i=1}^{k} \left(\frac{g_i}{n_G} - \frac{b_i}{n_B}\right) \ln\!\left(\frac{g_i/n_G}{b_i/n_B}\right),$$

where $g_i$ is the number of ‘goods’ in category $i$, $b_i$ is the number of ‘bads’, and $n_G$ and $n_B$ are the totals of goods and bads respectively.
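Incidentally, written this way the IV is exactly the symmetrised Kullback–Leibler divergence (Jeffreys’ J-divergence) between the distribution of goods and the distribution of bads, which at least fits the Kullback reference. As a quick sanity check of the formula, here’s a minimal R sketch (the helper name `iv` and the toy counts are mine, not from anyone’s book):

```
# Minimal implementation of the IV definition above.
iv <- function(g, b) {
  pg <- g / sum(g)   # share of goods falling in each category
  pb <- b / sum(b)   # share of bads falling in each category
  sum((pg - pb) * log(pg / pb))
}

iv(g = c(80, 15, 5), b = c(40, 30, 30))   # toy 3-category table: ~0.83
```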

In his book, Siddiqi gives the following rule of thumb regarding the value of IV:

| IV | Interpretation |
| --- | --- |
| < 0.02 | unpredictive |
| 0.02 to 0.1 | weak |
| 0.1 to 0.3 | medium |
| 0.3 to 0.5 | strong |
| 0.5+ | “should be checked for over-predicting” |

For an independent variable with an IV over 0.5, the worry is that it’s too closely related to the dependent variable (effectively a proxy for the outcome), and you might want to consider leaving it out. (If you build a scorecard that has a bureau score as one of your variables, then you’ll almost certainly see this.)

[See these two links for more about Information Value, and an example or two of its use: All about “Information Value” and Information Value (IV) and Weight of Evidence (WOE).]

### Upper Bound

The lower bound of the IV is fairly obviously zero: if $g_i/n_G = b_i/n_B$ for all the categories, then each difference is zero, so each term of the sum is zero times $\ln(1)$, which is also zero. But what about the upper bound?

I’ve put together this small PDF document: Upper bound of the Information Value (IV), in which (I think!) I show that the upper bound is very close to $\ln(n_G) + \ln(n_B)$, where $n_G$ is the total number of goods, and $n_B$ is the total number of bads.
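For a rough intuition (my own sketch, which may differ from the argument in the PDF): the IV is pushed towards its maximum by a two-category table with near-perfect separation, say $g = (n_G - 1,\ 1)$ and $b = (1,\ n_B - 1)$. For large $n_G$ and $n_B$, the first category contributes approximately $1 \cdot \ln(n_B)$ and the second approximately $(-1) \cdot \ln(1/n_G)$, so

$$\mathrm{IV} \approx \ln(n_G) + \ln(n_B).$$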

Of course, it’s wise to at least check the result with some code — so in R, let’s create a million tables at random, and look at the actual figures that are produced:

```
Z <- 1000000;                         # number of iterations
IV <- rep(0, Z);                      # array of IVs
lGB <- rep(0, Z);                     # array of (log(n_g) + log(n_b))
for (i in 1:Z)
{
  k <- sample(2:20, 1);               # number of categories
  g <- sample(1:100, k, replace=T);   # good count per category
  b <- sample(1:100, k, replace=T);   # bad count per category
  ng <- sum(g);
  nb <- sum(b);
  IV[i] <- sum( ((g/ng)-(b/nb)) * log((g/ng)/(b/nb)) );
  lGB[i] <- log(ng) + log(nb);
}
plot(IV, lGB, xlab="IV", ylab="log(N_G)+log(N_B)",
     main="IV vs log(N_G)+log(N_B)", pch=19, col="blue", cex=0.5);
abline(a=0, b=1, col="red", lwd=2);   # draw the line x=y
```

As you can see, there are no points below the red ‘x=y’ line; in other words, the IV is always less than $\ln(n_G) + \ln(n_B)$. There are a few points that are close; the closest is:

```
min(lGB-IV)
[1] 0.2161227
```

I know that $\ln(n_G) + \ln(n_B)$ is not the best possible upper bound — a closer, but more complex answer is reasonably obvious from the document — but *“log(number of goods) plus log(number of bads)”* is (a) memorable, and (b) close enough for me!

### Floats may not look distinct

The temporary table `#Data` contains the following:

```
SELECT * FROM #Data
GO
value
-------
123.456
123.456
123.456
(3 row(s) affected)
```

Three copies of the same number, right? However:

```
SELECT DISTINCT value FROM #Data
GO
value
-------
123.456
123.456
123.456
(3 row(s) affected)
```

We have the exact same result set. How can this be?

It’s because what’s being *displayed* isn’t necessarily what’s *stored internally*. This should make it clearer:

```
SELECT remainder = (value - 123.456) FROM #Data
GO
remainder
----------------------
9.9475983006414E-14
1.4210854715202E-14
0
(3 row(s) affected)
```

The numbers aren’t all **123.456** exactly; the data is in floating-point format, and two of the values were ever-so-slightly larger. The lesson is: be very careful when using DISTINCT, GROUP BY, or aggregate functions on columns declared as type `float`.
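The same effect is easy to reproduce in R (a sketch of my own: `2^-46` is one unit in the last place for doubles near 123.456, which is exactly the smaller non-zero remainder in the SQL output above):

```
ulp <- 2^-46                      # one unit in the last place near 123.456
x <- 123.456 + c(0, 1, 7) * ulp   # three distinct doubles
x                                 # all three print as 123.456
unique(x)                         # still length 3: the values really differ
x - 123.456                       # the tiny remainders, as in the SQL output
```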

Some other observations:

- The above will probably feel familiar to anyone who’s done much text wrangling in SQL: strings can look identical to the eye, but different to SQL Server’s processing engine; you end up having to examine every character, finding and eliminating extraneous tabs (ASCII code 9), carriage returns (ASCII code 13), line-feeds (ASCII code 10), or even weirder characters. (There’s a short R sketch of this after the list.)
- If your requirement warrants it, I can thoroughly recommend the GNU Multiple Precision Arithmetic Library, which stores numbers to arbitrary precision. It’s available as libraries for C/C++, and as the R package gmp:

```
# In R:
> choose(200,50); # This is 200! / (150! 50!)
[1] 4.538584e+47
> library(gmp);
Attaching package: ‘gmp’
> chooseZ(200,50);
Big Integer ('bigz') :
[1] 453858377923246061067441390280868162761998660528
# Dividing numbers:
> as.bigz(123456789012345678901234567890) / as.bigz(9876543210)
Big Rational ('bigq') :
[1] 61728394506172838938859798528 / 4938271605
# ^^ the result is stored as a rational, in canonical form.
```
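On the first point, here’s a minimal R illustration of hunting for hidden characters (my own example, not from the original post):

```
s1 <- "123.456"
s2 <- "123.456\r"   # same text plus a trailing carriage return (ASCII 13)
s1 == s2            # FALSE: the strings differ by an invisible character
utf8ToInt(s2)       # 49 50 51 46 52 53 54 13 -- the stray CR is exposed
```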
