Security Through Obscurity

In my previous post, I wrote about GUIDs (aka UUIDs), marvellous datatypes that allow you to uniquely identify a piece of information. Because the probability of randomly generating two that are the same is so small, there’s a sense that merely possessing one as a key to a piece of data means the data is somehow secured. This is false, in two senses — the first sense not so critical; the second, very critical.

Before we get to those, let’s look at how GUIDs might be used. I’ll use URLs / webpages as an example, because it’s a common application.

Let’s say I’ve written my own blogging system. The text of each article is stored in a database table, and the primary key of the table uniquely identifies my blog posts. When I list my blog posts so people can click on them, the URL might look like this:

http://www.myblog.com/blogpost.php?ArticleID=17

The unique identifier — in this example, the number 17 — forms part of the URL, and someone can look at all my blog posts merely by changing the number after the equals sign. It’s ok, it’s only a blog; the worst they can do is put in a number of a post I haven’t published yet. Depending on how I’ve written my blogging engine, they’ll either see an unfinished post, or (preferably) a message saying there’s currently no live article with that id.
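
For illustration, the query behind that page probably boils down to something like this rough sketch (written here in T-SQL; the table and column names are invented for the example):

DECLARE @ArticleID INT = 17  -- value taken straight from the query string

SELECT ArticleID, Title, Body
	FROM dbo.BlogPost
	WHERE ArticleID = @ArticleID
		AND IsPublished = 1  -- drafts are filtered out, so a guessed ID can only ever reveal live posts
GO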

In the bad old days of internet website development, lesser developers used integer IDs to refer to data more sensitive than a blog post:

http://www.mylocaldoctor.uk.com/patient.asp?PatientID=118

(Of course, I’m making these URLs up!) The website might ask you to log into some part of the site via a secure method, but once you were in, you’d be able to view the records of anyone on the system! Clearly a terrible thing, but this type of security hole has always existed; here’s an example from a few years ago, concerning Apple’s password reset process.

But it’s ok, we know about GUIDs now! GUIDs to the rescue! If the URL looks like this:

http://www.mylocaldoctor.uk.com/patient.asp?PatientUID=C2DFE25B-6C1B-46B3-9497-DA45EF76D994

, and the website retrieves each patient’s private information via a GUID, then we’re perfectly safe, no?

No, not safe (sense 1)

In the sense that a GUID is ‘hard for a human to guess’, yes, you’re pretty safe. No-one is going to randomly type in some digits and hit upon a valid GUID. BUT:

1. What if it’s not someone typing? What if it’s a piece of software making thousands of guesses per second? What if it’s a distributed botnet, millions of pieces of software each making thousands of guesses per second?* And your site has millions of users? The probability of making at least one correct guess would not be negligible.

[* Presuming your server can cope with all those requests…]

2. What if your GUID isn’t all that random? There are people out there who, given enough examples, can derive information about your server’s random number generator, and make accurate guesses about historical and future GUIDs. The situation is worse still if you’ve used sequential GUIDs; and potentially catastrophic if the developer who’s written the code hasn’t really understood the point of them (see previous post).

Point 1 can be addressed by putting checks in place for incorrect guesses. Modern server set-ups will allow you to block IP addresses after unusual patterns of requests have been detected.

Point 2 is harder to address: how truly random is the code that generates your GUIDs? In practice, you just don’t know. Random number generation is an entire academic discipline in its own right; you could devote the rest of your life to assessing the various generators!

But the above doesn’t matter because…

No, not safe (sense 2)

This is the crux of the matter: obscurity (hiding something) is not security. All it does is make something harder to locate, not impossible. Just because you can’t guess a GUID, it doesn’t mean there aren’t other ways of obtaining them, e.g.:

  • Hacking
  • Leaking (e.g. via accidental email forwarding)
  • Social engineering
  • Mistakes (e.g. devs ‘temporarily’ storing them in webpage HTML)

The point is that proper security needs to be applied on top, in all cases. Where sensitive information is concerned, people should be logging in securely, with as robust a system as it’s possible to use. So even if a bad person stumbles across information (e.g. GUIDs) they shouldn’t have, they can’t use it, because they don’t have the requisite access.

In conclusion

To sum up, using a GUID on its own is nowhere near adequate security; there’s much more to be done. A final tip: given that there’s no such thing as “100% secure”, your goal should be to record all user activity on your site / app, and dive into it regularly to check for breaches or unusual patterns. As a bonus, you get to see how users are really using your software — I promise there’ll be some surprises in there!


GUIDs

If you already know what GUIDs are, feel free to skip ahead to the part where I talk about using them in databases.

GUIDs (Globally Unique Identifiers, aka UUIDs) are simply a string of 32 (essentially random) hexadecimal digits (that is, the characters 0-9 and A-F), separated into five groups by dashes.

Here’s a GUID:

C2DFE25B-6C1B-46B3-9497-DA45EF76D994

All modern languages are able to generate them; here’s how I generated it in SQL:

SELECT NEWID()
GO

------------------------------------
C2DFE25B-6C1B-46B3-9497-DA45EF76D994

(1 row(s) affected)

A GUID is simply a big random number, presented in a human-readable form. How big is ‘big’? With 32 hex digits, a GUID can take any of 16^32 = 2^128 values. (2^128 is approximately 3.4 x 10^38.)

GUIDs are big. They’re so big that you could label every atom in the observable universe using just 3 GUIDs. In fact, it’d be massive overkill: 3 GUIDs have a potential 2^384 values between them, which is approximately 3.94 x 10^115; the number of atoms in the observable universe is estimated at around 10^82, many orders of magnitude fewer.

Because GUIDs can take such an enormous range of values, the chances of generating a duplicate are minuscule. Quote:

“In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%.” (wikipedia)

The ‘U’ in GUID basically means ‘unique for all practical purposes you’re likely to ever be involved with’ (unless you work for CERN, in which case I take it back).

So, that’s great: we have this construct that for all intents and purposes is unique (and I’ll treat it as such from here on), and we can generate one any time we want one. But how are they used?

Usage

The most common usage of GUIDs is as keys for referring to other pieces of information, especially a block of structured information. For example, when I request a customer’s credit file, there’s a GUID, right near the top of the file. If I need to refer to that credit file again (whether inside my organisation, or with the issuing bureau), I can refer to it by the GUID, and we all know exactly which file I mean — not just the customer/address it refers to, but the data as it stood at that point.

In databases

Now, database tables need a primary key to identify each row – and, by definition, the value of the key has to be unique. So it would seem a natural thing to want to have a GUID as a primary key. Even better: not only will we ensure that every row in our table will be unique, but every row in every table can be uniquely identified, in every database in the world! And you don’t even need to request a GUID from your database server when you create the data for a row, you can pre-generate primary keys in your C# code, and use them before they ever need to be stored on the server!

Sounds too good to be true, so what’s the catch?

The catch

First off, most developers, analysts (and even DBAs) talk about ‘primary keys’ when they mean clustering keys – often, they’re the same piece of information, but they absolutely don’t have to be. The primary key is the piece of data that uniquely identifies a row in a table. The clustering key is the piece of data that determines the order of the data when it’s stored (on disk). More often than not, a straightforward incrementing integer (1,2,3…) can do the job of both, but it’s an informed choice that the database developer should be making.
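
To make the distinction concrete, here’s a sketch (the table and column names are made up for the example) where the two keys are deliberately different: a GUID acts as the primary key, while a plain incrementing integer acts as the clustering key:

CREATE TABLE dbo.PersonDemo
(
	PersonSeq INT IDENTITY(1,1) NOT NULL  -- clustering key: always increasing, so new rows go at the end
	,PersonUID UNIQUEIDENTIFIER NOT NULL
		CONSTRAINT DF_PersonDemo_PersonUID DEFAULT (NEWID())
	,FullName VARCHAR(255) NOT NULL
	,CONSTRAINT PK_PersonDemo
		PRIMARY KEY NONCLUSTERED (PersonUID)  -- primary key: uniquely identifies the row
)
GO

CREATE UNIQUE CLUSTERED INDEX IX_PersonDemo_PersonSeq
	ON dbo.PersonDemo(PersonSeq)
GO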

When the clustering key is an incrementing integer, organising the data on disk is easy: each new row goes in the next available slot. But when it’s (effectively) a random number, where does it go? The new row has to be slotted in amongst the existing data; worst case, the server is splitting pages and re-organising the data on disk every few INSERTs. This is really inefficient, and causes unnecessary stress on your server.

Internally in SQL Server, GUIDs take up 16 bytes of space, compared to the 4 bytes of an INT, or 8 bytes of a BIGINT. That’s not a major issue, unless you have lots of indexes on your table: indexes on tables automatically contain the clustering key, so with a GUID clustering key, every single index defined on that table will also contain the GUID. Potentially lots of valuable space used up, if you’re not careful.

Ok, let’s list some bad points about GUIDs as clustering keys:

  • They cause inefficiencies under the hood: the server can’t make its usual good guesses about where to store data. NB: There is such a thing as a SEQUENTIAL GUID (Info here at MSDN), which lessens the impact – personally, I still wouldn’t bother.
  • They take up four times the space of a traditional INT, which could be a problem if you have lots of indexes.
  • Table JOINs are slower; SQL Server is optimised for joining tables together via simple integers.

There’s another (very important) reason not to use them that people tend to overlook: it makes debugging and tracking down errors incredibly painful! Incrementing numbers are intuitive, easy to memorise (if they’re small enough), and easy to compare (a row with a bigger ID was created after one with a smaller ID)… but GUIDs are just a ‘blob’ of data; there’s nothing intuitive about them.

How to use GUIDs, pain-free

It’s simple: add a GUID as a normal column and index it!

ALTER TABLE dbo.Person
	ADD PersonUID UNIQUEIDENTIFIER NULL
GO

-- ...UPDATE the table to fill PersonUID here ...

CREATE NONCLUSTERED INDEX IX_Person_PersonUID
	ON dbo.Person(PersonUID)
GO

That’s as complex as it needs to be.
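
For completeness, the UPDATE step I glossed over, and a typical lookup by GUID, might look something like this (just one possible approach, assuming the dbo.Person table above):

UPDATE dbo.Person
	SET PersonUID = NEWID()  -- give every existing row its own GUID
	WHERE PersonUID IS NULL
GO

DECLARE @PersonUID UNIQUEIDENTIFIER = 'C2DFE25B-6C1B-46B3-9497-DA45EF76D994'

SELECT *
	FROM dbo.Person
	WHERE PersonUID = @PersonUID  -- the nonclustered index makes this a cheap seek
GO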

Sad note: In the past, I’ve had developers add GUIDs that looked like this:

00000000-0000-0000-0000-000000000001
00000000-0000-0000-0000-000000000002
00000000-0000-0000-0000-000000000003
...

, thus completely missing the point of using a GUID in the first place. If you’re not auto-generating the GUIDs in-database, make sure to check devs aren’t putting daft data in. (Obviously something you should do anyway!)
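
A quick sanity check along these lines (a sketch, using the Person table from earlier) will flag the obviously daft ones:

SELECT PersonUID
	FROM dbo.Person
	WHERE CAST(PersonUID AS CHAR(36)) LIKE '00000000-0000-0000-%'  -- suspiciously un-random
GO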


Finally, we have our highly unique piece of information, stored in the best way possible. So now we can use that in a URL on a public-facing website to reference people’s data securely, no? Sorry, that’s not quite right — a little issue of this thing called Security Through Obscurity, which I’ll write about next time.


Using median rather than mean

Analysts in consumer finance are used to dealing with large sets of incoming customer data, and are often asked to provide summary statistics to show whether application quality is changing over time: Are credit scores going down? What’s the average monthly income? What’s the average age of the customers applying this quarter, compared to last?

Early on in the process, the data is likely to be messy, unsanitised, and potentially chock-full of errors. (Once the data has been processed to the point of considering credit-worthiness, then it’s all clean and perfect, yeah..?) Therefore, you have to be careful when you report on this data, as it’s easy to get nonsense figures that will send your marketing and risk people off in the wrong direction.

Two really common errors seen in numerical application data:

1. Gross yearly salary entered into the net monthly income field; and it’s not always easy to spot which one was meant: if it says ‘£50000’, then it’s very likely to be an annual figure; but what about ‘£7000’? If that’s monthly, it’s a really good wage; but I can assure you, people earning that much are still applying for credit.

2. Dates of birth: if unknown, it’s common to see ‘1st January 1900’ and similar. So when you convert it to age, the customer is over 100.

Also, if you happen to get credit scores with your data, you have to watch out for magic numbers like 9999 – which to Callcredit means “no [credit] score could be generated”, not that the customer has the credit-worthiness of Bill Gates or George Soros.

Hence, it’s fairly obvious that if you include these figures in mean averages, you’re going to give a misleading impression, and people will infer the wrong thing. For example, say you have 99 applications with an average monthly income of £2000, but there’s also an incorrect application with a figure of £50,000. If you report the mean, you’ll get an answer of £2480, instead of the correct £2010 (assuming that £50k salary translates to ~£3k take-home per month). However, if you report the median, you’ll get an answer of £2000, whether the incorrect data is in there or not.

In statistical parlance, the median is “a robust measure of central tendency”, whereas the mean is not. The median isn’t affected by a few outliers (at either end).
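
In T-SQL (2012 onwards), you can report both side by side; a sketch, assuming an applications table with a MonthlyIncome column:

SELECT DISTINCT
	MeanIncome = AVG(MonthlyIncome) OVER ()
	,MedianIncome = PERCENTILE_CONT(0.5)
		WITHIN GROUP (ORDER BY MonthlyIncome) OVER ()
	FROM dbo.Application
GO

If the two columns come back wildly different, that’s usually the first sign that something dodgy has crept into the data.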


End note: Credit scores can be (roughly) normally distributed; for a normal distribution, the median and mean (and mode) are the same. But data doesn’t have to be normally distributed: e.g. call-waiting times follow the exponential distribution, where the median and mean are not the same.


XSDs FTW

I’m a big fan of passing data to and from stored procedures (sprocs) as XML, especially XML that represents a complete object, or a list or hierarchy of objects. For a start, XML is perfectly human-readable (if you’re doing it right), and nearly every system and language knows how to work with it, SQL Server / TSQL included. What makes it even better, is being able to validate the XML before you even begin to parse it, using an XSD (XML Schema Definition).

Here’s a complete example you can copy and run:

USE tempdb
GO

-- If the XSD already exists, drop it:

IF EXISTS (
  SELECT xsc.name 
    FROM sys.xml_schema_collections xsc
    WHERE xsc.name='TestSchema'
)
BEGIN
  DROP XML SCHEMA COLLECTION TestSchema
END
GO

-- Create the schema:

CREATE XML SCHEMA COLLECTION TestSchema AS '
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <xsd:simpleType name="ST_EmailAddress">
      <xsd:restriction base="xsd:string">
      <xsd:pattern value="[^@]*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9}"/>
      </xsd:restriction>
  </xsd:simpleType>

  <xsd:simpleType name="ST_Usage">
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="home"/>
      <xsd:enumeration value="work"/>
      <xsd:enumeration value="other"/>
    </xsd:restriction>  
  </xsd:simpleType>

  <xsd:complexType name="CT_EmailAndUsage">
    <xsd:simpleContent>
      <xsd:extension base="ST_EmailAddress">
        <xsd:attribute name="usage" use="required" type="ST_Usage" />
      </xsd:extension>
    </xsd:simpleContent>
  </xsd:complexType>

  <xsd:element name="emailList">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="email" type="CT_EmailAndUsage" minOccurs="1" maxOccurs="3" />
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

</xsd:schema>'
GO

-- Make some dummy data that conforms to the schema above:

DECLARE @testXML AS XML(TestSchema)

SET @testXML = '
<emailList>
  <email usage="home">pete@home.com</email>
  <email usage="work">pete@work.com</email>
  <email usage="other">pete@other.com</email>
</emailList>
'

-- Query it:

SELECT
    id = ROW_NUMBER() OVER (ORDER BY e.i)
    ,EmailAddress = e.i.value('(.)[1]','VARCHAR(255)')
    ,Usage = e.i.value('(@usage)[1]', 'VARCHAR(20)')
  FROM @testXML.nodes('//email') AS e(i)
GO

The result set is:


id   EmailAddress    Usage
---  --------------  ------
1    pete@home.com   home
2    pete@work.com   work
3    pete@other.com  other

(3 row(s) affected)

Now, try messing around with the contents of the @testXML variable, e.g.:

  1. Set usage to a string that’s not ‘home’, ‘work’ or ‘other’
  2. Add a fourth email address
  3. Take the ‘@’ symbol out of an email address
  4. Put in some extra nodes that don’t belong

, then re-run the code. They all fail, because the XML has to conform to the XSD we’ve defined as TestSchema. So SQL Server automatically rejects any input that fails data validation (e.g. the format of an email address) or breaks business logic (‘no more than three emails’); if the XML was being passed to a sproc, the call would fail, and no code inside would ever run.

Obviously, you may not want to automatically reject ‘broken’ XML; you’ll probably want to record the fact. That’s fine – your code (sproc) can accept schema-less XML and attempt the cast itself; if the cast fails, you can respond however you like.
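
A sketch of that pattern (the sproc and logging-table names here are made up): the sproc accepts untyped XML, and the assignment to the typed variable is where the validation happens:

CREATE PROCEDURE dbo.SaveEmailList
	@inputXML XML  -- schema-less on the way in
AS
BEGIN
	DECLARE @validXML XML(TestSchema)

	BEGIN TRY
		SET @validXML = @inputXML  -- this cast is where the XSD validation happens
	END TRY
	BEGIN CATCH
		-- Respond however you like; here, log it and stop:
		INSERT INTO dbo.RejectedXML (ReceivedOn, RawXML, ErrorMessage)
			VALUES (GETDATE(), @inputXML, ERROR_MESSAGE())
		RETURN
	END CATCH

	-- ...carry on and shred @validXML exactly as in the SELECT above...
END
GO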

There’s certainly an overhead in learning the language of XSDs, but because it’s a generic technology, there are plenty of online resources, e.g. w3schools. When it comes to transferring complex objects around as data, I don’t know of a better way than using XML and XSD.

Note

Because Microsoft haven’t got round to coding it yet, you can’t query the text value of a node that’s been defined as a type in XSD. That is, I’d ordinarily like to be able to query the email address itself like this:

EmailAddress = e.i.value('(./text())[1]','VARCHAR(255)')

, because directly accessing the node text is faster (presumably because it doesn’t have to do any more parsing). But sadly, it’ll just fail with an error. However, this is unlikely to cause practical problems; it’s just a mild annoyance that’s vastly outweighed by the benefits that come from validated XML.


Converting JSON to XML : A Gateway to Cygwin

Of the files I have to deal with on a weekly basis, I’d put the breakdown at 50% Excel, 40% CSV, and 10% XML. This is fine, I can reliably transfer data from those files into SQL Server without too many tears. However, today I was presented with JSON-formatted versions of files, that I’d normally get as XML; and I haven’t had to deal with JSON since I last wrote PHP/AJAX code, about five years ago.

Now, SQL Server 2016 can natively read/write JSON code (see, for example, this MSDN blog), but I use SQL Server 2014, which knows nothing about JSON.

Of course, I googled for JSON to XML converters. There are many, mostly in the form of libraries for other systems, and even a few online converters that would do the job ‘in-browser’. Unfortunately, the data I needed to convert was credit file data, and that data is sensitive. I can’t just go pasting it into unknown websites without completely understanding what’s going to happen to it – if there’s the slightest chance my data could get uploaded and saved elsewhere, I can’t use that site. I did find an online site that did the conversion purely in javascript (no POSTs back to the server), so I copied the code locally, pasted my JSON in… and it crashed the browser (Chrome). Turns out 80kb of JSON was too much for this javascript, and in fact, a couple of the standalone converters I tried also had trouble with this (small) amount of code.

There was even a pure T-SQL converter (written as a function) that I tried, but unfortunately, that didn’t work out either. Which is a shame, as a SQL-based solution appeals to me greatly!

To cut a dull story short, here’s how I did it: perl. Thanks to the third most popular answer to this stackoverflow question, the answer was to open up a cygwin window, and type:


cat MyFile.json | perl -MJSON -MXML::Simple -e 'print XMLout(decode_json(do{local$/;<>}),RootName=>"json")' > MyFile.xml

(Thank you very much, stackoverflow user azatoth!)

And that did the trick; I had to do some minor tidying up (due to @ symbols, and node names starting with a number), but in the main, it did the job for me, with a minimum of effort.

The point of this post is two-fold:

  1. When this requirement crops up again, I only have to look here to remind myself, and…
  2. To spread the word about cygwin.

Cygwin (www.cygwin.com) is a way to get hold of Unix/Linux-style functionality on Windows. I’ve used it for years now, and it’s an invaluable part of what I do; it’s literally one of the first things I install on any new machine.

If you do any significant amount of text file processing, there are many great command-line tools to be found within the cygwin environment; just a few I use on at least a weekly, if not daily, basis:

  • grep: for searching through text files, using regular expressions
  • sed: for basic text transformations
  • awk: a programming language for text processing
  • perl: a programming language widely used in the *nix world

The beauty of these tools is that they’re so widely used, it’s almost guaranteed that whatever you want to do, someone else has already put the correct syntax online (cf. my JSON to XML problem). Usually, some light googling (often incorporating the term ‘+stackoverflow’) will get you your answer. I wouldn’t claim for a second that I ‘know’ these tools (apart from maybe grep), but being able to work with them is enough.

If you’re a developer or analyst who has to routinely work with data files, I can’t recommend cygwin highly enough.


Floats may not look distinct

The temporary table #Data contains the following:


SELECT * FROM #Data
GO

value
-------
123.456
123.456
123.456

(3 row(s) affected)

Three copies of the same number, right? However:


SELECT DISTINCT value FROM #Data
GO

value
-------
123.456
123.456
123.456

(3 row(s) affected)

We have the exact same result set. How can this be?

It’s because what’s being displayed isn’t necessarily what’s stored internally. This should make it clearer:


SELECT remainder = (value - 123.456) FROM #Data
GO

remainder
----------------------
9.9475983006414E-14
1.4210854715202E-14
0

(3 row(s) affected)

The numbers aren’t all exactly 123.456; the data is in floating-point format, and two of the values were ever-so-slightly larger. The lesson is: be very careful when using DISTINCT, GROUP BY, or aggregate functions on columns declared as type float.
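
If the values only ever need a fixed number of decimal places, one way to keep things sane is to convert before comparing or aggregating (a sketch; pick a precision and scale to suit your data):

SELECT DISTINCT CAST(value AS DECIMAL(18,3)) AS value
	FROM #Data
GO

-- All three rows now collapse to a single 123.456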

Some other observations:

  • The above will probably feel familiar to anyone who’s done much text wrangling in SQL. Strings look identical to the eye, but different to SQL Server’s processing engine; you end up having to examine every character, finding and eliminating extraneous tabs (ASCII code 9), carriage returns (ASCII code 13), line-feeds (ASCII code 10), or weirder characters still.
  • If your requirement warrants it, I can thoroughly recommend the GNU Multiple Precision Arithmetic Library, which stores numbers to arbitrary precision. It’s available as libraries for C/C++, and as the R package gmp:

# In R:

> choose(200,50);  # This is 200! / (150! 50!)
[1] 4.538584e+47
> library(gmp);
Attaching package: ‘gmp’
> chooseZ(200,50);
Big Integer ('bigz') :
[1] 453858377923246061067441390280868162761998660528

# Dividing numbers:
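# (Note: a numeric literal this long is parsed as a double and silently loses precision;
#  for exact input, pass it as a string, e.g. as.bigz("123456789012345678901234567890"))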
> as.bigz(123456789012345678901234567890) / as.bigz(9876543210)
Big Rational ('bigq') :
[1] 61728394506172838938859798528 / 4938271605
# ^^ the result is stored as a rational, in canonical form.


The meaning of NULL, and why magic data is bad

Part 1: NULL

As a junior web developer, I remember other developers warning me about database NULLs: “you don’t really want to deal with them”, “code them out if you can”, “program around them”, “turn them into something else”. Dear reader, they were quite wrong.

For a piece of data, a NULL value can mean any of:

  1. We don’t know this.
  2. We don’t know this yet (but there’s an expectation we’ll know this at a later date).
  3. We don’t know this, because the question of “What is the value of X for object Y?” is not applicable here.
  4. We don’t know this, and the chances of us ever knowing it are practically zero.
  5. It is logically impossible for this data to exist.

Context usually clues us into which meaning of NULL we’re looking at; if a customer’s email address field is NULL, then in terms of the above:

  1. It hasn’t been asked for / provided / collected.
  2. It hasn’t been asked for / provided / collected yet, but we might be getting this data in the future.
  3. The customer doesn’t have an email address.
  4. The rest of the customer’s data is incorrect or missing, so we have no means of contacting them to find their email address.
  5. The ‘customer’ isn’t something capable of owning an email address(*).

Regardless, if a customer doesn’t have an email address in the system, any code that consumes customer data will have to cope in a sensible manner. If the code is a data entry form, then an empty text field will be displayed; but if the code does marketing mail-outs, then it’ll just have to skip that record.

(*) It could be that the table is ‘multi-use’, and the field makes no sense for some types of data.

Going back to meaning (5), a couple of better examples might be:

  • ‘O’ level GCSE results (at age 16):   My age cohort did GCEs, therefore it is logically impossible for anyone to ascertain my GCSE results.
  • Date of last gynaecological exam:   Clearly, this would never be applicable for anyone born genetically male.

(In multivariate analysis, these would be referred to as structural zeros, rather than the usual sampling zeros. “It was impossible for it to occur” vs. “We did not see it occur”.)

Despite NULL being the very embodiment of “no information”, sometimes “no information” is information in itself! Trivially, a SQL query to find all customers without email addresses will specifically look for the NULL value in that field. Data containing NULLs can be indexed, same as any other data. You can even create a filtered index that goes straight to the NULL data:

CREATE INDEX IX_Customer_EmailIsNULL
  ON dbo.Customer(Email)
  WHERE Email IS NULL
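
And the query that index is there to serve might be nothing more than:

SELECT *
	FROM dbo.Customer
	WHERE Email IS NULL  -- satisfied directly by the filtered index above
GO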

So a NULL value is generally not something to be avoided, modified, or coded around. It is eminently useful, and a vital part of your data structures.


Part 2: Bad magic

Now, I started with part 1 because of a common pattern I see used in data capture, usually due to novice / junior / misguided developers. An example: I have a database table of addresses (called Address), with the usual fields. My company operates strictly within the UK, so in an effort to keep our data as clean as possible, we have a CHECK constraint on the Postcode field; not a foreign key to a table of all postcodes ever (who wants to maintain that??), but a simple check against the UK postcode format. The check will prevent entries like “unknown”, or mistakes like “SW1A IAA” (‘I’ instead of ‘1’). Also, the postcode is ‘NOT NULL’-able — because every address has a postcode, right?

It might look like this:

CREATE TABLE xyz.[Address]
(
	AddressID INT NOT NULL PRIMARY KEY
	,Line1 VARCHAR(255) NOT NULL
	,Line2 VARCHAR(255) NULL
	,Postcode VARCHAR(10) NOT NULL
		CHECK (Postcode LIKE '[A-Z][0-9] [0-9][A-Z][A-Z]'
		OR Postcode LIKE '[A-Z][A-Z][0-9] [0-9][A-Z][A-Z]')
)

(Clearly the CHECK constraint isn’t exhaustive: as it is, it’ll reject SW1A 1AA, the postcode of Buckingham Palace. It’ll do for illustrating the point.)

If customer data is supplied without a postcode, then any INSERT will fail. What tends to happen is that, over time, you’ll see the Postcode field start to contain values like ZZ1 1ZZ; a value that passes our simple CHECK constraint rules, but is almost certainly not a valid UK postcode.

So how did ZZ1 1ZZ get into the database?

Scenario 1a:

The developer coding the application form tried to INSERT a record with no postcode, thus the operation failed with an error. So in the input form, they put some code to change a blank postcode to ZZ1 1ZZ when INSERT-ing.

Scenario 1b:

The customer input form hooks up to an address validator; if the address cannot be validated, then the customer is asked to fill in all the fields themselves, and can easily put in an invalid postcode which doesn’t make it past the simple check constraint on the Address table. The developer dutifully catches the error, changes the postcode to ZZ1 1ZZ and re-INSERTs.

Scenario 2:

A customer complained about being marketed to, and needs to be removed from the database as soon as possible. To do it properly would mean changing code in several systems; the quick hack is to change their postcode to ZZ1 1ZZ, then make sure the mail-out query ignores records with that postcode value. This is then adopted as semi-official practice: “To remove a customer from marketing, just set their postcode to ZZ1 1ZZ.”

There are multiple problems with having a postcode of ZZ1 1ZZ meaning ‘unknown’, ‘error’ or ‘do not contact’:

  1. It’s a ‘magic’ string; for it to have system-wide meaning, every single system must understand it, and what it represents. What if someone INSERT-ed ZZ2 2ZZ? It wouldn’t be similarly understood; it would be treated as a real postcode.
  2. Every new developer and analyst has to be told about the magic string. What if there’s a magic string for every piece of data? Ok, it could be solved by using VIEWs, but then that’s more code that has to be known about, and scrupulously maintained.
  3. What if, by some mistake, post is sent out to that postcode? (This will happen, I guarantee it.) One of your other systems is likely recording the fact that mail has been sent correctly, but the chances of it arriving are slim.
  4. The real postcode ZZ1 1ZZ might not exist now, but it may in the future: there are many examples of postcodes falling into and out of use. How will you know if your postcode is genuine, or a magic string? Take note: postcodes that start ZZ99 are real live NHS “pseudo-postcodes”…

As you’ve probably realised, my answer would be to make the postcode field NULL-able(*), and to INSERT a NULL in the case of missing or broken data, completely avoiding any magic strings. It needs no special handling, and contextually, it very probably has a limited range of well-understood meanings; e.g. if you see a field MiddleName that is NULL for some records, you would presume it to mean the Customer has no middle name.

Note this is why in the email example in Part 1, we shouldn’t use a blank string instead of a NULL – because a blank string is still a ‘magic’ string, just one that would happen to be widely understood. There will be cases when a blank string legitimately means something quite different to a NULL.

(*) I’ve heard people claim that fields with CHECK constraints can’t be NULL-able. In modern flavours of SQL Server, this is demonstrably false. If the field is NULL, the constraint just isn’t checked.
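
If you want to convince yourself, a throwaway demo table does the trick (the names here are obviously made up):

CREATE TABLE xyz.CheckDemo
(
	Postcode VARCHAR(10) NULL
		CHECK (Postcode LIKE '[A-Z][0-9] [0-9][A-Z][A-Z]')
)
GO

INSERT INTO xyz.CheckDemo (Postcode) VALUES ('M1 1AA')  -- fine: matches the pattern
INSERT INTO xyz.CheckDemo (Postcode) VALUES (NULL)      -- also fine: the CHECK isn't evaluated for NULL
INSERT INTO xyz.CheckDemo (Postcode) VALUES ('banana')  -- this one fails the constraint
GO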


Part 3: Keeping track

By clearing up one locus of ambiguity, I’m afraid I’m going to introduce a new one. Presumably, we’re going to want to record why our postcode field is NULL. We can either:

(A) Create a new lookup table, ReasonForNull (say); add a new field, ReasonForNullID, to our Address table, add a suitable foreign key, and a CHECK constraint that says “if the Postcode is NULL, then ReasonForNullID must not be NULL – and vice versa”, e.g.:

ALTER TABLE xyz.[Address]
ADD CONSTRAINT CK_Address_PostcodeOrReason 
CHECK( (Postcode IS NOT NULL AND ReasonForNullID IS NULL)
	OR (Postcode IS NULL AND ReasonForNullID IS NOT NULL)
)

or

(B) Create our new lookup table (as above), but also create another new table, Address_ReasonForNull, like so:

CREATE TABLE xyz.Address_ReasonForNull
(
	AddressID INT NOT NULL
		CONSTRAINT PK_Address_ReasonForNull
		PRIMARY KEY CLUSTERED
	,ReasonForNullID TINYINT NOT NULL
	,CreatedOn DATETIME NOT NULL
		CONSTRAINT DF_Address_ReasonForNull_CreatedOn
		DEFAULT(GETDATE())
	,CONSTRAINT FK_Address_ReasonForNull_AddressID
		FOREIGN KEY (AddressID)
		REFERENCES xyz.Address(AddressID)
	,CONSTRAINT FK_Address_ReasonForNull_ReasonForNullID
		FOREIGN KEY (ReasonForNullID)
		REFERENCES xyz.ReasonForNull(ReasonForNullID)
)

and only INSERT into it when we have an invalid postcode.

Neither (A) nor (B) is a perfect solution. (A) will waste a byte per Address record (if ReasonForNullID is declared as a TINYINT) if the postcode is ok, but has the advantage of strictly maintaining integrity, thanks to the CHECK constraint. (B) wastes no space, but there is no simple way (that I know of) of enforcing that a child record must exist, given data in the parent record.

If we want to record, say, the postcode that was entered but not validated, then it’s no bother under scenario (B) to add a new field to our Address_ReasonForNull table:

ALTER TABLE xyz.Address_ReasonForNull
	ADD OriginalData VARCHAR(20) NULL

However, if we were doing (A), then we’d have to add this column to the main Address table (and change the CHECK constraint); potentially, we could waste a lot of space.

Personally, I’d favour (B), and would push for all data changes to be made via stored procedures (aka sprocs). That way, I can ensure that the data in my two tables is kept perfectly in sync.
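
For what it’s worth, a sketch of such a sproc (using the tables above, assuming Postcode has been made NULL-able as discussed, and keeping the error handling minimal):

CREATE PROCEDURE xyz.Address_SetPostcodeUnknown
	@AddressID INT
	,@ReasonForNullID TINYINT
	,@OriginalData VARCHAR(20) = NULL
AS
BEGIN
	SET NOCOUNT ON
	SET XACT_ABORT ON  -- any error rolls the whole transaction back

	BEGIN TRANSACTION

		UPDATE xyz.[Address]
			SET Postcode = NULL
			WHERE AddressID = @AddressID

		INSERT INTO xyz.Address_ReasonForNull (AddressID, ReasonForNullID, OriginalData)
			VALUES (@AddressID, @ReasonForNullID, @OriginalData)

	COMMIT TRANSACTION
END
GO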

Any thoughts or comments? Feel free to let us know!
