Sunday, April 01, 2007
During the last week of March 2007, major media outlets reported intrusions into the computer systems of T.J. Maxx stores, with the outside possibility that personal information on over 40 million credit card customers was lost. In all fairness, most of the information probably would not be usable. Nevertheless, retail employees and police in (at least) Florida and Great Britain apparently detected unauthorized use attributable to this loss, which may have occurred before December 2006. The T.J. Maxx website has an “important customer alert” that opens from the top of its home page.
This breach, like several others from data collection companies and even from the federal government (the Veterans Administration), shows that corporate databases are as large a vulnerability for consumer security as home computers, personal websites, or social networking sites.
I “retired” from my mainstream information technology career at the end of 2001. Most of it was spent on mainframe business systems, running daily and monthly batch cycles. From the late 1980s on, these systems tended to have major security products (RACF, Top Secret) that prevented unauthorized production file access by applications programmers. In time, source code management and elevation procedures improved to the point that one could argue that, in theory, these systems should be quite secure. Client-server replications of this data were not always as secure.
More relevant is the way many implementations and upgrades are tested, often with full parallel runs involving complete production data. Sometimes employees access this data from home by telecommuting, or from laptops that they take with them when traveling. Sometimes listings are taken home for detailed eyeballing during systems testing and implementation planning. This was more or less acceptable (except with government classified data) in the 1980s and 1990s, but is likely to become much less acceptable today. Companies will have to become progressive in adopting more secure parallel procedures during system implementations. This is a sensitive matter for associates, whose job performance depends on the accuracy of system parallels. Automated file-to-file or database compares (File-AID on the mainframe has tools for this) could reduce the need for copying and moving around production data.
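To illustrate the idea, here is a minimal sketch, in Python, of what an automated file-to-file compare does. This is a hypothetical stand-in for a mainframe tool like File-AID's compare, assuming fixed-format text records with the record key in the first few positions; the point is that testers get match/mismatch counts and differing keys without having to print or carry production data around.

```python
# Hypothetical sketch of an automated file-to-file compare for parallel
# testing. Assumes simple text records whose first key_len characters are
# the record key and the remainder is the record body.

def compare_files(old_path, new_path, key_len=8):
    """Compare two record files by key; return counts of
    (matched, changed, only-in-old, only-in-new) records."""
    def load(path):
        records = {}
        with open(path) as f:
            for line in f:
                line = line.rstrip("\n")
                records[line[:key_len]] = line[key_len:]
        return records

    old, new = load(old_path), load(new_path)
    # Records present in both files, with identical vs. differing bodies
    matched = sum(1 for k in old if k in new and old[k] == new[k])
    changed = sum(1 for k in old if k in new and old[k] != new[k])
    # Records present in only one file
    only_old = len(old.keys() - new.keys())
    only_new = len(new.keys() - old.keys())
    return matched, changed, only_old, only_new
```

A real compare utility would also report which fields differ within changed records, but even this summary level lets a tester verify a parallel run without eyeballing raw production listings.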
Previous proposals on this blog have suggested that a “preferred address” consumer notification system based on the USPS NCOA become a linchpin in protecting consumer data security. Such a system would need to be kept offline from the public Internet and would need very careful physical security planning.
More about FICO:
Fair Isaac (the FICO score company, which gives every consumer an akashic “grade”) has its own personal data security recommendations here.
Credit scores, which of course have become controversial amid the consumer data security problems, are sometimes used for purposes other than loans, credit cards, and mortgages. Employers and, of course, landlords also use them. It used to be common on job applications for applicants to sign statements consenting to private investigations of their “mode of living,” but in practice these statements were usually meaningless. In theory, they would authorize investigative consumer reports by credit bureaus, including interviews with people who know the applicant. In practice, when I worked for Chilton in the 1980s (a credit reporting company in Dallas that has since been folded into Experian, still active in Dallas today), it appeared that these requests were relatively infrequent. Here is a wiki reference on the use of credit scores.
FICO considers only financial behavior. A FICO score, as far as I know, does not consider other behaviors, publicity, personality, or non-money-related factors (and these could be very much affected by the Internet if they did count). When I worked for Chilton in the 1980s, the FICO interface from our credit reporting system was called "risk predictor" and was written in assembler language on the mainframe.