Kevin D. Mitnick's The Art of Deception
Just finished reading Kevin D. Mitnick's The Art of Deception. The first part of the book contains stories of social engineers in action. The second part is a set of recommendations.
The stories are fun to read; the recommendations are sometimes useful too, but there is a caveat. The overarching idea behind them is that in order to protect oneself from social engineers, one has to be very careful when releasing information to the public. Things such as server names, employee contact information, or the organization chart should, according to Kevin, be kept within the company: anything a social engineer can reach could simplify his con.
This is not true.
This is exactly like claiming that proprietary software products are more secure than open source alternatives because an attacker cannot read the source code and spot the vulnerabilities.
I have already explained that the boundary between, on one side, the unsafe outside world full of scoundrels and, on the other, the world within the company, where everyone supposedly has only one thing in mind, the prosperity of the corporation, doesn't exist. The same holds for social engineering. If your security is based on the secrecy of a few phone numbers and server names, what do you do when a disgruntled employee is fired? Do you rename all your servers and ask all your employees to change their first and last names?
So by pretending the information never leaves the company, you're only increasing the risk of a social engineering con. It's simple: since employees believe the information is classified, a social engineer who obtains it from an insider or from someone who has left the company automatically acquires a powerful means to pass as an insider, which is exactly what one wants to avoid. Asking “Can you copy the following files from STNB-27-C for me, please?” will not be interpreted the same way if the server name is known to be public as it will if the name is believed to be classified. (By the way, STNB-27-C is a terrible name for a server. Don't do that.)
This means one thing and one thing only: if you can't ensure the confidentiality of a piece of data, either make it public, or at least don't advertise it as classified. Maybe, for lack of better approaches, this was a good thing to do in 2002, when the book was written; I don't know. In any case, it is no longer a good approach today. Information security should rely on proven procedures: technical ones, including cryptography, as well as procedures for humans to follow.
Ensuring that each machine and each application is hostile to everything from the outside, as I suggested in the article I quoted at the beginning of this one, is therefore a wise approach. For a given server, it doesn't matter whether the party trying to access it is a guy from Somalia or an application written by the same team and hosted on a neighboring server; both are treated the same, because both are considered public.
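To make the idea concrete, here is a minimal sketch in Python of what “treat every caller as public” can look like: every request must present a credential and is verified in exactly the same way, whether it comes from the office LAN or from the other side of the world. The bearer-token scheme, the port, and the environment variable are assumptions for illustration only; a real deployment would more likely use mTLS or tokens issued by an identity provider.

```python
# Minimal sketch: an internal HTTP endpoint that authenticates every request
# the same way, regardless of where it comes from. The shared secret and the
# endpoint are hypothetical choices for this example.
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Secret provisioned out of band (e.g. from a secrets manager), never hard-coded.
API_TOKEN = os.environ.get("API_TOKEN", "")

class ZeroTrustHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        presented = self.headers.get("Authorization", "")
        expected = f"Bearer {API_TOKEN}"
        # Constant-time comparison; no exception made for "internal" source IPs.
        if not API_TOKEN or not hmac.compare_digest(presented, expected):
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    # Bind like a public service and rely on authentication, not on the
    # assumption that the surrounding network is friendly.
    HTTPServer(("0.0.0.0", 8080), ZeroTrustHandler).serve_forever()
```

The specific mechanism matters less than the fact that the check is identical for every caller; the request's origin changes nothing.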
Aside from this small point where I disagree with Kevin, the book itself is great and interesting, full of concrete examples of things that can go wrong because of the human factor. Make sure you read it.