These notes are (c) 2004, Robert J. Hansen, derived from a talk given by Professor Douglas Jones of the University of Iowa. These may be redistributed under the Creative Commons Sharealike Non-Commercial License subject to the additional condition that Professor Jones is mentioned as the original lecturer.
Notes from January 19, 2005
- Is there a problem?
- Who could doubt there's a problem?
- "There's a problem because there's a problem" is insufficient argumentation; it's tautological
- The industry has survived--and profited wildly!--without security for so long that it's reasonable to ask if security is even necessary
- Management asks this question all the time.
- Evidence of a problem abounds
- Spam
- 419 scams are security problems
- They're not computer security problems per se, in that they were around long before the Internet became commonplace; but now they're our problem and we need to deal with them.
- Email viruses
- Accountants tell us the world spends billions of dollars a year in IT resources to mitigate them
- If a crime costs billions of dollars a year and we're still arguing if there's a problem, that's not debate: that's denial.
- Joe-jobs / Credibility attacks
- “A good name is more desirable than great riches; to be esteemed is better than silver or gold.” (Proverbs 22:1, NIV)
- Today reputation is even more important. Prior to cheap worldwide communications, you could get away from a bad reputation by traveling to the next town. Today, your reputation is worldwide. Credibility attacks are personal violations, and need to be considered as such
- Unfortunately, it's hard to invent a communication medium more amenable to credibility attacks than the Internet!
- Imagine the billions of dollars lost if someone were to send out child-porn with a forged microsoft.com return email address
- Imagine how much worse it would be if a legitimate microsoft.com server were rooted and used to send out the emails
- Imagine how much money you could make shorting the stock...
- Credibility attacks are trivial to turn into financial profits. It's not just about some wacko with an axe to grind against you. It's impersonal: you're targeted only because someone can make a buck off you.
- Underreporting of electronic fraud
- People report crime when effective mechanisms exist for mitigating future risk
- If Alice murders Bob, Bob's family reports Alice and gets her thrown in prison, thus mitigating future risk
- But if no prisons existed or the system were corrupt, Bob's family would tell everyone he died of the flu--why risk provoking Alice's wrath?
- The underreporting of electronic crime is strong evidence we're living in a lawless era
- Dimensions of the problem
- Who are the threats?
- Repressive agents and/or agencies
- MPAA, RIAA: want ability to hijack your computer remotely to keep you from doing things they don't like
- Script kiddies: want ability to hijack your computer remotely to make you do things they like
- Where's the difference? Defense against one will also defend against the other.
- Users
- Anything an attacker can do to bring a system to its knees, a legitimate user will sooner or later do of their own accord.
- There are a lot more regular users than attackers!
- Software which is supremely reliable in the face of clueless users will also be supremely resistant to attack
- Users also routinely escalate into attackers: "the system doesn't want me to do this, but I need to do this for my job, so I'm going to circumvent the access controls"
- You can't fire them, because then who's left to run the store?
- Only solution: secure software.
- Software
- "Intelligent agents" are doing more and more work for us online; RSS and RDF are only the very beginning
- A badly-programmed intelligent agent has all the worst traits of users, plus one even worse trait: it can do stupid, dangerous things millions of times a second
- Malice is equivalent to error; to defend against attack is equivalent to defending against error and vice-versa
- Fundamental tools of security: MECHANISM and POLICY
- Distinction by analogy: it's intellectual fraud, but convenient
- Mechanisms: a lock-and-key combination
- Or, less analogically, a mechanism is a device; an unintelligent thing which only acts and reacts in certain specific ways. You put the key in the lock, you turn the key, the bolt retracts, the door unlocks.
- Software is deterministic (even non-deterministic software is deterministic in that it's always non-deterministic), thus, software is mechanism.
- Policy: a security guard deciding who gets keys to the building
- Less analogically, policy is the name we give to the rules human beings set up about how mechanisms will be used
- There is no magic mechanism which leads to security--policy is always necessary, and often neglected
- Good mechanisms + good policy may equal good security
- Bad mechanisms + good policy = bad security (think of easily-picked locks, but great rules on who gets keys)
- Good mechanisms + bad policy = bad security (think of great locks, but giving keys to anyone who walks by on the street)
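- The three combinations above can be sketched as a toy model in which overall security is bounded by the weaker of mechanism and policy. The function name and the 0-10 scale here are illustrative assumptions, not the lecturer's:

```python
# Toy model (illustrative assumption, not from the lecture): rate mechanism
# and policy on a 0-10 scale; overall security is capped by the weaker one.

def security_level(mechanism: int, policy: int) -> int:
    """Overall security is at best the minimum of mechanism and policy."""
    return min(mechanism, policy)

print(security_level(9, 9))  # good mechanism + good policy -> 9
print(security_level(2, 9))  # easily-picked locks, great key rules -> 2
print(security_level(9, 1))  # great locks, keys to anyone who asks -> 1
```

- Note that even security_level(9, 9) is only an upper bound--good plus good *may* equal good security, nothing stronger.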
- Mechanisms are rated by time to subvert and time to teach subversion
- Time to subvert: the time required to break the mechanism for someone who already knows how to break the mechanism
- Time to teach: the time required to learn how to break the mechanism
- Today, time to teach is reduced to minutes by use of prepackaged attacks (Perl scripts, rootkits, etc.)
- And given the speed of computers, time to subvert is measured in seconds
- This is not a very optimistic picture, is it?
- It's getting worse.
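- One way to see why prepackaged attacks made the picture worse: time to teach is paid once, while time to subvert is paid per target, so the teaching cost amortizes away. A minimal sketch, with invented numbers:

```python
# Illustrative sketch (numbers are invented): learning an attack is a
# one-time cost; running it repeats per target, so teaching cost amortizes.

def total_attack_time(time_to_teach_s: float, time_to_subvert_s: float,
                      targets: int) -> float:
    """Total attacker effort in seconds across `targets` machines."""
    return time_to_teach_s + time_to_subvert_s * targets

# Hand-rolled exploit: two weeks to learn, two minutes per machine.
bespoke = total_attack_time(14 * 24 * 3600, 120, targets=1000)
# Prepackaged rootkit: ten minutes to learn, five seconds per machine.
packaged = total_attack_time(10 * 60, 5, targets=1000)
print(bespoke > packaged)  # True: packaging collapses the teaching cost
```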
- Subtlety: there's one additional trait: time to discovery
- Once an attack is discovered, the game is pretty much over.
- So keep them from being discovered.
- This is often misunderstood as an argument for security through obscurity. It's not. Trying to hide things doesn't work, because they always get discovered.
- The only realistic option is to make mechanisms so robust that the time to discovery is nontrivial.
- Most respected ciphers are like this. It works wonderfully for the cryptanalytic community!
- Once a break against a cipher is discovered, it's all over. But discovering that break can take decades or more.
- We need to emulate cryptographers in this regard. What can cryptographers teach us about secure systems? Quite a lot.
- Will be expanded on later in the course, in much detail.
- To understand security today, read Dilbert
- Computer security is about risk management. It's not about making a profit. As such, businesses always treat it as an overhead expense, to be minimized as much as possible while staying clear of a minority-shareholder negligence lawsuit.
- A corporation's memory is measured in terms of quarters and is punctuated by earnings statements. This means attacks from ten years ago will be completely forgotten, while the once-in-a-hundred-years confluence of events that affected the bottom line last quarter will be part of every SEC filing for decades to come.
- The preceding is not specific to IT!
- Risk management is insurance
- The only reason we buy insurance is we have a 300-year cultural history of insurance, and 300 years of seeing how it benefits us
- If someone were to suddenly invent insurance today, we'd think they were mad. "You want me to pay you money if my house doesn't burn down, but you'll pay me money if it does?"
- Computer security is in the same boat as insurance
- Even today, corporations don't like insurance--it's an overhead expense to be minimized
- Even if/when computer security gets recognized as being just like insurance, we can continue to expect to see security be minimized and marginalized
- No company gives out awards for "Best Catastrophe Avoided Due To Constant Hounding By ISO 9000 Inspectors".
- By definition, the best way to avoid a catastrophe is to avert it so far in advance that nobody else ever saw the risk
- Perversely, this rewards incompetent security geeks. A narrowly-averted catastrophe is far less successful, from a security standpoint, than a catastrophe which never materialized--but people who narrowly avert catastrophe are hailed as heroes, because the more-successfully-averted catastrophes are never seen by management.
- Features are driven by marketing and focus groups
- ... which are in turn driven by customers
- This isn't bad in and of itself; responsiveness to market demands is a good thing.
- However, responsiveness to market demands assumes the market is capable of evaluating the consequences of actions.
- This makes sense for agriculture; your average person is capable of saying a rotten apple is less valuable than a fresh apple
- It doesn't make much sense for technology, where Joe Consumer is typically unwilling or unable to make intelligent differentiations
- Giving people what they want is a good thing: but when we're giving people what they want, we need to keep in mind we have a responsibility to give them a safe product, too!
- Compare to the automotive industry: you can buy any of hundreds of makes and models of cars to satisfy almost any whim--but you're guaranteed that your car's bumpers and seat belts and air bags meet certain minimum standards
- The idea of government and industry looking out for consumer interests is not necessarily socialist. Free markets can coexist with strong consumer protections.
- Many would argue strong consumer protections are essential to free markets
- In the absence of consumer protections, can we say the software market is free?
- Businesses can get away with selling unsafe software (à la the Ford Pinto). Businesses can make money doing it.
- And so, businesses do.
- Businesses may even have a legal obligation to produce unsafe software!
- Producing safe software is hard and expensive.
- Businesses have a fiscal obligation to spend shareholder money wisely, in activities which will give positive return on investment.
- But security is never a value-add.
Notes from January 21, 2005
- Magic Pixie Dust
- Magic Pixie Dust is what people want: sprinkle a little magic pixie dust on your software and/or your processes, and suddenly you'll be safe.
- It doesn't work that way. There are no silver bullets. There is no magic pixie dust.
- If there was magic pixie dust, we wouldn't need to study computer security. Everyone would have magic pixie dust and everyone would be safe.
- The fact people are trying to sell magic pixie dust is evidence their pixie dust doesn't work. If magic pixie dust worked, we wouldn't have this security nightmare.
- Don't sell anyone magic pixie dust. Don't let anyone sell you magic pixie dust.
- The new arms race
- "In the battle between armor and warhead, always bet on the warhead."
- In computer security, offense is stronger than defense
- You're stuck defending against the attacks you know about
- The attacker gets to use techniques you haven't ever heard of before
- One solution: go on the offensive. Switch the tactical roles.
- This is not an endorsement of strikeback.
- It is only advocating we change the way we think. Cf. good airline screening practices. If you let people tell you where they're going, why they're going there, etc., you won't catch anyone. After all, hijackers expect those questions and have rehearsed good answers to get past your defenses. But if you ask someone "... and what's your favorite restaurant in this city? What bus route do you take home? How bad is the rush-hour traffic?", or other unexpected questions, then they're in the defensive role and you're trying to find holes in their story.
- This arms race is between attack and defense. You don't want to be on defense.
- A defender's job is to identify vulnerabilities and plan defenses
- Note: a person studies a system and devises a procedure
- Types of defenses, in descending order of preferability:
- Eliminate vulnerabilities
- Block vulnerabilities
- Distract from vulnerabilities
- Detect exploitation of vulnerabilities
- Elimination of vulnerabilities is difficult. We (usually) have limited knowledge of our system, which includes limited knowledge of our vulnerabilities
- Layers of independent defense (blocking the vulnerability) is the failsafe. With independent orthogonal barriers to unlawful access, it's unlikely that any one vulnerability will result in compromise.
- Distraction is a last-ditch defense. By making the attacker think they're getting something they want, we distract them from the fact they're not. This is a defense, but a pathetically weak one; even if the honeypot works, we have to consider our systems compromised and compromisable.
- Detection is defense against the future; it does not defend against the present. It may protect you in the future by showing you how someone broke into your system and giving you the information you need to prevent future compromise, or it may protect you in the future by giving you the evidence needed to get a conviction and put a cracker in prison. Either way, it's defense against future compromise and useless in the present.
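- The case for layered defense can be sketched numerically: if the barriers are genuinely independent (the assumption stressed above), the probability of getting past all of them is the product of the per-barrier bypass probabilities. The numbers below are illustrative:

```python
# Sketch of defense-in-depth, assuming genuinely independent barriers.
from math import prod

def p_full_compromise(layer_bypass_probs: list[float]) -> float:
    """Probability an attacker slips past every independent barrier."""
    return prod(layer_bypass_probs)

# Three mediocre independent barriers beat one fairly strong one:
print(round(p_full_compromise([0.2, 0.2, 0.2]), 3))  # 0.008
print(p_full_compromise([0.05]))                     # 0.05
```

- The model breaks down exactly where real systems do: barriers that share a flaw are not independent, and the product then overstates the protection.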
- Risk Management
- The name of the game is amortization. But amortization is very difficult.
- Cost of ownership should not exceed the value of assets to owner
- If an asset costs $1000 to protect but is only worth $10 to the owner, the way to defend it is simple--get rid of it!
- Ah, if only it were this easy. Just because you buy something at $100 doesn't mean it's worth $100. You can always get ripped off (it's worth less than you paid) and you can always find a good buy (worth more than you paid).
- Valuations will vary wildly among people. For instance, doctors view medical records as a necessary instrument, but not in terms of dollars and cents. They're necessary for patient care--nothing more. On the other hand, insurance companies might find medical records very valuable in deciding whom to accept into a policy.
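- The cost-of-ownership rule above reduces to a one-line comparison. The dollar figures and function name here are illustrative assumptions:

```python
# Toy decision rule (figures invented): keep an asset only if protecting
# it costs less than the asset is worth to you.

def worth_keeping(value_to_owner: float, protection_cost: float) -> bool:
    """True if the asset justifies its own defense budget."""
    return value_to_owner >= protection_cost

print(worth_keeping(value_to_owner=10, protection_cost=1000))      # False
print(worth_keeping(value_to_owner=50_000, protection_cost=1000))  # True
```

- The hard part, as the valuation bullets above note, is that value_to_owner is rarely known--you can be ripped off or find a bargain, and different parties price the same asset wildly differently.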
- Attackers are unlikely to spend more in an attack than the resource is worth to them
- Beware of assigning dollar values. It's hard to put a price on pride, prestige or wounded ego. From an attacker's perspective, a $1000 attack on a website makes perfect sense if their esteem in the kiddie community rises by $1000.
- People make dumb valuations all the time, and attackers are no different. If an attacker misestimates the value of a resource, they may grossly overspend on the attack.
- Each attack has a different price and a different likelihood of success. A smart attacker will choose the cheapest option measured by expected cost per successful attack (cost divided by likelihood of success) among those attacks of which the attacker is aware. A dumb attacker will choose the cheapest option measured by raw cost among those attacks of which the attacker is aware.
- There are a lot of dumb attackers.
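- The smart/dumb distinction above can be sketched directly. The attack names, costs, and probabilities below are invented for illustration:

```python
# Sketch of attacker selection (data invented). A dumb attacker minimizes
# raw cost; a smart one minimizes expected cost per successful attack.

attacks = [
    {"name": "phishing", "cost": 50,   "p_success": 0.10},
    {"name": "rootkit",  "cost": 200,  "p_success": 0.80},
    {"name": "zero-day", "cost": 5000, "p_success": 0.95},
]

def dumb_choice(options):
    """Cheapest attack, ignoring how likely it is to work."""
    return min(options, key=lambda a: a["cost"])

def smart_choice(options):
    """Lowest cost divided by success probability."""
    return min(options, key=lambda a: a["cost"] / a["p_success"])

print(dumb_choice(attacks)["name"])   # phishing
print(smart_choice(attacks)["name"])  # rootkit
```

- A defender who only blocks the cheapest raw attack stops the dumb attacker and hands the smart one a clear field--which is the point of the next bullet.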
- So what's the cost of defense?
- Stupid defenders think the lowest cost of defense is the best
- However, if the attacker is going for the cheapest attack to execute, this may not be the same as the cheapest thing to defend against!
- Thanks to businesses viewing security as an overhead expense to be minimized, businesses are particularly susceptible to this line of thinking.
- Surprise is the name of the game
- Expect to be surprised. Expect your estimations of asset value, attack cost and vulnerability identification to be wildly at odds with reality.
- Accurate estimations are a voodoo black art. Good estimations are like wielding magic powers against attackers. Bad estimations mean you're in the same boat as everyone else.
- Examples:
- A homeowner thinks of their PC as a thousand-dollar piece of hardware. A spammer thinks of the homeowner's PC as a zombie which can be purchased from a cracker group and turned into a spamserver for $20.
- Most people think technological expertise is expensive. It's not. First, given the dismal state of security, it doesn't take much skill to implement attacks; and second, thanks to the dot-com implosion, we have a lot of technologically skilled people waiting tables.
- How hard is the lock on your front door to pick? Do you know how to pick locks? If you don't, then do you really know? Or are you just taking someone else's word for it?