Crypto-Gram Newsletter
September 15, 2002
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on computer security and cryptography.
Back issues are available at
. To subscribe, visit
or send a blank message to
crypto-gram-subscribe@chaparraltree.com.
Copyright (c) 2002 by Counterpane Internet Security, Inc.
------------------------------------------------------------------------
In this issue:
* AES News
* Crypto-Gram Reprints
* The Doghouse: Bodacion
* Reveal and Me
* News
* Counterpane News
* Microsoft Word 97 Vulnerability
* Security Notes from All Over: The Odyssey
* Comments from Readers
------------------------------------------------------------------------
AES News
AES may have been broken. Serpent, too. Or maybe not. In either case,
there's no need to panic. Yet. But there might be soon. Maybe.
Some of the confusion stems from different definitions of "attack." To a
cryptographer, an attack is anything that breaks the algorithm faster
than brute force, even if it is completely impractical. To an engineer,
an attack is something that is practical, or at least might be practical
in a few years. An attack that breaks AES in the cryptographer's sense might
not be an attack at all to an engineer. The rest of the confusion stems from
not being sure the
attack actually works.
Let's start from the beginning. A few months ago, Courtois and Pieprzyk
posted a paper outlining a new attack against Rijndael (AES) and
Serpent. The authors used words like "optimistic evaluation" and "might
be able to break" to soften their claims, but the paper described a
better-than-brute-force attack against Serpent, and possibly one against
Rijndael as well.
Basically, the attack works by trying to express the entire algorithm as
multivariate quadratic polynomials, and then using an innovative
technique to treat the terms of those polynomials as individual
variables. This gives you a system of linear equations in a
quadratically large number of variables, which you have to solve. There
are a bunch of minimization techniques, and several other clever tricks
you can use to make the solution easier. (This is a gross
oversimplification of the paper; read it for more detail.)
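To see the linearization idea in miniature, here is a toy sketch of my own
(not code from the paper): take a few quadratic equations over GF(2), rename
each product of unknowns as a fresh variable, and solve the resulting linear
system by Gaussian elimination. The real XSL attack works on enormously
larger systems and hinges on generating enough extra equations for this to
succeed.

# Toy linearization: treat each product of unknowns as its own variable,
# then solve the resulting linear system over GF(2).
def solve_gf2(rows):
    """Gaussian elimination over GF(2) on an augmented matrix (lists of bits)."""
    rows = [r[:] for r in rows]
    ncols = len(rows[0]) - 1              # last column holds the constant term
    pivot = 0
    for col in range(ncols):
        for r in range(pivot, len(rows)):
            if rows[r][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue                      # no pivot in this column
        for r in range(len(rows)):
            if r != pivot and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot])]
        pivot += 1
    return rows

# Quadratic system in x1, x2 over GF(2):
#   x1 + x1*x2 = 1,   x2 + x1*x2 = 0,   x1 + x2 + x1*x2 = 1
# Introduce y = x1*x2; columns are [x1, x2, y | constant].
system = [[1, 0, 1, 1],
          [0, 1, 1, 0],
          [1, 1, 1, 1]]
print(solve_gf2(system))    # reduced rows give x1 = 1, x2 = 0, y = 0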
The attack depends much more critically on the complexity of the
nonlinear components than on the number of rounds. Ciphers with small
S-boxes and simple structures are particularly vulnerable. Serpent has
small S-boxes and a simple structure. AES has larger S-boxes, but a very
simple algebraic description. (Twofish has small S-boxes, too, but a
more complex nonlinear structure. No one has implemented the attack
against Twofish, but I'm not willing to stand up and declare the cipher
immune.)
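For the curious, AES's "simple algebraic description" is quite literal: the
S-box is nothing but multiplicative inversion in GF(2^8) followed by a fixed
affine map. Here's a short sketch of that standard construction (reduction
polynomial x^8 + x^4 + x^3 + x + 1, affine constant 0x63):

# The AES S-box, built from field inversion plus a bitwise affine transform.
def gf_mul(a, b, mod=0x11B):
    """Multiply two elements of GF(2^8) modulo the AES reduction polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= mod
        b >>= 1
    return result

def gf_inv(a):
    """Multiplicative inverse in GF(2^8); the AES convention maps 0 to 0."""
    result, power = 1, a
    for bit in range(8):                  # a^254 = a^(-1); the group order is 255
        if (254 >> bit) & 1:
            result = gf_mul(result, power)
        power = gf_mul(power, power)
    return result if a else 0

def sbox(x):
    """AES S-box: inversion followed by the affine transform with constant 0x63."""
    y = gf_inv(x)
    out = 0
    for i in range(8):
        bit = ((y >> i) ^ (y >> ((i + 4) % 8)) ^ (y >> ((i + 5) % 8)) ^
               (y >> ((i + 6) % 8)) ^ (y >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

assert sbox(0x00) == 0x63 and sbox(0x01) == 0x7C and sbox(0x53) == 0xED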
These are amazing results. Previously, the best attacks worked by
breaking simplified variants of AES using very impractical attack models
(e.g., requiring immense amounts of chosen plaintext). This paper
claimed to break the entire algorithm, and with only one or two known
plaintexts. Moreover, the first cipher broken was Serpent: the cipher
universally considered to be the safest, most conservative choice.
There was some buzz about the paper in the academic community, but it
quickly died down. I believe the problem was that the paper was dense
and hard to understand. The attack technique, something called XSL, was
brand new. (It's based on another technique, called XL, presented at
Eurocrypt 2000.) And the results were so startling -- an attack against
Serpent! -- that they were just discounted.
Meanwhile, Fuller and Millan released a paper showing that AES's 8x8-bit
S-box is really an 8x1-bit S-box. There's really only one piece of
nonlinearity going on in the cipher; everything else is linear. Another
paper came from Filiol. He claimed to have detected some biases in the
Boolean functions of AES, which could possibly be used to break AES. But
there are just too few details in the paper to make sense of this claim
yet.
At Crypto 2002, Murphy and Robshaw published a surprising result,
allowing all of AES to be expressed in a single field. They postulated a
cipher called BES that treats each AES byte as an 8-byte vector. BES
operates on blocks of 128 bytes; for a special subset of the plaintexts
and keys, BES is isomorphic to AES. This representation has several nice
properties that may make it easier to cryptanalyze.
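As I read the construction, the BES embedding sends each byte a to its
vector of conjugates (a, a^2, a^4, ..., a^128) in GF(2^8), so that both the
byte arithmetic and the GF(2)-linear pieces of AES live in a single field.
A rough sketch of the map and the properties it preserves (my illustration,
not the paper's notation):

# BES-style embedding: a byte becomes its eight conjugates under squaring.
def gf_mul(a, b, mod=0x11B):
    """Multiplication in GF(2^8) with the AES reduction polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= mod
        b >>= 1
    return result

def embed(a):
    """Map a byte to (a, a^2, a^4, ..., a^128); squaring is GF(2)-linear."""
    vec = []
    for _ in range(8):
        vec.append(a)
        a = gf_mul(a, a)
    return vec

a, b = 0x57, 0x83
# Multiplication and addition both act componentwise on embedded values.
assert [gf_mul(x, y) for x, y in zip(embed(a), embed(b))] == embed(gf_mul(a, b))
assert [x ^ y for x, y in zip(embed(a), embed(b))] == embed(a ^ b)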
Most interestingly, the BES representation gives the XSL method a much
more concise formulation, and therefore sparser and simpler equations
that are easier to solve. Moreover, there are intermediate versions of
BES -- 2-byte vectors, 4-byte vectors, etc. -- decreasing in complexity
as you head towards BES-8. These representations identified a bunch more
quadratic equations that apply to AES and BES. When you throw them into
the XSL mix, Courtois and Pieprzyk's attack now has a 2^100 complexity,
as opposed to the wiffly waffly 2^200-or-so complexity claimed earlier.
So, here's the current scorecard. Courtois and Pieprzyk claim a
2^100-ish attack against AES. They claim a 2^200-ish attack against
Serpent. This is an enormously big deal.
Assuming that it's real.
We are in the era of completely theoretical cryptanalysis. Cipher key
lengths have gotten so long that attacks simply can't be implemented;
their complexity is just too great. But implementation is critical; some
attacks have hidden problems when you try them out, and other attacks
are more efficient than predicted. You can try the attack on simplified
versions of the cipher -- fewer rounds, smaller block size -- but you
can never be sure the attack scales as predicted. Differential
cryptanalysis was developed this way; the attack was demonstrated on
simpler variants of DES and then extrapolated to the full DES. (I don't
believe that the attack has ever been implemented on the full DES.) Many
of the attacks we use to break algorithms -- linear, boomerang, slide,
mod n, etc. -- are more often mathematical arguments than computer
demonstrations. I don't believe that we will learn in our lifetimes
whether the 2^100 attack on AES really works or not. And we need a lot
more analysis and testing of the general XSL technique, on weaker
algorithms and simplified variants of real algorithms.
So we're in a quandary. We might have an amazing new cryptanalytic
technique, but we don't know if there's an error in the analysis, and
there's no way to test the technique empirically. We have to wait until
others go over the same work. And to be sure, we have to wait until
someone improves the attack to a practical point before we know if the
algorithm was broken to begin with.
In any case, there's no cause for alarm yet. These attacks can be no
more implemented in the field than they can be tested in a lab. No AES
(or Serpent) traffic can be decrypted using these techniques. No
communications are at risk. No products need to be recalled. There's so
much security margin in these ciphers that the attacks are irrelevant.
But there is call for worry. If the attack really works, it can only get
better. My fear is that we could see optimizations of the XSL attack
breaking AES with a 2^80-ish complexity, in which case things start to
get dicey about ten years from now. That's the problem with theoretical
cryptanalysis: we learn whether or not an attack works at the same time
we learn whether or not we're at risk.
The work is fascinating. During the AES process, everyone agreed that
Rijndael was the risky choice, Serpent was the conservative choice, and
Twofish was in the middle. To have Serpent be the first to fall (albeit
marginally), and to have Rijndael fall so far so quickly, is something
no one predicted. But it's how cryptography works. The community
develops a series of algorithms for which there are no known attacks,
and then new attack tools come out of the blue and strike a few of them
down. We all scramble, and then the cycle repeats.
We're starting to see the new attack tools that work against some of the
AES finalists. It's an open question as to how long the tools will
remain theoretical. But many cryptographers who previously felt good
about AES are having second thoughts.
Summary of recent AES results:
Preliminary version of the Courtois and Pieprzyk paper (final to be
presented at Asiacrypt 2002):
Fuller and Millan paper:
Filiol paper:
Murphy and Robshaw paper:
Rijndael analysis by the Twofish team from May 2000:
One effect of theoretical cryptanalysis is inconsistent standards for
papers. Courtois and Pieprzyk submitted their paper to Crypto 2002, as
did Murphy and Robshaw. For some reason, the latter was accepted and the
former wasn't. In any case, the Courtois and Pieprzyk paper will appear
at Asiacrypt later this year.
------------------------------------------------------------------------
Crypto-Gram Reprints
Crypto-Gram is currently in its fifth year of publication. Back issues
cover a variety of security-related topics, and can all be found on
. These are a selection of
articles that appeared in this calendar month in other years.
Special issue on 9/11, including articles on airport security,
biometrics, cryptography, steganography, intelligence failures, and
protecting liberty:
Full Disclosure and the Window of Exposure:
Open Source and Security:
Factoring a 512-bit Number:
------------------------------------------------------------------------
The Doghouse: Bodacion
In case you didn't see it, Bodacion markets the "Hacker Proof" and
"Virus Proof" Hydra, an "Invulnerable Internet Server." The Hydra is
immune to all operating system attacks, because "HYDRA simply has no
operating system to take control of - there is nothing to hack in to..."
Now, building a secure OS that has no way to execute arbitrary code and
no command line is a good idea -- we do the same thing with our Sentry
-- but these guys pour the snake oil onto the idea pretty thickly.
According to their Web site, the basis of Hydra's security is something
called "Bodacions" based on "Biomorphic Technology." I'll let them
describe Biomorphic Technology to you in their own words, because I
don't think I could do it justice:
"At the core of HYDRA's security features is a biomorphic technology
based on a field of mathematics called 'Chaotic Dynamics.' Using Chaos
Theory, HYDRA can generate special groups of characters called
Bodacions. Bodacions are impossible to guess, and never repeat.
"With these unique properties, Bodacions make perfect session ID's,
order numbers, customer ID's, cryptographic one-time pads, or any number
that needs to be unique, non-repeating, and difficult to guess. HYDRA
even uses this technology to scramble TCP sequence numbers for increased
network security."
Visit their Web site and regain your sense of awe; we've come so far in
computer security, yet we still regularly see this stuff.
------------------------------------------------------------------------
Reveal and Me
I am bad for the youth of America. Me, personally.
AntiChildPorn.org offers a free program called "Reveal." It's
designed for parents to spy on their children. Basically, someone runs
this program on a hard drive and it scans for bad words. In the words of
AntiChildPorn.org: "Reveal works by searching all files found and
comparing each word inside a file against special dictionaries of words
commonly used by pedophiles, child pornographers, cultists, occultists,
drug pushers and purveyors of hate and violence."
Leaving aside discussions about whether or not this constitutes good
parenting, this isn't a half bad idea for a computer program. If you're
faced with a couple of gigabytes of random stuff, it makes sense to
write a computer program that simply scans the stuff. It isn't perfect,
but it's okay for a quick pass.
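To make that concrete, the core of a Reveal-style scanner fits in a few
lines: walk a directory tree, tokenize each file, and flag files containing
words from a watch list. The word list and matching rules below are made up
for illustration; Reveal's actual dictionaries and logic are its own.

import os
import re

WATCH_LIST = {"example", "keyword"}        # hypothetical dictionary

def scan(root):
    """Return {path: [matched words]} for files containing watch-list words."""
    hits = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    words = set(re.findall(r"[a-z']+", f.read().lower()))
            except OSError:
                continue                   # unreadable file; skip it
            found = words & WATCH_LIST
            if found:
                hits[path] = sorted(found)
    return hits

print(scan("."))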
The problem comes from the fact that the word list for Reveal is secret.
Much like the list of unacceptable URLs blocked by the various blocking
software, it's not available for the user to look at and modify. Even
worse, disassembling the software to look at the list might be a
violation of the DMCA.
Anyway, the word list is on the Web (at least as of this writing). Along
with the sexual words you'd expect -- I won't print them because too
many e-mail filtering programs will block this newsletter as a result --
are a whole lot of words you wouldn't: ugly, weapon, shroud, dummy, fat.
And in the occult dictionary was my name: "SCHNEIER". I know my name.
It's rare. There aren't any occult people with my name. There aren't any
occult meanings of my name. And neither are there for the name above
mine: Rabbi Schneerson. Though that leads me to suppose that it might
refer to the one other Schneier I've run across on the Web: Rabbi Arthur
Schneier.
So does AntiChildPorn.org not like rabbis, or cryptographers? Or both?
Reveal's Word List:
------------------------------------------------------------------------
News
A company's own employees are its biggest security threat:
Song lyrics: "Bit Commitment Blues"
Good article on the cyberwar/cyberterrorism hype and nonsense:
Essay on the dangers of moving the Computer Security Division of NIST
into the Department of Homeland Security:
Possible Palladium patents from Microsoft:
6,330,670 Digital rights management operating system
6,327,652 Loading and identifying a digital rights management operating
system
You can probably find others pending in Europe, where you have to
disclose upon filing.
At a panel on Palladium at the USENIX Security Conference in August,
Microsoft representatives claimed that there was no way Palladium could
be used to enforce Digital Rights Management. In response, Lucky Green
invented a bunch of ways Palladium could be used to enforce DRM and then
filed for a patent.
Excellent article on hacking the blackjack tables at Las Vegas. It seems
that while Vegas knew how to spot card counters, they could not detect
counters that worked in teams:
A new company, PGP Corp., has purchased PGP from Network Associates.
Hackers want boring people to stop encrypting things:
Read this for the comments at the end where a British intelligence
officer, when faced with the information that his secrets are being
eavesdropped on, suggests that the government should outlaw scanners. He
probably figures it would be easier than actually fixing the problem.
Good article on the realistic risks of cyber-terrorism:
There's a new Twofish C library, written by Niels Ferguson. The main
difference from existing code is that this one is fully
portable, easy to integrate, well documented, and contains extensive
self-tests. And it's 100% free.
Civil liberties after 9/11; EPIC's chronology:
"I'm not proud," [Brian] Valentine [senior vice president in charge of
Microsoft's Windows development team] said, as he spoke to a crowd of
developers here at the company's Windows .Net Server developer
conference. "We really haven't done everything we could to protect our
customers ... Our products just aren't engineered for security."
Microsoft's Craig Mundie on security. My favorite quote: "People confuse
'security' and Trustworthy Computing."
RIAA sues Verizon; both sides cite the DMCA:
Good stuff on electronic voting:
Recently I heard a rumor that I am in favor of electronic voting,
Internet voting, and the like. This couldn't be further from the truth.
Here's my position:
------------------------------------------------------------------------
Counterpane News
Schneier is speaking about Counterpane monitoring in Seattle, Vancouver,
Columbus, and Sacramento. For details see:
Schneier will deliver a keynote address at ISSE 2002, at Disneyland
Paris, on 2 October.
Schneier is speaking at SMAU 2002 in Milan, Italy, on 25 October.
Schneier is speaking and will be on a panel at the Symposium on Privacy
and Security in Zurich, Switzerland, 30-31 October.
------------------------------------------------------------------------
Microsoft Word 97 Vulnerability
Here's the vulnerability. Alice sends Bob a Word document. Bob edits it
and sends it back. Unbeknownst to Bob, the document he sends back can
contain any file on his computer. All Alice has to know is the file's
pathname.
To make the vulnerability work, Alice embeds a particular code in the
Word document she sends Bob. When Bob opens the document, Word scarfs up
the file off his hard drive and embeds it into the Word document. Bob
can't see this happening, and he has no way of knowing it has happened.
If he looks at the document in Notepad, though, he can see the snooped
file. Then, when Bob saves the document, the file becomes part of the
saved document. He sends it back to Alice, and she has successfully
stolen the file.
This attack works with any file on Bob's computer, and any file on
another server that Bob currently has access to. It's not a macro, so
turning off macros doesn't help. It's not a piece of malware that an
antivirus program will catch. It's just a feature of Word 97 being used
in a novel way. And Alice can embed hundreds of these codes into the
Word document she sends Bob, so if she doesn't know the exact filename
she can make lots of guesses.
This is an enormous security hole, and one that the user is simply
unable to close. All Bob can do is 1) refuse to return Word 97 documents
he edits, or 2) manually examine them all in Notepad or WordPad.
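Since the stolen file ends up stored in the document itself -- that's why it
shows up in Notepad -- Bob can at least automate the crude check before
sending anything back. The sketch below searches a returned document for the
raw bytes of files he considers sensitive and, on the assumption that the
trick is driven by Word field codes, for a field keyword like INCLUDETEXT.
The filenames are placeholders, and this is a sanity check, not a fix.

SENSITIVE_FILES = ["secrets.txt"]          # hypothetical files to watch for
SUSPICIOUS_MARKERS = [b"INCLUDETEXT"]      # assumed field-code marker

def inspect_doc(doc_path):
    """Warn if a document contains suspicious field markers or sensitive data."""
    with open(doc_path, "rb") as f:
        blob = f.read()
    warnings = []
    for marker in SUSPICIOUS_MARKERS:
        # Word stores much of its text as UTF-16, so check both encodings.
        if marker in blob or marker.decode().encode("utf-16-le") in blob:
            warnings.append("field marker %r present" % marker)
    for path in SENSITIVE_FILES:
        try:
            with open(path, "rb") as f:
                contents = f.read()
        except OSError:
            continue
        if contents and contents in blob:
            warnings.append("contents of %s embedded in document" % path)
    return warnings

# Example: print(inspect_doc("edited.doc"))   # placeholder document name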
Another Microsoft vulnerability...so what? There are hundreds of these a
year. Why bother writing about it?
To me, the interesting aspect of this is that Microsoft is no longer
supporting Word 97. This means the company has an interesting choice:
they can patch the vulnerability, or they can demand that users upgrade
to the latest version of Word. Doing the latter is sleazy, but it's in
Microsoft's best interest for people to upgrade. They might think of
this simply as added incentive.
We're seeing more and more of this: vulnerabilities in products that are
no longer supported. When the SNMP vulnerabilities were published
earlier this year, many products with the vulnerability were no longer
supported. Some were made by companies no longer in business.
I first read about this vulnerability in an e-mail newsletter called
"Woody's Office Watch." Alex Gantman reported the Word 97 vulnerability
on Bugtraq, and Woody Leonhard claims that he has discovered similar
vulnerabilities in Word 2000 and Word 2002. He's keeping them quiet for
a while, giving Microsoft a chance to fix them.
------------------------------------------------------------------------
Security Notes from All Over: The Odyssey
Polyphemus's one eye is a single point of failure; when Odysseus pokes
it out, he is much less able to defend himself. Polyphemus's alarm is
ignored because Odysseus said his name was Nobody, so he winds up
shouting that nobody is trying to kill him (you'd think the other
Cyclopes would come see what's going on, but maybe Polyphemus shouts
random stupid things all the time, like an IDS). Polyphemus finally has
to let the sheep out to graze -- it's a mission-critical function -- and
Odysseus and his men then escape by masquerading as legitimate traffic
(sheep).
------------------------------------------------------------------------
Comments from Readers
Just a note before printing comments on arming pilots. While I am
increasingly interested in applying computer-security analysis
techniques to non-computer problems, I am not at all interested in
the gun control debate. While the former opens up avenues for
informed debate, the latter is much more analogous to a religious
war. I am continually amazed by how many people -- on both sides of
the issue -- argue from their conclusions rather than rationally
evaluate the evidence. The comments below are ones that I think
contribute to the analysis, and have been edited of "theology." And
it is unlikely that I will print comments on these comments next
month. There's only so much of this debate I can tolerate.
From: Blake Leverett
Subject: Arming Pilots
Your first and second objections involve the handling of the guns
that the pilots would carry: how do the guns get around, and how do
we make sure that guns aren't left lying around?
There is only one answer to all of these questions: a pilot will
carry his or her own gun on his or her person. There can be no
lockers or any such storage because, as you pointed out, we can't
have guns just lying around. No competent person would ever let his
gun out of his immediate control. The pilot carries the weapon in a
close-fitting holster at all times, even when he leaves the cockpit.
Most commercial airline pilots have military training and are
already trained in the use of handguns. As a side note, it is much
easier for an attacker to seize a policeman's gun, as it is in an
open side holster. To seize a pilot's gun, you first have to guess
where it's located (shoulder holster, back holster, ankle, left or
right) and must make personal contact to wrest the weapon from the
pilot.
None of the above is theory. Thousands of people carry concealed
weapons today, both police and private citizens. And there are
hundreds of guns behind the security blockades at airports, too.
Before 9/11 at least, there were lots of people who could carry
weapons into the "secured" area. They could show their
law-enforcement ID and go right past the "security" guards.
Your third point about training the pilots is moot. Most pilots are
already trained by the U.S. military. And this is a voluntary
program. It would be foolish to force a pilot to carry a weapon
against his will. There are training programs available for every
possible use of a handgun, and I would imagine pilots would have to
pass stringent training requirements.
Lastly, guns are more useful as a deterrent than as a tool to subdue
hijackers. By the time you have hijackers on the plane with intent
to overtake the plane, bad things are going to happen with any
solution. I believe emotion is overtaking logic here: people are
willing to allow armed sky marshals, but not willing to arm the
pilots. The pilots already hold your life in their hands. As
professionals trained to act quickly in a crisis in the air, they
are much more qualified to be armed than some Dirty Harry wanna-be
they drag in to be a sky marshal.
From: Ron Lautmann
Subject: Arming Pilots
Hundreds, perhaps thousands, of guns are safely carried on U.S.
airlines today. Every sworn peace officer who flies from place to
place in the U.S. is armed on the flight. FBI, Secret Service, ATF
agents and others all fly armed and somehow they get their guns
through the airports and on planes with no problem. When they get to
the security gate they present their credentials and easily pass
through. The obvious solution to handling guns by pilots is to let
them carry them at all times just like peace officers. Maybe they
should become sworn peace officers, too.
Many pilots have expressed a keen interest in carrying guns in the
cockpit. Organizations like APSA (see )
attest to this fact. One could assume from this that the pilots
would get significant training in how to handle guns safely and how
to best use them in the event of an attack. Pilots who don't want to
undergo such training could voluntarily opt out of the program and
not carry a gun.
Hijackers would have no way of knowing which pilots were armed, so
they would have no advantage in knowing that some pilots were not armed.
News reports consistently tell us that even with the tightened
security checks at the airports, there is a one in four chance that
a weapon will pass through the security screening process unnoticed.
I believe that arming pilots will help protect against this
unfortunate fact.
By the way, how many policemen get their gun taken away from them,
as you state? I don't think there will be too many hijackers who
will rely on this method to obtain their weapons. Waiting to pounce
on the pilot as he makes his way from the cockpit to the lavatory is
just too iffy a situation for a hijacker.
Finally, if the last line of defense for protecting the country
against a hijacked airliner is being shot down by an F16 fighter, I
would prefer that my pilot be armed rather than risk getting shot down.
From: "Bill Nickless"
Subject: Arming Pilots
Thousands of handguns are already on airplanes and in airports. I
routinely see handguns on the hips of security personnel at airport
screening points, and air marshals are already known to be carrying
handguns. Many federal agency employees, including those of the
Smithsonian Institution, can and do routinely carry their handguns
when they travel. State police on official business (such as
bodyguards for state officers) routinely carry handguns. Officers
from foreign countries routinely protect diplomats and government
officers on airlines with handguns.
Airline pilots are already some of the most carefully screened and
trained people in any industry. They routinely operate very complex
machinery. Their primary duty is to protect the lives and health of
their passengers, not just fly airplanes. Today they can only
protect themselves with the "crash axe" in the cockpit.
Having airline pilots carry guns is not a new idea. In fact, for
many years they were required to carry them by federal law, as the
airlines carried U.S. mail. A Houston Chronicle story at
is only
one example of a situation where an armed hijacker was successfully
stopped by an armed pilot.
From: "ADP"
Subject: Arming Pilots
As a retired airline captain with over 34 years of service, I agree
with you completely regarding the arming of airline pilots. I think
it is the dumbest idea since the PC Jr.
We are a nation of people with short attention spans and even
shorter memories. A pilot's job is to fly his or her aircraft...period.
Before 9/11, we pilots were taught to acquiesce to the hijacker's
demands. That system worked for many years. With the advent of
suicidal terrorists, that system must be abandoned. The captain of
an airliner is responsible for his crew, of course, but he is even
more responsible for the safety of his aircraft and passengers. It
saddens me that, under certain circumstances, an airline captain
might have to risk the life of a crew member. It appalls me,
however, that airline pilots are not concentrating on controlling
their aircraft. A gunfight at 30,000 feet involving a pilot means
that only one other pilot is flying the aircraft. (There are very
few three-man aircraft left flying).
Make the cockpit doors impregnable. Provide for safe egress of the
pilots in the event of a crash. Let pilots fly while others take
care of security.
From: Norman Yarvin
Subject: Arming Pilots
In the latest Crypto-Gram, you listed a lot of problems with arming
pilots. I think they are sound objections to a plan in which
carrying guns is mandatory. But if instead the plan were to merely
give the pilots the option of carrying guns, many of those problems
would be much lessened. The pilots who would carry guns if it were
optional would mostly be the ones who had given thought to tactics,
and who were decent marksmen. (Note that a large fraction of pilots
are ex-military.) To lessen the possibility of being disarmed, they
could be given freedom to carry concealed, or to leave their guns in
the cockpit when stepping out to visit the lavatory. A terrorist
could not be certain that the pilots had their guns on them, or even
that there were any guns on the plane at all.
As for the protocol for carrying weapons on board, in a
firearms-optional system each pilot would have to be responsible for
his own gun at all times. That way, also, he could choose a gun and
holster that he was comfortable with and could conceal well. This
would be not much different from the way sky marshals carry their
guns on board.
I think such a plan would have more chance of helping than of
harming, though it would be no panacea. But I must admit that it is
unlikely to be implemented: the mentality of control is so strong in
this country that if anything is done at all, it is likely to be a
case of "today, prohibited; tomorrow, mandatory."
From: Allen Gordon
Subject: Arming Pilots
I asked a friend who has been a pilot for United Airlines for over
35 years. About this he said, "Hmm, let's see, I'm right handed. I
sit in the chair on the left. I pull the gun out with my right hand,
but since I'm strapped into the chair, I can't turn very far, so I'm
liable to wind up shooting the co-pilot!"
From: Ric Woodson
Subject: Arming Pilots
In response to the guns in cockpits debate, I would like to suggest
an alternative to which I have not yet had anyone come up with a
better solution. Mount along the full length of each side wall of
the passenger area, a tube within a tube. Each tube has openings
down its length approximately 1/3 of its diameter. The outer tube is
stationary, the inner tube rotates to an open position only at the
command of the cockpit.
Inside the inner tube are half-size baseball bats laid end to end.
Once the tubes are open, the window passenger has access to the bats
in the tube. These can be used offensively or defensively. Each row
of seats would then have something like two bats per row. More than
enough to use for re-acquisition of control of the craft. There
would be too many bats to be collected and managed by the
"terrorists" (did you ever try to pick up more than four bats at a
time?). No chance for a misfire. Nothing to take the pilots away
from their jobs. Too small to be used to bash in security doors.
Easy for authorities to inventory and reclaim after the landing.
Cheap and relatively easy to install. After all, who has more
experience with a Louisville slugger than an American passenger? How
about giving the passengers a chance if a revolt is necessary. Send
the marshals home and save the money. Forget the high-tech
solutions, this is not a high tech problem. I know it sounds radical
at first but think about it a while.
From: Jay Ackroyd
Subject: Arming Pilots
All well said, but you've left something out, which applies to both
marshals and pilots. Once you get a gun on a plane, the exploit
turns into getting the gun from the guy who has it, and using it to
take the plane over. Remember that we have to assume terrorists work
in teams of four or five who don't mind dying. The first part of the
exploit is to identify who is armed and where the gun is, which only
requires the sacrifice of one of the team's members. That knowledge
can then be used as part of predesigned plans for getting the gun.
As you say in that very interesting Atlantic article, flight
attendants and passengers cooperating to prevent a hijacking is our
most effective measure for preventing the use of planes as missiles.
Guns on planes don't enhance that measure, and may weaken it.
From: Michael Ortega-Binderberger
Subject: Arming Pilots
A complicating factor that you skipped involves other countries. I'm
an international student in the U.S. I'm from Mexico, and can tell
you that guns are a big no-no over there. Likewise, many countries
would not let American pilots carry guns when traveling there (even
if they did, it would be problematic). Likewise, many foreign
airlines will not arm their pilots, even on flights to the U.S. The
net result is that if it were easy to see which airplanes on which routes
were "armed" and which were not, that itself would provide a
wide-open door for abuse.
From: "Nicholas C. Weaver"
Subject: Arming Pilots
There are now many new features in place which prevent hijackings
(notably, passengers willing to maim any potential terrorist, among
other factors). There are NO new features in place to prevent a
rogue pilot from crashing the plane, as appeared to happen in the
case of Egypt Air.
A gun in the cockpit would probably make the latter attack easier,
as the rogue pilot with the gun shoots his counterparts then crashes
the plane, instead of having to fight off the rest of the cockpit crew.
From: Niels Ferguson
Subject: Palladium
Microsoft claims lots of benefits for Pd, some of which are to allow
Digital Rights Management (DRM). However, most of the benefits can
already be achieved by existing hardware. All Intel CPUs since the
286 have had very good hardware separation between tasks. It is only
Microsoft's choice not to use this feature that has led to a single
hunk of inter-dependent code.
Intel CPUs can protect one program from the other. You can create
secure device drivers which can no longer crash your computer. But,
the basic operating system will always have full control of the
computer. So you can protect programs from each other, and the user
from malicious programs, but the user always maintains complete
control over his machine.
What Pd adds is to take control away from the user. It "allows" the
user to give up part of his control over the machine, and give it to
a program. This is of course required for DRM, but I cannot really
think of any other application. They talked about some things like
banking software, but that is just silly. We have perfectly good
cryptography to handle those threats, and using Pd for banking would
be very dangerous. After all, the Pd chip isn't protected against
physical attacks, so you have to trust the owner of the computer anyway.
There was some misdirection about it not being possible to change
the whole Windows operating system, so Pd is needed to create a kind
of micro-kernel under the OS. This is not true. You can do the same
on Intel hardware; VMware is a good example. Microsoft can achieve
the same security features (except for DRM) using existing hardware
and the same amount of software development effort.
My conclusion: The only reason for Pd is DRM. All the rest is just a
smoke-screen, or stupidity. You can never tell the difference.
From: "Nicholas C. Weaver"
Subject: Palladium
The portions designed to protect the owner/user of the computer do
not require hardware: they rely on the OS doing proper things with
regard to "alien" code. There is nothing which prevents universal
code signing for source authentication, heavy sandboxing, etc., from being
imposed on current systems. The hardware is necessary to prevent
the debugger-style attacks.
QED: The hardware is designed primarily NOT to benefit the
owner/user, but to limit the owner/user's ability to manipulate the
system. Is this a good thing for most people?
From: Fredrik Viklund
Subject: Face Recognition
The failures of face recognition as a means of diagnosing terrorists
made me think of parallels in medical diagnosis where the problems
are similar.
The demands on a diagnostic method are quite different depending on:
* Is it the false positives or the false negatives that have to be avoided?
* Is the disease widespread or rare?
* Is the diagnostic tool costly in terms of money or pain for the patient?
For a widespread disease (such as the non-lethal parasite Ascaris) where
treatment is cheap and relatively painless for the patient, a cheap and
simple diagnostic test is suitable. Low cost and no pain for the test and
treatment mean there's no problem if some false negatives or false
positives appear. Let's say that 50% of the population is infected. Then a
false positive rate of 2% will barely affect treatment costs. A false
negative rate of 2% will, however, leave a lot of people (1% of the
population) walking around spreading the disease.
A rare, lethal disease with a painful treatment, on the other hand,
requires a diagnostic tool with very few false positives and negatives. If
only 0.1% of the population has the disease, a false positive rate of 2%
will increase the cost and pain of treatment roughly 20-fold. A false
negative rate of 2% will "only" leave 0.002% of the population without
treatment, and 98% of the infected will be detected. This is the case
parallel to terrorism.
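The arithmetic is worth spelling out, because the asymmetry is the whole
point. A quick sketch of the reader's numbers (an illustration of the
base-rate effect, not a model of any real screening system):

def screening_outcomes(prevalence, false_pos_rate, false_neg_rate):
    """Fractions of the population correctly flagged, wrongly flagged, and missed."""
    sick = prevalence
    healthy = 1.0 - prevalence
    true_pos = sick * (1 - false_neg_rate)     # correctly flagged
    false_pos = healthy * false_pos_rate       # flagged but healthy
    missed = sick * false_neg_rate             # sick but cleared
    return true_pos, false_pos, missed

# Widespread disease: 50% prevalence, 2% error rates.
print(screening_outcomes(0.50, 0.02, 0.02))    # 1% of the population is missed

# Rare disease (the terrorism analogy): 0.1% prevalence, 2% error rates.
tp, fp, missed = screening_outcomes(0.001, 0.02, 0.02)
print(tp, fp, missed)    # the flagged group is ~21x the number of true cases;
                         # only 0.002% of the population goes untreated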
This has a tremendous impact on which methods are suitable for
diagnosing diseases (and terrorists), and I certainly wish that the
people responsible for diagnosing terrorism had studied more
epidemiology before issuing the treatment.
From: Martin Spamer
Subject: License to Hack
In regard to your comments on "License to Hack," I would like to point
out that the 'counter attacks' as proposed by RIAA/MPAA would remain
illegal in most other countries.
Indeed, this behaviour would be illegal in the UK under Section 1 of
the "The Computer Misuse Act 1990":
(1) A person is guilty of an offence if: (a) he causes a computer to
perform any function with intent to secure access to any program or data
held in any computer; (b) the access he intends to secure is unauthorised;
and (c) he knows at the time when he causes the computer to perform the
function that that is the case.
(2) The intent a person has to have to commit an offence under this section
need not be directed at: (a) any particular program or data; (b) a program
or data of any particular kind; or (c) a program or data held in any
particular computer.
(3) A person guilty of an offence under this section shall be liable on
summary conviction to imprisonment for a term not exceeding six months or
to a fine not exceeding level 5 on the standard scale or to both.
Since this UK legislation is a result of European treaty obligations,
similar legislation exists (or soon will) throughout Europe.
If the U.S. proposals are passed, as seems likely, we can look forward
to a reverse of the Dmitri Sklyarov situation with RIAA/MPAA
officials being arrested, jailed, and/or extradited around Europe.
From: "David Banes"
Subject: License to Hack
Part of the bill reads: "a copyright owner shall not be liable in
any criminal or civil action for disabling, interfering with,
blocking, diverting, or otherwise impairing the unauthorized
distribution, display, performance, or reproduction of his or her
copyrighted work on a publicly accessible peer-to-peer file trading
network, if such impairment does not, without authorization, alter,
delete, or otherwise impair the integrity of any computer file or
data residing on the computer of a file trader."
The last part is key to understanding the bill, as U.S. copyright holders
will trip themselves up if they do in fact release viruses that, "without
authorization, alter, delete, or otherwise impair the integrity of any
computer file or data residing on the computer of a file trader," because
files will be altered (log files, etc.) and executables changed if a virus
is active.
My understanding of the bill is that it allows peer-to-peer networks to be
blocked or disabled at the network level, not at the individual file
trader's computer level.
From: Marty Levy
Subject: Carnival Booth Snakepaper
Loved the last Crypto-Gram, particularly the description of M$ Pd. I
do, however, take issue with "Carnival Booth," which you described
as "really good work." The work was slightly interesting, but it
seemed to be based on at least one assumption that is seriously
flawed, and which seems to nullify the key conclusions of the paper.
This false assumption is so blatant that I have to suspect that the
authors have a political/social agenda, and I'm disappointed that
you seemed to endorse their work given that it does not stand up to
even modest scrutiny.
The authors of the paper make the assumption that by querying CAPS
and thus determining the profile of attackers who are unlikely to be
targeted, the terrorist organization can then instead prefer to use
low-profile attackers. I agree that in a world where the terrorists
truly had a random (or extremely large and diverse) population to
draw from, this technique would be viable. The authors try to
bolster the assumption that such a strategy is viable in Section 3.3
by naming five recent "terrorists" -- Lindh, Reid, Helder, Kaczynski and
McVeigh. Their assertion, based on the observation that these five
terrorists exist, is that "Terrorists clearly have no shortage
of diversity."
First of all, these five all do share at least one (and probably
more) characteristic in common -- they are all males. I don't have
age statistics handy, but I'll guess that most of them were under 40
when committing their first terroristic acts.
More importantly, the population that significant terrorist
organizations have to draw from of people willing to be arrested and
possibly die is most likely not all that diverse. Certainly, the
9/11 perpetrators had common characteristics which are also
relatively low occurrence in the general population.
Once the terrorists figure out that older women born in the USA with
non-Arabic names are less likely to be targeted by CAPS than young
men born in the Middle East with Arabic names, how will they put
that information to practical use?
The paper did come near the correct conclusion: Any competent
terrorist now knows that certain traits are more likely to garner
attention, and they will try to use and recruit people who do not
have those traits (or use subterfuge to hide those traits). For this
reason, random inspection should be used, but it should not fully
supplant targeted inspection.
I'm surprised that you didn't point out a major logical fallacy in
the paper: If terrorists can detect that ALL inspections are random,
they could then revert to reliance upon the much larger population
at their disposal (who share particular characteristics). This is a
prototypical issue in counterintelligence, and you should have
pointed it out.
This paper would have been much more useful if the authors tried to
determine how to optimize a mix between targeted and random
inspections. I am hopeful that the FAA has enlisted the help of good
statisticians to do so already.
------------------------------------------------------------------------
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on computer security and cryptography. Back
issues are available on .
To subscribe, visit or
send a blank message to crypto-gram-subscribe@chaparraltree.com. To
unsubscribe, visit .
Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as
long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of
Counterpane Internet Security Inc., the author of "Secrets and Lies" and
"Applied Cryptography," and an inventor of the Blowfish, Twofish, and
Yarrow algorithms. He is a member of the Advisory Board of the
Electronic Privacy Information Center (EPIC). He is a frequent writer
and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed
Security Monitoring. Counterpane's expert security analysts protect
networks for Fortune 1000 companies world-wide.
Copyright (c) 2002 by Counterpane Internet Security, Inc.