Copyright Michael B. Scher This document may be freely distributed by electronic media solely for non-commercial purposes. Reproduction in any form must be of the entire work including this copyright notice. Print copies (except a personal use copy) with permission of the author only. All other rights reserved.

strange(at)cultural.com

"Computer Security and Intrusion: The Technical (a sketch of things to come)"
Lecture on computer security concepts, complex system intrusion trends and predictions; given as part of the "Doing Business in a Networked World" conference at the John Marshall Law School.

January 16, 1998



According to the conference agenda, I have 15 minutes to give you the
technical side of computer security and intrusion.

Um.  Not going to happen.

Instead, I am going to give you a basic idea of what computer intrusion is
about, and then I'll describe what I see as two burgeoning areas of system
security compromise.

Compromises of computer security come, more or less, by way of exploiting
one of two kinds of weaknesses.

First, weaknesses in the very protocols used to transmit data.  And second,
weaknesses in programs -- programs that may actually be the ones implementing
the protocols, but not necessarily.

An example of a protocol weakness -- that is, a weakness in the way we
arrange and package data for sending -- would be the telnet protocol, a kind
of remote login to another machine.

It sends your password across the network in plain, clear text -- completely
visible to anyone on any machine on the local networks who cares to
"sniff" the network at either end of the communication.

As well, the data might be "sniffable" at any of the various hops in
between.  Thus, an attacker can effectively leverage this weakness into
access to your account.

Telnet was designed to work well and reliably on a badly performing, even a
failing, network, but the designers never thought about a hostile one.

The same goes for FTP -- the file transfer protocol -- and most arrangements
for POP email.
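
To make that concrete, here is a rough C sketch of what "plain, clear text"
means on the wire.  This is not the real telnet client -- the address, port,
and credentials are made up for illustration -- but the point stands: the
password crosses the network exactly as typed, and a sniffer anywhere along
the way reads it verbatim.

/* Minimal sketch (not the real telnet client): credentials written to a TCP
 * socket as plain bytes.  Anything on the path -- either LAN, or any hop in
 * between -- can read them verbatim.  Host, port, and credentials here are
 * made up for illustration. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(23);                      /* telnet port */
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr); /* example address */

    if (connect(s, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        return 1;
    }

    /* The "protection" of the password is nil: it crosses the wire exactly
     * as typed, so a sniffer at either end (or in between) sees it. */
    const char *login = "alice\r\n";
    const char *pass  = "hunter2\r\n";
    write(s, login, strlen(login));
    write(s, pass,  strlen(pass));

    close(s);
    return 0;
}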




Now, we can use encryption, or a non-reusable password system -- like S/KEY
or the popular SecurID smartcard system --
and, if done RIGHT, we can pretty much work around this weakness.
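
The S/KEY idea, sketched below, is simply a chain of hashes.  The hash here
is a toy stand-in (FNV-1a, NOT a cryptographic hash) and the secret is
invented for illustration, but the shape is right: the server stores
hash^n(secret), the user answers with hash^(n-1)(secret), and a sniffer who
captures that answer finds it worthless next time, because the server has
already moved one step down the chain.

/* Sketch of the S/KEY idea: a chain of one-time passwords built by
 * repeatedly hashing a secret.  The hash below is a toy (FNV-1a), standing
 * in for a real cryptographic hash; the secret is made up. */
#include <stdio.h>
#include <stdint.h>

static uint64_t toy_hash(uint64_t x)            /* stand-in, NOT secure */
{
    uint64_t h = 1469598103934665603ULL;        /* FNV-1a over the 8 bytes */
    for (int i = 0; i < 8; i++) {
        h ^= (x >> (8 * i)) & 0xff;
        h *= 1099511628211ULL;
    }
    return h;
}

static uint64_t chain(uint64_t secret, int n)   /* hash^n(secret) */
{
    while (n-- > 0)
        secret = toy_hash(secret);
    return secret;
}

int main(void)
{
    uint64_t secret = 0xfeedfacecafebeefULL;    /* user's secret, never sent */
    int n = 100;

    uint64_t stored  = chain(secret, n);        /* what the server keeps */
    uint64_t offered = chain(secret, n - 1);    /* what the user sends   */

    if (toy_hash(offered) == stored) {
        printf("login ok; server now stores the offered value\n");
        stored = offered;                       /* next login uses n - 2 */
    }
    return 0;
}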

So, weaknesses in protocols generally come down to their being used
under conditions, often hostile conditions, that the designers never really
thought about.  Also, the people using them usually don't give much thought
to the insecurity.

As well, a protocol meant for security could just be designed poorly for
what it is supposed to do, but this occurs mostly with poor implementations
of otherwise reasonable encryption schemes.

The second set of weaknesses is those in the programs themselves.
Weaknesses in programs tend, more or less, to be the result of the author
failing to anticipate something, or incorrectly anticipating it.

Program weaknesses tend to have more to do with someone feeding the program
DATA of a kind or quantity that the programmer didn't think to expect.
And, sometimes a program is weak when exposed to an unusual situation on
the computer itself.  Like, the disk is full.

A big concern with programs not anticipating strange conditions is that on
many systems, UNIX and Windows included, programs on that same machine draw
on a common set of prefab functions.  So a problem in one program may well
implicate a lot of programs if the problem is in one of these common
functions.

For example, Sun, in their Solaris operating system, had a bug just last
year.  One could feed a program options that were way the heck longer than
it expected.  When the program was run, it would go to spit back an error
at you about your ridiculous options...

...but, by then, the computer had stuck the garbage you gave it into a
section of memory that was meant for something smaller.  And the so-called
garbage is really the machine-language code of another program.  Because
the Sun programmers never anticipated something that large being used, they
also didn't expect the thing to exceed the bounds of that memory space.
The malicious attacker makes sure it is large enough to overflow the
allocated space right into the part of memory with the very program that
would have spit back the error -- the one about to say, "Hey that's not
right!"  But by then it's too late and the original program is partly
overwritten with program code of the attacker's own devising.

The system keeps running what it thinks of as "the program," but the
program is not the same.  The attacker has subverted it.  If the program
ran with special privileges for some reason -- probably not a bad reason --
then the user's subversive new program will have that privilege, too.  The
user could thus effectively become the system administrator, because --
let's say -- the program, which was just going to do one limited thing as
the system administrator, has been tripped up and hijacked.
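
For the curious, the heart of that kind of bug looks roughly like the C
sketch below.  This is illustrative only -- it is not Sun's actual code --
but the mistake is the same: a fixed-size buffer filled from an
attacker-controlled option with no length check.  Feed it a short option and
it behaves; feed it a few hundred bytes of carefully chosen "garbage" and it
tramples its own memory, which is exactly the toehold the attacker needs.

/* Illustrative only -- not the actual Solaris bug.  A fixed-size buffer is
 * filled from a command-line option with no length check; an option longer
 * than 64 bytes overruns the buffer and tramples adjacent memory. */
#include <stdio.h>
#include <string.h>

static void handle_option(const char *opt)
{
    char buf[64];
    strcpy(buf, opt);              /* no bounds check: the overflow is here */
    printf("unknown option: %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        handle_option(argv[1]);    /* attacker controls the length */
    return 0;
}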

Now, this kind of attack, called a "buffer overflow," represents just a
fraction of the kinds of attacks involving program weaknesses that are out
there.  Pretty much any program running with any extra privilege could --
not necessarily will, at all -- but could lead to greater access for the
attacker than the attacker previously had if the program can be made to run
into circumstances that the programmers did not anticipate.

Most attacks directed at local problems in software are much simpler than
the one in this example.  But . . . There are lots of bugs just like it
that have been announced in the last two years in lots of programs on lots
of operating systems.  Some allow a remote user to get into a local
account.  And some allow a person who has or gets into a local account to
subvert system administrator access.

You see how this might escalate.


This is where traditional heavier security measures come in -- and by
traditional, I mean, in the last six or seven years.  Firewalls and filters
act as points of control that stand between your network and the outside
world, limiting the services your machines offer to unknowns.  They're
mostly about keeping outsiders from even poking at your systems.  In a way,
they trump a lot of security problems by making it very hard if not
impossible for outsiders to feed unexpected data to buggy programs.  In my
experience, however, this situation makes a lot of firewalled companies
over-confident about security, to the point that their networks start to
resemble mollusks: hard shells on the outside, but very, very squishy inside
if the attacker can get past that line of defense.

And there are almost always other ways in.  There's the company's own
dialup server, for users with their roving laptops.  There's the user who
set up their work computer and fax modem so they could call into it from
home.  There's just plain old fast-talking access out of harried,
overworked, and underpaid staff.  So, companies should be concerned about
network and system security inside the firewall, too.  Especially
considering that most unauthorized accesses come from insiders -- insiders
who can use any kind of attack, and do it from the inside.

So, now we get onto our two growing areas of security problems, both of
which more or less let an attacker bypass firewalls -- to some degree -- and
which represent even more reason for companies to do more than just rely on
a firewall.

These two growing areas are, first, what some people are terming "passive
attacks," and second, attacks I term "complex attacks."  Now, we've seen
examples of both of these for years, but lately, interest in them, and
discovery of new varieties, has really exploded... and I would expect the use
of them to really take off soon, much in the way that the "buffer overflow"
kind of attack took off last year.  Three years ago, people were talking as
if exploiting a buffer overflow was impossible.  Now, buffer overflows are
among the most commonly exploited points of attack.

What's a passive attack?

Well, recall, I was saying that you could SEND another computer something
it never expected, or log into it with a sniffed username and password, and
then try to exploit some local problem to gain system administrator
privileges.  That's active.  The attacker is actively targeting a machine
at that point, and trying to get into it.  In a passive attack, the
attacker isn't really targeting a specific machine.

So, network sniffing itself is a kind of passive attack.  Once the attacker
has control of a machine, either by attacking it, or because they already
have the right to use that machine, say they're an employee on premises...
once they're there, they just set it up to record anything interesting that
zips across its local network, across the LAN, the ethernet, the token
ring...  you get the idea.  They later collect the data and see what
goodies are in it.  That's a pretty old passive attack.
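
The recording step itself is almost trivial.  Here is a sketch using
libpcap, the library behind tcpdump; the interface name is an assumption,
and a real attacker would of course parse the captured traffic for telnet,
FTP, and POP passwords rather than just count bytes.

/* Sketch of the "record anything interesting" step with libpcap.  It puts
 * an interface into promiscuous mode and logs the size of every frame seen
 * on the local network; a real sniffer would pull usernames and passwords
 * out of the traffic instead.  The interface name is an assumption. */
#include <stdio.h>
#include <pcap.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes\n", hdr->caplen);   /* real attack: parse it */
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* 65535-byte snapshots, promiscuous mode, 1-second read timeout */
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!p) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, -1, on_packet, NULL);            /* loop forever */
    pcap_close(p);
    return 0;
}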

The new breed that I think we'll be seeing takes an already existing server
-- let's say a web server -- to which people are already connecting, looking
at web pages.  No big deal.  But let's say their web browser --
HYPOTHETICALLY, let's say, Microsoft Internet Explorer 4.0 or 4.01 -- let's
say their browser was programmed such that IT could get data the
programmers just didn't anticipate.  Let's say it's a REALLY BIG URL -- one
people click on, but better yet, let's say, one the browser loads
automatically with an "auto-refresh" -- you've probably seen an auto-
refresh, where a page automatically shifts you to another?
Let's also say that the problem here is a "buffer overflow."  In this one,
the REALLY BIG URL, which contains a program in machine language, merrily
overruns the bounds set aside for it in Explorer, because some programmer
just figured that no one would EVER make a URL THAT BIG.  The loaded URL
plows right through part of Explorer, plows into the currently running
part, and redirects the flow of the program to run the new stuff.  The
malicious program is now what's really running, and it sets aside some
memory for itself.  Then, using routines already built into Windows, it
downloads a much larger program from the net, saves it to disk, and runs
it.
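
In code, the client-side mistake is the same one we saw in the Sun example,
just fed from a web page instead of a command line.  The sketch below is
hypothetical -- it is emphatically not Explorer's actual source -- but it
shows the shape of the bug: a URL copied into a fixed buffer on the
assumption that nobody would ever make one that big.

/* Hypothetical client-side version of the same mistake -- NOT Explorer's
 * actual code.  A URL is copied into a fixed buffer with no length check;
 * an auto-refresh to a crafted, oversized URL overruns it just like the
 * command-line option did. */
#include <string.h>

struct request {
    char url[512];                     /* "surely long enough" */
};

static void load_url(struct request *req, const char *url_from_page)
{
    strcpy(req->url, url_from_page);   /* attacker-controlled length, unchecked */
    /* ... fetch and render ... */
}

int main(void)
{
    struct request req;
    /* an auto-refresh in a crafted page would supply the oversized URL */
    load_url(&req, "http://example.com/harmless-looking-page");
    return 0;
}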

Explorer probably dies a horrible death at this point.  Or perhaps the
malicious attacker's program closes it for you.

Whatever -- Explorer chokes and quits.  I put it to you that Explorer dying
is hardly so rare that anyone would think much of it.

So, by just setting up a malicious web page on some site that employees of
Company X visit, an attacker can casually run programs of his making on
machines INSIDE of Company X's firewall.  The programs could be as simple
as viruses, or they could do something much more elaborate, like doctor the
user's system to save or send off plain text copies of any password they
type; they can copy off files, they can sniff the network inside the
firewall, whatever.

Those of you who've made web sites realize, too, that with a modern web
server, our attacker could set things up so ONLY users of the vulnerable
kind of web browser who connect to the site would get the dangerous data.
Anybody else would just get something innocuous.  The attacker could
further refine it so ONLY connections from particular domains using a
vulnerable browser would get the bad data.  So it could be fairly specific,
targeting only one company, and otherwise go unnoticed for quite some time.
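
Here is a sketch of that selective serving, written as a small CGI program.
The User-Agent string and the target domain checked for are made-up
examples, and a real attacker might do the same filtering in the server's
own configuration, but the idea is just a couple of string comparisons.

/* Sketch of selective serving as a CGI program.  Only a visitor whose
 * User-Agent marks a vulnerable browser, coming from the targeted domain,
 * gets the booby-trapped page; everyone else sees something innocuous.
 * The browser string and domain below are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *agent = getenv("HTTP_USER_AGENT");   /* standard CGI variable */
    const char *host  = getenv("REMOTE_HOST");       /* may be unset on some servers */

    int vulnerable = agent && strstr(agent, "MSIE 4.0") != NULL;
    int targeted   = host  && strstr(host, ".companyx.com") != NULL;  /* made-up domain */

    printf("Content-Type: text/html\r\n\r\n");
    if (vulnerable && targeted)
        printf("<html><!-- page carrying the oversized, booby-trapped URL --></html>\n");
    else
        printf("<html><body>Nothing to see here.</body></html>\n");
    return 0;
}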

An attacker would obviously do very well to set up this kind of passive
trap on any very popular web site.

I don't know if you read about it, but one of Yahoo's web servers was
hacked not long ago.  It turned out the intruders only doctored the page
that users of older and text-only browsers got.  But what's really
interesting is that the attackers claimed that viewers of the page had, by
just going there, exposed their computers to a "logic bomb."

Fortunately, although the break-in at Yahoo seems to have been real, the
alleged passive attack of planting a logic bomb there was just a hoax.  But
as you can see, the IDEA of the passive attack has been on peoples' minds.
And, with this Wednesday's announcement of the buffer overflow in Explorer,
the old adage that you can't get a virus by just browsing the web has gone
out the window.  Fortunately, this particular passive attack is NOT easy to
do.  Eventually, though, some clever soul will release a ready to roll
version for the less-clueful.

That was passive attacks.

Our other trend is something I call complex attacks.  A complex attack
would be one in which more than one computer has to be -- actively -- messed
with.   So I don't mean someone just gaining information from breaking into
one computer and using it to break into another.  It's more like the person
has to do something impolite, and probably illegal, to two or more computer
systems in order to really attack just one of them.

A famous example was Kevin Mitnick's break-in to Tsutomu Shimomura's
computer.  Mitnick could tell that someone was logged into Shimomura's
machine, but had been sitting idle for quite a long time.  The idea was to
first silence the machine from which the person was logged in to
Shimomura's.  Then a program would send, over and over, just enough of a
typed command to backdoor the account the user had been logged into,
claiming all the while to be the now unconscious computer.  There was some
weak authentication, consisting of a number that had to be guessed, but
with the real machine unconscious, the program had plenty of time.  The
number, which people had considered a kind of authentication for some time,
wasn't originally intended to be authentication, but just quality control.

So eventually, the program Mitnick was using got a right number, tossed in
just enough data to backdoor that user account, and finished.  Eventually
the gagged client machine would recover.  But that didn't matter anymore,
because now, Mitnick could just log straight into the target machine.

Another complex attack involves the Internet's Domain Name System, which is
the huge, hierarchical, distributed set of . . . well . . . databases that
contain the translations between the names of machines -- like
www.something.com -- that people use, and the not really memorable numbers
like 204.34.157.98 that thankfully, only the machines use.  See, most
computers on the Net defer to some machine close to them to ask for these
kinds of lookups.  If you're a dialup user, it's probably your ISP's name
server.

Well, instead of having to go out and ask around the net for the answer to
something over and over, these name servers will cache the replies for a
while.  There is a weakness in the protocol used for the basic Domain Name
lookup conversation.  The weakness is that a malicious person can fairly
easily convince a name server that a forged answer sent by an attacker came
from the real authoritative name server for that domain; once again, it's
just a number guess.  This becomes especially easy if, again, the attacker
has just done something to beat the machine he's pretending to be into
silence.

The machine that's just been tricked caches the bogus information for a
while, and hands it around to anyone asking for the real information.
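
Stripped to its essentials, the "number guess" is a sixteen-bit query ID.
The sketch below only builds bare DNS headers and cycles through every
possible ID; a real forgery would also spoof the authoritative server's
source address and carry a question and a bogus answer record, none of which
is shown here, and the addresses and port are invented for illustration.

/* Simplified sketch of the number-guessing at the heart of cache
 * poisoning.  A DNS reply is matched to its query mainly by a 16-bit ID,
 * so an attacker who floods a resolver with forged answers can simply try
 * every ID.  Only bare 12-byte headers are sent here; source-address
 * spoofing and the bogus answer itself are omitted.  Addresses are made up. */
#include <stdint.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

struct dns_header {                 /* the 12-byte fixed DNS header */
    uint16_t id;                    /* the number being guessed */
    uint16_t flags;
    uint16_t qdcount, ancount, nscount, arcount;
};

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return 1;

    struct sockaddr_in victim = {0};          /* the resolver being poisoned */
    victim.sin_family = AF_INET;
    victim.sin_port   = htons(53);            /* assumption about where it listens */
    inet_pton(AF_INET, "192.0.2.53", &victim.sin_addr);

    for (uint32_t guess = 0; guess < 65536; guess++) {
        struct dns_header h = {0};
        h.id      = htons((uint16_t)guess);
        h.flags   = htons(0x8400);            /* reply + "authoritative answer" bits */
        h.ancount = htons(1);                 /* claims one answer (not included here) */

        sendto(s, &h, sizeof h, 0,
               (struct sockaddr *)&victim, sizeof victim);
    }
    close(s);
    return 0;
}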

Now, that's called "corrupting cache information," and it's more or less
what Eugene Kashpureff of the Alternic did to the Internic this past
July.  He faked out hundreds, perhaps thousands, of name servers around the
world into caching the ALTERNIC'S numeric address as the right response
for when anyone looked up www.internic.net, effectively redirecting all web
traffic for the Internic over to the Alternic's web site.  He did this as a
"protest" against the Internic's name service monopoly.  The Alternic was a
competing startup enterprise in which he, by the way, had a large stake.

Ok, so:
Passive attack: a buffer overflow in a client program like Internet
Explorer.
Complex attack: name server cache corruption.


This leads me to what I'll call today's "ultimate combination-of-all-the-
above house of horrors attack."

Here we go.

Some malicious attacker breaks into a poorly-secured, more or less idle and
unused machine on the Internet, probably some out of the way UNIX machine
that nobody has done any security work on in a long time.  He sets up on
this machine an evil webserver like we talked about.
Then, he further sets it up so it only sends the evil data to people
running a vulnerable web browser.  So, not everybody gets the deadly
garbage.  Then, he runs up a program on another hacked machine which runs
around and corrupts the caches of hundreds and hundreds of
important-seeming companies' name servers.  And what site does he fake them
out
about?  He picks someplace really popular -- Let's say Yahoo... and he makes
the faked-out servers redirect web traffic for Yahoo to his dangerous
server.  Thousands of systems could be affected before anyone notices, and
firewalls that let the employees browse the Web wouldn't do a thing to slow
it down.


So, to cap.

Security holes exist in protocols and in programs, mostly as the result of
1. using them in environments for which they were not intended
or
2. casually accepting data of some kind that the programmers never
anticipated receiving
or
3. errors that weaken otherwise strong programs and protocols

Passive and complex attacks can be used to get at systems behind firewalls
-- in many cases, as if the firewall were not present.  There are, besides
the few I have mentioned, some more passive and complex attacks that are
widely known, and to which many systems remain vulnerable.  Moreover, a
large number of these are likely to be found over the next few years,
particularly as our systems become increasingly interdependent and software
and protocols are added at a rate driven by rather furious competition.

What can you do?  As we've heard a lot of people say:  Policy policy
policy.   If you don't want to restrict what employees can do on the Net,
at least maybe require that they use only a certain suite of software, so
that when a hole is discovered, upgrading enterprise-wide and searching the
machines for any signs of naughtiness can be done in an organized way.

And do security audits and regular security upgrades, to help ensure that
should some machines inside the firewall become compromised, attackers
cannot get at everything else behind it.