Discussion:
compressed content-transfer-encoding?
Mark Horton
1999-07-27 14:56:14 UTC
Is there an RFC (or movement toward one) for a compressed encoding
within MIME?

There seems to be a lot of interest lately in compressing attachments.
End users are encouraged to zip Office files before mailing them,
and at least one product (MaxCompression) does this automatically
(in a proprietary way.) Even with modem compression, there seems to
be some gain from this, and the disk storage implications on the mail
server are also interesting.

It seems that a new MIME standard for content-transfer-encoding that
would indicate a compressed base64 type ala gzip could be nice.
Creative minds might even improve the efficiency of base64 at the
same time, if we don't have to worry about translation into EBCDIC
anymore.
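For illustration, the transformation such an encoding might perform (gzip per RFC 1952, then base64 for 7-bit transport) can be sketched with Python's standard library; the "gzip64" name and the sample payload are made up for the sketch:

```python
import base64
import gzip

def encode_gzip64(data: bytes) -> bytes:
    # Hypothetical "compressed base64" encoding: gzip (RFC 1952) first,
    # then base64 so the result survives 7-bit transport.
    return base64.b64encode(gzip.compress(data))

def decode_gzip64(encoded: bytes) -> bytes:
    return gzip.decompress(base64.b64decode(encoded))

body = b"hello " * 1000  # a highly redundant payload
wire = encode_gzip64(body)
assert decode_gzip64(wire) == body
# Redundant data ends up far smaller than plain base64 would leave it.
assert len(wire) < len(base64.b64encode(body))
```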

Mark
Jacob Palme
1999-07-27 18:42:35 UTC
At 10.56 -0400 99-07-27, Mark Horton wrote:
>Is there an RFC (or movement toward one) for a compressed encoding
>within MIME?

HTTP has compression headers. We should find out if and how much they
are used, and why. HTTP needs compression even more than e-mail,
because people are waiting for documents to show up in real time. If
compression has not been successful in HTTP, will it be in e-mail
where the need is less than in HTTP?

--- --- Excerpts from the HTTP/1.1 specification, RFC 2616 --- ---

content-coding = token

All content-coding values are case-insensitive. HTTP/1.1 uses
content-coding values in the Accept-Encoding (section 14.3) and
Content-Encoding (section 14.11) header fields. Although the value
describes the content-coding, what is more important is that it
indicates what decoding mechanism will be required to remove the
encoding.

The Internet Assigned Numbers Authority (IANA) acts as a registry for
content-coding value tokens. Initially, the registry contains the
following tokens:

gzip An encoding format produced by the file compression program
"gzip" (GNU zip) as described in RFC 1952 [25]. This format is a
Lempel-Ziv coding (LZ77) with a 32 bit CRC.

compress
The encoding format produced by the common UNIX file compression
program "compress". This format is an adaptive Lempel-Ziv-Welch
coding (LZW).

Use of program names for the identification of encoding formats
is not desirable and is discouraged for future encodings. Their
use here is representative of historical practice, not good
design. For compatibility with previous implementations of HTTP,
applications SHOULD consider "x-gzip" and "x-compress" to be
equivalent to "gzip" and "compress" respectively.

deflate
The "zlib" format defined in RFC 1950 [31] in combination with
the "deflate" compression mechanism described in RFC 1951 [29].

identity
The default (identity) encoding; the use of no transformation
whatsoever. This content-coding is used only in the Accept-
Encoding header, and SHOULD NOT be used in the Content-Encoding
header.

New content-coding value tokens SHOULD be registered; to allow
interoperability between clients and servers, specifications of the
content coding algorithms needed to implement a new value SHOULD be
publicly available and adequate for independent implementation, and
conform to the purpose of content coding defined in this section.

------------------------------------------------------------------------
Jacob Palme <***@dsv.su.se> (Stockholm University and KTH)
for more info see URL: http://www.dsv.su.se/~jpalme
Larry Masinter
1999-07-27 18:57:34 UTC
> Is there an RFC (or movement toward one) for a compressed encoding
> within MIME?
>
> There seems to be a lot of interest lately in compressing attachments.
> End users are encouraged to zip Office files before mailing them,
> and at least one product (MaxCompression) does this automatically
> (in a proprietary way.) Even with modem compression, there seems to
> be some gain from this, and the disk storage implications on the mail
> server are also interesting.
>
> It seems that a new MIME standard for content-transfer-encoding that
> would indicate a compressed base64 type ala gzip could be nice.
> Creative minds might even improve the efficiency of base64 at the
> same time, if we don't have to worry about translation into EBCDIC
> anymore.

HTTP (RFC 2616) added a different transformational layer
(Content-Encoding) to avoid combinatorial explosion with different
transfer-encodings (transfer-encodings were also kept distinct
from content-transfer-encodings). Valid content-coding tokens
include "gzip", "compress" and "deflate".

It might be possible to extend Content-Encoding to work with mail.

And if you want to avoid Base64 in mail, well, there's BINARYMIME
(RFC 1830).
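Two of those content-coding tokens map directly onto Python's standard library, as this sketch shows; the LZW-based "compress" format has no stdlib equivalent:

```python
import gzip
import zlib

payload = b"The quick brown fox jumps over the lazy dog. " * 200

# "gzip" content-coding: RFC 1952 framing around DEFLATE.
assert gzip.decompress(gzip.compress(payload)) == payload

# "deflate" content-coding: RFC 1950 zlib framing around RFC 1951 DEFLATE.
assert zlib.decompress(zlib.compress(payload)) == payload

# Both codings shrink redundant text considerably.
assert len(zlib.compress(payload)) < len(payload) // 10
```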

Larry
--
http://www.parc.xerox.com/masinter
Bonatti, Chris
1999-07-28 00:34:53 UTC
Congratulations. We've reinvented the OSI Presentation layer. :-|

Kidding aside. This requirement (?) has also been discussed in the context
of X.400, but it has never motivated anybody enough to make it happen. My
feeling is that RFC 2616 took basically the right approach, though.

What we are sorely in need of are some standardized IETF middleware
applications that do things like compression, digital signature, etc. for
any client application. There are a number of good models for this that
could be drawn on. However, I don't think that this will happen until a
critical mass of people get fed up with solving every problem n times (for n
applications).

Chris


--------------------------------
| International Electronic |
| Communication Analysts, Inc. |
| Christopher D. Bonatti |
| Principal Engineer |
| ***@ieca.com |
| Tel: 301-208-2349 |
--------------------------------



___________________

Larry Masinter wrote:

> > Is there an RFC (or movement toward one) for a compressed encoding
> > within MIME?
> >
> > There seems to be a lot of interest lately in compressing attachments.
> > End users are encouraged to zip Office files before mailing them,
> > and at least one product (MaxCompression) does this automatically
> > (in a proprietary way.) Even with modem compression, there seems to
> > be some gain from this, and the disk storage implications on the mail
> > server are also interesting.
> >
> > It seems that a new MIME standard for content-transfer-encoding that
> > would indicate a compressed base64 type ala gzip could be nice.
> > Creative minds might even improve the efficiency of base64 at the
> > same time, if we don't have to worry about translation into EBCDIC
> > anymore.
>
> HTTP (RFC 2616) added a different transformational layer
> (Content-Encoding) to avoid combinatorial explosion with different
> transfer-encodings (transfer-encodings were also kept distinct
> from content-transfer-encodings). Valid content-coding tokens
> include "gzip", "compress" and "deflate".
>
> It might be possible to extend Content-Encoding to work with mail.
>
> And if you want to avoid Base64 in mail, well, there's BINARYMIME
> (RFC 1830).
>
> Larry
> --
> http://www.parc.xerox.com/masinter
Jacob Palme
1999-07-29 09:22:45 UTC
At 11.57 -0700 99-07-27, Larry Masinter wrote:
>HTTP (RFC 2616) added a different transformational layer
>(Content-Encoding) to avoid combinatorial explosion with different
>transfer-encodings (transfer-encodings were also kept distinct
>from content-transfer-encodings). Valid content-coding tokens
>include "gzip", "compress" and "deflate".

After thinking more about this, I have come to the conclusion
that compression should be automatic and supported by the
standards. The reason for this is that this will make things
easier for the users. The users need not ever see that information
is compressed during transmission.

The issue is whether compression should be done in the application
layer, with special compression headers, etc., or in lower layers.

For example, an e-mail message is usually forwarded through
at least two store-and-forward MTAs:

Original sender -> Local MTA for sender -> Local MTA for recipient -> Final recipient

With compression in the application layer, an attachment will
be compressed by the original sender, and not uncompressed
again until it reaches the final recipient.

With compression in the transport layer, the attachment
will be compressed and uncompressed for each store-and-forward
step.
------------------------------------------------------------------------
Jacob Palme <***@dsv.su.se> (Stockholm University and KTH)
for more info see URL: http://www.dsv.su.se/~jpalme
Keith Moore
1999-07-28 19:03:23 UTC
> It seems that a new MIME standard for content-transfer-encoding that
> would indicate a compressed base64 type ala gzip could be nice.
> Creative minds might even improve the efficiency of base64 at the
> same time, if we don't have to worry about translation into EBCDIC
> anymore.

it's been discussed many times; afaik the biggest problem is that nobody
has bothered to write up a concrete proposal. the second biggest problem,
of course, is that it would break lots of existing software.

Keith
Bonatti, Chris
1999-07-28 19:35:53 UTC
Keith,

Actually, I suspect that you have it backwards because there is a dependency
involved. I think that a lot of folks would be willing to write such a
proposal (myself included) except for the second problem. ;-) We are victims
of our own success. We can't afford to think of better approaches because we
can't afford to have a big departure for Web:TNG or SMTP:TNG.

Chris


__________________

Keith Moore wrote:

> > It seems that a new MIME standard for content-transfer-encoding that
> > would indicate a compressed base64 type ala gzip could be nice.
> > Creative minds might even improve the efficiency of base64 at the
> > same time, if we don't have to worry about translation into EBCDIC
> > anymore.
>
> it's been discussed many times; afaik the biggest problem is that nobody
> has bothered to write up a concrete proposal. the second biggest problem,
> of course, is that it would break lots of existing software.
>
> Keith
V***@vt.edu
1999-07-28 19:54:57 UTC
On Wed, 28 Jul 1999 15:03:23 EDT, Keith Moore said:
> it's been discussed many times; afaik the biggest problem is that nobody
> has bothered to write up a concrete proposal. the second biggest problem,
> of course, is that it would break lots of existing software.

That, and the fact that currently, the average text/plain being sent
around is relatively small (2-3K or so) and won't be a BIG win (you can't
save more than 3 K, and that's only 2 packets on an Ethernet ;)

The things that chew up the bandwidth are things like .GIF, .JPG,
etc. attachments, which usually tend to have some compression already
done on them. Now, if you have some big spreadsheets from some big
company that specializes in bloatware, perhaps the right thing to do
is convince them to make it take less disk space. Yes, disk is cheap,
but that hardly justifies intentional waste....

Has anybody done any studies at all on whether said compression would
actually *win* us enough to be worth it? As a data point, my MH folders
live on a compressed file system (each 4K block is LZ-compressed individually),
and take about 110M compressed and 198M uncompressed. *HOWEVER*, a *very*
large chunk of that is Received: headers and the like, which would NOT
be compressible...

If I get ambitious tonight, I'll see what the compression of the bodyparts
ends up being..
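The experiment Valdis describes can be sketched with the standard library; the sample headers and body here are made up, with random-looking ids standing in for the poorly compressible parts of Received: lines:

```python
import random
import zlib

random.seed(1)

def fake_received() -> bytes:
    # Made-up trace header with a random-looking id (poorly compressible).
    ident = "".join(random.choices("0123456789abcdef", k=32))
    return b"Received: from mx.example.com; id " + ident.encode() + b"\r\n"

headers = b"".join(fake_received() for _ in range(20))
body = b"Quarterly figures attached; please review before Friday.\r\n" * 50

def saved(data: bytes) -> float:
    """Fraction of the original size removed by zlib compression."""
    return 1 - len(zlib.compress(data)) / len(data)

# Repetitive body text compresses far better than the random-id headers,
# illustrating why Received: lines dilute the overall win.
assert saved(body) > saved(headers)
print(f"headers save {saved(headers):.0%}, body saves {saved(body):.0%}")
```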


--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
Tim Kehres
1999-07-28 19:26:55 UTC
>> It seems that a new MIME standard for content-transfer-encoding that
>> would indicate a compressed base64 type ala gzip could be nice.
>> Creative minds might even improve the efficiency of base64 at the
>> same time, if we don't have to worry about translation into EBCDIC
>> anymore.
>
>it's been discussed many times; afaik the biggest problem is that nobody
>has bothered to write up a concrete proposal. the second biggest problem,
>of course, is that it would break lots of existing software.


Just curious - has the idea of doing on the fly compression at the ESMTP
level ever been considered? This would have the advantage of not breaking
any of the upper layers, and would only be enabled between MTA's with the
capability. It would be tough on the MTA's, but with CPU and disk speeds
increasing, it might be feasible. Anyway, just a thought...

Best Regards,

Tim Kehres
International Messaging Associates
http://www.ima.com
Keith Moore
1999-07-28 19:35:23 UTC
> Just curious - has the idea of doing on the fly compression at the ESMTP
> level ever been considered?

aarrgh. SMTP is already complex enough. the last thing I want to see
is more complex MTAs adding more failure cases.
Ned Freed
1999-07-28 19:52:33 UTC
> > > It seems that a new MIME standard for content-transfer-encoding that
> > > would indicate a compressed base64 type ala gzip could be nice.
> > > Creative minds might even improve the efficiency of base64 at the
> > > same time, if we don't have to worry about translation into EBCDIC
> > > anymore.

> > it's been discussed many times; afaik the biggest problem is that nobody
> > has bothered to write up a concrete proposal. the second biggest problem,
> > of course, is that it would break lots of existing software.

> Just curious - has the idea of doing on the fly compression at the ESMTP
> level ever been considered?

Yes, but given the existence of the TLS SMTP extension, which can
negotiate compression, why bother defining another mechanism?

> This would have the advantage of not breaking
> any of the upper layers, and would only be enabled between MTA's with the
> capability. It would be tough on the MTA's, but with CPU and disk speeds
> increasing, it might be feasible. Anyway, just a thought...

Compression is a nonissue compared to crypto. But in general we need crypto a
lot more than we need to save bandwidth...

Ned
M Horton
1999-07-28 20:13:28 UTC
> Just curious - has the idea of doing on the fly compression at the ESMTP
> level ever been considered?

I think there are two major problems this proposal tries to solve:

(1) Big attachments take forever to download over a slow link with
POP or IMAP, or to submit with SMTP.

(2) Big attachments take up a lot of disk space on the mail server
or in mail folders where you save them.

Enhancing SMTP doesn't solve either problem.

> We need a deployment mechanism first. And that's precisely what we're
> trying to get through RESCAP.

Not being familiar with RESCAP, do you have a URL for it?

Some ideas for deployment:

(1) Provide a free tool that will convert the compressed format to
the uncompressed format, open source and also compiled for the
various platforms.

(2) Put up some mail relays on the net. You forward your message to
a mail relay - it returns it to you with uncompressed attachments.

(3) Have a phased deployment of clients. We choose a date, say, 2
years off, by when we expect most clients will be upgraded. Each
client has a switch setting for whether to compress binary attachments,
with 3 settings:
Always compress
Never compress
Compress if the send date is after <the chosen date>
The default setting could be the third one.

In addition, the Compose window should let you override the default
setting for a specific message.
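The three-setting switch plus the Compose-window override amounts to a small piece of decision logic; in this sketch the policy names and the 2001 cutover date ("say, 2 years off" from mid-1999) are illustrative:

```python
from datetime import date

# Illustrative cutover date for the third setting.
CUTOVER = date(2001, 8, 1)

def should_compress(policy, send_date, per_message_override=None):
    """Mark's three-way switch, plus the per-message Compose override."""
    if per_message_override is not None:
        return per_message_override
    if policy == "always":
        return True
    if policy == "never":
        return False
    # Third setting: compress once the send date passes the chosen date.
    return send_date > CUTOVER

assert should_compress("after-date", date(1999, 7, 28)) is False
assert should_compress("after-date", date(2002, 1, 1)) is True
assert should_compress("never", date(2002, 1, 1), per_message_override=True) is True
```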

> That, and the fact that currently, the average text/plain being sent
> around is relatively small (2-3K or so) and won't be a BIG win (you can't
> save more than 3 K, and that's only 2 packets on an Ethernet ;)
>
> The things that chew up the bandwidth are things like .GIF, .JPG,
> etc. attachments, which usually tend to have some compression already
> done on them. Now, if you have some big spreadsheets from some big
> company that specializes in bloatware, perhaps the right thing to do
> is convince them to make it take less disk space. Yes, disk is cheap,
> but that hardly justifies intentional waste....

I would think one would want a setting in the client saying "only
compress attachments if they are larger than ___ KB." My experience
is that MS Office files are the usual large attachments that cause
problems. While I am hopeful that Microsoft will eventually have a
reasonably small default Office format, their web page says Office 2000
files are the same format as Office 97, and that their HTML versions
are actually larger than the proprietary format.

> Has anybody done any studies at all on whether said compression would
> actually *win* us enough to be worth it? As a data point, my MH folders
> live on a compressed file system (each 4K block is LZ-compressed
> individually),
> and takes about 110M compressed and 198M uncompressed. *HOWEVER*, a *very*
> large chunk of that is Received: headers and the like, which would NOT
> be compressible...

I tried compressing (with WinZip, which seems to be universally available
on PCs) a lot of large Office files I have around. The results were
spectacular - 90% compression was common. I really had to hunt to find
files that compressed to the expected 50%.

I would guess that GZip would do even better.
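Mark's 90% figure is plausible because the Office formats of the era were full of repeated record structures. A synthetic illustration with zlib (the DEFLATE library behind gzip); the "record" layout here is entirely made up:

```python
import zlib

# Synthetic stand-in for a redundant Office file: repeated record structures.
record = b"\x00\x01CELL" + b"Quarterly Revenue Forecast" + b"\x00" * 32
fake_office_file = record * 2000  # ~128 KB of highly repetitive data

compressed = zlib.compress(fake_office_file, level=9)
saving = 1 - len(compressed) / len(fake_office_file)
print(f"{saving:.0%} saved")  # repetitive structure yields a 90%+ saving
assert saving > 0.9
```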

Mark
V***@vt.edu
1999-07-29 15:18:05 UTC
On Wed, 28 Jul 1999 16:13:28 EDT, M Horton <***@lucent.com> said:
> (3) Have a phased deployment of clients. We choose a date, say, 2
> years off, by when we expect most clients will be upgraded. Each
> client has a switch setting for whether to compress binary attachments,
> with 3 settings:
> Always compress
> Never compress
> Compress if the send date is after <the chosen date>
> The default setting could be the third one.

Gaak. *NO*.

I found a Sendmail 5.65 on our campus this month.

Think about that. ;)

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
Tony Hansen
1999-07-28 20:24:48 UTC
Tim Kehres wrote:
>
> >> It seems that a new MIME standard for content-transfer-encoding that
> >> would indicate a compressed base64 type ala gzip could be nice.
> >> Creative minds might even improve the efficiency of base64 at the
> >> same time, if we don't have to worry about translation into EBCDIC
> >> anymore.
> >
> >it's been discussed many times; afaik the biggest problem is that nobody
> >has bothered to write up a concrete proposal. the second biggest problem,
> >of course, is that it would break lots of existing software.
>
> Just curious - has the idea of doing on the fly compression at the ESMTP
> level ever been considered? This would have the advantage of not breaking
> any of the upper layers, and would only be enabled between MTA's with the
> capability. It would be tough on the MTA's, but with CPU and disk speeds
> increasing, it might be feasible. Anyway, just a thought...

I've heard proposals to possibly do this through a SASL layer.

Tony Hansen
***@att.com
Timothy L Martin
1999-07-28 21:50:32 UTC
> > capability. It would be tough on the MTA's, but with CPU and disk speeds
> > increasing, it might be feasible. Anyway, just a thought...
>
> I've heard proposals to possibly do this through a SASL layer.

The main issue with this is that current protocols that support SASL
only allow for one SASL layer at a time, so you could get either
compression or Kerberos, not both. I still think this could be useful
in certain circumstances and would be willing to implement it if
anyone were to write a SASL compression draft.
Tim Kehres
1999-07-28 19:49:10 UTC
>> Just curious - has the idea of doing on the fly compression at the ESMTP
>> level ever been considered?
>
>aarrgh. SMTP is already complex enough. the last thing I want to see
>is more complex MTAs adding more failure cases.


I was thinking in terms of an ESMTP extension. In any event, by your own
admission, doing it as a MIME type would break a lot of software at the end
points. At least this way it is only attempted between mutually consenting
MTA's and is transparent to the end users.

Best Regards,

Tim Kehres
International Messaging Associates
http://www.ima.com
Keith Moore
1999-07-28 19:58:10 UTC
> >aarrgh. SMTP is already complex enough. the last thing I want to see
> >is more complex MTAs adding more failure cases.
>
>
> I was thinking in terms of an ESMTP extension. In any event, by your own
> admission, doing it as a MIME type would break a lot of software at the end
> points. At least this way it is only attempted between mutually consenting
> MTA's and is transparent to the end users.

an MTA that transparently corrupts your mail isn't necessarily a good thing.

note also that compression algorithms tend to be specific to particular
kinds of content - what works well with text or object code typically
doesn't work well with images, audio, or video.

Keith
Jacob Palme
1999-07-29 12:07:41 UTC
We already have a standard for sending compressed data in e-mail.
We have an IANA registered content-type application/zip.

What more is needed? That mailers start using it?
Is there a need for an attribute to "Content-Type:application/zip"
with the name "Uncompressed-Content-Type"?
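Jacob's proposed parameter (not IANA-registered; the name and values below are purely illustrative) could be attached to a bodypart like this with Python's email library:

```python
from email.message import Message

# Sketch of the proposed (hypothetical) parameter: a zip bodypart that
# still advertises what the uncompressed content was.
part = Message()
part.set_type("application/zip")
part.set_param("uncompressed-content-type", "application/msword")
part.add_header("Content-Disposition", "attachment", filename="report.zip")

print(part["Content-Type"])
# e.g. application/zip; uncompressed-content-type="application/msword"
assert part.get_param("uncompressed-content-type") == "application/msword"
```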
------------------------------------------------------------------------
Jacob Palme <***@dsv.su.se> (Stockholm University and KTH)
for more info see URL: http://www.dsv.su.se/~jpalme
Keith Moore
1999-07-29 14:28:37 UTC
> We already have a standard for sending compressed data in e-mail.
> We have an IANA registered content-type application/zip.

application/zip is not a standard.
V***@vt.edu
1999-07-29 15:22:56 UTC
On Thu, 29 Jul 1999 10:28:37 EDT, Keith Moore said:
> > We already have a standard for sending compressed data in e-mail.
> > We have an IANA registered content-type application/zip.
>
> application/zip is not a standard.

Not only is it not a standard, but it loses in the translation.

Let's say I start with a Microsoft Word document. Currently, I
can attach that as an application/msword, and at the remote end, the MUA
will be able to intuit proper handling of the *actual file* from that.

An application/zip loses that. It basically renders any object
as unidentifiable as an application/octet-stream.

There's also a security issue, in that I can be confident that a MIME
bodypart that is an application/msword is *one* file, the name of which
I can either predict and check, or control.

It's a lot harder to make *real* sure that you're not unzipping things
you didn't want to.
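The precaution Valdis is pointing at, inspecting a zip's member list before extracting anything, can be sketched with Python's zipfile module; the member names here are invented:

```python
import io
import zipfile

# Unlike a single application/msword part, a zip can hide arbitrary names;
# listing the members before extraction is the minimum precaution.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("report.doc", b"expected file")
    zf.writestr("../../etc/evil", b"surprise path")  # the kind of thing to catch

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    names = zf.namelist()

assert "report.doc" in names
# Reject members with path traversal or absolute paths before extracting.
suspicious = [n for n in names if n.startswith("/") or ".." in n]
assert suspicious == ["../../etc/evil"]
```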

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
Jacob Palme
1999-07-30 19:25:58 UTC
At 10.28 -0400 99-07-29, Keith Moore wrote:
> > We already have a standard for sending compressed data in e-mail.
> > We have an IANA registered content-type application/zip.
>
>application/zip is not a standard.

It depends on the definition of a standard. If you define a
standard the way RFC 2026 defines "Internet Standard", then
it is not a standard. But then neither ASCII nor Unicode
nor HTML 4.0 is a standard!

And not even HTTP is a standard, it is only a draft standard!

I would prefer that standards developed by IETF are labelled
IETF standards and not Internet standards, since the term
"Internet standard" may wrongly give the impression that
these are the only standards for the Internet.

My definition of a standard is "a common format or protocols
used by many different interworking products from different
vendors". With that definition of "standard", certainly
application/zip is a standard.
------------------------------------------------------------------------
Jacob Palme <***@dsv.su.se> (Stockholm University and KTH)
for more info see URL: http://www.dsv.su.se/~jpalme
Keith Moore
1999-07-30 19:47:46 UTC
> My definition of a standard is "a common format or protocols
> used by many different interworking products from different
> vendors".

even so, I don't think zip qualifies. it's common, but
certainly not ubiquitous, and it's mostly used on a single
vendor's platforms.

Keith
Kai Henningsen
1999-07-31 10:40:00 UTC
***@cs.utk.edu (Keith Moore) wrote on 30.07.99 in <***@astro.cs.utk.edu>:

> > My definition of a standard is "a common format or protocols
> > used by many different interworking products from different
> > vendors".
>
> even so, I don't think zip qualifies. it's common, but
> certainly not ubiquitous, and it's mostly used on a single
> vendor's platforms.

Well, it's difficult finding a platform zip has not been ported to. OTOH,
I have no idea how widespread MIME support for application/zip is.

As a data point, it's in my /etc/mime.types (and, incidentally,
/etc/mime-magic). That's a Debian GNU/Linux system.

And ISTR that the first commercial vendor to ship their OS with an
unzipping lib was IBM (OS/2), and that M$ has still not done so.

MfG Kai
Keith Moore
1999-07-31 15:26:08 UTC
> > even so, I don't think zip qualifies. it's common, but
> > certainly not ubiquitous, and it's mostly used on a single
> > vendor's platforms.
>
> Well, it's difficult finding a platform zip has not been ported to.

indeed, you can find a zip program for almost any platform in existence.
but that doesn't mean it's widely used on more than one of those platforms.

(similar things could be said for most compression/archive formats)

but the real problem with the application/zip hack is not the format,
it's that it's a very lame way of adding compression to MIME.

Keith
Larry Masinter
1999-07-31 16:01:00 UTC
> but the real problem with the application/zip hack is not the format,
> it's that it's a very lame way of adding compression to MIME.

Very cogent argument. Perhaps you meant to say that using the content-type
to indicate compression format hides the actual content.

These days, anti-virus scanners seem to have a feature for 'scan ZIP
files too', so, even though it's lame, the auxiliary mechanisms seem
to be making their way through the infrastructure.

And Java ships with Zip-file 'file system' support.

Even though application/zip is lame, other mechanisms might be
worse. Maybe we should look harder at providing what would be
necessary to make 'application/zip' actually work, e.g., some
top-level indication of what's actually in the package?

Larry
Keith Moore
1999-07-31 16:02:36 UTC
> Even though application/zip is lame, other mechanisms might be
> worse. Maybe we should look harder at providing what would be
> necessary to make 'application/zip' actually work, e.g., some
> top-level indication of what's actually in the package?

this has the same deployment barrier as adding a new
content-transfer-encoding - in either case the mime mail
reader needs to know what to do with the extra parameter
or the new c-t-e.
Larry Masinter
1999-07-31 16:44:44 UTC
> > Even though application/zip is lame, other mechanisms might be
> > worse. Maybe we should look harder at providing what would be
> > necessary to make 'application/zip' actually work, e.g., some
> > top-level indication of what's actually in the package?
>
> this has the same deployment barrier as adding a new
> content-transfer-encoding - in either case the mime mail
> reader needs to know what to do with the extra parameter
> or the new c-t-e.

I'm not sure this is true. Unaware MIME mailers will just
see they have application/zip, that they either recognize
or not, and users of existing MIME mail programs have a simple
way of configuring their mail reader to do something
sensible with it.

Adding a new content-transfer-encoding has a more serious
deployment problem, because most deployed systems don't have
any kind of extensibility built in for CTE.

Mailing around zip files is common practice. Many people are
used to dealing with getting a zip file. Leveraging this
just means making the common practice easier to accomplish
for senders, and less awkward for receivers; it's a product
enhancement that mail client vendors could add today.

In the menu for 'attach file', it could just have a check
box for 'send zipped', for example, and a setting for making
'send zipped' the default for attachments, or for attachments
that aren't known to already have better media-specific
compression.
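The suggested default, 'send zipped' unless the attachment already carries media-specific compression, is a one-line policy; the type list in this sketch is illustrative, not exhaustive:

```python
# Hypothetical client-side default for the 'send zipped' checkbox:
# zip attachments unless the media type is already compressed.
ALREADY_COMPRESSED = {
    "image/jpeg", "image/gif", "image/png",
    "application/zip", "application/x-gzip",
    "audio/mpeg", "video/mpeg",
}

def send_zipped_default(content_type: str) -> bool:
    return content_type.lower() not in ALREADY_COMPRESSED

assert send_zipped_default("application/msword") is True
assert send_zipped_default("image/jpeg") is False
```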
Keith Moore
1999-07-31 17:36:40 UTC
> > > Even though application/zip is lame, other mechanisms might be
> > > worse. Maybe we should look harder at providing what would be
> > > necessary to make 'application/zip' actually work, e.g., some
> > > top-level indication of what's actually in the package?
> >
> > this has the same deployment barrier as adding a new
> > content-transfer-encoding - in either case the mime mail
> > reader needs to know what to do with the extra parameter
> > or the new c-t-e.
>
> I'm not sure this is true. Unaware MIME mailers will just
> see they have application/zip, that they either recognize
> or not, and users of existing MIME mail programs have a simple
> way of configuring their mail reader to do something
> sensible with it.

not clear. it's one thing to add an ordinary content-type to an
existing mail reader, quite another to add a content-type that
says "decode this body part and then dispatch to the appropriate
content-type handler for its contents" especially if the
contents can contain multiple files, or multiparts, or signed
objects, etc.

> Adding a new content-transfer-encoding has a more serious
> deployment problem, because most deployed systems don't have
> any kind of extensibility built in for CTE.

yes, but my point is that basically you have to upgrade the MUA
anyway to make this work well. might as well do it right.

> Mailing around zip files is common practice. Many people are
> used to dealing with getting a zip file. Leveraging this
> just means making the common practice easier to accomplish
> for senders, and less awkward for receivers; it's a product
> enhancement that mail client vendors could add today.

it's by no means a common practice for everyone - just for some
users of certain platforms. even the most common platform on
which zip is used doesn't ship with zip support. so no, in
general, people are not used to dealing with getting a zip file.

so what you are proposing to do is to clutter up the MIME
architecture and degrade the recipient's user interface
just so a minority of users who already use zip don't have
to upgrade immediately. in the long run I don't think it's
worth it.

Keith
Paul Hoffman / IMC
1999-07-31 17:05:49 UTC
I see a problem here with making a content type (zipped) act like a c-t-e.
Zipped seems fine for "attachments", that is, leaves in the MIME tree. But
some of the requirements for making messages smaller would want to compress
a whole message, which might be a nested multipart. At this point, zipped
hides the lower layers. If we want a rule that says "you can't use
app/zipped for multiparts", how does this become different than a c-t-e?

--Paul Hoffman, Director
--Internet Mail Consortium
Jacob Palme
1999-08-01 10:00:46 UTC
At 12.02 -0400 99-07-31, Keith Moore wrote:
> > Even though application/zip is lame, other mechanisms might be
> > worse. Maybe we should look harder at providing what would be
> > necessary to make 'application/zip' actually work, e.g., some
> > top-level indication of what's actually in the package?
>
>this has the same deployment barrier as adding a new
>content-transfer-encoding - in either case the mime mail
>reader needs to know what to do with the extra parameter
>or the new c-t-e.

No, it has one very important advantage: It will co-work
with existing software using application/zip, in the way
people are already accustomed to sending and receiving
compressed e-mail in that format.

At 10.05 -0700 99-07-31, Paul Hoffman / IMC wrote:
>I see a problem here with making a content type (zipped) act like a
>c-t-e. Zipped seems fine for "attachments", that is, leaves in the
>MIME tree. But some of the requirement for making messages smaller
>would want to compress a whole message, which might be a nested
>multipart. At this point, zipped hides the lower layers. If we want
>a rule that says "you can't use app/zipped for multiparts", how does
>this become different than a c-t-e?

Zip is not used to compress multiparts, just leaves. This is no
serious restriction, since the main part of e-mail messages seldom
needs compression. It is the attachments which are often large and
bulky. Even when the main part is in HTML format, all the images are
in body parts other than the main one.
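The leaf-only practice Jacob describes can be sketched with Python's standard library; this is an illustrative example (the filenames and body text are made up, not from the thread):

```python
# A minimal sketch: compress only the leaf attachment as
# application/zip, leaving the multipart structure and the
# main text part alone. Filenames here are hypothetical.
import io
import zipfile
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg["Subject"] = "quarterly report"
msg.attach(MIMEText("See the attached report.", "plain"))  # main part, uncompressed

# Zip the bulky document in memory before attaching it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("report.doc", b"highly redundant content " * 1000)
msg.attach(MIMEApplication(buf.getvalue(), "zip", name="report.zip"))

print([part.get_content_type() for part in msg.get_payload()])
# ['text/plain', 'application/zip']
```

The multipart container and the text part stay uncompressed; only the bulky leaf is zipped.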

At 13.36 -0400 99-07-31, Keith Moore wrote:
>so what you are proposing to do is to clutter up the MIME
>architecture and degrade the recipient's user interface
>just so a minority of users who already use zip don't have
>to upgrade immediately. in the long run I don't think it's
>worth it.

The zip format may be in a minority of all mail attachments,
simply since most mail attachments are not compressed
using any compression format. But the zip format is
certainly in a large majority of the e-mailed attachments
which are compressed at all, and which are sent to me
by various people around the world. The only other
compression formats which are common in e-mail are
the JPEG and GIF formats. I hardly ever get any e-mail
with attachments in any other compression format.
------------------------------------------------------------------------
Jacob Palme <***@dsv.su.se> (Stockholm University and KTH)
for more info see URL: http://www.dsv.su.se/~jpalme
Keith Moore
1999-08-01 19:11:53 UTC
Permalink
> At 12.02 -0400 99-07-31, Keith Moore wrote:
> > > Even though application/zip is lame, other mechanisms might be
> > > worse. Maybe we should look harder at providing what would be
> > > necessary to make 'application/zip' actually work, e.g., some
> > > top-level indication of what's actually in the package?
> >
> >this has the same deployment barrier as adding a new
> >content-transfer-encoding - in either case the mime mail
> >reader needs to know what to do with the extra parameter
> >or the new c-t-e.
>
> No, it has one very important advantage: It will co-work
> with existing software using application/zip, in the way
> people are already accustomed to sending and receiving
> compressed e-mail in that format.

it will co-work with existing software that most people don't
have, in the way that a small minority of people are already
accustomed to working. the vast majority of people will have
to either install new software or learn how to cope with
zip files, or both. as long as people have to install new
software, why not have them install a new MIME mail reader?

saying that people are already accustomed to using zip is a
lot like saying that AOL comes with built-in MIME support.

> At 13.36 -0400 99-07-31, Keith Moore wrote:
> >so what you are proposing to do is to clutter up the MIME
> >architecture and degrade the recipient's user interface
> >just so a minority of users who already use zip don't have
> >to upgrade immediately. in the long run I don't think it's
> >worth it.
>
> The zip format may be in a minority of all mail attachments,
> simply since most mail attachments are not compressed
> using any compression format. But the zip format is
> certainly in a large majority of the e-mailed attachments
> which are compressed at all, and which are sent to me
> by various people around the world. The only other
> compression formats which are common in e-mail are
> the JPEG and GIF formats. I hardly ever get any e-mail
> with attachments in any other compression format.

you're missing the point. just because you have zip installed
on your computer doesn't mean that the majority of computer
users have zip installed.

Keith
Ned Freed
1999-08-01 19:06:57 UTC
Permalink
> > > Even though application/zip is lame, other mechanisms might be
> > > worse. Maybe we should look harder at providing what would be
> > > necessary to make 'application/zip' actually work, e.g., some
> > > top-level indication of what's actually in the package?
> >
> > this has the same deployment barrier as adding a new
> > content-transfer-encoding - in either case the mime mail
> > reader needs to know what to do with the extra parameter
> > or the new c-t-e.

> I'm not sure this is true. Unaware MIME mailers will just
> see they have application/zip, that they either recognize
> or not, and users of existing MIME mail programs have a simple
> way of configuring their mail reader to at least do something
> sensible with it.

> Adding a new content-transfer-encoding has a more serious
> deployment problem, because most deployed systems don't have
> any kind of extensibility built in for CTE.

Believe me, from a tech support perspective the problems here are far worse
than those associated with a CTE. Think about the implications in the context
of IMAP, for example, where separate fetch of body parts is an important
feature.

> Mailing around zip files is common practice. Many people are
> used to dealing with getting a zip file. Leveraging this
> just means making the common practice easier to accomplish
> for senders, and less awkward for receivers; it's a product
> enhancement that mail client vendors could add today.

Summary and unconditional rejection of ZIPs is also common because of the
inability in some environments to check the content for viruses.

> In the menu for 'attach file', it could just have a check
> box for 'send zipped', for example, and a setting for making
> 'send zipped' the default for attachments, or for attachments
> that aren't known to already have better media-specific
> compression.

In case it isn't clear, I am absolutely opposed to increased use of
application/zip within MIME. We need to solve the real problem here, which is
to allow end-to-end knowledge of what C-Ts and CTEs are supported. Once this is
done we can deploy new CTEs or whatever else we choose willy-nilly. And until it
is done none of this stuff, with the exception of TLS-based compression, stands
a snowball's chance in hell of being widely deployed.

Ned
Ned Freed
1999-08-01 19:12:33 UTC
Permalink
> > Even though application/zip is lame, other mechanisms might be
> > worse. Maybe we should look harder at providing what would be
> > necessary to make 'application/zip' actually work, e.g., some
> > top-level indication of what's actually in the package?

> this has the same deployment barrier as adding a new
> content-transfer-encoding - in either case the mime mail
> reader needs to know what to do with the extra parameter
> or the new c-t-e.

Actually the problems are far worse than a new CTE. We end up having to add a
bunch of really gross nonorthogonal nonsense to MIME headers and the butchery
we'd need on the IMAP front is too terrible to contemplate. And besides, this
is effectively a non-leaf CTE, and as such runs smack into the no-nested-CTE
consensus that allowed MIME to deploy in the first place.

Ned
Ned Freed
1999-07-28 20:05:40 UTC
Permalink
> > >aarrgh. SMTP is already complex enough. the last thing I want to see
> > >is more complex MTAs adding more failure cases.
> >
> >
> > I was thinking in terms of an ESMTP extension. In any event, by your own
> > admission, doing it as a MIME type would break a lot of software at the end
> > points. At least this way it is only attempted between mutually consenting
> > MTA's and is transparent to the end users.

> an MTA that transparently corrupts your mail isn't necessarily a good thing.

> note also that compression algorithms tend to be specific to particular
> kinds of content - what works well with text or object code typically
> doesn't work well with images, audio, or video.

I prefer to characterize it this way: There are type-specific compressions,
which tend to be built into the media types they are appropriate for and which
also may not yield the same output as input, and type-independent compressions,
which tend to be applied on top of rather than inside of media types and which
always yield the same output as input.

We're only talking about the latter sort of compression here. Use of the former
is a solved problem, and hence a red herring in this context.

Ned
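Ned's distinction can be illustrated with a type-independent compressor; the following is an editorial sketch using deflate via zlib (the sample header text is made up):

```python
# Type-independent compression (deflate, as used by gzip and zip)
# is applied on top of a media type and always round-trips exactly:
# whatever goes in comes back out byte for byte.
import zlib

original = b"Received: from relay.example.org by mail.example.org\r\n" * 200
packed = zlib.compress(original, 9)

print(len(original), "->", len(packed))     # repetitive text shrinks a lot
assert zlib.decompress(packed) == original  # lossless: same output as input
```

A type-specific coding such as JPEG, by contrast, lives inside its media type and need not reproduce its input bit-for-bit.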
Keith Moore
1999-07-28 20:21:04 UTC
Permalink
my point is that even the "type-independent compressions" (LZW,
deflate, etc) are at best ineffective (and at worst pessimal)
when applied to kinds of media other than those for which they
were designed. you therefore don't want to gratuitously apply
them to random content-types.

Keith
Ned Freed
1999-07-28 21:23:56 UTC
Permalink
> my point is that even the "type-independent compressions" (LZW,
> deflate, etc) are at best ineffective (and at worst pessimal)
> when applied to kinds of media other than those for which they
> were designed. you therefore don't want to gratuitously apply
> them to random content-types.

Actually this isn't true -- for one thing, there's still an advantage to be had
by compressing base64 at the transport level. And for another, the
type-independent conversions sometimes do manage to squeeze a little more juice
out that the type-specific conversions didn't manage. And when that's not
possible, they are smart enough to tell this is the case and opt out; the
result is a little CPU lost on the sending side (very cheap), effectively no
bloat, and no significant CPU loss on the receiver.

The quality of these things really has gone up over the years, so with the
proper use of modern options pessimal outcomes are eliminated.

Ned
Keith Moore
1999-07-29 00:46:43 UTC
Permalink
> > my point is that even the "type-independent compressions" (LZW,
> > deflate, etc) are at best ineffective (and at worst pessimal)
> > when applied to kinds of media other than those for which they
> > were designed. you therefore don't want to gratuitously apply
> > them to random content-types.
>
> Actually this isn't true -- for one thing, there's still an advantage to
> be had by compressing base64 at the transport level.

right, but I was talking about MIME compression. if we compress at
the transport level, we might as well just use ssl.

> And for another, the
> type-independent conversions sometimes do manage to squeeze a little
> more juice out that the type-specific conversions didn't manage.

yes, though it's often pretty marginal - my guess is that it's not worth
the cost of incompatibility. (again, talking about MIME compression)

> And when that's not
> possible, they are smart enough to tell this is the case and opt out; the
> result is a little CPU lost on the sending side (very cheap), effectively no
> bloat, and no significant CPU loss on the receiver.

basically true, for newer algorithms.

Keith
Ned Freed
1999-07-28 19:49:10 UTC
Permalink
> > It seems that a new MIME standard for content-transfer-encoding that
> > would indicate a compressed base64 type ala gzip could be nice.
> > Creative minds might even improve the efficiency of base64 at the
> > same time, if we don't have to worry about translation into EBCDIC
> > anymore.

> it's been discussed many times; afaik the biggest problem is that nobody
> has bothered to write up a concrete proposal. the second biggest problem,
> of course, is that it would break lots of existing software.

I actually have it written up, but have never bothered to release the
specification because of the deployment problem.

We need a deployment mechanism first. And that's precisely what we're trying
to get through RESCAP.

Ned
Tim Kehres
1999-07-28 20:09:31 UTC
Permalink
>> >aarrgh. SMTP is already complex enough. the last thing I want to see
>> >is more complex MTAs adding more failure cases.
>>
>>
>> I was thinking in terms of an ESMTP extension. In any event, by your own
>> admission, doing it as a MIME type would break a lot of software at the
>> end points. At least this way it is only attempted between mutually
>> consenting MTA's and is transparent to the end users.
>
>an MTA that transparently corrupts your mail isn't necessarily a good
>thing.


Yes, of course. I was not advocating the populating of the world with
broken software. :-) :-)

I've not had a chance to review the RFC that Ned made reference to (TLS SMTP
Extension), but on the surface it sounds like the capabilities that I was
suggesting anyway.

Thanks and Best Regards,

Tim Kehres
International Messaging Associates
http://www.ima.com
Tim Kehres
1999-07-28 20:24:24 UTC
Permalink
>On Wed, 28 Jul 1999 15:03:23 EDT, Keith Moore said:
>> it's been discussed many times; afaik the biggest problem is that nobody
>> has bothered to write up a concrete proposal. the second biggest
>> problem, of course, is that it would break lots of existing software.
>
>That, and the fact that currently, the average text/plain being sent
>around is relatively small (2-3K or so) and won't be a BIG win (you can't
>save more than 3K, and that's only 2 packets on an ethernet ;)
>
>The things that chew up the bandwidth are things like .GIF, .JPG,
>etc attachments, which usually tend to have some compression already
>done on them. Now, if you have some big spreadsheets from some big
>company that specializes in bloatware, perhaps the right thing to do
>is convince them to make it take less disk space. Yes, disk is cheap,
>but that hardly justifies intentional waste....
>
>Has anybody done any studies at all on whether said compression would
>actually *win* us enough to be worth it? As a data point, my MH folders
>live on a compressed file system (each 4K block is LZ-compressed
>individually), and takes about 110M compressed and 198M uncompressed.
>*HOWEVER*, a *very* large chunk of that is Received: headers and the
>like, which would NOT be compressible...


As you say, it is highly dependent upon the attachment type. File types
that are already in a compressed state like JPEG's as you reference above
won't buy you anything. On the other hand, at least in our environment, we
tend to send around a lot of documentation, in the form of Microsoft Word,
Adobe FrameMaker, or PDF formats, which compress rather nicely. Images in
TIFF format are also highly compressible. I would suspect that as the usage
of HTML in email increases, you might get some benefit here as well, but of
course only for the larger sized messages.

Due to the dependency on the attachment types, I suspect that what might be
a significant win for one environment would make no difference in another.
In reference to your comment above about the bloatware and convincing people
to use less space, I'm not sure what you have in mind here. I'm not aware
of any simple way to have for instance your Excel, Word, or Frame files
automagically compressed.

Above you also mention the online storage of the messages. What is the
intent here? Are we trying to save disk space or bandwidth utilization
when the attachment is in transit? I was assuming the latter. Saving a
couple hundred meg of disk space with the price of drives these days hardly
seems worth the effort.

Best Regards,

Tim Kehres
International Messaging Associates
http://www.ima.com
V***@vt.edu
1999-07-29 15:16:38 UTC
Permalink
On Thu, 29 Jul 1999 04:24:24 +0800, "Tim Kehres" said:
> Above you also mention the online storage of the messages. What is the
> intent here? Are we trying to save disk space or bandwidth utilization
> when the attachment is in transit? I was assuming the latter. Saving a
> couple hundred meg of disk space with the price of drives these days hardly
> seems worth the effort.

I mentioned online storage only because I had statistics on the compression.
What an MUA does in its message store is of course its own business. ;)

I'm also distressed at this cavalier "with the cost of..." trend of late.

Some places run on tight budgets or have other constraints. I turned
on disk compression on my workstation, and got back about 30% of the /home
space. This saved me from having to buy a new disk drive. So right
there, we've saved $200 just on the cost of the drive. But there's more.

Remember to factor in the cost of my downtime while I back up my machine,
take it down, do all the needed recabling, replace the drive, boot it
up, and restore all the data from tape. Add the cost of most of a day's
work for me. Add in the cost of *tomorrow* being shot because I didn't
get stuff done today, so tomorrow I'm digging out from under today's backlog.

Suddenly, worrying about efficient use of disk space starts looking a
lot better - and makes CPU and/or memory upgrades a lot cheaper in
comparison.

And I've *got* disk compression software. I don't have CPU compression
software. ;)


--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
Dave Crocker
1999-07-31 17:26:06 UTC
Permalink
At 08:16 AM 7/29/99 , ***@vt.edu wrote:
>I'm also distressed at this cavalier "with the cost of..." trend of late.
>
>Some places run on tight budgets or have other constraints. I turned

As in, most of the world.

We really do need to remember just how limited bandwidth (in particular)
and "Internet performance" are in much of the Internet. Added to that is
that much of the user community pays a usage fee for connection time.

And it will be a very long time before things improve.

d/

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Dave Crocker Tel: +1 408 246 8253
Brandenburg Consulting Fax: +1 408 273 6464
675 Spruce Drive <http://www.brandenburg.com>
Sunnyvale, CA 94086 USA <mailto:***@brandenburg.com>
Keith Moore
1999-07-31 17:39:47 UTC
Permalink
> We really do need to remember just how limited bandwidth (in particular)
> and "Internet performance" are in much of the Internet. Added to that is
> that much of the user community pays a usage fee for connection time.

sure, but compressing modems *are* widely deployed. if the "connection"
uses a compressing modem, any additional savings in "connection time"
resulting from smtp or mime compression is truly marginal.

you'll get more connection time savings from pipelining smtp than you
will from compressing the data.

Keith
Kai Henningsen
1999-08-01 11:46:00 UTC
Permalink
***@cs.utk.edu (Keith Moore) wrote on 31.07.99 in <***@astro.cs.utk.edu>:

> > We really do need to remember just how limited bandwidth (in particular)
> > and "Internet performance" are in much of the Internet. Added to that is
> > that much of the user community pays a usage fee for connection time.
>
> sure, but compressing modems *are* widely deployed. if the "connection"
> uses a compressing modem, any additional savings in "connection time"
> resulting from smtp or mime compression is truly marginal.

I don't know about you, but in my experience, modem compression (and
similar schemes) typically does a fairly poor job.

There's a fairly simple reason. This type of compression must keep
latencies from growing too much. This *really* hurts compression.

> you'll get more connection time savings from pipelining smtp than you
> will from compressing the data.

That's only true if the time is dominated by RCPT TO: handling, that is,
you're sending many small mails. When you're sending a few large mails,
pipelining is completely irrelevant, and compression is very important,
because you spend most of the time in the DATA phase anyway.

I don't think anyone really wants compression for small mails.

MfG Kai
Arnt Gulbrandsen
1999-08-01 17:24:08 UTC
Permalink
***@khms.westfalen.de (Kai Henningsen)
> I don't know about you, but in my experience, modem compression (and
> similar schemes) typically does a fairly poor job.
>
> There's a fairly simple reason. This type of compression must keep
> latencies from growing too much. This *really* hurts compression.

I see what you mean. However, PPP compression (RFC 1962) should not
have this problem, since the additional time taken to start sending
each packet should be more than offset by the decrease in transmission
time.

--Arnt
Keith Moore
1999-08-01 18:12:09 UTC
Permalink
> > There's a fairly simple reason. This type of compression must keep
> > latencies from growing too much. This *really* hurts compression.
>
> I see what you mean. However, PPP compression (RFC 1962) should not
> have this problem, since the additional time taken to start sending
> each packet should be more than offset by the decrease in transmission
> time.

depends on which PPP compression algorithm is used. the typical
(for me at least) "BSD compress" algorithm is almost the same
algorithm as the one used in modems. but then again I use
NetBSD. I don't know what compression Mac or Windoze stacks use.

my understanding is that with compressing modems, the main additional
value of PPP compression is in reducing interrupt overhead.

Keith
Arnt Gulbrandsen
1999-08-01 18:36:34 UTC
Permalink
Keith Moore <***@cs.utk.edu>
> depends on which PPP compression algorithm is used. the typical
> (for me at least) "BSD compress" algorithm is almost the same
> algorithm as the one used in modems.

Yes, but the data is different :) A PPP-based algorithm can look ahead
until the end of the IP packet without cost. A modem has to wait for
each byte coming in across a comparatively slow connection, so good
compression means waiting for a "long" time before emitting output.

In the extreme case, a 56k modem with a 115,200bps host link and
twelve-bit BSD compression can't read more than about 1.5-2 characters
before it has to emit output or introduce latency.
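The arithmetic behind Arnt's estimate, as an editorial aside using the nominal rates he mentions:

```python
# Bytes arrive over the 115,200 bps host link only about twice as
# fast as they leave over the ~56,000 bps line, so buffering more
# than ~2 characters of lookahead starts to add latency.
host_bps = 115_200
line_bps = 56_000
print(round(host_bps / line_bps, 2))  # ~2.06
```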

(Maybe I've just understood why people use these newfangled host
connections - EPP or whatever. I forget the abbreviation.)

--Arnt
Keith Moore
1999-08-01 18:06:15 UTC
Permalink
> I don't know about you, but in my experience, modem compression (and
> similar schemes) typically does a fairly poor job.
>
> There's a fairly simple reason. This type of compression must keep
> latencies from growing too much. This *really* hurts compression.

actually my experience is just the opposite ... modem compression
(usually based on LZW) does a reasonably good job ... not as good
as gzip or bz2 but close enough. and yes, one difference between
the effectiveness of these algorithms is the amount of latency/
lookahead that is needed. LZW is amazingly good for an algorithm
needing only a fixed amount of memory and one octet lookahead.

but for me the disappointing thing about modems is that they add far
more latency than you would expect given their speed. I don't think
this is the fault of the compression so much as the error correction
(I suspect that many modems rely on error correction to compensate
for marginal analog circuitry)

> > you'll get more connection time savings from pipelining smtp than you
> > will from compressing the data.
>
> That's only true if the time is dominated by RCPT TO: handling, that is,
> you're sending many small mails. When you're sending a few large mails,
> pipelining is completely irrelevant, and compression is very important,
> because you spend most of the time in the DATA phase anyway.

right. but even these days most messages are small. and I was assuming
that this traffic was already going over compressing modems, which would
save almost as much bandwidth as compressing smtp data.

Keith
Yutaka Sato 佐藤豊
1999-07-29 16:29:31 UTC
Permalink
In message <***@cbmail.cb.lucent.com> on 07/27/99(23:56:14)
you "Mark Horton" <***@lucent.com> wrote:
|It seems that a new MIME standard for content-transfer-encoding that
|would indicate a compressed base64 type ala gzip could be nice.

I think the standard should satisfy the following:
- Content-Type must indicate the type of the body properly (of course)
- any combination of Content-Type and C-T-Encoding should be possible
- any order of recursive (nested) C-T-Encodings should be applicable
Maybe I reinvented something rejected in the past, but I'm curious why
using message/rfc822 for nested encoding is not good. For example, I think
we can send gzipped text encoded in base64 like this:

Content-Type: message/mime
Content-Transfer-Encoding: base64

<base64-encoded-body>

where the <base64-encoded-body> is decoded into a MIME message like this:

Content-Type: text/plain
Content-Transfer-Encoding: x-gzip

<gzip-encoded-body>

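A sketch of how a receiver could unwrap this scheme (in Python; "message/mime" and "x-gzip" are the hypothetical, unregistered labels above, and the payload text is made up):

```python
# Outer layer: base64 C-T-E; inner layer: a MIME entity whose body
# is gzip-compressed. Undo them in order: base64, parse, gunzip.
import base64
import gzip
from email import message_from_bytes

inner = (b"Content-Type: text/plain\r\n"
         b"Content-Transfer-Encoding: x-gzip\r\n"
         b"\r\n" + gzip.compress(b"hello, compressed world\n"))
outer_body = base64.b64encode(inner)  # what would go on the wire

# Receiver side: decode the outer base64, parse the inner entity,
# then decompress its body (get_payload returns the raw bytes for
# an unrecognized C-T-E).
entity = message_from_bytes(base64.b64decode(outer_body))
text = gzip.decompress(entity.get_payload(decode=True))
print(text.decode())
```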
Cheers,
Yutaka
--
Yutaka Sato <***@etl.go.jp> http://www.etl.go.jp/~ysato/ @ @
Computer Science Division, Electrotechnical Laboratory ( - )
1-1-4 Umezono, Tsukuba, Ibaraki, 305-8568 Japan _< >_
Keith Moore
1999-07-29 16:43:54 UTC
Permalink
> I think the standard should satisfy the following:
> - Content-Type must indicate the type of the body properly (of course)
> - any combination of Content-Type and C-T-Encoding should be possible
> - any order of recursive (nested) C-T-Encodings should be applicable

oh no, not this argument again...

> Maybe I reinvented something rejected in past but I'm curious why using
> message/rfc822 for nested encoding is not good.

the more levels of encoding you have, the less chance you
have of being able to interoperate.

Keith
Tim Kehres
1999-07-29 17:19:27 UTC
Permalink
Jacob,

>With compression in the application layer, an attachment will
>be compressed by the original sender, and not uncompressed
>again until by the final recipient.


In addition however the receiving UA will need to have capabilities
compatible with the sending UA. If this is not the case, the message may
get there, but will be rendered unusable. I suspect that this is the
problem with breaking software that Keith has been suggesting.

The advantage of this approach is that you (may) save in local and transient
storage.

>With compression in the transport layer, the attachment
>will be compressed and uncompressed for each store-and-forward
>step.


In this situation the compression is transparent to both end UA's, and its
application depends upon the negotiated capabilities of each transport
link. Advantages include not having to modify any UA's or have knowledge of
recipient UA capabilities. The disadvantage is that there is no gain on the
local storage side.

Seems like both approaches may have their merits, depending upon what
problem you are trying to solve at a given time.

Best Regards,

-- Tim
Keith Moore
1999-07-29 17:47:01 UTC
Permalink
> >With compression in the application layer, an attachment will
> >be compressed by the original sender, and not uncompressed
> >again until by the final recipient.
>
>
> In addition however the receiving UA will need to have compatible
> capabilities with the sending UA. If this is not the case, the message may
> get there, but will be rendered unusable. I suspect that this is the
> problem with breaking software that Keith has been suggesting.

there are two kinds of brokenness that this threatens to introduce:

1. brokenness due to lack of backward compatibility
2. brokenness due to software bugs as a result of increased
complexity.

email is already too unreliable; the last thing I want to
do is make MTAs more complex than they already are (thus further
decreasing reliability) for a dubious benefit.

note that some approaches to compression are more reliable than
others - using a separate per-hop negotiated compression layer
on top of SMTP strikes me as more reliable than on-the-fly
translation of content-transfer-encodings. particularly if
the separate compression layer has built in integrity checking.

Keith
Dave Crocker
1999-07-31 17:23:13 UTC
Permalink
At 07:28 AM 7/29/99 , Keith Moore wrote:
> > We already have a standard for sending compressed data in e-mail.
> > We have an IANA registered content-type application/zip.
>
>application/zip is not a standard.

Just to beat this one into the ground:

It's also not a compression mechanism.

It does "bagging" of independent objects, but does not compress the bits.

d/



=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Dave Crocker Tel: +1 408 246 8253
Brandenburg Consulting Fax: +1 408 273 6464
675 Spruce Drive <http://www.brandenburg.com>
Sunnyvale, CA 94086 USA <mailto:***@brandenburg.com>
Kai Henningsen
1999-08-01 11:39:00 UTC
Permalink
***@brandenburg.com (Dave Crocker) wrote on 31.07.99 in <***@mail.bayarea.net>:

> At 07:28 AM 7/29/99 , Keith Moore wrote:
> > > We already have a standard for sending compressed data in e-mail.
> > > We have an IANA registered content-type application/zip.
> >
> >application/zip is not a standard.
>
> Just to beat this one into the ground:
>
> It's also not a compression mechanism.
>
> It does "bagging" of independent objects, but does not compress the bits.

This turns out not to be the case.

The application/zip format typically compresses the contained objects with
deflate. ("typically" because there are other algorithms of only historic
significance, and also the option not to use compression; but it's been a
long time since I last saw anything except deflate.)

Now, I certainly agree that this is a poor match with MIME, but the reason
is that it uses a different, incompatible mechanism to describe the
contained objects.

MfG Kai