edu.stanford.smi.protege.server.socket.deflate
Class CompressingOutputStream
java.lang.Object
java.io.OutputStream
edu.stanford.smi.protege.server.socket.deflate.CompressingOutputStream
- All Implemented Interfaces:
- Closeable, Flushable
- Direct Known Subclasses:
- HybridCompressingOutputStream
public class CompressingOutputStream
extends OutputStream
This code is based on the ideas presented in http://javatechniques.com/blog/compressing-data-sent-over-a-socket/ by Philip Isenhour. I am very grateful for the approach that he presented. The key idea is to avoid the GZip and Zip streams and to use the Deflater and Inflater classes directly. In addition, Philip Isenhour essentially defines a packet type whose header indicates the uncompressed and compressed size of the packet contents. I took these two key ideas and wrote the following code from scratch without reference to Philip Isenhour's documents. I think that some version of Philip Isenhour's ideas should find their way into the core Java libraries, because otherwise people will continue to struggle with this problem.
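A minimal sketch of the packet-framing idea follows. It is not the actual Protégé implementation; the helper name, the temporary buffer size, and the use of DataOutputStream for the header are assumptions, but it shows the two key ideas: running a Deflater directly over the buffered bytes and prefixing each packet with its uncompressed and compressed sizes.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.Deflater;

// Hypothetical sketch: on flush, deflate the buffered bytes and frame them
// as a packet whose header carries the uncompressed and compressed sizes.
class PacketWriterSketch {
    static void writePacket(OutputStream os, byte[] buffer, int length) throws IOException {
        Deflater deflater = new Deflater();
        deflater.setInput(buffer, 0, length);
        deflater.finish();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        byte[] tmp = new byte[4096];
        while (!deflater.finished()) {
            int n = deflater.deflate(tmp);
            compressed.write(tmp, 0, n);
        }
        deflater.end();
        DataOutputStream out = new DataOutputStream(os);
        out.writeInt(length);             // uncompressed size
        out.writeInt(compressed.size());  // compressed size
        compressed.writeTo(out);          // compressed payload
        out.flush();                      // the writer decides exactly when bytes go out
    }
}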
I have tried several other approaches to a compressing input and output stream. The first approach was to base the input and output streams on the GZip input and output streams. There are web pages suggesting that calling GZIPOutputStream's finish() method during flush() will work. I had trouble with this approach when a write occurred on the stream after the flush() (which had called finish()): I would get exceptions indicating that the GZip output stream was finished and therefore unwritable.
I then tried to use the ZipInputStream/ZipOutputStream pair. I would flush data by creating a ZipEntry and writing it out. This approach actually worked very well, but it had a mysterious bug where some data was either not fully written out or not fully read. In the RMI context things would hang. This bug was relatively rare and only happened on certain machines; I never found out what the problem was.
The beauty of Philip Isenhour's approach is that the developer completely controls how data is flushed and fully written out. The developer can also ensure that the read side fully reads all the data, so there should not be any more RMI hangs. The only remaining issue is whether the deflate/inflate logic is correct. This is pretty thoroughly tested in our server-client testing (though there are *always* bugs hidden somewhere).
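For comparison, here is a hedged sketch of the matching read side (again, not the actual CompressingInputStream code): it reads the two-int header, blocks until the whole compressed payload has arrived, and only then inflates. Fully reading each packet before handing bytes to the caller is what removes the risk of half-consumed data.

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Hypothetical sketch of the read side: read the header, then read the full
// compressed payload before inflating, so a packet is never half consumed.
class PacketReaderSketch {
    static byte[] readPacket(InputStream is) throws IOException {
        DataInputStream in = new DataInputStream(is);
        int size = in.readInt();            // uncompressed size
        int compressedSize = in.readInt();  // compressed size
        byte[] compressed = new byte[compressedSize];
        in.readFully(compressed);           // blocks until every compressed byte is read
        Inflater inflater = new Inflater();
        byte[] data = new byte[size];
        try {
            inflater.setInput(compressed);
            int inflated = 0;
            while (inflated < size && !inflater.finished()) {
                int n = inflater.inflate(data, inflated, size - inflated);
                if (n == 0) {
                    throw new IOException("Truncated or corrupt packet");
                }
                inflated += n;
            }
        } catch (DataFormatException dfe) {
            throw new IOException("Corrupt packet", dfe);
        } finally {
            inflater.end();
        }
        return data;
    }
}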
- Author:
- tredmond
Method Summary
 void             flush()
 protected void   logPacket(byte[] compressedBuffer, int compressedSize)
 void             write(int b)
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail
COMPRESSION_PAD
public static int COMPRESSION_PAD
BUFFER_SIZE
public static int BUFFER_SIZE
KB
public static int KB
os
protected OutputStream os
buffer
protected byte[] buffer
offset
protected int offset
Constructor Detail
CompressingOutputStream
public CompressingOutputStream(OutputStream os)
Method Detail
write
public void write(int b)
throws IOException
- Specified by:
write
in class OutputStream
- Throws:
IOException
flush
public void flush()
throws IOException
- Specified by:
flush
in interface Flushable
- Overrides:
flush
in class OutputStream
- Throws:
IOException
logPacket
protected void logPacket(byte[] compressedBuffer, int compressedSize)
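A hypothetical usage sketch, assuming a matching CompressingInputStream is wrapped around the peer's input stream; the host, port, and payload are made up for illustration:

import edu.stanford.smi.protege.server.socket.deflate.CompressingOutputStream;

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class CompressingStreamUsage {
    public static void main(String[] args) throws IOException {
        // Hypothetical host and port; in Protégé these streams wrap RMI socket streams.
        Socket socket = new Socket("localhost", 5100);
        try {
            OutputStream out = new CompressingOutputStream(socket.getOutputStream());
            out.write("hello".getBytes("UTF-8"));
            out.flush();  // each flush writes one complete compressed packet
        } finally {
            socket.close();
        }
    }
}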