[java] Exception: Unexpected end of ZLIB input stream

You have to call close() on the GZIPOutputStream before you attempt to read the file. The final bytes of the file will only be written when the stream is actually closed. (This is irrespective of any explicit buffering in the output stack: the stream only knows to compress and write the last bytes when you tell it to finish. A flush() probably won't help ... though calling finish() instead of close() should work, since finish() writes the remaining compressed data and the GZIP trailer without closing the underlying stream; see the sketch just below. Look at the javadocs.)
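
If the underlying stream has to stay open (a socket, say), finish() is the right call. Here's a minimal sketch of that variant, using an in-memory stream as a stand-in for one you can't close (the class name is just for illustration):

package test;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GZipFinishTest {

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(out);
        gz.write(10);
        gz.finish();  // writes the remaining compressed data and the GZIP trailer
        // 'out' is still open, and now holds a complete GZIP stream
        System.out.println(new GZIPInputStream(
                new ByteArrayInputStream(out.toByteArray())).read());
    }
}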

Here's the correct code (in Java):

package test;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GZipTest {

    public static void main(String[] args) throws
                FileNotFoundException, IOException {
        String name = "/tmp/test";
        GZIPOutputStream gz = new GZIPOutputStream(new FileOutputStream(name));
        gz.write(10);     // write a single byte of payload
        gz.close();       // Remove this to reproduce the reported bug
        System.out.println(new GZIPInputStream(new FileInputStream(name)).read());
    }
}

(I've not implemented resource management or exception handling / reporting properly, as they are not relevant to the purpose of this code. Don't treat this as an example of "good code"; see the try-with-resources sketch below for the tidy version.)
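
For completeness: with try-with-resources the close() can't be forgotten, because the runtime calls it for you when the block exits, even if write() throws. A minimal sketch of the same test written that way (again, the class name is just for illustration):

package test;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GZipTestTidy {

    public static void main(String[] args) throws IOException {
        String name = "/tmp/test";
        // close() (and hence the GZIP trailer) is guaranteed here
        try (GZIPOutputStream gz =
                new GZIPOutputStream(new FileOutputStream(name))) {
            gz.write(10);
        }
        try (GZIPInputStream in =
                new GZIPInputStream(new FileInputStream(name))) {
            System.out.println(in.read());
        }
    }
}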
