NEurodrOne said: Is it BARF!?
Does this really work to the specified extent? Theoretically it seems applicable, but has it been tested on all types of files?
Tried it on one of my already-compressed zip files [50 MB]. The first attempt cut it in half, and then the ratio gradually leveled off [I got bored after it reached 13 MB!]. Yup, it would be really awesome to see any practical test that actually reaches that 1 KB barrier. Any compressed examples of it, thixkull?
thixkull said: I'm sure it didn't store 25 MB in just a three-letter file name... and it uses just two letters other than the x...
About BARF. The program is a joke. A lot of people have claimed (especially on comp.compression) that they have found a way to compress every input file above a certain size, or to compress random data. If such a program were possible, you could recompress the compressed output recursively until the output was as small as you liked. A simple counting argument rules this out: there are fewer possible short outputs than long inputs, so no lossless compressor can shrink every file.
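The recursion claim is easy to test with any real compressor. A minimal sketch (my own illustration, nothing to do with BARF's code) using Python's zlib: random bytes are already maximal-entropy, so each pass can only add container overhead, and the sizes never shrink.

```python
import os
import zlib

# Start from random data, which no lossless compressor can shrink on average.
data = os.urandom(100_000)
sizes = [len(data)]

# "Recursively compress" five times, recording the size after each pass.
for _ in range(5):
    data = zlib.compress(data, 9)
    sizes.append(len(data))

print(sizes)
# Each pass only adds a few bytes of header/checksum overhead.
assert all(later >= earlier for earlier, later in zip(sizes, sizes[1:]))
```

If recursive compression worked, the list would march down toward 1 KB; instead it creeps upward by a few bytes per pass.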
I wrote the program to show how a compressor can cheat. You have to design contest rules carefully to prevent this. For example, if you look at the Calgary challenge at The Compression/SHA-1 Challenge, BARF would not pass. First, the reported size has to include the size of the decompressor, and the BARF decompressor has to have a copy of the Calgary corpus embedded in it to work (which it does). Second, the recursive part would not work because the whole thing has to be packed into an archive, and the archive has to store the name of the compressed file.
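For anyone curious how the filename trick works mechanically, here is a hypothetical toy sketch (my own code, not BARF's actual source): each "compression" pass moves one byte from the file's data into the file's name, so the file itself shrinks by one byte per pass while the total information stored on disk stays the same.

```python
def cheat_compress(name: str, data: bytes) -> tuple[str, bytes]:
    """Remove the last data byte and hide it in the file name."""
    last = data[-1]
    return f"{name}.x{last:02x}", data[:-1]

def cheat_decompress(name: str, data: bytes) -> tuple[str, bytes]:
    """Pull the hidden byte back out of the file name."""
    base, token = name.rsplit(".x", 1)
    return base, data + bytes([int(token, 16)])

name, data = "file", b"hello"
for _ in range(5):
    name, data = cheat_compress(name, data)
print(name, len(data))  # the file is now 0 bytes; the name got longer

while ".x" in name:
    name, data = cheat_decompress(name, data)
assert (name, data) == ("file", b"hello")  # losslessly restored
```

This is exactly why a fair contest measures the archive as a whole: once the file name has to be stored inside the archive too, the "compressed" total stops shrinking.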