Published by Vlasta on December 2nd 2014.
The first specification of the JPEG image format was published more than 20 years ago, and it was obviously a great success. Even after so many years, most digital cameras still produce .jpg images and a good portion of all images on the web are .jpg images. Now imagine if all .jpgs were about 10 percent smaller with exactly the same quality. Wouldn't that be nice?
There is a thing called arithmetic coding, which can be used as a replacement for Huffman coding in the final step of JPEG compression. It is more effective, and files using arithmetic coding are a bit smaller: between 5% and 12% according to my tests.
So, 10% may not sound like much, but it means 10% more shots on your memory card, 10% more photos on that CD, 10% faster downloads, or 10% less paid for transferred data. The sad thing is that the JPEG standard had this option all along, but nobody dared to use it due to software patents and the previous disaster with GIFs.
This is a nice example of how software patents slow down technological progress. This particular software patent cost us all a lot of money (sum those 10% up). And the patent holders did not make money on it either, because nobody chose to use it. Everyone went the safe, patent-free way. Lose-lose.
Three years ago the patent expired, and the standard JPEG library added the long-abandoned option to use arithmetic coding to save those few percent of space. But it is too late now, as every piece of software would have to be updated to be compatible with the "new" JPEGs.
Let's have a look. Chrome does not support it, Firefox does not support it, Internet Explorer does not support it, Photoshop CS3 does not support it, the Windows image viewer does not support it. Nobody uses it, nobody cares about an extra 10% of space saved.
Anyway, I care, and RWPaint and RWPhotos have been able to read JPEGs with arithmetic coding for about a year now. From the next version, you'll be able to save them as well. If you want to try it right now, grab this and replace the current codec in your installation folder. Then, in the configuration, press the "Arithmetic coding" icon. If you want to make your own comparisons of how much space it can save you, be sure to save both with and without arithmetic coding at the same time to avoid possible re-saving loss.
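If you would rather not re-save from an editor at all, a lossless comparison can also be sketched with the jpegtran tool from libjpeg 7+ / libjpeg-turbo, whose -arithmetic switch swaps the entropy coder without re-doing the lossy DCT step (so there is no re-saving loss to worry about). This is only a sketch; the file names are hypothetical, and it assumes a jpegtran build with arithmetic-coding support on your PATH:

```python
import subprocess

def transcode_to_arithmetic(src: str, dst: str) -> None:
    """Losslessly re-encode a Huffman-coded JPEG with arithmetic coding.

    Assumes jpegtran (libjpeg 7+ or libjpeg-turbo, built with arithmetic
    support) is on PATH; -arithmetic changes only the entropy coding stage.
    """
    with open(dst, "wb") as out:
        subprocess.run(["jpegtran", "-arithmetic", src], stdout=out, check=True)

def saving_percent(original_bytes: int, arithmetic_bytes: int) -> float:
    """Space saved by arithmetic coding, as a percentage of the original size."""
    return (original_bytes - arithmetic_bytes) / original_bytes * 100.0
```

For example, after `transcode_to_arithmetic("photo.jpg", "photo-ac.jpg")` you could feed the two file sizes (via `os.path.getsize`) into `saving_percent` to get your own number.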
That's interesting. I think it's great that you implement this into the rw software.
http://filmicgames.com/archives/778
Dan Forever, on November 16th, 2011 at 5:46 am Said:
I find the Arithmetic jpeg works in Chrome, though perhaps it was updated between the time this blog post was written and my comment.
However, living in Europe I believe we don’t have software patents (though I’m not sure if that’s a per-country basis or EU specific)
https://bugzilla.mozilla.org/show_bug.cgi?id=680385
Akkana Peck 2014-11-01 12:07:12 PDT
Chromium here (on Debian sid) shows arithmetic coded images just fine. I think chrome/chromium uses the system library, so it probably varies by OS.
Yes, it works on Linux via the system lib. On Windows, it does not work.
thanks for some information
Informative indeed.
Arithmetic coding is SLOW! IrfanView can show the time it took to load an image. Decompressing an arithmetic-coded image takes about 9 times longer than Huffman with libjpeg-turbo, and 5 times longer with the old libjpeg. Other software can be expected to use one of those two variants.
Computers are getting faster, but expectations for resolution and quality are also rising. With arithmetic coding, it might not be possible to flick through hi-res scans without a noticeable delay. The greatest compression ratio difference (13%) is found in large non-uniform images (where different parts would have different optimal Huffman codes).
Small web-resolution images that you might download over paid traffic see only 6-7% improvement. And on the web, this difference is dwarfed by the typical bloat of css/javascript, and overhead from calling advertising/tracking bugs. We can also often encounter JPEG files with meaningless Exif/Xmp metadata automatically generated by Photoshop, which bloat their size. Compression isn't that important anymore.
During all those years when the patents were still in effect, it was definitely not a good idea to use arithmetic coding. Even for normal files, you needed something like ACDSee with its read-ahead and cache-behind features to browse smoothly.
I have just found this problem. I have just had a Hard Drive repaired and many of my photos now will not open.
When attempting to open them in Photoshop, it says various things depending on the file. Some say:
"Could not complete your request because an unknown or invalid JPEG marker type is found"
While others say:
"Could not complete your request because reading arithmetic coded JPEG files is not implemented"
When trying to open them in Paint it says "Paint cannot read this file. This is not a valid bitmap file, or its format is not currently supported"
Does anybody know a way to fix this? I'd like to recover the photos if I can.
10% less of 100% of X is 90% of X. 10% more of 90% of X, however, is (90% + 9%) of X, or 99% of X.
To use a simple example: if you shave off 10% of a 500K file, you end up with a 450K file. But that 50K in savings represents 11% of 450K (50K/450K*100%=11.1%), not 10%.
That may only be marginally more, but it's important to remember this fact in situations where the savings are more substantial than 10%. For example, if you saved 33%, then the resulting decrease in size would represent roughly 50% of the smaller size (33/67).
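The percentage arithmetic in the comment above can be sketched in a few lines (the function name is mine): shaving p% off a file means the savings amount to p/(100-p) percent of the smaller file, which is always more than p%.

```python
def savings_relative_to_smaller(percent_saved: float) -> float:
    """Express a saving of p% of the original as a percentage of the reduced size."""
    return percent_saved / (100.0 - percent_saved) * 100.0

# The 500K -> 450K example: the 50K saved is about 11.1% of 450K.
print(round(savings_relative_to_smaller(10.0), 1))  # 11.1
# A 33% saving is roughly 50% of the smaller size (33/67).
print(round(savings_relative_to_smaller(33.0), 1))  # 49.3
```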