How does predictive coding aid in lossless compression?


I'm working on a lab where we need to apply lossless predictive coding to an image before compressing it (with Huffman coding, or some other lossless compression algorithm).



From the example below, it's pretty clear that pre-processing the image with predictive coding has modified its histogram, concentrating the grey levels around 0. But why exactly does this aid compression?



Is there maybe a formula for the compression rate of Huffman coding in terms of the standard deviation and entropy of the original image? Otherwise, why would the compression ratio be any different? It's not as if the range of values has changed between the original image and the pre-processed image.
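
Here's a minimal numpy sketch of what I mean (the synthetic gradient is only a stand-in for the actual lab image, which I haven't reproduced here):

```python
import numpy as np

def entropy(values):
    """Shannon entropy (bits/symbol) of the empirical distribution."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Synthetic 8-bit image: a horizontal gradient plus mild noise,
# standing in for the lab image.
rng = np.random.default_rng(0)
img = (np.tile(np.linspace(0, 255, 256), (256, 1))
       + rng.normal(0, 2, (256, 256))).clip(0, 255).astype(np.int16)

# Left-neighbour predictor: residual = pixel minus the pixel to its left.
residuals = np.diff(img, axis=1)

print(f"raw pixels: {entropy(img):.2f} bits/pixel")        # roughly 8
print(f"residuals:  {entropy(residuals):.2f} bits/pixel")  # roughly 3-4
```

The residual alphabet is actually wider ($-255$ to $255$), yet the measured entropy drops sharply, which is exactly the effect I'd like to understand.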





Thank you in advance,



Liam.










image-processing data-compression huffman-coding






      asked 3 hours ago









Liam F-A

1 Answer

























Huffman coding, as usually applied, only considers the distribution of singletons. If $X$ is the distribution of a random singleton, then Huffman coding uses between $H(X)$ and $H(X)+1$ bits per singleton, where $H(\cdot)$ is the (base-2) entropy function.
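
As a sanity check on that bound, here is a minimal Python sketch (the histogram is made up, chosen to look like a residual histogram peaked at 0) that computes Huffman code lengths and compares the average against $H(X)$:

```python
import heapq
from math import log2

def huffman_lengths(freqs):
    """Return {symbol: code length} for a Huffman code over `freqs`."""
    # Heap entries: (total weight, tie-breaker, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every leaf one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

# Made-up histogram peaked at 0, as after predictive coding.
freqs = {0: 60, 1: 15, -1: 15, 2: 5, -2: 5}
total = sum(freqs.values())
lengths = huffman_lengths(freqs)
avg = sum(freqs[s] * lengths[s] for s in freqs) / total
H = -sum((w / total) * log2(w / total) for w in freqs.values())
print(f"H(X) = {H:.3f} bits, Huffman average = {avg:.3f} bits")
# Prints H(X) = 1.695 bits, Huffman average = 1.750 bits: inside [H, H+1).
```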



In contrast, predictive coding can take into account correlations across data points. As a simple example, consider the following sequence:
$$
0,1,2,\ldots,255,0,1,2,\ldots,255,\ldots
$$

Huffman coding would use 8 bits per unit of data, whereas with predictive coding we could potentially get down to $O(\log n)$ bits for the entire sequence.
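
A quick sketch of that example (assuming a "previous value plus one, mod 256" predictor): the raw sequence has a uniform histogram, so a per-symbol Huffman code is stuck at 8 bits per symbol, while the prediction residuals are identically zero:

```python
import numpy as np

def entropy(values):
    """Shannon entropy (bits/symbol) of the empirical distribution."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

n = 10 * 256
seq = np.arange(n) % 256            # 0,1,...,255,0,1,...,255,...

# All 256 symbols are equally frequent, so a per-symbol Huffman code
# cannot beat the uniform entropy of 8 bits/symbol.
print(entropy(seq))                 # -> 8.0

# Predictor: "current = previous + 1 (mod 256)". Its residuals are
# identically zero: a single-symbol alphabet with zero entropy.
pred = (np.roll(seq, 1) + 1) % 256
residual = (seq - pred) % 256
print(entropy(residual))            # -> 0.0
```

Once the residuals are all zero, everything there is to know about the sequence is the predictor itself plus the length $n$, which is where the $O(\log n)$ total comes from.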






                answered 2 hours ago









Yuval Filmus
