Information theory: HW #1 solution

Part I: computing the entropy of a given file and comparing with a standard archiving tool

My variant is 10, so the file I need to process, according to list_of_files.docx, is PIC. The first part of the task asks to compute the empirical probability distribution for block lengths $n ∈ \{1,2,3,4\}$, including all subblocks/substrings. I'll provide short code snippets in Haskell that show how it's done. The code examples are given mostly as a proof of work, because Haskell is a rather specific language and I acknowledge that understanding it from scratch requires a lot of effort; I'll leave some explanatory notes, though.

{-# LANGUAGE TemplateHaskell #-}

-- Imports used by the snippets below
import           Control.Monad   (forM_)
import           Data.ByteString (ByteString)
import qualified Data.ByteString as BS
import           Data.FileEmbed  (embedFile)
import qualified Data.Map        as M

-- The contents of the PIC file, embedded at compile time
file :: ByteString
file = $(embedFile "/home/volhovm/code/coding-theory/PIC")

-- Returns all substrings of length bSize of `file`
subBlocks :: Int -> [ByteString]
subBlocks bSize =
    map (\i -> BS.take bSize $ BS.drop i file)
        [0..(BS.length file - bSize - 1)]

-- Given a block size, returns a map from substring to number of occurrences
distributionMap :: Int -> M.Map ByteString Int
distributionMap bSize =
    foldr (M.alter $ Just . maybe 1 (+1)) M.empty $ subBlocks bSize

file is a top-level constant containing the contents of the file PIC; it is imported as a ByteString, which is essentially an array of bytes (Word8). subBlocks is a function that, given bSize (the length of the sliding window/block), returns all substrings of this length. distributionMap builds a map from string to int, where the key is a substring of length bSize and the value is the number of occurrences of this substring in the file.
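
To illustrate the counting on something smaller than PIC, here is a minimal standalone sketch with the same logic, parameterised by the input (countBlocks is a hypothetical helper, not part of the solution above):

import qualified Data.ByteString.Char8 as BS8
import qualified Data.Map              as M

-- Same counting logic as distributionMap, but over an explicit input
countBlocks :: Int -> BS8.ByteString -> M.Map BS8.ByteString Int
countBlocks bSize bs =
    foldr (M.alter $ Just . maybe 1 (+1)) M.empty $
    map (\i -> BS8.take bSize $ BS8.drop i bs)
        [0 .. BS8.length bs - bSize - 1]

-- countBlocks 2 (BS8.pack "ababa") == fromList [("ab",2),("ba",1)]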

The next step is computing the entropy $H(X)$:

-- Per-symbol entropy (natural log) for a given block size
entropy :: Int -> Double
entropy bSize =
  let dm = distributionMap bSize
      n = fromIntegral $ BS.length file - bSize
      distribution = M.map (\x -> fromIntegral x / n) dm
      entr = negate $ sum $ map (\x -> x * log x) $ M.elems distribution
  in entr / fromIntegral bSize

The variable dm holds the distribution map from the already introduced function. n is the number of all subblocks/windows inspected: $length(file) - blockSize$. distribution is the same map with modified values: each number of occurrences is divided by n, turning counts into empirical probabilities. Finally, entr is the entropy: for every element $x$ in distribution it computes $x \cdot \log x$, sums them all and negates the result, which is exactly $-∑_{z ∈ X} p(z) \log p(z)$, where $x = p(z)$. Dividing by the block size at the end gives the per-symbol entropy.
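
As a quick sanity check of the formula, here is a sketch of the same computation over an explicit count map, independent of the embedded file (entropyOfCounts is a hypothetical helper, not part of the solution):

-- Entropy (in nats, since `log` is natural) of an explicit count map;
-- the keys are irrelevant, only the counts matter
entropyOfCounts :: M.Map k Int -> Double
entropyOfCounts dm =
  let total = fromIntegral $ sum $ M.elems dm
      ps    = map ((/ total) . fromIntegral) $ M.elems dm
  in negate $ sum $ map (\p -> p * log p) ps

-- entropyOfCounts (M.fromList [('a', 9 :: Int), ('b', 1)]) ≈ 0.3251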

Finally, the entropy function is run $4$ times, for $bSize ∈ \{1..4\}$:

main :: IO ()
main = forM_ [1..4] $ \i -> putStr (show i ++ ": ") >> print (entropy i)

That produces the following result:

| n | entropy per char |
|---+------------------|
| 1 |           0.8388 |
| 2 |           0.7048 |
| 3 |           0.6328 |
| 4 |           0.5665 |

The next step is to apply an archiving algorithm to the data and compare the practical compression results with the theoretical ones. I chose to use the gzip algorithm from a library, to avoid the metadata overhead that a console archiver might add. The results with the best compression settings (where $U$ stands for the “uncompressed” file, $C$ for the compressed one, and $s(\cdot)$ for size) are:

| s(U) (bytes) | s(C) (bytes) |
|--------------+--------------|
|       513216 |        52218 |

As can be seen, the compression ratio is $≈9.828$. The number correlates well with $8 / 0.8388 = 9.54$ (bits in a byte / entropy per char) for block size $1$. Computing the other sizes gives ratios of $11.4$ for block size $2$, $12.6$ for $3$, and $14.2$ for $4$. So, for larger block sizes, the theoretical minimum compression ratio is higher than the practical one.
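
The exact library call is not pinned down above, but a minimal sketch of such a measurement, assuming the zlib package's gzip bindings, could look like this:

import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy   as BSL
import           Data.Int               (Int64)

-- Size in bytes of the gzip-compressed input at the best compression level
compressedSize :: BSL.ByteString -> Int64
compressedSize =
    BSL.length .
    GZip.compressWith GZip.defaultCompressParams
        { GZip.compressLevel = GZip.bestCompression }

-- e.g. compressedSize (BSL.fromStrict file)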

Part II: bonus task

Let's assume that $X = \{x, p(x)\}$ is a given discrete ensemble and $n$ is the length of the input file. After the first half of the file is processed, we know the values of $(x_1, …, x_{n/2})$, $x_i ∈ X_i = X$. Two common approaches to encoding the data are:

  1. Extending the alphabet, introducing fixed blocks of characters of the initial alphabet (say, of size $n$) as new alphabet characters.
  2. Taking into account the relation between the current character and the previous ones.

Theorem 1.5.c states that $H_n(X) ≥ H(X \mid X^{n-1})$, so a lower entropy (and hence a better code) is obtained when previous symbols are taken into account while constructing the encoding algorithm.
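
To make the inequality concrete, the conditional entropy can be estimated from the block entropies already computed in Part I; the following is a sketch under that assumption (condEntropy is not part of the original solution):

-- H(X_n | X_1 .. X_{n-1}) estimated as the difference of joint block
-- entropies; `entropy k` above is per-symbol, so multiply back by k
condEntropy :: Int -> Double
condEntropy n
  | n <= 1    = entropy 1
  | otherwise = fromIntegral n * entropy n
              - fromIntegral (n - 1) * entropy (n - 1)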

The most naive approach is to calculate, using the same procedure as in Part I, the entropy of the first and the second halves of our data file. Here are the results (obtained from a modified solution for Part I):

| n | First part | Second part |
|---+------------+-------------|
| 1 |     0.9961 |      0.6724 |
| 2 |     0.8370 |      0.5597 |
| 3 |     0.7456 |      0.4951 |
| 4 |     0.6589 |      0.4407 |

It can be seen that although the entropy of the first half is higher, it is less efficient to construct the encoding algorithm (say, a code over blocks of constant size) using information from the first half only: since the probability distributions of the two halves are not the same, any algorithm built in the setting of this task will be less efficient on the second half.
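
For completeness, here is a sketch of how the per-half entropies above might be obtained; this is an assumed modification of the Part I code (entropyOf and halves are not from the original solution):

-- Per-symbol entropy of an arbitrary ByteString, same formula as in Part I
entropyOf :: Int -> ByteString -> Double
entropyOf bSize bs =
  let blocks = map (\i -> BS.take bSize $ BS.drop i bs)
                   [0 .. BS.length bs - bSize - 1]
      dm     = foldr (M.alter $ Just . maybe 1 (+1)) M.empty blocks
      n      = fromIntegral $ BS.length bs - bSize
      probs  = map ((/ n) . fromIntegral) $ M.elems dm
  in negate (sum $ map (\x -> x * log x) probs) / fromIntegral bSize

-- The two halves of the embedded file
halves :: (ByteString, ByteString)
halves = BS.splitAt (BS.length file `div` 2) file

-- e.g. mapM_ (\i -> print (entropyOf i (fst halves), entropyOf i (snd halves))) [1..4]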