
On May 20, 2011, at 5:16 PM, Sudeep Khuraijam wrote:
I could not help but admire nanog in its full form ;) and I cannot resist anymore. Allow me to suggest the EPR paradox machine.
The cost of regenerating unpredictable information is inefficient by orders of magnitude, but wait... isn't it what we are trying to solve?
On May 20, 2011, at 1:32 PM, Paul Timmins wrote:
On 05/20/2011 03:34 PM, Paul Graydon wrote:
On 05/20/2011 08:53 AM, Brett Frankenberger wrote:
Even if those problems were solved, you'd need (on average) just as many bits to represent which digit of pi to start with as you'd need to represent the original message.
-- Brett

Not quite sure I follow that. Surely "start at position xyz, carry on for 10000 bits" shouldn't be as long as sending all 10000 bits?
Yes, it will be as long or longer (on average), because you have to represent position XYZ in some fashion and send that representation to the decoder, and it can easily be longer than the original message. Suppose it takes 20,000 bits to represent XYZ. How have you saved bits? Having XYZ be longer than the original message is just as likely as having it be shorter.

The same problem applies to the original suggestion. You will not (on average) save bits. If typical messages are not totally random, you can compress by considering the nature of that non-randomness and tailoring your compression accordingly. These schemes use random strings / hashes for their compression, and thus will (on average) not save bits even if a message is highly non-random.

Regards,
Marshall
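[Marshall's pigeonhole argument can be checked empirically. The sketch below is not from the thread; it uses a pseudo-random bit stream as a stand-in for pi's binary digits (which are believed to behave statistically like random bits) and measures how many bits it takes just to encode the starting position of a random n-bit message within the stream:]

```python
import random

random.seed(1)

n = 16                      # message length in bits
trials = 300
stream_len = 1 << (n + 2)   # 4 * 2**n bits: long enough to usually contain the message

found = 0
total_index_bits = 0
for _ in range(trials):
    # A fresh random bit string stands in for a window of pi's binary digits.
    stream = format(random.getrandbits(stream_len), f'0{stream_len}b')
    msg = format(random.getrandbits(n), f'0{n}b')
    idx = stream.find(msg)
    if idx >= 0:
        found += 1
        # Bits needed just to transmit the starting position to the decoder.
        total_index_bits += max(idx, 1).bit_length()

avg_index_bits = total_index_bits / found
print(f"found {found}/{trials}; average index costs {avg_index_bits:.1f} bits "
      f"vs. {n} bits for the message itself")
```

[On typical runs the average index cost comes out within a bit or two of n itself, i.e. "start at position XYZ" costs about as much to send as the message did, which is the pigeonhole argument in action.]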
Currently we have a compression algorithm for doing this already in widespread use. We create a list of numbers ranging from 1 to 255 and then provide an index into that array. We save space by assuming it's a single character.
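[Tongue in cheek, but the joke above can be made literal in a few lines of Python: a character is just an index into a fixed table that sender and receiver already share, so the "message" 'A' travels as the single number 65.]

```python
# The shared table here is ASCII/Unicode; ord() looks up a character's
# index in it, chr() decodes the index back to the character.
index = ord('A')
decoded = chr(index)
print(index, decoded)
```

[The "space saving" is real only because both ends agreed on the table in advance, which is exactly the extra shared context the pi scheme lacks.]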
____________________________________________
Sudeep Khuraijam | Netops | liveops | Office 408 8442511 | Mobile 408 666 9987
skhuraijam@liveops.com | aim: skhuraijam