Section 7
Hypercube Spread Spectrum Encryption System
HCSSES is a process, not an algorithm.
The HCSSES system supports 1/2 key, 1 key and 2 key designs.
The 1/2 key design protects a user's password by generating a digital signature from it; it uses only the application signature.
The 1 key system is used to protect stored data and uses 2 signatures: the application signature and the user's password. In my applications, the application has 2 signatures: 1 to protect the application's data or the user's default data, and 1 to protect the user's data. Each user's data can be protected with a different password. Since multiple default data sets are allowed in my applications, the user can use a different password to protect each set.
The 2 key system uses a separate encryption ( public ) and decryption ( private ) key. A master key is generated by the owning system to produce a decryption key ( whoever owns the decryption key is the owner ). The decryption key is private to the owner and is used to create a public encryption key and the algorithms that use the encryption key. The 2 key system is designed for single sessions, not for storing protected data.

With e-commerce, the store would create a random decryption key on the fly. The decryption key is used to create an encryption key and the encryption algorithms in the form of an applet. The applet is then transferred to the user and is used to validate the identity of the user with a user id and password. The applet creates a digital signature of the password, then encrypts the signature, user id and anything else. The validation packet is sent to the store, where it is decrypted. The digital signature and user id are extracted and compared to a client database of user ids and digital signatures. If the user id and signature match an entry in the client database, then the correct password was used with the given user id. Each data packet sent should contain the data, user id and digital signature. When the session ends, the keys and algorithms are discarded. This process eliminates the need to store and protect passwords. The digital signature is only valid during that one session, and its value has no meaning when not encrypted.
For the 2 key system there are several possible ways to generate the applet.
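As a rough sketch of the validation flow above ( in Python, with illustrative names only, and SHA-256 standing in for the applet's generated signature algorithm ):

    import hashlib

    def password_signature(user_id: str, password: str) -> bytes:
        # Stand-in for the applet's digital signature of the password.
        return hashlib.sha256((user_id + ":" + password).encode()).digest()

    # Store side: the client database holds user ids and password
    # signatures, never the passwords themselves.
    client_db = {"alice": password_signature("alice", "correct horse")}

    # Applet side: sign the entered password and build the validation
    # packet, which would then be encrypted with the session key.
    packet = {"user_id": "alice",
              "signature": password_signature("alice", "correct horse")}

    # Store side: after decrypting with the private session key, compare
    # against the client database entry.
    entry = client_db.get(packet["user_id"])
    print("validated" if entry == packet["signature"] else "rejected")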
Since HCSSES is designed to operate like a processor, using complex feedback loops and recursion, breaking the encrypted data in real time can be virtually impossible ( and can even take years ), even knowing the encryption key, encryption tables and decryption tables. With 1 GB tables, redundancy, feedback and recursion can make the first byte probability 1 in about 2^22 ( about 4 million ). Byte 2 probability would be 1 in 2^44 ( about 17 trillion ). At byte 6 ( the minimum data packet size ) the probability would be 1 in 2^704 ( about 8.4 x 10^211, say 84 million followed by a trillion repeated 17 times ). At this point HCSSES has achieved randomness equivalent to a DES-style key 704 bits long. This is why the first pass through the spreader keys has the most impact on security and why they are the only ones checked by the Qualify Key unit ( described below ).
The problem with this system is the time to download the applet with such large tables. With DSL at 200 kB per second ( about 1,600,000 bits per second ), three 1 MB tables will require no less than 15 seconds. Three 1 GB tables would require at least 4.2 hours. With a high speed dedicated digital line running at 1000 Mb per second ( 1,000,000,000 bits per second ), the applet transfer time for three 1 GB tables would be about 24 seconds. Thus, large tables are really only useful on local highly secured systems or where the security level must be at its maximum. Using tables with only one translation function will reduce the transfer time by 2/3.
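The transfer times fall out of simple arithmetic; a quick check in Python:

    def transfer_seconds(table_bytes, tables, bits_per_second):
        # Total bits to move divided by line speed.
        return table_bytes * tables * 8 / bits_per_second

    print(transfer_seconds(1_000_000, 3, 1_600_000))             # DSL, 3 x 1 MB: 15 s
    print(transfer_seconds(1_000_000_000, 3, 1_600_000) / 3600)  # DSL, 3 x 1 GB: ~4.2 h
    print(transfer_seconds(1_000_000_000, 3, 1_000_000_000))     # 1 Gb line, 3 x 1 GB: 24 s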
Why use a table?
A table is used to replace a computational algorithm. A table lookup is faster than complex calculations. A table exchanges memory for speed.
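For example ( Python, with a placeholder calculation ):

    def expensive(x: int) -> int:
        # Placeholder for a complex calculation.
        return pow(x, 257, 65537) & 0xFF

    TABLE = bytes(expensive(x) for x in range(256))  # built once

    value = TABLE[0xA7]   # one memory access instead of a calculation
    assert value == expensive(0xA7)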
Why three tables?
Complex calculations are done in three basic areas of the HCSSES. The first is the calculation of N. The second is the calculation of Vz. The third is the calculations to spread the Vz results. The lookup tables simulate the effects of these calculations. The redundancy of the tables is what makes reverse table lookup no better at breaking the encryption than doing reverse processing on the algorithms.
The demo translation algorithms use a data space equivalent to three tables, each with a size of 2^14 ( about 16,000 ) bytes. With an average modem transfer speed of 22 kb per second ( 2,750 bytes per second, common for 28k baud modems ), an applet of the demo's table size would require about 18 seconds to download. This would be fast enough and secure enough to protect a user's order in a single session. The 6th byte probability is 1 in about 2^93 ( about 12 thousand trillion trillion ).
If a wire tap captured the encrypted data in transit, had the decryption algorithms and tried every possible recursive table lookup combination at a rate of one billion decryption keys per second ( this would be one wickedly fast computer ), the longest decode time would be about 387 billion years.
There are about 31 million seconds in a year. So the computer would be processing about 31 thousand trillion keys a year.
387 x 10^9 years = ( 12 x 10^27 keys ) / ( 31 x 10^15 keys per year ), or, canceling the common powers of ten, 387 = 12,000 / 31.
A PowerPC processor running at 500 MHz, using specially developed code to do nothing but process keys, might achieve a key rate of 150 million keys per second ( assuming the algorithm is stored completely in level 1 cache and is never interrupted ); the longest time to compromise would then be about 2.5 trillion years. However, the demo is not table based. Since it's a 1 key design with fixed algorithms and the decryption algorithm is known, it can be broken in a matter of minutes.
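The same arithmetic, worked out directly in Python:

    SECONDS_PER_YEAR = 31_000_000
    keyspace = 12_000 * 10**24    # ~2^93, "12 thousand trillion trillion"

    # One billion keys per second:
    print(keyspace / (1_000_000_000 * SECONDS_PER_YEAR))  # ~3.9e11 years, 387 billion
    # 150 million keys per second:
    print(keyspace / (150_000_000 * SECONDS_PER_YEAR))    # ~2.6e12 years, about 2.5 trillion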
The HCSSES key length should not exceed the length of the data ( the additional key codes would never be extracted ). The usual key length for HCSSES is around 48 bits ( the minimum data packet size ). A longer key is needed only to make complex key code sequences easier to generate and maintain. The security of HCSSES is not directly related to the length of the key.
HCSSES security depends on:
- the redundancy of the algorithms;
- single session operation;
- variable data sequencing ( same data, same key, different results );
- spreading performance;
- data abstraction ( the encrypted data is a path through hyperspace, not a scrambling of the user data );
- eliminating as many involved parties as possible ( digital certificates add another avenue for a security breach );
- multi-part security ( no one party has complete knowledge of the key, which comes in three parts: application, user and owner );
- eliminating the complexity of layered security systems like SSL and TLS;
- eliminating the need to store passwords and the security required to protect them;
- having no single public algorithm;
- having no dependency on various applications' ability to handle encryption or security.
The description that follows covers the implementation used in the demonstration application, which uses a single key to encrypt and decrypt. The key is created by fusing the application signature and the user's password. The application signature is generated to produce a good spread regardless of the effects of the user's password ( thus allowing the user to input anything ). The demo is also set up to compress the output data to reduce data expansion.
Refer to figure - HCSSES Algorithm, for the following discussion.
Inputs: Key, Data, Mode, Sys7.5
The Key contains the instructions used by the transformation functions that encrypt the data.
The Data is the data to be encrypted.
The Mode contains a set of flags to modify the behavior of the Key Sequencer.
The Sys7.5 flag determines which pseudo random number to use for the White Noise Generator.
Outputs: Edata, Fail
Edata is the encrypted data.
Fail contains the error code of the encrypter.
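As a hypothetical sketch of this interface in Python ( the names and types here are assumptions, not the demo's actual API ):

    from dataclasses import dataclass

    @dataclass
    class EncryptResult:
        edata: bytes   # the encrypted data
        fail: int      # error code of the encrypter, 0 on success

    def hcsses_encrypt(key: bytes, data: bytes, mode: int, sys75: bool) -> EncryptResult:
        ...   # body elided; the units described below do the processing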
The Key Sequencer is the heart of the encrypter. Upon demand from another function, it extracts a function ( key ) code from the next instruction in the key. The basic sequencer generates three types of key codes: spread, normalize and vector. The Key Sequencer is composed of two basic units, the Key Start unit and the Sequencer.
The Key Start unit determines the starting position for key code extraction by the Key Sequencer. It basically determines the start point in hyperspace for creating the Edata.
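A minimal sketch of the two units ( the 2-bit key code packing is an assumption made for illustration ):

    SPREAD, NORMALIZE, VECTOR = 0, 1, 2

    class KeySequencer:
        def __init__(self, key: bytes, start: int = 0):
            self.key = key
            self.pos = start   # the Key Start unit picks this offset

        def next_code(self) -> int:
            # Extract the next 2-bit function ( key ) code, wrapping
            # around the key, and map it onto the three code types.
            byte = self.key[(self.pos // 4) % len(self.key)]
            code = (byte >> (2 * (self.pos % 4))) & 0b11
            self.pos += 1
            return code % 3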
The Qualify Key unit determines the quality of the key in relation to the spreader function. It extracts key codes from the key in the same order the other functions do, but examines only the spread key codes, determining whether their quality will spread the data well enough to meet the minimum security requirements. The Qualify Key plots the spread quality of each spread key code against the weighted spread key code { Q, KC }. Under perfect conditions, this would plot a bell curve. The plotted data with a randomly generated key will cluster ( group ). The cluster is examined to determine where on the perfect bell curve it fits; this computation gives the Key Quality value. The spreader function determines the minimum Key Quality; normally any Key Quality above two is good enough for most purposes. The Qualify Key is used only if the mode setting for it is set. Normally, you would check the quality of a key only during key creation, to reduce the encryption time. The Qualify Key checks only a minimum number of spread key codes, usually just the first 32 ( whatever the Edata packet size is ). If the spreader functions are written well, then the startup spread key codes have the most impact on the security of the Edata.
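A hedged sketch of the idea, with spread_quality standing in for the real spread measure:

    def spread_quality(key_code: int) -> float:
        # Stand-in weighting; the real measure plots { Q, KC } pairs.
        return float(bin(key_code).count("1"))

    def qualify_key(spread_codes, minimum=2.0, check=32):
        # Score only the first `check` spread key codes ( the Edata
        # packet size ) and require the mean quality clear the minimum.
        codes = spread_codes[:check]
        quality = sum(spread_quality(c) for c in codes) / len(codes)
        return quality >= minimum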
The Random Number Generator is a basic pseudo random number generator. It must deliver random numbers of the size required by the White Noise Generator. The Random Number Generator is generally used only in the startup phase.
The White Noise Generator is a specialized pseudo random number generator. It generates one and zero sequences in a packet size requested by a spreader function.
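A minimal sketch, with Python's random module standing in for the pseudo random source:

    import random

    class WhiteNoiseGenerator:
        def __init__(self, seed: int):
            self._rng = random.Random(seed)

        def packet(self, bits: int) -> int:
            # Return `bits` one-and-zero bits as an integer packet.
            return self._rng.getrandbits(bits)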
The Data Auto Fill? function makes sure that enough input data is supplied to meet the minimum security requirements. The minimum is usually 48 bits ( 6 bytes ). If the data is short, it gets enough bits from the White Noise Generator to meet the minimum data packet size. If noise is added, the data is spread to distribute the real data throughout the data packet. If dummy data is required but the mode is set not to allow it, the function fails.
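A sketch of the fill rule, using the WhiteNoiseGenerator above ( the simple append stands in for the real spreading of the data through the packet ):

    def auto_fill(data: bytes, noise, allow_dummy: bool, minimum: int = 6) -> bytes:
        if len(data) >= minimum:
            return data
        if not allow_dummy:
            raise ValueError("data below minimum packet size, padding disallowed")
        short = minimum - len(data)
        pad = noise.packet(8 * short)          # dummy bits from the noise source
        return data + pad.to_bytes(short, "big")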
The Spread function spreads an input data packet ( usually 8 bits ) through a white noise data packet ( 8 to 32 bits ). The spread key code determines how the data will be spread. The larger the white noise data packet, the larger the Edata packet size will be. All the Spread functions are the same except for the Spread Extra function, which deals with a lot more data.
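A sketch of spreading under one assumed mapping ( the real carrier positions come from the spreader tables and algorithms ):

    def spread(data_byte: int, noise: int, width: int, key_code: int) -> int:
        # The key code selects which noise-bit positions carry data bits.
        positions = [(key_code + i * (width // 8)) % width for i in range(8)]
        out = noise
        for i, p in enumerate(positions):
            bit = (data_byte >> i) & 1
            out = (out & ~(1 << p)) | (bit << p)  # replace noise bit with data bit
        return out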
The Node Signature function takes a key value and a data value and computes a node signature. The K and D values form a matrix whose values are resequenced to form an NS value.
The NS Flags function extracts a number of bits from the NS that help to identify an invalid path early in decryption. The NS flags are added to D.
The Vector function computes a 3-D vector from N, DF and K.
The Zero Reference function computes a new reference point for the next vector calculation.
The Normalize function normalizes the 3-D vector into a smaller unit ( the size of the original data or 8 bits ). The 3-D vector contains more bits than the original data due to the spreader functions.
The Make Path function computes a vector from the last V to the new V. This value is stored in an array.
The Signature functions all compute digital signatures of the various data packets. The signatures are then added to the Edata.
The K and D values form a matrix whose values are resequenced to form an N value.
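A sketch of one possible resequencing ( a bit-column interleave; the real order is part of the key-dependent algorithms ):

    def node_value(k: int, d: int, bits: int = 8) -> int:
        # Interleave the bit columns of K and D into a 2*bits-wide N.
        n = 0
        for i in range(bits):
            n |= ((k >> i) & 1) << (2 * i)
            n |= ((d >> i) & 1) << (2 * i + 1)
        return n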
Refer to figure - Translating 2-D to 3-D, for the following discussion.
The K, D and N values represent a point in 3-D space, shown as V. The vector Vz is a path from the origin to V. This process converts 1-D data into 3-D data.
The Vz value is normalized to form a new zero reference point in the V calculation.
The process is repeated for each input data value ( figure - Translating 3-D to 4-D ).
A path from the last V to the original origin creates the last Vz as shown in figure - 4-D Path. The Vz array creates a 4-D loop through hyperspace.
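A sketch of the path construction; note the Vz vectors sum to zero, i.e. the path is a closed loop:

    def build_path(triples):
        # Each ( K, D, N ) triple is a 3-D point V; Vz runs from the
        # current zero reference to V, which becomes the new reference.
        path, ref = [], (0, 0, 0)
        for v in triples:
            path.append(tuple(a - b for a, b in zip(v, ref)))
            ref = v
        path.append(tuple(-a for a in ref))   # close the loop to the origin
        return path

    vzs = build_path([(3, 1, 4), (1, 5, 9), (2, 6, 5)])
    assert tuple(map(sum, zip(*vzs))) == (0, 0, 0)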
To see how the Edata varies each time the same data is encrypted with the same key, the hypercube is translated into a hypersphere.
The 4-D path can be normalized by finding a single 3-D space that contains the 4-D path. This allows a circle to be calculated from the 4-D loop in 3-D space. The center of the circle is calculated as in figure - Translating V onto a Sphere. A vector is computed from the center through each point ( V ). The longest vector is used to form a sphere, and all other vectors are extended until they make contact with the surface of the sphere. Although the sphere appears 3-D, it actually exists in hyperspace ( the points are actually in different 3-D spaces ).

The points on the surface are normalized to fall on the same longitude, as shown in figure - Translating V onto a Longitude. This is where they would appear on a sphere without spreading. Spreading the input data moves a point on the surface of the sphere along the X and Z axes ( changes the longitude value ), as shown in figure - Spreading V. Spreading the key moves a point along the Y and Z axes ( changes the latitude value ). Spreading the node moves a point along the X and Y axes ( changes the longitude and latitude values ).

The Edata is the vectors from the center to the points on the surface of the sphere. Since spreading is a mixture of fixed data ( the input data ) and variable data ( noise ), each encryption with the same data and key will produce varying Edata. Picking a random starting point in hyperspace ( any Vz vector ) also causes the Edata to vary.
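A rough numeric sketch of the sphere step ( the centroid is a stand-in for the circle-center construction in the figure ):

    import math

    def to_sphere(points):
        n = len(points)
        center = tuple(sum(c) / n for c in zip(*points))       # stand-in center
        radius = max(math.dist(p, center) for p in points)     # longest vector
        surface = []
        for p in points:
            v = tuple(a - b for a, b in zip(p, center))
            length = math.sqrt(sum(a * a for a in v)) or 1.0
            # Extend the vector until it contacts the sphere surface.
            surface.append(tuple(center[i] + v[i] * radius / length for i in range(3)))
        return center, radius, surface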
Last updated: Mar 4, 2011