Sunday, September 27, 2009

Survey

1. I've spent about 2-4 hours on each assignment. I would say the lecture and reading prepared me for them.

2. Hmmmm. Tough one. I don't like the algorithmic approach we sometimes take. For instance, DES is an algorithm, and learning to perform it is like learning how to take a derivative: useful, and particularly involved, but I don't feel like I'm gaining insight into why the algorithm works. XORing and (mod 2) arithmetic have specific meanings and properties related to information and such, but I feel like we're assuming they just obfuscate the data, which means we could very easily fall into the trap of constructing an encryption system with some elementary attack vectors. I don't know what the long-term goals are for the curriculum. My feedback is based on the assumption that we will learn about and analyze possible attack vectors and be able to prove that certain aspects of a given cryptosystem (even a self-concocted one) are secure. All of that said, I like the theory and I like the demonstration of certain cryptosystems' weaknesses.
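To be concrete about the XOR point, here's a tiny toy example (mine, not the book's) of the kind of property I mean: XOR with the same key bits undoes itself.

```python
# Toy illustration (not from the reading): XOR with a key stream is its own
# inverse, since x ^ k ^ k == x. That self-inverse property, not vague
# "obfuscation", is what makes the operation usable for encryption.
plaintext = b"attack at dawn"
key = bytes([0x5A] * len(plaintext))           # a made-up, repeated key byte

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == plaintext                  # decryption is the same operation
```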

3. I would like a term project where we create our own cryptosystem (in groups, maybe) and "publish" the algorithm for the class. We would then analyze the other groups' algorithms and see if we could find a way to break each cryptosystem without resorting to a full-on brute force attack. There could be restrictions on key length or something so that a brute force attack would be possible, to aid in finding another way to break it. I think this would be a good way to be honest with ourselves in the creation of an information security system and to get a tiny bit of independent research experience.

Thursday, September 24, 2009

3.11

1. I always lose my focus when he starts talking about bits and such. Surprisingly, I think I understood the Galois fields (if that's what they were).
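For my own notes, here's what I think arithmetic in GF(2^8) looks like. I'm assuming the AES-style reducing polynomial x^8 + x^4 + x^3 + x + 1, since I don't have the exact example from class in front of me.

```python
# A sketch of multiplication in GF(2^8): polynomial multiplication over GF(2)
# (so addition is XOR), reduced by the polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf256_mul(a, b, mod=0x11B):
    result = 0
    while b:
        if b & 1:
            result ^= a          # "adding" a is XOR in GF(2)
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree reached 8, so reduce
            a ^= mod
    return result

# The worked example from the AES spec: {57} * {83} = {c1}.
assert gf256_mul(0x57, 0x83) == 0xC1
```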

2. A few months ago, I read an article about a company (or maybe it was just a guy) claiming to be able to manipulate encrypted information in a meaningful way without destroying the integrity of the data and without knowing the actual data. This was via a cryptographic homomorphism, which, I think, simply means that certain operations can be performed on x by performing a corresponding operation on E(x). It seems to me that this is a crock. Is sufficiently encrypted information indistinguishable from random noise or what?
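To check my own understanding of the claim, here's a tiny sketch (toy numbers of my choosing, nothing from the article) of the multiplicative homomorphism in textbook RSA: the product of two ciphertexts decrypts to the product of the two plaintexts.

```python
# Sketch of the multiplicative homomorphism in textbook RSA, with tiny,
# insecure parameters chosen only for illustration.
p, q, e = 61, 53, 17
n = p * q                                 # public modulus
d = pow(e, -1, (p - 1) * (q - 1))         # private exponent (Python 3.8+ pow)

def E(x):                                 # "encrypt": x^e mod n
    return pow(x, e, n)

def D(y):                                 # "decrypt": y^d mod n
    return pow(y, d, n)

x1, x2 = 42, 7
product_of_ciphertexts = (E(x1) * E(x2)) % n
assert D(product_of_ciphertexts) == (x1 * x2) % n   # E(x1)*E(x2) acts like E(x1*x2)
```

So for this one operation (multiplication mod n), the idea at least checks out mechanically, though it says nothing about arbitrary operations.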

Tuesday, September 22, 2009

4.5-4.8

1. This was a nice history lesson. The hardest part was the different modes.
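To keep the modes straight for myself, here's a toy sketch of ECB versus CBC chaining. The "block cipher" here is just XOR with a fixed key block (a stand-in of mine, not DES), so only the chaining structure is meaningful.

```python
# Toy comparison of ECB vs. CBC. The block "cipher" is a placeholder (XOR with
# a fixed key block); the point is how CBC chains blocks together.
BLOCK = 4
KEY = b"\x13\x37\xc0\xde"

def toy_encrypt_block(block):
    return bytes(b ^ k for b, k in zip(block, KEY))

def ecb(blocks):
    return [toy_encrypt_block(b) for b in blocks]

def cbc(blocks, iv):
    out, prev = [], iv
    for b in blocks:
        chained = bytes(x ^ y for x, y in zip(b, prev))   # XOR with previous ciphertext
        prev = toy_encrypt_block(chained)
        out.append(prev)
    return out

blocks = [b"AAAA", b"AAAA", b"BBBB"]          # note the repeated plaintext block
print(ecb(blocks))                             # identical blocks encrypt identically
print(cbc(blocks, iv=b"\x00" * BLOCK))         # chaining hides the repetition
```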

2. There are a lot of ways to measure whether some encryption is "secure enough". If there is no demand for the data being encrypted, any encryption (or none) is good enough. DES and some other algorithms seem to get their security from the fact that breaking them is computationally infeasible, but general-purpose programming on data-parallel processors (graphics cards) is becoming more and more accessible. I'm not exaggerating when I say that a top-of-the-line GPU today (with 240 compute cores) can be over two orders of magnitude faster than the top Intel chips (the Core i7, as they're called) on data-parallel workloads. So what, do we continue to increase key length? We've done that a little with RSA, and it's a testament to Moore's law and capitalist ingenuity that we've had to. But when quantum computing becomes available, the time to break RSA (via Shor's algorithm) becomes polynomial in the key length, not exponential. It's almost stupid to get cracking on encrypted data now, since you could wait for (or work toward) quantum computers and crack the encryption long before your classical computing device yielded any fruit. I guess what I'm saying is that "computationally infeasible" is super relative, and proofs of security should not rely on such a property.
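For scale, here's the back-of-the-envelope arithmetic I have in mind. The keys-per-second rates are rough guesses of mine, not benchmarks.

```python
# Rough brute-force time estimates for exhaustive key search.
# The rates (keys/second) are assumptions for illustration, not measurements.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (56, 80, 128):                 # DES-sized key up to a modern symmetric key
    keyspace = 2 ** bits
    for rate in (1e9, 1e12):               # guess: one fast core vs. a GPU cluster
        years = keyspace / rate / SECONDS_PER_YEAR
        print(f"{bits}-bit key at {rate:.0e} keys/s: ~{years:.2e} years")
```

Even with the generous rate, the cost grows exponentially in the key length; the quantum case is different precisely because Shor's algorithm changes the growth rate, not just the constant.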

Monday, September 21, 2009

4.1-4.2,4.4

1. I had a hard time following the algorithm, or at least appreciating what the algorithm was doing. It'd be nice to go over the proof again, too.

2. The DES algorithm seems like a convoluted process. The fact that many suspected there was a backdoor suggests to me that some security through obscurity is employed with this algorithm. The book said that the initial permutation was probably there to optimize it for the hardware of the time. This, plus the linear feedback shift register cipher, makes me wonder how much an algorithm's security is compromised in the name of speed.
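To remind myself what "linear feedback" actually means, here's a sketch of an LFSR keystream generator. The taps and seed are arbitrary choices of mine, not anything from the book.

```python
# Sketch of a linear feedback shift register (LFSR) keystream generator.
# Each output bit is the current first bit; the new bit appended to the state
# is a fixed XOR (i.e., linear, mod 2) combination of the current state.
def lfsr(seed_bits, taps, n):
    state = list(seed_bits)
    out = []
    for _ in range(n):
        out.append(state[0])
        feedback = 0
        for t in taps:
            feedback ^= state[t]         # the "linear feedback" recurrence
        state = state[1:] + [feedback]
    return out

print(lfsr([1, 0, 1, 1, 0], taps=[0, 2], n=20))
```

The linearity is what makes it cheap in hardware, and it's also what makes it solvable from a short stretch of known keystream, which is exactly the speed-versus-security trade I'm wondering about.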

This slideshow article came up today. It's not super interesting, but it is topical: a pictorial representation of the past six decades of research and development in encryption.

Thursday, September 17, 2009

2.8-2.11

1. Why does the LSB of the output of the trapdoor function on predictable (consecutive, no less) inputs give a pseudorandom number sequence? Wouldn't it depend on the function, or is this a characteristic of trapdoor functions?
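One construction I ran across that seems related is the Blum-Blum-Shub generator, which keeps only the least significant bit of repeated squaring mod n. I'm not sure it's the same thing as in the reading, but here's a toy sketch with made-up parameters.

```python
# Toy sketch of a Blum-Blum-Shub style generator: square mod n repeatedly and
# keep only the least significant bit of each state. Tiny parameters for
# illustration only; real use needs large primes.
p, q = 11, 23              # both are 3 mod 4 ("Blum primes")
n = p * q
x = 3                      # seed, coprime to n

bits = []
for _ in range(16):
    x = (x * x) % n        # the hard-to-invert step (modular squaring)
    bits.append(x & 1)     # the claim is that this bit is hard to predict
print(bits)
```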

2. All of this should be prefaced, "if I'm not mistaken..." According to Shannon's theory of entropy and information, a sequence with the highest entropy (and so information content) is a totally random one since it cannot be predicted. So why can a sequence generated from a relatively small number of bits be cryptographically secure? Isn't it impossible for such a sequence to have more information than its inputs? (I'll do you a favor and stay away from the sophomoric pontification: "Is anything really random?")
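To make the entropy comparison concrete for myself, here's a quick per-symbol Shannon entropy estimate on some strings I made up.

```python
# Per-symbol Shannon entropy estimate: H = -sum(p_i * log2(p_i)) over the
# observed symbol frequencies of the string.
from collections import Counter
from math import log2

def entropy(s):
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(entropy("aaaaaaaaaaaaaaaa"))     # 0 bits/symbol: perfectly predictable
print(entropy("attack at dawn!!"))     # English-ish: somewhere in between
print(entropy("q8#Zp1!vR2@kW9$x"))     # all-distinct symbols: log2(16) = 4 bits/symbol
```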

Wednesday, September 16, 2009

3.8, 2.5-2.8

1. The most difficult part for me was finding the decryption matrix with a known plaintext attack in section 3.7ish.
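Working it out on my own helped. Here's a sketch of the known-plaintext attack on the Hill cipher, assuming the convention that plaintext blocks are row vectors and C = P*K (mod 26). The key and plaintext blocks below are made-up values, and I'm leaning on sympy for the matrix inverse mod 26.

```python
# Known-plaintext attack on the Hill cipher: if C = P * K (mod 26) and the
# plaintext block matrix P is invertible mod 26, then K = P^(-1) * C (mod 26).
from sympy import Matrix

def mod26(M):
    return M.applyfunc(lambda x: x % 26)

K = Matrix([[3, 3], [2, 5]])           # the secret key (invertible mod 26)
P = Matrix([[7, 8], [11, 11]])         # two known plaintext blocks, as rows
C = mod26(P * K)                       # the ciphertext the attacker also knows

K_recovered = mod26(P.inv_mod(26) * C)
assert K_recovered == K
```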

2. I was struck by the mention of cryptography in the pop culture of the past. The chain of secrecy is only as strong as its weakest link. By that I mean: suppose you have really sensitive information that you are encrypting with RSA. That's pretty secure, but not impossible to break; it might take some supercomputing resources or some mathematician resources. It may then be the case that one of the ends of the communication link is more vulnerable. If someone has access to the information, maybe they can give that access away. Maybe the going rate for hired goons is lower than the rate for mathematicians. Speaking of mathematicians, they are generally known (as compared to engineers, say) for their idealism, since their world is pure in a sense, perfectly described by certain principles. Devoid of the human element is maybe what I'm getting at. But that's not satisfactory when real security is on the line. I mean, don't some proofs of security rely on the existence of a perfect random oracle, which doesn't exist? Anyway, we can't neglect the human element. That sounds silly.

Sunday, September 13, 2009

2.3

1. The most difficult part for me is following the use of the dot product for probabilities. The reasons follow...

2. It's interesting to me when a mathematical operation is taken out of the context that probably motivated its creation and put into a different one. Of course, we're a long way from geometric interpretations in any advanced math course. If we're looking for intuition (which I am), we might consider the 26-dimensional vector space in which W exists. A higher dot product would mean the vectors are more "coincident", right? As one shifts the vector, one is actually shifting the components into adjacent dimensions. Of course, I am conditioned to think of vector spaces when we use dot products. I am probably shoehorning this concept into an entirely unrelated framework that would not only give no more insight into the problem, it would also destroy creativity.
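Here's the shifted-dot-product idea written out as a sketch. The frequency table is the usual approximate English one, and the direction of the shift is how I understand the method, so take it with a grain of salt.

```python
# For one column of a Vigenere ciphertext (lowercase a-z), dot the observed
# frequency vector W against shifted copies of standard English frequencies;
# the shift that maximizes the dot product is the likely key shift.
ENGLISH = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
           0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
           0.063, 0.091, 0.028, 0.010, 0.023, 0.001, 0.020, 0.001]

def best_shift(column):
    counts = [0] * 26
    for ch in column:
        counts[ord(ch) - ord('a')] += 1
    W = [c / len(column) for c in counts]
    scores = [sum(W[(i + s) % 26] * ENGLISH[i] for i in range(26))
              for s in range(26)]
    return max(range(26), key=lambda s: scores[s])   # the key shift for this column
```

Given enough ciphertext in a column, best_shift should return the Caesar shift used at that position of the key.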

Thursday, September 10, 2009

2.1-2.2, 2.4

1. Running frequency analysis on English plaintext is good and all, but in the real world, not only is English with punctuation encrypted, but so are all types of binary data, including compressed data (which has more entropy, I think). If something is entropy encoded, is it no longer susceptible to frequency analysis attacks?

2. I like how he said that everyone "knows" that substitution ciphers can be broken by frequency analysis. It's kind of like the budding cryptographer's mantra: "run some frequency analysis on it". Do all languages exhibit this kind of statistical exploitability? Chinese has a few thousand characters and the semantics and meanings are largely contextual. I suppose digrams and trigrams take that into account to a degree, but I wonder what Chinese codes looked like.
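In that spirit, here's a minimal "run some frequency analysis on it" sketch that counts single letters and digrams. The ciphertext is a made-up string I Caesar-shifted by hand.

```python
# Count single-letter and digram frequencies in a ciphertext, the raw material
# for a frequency-analysis attack on a substitution cipher.
from collections import Counter

ciphertext = "wklvlvmxvwdsodfhkroghuvwulqj"   # a made-up, Caesar-shifted sample

letters = Counter(ciphertext)
digrams = Counter(ciphertext[i:i + 2] for i in range(len(ciphertext) - 1))

print(letters.most_common(5))   # compare against e, t, a, o, i, n, ...
print(digrams.most_common(5))   # compare against th, he, in, er, an, ...
```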

Lecture

1. Larrabee's cipher was (self-)proclaimed to be the greatest cipher ever invented or discovered, with Hogg's improvements. That seems a little weird. Probably the most difficult thing for me here is not falling into the same trap they did. Yeah, I know it's a cop-out.

2. I was reading the liberal internet the other day (actually a few years ago), and some people were discussing their disgust with President Bush's wiretapping policy and the administration's appeal to the idea "if you have nothing to hide, why does privacy matter?" I suppose one could make the argument that we (as a society) conflate privacy with secrecy or security or something, but I wonder what their response would be to early Church leaders. Certainly they had a need to keep things secret from their enemies, but I think they are often labelled as aggressors hiding secret plans and not as victims trying to hide their location (or something). On the other hand, they openly suggested bribing politicians to get legislation through, so we can be sure those were different times.

Thursday, September 3, 2009

3.2-3.3

1. I took Number Theory during Summer term, so most of this is really fresh in my mind. I can never remember the proof that if (a,n)=d and d divides b, then ax=b (mod n) has exactly d solutions distinct mod n. Leaving fractions in congruence relations is new to me.
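Since I can never remember the proof, here's at least a brute-force sanity check of the statement with arbitrary small numbers.

```python
# Brute-force check: if gcd(a, n) = d and d | b, then ax = b (mod n) has
# exactly d solutions distinct mod n. The values below are arbitrary.
from math import gcd

a, b, n = 6, 4, 10                     # gcd(6, 10) = 2, and 2 divides 4
d = gcd(a, n)
solutions = [x for x in range(n) if (a * x - b) % n == 0]
print(d, solutions)                    # prints 2 [4, 9] -- exactly d solutions
```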

2. I was brought up with the doctrine that math was a tool and that the motivation for inventing/discovering new math was solving problems. Certainly Newton's development of calculus jibed with this. Recently in my math courses, I have heard professors say that math is something where you invent rules and see what ramifications those rules have. I have also had two (old) math professors talk about how the vast majority of dissertations are on such esoteric topics that there is no foreseeable application of any results obtained in the research. These two ideas have instilled a bit of cynicism in me. Am I naive in thinking that these "invented" rules are not arbitrary--that there is some real-world motivation for them? And if we assume there is no application for a set of different topics, what does it matter which one is researched, since it's all mind-games anyway?... Now the moral: when I learned about congruence and divisibility, I wavered between considering them exclusively either obvious or (more often) inconsequential. Same with primes. But modern cryptography rests on these concepts and (essentially) truths. Did the mathematicians who developed them see the potential? I suppose they didn't, but then why did these ideas disseminate among mathematicians and become fixed in the curriculum?

Tuesday, September 1, 2009

Introduction

I am a senior majoring in Mathematics and minoring in Computer Science. Post-calculus, I have taken linear algebra, ordinary diff. eq., partial diff. eq., abstract algebra, math 190, combinatorics, number theory, theory of analysis 1, calc. of several variables, and discrete math (CS 236). I am taking this class because it is a part of my coherent set, which also includes combinatorics and number theory. The coherence is that they will all contribute to work in information security. I don't have much experience with any CAS, but I have used Maple before. Dr. Vianey Villamizar is one of the better math instructors I've had. He is really good at exposing the motivation for certain constructions (right word?) like the Fourier transform, so that the material seems more approachable and accessible. If I were unable to come to your office hours, MWF 2-3pm or anytime TTh would work for me.