Wednesday, December 9, 2009

16.5

1. This section wasn't extremely difficult, but I would like to investigate what properties inverses in some field have when they are considered in an enclosing structure, like the ring Z.

2. We've talked a couple of times about how there is no real notion of smallness for points on the curve E, and how for that reason we can't use the same methods to attack the discrete logarithm problem for elliptic curves. At first I was tempted to say that we picked the wrong analogy and that comparing it to a discrete logarithm was guiding our intuition in the wrong direction, but I suppose it is the same problem if we use multiplicative notation instead of additive. Anyway, my point is that perhaps there is some way to define smallness based on certain properties, etc.

Sunday, December 6, 2009

16.4

1. The most difficult part for me was realizing that one of the more significant issues in GF(2) and GF(4) (maybe GF(2^n) in general?) is finding the additive inverse of an element and not adding two elements together, but I guess it turns out to be the same thing. It takes an extra step.

2. Even though Z_2 is a more natural fit for computers, does that mean that fields built from Z_2[x] are also? (This next part is all "if I'm not mistaken".) In the text, only a few lines were dealt with: vertical, horizontal, and the line y=x. A line could have the slope of any element in the field. In GF(4), names were given to each element, so I suppose in, say, GF(256), even though most of the elements may have the ring element x in their representation, this is a different x than the one appearing in y=mx+b. So if we consider the line through (0,0) and (1, x^2+x+1) on some hypothetical curve, we would obtain y=(x^2+x+1)x. Is that y=x^3+x^2+x? In other words, are the x's the same? I'm leaning towards yes.
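To convince myself on that last point, I sketched GF(2^8) arithmetic in Python. The modulus below is the AES polynomial x^8 + x^4 + x^3 + x + 1, which is my arbitrary choice; any degree-8 irreducible would do, and the book may use a different one.

```python
# Arithmetic in GF(2^8), elements represented as bitmasks of polynomial
# coefficients. MOD is the AES field polynomial (an assumption on my part;
# any irreducible polynomial of degree 8 works the same way).

MOD = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiply, reducing modulo the field polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^n) is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree reached 8: reduce
            a ^= MOD
    return result

# The slope from my example: x^2 + x + 1 is the bitmask 0b111.
# Multiplying by the element x (bitmask 0b10):
print(bin(gf_mul(0b111, 0b10)))  # 0b1110, i.e. x^3 + x^2 + x
```

So multiplying the field element x^2+x+1 by x does give x^3+x^2+x, at least in this representation.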

Friday, December 4, 2009

16.3

1. I had a question about the p-1 analogue for factoring but in writing it, I saw the answer. However, why the process isolates the behavior of one of the factors is still a mystery to me. That and the entire second half of the reading.

2. It's interesting to me that choosing a random curve (mod n) results in an essentially random number of points on the curve, close to n (within about 2*sqrt(n) of n+1, if I remember Hasse's theorem right). Since there is a practical limit to the size of numbers that can be factored using this method, I'm going to assume that the time complexity goes up exponentially with the number of digits (so polynomially with the value of the number). I have nothing of worth to say about elliptic curves themselves, unfortunately.
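Following up on my earlier question about why the process isolates one factor: as I understand it, the factor appears when a point addition mod n needs an inverse that doesn't exist, and the gcd taken at that moment is a nontrivial factor of n. A sketch with toy values (n = 4453, curve y^2 = x^3 + 10x - 2, P = (1,3), which I believe matches an example in the text):

```python
from math import gcd

# A sketch of the heart of elliptic-curve factoring: adding points on
# y^2 = x^3 + a*x + b over Z_n. When an inversion mod n fails, the gcd
# that witnesses the failure is a nontrivial factor of n.

def ec_add(P, Q, a, n):
    """Add P and Q mod n; raise with a factor of n if an inverse fails."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                      # point at infinity
    if P == Q:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
    else:
        num, den = (y2 - y1) % n, (x2 - x1) % n
    g = gcd(den, n)
    if g > 1:
        raise ValueError(f"found factor {g}")   # the whole point
    m = num * pow(den, -1, n) % n
    x3 = (m * m - x1 - x2) % n
    return (x3, (m * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

# Repeatedly multiply a point by 2, 3, 4, ... until an inversion fails:
n = 4453                                  # toy composite, 61 * 73
P, a = (1, 3), 10                         # b is implied by the point
try:
    for k in range(2, 100):
        P = ec_mul(k, P, a, n)
except ValueError as e:
    print(e)                              # found factor 61
```

The curve mod 61 and the curve mod 73 are different groups with different orders, so the accumulated multiple hits the identity mod one prime before the other; the "failure" is really the point at infinity mod that prime showing itself.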

Tuesday, December 1, 2009

16.2

1. Pardon my ignorance, but how can we take a derivative of a function (implicit or not) when there is no (obvious) notion of continuity? Don't we use difference quotients or something? Is that the same thing in this case?
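If I understand correctly, the answer is that it's the formal derivative: apply the power rule symbolically, no limits or continuity needed, and it still obeys the sum and product rules. A minimal sketch over F_p:

```python
# The *formal* derivative of a polynomial over F_p: defined by the power
# rule alone, term by term, with no limits involved. Coefficients are
# stored as a list, coeffs[i] being the coefficient of x^i.

def formal_derivative(coeffs, p):
    return [(i * c) % p for i, c in enumerate(coeffs)][1:] or [0]

# d/dx (x^3 + 2x + 5) = 3x^2 + 2 over F_7:
print(formal_derivative([5, 2, 0, 1], 7))  # [2, 0, 3]
```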

2. I am consistently amazed at the connections and applications that seemingly obscure branches of math have. First, the recognition that many of the results of, say, topology come not from a construct itself (say, the complex numbers), but from the properties that construct has (field properties). It then allows one to interpret things like fractions in finite fields where all members are representable by integers. More pertinently, the investigation of arc length of ellipses led to a more intense study of elliptic curves which turn out to be extremely fascinating. I must say apologetically that I have nothing of interest to say about elliptic curves over Z mod n. This is all new to me.

Sunday, November 29, 2009

16.1

1. Why do we not have to consider a quadratic term in the x's? Is it possible to write any elliptic curve without that term by using a method similar to completing the square? Also, isn't it possible to choose two points so that a third point isn't intersected? What then?
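On my own first question, I worked out the completing-the-square analogue: in characteristic other than 3, substituting x = t - a/3 kills the quadratic term, so the general cubic can always be "depressed":

```latex
% Substituting x = t - a/3 into x^3 + a x^2 + b x + c:
\begin{align*}
\left(t - \tfrac{a}{3}\right)^3 + a\left(t - \tfrac{a}{3}\right)^2
  + b\left(t - \tfrac{a}{3}\right) + c
  &= t^3 + \left(b - \tfrac{a^2}{3}\right)t
     + \left(\tfrac{2a^3}{27} - \tfrac{ab}{3} + c\right).
\end{align*}
```

The t^2 terms from the cube and the square cancel exactly, which I take to be why the standard Weierstrass form has no quadratic term.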

2. I have never been exposed to elliptic curves before, at least not in an expository setting. Thinking about this for a moment, I bet that any elliptic curve with rational coefficients can be obtained through a rational map with integer coefficients. I have discovered a truly remarkable proof for this, but it is too large to put in this blog post. Seriously though, it is pretty incredible that such seemingly unrelated fields (algebra and analysis? it's all numbers anyway) could be linked in such an elegant way. And to think that that link somehow proves that x^n+y^n=z^n has no integer solutions for n>2. It's amazing.

Monday, November 23, 2009

2.12

1. I'm not going to pretend I know exactly how the Enigma works. I grok the permutations and stuff, but I don't know how the mechanics of the machine work. Which leads us to...

2. At this time, computation was still in its relative infancy. Whether Alan Turing's involvement at Bletchley Park abetted his role as the father of modern computation or vice versa, I'm uncertain, but the timing is interesting. The reason I bring this up is because one of the weaknesses was in their "error correcting codes" which I think fall under the umbrella of coding theory. Since radio transmission was prone to error, they introduced direct redundancy which weakened their security. It is exactly this situation that led Claude Shannon to develop the mathematical theory of communication, information theory, which explores the physical limits of information transfer through a noisy channel. Physical might not be the right word. Anyway, what I am spending far too long to get at is it's pretty amazing that they could reverse engineer such a convoluted device. We have a hard enough time seeing a string of numbers and determining how it was encrypted, but we have at least some idea. It would be very easy for the Germans to obscure their messages in simple ways so that it would be harder for a cracker to realize they were on the right track. It really does make you think about 2 Nephi 31, to a point.

Sunday, November 22, 2009

19.3

1. Aside from quantum anything, the most difficult part for me is the Fourier transform. I know that it transforms things from the time or spatial domain to the frequency domain, but despite my continued readings on the subject, I just can't grasp it.

2. Quantum computing changes the game, but I think it is not the panacea that many make it out to be (and the author agrees with me on this point). If I'm not mistaken, there are provably secure encryption models based on the behavior of quantum systems. Of course it is in the best interest of governments and institutions to keep their information secure, but do we expect that overcoming the physical limitations of quantum computing will allow us to use an arbitrarily large number of qubits (or maybe just a practically large number)? If not, we can always stay ahead of the tech with longer and longer keys, right?

Thursday, November 19, 2009

19.1-19.2

1. What was the most difficult part of the section? The quantum part. The unintuitive part. The bracket notation. All of the above. In seriousness though, the probabilities only allow them to detect an eavesdropper and not hide the information, correct? If so, is it possible (although something that can be made arbitrarily improbable) that Eve intercepts the key undetected? Since there is a 25% chance per checked bit that Bob detects an error, the chance that Eve goes undetected is (3/4)^n for n checked bits (if I understand this correctly).
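A quick sanity check of those numbers with an intercept-and-resend simulation (my own toy model, ignoring all the actual physics):

```python
import random

# Toy model of intercept-and-resend on one checked bit: when Eve guesses
# the wrong basis (prob 1/2), the bit Bob measures is random (prob 1/2 of
# error), so each checked bit exposes Eve with probability 1/4.

def checked_bit_reveals_eve(rng):
    alice_basis, alice_bit = rng.randrange(2), rng.randrange(2)
    eve_basis = rng.randrange(2)
    # Eve measures; a wrong basis randomizes the bit she resends.
    eve_bit = alice_bit if eve_basis == alice_basis else rng.randrange(2)
    # Bob measures in Alice's basis (only those bits are kept anyway);
    # if Eve used the wrong basis, Bob's outcome is again random.
    bob_bit = eve_bit if eve_basis == alice_basis else rng.randrange(2)
    return bob_bit != alice_bit

rng = random.Random(0)
trials = 100_000
errors = sum(checked_bit_reveals_eve(rng) for _ in range(trials))
print(errors / trials)            # close to 0.25
# Undetected over n checked bits: (3/4)**n, e.g. (3/4)**20 is about 0.003
```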

2. The polarization setup is interesting. The two-slit experiment is mind boggling. Quantum physics has leaked into our culture with things like Schroedinger's cat. What is it about the universe that allows something to be more than one way simultaneously unless observed? What is inherent in observation? Maybe it's that we have no way of observing something in more than one state, but I suppose if that were true, the two slit experiment wouldn't work. What the heck.

Tuesday, November 17, 2009

14.1-14.2

1. I'm sorry; is it easy to find square roots mod a prime? Am I totally blanking on this? Why is the modulus a product of two primes?

2. I think this is actually a pretty important concept vis-à-vis cryptography. The need to prove that you know something without divulging what that something is is kind of the same as a shared secret. People do this in everyday life. Or at least my family does. My mom calls me and asks what various passwords are for accounts and stuff, but I don't want to have to go to a private location to divulge it (and who knows who's eavesdropping on the line itself), so we rely on a shared secret. I say, remember the name of my fourth grade teacher? or something like that. In this case, however, one is able to prove they know the shared secret without divulging it.
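Here's my attempt at one round of such a proof in Python, modeled on what I understand of the Fiat-Shamir idea (Peggy knows a square root s of the public value v mod n); the modulus and secret below are toy values of my own:

```python
import random

# One round of a Fiat-Shamir-style proof of knowledge of a square root.
# Peggy commits to x = r^2, Victor flips a challenge bit b, Peggy answers
# with y = r * s^b, and Victor checks y^2 == x * v^b (mod n). The check
# passes because y^2 = r^2 * s^(2b) = x * v^b.

def round_of_proof(n, v, s, rng):
    r = rng.randrange(1, n)
    x = r * r % n                 # Peggy's commitment
    b = rng.randrange(2)          # Victor's challenge bit
    y = r * pow(s, b, n) % n      # Peggy's response: r or r*s
    return pow(y, 2, n) == x * pow(v, b, n) % n   # Victor's check

n = 101 * 103                     # toy modulus; real ones are huge
s = 42                            # Peggy's secret
v = s * s % n                     # public value
rng = random.Random(1)
print(all(round_of_proof(n, v, s, rng) for _ in range(20)))  # True
```

The y that Peggy sends is either r alone or r*s, and without knowing which r was, neither reveals s; that seems to be the whole trick.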

Sunday, November 15, 2009

12.1-12.2

1. Is it true that something can be said about the secret if the random number is below some bound B? That is, given that the message could be any number from -infinity to infinity, does a component of 12356 tell you anything? Upon further inspection, I see that it does not...

2. This is a very interesting way to keep secrets. Considering the plane method, given two planes, one could consider the line created by their intersection and glean possible messages, correct? Most would be nonsensical but if a meaningful message could be found, there is a not insignificant probability that it is the actual message. Obtaining a meaningful message is another story though.
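The plane method generalizes to polynomials, if I have it right: the secret is the constant term of a random degree t-1 polynomial over F_p, and any t shares recover it by interpolating at 0. A sketch with made-up numbers:

```python
import random

# A sketch of Shamir's polynomial secret sharing (the plane method is the
# t = 3 case). Shares are points (x, f(x)) on a random polynomial whose
# constant term is the secret; t of them determine f(0) by Lagrange
# interpolation. p, the secret, and the share counts are toy values.

def make_shares(secret, t, k, p, rng):
    coeffs = [secret] + [rng.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, k + 1)]

def recover(shares, p):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p      # Lagrange basis evaluated at 0
                den = den * (xi - xj) % p
    # accumulate yi * (product of (0-xj)/(xi-xj)) mod p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

p, secret = 2087, 1234
shares = make_shares(secret, t=3, k=5, p=p, rng=random.Random(7))
print(recover(shares[:3], p))  # 1234
```

With only two of the three required shares you get a line of candidates, which matches my worry above: every point on that line is a possible secret, and over F_p they're all equally likely.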

Thursday, November 12, 2009

Midterm 2

1. It would be hard to say what is most important. On the one hand, we have studied RSA fairly extensively and gotten into the nitty gritty, and that's very practical. On the other, the more abstract security requirements and how certain principles can hide information in certain ways to meet those requirements is also important. My answer would vary day to day, I think.

2. I expect to see some purely algorithmic questions, like to decode a simple ElGamal message or to use Pohlig-Hellman to solve a discrete log. Maybe some hash questions.

3. ElGamal.

4. Coding theory and information theory. Mostly information theory. Here's one: It would be better to compress a message before encrypting it to maximize compression, as opposed to encrypting before compressing, right? Or is it a wash? Or does it depend? It probably depends.

Tuesday, November 10, 2009

8.3, 9.5

1. This overview of SHA-1 was only marginally better than reading the spec for it, but the author did warn how technical it was. I was about to ask why AND was the minimum function and OR was the maximum, but I just got it.


2. The author said that a hash function is good only if a change of one bit in the input stream changes many bits in the hash. This of course is a necessary property, but not sufficient for a good hash. Some others that I can think of are: hashes should be uniformly distributed over the inputs, and this goes for each input length, so that for length K and hash length N, there should be generally 2^(K-N) messages of length K that yield the same hash. Can we say anything about the information content (high entropy) of a message and what its hash is? Or maybe that it is generally indistinguishable from the hash of a message with low entropy? That would actually be a desirable property, right?
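The one-bit-changes-many-bits property is easy to watch with hashlib:

```python
import hashlib

# A quick look at the avalanche property: flipping a single input bit
# should flip roughly half of SHA-1's 160 output bits.

def sha1_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

msg = b"attack at dawn"
flipped = bytes([msg[0] ^ 1]) + msg[1:]   # flip the lowest bit of byte 0
diff = sha1_int(msg) ^ sha1_int(flipped)
print(bin(diff).count("1"), "of 160 bits changed")
```

The count lands near 80, which is what the uniform-distribution property I described above would predict: each output bit should flip with probability about 1/2.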

Sunday, November 8, 2009

9.1-9.4

1. Why would you sign something that you didn't know the contents of? This is not incredulity about the actions of the uninformed, but rather, under what circumstances would you sign something that you didn't know the contents of? Does it involve some notion of trust, whether it be a chain or hierarchy of trust or a certificate authority?

2. The overview given of how to alter a contract to obtain 2^30 variations was (understandably) simplistic. In reality, many file formats allow arbitrary data to be stored somewhere within, and there is certainly not a one-to-one correspondence between representations and the information required to store them. I suppose either party could inject random garbage into a file in such a way that the stream was still a valid sequence in the language. This may not address the problem directly, though. The issue, as I see it, is that in a contract, both parties need verification that the same contract is being signed. In the physical world, this can mean temporal and/or spatial proximity, but it may be possible with an escrow or arbiter or some other trusted third party. Or maybe it's a simple problem and I just don't see it.

Thursday, November 5, 2009

8.4-8.5, 8.7

1. I take it the approximation for birthday attack probabilities is taken from Stirling's approximation of large factorials. I need a way to obtain an intuition regarding probabilities. Otherwise, I'm always crunching numbers, and I get distracted.
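To spare myself some number crunching, here's the exact birthday probability next to the usual approximation 1 - exp(-k(k-1)/2N), which comes from log(1 - i/N) being approximately -i/N:

```python
import math

# Exact probability of a collision among k samples from N equally likely
# values, versus the standard exponential approximation.

def exact(k, N):
    p_no_collision = 1.0
    for i in range(k):
        p_no_collision *= (N - i) / N
    return 1 - p_no_collision

def approx(k, N):
    return 1 - math.exp(-k * (k - 1) / (2 * N))

# The classic case: 23 people, 365 birthdays.
print(round(exact(23, 365), 4), round(approx(23, 365), 4))
```

For k = 23, N = 365 the two come out around 0.507 and 0.500, so the approximation is already good at small sizes and only gets better as N grows.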

2. In some circumstances, a probabilistic algorithm is sufficient. This is especially intriguing considering a deterministic primality test exists, but probabilistic tests are still in vogue. I wonder if this is because the running time of the deterministic test is significantly higher than the currently employed ones. However, the discrete logarithm birthday attack is not used because it has a higher time complexity than Baby Step, Giant Step. What I'm getting at is that probabilistic algorithms are probabilistic on the input and not a random process. Otherwise, we could just run the algorithm independently many times until we obtained a solution.

Wednesday, November 4, 2009

8.1-8.2

1. The most difficult part for me is the subtle differences between the properties. Preimage resistance means that, given a specific value in the codomain, it is difficult to come up with a value in the domain that maps to it. The second property is that it is hard to find two messages that hash to the same value, so we could consider the "easiest" hash value to obtain and that should still be hard. The third is that given a message, it is hard to come up with another message that maps to the same value. In this case, the hash is still fixed, but the message may be the easiest value to obtain that maps to it, and we seek a different one. Is this right?

2. I am very interested in the role of the random oracle in theoretical proofs of security. Even though one doesn't exist, it is nice to abstract such a notion away from implementation detail. On the other hand, it is that very abstraction that may lead to a practical weakness in the system.

Sunday, November 1, 2009

7.3-7.5

1. I think the real question here is, if Bob is smart enough to understand that there is a unique solution to the equation B=a^x (mod p) and that it encapsulates the information he is interested in, why would he believe that Alice had a system predicting the outcomes of football games?

2. In a lot of fields, it seems like there is a meet-in-the-middle of utility vs. theory or maybe practicality vs. theory. Even more general, there are people at opposing sides of some spectrum who are trying to solve problems almost solely with the tools they have. For instance, in computing, since faster is better, both hardware people and software people work to make their component of the system faster. Since their improvements are independent of the others', an even greater speedup is achieved. So it is with mathematics, it seems to me. We have applied mathematicians who have outlined a set of properties a given cryptographic system must have, and some pure mathematician who has invested a lot in some family of problems and understands the difficulties provides a solution. This in turn causes more research to be done in this class of problems (perhaps by people with ill intentions), and free enterprise improves humanity once again.

Friday, October 30, 2009

7.2

1. How does making beta equal to alpha to the power t let us hide information?

2. I think group theory is starting to percolate in my mind. It's starting to feel very concrete. When this happens to me with math, I start proving things anew, for myself. Not new things, but I feel like I develop the proof for certain things even though I was brought up in them. They're just no longer rote. It's happening a little bit with RSA also. If I had a real blog, I wouldn't subject you to this inane commentary.

Tuesday, October 27, 2009

6.5-6.7, 7.1

1. In discrete logs, why would the prime be left out of the notation? Surely the logarithm makes little sense without the context of the prime in question, or can it be generalized?

2. Interestingly enough, one of the efforts to break RSA is headed by people who want to watch high definition films on their Linux machines. Based on what I read some months ago, one attack vector concerned the use of a low public exponent (something like e=3) in the AACS Blu-ray standard. This could be seen as a discrete logarithm problem, correct? And are these logarithms harder to solve as the exponent grows?
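If I have the low-exponent weakness right, it isn't really a discrete log at all: when e = 3 and the message satisfies m^3 < n, the reduction mod n never kicks in, so the ciphertext is literally m cubed and an integer cube root recovers m. Toy numbers of my own:

```python
# The small-message, small-exponent RSA weakness: if m**3 < n, then
# pow(m, 3, n) == m**3 exactly, and an integer cube root undoes it.

def icbrt(c):
    """Integer cube root by binary search."""
    lo, hi = 0, 1 << ((c.bit_length() // 3) + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < c:
            lo = mid + 1
        else:
            hi = mid
    return lo

n = 10403 * 10301          # a toy modulus; real ones are far larger
m = 42
c = pow(m, 3, n)           # 42**3 = 74088 < n, so c == m**3 exactly
print(icbrt(c))            # 42
```

Padding the message up to the size of the modulus is, as far as I can tell, the standard fix, and larger exponents like 65537 also keep m^e from fitting under n.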

Sunday, October 25, 2009

6.4.1

1. Why look for linear dependence (mod 2) specifically?

2. I am guessing this method doesn't scale because the matrices would get unduly large. Since each column represents a prime, we could quickly get to 1000 columns whilst dealing with primes around 10000. When dealing with 50 digit numbers, that's probably nothing.
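On the mod 2 question: squares are exactly the products whose exponent vectors are even in every slot, which is what a linear dependence mod 2 buys you. The classic n = 1649 relations make it concrete:

```python
from math import gcd

# Why linear dependence mod 2: a product of relations is a perfect square
# exactly when every prime's exponent is even. For n = 1649:
#   41^2 = 2^5       (mod 1649)   exponent vector (5, 0) -> (1, 0) mod 2
#   43^2 = 2^3 * 5^2 (mod 1649)   exponent vector (3, 2) -> (1, 0) mod 2
# The vectors sum to (0, 0) mod 2, so multiplying the relations gives a
# congruence of squares: (41*43)^2 = 2^8 * 5^2 = (2^4 * 5)^2 (mod n).

n = 1649
x = 41 * 43 % n                    # 114
y = 2 ** 4 * 5 % n                 # 80
assert pow(x, 2, n) == pow(y, 2, n)
print(gcd(x - y, n), gcd(x + y, n))   # 17 97, and 17 * 97 = 1649
```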

Friday, October 23, 2009

6.4

1. The most difficult part for me is this: choosing to use B! is to make the algorithm easier, right? Otherwise, we could just use powers of primes less than B.
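My current understanding of the B! trick, in code: exponentiating by 2, then 3, ..., then B in turn is the same as one exponent of B!, and B! sweeps up every prime power up to B without our having to know which ones divide p - 1. The composite below is a toy value I picked so that one factor's p - 1 is smooth:

```python
from math import gcd

# Pollard's p-1 with exponent B!: if p - 1 divides B!, then a^(B!) = 1
# (mod p) by Fermat, so gcd(a^(B!) - 1, n) picks up the factor p.

def pollard_p_minus_1(n, B, a=2):
    for k in range(2, B + 1):
        a = pow(a, k, n)          # after this step, a = 2^(k!) mod n
        g = gcd(a - 1, n)
        if 1 < g < n:
            return g
    return None

# 1151 * 1973: one factor has 1151 - 1 = 1150 = 2 * 5^2 * 23, which is
# smooth enough for B = 30 (toy values I chose for exactly that reason).
print(pollard_p_minus_1(1151 * 1973, 30))
```

So yes, as far as I can tell we could equally use the product of prime powers up to B; B! is just an easy-to-iterate superset of that product.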

2. This is brilliant and simple and was disseminated in 1978 which boggles my mind. Part of me thinks that this is some low-hanging fruit that was somehow missed by all the other people looking for thesis topics but another thinks that the nature of things always looks more obvious in hindsight.

Tuesday, October 20, 2009

6.3

1. Why does the Miller-Rabin test work? I don't see it.

2. How is the guarantee that the test is correct more than 3/4 of the time supposed to lend any kind of assurance to a person relying on it to secure their private data? He mentions once or twice that the test can be performed multiple times to get certainty arbitrarily close to 100% but also mentions that in many cases, one test is enough. My question is, in what venue would someone who needed a prime (for whatever reason) be satisfied with a certainty with lower bound 0.75? How is that useful?
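For my own reference, the test itself, with repeated rounds driving the error down (a composite passes for at most 1/4 of bases, so t independent rounds give error at most (1/4)^t):

```python
import random

# Miller-Rabin: write n - 1 = 2^s * d with d odd. If n is prime, the
# sequence a^d, a^(2d), ..., a^(n-1) must reach 1, and the value just
# before the first 1 must be -1, because the only square roots of 1
# modulo a prime are +1 and -1. A base a that violates this is a
# witness that n is composite.

def miller_rabin(n, t=20, rng=random.Random()):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    s, d = 0, n - 1
    while d % 2 == 0:
        s, d = s + 1, d // 2
    for _ in range(t):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnessed that n is composite
    return True                   # probably prime

print(miller_rabin(561), miller_rabin(1973))   # False True
```

With t = 20 the error bound is (1/4)^20, which is down around 10^-12, so in practice the 3/4 guarantee compounds into something a person could actually rely on.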

Sunday, October 18, 2009

3.10

1. If the Jacobi symbol is +1, it does not imply that a is a square mod n, but if it is -1, it implies that it definitely is not. Is that correct?

2. Whoa. The law of quadratic reciprocity. In some cases, this can be extended to even powers that are themselves powers of two, right? Suppose we want to determine whether b is a fourth power mod p. Then x^4=b (mod p) is the same as (x^2)^2=b (mod p), so we determine, if p=3 (mod 4) (?), x^2. We then determine whether x^2 is a square mod p. Here when we write x^2, we aren't actually assuming that such an x exists, although it looks like it. If necessary, we could let y=x^2 and solve y^2=b (mod p) and then x^2=y (mod p) so that x^4=b (mod p). Is this right?
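I tried the repeated-square-root idea numerically for p = 3 (mod 4), where x = y^((p+1)/4) is a square root of any quadratic residue y:

```python
# Taking a fourth root mod p (for p = 3 mod 4) by taking a square root
# twice. The prime and base are toy values of my own.

def sqrt_mod(y, p):               # assumes p % 4 == 3 and y is a QR mod p
    return pow(y, (p + 1) // 4, p)

p = 103                           # 103 = 3 (mod 4)
b = pow(5, 4, p)                  # certainly a fourth power mod p
y = sqrt_mod(b, p)                # y with y^2 = b (mod p)
x = sqrt_mod(y, p)                # candidate fourth root
print(pow(x, 4, p) == b)          # True
```

Conveniently, even when the intermediate y is the square root that is itself a non-residue, applying the formula again gives x with x^2 = -y, and then x^4 = y^2 = b anyway, so the two-step procedure goes through either way.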

Thursday, October 15, 2009

3.9

1. So when the author says to let x = y^((p+1)/4) (mod p) so that x^4 = y^(p+1) (mod p), do we know x and y?

2. While I was reading, I was thinking how the actual intuition behind Fermat's theorem has been kept from me in my education. I know proof techniques and enough number theory to justify it, but it has been so formalized that the impetus is lost. Then I started thinking, maybe results are what's important since they allow us to settle our brains--to rest them upon the rigor and irrefutable truth proof brings. We start thinking of things at higher levels and in more general terms until we're all brought up in the same education and creativity is only preserved in the sporadic genius of choice minds.

Tuesday, October 13, 2009

3.12, 6.2

1. I'm not sure I follow how the continued fraction method actually yields the factorization of n.

2. The news that RSA had a seemingly obvious weakness was unsettling, and underscores the importance of interdisciplinary collaboration. The author mentions that this class of weaknesses was not considered since the mathematics was well understood, and I would be interested to know the emphasis of the Stanford undergraduate who revealed this weakness. I'm sure he was studying math. I bet he was also studying Computer Science.

Thursday, October 8, 2009

6.1

1. I hate to be like this, but I have gone over RSA several times. Stay tuned for attacks, though. I'm sure I'll have questions.

2. I'm going to once again bring up the "invented" vs. "discovered" notion of mathematics. It seems like the math climate today is that things are defined and then we see what follows, and this strikes me as an "invented" viewpoint. I fall in the both camp but there are a lot of things--identities and elegant results--that I think are discovered. To that extent, I feel like I'm surrounded by atheists. Except for our author who mentions that it's conjectured that the NSA discovered the RSA algorithm before, well, R., S., and A. did. Now I have mixed feelings. I think this is an algorithm that was invented. It's sort of like when you find the balance (religiously) between what's possible and what's realistic (like miraculous healing), and then someone attributes an everyday empirical result to a miracle. You don't want to suggest it's not intervention, but you don't really think it is. And by you, I mean me. Sorry, this isn't very topical. This is my only blog, though.

Wednesday, October 7, 2009

3.6-3.7

1. So if n is a Carmichael number and (a,n)=1 (which means a shares no prime factors with n), then a^(n-1)=1 (mod n) and the Fermat primality test fails to detect that n is composite?
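Checking with the smallest Carmichael number, 561 = 3 * 11 * 17:

```python
from math import gcd

# 561 is composite, yet a^(n-1) = 1 (mod n) for *every* base a coprime
# to n, so the Fermat test can never catch it with such a base.

n = 561
fools = all(pow(a, n - 1, n) == 1 for a in range(2, n) if gcd(a, n) == 1)
print(fools)    # True: every coprime base is fooled
```

So the theorem doesn't fail; it's the test built on it that does, because Carmichael numbers satisfy the prime-like congruence without being prime.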

2. I am fascinated by logarithms, probably because I don't understand them beyond basic application. I think they are analytic which boggles my mind: how numbers can contain in themselves and their algebraic relationships so much information. The discrete logarithm is also interesting because I can see how it would be useful but I have no idea how to go about finding it.

Sunday, October 4, 2009

3.4-3.5

1. I know I'm not supposed to say nothing, but this is the fourth time I've been "taught" the Chinese Remainder theorem and the fourth time I've gone over modular exponentiation.

2. Cryptography is a pretty good mix of the theoretical and the practical. On the one hand, secure cryptosystems need to be provably secure, and this almost always includes a mathematical analysis of the system in an idealized manner, as in, is it possible to break it without going through all possible solutions? Is there a method to obtain a solution analytically? On the other, we have real world needs of speed and convenience. We move from states to processes and algorithms, of which modular exponentiation is a prime example. Not only does it avoid overflow, the time complexity is logarithmic with the exponent and not linear like the naive, albeit correct, implementation is.
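Square-and-multiply in a few lines, since it's the example I keep coming back to:

```python
# Modular exponentiation by repeated squaring: process the exponent bit
# by bit, so b^e mod n takes about log2(e) squarings instead of e
# multiplications, and intermediate values never exceed n^2.

def power_mod(b, e, n):
    result = 1
    b %= n
    while e:
        if e & 1:                 # this bit of e is set: multiply it in
            result = result * b % n
        b = b * b % n             # square for the next bit
        e >>= 1
    return result

print(power_mod(7, 560, 561), pow(7, 560, 561))   # the two agree
```

Reducing mod n after every multiply is what keeps the overflow away; the logarithmic step count is just a bonus of reading the exponent in binary.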

Thursday, October 1, 2009

Test Review

1. The most important things we have talked about, I think, are the principles of a secure algorithm, like whether it is susceptible to a known plaintext attack or under what conditions a certain method is secure. Probably not "crack this Caesar cipher" but more like "what are the main vulnerabilities of the Hill cipher?"

2. A few simple code cracking problems. A question or two on whether a proposed algorithm has a certain weakness. Maybe one to do a portion of DES or AES.

3. AES and DES. I think I'm ok with all the rest.

Sunday, September 27, 2009

Survey

1. I've spent about 2-4 hours on each assignment. I would say the lecture and reading prepared me for them.

2. Hmmmm. Tough one. I don't like the algorithmic approach we sometimes take. For instance, DES is an algorithm, and learning to perform it is like learning how to take a derivative. It's useful, and particularly involved, but I don't feel like I'm gaining an insight into why the algorithm works, in the sense that XORing and (mod 2) arithmetic have specific meanings and properties related to information and such, but I feel like we're assuming it just obfuscates the data, which means we very easily could fall into the trap of constructing an encryption system with some elementary attack vectors. I don't know what the long term goals are for the curriculum. My feedback is based on the assumption that we learn about and analyze possible attack vectors and can prove that certain aspects of a given cryptosystem (even a self-concocted one) are secure. All of that said, I like the theory and I like the demonstration of certain cryptosystems' weaknesses.

3. I would like a term project where we create our own cryptosystem (in groups maybe) and "publish" the algorithm for the class. We then analyze other groups' algorithms and see if we can find a way to break the cryptosystem without resorting to a full-on brute force attack. There could be restrictions on key length or something so that a brute force attack was possible, to aid in finding another way to break it. I think this would be a good way to be honest with ourselves in the creation of an information security system and get a tiny bit of an independent research experience.

Thursday, September 24, 2009

3.11

1. I always lose my focus when he starts talking about bits and such. Surprisingly, I think I understood the Galois fields (if that's what they were).

2. A few months ago, I read an article about a company (or maybe it was just a guy) who claimed that he could manipulate encrypted information in a meaningful way without destroying the integrity of the data, and without knowing the actual data. This was via a cryptographic homomorphism, which, I think, simply means that certain operations can be performed on x by performing a corresponding operation on E(x). It seems to me that this is a crock. Is sufficiently encrypted information indistinguishable from random noise or what?
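I'll grant the idea this much: textbook RSA (without padding) really is multiplicatively homomorphic, since x1^e * x2^e = (x1 * x2)^e. A toy key of my own:

```python
# Multiplying two RSA ciphertexts multiplies the underlying plaintexts,
# without anyone touching the plaintexts. Toy key below; real RSA uses
# padding, which deliberately destroys exactly this property.

p, q, e = 1151, 1973, 7
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

x1, x2 = 123, 456
c = pow(x1, e, n) * pow(x2, e, n) % n      # operate on ciphertexts only
print(pow(c, d, n) == x1 * x2 % n)         # True: decrypts to the product
```

So a limited version of the claim checks out for this one operation at least, even though the ciphertexts themselves still look like noise.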

Tuesday, September 22, 2009

4.5-4.8

1. This was a nice history lesson. The hardest part was the different modes.

2. There's a lot of ways to measure whether some encryption is "secure enough". If there is no demand for the data being encrypted, any encryption (or none) is good enough. DES and some other algorithms seem to get their security from the fact that breaking it is computationally infeasible, but general purpose programming on data-parallel processors (graphics cards) is becoming more and more accessible. I'm not exaggerating when I say that a top of the line GPU today (with 240 compute cores) is over two orders of magnitude faster than the top Intel chips (Core i7 as they're called) today. So what, do we continue to increase key length? We've done that a little with RSA and it's a testament to Moore's law and capitalist ingenuity that we've had to. But when quantum computing becomes available, the time to break RSA becomes polynomial (maybe even linear) with the key length, and not exponential. It's almost stupid to get cracking on encrypted data now since you could wait for (or work toward) quantum computers and crack the encryption long before your classical computing device yielded any fruit. I guess what I'm saying is computationally infeasible is super relative and proofs of security should not rely on such a property.

Monday, September 21, 2009

4.1-4.2,4.4

1. I had a hard time following the algorithm, or at least appreciating what the algorithm was doing. It'd be nice to go over the proof again, too.

2. The DES algorithm seems like a convoluted process. The fact that many suspected there was a backdoor suggests to me that some security through obscurity is employed with this algorithm. The book said that the initial permutation was probably to optimize it for the hardware of the time. This, plus the linear feedback cipher, makes me wonder how much an algorithm's security is compromised in the name of speed.

This slideshow article came up today. It's not super interesting, but it is topical: pictorial representation of the past six decades of research and development in encryption.

Thursday, September 17, 2009

2.8-2.11

1. Why does the LSB of the output of the trapdoor function on predictable (consecutive, no less) input give a pseudorandom number sequence? Wouldn't it depend on the function, or is this characteristic of trapdoor functions?

2. All of this should be prefaced, "if I'm not mistaken..." According to Shannon's theory of entropy and information, a sequence with the highest entropy (and so information content) is a totally random one since it cannot be predicted. So why can a sequence generated from a relatively small number of bits be cryptographically secure? Isn't it impossible for such a sequence to have more information than its inputs? (I'll do you a favor and stay away from the sophomoric pontification: "Is anything really random?")
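For reference, the style of generator the reading describes, as I understand it (a Blum-Blum-Shub-like sketch with toy primes p, q = 3 mod 4). The answer to my own objection seems to be that the security is computational, not information-theoretic: the stream has only as much entropy as the seed, but predicting it is believed to be as hard as factoring n.

```python
# Sketch of a Blum-Blum-Shub-style generator: square repeatedly mod
# n = p*q with p, q = 3 (mod 4) and emit the least significant bit.
# Primes and seed are toy values; real moduli are enormous.

def bbs_bits(seed, n, count):
    x = seed * seed % n           # start from a square mod n
    out = []
    for _ in range(count):
        x = x * x % n
        out.append(x & 1)         # emit the LSB of each state
    return out

n = 499 * 547                     # both primes are 3 mod 4
print(bbs_bits(2009, n, 16))
```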

Wednesday, September 16, 2009

3.8, 2.5-2.8

1. The most difficult part for me was finding the decryption matrix with a known plaintext attack in section 3.7ish.

2. I was struck by the mention of cryptography in former pop-culture. The chain of secrecy is only as strong as its weakest link. By that, I mean, suppose you have really sensitive information that you are encrypting with RSA-128. That's pretty secure but not impossible to break. It might take some supercomputing resources or some mathematician resources. It then may be the case that one of the ends of the communication link is more vulnerable. If someone has access to the information, maybe they can give that access away. Maybe the going rate for hired goons is lower than mathematicians. Speaking of, mathematicians are generally known (as compared to engineers, say) for their idealism as their world is pure in a sense, perfectly described by certain principles. Devoid of the human element is maybe what I'm getting at. But that's not satisfactory when real security is on the line. I mean, don't some proofs of security rely on the existence of a perfect random oracle, which doesn't exist? Anyway, we can't neglect the human element. That sounds silly.

Sunday, September 13, 2009

2.3

1. The most difficult part for me is following the use of the dot product for probabilities. The reasons follow...

2. It's interesting to me when a mathematical operation is taken out of the context that probably motivated its creation and put in a different one. Of course, we're a long way from geometric interpretations in any advanced math course. If we're looking for intuition (which I am), we might consider the 26-dimensional vector space which W exists in. A higher dot product would mean the vectors are more "coincident", right? As one shifts the vector, one is actually shifting the components into adjacent dimensions. Of course, I am conditioned to think of vector spaces when we use dot products. I am probably shoehorning this concept into an entirely unrelated framework that would not only give no more insight into the problem, it would also destroy creativity.
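Shoehorned or not, the dot product does crack a shift cipher: correlate the ciphertext's letter frequencies against English frequencies at each trial shift, and the true key scores highest, since the aligned vectors are the most "coincident". The frequency table is the standard English one, rounded to three decimals:

```python
# Recovering a shift-cipher key by maximizing the dot product of the
# ciphertext's frequency vector with the English frequency vector.

english_freq = [
    0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
    0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
    0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]

def best_shift(ciphertext):
    counts = [0] * 26
    for ch in ciphertext:
        counts[ord(ch) - 65] += 1
    freqs = [c / len(ciphertext) for c in counts]
    # dot product of shifted ciphertext frequencies with English
    score = lambda k: sum(freqs[(i + k) % 26] * english_freq[i]
                          for i in range(26))
    return max(range(26), key=score)

plain = ("CRYPTOGRAPHYISTHEPRACTICEANDSTUDYOFTECHNIQUESFORSECURE"
         "COMMUNICATIONINTHEPRESENCEOFADVERSARIES")
key = 3
cipher = "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in plain)
print(best_shift(cipher))   # 3, the key
```

Shifting the key slides one vector past the other, exactly the "shifting components into adjacent dimensions" picture above, so I don't think the vector-space framing is as unrelated as I feared.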

Thursday, September 10, 2009

2.1-2.2, 2.4

1. Running frequency analysis on English plaintext is good and all, but in the real world, not only is English with punctuation encrypted, but all types of binary data, including compressed data (which has more entropy, I think). If something is entropy encoded, is it no longer susceptible to frequency analysis attacks?

2. I like how he said that everyone "knows" that substitution ciphers can be broken by frequency analysis. It's kind of like the budding cryptographer's mantra: "run some frequency analysis on it". Do all languages exhibit this kind of statistical exploitability? Chinese has a few thousand characters and the semantics and meanings are largely contextual. I suppose digrams and trigrams take that into account to a degree, but I wonder what Chinese codes looked like.

Lecture

1. Larrabee's cipher was (self-)proclaimed to be the greatest cipher invented or discovered. With Hogg's improvements. That seems a little weird. Probably the most difficult thing for me here is not falling into the same trap they did. Yeah, I know it's a cop out.

2. I was reading the liberal internet the other day (actually a few years ago), and some people were discussing their disgust with President Bush's wiretapping policy and the administration's appeal to the idea "if you have nothing to hide, why does privacy matter?" I suppose one could make the argument that we (as a society) conflate privacy with secrecy or security or something, but I wonder what their response would be to early Church leaders. Certainly they had a need to keep things secret from their enemies, but I think they are often labelled as aggressors hiding secret plans and not as victims trying to hide their location (or something). On the other hand, they openly suggested bribing politicians to get legislation through, so we can be sure those were different times.

Thursday, September 3, 2009

3.2-3.3

1. I took Number Theory during Summer term, so most of this is really fresh in my mind. I can never remember the proof that if (a,n)=d and ax=b (mod n), there are exactly d solutions, distinct mod n, to the congruence. Leaving fractions in congruence relations is new.
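For my own reference, a little Python sketch of that theorem (the helper is my own; it uses Python 3.8's three-argument pow for modular inverses):

```python
from math import gcd

def solve_linear_congruence(a, b, n):
    """Return all x in [0, n) with a*x ≡ b (mod n).

    The theorem: with d = gcd(a, n), there are exactly d solutions
    distinct mod n when d divides b, and none otherwise.
    """
    d = gcd(a, n)
    if b % d != 0:
        return []
    # Reduce to (a/d) x ≡ (b/d) (mod n/d), where a/d is invertible mod n/d.
    a0, b0, n0 = a // d, b // d, n // d
    x0 = (b0 * pow(a0, -1, n0)) % n0
    # The d solutions mod n are x0, x0 + n/d, x0 + 2(n/d), ...
    return [(x0 + k * n0) % n for k in range(d)]
```

The reduction step is where the proof I can never remember lives: one solution mod n/d fans out into d solutions mod n.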

2. I was brought up with the doctrine that math was a tool and the motivation for inventing/discovering new math was solving problems. Certainly Newton's development of calculus jibed with this. Recently in my math courses, I have heard professors say that math is something where you invent rules and see what ramifications those rules have. I also have had two (old) math professors talk about how the vast majority of dissertations are on such esoteric topics that there is no foreseeable application of any results obtained in the research. These two ideas have left me with a bit of cynicism. Am I naive in thinking that these "invented" rules are not arbitrary--that there is some real-world motivation for them? And if we assume there is no application for a set of different topics, what does it matter which one is researched, since it's all mind-games anyway?... Now the moral: When I learned about congruence and divisibility, I wavered between considering it exclusively either obvious or (more often) inconsequential. Same with primes. But modern cryptography rests on these concepts and (essentially) truths. Did the mathematicians who developed it see the potential in it? I suppose they didn't, but why did these ideas disseminate among mathematicians and become fixed in the curriculum?

Tuesday, September 1, 2009

Introduction

I am a senior majoring in Mathematics and minoring with a specialty in Computer Science. Post calculus, I have taken linear algebra, ordinary diff. eq., partial diff. eq., abstract algebra, math 190, combinatorics, number theory, theory of analysis 1, calc. of several variables, and discrete math (CS 236). I am taking this class because it is a part of my coherent set which also includes combinatorics and number theory. The coherence is that they will all contribute to work in information security. I don't have much experience with any CAS, but I have used Maple before. Dr. Vianey Villamizar is one of the better math instructors I've had. He is really good at exposing the motivation for certain constructions (right word?) like the Fourier transform, so that the material seems more approachable and accessible. If I were unable to come to your office hours, MWF 2-3pm, TTh anytime would work.

Sunday, April 12, 2009

Final

I need to understand group theory much, much better. A couple of nights ago, I had a math dream where I was playing chess with my brother and each piece was labeled with an ordered pair. For some reason, each side had only fourteen pieces. For some reason, in my dream I knew that the ordered pairs were in Z_2 x Z_7 and that (1,1) would generate it since gcd(2,7)=1. It was really weird. A few nights later (so, like, two nights ago), I had another dream about group theory. It was crazy. In class, I say we go over the Sylow theorems and their implications.
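Just to convince myself the dream was right, a quick sketch in Python (the helper name is mine):

```python
def cyclic_subgroup(element, mods):
    """Repeatedly add `element` to itself in Z_m1 x Z_m2 x ... and
    collect the distinct results, i.e. the cyclic subgroup it generates."""
    seen = set()
    current = tuple(0 for _ in mods)
    while True:
        current = tuple((c + e) % m for c, e, m in zip(current, element, mods))
        if current in seen:
            break
        seen.add(current)
    return seen

# (1, 1) should generate all 14 elements of Z_2 x Z_7 since gcd(2, 7) = 1,
# which is exactly the chessboard situation from the dream.
```

Sure enough, (1, 0) only generates a subgroup of order 2 and (0, 1) one of order 7, but (1, 1) gets everything.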

Thursday, April 9, 2009

Section 8.5

1. I understand the reasoning they used to prove certain properties and classify the groups, but that's merely an artifact of knowing how to apply a theorem. At this point, they're little more than memorized facts and not the concepts they should be (which is necessary for any useful application).

2. Quaternions, as you probably know, have a direct application to rotations in 3D space and avoid the problem known as "gimbal lock" when performing such transformations. They therefore have a direct application to 3D graphics, which I suspected groups would since they are often given geometric interpretations. If I'm not mistaken, quaternions do not entirely "live" in three spatial dimensions--they comprise four--just as D_n does not entirely live in two.
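A minimal sketch of what I mean, assuming the standard Hamilton product and the q v q* rotation formula (function names are my own):

```python
import math

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by `angle` radians via q v q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    # Embed v as a pure quaternion (0, x, y, z) and conjugate by q.
    _, x, y, z = quat_mul(quat_mul(q, (0.0,) + tuple(v)), q_conj)
    return (x, y, z)
```

The four components of q are the "extra dimension" I was talking about: the rotation itself lives in 3D, but the group element representing it doesn't.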

Friday, April 3, 2009

8.2

1. The Fundamental Theorem of Finite Abelian Groups sounds important. Why do the people who write the textbooks always lay the groundwork out so that the proofs of the fundamental theorems are so anti-climactic? I can't say I followed Lemma 8.6, an "intricate" proof.

2. I'm swimming in deep water here. I knew Abstract Algebra would be a bit abstract, but I have no intuition with this kind of stuff.

Wednesday, April 1, 2009

8.1

1. I'm not sure I understand the implications of Theorem 8.1. In fact, this is all a bit hazy.

2. I realized that at the end of a lot of problem sets, there is a grey box headed "Application" containing a reference to a real-world application of the topic at hand. Am I to assume that if there is no such grey box, there is no real-world application? I'm sorry; to really have a better grasp on this topic, I'm going to need time to digest it.

Sunday, March 22, 2009

7.8

1. Composition factors look interesting and deep. I wouldn't mind another explanation of them.

2. I think the real insight here is in the definitions: the kernel, the homomorphic mapping from one group to another, the standardization of these theorems and definitions. These are given freely, and then the proofs are relegated to previous results in analogous ring constructs. I suppose this is a good thing, since, while the definitions and homomorphisms seem trivial, it is easy to see these things after the fact.

Thursday, March 19, 2009

7.7

1. The most difficult part was... I don't know. I don't know what I don't know.

2. Quotient groups seem like the natural thing to construct after all that we saw with quotient rings.

Thursday, March 12, 2009

7.5

1. Whoa. Who saw this coming? Congruence for groups? Considering we have (supposedly) been exposed to the proofs twice before, this is sitting pretty okay with me. Lagrange's Theorem seems important.

2. Okay, I know I always ask this question (mostly rhetorical, and never accusatory): what can this be used for? I still see applications with computer graphics and digital interfaces, but that's my bent. I suppose the von Neumann computer model is one avenue where the abstract meets the actual, but I'm blind to other applications. I guess I'll give you a rundown of what I'm thinking since this is the "making connections" portion of the blog: I read a question on Slashdot a few weeks ago that was essentially, "How big is the intersection between philosophy and computer science?" and the general answer was, "Not very." This struck me as odd, because it seems to me that problem definition/specification/nature borders on the philosophical if it isn't outright. In my experience, I see a lot of what I'd call poorly coded applications that impose arbitrary limitations, which I conjecture is because the developer could not or would not prove to his/her satisfaction that certain conditions could not occur without some sweeping restriction. One naive approach to writing anything that models things geometrically might impose arbitrary limits on the types of transformations one could do on things as "tangible" as 3D models or as abstract (conceptually) as graphical user interfaces (which, I think, must exist in spatial dimensions, by definition...?). As a sort of contrived example, I can imagine ensuring that an entire group of discrete transformations was applied iteratively (since computers do that well) by using a generator, and then removing a lot of "logic" code that checks each transformation was applied, with the more elegant justification that they must all have been applied since it's been proven that such and such is a generator for the group. Of course, this is the approach Donald Knuth took with his software, in that he said, "Beware of bugs in the above code; I have only proved it correct, not tried it."
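A toy version of that contrived example, with a 90-degree rotation generating the cyclic group of order 4 (the setup and names are my own invention):

```python
def orbit_from_generator(apply_gen, state, order):
    """Apply a generator transformation repeatedly, recording the state
    after each application. After `order` steps, every element of the
    cyclic group has been applied and the state is back where it started,
    so no per-step bookkeeping "logic" code is needed."""
    states = []
    for _ in range(order):
        state = apply_gen(state)
        states.append(state)
    return states

# Rotation by 90 degrees generates the cyclic group C_4 acting on the
# plane; iterating it walks a point around all four quarter-turns.
def rot90(point):
    x, y = point
    return (-y, x)
```

The point is the one from the paragraph: because rot90 is proven to be a generator of order 4, iterating it four times is guaranteed to visit the whole group, so the guarantee comes from the theorem rather than from runtime checks.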

Sunday, March 8, 2009

7.3

1. The most difficult part of the section was the proof that a cyclic subgroup is the smallest subgroup containing elements of a given set. I take it since there is notation for such a group, it will come up more.

2. I think that a bunch of groups exist in Pascal's triangle with numbers in the proximity of one another. I think all elements a in Z sub n where (a,n)=1 form a group under multiplication. I think the set of all physical manipulations can be broken down to primitives which form a group.
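A quick brute-force check of that second claim (that the a with (a,n)=1 form a group under multiplication mod n), in Python:

```python
from math import gcd

def units_mod(n):
    """Elements of Z_n coprime to n; the claim is that these form a
    group under multiplication mod n."""
    return {a for a in range(1, n) if gcd(a, n) == 1}

def is_closed_with_inverses(n):
    """Check closure and existence of inverses for the units mod n
    (associativity and the identity are inherited from Z_n)."""
    u = units_mod(n)
    closed = all((a * b) % n in u for a in u for b in u)
    has_inverses = all(any((a * b) % n == 1 for b in u) for a in u)
    return closed and has_inverses
```

For n = 12 this gives {1, 5, 7, 11}, and every element is its own inverse there, which seems like a fun special case.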

Tuesday, March 3, 2009

7.1 Part 2

1. Nothing too difficult here. I either understand it really well or not well at all. Sorry I can't be of more help here.

2. I'm going to assume that I really don't understand this well at all (even though I think I do). I tutor kids in math, and I hear a lot of kids say, "This is the most useless stuff. I'm never going to use this. Why am I learning this?" Of course, I can see the applicability of what they're learning. They could too, if they really understood the story problems. But there aren't story problems in this book. I can think of using these, uh, constructs as tools toward proving that certain systems will have certain behavior, that could help in the design of, well, a lot of things, I think.

Sunday, March 1, 2009

7.1

1. I hate to be not helpful, but I had Math 190 last semester, so groups are pretty fresh in my mind. No problems here.

2. I get the impression that transformations are associated a lot with groups. If one were designing/creating a computer program that had an interface that mimicked that natural, tangible, physical interface we have, I bet it would be useful to be able to prove that any transformation was the composition of perhaps simpler transformations, proving that transformations were closed under composition and that any manipulation the user might make could be predicted, or at least its result could be. This is probably a very rudimentary application, if it even is one.

Thursday, February 26, 2009

9.4

1. I'm assuming this is not part of the curriculum, so the most difficult part is irrelevant...? I don't know. I saw this construction in Math 190, so it's all good.

2. It talked about the definition of addition, multiplication, etc. being "motivated" by the definition in Z and Q. I'm wondering a few things: which came first, the numbers themselves and their requirements, or some abstract notion of quantity and relationships...I mean, we could define things a lot of different ways, right? But 1/2 of a pie plus 1/4 of a pie is 3/4 of a pie. I guess the numbers are placeholders for some notion of quantity anyway, and that notion has to be consistent and maybe all of math is just what follows from that. I might be kind of an engineer at heart, but I keep asking myself what immediate applications I can find for this (and I mean this specifically). Is there any kind of tangible example of quotient fields outside of the rationals?

Tuesday, February 24, 2009

Exam 2

1. I think the First Isomorphism Theorem is important. I also think the nature of R/I when I is a prime ideal is important.

2. I think I need a firmer grasp of prime and maximal ideals.

3. Prove (log n)/n decreases monotonically.
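If I treat n as a continuous variable, the standard calculus approach seems to work:

```latex
f(x) = \frac{\ln x}{x}, \qquad
f'(x) = \frac{\frac{1}{x}\cdot x - \ln x}{x^2} = \frac{1 - \ln x}{x^2} < 0
\quad \text{for } x > e,
```

so the sequence is strictly decreasing for n >= 3. One caveat: ln 2 / 2 is about 0.347 while ln 3 / 3 is about 0.366, so it actually increases from n = 2 to n = 3 before the monotone decrease kicks in.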

Sunday, February 22, 2009

6.3

1. This really wasn't that difficult of a section. I like the idea of maximal ideals.

2. An integral domain is related to Z mod p where p is prime. Let ~ be congruence. If ab ~ 0 (mod n), with a, b both nonzero, then n must be composite since ab is a multiple of n (assuming n divides neither a nor b). Actually, typing this out, this isn't really a new concept. So I guess I'm making the connection now that I should have made before.
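Since I was typing it out anyway, a one-liner to see the zero divisors appear exactly when n is composite (my own throwaway helper):

```python
def zero_divisors(n):
    """Pairs of nonzero a, b in Z_n with a*b ≡ 0 (mod n); the list is
    nonempty exactly when n is composite, i.e. when Z_n fails to be an
    integral domain."""
    return [(a, b) for a in range(1, n) for b in range(1, n)
            if (a * b) % n == 0]
```

For n = 6 you get pairs like (2, 3) and (3, 4); for any prime the list is empty, which is the Z mod p connection.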

Monday, February 16, 2009

6.1

1. Ideals. It doesn't seem that bad. It's always harder to broaden my definition of things, but that's the nature of generalization. I mean, that's what we have to do, right? We can't conceive of everything and we can only conceive of things pretty close to our experience. So we generalize. We make sweeping claims from finitely many specifics. And proofs are generalizations too. But, uh, nothing difficult here. So I either really get it or I really don't.

2. I think it's interesting that even though we are broadening our definitions, congruence is still based on subtraction. I know that we could define that symbol to mean a lot of things (redefining addition and satisfying our axioms, of course), but it seems weird to me that we don't have a lot of different flavors of congruence (maybe we do...?) like difference congruence, logarithmic congruence, or cross-product congruence or something.

Thursday, February 12, 2009

5.3

1. There wasn't too much that was difficult in this section. I am amazed at how similarly irreducible polynomial quotient rings and Z mod p for some prime behave. Seriously, the fact that it is a field, and the implications of all that, well, it was unexpected to me.
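To make that concrete for myself, here's a sketch of the smallest interesting example, GF(4) = Z_2[x]/(x^2 + x + 1), with my own encoding of elements as 2-bit integers:

```python
# Elements 0, 1, x, x+1 encoded as 0b00, 0b01, 0b10, 0b11.
MOD_POLY = 0b111  # x^2 + x + 1, irreducible over Z_2

def gf4_add(a, b):
    """Addition in characteristic 2 is XOR (every element is its own
    additive inverse)."""
    return a ^ b

def gf4_mul(a, b):
    """Carry-less polynomial multiplication, reduced mod x^2 + x + 1."""
    product = 0
    for i in range(2):
        if (b >> i) & 1:
            product ^= a << i
    # If the x^2 term appeared, subtract (XOR) the modulus to reduce.
    if product & 0b100:
        product ^= MOD_POLY
    return product
```

The surprising field fact shows up as: every nonzero element has a multiplicative inverse, e.g. x * (x+1) = x^2 + x = 1 after reduction.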

2. I have this problem where I ask "why?" all the time. First it was "why are we learning this?" That was quickly mitigated as I renewed my faith in the BYU mathematics department--I realized that they wouldn't teach things that one didn't need. It then shifted to, "Why was this developed?" Newton invented the Calculus (or discovered it, if you prefer) to model the laws of nature. I understand a little about the application of fields to cryptography and even to computation in general, but is the development of these ideas really that recent? (By recent, I mean early 1900s). Sometimes I have this suspicion that we are learning things that, while true, aren't immediately applicable, but they'll be assets when the fallout comes and we have to decide who gets to stay in the shelter and who doesn't. Kind of like that desert island game. I'm kidding. Kind of.

Monday, February 9, 2009

5.1

1. The most difficult thing for me was recognizing that the congruence classes for linear polynomials are unique for each linear polynomial, along with the corollary to that, which talked about what x^2+1 is related to.

2. Obviously, this has a lot to do with congruence classes for integers. I'm waiting for the big reveal when we see that all this information lets us send information securely via encryption or something very practical like that. Or maybe this is a course to prepare undergraduates for graduate work. What is a real world application of polynomial congruence classes?

Thursday, January 29, 2009

4.1

1. Once again I am burdened with the question: when are we just manipulating symbols and when are we actually manipulating ideas (obviously the latter is probably the answer and necessary for any rigorous study)? This chapter has challenged my conceptions of what x is. I guess even in my old way of thinking, it's just an element with certain properties. This actually did clear up my question of why we would say that polynomials in a ring had coefficients in that ring since it seemed stupid to have some polynomials in Z but use elements of R in place of x.

2. Talking about the degree of functions, I am wondering: how do you prove something that is obvious? The interesting thing is that obvious is subjective. I have been bored in class when a seemingly obvious concept has been proven. I have been confused in class when some method of proof is substituted with hand-waving because the result is obvious. Experience breeds intuition. We're all being taught to think in a certain way, to think with certain biases and conventions. But I think the world is too complex, and the subject matter far too advanced, for someone with real intuition to build it up from scratch.

Sunday, January 25, 2009

Catch-up

1. I spend between one and two hours on each homework assignment. I spend probably an hour total reading the material and blogging about it, so between two and three hours total--nothing unreasonable. I feel the reading/lectures prepare me for the assignments.

2. I like the class. For now, as the material isn't too difficult, the class lecture seems redundant, but I can see the purpose of reading before, and I'm sure it will get harder, so it's all good.

3. It's a good class. It's well-taught. I think I can say that without pandering.

Thursday, January 22, 2009

3.3

1. As the chapter said, this stuff isn't too hard with manageably-sized sets but as the sets grow, so does the problem of finding isomorphisms/homomorphisms by hand (or so it seems to me). This seems like a job well-suited for a computer: it is rigorously definable and really just symbol manipulation, which is what the Turing vision of a computer entailed.
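Here's roughly what I mean by it being a job for a computer: a brute-force isomorphism search over small Cayley tables (my own toy representation, where a table is a dict mapping (a, b) to a*b over elements 0..n-1):

```python
from itertools import permutations

def is_isomorphic(table1, table2):
    """Brute-force search for a bijection that preserves the operation.
    Feasible only for small groups: the search is over n! relabelings."""
    n = round(len(table1) ** 0.5)
    for perm in permutations(range(n)):
        if all(perm[table1[(a, b)]] == table2[(perm[a], perm[b])]
               for a in range(n) for b in range(n)):
            return True
    return False

def cyclic_table(n):
    """Cayley table of Z_n under addition."""
    return {(a, b): (a + b) % n for a in range(n) for b in range(n)}

def klein_table():
    """Cayley table of the Klein four-group, realized as XOR on {0,1,2,3}."""
    return {(a, b): a ^ b for a in range(4) for b in range(4)}
```

It correctly distinguishes Z_4 from the Klein four-group, which is the classic order-4 example--pure symbol manipulation, exactly as the Turing vision entailed, but hopeless by hand once the sets grow.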

2. Mathematics often intrigues me in that I wonder if a lot of math is developed (or discovered, but let's not start that debate...) with no real purpose and is simply discarded. On the one hand, real-world problems serve as motivation for mathematical (or any kind of) discovery that can lend itself to the solution of the problem. On the other, I could see many well-educated and well-read mathematicians with nothing really new to offer but who still develop some kind of consistent system and declare it new math. I wonder if Fermat knew the implications that prime numbers would have for our "futuristic" society (where almost all security is based thereon, etc.).

Tuesday, January 20, 2009

3.2

1. The most difficult thing for me in this section is keeping symbol manipulation separate from the logical validity of the operations. It seems interesting to me that the behaviors of systems can be dictated by defined symbols and operations. I probably don't understand it, though, to be saying that. I think it's also interesting that kids are taught symbol manipulation through school while teachers struggle to teach how it is applied--there's just too much to go over and most kids just aren't willing to put forth the effort into something if it doesn't come easy. As a math tutor, I see this consistently; it is the rule, not the exception.

2. I am taking Theory of Analysis (315) right now and the book started with fields. The author, who doesn't mince words, went through a similar process as this book to establish laws for fields, groups, etc., albeit with way fewer words. It's nice to see some mathematical convergence since for so long it seems there is just new topic after new topic.

Tuesday, January 13, 2009

2.3

1. I just had Math 190, so modular arithmetic is fresh in my mind. I know you probably hate that response, but I suppose I don't know what I don't know and so I can't think about it at a deeper level "at will". This presupposes that you are using this to determine what to emphasize during class time. If you would like to know the hardest aspect anyway, I suppose it would be...I really don't have anything at this point. Stay tuned.

2. One of my math professors said, "Mathematics is an opportunity to be honest with yourself." Like right now, I don't know the ramifications of a prime modulus producing a finite field but I have to assume that these concepts are taught for a) some application which I cannot come up with, or b) as a foundation to more advanced and applicable concepts. The great thing is that in the near future, I will have another tool at my disposal to solve problems. That's cool.

Sunday, January 11, 2009

2.2

1. The most difficult thing I can foresee is proving that Z modulo n is only a field if n is a prime number. Actually, that won't be that bad, since the greatest common divisor of the prime number and all its congruence classes is 1 (except for [0]). I'm taking Theory of Analysis (315) and I have to prove, with fixed b > 0, that r = m/n = p/q => (b^m)^(1/n) = (b^p)^(1/q). Any suggestions?
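Here's the outline I'm leaning toward for that Analysis problem, assuming uniqueness of positive nq-th roots:

```latex
\text{Suppose } \frac{m}{n} = \frac{p}{q} \text{ with } n, q > 0,
\text{ so that } mq = np. \text{ Then}
\left[(b^m)^{1/n}\right]^{nq} = b^{mq} = b^{np}
= \left[(b^p)^{1/q}\right]^{nq}.
```

Both sides are positive reals with the same positive nq-th power, and positive nq-th roots are unique, so (b^m)^(1/n) = (b^p)^(1/q).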

2. I was talking to a BYU mathematics Master's student and he told me he could find no application for the infinity of infinities proof (that there are uncountable sets of uncountable sets). It got me thinking about all of the development of prime numbers and number theory in a time when it was unreasonable to use them for, say, cryptography. I can't see a use for finite fields yet, but I'm sure when I realize what I can do with them, I'll wonder what I ever did without them.

Thursday, January 8, 2009

1.1 - 1.3

1. I would say the most difficult thing for me is accepting that negative numbers are primes. It flies in the face of most of my long held beliefs. But I suppose that even positive primes are divisible by -1, invalidating the definition that primes are only divisible by 1 and themselves.

2. I think that the fundamental theorem of arithmetic may have some ramifications for the methods of cryptography in use now (the RSA scheme). If I recall correctly, the security of the RSA scheme hinges on the fact that it is hard to factor large numbers. That's it, though. Computational power is still increasing at a fairly rapid rate (especially considering the use of FPGAs in GPUs) and who's to say an algorithm won't be developed to factor the numbers in polynomial time? With more investigation, I might be able to determine if the existence of multiple prime factorizations of a number compromised the RSA scheme, but at cursory glance it looks like it wouldn't (because the totient function would be different and so the keys would be different), but it did get me thinking.
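A toy sketch of that dependence on factoring (tiny textbook-sized numbers of my own choosing, nothing remotely secure): anyone who can factor n recovers the totient, and with it the private key.

```python
# Toy RSA: the private exponent d falls out immediately once the
# factors p and q of the public modulus n are known.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # totient, computable only via the factors
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
```

So the whole scheme is exactly as strong as the difficulty of getting from n back to p and q, which is why a polynomial-time factoring algorithm would be such a disaster.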

Tuesday, January 6, 2009

Introduction

My name is Kimball Germane. I am a Junior with a declared major of Computer Engineering, but I am changing it to Mathematics with a minor in Computer Science.

Math classes taken since calculus:

Math 334
Math 348 (I think it's 348, the class was PDE)
CS 236 (Discrete Math)
Math 214
Math 190
Math 343

I am taking this class 60% to fulfill requirements for the major, 40% because I like math. But maybe I'm not so sure on what abstract algebra really is. I really liked Math 190. I wonder if it's anything like that...

The most effective math professor I have had is either Vianey Villamizar or Sum Chow. I had Dr. Villamizar for both flavors of differential equations and he can explain (what I think are) complex topics very clearly. I had Dr. Chow for linear algebra, and even though there was a little bit of a language barrier, his method of teaching was enlightening.

I've trained in Shaolin kung fu for over eight years.