Thursday, November 15, 2012

11.3-11.4, due on November 16

This was a good section, fairly easy to understand. I got a little confused with gcd because we show that the set of all the positive linear combinations of the two numbers in question (i.e. a and b, where we're looking for gcd(a,b)) has a least element, even though we're looking for the greatest common divisor. However, it makes sense: if d is the greatest common divisor, then d divides every linear combination of a and b, so the smallest positive linear combination has to be d itself, the least element of the set on page 251.
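To convince myself, I tried brute-forcing it: generate the linear combinations ax + by over a small window of coefficients and check that the least positive one really equals gcd(a, b). (A toy sketch with numbers I made up, not anything from the book.)

```python
from math import gcd

def least_positive_combo(a, b, bound=50):
    # all values a*x + b*y for coefficients x, y in a small window
    combos = {a * x + b * y
              for x in range(-bound, bound + 1)
              for y in range(-bound, bound + 1)}
    # the least positive element of this set should be gcd(a, b)
    return min(c for c in combos if c > 0)

assert least_positive_combo(12, 18) == gcd(12, 18)  # both are 6
assert least_positive_combo(35, 21) == gcd(35, 21)  # both are 7
```

Of course this only searches a finite window of coefficients, but it matches the theorem on every pair I tried.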

The Euclidean Algorithm is a useful way to find the gcd of really big numbers! The proof was tough to follow, but then the example (example 11.10) actually laid out how to do it explicitly, and it made sense. I like the sequence of these sections as well: how we first learned about the division algorithm, then about gcds, and now we use all of that in the Euclidean Algorithm.
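Here's the algorithm as I understand it, sketched in Python (the numbers are my own, not the ones from example 11.10): at each step, apply the division algorithm and replace the pair (a, b) with (b, r), stopping when the remainder is 0.

```python
def euclid_gcd(a, b):
    # repeatedly apply the division algorithm: gcd(a, b) = gcd(b, a mod b)
    while b != 0:
        a, b = b, a % b
    return a

# 946 = 2*374 + 198, 374 = 1*198 + 176, 198 = 1*176 + 22, 176 = 8*22 + 0
assert euclid_gcd(946, 374) == 22
```

The last nonzero remainder is the gcd, which is exactly what the worked example does by hand.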

Tuesday, November 13, 2012

11.1-11.2, due on November 14

The Division Algorithm is a very basic concept, but the proof is crazy. I especially get tripped up when we introduce q and r into the proof. At first I didn't understand the claim that they are unique, but then I noticed the restriction 0 <= r < a, which is what forces q and r to be unique. In the actual proof, why do we consider the set of integers of the form b - ax with b - ax >= 0? Also, why are the integers that satisfy the qualifications for the set positive? It seems that a negative value for x would make sure that b - ax was always greater than zero. I understand that a negative x wouldn't work with our division algorithm, but the inequality and relation defining the set don't seem to take this into account.
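A little numerical check of the uniqueness claim (my own sketch): for a > 0, Python's divmod happens to return exactly the pair (q, r) with 0 <= r < a, even when b is negative.

```python
def division_algorithm(b, a):
    # for a > 0, the unique pair (q, r) with b = a*q + r and 0 <= r < a
    q, r = divmod(b, a)
    assert b == a * q + r and 0 <= r < a
    return q, r

assert division_algorithm(17, 5) == (3, 2)    # 17 = 5*3 + 2
assert division_algorithm(-17, 5) == (-4, 3)  # -17 = 5*(-4) + 3
```

The negative case is the interesting one: the remainder is still forced into {0, ..., a-1}, which is what pins down q.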

Since we've seen these types of problems before, it's neat that we get to learn more about them now. For instance, I liked the application of the division theorem to divisibility of integers (I'm not sure how to say that: where we let a = 2, 3, 4, etc., and then we know how to write any integer in the form aq + r, for example 2q + 1). We learned about this and used it earlier, taking it as true without really learning the proof behind it. Now we know!
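Sorting integers into the forms aq + r is just grouping them by remainder; a quick toy check (my own example) shows every integer lands in exactly one class:

```python
# every integer is a*q + r for exactly one r in {0, ..., a-1}; here a = 3
a = 3
groups = {r: [n for n in range(-6, 7) if n % a == r] for r in range(a)}

assert groups[0] == [-6, -3, 0, 3, 6]   # the multiples of 3, i.e. 3q
assert groups[1] == [-5, -2, 1, 4]      # the integers of the form 3q + 1
assert groups[2] == [-4, -1, 2, 5]      # the integers of the form 3q + 2
```

With a = 2 this is exactly the even/odd split (2q and 2q + 1) we used earlier in the course.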

Sunday, November 11, 2012

Rest of Section 10.5, due on November 12

On page 239, the actual proof of the Schröder-Bernstein Theorem is tough to follow. It seems like the function g1 doesn't really mean anything, but rather is just some function defined to equal the function g. It serves to give us a bijective function and an inverse, but how is it relevant? Where does the function come from?

The idea behind the theorem is really neat and it makes sense. It reminds me of calculus and that theorem dealing with limits; I forget the name, but I think it's the Squeeze Theorem. We're almost forcing the value of the cardinality because of the bounds on either side. That kind of argument is super common when applied to real numbers and the like, but here it's applied to sets, so it's doubly cool.
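The "squeeze" really does show up if you play with finite sets (my own toy example, with sets and functions I made up; the real theorem matters for infinite sets, where an injection needn't already be onto):

```python
# injections in both directions "squeeze" the cardinalities to be equal
A = {1, 2, 3}
B = {'a', 'b', 'c'}
f = {1: 'a', 2: 'b', 3: 'c'}   # an injective function A -> B
g = {'a': 2, 'b': 3, 'c': 1}   # an injective function B -> A

# injectivity: distinct inputs give distinct outputs
assert len(set(f.values())) == len(f)
assert len(set(g.values())) == len(g)

# |A| <= |B| (from f) and |B| <= |A| (from g), so |A| = |B|
assert len(A) <= len(B) <= len(A)

# for finite sets of equal size, an injection is already a bijection
assert set(f.values()) == B
```

For infinite sets the last step fails, which is exactly why the proof needs that clever construction with g1.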

Friday, November 9, 2012

10.5, due on November 9

Several parts of this section were difficult to understand. The introduction of a restriction went fine, but when it was applied to the reals, that threw me. Once I reread it and understood that the restriction is a subset of the initial function, or rather that the restriction is taken on a subset of the initial domain, I understood how the restriction g1 could be one-to-one. Effectively we are only keeping the nonnegatives, and thus leaving out the negatives that would yield the same values in the range when squared.
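A quick sanity check of that idea (my own sketch, using a handful of sample points rather than all the reals):

```python
# f(x) = x^2 is not one-to-one on all the reals...
f = lambda x: x * x
assert f(-2) == f(2)   # two different inputs, same output

# ...but its restriction to the nonnegatives is one-to-one:
# distinct nonnegative inputs give distinct squares
restricted_domain = [0, 1, 2, 3, 4]
images = [f(x) for x in restricted_domain]
assert len(set(images)) == len(restricted_domain)
```

Throwing away the negatives removes every collision, which is what makes the restriction injective.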
Lemma 10.16 is still throwing me and I would appreciate it if we spent some time on it in class. I don't understand why we take values that are in the union of A and B. It seems weird to be taking values from the domain and range of a function.

The definition of a function from the union of two sets to the union of the corresponding ranges of the two individual functions was super cool. I hadn't really thought about that before. It also makes sense that the two domains A and C would need to be disjoint for h to actually be a function (pg 237).
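Treating functions as dicts makes the disjointness requirement really concrete (my own toy example): if the domains overlapped, an element could be assigned two different outputs, and h wouldn't be well-defined.

```python
def glue(f, g):
    # combine f : A -> B and g : C -> D into h : (A u C) -> (B u D);
    # this only defines a function when the domains A and C are disjoint
    assert not set(f) & set(g), "domains must be disjoint"
    return {**f, **g}

f = {1: 'a', 2: 'b'}     # f : A -> B with A = {1, 2}
g = {'x': 10, 'y': 20}   # g : C -> D with C = {'x', 'y'}, disjoint from A
h = glue(f, g)

assert h[2] == 'b' and h['y'] == 20   # h agrees with f on A and with g on C
```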

Tuesday, November 6, 2012

How Data Analytics is Transforming our Lives by Jack Thompson

Honestly this address was very hard to follow and wasn't very enjoyable. He relied on a number of videos to demonstrate what he was talking about but didn't explain them very well. He also didn't spend a lot of time on any one topic. Overall, it could have been a lot better.

Nearing the end, he finally talked about something significant and relevant to us today...Facebook! No but really, he discussed a little of the significance of social media and the fate of privacy in the future. Whose data will be whose? What constitutes your data? He asserted that in the future we won't be able to protect our data: there will be so much technology spread around that anything that was ever on the Internet will be there to stay. That was a neat part for me! Analyzing data seems like an interesting direction and line of work, but I definitely know it's not for me.

10.4, due on November 7

Toughest for me to understand was the initial proof that 2^A is equivalent to P(A). Where does this proof come from? I know that we used the result for finding cardinalities, but where does the actual proof come from? How would we ever think to define a function by that piecewise rule?

That being said, it was neat that out of nowhere this function comes from the set of subsets and pairs up the elements of P(A) and 2^A. I don't quite understand it, but it works out nicely. Other than that, this section is really short and I don't have a lot else to say!
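Playing with a small set helped me see the pairing: each function from A to {0, 1} picks out the subset of elements it sends to 1, and that's exactly the correspondence between 2^A and P(A). (My own toy example; the set A here is made up.)

```python
from itertools import product

A = ('a', 'b', 'c')

def to_subset(func):
    # the subset that an indicator function func : A -> {0, 1} picks out,
    # with func encoded as a tuple of bits, one per element of A
    return frozenset(x for x, bit in zip(A, func) if bit == 1)

# 2^A: all functions from A to {0, 1}
functions = list(product((0, 1), repeat=len(A)))
subsets = {to_subset(func) for func in functions}

# the pairing is one-to-one and onto: 2^|A| functions, 2^|A| distinct subsets
assert len(functions) == 2 ** len(A) == len(subsets)
```

Different bit-tuples always pick out different subsets, which is why the piecewise (indicator) rule gives a bijection.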

Sunday, November 4, 2012

10.4, due on November 5

Maybe I just haven't spent enough time with decimal expansions yet, but there are two proofs given in the text where I'm not sure how they arrive at their contradictions. One is 10.8 and the other is 10.12. The first is the proof that the set of real numbers is uncountable; I get lost somewhere in the defining of another decimal expansion that helps us reach the contradiction. And then with 10.12, it seems like 10.11 states the exact opposite.
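The step that lost me in 10.8 started to click when I tried it on a toy list: whatever decimal expansions someone claims to have listed, change the n-th digit of the n-th one and you've built a number that can't appear anywhere on the list. (The digit strings below are made up.)

```python
listed = [
    "14159",   # digits after the decimal point of the 1st listed number
    "71828",   # ... of the 2nd
    "41421",
    "30103",
    "69314",
]

# change the n-th digit of the n-th expansion; picking digits from {5, 6}
# avoids 0 and 9, so we never collide with things like 0.4999... = 0.5000...
new_digits = ["5" if exp[n] != "5" else "6" for n, exp in enumerate(listed)]
new_number = "0." + "".join(new_digits)

# the new expansion differs from every listed one in at least one digit,
# so it was never on the list, and that's the contradiction
for n, exp in enumerate(listed):
    assert new_digits[n] != exp[n]
```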

The idea of decimal expansions seems neat to me. There was one proof on my homework where the TA said I should have taken the decimal expansion of the number to prove it was irrational/rational, and ever since I've been excited to learn how.