Tuesday, December 4, 2012

Final Blog post, due on December 5

  • Which topics and theorems do you think are the most important out of those we have studied?
  • Over the whole semester there have been many. I think the most important are the methods of proof: induction, contrapositive, direct, etc. Equivalence relations also fall under the most important, along with functions. Of the more recent material, the Schroder-Bernstein Theorem is probably the most important.

  • What do you need to work on understanding better before the exam? Come up with a mathematical question you would like to see answered or a problem you would like to see worked out.

  • The most recent material is toughest for me (epsilon-delta proofs and infinite series in particular). If I had a choice I would like to work out 12.6 or any of the extra problems at the end of the chapter (12.31-12.34). I recognize that some of these were homework problems, but I want to make sure that I understand them. We could also just work out similar ones!


  • What have you learned in this course? How might these things be useful to you in the future?
  • I've learned a lot about proving things. All of the theorems and topics that I mentioned in the first part are things I've learned and retained. I've also learned some neat characteristics of even and odd numbers, and many, many other proofs and facts!

    These sorts of things honestly won't help me too much in my career. However, the general mindset of looking for the things that would make a proof fail very much applies. I can use it in engineering analysis of a given situation or problem. Logic has also already helped a ton in the analysis of circuits.

    Monday, December 3, 2012

    12.4-12.5, due on December 3

    Honestly these sections are still like Hebrew to me. I don't have a real concrete understanding of the general format for proving limits; I scooted by just barely on the last homework. Choosing the epsilon and delta always throws me. I'm sure I'll get it, but as of right now it's my biggest obstacle to mastering these sections. One thing that doesn't make sense to me is the necessity of condition 2 for continuity when we have condition 3. Is there ever a time when condition 3 (on page 289) is satisfied when condition 2 isn't?

    I enjoyed the proofs showing that the limit of the sum of two things (whether polynomials or general functions) is the same as the sum of their individual limits. The same goes for the product of two functions. The use of inequalities amazes me. It seriously is like the best thing in a proof: you can typically make something smaller (or larger) in a way that makes your proof not only possible but ten times simpler.
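    The key step of the sum rule fits in one line. Here is a sketch of the triangle-inequality move (my own summary of the standard argument, not the book's exact wording):

```latex
% Sketch: if lim f = L and lim g = M, choose delta so that both
% |f(x)-L| < eps/2 and |g(x)-M| < eps/2 whenever 0 < |x-a| < delta. Then
|(f(x)+g(x)) - (L+M)| \le |f(x)-L| + |g(x)-M|
                      < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon .
```

    This is exactly the "make something larger" trick: the triangle inequality replaces the quantity we care about with two pieces we already control.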

    Friday, November 30, 2012

    12.3 due on November 30

    Perhaps it's something in the previous section that we weren't assigned, but what does the deleted neighborhood of a mean? It seems like it includes everything except a, which would kind of make sense, because we're discussing limits and quite often limits aren't defined at the point itself. The toughest part of the chapter for me is choosing the values for epsilon or delta. How exactly do we know what we want to specify delta as? It varies from problem to problem and I haven't yet gotten the pattern of it.

    The coolest thing about this class for me is actually understanding things that I've grown up learning. While I wouldn't say that this section fully explained limits and calculus to me, it has definitely helped. Something that was neat was the need to specify a range of acceptable 'closeness'. In programming you often need to put stopping criteria in a loop, which is analogous to limits and their 'stopping criterion' of epsilon.
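    To make that analogy concrete, here is a tiny sketch (my own illustration, not from the book): the sequence x, x/2, x/4, ... converges to 0, and epsilon plays the role of the loop's stopping criterion.

```python
# Toy illustration of epsilon as a loop's stopping criterion:
# the sequence x, x/2, x/4, ... converges to 0, and we stop
# once we are within eps of that limit.
def run_until_close(x, eps):
    steps = 0
    while abs(x - 0) >= eps:  # |x_n - L| < eps is the "closeness" test
        x = x / 2
        steps += 1
    return x, steps

value, steps = run_until_close(1.0, 0.01)
print(value, steps)  # the first term within 0.01 of the limit, and how long it took
```

    Tightening eps just makes the loop run longer, which is the whole idea of "for every epsilon there is a point past which we stay close."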

    Tuesday, November 27, 2012

    12.1, due on November 28

    I'm having a difficult time understanding the use of the ceiling of a number to prove what a sequence converges to. In the proof of Result 12.1, we choose the ceiling of 1/epsilon. Since 1/epsilon is always less than one, wouldn't the ceiling then be 1? Then we let n be an integer greater than that, or greater than one. This makes 1/n less than epsilon. I understand all this; I'm stating it, I suppose, to check my understanding. What I don't understand is how that proves that the sequence converges to 0, or I suppose how it proves that the sequence converges at all. Not understanding this concept makes understanding the rest of this section very tough! Unfortunately I won't be in class tomorrow due to an interview, so I won't be there for the explanation.

    Not understanding that concept of the ceiling has really taken all the fun out of this section. Every proof uses that fact to prove some divergence or convergence, so I lack something specific that I enjoyed about the section. More generally, it's fascinating to me that we can prove convergence in a different way than how I learned in calculus. In calculus we always ended up taking limits and using L'Hopital's rule or some other rule to show that something either converged or diverged. It's neat to have another way to show that.
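    The ceiling's only job in that proof is to manufacture a concrete whole number N out of epsilon. A quick numeric check of the idea (my own sketch; the name `witness_N` is made up):

```python
import math

# In the proof that 1/n -> 0: given eps > 0, pick N = ceil(1/eps).
# Then every n > N satisfies 1/n < 1/N <= eps, i.e. the tail of the
# sequence stays within eps of the limit 0.
def witness_N(eps):
    return math.ceil(1 / eps)

eps = 0.1
N = witness_N(eps)  # ceil(1/0.1) = 10
tail_ok = all(abs(1 / n - 0) < eps for n in range(N + 1, N + 200))
print(N, tail_ok)
```

    Convergence to 0 means this works for every eps, not just one; the ceiling is just how the proof names a specific N for whatever eps is handed to it.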

    Sunday, November 25, 2012

    Exam 3 Assignment, due on November 26

  • Which topics and theorems do you think are the most important out of those we have studied?

  • I feel like showing that sets are denumerable or uncountable was something very important we talked about. The Euclidean Algorithm and the Fundamental Theorem of Arithmetic were also very important.

  • What kinds of questions do you expect to see on the exam?

  • I would expect to see a free-response question dealing with each of the above topics. The true/false for this section might be fairly tough. I'm not entirely sure what to expect because a lot of what we did dealt with extensive proofs. Perhaps some questions might deal with divisibility of integers or cardinality of sets.

  • What do you need to work on understanding better before the exam? Come up with a mathematical question you would like to see answered or a problem you would like to see worked out.

  • I'm concerned about knowing how to prove the Schroder-Bernstein Theorem. If we don't need to do that, then great, but the proof hasn't really clicked for me. Also, I get confused when restrictions are added, and by the 'new codomain' of a function and how that helps in a proof. If we don't need to prove the Schroder-Bernstein Theorem then I won't really worry about it; however, if we do, I would love to review that proof.

    Monday, November 19, 2012

    11.6 and 11.7, due on November 19

    The Fundamental Theorem of Arithmetic has a neat result, but I found myself lost in the proof. The first question arose on page 257, where it says that "by the induction hypothesis, each of a and b is prime or can be expressed as a product of primes." Is this just a restatement of what we are trying to prove? Because if not, that almost seems backwards. Right after this sentence it goes on to say that, by the principle of induction, the theorem is true. My next question deals with Theorem 11.20, on the relation between √n being rational and being an integer. While writing this I actually figured it out... I think the use of several theorems that we proved in the past was a little confusing.

    We talked about this today in class, but the shortcuts for divisibility by several numbers are really neat! We had learned parts of these growing up, and now it's neat to be able to see why they work. I really enjoyed the rules for 3, 9 and 8. However, 11 seems to me just a little too much work! Although I'm not sure why it works or how useful perfect numbers are, perfect numbers are super cool. Is it just a coincidence that those numbers are a sum from 1 up to the largest prime divisor?

    11.5 due on November 19th

    This section mainly centers on Euclid's Lemma and its corollaries. The actual proof of the Lemma is neat; however, what is more interesting to me is the usefulness of the lemma in proving other things. The book talks about several corollaries and uses the lemma to prove them. Something else that caught my eye was in the proof of Theorem 11.16. Two expressions for 'c' were found, and both were substituted into the same equation, allowing us to pull out the 'ab' necessary to show that ab|c. I thought that was clever and something I would have missed on my way through the proof for sure.

    This section was fairly short and easy for me to understand. At first I had to reason in my mind about what it means for two numbers to have a gcd of 1. Actually, I guess I still don't quite understand that set of numbers. It states that the two numbers are integers, so it seems like for them to have a gcd of 1 with all the coefficients being integers as well, the two numbers would need to be zero and one or some combination of negative and positive integers. I would like it if we could go over some examples of integers with a gcd of 1, just so I can have a concrete understanding.
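    Ordinary positive pairs like 8 and 15 already have gcd 1, and the extended Euclidean Algorithm even produces the integer coefficients in the linear combination. A sketch (my own code, not the book's notation):

```python
def extended_gcd(a, b):
    # Returns (g, x, y) with g = gcd(a, b) and a*x + b*y = g,
    # i.e. it exhibits the gcd as an integer linear combination.
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# 8 and 15 are plain positive integers with gcd 1:
g, x, y = extended_gcd(8, 15)
print(g, x, y, 8 * x + 15 * y)
```

    So "gcd 1" doesn't force the pair to involve 0 or 1; any two numbers with no common prime factor qualify, and the coefficients (here 2 and -1, since 2·8 - 1·15 = 1) are allowed to be negative.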

    Thursday, November 15, 2012

    11.3-11.4, due on November 16

    This was a good section, fairly easy to understand. I got a little confused with gcd because we show that the set of all positive linear combinations of the two numbers in question (i.e. a and b, where we're looking for gcd(a,b)) has a least element, even though we're looking for the greatest common divisor. However, if we have the greatest common divisor, then it is the smallest positive linear combination of the two, or the least element of the set on page 251.

    The Euclidean Algorithm is a useful way to find the gcd of really big numbers! The proof was tough to follow, but then the example (Example 11.10) laid out explicitly how to do it, and it made sense. I like the sequence of these sections as well: first we learned about the Division Algorithm, then about gcds, and now we use all of that in the Euclidean Algorithm.
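    The whole repeated-remainder process fits in a few lines. A sketch of the algorithm (my own code; the pair 374 and 946 is just an arbitrary example, not the book's):

```python
def gcd(a, b):
    # Euclidean Algorithm: repeatedly replace (a, b) with (b, a mod b);
    # the last nonzero remainder is the gcd.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(374, 946))  # 946 = 2*374 + 198, 374 = 198 + 176, ... ends at 22
```

    Each step is one application of the Division Algorithm, which is exactly the section sequence paying off.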

    Tuesday, November 13, 2012

    11.1-11.2, due on November 14

    The Division Algorithm is a very basic concept, but the proof is crazy. I especially get tripped up when we introduce q and r into the proof. At first I didn't understand the aspect of them being unique. However, I noticed the restriction 0 <= r < a, which makes sure that q and r are unique. In the actual proof, why do we consider the set of integers where b-ax >= 0? Also, why are the integers that satisfy the qualifications for the set positive? It seems that a negative value for x would make sure that b-ax was always greater than zero. I understand that that wouldn't work with our Division Algorithm; however, the inequality and relation defining the set don't seem to take this into account.

    As we've had these types of problems before, it's neat that we get to learn more about them now. For instance, I liked the application of the division theorem to divisibility of integers (I'm not sure how to say that: where we let a = 2, 3, 4, ... etc. and then we know how to write any integer in the form aq+r, for example 2q+1). We learned about this and used it earlier, taking it as true without really learning the proof behind it. Now we know!
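    The restriction 0 <= r < a really does pin q and r down, even for negative b. A quick sketch (Python's built-in divmod happens to use the same convention when a > 0):

```python
# Division Algorithm: for a > 0 and any integer b, there are unique
# integers q and r with b = a*q + r and 0 <= r < a.
a, b = 3, -7
q, r = divmod(b, a)  # Python's divmod gives 0 <= r < a when a > 0
print(q, r, a * q + r == b, 0 <= r < a)
```

    Note that for b = -7 this forces q = -3 and r = 2 rather than q = -2, r = -1; the remainder is required to be nonnegative, which is what makes the pair unique.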

    Sunday, November 11, 2012

    Rest of Section 10.5, due on November 12

    On page 239, the actual proof of the Schroder-Bernstein Theorem is tough to follow. It seems like the function g1 doesn't really mean anything, but rather it's just some random function set to equal the function g. This serves to give us a bijective function and an inverse, but how is it relevant? Where does the function come from?

    The idea behind the theorem is really neat and it makes sense. It reminds me of calculus and that theorem dealing with limits. I forget the name, but I think it's called the squeeze theorem. We're almost forcing the value of the cardinality due to the limits on either side, which is super common when applied to real numbers and the like, but this is applied to sets and so it's doubly cool.

    Friday, November 9, 2012

    10.5, due on November 9

    Several parts of this section were difficult to understand. The introduction of a restriction went fine; however, when it was applied to the reals, that threw me. Once I reread it and understood that the restriction is a subset of the initial function, or rather that the restriction is taken on a set that is a subset of the initial domain, I understood how the restriction g1 could be one-to-one. Effectively we are only taking the positives, and thus leaving out the negatives that would yield the same values in the range when squared.
    Lemma 10.16 is still throwing me and I would appreciate it if we spent some time on it in class. I don't understand why we take values that are in the union of A and B. It seems weird to be taking values from the domain and range of a function.

    The definition of a function from the union of two sets to the union of their corresponding sets that make up the range of two individual functions was super cool. I hadn't really thought about that before. It also makes sense that the two sets A and C would need to be disjoint for the function h to be considered a function (pg 237).

    Tuesday, November 6, 2012

    How Data Analytics is Transforming our Lives by Jack Thompson

    Honestly this address was very hard to follow and wasn't very enjoyable. He relied on a number of videos to demonstrate what he was talking about but didn't explain them very well. He didn't spend a lot of time on any one topic either. Overall, it could have been a lot better.

    Nearing the end, he finally talked about something significant and relevant to us today... Facebook! No but really, he discussed a little of the significance of social media and the fate of privacy in the future. Whose data will be whose? What constitutes your data? He asserted that in the future we won't be able to protect our data, that there will be so much technology spread around that anything that was ever on the Internet will be there to stay, etc. That was a neat part for me! Analyzing data seems like an interesting direction and line of work, but I definitely know it's not for me.

    10.4, due on November 7

    Toughest to understand for me was the initial proof of 2^A being equivalent to P(A). Where does this proof come from? Well, I know that we used it for finding cardinality, but where does the actual proof come from? How would we think to define a function equal to that piecewise function?

    That being said, it was neat that out of nowhere this function comes from the set of subsets and it relates the pairs of elements of P(A) and 2^A. I don't quite understand it, but it works out nicely. Other than that, this section is really short and I don't have a lot else to say!
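    That piecewise function is the characteristic function of a subset: it sends each element of A to 1 if the element is in the subset and 0 otherwise. For a finite A you can see the bijection directly by writing each subset as a 0/1 string (my own sketch, not the book's proof):

```python
from itertools import combinations

# The bijection between P(A) and 2^A sends a subset B of A to its
# characteristic function: 1 on B, 0 off B. For finite A that
# function is just a 0/1 string, one bit per element of A.
A = ['a', 'b', 'c']

def char_string(B):
    return ''.join('1' if x in B else '0' for x in A)

subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]
strings = sorted(char_string(B) for B in subsets)
print(strings)  # every 0/1 string of length 3 appears exactly once
```

    Distinct subsets give distinct strings and every string is hit, which is exactly the pairing the proof sets up in general.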

    Sunday, November 4, 2012

    10.4, due on November 5

    Maybe I just haven't spent enough time with decimal expansions yet, but there are two proofs given in the text where I'm not sure how they arrive at their contradictions. One is 10.8 and the other is 10.12. The first shows that the set of real numbers is uncountable. I get lost somewhere in the defining of another decimal expansion that helps us reach the contradiction. And then with 10.12, it seems like 10.11 states the exact opposite.

    The idea of decimal expansions seems neat to me. There was one proof on a homework where the TA said I should have taken the decimal expansion of the number to prove it was irrational/rational, and ever since I've been excited to learn how.

    Monday, October 29, 2012

    Blog Post, due on October 29


    • Which topics and theorems do you think are the most important out of those we have studied?
    For this section I would say it would be functions and the various properties of certain functions (bijective, one-to-one, onto).
    • What kinds of questions do you expect to see on the exam?
    I expect to see some proofs regarding these topics, and probably some true/false, although I'm not a fan.
    • What do you need to work on understanding better before the exam? Come up with a mathematical question you would like to see answered or a problem you would like to see worked out.
    I need help with the more general principle of induction: specifically, when we need to use which principle of induction, and then how to select a good starting number. Is there a method, or do you just work through a problem to find out which one will work?

    Thursday, October 25, 2012

    9.6-9.7, due on October 26

    This section is straightforward and I have a fairly good grasp of the material. The most difficult part was finding the inverse of a permutation; just the way it's phrased in the book is a little confusing. Other than that, I just got bogged down amid the proofs about inverse functions, but they're still very easy, probably because we've talked about inverse functions since high school.

    The neatest part for me was Example 9.12 and finding the inverse of that function. The solution was very elegant and done in a way I wouldn't have thought to do it: they set up f(f^-1(x)), plugged f^-1 into f, and solved for f^-1. It's so easy, but it was different from my thought.

    Tuesday, October 23, 2012

    9.5, due on October 24

    The most difficult part of this chapter was the examples at the end of the section. For example, Example 9.10 threw me for a loop with showing that the composition was defined. Then I realized that since the input into the next function was a subset of that function's defined input, it made sense.

    Even though it was confusing at first, that is my favorite part of this section. The functions are like playing Frogger: you need to jump from function to function just like lilypads, and if you don't go in order it doesn't work. You can't skip from the domain to any range you want. But in the example referred to above, our jump, or function, landed us in the ballpark we needed to be in, allowing us to use the next function. Even though we didn't have every part of that set, the function was prepared to deal with any element of the larger set B, and so it worked for the subset of B.

    Sunday, October 21, 2012

    9.3-9.4, due on October 22

    One-to-one, onto, and bijective functions make sense. For me, the waters get murky when we throw in the real numbers. Because the real numbers are infinite sets, it gets confusing to me how a function can be one-to-one but not onto, or vice versa. It would seem like the domain and codomain in each instance are infinite, and so by the same reasoning that a function is not onto it would also not be one-to-one.

    Something about proving that something is bijective is just satisfying. It's like, no matter which way I go from domain to codomain I know what's going on. Each element in the domain is unique and the same for codomain. That brings up a practical application question though, would this ever truly happen in a set of data? I suppose if you were measuring the velocity of some constantly increasing object you would have distinct time and velocity data. Is there anything cool that we can use bijective functions for?
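    On finite sets you can test the two definitions directly, and that also gives a feel for the infinite case: doubling, for instance, is one-to-one but misses all the odd numbers. A sketch (my own helper names):

```python
def is_one_to_one(f, domain):
    # One-to-one: no two domain elements share an image.
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_onto(f, domain, codomain):
    # Onto: every codomain element is hit by some domain element.
    return set(f(x) for x in domain) == set(codomain)

# f(n) = 2n from {0,...,9} into {0,...,19}: one-to-one but not onto,
# mirroring how doubling on the integers misses the odd numbers.
dom = range(10)
cod = range(20)
f = lambda n: 2 * n
print(is_one_to_one(f, dom), is_onto(f, dom, cod))
```

    The two properties fail independently because "no collisions" and "nothing missed" are separate conditions, even when both sets are infinite.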

    Friday, October 19, 2012

    9.1-9.2, due on October 19

    In Section 9.1 (page 198), why does it say that f3 is not a function from A to B because f3 is not equal to A? It seems like a function would never be equal to its sets, because the function contains ordered pairs of elements from two different sets. So then wouldn't f3 always be unequal to A?

    The little history connection was neat. The old definition of a function still holds today. The picture of the mapping in the book was very effective in helping me picture what is actually going on. I'm an engineer and prefer things that are more concrete, I suppose. It was also neat how the functions that we've grown up with make sense under this definition of a function.

    Tuesday, October 16, 2012

    8.6, due on October 17

    I got it! As I was writing this it clicked. I couldn't for the life of me figure out how to reduce the equivalence classes down to within 0 through 5. But then I thought about the remainder. Okay...well now that that's settled. Everything else makes sense, but that was the toughest part for me.

    I think that the idea of being 'closed' is kind of fun. I started thinking about what a set would have to contain to be closed under multiplication and addition, and realized that it could get big really fast. While that could be tedious, theoretically it's interesting that if we multiply or add two numbers in the set, the result is still in the set. Actually, would we be able to represent that in a finite set? Probably not, I suppose, since the product of any two integers can also form a product with another integer in the set. Neat.
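    A finite set closed under both operations does exist once you work modulo n, which is where this section's equivalence classes come in. A quick sketch checking Z_6 (my own code):

```python
# Z_6 = {0, 1, 2, 3, 4, 5} with addition and multiplication taken
# mod 6 is a finite set closed under both operations: reducing mod 6
# always lands the result back inside the set.
Z6 = set(range(6))
closed_add = all((a + b) % 6 in Z6 for a in Z6 for b in Z6)
closed_mul = all((a * b) % 6 in Z6 for a in Z6 for b in Z6)
print(closed_add, closed_mul)
```

    The trick is that we're adding and multiplying equivalence classes rather than the integers themselves, so "getting big really fast" wraps back around instead.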

    Sunday, October 14, 2012

    8.5, due on October 15

    After a few reads, most of this section made sense. However I'm still not accustomed to equivalence classes and they're used extensively in this section. For Result 8.7, the alternate proof for proving that 2a + b is symmetric is still throwing me off. I understand how we want to add the two equations together, but I don't understand how we sub in for 3x and 3y. Hopefully we go over that in class.

    I enjoy when we incorporate things learned or used previously and here we talked about how an equivalence class can relate to properties of integers namely being able to classify any integer within 3 equivalence classes for a certain relation. Also the proofs of equivalence relations are neat; they're elegant and smart which is fun.

    Thursday, October 11, 2012

    Circles, Rivers, and Polygon Packing: Mathematical Methods in Origami by Robert Lang

    My biggest problem with the lecture was that I didn't know enough. The last time I did origami I think I was in middle school. Lang just kind of jumped into the math of origami, almost as if we were already well versed in the field. While that's fine and all, and I'm sure there were some people there who understood a lot more than I did, it was tough to understand.

    However, the applications of origami surprised me. I had heard of other engineers solving problems with origami but didn't know how. While I still don't know exactly how, he showed one way that stresses in a structure could be determined with origami. Another neat thing was that he pulled in a little bit of what we talked about in class with the 4 colors for a map; for origami, though, you must be able to represent the fold pattern with only 2 colors. Also neat was that putting a fraction into binary coded the folds you needed to make to represent that fraction on the paper. Overall the lecture was really neat, the application was cool, and the art was also crazy intense. Who knew origami had so much math involved? Okay... well, you probably did, but not me!

    8.3-8.4, due on October 12

    For whatever reason I had the hardest time understanding why [1]={1,3,6}, etc. Once I figured it out I felt really unintelligent. The same went for the relation on page 179 for [a]. We had been talking about [a] before, but for the second time we returned to the relation a=b. Having forgotten this, I was confused as to why the relation that we had just talked about (x in Z such that x R a) was equal to x in Z such that x=a.

    Something neat about equivalence classes was being able to predict the equality of [1] [3] and [6]. Since 1,3,6 were all related to each other, then [1]=[3]=[6]. The same went for the other integers as well. Also the use of the properties that we had just learned about was neat too. It provided for clever and elegant proofs, which are always enjoyable.
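    That prediction (related elements share a class) is easy to see computationally. A sketch using the "same remainder mod 3" relation as the example, rather than the book's specific {1,3,6} relation (my own code):

```python
# Relation: a R b iff a and b leave the same remainder mod 3.
# Related elements always land in the same class, so [1] = [4] = [7], etc.
def eq_class(a, universe):
    return {x for x in universe if x % 3 == a % 3}

universe = range(10)
print(eq_class(1, universe) == eq_class(4, universe))  # related => equal classes
print(sorted(eq_class(1, universe)))
```

    Since 1 R 4 and 4 R 7, transitivity chains them all into one class, which is exactly the [1]=[3]=[6] phenomenon from the reading.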

    8.1-8.2, due on October 10

    Just from the reading, understanding relations was very difficult. Even though I've read some other things too, relations don't exactly make sense. The relation should contain ordered pairs for the examples we're talking about, but it seems to me that normal relations would be along the lines of y=3x or some other equation. Yet a relation seems to be defined much more loosely than that. We're given random sets of ordered pairs between two sets, but to me there doesn't really seem to be a pattern in the relation. Also, the general lack of examples in this section of the book is kind of disappointing.

    Although I'm not sure how they're used yet, I think properties of relations are kind of fun. It's a little bit more finite than a lot of the things we've talked about up til now. The relation either contains these sets of ordered pairs or it doesn't so it's easy to see what properties the sets have.

    Sunday, October 7, 2012

    6.3 - 6.4, due on October 8

    A few things were very difficult to understand in this section. First, a lot of the proofs are very difficult and involve some unintuitive algebraic manipulation and whooplah. The second is that I still don't see the need for the Strong Principle of Mathematical Induction as opposed to the normal principle of induction. The book states that it's common for sequences of numbers, but I still fail to see exactly why you need to establish several initial cases in order to be able to prove the statement generally.

    These sections were fairly unenjoyable. For me the proofs were very difficult to follow and required numerous readings to grasp. Now that I understand them fairly well, I appreciate them, the minimum counterexample more than the strong principle of induction. It seems like a neat way to prove something, but also like you could prove the same thing with normal induction. I assume that it would also be helpful in proofs where contradiction is more convenient than a more direct approach.

    Friday, October 5, 2012

    6.2, due on October 5

    It seems to me like the more general principle of mathematical induction and the initial statement are so close that the book truly didn't need to distinguish between the two. What I think I'm getting is that for the more generalized statement, your base case doesn't have to be one. So why didn't we just learn that when we introduced induction in the first place? It's a fairly simple concept. The toughest part of this section was the algebraic manipulation. I feel like the proofs are much more technical because we are trying to symbolically prove the general case each time. This leads to a lot of generally tough mathematics!

    For what it's worth, I did see the benefit of being able to use a different case than the base case (unless I'm off base on that). As well, the Well-Ordering Principle came into play, which was something I was wondering about from the last section. It seems like with induction we can basically prove anything dealing with indexed sets or infinite series. So that's cool.

    Tuesday, October 2, 2012

    6.1, due on October 2

    So apparently I included comments on this reading in the section where I was supposed to talk about 5.1. So much makes sense now! I was really confused about why you had asked us to read 6.1 and then we never talked about it. Induction is a pretty basic principle. The part that I couldn't understand was the usefulness of being well-ordered; it doesn't seem to connect in my mind with induction. If we could talk about it in class that would be great.

    Induction is a very cool concept. It's pretty much like a proof by cases on steroids. If the base case is true, and each case being true makes the next one true, then the statement is true for every case! It was also neat the way that Gauss used a related idea to find really large sums. That's a neat aspect of induction.
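    The Gauss story gives the formula 1 + 2 + ... + n = n(n+1)/2, which is also a classic first induction proof. A quick check of the formula (my own sketch):

```python
# Gauss's pairing trick: 1 + 2 + ... + n = n(n+1)/2.
# (Pair 1 with n, 2 with n-1, ...: n/2 pairs each summing to n+1.)
def gauss_sum(n):
    return n * (n + 1) // 2

print(gauss_sum(100), sum(range(1, 101)))  # both give 5050
```

    In the induction proof, the base case is n = 1, and the inductive step just adds (n+1) to both sides of the formula.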

    Sunday, September 30, 2012

    Blog Post, due on October 1

  • Which topics and theorems do you think are the most important out of those we have studied?
  • I feel like the methods of proving things are the most important. Contradiction, Induction, Direct Proof, etc.

  • What kinds of questions do you expect to see on the exam?
  • I'm expecting lots of proofs and maybe a couple definitions. I feel like it would be unfair to ask us to prove something outside the range of what we've done (number theory, set theory etc) just because there are little tricks that you have to know to be able to work the proof.


  • What do you need to work on understanding better before the exam? Come up with a mathematical question you would like to see answered or a problem you would like to see worked out.

  • Definitely mathematical induction. I feel like we talked about it and didn't really work any problems. So while I don't have any specific problems or anything, I would like to hear about what we need to know about it for the exam.

    Thursday, September 27, 2012

    7.1-7.3, due on September 28

    I don't have much intelligent to say about these sections. 7.1 was hard for me because I didn't really follow its importance; it was almost like a history lesson, or a long, long definition of the difference between a theorem and a conjecture. Section 7.2 wasn't really anything new, just a bunch of simple statements with different quantifiers applied, all of which it seemed like we've covered before. Section 7.3 got a little more interesting.

    I enjoyed 7.3, especially the option of proving whether a statement is true or false. However, I realize that this could make proofs a lot tougher, since we don't necessarily know that the statement is true. It will require greater analysis before the proof to decide how to proceed.

    Tuesday, September 25, 2012

    5.4 - 5.5, due on September 26

    I usually use this space to write my questions and the things I didn't understand; I hope that helps and/or is what you're looking for. This whole idea of uniqueness: what kinds of properties can it entail? I assume that when we ask if there is a unique root on an interval, that means it is the only one in the interval? Would uniqueness also extend to parity? For instance, there is only one number contained in the set S that makes some function even.

    I find it neat that we're using the things that we learned about odd and even numbers and proofs in the principles of disproving existence statements. Is there ever an end of things to prove? I appreciate that the book added a different viewpoint at the end: every disproof is really just a proof of the negation.

    Sunday, September 23, 2012

    5.2-5.3, due on September 24

    The most difficult portion of this text was the actual explanation of contradiction. I understand all the examples and proofs, but when symbols are thrown in it gets hairy. The notation of -> Contradiction threw me for a loop, but as I look at it a second time it does make sense. Another question arises with Theorem 5.16. It states an assumption: "We may further assume that a/b has been expressed in lowest terms". I assume that we assumed this because of the nature of the proof. How would we know to do something like this? Is there a guideline?

    The prisoner problem was also interesting. Personally, I would just assume that the other two prisoners were dumb and wouldn't follow that logic to figure out that their dot was red, but the point was clear. The review was likewise interesting. However, we have also read about induction (ahead of the book), so I wonder how that compares to the others. It would be nice to see another table like that with induction in it.


    Thursday, September 20, 2012

    4.5-4.6 and 6.1, due on September 21

    The Principle of Mathematical Induction was tough. It made sense, but many of the proofs are a little tough to follow. I hope we can review this in class, especially Theorem 6.2. I think they're proving it by contradiction; however, it states that if the theorem is false, then conditions 1 and 2 are satisfied (the first case is true and the implication is true), but there are still positive integers for which P(n) is false. If the initial conditions are satisfied when assuming the conclusion is false, you don't have a proof; one of the two conditions needs to be false. Also, the concept of well-ordering could be explained a little better. I'm not sure how being well-ordered leads to the concept of induction.

    The first two sections, 4.5 and 4.6, were very basic but useful. Those proofs were short and clever, but nothing really new. Induction is a neat principle, one that I've also had a little experience with. But the proofs are super long! Also mathematically intensive! Good thing I like math. I also liked the trick for computing sums super quickly, with the little story thrown in there too. I always wondered how people did those big numbers so fast. It's just neat that we can prove stuff like that and then use it!
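Just to see the quick-sum trick in action, here's a little Python sanity check of the formula 1 + 2 + ... + n = n(n+1)/2 (my own check, not from the book):

```python
# Gauss's trick: pairing 1..n with n..1 gives n pairs that each
# sum to n+1, so the total is n*(n+1)/2.
for n in (1, 10, 100, 1000):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print(100 * 101 // 2)  # prints: 5050
```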

    Tuesday, September 18, 2012

    4.3-4.4, due on September 19

    These sections were very straightforward, and there wasn't a lot that was too difficult to understand. We basically took the techniques that we had been applying in integer-related proofs and applied them to the real numbers and sets. The one thing that did stump me for a minute was the proof of Theorem 4.17 involving absolute values. I was concerned about the use of the inequality in the proof, but then I realized that we pulled it from the previous page in the statement of the proof. Another thing that did come up was stating the contrapositive and the use of "without loss of generality". The book states that stating the contrapositive isn't necessary; however, in class you instructed us that we might want to do it. Regarding loss of generality, the book just stated WLOG and then assumed one of the possibilities, while you told us in class that we might want to write something like "the proof follows the same steps". Are we going to get points knocked off if we don't, even though the book says it's not necessary?

    I thoroughly love set theory. It's pretty familiar to me; in linear algebra and the IMPACT boot camp we did a lot of proofs involving different sorts of sets. What's different now is that I actually understand it! It's almost weird to me, but I enjoyed these proofs; they were pretty cool and also elegant.

    Additional Questions:

    • How long have you spent on the homework assignments? Did lecture and the reading prepare you for them?
    Homework assignments don't really take me that long. I usually get through all of them in under an hour. Yes, the reading and lecture have been super helpful in preparing me. I feel like there's usually one problem that I don't quite understand, but if there is, I ask about it at the start of class.
    • What has contributed most to your learning in this class thus far?
    The best has probably been the reading. The blog posts force me to actually read before class, and if there's something that I don't fully understand, we go over the reading in lecture.
    • What do you think would help you learn more effectively or make the class better for you? (This can be feedback for me, or goals for yourself.)
    I'm enjoying the class and understanding most of it. I have, however, heard horror stories about Math 290 tests. We haven't really talked much about what will be expected of us on the test (as in, during lecture, something like "you will be required to prove things similar to this"). Perhaps this helps students not to single one thing out, but it would be nice to know!

    Sunday, September 16, 2012

    4.1-4.2, due on September 17

    I think the toughest part about this section is getting used to the vernacular and notation. For instance, modulus was a bit confusing, since it was introduced and then thrown around everywhere. Something that I don't quite understand is Result 4.12 in the book. I understand how the result was obtained, but if we need to prove one statement or another, do we really need to prove both to prove the result by contrapositive? Logically it would seem like we wouldn't need to.

    Something that was neat was the use of cases combined with the contrapositive, as well as the fact that any integer can be classified by its remainder mod 2 or mod 3. This proved useful when we proved a bunch of little properties of products and sums of numbers. All of these properties seem so simple, but it's neat to be able to prove them.
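Since these little divisibility facts are so easy to brute-force, I like sanity-checking them in Python before proving them by cases (the particular result below is my own example, not one from the book):

```python
# A typical "cases mod 3" style result: n**3 - n is divisible by 3
# for every integer n, because n is congruent to 0, 1, or 2 mod 3.
assert all((n**3 - n) % 3 == 0 for n in range(-100, 101))
print("holds for every n in [-100, 100]")
```

Of course the check is only evidence, not a proof; the proof still goes through the three cases.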

    Thursday, September 13, 2012

    3.4-3.5, due on September 14

    One thing about this section that was tough was the proof of Theorem 3.16. It is a biconditional, so you have to prove it backwards and forwards. However, it seemed like the book only proved it forwards. It took me a little while to wrap my head around it; then I realized that it was a proof by contrapositive (it didn't say so), and it was still a proof by cases.

    Proof by cases was pretty neat. It really makes sense logically to examine every case so that we can draw a general conclusion that applies for every case. As well was the introduction of "without loss of generality". I'm a fan of saving time. 

    Sunday, September 9, 2012

    3.1-3.3, due on September 12

    1. First of all, this was a really neat section. It's nice to get into proofs. One thing that was a little hard to follow was some of the even and odd number proofs. At first I failed to see how (for instance in Result 3.5) the statement "Since -5x-2 is an integer, -5n-3 is an odd integer" proved that -5n-3 was an odd integer. Something along these lines was present in many proofs. Upon closer inspection I realized that we simply needed to show that -5n-3 could be written in the generic form 2y+1 to be an odd integer, and what I was missing was that any product or sum of integers remains an integer.
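A quick numeric check of the algebra in Result 3.5 helped it click for me (assuming, as in the book's proof, that the hypothesis is that n is even, so n = 2x):

```python
# If n = 2x, then -5n - 3 = -10x - 3 = 2(-5x - 2) + 1,
# which is exactly the generic odd form 2y + 1 with y = -5x - 2.
for x in range(-25, 26):
    n = 2 * x
    assert -5 * n - 3 == 2 * (-5 * x - 2) + 1
print("matches the odd form 2y + 1 for every x tested")
```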

    2. I've had some experience with proofs, and this section is neat because it explains the theory of how to prove something. For instance, proof by contrapositive: I never understood why we would use it, just that you prove pretty much the opposite and that proves the original. However, now with logical equivalence I can see why, and what's more, I'm convinced that this is an actual proof. Another cool thing along the same lines is that I now understand what a trivial proof is and why it is in fact trivial. Likewise, this also stems from the logic and the truth tables.

    Pages 5-12 of Chapter 0, due on September 10

    1. One thing I would like clarification on is when we can use the symbols "for all" or "there exists". The text states that they should only be used when discussing logic. Does this mean only when we're setting up a problem or in the body of the proof or what? As well, do we need to state the meaning of these symbols if we haven't used them before?

    2. It was neat learning about a "display". I realized immediately that the text uses this very effectively and conveys the idea simply to the reader. This chapter is very good to read just before we use LaTeX to compose our homework, because it'll be fresh in our minds while learning. The same goes for the majority of everything in this section! It'll be important to learn the LaTeX commands. I also really enjoy the use of "we" in a mathematical context. It helps me not to feel dumb as I read really complicated math.

    Thursday, September 6, 2012

    2.9-2.11, due on September 7

    1. Most of the things in this chapter are self-explanatory; they're just an extension of logic. The characterization threw me a little for a loop. The example it gives in the book is the statement: a triangle T is equilateral if and only if T has three equal sides. It states that this is not a characterization but a definition. I'm not sure what the difference is. If we had defined an equilateral triangle as a triangle with three equal angles, then would this statement be a characterization?

    2. This semester I'm also in a programming class for engineers, and the logic that we have been learning is exactly the same as the logic used in writing code. This is super neat to me, because all of the logic, the not statements, etc. are suddenly actually useful to me instead of just for proving higher math (which I'm not planning on taking).

    Saturday, September 1, 2012

    2.5-2.8, due on September 5

    1. The toughest part of this reading is honestly the symbols. A lot of symbols were introduced in the last section, and now in this section they're everywhere! Another thing that is tough for me is, once again, the practical application of some of the things that we learned. For instance, Example 2.16 once again refers to the teacher-giving-the-A example. I don't understand the logical equivalence (with this example) of P => Q and (~P) v Q. ~P would be "you don't earn an A on the final exam" and Q would be "you receive an A for the final grade", but I don't quite understand how the two are logically equivalent.
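What finally helped me with this equivalence was just checking all four truth assignments directly. Here the truth table of P => Q is written out as data and compared against (not P) or Q (my own sketch):

```python
# The truth table of P => Q, given directly as (P, Q) -> value,
# compared row by row against the formula (not P) or Q.
IMPLIES = {(True, True): True, (True, False): False,
           (False, True): True, (False, False): True}
for (p, q), val in IMPLIES.items():
    assert val == ((not p) or q)
print("P => Q matches (not P) or Q on all four rows")
```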

    2. Probably the neatest thing in this section is the idea of logical equivalence. I like how the book identifies how it would be useful. Ofttimes it is difficult to show directly that a very abstract statement is true, so it is super cool that we can show it is logically equivalent to a much easier statement, prove that one is true, and thus prove the first!

    Thursday, August 30, 2012

    2.1-2.4, due on August 31

    1. The most difficult part of this material for me was implication. I understand why T => T = T, T => F = F, and even F => F = T, but not why F => T = T! The example in the book talks about how the student didn't get an A on his test but still got the A, and the teacher didn't lie. I understand that the teacher didn't lie, but the first and second statements no longer have any relation in my mind. The fact that he didn't get an A on his test does not lead to him getting an A in the class.
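One way I've heard it put: P => Q is the teacher's promise, and a promise is only broken in the single row T => F. Printing the table (using the standard encoding (not P) or Q; the framing is my own, not the book's) shows the lone False row:

```python
# The only row where P => Q fails is P true, Q false: the promise
# "if you get an A on the final, you get an A in the class" is only
# broken when the final A happens but the course A doesn't.
for p in (True, False):
    for q in (True, False):
        print(p, q, (not p) or q)
```

In the F => T row the hypothesis never happened, so the promise was never tested and counts as kept.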

    2. This reading was fairly basic and straightforward, which I enjoy. Though I'm not entirely sure why or for what purpose, it was neat that this reading seemed to set forth a very ordered method of presenting things. For example, with statements: a statement is a statement whether it's true or false, but whatever you assert must actually be a statement. I've had some experience with higher math, and proofs require this. You can't prove or disprove a question!

    Tuesday, August 28, 2012

    1.1-1.6, due on August 29

    1. The most difficult part of this reading for me was index sets. I have read over the section a few times and still have trouble understanding where all the sets in the index set come from. The book states that it is simply a set used as a mechanism for indexing the sets we want to consider. So each element of the index set simply labels one of the sets? The examples show some fun things that can be done with index sets, especially with set operations, but I'm not entirely sure how they are to be used.
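The way I finally pictured an index set is as a lookup from labels to sets, something like a Python dict (just my own mental model, not the book's notation):

```python
# Index set I = {1, 2, 3}; each i in I labels a set S_i.
# A union over the index set collects every element of every S_i.
S = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}}
union = set().union(*S.values())
print(sorted(union))  # prints: [1, 2, 3, 4]
```

So yes: an element of the index set isn't itself the set, it's the label that picks one out.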

    2. For me one of the coolest things was the Cartesian product of sets. Not that it is terribly relevant to my field, but it was cool to me that any line in the plane can be written as a subset of the Cartesian product R x R.
    As well, set partitions were neat because they helped me visualize how the real numbers relate to the rational and irrational numbers, for example.
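For the Cartesian product, itertools gives a finite stand-in for R x R that shows the "all ordered pairs" idea (a small example of my own):

```python
from itertools import product

# A x B is the set of all ordered pairs (a, b) with a in A and b in B;
# a point (x, y) in the plane is exactly an element of R x R.
A, B = [0, 1], ['x', 'y']
print(list(product(A, B)))  # prints: [(0, 'x'), (0, 'y'), (1, 'x'), (1, 'y')]
```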

    Introduction, due on August 29

    What is your year in school and major?
    -Junior, Mechanical Engineering

    Which calculus-or-above math courses have you taken?
    -Math 113 - Wayne Barrett,
    -Math 343 - Jeremy West,
    -Math 334 - Mark Meilstrup,
    -Math 214 - Lawrence Fearnley,
    -Math 513R - Jeff Humphreys, Jeremy West

    Why are you taking this class?
    -Although I'm an engineer, I love math! I decided to take the standard math courses and go for the math minor instead of just taking math for engineers. Math 290 is the last course that I need to complete for my minor.

    Tell me about the math professor or teacher you have had who was the most and/or least effective. What did he do that worked so well/poorly?
    -The classes I took from Jeremy West were by far the ones that I enjoyed the most. Part of the reason was that he simply taught them well. He also spent however much time was needed on a particular concept for students to understand it. However, what truly made him stand apart was the passion he had for math. When he completed a proof with us, he would get so excited, even if it was a simple one!
    -Probably the least effective teacher I had was Lawrence Fearnley. It was a while ago, but from what I remember, he was very difficult to understand due to both a slight accent and the volume of his voice. As well, he didn't communicate the ideas very well, preferring to speak rather than write and do examples.

    Write something interesting or unique about yourself.
    -I always break things in pairs. For instance, I first broke both of my arms, then several years later both of my ankles. I go all out!

    If you are unable to come to my scheduled office hours, what times would work for you?
    -I have a class MWF at 9. However, I am available from 10 o'clock until 1 o'clock.