Monday, October 29, 2012

Blog Post, due on October 29


  • Which topics and theorems do you think are the most important out of those we have studied?
For this section I would say functions and the various properties a function can have (bijective, one-to-one, onto).
  • What kinds of questions do you expect to see on the exam?
I expect to see some proofs regarding these topics, and probably some true/false questions, although I'm not a fan of those.
  • What do you need to work on understanding better before the exam? Come up with a mathematical question you would like to see answered or a problem you would like to see worked out.
I need help with the more general principle of induction: specifically, when we need to use which principle of induction, and how to select a good starting number. Is there a method, or do you just work through a problem to find out which one will work? (I took a stab at the starting-number half myself below.)
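My own stab at the starting-number question (so, my working, not the book's): test small cases until the statement starts holding, and induct from there. For example, for n! > 2^n, the inequality fails at n = 1, 2, 3 and first holds at n = 4 (24 > 16), so the induction starts at 4:

```latex
% inductive step from n_0 = 4, using the fact that n + 1 >= 5 > 2:
(n + 1)! = (n + 1) \cdot n! > (n + 1) \cdot 2^n \ge 2 \cdot 2^n = 2^{n+1}.
```

As for which principle: my rule of thumb so far is that if the inductive step only leans on the case right before it, ordinary induction does the job; if it needs arbitrary earlier cases, that's when the strong principle earns its keep. Still hoping we confirm this in class.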

Thursday, October 25, 2012

9.6-9.7, due on October 26

This section is straightforward, and I have a fairly good grasp of the material. The most difficult part was finding the inverse of a permutation; just the way it's phrased in the book is a little confusing. Other than that, I only got bogged down a bit amid the proofs about inverse functions, but those are still very easy, probably because we've talked about inverse functions since high school.

The neatest part for me was Example 9.12 and finding the inverse of that function. The solution was very elegant and done in a way I wouldn't have thought of: instead of solving y = f(x) for x, it set up f(f^-1(x)) = x, plugged f^-1 into f, and solved for f^-1. It's so easy, but it was different from how I would have approached it.
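I don't have Example 9.12 in front of me, so here's the same move on a function I made up, f(x) = 3x - 5:

```latex
f\big(f^{-1}(x)\big) = 3\,f^{-1}(x) - 5 = x
\;\Longrightarrow\; f^{-1}(x) = \frac{x + 5}{3}.
```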

Tuesday, October 23, 2012

9.5, due on October 24

The most difficult parts of this chapter were the examples at the end of the section. For example, Example 9.10 threw me for a loop when it came to showing that the composition was defined. Then I realized that it made sense because the input into the next function was a subset of that function's defined domain.

Even though it was confusing at first, that is my favorite part about this section. The functions are like playing Frogger: you need to jump from function to function just like lily pads, and if you don't go in order it doesn't work. You can't skip from a domain to any range you want. But in the example referred to above, our jump, or function, landed us in the ballpark we needed to be in, allowing us to use the next function. Even though we didn't hit every part of that set, the next function was prepared to deal with any element of the larger set B, and so it worked for the subset of B.
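To convince myself, I threw together a toy version in Python. The sets and maps here are my own made-up stand-ins, not the book's Example 9.10:

```python
# the "jump" works because range(f) sits inside dom(g)
A = {1, 2, 3}
B = {1, 2, 3, 4, 5, 6}           # domain of g
f = {1: 2, 2: 4, 3: 6}           # f: A -> B, range(f) = {2, 4, 6}
g = {b: b * b for b in B}        # g: B -> C

# composition is defined: every output of f is a legal input to g
assert set(f.values()) <= set(g.keys())

g_of_f = {a: g[f[a]] for a in A}  # (g o f): A -> C
print(g_of_f)                     # {1: 4, 2: 16, 3: 36}
```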

Sunday, October 21, 2012

9.3-9.4, due on October 22

One-to-one, onto, and bijective functions make sense. For me, the waters get murky when we throw in the real numbers. Because the reals are infinite sets, it gets confusing to me how a function can be one-to-one but not onto, or vice versa. It would seem like the domain and codomain in each instance are infinite, so for the same reasons a function fails to be onto, it should also fail to be one-to-one.
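Two stock examples I worked out for myself that helped the asymmetry click (both as functions from R to R):

```latex
% e^x is one-to-one (strictly increasing) but not onto: it never reaches 0 or below
f(x) = e^x, \qquad f(x) > 0 \ \text{for all } x \in \mathbb{R};
% x^3 - x is onto (continuous, tends to -infinity and +infinity) but not one-to-one:
g(x) = x^3 - x, \qquad g(-1) = g(0) = g(1) = 0.
```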

Something about proving that something is bijective is just satisfying. No matter which way I go between domain and codomain, I know what's going on: each element in the domain is unique, and the same goes for the codomain. That brings up a practical application question, though: would this ever truly happen in a set of data? I suppose if you were measuring an object whose velocity is constantly increasing, you would have distinct time and velocity data. Is there anything cool that we can use bijective functions for?
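On finite sets, at least, checking these properties is completely mechanical. Here's a little checker I sketched; the sets and the function are my own examples:

```python
def is_one_to_one(f, domain):
    images = [f[a] for a in domain]
    return len(images) == len(set(images))   # no two inputs share an output

def is_onto(f, domain, codomain):
    return {f[a] for a in domain} == set(codomain)  # every target is hit

def is_bijective(f, domain, codomain):
    return is_one_to_one(f, domain) and is_onto(f, domain, codomain)

f = {1: 'a', 2: 'b', 3: 'c'}
print(is_bijective(f, {1, 2, 3}, {'a', 'b', 'c'}))    # True
print(is_onto({1: 'a', 2: 'a'}, {1, 2}, {'a', 'b'}))  # False: 'b' is never hit
```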

Friday, October 19, 2012

9.1-9.2, due on October 19

In Section 9.1 (page 198), why does it say that f3 is not a function from A to B because f3 is not equal to A? It seems like a function would never be equal to its sets, because the function contains ordered pairs of elements from two different sets. So wouldn't f3 always be unequal to A? (Rereading it, my guess is the book actually says dom(f3) is not equal to A, that is, some element of A never gets mapped, which would make more sense.)
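Here's a little Python sketch of the ordered-pair definition as I understand it; the f3 below is my own stand-in, not the one on page 198:

```python
def is_function_from(pairs, A, B):
    firsts = [a for (a, _) in pairs]
    # every element of A must appear exactly once as a first coordinate,
    # and every second coordinate must land in B
    return sorted(firsts) == sorted(A) and all(b in B for (_, b) in pairs)

A, B = {1, 2, 3}, {'x', 'y'}
f3 = {(1, 'x'), (2, 'y')}             # 3 is never mapped, so dom(f3) != A
print(is_function_from(f3, A, B))     # False
print(is_function_from({(1, 'x'), (2, 'y'), (3, 'x')}, A, B))  # True
```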

The little history connection was neat; the old definition of a function still holds today. The picture of the mapping in the book was very effective in helping me picture what is actually going on. I'm an engineer and prefer things that are more concrete, I suppose. It was also neat how the functions we've grown up with make sense under this definition of a function.

Tuesday, October 16, 2012

8.6, due on October 17

I got it! As I was writing this, it clicked. I couldn't for the life of me figure out how to reduce the equivalence classes down to representatives 0 through 5. But then I thought about the remainder. Okay, well, now that that's settled. Everything else makes sense, but that was the toughest part for me.
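For my own future reference, the remainder trick in Z_6 (divide by 6 and keep the remainder):

```latex
[4] + [5] = [9] = [3] \quad (9 = 1 \cdot 6 + 3), \qquad
[4] \cdot [5] = [20] = [2] \quad (20 = 3 \cdot 6 + 2).
```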

I think that the idea of being 'closed' is kind of fun. I started thinking about what a set would have to contain to be closed under multiplication and addition, and realized that it could get big really fast. While that could be tedious, theoretically it's interesting that if we multiply or add two numbers in the set, the result is still in the set. Actually, would we be able to represent that in a finite set? My first instinct was no, since the product of two integers can always be multiplied by yet another integer to escape the set. But that's exactly what this section gets around: reducing by remainders keeps everything inside a finite set (and tiny sets like {0, 1} are closed under multiplication too). Neat.
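A quick brute-force check of that last point, using the residues mod 6 from this section:

```python
# {0,...,5} is NOT closed under ordinary multiplication, but it IS closed
# once products and sums are reduced by their remainder mod 6
S = set(range(6))
print(all(a * b in S for a in S for b in S))         # False: 2 * 3 = 6 escapes
print(all((a * b) % 6 in S for a in S for b in S))   # True
print(all((a + b) % 6 in S for a in S for b in S))   # True
```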

Sunday, October 14, 2012

8.5, due on October 15

After a few reads, most of this section made sense. However, I'm still not accustomed to equivalence classes, and they're used extensively in this section. For Result 8.7, the alternate proof involving 2a + b is still throwing me off. I understand that we want to add the two equations together, but I don't understand how we substitute in the 3x and the 3y. Hopefully we go over that in class. (My attempt at reconstructing the step is below.)
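Here is my attempt at reconstructing the step, assuming the relation is a R b when 3 divides 2a + b; if I'm misremembering the relation, the idea should still carry over. The 3x and 3y are just names for the multiples of 3 that the hypothesis hands us:

```latex
2a + b = 3x \ \text{ and } \ 2b + c = 3y
\;\Longrightarrow\; (2a + b) + (2b + c) = 3x + 3y
\;\Longrightarrow\; 2a + c = 3x + 3y - 3b = 3(x + y - b),
```

so 3 divides 2a + c. (Strictly speaking I think this two-equation version is the transitive part; symmetry works the same way with a single 3x.)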

I enjoy when we incorporate things learned or used previously, and here we talked about how equivalence classes relate to properties of integers, namely being able to classify any integer into one of three equivalence classes for a certain relation. Also, the proofs about equivalence relations are neat; they're elegant and smart, which is fun.

Thursday, October 11, 2012

Circles, Rivers, and Polygon Packing: Mathematical Methods in Origami by Robert Lang

My biggest problem with the lecture was that I didn't know enough. The last time I did origami I think I was in middle school. Lang just kind of jumped into the math of origami, almost as if we were already well versed in the field. While that's fine, and I'm sure there were people there who understood a lot more than I did, it was tough to follow.

However, the applications of origami surprised me. I had heard of other engineers solving problems with origami but didn't know how. While I still don't know exactly how, he showed one way that stresses in a structure could be determined with origami. Another neat thing was that he pulled in a little of what we talked about in class with the four colors for a map; for origami, though, you must be able to color the fold pattern with only two colors. Also neat was that writing a fraction in binary encodes the folds you need to make to mark that fraction on the paper (see the sketch below). Overall the lecture was really neat, the application was cool, and the art was also crazy intense. Who knew origami had so much math involved? Okay, well, you probably did, but not me!
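The binary-expansion part is standard math, though how each digit maps to an actual fold is just my fuzzy memory of the talk; here's a sketch of the digit computation:

```python
def binary_digits(p, q, max_digits=8):
    # binary expansion of the fraction p/q (assumes 0 < p < q)
    digits = []
    for _ in range(max_digits):
        p *= 2
        digits.append(p // q)   # next binary digit of p/q
        p %= q
        if p == 0:
            break
    return digits

print(binary_digits(5, 8))   # [1, 0, 1], since 5/8 = 0.101 in binary
```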

8.3-8.4, due on October 12

For whatever reason I had the hardest time understanding why [1] = {1, 3, 6}, etc. Once I figured it out I felt really unintelligent. The same went for the relation on page 179 for [a]. We had been talking about [a] before, but for the second time we returned to the relation a = b. Having forgotten this, I was confused as to why the relation we had just talked about, {x in Z : x R a}, was equal to {x in Z : x = a}.

Something neat about equivalence classes was being able to predict the equality of [1], [3], and [6]. Since 1, 3, and 6 are all related to each other, [1] = [3] = [6]. The same went for the other integers as well. Also, the use of the properties we had just learned about was neat too; it provided for clever and elegant proofs, which are always enjoyable.
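A small Python sanity check of that prediction. The relation below is my own stand-in, built so its classes come out to {1, 3, 6}, {2, 4}, {5} like the post describes; it is not the book's example:

```python
A = {1, 2, 3, 4, 5, 6}
blocks = [{1, 3, 6}, {2, 4}, {5}]
R = {(x, y) for blk in blocks for x in blk for y in blk}

def eq_class(a):
    return {x for x in A if (x, a) in R}   # [a] = {x in A : x R a}

print(eq_class(1), eq_class(3), eq_class(6))      # {1, 3, 6} three times
print(eq_class(1) == eq_class(3) == eq_class(6))  # True: related elements share a class
```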

8.1-8.2, due on October 10

Just from the reading, understanding relations was very difficult. Even though I've read some other things too, relations don't exactly make sense. The relation should contain ordered pairs for the examples we're talking about, but it seems to me that normal relations would be along the lines of y = 3x or some other equation. Yet a relation seems to be defined much more loosely than that. We're given seemingly random sets of ordered pairs between two sets, but to me there doesn't really seem to be a pattern in the relation. Also, the general lack of examples in this section of the book is kind of disappointing.

Although I'm not sure how they're used yet, I think properties of relations are kind of fun. They're a little more concrete than a lot of the things we've talked about up till now: the relation either contains a given ordered pair or it doesn't, so it's easy to see which properties a relation has.
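That concreteness means the properties can be checked by brute force on a finite relation. A sketch with my own small example:

```python
def is_reflexive(R, A):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
print(is_reflexive(R, A), is_symmetric(R), is_transitive(R))  # True True True
```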

Sunday, October 7, 2012

6.3 - 6.4, due on October 8

A few things were very difficult to understand in this section. A lot of the proofs are very difficult and involve some unintuitive algebraic manipulation and whooplah. The second is that I still don't see the need for the Strong Principle of Mathematical Induction as opposed to the normal principle of induction. The book states that it's common for sequences of numbers, but I still fail to see exactly why you need to assume the statement holds for all the earlier cases in order to prove it generally.
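After writing this I went looking for an example where the strong hypothesis is genuinely needed, and the standard one seems to be prime factorization: every integer n >= 2 is a product of primes.

```latex
% P(n): n is a product of primes. Base: P(2), since 2 is prime.
% Step: assume P(2), ..., P(n). If n + 1 is prime, we're done; otherwise
n + 1 = a \cdot b \quad \text{with } 2 \le a, b \le n,
% and P(a), P(b) are among the assumed cases, so n + 1 is a product of primes.
```

Ordinary induction would only hand me P(n), which says nothing about the factors a and b; that seems to be the difference.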

These sections were fairly unenjoyable. For me the proofs were very difficult to follow and required numerous readings to grasp. Now that I understand them fairly well, I appreciate them, the minimum counterexample proof more than the strong principle of induction. It seems like a neat way to prove something, but also like you could prove the same thing with normal induction. I assume it would be helpful in proofs where contradiction is more convenient than a more direct approach.

Friday, October 5, 2012

6.2, due on October 5

It seems to me that the more general principle of mathematical induction and the initial statement are so close that the book truly didn't need to distinguish between the two. What I think I'm getting is that, for the more generalized statement, your base case doesn't have to be 1. So why didn't we just learn that when induction was introduced in the first place? It's a fairly simple concept. The toughest part of this section was the algebraic manipulation. I feel like the proofs are much more technical because we are trying to symbolically prove a general case each time, which leads to a lot of generally tough mathematics! (A small example of a shifted base case is below.)
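The shifted-base-case example I tried on my own: 2^n > n^2. It actually holds at n = 1, breaks at n = 2, 3, 4 (at n = 4 it's 16 = 16), and holds for every n >= 5, so the induction starts at 5:

```latex
2^{n+1} = 2 \cdot 2^n > 2n^2 \ge n^2 + 2n + 1 = (n+1)^2 \quad \text{for } n \ge 5,
% using 2n^2 - (n+1)^2 = n^2 - 2n - 1 >= 0, which holds once n >= 3
```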

For what it's worth, I did see the benefit of being able to use a different base case (unless I'm off base on that). As well, the Well-Ordering Principle came into play, which was something I was wondering about from the last section. It seems like with induction we can prove basically anything dealing with indexed sets or infinite series. So that's cool.

Tuesday, October 2, 2012

6.1, due on October 2

So apparently I included comments on this reading in the section where I was supposed to talk about 5.1. So much now makes sense! I was really confused about why you had asked us to read 6.1 and then we never talked about it. Induction is a pretty basic principle. The part that I couldn't understand was the usefulness of being well-ordered; it doesn't seem to connect in my mind with induction. If we could talk about it in class, that would be great.

Induction is a very cool concept. It's pretty much a proof by cases on steroids: if the base case is true, and every case after that follows from the one before it, then the statement is true for all of them! It was also neat the way Gauss found really large sums. That's a neat aspect of induction.
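The pairing trick as I understood it (the 1-to-100 sum is the classic story; induction is then what proves the general formula):

```latex
1 + 2 + \cdots + 100
= \underbrace{(1 + 100) + (2 + 99) + \cdots + (50 + 51)}_{50 \text{ pairs of } 101}
= 5050,
\qquad 1 + 2 + \cdots + n = \frac{n(n+1)}{2}.
```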