Advanced Technical English - Exam
COURSE MATERIALS WEEK III.
Listening 7
http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-046JFall-2005/VideoLectures/detail/embed22.htm
Answer the questions.
- What are the two topics of the lecture?
- Why is it a popular topic at present?
- What are multicore processors?
- What is the professor's area of research?
- Is there one general model for the parallel space?
- What kind of algorithm does the lecturer describe?
- What are spawn and sync?
Listen to the recording again. There are some mistakes in the script. Try to
correct them.
Transcript - Lecture 22
We only have five more lectures left, and what Professor Demaine and I have decided to do is give two series of lectures on sort of advanced topics. So, today and Wednesday we're going to talk about parallel algorithms, algorithms where you have more than one processor whacking away on your problem. And this is a very hot topic right now because all of the chip manufacturers are now producing so-called multicore processors where you have more than one processor per chip. So, knowing something about that is fine. The second topic we're going to cover is going to be caching, and how you device algorithms for systems with cache.
Right now, we've sort of programmed everything as if it were just a single level of memory, and for some problems that's not a completely realistic model. You'd like to have some model for how the caching hierarchy works, and how you can take advantage of that. And there's been a lot of research in that area as well. So, both of those in fact turn out to be my area of research. So, this is actually fun for me.
Actually, most of it's fun anyway. So, today we'll talk about parallel algorithms. And the particular topic, it turns out that there are lots of models for parallel algorithms, and for parallelism. And it's one of the reasons that, while for serial algorithms, most people sort of have this basic model that we've been using. It's sometimes called a random process machine model, which is what we've been using to analyze things, whereas in the parallel space, there's just a huge number of models, and there is no general agreement on what is the best model because there are different machines that are made with various configurations, etc. and people haven't, sort of, agreed on, even how parallel machines should be organized.
So, we're going to deal with a particular model, which goes under the rubric of dynamic multithreading, which is appropriate for the multicore machines that are now being built for shared memory programming. It's not appropriate for what's called distributed memory programs particularly because the processors are able to approach things. And for those, you need more involved models. And so, let me start just by giving an example of how one would write something. I'm going to give you a program for calculating the nth Fibonacci number in this model. This is actually a really bad algorithm that I'm going to give you since it's going to be the exponential time algorithm, whereas we know from week one or two that you can calculate the nth Fibonacci number in how much time?
Log n time. So, this is two exponentials off what you should be able to get, OK, two exponentials off. OK, so here's the code. OK, so this is basically the pseudocode we would write. And let me just explain a little bit about, we have a couple of key words here we haven't seen before: in particular, spawn and sync. OK, so spawn, this basically says that the subroutine that you're calling, you use it as a keyword before a subroutine, that it can execute at the same time as its parent. So, here, when we say x equals spawn fib of n minus one, we immediately go on to the next statement.
And now, while we're executing fib of n minus one, we can also be executing, now, this statement which itself will spawn something off. OK, and we continue, and then we hit the sync statement. And, what sync says is, wait until all children are ready. OK, so it says once you get to this point, you've got to wait until everything here has completed before you execute the x plus y because otherwise you're going to try to execute the calculation of x plus y without having computed it yet. OK, so that's the basic structure
Transcript - Lecture 22 (corrected version)
We only have four more lectures left, and what Professor Demaine and I have decided to do is give two series of lectures on sort of advanced topics. So, today and Wednesday we're going to talk about parallel algorithms, algorithms where you have more than one processor whacking away on your problem. And this is a very hot topic right now because all of the chip manufacturers are now producing so-called multicore processors where you have more than one processor per chip. So, knowing something about that is good. The second topic we're going to cover is going to be caching, and how you design algorithms for systems with cache.
Right now, we've sort of programmed everything as if it were just a single level of memory, and for some problems that's not an entirely realistic model. You'd like to have some model for how the caching hierarchy works, and how you can take advantage of that. And there's been a lot of research in that area as well. So, both of those actually turn out to be my area of research. So, this is actually fun for me.
Actually, most of it's fun anyway. So, today we'll talk about parallel algorithms. And the particular topic, it turns out that there are lots of models for parallel algorithms, and for parallelism. And it's one of the reasons that, whereas for serial algorithms, most people sort of have this basic model that we've been using. It's sometimes called a random access machine model, which is what we've been using to analyze things, whereas in the parallel space, there's just a huge number of models, and there is no general agreement on what is the best model because there are different machines that are made with different configurations, etc. and people haven't, sort of, agreed on, even how parallel machines should be organized.
So, we're going to deal with a particular model, which goes under the rubric of dynamic multithreading, which is appropriate for the multicore machines that are now being built for shared memory programming. It's not appropriate for what's called distributed memory programs particularly because the processors are able to access things. And for those, you need more involved models. And so, let me start just by giving an example of how one would write something. I'm going to give you a program for calculating the nth Fibonacci number in this model. This is actually a really bad algorithm that I'm going to give you because it's going to be the exponential time algorithm, whereas we know from week one or two that you can calculate the nth Fibonacci number in how much time?
Log n time. So, this is two exponentials off what you should be able to get, OK, two exponentials off. OK, so here's the code. OK, so this is essentially the pseudocode we would write. And let me just explain a little bit about, we have a couple of key words here we haven't seen before: in particular, spawn and sync. OK, so spawn, this basically says that the subroutine that you're calling, you use it as a keyword before a subroutine, that it can execute at the same time as its parent. So, here, when we say x equals spawn fib of n minus one, we immediately go on to the next statement.
And now, while we're executing fib of n minus one, we can also be executing, now, this statement which itself will spawn something off. OK, and we continue, and then we hit the sync statement. And, what sync says is, wait until all children are done. OK, so it says once you get to this point, you've got to wait until everything here has completed before you execute the x plus y because otherwise you're going to try to execute the calculation of x plus y without having computed it yet. OK, so that's the basic structure
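From the transcript, the pseudocode being described is along the lines of: fib(n): if n < 2 return n; x = spawn fib(n - 1); y = spawn fib(n - 2); sync; return x + y. As a minimal sketch only (not the notation used in the lecture), the same spawn/sync behaviour can be imitated in Python: starting a thread stands in for spawn, and joining it stands in for sync. Because of Python's global interpreter lock this will not actually run faster than the serial version, so it illustrates the semantics rather than the performance.

import threading

def fib(n):
    """Naive exponential-time Fibonacci, written in the spawn/sync style."""
    if n < 2:
        return n
    results = {}
    # "spawn": each child may execute at the same time as its parent
    t1 = threading.Thread(target=lambda: results.update(x=fib(n - 1)))
    t2 = threading.Thread(target=lambda: results.update(y=fib(n - 2)))
    t1.start()
    t2.start()
    # "sync": wait until all children are done before using x and y
    t1.join()
    t2.join()
    return results["x"] + results["y"]

print(fib(10))  # prints 55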
STRUCTURE
(materials adapted from mathematics encyclopedia articles on http://en.citizendium.org)
1. What is a "set" in mathematics?
2. Some special sets. Fill in the missing information.
Some sets that are ubiquitous in the mathematical literature have special symbols:
- ∅, the .................. set, sometimes written {}.
- ℕ, the set of .................. written N
- ℤ, the set of .................. written Z
- ℚ, the set of .................. written Q
- ℙ, the set of .................. written P
- ℂ, the set of .................. written C
- ℝ, the set of .................. written R
- 𝔽, the set of .................. written F
Among other such well-known sets are the Fibonacci numbers, even numbers, odd numbers, quaternions, octonions and the Hamiltonian integers.
SET THEORY
6. Read the text and decide whether the statements are true or false. Correct the false
statements.
a) The concept of infinity was important for set theory.
b) G. Cantor demonstrated how large one infinite object can be.
c) The proof seems to be very simple.
d) Some great mathematicians of his day were not enthusiastic about it.
e) "Naive set theory" belongs to the field of logic.
f) The duality between sets and predicates was found to be false.
g) The paradoxes were caused by axiomatising.
h) The paradoxical behaviour was due to formal axioms.
i) These axioms were replaced with more intuitive ones.
j) It was a crisis for maths because sound reasoning gave false results.
k) The new set theory played an important role for the rest of maths.
l) It is possible to hope that there are more paradoxes in the axioms.
m) The duality of set and predicate cannot be completely solved.
History of set theory
Set theory grew out of the need for a better understanding of the concept of infinity. Georg Cantor (1845-1918), "the father of set theory", used it to demonstrate for the first time that two infinite objects could have different sizes. The proof today seems straightforward, but how great a leap it represented is made evident by the resistance and hostility it met from some of the greatest mathematicians of the day.
Set theory began by reasoning about collections of objects and operations on those collections, the operations of what would now be called "naive set theory". It occupied a position on the boundary of mathematics and logic, partly because of an apparent, though ultimately illusory, duality between sets and predicates. However, it soon ran into serious paradoxes. These were banished only by axiomatising the theory - expressing the basis of reasoning about sets in the form of formal axioms. The axioms giving rise to the paradoxical behaviour could then be identified; they were eliminated and replaced with something less powerful, but more complicated and certainly less intuitive. Different versions of set theory achieved this in many different ways.
The episode represented something of a crisis for mathematics, since it appeared for a while that sound mathematical reasoning could lead to contradictory conclusions. The security of the foundations of the new set theory that arose in response was used as a bulwark to secure those of the rest of mathematics. Almost all other mathematical objects can be shown to be modelled by set-theoretical constructs. Since it seems reasonable to hope that no further paradoxes lurk in the axioms of modern set theory, other mathematical disciplines are thus protected from logical contradictions. Thus, though the set/predicate duality could not be entirely sustained, set theory remained firmly entwined with mathematical logic.
Examples of sets and operations on sets.
7. Read the next part and fill in the missing information.
Some unproblematic examples from naive set theory will ……… the concept clearer. These examples will be …….. throughout this article:
- A = the set of the numbers 1, 2 and 3.
- B = the set of primary light ……… -- red, green and blue.
- C = the ………. set (the set with no members).
- D = the set of all books in the British Library.
- E = the set of all positive ……….., 1, 2, 3, 4, and so on.
Note that the last of these sets is infinite.
A set is the collection of its elements considered as a single, abstract entity. Note that this is different from the elements themselves, and may have different ………... For example, the elements of D are flammable (they are books), but D itself is not flammable, since abstract objects cannot be burnt.
8. What set operations do you know?
9. Read the text on set operations, then have a look at the text in exercise 7 and think of examples of
a) union.................................................................
b) intersection.....................................................
c) set difference................................................
d) complement.................................................
Set operations
Suppose X and Y are sets. Various operations allow us to build new sets from them.
Union
The union of X and Y, written X∪Y, contains all the elements in X and all those in Y. Thus A∪B = {1, 2, 3, red, green, blue}. As A is a subset of E, the set A∪E is just E.
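As a quick illustration (not part of the original text), the union above can be reproduced with Python's built-in set type; representing the colours as strings is an assumption made only for this sketch:

A = {1, 2, 3}
B = {"red", "green", "blue"}
print(A | B)  # the union A∪B: {1, 2, 3, 'red', 'green', 'blue'} (order may vary)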
Intersection
The intersection of X and Y, X∩Y, contains all the elements that are common to both X and Y. Thus {1,2,3,red,green,blue} ∩ {2,4,6,8,10} = {2}.
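The intersection example can be checked the same way, using Python's & operator (again only a sketch, with the colours written as strings):

print({1, 2, 3, "red", "green", "blue"} & {2, 4, 6, 8, 10})  # prints {2}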
Set difference
The difference X-Y or X\Y consists of all those elements in X which are not also in Y. For example, E-A contains all integers greater than 3. A-B is just A; red, green and blue were not elements of A, so no difference is made by excluding them.
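The two differences from the paragraph above, sketched in the same way; since E is infinite, a small finite slice of it is used here purely for illustration:

A = {1, 2, 3}
B = {"red", "green", "blue"}
E_slice = set(range(1, 11))  # 1..10, standing in for the infinite set E
print(E_slice - A)           # {4, 5, 6, 7, 8, 9, 10} -- the integers greater than 3 (up to 10)
print(A - B)                 # {1, 2, 3} -- A is unchanged; red, green and blue were never in A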
Complement and universal set
The universal set (if it exists), usually denoted U, is a set of which everything conceivable is a member. In pure set theory, normally sets are the only objects considered (unlike here, where we have also considered numbers, colours and books, for example); in this case U would be the set of all sets.
In the presence of a universal set we can define X′, the complement of X, to be U-X. X′ contains everything in the universe apart from the elements of X. We could alternatively have defined it as
X′ = {x | ~ (x∈X)}
and U as the complement of the empty set.
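Python, like most programming languages, has no universal set, so a sketch of the complement has to fix a universe explicitly; the small universe U below is an arbitrary assumption made only for this illustration:

U = set(range(10))  # a toy "universal set" 0..9, assumed just for this sketch
X = {1, 2, 3}
print(U - X)        # X' relative to U: {0, 4, 5, 6, 7, 8, 9}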
WORD STUDY: There are many adjectives in the text. Try to find the corresponding
nouns.
structural abstract important fundamental infinite evident
apparent illusory paradoxical intuitive powerful reasonable