Thursday, 13 October 2016
THE EARLY HISTORY OF COMPUTERS
This section introduces the reader to the early history of computer hardware and software. Its purpose is to describe the enormous changes that occurred in the early days of the computer industry, in order to provide context for the discussions that follow. This section does not describe events up to the present day. More recent developments (including the growth of the Internet) are discussed in the sections that follow, and in later chapters, where modern technological developments often present new legal issues.
Following the invention of the abacus approximately 5,000 years ago, the field of computing machines did not develop significantly until the seventeenth century. Leonardo da Vinci (1452-1519) sketched some designs for mechanical adding machines, and Blaise Pascal (1623-1662) invented and built the “Pascaline,” a sophisticated mechanical device for counting. Although not commercially successful because of its cost and delicate construction, the Pascaline’s counting-wheel design served as the basis for most mechanical calculators until the 1960s. At the turn of the nineteenth century, Joseph-Marie Jacquard (1752-1834) introduced a new loom technology that used punched cards to control the movement of needles, thread, and fabric, creating distinctive patterns through a binary mechanical automation technology. In the mid-nineteenth century, Charles Babbage envisioned mechanical devices (the Difference Engine and the Analytical Engine) to perform arithmetic operations, but his designs, involving thousands of gears, proved impractical. His collaborator Ada Augusta Lovelace proposed the use of punched cards to automate the operation of such devices.
Toward the end of the nineteenth century, a U.S. Census Bureau agent named Herman Hollerith developed a punched-card tabulating machine to automate the census. Drawing on the use of “punch photography” by railroads (to encode passengers’ hair and eye color on tickets), Hollerith proposed encoding the census data for each person on a separate card that could be tabulated mechanically. After developing this technology for the Census Bureau, he formed the Tabulating Machine Company in 1896 to serve the growing demand for office machinery, such as typewriters, record-keeping systems, and adding machines. The company grew through the expansion of its business and mergers with other office supply companies, and in 1924 Thomas J. Watson, the company’s general manager, changed the company’s name to International Business Machines Corporation (IBM). By the late 1920s, IBM was the fourth largest office machine supplier in the world, behind Remington-Rand, National Cash Register (NCR), and Burroughs Adding Machine Company.
IBM made numerous improvements to tabulating technology during the 1920s and 1930s and eventually developed a machine that could compare cards, a significant innovation that enabled machines to perform simple logic (if-then) operations.
Tuesday, 11 October 2016
Measuring Computing Power
For physical machines, we can compare the power of different machines by measuring the amount of mechanical work they can perform within a given amount of time. This power can be captured with units like horsepower and watt. Physical power is not a very useful measure of computing power, though, since the amount of computing achieved for the same amount of energy varies greatly. Energy is consumed when a computer operates, but consuming energy is not the purpose of using a computer.
Two properties that measure the power of a computing machine are:
1. How much information can it process?
2. How fast can it process it?
We defer considering the second property until Part II, but consider the first question here.
Informally, we use information to mean knowledge. But to understand information quantitatively, as something we can measure, we need a more precise way to think about information.
The way computer scientists measure information is based on how what is known changes as a result of obtaining the information. The primary unit of information is a bit. One bit of information halves the amount of uncertainty. It is equivalent to answering a “yes” or “no” question, where either answer is equally likely beforehand. Before learning the answer, there were two possibilities; after learning the answer, there is one. We call a question with two possible answers a binary question. Since a bit can have two possible values, we often represent the values as 0 and 1. For example, suppose we perform a fair coin toss but do not reveal the result. Half of the time, the coin will land “heads”, and the other half of the time the coin will land “tails”.
Without knowing any more information, our chances of guessing the correct answer are 1 in 2. One bit of information would be enough to convey either “heads” or “tails”; we can use 0 to represent “heads” and 1 to represent “tails”. So, the amount of information in a coin toss is one bit. Similarly, one bit can distinguish between the values 0 and 1.
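As a minimal illustration in Python (not part of the original text), a single bit is enough to record the outcome of a toss:

import random

# One bit: 0 represents "heads", 1 represents "tails".
bit = random.getrandbits(1)
outcome = "heads" if bit == 0 else "tails"
print(bit, outcome)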
How many bits of information are there in the outcome of tossing a six-sided die? There are six equally likely possible outcomes, so without any more information we have a one in six chance of guessing the correct value. One bit is not enough to identify the actual number, since one bit can only distinguish between two values. We could use five binary questions, asking in turn whether the value is 1, 2, 3, 4, or 5 (after five “no” answers, the value must be 6).
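Here is a rough Python sketch of this naive strategy; the helper name is illustrative, not from the text, and the average it computes matches the figure discussed next:

# Naive strategy: ask "Is the value 1?", "Is the value 2?", ..., "Is the value 5?"
# in order. Identifying value v takes v questions, except that value 6 is known
# after the fifth "no", so it also takes 5 questions.
def questions_needed(v):
    return min(v, 5)

# Average over the six equally likely outcomes: (1+2+3+4+5+5)/6 = 10/3.
average = sum(questions_needed(v) for v in range(1, 7)) / 6
print(average)  # 3.333..., i.e., 3 1/3 questions on average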
This is quite inefficient, though, since we need up to five questions to identify the value (and on average, expect to need 3 1/3 questions). Can we identify the value with fewer than 5 questions?
Our goal is to identify questions where the “yes” and “no” answers are equally likely; that way, each answer provides the most information possible. This is not the case if we start with, “Is the value 6?”, since that answer is expected to be “yes” only one time in six. Instead, we should start with a question like, “Is the value at least 4?”. Here, we expect the answer to be “yes” half of the time, and the “yes” and “no” answers are equally likely. If the answer is “yes”, we know the result is 4, 5, or 6. With two more bits, we can distinguish between these three values (note that two bits is actually enough to distinguish among four different values, so some information is wasted here). Similarly, if the answer to the first question is “no”, we know the result is 1, 2, or 3. We need two more bits to distinguish which of the three values it is. Thus, with three bits, we can distinguish all six possible outcomes.
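A small Python sketch of this three-question strategy (the names and structure are our own illustration, not the book’s):

def identify_roll(secret):
    # Identify a die roll (1-6) using at most three binary questions.
    # Each call to ask() is one yes/no question about the hidden value.
    asked = 0
    def ask(predicate):
        nonlocal asked
        asked += 1
        return predicate(secret)

    low = 4 if ask(lambda v: v >= 4) else 1   # "Is the value at least 4?"
    if ask(lambda v: v == low + 2):           # e.g. "Is the value 6?"
        value = low + 2
    elif ask(lambda v: v == low + 1):         # e.g. "Is the value 5?"
        value = low + 1
    else:
        value = low
    return value, asked

for v in range(1, 7):
    print(v, identify_roll(v))  # every value identified with at most 3 questions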
Three bits can convey more information than just six possible outcomes, however. In the binary question tree, there are some questions where the answer is not equally likely to be “yes” and “no” (for example, we expect the answer to “Is the value 3?” to be “yes” only one out of three times). Hence, we are not obtaining a full bit of information with each question. Each bit doubles the number of possibilities we can distinguish, so with three bits we can distinguish between 2 × 2 × 2 = 8 possibilities. In general, with n bits, we can distinguish between 2^n possibilities. Conversely, distinguishing among k possible values requires log_2 k bits. The logarithm is defined such that if a = b^c then log_b a = c. Since each bit has two possibilities, we use the logarithm base 2 to determine the number of bits needed to distinguish among a set of distinct possibilities. For our six-sided die, log_2 6 ≈ 2.58, so we need approximately 2.58 binary questions. But questions are discrete: we can’t ask 0.58 of a question, so we need to use three binary questions.
Trees. Figure 1.1 depicts a structure of binary questions for distinguishing among eight values. We call this structure a binary tree. We will see many useful applications of tree-like structures in this book. Computer scientists draw trees upside down. The root is the top of the tree, and the leaves are the numbers at the bottom (0, 1, 2, . . ., 7). There is a unique path from the root of the tree to each leaf. Thus, we can describe each of the eight possible values using the answers to the questions down the tree. For example, if the answers are “No”, “No”, and “No”, we reach the leaf 0; if the answers are “Yes”, “No”, “Yes”, we reach the leaf 5. Since there are no more than two possible answers for each node, we call this a binary tree.
We can describe any non-negative integer using bits in this way, by just adding additional levels to the tree. For example, if we wanted to distinguish between 16 possible numbers, we would add a new question, “Is it >= 8?”, to the top of the tree. If the answer is “No”, we use the tree in Figure 1.1 to distinguish numbers between 0 and 7. If the answer is “Yes”, we use a tree similar to the one in Figure 1.1, but add 8 to each of the numbers in the questions and the leaves.
The depth of a tree is the length of the longest path from the root to any leaf. The example tree has depth three. A binary tree of depth d can distinguish up to 2^d different values.
Figure 1.1. Using three bits to distinguish eight possible values.
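In Python, this bit count (and hence the minimum depth of the question tree) can be computed directly; a minimal sketch:

import math

def bits_needed(k):
    # Minimum number of binary questions (tree depth) needed to
    # distinguish among k equally likely values: ceil(log2(k)).
    return math.ceil(math.log2(k))

print(math.log2(6))    # 2.584..., the information content of a die roll
print(bits_needed(6))  # 3 questions are needed in practice
print(bits_needed(8))  # 3: a depth-3 binary tree distinguishes 2**3 = 8 values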
Units of Information. One byte is defined as eight bits. Hence, one byte of information corresponds to eight binary questions, and can distinguish among 2^8 (256) different values. For larger amounts of information, we use metric prefixes, but instead of scaling by factors of 1000 they scale by factors of 2^10 (1024). Hence, one kilobyte is 1024 bytes; one megabyte is 2^20 (approximately one million) bytes; one gigabyte is 2^30 (approximately one billion) bytes; and one terabyte is 2^40 (approximately one trillion) bytes.
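A short Python sketch of these unit definitions, using the power-of-two scaling described above:

# Each prefix scales by 2**10 = 1024 rather than 1000.
BITS_PER_BYTE = 8
KILOBYTE = 2**10   # 1024 bytes
MEGABYTE = 2**20   # approximately one million bytes
GIGABYTE = 2**30   # approximately one billion bytes
TERABYTE = 2**40   # approximately one trillion bytes

print(2**BITS_PER_BYTE)  # 256 values distinguishable by one byte
print(TERABYTE)          # 1099511627776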
Computer Technologies: Processes, Procedures, and Computers
Computer science is the study of information processes. A process is a sequence of steps. Each step changes the state of the world in some small way, and the result of all the steps produces some goal state. For example, baking a cake, mailing a letter, and planting a tree are all processes. Because they involve physical things like sugar and dirt, however, they are not pure information processes. Computer science focuses on processes that involve abstract information rather than physical things.
The boundaries between the physical world and pure information processes,
however, are often fuzzy. Real computers operate in the physical world: they
obtain input through physical means (e.g., a user pressing a key on a keyboard
that produces an electrical impulse), and produce physical outputs (e.g., an image
displayed on a screen). By focusing on abstract information, instead of the
physical ways of representing and manipulating information, we simplify computation
to its essence to better enable understanding and reasoning.
A procedure is a description of a process. A simple process can be described just by listing the steps. The list of steps is the procedure; the act of following them is the process. A procedure that can be followed without any thought is called a mechanical procedure. An algorithm is a mechanical procedure that is guaranteed to eventually finish.
“A mathematician is a machine for turning coffee into theorems.” (attributed to Paul Erdős)

For example, here is a procedure for making coffee, adapted from the actual directions that come with a major coffeemaker:
1. Lift and open the coffeemaker lid.
2. Place a basket-type filter into the filter basket.
3. Add the desired amount of coffee and shake to level the coffee.
4. Fill the decanter with cold, fresh water to the desired capacity.
5. Pour the water into the water reservoir.
6. Close the lid.
7. Place the empty decanter on the warming plate.
8. Press the ON button.
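To make the distinction concrete, here is a toy Python sketch (our illustration, not part of the original directions): the list is the procedure, and running the loop is the process.

# The procedure: a description of the process, as a list of steps.
coffee_procedure = [
    "Lift and open the coffeemaker lid.",
    "Place a basket-type filter into the filter basket.",
    "Add the desired amount of coffee and shake to level the coffee.",
    "Fill the decanter with cold, fresh water to the desired capacity.",
    "Pour the water into the water reservoir.",
    "Close the lid.",
    "Place the empty decanter on the warming plate.",
    "Press the ON button.",
]

# The process: the act of following the steps, one by one.
for number, step in enumerate(coffee_procedure, start=1):
    print(number, step)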
Describing processes by just listing steps like this has many limitations. First,
natural languages are very imprecise and ambiguous. Following the steps correctly
requires knowing lots of unstated assumptions. For example, step three
assumes the operator understands the difference between coffee grounds and
finished coffee, and can infer that this use of “coffee” refers to coffee grounds
since the end goal of this process is to make drinkable coffee. Other steps assume
the coffeemaker is plugged in and sitting on a flat surface.
One could, of course, add lots more details to our procedure and make the language
more precise than this. Even when a lot of effort is put into writing precisely
and clearly, however, natural languages such as English are inherently ambiguous.
This is why the United States tax code is 3.4 million words long, but
lawyers can still spend years arguing over what it really means.
Another problem with this way of describing a procedure is that the size of the
description is proportional to the number of steps in the process. This is fine
for simple processes that can be executed by humans in a reasonable amount
of time, but the processes we want to execute on computers involve trillions of
steps. This means we need more efficient ways to describe them than just listing
each step one-by-one.
To program computers, we need tools that allow us to describe processes precisely
and succinctly. Since the procedures are carried out by a machine, every
step needs to be described; we cannot rely on the operator having “common
sense” (for example, to know how to fill the coffeemaker with water without explaining
that water comes from a faucet, and how to turn the faucet on). Instead,
we need mechanical procedures that can be followed without any thinking.
A computer is a machine that can:

1. Accept input. Input could be entered by a human typing at a keyboard, received over a network, or provided automatically by sensors attached to the computer.
2. Execute a mechanical procedure, that is, a procedure where each step can be executed without any thought.
3. Produce output. Output could be data displayed to a human, but it could also be anything that affects the world outside the computer, such as electrical signals that control how a device operates.

“A computer terminal is not some clunky old television with a typewriter in front of it. It is an interface where the mind and body can connect with the universe and move bits of it about.” (Douglas Adams)
Computers exist in a wide range of forms, and thousands of computers are hidden in devices we use every day but don’t think of as computers, such as cars, phones, TVs, microwave ovens, and access cards. Our primary focus is on universal computers, which are computers that can perform all possible mechanical computations on discrete inputs, except for practical limits on space and time. The next section explains what discrete inputs means; Chapters 6 and 12 explore more deeply what it means for a computer to be universal.
What is Computing?
The first million years of hominid history produced tools to amplify, and later
mechanize, our physical abilities to enable us to move faster, reach higher, and
hit harder. We have developed tools that amplify physical force by the trillions
and increase the speeds at which we can travel by the thousands.
Tools that amplify intellectual abilities are much rarer. While some animals have
developed tools to amplify their physical abilities, only humans have developed
tools to substantially amplify our intellectual abilities and it is those advances
that have enabled humans to dominate the planet. The first key intellect amplifier
was language. Language provided the ability to transmit our thoughts to
others, as well as to use our own minds more effectively.
The next key intellect
amplifier was writing, which enabled the storage and transmission of thoughts
over time and distance.
Computing is the ultimate mental amplifier—computers can mechanize any intellectual
activity we can imagine. Automatic computing radically changes how
humans solve problems, and even the kinds of problems we can imagine solving.
Computing has changed the world more than any other invention of the
past hundred years, and has come to pervade nearly all human endeavors. Yet,
we are just at the beginning of the computing revolution; today’s computing offers
just a glimpse of the potential impact of computing.
There are two reasons why everyone should study computing:

1. Nearly all of the most exciting and important technologies, arts, and sciences of today and tomorrow are driven by computing.
2. Understanding computing illuminates deep insights and questions into the nature of our minds, our culture, and our universe.

“It may be true that you have to be able to read in order to fill out forms at the DMV, but that’s not why we teach children to read. We teach them to read for the higher purpose of allowing them access to beautiful and meaningful ideas.” (Paul Lockhart, Lockhart’s Lament)
Anyone who has submitted a query to Google, watched Toy Story, had LASIK
eye surgery, used a smartphone, seen a Cirque du Soleil show, shopped with a
credit card, or microwaved a pizza should be convinced of the first reason. None
of these would be possible without the tremendous advances in computing over
the past half century.
Although this book will touch on some exciting applications of computing,
our primary focus is on the second reason, which may seem more surprising.
Computing changes how we think about problems and how we understand the
world. The goal of this book is to teach you that new way of thinking.