Wednesday, June 21, 2006

BANDWIDTH

Most hosting companies offer a variety of bandwidth options in their plans. So what exactly is bandwidth as it relates to web hosting? Put simply, bandwidth is the amount of traffic allowed to flow between your web site and the rest of the internet. The amount of bandwidth a hosting company can provide is determined by its network connections, both internal to its data center and external to the public internet.


Network Connectivity

The internet, in the simplest of terms, is a group of millions of computers connected by networks. These connections within the internet can be large or small depending upon the cabling and equipment used at a particular internet location. It is the size of each network connection that determines how much bandwidth is available. For example, if you use a DSL connection to the internet, you have about 1.54 megabits per second (Mbps) of bandwidth. Bandwidth is therefore measured in bits (a single 0 or 1) per second. Eight bits make a byte, and bytes form the words, text, and other information transferred between your computer and the internet.

If you have a DSL connection to the internet, you have dedicated bandwidth between your computer and your internet provider. But your internet provider may have thousands of DSL connections to its location. All of these connections aggregate at your internet provider, which then has its own dedicated connection to the internet (or multiple connections) that is much larger than your single connection. It must have enough bandwidth to serve your computing needs as well as those of all its other customers. So while you have a 1.54 Mbps connection to your internet provider, your internet provider may have a 255 Mbps connection to the internet so it can accommodate you and about 164 other users at full speed (255 / 1.54 ≈ 165).
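The arithmetic in that last sentence can be sketched in a few lines of Python. The figures are the article's illustrative numbers, not real provider capacities:

```python
# Illustrative figures from the example above, not real provider numbers.
dsl_mbps = 1.54        # one customer's DSL connection
provider_mbps = 255.0  # the provider's upstream connection

# How many customers could use their full DSL speed at the same time?
simultaneous_users = int(provider_mbps // dsl_mbps)
print(simultaneous_users)  # 165
```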


Traffic

A very simple analogy to use to understand bandwidth and traffic is to think of highways and cars. Bandwidth is the number of lanes on the highway and traffic is the number of cars on the highway. If you are the only car on a highway, you can travel very quickly. If you are stuck in the middle of rush hour, you may travel very slowly since all of the lanes are being used up.

Traffic is simply the number of bits transferred over network connections. It is easiest to understand traffic using examples. One gigabyte (GB) is 2 to the 30th power (1,073,741,824) bytes, and one gigabyte is equal to 1,024 megabytes (MB). To put this in perspective, it takes one byte to store one character. Imagine 100 file cabinets in a building, each holding 1,000 folders. Each folder has 100 papers, and each paper contains 100 characters; a gigabyte is roughly all the characters in the building. An MP3 song is about 4MB, the same song in WAV format is about 40MB, and a full-length movie can be 800MB to 1,000MB (1,000MB is roughly 1GB).
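The file-cabinet analogy is easy to check with a quick multiplication; this sketch just reproduces the numbers above:

```python
# Multiply out the file-cabinet analogy: cabinets x folders x papers x characters.
cabinets = 100
folders_per_cabinet = 1000
papers_per_folder = 100
chars_per_paper = 100

total_chars = cabinets * folders_per_cabinet * papers_per_folder * chars_per_paper
print(total_chars)  # 1000000000 characters (bytes), i.e. about one gigabyte
print(2 ** 30)      # 1073741824 bytes in a binary gigabyte, for comparison
```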

If you were to transfer this MP3 song from a web site to your computer, you would create 4MB of traffic between the web site you are downloading from and your computer. Depending upon the network connection between the web site and the internet, the transfer may occur very quickly, or it could take time if other people are also downloading files at the same time. If, for example, the web site you download from has a 10MB connection to the internet, and you are the only person accessing that web site to download your MP3, your 4MB file will be the only traffic on that web site. However, if three people are all downloading that same MP3 at the same time, 12MB (3 x 4MB) of traffic has been created. Because the host in this example has only 10MB of bandwidth, someone will have to wait. The network equipment at the hosting company will cycle through each person downloading the file and transfer a small portion at a time so each person's file transfer can take place, but the transfer for everyone downloading the file will be slower. If 100 people all came to the site and downloaded the MP3 at the same time, the transfers would be extremely slow. If the host wanted to decrease the time it took to download files simultaneously, it could increase the bandwidth of its internet connection (at a cost, due to upgrading equipment).
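A rough sketch of the sharing arithmetic in this example, assuming the host's "10MB connection" means 10 megabits per second and that the link is divided equally among simultaneous downloaders (a simplification of how real network equipment behaves):

```python
link_mbps = 10.0          # assumed: the host's connection, in megabits per second
file_megabytes = 4.0      # the MP3 from the example
file_megabits = file_megabytes * 8  # 8 bits per byte

def transfer_seconds(downloaders):
    # Assume the link is shared equally among all simultaneous downloaders.
    per_user_mbps = link_mbps / downloaders
    return file_megabits / per_user_mbps

print(transfer_seconds(1))    # 3.2 seconds with the link to yourself
print(transfer_seconds(3))    # about 9.6 seconds when three people share it
print(transfer_seconds(100))  # around 320 seconds with a hundred downloaders
```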


Hosting Bandwidth

In the example above, we discussed traffic in terms of downloading an MP3 file. However, each time you visit a web site you are creating traffic, because in order to view a web page on your computer, the page is first downloaded to your computer (between the web site and you) and then displayed by your browser software (Internet Explorer, Netscape, etc.). The page itself is simply a file that creates traffic just like the MP3 file in the example above (although a web page is usually much smaller than a music file).

A web page may be small or large depending upon the amount of text and the number and quality of images integrated within the page. For example, the home page for CNN.com is about 200KB (200 kilobytes = 200,000 bytes = 1,600,000 bits). This is on the large side for a web page. In comparison, Yahoo's home page is about 70KB.


How Much Bandwidth Is Enough?

It depends (don't you hate that answer?). But in truth, it does. Since bandwidth is a significant determinant of hosting plan prices, you should take time to determine just how much is right for you. Almost all hosting plans measure bandwidth allowances per month, so you need to estimate the amount of bandwidth your site will require on a monthly basis.

If you do not intend to provide file download capability from your site, the formula for calculating bandwidth is fairly straightforward:

Average Daily Visitors x Average Page Views x Average Page Size x 31 x Fudge Factor

If you intend to allow people to download files from your site, your bandwidth calculation should be:

[(Average Daily Visitors x Average Page Views x Average Page Size) + (Average Daily File Downloads x Average File Size)] x 31 x Fudge Factor

Let us examine each item in the formula:

Average Daily Visitors - The number of people you expect to visit your site, on average, each day. Depending upon how you market your site, this number could be from 1 to 1,000,000.

Average Page Views - On average, the number of web pages you expect a person to view. If you have 50 web pages in your web site, an average person may only view 5 of those pages each time they visit.

Average Page Size - The average size of your web pages, in Kilobytes (KB). If you have already designed your site, you can calculate this directly.

Average Daily File Downloads - The number of downloads you expect to occur on your site. This is a function of the number of visitors and how many times a visitor downloads a file, on average, each day.

Average File Size - Average file size of files that are downloadable from your site. Similar to your web pages, if you already know which files can be downloaded, you can calculate this directly.

Fudge Factor - A number greater than 1. Using 1.5 would be safe, which assumes that your estimate is off by 50%. However, if you were very unsure, you could use 2 or 3 to ensure that your bandwidth requirements are more than met.

Usually, hosting plans offer bandwidth in terms of Gigabytes (GB) per month. This is why our formula takes daily averages and multiplies them by 31.
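As a sketch, the formula above translates directly into a short function. The sample figures below are invented for illustration; sizes are in kilobytes and the result is in decimal gigabytes per month:

```python
def monthly_bandwidth_gb(daily_visitors, page_views, page_size_kb,
                         daily_downloads=0, file_size_kb=0, fudge=1.5):
    # [(visitors x views x page size) + (downloads x file size)] x 31 x fudge
    daily_kb = (daily_visitors * page_views * page_size_kb
                + daily_downloads * file_size_kb)
    return daily_kb * 31 * fudge / 1_000_000  # kilobytes -> gigabytes

# A hypothetical site: 500 visitors a day, 5 pages per visit, 50KB pages,
# plus 20 downloads a day of a 2,000KB file.
print(monthly_bandwidth_gb(500, 5, 50))            # 5.8125 GB/month, pages only
print(monthly_bandwidth_gb(500, 5, 50, 20, 2000))  # 7.6725 GB/month with downloads
```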


Summary

Most personal or small business sites will not need more than 1GB of bandwidth per month. If your web site is composed of static web pages and you expect little traffic on a daily basis, go with a low bandwidth plan. If you go over the amount of bandwidth allocated in your plan, your hosting company could charge you overage fees, so if you think the traffic to your site will be significant, you may want to go through the calculations above to estimate the amount of bandwidth required in a hosting plan.

404 ERROR

Here is some trivia, but interesting nonetheless.

While surfing the Net, you often get the browser's standard "404 - Page Not Found" message. Have you ever wondered:

Why "404 - Page Not Found"?
Why not "808 - Page Not Found"?

Here's why...

The history of 404:

Before the beginning of time, when the Internet was still very much under the spell of bare Unix shells and Gopher, before SLIP or PPP became widely used, an ambitious group of young scientists at CERN (Switzerland) started working on what was to become the media revolution of the nineties: the World Wide Web, later known as WWW, or simply 'the Web'. Their aim was to create a database infrastructure that offered open access to data in various multimedia formats. The ultimate goal was clearly to create a protocol that would combine text and pictures and present them as one document, and allow linking to other such documents: hypertext.

Because these bright young minds were reluctant to reveal their progress (and setbacks) to the world, they started developing their protocol in a closed environment: CERN's internal network. Many hours were spent on what later became the worldwide standard for multimedia documents. Using the physical layout of CERN's network and buildings as a metaphor for the 'real world', they situated different functions of the protocol in different offices within CERN.

In an office on the fourth floor (room 404), they placed the World Wide Web's central database: any request for a file was routed to that office, where two or three people would manually locate the requested files and transfer them, over the network, to the person who made the request. When the database started to grow, and the people at CERN realized that they were able to retrieve documents other than their own research papers, not only did the number of requests grow, but so did the number of requests that could not be fulfilled, usually because the person requesting a file typed in the wrong name for it. Soon these faulty requests were answered with a standard message: "Room 404: file not found".

Later, when these processes were automated and people could directly query the database, the message IDs for error messages remained linked to the physical location where the process took place: "404: file not found". The room numbers remained in the error codes in the official release of HTTP (Hyper Text Transfer Protocol) when the Web left CERN to conquer the world, and are still displayed when a browser makes a faulty request to a Web server. In memory of the heroic boys and girls who worked deep into the night for all those months, in those small and hot offices at CERN, Room 404 is preserved as a 'place on the Web'. None of the other rooms are still used for the Web. Room 404 is the only true monument to the beginning of the Web, a tribute to a place in the past where the future was shaped.

ARTIFICIAL INTELLIGENCE

Basic Questions
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
Q. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
Q. Is intelligence a single thing so that one can ask a yes or no question ``Is this machine intelligent or not?''?
A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered ``somewhat intelligent''.
Q. Isn't AI about simulating human intelligence?
A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
Q. What about IQ? Do computer programs have IQs?
A. No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child's age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, ``digit span'' is trivial for even extremely limited computers.
However, some of the problems on IQ tests are useful challenges for AI.
Q. What about other comparisons between human and computer intelligence?
A. Arthur R. Jensen [Jen98], a leading researcher in human intelligence, suggests ``as a heuristic hypothesis'' that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to ``quantitative biochemical and physiological conditions''. I see these as speed, short term memory, and the ability to form accurate and retrievable long term memories.
Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse.
Computer programs have plenty of speed and memory, but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don't develop until they are teenagers may be in, and some abilities possessed by two-year-olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.
Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently.
Q. When did AI research start?
A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
Q. Does AI aim to put the human mind into the computer?
A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I'm not sure anyone is serious about imitating all of them.
Q. What is the Turing test?
A. Alan Turing's 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent. He argued that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent. This test would satisfy most people but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person), and the human would try to persuade the observer that it was human and the machine would try to fool the observer.
The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.
Daniel Dennett's book Brainchildren [Den98] has an excellent discussion of the Turing test and the various partial Turing tests that have been implemented, i.e. with restrictions on the observer's knowledge of AI and the subject matter of questioning. It turns out that some people are easily led into believing that a rather dumb program is intelligent.
Q. Does AI aim at human-level intelligence?
A. Yes. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.
Q. How far is AI from reaching human-level intelligence? When will it happen?
A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.
However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human level intelligence will be achieved.
Q. Are computers the right kind of machine to be made intelligent?
A. Computers can be programmed to simulate any kind of machine.
Many researchers invented non-computer machines, hoping that they would be intelligent in different ways than computer programs could be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent making computers faster and faster, another kind of machine would have to be very fast to perform better than a program on a computer simulating the machine.
Q. Are computers fast enough to be intelligent?
A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
Q. What about parallel machines?
A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.
Q. What about making a ``child machine'' that could improve by reading and by learning from experience?
A. This idea has been proposed many times, starting in the 1940s. Eventually, it will be made to work. However, AI programs haven't yet reached the level of being able to learn much of what a child learns from physical experience. Nor do present programs understand language well enough to learn much by reading.
Q. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
A. I think yes, but we aren't yet at a level of AI at which this process can begin.
Q. What about chess?
A. Alexander Kronrod, a Russian AI researcher, said ``Chess is the Drosophila of AI.'' He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.
Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
Q. What about Go?
A. The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.
Sooner or later, AI research will overcome this scandalous weakness.
Q. Don't some people say that AI is a bad idea?
A. The philosopher John Searle says that the idea of a non-biological machine being intelligent is incoherent. He proposes the Chinese room argument (www-formal.stanford.edu/jmc/chinese.html). The philosopher Hubert Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is obscene, anti-human and immoral. Various people have said that since artificial intelligence hasn't reached human level by now, it must be impossible. Still other people are disappointed that companies they invested in went bankrupt.
Q. Aren't computability theory and computational complexity the keys to AI? [Note to the layman and beginners in computer science: These are quite technical branches of mathematical logic and computer science, and the answer to the question has to be somewhat technical.]
A. No. These theories are relevant but don't address the fundamental problems of AI.
In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equation in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. Roger Penrose claims this. However, people can't guarantee to solve arbitrary problems in these domains either. See my Review of The Emperor's New Mind by Roger Penrose. More essays and reviews defending AI research are in [McC96a].
In the 1960s computer scientists, especially Steve Cook and Richard Karp, developed the theory of NP-complete problem domains. Problems in these domains are solvable, but seem to take time exponential in the size of the problem. Which sentences of propositional calculus are satisfiable is a basic example of an NP-complete problem domain. Humans often solve problems in NP-complete domains in times much shorter than is guaranteed by the general algorithms, but can't solve them quickly in general.
What is important for AI is to have algorithms as capable as people at solving problems. The identification of subdomains for which good algorithms exist is important, but a lot of AI problem solvers are not associated with readily identified subdomains.
The theory of the difficulty of general classes of problems is called computational complexity. So far this theory hasn't interacted with AI as much as might have been hoped. Success in problem solving by humans and by AI programs seems to rely on properties of problems and problem solving methods that neither the complexity researchers nor the AI community have been able to identify precisely.
Algorithmic complexity theory as developed by Solomonoff, Kolmogorov and Chaitin (independently of one another) is also relevant. It defines the complexity of a symbolic object as the length of the shortest program that will generate it. Proving that a candidate program is the shortest or close to the shortest is an unsolvable problem, but representing objects by short programs that generate them should sometimes be illuminating even when you can't prove that the program is the shortest.

Branches of AI
Q. What are the branches of AI?
A. Here's a list, but some branches are surely missing, because no one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.
logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96b] lists some of the concepts involved in logical AI. [Sha97] is an important text.
search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
pattern recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
inference
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonic increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
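A toy sketch of the bird example, in Python rather than logic, just to show the non-monotonic behaviour: adding a premise (penguin) withdraws a conclusion (flies). The rule set is invented for illustration.

```python
def can_fly(facts):
    # Default rule: a bird can fly unless contrary evidence blocks the conclusion.
    return "bird" in facts and "penguin" not in facts

print(can_fly({"bird"}))             # True  - concluded by default
print(can_fly({"bird", "penguin"}))  # False - the extra premise withdraws it
```

Note that in ordinary (monotonic) logic, adding a premise could never remove a conclusion.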
common sense knowledge and reasoning
This is the area in which AI is farthest from human level, in spite of the fact that it has been an active research area since the 1950s. There has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.
learning from experience
Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.
ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology begins in the 1990s.
heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion].
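A minimal sketch of a heuristic function guiding search: greedy best-first search on a small obstacle-free grid, always expanding the frontier node that Manhattan distance says is closest to the goal. The grid and heuristic are illustrative assumptions, not a standard benchmark.

```python
import heapq

def manhattan(node, goal):
    # Heuristic function: estimated distance from this node to the goal.
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

def greedy_search(start, goal, size=5):
    # Frontier ordered by heuristic value alone (greedy best-first search).
    frontier = [(manhattan(start, goal), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt))
    return None

print(greedy_search((0, 0), (2, 2)))  # a path of grid cells from (0, 0) to (2, 2)
```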
genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations. It is being developed by John Koza's group.
Applications of AI
Q. What are the applications of AI?
A. Here are some.
game playing
You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation--looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
speech recognition
In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information with a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
understanding natural language
Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
computer vision
The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
expert systems
A ``knowledge engineer'' interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.
heuristic classification
One of the most feasible kinds of expert system, given the present knowledge of AI, is to put some information into one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, and also about the item he is buying and the establishment from which he is buying it (e.g., whether there have been previous credit card frauds at this establishment).
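A toy sketch of that credit-card example: several sources of information feed a decision into one of a fixed set of categories. The rules and thresholds here are invented purely for illustration.

```python
def classify_purchase(good_payment_record, amount, merchant_fraud_reports):
    # Combine information about the cardholder, the purchase, and the merchant,
    # and place the purchase into one of a fixed set of categories.
    if merchant_fraud_reports > 3:
        return "reject"
    if not good_payment_record and amount > 500:
        return "refer"
    return "accept"

print(classify_purchase(True, 120, 0))   # accept
print(classify_purchase(False, 900, 1))  # refer
print(classify_purchase(True, 50, 7))    # reject
```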

More questions
Q. How is AI research done?
A. AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects.
There are two main lines of research. One is biological, based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology. The other is phenomenal, based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals. The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking.
Q. What are the relations between AI and philosophy?
A. AI has many relations with philosophy, especially modern analytic philosophy. Both study mind, and both study common sense. The best reference is [Tho03].
Q. What should I study before or while learning AI?
A. Study mathematics, especially mathematical logic. The more you learn about science in general the better. For the biological approaches to AI, study psychology and the physiology of the nervous system. Learn some programming languages--at least C, Lisp and Prolog. It is also a good idea to learn one basic machine language. Jobs are likely to depend on knowing the languages currently in fashion. In the late 1990s, these include C++ and Java.
Q. What is a good textbook on AI?
A. Artificial Intelligence by Stuart Russell and Peter Norvig (Prentice Hall) is the most commonly used textbook in 1997. The general views expressed there do not exactly correspond to those of this essay. Artificial Intelligence: A New Synthesis by Nils Nilsson (Morgan Kaufmann) may be easier to read. Some people prefer Computational Intelligence by David Poole, Alan Mackworth and Randy Goebel (Oxford, 1998).
Q. What organizations and publications are concerned with AI?
A. The American Association for Artificial Intelligence (AAAI), the European Coordinating Committee for Artificial Intelligence (ECCAI) and the Society for Artificial Intelligence and Simulation of Behaviour (AISB) are scientific societies concerned with AI research. The Association for Computing Machinery (ACM) has a special interest group on artificial intelligence, SIGART.

Friday, May 26, 2006

FOETRON

Sunday, March 05, 2006

CAPACITORS

FUNCTION
Capacitors store electric charge. They are used with resistors in timing circuits because it takes time for a capacitor to fill with charge. They are used to smooth varying DC supplies by acting as a reservoir of charge. They are also used in filter circuits because capacitors easily pass AC (changing) signals but they block DC (constant) signals.
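That timing behaviour follows the standard RC charging relation (general electronics background, not stated in the text above): a capacitor charging through a resistor reaches a fraction 1 - e^(-t/RC) of the supply voltage after time t, so after one time constant t = RC it is about 63% charged. A small sketch:

```javascript
// Fraction of the supply voltage a capacitor has reached after time t (seconds),
// charging through resistance R (ohms) with capacitance C (farads):
// v/Vs = 1 - e^(-t/(R*C))
function chargeFraction(t, R, C) {
  return 1 - Math.exp(-t / (R * C));
}
// After one time constant (t = R*C) the capacitor is ~63% charged;
// after five time constants it is over 99% charged.
```

This is why a resistor-capacitor pair makes a timer: picking R and C sets how long the "filling up" takes.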

CAPACITANCE

This is a measure of a capacitor's ability to store charge. A large capacitance means that more charge can be stored. Capacitance is measured in farads, symbol F. However, 1 F is very large, so prefixes are used to show the smaller values.
Three prefixes (multipliers) are used, µ (micro), n (nano) and p (pico):
µ means 10^-6 (a millionth), so 1000000 µF = 1 F
n means 10^-9 (a thousand-millionth), so 1000 nF = 1 µF
p means 10^-12 (a million-millionth), so 1000 pF = 1 nF
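As a quick sanity check, those conversions can be written out in code (the helper name is invented for illustration):

```javascript
// Convert a capacitance value with a prefix (µ, n or p) into farads.
const PREFIX = { "µ": 1e-6, "n": 1e-9, "p": 1e-12 };
function toFarads(value, prefix) {
  return value * PREFIX[prefix];
}
// e.g. 1000000 µF = 1 F, 1000 nF = 1 µF, 1000 pF = 1 nF
```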
Capacitor values can be very difficult to find because there are many types of capacitor with different labelling systems!
There are many types of capacitor but they can be split into two groups, polarised and unpolarised. Each group has its own circuit symbol.


TO READ MORE CLICK ON THE LINKS BELOW

Light Emitting Diodes (LEDs)

Function
LEDs emit light when an electric current passes through them.
Connecting and soldering
LEDs must be connected the correct way round; the diagram may be labelled a or + for anode and k or - for cathode (yes, it really is k, not c, for cathode!). The cathode is the short lead and there may be a slight flat on the body of round LEDs. If you can see inside the LED, the cathode is the larger electrode (but this is not an official identification method).
LEDs can be damaged by heat when soldering, but the risk is small unless you are very slow. No special precautions are needed for soldering most LEDs.

TO READ MORE CLICK ON THE LINKS

HISTORY OF UNIVERSE IN 200 WORDS

Quantum fluctuation. Inflation. Expansion. Strong nuclear interaction. Particle-antiparticle annihilation. Deuterium and helium production. Density perturbations. Recombination. Blackbody radiation. Local contraction. Cluster formation. Reionization? Violent relaxation. Virialization. Biased galaxy formation? Turbulent fragmentation. Contraction. Ionization. Compression. Opaque hydrogen. Massive star formation. Deuterium ignition. Hydrogen fusion. Hydrogen depletion. Core contraction. Envelope expansion. Helium fusion. Carbon, oxygen, and silicon fusion. Iron production. Implosion. Supernova explosion. Metals injection. Star formation. Supernova explosions. Star formation. Condensation. Planetesimal accretion. Planetary differentiation. Crust solidification. Volatile gas expulsion. Water condensation. Water dissociation. Ozone production. Ultraviolet absorption. Photosynthetic unicellular organisms. Oxidation. Mutation. Natural selection and evolution. Respiration. Cell differentiation. Sexual reproduction. Fossilization. Land exploration. Dinosaur extinction. Mammal expansion. Glaciation. Homo sapiens manifestation. Animal domestication. Food surplus production. Civilization! Innovation. Exploration. Religion. Warring nations. Empire creation and destruction. Exploration. Colonization. Taxation without representation. Revolution. Constitution. Election. Expansion. Industrialization. Rebellion. Emancipation Proclamation. Invention. Mass production. Urbanization. Immigration. World conflagration. League of Nations. Suffrage extension. Depression. World conflagration. Fission explosions. United Nations. Space exploration. Assassinations. Lunar excursions. Resignation. Computerization. World Trade Organization. Terrorism. Internet expansion. Reunification. Dissolution. World-Wide Web creation. Composition. Extrapolation?
I think it is an extremely good piece of writing. What do you think?

AJAX

First things first: I am not going to explain AJAX at the core or OS level. I am a beginner with this technology, so I will just try my best to explain how you can build your website on AJAX. You can find loads of material on the web about the theory, so I am not going into it in detail; you will feel more comfortable if I explain the implementation, right? But first, some basics.
AJAX: Asynchronous JavaScript and XML (try Wikipedia for more).
What difference did you feel between opening a Gmail account and a Yahoo mail account? (Don't worry, Yahoo is going AJAX very soon.) The difference is this: each time you open a new mail in Yahoo, the whole page refreshes, whereas in Gmail only part of the page changes its content. That is the benefit of AJAX: you don't need to refresh the whole page. If you want to change one part of a web page, you fetch just the data for that part over the network link, so you save the bandwidth of your link. Refreshing the whole page for a change in one part makes no sense, yet that is the way most websites work. Soooo, switch to AJAX. How? I will TRY to explain the method, the whole method, by which you can experience and learn the AJAX technology. The mantra behind this technology is JavaScript and XML; nothing here is new. JavaScript and XML have been known for years, and I don't know why this technology is only appearing now. Since I have worked with PHP, JavaScript, XML and HTML using the Apache server and the MySQL database server, I will explain this technology using all of these. Don't worry, I will explain from the basics, but you should have some basic knowledge of PHP, JavaScript and XML (with HTML, of course); the rest you can leave to me (and goooogle!). I will also explain how to set up the Apache and MySQL servers on your Windows machine. So if you want to learn with me, in the meantime you should try to learn the basics of PHP and JavaScript (one or two chapters of any book will suffice). AJAX works like this: your client page makes a request to the server using JavaScript. The JavaScript forms the request packet using XML and sends it to the server, where the server processes this XML packet and extracts the required information. The information is then processed by PHP, and the reply is sent back to the client machine as an XML packet.
The JavaScript on the client machine receives the XML packet (and sometimes HTML) and displays it in the client-side page. THAT'S IT. Next time I will explain how to set up your server, along with a small example. Be ready to build your website on AJAX. I invite those people who know more about this technology: I want to learn, and you too, so please share your suggestions and knowledge.
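To give a flavour of the client side of that round trip, here is a minimal sketch using the browser's XMLHttpRequest object. The endpoint get_part.php and the element ids are made up for illustration; swap in your own server-side script.

```javascript
// Build the request URL for one part of the page (get_part.php is a
// made-up PHP endpoint used here only for illustration).
function partUrl(part) {
  return "get_part.php?part=" + encodeURIComponent(part);
}

// Fetch fresh content for one element and swap it in, leaving the
// rest of the page untouched -- the core of the AJAX idea.
function updatePart(part) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", partUrl(part), true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById(part).innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
}
```

In a browser you would call, say, updatePart("inbox") from a click handler, and only the element with id "inbox" would be redrawn; the rest of the page never reloads.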

Sunday, December 25, 2005

AFTER BPO IT’S EPOs

A few years ago it was call centers that were outsourced to India; then came the technical and animation outsourcing phase. This year there has been rapid growth in a new form of outsourcing: education process outsourcing (EPO), or online tutoring.
INDIAN brains are recognized the world over. Today teachers in India are tutoring children across the globe in maths and science, thanks to the internet. With the rapid development of technology and online learning, tutoring companies have got a boost. Many are investing in technologies like multimedia chat rooms, voice over internet protocol and so on. The real power of the internet as an educational medium is not in its ability to cheaply broadcast canned messages to the masses, but in its ability to network students and teachers together.
HOW DOES EPO WORK?
Students across the globe and an Indian teacher log on to a website at a predefined time for a particular course. The technology used is IEC, which integrates web, video and voice in an IP-based software platform. While it is a one-on-one session for the student, the teacher usually attends to multiple students simultaneously on different links. A session is generally an hour in duration, providing sufficient time even to ask questions.

What is FlashGet?

FlashGet is specifically designed to address two of the biggest problems when downloading files: speed and management of downloaded files. If you've ever waited forever for your files to download from a slow connection, or been cut off midway through a download, or just can't keep track of your ever-growing downloads, FlashGet is for you. FlashGet can split downloaded files into sections, downloading each section simultaneously, for an increase in downloading speed of 100% to 500%. This, coupled with FlashGet's powerful and easy-to-use management features, helps you take control of your downloads like never before.
Speed
FlashGet can automatically split files into sections or splits, and download each split simultaneously. Multiple connections are opened to each file, and the result is the most efficient exploitation of the bandwidth available. Whatever your connection, FlashGet makes sure all of the bandwidth is utilized. Difficult, slow downloads that normally take ages are handled with ease. Download times are drastically reduced.
Management
FlashGet is capable of creating unlimited numbers of categories for your files. Download jobs can be placed in specifically-named categories for quick and easy access. The powerful and easy-to-use management features in FlashGet help you take control of your downloads easily.
Highlights
Speed. The ability to split files into up to 10 parts, with each part downloading simultaneously. Up to 8 different simultaneous download jobs. FlashGet just might be the fastest download software around!
Organize. Categorize files with FlashGet's integrated, simple-yet-powerful file management features before your files engulf you!
Mirror search. Automatically search for the fastest server available for the fastest possible downloads.
Scheduling. Automatically have FlashGet dial up, hang up and shut down the computer when you're not around! Schedule downloads for whenever you feel like it: while you snooze, during off-peak periods, certain times each weekday, weekends or whatever. The choice is yours!
Management. Manage your copious downloaded files with FlashGet's simple yet powerful user interface. Automate your FlashGet downloads with a browser click! Supports Internet Explorer, Netscape and Opera* web browsers (*with freely downloadable plug-in).
Superior ease of use. FlashGet's interface is logical, integrated, informative and customizable. Queue your downloads with FlashGet's logical queueing system. Control the download speed limit so that downloading files doesn't interfere with your web browsing! Easily see any aspect of your downloads at a glance, whether it be server status messages, monitoring splits, amount downloaded, time left... whatever! No excessive clicking into multiple open windows to see what's going on! Customize the FlashGet toolbar and user interface, including the graph and log window colors. Support for proxy servers for maximum downloading flexibility. Speak your language with FlashGet's auto-select language capabilities (20+ selectable languages available). Check for FlashGet updates from within FlashGet. Monitor your download progress, server status messages and download splits graphically with the easiest, most functional user interface around!
+ much, much more!
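The splitting idea itself is easy to sketch. A hypothetical download accelerator could divide a file of known size into inclusive byte ranges, one per simultaneous connection, and request each with an HTTP Range header. This illustrates the general technique, not FlashGet's actual implementation.

```javascript
// Divide a file of fileSize bytes into at most `parts` inclusive byte
// ranges, one per simultaneous connection.
function splitRanges(fileSize, parts) {
  const chunk = Math.ceil(fileSize / parts);
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunk) {
    ranges.push([start, Math.min(start + chunk, fileSize) - 1]);
  }
  return ranges;
}
// splitRanges(1000, 4) -> [[0,249],[250,499],[500,749],[750,999]]
// Each range would become a request header like "Range: bytes=250-499".
```

The speed-up comes from servers (or their per-connection throttles) limiting each connection individually, so several range requests in parallel can fill more of your available bandwidth than one.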

SPYBOT-SEARCH AND DESTROY

Spybot - Search & Destroy can detect and remove spyware of different kinds from your computer. Spyware is a relatively new kind of threat that common anti-virus applications do not yet cover. If you see new toolbars in your Internet Explorer that you didn't intentionally install, if your browser crashes, or if your browser's start page has changed without your knowledge, you most probably have spyware. But even if you don't see anything, you may be infected, because more and more spyware is emerging that silently tracks your surfing behaviour to create a marketing profile of you that will be sold to advertising companies. Spybot-S&D is free, so there's no harm in trying it to see if something has snooped into your computer, too :)
Spybot-S&D can also clean usage tracks, an interesting function if you share your computer with other users and don't want them to see what you worked on. And for professional users, it allows you to fix some registry inconsistencies and produce extended reports.
License
Spybot-S&D comes under the Dedication Public License.
Requirements
Microsoft Windows 95, 98, ME, NT, 2000 or XP
Minimum of 5 MB free hard disk space, more recommended for updates and backups

Welcome to QuickTime 6.5.1

QuickTime is Apple's award-winning, industry-leading software architecture for creating, playing and streaming digital media for Mac OS and Windows.
QuickTime 6.5.1 delivers a number of new features and important updates, including:
• Apple Lossless Encoder, a new lossless audio codec that retains the full quality of uncompressed CD audio while requiring about half the storage space.
• Significant improvements to AAC encoding, resulting in high-quality sound over a full range of audio frequencies.
• Enhanced support for iTunes and other QuickTime-based applications.
For more information about QuickTime, please visit the QuickTime web site at
http://www.apple.com/quicktime. The QuickTime web site also provides many links to cool QuickTime content and to other Internet sites that showcase QuickTime.
System Requirements
QuickTime 6.5.1 requires Windows 98, Windows Millennium Edition (aka Windows Me), Windows 2000 or Windows XP. It requires an Intel Pentium or compatible processor and at least 128 MB of RAM.
About Roland's Sound Set for General MIDI and GS Format
This release of QuickTime includes an instrument sound set licensed from Roland Corporation that makes a complete General MIDI compatible sound set. It also includes additional sounds necessary to make a complete GS Format compatible sound set.
What is the GS Format?
The GS Format is a standardized set of specifications for sound sources that defines the manner in which multitimbral sound generating devices will respond to MIDI messages. The GS Format complies with the General MIDI System Level - 1. The GS Format also defines a number of other details over and above the features of General MIDI. These include unique specifications for sound and functions available for tone editing, effects, and other specifications concerning the manner in which sound sources will respond to MIDI messages. Any device that is equipped with GS Format sound sources can faithfully reproduce both General MIDI sound recordings and GS Format MIDI sound recordings.
How to contact Roland:
Roland Corporation
4-16, Dojimahama 1-chome, Kita-ku, Osaka 530-0004, Japan
For more information about Roland and its line of products, visit their website at:
http://www.roland.co.jp
Limitations
Roland reserves all rights to the Sound Set not expressly granted by Roland Corporation U.S. or by Apple under the terms of Apple's Software Distribution Agreement. © Copyright 1991-2004 Apple Computer, Inc. Apple, the Apple logo, Macintosh, and Power Macintosh are trademarks of Apple Computer, Inc., registered in the U.S. and other countries. iDVD, iMovie, Final Cut Pro, iTunes, QuickTime, QuickTime Player, and PictureViewer are trademarks of Apple Computer, Inc. All other trademarks are the property of their respective owners.

Tiny polymer tips boost fibre coupling efficiency


French scientists have developed a low-cost method for growing an efficient microlens at the end of an optical fibre. James Tyrrell reports on the road to commercialization.
From
Opto & Laser Europe
Polymer tip
Researchers in France have come up with a low-cost way of fabricating custom-shaped polymer tips to enhance the light-gathering capability of optical fibre. Triggered by low-power laser light, the small tip grows inside a drop of photosensitive liquid deposited at the end of an optical fibre. The tip, which behaves like a microlens, can dramatically boost the fibre's coupling efficiency to optical components such as laser diodes, or act as a low-loss microscope probe.
Research director Pascal Royer and his colleague Renaud Bachelot, based at the Laboratoire de Nanotechnologie et d'Instrumentation Optique (Université de Technologie de Troyes, France), hit upon the idea while working with a team of Centre National de la Recherche Scientifique (CNRS) photo-chemists based in Mulhouse, France. The research has led to the formation of a company - LovaLite - which opened for business in October.
Microlens tip
Tip growth process
Bachelot and his colleagues follow a simple process to produce their tips. Firstly, they cleave the fibre, wash it in acetone (to remove any dust and organic waste) and then check its optical properties. Next, using a pipette, they deposit a drop of photosensitive liquid formulation at the end of the fibre.
This photosensitive formulation contains, among other things, a sensitizer dye (eosin) and an acrylate monomer. When the eosin absorbs laser light it promotes the release of radicals which initiate polymerization of the monomer. The system is particularly sensitive to visible light in the 450-550 nm range, which means that the process can be driven by an argon laser (514nm) or the green line (542nm) of a He-Ne laser.
The shape of the drop and the tip can be controlled by adjusting the composition and viscosity of the formulation. The scientists are able to manipulate the drop's radius of curvature simply by raising or lowering the temperature to change the viscosity.
Custom-shaped tip
Exposing the formulation to a pulse of green laser light that is guided along the core to the end of the fibre (typically 2s in duration) initiates photopolymerization and creates a robust polymer tip within the drop.
Sometimes the team can actually see the tip growing. "It is very beautiful, we can visualize the growth of the tip by observing the yellow fluorescence from the eosin," Bachelot told OLE. "For example, in the case of high eosin concentrations, this speed is quite slow and we can see the tip growing in real time. Sometimes we can guess if a tip will be good or not simply by observing the fluorescence."
The final stage in the process is to wash the drop with methanol to remove any unpolymerized material from the tip. The team typically grows tips that measure between 15 and 150µm in length and have a radius of curvature from around 0.2 to 2µm. They offer transmission greater than 80% and a polarization dependent loss of less than 0.1dB.
Encouraged by their first results, Bachelot and his colleagues went on to study the process in detail. "The first point is that tip growth relies on the growth of a waveguide," explained Bachelot. "As the light propagates in the drop of formulation, the refractive index is increased [from 1.48 to 1.52] by photopolymerization."
Optical microscope probe
The team noticed that instead of diverging, the tip was tending to converge. Hoping to illustrate the effect more clearly, they dipped the end of a singlemode fibre into a thick layer of formulation. They discovered that sending a long pulse of laser light down the fibre produced a thin probe-like tip 500µm in length. Light was being self-guided through the solution.
Another key player in the process turned out to be oxygen. Photopolymerization begins only when absorbed energy is greater than a threshold value - Eth - which increases in the presence of oxygen. This means that photopolymerization at the boundary between the air and the drop of photosensitive formulation is very selective. For short exposure times, only the centre of the laser's Gaussian beam is able to trigger the polymerization, which produces a sharp tip. Flatter tips with a radius corresponding to the geometry of the drop require a longer dose of light.
The French team is currently applying its technology in three areas, two of which involve using the tip as a probe for optical scanning microscopy. If operated in the far-field domain, the tip acts as a microlens for illuminating or collecting light from the sample surface. Using their tips, Bachelot and his colleagues have demonstrated an imaging resolution of around λ/2.
Electrical field
For near-field microscopy, which offers much higher resolution (in Bachelot's case around λ/20), the tips have to be modified as currently their radius limit is around 250nm. According to Bachelot, the simplest method, known as the shadow effect, is to metallize a rotating tip from the side. Using this approach, the French scientists have succeeded in creating a 100nm-wide optical aperture at the extremity of the tip.
Typically, near-field probes are made by tapering optical fibres. Because the taper can be very small, around the cut-off diameter, these probes often act as poor lightguides. "If they launch 1mW at the extremity of the fibre, at the other extremity where the hole is, they get only 1µW," said Bachelot. He added: "If they want to increase the launch power, they can destroy the tip end because light is absorbed by the metal film [destroying the aperture]. In our case, there is no taper as the light is guided. For a 100nm aperture, we observe a transmission in the range of 5-10%. This is huge."
Pascal Royer
The microlens used in far-field microscopy also functions as a very effective tool for coupling optical components to fibres. Recently, Bachelot and his co-workers reported that they had used their tip technology to couple 70% of the output from a 9.5mW laser diode (1310nm) into an optical fibre (Optics Letters 29 1971). The maximum coupled output between the 15µm long tip at the end of a 9µm core-diameter fibre was found to be 6.7mW for an optimal tip-laser distance of 4µm. By comparison, when the same experiment was carried out with a bare cleaved fibre, the coupled power was less than 1.5mW. Bachelot believes that it would also be possible to couple tips to other optical components such as photonic crystal structures and integrated waveguides.
Top tip team
Although the team has concentrated on making single-peaked tips, it is now considering other options. It has recently managed to produce multi-peaked tips on multimode fibre by applying mechanical strain to the fibre during photopolymerization. This selectively excites linearly polarized modes within the fibre. The multi-peaked tip is a three-dimensional mould of the intensity distribution within the fibre. This could potentially allow a new format of optical communication in which distinct modes carry information rather than wavelengths.
Initially, Royer and Bachelot preferred to explore their ideas from behind the university's closed doors. Now that several patent applications have been filed, the team is keen to test its discovery in the marketplace. French law sets strict limits on the commercial activities of university staff and so Royer and Bachelot have hired a full-time director to lead LovaLite. They appointed Brahim Dahmani, a former CNRS scientist who has spent the past 15 years working for Corning, along with an engineer and a technician. The company is located near to the university in Technopole de l'Aube, a science park funded by the Champagne-Ardenne region that specializes in incubating hi-tech start-ups.

Surface mounted optics aid automated assembly


Michael Hatcher reports on a technique developed by a Swiss collaboration that paves the way towards faster, automated assembly of miniature optical subsystems.
From Opto & Laser Europe

Miniature mounts

Although optical subsystems are widespread in applications such as sensing and telecoms today, the way in which the optics components are assembled and packaged remains a tricky and time-consuming business. For the most part, optics are passively aligned and then stuck together with glue.
In high-volume semiconductor manufacture, such techniques would be unthinkable. More sophisticated methods for the assembly of photonics modules are essential if their manufacture is to reach the same level of automation as that of electronics while maintaining high-precision alignment and high reliability.
A collaboration between the Swiss Federal Institute of Technology in Lausanne (EPFL) and Leica Geosystems of Switzerland has recently developed a technique that appears to offer a way forward. The group's three-dimensional miniaturized optical surface-mounted devices (TRIMO-SMD) imitate the assembly techniques that were developed for the electronics industry 20 years ago. The method, which uses six-axis robotic motion, automated optical alignment and laser-reflow soldering to make photonics modules, is currently being made commercially available by Leica Geosystems.
Automated assembly
Opto & Laser Europe reported on the first generation of this automated assembly equipment back in September 1998. Originally developed by the EPFL, optical surface-mounted devices (O-SMD) were useful for assembling optics of approximately 8-10 mm in size. A spin-off company called BrightPower was set up to commercialize the technology.

Lidar transceiver

However, the O-SMD technique, which involves simultaneous laser-welding of three metal cups that act as a tripod holding the optical component, was judged to be unsuitable for micro-optical assembly. For the past few years, Leica Geosystems has been working closely with both the Institute of Applied Optics and the Institute of Robotics at EPFL on a technique that, while it works on a similar principle to O-SMD, is suitable for use in manufacturing micro-optic systems - TRIMO-SMD.
"We decided that O-SMD was way too large for future projects," said Laurent Stauffer, who has been managing the technological development of the first commercial product to use TRIMO-SMD at Leica Geosystems. "TRIMO works extremely well with laser diodes. In many applications where you have a laser diode and you need an optical component you can use TRIMO, so there is a very wide potential market."
TRIMO-SMD is designed for use with optical components of around 2 mm in diameter. "We regard the high throughput and reliability possible with TRIMO-SMD to be something of a quantum step in optical assembly," Stauffer told Opto & Laser Europe.
Smaller optics
Laser-welding a mount to a substrate was not viable for making subsystems that incorporate smaller optics. Instead, laser-soldering or brazing was found to be the best method of attachment. This is the key difference between O-SMD and TRIMO-SMD: rather than holding the optics in place with a tripod of metal cups, the optical element in TRIMO-SMD is suspended up to 400µm away from the substrate material, and is moved into position by a robot. There is no contact between the substrate and the optical mount, which according to Stauffer is an improvement on O-SMD. "Contact between the substrate and the element was a bit of a problem in the past," he said.
In TRIMO-SMD, a mount called a universal holder is produced first. This holder consists of a 2.5mm-diameter round cup and two vertical arms 2.6mm long. It is covered with a tin preform in preparation for the soldering process. "The universal holder is our standard interface between the optics and the ground plate," explained Stauffer. A sub-mount containing the optical element is then laser-welded to the holder. With the holder gripped by a robot-controlled jig, the optical element is aligned using cameras and sensors. The robot can move in six dimensions and, according to Stauffer, has a placement precision of 0.25µm.
When the optical element is suspended precisely above the desired position, an 808nm continuous-wave high-power diode laser fires 20-40W through the substrate underneath the optics. The substrate is partially transparent and part-metallized to enable wetting of the surface during soldering. When the laser hits the preform, the tin melts and then drops onto the mounting plate to form a stable joint with the holder.
This method fixes the optical element into position in just 2s. The soldering causes slight thermal shrinkage that alters the exact position of the optical element. Stauffer says, however, that this can be easily resolved: "With a 200µm gap, there is typically a shrinkage of 3µm - this can be calibrated accurately, and you can simply offset the optical element before soldering," he commented. After soldering, the gripper is relaxed and the optics left in a fixed position. "The placement of each optic is repeatable to within 1µm [to a 99% confidence limit]," said Stauffer.

TRIMO-SMD robot

When Leica Geosystems and EPFL first developed TRIMO-SMD, they had a particular product in mind: a laser rangefinder used in military applications. Leica needed to place a beam-shaping optic directly in front of a laser diode inside the rangefinder. Thanks to the new micro-optic assembly produced using TRIMO-SMD, the distance over which the equipment is effective has been increased from 5 to 10km.
Award-winning technology
The technique had a welcome boost in February this year, when its selection as one of the winners of the Swiss Technology Award meant that it was exhibited at the Hannover Messe technology show. "We found five very interested potential customers for TRIMO-SMD at Hannover," Stauffer said. A spin-off company dedicated to the commercialization of TRIMO-SMD is planned and will be set up in the autumn of this year. According to Stauffer, "Leica will support the company over the first few years and the goal is to offer a manufacturing service, technical support and consulting."
In the meantime, Leica is looking to cash in on its investment (the company owns two patents protecting the technology) by licensing the technique to industrial partners. "We think that there is a very wide market for TRIMO-SMD, particularly companies that are involved in optical sensing and telecommunications," Stauffer said. "Leica Geosystems will use TRIMO for applications using optical sensing and we expect other customers to do the same. Medical applications are also possible."
In addition to the laser rangefinder, Leica has built a lidar transceiver module using TRIMO-SMD. The technique enabled a dramatic reduction in the size and weight of the transceiver and improved its stability and robustness. Coupled with a drop in price, the module could open up a new market for transceivers.
A third module that has been built by Leica using TRIMO-SMD is a series of components that produce second-harmonic generation from an Nd:YAG microchip laser. "All of the devices used are ideally sized for TRIMO-SMD, and components like the self-focusing lens can be positioned to a very high accuracy," said Stauffer.
He recommends that anybody considering using the TRIMO technique should carefully plan exactly what they need to assemble: "You need to 'think TRIMO', and design your optics accordingly." He adds that although micro-optic assembly currently takes around 10-15 min using the technique (the active alignment has been only partially automated), this could be shortened and optimized for high-throughput mass-production by any company willing to invest in the technology.
If a major manufacturer of optical subsystems is forthcoming with that kind of investment, the TRIMO-SMD technique could propel photonics manufacturing along the same path as electronics manufacturing.

Friday, December 23, 2005

iTunes

" title="Atom feed">Site FYou can use iTunes to create your own personal digital music library and easily organize and listen to your collection of digital music files. You can also create your own custom audio CDs and transfer your music to an Apple iPod.

Important: After installing iTunes 4.6 for Windows, you'll only be able to transfer music to your iPod using iTunes. To transfer music from MusicMatch Jukebox or Audible Manager to your iPod, you'll need to first import the music into iTunes. For more information, search iTunes and Music Store Help.

System requirements
iTunes 4.6 requires Windows 2000 or Windows XP with a QuickTime-compatible audio card. Also make sure you have the latest Service Pack for your computer using Windows Update.

To create CDs or DVDs, you need an iTunes-compatible CD or DVD burner. To use the iTunes visualizer, you need a QuickTime-compatible video card.

If you plan to listen to music previews or buy music from the iTunes Music Store, a DSL, cable modem, or local area network (LAN) Internet connection is recommended.

Installing iTunes 4.6
Double-click the iTunes 4.6 installer and follow the instructions that appear. When you install iTunes, QuickTime 6.5.1 is also installed.

What's new in iTunes 4.6
iTunes 4.6 includes support for playing your music wirelessly using AirPort Express with AirTunes. It also includes a number of other minor enhancements.

For more information
For more information about using iTunes, open iTunes and choose Help > iTunes and Music Store Help. Type a question in the search field, or click Overview or Contents. If you're connected to the Internet, you can learn how to use iTunes by taking the iTunes tutorial. Visit www.apple.com/support/itunes/windows/tutorial/index.html.

If you've purchased music from the Music Store and have a billing question, open iTunes and choose Help > Music Store Customer Service.

For more information about using your iPod with iTunes, open iTunes and choose Help > iPod Help.

For the latest news about iTunes, visit the iTunes website at www.apple.com/itunes or the Apple Support website at www.apple.com/support/itunes. For the latest information about iPod, visit www.apple.com/ipod.

A note about copyright
This software may be used to reproduce materials. It is licensed to you only for reproduction of non-copyrighted materials, materials in which you own the copyright, or materials you are authorized or legally permitted to reproduce. If you are uncertain about your right to copy any material, contact your legal advisor.

Researchers compile radiation database

" title="AtoAs the number of photonic systems used in nuclear, space and high-energy physics environments grows, radiation-induced performance-degradation of optical materials and devices becomes an increasingly important issue. Johan van der Linden discovers how it is to be tackled.
From Opto & Laser Europe
Testbed reactor
Late last year, lasers were used for the first time to transmit data between orbiting satellites. But as the photonics revolution begins to extend into the harsh environments of the space and nuclear industries, there is an urgent need to assess the performance of optical components - both active and passive - under the influence of various forms of radiation.
One organization that aims to do this is the Belgian nuclear research centre SCK-CEN. During the past decade it has investigated the radiation resistance of a number of photonic components, including optical fibres, semiconductor light sources and photodetectors, fibre-optic couplers and sensors, and liquid-crystal cells.
Comparable effects
Francis Berghmans, head of photonics at SCK-CEN, has led the work on the effects of radiation. "We found that, although the basic environmental conditions such as dose rate, total dose and radiation type may differ from one application to the next, the fundamental effects that influence devices often remain comparable," he said.
However, the results of exposure can vary. Exposure to particle radiation, such as proton and neutron beams, can cause displacement damage, whereas exposure to electromagnetic radiation, such as gamma rays, will primarily induce defects resulting from ionization. This means that even though a particular component may be able to withstand large doses of gamma radiation, making it useful in civil nuclear facilities, it could be too sensitive to protons to be suitable for space applications.
Passive devices, particularly optical fibres comprising Bragg gratings, are the most frequently studied components, due to their potential use as strain, temperature and multi-point structural integrity sensors in thermonuclear environments.
High radiation doses generally create defects - known as colour centres - in optical glasses, which can lead to significant transmission losses and light generation from unwanted wavelength bands. This is a major obstacle to the efficient operation of optical communication systems.
Berghmans has found that in standard germanium-doped fibres, high radiation doses can induce absorption losses of several hundred dB/km in the 1310 nm and 1550 nm telecom transmission windows. Pure silica fibres suffer about one tenth of the losses seen in germanium-doped fibres. However, the optical fibres required for data transmission in nuclear facilities are comparatively short in length, so standard fibre loss levels may be acceptable.
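The point about short fibre runs follows from simple arithmetic: attenuation in dB scales linearly with length. A rough illustration (the 300 dB/km figure below is our example, consistent only with the "several hundred dB/km" range quoted above):

```python
# Rough illustration of why short in-plant fibre runs can tolerate
# radiation-induced losses that would be hopeless over telecom distances.

def total_loss_db(loss_db_per_km: float, length_m: float) -> float:
    """Total attenuation in dB for a fibre of the given length."""
    return loss_db_per_km * (length_m / 1000.0)

# 300 dB/km of induced loss over a 50 m run is only 15 dB - painful,
# but potentially workable with enough link margin.
print(total_loss_db(300.0, 50.0))      # 15.0
# The same fibre over a 10 km telecom span would lose 3000 dB.
print(total_loss_db(300.0, 10_000.0))  # 3000.0
```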
In semiconductor-based active optical components, radiation-induced damage can introduce defect states into the crystal lattice and create new energy levels in the bandgap. These defects may act as generation-recombination centres, leading to increased threshold current and lower optical output from laser diodes. In photodiodes, increased dark current and lower responsivity are the likely hazards.
VCSEL array
"Our experiments have demonstrated that photodetectors are the most critical components in optical communication systems," said Berghmans. His findings show that, at low doses, III-V-based photodiodes are not as sensitive to radiation-induced degradation as silicon-based detectors. As far as sources are concerned, vertical-cavity surface-emitting lasers (VCSELs) seem to have more radiation tolerance than edge-emitting light sources. Berghmans puts this enhanced tolerance down to the VCSEL's thin active layer and initially brief carrier lifetime, which mean that a great many defects must be induced before they seriously affect the efficiency of the device.
Optical components are increasingly used in space applications, ranging from teleobjective lenses to communication systems for use in spacecraft and between satellites.
Most commonly-applied optical materials are prone to darkening - or solarization - in irradiation environments, so glass manufacturers supply radiation-hardened products (analogues of standard glasses that have been doped with cerium oxide) which exhibit improved end-of-life transmission properties. However, the performance of spaceborne optical systems rests on the reliability of refractive components.
Cerium doping retains more than 90% transmittance in the visible spectrum, but it has been shown to have some negative effects on other system performance parameters. For instance, radiation has a substantial effect on the refractive-index profile of cerium-doped components.
Glass sample a)
Last November, the ESA's Research and Technology Centre (ESTEC) in Noordwijk, the Netherlands, presented the results of a study it had assigned to the France-based space company Astrium and SCK-CEN to assess the stability of physical properties in commercially available glass materials.
Dominic Doyle, a technical officer at ESTEC, explained the need for such a study: "The main reason was the deficit of a reliable, usable and easily accessible database concerning the radiation characteristics of refractive optical materials. This [study] is a step towards establishing a comprehensive database to quantify radiation effects for use in the design and development of spaceborne optical systems."
Glass sample b)
Michel Fruit, manager of optical design and engineering at Astrium, and his colleagues place special emphasis on studying refractive index changes in proton and gamma radiation fields to simulate a range of different Earth orbits. "We found that cerium-doped specimens can show significant steps in the wavefront profile," he said.
Depending on the base material, the refractive index change can be positive as well as negative, although it is generally rather small (less than 10-5). In optical systems that use a large number of lenses, however, the effect can be significant. Fortunately, says Fruit, it can be predicted. "The radiation-induced refractive index change and absorption-increase sensitivity is linear - particularly in proton environments - and this allows a dose-coefficient modelling approach to be used," he said.
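The dose-coefficient modelling Fruit describes - induced index change linear in accumulated dose - can be sketched as below. The coefficient value is invented purely for illustration; real coefficients would come from the database being compiled:

```python
# Hedged sketch of the linear dose-coefficient model described above:
# delta_n = k * dose. The coefficient k below is hypothetical, chosen only
# so the result lands below the ~1e-5 magnitude quoted in the text.

def delta_n(dose_krad: float, k_per_krad: float = 1e-8) -> float:
    """Predicted refractive-index change for a given accumulated dose,
    assuming the linear dose-coefficient model (k is illustrative)."""
    return k_per_krad * dose_krad

# A 100 krad mission dose under this assumed coefficient gives ~1e-6.
print(delta_n(100.0))
```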
Huge task lies ahead
Compiling the database is an enormous task and it will be several years before it is accessible, probably via the ESA's Web site. Once complete, the database could be of use in a range of related fields such as deep-ultraviolet lithography and pulsed high-power lasers, because such systems need high-performance refractive optical components.
Since gamma rays are photons, any optical system that is exposed to high-energy photons could benefit from the radiation studies. This applies to deep-ultraviolet lithography in particular, since it would use many optical components and the long exposure times involved would result in significant radiation doses. Because such systems work to tight tolerances, an awareness of possible radiation effects is crucial.
According to Doyle, standard methods must now be adopted. "Given the workload involved [in compiling the database], one of our most immediate goals is to concentrate on the standardization of the assessment methodology with industrial, institutional and agency partners," he said. "Such a methodology could eventually be approved by the ISO or the European Cooperation for Space Standardization."
m feed">Site Feed

Pulse Spreading

" title="Atom feeThe data which is carried in an optical fibre consists of pulses of light energy following each other rapidly. There is a limit to the highest frequency, i.e. how many pulses per second which can be sent into a fibre and be expected to emerge intact at the other end. This is because of a phenomenon known as pulse spreading which limits the "Bandwidth" of the fibre.

Figure 11 Pulse Spreading in an Optical Fibre
The pulse sets off down the fibre with a nice square-wave shape. As it travels along the fibre it gradually gets wider and its peak intensity decreases.
Cause of Pulse Spreading
The cause of pulse spreading is dispersion. This means that some components of the pulse of light travel at different rates along the fibre. There are two forms of dispersion:
1. Chromatic dispersion
2. Modal dispersion
Chromatic Dispersion
Chromatic dispersion is the variation of refractive index with the wavelength (or the frequency) of the light. Another way of saying this is that each wavelength of light travels through the same material at its own particular speed which is different from that of other wavelengths.
For example, when white light passes through a prism, some wavelengths of light bend more because their refractive index is higher, i.e. they travel more slowly. This is what gives us the "spectrum" of white light. The "violet" and "blue" light travel slowest and so are bent most, while the "red" and "orange" travel fastest and so are bent least. All the other colours lie in between.
This means that different wavelengths travelling through an optical fibre also travel at different speeds. This phenomenon is called "Chromatic Dispersion".
Figure 10 Dispersion of Light through a Prism
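The effect of chromatic dispersion on a pulse is often estimated with the standard relation Δt = D·L·Δλ, where D is the fibre's dispersion parameter, L its length and Δλ the source linewidth. The numbers below are typical textbook values, not figures from this text:

```python
# Illustrative estimate of pulse spreading from chromatic dispersion,
# using the standard relation delta_t = D * L * delta_lambda.
# D in ps/(nm.km), L in km, linewidth in nm; values are typical, assumed.

def chromatic_spread_ps(D_ps_nm_km: float, length_km: float,
                        linewidth_nm: float) -> float:
    """Pulse spreading (ps) due to chromatic dispersion."""
    return D_ps_nm_km * length_km * linewidth_nm

# Standard fibre at 1550 nm (D ~ 17 ps/(nm.km)), 10 km, 1 nm linewidth:
print(chromatic_spread_ps(17.0, 10.0, 1.0))  # 170.0 ps of spreading
```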
Modal Dispersion
In an optical fibre there is another type of dispersion, called "multimode dispersion".
The less oblique rays (lower-order modes) travel a shorter distance. These correspond to rays travelling almost parallel to the centre line of the fibre, and they reach the end of the fibre sooner. The more zig-zag rays (higher-order modes) take a longer route as they pass along the fibre and so reach the end of the fibre later.
Now:-
Total dispersion = chromatic dispersion + multimode dispersion
Or put simply: for various reasons some components of a pulse of light travelling along an optical fibre move faster and other components move slower. So, a pulse which starts off as a narrow burst of light gets wider because some components race ahead while other components lag behind, rather like the runners in a marathon race.
Consequences of pulse spreading
Frequency Limit (Bandwidth)
The further the pulse travels in the fibre, the worse the spreading gets.

Figure 12 - Merging of Pulses in a Fibre.
Pulse spreading limits the maximum frequency of signal which can be sent along a fibre. If signal pulses follow each other too quickly, then by the time they reach the end of the fibre they will have merged together and become indistinguishable. This is unacceptable for digital systems, which depend on the precise sequence of pulses as a code for information. The bandwidth is the highest number of pulses per second that can be carried by the fibre without loss of information due to pulse spreading.
Distance Limit
As explained above, a given length of fibre has a maximum frequency (bandwidth) that can be sent along it. If we want a higher bandwidth from the same type of fibre, we can achieve it by shortening the fibre. Another way of saying this is that for a given data rate there is a maximum distance over which the data can be sent.
Bandwidth Distance Product (BDP)
We can combine the two ideas above into a single term called the bandwidth distance product (BDP). It is the bandwidth of a fibre multiplied by the length of the fibre. The BDP is the bandwidth of a kilometre of fibre and is a constant for any particular type of fibre. For example, suppose a particular type of multimode fibre has a BDP of 20 MHz.km, then:-
1 km of the fibre would have a bandwidth of 20 MHz
2 km of the fibre would have a bandwidth of 10 MHz
4 km of the fibre would have a bandwidth of 5 MHz
5 km of the fibre would have a bandwidth of 4 MHz
10 km of the fibre would have a bandwidth of 2 MHz
20 km of the fibre would have a bandwidth of 1 MHz
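The worked example above reduces to one division, bandwidth = BDP / length, which can be captured in a tiny helper:

```python
# The bandwidth-distance product relation from the text: for a fibre with a
# fixed BDP, usable bandwidth is simply BDP divided by length.

def bandwidth_mhz(bdp_mhz_km: float, length_km: float) -> float:
    """Usable bandwidth (MHz) of a fibre with the given BDP and length."""
    return bdp_mhz_km / length_km

# Reproducing the table for a 20 MHz.km multimode fibre:
for km in (1, 2, 4, 5, 10, 20):
    print(km, "km ->", bandwidth_mhz(20.0, km), "MHz")
```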
The typical BDP of the three types of fibre are as follows:-
Multimode 6 - 25 MHz.km
Single Mode 500 - 1500 MHz.km
Graded Index 100 - 1000 MHz.km
NB: The units of BDP are MHz.km (read as megahertz kilometres), not MHz/km (megahertz per kilometre). This is because the quantity is a product (of bandwidth and distance) and not a ratio.
Choice of Fibre
Multimode Fibre
Multimode fibre is suitable for local area networks (LANs) because it can carry enough energy to support all the subscribers to the network. In a LAN, however, the distances involved are small, so little pulse spreading can take place and the effects of dispersion are unimportant.
Single Mode Fibre.
Multimode dispersion is eliminated by using single mode fibre: the core is so narrow that only one mode can travel, so the amount of pulse spreading is greatly reduced compared with multimode fibre. Chromatic dispersion, however, remains even in single mode fibre, so some pulse spreading can still occur; it can be reduced by careful design of the chemical composition of the glass.
The energy carried by a single mode fibre, however, is much less than that carried by a multimode fibre. For this reason single mode fibre is made from extremely low loss, very pure, glass.
Single mode low absorption fibre is ideal for telecommunications because pulse spreading is small.
Graded Index Fibre
In graded index fibre, rays of light follow sinusoidal paths. Low-order modes - rays travelling close to the axis - stay near the centre of the fibre, while high-order modes spend more time near the edge of the core. Low-order modes travel in the high-index part of the core and so travel slowly, whereas high-order modes spend predominantly more time in the low-index part of the core and so travel faster. In this way, although the paths are different lengths, all the modes travel the length of the fibre in step, i.e. they all reach the end of the fibre at the same time. This eliminates multimode dispersion and reduces pulse spreading.
Graded Index fibre has the advantage that it can carry the same amount of energy as multimode fibre. The disadvantage is that this effect takes place at only one wavelength, so the light source has to be a laser diode which has a narrow linewidth.
Figure 13 - Ray Paths in Graded Index Fibre

High Precision Attenuating Optical Fiber Streamlines Fixed In-Line Attenuator Production

" title="Atom feed"CorActive, an independent manufacturer of advanced specialty optical fiber products, has announced expanded availability of its family of high precision attenuating optical fibers. Immediate delivery of all CorActive attenuating optical fibers enables attenuator manufacturers to lower production costs by eliminating excessive inventory overhead. CorActive attenuating optical fiber products are based on standard telecom single mode optical fiber geometry but feature a core that is doped with metal ions to partially or completely absorb the incoming light.
CorActive’s single mode attenuating optical fiber product line includes High Attenuation Fiber with an attenuation range of 0.4 to 15 dB/cm, Extreme Attenuation Fiber with attenuation greater than 15 dB/cm, as well as Low Attenuation Fiber for use in patch cords and backplane assemblies with an attenuation range of 0.5 to 40 dB/m. All CorActive attenuating optical fiber products feature virtually uniform attenuation over the 1250 to 1620 nm window ensuring compatibility with current and future DWDM, CATV and other telecom networks.
“Today’s tight economy in telecommunications has enabled CorActive to excel by providing a superior attenuating optical fiber product with virtually immediate delivery. Reducing inventory costs and improving production yields has enabled our customers to better compete in an industry known for razor-thin margins,” explained Adrien Noël, CorActive’s CEO.
CorActive’s industry-leading attenuation tolerances, coupled with minimal batch-to-batch variance and immediate availability from its comprehensive inventory, enable customers to minimize production and inventory carrying costs. CorActive customers order the exact attenuation per unit length required for their current production needs – a matching fiber is shipped immediately from CorActive’s inventory in North America, Europe or Asia. CorActive supplies attenuating optical fiber to many of the world’s leading manufacturers of attenuator products.
Further assisting production of superior fixed in-line attenuators is CorActive’s tight control over the core/cladding concentricity and the circularity of the fiber core. Fixed in-line optical attenuators are constructed by inserting a 2 centimeter piece of attenuating optical fiber into a tube or ferrule with precise inside diameter tolerances. With a core size of 9 microns for standard single mode telecom fiber, there is very little room for error. CorActive’s industry leading cladding diameter tolerance of +/-0.5 micrometers ensures that attenuator assembly is fast and precise. The resulting attenuator product features a core that is very precisely aligned enabling very low core misalignment when coupled to single mode telecom fiber.
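The sizing arithmetic implied above - matching a fibre's dB/cm rating to a target attenuation over a short ferrule insert - can be sketched as follows. The numbers are illustrative, not CorActive specifications:

```python
# Sketch of fixed in-line attenuator sizing: given a target attenuation and
# an attenuating fibre's dB/cm rating, find the length of fibre to insert.
# Values below are examples, not CorActive product specifications.

def fibre_length_cm(target_db: float, fibre_db_per_cm: float) -> float:
    """Length (cm) of attenuating fibre needed to reach the target loss."""
    return target_db / fibre_db_per_cm

# A 10 dB fixed attenuator built from a 5 dB/cm fibre needs a 2 cm piece -
# matching the 2 cm ferrule insert described in the text.
print(fibre_length_cm(10.0, 5.0))  # 2.0
```

In practice one would do this the other way around: fix the insert length (e.g. 2 cm) and order the attenuation-per-unit-length that yields the target loss, which is the ordering model the press release describes.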
CorActive’s attenuating optical fibers have been in full-scale production for over 4 years and are available immediately for sampling. Complete fiber characterization data is provided with all CorActive optical fiber products, and our optical design engineers are available to assist in obtaining optimal simulation results.
About CorActive
CorActive is a well financed independent developer and manufacturer of advanced Specialty Optical Fiber (SOF) products for OEM customers serving the telecommunications, sensor, defense, security, industrial, medical and aerospace industries. CorActive uniquely offers a full line of standard SOF products, including erbium doped, ultra violet sensitive and attenuating optical fiber, plus custom fiber development services for specific applications. At CorActive we pride ourselves in providing technologically advanced specialty optical fiber products that uniquely enable our customers to offer superior products and services. With research & development and production facilities located in Quebec City, CorActive serves a worldwide customer base for standard and custom SOF products. For more information visit www.coractive.com.

LASER 2003: Can fibre lasers steal the show?

" title="Atom feed">Site FeedAccording to many in the laser industry, fibre lasers are now a serious alternative to solid-state and carbon dioxide lasers for industrial material-processing applications. Here, we look at some of the fibre-laser vendors that will be looking to make a splash at this year's event.
From LASER. World of Photonics Visitor Magazine
Fibre preform
The boom years of optical telecoms may be long gone, but some of the technological advances of that time will be on view to LASER 2003. World of Photonics visitors in some unexpected ways.
One example of these is in the field of high-power fibre lasers, which are sure to turn a few heads in the production-engineering sector. During the telecoms boom, firms needed reliable, high-power 980 nm diode sources to pump erbium-doped fibre amplifiers (EDFAs). The technology was developed rapidly to satisfy the market's demand.
With components like EDFAs no longer so popular, these diodes are being employed in high-power fibre lasers that are finding industrial applications such as in the manufacture of car parts and medical devices.
Two show exhibitors in particular - Southampton Photonics (SPI) of the UK and IPG Photonics, US - have shifted their emphasis away from telecoms to the industrial applications of high-power fibre lasers.
On the fast track
IPG was first out of the blocks with its high-power sources and is seeing them used in the automotive and medical-device industries. "The telecoms hiccup has allowed us to fast-track high-power fibre lasers," said Bill Shiner, business-development manager of IPG Photonics' industrial-laser group.
While IPG has dived straight into the market, SPI has added to its ranks senior staff with experience in the industrial sector. The firm's plan is to sell its fibre sources to OEMs rather than directly to end users. SPI has developed what it believes to be a superior fibre design. So far, it has focused on military applications (it has contracts with DARPA in the US and Qinetiq in the UK), but at LASER 2003. World of Photonics, SPI will be launching its first fibre-laser products tailored to the industrial market.
In a fibre laser, a doped silica fibre is excited by a diode source. Two Bragg gratings written into the fibre act like the mirrors of a "normal" laser cavity to generate the laser emission, resulting in a compact source with excellent beam quality. IPG has found a way to "bundle" its ytterbium-doped fibre lasers together efficiently, and it has produced systems that emit up to 6kW continuous-wave power at 1080nm.
Aside from the many technical advantages claimed by the makers of fibre lasers, it may be their cost of ownership that turns out to be the key factor. Stuart Woods is SPI's director of business development. He estimates that over the typical lifetime of a source, the total cost of ownership of a fibre laser is approximately one-third that of a similar carbon dioxide or solid-state device. This is despite the initial purchase price of a fibre laser generally being slightly higher than that of a DPSS laser, and it highlights the exceptionally low maintenance cost. Woods has another way of putting it: the fibre laser gives the lowest "cost per millijoule" of any comparable laser, coming in at less than $200.
Faster welding
IPG recently installed a 2kW fibre laser at the Edison Welding Institute, a leading materials-joining organization in the US, and a 6kW fibre-laser unit at an (undisclosed) automotive plant in Germany. During trials, the air-cooled 6kW unit was integrated with a robot and used for welding and cutting steel and aluminium alloys. According to IPG, the fibre laser could cut and weld faster than comparable YAG sources.
"Last year industry experts forecast that a multiyear development would be required to convince automotive and other major industries to accept this unknown technology," said Shiner, adding: "Large numbers of prospective customers are now lining up for pre-production tests."
Shiner is confident that IPG lasers will go on to make a big impact on a variety of applications: "The lasers have a 20% wallplug efficiency and are ideal for marking, cutting and welding," he said. "I believe that they will revolutionize the industrial-laser market."
Meanwhile, several SPI units are undergoing customer-evaluation tests. SPI's DARPA project is to build a singlemode, single-polarization 1kW fibre laser with an M2 value of 1. The first phase of this project is now complete, with the firm producing a 50W polarization-maintained (PM) output at 1060nm and a 25W non-PM output at 1550nm. The Qinetiq contract is to produce distributed-feedback fibre lasers for acoustic sensor arrays, and the first stage of this project was completed in December last year.
SPI's fibre lasers are based on its patented fibre design. Mikhail Zervas, SPI's chief scientist, explained: "Conventional active fibres are core-doped at the centre of the fibre. Our design is based on ring doping." Zervas says that conventional doping increases saturation and limits the maximum extractable energy from the fibre. With ring doping, the gain is more controlled and the output less noisy.
With its Q-switched fibre lasers, SPI reckons that it should be able to deliver more energy per pulse than is possible with the conventional active fibre architecture.
JDS Uniphase also has plans for its fibre lasers. Product marketing manager Rüdiger Hack says that the firm is working to increase the power output: "The next step is 50 and 100W models with an M2 of 1."
While Hack also believes that fibre lasers will revolutionize industrial-laser applications, he believes that costs are currently too high: "Manufacturers need to work on driving down component and manufacturing costs, especially for high-volume applications."
Shiner's view appears to dispute this: he says that the prices of IPG's fibre sources are comparable with those of Nd:YAG sources up to around the 4kW mark. Any higher, and he admits that fibre lasers do become the more expensive option.
Not that this is dulling his optimism: "I think [fibre lasers] will become huge in the cutting market. I don't really see how we can lose - in a few years they should really dominate the YAG business, especially in areas like automotive welding. We will take on YAGs first and then carbon dioxide lasers."
Currently, the market - estimated to be worth $60-70 m - is dominated by IPG, which has a share of more than 50%. JDS Uniphase takes the only other significant share with 26%. This looks set to change as SPI enters the market, with other major laser vendors also expected to get in on the act.
Woods' estimate is that the total addressable market for fibre lasers could be as much as $300 m. He argues that multisource agreements between fibre-laser vendors could be the way forward.
Well, so much for the hype. At LASER 2003. World of Photonics you can see what the fibre-laser vendors have on offer.