Lexical Semantics: Hyponymy Networks

Question 2. Not all dictionary definitions contain classifiers, but many do, and in some cases, when you look up the classifier itself, you find another, even more general classifier within its definition. For example, you might like to think about the following definitions from the Collins English Dictionary. Colostrum is the thin milky secretion from the nipples that precedes and follows true lactation. It consists largely of serum and white blood cells. A secretion is a substance that is released from a cell, especially a glandular cell, and is synthesized in the cell from simple substances extracted from the blood or similar fluid. Substance is (1) the tangible basic matter of which a thing consists; or (2) a specific type of matter, especially a homogeneous material with definite or fairly definite chemical composition. Matter is (1) that which makes up something, especially a physical object; material.

(a) What are the classifiers in these definitions? (Why is this question hard to answer? Can you change the definitions to make it easier?)
(b) Draw a diagram to show the hyponymy chain you found in (a), with hyponyms shown below their classifiers.
(c) Can you think of any additional levels that you can put in the hyponymy chain above secretion? Add them.
(d) Sebum and saliva are co-hyponyms of colostrum. Add them to the diagram, along with two co-hyponyms for each level of the chain.
(e) Add distinguishers to your diagram, to differentiate each of the co-hyponyms you have added.

On an intuitive level it would seem a simple task to select the different classifiers within each of the above definitions; however, several problems arise which belie this. Colostrum is the easiest to deal with, as it is the most specific of the four terms, although there is still potential for an error to be made.

The only classifier in this description is ‘secretion’ as, according to Hudson (1995: 26), “the classifier … is the first common noun that follows is”[1]. Although this syntactic relationship is useful as a method of identification, it is not the reason ‘secretion’ is a classifier of ‘colostrum’. Syntactic relationships exist between lexemes, not senses, and are governed by the relationships between senses; it is the latter that hyponymic networks represent. The classifier (C) is the concept that is superordinate to the sense in question (S1), in that S1 must possess enough characteristics of the classifier to make it a type of that concept, even if not a typical one, as well as distinguishers that serve to differentiate it from the classifier and any other co-hyponyms. More simply, S1 is a hyponym of C iff all S1 are a type of C, but not all C are S1 (op. cit. 16). Furthermore, classifiers for common nouns will always capture what S1 is, not how or why it is. In the case of ‘colostrum’ only ‘secretion’ performs this function: we can say that colostrum is a type of secretion.

It is important, however, to refine the concept of ‘what it is’: if this is taken to include a material concept as well as a typical one, i.e., what it is made up of or consists of, there is more scope for what can be considered a classifier. Under this description both ‘serum’ and ‘white blood cell’ can be considered as classifiers of ‘colostrum’. This does not seem to be correct, though, as ‘colostrum’ is not a type of serum or white blood cell, nor does it possess enough of the characteristics of either to qualify as a hyponym.

Therefore, in such cases we can eliminate concepts about the material of which a referent of the given sense consists as candidates for classifiers. Having established the criteria for identifying classifiers, it should now be easier to identify those for the remaining senses; however, there are further difficulties. It is safe to say that ‘substance’ is the classifier of ‘secretion’ according to the above rule, but the use of ‘substance’ twice in the definition provides potential for confusion. According to the definition of ‘secretion’ above, we can make the following statement: (A) a secretion is a substance1 made up of substances2.

The difficulty seems to lie in SUBSTANCE being polysemic (Palmer 1981: 100), a fact apparently proven by its having two definitions. This implies that SUBSTANCE1 represents one of the given senses of ‘substance’ whilst SUBSTANCE2 represents the other, but neither fits with sense (1), as both are a specific type of matter. Therefore, both must be the concept in sense (2); but if SUBSTANCE1 and SUBSTANCE2 do have the same sense, statement (A) has no useful meaning, and for it to do so SUBSTANCE requires an additional sense. The solution is provided in the definition of ‘secretion’: SUBSTANCE1 is distinguished from SUBSTANCE2 by the addition of ‘simple’ to the latter. In this way it can be seen that SUBSTANCE1 refers to sense (2), whereas SUBSTANCE2 refers to a different sense that is related to, but more specific than, (2). To avoid such confusion, replacing SUBSTANCE2 with a different lexeme could prove useful, e.g., COMPOUND, although this is not necessary so long as we understand that SUBSTANCE is polysemic and we know which sense each occurrence refers to. As ‘substance1’ has the sense (2) in the definition, we shall refer to it as ‘substance (2)’, and it is this sense that is the classifier for ‘secretion’.
The definition provided for ‘substance (2)’ makes identifying the classifier here straightforward, as it begins by telling us that it is a “specific type of matter” (my emphasis), which is the central criterion for hyponymy. So, given that ‘matter’ is the classifier for ‘substance (2)’, we can now look for the next classifier in the chain. It could be assumed that the brevity of the definition makes this task even simpler; however, the definition is a “consists of” statement, which rules out any concepts it contains as classifiers. It is thus the case that not all concepts have a superordinate concept.

As such we can say that ‘matter’ sits at the top of the hyponymy chain and is the broadest sense of ‘colostrum’. Given this information we can now represent all of the relationships above in the following diagram:

Fig. 1) Initial hyponymy chain for ‘colostrum’.

This chain is based solely on the definitions given above; however, the claim can be made that this diagram does not contain a complete set of classifiers for ‘colostrum’. There are facts about ‘secretion’ that are not contained in ‘substance (2)’ but that cannot be considered as unique to it, in particular those about its relationship with organisms and organic matter.

This claim is based on the fact, as given in the definition, that ‘secretion’ is a substance particular to cells, which are the constituent parts of an organism. All of this information is unrepresented within the chain as it stands, because the relationship ‘secretion’ has with ‘cell’ is not due to a shared nature or type. When the hyponymy test is applied the mismatch is more evident: !a secretion is a type of cell. This does not deny that the two are related, only that they are not the same kind of thing, so an alternative way must be found of including and representing this relationship. As ‘cell’ is the missing concept, there must be some sense it shares with ‘secretion’. According to my definition of ‘cell’, many together make up an organism, and because any substance that is a ‘secretion’ is the product of a cell, it can also be considered the product of an organism. We can go a step further and state that both are types of substance particular to organisms, which allows the statement: a ‘secretion’ is a ‘substance particular to organisms’.
This can be further refined when the concepts ‘glandular’ and ‘blood’ are considered, as these relate specifically to ‘body’, not just to any organism in general. We can thus replace ‘organism’ and instead state that a ‘secretion’ is a ‘substance particular to a body’ or, more concisely, that it is a ‘bodily substance’. A second gap exists between ‘bodily substance’ and ‘substance’ for the same reason as above: arguably, a ‘bodily substance’ has characteristics shared with other types of particular substance that together constitute a more general type of substance.

As mentioned above, ‘organism’ bears a relation to ‘organic material’ in that all of the substances of which an organism is composed are organic. Given that a body is a kind of organism, any bodily substance must also be organic; but not all organic material is of the body, hence ‘organic material’ is a classifier of ‘bodily substance’. These new facts can be added to Fig. 1) to provide a more complete sense network:

Fig. 2) Full hyponymy chain for ‘colostrum’.
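As a way of making the chain concrete, the hierarchy of Fig. 2 can be written down as a simple data structure in which each sense points to its classifier, and the hyponymy test (‘all S1 are a type of C’) becomes a walk up the chain. This is only an illustrative sketch; the labels are informal names for the senses discussed above.

```python
# A minimal sketch of the hyponymy chain in Fig. 2: each sense maps
# to its classifier (its superordinate concept), with 'matter' at the top.
classifier = {
    "colostrum": "secretion",
    "secretion": "bodily substance",
    "bodily substance": "organic material",
    "organic material": "substance (2)",
    "substance (2)": "matter",
    "matter": None,  # no superordinate concept: top of the chain
}

def is_hyponym_of(sense, candidate):
    """True iff `candidate` appears somewhere above `sense` in the chain."""
    parent = classifier.get(sense)
    while parent is not None:
        if parent == candidate:
            return True
        parent = classifier.get(parent)
    return False

assert is_hyponym_of("colostrum", "matter")      # colostrum is a type of matter
assert not is_hyponym_of("matter", "colostrum")  # but not all matter is colostrum
```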
When considering potential co-hyponyms there are two criteria that must be met: the co-hyponyms must share most if not all of the sense of the shared classifier, but they must be differentiated by at least one distinguisher (Hudson 1995: 27). Each of the co-hyponyms in Fig. 3) meets these criteria, but this is not to imply that it is a simple task. Take ‘matter’ and ‘substance (1)’: the two could initially be considered to be co-hyponyms. This, however, is not the case. Essentially, the definitions for ‘substance1’ and ‘matter’ are the same: we could give a definition of matter as ‘that of which a thing consists’ because CONSISTS OF and MAKES UP have the same sense. Nor does there appear to be any fact about either concept that serves to differentiate them, so we must accept that rather than ‘matter’ and ‘substance (1)’ bearing a hyponymic relationship, they are actually synonyms. As such, SUBSTANCE (1) is nothing more than an alternative lexeme that can be used to represent ‘matter’ and so can be omitted from the network. Fig. 3) shows that although many of the co-hyponyms do not bear a direct relation to ‘colostrum’, they are part of a conceptual network that illustrates how senses are related.

It also displays the fact that the further up the chain a concept is, the broader the range of its hyponyms, because the sense becomes more generalised at each level. Furthermore, it also shows how concepts can share multiple classifiers and hyponyms.

Fig. 3) Hyponymy network for ‘colostrum’.

Distinguishers can be concise or generalised, providing they serve as differentiators between the senses. When selecting appropriate facts to include, the notion of prototypes should be accounted for, in that any potential distinguisher should ideally describe a prototypical referent of the given sense (op. cit. 20). Take ‘glandular’: it appears in the definition of ‘secretion’ but has been omitted from the network. This is because it is not a prototypical characteristic, in that not even the majority of secretions are from glandular cells; it is only provided as an example of the kind of cell involved. A further difficulty in selecting distinguishers is deciding what kind of information to include. Definitive information serves to provide the minimum data needed to clarify a concept, whilst encyclopaedic information attempts to provide all of the facts about a concept.

The danger with the latter is that information may be included that does not serve to differentiate that concept from another. I would argue that both kinds of information should be included, provided that each fact is part of the sense iff that fact is relevant to the function of differentiation. Fig. 4) includes information of both kinds and, although I have removed the referent and lexeme classifier for the sake of clarity, it can be considered as the most complete network of senses that relate to ‘colostrum’.

Fig. 4) Complete hyponymy network for ‘colostrum’.

Bibliography

Hudson, R. (1995). Word Meaning. Padstow: Routledge.
Palmer, F. R. (1981). Semantics. Bath: Cambridge University Press.
Stevenson, A. (ed.) (2007). Shorter Oxford English Dictionary (6th edition). Italy: Oxford University Press.

Word count: 1693 not including diagrams; 1799 with diagrams.

[1] I have used “ ” for quotations rather than ‘ ’ to prevent confusion between quotes and senses.


Convolutional Neural Network

Convolutional Neural Network: A Boon for Deep Facial Recognition in Biometrics

Vishalakshi Rituraj, Research Scholar, PhD (CS), Magadh University, Bodhgaya. Email: [email protected]
Krishna Singh, Associate Prof., Mathematics Dept., A. N. College, Patna.

Abstract: Today, biometric recognition systems are gaining much acceptance and popularity due to their wide application area. They are considered to be more secure than traditional password-based methods. Research is being done to improve biometric security in order to tackle the risks and challenges from the surroundings. Artificial Intelligence has played a significant role in biometric security. The Convolutional Neural Network (CNN), a member of the AI family designed to work a little like the human brain (though not exactly), handles the complexity and variations in facial images very effectively. This paper focuses on Artificial Intelligence, Machine Learning, Deep Learning, and how a CNN carries out facial detection.

Keywords: Biometrics, neural network, learning, convolution, neurons, pattern recognition.

1) Introduction

The increasing demand for technology in each and every field of our lives has raised the risk to data security in parallel. From very ancient times, man has put his best effort into keeping his things secure.

But today, in this digital world, we face more problems due to impostors and other types of security hacks. Besides this, curious human nature has always been trying to do something new and to cross predefined boundaries. Intelligence is an inborn human quality, but nowadays technology has made machines think and behave like us to some extent. This concept of man-made intelligence, created by rigorous use of complex mathematical operations and searching algorithms, is known as Artificial Intelligence (AI). When we saw the AI portrayed in the Hollywood movie TERMINATOR, we could not even imagine the concept of such a smart machine that could handle different situations. But now it seems the impossible is becoming possible thanks to AI, as it has opened the door to a completely new world of opportunities. Artificial Intelligence is a branch of computer science aiming to make a computer, robot, or piece of software think intelligently, in the same manner that intelligent humans think, and it has proved very useful where traditional algorithmic solutions do not work well. We use AI-based applications everywhere in our day-to-day lives, such as spam filters in Gmail, plagiarism checkers, Google's intelligent predictions in web search, suggestions on Facebook and YouTube, and many more. The main purpose of designing an AI system is to cover areas such as the following:

- Planning
- Learning
- Problem solving
- Pattern recognition
- Speech/facial recognition
- Natural language processing
- Creativity, and many more.
Neural networks and deep learning, a branch of AI, currently provide the best methods for solving many problems associated with biometric authentication. Biometrics is a novel technique for personal authentication based either on physical attributes (fingerprint, iris, face, palm, hand, DNA, etc.) or behavioral ones (speech, signature, keystroke, etc.). As we all know, our face is one of the wonderful creations of God, and the unique diversity among faces helps us differentiate one another. Facial recognition is the fastest growing field because a large number of applications are adopting it. Recently, Apple launched its face-recognition-equipped iPhone X on 12 Sept 2017, and it is claimed that it can identify the face in the dark, or even when the owner has a different hairstyle or look. Apple says that the facial recognition cannot be spoofed by using a photograph or even a mask [1].

2) Application Areas of Facial Recognition

Facial biometric recognition is becoming popular due to its wide range of applications, and it can easily be deployed and integrated anywhere there is a modern high-definition camera. Some of the trending applications are:

- Many electronic devices are integrated with face biometrics to eliminate the need for passwords, thus providing an enhanced security and access method.
- Facebook's automatic facial detection feature recognizes our friends' faces with pretty good accuracy and makes suggestions based on it.
- Criminal identification has become simpler through better recognition of facial images from CCTV surveillance. It may reduce traffic-rule violations and road accidents.
- Some universities use a facial recognition system as a tool to monitor student attendance, so that the management cannot be fooled by students signing in on behalf of others.
- ESG Management School in Paris is using facial recognition software in its online classes to make sure students aren't slacking off. Using a software package called Nestor, the webcam on a student's computer analyzes eye movements and facial expressions to find out whether he or she is paying attention during video lectures. [2]

In this paper, we will focus on the need for facial recognition and how deep learning and neural networks have been the backbone of this technology.

3) Machine Learning (ML) and Deep Learning (DL)

Machine learning is considered a subset of AI; it uses statistical techniques and algorithms that make a machine capable of making decisions or predictions by learning from the given data and adapting through experience.
The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in the data and make better decisions in the future based on the examples we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and to adjust their actions accordingly [3]. Deep learning is a subset of machine learning in which a machine achieves a higher level of recognition accuracy; it aims to solve real-world problems like image recognition, sound recognition, space exploration, weather forecasting and many other automated applications. Here, the word 'deep' refers to the number of layers in the network used to accomplish a task. Deep learning methods use neural network architectures, modeled loosely on the neurons in the human brain, introducing the concept of the Artificial Neural Network (ANN).

4) Concept of the Artificial Neural Network in Problem Solving

Today, automated systems have made our lives much easier and have replaced man in some places. But when we talk about 'intelligence', man will always be superior to machines because of his god-gifted nervous system, which is composed of billions of neurons. These neurons are interconnected and pass signals to one another, which enables the entire system to identify, classify and analyze things. Taking inspiration from the biological neural network, the concept of the ANN came into existence. The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as "…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs." [4]
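To make the definition above concrete, the following minimal sketch implements one layer of such "simple, highly interconnected processing elements" in plain numpy; the weights, bias and inputs are arbitrary illustrative values, not part of any real system.

```python
import numpy as np

def layer(x, W, b):
    # each neuron sums its weighted inputs, adds a bias, and passes the
    # result through a nonlinear activation (here, the sigmoid function)
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

x = np.array([0.5, -1.2, 3.0])  # external inputs to the network
W = np.random.randn(4, 3)       # 4 neurons, each connected to all 3 inputs
b = np.zeros(4)                 # one bias per neuron

print(layer(x, W, b))           # the layer's response to the inputs
```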
Figure 1: A simple ANN structure. [5]

4.1) Types of ANN

(A) On the basis of topological arrangement, there are two types of ANN:

a) Feed-Forward Network: In this type of ANN, data flows in only one direction through the different layers, and no layer is fed with a signal from the backward direction. This network has no feedback loops: the output of one layer simply becomes the input of the next. Practically, in a feed-forward network, a prediction is not affected by previous predictions.

Figure 2: A Feed-Forward Network [6]

b) Recurrent Neural Network (RNN): This type of neural network allows feedback loops by transmitting signals not only in one direction; data also flows in the backward direction, which is why it is sometimes known as a FeedBack ANN. In an RNN, each neuron is connected with the others, and how the flow of data is maintained is governed by its internal memory. The decision taken by an RNN is affected by the decision the network made at the previous step. That is, the current output of an RNN depends on both the previous output and the current input.

Figure 3: Recurrent Neural Network (RNN) [7]

(B) On the basis of layering, there are two types of ANN:

(a) Single-Layer Network: In this type of network, the neurons of the input layer are connected directly to the neurons of the output layer, with no layer in between.

(b) Multi-Layer Network: This type of ANN consists of one or more layers between the input and output layers, which are called hidden layers.
These hidden layers carry out computation by passing data from one layer to another: the output of one layer becomes the input of the next, and so on, until the final output is obtained from the output layer.

5) Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a deep learning architecture belonging to the category of multilayer, feed-forward artificial neural networks. One of the most promising areas where this technology is rapidly growing is security. It has been very helpful in monitoring suspicious banking transactions, as well as in video surveillance systems and CCTV.

Figure 4: A typical CNN architecture [8]

Besides the input and output layers, a CNN has many hidden layers in between, which may be classified as follows.

Convolutional Layer: This layer performs the core operations of training and forms the basis of the CNN. Each layer has a single set of weights for all neurons, and each neuron is responsible for processing a small part of the input space. Thus, the convolutional layer is just an image convolution of the previous layer, where the weights specify the convolution filter [9].

Pooling Layer: This layer, also known as the downsampling layer, is placed after the convolutional layer. The pooling layer is responsible for reducing the spatial size (width x height) of the input volume that will be passed to the next convolutional layer.

Fully Connected Layer: This layer connects each neuron of the previous layer with all the neurons of the next layer.

6) Facial Detection/Recognition Using CNN

A human brain sees multiple images in a day and is able to distinguish each one accurately, without realizing how the processing is done.
But the case is different with machines, because they have to recognize an image on the basis of learning. Facial detection is a method of identifying a person or object based on their unique features, and the process involves the detection and extraction of the face from the original image or video. After this, face recognition takes place, where various complex computer algorithms are used to recognize the face.
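The three layer types listed above can be stacked into a working model in a few lines. The sketch below uses PyTorch for illustration; the layer sizes, input resolution and number of output classes are invented for the example and are not taken from any particular face recognition system.

```python
import torch
import torch.nn as nn

# A minimal CNN with the layer types described above: convolution,
# pooling, and a final fully connected layer.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer: learned filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: halves width and height
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 10),                   # fully connected layer: 10 hypothetical classes
)

x = torch.randn(1, 1, 32, 32)   # one 32x32 grayscale face crop
print(model(x).shape)           # torch.Size([1, 10])
```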
Here, we will walk through the entire process of face detection and recognition. A face recognition system involves two phases.

(I) Enrollment Phase

Face Detection: In this phase, several pictures of the person whom the system should recognize as "known" are captured, with different facial expressions and head positions.

Feature Extraction: In this step, different feature measures that can better describe a human face are applied. Different algorithms, such as Principal Component Analysis (PCA), Haar features and Local Binary Patterns (LBP), are available for facial measurement. On the basis of these measurements, the CNN is trained for future use.

Storing in a Database: All the extracted features are stored in a database so that they can be used later in the identification process.

(II) Recognition Phase

Figure 5: Architecture of a face recognition system (image → face detection → pre-processing → feature extraction → face recognition → verification/identification) [10]

Face Detection: When an image is submitted for identification, face detection algorithms check whether it matches the images captured and stored in the database.

Pre-processing: Pre-processing is necessary to make the training phase easier and smoother.
The collected face images or video frames are passed through the pre-processing phase to eliminate noise, blur, shadows, lighting and other unwanted factors. The resulting smooth image is passed on to the feature extraction phase.

Feature Extraction: After the pre-processing phase, feature extraction is carried out by the CNN that was trained during the enrollment phase.

Recognition: This is the last step, where a suitable classifier, such as nearest neighbor, a Bayesian classifier or a Euclidean distance classifier, is chosen. This classifier compares the feature vectors stored in the database with the query feature vector, and finally the best-matched face image is returned as the recognition output.
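A minimal sketch of this final matching step is shown below, using the Euclidean distance classifier named above. The enrolled feature vectors and the threshold are invented for illustration; in practice they would come from the trained CNN of the enrollment phase.

```python
import numpy as np

# Hypothetical database of enrolled feature vectors (from the CNN).
database = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def recognize(query, threshold=0.5):
    # Euclidean distance classifier: pick the closest enrolled vector,
    # and reject the match if even the closest one is too far away.
    name, dist = min(
        ((n, np.linalg.norm(query - v)) for n, v in database.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < threshold else "unknown"

print(recognize(np.array([0.85, 0.15, 0.35])))  # -> alice
```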
7) Conclusion

Biometric verification/authentication is going to be deployed everywhere, from government to private organizations, in the coming days. In this paper, we studied the relations among AI, ML, DL, ANN and CNN. We also demonstrated the way a CNN carries out facial detection with improved accuracy. The field of AI has a wide spectrum and is open to researchers, and it promises better results in biometric security in the future.
References

[1] Anick Jesdanun, "You can stymie the iPhone X Face ID – but it takes some work", https://phys.org/news/2017-10-stymie-iphone-id-.html
[2] "Entrepreneur India", https://www.entrepreneur.com/slideshow/280493#2
[3] Luca Scagliarini and Marco Varone, "What is Machine Learning? A definition", http://www.expertsystem.com/machine-learning-definition/
[4] "Artificial Intelligence – Neural Networks", https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm
[5] "Artificial neural network", https://en.wikipedia.org/wiki/Artificial_neural_network
[6] "Artificial Intelligence – Neural Networks", https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm
[7] "Artificial Intelligence – Neural Networks", https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm
[8] "Convolutional neural network", https://en.wikipedia.org/wiki/Convolutional_neural_network
[9] "Convolutional Neural Networks", http://andrew.gibiansky.com/blog/machine-learning/convolutional-neural-networks/
[10] Manisha M. Kasar, Debnath Bhattacharyya and Tai-hoon Kim, "Face Recognition Using Neural Network: A Review", International Journal of Security and Its Applications, Vol. 10, No. 3 (2016), pp. 81-100.


Cellphones and Digital Networks

Cell phones have been around for nearly 15 years and are now everywhere you look. Over a quarter of Americans and half of Europeans own cell phones, and the numbers have been increasing exponentially. With the continuing increase in technology, cell phones have become smaller and cheaper, and thanks to the move from analog to digital, the calls are much clearer. They offer a great amount of convenience and can be very economical for the busy businessman on the go. Advancements in cell phones are always being made, giving a clearer sound and a lighter feel, as well as a longer battery life.

The cell phone industry has been one of the fastest growing in the world. The electronics are fairly simple, but they are so small that they are truly an engineering marvel. This paper will discuss in depth the many different components of the average cell phone and talk about how it converts your voice into something that can be sent through a digital network. The paper will also look at how the inner workings allow a phone to act as a microcomputer, with Internet access, address books, and even games.

Finally, it will review the many exciting ideas for this growing market, look to the future of the industry, and consider how the industry plans to overcome various limiting factors. Alexander Graham Bell invented the telephone in 1876; 18 years later, Guglielmo Marconi created the first radio. It was only natural that these two great technologies would eventually be combined to create the cellular craze. In the '80s, few people used radiophones. These phones were the precursor to cellular, but they had several limiting factors preventing them from ever becoming a major part of everyday society.

In the radio telephone system, there was one central antenna tower per major city, with no more than 25 channels available on that tower. Each phone needed a powerful transmitter, big enough to transmit 40 or 50 miles. The lack of channels also meant that not many people could use radiotelephones. With the current cellular system, any non-adjacent cells can use the same frequency, so the number of phones that can be used is nearly limitless. The cells also mean that each phone does not need a strong transmitter, so the phone can be a lot smaller.

With the innovation of digital phones, many great features became available, such as caller ID, Internet access, and several others. It also meant that the phone would need a microprocessor to convert from analog to digital. This complicated the circuitry, but with the new technology available the industry was able to make the phone as small as possible; the only restrictions on size became the user-input devices and the screen.

Usefulness of the Digital Cell Phone
The digital cellular phone offers many advantages to today's society. The conveniences it offers over simply not having one are obvious, and they vary from person to person, but there are many advantages over other types of phones as well. The cellular phone not only allows people to communicate with others while they are on the go, it also offers many other features to help people. With the services that digital provides, people can access email and find information almost anywhere in the world for a reasonable fee.

In the future, as the integration of phones and computers grows, people will be able to access tutorials in the field and use them to communicate with specialists, saving a great amount of time for many researchers. Today's digital cell phones, such as the one shown in Appendix C figure 1, can process millions of calculations per second in order to compress and decompress the voice stream. To do this, each phone is equipped with a circuit board that contains many different chips. The circuit board of a common phone is shown in Appendix C figure 2.

Two chips described earlier are the analog-to-digital and digital-to-analog conversion chips, which translate the outgoing audio signal from analog to digital and the incoming signal from digital back to analog. There is also a Digital Signal Processor, a highly customized processor designed to perform signal-manipulation calculations at high speed. The microprocessor controls the keyboard and display, deals with command and control signaling with the base station, and coordinates the rest of the functions on the board.
This microprocessor is as powerful as the supercomputers of the '70s that took up whole rooms, but it is now the size of a finger. Using its arithmetic/logic unit, or ALU, it can perform all the mathematical operations that run many of today's phone features. It is also responsible for the transfer of data throughout the phone, and it makes decisions and then runs new sets of instructions. In Appendix C figure 3 a very simple microprocessor is shown. Cell phones use microprocessors that are much more complex, but they use the same idea.

The ROM and flash memory chips provide storage for the phone's operating system and customizable features, such as the directory and various simple games (Appendix C figure 4). The RF and power section handles power management and recharging, and also deals with the hundreds of FM channels. Finally, the radio frequency amplifiers handle signals going in and out of the antenna; the radio frequency amplifier is the same device you would find in your car's radio. The display has grown considerably in size as the number of features offered by cell phones has increased.

Most phones currently available offer built-in phone directories, calculators and even games. In some new products that will be discussed later, cell phones double as PDAs, offering very large screens and all of the benefits you would find in today's handheld computers. The display is a liquid crystal display (LCD), made up of thousands of tiny crystals with two possible colors. Manufacturers have recently announced that they will be offering color screens on some new phones that work like the display of a laptop computer.
Very small speakers and microphones, about the size of a dime, amplify the analog waves. These devices are just like those in a portable radio and the microphones used on television talk shows; both are wired to the microprocessor. In order for digital cell phones to take advantage of the added capacity and clearer quality, they must convert your voice into binary information; that is, they must break it down into 1's and 0's. The reason this is so advantageous is that, unlike analog, digital is either on or off, 1 or 0, rather than oscillating between the two.

For the conversion, the device must first record an analog wave, such as the one in Appendix B figure 1. To achieve the highest fidelity possible, it records numbers that represent the wave, instead of the wave itself, as represented in Appendix B figure 2. This process is performed by the cell phone's analog-to-digital converter, a device that is also found in a CD player. On the other end, a separate digital-to-analog converter is used for playback. The quality of the transfer depends on the sampling rate, which controls how many samples are taken per second, and on the sampling precision.

The precision controls how many different levels are possible in each sample. The better these two are, the clearer the sound, but higher values demand a faster processor and a greater amount of data transfer. The benefits are shown in Appendix B figure 3. Most common digital cellular systems use Frequency Shift Keying (FSK) to send data back and forth. This system uses one frequency for 1's and another for 0's, rapidly switching between the two. This requires optimal modulation and encoding schemes for recording, compressing, sending, and then decoding without loss of quality.
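The two ideas just described, sampling/quantization and FSK, can be sketched in a few lines of numpy. The sampling rate, bit precision, tone frequencies and baud rate below are illustrative values only, not the parameters of any actual cellular standard.

```python
import numpy as np

rate, bits = 8000, 8                   # samples per second, sampling precision
t = np.arange(0, 0.01, 1.0 / rate)     # 10 ms of time axis
analog = np.sin(2 * np.pi * 440 * t)   # the analog wave to be recorded

# Quantization: precision in bits determines the number of levels.
levels = 2 ** bits
samples = np.round((analog + 1) / 2 * (levels - 1)).astype(int)

# Frequency Shift Keying: one tone for 1's, another for 0's.
f0, f1 = 1200, 2200                    # Hz

def fsk(bitstream, baud=1200):
    out = []
    for b in bitstream:
        tt = np.arange(0, 1.0 / baud, 1.0 / rate)
        out.append(np.sin(2 * np.pi * (f1 if b else f0) * tt))
    return np.concatenate(out)

waveform = fsk([1, 0, 1, 1, 0])        # the bits become alternating tones
```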
Because of this, digital phones contain an amazing amount of processing power. The cellular network is a web of towers covering areas generally thought of as hexagonal cells, as shown in Appendix A figure 1. The genius of the cellular system is that, because cell phones and base stations use low-power transmitters, the same frequencies can be reused in non-adjacent cells. Each cell is about 10 square miles and has a base station consisting of a tower and a small building containing the radio equipment. As more people join the cellular world, companies are quickly adding more towers to accommodate them.

Every digital carrier is assigned different frequencies; an average carrier may get about 2,400 frequencies per city, roughly three times the number available with analog. The reason more channels are available is that digital data can be compressed and manipulated much more easily than analog. Each tower uses one seventh of the available frequencies, so none of the six surrounding towers interfere. The cell phone uses two frequencies per call, called a duplex channel. The duplex channel allows one channel to be used for listening and the other for talking, so unlike a CB radio or walkie-talkie, both people can talk at the same time.
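As a rough consistency check on these figures: 2,400 frequencies divided across a seven-cell reuse pattern gives about 2400 / 7 ≈ 343 frequencies per cell, and since a duplex call consumes two frequencies, that supports roughly 343 / 2 ≈ 171 simultaneous calls per cell, in line with the figure of about 168 quoted below.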
This system currently allows about 168 people to talk in each cell for each carrier. The cellular approach requires a large number of base stations in a city of any size, but because so many people use cell phones, the cost per user remains low. Every cell phone has a special code associated with it, called an electronic serial number (ESN): a unique 32-bit number programmed into the phone when it is manufactured. When the phone is activated, a five-digit code called a system identification code (SID), assigned to each carrier by the FCC, is imprinted in the phone's memory.

When you first power up a cell phone, it checks a control channel to find the SID. If the phone cannot find any control channels to listen to, it knows it is out of range and displays a "no service" message. After finding a SID, the phone checks whether it matches the SID programmed into the phone; if it does not match, the phone knows it is roaming. The central location where the cell phone is registered keeps track of the cell that your phone is in, so that it can find you when someone calls the phone. When the phone is turned on, it sends its ESN to the control channel.

If the phone goes out of range, it takes a short while to locate it when it comes back into service. This can cause lost calls even though the phone is in service, but the problem is very temporary. When someone does call your phone, the call is sent to the central office, called the Mobile Telephone Switching Office (MTSO). This office continually communicates with the cell phone: it sends and receives the calls, and tells the phone which frequencies to use. This is all done through the control channel, so it does not impair any calls.
As you move toward the edge of your cell, the cell's tower sees that your signal strength is diminishing. At the same time, the base station in the cell you are moving toward, which is listening and measuring signal strength on all frequencies, sees your phone's signal strength increasing. The two base stations coordinate through the MTSO, and at some point your phone gets a signal on a control channel telling it to change frequencies. There are three common technologies used by cell phone providers: Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA).

In FDMA, every call is carried on a separate frequency. FDMA separates the spectrum into distinct voice channels by splitting it into uniform chunks of bandwidth. This is very similar to the way radio stations operate: each station is assigned a signal at a different frequency within the available band. FDMA is used mainly for analog transmission, so it is slowly being phased out. It is capable of carrying digital information, but it is not considered an efficient method for digital transmission.

Time Division Multiple Access gives each call a certain amount of time on a frequency. The Electronics Industry Alliance and the Telecommunications Industry Association use TDMA. In TDMA, a narrow band that is 30 kHz wide and 6.7 milliseconds long is split time-wise into three time slots (Appendix D, figure 1). Each conversation gets the radio frequency for one-third of the time. This is possible because voice data that has been converted to digital information is compressed so that it takes up significantly less transmission space. Therefore, TDMA has three times the capacity of an analog system using the same number of channels.
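The slot arithmetic described above is easy to sketch: one 30 kHz channel, a 6.7 ms frame, three slots, one conversation per slot. The frame and slot values follow the text; the code itself is only an illustration of the scheduling idea.

```python
FRAME_MS = 6.7   # one TDMA frame, as described above
SLOTS = 3        # three conversations share the frame

def slot_schedule(frame_index):
    """Return (conversation, start_ms, end_ms) for each slot of a frame."""
    width = FRAME_MS / SLOTS
    base = frame_index * FRAME_MS
    return [(conv, base + conv * width, base + (conv + 1) * width)
            for conv in range(SLOTS)]

# Each conversation gets the radio frequency for one-third of the time.
for conv, start, end in slot_schedule(0):
    print(f"conversation {conv}: {start:.2f} ms to {end:.2f} ms")
```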
TDMA systems operate in either the 800 MHz or 1900 MHz frequency bands. Some phones have the ability to switch between bands; this function is called Dual-Band, and it is important when traveling between areas that use different bands. TDMA is also the access technology for the Global System for Mobile communications (GSM). GSM uses different frequencies in different areas of the world and is not compatible with other TDMA systems. GSM operates in the 900 MHz and 1800 MHz bands in Europe and Asia, and in the 1900 MHz band in the United States. GSM systems use encryption to make phone calls more secure.

GSM is the international standard in Europe, Australia and much of Asia and Africa. In covered areas, cell phone users can buy one phone that will work anywhere the standard is supported. To connect to specific service providers in different countries, GSM users simply switch SIM cards. SIM cards are small removable cards that slip in and out of GSM cell phones. They store all the connection data and identification numbers needed to access a particular wireless service provider. Unfortunately, the 1900 MHz GSM phones used in the United States are not compatible with the international system.


Analysis of Different Modern Networks

CIRCUIT SWITCHING AND PACKET SWITCHING

1) INTRODUCTION

Telecommunication networks carry information signals among entities which are geographically far apart. The communication switching system enables universal connectivity. Switches can be a valuable asset to networking[1]: overall, they can increase the capacity and speed of our network. Every time we access the Internet or another computer network outside our immediate location, our messages are sent through a maze of transmission media and connection devices. The mechanism for moving information between different computer networks and network segments is called switching[2].

Figure 1: Switched network

Long-distance transmission is typically done over a network of switched nodes; the nodes are not concerned with the content of the data. A collection of nodes and connections is a communications network, and data is routed by being switched from node to node. Nodes may connect only to other nodes, or to stations as well as other nodes, and node-to-node links are usually multiplexed. However, switching should not be seen as a cure-all for network issues. There are two different switching technologies: 1) circuit switching and 2) packet switching.

1. Circuit Switching

Circuit switching was the first switching technique used in communication networks, because analog signals are easy to carry over it. A circuit-switching network establishes a fixed-bandwidth channel between nodes before the users may communicate, as if the nodes were physically connected with an electrical circuit. The bit delay is constant during the connection, as opposed to packet switching, where packet queues may cause varying delay.

In circuit switching, the transmission medium is typically divided into channels using Frequency Division Multiplexing (FDM), Time Division Multiplexing (TDM), or Code Division Multiplexing (CDM). A circuit is a string of concatenated channels from the source to the destination that carries an information flow. To establish the circuits, a signaling mechanism is used. This signaling carries only control information, and it is considered overhead. Since all decisions are taken by the signaling process, the signaling mechanism is the most complex part of circuit switching.

A circuit cannot be used by other callers until it is released and a new connection is set up. Even if no communication is taking place on a dedicated circuit, the channel remains unavailable to other users. Channels that are available for new calls are said to be idle. The telephone network is an example of a circuit-switching system. Virtual circuit switching is a packet-switching technology that may emulate circuit switching, in the sense that the connection is established before any packets are transferred and packets are delivered in order.

Unlike with packet-switched networks, we cannot just send a 'packet' to the destination: we need to establish, and later terminate, the connection. We need some way of transmitting control information, and we can do this either in band (on the same channel we use for data) or out of band (on a separate dedicated channel). Phone networks used in-band signaling a while ago: switching and other functionality could be controlled by playing tones into the telephone. Today, in-band signaling is considered insecure and is not used except for compatibility with old systems[3].

2. Packet Switching

Packet switching is a communications paradigm in which packets are routed between nodes over data links shared with other traffic. In packet-based networks, the message gets broken into small data packets. These packets are sent out from the computer, and they travel around the network seeking out the most efficient route as circuits become available. This does not necessarily mean that they seek out the shortest route; each packet may take a different route from the others. Each packet contains a 'header' with information necessary for routing the packet from source to destination. The header also describes the sequence for reassembly at the destination computer, so that the packets are put back into the correct order. Each packet in a data stream is independent.

To understand packet switching, we need to know what a packet is. The Internet Protocol (IP), just like many other protocols, breaks data into chunks and wraps the chunks into structures called packets. Each packet contains, along with the data load, information about the IP addresses of the source and destination nodes, sequence numbers and some other control information.
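As a concrete picture of the fields just listed, the sketch below models a packet as a small Python structure. The field names and example addresses are invented for illustration; real IP headers carry additional fields (TTL, checksum, flags, and so on).

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # source IP address
    dst: str        # destination IP address
    seq: int        # sequence number, used for reassembly at the destination
    payload: bytes  # the chunk of user data (the "data load")

p = Packet(src="192.0.2.1", dst="198.51.100.7", seq=0, payload=b"hello")
print(p)
```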
A packet may also be called a segment or datagram. Once they reach their destination, the packets are reassembled to make up the original data. It is therefore obvious that, to transmit data in packets, the data must be digital. Packet switching can broadly be divided into two main categories: the virtual circuit approach and the datagram approach. In the virtual circuit approach to packet switching, the relationship between all packets belonging to a message or session is preserved. A single route is chosen between the sender and the receiver at the beginning of the session. When the data are sent, all packets of the transmission travel one after another along that route. Wide area networks use the virtual circuit approach to packet switching. The virtual circuit approach needs a call setup to establish a virtual circuit between the source and destination, and a call teardown to delete it. After setup, routing takes place based on an identifier known as the virtual circuit identifier. This approach is used in WANs, frame relay and ATM.

In the other approach to packet switching, the datagram approach, each packet is treated independently of all others. Even if one packet is just a piece of a multi-packet transmission, the network treats it as though it existed alone. Packets in this approach are known as datagrams. The Internet has chosen the datagram approach to switching in the network layer. It uses the universal addresses defined in the network layer to route packets from source to destination. In packet switching, the packets are sent towards the destination independently of one another; each packet has to find its own route. There is no predetermined path: the decision as to which node to hop to next is taken only when a node is reached. Each packet finds its way using the information it carries, such as the source and destination IP addresses[4].

2) HISTORY OF CIRCUIT SWITCHING AND PACKET SWITCHING

* Evolution of Circuit Switching

Switches are used to build transmission paths between telephone sets on a flexible basis. Without switches, each telephone set would require a direct, dedicated circuit to every other telephone set in order to be able to communicate: a full-mesh physical topology network. Such a full-mesh network is clearly resource-intensive and impractical, even impossible, as early experience proved.
Circuit switching was developed for voice communications. Contemporary circuit switches provide continuous access to logical channels over high-capacity physical circuits for the duration of the conversation. In January 1878, the first telephone switch went into operation in New Haven, Connecticut. Switching technology has advanced drastically over the intervening decades, yet the basic function has remained the same: interconnect users of telephones by creating circuits between them. Every telephone has a line, or circuit, that connects physically to a telephone switch.

In the simple case where both the person making the call and the person being called are connected to the same switch, the caller dials the number of the desired person, the switch checks to see if the line is available, and, if it is, the two lines are interconnected by the switch. The connection is maintained until one person hangs up, at which time the switch terminates the connection, freeing both lines for other calls. Three characteristics of this type of switching, called "circuit switching", are important.

First, before the two parties can talk, the circuit between them has to be created, and it takes time for a switch to check whether a connection can be made and then to make it. Second, when a connection has been made, it is a dedicated connection: no other party can reach either party of a dedicated connection until that connection has ended. Third, since switches are very expensive, one accounting policy telephone companies implemented to recover their investment was to institute a minimum charge for every telephone call, generally three minutes.

For voice calls that lasted many minutes, a minimum charge did not represent a problem. But communications between computers often last less than a second, much less minutes. It was difficult to imagine how circuit switching could work efficiently for computer communications when such a system took minutes to make a connection, created dedicated connections so only one person, or party, could be in connection with another party, and had a prohibitive cost structure. Although these issues were generally understood before the experiments of Roberts and Marill in 1965, they were once again strongly confirmed.
The experiments also made it abundantly clear that the problems confronting computer communications were not only with the circuit-switching architecture of the telephone system. Host operating system software of the day assumed there was only one Host and that all connecting devices were, as it were, "slaves". Hosts were not designed to recognize or interact with peer-level computers; the concept of peer-level computing did not yet exist. Thus, in interconnecting two computers, one had to be master and one slave. The problem only became worse if more than two computers wanted to interconnect and communicate.

Nevertheless, the problem of Host software was considered solvable if a suitable communication system could be designed and made to work. Fortunately, an inquisitive, innovative scientist, Paul Baran, had already begun exploring the problems of circuit switching in 1959. By 1962, he had made his concept of a message-based communication system publicly known. Independently, in 1965, an English scientist, Donald Davies, reached the same conclusions as Baran and would coin the technique's name: packet switching.

* Evolution of Packet Switching

The concept of packet switching had two independent beginnings, with Paul Baran and Donald Davies.
Leonard Kleinrock conducted early research and authored a book in 1961 in the related field of digital message switching, without explicitly using the concept of packets, and later played a leading role in the building and management of the world's first packet-switched network, the ARPANET. Baran developed the concept of packet switching during his research for the US Air Force into survivable communications networks, first published in 1962 and then expanded in a series of eleven papers titled "On Distributed Communications" in 1964. Baran's earlier paper described a general architecture for a large-scale, distributed, survivable communications network. His paper focused on three key ideas: 1) the use of a decentralized network with multiple paths between any two points, 2) dividing complete user messages into what he called message blocks (packets), and 3) delivery of these messages by store-and-forward switching.

Baran's study paved the way for Robert Taylor and J. C. R. Licklider, both wide-area network evangelists working at the Information Processing Techniques Office, and it also helped influence Lawrence Roberts to adopt the technology when Taylor put him in charge of development of the ARPANET. Baran's packet-switching work was similar to the research performed independently by Donald Davies at the National Physical Laboratory, UK. In 1965, Davies developed the concept of packet-switched networks and proposed development of a UK-wide network. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. At the 1967 ACM Symposium on Operating Systems Principles, Davies and Roberts brought the two groups together. Interestingly, Davies had chosen some of the same parameters for his original network design as Baran, such as a packet size of 1024 bits. Roberts and the ARPANET team took the name "packet switching" itself from Davies's work. In 1970, Davies helped build a packet-switched network called Mark I to serve the NPL in the UK. It was replaced by the Mark II in 1973, and remained in operation until 1986, influencing other packet communications research in the UK and Europe[5].

3) COMPARISON BETWEEN CIRCUIT AND PACKET SWITCHING
Circuit switching: In circuit switching, a message path, or data communication channel or circuit, is dedicated to an entire message block for the duration of its transmission. The entire bandwidth is dedicated to that message, and before any data transmission can take place, circuit initialisation and setup must be done to determine the availability of the link, as when making a voice call over a telephone line, or in a dial-up procedure, where you first need to establish that the line is free for use and then keep the line engaged throughout your time of use. The whole message travels through the same path, and the link is kept engaged while the block of the message is being relayed or transmitted.

In circuit switching, all of the data travels along a single dedicated path between the two terminals, whereas in datagram switching the data is divided into packets and each of these packets is treated independently, travelling along a different path, with the source and destination being the same. The circuit-switching concept is used in telephony networks, where a dedicated line is assigned to a particular connection and the connection is held for its whole duration. A considerable amount of bandwidth is wasted in this process, and at a time only one-way communication is possible. Circuit switching is done at the physical layer, whereas datagram switching is generally done at the network layer. Circuit switching requires resources to be reserved before the transmission of data, but datagram switching requires no such reservation.

Advantages:
1. Fixed delays, because of the dedicated circuit: no interference and no sharing.
2. Guaranteed continuous service, also because of the dedicated circuit.
3. The full bandwidth is guaranteed for the duration of the call.

Disadvantages:
1. It takes a relatively long time to set up the circuit.
2. It is difficult to support variable data rates, and it is not efficient for bursty traffic. The equipment may be unused for much of the call: if no data is being sent, the dedicated line still remains open.
3. During a crisis or disaster, the network may become unstable or unavailable.
4. It was primarily developed for voice traffic rather than data traffic.

Packet switching:
In packet switching, the block of data is split into small units, each with a sequence number attached for orderly identification within a given message block. These units are usually sent across the different available links or channels from one end point to the other, where they arrive at different times and must be assembled in the correct order, via the sequence numbers, to recover the original message without any data degradation resulting from the different transmission paths from source to destination.

Also, no single data channel is dedicated to any given message block in the course of transmission: units of many different messages can be multiplexed and then demultiplexed correctly at their different destinations, since there are codes to differentiate each unit of a message, resulting in no conflict at all. Packet switching thus splits messages into small units and transmits them to the destination over different paths, while at the same time keeping track of the units so that they can be properly reassembled into the original message.
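The split/shuffle/reassemble behaviour described above fits in a few lines. This is a toy sketch: the unit size is arbitrary, and shuffling stands in for packets arriving out of order over different paths.

```python
import random

def split(message: bytes, size: int):
    # attach a sequence number to each unit of the message block
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(units):
    # order the arriving units by sequence number, then join them
    return b"".join(chunk for _, chunk in sorted(units))

units = split(b"no data degradation occurs", size=5)
random.shuffle(units)  # units arrive at different times via different paths
assert reassemble(units) == b"no data degradation occurs"
```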
Packet switching is generally used in Internet data transmission: we send data without checking whether the link is free, as long as we are connected, and the information we send is split into smaller units and sent as packets, with each packet often switched through a different data channel and with no loss at the end. The main advantage of packet switching is that it permits "statistical multiplexing" on the communications lines. The packets from many different sources can share a line, allowing for very efficient use of the fixed capacity.
With current technology, packets are generally accepted onto the network on a first-come, first-served basis. If the network becomes overloaded, packets are delayed or discarded ("dropped")[6]. Advantages: 1. Since packets are typically short, the communication links between the nodes are only allocated to transferring a single message for a short period of time while transmitting each packet. Longer messages require a series of packets to be sent but do not require the link to be dedicated between the transmission of each packet.
The implication is that packets belonging to other messages may be sent between the packets of the message being sent from one node to another. This provides a much fairer sharing of the resources of each of the links. 2. The ability to do statistical multiplexing, which can exploit the inherent "burstiness" of many data applications and thereby share network resources more efficiently among multiple data streams, is a major advantage. 3. "Pipelining": this simultaneous use of communications links represents a gain in efficiency; the total delay for transmission across a packet network may be considerably less than for message switching, despite the inclusion of a header in each packet rather than in each message. Disadvantages: 1. Packets arriving in the wrong order. 2. Under heavy use there can be delay. 3. Protocols are needed for reliable transfer. 4. Not so good for some types of data streams; real-time video streams can lose frames due to the way packets arrive out of sequence[7].

4) PERFORMANCE ANALYSIS

Circuit Switching
In circuit switching, a unique connection is used to move data between the two end users[8]. Circuit-switched networks are most commonly portions of the ubiquitous telephone networks to which we are all accustomed. In these networks, which generally transmit voice or data, a private transmission path is established between any pair or group of users attempting to communicate, and is held as long as transmission is required.
Telephone networks are typically circuit switched, because voice traffic requires the consistent timing of a single, dedicated physical path to keep a constant delay on the circuit. Figure 2: Example of circuit switching. Figure 3: Public circuit switching network. Subscribers: the devices that attach to the network. Subscriber loop: the link between the subscriber and the network. Exchanges: the switching centers in the network. End office: the switching center that directly supports subscribers. Trunks: the branches between exchanges; they carry multiple voice-frequency circuits using either FDM or synchronous TDM.
Figure 4: Circuit establishment. Basic performance equation for a single link in a circuit-switched network: consider a system with N circuits on a single link, with customers arriving according to a Poisson process at rate λ customers per second, and with successful customers having a mean holding time of h seconds, distributed as a negative exponential distribution with parameter μ = 1/h. If a customer attempting a new call finds all the circuits busy, there are no waiting places, so we assume that the customer just goes away and forgets about making the call.
Define the state of the system by the random variable K, where K represents the number of customers currently in the system; K can take any integer value in the range 0 to N. With these assumptions, our model is simply a state-dependent queue, with arrival rate λ (independent of the state) and service rate iμ when the system is in state K = i. This is known as an M/M/N/N queue: Markovian arrivals, Markovian service time, N servers, and a maximum of N customers in the system. We can draw the following Markov chain diagram to represent the system.
When there are i customers, the service rate is iμ; this is because there are i customers, each with service rate μ, so the total service rate is iμ. Figure 5: Markov chain diagram. Under conditions of statistical equilibrium, the solution is the truncated Poisson distribution

p_i = (A^i / i!) / Σ_{j=0}^{N} (A^j / j!),   i = 0, 1, ..., N,

where A = λ/μ = λh is the offered traffic in erlangs. Observe that the result depends only on the traffic A, and not on the specific values of λ and μ.
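The probability of finding all N circuits busy (state i = N) is the well-known Erlang B blocking formula. A small Python sketch directly mirroring the expression above; the traffic and circuit values in the example are arbitrary:

from math import factorial

def erlang_b(A: float, N: int) -> float:
    # p_N: probability that all N circuits are busy, for offered traffic A = lambda/mu.
    denominator = sum(A**j / factorial(j) for j in range(N + 1))
    return (A**N / factorial(N)) / denominator

# Example: 7 erlangs of traffic offered to 10 circuits.
print(f"blocking probability: {erlang_b(7.0, 10):.4f}")

The direct factorial form mirrors the formula above; numerically robust implementations would use the standard Erlang B recurrence instead.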
To establish a path in circuit switching, three consecutive phases are required: 1. Connection establishment. 2. Data transfer. 3. Connection teardown. Elements of a circuit-switch node (Figure 6): * Digital Switch: provides a transparent signal path between any pair of attached devices. * Control Unit: establishes, maintains and tears down connections. * Network Interface: the functions and hardware needed to connect digital and analog terminals and trunk lines. Figure 6: Circuit switch elements.

Packet Switching
In packet switching, data are broken into packets of fixed or variable size, depending on the protocol used. The performance of packet switching is called best-effort performance.
If you transmit from sender to receiver, the network will do its best to get the packet to the other end as fast as possible, but there are no guarantees on how fast that packet will arrive. Figure 7: Example of packet switching. Packet switching is used to optimize the use of the channel capacity available in digital telecommunication networks such as computer networks, to minimize transmission latency (the time it takes for data to pass across the network), and to increase the robustness of communication. Protocol layers are introduced to break down the complexity of communications.
The top layer (layer 7) is the layer at user level. As the layers go down, they get increasingly primitive. Layer 1 is the most primitive, as it deals with the raw binary digits prepared for transmission to the end node. The seven layers of the Open Systems Interconnection model are shown in Table 1[7]:

1. Physical Layer: deals with the physical connection between nodes in the network.
2. Data Link Layer: maintains and optimises the actual connection.
3. Network Layer: deals with communication of data on a network.
4. Transport Layer: sequencing, error detection and optimisation of communication.
5. Session Layer: controls the communication between applications running on end nodes.
6. Presentation Layer: formats data and provides syntaxes for applications.
7. Application Layer: contains management functions.

Table 1: Layers of the Open Systems Interconnection model. Every packet contains some control information in its header, which is required for routing and other purposes. Figure 8: Packet data format. Initially, transmission time decreases as packet size is reduced. But as packet size is reduced further and the payload part of a packet becomes comparable to the control part, transmission time increases.
Figure 9: Variation of transmission time with packet size. As packet size is decreased, the transmission time reduces until the payload becomes comparable in size to the control information. There is a close relationship between packet size and transmission time, as shown in Figure 9. In this case it is assumed that there is a virtual circuit from station X to Y through nodes a and b. The time required for transmission decreases as each message is divided into 2 and then 5 packets; however, the transmission time increases if each message is divided into 10 packets[9].
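A back-of-the-envelope model of this trade-off: with store-and-forward relaying over equal-rate links (ignoring propagation and queueing delay), k packets pipeline across the hops, but each packet repeats the header overhead. All numbers below are hypothetical, chosen only to reproduce the "fastest at 5 packets" behaviour described above:

def transfer_time(msg_bits, k, header_bits, links, rate_bps):
    # Each of the k packets carries msg_bits/k payload plus a header.
    # Pipelining over store-and-forward links gives (k + links - 1)
    # packet transmission times in total.
    packet_bits = msg_bits / k + header_bits
    return (k + links - 1) * packet_bits / rate_bps

# X -> a -> b -> Y is 3 links; 9600-bit message, 960-bit header, 9600 bps links.
for k in (1, 2, 5, 10):
    print(f"{k:2d} packets: {transfer_time(9600, k, 960, 3, 9600):.2f} s")
# Prints 3.30, 2.40, 2.10, 2.40 s: dividing into 5 packets is fastest here.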
Packet switched networks allow any host to send data to any other host without reserving the circuit. Multiple paths between a pair of sender and receiver may exist in a packet switched network. One path is selected between source and destination. Whenever the sender has data to send, it converts them into packets and forwards them to the next computer or router. The router stores the packet until the output line is free; then the packet is transferred to the next computer or router (called a hop). This way, it moves to the destination hop by hop. All the packets belonging to a transmission may or may not take the same route.
The route of a packet is decided by network layer protocols. There are two approaches to packet switching: 1. datagram switching, and 2. virtual circuit switching. 1. Datagram Switching: Each packet is routed independently through the network; this is also called connectionless packet switching. Datagram packet switching sends each packet along the path that is optimal at the time the packet is sent. When a packet traverses the network, each intermediate station determines the next hop. Routers in the Internet are packet switches that operate in datagram mode.
Each packet may travel by a different path. Each different path will have a different total transmission delay (the number of hops in the path may be different, and the delay across each hop may change for different routes). Therefore, it is possible for the packets to arrive at the destination in a different order from the order in which they were sent[10]. Figure 10: Datagram packet switching Figure 11: Delay in datagram packet switching There are three primary types of datagram packet switches: * Store and forward: Buffers data until the entire packet is received and checked for errors.
This prevents corrupted packets from propagating throughout the network but increases switching delay. * Fragment free: Filters out most error packets but doesn’t necessarily prevent the propagation of errors throughout the network. It offers faster switching speeds and lower delay than store-and-forward mode. * Cut through: Does not filter errors; it switches packets at the highest throughput, offering the least forwarding delay. 2. Virtual Circuit Switching: Virtual circuit packet switching (VC-switching) is a packet switching technique which merges datagram packet switching and circuit switching to extract both of their advantages.
VC switching is a variation of datagram packet switching where packets flow on so-called logical circuits, for which no physical resources like frequencies or time slots are allocated, as shown in Figure 12. Each packet carries a circuit identifier, which is local to a link and updated by each switch on the path of the packet from its source to its destination[10]. A virtual circuit is defined by the sequence of mappings between a link taken by packets and the circuit identifier packets carry on this link. In VC switching, routing is performed at circuit establishment time, to keep packet forwarding fast.
Other advantages of VC switching include the traffic engineering capability of circuit switching and the resource usage efficiency of datagram packet switching. Nevertheless, a main issue of VC-switched networks is their behavior on a topology change. As opposed to datagram packet switched networks, which automatically recompute routing tables on a topology change like a link failure, in VC switching all virtual circuits that pass through a failed link are interrupted. Hence, rerouting in VC switching relies on traffic engineering techniques[6]. The identifier-rewriting mechanism is sketched below.
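A toy sketch of the identifier rewriting described above: each switch keeps a table mapping (incoming port, incoming VC id) to (outgoing port, outgoing VC id), so the identifier is meaningful only on one link. The ports, ids and switch names here are invented purely for illustration:

# Per-switch virtual-circuit tables, built at circuit establishment time.
switch_tables = {
    "S1": {("p0", 7): ("p2", 12)},
    "S2": {("p1", 12): ("p3", 4)},
}

def forward(switch, in_port, vc_id):
    # Constant-time table lookup: no routing decision is made per packet.
    return switch_tables[switch][(in_port, vc_id)]

out_port, new_id = forward("S1", "p0", 7)   # packet enters S1 carrying id 7
print(out_port, new_id)                     # ('p2', 12): id rewritten for the next link
print(forward("S2", "p1", new_id))          # ('p3', 4): rewritten again at S2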
Figure 12: Virtual circuit packet switching. Figure 13: Delay on packets in virtual circuit packet switching.

5) APPLICATION OF CIRCUIT AND PACKET SWITCHING

Circuit Switching
1. Plain Old Telephone Service (POTS): The plain old telephone system (POTS) is the largest circuit switched network. The original GSM network is also circuit switched. Prior to the existence of new types of networks, all communication systems had to be built based on the existing telecommunications facilities, which were largely oriented to what the common carriers refer to as plain old telephone service, known as POTS.
Consequently, even today, in order to use POTS for data communications, it is necessary to use a modem to convert the data to a form suitable for voice-transmission media. The data transmission rate that can be obtained over a POTS connection is typically less than 64 Kbps. These rates are adequate for text and audio transmission. However, they are not sufficient for good quality video transmission in real time. 2. Switched 56 Service: Switched 56 service is a dial-up digital service provided by local and long distance telephone companies. For a connection, a data service unit/channel service unit (DSU/CSU) is used instead of a modem.
Switched 56 service uses a 64 Kbps channel, but one bit per byte is used for signaling, leaving 56 Kbps for data. This service allows the transmission of information over one or two twisted cable pairs to multiple points at a data rate of 56 Kbps. 3. Integrated Services Digital Network (ISDN): The ISDN was designed in the 1980s to offer end-to-end digital connectivity, while providing the required QoS with data rates in the range of Kbps to Mbps over switched connections. In order to provide even higher data rates, the original ISDN was extended to broadband ISDN (BISDN) (Martin, 1985).
The ISDN services are provided to users as ISDN interfaces, each comprising a number of ISDN channels. Using 64 Kbps channels, called bearer or B channels, ISDN provides access to the digital network. ISDN provides a lower error rate compared to typical voiceband modems and a relatively high bandwidth data channel[11]. Packet Switching: 1. VOIP: It is becoming increasingly accepted to transmit delay-sensitive data through a packet switched network (rather than circuit switched). There are protocols that can create a virtually real-time environment, which, for voice conversations, is sufficient.
Voice over IP is essentially a voice signal encoded into a digital format, sent through a packet switched network (or possibly any other network) using the Internet Protocol (IP). Over recent years, standards have been developed and supported by major companies, including ITU-T H.323. VOIP has a long way to evolve before it is as widely used as circuit switched networks, but it is well on its way. 2. IPv6: The current protocol that is employed almost everywhere, IP (IPv4), has come to the end of its useful life. This is mainly because it has run out of addresses to uniquely identify every non-private computer in the world.
IPv6 has been designed to be more efficient than IPv4 and to solve the addressing problems that we face at present. IPv6 will use 128 bits to address nodes, which provides 2^128 possibilities (roughly 3.4 × 10^38). It will incorporate a special "option mechanism" to store optional headers in the transport layer (to maximize efficiency by reducing required space). Finally, IPv6 will have support for resource allocation, allowing packets to be part of a "traffic flow", which will provide better communication of data such as video/voice streams [VOIP].

6) CONCLUSION

In large networks there might be multiple paths linking sender and receiver.
Information may be switched as it travels through various communication channels. Data networks can be classified as using circuit switching or packet switching. Packet switching, which forms the basis of the Internet, is a form of statistical multiplexing in which senders divide messages into small packets. The switching centers receive the control signals, messages or conversations and forward them to the required destination, after necessary modification, like amplification, if needed. In computer communication, the switching technique used is known as packet switching or message switching (store and forward switching).
In telephone networks the switching method used is called circuit switching. Circuit switching is a technique that directly connects the sender and the receiver in an unbroken path. In the modern and fast-paced world, what we are looking for is efficiency, low costs and reliability, and packet-switched networks seem to fulfill most of the criteria that society is looking for. It may only be a matter of time before circuit switching becomes a thing of the past.

7) REFERENCES
[1] Stallings, W., Data and Computer Communications, 7th ed. Upper Saddle River, NJ: Prentice Hall, 1999.
[2] eComputerNotes.com, What is Switching. Available from: http://ecomputernotes.com/computernetworkingnotes/computer-network/what-is-switching.
[3] ABC, T., Circuit Switching, 2005.
[4] Jia, S. and G. Wang, Network performance analysis of packet-switching C³ systems, in TENCON '89, Fourth IEEE Region 10 International Conference, 1989.
[5] Wikipedia, Packet Switching, 2012.
[6] Torlak, P. M., Telecommunication Switching and Transmission: Packet Switching and Computer Networks, UTD.
[7] Heng Zheng Hann, C. Y. Y., Fareezul Asyraf, Farhana Binti Mohamad and Fong Poh Yee, Circuit Switching vs Packet Switching, Wikibooks.
[8] Gebali, F., Analysis of Computer and Communication: Switches and Routers. New York, USA: Springer, 2008.
[9] IIT Kharagpur, Switching Techniques: Circuit Switching, CSE.
[10] eComputerNotes.com, Datagram Packet vs. Virtual Packet. Available from: http://ecomputernotes.com/computernetworkingnotes/switching/distinguish-between-datagram-packet-switching-and-virtual-circuit-switching.
[11] Farahmand, F. and D. Q. Zhang, Circuit Switching, 2007.


Networking Concepts – Summary

The aim of this paper is to find the easiest and most cost-effective method of connecting two separate networks. A relatively simple device called a bridge, which is implemented through a combination of hardware and software, achieves interconnection between two networks that are the same. Interconnection between networks that are not similar, for example a Wide Area Network and a Local Area Network, can be achieved through a much more complex device called a router.
A router is a device which can accept messages that are in a certain format produced by one particular network and translate them into another format that is used by another network. In this particular case of Nancy, a director of network infrastructure, it is not likely that a full replacement of networking equipment is required. Alternative A involves installing a few devices in the headquarters of BOB. The advantages of this alternative are that it is the easiest, the least expensive and the quickest to implement.
The other advantage is that this approach will have a very small impact on the network infrastructure. The disadvantage of this approach is that there will be performance penalties due to the lack of integration in the architecture of the network. The second option is replacing the network components of BOB's entire network for it to use the same protocols as BE, so that the two can communicate freely. The advantage of this approach is that there will be a huge improvement in performance due to integration in the network architecture.

The major cons of this alternative are that there will be major impacts on the network infrastructure; there will also be major costs incurred and a lot of time required to implement this alternative. The last alternative is whereby the management of BE bank replaces all the devices of the BOB WAN, and probably even the MANs, so that each city or branch can communicate with the network of BE while the LANs in individual divisions remain unchanged. The advantage of this alternative is that there will be better performance gains than in alternative A, and it takes significantly less time to implement.
The con of this approach is that it does not achieve full integration of the network, and it is the trickiest to support: BOB's network might add problems over time. My recommendation is that alternative C is the most applicable when it comes to the time taken to implement and the cost incurred in order to achieve a significant level of network architecture. However, from a long-term point of view, the second alternative is the best, so long as BE passes through a transition stage like the first alternative in order to meet its immediate needs and takes ample time to put into action the full changes to the infrastructure.


Virtual Private Network

Faith, my best friend, has been trying to get an online writing job. She found some good websites; the only problem was her location, since the services could not be offered in her country, Kenya. She informed me about it, and as I had just learned about VPNs, I advised her to use one.
So what’s a VPN?
VPN stands for Virtual Private Network. It gives you online privacy and anonymity by creating a private network from a public Internet connection. VPNs mask your Internet Protocol (IP) address so your online actions are virtually untraceable. Most importantly, VPN services establish secure and encrypted connections too.

How does a VPN protect your privacy?
VPNs essentially create a data tunnel between your local network and an exit node in another location, which could be thousands of miles away, making it seem as if you're in another place. This benefit allows "online freedom", or the ability to access your favorite apps and websites from anywhere in the world. VPN providers: there are many choices when it comes to VPN providers. Some VPN providers offer a free service, and some charge for VPN service.
Paid VPN providers offer robust gateways, proven security, free software and unmatched speed. VPN protocols: the number of protocols and available security features has grown with time, but the most common protocols are: PPTP: PPTP tunnels a point-to-point connection over the GRE protocol. It is strong and can be set up on every major OS, but it is not the most secure.
L2TP/IPsec: It is more secure than PPTP and offers more features. L2TP/IPsec implements two protocols together to gain the best features of each: the L2TP protocol creates a tunnel, and IPsec provides a secure channel.
This makes an impressively secure package. OpenVPN: OpenVPN is an SSL-based VPN that is gaining popularity. SSL is a mature encryption protocol, and OpenVPN can run on a single UDP or TCP port. The software used is open source and freely available. That's all for today; for more inquiries on VPNs, register on my email list for more info.


An Analysis of Project Networks as Resource Planning Tools

Usage and availability of resources are essential considerations when establishing project networks in resource planning. This analysis focuses on some of the risks of certain actions used to offset resource constraints, advantages/disadvantages of reducing project scope, and options/advantages/disadvantages for reducing project duration. If implemented correctly, careful consideration of the outlined risks will make managing a project a little less painful. Following is an analysis of project networks as resource planning tools.
The analysis will be segmented into three topical areas: * risks associated with leveling resources, compressing, or crashing projects, and imposed durations or "catch-up" as the project is being implemented; * advantages and disadvantages of reducing project scope to accelerate a project, and what can be done to reduce the disadvantages; * three options for reducing project duration and the advantages and disadvantages of these options. Risks Associated with Leveling Resources, Compressing, or Crashing Projects, and Imposed Durations or "Catch-Up": The text (Gray and Larson, 2008) gives good definitions of the risks associated with certain actions used to offset resource constraints. The act or process of evening out "resource demand by delaying noncritical activities (using slack) to lower peak demand" (Gray and Larson, 2008) is considered leveling resources.
This action ultimately increases resource utilization, which is more than likely the desired result. Even though one may get the desired results resource-wise, leveling resources often results in pushing out the end date of a project; in most cases, that is the extreme outcome. Another risk that rears its head when slack is reduced is loss of flexibility, which equates to an increase in critical activities. Without slack anywhere in a project network, ALL activities become critical. This means that everything has to fall perfectly into place in order to stay on the prescribed timeline. Compressing a schedule means that you will be conducting project activities in parallel. Compressing is not applicable to all project activities.

A good example can be seen if you have activities labeled "Hire Workers" and "Dig Foundation". You can't implement "Hire Workers" and "Dig Foundation" in parallel, because to dig a foundation you need to have someone to do the digging (brighthub.com/office/project-management/articles/51684.aspx#ixzz0ongX7ECF, 20 May 2010). Risks of compressing include: * increased risk of rework; * increased communications challenges; and * possibly requiring more resources. Crashing a schedule involves allocating more resources so that an activity can be completed on time or before time, assuming that by deploying more resources the activity can be completed earlier.
One good aspect about crashing a schedule (just like compressing) is that you do not need to crash all activities. The activities that impact the schedule are those with no slack, thus being the only ones that are affected. Risks associated with this action are as follows: "Budget: since you allocated more resources, you will not deliver the project on budget. Demoralization: existing resources may get demoralized by the increase in people to complete activities that were originally assigned to them. Coordination: more resources translate to an increase in communication challenges" (brighthub.com/office/project-management/articles/51684.aspx#ixzz0onfuKUmj, 20 May 2010).
These risks, combined or by themselves, can ultimately pose the overall risk of reducing the effectiveness of the existing resources. Advantages and Disadvantages of Reducing Project Scope to Accelerate a Project, and What Can Be Done to Reduce the Disadvantages: Reducing the scope of the project can lead to big savings in both time and cost. It typically means the elimination of certain tasks. At the same time, scaling down the scope may reduce the value of the project such that it is no longer worthwhile or fails to meet critical success factors. An advantage of reducing project scope is that the project is more likely to stay on schedule and on budget. It also allows more focus to be applied to the remaining deliverables in the project scope.
A disadvantage that may arise is loss of quality in work, if key quality deliverables are selected to be cut in order to balance the timeline of the project. The key to offsetting the disadvantages is "reassessing the project requirements to determine which are essential and which are optional. This requires the active involvement of all key stakeholders. More intense re-examination of requirements may actually improve the value of the project by getting it done more quickly and for a lower cost." (justanswer.com, 21 May 2010). Three Options for Reducing Project Duration and the Advantages and Disadvantages of These Options: Reducing the duration of a project by reducing the duration of an activity or activities almost always results in higher direct cost.
When the duration of a critical activity is reduced, the project's critical path can change to other activities, and that new path will determine the new project completion date. Following are three options for reducing project duration. Adding Resources: This is a popular method to reduce project time by assigning additional staff and equipment to activities, if it is assessed appropriately. The activities at hand need to be researched accordingly, with a proper determination of how much time will be saved, prior to just throwing bodies at them. The first thing that comes to mind when you add resources is "double the resources, reduce the length of the project by half".
The unforeseen disadvantage that arises is the increase in the amount of time that an existing team member must spend explaining what has been done already and what is planned. This increases the overall communication time spent by the team, which ultimately ends up losing valuable time. Outsourcing Project Work: A common method for shortening the project time is to subcontract an activity. The subcontractor may have access to superior technology or expertise that will accelerate the completion of the activity (Gray and Larson, 2008). Additionally, significant cost reduction and flexibility can be gained when a company outsources (Gray and Larson, 2008).
Disadvantages that may be experienced are conflict due to contrasting interpersonal interactions, and internal morale issues if the work has normally been done in-house (Gray and Larson, 2008). Scheduling Overtime: The easiest way to add more labor to a project is not to add more people, but to schedule overtime. www.businesslink.gov outlines potential advantages of overtime working:
* a more flexible workforce;
* the ability to deal with bottlenecks, busy periods, cover of absences and staff shortages without the need to recruit extra staff;
* increased earnings for employees;
* avoidance of disruption to jobs where the workload is more difficult to share, e.g. transport and driving;
* the ability to carry out repair and maintenance which has to be done outside normal working hours.
However, disadvantages may include:
* the expense of premium overtime rates;
* inefficiency if employees slacken their pace of work in order to qualify for overtime;
* regular long working hours, which can adversely affect employees' work, health and home lives;
* fatigue, which may increase absence levels and lead to unsafe working practices;
* employee expectations of overtime, leading to resentment and inflexibility if you try to withdraw it (businesslink.gov, 22 May 2010).
A quantitative way to compare such options is sketched below.
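As a rough illustration of how these options are weighed in practice, the Python sketch below (the activities, durations and crash costs are hypothetical) finds the critical path and picks the critical activity with the lowest cost per week saved, since crashing noncritical activities buys no time:

# name -> (duration_weeks, crash_cost_per_week_saved, predecessors)
acts = {
    "A": (4, 800, []),
    "B": (6, 500, ["A"]),
    "C": (3, 1200, ["A"]),
    "D": (5, 700, ["B", "C"]),
}

ef = {}  # earliest finish times (forward pass)
def forward(n):
    if n not in ef:
        dur, _, preds = acts[n]
        ef[n] = dur + max((forward(p) for p in preds), default=0)
    return ef[n]

T = max(forward(n) for n in acts)      # project duration: 15 weeks here

lf = {n: T for n in acts}              # latest finish times (backward pass)
for n in sorted(acts, key=ef.get, reverse=True):
    dur, _, preds = acts[n]
    for p in preds:
        lf[p] = min(lf[p], lf[n] - dur)

critical = [n for n in acts if ef[n] == lf[n]]       # zero-slack activities
cheapest = min(critical, key=lambda n: acts[n][1])   # best crash candidate
print(f"critical activities: {critical}; crash {cheapest!r} first")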
Conclusion: Usage and availability of resources are essential considerations when establishing project networks in resource planning. This analysis has focused on some of the risks of certain actions used to offset resource constraints, advantages/disadvantages of reducing project scope, and options/advantages/disadvantages for reducing project duration. If implemented correctly, careful consideration of the outlined risks will make managing a project a little less painful. References: Brighthub.com, Difference Between Schedule Crashing and Compressing, retrieved 20 May 2010, http://www.brighthub.com/office/project-management/articles/51684.aspx#ixzz0onfuKUmj. Brighthub.com, When to Crash or Compress a Schedule, retrieved 20 May 2010, http://www.brighthub.com/office/project-management/articles/51684.aspx#ixzz0onfuKUmj.


GIS Based Load Flow Study for Distribution Network at Sihora Township

This project work pertains to:
“GIS based load flow study for Distribution Network at Sihora township”.

Chapter 1: Introduction
In India, power sector reforms are afoot chiefly to restore efficiency and financial health in the sector, and various SEBs have followed a common pattern of reforms based on the "World Bank supported Orissa" model of the 1990s. The main objectives covered under the reforms are: unbundling of SEBs into three separate sectors of Generation, Transmission & Distribution, and corporatization of the sectors. An added financial boost to the reform process came in the form of the "Accelerated Power Development and Reform Programme" (APDRP), and states willing to undertake distribution reforms are eligible to draw funds under this scheme.
Distribution and utilization of power are the third and fourth segments of integrated power systems, and are unfortunately the weakest links as compared to generation and UHV/EHV transmission of power, because of high technical and commercial losses, overloading of transformers and feeders/distributors, and mass-scale pilferage of power. Power distribution, however, assumes the role of a revenue-earning segment of the power system. Therefore, the real challenge of reforms in the power sector lies in efficient management of the distribution and utilization segments so that consumers get good power quality.
Power sector reforms initiated by the Govt. of India, particularly in the distribution sector, are viewed as strong measures to improve the commercial and financial viability of this sector, and the APDRP launched in the year 2001 was intended chiefly to strengthen primary, secondary and tertiary distribution networks and to reduce Aggregate Technical and Commercial Losses (AT&C losses). The main objectives of this programme cover:

Establishment of baseline data.
Renovation and modernisation of 33/11 & 11/0.4 KV sub-stations.
Reduction of AT&C losses.
Commercial viability.
Reduction of outages & interruptions.
Increased consumer satisfaction through strengthening & up-gradation of the sub-transmission & distribution network and by supplying good power quality.

1.1 Application of Geographic Information Systems (GIS) in Distribution Systems
GIS is a computer-based system to aid in the collection, maintenance, storage, analysis, output, and distribution of spatial data and information. Geographic Information Systems (GIS) and network analysis are rapidly advancing fields in recent years and remain most significant application areas.
G – stands for geographic, and it has something to do with geography.
I – stands for information, i.e., geographic information.
S – stands for system. GIS is an integrated system of geography and information tied together.
1.2 ROLE OF GIS IN DISTRIBUTION REFORMS
Distribution is a problem area in any electric power supply utility in India, chiefly because the technical plus commercial losses are exorbitantly high (50–55%). GIS can help cut losses and improve energy efficiency through its contribution in the following areas of distribution reform:
1. 100% consumer metering and Automatic Meter Reading.
2. Feeder & distribution transformer metering: installation of static (electronic) meters on all 11 KV outgoing feeders and distribution transformers.
3. Effective MIS: both feeder and DT static meters record active energy, power factor and load information, which can be downloaded to a computer network to build an effective MIS for quick decision-making.
4. Energy accounting: energy received in each 11 KV sub-station and 11 KV outgoing feeders, energy billed, and T&D losses at each feeder and DT can be properly accounted for.
5. Installation of capacitor banks & network reconfiguration: installation of capacitors at 11 & 400 Volt levels, and reconfiguration of feeders/distributors & DTs in such a manner as to reduce the length of feeders/distributors, thereby reducing technical losses.
6. High Voltage Distribution System (HVDS): installation of small energy-efficient DTs supplying power to 10 to 15 households only, re-conductoring of overloaded sections, digital mapping of the full distribution system, and load flow studies to strengthen the distribution system.
1.3 GIS helps in achieving the above objectives through various applications:
1. Creation of a consumer database and consumer indexing: indexing of all LT & HT consumers, so as to segregate consumers feeder-wise and DT-wise. The consumers are mapped using GIS technology and identified based on their unique electrical address, called the Consumer Index Number (CIN).
2. Mapping of the sub-transmission and electrical distribution network: it is equally important to have all the 33 KV substations, 11 KV feeders, DTs and LT feeders digitally mapped and geo-referenced.
3. Load flow studies: having completed the aforesaid tasks, load and consumer profiles can be studied and inferences drawn for correcting imbalances in the network.
4. Load forecasting: GIS has proved itself an effective tool in identifying ideal locations for proposed sub-stations, demand-side management, and load forecasting.
1.4 CASE STUDY
GIS has been used as a tool to carry out consumer indexing and load flow studies for the primary and secondary distribution network at Sihora township, near Jabalpur, under the Poorv Kshetra Vidyut Vitran Company (MPPKVVCL), and I was associated with this study. Both these studies were conducted simultaneously. The basic aim was to update consumer data, plan improvements in the network, and do away with overloading of transformers and feeders so as to achieve an acceptable voltage profile, i.e., to supply all L.T. & H.T. consumers voltage in the range ±6%.
The following steps are covered in the case study:

Field work for identifying assets, or GPS survey.
Transformation of GPS co-ordinates to Lat-Lon co-ordinates using ILWIS software.
Downloading of satellite images using Google Earth Pro.
Alignment of spatial data.
Organizing the database.
Conducting the load flow study.

The conclusion summarises the outcome of this study.
Chapter 2: LITERATURE REVIEW
2.1 Review 1
“Application of Geographic Information Systems and Global Positioning Systems in Humanitarian Emergencies: Lessons Learned, Programme Implications and Future Research” by Reinhard Kaiser, Centers for Disease Control and Prevention (CDC), Paul B. Spiegel (CDC), Alden K. Henderson (CDC), Michael L. Gerber (CDC) (published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA).
This paper discusses the application of GIS & GPS in humanitarian emergencies.
2.2 Review 2
International journal article on Network Analysis in Geographic Information Science: Review, Assessment, and Projections (Cartography and Geographic Information Science, Vol. 34, No. 2, 2007, pp. 103-111) by Kevin M. Curtin.
This paper informs that network data structures were one of the earliest representations in geographic information systems (GIS), and network analysis remains one of the most significant and persistent research and application areas in geographic information science.
2.3 Review 3
GIS AND NETWORK ANALYSIS
(By Manfred M. Fischer, Department of Economic Geography & Geoinformatics, Vienna University of Economics and Business Administration, Rossauer Lande 23/1, A-1090 Vienna, Austria).
The author has described the data models and design issues which are specifically oriented to GIS-T, and identified several improvements of the traditional network data model that are needed to support advanced network analysis in a land transportation context.
2.4 Review 4
Electrical Network Mapping and Consumer Indexing using GIS
(By S P S Raghav, Chairman and Managing Director, UPCL, Dehradun, and Jayant K Sinha, Dy General Manager (IT), UPCL, Dehradun).
This paper analyzes the present power scenario and the role of GIS in spearheading the distribution reform processes to improve the power industry's viability.
2.5 Review 5
GIS Based Power Distribution System: A Case Study for the Bhopal City (Dr. Tripta Thakur, Dept. of Electrical Engineering, MANIT, Bhopal).
Asset mapping using GPS and high-resolution remote sensing images has been reported in this paper, using ArcGIS 9.1 software.
Problem Definition
The East DISCOM at Jabalpur identified a few townships as pilot projects for system improvement, where the existing distribution network was haphazard, shoddily constructed and expanded in an unplanned manner. AT&C losses were exorbitantly high, ranging between 50-60%. With this in view, GPS-based information was chosen to create a reliable database and carry out the load flow study for the network at 11 KV level, to obtain a voltage profile within the prescribed limit of ±6% and also to identify low-voltage pockets.
Aims of Thesis.

Establishment of baseline data.
Renovation and modernisation of 33/11 & 11/0.4 KV sub-stations.
Reduction of AT&C losses.
Improvement of voltage profiles.
Commercial viability.
Improved maintenance – reduction of outages & interruptions.
Increased consumer satisfaction by supplying good quality power supply.

Chapter 3: GLOBAL POSITIONING SYSTEM (GPS)
GPS Facts

Developed by the Department of Defense as a military navigational tool.
The system's birth was in the early 1970s.
24 satellites orbiting at high altitudes (11,000 miles); the first satellite was launched in 1978.
Became fully operational in April 1995.
Useful night & day – rain or shine.
Uses radio waves.
Accuracy depends on the unit; some are accurate to a centimetre.
There are 3 orbit classes – LEO (low Earth orbit), MEO (medium Earth orbit) and GEO (geostationary Earth orbit). The GPS satellites are located in MEO orbit.

3.1 Global Positioning System (GPS)
GPS is a worldwide radio-navigation system formed from a constellation of 24 satellites and their ground stations. It uses these "man-made stars" as reference points to calculate positions accurate to a matter of metres. These satellites have very accurate clocks on board. The satellites continuously send radio signals towards Earth. These radio signals are picked up by GPS receivers.

Figure – 1
GPS receivers have become very economical, making the technology accessible to virtually everyone. GPS provides continuous three-dimensional positioning, 24 hours a day, to military and civilian users throughout the world. These days GPS is finding its way into cars, boats, planes, construction equipment, farm machinery, even laptop computers. It has a tremendous number of applications in GIS data collection, surveying, and mapping. GPS is increasingly used for precise positioning of geospatial data and the collection of data in the field.
Figure – 2

Figure – 3
3.2 GPS Control Stations
There are five control stations that monitor the satellites.
• Control stations enable information on Earth to be transmitted to the satellites (updates and fine tuning).
• Control stations continuously track the satellites and update the position of each satellite.
• Without control stations, the accuracy of the system would degrade in a matter of days.
3.3 GPS Receivers
• GPS units are referred to as "receivers".
• They receive information (radio signals) from the satellites.
• The GPS system is made of three parts: i) satellites orbiting the Earth, ii) control and monitoring stations on Earth, and iii) GPS receivers owned by users. GPS satellites send signals from space which are picked up and identified by GPS receivers. Each GPS receiver then provides a three-dimensional location (latitude, longitude, and altitude) along with the time.
3.4 Three segments of GPS
The Space segment: The space segment consists of twenty-four satellites circling the Earth at an altitude of 12,000 miles. The high altitude allows the signals to cover a large area. The satellites are arranged in their orbits such that a GPS receiver on Earth can always receive a signal from at least four satellites at any given time. Each satellite transmits low-power radio signals with a unique code on different frequencies. The GPS receiver identifies the signals. The main purpose of these coded signals is to allow estimation of the travel time from the satellite to the GPS receiver. The travel time multiplied by the speed of light equals the distance from the satellite to the GPS receiver. Since these are low-power signals that won't go through solid objects, it is important to have a clear view of the sky.
The Control segment: The control segment tracks the satellites and then provides them with corrected orbital and time information. The control segment consists of four unmanned control stations and one master control station. The four unmanned stations receive data from the satellites and then send that information to the master control station, where it is corrected and sent back to the GPS satellites.
The User segment: The user segment consists of the users and their GPS receivers. Any number of users can have access at any moment in time.
3.5 Working of GPS
When a GPS receiver is turned on, it first downloads orbit information for all the satellites. This process, the first time, can take as long as 12.5 minutes, but once this information is downloaded, it is stored in the receiver's memory for future use. Even though the GPS receiver knows the precise location of the satellites in space, it still needs to know the distance to each satellite it is receiving a signal from. That distance is calculated by the receiver by multiplying the velocity of the transmitted signal by the time it takes the signal to reach the receiver. The receiver already knows the velocity, which is the speed of a radio wave, or 186,000 miles per second (the speed of light). To determine the time part of the formula, the receiver matches the satellite's transmitted code to its own code and, by comparing them, determines how much it needs to delay its code to match the satellite's code. This delayed time is multiplied by the speed of light to get the distance. The GPS receiver's clock is less accurate than the atomic clock in the satellite; therefore, each distance measurement must be corrected to account for the GPS receiver's internal clock error.
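In code, the distance computation described above is essentially a one-liner. The delay value below is hypothetical, and the clock-bias parameter stands in for the correction that GPS actually obtains by solving with a fourth satellite:

C_MILES_PER_SECOND = 186_000  # speed of light, as quoted above

def pseudorange(code_delay_s, receiver_clock_bias_s=0.0):
    # Distance = signal travel time x speed of light; the raw value is a
    # "pseudorange" because it still contains the receiver clock error.
    return (code_delay_s - receiver_clock_bias_s) * C_MILES_PER_SECOND

print(f"{pseudorange(0.067):,.0f} miles")  # about 12,462 miles for a 67 ms delay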

Figure – 3
3.6 GPS Terminology
2D Positioning: In terms of a GPS receiver, this means that the receiver is only able to lock on to three satellites, which only allows for a two-dimensional position fix. Without an altitude, there may be a substantial error in the horizontal co-ordinates.
3D Positioning: Position computations in three dimensions. The GPS receiver has locked on to 4 satellites. This provides an altitude in addition to a horizontal co-ordinate, which means a much more accurate position fix.
Real-Time Differential GPS: Real-time DGPS employs a second, stationary GPS receiver at a precisely measured spot (normally established through traditional survey methods). This receiver corrects any errors found in the GPS signals, including atmospheric distortion, orbital anomalies, Selective Availability (when it existed), and other errors. A DGPS station is able to do this because its computer already knows its precise location, and can easily determine the amount of error in the GPS signals. DGPS corrects or reduces the effects of:

Orbital errors
Atmospheric distortion
Selective Availability
Satellite clock errors
Receiver clock errors

DGPS cannot correct for GPS receiver noise in the user's receiver, multipath interference, or user errors. In order for DGPS to work properly, both the user's receiver and the DGPS station receiver must be accessing the same satellite signals at the same time.

Figure – 4


Project Network

A project network illustrates the relationships between activities (or tasks) in the project. Showing the activities as nodes, or on arrows between event nodes, are the two main ways to draw those relationships. With activity-on-arrow (AOA) diagrams, you are limited to showing only finish-to-start relationships – that is, the arrow can represent only that the activity spans the time from the event at the start of the arrow to the event at the end. As well, "dummy" activities have to be added to show some of the more complex relationships and dependencies between activities.
These diagrams came into use in the 1950s, but are now falling into disuse. Activity-on-node (AON) diagrams place the activity on the node, and the interconnecting arrows illustrate the dependencies between the activities. They are more flexible and can show all of the major types of relationships. Since the activity is on a node, the emphasis (and more data) usually can be placed on the activity. AOA diagrams emphasize the milestones (events); AON networks emphasize the tasks. A small sketch of the AON representation follows.
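A minimal illustration of an AON network in code (the tasks, durations and dependencies are invented for the example): each node is an activity, and the earliest start of a task is driven by the finish-to-start links from its predecessors.

# activity -> the activities it depends on (the arrows of the AON diagram)
aon = {
    "design":  [],
    "build":   ["design"],
    "test":    ["build"],
    "docs":    ["design"],
    "release": ["test", "docs"],
}
duration = {"design": 3, "build": 5, "test": 2, "docs": 2, "release": 1}

def earliest_start(task, memo={}):
    # Finish-to-start: a task starts once every predecessor has finished.
    if task not in memo:
        memo[task] = max((earliest_start(p) + duration[p] for p in aon[task]),
                         default=0)
    return memo[task]

for t in aon:
    print(f"{t:8s} can start on day {earliest_start(t)}")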
Introduction to the Nine Project Management Knowledge Areas
As a PMP I often get questions about what goes into running a project. I will try to explain in a couple of articles the various components that make up a project. There are several ways to look at a project as a whole. You can view it as a series of processes. Some processes are executed in order, and some are recurring processes that are executed at various stages throughout the entire project.

You can also view the project from the different knowledge areas that are needed to execute the project. I will cover the knowledge areas in this article and go on to the processes in my next article. There are nine knowledge areas and each one covers its own important part of the project. A knowledge area can cover several phases or process groups of the project. The nine areas are mentioned below in some detail. Integration Management If each little part of the project is a tree, Integration Management is the entire forest.
It focuses on the larger tasks that must be done for the project to work. It is the practice of making certain that every part of the project is coordinated. In Integration Management, the project is started, the project plan is assembled and executed, the work is monitored and verification of the results of the work is performed. As the project ends the project manager also performs the tasks associated with closing the project. A project manager must be very good at Integration Management or the project may very well fail.
Other knowledge areas are also important, but Integration Management is the area that requires the most management and control of the entire project. Scope Management This area involves control of the scope of the project. It involves management of the requirements, details and processes. Changes to the scope should be handled in a structured, procedural, and controlled manner. The goal of scope management is to define the need, set the expectations, deliver to the expectations, manage changes, and minimize surprises and gain acceptance of the project.
Good scope management focuses on making sure that the scope is well defined and communicated very clearly to all stakeholders. It also involves managing the project to limit unnecessary changes. Time Management: Project Time Management is concerned with resources, activities, scheduling and schedule management. It involves defining and sequencing activities and estimating the duration and resources needed for each activity. The goal is to build the project schedule and subsequently manage changes and updates to it.
When the schedule is first created, it is often referred to as the time baseline of the project. It is later used to compare updated baselines to the original baseline. Many project managers use software to build and maintain the schedule and baselines. Cost Management: This knowledge area includes cost estimating and budgeting. After the cost of the project has been estimated, the project manager must control the cost and make changes to the budget as needed. The project cost estimate is dependent on the accuracy of the cost estimate of each activity in the project.
The accuracy changes as the project progresses. For instance, in the initiation of the project the estimate is more difficult to assess than later in the project when the scope and the schedule have been defined in detail. Quality Management This area is an important area where outputs of different processes are measured against some predetermined acceptable measure. The project manager must create a quality management plan. The quality plan is created early in the project because decisions made about quality can have a significant impact on other decisions about scope, time, cost and risk.
The area also includes quality control and assurance. The main difference between control and assurance is that control looks at specific results to see if they conform to the quality standard, whereas assurance focuses primarily on quality process improvement. Human Resource Management: This area involves HR planning, like roles and responsibilities, project organization, and staff management planning. It also involves assigning staff, assessing the performance of project team members, and overall management of the project team.
The project manager is the “Boss” of the project and Human Resource Management is essentially the knowledge area of running the project in relations to the resources assigned to the project. Communications Management This area focuses on keeping the project’s stakeholders properly informed throughout the entire project. Communication is a mixture of formal and informal, written and verbal, but it is always proactive and thorough. The project manager must distribute accurate project information in a timely manner to the correct audience.
It involves creating a communications plan that explains what kind of information should be communicated on a regular basis and who should receive it. It includes project performance reporting to stakeholders so everyone is on the same page of the project progress, for example, what is outstanding, what is late, and what risks are left to worry about, etc. Risk Management This involves planning how to handle risks to the project. Specifically the project manager must identify risks and also plan how to respond to the risks if they occur.
Risk has two characteristics: Risk is related to an uncertain event, and a risk may affect the project for good or for bad. When risks are assessed, the project manager usually has to assess several things: How likely will the risk happen, how will it affect the project if it happens, and how much will it cost if it happens? The project manager will use a lot of risk analysis tools and techniques to answer these questions. Procurement Management This area focuses on a set of processes performed to obtain goods or services from an outside organization.
The project manager plans purchases and acquisitions of products and services that can't be provided by the project manager's own organization. It includes preparing procurement documents, requesting vendor responses, selecting the vendors, and creating and administering contracts with each outside vendor. As you can see, there are many knowledge areas that a project manager must excel at. Even though some areas are more important than others, each area must be executed with care and professionalism in order for any project to be successful.

Work Breakdown Structure, WBS Chart and Project Management
Work breakdown structure (WBS) is a project management technique, initially developed by the US defense establishment, which deconstructs a project with the intent of identifying the deliverables required to complete the project. The project management work breakdown structure is utilized at the beginning of the project to define the scope, estimate costs and organize Gantt schedules.
Work breakdown structure captures all the elements of a project in an organized fashion. Breaking down large, complex projects into smaller pieces provides a better framework for organizing and managing the project. A WBS can facilitate resource allocation, task assignment, responsibilities, measurement and control of the project.
When building a WBS it is important that the project is not broken down into too much detail, as that can lead to micromanagement. Conversely, too little detail can result in tasks that are too large to manage effectively. A WBS can be presented as a tabular list, as an indented task list within a Gantt chart, or as a hierarchical tree. Most often it is presented as a hierarchical tree that captures the deliverables and tasks needed to achieve project completion.

Work breakdown structure (WBS)

A work breakdown structure (WBS) is a chart in which the critical work elements, called tasks, of a project are illustrated to portray their relationships to each other and to the project as a whole. The graphical nature of the WBS can help a project manager predict outcomes based on various scenarios, which can ensure that optimum decisions are made about whether or not to adopt suggested procedures or changes.
When creating a WBS, the project manager defines the key objectives first and then identifies the tasks required to reach those goals. A WBS takes the form of a tree diagram with the “trunk” at the top and the “branches” below. The primary requirement or objective is shown at the top, with increasingly specific details shown as the observer reads down. When completed, a well-structured WBS resembles a flowchart in which all elements are logically connected, redundancy is avoided and no critical elements are left out. Elements can be rendered as plain text or as text within boxes.
The elements at the bottom of the diagram represent tasks small enough to be easily understood and carried out. Interactions are shown as lines connecting the elements. A change in one of the critical elements may affect one or more of the others. If necessary, these lines can include arrowheads to indicate time progression or cause-and-effect relationships. A well-organized, detailed WBS can assist key personnel in the effective allocation of resources, project budgeting, procurement management, scheduling, quality assurance, quality control, risk management, product delivery and service-oriented management.
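To make the hierarchical-tree form concrete, here is a minimal sketch of a WBS modeled as a tree and printed with indentation. The project and task names are invented for illustration:

class WBSElement:
    """One element of a work breakdown structure: a name plus child elements."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def print_tree(self, level=0):
        # Indentation depth mirrors the level in the hierarchy: the primary
        # objective at the top, increasingly specific tasks below it.
        print("    " * level + self.name)
        for child in self.children:
            child.print_tree(level + 1)

# Hypothetical project used only to show the structure.
project = WBSElement("Website redesign", [
    WBSElement("Design", [
        WBSElement("Wireframes"),
        WBSElement("Visual mockups"),
    ]),
    WBSElement("Build", [
        WBSElement("Front-end pages"),
        WBSElement("Back-end services"),
    ]),
    WBSElement("Launch", [
        WBSElement("User acceptance testing"),
        WBSElement("Deployment"),
    ]),
])

project.print_tree()

Each level of indentation in the output corresponds to one level of decomposition, from the key objective down to tasks small enough to be understood and carried out.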

Categories
Network

Bead Bar Network Paper

Bead Bar Network Paper.
Bead Bar specializes in making bead jewellery for its customers. The company has three divisions, namely studios, franchises, and Bead Bar on Board, which need to be connected to synchronize their activities. The task is to create a network design with an appropriate topology that would serve the company well for communicating requirements and sharing information, keeping everyone in step with the current state of the business.

The paper discusses a network topology that would make communication feasible despite the physical and network barriers involved.

The network design is the architecture that gives a clear picture of how devices and departments are interconnected to facilitate the sharing of business information. The final section discusses the pros and cons of the proposed topology.

Background information on Bead Bar: Bead Bar is organized into three divisions, namely studios, franchises, and Bead Bar on Board. At present there is no network among the divisions, which creates inconsistencies in information sharing and in knowledge about the company as a whole at any given point in time.
A computer network would allow the entire job to be done for every customer in less time than usual. It would ensure that information about each customer’s choices and preferences is captured and stored for future benefit. The network would also enrich communication among the divisions, which in turn would improve workability and functionality in operations.

Recommendation overview: The network recommendation for Bead Bar can be summed up as a combination of LAN and WAN.
The internal network within each division would be a LAN, while inter-division communication would be made possible using a WAN. Creating a LAN provides an internal network connecting the personnel within each division. For the LAN, switches and hubs are used to connect each division internally; for the WAN, routers are used to interconnect the divisions.

Explanation of the Network Design: All three divisions of the company are interconnected over network cabling in a wired network using both LAN and WAN.
Using the LAN, the computers are interconnected within the office or building premises so that all employees are able to get information on demand. The head office has a central server where all the information is stored in a database. The other offices are also networked using LAN technologies. The switch operates at layer two and takes care of the store-and-forward mechanism, as stated in Tanenbaum (2003). The WAN connections use public data services to reach the Internet, with VPN technology and login credentials providing secure access.
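As a rough sketch of the design just described, the three divisions can be modeled as LAN segments joined by WAN routers, with a VPN gating access to the central database at the head office. This is an illustrative model only; the host counts, device choices, and the assumption that the studios division houses the head-office server are all invented for the example:

# Hypothetical model of the proposed Bead Bar network: three divisions,
# each a star-topology LAN around a switch or hub, interconnected by
# WAN routers, with VPN credentials gating access to the central database.
divisions = {
    "studios":    {"device": "switch", "hosts": 8},
    "franchises": {"device": "hub",    "hosts": 5},
    "on_board":   {"device": "hub",    "hosts": 3},
}

head_office = "studios"  # assumed location of the central database server

for name, lan in divisions.items():
    role = "central database server + LAN" if name == head_office else "LAN"
    print(f"{name}: {lan['hosts']} hosts in a star around a {lan['device']} ({role})")
    if name != head_office:
        # Each remote division's router tunnels over the public WAN via VPN.
        print(f"  router[{name}] --VPN over public WAN--> router[{head_office}]")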
Network Topology: The LAN follows a star topology with hubs. The interconnecting devices facilitate the use and sharing of information; the hubs store and forward it. A star topology is used because it makes efficient use of network resources (Star Topology).

Advantages of the architecture:
• A star topology is less expensive than a mesh topology: for n devices, a star needs only n links to the hub, whereas a full mesh needs n(n-1)/2 links.
• In a star, each device needs only one link and one I/O port to connect it to any number of other devices (Forouzan, 2003).
• This makes a star topology easy to install and to reconfigure as needs change over time.
• A star topology requires far less cabling, and any additions, deletions and moves involve only one connection, between the device concerned and the hub.
• It is quite robust: if one link fails, the others do not cease to operate. This also eases fault identification and fault isolation.
• As long as the hub is in working condition, it is quite easy to monitor link problems and bypass defective links.
• A WAN is used for connecting to the Internet so as to reach the other divisions across geographic locations.
• VPN technology is used to validate each user of the network so that the connection established is secure. It uses login name and password facilities to enable secure handling of data.
• A database server is used so that all information is stored centrally and all users access it with their credentials.
• The VPN also ensures that not all users can access all forms of data; data security and integrity are enforced through the login credentials.

Drawbacks of the architecture:
• The VPN technology would be quite expensive to implement (VPN).
• Switches and hubs add considerably to the cost.
• A web-based design, in which all computers access the Internet directly, would have made the architecture more accessible, but security would have been weaker, and the cost of a web server would also have been high.

Conclusion: The primary objective of connecting the divisions has been addressed, and the network topology has been discussed to give shape to the overall network interconnecting the various divisions of the company.
The network architecture and its drawbacks have been thoroughly examined for feasibility and communication. The benefits of the proposed topology outweigh its drawbacks, and it is quite sufficient to interconnect the enterprise so that it can capitalize on its resources.

References/Bibliography
Forouzan, Behrouz A. (2003). TCP/IP Protocol Suite, second edition. Tata McGraw Hill.
Physinfo (2006). Network Topologies. Retrieved October 26, 2007 from http://physinfo.ulb.ac.be/cit_courseware/networks/pt2_1.htm
