About Network Security
Network security combines multiple layers of defences at the edge and in the network. Each network security layer implements policies and controls: authorized users gain access to network resources, while malicious actors are blocked from carrying out exploits and threats. Digitization has transformed our world; how we live, work, play, and learn has changed. Every organization that wants to deliver the services its customers and employees demand must protect its network. Network security also helps protect proprietary information from attack and, ultimately, protects your reputation.
Network security can be made up of hardware devices, specialized software, physical security (e.g., locked computer rooms), and rules for people to follow. Just like securing your home, a network security system must protect against threats coming in from the outside and also deal with intruders if they make it inside. Network security consists of the policies and practices adopted to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. It involves the authorization of access to data in a network, which is controlled by the network administrator: users choose or are assigned an ID and password or other authenticating information that allows them access to information.
Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies, and individuals. Networks can be private, such as within a company, or open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, and it protects and oversees the operations being performed on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.
Types of Network Security
Access control:
Not every user should have access to your network. To keep out potential attackers, you need to recognize each user and each device. Then you can enforce your security policies. You can block noncompliant endpoint devices or give them only limited access. This process is network access control (NAC).
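The admission decision described above can be sketched as follows; the device table, MAC addresses, and access levels are invented for illustration and do not reflect any particular NAC product:

```python
# Hypothetical NAC check: recognize each device, then grant full,
# limited, or no access depending on whether it is known and compliant.
KNOWN_DEVICES = {
    "aa:bb:cc:dd:ee:01": {"user": "alice", "compliant": True},
    "aa:bb:cc:dd:ee:02": {"user": "bob", "compliant": False},
}

def admit(mac: str) -> str:
    """Return the access level for a device identified by its MAC address."""
    device = KNOWN_DEVICES.get(mac)
    if device is None:
        return "blocked"   # unrecognized endpoint: keep it off the network
    if not device["compliant"]:
        return "limited"   # quarantine noncompliant endpoints
    return "full"
```

Real NAC products make the compliance decision from posture checks (patch level, antivirus status), but the admit-or-quarantine logic follows this shape.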
Antivirus and antimalware software:
“Malware,” short for “malicious software,” includes viruses, worms, Trojans, ransomware, and spyware. Sometimes malware will infect a network but lie dormant for days or even weeks. The best antimalware programs not only scan for malware upon entry, but also continuously track files afterward to find anomalies, remove malware, and fix damage.
Application security:
Any software you use to run your business needs to be protected, whether your IT staff builds it or whether you buy it. Unfortunately, any application may contain holes, or vulnerabilities, that attackers can use to infiltrate your network. Application security encompasses the hardware, software, and processes you use to close those holes.
Behavioral analytics:
To detect abnormal network behavior, you must know what normal behavior looks like. Behavioral analytics tools automatically discern activities that deviate from the norm. Your security team can then better identify indicators of compromise that pose a potential problem and quickly remediate threats.
Data loss prevention:
Organizations must make sure that their staff does not send sensitive information outside the network. Data loss prevention, or DLP, technologies can stop people from uploading, forwarding, or even printing critical information in an unsafe manner.
Email security:
Email gateways are the number one threat vector for a security breach. Attackers use personal information and social engineering tactics to build sophisticated phishing campaigns to deceive recipients and send them to sites serving up malware. An email security application blocks incoming attacks and controls outbound messages to prevent the loss of sensitive data.
Firewalls:
Firewalls put up a barrier between your trusted internal network and untrusted outside networks, such as the Internet. They use a set of defined rules to allow or block traffic. A firewall can be hardware, software, or both. Cisco offers unified threat management (UTM) devices and threat-focused next-generation firewalls.
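A first-match rule table of the kind described above can be sketched as follows; the rules, address ranges, and ports are made up for the example and do not reflect any specific firewall product:

```python
import ipaddress

# Illustrative packet filter: rules are checked in order, the first
# matching rule wins, and anything not explicitly allowed is denied.
RULES = [
    {"action": "allow", "src": "192.168.1.0/24", "port": 443},
    {"action": "deny",  "src": "192.168.1.0/24", "port": 23},
    {"action": "allow", "src": "10.0.0.0/8",     "port": 80},
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet, first matching rule wins."""
    src = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if src in ipaddress.ip_network(rule["src"]) and dst_port == rule["port"]:
            return rule["action"]
    return "deny"  # implicit default: block traffic no rule allows
```

The default-deny fall-through at the end is the key design choice: trusted traffic must be explicitly allowed, everything else is blocked.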
Intrusion prevention systems:
An intrusion prevention system (IPS) scans network traffic to actively block attacks. Cisco Next-Generation IPS (NGIPS) appliances do this by correlating huge amounts of global threat intelligence to not only block malicious activity but also track the progression of suspect files and malware across the network to prevent the spread of outbreaks and reinfection.
Mobile device security:
Cybercriminals are increasingly targeting mobile devices and apps. Within the next 3 years, 90 percent of IT organizations may support corporate applications on personal mobile devices. Of course, you need to control which devices can access your network. You will also need to configure their connections to keep network traffic private.
Network segmentation:
Software-defined segmentation puts network traffic into different classifications and makes enforcing security policies easier. Ideally, the classifications are based on endpoint identity, not mere IP addresses. You can assign access rights based on role, location, and more so that the right level of access is given to the right people and suspicious devices are contained and remediated.
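As a rough sketch of identity-based policy enforcement, access can be keyed on role and location rather than raw IP address; the roles, locations, and resources below are invented for illustration:

```python
# Illustrative segmentation policy: access rights are assigned per
# (role, location) identity, not per IP address. Unknown identities
# get no access at all.
POLICY = {
    ("engineering", "hq"):     {"git", "build", "internet"},
    ("engineering", "remote"): {"git", "internet"},
    ("guest", "hq"):           {"internet"},
}

def allowed(role: str, location: str, resource: str) -> bool:
    """Check whether an endpoint with this identity may reach a resource."""
    return resource in POLICY.get((role, location), set())
```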
Security information and event management:
SIEM products pull together the information that your security staff needs to identify and respond to threats. These products come in various forms, including physical and virtual appliances and server software.
Virtual private network:
A virtual private network encrypts the connection from an endpoint to a network, often over the Internet. Typically, a remote-access VPN uses IPsec or Secure Sockets Layer to authenticate the communication between device and network.
Web security:
A web security solution will control your staff’s web use, block web-based threats, and deny access to malicious websites. It will protect your web gateway on site or in the cloud. “Web security” also refers to the steps you take to protect your own website.
Wireless security:
Wireless networks are not as secure as wired ones. Without stringent security measures, installing a wireless LAN can be like putting Ethernet ports everywhere, including the parking lot. To prevent an exploit from taking hold, you need products specifically designed to protect a wireless network.
Goals of Network Security
The three primary goals of network security are confidentiality, integrity, and availability.
The first goal of network security is confidentiality. Its function is to protect precious business data (in storage or in motion) from unauthorized persons. The confidentiality part of network security makes sure that data is available only to intended and authorized persons; access to business data should be restricted to those individuals who are permitted to use it.
The second goal of network security is integrity. Integrity aims at maintaining and assuring the accuracy and consistency of data: the data must be accurate and reliable, and must not be changed by unauthorized persons or hackers. The data received by the recipient must be exactly the same as the data sent by the sender, without a change in even a single bit.
The third goal of network security is availability. Its function is to make sure that data, network resources, and network services are continuously available to legitimate users whenever they require them.
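The integrity goal can be illustrated with a cryptographic digest: the sender publishes a SHA-256 hash of the data, and the recipient recomputes it, so a change of even a single bit is detected. This is a minimal sketch of integrity checking only; it does not authenticate who sent the message:

```python
import hashlib

def digest(message: bytes) -> str:
    """SHA-256 digest of a message, published alongside the data."""
    return hashlib.sha256(message).hexdigest()

def verify(message: bytes, expected_digest: str) -> bool:
    """Recipient recomputes the digest; any bit flip changes it."""
    return digest(message) == expected_digest

# Example data, invented for the sketch.
sent = b"transfer 100 to account 42"
tag = digest(sent)
```

In practice a keyed construction such as an HMAC is used instead of a bare hash, so that an attacker who modifies the data cannot simply recompute a matching digest.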
Types of attacks
Attacks can be classified into two categories: active and passive. Below are some basic classes of attacks that can cause slow network performance and uncontrolled traffic.
Active attacks, in which an intruder initiates commands to disrupt the network’s normal operation. Some active attacks are spoofing, wormhole, modification, denial-of-service, sinkhole, and Sybil attacks.
Passive attacks, in which a network intruder intercepts data traveling through the network. Some passive attacks are traffic analysis, eavesdropping, and monitoring.
About the Project
The importance of vehicular networks within the Intelligent Transportation System (ITS) research field is evident if one considers the envisaged future cooperative cars and some of the current vehicular services, such as fleet management, road pricing or e-call, which already use computer communications to operate. However, each service usually provides its own communication architecture, different (probably non-standardized) protocols, and specific hardware, and thus communication links cannot be shared.
These issues lead to a lack of flexibility in the hardware installed in the vehicle and extra costs for the final user. With the aim of solving this problem, ISO and ETSI have been working during the last years on a common communication architecture for (vehicular) cooperative systems. The resulting standards provide a common framework to implement interoperable networks.
The standardized ISO/ETSI reference communication architecture provides a transversal layer with security services at different levels of the stack. This sets the basis for the design and integration of security protocols, key management, ciphering schemes, firewalling capabilities, etc. to appear in the next years. According to ETSI TC ITS, the security needs that should be considered in vehicular cooperative systems are confidentiality, integrity, authenticity, availability and non-repudiation.
Messages are usually routed using ITS-specific network protocols and then individually protected using a public key infrastructure (PKI). This provides a security scheme valid for broadcast scenarios and cases with low volumes of traffic, considering both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Although many services in vehicular networks can be served by this security approach, a number of traffic efficiency, infotainment and even notification-based safety services could use IPv6 unicast or multicast traffic that can be more efficiently transmitted using end-to-end security associations powered by symmetric cryptography.
The main objective of this project is to secure vehicle-to-infrastructure communications with an IPv6-compliant solution, using technologies such as IKEv2 and IPsec.
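As a toy illustration of an end-to-end security association powered by symmetric cryptography (in the spirit of IPsec ESP’s encrypt-then-MAC protection, but emphatically not real IPsec or IKEv2), a V2I message can be encrypted with a keyed keystream and authenticated with an HMAC. The key sizes, the 8-byte nonce, and the hash-counter keystream are simplifications invented for this sketch:

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode (teaching sketch only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(enc_key: bytes, mac_key: bytes, nonce: bytes, payload: bytes) -> bytes:
    """Encrypt-then-MAC: XOR with keystream, then append an HMAC tag."""
    ct = bytes(p ^ k for p, k in zip(payload, keystream(enc_key, nonce, len(payload))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unprotect(enc_key: bytes, mac_key: bytes, packet: bytes) -> bytes:
    """Verify the HMAC first; only then decrypt. Reject tampered packets."""
    nonce, ct, tag = packet[:8], packet[8:-32], packet[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Real IPsec negotiates the keys with IKEv2 and uses vetted ciphers such as AES-GCM; the sketch only shows why a shared symmetric key gives both confidentiality (the XOR encryption) and integrity (the tag check before decryption).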
A Theory of Network Localization
In this paper, we provide a theoretical foundation for the problem of network localization, in which some nodes know their locations and other nodes determine their locations by measuring the distances to their neighbours. We construct grounded graphs to model network localization and apply graph rigidity theory to test the conditions for unique localizability and to construct uniquely localizable networks. We further study the computational complexity of network localization and investigate a subclass of grounded graphs where localization can be computed efficiently. We conclude with a discussion of localization in sensor networks where the sensors are placed randomly.
A Survey of Architectures and Localization Techniques for Underwater Acoustic Sensor Networks
The widespread adoption of Wireless Sensor Networks (WSNs) in various applications in the physical environment and the rapid advancement of WSN technology have motivated the development of Underwater Acoustic Sensor Networks (UASNs). UASNs and terrestrial WSNs share several common properties, while there are several challenges particular to UASNs that are mostly due to acoustic communications and inherent mobility. These challenges call for novel architectures and protocols to ensure efficient operation of the UASN. Localization is one of the fundamental tasks for UASNs: it is required for data tagging, node tracking, and target detection, and it can be used for improving the performance of medium access and network protocols. Recently, various UASN architectures and a large number of localization techniques have been proposed. In this paper, we present a comprehensive survey of these architectures and localization methods. To familiarize the reader with UASNs and localization concepts, we begin our paper by providing background information on localization, state-of-the-art oceanographic systems, and the challenges of underwater communications. We then present our detailed survey, followed by a discussion of the performance of the localization techniques and open research issues.
Toward Accurate Mobile Sensor Network Localization in Noisy Environments
The node localization problem in mobile sensor networks has received significant attention. Recently, particle filters adapted from robotics have produced good localization accuracies in conventional settings. Despite these successes, state-of-the-art solutions suffer significantly when used in challenging indoor and mobile environments characterized by a high degree of radio signal irregularity. New solutions are needed to address these challenges. We propose a fuzzy logic-based approach for mobile node localization in challenging environments. Localization is formulated as a fuzzy multilateration problem. For sparse networks with few available anchors, we propose a fuzzy grid-prediction scheme. The fuzzy logic-based localization scheme is implemented in a simulator and compared to state-of-the-art solutions. Extensive simulation results demonstrate improvements in the localization accuracy from 20 to 40 percent when the radio irregularity is high. A hardware implementation running on Epic motes and transported by iRobot mobile hosts confirms the simulation results and extends them to the real world.
Scalable Localization with Mobility Prediction for Underwater Sensor Networks
Because of harsh aqueous environments, non-negligible node mobility, and large network scale, localization for large-scale mobile underwater sensor networks is very challenging. In this paper, by utilizing the predictable mobility patterns of underwater objects, we propose a scheme, called Scalable Localization with Mobility Prediction (SLMP), for underwater sensor networks. In SLMP, localization is performed hierarchically, and the whole localization process is divided into two parts: anchor node localization and ordinary node localization. During the localization process, each node predicts its future mobility pattern according to its past known location information, and it can estimate its future location based on the predicted mobility pattern. Anchor nodes with known locations in the network control the localization process in order to balance the trade-off between localization accuracy, localization coverage, and communication cost. We conduct extensive simulations, and our results show that SLMP can greatly reduce localization communication cost while maintaining relatively high localization coverage and localization accuracy.
System analysis is the study of sets of interacting entities, including communications system analysis. This field is closely related to requirements analysis and operations research. It is also an explicit formal inquiry carried out to help someone identify a better course of action and make a better decision.
System development has two major components: system analysis and system design. In system analysis, emphasis is given to understanding the details of an existing or proposed system and then deciding whether the proposed system is desirable and whether the existing system needs improvement. Thus system analysis is the process of investigating a system, identifying problems, and using the information to recommend improvements to the system.
System analysis is the initial stage of the System Development Life Cycle model, and it starts with the analyst. Analysis is a careful study of the various operations performed by a system and their relationships within and outside of the system. One aspect of analysis is defining the boundaries of the system and determining whether or not a candidate system should consider other related systems. During analysis, data are collected on the available files, decision points, and transactions handled by the present system.
Logical system models and tools are used in analysis. Training, experience, and common sense are required to collect the information needed to carry out the analysis.
For effective data retrieval, the large number of documents demands that the cloud server perform result relevance ranking instead of returning undifferentiated results. Such a ranked search system enables data users to find the most relevant information quickly, rather than burdensomely sorting through every match in the content collection. Ranked search can also elegantly eliminate unnecessary network traffic by sending back only the most relevant data, which is highly desirable in the “pay-as-you-use” cloud paradigm.
For privacy protection, such ranking operation, however, should not leak any keyword related information. On the other hand, to improve the search result accuracy as well as to enhance the user searching experience, it is also necessary for such ranking system to support multiple keywords search, as single keyword search often yields far too coarse results.
Disadvantages of Existing system:
The encrypted cloud data search system remains a very challenging task because of inherent security and privacy obstacles, including various strict requirements.
Although they enrich search flexibility, existing schemes are still not adequate to provide users with acceptable result-ranking functionality.
In this paper, for the first time, we define and solve the problem of multi-keyword ranked search over encrypted cloud data (MRSE) while preserving strict system-wise privacy in the cloud computing paradigm. Among various multi-keyword semantics, we choose the efficient similarity measure of “coordinate matching,” i.e., as many matches as possible, to capture the relevance of data documents to the search query. Specifically, we use “inner product similarity”, i.e., the number of query keywords appearing in a document, to quantitatively evaluate the similarity of that document to the search query. During index construction, each document is associated with a binary vector as a sub-index, where each bit represents whether the corresponding keyword is contained in the document.
The search query is also described as a binary vector, where each bit indicates whether the corresponding keyword appears in the search request, so the similarity can be exactly measured by the inner product of the query vector with the data vector. However, directly outsourcing the data vector or the query vector would violate the index privacy or the search privacy. To meet the challenge of supporting such multi-keyword semantics without privacy breaches, we propose a basic idea for MRSE using secure inner product computation, which is adapted from a secure k-nearest neighbor (kNN) technique, and then give two significantly improved MRSE schemes in a step-by-step manner to achieve various stringent privacy requirements.
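The coordinate-matching score described above can be sketched in plaintext as follows; the keyword dictionary and documents are invented, and a real MRSE scheme would encrypt both vectors before the cloud server computes the inner product:

```python
# Sketch of "coordinate matching": documents and queries become binary
# vectors over a keyword dictionary, and relevance is their inner
# product, i.e., the number of shared keywords.
KEYWORDS = ["cloud", "secure", "search", "rank", "index"]

def to_vector(words: set) -> list:
    """Binary sub-index: bit i is 1 iff keyword i appears in the set."""
    return [1 if kw in words else 0 for kw in KEYWORDS]

def inner_product(a: list, b: list) -> int:
    """Number of query keywords present in the document."""
    return sum(x * y for x, y in zip(a, b))

def rank(documents: dict, query_words: set) -> list:
    """Return document ids ordered by descending relevance to the query."""
    q = to_vector(query_words)
    scores = {doc: inner_product(to_vector(words), q)
              for doc, words in documents.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

The secure kNN adaptation mentioned in the text hides both vectors behind randomized matrix transformations while preserving exactly this inner product, which is why the plaintext score is worth understanding first.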
Advantages of Proposed system:
Search results are ranked by the cloud server according to some ranking criteria.
Communication cost is reduced.
The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
The three key considerations involved in the feasibility analysis are economic feasibility, technical feasibility, and social (operational) feasibility.
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must accept it as a necessity. The level of acceptance by the users depends solely on the methods that are employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that he is also able to make constructive criticism, which is welcomed, as he is the final user of the system.
FEASIBILITY STUDY IN THIS PROJECT
The feasibility study is necessary to determine whether the proposed system is feasible considering the technical, operational, and economical factors. By having a detailed feasibility study, the maintainers will have a clear-cut view of the proposed system with its benefits and drawbacks. All projects are feasible given unlimited resources and infinite time; unfortunately, the development of a computer-based system or product is more likely plagued by a scarcity of resources and difficult delivery dates. It is both necessary and prudent to evaluate the feasibility of a project at the earliest possible time. Months or years of effort, thousands or millions of dollars, and untold professional embarrassment can be averted if an ill-conceived system is recognized early in the definition phase.
Feasibility and risk analysis are related in many ways. If project risk is great, the feasibility of producing quality software is reduced. During product engineering, however, we concentrate our attention on four primary areas of interest. The project is technically feasible because of the features mentioned below. The project was developed in Oracle 8.0 and Java with a graphical user interface, which provides a high level of reliability, availability, and compatibility. The preliminary investigation examines project feasibility: the likelihood the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational, and economical feasibility of adding new modules and debugging the old running system. All systems are feasible if they are given unlimited resources and infinite time. The following aspects are considered in the feasibility study portion of the preliminary investigation:
User-friendly: Customers will use the forms for their various transactions, i.e. for adding routes and viewing route details. The customer also wants reports to view the various transactions based on the constraints. These forms and reports are designed to be user-friendly for the client.
Reliability: The package will pick-up current transactions online. Regarding the old transaction, user will enter them in to the system.
Security: The web server and database server should be protected from hacking, virus etc.
Portability: The application will be developed using standard open source software (except Oracle) like Java, Tomcat, a web server, and the Internet Explorer browser. This software works on both Windows and Linux operating systems, so portability problems will not arise.
Availability: The software will always be available.
Maintainability: The system uses a 2-tier architecture. The first tier is the GUI, which is the front end, and the second tier is the database, which uses Oracle as the back end. The front end can be run on different systems (clients), while the database runs at the server. Users access the forms by using their user IDs and passwords.
The technical issues usually raised during the feasibility stage of the investigation includes the following:
Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the data required to use the new system?
Will the proposed system provide adequate response to inquiries, regardless of the number or location of users?
Can the system be upgraded if developed?
Are there technical guarantees of accuracy, reliability, ease of access and data security?
The computerized system takes care of the present existing system's data flow and procedures completely, and should generate all the reports of the manual system besides a host of other management reports.
It should be built as a web-based application with separate web and database servers. This is required because the activities are spread throughout the organization and the customer wants a centralized database. Furthermore, some of the linked transactions take place in different locations.
Open source software like Tomcat, Java, MySQL, and Linux is used to minimize the cost for the customer.
System: Dual Core Processor
Hard Disk: 80 GB
Operating system: Windows XP.
Coding Language: C#.NET
Data Base: MS SQL SERVER 2005
The System Design Document describes the system requirements, operating environment, system and subsystem architecture, files and database design, input formats, output layouts, human-machine interfaces, detailed design, processing logic, and external interfaces. It describes the desired features and operations in detail, including rules and process diagrams.
The Design Phase seeks to develop detailed specifications that emphasize the physical solution to the user’s information technology needs. The system requirements and logical description of the entities, relationships, and attributes of the data that were documented during the Requirements Analysis Phase are further refined and allocated into system and database design specifications that are organized in a way suitable for implementation within the constraints of a physical environment (e.g., computer, database, facilities).
A formal review of the high-level architectural design is conducted prior to detailed design of the automated system/application to achieve confidence that the design satisfies the system requirements, is in conformance with the enterprise architecture and prescribed design standards, to raise and resolve any critical technical and/or project-related issues, and to identify and mitigate project, technical, security, and/or business risks affecting continued detailed design and subsequent lifecycle activities. During the Design Phase, the initial strategy for any necessary training is also begun. Estimates of project expenses are updated to reflect actual costs and estimates for future phases. In addition, the work planned for future phases is redefined, if necessary, based on information acquired during the Design Phase.
The designer’s goal is to produce a model or representation of an entity that will later be built. Once the system requirements have been specified and analysed, system design is the first of the three technical activities (design, code, and test) that are required to build and verify software.
Basic Design Approach
Fig. 4.1: System Overview
Fig. 4.2: Overview for vehicular communications
Data Flow Diagram
Data flows are data structures in motion, while data stores are data structures at rest. Data flows are paths or ‘pipelines’ along which data structures travel, whereas data stores are places where data structures are kept until needed. Hence it is possible that a data flow and a data store would be made up of the same data structure. The data flow diagram is a very handy tool for the system analyst because it gives the analyst the overall picture of the system in a diagrammatic approach.
A DFD is a pictorial representation of the path which data takes from its initial interaction with the existing system until it completes any interaction. The diagram describes the logical data flows rather than the movements of any physical items. The DFD also gives insight into the data that is used in the system, i.e., who actually uses it and whether it is temporarily stored. A DFD does not show a sequence of steps; it only shows what the different processes in a system are and what data flows between them.
Rules for DFD :
Fix the scope of the system by means of context diagrams.
Organize the DFD so that the main sequence of the actions reads left to right and top to bottom.
Identify all inputs and outputs.
Identify and label each process internal to the system with rounded circles.
A process is required for all data transformations and transfers. Therefore, never connect a data store to a data source, a destination, or another data store with just a data flow arrow.
Do not indicate hardware and ignore control information.
Make sure the names of the processes accurately convey everything each process does.
There must be no unnamed processes.
Indicate external sources and destinations of the data, with squares.
Identify all data flows for each process step, except simple Record retrievals.
Label data flow on each arrow.
Use data flow arrows to indicate data movements.
There cannot be an unnamed data flow.
A data flow can’t connect two external entities.
The data flow diagram is a simple graphical notation that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system. The main reason why the data flow diagram technique is so popular is that it is a very simple formalism: it is simple to understand and use.
A Data Flow Diagram model uses a very limited number of primitive symbols to represent the function performed by a system and the data flow among these functions. Starting with a set of high-level functions that a system performs, a DFD model hierarchy represents various sub functions.
There are essentially five different types of symbols used to construct DFDs, they are Functional symbol, External Entity, Dataflow symbol, Data store symbol, Output.
Fig. 4.3: Data Flow Diagram for User.
The Unified Modeling Language (UML) is a general-purpose, developmental, modeling language in the field of software engineering, that is intended to provide a standard way to visualize the design of a system.
UML diagrams are not only used to describe the objects and information structures in an application, but also to show its communication with users. They provide a wide range of usages, from modeling the static view of an application to describing responsibilities for a system.
UML (Unified Modeling Language) is a modeling language used by software developers. UML can be used to develop diagrams and provide users with ready-to-use, expressive modeling examples. Some UML tools generate program language code from UML. UML can be used for modeling a system independent of a platform language.
UML diagrams are used-
To reason about system behavior.
To detect errors and omissions early in the life cycle.
To present proposed designs and communicate with stakeholders.
To understand requirements.
To drive implementation.
Types of UML Diagrams:
The current UML standards call for 13 different types of diagrams: class, activity, object, use case, sequence, package, state, component, communication, composite structure, interaction overview, timing, and deployment.
These diagrams are organized into two distinct groups: structural diagrams and behavioral or interaction diagrams.
Structural UML diagrams: class, object, package, component, composite structure, and deployment diagrams.
Behavioral UML diagrams: activity, use case, state, sequence, communication, interaction overview, and timing diagrams.
UML Use Case Diagram:
A use case diagram is used during the analysis phase of a project to identify the system functionality. It describes the interaction of people or external devices with the system under design. It doesn’t show much detail, but only summarizes some of the relationships between use cases, actors, and systems.
Basically, four elements need to be included in a use case diagram: actors, system, use cases, and relationships. Actors represent whoever or whatever interacts with the system; they can be humans, other computers, or other software systems. Use cases represent the actions that are performed by one or more actors for a particular goal. The system is whatever you are developing.
Fig. 4.4: Use case diagram
UML Class Diagram
A UML class diagram is used not only to describe the object and information structures in an application but also to show the communication with its users. It provides a wide range of usages, from modeling the static view of an application to describing responsibilities for a system.
In a UML class diagram, classes represent an abstraction of entities with common characteristics. Associations represent static relationships between classes. Aggregation is a special type of association in which objects are assembled or configured together to create a more complex object. Composition is a special type of aggregation that denotes strong ownership. Generalization is a relationship in which one model element (the child) is based on another model element (the parent). A dependency is a relationship in which one element, the client, uses or depends on another element, the supplier.
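These relationships map directly onto code. A minimal C# sketch (the class names are invented purely for illustration):

```csharp
using System;
using System.Collections.Generic;

// Generalization: the child (Car) is based on the parent (Vehicle).
class Vehicle { public int Speed; }

class Engine { }
class Person { public string Name; }

class Car : Vehicle
{
    // Composition: the Engine is created and strongly owned by the Car.
    private readonly Engine engine = new Engine();

    // Aggregation: Person objects are assembled into the Car,
    // but they exist independently of it.
    public List<Person> Passengers = new List<Person>();
}

class Program
{
    static void Main()
    {
        var car = new Car { Speed = 60 };
        car.Passengers.Add(new Person { Name = "Driver" });
        Console.WriteLine(car.Speed);             // 60
        Console.WriteLine(car.Passengers.Count);  // 1
    }
}
```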
Fig. 4.5: Class diagram
UML Sequence Diagram
Sequence diagrams are used in the analysis and design phases. A sequence diagram is an interaction diagram that details how operations are carried out. It is often used to depict the chronologically structured event flow through a use case, and it is good at presenting the communication relationships between objects and the messages that trigger those communications.
A lifeline represents a typical instance of a component or class in your system. Messages are shown as arrows; they can be complete, lost, or found; synchronous or asynchronous; call or signal. Activation is used to denote participant activation: once a participant is activated, its lifeline appears. Objects are model elements that represent instances of a class or of classes. Classes in UML show the architecture and features of the designed system. An actor specifies a role played by a user or any other system that interacts with the subject.
Fig. 4.6: Sequence diagram
UML Activity Diagram
The purpose of activity diagram is to describe the procedural flow of actions as part of a larger activity. It is used to model how activities are coordinated to provide a service; to show the events needed to achieve some operation, and to illustrate how the events in a single use case relate to one another.
Activity diagrams consist of activities, states, and transitions between activities and states. The initial state is the starting point on the activity diagram; it is the point at which you begin reading the action sequence. An activity is a unit of work that needs to be carried out. A state defines the current condition of an event or activity. The decision node is introduced in UML to support conditionals in activities; it shows where the exit transition from a state or activity may branch in alternative directions depending on a condition. A synchronization bar represents synchronization of the completion of the incoming activities. Control flow refers to the order in which the individual statements, instructions, or function calls of a program are executed or evaluated. An object flow is drawn like a control flow but as a dashed line instead of a solid one, and it carries objects rather than control.
.NET Framework
.NET Framework (pronounced "dot net") is a software framework developed by Microsoft that runs primarily on Microsoft Windows. Microsoft also produces an integrated development environment largely for .NET software called Visual Studio.
.NET is a programming framework created by Microsoft that developers can use to create applications more easily. As commenter dwroth aptly put it, “a framework is just a bunch of code that the programmer can call without having to write it explicitly.” The primary (but not only) languages developers use to build software on the .NET Framework are C# and Visual Basic. The main tool used by .NET developers is Visual Studio, an Integrated Development Environment (IDE).
.NET is the technology from Microsoft on which all other Microsoft technologies will depend in the future. It is a major technology change introduced by Microsoft to capture the market from Sun’s Java. A few years back, Microsoft had only VC++ and VB to compete with Java, but Java was catching on very fast. With the world depending more and more on the Internet and Web, and Java-related tools becoming the best choice for web applications, Microsoft seemed to be losing the battle. Thousands of programmers moved to Java from VC++ and VB. To recover the market, Microsoft announced .NET.
But Microsoft has a wonderful history of starting late but catching up quickly. This is true in the case of .NET too. Microsoft put its best men to work on a secret project called Next Generation Windows Services (NGWS), under the direct supervision of Mr. Bill Gates.
The outcome of the project is what we now know as .NET. Even though .NET has borrowed most of its ideas from Sun’s J2EE, it has really outperformed its competitor.
Microsoft’s VC++ was a powerful tool, but it was too complex. It had too many data types, and developers had to learn many libraries, including the Windows SDK, MFC, ATL, and COM. There were many data-type compatibility issues when exchanging data between different layers. Visual Basic was too easy, and many serious programmers hated it for just that reason: even though Visual Basic was very easy to use, it was not flexible enough to develop serious applications. Sun’s Java became a very good choice for these reasons. It had the flexibility and power of C++ and at the same time was easy enough to catch the attention of VB programmers.
Microsoft recognized these factors and introduced .NET with all of them in mind. All unwanted complexities were eliminated and a pure object-oriented programming model was introduced. This makes the programmer’s life very easy.
.NET is said to be Microsoft’s development model in which software becomes platform- and device-independent and data becomes available over the Internet. Because of this vision, Microsoft .NET is also called Microsoft’s strategy for connecting systems, information, and devices through web services so people can collaborate and communicate effectively.
The Microsoft .NET vision
The vision is that all devices will some day be connected by a global broadband network (the Internet) and that software will become a service provided over this network. The name “.NET” has been applied to everything from the next version of the Windows operating system to development tools.
Major Problems before .NET:
The following are the major problems developers faced with previous Microsoft technologies during application development and deployment, which have been solved by .NET:
Registration of COM components: COM components had to be registered on the target machine before they could be used by the application. The application had to look up the Windows registry to locate and load the COM components.
Unloading COM components: COM objects also required a special logic for freeing up the objects from memory. This method is known as reference counting of COM objects. It is used to keep track of the number of active references. When an object’s reference count reaches zero, the object is removed from memory. The major problem that arises out of this situation is that of circular reference. If circular references exist between two COM components, they would not be freed from memory.
Versioning Problem (DLL hell): Whenever applications that use COM components were installed on a machine, the installation process would update the registry with the COM components information. Thus, there was a chance that these DLLs would be overwritten when some other applications were installed on the same computer. Therefore, an application that had been referring to one particular DLL would refer to the wrong DLL. This caused a major problem when an application was referring to particular version of a DLL.
THE .NET PLATFORM
The .NET platform is a set of technologies. The Microsoft .NET platform simplifies software development (Windows or Web) by building applications out of XML Web services.
The .NET platform consists of the following core technologies:
The .NET Framework
The .NET Enterprise Servers
Building block services
Visual Studio .NET
A programming model (.NET Framework) that enables developers to build Extensible Markup Language (XML) Web services and applications.
A set of .NET Enterprise Servers, including Windows 2000, Microsoft SQL Server, and Microsoft BizTalk® Server, that integrate, run, operate, and manage XML Web services and applications.
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There’s no language barrier with .NET: there are numerous languages available to the developer including Managed C++, C#, Visual Basic and Java Script. The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).
The .NET Framework must run on an operating system. Currently, the .NET Framework is built to run on the Microsoft Win32® operating systems, such as Windows 2000, Windows XP, and Windows 98. When the .NET Framework runs on Windows 2000, application services (such as Component Services, Message Queuing, Internet Information Services (IIS), and Windows Management Instrumentation (WMI)) are available to developers. The .NET Framework exposes application services through classes in the .NET Framework class library.
There is some client software, such as Windows XP and Windows CE, which helps developers deliver a comprehensive user experience across a family of devices. The building block services are a user-centric set of XML Web services that move control of user data from applications to users. For example, Microsoft Passport is a core component of the .NET initiative that makes it easier to integrate various applications.
XML Web services are programmable Web components that can be shared among applications on the Internet or the intranet. The .NET Framework provides tools and classes for building, testing, and distributing XML Web services.
Visual Studio .NET is a tool, which can be used to develop XML Web services and Windows and Web applications for an enriched user experience.
TYPES OF .NET LANGUAGES
To help create languages for the .NET Framework, Microsoft created the Common Language Infrastructure (CLI) specification. The CLI describes the features that each language must provide in order to use the .NET Framework and common language runtime and to interoperate with components written in other languages. If a language implements the necessary functionality, it is said to be .NET-compliant.
Every .NET-compliant language supports the same data types, uses the same .NET Framework classes, compiles to the same MSIL, and uses a single common language runtime to manage execution. Because of this, every .NET-compliant language is a first-class Microsoft .NET citizen. Developers are free to choose the best language for a particular component without losing any of the power and freedom of the platform. In addition, components written in one language can easily interoperate with components written in another language. For example, you can write a class in C# that inherits from a base class written in Visual Basic.
The .NET Framework was developed so that it could support a theoretically infinite number of development languages. Currently, more than 20 development languages work with the .NET Framework. C# is the programming language specifically designed for the .NET platform, but C++ and Visual Basic have also been upgraded to fully support the .NET framework. The following are the commonly used languages provided by the Microsoft:
Although Visual C++ (VC++) has undergone changes to incorporate .NET, VC++ also maintains its status as a platform-dependent programming tool. Many new MFC classes have been added, and a programmer can choose between using MFC and compiling the program into a platform-specific executable file, or using .NET Framework classes and compiling into a platform-independent MSIL file. A programmer can also specify (via directives) whenever he uses “unsafe” code, that is, code that bypasses the CLR, such as the use of pointers.
Of all the .NET languages, Visual Basic .NET (VB.NET) is the one that has probably undergone the most changes. VB.NET may now be considered a complete object-oriented language (as opposed to its previous “half object-based and half object-oriented” status).
Visual Basic .NET provides substantial language innovations over previous versions of Visual Basic. It supports inheritance, constructors, polymorphism, constructor overloading, structured exceptions, stricter type checking, free threading, and many other features. There is only one form of assignment: no Let or Set methods. New rapid application development (RAD) features, such as the XML Designer, Server Explorer, and Web Forms designer, are available in Visual Basic from Visual Studio .NET. With this release, Visual Basic Scripting Edition provides full Visual Basic functionality.
Microsoft has also developed a brand-new programming language, C# (C Sharp). This language makes full use of .NET. It is a pure object-oriented language, and a Java programmer will find most aspects of it identical to Java. If you are a newcomer to Microsoft technologies, this language is the easiest way to get on the .NET bandwagon. While VC++ and VB enthusiasts may stick to VC.NET and VB.NET, they would probably increase their productivity by switching to C#, which was developed to make full use of all the intricacies of .NET. The learning curve of C# for a Java programmer is minimal. Microsoft has also come up with the Microsoft Java Language Conversion Assistant, a tool that automatically converts existing Java-language source code into C# for developers who want to move their existing applications to the Microsoft .NET Framework.
Microsoft has also developed J# (J Sharp). C# is similar to Java, but it is not entirely identical; it is for this reason that Microsoft developed J#, whose syntax is identical to Visual J++. Microsoft’s growing legal battle with Sun over Visual J++ forced Microsoft to discontinue Visual J++, so J# is Microsoft’s indirect continuation of Visual J++. It has been reported that porting a medium-sized Visual J++ project entirely to J# takes only a few days of effort.
JScript .NET is rewritten to be fully .NET aware. It includes support for classes, inheritance, types, and compilation, and it provides improved performance and productivity features. JScript .NET is also integrated with Visual Studio .NET. You can take advantage of any .NET Framework class in JScript .NET.
Microsoft encourages third-party vendors to make use of Visual Studio .NET. Third-party vendors can write compilers for different languages that compile the language to MSIL (Microsoft Intermediate Language). These vendors need not develop their own development environment; they can easily use Visual Studio .NET as an IDE for their .NET-compliant language. A vendor has already produced COBOL.NET, which integrates with Visual Studio .NET and compiles into MSIL. Theoretically, it would then be possible to come up with a Java compiler that compiles into MSIL instead of Java byte code and uses the CLR instead of the JVM. However, Microsoft has not pursued this due to possible legal action by Sun.
Several third party languages are supporting the .NET platform. These languages include APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme and Smalltalk.
OBJECTIVES OF .NET FRAMEWORK:
1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment and versioning conflicts and guarantees safe execution of code.
3. To eliminate the performance problems of scripted or interpreted environments.
There are different types of application, such as Windows-based applications and Web-based applications.
FEATURES OF .NET FRAMEWORK
It is a platform neutral framework.
It is a layer between the operating system and the programming language.
It supports many programming languages, including VB.NET, C# etc.
.NET provides a common set of class libraries, which can be accessed from any .NET-based programming language; there is not a separate set of classes and libraries for each language. If you know any one .NET language, you can write code in any .NET language.
In future versions of Windows, .NET will be freely distributed as part of the operating system, and users will never have to install .NET separately.
THE .NET FRAMEWORK
The .NET Framework has two main parts:
The Common Language Runtime (CLR).
A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are
Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running code.
Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing:
Managed Code
Managed code is code that targets .NET and contains certain extra information (“metadata”) to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, as well as garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic .NET, and JScript .NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you are using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that does not get garbage collected but instead is looked after by unmanaged code.
Common Type System
The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.
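A small illustration of the CTS: the C# keyword int is just an alias for the CTS type System.Int32, the same type every .NET language shares.

```csharp
using System;

class CtsDemo
{
    static void Main()
    {
        int a = 42;     // C# language keyword
        Int32 b = 42;   // the underlying CTS type

        // Both names refer to exactly the same runtime type,
        // which is what makes cross-language interoperation possible.
        Console.WriteLine(a.GetType() == b.GetType()); // True
        Console.WriteLine(a.GetType().FullName);       // System.Int32
    }
}
```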
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes containing over 7,000 types. The root of the namespace is called System; it contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
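The conversion between value types and object types mentioned above is known as boxing and unboxing; a minimal sketch:

```csharp
using System;

class BoxingDemo
{
    static void Main()
    {
        int value = 10;           // value type (can live on the stack)
        object boxed = value;     // boxing: the value is copied into a heap object
        int unboxed = (int)boxed; // unboxing: an explicit cast copies it back

        Console.WriteLine(boxed.GetType().Name); // Int32
        Console.WriteLine(unboxed);              // 10
    }
}
```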
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET the Finalize procedure is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.
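A brief sketch of both in C# (the class and resource names are invented for illustration; the destructor syntax below is what the C# compiler turns into a Finalize method):

```csharp
using System;

class ResourceHolder
{
    private readonly string name;

    // Constructor: initializes the object.
    public ResourceHolder(string name)
    {
        this.name = name;
        Console.WriteLine("Acquired " + name);
    }

    // Destructor (finalizer): called automatically by the runtime
    // when the object is destroyed; it releases the object's resources.
    ~ResourceHolder()
    {
        Console.WriteLine("Released " + name);
    }
}

class Program
{
    static void Main()
    {
        var holder = new ResourceHolder("demo-resource");
        // When the garbage collector later destroys the object,
        // the finalizer above runs without any explicit call.
    }
}
```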
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
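The collection process can be observed with a WeakReference, which tracks an object without keeping it alive. This is only a sketch; the exact moment of collection is up to the runtime.

```csharp
using System;
using System.Runtime.CompilerServices;

class GcDemo
{
    // Kept out of Main so that no live reference to the array remains
    // once this method returns.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference Allocate()
    {
        return new WeakReference(new byte[1024]);
    }

    static void Main()
    {
        WeakReference tracker = Allocate();

        // Ask the garbage collector to run; the unreferenced array is
        // marked for collection and its memory is released.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(tracker.IsAlive ? "still alive" : "collected");
    }
}
```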
Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
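Overloading in C# looks like this (Calculator is an invented example class):

```csharp
using System;

class Calculator
{
    // Three procedures share the name Add; only the argument lists differ.
    public int Add(int a, int b)          { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c)   { return a + b + c; }
}

class Program
{
    static void Main()
    {
        var calc = new Calculator();
        // The compiler picks the overload that matches the arguments.
        Console.WriteLine(calc.Add(2, 3));     // 5
        Console.WriteLine(calc.Add(2.5, 3.5)); // 6
        Console.WriteLine(calc.Add(1, 2, 3));  // 6
    }
}
```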
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
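A minimal multithreading sketch using the System.Threading.Thread class:

```csharp
using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        // A second thread handles one task while Main continues with another,
        // so both run concurrently.
        var worker = new Thread(() => Console.WriteLine("worker: task done"));
        worker.Start();

        Console.WriteLine("main: still responsive");
        worker.Join(); // wait for the worker thread before exiting
    }
}
```

Note that the two lines may appear in either order, since the threads run independently.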
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers to improve the reliability of our applications.
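A short sketch of a structured exception handler in C#:

```csharp
using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int[] numbers = { 1, 2, 3 };
            Console.WriteLine(numbers[10]); // throws at runtime
        }
        catch (IndexOutOfRangeException ex)
        {
            // The catch block detects and handles the runtime error.
            Console.WriteLine("Handled: " + ex.GetType().Name);
        }
        finally
        {
            // The finally block always runs, whether or not an error occurred.
            Console.WriteLine("Cleanup complete");
        }
    }
}
```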
FEATURES OF SQL-SERVER
SQL stands for Structured Query Language. SQL lets you access and manipulate databases. SQL became a standard of the American National Standards Institute (ANSI) in 1986 and of the International Organization for Standardization (ISO) in 1987. Microsoft SQL Server is a relational database management system (RDBMS) that supports a wide variety of transaction processing, business intelligence, and analytics applications in corporate IT environments.
SQL Server and Azure SQL Database support user-defined functions. A user-defined function is a Transact-SQL or common language runtime (CLR) routine that accepts parameters, performs an action, such as a complex calculation, and returns the result of that action as a value.
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services; the term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services, and references to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services. A SQL Server database consists of six types of objects.
What Can SQL do?
SQL can execute queries against a database
SQL can retrieve data from a database
SQL can insert records in a database
SQL can update records in a database
SQL can delete records from a database
SQL can create new databases
SQL can create new tables in a database
SQL can create stored procedures in a database
SQL can create views in a database
SQL can set permissions on tables, procedures, and views
Using SQL in Web Site
To build a web site that shows data from a database, you will need:
An RDBMS database program (e.g., MS Access, SQL Server, MySQL)
To use a server-side scripting language, like PHP or ASP
To use SQL to get the data you want
To use HTML / CSS to style the page
MICROSOFT VISUAL STUDIOS
Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It is used to develop computer programs for Microsoft Windows, as well as web sites, web applications, and web services. Visual Studio uses Microsoft software development platforms such as the Windows API, Windows Forms, Windows Presentation Foundation, Windows Store, and Microsoft Silverlight.
Visual Studio supports 36 different programming languages and allows the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists.
Visual Studio does not support any programming language, solution or tool intrinsically; instead, it allows the plugging of functionality coded as a VSPackage. When installed, the functionality is available as a Service. The IDE provides three services: SVsSolution, which provides the ability to enumerate projects and solutions; SVsUIShell, which provides windowing and UI functionality (including tabs, toolbars and tool windows); and SVsShell, which deals with registration of VSPackages. In addition, the IDE is also responsible for coordinating and enabling communication between services. All editors, designers, project types and other tools are implemented as VSPackages. Visual Studio uses COM to access the VSPackages. The Visual Studio SDK also includes the Managed Package Framework (MPF), which is a set of managed wrappers around the COM-interfaces that allow the Packages to be written in any CLI compliant language. However, MPF does not provide all the functionality exposed by the Visual Studio COM interfaces. The services can then be consumed for creation of other packages, which add functionality to the Visual Studio IDE.
Support for programming languages is added by using a specific VSPackage called a Language Service. A language service defines various interfaces which the VSPackage implementation can implement to add support for various functionalities. Functionalities that can be added this way include syntax coloring, statement completion, brace matching, parameter information tooltips, member lists and error markers for background compilation. If the interface is implemented, the functionality will be available for the language. Language services are implemented on a per-language basis. The implementations can reuse code from the parser or the compiler for the language. Language services can be implemented either in native code or managed code. For native code, either the native COM interfaces or the Babel Framework (part of Visual Studio SDK) can be used. For managed code, the MPF includes wrappers for writing managed language services.
Features of Microsoft Visual Studio:
Like any other IDE, it includes a code editor that supports syntax highlighting and code completion.
Visual Studio includes a debugger that works both as a source-level debugger and as a machine-level debugger. It works with both managed code as well as native code and can be used for debugging applications written in any language supported by Visual Studio. In addition, it can also attach to running processes and monitor and debug those processes. If source code for the running process is available, it displays the code as it is being run. If source code is not available, it can show the disassembly. The Visual Studio debugger can also create memory dumps as well as load them later for debugging. Multi-threaded programs are also supported. The debugger can be configured to be launched when an application running outside the Visual Studio environment crashes.
Visual Studio includes a host of visual designers to aid in the development of applications. These tools include:
Windows Forms Designer
The Windows Forms designer is used to build GUI applications using Windows Forms. Layout can be controlled by housing the controls inside other containers or locking them to the side of the form. Controls that display data (like textbox, list box and grid view) can be bound to data sources like databases or queries. Data-bound controls can be created by dragging items from the Data Sources window onto a design surface. The UI is linked with code using an event-driven programming model. The designer generates either C# or VB.NET code for the application.
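The event-driven model the designer wires up can be sketched without the actual Windows Forms library; Button here is an invented stand-in for a real control:

```csharp
using System;

class Button // minimal stand-in for a Windows Forms control
{
    public event EventHandler Click;
    public void SimulateClick() => Click?.Invoke(this, EventArgs.Empty);
}

class EventDemo
{
    static void Main()
    {
        var button = new Button();

        // The designer generates a handler hookup like this one;
        // the UI is linked to code through such event subscriptions.
        button.Click += (sender, e) => Console.WriteLine("Button clicked");

        button.SimulateClick();
    }
}
```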
WPF Designer
The WPF designer, codenamed Cider, was introduced with Visual Studio 2008. Like the Windows Forms designer it supports the drag and drop metaphor. It is used to author user interfaces targeting Windows Presentation Foundation. It supports all WPF functionality including data binding and automatic layout management. It generates XAML code for the UI. The generated XAML file is compatible with Microsoft Expression Design, the designer-oriented product. The XAML code is linked with code using a code-behind model.
Class Designer
The Class Designer is used to author and edit the classes (including its members and their access) using UML modeling. The Class Designer can generate C# and VB.NET code outlines for the classes and methods. It can also generate class diagrams from hand-written classes.
Data Designer
The data designer can be used to graphically edit database schemas, including typed tables, primary and foreign keys and constraints. It can also be used to design queries from the graphical view.
Mapping Designer
From Visual Studio 2008 onwards, the mapping designer is used by LINQ to SQL to design the mapping between database schemas and the classes that encapsulate the data. The newer ORM solution, the ADO.NET Entity Framework, replaces and improves on the older technology.
Open Tabs Browser
The open tabs browser is used to list all open tabs and to switch between them. It is invoked using CTRL+TAB.
The Properties Editor tool is used to edit properties in a GUI pane inside Visual Studio. It lists all available properties (both read-only and those which can be set) for all objects including classes, forms, web pages and other items.
The Object Browser is a namespace and class library browser for Microsoft .NET. It can be used to browse the namespaces (which are arranged hierarchically) in managed assemblies. The hierarchy may or may not reflect the organization in the file system.
In Visual Studio parlance, a solution is a set of code files and other resources that are used to build an application. The files in a solution are arranged hierarchically, which might or might not reflect the organization in the file system. The Solution Explorer is used to manage and browse the files in a solution.
Team Explorer is used to integrate the capabilities of Team Foundation Server, the revision control system, into the IDE (and is the basis for Microsoft's CodePlex hosting environment for open-source projects). In addition to source control it provides the ability to view and manage individual work items (including bugs, tasks and other documents) and to browse TFS statistics. It is included as part of a TFS install and is also available as a separate download for Visual Studio. Team Explorer is also available as a stand-alone environment solely to access TFS services.
Data Explorer is used to manage databases on Microsoft SQL Server instances. It allows creation and alteration of database tables (either by issuing T-SQL commands or by using the Data designer). It can also be used to create queries and stored procedures, with the latter in either T-SQL or in managed code via SQL CLR. Debugging and IntelliSense support is available as well.
The Server Explorer tool is used to manage database connections on an accessible computer. It is also used to browse running Windows Services, performance counters, Windows Event Log and message queues and use them as a datasource.
PreEmptive Protection-Dotfuscator Community Edition
Visual Studio includes a free ‘light’ version of Dotfuscator by PreEmptive Solutions which obfuscates and hardens applications to help secure trade secrets (IP), reduce piracy/counterfeiting, protect against tampering and unauthorized debugging. Dotfuscator works with all flavors of .NET including ASP.NET, Xamarin, Unity and UWP.
Text Generation Framework
Visual Studio includes a full text generation framework called T4 which enables Visual Studio to generate text files from templates either in the IDE or via code.
ASP.NET Web Site Administration Tool
The ASP.NET Web Site Administration Tool allows for the configuration of ASP.NET websites.
Visual Studio Tools for Office
Visual Studio Tools for Office is an SDK and an add-in for Visual Studio that includes tools for developing for the Microsoft Office suite. Previously (for Visual Studio .NET 2003 and Visual Studio 2005) it was a separate SKU that supported only the Visual C# and Visual Basic languages or was included in the Team Suite. With Visual Studio 2008, it is no longer a separate SKU but is included with Professional and higher editions. A separate runtime is required when deploying VSTO solutions.
Visual Studio allows developers to write extensions for Visual Studio to extend its capabilities. These extensions “plug into” Visual Studio and extend its functionality. Extensions come in the form of macros, add-ins, and packages. Macros represent repeatable tasks and actions that developers can record programmatically for saving, replaying, and distributing. Macros, however, cannot implement new commands or create tool windows. They are written using Visual Basic and are not compiled. Add-Ins provide access to the Visual Studio object model and can interact with the IDE tools. Add-Ins can be used to implement new functionality and can add new tool windows. Add-Ins are plugged into the IDE via COM and can be created in any COM-compliant languages. Packages are created using the Visual Studio SDK and provide the highest level of extensibility. They can create designers and other tools, as well as integrate other programming languages. The Visual Studio SDK provides unmanaged APIs as well as a managed API to accomplish these tasks. However, the managed API isn’t as comprehensive as the unmanaged one. Extensions are supported in the Standard (and higher) versions of Visual Studio 2005. Express Editions do not support hosting extensions.
System Implementation describes how the information system will be deployed, installed and transitioned into an operational system. The plan contains an overview of the system, a brief description of the major tasks involved in the implementation, the overall resources needed to support the implementation effort (such as hardware, software, facilities, materials, and personnel), and any site-specific implementation requirements. The plan is developed during the Design Phase and is updated during the Development Phase; the final version is provided in the Integration and Test Phase and is used for guidance during the Implementation Phase. The outline shows the structure of the Implementation Plan.
Implementation is the carrying out, execution, or practice of a plan, a method, or any design, idea, model, specification, standard or policy for doing something. As such, implementation is the action that must follow any preliminary thinking in order for something to actually happen. It is the phase where visions and plans become reality. This is the logical conclusion, after evaluating, deciding, visioning, planning, applying for funds and finding the financial resources of a project.
Implementation is the process of having systems personnel check out and put new equipment into use, train users, install the new application, and construct any data files needed by it. Depending on the size of the organization that will be involved in using the application, and the risk associated with its use, system developers may choose to try out the operation in only one area of the firm, say within a single department or with only one or two persons.
The main plan for the system developed is to upgrade the existing system to the proposed system. There are mainly four methods of upgrading the existing system to the proposed one:
Parallel Run Method.
Direct Cut-Over Method.
Pilot Method.
Phased Method.
Parallel Run System:
It is the most secure method of converting from an existing system to a new one. In this approach both systems run in parallel for a specific period of time. During that period, if any serious problems are identified while using the new system, the new system is dropped and the older system is taken as the starting point again.
Direct Cut-Over Method:
In this approach the old system is stopped and a working version of the new system is put into operation throughout the organization all at once, on a planned cut-over date. It is the quickest conversion method, but also the riskiest, since there is no older system to fall back on if the new system fails.
Pilot Method:
A pilot conversion is a hardware or software migration method that involves rolling out the new system to a small group of users for testing and evaluation. During the pilot implementation, the test group users can provide valuable feedback on the system to make the eventual rollout to all users go more smoothly.
Phased Method:
In this method a part of the system is implemented first, and the remaining parts are implemented over time.
Implementation Plan Used:
The Workflow Management system is developed on the basis of the "Parallel Run" method, because we upgraded a system already in use to fulfill the requirements of the end user. The system already in use is treated as the old system; the new system is developed on the basis of the old system and maintains the standards established by it. The upgraded system works well and has been implemented successfully at the client site.
Creating the sender system
Delivery of packet
Vehicle positioning is a key requirement for many safety applications. Active safety systems require precise vehicle positioning in order to assess the safety threats accurately, especially for those systems which are developed for warning/intervention in safety critical situations. When warning drivers of a local hazard (e.g. an accident site), accurate vehicle location information is important for warning the right driver groups at the right time.
Global positioning system and digital maps have become major tools for vehicle positioning, providing not only vehicle location information but also a geometry preview of the road being used. Advances in wireless communication have made it possible for a vehicle to share its location information with other vehicles and traffic operation centres, which greatly increases the opportunities to apply vehicle positioning technologies for improving road safety. This paper presents a state-of-the-art review of vehicle positioning requirements for safety applications and vehicle positioning technologies. The paper also examines key issues relating to current and potential future applications of vehicle positioning technologies for improving road safety.
Intelligent Transportation Systems (ITS) have emerged to utilize different technologies to enhance the performance and quality of transportation networks. Many applications of ITS need to have a highly accurate location information from the vehicles in a network. The Global Positioning System (GPS) is the most common and accessible technique for vehicle localization. However, conventional localization techniques which mostly rely on GPS technology are not able to provide reliable positioning accuracy in all situations.
The rapid development of the internet has brought huge benefits and social impacts; however, internet security has also become a great problem for users, since traditional approaches to packet classification cannot achieve satisfactory detection performance due to their low accuracy and efficiency.
The workflow of the system involves two stages: the training stage and the detection stage. In the training stage, the system initially captures characteristic patterns from a set of application packet flows. After this training is completed, the detection stage allows the user to detect the target application by capturing new application flows.
Packet switching is a method of grouping data which is transmitted over a digital network into packets which are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination where the payload is extracted and used by application software. Packet switching is the primary basis for data communications in computer networks worldwide. The first step in the packet switching process is to break the data to be sent down into blocks of around a few hundred bytes in size. Each packet is then given a destination IP address and forwarded on to a router that is closer to that destination, which forwards the data on to another router, and so forth.
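The packetization process described above can be sketched as follows. This is a minimal illustrative model, not a real protocol: the header fields (`seq`, `dest`, `length`) and the 300-byte block size are assumptions chosen for the example.

```python
# Illustrative sketch of packet switching: split a payload into
# fixed-size blocks and attach a small header to each. The header
# fields and block size are assumptions for this example only.

def packetize(data: bytes, dest_ip: str, block_size: int = 300) -> list[dict]:
    """Break `data` into packets of at most `block_size` payload bytes,
    each carrying a header with a sequence number and destination address."""
    packets = []
    for seq, offset in enumerate(range(0, len(data), block_size)):
        payload = data[offset:offset + block_size]
        packets.append({
            "header": {"seq": seq, "dest": dest_ip, "length": len(payload)},
            "payload": payload,
        })
    return packets

def reassemble(packets: list[dict]) -> bytes:
    """At the destination: order packets by sequence number and join payloads."""
    ordered = sorted(packets, key=lambda p: p["header"]["seq"])
    return b"".join(p["payload"] for p in ordered)
```

For example, a 700-byte message would be split into three packets (300, 300 and 100 payload bytes), each routed independently and reassembled in order at the destination.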
Here we check whether the packet has reached the destination or not. We check for packet loss.
Packet loss occurs when one or more packets transmitted over an IP network fail to arrive at their destination. Packet loss is typically caused by what is generally referred to as "network congestion," which in itself can have a number of actual causes. Packet loss is measured as the percentage of packets lost compared to packets transmitted. Packet loss can reduce throughput for a given sender, whether unintentionally due to network malfunction, or intentionally as a means of balancing available bandwidth between multiple senders when a given router or network link nears its maximum capacity.
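The packet loss measurement described above is straightforward to compute; a minimal sketch (function name and the zero-sent convention are our own choices):

```python
def packet_loss_percent(sent: int, received: int) -> float:
    """Packet loss as the percentage of packets lost compared to
    packets transmitted. If nothing was sent, report 0% by convention."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent * 100.0
```

For instance, if 1000 packets are transmitted and 950 arrive, the loss is 5%.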
CAUSES FOR PACKET LOSS
Faulty hardware or cabling
In a network based on packet switching, transmission delay (or store-and-forward delay, also known as packetization delay) is the amount of time required to push all of the packet's bits onto the wire. In other words, this is the delay caused by the data rate of the link.
CAUSES FOR TRANSMISSION DELAYS
• Queuing delays
• Transmission delays
• Propagation delays
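Two of the delays above can be sketched numerically: transmission delay is the packet size in bits divided by the link's data rate, and propagation delay is the link length divided by the signal propagation speed. The numeric values used below are illustrative assumptions.

```python
def transmission_delay(packet_bits: int, link_rate_bps: float) -> float:
    """Time to push all of the packet's bits onto the wire: L / R seconds."""
    return packet_bits / link_rate_bps

def propagation_delay(distance_m: float, speed_mps: float = 2e8) -> float:
    """Time for a bit to travel the length of the link: d / s seconds.
    2e8 m/s is a common assumption for signal speed in copper or fibre."""
    return distance_m / speed_mps
```

For example, a 1500-byte (12,000-bit) packet on a 1 Mbps link takes 12 ms to transmit, while the same packet takes only 0.5 ms to propagate across a 100 km link.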
Deadlock is one of the most serious system failures that can occur in a computer system or a network. Deadlock states have been observed in existing computer networks emphasizing the need for carefully designed flow control procedures (controllers) to avoid deadlocks.
Deadlocks in interconnection networks can occur as a result of cyclic resource dependencies formed when messages hold onto some resources (i.e., virtual channels) while waiting to acquire others.
4. DELIVERY OF PACKET
Here we check whether the packet is delivered to the destination or not. Packet delivery ratio is defined as the ratio of data packets received by the destinations to those generated by the sources. Mathematically, it can be defined as PDR = S1 / S2, where S1 is the sum of data packets received by each destination and S2 is the sum of data packets generated by each source.
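The PDR formula above can be sketched directly; the per-destination and per-source counts in the example are illustrative.

```python
def packet_delivery_ratio(received_per_dest: list[int],
                          generated_per_src: list[int]) -> float:
    """PDR = S1 / S2, where S1 is the sum of data packets received by
    each destination and S2 is the sum generated by each source."""
    s1 = sum(received_per_dest)
    s2 = sum(generated_per_src)
    if s2 == 0:
        raise ValueError("no packets were generated")
    return s1 / s2
```

For example, two destinations receiving 90 and 85 packets out of 100 generated by each of two sources gives a PDR of 175/200 = 0.875.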
Generally there are a few reasons why a packet fails to reach its destination.
Layer 1 (physical) issues are generally signalling level problems.
Contention/congestion related loss generally occurs when at least one point of the network doesn’t have the capacity needed for peak demand.
Routing convergence/changes related drops. These aren't very common on a day-to-day basis but occur frequently when large-scale routing updates are being processed.
Human (configuration) errors are more common than routing convergence drops and can have the same symptoms, like a routing loop, but they never fix themselves. It takes a person to correct the issue.