Category Archives: Computer science homework help

Object-Oriented Programming, Event-Driven Programming, Procedural Programming

There are a number of advantages to using object-oriented programming (OOP) over procedural programming. OOP is a style of programming in which data and the functions that operate on that data are defined together in well-structured classes (Phillips, 2010). In OOP, an object's attributes and behaviour are grouped as a single unit. Procedural programming, on the other hand, breaks complicated programs into smaller procedures. Object-oriented programming is better in that it enables a programmer to reuse code during application development: a developer can reuse an existing class instead of writing new code, which would be time consuming, whereas procedural programming offers far less support for such reuse. Another advantage of OOP is inheritance (Phillips, 2010): a programmer can base a new class on an existing class, or a new object on an existing object, referred to as class inheritance and prototypal inheritance respectively.
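As a brief illustration of the code reuse that inheritance enables, the following sketch shows a derived class reusing a base class's code (the class names here are illustrative choices, not drawn from any particular application):

```python
class Vehicle:
    """Base class: shared attributes and behaviour live in one place."""
    def __init__(self, make, model):
        self.make = make
        self.model = model

    def describe(self):
        return f"{self.make} {self.model}"


class Car(Vehicle):
    """Derived class: inherits Vehicle's code and adds only what is new."""
    def __init__(self, make, model, doors):
        super().__init__(make, model)  # reuse the base class, not rewrite it
        self.doors = doors


car = Car("Toyota", "Corolla", 4)
print(car.describe())  # inherited method, no code duplicated
```

Here `Car` gains `describe` without redefining it, which is the reuse advantage the paragraph above describes.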

The following is an example of a simple class with at least one attribute and one method.

import datetime  # used for date objects


class Person:

    def __init__(self, name, surname, birthdate, address, telephone, email):
        self.name = name
        self.surname = surname
        self.birthdate = birthdate
        self.address = address
        self.telephone = telephone
        self.email = email

    def age(self):
        today = datetime.date.today()
        age = today.year - self.birthdate.year

        if today < datetime.date(today.year, self.birthdate.month, self.birthdate.day):
            age -= 1

        return age


person = Person(
    "Mark",
    "Doe",
    datetime.date(1992, 3, 12),  # year, month, day
    "No. 21 Swift Street, Smallville",
    "555 456 0987",
    "mark.doe@example.com",
)

print(person.name)
print(person.email)
print(person.age())

 

 

A class represents related data grouped together with the functions that act upon that data. The class above represents the personal data of a particular individual. It stores a number of attributes: name, surname, birthdate, address, telephone, and email. The purpose of the __init__ method is to establish a new object from the provided data, while the age method computes the person's age from the stored birthdate. The class, its attributes, and its methods are related in that all of them describe the same object, each in a different way.

Feature of object-oriented programming that Visual Logic Lacks

Visual Logic supports the development of programs with multiple procedures but does not support the development of classes. In Visual Logic, classes or objects cannot share common attributes or features as happens in inheritance; this feature is present in object-oriented programming. The lack of class development has been a major limitation on the application of Visual Logic in programming. In object-oriented programming it is easy for the programmer to establish class hierarchies, which is quite difficult in Visual Logic. Class hierarchies can be readily identified using design class diagrams and other tools from the Unified Modeling Language (Dale & Weems, 2007).

Another drawback of Visual Logic lies in the programming language being used. Visual Logic commonly employs Prolog as its underlying language. In most programming languages, such as Prolog, the programmer is forced to determine all the procedural aspects detailing the execution of the program (Dale & Weems, 2007). In this case the logical semantics are incompatible with the procedural semantics established in the program, which means the programmer must spend more time developing the semantics to use. It is also more expensive, one reason why most programmers opt for object-oriented programming.

Advantage of using event-driven programming, compared to purely procedural programming

The use of event-driven programming has a number of advantages over purely procedural programming. First, event-driven programming is known for its high flexibility compared to procedural programming (Yeager, 2014). This is because in event-driven programming the application flow is controlled by events rather than by a sequential program. In most cases there is no need for users to understand how tasks in an event-driven program are executed. Procedural programming executes commands in a particular order, which leads to rigidity in the execution of tasks and limits how users can perform them. Owing to this limitation, procedural programming is better suited to small projects, for instance giving a computer instructions for simple tasks such as multiplying numbers and displaying the results.

Another advantage of event-driven programming is that it offers robustness (Yeager, 2014). Event-driven programming is less sensitive to the order of activities performed by users. In procedural programming, the sequence of all activities must be maintained and well thought out during the development stages: the programmer must anticipate every sequence of activities a user might perform while using the program, and then identify the feedback for each anticipated step. Signals provide crucial feedback upon which future decisions are based. This makes purely procedural programming less robust than event-driven programming. Event-driven programming is reaction-bound in nature (Yeager, 2014), meaning that it works by receiving signals or events from users, whereas procedural programming acts rather than reacts. A further benefit of event-driven programming is that it is service oriented and time driven.
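The reaction-bound style described above can be sketched in a few lines: handlers are registered for named events and run only when, and in whatever order, those events fire (the event names and handlers here are illustrative assumptions):

```python
# A minimal event-driven sketch: the program reacts to events
# instead of executing a fixed sequence of commands.

handlers = {}

def on(event, handler):
    """Register a handler to run whenever `event` fires."""
    handlers.setdefault(event, []).append(handler)

def fire(event, *args):
    """Dispatch an event to every handler registered for it."""
    for handler in handlers.get(event, []):
        handler(*args)

log = []
on("click", lambda: log.append("button clicked"))
on("keypress", lambda key: log.append(f"key pressed: {key}"))

# Events may arrive in any order the user triggers them:
fire("keypress", "a")
fire("click")
print(log)
```

Because control flow is driven by whichever event arrives next, the program never assumes a fixed order of user activities, which is the robustness the paragraph above describes.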

References

Dale, N. B., & Weems, C. (2007). Programming and problem solving with Java (2nd ed.). Jones & Bartlett Publishers.

Phillips, D. (2010). Python 3 object oriented programming: Harness the power of Python 3 objects. Birmingham, U.K.: Packt Pub.

Yeager, D. P. (2014). Object-oriented programming languages and event-driven programming. Dulles, VA: Mercury Learning and Information.

 

 

Failed Information Technology Projects

Question

There can be numerous factors that contribute to failed information technology projects. Are there consistent factors that emerge in all failed projects, or is each project unique? Analyse the case history of an IT system that failed and explain why it failed. Propose interventions that might have prevented the failure, including explanations for why you think these interventions should work.

Answer

Failed Information Technology Projects

Information technology (IT) projects are complex to implement and run effectively. Often, new projects experience partial or, in the worst cases, total failure. Statistics indicate that more than half of information technology projects fail for various reasons, such as poor planning, ineffective controls, lack of a clear scope, and poor change management, among others. While other industries also suffer from project failure, the IT industry is more susceptible to the risk of failure than others. This paper will analyze the case history of HealthCare.gov, giving reasons that contributed to its failure and possible preventative measures that could have been taken.

In 2013, the U.S. federal government launched HealthCare.gov, which was meant to be a digital health platform that could provide health services to millions of Americans with ease. Soon after the website was launched, over 20 million Americans visited the site, but only about 500,000 were able to access it (Johnson & Reed, 2013). Even then, only a smaller number were able to obtain medical coverage. Just as in other failed IT projects, the reasons for the failure were consistent. Most IT projects fail due to unclear objectives, inadequate skills, poor stakeholder consultation, lack of a clear project scope, wrong leaders, unrealistic timescales, lack of planning, and poor communication, among other common reasons. In the case of HealthCare.gov, the main reasons the project failed were lack of planning and inadequate skills among those involved in the implementation.

According to Johnson & Reed (2013), the company that won the tender to build HealthCare.gov was not the best qualified for the job. Due to a complex tendering process, the company that won the contract was the one able to negotiate the legal process best, not the one best able to handle the complexity of the project. The project management team lacked the required expertise for such a complex project, which was notable in the flawed design of the website. For example, the designers did not integrate a cache into the system, meaning that every request for information had to go through the system database, creating congestion in the system (Angelo, 2015). The project leadership was also not handled correctly. According to Boulton (2013), the project did not have an appointed leader in charge of decision making. This contributed to a series of failures, such as lack of software integration, that eventually led to the failure of the whole project. In addition, several deadlines were missed, especially in developing the health exchanges.

An outdated project management methodology was also used (Angelo, 2015). The waterfall model, a sequential design process, was chosen. This model may not be the best, since it encourages designers to finish a particular process or level of development before proceeding to the next. As a result, it is difficult to make alterations, because no working software is produced until the entire project nears completion, and a high amount of risk and uncertainty is carried through the whole project. It is also difficult to go back and make modifications once the project is in the testing stages. A single launch of the health website meant that it was not tested at the various development stages (Angelo, 2015). The single launch was catastrophic, since the system contained many errors that had not been identified.

There are a number of interventions that could have prevented the failure of the healthcare system website. One such intervention is the use of a highly qualified team of developers. The team of developers from CGI Group lacked experience in complex IT projects, which increased the risk of the project failing. A new trend has emerged whereby the government forms a highly skilled team of IT professionals to implement various projects instead of entrusting them to outside developers. A staged roll-out of the healthcare system would also have prevented the failure, because errors would have been identified at each stage and resolved before the project proceeded to the final launch. Testing and observation of the system at each stage would likewise have prevented the failure, since testing ensures that errors are identified before the project reaches its final stages.

Leadership was clearly lacking in the implementation of the healthcare system project. Had there been strong leadership at the top, the project would likely have been a success. Leadership is important in ensuring that a project proceeds as stipulated, remains within budget, and is completed in the established timeframe. Strong leadership would also have helped ensure that all requirements of the system were met. Communication is crucial in project management; it is the role of the project leader to maintain communication among all the parties involved, such as the government, stakeholders, developers, and users.

In conclusion, IT projects are at a higher risk of failure due to the numerous challenges involved in implementation. Often, IT project failure emanates from recurrent problems such as poor planning and inadequate skills. The HealthCare.gov website failed mainly due to the inadequate skills of its developers and poor project leadership.

 

References

Angelo, R. (2015). HealthCare.gov: A retrospective lesson in the failure of the project stakeholders. Issues in Information Systems, 16(1): 15-20.

Boulton, C. (2013, Dec 24). HealthCare.gov’s Sickly Launch Defined Bad IT Projects in 2013. The Wall Street Journal, p. 12.

Johnson, C., & Reed, H. (2013, Oct. 24). Why the Government Never Gets Tech Right. The New York Times, p. 6.

Project Estimate Using COCOMO II

The COCOMO II model can be used to develop estimates for various purposes, such as performing tradeoffs and planning transaction processing. COCOMO II offers a wide variety of techniques and technologies, and provides much needed support for business software as well as object-oriented software applications. The most important use of the COCOMO II model is in estimating the number of individuals who can successfully develop and implement a project. Cost estimation is important since it helps management make informed decisions, establish competitive bid contracts, and determine a reliable budget (Cook, David, & Leishman, 2004).

Estimation models can be generated simply by assessing particular characteristics of past projects, such as team size, duration, disk usage, and cost. Small projects can be modelled as EFFORT = a*Size + b (Cook, David, & Leishman, 2004); the effort here is a linear function of the size of the project. This can be applied to projects that can be handled by a maximum of three persons. The early design model in COCOMO II can be used to make rough estimates in the early stages of a project's architecture development; it employs a set of new cost drivers and estimating equations. COCOMO II comprises 17 cost drivers. To set each cost driver, a user is supposed to carefully assess the development environment, the project, and the team that will implement it. Cost drivers act multiplicatively on the estimated cost of the entire project. For instance, if the team is to develop software that controls flight on an airplane, the Required Software Reliability cost driver would be rated very high, and an effort multiplier of about 2.6 would be used.

The COCOMO II model makes it possible to estimate the required effort, measured in person-months (Musilek et al., 2002). This is based primarily on an estimate of the software project's size, measured in thousands of Source Lines of Code (KSLOC): Effort = 2.94 × EAF × (KSLOC)^E. EAF is the Effort Adjustment Factor obtained from the cost drivers, and E is the exponent obtained from the five Scale Drivers. For instance, it is common to assign an Effort Adjustment Factor of 1.00 to a project with all nominal Scale Drivers and Cost Drivers; the exponent can then be taken as 1.0997. Assuming the project has 2,000 source lines of code, the COCOMO II projection is that it requires 6.3 person-months of effort. The equation can be written as:

Effort = 2.94 × 1.0 × 2^1.0997 ≈ 6.3 person-months.

The Effort Adjustment Factor is the product of the effort multipliers corresponding to the cost drivers of the entire project. For a project rated very high in complexity (effort multiplier 1.34), with low tool and language experience (effort multiplier 1.09), and with all other cost drivers nominal (effort multiplier 1.00), the EAF is the product of these effort multipliers: EAF = 1.34 × 1.09 ≈ 1.46, and Effort = 2.94 × 1.46 × 2^1.0997 ≈ 9.2 person-months. The schedule equation can then be used to estimate the number of months it would take to complete the entire project; the project duration is derived from the effort projected by the effort equation (Musilek et al., 2002). The COCOMO II calculations are based on projections of the project size given by the Source Lines of Code. Source lines included in the project are those delivered as a component of the product itself; support software and test drivers are excluded. In addition, the source lines counted are those written by the project staff, excluding those generated by applications.

The COCOMO II model also tries to give a sufficient size estimate. This can be difficult, especially if the only data available concerns effort. New code must be included in the calculations in order to give an accurate estimate. Normal application development entails code reused from elsewhere (modified or used as-is), newly developed code, or automatically translated code. Adjustment factors enable programmers to capture alterations in code, the amount of design and testing involved, and the programmers' familiarity with and general understanding of the code. The COCOMO II model makes a number of assumptions, especially in relation to the application size in KSLOC (Cook, David, & Leishman, 2004).
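The effort equation and the two worked examples above can be checked with a short script (the helper function is an illustrative sketch using the constants from the text, not part of any COCOMO tool):

```python
# Sketch of the COCOMO II effort equation: Effort = 2.94 * EAF * KSLOC**E.
# The coefficient 2.94 and exponent 1.0997 are the values used in the text.

def cocomo_effort(ksloc, eaf=1.0, exponent=1.0997):
    """Estimated effort in person-months for a project of `ksloc` KSLOC."""
    return 2.94 * eaf * ksloc ** exponent

# Nominal project, 2,000 source lines of code (2 KSLOC):
print(round(cocomo_effort(2.0), 1))        # ~6.3 person-months

# Very high complexity (1.34) and low tool/language experience (1.09):
eaf = 1.34 * 1.09                          # EAF ~ 1.46
print(round(cocomo_effort(2.0, eaf), 1))   # ~9.2 person-months
```

Multiplying in the two non-nominal effort multipliers raises the estimate from about 6.3 to about 9.2 person-months, showing how strongly cost drivers scale the final figure.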

References

Cook, D., David, A., & Leishman, T. R. (2004). Lessons Learned From Software Engineering Consulting. The Journal of Defense Software Engineering, 17(2), 4-6.

Musilek, P., Pedrycz, W., Nan, S., & Succi, G. (2002). On the sensitivity of COCOMO II software cost estimation model. Proceedings of the Eighth IEEE Symposium on Software Metrics.

Cost Estimation Proposal

Question

The triple constraints for creating cost estimates include resources, time, and money. What cost estimation approaches provide consistent triangulation of the triple constraints for creating accurate estimates? Should different approaches be used for different types of projects?

For this Assignment, you will practice using WBS to create a cost estimate and model for building a new, state-of-the-art multimedia classroom for your organisation. The timeline for this project is six months.

Answer

Cost Estimation Proposal

Figure 1.1. Work Breakdown Structure for a Multimedia Classroom Construction Project

Assumptions

The first assumption in this project is that the multimedia classrooms will be used by master's students pursuing computer science, information technology, and engineering courses at the university. The second assumption is that there will be no significant changes in the cost of the equipment outlined for the construction of the multimedia classroom during the entire period. Third, it is assumed that the systems, materials, and equipment will be as indicated in this cost estimation proposal. Lastly, it is assumed that all current and new funding sources that may arise during the building of the multimedia classroom will cover all expenses and ensure all facilities are in place.

Cost estimations for personnel and materials required

Project management

The entire project will take a period of 6 months. During this period, the team will ensure that 20 high-end personal computers can facilitate learning in a multimedia classroom. The demands of this project are not high, considering the number of computers and facilities to be installed relative to the timeframe given. As such, a project manager and two other individuals will be able to complete the work in time. The project manager will receive remuneration of $80 per hour and will work 4 hours a day, 5 days a week, making a total of 80 hours a month. Over the entire period, the project manager will work a total of 480 hours, at a total cost of $38,400.

The two staff members will also be responsible for training. They will each receive remuneration of $50 per hour and will work a total of 160 hours per month, or 960 hours for the entire project. The hourly wages for the project manager and the team members were arrived at based on the mean hourly wages provided by the Bureau of Labor Statistics for 2015. Since the project has only a 6-month duration, it was agreed that the project manager and team members would receive above-market remuneration. According to the United States Department of Labor (2015), individuals in computer occupations earned a mean hourly wage of $41.43 in 2015.

Hardware components

The project requires 20 high-end personal computers. The brand chosen was Apple; specifically, the Apple iMac 27″ has been identified as the best for this project. The personal computers' specific features are: a 3.2 GHz 4th-generation Intel Core i5, 8 GB RAM, and a 1 TB hard drive. These computers can be acquired at a cost of $1,999 each (“iMac,” 2016). The network server to be installed will be an Apple Xserve, priced at $2,999. Internet access for all 20 personal computers will cost $10,000; this will provide high speeds, unlike cheaper options in the market. Reliability and stability of internet connectivity were also considered in deciding on the local area network for internet access. This cost includes installation as well as configuration charges for the computers, server, and instructor station. The instructor station will be set up at a cost of $2,500. The projector system will cost $1,254, divided as follows: $804 for a BenQ 1080p Full HD DLP projector and $450 for an EluneVision Luna 106″ projection screen.

The following table shows the cost estimates for the entire project.

Item                            Units/hours   Cost per unit/hour   Subtotal    Percentage of total
Labor
  Project manager               480 hours     $80                  $38,400
  Team member 1                 960 hours     $50                  $48,000
  Team member 2                 960 hours     $50                  $48,000
  Labor total                                                      $134,400    70%
Hardware and other components
  High-end personal computers   20            $1,999               $39,980
  Network server                1             $2,999               $2,999
  Internet access/LAN           1             $10,000              $10,000
  Instructor station            1             $2,500               $2,500
  Projector                     1             $804                 $804
  Screen                        1             $450                 $450
  Hardware total                                                   $56,733     30%
Grand total                                                        $191,133    100%

 

From the cost estimation table above, it is clear that labor costs make up a large proportion of the entire budget: 70 percent, while hardware and other components comprise the remaining 30 percent. Team members take the largest share of the labor costs; the project manager will work half time, as he or she will only be involved in supervising the project. Among the hardware and other components, the high-end personal computers take up the highest share of the budget. It is important to acquire high-end computers that provide high speed, storage capacity, and a high-quality display, and the Apple iMac 27″ delivers all of these. In addition, Apple personal computers offer good compatibility with various software and spreadsheet applications, and the iMac 27″ comes with multiple software packages pre-installed.
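As a sanity check, the subtotals and percentages in the table can be reproduced from the unit figures given in the estimate:

```python
# Reproduce the cost table from its unit figures (hours x rate, units x price).

labor = {
    "Project manager": 480 * 80,
    "Team member 1": 960 * 50,
    "Team member 2": 960 * 50,
}
hardware = {
    "Personal computers": 20 * 1999,
    "Network server": 2999,
    "Internet access/LAN": 10000,
    "Instructor station": 2500,
    "Projector": 804,
    "Screen": 450,
}

labor_total = sum(labor.values())            # $134,400
hardware_total = sum(hardware.values())      # $56,733
grand_total = labor_total + hardware_total   # $191,133

print(f"Labor:    ${labor_total:,} ({labor_total / grand_total:.0%})")
print(f"Hardware: ${hardware_total:,} ({hardware_total / grand_total:.0%})")
print(f"Total:    ${grand_total:,}")
```

Running this confirms the 70/30 split between labor and hardware shown in the table.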

References

iMac. (2016). Buy all-new iMac with breakthrough improvements – Apple. Retrieved from http://www.apple.com/ca/shop/buy-mac/imac

United States Department of Labor. (2015). May 2015 National Occupational Employment and Wages Estimates United States. Retrieved from: http://www.bls.gov/oes/current/oes_nat.htm#15-0000

Intranet project and Self-service portal system for Dingwow Inc.

Business goals and project goals

There are a number of business goals associated with the project. First, the self-service portal aims to enable users to combine data from a variety of sources into a configurable and highly flexible interface. Second, the self-service portal will provide a customizable and self-sufficient IT solution that will bring more benefits to the company. The major business goal is to cut the costs of running the business, particularly administrative costs. Another benefit is streamlining common processes in the organization, which will enhance smooth workflow between departments and the various branches nationwide; sharing information between departments and branches in different geographical locations will be much easier. The self-service portal solution is tailored to meet the needs of both large and medium organizations (Meyler & Bengtsson, 2015). Other goals include centralization of service requests for internal services, built-in customizable routing, capture of form data from various formats such as Word and PDF forms, and establishment of a central portal for common forms such as expense and time sheets.

Scope

The employee self-service portal system is tailored with users in mind, considering a number of aspects including the ease of training them (Meyler & Bengtsson, 2015). Self-service users should be able to log incidents, make requests, interact in chat or through a live feed, view articles, and search the database for information using a friendly website. A self-service portal system should cater to four basic categories or functions of human resource management: benefits, payroll, organizational administration, and human resources. The self-service encompasses various administrative services such as benefit services, employee communications, and data updates. Management productivity services to be covered include salary actions, approvals, and employee change actions. The organization may choose to allow employees to update certain information or merely to view it, depending on its sensitivity.

Time and budget constraints

The majority of organizations face time constraints during the implementation of a self-service portal. These may be occasioned by governance or compliance issues that interfere with installation operations. Some organizations also lack adequate resources to cater for upgrading their systems. Disparate IT infrastructures may be a key hindrance as well and add to the costs of implementing the self-service portal. For instance, the organization may have large volumes of paperwork that require manual processing, which may escalate the cost of the project and increase completion time. Process design and implementation may take time due to the intricate processes and stages involved: developers must first document the existing processes, optimize them, develop new policies, and create a pilot implementation, among other time-consuming steps (Ellermann et al., 2013).

General and technical requirements

There are specific general and technical requirements in the application of a self-service portal. The general requirements include accelerators, configuration templates, pre-configured self-service human resource modules for employees, process designs, and tools. The user infrastructure required consists of access channels, which mainly involve web browsers, design-time tools, Web Dynpro, and the portal runtime. The specific user infrastructure services include page building, search, personalization, collaboration, and navigation, among others. A number of portal deployment options are available, such as a single central portal, separate portals, and a federated portal network. Other key components of a self-service portal include the internet (SSL), an application gateway, a load balancer, customer, reporting, and corporate portals, TREX software, a directory server, and an Enterprise Resource Planning system (Ellermann et al., 2013).

Training and documentation

It is important to conduct training and documentation following the implementation of a self-service portal system in the organization (Meyler & Bengtsson, 2015). Employee development is critical to the implementation of a self-service portal in order to optimize usage; it takes the form of training, which provides insights to employees, and a pilot program can give members great opportunities for training. It is also important to define and document all existing processes. All provisioning and deployment processes should bear formal definitions, especially during the request phase of the project, and the IT group should document all existing processes before beginning to automate deployment and provisioning. Each process should be clearly defined based on key metrics, ensuring that manual processes are fully optimized.

Installation

System Center Service Manager 2012 R2 contains a built-in self-service web portal that enables users to access the software or other applications available to them (Damati, 2015). The portal can only be installed on a WF Management Server, not on the DW Management Server; other types of server can support the installation. For a successful installation, one must have System Center 2012 R2 Service Manager with Update Rollup 8 or later. The installation also requires importing the portal .mpb management pack so that the web pages can be displayed. A number of features must be activated for a complete installation: the Web Server (IIS) role, .NET Framework 3.5 features, ASP.NET 4.5, and Windows Authentication. Configuring the client server details and the self-service portal server completes the installation (Damati, 2015).

References

Damati, M. (2015). System Center Service Manager 2012 R2 – UR8 – Deploy the new Self-Service Portal. Retrieved from http://blogs.technet.com/b/modamati/archive/2015/11/22/system-center-service-manager-2012-r2-ur8-deploy-the-new-self-service-portal-my-experience.aspx

Ellermann, T., Wilson, K., Nielsen, K., & Clark, J. (2013). Microsoft System Center: Optimizing Service Manager. New York, NY: Microsoft Press.

Meyler, K., & Bengtsson, A. (2015). System Center 2012 service manager unleashed. Indianapolis, IN: Sams.

Services in my Area

Identify the DSL and cable modem services referenced in this assignment

The local carriers in the region include Verizon, AT&T, and Road Runner. Verizon Communications Inc. provides digital subscriber line (DSL) technology; specifically, Verizon offers what is commonly referred to as asymmetric DSL. It also operates an optical-carrier service that uses fiber optics to carry data to users, although that is not a cable modem internet technology. Verizon provides its services in numerous states, such as Louisiana, the District of Columbia, Connecticut, and Kansas, among others. AT&T provides the DSL internet access to which I am currently subscribed. AT&T's offering is not compatible with dial-up or cable modems, meaning that it provides only DSL internet access. AT&T provides services in places such as Washington, Montana, Colorado, Utah, Idaho, Oregon, Iowa, Missouri, Minnesota, North Dakota, Kansas, and Wyoming, among others.

Speeds of DSL and cable modem

The speeds of DSL and cable modem vary greatly, and many Americans face a tough choice between the two for accessing internet services. Cable broadband is generally faster than DSL; on the other hand, DSL is much cheaper, especially at slow speeds. DSL is appropriate for users who do not require high speeds, for instance those who need a basic internet connection for simple tasks such as checking mail or visiting a few websites. Cable modems utilize coaxial cables to deliver information to residential users, whereas DSL utilizes the copper wiring found in telephone lines.

As mentioned earlier, DSL has lower speeds than cable modem. DSL provides users with download speeds in the range of 1.5 Mbps to 15 Mbps, with upstream speeds in the range of 128 Kbps to 1 Mbps. Cable modem download speeds, by contrast, range from 25 Mbps to 100 Mbps on the higher end, with upstream speeds of 2 Mbps to 8 Mbps. However, according to Dulaney (2010), cable modems do not usually reach the highest speeds quoted. It is also important to note that speeds may be affected by other factors, such as the type of computer being used to access the internet.
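As a rough worked example of what these speed ranges mean in practice, the following sketch computes the time to download a 1 GB file (an illustrative file size, not from the text) at a typical high-end DSL speed versus a high-end cable speed:

```python
# Compare download times at the speed ranges quoted above.
# 1 GB is taken as 8 billion bits for a simple back-of-envelope figure.

def download_minutes(size_gb, mbps):
    """Minutes to download `size_gb` gigabytes at `mbps` megabits per second."""
    bits = size_gb * 8e9
    return bits / (mbps * 1e6) / 60

print(round(download_minutes(1, 15), 1))   # DSL at 15 Mbps: ~8.9 minutes
print(round(download_minutes(1, 100), 1))  # cable at 100 Mbps: ~1.3 minutes
```

Even at the top of DSL's range, the same file takes roughly seven times longer than on a high-end cable connection, which is why the speed-to-cost ratio favours cable for heavier use.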

Related paper: Case Study 2: ING Life

Cost of DSL and cable modem

DSL and cable modem are much more costly than dial-up accounts. Charges range between $30 and $50 per month for both, and installation charges add to the total cost. The main advantage of cable modem and DSL is that neither ties up the phone line while the user is online. The most important thing to note is that the two technologies cost roughly the same in installation and operating charges; what sets them apart is that cable modem offers higher speeds, and therefore a better speed-to-cost ratio. Thus, it would be advisable for a company such as Carlson to acquire cable modem over DSL. According to Dulaney (2010), installation of either cable modem or DSL costs between $100 and $200, equipment averages $50 to $100, and some activation fees may also apply. As mentioned earlier, monthly charges average $40 to $50 for both.
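As a quick sanity check on these figures, a first-year total can be computed from the ranges Dulaney (2010) gives (installation $100 to $200, equipment $50 to $100, monthly $40 to $50); activation fees are left out here since no range is quoted for them.

```python
# First-year cost estimate from the ranges quoted above.

def first_year_cost(install: float, equipment: float, monthly: float) -> float:
    """One-time charges plus twelve months of service."""
    return install + equipment + 12 * monthly

low = first_year_cost(install=100, equipment=50, monthly=40)    # best case
high = first_year_cost(install=200, equipment=100, monthly=50)  # worst case

print(f"First-year cost: ${low:.0f} to ${high:.0f}")
# -> First-year cost: $630 to $900
```

Because both technologies fall in the same range, the deciding factor is the extra speed cable delivers for the same money, as argued above.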

Reliability of cable modem and DSL

Cable modem and DSL differ in reliability owing to the unique characteristics of each type of connection. With DSL, data moves directly between the user and the ISP through the phone line, so there is no sharing of bandwidth among users ("TechRepublic," 2004). The result is consistent performance even as the number of users in an area increases. With a cable modem connection, bandwidth is delivered as a block to users who share it among themselves, so consistent performance is difficult to achieve because the number of active users varies throughout the day. Since users share the same bandwidth, too many of them, or heavy use of the connection, may lead to slow speeds.
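The sharing effect can be sketched numerically. All figures below (a 100 Mbps shared cable segment, a 10 Mbps dedicated DSL line) are assumptions chosen for illustration, not carrier data.

```python
# Toy model: cable bandwidth is a shared block split among active users,
# while a DSL line keeps its dedicated rate regardless of neighbors.

DSL_DEDICATED_MBPS = 10.0   # assumed dedicated DSL rate
CABLE_BLOCK_MBPS = 100.0    # assumed capacity of a shared cable segment

def cable_per_user_mbps(active_users: int) -> float:
    """Even split of the shared segment among currently active users."""
    return CABLE_BLOCK_MBPS / max(active_users, 1)

for users in (1, 5, 20, 50):
    cable = cable_per_user_mbps(users)
    winner = "cable" if cable > DSL_DEDICATED_MBPS else "DSL"
    print(f"{users:3d} active users: cable {cable:6.1f} Mbps each -> {winner} faster")
```

The crossover (cable wins with few neighbors online, loses with many) is exactly why cable performance varies with neighborhood load while DSL stays constant.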

DSL provides users with access to both the internet and the phone line. Users can choose the connection speed they need from service providers, which also determines the price. DSL is, however, greatly affected by the distance between the user and the provider's location: as that distance increases, the strength of the connection degrades. According to Docter, Dulaney, & Skandier (2012), both DSL and cable modem offer reliable connections with minimal downtime, and both can deliver internet at good speeds for home use.

Recommend DSL or cable modem

Based on the preceding comparison and an evaluation of the services available closest to my region, cable modem would be the better option over DSL. Although both have proven reliable, certain characteristics of DSL make it unattractive. Cable modem generally provides internet at higher speeds than DSL, and in the current era access speed is of great importance, which makes cable modem the better alternative of the two. DSL also becomes weaker with increasing distance from the service provider. In my area, DSL access may be weak because it is far from the provider (about 3 miles from the nearest DSL services). It is generally recommended that users be within 2 to 3 miles of the DSL central office so that speeds are not affected by distance ("TechRepublic," 2004).

As mentioned earlier, DSL and cable modem have essentially the same installation and operating costs. What sets them apart is that cable modem provides higher access speeds, giving users better value for their money. In terms of security, DSL is more secure than cable modem because DSL connections are not shared with other users. With cable modem, strong security can still be achieved by ensuring that all security features are working properly and that firewall settings follow recommendations.

Related paper: Case Study 3:Carlson Companies

Diagram of the DSL and Cable Modem connections to ISP

References

Docter, Q., Dulaney, E. A., & Skandier, T. (2012). CompTIA A+ complete study guide: Exams 220-801 and 220-802 (2nd ed.). Indianapolis, IN: Wiley.

Dulaney, E. A. (2010). Linux all-in-one desk reference for dummies. Hoboken, NJ: Wiley Pub.

TechRepublic (Firm). (2004). Home office computing survival guide. Louisville, KY: TechRepublic.

Carlson Companies

Carlson Companies Case Study

How the Carlson SAN approach would be implemented in today’s environment

The SAN approach can be implemented today in a fashion similar to the one outlined in the case study. The main focus during implementation lies in ensuring a seamless migration from the old system to the new one. In database management, IT experts aim to ensure a smooth flow of data from one point to another, and during migration they must ensure that users are not affected by the process. Since problems are likely to occur, these must be identified early, before they lead to large losses. The approach used by the company meets all the necessary requirements, including higher storage capacity, use of common IP networking protocols, data backup, and cost effectiveness.

Pros and cons of consolidating data on a SAN central data facility versus the dispersed arrangement it replaces

There are a number of benefits and drawbacks to consolidating data on a SAN central data facility in comparison to the dispersed arrangement it replaces. In terms of benefits, an IP SAN strategy lets the company use familiar IP management tools. In Carlson's case, the use of IP security tools is of great significance: as Steven Brown, the CIO, explains, encrypting packet streams will enable the company to expand the volume of transactions it can handle. According to LaPlante (2009), another benefit is that consolidated data is easier to manage since it requires fewer machines and servers. Carlson will also gain free space through consolidation, since fewer servers are used, which in turn reduces energy costs. Labor and maintenance costs also fall with the lower server count; a large share of data center costs is attributable to administrative expenses such as labor, and server consolidation greatly reduces these costs.

Related paper: Case Study 2: ING Life

There are also drawbacks a company can face when it replaces the dispersed arrangement. There is a risk of data loss during the migration, and data lost in the process may be difficult to restore, especially where the data storage services in use are inefficient. Challenges arising during migration could also severely impact Carlson's operations: end users may be greatly affected, and the company may lose customers or clients due to technical difficulties. Finally, consolidated data on SAN central facilities may need continuous optimization to meet performance and capacity requirements.

Evaluate the issues raised from the Carlson SAN mixing equipment

A number of issues were raised by Carlson's SAN mixing equipment. Carlson's SAN combines equipment from a number of vendors, which can be viewed as a weakness of the consolidated data center. The initial cost of setting up a SAN data recovery system can be quite high, so IT experts should ensure that appropriate disk applications are in place to reduce conflicts among the various applications and software products (Ciampa, 2014). Since the equipment comes from multiple vendors, IT experts should configure it so that the components match and are least likely to conflict or mismatch. It is management's role to ensure this is implemented; management can address the problem by contracting for standard equipment and software that can be adapted to the company's requirements, or by reducing the number of vendors so as to gain consistency.

The need for reduction of administration and management of storage networking through Carlson’s IP SAN

As mentioned earlier, an IP SAN can significantly help Carlson cut administrative overhead and simplify the management of storage networking. SAN extensions allow interconnectivity among multiple enterprises, enabling a SAN segment to be extended at relatively low cost, and they create broader availability, serviceability, and reliability for end users (Ciampa, 2014). Because fewer storage platforms are required, Carlson does not need administrators to manage multiple servers as in the dispersed arrangement: a single control panel, together with homogeneous software, can manage the entire storage pool. This makes the whole system much easier for management to control; minimal staff training is needed, which saves costs, and fewer errors occur in using the system. Another major impact is that storage is utilized more optimally than in dispersed arrangements.

Related paper: Elastic and Inelastic Traffic

Application of cloud computing by Carlson instead of SAN

Cloud computing could also be used by Carlson, although it offers few relative benefits over SAN. According to Ohlhorst (2012), SAN delivers higher speeds than cloud computing, and while cloud computing does reduce on-site storage requirements, Carlson would not gain in storage capacity or data management efficiency by adopting it. Users also point out a number of weaknesses in cloud computing, mostly associated with security and reliability. Thus, it is recommended that Carlson continue to use SAN rather than migrate to cloud computing, since there are no relative benefits and the cost might be prohibitive.

References

Ciampa, M. (2014). CompTIA Security+ guide to network security fundamentals. Boston: Cengage Learning.

LaPlante, E. (2009). The pros and cons of server consolidation. IT Business Insider. Retrieved from: http://www.itbusinessinsider.com/inf/pos_cos_server_consolidation/index.htm1#axzz2cFTLFLV

Ohlhorst, F. (2012). Cloud storage, rewriting SAN's future. Network Computing. Retrieved from: http://www.networkcomputing.com/cloud-storage/cloud-storage-rewriting-sans-future/232900096


ING Life

Case Study 2: ING Life

Technology is rapidly evolving in the modern business world, and ING Life therefore needs to embrace new and emerging technologies. New inventions and innovations keep appearing, and any serious business has to understand the changes occurring in its technological environment. This ensures it stays competitive and keeps up with market trends; otherwise it will lose business to the competition. Businesses face various risks and difficulties in their day-to-day operations through interactions with their environment, and to deal with them they must understand and classify those risks.

The internet, widely used as a medium of communication across geographical areas, is one of the public infrastructures on which ING Life heavily depends for normal business operations. Despite its many benefits, the internet may come at a high cost to the business: it poses numerous risks and difficulties that the majority of businesses must face. Businesses must therefore keep assessing their preparedness for these risks, on a regular basis since technology evolves quickly, and without overlooking anything, because oversights can seriously damage the business. Online security breaches have become so common that one would expect businesses to stay constantly alert, yet many are still caught unawares when breaches unexpectedly occur; this results from failure to take the necessary precautions (Joe, 2010).

Related paper: Desktop Virtualization

ING Life continues to face enormous challenges, like other businesses that depend on the internet, mostly in the form of cybercrime perpetrated by malicious hackers, crackers, and attackers. A hacker is someone well versed in computer technology, programming, and hardware, who uses these skills to gain access to sensitive business information and commit acts of fraud; the hacker's main motive is to learn about programming and take over another organization's system. Crackers, on the other hand, gain access to sensitive business information in order to steal confidential client data or interfere with the programming of the compromised network. Lastly, attackers cause chaos to make a name for themselves, targeting various sites and using information obtained from the internet to create computer viruses or worms that interfere with normal system functioning.

Security is a major concern for any business operation, especially with increased reliance on technology; firms have to ensure that they safeguard their data from unauthorized access and keep it safe from theft, loss, or mismanagement. Data theft becomes more likely when employees use their personal mobile devices to share data or access company information, which significantly increases the risks to the business; some security breaches are caused by employees either maliciously or unintentionally. To safeguard its data, ING Life must put appropriate precautions in place, such as installing suitable security software. Good options include the Cisco PIX firewall and web-to-host software running over Secure Sockets Layer (SSL) (Joe, 2010).

Related paper: Environmental Assessment Group of the IEEE-Standards Research

The Cisco PIX firewall can reach and connect a wide audience because it operates at 1.7 gigabits per second (Behrens, Riley, & Khan, 2005). Since ING Life needs to reach a wider audience comprising its increasing number of brokers, the Cisco PIX firewall would be a good fit for this line of business. It can enhance ING Life's security through features such as stateful inspection firewalling and IPsec and L2TP/PPTP support for virtual private networks (VPNs). Its content filtering capabilities help detect unauthorized entries, and the secure perimeters between the networks the firewall controls are maintained by the Adaptive Security Algorithm (ASA). Incoming and outgoing data is managed by applying policies to every entry.

Web-to-host software lets a business create a website that enhances client interaction with the business. The website is supported by web hosting services that use Secure Sockets Layer (SSL), which relies on keys and encryption to secure information shared between computers and is used by the majority of web users (Stallings, 2009). SSL keeps information secure to a great extent because it operates through two protocols, the record protocol and the handshake protocol, and authentication is required before an encrypted SSL connection is established. By retaining a security consultant, ING Life can avoid risks that might otherwise be overlooked, because consultants understand most of these threats, can detect them in time, and can find ways to counter them.

Related paper: Elastic and Inelastic Traffic

A business can never be fully secure while operating online. To stay safe from breaches, unauthorized access to company information, theft of information, and loss or mismanagement, it is always better to keep track of changes in technology. As much as ING Life has put measures in place to secure its servers, it should recognize that security software is continually improving and that systems must be updated on a regular basis.

The extranet is an online database system that allows businesses to share important information with their clients and suppliers. It allows members inside the network to access the system while also granting access to authorized users outside the network.

The extranet is not as secure as the intranet because it is internet-based and thus exposed to hackers, attackers, and crackers, meaning confidential information can fall into the wrong hands and cause major damage to a business. The maintenance cost of an extranet server is usually very high, which increases business expenditure (Tan & Wiley InterScience, 2010). Employees will have to be trained on how the system works, adding further cost, and the software and hardware used to install the system are also expensive. Companies that opt to connect through an extranet should ensure they have proper procedures and guidelines on how the system operates.

References

Behrens, T., Riley, C., & Khan, U. (2005). Cisco PIX firewalls: Configure, manage, & troubleshoot. Rockland, MA: Syngress Pub.

Joe, J. (2010). Internet security and your business – knowing the risks. Retrieved from: http://www.symantec.com

Stallings, W. (2009). Business data communications. Upper Saddle River, NJ: Pearson/Prentice Hall.

Tan, H. C., & Wiley InterScience (Online service). (2010). Capture and reuse of project knowledge in construction. Chichester, UK: Wiley-Blackwell.


Elastic and Inelastic Traffic

Elastic and Inelastic Traffic

Internet traffic to an organization's web page is a fundamental component of the company's ability to conduct business through the internet; it is a frustrating experience when a website is unreachable or access is slow (Zhang, 2012). Basically, there are two types of internet traffic: elastic and inelastic. Elastic traffic can adjust, over wide ranges, to changes in delay and throughput across the internet and still meet the requirements of its applications. Inelastic traffic, on the other hand, does not easily adapt, if at all, to changes in delay and throughput. Currently, web traffic is about one trillion bits per second and, surprisingly, it is still rising; soon it will reach 3 Tbps.

Names are assigned to various kinds of devices in modern networks: workstations, servers, and routers. A well-designed naming model should enable users to reach a device by name. A name indicates what a resource is, while an address shows where it is located (Li & Chen, 2012). Most network protocols require a device to obtain a network address, and the end-user system must map this address to a name.

Developing an addressing and naming model for ten departments

For a company consisting of ten departments with 1,000 employees, there is a need to design nine Local Area Networks (LANs) with equal geographical separation. For the network to work effectively and efficiently, each LAN should contain more than forty-five computers. The Information Technology unit bears the responsibility for developing the addressing and naming model for the organization (Ash, 2007). The organization should use a common data center of around twenty-five backed enterprise servers and routers, all under a single data center operation. For easy identification and to avoid confusion, server names should contain a location code, and when a device has more than one interface, all interfaces should map to a single common name.
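A model like this can be sketched with Python's standard ipaddress module. The 10.0.0.0/16 base block, the /24 per department, and the srv-&lt;dept&gt;-hq01 naming pattern are all assumptions chosen for illustration, not part of the plan above.

```python
# Carve one /24 per department out of an assumed private /16 and generate
# location-coded server names, as the naming model above suggests.
import ipaddress

BASE = ipaddress.ip_network("10.0.0.0/16")            # assumed private block
DEPARTMENTS = [f"dept{i:02d}" for i in range(1, 11)]  # ten departments

# A /24 gives 254 usable hosts -- comfortably above 100 employees each.
subnets = list(BASE.subnets(new_prefix=24))[:len(DEPARTMENTS)]

plan = {}
for dept, net in zip(DEPARTMENTS, subnets):
    first_host = next(net.hosts())        # first usable address as the gateway
    plan[dept] = {
        "network": str(net),
        "gateway": str(first_host),
        "server": f"srv-{dept}-hq01",     # assumed location-coded name scheme
    }

for dept, entry in plan.items():
    print(dept, entry["network"], entry["gateway"], entry["server"])
```

Keeping the allocation in one generated table like this avoids the overlap and naming conflicts that hand-assigned addressing tends to produce.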

Developing an addressing and naming model for equal separation by geography

Deploying multiple tenants on a shared infrastructure optimizes resource utilization at lower cost, but it requires designs that address secure tenant separation and end-to-end path isolation. The virtualized multi-tenant data center architecture should use a path isolation technique to divide the shared infrastructure logically into multiple virtual networks. The design should start with network layer 3 (L3) separation, which provides tenant isolation through dedicated per-tenant routing and forwarding, preventing inter-tenant traffic within the data center unless explicitly configured (Ash, 2007). Separating network layer 2 (L2) provides isolation and identification of tenant traffic across the L2 domain and across shared links. Moreover, separating network services allows unique policies at the VLAN level of granularity.

Developing an addressing and naming model for a common data center

A good user experience depends on predictable performance within the data center network. By installing Ethernet networks, the company can bridge the performance and scalability gap between capacity-oriented clusters and purpose-built custom system architectures (Li & Chen, 2012). This data center should house one or more processors, with compute resources arranged into racks and allocated as clusters of five hosts. These hosts should be orchestrated to exploit the thread-level parallelism central to most internet workloads by dividing incoming requests into parallel subtasks, and the cluster-application model should be shared among multiple applications.

Related paper: Florida Department of Management Services


Analyzing the functional problems of throughput, delay and packet loss

Elastic internet traffic can adapt, over wide ranges, to changes in delay and throughput across the web and still meet the needs of its applications. With TCP-based traffic, each individual connection responds to congestion by decreasing the rate at which data is presented to the network (Zhang, 2012). The routers in the network are responsible for receiving and forwarding packets, while TCP detects packet loss and carries out retransmissions to ensure reliable delivery.
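This rate adjustment is classically modeled as additive increase, multiplicative decrease (AIMD): the sender grows its congestion window while transfers succeed and halves it when a loss is detected. The sketch below is a toy model of that rule, not a real TCP implementation.

```python
# Toy AIMD model: grow the congestion window on each round of
# acknowledgements, halve it when a packet loss is detected.

def aimd(events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Return the window size after each 'ack' or 'loss' event."""
    history = []
    for event in events:
        if event == "loss":
            cwnd = max(cwnd * decrease, 1.0)  # back off under congestion
        else:
            cwnd += increase                  # probe for spare bandwidth
        history.append(cwnd)
    return history

print(aimd(["ack", "ack", "ack", "loss", "ack", "ack"]))
# -> [2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
```

The sawtooth pattern in the output is what makes TCP traffic elastic: throughput rises until the network signals congestion through a loss, then backs off and climbs again.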

Analyzing how DNS works

DNS would be used in the addressing and naming part of the plan. DNS is a hierarchical, distributed naming system for computers. It supports the internet infrastructure by providing a distributed and fairly robust mechanism that resolves internet host names into IP addresses and maps addresses back to host names. Moreover, DNS supports other internet directory functions, such as lookup capabilities for retrieving information about DNS name servers and mail exchangers.
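The forward and reverse resolution described here can be exercised through the standard library's resolver interface. Results depend on the machine's local resolver, so the example lookups are only illustrative.

```python
# Name -> address and address -> name lookups via the OS resolver.
import socket

def resolve(hostname: str):
    """IPv4 addresses the local resolver reports for a host name."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

def reverse(ip: str) -> str:
    """Map an address back to its primary host name (PTR-style lookup)."""
    return socket.gethostbyaddr(ip)[0]

print(resolve("localhost"))  # typically ['127.0.0.1']
# reverse() on a public address would require network access, so it is
# shown here but not run: reverse("8.8.8.8")
```

This is exactly the name-to-address mapping the plan relies on: users and applications work with names, and the resolver hierarchy supplies the addresses.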

Summary

Cyberspace is an enormous invisible world that connects millions of computers to each other. For the effective and efficient operation of a ten-department organization, designing nine LANs backed by a common pool of enterprise servers and routers will ensure the free flow of data within the organization. For tenants who are geographically separated, implementing a virtualized multi-tenant data center ensures resource optimization (Zhang, 2012). Moreover, installing Ethernet networks will help bridge the information gap in the infrastructure. A well-designed TCP implementation will detect packet loss and carry out retransmissions, ensuring reliable messaging within and outside the organization. Notably, DNS provides a robust mechanism for resolving internet host names. Poor internet traffic can be attributed to the absence of traffic infrastructure and a low level of computer penetration.

Related paper: Requirements for the Corporate Computing Function

References

Ash, G. (2007). Traffic engineering and QoS optimization of integrated voice & data networks. Amsterdam: Elsevier/Morgan Kaufmann Publishers.

Li, T. & Chen, S. (2012). Traffic measurement on the internet. New York, NY: Springer.

Zhang, J. (2012). ICLEM 2012. Reston, Va.: American Society of Civil Engineers.

Florida Department of Management Services

Florida Department of Management Services

What security mechanisms are needed to protect the DMS systems from both state employees and users accessing over the internet?

There are two security mechanisms that can greatly help protect the DMS system from threats caused, knowingly or unknowingly, by state employees and other users: Internet Protocol Security (IPsec) and Virtual Private Networks (Shoniregun, 2007). Internet Protocol Security is a protocol suite that secures internet protocol communications by encrypting IP packet data for each communication session. IPsec can implement mutual authentication between agents and determine the cryptographic keys needed during each session. It protects data flows in three main ways: between security gateways (network-to-network protection), between a pair of hosts, and between a host and a security gateway.

Virtual Private Networks can also be used to protect the DMS systems. VPNs are especially important where Wi-Fi hotspots, which may not be secure, are in use (Douligeris & Serpanos, 2007). VPNs extend private networks across insecure networks such as the public internet, enabling users to exchange data over public networks as if their devices were connected directly through a private network. A VPN is built by creating a virtual point-to-point connection through traffic encryption, dedicated connections, or virtual tunneling protocols. In a state system such as Florida's, employees would use their work credentials to authenticate when logging onto the DMS system via VPN access, which enhances security.

Read also: Requirements for the Corporate Computing Function

Visit the DMS Web site and list the major services found there. Discuss the relative merits of each.

Florida Department of Management Services provides a number of key services through its DMS web site. The web site provides state employees and agencies with business operation and human resource support ("Florida Department of Management Services," 2016).

Human resource support

Human resource support is one of the major services provided under the DMS web site. Human resource support is mainly involved with the running of the state personnel system. It is subdivided further into four sections namely Florida Retirements System, Human Resource Management, Insurance Benefits, and People First.

The Florida Retirement System provides services related to the management of retirees and the administration of pensions. The division serves a number of roles in retirement planning: administering retirement benefits, overseeing the general management of the pension fund, monitoring and ensuring compliance with state and federal policies, providing financial advice to members, and administering debt services, among other roles. Human Resource Management pertains to the management of the state's entire workforce; the division, in conjunction with agency personnel offices, develops employment guidelines, strategies, and practices affecting all employees. The Insurance Benefits division, formerly the Division of State Group Insurance (DSGI), is responsible for developing appropriate insurance cover for state employees. The People First division provides human resource services through an information management system, including attendance and leave services, organizational management, and payroll management, among others ("Florida Department of Management Services," 2016).

The merit of these services is that state employees can easily access them at any time, on their own, without having to visit physical offices. Another merit is that the system reduces the state government's operating costs, because it reduces the need for personnel who would otherwise provide the same kind of services.

Business operations

The DMS web site supports a number of the state government's business operations. The major services provided under business operations include fleet management, real estate development and management, and state purchasing. The state purchasing division enables the state government to get the best value for goods and services; it covers items such as vendor information, information about state contracts, details on how to do business with the state, information on the state's emergency network, and training and certification related to public purchasing. In the real estate and development area, the DMS web site contains information about the general management of the state's pool of facilities, with details on leasing, operations and maintenance, and building and construction regulations. The fleet management division is concerned with managing Florida's aircraft, private prisons, and federal property.

Read also: Environmental Assessment Group of the IEEE-Standards Research

The merit of this is that it enables Florida to provide high-quality services to customers. Use of the DMS web site also enables the state to lower operational costs, and the savings can be used for other development purposes.

Suggest improvements to existing services and suggest new services that should be added.

A new service that could be added is an online purchasing system that enables users to make purchases online; users would be able to access vendor information as they order products, which would enhance trade in the state. Another service that should be added is a web-based application giving users access to general information about the state government. This platform should allow the state government to post reports, disseminate crucial information to the public, and receive feedback from the public.

References

Florida Department of Management Services. (2016). About. Retrieved from: http://www.dms.myflorida.com/

Douligeris, C., & Serpanos, D. N. (Eds.). (2007). Network security: Current status and future directions. Piscataway, NJ: IEEE Press.

Shoniregun, C. A. (2007). Synchronizing Internet Protocol Security (SIPSec). New York: Springer.