Wednesday, August 26

SONA 2009 - ICT Related


Every year, the State of the Nation Address, or “SONA,” is delivered by the President before a joint session of both houses of Congress, pursuant to Article VII, Section 23 of the 1987 Constitution, which reads: “The President shall address the Congress at the opening of its regular session.” This year, Gloria Macapagal-Arroyo’s 2009 State of the Nation Address was delivered on July 27, 2009.



Based on what I have heard and read about the SONA, the three areas related to ICT are as follows:

1. “Sa telecommunications naman, inatasan ko ang Telecommunications Commission na kumilos na tungkol sa mga sumbong na dropped calls at mga nawawalang load sa cellphone. We need to amend the Commonwealth-era Public Service Law. And we need to do it now.” [As for telecommunications, I have directed the Telecommunications Commission to act on complaints of dropped calls and disappearing cellphone load.]



The Philippines’ National Telecommunications Commission (Filipino: Pambansang Komisyon sa Telekomunikasyon), abbreviated as NTC, is an agency of the Philippine government under the Department of Transportation and Communications responsible for supervising, adjudicating and controlling all telecommunications services throughout the Philippines. It is a regulatory agency providing an environment that ensures reliable, affordable and viable infrastructure and services in information and communications technology (ICT) accessible to all.

In addition, based on my research, the NTC has already issued a circular on dropped calls. It issued stricter rules on the drop-call rate of telecommunication companies, or telcos, in a bid to protect consumers. The regulator’s move came after complaints by the Senate leadership about vanishing mobile-phone loads, or credits.

*In Memorandum Order 03-06-2009, the regulator said the drop-call rate for telcos should be improved to 2 percent, or two dropped calls for every 100 calls, from 5 percent.

A dropped call is a call that is irregularly disconnected. A call that is dropped within six seconds after the called party answers should not be counted as a call. The agency noted that blocked and dropped calls were caused by network congestion and system failure.

On the other hand, to resolve complaints about spam, the commission also issued a draft circular amending the rules on the broadcast messaging service. Spam consists of unsolicited and unwanted messages, which can be commercial offerings, promotions, advertisements and surveys.

*The new circular said that push messages should not be allowed, adding that “subscriptions or requests for contents and/or information shall be initiated by the subscribers.” Commercial and promotional advertisements, surveys and other broadcast messages should be allowed only if the telco has secured prior consent from the subscribers. It added that mobile network operators should keep records of all subscriber requests for content and information, for the delivery of the message, for at least two months.

Records related to complaints filed by consumers should not be disposed of until such complaints are finally resolved. Records of complaints should also be forwarded to the commission upon request.

Moreover, the agency also issued separate guidelines on load expiration and pulse billing.

*Now, a P10 load is valid for only a day; a P30 load, three days; a P200 load, 30 days; and a P300 load, 60 days. The commission is also proposing a call rate of three seconds per pulse, down from per-minute charging, whether postpaid or prepaid. Charging on a per-second basis of actual use would ensure that there are no charges for calls that are dropped because of poor-quality service, network congestion and other causes that terminate calls within the first second of the call, the commission said.

Currently, telcos charge on a per-minute basis, at P6.50 per call.
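The billing difference can be illustrated with a small arithmetic sketch. The P6.50 per-minute rate is from the text; the equivalent per-pulse rate and the helper functions are my own illustrative assumptions, not the NTC’s actual formula.

```python
# Hypothetical comparison of per-minute vs. per-pulse (3-second) billing.
# The P6.50-per-minute rate comes from the text; the per-pulse rate is an
# assumption derived from it (P6.50 spread over 20 pulses per minute).
import math

PER_MINUTE_RATE = 6.50   # pesos per started minute (from the text)
PULSE_SECONDS = 3        # proposed pulse length
PER_PULSE_RATE = PER_MINUTE_RATE / (60 / PULSE_SECONDS)  # assumed: P0.325

def charge_per_minute(seconds):
    """Round the call up to whole minutes, as per-minute billing does."""
    return math.ceil(seconds / 60) * PER_MINUTE_RATE

def charge_per_pulse(seconds):
    """Round the call up to whole 3-second pulses."""
    return math.ceil(seconds / PULSE_SECONDS) * PER_PULSE_RATE

# A 10-second call costs a full P6.50 under per-minute billing,
# but only 4 pulses' worth (12 seconds) under per-pulse billing.
print(charge_per_minute(10))   # 6.5
print(charge_per_pulse(10))    # 1.3
```

Under these assumed rates a full-minute call costs the same either way; the savings show up only on short or dropped calls, which is exactly the consumer-protection point the circular makes.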

“For me, it is really important to resolve issues like these in telecommunications because they are in such demand and so important in our daily lives. Before, we might have called these things mere wants, but now they are needs, because it is now truly difficult to communicate with others without them. And if we have problems like dropped calls or lost cellphone load, how can we use them? They would just become worthless.”


2. “Kung noong nakaraan, lumakas ang electronics [If in the past it was electronics that grew strong], today we are creating wealth by developing the BPO and tourism sectors as additional engines of growth. Electronics and other manufactured exports rise and fall in accordance with the state of the world economy. But BPO remains resilient. With earnings of $6 billion and employment of 600,000, the BPO phenomenon speaks eloquently of our competitiveness and productivity. Let us have a Department of ICT.”



Business process outsourcing (BPO) is the delegation of one or more of your non-core activities to an external service provider, who in turn is responsible for managing and administering the selected process against pre-defined and measurable performance criteria. An important aspect of business process outsourcing is its ability to free corporate executives from some of their day-to-day process management responsibilities, giving them more control over their most valuable resource: time to explore new revenue streams, time to accelerate other projects, and time to focus on their customers. Outsourcing not only cuts costs but also improves the speed and quality of services.

Information and communication technologies are now a crucial part of modern life; almost everyone is affected by them, directly or indirectly. Accordingly, President Gloria Arroyo made a final push for the creation of the DICT (Department of Information and Communications Technology) in her last State of the Nation Address (SONA), as she highlighted the huge role the BPO sector has played in the local economy.

As I have read, a proposed law calling for the creation of the DICT has been pending in Congress for the last five years, even though Malacañang has certified the bill as urgent.

Arroyo made the endorsement as she noted in her SONA that the Philippines is now starting to create wealth through the development of the BPO industry as an engine of growth. The president’s call for the creation of the DICT, however, was less urgent than her appeal to revise consumer laws, as she underscored her directive to the National Telecommunications Commission to act on complaints about dropped calls and lost cellphone load.

“For me, we in IT are the ones who would be happiest with the creation of the Department of ICT, because we are the most involved in that field. By aspiring to have that department, we can transform our country into a knowledge society and make the benefits of information technology available to all citizens, especially those in rural areas and those living in poverty. And with the DICT, we may well achieve what they call INFO-AGE (Inclusive, Networked, Fast, Open, Accountable, Globally benchmarked and Efficient) government.”


3. “As the seeds of fundamental political reform are planted, let us address the highest exercise of democracy: voting! In 2001, I said we would finance fully automated elections. We got it, thanks to Congress.”

Based on my research, the Philippines’ election process has remained the same over the years. Manual voting, casting and counting has always been palatable to those with vested interests in controlling and manipulating elections. With messy and fraud-ridden elections, modernization becomes even more imperative.


RA 8436, as amended by RA 9369, authorizes the COMELEC to use an automated election system in the conduct of national and local elections. Senator Angara, who filed a bill seeking an eleven-billion-peso (P11-B) supplemental appropriation for the use of an automated election system, emphasized the need to finally automate our elections.

"The heart of a true democracy lies in achieving clean and honest elections which provide mandate and authority to elected public officials. Our laws mandate the Commission on Elections (COMELEC) to use an automated election system in the conduct of national and local elections to encourage transparency, credibility, fairness and accuracy of elections," said Angara who chairs the Senate Committee on Finance.

Of the total amount of P11,301,790,000 (eleven billion, three hundred one million, seven hundred ninety thousand pesos), P9,959,710,000 will go to the acquisition of machines for the 2010 automated national and local elections, while P1,342,080,000 will be allocated for preparatory activities related to the conduct of the 2010 automated elections.

“For me, I’m glad that the automated elections have finally been fully financed, because it means we really have them and that the elections will push through next year. Honestly, I’m excited about it because I want to experience it, especially if we IT students are also given the chance to work with it at election time. hehe”


References:
•http://www.manilatimes.net/national/2009/july/03/yehey/top_stories/20090703top8.html
•http://www.mb.com.ph/articles/213088/gma-sona-bats-dept-ict
•http://www.ams-world.com/bposervices.htm
•http://www.senate.gov.ph/press_release/2009/0605_angara1.asp

Internet Connection Infrastructure

If I were hired by the University President as an IT consultant, I would suggest improvements to the infrastructure so that Internet connectivity can be improved. First and foremost, I will define and discuss the Internet and its OSI model.

The Internet

The Internet is a worldwide network of computers and computer networks that can communicate with each other using the Internet Protocol. Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address allowing for two-way communication. In this way, the Internet can be seen as an exchange of messages between computers.

The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important is that it allows an Internet browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model, which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite.

The OSI Reference Model

For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optic fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network.

At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the World Wide Web, these “IP addresses” are derived from the human readable form using the Domain Name System (e.g. 72.14.207.99 is derived from www.google.com). At the moment, the most widely used version of the Internet Protocol is version four but a move to version six is imminent.
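The logical addressing described above can be inspected with Python’s standard ipaddress module; a small sketch (72.14.207.99 is the example address from the text, which www.google.com resolved to at the time):

```python
# A sketch of IP logical addressing using Python's standard ipaddress
# module. The 72.14.207.99 address is the example given in the text.
import ipaddress

addr = ipaddress.ip_address("72.14.207.99")
print(addr.version)      # 4 -> IPv4, the most widely used version
print(addr.is_private)   # False -> a public, globally routable address

# IPv6, the "version six" the text says is imminent, uses 128-bit
# addresses written in hexadecimal groups:
addr6 = ipaddress.ip_address("2001:db8::1")
print(addr6.version)     # 6
```

The same module also handles networks and prefixes (e.g. `ipaddress.ip_network("192.168.0.0/24")`), which is how routers reason about where to forward packets.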

At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered or retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports, network administrators can manipulate traffic to suit particular requirements, for example restricting Internet access by blocking the traffic destined for a particular port, or affecting the performance of certain applications by assigning priority.
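A minimal sketch of the port-based delivery just described, using a UDP exchange over the loopback interface (the socket calls are standard Python; the “server”/“client” naming is only illustrative):

```python
# Port-based delivery with UDP: the port number in the address tuple
# tells the operating system which socket (and hence which application)
# receives the datagram.
import socket

# A "server" socket bound to an OS-assigned free port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # port 0 = let the OS pick a free port
host, port = server.getsockname()

# A "client" socket sending one datagram to that port. With UDP there is
# no connection, no ordering and no retransmission, as described above.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", (host, port))

data, sender = server.recvfrom(1024)
print(data)                        # b'hello'
client.close()
server.close()
```

A TCP version would add `connect()`/`accept()` and give ordering and retransmission in exchange for that setup cost, which is the trade-off the paragraph describes.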

Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that the data transferred between two parties remains completely confidential and one or the other is in use when a padlock appears in the address bar of your web browser. Finally, at the application layer, are many of the protocols Internet users would be familiar with such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and OSCAR (instant messaging).
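The confidential channel described above is what Python’s standard ssl module configures by default; a short sketch of those defaults (the `wrap_socket` step is shown only as a comment, since it needs a live connection):

```python
# TLS defaults in Python's standard ssl module: create_default_context
# enables certificate verification and hostname checking out of the box,
# which is what puts the padlock in the browser's address bar.
import ssl

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer must present a valid certificate
print(ctx.check_hostname)                    # True: certificate must match the hostname

# A real client would then wrap an ordinary TCP socket:
#   secure = ctx.wrap_socket(raw_socket, server_hostname="example.com")
```

Note that in 2009 the SSL 3.0 and early TLS versions mentioned in the text were still common; modern contexts like this one refuse those obsolete protocol versions.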


Local Area Networks

Despite the growth of the Internet, the characteristics of local area networks (computer networks that run at most a few kilometres) remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them.

In the mid-1980s, several protocol suites emerged to fill the gap between the data link and applications layer of the OSI reference model. These were Appletalk, IPX and NetBIOS with the dominant protocol suite during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point but was typically only used by large government and research facilities. As the Internet grew in popularity and a larger percentage of traffic became Internet-related, local area networks gradually moved towards TCP/IP and today networks mostly dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such as DHCP that allowed TCP/IP clients to discover their own network address — a functionality that came standard with the AppleTalk/IPX/NetBIOS protocol suites.

It is at the data link layer, though, that most modern local area networks diverge from the Internet. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data link protocols for larger networks, Ethernet and Token Ring are typical data link protocols for local area networks. These protocols differ from the former in that they are simpler (e.g. they omit features such as Quality of Service guarantees) and use simpler media-access schemes (collision detection in Ethernet, token passing in Token Ring). Both of these differences allow for more economical set-ups.

Despite the modest popularity of Token Ring in the 1980s and 1990s, virtually all local area networks now use wired or wireless Ethernet. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used coaxial cables and some recent implementations (especially high-speed ones) use optic fibres. Where optic fibre is used, the distinction must be made between multi-mode fibre and single-mode fibre. Multi-mode fibre can be thought of as larger-core optical fibre that is cheaper to manufacture but suffers from less usable bandwidth and greater attenuation (i.e. poor long-distance performance).


Wide Area Network

Network infrastructure is the underlying system of cabling, phone lines, hubs, switches, routers and other devices that connect various parts of an organization through a Wide Area Network (WAN). If a sound network infrastructure is in place, most users can connect people and information throughout their organization and beyond to accomplish assigned responsibilities. Without a network infrastructure, such capabilities are available piecemeal, usually to individuals who may have the vision, initiative and resources to create this capability for themselves.

A WAN allows users to communicate with other personnel within the organization through tools such as e-mail systems. The WAN also provides a bridge to the Internet and World Wide Web that allows anyone connected to the WAN to access information and people outside the organization. WANs are usually "closed" through security measures that prevent external third parties from accessing information within the WAN without a password and/or personal identification number.

A key function of a WAN is to connect Local Area Networks (LANs) throughout the colleges. The LAN is housed within a building and serves to connect all users within that building to one local network. By connecting the LAN to a WAN, all LAN users gain access to others in the enterprise and to the electronic world beyond the network. A community college that has every user connected through a LAN to a WAN has established the infrastructure necessary to take full advantage of the telecommunications capabilities that exist today and those that will be available in the future. ACCD's network infrastructure consists of a WAN that connects the college's four campuses to the district IS data operations center and to the satellite campuses. Each ACCD campus connects to the Internet through gigabit Ethernet lines using an Alcatel OS9 router and an Alcatel 7800 Gigabit Ethernet switch located at the central district data operations center.

Gigabit Ethernet is a LAN architecture that supports data transfer rates of 1 gigabit (1,000 megabits) per second. The Cisco 7513 Internet router is used to provide video links to the Regions 13 and 20 Education Service Centers (ESC). The networking infrastructure also includes e-mail servers and a Cisco PIX firewall to prevent intruders using an Internet connection from accessing ACCD’s internal network.


Some Technical Terms Related to Internet Connection

Bandwidth describes the data throughput capacity of a particular communications technology or link. It is closely analogous to the carrying capacity of a water pipe. It is usually measured as the number of bits of information per second that can be transferred, a bit being a single binary digit (either '0' or '1'). A single alphanumeric character is usually represented by a string of eight bits (a byte). Allowing for overheads, the rate at which characters can be transferred over a particular link is roughly one tenth of the specified bit transfer rate. So, for example, at a relatively low transfer rate of 14,400 bits per second (14.4 Kbps) a page of text of say 2,500 characters (approximately 20,000 bits) would take nearly 2 seconds.

A full colour picture (image) could require 100 Kbits to represent a 25x25 mm2 area. Usually images are compressed using a system such as JPEG (for Joint Photographic Experts Group) which can reduce this by a factor of 10 to 100, depending on the richness of the visual information. A video clip, with sound and pictures, is similar to a series of pictures and a 60 second segment using a small frame (75x50 mm2) and a low quality compression system can take up 4 Mbits (500 Kbytes), which would take almost five minutes to download using a 14.4 Kbps line running at full capacity. On the other hand, a full screen broadcast quality image with 720x480 resolution using MPEG-2 (for Moving Pictures Experts Group), such as is used for DVD movies, requires up to 15 Mbps of bandwidth. The various rates are compared in Table 1.
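The transfer-time figures in the last two paragraphs can be reproduced with a few lines of arithmetic (the sizes and rates are the ones given in the text; protocol overhead is ignored):

```python
# Back-of-the-envelope transfer times from the text's examples.
def transfer_seconds(size_bits, rate_bps):
    """Ideal transfer time in seconds, ignoring protocol overhead."""
    return size_bits / rate_bps

# A 2,500-character page (~20,000 bits) over a 14.4 Kbps line:
page = transfer_seconds(20_000, 14_400)
print(round(page, 1))       # 1.4 -> "nearly 2 seconds" once overhead is added

# A 4 Mbit (500 Kbyte) low-quality 60-second video clip over the same line:
clip = transfer_seconds(4_000_000, 14_400)
print(round(clip / 60, 1))  # 4.6 -> "almost five minutes"
```

The same function shows why broadband matters: at 2 Mbps the video clip takes about two seconds instead of nearly five minutes.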

Low bandwidth (or low speed) links are anything below 100,000 bits per second (represented as 100 Kbps). High bandwidth or high speed links are in the range 100 Kbps to 2,000 Kbps which is usually presented as 2 Mbps.

Broadband commonly refers to a data throughput capacity of more than 2 Mbps. The term reflects the ability of such links to handle many different types of information up to and including full motion video and other services requiring very large throughput capability.

Analogue vs Digital. Analogue information is based on signals where some feature of the signal, usually amplitude or phase, varies continuously with time. Digital information, on the other hand, is represented by just one of two possible states (a 1 or 0): high/low voltage, on/off signal, etc. Computers deal with digital information. It is often necessary to convert the digital information used in computers into analogue signals so that it can be transmitted over a communications link, and then convert it back to digital form when received. A modem (modulator/demodulator) carries out the digital-to-analogue conversion and its reverse.

The upper limit of data carrying capacity over a normal telephone line for analogue communications is 56 Kbps (and this only under favourable conditions) whereas digital communications can range up to several megabits per second (1 Mbps plus). In general, analogue signals are better over long distances and noisy lines.

Cable modems, which connect to co-axial cable networks, can carry data at speeds up to 2 Mbps. Digital signals can be used for higher data speeds but require high line quality and can usually be sent only over relatively short distances.

Fibre optic cables always transmit digital signals and under appropriate conditions can reach into the Gigabit range (1,000 Mbps plus). Fibre is widely used for the high volume inter city telecommunications, backbone and international links but is only slowly being deployed for business and domestic use.

The Internet Connection Infrastructure

In information technology and on the Internet, Infrastructure is the physical hardware used to interconnect computers and users. Infrastructure includes the transmission media, including telephone lines, cable television lines, and satellites and antennas, and also the routers, aggregators, repeaters, and other devices that control transmission paths. Infrastructure also includes the software used to send, receive, and manage the signals that are transmitted.
In some usages, infrastructure refers to interconnecting hardware and software and not to computers and other devices that are interconnected. However, to some information technology users, infrastructure is viewed as everything that supports the flow and processing of information.

Infrastructure companies play a significant part in evolving the Internet, both in terms of where the interconnections are placed and made accessible and in terms of how much information can be carried how quickly.

The figure below shows the relationships between some of the key entities which make up the internet connection infrastructure.


Some elements of the telecommunications/Internet infrastructure

Local loop: This includes the copper wire pairs that link terminals (commonly telephones) to their nearest exchange but may also in some rural areas include multi-access radio technology.

Internet Service Provider (ISP): The ISP is integral to connection to the Internet. It is the ISP who provides the Internet Protocol (IP) linking services which allow messages to be routed throughout the Internet 'cloud'. Most ISPs will have one or more broadband or high bandwidth connections to the telecommunications backbone which both allows users to connect to the ISP and links the ISP to other IP service providers on the Internet.

Telephone Exchange: Internet connection capacity is normally dependent on the capacity of the link between a user and their local telephone exchange, which will in turn depend on the location and the age and quality of the exchange's equipment. Many Telecom NZ exchanges in rural and congested urban areas were upgraded in the early 1980s and continue to provide good standard telephone services but are not able to provide services and access speeds which are available through a modern exchange.

Backbone: The telecommunications backbone is the network which links exchanges to each other and includes both transmission and circuit switching elements. The transmission elements may include copper and optical fibre cabling, and microwave links. The international circuits also include satellite links. Parts of the existing domestic backbone between some provincial centres may require upgrading to support greater digital data flows.


Technologies Available and in Use

Aside from dedicated fibre-optic and coaxial cable networks and wireless connections, which are available at present, access to the Internet is generally only available over the telephone network. Even for those with satellite connections, the return path from the user to the ISP relies on the telephone network. Thus, in practice for the overwhelming majority, technologies available for Internet connection are limited to those capable of using the copper wire local loop.


Technologies available over the local loop

V.90 Modem: This is presently the ‘domestic standard’ for achieving a data rate of up to 56 Kbps over a standard telephone line. It requires only an inexpensive modem connecting a personal computer to a telephone line, and will support slow-motion video. There are limiting factors, however. A connection speed of 56 Kbps cannot be realised if there is more than one analogue-to-digital conversion in the connection to the ISP, and is usually limited to a maximum line length between subscribers and the nearest exchange of 3 to 5 kilometres. In practice this means that for many users access speeds are less than the theoretical maximum. Typically 33 Kbps is available in urban areas, but in many rural areas line quality is such that speeds fall well below this. For example, many rural areas are served by multi-access radio technology, which was introduced over ten years ago to eliminate party lines and can only handle data transfer rates of 9.6 Kbps.

ADSL (Asymmetric Digital Subscriber Line): This is one of a family of technologies referred to collectively as xDSL, a term covering different types of Digital Subscriber Line. xDSL technologies use sophisticated modulation schemes to pack digital data onto copper wires. xDSL is similar to ISDN (see below) inasmuch as both operate over existing copper telephone lines, but it requires short runs to a central telephone exchange (about 2 kilometres). Potentially, xDSL offers broadband-level speeds: up to 32 Mbps for downstream traffic, and from 32 Kbps to over 1 Mbps for upstream traffic.

ADSL is offered by Telecom NZ to subscribers as JetStream. It is capable of speeds of up to (but generally much less than) 6 Mbps in one direction and a much lesser speed in the other. It is currently available only in the main centres but is slated to be rolled out progressively throughout the country and is displacing ISDN (see below). However, this may be slow (after almost 20 years, ISDN still does not reach into most of rural New Zealand). Given that even inner-city suburbs in areas such as Wellington cannot presently be serviced with ADSL, some further technology development will be required before ADSL can provide a widespread solution to bandwidth limitations, especially in rural areas.

ISDN (Integrated Services Digital Network): Telecom provides ISDN in all the main centres as well as many of the smaller centres; however, with one or two exceptions it is not generally available in rural areas. The technology supports data transfer rates of 64 Kbps to 2 Mbps. Basic Rate ISDN installations provide the equivalent of two standard telephone lines. One can be used for voice and the other for data, or both lines can be used together to achieve data rates of 128 Kbps. This is just adequate for two-way video applications such as distance learning and video-conferencing. Multiple ISDN lines can be used to obtain higher-quality connections; for example, three ISDN lines provide a connection speed of 384 Kbps, which is typically used where video quality is critical, for example in tele-medicine applications. Telecom NZ has demonstrated a reluctance to make further investment in its ISDN infrastructure, promoting ADSL services as the preferred alternative.

Frame Relay: This is available from 64 Kbps and is easily scalable up to 2 Mbps. Frame relay is generally used as a dedicated point-to-point service or as a virtual private network and has the advantage of fixed price tariffs (no usage charges). The technology is specially suited to applications where guaranteed bandwidth is required such as for voice and video applications.

Asynchronous Transfer Mode (ATM): This technology offers very high bandwidth, up to 150 Mbps and is typically used for linking corporate networks.

IP Networking: This is a new family of services being piloted by Telecom NZ which is specially suitable for Internet connections where dedicated bandwidth is not critical. The focus is on flexibility and interconnectivity between a variety of connection services, such as dial-up telephone, ISDN and frame relay.


Rural issues with the local loop

While high-bandwidth and even broadband access speeds are available in some urban areas, telephone subscribers more than 3 to 5 kilometres from an exchange are limited to access rates of 33 Kbps or less. In addition, there are major problems with line quality reported by rural subscribers. A recent survey conducted for MAF reported 54% of rural subscribers as having problems affecting telephone lines, including noise, electric fences (often a problem of poor fence installation by the farmer) and exchange overload. Telecom NZ reports that only 5 percent of local loop lines are not capable of maintaining a reliable data speed of 14.4 Kbps. The overwhelming majority of these would be in rural areas, so this figure represents a large proportion of rural subscribers.

An indicator of insufficient infrastructure capacity is the number of reported problems with obtaining a second or third telephone line (an obvious way of trying to bypass the data rate bottleneck). Over one third of survey respondents who indicated that they had attempted to get a second line failed to do so.


Reference:
•http://en.wikipedia.org/wiki/Telecommunications
•http://www.window.state.tx.us/tspr/alamoccd/ch08c.htm
•http://searchdatacenter.techtarget.com/sDefinition/0,,sid80_gci212346,00.html
•http://executive.govt.nz/minister/maharey/divide/03-01.htm

Barriers in IS/IT implementation

According to my research, there are several kinds of barriers to IS/IT implementation, and several things to consider about each. They can also be important. But how, and why? You will find out in the discussion below, which covers what these barriers are.

Organizations are as alike and unique as human beings. Similarly, group processes can be as straightforward or as complex as the individuals who make up the organization. For a new program to launch successfully, its leaders must understand the strengths, weaknesses, and idiosyncrasies of the organization or system in which they operate. As you implement technology, you can also run into barriers in your firm. Try to anticipate these barriers so that you can develop strategies to minimize their impact or avoid them altogether.


Why are barriers important?

A barrier is an obstacle which prevents a given policy instrument being implemented, or limits the way in which it can be implemented. In the extreme, such barriers may lead to certain policy instruments being overlooked, and the resulting strategies being much less effective. For example, demand management measures are likely to be important in larger cities as ways of controlling the growth of congestion and improving the environment. But at the same time they are often unpopular, and cities may be tempted to reject them simply because they will be unpopular. If that decision leads in turn to greater congestion and a worse environment, the strategy will be less successful. The emphasis should therefore be on how to overcome these barriers, rather than simply how to avoid them.


What are the principal barriers?

Barriers are grouped into the four categories listed below.

1) Legal and institutional barriers
These include lack of legal powers to implement a particular instrument, and legal responsibilities which are split between agencies, limiting the ability of the city authority to implement the affected instrument.

2) Financial barriers
These include budget restrictions limiting the overall expenditure on the strategy, financial restrictions on specific instruments, and limitations on the flexibility with which revenues can be used to finance the full range of instruments.

3) Political and cultural barriers
These involve lack of political or public acceptance of an instrument, restrictions imposed by pressure groups, and cultural attributes, such as attitudes to enforcement, which influence the effectiveness of instruments.

4) Practical and technological barriers
While cities view legal, financial and political barriers as the most serious which they face in implementing policy instruments, there may also be practical limitations. For management and pricing, enforcement and administration are key issues. For infrastructure, management and information systems, engineering design and availability of technology may limit progress. Generally, lack of key skills and expertise can be a significant barrier to progress, and is aggravated by the rapid changes in the types of policy being considered.


How should we deal with barriers in the short term?

It is important not to reject a particular policy instrument simply because there are barriers to its introduction. One of the key elements in a successful strategy is the use of groups of policy instruments which help overcome these barriers. This is most easily done with the financial and the political and cultural barriers, where one policy instrument can generate revenue to help finance another (as, for example, fares policy and service improvements), or one can make another more publicly acceptable (for example, rail investment making road pricing more popular). A second important element is effective participation, which can help reduce the severity of institutional and political barriers and encourage joint action to overcome them. Finally, effective approaches to implementation can reduce the severity of many barriers.


How can we overcome barriers in the longer term?

It is often harder to overcome legal, institutional and technological barriers in the short term. There is also the danger that some institutional and political barriers may get worse over time. However, strategies should ideally be developed for implementation over a 15-20 year timescale. Many of these barriers will not still apply twenty years hence, and action can be taken to remove others. For example, if new legislation would enable more effective instruments such as pricing to be implemented, it can be provided. If split responsibilities make achieving consensus impossible, new structures can be put in place. If finance for investment in new infrastructure is justified, the financial rules can be adjusted. TIPP makes a number of recommendations for longer term institutional change. Barriers should thus be treated as challenges to be overcome, not simply impediments to progress. A key element in a long term strategy should be the identification of ways of resolving these longer term barriers.


Furthermore, the following list of common barriers can be used to help your leadership team identify potential obstacles. The list of essential elements for change can help the team brainstorm possible solutions. The lists are a good starting point for a planning session that will be most effective if it also takes into account the organization's unique characteristics (Institute for Health Improvement).


Common Barriers
•Studying the problem too long without acting
•Trying to get everyone's agreement first
•Educating without changing structures or expectations
•Tackling everything at once
•Measuring nothing or everything
•Failing to build support for replication
•Assuming that the status quo is OK

More Barriers to Change
•Lack of such resources as time and commitment
•Resistance to change
•Lack of senior leadership support or physician champion
•Lack of cooperation from other agencies, providers, departments, and facilities
•Ineffective teams
•Burdensome data collection

Essential Elements for Change Effort
•Define the problem
•Define the target population
•Define effective treatment strategies and establish procedural guidelines
•Establish performance measures; set goals
•Define effective system changes and interventions
•Develop leadership and system change strategy


Other common problems/barriers in implementation

Technology Disconnect - Among businesses there is a commonly used phrase called “technology disconnect". As applied to law firms, it refers to the gap between the managing partners who make the bottom line decisions for the firm and the technologists who make major technology recommendations to the firm which may cost thousands, tens of thousands, hundreds of thousands, or millions of dollars. I hear many stories from the "technologists" that the firm does not support their efforts. But is it surprising in light of some of the past technological solutions that were sold to law firms and failed to become reality?

Technology today is mature enough and generally standard enough to make reasonable future decisions, but the disconnect may still lie between the managing partners and the technologists or now more popularly called the Chief Information Officer (CIO) or Chief Knowledge Officer (CKO). The managing partner does not understand the technology and the CIO, unless he or she is an attorney and/or partner, does not understand the business of the law firm. The managing partner usually does not see his job as understanding the technology, let alone implementing it. The CIO does not understand the practice of law and remains focused on installing the technology, but not the applications or teaching that will benefit the firm. The solution is for both the managing partner to take a greater interest in technology and the CIO to take a larger role in understanding the business of the firm. The foundation of the old must be preserved with the calculated implementation of new technology investments.

The managing partner(s) will have to be fluent in broad technology concepts so that they can communicate intelligently with the CIO about the value of such concepts to the firm. It is important for CIOs to demystify the technology for the lawyers and others in the firm. Scheduling speakers, holding training sessions, keeping resource material available, and so on, can accomplish this. The managing partners must be willing to understand the technology and bridge the gap between their bottom-line roles and the implementation of new technologies. Law firms are managed by lawyers who did not grow up in the computer era and who do not understand computers' power, capabilities and applications. Their resistance to the incorporation of digital information into the firm will spell trouble for these firms.

Be careful what you wish for - you may get your wish! One situation to be wary of is that in which the leaders accept that technology is important and see the implementation as buying some hardware and software, without training the firm. They generally will “tell” the committee, Chief Information Officer (CIO) or technology advocate what to do. This is a very difficult situation. Generally, they are the authority in the firm but do not understand technology and generally are too busy to spend the time understanding it. Education, if the leader takes the time, may be the only solution.

Computer Literacy - A recent survey showed that 51% of the top executives in the United States are computer illiterate. They rely heavily on their management team for advice for technology purchases. The main reasons for computer illiteracy are that computer knowledge and skills are considered a low priority, computers intimidate executives, and they resist change. They may be the naysayers who will not support your efforts to enact your firm’s strategic plan. Their negative comments and actions can cause a rift, and much worse, a nonadoption of technology in the firm.

Technology Department Resistance - Strangely enough, your own technologists may be against implementing new applications; maybe for good reason. Does your firm support their department with sufficient resources? What happened the last time that they implemented a new technology? Did you hold them responsible for glitches? Did you reward them for their long hours and worry about the implementation of the technology?

One sign of their hesitancy shows in meetings where they point to a 3-year implementation period for applications that could realistically be up and running within 6 months. Others are reluctant to move to client/server or Intranet technology because it decentralizes their control over the computers. Beware of the technologist who does not want to change: they are content in the DOS environment, using outdated technology, and see change as more work. However, change for change's sake is not good; the benefits must be demonstrated.

Capital Investment and Billable Hour Concerns - Most law firms are not capital intensive. The money that is earned by the firm is distributed to its members. In the past, firms did not have to set aside or consider the thousands or tens of thousands of dollars that are necessary to implement or upgrade technology. It may be difficult to convince one of the senior partners to invest substantial money in new software, hardware and training when he may be retiring in a few years. Also, beware of impulsive enthusiasm where the partners have not committed for the long term.

Do not ever underestimate the impact of the billable hour on a technology plan. In simple terms, it would be more profitable for a firm if each attorney practiced law with a quill pen and law books. The lawyer could charge by the hour, incur no secretarial expenses, and manually research the law. The lawyer would make more profit than by automating. In fact, if one invests in technology to get the work done faster, the firm loses revenue twice: once on the cost of the technology, and again on the reduced number of billable hours one can charge the client. This is a short-sighted view that does not consider your client's need for low-cost, efficient services, value-based billing, and the ability to handle more matters in a shorter period of time.
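The billable-hour disincentive above can be made concrete with a toy revenue model. The rate and hour figures here are entirely hypothetical, chosen only to illustrate the arithmetic, not taken from any real firm.

```python
# A toy model of the billable-hour disincentive described above.
# RATE and all hour/caseload figures are hypothetical illustrations.

RATE = 200.0  # hourly billing rate in dollars (hypothetical)

def annual_revenue(hours_per_matter: float, matters_per_year: int) -> float:
    """Revenue under pure hourly billing: rate x hours x caseload."""
    return RATE * hours_per_matter * matters_per_year

manual    = annual_revenue(hours_per_matter=10, matters_per_year=100)  # 200,000
automated = annual_revenue(hours_per_matter=5,  matters_per_year=100)  # 100,000

# Automation halves the billable hours per matter, so at the same caseload
# revenue falls. The firm only recovers if the freed time is used to take
# on more matters:
recovered = annual_revenue(hours_per_matter=5, matters_per_year=200)   # 200,000
```

Under pure hourly billing, efficiency alone cuts revenue in half; only a larger caseload (or value-based billing) turns the technology investment into a gain, which is exactly the short-sightedness the paragraph warns against.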

Once you understand the obstacles to implementation, sound practical approaches can be developed to overcome any objections. However, a word of caution: some law firms will not change. They have old cultures, cumbersome structures and politics. They will give only lip service to needed reengineering and quiet the unrest by investing in some technology. Unfortunately, if a firm's management is unwilling to adopt technology, it may be discarded like the typewriter.


On the other hand, for my adopted organization, the barriers they have encountered are the following:

1. Difficulty of implementation
- Their difficulty with implementation lies mostly in time scheduling. Implementation of their systems is often extended because of problems and errors that arise during simulation, and because of the additional revisions needed to make the systems function properly and achieve customer satisfaction.

2. Security is an over-hyped problem
- Security is in fact a real problem for them, because they have no System Administrator to maintain and control the security of the server. Only their General Manager can control and manage the server, but he is very busy and not always around the company. As a result, the other personnel cannot access the server and cannot do their jobs, even though the information in an information system should be available to its users at any time.

3. Lack of personnel
- The company has only a few personnel, so it sometimes cannot take on additional systems when too many projects arrive at once. In spite of that, the company still strives to be reputable for its excellent business solutions and systems.

4. Lack of connectivity with customers
- Because there is no dedicated server, and the wireless network or internet connection is sometimes lost, the company can lose contact with its customers. When that happens, customers cannot download the files they need and cannot communicate with the company.
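A small company in this situation could at least detect such outages early instead of hearing about them from customers. Below is a minimal sketch of a reachability check using Python's standard `socket` module; the host and port you would monitor are assumptions, since the company's actual setup is not described.

```python
# Minimal reachability check for a host the company depends on.
# The host/port to monitor are hypothetical; substitute your own.
import socket

def is_reachable(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False
```

Run periodically (for example from a scheduled task), this tells staff that the connection is down before a customer discovers it for them.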

5. Cost
- There have also been times when the company had problems with costs, which made implementation difficult and sometimes prevented them from implementing everything they wanted to.


References:
•http://www.konsult.leeds.ac.uk/public/level1/sec10/index.htm
•http://www.mywhatever.com/cifwriter/content/22/4481.html
•http://www.elawexchange.com/index.php?option=com_content&view=article&id=345&Itemid=310